S: 7000 Stuttgart 50
S: Germany
+N: Christoph Rohland
+E: hans-christoph.rohland@sap.com
+E: ch.rohland@gmx.net
+D: shm fs, SYSV semaphores, af_unix
+S: Neue Heimat Str. 8
+S: D-68789 St.Leon-Rot
+S: Germany
+
N: Stephen Rothwell
E: sfr@linuxcare.com
W: http://linuxcare.com.au/sfr
CONFIG_NOHIGHMEM
If you are compiling a kernel which will never run on a machine
with more than 1 Gigabyte total physical RAM, answer "off"
- here (default choice).
+ here (default choice). This will result in the old "3GB/1GB"
+ virtual/physical memory split: each process sees a 3GB virtual
+ memory space.
+ The remaining part of the 4G virtual memory space is used by the
+ kernel to 'permanently map' as much physical memory as possible.
+ Certain types of applications perform better if there is more
+ 'permanently mapped' kernel memory.
+ Certain types of applications (eg. database servers) perform
+ better if they have as much virtual memory per process as possible.
Linux can use up to 64 Gigabytes of physical memory on x86 systems.
- High memory is all the physical RAM that could not be directly
+ However 32-bit x86 processors have only 4 Gigabytes of virtual memory
+ space.
+
+ Any potentially remaining part of physical memory is called 'high
+ memory': all the physical RAM that could not be directly
mapped by the kernel - ie. 3GB if there is 4GB RAM in the system,
7GB if there is 8GB RAM in the system.
processors (PPro and better). NOTE: The "64GB" kernel will not
boot on CPUs that do not support PAE!
+ The actual amount of total physical memory will either be
+ autodetected or can be forced by using a kernel command line option
+ such as "mem=256M". (Try "man bootparam" or see the documentation of
+ your boot loader (lilo or loadlin) about how to pass options to the
+ kernel at boot time. The lilo procedure is also explained in the
+ SCSI-HOWTO, available from http://www.linuxdoc.org/docs.html#howto .)
+
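+ For example, with lilo the option can be passed via an "append"
+ line in /etc/lilo.conf (the image path here is illustrative):
+
+   image=/boot/vmlinuz
+     label=linux
+     append="mem=256M"
+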
Normal PC floppy disk support
CONFIG_BLK_DEV_FD
If you want to use the floppy disk drive(s) of your PC under Linux,
Useful information about large (>540 MB) IDE disks, multiple
interfaces, what to do if ATA/IDE devices are not automatically
detected, sound card ATA/IDE ports, module support, and other topics, is
- contained in Documentation/ata-ide.txt. For detailed information about
+ contained in Documentation/ide.txt. For detailed information about
hard drives, consult the Disk-HOWTO and the Multi-Disk-HOWTO,
available from http://www.linuxdoc.org/docs.html#howto .
Say Y here to enable support in the dumb serial driver for the
HUB6 card.
+Support for hot-pluggable devices
+CONFIG_HOTPLUG
+ Say Y here to enable support for hot-plugging of certain hardware
+ such as PCMCIA cards and the like.
+
+ At the moment few drivers support it, but as they are converted to
+ use the new resource allocator/manager, their number will increase.
+
PCMCIA serial device support
CONFIG_PCMCIA_SERIAL_CS
Say Y here to enable support for 16-bit PCMCIA serial devices,
CONFIG_FB_S3TRIO
If you have an S3 Trio say Y. Say N for S3 Virge.
+3Dfx Banshee/Voodoo3 display support (EXPERIMENTAL)
+CONFIG_FB_3DFX
+ This driver supports graphics boards with the 3Dfx Banshee/Voodoo3 chips.
+ Say Y if you have such a graphics board.
+
+ The driver is also available as a module ( = code which can be
+ inserted and removed from the running kernel whenever you want). The
+ module will be called tdfxfb.o. If you want to compile it as a
+ module, say M here and read Documentation/modules.txt.
+
+nVidia Riva support (EXPERIMENTAL)
+CONFIG_FB_RIVA
+ This driver supports graphics boards with the nVidia Riva (aka TNTx)
+ chips.
+ Say Y if you have such a graphics board.
+
+ The driver is also available as a module ( = code which can be
+ inserted and removed from the running kernel whenever you want). The
+ module will be called rivafb.o. If you want to compile it as a
+ module, say M here and read Documentation/modules.txt.
+
ATI Mach64 display support (EXPERIMENTAL)
CONFIG_FB_ATY
This driver supports graphics boards with the ATI Mach64 chips.
running kernel whenever you want), say M here and read
Documentation/modules.txt. The module will be called vga16fb.o.
+Select other compiled-in fonts
+CONFIG_FBCON_FONTS
+ Say Y here if you would like to use fonts other than the default
+ one your frame buffer console usually uses.
+
+ Note that the answer to this question won't directly affect the kernel:
+ saying N will just cause this configure script to skip all the questions
+ about foreign fonts.
+
+ If unsure, say N (the default choices are safe).
+
VGA 8x16 font
CONFIG_FONT_8x16
This is the "high resolution" font for the VGA frame buffer (the one
- provided by the text console 80x25 mode.
+ provided by the VGA text console 80x25 mode).
+
+ If unsure, say Y.
Support only 8 pixels wide fonts
CONFIG_FBCON_FONTWIDTH8_ONLY
Answering Y here will make the kernel provide only the 8x8 fonts
(these are the least readable).
+ If unsure, say N.
+
Sparc console 8x16 font
CONFIG_FONT_SUN8x16
- This is the high resolution console font for Sun machines. Say Y.
+ This is the high resolution console font for Sun machines.
+
+ Say Y.
Sparc console 12x22 font (not supported by all drivers)
CONFIG_FONT_SUN12x22
provided by the text console 80x50 (and higher) modes.
Note this is a poor quality font. The VGA 8x16 font is quite a lot
more readable.
+
Given the resolution provided by the frame buffer device, answering
N here is safe.
includes a server that supports the frame buffer device directly
(XF68_FBDev).
+HGA monochrome support (EXPERIMENTAL)
+Hercules mono graphics console (EXPERIMENTAL)
+CONFIG_FBCON_HGA
+ Say Y here if you have a Hercules mono graphics card.
+
+ This driver is also available as a module ( = code which can be
+ inserted and removed from the running kernel whenever you want).
+ The module will be called hgafb.o. If you want to compile it as
+ a module, say M here and read Documentation/modules.txt.
+
+ As this card technology is 15 years old, most people will answer N here.
+
Matrox unified accelerated driver (EXPERIMENTAL)
CONFIG_FB_MATROX
Say Y here if you have Matrox Millennium, Matrox Millennium II,
Matrox Mystique, Matrox Mystique 220, Matrox Productiva G100, Matrox
- Mystique G200, Matrox Millennium G200 or Matrox Marvel G200 video
- card in your box. At this time, support for the G100, Mystique G200
- and Marvel G200 is untested.
+ Mystique G200, Matrox Millennium G200, Matrox Marvel G200 or
+ Matrox G400 video card in your box. At this time, support for the
+ G100, Mystique G200 and Marvel G200 is untested.
This driver is also available as a module ( = code which can be
inserted and removed from the running kernel whenever you want).
See Documentation/networking/decnet.txt for more information.
+Appletalk interfaces support
+CONFIG_APPLETALK
+ AppleTalk is the way Apple computers speak to each other on a
+ network. If your Linux box is connected to such a network and you
+ want to join the conversation, say Y.
+
AppleTalk DDP
CONFIG_ATALK
AppleTalk is the way Apple computers speak to each other on a
with a similar Bonding Linux driver, a Cisco 5500 switch or a
SunTrunking SunSoft driver.
- This is similar to the EQL driver, but it merge etherner segments instead
+ This is similar to the EQL driver, but it merges ethernet segments instead
of serial lines.
If you want to compile this as a module ( = code which can be
Say Y here if you have a native Econet network card installed in
your computer.
+WAN interfaces support
+CONFIG_WAN
+ Wide Area Networks (WANs), such as X.25, frame relay and leased
+ lines, are used to interconnect Local Area Networks (LANs) over vast
+ distances with data transfer rates significantly higher than those
+ achievable with commonly used asynchronous modem connections.
+
+ Say Y here if you want to use such interconnections.
+
+ It is safe to say N. Most people won't need it.
+
WAN Router
CONFIG_WAN_ROUTER
Wide Area Networks (WANs), such as X.25, frame relay and leased
If unsure, say N.
+WAN router drivers
+CONFIG_WAN_ROUTER_DRIVERS
+ Wide Area Networks (WANs), such as X.25, frame relay and leased
+ lines, are used to interconnect Local Area Networks (LANs) over vast
+ distances with data transfer rates significantly higher than those
+ achievable with commonly used asynchronous modem connections.
+ Usually, a quite expensive external device called a `WAN router' is
+ needed to connect to a WAN.
+
+ Saying Y here will enable the kernel to act as a WAN router between
+ LANs by means of WAN adapters.
+
Fast switching (read help!)
CONFIG_NET_FASTROUTE
Saying Y here enables direct NIC-to-NIC (NIC = Network Interface
The module will be called cosa.o. For general information about
modules read Documentation/modules.txt.
-# Fibre Channel driver support
-# CONFIG_NET_FC
+Fibre Channel driver support
+CONFIG_NET_FC
+ Say Y here to provide support for storage arrays connected to
+ the system using Fibre Optic and the "X3.269-199X Fibre Channel
+ Protocol for SCSI" specification. You'll also need the generic SCSI
+ support, as well as the drivers for the storage array itself and
+ for the interface adapter such as SOC or SOC+. This subsystem could even
+ serve for IP networking, with some code extensions. If unsure, say N.
# Interphase 5526 Tachyon chipset based adaptor support
# CONFIG_IPHASE5526
The module will be called dc2xx.o. If you want to compile it as a
module, say M here and read Documentation/modules.txt.
+
+USB Mustek MDC800 Digital Camera Support
+CONFIG_USB_MDC800
+ Say Y here if you want to connect this type of still camera to
+ your computer's USB port. This driver can be used with gphoto 0.4.3
+ and higher (look at www.gphoto.org).
+ To use it, create a device node with "mknod /dev/mustek c 10 171"
+ and configure it in your software.
+
+ This code is also available as a module ( = code which can be
+ inserted in and removed from the running kernel whenever you want).
+ The module will be called mdc800.o. If you want to compile it as a
+ module, say M here and read Documentation/modules.txt.
+
USB Mass Storage support
CONFIG_USB_STORAGE
Say Y here if you want to connect USB mass storage devices to your
say M here and read Documentation/modules.txt. The module will be
called vfat.o.
+Compressed ROM file system support
+CONFIG_CRAMFS
+ This option provides support for CramFs (Compressed ROM File
+ System). CramFs is designed to be a simple, small, and compressed
+ file system for ROM based embedded systems.
+ CramFs is read-only and limited to 256MB file systems (with 16MB
+ files). It supports neither full 16/32 bit uid/gid nor hard links
+ nor timestamps, and it is not endian aware.
+
+ See Documentation/filesystems/cramfs.txt and fs/cramfs/README
+ for further information.
+
+ If you want to compile this as a module ( = code which can be
+ inserted in and removed from the running kernel whenever you want),
+ say M here and read Documentation/modules.txt. The module will be
+ called cramfs.o.
+
UMSDOS: Unix-like file system on top of standard MSDOS fs
CONFIG_UMSDOS_FS
Say Y here if you want to run Linux from within an existing DOS
Note that the answer to this question won't directly affect the
kernel: saying N will just cause this configure script to skip all
- the questions about foreign partitioning schemes. If unsure, say N.
+ the questions about foreign partitioning schemes.
+
+ If unsure, say N.
Alpha OSF partition support
CONFIG_OSF_PARTITION
This enables the kernel to lower the computer's power consumption by
making some devices enter lower power levels (standby, sleep, ...
modes). Basically, this lets you save power.
+
Two major interfaces exist between the hardware and the OS: the older
Advanced Power Management (APM) and the newer Advanced Configuration and
Power Interface (ACPI).
+
Both are supported by the Linux Kernel.
+ Note that on some architectures (such as ia32), the idle task
+ performs hlt instructions, which make the CPU enter a low power
+ mode. This can be seen as the first kernel PM level.
+
Enter S1 for sleep (EXPERIMENTAL)
CONFIG_ACPI_S1_SLEEP
This enables ACPI compliant devices to enter level 1 of ACPI saving
If you plan to try to use the kernel on such a machine say Y here.
Everybody else says N.
+Sun 3X support
+CONFIG_SUN3X
+ This option enables support for the Sun 3x series of workstations. Be
+ warned that this support is very experimental. You will also want to
+ say Y to 68020 support and N to the other processors below.
+
+ If you don't want to compile a kernel for a Sun 3x, say N.
+
Sun 3 support
CONFIG_SUN3
This option enables support for the Sun 3 series of workstations. Be
mantissa and round slightly incorrectly, which is more than enough
for normal usage.
-Advanced processor options
-CONFIG_ADVANCED_CPU
+Advanced configuration options
+CONFIG_ADVANCED
This gives you access to some advanced options for the CPU. The
defaults should be fine for most users, but these options may make
it possible for you to improve performance somewhat if you know what
If you have any questions or comments about the Compaq Personal
Server, send e-mail to skiff@crl.dec.com
-Virtual/Physical Memory Split
-CONFIG_1GB
- If you are compiling a kernel which will never run on a machine
- with more than 1 Gigabyte total physical RAM, answer "3GB/1GB"
- here (default choice).
-
- On 32-bit x86 systems Linux can use up to 64 Gigabytes of physical
- memory. However 32-bit x86 processors have only 4 Gigabytes of
- virtual memory space. This option specifies the maximum amount of
- virtual memory space one process can potentially use. Certain types
- of applications (eg. database servers) perform better if they have
- as much virtual memory per process as possible.
-
- The remaining part of the 4G virtual memory space is used by the
- kernel to 'permanently map' as much physical memory as possible.
- Certain types of applications perform better if there is more
- 'permanently mapped' kernel memory.
-
- [WARNING! Certain boards do not support PCI DMA to physical addresses
- bigger than 2 Gigabytes. Non-DMA-able memory must not be permanently
- mapped by the kernel, thus a 1G/3G split will not work on such boxes.]
-
- As you can see there is no 'perfect split' - the fundamental
- problem is that 4G of 32-bit virtual memory space is short. So
- you'll have to pick your own choice - depending on the application
- load of your box. A 2G/2G split is typically a good choice for a
- generic Linux server with lots of RAM.
-
- Any potentially remaining (not permanently mapped) part of physical
- memory is called 'high memory'. How much total high memory the kernel
- can handle is influenced by the (next) High Memory configuration option.
-
- The actual amount of total physical memory will either be
- autodetected or can be forced by using a kernel command line option
- such as "mem=256M". (Try "man bootparam" or see the documentation of
- your boot loader (lilo or loadlin) about how to pass options to the
- kernel at boot time. The lilo procedure is also explained in the
- SCSI-HOWTO, available from http://www.linuxdoc.org/docs.html#howto .)
-
Math emulation
CONFIG_NWFPE
Say Y to include the NWFPE floating point emulator in the kernel.
$(TOPDIR)/drivers/char/misc.c \
$(TOPDIR)/drivers/char/videodev.c \
$(TOPDIR)/drivers/net/net_init.c \
+ $(TOPDIR)/drivers/net/8390.c \
$(TOPDIR)/drivers/char/serial.c \
+ $(TOPDIR)/drivers/pci/pci.c \
$(TOPDIR)/drivers/sound/sound_core.c \
$(TOPDIR)/drivers/sound/sound_firmware.c \
$(TOPDIR)/drivers/net/wan/syncppp.c \
<chapter id="netdev">
<title>Network devices</title>
!Idrivers/net/net_init.c
+!Edrivers/net/8390.c
</chapter>
<chapter id="snddev">
!Edrivers/net/wan/z85230.c
</chapter>
+ <chapter id="pcilib">
+ <title>PCI Support Library</title>
+!Edrivers/pci/pci.c
+ </chapter>
+
</book>
--- /dev/null
+<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook V3.1//EN"[]>
+
+<book id="ParportGuide">
+ <bookinfo>
+ <title>The Parallel Port Subsystem</title>
+
+ <authorgroup>
+ <author>
+ <firstname>Tim</firstname>
+ <surname>Waugh</surname>
+ <affiliation>
+ <address>
+ <email>twaugh@redhat.com</email>
+ </address>
+ </affiliation>
+ </author>
+ </authorgroup>
+
+ <copyright>
+ <year>1999-2000</year>
+ <holder>Tim Waugh</holder>
+ </copyright>
+
+ <legalnotice>
+ <para>
+ This documentation is free software; you can redistribute
+ it and/or modify it under the terms of the GNU General Public
+ License as published by the Free Software Foundation; either
+ version 2 of the License, or (at your option) any later
+ version.
+ </para>
+
+ <para>
+ This program is distributed in the hope that it will be
+ useful, but WITHOUT ANY WARRANTY; without even the implied
+ warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ See the GNU General Public License for more details.
+ </para>
+
+ <para>
+ You should have received a copy of the GNU General Public
+ License along with this program; if not, write to the Free
+ Software Foundation, Inc., 59 Temple Place, Suite 330, Boston,
+ MA 02111-1307 USA
+ </para>
+
+ <para>
+ For more details see the file COPYING in the source
+ distribution of Linux.
+ </para>
+ </legalnotice>
+ </bookinfo>
+
+<toc></toc>
+
+<chapter id="design">
+<title>Design goals</title>
+
+<sect1>
+<title>The problems</title>
+
+<!-- Short-comings -->
+<!-- How they are addressed -->
+
+<!-- Short-comings
+ - simplistic lp driver
+ - platform differences
+ - no support for Zip drive pass-through
+ - no support for readback? When did Carsten add it?
+ - more parallel port devices. Figures?
+ - IEEE 1284 transfer modes: no advanced modes
+ -->
+
+<para>The first parallel port support for Linux came with the line
+printer driver, <filename>lp</filename>. The printer driver is a
+character special device, and (in Linux 2.0) had support for writing,
+via <function>write</function>, and configuration and statistics
+reporting via <function>ioctl</function>.</para>
+
+<para>The printer driver could be used on any computer that had an IBM
+PC-compatible parallel port. Because some architectures have parallel
+ports that aren't really the same as PC-style ports, other variants of
+the printer driver were written in order to support Amiga and Atari
+parallel ports.</para>
+
+<para>When the Iomega Zip drive was released, and a driver written for
+it, a problem became apparent. The Zip drive is a parallel port
+device that provides a parallel port of its own---it is designed to
+sit between a computer and an attached printer, with the printer
+plugged into the Zip drive, and the Zip drive plugged into the
+computer.</para>
+
+<para>The problem was that, although printers and Zip drives were both
+supported, for any given port only one could be used at a time. Only
+one of the two drivers could be present in the kernel at once,
+because both drivers wanted to drive the same
+hardware---the parallel port. When the printer driver initialised, it
+would call the <function>check_region</function> function to make sure
+that the IO region associated with the parallel port was free, and
+then it would call <function>request_region</function> to allocate it.
+The Zip drive used the same mechanism. Whichever driver initialised
+first would gain exclusive control of the parallel port.</para>
+
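+<para>The pattern described above, sketched in C (the old-style
+<function>check_region</function>/<function>request_region</function>
+calls; the base address and extent are illustrative values for a
+PC-style port):</para>
+
+<programlisting>
+/* Exclusive allocation, as the Linux 2.0 printer driver did it. */
+if (check_region(0x378, 3))     /* is the IO region free? */
+        return -EBUSY;          /* no: another driver owns the port */
+request_region(0x378, 3, "lp"); /* yes: claim it for good */
+</programlisting>
+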
+<para>The only way around this problem at the time was to make sure
+that both drivers were available as loadable kernel modules. To use
+the printer, load the printer driver module; then for the Zip drive,
+unload the printer driver module and load the Zip driver
+module.</para>
+
+<para>The net effect was that printing a document that was stored on a Zip
+drive was a bit of an ordeal, at least if the Zip drive and printer
+shared a parallel port. A better solution was needed.</para>
+
+<para>Zip drives are not the only devices that presented problems for
+Linux. There are other devices with pass-through ports, for example
+parallel port CD-ROM drives. There are also printers that report
+their status textually rather than using simple error pins: sending a
+command to the printer can cause it to report the number of pages that
+it has ever printed, or how much free memory it has, or whether it is
+running out of toner, and so on. The printer driver didn't originally
+offer any facility for reading back this information (although Carsten
+Gross added nibble mode readback support for kernel 2.2).</para>
+
+<!-- IEEE 1284 transfer modes: no advanced modes -->
+
+<para>The IEEE has issued a standards document called IEEE 1284, which
+documents existing practice for parallel port communications in a
+variety of modes. Those modes are: <quote>compatibility</quote>,
+reverse nibble, reverse byte, ECP and EPP. Newer devices often use
+the more advanced modes of transfer (ECP and EPP). In Linux 2.0, the
+printer driver only supported <quote>compatibility mode</quote>
+(i.e. normal printer protocol) and reverse nibble mode.</para>
+
+</sect1>
+
+<sect1>
+<title>The solutions</title>
+
+<!-- How they are addressed
+ - sharing model
+ - overview of structure (i.e. port drivers) in 2.2 and 2.3.
+ - IEEE 1284 stuff
+ - whether or not 'platform independence' goal was met
+ -->
+
+<para>The <filename>parport</filename> code in Linux 2.2 was designed
+to meet these problems of architectural differences in parallel ports,
+of port-sharing between devices with pass-through ports, and of lack
+of support for IEEE 1284 transfer modes.</para>
+
+<!-- platform differences -->
+
+<para>There are two layers to the
+<filename>parport</filename> subsystem, only one of which deals
+directly with the hardware. The other layer deals with sharing and
+IEEE 1284 transfer modes. In this way, parallel support for a
+particular architecture comes in the form of a module which registers
+itself with the generic sharing layer.</para>
+
+<!-- sharing model -->
+
+<para>The sharing model provided by the <filename>parport</filename>
+subsystem is one of exclusive access. A device driver, such as the
+printer driver, must ask the <filename>parport</filename> layer for
+access to the port, and can only use the port once access has been
+granted. When it has finished a <quote>transaction</quote>, it can
+tell the <filename>parport</filename> layer that it may release the
+port for other device drivers to use.</para>
+
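+<para>A sketch of that transaction model, using the
+<filename>parport</filename> API (prototypes are in
+<filename>include/linux/parport.h</filename>; error handling is
+abbreviated and the driver name is hypothetical):</para>
+
+<programlisting>
+struct pardevice *dev;
+
+/* register ourselves as a user of this port */
+dev = parport_register_device(port, "mydrv", NULL, NULL, NULL,
+                              0, NULL);
+
+if (parport_claim_or_block(dev) &lt; 0)
+        return;                 /* interrupted while waiting */
+/* ... perform the transaction with the peripheral ... */
+parport_release(dev);           /* let other drivers use the port */
+</programlisting>
+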
+<!-- talk a bit about how drivers can share devices on the same port -->
+
+<para>Devices with pass-through ports all manage to share a parallel
+port with other devices in generally the same way. The device has a
+latch for each of the pins on its pass-through port. The normal state
+of affairs is pass-through mode, with the device copying the signal
+lines between its host port and its pass-through port. When the
+device sees a special signal from the host port, it latches the
+pass-through port so that devices further downstream don't get
+confused by the pass-through device's conversation with the host
+parallel port: the device connected to the pass-through port (and any
+devices connected in turn to it) are effectively cut off from the
+computer. When the pass-through device has completed its transaction
+with the computer, it enables the pass-through port again.</para>
+
+<mediaobject>
+<imageobject>
+<imagedata align=center scalefit=1 fileref="parport-share.eps">
+</imageobject>
+</mediaobject>
+
+<para>This technique relies on certain <quote>special signals</quote>
+being invisible to devices that aren't watching for them. This tends
+to mean only changing the data signals and leaving the control signals
+alone. IEEE 1284.3 documents a standard protocol for daisy-chaining
+devices together with parallel ports.</para>
+
+<!-- transfer modes -->
+
+<para>Support for standard transfer modes is provided as operations
+that can be performed on a port, along with operations for setting the
+data lines, or the control lines, or reading the status lines. These
+operations appear to the device driver as function pointers; more on
+that later.</para>
+
+</sect1>
+
+</chapter>
+
+<chapter id="transfermodes">
+<title>Standard transfer modes</title>
+
+<!-- Defined by IEEE, but in common use (even though there are widely -->
+<!-- varying implementations). -->
+
+<para>The <quote>standard</quote> transfer modes in use over the
+parallel port are <quote>defined</quote> by a document called IEEE
+1284. It really just codifies existing practice and documents
+protocols (and variations on protocols) that have been in common use
+for quite some time.</para>
+
+<para>The original definitions of which pin did what were set out by
+Centronics Data Computer Corporation, but only the printer-side
+interface signals were specified.</para>
+
+<para>By the early 1980s, IBM's host-side implementation had become
+the most widely used. New printers emerged that claimed Centronics
+compatibility, but although compatible with Centronics they differed
+from one another in a number of ways.</para>
+
+<para>As a result of this, when IEEE 1284 was published in 1994, all
+that it could really do was document the various protocols that are
+used for printers (there are about six variations on a theme).</para>
+
+<para>In addition to the protocol used to talk to
+Centronics-compatible printers, IEEE 1284 defined other protocols that
+are used for unidirectional peripheral-to-host transfers (reverse
+nibble and reverse byte) and for fast bidirectional transfers (ECP and
+EPP).</para>
+
+</chapter>
+
+<chapter id="structure">
+<title>Structure</title>
+
+<!-- Main structure
+ - sharing core
+ - parports and their IEEE 1284 overrides
+ - IEEE 1284 transfer modes for generic ports
+ - maybe mention muxes here
+ - pardevices
+ - IEEE 1284.3 API
+ -->
+
+<!-- Diagram -->
+
+<mediaobject>
+<imageobject>
+<imagedata align=center scalefit=1 fileref="parport-structure.eps">
+</imageobject>
+</mediaobject>
+
+<sect1>
+<title>Sharing core</title>
+
+<!-- sharing core -->
+
+<para>At the core of the <filename>parport</filename> subsystem is the
+sharing mechanism (see <filename>drivers/parport/share.c</filename>).
+This module, <filename>parport</filename>, is responsible for
+keeping track of which ports there are in the system, which device
+drivers might be interested in new ports, and whether or not each port
+is available for use (or if not, which driver is currently using
+it).</para>
+
+</sect1>
+
+<sect1>
+<title>Parports and their overrides</title>
+<!-- parports and their overrides -->
+
+<para>The generic <filename>parport</filename> sharing code doesn't
+directly handle the parallel port hardware. That is done instead by
+<quote>low-level</quote> <filename>parport</filename> drivers. The
+function of a low-level <filename>parport</filename> driver is to
+detect parallel ports, register them with the sharing code, and
+provide a list of access functions for each port.</para>
+
+<para>The most basic access functions that must be provided are ones
+for examining the status lines, for setting the control lines, and for
+setting the data lines. There are also access functions for setting
+the direction of the data lines; normally they are in the
+<quote>forward</quote> direction (that is, the computer drives them),
+but some ports allow switching to <quote>reverse</quote> mode (driven
+by the peripheral). There is an access function for examining the
+data lines once in reverse mode.</para>
+
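+<para>For illustration, the access function list looks roughly like
+this (an abridged <structname>struct parport_operations</structname>;
+see <filename>include/linux/parport.h</filename> for the real
+thing):</para>
+
+<programlisting>
+struct parport_operations {
+        void (*write_data)(struct parport *, unsigned char);
+        unsigned char (*read_data)(struct parport *);
+        void (*write_control)(struct parport *, unsigned char);
+        unsigned char (*read_status)(struct parport *);
+        void (*data_forward)(struct parport *); /* host drives data */
+        void (*data_reverse)(struct parport *); /* peripheral drives */
+        /* ... IEEE 1284 transfer functions, IRQ control, etc. ... */
+};
+</programlisting>
+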
+</sect1>
+
+<sect1>
+<title>IEEE 1284 transfer modes</title>
+<!-- IEEE 1284 transfer modes -->
+
+<para>Stacked on top of the sharing mechanism, but still in the
+<filename>parport</filename> module, are functions for transferring
+data. They are provided for the device drivers to use, and are very
+much like library routines. Since these transfer functions are
+provided by the generic <filename>parport</filename> core they must
+use the <quote>lowest common denominator</quote> set of access
+functions: they can set the control lines, examine the status lines,
+and use the data lines. With some parallel ports the data lines can
+only be set and not examined, and with other ports accessing the data
+register causes control line activity; with these types of situations,
+the IEEE 1284 transfer functions make a best effort attempt to do the
+right thing. In some cases, it is not physically possible to use
+particular IEEE 1284 transfer modes.</para>
+
+<para>The low-level <filename>parport</filename> drivers also provide
+IEEE 1284 transfer functions, as entries in the access function list.
+A low-level driver can simply point these entries at the generic
+IEEE 1284 transfer functions. Some parallel ports can do IEEE 1284 transfers in
+hardware; for those ports, the low-level driver can provide functions
+to utilise that feature.</para>
+
+</sect1>
+
+<!-- muxes? -->
+
+<!-- pardevices and pardrivers -->
+
+<sect1>
+<title>Pardevices and parport_drivers</title>
+
+<para>When a parallel port device driver (such as
+<filename>lp</filename>) initialises it tells the sharing layer about
+itself using <function>parport_register_driver</function>. The
+information is put into a <structname>struct
+parport_driver</structname>, which is put into a linked list. The
+information in a <structname>struct parport_driver</structname> really
+just amounts to some function pointers to callbacks in the parallel
+port device driver.</para>
+
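+<para>A device driver's registration, sketched (callback bodies
+omitted; the driver name is hypothetical):</para>
+
+<programlisting>
+static void my_attach(struct parport *port) { /* new port found */ }
+static void my_detach(struct parport *port) { /* port going away */ }
+
+static struct parport_driver my_driver = {
+        "mydrv",        /* name */
+        my_attach,
+        my_detach,
+        NULL            /* next (used by the sharing core) */
+};
+
+/* in the device driver's init code: */
+parport_register_driver(&amp;my_driver);
+</programlisting>
+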
+<para>During its initialisation, a low-level port driver tells the
+sharing layer about all the ports that it has found (using
+<function>parport_register_port</function>), and the sharing layer
+creates a <structname>struct parport</structname> for each of them.
+Each <structname>struct parport</structname> contains (among other
+things) a pointer to a <structname>struct
+parport_operations</structname>, which is a list of function pointers
+for the various operations that can be performed on a port. You can
+think of a <structname>struct parport</structname> as a parallel port
+<quote>object</quote>, if <quote>object-orientated</quote> programming
+is your thing. The <structname>parport</structname> structures are
+chained in a linked list, whose head is <varname>portlist</varname>
+(in <filename>drivers/parport/share.c</filename>).</para>
+
+<para>Once the port has been registered, the low-level port driver
+announces it. The <function>parport_announce_port</function> function
+walks down the list of parallel port device drivers
+(<structname>struct parport_driver</structname>s) calling the
+<function>attach</function> function of each.</para>
+
+<para>Similarly, a low-level port driver can undo the effect of
+registering a port with the
+<function>parport_unregister_port</function> function, and device
+drivers are notified using the <function>detach</function>
+callback.</para>
+
+<para>Device drivers can undo the effect of registering themselves
+with the <function>parport_unregister_driver</function>
+function.</para>
+
+</sect1>
+
+<!-- IEEE 1284.3 API -->
+
+<sect1>
+<title>The IEEE 1284.3 API</title>
+
+<para>The ability to daisy-chain devices is very useful, but if every
+device does it in a different way it could lead to lots of
+complications for device driver writers. Fortunately, the IEEE are
+standardising it in IEEE 1284.3, which covers daisy-chain devices and
+port multiplexors.</para>
+
+<para>At the time of writing, IEEE 1284.3 has not been published, but
+the draft specifies the on-the-wire protocol for daisy-chaining and
+multiplexing, and also suggests a programming interface for using it.
+That interface (or most of it) has been implemented in the
+<filename>parport</filename> code in Linux.</para>
+
+<para>At initialisation of the parallel port <quote>bus</quote>, daisy-chained
+devices are assigned addresses starting from zero. There can only be
+four devices with daisy-chain addresses, plus one device on the end
+that doesn't know about daisy-chaining and thinks it's connected
+directly to a computer.</para>
+
+<para>Another way of connecting more parallel port devices is to use a
+multiplexor. The idea is to have a device that is connected directly
+to a parallel port on a computer, but has a number of parallel ports
+on the other side for other peripherals to connect to (two or four
+ports are allowed). The multiplexor switches control to different
+ports under software control---it is, in effect, a programmable
+printer switch.</para>
+
+<para>Combining the ability of daisy-chaining five devices together
+with the ability to multiplex one parallel port between four gives the
+potential to have twenty peripherals connected to the same parallel
+port!</para>
+
+<para>In addition, of course, a single computer can have multiple
+parallel ports. So, each parallel port peripheral in the system can
+be identified with three numbers, or co-ordinates: the parallel port,
+the multiplexed port, and the daisy-chain address.</para>
+
+<mediaobject>
+<imageobject>
+<imagedata align=center scalefit=1 fileref="parport-multi.eps">
+</imageobject>
+</mediaobject>
+
+<!-- x parport_open -->
+<!-- x parport_close -->
+<!-- x parport_device_id -->
+<!-- x parport_device_num -->
+<!-- x parport_device_coords -->
+<!-- x parport_find_device -->
+<!-- x parport_find_class -->
+
+<para>Each device in the system is numbered at initialisation (by
+<function>parport_daisy_init</function>). You can convert between
+this device number and its co-ordinates with
+<function>parport_device_num</function> and
+<function>parport_device_coords</function>.</para>
+
+<funcsynopsis><funcprototype>
+ <funcdef>int <function>parport_device_num</function></funcdef>
+ <paramdef>int <parameter>parport</parameter></paramdef>
+ <paramdef>int <parameter>mux</parameter></paramdef>
+ <paramdef>int <parameter>daisy</parameter></paramdef>
+</funcprototype></funcsynopsis>
+
+<funcsynopsis><funcprototype>
+ <funcdef>int <function>parport_device_coords</function></funcdef>
+ <paramdef>int <parameter>devnum</parameter></paramdef>
+ <paramdef>int *<parameter>parport</parameter></paramdef>
+ <paramdef>int *<parameter>mux</parameter></paramdef>
+ <paramdef>int *<parameter>daisy</parameter></paramdef>
+</funcprototype></funcsynopsis>
+
+<para>Any parallel port peripheral will be connected directly or
+indirectly to a parallel port on the system, but it won't have a
+daisy-chain address if it does not know about daisy-chaining, and it
+won't be connected through a multiplexor port if there is no
+multiplexor. The special co-ordinate value <constant>-1</constant> is
+used to indicate these cases.</para>
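+
+<para>As an illustrative sketch, converting in both directions might
+look like this (the variable names are arbitrary):</para>
+
+<programlisting>
+<![CDATA[
+int parport, mux, daisy;
+
+/* Which port, multiplexor port and daisy-chain address is this? */
+parport_device_coords (devnum, &parport, &mux, &daisy);
+
+/* Going the other way: a device connected directly to a parallel
+   port, with no multiplexor and no daisy-chain address, has both
+   of those co-ordinates as -1. */
+devnum = parport_device_num (parport, -1, -1);
+]]></programlisting>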
+
+<para>Two functions are provided for finding devices based on their
+IEEE 1284 Device ID: <function>parport_find_device</function> and
+<function>parport_find_class</function>.</para>
+
+<funcsynopsis><funcprototype>
+ <funcdef>int <function>parport_find_device</function></funcdef>
+ <paramdef>const char *<parameter>mfg</parameter></paramdef>
+ <paramdef>const char *<parameter>mdl</parameter></paramdef>
+ <paramdef>int <parameter>from</parameter></paramdef>
+</funcprototype></funcsynopsis>
+
+<funcsynopsis><funcprototype>
+ <funcdef>int <function>parport_find_class</function></funcdef>
+ <paramdef>parport_device_class <parameter>cls</parameter></paramdef>
+ <paramdef>int <parameter>from</parameter></paramdef>
+</funcprototype></funcsynopsis>
+
+<para>These functions take a device number (in addition to some other
+things), and return another device number. They walk through the list
+of detected devices until they find one that matches the requirements,
+and then return that device number (or <constant>-1</constant> if
+there are no more such devices). They start their search at the
+device after the one in the list with the number given (at
+<parameter>from</parameter>+1, in other words).</para>
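+
+<para>For instance, a driver hunting for a particular peripheral might
+use <function>parport_find_device</function> like this (the
+manufacturer and model strings here are invented for the sake of the
+example):</para>
+
+<programlisting>
+<![CDATA[
+int devnum = -1;
+while ((devnum = parport_find_device ("ACME", "Widget",
+                                      devnum)) != -1) {
+        /* devnum is a device we can drive. */
+        ...
+}
+]]></programlisting>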
+
+</sect1>
+
+</chapter>
+
+<chapter id="drivers">
+<title>Device driver's view</title>
+
+<!-- Cover:
+ - sharing interface, preemption, interrupts, wakeups...
+ - IEEE 1284.3 interface
+ - port operations
+ - why can read data but ctr is faked, etc.
+ -->
+
+<!-- I should take a look at the kernel hackers' guide bit I wrote, -->
+<!-- as that deals with a lot of this. The main complaint with it -->
+<!-- was that there weren't enough examples, but 'The printer -->
+<!-- driver' should deal with that later; might be worth mentioning -->
+<!-- in the text. -->
+
+<para>This section is written from the point of view of the device
+driver programmer, who might be writing a driver for a printer or a
+scanner or else anything that plugs into the parallel port. It
+explains how to use the <filename>parport</filename> interface to find
+parallel ports, use them, and share them with other device
+drivers.</para>
+
+<para>We'll start out with a description of the various functions that
+can be called, and then look at a reasonably simple example of their
+use: the printer driver.</para>
+
+<para>The interactions between the device driver and the
+<filename>parport</filename> layer are as follows. First, the device
+driver registers its existence with <filename>parport</filename>, in
+order to get told about any parallel ports that have been (or will be)
+detected. When it gets told about a parallel port, it then tells
+<filename>parport</filename> that it wants to drive a device on that
+port. Thereafter it can claim exclusive access to the port in order
+to talk to its device.</para>
+
+<para>So, the first thing for the device driver to do is tell
+<filename>parport</filename> that it wants to know what parallel ports
+are on the system. To do this, it uses the
+<function>parport_register_driver</function> function:</para>
+
+<programlisting>
+<![CDATA[
+struct parport_driver {
+ const char *name;
+ void (*attach) (struct parport *);
+ void (*detach) (struct parport *);
+ struct parport_driver *next;
+};
+]]></programlisting>
+
+<funcsynopsis><funcprototype>
+ <funcdef>int <function>parport_register_driver</function></funcdef>
+ <paramdef>struct parport_driver *<parameter>driver</parameter></paramdef>
+</funcprototype></funcsynopsis>
+
+<para>In other words, the device driver passes pointers to a couple of
+functions to <filename>parport</filename>, and
+<filename>parport</filename> calls <function>attach</function> for
+each port that's detected (and <function>detach</function> for each
+port that disappears -- yes, this can happen).</para>
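+
+<para>A minimal sketch of this registration (with illustrative names,
+and empty callbacks) might be:</para>
+
+<programlisting>
+<![CDATA[
+static void foo_attach (struct parport *port)
+{
+        /* A port has been found; decide whether it is interesting. */
+}
+
+static void foo_detach (struct parport *port)
+{
+        /* The port is going away; stop using it. */
+}
+
+static struct parport_driver foo_driver = {
+        "foo",
+        foo_attach,
+        foo_detach,
+        NULL
+};
+
+int init_module (void)
+{
+        return parport_register_driver (&foo_driver);
+}
+]]></programlisting>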
+
+<para>The next thing that happens is that the device driver tells
+<filename>parport</filename> that it thinks there's a device on the
+port that it can drive. This typically will happen in the driver's
+<function>attach</function> function, and is done with
+<function>parport_register_device</function>:</para>
+
+<funcsynopsis><funcprototype>
+ <funcdef>struct pardevice *<function>parport_register_device</function></funcdef>
+ <paramdef>struct parport *<parameter>port</parameter></paramdef>
+ <paramdef>const char *<parameter>name</parameter></paramdef>
+ <paramdef>int <parameter>(*pf)</parameter>
+ <funcparams>void *</funcparams></paramdef>
+ <paramdef>void <parameter>(*kf)</parameter>
+ <funcparams>void *</funcparams></paramdef>
+ <paramdef>void <parameter>(*irq_func)</parameter>
+ <funcparams>int, void *, struct pt_regs *</funcparams></paramdef>
+ <paramdef>int <parameter>flags</parameter></paramdef>
+ <paramdef>void *<parameter>handle</parameter></paramdef>
+</funcprototype></funcsynopsis>
+
+<para>The <parameter>port</parameter> comes from the parameter supplied
+to the <function>attach</function> function when it is called, or
+alternatively can be found from the list of detected parallel ports
+directly with the (now deprecated)
+<function>parport_enumerate</function> function.</para>
+
+<para>The next three parameters, <parameter>pf</parameter>,
+<parameter>kf</parameter>, and <parameter>irq_func</parameter>, are
+more function pointers. These callback functions get called under
+various circumstances, and are always given the
+<parameter>handle</parameter> as one of their parameters.</para>
+
+<para>The preemption callback, <parameter>pf</parameter>, is called
+when the driver has claimed access to the port but another device
+driver wants access. If the driver is willing to let the port go, it
+should return zero and the port will be released on its behalf. There
+is no need to call <function>parport_release</function>. If
+<parameter>pf</parameter> gets called at a bad time for letting the
+port go, it should return non-zero and no action will be taken. It is
+good manners for the driver to try to release the port at the earliest
+opportunity after its preemption callback is called.</para>
+
+<para>The <quote>kick</quote> callback, <parameter>kf</parameter>, is
+called when the port can be claimed for exclusive access; that is,
+<function>parport_claim</function> is guaranteed to succeed inside the
+<quote>kick</quote> callback. If the driver wants to claim the port
+it should do so; otherwise, it need not take any action.</para>
+
+<para>The <parameter>irq_func</parameter> callback is called,
+predictably, when a parallel port interrupt is generated. But it is
+not the only code that hooks on the interrupt. The sequence is this:
+the lowlevel driver is the one that has done
+<function>request_irq</function>; it then does whatever
+hardware-specific things it needs to do to the parallel port hardware
+(for PC-style ports, there is nothing special to do); it then tells
+the IEEE 1284 code about the interrupt, which may involve reacting to
+an IEEE 1284 event, depending on the current IEEE 1284 phase; and
+finally the <parameter>irq_func</parameter> function is called.</para>
+
+<para>None of the callback functions are allowed to block.</para>
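+
+<para>As a sketch, the preemption and <quote>kick</quote> callbacks of
+a driver might look like this (<structname>struct foo</structname> and
+its fields are invented for the example):</para>
+
+<programlisting>
+<![CDATA[
+static int foo_preempt (void *handle)
+{
+        struct foo *foo = handle;
+
+        if (foo->busy)
+                return 1;       /* can't let the port go just now */
+
+        return 0;               /* port is released on our behalf */
+}
+
+static void foo_wakeup (void *handle)
+{
+        struct foo *foo = handle;
+
+        if (foo->want_port)
+                parport_claim (foo->dev); /* guaranteed to succeed here */
+}
+]]></programlisting>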
+
+<para>The <parameter>flags</parameter> are for telling
+<filename>parport</filename> any requirements or hints that are
+useful. The only useful value here (other than
+<constant>0</constant>, which is the usual value) is
+<constant>PARPORT_DEV_EXCL</constant>. The point of that flag is to
+request exclusive access at all times---once a driver has successfully
+called <function>parport_register_device</function> with that flag, no
+other device drivers will be able to register devices on that port
+(until the successful driver deregisters its device, of
+course).</para>
+
+<para>The <constant>PARPORT_DEV_EXCL</constant> flag is for preventing
+port sharing, and so should only be used when sharing the port with
+other device drivers is impossible and would lead to incorrect
+behaviour. Use it sparingly!</para>
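+
+<para>Putting this together, an <function>attach</function> function
+will typically register its device along these lines (the callback
+names <function>foo_preempt</function> and
+<function>foo_wakeup</function>, and the handle
+<varname>foo_private</varname>, are illustrative only):</para>
+
+<programlisting>
+<![CDATA[
+struct pardevice *dev;
+
+dev = parport_register_device (port, "foo",
+                               foo_preempt, foo_wakeup,
+                               NULL,    /* no interrupt handler */
+                               0, &foo_private);
+if (dev == NULL)
+        return;         /* registration failed */
+]]></programlisting>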
+
+<para>Devices can also be registered by device drivers based on their
+device numbers (the same device numbers as in the previous
+section).</para>
+
+<para>The <function>parport_open</function> function is similar to
+<function>parport_register_device</function>, and
+<function>parport_close</function> is the equivalent of
+<function>parport_unregister_device</function>. The difference is
+that <function>parport_open</function> takes a device number rather
+than a pointer to a <structname>struct parport</structname>.</para>
+
+<funcsynopsis><funcprototype>
+ <funcdef>struct pardevice *<function>parport_open</function></funcdef>
+ <paramdef>int <parameter>devnum</parameter></paramdef>
+ <paramdef>const char *<parameter>name</parameter></paramdef>
+ <paramdef>int <parameter>(*pf)</parameter>
+ <funcparams>void *</funcparams></paramdef>
+ <paramdef>void <parameter>(*kf)</parameter>
+ <funcparams>void *</funcparams></paramdef>
+ <paramdef>void <parameter>(*irqf)</parameter>
+ <funcparams>int, void *, struct pt_regs *</funcparams></paramdef>
+ <paramdef>int <parameter>flags</parameter></paramdef>
+ <paramdef>void *<parameter>handle</parameter></paramdef>
+</funcprototype></funcsynopsis>
+
+<funcsynopsis><funcprototype>
+ <funcdef>void <function>parport_close</function></funcdef>
+ <paramdef>struct pardevice *<parameter>dev</parameter></paramdef>
+</funcprototype></funcsynopsis>
+
+<funcsynopsis><funcprototype>
+ <funcdef>struct pardevice *<function>parport_register_device</function></funcdef>
+ <paramdef>struct parport *<parameter>port</parameter></paramdef>
+ <paramdef>const char *<parameter>name</parameter></paramdef>
+ <paramdef>int <parameter>(*pf)</parameter>
+ <funcparams>void *</funcparams></paramdef>
+ <paramdef>void <parameter>(*kf)</parameter>
+ <funcparams>void *</funcparams></paramdef>
+ <paramdef>void <parameter>(*irqf)</parameter>
+ <funcparams>int, void *, struct pt_regs *</funcparams></paramdef>
+ <paramdef>int <parameter>flags</parameter></paramdef>
+ <paramdef>void *<parameter>handle</parameter></paramdef>
+</funcprototype></funcsynopsis>
+
+<funcsynopsis><funcprototype>
+ <funcdef>void <function>parport_unregister_device</function></funcdef>
+ <paramdef>struct pardevice *<parameter>dev</parameter></paramdef>
+</funcprototype></funcsynopsis>
+
+<para>The intended use of these functions is during driver
+initialisation while the driver looks for devices that it supports, as
+demonstrated by the following code fragment:</para>
+
+<programlisting>
+<![CDATA[
+int devnum = -1;
+while ((devnum = parport_find_class (PARPORT_CLASS_DIGCAM,
+ devnum)) != -1) {
+ struct pardevice *dev = parport_open (devnum, ...);
+ ...
+}
+]]></programlisting>
+
+<para>Once your device driver has registered its device and been
+handed a pointer to a <structname>struct pardevice</structname>, the
+next thing you are likely to want to do is communicate with the device
+you think is there. To do that you'll need to claim access to the
+port.</para>
+
+<funcsynopsis><funcprototype>
+ <funcdef>int <function>parport_claim</function></funcdef>
+ <paramdef>struct pardevice *<parameter>dev</parameter></paramdef>
+</funcprototype></funcsynopsis>
+
+<funcsynopsis><funcprototype>
+ <funcdef>int <function>parport_claim_or_block</function></funcdef>
+ <paramdef>struct pardevice *<parameter>dev</parameter></paramdef>
+</funcprototype></funcsynopsis>
+
+<funcsynopsis><funcprototype>
+ <funcdef>void <function>parport_release</function></funcdef>
+ <paramdef>struct pardevice *<parameter>dev</parameter></paramdef>
+</funcprototype></funcsynopsis>
+
+<para>To claim access to the port, use
+<function>parport_claim</function> or
+<function>parport_claim_or_block</function>. The first of these will
+not block, and so can be used from interrupt context. If
+<function>parport_claim</function> succeeds it will return zero and
+the port is available to use. It may fail (returning non-zero) if the
+port is in use by another driver and that driver is not willing to
+relinquish control of the port.</para>
+
+<para>The other function, <function>parport_claim_or_block</function>,
+will block if necessary to wait for the port to be free. If it slept,
+it returns <constant>1</constant>; if it succeeded without needing to
+sleep it returns <constant>0</constant>. If it fails it will return a
+negative error code.</para>
+
+<para>When you have finished communicating with the device, you can
+give up access to the port so that other drivers can communicate with
+their devices. The <function>parport_release</function> function
+cannot fail, but it must not be called unless the port is claimed.
+Similarly, you should not try to claim the port if you already have it
+claimed.</para>
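+
+<para>The usual pattern is therefore to claim the port, talk to the
+device, and release the port again; for example (the error handling
+here is only a sketch):</para>
+
+<programlisting>
+<![CDATA[
+if (parport_claim_or_block (dev) < 0)
+        return -EINTR;  /* interrupted while waiting for the port */
+
+/* ... communicate with the device ... */
+
+parport_release (dev);
+]]></programlisting>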
+
+<para>You may find that although there are convenient points for your
+driver to relinquish the parallel port and allow other drivers to talk
+to their devices, it would be preferable to keep hold of the port.
+The printer driver only needs the port when there is data to print,
+for example, but a network driver (such as PLIP) could be sent a
+remote packet at any time. With PLIP, it is no huge catastrophe if a
+network packet is dropped, since it will likely be sent again, so it
+is possible for that kind of driver to share the port with other
+(pass-through) devices.</para>
+
+<para>The <function>parport_yield</function> and
+<function>parport_yield_blocking</function> functions are for marking
+points in the driver at which other drivers may claim the port and use
+their devices. Yielding the port is similar to releasing it and
+reclaiming it, but it is more efficient because nothing is done if there
+are no other devices needing the port. In fact, nothing is done even
+if there are other devices waiting but the current device is still
+within its <quote>timeslice</quote>. The default timeslice is half a
+second, but it can be adjusted via a <filename>/proc</filename>
+entry.</para>
+
+<funcsynopsis><funcprototype>
+ <funcdef>int <function>parport_yield</function></funcdef>
+ <paramdef>struct pardevice *<parameter>dev</parameter></paramdef>
+</funcprototype></funcsynopsis>
+
+<funcsynopsis><funcprototype>
+ <funcdef>int <function>parport_yield_blocking</function></funcdef>
+ <paramdef>struct pardevice *<parameter>dev</parameter></paramdef>
+</funcprototype></funcsynopsis>
+
+<para>The first of these, <function>parport_yield</function>, will not
+block but as a result may fail. The return value for
+<function>parport_yield</function> is the same as for
+<function>parport_claim</function>. The blocking version,
+<function>parport_yield_blocking</function>, has the same return code
+as <function>parport_claim_or_block</function>.</para>
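+
+<para>A long transfer loop might therefore yield between chunks, for
+example:</para>
+
+<programlisting>
+<![CDATA[
+while (bytes_left) {
+        /* ... transfer the next chunk ... */
+
+        /* Give other devices a chance at the port. */
+        if (parport_yield_blocking (dev) < 0)
+                break;  /* failed to get the port back */
+}
+]]></programlisting>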
+
+<para>Once the port has been claimed, the device driver can use the
+functions in the <structname>struct parport_operations</structname>
+pointer in the <structname>struct parport</structname> it has a
+pointer to. For example:</para>
+
+<programlisting>
+<![CDATA[
+port->ops->write_data (port, d);
+]]></programlisting>
+
+<para>Some of these operations have <quote>shortcuts</quote>. For
+instance, <function>parport_write_data</function> is equivalent to the
+above, but may be a little bit faster (it's a macro that in some cases
+can avoid needing to indirect through <varname>port</varname> and
+<varname>ops</varname>).</para>
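+
+<para>For example, reading the status register can be written either
+way:</para>
+
+<programlisting>
+<![CDATA[
+unsigned char s;
+
+s = port->ops->read_status (port);     /* via the operations table */
+s = parport_read_status (port);        /* using the shortcut */
+]]></programlisting>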
+
+</chapter>
+
+<chapter id="portdrivers">
+<title>Port drivers</title>
+
+<!-- What port drivers are for (i.e. implementing parport objects). -->
+
+<para>To recap, then:</para>
+
+<itemizedlist spacing=compact>
+
+<listitem>
+<para>
+The device driver registers itself with <filename>parport</filename>.
+</para>
+</listitem>
+
+<listitem>
+<para>
+A low-level driver finds a parallel port and registers it with
+<filename>parport</filename> (these first two things can happen in
+either order). This registration creates a <structname>struct
+parport</structname> which is linked onto a list of known ports.
+</para>
+</listitem>
+
+<listitem>
+<para>
+<filename>parport</filename> calls the <function>attach</function>
+function of each registered device driver, passing it the pointer to
+the new <structname>struct parport</structname>.
+</para>
+</listitem>
+
+<listitem>
+<para>
+The device driver gets a handle from <filename>parport</filename>, for
+use with
+<function>parport_claim</function>/<function>parport_release</function>. This
+handle takes the form of a pointer to a <structname>struct
+pardevice</structname>, representing a particular device on the
+parallel port, and is acquired using
+<function>parport_register_device</function>.
+</para>
+</listitem>
+
+<listitem>
+<para>
+The device driver claims the port using
+<function>parport_claim</function> (or
+<function>parport_claim_or_block</function>).
+</para>
+</listitem>
+
+<listitem>
+<para>
+Then it goes ahead and uses the port. When finished it releases the
+port.
+</para>
+</listitem>
+
+</itemizedlist>
+
+<para>The purpose of the low-level drivers, then, is to detect
+parallel ports and provide methods of accessing them
+(i.e. implementing the operations in <structname>struct
+parport_operations</structname>).</para>
+
+<!-- Interaction with sharing engine; port state -->
+<!-- What did I mean by that? -->
+
+<!-- Talk about parport_pc implementation, and contrast with e.g. amiga -->
+
+<para>A more complete description of which operation is supposed to do
+what is available in
+<filename>Documentation/parport-lowlevel.txt</filename>.</para>
+
+</chapter>
+
+<chapter id="lp">
+<title>The printer driver</title>
+
+<!-- Talk the reader through the printer driver. -->
+<!-- Could even talk about parallel port console here. -->
+
+<para>The printer driver, <filename>lp</filename>, is a character
+special device driver and a <filename>parport</filename> client. As a
+character special device driver it registers a <structname>struct
+file_operations</structname> using
+<function>register_chrdev</function>, with pointers filled in for
+<structfield>write</structfield>, <structfield>ioctl</structfield>,
+<structfield>open</structfield> and
+<structfield>release</structfield>. As a client of
+<filename>parport</filename>, it registers a <structname>struct
+parport_driver</structname> using
+<function>parport_register_driver</function>, so that
+<filename>parport</filename> knows to call
+<function>lp_attach</function> when a new parallel port is discovered
+(and <function>lp_detach</function> when it goes away).</para>
+
+<para>The parallel port console functionality is also implemented in
+<filename>lp.c</filename>, but that won't be covered here (it's quite
+simple though).</para>
+
+<para>The initialisation of the driver is quite easy to understand
+(see <function>lp_init</function>). The <varname>lp_table</varname>
+is an array of structures that contain information about a specific
+device (the <structname>struct pardevice</structname> associated with
+it, for example). That array is initialised to sensible values first
+of all.</para>
+
+<para>Next, the printer driver calls
+<function>register_chrdev</function> passing it a pointer to
+<varname>lp_fops</varname>, which contains function pointers for the
+printer driver's implementation of <function>open</function>,
+<function>write</function>, and so on. This part is the same as for
+any character special device driver.</para>
+
+<para>After successfully registering itself as a character special
+device driver, the printer driver registers itself as a
+<filename>parport</filename> client using
+<function>parport_register_driver</function>. It passes a pointer to
+this structure:</para>
+
+<programlisting>
+<![CDATA[
+static struct parport_driver lp_driver = {
+ "lp",
+ lp_attach,
+ lp_detach,
+ NULL
+};
+]]></programlisting>
+
+<para>The <function>lp_detach</function> function is not very
+interesting (it does nothing); the interesting bit is
+<function>lp_attach</function>. What goes on here depends on whether
+the user supplied any parameters. The possibilities are: no
+parameters supplied, in which case the printer driver uses every port
+that is detected; the user supplied the parameter <quote>auto</quote>,
+in which case only ports on which the device ID string indicates a
+printer is present are used; or the user supplied a list of parallel
+port numbers to try, in which case only those are used.</para>
+
+<para>For each port that the printer driver wants to use (see
+<function>lp_register</function>), it calls
+<function>parport_register_device</function> and stores the resulting
+<structname>struct pardevice</structname> pointer in the
+<varname>lp_table</varname>. If the user told it to do so, it then
+resets the printer.</para>
+
+<para>The other interesting piece of the printer driver, from the
+point of view of <filename>parport</filename>, is
+<function>lp_write</function>. In this function, the user space
+process has data that it wants printed, and the printer driver hands
+it off to the <filename>parport</filename> code to deal with.</para>
+
+<para>The <filename>parport</filename> functions it uses that we have
+not seen yet are <function>parport_negotiate</function>,
+<function>parport_set_timeout</function>, and
+<function>parport_write</function>. These functions are part of the
+IEEE 1284 implementation.</para>
+
+<para>The way the IEEE 1284 protocol works is that the host tells the
+peripheral what transfer mode it would like to use, and the peripheral
+either accepts that mode or rejects it; if the mode is rejected, the
+host can try again with a different mode. This is the negotiation
+phase. Once the peripheral has accepted a particular transfer mode,
+data transfer can begin in that mode.</para>
+
+<para>The particular transfer mode that the printer driver wants to
+use is named in IEEE 1284 as <quote>compatibility</quote> mode, and
+the function to request a particular mode is called
+<function>parport_negotiate</function>.</para>
+
+<funcsynopsis><funcprototype>
+ <funcdef>int <function>parport_negotiate</function></funcdef>
+ <paramdef>struct parport *<parameter>port</parameter></paramdef>
+ <paramdef>int <parameter>mode</parameter></paramdef>
+</funcprototype></funcsynopsis>
+
+<para>The <parameter>mode</parameter> parameter is a symbolic
+constant representing an IEEE 1284 mode; in this instance, it is
+<constant>IEEE1284_MODE_COMPAT</constant>. (Compatibility mode is
+slightly different to the other modes---rather than being specifically
+requested, it is the default until another mode is selected.)</para>
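+
+<para>A driver should check that the negotiation succeeded before
+transferring any data; as a sketch (the error handling is
+illustrative):</para>
+
+<programlisting>
+<![CDATA[
+if (parport_negotiate (port, IEEE1284_MODE_COMPAT) != 0) {
+        /* The peripheral did not accept the mode. */
+        parport_release (dev);
+        return -EIO;
+}
+]]></programlisting>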
+
+<para>Back to <function>lp_write</function> then. First, access to
+the parallel port is secured with
+<function>parport_claim_or_block</function>. At this point the driver
+might sleep, waiting for another driver (perhaps a Zip drive driver,
+for instance) to let the port go. Next, it goes to compatibility mode
+using <function>parport_negotiate</function>.</para>
+
+<para>The main work is done in the write-loop. In particular, the
+line that hands the data over to <filename>parport</filename>
+reads:</para>
+
+<programlisting>
+<![CDATA[
+ written = parport_write (port, kbuf, copy_size);
+]]></programlisting>
+
+<para>The <function>parport_write</function> function writes data to
+the peripheral using the currently selected transfer mode
+(compatibility mode, in this case). It returns the number of bytes
+successfully written:</para>
+
+<funcsynopsis><funcprototype>
+ <funcdef>ssize_t <function>parport_write</function></funcdef>
+ <paramdef>struct parport *<parameter>port</parameter></paramdef>
+ <paramdef>const void *<parameter>buf</parameter></paramdef>
+ <paramdef>size_t <parameter>len</parameter></paramdef>
+</funcprototype></funcsynopsis>
+
+<funcsynopsis><funcprototype>
+ <funcdef>ssize_t <function>parport_read</function></funcdef>
+ <paramdef>struct parport *<parameter>port</parameter></paramdef>
+ <paramdef>void *<parameter>buf</parameter></paramdef>
+ <paramdef>size_t <parameter>len</parameter></paramdef>
+</funcprototype></funcsynopsis>
+
+<para>(<function>parport_read</function> does what it sounds like, but
+only works for modes in which reverse transfer is possible. Of
+course, <function>parport_write</function> only works in modes in
+which forward transfer is possible, too.)</para>
+
+<para>The <parameter>buf</parameter> pointer should be to kernel space
+memory, and obviously the <parameter>len</parameter> parameter
+specifies the amount of data to transfer.</para>
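+
+<para>Put together, the heart of a write routine in the style of
+<function>lp_write</function> looks roughly like this (the copying of
+data from user space and most of the error handling have been trimmed
+away):</para>
+
+<programlisting>
+<![CDATA[
+parport_claim_or_block (dev);
+parport_negotiate (port, IEEE1284_MODE_COMPAT);
+
+while (bytes_left) {
+        ssize_t written = parport_write (port, kbuf, copy_size);
+
+        if (written < 0)
+                break;                  /* give up on error */
+
+        bytes_left -= written;
+        /* ... refill kbuf from user space ... */
+}
+
+parport_release (dev);
+]]></programlisting>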
+
+<para>In fact what <function>parport_write</function> does is call the
+appropriate block transfer function from the <structname>struct
+parport_operations</structname>:</para>
+
+<programlisting>
+<![CDATA[
+struct parport_operations {
+ [...]
+
+ /* Block read/write */
+ size_t (*epp_write_data) (struct parport *port, const void *buf,
+ size_t len, int flags);
+ size_t (*epp_read_data) (struct parport *port, void *buf, size_t len,
+ int flags);
+ size_t (*epp_write_addr) (struct parport *port, const void *buf,
+ size_t len, int flags);
+ size_t (*epp_read_addr) (struct parport *port, void *buf, size_t len,
+ int flags);
+
+ size_t (*ecp_write_data) (struct parport *port, const void *buf,
+ size_t len, int flags);
+ size_t (*ecp_read_data) (struct parport *port, void *buf, size_t len,
+ int flags);
+ size_t (*ecp_write_addr) (struct parport *port, const void *buf,
+ size_t len, int flags);
+
+ size_t (*compat_write_data) (struct parport *port, const void *buf,
+ size_t len, int flags);
+ size_t (*nibble_read_data) (struct parport *port, void *buf,
+ size_t len, int flags);
+ size_t (*byte_read_data) (struct parport *port, void *buf,
+ size_t len, int flags);
+};
+]]></programlisting>
+
+<para>The transfer code in <filename>parport</filename> will tolerate
+a data transfer stall only for so long, and this timeout can be
+specified with <function>parport_set_timeout</function>, which returns
+the previous timeout:</para>
+
+<funcsynopsis><funcprototype>
+ <funcdef>long <function>parport_set_timeout</function></funcdef>
+ <paramdef>struct pardevice *<parameter>dev</parameter></paramdef>
+ <paramdef>long <parameter>inactivity</parameter></paramdef>
+</funcprototype></funcsynopsis>
+
+<para>This timeout is specific to the device, and is restored on
+<function>parport_claim</function>.</para>
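+
+<para>For instance, to allow a device ten seconds of inactivity during
+a transfer (assuming the inactivity timeout is measured in jiffies,
+hence the use of <constant>HZ</constant>):</para>
+
+<programlisting>
+<![CDATA[
+long old_timeout = parport_set_timeout (dev, 10 * HZ);
+
+/* ... do the transfer ... */
+
+parport_set_timeout (dev, old_timeout);
+]]></programlisting>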
+
+</chapter>
+
+<chapter id="ppdev">
+<title>User-level device drivers</title>
+
+<!-- ppdev -->
+<sect1>
+<title>Introduction to ppdev</title>
+
+<para>The printer is accessible through <filename>/dev/lp0</filename>;
+in the same way, the parallel port itself is accessible through
+<filename>/dev/parport0</filename>. The difference is in the level of
+control that you have over the wires in the parallel port
+cable.</para>
+
+<para>With the printer driver, a user-space program (such as the
+printer spooler) can send bytes in <quote>printer protocol</quote>.
+Briefly, this means that for each byte, the eight data lines are set
+up, then a <quote>strobe</quote> line tells the printer to look at the
+data lines, and the printer sets an <quote>acknowledgement</quote>
+line to say that it got the byte. The printer driver also allows the
+user-space program to read bytes in <quote>nibble mode</quote>, which
+is a way of transferring data from the peripheral to the computer half
+a byte at a time (and so it's quite slow).</para>
+
+<para>In contrast, the <filename>ppdev</filename> driver (accessed via
+<filename>/dev/parport0</filename>) allows you to:</para>
+
+<itemizedlist spacing=compact>
+
+<listitem>
+<para>
+examine status lines,
+</para>
+</listitem>
+
+<listitem>
+<para>
+set control lines,
+</para>
+</listitem>
+
+<listitem>
+<para>
+set/examine data lines (and control the direction of the data lines),
+</para>
+</listitem>
+
+<listitem>
+<para>
+wait for an interrupt (triggered by one of the status lines),
+</para>
+</listitem>
+
+<listitem>
+<para>
+find out how many new interrupts have occurred,
+</para>
+</listitem>
+
+<listitem>
+<para>
+set up a response to an interrupt,
+</para>
+</listitem>
+
+<listitem>
+<para>
+use IEEE 1284 negotiation (for telling the peripheral which transfer
+mode to use),
+</para>
+</listitem>
+
+<listitem>
+<para>
+transfer data using a specified IEEE 1284 mode.
+</para>
+</listitem>
+
+</itemizedlist>
+
+</sect1>
+
+<sect1>
+<title>User-level or kernel-level driver?</title>
+
+<para>The decision between writing a kernel-level device driver and a
+user-level device driver depends on several factors. One
+of the main ones from a practical point of view is speed: kernel-level
+device drivers get to run faster because they are not preemptable,
+unlike user-level applications.</para>
+
+<para>Another factor is ease of development. It is in general easier
+to write a user-level driver because (a) one wrong move does not
+result in a crashed machine, (b) you have access to user libraries
+(such as the C library), and (c) debugging is easier.</para>
+
+</sect1>
+
+<sect1>
+<title>Programming interface</title>
+
+<para>The <filename>ppdev</filename> interface is largely the same as
+that of other character special devices, in that it supports
+<function>open</function>, <function>close</function>,
+<function>read</function>, <function>write</function>, and
+<function>ioctl</function>.</para>
+
+<sect2>
+<title>Starting and stopping: <function>open</function> and
+<function>close</function></title>
+
+<para>The device node <filename>/dev/parport0</filename> represents
+any device that is connected to <filename>parport0</filename>, the
+first parallel port in the system. Each time the device node is
+opened, it represents (to the process doing the opening) a different
+device. It can be opened more than once, but only one instance can
+actually be in control of the parallel port at any time. A process
+that has opened <filename>/dev/parport0</filename> shares the parallel
+port in the same way as any other device driver. A user-land driver
+may be sharing the parallel port with in-kernel device drivers as well
+as other user-land drivers.</para>
+</sect2>
+
+<sect2>
+<title>Control: <function>ioctl</function></title>
+
+<para>Most of the control is done, naturally enough, via the
+<function>ioctl</function> call. Using <function>ioctl</function>,
+the user-land driver can control both the <filename>ppdev</filename>
+driver in the kernel and the physical parallel port itself. The
+<function>ioctl</function> call takes as parameters a file descriptor
+(the one returned from opening the device node), a command, and
+optionally (a pointer to) some data.</para>
+
+<variablelist>
+<varlistentry><term><constant>PPCLAIM</constant></term>
+<listitem>
+
+<para>Claims access to the port. As a user-land device driver writer,
+you will need to do this before you are able to actually change the
+state of the parallel port in any way. Note that some operations only
+affect the <filename>ppdev</filename> driver and not the port, such as
+<constant>PPSETMODE</constant>; they can be performed while access to
+the port is not claimed.</para>
+
+</listitem></varlistentry>
+
+<varlistentry><term><constant>PPEXCL</constant></term>
+<listitem>
+
+<para>Instructs the kernel driver to forbid any sharing of the port
+with other drivers, i.e. it requests exclusivity. The
+<constant>PPEXCL</constant> command is only valid when the port is not
+already claimed for use, and it may mean that the next
+<constant>PPCLAIM</constant> <function>ioctl</function> will fail:
+some other driver may already have registered itself on that
+port.</para>
+
+<para>Most device drivers don't need exclusive access to the port.
+It's only provided in case it is really needed, for example for
+devices where access to the port is required for extensive periods of
+time (many seconds).</para>
+
+<para>Note that the <constant>PPEXCL</constant>
+<function>ioctl</function> doesn't actually claim the port there and
+then---action is deferred until the <constant>PPCLAIM</constant>
+<function>ioctl</function> is performed.</para>
+
+</listitem></varlistentry>
+
+<varlistentry><term><constant>PPRELEASE</constant></term>
+<listitem>
+
+<para>Releases the port. Releasing the port undoes the effect of
+claiming the port. It allows other device drivers to talk to their
+devices (assuming that there are any).</para>
+
+</listitem></varlistentry>
+
+<varlistentry><term><constant>PPYIELD</constant></term>
+<listitem>
+
+<para>Yields the port to another driver. This
+<function>ioctl</function> is a kind of short-hand for releasing the
+port and immediately reclaiming it. It gives other drivers a chance
+to talk to their devices, but afterwards claims the port back. An
+example of using this would be in a user-land printer driver: once a
+few characters have been written we could give the port to another
+device driver for a while, but if we still have characters to send to
+the printer we would want the port back as soon as possible.</para>
+
+<para>It is important not to claim the parallel port for too long, as
+other device drivers will have no time to service their devices. If
+your device does not allow for parallel port sharing at all, it is
+better to claim the parallel port exclusively (see
+<constant>PPEXCL</constant>).</para>
+
+</listitem></varlistentry>
+
+<varlistentry><term><constant>PPNEGOT</constant></term>
+<listitem>
+
+<para>Performs IEEE 1284 negotiation into a particular mode. Briefly,
+negotiation is the method by which the host and the peripheral decide
+on a protocol to use when transferring data.</para>
+
+<para>An IEEE 1284 compliant device will start out in compatibility
+mode, and then the host can negotiate to another mode (such as
+ECP).</para>
+
+<para>The <function>ioctl</function> parameter should be a pointer to
+an <type>int</type>; values for this are in
+<filename>parport.h</filename> and include:</para>
+
+<itemizedlist spacing=compact>
+<listitem><para><constant>IEEE1284_MODE_COMPAT</constant></para></listitem>
+<listitem><para><constant>IEEE1284_MODE_NIBBLE</constant></para></listitem>
+<listitem><para><constant>IEEE1284_MODE_BYTE</constant></para></listitem>
+<listitem><para><constant>IEEE1284_MODE_EPP</constant></para></listitem>
+<listitem><para><constant>IEEE1284_MODE_ECP</constant></para></listitem>
+</itemizedlist>
+
+<para>The <constant>PPNEGOT</constant> <function>ioctl</function>
+actually does two things: it performs the on-the-wire negotiation, and
+it sets the behaviour of subsequent
+<function>read</function>/<function>write</function> calls so that
+they use that mode (but see <constant>PPSETMODE</constant>).</para>
+
+</listitem></varlistentry>
+
+<varlistentry><term><constant>PPSETMODE</constant></term>
+<listitem>
+
+<para>Sets which IEEE 1284 protocol to use for the
+<function>read</function> and <function>write</function> calls.</para>
+
+<para>The <function>ioctl</function> parameter should be a pointer to
+an <type>int</type>.</para>
+
+</listitem></varlistentry>
+
+<varlistentry><term><constant>PPGETTIME</constant></term>
+<listitem>
+
+<para>Retrieves the time-out value. The <function>read</function> and
+<function>write</function> calls will time out if the peripheral
+doesn't respond quickly enough. The <constant>PPGETTIME</constant>
+<function>ioctl</function> retrieves the length of time that the
+peripheral is allowed to have before giving up.</para>
+
+<para>The <function>ioctl</function> parameter should be a pointer to
+a <structname>struct timeval</structname>.</para>
+
+</listitem></varlistentry>
+
+<varlistentry><term><constant>PPSETTIME</constant></term>
+<listitem>
+
+<para>Sets the time-out. The <function>ioctl</function> parameter
+should be a pointer to a <structname>struct
+timeval</structname>.</para>
+
+</listitem></varlistentry>
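As a sketch (assuming an already-opened, claimed descriptor; the millisecond helper is our own convenience, not part of <filename>ppdev</filename>), setting a time-out might look like this:

```c
#include <sys/time.h>
#include <sys/ioctl.h>
#include <linux/ppdev.h>

/* Build the struct timeval for an N-millisecond time-out. */
struct timeval parport_timeout (long ms)
{
	struct timeval tv;
	tv.tv_sec = ms / 1000;
	tv.tv_usec = (ms % 1000) * 1000;
	return tv;
}

/* Apply it to a ppdev file descriptor (error checking omitted). */
void set_parport_timeout (int fd, long ms)
{
	struct timeval tv = parport_timeout (ms);
	ioctl (fd, PPSETTIME, &tv);
}
```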
+
+<varlistentry><term><constant>PPWCONTROL</constant></term>
+<listitem>
+
+<para>Sets the control lines. The <function>ioctl</function>
+parameter is a pointer to an <type>unsigned char</type>, the bitwise
+OR of the control line values in
+<filename>parport.h</filename>.</para>
+
+</listitem></varlistentry>
+
+<varlistentry><term><constant>PPRCONTROL</constant></term>
+<listitem>
+
+<para>Returns the last value written to the control register, in the
+form of an <type>unsigned char</type>: each bit corresponds to a
+control line (although some are unused). The
+<function>ioctl</function> parameter should be a pointer to an
+<type>unsigned char</type>.</para>
+
+<para>This doesn't actually touch the hardware; the last value written
+is remembered in software. This is because some parallel port
+hardware does not offer read access to the control register.</para>
+
+<para>The control lines bits are defined in
+<filename>parport.h</filename>:</para>
+
+<itemizedlist spacing=compact>
+<listitem><para><constant>PARPORT_CONTROL_STROBE</constant></para></listitem>
+<listitem><para><constant>PARPORT_CONTROL_AUTOFD</constant></para></listitem>
+<listitem><para><constant>PARPORT_CONTROL_SELECT</constant></para></listitem>
+<listitem><para><constant>PARPORT_CONTROL_INIT</constant></para></listitem>
+</itemizedlist>
+
+</listitem></varlistentry>
+
+<varlistentry><term><constant>PPFCONTROL</constant></term>
+<listitem>
+
+<para>Frobs the control lines. Since a common operation is to change
+one of the control signals while leaving the others alone, it would be
+quite inefficient for the user-land driver to have to use
+<constant>PPRCONTROL</constant>, make the change, and then use
+<constant>PPWCONTROL</constant>. Of course, each driver could
+remember what state the control lines are supposed to be in (they are
+never changed by anything else), but in order to provide
+<constant>PPRCONTROL</constant>, <filename>ppdev</filename> must
+remember the state of the control lines anyway.</para>
+
+<para>The <constant>PPFCONTROL</constant> <function>ioctl</function>
+is for <quote>frobbing</quote> control lines, and is like
+<constant>PPWCONTROL</constant> but acts on a restricted set of
+control lines. The <function>ioctl</function> parameter is a pointer
+to a <structname>struct ppdev_frob_struct</structname>:</para>
+
+<programlisting>
+<![CDATA[
+struct ppdev_frob_struct {
+ unsigned char mask;
+ unsigned char val;
+};
+]]>
+</programlisting>
+
+<para>The <structfield>mask</structfield> and
+<structfield>val</structfield> fields are bitwise ORs of control line
+names (such as in <constant>PPWCONTROL</constant>). The operation
+performed by <constant>PPFCONTROL</constant> is:</para>
+
+<programlisting>
+<![CDATA[new_ctr = (old_ctr & ~mask) | val;]]>
+</programlisting>
+
+<para>In other words, the signals named in
+<structfield>mask</structfield> are set to the values in
+<structfield>val</structfield>.</para>
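The update rule is easy to check in isolation. Here is a plain C restatement of it (the 0x01 strobe bit below is only an illustration; real drivers use <constant>PARPORT_CONTROL_STROBE</constant> from <filename>parport.h</filename>):

```c
/* The PPFCONTROL update rule: bits named in mask take the
   corresponding bits of val; all other bits keep their old value. */
unsigned char frob_ctr (unsigned char old_ctr,
                        unsigned char mask, unsigned char val)
{
	return (old_ctr & ~mask) | val;
}
```

For example, `frob_ctr (0x0c, 0x01, 0x01)` raises bit 0 while leaving the rest of 0x0c alone, giving 0x0d; repeating with `val` of 0 drops the bit again.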
+
+</listitem></varlistentry>
+
+<varlistentry><term><constant>PPRSTATUS</constant></term>
+<listitem>
+
+<para>Returns an <type>unsigned char</type> containing bits set for
+each status line that is set (for instance,
+<constant>PARPORT_STATUS_BUSY</constant>). The
+<function>ioctl</function> parameter should be a pointer to an
+<type>unsigned char</type>.</para>
+
+</listitem></varlistentry>
+
+<varlistentry><term><constant>PPDATADIR</constant></term>
+<listitem>
+
+<para>Controls the data line drivers. Normally the computer's
+parallel port will drive the data lines, but for byte-wide transfers
+from the peripheral to the host it is useful to turn off those drivers
+and let the peripheral drive the signals. (If the drivers on the
+computer's parallel port are left on when this happens, the port might
+be damaged.)</para>
+
+<para>This is only needed in conjunction with
+<constant>PPWDATA</constant> or <constant>PPRDATA</constant>.</para>
+
+<para>The <function>ioctl</function> parameter is a pointer to an
+<type>int</type>. If the <type>int</type> is zero, the drivers are
+turned on (forward direction); if non-zero, the drivers are turned off
+(reverse direction).</para>
+
+</listitem></varlistentry>
+
+<varlistentry><term><constant>PPWDATA</constant></term>
+<listitem>
+
+<para>Sets the data lines (if in forward mode). The
+<function>ioctl</function> parameter is a pointer to an <type>unsigned
+char</type>.</para>
+
+</listitem></varlistentry>
+
+<varlistentry><term><constant>PPRDATA</constant></term>
+<listitem>
+
+<para>Reads the data lines (if in reverse mode). The
+<function>ioctl</function> parameter is a pointer to an <type>unsigned
+char</type>.</para>
+
+</listitem></varlistentry>
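Putting the last three ioctls together, a single reverse-direction byte read might be sketched as below. <function>read_data_byte</function> is a name of our own, and the code assumes an open, claimed <filename>ppdev</filename> descriptor:

```c
#include <sys/ioctl.h>
#include <linux/ppdev.h>

/* Turn the data drivers off, sample the data lines, then turn the
   drivers back on.  Returns 0 on success, -1 on failure. */
int read_data_byte (int fd, unsigned char *ch)
{
	int dir = 1;			/* non-zero: reverse direction */
	if (ioctl (fd, PPDATADIR, &dir))
		return -1;
	if (ioctl (fd, PPRDATA, ch))
		return -1;
	dir = 0;			/* forward again */
	return ioctl (fd, PPDATADIR, &dir);
}
```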
+
+<varlistentry><term><constant>PPCLRIRQ</constant></term>
+<listitem>
+
+<para>Clears the interrupt count. The <filename>ppdev</filename>
+driver keeps a count of interrupts as they are triggered.
+<constant>PPCLRIRQ</constant> stores this count in an
+<type>int</type>, a pointer to which is passed in as the
+<function>ioctl</function> parameter.</para>
+
+<para>In addition, the interrupt count is reset to zero.</para>
+
+</listitem></varlistentry>
+
+<varlistentry><term><constant>PPWCTLONIRQ</constant></term>
+<listitem>
+
+<para>Set a trigger response. Afterwards when an interrupt is
+triggered, the interrupt handler will set the control lines as
+requested. The <function>ioctl</function> parameter is a pointer to
+an <type>unsigned char</type>, which is interpreted in the same way as
+for <constant>PPWCONTROL</constant>.</para>
+
+<para>The reason for this <function>ioctl</function> is simply speed.
+Without this <function>ioctl</function>, responding to an interrupt
+would start in the interrupt handler, switch context to the user-land
+driver via <function>poll</function> or <function>select</function>,
+and then switch context back to the kernel in order to handle
+<constant>PPWCONTROL</constant>. Doing the whole lot in the interrupt
+handler is a lot faster.</para>
+
+</listitem></varlistentry>
+
+<!-- PPSETPHASE? -->
+
+</variablelist>
+
+</sect2>
+
+<sect2>
+<title>Transferring data: <function>read</function> and
+<function>write</function></title>
+
+<para>Transferring data using <function>read</function> and
+<function>write</function> is straightforward. The data is
+transferred using the current IEEE 1284 mode (see the
+<constant>PPSETMODE</constant> <function>ioctl</function>). For modes
+which can only transfer data in one direction, only the appropriate
+function will work, of course.</para>
+</sect2>
+
+<sect2>
+<title>Waiting for events: <function>poll</function> and
+<function>select</function></title>
+
+<para>The <filename>ppdev</filename> driver provides user-land device
+drivers with the ability to wait for interrupts, and this is done
+using <function>poll</function> (and <function>select</function>,
+which is implemented in terms of <function>poll</function>).</para>
+
+<para>When a user-land device driver wants to wait for an interrupt,
+it sleeps with <function>poll</function>. When the interrupt arrives,
+<filename>ppdev</filename> wakes it up (with a <quote>read</quote>
+event, although strictly speaking there is nothing to actually
+<function>read</function>).</para>
+
+</sect2>
+
+</sect1>
+
+<sect1>
+<title>Examples</title>
+
+<para>Presented here are two demonstrations of how to write a simple
+printer driver for <filename>ppdev</filename>. Firstly we will use
+the <function>write</function> function, and after that we will drive
+the control and data lines directly.</para>
+
+<para>The first thing to do is to actually open the device.</para>
+
+<programlisting><![CDATA[
+int drive_printer (const char *name)
+{
+ int fd;
+ int mode; /* We'll need this later. */
+
+ fd = open (name, O_RDWR);
+ if (fd == -1) {
+ perror ("open");
+ return 1;
+ }
+]]></programlisting>
+
+<para>Here <varname>name</varname> should be something along the lines
+of <filename>"/dev/parport0"</filename>. (If you don't have any
+<filename>/dev/parport</filename> files, you can make them with
+<command>mknod</command>; they are character special device nodes with
+major 99.)</para>
+
+<para>In order to do anything with the port we need to claim access to
+it.</para>
+
+<programlisting><![CDATA[
+ if (ioctl (fd, PPCLAIM)) {
+ perror ("PPCLAIM");
+ close (fd);
+ return 1;
+ }
+]]></programlisting>
+
+<para>Our printer driver will copy its input (from
+<varname>stdin</varname>) to the printer, and it can do that in one of
+two ways. The first way is to hand it all off to the kernel driver,
+with the knowledge that the protocol that the printer speaks is IEEE
+1284's <quote>compatibility</quote> mode.</para>
+
+<programlisting><![CDATA[
+ /* Switch to compatibility mode. (In fact we don't need
+ * to do this, since we start off in compatibility mode
+	 * anyway, but this demonstrates PPNEGOT.)  */
+ mode = IEEE1284_MODE_COMPAT;
+ if (ioctl (fd, PPNEGOT, &mode)) {
+ perror ("PPNEGOT");
+ close (fd);
+ return 1;
+ }
+
+ for (;;) {
+ char buffer[1000];
+ char *ptr = buffer;
+		ssize_t got;
+
+ got = read (0 /* stdin */, buffer, 1000);
+ if (got < 0) {
+ perror ("read");
+ close (fd);
+ return 1;
+ }
+
+ if (got == 0)
+ /* End of input */
+ break;
+
+ while (got > 0) {
+ int written = write_printer (fd, ptr, got);
+
+ if (written < 0) {
+ perror ("write");
+ close (fd);
+ return 1;
+ }
+
+ ptr += written;
+ got -= written;
+ }
+ }
+]]></programlisting>
+
+<para>The <function>write_printer</function> function is not pictured
+above. This is because the main loop that is shown can be used for
+both methods of driving the printer. Here is one implementation of
+<function>write_printer</function>:</para>
+
+<programlisting><![CDATA[
+ssize_t write_printer (int fd, const void *ptr, size_t count)
+{
+ return write (fd, ptr, count);
+}
+]]></programlisting>
+
+<para>We hand the data to the kernel-level driver (using
+<function>write</function>) and it handles the printer
+protocol.</para>
+
+<para>Now let's do it the hard way! In this particular example there
+is no practical reason to do anything other than just call
+<function>write</function>, because we know that the printer talks an
+IEEE 1284 protocol. On the other hand, this particular example does
+not even need a user-land driver since there is already a kernel-level
+one; for the purpose of this discussion, try to imagine that the
+printer speaks a protocol that is not already implemented under
+Linux.</para>
+
+<para>So, here is the alternative implementation of
+<function>write_printer</function> (for brevity, error checking has
+been omitted):</para>
+
+<programlisting><![CDATA[
+ssize_t write_printer (int fd, const void *ptr, size_t count)
+{
+ ssize_t wrote = 0;
+
+ while (wrote < count) {
+ unsigned char status, control, data;
+ unsigned char mask = (PARPORT_STATUS_ERROR
+ | PARPORT_STATUS_BUSY);
+ unsigned char val = (PARPORT_STATUS_ERROR
+ | PARPORT_STATUS_BUSY);
+		struct ppdev_frob_struct frob;
+ struct timespec ts;
+
+ /* Wait for printer to be ready */
+ for (;;) {
+ ioctl (fd, PPRSTATUS, &status);
+
+ if ((status & mask) == val)
+ break;
+
+ ioctl (fd, PPRELEASE);
+ sleep (1);
+ ioctl (fd, PPCLAIM);
+ }
+
+ /* Set the data lines */
+		data = ((const unsigned char *) ptr)[wrote];
+ ioctl (fd, PPWDATA, &data);
+
+ /* Delay for a bit */
+ ts.tv_sec = 0;
+ ts.tv_nsec = 1000;
+ nanosleep (&ts, NULL);
+
+ /* Pulse strobe */
+ frob.mask = PARPORT_CONTROL_STROBE;
+ frob.val = PARPORT_CONTROL_STROBE;
+ ioctl (fd, PPFCONTROL, &frob);
+ nanosleep (&ts, NULL);
+
+ /* End the pulse */
+ frob.val = 0;
+ ioctl (fd, PPFCONTROL, &frob);
+ nanosleep (&ts, NULL);
+
+ wrote++;
+ }
+
+ return wrote;
+}
+]]></programlisting>
+
+<para>To show a bit more of the <filename>ppdev</filename> interface,
+here is a small piece of code that is intended to mimic the printer's
+side of the printer protocol.</para>
+
+<programlisting><![CDATA[
+ for (;;)
+ {
+ int irqc;
+      /* Busy, nAck, nFault: control-line bit definitions (not shown). */
+ int busy = nAck | nFault;
+ int acking = nFault;
+ int ready = Busy | nAck | nFault;
+ char ch;
+
+ /* Set up the control lines when an interrupt happens. */
+ ioctl (fd, PPWCTLONIRQ, &busy);
+
+ /* Now we're ready. */
+ ioctl (fd, PPWCONTROL, &ready);
+
+ /* Wait for an interrupt. */
+ {
+ fd_set rfds;
+ FD_ZERO (&rfds);
+ FD_SET (fd, &rfds);
+        if (select (fd + 1, &rfds, NULL, NULL, NULL) < 0)
+ /* Caught a signal? */
+ continue;
+ }
+
+ /* We are now marked as busy. */
+
+ /* Fetch the data. */
+ ioctl (fd, PPRDATA, &ch);
+
+ /* Clear the interrupt. */
+ ioctl (fd, PPCLRIRQ, &irqc);
+ if (irqc > 1)
+ fprintf (stderr, "Arghh! Missed %d interrupt%s!\n",
+                 irqc - 1, irqc == 2 ? "" : "s");
+
+ /* Ack it. */
+ ioctl (fd, PPWCONTROL, &acking);
+ usleep (2);
+ ioctl (fd, PPWCONTROL, &busy);
+
+ putchar (ch);
+ }
+]]></programlisting>
+
+</sect1>
+
+</chapter>
+</book>
\ No newline at end of file
Version history
===============
-0.9.4.1:
+0.9.4.2 (March 21, 2000):
+* Fix 21041 CSR7, CSR13/14/15 handling
+* Merge some PCI ids from tulip 0.91x
+* Merge some HAS_xxx flags and flag settings from tulip 0.91x
+* asm/io.h fix (submitted by many) and cleanup
+* s/HAS_NWAY143/HAS_NWAY/
+* Cleanup 21041 mode reporting
+* Small code cleanups
+
+0.9.4.1 (March 18, 2000):
* Finish PCI DMA conversion (davem)
* Do not netif_start_queue() at end of tulip_tx_timeout() (kuznet)
* PCI DMA fix (kuznet)
CFLAGS := $(CPPFLAGS) -Wall -Wstrict-prototypes -O2 -fomit-frame-pointer
AFLAGS := $(CPPFLAGS)
+# use '-fno-strict-aliasing', but only if the compiler can take it
+CFLAGS += $(shell if $(CC) -fno-strict-aliasing -S -o /dev/null -xc /dev/null >/dev/null 2>&1; then echo "-fno-strict-aliasing"; fi)
+
export CPPFLAGS CFLAGS AFLAGS
#
export NETWORKS DRIVERS LIBS HEAD LDFLAGS LINKFLAGS MAKEBOOT ASFLAGS
-# use '-fno-strict-aliasing', but only if the compiler can take it
-CFLAGS += $(shell if $(CC) -fno-strict-aliasing -S -o /dev/null -xc /dev/null >/dev/null 2>&1; then echo "-fno-strict-aliasing"; fi)
-
.S.s:
- $(CC) -D__ASSEMBLY__ $(AFLAGS) -traditional -E -o $*.s $<
+ $(CPP) -D__ASSEMBLY__ $(AFLAGS) -traditional -o $*.s $<
.S.o:
$(CC) -D__ASSEMBLY__ $(AFLAGS) -traditional -c -o $*.o $<
rm -f .hdepend scripts/mkdep scripts/split-include scripts/docproc
rm -f $(TOPDIR)/include/linux/modversions.h
rm -rf $(TOPDIR)/include/linux/modules
- rm -f Documentation/DocBook/*.sgml
+ make clean TOPDIR=$(TOPDIR) -C Documentation/DocBook
distclean: mrproper
rm -f core `find . \( -name '*.orig' -o -name '*.rej' -o -name '*~' \
endif # CONFIG_MODVERSIONS
ifneq "$(strip $(SYMTAB_OBJS))" ""
-$(SYMTAB_OBJS): $(TOPDIR)/include/linux/modversions.h $(SYMTAB_OBJS:.o=.c)
+$(SYMTAB_OBJS): $(SYMTAB_OBJS:.o=.c) $(TOPDIR)/include/linux/modversions.h
$(CC) $(CFLAGS) $(EXTRA_CFLAGS) $(CFLAGS_$@) -DEXPORT_SYMTAB -c $(@:.o=.c)
@ ( \
echo 'ifeq ($(strip $(subst $(comma),:,$(CFLAGS) $(EXTRA_CFLAGS) $(CFLAGS_$@) -DEXPORT_SYMTAB)),$$(strip $$(subst $$(comma),:,$$(CFLAGS) $$(EXTRA_CFLAGS) $$(CFLAGS_$@) -DEXPORT_SYMTAB)))' ; \
pci_isa_hose = hose = alloc_pci_controler();
hose->io_space = &ioport_resource;
hose->mem_space = &iomem_resource;
- hose->config_space = APECS_CONF;
hose->index = 0;
+ hose->sparse_mem_base = APECS_SPARSE_MEM - IDENT_ADDR;
+ hose->dense_mem_base = APECS_DENSE_MEM - IDENT_ADDR;
+ hose->sparse_io_base = APECS_IO - IDENT_ADDR;
+ hose->dense_io_base = 0;
+
/*
* Set up the PCI to main memory translation windows.
*
/* Fifth, verify that a previously invalid PTE entry gets
filled from the page table. */
- data0 = 0xabcdef123;
+ data0 = 0xabcdef12;
page[0] = data0;
arena->ptes[5] = pte0;
mcheck_expected(0) = 1;
pci_isa_hose = hose = alloc_pci_controler();
hose->io_space = &ioport_resource;
hose->mem_space = &iomem_resource;
- hose->config_space = CIA_CONF;
hose->index = 0;
if (! is_pyxis) {
if (request_resource(&iomem_resource, hae_mem) < 0)
printk(KERN_ERR "Failed to request HAE_MEM\n");
+
+ hose->sparse_mem_base = CIA_SPARSE_MEM - IDENT_ADDR;
+ hose->dense_mem_base = CIA_DENSE_MEM - IDENT_ADDR;
+ hose->sparse_io_base = CIA_IO - IDENT_ADDR;
+ hose->dense_io_base = 0;
+ } else {
+ hose->sparse_mem_base = 0;
+ hose->dense_mem_base = CIA_BW_MEM - IDENT_ADDR;
+ hose->sparse_io_base = 0;
+ hose->dense_io_base = CIA_BW_IO - IDENT_ADDR;
}
/*
* Create our single hose.
*/
- hose = alloc_pci_controler();
+ pci_isa_hose = hose = alloc_pci_controler();
hose->io_space = &ioport_resource;
hose->mem_space = &iomem_resource;
- hose->config_space = IRONGATE_CONF;
hose->index = 0;
+ /* This is for userland consumption. For some reason, the 40-bit
+ PIO bias that we use in the kernel through KSEG didn't work for
+ the page table based user mappings. So make sure we get the
+ 43-bit PIO bias. */
+ hose->sparse_mem_base = 0;
+ hose->sparse_io_base = 0;
+ hose->dense_mem_base
+ = (IRONGATE_MEM & 0xffffffffff) | 0x80000000000;
+ hose->dense_io_base
+ = (IRONGATE_IO & 0xffffffffff) | 0x80000000000;
+
hose->sg_isa = hose->sg_pci = NULL;
__direct_map_base = 0;
__direct_map_size = 0xffffffff;
pci_isa_hose = hose = alloc_pci_controler();
hose->io_space = &ioport_resource;
hose->mem_space = &iomem_resource;
- hose->config_space = LCA_CONF;
hose->index = 0;
+ hose->sparse_mem_base = LCA_SPARSE_MEM - IDENT_ADDR;
+ hose->dense_mem_base = LCA_DENSE_MEM - IDENT_ADDR;
+ hose->sparse_io_base = LCA_IO - IDENT_ADDR;
+ hose->dense_io_base = 0;
+
/*
* Set up the PCI to main memory translation windows.
*
bus = 0;
addr = (bus << 16) | (devfn << 8) | (where);
addr <<= 5; /* swizzle for SPARSE */
- addr |= hose->config_space;
+ addr |= hose->config_space_base;
*pci_addr = addr;
DBG_CFG(("mk_conf_addr: returning pci_addr 0x%lx\n", addr));
int mid = MCPCIA_HOSE2MID(h);
hose = alloc_pci_controler();
+ if (h == 0)
+ pci_isa_hose = hose;
io = alloc_resource();
mem = alloc_resource();
hae_mem = alloc_resource();
hose->io_space = io;
hose->mem_space = hae_mem;
- hose->config_space = MCPCIA_CONF(mid);
+ hose->sparse_mem_base = MCPCIA_SPARSE(mid) - IDENT_ADDR;
+ hose->dense_mem_base = MCPCIA_DENSE(mid) - IDENT_ADDR;
+ hose->sparse_io_base = MCPCIA_IO(mid) - IDENT_ADDR;
+ hose->dense_io_base = 0;
+ hose->config_space_base = MCPCIA_CONF(mid);
hose->index = h;
io->start = MCPCIA_IO(mid) - MCPCIA_IO_BIAS;
* Create our single hose.
*/
- hose = alloc_pci_controler();
+ pci_isa_hose = hose = alloc_pci_controler();
hose->io_space = &ioport_resource;
hose->mem_space = &iomem_resource;
- hose->config_space = POLARIS_DENSE_CONFIG_BASE;
hose->index = 0;
+ hose->sparse_mem_base = 0;
+ hose->dense_mem_base = POLARIS_DENSE_MEM_BASE - IDENT_ADDR;
+ hose->sparse_io_base = 0;
+ hose->dense_io_base = POLARIS_DENSE_IO_BASE - IDENT_ADDR;
+
hose->sg_isa = hose->sg_pci = NULL;
/* The I/O window is fixed at 2G @ 2G. */
* Create our single hose.
*/
- hose = alloc_pci_controler();
+ pci_isa_hose = hose = alloc_pci_controler();
hose->io_space = &ioport_resource;
hose->mem_space = &iomem_resource;
- hose->config_space = T2_CONF;
hose->index = 0;
+ hose->sparse_mem_base = T2_SPARSE_MEM - IDENT_ADDR;
+ hose->dense_mem_base = T2_DENSE_MEM - IDENT_ADDR;
+ hose->sparse_io_base = T2_IO - IDENT_ADDR;
+ hose->dense_io_base = 0;
+
hose->sg_isa = hose->sg_pci = NULL;
__direct_map_base = 0x40000000;
__direct_map_size = 0x40000000;
*type1 = (bus != 0);
addr = (bus << 16) | (device_fn << 8) | where;
- addr |= hose->config_space;
+ addr |= hose->config_space_base;
*pci_addr = addr;
DBG_CFG(("mk_conf_addr: returning pci_addr 0x%lx\n", addr));
hose->io_space = alloc_resource();
hose->mem_space = alloc_resource();
- hose->config_space = TSUNAMI_CONF(index);
+ /* This is for userland consumption. For some reason, the 40-bit
+ PIO bias that we use in the kernel through KSEG didn't work for
+ the page table based user mappings. So make sure we get the
+ 43-bit PIO bias. */
+ hose->sparse_mem_base = 0;
+ hose->sparse_io_base = 0;
+ hose->dense_mem_base
+ = (TSUNAMI_MEM(index) & 0xffffffffff) | 0x80000000000;
+ hose->dense_io_base
+ = (TSUNAMI_IO(index) & 0xffffffffff) | 0x80000000000;
+
+ hose->config_space_base = TSUNAMI_CONF(index);
hose->index = index;
hose->io_space->start = TSUNAMI_IO(index) - TSUNAMI_IO_BIAS;
#define SIGCHLD 20
-#define NR_SYSCALLS 376
+#define NR_SYSCALLS 377
/*
* These offsets must match with alpha_mv in <asm/machvec.h>.
.quad sys_ni_syscall /* sys_dipc */
.quad sys_pivot_root
.quad sys_mincore /* 375 */
+ .quad sys_pciconfig_iobase
/* Return current software fp control & status bits. */
/* Note that DU doesn't verify available space here. */
- /* EV6 implements most of the bits in hardware. If
- UNDZ is not set, UNFD is maintained in software. */
- if (implver() == IMPLVER_EV6) {
- unsigned long fpcr = rdfpcr();
- w = ieee_fpcr_to_swcr(fpcr);
- if (!(fpcr & FPCR_UNDZ)) {
- w &= ~IEEE_TRAP_ENABLE_UNF;
- w |= (current->thread.flags
- & IEEE_TRAP_ENABLE_UNF);
- }
- } else {
- /* Otherwise we are forced to do everything in sw. */
- w = current->thread.flags & IEEE_SW_MASK;
- }
-
+ w = current->thread.flags & IEEE_SW_MASK;
+ w = swcr_update_status(w, rdfpcr());
if (put_user(w, (unsigned long *) buffer))
return -EFAULT;
return 0;
{
switch (op) {
case SSI_IEEE_FP_CONTROL: {
- unsigned long swcr, fpcr, undz;
+ unsigned long swcr, fpcr;
/*
* Alpha Architecture Handbook 4.7.7.3:
current->thread.flags &= ~IEEE_SW_MASK;
current->thread.flags |= swcr & IEEE_SW_MASK;
- /* Update the real fpcr. Keep UNFD off if not UNDZ. */
+ /* Update the real fpcr. */
fpcr = rdfpcr();
- undz = (fpcr & FPCR_UNDZ);
- fpcr &= ~(FPCR_MASK | FPCR_DYN_MASK | FPCR_UNDZ);
+ fpcr &= FPCR_DYN_MASK;
fpcr |= ieee_swcr_to_fpcr(swcr);
- fpcr &= ~(undz << 1);
wrfpcr(fpcr);
-
+
+ /* If any exceptions are now unmasked, send a signal. */
+ if (((swcr & IEEE_STATUS_MASK)
+ >> IEEE_STATUS_TO_EXCSUM_SHIFT) & swcr) {
+ send_sig(SIGFPE, current, 1);
+ }
+
return 0;
}
return res;
}
+
+
+/* Provide information on locations of various I/O regions in physical
+ memory. Do this on a per-card basis so that we choose the right hose. */
+
+asmlinkage long
+sys_pciconfig_iobase(long which, unsigned long bus, unsigned long dfn)
+{
+ struct pci_controler *hose;
+ struct pci_dev *dev;
+
+ /* Special hook for ISA access. */
+ if (bus == 0 && dfn == 0) {
+ hose = pci_isa_hose;
+ } else {
+ dev = pci_find_slot(bus, dfn);
+ if (!dev)
+ return -ENODEV;
+ hose = dev->sysdata;
+ }
+
+ switch (which) {
+ case IOBASE_HOSE:
+ return hose->index;
+ case IOBASE_SPARSE_MEM:
+ return hose->sparse_mem_base;
+ case IOBASE_DENSE_MEM:
+ return hose->dense_mem_base;
+ case IOBASE_SPARSE_IO:
+ return hose->sparse_io_base;
+ case IOBASE_DENSE_IO:
+ return hose->dense_io_base;
+ }
+
+ return -EOPNOTSUPP;
+}
flush_thread(void)
{
/* Arrange for each exec'ed process to start off with a clean slate
- with respect to the FPU. This is all exceptions disabled. Note
- that EV6 defines UNFD valid only with UNDZ, which we don't want
- for IEEE conformance -- so that disabled bit remains in software. */
-
+ with respect to the FPU. This is all exceptions disabled. */
current->thread.flags &= ~IEEE_SW_MASK;
- wrfpcr(FPCR_DYN_NORMAL | FPCR_INVD | FPCR_DZED | FPCR_OVFD | FPCR_INED);
+ wrfpcr(FPCR_DYN_NORMAL | ieee_swcr_to_fpcr(0));
}
void
#include <asm/uaccess.h>
#include <asm/pgtable.h>
#include <asm/system.h>
+#include <asm/fpu.h>
#include "proto.h"
/*
* Get contents of register REGNO in task TASK.
*/
-static inline long
+static long
get_reg(struct task_struct * task, unsigned long regno)
{
+ /* Special hack for fpcr -- combine hardware and software bits. */
+ if (regno == 63) {
+ unsigned long fpcr = *get_reg_addr(task, regno);
+ unsigned long swcr = task->thread.flags & IEEE_SW_MASK;
+ swcr = swcr_update_status(swcr, fpcr);
+ return fpcr | swcr;
+ }
return *get_reg_addr(task, regno);
}
/*
* Write contents of register REGNO in task TASK.
*/
-static inline int
+static int
put_reg(struct task_struct *task, unsigned long regno, long data)
{
+ if (regno == 63) {
+ task->thread.flags = ((task->thread.flags & ~IEEE_SW_MASK)
+ | (data & IEEE_SW_MASK));
+ data = (data & FPCR_DYN_MASK) | ieee_swcr_to_fpcr(data);
+ }
*get_reg_addr(task, regno) = data;
return 0;
}
#include "proto.h"
#include "irq_impl.h"
+#include "pci_impl.h"
#include "machvec_impl.h"
static void __init
jensen_init_arch(void)
{
+ struct pci_controler *hose;
+
+ /* Create a hose so that we can report i/o base addresses to
+ userland. */
+
+ pci_isa_hose = hose = alloc_pci_controler();
+ hose->io_space = &ioport_resource;
+ hose->mem_space = &iomem_resource;
+ hose->index = 0;
+
+ hose->sparse_mem_base = EISA_MEM - IDENT_ADDR;
+ hose->dense_mem_base = 0;
+ hose->sparse_io_base = EISA_IO - IDENT_ADDR;
+ hose->dense_io_base = 0;
+
+ hose->sg_isa = hose->sg_pci = NULL;
__direct_map_base = 0;
__direct_map_size = 0xffffffff;
}
FP_DECL_D(DA); FP_DECL_D(DB); FP_DECL_D(DR);
unsigned long fa, fb, fc, func, mode, src;
- unsigned long fpcw = current->thread.flags;
- unsigned long res, va, vb, vc, fpcr;
+ unsigned long res, va, vb, vc, swcr, fpcr;
__u32 insn;
MOD_INC_USE_COUNT;
mode = (insn >> 11) & 0x3;
fpcr = rdfpcr();
+ swcr = swcr_update_status(current->thread.flags, fpcr);
if (mode == 3) {
- /* Dynamic -- get rounding mode from fpcr. */
- mode = (fpcr >> FPCR_DYN_SHIFT) & 3;
+ /* Dynamic -- get rounding mode from fpcr. */
+ mode = (fpcr >> FPCR_DYN_SHIFT) & 3;
}
switch (src) {
}
FP_CMP_D(res, DA, DB, 3);
vc = 0x4000000000000000;
- /* CMPTEQ, CMPTUN don't trap on QNaN, while CMPTLT and CMPTLE do */
- if (res == 3 && ((func & 3) >= 2 || FP_ISSIGNAN_D(DA) || FP_ISSIGNAN_D(DB)))
+ /* CMPTEQ, CMPTUN don't trap on QNaN,
+ while CMPTLT and CMPTLE do */
+ if (res == 3
+ && ((func & 3) >= 2
+ || FP_ISSIGNAN_D(DA)
+ || FP_ISSIGNAN_D(DB))) {
FP_SET_EXCEPTION(FP_EX_INVALID);
+ }
switch (func) {
case FOP_FNC_CMPxUN: if (res != 3) vc = 0; break;
case FOP_FNC_CMPxEQ: if (res) vc = 0; break;
}
case FOP_FNC_CVTxQ:
- if (DB_c == FP_CLS_NAN && (_FP_FRAC_HIGH_RAW_D(DB) & _FP_QNANBIT_D))
- vc = 0; /* AAHB Table B-2 sais QNaN should not trigger INV */
- else
+ if (DB_c == FP_CLS_NAN
+ && (_FP_FRAC_HIGH_RAW_D(DB) & _FP_QNANBIT_D)) {
+ /* AAHB Table B-2 says QNaN should not trigger INV */
+ vc = 0;
+ } else
FP_TO_INT_ROUND_D(vc, DB, 64, 2);
goto done_d;
}
pack_s:
FP_PACK_SP(&vc, SR);
+ if ((_fex & FP_EX_UNDERFLOW) && (swcr & IEEE_MAP_UMZ))
+ vc = 0;
alpha_write_fp_reg_s(fc, vc);
goto done;
pack_d:
FP_PACK_DP(&vc, DR);
+ if ((_fex & FP_EX_UNDERFLOW) && (swcr & IEEE_MAP_UMZ))
+ vc = 0;
done_d:
alpha_write_fp_reg(fc, vc);
goto done;
done:
if (_fex) {
/* Record exceptions in software control word. */
- current->thread.flags
- = fpcw |= (_fex << IEEE_STATUS_TO_EXCSUM_SHIFT);
+ swcr |= (_fex << IEEE_STATUS_TO_EXCSUM_SHIFT);
+ current->thread.flags |= (_fex << IEEE_STATUS_TO_EXCSUM_SHIFT);
- /* Update hardware control register */
+ /* Update hardware control register. */
fpcr &= (~FPCR_MASK | FPCR_DYN_MASK);
- fpcr |= ieee_swcr_to_fpcr(fpcw);
+ fpcr |= ieee_swcr_to_fpcr(swcr);
wrfpcr(fpcr);
/* Do we generate a signal? */
- if (_fex & fpcw & IEEE_TRAP_ENABLE_MASK) {
+ if (_fex & swcr & IEEE_TRAP_ENABLE_MASK) {
MOD_DEC_USE_COUNT;
return 0;
}
alpha_fp_emul_imprecise (struct pt_regs *regs, unsigned long write_mask)
{
unsigned long trigger_pc = regs->pc - 4;
- unsigned long insn, opcode, rc;
+ unsigned long insn, opcode, rc, no_signal = 0;
MOD_INC_USE_COUNT;
case OPC_PAL:
case OPC_JSR:
case 0x30 ... 0x3f: /* branches */
- MOD_DEC_USE_COUNT;
- return 0;
+ goto egress;
case OPC_MISC:
switch (insn & 0xffff) {
case MISC_TRAPB:
case MISC_EXCB:
- MOD_DEC_USE_COUNT;
- return 0;
+ goto egress;
default:
break;
break;
}
if (!write_mask) {
- if (alpha_fp_emul(trigger_pc)) {
- /* re-execute insns in trap-shadow: */
- regs->pc = trigger_pc + 4;
- MOD_DEC_USE_COUNT;
- return 1;
- }
- break;
+ /* Re-execute insns in the trap-shadow. */
+ regs->pc = trigger_pc + 4;
+ no_signal = alpha_fp_emul(trigger_pc);
+ goto egress;
}
trigger_pc -= 4;
}
+
+egress:
MOD_DEC_USE_COUNT;
- return 0;
+ return no_signal;
}
# CONFIG_NET_ISA is not set
CONFIG_NET_PCI=y
# CONFIG_PCNET32 is not set
+# CONFIG_ADAPTEC_STARFIRE is not set
# CONFIG_APRICOT is not set
# CONFIG_CS89x0 is not set
# CONFIG_DE4X5 is not set
# CONFIG_PCMCIA_XIRC2PS is not set
# CONFIG_ARCNET_COM20020_CS is not set
# CONFIG_PCMCIA_3C575 is not set
+# CONFIG_PCMCIA_XIRTULIP is not set
CONFIG_NET_PCMCIA_RADIO=y
CONFIG_PCMCIA_RAYCS=y
# CONFIG_PCMCIA_NETWAVE is not set
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/mm.h>
-#include <linux/autoconf.h>
#include <asm/bcache.h>
#include <asm/sgi/sgimc.h>
#
-# Automatically generated by make menuconfig: don't edit
+# Automatically generated make config: don't edit
#
# CONFIG_UID16 is not set
#
# General setup
#
+# CONFIG_PCI is not set
CONFIG_PCI=y
CONFIG_PCI=y
CONFIG_NET=y
CONFIG_PMAC_PBOOK=y
CONFIG_MAC_FLOPPY=y
CONFIG_MAC_SERIAL=y
-# CONFIG_SERIAL_CONSOLE is not set
CONFIG_ADB=y
CONFIG_ADB_CUDA=y
CONFIG_ADB_MACIO=y
# Plug and Play configuration
#
# CONFIG_PNP is not set
-# CONFIG_ISAPNP is not set
#
# Block devices
#
# CONFIG_BLK_DEV_FD is not set
-CONFIG_BLK_DEV_IDE=y
-# CONFIG_BLK_DEV_HD_IDE is not set
-CONFIG_BLK_DEV_IDEDISK=y
-# CONFIG_IDEDISK_MULTI_MODE is not set
-# CONFIG_BLK_DEV_IDECS is not set
-CONFIG_BLK_DEV_IDECD=y
-# CONFIG_BLK_DEV_IDETAPE is not set
-CONFIG_BLK_DEV_IDEFLOPPY=y
-CONFIG_BLK_DEV_IDESCSI=y
-# CONFIG_BLK_DEV_CMD640 is not set
-# CONFIG_BLK_DEV_RZ1000 is not set
-CONFIG_BLK_DEV_IDEPCI=y
-# CONFIG_IDEPCI_SHARE_IRQ is not set
-# CONFIG_BLK_DEV_IDEDMA_PCI is not set
-# CONFIG_BLK_DEV_OFFBOARD is not set
-# CONFIG_BLK_DEV_AEC6210 is not set
-# CONFIG_BLK_DEV_CMD64X is not set
-# CONFIG_BLK_DEV_CS5530 is not set
-# CONFIG_BLK_DEV_OPTI621 is not set
-CONFIG_BLK_DEV_SL82C105=y
-CONFIG_BLK_DEV_IDE_PMAC=y
-CONFIG_BLK_DEV_IDEDMA_PMAC=y
-CONFIG_IDEDMA_PMAC_AUTO=y
-CONFIG_BLK_DEV_IDEDMA=y
-CONFIG_IDEDMA_AUTO=y
-# CONFIG_IDE_CHIPSETS is not set
+# CONFIG_BLK_DEV_XD is not set
# CONFIG_BLK_CPQ_DA is not set
+# CONFIG_BLK_DEV_DAC960 is not set
+
+#
+# Additional Block Devices
+#
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_MD is not set
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_INITRD=y
-# CONFIG_BLK_DEV_XD is not set
-# CONFIG_BLK_DEV_DAC960 is not set
-# CONFIG_PARIDE is not set
-CONFIG_BLK_DEV_IDE_MODES=y
-# CONFIG_BLK_DEV_HD is not set
#
# Networking options
# CONFIG_IP_MROUTE is not set
CONFIG_IP_ALIAS=y
CONFIG_SYN_COOKIES=y
+
+#
+# (it is safe to leave these untouched)
+#
CONFIG_SKB_LARGE=y
# CONFIG_IPV6 is not set
# CONFIG_KHTTPD is not set
# CONFIG_ATM is not set
+
+#
+#
+#
# CONFIG_IPX is not set
CONFIG_ATALK=m
# CONFIG_DECNET is not set
#
# CONFIG_NET_SCHED is not set
+#
+# ATA/IDE/MFM/RLL support
+#
+CONFIG_IDE=y
+
+#
+# IDE, ATA and ATAPI Block devices
+#
+CONFIG_BLK_DEV_IDE=y
+
+#
+# Please see Documentation/ide.txt for help/info on IDE drives
+#
+# CONFIG_BLK_DEV_HD_IDE is not set
+# CONFIG_BLK_DEV_HD is not set
+CONFIG_BLK_DEV_IDEDISK=y
+# CONFIG_IDEDISK_MULTI_MODE is not set
+CONFIG_BLK_DEV_IDECD=y
+# CONFIG_BLK_DEV_IDETAPE is not set
+# CONFIG_BLK_DEV_IDEFLOPPY is not set
+CONFIG_BLK_DEV_IDESCSI=y
+
+#
+# IDE chipset support/bugfixes
+#
+# CONFIG_BLK_DEV_CMD640 is not set
+# CONFIG_BLK_DEV_RZ1000 is not set
+CONFIG_BLK_DEV_IDEPCI=y
+# CONFIG_IDEPCI_SHARE_IRQ is not set
+# CONFIG_BLK_DEV_IDEDMA_PCI is not set
+# CONFIG_BLK_DEV_OFFBOARD is not set
+# CONFIG_BLK_DEV_IDEDMA is not set
+CONFIG_IDEDMA_PCI_EXPERIMENTAL=y
+# CONFIG_BLK_DEV_CY82C693 is not set
+# CONFIG_BLK_DEV_NS87415 is not set
+# CONFIG_BLK_DEV_OPTI621 is not set
+# CONFIG_BLK_DEV_TRM290 is not set
+# CONFIG_BLK_DEV_VIA82CXXX is not set
+# CONFIG_BLK_DEV_SL82C105 is not set
+CONFIG_BLK_DEV_IDE_PMAC=y
+CONFIG_BLK_DEV_IDEDMA_PMAC=y
+CONFIG_IDEDMA_PMAC_AUTO=y
+CONFIG_BLK_DEV_IDEDMA=y
+# CONFIG_IDE_CHIPSETS is not set
+CONFIG_IDEDMA_AUTO=y
+CONFIG_BLK_DEV_IDE_MODES=y
+
#
# SCSI support
#
CONFIG_SCSI=y
+
+#
+# SCSI support type (disk, tape, CD-ROM)
+#
CONFIG_BLK_DEV_SD=y
CONFIG_SD_EXTRA_DEVS=40
CONFIG_CHR_DEV_ST=y
-CONFIG_ST_EXTRA_DEVS=2
CONFIG_BLK_DEV_SR=y
CONFIG_BLK_DEV_SR_VENDOR=y
CONFIG_SR_EXTRA_DEVS=2
CONFIG_CHR_DEV_SG=y
+
+#
+# Some SCSI devices (e.g. CD jukebox) support multiple LUNs
+#
# CONFIG_SCSI_DEBUG_QUEUES is not set
# CONFIG_SCSI_MULTI_LUN is not set
CONFIG_SCSI_CONSTANTS=y
#
# Appletalk devices
#
-# CONFIG_LTPC is not set
-# CONFIG_COPS is not set
-# CONFIG_IPDDP is not set
+# CONFIG_APPLETALK is not set
# CONFIG_DUMMY is not set
# CONFIG_BONDING is not set
# CONFIG_EQUALIZER is not set
#
# CONFIG_FTAPE is not set
# CONFIG_DRM is not set
-# CONFIG_DRM_TDFX is not set
# CONFIG_AGP is not set
#
# USB support
#
CONFIG_USB=y
+
+#
+# USB Controllers
+#
# CONFIG_USB_UHCI is not set
# CONFIG_USB_UHCI_ALT is not set
CONFIG_USB_OHCI=y
+
+#
+# Miscellaneous USB options
+#
# CONFIG_USB_DEVICEFS is not set
+
+#
+# USB Devices
+#
# CONFIG_USB_PRINTER is not set
# CONFIG_USB_SCANNER is not set
# CONFIG_USB_AUDIO is not set
# CONFIG_USB_OV511 is not set
# CONFIG_USB_DC2XX is not set
# CONFIG_USB_STORAGE is not set
-# CONFIG_USB_USS720 is not set
# CONFIG_USB_DABUSB is not set
# CONFIG_USB_PLUSB is not set
# CONFIG_USB_PEGASUS is not set
# CONFIG_USB_RIO500 is not set
+# CONFIG_USB_DSBR is not set
+
+#
+# USB HID
+#
# CONFIG_USB_HID is not set
CONFIG_USB_KBD=y
CONFIG_USB_MOUSE=y
# CONFIG_HPFS_FS is not set
CONFIG_PROC_FS=y
# CONFIG_DEVFS_FS is not set
-# CONFIG_DEVFS_DEBUG is not set
CONFIG_DEVPTS_FS=y
# CONFIG_QNX4FS_FS is not set
# CONFIG_ROMFS_FS is not set
#
# CONFIG_CODA_FS is not set
CONFIG_NFS_FS=y
-# CONFIG_ROOT_NFS is not set
CONFIG_NFSD=y
# CONFIG_NFSD_V3 is not set
CONFIG_SUNRPC=y
# CONFIG_SOUND_MSNDCLAS is not set
# CONFIG_SOUND_MSNDPIN is not set
CONFIG_SOUND_OSS=y
+# CONFIG_SOUND_TRACEINIT is not set
+# CONFIG_SOUND_DMAP is not set
# CONFIG_SOUND_AD1816 is not set
# CONFIG_SOUND_SGALAXY is not set
+# CONFIG_SOUND_ADLIB is not set
+# CONFIG_SOUND_ACI_MIXER is not set
CONFIG_SOUND_CS4232=m
# CONFIG_SOUND_SSCAPE is not set
# CONFIG_SOUND_GUS is not set
# CONFIG_SOUND_NM256 is not set
# CONFIG_SOUND_MAD16 is not set
# CONFIG_SOUND_PAS is not set
-# CONFIG_PAS_JOYSTICK is not set
# CONFIG_SOUND_PSS is not set
-# CONFIG_PSS_HAVE_BOOT is not set
# CONFIG_SOUND_SOFTOSS is not set
# CONFIG_SOUND_SB is not set
+# CONFIG_SOUND_AWE32_SYNTH is not set
# CONFIG_SOUND_WAVEFRONT is not set
# CONFIG_SOUND_MAUI is not set
# CONFIG_SOUND_VIA82CXXX is not set
# CONFIG_SOUND_OPL3SA1 is not set
# CONFIG_SOUND_OPL3SA2 is not set
# CONFIG_SOUND_UART6850 is not set
-
-#
-# Additional low level sound drivers
-#
-# CONFIG_LOWLEVEL_SOUND is not set
+# CONFIG_SOUND_AEDSP16 is not set
#
# Kernel hacking
#
-# Automatically generated by make menuconfig: don't edit
+# Automatically generated make config: don't edit
#
# CONFIG_UID16 is not set
#
# General setup
#
+# CONFIG_PCI is not set
CONFIG_PCI=y
CONFIG_PCI=y
CONFIG_NET=y
CONFIG_PMAC_PBOOK=y
CONFIG_MAC_FLOPPY=y
CONFIG_MAC_SERIAL=y
-# CONFIG_SERIAL_CONSOLE is not set
CONFIG_ADB=y
CONFIG_ADB_CUDA=y
CONFIG_ADB_MACIO=y
# Plug and Play configuration
#
# CONFIG_PNP is not set
-# CONFIG_ISAPNP is not set
#
# Block devices
#
# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_DEV_XD is not set
# CONFIG_BLK_CPQ_DA is not set
+# CONFIG_BLK_DEV_DAC960 is not set
+
+#
+# Additional Block Devices
+#
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_MD is not set
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_INITRD=y
-# CONFIG_BLK_DEV_XD is not set
-# CONFIG_BLK_DEV_DAC960 is not set
-# CONFIG_PARIDE is not set
#
# Networking options
# CONFIG_IP_MROUTE is not set
CONFIG_IP_ALIAS=y
CONFIG_SYN_COOKIES=y
+
+#
+# (it is safe to leave these untouched)
+#
CONFIG_SKB_LARGE=y
# CONFIG_IPV6 is not set
# CONFIG_KHTTPD is not set
# CONFIG_ATM is not set
+
+#
+#
+#
# CONFIG_IPX is not set
CONFIG_ATALK=m
# CONFIG_DECNET is not set
# ATA/IDE/MFM/RLL support
#
CONFIG_IDE=y
+
+#
+# IDE, ATA and ATAPI Block devices
+#
CONFIG_BLK_DEV_IDE=y
+#
+# Please see Documentation/ide.txt for help/info on IDE drives
+#
# CONFIG_BLK_DEV_HD_IDE is not set
+# CONFIG_BLK_DEV_HD is not set
CONFIG_BLK_DEV_IDEDISK=y
# CONFIG_IDEDISK_MULTI_MODE is not set
-# CONFIG_BLK_DEV_IDECS is not set
CONFIG_BLK_DEV_IDECD=y
# CONFIG_BLK_DEV_IDETAPE is not set
-CONFIG_BLK_DEV_IDEFLOPPY=y
+# CONFIG_BLK_DEV_IDEFLOPPY is not set
CONFIG_BLK_DEV_IDESCSI=y
+
+#
+# IDE chipset support/bugfixes
+#
# CONFIG_BLK_DEV_CMD640 is not set
# CONFIG_BLK_DEV_RZ1000 is not set
CONFIG_BLK_DEV_IDEPCI=y
# CONFIG_IDEPCI_SHARE_IRQ is not set
# CONFIG_BLK_DEV_IDEDMA_PCI is not set
# CONFIG_BLK_DEV_OFFBOARD is not set
-# CONFIG_BLK_DEV_AEC6210 is not set
-# CONFIG_BLK_DEV_CMD64X is not set
-# CONFIG_BLK_DEV_CS5530 is not set
+# CONFIG_BLK_DEV_IDEDMA is not set
+CONFIG_IDEDMA_PCI_EXPERIMENTAL=y
+# CONFIG_BLK_DEV_CY82C693 is not set
+# CONFIG_BLK_DEV_NS87415 is not set
# CONFIG_BLK_DEV_OPTI621 is not set
-CONFIG_BLK_DEV_SL82C105=y
+# CONFIG_BLK_DEV_TRM290 is not set
+# CONFIG_BLK_DEV_VIA82CXXX is not set
+# CONFIG_BLK_DEV_SL82C105 is not set
CONFIG_BLK_DEV_IDE_PMAC=y
CONFIG_BLK_DEV_IDEDMA_PMAC=y
CONFIG_IDEDMA_PMAC_AUTO=y
CONFIG_BLK_DEV_IDEDMA=y
-CONFIG_IDEDMA_AUTO=y
# CONFIG_IDE_CHIPSETS is not set
+CONFIG_IDEDMA_AUTO=y
CONFIG_BLK_DEV_IDE_MODES=y
-# CONFIG_BLK_DEV_HD is not set
#
# SCSI support
#
CONFIG_SCSI=y
+
+#
+# SCSI support type (disk, tape, CD-ROM)
+#
CONFIG_BLK_DEV_SD=y
CONFIG_SD_EXTRA_DEVS=40
CONFIG_CHR_DEV_ST=y
-CONFIG_ST_EXTRA_DEVS=2
CONFIG_BLK_DEV_SR=y
CONFIG_BLK_DEV_SR_VENDOR=y
CONFIG_SR_EXTRA_DEVS=2
CONFIG_CHR_DEV_SG=y
+
+#
+# Some SCSI devices (e.g. CD jukebox) support multiple LUNs
+#
# CONFIG_SCSI_DEBUG_QUEUES is not set
# CONFIG_SCSI_MULTI_LUN is not set
CONFIG_SCSI_CONSTANTS=y
#
# Appletalk devices
#
-# CONFIG_LTPC is not set
-# CONFIG_COPS is not set
-# CONFIG_IPDDP is not set
+# CONFIG_APPLETALK is not set
# CONFIG_DUMMY is not set
# CONFIG_BONDING is not set
# CONFIG_EQUALIZER is not set
#
# CONFIG_FTAPE is not set
# CONFIG_DRM is not set
-# CONFIG_DRM_TDFX is not set
# CONFIG_AGP is not set
#
# USB support
#
CONFIG_USB=y
+
+#
+# USB Controllers
+#
# CONFIG_USB_UHCI is not set
# CONFIG_USB_UHCI_ALT is not set
CONFIG_USB_OHCI=y
+
+#
+# Miscellaneous USB options
+#
# CONFIG_USB_DEVICEFS is not set
+
+#
+# USB Devices
+#
# CONFIG_USB_PRINTER is not set
# CONFIG_USB_SCANNER is not set
# CONFIG_USB_AUDIO is not set
# CONFIG_USB_OV511 is not set
# CONFIG_USB_DC2XX is not set
# CONFIG_USB_STORAGE is not set
-# CONFIG_USB_USS720 is not set
# CONFIG_USB_DABUSB is not set
# CONFIG_USB_PLUSB is not set
# CONFIG_USB_PEGASUS is not set
# CONFIG_USB_RIO500 is not set
+# CONFIG_USB_DSBR is not set
+
+#
+# USB HID
+#
# CONFIG_USB_HID is not set
CONFIG_USB_KBD=y
CONFIG_USB_MOUSE=y
# CONFIG_HPFS_FS is not set
CONFIG_PROC_FS=y
# CONFIG_DEVFS_FS is not set
-# CONFIG_DEVFS_DEBUG is not set
CONFIG_DEVPTS_FS=y
# CONFIG_QNX4FS_FS is not set
# CONFIG_ROMFS_FS is not set
#
# CONFIG_CODA_FS is not set
CONFIG_NFS_FS=y
-# CONFIG_ROOT_NFS is not set
CONFIG_NFSD=y
# CONFIG_NFSD_V3 is not set
CONFIG_SUNRPC=y
# CONFIG_SOUND_MSNDCLAS is not set
# CONFIG_SOUND_MSNDPIN is not set
CONFIG_SOUND_OSS=y
+# CONFIG_SOUND_TRACEINIT is not set
+# CONFIG_SOUND_DMAP is not set
# CONFIG_SOUND_AD1816 is not set
# CONFIG_SOUND_SGALAXY is not set
+# CONFIG_SOUND_ADLIB is not set
+# CONFIG_SOUND_ACI_MIXER is not set
CONFIG_SOUND_CS4232=m
# CONFIG_SOUND_SSCAPE is not set
# CONFIG_SOUND_GUS is not set
# CONFIG_SOUND_NM256 is not set
# CONFIG_SOUND_MAD16 is not set
# CONFIG_SOUND_PAS is not set
-# CONFIG_PAS_JOYSTICK is not set
# CONFIG_SOUND_PSS is not set
-# CONFIG_PSS_HAVE_BOOT is not set
# CONFIG_SOUND_SOFTOSS is not set
# CONFIG_SOUND_SB is not set
+# CONFIG_SOUND_AWE32_SYNTH is not set
# CONFIG_SOUND_WAVEFRONT is not set
# CONFIG_SOUND_MAUI is not set
# CONFIG_SOUND_VIA82CXXX is not set
# CONFIG_SOUND_OPL3SA1 is not set
# CONFIG_SOUND_OPL3SA2 is not set
# CONFIG_SOUND_UART6850 is not set
-
-#
-# Additional low level sound drivers
-#
-# CONFIG_LOWLEVEL_SOUND is not set
+# CONFIG_SOUND_AEDSP16 is not set
#
# Kernel hacking
page = grab_cache_page(mapping, index);
if (!page)
goto fail;
- if (aops->prepare_write(page, offset, offset+size))
+ if (aops->prepare_write(file, page, offset, offset+size))
goto unlock;
kaddr = (char*)page_address(page);
if ((lo->transfer)(lo, WRITE, kaddr+offset, data, size, IV))
s = protocol; e = s+1;
+ if (!protocols[0])
+ request_module ("paride_protocol");
+
if (autoprobe) {
s = 0;
e = MAX_PROTOS;
#include <linux/notifier.h>
#include <linux/reboot.h>
#include <linux/init.h>
+#include <linux/spinlock.h>
static int acq_is_open=0;
+static spinlock_t acq_lock;
/*
* You must set these - there is no sane way to probe for this board.
switch(MINOR(inode->i_rdev))
{
case WATCHDOG_MINOR:
+ spin_lock(&acq_lock);
if(acq_is_open)
+ {
+ spin_unlock(&acq_lock);
return -EBUSY;
+ }
MOD_INC_USE_COUNT;
/*
* Activate
acq_is_open=1;
inb_p(WDT_START);
+ spin_unlock(&acq_lock);
return 0;
default:
return -ENODEV;
{
if(MINOR(inode->i_rdev)==WATCHDOG_MINOR)
{
+ spin_lock(&acq_lock);
#ifndef CONFIG_WATCHDOG_NOWAYOUT
inb_p(WDT_STOP);
#endif
acq_is_open=0;
+ spin_unlock(&acq_lock);
}
MOD_DEC_USE_COUNT;
return 0;
{
printk("WDT driver for Acquire single board computer initialising.\n");
+	spin_lock_init(&acq_lock);
misc_register(&acq_miscdev);
request_region(WDT_STOP, 1, "Acquire WDT");
request_region(WDT_START, 1, "Acquire WDT");
static int open_mouse(struct inode * inode, struct file * file)
{
+ /* Lock module first - request_irq might sleep */
+
+ MOD_INC_USE_COUNT;
+
/*
* use VBL to poll mouse deltas
*/
if(request_irq(IRQ_AMIGA_VERTB, mouse_interrupt, 0,
"Amiga mouse", mouse_interrupt)) {
printk(KERN_INFO "Installing Amiga mouse failed.\n");
+ MOD_DEC_USE_COUNT;
return -EIO;
}
- MOD_INC_USE_COUNT;
#if AMIGA_OLD_INT
AMI_MSE_INT_ON();
#endif
/* and is passed as an argument to acinit, but the bus is scanned to adapt */
/* to the number of cards present on the bus. IOCTL code 6 displayed V2.4.3 */
/* F.LAFORSE 28/11/95: creation of acXX.o files with the different */
-/* base addresses of the cards, more complete IOCTL 6 */
+/* base addresses of the cards, more complete IOCTL 6 */
/* J.PAGET 19/08/96: copy of version V2.6 to V2.8.0 with no modification */
/* of code other than the text V2.6.1 becoming V2.8.0 */
/*****************************************************************************/
#undef DEBUG
#define DEVPRIO PZERO+8
#define FALSE 0
-#define TRUE ~FALSE
-#define MAX_BOARD 8 /* maximum of pc board possible */
+#define TRUE ~FALSE
+#define MAX_BOARD 8	/* maximum number of PC boards possible */
#define MAX_ISA_BOARD 4
#define LEN_RAM_IO 0x800
#define AC_MINOR 157
#ifndef PCI_VENDOR_ID_APPLICOM
-#define PCI_VENDOR_ID_APPLICOM 0x1389
+#define PCI_VENDOR_ID_APPLICOM 0x1389
#define PCI_DEVICE_ID_APPLICOM_PCIGENERIC 0x0001
#define PCI_DEVICE_ID_APPLICOM_PCI2000IBS_CAN 0x0002
#define PCI_DEVICE_ID_APPLICOM_PCI2000PFB 0x0003
#define MAX_PCI_DEVICE_NUM 3
#endif
-static char *applicom_pci_devnames[]={
- "PCI board", "PCI2000IBS / PCI2000CAN", "PCI2000PFB"};
+static char *applicom_pci_devnames[] = {
+ "PCI board", "PCI2000IBS / PCI2000CAN", "PCI2000PFB"
+};
MODULE_AUTHOR("David Woodhouse & Applicom International");
MODULE_DESCRIPTION("Driver for Applicom Profibus card");
MODULE_PARM(irq, "i");
MODULE_PARM_DESC(irq, "IRQ of the Applicom board");
-MODULE_PARM(mem,"i");
+MODULE_PARM(mem, "i");
MODULE_PARM_DESC(mem, "Shared Memory Address of Applicom board");
MODULE_SUPPORTED_DEVICE("ac");
struct applicom_board {
- unsigned long PhysIO;
- unsigned long RamIO;
+ unsigned long PhysIO;
+ unsigned long RamIO;
#if LINUX_VERSION_CODE > 0x20300
- wait_queue_head_t FlagSleepSend;
+ wait_queue_head_t FlagSleepSend;
#else
- struct wait_queue *FlagSleepSend;
+ struct wait_queue *FlagSleepSend;
#endif
- long irq;
+ long irq;
} apbs[MAX_BOARD];
-static unsigned int irq=0; /* interrupt number IRQ */
-static unsigned long mem=0; /* physical segment of board */
+static unsigned int irq = 0; /* interrupt number IRQ */
+static unsigned long mem = 0; /* physical segment of board */
-static unsigned int numboards; /* number of installed boards */
+static unsigned int numboards; /* number of installed boards */
static volatile unsigned char Dummy;
#if LINUX_VERSION_CODE > 0x20300
-static DECLARE_WAIT_QUEUE_HEAD (FlagSleepRec);
+static DECLARE_WAIT_QUEUE_HEAD(FlagSleepRec);
#else
static struct wait_queue *FlagSleepRec;
#endif
-static unsigned int WriteErrorCount; /* number of write error */
-static unsigned int ReadErrorCount; /* number of read error */
-static unsigned int DeviceErrorCount; /* number of device error */
+static unsigned int WriteErrorCount; /* number of write error */
+static unsigned int ReadErrorCount; /* number of read error */
+static unsigned int DeviceErrorCount; /* number of device error */
static loff_t ac_llseek(struct file *file, loff_t offset, int origin);
static int ac_open(struct inode *inode, struct file *filp);
-static ssize_t ac_read (struct file *filp, char *buf, size_t count, loff_t *ptr);
-static ssize_t ac_write (struct file *file, const char *buf, size_t count, loff_t *ppos);
+static ssize_t ac_read(struct file *filp, char *buf, size_t count, loff_t * ptr);
+static ssize_t ac_write(struct file *file, const char *buf, size_t count, loff_t * ppos);
static int ac_ioctl(struct inode *inode, struct file *file, unsigned int cmd, unsigned long arg);
static int ac_release(struct inode *inode, struct file *file);
static void ac_interrupt(int irq, void *dev_instance, struct pt_regs *regs);
-struct file_operations ac_fops={
- llseek: ac_llseek,
- read: ac_read,
- write: ac_write,
- ioctl: ac_ioctl,
- open: ac_open,
- release: ac_release,
+struct file_operations ac_fops = {
+ llseek:ac_llseek,
+ read:ac_read,
+ write:ac_write,
+ ioctl:ac_ioctl,
+ open:ac_open,
+ release:ac_release,
};
-struct miscdevice ac_miscdev={
- AC_MINOR,
- "ac",
- &ac_fops
+struct miscdevice ac_miscdev = {
+ AC_MINOR,
+ "ac",
+ &ac_fops
};
-int ac_register_board(unsigned long physloc, unsigned long loc,
- unsigned char boardno)
+int ac_register_board(unsigned long physloc, unsigned long loc, unsigned char boardno)
{
volatile unsigned char byte_reset_it;
- if((readb(loc + CONF_END_TEST) != 0x00) ||
- (readb(loc + CONF_END_TEST + 1) != 0x55) ||
- (readb(loc + CONF_END_TEST + 2) != 0xAA) ||
- (readb(loc + CONF_END_TEST + 3) != 0xFF))
- return 0;
+ if ((readb(loc + CONF_END_TEST) != 0x00) || (readb(loc + CONF_END_TEST + 1) != 0x55) || (readb(loc + CONF_END_TEST + 2) != 0xAA) || (readb(loc + CONF_END_TEST + 3) != 0xFF))
+ return 0;
if (!boardno)
- boardno = readb(loc + NUMCARD_OWNER_TO_PC);
+ boardno = readb(loc + NUMCARD_OWNER_TO_PC);
- if (!boardno && boardno > MAX_BOARD)
- {
- printk(KERN_WARNING "Board #%d (at 0x%lx) is out of range (1 <= x <= %d).\n",boardno, physloc, MAX_BOARD);
- return 0;
- }
+	if (!boardno || boardno > MAX_BOARD) {
+ printk(KERN_WARNING "Board #%d (at 0x%lx) is out of range (1 <= x <= %d).\n", boardno, physloc, MAX_BOARD);
+ return 0;
+ }
- if (apbs[boardno-1].RamIO)
- {
- printk(KERN_WARNING "Board #%d (at 0x%lx) conflicts with previous board #%d (at 0x%lx)\n",
- boardno, physloc, boardno, apbs[boardno-1].PhysIO);
- return 0;
- }
+ if (apbs[boardno - 1].RamIO) {
+ printk(KERN_WARNING "Board #%d (at 0x%lx) conflicts with previous board #%d (at 0x%lx)\n", boardno, physloc, boardno, apbs[boardno - 1].PhysIO);
+ return 0;
+ }
boardno--;
byte_reset_it = readb(loc + RAM_IT_TO_PC);
numboards++;
- return boardno+1;
+ return boardno + 1;
}
#ifdef MODULE
int i;
misc_deregister(&ac_miscdev);
- for (i=0; i< MAX_BOARD; i++)
- {
- if (!apbs[i].RamIO)
- continue;
- iounmap((void *)apbs[i].RamIO);
- if (apbs[i].irq)
- free_irq(apbs[i].irq,&ac_open);
- }
- // printk("Removing Applicom module\n");
+ for (i = 0; i < MAX_BOARD; i++) {
+ if (!apbs[i].RamIO)
+ continue;
+ iounmap((void *) apbs[i].RamIO);
+ if (apbs[i].irq)
+ free_irq(apbs[i].irq, &ac_open);
+ }
+ // printk("Removing Applicom module\n");
}
-#endif /* MODULE */
+#endif /* MODULE */
int __init applicom_init(void)
{
- int i, numisa=0;
+ int i, numisa = 0;
struct pci_dev *dev = NULL;
void *RamIO;
int boardno;
#endif
printk(KERN_INFO "Applicom driver: $Id: ac.c,v 1.16 1999/08/28 15:11:50 dwmw2 Exp $\n");
-
+
/* No mem and irq given - check for a PCI card */
-
- while ( (dev = pci_find_device(PCI_VENDOR_ID_APPLICOM, 1, dev)))
- {
- // mem = dev->base_address[0];
- // irq = dev->irq;
-
- RamIO = ioremap(PCI_BASE_ADDRESS(dev), LEN_RAM_IO);
-
- if (!RamIO) {
- printk(KERN_INFO "ac.o: Failed to ioremap PCI memory space at 0x%lx\n", PCI_BASE_ADDRESS(dev));
- return -EIO;
- }
-
- printk(KERN_INFO "Applicom %s found at mem 0x%lx, irq %d\n",
- applicom_pci_devnames[dev->device-1], PCI_BASE_ADDRESS(dev),
- dev->irq);
-
- if (!(boardno = ac_register_board(PCI_BASE_ADDRESS(dev),
- (unsigned long)RamIO,0)))
- {
- printk(KERN_INFO "ac.o: PCI Applicom device doesn't have correct signature.\n");
- iounmap(RamIO);
- continue;
- }
-
- if (request_irq(dev->irq, &ac_interrupt, SA_SHIRQ, "Applicom PCI", &ac_open))
- {
- printk(KERN_INFO "Could not allocate IRQ %d for PCI Applicom device.\n", dev->irq);
- iounmap(RamIO);
- apbs[boardno-1].RamIO = 0;
- continue;
- }
-
- /* Enable interrupts. */
-
- writeb(0x40, apbs[boardno-1].RamIO + RAM_IT_FROM_PC);
-
- apbs[boardno-1].irq = dev->irq;
- }
-
+
+ while ((dev = pci_find_device(PCI_VENDOR_ID_APPLICOM, 1, dev))) {
+ // mem = dev->base_address[0];
+ // irq = dev->irq;
+
+ RamIO = ioremap(PCI_BASE_ADDRESS(dev), LEN_RAM_IO);
+
+ if (!RamIO) {
+ printk(KERN_INFO "ac.o: Failed to ioremap PCI memory space at 0x%lx\n", PCI_BASE_ADDRESS(dev));
+ return -EIO;
+ }
+
+ printk(KERN_INFO "Applicom %s found at mem 0x%lx, irq %d\n", applicom_pci_devnames[dev->device - 1], PCI_BASE_ADDRESS(dev), dev->irq);
+
+ if (!(boardno = ac_register_board(PCI_BASE_ADDRESS(dev), (unsigned long) RamIO, 0))) {
+ printk(KERN_INFO "ac.o: PCI Applicom device doesn't have correct signature.\n");
+ iounmap(RamIO);
+ continue;
+ }
+
+ if (request_irq(dev->irq, &ac_interrupt, SA_SHIRQ, "Applicom PCI", &ac_open)) {
+ printk(KERN_INFO "Could not allocate IRQ %d for PCI Applicom device.\n", dev->irq);
+ iounmap(RamIO);
+ apbs[boardno - 1].RamIO = 0;
+ continue;
+ }
+
+ /* Enable interrupts. */
+
+ writeb(0x40, apbs[boardno - 1].RamIO + RAM_IT_FROM_PC);
+
+ apbs[boardno - 1].irq = dev->irq;
+ }
+
/* Finished with PCI cards. If none registered,
* and there was no mem/irq specified, exit */
- if (!mem || !irq)
- {
- if (numboards)
- goto fin;
- else
- {
- printk(KERN_INFO "ac.o: No PCI boards found.\n");
- printk(KERN_INFO "ac.o: For an ISA board you must supply memory and irq parameters.\n");
- return -ENXIO;
- }
- }
-
+ if (!mem || !irq) {
+ if (numboards)
+ goto fin;
+ else {
+ printk(KERN_INFO "ac.o: No PCI boards found.\n");
+ printk(KERN_INFO "ac.o: For an ISA board you must supply memory and irq parameters.\n");
+ return -ENXIO;
+ }
+ }
+
/* Now try the specified ISA cards */
- RamIO = ioremap(mem, LEN_RAM_IO * MAX_ISA_BOARD);
+ RamIO = ioremap(mem, LEN_RAM_IO * MAX_ISA_BOARD);
if (!RamIO) {
- printk(KERN_INFO "ac.o: Failed to ioremap ISA memory space at 0x%lx\n",mem);
+ printk(KERN_INFO "ac.o: Failed to ioremap ISA memory space at 0x%lx\n", mem);
+ }
+
+ for (i = 0; i < MAX_ISA_BOARD; i++) {
+ RamIO = ioremap(mem + (LEN_RAM_IO * i), LEN_RAM_IO);
+
+ if (!RamIO) {
+ printk(KERN_INFO "ac.o: Failed to ioremap the ISA card's memory space (slot #%d)\n", i + 1);
+ continue;
+ }
+
+ if (!(boardno = ac_register_board((unsigned long) mem + (LEN_RAM_IO * i), (unsigned long) RamIO, i + 1))) {
+ iounmap(RamIO);
+ continue;
+ }
+
+ printk("Applicom ISA card found at mem 0x%lx, irq %d\n", mem + (LEN_RAM_IO * i), irq);
+
+ if (!numisa) {
+ if (request_irq(irq, &ac_interrupt, SA_SHIRQ, "Applicom ISA", &ac_open)) {
+ printk("Could not allocate IRQ %d for ISA Applicom device.\n", irq);
+ iounmap((void *) RamIO);
+ apbs[boardno - 1].RamIO = 0;
+ }
+ apbs[boardno - 1].irq = irq;
+ } else
+ apbs[boardno - 1].irq = 0;
+
+ numisa++;
}
-
- for (i=0; i< MAX_ISA_BOARD; i++)
- {
- RamIO = ioremap(mem+ (LEN_RAM_IO*i), LEN_RAM_IO);
-
- if (!RamIO) {
- printk(KERN_INFO "ac.o: Failed to ioremap the ISA card's memory space (slot #%d)\n",i+1);
- continue;
- }
-
- if (!(boardno = ac_register_board((unsigned long)mem+ (LEN_RAM_IO*i),
- (unsigned long)RamIO,i+1))) {
- iounmap(RamIO);
- continue;
- }
-
- printk("Applicom ISA card found at mem 0x%lx, irq %d\n", mem + (LEN_RAM_IO*i), irq);
-
- if (!numisa)
- {
- if (request_irq(irq, &ac_interrupt, SA_SHIRQ, "Applicom ISA", &ac_open))
- {
- printk("Could not allocate IRQ %d for ISA Applicom device.\n", irq);
- iounmap((void *)RamIO);
- apbs[boardno-1].RamIO = 0;
- }
- apbs[boardno-1].irq=irq;
- }
- else
- apbs[boardno-1].irq=0;
-
- numisa++;
- }
if (!numisa)
- printk("ac.o: No valid ISA Applicom boards found at mem 0x%lx\n",mem);
+ printk("ac.o: No valid ISA Applicom boards found at mem 0x%lx\n", mem);
#if LINUX_VERSION_CODE > 0x20300
init_waitqueue_head(&FlagSleepRec);
#else
- FlagSleepRec = NULL;
+ FlagSleepRec = NULL;
#endif
- WriteErrorCount = 0;
- ReadErrorCount = 0;
+ WriteErrorCount = 0;
+ ReadErrorCount = 0;
DeviceErrorCount = 0;
-fin:
- if (numboards)
- {
- misc_register (&ac_miscdev);
- for (i=0; i<MAX_BOARD; i++)
- {
- int serial;
- char boardname[(SERIAL_NUMBER - TYPE_CARD) + 1];
+ fin:
+ if (numboards) {
+ misc_register(&ac_miscdev);
+ for (i = 0; i < MAX_BOARD; i++) {
+ int serial;
+ char boardname[(SERIAL_NUMBER - TYPE_CARD) + 1];
- if (!apbs[i].RamIO)
- continue;
-
- for(serial = 0; serial < SERIAL_NUMBER - TYPE_CARD; serial++)
- boardname[serial] = readb(apbs[i].RamIO + TYPE_CARD + serial);
- boardname[serial]=0;
-
+ if (!apbs[i].RamIO)
+ continue;
- printk("Applicom board %d: %s, PROM V%d.%d",
- i+1, boardname,
- (int)(readb(apbs[i].RamIO + VERS) >> 4),
- (int)(readb(apbs[i].RamIO + VERS) & 0xF));
+ for (serial = 0; serial < SERIAL_NUMBER - TYPE_CARD; serial++)
+ boardname[serial] = readb(apbs[i].RamIO + TYPE_CARD + serial);
+ boardname[serial] = 0;
+
+
+ printk(KERN_INFO "Applicom board %d: %s, PROM V%d.%d", i + 1, boardname, (int) (readb(apbs[i].RamIO + VERS) >> 4), (int) (readb(apbs[i].RamIO + VERS) & 0xF));
+
+ serial = (readb(apbs[i].RamIO + SERIAL_NUMBER) << 16) + (readb(apbs[i].RamIO + SERIAL_NUMBER + 1) << 8) + (readb(apbs[i].RamIO + SERIAL_NUMBER + 2));
+
+ if (serial != 0)
+ printk(" S/N %d\n", serial);
+ else
+ printk("\n");
+ }
+ return 0;
+ }
- serial = (readb(apbs[i].RamIO + SERIAL_NUMBER) << 16) +
- (readb(apbs[i].RamIO + SERIAL_NUMBER + 1) << 8) +
- (readb(apbs[i].RamIO + SERIAL_NUMBER + 2) );
-
- if (serial != 0)
- printk (" S/N %d\n", serial);
- else
- printk("\n");
- }
- return 0;
- }
-
else
- return -ENXIO;
+ return -ENXIO;
}
#ifndef MODULE
-__initcall (applicom_init);
+__initcall(applicom_init);
#endif
static loff_t ac_llseek(struct file *file, loff_t offset, int origin)
}
-static ssize_t ac_write (struct file *file, const char *buf, size_t count, loff_t *ppos)
+static ssize_t ac_write(struct file *file, const char *buf, size_t count, loff_t * ppos)
{
- unsigned int NumCard; /* Board number 1 -> 8 */
- unsigned int IndexCard; /* Index board number 0 -> 7 */
- unsigned char TicCard; /* Board TIC to send */
- unsigned long flags; /* Current priority */
- struct st_ram_io st_loc;
- struct mailbox tmpmailbox;
-
- if (count != sizeof(struct st_ram_io) + sizeof(struct mailbox)) {
- printk("Hmmm. write() of Applicom card, length %d != expected %d\n",count,sizeof(struct st_ram_io) + sizeof(struct mailbox));
- return -EINVAL;
- }
-
- if(copy_from_user (&st_loc, buf, sizeof(struct st_ram_io))) {
- return -EFAULT;
- }
- if(copy_from_user (&tmpmailbox, &buf[sizeof(struct st_ram_io)], sizeof(struct mailbox))) {
- return -EFAULT;
- }
-
- NumCard = st_loc.num_card; /* board number to send */
- TicCard = st_loc.tic_des_from_pc; /* tic number to send */
- IndexCard = NumCard -1;
- if((NumCard < 1) || (NumCard > MAX_BOARD) || !apbs[IndexCard].RamIO)
- { /* User board number not OK */
- // printk("Write to invalid Applicom board %d\n", NumCard);
- return -EINVAL; /* Return error code user buffer */
- }
-
+ unsigned int NumCard; /* Board number 1 -> 8 */
+ unsigned int IndexCard; /* Index board number 0 -> 7 */
+ unsigned char TicCard; /* Board TIC to send */
+	unsigned long flags;	/* saved interrupt state */
+ struct st_ram_io st_loc;
+ struct mailbox tmpmailbox;
+
+ if (count != sizeof(struct st_ram_io) + sizeof(struct mailbox)) {
+ printk("Hmmm. write() of Applicom card, length %d != expected %d\n", count, sizeof(struct st_ram_io) + sizeof(struct mailbox));
+ return -EINVAL;
+ }
+
+ if (copy_from_user(&st_loc, buf, sizeof(struct st_ram_io))) {
+ return -EFAULT;
+ }
+ if (copy_from_user(&tmpmailbox, &buf[sizeof(struct st_ram_io)], sizeof(struct mailbox))) {
+ return -EFAULT;
+ }
+
+ NumCard = st_loc.num_card; /* board number to send */
+ TicCard = st_loc.tic_des_from_pc; /* tic number to send */
+ IndexCard = NumCard - 1;
+ if ((NumCard < 1) || (NumCard > MAX_BOARD) || !apbs[IndexCard].RamIO) { /* User board number not OK */
+ // printk("Write to invalid Applicom board %d\n", NumCard);
+ return -EINVAL; /* Return error code user buffer */
+ }
#ifdef DEBUG
- {
- int c;
-
- printk("Write to applicom card #%d. struct st_ram_io follows:",NumCard);
-
-
-
- for (c=0; c< sizeof(struct st_ram_io);)
- {
- printk("\n%5.5X: %2.2X",c,((unsigned char *)&st_loc)[c]);
-
- for (c++ ; c%8 && c<sizeof(struct st_ram_io); c++)
- {
- printk(" %2.2X",((unsigned char *)&st_loc)[c]);
- }
- }
+ {
+ int c;
+
+ printk("Write to applicom card #%d. struct st_ram_io follows:", NumCard);
+
+
+
+ for (c = 0; c < sizeof(struct st_ram_io);) {
+ printk("\n%5.5X: %2.2X", c, ((unsigned char *) &st_loc)[c]);
+
+ for (c++; c % 8 && c < sizeof(struct st_ram_io); c++) {
+ printk(" %2.2X", ((unsigned char *) &st_loc)[c]);
+ }
+ }
+
+ printk("\nstruct mailbox follows:");
+
+ for (c = 0; c < sizeof(struct mailbox);) {
+ printk("\n%5.5X: %2.2X", c, ((unsigned char *) &tmpmailbox)[c]);
+
+ for (c++; c % 8 && c < sizeof(struct mailbox); c++) {
+ printk(" %2.2X", ((unsigned char *) &tmpmailbox)[c]);
+ }
+ }
+
+ printk("\n");
+ }
- printk("\nstruct mailbox follows:");
-
- for (c=0; c< sizeof(struct mailbox);)
- {
- printk("\n%5.5X: %2.2X",c,((unsigned char *)&tmpmailbox)[c]);
-
- for (c++ ; c%8 && c<sizeof(struct mailbox); c++)
- {
- printk(" %2.2X",((unsigned char *)&tmpmailbox)[c]);
- }
- }
-
- printk("\n");
- }
-
#endif
- save_flags (flags);
- cli(); /* disable interrupt */
-
- if(readb(apbs[IndexCard].RamIO + DATA_FROM_PC_READY) > 2) /* Test octet ready correct */
- {
- Dummy = readb(apbs[IndexCard].RamIO + VERS);
- restore_flags(flags);
- printk("APPLICOM driver write error board %d, DataFromPcReady = %d\n",
- IndexCard,(int)readb(apbs[IndexCard].RamIO + DATA_FROM_PC_READY));
- DeviceErrorCount++;
- return -EIO;
- }
- while (readb(apbs[IndexCard].RamIO + DATA_FROM_PC_READY) != 0)
- {
- Dummy = readb(apbs[IndexCard].RamIO + VERS);
- restore_flags(flags);
- interruptible_sleep_on (&apbs[IndexCard].FlagSleepSend);
- if (signal_pending(current))
- return -EINTR;
- save_flags(flags);
- cli();
- }
- writeb(1, apbs[IndexCard].RamIO + DATA_FROM_PC_READY);
-
- // memcpy_toio ((void *)apbs[IndexCard].PtrRamFromPc, (void *)&tmpmailbox, sizeof(struct mailbox));
- {
- unsigned char *from = (unsigned char *)&tmpmailbox;
- unsigned long to = (unsigned long)apbs[IndexCard].RamIO + RAM_FROM_PC;
- int c;
-
- for (c=0; c<sizeof(struct mailbox) ; c++)
- writeb(*(from++), to++);
- }
- writeb(0x20, apbs[IndexCard].RamIO + TIC_OWNER_FROM_PC);
- writeb(0xff, apbs[IndexCard].RamIO + NUMCARD_OWNER_FROM_PC);
- writeb(TicCard, apbs[IndexCard].RamIO + TIC_DES_FROM_PC);
- writeb(NumCard, apbs[IndexCard].RamIO + NUMCARD_DES_FROM_PC);
- writeb(2, apbs[IndexCard].RamIO + DATA_FROM_PC_READY);
- writeb(1, apbs[IndexCard].RamIO + RAM_IT_FROM_PC);
- Dummy = readb(apbs[IndexCard].RamIO + VERS);
- restore_flags (flags);
- return 0;
+ save_flags(flags);
+ cli(); /* disable interrupt */
+
+ if (readb(apbs[IndexCard].RamIO + DATA_FROM_PC_READY) > 2) { /* Test octet ready correct */
+ Dummy = readb(apbs[IndexCard].RamIO + VERS);
+ restore_flags(flags);
+ printk("APPLICOM driver write error board %d, DataFromPcReady = %d\n", IndexCard, (int) readb(apbs[IndexCard].RamIO + DATA_FROM_PC_READY));
+ DeviceErrorCount++;
+ return -EIO;
+ }
+ while (readb(apbs[IndexCard].RamIO + DATA_FROM_PC_READY) != 0) {
+ Dummy = readb(apbs[IndexCard].RamIO + VERS);
+ restore_flags(flags);
+ /*
+ * FIXME: Race on wakeup. Race on re-entering write
+ * in another thread.
+ */
+ interruptible_sleep_on(&apbs[IndexCard].FlagSleepSend);
+ if (signal_pending(current))
+ return -EINTR;
+ save_flags(flags);
+ cli();
+ }
+ writeb(1, apbs[IndexCard].RamIO + DATA_FROM_PC_READY);
+
+ // memcpy_toio ((void *)apbs[IndexCard].PtrRamFromPc, (void *)&tmpmailbox, sizeof(struct mailbox));
+ {
+ unsigned char *from = (unsigned char *) &tmpmailbox;
+ unsigned long to = (unsigned long) apbs[IndexCard].RamIO + RAM_FROM_PC;
+ int c;
+
+ for (c = 0; c < sizeof(struct mailbox); c++)
+ writeb(*(from++), to++);
+ }
+ writeb(0x20, apbs[IndexCard].RamIO + TIC_OWNER_FROM_PC);
+ writeb(0xff, apbs[IndexCard].RamIO + NUMCARD_OWNER_FROM_PC);
+ writeb(TicCard, apbs[IndexCard].RamIO + TIC_DES_FROM_PC);
+ writeb(NumCard, apbs[IndexCard].RamIO + NUMCARD_DES_FROM_PC);
+ writeb(2, apbs[IndexCard].RamIO + DATA_FROM_PC_READY);
+ writeb(1, apbs[IndexCard].RamIO + RAM_IT_FROM_PC);
+ Dummy = readb(apbs[IndexCard].RamIO + VERS);
+ restore_flags(flags);
+ return 0;
}
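The write path above copies the mailbox into card RAM one byte at a time with writeb() rather than memcpy_toio(). A minimal userspace sketch of that loop, with hypothetical writeb()/readb() stand-ins modelling the MMIO accessors as plain byte stores:

```c
#include <assert.h>
#include <stddef.h>

struct mailbox { unsigned char data[16]; };

/* Hypothetical stand-ins for the kernel's MMIO accessors. */
static void writeb(unsigned char v, unsigned char *addr) { *addr = v; }
static unsigned char readb(const unsigned char *addr) { return *addr; }

/* Copy a mailbox into "card RAM" one byte at a time, as the driver does. */
static void copy_to_card(unsigned char *ram, const struct mailbox *mb)
{
	const unsigned char *from = (const unsigned char *) mb;
	unsigned char *to = ram;
	size_t c;

	for (c = 0; c < sizeof(struct mailbox); c++)
		writeb(*(from++), to++);
}
```

The byte-at-a-time loop guarantees single-byte bus cycles to the board, which memcpy_toio() does not promise on all platforms.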
-static ssize_t ac_read (struct file *filp, char *buf, size_t count, loff_t *ptr)
+static ssize_t ac_read(struct file *filp, char *buf, size_t count, loff_t *ptr)
{
- unsigned int NumCard; /* board number 1 -> 8 */
- unsigned int IndexCard; /* index board number 0 -> 7 */
- unsigned long flags;
- unsigned int i;
- unsigned char tmp=0;
- struct st_ram_io st_loc;
- struct mailbox tmpmailbox; /* bounce buffer - can't copy to user space with cli() */
-
-
- if (count != sizeof(struct st_ram_io) + sizeof(struct mailbox)) {
- printk("Hmmm. read() of Applicom card, length %d != expected %d\n",count,sizeof(struct st_ram_io) + sizeof(struct mailbox));
- return -EINVAL;
- }
-
- save_flags(flags);
- cli();
-
- i = 0;
-
- while (tmp != 2)
- {
- for (i=0; i < MAX_BOARD; i++)
- {
- if (!apbs[i].RamIO)
- continue;
-
- tmp = readb(apbs[i].RamIO + DATA_TO_PC_READY);
-
- if (tmp == 2)
- break;
-
- if (tmp > 2) /* Test octet ready correct */
- {
- Dummy = readb(apbs[i].RamIO + VERS);
- restore_flags(flags);
- printk("APPLICOM driver read error board %d, DataToPcReady = %d\n",
- i,(int)readb(apbs[i].RamIO + DATA_TO_PC_READY));
- DeviceErrorCount++;
- return -EIO;
- }
- Dummy = readb(apbs[i].RamIO + VERS);
+ unsigned int NumCard; /* board number 1 -> 8 */
+ unsigned int IndexCard; /* index board number 0 -> 7 */
+ unsigned long flags;
+ unsigned int i;
+ unsigned char tmp = 0;
+ struct st_ram_io st_loc;
+ struct mailbox tmpmailbox; /* bounce buffer - can't copy to user space with cli() */
+
+
+ if (count != sizeof(struct st_ram_io) + sizeof(struct mailbox)) {
+ printk("Hmmm. read() of Applicom card, length %d != expected %d\n", count, sizeof(struct st_ram_io) + sizeof(struct mailbox));
+ return -EINVAL;
+ }
+ save_flags(flags);
+ cli();
+
+ i = 0;
+
+ while (tmp != 2) {
+ for (i = 0; i < MAX_BOARD; i++) {
+ if (!apbs[i].RamIO)
+ continue;
+
+ tmp = readb(apbs[i].RamIO + DATA_TO_PC_READY);
+
+ if (tmp == 2)
+ break;
+
+ if (tmp > 2) { /* Test octet ready correct */
+ Dummy = readb(apbs[i].RamIO + VERS);
+ restore_flags(flags);
+ printk(KERN_WARNING "APPLICOM driver read error board %d, DataToPcReady = %d\n", i, (int) readb(apbs[i].RamIO + DATA_TO_PC_READY));
+ DeviceErrorCount++;
+ return -EIO;
+ }
+ Dummy = readb(apbs[i].RamIO + VERS);
+
+ }
+ if (tmp != 2) {
+ /*
+ * FIXME: race on wakeup. O_NDELAY not implemented
+ * Parallel read threads race.
+ */
+ restore_flags(flags);
+ interruptible_sleep_on(&FlagSleepRec);
+ if (signal_pending(current))
+ return -EINTR;
+ save_flags(flags);
+ cli();
+ }
}
- if (tmp != 2)
+
+ IndexCard = i;
+ NumCard = i + 1;
+ st_loc.tic_owner_to_pc = readb(apbs[IndexCard].RamIO + TIC_OWNER_TO_PC);
+ st_loc.numcard_owner_to_pc = readb(apbs[IndexCard].RamIO + NUMCARD_OWNER_TO_PC);
+
+
+ // memcpy_fromio(&tmpmailbox, apbs[IndexCard].PtrRamToPc, sizeof(struct mailbox));
{
- restore_flags(flags);
- interruptible_sleep_on (&FlagSleepRec);
- if (signal_pending(current))
- return -EINTR;
- save_flags(flags);
- cli();
+ unsigned long from = (unsigned long) apbs[IndexCard].RamIO + RAM_TO_PC;
+ unsigned char *to = (unsigned char *) &tmpmailbox;
+ int c;
+
+ for (c = 0; c < sizeof(struct mailbox); c++)
+ *(to++) = readb(from++);
}
- }
-
- IndexCard = i;
- NumCard = i+1;
- st_loc.tic_owner_to_pc = readb(apbs[IndexCard].RamIO + TIC_OWNER_TO_PC);
- st_loc.numcard_owner_to_pc = readb(apbs[IndexCard].RamIO + NUMCARD_OWNER_TO_PC);
-
-
- // memcpy_fromio(&tmpmailbox, apbs[IndexCard].PtrRamToPc, sizeof(struct mailbox));
- {
- unsigned long from = (unsigned long)apbs[IndexCard].RamIO + RAM_TO_PC;
- unsigned char *to = (unsigned char *)&tmpmailbox;
- int c;
-
- for (c=0; c<sizeof(struct mailbox) ; c++)
- *(to++) = readb(from++);
- }
- writeb(1,apbs[IndexCard].RamIO + ACK_FROM_PC_READY);
- writeb(1,apbs[IndexCard].RamIO + TYP_ACK_FROM_PC);
- writeb(NumCard, apbs[IndexCard].RamIO + NUMCARD_ACK_FROM_PC);
- writeb(readb(apbs[IndexCard].RamIO + TIC_OWNER_TO_PC),
- apbs[IndexCard].RamIO + TIC_ACK_FROM_PC);
- writeb(2, apbs[IndexCard].RamIO + ACK_FROM_PC_READY);
- writeb(0, apbs[IndexCard].RamIO + DATA_TO_PC_READY);
- writeb(2, apbs[IndexCard].RamIO + RAM_IT_FROM_PC);
- Dummy = readb(apbs[IndexCard].RamIO + VERS);
- restore_flags(flags);
+ writeb(1, apbs[IndexCard].RamIO + ACK_FROM_PC_READY);
+ writeb(1, apbs[IndexCard].RamIO + TYP_ACK_FROM_PC);
+ writeb(NumCard, apbs[IndexCard].RamIO + NUMCARD_ACK_FROM_PC);
+ writeb(readb(apbs[IndexCard].RamIO + TIC_OWNER_TO_PC), apbs[IndexCard].RamIO + TIC_ACK_FROM_PC);
+ writeb(2, apbs[IndexCard].RamIO + ACK_FROM_PC_READY);
+ writeb(0, apbs[IndexCard].RamIO + DATA_TO_PC_READY);
+ writeb(2, apbs[IndexCard].RamIO + RAM_IT_FROM_PC);
+ Dummy = readb(apbs[IndexCard].RamIO + VERS);
+ restore_flags(flags);
#ifdef DEBUG
- { int c;
+ {
+ int c;
- printk("Read from applicom card #%d. struct st_ram_io follows:",NumCard);
-
- for (c=0; c< sizeof(struct st_ram_io);)
- {
- printk("\n%5.5X: %2.2X",c,((unsigned char *)&st_loc)[c]);
-
- for (c++ ; c%8 && c<sizeof(struct st_ram_io); c++)
- {
- printk(" %2.2X",((unsigned char *)&st_loc)[c]);
- }
- }
+ printk("Read from applicom card #%d. struct st_ram_io follows:", NumCard);
- printk("\nstruct mailbox follows:");
-
- for (c=0; c< sizeof(struct mailbox);)
- {
- printk("\n%5.5X: %2.2X",c,((unsigned char *)&tmpmailbox)[c]);
-
- for (c++ ; c%8 && c<sizeof(struct mailbox); c++)
- {
- printk(" %2.2X",((unsigned char *)&tmpmailbox)[c]);
- }
- }
- printk("\n");
-
- }
+ for (c = 0; c < sizeof(struct st_ram_io);) {
+ printk("\n%5.5X: %2.2X", c, ((unsigned char *) &st_loc)[c]);
+
+ for (c++; c % 8 && c < sizeof(struct st_ram_io); c++) {
+ printk(" %2.2X", ((unsigned char *) &st_loc)[c]);
+ }
+ }
+
+ printk("\nstruct mailbox follows:");
+
+ for (c = 0; c < sizeof(struct mailbox);) {
+ printk("\n%5.5X: %2.2X", c, ((unsigned char *) &tmpmailbox)[c]);
+
+ for (c++; c % 8 && c < sizeof(struct mailbox); c++) {
+ printk(" %2.2X", ((unsigned char *) &tmpmailbox)[c]);
+ }
+ }
+ printk("\n");
+
+ }
#endif
-
- /* Je suis stupide. DW. */
- if(copy_to_user (buf, &st_loc, sizeof(struct st_ram_io)))
- return -EFAULT;
- if(copy_to_user (&buf[sizeof(struct st_ram_io)], &tmpmailbox, sizeof(struct mailbox)))
- return -EFAULT;
+	/* I am stupid. DW. */
- return 0;
+ if (copy_to_user(buf, &st_loc, sizeof(struct st_ram_io)))
+ return -EFAULT;
+ if (copy_to_user(&buf[sizeof(struct st_ram_io)], &tmpmailbox, sizeof(struct mailbox)))
+ return -EFAULT;
+
+ return 0;
}
static void ac_interrupt(int vec, void *dev_instance, struct pt_regs *regs)
{
- unsigned int i;
- unsigned int FlagInt;
- unsigned int LoopCount;
- // volatile unsigned char ResetIntBoard;
-
- // printk("Applicom interrupt on IRQ %d occurred\n", vec);
-
- LoopCount = 0;
- // for(i=boardno;i<MAX_BOARD;i++) /* loop for not configured board */
- // if (apbs[i].RamIO)
- // ResetIntBoard = *apbs[i].PtrRamItToPc; /* reset interrupt of unused boards */
-
- do
- {
- FlagInt = FALSE;
- for(i=0;i<MAX_BOARD;i++)
- {
- if (!apbs[i].RamIO)
- continue;
-
- if(readb(apbs[i].RamIO + RAM_IT_TO_PC) != 0)
- FlagInt = TRUE;
- writeb(0, apbs[i].RamIO + RAM_IT_TO_PC);
-
- if(readb(apbs[i].RamIO + DATA_TO_PC_READY) > 2)
- {
- printk("APPLICOM driver interrupt err board %d, DataToPcReady = %d\n",
- i+1,(int)readb(apbs[i].RamIO + DATA_TO_PC_READY));
- DeviceErrorCount++;
- }
- if((readb(apbs[i].RamIO + DATA_FROM_PC_READY) > 2) &&
- (readb(apbs[i].RamIO + DATA_FROM_PC_READY) != 6))
- {
- printk("APPLICOM driver interrupt err board %d, DataFromPcReady = %d\n",
- i+1,(int)readb(apbs[i].RamIO + DATA_FROM_PC_READY));
- DeviceErrorCount++;
- }
- if(readb(apbs[i].RamIO + DATA_TO_PC_READY) == 2) /* mailbox sent by the card ? */
- {
- wake_up_interruptible(&FlagSleepRec);
- }
- if(readb(apbs[i].RamIO + DATA_FROM_PC_READY) == 0) /* ram i/o free for write by pc ? */
- {
- if(waitqueue_active(&apbs[i].FlagSleepSend)) /* process sleep during read ? */
- {
- wake_up_interruptible(&apbs[i].FlagSleepSend);
- }
- }
- Dummy = readb(apbs[i].RamIO + VERS);
-
- if(readb(apbs[i].RamIO + RAM_IT_TO_PC))
- i--; /* There's another int waiting on this card */
- }
- if(FlagInt) LoopCount = 0;
- else LoopCount++;
- }
- while(LoopCount < 2);
+ unsigned int i;
+ unsigned int FlagInt;
+ unsigned int LoopCount;
+ // volatile unsigned char ResetIntBoard;
+
+ // printk("Applicom interrupt on IRQ %d occurred\n", vec);
+
+ LoopCount = 0;
+ // for(i=boardno;i<MAX_BOARD;i++) /* loop for not configured board */
+ // if (apbs[i].RamIO)
+ // ResetIntBoard = *apbs[i].PtrRamItToPc; /* reset interrupt of unused boards */
+
+ do {
+ FlagInt = FALSE;
+ for (i = 0; i < MAX_BOARD; i++) {
+ if (!apbs[i].RamIO)
+ continue;
+
+ if (readb(apbs[i].RamIO + RAM_IT_TO_PC) != 0)
+ FlagInt = TRUE;
+ writeb(0, apbs[i].RamIO + RAM_IT_TO_PC);
+
+ if (readb(apbs[i].RamIO + DATA_TO_PC_READY) > 2) {
+ printk(KERN_WARNING "APPLICOM driver interrupt err board %d, DataToPcReady = %d\n", i + 1, (int) readb(apbs[i].RamIO + DATA_TO_PC_READY));
+ DeviceErrorCount++;
+ }
+ if ((readb(apbs[i].RamIO + DATA_FROM_PC_READY) > 2) && (readb(apbs[i].RamIO + DATA_FROM_PC_READY) != 6)) {
+			printk(KERN_WARNING "APPLICOM driver interrupt err board %d, DataFromPcReady = %d\n", i + 1, (int) readb(apbs[i].RamIO + DATA_FROM_PC_READY));
+ DeviceErrorCount++;
+ }
+ if (readb(apbs[i].RamIO + DATA_TO_PC_READY) == 2) { /* mailbox sent by the card ? */
+ wake_up_interruptible(&FlagSleepRec);
+ }
+ if (readb(apbs[i].RamIO + DATA_FROM_PC_READY) == 0) { /* ram i/o free for write by pc ? */
+			if (waitqueue_active(&apbs[i].FlagSleepSend)) {	/* process sleeping in write ? */
+ wake_up_interruptible(&apbs[i].FlagSleepSend);
+ }
+ }
+ Dummy = readb(apbs[i].RamIO + VERS);
+
+ if (readb(apbs[i].RamIO + RAM_IT_TO_PC))
+ i--; /* There's another int waiting on this card */
+ }
+ if (FlagInt)
+ LoopCount = 0;
+ else
+ LoopCount++;
+ }
+ while (LoopCount < 2);
}
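The interrupt handler above keeps sweeping the boards until two consecutive passes find no RAM_IT_TO_PC flag raised, so an interrupt that arrives mid-sweep is not lost. A userspace model of that drain loop, with the per-board flags reduced to a plain bool array for illustration:

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_BOARD 8

/* Model of the ISR's drain loop: keep sweeping the boards, acking
 * any raised flag, and stop only after two consecutive sweeps see
 * nothing raised.  Returns the number of sweeps performed. */
static int drain_interrupts(bool it[MAX_BOARD])
{
	int loops = 0, sweeps = 0;

	do {
		bool flag = false;
		int i;

		for (i = 0; i < MAX_BOARD; i++) {
			if (it[i]) {
				flag = true;
				it[i] = false;	/* ack the board */
			}
		}
		if (flag)
			loops = 0;	/* something arrived: start counting again */
		else
			loops++;
		sweeps++;
	} while (loops < 2);
	return sweeps;
}
```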
static int ac_ioctl(struct inode *inode, struct file *file, unsigned int cmd, unsigned long arg)
+{	/* @ ADG or ATO depending on the case */
+ int i;
+ unsigned char IndexCard;
+ unsigned long pmem;
+ volatile unsigned char byte_reset_it;
+ struct st_ram_io adgl;
+ unsigned char TmpRamIo[sizeof(struct st_ram_io)];
-{ /* @ ADG ou ATO selon le cas */
- int i;
- unsigned char IndexCard;
- unsigned long pmem ;
- volatile unsigned char byte_reset_it;
- struct st_ram_io adgl ;
- unsigned char TmpRamIo[sizeof(struct st_ram_io)];
-
-
- if (copy_from_user (&adgl, (void *)arg,sizeof(struct st_ram_io)))
- return -EFAULT;
-
- IndexCard = adgl.num_card-1;
- if(cmd != 0 && cmd != 6 &&
- ((IndexCard >= MAX_BOARD) || !apbs[IndexCard].RamIO))
- {
- printk("APPLICOM driver IOCTL, bad board number %d\n",(int)IndexCard+1);
- printk("apbs[%d].RamIO = %lx\n",IndexCard, apbs[IndexCard].RamIO);
- return -EINVAL;
- }
-
- switch( cmd )
- {
- case 0 :
- pmem = apbs[IndexCard].RamIO;
- for(i=0;i<sizeof(struct st_ram_io);i++)TmpRamIo[i]=readb(pmem++);
- if (copy_to_user((void *)arg, TmpRamIo, sizeof(struct st_ram_io)))
- return -EFAULT;
- break;
- case 1 :
- pmem = apbs[IndexCard].RamIO + CONF_END_TEST;
- for (i=0;i<4;i++)
- adgl.conf_end_test[i] = readb(pmem++);
- for (i=0;i<2;i++)
- adgl.error_code[i] = readb(pmem++);
- for (i=0;i<4;i++)
- adgl.parameter_error[i] = readb(pmem++);
- pmem = apbs[IndexCard].RamIO + VERS;
- adgl.vers = readb(pmem);
- pmem = apbs[IndexCard].RamIO + TYPE_CARD;
- for (i=0;i<20;i++)
- adgl.reserv1[i] = readb(pmem++);
- *(int *)&adgl.reserv1[20] =
- (readb(apbs[IndexCard].RamIO + SERIAL_NUMBER) << 16) +
- (readb(apbs[IndexCard].RamIO + SERIAL_NUMBER + 1) << 8) +
- (readb(apbs[IndexCard].RamIO + SERIAL_NUMBER + 2) );
-
- if (copy_to_user ((void *)arg, &adgl, sizeof(struct st_ram_io)))
- return -EFAULT;
- break;
- case 2 :
- pmem = apbs[IndexCard].RamIO + CONF_END_TEST;
- for (i=0;i<10;i++)
- writeb(0xff, pmem++);
- writeb(adgl.data_from_pc_ready,
- apbs[IndexCard].RamIO + DATA_FROM_PC_READY);
-
- writeb(1, apbs[IndexCard].RamIO + RAM_IT_FROM_PC);
+
+ if (copy_from_user(&adgl, (void *) arg, sizeof(struct st_ram_io)))
+ return -EFAULT;
+
+ IndexCard = adgl.num_card - 1;
+
+ /*
+ * FIXME: user can flood the console using bogus ioctls
+ */
+
+ if (cmd != 0 && cmd != 6 && ((IndexCard >= MAX_BOARD) || !apbs[IndexCard].RamIO)) {
+ printk("APPLICOM driver IOCTL, bad board number %d\n", (int) IndexCard + 1);
+ printk("apbs[%d].RamIO = %lx\n", IndexCard, apbs[IndexCard].RamIO);
+ return -EINVAL;
+ }
+
+ /*
+ * FIXME races between ioctls with multiple clients
+ */
+
+ switch (cmd) {
+ case 0:
+ pmem = apbs[IndexCard].RamIO;
+ for (i = 0; i < sizeof(struct st_ram_io); i++)
+ TmpRamIo[i] = readb(pmem++);
+ if (copy_to_user((void *) arg, TmpRamIo, sizeof(struct st_ram_io)))
+ return -EFAULT;
+ break;
+ case 1:
+ pmem = apbs[IndexCard].RamIO + CONF_END_TEST;
+ for (i = 0; i < 4; i++)
+ adgl.conf_end_test[i] = readb(pmem++);
+ for (i = 0; i < 2; i++)
+ adgl.error_code[i] = readb(pmem++);
+ for (i = 0; i < 4; i++)
+ adgl.parameter_error[i] = readb(pmem++);
+ pmem = apbs[IndexCard].RamIO + VERS;
+ adgl.vers = readb(pmem);
+ pmem = apbs[IndexCard].RamIO + TYPE_CARD;
+ for (i = 0; i < 20; i++)
+ adgl.reserv1[i] = readb(pmem++);
+ *(int *) &adgl.reserv1[20] = (readb(apbs[IndexCard].RamIO + SERIAL_NUMBER) << 16) + (readb(apbs[IndexCard].RamIO + SERIAL_NUMBER + 1) << 8) + (readb(apbs[IndexCard].RamIO + SERIAL_NUMBER + 2));
+
+ if (copy_to_user((void *) arg, &adgl, sizeof(struct st_ram_io)))
+ return -EFAULT;
+ break;
+ case 2:
+ pmem = apbs[IndexCard].RamIO + CONF_END_TEST;
+ for (i = 0; i < 10; i++)
+ writeb(0xff, pmem++);
+ writeb(adgl.data_from_pc_ready, apbs[IndexCard].RamIO + DATA_FROM_PC_READY);
+
+ writeb(1, apbs[IndexCard].RamIO + RAM_IT_FROM_PC);
+
+ /*
+ * FIXME: can trash waitqueue that is active.
+ */
#if LINUX_VERSION_CODE > 0x20300
- init_waitqueue_head (&FlagSleepRec);
+ init_waitqueue_head(&FlagSleepRec);
#else
- FlagSleepRec = NULL;
+ FlagSleepRec = NULL;
#endif
- for (i=0;i<MAX_BOARD;i++)
- {
- if (apbs[i].RamIO)
- {
+ for (i = 0; i < MAX_BOARD; i++) {
+ if (apbs[i].RamIO) {
#if LINUX_VERSION_CODE > 0x20300
- init_waitqueue_head (&apbs[i].FlagSleepSend);
+ init_waitqueue_head(&apbs[i].FlagSleepSend);
#else
- apbs[i].FlagSleepSend = NULL;
+ apbs[i].FlagSleepSend = NULL;
#endif
- byte_reset_it = readb(apbs[i].RamIO + RAM_IT_TO_PC);
- }
- }
- break ;
- case 3 :
- pmem = apbs[IndexCard].RamIO + TIC_DES_FROM_PC;
- writeb(adgl.tic_des_from_pc, pmem);
- break;
- case 4 :
- pmem = apbs[IndexCard].RamIO + TIC_OWNER_TO_PC;
- adgl.tic_owner_to_pc = readb(pmem++);
- adgl.numcard_owner_to_pc = readb(pmem);
- if (copy_to_user ((void *)arg, &adgl,sizeof(struct st_ram_io)))
- return -EFAULT;
- break;
- case 5 :
- writeb(adgl.num_card, apbs[IndexCard].RamIO + NUMCARD_OWNER_TO_PC);
- writeb(adgl.num_card, apbs[IndexCard].RamIO + NUMCARD_DES_FROM_PC);
- writeb(adgl.num_card, apbs[IndexCard].RamIO + NUMCARD_ACK_FROM_PC);
- writeb(4, apbs[IndexCard].RamIO + DATA_FROM_PC_READY);
- writeb(1, apbs[IndexCard].RamIO + RAM_IT_FROM_PC);
- break ;
- case 6 :
- printk("APPLICOM driver release .... V2.8.0\n");
- printk("Number of installed boards . %d\n",(int)numboards);
- printk("Segment of board ........... %X\n",(int)mem);
- printk("Interrupt IRQ number ....... %d\n",(int)irq);
- for(i=0;i<MAX_BOARD;i++)
- {
- int serial;
- char boardname[(SERIAL_NUMBER - TYPE_CARD) + 1];
-
- if (!apbs[i].RamIO)
- continue;
-
-
- for(serial = 0; serial < SERIAL_NUMBER - TYPE_CARD; serial++)
- boardname[serial] = readb(apbs[i].RamIO + TYPE_CARD + serial);
- boardname[serial]=0;
-
-
- printk("Prom version board %d ....... V%d.%d %s",
- i+1,
- (int)(readb(apbs[IndexCard].RamIO + VERS) >> 4),
- (int)(readb(apbs[IndexCard].RamIO + VERS) & 0xF),
- boardname);
-
-
- serial = (readb(apbs[i].RamIO + SERIAL_NUMBER) << 16) +
- (readb(apbs[i].RamIO + SERIAL_NUMBER + 1) << 8) +
- (readb(apbs[i].RamIO + SERIAL_NUMBER + 2) );
-
- if (serial != 0)
- printk (" S/N %d\n", serial);
- else
- printk("\n");
- }
- if(DeviceErrorCount != 0)
- printk("DeviceErrorCount ........... %d\n",DeviceErrorCount);
- if(ReadErrorCount != 0)
- printk("ReadErrorCount ............. %d\n",ReadErrorCount);
- if(WriteErrorCount != 0)
- printk("WriteErrorCount ............ %d\n",WriteErrorCount);
- if(waitqueue_active(&FlagSleepRec))
- printk("Process in read pending\n");
- for(i=0;i<MAX_BOARD;i++)
- {
- if (apbs[i].RamIO && waitqueue_active(&apbs[i].FlagSleepSend))
- printk("Process in write pending board %d\n",i+1);
- }
- break;
- default :
- printk("APPLICOM driver ioctl, unknown function code %d\n",cmd) ;
- return -EINVAL;
- break;
- }
- Dummy = readb(apbs[IndexCard].RamIO + VERS);
- return 0;
+ byte_reset_it = readb(apbs[i].RamIO + RAM_IT_TO_PC);
+ }
+ }
+ break;
+ case 3:
+ pmem = apbs[IndexCard].RamIO + TIC_DES_FROM_PC;
+ writeb(adgl.tic_des_from_pc, pmem);
+ break;
+ case 4:
+ pmem = apbs[IndexCard].RamIO + TIC_OWNER_TO_PC;
+ adgl.tic_owner_to_pc = readb(pmem++);
+ adgl.numcard_owner_to_pc = readb(pmem);
+ if (copy_to_user((void *) arg, &adgl, sizeof(struct st_ram_io)))
+ return -EFAULT;
+ break;
+ case 5:
+ writeb(adgl.num_card, apbs[IndexCard].RamIO + NUMCARD_OWNER_TO_PC);
+ writeb(adgl.num_card, apbs[IndexCard].RamIO + NUMCARD_DES_FROM_PC);
+ writeb(adgl.num_card, apbs[IndexCard].RamIO + NUMCARD_ACK_FROM_PC);
+ writeb(4, apbs[IndexCard].RamIO + DATA_FROM_PC_READY);
+ writeb(1, apbs[IndexCard].RamIO + RAM_IT_FROM_PC);
+ break;
+ case 6:
+ printk(KERN_INFO "APPLICOM driver release .... V2.8.0\n");
+ printk(KERN_INFO "Number of installed boards . %d\n", (int) numboards);
+ printk(KERN_INFO "Segment of board ........... %X\n", (int) mem);
+ printk(KERN_INFO "Interrupt IRQ number ....... %d\n", (int) irq);
+ for (i = 0; i < MAX_BOARD; i++) {
+ int serial;
+ char boardname[(SERIAL_NUMBER - TYPE_CARD) + 1];
+
+ if (!apbs[i].RamIO)
+ continue;
+
+
+ for (serial = 0; serial < SERIAL_NUMBER - TYPE_CARD; serial++)
+ boardname[serial] = readb(apbs[i].RamIO + TYPE_CARD + serial);
+ boardname[serial] = 0;
+
+
+ printk(KERN_INFO "Prom version board %d ....... V%d.%d %s", i + 1, (int) (readb(apbs[IndexCard].RamIO + VERS) >> 4), (int) (readb(apbs[IndexCard].RamIO + VERS) & 0xF), boardname);
+
+
+ serial = (readb(apbs[i].RamIO + SERIAL_NUMBER) << 16) + (readb(apbs[i].RamIO + SERIAL_NUMBER + 1) << 8) + (readb(apbs[i].RamIO + SERIAL_NUMBER + 2));
+
+ if (serial != 0)
+ printk(" S/N %d\n", serial);
+ else
+ printk("\n");
+ }
+ if (DeviceErrorCount != 0)
+ printk(KERN_INFO "DeviceErrorCount ........... %d\n", DeviceErrorCount);
+ if (ReadErrorCount != 0)
+ printk(KERN_INFO "ReadErrorCount ............. %d\n", ReadErrorCount);
+ if (WriteErrorCount != 0)
+ printk(KERN_INFO "WriteErrorCount ............ %d\n", WriteErrorCount);
+		if (waitqueue_active(&FlagSleepRec))
+			printk(KERN_INFO "Process in read pending\n");
+		for (i = 0; i < MAX_BOARD; i++) {
+			if (apbs[i].RamIO && waitqueue_active(&apbs[i].FlagSleepSend))
+				printk(KERN_INFO "Process in write pending board %d\n", i + 1);
+		}
+ break;
+ default:
+ printk("APPLICOM driver ioctl, unknown function code %d\n", cmd);
+ return -EINVAL;
+ break;
+ }
+ Dummy = readb(apbs[IndexCard].RamIO + VERS);
+ return 0;
}
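Cases 1 and 6 of the ioctl both assemble a 24-bit serial number from the three consecutive bytes at SERIAL_NUMBER. The packing reduces to big-endian byte composition:

```c
/* Pack the three bytes read from SERIAL_NUMBER, SERIAL_NUMBER+1 and
 * SERIAL_NUMBER+2 into one 24-bit value, most significant byte first. */
static int pack_serial(unsigned char b0, unsigned char b1, unsigned char b2)
{
	return (b0 << 16) + (b1 << 8) + b2;
}
```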
#ifndef MODULE
static int __init applicom_setup(char *str)
{
int ints[4];
-
- (void)get_options(str, 4, ints);
+
+ (void) get_options(str, 4, ints);
if (ints[0] > 2) {
printk(KERN_WARNING "Too many arguments to 'applicom=', expected mem,irq only.\n");
}
-
+
if (ints[0] < 2) {
printk("applicom numargs: %d\n", ints[0]);
return 0;
}
-
- mem=ints[1];
- irq=ints[2];
+
+ mem = ints[1];
+ irq = ints[2];
return 1;
}
+
#if LINUX_VERSION_CODE > 0x20300
__setup("applicom=", applicom_setup);
#endif
-#endif /* MODULE */
-
+#endif /* MODULE */
static int open_mouse(struct inode * inode, struct file * file)
{
+ /* Lock module as request_irq may sleep */
+ MOD_INC_USE_COUNT;
if (request_irq(ATIXL_MOUSE_IRQ, mouse_interrupt, 0, "ATIXL mouse", NULL))
+ {
+ MOD_DEC_USE_COUNT;
return -EBUSY;
+ }
ATIXL_MSE_INT_ON(); /* Interrupts are really enabled here */
- MOD_INC_USE_COUNT;
return 0;
}
{
unsigned char a,b,c;
- if (check_region(ATIXL_MSE_DATA_PORT, 3))
+	/*
+	 * We must request the resource and claim it atomically
+	 * nowadays. We can release it again on error. Otherwise
+	 * we may race another module load claiming the same I/O
+	 * region.
+	 */
+
+	if (!request_region(ATIXL_MSE_DATA_PORT, 3, "atixl"))
return -EIO;
a = inb( ATIXL_MSE_SIGNATURE_PORT ); /* Get signature */
if (( a != b ) && ( a == c ))
printk(KERN_INFO "\nATI Inport ");
else
+ {
+	release_region(ATIXL_MSE_DATA_PORT, 3);
return -EIO;
+ }
outb(0x80, ATIXL_MSE_CONTROL_PORT); /* Reset the Inport device */
outb(0x07, ATIXL_MSE_CONTROL_PORT); /* Select Internal Register 7 */
outb(0x0a, ATIXL_MSE_DATA_PORT); /* Data Interrupts 8+, 1=30hz, 2=50hz, 3=100hz, 4=200hz rate */
- request_region(ATIXL_MSE_DATA_PORT, 3, "atixl");
-
msedev = register_busmouse(&atixlmouse);
if (msedev < 0)
+ {
printk("Bus mouse initialisation error.\n");
+		release_region(ATIXL_MSE_DATA_PORT, 3);	/* Was missing */
+ }
else
printk("Bus mouse detected and installed.\n");
return msedev < 0 ? msedev : 0;
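The busmouse change above replaces the check_region()/request_region() pair with a single atomic claim, so two concurrent module loads cannot both pass the check before either claims the ports. The same idea, sketched in userspace with a C11 atomic flag standing in for the kernel's resource tree (the names here are illustrative):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* One flag per I/O region; true means some driver owns it. */
static atomic_bool region_claimed = false;

/* Atomically claim the region: returns true on success.  Unlike a
 * separate check followed by a claim, no other thread can sneak in
 * between the test and the set. */
static bool claim_region(void)
{
	bool expected = false;
	return atomic_compare_exchange_strong(&region_claimed, &expected, true);
}

static void release_region_flag(void)
{
	atomic_store(&region_claimed, false);
}
```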
*
* 8/99: Generalized PCI support added. Theodore Ts'o
*
+ * 3/00: Rid circular buffer of redundant xmit_cnt. Fix a
+ * few races on freeing buffers too.
+ * Alan Modra <alan@linuxcare.com>
+ *
* This module exports the following rs232 io functions:
*
* int rs_init(void);
*/
-static char *serial_version = "4.92";
-static char *serial_revdate = "2000-1-27";
+static char *serial_version = "4.93";
+static char *serial_revdate = "2000-03-20";
/*
* Serial driver configuration section. Here are the various options:
#endif
#endif
-#ifdef CONFIG_ISAPNP
+#if defined(CONFIG_ISAPNP) || (defined(CONFIG_ISAPNP_MODULE) && defined(MODULE))
#ifndef ENABLE_SERIAL_PNP
#define ENABLE_SERIAL_PNP
#endif
unsigned int minor);
extern void tty_unregister_devfs (struct tty_driver *driver, unsigned minor);
-
static char *serial_name = "Serial driver";
static DECLARE_TASK_QUEUE(tq_serial);
#endif
static unsigned detect_uart_irq (struct serial_state * state);
-static void autoconfig(struct serial_state * info);
+static void autoconfig(struct serial_state * state);
static void change_speed(struct async_struct *info, struct termios *old);
static void rs_wait_until_sent(struct tty_struct *tty, int timeout);
static struct termios *serial_termios[NR_PORTS];
static struct termios *serial_termios_locked[NR_PORTS];
-#ifndef MIN
-#define MIN(a,b) ((a) < (b) ? (a) : (b))
-#endif
#if defined(MODULE) && defined(SERIAL_DEBUG_MCOUNT)
#define DBG_CNT(s) printk("(%s): [%x] refc=%d, serc=%d, ttyc=%d -> %s\n", \
return inb(info->port+1);
#endif
case SERIAL_IO_MEM:
- return readb(info->iomem_base +
+ return readb((unsigned long) info->iomem_base +
(offset<<info->iomem_reg_shift));
#ifdef CONFIG_SERIAL_GSC
case SERIAL_IO_GSC:
break;
#endif
case SERIAL_IO_MEM:
- writeb(value, info->iomem_base +
+ writeb(value, (unsigned long) info->iomem_base +
(offset<<info->iomem_reg_shift));
break;
#ifdef CONFIG_SERIAL_GSC
return;
save_flags(flags); cli();
- if (info->xmit_cnt && info->xmit_buf && !(info->IER & UART_IER_THRI)) {
+ if (info->xmit.head != info->xmit.tail
+ && info->xmit.buf
+ && !(info->IER & UART_IER_THRI)) {
info->IER |= UART_IER_THRI;
serial_out(info, UART_IER, info->IER);
}
static _INLINE_ void transmit_chars(struct async_struct *info, int *intr_done)
{
int count;
-
+
if (info->x_char) {
serial_outp(info, UART_TX, info->x_char);
info->state->icount.tx++;
*intr_done = 0;
return;
}
- if ((info->xmit_cnt <= 0) || info->tty->stopped ||
- info->tty->hw_stopped) {
+ if (info->xmit.head == info->xmit.tail
+ || info->tty->stopped
+ || info->tty->hw_stopped) {
info->IER &= ~UART_IER_THRI;
serial_out(info, UART_IER, info->IER);
return;
count = info->xmit_fifo_size;
do {
- serial_out(info, UART_TX, info->xmit_buf[info->xmit_tail++]);
- info->xmit_tail = info->xmit_tail & (SERIAL_XMIT_SIZE-1);
+ serial_out(info, UART_TX, info->xmit.buf[info->xmit.tail]);
+ info->xmit.tail = (info->xmit.tail + 1) & (SERIAL_XMIT_SIZE-1);
info->state->icount.tx++;
- if (--info->xmit_cnt <= 0)
+ if (info->xmit.head == info->xmit.tail)
break;
} while (--count > 0);
- if (info->xmit_cnt < WAKEUP_CHARS)
+ if (CIRC_CNT(info->xmit.head,
+ info->xmit.tail,
+ SERIAL_XMIT_SIZE) < WAKEUP_CHARS)
rs_sched_event(info, RS_EVENT_WRITE_WAKEUP);
#ifdef SERIAL_DEBUG_INTR
if (intr_done)
*intr_done = 0;
- if (info->xmit_cnt <= 0) {
+ if (info->xmit.head == info->xmit.tail) {
info->IER &= ~UART_IER_THRI;
serial_out(info, UART_IER, info->IER);
}
#ifdef SERIAL_DEBUG_INTR
printk("rs_interrupt(%d)...", irq);
#endif
-
+
info = IRQ_ports[irq];
if (!info)
return;
-
+
#ifdef CONFIG_SERIAL_MULTIPORT
multi = &rs_multiport[irq];
if (multi->port_monitor)
#ifdef SERIAL_DEBUG_INTR
printk("rs_interrupt_single(%d)...", irq);
#endif
-
+
info = IRQ_ports[irq];
if (!info || !info->tty)
return;
#ifdef SERIAL_DEBUG_INTR
printk("rs_interrupt_multi(%d)...", irq);
#endif
-
+
info = IRQ_ports[irq];
if (!info)
return;
serial_out(info, UART_IER, info->IER);
info = info->next_port;
} while (info);
-#ifdef CONFIG_SERIAL_MULTIPORT
+#ifdef CONFIG_SERIAL_MULTIPORT
if (rs_multiport[i].port1)
rs_interrupt_multi(i, NULL, NULL);
else
free_page(page);
goto errout;
}
- if (info->xmit_buf)
+ if (info->xmit.buf)
free_page(page);
else
- info->xmit_buf = (unsigned char *) page;
+ info->xmit.buf = (unsigned char *) page;
#ifdef SERIAL_DEBUG_OPEN
printk("starting up ttys%d (irq %d)...", info->line, state->irq);
if (info->tty)
clear_bit(TTY_IO_ERROR, &info->tty->flags);
- info->xmit_cnt = info->xmit_head = info->xmit_tail = 0;
+ info->xmit.head = info->xmit.tail = 0;
/*
* Set up serial timers...
free_irq(state->irq, &IRQ_ports[state->irq]);
}
- if (info->xmit_buf) {
- free_page((unsigned long) info->xmit_buf);
- info->xmit_buf = 0;
+ if (info->xmit.buf) {
+ unsigned long pg = (unsigned long) info->xmit.buf;
+ info->xmit.buf = 0;
+ free_page(pg);
}
info->IER = 0;
* when DLL is 0.
*/
if (((quot & 0xFF) == 0) && (info->state->type == PORT_16C950) &&
- (info->state->revision == 0x5202))
+ (info->state->revision == 0x5201))
quot++;
info->quot = quot;
serial_outp(info, UART_FCR, UART_FCR_ENABLE_FIFO);
}
serial_outp(info, UART_FCR, fcr); /* set fcr */
- }
+ }
restore_flags(flags);
}
if (serial_paranoia_check(info, tty->device, "rs_put_char"))
return;
- if (!tty || !info->xmit_buf)
+ if (!tty || !info->xmit.buf)
return;
save_flags(flags); cli();
- if (info->xmit_cnt >= SERIAL_XMIT_SIZE - 1) {
+ if (CIRC_SPACE(info->xmit.head,
+ info->xmit.tail,
+ SERIAL_XMIT_SIZE) == 0) {
restore_flags(flags);
return;
}
- info->xmit_buf[info->xmit_head++] = ch;
- info->xmit_head &= SERIAL_XMIT_SIZE-1;
- info->xmit_cnt++;
+ info->xmit.buf[info->xmit.head] = ch;
+ info->xmit.head = (info->xmit.head + 1) & (SERIAL_XMIT_SIZE-1);
restore_flags(flags);
}
if (serial_paranoia_check(info, tty->device, "rs_flush_chars"))
return;
- if (info->xmit_cnt <= 0 || tty->stopped || tty->hw_stopped ||
- !info->xmit_buf)
+ if (info->xmit.head == info->xmit.tail
+ || tty->stopped
+ || tty->hw_stopped
+ || !info->xmit.buf)
return;
save_flags(flags); cli();
if (serial_paranoia_check(info, tty->device, "rs_write"))
return 0;
- if (!tty || !info->xmit_buf || !tmp_buf)
+ if (!tty || !info->xmit.buf || !tmp_buf)
return 0;
save_flags(flags);
if (from_user) {
down(&tmp_buf_sem);
while (1) {
- c = MIN(count,
- MIN(SERIAL_XMIT_SIZE - info->xmit_cnt - 1,
- SERIAL_XMIT_SIZE - info->xmit_head));
+ int c1;
+ c = CIRC_SPACE_TO_END(info->xmit.head,
+ info->xmit.tail,
+ SERIAL_XMIT_SIZE);
+ if (count < c)
+ c = count;
if (c <= 0)
break;
break;
}
cli();
- c = MIN(c, MIN(SERIAL_XMIT_SIZE - info->xmit_cnt - 1,
- SERIAL_XMIT_SIZE - info->xmit_head));
- memcpy(info->xmit_buf + info->xmit_head, tmp_buf, c);
- info->xmit_head = ((info->xmit_head + c) &
+ c1 = CIRC_SPACE_TO_END(info->xmit.head,
+ info->xmit.tail,
+ SERIAL_XMIT_SIZE);
+ if (c1 < c)
+ c = c1;
+ memcpy(info->xmit.buf + info->xmit.head, tmp_buf, c);
+ info->xmit.head = ((info->xmit.head + c) &
(SERIAL_XMIT_SIZE-1));
- info->xmit_cnt += c;
restore_flags(flags);
buf += c;
count -= c;
}
up(&tmp_buf_sem);
} else {
+ cli();
while (1) {
- cli();
- c = MIN(count,
- MIN(SERIAL_XMIT_SIZE - info->xmit_cnt - 1,
- SERIAL_XMIT_SIZE - info->xmit_head));
+ c = CIRC_SPACE_TO_END(info->xmit.head,
+ info->xmit.tail,
+ SERIAL_XMIT_SIZE);
+ if (count < c)
+ c = count;
if (c <= 0) {
- restore_flags(flags);
break;
}
- memcpy(info->xmit_buf + info->xmit_head, buf, c);
- info->xmit_head = ((info->xmit_head + c) &
+ memcpy(info->xmit.buf + info->xmit.head, buf, c);
+ info->xmit.head = ((info->xmit.head + c) &
(SERIAL_XMIT_SIZE-1));
- info->xmit_cnt += c;
- restore_flags(flags);
buf += c;
count -= c;
ret += c;
}
+ restore_flags(flags);
}
- if (info->xmit_cnt && !tty->stopped && !tty->hw_stopped &&
- !(info->IER & UART_IER_THRI)) {
+ if (info->xmit.head != info->xmit.tail
+ && !tty->stopped
+ && !tty->hw_stopped
+ && !(info->IER & UART_IER_THRI)) {
info->IER |= UART_IER_THRI;
serial_out(info, UART_IER, info->IER);
}
static int rs_write_room(struct tty_struct *tty)
{
struct async_struct *info = (struct async_struct *)tty->driver_data;
- int ret;
-
+
if (serial_paranoia_check(info, tty->device, "rs_write_room"))
return 0;
- ret = SERIAL_XMIT_SIZE - info->xmit_cnt - 1;
- if (ret < 0)
- ret = 0;
- return ret;
+ return CIRC_SPACE(info->xmit.head, info->xmit.tail, SERIAL_XMIT_SIZE);
}
static int rs_chars_in_buffer(struct tty_struct *tty)
if (serial_paranoia_check(info, tty->device, "rs_chars_in_buffer"))
return 0;
- return info->xmit_cnt;
+ return CIRC_CNT(info->xmit.head, info->xmit.tail, SERIAL_XMIT_SIZE);
}
static void rs_flush_buffer(struct tty_struct *tty)
if (serial_paranoia_check(info, tty->device, "rs_flush_buffer"))
return;
save_flags(flags); cli();
- info->xmit_cnt = info->xmit_head = info->xmit_tail = 0;
+ info->xmit.head = info->xmit.tail = 0;
restore_flags(flags);
wake_up_interruptible(&tty->write_wait);
if ((tty->flags & (1 << TTY_DO_WRITE_WAKEUP)) &&
* interrupt happens).
*/
if (info->x_char ||
- ((info->xmit_cnt > 0) && !info->tty->stopped &&
- !info->tty->hw_stopped))
+ ((CIRC_CNT(info->xmit.head, info->xmit.tail,
+ SERIAL_XMIT_SIZE) > 0) &&
+ !info->tty->stopped && !info->tty->hw_stopped))
result &= TIOCSER_TEMT;
if (copy_to_user(value, &result, sizeof(int)))
char_time = char_time / 5;
if (char_time == 0)
char_time = 1;
- if (timeout)
- char_time = MIN(char_time, timeout);
+ if (timeout && timeout < char_time)
+ char_time = timeout;
/*
* If the transmitter hasn't cleared in twice the approximate
* amount of time to send the entire FIFO, it probably won't
int ret;
unsigned long flags;
- ret = sprintf(buf, "%d: uart:%s port:%X irq:%d",
+ ret = sprintf(buf, "%d: uart:%s port:%lX irq:%d",
state->line, uart_config[state->type].name,
state->port, state->irq);
}
save_flags(flags); cli();
status = serial_in(info, UART_MSR);
- control = info ? info->MCR : serial_in(info, UART_MCR);
+ control = info != &scr_info ? info->MCR : serial_in(info, UART_MCR);
restore_flags(flags);
-
+
stat_buf[0] = 0;
stat_buf[1] = 0;
if (control & UART_MCR_RTS)
state->type = PORT_UNKNOWN;
#ifdef SERIAL_DEBUG_AUTOCONF
- printk("Testing ttyS%d (0x%04x, 0x%04x)...\n", state->line,
+ printk("Testing ttyS%d (0x%04lx, 0x%04x)...\n", state->line,
state->port, (unsigned) state->iomem_base);
#endif
/* enable/disable interrupts */
p = ioremap(PCI_BASE_ADDRESS(dev, 0), 0x80);
- if (dev->vendor == PCI_VENDOR_ID_PANACOM) {
- scratch = readl(p + 0x4c);
- if (enable)
- scratch |= 0x40;
- else
- scratch &= ~0x40;
- writel(scratch, p + 0x4c);
- } else
- writel(enable ? 0x41 : 0x00, p + 0x4c);
+ scratch = 0x41;
+ if (dev->vendor == PCI_VENDOR_ID_PANACOM)
+ scratch = 0x43;
+ writel(enable ? scratch : 0x00, (unsigned long)p + 0x4c);
iounmap(p);
if (!enable)
break;
}
- writew(readw(p + 0x28) & data, p + 0x28);
+ writew(readw((unsigned long) p + 0x28) & data, (unsigned long) p + 0x28);
iounmap(p);
return 0;
}
{ PCI_VENDOR_ID_USR, 0x1008,
PCI_ANY_ID, PCI_ANY_ID,
SPCI_FL_BASE0, 1, 115200 },
+ /* Titan Electronic cards */
+ { PCI_VENDOR_ID_TITAN, PCI_DEVICE_ID_TITAN_100,
+ PCI_ANY_ID, PCI_ANY_ID,
+ SPCI_FL_BASE0, 1, 921600 },
+ { PCI_VENDOR_ID_TITAN, PCI_DEVICE_ID_TITAN_200,
+ PCI_ANY_ID, PCI_ANY_ID,
+ SPCI_FL_BASE0, 2, 921600 },
+ { PCI_VENDOR_ID_TITAN, PCI_DEVICE_ID_TITAN_400,
+ PCI_ANY_ID, PCI_ANY_ID,
+ SPCI_FL_BASE0, 4, 921600 },
+ { PCI_VENDOR_ID_TITAN, PCI_DEVICE_ID_TITAN_800B,
+ PCI_ANY_ID, PCI_ANY_ID,
+ SPCI_FL_BASE0, 4, 921600 },
/*
* Untested PCI modems, sent in from various folks...
*/
{ PCI_VENDOR_ID_ROCKWELL, 0x1004,
0x1048, 0x1500,
SPCI_FL_BASE1, 1, 115200 },
-#ifdef CONFIG_DDB5074
+#if 0 /* No definition for PCI_DEVICE_ID_NEC_NILE4 */
/*
* NEC Vrc-5074 (Nile 4) builtin UART.
- * Conditionally compiled in since this is a motherboard device.
*/
{ PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_NILE4,
PCI_ANY_ID, PCI_ANY_ID,
SPCI_FL_BASE0 | SPCI_FL_PNPDEFAULT, 1, 115200 },
/* Rockwell 56K ACF II Fax+Data+Voice Modem */
{ ISAPNP_VENDOR('A', 'K', 'Y'), ISAPNP_DEVICE(0x1021), 0, 0,
+ SPCI_FL_BASE0 | SPCI_FL_NO_SHIRQ | SPCI_FL_PNPDEFAULT,
+ 1, 115200 },
+ /* AZT3005 PnP SOUND DEVICE */
+ { ISAPNP_VENDOR('A', 'Z', 'T'), ISAPNP_DEVICE(0x4001), 0, 0,
+ SPCI_FL_BASE0 | SPCI_FL_PNPDEFAULT, 1, 115200 },
+ /* Best Data Products Inc. Smart One 336F PnP Modem */
+ { ISAPNP_VENDOR('B', 'D', 'P'), ISAPNP_DEVICE(0x3336), 0, 0,
SPCI_FL_BASE0 | SPCI_FL_PNPDEFAULT, 1, 115200 },
/* Boca Research 33,600 ACF Modem */
{ ISAPNP_VENDOR('B', 'R', 'I'), ISAPNP_DEVICE(0x1400), 0, 0,
SPCI_FL_BASE0 | SPCI_FL_PNPDEFAULT, 1, 115200 },
- /* Best Data Products Inc. Smart One 336F PnP Modem */
- { ISAPNP_VENDOR('B', 'D', 'P'), ISAPNP_DEVICE(0x3336), 0, 0,
+ /* Creative Modem Blaster Flash56 DI5601-1 */
+ { ISAPNP_VENDOR('D', 'M', 'B'), ISAPNP_DEVICE(0x1032), 0, 0,
+ SPCI_FL_BASE0 | SPCI_FL_PNPDEFAULT, 1, 115200 },
+ /* Creative Modem Blaster V.90 DI5660 */
+ { ISAPNP_VENDOR('D', 'M', 'B'), ISAPNP_DEVICE(0x2001), 0, 0,
+ SPCI_FL_BASE0 | SPCI_FL_PNPDEFAULT, 1, 115200 },
+ /* Pace 56 Voice Internal Plug & Play Modem */
+ { ISAPNP_VENDOR('P', 'M', 'C'), ISAPNP_DEVICE(0x2430), 0, 0,
SPCI_FL_BASE0 | SPCI_FL_PNPDEFAULT, 1, 115200 },
/* SupraExpress 28.8 Data/Fax PnP modem */
{ ISAPNP_VENDOR('S', 'U', 'P'), ISAPNP_DEVICE(0x1310), 0, 0,
SPCI_FL_BASE0 | SPCI_FL_PNPDEFAULT, 1, 115200 },
+	/* US Robotics Sportster 33600 Modem */
+ { ISAPNP_VENDOR('U', 'S', 'R'), ISAPNP_DEVICE(0x0006), 0, 0,
+ SPCI_FL_BASE0 | SPCI_FL_PNPDEFAULT, 1, 115200 },
+ /* U.S. Robotics 56K FAX INT */
+ { ISAPNP_VENDOR('U', 'S', 'R'), ISAPNP_DEVICE(0x3031), 0, 0,
+ SPCI_FL_BASE0 | SPCI_FL_PNPDEFAULT, 1, 115200 },
+
/* These ID's are taken from M$ documentation */
/* Compaq 14400 Modem */
{ ISAPNP_VENDOR('P', 'N', 'P'), ISAPNP_DEVICE(0xC000), 0, 0,
{ 0, }
};
+static void inline avoid_irq_share(struct pci_dev *dev)
+{
+ int i, map = 0x1FF8;
+ struct serial_state *state = rs_table;
+ struct isapnp_irq *irq;
+ struct isapnp_resources *res = dev->sysdata;
+
+ for (i = 0; i < NR_PORTS; i++) {
+ if (state->type != PORT_UNKNOWN)
+ clear_bit(state->irq, &map);
+ state++;
+ }
+
+ for ( ; res; res = res->alt)
+ for(irq = res->irq; irq; irq = irq->next)
+ irq->map = map;
+}
+
static void __init probe_serial_pnp(void)
{
struct pci_dev *dev = NULL;
for (board = pnp_devices; board->vendor; board++) {
while ((dev = isapnp_find_dev(NULL, board->vendor,
board->device, dev))) {
-
- start_pci_pnp_board(dev, board);
-
+ if (board->flags & SPCI_FL_NO_SHIRQ)
+ avoid_irq_share(dev);
+ start_pci_pnp_board(dev, board);
}
}
#if (LINUX_VERSION_CODE > 0x20100)
serial_driver.driver_name = "serial";
#endif
+#if (LINUX_VERSION_CODE > 0x2032D)
serial_driver.name = "tts/%d";
+#else
+ serial_driver.name = "ttyS";
+#endif
serial_driver.major = TTY_MAJOR;
serial_driver.minor_start = 64 + SERIAL_DEV_OFFSET;
serial_driver.num = NR_PORTS;
* major number and the subtype code.
*/
callout_driver = serial_driver;
+#if (LINUX_VERSION_CODE > 0x2032D)
callout_driver.name = "cua/%d";
+#else
+ callout_driver.name = "cua";
+#endif
callout_driver.major = TTYAUX_MAJOR;
callout_driver.subtype = SERIAL_TYPE_CALLOUT;
#if (LINUX_VERSION_CODE >= 131343)
&& (state->flags & ASYNC_AUTO_IRQ)
&& (state->port != 0))
state->irq = detect_uart_irq(state);
- printk(KERN_INFO "ttyS%02d%s at 0x%04x (irq = %d) is a %s\n",
+ printk(KERN_INFO "ttyS%02d%s at 0x%04lx (irq = %d) is a %s\n",
state->line + SERIAL_DEV_OFFSET,
(state->flags & ASYNC_FOURPORT) ? " FourPort" : "",
state->port, state->irq,
* The port is then probed and if neccessary the IRQ is autodetected
* If this fails an error is returned.
*
- * On success the port is ready to use and the line number is returned.
+ * On success the port is ready to use and the line number is returned.
*/
int register_serial(struct serial_struct *req)
struct serial_state *state;
struct async_struct *info;
- save_flags(flags);
- cli();
+ save_flags(flags); cli();
for (i = 0; i < NR_PORTS; i++) {
if ((rs_table[i].port == req->port) &&
(rs_table[i].iomem_base == req->iomem_base))
state = &rs_table[i];
if (rs_table[i].count) {
restore_flags(flags);
- printk("Couldn't configure serial #%d (port=%d,irq=%d): "
+ printk("Couldn't configure serial #%d (port=%ld,irq=%d): "
"device already open\n", i, req->port, req->irq);
return -1;
}
if ((state->flags & ASYNC_AUTO_IRQ) && CONFIGURED_SERIAL_PORT(state))
state->irq = detect_uart_irq(state);
- printk(KERN_INFO "ttyS%02d at %s 0x%04lx (irq = %d) is a %s\n",
+ printk(KERN_INFO "ttyS%02d at %s 0x%04lx (irq = %d) is a %s\n",
state->line + SERIAL_DEV_OFFSET,
state->iomem_base ? "iomem" : "port",
state->iomem_base ? (unsigned long)state->iomem_base :
- (unsigned long)state->port,
- state->irq, uart_config[state->type].name);
+ state->port, state->irq, uart_config[state->type].name);
tty_register_devfs(&serial_driver, 0,
serial_driver.minor_start + state->line);
tty_register_devfs(&callout_driver, 0,
unsigned long flags;
struct serial_state *state = &rs_table[line];
- save_flags(flags);
- cli();
+ save_flags(flags); cli();
if (state->info && state->info->tty)
tty_hangup(state->info->tty);
state->type = PORT_UNKNOWN;
}
#ifdef MODULE
-int init_module(void)
-{
- return rs_init();
-}
-
-void cleanup_module(void)
+void rs_fini(void)
{
unsigned long flags;
int e1, e2;
struct async_struct *info;
/* printk("Unloading %s: version %s\n", serial_name, serial_version); */
- save_flags(flags);
- cli();
+ save_flags(flags); cli();
timer_active &= ~(1 << RS_TIMER);
timer_table[RS_TIMER].fn = NULL;
timer_table[RS_TIMER].expires = 0;
}
#endif
if (tmp_buf) {
- free_page((unsigned long) tmp_buf);
+ unsigned long pg = (unsigned long) tmp_buf;
tmp_buf = NULL;
+ free_page(pg);
}
}
#endif /* MODULE */
+module_init(rs_init);
+module_exit(rs_fini);
+
/*
* ------------------------------------------------------------
#ifdef CONFIG_ESPSERIAL /* init ESP before rs, so rs doesn't see the port */
espserial_init();
#endif
-#ifdef CONFIG_SERIAL
- rs_init();
-#endif
#if defined(CONFIG_MVME162_SCC) || defined(CONFIG_BVME6000_SCC) || defined(CONFIG_MVME147_SCC)
vme_scc_init();
#endif
* Handle an interrupt from the board. These are raised when the status
* map changes in what the board considers an interesting way. That means
 * a failure condition occurring.
- *
- * FIXME: We need to pass a dev_id as the PCI card can share irqs
- * although its arguably a _very_ dumb idea to share watchdog
- * irq lines
*/
void wdt_interrupt(int irq, void *dev_id, struct pt_regs *regs)
int __init wdt_init(void)
{
printk(KERN_INFO "WDT500/501-P driver 0.07 at %X (Interrupt %d)\n", io,irq);
- if(request_irq(irq, wdt_interrupt, SA_INTERRUPT, "wdt501p", NULL))
+ if(request_irq(irq, wdt_interrupt, SA_INTERRUPT, "wdt501p", &wdt_miscdev))
{
printk(KERN_ERR "IRQ %d is not free.\n", irq);
return -EIO;
return 0;
}
-static int __init i2c_algo_pcf_init (void)
+int __init i2c_algo_pcf_init (void)
{
printk("i2c-algo-pcf.o: i2c pcf8584 algorithm module\n");
return 0;
#include <linux/stat.h>
#include <linux/proc_fs.h>
-static int sis_get_info(char *, char **, off_t, int);
+static int __init sis_get_info(char *, char **, off_t, int);
extern int (*sis_display_info)(char *, char **, off_t, int); /* ide-proc.c */
struct pci_dev *bmide_dev;
-static char *cable_type[] = {
+static char *cable_type[] __initdata = {
"80 pins",
"40 pins"
};
-static char *recovery_time [] ={
+static char *recovery_time [] __initdata ={
"12 PCICLK", "1 PCICLK",
"2 PCICLK", "3 PCICLK",
"4 PCICLK", "5 PCICLCK",
"15 PCICLK", "15 PCICLK"
};
-static char *cycle_time [] = {
+static char * cycle_time [] __initdata = {
"Undefined", "2 CLCK",
"3 CLK", "4 CLK",
"5 CLK", "6 CLK",
"7 CLK", "8 CLK"
};
-static char *active_time [] = {
+static char * active_time [] __initdata = {
"8 PCICLK", "1 PCICLCK",
"2 PCICLK", "2 PCICLK",
"4 PCICLK", "5 PCICLK",
"6 PCICLK", "12 PCICLK"
};
-static int sis_get_info (char *buffer, char **addr, off_t offset, int count)
+static int __init sis_get_info (char *buffer, char **addr, off_t offset, int count)
{
int rc;
char *p = buffer;
with "nopnp=1" before, does not harm if not. */
idev->deactivate(idev);
idev->activate(idev);
- if (!idev->resource[0].start || check_region(idev->resource[0].start,16))
+ if (!idev->resource[0].start || check_region(idev->resource[0].start, EL3_IO_EXTENT))
continue;
ioaddr = idev->resource[0].start;
+ if (!request_region(ioaddr, EL3_IO_EXTENT, "3c509 PnP"))
+ return -EBUSY;
irq = idev->irq_resource[0].start;
if (el3_debug > 3)
printk ("ISAPnP reports %s at i/o 0x%x, irq %d\n",
if (inw(ioaddr + Wn0EepromData) != 0x6d50)
continue;
}
- printk(KERN_INFO "3c515 Resource configuraiton register %#4.4x, DCR %4.4x.\n",
+ printk(KERN_INFO "3c515 Resource configuration register %#4.4x, DCR %4.4x.\n",
inl(ioaddr + 0x2002), inw(ioaddr + 0x2000));
/* irq = inw(ioaddr + 0x2002) & 15; */ /* Use the irq from isapnp */
corkscrew_isapnp_phys_addr[pnp_cards] = ioaddr;
if (inw(ioaddr + Wn0EepromData) != 0x6d50)
continue;
}
- printk(KERN_INFO "3c515 Resource configuraiton register %#4.4x, DCR %4.4x.\n",
+ printk(KERN_INFO "3c515 Resource configuration register %#4.4x, DCR %4.4x.\n",
inl(ioaddr + 0x2002), inw(ioaddr + 0x2000));
irq = inw(ioaddr + 0x2002) & 15;
corkscrew_found_device(dev, ioaddr, irq, CORKSCREW_ID, dev
*/
-static char *version =
-"3c59x.c:v0.99H+lk1.0 Feb 9, 2000 The Linux Kernel Team http://cesdis.gsfc.nasa.gov/linux/drivers/vortex.html\n";
/* "Knobs" that adjust features and parameters. */
/* Set the copy breakpoint for the copy-only-tiny-frames scheme.
#define PCI_SUPPORT_VER2
#define DEV_FREE_SKB(skb) dev_kfree_skb(skb);
+static char *version __initdata =
+"3c59x.c:v0.99H+lk1.0 Feb 9, 2000 The Linux Kernel Team http://cesdis.gsfc.nasa.gov/linux/drivers/vortex.html\n";
+
MODULE_AUTHOR("Donald Becker <becker@cesdis.gsfc.nasa.gov>");
MODULE_DESCRIPTION("3Com 3c590/3c900 series Vortex/Boomerang driver");
MODULE_PARM(debug, "i");
static void set_multicast_list(struct net_device *dev);
static void do_set_multicast_list(struct net_device *dev);
-/*
- * SMP and the 8390 setup.
+/**
+ * DOC: SMP and the 8390 setup.
*
* The 8390 isnt exactly designed to be multithreaded on RX/TX. There is
* a page register that controls bank and packet buffer access. We guard
\f
-/* Open/initialize the board. This routine goes all-out, setting everything
- up anew at each open, even though many of these registers should only
- need to be set once at boot.
- */
+/**
+ * ei_open - Open/initialize the board.
+ * @dev: network device to initialize
+ *
+ * This routine goes all-out, setting everything
+ * up anew at each open, even though many of these registers should only
+ * need to be set once at boot.
+ */
int ei_open(struct net_device *dev)
{
unsigned long flags;
return 0;
}
-/* Opposite of above. Only used when "ifconfig <devname> down" is done. */
+/**
+ * ei_close - shut down network device
+ * @dev: network device to close
+ *
+ * Opposite of ei_open. Only used when "ifconfig <devname> down" is done.
+ */
int ei_close(struct net_device *dev)
{
struct ei_device *ei_local = (struct ei_device *) dev->priv;
return 0;
}
+/**
+ * ei_start_xmit - begin packet transmission
+ * @skb: packet to be sent
+ * @dev: network device to which packet is sent
+ *
+ * Sends a packet to an 8390 network device.
+ */
+
static int ei_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
long e8390_base = dev->base_addr;
return 0;
}
\f
-/* The typical workload of the driver:
- Handle the ether interface interrupts. */
+/**
+ * ei_interrupt - handle the interrupts from an 8390
+ * @irq: interrupt number
+ * @dev_id: a pointer to the net_device
+ * @regs: unused
+ *
+ * The typical workload of the driver:
+ * Handle the ether interface interrupts.
+ */
void ei_interrupt(int irq, void *dev_id, struct pt_regs * regs)
{
return;
}
-/*
+/**
+ * ei_tx_err - handle transmitter error
+ * @dev: network device which threw the exception
+ *
* A transmitter error has happened. Most likely excess collisions (which
* is a fairly normal condition). If the error is one where the Tx will
* have been aborted, we try and send another one right away, instead of
}
}
-/* We have finished a transmit: check for errors and then trigger the next
- packet to be sent. Called with lock held */
+/**
+ * ei_tx_intr - transmit interrupt handler
+ * @dev: network device for which tx intr is handled
+ *
+ * We have finished a transmit: check for errors and then trigger the next
+ * packet to be sent. Called with lock held
+ */
static void ei_tx_intr(struct net_device *dev)
{
netif_wake_queue(dev);
}
-/* We have a good packet(s), get it/them out of the buffers.
- Called with lock held */
+/**
+ * ei_receive - receive some packets
+ * @dev: network device with which receive will be run
+ *
+ * We have a good packet(s), get it/them out of the buffers.
+ * Called with lock held
+ */
static void ei_receive(struct net_device *dev)
{
return;
}
-/*
+/**
+ * ei_rx_overrun - handle receiver overrun
+ * @dev: network device which threw exception
+ *
* We have a receiver overrun: we have to kick the 8390 to get it started
* again. Problem is that you have to kick it exactly as NS prescribes in
* the updated datasheets, or "the NIC may act in an unpredictable manner."
}
}
-/*
+/**
+ * do_set_multicast_list - set/clear multicast filter
+ * @dev: net device for which multicast filter is adjusted
+ *
* Set or clear the multicast filter for this adaptor. May be called
* from a BH in 2.1.x. Must be called with lock held.
*/
spin_unlock_irqrestore(&ei_local->page_lock, flags);
}
-/*
+/**
+ * ethdev_init - init rest of 8390 device struct
+ * @dev: network device structure to init
+ *
* Initialize the rest of the 8390 device structure. Do NOT __init
* this, as it is used by 8390 based modular drivers too.
*/
/* This page of functions should be 8390 generic */
/* Follow National Semi's recommendations for initializing the "NIC". */
-/*
+/**
+ * NS8390_init - initialize 8390 hardware
+ * @dev: network device to initialize
+ * @startp: boolean. non-zero value to initiate chip processing
+ *
* Must be called with lock held.
*/
outb_p(E8390_RXCONFIG, e8390_base + EN0_RXCR); /* rx on, */
do_set_multicast_list(dev); /* (re)load the mcast table */
}
- return;
}
/* Trigger a transmit start, assuming the length is valid.
fi
bool ' Pocket and portable adapters' CONFIG_NET_POCKET
if [ "$CONFIG_NET_POCKET" = "y" ]; then
- tristate ' AT-LAN-TEC/RealTek pocket adapter support' CONFIG_ATP
+ if [ "$CONFIG_X86" = "y" ]; then
+ tristate ' AT-LAN-TEC/RealTek pocket adapter support' CONFIG_ATP
+ fi
tristate ' D-Link DE600 pocket adapter support' CONFIG_DE600
tristate ' D-Link DE620 pocket adapter support' CONFIG_DE620
fi
switch (cmd) {
case SIOCSBANDWIDTH: /* Set bandwidth */
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
irda_task_execute(self, __irport_change_speed, NULL, NULL,
(void *) irq->ifr_baudrate);
break;
case SIOCSDONGLE: /* Set dongle */
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
/* Initialize dongle */
dongle = irda_device_dongle_init(dev, irq->ifr_dongle);
if (!dongle)
NULL);
break;
case SIOCSMEDIABUSY: /* Set media busy */
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
irda_device_set_media_busy(self->netdev, TRUE);
break;
case SIOCGRECEIVING: /* Check if we are receiving right now */
irq->ifr_receiving = irport_is_receiving(self);
break;
case SIOCSDTRRTS:
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
irport_set_dtr_rts(dev, irq->ifr_dtr, irq->ifr_rts);
break;
default:
* Status: Experimental.
* Author: Dag Brattli <dagb@cs.uit.no>
* Created at: Tue Dec 9 21:18:38 1997
- * Modified at: Fri Jan 14 21:02:27 2000
+ * Modified at: Sat Mar 11 07:43:30 2000
* Modified by: Dag Brattli <dagb@cs.uit.no>
* Sources: slip.c by Laurence Culhane, <loz@holmes.demon.co.uk>
* Fred N. van Kempen, <waltje@uwalt.nl.mugnet.org>
switch (cmd) {
case SIOCSBANDWIDTH: /* Set bandwidth */
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
irda_task_execute(self, irtty_change_speed, NULL, NULL,
(void *) irq->ifr_baudrate);
break;
case SIOCSDONGLE: /* Set dongle */
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
/* Initialize dongle */
dongle = irda_device_dongle_init(dev, irq->ifr_dongle);
if (!dongle)
NULL);
break;
case SIOCSMEDIABUSY: /* Set media busy */
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
irda_device_set_media_busy(self->netdev, TRUE);
break;
case SIOCGRECEIVING: /* Check if we are receiving right now */
irq->ifr_receiving = irtty_is_receiving(self);
break;
case SIOCSDTRRTS:
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
irtty_set_dtr_rts(dev, irq->ifr_dtr, irq->ifr_rts);
break;
case SIOCSMODE:
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
irtty_set_mode(dev, irq->ifr_mode);
break;
default:
switch (cmd) {
case SIOCSBANDWIDTH: /* Set bandwidth */
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
nsc_ircc_change_speed(self, irq->ifr_baudrate);
break;
case SIOCSMEDIABUSY: /* Set media busy */
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
irda_device_set_media_busy(self->netdev, TRUE);
break;
case SIOCGRECEIVING: /* Check if we are receiving right now */
switch (cmd) {
case SIOCSBANDWIDTH: /* Set bandwidth */
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
/* toshoboe_setbaud(self, irq->ifr_baudrate); */
/* Just change speed once - inserted by Paul Bristow */
self->new_speed = irq->ifr_baudrate;
break;
case SIOCSMEDIABUSY: /* Set media busy */
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
irda_device_set_media_busy(self->netdev, TRUE);
break;
case SIOCGRECEIVING: /* Check if we are receiving right now */
switch (cmd) {
case SIOCSBANDWIDTH: /* Set bandwidth */
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
w83977af_change_speed(self, irq->ifr_baudrate);
break;
case SIOCSMEDIABUSY: /* Set media busy */
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
irda_device_set_media_busy(self->netdev, TRUE);
break;
case SIOCGRECEIVING: /* Check if we are receiving right now */
STOP_PG_0x60=0x100,
};
-/* This will eventually be converted to the standard PCI probe table. */
+
+enum ne2k_pci_chipsets {
+ CH_RealTek_RTL_8029 = 0,
+ CH_Winbond_89C940,
+ CH_Compex_RL2000,
+ CH_KTI_ET32P2,
+ CH_NetVin_NV5000SC,
+ CH_Via_86C926,
+ CH_SureCom_NE34,
+ CH_Winbond_W89C940F,
+ CH_Holtek_HT80232,
+ CH_Holtek_HT80229,
+};
+
static struct {
- unsigned short vendor, dev_id;
char *name;
int flags;
-}
-pci_clone_list[] __initdata = {
- {0x10ec, 0x8029, "RealTek RTL-8029", 0},
- {0x1050, 0x0940, "Winbond 89C940", 0},
- {0x11f6, 0x1401, "Compex RL2000", 0},
- {0x8e2e, 0x3000, "KTI ET32P2", 0},
- {0x4a14, 0x5000, "NetVin NV5000SC", 0},
- {0x1106, 0x0926, "Via 86C926", ONLY_16BIT_IO},
- {0x10bd, 0x0e34, "SureCom NE34", 0},
- {0x1050, 0x5a5a, "Winbond W89C940F", 0},
- {0x12c3, 0x0058, "Holtek HT80232", ONLY_16BIT_IO | HOLTEK_FDX},
- {0x12c3, 0x5598, "Holtek HT80229",
- ONLY_32BIT_IO | HOLTEK_FDX | STOP_PG_0x60 },
+} pci_clone_list[] __devinitdata = {
+ {"RealTek RTL-8029", 0},
+ {"Winbond 89C940", 0},
+ {"Compex RL2000", 0},
+ {"KTI ET32P2", 0},
+ {"NetVin NV5000SC", 0},
+ {"Via 86C926", ONLY_16BIT_IO},
+ {"SureCom NE34", 0},
+ {"Winbond W89C940F", 0},
+ {"Holtek HT80232", ONLY_16BIT_IO | HOLTEK_FDX},
+ {"Holtek HT80229", ONLY_32BIT_IO | HOLTEK_FDX | STOP_PG_0x60 },
{0,}
};
+
+static struct pci_device_id ne2k_pci_tbl[] __devinitdata = {
+ { 0x10ec, 0x8029, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_RealTek_RTL_8029 },
+ { 0x1050, 0x0940, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_Winbond_89C940 },
+ { 0x11f6, 0x1401, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_Compex_RL2000 },
+ { 0x8e2e, 0x3000, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_KTI_ET32P2 },
+ { 0x4a14, 0x5000, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_NetVin_NV5000SC },
+ { 0x1106, 0x0926, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_Via_86C926 },
+ { 0x10bd, 0x0e34, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_SureCom_NE34 },
+ { 0x1050, 0x5a5a, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_Winbond_W89C940F },
+ { 0x12c3, 0x0058, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_Holtek_HT80232 },
+ { 0x12c3, 0x5598, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_Holtek_HT80229 },
+ { 0, },
+};
+MODULE_DEVICE_TABLE(pci, ne2k_pci_tbl);
+
+
/* ---- No user-serviceable parts below ---- */
#define NE_BASE (dev->base_addr)
#define NESM_START_PG 0x40 /* First page of TX buffer */
#define NESM_STOP_PG 0x80 /* Last page +1 of RX ring */
-static int ne2k_pci_probe(void);
-static struct net_device *ne2k_pci_probe1(long ioaddr, int irq, int chip_idx);
static int ne2k_pci_open(struct net_device *dev);
static int ne2k_pci_close(struct net_device *dev);
/* No room in the standard 8390 structure for extra info we need. */
struct ne2k_pci_card {
- struct ne2k_pci_card *next;
struct net_device *dev;
struct pci_dev *pci_dev;
};
-/* A list of all installed devices, for removing the driver module. */
-static struct ne2k_pci_card *ne2k_card_list = NULL;
-static int __init ne2k_pci_init_module(void)
-{
- /* We must emit version information. */
- if (debug)
- printk(KERN_INFO "%s", version);
- if (ne2k_pci_probe()) {
- printk(KERN_NOTICE "ne2k-pci.c: No useable cards found, driver NOT installed.\n");
- return -ENODEV;
- }
- lock_8390_module();
- return 0;
-}
-
-static void __exit ne2k_pci_cleanup_module(void)
-{
- struct net_device *dev;
- struct ne2k_pci_card *this_card;
-
- /* No need to check MOD_IN_USE, as sys_delete_module() checks. */
- while (ne2k_card_list) {
- dev = ne2k_card_list->dev;
- unregister_netdev(dev);
- release_region(dev->base_addr, NE_IO_EXTENT);
- kfree(dev);
- this_card = ne2k_card_list;
- ne2k_card_list = ne2k_card_list->next;
- kfree(this_card);
- }
- unlock_8390_module();
-}
-
-module_init(ne2k_pci_init_module);
-module_exit(ne2k_pci_cleanup_module);
/*
NEx000-clone boards have a Station Address (SA) PROM (SAPROM) in the packet
in the 'dev' and 'ei_status' structures.
*/
-#ifdef HAVE_DEVLIST
-struct netdev_entry netcard_drv =
-{"ne2k_pci", ne2k_pci_probe1, NE_IO_EXTENT, 0};
-#endif
-static int __init ne2k_pci_probe(void)
+static int __devinit ne2k_pci_init_one (struct pci_dev *pdev,
+ const struct pci_device_id *ent)
{
- struct pci_dev *pdev = NULL;
- int cards_found = 0;
- int i;
struct net_device *dev;
+ int i, irq, reg0, start_page, stop_page;
+ unsigned char SA_prom[32];
+ int chip_idx = ent->driver_data;
+ static unsigned version_printed = 0;
+ long ioaddr;
+
+ if (version_printed++ == 0)
+ printk(KERN_INFO "%s", version);
- if ( ! pci_present())
+ ioaddr = pci_resource_start (pdev, 0);
+ irq = pdev->irq;
+
+ if (!ioaddr || ((pci_resource_flags (pdev, 0) & IORESOURCE_IO) == 0)) {
+ printk (KERN_ERR "ne2k-pci: no I/O resource at PCI BAR #0\n");
return -ENODEV;
-
- while ((pdev = pci_find_class(PCI_CLASS_NETWORK_ETHERNET << 8, pdev)) != NULL) {
- int pci_irq_line;
- u16 pci_command, new_command;
- unsigned long pci_ioaddr;
-
- /* Note: some vendor IDs (RealTek) have non-NE2k cards as well. */
- for (i = 0; pci_clone_list[i].vendor != 0; i++)
- if (pci_clone_list[i].vendor == pdev->vendor
- && pci_clone_list[i].dev_id == pdev->device)
- break;
- if (pci_clone_list[i].vendor == 0)
- continue;
-
- pci_ioaddr = pdev->resource[0].start;
- pci_irq_line = pdev->irq;
- pci_read_config_word(pdev, PCI_COMMAND, &pci_command);
-
- /* Avoid already found cards from previous calls */
- if (check_region(pci_ioaddr, NE_IO_EXTENT))
- continue;
-
- {
- static unsigned version_printed = 0;
- if (version_printed++ == 0)
- printk(KERN_INFO "%s", version);
- }
-
- /* Activate the card: fix for brain-damaged Win98 BIOSes. */
- new_command = pci_command | PCI_COMMAND_IO;
- if (pci_command != new_command) {
- printk(KERN_INFO " The PCI BIOS has not enabled this"
- " NE2k clone! Updating PCI command %4.4x->%4.4x.\n",
- pci_command, new_command);
- pci_write_config_word(pdev, PCI_COMMAND, new_command);
- }
-#ifndef __sparc__
- if (pci_irq_line <= 0 || pci_irq_line >= NR_IRQS)
- printk(KERN_WARNING " WARNING: The PCI BIOS assigned this PCI NE2k"
- " card to IRQ %d, which is unlikely to work!.\n"
- KERN_WARNING " You should use the PCI BIOS setup to assign"
- " a valid IRQ line.\n", pci_irq_line);
-#endif
- printk("ne2k-pci.c: PCI NE2000 clone '%s' at I/O %#lx, IRQ %d.\n",
- pci_clone_list[i].name, pci_ioaddr, pci_irq_line);
- dev = ne2k_pci_probe1(pci_ioaddr, pci_irq_line, i);
- if (dev == 0) {
- /* Should not happen. */
- printk(KERN_ERR "ne2k-pci: Probe of PCI card at %#lx failed.\n",
- pci_ioaddr);
- continue;
- } else {
- struct ne2k_pci_card *ne2k_card =
- kmalloc(sizeof(struct ne2k_pci_card), GFP_KERNEL);
- ne2k_card->next = ne2k_card_list;
- ne2k_card_list = ne2k_card;
- ne2k_card->dev = dev;
- ne2k_card->pci_dev = pdev;
- }
-
- cards_found++;
}
-
- return cards_found ? 0 : -ENODEV;
-}
-
-static struct net_device __init *ne2k_pci_probe1(long ioaddr, int irq, int chip_idx)
-{
- struct net_device *dev;
- int i;
- unsigned char SA_prom[32];
- int start_page, stop_page;
- int reg0 = inb(ioaddr);
-
+
+ if (pci_enable_device (pdev)) {
+ printk (KERN_ERR "ne2k-pci: cannot enable device\n");
+ return -EIO;
+ }
+
+ if (request_region (ioaddr, NE_IO_EXTENT, "ne2k-pci") == NULL) {
+ printk (KERN_ERR "ne2k-pci: I/O resource 0x%x @ 0x%lx busy\n",
+ NE_IO_EXTENT, ioaddr);
+ return -EBUSY;
+ }
+
+ reg0 = inb(ioaddr);
if (reg0 == 0xFF)
- return 0;
+ goto err_out_free_res;
/* Do a preliminary verification that we have a 8390. */
{
if (inb(ioaddr + EN0_COUNTER0) != 0) {
outb(reg0, ioaddr);
outb(regd, ioaddr + 0x0d); /* Restore the old values. */
- return 0;
+ goto err_out_free_res;
}
}
dev = init_etherdev(NULL, 0);
-
+ if (!dev) {
+ printk (KERN_ERR "ne2k-pci: cannot allocate ethernet device\n");
+ goto err_out_free_res;
+ }
+
/* Reset card. Who knows what dain-bramaged state it was left in. */
{
unsigned long reset_start_time = jiffies;
/* Limit wait: '2' avoids jiffy roll-over. */
if (jiffies - reset_start_time > 2) {
printk("ne2k-pci: Card failure (no reset ack).\n");
- return 0;
+ goto err_out_free_netdev;
}
outb(0xff, ioaddr + EN0_ISR); /* Ack all intr. */
}
if (load_8390_module("ne2k-pci.c")) {
- return 0;
+ printk (KERN_ERR "ne2k-pci: cannot load 8390 module\n");
+ goto err_out_free_netdev;
}
/* Read the 16 bytes of station address PROM.
/* Allocate dev->priv and fill in 8390 specific dev fields. */
if (ethdev_init(dev)) {
- printk ("%s: unable to get memory for dev->priv.\n", dev->name);
- return 0;
+ printk (KERN_ERR "%s: unable to get memory for dev->priv.\n", dev->name);
+ goto err_out_free_netdev;
}
- request_region(ioaddr, NE_IO_EXTENT, dev->name);
-
printk("%s: %s found at %#lx, IRQ %d, ",
dev->name, pci_clone_list[chip_idx].name, ioaddr, dev->irq);
for(i = 0; i < 6; i++) {
dev->open = &ne2k_pci_open;
dev->stop = &ne2k_pci_close;
NS8390_init(dev, 0);
- return dev;
+ return 0;
+
+err_out_free_netdev:
+ unregister_netdev (dev);
+ kfree (dev);
+err_out_free_res:
+ release_region (ioaddr, NE_IO_EXTENT);
+ return -ENODEV;
+
}
static int
ne2k_pci_open(struct net_device *dev)
{
- if (request_irq(dev->irq, ei_interrupt, SA_SHIRQ, dev->name, dev))
+ MOD_INC_USE_COUNT;
+ if (request_irq(dev->irq, ei_interrupt, SA_SHIRQ, dev->name, dev)) {
+ MOD_DEC_USE_COUNT;
return -EAGAIN;
+ }
ei_open(dev);
- MOD_INC_USE_COUNT;
return 0;
}
return;
}
+
+static void __devexit ne2k_pci_remove_one (struct pci_dev *pdev)
+{
+ struct net_device *dev = pdev->driver_data;
+
+ if (!dev) {
+ printk (KERN_ERR "bug! ne2k_pci_remove_one called w/o net_device\n");
+ return;
+ }
+
+ unregister_netdev (dev);
+ release_region (dev->base_addr, NE_IO_EXTENT);
+ kfree (dev);
+}
+
+
+static struct pci_driver ne2k_driver = {
+ name: "ne2k-pci",
+ probe: ne2k_pci_init_one,
+ remove: ne2k_pci_remove_one,
+ id_table: ne2k_pci_tbl,
+};
+
+
+static int __init ne2k_pci_init(void)
+{
+ int rc;
+
+ MOD_INC_USE_COUNT;
+ lock_8390_module();
+
+ rc = pci_module_init (&ne2k_driver);
+
+ /* XXX should this test CONFIG_HOTPLUG like pci_module_init? */
+ if (rc <= 0)
+ unlock_8390_module();
+
+ MOD_DEC_USE_COUNT;
+
+ return rc;
+}
+
+
+static void __exit ne2k_pci_cleanup(void)
+{
+ pci_unregister_driver (&ne2k_driver);
+ unlock_8390_module();
+}
+
+module_init(ne2k_pci_init);
+module_exit(ne2k_pci_cleanup);
+
\f
/*
* Local variables:
if [ "$CONFIG_CARDBUS" = "y" ]; then
dep_tristate ' 3Com 3c575 CardBus support' CONFIG_PCMCIA_3C575 m
- dep_tristate ' Xircom Tulip-like CardBus support' CONFIG_PCMCIA_XIRTULIP m
+ tristate ' Xircom Tulip-like CardBus support' CONFIG_PCMCIA_XIRTULIP
fi
bool 'Pcmcia Wireless LAN' CONFIG_NET_PCMCIA_RADIO
int flags;
void (*media_timer)(unsigned long data);
} tulip_tbl[] = {
-#if 0 /* these entries conflict with regular tulip driver */
{ "Digital DC21040 Tulip", 128, 0x0001ebef, 0, tulip_timer },
{ "Digital DC21041 Tulip", 128, 0x0001ebef, HAS_MEDIA_TABLE, tulip_timer },
{ "Digital DS21140 Tulip", 128, 0x0001ebef,
MC_HASH_ONLY, comet_timer },
{ "Compex 9881 PMAC", 128, 0x0001ebef,
HAS_MII | HAS_MEDIA_TABLE | CSR12_IN_SROM, mxic_timer },
-#endif
{ "Xircom Cardbus Adapter (DEC 21143 compatible mode)", 128, 0x0801fbff,
HAS_MII | HAS_ACPI, tulip_timer },
{0},
};
/* This matches the table above. Note 21142 == 21143. */
enum chips {
-#if 0 /* these entries conflict with regular tulip driver */
DC21040=0, DC21041=1, DC21140=2, DC21142=3, DC21143=3,
LC82C168, MX98713, MX98715, MX98725, AX88140, PNIC2, COMET, COMPEX9881,
X3201_3,
-#else
- X3201_3 = 0,
-#endif
};
/* A full-duplex map for media types. */
static int shaper_neigh_setup(struct neighbour *n)
{
+#ifdef CONFIG_INET
if (n->nud_state == NUD_NONE) {
n->ops = &arp_broken_ops;
n->output = n->ops->output;
}
+#endif
return 0;
}
static int shaper_neigh_setup_dev(struct net_device *dev, struct neigh_parms *p)
{
+#ifdef CONFIG_INET
if (p->tbl->family == AF_INET) {
p->neigh_setup = shaper_neigh_setup;
p->ucast_probes = 0;
p->mcast_probes = 0;
}
+#endif
return 0;
}
* First release to the public
* 03/03/00 - Merged to kernel, indented -kr -i8 -bri0, fixed some missing
* malloc free checks, reviewed code. <alan@redhat.com>
+ * 03/13/00 - Added spinlocks for smp
*
* To Do:
*
#include <linux/stddef.h>
#include <linux/init.h>
#include <linux/pci.h>
+#include <linux/spinlock.h>
#include <net/checksum.h>
#include <asm/io.h>
* Official releases will only have an a.b.c version number format.
*/
-static char *version = "LanStreamer.c v0.1.0 12/10/99 - Mike Sullivan";
+static char *version = "LanStreamer.c v0.3.1 03/13/00 - Mike Sullivan";
static char *open_maj_error[] = {
"No error", "Lobe Media Test", "Physical Insertion",
/* Check to see if io has been allocated, if so, we've already done this card,
so continue on the card discovery loop */
- if (check_region(pci_device->resource[0].start, STREAMER_IO_SPACE))
+ if (check_region(pci_device->resource[0].start & (~3), STREAMER_IO_SPACE))
{
card_no++;
continue;
break;
}
memset(streamer_priv, 0, sizeof(struct streamer_private));
+ init_waitqueue_head(&streamer_priv->srb_wait);
+ init_waitqueue_head(&streamer_priv->trb_wait);
#ifndef MODULE
dev = init_trdev(dev, 0);
if(dev==NULL)
pci_device, dev, dev->priv);
#endif
dev->irq = pci_device->irq;
- dev->base_addr = pci_device->resource[0].start;
+ dev->base_addr = pci_device->resource[0].start & (~3);
dev->init = &streamer_init;
+ streamer_priv->streamer_card_name = (char *)pci_device->resource[0].name;
streamer_priv->streamer_mmio = ioremap(pci_device->resource[1].start, 256);
- init_waitqueue_head(&streamer_priv->srb_wait);
- init_waitqueue_head(&streamer_priv->trb_wait);
+
if ((pkt_buf_sz[card_no] < 100) || (pkt_buf_sz[card_no] > 18000))
streamer_priv->pkt_buf_sz = PKT_BUF_SZ;
else
streamer_priv->streamer_ring_speed = ringspeed[card_no];
streamer_priv->streamer_message_level = message_level[card_no];
- streamer_priv->streamer_multicast_set = 0;
if (streamer_init(dev) == -1) {
unregister_netdevice(dev);
}
-static int __init streamer_init(struct net_device *dev)
+static int streamer_reset(struct net_device *dev)
{
struct streamer_private *streamer_priv;
__u8 *streamer_mmio;
streamer_priv = (struct streamer_private *) dev->priv;
streamer_mmio = streamer_priv->streamer_mmio;
- printk("%s \n", version);
- printk(KERN_INFO "%s: IBM PCI tokenring card. I/O at %hx, MMIO at %p, using irq %d\n",
- dev->name, (unsigned int) dev->base_addr,
- streamer_priv->streamer_mmio, dev->irq);
-
- request_region(dev->base_addr, STREAMER_IO_SPACE, "streamer");
writew(readw(streamer_mmio + BCTL) | BCTL_SOFTRESET, streamer_mmio + BCTL);
t = jiffies;
/* Hold soft reset bit for a while */
printk(KERN_INFO "%s: skb allocation for diagnostics failed...proceeding\n",
dev->name);
} else {
- streamer_priv->streamer_rx_ring[0].forward = 0;
- streamer_priv->streamer_rx_ring[0].status = 0;
- streamer_priv->streamer_rx_ring[0].buffer = virt_to_bus(skb->data);
- streamer_priv->streamer_rx_ring[0].framelen_buflen = 512; /* streamer_priv->pkt_buf_sz; */
- writel(virt_to_bus(&streamer_priv->streamer_rx_ring[0]), streamer_mmio + RXBDA);
+ struct streamer_rx_desc *rx_ring;
+ u8 *data;
+
+ rx_ring=(struct streamer_rx_desc *)skb->data;
+ data=((u8 *)skb->data)+sizeof(struct streamer_rx_desc);
+ rx_ring->forward=0;
+ rx_ring->status=0;
+ rx_ring->buffer=virt_to_bus(data);
+ rx_ring->framelen_buflen=512;
+ writel(virt_to_bus(rx_ring),streamer_mmio+RXBDA);
}
#if STREAMER_DEBUG
writew(readw(streamer_mmio + LAPWWO) + 6, streamer_mmio + LAPA);
if (readw(streamer_mmio + LAPD)) {
printk(KERN_INFO "tokenring card initialization failed. errorcode : %x\n",
- readw(streamer_mmio + LAPD));
+ ntohs(readw(streamer_mmio + LAPD)));
release_region(dev->base_addr, STREAMER_IO_SPACE);
return -1;
}
printk("UAA resides at %x\n", uaa_addr);
#endif
- /* setup uaa area for access with LAPD */
- writew(uaa_addr, streamer_mmio + LAPA);
-
/* setup uaa area for access with LAPD */
{
int i;
__u16 addr;
writew(uaa_addr, streamer_mmio + LAPA);
for (i = 0; i < 6; i += 2) {
- addr = readw(streamer_mmio + LAPDINC);
- dev->dev_addr[i] = addr & 0xff;
- dev->dev_addr[i + 1] = (addr >> 8) & 0xff;
+ addr=ntohs(readw(streamer_mmio+LAPDINC));
+ dev->dev_addr[i]= (addr >> 8) & 0xff;
+ dev->dev_addr[i+1]= addr & 0xff;
}
#if STREAMER_DEBUG
printk("Adapter address: ");
return 0;
}
+static int __init streamer_init(struct net_device *dev)
+{
+ struct streamer_private *streamer_priv;
+ __u8 *streamer_mmio;
+ int rc;
+
+ streamer_priv=(struct streamer_private *)dev->priv;
+ streamer_mmio=streamer_priv->streamer_mmio;
+
+ spin_lock_init(&streamer_priv->streamer_lock);
+
+	printk(KERN_INFO "%s\n", version);
+	printk(KERN_INFO "%s: %s. I/O at %hx, MMIO at %p, using irq %d\n", dev->name,
+ streamer_priv->streamer_card_name,
+ (unsigned int) dev->base_addr,
+ streamer_priv->streamer_mmio,
+ dev->irq);
+
+ request_region(dev->base_addr, STREAMER_IO_SPACE, "streamer");
+
+ rc=streamer_reset(dev);
+ return rc;
+}
+
+
+
static int streamer_open(struct net_device *dev)
{
struct streamer_private *streamer_priv = (struct streamer_private *) dev->priv;
int i, open_finished = 1;
__u16 srb_word;
__u16 srb_open;
+ int rc;
+ if (readw(streamer_mmio+BMCTL_SUM) & BMCTL_RX_ENABLED) {
+ rc=streamer_reset(dev);
+ }
if (request_irq(dev->irq, &streamer_interrupt, SA_SHIRQ, "streamer", dev)) {
return -EAGAIN;
writew(0, streamer_mmio + LAPDINC);
}
- writew(readw(streamer_mmio + LAPWWO), streamer_mmio + LAPA);
- writew(SRB_OPEN_ADAPTER, streamer_mmio + LAPDINC); /* open */
+ writew(readw(streamer_mmio+LAPWWO),streamer_mmio+LAPA);
+ writew(htons(SRB_OPEN_ADAPTER<<8),streamer_mmio+LAPDINC) ; /* open */
+ writew(htons(STREAMER_CLEAR_RET_CODE<<8),streamer_mmio+LAPDINC);
writew(STREAMER_CLEAR_RET_CODE, streamer_mmio + LAPDINC);
writew(readw(streamer_mmio + LAPWWO) + 8, streamer_mmio + LAPA);
#if STREAMER_NETWORK_MONITOR
/* If Network Monitor, instruct card to copy MAC frames through the ARB */
- writew(ntohs(OPEN_ADAPTER_ENABLE_FDX | OPEN_ADAPTER_PASS_ADC_MAC | OPEN_ADAPTER_PASS_ATT_MAC | OPEN_ADAPTER_PASS_BEACON), streamer_mmio + LAPDINC); /* offset 8 word contains open options */
+ writew(htons(OPEN_ADAPTER_ENABLE_FDX | OPEN_ADAPTER_PASS_ADC_MAC | OPEN_ADAPTER_PASS_ATT_MAC | OPEN_ADAPTER_PASS_BEACON), streamer_mmio + LAPDINC); /* offset 8 word contains open options */
#else
- writew(ntohs(OPEN_ADAPTER_ENABLE_FDX), streamer_mmio + LAPDINC); /* Offset 8 word contains Open.Options */
+ writew(htons(OPEN_ADAPTER_ENABLE_FDX), streamer_mmio + LAPDINC); /* Offset 8 word contains Open.Options */
#endif
if (streamer_priv->streamer_laa[0]) {
writew(readw(streamer_mmio + LAPWWO) + 12, streamer_mmio + LAPA);
- writew(((__u16 *) (streamer_priv->streamer_laa))[0], streamer_mmio + LAPDINC); /* offset 12 word */
- writew(((__u16 *) (streamer_priv->streamer_laa))[2], streamer_mmio + LAPDINC); /* offset 14 word */
- writew(((__u16 *) (streamer_priv->streamer_laa))[4], streamer_mmio + LAPDINC); /* offset 16 word */
+ writew(htons((streamer_priv->streamer_laa[0] << 8) |
+ streamer_priv->streamer_laa[1]),streamer_mmio+LAPDINC);
+ writew(htons((streamer_priv->streamer_laa[2] << 8) |
+ streamer_priv->streamer_laa[3]),streamer_mmio+LAPDINC);
+ writew(htons((streamer_priv->streamer_laa[4] << 8) |
+ streamer_priv->streamer_laa[5]),streamer_mmio+LAPDINC);
memcpy(dev->dev_addr, streamer_priv->streamer_laa, dev->addr_len);
}
* timed out.
*/
writew(srb_open + 2, streamer_mmio + LAPA);
- srb_word = readw(streamer_mmio + LAPD) & 0xFF;
+ srb_word = ntohs(readw(streamer_mmio + LAPD)) & 0xFF;
if (srb_word == STREAMER_CLEAR_RET_CODE) {
printk(KERN_WARNING "%s: Adapter Open time out or error.\n",
dev->name);
} while (!(open_finished)); /* Will only loop if ring speed mismatch re-open attempted && autosense is on */
writew(srb_open + 18, streamer_mmio + LAPA);
- srb_word = readw(streamer_mmio + LAPD) & 0xFF;
+ srb_word=ntohs(readw(streamer_mmio+LAPD)) >> 8;
if (srb_word & (1 << 3))
if (streamer_priv->streamer_message_level)
printk(KERN_INFO "%s: Opened in FDX Mode\n", dev->name);
writew(~BMCTL_RX_DIS, streamer_mmio + BMCTL_RUM);
/* setup rx descriptors */
+ streamer_priv->streamer_rx_ring=
+ kmalloc( sizeof(struct streamer_rx_desc)*
+ STREAMER_RX_RING_SIZE,GFP_KERNEL);
+ if (!streamer_priv->streamer_rx_ring) {
+		printk(KERN_WARNING "%s: ALLOC of streamer rx ring FAILED\n",dev->name);
+ return -EIO;
+ }
+
for (i = 0; i < STREAMER_RX_RING_SIZE; i++) {
struct sk_buff *skb;
/* setup tx ring */
+ streamer_priv->streamer_tx_ring=kmalloc(sizeof(struct streamer_tx_desc)*
+ STREAMER_TX_RING_SIZE,GFP_KERNEL);
+ if (!streamer_priv->streamer_tx_ring) {
+		printk(KERN_WARNING "%s: ALLOC of streamer_tx_ring FAILED\n",dev->name);
+ return -EIO;
+ }
+
writew(~BMCTL_TX2_DIS, streamer_mmio + BMCTL_RUM); /* Enables TX channel 2 */
for (i = 0; i < STREAMER_TX_RING_SIZE; i++) {
streamer_priv->streamer_tx_ring[i].forward = virt_to_bus(&streamer_priv->streamer_tx_ring[i + 1]);
memcpy(skb_put(skb, length),bus_to_virt(rx_desc->buffer), length); /* copy this fragment */
streamer_priv->streamer_rx_ring[rx_ring_last_received].status = 0;
streamer_priv->streamer_rx_ring[rx_ring_last_received].framelen_buflen = streamer_priv->pkt_buf_sz;
- streamer_priv->streamer_rx_ring[rx_ring_last_received].buffer = virt_to_bus(skb->data);
+
/* give descriptor back to the adapter */
writel(virt_to_bus(&streamer_priv->streamer_rx_ring[rx_ring_last_received]), streamer_mmio + RXLBDA);
misr = readw(streamer_mmio + MISR_RUM);
writew(~misr, streamer_mmio + MISR_RUM);
- if (!sisr) { /* Interrupt isn't for us */
+ if (!sisr)
+ { /* Interrupt isn't for us */
+ writew(~misr,streamer_mmio+MISR_RUM);
return;
}
+ spin_lock(&streamer_priv->streamer_lock);
+
if ((sisr & (SISR_SRB_REPLY | SISR_ADAPTER_CHECK | SISR_ASB_FREE | SISR_ARB_CMD | SISR_TRB_REPLY))
|| (misr & (MISR_TX2_EOF | MISR_RX_NOBUF | MISR_RX_EOF))) {
if (sisr & SISR_SRB_REPLY) {
writel(readl(streamer_mmio + LAPWWO), streamer_mmio + LAPA);
printk(KERN_WARNING "%s: Words %x:%x:%x:%x:\n",
dev->name, readw(streamer_mmio + LAPDINC),
- readw(streamer_mmio + LAPDINC),
- readw(streamer_mmio + LAPDINC),
- readw(streamer_mmio + LAPDINC));
+ ntohs(readw(streamer_mmio + LAPDINC)),
+ ntohs(readw(streamer_mmio + LAPDINC)),
+ ntohs(readw(streamer_mmio + LAPDINC)));
free_irq(dev->irq, dev);
}
} /* One of the interrupts we want */
writew(SISR_MI, streamer_mmio + SISR_MASK_SUM);
+ spin_unlock(&streamer_priv->streamer_lock) ;
}
-
static int streamer_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct streamer_private *streamer_priv =
(struct streamer_private *) dev->priv;
__u8 *streamer_mmio = streamer_priv->streamer_mmio;
+ unsigned long flags ;
+ spin_lock_irqsave(&streamer_priv->streamer_lock, flags);
netif_stop_queue(dev);
-
+
if (streamer_priv->free_tx_ring_entries) {
streamer_priv->streamer_tx_ring[streamer_priv->tx_ring_free].status = 0;
streamer_priv->streamer_tx_ring[streamer_priv->tx_ring_free].bufcnt_framelen = 0x00010000 | skb->len;
writel(virt_to_bus (&streamer_priv->streamer_tx_ring[streamer_priv->tx_ring_free]),streamer_mmio + TX2LFDA);
streamer_priv->tx_ring_free = (streamer_priv->tx_ring_free + 1) & (STREAMER_TX_RING_SIZE - 1);
- netif_start_queue(dev);
+ netif_wake_queue(dev);
+ spin_unlock_irqrestore(&streamer_priv->streamer_lock,flags);
return 0;
} else {
+ spin_unlock_irqrestore(&streamer_priv->streamer_lock,flags);
return 1;
}
}
unsigned long flags;
int i;
+ netif_stop_queue(dev);
writew(streamer_priv->srb, streamer_mmio + LAPA);
- writew(SRB_CLOSE_ADAPTER, streamer_mmio + LAPDINC);
- writew(STREAMER_CLEAR_RET_CODE, streamer_mmio + LAPDINC);
+ writew(htons(SRB_CLOSE_ADAPTER << 8),streamer_mmio+LAPDINC);
+ writew(htons(STREAMER_CLEAR_RET_CODE << 8), streamer_mmio+LAPDINC);
save_flags(flags);
cli();
streamer_priv->rx_ring_last_received = (streamer_priv->rx_ring_last_received + 1) & (STREAMER_RX_RING_SIZE - 1);
for (i = 0; i < STREAMER_RX_RING_SIZE; i++) {
- dev_kfree_skb(streamer_priv->rx_ring_skb[streamer_priv->rx_ring_last_received]);
+ if (streamer_priv->rx_ring_skb[streamer_priv->rx_ring_last_received]) {
+ dev_kfree_skb(streamer_priv->rx_ring_skb[streamer_priv->rx_ring_last_received]);
+ }
streamer_priv->rx_ring_last_received = (streamer_priv->rx_ring_last_received + 1) & (STREAMER_RX_RING_SIZE - 1);
}
writew(streamer_priv->srb, streamer_mmio + LAPA);
printk("srb): ");
for (i = 0; i < 2; i++) {
- printk("%x ", htons(readw(streamer_mmio + LAPDINC)));
+ printk("%x ", ntohs(readw(streamer_mmio + LAPDINC)));
}
printk("\n");
#endif
- netif_stop_queue(dev);
free_irq(dev->irq, dev);
MOD_DEC_USE_COUNT;
struct streamer_private *streamer_priv =
(struct streamer_private *) dev->priv;
__u8 *streamer_mmio = streamer_priv->streamer_mmio;
- __u8 options = 0, set_mc_list = 0;
- __u16 ata1, ata2;
+ __u8 options = 0;
struct dev_mc_list *dmi;
+ unsigned char dev_mc_address[5];
+ int i;
writel(streamer_priv->srb, streamer_mmio + LAPA);
options = streamer_priv->streamer_copy_all_options;
else
options &= ~(3 << 5);
- if (dev->mc_count) {
- set_mc_list = 1;
- }
-
/* Only issue the srb if there is a change in options */
if ((options ^ streamer_priv->streamer_copy_all_options))
{
/* Now to issue the srb command to alter the copy.all.options */
-
- writew(SRB_MODIFY_RECEIVE_OPTIONS,
- streamer_mmio + LAPDINC);
- writew(STREAMER_CLEAR_RET_CODE, streamer_mmio + LAPDINC);
- writew(streamer_priv->streamer_receive_options | (options << 8), streamer_mmio + LAPDINC);
- writew(0x414a, streamer_mmio + LAPDINC);
- writew(0x454d, streamer_mmio + LAPDINC);
- writew(0x2053, streamer_mmio + LAPDINC);
+ writew(htons(SRB_MODIFY_RECEIVE_OPTIONS << 8), streamer_mmio+LAPDINC);
+ writew(htons(STREAMER_CLEAR_RET_CODE << 8), streamer_mmio+LAPDINC);
+ writew(htons((streamer_priv->streamer_receive_options << 8) | options),streamer_mmio+LAPDINC);
+ writew(htons(0x4a41),streamer_mmio+LAPDINC);
+ writew(htons(0x4d45),streamer_mmio+LAPDINC);
+ writew(htons(0x5320),streamer_mmio+LAPDINC);
writew(0x2020, streamer_mmio + LAPDINC);
streamer_priv->srb_queued = 2; /* Can't sleep, use srb_bh */
return;
}
- if (set_mc_list ^ streamer_priv->streamer_multicast_set)
- { /* Multicast options have changed */
- dmi = dev->mc_list;
-
- writel(streamer_priv->streamer_addr_table_addr, streamer_mmio + LAPA);
- ata1 = readw(streamer_mmio + LAPDINC);
- ata2 = readw(streamer_mmio + LAPD);
-
- writel(streamer_priv->srb, streamer_mmio + LAPA);
-
- if (set_mc_list)
- {
- /* Turn multicast on */
-
- /* RFC 1469 Says we must support using the functional address C0 00 00 04 00 00
- * We do this with a set functional address mask.
- */
-
- if (!(ata1 & 0x0400)) { /* need to set functional mask */
- writew(SRB_SET_FUNC_ADDRESS, streamer_mmio + LAPDINC);
- writew(STREAMER_CLEAR_RET_CODE, streamer_mmio + LAPDINC);
- writew(0, streamer_mmio + LAPDINC);
- writew(ata1 | 0x0400, streamer_mmio + LAPDINC);
- writew(ata2, streamer_mmio + LAPD);
-
- streamer_priv->srb_queued = 2;
- writel(LISR_SRB_CMD, streamer_mmio + LISR_SUM);
-
- streamer_priv->streamer_multicast_set = 1;
- }
-
- } else { /* Turn multicast off */
-
- if ((ata1 & 0x0400)) { /* Hmmm, need to reset the functional mask */
- writew(SRB_SET_FUNC_ADDRESS, streamer_mmio + LAPDINC);
- writew(STREAMER_CLEAR_RET_CODE, streamer_mmio + LAPDINC);
- writew(0, streamer_mmio + LAPDINC);
- writew(ata1 & ~0x0400, streamer_mmio + LAPDINC);
- writew(ata2, streamer_mmio + LAPD);
-
- streamer_priv->srb_queued = 2;
- writel(LISR_SRB_CMD, streamer_mmio + LISR_SUM);
-
- streamer_priv->streamer_multicast_set = 0;
- }
- }
-
+ /* Set the functional addresses we need for multicast */
+ writel(streamer_priv->srb,streamer_mmio+LAPA);
+ dev_mc_address[0] = dev_mc_address[1] = dev_mc_address[2] = dev_mc_address[3] = 0 ;
+
+ for (i=0,dmi=dev->mc_list;i < dev->mc_count; i++,dmi = dmi->next)
+ {
+ dev_mc_address[0] |= dmi->dmi_addr[2] ;
+ dev_mc_address[1] |= dmi->dmi_addr[3] ;
+ dev_mc_address[2] |= dmi->dmi_addr[4] ;
+ dev_mc_address[3] |= dmi->dmi_addr[5] ;
}
+
+ writew(htons(SRB_SET_FUNC_ADDRESS << 8),streamer_mmio+LAPDINC);
+ writew(htons(STREAMER_CLEAR_RET_CODE << 8), streamer_mmio+LAPDINC);
+ writew(0,streamer_mmio+LAPDINC);
+ writew(htons( (dev_mc_address[0] << 8) | dev_mc_address[1]),streamer_mmio+LAPDINC);
+ writew(htons( (dev_mc_address[2] << 8) | dev_mc_address[3]),streamer_mmio+LAPDINC);
+ streamer_priv->srb_queued = 2 ;
+ writel(LISR_SRB_CMD,streamer_mmio+LISR_SUM);
}
static void streamer_srb_bh(struct net_device *dev)
__u16 srb_word;
writew(streamer_priv->srb, streamer_mmio + LAPA);
- srb_word = readw(streamer_mmio + LAPDINC) & 0xFF;
+ srb_word=ntohs(readw(streamer_mmio+LAPDINC)) >> 8;
switch (srb_word) {
*/
case SRB_MODIFY_RECEIVE_OPTIONS:
- srb_word = readw(streamer_mmio + LAPDINC) & 0xFF;
+ srb_word=ntohs(readw(streamer_mmio+LAPDINC)) >> 8;
+
switch (srb_word) {
case 0x01:
printk(KERN_WARNING "%s: Unrecognized srb command\n", dev->name);
/* SRB_SET_GROUP_ADDRESS - Multicast group setting
*/
case SRB_SET_GROUP_ADDRESS:
- srb_word = readw(streamer_mmio + LAPDINC) & 0xFF;
+ srb_word=ntohs(readw(streamer_mmio+LAPDINC)) >> 8;
switch (srb_word) {
case 0x00:
- streamer_priv->streamer_multicast_set = 1;
- break;
+ break;
case 0x01:
printk(KERN_WARNING "%s: Unrecognized srb command \n",dev->name);
break;
/* SRB_RESET_GROUP_ADDRESS - Remove a multicast address from group list
*/
case SRB_RESET_GROUP_ADDRESS:
- srb_word = readw(streamer_mmio + LAPDINC) & 0xFF;
+ srb_word=ntohs(readw(streamer_mmio+LAPDINC)) >> 8;
switch (srb_word) {
case 0x00:
- streamer_priv->streamer_multicast_set = 0;
- break;
+ break;
case 0x01:
printk(KERN_WARNING "%s: Unrecognized srb command \n", dev->name);
break;
*/
case SRB_SET_FUNC_ADDRESS:
- srb_word = readw(streamer_mmio + LAPDINC) & 0xFF;
+ srb_word=ntohs(readw(streamer_mmio+LAPDINC)) >> 8;
switch (srb_word) {
case 0x00:
if (streamer_priv->streamer_message_level)
*/
case SRB_READ_LOG:
- srb_word = readw(streamer_mmio + LAPDINC) & 0xFF;
+ srb_word=ntohs(readw(streamer_mmio+LAPDINC)) >> 8;
switch (srb_word) {
case 0x00:
{
/* SRB_READ_SR_COUNTERS - Read and reset the source routing bridge related counters */
case SRB_READ_SR_COUNTERS:
- srb_word = readw(streamer_mmio + LAPDINC) & 0xFF;
+ srb_word=ntohs(readw(streamer_mmio+LAPDINC)) >> 8;
switch (srb_word) {
case 0x00:
if (streamer_priv->streamer_message_level)
struct sockaddr *saddr = addr;
struct streamer_private *streamer_priv = (struct streamer_private *) dev->priv;
- if (netif_running(dev)) {
+ if (netif_running(dev))
+ {
printk(KERN_WARNING "%s: Cannot set mac/laa address while card is open\n", dev->name);
- return -EBUSY;
+ return -EIO;
}
memcpy(streamer_priv->streamer_laa, saddr->sa_data, dev->addr_len);
#endif
writew(streamer_priv->arb, streamer_mmio + LAPA);
- arb_word = readw(streamer_mmio + LAPD) & 0xFF;
-
+ arb_word=ntohs(readw(streamer_mmio+LAPD)) >> 8;
+
if (arb_word == ARB_RECEIVE_DATA) { /* Receive.data, MAC frames */
writew(streamer_priv->arb + 6, streamer_mmio + LAPA);
streamer_priv->mac_rx_buffer = buff_off = ntohs(readw(streamer_mmio + LAPDINC));
- header_len = readw(streamer_mmio + LAPDINC) & 0xff; /* 802.5 Token-Ring Header Length */
+ header_len=ntohs(readw(streamer_mmio+LAPDINC)) >> 8; /* 802.5 Token-Ring Header Length */
frame_len = ntohs(readw(streamer_mmio + LAPDINC));
#if STREAMER_DEBUG
__u16 len;
writew(ntohs(buff_off), streamer_mmio + LAPA); /*setup window to frame data */
- next = ntohs(readw(streamer_mmio + LAPDINC));
+		next = ntohs(readw(streamer_mmio + LAPDINC));
status =
ntohs(readw(streamer_mmio + LAPDINC)) & 0xff;
len = ntohs(readw(streamer_mmio + LAPDINC));
int i;
__u16 rx_word;
- writew(ntohs(buff_off), streamer_mmio + LAPA); /* setup window to frame data */
+ writew(htons(buff_off), streamer_mmio + LAPA); /* setup window to frame data */
next_ptr = ntohs(readw(streamer_mmio + LAPDINC));
readw(streamer_mmio + LAPDINC); /* read thru status word */
buffer_len = ntohs(readw(streamer_mmio + LAPDINC));
i = 0;
while (i < buffer_len) {
- rx_word = readw(streamer_mmio + LAPDINC);
- frame_data[i] = rx_word & 0xff;
- frame_data[i + 1] = (rx_word >> 8) & 0xff;
+ rx_word=ntohs(readw(streamer_mmio+LAPDINC));
+ frame_data[i]=rx_word >> 8;
+ frame_data[i+1]=rx_word & 0xff;
i += 2;
}
writew(streamer_priv->asb, streamer_mmio + LAPA);
- writew(ASB_RECEIVE_DATA, streamer_mmio + LAPDINC); /* Receive data */
- writew(STREAMER_CLEAR_RET_CODE, streamer_mmio + LAPDINC); /* Necessary ?? */
+ writew(htons(ASB_RECEIVE_DATA << 8), streamer_mmio+LAPDINC);
+ writew(htons(STREAMER_CLEAR_RET_CODE << 8), streamer_mmio+LAPDINC);
writew(0, streamer_mmio + LAPDINC);
- writew(ntohs(streamer_priv->mac_rx_buffer), streamer_mmio + LAPD);
+ writew(htons(streamer_priv->mac_rx_buffer), streamer_mmio + LAPD);
writel(LISR_ASB_REPLY | LISR_ASB_FREE_REQ, streamer_priv->streamer_mmio + LISR_SUM);
} else if (arb_word == ARB_LAN_CHANGE_STATUS) { /* Lan.change.status */
writew(streamer_priv->arb + 6, streamer_mmio + LAPA);
lan_status = ntohs(readw(streamer_mmio + LAPDINC));
- fdx_prot_error = readw(streamer_mmio + LAPD) & 0xFF;
-
+ fdx_prot_error = ntohs(readw(streamer_mmio+LAPD)) >> 8;
+
/* Issue ARB Free */
writew(LISR_ARB_FREE, streamer_priv->streamer_mmio + LISR_SUM);
- lan_status_diff = streamer_priv->streamer_lan_status ^ lan_status;
+ lan_status_diff = (streamer_priv->streamer_lan_status ^ lan_status) &
+ lan_status;
if (lan_status_diff & (LSC_LWF | LSC_ARW | LSC_FPE | LSC_RR))
{
/* Issue READ.LOG command */
writew(streamer_priv->srb, streamer_mmio + LAPA);
- writew(SRB_READ_LOG, streamer_mmio + LAPDINC);
- writew(STREAMER_CLEAR_RET_CODE, streamer_mmio + LAPDINC);
+ writew(htons(SRB_READ_LOG << 8),streamer_mmio+LAPDINC);
+ writew(htons(STREAMER_CLEAR_RET_CODE << 8), streamer_mmio+LAPDINC);
writew(0, streamer_mmio + LAPDINC);
streamer_priv->srb_queued = 2; /* Can't sleep, use srb_bh */
/* Issue a READ.SR.COUNTERS */
writew(streamer_priv->srb, streamer_mmio + LAPA);
- writew(SRB_READ_SR_COUNTERS,
- streamer_mmio + LAPDINC);
- writew(STREAMER_CLEAR_RET_CODE,
- streamer_mmio + LAPDINC);
+ writew(htons(SRB_READ_SR_COUNTERS << 8),
+ streamer_mmio+LAPDINC);
+ writew(htons(STREAMER_CLEAR_RET_CODE << 8),
+ streamer_mmio+LAPDINC);
streamer_priv->srb_queued = 2; /* Can't sleep, use srb_bh */
writew(LISR_SRB_CMD, streamer_mmio + LISR_SUM);
/* Dropped through the first time */
writew(streamer_priv->asb, streamer_mmio + LAPA);
- writew(ASB_RECEIVE_DATA, streamer_mmio + LAPDINC); /* Receive data */
- writew(STREAMER_CLEAR_RET_CODE, streamer_mmio + LAPDINC); /* Necessary ?? */
+ writew(htons(ASB_RECEIVE_DATA << 8),streamer_mmio+LAPDINC);
+ writew(htons(STREAMER_CLEAR_RET_CODE << 8), streamer_mmio+LAPDINC);
writew(0, streamer_mmio + LAPDINC);
- writew(ntohs(streamer_priv->mac_rx_buffer), streamer_mmio + LAPD);
+ writew(htons(streamer_priv->mac_rx_buffer), streamer_mmio + LAPD);
writel(LISR_ASB_REPLY | LISR_ASB_FREE_REQ, streamer_priv->streamer_mmio + LISR_SUM);
streamer_priv->asb_queued = 2;
if (streamer_priv->asb_queued == 2) {
__u8 rc;
writew(streamer_priv->asb + 2, streamer_mmio + LAPA);
- rc = readw(streamer_mmio + LAPD) & 0xff;
+ rc=ntohs(readw(streamer_mmio+LAPD)) >> 8;
switch (rc) {
case 0x01:
printk(KERN_WARNING "%s: Unrecognized command code \n", dev->name);
off_t pos = 0;
int size;
struct net_device *dev;
size = sprintf(buffer, "IBM LanStreamer/MPC Chipset Token Ring Adapters\n");
for (dev = dev_base; dev != NULL; dev = dev->next)
{
- if (dev->base_addr == (pci_device->base_address[0] & (~3)))
- { /* Yep, a Streamer device */
+		if (dev->base_addr == (pci_device->resource[0].start & (~3)))
+ { /* Yep, a Streamer device */
size = sprintf_info(buffer + len, dev);
len += size;
pos = begin + len;
for (i = 0; i < 14; i += 2) {
__u16 io_word;
__u8 *datap = (__u8 *) & sat;
- io_word = readw(streamer_mmio + LAPDINC);
- datap[size] = io_word & 0xff;
- datap[size + 1] = (io_word >> 8) & 0xff;
+ io_word=ntohs(readw(streamer_mmio+LAPDINC));
+ datap[size]=io_word >> 8;
+ datap[size+1]=io_word & 0xff;
}
writew(streamer_priv->streamer_parms_addr, streamer_mmio + LAPA);
for (i = 0; i < 68; i += 2) {
__u16 io_word;
__u8 *datap = (__u8 *) & spt;
- io_word = readw(streamer_mmio + LAPDINC);
- datap[size] = io_word & 0xff;
- datap[size + 1] = (io_word >> 8) & 0xff;
+ io_word=ntohs(readw(streamer_mmio+LAPDINC));
+ datap[size]=io_word >> 8;
+ datap[size+1]=io_word & 0xff;
}
#if STREAMER_NETWORK_MONITOR
#ifdef CONFIG_PROC_FS
- struct proc_dir_entry *ent;
-
- ent = create_proc_entry("net/streamer_tr", 0, 0);
- ent->read_proc = &streamer_proc_info;
+ create_proc_read_entry("net/streamer_tr",0,0,streamer_proc_info,NULL);
#endif
#endif
for (i = 0; (i < STREAMER_MAX_ADAPTERS); i++)
void cleanup_module(void)
{
int i;
+ struct streamer_private *streamer_priv;
for (i = 0; i < STREAMER_MAX_ADAPTERS; i++)
if (dev_streamer[i]) {
unregister_trdev(dev_streamer[i]);
release_region(dev_streamer[i]->base_addr, STREAMER_IO_SPACE);
+ streamer_priv=(struct streamer_private *)dev_streamer[i]->priv;
+ kfree_s(streamer_priv->streamer_rx_ring,
+ sizeof(struct streamer_rx_desc)*STREAMER_RX_RING_SIZE);
+ kfree_s(streamer_priv->streamer_tx_ring,
+ sizeof(struct streamer_tx_desc)*STREAMER_TX_RING_SIZE);
kfree_s(dev_streamer[i]->priv, sizeof(struct streamer_private));
kfree_s(dev_streamer[i], sizeof(struct net_device));
dev_streamer[i] = NULL;
#define BMCTL_TX1_DIS (1<<14)
#define BMCTL_TX2_DIS (1<<10)
#define BMCTL_RX_DIS (1<<6)
+#define BMCTL_RX_ENABLED (1<<5)
#define RXLBDA 0x90
#define RXBDA 0x94
__u16 asb;
__u8 *streamer_mmio;
+ char *streamer_card_name;
+
+ spinlock_t streamer_lock;
volatile int srb_queued; /* True if an SRB is still posted */
wait_queue_head_t srb_wait;
volatile int asb_queued; /* True if an ASB is posted */
volatile int trb_queued; /* True if a TRB is posted */
- wait_queue_head_t trb_wait;
+ wait_queue_head_t trb_wait;
- struct streamer_rx_desc streamer_rx_ring[STREAMER_RX_RING_SIZE];
- struct streamer_tx_desc streamer_tx_ring[STREAMER_TX_RING_SIZE];
+ struct streamer_rx_desc *streamer_rx_ring;
+ struct streamer_tx_desc *streamer_tx_ring;
struct sk_buff *tx_ring_skb[STREAMER_TX_RING_SIZE],
*rx_ring_skb[STREAMER_RX_RING_SIZE];
int tx_ring_free, tx_ring_last_status, rx_ring_last_received,
__u16 pkt_buf_sz;
__u8 streamer_receive_options, streamer_copy_all_options,
streamer_message_level;
- __u8 streamer_multicast_set;
__u16 streamer_addr_table_addr, streamer_parms_addr;
__u16 mac_rx_buffer;
__u8 streamer_laa[6];
*/
#include "tulip.h"
-#include <asm/io.h>
static u16 t21142_csr13[] = { 0x0001, 0x0009, 0x0009, 0x0000, 0x0001, };
#include "tulip.h"
#include <linux/init.h>
-#include <asm/io.h>
#include <asm/unaligned.h>
*/
#include "tulip.h"
-#include <asm/io.h>
#include <linux/etherdevice.h>
#include <linux/pci.h>
*/
#include "tulip.h"
-#include <asm/io.h>
/* This is a mysterious value that can be written to CSR11 in the 21040 (only)
#include <linux/kernel.h>
#include "tulip.h"
-#include <asm/io.h>
void pnic_do_nway(struct net_device *dev)
*/
#include "tulip.h"
-#include <asm/io.h>
void tulip_timer(unsigned long data)
break;
}
break;
- case DC21140: case DC21142: case MX98713: case COMPEX9881: default: {
+ case DC21140:
+ case DC21142:
+ case MX98713:
+ case COMPEX9881:
+ default: {
struct medialeaf *mleaf;
unsigned char *p;
if (tp->mtable == NULL) { /* No EEPROM info, use generic code. */
#include <linux/spinlock.h>
#include <linux/netdevice.h>
#include <linux/timer.h>
-
+#include <asm/io.h>
struct tulip_chip_table {
char *chip_name;
HAS_ACPI = 0x10,
MC_HASH_ONLY = 0x20, /* Hash-only multicast filter. */
HAS_PNICNWAY = 0x80,
- HAS_NWAY143 = 0x40, /* Uses internal NWay xcvr. */
- HAS_8023X = 0x100,
+ HAS_NWAY = 0x40, /* Uses internal NWay xcvr. */
+ HAS_INTR_MITIGATION = 0x100,
+ IS_ASIX = 0x200,
+ HAS_8023X = 0x400,
};
};
+enum t21041_csr13_bits {
+ csr13_eng = (0xEF0<<4), /* for eng. purposes only, hardcode at EF0h */
+ csr13_aui = (1<<3), /* clear to force 10bT, set to force AUI/BNC */
+ csr13_cac = (1<<2), /* CSR13/14/15 autoconfiguration */
+ csr13_srl = (1<<0), /* When reset, resets all SIA functions, machines */
+
+ csr13_mask_auibnc = (csr13_eng | csr13_aui | csr13_cac | csr13_srl),
+ csr13_mask_10bt = (csr13_eng | csr13_cac | csr13_srl),
+};
+
+
/* Keep the ring sizes a power of two for efficiency.
Making the Tx ring too large decreases the effectiveness of channel
bonding and packet priority.
*/
-static const char version[] = "Linux Tulip driver version 0.9.4.1 (Mar 18, 2000)\n";
+static const char version[] = "Linux Tulip driver version 0.9.4.2 (Mar 21, 2000)\n";
#include <linux/module.h>
#include "tulip.h"
#include <linux/init.h>
#include <linux/etherdevice.h>
#include <linux/delay.h>
-#include <asm/io.h>
#include <asm/unaligned.h>
struct tulip_chip_table tulip_tbl[] = {
{ "Digital DC21040 Tulip", 128, 0x0001ebef, 0, tulip_timer },
- { "Digital DC21041 Tulip", 128, 0x0001ebff, HAS_MEDIA_TABLE, tulip_timer },
+ { "Digital DC21041 Tulip", 128, 0x0001ebef,
+ HAS_MEDIA_TABLE | HAS_NWAY, tulip_timer },
{ "Digital DS21140 Tulip", 128, 0x0001ebef,
HAS_MII | HAS_MEDIA_TABLE | CSR12_IN_SROM, tulip_timer },
{ "Digital DS21143 Tulip", 128, 0x0801fbff,
- HAS_MII | HAS_MEDIA_TABLE | ALWAYS_CHECK_MII | HAS_ACPI | HAS_NWAY143,
- t21142_timer },
+ HAS_MII | HAS_MEDIA_TABLE | ALWAYS_CHECK_MII | HAS_ACPI | HAS_NWAY
+ | HAS_INTR_MITIGATION, t21142_timer },
{ "Lite-On 82c168 PNIC", 256, 0x0001ebef,
HAS_MII | HAS_PNICNWAY, pnic_timer },
{ "Macronix 98713 PMAC", 128, 0x0001ebef,
{ "Macronix 98725 PMAC", 256, 0x0001ebef,
HAS_MEDIA_TABLE, mxic_timer },
{ "ASIX AX88140", 128, 0x0001fbff,
- HAS_MII | HAS_MEDIA_TABLE | CSR12_IN_SROM | MC_HASH_ONLY, tulip_timer },
+ HAS_MII | HAS_MEDIA_TABLE | CSR12_IN_SROM | MC_HASH_ONLY | IS_ASIX, tulip_timer },
{ "Lite-On PNIC-II", 256, 0x0801fbff,
- HAS_MII | HAS_NWAY143 | HAS_8023X, t21142_timer },
+ HAS_MII | HAS_NWAY | HAS_8023X, t21142_timer },
{ "ADMtek Comet", 256, 0x0001abef,
MC_HASH_ONLY, comet_timer },
{ "Compex 9881 PMAC", 128, 0x0001ebef,
HAS_MII | HAS_MEDIA_TABLE | CSR12_IN_SROM, mxic_timer },
{ "Intel DS21145 Tulip", 128, 0x0801fbff,
- HAS_MII | HAS_MEDIA_TABLE | ALWAYS_CHECK_MII | HAS_NWAY143,
+ HAS_MII | HAS_MEDIA_TABLE | ALWAYS_CHECK_MII | HAS_ACPI | HAS_NWAY,
t21142_timer },
{0},
};
{ 0x11AD, 0x0002, PCI_ANY_ID, PCI_ANY_ID, 0, 0, LC82C168 },
{ 0x10d9, 0x0512, PCI_ANY_ID, PCI_ANY_ID, 0, 0, MX98713 },
{ 0x10d9, 0x0531, PCI_ANY_ID, PCI_ANY_ID, 0, 0, MX98715 },
- { 0x10d9, 0x0531, PCI_ANY_ID, PCI_ANY_ID, 0, 0, MX98725 },
+/* { 0x10d9, 0x0531, PCI_ANY_ID, PCI_ANY_ID, 0, 0, MX98725 },*/
{ 0x125B, 0x1400, PCI_ANY_ID, PCI_ANY_ID, 0, 0, AX88140 },
{ 0x11AD, 0xc115, PCI_ANY_ID, PCI_ANY_ID, 0, 0, PNIC2 },
{ 0x1317, 0x0981, PCI_ANY_ID, PCI_ANY_ID, 0, 0, COMET },
+ { 0x1317, 0x0985, PCI_ANY_ID, PCI_ANY_ID, 0, 0, COMET },
+ { 0x1317, 0x1985, PCI_ANY_ID, PCI_ANY_ID, 0, 0, COMET },
{ 0x11F6, 0x9881, PCI_ANY_ID, PCI_ANY_ID, 0, 0, COMPEX9881 },
{ 0x8086, 0x0039, PCI_ANY_ID, PCI_ANY_ID, 0, 0, I21145 },
+ { 0x1282, 0x9100, PCI_ANY_ID, PCI_ANY_ID, 0, 0, DC21140 },
+ { 0x1282, 0x9102, PCI_ANY_ID, PCI_ANY_ID, 0, 0, DC21140 },
+ { 0x1113, 0x1217, PCI_ANY_ID, PCI_ANY_ID, 0, 0, MX98715 },
{0},
};
MODULE_DEVICE_TABLE(pci, tulip_pci_tbl);
u8 t21040_csr13[] = {2,0x0C,8,4, 4,0,0,0, 0,0,0,0, 4,0,0,0};
/* 21041 transceiver register settings: 10-T, 10-2, AUI, 10-T, 10T-FD*/
-u16 t21041_csr13[] = { 0xEF01, 0xEF09, 0xEF09, 0xEF01, 0xEF09, };
+u16 t21041_csr13[] = {
+ csr13_mask_10bt, /* 10-T */
+ csr13_mask_auibnc, /* 10-2 */
+ csr13_mask_auibnc, /* AUI */
+ csr13_mask_10bt, /* 10-T */
+ csr13_mask_10bt, /* 10T-FD */
+};
u16 t21041_csr14[] = { 0xFFFF, 0xF7FD, 0xF7FD, 0x7F3F, 0x7F3D, };
u16 t21041_csr15[] = { 0x0008, 0x0006, 0x000E, 0x0008, 0x0008, };
/* Reset the chip, holding bit 0 set at least 50 PCI cycles. */
outl(0x00000001, ioaddr + CSR0);
+ udelay(100);
/* Deassert reset.
Wait the specified 50 PCI cycles after a reset by initializing
Tx and Rx queues and the address filter list. */
outl(tp->csr0, ioaddr + CSR0);
+ udelay(100);
if (tulip_debug > 1)
printk(KERN_DEBUG "%s: tulip_up(), irq==%d.\n", dev->name, dev->irq);
case SIOCDEVPRIVATE: /* Get the address of the PHY in use. */
if (tp->mii_cnt)
data[0] = phy;
- else if (tp->flags & HAS_NWAY143)
+ else if (tp->flags & HAS_NWAY)
data[0] = 32;
else if (tp->chip_id == COMET)
data[0] = 1;
else
return -ENODEV;
case SIOCDEVPRIVATE+1: /* Read the specified MII register. */
- if (data[0] == 32 && (tp->flags & HAS_NWAY143)) {
+ if (data[0] == 32 && (tp->flags & HAS_NWAY)) {
int csr12 = inl(ioaddr + CSR12);
int csr14 = inl(ioaddr + CSR14);
switch (data[1]) {
case SIOCDEVPRIVATE+2: /* Write the specified MII register */
if (!capable(CAP_NET_ADMIN))
return -EPERM;
- if (data[0] == 32 && (tp->flags & HAS_NWAY143)) {
+ if (data[0] == 32 && (tp->flags & HAS_NWAY)) {
if (data[1] == 5)
tp->to_advertise = data[2];
} else {
/* Clear the missed-packet counter. */
(volatile int)inl(ioaddr + CSR8);
- if (chip_idx == DC21041 && inl(ioaddr + CSR9) & 0x8000) {
- printk(" 21040 compatible mode,");
- chip_idx = DC21040;
+ if (chip_idx == DC21041) {
+ if (inl(ioaddr + CSR9) & 0x8000) {
+ printk(" 21040 compatible mode,");
+ chip_idx = DC21040;
+ } else {
+ printk(" 21041 mode,");
+ }
}
/* The station address ROM is read byte serially. The register must
dev->do_ioctl = private_ioctl;
dev->set_multicast_list = set_rx_mode;
- if ((tp->flags & HAS_NWAY143) || tp->chip_id == DC21041)
+ if ((tp->flags & HAS_NWAY) || tp->chip_id == DC21041)
tp->link_change = t21142_lnk_change;
else if (tp->flags & HAS_PNICNWAY)
tp->link_change = pnic_lnk_change;
LIST_HEAD(pci_root_buses);
LIST_HEAD(pci_devices);
+/**
+ * pci_find_slot - locate PCI device from a given PCI slot
+ * @bus: number of PCI bus on which desired PCI device resides
+ * @devfn: number of PCI slot in which desired PCI device resides
+ *
+ * Given a PCI bus and slot number, the desired PCI device is
+ * located in the system global list of PCI devices. If the device
+ * is found, a pointer to its data structure is returned. If no
+ * device is found, %NULL is returned.
+ */
struct pci_dev *
pci_find_slot(unsigned int bus, unsigned int devfn)
{
}
+/**
+ * pci_find_device - begin or continue searching for a PCI device by vendor/device id
+ * @vendor: PCI vendor id to match, or %PCI_ANY_ID to match all vendor ids
+ * @device: PCI device id to match, or %PCI_ANY_ID to match all device ids
+ * @from: Previous PCI device found in search, or %NULL for new search.
+ *
+ * Iterates through the list of known PCI devices. If a PCI device is
+ * found with a matching @vendor and @device, a pointer to its device structure is
+ * returned. Otherwise, %NULL is returned.
+ *
+ * A new search is initiated by passing %NULL to the @from argument.
+ * Otherwise, if @from is not %NULL, searches continue from that point.
+ */
struct pci_dev *
pci_find_device(unsigned int vendor, unsigned int device, const struct pci_dev *from)
{
}
+/**
+ * pci_find_class - begin or continue searching for a PCI device by class
+ * @class: search for a PCI device with this class designation
+ * @from: Previous PCI device found in search, or %NULL for new search.
+ *
+ * Iterates through the list of known PCI devices. If a PCI device is
+ * found with a matching @class, a pointer to its device structure is
+ * returned. Otherwise, %NULL is returned.
+ *
+ * A new search is initiated by passing %NULL to the @from argument.
+ * Otherwise, if @from is not %NULL, searches continue from that point.
+ */
struct pci_dev *
pci_find_class(unsigned int class, const struct pci_dev *from)
{
}
-/*
+/**
+ * pci_find_parent_resource - return resource region of parent bus of given region
+ * @dev: PCI device structure containing the resources to be searched
+ * @res: child resource record for which parent is sought
+ *
 * For a given resource region of a given device, return the resource
* region of parent bus the given region is contained in or where
* it should be allocated from.
return best;
}
-/*
+/**
+ * pci_set_power_state - Set power management state of a device.
+ * @dev: PCI device for which PM is set
+ * @new_state: new power management state (0 == D0, 3 == D3, etc.)
+ *
* Set power management state of a device. For transitions from state D3
* it isn't as straightforward as one could assume since many devices forget
* their configuration space during wakeup. Returns old power state.
return old_state;
}
-/*
+/**
+ * pci_enable_device - Initialize device before it's used by a driver.
+ * @dev: PCI device to be initialized
+ *
* Initialize device before it's used by a driver. Ask low-level code
* to enable I/O and memory. Wake up the device if it was suspended.
* Beware, this function can fail.
ints[0] = i - 1;
internal_setup(cur, ints);
- return 0;
+ return 1;
}
static void add_pci_ports(void) {
* Copyright 2000, Jayson C. Vantuyl <vantuyl@csc.smsu.edu>
* and Bryon W. Roche <bryon@csc.smsu.edu>
*
+ * 64-bit addressing added by Kanoj Sarcar <kanoj@sgi.com>
+ * and Leo Dagum <dagum@sgi.com>
+ *
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2, or (at your option) any
#define MBOX3 0x76 /* mailbox 3 */
#define MBOX4 0x78 /* mailbox 4 */
#define MBOX5 0x7a /* mailbox 5 */
+#define MBOX6 0x7c /* mailbox 6 */
+#define MBOX7 0x7e /* mailbox 7 */
/* mailbox command complete status codes */
#define MBOX_COMMAND_COMPLETE 0x4000
#define REQUEST_QUEUE_WAKEUP 0x8005
#define EXECUTION_TIMEOUT_RESET 0x8006
+#ifdef CONFIG_QL_ISP_A64
+#define IOCB_SEGS 2
+#define CONTINUATION_SEGS 5
+#define MAX_CONTINUATION_ENTRIES 254
+#else
+#define IOCB_SEGS 4
+#define CONTINUATION_SEGS 7
+#endif /* CONFIG_QL_ISP_A64 */
+
struct Entry_header {
u_char entry_type;
u_char entry_cnt;
};
/* entry header type commands */
+#ifdef CONFIG_QL_ISP_A64
+#define ENTRY_COMMAND 9
+#define ENTRY_CONTINUATION 0xa
+#else
#define ENTRY_COMMAND 1
#define ENTRY_CONTINUATION 2
+#endif /* CONFIG_QL_ISP_A64 */
+
#define ENTRY_STATUS 3
#define ENTRY_MARKER 4
#define ENTRY_EXTENDED_COMMAND 5
struct dataseg {
u_int d_base;
+#ifdef CONFIG_QL_ISP_A64
+ u_int d_base_hi;
+#endif
u_int d_count;
};
u_short time_out;
u_short segment_cnt;
u_char cdb[12];
- struct dataseg dataseg[4];
+#ifdef CONFIG_QL_ISP_A64
+ u_int rsvd1;
+ u_int rsvd2;
+#endif
+ struct dataseg dataseg[IOCB_SEGS];
};
/* command entry control flag definitions */
struct Continuation_Entry {
struct Entry_header hdr;
+#ifndef CONFIG_QL_ISP_A64
u_int reserved;
- struct dataseg dataseg[7];
+#endif
+ struct dataseg dataseg[CONTINUATION_SEGS];
};
struct Marker_Entry {
#define MBOX_WRITE_FOUR_RAM_WORDS 0x0041
#define MBOX_EXEC_BIOS_IOCB 0x0042
+#ifdef CONFIG_QL_ISP_A64
+#define MBOX_CMD_INIT_REQUEST_QUEUE_64 0x0052
+#define MBOX_CMD_INIT_RESPONSE_QUEUE_64 0x0053
+#endif /* CONFIG_QL_ISP_A64 */
+
#include "qlogicisp_asm.c"
#define PACKB(a, b) (((a)<<4)|(b))
PACKB(1, 2), /* MBOX_RETURN_BIOS_BLOCK_ADDR */
PACKB(6, 1), /* MBOX_WRITE_FOUR_RAM_WORDS */
PACKB(2, 3) /* MBOX_EXEC_BIOS_IOCB */
+#ifdef CONFIG_QL_ISP_A64
+ ,PACKB(0, 0), /* 0x0043 */
+ PACKB(0, 0), /* 0x0044 */
+ PACKB(0, 0), /* 0x0045 */
+ PACKB(0, 0), /* 0x0046 */
+ PACKB(0, 0), /* 0x0047 */
+ PACKB(0, 0), /* 0x0048 */
+ PACKB(0, 0), /* 0x0049 */
+ PACKB(0, 0), /* 0x004a */
+ PACKB(0, 0), /* 0x004b */
+ PACKB(0, 0), /* 0x004c */
+ PACKB(0, 0), /* 0x004d */
+ PACKB(0, 0), /* 0x004e */
+ PACKB(0, 0), /* 0x004f */
+ PACKB(0, 0), /* 0x0050 */
+ PACKB(0, 0), /* 0x0051 */
+ PACKB(8, 8), /* MBOX_CMD_INIT_REQUEST_QUEUE_64 (0x0052) */
+ PACKB(8, 8) /* MBOX_CMD_INIT_RESPONSE_QUEUE_64 (0x0053) */
+#endif /* CONFIG_QL_ISP_A64 */
};
#define MAX_MBOX_COMMAND (sizeof(mbox_param)/sizeof(u_short))
struct Continuation_Entry *cont;
struct Scsi_Host *host;
struct isp1020_hostdata *hostdata;
+ dma_addr_t dma_addr;
ENTER("isp1020_queuecommand");
	/* fill in the first IOCB_SEGS sg entries: */
n = sg_count;
- if (n > 4)
- n = 4;
+ if (n > IOCB_SEGS)
+ n = IOCB_SEGS;
for (i = 0; i < n; i++) {
- ds[i].d_base = cpu_to_le32(sg_dma_address(sg));
+ dma_addr = sg_dma_address(sg);
+ ds[i].d_base = cpu_to_le32((u32) dma_addr);
+#ifdef CONFIG_QL_ISP_A64
+ ds[i].d_base_hi = cpu_to_le32((u32) (dma_addr>>32));
+#endif /* CONFIG_QL_ISP_A64 */
ds[i].d_count = cpu_to_le32(sg_dma_len(sg));
++sg;
}
- sg_count -= 4;
+ sg_count -= IOCB_SEGS;
while (sg_count > 0) {
++cmd->hdr.entry_cnt;
cont->hdr.entry_cnt = 0;
cont->hdr.sys_def_1 = 0;
cont->hdr.flags = 0;
+#ifndef CONFIG_QL_ISP_A64
cont->reserved = 0;
+#endif
ds = cont->dataseg;
n = sg_count;
- if (n > 7)
- n = 7;
+ if (n > CONTINUATION_SEGS)
+ n = CONTINUATION_SEGS;
for (i = 0; i < n; ++i) {
- ds[i].d_base = cpu_to_le32(sg_dma_address(sg));
+ dma_addr = sg_dma_address(sg);
+ ds[i].d_base = cpu_to_le32((u32) dma_addr);
+#ifdef CONFIG_QL_ISP_A64
+ ds[i].d_base_hi = cpu_to_le32((u32)(dma_addr>>32));
+#endif /* CONFIG_QL_ISP_A64 */
ds[i].d_count = cpu_to_le32(sg_dma_len(sg));
++sg;
}
sg_count -= n;
}
} else if (Cmnd->request_bufflen) {
- Cmnd->SCp.ptr = (char *)(unsigned long)
- pci_map_single(hostdata->pci_dev,
+ /*Cmnd->SCp.ptr = (char *)(unsigned long)*/
+ dma_addr = pci_map_single(hostdata->pci_dev,
Cmnd->request_buffer,
Cmnd->request_bufflen,
scsi_to_pci_dma_dir(Cmnd->sc_data_direction));
+ Cmnd->SCp.ptr = (char *)(unsigned long) dma_addr;
cmd->dataseg[0].d_base =
- cpu_to_le32((u32)(long)Cmnd->SCp.ptr);
+ cpu_to_le32((u32) dma_addr);
+#ifdef CONFIG_QL_ISP_A64
+ cmd->dataseg[0].d_base_hi =
+ cpu_to_le32((u32) (dma_addr>>32));
+#endif /* CONFIG_QL_ISP_A64 */
cmd->dataseg[0].d_count =
cpu_to_le32((u32)Cmnd->request_bufflen);
cmd->segment_cnt = cpu_to_le16(1);
} else {
cmd->dataseg[0].d_base = 0;
+#ifdef CONFIG_QL_ISP_A64
+ cmd->dataseg[0].d_base_hi = 0;
+#endif /* CONFIG_QL_ISP_A64 */
cmd->dataseg[0].d_count = 0;
cmd->segment_cnt = cpu_to_le16(1); /* Shouldn't this be 0? */
}
scsi_to_pci_dma_dir(Cmnd->sc_data_direction));
else if (Cmnd->request_bufflen)
pci_unmap_single(hostdata->pci_dev,
+#ifdef CONFIG_QL_ISP_A64
+ (dma_addr_t)((long)Cmnd->SCp.ptr),
+#else
(u32)((long)Cmnd->SCp.ptr),
+#endif
Cmnd->request_bufflen,
scsi_to_pci_dma_dir(Cmnd->sc_data_direction));
static int isp1020_load_parameters(struct Scsi_Host *host)
{
int i, k;
+#ifdef CONFIG_QL_ISP_A64
+ u_long queue_addr;
+ u_short param[8];
+#else
u_int queue_addr;
u_short param[6];
+#endif
u_short isp_cfg1, hwrev;
unsigned long flags;
struct isp1020_hostdata *hostdata =
}
queue_addr = hostdata->res_dma;
-
+#ifdef CONFIG_QL_ISP_A64
+ param[0] = MBOX_CMD_INIT_RESPONSE_QUEUE_64;
+#else
param[0] = MBOX_INIT_RES_QUEUE;
+#endif
param[1] = RES_QUEUE_LEN + 1;
param[2] = (u_short) (queue_addr >> 16);
param[3] = (u_short) (queue_addr & 0xffff);
param[4] = 0;
param[5] = 0;
+#ifdef CONFIG_QL_ISP_A64
+ param[6] = (u_short) (queue_addr >> 48);
+ param[7] = (u_short) (queue_addr >> 32);
+#endif
isp1020_mbox_command(host, param);
}
queue_addr = hostdata->req_dma;
-
+#ifdef CONFIG_QL_ISP_A64
+ param[0] = MBOX_CMD_INIT_REQUEST_QUEUE_64;
+#else
param[0] = MBOX_INIT_REQ_QUEUE;
+#endif
param[1] = QLOGICISP_REQ_QUEUE_LEN + 1;
param[2] = (u_short) (queue_addr >> 16);
param[3] = (u_short) (queue_addr & 0xffff);
param[4] = 0;
+#ifdef CONFIG_QL_ISP_A64
+ param[5] = 0;
+ param[6] = (u_short) (queue_addr >> 48);
+ param[7] = (u_short) (queue_addr >> 32);
+#endif
+
isp1020_mbox_command(host, param);
if (param[0] != MBOX_COMMAND_COMPLETE) {
printk("qlogicisp: mbox_command loop timeout #1\n");
switch(mbox_param[param[0]] >> 4) {
+ case 8: isp_outw(param[7], host, MBOX7);
+ case 7: isp_outw(param[6], host, MBOX6);
case 6: isp_outw(param[5], host, MBOX5);
case 5: isp_outw(param[4], host, MBOX4);
case 4: isp_outw(param[3], host, MBOX3);
printk("qlogicisp: mbox_command loop timeout #3\n");
switch(mbox_param[param[0]] & 0xf) {
+ case 8: param[7] = isp_inw(host, MBOX7);
+ case 7: param[6] = isp_inw(host, MBOX6);
case 6: param[5] = isp_inw(host, MBOX5);
case 5: param[4] = isp_inw(host, MBOX4);
case 4: param[3] = isp_inw(host, MBOX3);
ints[0] = i - 1;
internal_setup(cur, ints);
- return 0;
+ return 1;
}
int u14_34f_detect(Scsi_Host_Template *tpnt)
*
* Access routines and definitions for the low level driver for the
* Creative AWE32/SB32/AWE64 wave table synth.
- * version 0.4.3; Mar. 1, 1998
+ * version 0.4.4; Jan. 4, 2000
*
- * Copyright (C) 1996-1998 Takashi Iwai
+ * Copyright (C) 1996-2000 Takashi Iwai
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* sound/awe_wave.c
*
* The low level driver for the AWE32/SB32/AWE64 wave table synth.
- * version 0.4.3; Feb. 1, 1999
+ * version 0.4.4; Jan. 4, 2000
*
- * Copyright (C) 1996-1999 Takashi Iwai
+ * Copyright (C) 1996-2000 Takashi Iwai
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
#include <linux/init.h>
#include <linux/module.h>
#include <linux/string.h>
+#ifdef CONFIG_ISAPNP
+#include <linux/isapnp.h>
+#endif
#include "sound_config.h"
#include "soundmodule.h"
* debug message
*/
-/* do not allocate buffer at beginning */
-#define INIT_TABLE(buffer,index,nums,type) {buffer=NULL; index=0;}
-
#ifdef AWE_DEBUG_ON
#define DEBUG(LVL,XXX) {if (ctrls[AWE_MD_DEBUG_MODE] > LVL) { XXX; }}
#define ERRMSG(XXX) {if (ctrls[AWE_MD_DEBUG_MODE]) { XXX; }}
* bank and voice record
*/
+typedef struct _sf_list sf_list;
+typedef struct _awe_voice_list awe_voice_list;
+typedef struct _awe_sample_list awe_sample_list;
+
/* soundfont record */
-typedef struct _sf_list {
- unsigned short sf_id;
- unsigned short type;
+struct _sf_list {
+ unsigned short sf_id; /* id number */
+ unsigned short type; /* lock & shared flags */
int num_info; /* current info table index */
int num_sample; /* current sample table index */
int mem_ptr; /* current word byte pointer */
- int infos;
- int samples;
+ awe_voice_list *infos, *last_infos; /* instruments */
+ awe_sample_list *samples, *last_samples; /* samples */
#ifdef AWE_ALLOW_SAMPLE_SHARING
- int shared; /* shared index */
- unsigned char name[AWE_PATCH_NAME_LEN];
+ sf_list *shared; /* shared list */
+ unsigned char name[AWE_PATCH_NAME_LEN]; /* sharing id */
#endif
-} sf_list;
+ sf_list *next, *prev;
+};
-/* bank record */
-typedef struct _awe_voice_list {
- int next; /* linked list with same sf_id */
+/* instrument list */
+struct _awe_voice_list {
+ awe_voice_info v; /* instrument information */
+ sf_list *holder; /* parent sf_list of this record */
unsigned char bank, instr; /* preset number information */
char type, disabled; /* type=normal/mapped, disabled=boolean */
- awe_voice_info v; /* voice information */
- int next_instr; /* preset table list */
- int next_bank; /* preset table list */
-} awe_voice_list;
+ awe_voice_list *next; /* linked list with same sf_id */
+ awe_voice_list *next_instr; /* instrument list */
+ awe_voice_list *next_bank; /* hash table list */
+};
/* voice list type */
#define V_ST_NORMAL 0
#define V_ST_MAPPED 1
-typedef struct _awe_sample_list {
- int next; /* linked list with same sf_id */
+/* sample list */
+struct _awe_sample_list {
awe_sample_info v; /* sample information */
-} awe_sample_list;
+ sf_list *holder; /* parent sf_list of this record */
+ awe_sample_list *next; /* linked list with same sf_id */
+};
/* sample and information table */
-static int current_sf_id = 0;
-static int locked_sf_id = 0;
-static int max_sfs;
-static sf_list *sflists = NULL;
-
-#define awe_free_mem_ptr() (current_sf_id <= 0 ? 0 : sflists[current_sf_id-1].mem_ptr)
-#define awe_free_info() (current_sf_id <= 0 ? 0 : sflists[current_sf_id-1].num_info)
-#define awe_free_sample() (current_sf_id <= 0 ? 0 : sflists[current_sf_id-1].num_sample)
-
-static int max_samples;
-static awe_sample_list *samples = NULL;
-
-static int max_infos;
-static awe_voice_list *infos = NULL;
+static int current_sf_id = 0; /* current number of fonts */
+static int locked_sf_id = 0; /* locked position */
+static sf_list *sfhead = NULL, *sftail = NULL; /* linked-lists */
+#define awe_free_mem_ptr() (sftail ? sftail->mem_ptr : 0)
+#define awe_free_info() (sftail ? sftail->num_info : 0)
+#define awe_free_sample() (sftail ? sftail->num_sample : 0)
#define AWE_MAX_PRESETS 256
#define AWE_DEFAULT_PRESET 0
#define MAX_LAYERS AWE_MAX_VOICES
/* preset table index */
-static int preset_table[AWE_MAX_PRESETS];
+static awe_voice_list *preset_table[AWE_MAX_PRESETS];
/*
* voice table
int main_vol; /* channel volume (0-127) */
int expression_vol; /* midi expression (0-127) */
int chan_press; /* channel pressure */
- int vrec; /* instrument list */
- int def_vrec; /* default instrument list */
int sustained; /* sustain status in MIDI */
FX_Rec fx; /* effects */
FX_Rec fx_layer[MAX_LAYERS]; /* layer effects */
static awe_chan_info channels[AWE_MAX_CHANNELS];
-/*----------------------------------------------------------------
+/*
* global variables
- *----------------------------------------------------------------*/
+ */
#ifndef AWE_DEFAULT_BASE_ADDR
#define AWE_DEFAULT_BASE_ADDR 0 /* autodetect */
#define AWE_DEFAULT_MEM_SIZE -1 /* autodetect */
#endif
-#define awe_port io
-#define awe_mem_size memsize
int io = AWE_DEFAULT_BASE_ADDR; /* Emu8000 base address */
int memsize = AWE_DEFAULT_MEM_SIZE; /* memory size in Kbytes */
+#ifdef CONFIG_ISAPNP
+int isapnp = 1;
+#else
+int isapnp = 0;
+#endif
MODULE_AUTHOR("Takashi Iwai <iwai@ww.uni-erlangen.de>");
MODULE_DESCRIPTION("SB AWE32/64 WaveTable driver");
MODULE_PARM_DESC(io, "base i/o port of Emu8000");
MODULE_PARM(memsize, "i");
MODULE_PARM_DESC(memsize, "onboard DRAM size in Kbytes");
+MODULE_PARM(isapnp, "i");
+MODULE_PARM_DESC(isapnp, "use ISAPnP detection");
EXPORT_NO_SYMBOLS;
/* DRAM start offset */
0, /* perc_mode (obsolete) */
AWE_MAX_VOICES, /* nr_voices */
0, /* nr_drums (obsolete) */
- AWE_MAX_INFOS /* instr_bank_size */
+ 400 /* instr_bank_size */
};
static void awe_terminate_and_init(int voice, int forced);
/* voice search */
-static int awe_search_instr(int bank, int preset);
-static int awe_search_multi_voices(int rec, int note, int velocity, awe_voice_info **vlist);
+static int awe_search_key(int bank, int preset, int note);
+static awe_voice_list *awe_search_instr(int bank, int preset, int note);
+static int awe_search_multi_voices(awe_voice_list *rec, int note, int velocity, awe_voice_info **vlist);
static void awe_alloc_multi_voices(int ch, int note, int velocity, int key);
static void awe_alloc_one_voice(int voice, int note, int velocity);
static int awe_clear_voice(void);
static int awe_close_patch(awe_patch_info *patch, const char *addr, int count);
static int awe_unload_patch(awe_patch_info *patch, const char *addr, int count);
static int awe_load_info(awe_patch_info *patch, const char *addr, int count);
+static int awe_remove_info(awe_patch_info *patch, const char *addr, int count);
static int awe_load_data(awe_patch_info *patch, const char *addr, int count);
static int awe_replace_data(awe_patch_info *patch, const char *addr, int count);
static int awe_load_map(awe_patch_info *patch, const char *addr, int count);
#endif
/*static int awe_probe_info(awe_patch_info *patch, const char *addr, int count);*/
static int awe_probe_data(awe_patch_info *patch, const char *addr, int count);
-static int check_patch_opened(int type, char *name);
-static int awe_write_wave_data(const char *addr, int offset, awe_sample_info *sp, int channels);
-static void add_sf_info(int rec);
-static void add_sf_sample(int rec);
-static void purge_old_list(int rec, int next);
-static void add_info_list(int rec);
+static sf_list *check_patch_opened(int type, char *name);
+static int awe_write_wave_data(const char *addr, int offset, awe_sample_list *sp, int channels);
+static int awe_create_sf(int type, char *name);
+static void awe_free_sf(sf_list *sf);
+static void add_sf_info(sf_list *sf, awe_voice_list *rec);
+static void add_sf_sample(sf_list *sf, awe_sample_list *smp);
+static void purge_old_list(awe_voice_list *rec, awe_voice_list *next);
+static void add_info_list(awe_voice_list *rec);
static void awe_remove_samples(int sf_id);
static void rebuild_preset_list(void);
-static short awe_set_sample(awe_voice_info *vp);
-static int search_sample_index(int sf, int sample, int level);
+static short awe_set_sample(awe_voice_list *rec);
+static awe_sample_list *search_sample_index(sf_list *sf, int sample);
+static int is_identical_holder(sf_list *sf1, sf_list *sf2);
#ifdef AWE_ALLOW_SAMPLE_SHARING
-static int is_identical_id(int id1, int id2);
-static int is_identical_name(unsigned char *name, int id);
+static int is_identical_name(unsigned char *name, sf_list *p);
static int is_shared_sf(unsigned char *name);
-static int info_duplicated(awe_voice_list *rec);
+static int info_duplicated(sf_list *sf, awe_voice_list *rec);
#endif /* allow sharing */
/* lowlevel functions */
static int awe_open_dram_for_write(int offset, int channels);
static void awe_open_dram_for_check(void);
static void awe_close_dram(void);
-static void awe_write_dram(unsigned short c);
+/*static void awe_write_dram(unsigned short c);*/
static int awe_detect_base(int addr);
static int awe_detect(void);
static void awe_check_dram(void);
static int ctrls[AWE_MD_END];
-/*----------------------------------------------------------------
+/*
* synth operation table
- *----------------------------------------------------------------*/
+ */
static struct synth_operations awe_operations =
{
};
-/*================================================================
+/*
* General attach / unload interface
- *================================================================*/
+ */
-static int _attach_awe(void)
+static int __init _attach_awe(void)
{
if (awe_present) return 0; /* for OSS38.. called twice? */
/* check presence of AWE32 card */
if (! awe_detect()) {
- printk(KERN_WARNING "AWE32: not detected\n");
+ printk(KERN_ERR "AWE32: not detected\n");
return 0;
}
/* check AWE32 ports are available */
if (awe_check_port()) {
- printk(KERN_WARNING "AWE32: I/O area already used.\n");
+ printk(KERN_ERR "AWE32: I/O area already used.\n");
return 0;
}
/* set buffers to NULL */
- sflists = NULL;
- samples = NULL;
- infos = NULL;
-
- /* allocate sample tables */
- INIT_TABLE(sflists, max_sfs, AWE_MAX_SF_LISTS, sf_list);
- INIT_TABLE(samples, max_samples, AWE_MAX_SAMPLES, awe_sample_list);
- INIT_TABLE(infos, max_infos, AWE_MAX_INFOS, awe_voice_list);
+ sfhead = sftail = NULL;
my_dev = sound_alloc_synthdev();
if (my_dev == -1) {
- printk(KERN_WARNING "AWE32 Error: too many synthesizers\n");
+ printk(KERN_ERR "AWE32 Error: too many synthesizers\n");
return 0;
}
awe_initialize();
sprintf(awe_info.name, "AWE32-%s (RAM%dk)",
- AWEDRV_VERSION, awe_mem_size/1024);
- printk("<SoundBlaster EMU8000 (RAM%dk)>\n", awe_mem_size/1024);
+ AWEDRV_VERSION, memsize/1024);
+ printk(KERN_INFO "<SoundBlaster EMU8000 (RAM%dk)>\n", memsize/1024);
awe_present = TRUE;
static void free_tables(void)
{
- if(sflists)
- vfree(sflists);
- sflists = NULL; max_sfs = 0;
- if (samples)
- vfree(samples);
- samples = NULL; max_samples = 0;
- if (infos)
- vfree(infos);
- infos = NULL; max_infos = 0;
-}
-
-static void *realloc_block(void *buf, int oldsize, int size)
-{
- void *ptr;
- if (oldsize == size)
- return buf;
- if ((ptr = vmalloc(size)) == NULL)
- return NULL;
- if (oldsize && size)
- memcpy(ptr, buf, ((oldsize < size) ? oldsize : size) );
- if (buf)
- vfree(buf);
- return ptr;
+ if (sftail) {
+ sf_list *p, *prev;
+ for (p = sftail; p; p = prev) {
+ prev = p->prev;
+ awe_free_sf(p);
+ }
+ }
+ sfhead = sftail = NULL;
}
-static void _unload_awe(void)
+static void __exit _unload_awe(void)
{
if (awe_present) {
awe_reset_samples();
}
}
-/*
- * Linux PnP driver support
- */
-
-#ifdef CONFIG_PNP_DRV
-
-#include <linux/pnp.h>
-
-static int pnp = 1; /* use PnP as default */
-
-#define AWE_NUM_CHIPS 3
-static unsigned int pnp_ids[AWE_NUM_CHIPS] = {
- PNP_EISAID('C','T','L',0x0021),
- PNP_EISAID('C','T','L',0x0022),
- PNP_EISAID('C','T','L',0x0023),
-};
-static struct pnp_driver pnp_awe[AWE_NUM_CHIPS];
-static int awe_pnp_ok = 0;
-
-static void awe_pnp_config(struct pnp_device *d)
-{
- struct pnp_resource *r;
- int port[3];
- int nio = 0;
-
- port[0] = port[1] = port[2] = 0;
- for (r = d->res; r != NULL; r = r->next) {
- if (r->type == PNP_RES_IO) {
- if (nio >= 0 && nio < 3)
- port[nio] = r->start;
- nio++;
- }
- }
- setup_ports(port[0], port[1], port[2]);
- DEBUG(0,printk("AWE32: PnP setup ports: %x:%x:%x\n", port[0], port[1], port[2]));
-}
-
-static int awe_pnp_event (struct pnp_device *d, struct pnp_drv_event *e)
-{
- struct pnp_driver *drv = d->l.k.driver;
-
- switch (e->type) {
- case PNP_DRV_ALLOC:
- drv->flags |= PNP_DRV_INUSE;
- awe_pnp_ok = 1;
- awe_pnp_config(d);
- _attach_awe();
- break;
-
- case PNP_DRV_DISABLE:
- case PNP_DRV_EMERGSTOP:
- drv->flags &= ~PNP_DRV_INUSE;
- awe_pnp_ok = 0;
- _unload_awe();
- break;
-
- case PNP_DRV_CONFIG:
- if (awe_busy) return 1; /* used now */
- awe_release_region();
- awe_pnp_config(d);
- awe_request_region();
- break;
-
- case PNP_DRV_RECONFIG:
- break;
- }
- return 0;
-}
-
-static int awe_initpnp (void)
-{
- int i;
- for (i = 0; i < AWE_NUM_CHIPS; i++) {
- pnp_awe[i].id.type = PNP_HDL_ISA;
- pnp_awe[i].id.t.isa.id = pnp_ids[i];
- pnp_awe[i].id.next = NULL;
- pnp_awe[i].name = "Soundblaster AWE32/AWE64 PnP";
- pnp_awe[i].event = awe_pnp_event;
- pnp_register_driver(&pnp_awe[i], 1);
- }
- return 0;
-}
-
-static void awe_unload_pnp (void)
-{
- int i;
- for (i = 0; i < AWE_NUM_CHIPS; i++)
- pnp_unregister_driver(&pnp_awe[i]);
-}
-#endif /* PnP support */
/*
* clear sample tables
static void
awe_reset_samples(void)
{
- int i;
-
/* free all bank tables */
- for (i = 0; i < AWE_MAX_PRESETS; i++)
- preset_table[i] = -1;
-
+ memset(preset_table, 0, sizeof(preset_table));
free_tables();
current_sf_id = 0;
}
/* write 16bit data */
-static inline void
+static void
awe_poke(unsigned short cmd, unsigned short port, unsigned short data)
{
awe_set_cmd(cmd);
}
/* write 32bit data */
-static inline void
+static void
awe_poke_dw(unsigned short cmd, unsigned short port, unsigned int data)
{
unsigned short addr = awe_ports[port];
}
/* read 16bit data */
-static inline unsigned short
+static unsigned short
awe_peek(unsigned short cmd, unsigned short port)
{
unsigned short k;
}
/* read 32bit data */
-static inline unsigned int
+static unsigned int
awe_peek_dw(unsigned short cmd, unsigned short port)
{
unsigned int k1, k2;
current->state = TASK_INTERRUPTIBLE;
schedule_timeout((HZ*(unsigned long)delay + 44099)/44100);
}
+/*
+static void awe_wait(unsigned short delay)
+{
+ udelay(((unsigned long)delay * 1000000L + 44099) / 44100);
+}
+*/
#endif /* wait by loop */
/* write a word data */
-static inline void
-awe_write_dram(unsigned short c)
-{
- awe_poke(AWE_SMLD, c);
-}
+#define awe_write_dram(c) awe_poke(AWE_SMLD, c)
/*
* 0x620-623, 0xA20-A23, 0xE20-E23
*/
-static int
+static int __init
awe_check_port(void)
{
if (! port_setuped) return 0;
check_region(awe_ports[3], 4));
}
-static void
+static void __init
awe_request_region(void)
{
if (! port_setuped) return;
request_region(awe_ports[3], 4, "sound driver (AWE32)");
}
-static void
+static void __exit
awe_release_region(void)
{
if (! port_setuped) return;
release_region(awe_ports[3], 4);
}
+
/*
- * AWE32 initialization
+ * initialization of AWE driver
*/
+
static void
awe_initialize(void)
{
static void
awe_init_voice_info(awe_voice_info *vp)
{
- vp->sf_id = 0; /* normal mode */
vp->sample = 0;
vp->rate_offset = 0;
fx_lay = &voices[voice].cinfo->fx_layer[voices[voice].layer];
	/* A voice sample must be assigned before calling */
- if ((vp = voices[voice].sample) == NULL || vp->index < 0)
+ if ((vp = voices[voice].sample) == NULL || vp->index == 0)
return;
parm = (awe_voice_parm_block*)&vp->parm;
awe_poke_dw(AWE_VTFT(voice), (vtarget<<16)|ftarget);
awe_poke_dw(AWE_CVCF(voice), (vtarget<<16)|ftarget);
- /* turn on envelope */
- awe_poke(AWE_DCYSUSV(voice),
- FX_COMB(fx, fx_lay, AWE_FX_ENV2_SUSTAIN, AWE_FX_ENV2_DECAY,
- vp->parm.voldcysus));
/* set reverb */
temp = FX_BYTE(fx, fx_lay, AWE_FX_REVERB, vp->parm.reverb);
temp = (temp << 8) | (ptarget << 16) | voices[voice].aaux;
awe_poke_dw(AWE_PTRX(voice), temp);
awe_poke_dw(AWE_CPF(voice), ptarget << 16);
+ /* turn on envelope */
+ awe_poke(AWE_DCYSUSV(voice),
+ FX_COMB(fx, fx_lay, AWE_FX_ENV2_SUSTAIN, AWE_FX_ENV2_DECAY,
+ vp->parm.voldcysus));
voices[voice].state = AWE_ST_ON;
fx_lay = &voices[voice].cinfo->fx_layer[voices[voice].layer];
if (!IS_PLAYING(voice) && !forced) return;
- if ((vp = voices[voice].sample) == NULL || vp->index < 0)
+ if ((vp = voices[voice].sample) == NULL || vp->index == 0)
return;
tmp2 = FX_BYTE(fx, fx_lay, AWE_FX_CUTOFF,
fx_lay = &voices[voice].cinfo->fx_layer[voices[voice].layer];
if (IS_NO_EFFECT(voice) && !forced) return;
- if ((vp = voices[voice].sample) == NULL || vp->index < 0)
+ if ((vp = voices[voice].sample) == NULL || vp->index == 0)
return;
/* pan & loop start (pan 8bit, MSB, 0:right, 0xff:left) */
}
if (forced || temp != voices[voice].apan) {
voices[voice].apan = temp;
+ if (temp == 0)
+ voices[voice].aaux = 0xff;
+ else
+ voices[voice].aaux = (-temp) & 0xff;
addr = vp->loopstart - 1;
addr += FX_OFFSET(fx, fx_lay, AWE_FX_LOOP_START,
AWE_FX_COARSE_LOOP_START, vp->mode);
temp = (temp<<24) | (unsigned int)addr;
awe_poke_dw(AWE_PSST(voice), temp);
DEBUG(4,printk("AWE32: [-- loopstart=%x/%x]\n", vp->loopstart, addr));
- if (temp == 0) voices[voice].aaux = 0xff;
- else voices[voice].aaux = (-temp)&0xff;
}
}
fx_lay = &voices[voice].cinfo->fx_layer[voices[voice].layer];
if (IS_NO_EFFECT(voice) && !forced) return;
- if ((vp = voices[voice].sample) == NULL || vp->index < 0)
+ if ((vp = voices[voice].sample) == NULL || vp->index == 0)
return;
awe_poke(AWE_FMMOD(voice),
FX_COMB(fx, fx_lay, AWE_FX_LFO1_PITCH, AWE_FX_LFO1_CUTOFF,
fx_lay = &voices[voice].cinfo->fx_layer[voices[voice].layer];
if (IS_NO_EFFECT(voice) && !forced) return;
- if ((vp = voices[voice].sample) == NULL || vp->index < 0)
+ if ((vp = voices[voice].sample) == NULL || vp->index == 0)
return;
awe_poke(AWE_TREMFRQ(voice),
FX_COMB(fx, fx_lay, AWE_FX_LFO1_VOLUME, AWE_FX_LFO1_FREQ,
fx_lay = &voices[voice].cinfo->fx_layer[voices[voice].layer];
if (IS_NO_EFFECT(voice) && !forced) return;
- if ((vp = voices[voice].sample) == NULL || vp->index < 0)
+ if ((vp = voices[voice].sample) == NULL || vp->index == 0)
return;
awe_poke(AWE_FM2FRQ2(voice),
FX_COMB(fx, fx_lay, AWE_FX_LFO2_PITCH, AWE_FX_LFO2_FREQ,
fx_lay = &voices[voice].cinfo->fx_layer[voices[voice].layer];
if (IS_NO_EFFECT(voice) && !forced) return;
- if ((vp = voices[voice].sample) == NULL || vp->index < 0)
+ if ((vp = voices[voice].sample) == NULL || vp->index == 0)
return;
addr = awe_peek_dw(AWE_CCCA(voice)) & 0xffffff;
awe_poke_dw(AWE_CCCA(voice), addr);
}
-/*================================================================
+/*
* calculate pitch offset
- *----------------------------------------------------------------
+ *
* 0xE000 is no pitch offset at 44100Hz sample.
* Every 4096 is one octave.
- *================================================================*/
+ */
static void
awe_calc_pitch(int voice)
/* search voice information */
if ((ap = vp->sample) == NULL)
return;
- if (ap->index < 0) {
+ if (ap->index == 0) {
DEBUG(3,printk("AWE32: set sample (%d)\n", ap->sample));
- if (awe_set_sample(ap) < 0)
+ if (awe_set_sample((awe_voice_list*)ap) == 0)
return;
}
/* search voice information */
if ((ap = vp->sample) == NULL)
return;
- if (ap->index < 0) {
+ if (ap->index == 0) {
DEBUG(3,printk("AWE32: set sample (%d)\n", ap->sample));
- if (awe_set_sample(ap) < 0)
+ if (awe_set_sample((awe_voice_list*)ap) == 0)
return;
}
note = freq_to_note(freq);
#endif /* AWE_HAS_GUS_COMPATIBILITY */
-/*================================================================
+/*
* calculate volume attenuation
- *----------------------------------------------------------------
+ *
* Voice volume is controlled by volume attenuation parameter.
* So volume becomes maximum when avol is 0 (no attenuation), and
* minimum when 255 (-96dB or silence).
- *================================================================*/
+ */
static int vol_table[128] = {
255,111,95,86,79,74,70,66,63,61,58,56,54,52,50,49,
return;
ap = vp->sample;
- if (ap->index < 0) {
+ if (ap->index == 0) {
DEBUG(3,printk("AWE32: set sample (%d)\n", ap->sample));
- if (awe_set_sample(ap) < 0)
+ if (awe_set_sample((awe_voice_list*)ap) == 0)
return;
}
cp->instr = ctrls[AWE_MD_DEF_PRESET];
cp->bank = ctrls[AWE_MD_DEF_BANK];
}
- cp->vrec = -1;
- cp->def_vrec = -1;
}
cp->bender = 0; /* zero tune skew */
}
-/*----------------------------------------------------------------
+/*
* device open / close
- *----------------------------------------------------------------*/
+ */
/* open device:
* reset status of all voices, and clear sample position flag
awe_info.nr_voices = awe_max_voices;
else
awe_info.nr_voices = AWE_MAX_CHANNELS;
- memcpy((char*)arg, &awe_info + 0, sizeof(awe_info));
+ memcpy((char*)arg, &awe_info, sizeof(awe_info));
return 0;
break;
case SNDCTL_SEQ_RESETSAMPLES:
- awe_reset_samples();
awe_reset(dev);
+ awe_reset_samples();
return 0;
break;
break;
case SNDCTL_SYNTH_MEMAVL:
- return awe_mem_size - awe_free_mem_ptr() * 2;
+ return memsize - awe_free_mem_ptr() * 2;
default:
- printk("AWE32: unsupported ioctl %d\n", cmd);
+ printk(KERN_WARNING "AWE32: unsupported ioctl %d\n", cmd);
return -EINVAL;
}
}
}
-/* search instrument from preset table with the specified bank */
+/* calculate hash key */
static int
-awe_search_instr(int bank, int preset)
+awe_search_key(int bank, int preset, int note)
{
- int i;
+ unsigned int key;
- limitvalue(preset, 0, AWE_MAX_PRESETS-1);
- for (i = preset_table[preset]; i >= 0; i = infos[i].next_bank) {
- if (infos[i].bank == bank)
- return i;
+#if 1 /* new hash table */
+ if (bank == AWE_DRUM_BANK)
+ key = preset + note + 128;
+ else
+ key = bank + preset;
+#else
+ key = preset;
+#endif
+ key %= AWE_MAX_PRESETS;
+
+ return (int)key;
+}
+
+
+/* search instrument from hash table */
+static awe_voice_list *
+awe_search_instr(int bank, int preset, int note)
+{
+ awe_voice_list *p;
+ int key, key2;
+
+ key = awe_search_key(bank, preset, note);
+ for (p = preset_table[key]; p; p = p->next_bank) {
+ if (p->instr == preset && p->bank == bank)
+ return p;
}
- return -1;
+ key2 = awe_search_key(bank, preset, 0); /* search default */
+ if (key == key2)
+ return NULL;
+ for (p = preset_table[key2]; p; p = p->next_bank) {
+ if (p->instr == preset && p->bank == bank)
+ return p;
+ }
+ return NULL;
}
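
The hash scheme introduced by awe_search_key() can be sketched in isolation; note this is a standalone illustration, with placeholder constants standing in for the driver's real AWE_MAX_PRESETS and AWE_DRUM_BANK definitions:

```c
#include <assert.h>

/* Hypothetical stand-ins for the driver's constants (defined in awe_wave.h). */
#define MAX_PRESETS 256
#define DRUM_BANK   128

/* Mirrors the patched awe_search_key(): drum presets hash on preset+note
 * so each drum key can land in its own bucket, while melodic presets
 * hash on bank+preset. */
static int search_key(int bank, int preset, int note)
{
	unsigned int key;

	if (bank == DRUM_BANK)
		key = preset + note + 128;
	else
		key = bank + preset;
	return (int)(key % MAX_PRESETS);
}
```

The driver probes the computed bucket first, then falls back to the key for note 0 as the default, which is why awe_search_instr() performs up to two chain walks.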
awe_set_instr(int dev, int voice, int instr_no)
{
awe_chan_info *cinfo;
- int def_bank;
if (! voice_in_range(voice))
return -EINVAL;
return -EINVAL;
cinfo = &channels[voice];
-
- if (MULTI_LAYER_MODE() && IS_DRUM_CHANNEL(voice))
- def_bank = AWE_DRUM_BANK; /* always search drumset */
- else
- def_bank = cinfo->bank;
-
- cinfo->vrec = -1;
- cinfo->def_vrec = -1;
- cinfo->vrec = awe_search_instr(def_bank, instr_no);
- if (def_bank == AWE_DRUM_BANK) /* search default drumset */
- cinfo->def_vrec = awe_search_instr(def_bank, ctrls[AWE_MD_DEF_DRUM]);
- else /* search default preset */
- cinfo->def_vrec = awe_search_instr(ctrls[AWE_MD_DEF_BANK], instr_no);
-
- if (cinfo->vrec < 0 && cinfo->def_vrec < 0) {
- DEBUG(1,printk("AWE32 Warning: can't find instrument %d\n", instr_no));
- }
-
cinfo->instr = instr_no;
- DEBUG(2,printk("AWE32: [program(%d) %d/%d]\n", voice, instr_no, def_bank));
+ DEBUG(2,printk("AWE32: [program(%d) %d]\n", voice, instr_no));
return 0;
}
switch (cmd) {
case _AWE_DEBUG_MODE:
ctrls[AWE_MD_DEBUG_MODE] = p1;
- printk("AWE32: debug mode = %d\n", ctrls[AWE_MD_DEBUG_MODE]);
+ printk(KERN_DEBUG "AWE32: debug mode = %d\n", ctrls[AWE_MD_DEBUG_MODE]);
break;
case _AWE_REVERB_MODE:
ctrls[AWE_MD_REVERB_MODE] = p1;
case _AWE_REMOVE_LAST_SAMPLES:
DEBUG(0,printk("AWE32: remove last samples\n"));
+ awe_reset(0);
if (locked_sf_id > 0)
awe_remove_samples(locked_sf_id);
break;
if (MULTI_LAYER_MODE() && IS_DRUM_CHANNEL(voice) &&
!ctrls[AWE_MD_TOGGLE_DRUM_BANK])
break;
+ if (value < 0 || value > 255)
+ break;
cinfo->bank = value;
if (cinfo->bank == AWE_DRUM_BANK)
DRUM_CHANNEL_ON(cinfo->channel);
}
-/*----------------------------------------------------------------
+/*
* load a sound patch:
* three types of patches are accepted: AWE, GUS, and SYSEX.
- *----------------------------------------------------------------*/
+ */
static int
awe_load_patch(int dev, int format, const char *addr,
/* no system exclusive message supported yet */
return 0;
} else if (format != AWE_PATCH) {
- printk("AWE32 Error: Invalid patch format (key) 0x%x\n", format);
+ printk(KERN_WARNING "AWE32 Error: Invalid patch format (key) 0x%x\n", format);
return -EINVAL;
}
if (count < AWE_PATCH_INFO_SIZE) {
- printk("AWE32 Error: Patch header too short\n");
+ printk(KERN_WARNING "AWE32 Error: Patch header too short\n");
return -EINVAL;
}
- copy_from_user(((char*)&patch) + offs, addr + offs,
- AWE_PATCH_INFO_SIZE - offs);
+ if (copy_from_user(((char*)&patch) + offs, addr + offs,
+ AWE_PATCH_INFO_SIZE - offs))
+ return -EFAULT;
count -= AWE_PATCH_INFO_SIZE;
if (count < patch.len) {
- printk("AWE32: sample: Patch record too short (%d<%d)\n",
+ printk(KERN_WARNING "AWE32: sample: Patch record too short (%d<%d)\n",
count, patch.len);
return -EINVAL;
}
case AWE_PROBE_DATA:
rc = awe_probe_data(&patch, addr, count);
break;
+ case AWE_REMOVE_INFO:
+ rc = awe_remove_info(&patch, addr, count);
+ break;
case AWE_LOAD_CHORUS_FX:
rc = awe_load_chorus_fx(&patch, addr, count);
break;
break;
default:
- printk("AWE32 Error: unknown patch format type %d\n",
+ printk(KERN_WARNING "AWE32 Error: unknown patch format type %d\n",
patch.type);
rc = -EINVAL;
}
}
-/* create an sflist record */
+/* create an sf list record */
static int
awe_create_sf(int type, char *name)
{
/* terminate sounds */
awe_reset(0);
- if (current_sf_id >= max_sfs) {
- int newsize = max_sfs + AWE_MAX_SF_LISTS;
- sf_list *newlist = realloc_block(sflists, sizeof(sf_list)*max_sfs,
- sizeof(sf_list)*newsize);
- if (newlist == NULL)
- return 1;
- sflists = newlist;
- max_sfs = newsize;
- }
- rec = &sflists[current_sf_id];
+ rec = (sf_list *)kmalloc(sizeof(*rec), GFP_KERNEL);
+ if (rec == NULL)
+ return 1; /* no memory */
rec->sf_id = current_sf_id + 1;
rec->type = type;
- if (current_sf_id == 0 || (type & AWE_PAT_LOCKED) != 0)
+ if (/*current_sf_id == 0 ||*/ (type & AWE_PAT_LOCKED) != 0)
locked_sf_id = current_sf_id + 1;
rec->num_info = awe_free_info();
rec->num_sample = awe_free_sample();
rec->mem_ptr = awe_free_mem_ptr();
- rec->infos = -1;
- rec->samples = -1;
+ rec->infos = rec->last_infos = NULL;
+ rec->samples = rec->last_samples = NULL;
+
+ /* add to linked-list */
+ rec->next = NULL;
+ rec->prev = sftail;
+ if (sftail)
+ sftail->next = rec;
+ else
+ sfhead = rec;
+ sftail = rec;
+ current_sf_id++;
#ifdef AWE_ALLOW_SAMPLE_SHARING
- rec->shared = 0;
+ rec->shared = NULL;
if (name)
memcpy(rec->name, name, AWE_PATCH_NAME_LEN);
else
strcpy(rec->name, "*TEMPORARY*");
- if (current_sf_id > 0 && name && (type & AWE_PAT_SHARED) != 0) {
+ if (current_sf_id > 1 && name && (type & AWE_PAT_SHARED) != 0) {
/* is the current font really a shared font? */
if (is_shared_sf(rec->name)) {
/* check if the shared font is already installed */
- int i;
- for (i = current_sf_id; i > 0; i--) {
- if (is_identical_name(rec->name, i)) {
- rec->shared = i;
+ sf_list *p;
+ for (p = rec->prev; p; p = p->prev) {
+ if (is_identical_name(rec->name, p)) {
+ rec->shared = p;
break;
}
}
}
#endif /* allow sharing */
- current_sf_id++;
-
return 0;
}
#define ASC_TO_KEY(c) ((c) - 'A' + 1)
static int is_shared_sf(unsigned char *name)
{
- static unsigned char id_head[6] = {
+ static unsigned char id_head[4] = {
ASC_TO_KEY('A'), ASC_TO_KEY('W'), ASC_TO_KEY('E'),
AWE_MAJOR_VERSION,
- AWE_MINOR_VERSION,
- AWE_TINY_VERSION,
};
- if (memcmp(name, id_head, 6) == 0)
+ if (memcmp(name, id_head, 4) == 0)
return TRUE;
return FALSE;
}
/* check if the given name matches to the existing list */
-static int is_identical_name(unsigned char *name, int sf)
+static int is_identical_name(unsigned char *name, sf_list *p)
{
- char *id = sflists[sf-1].name;
+ char *id = p->name;
if (is_shared_sf(id) && memcmp(id, name, AWE_PATCH_NAME_LEN) == 0)
return TRUE;
return FALSE;
}
/* check if the given voice info exists */
-static int info_duplicated(awe_voice_list *rec)
+static int info_duplicated(sf_list *sf, awe_voice_list *rec)
{
- int j, sf_id;
- sf_list *sf;
-
/* search for all sharing lists */
- for (sf_id = rec->v.sf_id; sf_id > 0 && sf_id <= current_sf_id; sf_id = sf->shared) {
- sf = &sflists[sf_id - 1];
- for (j = sf->infos; j >= 0; j = infos[j].next) {
- awe_voice_list *p = &infos[j];
+ for (; sf; sf = sf->shared) {
+ awe_voice_list *p;
+ for (p = sf->infos; p; p = p->next) {
if (p->type == V_ST_NORMAL &&
p->bank == rec->bank &&
p->instr == rec->instr &&
#endif /* AWE_ALLOW_SAMPLE_SHARING */
+/* free sf_list record */
+/* the sf linked-list pointers are not updated by this function */
+static void
+awe_free_sf(sf_list *sf)
+{
+ if (sf->infos) {
+ awe_voice_list *p, *next;
+ for (p = sf->infos; p; p = next) {
+ next = p->next;
+ kfree(p);
+ }
+ }
+ if (sf->samples) {
+ awe_sample_list *p, *next;
+ for (p = sf->samples; p; p = next) {
+ next = p->next;
+ kfree(p);
+ }
+ }
+ kfree(sf);
+}
+
+
/* open patch; create sf list and set opened flag */
static int
awe_open_patch(awe_patch_info *patch, const char *addr, int count)
awe_open_parm parm;
int shared;
- copy_from_user(&parm, addr + AWE_PATCH_INFO_SIZE, sizeof(parm));
+ if (copy_from_user(&parm, addr + AWE_PATCH_INFO_SIZE, sizeof(parm)))
+ return -EFAULT;
shared = FALSE;
#ifdef AWE_ALLOW_SAMPLE_SHARING
- if (current_sf_id > 0 && (parm.type & AWE_PAT_SHARED) != 0) {
+ if (sftail && (parm.type & AWE_PAT_SHARED) != 0) {
/* is the previous font the same font? */
- if (is_identical_name(parm.name, current_sf_id)) {
+ if (is_identical_name(parm.name, sftail)) {
/* then append to the previous */
shared = TRUE;
awe_reset(0);
#endif /* allow sharing */
if (! shared) {
if (awe_create_sf(parm.type, parm.name)) {
- printk("AWE32: can't open: failed to alloc new list\n");
- return -ENOSPC;
+ printk(KERN_ERR "AWE32: can't open: failed to alloc new list\n");
+ return -ENOMEM;
}
}
patch_opened = TRUE;
}
/* check if the patch is already opened */
-static int
+static sf_list *
check_patch_opened(int type, char *name)
{
if (! patch_opened) {
if (awe_create_sf(type, name)) {
- printk("AWE32: failed to alloc new list\n");
- return -ENOSPC;
+ printk(KERN_ERR "AWE32: failed to alloc new list\n");
+ return NULL;
}
patch_opened = TRUE;
- return current_sf_id;
+ return sftail;
}
- return current_sf_id;
+ return sftail;
}
/* close the patch; if no voice is loaded, remove the patch */
static int
awe_close_patch(awe_patch_info *patch, const char *addr, int count)
{
- if (patch_opened && current_sf_id > 0) {
+ if (patch_opened && sftail) {
/* if no voice is loaded, release the current patch */
- if (sflists[current_sf_id-1].infos == -1)
+ if (sftail->infos == NULL) {
+ awe_reset(0);
awe_remove_samples(current_sf_id - 1);
+ }
}
patch_opened = 0;
return 0;
static int
awe_unload_patch(awe_patch_info *patch, const char *addr, int count)
{
- if (current_sf_id > 0 && current_sf_id > locked_sf_id)
+ if (current_sf_id > 0 && current_sf_id > locked_sf_id) {
+ awe_reset(0);
awe_remove_samples(current_sf_id - 1);
+ }
return 0;
}
/* allocate voice info list records */
-static int alloc_new_info(int nvoices)
+static awe_voice_list *
+alloc_new_info(void)
{
- int newsize, free_info;
awe_voice_list *newlist;
- free_info = awe_free_info();
- if (free_info + nvoices >= max_infos) {
- do {
- newsize = max_infos + AWE_MAX_INFOS;
- } while (free_info + nvoices >= newsize);
- newlist = realloc_block(infos, sizeof(awe_voice_list)*max_infos,
- sizeof(awe_voice_list)*newsize);
- if (newlist == NULL) {
- printk("AWE32: can't alloc info table\n");
- return -ENOSPC;
- }
- infos = newlist;
- max_infos = newsize;
+
+ newlist = (awe_voice_list *)kmalloc(sizeof(*newlist), GFP_KERNEL);
+ if (newlist == NULL) {
+ printk(KERN_ERR "AWE32: can't alloc info table\n");
+ return NULL;
}
- return 0;
+ return newlist;
}
/* allocate sample info list records */
-static int alloc_new_sample(void)
+static awe_sample_list *
+alloc_new_sample(void)
{
- int newsize, free_sample;
awe_sample_list *newlist;
- free_sample = awe_free_sample();
- if (free_sample >= max_samples) {
- newsize = max_samples + AWE_MAX_SAMPLES;
- newlist = realloc_block(samples,
- sizeof(awe_sample_list)*max_samples,
- sizeof(awe_sample_list)*newsize);
- if (newlist == NULL) {
- printk("AWE32: can't alloc sample table\n");
- return -ENOSPC;
- }
- samples = newlist;
- max_samples = newsize;
+
+ newlist = (awe_sample_list *)kmalloc(sizeof(*newlist), GFP_KERNEL);
+ if (newlist == NULL) {
+ printk(KERN_ERR "AWE32: can't alloc sample table\n");
+ return NULL;
}
- return 0;
+ return newlist;
}
/* load voice map */
awe_load_map(awe_patch_info *patch, const char *addr, int count)
{
awe_voice_map map;
- awe_voice_list *rec;
- int p, free_info;
+ awe_voice_list *rec, *p;
+ sf_list *sf;
/* get the link info */
if (count < sizeof(map)) {
- printk("AWE32 Error: invalid patch info length\n");
+ printk(KERN_WARNING "AWE32 Error: invalid patch info length\n");
return -EINVAL;
}
- copy_from_user(&map, addr + AWE_PATCH_INFO_SIZE, sizeof(map));
+ if (copy_from_user(&map, addr + AWE_PATCH_INFO_SIZE, sizeof(map)))
+ return -EFAULT;
/* check if the identical mapping already exists */
- p = awe_search_instr(map.map_bank, map.map_instr);
- for (; p >= 0; p = infos[p].next_instr) {
- if (p >= 0 && infos[p].type == V_ST_MAPPED &&
- infos[p].v.low == map.map_key &&
- infos[p].v.start == map.src_instr &&
- infos[p].v.end == map.src_bank &&
- infos[p].v.fixkey == map.src_key)
+ p = awe_search_instr(map.map_bank, map.map_instr, map.map_key);
+ for (; p; p = p->next_instr) {
+ if (p->type == V_ST_MAPPED &&
+ p->v.start == map.src_instr &&
+ p->v.end == map.src_bank &&
+ p->v.fixkey == map.src_key)
return 0; /* already present! */
}
- if (check_patch_opened(AWE_PAT_TYPE_MAP, NULL) < 0)
- return -ENOSPC;
+ if ((sf = check_patch_opened(AWE_PAT_TYPE_MAP, NULL)) == NULL)
+ return -ENOMEM;
- if (alloc_new_info(1) < 0)
- return -ENOSPC;
+ if ((rec = alloc_new_info()) == NULL)
+ return -ENOMEM;
- free_info = awe_free_info();
- rec = &infos[free_info];
rec->bank = map.map_bank;
rec->instr = map.map_instr;
rec->type = V_ST_MAPPED;
rec->v.start = map.src_instr;
rec->v.end = map.src_bank;
rec->v.fixkey = map.src_key;
- rec->v.sf_id = current_sf_id;
- add_info_list(free_info);
- add_sf_info(free_info);
+ add_sf_info(sf, rec);
+ add_info_list(rec);
return 0;
}
{
#ifdef AWE_ALLOW_SAMPLE_SHARING
awe_voice_map map;
- int p;
+ awe_voice_list *p;
if (! patch_opened)
return -EINVAL;
/* get the link info */
if (count < sizeof(map)) {
- printk("AWE32 Error: invalid patch info length\n");
+ printk(KERN_WARNING "AWE32 Error: invalid patch info length\n");
return -EINVAL;
}
- copy_from_user(&map, addr + AWE_PATCH_INFO_SIZE, sizeof(map));
+ if (copy_from_user(&map, addr + AWE_PATCH_INFO_SIZE, sizeof(map)))
+ return -EFAULT;
/* check if the identical mapping already exists */
- p = awe_search_instr(map.src_bank, map.src_instr);
- for (; p >= 0; p = infos[p].next_instr) {
- if (p >= 0 && infos[p].type == V_ST_NORMAL &&
- is_identical_id(infos[p].v.sf_id, current_sf_id) &&
- infos[p].v.low <= map.src_key &&
- infos[p].v.high >= map.src_key)
+ if (sftail == NULL)
+ return -EINVAL;
+ p = awe_search_instr(map.src_bank, map.src_instr, map.src_key);
+ for (; p; p = p->next_instr) {
+ if (p->type == V_ST_NORMAL &&
+ is_identical_holder(p->holder, sftail) &&
+ p->v.low <= map.src_key &&
+ p->v.high >= map.src_key)
return 0; /* already present! */
}
#endif /* allow sharing */
return -EINVAL;
/* search the specified sample by optarg */
- if (search_sample_index(current_sf_id, patch->optarg, 0) >= 0)
+ if (search_sample_index(sftail, patch->optarg) != NULL)
return 0;
#endif /* allow sharing */
return -EINVAL;
}
+
+/* remove the existing instrument layers */
+static int
+remove_info(sf_list *sf, int bank, int instr)
+{
+ awe_voice_list *prev, *next, *p;
+ int removed = 0;
+
+ prev = NULL;
+ for (p = sf->infos; p; prev = p, p = next) {
+ next = p->next;
+ if (p->type == V_ST_NORMAL &&
+ p->bank == bank && p->instr == instr) {
+ /* remove this layer */
+ if (prev)
+ prev->next = next;
+ else
+ sf->infos = next;
+ if (p == sf->last_infos)
+ sf->last_infos = prev;
+ sf->num_info--;
+ removed++;
+ kfree(p);
+ }
+ }
+ return removed;
+}
+
/* load voice information data */
static int
awe_load_info(awe_patch_info *patch, const char *addr, int count)
awe_voice_rec_hdr hdr;
int i;
int total_size;
+ sf_list *sf;
+ awe_voice_list *rec;
if (count < AWE_VOICE_REC_SIZE) {
- printk("AWE32 Error: invalid patch info length\n");
+ printk(KERN_WARNING "AWE32 Error: invalid patch info length\n");
return -EINVAL;
}
offset = AWE_PATCH_INFO_SIZE;
- copy_from_user((char*)&hdr, addr + offset, AWE_VOICE_REC_SIZE);
+ if (copy_from_user((char*)&hdr, addr + offset, AWE_VOICE_REC_SIZE))
+ return -EFAULT;
offset += AWE_VOICE_REC_SIZE;
if (hdr.nvoices <= 0 || hdr.nvoices >= 100) {
- printk("AWE32 Error: Illegal voice number %d\n", hdr.nvoices);
+ printk(KERN_WARNING "AWE32 Error: Invalid voice number %d\n", hdr.nvoices);
return -EINVAL;
}
total_size = AWE_VOICE_REC_SIZE + AWE_VOICE_INFO_SIZE * hdr.nvoices;
if (count < total_size) {
- printk("AWE32 Error: patch length(%d) is smaller than nvoices(%d)\n",
+ printk(KERN_WARNING "AWE32 Error: patch length(%d) is smaller than nvoices(%d)\n",
count, hdr.nvoices);
return -EINVAL;
}
- if (check_patch_opened(AWE_PAT_TYPE_MISC, NULL) < 0)
- return -ENOSPC;
+ if ((sf = check_patch_opened(AWE_PAT_TYPE_MISC, NULL)) == NULL)
+ return -ENOMEM;
-#if 0 /* it looks like not so useful.. */
- /* check if the same preset already exists in the info list */
- for (i = sflists[current_sf_id-1].infos; i >= 0; i = infos[i].next) {
- if (infos[i].disabled) continue;
- if (infos[i].bank == hdr.bank && infos[i].instr == hdr.instr) {
- /* in exclusive mode, do skip loading this */
- if (hdr.write_mode == AWE_WR_EXCLUSIVE)
- return 0;
- /* in replace mode, disable the old data */
- else if (hdr.write_mode == AWE_WR_REPLACE)
- infos[i].disabled = TRUE;
+ switch (hdr.write_mode) {
+ case AWE_WR_EXCLUSIVE:
+ /* exclusive mode - if the instrument already exists,
+ return error */
+ for (rec = sf->infos; rec; rec = rec->next) {
+ if (rec->type == V_ST_NORMAL &&
+ rec->bank == hdr.bank &&
+ rec->instr == hdr.instr)
+ return -EINVAL;
}
+ break;
+ case AWE_WR_REPLACE:
+ /* replace mode - remove the instrument if it already exists */
+ remove_info(sf, hdr.bank, hdr.instr);
+ break;
}
- if (hdr.write_mode == AWE_WR_REPLACE)
- rebuild_preset_list();
-#endif
-
- if (alloc_new_info(hdr.nvoices) < 0)
- return -ENOSPC;
+ /* append new layers */
for (i = 0; i < hdr.nvoices; i++) {
- int rec = awe_free_info();
+ rec = alloc_new_info();
+ if (rec == NULL)
+ return -ENOMEM;
- infos[rec].bank = hdr.bank;
- infos[rec].instr = hdr.instr;
- infos[rec].type = V_ST_NORMAL;
- infos[rec].disabled = FALSE;
+ rec->bank = hdr.bank;
+ rec->instr = hdr.instr;
+ rec->type = V_ST_NORMAL;
+ rec->disabled = FALSE;
/* copy awe_voice_info parameters */
- copy_from_user(&infos[rec].v, addr + offset, AWE_VOICE_INFO_SIZE);
+ if (copy_from_user(&rec->v, addr + offset, AWE_VOICE_INFO_SIZE)) {
+ kfree(rec);
+ return -EFAULT;
+ }
offset += AWE_VOICE_INFO_SIZE;
- infos[rec].v.sf_id = current_sf_id;
#ifdef AWE_ALLOW_SAMPLE_SHARING
- if (sflists[current_sf_id-1].shared) {
- if (info_duplicated(&infos[rec]))
+ if (sf && sf->shared) {
+ if (info_duplicated(sf, rec)) {
+ kfree(rec);
continue;
+ }
}
#endif /* allow sharing */
- if (infos[rec].v.mode & AWE_MODE_INIT_PARM)
- awe_init_voice_parm(&infos[rec].v.parm);
- awe_set_sample(&infos[rec].v);
+ if (rec->v.mode & AWE_MODE_INIT_PARM)
+ awe_init_voice_parm(&rec->v.parm);
+ add_sf_info(sf, rec);
+ awe_set_sample(rec);
add_info_list(rec);
- add_sf_info(rec);
}
return 0;
}
+/* remove instrument layers */
+static int
+awe_remove_info(awe_patch_info *patch, const char *addr, int count)
+{
+ unsigned char bank, instr;
+ sf_list *sf;
+
+ if (! patch_opened || (sf = sftail) == NULL) {
+ printk(KERN_WARNING "AWE32: remove_info: patch not opened\n");
+ return -EINVAL;
+ }
+
+ bank = ((unsigned short)patch->optarg >> 8) & 0xff;
+ instr = (unsigned short)patch->optarg & 0xff;
+ if (! remove_info(sf, bank, instr))
+ return -EINVAL;
+ return 0;
+}
+
+
/* load wave sample data */
static int
awe_load_data(awe_patch_info *patch, const char *addr, int count)
{
int offset, size;
- int rc, free_sample;
- awe_sample_info tmprec, *rec;
+ int rc;
+ awe_sample_info tmprec;
+ awe_sample_list *rec;
+ sf_list *sf;
- if (check_patch_opened(AWE_PAT_TYPE_MISC, NULL) < 0)
- return -ENOSPC;
+ if ((sf = check_patch_opened(AWE_PAT_TYPE_MISC, NULL)) == NULL)
+ return -ENOMEM;
size = (count - AWE_SAMPLE_INFO_SIZE) / 2;
offset = AWE_PATCH_INFO_SIZE;
- copy_from_user(&tmprec, addr + offset, AWE_SAMPLE_INFO_SIZE);
+ if (copy_from_user(&tmprec, addr + offset, AWE_SAMPLE_INFO_SIZE))
+ return -EFAULT;
offset += AWE_SAMPLE_INFO_SIZE;
if (size != tmprec.size) {
- printk("AWE32: load: sample size differed (%d != %d)\n",
+ printk(KERN_WARNING "AWE32: load: sample size differed (%d != %d)\n",
tmprec.size, size);
return -EINVAL;
}
- if (search_sample_index(current_sf_id, tmprec.sample, 0) >= 0) {
+ if (search_sample_index(sf, tmprec.sample) != NULL) {
#ifdef AWE_ALLOW_SAMPLE_SHARING
/* if shared sample, skip this data */
- if (sflists[current_sf_id-1].type & AWE_PAT_SHARED)
+ if (sf->type & AWE_PAT_SHARED)
return 0;
#endif /* allow sharing */
DEBUG(1,printk("AWE32: sample data %d already present\n", tmprec.sample));
return -EINVAL;
}
- if (alloc_new_sample() < 0)
- return -ENOSPC;
+ if ((rec = alloc_new_sample()) == NULL)
+ return -ENOMEM;
- free_sample = awe_free_sample();
- rec = &samples[free_sample].v;
- *rec = tmprec;
+ memcpy(&rec->v, &tmprec, sizeof(tmprec));
- if (rec->size > 0)
- if ((rc = awe_write_wave_data(addr, offset, rec, -1)) != 0)
+ if (rec->v.size > 0) {
+ if ((rc = awe_write_wave_data(addr, offset, rec, -1)) < 0) {
+ kfree(rec);
return rc;
+ }
+ sf->mem_ptr += rc;
+ }
- rec->sf_id = current_sf_id;
-
- add_sf_sample(free_sample);
-
+ add_sf_sample(sf, rec);
return 0;
}
{
int offset;
int size;
- int rc, i;
+ int rc;
int channels;
awe_sample_info cursmp;
int save_mem_ptr;
+ sf_list *sf;
+ awe_sample_list *rec;
- if (! patch_opened) {
- printk("AWE32: replace: patch not opened\n");
+ if (! patch_opened || (sf = sftail) == NULL) {
+ printk(KERN_WARNING "AWE32: replace: patch not opened\n");
return -EINVAL;
}
size = (count - AWE_SAMPLE_INFO_SIZE) / 2;
offset = AWE_PATCH_INFO_SIZE;
- copy_from_user(&cursmp, addr + offset, AWE_SAMPLE_INFO_SIZE);
+ if (copy_from_user(&cursmp, addr + offset, AWE_SAMPLE_INFO_SIZE))
+ return -EFAULT;
offset += AWE_SAMPLE_INFO_SIZE;
if (cursmp.size == 0 || size != cursmp.size) {
- printk("AWE32: replace: illegal sample size (%d!=%d)\n",
+ printk(KERN_WARNING "AWE32: replace: invalid sample size (%d!=%d)\n",
cursmp.size, size);
return -EINVAL;
}
channels = patch->optarg;
if (channels <= 0 || channels > AWE_NORMAL_VOICES) {
- printk("AWE32: replace: illegal channels %d\n", channels);
+ printk(KERN_WARNING "AWE32: replace: invalid channels %d\n", channels);
return -EINVAL;
}
- for (i = sflists[current_sf_id-1].samples;
- i >= 0; i = samples[i].next) {
- if (samples[i].v.sample == cursmp.sample)
+ for (rec = sf->samples; rec; rec = rec->next) {
+ if (rec->v.sample == cursmp.sample)
break;
}
- if (i < 0) {
- printk("AWE32: replace: cannot find existing sample data %d\n",
+ if (rec == NULL) {
+ printk(KERN_WARNING "AWE32: replace: cannot find existing sample data %d\n",
cursmp.sample);
return -EINVAL;
}
- if (samples[i].v.size != cursmp.size) {
- printk("AWE32: replace: exiting size differed (%d!=%d)\n",
- samples[i].v.size, cursmp.size);
+ if (rec->v.size != cursmp.size) {
+ printk(KERN_WARNING "AWE32: replace: existing size differed (%d!=%d)\n",
+ rec->v.size, cursmp.size);
return -EINVAL;
}
save_mem_ptr = awe_free_mem_ptr();
- sflists[current_sf_id-1].mem_ptr = samples[i].v.start - awe_mem_start;
- memcpy(&samples[i].v, &cursmp, sizeof(cursmp));
- if ((rc = awe_write_wave_data(addr, offset, &samples[i].v, channels)) != 0)
+ sftail->mem_ptr = rec->v.start - awe_mem_start;
+ memcpy(&rec->v, &cursmp, sizeof(cursmp));
+ rec->v.sf_id = current_sf_id;
+ if ((rc = awe_write_wave_data(addr, offset, rec, channels)) < 0)
return rc;
- sflists[current_sf_id-1].mem_ptr = save_mem_ptr;
- samples[i].v.sf_id = current_sf_id;
+ sftail->mem_ptr = save_mem_ptr;
return 0;
}
static const char *readbuf_addr;
static int readbuf_offs;
static int readbuf_flags;
-#ifdef MALLOC_LOOP_DATA
-static unsigned short *readbuf_loop;
-static int readbuf_loopstart, readbuf_loopend;
-#endif
/* initialize read buffer */
static int
readbuf_init(const char *addr, int offset, awe_sample_info *sp)
{
-#ifdef MALLOC_LOOP_DATA
- readbuf_loop = NULL;
- readbuf_loopstart = sp->loopstart;
- readbuf_loopend = sp->loopend;
- if (sp->mode_flags & (AWE_SAMPLE_BIDIR_LOOP|AWE_SAMPLE_REVERSE_LOOP)) {
- int looplen = sp->loopend - sp->loopstart;
- readbuf_loop = vmalloc(looplen * 2);
- if (readbuf_loop == NULL) {
- printk("AWE32: can't malloc temp buffer\n");
- return -ENOSPC;
- }
- }
-#endif
readbuf_addr = addr;
readbuf_offs = offset;
readbuf_flags = sp->mode_flags;
/* read from user buffer */
if (readbuf_flags & AWE_SAMPLE_8BITS) {
unsigned char cc;
- get_user(cc, (unsigned char*)&(readbuf_addr)[readbuf_offs + pos]);
- c = cc << 8; /* convert 8bit -> 16bit */
+ get_user(cc, (unsigned char*)(readbuf_addr + readbuf_offs + pos));
+ c = (unsigned short)cc << 8; /* convert 8bit -> 16bit */
} else {
- get_user(c, (unsigned short*)&(readbuf_addr)[readbuf_offs + pos * 2]);
+ get_user(c, (unsigned short*)(readbuf_addr + readbuf_offs + pos * 2));
}
if (readbuf_flags & AWE_SAMPLE_UNSIGNED)
c ^= 0x8000; /* unsigned -> signed */
-#ifdef MALLOC_LOOP_DATA
- /* write on cache for reverse loop */
- if (readbuf_flags & (AWE_SAMPLE_BIDIR_LOOP|AWE_SAMPLE_REVERSE_LOOP)) {
- if (pos >= readbuf_loopstart && pos < readbuf_loopend)
- readbuf_loop[pos - readbuf_loopstart] = c;
- }
-#endif
return c;
}
-#ifdef MALLOC_LOOP_DATA
-/* read from cache */
-static unsigned short
-readbuf_word_cache(int pos)
-{
- if (pos >= readbuf_loopstart && pos < readbuf_loopend)
- return readbuf_loop[pos - readbuf_loopstart];
- return 0;
-}
-
-static void
-readbuf_end(void)
-{
- if (readbuf_loop)
- vfree(readbuf_loop);
- readbuf_loop = NULL;
-}
-
-#else
-
#define readbuf_word_cache readbuf_word
#define readbuf_end() /**/
-#endif
-
/*----------------------------------------------------------------*/
#define BLANK_LOOP_START 8
#define BLANK_LOOP_END 40
#define BLANK_LOOP_SIZE 48
-/* loading onto memory */
+/* loading onto memory - returns the actual written size */
static int
-awe_write_wave_data(const char *addr, int offset, awe_sample_info *sp, int channels)
+awe_write_wave_data(const char *addr, int offset, awe_sample_list *list, int channels)
{
int i, truesize, dram_offset;
+ awe_sample_info *sp = &list->v;
int rc;
/* be sure loop points start < end */
/* compute true data size to be loaded */
truesize = sp->size;
- if (sp->mode_flags & AWE_SAMPLE_BIDIR_LOOP)
+ if (sp->mode_flags & (AWE_SAMPLE_BIDIR_LOOP|AWE_SAMPLE_REVERSE_LOOP))
truesize += sp->loopend - sp->loopstart;
if (sp->mode_flags & AWE_SAMPLE_NO_BLANK)
truesize += BLANK_LOOP_SIZE;
- if (awe_free_mem_ptr() + truesize >= awe_mem_size/2) {
+ if (awe_free_mem_ptr() + truesize >= memsize/2) {
DEBUG(-1,printk("AWE32 Error: Sample memory full\n"));
return -ENOSPC;
}
}
}
- sflists[current_sf_id-1].mem_ptr += truesize;
awe_close_dram();
/* initialize FM */
awe_init_fm();
- return 0;
+ return truesize;
}
struct patch_info patch;
awe_voice_info *rec;
awe_sample_info *smp;
+ awe_voice_list *vrec;
+ awe_sample_list *smprec;
int sizeof_patch;
- int note, free_sample, free_info;
- int rc;
+ int note, rc;
+ sf_list *sf;
sizeof_patch = (int)((long)&patch.data[0] - (long)&patch); /* header size */
if (size < sizeof_patch) {
- printk("AWE32 Error: Patch header too short\n");
+ printk(KERN_WARNING "AWE32 Error: Patch header too short\n");
return -EINVAL;
}
- copy_from_user(((char*)&patch) + offs, addr + offs, sizeof_patch - offs);
+ if (copy_from_user(((char*)&patch) + offs, addr + offs, sizeof_patch - offs))
+ return -EFAULT;
size -= sizeof_patch;
if (size < patch.len) {
- printk("AWE32 Warning: Patch record too short (%d<%d)\n",
+ printk(KERN_WARNING "AWE32 Error: Patch record too short (%d<%d)\n",
size, patch.len);
return -EINVAL;
}
- if (check_patch_opened(AWE_PAT_TYPE_GUS, NULL) < 0)
- return -ENOSPC;
- if (alloc_new_sample() < 0)
- return -ENOSPC;
- if (alloc_new_info(1))
- return -ENOSPC;
-
- free_sample = awe_free_sample();
- smp = &samples[free_sample].v;
+ if ((sf = check_patch_opened(AWE_PAT_TYPE_GUS, NULL)) == NULL)
+ return -ENOMEM;
+ if ((smprec = alloc_new_sample()) == NULL)
+ return -ENOMEM;
+ if ((vrec = alloc_new_info()) == NULL) {
+ kfree(smprec);
+ return -ENOMEM;
+ }
- smp->sample = free_sample;
+ smp = &smprec->v;
+ smp->sample = sf->num_sample;
smp->start = 0;
smp->end = patch.len;
smp->loopstart = patch.loop_start;
smp->checksum_flag = 0;
smp->checksum = 0;
- if ((rc = awe_write_wave_data(addr, sizeof_patch, smp, -1)) != 0)
+ if ((rc = awe_write_wave_data(addr, sizeof_patch, smprec, -1)) < 0)
return rc;
-
- smp->sf_id = current_sf_id;
- add_sf_sample(free_sample);
+ sf->mem_ptr += rc;
+ add_sf_sample(sf, smprec);
/* set up voice info */
- free_info = awe_free_info();
- rec = &infos[free_info].v;
+ rec = &vrec->v;
awe_init_voice_info(rec);
- rec->sample = free_sample; /* the last sample */
+ rec->sample = sf->num_info; /* the last sample */
rec->rate_offset = calc_rate_offset(patch.base_freq);
note = freq_to_note(patch.base_note);
rec->root = note / 100;
release += calc_gus_envelope_time
(patch.env_rate[5], patch.env_offset[4],
patch.env_offset[5]);
- rec->parm.volatkhld = (calc_parm_attack(attack) << 8) |
- calc_parm_hold(hold);
+ rec->parm.volatkhld = (calc_parm_hold(hold) << 8) |
+ calc_parm_attack(attack);
rec->parm.voldcysus = (calc_gus_sustain(patch.env_offset[2]) << 8) |
calc_parm_decay(decay);
rec->parm.volrelease = 0x8000 | calc_parm_decay(release);
/* scale_freq, scale_factor, volume, and fractions not implemented */
/* append to the tail of the list */
- infos[free_info].bank = ctrls[AWE_MD_GUS_BANK];
- infos[free_info].instr = patch.instr_no;
- infos[free_info].disabled = FALSE;
- infos[free_info].type = V_ST_NORMAL;
- infos[free_info].v.sf_id = current_sf_id;
+ vrec->bank = ctrls[AWE_MD_GUS_BANK];
+ vrec->instr = patch.instr_no;
+ vrec->disabled = FALSE;
+ vrec->type = V_ST_NORMAL;
- add_info_list(free_info);
- add_sf_info(free_info);
+ add_sf_info(sf, vrec);
+ add_info_list(vrec);
/* set the voice index */
- awe_set_sample(rec);
+ awe_set_sample(vrec);
return 0;
}
* sample and voice list handlers
*/
-/* append this to the sf list */
-static void add_sf_info(int rec)
+/* append this to the current sf list */
+static void add_sf_info(sf_list *sf, awe_voice_list *rec)
{
- int sf_id = infos[rec].v.sf_id;
- if (sf_id <= 0) return;
- sf_id--;
- if (sflists[sf_id].infos < 0)
- sflists[sf_id].infos = rec;
- else {
- int i, prev;
- prev = sflists[sf_id].infos;
- while ((i = infos[prev].next) >= 0)
- prev = i;
- infos[prev].next = rec;
- }
- infos[rec].next = -1;
- sflists[sf_id].num_info++;
+ if (sf == NULL)
+ return;
+ rec->holder = sf;
+ rec->v.sf_id = sf->sf_id;
+ if (sf->last_infos)
+ sf->last_infos->next = rec;
+ else
+ sf->infos = rec;
+ sf->last_infos = rec;
+ rec->next = NULL;
+ sf->num_info++;
}
/* prepend this sample to sf list */
-static void add_sf_sample(int rec)
+static void add_sf_sample(sf_list *sf, awe_sample_list *rec)
{
- int sf_id = samples[rec].v.sf_id;
- if (sf_id <= 0) return;
- sf_id--;
- samples[rec].next = sflists[sf_id].samples;
- sflists[sf_id].samples = rec;
- sflists[sf_id].num_sample++;
+ if (sf == NULL)
+ return;
+ rec->holder = sf;
+ rec->v.sf_id = sf->sf_id;
+ if (sf->last_samples)
+ sf->last_samples->next = rec;
+ else
+ sf->samples = rec;
+ sf->last_samples = rec;
+ rec->next = NULL;
+ sf->num_sample++;
}
/* purge the old records which don't belong with the same file id */
-static void purge_old_list(int rec, int next)
+static void purge_old_list(awe_voice_list *rec, awe_voice_list *next)
{
- infos[rec].next_instr = next;
- if (infos[rec].bank == AWE_DRUM_BANK) {
+ rec->next_instr = next;
+ if (rec->bank == AWE_DRUM_BANK) {
/* remove samples with the same note range */
- int cur, *prevp = &infos[rec].next_instr;
- int low = infos[rec].v.low;
- int high = infos[rec].v.high;
- for (cur = next; cur >= 0; cur = infos[cur].next_instr) {
- if (infos[cur].v.low == low &&
- infos[cur].v.high == high &&
- ! is_identical_id(infos[cur].v.sf_id, infos[rec].v.sf_id))
- *prevp = infos[cur].next_instr;
- prevp = &infos[cur].next_instr;
+ awe_voice_list *cur, *prev = rec;
+ int low = rec->v.low;
+ int high = rec->v.high;
+ for (cur = next; cur; cur = cur->next_instr) {
+ if (cur->v.low == low &&
+ cur->v.high == high &&
+ ! is_identical_holder(cur->holder, rec->holder))
+ prev->next_instr = cur->next_instr;
+ else
+ prev = cur;
}
} else {
- if (! is_identical_id(infos[next].v.sf_id, infos[rec].v.sf_id))
- infos[rec].next_instr = -1;
+ if (! is_identical_holder(next->holder, rec->holder))
+ /* remove all samples */
+ rec->next_instr = NULL;
}
}
/* prepend to top of the preset table */
-static void add_info_list(int rec)
+static void add_info_list(awe_voice_list *rec)
{
- int *prevp, cur;
- int instr;
- int bank;
+ awe_voice_list *prev, *cur;
+ int key;
- if (infos[rec].disabled)
+ if (rec->disabled)
return;
- instr = infos[rec].instr;
- bank = infos[rec].bank;
- limitvalue(instr, 0, AWE_MAX_PRESETS-1);
- prevp = &preset_table[instr];
- cur = *prevp;
- while (cur >= 0) {
+ key = awe_search_key(rec->bank, rec->instr, rec->v.low);
+ prev = NULL;
+ for (cur = preset_table[key]; cur; cur = cur->next_bank) {
/* search the first record with the same bank number */
- if (infos[cur].bank == bank) {
+ if (cur->instr == rec->instr && cur->bank == rec->bank) {
/* replace the list with the new record */
- infos[rec].next_bank = infos[cur].next_bank;
- *prevp = rec;
+ rec->next_bank = cur->next_bank;
+ if (prev)
+ prev->next_bank = rec;
+ else
+ preset_table[key] = rec;
purge_old_list(rec, cur);
return;
}
- prevp = &infos[cur].next_bank;
- cur = infos[cur].next_bank;
+ prev = cur;
}
/* this is the first bank record.. just add this */
- infos[rec].next_instr = -1;
- infos[rec].next_bank = preset_table[instr];
- preset_table[instr] = rec;
+ rec->next_instr = NULL;
+ rec->next_bank = preset_table[key];
+ preset_table[key] = rec;
}
/* remove samples later than the specified sf_id */
static void
awe_remove_samples(int sf_id)
{
+ sf_list *p, *prev;
+
if (sf_id <= 0) {
awe_reset_samples();
return;
if (current_sf_id <= sf_id)
return;
+ for (p = sftail; p; p = prev) {
+ if (p->sf_id <= sf_id)
+ break;
+ prev = p->prev;
+ awe_free_sf(p);
+ }
+ sftail = p;
+ if (sftail) {
+ sf_id = sftail->sf_id;
+ sftail->next = NULL;
+ } else {
+ sf_id = 0;
+ sfhead = NULL;
+ }
current_sf_id = sf_id;
if (locked_sf_id > sf_id)
locked_sf_id = sf_id;
/* rebuild preset search list */
static void rebuild_preset_list(void)
{
- int i, j;
+ sf_list *p;
+ awe_voice_list *rec;
- for (i = 0; i < AWE_MAX_PRESETS; i++)
- preset_table[i] = -1;
+ memset(preset_table, 0, sizeof(preset_table));
- for (i = 0; i < current_sf_id; i++) {
- for (j = sflists[i].infos; j >= 0; j = infos[j].next)
- add_info_list(j);
+ for (p = sfhead; p; p = p->next) {
+ for (rec = p->infos; rec; rec = rec->next)
+ add_info_list(rec);
}
}
/* compare the given sf_id pair */
-static int is_identical_id(int id1, int id2)
+static int is_identical_holder(sf_list *sf1, sf_list *sf2)
{
- if (id1 == id2)
- return TRUE;
- if (id1 <= 0 || id2 <= 0) /* this must not happen.. */
+ if (sf1 == NULL || sf2 == NULL)
return FALSE;
+ if (sf1 == sf2)
+ return TRUE;
#ifdef AWE_ALLOW_SAMPLE_SHARING
{
/* compare with the sharing id */
- int i;
- if (id1 < id2) { /* make sure id1 > id2 */
- int tmp; tmp = id1; id1 = id2; id2 = tmp;
+ sf_list *p;
+ int counter = 0;
+ if (sf1->sf_id < sf2->sf_id) { /* make sure id1 > id2 */
+ sf_list *tmp; tmp = sf1; sf1 = sf2; sf2 = tmp;
}
- for (i = sflists[id1-1].shared; i > 0 && i <= current_sf_id; i = sflists[i-1].shared) {
- if (i == id2)
+ for (p = sf1->shared; p; p = p->shared) {
+ if (counter++ > current_sf_id)
+ break; /* strange sharing loop.. quit */
+ if (p == sf2)
return TRUE;
}
}
}
/* search the sample index matching with the given sample id */
-static int search_sample_index(int sf, int sample, int level)
+static awe_sample_list *
+search_sample_index(sf_list *sf, int sample)
{
- int i;
-
- if (sf <= 0 || sf > current_sf_id)
- return -1; /* this must not happen */
-
- for (i = sflists[sf-1].samples; i >= 0; i = samples[i].next) {
- if (samples[i].v.sample == sample)
- return i;
- }
+ awe_sample_list *p;
#ifdef AWE_ALLOW_SAMPLE_SHARING
- if ((i = sflists[sf-1].shared) > 0 && i <= current_sf_id) { /* search recursively */
- if (level > current_sf_id)
- return -1; /* strange sharing loop.. quit */
- return search_sample_index(i, sample, level + 1);
+ int counter = 0;
+ while (sf) {
+ for (p = sf->samples; p; p = p->next) {
+ if (p->v.sample == sample)
+ return p;
+ }
+ sf = sf->shared;
+ if (counter++ > current_sf_id)
+ break; /* strange sharing loop.. quit */
+ }
+#else
+ if (sf) {
+ for (p = sf->samples; p; p = p->next) {
+ if (p->v.sample == sample)
+ return p;
+ }
}
#endif
- return -1;
+ return NULL;
}
/* search the specified sample */
+/* non-zero = found */
static short
-awe_set_sample(awe_voice_info *vp)
+awe_set_sample(awe_voice_list *rec)
{
- int i;
+ awe_sample_list *smp;
+ awe_voice_info *vp = &rec->v;
- vp->index = -1;
- if ((i = search_sample_index(vp->sf_id, vp->sample, 0)) < 0)
- return -1;
+ vp->index = 0;
+ if ((smp = search_sample_index(rec->holder, vp->sample)) == NULL)
+ return 0;
/* set the actual sample offsets */
- vp->start += samples[i].v.start;
- vp->end += samples[i].v.end;
- vp->loopstart += samples[i].v.loopstart;
- vp->loopend += samples[i].v.loopend;
+ vp->start += smp->v.start;
+ vp->end += smp->v.end;
+ vp->loopstart += smp->v.loopstart;
+ vp->loopend += smp->v.loopend;
/* copy mode flags */
- vp->mode = samples[i].v.mode_flags;
- /* set index */
- vp->index = i;
+ vp->mode = smp->v.mode_flags;
+ /* set flag */
+ vp->index = 1;
- return i;
+ return 1;
}
-/*----------------------------------------------------------------
+/*
* voice allocation
- *----------------------------------------------------------------*/
+ */
/* look for all voices associated with the specified note & velocity */
static int
-awe_search_multi_voices(int rec, int note, int velocity, awe_voice_info **vlist)
+awe_search_multi_voices(awe_voice_list *rec, int note, int velocity,
+ awe_voice_info **vlist)
{
int nvoices;
nvoices = 0;
- for (; rec >= 0; rec = infos[rec].next_instr) {
- if (note >= infos[rec].v.low &&
- note <= infos[rec].v.high &&
- velocity >= infos[rec].v.vellow &&
- velocity <= infos[rec].v.velhigh) {
- if (infos[rec].type == V_ST_MAPPED) {
+ for (; rec; rec = rec->next_instr) {
+ if (note >= rec->v.low &&
+ note <= rec->v.high &&
+ velocity >= rec->v.vellow &&
+ velocity <= rec->v.velhigh) {
+ if (rec->type == V_ST_MAPPED) {
/* mapper */
- vlist[0] = &infos[rec].v;
+ vlist[0] = &rec->v;
return -1;
}
- vlist[nvoices++] = &infos[rec].v;
+ vlist[nvoices++] = &rec->v;
if (nvoices >= AWE_MAX_VOICES)
break;
}
the note number if necessary.
*/
static int
-really_alloc_voices(int vrec, int def_vrec, int *note, int velocity, awe_voice_info **vlist, int level)
+really_alloc_voices(int bank, int instr, int *note, int velocity, awe_voice_info **vlist)
{
int nvoices;
-
- nvoices = awe_search_multi_voices(vrec, *note, velocity, vlist);
- if (nvoices == 0)
- nvoices = awe_search_multi_voices(def_vrec, *note, velocity, vlist);
- if (nvoices < 0) { /* mapping */
- int preset = vlist[0]->start;
- int bank = vlist[0]->end;
- int key = vlist[0]->fixkey;
- if (level > 5) {
- printk("AWE32: too deep mapping level\n");
- return 0;
+ awe_voice_list *vrec;
+ int level = 0;
+
+ for (;;) {
+ vrec = awe_search_instr(bank, instr, *note);
+ nvoices = awe_search_multi_voices(vrec, *note, velocity, vlist);
+ if (nvoices == 0) {
+ if (bank == AWE_DRUM_BANK)
+ /* search default drumset */
+ vrec = awe_search_instr(bank, ctrls[AWE_MD_DEF_DRUM], *note);
+ else
+ /* search default preset */
+ vrec = awe_search_instr(ctrls[AWE_MD_DEF_BANK], instr, *note);
+ nvoices = awe_search_multi_voices(vrec, *note, velocity, vlist);
}
- vrec = awe_search_instr(bank, preset);
- if (bank == AWE_DRUM_BANK)
- def_vrec = awe_search_instr(bank, 0);
- else
- def_vrec = awe_search_instr(0, preset);
- if (key >= 0)
- *note = key;
- return really_alloc_voices(vrec, def_vrec, note, velocity, vlist, level+1);
+ if (nvoices == 0) {
+ if (bank == AWE_DRUM_BANK && ctrls[AWE_MD_DEF_DRUM] != 0)
+ /* search default drumset */
+ vrec = awe_search_instr(bank, 0, *note);
+ else if (bank != AWE_DRUM_BANK && ctrls[AWE_MD_DEF_BANK] != 0)
+ /* search default preset */
+ vrec = awe_search_instr(0, instr, *note);
+ nvoices = awe_search_multi_voices(vrec, *note, velocity, vlist);
+ }
+ if (nvoices < 0) { /* mapping */
+ int key = vlist[0]->fixkey;
+ instr = vlist[0]->start;
+ bank = vlist[0]->end;
+ if (level++ > 5) {
+ printk(KERN_ERR "AWE32: too deep mapping level\n");
+ return 0;
+ }
+ if (key >= 0)
+ *note = key;
+ } else
+ break;
}
return nvoices;
static void
awe_alloc_multi_voices(int ch, int note, int velocity, int key)
{
- int i, v, nvoices;
+ int i, v, nvoices, bank;
awe_voice_info *vlist[AWE_MAX_VOICES];
- if (channels[ch].vrec < 0 && channels[ch].def_vrec < 0)
- awe_set_instr(0, ch, channels[ch].instr);
+ if (MULTI_LAYER_MODE() && IS_DRUM_CHANNEL(ch))
+ bank = AWE_DRUM_BANK; /* always search drumset */
+ else
+ bank = channels[ch].bank;
/* check the possible voices; note may be changeable if mapped */
- nvoices = really_alloc_voices(channels[ch].vrec, channels[ch].def_vrec,
- ¬e, velocity, vlist, 0);
+ nvoices = really_alloc_voices(bank, channels[ch].instr,
+ ¬e, velocity, vlist);
/* set the voices */
current_alloc_time++;
}
-/* search the best voice from the specified status condition */
+/* search an empty voice.
+ if no empty voice is found, at least terminate a voice
+ */
static int
-search_best_voice(int condition)
+awe_clear_voice(void)
{
- int i, time, best;
- int vtarget = 0xffff, min_vtarget = 0xffff;
+ enum {
+ OFF=0, RELEASED, SUSTAINED, PLAYING, END
+ };
+ struct voice_candidate_t {
+ int best;
+ int time;
+ int vtarget;
+ } candidate[END];
+ int i, type, vtarget;
+
+ vtarget = 0xffff;
+ for (type = OFF; type < END; type++) {
+ candidate[type].best = -1;
+ candidate[type].time = current_alloc_time + 1;
+ candidate[type].vtarget = vtarget;
+ }
- best = -1;
- time = current_alloc_time + 1;
for (i = 0; i < awe_max_voices; i++) {
- if (! (voices[i].state & condition))
+ if (voices[i].state & AWE_ST_OFF)
+ type = OFF;
+ else if (voices[i].state & AWE_ST_RELEASED)
+ type = RELEASED;
+ else if (voices[i].state & AWE_ST_SUSTAINED)
+ type = SUSTAINED;
+ else if (voices[i].state & ~AWE_ST_MARK)
+ type = PLAYING;
+ else
continue;
#ifdef AWE_CHECK_VTARGET
/* get current volume */
vtarget = (awe_peek_dw(AWE_VTFT(i)) >> 16) & 0xffff;
#endif
- if (best < 0 || vtarget < min_vtarget ||
- (vtarget == min_vtarget && voices[i].time < time)) {
- best = i;
- time = voices[i].time;
- min_vtarget = vtarget;
+ if (candidate[type].best < 0 ||
+ vtarget < candidate[type].vtarget ||
+ (vtarget == candidate[type].vtarget &&
+ voices[i].time < candidate[type].time)) {
+ candidate[type].best = i;
+ candidate[type].time = voices[i].time;
+ candidate[type].vtarget = vtarget;
}
}
- /* clear voice */
- if (best >= 0) {
- if (voices[best].state != AWE_ST_OFF)
- awe_terminate(best);
- awe_voice_init(best, TRUE);
- }
-
- return best;
-}
-/* search an empty voice.
- if no empty voice is found, at least terminate a voice
- */
-static int
-awe_clear_voice(void)
-{
- int best;
-
- /* looking for the oldest empty voice */
- if ((best = search_best_voice(AWE_ST_OFF)) >= 0)
- return best;
- if ((best = search_best_voice(AWE_ST_RELEASED)) >= 0)
- return best;
- /* looking for the oldest sustained voice */
- if ((best = search_best_voice(AWE_ST_SUSTAINED)) >= 0)
- return best;
-
- if (MULTI_LAYER_MODE() && ctrls[AWE_MD_CHN_PRIOR]) {
- int ch = -1;
- int time = current_alloc_time + 1;
- int i;
- /* looking for the voices from high channel (except drum ch) */
- for (i = 0; i < awe_max_voices; i++) {
- if (IS_DRUM_CHANNEL(voices[i].ch)) continue;
- if (voices[i].ch < ch) continue;
- if (voices[i].state != AWE_ST_MARK &&
- (voices[i].ch > ch || voices[i].time < time)) {
- best = i;
- time = voices[i].time;
- ch = voices[i].ch;
- }
+ for (type = OFF; type < END; type++) {
+ if ((i = candidate[type].best) >= 0) {
+ if (voices[i].state != AWE_ST_OFF)
+ awe_terminate(i);
+ awe_voice_init(i, TRUE);
+ return i;
}
}
- if (best < 0)
- best = search_best_voice(~AWE_ST_MARK);
-
- if (best >= 0)
- return best;
-
return 0;
}
static void
awe_alloc_one_voice(int voice, int note, int velocity)
{
- int ch, nvoices;
+ int ch, nvoices, bank;
awe_voice_info *vlist[AWE_MAX_VOICES];
ch = voices[voice].ch;
- if (channels[ch].vrec < 0 && channels[ch].def_vrec < 0)
- awe_set_instr(0, ch, channels[ch].instr);
+ if (MULTI_LAYER_MODE() && IS_DRUM_CHANNEL(ch))
+ bank = AWE_DRUM_BANK; /* always search drumset */
+ else
+ bank = voices[voice].cinfo->bank;
- nvoices = really_alloc_voices(voices[voice].cinfo->vrec,
- voices[voice].cinfo->def_vrec,
- ¬e, velocity, vlist, 0);
+ nvoices = really_alloc_voices(bank, voices[voice].cinfo->instr,
+ ¬e, velocity, vlist);
if (nvoices > 0) {
voices[voice].time = ++current_alloc_time;
voices[voice].sample = vlist[0]; /* use the first one */
}
-/*----------------------------------------------------------------
+/*
* sequencer2 functions
- *----------------------------------------------------------------*/
+ */
/* search an empty voice; used by sequencer2 */
static int
awe_mixer_ioctl,
};
-static void attach_mixer(void)
+static void __init attach_mixer(void)
{
if ((my_mixerdev = sound_alloc_mixerdev()) >= 0) {
mixer_devs[my_mixerdev] = &awe_mixer_operations;
}
}
-static void unload_mixer(void)
+static void __exit unload_mixer(void)
{
if (my_mixerdev >= 0)
sound_unload_mixerdev(my_mixerdev);
if (((cmd >> 8) & 0xff) != 'M')
return -EINVAL;
- level = (int) *(int *)arg;
+ level = *(int*)arg;
level = ((level & 0xff) + (level >> 8)) / 2;
DEBUG(0,printk("AWEMix: cmd=%x val=%d\n", cmd & 0xff, level));
level = 0;
break;
}
- return *(int *)arg = level;
+ return *(int*)arg = level;
}
#endif /* CONFIG_AWE32_MIXER */
/*
- * initialization of AWE32
+ * initialization of Emu8000
*/
/* initialize audio channels */
{
#ifndef AWE_ALWAYS_INIT_FM
/* if no extended memory is on board.. */
- if (awe_mem_size <= 0)
+ if (memsize <= 0)
return;
#endif
DEBUG(3,printk("AWE32: initializing FM\n"));
awe_poke_dw(AWE_CCCA(vidx[i]), 0);
voices[vidx[i]].state = AWE_ST_OFF;
}
- return -ENOSPC;
+ printk(KERN_WARNING "awe: not ready to write..\n");
+ return -EPERM;
}
/* set address to write */
}
-/*================================================================
+/*
* detect presence of AWE32 and check memory size
- *================================================================*/
+ */
/* detect emu8000 chip on the specified address; from VV's guide */
-static int
+static int __init
awe_detect_base(int addr)
{
setup_ports(addr, 0, 0);
return 1;
}
-static int
+#ifdef CONFIG_ISAPNP
+static struct {
+ unsigned short vendor;
+ unsigned short function;
+ char *name;
+} isapnp_awe_list[] __initdata = {
+ {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0021), "AWE32 WaveTable"},
+ {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0022), "AWE64 WaveTable"},
+ {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0023), "AWE64 Gold WaveTable"},
+ {0,}
+};
+
+static struct pci_dev *idev = NULL;
+
+static int __init awe_probe_isapnp(int *port)
+{
+ int i;
+
+ for (i = 0; isapnp_awe_list[i].vendor != 0; i++) {
+ while ((idev = isapnp_find_dev(NULL,
+ isapnp_awe_list[i].vendor,
+ isapnp_awe_list[i].function,
+ idev))) {
+ if (idev->prepare(idev) < 0)
+ continue;
+ if (idev->activate(idev) < 0 ||
+ !idev->resource[0].start) {
+ idev->deactivate(idev);
+ continue;
+ }
+ *port = idev->resource[0].start;
+ break;
+ }
+ if (!idev)
+ continue;
+ printk(KERN_INFO "ISAPnP reports %s at i/o %#x\n",
+ isapnp_awe_list[i].name, *port);
+ return 0;
+ }
+ return -ENODEV;
+}
+
+static void __exit awe_deactivate_isapnp(void)
+{
+#if 1
+ if (idev) {
+ idev->deactivate(idev);
+ idev = NULL;
+ }
+#endif
+}
+
+#endif
+
+static int __init
awe_detect(void)
{
int base;
- if (port_setuped) /* already initialized by PnP */
+#ifdef CONFIG_ISAPNP
+ if (isapnp) {
+ if (awe_probe_isapnp(&io) < 0) {
+ printk(KERN_ERR "AWE32: No ISAPnP cards found\n");
+ return 0;
+ }
+ setup_ports(io, 0, 0);
return 1;
+ }
+#endif /* isapnp */
- if (awe_port) /* use default i/o port value */
- setup_ports(awe_port, 0, 0);
+ if (io) /* use default i/o port value */
+ setup_ports(io, 0, 0);
else { /* probe it */
for (base = 0x620; base <= 0x680; base += 0x20)
if (awe_detect_base(base))
}
-/*================================================================
+/*
* check dram size on AWE board
- *================================================================*/
+ */
/* any three numbers you like */
#define UNIQUE_ID1 0x1234
#define UNIQUE_ID2 0x4321
#define UNIQUE_ID3 0xFFFF
-static void
+static void __init
awe_check_dram(void)
{
if (awe_present) /* already initialized */
return;
- if (awe_mem_size >= 0) { /* given by config file or module option */
- awe_mem_size *= 1024; /* convert to Kbytes */
+ if (memsize >= 0) { /* given by config file or module option */
+ memsize *= 1024; /* convert to Kbytes */
return;
}
awe_open_dram_for_check();
- awe_mem_size = 0;
+ memsize = 0;
/* set up unique two id numbers */
awe_poke_dw(AWE_SMALW, AWE_DRAM_OFFSET);
awe_poke(AWE_SMLD, UNIQUE_ID1);
awe_poke(AWE_SMLD, UNIQUE_ID2);
- while (awe_mem_size < AWE_MAX_DRAM_SIZE) {
+ while (memsize < AWE_MAX_DRAM_SIZE) {
awe_wait(5);
/* read a data on the DRAM start address */
awe_poke_dw(AWE_SMALR, AWE_DRAM_OFFSET);
break;
if (awe_peek(AWE_SMLD) != UNIQUE_ID2)
break;
- awe_mem_size += 512; /* increment 512kbytes */
+ memsize += 512; /* increment 512kbytes */
/* Write a unique data on the test address;
* if the address is out of range, the data is written on
* 0x200000(=AWE_DRAM_OFFSET). Then the two id words are
* broken by this data.
*/
- awe_poke_dw(AWE_SMALW, AWE_DRAM_OFFSET + awe_mem_size*512L);
+ awe_poke_dw(AWE_SMALW, AWE_DRAM_OFFSET + memsize*512L);
awe_poke(AWE_SMLD, UNIQUE_ID3);
awe_wait(5);
/* read a data on the just written DRAM address */
- awe_poke_dw(AWE_SMALR, AWE_DRAM_OFFSET + awe_mem_size*512L);
+ awe_poke_dw(AWE_SMALR, AWE_DRAM_OFFSET + memsize*512L);
awe_peek(AWE_SMLD); /* discard stale data */
if (awe_peek(AWE_SMLD) != UNIQUE_ID3)
break;
}
awe_close_dram();
- DEBUG(0,printk("AWE32: %d Kbytes memory detected\n", awe_mem_size));
+ DEBUG(0,printk("AWE32: %d Kbytes memory detected\n", memsize));
/* convert to Kbytes */
- awe_mem_size *= 1024;
+ memsize *= 1024;
}
-/*================================================================
+/*----------------------------------------------------------------*/
+
+/*
* chorus and reverb controls; from VV's guide
- *================================================================*/
+ */
/* 5 parameters for each chorus mode; 3 x 16bit, 2 x 32bit */
static char chorus_defined[AWE_CHORUS_NUMBERS];
awe_load_chorus_fx(awe_patch_info *patch, const char *addr, int count)
{
if (patch->optarg < AWE_CHORUS_PREDEFINED || patch->optarg >= AWE_CHORUS_NUMBERS) {
- printk("AWE32 Error: illegal chorus mode %d for uploading\n", patch->optarg);
+ printk(KERN_WARNING "AWE32 Error: invalid chorus mode %d for uploading\n", patch->optarg);
return -EINVAL;
}
if (count < sizeof(awe_chorus_fx_rec)) {
- printk("AWE32 Error: too short chorus fx parameters\n");
+ printk(KERN_WARNING "AWE32 Error: too short chorus fx parameters\n");
return -EINVAL;
}
- copy_from_user(&chorus_parm[patch->optarg], addr + AWE_PATCH_INFO_SIZE,
- sizeof(awe_chorus_fx_rec));
+ if (copy_from_user(&chorus_parm[patch->optarg], addr + AWE_PATCH_INFO_SIZE,
+ sizeof(awe_chorus_fx_rec)))
+ return -EFAULT;
chorus_defined[patch->optarg] = TRUE;
return 0;
}
awe_load_reverb_fx(awe_patch_info *patch, const char *addr, int count)
{
if (patch->optarg < AWE_REVERB_PREDEFINED || patch->optarg >= AWE_REVERB_NUMBERS) {
- printk("AWE32 Error: illegal reverb mode %d for uploading\n", patch->optarg);
+ printk(KERN_WARNING "AWE32 Error: invalid reverb mode %d for uploading\n", patch->optarg);
return -EINVAL;
}
if (count < sizeof(awe_reverb_fx_rec)) {
- printk("AWE32 Error: too short reverb fx parameters\n");
+ printk(KERN_WARNING "AWE32 Error: too short reverb fx parameters\n");
return -EINVAL;
}
- copy_from_user(&reverb_parm[patch->optarg], addr + AWE_PATCH_INFO_SIZE,
- sizeof(awe_reverb_fx_rec));
+ if (copy_from_user(&reverb_parm[patch->optarg], addr + AWE_PATCH_INFO_SIZE,
+ sizeof(awe_reverb_fx_rec)))
+ return -EFAULT;
reverb_defined[patch->optarg] = TRUE;
return 0;
}
awe_set_reverb_mode(ctrls[AWE_MD_REVERB_MODE]);
}
-/*================================================================
+/*
* treble/bass equalizer control
- *================================================================*/
+ */
static unsigned short bass_parm[12][3] = {
{0xD26A, 0xD36A, 0x0000}, /* -12 dB */
}
+/*----------------------------------------------------------------*/
+
#ifdef CONFIG_AWE32_MIDIEMU
-/*================================================================
+/*
* Emu8000 MIDI Emulation
- *================================================================*/
+ */
-/*================================================================
+/*
* midi queue record
- *================================================================*/
+ */
/* queue type */
enum { Q_NONE, Q_VARLEN, Q_READ, Q_SYSEX, };
} ConvTable;
-/*================================================================
+/*
* prototypes
- *================================================================*/
+ */
static int awe_midi_open(int dev, int mode, void (*input)(int,unsigned char), void (*output)(int));
static void awe_midi_close(int dev);
#define numberof(ary) (sizeof(ary)/sizeof(ary[0]))
-/*================================================================
+/*
* OSS Midi device record
- *================================================================*/
+ */
static struct midi_operations awe_midi_operations =
{
static int my_mididev = -1;
-static void attach_midiemu(void)
+static void __init attach_midiemu(void)
{
if ((my_mididev = sound_alloc_mididev()) < 0)
printk ("Sound: Too many midi devices detected\n");
midi_devs[my_mididev] = &awe_midi_operations;
}
-static void unload_midiemu(void)
+static void __exit unload_midiemu(void)
{
if (my_mididev >= 0)
sound_unload_mididev(my_mididev);
}
-/*================================================================
+/*
* RPN events
- *================================================================*/
+ */
static void midi_rpn_event(MidiStatus *st)
{
}
-/*================================================================
+/*
* system exclusive message
* GM/GS/XG macros are accepted
- *================================================================*/
+ */
static void midi_system_exclusive(MidiStatus *st)
{
}
+/*----------------------------------------------------------------*/
+
/*
* convert NRPN/control values
*/
static int num_gs_effects = numberof(gs_effects);
-/*================================================================
+/*
* NRPN events: accept as AWE32/SC88 specific controls
- *================================================================*/
+ */
static void midi_nrpn_event(MidiStatus *st)
{
}
-/*----------------------------------------------------------------
+/*
* XG control effects; still experimental
- *----------------------------------------------------------------*/
+ */
/* cutoff: quarter semitone step, max=255 */
static unsigned short xg_cutoff(int val)
#endif /* CONFIG_AWE32_MIDIEMU */
-/* new type interface */
-static int __init attach_awe(void)
+
+/*----------------------------------------------------------------*/
+
+/*
+ * device / lowlevel (module) interface
+ */
+
+int __init attach_awe(void)
{
-#ifdef CONFIG_PNP_DRV
- if (pnp) {
- awe_initpnp();
- if (awe_pnp_ok)
- return 0;
- }
-#endif /* pnp */
-
- _attach_awe();
-
- return 0;
-}
+ return _attach_awe() ? 0 : -ENODEV;
+}
-static void __exit unload_awe(void)
+void __exit unload_awe(void)
{
-#ifdef CONFIG_PNP_DRV
- if (pnp)
- awe_unload_pnp();
-#endif
-
_unload_awe();
+#ifdef CONFIG_ISAPNP
+ if (isapnp)
+ awe_deactivate_isapnp();
+#endif /* isapnp */
}
+
module_init(attach_awe);
module_exit(unload_awe);
#ifndef MODULE
static int __init setup_awe(char *str)
{
- /* io, memsize */
- int ints[3];
+ /* io, memsize, isapnp */
+ int ints[4];
str = get_options(str, ARRAY_SIZE(ints), ints);
io = ints[1];
memsize = ints[2];
+ isapnp = ints[3];
return 1;
}
* sound/awe_config.h
*
* Configuration of AWE32/SB32/AWE64 wave table synth driver.
- * version 0.4.3; Mar. 1, 1998
+ * version 0.4.4; Jan. 4, 2000
*
* Copyright (C) 1996-1998 Takashi Iwai
*
/*#define AWE_DEFAULT_MEM_SIZE 512*/ /* kbytes */
/*
- * maximum size of soundfont list table
+ * AWE driver version number
*/
-
-#define AWE_MAX_SF_LISTS 16
-
-/*
- * chunk size of sample and voice tables
- */
-
-#define AWE_MAX_SAMPLES 400
-#define AWE_MAX_INFOS 800
-
#define AWE_MAJOR_VERSION 0
#define AWE_MINOR_VERSION 4
-#define AWE_TINY_VERSION 3
+#define AWE_TINY_VERSION 4
#define AWE_VERSION_NUMBER ((AWE_MAJOR_VERSION<<16)|(AWE_MINOR_VERSION<<8)|AWE_TINY_VERSION)
-#define AWEDRV_VERSION "0.4.3"
+#define AWEDRV_VERSION "0.4.4"
--- /dev/null
+***************
+*** 399,405 ****
+ /* @X@0001:mpu
+ */
+
+- #ifdef CONFIG_MIDI
+ if((mpu_dev = isapnp_find_dev(bus,
+ ISAPNP_VENDOR('@','X','@'), ISAPNP_FUNCTION(0x0001), NULL)))
+ {
+--- 361,366 ----
+ /* @X@0001:mpu
+ */
+
+ if((mpu_dev = isapnp_find_dev(bus,
+ ISAPNP_VENDOR('@','X','@'), ISAPNP_FUNCTION(0x0001), NULL)))
+ {
+***************
+*** 413,583 ****
+ }
+ else
+ printk(KERN_ERR "sb: DT0197H panic: mpu not found\n");
+- #endif
+-
+-
+- /* @P@:Gameport
+- */
+-
+- if((jp_dev = isapnp_find_dev(bus,
+- ISAPNP_VENDOR('@','P','@'), ISAPNP_FUNCTION(0x0001), NULL)))
+- {
+- jp_dev->prepare(jp_dev);
+-
+- if((jp_dev = activate_dev("DT0197H", "gameport", jp_dev)))
+- show_base("DT0197H", "gameport", &jp_dev->resource[0]);
+- }
+- else
+- printk(KERN_ERR "sb: DT0197H panic: gameport not found\n");
+-
+- /* @H@0001:OPL3
+- */
+-
+- #if defined(CONFIG_SOUND_YM3812) || defined(CONFIG_SOUND_YM3812_MODULE)
+- if((wss_dev = isapnp_find_dev(bus,
+- ISAPNP_VENDOR('@','H','@'), ISAPNP_FUNCTION(0x0001), NULL)))
+- {
+- wss_dev->prepare(wss_dev);
+-
+- /* Let's disable IRQ and DMA for WSS device */
+-
+- wss_dev->irq_resource[0].flags = 0;
+- wss_dev->dma_resource[0].flags = 0;
+-
+- if((wss_dev = activate_dev("DT0197H", "opl3", wss_dev)))
+- show_base("DT0197H", "opl3", &wss_dev->resource[0]);
+- }
+- else
+- printk(KERN_ERR "sb: DT0197H panic: opl3 not found\n");
+- #endif
+
+ printk(KERN_INFO "sb: DT0197H mail reports to Torsten Werner <twerner@intercomm.de>\n");
+
+ return(sb_dev);
+ }
+
+- /* Specific support for awe will be dropped when:
+- * a) The new awe_wawe driver with PnP support will be introduced in the kernel
+- * b) The joystick driver will support PnP - a little patch is available from me....hint, hint :-)
+- */
+-
+- static struct pci_dev *sb_init_awe(struct pci_bus *bus, struct pci_dev *card, struct address_info *hw_config, struct address_info *mpu_config)
+ {
+- /* CTL0042:Audio SB64
+- * CTL0031:Audio SB32
+- * CTL0045:Audio SB64
+ */
+
+- if( (sb_dev = isapnp_find_dev(bus, ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0042), NULL)) ||
+- (sb_dev = isapnp_find_dev(bus, ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0031), NULL)) ||
+- (sb_dev = isapnp_find_dev(bus, ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0045), NULL)) )
+ {
+ sb_dev->prepare(sb_dev);
+
+- if((sb_dev = activate_dev("AWE", "sb", sb_dev)))
+ {
+ hw_config->io_base = sb_dev->resource[0].start;
+ hw_config->irq = sb_dev->irq_resource[0].start;
+- hw_config->dma = sb_dev->dma_resource[0].start;
+- hw_config->dma2 = sb_dev->dma_resource[1].start;
+-
+- mpu_config->io_base = sb_dev->resource[1].start;
+
+- show_base("AWE", "sb", &sb_dev->resource[0]);
+- show_base("AWE", "mpu", &sb_dev->resource[1]);
+- show_base("AWE", "opl3", &sb_dev->resource[2]);
+ }
+- else
+- return(NULL);
+- }
+- else
+- printk(KERN_ERR "sb: AWE panic: sb base not found\n");
+-
+
+- /* CTL7002:Game SB64
+- * CTL7001:Game SB32
+- */
+-
+- if( (jp_dev = isapnp_find_dev(bus, ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x7002), NULL)) ||
+- (jp_dev = isapnp_find_dev(bus, ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x7001), NULL)) )
+- {
+- jp_dev->prepare(jp_dev);
+-
+- if((jp_dev = activate_dev("AWE", "gameport", jp_dev)))
+- show_base("AWE", "gameport", &jp_dev->resource[0]);
+ }
+ else
+- printk(KERN_ERR "sb: AWE panic: gameport not found\n");
+-
+
+- /* CTL0022:WaveTable SB64
+- * CTL0021:WaveTable SB32
+- * CTL0023:WaveTable Sb64
+ */
+
+- if( nosbwave == 0 &&
+- ( ( wt_dev = isapnp_find_dev(bus, ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0023), NULL)) ||
+- ( wt_dev = isapnp_find_dev(bus, ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0022), NULL)) ||
+- ( wt_dev = isapnp_find_dev(bus, ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0021), NULL)) ))
+ {
+- wt_dev->prepare(wt_dev);
+-
+- if((wt_dev = activate_dev("AWE", "wavetable", wt_dev)))
+ {
+- show_base("AWE", "wavetable", &wt_dev->resource[0]);
+- show_base("AWE", "wavetable", &wt_dev->resource[1]);
+- show_base("AWE", "wavetable", &wt_dev->resource[2]);
+ }
+ }
+ else
+- printk(KERN_ERR "sb: AWE panic: wavetable not found\n");
+
+- printk(KERN_INFO "sb: AWE mail reports to Alessandro Zummo <azummo@ita.flashnet.it>\n");
+
+ return(sb_dev);
+ }
+
+- #define SBF_DEV 0x01
+-
+
+ static struct { unsigned short vendor, function, flags; struct pci_dev * (*initfunc)(struct pci_bus *, struct pci_dev *, struct address_info *, struct address_info *); char *name; }
+- isapnp_sb_list[] __initdata = {
+ {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0001), SBF_DEV, &sb_init_generic, "Sound Blaster 16" },
+ {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0031), SBF_DEV, &sb_init_generic, "Sound Blaster 16" },
+ {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0041), SBF_DEV, &sb_init_generic, "Sound Blaster 16" },
+ {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0042), SBF_DEV, &sb_init_generic, "Sound Blaster 16" },
+ {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0043), SBF_DEV, &sb_init_generic, "Sound Blaster 16" },
+ {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0045), SBF_DEV, &sb_init_generic, "Sound Blaster 16" },
+- {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0044), 0, &sb_init_awe, "Sound Blaster 32" },
+- {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0039), 0, &sb_init_awe, "Sound Blaster AWE 32" },
+- {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x009D), 0, &sb_init_awe, "Sound Blaster AWE 64" },
+- {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x00C5), 0, &sb_init_awe, "Sound Blaster AWE 64" },
+- {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x00E4), 0, &sb_init_awe, "Sound Blaster AWE 64" },
+ {ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x0968), SBF_DEV, &sb_init_ess, "ESS 1688" },
+ {ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x1868), SBF_DEV, &sb_init_ess, "ESS 1868" },
+ {ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x8611), SBF_DEV, &sb_init_ess, "ESS 1868" },
+ {ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x1869), SBF_DEV, &sb_init_ess, "ESS 1869" },
+ {ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x1878), SBF_DEV, &sb_init_ess, "ESS 1878" },
+ {ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x1879), SBF_DEV, &sb_init_ess, "ESS 1879" },
+- {ISAPNP_VENDOR('C','M','I'), ISAPNP_FUNCTION(0x0001), 0, &sb_init_cmi, "CMI 8330 SoundPRO" },
+- {ISAPNP_VENDOR('R','W','B'), ISAPNP_FUNCTION(0x1688), 0, &sb_init_diamond, "Diamond DT0197H" },
+ {0}
+ };
+
+- static int __init sb_init_isapnp(struct address_info *hw_config, struct address_info *mpu_config, struct pci_bus *bus, struct pci_dev *card, int slot)
+ {
+ struct pci_dev *idev = NULL;
+
+ /* You missed the init func? That's bad. */
+- if(isapnp_sb_list[slot].initfunc)
+ {
+- char *busname = bus->name[0] ? bus->name : isapnp_sb_list[slot].name;
+
+ printk(KERN_INFO "sb: %s detected\n", busname);
+
+ /* Initialize this baby. */
+
+- if((idev = isapnp_sb_list[slot].initfunc(bus, card, hw_config, mpu_config)))
+ {
+ /* We got it. */
+
+--- 374,473 ----
+ }
+ else
+ printk(KERN_ERR "sb: DT0197H panic: mpu not found\n");
+
+ printk(KERN_INFO "sb: DT0197H mail reports to Torsten Werner <twerner@intercomm.de>\n");
+
+ return(sb_dev);
+ }
+
++ static struct pci_dev *sb_init_als(struct pci_bus *bus, struct pci_dev *card, struct address_info *hw_config, struct address_info *mpu_config)
+ {
++ /*
++ * ALS 100
++ * very similar to the two above
++ */
++
++ /* @@@0001:Soundblaster.
+ */
+
++ if((sb_dev = isapnp_find_dev(bus,
++ ISAPNP_VENDOR('@','@','@'), ISAPNP_FUNCTION(0x0001), NULL)))
+ {
+ sb_dev->prepare(sb_dev);
+
++ if((sb_dev = activate_dev("ALS100", "sb", sb_dev)))
+ {
+ hw_config->io_base = sb_dev->resource[0].start;
+ hw_config->irq = sb_dev->irq_resource[0].start;
++ hw_config->dma = sb_dev->dma_resource[1].start;
++ hw_config->dma2 = sb_dev->dma_resource[0].start;
+
++ show_base("ALS100", "sb", &sb_dev->resource[0]);
+ }
+
++ if(!sb_dev) return(NULL);
+ }
+ else
++ printk(KERN_ERR "sb: ALS100 panic: sb base not found\n");
+
++ /* @X@0001:mpu
+ */
+
++ if((mpu_dev = isapnp_find_dev(bus,
++ ISAPNP_VENDOR('@','X','@'), ISAPNP_FUNCTION(0x0001), NULL)))
+ {
++ mpu_dev->prepare(mpu_dev);
++
++ if((mpu_dev = activate_dev("ALS100", "mpu", mpu_dev)))
+ {
++ show_base("ALS100", "mpu", &mpu_dev->resource[0]);
++ mpu_config->io_base = mpu_dev->resource[0].start;
+ }
+ }
+ else
++ printk(KERN_ERR "sb: ALS100 panic: mpu not found\n");
+
++ printk(KERN_INFO "sb: ALS100 mail reports to Torsten Werner <twerner@intercomm.de>\n");
+
+ return(sb_dev);
+ }
+
++ #define SBF_DEV 0x01 /* Note that cards without this flag set are at the top of the list */
+
+ static struct { unsigned short vendor, function, flags; struct pci_dev * (*initfunc)(struct pci_bus *, struct pci_dev *, struct address_info *, struct address_info *); char *name; }
++ sb_isapnp_list[] __initdata = {
++ {ISAPNP_VENDOR('C','M','I'), ISAPNP_FUNCTION(0x0001), 0, &sb_init_cmi, "CMI 8330 SoundPRO" },
++ {ISAPNP_VENDOR('R','W','B'), ISAPNP_FUNCTION(0x1688), 0, &sb_init_diamond, "Diamond DT0197H" },
++ {ISAPNP_VENDOR('A','L','S'), ISAPNP_FUNCTION(0x0001), 0, &sb_init_als, "ALS 100" },
+ {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0001), SBF_DEV, &sb_init_generic, "Sound Blaster 16" },
+ {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0031), SBF_DEV, &sb_init_generic, "Sound Blaster 16" },
+ {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0041), SBF_DEV, &sb_init_generic, "Sound Blaster 16" },
+ {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0042), SBF_DEV, &sb_init_generic, "Sound Blaster 16" },
+ {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0043), SBF_DEV, &sb_init_generic, "Sound Blaster 16" },
+ {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0045), SBF_DEV, &sb_init_generic, "Sound Blaster 16" },
+ {ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x0968), SBF_DEV, &sb_init_ess, "ESS 1688" },
+ {ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x1868), SBF_DEV, &sb_init_ess, "ESS 1868" },
+ {ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x8611), SBF_DEV, &sb_init_ess, "ESS 1868" },
+ {ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x1869), SBF_DEV, &sb_init_ess, "ESS 1869" },
+ {ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x1878), SBF_DEV, &sb_init_ess, "ESS 1878" },
+ {ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x1879), SBF_DEV, &sb_init_ess, "ESS 1879" },
+ {0}
+ };
+
++ static int __init sb_isapnp_init(struct address_info *hw_config, struct address_info *mpu_config, struct pci_bus *bus, struct pci_dev *card, int slot)
+ {
+ struct pci_dev *idev = NULL;
+
+ /* You missed the init func? That's bad. */
++ if(sb_isapnp_list[slot].initfunc)
+ {
++ char *busname = bus->name[0] ? bus->name : sb_isapnp_list[slot].name;
+
+ printk(KERN_INFO "sb: %s detected\n", busname);
+
+ /* Initialize this baby. */
+
++ if((idev = sb_isapnp_list[slot].initfunc(bus, card, hw_config, mpu_config)))
+ {
+ /* We got it. */
+
dep_tristate ' USB OV511 Camera support' CONFIG_USB_OV511 $CONFIG_USB
dep_tristate ' USB Kodak DC-2xx Camera support' CONFIG_USB_DC2XX $CONFIG_USB
if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
+ dep_tristate ' USB Mustek MDC800 Digital Camera support (EXPERIMENTAL)' CONFIG_USB_MDC800 $CONFIG_USB
dep_tristate ' USB Mass Storage support (EXPERIMENTAL)' CONFIG_USB_STORAGE $CONFIG_USB m
if [ "$CONFIG_USB_STORAGE" != "n" ]; then
bool ' USB Mass Storage verbose debug' CONFIG_USB_STORAGE_DEBUG
obj-$(CONFIG_USB_CPIA) += cpia.o
obj-$(CONFIG_USB_IBMCAM) += ibmcam.o
obj-$(CONFIG_USB_DC2XX) += dc2xx.o
+obj-$(CONFIG_USB_MDC800) += mdc800.o
obj-$(CONFIG_USB_STORAGE) += usb-storage.o
obj-$(CONFIG_USB_USS720) += uss720.o
obj-$(CONFIG_USB_DABUSB) += dabusb.o
--- /dev/null
+/*
+ * copyright (C) 1999/2000 by Henning Zabel <henning@uni-paderborn.de>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+
+/*
+ * USB-Kernel Driver for the Mustek MDC800 Digital Camera
+ * (c) 1999/2000 Henning Zabel <henning@uni-paderborn.de>
+ *
+ *
+ * The driver brings the USB functions of the MDC800 to Linux.
+ * It exposes the camera's USB protocol through a kernel device node.
+ * The driver uses a misc device node. Create it with:
+ * mknod /dev/mustek c 10 171
+ *
+ * The driver supports only one camera.
+ *
+ * version 0.7.1
+ * The init and exit module functions have been updated.
+ * (01/03/2000)
+ *
+ * version 0.7.0
+ * Rewrite of the driver : The driver now uses URB's. The old stuff
+ * has been removed.
+ *
+ * version 0.6.0
+ * Rewrite of this driver: the emulation of the RS-232 protocol
+ * has been removed from the driver. A special executeCommand function
+ * for this driver is included in gphoto.
+ * The driver supports two kinds of communication with bulk endpoints:
+ * either with dev->bus->ops->bulk... or with a callback function.
+ * (09/11/1999)
+ *
+ * version 0.5.0:
+ * First version to receive a version number. Most of the needed
+ * functions work.
+ * (20/10/1999)
+ */
+
+#include <linux/version.h>
+#include <linux/sched.h>
+#include <linux/signal.h>
+#include <linux/spinlock.h>
+#include <linux/errno.h>
+#include <linux/miscdevice.h>
+#include <linux/random.h>
+#include <linux/poll.h>
+#include <linux/init.h>
+#include <linux/malloc.h>
+#include <linux/module.h>
+
+#include <linux/usb.h>
+
+#define VERSION "0.7.1"
+#define RELEASE_DATE "(01/03/2000)"
+
+/* Vendor and Product Information */
+#define MDC800_VENDOR_ID 0x055f
+#define MDC800_PRODUCT_ID 0xa800
+
+/* Timeouts (msec) */
+#define TO_READ_FROM_IRQ 4000
+#define TO_GET_READY 2000
+#define TO_DOWNLOAD_GET_READY 1500
+#define TO_DOWNLOAD_GET_BUSY 1500
+#define TO_WRITE_GET_READY 3000
+#define TO_DEFAULT_COMMAND 5000
+
+/* Minor Number of the device (create with mknod /dev/mustek c 10 171) */
+#define MDC800_DEVICE_MINOR 171
+
+
+/**************************************************************************
+ Data and structs
+***************************************************************************/
+
+
+typedef enum {
+ NOT_CONNECTED, READY, WORKING, DOWNLOAD
+} mdc800_state;
+
+
+/* Data for the driver */
+struct mdc800_data
+{
+ struct usb_device * dev; // Device Data
+ mdc800_state state;
+
+ unsigned int endpoint [4];
+
+ purb_t irq_urb;
+ wait_queue_head_t irq_wait;
+ char* irq_urb_buffer;
+
+ int camera_busy; // is camera busy ?
+ int camera_request_ready; // Status to synchronize with irq
+ char camera_response [8]; // last bytes sent after busy
+
+ purb_t write_urb;
+ char* write_urb_buffer;
+ wait_queue_head_t write_wait;
+
+
+ purb_t download_urb;
+ char* download_urb_buffer;
+ wait_queue_head_t download_wait;
+ int download_left; // Bytes left to download ?
+
+
+ /* Device Data */
+ char out [64]; // Answer Buffer
+ int out_ptr; // Index of the first unread byte
+ int out_count; // Bytes in the buffer
+
+ int open; // Camera device open ?
+ int rw_lock; // Block read <-> write
+
+ char in [8]; // Command Input Buffer
+ int in_count;
+
+ int pic_index; // Cache for the Imagesize (-1 for nothing cached )
+ int pic_len;
+};
+
+
+/* Specification of the Endpoints */
+static struct usb_endpoint_descriptor mdc800_ed [4] =
+{
+ { 0,0, 0x01, 0x02, 8, 0,0,0 },
+ { 0,0, 0x82, 0x03, 8, 0,0,0 },
+ { 0,0, 0x03, 0x02, 64, 0,0,0 },
+ { 0,0, 0x84, 0x02, 64, 0,0,0 }
+};
+
+
+/* The Variable used by the driver */
+static struct mdc800_data* mdc800=0;
+
+
+/***************************************************************************
+ The USB Part of the driver
+****************************************************************************/
+
+static int mdc800_endpoint_equals (struct usb_endpoint_descriptor *a,struct usb_endpoint_descriptor *b)
+{
+ return (
+ ( a->bEndpointAddress == b->bEndpointAddress )
+ && ( a->bmAttributes == b->bmAttributes )
+ && ( a->wMaxPacketSize == b->wMaxPacketSize )
+ );
+}
+
+
+/*
+ * Checks whether the camera responds busy
+ */
+static int mdc800_isBusy (char* ch)
+{
+ int i=0;
+ while (i<8)
+ {
+ if (ch [i] != (char)0x99)
+ return 0;
+ i++;
+ }
+ return 1;
+}
+
+
+/*
+ * Checks whether the camera is ready
+ */
+static int mdc800_isReady (char *ch)
+{
+ int i=0;
+ while (i<8)
+ {
+ if (ch [i] != (char)0xbb)
+ return 0;
+ i++;
+ }
+ return 1;
+}
+
+
+
+/*
+ * USB IRQ Handler for InputLine
+ */
+static void mdc800_usb_irq (struct urb *urb)
+{
+ int data_received=0, wake_up;
+ unsigned char* b=urb->transfer_buffer;
+ struct mdc800_data* mdc800=urb->context;
+
+ if (urb->status >= 0)
+ {
+
+ //dbg ("%i %i %i %i %i %i %i %i \n",b[0],b[1],b[2],b[3],b[4],b[5],b[6],b[7]);
+
+ if (mdc800_isBusy (b))
+ {
+ if (!mdc800->camera_busy)
+ {
+ mdc800->camera_busy=1;
+ dbg ("gets busy");
+ }
+ }
+ else
+ {
+ if (mdc800->camera_busy && mdc800_isReady (b))
+ {
+ mdc800->camera_busy=0;
+ dbg ("gets ready");
+ }
+ }
+ if (!(mdc800_isBusy (b) || mdc800_isReady (b)))
+ {
+ /* Store Data in camera_answer field */
+ dbg ("%i %i %i %i %i %i %i %i ",b[0],b[1],b[2],b[3],b[4],b[5],b[6],b[7]);
+
+ memcpy (mdc800->camera_response,b,8);
+ data_received=1;
+ }
+ }
+ wake_up= ( mdc800->camera_request_ready > 0 )
+ &&
+ (
+ ((mdc800->camera_request_ready == 1) && (!mdc800->camera_busy))
+ ||
+ ((mdc800->camera_request_ready == 2) && data_received)
+ ||
+ ((mdc800->camera_request_ready == 3) && (mdc800->camera_busy))
+ ||
+ (urb->status < 0)
+ );
+
+ if (wake_up)
+ {
+ mdc800->camera_request_ready=0;
+ wake_up_interruptible (&mdc800->irq_wait);
+ }
+}
+
+
+/*
+ * Waits a while until the irq responds that camera is ready
+ *
+ * mode : 0: Wait until the camera is ready
+ * 1: Wait until data is received
+ * 2: Wait until the camera is busy
+ *
+ * msec: Time to wait in milliseconds
+ */
+static int mdc800_usb_waitForIRQ (int mode, int msec)
+{
+ mdc800->camera_request_ready=1+mode;
+
+ interruptible_sleep_on_timeout (&mdc800->irq_wait, msec*HZ/1000);
+
+ if (mdc800->camera_request_ready>0)
+ {
+ mdc800->camera_request_ready=0;
+ err ("timeout waiting for camera.");
+ return 0;
+ }
+ return 1;
+}
+
+
+/*
+ * The write_urb callback function
+ */
+static void mdc800_usb_write_notify (struct urb *urb)
+{
+ struct mdc800_data* mdc800=urb->context;
+
+ if (urb->status != 0)
+ {
+ err ("writing command fails (status=%i)", urb->status);
+ }
+ mdc800->state=READY;
+ wake_up_interruptible (&mdc800->write_wait);
+}
+
+
+/*
+ * The download_urb callback function
+ */
+static void mdc800_usb_download_notify (struct urb *urb)
+{
+ struct mdc800_data* mdc800=urb->context;
+
+ if (urb->status == 0)
+ {
+ /* Fill output buffer with these data */
+ memcpy (mdc800->out, urb->transfer_buffer, 64);
+ mdc800->out_count=64;
+ mdc800->out_ptr=0;
+ mdc800->download_left-=64;
+ if (mdc800->download_left == 0)
+ {
+ mdc800->state=READY;
+ }
+ }
+ else
+ {
+ err ("request bytes fails (status:%i)", urb->status);
+ mdc800->state=READY;
+ }
+ wake_up_interruptible (&mdc800->download_wait);
+}
+
+
+/***************************************************************************
+ Probing for the Camera
+ ***************************************************************************/
+
+static struct usb_driver mdc800_usb_driver;
+
+/*
+ * Callback to search the Mustek MDC800 on the USB Bus
+ */
+static void* mdc800_usb_probe (struct usb_device *dev ,unsigned int ifnum )
+{
+ int i,j;
+ struct usb_interface_descriptor *intf_desc;
+ int irq_interval=0;
+
+ dbg ("(mdc800_usb_probe) called.");
+
+ if (mdc800->dev != 0)
+ {
+ warn ("only one Mustek MDC800 is supported.");
+ return 0;
+ }
+
+ if (dev->descriptor.idVendor != MDC800_VENDOR_ID)
+ return 0;
+ if (dev->descriptor.idProduct != MDC800_PRODUCT_ID)
+ return 0;
+
+ if (dev->descriptor.bNumConfigurations != 1)
+ {
+ err ("probe fails -> wrong number of configurations");
+ return 0;
+ }
+ intf_desc=&dev->actconfig->interface[ifnum].altsetting[0];
+
+ if (
+ ( intf_desc->bInterfaceClass != 0xff )
+ || ( intf_desc->bInterfaceSubClass != 0 )
+ || ( intf_desc->bInterfaceProtocol != 0 )
+ || ( intf_desc->bNumEndpoints != 4)
+ )
+ {
+ err ("probe fails -> wrong Interface");
+ return 0;
+ }
+
+ /* Check the Endpoints */
+ for (i=0; i<4; i++)
+ {
+ mdc800->endpoint[i]=-1;
+ for (j=0; j<4; j++)
+ {
+ if (mdc800_endpoint_equals (&intf_desc->endpoint [j],&mdc800_ed [i]))
+ {
+ mdc800->endpoint[i]=intf_desc->endpoint [j].bEndpointAddress ;
+ if (i==1)
+ {
+ irq_interval=intf_desc->endpoint [j].bInterval;
+ }
+
+ continue;
+ }
+ }
+ if (mdc800->endpoint[i] == -1)
+ {
+ err ("probe fails -> Wrong Endpoints.");
+ return 0;
+ }
+ }
+
+
+ usb_driver_claim_interface (&mdc800_usb_driver, &dev->actconfig->interface[ifnum], mdc800);
+ if (usb_set_interface (dev, ifnum, 0) < 0)
+ {
+ err ("MDC800 Configuration fails.");
+ return 0;
+ }
+
+ info ("Found Mustek MDC800 on USB.");
+
+ mdc800->dev=dev;
+ mdc800->state=READY;
+
+ /* Setup URB Structs */
+ FILL_INT_URB (
+ mdc800->irq_urb,
+ mdc800->dev,
+ usb_rcvintpipe (mdc800->dev,mdc800->endpoint [1]),
+ mdc800->irq_urb_buffer,
+ 8,
+ mdc800_usb_irq,
+ mdc800,
+ irq_interval
+ );
+
+ FILL_BULK_URB (
+ mdc800->write_urb,
+ mdc800->dev,
+ usb_sndbulkpipe (mdc800->dev, mdc800->endpoint[0]),
+ mdc800->write_urb_buffer,
+ 8,
+ mdc800_usb_write_notify,
+ mdc800
+ );
+
+ FILL_BULK_URB (
+ mdc800->download_urb,
+ mdc800->dev,
+ usb_rcvbulkpipe (mdc800->dev, mdc800->endpoint [3]),
+ mdc800->download_urb_buffer,
+ 64,
+ mdc800_usb_download_notify,
+ mdc800
+ );
+
+ return mdc800;
+}
+
+
+/*
+ * Disconnect USB device (maybe the MDC800)
+ */
+static void mdc800_usb_disconnect (struct usb_device *dev,void* ptr)
+{
+ struct mdc800_data* mdc800=(struct mdc800_data*) ptr;
+
+ dbg ("(mdc800_usb_disconnect) called");
+
+ if (mdc800->state == NOT_CONNECTED)
+ return;
+
+ mdc800->state=NOT_CONNECTED;
+ mdc800->open=0;
+ mdc800->rw_lock=0;
+
+ usb_unlink_urb (mdc800->irq_urb);
+ usb_unlink_urb (mdc800->write_urb);
+ usb_unlink_urb (mdc800->download_urb);
+
+ usb_driver_release_interface (&mdc800_usb_driver, &dev->actconfig->interface[1]);
+
+ mdc800->dev=0;
+ info ("Mustek MDC800 disconnected from USB.");
+}
+
+
+/***************************************************************************
+ The Misc device Part (file_operations)
+****************************************************************************/
+
+/*
+ * This function calculates the answer size for a command.
+ */
+static int mdc800_getAnswerSize (char command)
+{
+ switch ((unsigned char) command)
+ {
+ case 0x2a:
+ case 0x49:
+ case 0x51:
+ case 0x0d:
+ case 0x20:
+ case 0x07:
+ case 0x01:
+ case 0x25:
+ case 0x00:
+ return 8;
+
+ case 0x05:
+ case 0x3e:
+ return mdc800->pic_len;
+
+ case 0x09:
+ return 4096;
+
+ default:
+ return 0;
+ }
+}
+
+
+/*
+ * Open the device: reset driver state, submit the irq URB, increase the module count.
+ */
+static int mdc800_device_open (struct inode* inode, struct file *file)
+{
+ int retval=0;
+ if (mdc800->state == NOT_CONNECTED)
+ return -EBUSY;
+
+ if (mdc800->open)
+ return -EBUSY;
+
+ mdc800->rw_lock=0;
+ mdc800->in_count=0;
+ mdc800->out_count=0;
+ mdc800->out_ptr=0;
+ mdc800->pic_index=0;
+ mdc800->pic_len=-1;
+ mdc800->download_left=0;
+
+ mdc800->camera_busy=0;
+ mdc800->camera_request_ready=0;
+
+ retval=usb_submit_urb (mdc800->irq_urb);
+ if (retval)
+ {
+ err ("request USB irq fails (submit_retval=%i urb_status=%i).",retval, mdc800->irq_urb->status);
+ return -EIO;
+ }
+
+ MOD_INC_USE_COUNT;
+ mdc800->open=1;
+
+ dbg ("Mustek MDC800 device opened.");
+ return 0;
+}
+
+
+/*
+ * Close the Camera and release Memory
+ */
+static int mdc800_device_release (struct inode* inode, struct file *file)
+{
+ int retval=0;
+ dbg ("Mustek MDC800 device closed.");
+
+ if (mdc800->open && (mdc800->state != NOT_CONNECTED))
+ {
+ mdc800->open=0;
+ usb_unlink_urb (mdc800->irq_urb);
+ usb_unlink_urb (mdc800->write_urb);
+ usb_unlink_urb (mdc800->download_urb);
+ }
+ else
+ {
+ retval=-EIO;
+ }
+
+ MOD_DEC_USE_COUNT;
+
+ return retval;
+}
+
+
+/*
+ * The Device read callback Function
+ */
+static ssize_t mdc800_device_read (struct file *file, char *buf, size_t len, loff_t *pos)
+{
+ int left=len, sts=len; /* single transfer size */
+ char* ptr=buf;
+
+ if (mdc800->state == NOT_CONNECTED)
+ return -EBUSY;
+
+ if (!mdc800->open || mdc800->rw_lock)
+ return -EBUSY;
+ mdc800->rw_lock=1;
+
+ while (left)
+ {
+ if (signal_pending (current)) {
+ mdc800->rw_lock=0;
+ return -EINTR;
+ }
+
+ sts=left > (mdc800->out_count-mdc800->out_ptr)?mdc800->out_count-mdc800->out_ptr:left;
+
+ if (sts <= 0)
+ {
+ /* Not enough data in the buffer */
+ if (mdc800->state == DOWNLOAD)
+ {
+ mdc800->out_count=0;
+ mdc800->out_ptr=0;
+
+ /* Download -> Request new bytes */
+ if (usb_submit_urb (mdc800->download_urb))
+ {
+ err ("Can't submit download urb (status=%i)",mdc800->download_urb->status);
+ mdc800->state=READY;
+ mdc800->rw_lock=0;
+ return len-left;
+ }
+ interruptible_sleep_on_timeout (&mdc800->download_wait, TO_DOWNLOAD_GET_READY*HZ/1000);
+ if (mdc800->download_urb->status != 0)
+ {
+ err ("requesting bytes fails (status=%i)",mdc800->download_urb->status);
+ mdc800->state=READY;
+ mdc800->rw_lock=0;
+ return len-left;
+ }
+ }
+ else
+ {
+ /* No more bytes -> that's an error*/
+ mdc800->rw_lock=0;
+ return -EIO;
+ }
+ }
+ else
+ {
+ /* memcpy Bytes */
+ memcpy (ptr, &mdc800->out [mdc800->out_ptr], sts);
+ ptr+=sts;
+ left-=sts;
+ mdc800->out_ptr+=sts;
+ }
+ }
+
+ mdc800->rw_lock=0;
+ return len-left;
+}
+
+
+/*
+ * The Device write callback Function
+ * If an 8-byte command is received, it is sent to the camera.
+ * After this the driver initiates the request for the answer or
+ * just waits until the camera becomes ready.
+ */
+static ssize_t mdc800_device_write (struct file *file, const char *buf, size_t len, loff_t *pos)
+{
+ int i=0;
+
+ if (mdc800->state != READY)
+ return -EBUSY;
+
+ if (!mdc800->open || mdc800->rw_lock)
+ return -EBUSY;
+ mdc800->rw_lock=1;
+
+ while (i<len)
+ {
+ if (signal_pending (current)) {
+ mdc800->rw_lock=0;
+ return -EINTR;
+ }
+
+ /* check for command start */
+ if (buf [i] == (char) 0x55)
+ {
+ mdc800->in_count=0;
+ mdc800->out_count=0;
+ mdc800->out_ptr=0;
+ mdc800->download_left=0;
+ }
+
+ /* save command byte */
+ if (mdc800->in_count < 8)
+ {
+ mdc800->in[mdc800->in_count]=buf[i];
+ mdc800->in_count++;
+ }
+ else
+ {
+ err ("Command is too long!\n");
+ mdc800->rw_lock=0;
+ return -EIO;
+ }
+
+ /* Command Buffer full ? -> send it to camera */
+ if (mdc800->in_count == 8)
+ {
+ int answersize;
+
+ mdc800_usb_waitForIRQ (0,TO_GET_READY);
+
+ answersize=mdc800_getAnswerSize (mdc800->in[1]);
+
+ mdc800->state=WORKING;
+ memcpy (mdc800->write_urb->transfer_buffer, mdc800->in,8);
+ if (usb_submit_urb (mdc800->write_urb))
+ {
+ err ("submitting write urb fails (status=%i)", mdc800->write_urb->status);
+ mdc800->rw_lock=0;
+ mdc800->state=READY;
+ return -EIO;
+ }
+ interruptible_sleep_on_timeout (&mdc800->write_wait, TO_DEFAULT_COMMAND*HZ/1000);
+ if (mdc800->state == WORKING)
+ {
+ usb_unlink_urb (mdc800->write_urb);
+ mdc800->state=READY;
+ mdc800->rw_lock=0;
+ return -EIO;
+ }
+
+ switch ((unsigned char) mdc800->in[1])
+ {
+ case 0x05: /* Download Image */
+ case 0x3e: /* Take shot in Fine Mode (WCam Mode) */
+ if (mdc800->pic_len < 0)
+ {
+ err ("call 0x07 before 0x05,0x3e");
+ mdc800->state=READY;
+ mdc800->rw_lock=0;
+ return -EIO;
+ }
+ mdc800->pic_len=-1;
+ /* fall through */
+
+ case 0x09: /* Download Thumbnail */
+ mdc800->download_left=answersize+64;
+ mdc800->state=DOWNLOAD;
+ mdc800_usb_waitForIRQ (0,TO_DOWNLOAD_GET_BUSY);
+ break;
+
+
+ default:
+ if (answersize)
+ {
+
+ if (!mdc800_usb_waitForIRQ (1,TO_READ_FROM_IRQ))
+ {
+ err ("requesting answer from irq fails");
+ mdc800->state=READY;
+ mdc800->rw_lock=0;
+ return -EIO;
+ }
+
+ /* Write dummy data (this is ugly but part of the USB protocol */
+ /* if you use endpoint 1 as bulk and not as irq) */
+ memcpy (mdc800->out, mdc800->camera_response,8);
+
+ /* This is the interpreted answer */
+ memcpy (&mdc800->out[8], mdc800->camera_response,8);
+
+ mdc800->out_ptr=0;
+ mdc800->out_count=16;
+
+ /* Cache the Imagesize, if command was getImageSize */
+ if (mdc800->in [1] == (char) 0x07)
+ {
+ mdc800->pic_len=(int) 65536*(unsigned char) mdc800->camera_response[0]+256*(unsigned char) mdc800->camera_response[1]+(unsigned char) mdc800->camera_response[2];
+
+ dbg ("cached imagesize = %i",mdc800->pic_len);
+ }
+
+ }
+ else
+ {
+ if (!mdc800_usb_waitForIRQ (0,TO_DEFAULT_COMMAND))
+ {
+ err ("Command Timeout.");
+ mdc800->rw_lock=0;
+ mdc800->state=READY;
+ return -EIO;
+ }
+ }
+ mdc800->state=READY;
+ break;
+ }
+ }
+ i++;
+ }
+ mdc800->rw_lock=0;
+ return i;
+}
+
+
+/***************************************************************************
+ Init and Cleanup this driver (Structs and types)
+****************************************************************************/
+
+
+/*
+ * USB Driver Struct for this device
+ */
+static struct usb_driver mdc800_usb_driver =
+{
+ "mdc800",
+ mdc800_usb_probe,
+ mdc800_usb_disconnect,
+ { 0,0 },
+ 0,
+ 0
+};
+
+
+/* File Operations of this drivers */
+static struct file_operations mdc800_device_ops =
+{
+ 0, /* llseek */
+ mdc800_device_read,
+ mdc800_device_write,
+ 0, /* readdir */
+ 0, /* poll */
+ 0, /* ioctl, this can be used to detect USB ! */
+ 0, /* mmap */
+ mdc800_device_open,
+ 0, /* flush */
+ mdc800_device_release,
+ 0, /* async */
+ 0, /* fasync */
+ 0, /* check_media_change */
+// 0, /* revalidate */
+// 0 /* lock */
+};
+
+
+/*
+ * The Misc Device Configuration Struct
+ */
+static struct miscdevice mdc800_device =
+{
+ MDC800_DEVICE_MINOR,
+ "USB Mustek MDC800 Camera",
+ &mdc800_device_ops
+};
+
+
+/************************************************************************
+ Init and Cleanup this driver (Main Functions)
+*************************************************************************/
+
+#define try(A) if ((A) == 0) goto cleanup_on_fail;
+#define try_free_mem(A) if (A != 0) { kfree (A); A=0; }
+#define try_free_urb(A) if (A != 0) { usb_free_urb (A); A=0; }
+
+int __init usb_mdc800_init (void)
+{
+ /* Allocate Memory */
+ try (mdc800=kmalloc (sizeof (struct mdc800_data), GFP_KERNEL));
+
+ memset(mdc800, 0, sizeof(struct mdc800_data));
+ mdc800->dev=0;
+ mdc800->open=0;
+ mdc800->state=NOT_CONNECTED;
+
+ init_waitqueue_head (&mdc800->irq_wait);
+ init_waitqueue_head (&mdc800->write_wait);
+ init_waitqueue_head (&mdc800->download_wait);
+
+ try (mdc800->irq_urb_buffer=kmalloc (8, GFP_KERNEL));
+ try (mdc800->write_urb_buffer=kmalloc (8, GFP_KERNEL));
+ try (mdc800->download_urb_buffer=kmalloc (64, GFP_KERNEL));
+
+ try (mdc800->irq_urb=usb_alloc_urb (0));
+ try (mdc800->download_urb=usb_alloc_urb (0));
+ try (mdc800->write_urb=usb_alloc_urb (0));
+
+ /* Register the driver */
+ if (usb_register (&mdc800_usb_driver) < 0)
+ goto cleanup_on_fail;
+ if (misc_register (&mdc800_device) < 0)
+ goto cleanup_on_misc_register_fail;
+
+ info ("Mustek Digital Camera Driver " VERSION " (MDC800)");
+ info (RELEASE_DATE " Henning Zabel <henning@uni-paderborn.de>");
+
+ return 0;
+
+ /* Clean driver up, when something fails */
+
+cleanup_on_misc_register_fail:
+ usb_deregister (&mdc800_usb_driver);
+
+cleanup_on_fail:
+
+ if (mdc800 != 0)
+ {
+ err ("can't alloc memory!");
+
+ try_free_mem (mdc800->download_urb_buffer);
+ try_free_mem (mdc800->write_urb_buffer);
+ try_free_mem (mdc800->irq_urb_buffer);
+
+ try_free_urb (mdc800->write_urb);
+ try_free_urb (mdc800->download_urb);
+ try_free_urb (mdc800->irq_urb);
+
+ kfree (mdc800);
+ }
+ mdc800=0;
+ return -1;
+}
+
+
+void __exit usb_mdc800_cleanup (void)
+{
+ usb_deregister (&mdc800_usb_driver);
+ misc_deregister (&mdc800_device);
+
+ usb_free_urb (mdc800->irq_urb);
+ usb_free_urb (mdc800->download_urb);
+ usb_free_urb (mdc800->write_urb);
+
+ kfree (mdc800->irq_urb_buffer);
+ kfree (mdc800->write_urb_buffer);
+ kfree (mdc800->download_urb_buffer);
+
+ kfree (mdc800);
+ mdc800=0;
+}
+
+
+MODULE_AUTHOR ("Henning Zabel <henning@uni-paderborn.de>");
+MODULE_DESCRIPTION ("USB Driver for Mustek MDC800 Digital Camera");
+
+module_init (usb_mdc800_init);
+module_exit (usb_mdc800_cleanup);
#include <linux/usb.h>
-static const char *version = __FILE__ ": v0.3.3 2000/03/13 Written by Petko Manolov (petkan@spct.net)\n";
+static const char *version = __FILE__ ": v0.3.5 2000/03/21 Written by Petko Manolov (petkan@spct.net)\n";
-#define ADMTEK_VENDOR_ID 0x07a6
-#define ADMTEK_DEVICE_ID_PEGASUS 0x0986
-
#define PEGASUS_MTU 1500
#define PEGASUS_MAX_MTU 1536
#define PEGASUS_TX_TIMEOUT (HZ*5)
unsigned char ALIGN(intr_buff[8]);
};
+struct usb_eth_dev {
+ char *name;
+ __u16 vendor;
+ __u16 device;
+ void *private;
+};
+
static int loopback = 0;
static int multicast_filter_limit = 32;
MODULE_PARM(loopback, "i");
+static struct usb_eth_dev usb_dev_id[] = {
+ { "D-Link DSB-650TX", 0x2001, 0x4001, NULL },
+ { "Linksys USB100TX", 0x066b, 0x2203, NULL },
+ { "SMC 202 USB Ethernet", 0x0707, 0x0200, NULL },
+ { "ADMtek AN986 (Pegasus) USB Ethernet", 0x07a6, 0x0986, NULL },
+ { "Accton USB 10/100 Ethernet Adapter", 0x083a, 0x1046, NULL },
+ { NULL, 0, 0, NULL }
+};
+
+
#define pegasus_get_registers(dev, indx, size, data)\
usb_control_msg(dev, usb_rcvctrlpipe(dev,0), 0xf0, 0xc0, 0, indx, data, size, HZ);
#define pegasus_set_registers(dev, indx, size, data)\
return 4;
if ((partmedia & 0x1f) != 1) {
- err("party FAIL %x", partmedia);
- return 5;
+ warn("party FAIL %x", partmedia);
+ /* return 5; FIXME */
}
data[0] = 0xc9;
netif_wake_queue(net);
}
+static int check_device_ids( __u16 vendor, __u16 product )
+{
+ int i=0;
+
+ while ( usb_dev_id[i].name ) {
+ if ( (usb_dev_id[i].vendor == vendor) &&
+ (usb_dev_id[i].device == product) )
+ return 0;
+ i++;
+ }
+ return 1;
+}
+
static void * pegasus_probe(struct usb_device *dev, unsigned int ifnum)
{
struct net_device *net;
struct pegasus *pegasus;
- if (dev->descriptor.idVendor != ADMTEK_VENDOR_ID ||
- dev->descriptor.idProduct != ADMTEK_DEVICE_ID_PEGASUS) {
+ if ( check_device_ids(dev->descriptor.idVendor, dev->descriptor.idProduct) ) {
return NULL;
}
urb_print (urb, "SUB", usb_pipein (pipe));
#endif
+ /* a request to the virtual root hub */
if (usb_pipedevice (pipe) == ohci->rh.devnum)
- return rh_submit_urb (urb); /* a request to the virtual root hub */
+ return rh_submit_urb (urb);
+
+ /* when controller's hung, permit only hub cleanup attempts
+ * such as powering down ports */
+ if (ohci->disabled)
+ return -ESHUTDOWN;
/* every endpoint has a ed, locate and fill it */
if (!(ed = ep_add_ed (urb->dev, pipe, urb->interval, 1))) {
urb_t * urb = (urb_t *) ptr;
ohci_t * ohci = urb->dev->bus->hcpriv;
+
+ if (ohci->disabled)
+ return;
if(ohci->rh.send) {
len = rh_send_irq (ohci, urb->transfer_buffer, urb->transfer_buffer_length);
if (len > 0) {
urb->actual_length = len;
#ifdef DEBUG
- urb_print (urb, "RET(rh)", usb_pipeout (urb->pipe));
+ urb_print (urb, "RET-t(rh)", usb_pipeout (urb->pipe));
#endif
if (urb->complete) urb->complete (urb);
}
break;
case RH_GET_DESCRIPTOR | RH_CLASS:
- *(__u8 *) (data_buf+1) = 0x29;
- put_unaligned(cpu_to_le32 (readl (&ohci->regs->roothub.a)),
- (__u32 *) (data_buf + 2));
- *(__u8 *) data_buf = (*(__u8 *) (data_buf + 2) / 8) * 2 + 9; /* length of descriptor */
-
- len = min (leni, min(*(__u8 *) data_buf, wLength));
- *(__u8 *) (data_buf+6) = 0; /* Root Hub needs no current from bus */
- if (*(__u8 *) (data_buf+2) < 8) { /* less than 8 Ports */
- *(__u8 *) (data_buf+7) = readl (&ohci->regs->roothub.b) & 0xff;
- *(__u8 *) (data_buf+8) = (readl (&ohci->regs->roothub.b) & 0xff0000) >> 16;
- } else {
- put_unaligned(cpu_to_le32 (readl(&ohci->regs->roothub.b)),
- (__u32 *) (data_buf + 7));
+ {
+ __u32 temp = readl (&ohci->regs->roothub.a);
+
+ data_buf [0] = 9; // min length;
+ data_buf [1] = 0x29;
+ data_buf [2] = temp & RH_A_NDP;
+ data_buf [3] = 0;
+ if (temp & RH_A_PSM) /* per-port power switching? */
+ data_buf [3] |= 0x1;
+ if (temp & RH_A_NOCP) /* no overcurrent reporting? */
+ data_buf [3] |= 0x10;
+ else if (temp & RH_A_OCPM) /* per-port overcurrent reporting? */
+ data_buf [3] |= 0x8;
+
+ data_buf [4] = 0;
+ data_buf [5] = (temp & RH_A_POTPGT) >> 24;
+ temp = readl (&ohci->regs->roothub.b);
+ data_buf [7] = temp & RH_B_DR;
+ if (data_buf [2] < 7) {
+ data_buf [8] = 0xff;
+ } else {
+ data_buf [0] += 2;
+ data_buf [8] = (temp & RH_B_DR) >> 8;
+ data_buf [10] = data_buf [9] = 0xff;
+ }
+
+ len = min (leni, min (data_buf [0], wLength));
+ OK (len);
}
- OK (len);
case RH_GET_CONFIGURATION: *(__u8 *) data_buf = 0x01; OK (1);
}
udelay (1);
}
+ ohci->disabled = 0;
}
/*-------------------------------------------------------------------------*/
writel (0x628, &ohci->regs->lsthresh);
/* Choose the interrupts we care about now, others later on demand */
- mask = OHCI_INTR_MIE | OHCI_INTR_WDH | OHCI_INTR_SO;
+ mask = OHCI_INTR_MIE | OHCI_INTR_UE | OHCI_INTR_WDH | OHCI_INTR_SO;
writel (ohci->hc_control = 0xBF, &ohci->regs->control); /* USB Operational */
writel (mask, &ohci->regs->intrenable);
dbg("Interrupt: %x frame: %x", ints, le16_to_cpu (ohci->hcca.frame_no));
+ if (ints & OHCI_INTR_UE) {
+ ohci->disabled++;
+ err ("OHCI Unrecoverable Error, controller disabled");
+ }
+
if (ints & OHCI_INTR_WDH) {
 writel (OHCI_INTR_WDH, &regs->intrdisable);
dl_done_list (ohci, dl_reverse_done_list (ohci));
} roothub;
} __attribute((aligned(32)));
+
+/* OHCI CONTROL AND STATUS REGISTER MASKS */
+
/*
- * cmdstatus register */
-#define OHCI_CLF 0x02
-#define OHCI_BLF 0x04
+ * HcControl (control) register masks
+ */
+#define OHCI_CTRL_CBSR (3 << 0) /* control/bulk service ratio */
+#define OHCI_CTRL_PLE (1 << 2) /* periodic list enable */
+#define OHCI_CTRL_IE (1 << 3) /* isochronous enable */
+#define OHCI_CTRL_CLE (1 << 4) /* control list enable */
+#define OHCI_CTRL_BLE (1 << 5) /* bulk list enable */
+#define OHCI_CTRL_HCFS (3 << 6) /* host controller functional state */
+#define OHCI_CTRL_IR (1 << 8) /* interrupt routing */
+#define OHCI_CTRL_RWC (1 << 9) /* remote wakeup connected */
+#define OHCI_CTRL_RWE (1 << 10) /* remote wakeup enable */
+
+/* pre-shifted values for HCFS */
+# define OHCI_USB_RESET (0 << 6)
+# define OHCI_USB_RESUME (1 << 6)
+# define OHCI_USB_OPER (2 << 6)
+# define OHCI_USB_SUSPEND (3 << 6)
/*
- * Interrupt register masks
+ * HcCommandStatus (cmdstatus) register masks
*/
-#define OHCI_INTR_SO (1)
-#define OHCI_INTR_WDH (1 << 1)
-#define OHCI_INTR_SF (1 << 2)
-#define OHCI_INTR_RD (1 << 3)
-#define OHCI_INTR_UE (1 << 4)
-#define OHCI_INTR_FNO (1 << 5)
-#define OHCI_INTR_RHSC (1 << 6)
-#define OHCI_INTR_OC (1 << 30)
-#define OHCI_INTR_MIE (1 << 31)
+#define OHCI_HCR (1 << 0) /* host controller reset */
+#define OHCI_CLF (1 << 1) /* control list filled */
+#define OHCI_BLF (1 << 2) /* bulk list filled */
+#define OHCI_OCR (1 << 3) /* ownership change request */
+#define OHCI_SOC (3 << 16) /* scheduling overrun count */
/*
- * Control register masks
+ * masks used with interrupt registers:
+ * HcInterruptStatus (intrstatus)
+ * HcInterruptEnable (intrenable)
+ * HcInterruptDisable (intrdisable)
*/
-#define OHCI_USB_RESET 0
-#define OHCI_USB_RESUME (1 << 6)
-#define OHCI_USB_OPER (2 << 6)
-#define OHCI_USB_SUSPEND (3 << 6)
+#define OHCI_INTR_SO (1 << 0) /* scheduling overrun */
+#define OHCI_INTR_WDH (1 << 1) /* writeback of done_head */
+#define OHCI_INTR_SF (1 << 2) /* start frame */
+#define OHCI_INTR_RD (1 << 3) /* resume detect */
+#define OHCI_INTR_UE (1 << 4) /* unrecoverable error */
+#define OHCI_INTR_FNO (1 << 5) /* frame number overflow */
+#define OHCI_INTR_RHSC (1 << 6) /* root hub status change */
+#define OHCI_INTR_OC (1 << 30) /* ownership change */
+#define OHCI_INTR_MIE (1 << 31) /* master interrupt enable */
+
/* Virtual Root HUB */
int interval;
struct timer_list rh_int_timer;
};
+
+
+/* USB HUB CONSTANTS (not OHCI-specific; see hub.h) */
/* destination of request */
#define RH_INTERFACE 0x01
#define RH_GET_STATUS 0x0080
#define RH_CLEAR_FEATURE 0x0100
#define RH_SET_FEATURE 0x0300
-#define RH_SET_ADDRESS 0x0500
-#define RH_GET_DESCRIPTOR 0x0680
+#define RH_SET_ADDRESS 0x0500
+#define RH_GET_DESCRIPTOR 0x0680
#define RH_SET_DESCRIPTOR 0x0700
#define RH_GET_CONFIGURATION 0x0880
#define RH_SET_CONFIGURATION 0x0900
#define RH_PORT_RESET 0x04
#define RH_PORT_POWER 0x08
#define RH_PORT_LOW_SPEED 0x09
+
#define RH_C_PORT_CONNECTION 0x10
#define RH_C_PORT_ENABLE 0x11
#define RH_C_PORT_SUSPEND 0x12
#define RH_ACK 0x01
#define RH_REQ_ERR -1
#define RH_NACK 0x00
+
+
+/* OHCI ROOT HUB REGISTER MASKS */
-/* Root-Hub Register info */
-
-#define RH_PS_CCS 0x00000001
-#define RH_PS_PES 0x00000002
-#define RH_PS_PSS 0x00000004
-#define RH_PS_POCI 0x00000008
-#define RH_PS_PRS 0x00000010
-#define RH_PS_PPS 0x00000100
-#define RH_PS_LSDA 0x00000200
-#define RH_PS_CSC 0x00010000
-#define RH_PS_PESC 0x00020000
-#define RH_PS_PSSC 0x00040000
-#define RH_PS_OCIC 0x00080000
-#define RH_PS_PRSC 0x00100000
-
-/* Root hub status bits */
-#define RH_HS_LPS 0x00000001
-#define RH_HS_OCI 0x00000002
-#define RH_HS_DRWE 0x00008000
-#define RH_HS_LPSC 0x00010000
-#define RH_HS_OCIC 0x00020000
-#define RH_HS_CRWE 0x80000000
+/* roothub.portstatus [i] bits */
+#define RH_PS_CCS 0x00000001 /* current connect status */
+#define RH_PS_PES 0x00000002 /* port enable status */
+#define RH_PS_PSS 0x00000004 /* port suspend status */
+#define RH_PS_POCI 0x00000008 /* port over current indicator */
+#define RH_PS_PRS 0x00000010 /* port reset status */
+#define RH_PS_PPS 0x00000100 /* port power status */
+#define RH_PS_LSDA 0x00000200 /* low speed device attached */
+#define RH_PS_CSC 0x00010000 /* connect status change */
+#define RH_PS_PESC 0x00020000 /* port enable status change */
+#define RH_PS_PSSC 0x00040000 /* port suspend status change */
+#define RH_PS_OCIC 0x00080000 /* over current indicator change */
+#define RH_PS_PRSC 0x00100000 /* port reset status change */
+
+/* roothub.status bits */
+#define RH_HS_LPS 0x00000001 /* local power status */
+#define RH_HS_OCI 0x00000002 /* over current indicator */
+#define RH_HS_DRWE 0x00008000 /* device remote wakeup enable */
+#define RH_HS_LPSC 0x00010000 /* local power status change */
+#define RH_HS_OCIC 0x00020000 /* over current indicator change */
+#define RH_HS_CRWE 0x80000000 /* clear remote wakeup enable */
+
+/* roothub.b masks */
+#define RH_B_DR 0x0000ffff /* device removable flags */
+#define RH_B_PPCM 0xffff0000 /* port power control mask */
+
+/* roothub.a masks */
+#define RH_A_NDP (0xff << 0) /* number of downstream ports */
+#define RH_A_PSM (1 << 8) /* power switching mode */
+#define RH_A_NPS (1 << 9) /* no power switching */
+#define RH_A_DT (1 << 10) /* device type (mbz) */
+#define RH_A_OCPM (1 << 11) /* over current protection mode */
+#define RH_A_NOCP (1 << 12) /* no over current protection */
+#define RH_A_POTPGT (0xff << 24) /* power on to power good time */
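As a quick sanity check of the roothub.a mask layout above, a freestanding sketch (the sample register value is invented, not taken from real hardware):

```c
#include <assert.h>

/* mask definitions as introduced above */
#define RH_A_NDP     (0xff << 0)   /* number of downstream ports */
#define RH_A_POTPGT  (0xff << 24)  /* power on to power good time */

/* extract the two fields from a hypothetical roothub.a value */
static unsigned ndp(unsigned long a)    { return a & RH_A_NDP; }
static unsigned potpgt(unsigned long a) { return (a & RH_A_POTPGT) >> 24; }
```

With a made-up value of 0x02001002, ndp() yields 2 ports and potpgt() yields 2 (in units of 2 ms), matching the shifts used by the RH_GET_DESCRIPTOR case earlier in the patch.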
#define min(a,b) (((a)<(b))?(a):(b))
typedef struct ohci {
- struct ohci_hcca hcca; /* hcca */
+ struct ohci_hcca hcca; /* hcca */
int irq;
- struct ohci_regs * regs; /* OHCI controller's memory */
- struct list_head ohci_hcd_list; /* list of all ohci_hcd */
+ int disabled; /* e.g. got a UE, we're hung */
+
+ struct ohci_regs * regs; /* OHCI controller's memory */
+ struct list_head ohci_hcd_list; /* list of all ohci_hcd */
struct ohci * next; // chain of uhci device contexts
struct list_head urb_list; // list of all pending urbs
spinlock_t urb_list_lock; // lock to keep consistency
- int ohci_int_load[32]; /* load of the 32 Interrupt Chains (for load ballancing)*/
+ int ohci_int_load[32]; /* load of the 32 Interrupt Chains (for load balancing)*/
ed_t * ed_rm_list[2]; /* lists of all endpoints to be removed */
ed_t * ed_bulktail; /* last endpoint of bulk list */
ed_t * ed_controltail; /* last endpoint of control list */
ed_t * ed_isotail; /* last endpoint of iso list */
int intrstatus;
- __u32 hc_control; /* copy of the hc control reg */
+ __u32 hc_control; /* copy of the hc control reg */
struct usb_bus * bus;
struct usb_device * dev[128];
struct virt_root_hub rh;
return block_read_full_page(page, adfs_get_block);
}
-static int adfs_prepare_write(struct page *page, unsigned int from, unsigned int to)
+static int adfs_prepare_write(struct file *file, struct page *page, unsigned int from, unsigned int to)
{
return cont_prepare_write(page, from, to, adfs_get_block,
&((struct inode *)page->mapping->host)->u.adfs_i.mmu_private);
{
return block_read_full_page(page,affs_get_block);
}
-static int affs_prepare_write(struct page *page, unsigned from, unsigned to)
+static int affs_prepare_write(struct file *file, struct page *page, unsigned from, unsigned to)
{
return cont_prepare_write(page,from,to,affs_get_block,
&((struct inode*)page->mapping->host)->u.affs_i.mmu_private);
return block_read_full_page(page, bfs_get_block);
}
-static int bfs_prepare_write(struct page *page, unsigned from, unsigned to)
+static int bfs_prepare_write(struct file *file, struct page *page, unsigned from, unsigned to)
{
return block_prepare_write(page, from, to, bfs_get_block);
}
if (!page)
goto fail;
- err = mapping->a_ops->prepare_write(page, 0, len-1);
+ err = mapping->a_ops->prepare_write(NULL, page, 0, len-1);
if (err)
goto fail_map;
kaddr = (char*)page_address(page);
* David S. Miller (davem@caip.rutgers.edu), 1995
*/
+#include <linux/config.h>
#include <linux/fs.h>
#include <linux/locks.h>
#include <linux/quotaops.h>
* David S. Miller (davem@caip.rutgers.edu), 1995
*/
+#include <linux/config.h>
#include <linux/fs.h>
#include <linux/locks.h>
#include <linux/quotaops.h>
{
return block_read_full_page(page,ext2_get_block);
}
-static int ext2_prepare_write(struct page *page, unsigned from, unsigned to)
+static int ext2_prepare_write(struct file *file, struct page *page, unsigned from, unsigned to)
{
return block_prepare_write(page,from,to,ext2_get_block);
}
* David S. Miller (davem@caip.rutgers.edu), 1995
*/
+#include <linux/config.h>
#include <linux/module.h>
#include <linux/string.h>
#include <linux/fs.h>
{
return block_read_full_page(page,fat_get_block);
}
-static int fat_prepare_write(struct page *page, unsigned from, unsigned to)
+static int fat_prepare_write(struct file *file, struct page *page, unsigned from, unsigned to)
{
return cont_prepare_write(page,from,to,fat_get_block,
&MSDOS_I((struct inode*)page->mapping->host)->mmu_private);
#include <linux/mm.h>
#include <linux/malloc.h>
+static void wait_for_partner(struct inode* inode, unsigned int* cnt)
+{
+ int cur = *cnt;
+ while(cur == *cnt) {
+ pipe_wait(inode);
+ if(signal_pending(current))
+ break;
+ }
+}
+
+static void wake_up_partner(struct inode* inode)
+{
+ wake_up_interruptible(PIPE_WAIT(*inode));
+}
+
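The wait_for_partner()/wake_up_partner() pair above amounts to "sleep until the partner counter ticks (or a signal arrives)". The same shape in a freestanding, single-threaded sketch — fake_pipe and fake_wait are invented stand-ins for the inode's counter and for pipe_wait():

```c
#include <assert.h>

/* invented stand-in for the inode's reader/writer counter */
struct fake_pipe { unsigned int wcounter; };

/* stand-in for pipe_wait(): here the "partner" arrives on the first wait */
static void fake_wait(struct fake_pipe *p) { p->wcounter++; }

/* same loop shape as wait_for_partner(): spin until the counter moves on;
 * returns how many times we slept */
static int wait_for_counter_change(struct fake_pipe *p, unsigned int *cnt)
{
	int spins = 0;
	unsigned int cur = *cnt;
	while (cur == *cnt) {
		fake_wait(p);
		spins++;
		/* the real code also breaks out on signal_pending(current) */
	}
	return spins;
}
```

Snapshotting the counter into `cur` before sleeping is what makes the O_NONBLOCK reader trick work: f_version records the writer counter seen at open time, so POLLHUP can be suppressed until a writer has genuinely come and gone.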
static int fifo_open(struct inode *inode, struct file *filp)
{
int ret;
if (down_interruptible(PIPE_SEM(*inode)))
goto err_nolock_nocleanup;
- if (! inode->i_pipe) {
- unsigned long page;
- struct pipe_inode_info *info;
-
- info = kmalloc(sizeof(struct pipe_inode_info),GFP_KERNEL);
-
+ if (!inode->i_pipe) {
ret = -ENOMEM;
- if (!info)
- goto err_nocleanup;
- page = __get_free_page(GFP_KERNEL);
- if (!page) {
- kfree(info);
+ if(!pipe_new(inode))
goto err_nocleanup;
- }
-
- inode->i_pipe = info;
-
- init_waitqueue_head(PIPE_WAIT(*inode));
- PIPE_BASE(*inode) = (char *) page;
- PIPE_START(*inode) = PIPE_LEN(*inode) = 0;
- PIPE_READERS(*inode) = PIPE_WRITERS(*inode) = 0;
- PIPE_WAITING_WRITERS(*inode) = PIPE_WAITING_READERS(*inode) = 0;
}
+ filp->f_version = 0;
switch (filp->f_mode) {
case 1:
* POSIX.1 says that O_NONBLOCK means return with the FIFO
* opened, even when there is no process writing the FIFO.
*/
- filp->f_op = &connecting_fifo_fops;
+ filp->f_op = &read_fifo_fops;
+ PIPE_RCOUNTER(*inode)++;
if (PIPE_READERS(*inode)++ == 0)
- wake_up_interruptible(PIPE_WAIT(*inode));
-
- if (!(filp->f_flags & O_NONBLOCK)) {
- while (!PIPE_WRITERS(*inode)) {
- if (signal_pending(current))
+ wake_up_partner(inode);
+
+ if (!PIPE_WRITERS(*inode)) {
+ if ((filp->f_flags & O_NONBLOCK)) {
+ /* suppress POLLHUP until we have
+ * seen a writer */
+ filp->f_version = PIPE_WCOUNTER(*inode);
+ } else {
+ wait_for_partner(inode, &PIPE_WCOUNTER(*inode));
+ if(signal_pending(current))
goto err_rd;
- up(PIPE_SEM(*inode));
- interruptible_sleep_on(PIPE_WAIT(*inode));
-
- /* Note that using down_interruptible here
- and similar places below is pointless,
- since we have to acquire the lock to clean
- up properly. */
- down(PIPE_SEM(*inode));
}
}
-
- if (PIPE_WRITERS(*inode))
- filp->f_op = &read_fifo_fops;
break;
case 2:
goto err;
filp->f_op = &write_fifo_fops;
+ PIPE_WCOUNTER(*inode)++;
if (!PIPE_WRITERS(*inode)++)
- wake_up_interruptible(PIPE_WAIT(*inode));
+ wake_up_partner(inode);
- while (!PIPE_READERS(*inode)) {
+ if (!PIPE_READERS(*inode)) {
+ wait_for_partner(inode, &PIPE_RCOUNTER(*inode));
if (signal_pending(current))
goto err_wr;
- up(PIPE_SEM(*inode));
- interruptible_sleep_on(PIPE_WAIT(*inode));
- down(PIPE_SEM(*inode));
}
break;
PIPE_READERS(*inode)++;
PIPE_WRITERS(*inode)++;
+ PIPE_RCOUNTER(*inode)++;
+ PIPE_WCOUNTER(*inode)++;
if (PIPE_READERS(*inode) == 1 || PIPE_WRITERS(*inode) == 1)
- wake_up_interruptible(PIPE_WAIT(*inode));
+ wake_up_partner(inode);
break;
default:
{
return block_read_full_page(page,hfs_get_block);
}
-static int hfs_prepare_write(struct page *page, unsigned from, unsigned to)
+static int hfs_prepare_write(struct file *file, struct page *page, unsigned from, unsigned to)
{
return cont_prepare_write(page,from,to,hfs_get_block,
&((struct inode*)page->mapping->host)->u.hfs_i.mmu_private);
{
return block_read_full_page(page,hpfs_get_block);
}
-static int hpfs_prepare_write(struct page *page, unsigned from, unsigned to)
+static int hpfs_prepare_write(struct file *file, struct page *page, unsigned from, unsigned to)
{
return cont_prepare_write(page,from,to,hpfs_get_block,
&((struct inode*)page->mapping->host)->u.hpfs_i.mmu_private);
{
return block_read_full_page(page,minix_get_block);
}
-static int minix_prepare_write(struct page *page, unsigned from, unsigned to)
+static int minix_prepare_write(struct file *file, struct page *page, unsigned from, unsigned to)
{
return block_prepare_write(page,from,to,minix_get_block);
}
O_TARGET := nfs.o
O_OBJS := inode.o file.o read.o write.o dir.o symlink.o proc.o \
- nfs2xdr.o
+ nfs2xdr.o flushd.o
ifdef CONFIG_ROOT_NFS
O_OBJS += nfsroot.o mount_clnt.o
* If the writer ends up delaying the write, the writer needs to
* increment the page use counts until he is done with the page.
*/
-static int nfs_prepare_write(struct page *page, unsigned offset, unsigned to)
+static int nfs_prepare_write(struct file *file, struct page *page, unsigned offset, unsigned to)
{
kmap(page);
- return 0;
+ return nfs_flush_incompatible(file, page);
}
static int nfs_commit_write(struct file *file, struct page *page, unsigned offset, unsigned to)
{
--- /dev/null
+/*
+ * linux/fs/nfs/flushd.c
+ *
+ * For each NFS mount, there is a separate cache object that contains
+ * a hash table of all clusters. With this cache, an async RPC task
+ * (`flushd') is associated, which wakes up occasionally to inspect
+ * its list of dirty buffers.
+ * (Note that RPC tasks aren't kernel threads. Take a look at the
+ * rpciod code to understand what they are).
+ *
+ * Inside the cache object, we also maintain a count of the current number
+ * of dirty pages, which may not exceed a certain threshold.
+ * (FIXME: This threshold should be configurable).
+ *
+ * The code is streamlined for what I think is the prevalent case for
+ * NFS traffic, which is sequential write access without concurrent
+ * access by different processes.
+ *
+ * Copyright (C) 1996, 1997, Olaf Kirch <okir@monad.swb.de>
+ *
+ * Rewritten 6/3/2000 by Trond Myklebust
+ * Copyright (C) 1999, 2000, Trond Myklebust <trond.myklebust@fys.uio.no>
+ */
+
+#include <linux/types.h>
+#include <linux/malloc.h>
+#include <linux/pagemap.h>
+#include <linux/file.h>
+
+#include <linux/sched.h>
+
+#include <linux/sunrpc/auth.h>
+#include <linux/sunrpc/clnt.h>
+#include <linux/sunrpc/sched.h>
+
+#include <linux/spinlock.h>
+
+#include <linux/nfs.h>
+#include <linux/nfs_fs.h>
+#include <linux/nfs_fs_sb.h>
+#include <linux/nfs_flushd.h>
+#include <linux/nfs_mount.h>
+
+/*
+ * Various constants
+ */
+#define NFSDBG_FACILITY NFSDBG_PAGECACHE
+
+/*
+ * This is the wait queue all cluster daemons sleep on
+ */
+static struct rpc_wait_queue flushd_queue = RPC_INIT_WAITQ("nfs_flushd");
+
+/*
+ * Spinlock
+ */
+spinlock_t nfs_flushd_lock = SPIN_LOCK_UNLOCKED;
+
+/*
+ * Local function declarations.
+ */
+static void nfs_flushd(struct rpc_task *);
+static void nfs_flushd_exit(struct rpc_task *);
+
+
+int nfs_reqlist_init(struct nfs_server *server)
+{
+ struct nfs_reqlist *cache;
+ struct rpc_task *task;
+ int status = 0;
+
+ dprintk("NFS: writecache_init\n");
+ spin_lock(&nfs_flushd_lock);
+ cache = server->rw_requests;
+
+ if (cache->task)
+ goto out_unlock;
+
+ /* Create the RPC task */
+ status = -ENOMEM;
+ task = rpc_new_task(server->client, NULL, RPC_TASK_ASYNC);
+ if (!task)
+ goto out_unlock;
+
+ task->tk_calldata = server;
+
+ cache->task = task;
+
+ /* Run the task */
+ cache->runat = jiffies;
+
+ cache->auth = server->client->cl_auth;
+ task->tk_action = nfs_flushd;
+ task->tk_exit = nfs_flushd_exit;
+
+ spin_unlock(&nfs_flushd_lock);
+ rpc_execute(task);
+ return 0;
+ out_unlock:
+ spin_unlock(&nfs_flushd_lock);
+ return status;
+}
+
+void nfs_reqlist_exit(struct nfs_server *server)
+{
+ struct nfs_reqlist *cache;
+
+ cache = server->rw_requests;
+ if (!cache)
+ return;
+
+ dprintk("NFS: reqlist_exit (ptr %p rpc %p)\n", cache, cache->task);
+ while (cache->task || cache->inodes) {
+ spin_lock(&nfs_flushd_lock);
+ if (!cache->task) {
+ spin_unlock(&nfs_flushd_lock);
+ nfs_reqlist_init(server);
+ } else {
+ cache->task->tk_status = -ENOMEM;
+ rpc_wake_up_task(cache->task);
+ spin_unlock(&nfs_flushd_lock);
+ }
+ interruptible_sleep_on_timeout(&cache->request_wait, 1 * HZ);
+ }
+}
+
+int nfs_reqlist_alloc(struct nfs_server *server)
+{
+ struct nfs_reqlist *cache;
+ if (server->rw_requests)
+ return 0;
+
+ cache = (struct nfs_reqlist *)kmalloc(sizeof(*cache), GFP_KERNEL);
+ if (!cache)
+ return -ENOMEM;
+
+ memset(cache, 0, sizeof(*cache));
+ init_waitqueue_head(&cache->request_wait);
+ server->rw_requests = cache;
+
+ return 0;
+}
+
+void nfs_reqlist_free(struct nfs_server *server)
+{
+ if (server->rw_requests) {
+ kfree(server->rw_requests);
+ server->rw_requests = NULL;
+ }
+}
+
+void nfs_wake_flushd(void)
+{
+ rpc_wake_up_status(&flushd_queue, -ENOMEM);
+}
+
+static void inode_append_flushd(struct inode *inode)
+{
+ struct nfs_reqlist *cache = NFS_REQUESTLIST(inode);
+ struct inode **q;
+
+ spin_lock(&nfs_flushd_lock);
+ if (NFS_FLAGS(inode) & NFS_INO_FLUSH)
+ goto out;
+ inode->u.nfs_i.hash_next = NULL;
+
+ q = &cache->inodes;
+ while (*q)
+ q = &(*q)->u.nfs_i.hash_next;
+ *q = inode;
+
+ /* Note: we increase the inode i_count in order to prevent
+ * it from disappearing when on the flush list
+ */
+ NFS_FLAGS(inode) |= NFS_INO_FLUSH;
+ inode->i_count++;
+ out:
+ spin_unlock(&nfs_flushd_lock);
+}
+
+void inode_remove_flushd(struct inode *inode)
+{
+ struct nfs_reqlist *cache = NFS_REQUESTLIST(inode);
+ struct inode **q;
+
+ spin_lock(&nfs_flushd_lock);
+ if (!(NFS_FLAGS(inode) & NFS_INO_FLUSH))
+ goto out;
+
+ q = &cache->inodes;
+ while (*q && *q != inode)
+ q = &(*q)->u.nfs_i.hash_next;
+ if (*q) {
+ *q = inode->u.nfs_i.hash_next;
+ NFS_FLAGS(inode) &= ~NFS_INO_FLUSH;
+ iput(inode);
+ }
+ out:
+ spin_unlock(&nfs_flushd_lock);
+}
+
+void inode_schedule_scan(struct inode *inode, unsigned long time)
+{
+ struct nfs_reqlist *cache = NFS_REQUESTLIST(inode);
+ struct rpc_task *task;
+ unsigned long mintimeout;
+
+ if (time_after(NFS_NEXTSCAN(inode), time))
+ NFS_NEXTSCAN(inode) = time;
+ mintimeout = jiffies + 1 * HZ;
+ if (time_before(mintimeout, NFS_NEXTSCAN(inode)))
+ mintimeout = NFS_NEXTSCAN(inode);
+ inode_append_flushd(inode);
+
+ spin_lock(&nfs_flushd_lock);
+ task = cache->task;
+ if (!task) {
+ spin_unlock(&nfs_flushd_lock);
+ nfs_reqlist_init(NFS_SERVER(inode));
+ } else {
+ if (time_after(cache->runat, mintimeout))
+ rpc_wake_up_task(task);
+ spin_unlock(&nfs_flushd_lock);
+ }
+}
+
+
+static void
+nfs_flushd(struct rpc_task *task)
+{
+ struct nfs_server *server;
+ struct nfs_reqlist *cache;
+ struct inode *inode, *next;
+ unsigned long delay = jiffies + NFS_WRITEBACK_LOCKDELAY;
+ int flush = (task->tk_status == -ENOMEM);
+
+ dprintk("NFS: %4d flushd starting\n", task->tk_pid);
+ server = (struct nfs_server *) task->tk_calldata;
+ cache = server->rw_requests;
+
+ spin_lock(&nfs_flushd_lock);
+ next = cache->inodes;
+ cache->inodes = NULL;
+ spin_unlock(&nfs_flushd_lock);
+
+ while ((inode = next) != NULL) {
+ next = next->u.nfs_i.hash_next;
+ inode->u.nfs_i.hash_next = NULL;
+ NFS_FLAGS(inode) &= ~NFS_INO_FLUSH;
+
+ if (flush) {
+ nfs_sync_file(inode, NULL, 0, 0, FLUSH_AGING);
+ } else if (time_after(jiffies, NFS_NEXTSCAN(inode))) {
+ NFS_NEXTSCAN(inode) = jiffies + NFS_WRITEBACK_LOCKDELAY;
+ nfs_flush_timeout(inode, FLUSH_AGING);
+#ifdef CONFIG_NFS_V3
+ nfs_commit_timeout(inode, FLUSH_AGING);
+#endif
+ }
+
+ if (nfs_have_writebacks(inode)) {
+ inode_append_flushd(inode);
+ if (time_after(delay, NFS_NEXTSCAN(inode)))
+ delay = NFS_NEXTSCAN(inode);
+ }
+ iput(inode);
+ }
+
+ dprintk("NFS: %4d flushd back to sleep\n", task->tk_pid);
+ if (time_after(jiffies + 1 * HZ, delay))
+ delay = 1 * HZ;
+ else
+ delay = delay - jiffies;
+ task->tk_status = 0;
+ task->tk_action = nfs_flushd;
+ task->tk_timeout = delay;
+ cache->runat = jiffies + task->tk_timeout;
+
+ spin_lock(&nfs_flushd_lock);
+ if (!cache->nr_requests && !cache->inodes) {
+ cache->task = NULL;
+ task->tk_action = NULL;
+ } else
+ rpc_sleep_on(&flushd_queue, task, NULL, NULL);
+ spin_unlock(&nfs_flushd_lock);
+}
+
+static void
+nfs_flushd_exit(struct rpc_task *task)
+{
+ struct nfs_server *server;
+ struct nfs_reqlist *cache;
+ server = (struct nfs_server *) task->tk_calldata;
+ cache = server->rw_requests;
+
+ spin_lock(&nfs_flushd_lock);
+ if (cache->task == task)
+ cache->task = NULL;
+ spin_unlock(&nfs_flushd_lock);
+ wake_up(&cache->request_wait);
+ rpc_release_task(task);
+}
+
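inode_append_flushd() and inode_remove_flushd() above walk the per-cache inode chain through a pointer-to-pointer (`struct inode **q`), which avoids any special-casing of the list head. The pattern in isolation, with an invented node type in place of struct inode:

```c
#include <assert.h>
#include <stddef.h>

struct node { struct node *next; };

/* append at the tail, as inode_append_flushd() does via u.nfs_i.hash_next */
static void append(struct node **head, struct node *n)
{
	struct node **q = head;

	n->next = NULL;
	while (*q)
		q = &(*q)->next;
	*q = n;		/* works the same whether *q is the head or a ->next */
}

/* unlink a node wherever it sits, as inode_remove_flushd() does */
static void remove_node(struct node **head, struct node *n)
{
	struct node **q = head;

	while (*q && *q != n)
		q = &(*q)->next;
	if (*q)
		*q = n->next;
}
```

Because `q` always points at the link to rewrite (either the head pointer or a predecessor's `next`), unlinking needs no "previous node" bookkeeping and no head/body distinction.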
#include <linux/sunrpc/clnt.h>
#include <linux/sunrpc/stats.h>
#include <linux/nfs_fs.h>
+#include <linux/nfs_flushd.h>
#include <linux/lockd/bind.h>
#include <linux/smp_lock.h>
inode->i_rdev = 0;
NFS_FILEID(inode) = 0;
NFS_FSID(inode) = 0;
+ INIT_LIST_HEAD(&inode->u.nfs_i.dirty);
+ INIT_LIST_HEAD(&inode->u.nfs_i.commit);
+ INIT_LIST_HEAD(&inode->u.nfs_i.writeback);
+ inode->u.nfs_i.ndirty = 0;
+ inode->u.nfs_i.ncommit = 0;
+ inode->u.nfs_i.npages = 0;
NFS_CACHEINV(inode);
NFS_ATTRTIMEO(inode) = NFS_MINATTRTIMEO(inode);
}
static void
nfs_delete_inode(struct inode * inode)
{
- int failed;
-
dprintk("NFS: delete_inode(%x/%ld)\n", inode->i_dev, inode->i_ino);
lock_kernel();
nfs_free_dircache(inode);
} else {
/*
- * Flush out any pending write requests ...
+ * The following can never actually happen...
*/
- if (NFS_WRITEBACK(inode) != NULL) {
- unsigned long timeout = jiffies + 5*HZ;
-#ifdef NFS_DEBUG_VERBOSE
-printk("nfs_delete_inode: inode %ld has pending RPC requests\n", inode->i_ino);
-#endif
- nfs_inval(inode);
- while (NFS_WRITEBACK(inode) != NULL &&
- time_before(jiffies, timeout)) {
- current->state = TASK_INTERRUPTIBLE;
- schedule_timeout(HZ/10);
- }
- current->state = TASK_RUNNING;
- if (NFS_WRITEBACK(inode) != NULL)
- printk("NFS: Arghhh, stuck RPC requests!\n");
+ if (nfs_have_writebacks(inode)) {
+ printk(KERN_ERR "nfs_delete_inode: inode %ld has pending RPC requests\n", inode->i_ino);
}
}
-
- failed = nfs_check_failed_request(inode);
- if (failed)
- printk("NFS: inode %ld had %d failed requests\n",
- inode->i_ino, failed);
unlock_kernel();
clear_inode(inode);
struct nfs_server *server = &sb->u.nfs_sb.s_server;
struct rpc_clnt *rpc;
+ /*
+ * First get rid of the request flushing daemon.
+ * Relies on rpc_shutdown_client() waiting on all
+ * client tasks to finish.
+ */
+ nfs_reqlist_exit(server);
+
if ((rpc = server->client) != NULL)
rpc_shutdown_client(rpc);
+ nfs_reqlist_free(server);
+
if (!(server->flags & NFS_MOUNT_NONLM))
lockd_down(); /* release rpc.lockd */
rpciod_down(); /* release rpciod */
sb->s_root->d_op = &nfs_dentry_operations;
sb->s_root->d_fsdata = root_fh;
+ /* Fire up the writeback cache */
+ if (nfs_reqlist_alloc(server) < 0) {
+ printk(KERN_NOTICE "NFS: cannot initialize writeback cache.\n");
+ goto failure_kill_reqlist;
+ }
+
/* We're airborne */
/* Check whether to start the lockd process */
return sb;
/* Yargs. It didn't work out. */
+ failure_kill_reqlist:
+ nfs_reqlist_exit(server);
out_no_root:
printk("nfs_read_super: get root inode failed\n");
iput(root_inode);
printk(KERN_WARNING "NFS: cannot create RPC transport.\n");
out_free_host:
+ nfs_reqlist_free(server);
kfree(server->hostname);
out_unlock:
goto out_fail;
make_bad_inode(inode);
inode->i_mode = save_mode;
- nfs_inval(inode);
nfs_zap_caches(inode);
}
* to look at the size or the mtime the server sends us
* too closely, as we're in the middle of modifying them.
*/
- if (NFS_WRITEBACK(inode))
+ if (nfs_have_writebacks(inode))
goto out;
if (inode->i_size != fattr->size) {
static DECLARE_FSTYPE(nfs_fs_type, "nfs", nfs_read_super, 0);
extern int nfs_init_fhcache(void);
-extern int nfs_init_wreqcache(void);
+extern int nfs_init_nfspagecache(void);
/*
* Initialize NFS
if (err)
return err;
- err = nfs_init_wreqcache();
+ err = nfs_init_nfspagecache();
if (err)
return err;
#define NFS_diropres_sz 1+NFS_fhandle_sz+NFS_fattr_sz
#define NFS_readlinkres_sz 1
#define NFS_readres_sz 1+NFS_fattr_sz+1
+#define NFS_writeres_sz NFS_attrstat_sz
#define NFS_stat_sz 1
#define NFS_readdirres_sz 1
#define NFS_statfsres_sz 1+NFS_info_sz
static int
nfs_xdr_writeargs(struct rpc_rqst *req, u32 *p, struct nfs_writeargs *args)
{
+ unsigned int nr;
u32 count = args->count;
p = xdr_encode_fhandle(p, args->fh);
*p++ = htonl(count);
req->rq_slen = xdr_adjust_iovec(req->rq_svec, p);
- req->rq_svec[1].iov_base = (void *) args->buffer;
- req->rq_svec[1].iov_len = count;
- req->rq_slen += count;
- req->rq_snr = 2;
+ /* Get the number of buffers in the send iovec */
+ nr = args->nriov;
+
+ if (nr+2 > MAX_IOVEC) {
+ printk(KERN_ERR "NFS: Bad number of iov's in xdr_writeargs "
+ "(nr %d max %d)\n", nr, MAX_IOVEC);
+ return -EINVAL;
+ }
+
+ /* Copy the iovec */
+ memcpy(req->rq_svec + 1, args->iov, nr * sizeof(struct iovec));
#ifdef NFS_PAD_WRITES
/*
* Some old servers require that the message length
* be a multiple of 4, so we pad it here if needed.
*/
- count = ((count + 3) & ~3) - count;
- if (count) {
-#if 0
-printk("nfs_writeargs: padding write, len=%d, slen=%d, pad=%d\n",
-req->rq_svec[1].iov_len, req->rq_slen, count);
-#endif
- req->rq_svec[2].iov_base = (void *) "\0\0\0";
- req->rq_svec[2].iov_len = count;
- req->rq_slen += count;
- req->rq_snr = 3;
+ if (count & 3) {
+ struct iovec *iov = req->rq_svec + nr + 1;
+ int pad = 4 - (count & 3);
+
+ iov->iov_base = (void *) "\0\0\0";
+ iov->iov_len = pad;
+ count += pad;
+ nr++;
}
#endif
+ req->rq_slen += count;
+ req->rq_snr += nr;
return 0;
}
return 0;
}
+/*
+ * Decode WRITE reply
+ */
+static int
+nfs_xdr_writeres(struct rpc_rqst *req, u32 *p, struct nfs_writeres *res)
+{
+ res->verf->committed = NFS_FILE_SYNC;
+ return nfs_xdr_attrstat(req, p, res->fattr);
+}
+
/*
* Decode STATFS reply
*/
PROC(readlink, readlinkargs, readlinkres),
PROC(read, readargs, readres),
PROC(writecache, enc_void, dec_void),
- PROC(write, writeargs, attrstat),
+ PROC(write, writeargs, writeres),
PROC(create, createargs, diropres),
PROC(remove, diropargs, stat),
PROC(rename, renameargs, stat),
{ "nocto", ~NFS_MOUNT_NOCTO, NFS_MOUNT_NOCTO },
{ "ac", ~NFS_MOUNT_NOAC, 0 },
{ "noac", ~NFS_MOUNT_NOAC, NFS_MOUNT_NOAC },
+ { "lock", ~NFS_MOUNT_LOCK, 0 },
+ { "nolock", ~NFS_MOUNT_NONLM, NFS_MOUNT_NONLM },
{ NULL, 0, 0 }
};
unsigned long offset, unsigned int count,
const void *buffer, struct nfs_fattr *fattr)
{
- struct nfs_writeargs arg = { fhandle, offset, count, buffer };
+ struct nfs_writeargs arg = { fhandle, offset, count, 1, 1,
+ {{(void *) buffer, count}, {0,0}, {0,0}, {0,0},
+ {0,0}, {0,0}, {0,0}, {0,0}}};
+ struct nfs_writeres res = {fattr, 0, count};
int status;
dprintk("NFS call write %d @ %ld\n", count, offset);
- status = rpc_call(server->client, NFSPROC_WRITE, &arg, fattr,
+ status = rpc_call(server->client, NFSPROC_WRITE, &arg, &res,
swap? (RPC_TASK_SWAPPER|RPC_TASK_ROOTCREDS) : 0);
dprintk("NFS reply read: %d\n", status);
return status < 0? status : count;
#include <linux/sunrpc/clnt.h>
#include <linux/nfs_fs.h>
+#include <linux/nfs_flushd.h>
#include <asm/uaccess.h>
#include <linux/smp_lock.h>
#define NFS_PARANOIA 1
#define NFSDBG_FACILITY NFSDBG_PAGECACHE
-static void nfs_wback_begin(struct rpc_task *task);
-static void nfs_wback_result(struct rpc_task *task);
-static void nfs_cancel_request(struct nfs_wreq *req);
+/*
+ * Spinlock
+ */
+spinlock_t nfs_wreq_lock = SPIN_LOCK_UNLOCKED;
+static unsigned int nfs_nr_requests = 0;
/*
- * Cache parameters
+ * Local structures
+ *
+ * Valid flags for a dirty buffer
*/
-#define NFS_WRITEBACK_DELAY (10 * HZ)
-#define NFS_WRITEBACK_MAX 64
+#define PG_BUSY 0x0001
/*
- * Limit number of delayed writes
+ * This is the struct where the WRITE/COMMIT arguments go.
*/
-static int nr_write_requests = 0;
-static struct rpc_wait_queue write_queue = RPC_INIT_WAITQ("write_chain");
+struct nfs_write_data {
+ struct rpc_task task;
+ struct file *file;
+ struct rpc_cred *cred;
+ struct nfs_writeargs args; /* argument struct */
+ struct nfs_writeres res; /* result struct */
+ struct nfs_fattr fattr;
+ struct nfs_writeverf verf;
+ struct list_head pages; /* Coalesced requests we wish to flush */
+};
+
+struct nfs_page {
+ struct list_head wb_hash, /* Inode */
+ wb_list,
+ *wb_list_head;
+ struct file *wb_file;
+ struct rpc_cred *wb_cred;
+ struct page *wb_page; /* page to write out */
+ wait_queue_head_t wb_wait; /* wait queue */
+ unsigned long wb_timeout; /* when to write/commit */
+ unsigned int wb_offset, /* Offset of write */
+ wb_bytes, /* Length of request */
+ wb_count, /* reference count */
+ wb_flags;
+ struct nfs_writeverf wb_verf; /* Commit cookie */
+};
+
+#define NFS_WBACK_BUSY(req) ((req)->wb_flags & PG_BUSY)
+
+/*
+ * Local function declarations
+ */
+static void nfs_writeback_done(struct rpc_task *);
+#ifdef CONFIG_NFS_V3
+static void nfs_commit_done(struct rpc_task *);
+#endif
/* Hack for future NFS swap support */
#ifndef IS_SWAPFILE
# define IS_SWAPFILE(inode) (0)
#endif
+static kmem_cache_t *nfs_page_cachep = NULL;
+static kmem_cache_t *nfs_wdata_cachep = NULL;
+
+static __inline__ struct nfs_page *nfs_page_alloc(void)
+{
+ struct nfs_page *p;
+ p = kmem_cache_alloc(nfs_page_cachep, SLAB_KERNEL);
+ if (p) {
+ memset(p, 0, sizeof(*p));
+ INIT_LIST_HEAD(&p->wb_hash);
+ INIT_LIST_HEAD(&p->wb_list);
+ init_waitqueue_head(&p->wb_wait);
+ }
+ return p;
+}
+
+static __inline__ void nfs_page_free(struct nfs_page *p)
+{
+ kmem_cache_free(nfs_page_cachep, p);
+}
+
+static __inline__ struct nfs_write_data *nfs_writedata_alloc(void)
+{
+ struct nfs_write_data *p;
+ p = kmem_cache_alloc(nfs_wdata_cachep, SLAB_NFS);
+ if (p) {
+ memset(p, 0, sizeof(*p));
+ INIT_LIST_HEAD(&p->pages);
+ }
+ return p;
+}
+
+static __inline__ void nfs_writedata_free(struct nfs_write_data *p)
+{
+ kmem_cache_free(nfs_wdata_cachep, p);
+}
+
+static void nfs_writedata_release(struct rpc_task *task)
+{
+ struct nfs_write_data *wdata = (struct nfs_write_data *)task->tk_calldata;
+ rpc_release_task(task);
+ nfs_writedata_free(wdata);
+}
+
+/*
+ * This function will be used to simulate weak cache consistency
+ * under NFSv2 when the NFSv3 attribute patch is included.
+ * For the moment, we just call nfs_refresh_inode().
+ */
+static __inline__ int
+nfs_write_attributes(struct inode *inode, struct nfs_fattr *fattr)
+{
+ return nfs_refresh_inode(inode, fattr);
+}
+
/*
* Write a page synchronously.
* Offset is the data offset within the page.
}
/*
- * Append a writeback request to a list
+ * Write a page to the server. This was supposed to be used for
+ * NFS swapping only.
+ * FIXME: Using this for mmap is pointless, breaks asynchronous
+ * writebacks, and is extremely slow.
*/
-static inline void
-append_write_request(struct nfs_wreq **q, struct nfs_wreq *wreq)
+int
+nfs_writepage(struct dentry * dentry, struct page *page)
{
- dprintk("NFS: append_write_request(%p, %p)\n", q, wreq);
- rpc_append_list(q, wreq);
+ struct inode *inode = dentry->d_inode;
+ unsigned long end_index = inode->i_size >> PAGE_CACHE_SHIFT;
+ unsigned offset = PAGE_CACHE_SIZE;
+ int err;
+
+ /* easy case */
+ if (page->index < end_index)
+ goto do_it;
+ /* things got complicated... */
+ offset = inode->i_size & (PAGE_CACHE_SIZE-1);
+ /* OK, are we completely out? */
+ if (page->index >= end_index+1 || !offset)
+ return -EIO;
+do_it:
+ err = nfs_writepage_sync(dentry, inode, page, 0, offset);
+ if (err == offset)
+ return 0;
+ return err;
}
/*
- * Remove a writeback request from a list
+ * Check whether the file range we want to write to is locked by
+ * us.
+ */
+static int
+region_locked(struct inode *inode, struct nfs_page *req)
+{
+ struct file_lock *fl;
+ unsigned long rqstart, rqend;
+
+ /* Don't optimize writes if we don't use NLM */
+ if (NFS_SERVER(inode)->flags & NFS_MOUNT_NONLM)
+ return 0;
+
+ rqstart = page_offset(req->wb_page) + req->wb_offset;
+ rqend = rqstart + req->wb_bytes;
+ for (fl = inode->i_flock; fl; fl = fl->fl_next) {
+ if (fl->fl_owner == current->files && (fl->fl_flags & FL_POSIX)
+ && fl->fl_type == F_WRLCK
+ && fl->fl_start <= rqstart && rqend <= fl->fl_end) {
+ return 1;
+ }
+ }
+
+ return 0;
+}
+
+static inline struct nfs_page *
+nfs_inode_wb_entry(struct list_head *head)
+{
+ return list_entry(head, struct nfs_page, wb_hash);
+}
+
+/*
+ * Insert a write request into an inode
*/
static inline void
-remove_write_request(struct nfs_wreq **q, struct nfs_wreq *wreq)
+nfs_inode_add_request(struct inode *inode, struct nfs_page *req)
{
- dprintk("NFS: remove_write_request(%p, %p)\n", q, wreq);
- rpc_remove_list(q, wreq);
+ if (!list_empty(&req->wb_hash))
+ return;
+ if (!NFS_WBACK_BUSY(req))
+ printk(KERN_ERR "NFS: attempt to hash an unlocked request!\n");
+ inode->u.nfs_i.npages++;
+ list_add(&req->wb_hash, &inode->u.nfs_i.writeback);
+ req->wb_count++;
}
/*
- * Find a non-busy write request for a given page to
- * try to combine with.
+ * Remove a write request from an inode
*/
-static inline struct nfs_wreq *
-find_write_request(struct inode *inode, struct page *page)
+static inline void
+nfs_inode_remove_request(struct nfs_page *req)
{
- pid_t pid = current->pid;
- struct nfs_wreq *head, *req;
+ struct inode *inode;
+ spin_lock(&nfs_wreq_lock);
+ if (list_empty(&req->wb_hash)) {
+ spin_unlock(&nfs_wreq_lock);
+ return;
+ }
+ if (!NFS_WBACK_BUSY(req))
+ printk(KERN_ERR "NFS: attempt to unhash an unlocked request!\n");
+ inode = req->wb_file->f_dentry->d_inode;
+ list_del(&req->wb_hash);
+ INIT_LIST_HEAD(&req->wb_hash);
+ inode->u.nfs_i.npages--;
+ if ((inode->u.nfs_i.npages == 0) != list_empty(&inode->u.nfs_i.writeback))
+ printk(KERN_ERR "NFS: desynchronized value of nfs_i.npages.\n");
+ if (!nfs_have_writebacks(inode))
+ inode_remove_flushd(inode);
+ spin_unlock(&nfs_wreq_lock);
+ nfs_release_request(req);
+}
- dprintk("NFS: find_write_request(%x/%ld, %p)\n",
- inode->i_dev, inode->i_ino, page);
- if (!(req = head = NFS_WRITEBACK(inode)))
- return NULL;
- do {
- /*
- * We can't combine with canceled requests or
- * requests that have already been started..
- */
- if (req->wb_flags & (NFS_WRITE_CANCELLED | NFS_WRITE_INPROGRESS))
+/*
+ * Find a request
+ */
+static inline struct nfs_page *
+_nfs_find_request(struct inode *inode, struct page *page)
+{
+ struct list_head *head, *next;
+
+ head = &inode->u.nfs_i.writeback;
+ next = head->next;
+ while (next != head) {
+ struct nfs_page *req = nfs_inode_wb_entry(next);
+ next = next->next;
+ if (page_index(req->wb_page) != page_index(page))
continue;
+ req->wb_count++;
+ return req;
+ }
+ return NULL;
+}
- if (req->wb_page == page && req->wb_pid == pid)
- return req;
+struct nfs_page *
+nfs_find_request(struct inode *inode, struct page *page)
+{
+ struct nfs_page *req;
- /*
- * Ehh, don't keep too many tasks queued..
- */
- rpc_wake_up_task(&req->wb_task);
+ spin_lock(&nfs_wreq_lock);
+ req = _nfs_find_request(inode, page);
+ spin_unlock(&nfs_wreq_lock);
+ return req;
+}
- } while ((req = WB_NEXT(req)) != head);
- return NULL;
+static inline struct nfs_page *
+nfs_list_entry(struct list_head *head)
+{
+ return list_entry(head, struct nfs_page, wb_list);
}
/*
- * Find and release all failed requests for this inode.
+ * Insert a write request into a sorted list
*/
-int
-nfs_check_failed_request(struct inode * inode)
+static inline void
+nfs_list_add_request(struct nfs_page *req, struct list_head *head)
{
- /* FIXME! */
- return 0;
+ struct list_head *prev;
+
+ if (!list_empty(&req->wb_list)) {
+ printk(KERN_ERR "NFS: Add to list failed!\n");
+ return;
+ }
+ if (list_empty(&req->wb_hash)) {
+ printk(KERN_ERR "NFS: attempt to add an unhashed request to a list!\n");
+ return;
+ }
+ if (!NFS_WBACK_BUSY(req))
+ printk(KERN_ERR "NFS: attempt to add an unlocked request to a list!\n");
+ prev = head->prev;
+ while (prev != head) {
+ struct nfs_page *p = nfs_list_entry(prev);
+ if (page_index(p->wb_page) < page_index(req->wb_page))
+ break;
+ prev = prev->prev;
+ }
+ list_add(&req->wb_list, prev);
+ req->wb_list_head = head;
}
/*
- * Try to merge adjacent write requests. This works only for requests
- * issued by the same user.
+ * Remove a write request from a list
*/
-static inline int
-update_write_request(struct nfs_wreq *req, unsigned int first,
- unsigned int bytes)
+static inline void
+nfs_list_remove_request(struct nfs_page *req)
{
- unsigned int rqfirst = req->wb_offset,
- rqlast = rqfirst + req->wb_bytes,
- last = first + bytes;
+ if (list_empty(&req->wb_list))
+ return;
+ if (!NFS_WBACK_BUSY(req))
+ printk(KERN_ERR "NFS: attempt to remove an unlocked request from a list!\n");
+ list_del(&req->wb_list);
+ INIT_LIST_HEAD(&req->wb_list);
+ req->wb_list_head = NULL;
+}
- dprintk("nfs: trying to update write request %p\n", req);
+/*
+ * Add a request to the inode's dirty list.
+ */
+static inline void
+nfs_mark_request_dirty(struct nfs_page *req)
+{
+ struct inode *inode = req->wb_file->f_dentry->d_inode;
- /* not contiguous? */
- if (rqlast < first || last < rqfirst)
- return 0;
+ spin_lock(&nfs_wreq_lock);
+ if (list_empty(&req->wb_list)) {
+ nfs_list_add_request(req, &inode->u.nfs_i.dirty);
+ inode->u.nfs_i.ndirty++;
+ }
+ spin_unlock(&nfs_wreq_lock);
+ /*
+ * NB: the call to inode_schedule_scan() must lie outside the
+ * spinlock since it can run flushd().
+ */
+ inode_schedule_scan(inode, req->wb_timeout);
+}
- if (first < rqfirst)
- rqfirst = first;
- if (rqlast < last)
- rqlast = last;
+/*
+ * Check if a request is dirty
+ */
+static inline int
+nfs_dirty_request(struct nfs_page *req)
+{
+ struct inode *inode = req->wb_file->f_dentry->d_inode;
+ return !list_empty(&req->wb_list) && req->wb_list_head == &inode->u.nfs_i.dirty;
+}
- req->wb_offset = rqfirst;
- req->wb_bytes = rqlast - rqfirst;
- req->wb_count++;
+#ifdef CONFIG_NFS_V3
+/*
+ * Add a request to the inode's commit list.
+ */
+static inline void
+nfs_mark_request_commit(struct nfs_page *req)
+{
+ struct inode *inode = req->wb_file->f_dentry->d_inode;
- return 1;
+ spin_lock(&nfs_wreq_lock);
+ if (list_empty(&req->wb_list)) {
+ nfs_list_add_request(req, &inode->u.nfs_i.commit);
+ inode->u.nfs_i.ncommit++;
+ }
+ spin_unlock(&nfs_wreq_lock);
+ /*
+ * NB: the call to inode_schedule_scan() must lie outside the
+ * spinlock since it can run flushd().
+ */
+ inode_schedule_scan(inode, req->wb_timeout);
}
+#endif
-static kmem_cache_t *nfs_wreq_cachep;
-
-int nfs_init_wreqcache(void)
+/*
+ * Lock the page of an asynchronous request
+ */
+static inline int
+nfs_lock_request(struct nfs_page *req)
{
- nfs_wreq_cachep = kmem_cache_create("nfs_wreq",
- sizeof(struct nfs_wreq),
- 0, SLAB_HWCACHE_ALIGN,
- NULL, NULL);
- if (nfs_wreq_cachep == NULL)
- return -ENOMEM;
- return 0;
+ if (NFS_WBACK_BUSY(req))
+ return 0;
+ req->wb_count++;
+ req->wb_flags |= PG_BUSY;
+ return 1;
}
static inline void
-free_write_request(struct nfs_wreq * req)
+nfs_unlock_request(struct nfs_page *req)
{
- if (!--req->wb_count)
- kmem_cache_free(nfs_wreq_cachep, req);
+ if (!NFS_WBACK_BUSY(req)) {
+ printk(KERN_ERR "NFS: Invalid unlock attempted\n");
+ return;
+ }
+ req->wb_flags &= ~PG_BUSY;
+ wake_up(&req->wb_wait);
+ nfs_release_request(req);
}
/*
- * Create and initialize a writeback request
+ * Create a write request.
+ * Page must be locked by the caller. This makes sure we never create
+ * two different requests for the same page, and avoids possible deadlock
+ * when we reach the hard limit on the number of dirty pages.
*/
-static inline struct nfs_wreq *
-create_write_request(struct file * file, struct page *page, unsigned int offset, unsigned int bytes)
+static struct nfs_page *
+nfs_create_request(struct inode *inode, struct file *file, struct page *page,
+ unsigned int offset, unsigned int count)
{
- struct dentry *dentry = file->f_dentry;
- struct inode *inode = dentry->d_inode;
- struct rpc_clnt *clnt = NFS_CLIENT(inode);
- struct nfs_wreq *wreq;
- struct rpc_task *task;
- struct rpc_message msg;
+ struct nfs_reqlist *cache = NFS_REQUESTLIST(inode);
+ struct nfs_page *req = NULL;
+ long timeout;
- dprintk("NFS: create_write_request(%s/%s, %ld+%d)\n",
- dentry->d_parent->d_name.name, dentry->d_name.name,
- (page->index << PAGE_CACHE_SHIFT) + offset, bytes);
-
- /* FIXME: Enforce hard limit on number of concurrent writes? */
- wreq = kmem_cache_alloc(nfs_wreq_cachep, SLAB_KERNEL);
- if (!wreq)
- goto out_fail;
- memset(wreq, 0, sizeof(*wreq));
+ /* Deal with the hard/soft limits on the number of outstanding requests. */
+ do {
+ /* If we're over the soft limit, flush out old requests */
+ if (nfs_nr_requests >= MAX_REQUEST_SOFT)
+ nfs_wb_file(inode, file);
+
+ /* If we're still over the soft limit, wake up some requests */
+ if (nfs_nr_requests >= MAX_REQUEST_SOFT) {
+ dprintk("NFS: hit soft limit (%d requests)\n",
+ nfs_nr_requests);
+ if (!cache->task)
+ nfs_reqlist_init(NFS_SERVER(inode));
+ nfs_wake_flushd();
+ }
- task = &wreq->wb_task;
- rpc_init_task(task, clnt, nfs_wback_result, RPC_TASK_NFSWRITE);
- msg.rpc_proc = NFSPROC_WRITE;
- msg.rpc_argp = &wreq->wb_args;
- msg.rpc_resp = &wreq->wb_fattr;
- msg.rpc_cred = NULL;
- rpc_call_setup(task, &msg, 0);
- if (task->tk_status < 0)
- goto out_req;
+ /* If we haven't reached the hard limit yet,
+ * try to allocate the request struct */
+ if (nfs_nr_requests < MAX_REQUEST_HARD) {
+ req = nfs_page_alloc();
+ if (req != NULL)
+ break;
+ }
- task->tk_calldata = wreq;
+ /* We're over the hard limit. Wait for better times */
+ dprintk("NFS: create_request sleeping (total %d pid %d)\n",
+ nfs_nr_requests, current->pid);
+
+ timeout = 1 * HZ;
+ if (NFS_SERVER(inode)->flags & NFS_MOUNT_INTR) {
+ interruptible_sleep_on_timeout(&cache->request_wait,
+ timeout);
+ if (signalled())
+ break;
+ } else
+ sleep_on_timeout(&cache->request_wait, timeout);
+
+ dprintk("NFS: create_request waking up (tot %d pid %d)\n",
+ nfs_nr_requests, current->pid);
+ } while (!req);
+ if (!req)
+ return NULL;
- /* Put the task on inode's writeback request list. */
+ /* Initialize the request struct. The write-back delay is
+ * chosen below: a long delay if the region is locked by us,
+ * the normal delay otherwise. */
+ req->wb_page = page;
+ atomic_inc(&page->count);
+ req->wb_offset = offset;
+ req->wb_bytes = count;
+ /* If the region is locked, adjust the timeout */
+ if (region_locked(inode, req))
+ req->wb_timeout = jiffies + NFS_WRITEBACK_LOCKDELAY;
+ else
+ req->wb_timeout = jiffies + NFS_WRITEBACK_DELAY;
+ req->wb_file = file;
+ req->wb_cred = rpcauth_lookupcred(NFS_CLIENT(inode)->cl_auth, 0);
get_file(file);
- wreq->wb_file = file;
- wreq->wb_pid = current->pid;
- wreq->wb_page = page;
- init_waitqueue_head(&wreq->wb_wait);
- wreq->wb_offset = offset;
- wreq->wb_bytes = bytes;
- wreq->wb_count = 2; /* One for the IO, one for us */
+ req->wb_count = 1;
- kmap(page);
- append_write_request(&NFS_WRITEBACK(inode), wreq);
+ /* register request's existence */
+ cache->nr_requests++;
+ nfs_nr_requests++;
+ return req;
+}
- if (nr_write_requests++ > NFS_WRITEBACK_MAX*3/4)
- rpc_wake_up_next(&write_queue);
- return wreq;
+/*
+ * Release all resources associated with a write request after it
+ * has been committed to stable storage
+ *
+ * Note: Takes the spinlock itself, so must not be called with it held!
+ */
+void
+nfs_release_request(struct nfs_page *req)
+{
+ struct inode *inode = req->wb_file->f_dentry->d_inode;
+ struct nfs_reqlist *cache = NFS_REQUESTLIST(inode);
+ struct page *page = req->wb_page;
+
+ spin_lock(&nfs_wreq_lock);
+ if (--req->wb_count) {
+ spin_unlock(&nfs_wreq_lock);
+ return;
+ }
+ spin_unlock(&nfs_wreq_lock);
-out_req:
- rpc_release_task(task);
- kmem_cache_free(nfs_wreq_cachep, wreq);
-out_fail:
- return NULL;
+ if (!list_empty(&req->wb_list)) {
+ printk(KERN_ERR "NFS: Request released while still on a list!\n");
+ nfs_list_remove_request(req);
+ }
+ if (!list_empty(&req->wb_hash)) {
+ printk(KERN_ERR "NFS: Request released while still hashed!\n");
+ nfs_inode_remove_request(req);
+ }
+ if (NFS_WBACK_BUSY(req))
+ printk(KERN_ERR "NFS: Request released while still locked!\n");
+
+ rpcauth_releasecred(NFS_CLIENT(inode)->cl_auth, req->wb_cred);
+ fput(req->wb_file);
+ page_cache_release(page);
+ nfs_page_free(req);
+ /* wake up anyone waiting to allocate a request */
+ cache->nr_requests--;
+ nfs_nr_requests--;
+ wake_up(&cache->request_wait);
}
/*
- * Schedule a writeback RPC call.
- * If the server is congested, don't add to our backlog of queued
- * requests but call it synchronously.
- * The function returns whether we should wait for the thing or not.
+ * Wait for a request to complete.
*
- * FIXME: Here we could walk the inode's lock list to see whether the
- * page we're currently writing to has been write-locked by the caller.
- * If it is, we could schedule an async write request with a long
- * delay in order to avoid writing back the page until the lock is
- * released.
+ * Interruptible by signals only if mounted with intr flag.
*/
-static inline int
-schedule_write_request(struct nfs_wreq *req, int sync)
+static int
+nfs_wait_on_request(struct nfs_page *req)
{
- struct rpc_task *task = &req->wb_task;
- struct file *file = req->wb_file;
- struct dentry *dentry = file->f_dentry;
- struct inode *inode = dentry->d_inode;
+ struct inode *inode = req->wb_file->f_dentry->d_inode;
+ struct rpc_clnt *clnt = NFS_CLIENT(inode);
+ int retval;
- if (NFS_CONGESTED(inode) || nr_write_requests >= NFS_WRITEBACK_MAX)
- sync = 1;
-
- if (sync) {
- sigset_t oldmask;
- struct rpc_clnt *clnt = NFS_CLIENT(inode);
- dprintk("NFS: %4d schedule_write_request (sync)\n",
- task->tk_pid);
- /* Page is already locked */
- rpc_clnt_sigmask(clnt, &oldmask);
- nfs_wback_begin(task);
- rpc_execute(task);
- rpc_clnt_sigunmask(clnt, &oldmask);
- } else {
- dprintk("NFS: %4d schedule_write_request (async)\n",
- task->tk_pid);
- task->tk_flags |= RPC_TASK_ASYNC;
- task->tk_timeout = NFS_WRITEBACK_DELAY;
- rpc_sleep_on(&write_queue, task, nfs_wback_begin, NULL);
+ if (!NFS_WBACK_BUSY(req))
+ return 0;
+ req->wb_count++;
+ retval = nfs_wait_event(clnt, req->wb_wait, !NFS_WBACK_BUSY(req));
+ nfs_release_request(req);
+ return retval;
+}
+
+/*
+ * Wait for all requests in the given byte range of a file to complete.
+ *
+ * Interruptible by signals only if mounted with intr flag.
+ */
+static int
+nfs_wait_on_requests(struct inode *inode, struct file *file, unsigned long start, unsigned int count)
+{
+ struct list_head *p, *head;
+ unsigned long idx_start, idx_end;
+ unsigned int pages = 0;
+ int error;
+
+ idx_start = start >> PAGE_CACHE_SHIFT;
+ if (count == 0)
+ idx_end = ~0;
+ else {
+ unsigned long idx_count = count >> PAGE_CACHE_SHIFT;
+ idx_end = idx_start + idx_count;
}
+ spin_lock(&nfs_wreq_lock);
+ head = &inode->u.nfs_i.writeback;
+ p = head->next;
+ while (p != head) {
+ unsigned long pg_idx;
+ struct nfs_page *req = nfs_inode_wb_entry(p);
- return sync;
+ p = p->next;
+
+ if (file && req->wb_file != file)
+ continue;
+
+ pg_idx = page_index(req->wb_page);
+ if (pg_idx < idx_start || pg_idx > idx_end)
+ continue;
+
+ if (!NFS_WBACK_BUSY(req))
+ continue;
+ req->wb_count++;
+ spin_unlock(&nfs_wreq_lock);
+ error = nfs_wait_on_request(req);
+ nfs_release_request(req);
+ if (error < 0)
+ return error;
+ spin_lock(&nfs_wreq_lock);
+ p = head->next;
+ pages++;
+ }
+ spin_unlock(&nfs_wreq_lock);
+ return pages;
}
/*
- * Wait for request to complete.
+ * Scan a request list and move any timed-out requests to the
+ * destination list.
*/
static int
-wait_on_write_request(struct nfs_wreq *req)
+nfs_scan_list_timeout(struct list_head *head, struct list_head *dst, struct inode *inode)
{
- struct file *file = req->wb_file;
- struct dentry *dentry = file->f_dentry;
- struct inode *inode = dentry->d_inode;
- struct rpc_clnt *clnt = NFS_CLIENT(inode);
- DECLARE_WAITQUEUE(wait, current);
- sigset_t oldmask;
- int retval;
+ struct list_head *p;
+ struct nfs_page *req;
+ int pages = 0;
+
+ p = head->next;
+ while (p != head) {
+ req = nfs_list_entry(p);
+ p = p->next;
+ if (time_after(req->wb_timeout, jiffies)) {
+ if (time_after(NFS_NEXTSCAN(inode), req->wb_timeout))
+ NFS_NEXTSCAN(inode) = req->wb_timeout;
+ continue;
+ }
+ if (!nfs_lock_request(req))
+ continue;
+ nfs_list_remove_request(req);
+ nfs_list_add_request(req, dst);
+ pages++;
+ }
+ return pages;
+}
+
+static int
+nfs_scan_dirty_timeout(struct inode *inode, struct list_head *dst)
+{
+ int pages;
+ spin_lock(&nfs_wreq_lock);
+ pages = nfs_scan_list_timeout(&inode->u.nfs_i.dirty, dst, inode);
+ inode->u.nfs_i.ndirty -= pages;
+ if ((inode->u.nfs_i.ndirty == 0) != list_empty(&inode->u.nfs_i.dirty))
+ printk(KERN_ERR "NFS: desynchronized value of nfs_i.ndirty.\n");
+ spin_unlock(&nfs_wreq_lock);
+ return pages;
+}
- /* Make sure it's started.. */
- if (!WB_INPROGRESS(req))
- rpc_wake_up_task(&req->wb_task);
+#ifdef CONFIG_NFS_V3
+static int
+nfs_scan_commit_timeout(struct inode *inode, struct list_head *dst)
+{
+ int pages;
+ spin_lock(&nfs_wreq_lock);
+ pages = nfs_scan_list_timeout(&inode->u.nfs_i.commit, dst, inode);
+ inode->u.nfs_i.ncommit -= pages;
+ if ((inode->u.nfs_i.ncommit == 0) != list_empty(&inode->u.nfs_i.commit))
+ printk(KERN_ERR "NFS: desynchronized value of nfs_i.ncommit.\n");
+ spin_unlock(&nfs_wreq_lock);
+ return pages;
+}
+#endif
+
+static int
+nfs_scan_list(struct list_head *src, struct list_head *dst, struct file *file, unsigned long start, unsigned int count)
+{
+ struct list_head *p;
+ struct nfs_page *req;
+ unsigned long idx_start, idx_end;
+ int pages;
+
+ pages = 0;
+ idx_start = start >> PAGE_CACHE_SHIFT;
+ if (count == 0)
+ idx_end = ~0;
+ else
+ idx_end = idx_start + (count >> PAGE_CACHE_SHIFT);
+ p = src->next;
+ while (p != src) {
+ unsigned long pg_idx;
+
+ req = nfs_list_entry(p);
+ p = p->next;
+
+ if (file && req->wb_file != file)
+ continue;
+
+ pg_idx = page_index(req->wb_page);
+ if (pg_idx < idx_start || pg_idx > idx_end)
+ continue;
+
+ if (!nfs_lock_request(req))
+ continue;
+ nfs_list_remove_request(req);
+ nfs_list_add_request(req, dst);
+ pages++;
+ }
+ return pages;
+}
+
+static int
+nfs_scan_dirty(struct inode *inode, struct list_head *dst, struct file *file, unsigned long start, unsigned int count)
+{
+ int pages;
+ spin_lock(&nfs_wreq_lock);
+ pages = nfs_scan_list(&inode->u.nfs_i.dirty, dst, file, start, count);
+ inode->u.nfs_i.ndirty -= pages;
+ if ((inode->u.nfs_i.ndirty == 0) != list_empty(&inode->u.nfs_i.dirty))
+ printk(KERN_ERR "NFS: desynchronized value of nfs_i.ndirty.\n");
+ spin_unlock(&nfs_wreq_lock);
+ return pages;
+}
+
+#ifdef CONFIG_NFS_V3
+static int
+nfs_scan_commit(struct inode *inode, struct list_head *dst, struct file *file, unsigned long start, unsigned int count)
+{
+ int pages;
+ spin_lock(&nfs_wreq_lock);
+ pages = nfs_scan_list(&inode->u.nfs_i.commit, dst, file, start, count);
+ inode->u.nfs_i.ncommit -= pages;
+ if ((inode->u.nfs_i.ncommit == 0) != list_empty(&inode->u.nfs_i.commit))
+ printk(KERN_ERR "NFS: desynchronized value of nfs_i.ncommit.\n");
+ spin_unlock(&nfs_wreq_lock);
+ return pages;
+}
+#endif
+
+
+static int
+coalesce_requests(struct list_head *src, struct list_head *dst, unsigned int maxpages)
+{
+ struct nfs_page *req = NULL;
+ unsigned int pages = 0;
+
+ while (!list_empty(src)) {
+ struct nfs_page *prev = req;
+
+ req = nfs_list_entry(src->next);
+ if (prev) {
+ if (req->wb_file != prev->wb_file)
+ break;
+
+ if (page_index(req->wb_page) != page_index(prev->wb_page)+1)
+ break;
+
+ if (req->wb_offset != 0)
+ break;
+ }
+ nfs_list_remove_request(req);
+ nfs_list_add_request(req, dst);
+ pages++;
+ if (req->wb_offset + req->wb_bytes != PAGE_CACHE_SIZE)
+ break;
+ if (pages >= maxpages)
+ break;
+ }
+ return pages;
+}
+
+/*
+ * Try to update any existing write request, or create one if there is none.
+ * In order to match, the request's credentials must match those of
+ * the calling process.
+ *
+ * Note: Should always be called with the page lock held!
+ */
+static struct nfs_page *
+nfs_update_request(struct file* file, struct page *page,
+ unsigned long offset, unsigned int bytes)
+{
+ struct inode *inode = file->f_dentry->d_inode;
+ struct nfs_page *req, *new = NULL;
+ unsigned long rqend, end;
+
+ end = offset + bytes;
- rpc_clnt_sigmask(clnt, &oldmask);
- add_wait_queue(&req->wb_wait, &wait);
for (;;) {
- set_current_state(TASK_INTERRUPTIBLE);
- retval = 0;
- if (req->wb_flags & NFS_WRITE_COMPLETE)
+ /* Loop over all inode entries and see if we find
+ * a request for the page we wish to update.
+ */
+ spin_lock(&nfs_wreq_lock);
+ req = _nfs_find_request(inode, page);
+ if (req) {
+ if (!nfs_lock_request(req)) {
+ spin_unlock(&nfs_wreq_lock);
+ nfs_wait_on_request(req);
+ nfs_release_request(req);
+ continue;
+ }
+ spin_unlock(&nfs_wreq_lock);
+ if (new)
+ nfs_release_request(new);
break;
- retval = -ERESTARTSYS;
- if (signalled())
+ }
+
+ req = new;
+ if (req) {
+ nfs_lock_request(req);
+ nfs_inode_add_request(inode, req);
+ spin_unlock(&nfs_wreq_lock);
+ nfs_mark_request_dirty(req);
break;
- schedule();
+ }
+ spin_unlock(&nfs_wreq_lock);
+
+ /* Create the request. It's safe to sleep in this call because
+ * we only get here if the page is locked.
+ */
+ new = nfs_create_request(inode, file, page, offset, bytes);
+ if (!new)
+ return ERR_PTR(-ENOMEM);
+ }
+
+ /* We have a request for our page.
+ * If the creds don't match, or the page addresses
+ * don't match, tell the caller to wait on the
+ * conflicting request.
+ */
+ rqend = req->wb_offset + req->wb_bytes;
+ if (req->wb_file != file
+ || req->wb_page != page
+ || !nfs_dirty_request(req)
+ || offset > rqend || end < req->wb_offset) {
+ nfs_unlock_request(req);
+ nfs_release_request(req);
+ return ERR_PTR(-EBUSY);
+ }
+
+ /* Okay, the request matches. Update the region */
+ if (offset < req->wb_offset) {
+ req->wb_offset = offset;
+ req->wb_bytes = rqend - req->wb_offset;
}
- remove_wait_queue(&req->wb_wait, &wait);
- current->state = TASK_RUNNING;
- rpc_clnt_sigunmask(clnt, &oldmask);
- return retval;
+
+ if (end > rqend)
+ req->wb_bytes = end - req->wb_offset;
+
+ nfs_unlock_request(req);
+
+ return req;
}
/*
- * Write a page to the server. This will be used for NFS swapping only
- * (for now), and we currently do this synchronously only.
+ * This is the strategy routine for NFS.
+ * It is called by nfs_updatepage whenever the user wrote up to the end
+ * of a page.
+ *
+ * We always try to submit a set of requests in parallel so that the
+ * server's write code can gather writes. This is mainly for the benefit
+ * of NFSv2.
+ *
+ * We never submit more requests than we think the remote can handle.
+ * For UDP sockets, we make sure we don't exceed the congestion window;
+ * for TCP, we limit the number of requests to 8.
+ *
+ * NFS_STRATEGY_PAGES gives the minimum number of requests for NFSv2 that
+ * should be sent out in one go. This is for the benefit of NFSv2 servers
+ * that perform write gathering.
+ *
+ * FIXME: Different servers may have different sweet spots.
+ * Record the average congestion window in server struct?
*/
-int
-nfs_writepage(struct dentry * dentry, struct page *page)
+#define NFS_STRATEGY_PAGES 8
+static void
+nfs_strategy(struct file *file)
{
- struct inode *inode = dentry->d_inode;
- unsigned long end_index = inode->i_size >> PAGE_CACHE_SHIFT;
- unsigned offset = PAGE_CACHE_SIZE;
- int err;
+ struct inode *inode = file->f_dentry->d_inode;
+ unsigned int dirty, wpages;
+
+ dirty = inode->u.nfs_i.ndirty;
+ wpages = NFS_SERVER(inode)->wsize >> PAGE_CACHE_SHIFT;
+#ifdef CONFIG_NFS_V3
+ if (NFS_PROTO(inode)->version == 2) {
+ if (dirty >= NFS_STRATEGY_PAGES * wpages)
+ nfs_flush_file(inode, file, 0, 0, 0);
+ } else {
+ if (dirty >= wpages)
+ nfs_flush_file(inode, file, 0, 0, 0);
+ }
+#else
+ if (dirty >= NFS_STRATEGY_PAGES * wpages)
+ nfs_flush_file(inode, file, 0, 0, 0);
+#endif
+ /*
+ * If we're running out of requests, flush out everything
+ * in order to reduce memory usage...
+ */
+ if (nfs_nr_requests > MAX_REQUEST_SOFT)
+ nfs_wb_file(inode, file);
+}
- /* easy case */
- if (page->index < end_index)
- goto do_it;
- /* things got complicated... */
- offset = inode->i_size & (PAGE_CACHE_SIZE-1);
- /* OK, are we completely out? */
- if (page->index >= end_index+1 || !offset)
- return -EIO;
-do_it:
- err = nfs_writepage_sync(dentry, inode, page, 0, offset);
- if ( err == offset) return 0;
- return err;
+int
+nfs_flush_incompatible(struct file *file, struct page *page)
+{
+ struct inode *inode = file->f_dentry->d_inode;
+ struct nfs_page *req;
+ int status = 0;
+ /*
+ * Look for a request corresponding to this page. If there
+ * is one, and it belongs to another file, we flush it out
+ * before we try to copy anything into the page. Do this
+ * due to the lack of an ACCESS-type call in NFSv2.
+ * Also do the same if we find a request from an existing
+ * dropped page.
+ */
+ req = nfs_find_request(inode, page);
+ if (req) {
+ if (req->wb_file != file || req->wb_page != page)
+ status = nfs_wb_page(inode, page);
+ nfs_release_request(req);
+ }
+ return (status < 0) ? status : 0;
}
/*
{
struct dentry *dentry = file->f_dentry;
struct inode *inode = dentry->d_inode;
- struct nfs_wreq *req;
+ struct nfs_page *req;
int synchronous = file->f_flags & O_SYNC;
- int retval;
+ int status = 0;
- dprintk("NFS: nfs_updatepage(%s/%s %d@%ld)\n",
+ dprintk("NFS: nfs_updatepage(%s/%s %d@%Ld)\n",
dentry->d_parent->d_name.name, dentry->d_name.name,
- count, (page->index << PAGE_CACHE_SHIFT) +offset);
-
- /*
- * Try to find a corresponding request on the writeback queue.
- * If there is one, we can be sure that this request is not
- * yet being processed, because we hold a lock on the page.
- *
- * If the request was created by us, update it. Otherwise,
- * transfer the page lock and flush out the dirty page now.
- * After returning, generic_file_write will wait on the
- * page and retry the update.
- */
- req = find_write_request(inode, page);
- if (req && req->wb_file == file && update_write_request(req, offset, count))
- goto updated;
+ count, page_offset(page) + offset);
/*
* If wsize is smaller than page size, update and write
if (NFS_SERVER(inode)->wsize < PAGE_SIZE)
return nfs_writepage_sync(dentry, inode, page, offset, count);
- /* Create the write request. */
- req = create_write_request(file, page, offset, count);
- if (!req)
- return -ENOBUFS;
-
/*
- * Ok, there's another user of this page with the new request..
- * The IO completion will then free the page and the dentry.
+ * Try to find an NFS request corresponding to this page
+ * and update it.
+ * If the existing request cannot be updated, we must flush
+ * it out now.
*/
- get_page(page);
-
- /* Schedule request */
- synchronous = schedule_write_request(req, synchronous);
+ do {
+ req = nfs_update_request(file, page, offset, count);
+ status = (IS_ERR(req)) ? PTR_ERR(req) : 0;
+ if (status != -EBUSY)
+ break;
+ /* Request could not be updated. Flush it out and try again */
+ status = nfs_wb_page(inode, page);
+ } while (status >= 0);
+ if (status < 0)
+ goto done;
-updated:
- if (req->wb_bytes == PAGE_SIZE)
+ if (req->wb_bytes == PAGE_CACHE_SIZE)
SetPageUptodate(page);
- retval = 0;
+ status = 0;
if (synchronous) {
- int status = wait_on_write_request(req);
- if (status) {
- nfs_cancel_request(req);
- retval = status;
- } else {
- status = req->wb_status;
- if (status < 0)
- retval = status;
- }
+ int error;
- if (retval < 0)
- ClearPageUptodate(page);
+ error = nfs_sync_file(inode, file, page_offset(page) + offset, count, FLUSH_SYNC|FLUSH_STABLE);
+ if (error < 0 || (error = file->f_error) < 0)
+ status = error;
+ file->f_error = 0;
+ } else {
+ /* If we wrote up to the end of the page, call the
+ * strategy routine so it can send out a bunch
+ * of requests.
+ */
+ if (req->wb_offset == 0 && req->wb_bytes == PAGE_CACHE_SIZE)
+ nfs_strategy(file);
}
-
- free_write_request(req);
- return retval;
+ nfs_release_request(req);
+done:
+ dprintk("NFS: nfs_updatepage returns %d (isize %Ld)\n",
+ status, inode->i_size);
+ if (status < 0)
+ clear_bit(PG_uptodate, &page->flags);
+ return status;
}
/*
- * Cancel a write request. We always mark it cancelled,
- * but if it's already in progress there's no point in
- * calling rpc_exit, and we don't want to overwrite the
- * tk_status field.
- */
+ * Set up the argument/result storage required for the RPC call.
+ */
static void
-nfs_cancel_request(struct nfs_wreq *req)
+nfs_write_rpcsetup(struct list_head *head, struct nfs_write_data *data)
{
- req->wb_flags |= NFS_WRITE_CANCELLED;
- if (!WB_INPROGRESS(req)) {
- rpc_exit(&req->wb_task, 0);
- rpc_wake_up_task(&req->wb_task);
+ struct nfs_page *req;
+ struct iovec *iov;
+ unsigned int count;
+
+ /* Set up the RPC argument and reply structs
+ * NB: take care not to mess about with data->commit et al. */
+
+ iov = data->args.iov;
+ count = 0;
+ while (!list_empty(head)) {
+ struct nfs_page *req = nfs_list_entry(head->next);
+ nfs_list_remove_request(req);
+ nfs_list_add_request(req, &data->pages);
+ iov->iov_base = (void *)(kmap(req->wb_page) + req->wb_offset);
+ iov->iov_len = req->wb_bytes;
+ count += req->wb_bytes;
+ iov++;
+ data->args.nriov++;
}
+ req = nfs_list_entry(data->pages.next);
+ data->file = req->wb_file;
+ data->cred = req->wb_cred;
+ data->args.fh = NFS_FH(req->wb_file->f_dentry);
+ data->args.offset = page_offset(req->wb_page) + req->wb_offset;
+ data->args.count = count;
+ data->res.fattr = &data->fattr;
+ data->res.count = count;
+ data->res.verf = &data->verf;
}
+
/*
- * Cancel all writeback requests, both pending and in progress.
+ * Create an RPC task for the given write request and kick it.
+ * The page must have been locked by the caller.
+ *
+ * It may happen that the page we're passed is not marked dirty.
+ * This is the case if nfs_updatepage detects a conflicting request
+ * that has been written but not committed.
*/
-static void
-nfs_cancel_dirty(struct inode *inode, pid_t pid)
+static int
+nfs_flush_one(struct list_head *head, struct file *file, int how)
{
- struct nfs_wreq *head, *req;
+ struct dentry *dentry = file->f_dentry;
+ struct inode *inode = dentry->d_inode;
+ struct rpc_clnt *clnt = NFS_CLIENT(inode);
+ struct nfs_write_data *data;
+ struct rpc_task *task;
+ struct rpc_message msg;
+ int flags,
+ async = !(how & FLUSH_SYNC),
+ stable = (how & FLUSH_STABLE);
+ sigset_t oldset;
+
+ data = nfs_writedata_alloc();
+ if (!data)
+ goto out_bad;
+ task = &data->task;
+
+ /* Set the initial flags for the task. */
+ flags = (async) ? RPC_TASK_ASYNC : 0;
+
+ /* Set up the argument struct */
+ nfs_write_rpcsetup(head, data);
+ if (stable) {
+ if (!inode->u.nfs_i.ncommit)
+ data->args.stable = NFS_FILE_SYNC;
+ else
+ data->args.stable = NFS_DATA_SYNC;
+ } else
+ data->args.stable = NFS_UNSTABLE;
- req = head = NFS_WRITEBACK(inode);
- while (req != NULL) {
- if (pid == 0 || req->wb_pid == pid)
- nfs_cancel_request(req);
- if ((req = WB_NEXT(req)) == head)
+ /* Finalize the task. */
+ rpc_init_task(task, clnt, nfs_writeback_done, flags);
+ task->tk_calldata = data;
+
+#ifdef CONFIG_NFS_V3
+ msg.rpc_proc = (NFS_PROTO(inode)->version == 3) ? NFS3PROC_WRITE : NFSPROC_WRITE;
+#else
+ msg.rpc_proc = NFSPROC_WRITE;
+#endif
+ msg.rpc_argp = &data->args;
+ msg.rpc_resp = &data->res;
+ msg.rpc_cred = data->cred;
+
+ dprintk("NFS: %4d initiated write call (req %s/%s count %d nriov %d)\n",
+ task->tk_pid,
+ dentry->d_parent->d_name.name,
+ dentry->d_name.name,
+ data->args.count, data->args.nriov);
+
+ rpc_clnt_sigmask(clnt, &oldset);
+ rpc_call_setup(task, &msg, 0);
+ rpc_execute(task);
+ rpc_clnt_sigunmask(clnt, &oldset);
+ return 0;
+ out_bad:
+ while (!list_empty(head)) {
+ struct nfs_page *req = nfs_list_entry(head->next);
+ nfs_list_remove_request(req);
+ nfs_mark_request_dirty(req);
+ nfs_unlock_request(req);
+ }
+ return -ENOMEM;
+}
+
+static int
+nfs_flush_list(struct inode *inode, struct list_head *head, int how)
+{
+ LIST_HEAD(one_request);
+ struct nfs_page *req;
+ int error = 0;
+ unsigned int pages = 0,
+ wpages = NFS_SERVER(inode)->wsize >> PAGE_CACHE_SHIFT;
+
+ while (!list_empty(head)) {
+ pages += coalesce_requests(head, &one_request, wpages);
+ req = nfs_list_entry(one_request.next);
+ error = nfs_flush_one(&one_request, req->wb_file, how);
+ if (error < 0)
break;
}
+ if (error >= 0)
+ return pages;
+
+ while (!list_empty(head)) {
+ req = nfs_list_entry(head->next);
+ nfs_list_remove_request(req);
+ nfs_mark_request_dirty(req);
+ nfs_unlock_request(req);
+ }
+ return error;
}
+
/*
- * If we're waiting on somebody else's request
- * we need to increment the counter during the
- * wait so that the request doesn't disappear
- * from under us during the wait..
+ * This function is called when the WRITE call is complete.
*/
-static int FASTCALL(wait_on_other_req(struct nfs_wreq *));
-static int wait_on_other_req(struct nfs_wreq *req)
+static void
+nfs_writeback_done(struct rpc_task *task)
{
- int retval;
- req->wb_count++;
- retval = wait_on_write_request(req);
- free_write_request(req);
- return retval;
-}
+ struct nfs_write_data *data = (struct nfs_write_data *) task->tk_calldata;
+ struct nfs_writeargs *argp = &data->args;
+ struct nfs_writeres *resp = &data->res;
+ struct dentry *dentry = data->file->f_dentry;
+ struct inode *inode = dentry->d_inode;
+ struct nfs_page *req;
+
+ dprintk("NFS: %4d nfs_writeback_done (status %d)\n",
+ task->tk_pid, task->tk_status);
+
+	/* We can't handle that yet, but we check for it nevertheless */
+ if (resp->count < argp->count && task->tk_status >= 0) {
+ static unsigned long complain = 0;
+ if (time_before(complain, jiffies)) {
+ printk(KERN_WARNING
+ "NFS: Server wrote less than requested.\n");
+ complain = jiffies + 300 * HZ;
+ }
+ /* Can't do anything about it right now except throw
+ * an error. */
+ task->tk_status = -EIO;
+ }
+#ifdef CONFIG_NFS_V3
+ if (resp->verf->committed < argp->stable && task->tk_status >= 0) {
+ /* We tried a write call, but the server did not
+ * commit data to stable storage even though we
+ * requested it.
+ */
+ static unsigned long complain = 0;
+
+ if (time_before(complain, jiffies)) {
+ printk(KERN_NOTICE "NFS: faulty NFSv3 server %s:"
+ " (committed = %d) != (stable = %d)\n",
+ NFS_SERVER(inode)->hostname,
+ resp->verf->committed, argp->stable);
+ complain = jiffies + 300 * HZ;
+ }
+ }
+#endif
-/*
- * This writes back a set of requests according to the condition.
- *
- * If this ever gets much more convoluted, use a fn pointer for
- * the condition..
- */
-#define NFS_WB(inode, cond) { int retval = 0 ; \
- do { \
- struct nfs_wreq *req = NFS_WRITEBACK(inode); \
- struct nfs_wreq *head = req; \
- if (!req) break; \
- for (;;) { \
- if (!(req->wb_flags & NFS_WRITE_COMPLETE)) \
- if (cond) break; \
- req = WB_NEXT(req); \
- if (req == head) goto out; \
- } \
- retval = wait_on_other_req(req); \
- } while (!retval); \
-out: return retval; \
-}
+ /* Update attributes as result of writeback. */
+ if (task->tk_status >= 0)
+ nfs_write_attributes(inode, resp->fattr);
-int
-nfs_wb_all(struct inode *inode)
-{
- NFS_WB(inode, 1);
+ while (!list_empty(&data->pages)) {
+ req = nfs_list_entry(data->pages.next);
+ nfs_list_remove_request(req);
+
+ kunmap(req->wb_page);
+
+ dprintk("NFS: write (%s/%s %d@%Ld)",
+ req->wb_file->f_dentry->d_parent->d_name.name,
+ req->wb_file->f_dentry->d_name.name,
+ req->wb_bytes,
+ page_offset(req->wb_page) + req->wb_offset);
+
+ if (task->tk_status < 0) {
+ req->wb_file->f_error = task->tk_status;
+ nfs_inode_remove_request(req);
+ dprintk(", error = %d\n", task->tk_status);
+ goto next;
+ }
+
+#ifdef CONFIG_NFS_V3
+ if (resp->verf->committed != NFS_UNSTABLE) {
+ nfs_inode_remove_request(req);
+ dprintk(" OK\n");
+ goto next;
+ }
+ memcpy(&req->wb_verf, resp->verf, sizeof(req->wb_verf));
+ req->wb_timeout = jiffies + NFS_COMMIT_DELAY;
+ nfs_mark_request_commit(req);
+ dprintk(" marked for commit\n");
+#else
+ nfs_inode_remove_request(req);
+#endif
+ next:
+ nfs_unlock_request(req);
+ }
+ nfs_writedata_release(task);
}
+
+#ifdef CONFIG_NFS_V3
/*
- * Write back all requests on one page - we do this before reading it.
+ * Set up the argument/result storage required for the RPC call.
*/
-int
-nfs_wb_page(struct inode *inode, struct page *page)
+static void
+nfs_commit_rpcsetup(struct list_head *head, struct nfs_write_data *data)
{
- NFS_WB(inode, req->wb_page == page);
+ struct nfs_page *req;
+ struct dentry *dentry;
+ struct inode *inode;
+ unsigned long start, end, len;
+
+ /* Set up the RPC argument and reply structs
+ * NB: take care not to mess about with data->commit et al. */
+
+ end = 0;
+ start = ~0;
+ req = nfs_list_entry(head->next);
+ data->file = req->wb_file;
+ data->cred = req->wb_cred;
+ dentry = data->file->f_dentry;
+ inode = dentry->d_inode;
+ while (!list_empty(head)) {
+		unsigned long rqstart, rqend;
+		req = nfs_list_entry(head->next);
+ nfs_list_remove_request(req);
+ nfs_list_add_request(req, &data->pages);
+ rqstart = page_offset(req->wb_page) + req->wb_offset;
+ rqend = rqstart + req->wb_bytes;
+ if (rqstart < start)
+ start = rqstart;
+ if (rqend > end)
+ end = rqend;
+ }
+ data->args.fh = NFS_FH(dentry);
+ data->args.offset = start;
+ len = end - start;
+ if (end >= inode->i_size || len > (~((u32)0) >> 1))
+ len = 0;
+ data->res.count = data->args.count = (u32)len;
+ data->res.fattr = &data->fattr;
+ data->res.verf = &data->verf;
}
/*
- * Write back all pending writes from one file descriptor..
+ * Commit dirty pages
*/
-int
-nfs_wb_file(struct inode *inode, struct file *file)
-{
- NFS_WB(inode, req->wb_file == file);
-}
-
-void
-nfs_inval(struct inode *inode)
+static int
+nfs_commit_list(struct list_head *head, int how)
{
- nfs_cancel_dirty(inode,0);
+ struct rpc_message msg;
+ struct file *file;
+ struct rpc_clnt *clnt;
+ struct nfs_write_data *data;
+ struct rpc_task *task;
+ struct nfs_page *req;
+ int flags,
+ async = !(how & FLUSH_SYNC);
+ sigset_t oldset;
+
+ data = nfs_writedata_alloc();
+
+ if (!data)
+ goto out_bad;
+ task = &data->task;
+
+ flags = (async) ? RPC_TASK_ASYNC : 0;
+
+ /* Set up the argument struct */
+ nfs_commit_rpcsetup(head, data);
+ req = nfs_list_entry(data->pages.next);
+ file = req->wb_file;
+ clnt = NFS_CLIENT(file->f_dentry->d_inode);
+
+ rpc_init_task(task, clnt, nfs_commit_done, flags);
+ task->tk_calldata = data;
+
+ msg.rpc_proc = NFS3PROC_COMMIT;
+ msg.rpc_argp = &data->args;
+ msg.rpc_resp = &data->res;
+ msg.rpc_cred = data->cred;
+
+ dprintk("NFS: %4d initiated commit call\n", task->tk_pid);
+ rpc_clnt_sigmask(clnt, &oldset);
+ rpc_call_setup(task, &msg, 0);
+ rpc_execute(task);
+ rpc_clnt_sigunmask(clnt, &oldset);
+ return 0;
+ out_bad:
+ while (!list_empty(head)) {
+ req = nfs_list_entry(head->next);
+ nfs_list_remove_request(req);
+ nfs_mark_request_commit(req);
+ nfs_unlock_request(req);
+ }
+ return -ENOMEM;
}
/*
- * The following procedures make up the writeback finite state machinery:
- *
- * 1. Try to lock the page if not yet locked by us,
- * set up the RPC call info, and pass to the call FSM.
+ * COMMIT call returned
*/
static void
-nfs_wback_begin(struct rpc_task *task)
+nfs_commit_done(struct rpc_task *task)
{
- struct nfs_wreq *req = (struct nfs_wreq *) task->tk_calldata;
- struct page *page = req->wb_page;
- struct file *file = req->wb_file;
- struct dentry *dentry = file->f_dentry;
+ struct nfs_write_data *data = (struct nfs_write_data *)task->tk_calldata;
+ struct nfs_writeres *resp = &data->res;
+ struct nfs_page *req;
+ struct dentry *dentry = data->file->f_dentry;
+ struct inode *inode = dentry->d_inode;
- dprintk("NFS: %4d nfs_wback_begin (%s/%s, status=%d flags=%x)\n",
- task->tk_pid, dentry->d_parent->d_name.name,
- dentry->d_name.name, task->tk_status, req->wb_flags);
+ dprintk("NFS: %4d nfs_commit_done (status %d)\n",
+ task->tk_pid, task->tk_status);
+
+ nfs_refresh_inode(inode, resp->fattr);
+ while (!list_empty(&data->pages)) {
+ req = nfs_list_entry(data->pages.next);
+ nfs_list_remove_request(req);
+
+ dprintk("NFS: commit (%s/%s %d@%ld)",
+ req->wb_file->f_dentry->d_parent->d_name.name,
+ req->wb_file->f_dentry->d_name.name,
+ req->wb_bytes,
+ page_offset(req->wb_page) + req->wb_offset);
+ if (task->tk_status < 0) {
+ req->wb_file->f_error = task->tk_status;
+ nfs_inode_remove_request(req);
+ dprintk(", error = %d\n", task->tk_status);
+ goto next;
+ }
- task->tk_status = 0;
+ /* Okay, COMMIT succeeded, apparently. Check the verifier
+ * returned by the server against all stored verfs. */
+ if (!memcmp(req->wb_verf.verifier, data->verf.verifier, sizeof(data->verf.verifier))) {
+ /* We have a match */
+ nfs_inode_remove_request(req);
+ dprintk(" OK\n");
+ goto next;
+ }
+ /* We have a mismatch. Write the page again */
+ dprintk(" mismatch\n");
+ nfs_mark_request_dirty(req);
+ next:
+ nfs_unlock_request(req);
+ }
+ nfs_writedata_release(task);
+}
+#endif
- /* Setup the task struct for a writeback call */
- req->wb_flags |= NFS_WRITE_INPROGRESS;
- req->wb_args.fh = NFS_FH(dentry);
- req->wb_args.offset = (page->index << PAGE_CACHE_SHIFT) + req->wb_offset;
- req->wb_args.count = req->wb_bytes;
- req->wb_args.buffer = (void *) (page_address(page) + req->wb_offset);
+int nfs_flush_file(struct inode *inode, struct file *file, unsigned long start,
+ unsigned int count, int how)
+{
+ LIST_HEAD(head);
+ int pages,
+ error = 0;
+
+ pages = nfs_scan_dirty(inode, &head, file, start, count);
+ if (pages)
+ error = nfs_flush_list(inode, &head, how);
+ if (error < 0)
+ return error;
+ return pages;
+}
- return;
+int nfs_flush_timeout(struct inode *inode, int how)
+{
+ LIST_HEAD(head);
+ int pages,
+ error = 0;
+
+ pages = nfs_scan_dirty_timeout(inode, &head);
+ if (pages)
+ error = nfs_flush_list(inode, &head, how);
+ if (error < 0)
+ return error;
+ return pages;
}
-/*
- * 2. Collect the result
- */
-static void
-nfs_wback_result(struct rpc_task *task)
+#ifdef CONFIG_NFS_V3
+int nfs_commit_file(struct inode *inode, struct file *file, unsigned long start,
+ unsigned int count, int how)
{
- struct nfs_wreq *req = (struct nfs_wreq *) task->tk_calldata;
- struct file *file = req->wb_file;
- struct page *page = req->wb_page;
- int status = task->tk_status;
- struct dentry *dentry = file->f_dentry;
- struct inode *inode = dentry->d_inode;
+ LIST_HEAD(head);
+ int pages,
+ error = 0;
+
+ pages = nfs_scan_commit(inode, &head, file, start, count);
+ if (pages)
+ error = nfs_commit_list(&head, how);
+ if (error < 0)
+ return error;
+ return pages;
+}
- dprintk("NFS: %4d nfs_wback_result (%s/%s, status=%d, flags=%x)\n",
- task->tk_pid, dentry->d_parent->d_name.name,
- dentry->d_name.name, status, req->wb_flags);
-
- /* Set the WRITE_COMPLETE flag, but leave WRITE_INPROGRESS set */
- req->wb_flags |= NFS_WRITE_COMPLETE;
- req->wb_status = status;
-
- if (status < 0) {
- req->wb_flags |= NFS_WRITE_INVALIDATE;
- file->f_error = status;
- } else if (!WB_CANCELLED(req)) {
- struct nfs_fattr *fattr = &req->wb_fattr;
- /* Update attributes as result of writeback.
- * Beware: when UDP replies arrive out of order, we
- * may end up overwriting a previous, bigger file size.
- *
- * When the file size shrinks we cancel all pending
- * writebacks.
- */
- if (fattr->mtime.seconds >= inode->i_mtime) {
- if (fattr->size < inode->i_size)
- fattr->size = inode->i_size;
-
- /* possible Solaris 2.5 server bug workaround */
- if (inode->i_ino == fattr->fileid) {
- /*
- * We expect these values to change, and
- * don't want to invalidate the caches.
- */
- inode->i_size = fattr->size;
- inode->i_mtime = fattr->mtime.seconds;
- nfs_refresh_inode(inode, fattr);
- }
- else
- printk("nfs_wback_result: inode %ld, got %u?\n",
- inode->i_ino, fattr->fileid);
- }
+int nfs_commit_timeout(struct inode *inode, int how)
+{
+ LIST_HEAD(head);
+ int pages,
+ error = 0;
+
+ pages = nfs_scan_commit_timeout(inode, &head);
+ if (pages) {
+ pages += nfs_scan_commit(inode, &head, NULL, 0, 0);
+ error = nfs_commit_list(&head, how);
}
+ if (error < 0)
+ return error;
+ return pages;
+}
+#endif
- rpc_release_task(task);
+int nfs_sync_file(struct inode *inode, struct file *file, unsigned long start,
+ unsigned int count, int how)
+{
+ int error,
+ wait;
- if (WB_INVALIDATE(req))
- ClearPageUptodate(page);
+ wait = how & FLUSH_WAIT;
+ how &= ~FLUSH_WAIT;
- kunmap(page);
- __free_page(page);
- remove_write_request(&NFS_WRITEBACK(inode), req);
- nr_write_requests--;
- fput(req->wb_file);
+ if (!inode && file)
+ inode = file->f_dentry->d_inode;
- wake_up(&req->wb_wait);
- free_write_request(req);
+ do {
+ error = 0;
+ if (wait)
+ error = nfs_wait_on_requests(inode, file, start, count);
+ if (error == 0)
+ error = nfs_flush_file(inode, file, start, count, how);
+#ifdef CONFIG_NFS_V3
+ if (error == 0)
+ error = nfs_commit_file(inode, file, start, count, how);
+#endif
+ } while (error > 0);
+ return error;
+}
+
+int nfs_init_nfspagecache(void)
+{
+ nfs_page_cachep = kmem_cache_create("nfs_page",
+ sizeof(struct nfs_page),
+ 0, SLAB_HWCACHE_ALIGN,
+ NULL, NULL);
+ if (nfs_page_cachep == NULL)
+ return -ENOMEM;
+
+ nfs_wdata_cachep = kmem_cache_create("nfs_write_data",
+ sizeof(struct nfs_write_data),
+ 0, SLAB_HWCACHE_ALIGN,
+ NULL, NULL);
+ if (nfs_wdata_cachep == NULL)
+ return -ENOMEM;
+
+ return 0;
+}
+
+void nfs_destroy_nfspagecache(void)
+{
+ if (kmem_cache_destroy(nfs_page_cachep))
+ printk(KERN_INFO "nfs_page: not all structures were freed\n");
+ if (kmem_cache_destroy(nfs_wdata_cachep))
+ printk(KERN_INFO "nfs_write_data: not all structures were freed\n");
}
+
* fh must be initialized before calling fh_compose
*/
fh_init(&fh, maxsize);
- err = fh_compose(&fh, exp, dentry);
+ if (fh_compose(&fh, exp, dentry))
+ err = -EINVAL;
+ else
+ err = 0;
memcpy(f, &fh.fh_handle, sizeof(struct knfsd_fh));
fh_put(&fh);
return err;
if (fh_compose(&fh, exp, dchild) != 0 || !dchild->d_inode)
goto noexec;
p = encode_post_op_attr(cd->rqstp, p, fh.fh_dentry);
+ *p++ = xdr_one; /* yes, a file handle follows */
p = encode_fh(p, &fh);
fh_put(&fh);
}
static int nfsctl_unexport(struct nfsctl_export *data);
static int nfsctl_getfh(struct nfsctl_fhparm *, __u8 *);
static int nfsctl_getfd(struct nfsctl_fdparm *, __u8 *);
-#ifdef notyet
static int nfsctl_getfs(struct nfsctl_fsparm *, struct knfsd_fh *);
+#ifdef notyet
static int nfsctl_ugidupdate(struct nfsctl_ugidmap *data);
#endif
}
#endif
-#ifdef notyet
static inline int
nfsctl_getfs(struct nfsctl_fsparm *data, struct knfsd_fh *res)
{
else
err = exp_rootfh(clp, 0, 0, data->gd_path, res, data->gd_maxlen);
exp_unlock();
-
+	res->fh_size = NFS_FHSIZE;	/* HACK until lockd handles variable-length handles */
return err;
}
-#endif
static inline int
nfsctl_getfd(struct nfsctl_fdparm *data, __u8 *res)
#define handle_sys_nfsservctl sys_nfsservctl
#endif
+static struct {
+ int argsize, respsize;
+} sizes[] = {
+ /* NFSCTL_SVC */ { sizeof(struct nfsctl_svc), 0 },
+ /* NFSCTL_ADDCLIENT */ { sizeof(struct nfsctl_client), 0},
+ /* NFSCTL_DELCLIENT */ { sizeof(struct nfsctl_client), 0},
+ /* NFSCTL_EXPORT */ { sizeof(struct nfsctl_export), 0},
+ /* NFSCTL_UNEXPORT */ { sizeof(struct nfsctl_export), 0},
+ /* NFSCTL_UGIDUPDATE */ { sizeof(struct nfsctl_uidmap), 0},
+ /* NFSCTL_GETFH */ { sizeof(struct nfsctl_fhparm), NFS_FHSIZE},
+ /* NFSCTL_GETFD */ { sizeof(struct nfsctl_fdparm), NFS_FHSIZE},
+ /* NFSCTL_GETFS */ { sizeof(struct nfsctl_fsparm), sizeof(struct knfsd_fh)},
+};
+#define CMD_MAX (sizeof(sizes)/sizeof(sizes[0])-1)
+
int
asmlinkage handle_sys_nfsservctl(int cmd, void *opaque_argp, void *opaque_resp)
{
struct nfsctl_arg * arg = NULL;
union nfsctl_res * res = NULL;
int err;
+ int argsize, respsize;
MOD_INC_USE_COUNT;
lock_kernel ();
if (!capable(CAP_SYS_ADMIN)) {
goto done;
}
+ err = -EINVAL;
+	if (cmd < 0 || cmd > CMD_MAX)
+ goto done;
err = -EFAULT;
- if (!access_ok(VERIFY_READ, argp, sizeof(*argp))
- || (resp && !access_ok(VERIFY_WRITE, resp, sizeof(*resp)))) {
+ argsize = sizes[cmd].argsize + sizeof(int); /* int for ca_version */
+ respsize = sizes[cmd].respsize; /* maximum */
+ if (!access_ok(VERIFY_READ, argp, argsize)
+ || (resp && !access_ok(VERIFY_WRITE, resp, respsize))) {
goto done;
}
-
err = -ENOMEM; /* ??? */
if (!(arg = kmalloc(sizeof(*arg), GFP_USER)) ||
(resp && !(res = kmalloc(sizeof(*res), GFP_USER)))) {
}
err = -EINVAL;
- copy_from_user(arg, argp, sizeof(*argp));
+ copy_from_user(arg, argp, argsize);
if (arg->ca_version != NFSCTL_VERSION) {
printk(KERN_WARNING "nfsd: incompatible version in syscall.\n");
goto done;
case NFSCTL_GETFD:
err = nfsctl_getfd(&arg->ca_getfd, res->cr_getfh);
break;
-#ifdef notyet
case NFSCTL_GETFS:
err = nfsctl_getfs(&arg->ca_getfs, &res->cr_getfs);
-#endif
+ respsize = res->cr_getfs.fh_size+sizeof(int);
+ break;
default:
err = -EINVAL;
}
- if (!err && resp)
- copy_to_user(resp, res, sizeof(*resp));
+ if (!err && resp && respsize)
+ copy_to_user(resp, res, respsize);
done:
if (arg)
goto done;
fh_lock(dirfhp);
dchild = lookup_one(argp->name, dget(dirfhp->fh_dentry));
- nfserr = nfserrno(PTR_ERR(dchild));
- if (IS_ERR(dchild))
+ if (IS_ERR(dchild)) {
+ nfserr = nfserrno(PTR_ERR(dchild));
goto out_unlock;
+ }
fh_init(newfhp, NFS_FHSIZE);
nfserr = fh_compose(newfhp, dirfhp->fh_export, dchild);
if (!nfserr && !dchild->d_inode)
static void nfsd(struct svc_rqst *rqstp);
struct timeval nfssvc_boot = { 0, 0 };
static struct svc_serv *nfsd_serv = NULL;
+static int nfsd_busy = 0;
+static unsigned long nfsd_last_call;
struct nfsd_list {
struct list_head list;
return error;
}
+static inline void
+update_thread_usage(int busy_threads)
+{
+ unsigned long prev_call;
+ unsigned long diff;
+ int decile;
+
+ prev_call = nfsd_last_call;
+ nfsd_last_call = jiffies;
+	decile = busy_threads * 10 / nfsdstats.th_cnt;
+	if (decile > 0 && decile <= 10) {
+ diff = nfsd_last_call - prev_call;
+ nfsdstats.th_usage[decile-1] += diff;
+ if (decile == 10)
+ nfsdstats.th_fullcnt++;
+ }
+}
+
/*
* This is the NFS server kernel thread
*/
sprintf(current->comm, "nfsd");
current->fs->umask = 0;
+ nfsdstats.th_cnt++;
/* Let svc_process check client's authentication. */
rqstp->rq_auth = 1;
;
if (err < 0)
break;
+ update_thread_usage(nfsd_busy);
+ nfsd_busy++;
/* Lock the export hash tables for reading. */
exp_readlock();
/* Unlock export hash tables */
exp_unlock();
+ update_thread_usage(nfsd_busy);
+ nfsd_busy--;
}
if (err != -EINTR) {
nfsd_racache_shutdown(); /* release read-ahead cache */
}
list_del(&me.list);
+	nfsdstats.th_cnt--;
/* Release the thread */
svc_exit_thread(rqstp);
* Format:
* rc <hits> <misses> <nocache>
* Statistsics for the reply cache
+ * fh <stale> <total-lookups> <anonlookups> <dir-not-in-dcache> <nondir-not-in-dcache>
+ * statistics for filehandle lookup
+ * io <bytes-read> <bytes-written>
+ * statistics for IO throughput
+ * th <threads> <fullcnt> <10%-20%> <20%-30%> ... <90%-100%> <100%>
+ * time (milliseconds) during which nfsd thread usage was above the
+ * given thresholds, and the number of times all threads were in use
+ * ra cache-size <10% <20% <30% ... <100% not-found
+ * number of times a read-ahead entry was found that deep in
+ * the cache.
* plus generic RPC stats (see net/sunrpc/stats.c)
*
* Copyright (C) 1995, 1996, 1997 Olaf Kirch <okir@monad.swb.de>
int *eof, void *data)
{
int len;
+ int i;
- len = sprintf(buffer, "rc %d %d %d %d %d %d %d %d\n",
- nfsdstats.rchits,
- nfsdstats.rcmisses,
- nfsdstats.rcnocache,
- nfsdstats.fh_stale,
- nfsdstats.fh_lookup,
- nfsdstats.fh_anon,
- nfsdstats.fh_nocache_dir,
- nfsdstats.fh_nocache_nondir);
+ len = sprintf(buffer, "rc %u %u %u\nfh %u %u %u %u %u\nio %u %u\n",
+ nfsdstats.rchits,
+ nfsdstats.rcmisses,
+ nfsdstats.rcnocache,
+ nfsdstats.fh_stale,
+ nfsdstats.fh_lookup,
+ nfsdstats.fh_anon,
+ nfsdstats.fh_nocache_dir,
+ nfsdstats.fh_nocache_nondir,
+ nfsdstats.io_read,
+ nfsdstats.io_write);
+ /* thread usage: */
+ len += sprintf(buffer+len, "th %u %u", nfsdstats.th_cnt, nfsdstats.th_fullcnt);
+ for (i=0; i<10; i++)
+ len += sprintf(buffer+len, " %u", nfsdstats.th_usage[i]);
+ /* newline and ra-cache */
+ len += sprintf(buffer+len, "\nra %u", nfsdstats.ra_size);
+ for (i=0; i<11; i++)
+ len += sprintf(buffer+len, " %u", nfsdstats.ra_depth[i]);
+ len += sprintf(buffer+len, "\n");
+
/* Assume we haven't hit EOF yet. Will be set by svc_proc_read. */
*eof = 0;
*/
if (len <= offset) {
len = svc_proc_read(buffer, start, offset - len, count,
- eof, data);
+ eof, data);
return len;
}
if (len < count) {
len += svc_proc_read(buffer + len, start, 0, count - len,
- eof, data);
+ eof, data);
}
if (offset >= len) {
nfsd_get_raparms(dev_t dev, ino_t ino)
{
struct raparms *ra, **rap, **frap = NULL;
-
+ int depth = 0;
+
for (rap = &raparm_cache; (ra = *rap); rap = &ra->p_next) {
if (ra->p_ino == ino && ra->p_dev == dev)
goto found;
+ depth++;
if (ra->p_count == 0)
frap = rap;
}
+	depth = nfsdstats.ra_size;	/* miss: maps to ra_depth[10] */
if (!frap)
return NULL;
rap = frap;
raparm_cache = ra;
}
ra->p_count++;
+ nfsdstats.ra_depth[depth*10/nfsdstats.ra_size]++;
return ra;
}
oldfs = get_fs(); set_fs(KERNEL_DS);
err = file.f_op->read(&file, buf, *count, &file.f_pos);
set_fs(oldfs);
+ nfsdstats.io_read += *count;
/* Write back readahead params */
if (ra != NULL) {
#else
err = file.f_op->write(&file, buf, cnt, &file.f_pos);
#endif
+ nfsdstats.io_write += cnt;
set_fs(oldfs);
/* clear setuid/setgid flag after write */
"nfsd: Could not allocate memory read-ahead cache.\n");
return -ENOMEM;
}
+ nfsdstats.ra_size = cache_size;
return 0;
}
{
return block_read_full_page(page,ntfs_get_block);
}
-static int ntfs_prepare_write(struct page *page, unsigned from, unsigned to)
+static int ntfs_prepare_write(struct file *file, struct page *page, unsigned from, unsigned to)
{
return cont_prepare_write(page,from,to,ntfs_get_block,
&((struct inode*)page->mapping->host)->u.ntfs_i.mmu_private);
#include <asm/uaccess.h>
-/*
- * Define this if you want SunOS compatibility wrt braindead
- * select behaviour on FIFO's.
- */
-#ifdef __sparc__
-#define FIFO_SUNOS_BRAINDAMAGE
-#else
-#undef FIFO_SUNOS_BRAINDAMAGE
-#endif
-
/*
* We use a start+len construction, which provides full use of the
* allocated memory.
*/
/* Drop the inode semaphore and wait for a pipe event, atomically */
-static void pipe_wait(struct inode * inode)
+void pipe_wait(struct inode * inode)
{
DECLARE_WAITQUEUE(wait, current);
current->state = TASK_INTERRUPTIBLE;
/* Reading only -- no need for aquiring the semaphore. */
mask = POLLIN | POLLRDNORM;
- if (PIPE_EMPTY(*inode))
+ if (PIPE_FREE(*inode) >= PIPE_BUF)
mask = POLLOUT | POLLWRNORM;
- if (!PIPE_WRITERS(*inode))
+ if (!PIPE_WRITERS(*inode) && filp->f_version != PIPE_WCOUNTER(*inode))
mask |= POLLHUP;
if (!PIPE_READERS(*inode))
mask |= POLLERR;
return mask;
}
-#ifdef FIFO_SUNOS_BRAINDAMAGE
-/*
- * Argh! Why does SunOS have to have different select() behaviour
- * for pipes and FIFOs? Hate, hate, hate! SunOS lacks POLLHUP.
- */
-static unsigned int
-fifo_poll(struct file *filp, poll_table *wait)
-{
- unsigned int mask;
- struct inode *inode = filp->f_dentry->d_inode;
-
- poll_wait(filp, PIPE_WAIT(*inode), wait);
-
- /* Reading only -- no need for aquiring the semaphore. */
- mask = POLLIN | POLLRDNORM;
- if (PIPE_EMPTY(*inode))
- mask = POLLOUT | POLLWRNORM;
- if (!PIPE_READERS(*inode))
- mask |= POLLERR;
-
- return mask;
-}
-#else
-
+/* FIXME: most Unices do not set POLLERR for fifos */
#define fifo_poll pipe_poll
-#endif /* FIFO_SUNOS_BRAINDAMAGE */
-
-/*
- * The 'connect_xxx()' functions are needed for named pipes when
- * the open() code hasn't guaranteed a connection (O_NONBLOCK),
- * and we need to act differently until we do get a writer..
- */
-static ssize_t
-connect_read(struct file *filp, char *buf, size_t count, loff_t *ppos)
-{
- struct inode *inode = filp->f_dentry->d_inode;
-
- /* Reading only -- no need for aquiring the semaphore. */
- if (PIPE_EMPTY(*inode) && !PIPE_WRITERS(*inode))
- return 0;
-
- filp->f_op = &read_fifo_fops;
- return pipe_read(filp, buf, count, ppos);
-}
-
-static unsigned int
-connect_poll(struct file *filp, poll_table *wait)
-{
- struct inode *inode = filp->f_dentry->d_inode;
- unsigned int mask = 0;
-
- poll_wait(filp, PIPE_WAIT(*inode), wait);
-
- /* Reading only -- no need for aquiring the semaphore. */
- if (!PIPE_EMPTY(*inode)) {
- filp->f_op = &read_fifo_fops;
- mask = POLLIN | POLLRDNORM;
- } else if (PIPE_WRITERS(*inode)) {
- filp->f_op = &read_fifo_fops;
- mask = POLLOUT | POLLWRNORM;
- }
-
- return mask;
-}
-
static int
pipe_release(struct inode *inode, int decr, int decw)
{
* The file_operations structs are not static because they
* are also used in linux/fs/fifo.c to do operations on FIFOs.
*/
-struct file_operations connecting_fifo_fops = {
- llseek: pipe_lseek,
- read: connect_read,
- write: bad_pipe_w,
- poll: connect_poll,
- ioctl: pipe_ioctl,
- open: pipe_read_open,
- release: pipe_read_release,
-};
-
struct file_operations read_fifo_fops = {
llseek: pipe_lseek,
read: pipe_read,
release: pipe_rdwr_release,
};
-static struct inode * get_pipe_inode(void)
+struct inode* pipe_new(struct inode* inode)
{
- struct inode *inode = get_empty_inode();
unsigned long page;
- if (!inode)
- goto fail_inode;
-
page = __get_free_page(GFP_USER);
if (!page)
- goto fail_iput;
+ return NULL;
inode->i_pipe = kmalloc(sizeof(struct pipe_inode_info), GFP_KERNEL);
if (!inode->i_pipe)
goto fail_page;
- inode->i_fop = &rdwr_pipe_fops;
-
init_waitqueue_head(PIPE_WAIT(*inode));
- PIPE_BASE(*inode) = (char *) page;
+ PIPE_BASE(*inode) = (char*) page;
PIPE_START(*inode) = PIPE_LEN(*inode) = 0;
- PIPE_READERS(*inode) = PIPE_WRITERS(*inode) = 1;
+ PIPE_READERS(*inode) = PIPE_WRITERS(*inode) = 0;
PIPE_WAITING_READERS(*inode) = PIPE_WAITING_WRITERS(*inode) = 0;
+ PIPE_RCOUNTER(*inode) = PIPE_WCOUNTER(*inode) = 1;
+
+ return inode;
+fail_page:
+ free_page(page);
+ return NULL;
+}
+
+static struct inode * get_pipe_inode(void)
+{
+ struct inode *inode = get_empty_inode();
+
+ if (!inode)
+ goto fail_inode;
+
+ if(!pipe_new(inode))
+ goto fail_iput;
+ PIPE_READERS(*inode) = PIPE_WRITERS(*inode) = 1;
+ inode->i_fop = &rdwr_pipe_fops;
/*
* Mark the inode dirty from the very beginning,
inode->i_blksize = PAGE_SIZE;
return inode;
-fail_page:
- free_page(page);
fail_iput:
iput(inode);
fail_inode:
f1->f_flags = O_RDONLY;
f1->f_op = &read_pipe_fops;
f1->f_mode = 1;
+ f1->f_version = 0;
/* write file */
f2->f_flags = O_WRONLY;
f2->f_op = &write_pipe_fops;
f2->f_mode = 2;
+ f2->f_version = 0;
fd_install(i, f1);
fd_install(j, f2);
++*pages;
if (pte_dirty(page))
++*dirty;
- if (pte_pagenr(page) >= max_mapnr)
+ if ((pte_pagenr(page) >= max_mapnr) ||
+ PageReserved(pte_pagenr(page) + mem_map))
continue;
if (page_count(pte_page(page)) > 1)
++*shared;
{
return block_read_full_page(page,qnx4_get_block);
}
-static int qnx4_prepare_write(struct page *page, unsigned from, unsigned to)
+static int qnx4_prepare_write(struct file *file, struct page *page, unsigned from, unsigned to)
{
return cont_prepare_write(page,from,to,qnx4_get_block,
&((struct inode*)page->mapping->host)->u.qnx4_i.mmu_private);
* If the writer ends up delaying the write, the writer needs to
* increment the page use counts until he is done with the page.
*/
-static int smb_prepare_write(struct page *page, unsigned offset, unsigned to)
+static int smb_prepare_write(struct file *file, struct page *page, unsigned offset, unsigned to)
{
kmap(page);
return 0;
{
return block_read_full_page(page,sysv_get_block);
}
-static int sysv_prepare_write(struct page *page, unsigned from, unsigned to)
+static int sysv_prepare_write(struct file *file, struct page *page, unsigned from, unsigned to)
{
return block_prepare_write(page,from,to,sysv_get_block);
}
return 0;
}
-static int udf_adinicb_prepare_write(struct page *page, unsigned offset, unsigned to)
+static int udf_adinicb_prepare_write(struct file *file, struct page *page, unsigned offset, unsigned to)
{
kmap(page);
return 0;
return block_read_full_page(page, udf_get_block);
}
-static int udf_prepare_write(struct page *page, unsigned from, unsigned to)
+static int udf_prepare_write(struct file *file, struct page *page, unsigned from, unsigned to)
{
return block_prepare_write(page, from, to, udf_get_block);
}
{
return block_read_full_page(page,ufs_getfrag_block);
}
-static int ufs_prepare_write(struct page *page, unsigned from, unsigned to)
+static int ufs_prepare_write(struct file *file, struct page *page, unsigned from, unsigned to)
{
return block_prepare_write(page,from,to,ufs_getfrag_block);
}
#define FPCR_DYN_PLUS (0x3UL << FPCR_DYN_SHIFT) /* towards +INF */
#define FPCR_DYN_MASK (0x3UL << FPCR_DYN_SHIFT)
-#define FPCR_MASK 0xfffe000000000000
+#define FPCR_MASK 0xffff800000000000
/*
* IEEE trap enables are implemented in software. These per-thread
IEEE_STATUS_OVF | IEEE_STATUS_UNF | \
IEEE_STATUS_INE | IEEE_STATUS_DNO)
-#define IEEE_SW_MASK (IEEE_TRAP_ENABLE_MASK | IEEE_STATUS_MASK | IEEE_MAP_MASK)
+#define IEEE_SW_MASK (IEEE_TRAP_ENABLE_MASK | \
+ IEEE_STATUS_MASK | IEEE_MAP_MASK)
#define IEEE_CURRENT_RM_SHIFT 32
#define IEEE_CURRENT_RM_MASK (3UL<<IEEE_CURRENT_RM_SHIFT)
/*
* Convert the software IEEE trap enable and status bits into the
- * hardware fpcr format.
+ * hardware fpcr format.
+ *
+ * Digital Unix engineers receive my thanks for not defining the
+ * software bits identical to the hardware bits. The chip designers
+ * receive my thanks for making all the not-implemented fpcr bits
+ * RAZ forcing us to use system calls to read/write this value.
*/
static inline unsigned long
{
unsigned long fp;
fp = (sw & IEEE_STATUS_MASK) << 35;
- fp |= sw & IEEE_STATUS_MASK ? FPCR_SUM : 0;
+ fp |= (sw & IEEE_MAP_DMZ) << 36;
+ fp |= (sw & IEEE_STATUS_MASK ? FPCR_SUM : 0);
fp |= (~sw & (IEEE_TRAP_ENABLE_INV
| IEEE_TRAP_ENABLE_DZE
| IEEE_TRAP_ENABLE_OVF)) << 48;
fp |= (~sw & (IEEE_TRAP_ENABLE_UNF | IEEE_TRAP_ENABLE_INE)) << 57;
+ fp |= (sw & IEEE_MAP_UMZ ? FPCR_UNDZ | FPCR_UNFD : 0);
fp |= (~sw & IEEE_TRAP_ENABLE_DNO) << 41;
return fp;
}
{
unsigned long sw;
sw = (fp >> 35) & IEEE_STATUS_MASK;
+ sw |= (fp >> 36) & IEEE_MAP_DMZ;
sw |= (~fp >> 48) & (IEEE_TRAP_ENABLE_INV
| IEEE_TRAP_ENABLE_DZE
| IEEE_TRAP_ENABLE_OVF);
sw |= (~fp >> 57) & (IEEE_TRAP_ENABLE_UNF | IEEE_TRAP_ENABLE_INE);
+ sw |= (fp >> 47) & IEEE_MAP_UMZ;
sw |= (~fp >> 41) & IEEE_TRAP_ENABLE_DNO;
return sw;
}
never generates arithmetic faults and (b) call_pal instructions
are implied trap barriers. */
-static inline unsigned long rdfpcr(void)
+static inline unsigned long
+rdfpcr(void)
{
unsigned long tmp, ret;
+
+#if defined(__alpha_cix__) || defined(__alpha_fix__)
+ __asm__ ("ftoit $f0,%0\n\t"
+ "mf_fpcr $f0\n\t"
+ "ftoit $f0,%1\n\t"
+ "itoft %0,$f0"
+ : "=r"(tmp), "=r"(ret));
+#else
__asm__ ("stt $f0,%0\n\t"
"mf_fpcr $f0\n\t"
"stt $f0,%1\n\t"
"ldt $f0,%0"
- : "=m"(tmp), "=m"(ret));
+ : "=m"(tmp), "=m"(ret));
+#endif
+
return ret;
}
-static inline void wrfpcr(unsigned long val)
+static inline void
+wrfpcr(unsigned long val)
{
unsigned long tmp;
+
+#if defined(__alpha_cix__) || defined(__alpha_fix__)
+ __asm__ ("ftoit $f0,%0\n\t"
+ "itoft %1,$f0\n\t"
+ "mt_fpcr $f0\n\t"
+ "itoft %0,$f0"
+ : "=&r"(tmp) : "r"(val));
+#else
__asm__ __volatile__ (
"stt $f0,%0\n\t"
"ldt $f0,%1\n\t"
"mt_fpcr $f0\n\t"
"ldt $f0,%0"
: "=m"(tmp) : "m"(val));
+#endif
+}
+
+static inline unsigned long
+swcr_update_status(unsigned long swcr, unsigned long fpcr)
+{
+	/* EV6 implements most of the bits in hardware.  Collect
+	   the accrued exception bits from the real fpcr.  */
+ if (implver() == IMPLVER_EV6) {
+ swcr &= ~IEEE_STATUS_MASK;
+ swcr |= (fpcr >> 35) & IEEE_STATUS_MASK;
+ }
+ return swcr;
}
extern unsigned long alpha_read_fp_reg (unsigned long reg);
#ifndef __ALPHA_PCI_H
#define __ALPHA_PCI_H
+#ifdef __KERNEL__
+
#include <linux/spinlock.h>
#include <asm/scatterlist.h>
#include <asm/machvec.h>
struct resource *io_space;
struct resource *mem_space;
- unsigned long config_space;
+ /* The following are for reporting to userland. The invariant is
+ that if we report a BWX-capable dense memory, we do not report
+ a sparse memory at all, even if it exists. */
+ unsigned long sparse_mem_base;
+ unsigned long dense_mem_base;
+ unsigned long sparse_io_base;
+ unsigned long dense_io_base;
+
+ /* This one's for the kernel only. It's in KSEG somewhere. */
+ unsigned long config_space_base;
+
unsigned int index;
unsigned int first_busno;
unsigned int last_busno;
extern int pci_dma_supported(struct pci_dev *hwdev, dma_addr_t mask);
+#endif /* __KERNEL__ */
+
+/* Values for the `which' argument to sys_pciconfig_iobase. */
+#define IOBASE_HOSE 0
+#define IOBASE_SPARSE_MEM 1
+#define IOBASE_DENSE_MEM 2
+#define IOBASE_SPARSE_IO 3
+#define IOBASE_DENSE_IO 4
+
#endif /* __ALPHA_PCI_H */
* On certain platforms whose physical address space can overlap KSEG,
* namely EV6 and above, we must re-twiddle the physaddr to restore the
* correct high-order bits.
+ *
+ * This is extremely confusing until you realize that this is actually
+ * just working around a userspace bug. The X server was intending to
+ * provide the physical address but instead provided the KSEG address.
+ * Or tried to, except it's not representable.
+ *
+ * On Tsunami there's nothing meaningful at 0x40000000000, so this is
+ * a safe thing to do. Come the first core logic that does put something
+ * in this area -- memory or whathaveyou -- then this hack will have
+ * to go away. So be prepared!
*/
#if defined(CONFIG_ALPHA_GENERIC) && defined(USE_48_BIT_KSEG)
#define FP_EX_INEXACT IEEE_TRAP_ENABLE_INE
#define FP_EX_DENORM IEEE_TRAP_ENABLE_DNO
-#define FP_DENORM_ZERO (fpcw & IEEE_MAP_DMZ)
-
-#define FP_HANDLE_EXCEPTIONS return _fex
+#define FP_DENORM_ZERO (swcr & IEEE_MAP_DMZ)
/* We write the results always */
#define FP_INHIBIT_RESULTS 0
#define __NR_dipc 373
#define __NR_pivot_root 374
#define __NR_mincore 375
+#define __NR_pciconfig_iobase 376
#if defined(__LIBRARY__) && defined(__GNUC__)
typedef double elf_fpreg_t;
typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG];
+#ifdef __KERNEL__
/* Altivec registers */
typedef vector128 elf_vrreg_t;
typedef elf_vrreg_t elf_vrregset_t[ELF_NVRREG];
+#endif /* __KERNEL__ */
#define ELF_CORE_COPY_REGS(gregs, regs) \
memcpy(gregs, regs, \
#define MAP_GROWSDOWN 0x0100 /* stack-like segment */
#define MAP_DENYWRITE 0x0800 /* ETXTBSY */
-#define MAP_EXECUTABLE 0x1000 /* mark it as a executable */
+#define MAP_EXECUTABLE 0x1000 /* mark it as an executable */
#define MS_ASYNC 1 /* sync memory asynchronously */
#define MS_INVALIDATE 2 /* invalidate the caches */
#define MS_SYNC 4 /* synchronous memory sync */
-#define MCL_CURRENT 0x2000 /* lock all currently mapped pages */
-#define MCL_FUTURE 0x4000 /* lock all additions to address space */
+#define MCL_CURRENT 1 /* lock all current mappings */
+#define MCL_FUTURE 2 /* lock all future mappings */
+
+#define MADV_NORMAL 0x0 /* default page-in behavior */
+#define MADV_RANDOM 0x1 /* page-in minimum required */
+#define MADV_SEQUENTIAL 0x2 /* read-ahead aggressively */
+#define MADV_WILLNEED 0x3 /* pre-fault pages */
+#define MADV_DONTNEED 0x4 /* discard these pages */
/* compatibility flags */
#define MAP_ANON MAP_ANONYMOUS
#define _PPC_TYPES_H
#ifndef __ASSEMBLY__
-#ifdef __KERNEL__
typedef unsigned short umode_t;
typedef unsigned long long __u64;
#endif
+typedef struct {
+ __u32 u[4];
+} __attribute((aligned(16))) vector128;
+
+#ifdef __KERNEL__
/*
* These aren't exported outside the kernel to avoid name space clashes
*/
typedef signed long long s64;
typedef unsigned long long u64;
-typedef struct {
- u32 u[4];
-} __attribute((aligned(16))) vector128;
-
#define BITS_PER_LONG 32
/* DMA addresses are 32-bits wide */
*
* Voice information definitions for the low level driver for the
* AWE32/SB32/AWE64 wave table synth.
- * version 0.4.3; Feb. 1, 1999
+ * version 0.4.4; Jan. 4, 2000
*
- * Copyright (C) 1996-1999 Takashi Iwai
+ * Copyright (C) 1996-2000 Takashi Iwai
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
#define AWE_MAP_PRESET 6 /* awe_voice_map */
/*#define AWE_PROBE_INFO 7*/ /* awe_voice_map (pat only) */
#define AWE_PROBE_DATA 8 /* optarg=sample */
+#define AWE_REMOVE_INFO 9 /* optarg=(bank<<8)|instr */
#define AWE_LOAD_CHORUS_FX 0x10 /* awe_chorus_fx_rec (optarg=mode) */
#define AWE_LOAD_REVERB_FX 0x11 /* awe_reverb_fx_rec (optarg=mode) */
--- /dev/null
+#ifndef _LINUX_CIRC_BUF_H
+#define _LINUX_CIRC_BUF_H 1
+
+struct circ_buf {
+ char *buf;
+ int head;
+ int tail;
+};
+
+/* Return count in buffer. */
+#define CIRC_CNT(head,tail,size) (((head) - (tail)) & ((size)-1))
+
+/* Return space available, 0..size-1. We always leave one free char
+ as a completely full buffer has head == tail, which is the same as
+ empty. */
+#define CIRC_SPACE(head,tail,size) CIRC_CNT((tail),((head)+1),(size))
+
+/* Return count up to the end of the buffer. Carefully avoid
+ accessing head and tail more than once, so they can change
+ underneath us without returning inconsistent results. */
+#define CIRC_CNT_TO_END(head,tail,size) \
+ ({int end = (size) - (tail); \
+ int n = ((head) + end) & ((size)-1); \
+ n < end ? n : end;})
+
+/* Return space available up to the end of the buffer. */
+#define CIRC_SPACE_TO_END(head,tail,size) \
+ ({int end = (size) - 1 - (head); \
+ int n = (end + (tail)) & ((size)-1); \
+ n <= end ? n : end+1;})
+
+#endif /* _LINUX_CIRC_BUF_H */
struct address_space_operations {
int (*writepage) (struct dentry *, struct page *);
int (*readpage)(struct dentry *, struct page *);
- int (*prepare_write)(struct page *, unsigned, unsigned);
+ int (*prepare_write)(struct file *, struct page *, unsigned, unsigned);
int (*commit_write)(struct file *, struct page *, unsigned, unsigned);
/* Unfortunately this kludge is needed for FIBMAP. Don't use it */
int (*bmap)(struct address_space *, long);
__u32 bavail;
};
+/* Arguments to the write call.
+ * Note that NFS_WRITE_MAXIOV must be <= (MAX_IOVEC-2) from sunrpc/xprt.h
+ */
+#define NFS_WRITE_MAXIOV 8
+
+enum nfs3_stable_how {
+ NFS_UNSTABLE = 0,
+ NFS_DATA_SYNC = 1,
+ NFS_FILE_SYNC = 2
+};
+
struct nfs_writeargs {
struct nfs_fh * fh;
__u32 offset;
__u32 count;
- const void * buffer;
+ enum nfs3_stable_how stable;
+ unsigned int nriov;
+ struct iovec iov[NFS_WRITE_MAXIOV];
+};
+
+struct nfs_writeverf {
+ enum nfs3_stable_how committed;
+ __u32 verifier[2];
+};
+
+struct nfs_writeres {
+ struct nfs_fattr * fattr;
+ struct nfs_writeverf * verf;
+ __u32 count;
};
#ifdef NFS_NEED_XDR_TYPES
--- /dev/null
+#ifndef NFS_CLUSTER_H
+#define NFS_CLUSTER_H
+
+
+
+#ifdef __KERNEL__
+#include <linux/nfs_fs_sb.h>
+
+/*
+ * Counters of total number and pending number of requests.
+ * When the total number of requests exceeds the soft limit, we start
+ * flushing out requests. If it exceeds the hard limit, we stall until
+ * it drops again.
+ */
+#define MAX_REQUEST_SOFT 192
+#define MAX_REQUEST_HARD 256
+
+/*
+ * Maximum number of requests per write cluster.
+ * 32 requests per cluster account for 128K of data on an intel box.
+ * Note: it's a good idea to make this number smaller than MAX_REQUEST_SOFT.
+ *
+ * For 100Mbps Ethernet, 128 pages (i.e. 256K) per cluster gives much
+ * better performance.
+ */
+#define REQUEST_HASH_SIZE 16
+#define REQUEST_NR(off) ((off) >> PAGE_CACHE_SHIFT)
+#define REQUEST_HASH(ino, off) (((ino) ^ REQUEST_NR(off)) & (REQUEST_HASH_SIZE - 1))
+
+
+/*
+ * Functions
+ */
+extern int nfs_reqlist_alloc(struct nfs_server *);
+extern void nfs_reqlist_free(struct nfs_server *);
+extern int nfs_reqlist_init(struct nfs_server *);
+extern void nfs_reqlist_exit(struct nfs_server *);
+extern void inode_schedule_scan(struct inode *, unsigned long);
+extern void inode_remove_flushd(struct inode *);
+extern void nfs_wake_flushd(void);
+
+/*
+ * This is the per-mount writeback cache.
+ */
+struct nfs_reqlist {
+ unsigned int nr_requests;
+ unsigned long runat;
+ wait_queue_head_t request_wait;
+
+ /* The async RPC task that is responsible for scanning the
+ * requests.
+ */
+ struct rpc_task *task; /* request flush task */
+
+ /* Authentication flavor handle for this NFS client */
+ struct rpc_auth *auth;
+
+ /* The list of all inodes with pending writebacks. */
+ struct inode *inodes;
+};
+
+#endif
+
+#endif
#include <linux/signal.h>
#include <linux/sched.h>
+#include <linux/pagemap.h>
#include <linux/in.h>
#include <linux/sunrpc/sched.h>
*/
#define NFS_MAX_DIRCACHE 16
-#define NFS_MAX_FILE_IO_BUFFER_SIZE 16384
+#define NFS_MAX_FILE_IO_BUFFER_SIZE 32768
#define NFS_DEF_FILE_IO_BUFFER_SIZE 4096
/*
* The upper limit on timeouts for the exponential backoff algorithm.
*/
#define NFS_MAX_RPC_TIMEOUT (6*HZ)
+#define NFS_WRITEBACK_DELAY (5*HZ)
+#define NFS_WRITEBACK_LOCKDELAY (60*HZ)
+#define NFS_COMMIT_DELAY (5*HZ)
/*
* Size of the lookup cache in units of number of entries cached.
#define NFS_DSERVER(dentry) (&(dentry)->d_sb->u.nfs_sb.s_server)
#define NFS_SERVER(inode) (&(inode)->i_sb->u.nfs_sb.s_server)
#define NFS_CLIENT(inode) (NFS_SERVER(inode)->client)
+#define NFS_REQUESTLIST(inode) (NFS_SERVER(inode)->rw_requests)
#define NFS_ADDR(inode) (RPC_PEERADDR(NFS_CLIENT(inode)))
#define NFS_CONGESTED(inode) (RPC_CONGESTED(NFS_CLIENT(inode)))
#define NFS_READTIME(inode) ((inode)->u.nfs_i.read_cache_jiffies)
#define NFS_OLDMTIME(inode) ((inode)->u.nfs_i.read_cache_mtime)
+#define NFS_NEXTSCAN(inode) ((inode)->u.nfs_i.nextscan)
#define NFS_CACHEINV(inode) \
do { \
NFS_READTIME(inode) = jiffies - 1000000; \
#define NFS_FLAGS(inode) ((inode)->u.nfs_i.flags)
#define NFS_REVALIDATING(inode) (NFS_FLAGS(inode) & NFS_INO_REVALIDATING)
-#define NFS_WRITEBACK(inode) ((inode)->u.nfs_i.writeback)
#define NFS_COOKIES(inode) ((inode)->u.nfs_i.cookies)
#define NFS_DIREOF(inode) ((inode)->u.nfs_i.direof)
/* Flags in the RPC client structure */
#define NFS_CLNTF_BUFSIZE 0x0001 /* readdir buffer in longwords */
-#ifdef __KERNEL__
+#define NFS_RW_SYNC 0x0001 /* O_SYNC handling */
+#define NFS_RW_SWAP 0x0002 /* This is a swap request */
/*
- * This struct describes a file region to be written.
- * It's kind of a pity we have to keep all these lists ourselves, rather
- * than sticking an extra pointer into struct page.
+ * When flushing a cluster of dirty pages, there can be different
+ * strategies:
*/
-struct nfs_wreq {
- struct rpc_listitem wb_list; /* linked list of req's */
- struct rpc_task wb_task; /* RPC task */
- struct file * wb_file; /* dentry referenced */
- struct page * wb_page; /* page to be written */
- wait_queue_head_t wb_wait; /* wait for completion */
- unsigned int wb_offset; /* offset within page */
- unsigned int wb_bytes; /* dirty range */
- unsigned int wb_count; /* user count */
- int wb_status;
- pid_t wb_pid; /* owner process */
- unsigned short wb_flags; /* status flags */
-
- struct nfs_writeargs wb_args; /* NFS RPC stuff */
- struct nfs_fattr wb_fattr; /* file attributes */
-};
-
-#define WB_NEXT(req) ((struct nfs_wreq *) ((req)->wb_list.next))
+#define FLUSH_AGING 0 /* only flush old buffers */
+#define FLUSH_SYNC 1 /* file being synced, or contention */
+#define FLUSH_WAIT 2 /* wait for completion */
+#define FLUSH_STABLE 4 /* commit to stable storage */
-/*
- * Various flags for wb_flags
- */
-#define NFS_WRITE_CANCELLED 0x0004 /* has been cancelled */
-#define NFS_WRITE_UNCOMMITTED 0x0008 /* written but uncommitted (NFSv3) */
-#define NFS_WRITE_INVALIDATE 0x0010 /* invalidate after write */
-#define NFS_WRITE_INPROGRESS 0x0100 /* RPC call in progress */
-#define NFS_WRITE_COMPLETE 0x0200 /* RPC call completed */
-
-#define WB_CANCELLED(req) ((req)->wb_flags & NFS_WRITE_CANCELLED)
-#define WB_UNCOMMITTED(req) ((req)->wb_flags & NFS_WRITE_UNCOMMITTED)
-#define WB_INVALIDATE(req) ((req)->wb_flags & NFS_WRITE_INVALIDATE)
-#define WB_INPROGRESS(req) ((req)->wb_flags & NFS_WRITE_INPROGRESS)
-#define WB_COMPLETE(req) ((req)->wb_flags & NFS_WRITE_COMPLETE)
+static inline
+loff_t page_offset(struct page *page)
+{
+ return ((loff_t)page->index) << PAGE_CACHE_SHIFT;
+}
+
+static inline
+unsigned long page_index(struct page *page)
+{
+ return page->index;
+}
+
+#ifdef __KERNEL__
/*
* linux/fs/nfs/proc.c
*/
extern int nfs_writepage(struct dentry *, struct page *);
extern int nfs_check_failed_request(struct inode *);
-
+extern struct nfs_page* nfs_find_request(struct inode *, struct page *);
+extern void nfs_release_request(struct nfs_page *req);
+extern int nfs_flush_incompatible(struct file *file, struct page *page);
+extern int nfs_updatepage(struct file *, struct page *, unsigned long, unsigned int);
/*
* Try to write back everything synchronously (but check the
* return value!)
*/
-extern int nfs_wb_all(struct inode *);
-extern int nfs_wb_page(struct inode *, struct page *);
-extern int nfs_wb_file(struct inode *, struct file *);
+extern int nfs_sync_file(struct inode *, struct file *, unsigned long, unsigned int, int);
+extern int nfs_flush_file(struct inode *, struct file *, unsigned long, unsigned int, int);
+extern int nfs_flush_timeout(struct inode *, int);
+#ifdef CONFIG_NFS_V3
+extern int nfs_commit_file(struct inode *, struct file *, unsigned long, unsigned int, int);
+extern int nfs_commit_timeout(struct inode *, int);
+#endif
+
+static inline int
+nfs_have_writebacks(struct inode *inode)
+{
+ return !list_empty(&inode->u.nfs_i.writeback);
+}
+
+static inline int
+nfs_wb_all(struct inode *inode)
+{
+ int error = nfs_sync_file(inode, 0, 0, 0, FLUSH_WAIT);
+ return (error < 0) ? error : 0;
+}
+
+/*
+ * Write back all requests on one page - we do this before reading it.
+ */
+static inline int
+nfs_wb_page(struct inode *inode, struct page* page)
+{
+ int error = nfs_sync_file(inode, 0, page_offset(page), PAGE_CACHE_SIZE, FLUSH_WAIT | FLUSH_STABLE);
+ return (error < 0) ? error : 0;
+}
/*
- * Invalidate write-backs, possibly trying to write them
- * back first..
+ * Write back all pending writes for one user..
*/
-extern void nfs_inval(struct inode *);
-extern int nfs_updatepage(struct file *, struct page *, unsigned long, unsigned int);
+static inline int
+nfs_wb_file(struct inode *inode, struct file *file)
+{
+ int error = nfs_sync_file(inode, file, 0, 0, FLUSH_WAIT);
+ return (error < 0) ? error : 0;
+}
/*
* linux/fs/nfs/read.c
extern int nfs_root_mount(struct super_block *sb);
+#define nfs_wait_event(clnt, wq, condition) \
+({ \
+ int __retval = 0; \
+ if (clnt->cl_intr) { \
+ sigset_t oldmask; \
+ rpc_clnt_sigmask(clnt, &oldmask); \
+ __retval = wait_event_interruptible(wq, condition); \
+ rpc_clnt_sigunmask(clnt, &oldmask); \
+ } else \
+ wait_event(wq, condition); \
+ __retval; \
+})
+
#endif /* __KERNEL__ */
/*
/*
* This is the list of dirty unwritten pages.
- * NFSv3 will want to add a list for written but uncommitted
- * pages.
*/
- struct nfs_wreq * writeback;
+ struct list_head dirty;
+ struct list_head commit;
+ struct list_head writeback;
+
+ unsigned int ndirty,
+ ncommit,
+ npages;
+
+ /* Flush daemon info */
+ struct inode *hash_next,
+ *hash_prev;
+ unsigned long nextscan;
/* Readdir caching information. */
void *cookies;
/*
* Legal inode flag values
*/
-#define NFS_INO_REVALIDATING 0x0001 /* revalidating attrs */
+#define NFS_INO_REVALIDATING 0x0004 /* revalidating attrs */
#define NFS_IS_SNAPSHOT 0x0010 /* a snapshot file */
+#define NFS_INO_FLUSH 0x0020 /* inode is due for flushing */
/*
* NFS lock info
unsigned int acdirmin;
unsigned int acdirmax;
char * hostname; /* remote hostname */
+ struct nfs_reqlist * rw_requests; /* async read/write requests */
};
/*
unsigned int fh_stale; /* FH stale error */
unsigned int fh_lookup; /* dentry cached */
unsigned int fh_anon; /* anon file dentry returned */
- unsigned int fh_nocache_dir; /* filehandle not foudn in dcache */
- unsigned int fh_nocache_nondir; /* filehandle not foudn in dcache */
+ unsigned int fh_nocache_dir; /* filehandle not found in dcache */
+ unsigned int fh_nocache_nondir; /* filehandle not found in dcache */
+ unsigned int io_read; /* bytes returned to read requests */
+ unsigned int io_write; /* bytes passed in write requests */
+ unsigned int th_cnt; /* number of available threads */
+ unsigned int th_usage[10]; /* number of ticks during which n perdeciles
+ * of available threads were in use */
+ unsigned int th_fullcnt; /* number of times last free thread was used */
+ unsigned int ra_size; /* size of ra cache */
	unsigned int	ra_depth[11];	/* number of times ra entry was found that deep
					 * in the cache (10 percentiles). [10] = not found */
};
#ifdef __KERNEL__
struct nfsctl_uidmap u_umap;
struct nfsctl_fhparm u_getfh;
struct nfsctl_fdparm u_getfd;
-#ifdef notyet
struct nfsctl_fsparm u_getfs;
-#endif
- unsigned int u_debug;
} u;
#define ca_svc u.u_svc
#define ca_client u.u_client
#define ca_getfd u.u_getfd
#define ca_getfs u.u_getfs
#define ca_authd u.u_authd
-#define ca_debug u.u_debug
};
union nfsctl_res {
__u8 cr_getfh[NFS_FHSIZE];
-#ifdef notyet
struct knfsd_fh cr_getfs;
-#endif
- unsigned int cr_debug;
};
#ifdef __KERNEL__
#define PCI_DEVICE_ID_OXSEMI_16PCI952 0x950A
#define PCI_DEVICE_ID_OXSEMI_16PCI95N 0x9511
+#define PCI_VENDOR_ID_TITAN 0x14D2
+#define PCI_DEVICE_ID_TITAN_100 0xA001
+#define PCI_DEVICE_ID_TITAN_200 0xA005
+#define PCI_DEVICE_ID_TITAN_400 0xA003
+#define PCI_DEVICE_ID_TITAN_800B 0xA004
+
#define PCI_VENDOR_ID_PANACOM 0x14d4
#define PCI_DEVICE_ID_PANACOM_QUADMODEM 0x0400
#define PCI_DEVICE_ID_PANACOM_DUALMODEM 0x0402
unsigned int writers;
unsigned int waiting_readers;
unsigned int waiting_writers;
+ unsigned int r_counter;
+ unsigned int w_counter;
};
/* Differs from PIPE_BUF in that PIPE_SIZE is the length of the actual
#define PIPE_WRITERS(inode) ((inode).i_pipe->writers)
#define PIPE_WAITING_READERS(inode) ((inode).i_pipe->waiting_readers)
#define PIPE_WAITING_WRITERS(inode) ((inode).i_pipe->waiting_writers)
+#define PIPE_RCOUNTER(inode) ((inode).i_pipe->r_counter)
+#define PIPE_WCOUNTER(inode) ((inode).i_pipe->w_counter)
#define PIPE_EMPTY(inode) (PIPE_LEN(inode) == 0)
#define PIPE_FULL(inode) (PIPE_LEN(inode) == PIPE_SIZE)
#define PIPE_MAX_RCHUNK(inode) (PIPE_SIZE - PIPE_START(inode))
#define PIPE_MAX_WCHUNK(inode) (PIPE_SIZE - PIPE_END(inode))
+/* Drop the inode semaphore and wait for a pipe event, atomically */
+void pipe_wait(struct inode * inode);
+
+struct inode* pipe_new(struct inode* inode);
+
#endif
#ifndef _LINUX_SERIAL_H
#define _LINUX_SERIAL_H
+#include <asm/page.h>
+
/*
* Counters of the input lines (CTS, DSR, RI, CD) interrupts
*/
/*
* The size of the serial xmit buffer is 1 page, or 4096 bytes
*/
-#define SERIAL_XMIT_SIZE 4096
+#define SERIAL_XMIT_SIZE PAGE_SIZE
struct serial_struct {
int type;
int line;
- int port;
+ unsigned long port;
int irq;
int flags;
int xmit_fifo_size;
#include <linux/config.h>
#include <linux/termios.h>
#include <linux/tqueue.h>
+#include <linux/circ_buf.h>
#include <linux/wait.h>
struct serial_state {
int magic;
int baud_base;
- int port;
+ unsigned long port;
int irq;
int flags;
int hub6;
struct async_struct {
int magic;
- int port;
+ unsigned long port;
int hub6;
int flags;
int xmit_fifo_size;
int blocked_open; /* # of blocked opens */
long session; /* Session of opening process */
long pgrp; /* pgrp of opening process */
- unsigned char *xmit_buf;
- int xmit_head;
- int xmit_tail;
- int xmit_cnt;
+ struct circ_buf xmit;
+ spinlock_t xmit_lock;
u8 *iomem_base;
u16 iomem_reg_shift;
int io_type;
#define SERIAL_MAGIC 0x5301
#define SSTATE_MAGIC 0x5302
-/*
- * The size of the serial xmit buffer is 1 page, or 4096 bytes
- */
-#define SERIAL_XMIT_SIZE 4096
-
/*
* Events are used to schedule things to happen at timer-interrupt
* time, instead of at rs interrupt time.
#define SPCI_FL_IRQBASE3 (0x0003 << 4)
#define SPCI_FL_IRQBASE4 (0x0004 << 4)
#define SPCI_FL_GET_IRQBASE(x) ((x & SPCI_FL_IRQ_MASK) >> 4)
-
+
 /* Use successive entries in the base resource table */
#define SPCI_FL_BASE_TABLE 0x0100
-
+
/* Use successive entries in the irq resource table */
#define SPCI_FL_IRQ_TABLE 0x0200
-
+
/* Use the irq resource table instead of dev->irq */
#define SPCI_FL_IRQRESOURCE 0x0400
/* Use the Base address register size to cap number of ports */
#define SPCI_FL_REGION_SZ_CAP 0x0800
-
+
+/* Do not use irq sharing for this device */
+#define SPCI_FL_NO_SHIRQ 0x1000
+
#define SPCI_FL_PNPDEFAULT (SPCI_FL_IRQRESOURCE)
-
+
#endif /* _LINUX_SERIAL_H */
* Status: Experimental.
* Author: Dag Brattli <dagb@cs.uit.no>
* Created at: Tue Apr 14 12:41:42 1998
- * Modified at: Fri Jan 14 10:46:56 2000
+ * Modified at: Mon Mar 20 09:08:57 2000
* Modified by: Dag Brattli <dagb@cs.uit.no>
*
* Copyright (c) 1999-2000 Dag Brattli, All Rights Reserved.
IRDA_TASK_CHILD_INIT, /* Initializing child task */
IRDA_TASK_CHILD_WAIT, /* Waiting for child task to finish */
IRDA_TASK_CHILD_DONE /* Child task is finished */
-} TASK_STATE;
+} IRDA_TASK_STATE;
struct irda_task;
-typedef int (*TASK_CALLBACK) (struct irda_task *task);
+typedef int (*IRDA_TASK_CALLBACK) (struct irda_task *task);
struct irda_task {
queue_t q;
magic_t magic;
- TASK_STATE state;
- TASK_CALLBACK function;
- TASK_CALLBACK finished;
+ IRDA_TASK_STATE state;
+ IRDA_TASK_CALLBACK function;
+ IRDA_TASK_CALLBACK finished;
struct irda_task *parent;
struct timer_list timer;
void irda_task_delete(struct irda_task *task);
int irda_task_kick(struct irda_task *task);
-struct irda_task *irda_task_execute(void *instance, TASK_CALLBACK function,
- TASK_CALLBACK finished,
+struct irda_task *irda_task_execute(void *instance,
+ IRDA_TASK_CALLBACK function,
+ IRDA_TASK_CALLBACK finished,
struct irda_task *parent, void *param);
-void irda_task_next_state(struct irda_task *task, TASK_STATE state);
+void irda_task_next_state(struct irda_task *task, IRDA_TASK_STATE state);
extern const char *infrared_mode[];
* Status: Experimental.
* Author: Dag Brattli <dagb@cs.uit.no>
* Created at: Mon Jun 7 08:47:28 1999
- * Modified at: Mon Dec 13 11:51:59 1999
+ * Modified at: Sun Jan 30 14:05:14 2000
* Modified by: Dag Brattli <dagb@cs.uit.no>
*
- * Copyright (c) 1999 Dag Brattli, All Rights Reserved.
+ * Copyright (c) 1999-2000 Dag Brattli, All Rights Reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
__u8 *bp;
__u16 *sp;
__u32 *ip;
-} pv_t;
+} irda_pv_t;
typedef struct {
__u8 pi;
__u8 pl;
- pv_t pv;
-} param_t;
+ irda_pv_t pv;
+} irda_param_t;
-typedef int (*PI_HANDLER)(void *self, param_t *param, int get);
+typedef int (*PI_HANDLER)(void *self, irda_param_t *param, int get);
typedef int (*PV_HANDLER)(void *self, __u8 *buf, int len, __u8 pi,
PV_TYPE type, PI_HANDLER func);
*/
if ((shmid % SEQ_MULTIPLIER) == zero_id)
return -EINVAL;
- lock_kernel();
down(&shm_ids.sem);
shp = shm_lock(shmid);
if (shp == NULL) {
up(&shm_ids.sem);
- unlock_kernel();
return -EINVAL;
}
err = -EIDRM;
int id=shp->id;
shm_unlock(shmid);
up(&shm_ids.sem);
- /* The kernel lock prevents new attaches from
- * being happening. We can't hold shm_lock here
- * else we will deadlock in shm_lookup when we
+ /*
+ * We can't hold shm_lock here else we
+ * will deadlock in shm_lookup when we
* try to recursively grab it.
*/
- err = shm_remove_name(id);
- unlock_kernel();
- return err;
+ return shm_remove_name(id);
}
/* Do not find me any more */
shp->shm_perm.mode |= SHM_DEST;
/* Unlock */
shm_unlock(shmid);
up(&shm_ids.sem);
- unlock_kernel();
return err;
}
static int shm_mmap(struct file * file, struct vm_area_struct * vma)
{
- if (!(vma->vm_flags & VM_SHARED))
- return -EINVAL; /* we cannot do private mappings */
+ if ((vma->vm_flags & VM_WRITE) && !(vma->vm_flags & VM_SHARED))
+ return -EINVAL; /* we cannot do private writable mappings */
UPDATE_ATIME(file->f_dentry->d_inode);
vma->vm_ops = &shm_vm_ops;
shm_inc(file->f_dentry->d_inode->i_ino);
unsigned long addr;
struct file * file;
int err;
- int flags;
+ unsigned long flags;
+ unsigned long prot;
+ unsigned long o_flags;
char name[SHM_FMT_LEN+1];
if (!shm_sb || (shmid % SEQ_MULTIPLIER) == zero_id)
return -EINVAL;
- if ((addr = (ulong)shmaddr))
- {
- if(addr & (SHMLBA-1)) {
+ if ((addr = (ulong)shmaddr)) {
+ if (addr & (SHMLBA-1)) {
if (shmflg & SHM_RND)
addr &= ~(SHMLBA-1); /* round down */
else
} else
flags = MAP_SHARED;
- sprintf (name, SHM_FMT, shmid);
+ if (shmflg & SHM_RDONLY) {
+ prot = PROT_READ;
+ o_flags = O_RDONLY;
+ } else {
+ prot = PROT_READ | PROT_WRITE;
+ o_flags = O_RDWR;
+ }
+
+ sprintf (name, SHM_FMT, shmid);
+
lock_kernel();
- file = filp_open(name, O_RDWR, 0, dget(shm_sb->s_root));
+ file = filp_open(name, o_flags, 0, dget(shm_sb->s_root));
if (IS_ERR (file))
goto bad_file;
*raddr = do_mmap (file, addr, file->f_dentry->d_inode->i_size,
- (shmflg & SHM_RDONLY ? PROT_READ :
- PROT_READ | PROT_WRITE), flags, 0);
+ prot, flags, 0);
unlock_kernel();
if (IS_ERR(*raddr))
err = PTR_ERR(*raddr);
}
/*
- * Remove a name. Must be called with lock_kernel
+ * Remove a name.
*/
static int shm_remove_name(int id)
{
+ int err;
char name[SHM_FMT_LEN+1];
sprintf (name, SHM_FMT, id);
- return do_unlink (name, dget(shm_sb->s_root));
+ lock_kernel();
+ err = do_unlink (name, dget(shm_sb->s_root));
+ unlock_kernel();
+ return err;
}
/*
int id = shmd->vm_file->f_dentry->d_inode->i_ino;
struct shmid_kernel *shp;
- lock_kernel();
-
/* remove from the list of attaches of the shm segment */
if(!(shp = shm_lock(id)))
BUG();
* try to recursively grab it.
*/
err = shm_remove_name(pid);
- if(err && err != -ENOENT)
+ if(err && err != -EINVAL && err != -ENOENT)
printk(KERN_ERR "Unlink of SHM id %d failed (%d).\n", pid, err);
} else {
shm_unlock(id);
}
-
- unlock_kernel();
}
/*
OX_OBJS += ksyms.o
endif
-ifdef CONFIG_PM
+ifeq ($(CONFIG_PM),y)
OX_OBJS += pm.o
endif
#ifdef CONFIG_BSD_PROCESS_ACCT
acct_process(code);
#endif
- task_lock(tsk);
sem_exit();
__exit_mm(tsk);
__exit_files(tsk);
__exit_fs(tsk);
__exit_sighand(tsk);
+ task_lock(tsk);
exit_thread();
tsk->state = TASK_ZOMBIE;
tsk->exit_code = code;
mapnr = pte_pagenr(*pgtable);
if (write && (!pte_write(*pgtable) || !pte_dirty(*pgtable)))
goto fault_in_page;
- if (mapnr >= max_mapnr)
- return 0;
page = mem_map + mapnr;
+ if ((mapnr >= max_mapnr) || PageReserved(page))
+ return 0;
flush_cache_page(vma, addr);
if (write) {
goto handle_softirq_back;
handle_tq_scheduler:
+ /*
+ * do not run the task queue with disabled interrupts,
+ * cli() wouldn't work on SMP
+ */
+ sti();
run_task_queue(&tq_scheduler);
goto tq_scheduler_back;
scheduling_in_interrupt:
printk("Scheduling in interrupt\n");
- *(int *)0 = 0;
+ BUG();
return;
}
PAGE_BUG(page);
}
- status = mapping->a_ops->prepare_write(page, offset, offset+bytes);
+ status = mapping->a_ops->prepare_write(file, page, offset, offset+bytes);
if (status)
goto unlock;
kaddr = (char*)page_address(page);
return;
flush_cache_page(vma, address);
page = pte_page(pte);
- if (page-mem_map >= max_mapnr)
+ if ((page-mem_map >= max_mapnr) || PageReserved(page))
return;
offset = address & ~PAGE_MASK;
memclear_highpage_flush(page, offset, PAGE_SIZE - offset);
/*
* This is the 'heart' of the zoned buddy allocator:
*/
-struct page * __alloc_pages (zonelist_t *zonelist, unsigned long order)
+struct page * __alloc_pages(zonelist_t *zonelist, unsigned long order)
{
zone_t **zone = zonelist->zones;
+ /*
+ * If this is a recursive call, we'd better
+ * do our best to just allocate things without
+ * further thought.
+ */
+ if (current->flags & PF_MEMALLOC)
+ goto allocate_ok;
+
/*
 	 * (If anyone calls gfp from interrupts nonatomically then it
 	 * will sooner or later be tripped up by a schedule().)
break;
if (!z->size)
BUG();
- /*
- * If this is a recursive call, we'd better
- * do our best to just allocate things without
- * further thought.
- */
- if (!(current->flags & PF_MEMALLOC)) {
- /* Are we low on memory? */
- if (z->free_pages <= z->pages_low)
- continue;
- }
- /*
- * This is an optimization for the 'higher order zone
- * is empty' case - it can happen even in well-behaved
- * systems, think the page-cache filling up all RAM.
- * We skip over empty zones. (this is not exact because
- * we do not take the spinlock and it's not exact for
- * the higher order case, but will do it for most things.)
- */
- if (z->free_pages) {
+
+ /* Are we supposed to free memory? Don't make it worse.. */
+ if (!z->zone_wake_kswapd && z->free_pages > z->pages_low) {
struct page *page = rmqueue(z, order);
if (page)
return page;
}
}
+
+ /*
+ * Ok, no obvious zones were available, start
+ * balancing things a bit..
+ */
if (zone_balance_memory(zonelist)) {
zone = zonelist->zones;
+allocate_ok:
for (;;) {
zone_t *z = *(zone++);
if (!z)
continue;
if (pte_present(page)) {
unsigned long map_nr = pte_pagenr(page);
- if (map_nr < max_mapnr)
+ if ((map_nr < max_mapnr) &&
+ (!PageReserved(mem_map + map_nr)))
__free_page(mem_map + map_nr);
continue;
}
struct vm_struct *area;
size = PAGE_ALIGN(size);
- if (!size || (size >> PAGE_SHIFT) > max_mapnr) {
+ if (!size || (size >> PAGE_SHIFT) > num_physpages) {
BUG();
return NULL;
}
if (!pte_present(pte))
goto out_failed;
page = pte_page(pte);
- if (page-mem_map >= max_mapnr)
+ if ((page-mem_map >= max_mapnr) || PageReserved(page))
goto out_failed;
/* Don't look at this pte if it's been accessed recently. */
goto out_failed;
}
- if (PageReserved(page) || PageLocked(page))
+ if (PageLocked(page))
goto out_failed;
/*
* Status: Experimental.
* Author: Dag Brattli <dagb@cs.uit.no>
* Created at: Mon Jun 7 10:25:11 1999
- * Modified at: Tue Dec 14 15:26:30 1999
+ * Modified at: Sun Jan 30 14:32:03 2000
* Modified by: Dag Brattli <dagb@cs.uit.no>
*
- * Copyright (c) 1999 Dag Brattli, All Rights Reserved.
+ * Copyright (c) 1999-2000 Dag Brattli, All Rights Reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
#include <net/irda/ircomm_param.h>
-static int ircomm_param_service_type(void *instance, param_t *param, int get);
-static int ircomm_param_port_type(void *instance, param_t *param, int get);
-static int ircomm_param_port_name(void *instance, param_t *param, int get);
-static int ircomm_param_service_type(void *instance, param_t *param, int get);
-static int ircomm_param_data_rate(void *instance, param_t *param, int get);
-static int ircomm_param_data_format(void *instance, param_t *param, int get);
-static int ircomm_param_flow_control(void *instance, param_t *param, int get);
-static int ircomm_param_xon_xoff(void *instance, param_t *param, int get);
-static int ircomm_param_enq_ack(void *instance, param_t *param, int get);
-static int ircomm_param_line_status(void *instance, param_t *param, int get);
-static int ircomm_param_dte(void *instance, param_t *param, int get);
-static int ircomm_param_dce(void *instance, param_t *param, int get);
-static int ircomm_param_poll(void *instance, param_t *param, int get);
+static int ircomm_param_service_type(void *instance, irda_param_t *param,
+ int get);
+static int ircomm_param_port_type(void *instance, irda_param_t *param,
+ int get);
+static int ircomm_param_port_name(void *instance, irda_param_t *param,
+ int get);
+static int ircomm_param_service_type(void *instance, irda_param_t *param,
+ int get);
+static int ircomm_param_data_rate(void *instance, irda_param_t *param,
+ int get);
+static int ircomm_param_data_format(void *instance, irda_param_t *param,
+ int get);
+static int ircomm_param_flow_control(void *instance, irda_param_t *param,
+ int get);
+static int ircomm_param_xon_xoff(void *instance, irda_param_t *param, int get);
+static int ircomm_param_enq_ack(void *instance, irda_param_t *param, int get);
+static int ircomm_param_line_status(void *instance, irda_param_t *param,
+ int get);
+static int ircomm_param_dte(void *instance, irda_param_t *param, int get);
+static int ircomm_param_dce(void *instance, irda_param_t *param, int get);
+static int ircomm_param_poll(void *instance, irda_param_t *param, int get);
static pi_minor_info_t pi_minor_call_table_common[] = {
{ ircomm_param_service_type, PV_INT_8_BITS },
 * query and then the remote device sends its initial parameters
*
*/
-static int ircomm_param_service_type(void *instance, param_t *param, int get)
+static int ircomm_param_service_type(void *instance, irda_param_t *param,
+ int get)
{
struct ircomm_tty_cb *self = (struct ircomm_tty_cb *) instance;
__u8 service_type = param->pv.b; /* We know it's a one byte integer */
* Since we only advertise serial service, this parameter should only
* be equal to IRCOMM_SERIAL.
*/
-static int ircomm_param_port_type(void *instance, param_t *param, int get)
+static int ircomm_param_port_type(void *instance, irda_param_t *param, int get)
{
struct ircomm_tty_cb *self = (struct ircomm_tty_cb *) instance;
* Exchange port name
*
*/
-static int ircomm_param_port_name(void *instance, param_t *param, int get)
+static int ircomm_param_port_name(void *instance, irda_param_t *param, int get)
{
struct ircomm_tty_cb *self = (struct ircomm_tty_cb *) instance;
* Exchange data rate to be used in this settings
*
*/
-static int ircomm_param_data_rate(void *instance, param_t *param, int get)
+static int ircomm_param_data_rate(void *instance, irda_param_t *param, int get)
{
struct ircomm_tty_cb *self = (struct ircomm_tty_cb *) instance;
* Exchange data format to be used in this settings
*
*/
-static int ircomm_param_data_format(void *instance, param_t *param, int get)
+static int ircomm_param_data_format(void *instance, irda_param_t *param,
+ int get)
{
struct ircomm_tty_cb *self = (struct ircomm_tty_cb *) instance;
* Exchange flow control settings to be used in this settings
*
*/
-static int ircomm_param_flow_control(void *instance, param_t *param, int get)
+static int ircomm_param_flow_control(void *instance, irda_param_t *param,
+ int get)
{
struct ircomm_tty_cb *self = (struct ircomm_tty_cb *) instance;
* Exchange XON/XOFF characters
*
*/
-static int ircomm_param_xon_xoff(void *instance, param_t *param, int get)
+static int ircomm_param_xon_xoff(void *instance, irda_param_t *param, int get)
{
struct ircomm_tty_cb *self = (struct ircomm_tty_cb *) instance;
* Exchange ENQ/ACK characters
*
*/
-static int ircomm_param_enq_ack(void *instance, param_t *param, int get)
+static int ircomm_param_enq_ack(void *instance, irda_param_t *param, int get)
{
struct ircomm_tty_cb *self = (struct ircomm_tty_cb *) instance;
*
*
*/
-static int ircomm_param_line_status(void *instance, param_t *param, int get)
+static int ircomm_param_line_status(void *instance, irda_param_t *param,
+ int get)
{
IRDA_DEBUG(2, __FUNCTION__ "(), not impl.\n");
* If we get here, there must be some sort of null-modem connection, and
* we are probably working in server mode as well.
*/
-static int ircomm_param_dte(void *instance, param_t *param, int get)
+static int ircomm_param_dte(void *instance, irda_param_t *param, int get)
{
struct ircomm_tty_cb *self = (struct ircomm_tty_cb *) instance;
__u8 dte;
*
*
*/
-static int ircomm_param_dce(void *instance, param_t *param, int get)
+static int ircomm_param_dce(void *instance, irda_param_t *param, int get)
{
struct ircomm_tty_cb *self = (struct ircomm_tty_cb *) instance;
__u8 dce;
* Called when the peer device is polling for the line settings
*
*/
-static int ircomm_param_poll(void *instance, param_t *param, int get)
+static int ircomm_param_poll(void *instance, irda_param_t *param, int get)
{
struct ircomm_tty_cb *self = (struct ircomm_tty_cb *) instance;
return req.ifr_receiving;
}
-void irda_task_next_state(struct irda_task *task, TASK_STATE state)
+void irda_task_next_state(struct irda_task *task, IRDA_TASK_STATE state)
{
IRDA_DEBUG(2, __FUNCTION__ "(), state = %s\n", task_state[state]);
* called from interrupt context, so it's not possible to use
* schedule_timeout()
*/
-struct irda_task *irda_task_execute(void *instance, TASK_CALLBACK function,
- TASK_CALLBACK finished,
+struct irda_task *irda_task_execute(void *instance,
+ IRDA_TASK_CALLBACK function,
+ IRDA_TASK_CALLBACK finished,
struct irda_task *parent, void *param)
{
struct irda_task *task;
* Status: Experimental.
* Author: Dag Brattli <dagb@cs.uit.no>
* Created at: Thu Oct 15 08:37:58 1998
- * Modified at: Thu Nov 4 14:50:52 1999
+ * Modified at: Tue Mar 21 09:06:41 2000
* Modified by: Dag Brattli <dagb@cs.uit.no>
* Sources: skeleton.c by Donald Becker <becker@CESDIS.gsfc.nasa.gov>
* slip.c by Laurence Culhane, <loz@holmes.demon.co.uk>
* Fred N. van Kempen, <waltje@uwalt.nl.mugnet.org>
*
- * Copyright (c) 1998-1999 Dag Brattli, All Rights Reserved.
+ * Copyright (c) 1998-2000 Dag Brattli, All Rights Reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
*
********************************************************************/
+#include <linux/config.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/inetdevice.h>
* is useful if we have changed access points on the same
* subnet.
*/
+#ifdef CONFIG_INET
IRDA_DEBUG(4, "IrLAN: Sending gratuitous ARP\n");
in_dev = in_dev_get(dev);
if (in_dev == NULL)
NULL, dev->dev_addr, NULL);
read_unlock(&in_dev->lock);
in_dev_put(in_dev);
+#endif /* CONFIG_INET */
}
/*
static void irttp_fragment_skb(struct tsap_cb *self, struct sk_buff *skb);
static void irttp_start_todo_timer(struct tsap_cb *self, int timeout);
static struct sk_buff *irttp_reassemble_skb(struct tsap_cb *self);
-static int irttp_param_max_sdu_size(void *instance, param_t *param, int get);
+static int irttp_param_max_sdu_size(void *instance, irda_param_t *param,
+ int get);
/* Information for parsing parameters in IrTTP */
static pi_minor_info_t pi_minor_call_table[] = {
* will be called both when this parameter needs to be inserted into, and
* extracted from the connect frames
*/
-static int irttp_param_max_sdu_size(void *instance, param_t *param, int get)
+static int irttp_param_max_sdu_size(void *instance, irda_param_t *param,
+ int get)
{
struct tsap_cb *self;
* Status: Experimental.
* Author: Dag Brattli <dagb@cs.uit.no>
* Created at: Mon Jun 7 10:25:11 1999
- * Modified at: Tue Dec 14 16:03:57 1999
+ * Modified at: Sun Jan 30 14:08:39 2000
* Modified by: Dag Brattli <dagb@cs.uit.no>
*
- * Copyright (c) 1999 Dag Brattli, All Rights Reserved.
+ * Copyright (c) 1999-2000 Dag Brattli, All Rights Reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
static int irda_insert_no_value(void *self, __u8 *buf, int len, __u8 pi,
PV_TYPE type, PI_HANDLER func)
{
- param_t p;
+ irda_param_t p;
int ret;
p.pi = pi;
static int irda_extract_no_value(void *self, __u8 *buf, int len, __u8 pi,
PV_TYPE type, PI_HANDLER func)
{
- param_t p;
+ irda_param_t p;
int ret;
/* Extract values anyway, since handler may need them */
static int irda_insert_integer(void *self, __u8 *buf, int len, __u8 pi,
PV_TYPE type, PI_HANDLER func)
{
- param_t p;
+ irda_param_t p;
int n = 0;
int err;
static int irda_extract_integer(void *self, __u8 *buf, int len, __u8 pi,
PV_TYPE type, PI_HANDLER func)
{
- param_t p;
+ irda_param_t p;
int n = 0;
int err;
PV_TYPE type, PI_HANDLER func)
{
char str[33];
- param_t p;
+ irda_param_t p;
int err;
IRDA_DEBUG(2, __FUNCTION__ "()\n");
static int irda_extract_octseq(void *self, __u8 *buf, int len, __u8 pi,
PV_TYPE type, PI_HANDLER func)
{
- param_t p;
+ irda_param_t p;
p.pi = pi; /* In case handler needs to know */
 p.pl = buf[1]; /* Extract length of value */
*/
int irda_param_pack(__u8 *buf, char *fmt, ...)
{
+ irda_pv_t arg;
va_list args;
char *p;
int n = 0;
- pv_t arg;
va_start(args, fmt);
*/
int irda_param_unpack(__u8 *buf, char *fmt, ...)
{
+ irda_pv_t arg;
va_list args;
char *p;
int n = 0;
- pv_t arg;
va_start(args, fmt);
* Status: Stable
* Author: Dag Brattli <dagb@cs.uit.no>
* Created at: Tue Sep 9 00:00:26 1997
- * Modified at: Sun Dec 12 13:47:09 1999
+ * Modified at: Sun Jan 30 14:29:16 2000
* Modified by: Dag Brattli <dagb@cs.uit.no>
*
- * Copyright (c) 1998-1999 Dag Brattli <dagb@cs.uit.no>,
+ * Copyright (c) 1998-2000 Dag Brattli <dagb@cs.uit.no>,
* All Rights Reserved.
*
* This program is free software; you can redistribute it and/or
#define CI_BZIP2 27 /* Random pick */
#endif
-static int irlap_param_baud_rate(void *instance, param_t *param, int get);
-static int irlap_param_link_disconnect(void *instance, param_t *parm, int get);
-static int irlap_param_max_turn_time(void *instance, param_t *param, int get);
-static int irlap_param_data_size(void *instance, param_t *param, int get);
-static int irlap_param_window_size(void *instance, param_t *param, int get);
-static int irlap_param_additional_bofs(void *instance, param_t *parm, int get);
-static int irlap_param_min_turn_time(void *instance, param_t *param, int get);
+static int irlap_param_baud_rate(void *instance, irda_param_t *param, int get);
+static int irlap_param_link_disconnect(void *instance, irda_param_t *parm,
+ int get);
+static int irlap_param_max_turn_time(void *instance, irda_param_t *param,
+ int get);
+static int irlap_param_data_size(void *instance, irda_param_t *param, int get);
+static int irlap_param_window_size(void *instance, irda_param_t *param,
+ int get);
+static int irlap_param_additional_bofs(void *instance, irda_param_t *parm,
+ int get);
+static int irlap_param_min_turn_time(void *instance, irda_param_t *param,
+ int get);
__u32 min_turn_times[] = { 10000, 5000, 1000, 500, 100, 50, 10, 0 }; /* us */
__u32 baud_rates[] = { 2400, 9600, 19200, 38400, 57600, 115200, 576000,
* Negotiate data-rate
*
*/
-static int irlap_param_baud_rate(void *instance, param_t *param, int get)
+static int irlap_param_baud_rate(void *instance, irda_param_t *param, int get)
{
__u16 final;
* Negotiate link disconnect/threshold time.
*
*/
-static int irlap_param_link_disconnect(void *instance, param_t *param, int get)
+static int irlap_param_link_disconnect(void *instance, irda_param_t *param,
+ int get)
{
__u16 final;
* will be negotiated independently for each station
*
*/
-static int irlap_param_max_turn_time(void *instance, param_t *param, int get)
+static int irlap_param_max_turn_time(void *instance, irda_param_t *param,
+ int get)
{
struct irlap_cb *self = (struct irlap_cb *) instance;
* will be negotiated independently for each station
*
*/
-static int irlap_param_data_size(void *instance, param_t *param, int get)
+static int irlap_param_data_size(void *instance, irda_param_t *param, int get)
{
struct irlap_cb *self = (struct irlap_cb *) instance;
* will be negotiated independently for each station
*
*/
-static int irlap_param_window_size(void *instance, param_t *param, int get)
+static int irlap_param_window_size(void *instance, irda_param_t *param,
+ int get)
{
struct irlap_cb *self = (struct irlap_cb *) instance;
* Negotiate additional BOF characters. This is a type 1 parameter and
* will be negotiated independently for each station.
*/
-static int irlap_param_additional_bofs(void *instance, param_t *param, int get)
+static int irlap_param_additional_bofs(void *instance, irda_param_t *param,
+ int get)
{
struct irlap_cb *self = (struct irlap_cb *) instance;
* Negotiate the minimum turn around time. This is a type 1 parameter and
* will be negotiated independently for each station
*/
-static int irlap_param_min_turn_time(void *instance, param_t *param, int get)
+static int irlap_param_min_turn_time(void *instance, irda_param_t *param,
+ int get)
{
struct irlap_cb *self = (struct irlap_cb *) instance;
DECLARE_WAIT_QUEUE_HEAD(WQ);
- MOD_INC_USE_COUNT;
sprintf(current->comm,"khttpd manager");
lock_kernel(); /* This seems to be required for exit_mm */
int __init khttpd_init(void)
{
int I;
+
+ MOD_INC_USE_COUNT;
I=0;
while (I<CONFIG_KHTTPD_NUMCPU)
#endif
/* Rule no. 3 -- Does the file exist ? */
-
+ lock_kernel();
filp = filp_open(Filename, 0, O_RDONLY, NULL);
+ unlock_kernel();
if ((IS_ERR(filp))||(filp==NULL)||(filp->f_dentry==NULL))
{
+++ /dev/null
-***************
-*** 290,296 ****
- rpc_call_setup(struct rpc_task *task, u32 proc,
- void *argp, void *resp, int flags)
- {
-- task->tk_action = call_reserve;
- task->tk_proc = proc;
- task->tk_argp = argp;
- task->tk_resp = resp;
---- 291,297 ----
- rpc_call_setup(struct rpc_task *task, u32 proc,
- void *argp, void *resp, int flags)
- {
-+ task->tk_action = call_bind;
- task->tk_proc = proc;
- task->tk_argp = argp;
- task->tk_resp = resp;
-***************
-*** 312,322 ****
- rpc_release_task(task);
- return;
- }
-- task->tk_action = call_reserve;
- rpcproc_count(task->tk_client, task->tk_proc)++;
- }
-
- /*
- * 1. Reserve an RPC call slot
- */
- static void
---- 313,342 ----
- rpc_release_task(task);
- return;
- }
-+ task->tk_action = call_bind;
- rpcproc_count(task->tk_client, task->tk_proc)++;
- }
-
- /*
-+ * 0. Get the server port number if not yet set
-+ */
-+ static void
-+ call_bind(struct rpc_task *task)
-+ {
-+ struct rpc_clnt *clnt = task->tk_client;
-+ struct rpc_xprt *xprt = clnt->cl_xprt;
-+
-+ if (xprt->stream && !xprt->connected)
-+ task->tk_action = call_reconnect;
-+ else
-+ task->tk_action = call_reserve;
-+ task->tk_status = 0;
-+
-+ if (!clnt->cl_port)
-+ rpc_getport(task, clnt);
-+ }
-+
-+ /*
- * 1. Reserve an RPC call slot
- */
- static void
-***************
-*** 324,345 ****
- {
- struct rpc_clnt *clnt = task->tk_client;
-
-- if (task->tk_proc > clnt->cl_maxproc) {
-- printk(KERN_WARNING "%s (vers %d): bad procedure number %d\n",
-- clnt->cl_protname, clnt->cl_vers, task->tk_proc);
- rpc_exit(task, -EIO);
- return;
- }
--
-- dprintk("RPC: %4d call_reserve\n", task->tk_pid);
- if (!rpcauth_uptodatecred(task)) {
- task->tk_action = call_refresh;
- return;
- }
--
-- task->tk_status = 0;
- task->tk_action = call_reserveresult;
- task->tk_timeout = clnt->cl_timeout.to_resrvval;
- clnt->cl_stats->rpccnt++;
- xprt_reserve(task);
- }
---- 344,369 ----
- {
- struct rpc_clnt *clnt = task->tk_client;
-
-+ dprintk("RPC: %4d call_reserve\n", task->tk_pid);
-+ if (!clnt->cl_port) {
-+ printk(KERN_NOTICE "%s: couldn't bind to server %s - %s.\n",
-+ clnt->cl_protname, clnt->cl_server,
-+ clnt->cl_softrtry? "giving up" : "retrying");
-+ if (!clnt->cl_softrtry) {
-+ task->tk_action = call_bind;
-+ rpc_delay(task, 5*HZ);
-+ return;
-+ }
- rpc_exit(task, -EIO);
- return;
- }
- if (!rpcauth_uptodatecred(task)) {
- task->tk_action = call_refresh;
- return;
- }
- task->tk_action = call_reserveresult;
- task->tk_timeout = clnt->cl_timeout.to_resrvval;
-+ task->tk_status = 0;
- clnt->cl_stats->rpccnt++;
- xprt_reserve(task);
- }
-***************
-*** 452,464 ****
- req->rq_rnr = 1;
- req->rq_damaged = 0;
-
- /* Zero buffer so we have automatic zero-padding of opaque & string */
- memset(task->tk_buffer, 0, bufsiz);
-
- /* Encode header and provided arguments */
- encode = rpcproc_encode(clnt, task->tk_proc);
- if (!(p = call_header(task))) {
-- printk(KERN_INFO "RPC: call_header failed, exit EIO\n");
- rpc_exit(task, -EIO);
- } else
- if (encode && (status = encode(req, p, task->tk_argp)) < 0) {
---- 474,493 ----
- req->rq_rnr = 1;
- req->rq_damaged = 0;
-
-+ if (task->tk_proc > clnt->cl_maxproc) {
-+ printk(KERN_WARNING "%s (vers %d): bad procedure number %d\n",
-+ clnt->cl_protname, clnt->cl_vers, task->tk_proc);
-+ rpc_exit(task, -EIO);
-+ return;
-+ }
-+
- /* Zero buffer so we have automatic zero-padding of opaque & string */
- memset(task->tk_buffer, 0, bufsiz);
-
- /* Encode header and provided arguments */
- encode = rpcproc_encode(clnt, task->tk_proc);
- if (!(p = call_header(task))) {
-+ printk("RPC: call_header failed, exit EIO\n");
- rpc_exit(task, -EIO);
- } else
- if (encode && (status = encode(req, p, task->tk_argp)) < 0) {
-***************
-*** 469,527 ****
- }
-
- /*
-- * 4. Get the server port number if not yet set
- */
- static void
-- call_bind(struct rpc_task *task)
- {
-- struct rpc_clnt *clnt = task->tk_client;
-- struct rpc_xprt *xprt = clnt->cl_xprt;
-
-- task->tk_action = (xprt->connected) ? call_transmit : call_reconnect;
--
-- if (!clnt->cl_port) {
-- task->tk_action = call_reconnect;
-- task->tk_timeout = clnt->cl_timeout.to_maxval;
-- rpc_getport(task, clnt);
-- }
- }
-
- /*
-- * 4a. Reconnect to the RPC server (TCP case)
- */
- static void
-- call_reconnect(struct rpc_task *task)
- {
-- struct rpc_clnt *clnt = task->tk_client;
-
-- dprintk("RPC: %4d call_reconnect status %d\n",
-- task->tk_pid, task->tk_status);
-
-- task->tk_action = call_transmit;
-- if (task->tk_status < 0 || !clnt->cl_xprt->stream)
-- return;
-- clnt->cl_stats->netreconn++;
-- xprt_reconnect(task);
-- }
-
-- /*
-- * 5. Transmit the RPC request, and wait for reply
-- */
-- static void
-- call_transmit(struct rpc_task *task)
-- {
-- struct rpc_clnt *clnt = task->tk_client;
--
-- dprintk("RPC: %4d call_transmit (status %d)\n",
-- task->tk_pid, task->tk_status);
--
-- task->tk_action = call_status;
-- if (task->tk_status < 0)
- return;
-- xprt_transmit(task);
-- if (!rpcproc_decode(clnt, task->tk_proc)) {
-- task->tk_action = NULL;
-- rpc_wake_up_task(task);
- }
- }
-
---- 498,535 ----
- }
-
- /*
-+ * 4. Transmit the RPC request
- */
- static void
-+ call_transmit(struct rpc_task *task)
- {
-+ dprintk("RPC: %4d call_transmit (status %d)\n",
-+ task->tk_pid, task->tk_status);
-
-+ task->tk_action = call_receive;
-+ task->tk_status = 0;
-+ xprt_transmit(task);
- }
-
- /*
-+ * 5. Wait for the RPC reply
- */
- static void
-+ call_receive(struct rpc_task *task)
- {
-+ dprintk("RPC: %4d call_receive (status %d)\n",
-+ task->tk_pid, task->tk_status);
-
-+ task->tk_action = call_status;
-
-+ /* Need to ensure cleanups are performed by xprt_receive_status() */
-+ xprt_receive(task);
-
-+ /* If we have no decode function, this means we're performing
-+ * a void call (a la lockd message passing). */
-+ if (!rpcproc_decode(task->tk_client, task->tk_proc)) {
-+ task->tk_action = NULL;
- return;
- }
- }
-
}
#ifdef RPC_DEBUG
-#include <linux/nfs_fs.h>
void rpc_show_tasks(void)
{
struct rpc_task *t = all_tasks, *next;
- struct nfs_wreq *wreq;
spin_lock(&rpc_sched_lock);
t = all_tasks;
t->tk_rqstp, t->tk_timeout,
t->tk_rpcwait ? rpc_qname(t->tk_rpcwait) : " <NULL> ",
t->tk_action, t->tk_exit);
-
- if (!(t->tk_flags & RPC_TASK_NFSWRITE))
- continue;
- /* NFS write requests */
- wreq = (struct nfs_wreq *) t->tk_calldata;
- printk(" NFS: flgs=%08x, pid=%d, pg=%p, off=(%d, %d)\n",
- wreq->wb_flags, wreq->wb_pid, wreq->wb_page,
- wreq->wb_offset, wreq->wb_bytes);
- printk(" name=%s/%s\n",
- wreq->wb_file->f_dentry->d_parent->d_name.name,
- wreq->wb_file->f_dentry->d_name.name);
}
spin_unlock(&rpc_sched_lock);
}