S: Iasi 6600
S: Romania
+N: Monalisa Agrawal
+E: magrawal@nortelnetworks.com
+D: Basic Interphase 5575 driver with UBR and ABR support.
+S: 75 Donald St, Apt 42
+S: Weymouth, MA 02188
+
N: Dave Airlie
E: airlied@linux.ie
W: http://www.csn.ul.ie/~airlied
S: United Kingdom
N: Werner Almesberger
-E: werner.almesberger@lrc.di.epfl.ch
-D: dosfs, LILO, some fd features, various other hacks here and there
+E: werner.almesberger@epfl.ch
+D: dosfs, LILO, some fd features, ATM, various other hacks here and there
S: Ecole Polytechnique Federale de Lausanne
-S: DI-LRC
-S: INR (Ecublens)
+S: DSC ICA
+S: INN (Ecublens)
S: CH-1015 Lausanne
S: Switzerland
S: Notre Dame, Indiana
S: USA
+N: Greg Banks
+E: gnb@linuxfan.com
+D: IDT77105 ATM network driver
+S: NEC Australia
+S: 649-655 Springvale Rd
+S: Mulgrave, Victoria 3170
+S: Australia
+
N: James Banks
E: james@sovereign.org
D: TLAN network driver
S: NN1 3QT
S: United Kingdom
+N: Uwe Dannowski
+E: Uwe.Dannowski@ira.uka.de
+W: http://i30www.ira.uka.de/~dannowsk/
+D: FORE PCA-200E driver
+S: University of Karlsruhe
+S: Germany
+
N: Ray Dassen
E: jdassen@wi.LeidenUniv.nl
W: http://www.wi.leidenuniv.nl/~jdassen/
S: TW9 1AE
S: United Kingdom
+N: Marko Kiiskila
+E: marko@iprg.nokia.com
+D: Author of ATM Lan Emulation
+S: 660 Harvard Ave. #7
+S: Santa Clara, CA 95051
+S: USA
+
N: Russell King
E: rmk@arm.uk.linux.org
D: Linux/arm integrator, maintainer & hacker
S: (ask for current address)
S: Germany
+N: Christophe Lizzi
+E: lizzi@cnam.fr
+W: http://cedric.cnam.fr/personne/lizzi
+D: FORE Systems 200E-series ATM network driver, sparc64 port of ATM
+S: CNAM, Laboratoire CEDRIC
+S: 292, rue St-Martin
+S: 75141 Paris Cedex 03
+S: France
+
N: Siegfried "Frieder" Loeffler (dg1sek)
E: floeff@tunix.mathematik.uni-stuttgart.de, fl@LF.net
W: http://www.mathematik.uni-stuttgart.de/~floeff
E: Frederic.Potter@masi.ibp.fr
D: Some PCI kernel support
+N: Rui Prior
+E: rprior@inescn.pt
+D: ATM device driver for NICStAR based cards
+
N: Stefan Probst
E: sp@caldera.de
D: The Linux Support Team Erlangen, 1993-97
S: Berkeley, CA 94720-1776
S: USA
+N: Mike Westall
+D: IBM Turboways 25 ATM Device Driver
+E: westall@cs.clemson.edu
+S: Department of Computer Science
+S: Clemson University
+S: Clemson, SC 29634
+S: USA
+
N: Greg Wettstein
E: greg@wind.rmcc.com
D: Filesystem valid flag for MINIX filesystem.
of your ATM card below.
Note that you need a set of user-space programs to actually make use
- of ATM. See the file Documentation/atm.txt for further details.
+ of ATM. See the file Documentation/networking/atm.txt for further
+ details.
Classical IP over ATM
CONFIG_ATM_CLIP
overhead for timer synchronization and also per-packet overhead for
time conversion.
-IDT 77201 (NICStAR)
+IDT 77201/11 (NICStAR) (ForeRunnerLE)
CONFIG_ATM_NICSTAR
The NICStAR chipset family is used in a large number of ATM NICs for
25 and for 155 Mbps, including IDT cards and the Fore ForeRunnerLE
series.
+ForeRunner LE155 PHYsical layer
+CONFIG_ATM_NICSTAR_USE_SUNI
+ Support for the S-UNI and compatible PHYsical layer chips. These are
+ found in most 155Mbps NICStAR based ATM cards, namely in the
+ ForeRunner LE155 cards. This driver provides detection of cable
+ removal and reinsertion and provides some statistics. This driver
+ doesn't have removal capability when compiled as a module, so if you
+ need that capability don't include S-UNI support (it's not needed to
+ make the card work).
+
+ForeRunner LE25 PHYsical layer
+CONFIG_ATM_NICSTAR_USE_IDT77105
+ Support for the PHYsical layer chip in ForeRunner LE25 cards. In
+ addition to cable removal/reinsertion detection, this driver allows
+ you to control the loopback mode of the chip via a dedicated IOCTL.
+ This driver is required for proper handling of temporary carrier
+ loss, so if you have a 25Mbps NICStAR based ATM card you must say Y.
+
Madge Ambassador (Collage PCI 155 Server)
CONFIG_ATM_AMBASSADOR
This is a driver for ATMizer based ATM cards produced by Madge
speed of the driver, and the size of your syslog files! When
inactive, they will have only a modest impact on performance.
+Interphase ATM PCI x575/x525/x531
+CONFIG_ATM_IA
+ This is a driver for the Interphase (i)ChipSAR adapter cards
+  which come in a number of variants in terms of the size of the
+ control memory (128K-1KVC, 512K-4KVC), the size of the packet
+ memory (128K, 512K, 1M), and the PHY type (Single/Multi mode OC3,
+ UTP155, UTP25, DS3 and E3). Go to:
+ www.iphase.com/products/ClassSheet.cfm?ClassID=ATM
+ for more info about the cards. Say Y (or M to compile as a module
+ named iphase.o) here if you have one of these cards.
+
+ See the file Documentation/networking/iphase.txt for further
+ details.
+
+Enable debugging messages
+CONFIG_ATM_IA_DEBUG
+ Somewhat useful debugging messages are available. The choice of
+ messages is controlled by a bitmap. This may be specified as a
+ module argument (kernel command line argument as well?), changed
+ dynamically using an ioctl (Get the debug utility, iadbg, from
+ ftp.iphase.com/pub/atm/pci). See the file drivers/atm/iphase.h
+ for the meanings of the bits in the mask.
+
+ When active, these messages can have a significant impact on the
+ speed of the driver, and the size of your syslog files! When
+ inactive, they will have only a modest impact on performance.
+
SCSI support?
CONFIG_SCSI
If you want to use a SCSI hard disk, SCSI tape drive, SCSI CDROM or
+++ /dev/null
-In order to use anything but the most primitive functions of ATM,
-several user-mode programs are required to assist the kernel. These
-programs and related material can be found via the ATM on Linux Web
-page at http://icawww1.epfl.ch/linux-atm/
-
-If you encounter problems with ATM, please report them on the ATM
-on Linux mailing list. Subscription information, archives, etc.,
-can be found on http://icawww1.epfl.ch/linux-atm/
--- /dev/null
+In order to use anything but the most primitive functions of ATM,
+several user-mode programs are required to assist the kernel. These
+programs and related material can be found via the ATM on Linux Web
+page at http://icawww1.epfl.ch/linux-atm/
+
+If you encounter problems with ATM, please report them on the ATM
+on Linux mailing list. Subscription information, archives, etc.,
+can be found on http://icawww1.epfl.ch/linux-atm/
CONFIG_PROCFS (to see what's going on)
CONFIG_SYSCTL (for easy configuration)
-if you want to try out router support (not properly debugged and not
-complete yet), you'll need the following options as well...
+if you want to try out router support (not properly debugged yet)
+you'll need the following options as well...
CONFIG_DECNET_RAW (to receive routing packets)
CONFIG_DECNET_ROUTER (to be able to add/delete routes)
The kernel command line takes options looking like the following:
- decnet=1,2,1
+ decnet=1,2
-the first two numbers are the node address 1,2 = 1.2 For 2.2.xx kernels
+the two numbers are the node address, 1,2 = 1.2. For 2.2.xx kernels
and early 2.3.xx kernels, you must use a comma when specifying the
DECnet address like this. For more recent 2.3.xx kernels, you may
use almost any character except space, although a `.` would be the most
obvious choice :-)
-The third number is the level number for routers and is optional. In fact
-this option may go away shortly in favour if settings for each interface
-seperately. It is probably a good idea to set the DECnet address and type
-on boot like this rather than trying to do it later.
+There used to be a third number specifying the node type. This option
+has gone away in favour of a per-interface node type, which is now set
+via /proc/sys/net/decnet/conf/<dev>/forwarding. This file takes a
+single digit: 0=EndNode, 1=L1 Router and 2=L2 Router.
-There are also equivalent options for modules. The node address and type can
+There are also equivalent options for modules. The node address can
also be set through the /proc/sys/net/decnet/ files, as can other system
parameters.
-Currently the only supported device is ethernet. You'll have to set the
-ethernet address of your ethernet card according to the DECnet address
-of the node in order for it to be recognised (and thus appear in
+Currently the only supported devices are ethernet and ip_gre. The
+ethernet address of your ethernet card has to be set according to the DECnet
+address of the node in order for it to be recognised (and thus appear in
/proc/net/decnet_dev). There is a utility available at the above
FTP sites called dn2ethaddr which can compute the correct ethernet
address to use. The address can be set by ifconfig either before at
kernel subsystem is working.
- Is the node address set (see /proc/sys/net/decnet/node_address)
- - Is the node of the correct type (see /proc/sys/net/decnet/node_type)
+ - Is the node of the correct type
+ (see /proc/sys/net/decnet/conf/<dev>/forwarding)
- Is the Ethernet MAC address of each Ethernet card set to match
the DECnet address. If in doubt use the dn2ethaddr utility available
at the ftp archive.
--- /dev/null
+
+                              READ ME FIRST
+ ATM (i)Chip IA Linux Driver Source
+--------------------------------------------------------------------------------
+ Read This Before You Begin!
+--------------------------------------------------------------------------------
+
+Description
+-----------
+
+This is the README file for the Interphase PCI ATM (i)Chip IA Linux driver
+source release.
+
+The features and limitations of this driver are as follows:
+ - A single VPI (VPI value of 0) is supported.
+ - Supports 4K VCs for the server board (with 512K control memory) and 1K
+ VCs for the client board (with 128K control memory).
+ - UBR, ABR and CBR service categories are supported.
+ - Only AAL5 is supported.
+ - Supports setting of PCR on the VCs.
+ - Multiple adapters in a system are supported.
+ - All variants of Interphase ATM PCI (i)Chip adapter cards are supported,
+    including x575 (OC3, control memory 128K, 512K and packet memory 128K,
+ 512K and 1M), x525 (UTP25) and x531 (DS3 and E3). See
+ http://www.iphase.com/products/ClassSheet.cfm?ClassID=ATM
+ for details.
+ - Only x86 platforms are supported.
+ - SMP is supported.
+
+
+Before You Start
+----------------
+
+
+Installation
+------------
+
+1. Installing the adapters in the system
+ To install the ATM adapters in the system, follow the steps below.
+ a. Login as root.
+    b. Shut down and power off the system.
+ c. Install one or more ATM adapters in the system.
+ d. Connect each adapter to a port on an ATM switch. The green 'Link'
+ LED on the front panel of the adapter will be on if the adapter is
+ connected to the switch properly when the system is powered up.
+ e. Power on and boot the system.
+
+2. [ Removed ]
+
+3. Rebuild kernel with ABR support
+ [ a. and b. removed ]
+ c. Reconfigure the kernel, choose the Interphase ia driver through "make
+ menuconfig" or "make xconfig".
+ d. Rebuild the kernel, loadable modules and the atm tools.
+ e. Install the new built kernel and modules and reboot.
+
+4. Load the adapter hardware driver (ia driver) if it is built as a module
+ a. Login as root.
+ b. Change directory to /lib/modules/<kernel-version>/atm.
+ c. Run "insmod suni.o;insmod iphase.o"
+ The yellow 'status' LED on the front panel of the adapter will blink
+ while the driver is loaded in the system.
+ d. To verify that the 'ia' driver is loaded successfully, run the
+ following command:
+
+ cat /proc/atm/devices
+
+ If the driver is loaded successfully, the output of the command will
+ be similar to the following lines:
+
+ Itf Type ESI/"MAC"addr AAL(TX,err,RX,err,drop) ...
+ 0 ia xxxxxxxxx 0 ( 0 0 0 0 0 ) 5 ( 0 0 0 0 0 )
+
+ You can also check the system log file /var/log/messages for messages
+ related to the ATM driver.
+
+5. Ia Driver Configuration
+
+5.1 Configuration of adapter buffers
+ The (i)Chip boards have 3 different packet RAM size variants: 128K, 512K and
+ 1M. The RAM size decides the number of buffers and buffer size. The default
+  size and number of buffers are set as follows:
+
+    Total     Rx RAM  Tx RAM  Rx Buf  Tx Buf  Rx buf  Tx buf
+    RAM size  size    size    size    size    cnt     cnt
+    --------  ------  ------  ------  ------  ------  ------
+    128K      64K     64K     10K     10K     6       6
+    512K      256K    256K    10K     10K     25      25
+    1M        512K    512K    10K     10K     51      51
+
+  These settings should work well in most environments, but can be
+ changed by typing the following command:
+
+ insmod <IA_DIR>/ia.o IA_RX_BUF=<RX_CNT> IA_RX_BUF_SZ=<RX_SIZE> \
+ IA_TX_BUF=<TX_CNT> IA_TX_BUF_SZ=<TX_SIZE>
+ Where:
+ RX_CNT = number of receive buffers in the range (1-128)
+ RX_SIZE = size of receive buffers in the range (48-64K)
+ TX_CNT = number of transmit buffers in the range (1-128)
+ TX_SIZE = size of transmit buffers in the range (48-64K)
+
+ 1. Transmit and receive buffer size must be a multiple of 4.
+ 2. Care should be taken so that the memory required for the
+ transmit and receive buffers is less than or equal to the
+ total adapter packet memory.
+
+5.2 Turn on ia debug trace
+
+ When the ia driver is built with the CONFIG_ATM_IA_DEBUG flag, the driver
+ can provide more debug trace if needed. There is a bit mask variable,
+ IADebugFlag, which controls the output of the traces. You can find the bit
+ map of the IADebugFlag in iphase.h.
+  The debug trace can be turned on through an insmod command line option;
+  for example, "insmod iphase.o IADebugFlag=0xffffffff" turns on all the
+  debug traces when the driver is loaded.
+
+6. Ia Driver Test Using ttcp_atm and PVC
+
+ For the PVC setup, the test machines can either be connected back-to-back or
+ through a switch. If connected through the switch, the switch must be
+ configured for the PVC(s).
+
+ a. For UBR test:
+ At the test machine intended to receive data, type:
+ ttcp_atm -r -a -s 0.100
+ At the other test machine, type:
+ ttcp_atm -t -a -s 0.100 -n 10000
+ Run "ttcp_atm -h" to display more options of the ttcp_atm tool.
+ b. For ABR test:
+ It is the same as the UBR testing, but with an extra command option:
+ -Pabr:max_pcr=<xxx>
+ where:
+ xxx = the maximum peak cell rate, from 170 - 353207.
+ This option must be set on both the machines.
+ c. For CBR test:
+ It is the same as the UBR testing, but with an extra command option:
+ -Pcbr:max_pcr=<xxx>
+ where:
+ xxx = the maximum peak cell rate, from 170 - 353207.
+      This option may only be set on the transmit machine.
+
+
+OUTSTANDING ISSUES
+------------------
+
+
+
+Contact Information
+-------------------
+
+ Customer Support:
+ United States: Telephone: (214) 654-5555
+ Fax: (214) 654-5500
+ E-Mail: intouch@iphase.com
+ Europe: Telephone: 33 (0)1 41 15 44 00
+ Fax: 33 (0)1 41 15 12 13
+ World Wide Web: http://www.iphase.com
+ Anonymous FTP: ftp.iphase.com
# clear all implied options (don't want default values for those):
unset CONFIG_ALPHA_EV4 CONFIG_ALPHA_EV5 CONFIG_ALPHA_EV6
-unset CONFIG_PCI CONFIG_ALPHA_EISA
+unset CONFIG_PCI CONFIG_ISA CONFIG_ALPHA_EISA
unset CONFIG_ALPHA_LCA CONFIG_ALPHA_APECS CONFIG_ALPHA_CIA
unset CONFIG_ALPHA_T2 CONFIG_ALPHA_PYXIS CONFIG_ALPHA_POLARIS
unset CONFIG_ALPHA_TSUNAMI CONFIG_ALPHA_MCPCIA
unset CONFIG_ALPHA_IRONGATE
+# Most of these machines have ISA slots; not exactly sure which don't,
+# and this doesn't activate hordes of code, so do it always.
+define_bool CONFIG_ISA y
+
if [ "$CONFIG_ALPHA_GENERIC" = "y" ]
then
define_bool CONFIG_PCI y
{
unsigned long cpu = smp_processor_id();
+ if (!test_bit(cpu, &flush_cpumask))
+ BUG();
if (flush_mm == cpu_tlbstate[cpu].active_mm) {
if (cpu_tlbstate[cpu].state == TLBSTATE_OK) {
if (flush_va == FLUSH_ALL)
__flush_tlb_one(flush_va);
} else
leave_mm(cpu);
+ } else {
+ extern void show_stack (void *);
+ printk("hm #1: %p, %p.\n", flush_mm, cpu_tlbstate[cpu].active_mm);
+ show_stack(NULL);
}
+ __flush_tlb();
ack_APIC_irq();
clear_bit(cpu, &flush_cpumask);
}
BUG();
if (cpumask & (1 << smp_processor_id()))
BUG();
+ if (!mm)
+ BUG();
/*
* i'm not happy about this global shared spinlock in the
* MM hot path, but we'll see how contended it is.
+ * Temporarily this turns IRQs off, so that lockups are
+ * detected by the NMI watchdog.
*/
- spin_lock(&tlbstate_lock);
+ spin_lock_irq(&tlbstate_lock);
flush_mm = mm;
flush_va = va;
flush_mm = NULL;
flush_va = 0;
- spin_unlock(&tlbstate_lock);
+ spin_unlock_irq(&tlbstate_lock);
}
void flush_tlb_current_task(void)
-# $Id: config.in,v 1.84 2000/01/31 21:10:04 davem Exp $
+# $Id: config.in,v 1.85 2000/02/08 08:57:45 jj Exp $
# For a description of the syntax of this configuration file,
# see the Configure script.
#
bool 'Support for SUN4 machines (disables SUN4[CDM] support)' CONFIG_SUN4
if [ "$CONFIG_SUN4" != "y" ]; then
- bool ' Support for PCI and PS/2 keyboard/mouse' CONFIG_PCI
+ bool 'Support for PCI and PS/2 keyboard/mouse' CONFIG_PCI
source drivers/pci/Config.in
fi
CONFIG_AFFS_FS=m
# CONFIG_HFS_FS is not set
# CONFIG_BFS_FS is not set
-# CONFIG_BFS_FS_WRITE is not set
CONFIG_FAT_FS=m
CONFIG_MSDOS_FS=m
# CONFIG_UMSDOS_FS is not set
-/* $Id: ioport.c,v 1.30 2000/01/28 13:41:55 jj Exp $
+/* $Id: ioport.c,v 1.31 2000/02/06 22:55:32 zaitcev Exp $
* ioport.c: Simple io mapping allocator.
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
* Copyright (C) 1995 Miguel de Icaza (miguel@nuclecu.unam.mx)
*
* 1996: sparc_free_io, 1999: ioremap()/iounmap() by Pete Zaitcev.
+ *
+ * 2000/01/29
+ * <rth> zait: as long as pci_alloc_consistent produces something addressable,
+ * things are ok.
+ * <zaitcev> rth: no, it is relevant, because get_free_pages returns you a
+ * pointer into the big page mapping
+ * <rth> zait: so what?
+ * <rth> zait: remap_it_my_way(virt_to_phys(get_free_page()))
+ * <zaitcev> Hmm
+ * <zaitcev> Suppose I did this remap_it_my_way(virt_to_phys(get_free_page())).
+ * So far so good.
+ * <zaitcev> Now, driver calls pci_free_consistent(with result of
+ * remap_it_my_way()).
+ * <zaitcev> How do you find the address to pass to free_pages()?
+ * <rth> zait: walk the page tables? It's only two or three level after all.
+ * <rth> zait: you have to walk them anyway to remove the mapping.
+ * <zaitcev> Hmm
+ * <zaitcev> Sounds reasonable
*/
#include <linux/config.h>
#include <linux/ioport.h>
#include <linux/mm.h>
#include <linux/malloc.h>
+#include <linux/pci.h> /* struct pci_dev */
+#include <linux/proc_fs.h>
#include <asm/io.h>
#include <asm/vaddrs.h>
#include <asm/pgalloc.h>
#include <asm/pgtable.h>
-struct resource *sparc_find_resource_bystart(struct resource *, unsigned long);
-struct resource *sparc_find_resource_by_hit(struct resource *, unsigned long);
+struct resource *_sparc_find_resource(struct resource *r, unsigned long);
+int _sparc_len2order(unsigned long len);
static void *_sparc_ioremap(struct resource *res, u32 bus, u32 pa, int sz);
static void *_sparc_alloc_io(unsigned int busno, unsigned long phys,
static void _sparc_free_io(struct resource *res);
/* This points to the next to use virtual memory for DVMA mappings */
-static struct resource sparc_dvma = {
+static struct resource _sparc_dvma = {
"sparc_dvma", DVMA_VADDR, DVMA_VADDR + DVMA_LEN - 1
};
/* This points to the start of I/O mappings, cluable from outside. */
- struct resource sparc_iomap = {
+/*ext*/ struct resource sparc_iomap = {
"sparc_iomap", IOBASE_VADDR, IOBASE_END-1
};
+/*
+ * BTFIXUP would do as well but it seems overkill for the case.
+ */
+static void (*_sparc_mapioaddr)(unsigned long pa, unsigned long va,
+ int bus, int ro);
+static void (*_sparc_unmapioaddr)(unsigned long va);
+
/*
* Our mini-allocator...
* Boy this is gross! We need it because we must map I/O for
xrp->xflag = 0;
}
-/*
- */
-extern void sun4c_mapioaddr(unsigned long, unsigned long, int bus_type, int rdonly);
-extern void srmmu_mapioaddr(unsigned long, unsigned long, int bus_type, int rdonly);
-
-static void mapioaddr(unsigned long physaddr, unsigned long virt_addr,
- int bus, int rdonly)
-{
- switch(sparc_cpu_model) {
- case sun4c:
- case sun4:
- sun4c_mapioaddr(physaddr, virt_addr, bus, rdonly);
- break;
- case sun4m:
- case sun4d:
- case sun4e:
- srmmu_mapioaddr(physaddr, virt_addr, bus, rdonly);
- break;
- default:
- printk("mapioaddr: Trying to map IO space for unsupported machine.\n");
- printk("mapioaddr: sparc_cpu_model = %d\n", sparc_cpu_model);
- printk("mapioaddr: Halting...\n");
- halt();
- };
- return;
-}
-
-extern void srmmu_unmapioaddr(unsigned long virt);
-extern void sun4c_unmapioaddr(unsigned long virt);
-
-static void unmapioaddr(unsigned long virt_addr)
-{
- switch(sparc_cpu_model) {
- case sun4c:
- case sun4:
- sun4c_unmapioaddr(virt_addr);
- break;
- case sun4m:
- case sun4d:
- case sun4e:
- srmmu_unmapioaddr(virt_addr);
- break;
- default:
- printk("unmapioaddr: sparc_cpu_model = %d, halt...\n", sparc_cpu_model);
- halt();
- };
- return;
-}
-
/*
* These are typically used in PCI drivers
* which are trying to be cross-platform.
unsigned long vaddr = (unsigned long) virtual & PAGE_MASK;
struct resource *res;
- if ((res = sparc_find_resource_bystart(&sparc_iomap, vaddr)) == NULL) {
+ if ((res = _sparc_find_resource(&sparc_iomap, vaddr)) == NULL) {
printk("free_io/iounmap: cannot free %lx\n", vaddr);
return;
}
}
/*
- * Davem's version of sbus_ioremap.
*/
unsigned long sbus_ioremap(struct resource *phyres, unsigned long offset,
unsigned long size, char *name)
}
/*
- * This is called from _sparc_alloc_io only, we left it separate
- * in case Davem changes his mind about interface to sbus_ioremap().
*/
static void *
_sparc_ioremap(struct resource *res, u32 bus, u32 pa, int sz)
va = res->start;
pa &= PAGE_MASK;
for (psz = res->end - res->start + 1; psz != 0; psz -= PAGE_SIZE) {
- mapioaddr(pa, va, bus, 0);
+ (*_sparc_mapioaddr)(pa, va, bus, 0);
va += PAGE_SIZE;
pa += PAGE_SIZE;
}
plen = res->end - res->start + 1;
while (plen != 0) {
plen -= PAGE_SIZE;
- unmapioaddr(res->start + plen);
+ (*_sparc_unmapioaddr)(res->start + plen);
}
release_resource(res);
return NULL;
}
- for (order = 0; order < 6; order++) /* 2^6 pages == 256K */
- if ((1 << (order + PAGE_SHIFT)) >= len_total)
- break;
+ order = _sparc_len2order(len_total);
va = __get_free_pages(GFP_KERNEL, order);
if (va == 0) {
/*
}
memset((char*)res, 0, sizeof(struct resource));
- if (allocate_resource(&sparc_dvma, res, len_total,
- sparc_dvma.start, sparc_dvma.end, PAGE_SIZE, NULL, NULL) != 0) {
+ if (allocate_resource(&_sparc_dvma, res, len_total,
+ _sparc_dvma.start, _sparc_dvma.end, PAGE_SIZE, NULL, NULL) != 0) {
printk("sbus_alloc_consistent: cannot occupy 0x%lx", len_total);
free_pages(va, order);
kfree(res);
return NULL;
}
- *dma_addrp = res->start;
mmu_map_dma_area(va, res->start, len_total);
- /*
- * "Official" or "natural" address of pages we got is va.
- * We want to return uncached range. We could make va[len]
- * uncached but it's difficult to make cached back [P3: hmm]
- * We use the artefact of sun4c, replicated everywhere else,
- * that CPU can use bus addresses to access the same memory.
- */
- res->name = (void *)va; /* XXX Ouch.. we got to hide it somewhere */
+ *dma_addrp = res->start;
return (void *)res->start;
}
{
struct resource *res;
unsigned long pgp;
- int order;
- if ((res = sparc_find_resource_bystart(&sparc_dvma,
+ if ((res = _sparc_find_resource(&_sparc_dvma,
(unsigned long)p)) == NULL) {
printk("sbus_free_consistent: cannot free %p\n", p);
return;
return;
}
- mmu_inval_dma_area((unsigned long)res->name, n); /* XXX Ouch */
- mmu_unmap_dma_area(ba, n);
release_resource(res);
+ kfree(res);
- pgp = (unsigned long) res->name; /* XXX Ouch */
- for (order = 0; order < 6; order++)
- if ((1 << (order + PAGE_SHIFT)) >= n)
- break;
- free_pages(pgp, order);
+ /* mmu_inval_dma_area(va, n); */ /* it's consistent, isn't it */
+ pgp = (unsigned long) phys_to_virt(mmu_translate_dvma(ba));
+ mmu_unmap_dma_area(ba, n);
- kfree(res);
+ free_pages(pgp, _sparc_len2order(n));
}
/*
return 0;
}
memset((char*)res, 0, sizeof(struct resource));
- res->name = va;
+ res->name = va; /* XXX */
- if (allocate_resource(&sparc_dvma, res, len_total,
- sparc_dvma.start, sparc_dvma.end, PAGE_SIZE) != 0) {
+ if (allocate_resource(&_sparc_dvma, res, len_total,
+ _sparc_dvma.start, _sparc_dvma.end, PAGE_SIZE) != 0) {
printk("sbus_map_single: cannot occupy 0x%lx", len);
kfree(res);
return 0;
if (len > 256*1024) { /* __get_free_pages() limit */
return 0;
}
-/* BTFIXUPDEF_CALL(__u32, mmu_get_scsi_one, char *, unsigned long, struct sbus_bus *sbus) */
return mmu_get_scsi_one(va, len, sdev->bus);
#endif
}
struct resource *res;
unsigned long va;
- if ((res = sparc_find_resource_bystart(&sparc_dvma, ba)) == NULL) {
+ if ((res = _sparc_find_resource(&_sparc_dvma, ba)) == NULL) {
printk("sbus_unmap_single: cannot find %08x\n", (unsigned)ba);
return;
}
kfree(res);
#endif
#if 1 /* "trampoline" version */
-/* BTFIXUPDEF_CALL(void, mmu_release_scsi_one, __u32, unsigned long, struct sbus_bus *sbus) */
mmu_release_scsi_one(ba, n, sdev->bus);
#endif
}
int sbus_map_sg(struct sbus_dev *sdev, struct scatterlist *sg, int n)
{
-/* BTFIXUPDEF_CALL(void, mmu_get_scsi_sgl, struct scatterlist *, int, struct sbus_bus *sbus) */
mmu_get_scsi_sgl(sg, n, sdev->bus);
/*
void sbus_unmap_sg(struct sbus_dev *sdev, struct scatterlist *sg, int n)
{
-/* BTFIXUPDEF_CALL(void, mmu_release_scsi_sgl, struct scatterlist *, int, struct sbus_bus *sbus) */
mmu_release_scsi_sgl(sg, n, sdev->bus);
}
-#endif
/*
- * P3: I think a partial flush is permitted...
- * We are not too efficient at doing it though.
- *
- * If only DaveM understood a concept of an allocation cookie,
- * we could avoid find_resource_by_hit() here and a major
- * performance hit.
*/
void sbus_dma_sync_single(struct sbus_dev *sdev, u32 ba, long size)
{
unsigned long va;
struct resource *res;
- res = sparc_find_resource_by_hit(&sparc_dvma, ba);
+ /* We do not need the resource, just print a message if invalid. */
+ res = _sparc_find_resource(&_sparc_dvma, ba);
if (res == NULL)
panic("sbus_dma_sync_single: 0x%x\n", ba);
- va = (unsigned long) res->name;
- /* if (va == 0) */
-
- mmu_inval_dma_area(va, (res->end - res->start) + 1);
+ va = (unsigned long) phys_to_virt(mmu_translate_dvma(ba));
+ mmu_inval_dma_area(va, (size + PAGE_SIZE-1) & PAGE_MASK);
}
void sbus_dma_sync_sg(struct sbus_dev *sdev, struct scatterlist *sg, int n)
{
- printk("dma_sync_sg: not implemented yet\n");
+ printk("sbus_dma_sync_sg: not implemented yet\n");
+}
+#endif /* CONFIG_SBUS */
+
+#ifdef CONFIG_PCI
+
+/* Allocate and map kernel buffer using consistent mode DMA for a device.
+ * hwdev should be valid struct pci_dev pointer for PCI devices.
+ */
+void *pci_alloc_consistent(struct pci_dev *pdev, size_t len, dma_addr_t *pba)
+{
+ unsigned long len_total = (len + PAGE_SIZE-1) & PAGE_MASK;
+ unsigned long va;
+ struct resource *res;
+ int order;
+
+ if (len == 0) {
+ return NULL;
+ }
+ if (len > 256*1024) { /* __get_free_pages() limit */
+ return NULL;
+ }
+
+ order = _sparc_len2order(len_total);
+ va = __get_free_pages(GFP_KERNEL, order);
+ if (va == 0) {
+ printk("pci_alloc_consistent: no %ld pages\n", len_total>>PAGE_SHIFT);
+ return NULL;
+ }
+
+ if ((res = kmalloc(sizeof(struct resource), GFP_KERNEL)) == NULL) {
+ free_pages(va, order);
+		printk("pci_alloc_consistent: no core\n");
+ return NULL;
+ }
+ memset((char*)res, 0, sizeof(struct resource));
+
+ if (allocate_resource(&_sparc_dvma, res, len_total,
+ _sparc_dvma.start, _sparc_dvma.end, PAGE_SIZE, NULL, NULL) != 0) {
+ printk("pci_alloc_consistent: cannot occupy 0x%lx", len_total);
+ free_pages(va, order);
+ kfree(res);
+ return NULL;
+ }
+
+ mmu_inval_dma_area(va, len_total);
+
+#if 1
+/* P3 */ printk("pci_alloc_consistent: kva %lx uncva %lx phys %lx size %x\n",
+ (long)va, (long)res->start, (long)virt_to_phys(va), len_total);
+#endif
+ {
+ unsigned long xva, xpa;
+ xva = res->start;
+ xpa = virt_to_phys(va);
+ while (len_total != 0) {
+ len_total -= PAGE_SIZE;
+ (*_sparc_mapioaddr)(xpa, xva, 0, 0);
+ xva += PAGE_SIZE;
+ xpa += PAGE_SIZE;
+ }
+ }
+
+ *pba = virt_to_bus(va);
+ return (void *) res->start;
}
+/* Free and unmap a consistent DMA buffer.
+ * cpu_addr is what was returned from pci_alloc_consistent,
+ * size must be the same as what as passed into pci_alloc_consistent,
+ * and likewise dma_addr must be the same as what *dma_addrp was set to.
+ *
+ * References to the memory and mappings associated with cpu_addr/dma_addr
+ * past this call are illegal.
+ */
+void pci_free_consistent(struct pci_dev *pdev, size_t n, void *p, dma_addr_t ba)
+{
+ struct resource *res;
+ unsigned long pgp;
+
+ if ((res = _sparc_find_resource(&_sparc_dvma,
+ (unsigned long)p)) == NULL) {
+		printk("pci_free_consistent: cannot free %p\n", p);
+ return;
+ }
+
+	if (((unsigned long)p & (PAGE_SIZE-1)) != 0) {
+		printk("pci_free_consistent: unaligned va %p\n", p);
+ return;
+ }
+
+ n = (n + PAGE_SIZE-1) & PAGE_MASK;
+ if ((res->end-res->start)+1 != n) {
+		printk("pci_free_consistent: region 0x%lx asked 0x%lx\n",
+ (long)((res->end-res->start)+1), (long)n);
+ return;
+ }
+
+ pgp = (unsigned long) bus_to_virt(ba);
+ mmu_inval_dma_area(pgp, n);
+ {
+ int x;
+ for (x = 0; x < n; x += PAGE_SIZE) {
+			(*_sparc_unmapioaddr)((unsigned long)p + x);
+ }
+ }
+
+ release_resource(res);
+ kfree(res);
+
+ free_pages(pgp, _sparc_len2order(n));
+}
+
+/* Map a single buffer of the indicated size for DMA in streaming mode.
+ * The 32-bit bus address to use is returned.
+ *
+ * Once the device is given the dma address, the device owns this memory
+ * until either pci_unmap_single or pci_dma_sync_single is performed.
+ */
+dma_addr_t pci_map_single(struct pci_dev *hwdev, void *ptr, size_t size)
+{
+ return virt_to_bus(ptr);
+}
+
+/* Unmap a single streaming mode DMA translation. The dma_addr and size
+ * must match what was provided for in a previous pci_map_single call. All
+ * other usages are undefined.
+ *
+ * After this call, reads by the cpu to the buffer are guaranteed to see
+ * whatever the device wrote there.
+ */
+void pci_unmap_single(struct pci_dev *hwdev, dma_addr_t dma_addr, size_t size)
+{
+ /* Nothing to do... */
+}
+
+/* Map a set of buffers described by scatterlist in streaming
+ * mode for DMA. This is the scatter-gather version of the
+ * above pci_map_single interface. Here the scatter gather list
+ * elements are each tagged with the appropriate dma address
+ * and length. They are obtained via sg_dma_{address,length}(SG).
+ *
+ * NOTE: An implementation may be able to use a smaller number of
+ * DMA address/length pairs than there are SG table elements.
+ * (for example via virtual mapping capabilities)
+ * The routine returns the number of addr/length pairs actually
+ * used, at most nents.
+ *
+ * Device ownership issues as mentioned above for pci_map_single are
+ * the same here.
+ */
+int pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents)
+{
+ int n;
+ for (n = 0; n < nents; n++) {
+ sg->dvma_address = virt_to_bus(sg->address);
+ sg->dvma_length = sg->length;
+ sg++;
+ }
+ return nents;
+}
+
+/* Unmap a set of streaming mode DMA translations.
+ * Again, cpu read rules concerning calls here are the same as for
+ * pci_unmap_single() above.
+ */
+void pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nhwents)
+{
+ /* Nothing to do... */
+}
+
+/* Make physical memory consistent for a single
+ * streaming mode DMA translation after a transfer.
+ *
+ * If you perform a pci_map_single() but wish to interrogate the
+ * buffer using the cpu, yet do not wish to teardown the PCI dma
+ * mapping, you must call this function before doing so. At the
+ * next point you give the PCI dma address back to the card, the
+ * device again owns the buffer.
+ */
+void pci_dma_sync_single(struct pci_dev *hwdev, dma_addr_t ba, size_t size)
+{
+ mmu_inval_dma_area((unsigned long)bus_to_virt(ba),
+ (size + PAGE_SIZE-1) & PAGE_MASK);
+}
+
+/* Make physical memory consistent for a set of streaming
+ * mode DMA translations after a transfer.
+ *
+ * The same as pci_dma_sync_single but for a scatter-gather list,
+ * same rules and usage.
+ */
+void pci_dma_sync_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents)
+{
+ while (nents) {
+ --nents;
+ mmu_inval_dma_area((unsigned long)sg->address,
+ (sg->dvma_length + PAGE_SIZE-1) & PAGE_MASK);
+ sg++;
+ }
+}
+#endif /* CONFIG_PCI */
+
+#ifdef CONFIG_PROC_FS
+
+static int
+_sparc_io_get_info(char *buf, char **start, off_t fpos, int length, int *eof,
+ void *data)
+{
+ char *p = buf, *e = buf + length;
+ struct resource *r;
+ const char *nm;
+
+ for (r = ((struct resource *)data)->child; r != NULL; r = r->sibling) {
+ if (p + 32 >= e) /* Better than nothing */
+ break;
+ if ((nm = r->name) == 0) nm = "???";
+ p += sprintf(p, "%08lx-%08lx: %s\n", r->start, r->end, nm);
+ }
+
+ return p-buf;
+}
+
+static struct proc_dir_entry _sparc_iomap_proc_entry = {
+ 0, /* Inode number - dynamic */
+ 6, /* Length of the file name */
+ "io_map", /* The file name */
+ S_IFREG | S_IRUGO, /* File mode */
+ 1, /* Number of links */
+ 0, 0, /* The uid and gid for the file */
+ 0, /* The size of the file reported by ls. */
+ NULL, /* struct inode_operations * ops */
+ NULL, /* get_info: backward compatibility */
+ NULL, /* owner */
+ NULL, NULL, NULL, /* linkage */
+ &sparc_iomap,
+ _sparc_io_get_info, /* The read function for this file */
+ NULL,
+ /* and more stuff */
+};
+
+static struct proc_dir_entry _sparc_dvma_proc_entry = {
+ 0, /* Inode number - dynamic */
+ 8, /* Length of the file name */
+ "dvma_map", /* The file name */
+ S_IFREG | S_IRUGO, /* File mode */
+ 1, /* Number of links */
+ 0, 0, /* The uid and gid for the file */
+ 0, /* The size of the file reported by ls. */
+ NULL, /* struct inode_operations * ops */
+ NULL, /* get_info: backward compatibility */
+ NULL, /* owner */
+ NULL, NULL, NULL, /* linkage */
+ &_sparc_dvma,
+ _sparc_io_get_info,
+ NULL,
+ /* some more stuff */
+};
+
+#endif /* CONFIG_PROC_FS */
+
/*
* This is a version of find_resource and it belongs to kernel/resource.c.
* Until we have agreement with Linus and Martin, it lingers here.
*
- * "same start" is more strict than "hit into"
+ * XXX Too slow. Can have 8192 DVMA pages on sun4m in the worst case.
+ * This probably warrants some sort of hashing.
*/
struct resource *
-sparc_find_resource_bystart(struct resource *root, unsigned long start)
+_sparc_find_resource(struct resource *root, unsigned long hit)
{
struct resource *tmp;
- for (tmp = root->child; tmp != 0; tmp = tmp->sibling) {
- if (tmp->start == start)
+ for (tmp = root->child; tmp != 0; tmp = tmp->sibling) {
+ if (tmp->start <= hit && tmp->end >= hit)
return tmp;
- }
- return NULL;
+ }
+ return NULL;
}
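The hit test in _sparc_find_resource() treats both bounds as inclusive: an address matches when start <= hit <= end. A user-space model of the sibling-list walk, with made-up resource ranges:

```c
#include <stddef.h>

/* Toy model of the linear search in _sparc_find_resource(); the ranges
 * below are arbitrary examples, not real sparc I/O windows. */
struct toy_resource {
    unsigned long start, end;       /* inclusive bounds */
    struct toy_resource *sibling;
};

static struct toy_resource toy_r2 = { 0x2000, 0x2fff, NULL };
static struct toy_resource toy_r1 = { 0x1000, 0x10ff, &toy_r2 };

static struct toy_resource *toy_find(struct toy_resource *child,
                                     unsigned long hit)
{
    struct toy_resource *tmp;

    for (tmp = child; tmp != NULL; tmp = tmp->sibling)
        if (tmp->start <= hit && tmp->end >= hit)
            return tmp;
    return NULL;
}
```

As the XXX comment notes, this walk is O(n) in the number of children; with thousands of DVMA pages a hash would be preferable.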
-struct resource *
-sparc_find_resource_by_hit(struct resource *root, unsigned long hit)
+int
+_sparc_len2order(unsigned long len)
{
- struct resource *tmp;
+ int order;
- for (tmp = root->child; tmp != 0; tmp = tmp->sibling) {
- if (tmp->start <= hit && tmp->end >= hit)
- return tmp;
- }
- return NULL;
+ for (order = 0; order < 7; order++) /* 2^6 pages == 256K */
+ if ((1 << (order + PAGE_SHIFT)) >= len)
+ return order;
+ printk("len2order: from %p: len %lu(0x%lx) yields order >=7.\n",
+ __builtin_return_address(0), len, len);
+ return 1;
+}
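The order computation above picks the smallest power-of-two page count covering a length. A user-space model, assuming PAGE_SHIFT = 12 (4K pages) as on sparc32:

```c
/* Model of _sparc_len2order() above: the smallest order such that
 * 2^(order + PAGE_SHIFT) >= len. Orders 0..6 cover up to 256K. */
#define TOY_PAGE_SHIFT 12

static int toy_len2order(unsigned long len)
{
    int order;

    for (order = 0; order < 7; order++)
        if ((1UL << (order + TOY_PAGE_SHIFT)) >= len)
            return order;
    return -1; /* too big; the kernel version logs and returns 1 */
}
```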
+
+/*
+ * Necessary boot time initializations.
+ */
+
+void ioport_init(void)
+{
+ extern void sun4c_mapioaddr(unsigned long, unsigned long, int, int);
+ extern void srmmu_mapioaddr(unsigned long, unsigned long, int, int);
+ extern void sun4c_unmapioaddr(unsigned long);
+ extern void srmmu_unmapioaddr(unsigned long);
+
+ switch(sparc_cpu_model) {
+ case sun4c:
+ case sun4:
+ case sun4e:
+ _sparc_mapioaddr = sun4c_mapioaddr;
+ _sparc_unmapioaddr = sun4c_unmapioaddr;
+ break;
+ case sun4m:
+ case sun4d:
+ _sparc_mapioaddr = srmmu_mapioaddr;
+ _sparc_unmapioaddr = srmmu_unmapioaddr;
+ break;
+ default:
+ printk("ioport_init: cpu type %d is unknown.\n",
+ sparc_cpu_model);
+ halt();
+ }
+
+#ifdef CONFIG_PROC_FS
+ proc_register(&proc_root, &_sparc_iomap_proc_entry);
+ proc_register(&proc_root, &_sparc_dvma_proc_entry);
+#endif
}
-/* $Id: sys_sparc.c,v 1.59 2000/01/29 07:40:10 davem Exp $
+/* $Id: sys_sparc.c,v 1.60 2000/02/08 20:24:18 davem Exp $
* linux/arch/sparc/kernel/sys_sparc.c
*
* This file contains various random system calls that
-/* $Id: io-unit.c,v 1.20 2000/01/15 00:51:27 anton Exp $
+/* $Id: io-unit.c,v 1.21 2000/02/06 22:55:45 zaitcev Exp $
* io-unit.c: IO-UNIT specific routines for memory management.
*
* Copyright (C) 1997,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
static void iounit_unmap_dma_area(unsigned long addr, int len)
{
+ /* XXX Somebody please fill this in */
+}
+
+/* XXX We do not pass sbus device here, bad. */
+static unsigned long iounit_translate_dvma(unsigned long addr)
+{
+ struct sbus_bus *sbus = sbus_root; /* They are all the same */
+ struct iounit_struct *iounit = (struct iounit_struct *)sbus->iommu;
+ int i;
+ iopte_t *iopte;
+
+ i = ((addr - IOUNIT_DMA_BASE) >> PAGE_SHIFT);
+ iopte = (iopte_t *)(iounit->page_table + i);
+ return (iopte_val(*iopte) & 0xFFFFFFF0) << 4; /* XXX sun4d guru, help */
}
#endif
#ifdef CONFIG_SBUS
BTFIXUPSET_CALL(mmu_map_dma_area, iounit_map_dma_area, BTFIXUPCALL_NORM);
BTFIXUPSET_CALL(mmu_unmap_dma_area, iounit_unmap_dma_area, BTFIXUPCALL_NORM);
+ BTFIXUPSET_CALL(mmu_translate_dvma, iounit_translate_dvma, BTFIXUPCALL_NORM);
#endif
}
-/* $Id: iommu.c,v 1.18 2000/01/15 00:51:27 anton Exp $
+/* $Id: iommu.c,v 1.19 2000/02/06 22:55:45 zaitcev Exp $
* iommu.c: IOMMU specific routines for memory management.
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
iommu_invalidate(iommu->regs);
}
-static void iommu_unmap_dma_area(unsigned long addr, int len)
+static void iommu_unmap_dma_area(unsigned long busa, int len)
{
+ struct iommu_struct *iommu = sbus_root->iommu;
+ iopte_t *iopte = iommu->page_table;
+ unsigned long end;
+
+ iopte += ((busa - iommu->start) >> PAGE_SHIFT);
+ end = PAGE_ALIGN((busa + len));
+ while (busa < end) {
+ iopte_val(*iopte++) = 0;
+ busa += PAGE_SIZE;
+ }
+ flush_tlb_all(); /* P3: Hmm... it would not hurt. */
+ iommu_invalidate(iommu->regs);
+}
+
+static unsigned long iommu_translate_dvma(unsigned long busa)
+{
+ struct iommu_struct *iommu = sbus_root->iommu;
+ iopte_t *iopte = iommu->page_table;
+ unsigned long pa;
+
+ iopte += ((busa - iommu->start) >> PAGE_SHIFT);
+ pa = pte_val(*iopte);
+ pa = (pa & 0xFFFFFFF0) << 4; /* Lose higher bits of 36 */
+ return pa + PAGE_OFFSET;
}
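The mask-and-shift in iommu_translate_dvma() rebuilds a 36-bit physical address from an iopte that stores physical-address bits [35:8] in pte bits [31:4], with flag bits below. A round-trip sketch; the packing helper is an assumption added only to exercise the unpacking math, not something the kernel code contains:

```c
/* Unpacking as done by iommu_translate_dvma() above: strip the low
 * flag bits, shift left 4 to recover the 36-bit physical address. */
static unsigned long long toy_iopte_to_pa(unsigned long pte)
{
    return (unsigned long long)(pte & 0xFFFFFFF0UL) << 4;
}

/* Hypothetical inverse, for the round-trip check only. */
static unsigned long toy_pa_to_iopte(unsigned long long pa,
                                     unsigned int flags)
{
    return (unsigned long)((pa >> 4) & 0xFFFFFFF0UL) | (flags & 0xFUL);
}
```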
#endif
#ifdef CONFIG_SBUS
BTFIXUPSET_CALL(mmu_map_dma_area, iommu_map_dma_area, BTFIXUPCALL_NORM);
BTFIXUPSET_CALL(mmu_unmap_dma_area, iommu_unmap_dma_area, BTFIXUPCALL_NORM);
+ BTFIXUPSET_CALL(mmu_translate_dvma, iommu_translate_dvma, BTFIXUPCALL_NORM);
#endif
}
-/* $Id: loadmmu.c,v 1.54 2000/01/29 01:09:07 anton Exp $
+/* $Id: loadmmu.c,v 1.56 2000/02/08 20:24:21 davem Exp $
* loadmmu.c: This code loads up all the mm function pointers once the
* machine type has been determined. It also sets the static
* mmu values such as PAGE_NONE, etc.
extern void ld_mmu_sun4c(void);
extern void ld_mmu_srmmu(void);
+extern void ioport_init(void);
void __init load_mmu(void)
{
prom_halt();
}
btfixup();
+ ioport_init();
}
-/* $Id: srmmu.c,v 1.205 2000/01/21 17:59:46 anton Exp $
+/* $Id: srmmu.c,v 1.206 2000/02/08 07:45:59 davem Exp $
* srmmu.c: SRMMU specific routines for memory management.
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
sparc_context_init(num_contexts);
{
- unsigned int zones_size[MAX_NR_ZONES] = { 0, 0, 0};
+ unsigned long zones_size[MAX_NR_ZONES] = { 0, 0, 0};
zones_size[ZONE_DMA] = end_pfn;
free_area_init(zones_size);
-/* $Id: sun4c.c,v 1.185 2000/01/15 00:51:32 anton Exp $
+/* $Id: sun4c.c,v 1.187 2000/02/08 07:46:01 davem Exp $
* sun4c.c: Doing in software what should be done in hardware.
*
* Copyright (C) 1996 David S. Miller (davem@caip.rutgers.edu)
}
}
-static void sun4c_unmap_dma_area(unsigned long addr, int len)
+static unsigned long sun4c_translate_dvma(unsigned long busa)
{
+ /* Fortunately for us, bus_addr == uncached_virt in sun4c. */
+ unsigned long pte = sun4c_get_pte(busa);
+ return (pte << PAGE_SHIFT) + PAGE_OFFSET;
}
-static void sun4c_inval_dma_area(unsigned long addr, int len)
+static void sun4c_unmap_dma_area(unsigned long busa, int len)
{
+ /* Fortunately for us, bus_addr == uncached_virt in sun4c. */
+ /* XXX Implement this */
}
-static void sun4c_flush_dma_area(unsigned long addr, int len)
+static void sun4c_inval_dma_area(unsigned long virt, int len)
+{
+}
+
+static void sun4c_flush_dma_area(unsigned long virt, int len)
{
}
sparc_context_init(num_contexts);
{
- unsigned int zones_size[MAX_NR_ZONES] = { 0, 0, 0};
+ unsigned long zones_size[MAX_NR_ZONES] = { 0, 0, 0};
zones_size[ZONE_DMA] = end_pfn;
free_area_init(zones_size);
BTFIXUPSET_CALL(mmu_map_dma_area, sun4c_map_dma_area, BTFIXUPCALL_NORM);
BTFIXUPSET_CALL(mmu_unmap_dma_area, sun4c_unmap_dma_area, BTFIXUPCALL_NORM);
+ BTFIXUPSET_CALL(mmu_translate_dvma, sun4c_translate_dvma, BTFIXUPCALL_NORM);
BTFIXUPSET_CALL(mmu_flush_dma_area, sun4c_flush_dma_area, BTFIXUPCALL_NOP);
BTFIXUPSET_CALL(mmu_inval_dma_area, sun4c_inval_dma_area, BTFIXUPCALL_NORM);
-/* $Id: bootstr.c,v 1.19 2000/01/29 01:09:11 anton Exp $
+/* $Id: bootstr.c,v 1.20 2000/02/08 20:24:23 davem Exp $
* bootstr.c: Boot string/argument acquisition from the PROM.
*
* Copyright(C) 1995 David S. Miller (davem@caip.rutgers.edu)
-/* $Id: console.c,v 1.21 2000/01/29 01:09:12 anton Exp $
+/* $Id: console.c,v 1.22 2000/02/08 20:24:23 davem Exp $
* console.c: Routines that deal with sending and receiving IO
* to/from the current console device using the PROM.
*
-/* $Id: printf.c,v 1.6 2000/01/29 01:09:12 anton Exp $
+/* $Id: printf.c,v 1.7 2000/02/08 20:24:23 davem Exp $
* printf.c: Internal prom library printf facility.
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
.debug_pubnames 0 : { *(.debug_pubnames) }
.debug_sfnames 0 : { *(.debug_sfnames) }
.line 0 : { *(.line) }
+ /DISCARD/ : { *(.text.exit) *(.data.exit) }
}
-# $Id: config.in,v 1.89 2000/01/31 21:10:10 davem Exp $
+# $Id: config.in,v 1.94 2000/02/08 08:57:50 jj Exp $
# For a description of the syntax of this configuration file,
# see the Configure script.
#
bool 'Symmetric multi-processing support' CONFIG_SMP
-mainmenu_option next_comment
-comment 'Console drivers'
-bool 'PROM console' CONFIG_PROM_CONSOLE
-bool 'Support Frame buffer devices' CONFIG_FB
-source drivers/video/Config.in
-endmenu
-
# Global things across all Sun machines.
define_bool CONFIG_SBUS y
define_bool CONFIG_SBUSCHAR y
define_bool CONFIG_SUN_IO y
bool 'PCI support' CONFIG_PCI
source drivers/pci/Config.in
+
+mainmenu_option next_comment
+comment 'Console drivers'
+bool 'PROM console' CONFIG_PROM_CONSOLE
+bool 'Support Frame buffer devices' CONFIG_FB
+source drivers/video/Config.in
+endmenu
+
source drivers/sbus/char/Config.in
source drivers/sbus/audio/Config.in
bool ' Keepalive and linefill' CONFIG_SLIP_SMART
bool ' Six bit SLIP encapsulation' CONFIG_SLIP_MODE_SLIP6
fi
- bool ' Sun LANCE support' CONFIG_SUNLANCE
- tristate ' Sun Happy Meal 10/100baseT support' CONFIG_HAPPYMEAL
+
+ mainmenu_option next_comment
+ comment 'Ethernet (10 or 100Mbit)'
+
+ bool 'Sun LANCE support' CONFIG_SUNLANCE
+ tristate 'Sun Happy Meal 10/100baseT support' CONFIG_HAPPYMEAL
if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
- tristate ' Sun BigMAC 10/100baseT support (EXPERIMENTAL)' CONFIG_SUNBMAC
+ tristate 'Sun BigMAC 10/100baseT support (EXPERIMENTAL)' CONFIG_SUNBMAC
fi
tristate ' Sun QuadEthernet support' CONFIG_SUNQE
- tristate ' MyriCOM Gigabit Ethernet support' CONFIG_MYRI_SBUS
if [ "$CONFIG_PCI" = "y" ]; then
- tristate ' Generic DECchip & DIGITAL EtherWORKS PCI/EISA' CONFIG_DE4X5
- tristate ' 3c590/3c900 series (592/595/597) "Vortex/Boomerang" support' CONFIG_VORTEX
- tristate ' RealTek 8129/8139 (not 8019/8029!) support' CONFIG_RTL8139
- tristate ' PCI NE2000 support' CONFIG_NE2K_PCI
- tristate ' EtherExpressPro/100 support' CONFIG_EEXPRESS_PRO100
- tristate ' Adaptec Starfire support' CONFIG_ADAPTEC_STARFIRE
+ tristate 'Generic DECchip & DIGITAL EtherWORKS PCI/EISA' CONFIG_DE4X5
+ tristate '3c590/3c900 series (592/595/597) "Vortex/Boomerang" support' CONFIG_VORTEX
+ tristate 'RealTek 8129/8139 (not 8019/8029!) support' CONFIG_RTL8139
+ tristate 'PCI NE2000 support' CONFIG_NE2K_PCI
+ tristate 'EtherExpressPro/100 support' CONFIG_EEXPRESS_PRO100
+ tristate 'Adaptec Starfire support' CONFIG_ADAPTEC_STARFIRE
fi
+ endmenu
+
+ mainmenu_option next_comment
+ comment 'Ethernet (1000 Mbit)'
+
+ if [ "$CONFIG_PCI" = "y" ]; then
+ tristate 'Alteon AceNIC/3Com 3C985/NetGear GA620 Gigabit support' CONFIG_ACENIC
+ if [ "$CONFIG_ACENIC" != "n" ]; then
+ bool ' Omit support for old Tigon I based AceNICs' CONFIG_ACENIC_OMIT_TIGON_I
+ fi
+ tristate 'SysKonnect SK-98xx support' CONFIG_SK98LIN
+ fi
+ tristate 'MyriCOM Gigabit Ethernet support' CONFIG_MYRI_SBUS
+ endmenu
+
# bool ' FDDI driver support' CONFIG_FDDI
# if [ "$CONFIG_FDDI" = "y" ]; then
# fi
+
+ if [ "$CONFIG_ATM" = "y" ]; then
+ source drivers/atm/Config.in
+ fi
fi
-endmenu
+ endmenu
fi
# This one must be before the filesystem configs. -DaveM
CONFIG_VT=y
CONFIG_VT_CONSOLE=y
# CONFIG_SMP is not set
+CONFIG_SBUS=y
+CONFIG_SBUSCHAR=y
+CONFIG_BUSMOUSE=y
+CONFIG_SUN_MOUSE=y
+CONFIG_SERIAL=y
+CONFIG_SUN_SERIAL=y
+CONFIG_SERIAL_CONSOLE=y
+CONFIG_SUN_KEYBOARD=y
+CONFIG_SUN_CONSOLE=y
+CONFIG_SUN_AUXIO=y
+CONFIG_SUN_IO=y
+CONFIG_PCI=y
+CONFIG_PCI_NAMES=y
#
# Console drivers
CONFIG_FBCON_FONTWIDTH8_ONLY=y
CONFIG_FONT_SUN8x16=y
# CONFIG_FBCON_FONTS is not set
-CONFIG_SBUS=y
-CONFIG_SBUSCHAR=y
-CONFIG_BUSMOUSE=y
-CONFIG_SUN_MOUSE=y
-CONFIG_SERIAL=y
-CONFIG_SUN_SERIAL=y
-CONFIG_SERIAL_CONSOLE=y
-CONFIG_SUN_KEYBOARD=y
-CONFIG_SUN_CONSOLE=y
-CONFIG_SUN_AUXIO=y
-CONFIG_SUN_IO=y
-CONFIG_PCI=y
-CONFIG_PCI_NAMES=y
#
# Misc Linux/SPARC drivers
# CONFIG_SUN_BPP is not set
# CONFIG_SUN_VIDEOPIX is not set
CONFIG_SUN_AURORA=m
-# CONFIG_TADPOLE_TS102_UCTRL is not set
-# CONFIG_SUN_JSFLASH is not set
-CONFIG_APM_RTC_IS_GMT=y
-# CONFIG_RTC is not set
#
# Linux/SPARC audio subsystem (EXPERIMENTAL)
CONFIG_SLIP_COMPRESSED=y
CONFIG_SLIP_SMART=y
# CONFIG_SLIP_MODE_SLIP6 is not set
+
+#
+# Ethernet (10 or 100Mbit)
+#
CONFIG_SUNLANCE=y
CONFIG_HAPPYMEAL=y
CONFIG_SUNBMAC=m
CONFIG_SUNQE=m
-CONFIG_MYRI_SBUS=m
CONFIG_DE4X5=m
CONFIG_VORTEX=m
CONFIG_RTL8139=m
CONFIG_EEXPRESS_PRO100=m
CONFIG_ADAPTEC_STARFIRE=m
+#
+# Ethernet (1000 Mbit)
+#
+CONFIG_ACENIC=m
+# CONFIG_ACENIC_OMIT_TIGON_I is not set
+CONFIG_SK98LIN=m
+CONFIG_MYRI_SBUS=m
+
#
# Unix 98 PTY support
#
CONFIG_AFFS_FS=m
# CONFIG_HFS_FS is not set
CONFIG_BFS_FS=m
-# CONFIG_BFS_FS_WRITE is not set
CONFIG_FAT_FS=m
CONFIG_MSDOS_FS=m
# CONFIG_UMSDOS_FS is not set
-# $Id: Makefile,v 1.50 1999/12/21 04:02:24 davem Exp $
+# $Id: Makefile,v 1.51 2000/02/08 05:11:31 jj Exp $
# Makefile for the linux kernel.
#
# Note! Dependencies are done automagically by 'make dep', which also
O_OBJS := process.o setup.o cpu.o idprom.o \
traps.o devices.o auxio.o \
irq.o ptrace.o time.o sys_sparc.o signal.o \
- unaligned.o central.o pci.o pci_common.o pci_iommu.o \
- pci_psycho.o pci_sabre.o starfire.o semaphore.o \
+ unaligned.o central.o pci.o starfire.o semaphore.o \
power.o sbus.o iommu_common.o
OX_OBJS := sparc64_ksyms.o
ifdef CONFIG_PCI
- O_OBJS += ebus.o
+ O_OBJS += ebus.o pci_common.o pci_iommu.o \
+ pci_psycho.o pci_sabre.o
endif
ifdef CONFIG_SUNOS_EMUL
-/* $Id: ioctl32.c,v 1.76 2000/01/31 21:10:15 davem Exp $
+/* $Id: ioctl32.c,v 1.79 2000/02/08 20:24:25 davem Exp $
* ioctl32.c: Conversion between 32bit and 64bit native ioctls.
*
* Copyright (C) 1997 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
#include <linux/if.h>
#include <linux/malloc.h>
#include <linux/hdreg.h>
+#if 0 /* New RAID code is half-merged... -DaveM */
#include <linux/md.h>
+#endif
#include <linux/kd.h>
#include <linux/route.h>
#include <linux/skbuff.h>
#include <linux/soundcard.h>
+#include <linux/atm.h>
+#include <linux/atmarp.h>
+#include <linux/atmclip.h>
+#include <linux/atmdev.h>
+#include <linux/atmioc.h>
+#include <linux/atmlec.h>
+#include <linux/atmmpc.h>
+#include <linux/atmsvc.h>
+#include <linux/atm_tcp.h>
+#include <linux/sonet.h>
+#include <linux/atm_suni.h>
+
/* Use this to get at 32-bit user passed pointers.
See sys_sparc32.c for description about these. */
#define A(__x) ((unsigned long)(__x))
return err;
}
+struct atmif_sioc32 {
+ int number;
+ int length;
+ __kernel_caddr_t32 arg;
+};
+
+struct atm_iobuf32 {
+ int length;
+ __kernel_caddr_t32 buffer;
+};
+
+#define ATM_GETLINKRATE32 _IOW('a', ATMIOC_ITF+1, struct atmif_sioc32)
+#define ATM_GETNAMES32 _IOW('a', ATMIOC_ITF+3, struct atm_iobuf32)
+#define ATM_GETTYPE32 _IOW('a', ATMIOC_ITF+4, struct atmif_sioc32)
+#define ATM_GETESI32 _IOW('a', ATMIOC_ITF+5, struct atmif_sioc32)
+#define ATM_GETADDR32 _IOW('a', ATMIOC_ITF+6, struct atmif_sioc32)
+#define ATM_RSTADDR32 _IOW('a', ATMIOC_ITF+7, struct atmif_sioc32)
+#define ATM_ADDADDR32 _IOW('a', ATMIOC_ITF+8, struct atmif_sioc32)
+#define ATM_DELADDR32 _IOW('a', ATMIOC_ITF+9, struct atmif_sioc32)
+#define ATM_GETCIRANGE32 _IOW('a', ATMIOC_ITF+10, struct atmif_sioc32)
+#define ATM_SETCIRANGE32 _IOW('a', ATMIOC_ITF+11, struct atmif_sioc32)
+#define ATM_SETESI32 _IOW('a', ATMIOC_ITF+12, struct atmif_sioc32)
+#define ATM_SETESIF32 _IOW('a', ATMIOC_ITF+13, struct atmif_sioc32)
+#define ATM_GETSTAT32 _IOW('a', ATMIOC_SARCOM+0, struct atmif_sioc32)
+#define ATM_GETSTATZ32 _IOW('a', ATMIOC_SARCOM+1, struct atmif_sioc32)
+
+static struct {
+ unsigned int cmd32;
+ unsigned int cmd;
+} atm_ioctl_map[] = {
+ { ATM_GETLINKRATE32, ATM_GETLINKRATE },
+ { ATM_GETNAMES32, ATM_GETNAMES },
+ { ATM_GETTYPE32, ATM_GETTYPE },
+ { ATM_GETESI32, ATM_GETESI },
+ { ATM_GETADDR32, ATM_GETADDR },
+ { ATM_RSTADDR32, ATM_RSTADDR },
+ { ATM_ADDADDR32, ATM_ADDADDR },
+ { ATM_DELADDR32, ATM_DELADDR },
+ { ATM_GETCIRANGE32, ATM_GETCIRANGE },
+ { ATM_SETCIRANGE32, ATM_SETCIRANGE },
+ { ATM_SETESI32, ATM_SETESI },
+ { ATM_SETESIF32, ATM_SETESIF },
+ { ATM_GETSTAT32, ATM_GETSTAT },
+ { ATM_GETSTATZ32, ATM_GETSTATZ }
+};
+
+#define NR_ATM_IOCTL (sizeof(atm_ioctl_map)/sizeof(atm_ioctl_map[0]))
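The cmd32 -> cmd table lookup used by do_atm_ioctl() below is a common compat-ioctl pattern: scan a static map and fall through to an error when nothing matches. A sketch with made-up command numbers (the real ones come from the _IOW() macros):

```c
#include <stddef.h>

/* Sketch of the translation-table pattern; the command values are
 * arbitrary placeholders, not real ATM ioctl numbers. */
struct toy_cmd_map {
    unsigned int cmd32;
    unsigned int cmd;
};

static const struct toy_cmd_map toy_map[] = {
    { 0x80045001u, 0xC0085001u },
    { 0x80045002u, 0xC0085002u },
};

#define TOY_NR_CMDS (sizeof(toy_map) / sizeof(toy_map[0]))

/* Returns the native command, or 0 when the 32-bit command is unknown
 * (the kernel code returns -EINVAL on that path instead). */
static unsigned int toy_translate(unsigned int cmd32)
{
    size_t i;

    for (i = 0; i < TOY_NR_CMDS; i++)
        if (toy_map[i].cmd32 == cmd32)
            return toy_map[i].cmd;
    return 0;
}
```

The `sizeof(array)/sizeof(array[0])` idiom keeps NR_ATM_IOCTL correct automatically as entries are added to the table.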
+
+
+static int do_atm_iobuf(unsigned int fd, unsigned int cmd, unsigned long arg)
+{
+ struct atm_iobuf32 iobuf32;
+ struct atm_iobuf iobuf = { 0, NULL };
+ mm_segment_t old_fs;
+ int err;
+
+ err = copy_from_user(&iobuf32, (struct atm_iobuf32*)arg,
+ sizeof(struct atm_iobuf32));
+ if (err)
+ return -EFAULT;
+
+ iobuf.length = iobuf32.length;
+
+ if (iobuf32.buffer == (__kernel_caddr_t32) NULL || iobuf32.length == 0) {
+ iobuf.buffer = (void*)(unsigned long)iobuf32.buffer;
+ } else {
+ iobuf.buffer = kmalloc(iobuf.length, GFP_KERNEL);
+ if (iobuf.buffer == NULL) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ err = copy_from_user(iobuf.buffer, A(iobuf32.buffer), iobuf.length);
+ if (err) {
+ err = -EFAULT;
+ goto out;
+ }
+ }
+
+ old_fs = get_fs(); set_fs (KERNEL_DS);
+ err = sys_ioctl (fd, cmd, (unsigned long)&iobuf);
+ set_fs (old_fs);
+ if(err)
+ goto out;
+
+ if(iobuf.buffer && iobuf.length > 0) {
+ err = copy_to_user(A(iobuf32.buffer), iobuf.buffer, iobuf.length);
+ if (err) {
+ err = -EFAULT;
+ goto out;
+ }
+ }
+ err = __put_user(iobuf.length, &(((struct atm_iobuf32*)arg)->length));
+
+ out:
+ if(iobuf32.buffer && iobuf32.length > 0)
+ kfree(iobuf.buffer);
+
+ return err;
+}
+
+
+static int do_atmif_sioc(unsigned int fd, unsigned int cmd, unsigned long arg)
+{
+ struct atmif_sioc32 sioc32;
+ struct atmif_sioc sioc = { 0, 0, NULL };
+ mm_segment_t old_fs;
+ int err;
+
+ err = copy_from_user(&sioc32, (struct atmif_sioc32*)arg,
+ sizeof(struct atmif_sioc32));
+ if (err)
+ return -EFAULT;
+
+ sioc.number = sioc32.number;
+ sioc.length = sioc32.length;
+
+ if (sioc32.arg == (__kernel_caddr_t32) NULL || sioc32.length == 0) {
+ sioc.arg = (void*)(unsigned long)sioc32.arg;
+ } else {
+ sioc.arg = kmalloc(sioc.length, GFP_KERNEL);
+ if (sioc.arg == NULL) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ err = copy_from_user(sioc.arg, A(sioc32.arg), sioc32.length);
+ if (err) {
+ err = -EFAULT;
+ goto out;
+ }
+ }
+
+ old_fs = get_fs(); set_fs (KERNEL_DS);
+ err = sys_ioctl (fd, cmd, (unsigned long)&sioc);
+ set_fs (old_fs);
+ if(err) {
+ goto out;
+ }
+
+ if(sioc.arg && sioc.length > 0) {
+ err = copy_to_user(A(sioc32.arg), sioc.arg, sioc.length);
+ if (err) {
+ err = -EFAULT;
+ goto out;
+ }
+ }
+ err = __put_user(sioc.length, &(((struct atmif_sioc32*)arg)->length));
+
+ out:
+ if(sioc32.arg && sioc32.length > 0)
+ kfree(sioc.arg);
+
+ return err;
+}
+
+
+static int do_atm_ioctl(unsigned int fd, unsigned int cmd32, unsigned long arg)
+{
+ int i;
+ unsigned int cmd = 0;
+
+ switch (cmd32) {
+ case SUNI_GETLOOP:
+ case SUNI_SETLOOP:
+ case SONET_GETSTAT:
+ case SONET_GETSTATZ:
+ case SONET_GETDIAG:
+ case SONET_SETDIAG:
+ case SONET_CLRDIAG:
+ case SONET_SETFRAMING:
+ case SONET_GETFRAMING:
+ case SONET_GETFRSENSE:
+ return do_atmif_sioc(fd, cmd32, arg);
+ }
+
+ if (cmd == 0) {
+ for (i = 0; i < NR_ATM_IOCTL; i++) {
+ if (cmd32 == atm_ioctl_map[i].cmd32) {
+ cmd = atm_ioctl_map[i].cmd;
+ break;
+ }
+ }
+ if (i == NR_ATM_IOCTL) {
+ return -EINVAL;
+ }
+ }
+
+ switch (cmd) {
+ case ATM_GETNAMES:
+ return do_atm_iobuf(fd, cmd, arg);
+
+ case ATM_GETLINKRATE:
+ case ATM_GETTYPE:
+ case ATM_GETESI:
+ case ATM_GETADDR:
+ case ATM_RSTADDR:
+ case ATM_ADDADDR:
+ case ATM_DELADDR:
+ case ATM_GETCIRANGE:
+ case ATM_SETCIRANGE:
+ case ATM_SETESI:
+ case ATM_SETESIF:
+ case ATM_GETSTAT:
+ case ATM_GETSTATZ:
+ return do_atmif_sioc(fd, cmd, arg);
+ }
+
+ return -EINVAL;
+}
+
asmlinkage int sys32_ioctl(unsigned int fd, unsigned int cmd, unsigned long arg)
{
struct file * filp;
error = do_smb_getmountuid(fd, cmd, arg);
goto out;
+ case ATM_GETLINKRATE32:
+ case ATM_GETNAMES32:
+ case ATM_GETTYPE32:
+ case ATM_GETESI32:
+ case ATM_GETADDR32:
+ case ATM_RSTADDR32:
+ case ATM_ADDADDR32:
+ case ATM_DELADDR32:
+ case ATM_GETCIRANGE32:
+ case ATM_SETCIRANGE32:
+ case ATM_SETESI32:
+ case ATM_SETESIF32:
+ case ATM_GETSTAT32:
+ case ATM_GETSTATZ32:
+ case SUNI_GETLOOP:
+ case SUNI_SETLOOP:
+ case SONET_GETSTAT:
+ case SONET_GETSTATZ:
+ case SONET_GETDIAG:
+ case SONET_SETDIAG:
+ case SONET_CLRDIAG:
+ case SONET_SETFRAMING:
+ case SONET_GETFRAMING:
+ case SONET_GETFRSENSE:
+ error = do_atm_ioctl(fd, cmd, arg);
+ goto out;
+
/* List here exlicitly which ioctl's are known to have
* compatable types passed or none at all...
*/
case BLKRRPART:
case BLKFLSBUF:
case BLKRASET:
-
+
+#if 0 /* New RAID code is being merged, fix up to handle
+ * new RAID ioctls when fully merged in 2.3.x -DaveM
+ */
/* 0x09 */
case REGISTER_DEV:
case REGISTER_DEV_NEW:
case START_MD:
case STOP_MD:
+#endif
/* Big K */
case PIO_FONT:
/* SMB ioctls which do not need any translations */
case SMB_IOC_NEWCONN:
+ /* Little a */
+ case ATMSIGD_CTRL:
+ case ATMARPD_CTRL:
+ case ATMLEC_CTRL:
+ case ATMLEC_MCAST:
+ case ATMLEC_DATA:
+ case ATM_SETSC:
+ case SIOCSIFATMTCP:
+ case SIOCMKCLIP:
+ case ATMARP_MKIP:
+ case ATMARP_SETENTRY:
+ case ATMARP_ENCAP:
+ case ATMTCP_CREATE:
+ case ATMTCP_REMOVE:
+ case ATMMPC_CTRL:
+ case ATMMPC_DATA:
+
error = sys_ioctl (fd, cmd, arg);
goto out;
-/* $Id: pci.c,v 1.14 2000/01/13 00:05:43 davem Exp $
+/* $Id: pci.c,v 1.15 2000/02/08 05:11:29 jj Exp $
* pci.c: UltraSparc PCI controller support.
*
* Copyright (C) 1997, 1998, 1999 David S. Miller (davem@redhat.com)
#include <asm/irq.h>
#include <asm/ebus.h>
-#ifndef NEW_PCI_DMA_MAP
-unsigned long pci_dvma_v2p_hash[PCI_DVMA_HASHSZ];
-unsigned long pci_dvma_p2v_hash[PCI_DVMA_HASHSZ];
-#endif
-
unsigned long pci_memspace_mask = 0xffffffffUL;
#ifndef CONFIG_PCI
-/* $Id: pci_impl.h,v 1.4 1999/12/17 12:32:03 jj Exp $
+/* $Id: pci_impl.h,v 1.5 2000/02/08 05:11:32 jj Exp $
* pci_impl.h: Helper definitions for PCI controller support.
*
* Copyright (C) 1999 David S. Miller (davem@redhat.com)
extern void pci_scan_for_master_abort(struct pci_controller_info *, struct pci_pbm_info *, struct pci_bus *);
extern void pci_scan_for_parity_error(struct pci_controller_info *, struct pci_pbm_info *, struct pci_bus *);
-#ifndef NEW_PCI_DMA_MAP
-/* IOMMU/DVMA initialization. */
-#define PCI_DVMA_HASH_NONE ~0UL
-static __inline__ void set_dvma_hash(unsigned long dvma_offset,
- unsigned long paddr,
- unsigned long daddr)
-{
- unsigned long dvma_addr = dvma_offset + daddr;
- unsigned long vaddr = (unsigned long)__va(paddr);
-
- pci_dvma_v2p_hash[pci_dvma_ahashfn(paddr)] = dvma_addr - vaddr;
- pci_dvma_p2v_hash[pci_dvma_ahashfn(dvma_addr)] = vaddr - dvma_addr;
-}
-#endif
-
/* Configuration space access. */
extern spinlock_t pci_poke_lock;
extern volatile int pci_poke_in_progress;
-/* $Id: pci_psycho.c,v 1.10 2000/01/28 13:42:00 jj Exp $
+/* $Id: pci_psycho.c,v 1.11 2000/02/08 05:11:32 jj Exp $
* pci_psycho.c: PSYCHO/U2P specific PCI controller support.
*
* Copyright (C) 1997, 1998, 1999 David S. Miller (davem@caipfs.rutgers.edu)
static void __init psycho_iommu_init(struct pci_controller_info *p)
{
-#ifndef NEW_PCI_DMA_MAP
- struct linux_mlist_p1275 *mlist;
- unsigned long n;
- iopte_t *iopte;
- int tsbsize = 32;
-#endif
extern int this_is_starfire;
extern void *starfire_hookup(int);
unsigned long tsbbase, i;
control &= ~(PSYCHO_IOMMU_CTRL_DENAB);
psycho_write(p->controller_regs + PSYCHO_IOMMU_CONTROL, control);
-#ifndef NEW_PCI_DMA_MAP
- /* Using assumed page size 64K with 32K entries we need 256KB iommu page
- * table (32K ioptes * 8 bytes per iopte). This is
- * page order 5 on UltraSparc.
- */
- tsbbase = __get_free_pages(GFP_KERNEL, 5);
-#else
/* Using assumed page size 8K with 128K entries we need 1MB iommu page
* table (128K ioptes * 8 bytes per iopte). This is
* page order 7 on UltraSparc.
*/
tsbbase = __get_free_pages(GFP_KERNEL, 7);
-#endif
if (!tsbbase) {
prom_printf("PSYCHO_IOMMU: Error, gfp(tsb) failed.\n");
prom_halt();
p->iommu.page_table = (iopte_t *)tsbbase;
p->iommu.page_table_sz_bits = 17;
p->iommu.page_table_map_base = 0xc0000000;
-#ifndef NEW_PCI_DMA_MAP
- memset((char *)tsbbase, 0, PAGE_SIZE << 5);
-#else
memset((char *)tsbbase, 0, PAGE_SIZE << 7);
-#endif
/* Make sure DMA address 0 is never returned just to allow catching
of buggy drivers. */
p->iommu.lowest_free[0] = 1;
-#ifndef NEW_PCI_DMA_MAP
- iopte = (iopte_t *)tsbbase;
- /* Initialize to "none" settings. */
- for(i = 0; i < PCI_DVMA_HASHSZ; i++) {
- pci_dvma_v2p_hash[i] = PCI_DVMA_HASH_NONE;
- pci_dvma_p2v_hash[i] = PCI_DVMA_HASH_NONE;
- }
-
- n = 0;
- mlist = *prom_meminfo()->p1275_totphys;
- while (mlist) {
- unsigned long paddr = mlist->start_adr;
- unsigned long num_bytes = mlist->num_bytes;
-
- if(paddr >= (((unsigned long) high_memory) - PAGE_OFFSET))
- goto next;
-
- if((paddr + num_bytes) >= (((unsigned long) high_memory) - PAGE_OFFSET))
- num_bytes = (((unsigned long) high_memory) - PAGE_OFFSET) - paddr;
-
- /* Align base and length so we map whole hash table sized chunks
- * at a time (and therefore full 64K IOMMU pages).
- */
- paddr &= ~((1UL << 24UL) - 1);
- num_bytes = (num_bytes + ((1UL << 24UL) - 1)) & ~((1UL << 24) - 1);
-
- /* Move up the base for mappings already created. */
- while(pci_dvma_v2p_hash[pci_dvma_ahashfn(paddr)] !=
- PCI_DVMA_HASH_NONE) {
- paddr += (1UL << 24UL);
- num_bytes -= (1UL << 24UL);
- if(num_bytes == 0UL)
- goto next;
- }
-
- /* Move down the size for tail mappings already created. */
- while(pci_dvma_v2p_hash[pci_dvma_ahashfn(paddr + num_bytes - (1UL << 24UL))] !=
- PCI_DVMA_HASH_NONE) {
- num_bytes -= (1UL << 24UL);
- if(num_bytes == 0UL)
- goto next;
- }
-
- /* Now map the rest. */
- for (i = 0; i < ((num_bytes + ((1 << 16) - 1)) >> 16); i++) {
- iopte_val(*iopte) = ((IOPTE_VALID | IOPTE_64K |
- IOPTE_CACHE | IOPTE_WRITE) |
- (paddr & IOPTE_PAGE));
-
- if (!(n & 0xff))
- set_dvma_hash(0x80000000, paddr, (n << 16));
-
- if (++n > (tsbsize * 1024))
- goto out;
-
- paddr += (1 << 16);
- iopte++;
- }
- next:
- mlist = mlist->theres_more;
- }
-out:
- if (mlist) {
- prom_printf("WARNING: not all physical memory mapped in IOMMU\n");
- prom_printf("Try booting with mem=xxxM or similar\n");
- prom_halt();
- }
-#endif
psycho_write(p->controller_regs + PSYCHO_IOMMU_TSBBASE, __pa(tsbbase));
control = psycho_read(p->controller_regs + PSYCHO_IOMMU_CONTROL);
-#ifndef NEW_PCI_DMA_MAP
- control &= ~(PSYCHO_IOMMU_CTRL_TSBSZ);
- control |= (PSYCHO_IOMMU_CTRL_TBWSZ | PSYCHO_IOMMU_CTRL_ENAB);
- switch(tsbsize) {
- case 8:
- p->iommu.page_table_map_base = 0xe0000000;
- control |= PSYCHO_IOMMU_TSBSZ_8K;
- break;
- case 16:
- p->iommu.page_table_map_base = 0xc0000000;
- control |= PSYCHO_IOMMU_TSBSZ_16K;
- break;
- case 32:
- p->iommu.page_table_map_base = 0x80000000;
- control |= PSYCHO_IOMMU_TSBSZ_32K;
- break;
- default:
- prom_printf("iommu_init: Illegal TSB size %d\n", tsbsize);
- prom_halt();
- break;
- }
-#else
control &= ~(PSYCHO_IOMMU_CTRL_TSBSZ | PSYCHO_IOMMU_CTRL_TBWSZ);
control |= (PSYCHO_IOMMU_TSBSZ_128K | PSYCHO_IOMMU_CTRL_ENAB);
-#endif
psycho_write(p->controller_regs + PSYCHO_IOMMU_CONTROL, control);
/* If necessary, hook us up for starfire IRQ translations. */
-/* $Id: pci_sabre.c,v 1.11 2000/01/28 13:42:01 jj Exp $
+/* $Id: pci_sabre.c,v 1.12 2000/02/08 05:11:33 jj Exp $
* pci_sabre.c: Sabre specific PCI controller support.
*
* Copyright (C) 1997, 1998, 1999 David S. Miller (davem@caipfs.rutgers.edu)
static void __init sabre_iommu_init(struct pci_controller_info *p,
int tsbsize, unsigned long dvma_offset)
{
-#ifndef NEW_PCI_DMA_MAP
- struct linux_mlist_p1275 *mlist;
- unsigned long n;
- iopte_t *iopte;
-#endif
unsigned long tsbbase, i, order;
u64 control;
of buggy drivers. */
p->iommu.lowest_free[0] = 1;
-#ifndef NEW_PCI_DMA_MAP
- iopte = (iopte_t *)tsbbase;
-
- /* Initialize to "none" settings. */
- for(i = 0; i < PCI_DVMA_HASHSZ; i++) {
- pci_dvma_v2p_hash[i] = PCI_DVMA_HASH_NONE;
- pci_dvma_p2v_hash[i] = PCI_DVMA_HASH_NONE;
- }
-
- n = 0;
- mlist = *prom_meminfo()->p1275_totphys;
- while (mlist) {
- unsigned long paddr = mlist->start_adr;
- unsigned long num_bytes = mlist->num_bytes;
-
- if(paddr >= (((unsigned long) high_memory) - PAGE_OFFSET))
- goto next;
-
- if((paddr + num_bytes) >= (((unsigned long) high_memory) - PAGE_OFFSET))
- num_bytes =
- (((unsigned long) high_memory) -
- PAGE_OFFSET) - paddr;
-
- /* Align base and length so we map whole hash table sized chunks
- * at a time (and therefore full 64K IOMMU pages).
- */
- paddr &= ~((1UL << 24UL) - 1);
- num_bytes = (num_bytes + ((1UL << 24UL) - 1)) & ~((1UL << 24) - 1);
-
- /* Move up the base for mappings already created. */
- while(pci_dvma_v2p_hash[pci_dvma_ahashfn(paddr)] !=
- PCI_DVMA_HASH_NONE) {
- paddr += (1UL << 24UL);
- num_bytes -= (1UL << 24UL);
- if(num_bytes == 0UL)
- goto next;
- }
-
- /* Move down the size for tail mappings already created. */
- while(pci_dvma_v2p_hash[pci_dvma_ahashfn(paddr + num_bytes - (1UL << 24UL))] !=
- PCI_DVMA_HASH_NONE) {
- num_bytes -= (1UL << 24UL);
- if(num_bytes == 0UL)
- goto next;
- }
-
- /* Now map the rest. */
- for (i = 0; i < ((num_bytes + ((1 << 16) - 1)) >> 16); i++) {
- iopte_val(*iopte) = ((IOPTE_VALID | IOPTE_64K |
- IOPTE_CACHE | IOPTE_WRITE) |
- (paddr & IOPTE_PAGE));
-
- if (!(n & 0xff))
- set_dvma_hash(dvma_offset, paddr, (n << 16));
- if (++n > (tsbsize * 1024))
- goto out;
-
- paddr += (1 << 16);
- iopte++;
- }
- next:
- mlist = mlist->theres_more;
- }
-out:
- if (mlist) {
- prom_printf("WARNING: not all physical memory mapped in IOMMU\n");
- prom_printf("Try booting with mem=xxxM or similar\n");
- prom_halt();
- }
-#endif
-
sabre_write(p->controller_regs + SABRE_IOMMU_TSBBASE, __pa(tsbbase));
control = sabre_read(p->controller_regs + SABRE_IOMMU_CONTROL);
-#ifndef NEW_PCI_DMA_MAP
- control &= ~(SABRE_IOMMUCTRL_TSBSZ);
- control |= (SABRE_IOMMUCTRL_TBWSZ | SABRE_IOMMUCTRL_ENAB);
- switch(tsbsize) {
- case 8:
- control |= SABRE_IOMMU_TSBSZ_8K;
- break;
- case 16:
- control |= SABRE_IOMMU_TSBSZ_16K;
- break;
- case 32:
- control |= SABRE_IOMMU_TSBSZ_32K;
- break;
- default:
- prom_printf("iommu_init: Illegal TSB size %d\n", tsbsize);
- prom_halt();
- break;
- }
-#else
control &= ~(SABRE_IOMMUCTRL_TSBSZ | SABRE_IOMMUCTRL_TBWSZ);
control |= SABRE_IOMMUCTRL_ENAB;
switch(tsbsize) {
prom_halt();
break;
}
-#endif
sabre_write(p->controller_regs + SABRE_IOMMU_CONTROL, control);
}
}
switch(vdma[1]) {
-#ifndef NEW_PCI_DMA_MAP
- case 0x20000000:
- tsbsize = 8;
- break;
- case 0x40000000:
- tsbsize = 16;
- break;
- case 0x80000000:
- tsbsize = 32;
- break;
-#else
case 0x20000000:
tsbsize = 64;
break;
case 0x80000000:
tsbsize = 128;
break;
-#endif
default:
prom_printf("SABRE: strange virtual-dma size.\n");
prom_halt();
-/* $Id: sparc64_ksyms.c,v 1.72 2000/01/28 13:41:59 jj Exp $
+/* $Id: sparc64_ksyms.c,v 1.73 2000/02/08 05:11:32 jj Exp $
* arch/sparc64/kernel/sparc64_ksyms.c: Sparc64 specific ksyms support.
*
* Copyright (C) 1996 David S. Miller (davem@caip.rutgers.edu)
EXPORT_SYMBOL(sbus_dma_sync_single);
EXPORT_SYMBOL(sbus_dma_sync_sg);
#endif
-#if CONFIG_PCI
+#ifdef CONFIG_PCI
EXPORT_SYMBOL(ebus_chain);
-#ifndef NEW_PCI_DMA_MAP
-EXPORT_SYMBOL(pci_dvma_v2p_hash);
-EXPORT_SYMBOL(pci_dvma_p2v_hash);
-#endif
EXPORT_SYMBOL(pci_memspace_mask);
EXPORT_SYMBOL(empty_zero_page);
EXPORT_SYMBOL(outsb);
EXPORT_SYMBOL(insb);
EXPORT_SYMBOL(insw);
EXPORT_SYMBOL(insl);
-#endif
-#ifdef NEW_PCI_DMA_MAP
EXPORT_SYMBOL(pci_alloc_consistent);
EXPORT_SYMBOL(pci_free_consistent);
EXPORT_SYMBOL(pci_map_single);
-/* $Id: init.c,v 1.144 2000/01/23 07:16:11 davem Exp $
+/* $Id: init.c,v 1.145 2000/02/08 07:46:11 davem Exp $
* arch/sparc64/mm/init.c
*
* Copyright (C) 1996-1999 David S. Miller (davem@caip.rutgers.edu)
flush_tlb_all();
{
- unsigned int zones_size[MAX_NR_ZONES] = { 0, 0, 0};
+ unsigned long zones_size[MAX_NR_ZONES] = { 0, 0, 0};
zones_size[ZONE_DMA] = end_pfn;
free_area_init(zones_size);
.debug_pubnames 0 : { *(.debug_pubnames) }
.debug_sfnames 0 : { *(.debug_sfnames) }
.line 0 : { *(.line) }
+ /DISCARD/ : { *(.text.exit) *(.data.exit) }
}
fi
if [ "$CONFIG_PCI" = "y" ]; then
tristate 'Efficient Networks ENI155P' CONFIG_ATM_ENI
- if [ ! "$CONFIG_ATM_ENI" = "n" ]; then
+ if [ "$CONFIG_ATM_ENI" != "n" ]; then
bool ' Enable extended debugging' CONFIG_ATM_ENI_DEBUG
bool ' Fine-tune burst settings' CONFIG_ATM_ENI_TUNE_BURST
if [ "$CONFIG_ATM_ENI_TUNE_BURST" = "y" ]; then
fi
fi
tristate 'ZeitNet ZN1221/ZN1225' CONFIG_ATM_ZATM
- if [ ! "$CONFIG_ATM_ZATM" = "n" ]; then
+ if [ "$CONFIG_ATM_ZATM" != "n" ]; then
bool ' Enable extended debugging' CONFIG_ATM_ZATM_DEBUG
- bool ' Enable usec resolution timestamps' CONFIG_ATM_ZATM_EXACT_TS
+ if [ "$CONFIG_X86" = "y" ]; then
+ bool ' Enable usec resolution timestamps' CONFIG_ATM_ZATM_EXACT_TS
+ fi
fi
# bool 'Rolfs TI TNETA1570' CONFIG_ATM_TNETA1570 y
# if [ "$CONFIG_ATM_TNETA1570" = "y" ]; then
# bool ' Enable extended debugging' CONFIG_ATM_TNETA1570_DEBUG n
# fi
- tristate 'IDT 77201 (NICStAR)' CONFIG_ATM_NICSTAR
+ tristate 'IDT 77201 (NICStAR) (ForeRunnerLE)' CONFIG_ATM_NICSTAR
if [ "$CONFIG_ATM_NICSTAR" != "n" ]; then
- bool ' Use suni PHY driver' CONFIG_ATM_NICSTAR_USE_SUNI
+ bool ' Use suni PHY driver (155Mbps)' CONFIG_ATM_NICSTAR_USE_SUNI
+ bool ' Use IDT77015 PHY driver (25Mbps)' CONFIG_ATM_NICSTAR_USE_IDT77105
fi
tristate 'Madge Ambassador (Collage PCI 155 Server)' CONFIG_ATM_AMBASSADOR
if [ "$CONFIG_ATM_AMBASSADOR" != "n" ]; then
if [ "$CONFIG_ATM_HORIZON" != "n" ]; then
bool ' Enable debugging messages' CONFIG_ATM_HORIZON_DEBUG
fi
+ tristate 'Interphase ATM PCI x575/x525/x531' CONFIG_ATM_IA
+ if [ "$CONFIG_ATM_IA" != "n" ]; then
+ bool ' Enable debugging messages' CONFIG_ATM_IA_DEBUG
+ fi
fi
endmenu
L_OBJS += tneta1570.o suni.o
endif
-ifeq ($(CONFIG_ATM_FORE200),y)
-L_OBJS += fore200.o
-endif
-
ifeq ($(CONFIG_ATM_NICSTAR),y)
L_OBJS += nicstar.o
ifeq ($(CONFIG_ATM_NICSTAR_USE_SUNI),y)
NEED_SUNI_LX = suni.o
endif
+ ifeq ($(CONFIG_ATM_NICSTAR_USE_IDT77105),y)
+ NEED_IDT77105_LX = idt77105.o
+ endif
else
ifeq ($(CONFIG_ATM_NICSTAR),m)
M_OBJS += nicstar.o
ifeq ($(CONFIG_ATM_NICSTAR_USE_SUNI),y)
NEED_SUNI_MX = suni.o
endif
+ ifeq ($(CONFIG_ATM_NICSTAR_USE_IDT77105),y)
+  NEED_IDT77105_MX = idt77105.o
+ endif
endif
endif
endif
endif
+ifeq ($(CONFIG_ATM_TCP),y)
+L_OBJS += atmtcp.o
+else
+ ifeq ($(CONFIG_ATM_TCP),m)
+ M_OBJS += atmtcp.o
+ endif
+endif
+
+ifeq ($(CONFIG_ATM_IA),y)
+L_OBJS += iphase.o
+NEED_SUNI_LX = suni.o
+else
+  ifeq ($(CONFIG_ATM_IA),m)
+ M_OBJS += iphase.o
+ NEED_SUNI_MX = suni.o
+ endif
+endif
+
ifeq ($(NEED_SUNI_LX),)
MX_OBJS += $(NEED_SUNI_MX)
else
LX_OBJS += $(NEED_SUNI_LX)
endif
-ifeq ($(CONFIG_ATM_TCP),y)
-L_OBJS += atmtcp.o
+ifeq ($(NEED_IDT77105_LX),)
+ MX_OBJS += $(NEED_IDT77105_MX)
else
- ifeq ($(CONFIG_ATM_TCP),m)
- M_OBJS += atmtcp.o
- endif
+ LX_OBJS += $(NEED_IDT77105_LX)
endif
EXTRA_CFLAGS=-g
dont_panic (dev);
} else {
// moan
- return -EINVAL;
+ return -ENOIOCTLCMD;
}
}
#endif
/********** Operation Structure **********/
static const struct atmdev_ops amb_ops = {
- NULL, // no amb_dev_close
- amb_open,
- amb_close,
- NULL, // no amb_ioctl,
- NULL, // no amb_getsockopt,
- NULL, // no amb_setsockopt,
- amb_send,
- amb_sg_send,
- NULL, // no send_oam - not in fact used yet
- NULL, // no amb_phy_put - not needed in this driver
- NULL, // no amb_phy_get - not needed in this driver
- NULL, // no feedback - feedback to the driver!
- NULL, // no amb_change_qos
- NULL, // amb_free_rx_skb not used until checked by someone else
- amb_proc_read
+ open: amb_open,
+ close: amb_close,
+ send: amb_send,
+ sg_send: amb_sg_send,
+ proc_read: amb_proc_read
};
/********** housekeeping **********/
#ifdef CONFIG_ATM_TNETA1570
extern int tneta1570_detect(void);
#endif
-#ifdef CONFIG_ATM_FORE200
-extern int fore200_detect(void);
-#endif
#ifdef CONFIG_ATM_NICSTAR
extern int nicstar_detect(void);
#endif
#ifdef CONFIG_ATM_HORIZON
extern int hrz_detect(void);
#endif
+#ifdef CONFIG_ATM_IA
+extern int ia_detect(void);
+#endif
int __init atmdev_init(void)
#ifdef CONFIG_ATM_TNETA1570
devs += tneta1570_detect();
#endif
-#ifdef CONFIG_ATM_FORE200
- devs += fore200_detect();
-#endif
#ifdef CONFIG_ATM_NICSTAR
devs += nicstar_detect();
#endif
#endif
#ifdef CONFIG_ATM_HORIZON
devs += hrz_detect();
+#endif
+#ifdef CONFIG_ATM_IA
+ devs += ia_detect();
#endif
return devs;
}
Madge Ambassador ATM Adapter microcode.
Copyright (C) 1995-1999 Madge Networks Ltd.
- This is provided here for your convenience only.
+ This microcode data is placed under the terms of the GNU General
+ Public License. The GPL is contained in /usr/doc/copyright/GPL on a
+ Debian system and in the file COPYING in the Linux kernel source.
- No restrictions are placed on its use, so long as this file remains
- unchanged.
-
- You may not make, use or re-distribute modified versions of this code.
+ We would prefer you not to distribute modified versions without
+ consultation and not to ask for assembly/other microcode source.
*/
0x401a6800,
+/*
+ See copyright and licensing conditions in ambassador.* files.
+*/
{ 0x00000080, 993, },
{ 0xa0d0d500, 80, },
{ 0xa0d0f000, 978, },
+/*
+ See copyright and licensing conditions in ambassador.* files.
+*/
0xa0d0f000
/* drivers/atm/atmtcp.c - ATM over TCP "device" driver */
-/* Written 1997-1999 by Werner Almesberger, EPFL LRC/ICA */
+/* Written 1997-2000 by Werner Almesberger, EPFL LRC/ICA */
#include <linux/module.h>
#include <linux/atmdev.h>
#include <linux/atm_tcp.h>
#include <asm/uaccess.h>
-#include "../../net/atm/protocols.h" /* @@@ fix this */
+
+
+extern int atm_init_aal5(struct atm_vcc *vcc); /* "raw" AAL5 transport */
#define PRIV(dev) ((struct atmtcp_dev_data *) ((dev)->dev_data))
*new_msg = *msg;
new_msg->hdr.length = ATMTCP_HDR_MAGIC;
new_msg->type = type;
- new_msg->vcc = (unsigned long) vcc;
+ memset(&new_msg->vcc,0,sizeof(atm_kptr_t));
+ *(struct atm_vcc **) &new_msg->vcc = vcc;
old_flags = vcc->flags;
out_vcc->push(out_vcc,skb);
while (!((vcc->flags ^ old_flags) & flag)) {
static int atmtcp_recv_control(const struct atmtcp_control *msg)
{
- struct atm_vcc *vcc = (struct atm_vcc *) msg->vcc;
+ struct atm_vcc *vcc = *(struct atm_vcc **) &msg->vcc;
vcc->vpi = msg->addr.sap_addr.vpi;
vcc->vci = msg->addr.sap_addr.vci;
struct atm_cirange ci;
struct atm_vcc *vcc;
- if (cmd != ATM_SETCIRANGE) return -EINVAL;
+ if (cmd != ATM_SETCIRANGE) return -ENOIOCTLCMD;
if (copy_from_user(&ci,(void *) arg,sizeof(ci))) return -EFAULT;
if (ci.vpi_bits == ATM_CI_MAX) ci.vpi_bits = MAX_VPI_BITS;
if (ci.vci_bits == ATM_CI_MAX) ci.vci_bits = MAX_VCI_BITS;
if (vcc->pop) vcc->pop(vcc,skb);
else dev_kfree_skb(skb);
out_vcc->push(out_vcc,new_skb);
+ vcc->stats->tx++;
+ out_vcc->stats->rx++;
return 0;
}
new_skb->stamp = xtime;
memcpy(skb_put(new_skb,skb->len),skb->data,skb->len);
out_vcc->push(out_vcc,new_skb);
+ vcc->stats->tx++;
+ out_vcc->stats->rx++;
done:
if (vcc->pop) vcc->pop(vcc,skb);
else dev_kfree_skb(skb);
static struct atmdev_ops atmtcp_v_dev_ops = {
- atmtcp_v_dev_close,
- atmtcp_v_open,
- atmtcp_v_close,
- atmtcp_v_ioctl,
- NULL, /* no getsockopt */
- NULL, /* no setsockopt */
- atmtcp_v_send,
- NULL, /* no direct writes */
- NULL, /* no send_oam */
- NULL, /* no phy_put */
- NULL, /* no phy_get */
- NULL, /* no feedback */
- NULL, /* no change_qos */
- NULL, /* no free_rx_skb */
- atmtcp_v_proc /* proc_read */
+ dev_close: atmtcp_v_dev_close,
+ open: atmtcp_v_open,
+ close: atmtcp_v_close,
+ ioctl: atmtcp_v_ioctl,
+ send: atmtcp_v_send,
+ proc_read: atmtcp_v_proc
};
static struct atmdev_ops atmtcp_c_dev_ops = {
- NULL, /* no dev_close */
- NULL, /* no open */
- atmtcp_c_close,
- NULL, /* no ioctl */
- NULL, /* no getsockopt */
- NULL, /* no setsockopt */
- atmtcp_c_send,
- NULL, /* no sg_send */
- NULL, /* no send_oam */
- NULL, /* no phy_put */
- NULL, /* no phy_get */
- NULL, /* no feedback */
- NULL, /* no change_qos */
- NULL, /* no free_rx_skb */
- NULL /* no proc_read */
+ close: atmtcp_c_close,
+ send: atmtcp_c_send
};
/* drivers/atm/eni.c - Efficient Networks ENI155P device driver */
-/* Written 1995-1999 by Werner Almesberger, EPFL LRC/ICA */
+/* Written 1995-2000 by Werner Almesberger, EPFL LRC/ICA */
#include <linux/module.h>
eni_vcc = ENI_VCC(vcc);
eni_vcc->rx = NULL;
if (vcc->qos.rxtp.traffic_class == ATM_NONE) return 0;
- size = vcc->qos.rxtp.max_sdu*3; /* @@@ improve this */
+ size = vcc->qos.rxtp.max_sdu*eni_dev->rx_mult/100;
if (size > MID_MAX_BUF_SIZE && vcc->qos.rxtp.max_sdu <=
MID_MAX_BUF_SIZE)
size = MID_MAX_BUF_SIZE;
return -ENOMEM;
}
memset(eni_dev->rx_map,0,PAGE_SIZE);
+ eni_dev->rx_mult = DEFAULT_RX_MULT;
eni_dev->fast = eni_dev->last_fast = NULL;
eni_dev->slow = eni_dev->last_slow = NULL;
init_waitqueue_head(&eni_dev->rx_wait);
if (tx->send)
while ((skb = skb_dequeue(&tx->backlog))) {
res = do_tx(skb);
- if (res != enq_ok) {
+ if (res == enq_ok) tx->backlog_len--;
+ else {
DPRINTK("re-queuing TX PDU\n");
skb_queue_head(&tx->backlog,skb);
requeued++;
unlimited = ubr && (!rate || rate <= -ATM_OC3_PCR ||
rate >= ATM_OC3_PCR);
if (!unlimited) {
- size = txtp->max_sdu*3; /* @@@ improve */
+ size = txtp->max_sdu*eni_dev->tx_mult/100;
if (size > MID_MAX_BUF_SIZE && txtp->max_sdu <=
MID_MAX_BUF_SIZE)
size = MID_MAX_BUF_SIZE;
tx->send = mem;
tx->words = size >> 2;
skb_queue_head_init(&tx->backlog);
+ tx->backlog_len = 0;
for (order = 0; size > (1 << (order+10)); order++);
eni_out((order << MID_SIZE_SHIFT) |
((tx->send-eni_dev->ram) >> (MID_LOC_SKIP+2)),
eni_dev = ENI_DEV(dev);
eni_dev->lost = 0;
eni_dev->tx_bw = ATM_OC3_PCR;
+ eni_dev->tx_mult = DEFAULT_TX_MULT;
init_waitqueue_head(&eni_dev->tx_wait);
eni_dev->ubr = NULL;
skb_queue_head_init(&eni_dev->tx_queue);
struct midway_eprom *eprom;
struct eni_dev *eni_dev;
struct pci_dev *pci_dev;
- unsigned int real_base,base;
+ unsigned long real_base,base;
unsigned char revision;
int error,i,last;
"(0x%02x)\n",dev->number,error);
return error;
}
- printk(KERN_NOTICE DEV_LABEL "(itf %d): rev.%d,base=0x%x,irq=%d,",
+ printk(KERN_NOTICE DEV_LABEL "(itf %d): rev.%d,base=0x%lx,irq=%d,",
dev->number,revision,real_base,eni_dev->irq);
if (!(base = (unsigned long) ioremap_nocache(real_base,MAP_MAX_SIZE))) {
printk("\n");
"master (0x%02x)\n",dev->number,error);
return error;
}
+#ifdef __sparc_v9__ /* copied from drivers/net/sunhme.c */
+ /* NOTE: Cache line size is in 32-bit word units. */
+ pci_write_config_byte(eni_dev->pci_dev, PCI_CACHE_LINE_SIZE, 0x10);
+#endif
if ((error = pci_write_config_byte(eni_dev->pci_dev,PCI_TONGA_CTRL,
END_SWAP_DMA))) {
printk(KERN_ERR DEV_LABEL "(itf %d): can't set endian swap "
static int eni_ioctl(struct atm_dev *dev,unsigned int cmd,void *arg)
{
+ struct eni_dev *eni_dev = ENI_DEV(dev);
+
if (cmd == ENI_MEMDUMP) {
+ if (!capable(CAP_NET_ADMIN)) return -EPERM;
printk(KERN_WARNING "Please use /proc/atm/" DEV_LABEL ":%d "
"instead of obsolete ioctl ENI_MEMDUMP\n",dev->number);
dump(dev);
return 0;
}
+ if (cmd == ENI_SETMULT) {
+ struct eni_multipliers mult;
+
+ if (!capable(CAP_NET_ADMIN)) return -EPERM;
+ if (copy_from_user(&mult,(void *) arg,
+ sizeof(struct eni_multipliers)))
+ return -EFAULT;
+		if ((mult.tx && mult.tx <= 100) || (mult.rx && mult.rx <= 100) ||
+ mult.tx > 65536 || mult.rx > 65536)
+ return -EINVAL;
+ if (mult.tx) eni_dev->tx_mult = mult.tx;
+ if (mult.rx) eni_dev->rx_mult = mult.rx;
+ return 0;
+ }
if (cmd == ATM_SETCIRANGE) {
struct atm_cirange ci;
return 0;
return -EINVAL;
}
- if (!dev->phy->ioctl) return -EINVAL;
+ if (!dev->phy->ioctl) return -ENOIOCTLCMD;
return dev->phy->ioctl(dev,cmd,arg);
}
static int eni_getsockopt(struct atm_vcc *vcc,int level,int optname,
void *optval,int optlen)
{
-#ifdef CONFIG_MMU_HACKS
-
-static const struct atm_buffconst bctx = { PAGE_SIZE,0,PAGE_SIZE,0,0,0 };
-static const struct atm_buffconst bcrx = { PAGE_SIZE,0,PAGE_SIZE,0,0,0 };
-
-#else
-
-static const struct atm_buffconst bctx = { sizeof(int),0,sizeof(int),0,0,0 };
-static const struct atm_buffconst bcrx = { sizeof(int),0,sizeof(int),0,0,0 };
-
-#endif
- if (level == SOL_AAL && (optname == SO_BCTXOPT ||
- optname == SO_BCRXOPT))
- return copy_to_user(optval,optname == SO_BCTXOPT ? &bctx :
- &bcrx,sizeof(struct atm_buffconst)) ? -EFAULT : 0;
return -EINVAL;
}
cli(); /* brute force */
if (skb_peek(&ENI_VCC(vcc)->tx->backlog) || do_tx(skb)) {
skb_queue_tail(&ENI_VCC(vcc)->tx->backlog,skb);
- backlogged++;
+ ENI_VCC(vcc)->tx->backlog_len++;
+		backlogged++;
}
restore_flags(flags);
return 0;
return sprintf(page,DEV_LABEL "(itf %d) signal %s, %dkB, "
"%d cps remaining\n",dev->number,signal[(int) dev->signal],
eni_dev->mem >> 10,eni_dev->tx_bw);
- left--;
- if (!left)
- return sprintf(page,"Bursts: TX"
+ if (!--left)
+ return sprintf(page,"%4sBursts: TX"
#if !defined(CONFIG_ATM_ENI_BURST_TX_16W) && \
!defined(CONFIG_ATM_ENI_BURST_TX_8W) && \
!defined(CONFIG_ATM_ENI_BURST_TX_4W) && \
#ifndef CONFIG_ATM_ENI_TUNE_BURST
" (default)"
#endif
- "\n");
+ "\n","");
+ if (!--left)
+ return sprintf(page,"%4sBuffer multipliers: tx %d%%, rx %d%%\n",
+ "",eni_dev->tx_mult,eni_dev->rx_mult);
for (i = 0; i < NR_CHAN; i++) {
struct eni_tx *tx = eni_dev->tx+i;
if (!tx->send) continue;
+ if (!--left) {
+ return sprintf(page,"tx[%d]: 0x%06lx-0x%06lx "
+ "(%6ld bytes), rsv %d cps, shp %d cps%s\n",i,
+ tx->send-eni_dev->ram,
+ tx->send-eni_dev->ram+tx->words*4-1,tx->words*4,
+ tx->reserved,tx->shaping,
+ tx == eni_dev->ubr ? " (UBR)" : "");
+ }
if (--left) continue;
- return sprintf(page,"tx[%d]: 0x%06lx-0x%06lx (%6ld bytes), "
- "rsv %d cps, shp %d cps%s\n",i,
- tx->send-eni_dev->ram,
- tx->send-eni_dev->ram+tx->words*4-1,tx->words*4,
- tx->reserved,tx->shaping,
- tx == eni_dev->ubr ? " (UBR)" : "");
+ return sprintf(page,"%10sbacklog %d bytes\n","",
+ tx->backlog_len);
}
for (vcc = dev->vccs; vcc; vcc = vcc->next) {
struct eni_vcc *eni_vcc = ENI_VCC(vcc);
if (eni_vcc->tx) length += sprintf(page+length,", ");
}
if (eni_vcc->tx)
- length += sprintf(page+length,"tx[%d]",
- eni_vcc->tx->index);
+ length += sprintf(page+length,"tx[%d], txing %d bytes",
+ eni_vcc->tx->index,eni_vcc->txing);
page[length] = '\n';
return length+1;
}
static const struct atmdev_ops ops = {
- NULL, /* no dev_close */
- eni_open,
- eni_close,
- eni_ioctl,
- eni_getsockopt,
- eni_setsockopt,
- eni_send,
- eni_sg_send,
- NULL, /* no send_oam */
- eni_phy_put,
- eni_phy_get,
- NULL, /* no feedback */
- eni_change_qos, /* no change_qos */
- NULL, /* no free_rx_skb */
- eni_proc_read
+ open: eni_open,
+ close: eni_close,
+ ioctl: eni_ioctl,
+ getsockopt: eni_getsockopt,
+ setsockopt: eni_setsockopt,
+ send: eni_send,
+ sg_send: eni_sg_send,
+ phy_put: eni_phy_put,
+ phy_get: eni_phy_get,
+ change_qos: eni_change_qos,
+ proc_read: eni_proc_read
};
/* drivers/atm/eni.h - Efficient Networks ENI155P device driver declarations */
-/* Written 1995-1998 by Werner Almesberger, EPFL LRC/ICA */
+/* Written 1995-2000 by Werner Almesberger, EPFL LRC/ICA */
#ifndef DRIVER_ATM_ENI_H
#define RX_DMA_BUF 8 /* burst and skip a few things */
#define TX_DMA_BUF 100 /* should be enough for 64 kB */
+#define DEFAULT_RX_MULT 300 /* max_sdu*3 */
+#define DEFAULT_TX_MULT 300 /* max_sdu*3 */
+
struct eni_free {
unsigned long start; /* counting in bytes */
int reserved; /* reserved peak cell rate */
int shaping; /* shaped peak cell rate */
struct sk_buff_head backlog; /* queue of waiting TX buffers */
+ int backlog_len; /* length of backlog in bytes */
};
struct eni_vcc {
struct eni_tx *tx; /* TXer, NULL if none */
int rxing; /* number of pending PDUs */
int servicing; /* number of waiting VCs (0 or 1) */
- int txing; /* number of pending TX cells/PDUs */
+ int txing; /* number of pending TX bytes */
struct timeval timestamp; /* for RX timing */
struct atm_vcc *next; /* next pending RX */
struct sk_buff *last; /* last PDU being DMAed (used to carry
wait_queue_head_t tx_wait; /* for close */
int tx_bw; /* remaining bandwidth */
u32 dma[TX_DMA_BUF*2]; /* DMA request scratch area */
+ int tx_mult; /* buffer size multiplier (percent) */
/*-------------------------------- RX part */
u32 serv_read; /* host service read index */
struct atm_vcc *fast,*last_fast;/* queues of VCCs with pending PDUs */
struct atm_vcc **rx_map; /* for fast lookups */
struct sk_buff_head rx_queue; /* PDUs currently being RX-DMAed */
wait_queue_head_t rx_wait; /* for close */
+ int rx_mult; /* buffer size multiplier (percent) */
/*-------------------------------- statistics */
unsigned long lost; /* number of lost cells (RX) */
/*-------------------------------- memory management */
/*-------------------------------- general information */
int mem; /* RAM on board (in bytes) */
int asic; /* PCI interface type, 0 for FPGA */
- unsigned char irq; /* IRQ */
+ unsigned int irq; /* IRQ */
struct pci_dev *pci_dev; /* PCI stuff */
};
switch (level) {
case SOL_SOCKET:
switch (optname) {
- case SO_BCTXOPT:
- // return the right thing
- break;
- case SO_BCRXOPT:
- // return the right thing
- break;
+// case SO_BCTXOPT:
+// break;
+// case SO_BCRXOPT:
+// break;
default:
return -ENOPROTOOPT;
break;
switch (level) {
case SOL_SOCKET:
switch (optname) {
- case SO_BCTXOPT:
- // not settable
- break;
- case SO_BCRXOPT:
- // not settable
- break;
+// case SO_BCTXOPT:
+// break;
+// case SO_BCRXOPT:
+// break;
default:
return -ENOPROTOOPT;
break;
}
static const struct atmdev_ops hrz_ops = {
- NULL, // no hrz_dev_close
- hrz_open,
- hrz_close,
- NULL, // no hrz_ioctl
- NULL, // hrz_getsockopt,
- NULL, // hrz_setsockopt,
- hrz_send,
- hrz_sg_send,
- NULL, // no send_oam - not in fact used yet
- NULL, // no hrz_phy_put - not needed in this driver
- NULL, // no hrz_phy_get - not needed in this driver
- NULL, // no feedback - feedback to the driver!
- NULL, // no hrz_change_qos
- NULL, // no free_rx_skb
- hrz_proc_read
+ open: hrz_open,
+ close: hrz_close,
+ send: hrz_send,
+ sg_send: hrz_sg_send,
+ proc_read: hrz_proc_read
};
static int __init hrz_probe (void) {
--- /dev/null
+/* drivers/atm/idt77105.c - IDT77105 (PHY) driver */
+
+/* Written 1999 by Greg Banks, NEC Australia <gnb@linuxfan.com>. Based on suni.c */
+
+
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/errno.h>
+#include <linux/atmdev.h>
+#include <linux/sonet.h>
+#include <linux/delay.h>
+#include <linux/timer.h>
+#include <linux/init.h>
+#include <linux/capability.h>
+#include <linux/atm_idt77105.h>
+#include <asm/system.h>
+#include <asm/param.h>
+#include <asm/uaccess.h>
+
+#include "idt77105.h"
+
+#undef GENERAL_DEBUG
+
+#ifdef GENERAL_DEBUG
+#define DPRINTK(format,args...) printk(KERN_DEBUG format,##args)
+#else
+#define DPRINTK(format,args...)
+#endif
+
+
+struct idt77105_priv {
+ struct idt77105_stats stats; /* link diagnostics */
+ struct atm_dev *dev; /* device back-pointer */
+ struct idt77105_priv *next;
+ int loop_mode;
+ unsigned char old_mcr; /* storage of MCR reg while signal lost */
+};
+
+
+#define PRIV(dev) ((struct idt77105_priv *) (dev)->phy_data)
+
+#define PUT(val,reg) dev->ops->phy_put(dev,val,IDT77105_##reg)
+#define GET(reg) dev->ops->phy_get(dev,IDT77105_##reg)
+
+static void idt77105_stats_timer_func(unsigned long);
+static void idt77105_restart_timer_func(unsigned long);
+
+
+static struct timer_list stats_timer = { NULL, NULL, 0L, 0L,
+ &idt77105_stats_timer_func };
+static struct timer_list restart_timer = { NULL, NULL, 0L, 0L,
+ &idt77105_restart_timer_func };
+static int start_timer = 1;
+static struct idt77105_priv *idt77105_all = NULL;
+
+/*
+ * Retrieve the value of one of the IDT77105's counters.
+ * `counter' is one of the IDT77105_CTRSEL_* constants.
+ */
+static u16 get_counter(struct atm_dev *dev, int counter)
+{
+ u16 val;
+
+ /* write the counter bit into PHY register 6 */
+ PUT(counter, CTRSEL);
+ /* read the low 8 bits from register 4 */
+ val = GET(CTRLO);
+ /* read the high 8 bits from register 5 */
+ val |= GET(CTRHI)<<8;
+
+ return val;
+}
+
+/*
+ * Timer function called every second to gather statistics
+ * from the 77105. This is done because the h/w registers
+ * will overflow if not read at least once per second. The
+ * kernel's stats are kept at much higher precision. Also, having
+ * a separate copy of the stats allows implementation of
+ * an ioctl which gathers the stats *without* zeroing them.
+ */
+static void idt77105_stats_timer_func(unsigned long dummy)
+{
+ struct idt77105_priv *walk;
+ struct atm_dev *dev;
+ struct idt77105_stats *stats;
+
+ DPRINTK("IDT77105 gathering statistics\n");
+ for (walk = idt77105_all; walk; walk = walk->next) {
+ dev = walk->dev;
+
+ stats = &walk->stats;
+ stats->symbol_errors += get_counter(dev, IDT77105_CTRSEL_SEC);
+ stats->tx_cells += get_counter(dev, IDT77105_CTRSEL_TCC);
+ stats->rx_cells += get_counter(dev, IDT77105_CTRSEL_RCC);
+ stats->rx_hec_errors += get_counter(dev, IDT77105_CTRSEL_RHEC);
+ }
+ if (!start_timer) mod_timer(&stats_timer,jiffies+IDT77105_STATS_TIMER_PERIOD);
+}
+
+
+/*
+ * A separate timer func which handles restarting PHY chips which
+ * have had the cable re-inserted after being pulled out. This is
+ * done by polling the Good Signal Bit in the Interrupt Status
+ * register every 5 seconds. The other technique (checking Good
+ * Signal Bit in the interrupt handler) cannot be used because PHY
+ * interrupts need to be disabled when the cable is pulled out
+ * to avoid lots of spurious cell error interrupts.
+ */
+static void idt77105_restart_timer_func(unsigned long dummy)
+{
+ struct idt77105_priv *walk;
+ struct atm_dev *dev;
+ unsigned char istat;
+
+ DPRINTK("IDT77105 checking for cable re-insertion\n");
+ for (walk = idt77105_all; walk; walk = walk->next) {
+ dev = walk->dev;
+
+ if (dev->signal != ATM_PHY_SIG_LOST)
+ continue;
+
+ istat = GET(ISTAT); /* side effect: clears all interrupt status bits */
+ if (istat & IDT77105_ISTAT_GOODSIG) {
+ /* Found signal again */
+ dev->signal = ATM_PHY_SIG_FOUND;
+ printk(KERN_NOTICE "%s(itf %d): signal detected again\n",
+ dev->type,dev->number);
+ /* flush the receive FIFO */
+ PUT( GET(DIAG) | IDT77105_DIAG_RFLUSH, DIAG);
+ /* re-enable interrupts */
+ PUT( walk->old_mcr ,MCR);
+ }
+ }
+ if (!start_timer) mod_timer(&restart_timer,jiffies+IDT77105_RESTART_TIMER_PERIOD);
+}
+
+
+static int fetch_stats(struct atm_dev *dev,struct idt77105_stats *arg,int zero)
+{
+ unsigned long flags;
+ int error;
+
+ error = 0;
+ save_flags(flags);
+ cli();
+ if (arg)
+ error = copy_to_user(arg,&PRIV(dev)->stats,
+ sizeof(struct idt77105_stats));
+ if (zero && !error)
+ memset(&PRIV(dev)->stats,0,sizeof(struct idt77105_stats));
+ restore_flags(flags);
+ return error ? -EFAULT : sizeof(struct idt77105_stats);
+}
+
+
+
+static int idt77105_ioctl(struct atm_dev *dev,unsigned int cmd,void *arg)
+{
+ printk(KERN_NOTICE "%s(%d) idt77105_ioctl() called\n",dev->type,dev->number);
+ switch (cmd) {
+ case IDT77105_GETSTATZ:
+ case IDT77105_GETSTAT:
+ return fetch_stats(dev,(struct idt77105_stats *) arg,
+ cmd == IDT77105_GETSTATZ);
+ case IDT77105_SETLOOP:
+ if (!capable(CAP_NET_ADMIN)) return -EPERM;
+ if ((int) arg < 0 || (int) arg > IDT77105_LM_LOOP)
+ return -EINVAL;
+ PUT((GET(DIAG) & ~IDT77105_DIAG_LCMASK) |
+ ((int) arg == IDT77105_LM_NONE ? IDT77105_DIAG_LC_NORMAL : 0) |
+ ((int) arg == IDT77105_LM_DIAG ? IDT77105_DIAG_LC_PHY_LOOPBACK : 0) |
+ ((int) arg == IDT77105_LM_LOOP ? IDT77105_DIAG_LC_LINE_LOOPBACK : 0),
+ DIAG);
+ printk(KERN_NOTICE "%s(%d) Loopback mode is: %s\n",
+ dev->type, dev->number,
+ ((int) arg == IDT77105_LM_NONE ? "NONE" :
+ ((int) arg == IDT77105_LM_DIAG ? "DIAG (local)" :
+ ((int) arg == IDT77105_LM_LOOP ? "LOOP (remote)" :
+ "unknown")))
+ );
+ PRIV(dev)->loop_mode = (int) arg;
+ return 0;
+ case IDT77105_GETLOOP:
+ return put_user(PRIV(dev)->loop_mode,(int *) arg) ?
+ -EFAULT : sizeof(int);
+ default:
+ return -ENOIOCTLCMD;
+ }
+}
+
+
+
+static void idt77105_int(struct atm_dev *dev)
+{
+ unsigned char istat;
+
+ istat = GET(ISTAT); /* side effect: clears all interrupt status bits */
+
+ DPRINTK("IDT77105 generated an interrupt, istat=%02x\n", (unsigned)istat);
+
+ if (istat & IDT77105_ISTAT_RSCC) {
+ /* Rx Signal Condition Change - line went up or down */
+ if (istat & IDT77105_ISTAT_GOODSIG) { /* signal detected again */
+			/* This should not happen (the restart timer handles it), but just in case */
+ dev->signal = ATM_PHY_SIG_FOUND;
+ } else { /* signal lost */
+ /*
+ * Disable interrupts and stop all transmission and
+ * reception - the restart timer will restore these.
+ */
+ PRIV(dev)->old_mcr = GET(MCR);
+ PUT(
+ (PRIV(dev)->old_mcr|
+ IDT77105_MCR_DREC|
+ IDT77105_MCR_DRIC|
+ IDT77105_MCR_HALTTX
+ ) & ~IDT77105_MCR_EIP, MCR);
+ dev->signal = ATM_PHY_SIG_LOST;
+ printk(KERN_NOTICE "%s(itf %d): signal lost\n",
+ dev->type,dev->number);
+ }
+ }
+
+ if (istat & IDT77105_ISTAT_RFO) {
+ /* Rx FIFO Overrun -- perform a FIFO flush */
+ PUT( GET(DIAG) | IDT77105_DIAG_RFLUSH, DIAG);
+ printk(KERN_NOTICE "%s(itf %d): receive FIFO overrun\n",
+ dev->type,dev->number);
+ }
+#ifdef GENERAL_DEBUG
+ if (istat & (IDT77105_ISTAT_HECERR | IDT77105_ISTAT_SCR |
+ IDT77105_ISTAT_RSE)) {
+ /* normally don't care - just report in stats */
+ printk(KERN_NOTICE "%s(itf %d): received cell with error\n",
+ dev->type,dev->number);
+ }
+#endif
+}
+
+
+static int idt77105_start(struct atm_dev *dev)
+{
+ unsigned long flags;
+
+ if (!(PRIV(dev) = kmalloc(sizeof(struct idt77105_priv),GFP_KERNEL)))
+ return -ENOMEM;
+ PRIV(dev)->dev = dev;
+ save_flags(flags);
+ cli();
+ PRIV(dev)->next = idt77105_all;
+ idt77105_all = PRIV(dev);
+ restore_flags(flags);
+ memset(&PRIV(dev)->stats,0,sizeof(struct idt77105_stats));
+
+ /* initialise dev->signal from Good Signal Bit */
+ dev->signal = GET(ISTAT) & IDT77105_ISTAT_GOODSIG ? ATM_PHY_SIG_FOUND :
+ ATM_PHY_SIG_LOST;
+ if (dev->signal == ATM_PHY_SIG_LOST)
+ printk(KERN_WARNING "%s(itf %d): no signal\n",dev->type,
+ dev->number);
+
+ /* initialise loop mode from hardware */
+ switch ( GET(DIAG) & IDT77105_DIAG_LCMASK ) {
+ case IDT77105_DIAG_LC_NORMAL:
+ PRIV(dev)->loop_mode = IDT77105_LM_NONE;
+ break;
+ case IDT77105_DIAG_LC_PHY_LOOPBACK:
+ PRIV(dev)->loop_mode = IDT77105_LM_DIAG;
+ break;
+ case IDT77105_DIAG_LC_LINE_LOOPBACK:
+ PRIV(dev)->loop_mode = IDT77105_LM_LOOP;
+ break;
+ }
+
+ /* enable interrupts, e.g. on loss of signal */
+ PRIV(dev)->old_mcr = GET(MCR);
+ if (dev->signal == ATM_PHY_SIG_FOUND) {
+ PRIV(dev)->old_mcr |= IDT77105_MCR_EIP;
+ PUT(PRIV(dev)->old_mcr, MCR);
+ }
+
+
+ idt77105_stats_timer_func(0); /* clear 77105 counters */
+ (void) fetch_stats(dev,NULL,1); /* clear kernel counters */
+
+ cli();
+ if (!start_timer) restore_flags(flags);
+ else {
+ start_timer = 0;
+ restore_flags(flags);
+
+ init_timer(&stats_timer);
+ stats_timer.expires = jiffies+IDT77105_STATS_TIMER_PERIOD;
+ stats_timer.function = idt77105_stats_timer_func;
+ add_timer(&stats_timer);
+
+ init_timer(&restart_timer);
+ restart_timer.expires = jiffies+IDT77105_RESTART_TIMER_PERIOD;
+ restart_timer.function = idt77105_restart_timer_func;
+ add_timer(&restart_timer);
+ }
+ return 0;
+}
+
+
+static const struct atmphy_ops idt77105_ops = {
+ idt77105_start,
+ idt77105_ioctl,
+ idt77105_int
+};
+
+
+int __init idt77105_init(struct atm_dev *dev)
+{
+#ifdef MODULE
+ MOD_INC_USE_COUNT;
+#endif /* MODULE */
+
+ dev->phy = &idt77105_ops;
+ return 0;
+}
+
+
+/*
+ * TODO: this function should be called through phy_ops
+ * but that will not be possible for some time as there is
+ * currently a freeze on modifying that structure
+ * -- Greg Banks, 13 Sep 1999
+ */
+int idt77105_stop(struct atm_dev *dev)
+{
+ struct idt77105_priv *walk, *prev;
+
+ DPRINTK("%s(itf %d): stopping IDT77105\n",dev->type,dev->number);
+
+ /* disable interrupts */
+ PUT( GET(MCR) & ~IDT77105_MCR_EIP, MCR );
+
+ /* detach private struct from atm_dev & free */
+ for (prev = NULL, walk = idt77105_all ;
+ walk != NULL;
+ prev = walk, walk = walk->next) {
+ if (walk->dev == dev) {
+ if (prev != NULL)
+ prev->next = walk->next;
+ else
+ idt77105_all = walk->next;
+ dev->phy = NULL;
+ PRIV(dev) = NULL;
+ kfree(walk);
+ break;
+ }
+ }
+
+#ifdef MODULE
+ MOD_DEC_USE_COUNT;
+#endif /* MODULE */
+ return 0;
+}
+
+
+
+EXPORT_SYMBOL(idt77105_init);
+EXPORT_SYMBOL(idt77105_stop);
+
+#ifdef MODULE
+
+int init_module(void)
+{
+ return 0;
+}
+
+
+void cleanup_module(void)
+{
+ /* turn off timers */
+ del_timer(&stats_timer);
+ del_timer(&restart_timer);
+}
+
+#endif
--- /dev/null
+/* drivers/atm/idt77105.h - IDT77105 (PHY) declarations */
+
+/* Written 1999 by Greg Banks, NEC Australia <gnb@linuxfan.com>. Based on suni.h */
+
+
+#ifndef DRIVER_ATM_IDT77105_H
+#define DRIVER_ATM_IDT77105_H
+
+#include <linux/atmdev.h>
+#include <linux/atmioc.h>
+
+
+/* IDT77105 registers */
+
+#define IDT77105_MCR 0x0 /* Master Control Register */
+#define IDT77105_ISTAT 0x1 /* Interrupt Status */
+#define IDT77105_DIAG 0x2 /* Diagnostic Control */
+#define IDT77105_LEDHEC 0x3 /* LED Driver & HEC Status/Control */
+#define IDT77105_CTRLO 0x4 /* Low Byte Counter Register */
+#define IDT77105_CTRHI 0x5 /* High Byte Counter Register */
+#define IDT77105_CTRSEL 0x6 /* Counter Register Read Select */
+
+/* IDT77105 register values */
+
+/* MCR */
+#define IDT77105_MCR_UPLO 0x80 /* R/W, User Prog'le Output Latch */
+#define IDT77105_MCR_DREC 0x40 /* R/W, Discard Receive Error Cells */
+#define IDT77105_MCR_ECEIO 0x20 /* R/W, Enable Cell Error Interrupts
+ * Only */
+#define IDT77105_MCR_TDPC 0x10 /* R/W, Transmit Data Parity Check */
+#define IDT77105_MCR_DRIC 0x08 /* R/W, Discard Received Idle Cells */
+#define IDT77105_MCR_HALTTX 0x04 /* R/W, Halt Tx */
+#define IDT77105_MCR_UMODE 0x02 /* R/W, Utopia (cell/byte) Mode */
+#define IDT77105_MCR_EIP 0x01 /* R/W, Enable Interrupt Pin */
+
+/* ISTAT */
+#define IDT77105_ISTAT_GOODSIG 0x40 /* R, Good Signal Bit */
+#define IDT77105_ISTAT_HECERR 0x20 /* sticky, HEC Error*/
+#define IDT77105_ISTAT_SCR 0x10 /* sticky, Short Cell Received */
+#define IDT77105_ISTAT_TPE 0x08 /* sticky, Transmit Parity Error */
+#define IDT77105_ISTAT_RSCC 0x04 /* sticky, Rx Signal Condition Change */
+#define IDT77105_ISTAT_RSE 0x02 /* sticky, Rx Symbol Error */
+#define IDT77105_ISTAT_RFO 0x01 /* sticky, Rx FIFO Overrun */
+
+/* DIAG */
+#define IDT77105_DIAG_FTD 0x80 /* R/W, Force TxClav deassert */
+#define IDT77105_DIAG_ROS 0x40 /* R/W, RxClav operation select */
+#define IDT77105_DIAG_MPCS 0x20 /* R/W, Multi-PHY config'n select */
+#define IDT77105_DIAG_RFLUSH 0x10 /* R/W, clear receive FIFO */
+#define IDT77105_DIAG_ITPE 0x08 /* R/W, Insert Tx payload error */
+#define IDT77105_DIAG_ITHE 0x04 /* R/W, Insert Tx HEC error */
+#define IDT77105_DIAG_UMODE 0x02 /* R/W, Utopia (cell/byte) Mode */
+#define IDT77105_DIAG_LCMASK 0x03 /* R/W, Loopback Control */
+
+#define IDT77105_DIAG_LC_NORMAL 0x00 /* Receive from network */
+#define IDT77105_DIAG_LC_PHY_LOOPBACK 0x02
+#define IDT77105_DIAG_LC_LINE_LOOPBACK 0x03
+
+/* LEDHEC */
+#define IDT77105_LEDHEC_DRHC 0x40 /* R/W, Disable Rx HEC check */
+#define IDT77105_LEDHEC_DTHC 0x20 /* R/W, Disable Tx HEC calculation */
+#define IDT77105_LEDHEC_RPWMASK 0x18 /* R/W, RxRef pulse width select */
+#define IDT77105_LEDHEC_TFS 0x04 /* R, Tx FIFO Status (1=empty) */
+#define IDT77105_LEDHEC_TLS 0x02 /* R, Tx LED Status (1=lit) */
+#define IDT77105_LEDHEC_RLS 0x01 /* R, Rx LED Status (1=lit) */
+
+#define IDT77105_LEDHEC_RPW_1 0x00 /* RxRef active for 1 RxClk cycle */
+#define IDT77105_LEDHEC_RPW_2 0x08 /* RxRef active for 2 RxClk cycles */
+#define IDT77105_LEDHEC_RPW_4 0x10 /* RxRef active for 4 RxClk cycles */
+#define IDT77105_LEDHEC_RPW_8 0x18 /* RxRef active for 8 RxClk cycles */
+
+/* CTRSEL */
+#define IDT77105_CTRSEL_SEC 0x08 /* W, Symbol Error Counter */
+#define IDT77105_CTRSEL_TCC 0x04 /* W, Tx Cell Counter */
+#define IDT77105_CTRSEL_RCC 0x02 /* W, Rx Cell Counter */
+#define IDT77105_CTRSEL_RHEC 0x01 /* W, Rx HEC Error Counter */
+
+#ifdef __KERNEL__
+int idt77105_init(struct atm_dev *dev) __init;
+int idt77105_stop(struct atm_dev *dev);
+#endif
+
+/*
+ * Tunable parameters
+ */
+
+/* Time between samples of the hardware cell counters. Should be <= 1 sec */
+#define IDT77105_STATS_TIMER_PERIOD (HZ)
+/* Time between checks to see if the signal has been found again */
+#define IDT77105_RESTART_TIMER_PERIOD (5 * HZ)
+
+#endif
--- /dev/null
+/******************************************************************************
+ iphase.c: Device driver for Interphase ATM PCI adapter cards
+ Author: Peter Wang <pwang@iphase.com>
+ Interphase Corporation <www.iphase.com>
+ Version: 1.0
+*******************************************************************************
+
+ This software may be used and distributed according to the terms
+ of the GNU Public License (GPL), incorporated herein by reference.
+ Drivers based on this skeleton fall under the GPL and must retain
+ the authorship (implicit copyright) notice.
+
+ This program is distributed in the hope that it will be useful, but
+ WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ General Public License for more details.
+
+  Modified from an incomplete driver for the Interphase 5575 1KVC 1M card which
+  was originally written by Monalisa Agrawal at UNH. Now this driver
+  supports a variety of variants of the Interphase ATM PCI (i)Chip adapter
+  card family (see www.iphase.com/products/ClassSheet.cfm?ClassID=ATM)
+  in terms of PHY type, the size of control memory and the size of
+  packet memory. The following is the change log and history:
+
+     Bugfix Mona's UBR driver.
+ Modify the basic memory allocation and dma logic.
+ Port the driver to the latest kernel from 2.0.46.
+     Complete the ABR logic of the driver, and add the ABR workaround
+     for the hardware anomalies.
+ Add the CBR support.
+ Add the flow control logic to the driver to allow rate-limit VC.
+ Add 4K VC support to the board with 512K control memory.
+ Add the support of all the variants of the Interphase ATM PCI
+ (i)Chip adapter cards including x575 (155M OC3 and UTP155), x525
+ (25M UTP25) and x531 (DS3 and E3).
+ Add SMP support.
+
+ Support and updates available at: ftp://ftp.iphase.com/pub/atm
+
+*******************************************************************************/
+
+#ifdef IA_MODULE
+#define MODULE
+#endif
+#include <linux/version.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/pci.h>
+#include <linux/errno.h>
+#include <linux/atm.h>
+#include <linux/atmdev.h>
+#include <linux/sonet.h>
+#include <linux/skbuff.h>
+#include <linux/time.h>
+#include <linux/sched.h> /* for xtime */
+#include <linux/delay.h>
+#include <linux/uio.h>
+#include <linux/init.h>
+#include <asm/system.h>
+#include <asm/io.h>
+#include <asm/uaccess.h>
+#include <asm/string.h>
+#include <asm/byteorder.h>
+#include <linux/vmalloc.h>
+#include <linux/time.h>
+#include "iphase.h"
+#include "suni.h"
+#define swap(x) (((x & 0xff) << 8) | ((x & 0xff00) >> 8))
+struct suni_priv {
+ struct sonet_stats sonet_stats; /* link diagnostics */
+ unsigned char loop_mode; /* loopback mode */
+ struct atm_dev *dev; /* device back-pointer */
+ struct suni_priv *next; /* next SUNI */
+};
+#define PRIV(dev) ((struct suni_priv *) dev->phy_data)
+
+static unsigned char ia_phy_get(struct atm_dev *dev, unsigned long addr);
+
+static IADEV *ia_dev[8] = {NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL};
+static struct atm_dev *_ia_dev[8] = {NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL};
+static int iadev_count = 0;
+static struct timer_list ia_timer;
+struct atm_vcc *vcc_close_que[100];
+static int IA_TX_BUF = DFL_TX_BUFFERS, IA_TX_BUF_SZ = DFL_TX_BUF_SZ;
+static int IA_RX_BUF = DFL_RX_BUFFERS, IA_RX_BUF_SZ = DFL_RX_BUF_SZ;
+static u32 IADebugFlag = /* IF_IADBG_ERR | IF_IADBG_CBR| IF_IADBG_INIT_ADAPTER
+ |IF_IADBG_ABR | IF_IADBG_EVENT*/ 0;
+
+#ifdef MODULE
+MODULE_PARM(IA_TX_BUF, "i");
+MODULE_PARM(IA_TX_BUF_SZ, "i");
+MODULE_PARM(IA_RX_BUF, "i");
+MODULE_PARM(IA_RX_BUF_SZ, "i");
+MODULE_PARM(IADebugFlag, "i");
+#endif
+
+/**************************** IA_LIB **********************************/
+
+static void ia_init_rtn_q (IARTN_Q *que)
+{
+ que->next = NULL;
+ que->tail = NULL;
+}
+
+static void ia_enque_head_rtn_q (IARTN_Q *que, IARTN_Q * data)
+{
+ data->next = NULL;
+ if (que->next == NULL)
+ que->next = que->tail = data;
+ else {
+ data->next = que->next;
+ que->next = data;
+ }
+ return;
+}
+
+static int ia_enque_rtn_q (IARTN_Q *que, struct desc_tbl_t data) {
+ IARTN_Q *entry;
+ entry = (IARTN_Q *)kmalloc(sizeof(IARTN_Q), GFP_KERNEL);
+ if (!entry) return -1;
+ entry->data = data;
+ entry->next = NULL;
+ if (que->next == NULL)
+ que->next = que->tail = entry;
+ else {
+ que->tail->next = entry;
+ que->tail = que->tail->next;
+ }
+ return 1;
+}
+
+static IARTN_Q * ia_deque_rtn_q (IARTN_Q *que) {
+ IARTN_Q *tmpdata;
+ if (que->next == NULL)
+ return NULL;
+ tmpdata = que->next;
+ if ( que->next == que->tail)
+ que->next = que->tail = NULL;
+ else
+ que->next = que->next->next;
+ return tmpdata;
+}
+
+static void ia_hack_tcq(IADEV *dev) {
+
+ u_short desc1;
+ u_short tcq_wr;
+ struct ia_vcc *iavcc_r = NULL;
+ extern void desc_dbg(IADEV *iadev);
+
+ tcq_wr = readl(dev->seg_reg+TCQ_WR_PTR) & 0xffff;
+ while (dev->host_tcq_wr != tcq_wr) {
+ desc1 = *(u_short *)(dev->seg_ram + dev->host_tcq_wr);
+ if (!desc1) ;
+ else if (!dev->desc_tbl[desc1 -1].timestamp) {
+ IF_ABR(printk(" Desc %d is reset at %ld\n", desc1 -1, jiffies);)
+ *(u_short *) (dev->seg_ram + dev->host_tcq_wr) = 0;
+ }
+ else if (dev->desc_tbl[desc1 -1].timestamp) {
+ if (!(iavcc_r = dev->desc_tbl[desc1 -1].iavcc)) {
+ printk("IA: Fatal err in get_desc\n");
+ continue;
+ }
+ iavcc_r->vc_desc_cnt--;
+ dev->desc_tbl[desc1 -1].timestamp = 0;
+ IF_EVENT(printk("ia_hack: return_q skb = 0x%x desc = %d\n",
+ (u32)dev->desc_tbl[desc1 -1].txskb, desc1);)
+ if (iavcc_r->pcr < dev->rate_limit) {
+ IA_SKB_STATE (dev->desc_tbl[desc1-1].txskb) |= IA_TX_DONE;
+ if (ia_enque_rtn_q(&dev->tx_return_q, dev->desc_tbl[desc1 -1]) < 0)
+ printk("ia_hack_tcq: No memory available\n");
+ }
+ dev->desc_tbl[desc1 -1].iavcc = NULL;
+ dev->desc_tbl[desc1 -1].txskb = NULL;
+ }
+ dev->host_tcq_wr += 2;
+ if (dev->host_tcq_wr > dev->ffL.tcq_ed)
+ dev->host_tcq_wr = dev->ffL.tcq_st;
+ }
+} /* ia_hack_tcq */
+
+static u16 get_desc (IADEV *dev, struct ia_vcc *iavcc) {
+ u_short desc_num, i;
+ struct sk_buff *skb;
+ struct ia_vcc *iavcc_r = NULL;
+ unsigned long delta;
+ static unsigned long timer = 0;
+ int ltimeout;
+ extern void desc_dbg(IADEV *iadev);
+
+ ia_hack_tcq (dev);
+ if(((jiffies - timer)>50)||((dev->ffL.tcq_rd==dev->host_tcq_wr))){
+ timer = jiffies;
+ i=0;
+ while (i < dev->num_tx_desc) {
+ if (!dev->desc_tbl[i].timestamp) {
+ i++;
+ continue;
+ }
+ ltimeout = dev->desc_tbl[i].iavcc->ltimeout;
+ delta = jiffies - dev->desc_tbl[i].timestamp;
+ if (delta >= ltimeout) {
+          IF_ABR(printk("RECOVER run!! desc_tbl %d = %d delta = %ld, time = %ld\n",
+                 i, dev->desc_tbl[i].timestamp, delta, jiffies);)
+ if (dev->ffL.tcq_rd == dev->ffL.tcq_st)
+ dev->ffL.tcq_rd = dev->ffL.tcq_ed;
+ else
+ dev->ffL.tcq_rd -= 2;
+ *(u_short *)(dev->seg_ram + dev->ffL.tcq_rd) = i+1;
+ if (!(skb = dev->desc_tbl[i].txskb) ||
+ !(iavcc_r = dev->desc_tbl[i].iavcc))
+ printk("Fatal err, desc table vcc or skb is NULL\n");
+ else
+ iavcc_r->vc_desc_cnt--;
+ dev->desc_tbl[i].timestamp = 0;
+ dev->desc_tbl[i].iavcc = NULL;
+ dev->desc_tbl[i].txskb = NULL;
+ }
+ i++;
+ } /* while */
+ }
+ if (dev->ffL.tcq_rd == dev->host_tcq_wr)
+ return 0xFFFF;
+
+ /* Get the next available descriptor number from TCQ */
+ desc_num = *(u_short *)(dev->seg_ram + dev->ffL.tcq_rd);
+
+ while (!desc_num || (dev->desc_tbl[desc_num -1]).timestamp) {
+ dev->ffL.tcq_rd += 2;
+ if (dev->ffL.tcq_rd > dev->ffL.tcq_ed)
+ dev->ffL.tcq_rd = dev->ffL.tcq_st;
+ if (dev->ffL.tcq_rd == dev->host_tcq_wr)
+ return 0xFFFF;
+ desc_num = *(u_short *)(dev->seg_ram + dev->ffL.tcq_rd);
+ }
+
+ /* get system time */
+ dev->desc_tbl[desc_num -1].timestamp = jiffies;
+ return desc_num;
+}
+
+static void clear_lockup (struct atm_vcc *vcc, IADEV *dev) {
+ u_char foundLockUp;
+ vcstatus_t *vcstatus;
+ u_short *shd_tbl;
+ u_short tempCellSlot, tempFract;
+ struct main_vc *abr_vc = (struct main_vc *)dev->MAIN_VC_TABLE_ADDR;
+ struct ext_vc *eabr_vc = (struct ext_vc *)dev->EXT_VC_TABLE_ADDR;
+ u_int i;
+
+ if (vcc->qos.txtp.traffic_class == ATM_ABR) {
+ vcstatus = (vcstatus_t *) &(dev->testTable[vcc->vci]->vc_status);
+ vcstatus->cnt++;
+ foundLockUp = 0;
+ if( vcstatus->cnt == 0x05 ) {
+ abr_vc += vcc->vci;
+ eabr_vc += vcc->vci;
+ if( eabr_vc->last_desc ) {
+ if( (abr_vc->status & 0x07) == ABR_STATE /* 0x2 */ ) {
+ /* Wait for 10 Micro sec */
+ udelay(10);
+ if ((eabr_vc->last_desc)&&((abr_vc->status & 0x07)==ABR_STATE))
+ foundLockUp = 1;
+ }
+ else {
+ tempCellSlot = abr_vc->last_cell_slot;
+ tempFract = abr_vc->fraction;
+ if((tempCellSlot == dev->testTable[vcc->vci]->lastTime)
+ && (tempFract == dev->testTable[vcc->vci]->fract))
+ foundLockUp = 1;
+ dev->testTable[vcc->vci]->lastTime = tempCellSlot;
+ dev->testTable[vcc->vci]->fract = tempFract;
+ }
+ } /* last descriptor */
+ vcstatus->cnt = 0;
+ } /* vcstatus->cnt */
+
+ if (foundLockUp) {
+ IF_ABR(printk("LOCK UP found\n");)
+ writew(0xFFFD, dev->seg_reg+MODE_REG_0);
+ /* Wait for 10 Micro sec */
+ udelay(10);
+ abr_vc->status &= 0xFFF8;
+ abr_vc->status |= 0x0001; /* state is idle */
+ shd_tbl = (u_short *)dev->ABR_SCHED_TABLE_ADDR;
+ for( i = 0; ((i < dev->num_vc) && (shd_tbl[i])); i++ );
+ if (i < dev->num_vc)
+ shd_tbl[i] = vcc->vci;
+ else
+ IF_ERR(printk("ABR Seg. may not continue on VC %x\n",vcc->vci);)
+ writew(T_ONLINE, dev->seg_reg+MODE_REG_0);
+ writew(~(TRANSMIT_DONE|TCQ_NOT_EMPTY), dev->seg_reg+SEG_MASK_REG);
+ writew(TRANSMIT_DONE, dev->seg_reg+SEG_INTR_STATUS_REG);
+ vcstatus->cnt = 0;
+ } /* foundLockUp */
+
+ } /* if an ABR VC */
+
+
+}
+
+/*
+** Conversion of 24-bit cellrate (cells/sec) to 16-bit floating point format.
+**
+** +----+----+------------------+-------------------------------+
+** | R | NZ | 5-bit exponent | 9-bit mantissa |
+** +----+----+------------------+-------------------------------+
+**
+** R = reserved (written as 0)
+** NZ = 0 if 0 cells/sec; 1 otherwise
+**
+** if NZ = 1, rate = 1.mmmmmmmmm x 2^(eeeee) cells/sec
+*/
+static u16
+cellrate_to_float(u32 cr)
+{
+
+#define NZ 0x4000
+#define M_BITS 9 /* Number of bits in mantissa */
+#define E_BITS 5 /* Number of bits in exponent */
+#define M_MASK 0x1ff
+#define E_MASK 0x1f
+ u16 flot;
+ u32 tmp = cr & 0x00ffffff;
+ int i = 0;
+ if (cr == 0)
+ return 0;
+ while (tmp != 1) {
+ tmp >>= 1;
+ i++;
+ }
+ if (i == M_BITS)
+ flot = NZ | (i << M_BITS) | (cr & M_MASK);
+ else if (i < M_BITS)
+ flot = NZ | (i << M_BITS) | ((cr << (M_BITS - i)) & M_MASK);
+ else
+ flot = NZ | (i << M_BITS) | ((cr >> (i - M_BITS)) & M_MASK);
+ return flot;
+}
+
+#if 0
+/*
+** Conversion of 16-bit floating point format to 24-bit cellrate (cells/sec).
+*/
+static u32
+float_to_cellrate(u16 rate)
+{
+ u32 exp, mantissa, cps;
+ if ((rate & NZ) == 0)
+ return 0;
+ exp = (rate >> M_BITS) & E_MASK;
+ mantissa = rate & M_MASK;
+ if (exp == 0)
+ return 1;
+ cps = (1 << M_BITS) | mantissa;
+   if (exp > M_BITS)
+      cps <<= (exp - M_BITS);
+   else if (exp < M_BITS)
+      cps >>= (M_BITS - exp);
+ return cps;
+}
+#endif
+
+static void init_abr_vc (IADEV *dev, srv_cls_param_t *srv_p) {
+ srv_p->class_type = ATM_ABR;
+ srv_p->pcr = dev->LineRate;
+ srv_p->mcr = 0;
+ srv_p->icr = 0x055cb7;
+ srv_p->tbe = 0xffffff;
+ srv_p->frtt = 0x3a;
+ srv_p->rif = 0xf;
+ srv_p->rdf = 0xb;
+ srv_p->nrm = 0x4;
+ srv_p->trm = 0x7;
+ srv_p->cdf = 0x3;
+ srv_p->adtf = 50;
+}
+
+static int
+ia_open_abr_vc(IADEV *dev, srv_cls_param_t *srv_p,
+ struct atm_vcc *vcc, u8 flag)
+{
+ f_vc_abr_entry *f_abr_vc;
+ r_vc_abr_entry *r_abr_vc;
+ u32 icr;
+ u8 trm, nrm, crm;
+ u16 adtf, air, *ptr16;
+ f_abr_vc =(f_vc_abr_entry *)dev->MAIN_VC_TABLE_ADDR;
+ f_abr_vc += vcc->vci;
+ switch (flag) {
+ case 1: /* FFRED initialization */
+#if 0 /* sanity check */
+ if (srv_p->pcr == 0)
+ return INVALID_PCR;
+ if (srv_p->pcr > dev->LineRate)
+ srv_p->pcr = dev->LineRate;
+ if ((srv_p->mcr + dev->sum_mcr) > dev->LineRate)
+ return MCR_UNAVAILABLE;
+ if (srv_p->mcr > srv_p->pcr)
+ return INVALID_MCR;
+ if (!(srv_p->icr))
+ srv_p->icr = srv_p->pcr;
+ if ((srv_p->icr < srv_p->mcr) || (srv_p->icr > srv_p->pcr))
+ return INVALID_ICR;
+ if ((srv_p->tbe < MIN_TBE) || (srv_p->tbe > MAX_TBE))
+ return INVALID_TBE;
+ if ((srv_p->frtt < MIN_FRTT) || (srv_p->frtt > MAX_FRTT))
+ return INVALID_FRTT;
+ if (srv_p->nrm > MAX_NRM)
+ return INVALID_NRM;
+ if (srv_p->trm > MAX_TRM)
+ return INVALID_TRM;
+ if (srv_p->adtf > MAX_ADTF)
+ return INVALID_ADTF;
+ else if (srv_p->adtf == 0)
+ srv_p->adtf = 1;
+ if (srv_p->cdf > MAX_CDF)
+ return INVALID_CDF;
+ if (srv_p->rif > MAX_RIF)
+ return INVALID_RIF;
+ if (srv_p->rdf > MAX_RDF)
+ return INVALID_RDF;
+#endif
+ memset ((caddr_t)f_abr_vc, 0, sizeof(f_vc_abr_entry));
+ f_abr_vc->f_vc_type = ABR;
+ nrm = 2 << srv_p->nrm; /* (2 ** (srv_p->nrm +1)) */
+ /* i.e 2**n = 2 << (n-1) */
+ f_abr_vc->f_nrm = nrm << 8 | nrm;
+ trm = 100000/(2 << (16 - srv_p->trm));
+ if ( trm == 0) trm = 1;
+ f_abr_vc->f_nrmexp =(((srv_p->nrm +1) & 0x0f) << 12)|(MRM << 8) | trm;
+ crm = srv_p->tbe / nrm;
+ if (crm == 0) crm = 1;
+ f_abr_vc->f_crm = crm & 0xff;
+ f_abr_vc->f_pcr = cellrate_to_float(srv_p->pcr);
+ icr = MIN( srv_p->icr, (srv_p->tbe > srv_p->frtt) ?
+ ((srv_p->tbe/srv_p->frtt)*1000000) :
+ (1000000/(srv_p->frtt/srv_p->tbe)));
+ f_abr_vc->f_icr = cellrate_to_float(icr);
+ adtf = (10000 * srv_p->adtf)/8192;
+ if (adtf == 0) adtf = 1;
+ f_abr_vc->f_cdf = ((7 - srv_p->cdf) << 12 | adtf) & 0xfff;
+ f_abr_vc->f_mcr = cellrate_to_float(srv_p->mcr);
+ f_abr_vc->f_acr = f_abr_vc->f_icr;
+ f_abr_vc->f_status = 0x0042;
+ break;
+ case 0: /* RFRED initialization */
+ ptr16 = (u_short *)(dev->reass_ram + REASS_TABLE*dev->memSize);
+ *(ptr16 + vcc->vci) = NO_AAL5_PKT | REASS_ABR;
+ r_abr_vc = (r_vc_abr_entry*)(dev->reass_ram+ABR_VC_TABLE*dev->memSize);
+ r_abr_vc += vcc->vci;
+ r_abr_vc->r_status_rdf = (15 - srv_p->rdf) & 0x000f;
+ air = srv_p->pcr << (15 - srv_p->rif);
+ if (air == 0) air = 1;
+ r_abr_vc->r_air = cellrate_to_float(air);
+ dev->testTable[vcc->vci]->vc_status = VC_ACTIVE | VC_ABR;
+ dev->sum_mcr += srv_p->mcr;
+ dev->n_abr++;
+ break;
+ default:
+ break;
+ }
+ return 0;
+}
+static int ia_cbr_setup (IADEV *dev, struct atm_vcc *vcc) {
+ u32 rateLow=0, rateHigh, rate;
+ int entries;
+ struct ia_vcc *ia_vcc;
+
+ int idealSlot =0, testSlot, toBeAssigned, inc;
+ u32 spacing;
+ u16 *SchedTbl, *TstSchedTbl;
+ u16 cbrVC, vcIndex;
+ u32 fracSlot = 0;
+ u32 sp_mod = 0;
+ u32 sp_mod2 = 0;
+
+ /* IpAdjustTrafficParams */
+ if (vcc->qos.txtp.max_pcr <= 0) {
+ IF_ERR(printk("PCR for CBR not defined\n");)
+ return -1;
+ }
+ rate = vcc->qos.txtp.max_pcr;
+ entries = rate / dev->Granularity;
+ IF_CBR(printk("CBR: CBR entries=0x%x for rate=0x%x & Gran=0x%x\n",
+ entries, rate, dev->Granularity);)
+ if (entries < 1)
+ IF_CBR(printk("CBR: Bandwidth smaller than granularity of CBR table\n");)
+ rateLow = entries * dev->Granularity;
+ rateHigh = (entries + 1) * dev->Granularity;
+ if (3*(rate - rateLow) > (rateHigh - rate))
+ entries++;
+ if (entries > dev->CbrRemEntries) {
+ IF_CBR(printk("CBR: Not enough bandwidth to support this PCR.\n");)
+ IF_CBR(printk("Entries = 0x%x, CbrRemEntries = 0x%x.\n",
+ entries, dev->CbrRemEntries);)
+ return -EBUSY;
+ }
+
+ ia_vcc = INPH_IA_VCC(vcc);
+ ia_vcc->NumCbrEntry = entries;
+ dev->sum_mcr += entries * dev->Granularity;
+ /* IaFFrednInsertCbrSched */
+ // Starting at an arbitrary location, place the entries into the table
+ // as smoothly as possible
+ cbrVC = 0;
+ spacing = dev->CbrTotEntries / entries;
+ sp_mod = dev->CbrTotEntries % entries; // get modulo
+ toBeAssigned = entries;
+ fracSlot = 0;
+ vcIndex = vcc->vci;
+ IF_CBR(printk("Vci=0x%x,Spacing=0x%x,Sp_mod=0x%x\n",vcIndex,spacing,sp_mod);)
+ while (toBeAssigned)
+ {
+ // If this is the first time, start the table loading for this connection
+ // as close to entryPoint as possible.
+ if (toBeAssigned == entries)
+ {
+ idealSlot = dev->CbrEntryPt;
+ dev->CbrEntryPt += 2; // Adding 2 helps to prevent clumping
+ if (dev->CbrEntryPt >= dev->CbrTotEntries)
+ dev->CbrEntryPt -= dev->CbrTotEntries;// Wrap if necessary
+ } else {
+ idealSlot += (u32)(spacing + fracSlot); // Point to the next location
+ // in the table that would be smoothest
+ fracSlot = ((sp_mod + sp_mod2) / entries); // get new integer part
+ sp_mod2 = ((sp_mod + sp_mod2) % entries); // calc new fractional part
+ }
+ if (idealSlot >= (int)dev->CbrTotEntries)
+ idealSlot -= dev->CbrTotEntries;
+ // Continuously check around this ideal value until a null
+ // location is encountered.
+ SchedTbl = (u16*)(dev->seg_ram+CBR_SCHED_TABLE*dev->memSize);
+ inc = 0;
+ testSlot = idealSlot;
+ TstSchedTbl = (u16*)(SchedTbl+testSlot); //set index and read in value
+ IF_CBR(printk("CBR Testslot 0x%x AT Location 0x%x, NumToAssign=%d\n",
+ testSlot, (u32)TstSchedTbl,toBeAssigned);)
+ memcpy((caddr_t)&cbrVC,(caddr_t)TstSchedTbl,sizeof(u16));
+ while (cbrVC) // If another VC at this location, we have to keep looking
+ {
+ inc++;
+ testSlot = idealSlot - inc;
+ if (testSlot < 0) { // Wrap if necessary
+ testSlot += dev->CbrTotEntries;
+ IF_CBR(printk("Testslot Wrap. STable Start=0x%x,Testslot=%d\n",
+ (u32)SchedTbl,testSlot);)
+ }
+ TstSchedTbl = (u16 *)(SchedTbl + testSlot); // set table index
+ memcpy((caddr_t)&cbrVC,(caddr_t)TstSchedTbl,sizeof(u16));
+ if (!cbrVC)
+ break;
+ testSlot = idealSlot + inc;
+ if (testSlot >= (int)dev->CbrTotEntries) { // Wrap if necessary
+ testSlot -= dev->CbrTotEntries;
+ IF_CBR(printk("TotCbrEntries=%d",dev->CbrTotEntries);)
+ IF_CBR(printk(" Testslot=0x%x ToBeAssgned=%d\n",
+ testSlot, toBeAssigned);)
+ }
+ // set table index and read in value
+ TstSchedTbl = (u16*)(SchedTbl + testSlot);
+ IF_CBR(printk("Reading CBR Tbl from 0x%x, CbrVal=0x%x Iteration %d\n",
+ (u32)TstSchedTbl,cbrVC,inc);)
+ memcpy((caddr_t)&cbrVC,(caddr_t)TstSchedTbl,sizeof(u16));
+ } /* while */
+ // Move this VCI number into this location of the CBR Sched table.
+ memcpy((caddr_t)TstSchedTbl, (caddr_t)&vcIndex,sizeof(u16));
+ dev->CbrRemEntries--;
+ toBeAssigned--;
+ } /* while */
+
+ /* IaFFrednCbrEnable */
+ dev->NumEnabledCBR++;
+ if (dev->NumEnabledCBR == 1) {
+ writew((CBR_EN | UBR_EN | ABR_EN | (0x23 << 2)), dev->seg_reg+STPARMS);
+ IF_CBR(printk("CBR is enabled\n");)
+ }
+ return 0;
+}
+static void ia_cbrVc_close (struct atm_vcc *vcc) {
+ IADEV *iadev;
+ u16 *SchedTbl, NullVci = 0;
+ u32 i, NumFound;
+
+ iadev = INPH_IA_DEV(vcc->dev);
+ iadev->NumEnabledCBR--;
+ SchedTbl = (u16*)(iadev->seg_ram+CBR_SCHED_TABLE*iadev->memSize);
+ if (iadev->NumEnabledCBR == 0) {
+ writew((UBR_EN | ABR_EN | (0x23 << 2)), iadev->seg_reg+STPARMS);
+ IF_CBR (printk("CBR support disabled\n");)
+ }
+ NumFound = 0;
+ for (i=0; i < iadev->CbrTotEntries; i++)
+ {
+ if (*SchedTbl == vcc->vci) {
+ iadev->CbrRemEntries++;
+ *SchedTbl = NullVci;
+ IF_CBR(NumFound++;)
+ }
+ SchedTbl++;
+ }
+ IF_CBR(printk("Exit ia_cbrVc_close, NumRemoved=%d\n",NumFound);)
+}
+
+static int ia_avail_descs(IADEV *iadev) {
+ int tmp = 0;
+ ia_hack_tcq(iadev);
+ if (iadev->host_tcq_wr >= iadev->ffL.tcq_rd)
+ tmp = (iadev->host_tcq_wr - iadev->ffL.tcq_rd) / 2;
+ else
+ tmp = (iadev->ffL.tcq_ed - iadev->ffL.tcq_rd + 2 + iadev->host_tcq_wr -
+ iadev->ffL.tcq_st) / 2;
+ return tmp;
+}
+
+static int ia_que_tx (IADEV *iadev) {
+ struct sk_buff *skb;
+ int num_desc;
+ struct atm_vcc *vcc;
+ struct ia_vcc *iavcc;
+ static int ia_pkt_tx (struct atm_vcc *vcc, struct sk_buff *skb);
+ num_desc = ia_avail_descs(iadev);
+ while (num_desc && (skb = skb_dequeue(&iadev->tx_backlog))) {
+ if (!(vcc = ATM_SKB(skb)->vcc)) {
+ dev_kfree_skb(skb);
+ printk("ia_que_tx: Null vcc\n");
+ break;
+ }
+ if ((vcc->flags & ATM_VF_READY) == 0 ) {
+ dev_kfree_skb(skb);
+ printk("Free the SKB on closed vci %d \n", vcc->vci);
+ break;
+ }
+ iavcc = INPH_IA_VCC(vcc);
+ if (ia_pkt_tx (vcc, skb)) {
+ skb_queue_head(&iadev->tx_backlog, skb);
+ }
+ num_desc--;
+ }
+ return 0;
+}
+void ia_tx_poll (IADEV *iadev) {
+ struct atm_vcc *vcc = NULL;
+ struct sk_buff *skb = NULL, *skb1 = NULL;
+ struct ia_vcc *iavcc;
+ IARTN_Q * rtne;
+
+ ia_hack_tcq(iadev);
+ while ( (rtne = ia_deque_rtn_q(&iadev->tx_return_q))) {
+ skb = rtne->data.txskb;
+ if (!skb) {
+ printk("ia_tx_poll: skb is null\n");
+ return;
+ }
+ vcc = ATM_SKB(skb)->vcc;
+ if (!vcc) {
+ printk("ia_tx_poll: vcc is null\n");
+ dev_kfree_skb(skb);
+ return;
+ }
+
+ iavcc = INPH_IA_VCC(vcc);
+ if (!iavcc) {
+ printk("ia_tx_poll: iavcc is null\n");
+ dev_kfree_skb(skb);
+ return;
+ }
+
+ skb1 = skb_dequeue(&iavcc->txing_skb);
+ while (skb1 && (skb1 != skb)) {
+ if (!(IA_SKB_STATE(skb1) & IA_TX_DONE)) {
+ printk("IA_tx_intr: Vci %d lost pkt!!!\n", vcc->vci);
+ }
+ IF_ERR(printk("Release the SKB not match\n");)
+ if (vcc && (vcc->pop) && (skb1->len != 0))
+ {
+ vcc->pop(vcc, skb1);
+             IF_EVENT(printk("Transmit Done - skb 0x%lx return\n",
+ (long)skb1);)
+ }
+ else
+ dev_kfree_skb(skb1);
+ skb1 = skb_dequeue(&iavcc->txing_skb);
+ }
+ if (!skb1) {
+         IF_EVENT(printk("IA: Vci %d - skb not found, requeued\n",vcc->vci);)
+ ia_enque_head_rtn_q (&iadev->tx_return_q, rtne);
+ break;
+ }
+ if (vcc && (vcc->pop) && (skb->len != 0))
+ {
+ vcc->pop(vcc, skb);
+ IF_EVENT(printk("Tx Done - skb 0x%lx return\n",(long)skb);)
+ }
+ else
+ dev_kfree_skb(skb);
+ kfree(rtne);
+ }
+ ia_que_tx(iadev);
+ return;
+}
+#if 0
+static void ia_eeprom_put (IADEV *iadev, u32 addr, u_short val)
+{
+ u32 t;
+ int i;
+ /*
+ * Issue a command to enable writes to the NOVRAM
+ */
+ NVRAM_CMD (EXTEND + EWEN);
+ NVRAM_CLR_CE;
+ /*
+ * issue the write command
+ */
+ NVRAM_CMD(IAWRITE + addr);
+ /*
+ * Send the data, starting with D15, then D14, and so on for 16 bits
+ */
+ for (i=15; i>=0; i--) {
+ NVRAM_CLKOUT (val & 0x8000);
+ val <<= 1;
+ }
+ NVRAM_CLR_CE;
+ CFG_OR(NVCE);
+ t = readl(iadev->reg+IPHASE5575_EEPROM_ACCESS);
+ while (!(t & NVDO))
+ t = readl(iadev->reg+IPHASE5575_EEPROM_ACCESS);
+
+ NVRAM_CLR_CE;
+ /*
+ * disable writes again
+ */
+    NVRAM_CMD(EXTEND + EWDS);
+ NVRAM_CLR_CE;
+ CFG_AND(~NVDI);
+}
+#endif
+
+static u16 ia_eeprom_get (IADEV *iadev, u32 addr)
+{
+ u_short val;
+ u32 t;
+ int i;
+ /*
+	 * Read the first bit that was clocked with the falling edge of
+	 * the last command data clock
+ */
+ NVRAM_CMD(IAREAD + addr);
+ /*
+ * Now read the rest of the bits, the next bit read is D14, then D13,
+ * and so on.
+ */
+ val = 0;
+ for (i=15; i>=0; i--) {
+ NVRAM_CLKIN(t);
+ val |= (t << i);
+ }
+ NVRAM_CLR_CE;
+ CFG_AND(~NVDI);
+ return val;
+}
+
+static void ia_hw_type(IADEV *iadev) {
+ u_short memType = ia_eeprom_get(iadev, 25);
+ iadev->memType = memType;
+ if ((memType & MEM_SIZE_MASK) == MEM_SIZE_1M) {
+ iadev->num_tx_desc = IA_TX_BUF;
+ iadev->tx_buf_sz = IA_TX_BUF_SZ;
+ iadev->num_rx_desc = IA_RX_BUF;
+ iadev->rx_buf_sz = IA_RX_BUF_SZ;
+ } else if ((memType & MEM_SIZE_MASK) == MEM_SIZE_512K) {
+ if (IA_TX_BUF == DFL_TX_BUFFERS)
+ iadev->num_tx_desc = IA_TX_BUF / 2;
+ else
+ iadev->num_tx_desc = IA_TX_BUF;
+ iadev->tx_buf_sz = IA_TX_BUF_SZ;
+ if (IA_RX_BUF == DFL_RX_BUFFERS)
+ iadev->num_rx_desc = IA_RX_BUF / 2;
+ else
+ iadev->num_rx_desc = IA_RX_BUF;
+ iadev->rx_buf_sz = IA_RX_BUF_SZ;
+ }
+ else {
+ if (IA_TX_BUF == DFL_TX_BUFFERS)
+ iadev->num_tx_desc = IA_TX_BUF / 8;
+ else
+ iadev->num_tx_desc = IA_TX_BUF;
+ iadev->tx_buf_sz = IA_TX_BUF_SZ;
+ if (IA_RX_BUF == DFL_RX_BUFFERS)
+ iadev->num_rx_desc = IA_RX_BUF / 8;
+ else
+ iadev->num_rx_desc = IA_RX_BUF;
+ iadev->rx_buf_sz = IA_RX_BUF_SZ;
+ }
+ iadev->rx_pkt_ram = TX_PACKET_RAM + (iadev->num_tx_desc * iadev->tx_buf_sz);
+ IF_INIT(printk("BUF: tx=%d,sz=%d rx=%d sz= %d rx_pkt_ram=%d\n",
+ iadev->num_tx_desc, iadev->tx_buf_sz, iadev->num_rx_desc,
+ iadev->rx_buf_sz, iadev->rx_pkt_ram);)
+
+#if 0
+   if ((memType & FE_MASK) == FE_SINGLE_MODE)
+      iadev->phy_type = PHY_OC3C_S;
+   else if ((memType & FE_MASK) == FE_UTP_OPTION)
+      iadev->phy_type = PHY_UTP155;
+   else
+      iadev->phy_type = PHY_OC3C_M;
+#endif
+
+ iadev->phy_type = memType & FE_MASK;
+ IF_INIT(printk("memType = 0x%x iadev->phy_type = 0x%x\n",
+ memType,iadev->phy_type);)
+ if (iadev->phy_type == FE_25MBIT_PHY)
+ iadev->LineRate = (u32)(((25600000/8)*26)/(27*53));
+ else if (iadev->phy_type == FE_DS3_PHY)
+ iadev->LineRate = (u32)(((44736000/8)*26)/(27*53));
+ else if (iadev->phy_type == FE_E3_PHY)
+ iadev->LineRate = (u32)(((34368000/8)*26)/(27*53));
+ else
+ iadev->LineRate = (u32)(ATM_OC3_PCR);
+ IF_INIT(printk("iadev->LineRate = %d \n", iadev->LineRate);)
+
+}
+
+static void IaFrontEndIntr(IADEV *iadev) {
+ volatile IA_SUNI *suni;
+ volatile ia_mb25_t *mb25;
+ volatile suni_pm7345_t *suni_pm7345;
+ u32 intr_status;
+ u_int frmr_intr;
+
+ if(iadev->phy_type & FE_25MBIT_PHY) {
+ mb25 = (ia_mb25_t*)iadev->phy;
+ iadev->carrier_detect = Boolean(mb25->mb25_intr_status & MB25_IS_GSB);
+ } else if (iadev->phy_type & FE_DS3_PHY) {
+ suni_pm7345 = (suni_pm7345_t *)iadev->phy;
+ /* clear FRMR interrupts */
+ frmr_intr = suni_pm7345->suni_ds3_frm_intr_stat;
+ iadev->carrier_detect =
+ Boolean(!(suni_pm7345->suni_ds3_frm_stat & SUNI_DS3_LOSV));
+ } else if (iadev->phy_type & FE_E3_PHY ) {
+ suni_pm7345 = (suni_pm7345_t *)iadev->phy;
+ frmr_intr = suni_pm7345->suni_e3_frm_maint_intr_ind;
+ iadev->carrier_detect =
+ Boolean(!(suni_pm7345->suni_e3_frm_fram_intr_ind_stat&SUNI_E3_LOS));
+ }
+ else {
+ suni = (IA_SUNI *)iadev->phy;
+ intr_status = suni->suni_rsop_status & 0xff;
+ iadev->carrier_detect = Boolean(!(suni->suni_rsop_status & SUNI_LOSV));
+ }
+ if (iadev->carrier_detect)
+ printk("IA: SUNI carrier detected\n");
+ else
+ printk("IA: SUNI carrier lost signal\n");
+ return;
+}
+
+void ia_mb25_init (IADEV *iadev)
+{
+ volatile ia_mb25_t *mb25 = (ia_mb25_t*)iadev->phy;
+#if 0
+ mb25->mb25_master_ctrl = MB25_MC_DRIC | MB25_MC_DREC | MB25_MC_ENABLED;
+#endif
+ mb25->mb25_master_ctrl = MB25_MC_DRIC | MB25_MC_DREC;
+ mb25->mb25_diag_control = 0;
+ /*
+ * Initialize carrier detect state
+ */
+ iadev->carrier_detect = Boolean(mb25->mb25_intr_status & MB25_IS_GSB);
+ return;
+}
+
+void ia_suni_pm7345_init (IADEV *iadev)
+{
+ volatile suni_pm7345_t *suni_pm7345 = (suni_pm7345_t *)iadev->phy;
+ if (iadev->phy_type & FE_DS3_PHY)
+ {
+ iadev->carrier_detect =
+ Boolean(!(suni_pm7345->suni_ds3_frm_stat & SUNI_DS3_LOSV));
+ suni_pm7345->suni_ds3_frm_intr_enbl = 0x17;
+ suni_pm7345->suni_ds3_frm_cfg = 1;
+ suni_pm7345->suni_ds3_tran_cfg = 1;
+ suni_pm7345->suni_config = 0;
+ suni_pm7345->suni_splr_cfg = 0;
+ suni_pm7345->suni_splt_cfg = 0;
+ }
+ else
+ {
+ iadev->carrier_detect =
+ Boolean(!(suni_pm7345->suni_e3_frm_fram_intr_ind_stat & SUNI_E3_LOS));
+ suni_pm7345->suni_e3_frm_fram_options = 0x4;
+ suni_pm7345->suni_e3_frm_maint_options = 0x20;
+ suni_pm7345->suni_e3_frm_fram_intr_enbl = 0x1d;
+ suni_pm7345->suni_e3_frm_maint_intr_enbl = 0x30;
+ suni_pm7345->suni_e3_tran_stat_diag_options = 0x0;
+ suni_pm7345->suni_e3_tran_fram_options = 0x1;
+ suni_pm7345->suni_config = SUNI_PM7345_E3ENBL;
+ suni_pm7345->suni_splr_cfg = 0x41;
+ suni_pm7345->suni_splt_cfg = 0x41;
+ }
+ /*
+ * Enable RSOP loss of signal interrupt.
+ */
+ suni_pm7345->suni_intr_enbl = 0x28;
+
+ /*
+ * Clear error counters
+ */
+ suni_pm7345->suni_id_reset = 0;
+
+ /*
+ * Clear "PMCTST" in master test register.
+ */
+ suni_pm7345->suni_master_test = 0;
+
+ suni_pm7345->suni_rxcp_ctrl = 0x2c;
+ suni_pm7345->suni_rxcp_fctrl = 0x81;
+
+ suni_pm7345->suni_rxcp_idle_pat_h1 = 0;
+ suni_pm7345->suni_rxcp_idle_pat_h2 = 0;
+ suni_pm7345->suni_rxcp_idle_pat_h3 = 0;
+ suni_pm7345->suni_rxcp_idle_pat_h4 = 1;
+
+ suni_pm7345->suni_rxcp_idle_mask_h1 = 0xff;
+ suni_pm7345->suni_rxcp_idle_mask_h2 = 0xff;
+ suni_pm7345->suni_rxcp_idle_mask_h3 = 0xff;
+ suni_pm7345->suni_rxcp_idle_mask_h4 = 0xfe;
+
+ suni_pm7345->suni_rxcp_cell_pat_h1 = 0;
+ suni_pm7345->suni_rxcp_cell_pat_h2 = 0;
+ suni_pm7345->suni_rxcp_cell_pat_h3 = 0;
+ suni_pm7345->suni_rxcp_cell_pat_h4 = 1;
+
+ suni_pm7345->suni_rxcp_cell_mask_h1 = 0xff;
+ suni_pm7345->suni_rxcp_cell_mask_h2 = 0xff;
+ suni_pm7345->suni_rxcp_cell_mask_h3 = 0xff;
+ suni_pm7345->suni_rxcp_cell_mask_h4 = 0xff;
+
+ suni_pm7345->suni_txcp_ctrl = 0xa4;
+ suni_pm7345->suni_txcp_intr_en_sts = 0x10;
+ suni_pm7345->suni_txcp_idle_pat_h5 = 0x55;
+
+ suni_pm7345->suni_config &= ~(SUNI_PM7345_LLB |
+ SUNI_PM7345_CLB |
+ SUNI_PM7345_DLB |
+ SUNI_PM7345_PLB);
+#ifdef __SNMP__
+ suni_pm7345->suni_rxcp_intr_en_sts |= SUNI_OOCDE;
+#endif /* __SNMP__ */
+ return;
+}
+
+
+/***************************** IA_LIB END *****************************/
+
+/* pwang_test debug utility */
+int tcnter = 0, rcnter = 0;
+void xdump( u_char* cp, int length, char* prefix )
+{
+ int col, count;
+ u_char prntBuf[120];
+ u_char* pBuf = prntBuf;
+ count = 0;
+ while(count < length){
+ pBuf += sprintf( pBuf, "%s", prefix );
+ for(col = 0;count + col < length && col < 16; col++){
+ if (col != 0 && (col % 4) == 0)
+ pBuf += sprintf( pBuf, " " );
+ pBuf += sprintf( pBuf, "%02X ", cp[count + col] );
+ }
+ while(col++ < 16){ /* pad end of buffer with blanks */
+ if ((col % 4) == 0)
+ sprintf( pBuf, " " );
+ pBuf += sprintf( pBuf, " " );
+ }
+ pBuf += sprintf( pBuf, " " );
+ for(col = 0;count + col < length && col < 16; col++){
+ if (isprint((int)cp[count + col]))
+ pBuf += sprintf( pBuf, "%c", cp[count + col] );
+ else
+ pBuf += sprintf( pBuf, "." );
+ }
+ sprintf( pBuf, "\n" );
+ // SPrint(prntBuf);
+      printk("%s", prntBuf);
+ count += col;
+ pBuf = prntBuf;
+ }
+
+} /* close xdump(... */
+
+
+static struct atm_dev *ia_boards = NULL;
+
+#define ACTUAL_RAM_BASE \
+ RAM_BASE*((iadev->mem)/(128 * 1024))
+#define ACTUAL_SEG_RAM_BASE \
+ IPHASE5575_FRAG_CONTROL_RAM_BASE*((iadev->mem)/(128 * 1024))
+#define ACTUAL_REASS_RAM_BASE \
+ IPHASE5575_REASS_CONTROL_RAM_BASE*((iadev->mem)/(128 * 1024))
+
+
+/*-- some utilities and memory allocation stuff will come here -------------*/
+
+void desc_dbg(IADEV *iadev) {
+
+ u_short tcq_wr_ptr, tcq_st_ptr, tcq_ed_ptr;
+ u32 tmp, i;
+ // regval = readl((u32)ia_cmds->maddr);
+ tcq_wr_ptr = readw(iadev->seg_reg+TCQ_WR_PTR);
+ printk("B_tcq_wr = 0x%x desc = %d last desc = %d\n",
+ tcq_wr_ptr, readw(iadev->seg_ram+tcq_wr_ptr),
+ readw(iadev->seg_ram+tcq_wr_ptr-2));
+ printk(" host_tcq_wr = 0x%x host_tcq_rd = 0x%x \n", iadev->host_tcq_wr,
+ iadev->ffL.tcq_rd);
+ tcq_st_ptr = readw(iadev->seg_reg+TCQ_ST_ADR);
+ tcq_ed_ptr = readw(iadev->seg_reg+TCQ_ED_ADR);
+ printk("tcq_st_ptr = 0x%x tcq_ed_ptr = 0x%x \n", tcq_st_ptr, tcq_ed_ptr);
+ i = 0;
+ while (tcq_st_ptr != tcq_ed_ptr) {
+ tmp = iadev->seg_ram+tcq_st_ptr;
+ printk("TCQ slot %d desc = %d Addr = 0x%x\n", i++, readw(tmp), tmp);
+ tcq_st_ptr += 2;
+ }
+ for(i=0; i <iadev->num_tx_desc; i++)
+ printk("Desc_tbl[%d] = %d \n", i, iadev->desc_tbl[i].timestamp);
+}
+
+
+/*----------------------------- Receiving side stuff --------------------------*/
+
+static void rx_excp_rcvd(struct atm_dev *dev)
+{
+#if 0 /* closing the receiving side will cause too many excp int */
+ IADEV *iadev;
+ u_short state;
+ u_short excpq_rd_ptr;
+ //u_short *ptr;
+ int vci, error = 1;
+ iadev = INPH_IA_DEV(dev);
+ state = readl(iadev->reass_reg + STATE_REG) & 0xffff;
+ while((state & EXCPQ_EMPTY) != EXCPQ_EMPTY)
+ { printk("state = %x \n", state);
+ excpq_rd_ptr = readw(iadev->reass_reg + EXCP_Q_RD_PTR) & 0xffff;
+ printk("state = %x excpq_rd_ptr = %x \n", state, excpq_rd_ptr);
+ if (excpq_rd_ptr == *(u16*)(iadev->reass_reg + EXCP_Q_WR_PTR))
+ IF_ERR(printk("excpq_rd_ptr is wrong!!!\n");)
+ // TODO: update exception stat
+ vci = readw(iadev->reass_ram+excpq_rd_ptr);
+ error = readw(iadev->reass_ram+excpq_rd_ptr+2) & 0x0007;
+ // pwang_test
+ excpq_rd_ptr += 4;
+ if (excpq_rd_ptr > (readw(iadev->reass_reg + EXCP_Q_ED_ADR)& 0xffff))
+ excpq_rd_ptr = readw(iadev->reass_reg + EXCP_Q_ST_ADR)& 0xffff;
+ writew( excpq_rd_ptr, iadev->reass_reg + EXCP_Q_RD_PTR);
+ state = readl(iadev->reass_reg + STATE_REG) & 0xffff;
+ }
+#endif
+}
+
+static void free_desc(struct atm_dev *dev, int desc)
+{
+ IADEV *iadev;
+ iadev = INPH_IA_DEV(dev);
+ writew(desc, iadev->reass_ram+iadev->rfL.fdq_wr);
+ iadev->rfL.fdq_wr +=2;
+ if (iadev->rfL.fdq_wr > iadev->rfL.fdq_ed)
+ iadev->rfL.fdq_wr = iadev->rfL.fdq_st;
+ writew(iadev->rfL.fdq_wr, iadev->reass_reg+FREEQ_WR_PTR);
+}
+
+
+static int rx_pkt(struct atm_dev *dev)
+{
+ IADEV *iadev;
+ struct atm_vcc *vcc;
+ unsigned short status;
+ struct rx_buf_desc *buf_desc_ptr;
+ int desc;
+ struct dle* wr_ptr;
+ int len;
+ struct sk_buff *skb;
+ u_int buf_addr, dma_addr;
+ iadev = INPH_IA_DEV(dev);
+ if (iadev->rfL.pcq_rd == (readw(iadev->reass_reg+PCQ_WR_PTR)&0xffff))
+ {
+ printk(KERN_ERR DEV_LABEL "(itf %d) Receive queue empty\n", dev->number);
+ return -EINVAL;
+ }
+ /* mask 1st 3 bits to get the actual descno. */
+ desc = readw(iadev->reass_ram+iadev->rfL.pcq_rd) & 0x1fff;
+ IF_RX(printk("reass_ram = 0x%x iadev->rfL.pcq_rd = 0x%x desc = %d\n",
+ iadev->reass_ram, iadev->rfL.pcq_rd, desc);
+ printk(" pcq_wr_ptr = 0x%x\n",
+ readw(iadev->reass_reg+PCQ_WR_PTR)&0xffff);)
+ /* update the read pointer - maybe we should do this at the end */
+ if ( iadev->rfL.pcq_rd== iadev->rfL.pcq_ed)
+ iadev->rfL.pcq_rd = iadev->rfL.pcq_st;
+ else
+ iadev->rfL.pcq_rd += 2;
+ writew(iadev->rfL.pcq_rd, iadev->reass_reg+PCQ_RD_PTR);
+
+ /* get the buffer desc entry.
+ update stuff. - doesn't seem to be any update necessary
+ */
+ buf_desc_ptr = (struct rx_buf_desc *)iadev->RX_DESC_BASE_ADDR;
+ /* make the ptr point to the corresponding buffer desc entry */
+ buf_desc_ptr += desc;
+ if (!desc || (desc > iadev->num_rx_desc) ||
+ ((buf_desc_ptr->vc_index & 0xffff) > iadev->num_vc)) {
+ free_desc(dev, desc);
+ IF_ERR(printk("IA: bad descriptor desc = %d \n", desc);)
+ return -1;
+ }
+ vcc = iadev->rx_open[buf_desc_ptr->vc_index & 0xffff];
+ if (!vcc)
+ {
+ free_desc(dev, desc);
+ printk("IA: null vcc, drop PDU\n");
+ return -1;
+ }
+
+
+ /* might want to check the status bits for errors */
+ status = (u_short) (buf_desc_ptr->desc_mode);
+ if (status & (RX_CER | RX_PTE | RX_OFL))
+ {
+ vcc->stats->rx_err++;
+ IF_ERR(printk("IA: bad packet, dropping it");)
+ if (status & RX_CER) {
+ IF_ERR(printk(" cause: packet CRC error\n");)
+ }
+ else if (status & RX_PTE) {
+ IF_ERR(printk(" cause: packet time out\n");)
+ }
+ else {
+ IF_ERR(printk(" cause: buffer over flow\n");)
+ }
+ free_desc(dev, desc);
+ return 0;
+ }
+
+ /*
+ build DLE.
+ */
+
+ buf_addr = (buf_desc_ptr->buf_start_hi << 16) | buf_desc_ptr->buf_start_lo;
+ dma_addr = (buf_desc_ptr->dma_start_hi << 16) | buf_desc_ptr->dma_start_lo;
+ len = dma_addr - buf_addr;
+ if (len > iadev->rx_buf_sz) {
+ printk("Over %d bytes sdu received, dropped!!!\n", iadev->rx_buf_sz);
+ vcc->stats->rx_err++;
+ free_desc(dev, desc);
+ return 0;
+ }
+
+#if LINUX_VERSION_CODE >= 0x20312
+ if (!(skb = atm_alloc_charge(vcc, len, GFP_ATOMIC))) {
+#else
+ if (atm_charge(vcc, atm_pdu2truesize(len))) {
+ /* lets allocate an skb for now */
+ skb = alloc_skb(len, GFP_ATOMIC);
+ if (!skb)
+ {
+ IF_ERR(printk("can't allocate memory for recv, drop pkt!\n");)
+ vcc->stats->rx_drop++;
+ atm_return(vcc, atm_pdu2truesize(len));
+ free_desc(dev, desc);
+ return 0;
+ }
+ }
+ else {
+ IF_EVENT(printk("IA: Rx over the rx_quota %ld\n", vcc->rx_quota);)
+#endif
+ if (vcc->vci < 32)
+ printk("Drop control packets\n");
+ free_desc(dev, desc);
+ return 0;
+ }
+ skb_put(skb,len);
+ // pwang_test
+ ATM_SKB(skb)->vcc = vcc;
+ ATM_SKB(skb)->iovcnt = 0;
+ ATM_DESC(skb) = desc;
+ skb_queue_tail(&iadev->rx_dma_q, skb);
+
+ /* Build the DLE structure */
+ wr_ptr = iadev->rx_dle_q.write;
+ wr_ptr->sys_pkt_addr = virt_to_bus(skb->data);
+ wr_ptr->local_pkt_addr = buf_addr;
+ wr_ptr->bytes = len; /* We don't know this do we ?? */
+ wr_ptr->mode = DMA_INT_ENABLE;
+
+ /* should take care of wrap-around here too */
+ if(++wr_ptr == iadev->rx_dle_q.end)
+ wr_ptr = iadev->rx_dle_q.start;
+ iadev->rx_dle_q.write = wr_ptr;
+ udelay(1);
+ /* Increment transaction counter */
+ writel(1, iadev->dma+IPHASE5575_RX_COUNTER);
+ return 0;
+}
+
+static void rx_intr(struct atm_dev *dev)
+{
+ IADEV *iadev;
+ u_short status;
+ u_short state, i;
+
+ iadev = INPH_IA_DEV(dev);
+ status = readl(iadev->reass_reg+REASS_INTR_STATUS_REG) & 0xffff;
+ IF_EVENT(printk("rx_intr: status = 0x%x\n", status);)
+ if (status & RX_PKT_RCVD)
+ {
+ /* Basically received an interrupt for receiving a packet.
+ A descriptor would have been written to the packet complete
+ queue. Get all the descriptors and set up DMA to move the
+ packets till the packet complete queue is empty.
+ */
+ state = readl(iadev->reass_reg + STATE_REG) & 0xffff;
+ IF_EVENT(printk("Rx intr status: RX_PKT_RCVD %08x\n", status);)
+ while(!(state & PCQ_EMPTY))
+ {
+ rx_pkt(dev);
+ state = readl(iadev->reass_reg + STATE_REG) & 0xffff;
+ }
+ iadev->rxing = 1;
+ }
+ if (status & RX_FREEQ_EMPT)
+ {
+ if (iadev->rxing) {
+ iadev->rx_tmp_cnt = iadev->rx_pkt_cnt;
+ iadev->rx_tmp_jif = jiffies;
+ iadev->rxing = 0;
+ }
+ else if (((jiffies - iadev->rx_tmp_jif) > 50) &&
+ ((iadev->rx_pkt_cnt - iadev->rx_tmp_cnt) == 0)) {
+ for (i = 1; i <= iadev->num_rx_desc; i++)
+ free_desc(dev, i);
+printk("Test logic RUN!!!!\n");
+ writew( ~(RX_FREEQ_EMPT|RX_EXCP_RCVD),iadev->reass_reg+REASS_MASK_REG);
+ iadev->rxing = 1;
+ }
+ IF_EVENT(printk("Rx intr status: RX_FREEQ_EMPT %08x\n", status);)
+ }
+
+ if (status & RX_EXCP_RCVD)
+ {
+ /* probably need to handle the exception queue also. */
+ IF_EVENT(printk("Rx intr status: RX_EXCP_RCVD %08x\n", status);)
+ rx_excp_rcvd(dev);
+ }
+
+
+ if (status & RX_RAW_RCVD)
+ {
+ /* need to handle the raw incoming cells. This depends on
+ whether we have programmed to receive the raw cells or not.
+ Else ignore. */
+ IF_EVENT(printk("Rx intr status: RX_RAW_RCVD %08x\n", status);)
+ }
+}
+
+
+static void rx_dle_intr(struct atm_dev *dev)
+{
+ IADEV *iadev;
+ struct atm_vcc *vcc;
+ struct sk_buff *skb;
+ int desc;
+ u_short state;
+ struct dle *dle, *cur_dle;
+ u_int dle_lp;
+ iadev = INPH_IA_DEV(dev);
+
+ /* free all the DLEs done, that is just update our own dle read pointer
+ - do we really need to do this? Think not. */
+ /* DMA is done, just get all the receive buffers from the rx dma queue
+ and push them up to the higher layer protocol. Also free the desc
+ associated with the buffer. */
+ dle = iadev->rx_dle_q.read;
+ dle_lp = readl(iadev->dma+IPHASE5575_RX_LIST_ADDR) & (sizeof(struct dle)*DLE_ENTRIES - 1);
+ cur_dle = (struct dle*)(iadev->rx_dle_q.start + (dle_lp >> 4));
+ while(dle != cur_dle)
+ {
+ /* free the DMAed skb */
+ skb = skb_dequeue(&iadev->rx_dma_q);
+ if (!skb)
+ goto INCR_DLE;
+ desc = ATM_DESC(skb);
+ free_desc(dev, desc);
+
+ if (!skb->len)
+ {
+ printk("rx_dle_intr: skb len 0\n");
+ dev_kfree_skb(skb);
+ }
+ else
+ {
+ struct cpcs_trailer *trailer;
+ u_short length;
+ struct ia_vcc *ia_vcc;
+ /* no VCC related housekeeping done as yet. lets see */
+ vcc = ATM_SKB(skb)->vcc;
+ if (!vcc) {
+ printk("IA: null vcc\n");
+ dev_kfree_skb(skb);
+ goto INCR_DLE;
+ }
+ ia_vcc = INPH_IA_VCC(vcc);
+ if (ia_vcc == NULL)
+ {
+ vcc->stats->rx_err++;
+ dev_kfree_skb(skb);
+#if LINUX_VERSION_CODE >= 0x20312
+ atm_return(vcc, atm_guess_pdu2truesize(skb->len));
+#else
+ atm_return(vcc, atm_pdu2truesize(skb->len));
+#endif
+ goto INCR_DLE;
+ }
+ // get real pkt length pwang_test
+ trailer = (struct cpcs_trailer*)((u_char *)skb->data +
+ skb->len - sizeof(struct cpcs_trailer));
+ length = swap(trailer->length);
+ if ((length > iadev->rx_buf_sz) || (length >
+ (skb->len - sizeof(struct cpcs_trailer))))
+ {
+ vcc->stats->rx_err++;
+ dev_kfree_skb(skb);
+ IF_ERR(printk("rx_dle_intr: Bad AAL5 trailer %d (skb len %d)",
+ length, skb->len);)
+#if LINUX_VERSION_CODE >= 0x20312
+ atm_return(vcc, atm_guess_pdu2truesize(skb->len));
+#else
+ atm_return(vcc, atm_pdu2truesize(skb->len));
+#endif
+ goto INCR_DLE;
+ }
+ skb_trim(skb, length);
+
+ /* Display the packet */
+ IF_RXPKT(printk("\nDmad Recvd data: len = %d \n", skb->len);
+ xdump(skb->data, skb->len, "RX: ");
+ printk("\n");)
+
+ IF_RX(printk("rx_dle_intr: skb push");)
+ vcc->push(vcc,skb);
+ vcc->stats->rx++;
+ iadev->rx_pkt_cnt++;
+ }
+INCR_DLE:
+ if (++dle == iadev->rx_dle_q.end)
+ dle = iadev->rx_dle_q.start;
+ }
+ iadev->rx_dle_q.read = dle;
+
+ /* if the interrupts are masked because there were no free desc available,
+ unmask them now. */
+ if (!iadev->rxing) {
+ state = readl(iadev->reass_reg + STATE_REG) & 0xffff;
+ if (!(state & FREEQ_EMPTY)) {
+ state = readl(iadev->reass_reg + REASS_MASK_REG) & 0xffff;
+ writel(state & ~(RX_FREEQ_EMPT |/* RX_EXCP_RCVD |*/ RX_PKT_RCVD),
+ iadev->reass_reg+REASS_MASK_REG);
+ iadev->rxing++;
+ }
+ }
+}
+
+
+static int open_rx(struct atm_vcc *vcc)
+{
+ IADEV *iadev;
+ u_short *vc_table;
+ u_short *reass_ptr;
+ IF_EVENT(printk("iadev: open_rx %d.%d\n", vcc->vpi, vcc->vci);)
+
+ if (vcc->qos.rxtp.traffic_class == ATM_NONE) return 0;
+ iadev = INPH_IA_DEV(vcc->dev);
+ if (vcc->qos.rxtp.traffic_class == ATM_ABR) {
+ if (iadev->phy_type & FE_25MBIT_PHY) {
+ printk("IA: ABR not support\n");
+ return -EINVAL;
+ }
+ }
+ /* Make only this VCI in the vc table valid and let all
+ others be invalid entries */
+ vc_table = (u_short *)(iadev->reass_ram+RX_VC_TABLE*iadev->memSize);
+ vc_table += vcc->vci;
+ /* mask the last 6 bits and OR it with 3 for 1K VCs */
+
+ *vc_table = vcc->vci << 6;
+ /* Also keep a list of open rx vcs so that we can attach them with
+ incoming PDUs later. */
+ if ((vcc->qos.rxtp.traffic_class == ATM_ABR) ||
+ (vcc->qos.txtp.traffic_class == ATM_ABR))
+ {
+ srv_cls_param_t srv_p;
+ init_abr_vc(iadev, &srv_p);
+ ia_open_abr_vc(iadev, &srv_p, vcc, 0);
+ }
+ else { /* for UBR later may need to add CBR logic */
+ reass_ptr = (u_short *)
+ (iadev->reass_ram+REASS_TABLE*iadev->memSize);
+ reass_ptr += vcc->vci;
+ *reass_ptr = NO_AAL5_PKT;
+ }
+
+ if (iadev->rx_open[vcc->vci])
+ printk(KERN_CRIT DEV_LABEL "(itf %d): VCI %d already open\n",
+ vcc->dev->number, vcc->vci);
+ iadev->rx_open[vcc->vci] = vcc;
+ return 0;
+}
+
+static int rx_init(struct atm_dev *dev)
+{
+ IADEV *iadev;
+ struct rx_buf_desc *buf_desc_ptr;
+ unsigned long rx_pkt_start = 0;
+ u32 *dle_addr;
+ struct abr_vc_table *abr_vc_table;
+ u16 *vc_table;
+ u16 *reass_table;
+ u16 *ptr16;
+ int i,j, vcsize_sel;
+ u_short freeq_st_adr;
+ u_short *freeq_start;
+
+ iadev = INPH_IA_DEV(dev);
+ // spin_lock_init(&iadev->rx_lock);
+ /* Initialize the DLEs:
+ - allocate memory for 256 DLEs. Make sure that it starts
+ on a 4k byte address boundary. Program the start address
+ in the Receive List address register. ..... to do for TX also.
+ To make sure that it is a 4k byte boundary - allocate 8k and find
+ the 4k byte boundary within:
+ ( (addr + (4k-1)) & ~(4k-1) )
+ */
+
+ /* allocate 8k bytes */
+ dle_addr = (u32*)kmalloc(2*sizeof(struct dle)*DLE_ENTRIES, GFP_KERNEL);
+ if (!dle_addr)
+ {
+ printk(KERN_ERR DEV_LABEL " can't allocate DLEs\n");
+ return -ENOMEM;
+ }
+ /* find 4k byte boundary within the 8k allocated */
+ dle_addr = (u32*)( ((u32)dle_addr+(4096-1)) & ~(4096-1) );
+ iadev->rx_dle_q.start = (struct dle*)dle_addr;
+ iadev->rx_dle_q.read = iadev->rx_dle_q.start;
+ iadev->rx_dle_q.write = iadev->rx_dle_q.start;
+ iadev->rx_dle_q.end = (struct dle*)((u32)dle_addr+sizeof(struct dle)*DLE_ENTRIES);
+ /* the end of the dle q points to the entry after the last
+ DLE that can be used. */
+
+ /* write the upper 20 bits of the start address to rx list address register */
+ writel(virt_to_bus(dle_addr) & 0xfffff000, iadev->dma+IPHASE5575_RX_LIST_ADDR);
+ IF_INIT(printk("Tx Dle list addr: 0x%08x value: 0x%0x\n",
+ (u32)(iadev->dma+IPHASE5575_TX_LIST_ADDR),
+ *(u32*)(iadev->dma+IPHASE5575_TX_LIST_ADDR));
+ printk("Rx Dle list addr: 0x%08x value: 0x%0x\n",
+ (u32)(iadev->dma+IPHASE5575_RX_LIST_ADDR),
+ *(u32*)(iadev->dma+IPHASE5575_RX_LIST_ADDR));)
+
+ writew(0xffff, iadev->reass_reg+REASS_MASK_REG);
+ writew(0, iadev->reass_reg+MODE_REG);
+ writew(RESET_REASS, iadev->reass_reg+REASS_COMMAND_REG);
+
+ /* Receive side control memory map
+ -------------------------------
+
+ Buffer descr 0x0000 (736 - 23K)
+ VP Table 0x5c00 (256 - 512)
+ Except q 0x5e00 (128 - 512)
+ Free buffer q 0x6000 (1K - 2K)
+ Packet comp q 0x6800 (1K - 2K)
+ Reass Table 0x7000 (1K - 2K)
+ VC Table 0x7800 (1K - 2K)
+ ABR VC Table 0x8000 (1K - 32K)
+ */
+
+ /* Base address for Buffer Descriptor Table */
+ writew(RX_DESC_BASE >> 16, iadev->reass_reg+REASS_DESC_BASE);
+ /* Set the buffer size register */
+ writew(iadev->rx_buf_sz, iadev->reass_reg+BUF_SIZE);
+
+ /* Initialize each entry in the Buffer Descriptor Table */
+ iadev->RX_DESC_BASE_ADDR = iadev->reass_ram+RX_DESC_BASE*iadev->memSize;
+ buf_desc_ptr =(struct rx_buf_desc *)iadev->RX_DESC_BASE_ADDR;
+ memset((caddr_t)buf_desc_ptr, 0, sizeof(struct rx_buf_desc));
+ buf_desc_ptr++;
+ rx_pkt_start = iadev->rx_pkt_ram;
+ for(i=1; i<=iadev->num_rx_desc; i++)
+ {
+ memset((caddr_t)buf_desc_ptr, 0, sizeof(struct rx_buf_desc));
+ buf_desc_ptr->buf_start_hi = rx_pkt_start >> 16;
+ buf_desc_ptr->buf_start_lo = rx_pkt_start & 0x0000ffff;
+ buf_desc_ptr++;
+ rx_pkt_start += iadev->rx_buf_sz;
+ }
+ IF_INIT(printk("Rx Buffer desc ptr: 0x%0x\n", (u32)(buf_desc_ptr));)
+ i = FREE_BUF_DESC_Q*iadev->memSize;
+ writew(i >> 16, iadev->reass_reg+REASS_QUEUE_BASE);
+ writew(i, iadev->reass_reg+FREEQ_ST_ADR);
+ writew(i+iadev->num_rx_desc*sizeof(u_short),
+ iadev->reass_reg+FREEQ_ED_ADR);
+ writew(i, iadev->reass_reg+FREEQ_RD_PTR);
+ writew(i+iadev->num_rx_desc*sizeof(u_short),
+ iadev->reass_reg+FREEQ_WR_PTR);
+ /* Fill the FREEQ with all the free descriptors. */
+ freeq_st_adr = readw(iadev->reass_reg+FREEQ_ST_ADR);
+ freeq_start = (u_short *)(iadev->reass_ram+freeq_st_adr);
+ for(i=1; i<=iadev->num_rx_desc; i++)
+ {
+ *freeq_start = (u_short)i;
+ freeq_start++;
+ }
+ IF_INIT(printk("freeq_start: 0x%0x\n", (u32)freeq_start);)
+ /* Packet Complete Queue */
+ i = (PKT_COMP_Q * iadev->memSize) & 0xffff;
+ writew(i, iadev->reass_reg+PCQ_ST_ADR);
+ writew(i+iadev->num_vc*sizeof(u_short), iadev->reass_reg+PCQ_ED_ADR);
+ writew(i, iadev->reass_reg+PCQ_RD_PTR);
+ writew(i, iadev->reass_reg+PCQ_WR_PTR);
+
+ /* Exception Queue */
+ i = (EXCEPTION_Q * iadev->memSize) & 0xffff;
+ writew(i, iadev->reass_reg+EXCP_Q_ST_ADR);
+ writew(i + NUM_RX_EXCP * sizeof(RX_ERROR_Q),
+ iadev->reass_reg+EXCP_Q_ED_ADR);
+ writew(i, iadev->reass_reg+EXCP_Q_RD_PTR);
+ writew(i, iadev->reass_reg+EXCP_Q_WR_PTR);
+
+ /* Load local copy of FREEQ and PCQ ptrs */
+ iadev->rfL.fdq_st = readw(iadev->reass_reg+FREEQ_ST_ADR) & 0xffff;
+ iadev->rfL.fdq_ed = readw(iadev->reass_reg+FREEQ_ED_ADR) & 0xffff ;
+ iadev->rfL.fdq_rd = readw(iadev->reass_reg+FREEQ_RD_PTR) & 0xffff;
+ iadev->rfL.fdq_wr = readw(iadev->reass_reg+FREEQ_WR_PTR) & 0xffff;
+ iadev->rfL.pcq_st = readw(iadev->reass_reg+PCQ_ST_ADR) & 0xffff;
+ iadev->rfL.pcq_ed = readw(iadev->reass_reg+PCQ_ED_ADR) & 0xffff;
+ iadev->rfL.pcq_rd = readw(iadev->reass_reg+PCQ_RD_PTR) & 0xffff;
+ iadev->rfL.pcq_wr = readw(iadev->reass_reg+PCQ_WR_PTR) & 0xffff;
+
+ IF_INIT(printk("INIT:pcq_st:0x%x pcq_ed:0x%x pcq_rd:0x%x pcq_wr:0x%x",
+ iadev->rfL.pcq_st, iadev->rfL.pcq_ed, iadev->rfL.pcq_rd,
+ iadev->rfL.pcq_wr);)
+ /* just for check - no VP TBL */
+ /* VP Table */
+ /* writew(0x0b80, iadev->reass_reg+VP_LKUP_BASE); */
+ /* initialize VP Table for invalid VPIs
+ - I guess we can write all 1s or 0x000f in the entire memory
+ space or something similar.
+ */
+
+ /* This seems to work and looks right to me too !!! */
+ i = REASS_TABLE * iadev->memSize;
+ writew((i >> 3), iadev->reass_reg+REASS_TABLE_BASE);
+ /* initialize the Reassembly Table entries to NO_AAL5_PKT */
+ reass_table = (u16 *)(iadev->reass_ram+i);
+ j = REASS_TABLE_SZ * iadev->memSize;
+ for(i=0; i < j; i++)
+ *reass_table++ = NO_AAL5_PKT;
+ i = 8*1024;
+ vcsize_sel = 0;
+ while (i != iadev->num_vc) {
+ i /= 2;
+ vcsize_sel++;
+ }
+ i = RX_VC_TABLE * iadev->memSize;
+ writew(((i>>3) & 0xfff8) | vcsize_sel, iadev->reass_reg+VC_LKUP_BASE);
+ vc_table = (u16 *)(iadev->reass_ram+RX_VC_TABLE*iadev->memSize);
+ j = RX_VC_TABLE_SZ * iadev->memSize;
+ for(i = 0; i < j; i++)
+ {
+ /* shift the reassembly pointer by 3 + lower 3 bits of
+ vc_lkup_base register (=3 for 1K VCs) and the last byte
+ is those low 3 bits.
+ Shall program this later.
+ */
+ *vc_table = (i << 6) | 15; /* for invalid VCI */
+ vc_table++;
+ }
+ /* ABR VC table */
+ i = ABR_VC_TABLE * iadev->memSize;
+ writew(i >> 3, iadev->reass_reg+ABR_LKUP_BASE);
+
+ i = ABR_VC_TABLE * iadev->memSize;
+ abr_vc_table = (struct abr_vc_table *)(iadev->reass_ram+i);
+ j = REASS_TABLE_SZ * iadev->memSize;
+ memset ((char*)abr_vc_table, 0, j * sizeof(struct abr_vc_table ) );
+ for(i = 0; i < j; i++) {
+ abr_vc_table->rdf = 0x0003;
+ abr_vc_table->air = 0x5eb1;
+ abr_vc_table++;
+ }
+
+ /* Initialize other registers */
+
+ /* VP Filter Register set for VC Reassembly only */
+ writew(0xff00, iadev->reass_reg+VP_FILTER);
+ writew(0, iadev->reass_reg+XTRA_RM_OFFSET);
+ writew(0x1, iadev->reass_reg+PROTOCOL_ID);
+
+ /* Packet Timeout Count related Registers :
+ Set packet timeout to occur in about 3 seconds
+ Set Packet Aging Interval count register to overflow in about 4 us
+ */
+ writew(0xF6F8, iadev->reass_reg+PKT_TM_CNT );
+ ptr16 = (u16*)j;
+ i = ((u32)ptr16 >> 6) & 0xff;
+ ptr16 += j - 1;
+ i |=(((u32)ptr16 << 2) & 0xff00);
+ writew(i, iadev->reass_reg+TMOUT_RANGE);
+ /* initialize the desc_tbl */
+ for(i=0; i<iadev->num_tx_desc;i++)
+ iadev->desc_tbl[i].timestamp = 0;
+
+ /* to clear the interrupt status register - read it */
+ readw(iadev->reass_reg+REASS_INTR_STATUS_REG);
+
+ /* Mask Register - clear it */
+ writew(~(RX_FREEQ_EMPT|RX_PKT_RCVD), iadev->reass_reg+REASS_MASK_REG);
+
+ skb_queue_head_init(&iadev->rx_dma_q);
+ iadev->rx_free_desc_qhead = NULL;
+ iadev->rx_open =(struct atm_vcc **)kmalloc(4*iadev->num_vc,GFP_KERNEL);
+ if (!iadev->rx_open)
+ {
+ printk(KERN_ERR DEV_LABEL "itf %d couldn't get free page\n",
+ dev->number);
+ return -ENOMEM;
+ }
+ memset(iadev->rx_open, 0, 4*iadev->num_vc);
+ iadev->rxing = 1;
+ iadev->rx_pkt_cnt = 0;
+ /* Mode Register */
+ writew(R_ONLINE, iadev->reass_reg+MODE_REG);
+ return 0;
+}
+
+
+/*
+ The memory map suggested in appendix A and the coding for it.
+ Keeping it around just in case we change our mind later.
+
+ Buffer descr 0x0000 (128 - 4K)
+ UBR sched 0x1000 (1K - 4K)
+ UBR Wait q 0x2000 (1K - 4K)
+ Commn queues 0x3000 Packet Ready, Transmit comp(0x3100)
+ (128 - 256) each
+ extended VC 0x4000 (1K - 8K)
+ ABR sched 0x6000 and ABR wait queue (1K - 2K) each
+ CBR sched 0x7000 (as needed)
+ VC table 0x8000 (1K - 32K)
+*/
+
+static void tx_intr(struct atm_dev *dev)
+{
+ IADEV *iadev;
+ unsigned short status;
+ unsigned long flags;
+
+ iadev = INPH_IA_DEV(dev);
+
+ status = readl(iadev->seg_reg+SEG_INTR_STATUS_REG);
+ if (status & TRANSMIT_DONE){
+
+ IF_EVENT(printk("Tansmit Done Intr logic run\n");)
+ spin_lock_irqsave(&iadev->tx_lock, flags);
+ ia_tx_poll(iadev);
+ spin_unlock_irqrestore(&iadev->tx_lock, flags);
+ writew(TRANSMIT_DONE, iadev->seg_reg+SEG_INTR_STATUS_REG);
+ if (iadev->close_pending)
+ wake_up(&iadev->close_wait);
+ }
+ if (status & TCQ_NOT_EMPTY)
+ {
+ IF_EVENT(printk("TCQ_NOT_EMPTY int received\n");)
+ }
+}
+
+static void tx_dle_intr(struct atm_dev *dev)
+{
+ IADEV *iadev;
+ struct dle *dle, *cur_dle;
+ struct sk_buff *skb;
+ struct atm_vcc *vcc;
+ struct ia_vcc *iavcc;
+ u_int dle_lp;
+ unsigned long flags;
+
+ iadev = INPH_IA_DEV(dev);
+ spin_lock_irqsave(&iadev->tx_lock, flags);
+ dle = iadev->tx_dle_q.read;
+ dle_lp = readl(iadev->dma+IPHASE5575_TX_LIST_ADDR) &
+ (sizeof(struct dle)*DLE_ENTRIES - 1);
+ cur_dle = (struct dle*)(iadev->tx_dle_q.start + (dle_lp >> 4));
+ while (dle != cur_dle)
+ {
+ /* free the DMAed skb */
+ skb = skb_dequeue(&iadev->tx_dma_q);
+ if (!skb) break;
+ vcc = ATM_SKB(skb)->vcc;
+ if (!vcc) {
+ printk("tx_dle_intr: vcc is null\n");
+ dev_kfree_skb(skb);
+ return;
+ }
+ iavcc = INPH_IA_VCC(vcc);
+ if (!iavcc) {
+ printk("tx_dle_intr: iavcc is null\n");
+ dev_kfree_skb(skb);
+ return;
+ }
+ if (vcc->qos.txtp.pcr >= iadev->rate_limit) {
+ if ((vcc->pop) && (skb->len != 0))
+ {
+ vcc->pop(vcc, skb);
+ }
+ else {
+ dev_kfree_skb(skb);
+ }
+ }
+ else { /* Hold the rate-limited skb for flow control */
+ IA_SKB_STATE(skb) |= IA_DLED;
+ skb_queue_tail(&iavcc->txing_skb, skb);
+ }
+ IF_EVENT(printk("tx_dle_intr: enque skb = 0x%x \n", (u32)skb);)
+ if (++dle == iadev->tx_dle_q.end)
+ dle = iadev->tx_dle_q.start;
+ }
+ iadev->tx_dle_q.read = dle;
+ spin_unlock_irqrestore(&iadev->tx_lock, flags);
+}
+
+static int open_tx(struct atm_vcc *vcc)
+{
+ struct ia_vcc *ia_vcc;
+ IADEV *iadev;
+ struct main_vc *vc;
+ struct ext_vc *evc;
+ int ret;
+ IF_EVENT(printk("iadev: open_tx entered vcc->vci = %d\n", vcc->vci);)
+ if (vcc->qos.txtp.traffic_class == ATM_NONE) return 0;
+ iadev = INPH_IA_DEV(vcc->dev);
+
+ if (iadev->phy_type & FE_25MBIT_PHY) {
+ if (vcc->qos.txtp.traffic_class == ATM_ABR) {
+ printk("IA: ABR not support\n");
+ return -EINVAL;
+ }
+ if (vcc->qos.txtp.traffic_class == ATM_CBR) {
+ printk("IA: CBR not support\n");
+ return -EINVAL;
+ }
+ }
+ ia_vcc = INPH_IA_VCC(vcc);
+ memset((caddr_t)ia_vcc, 0, sizeof(struct ia_vcc));
+ if (vcc->qos.txtp.max_sdu >
+ (iadev->tx_buf_sz - sizeof(struct cpcs_trailer))){
+ printk("IA: SDU size over the configured SDU size %d\n",
+ iadev->tx_buf_sz);
+ kfree(ia_vcc);
+ return -EINVAL;
+ }
+ ia_vcc->vc_desc_cnt = 0;
+ ia_vcc->txing = 1;
+
+ /* find pcr */
+ if (vcc->qos.txtp.max_pcr == ATM_MAX_PCR)
+ vcc->qos.txtp.pcr = iadev->LineRate;
+ else if ((vcc->qos.txtp.max_pcr == 0)&&( vcc->qos.txtp.pcr <= 0))
+ vcc->qos.txtp.pcr = iadev->LineRate;
+ else if ((vcc->qos.txtp.max_pcr > vcc->qos.txtp.pcr) && (vcc->qos.txtp.max_pcr> 0))
+ vcc->qos.txtp.pcr = vcc->qos.txtp.max_pcr;
+ if (vcc->qos.txtp.pcr > iadev->LineRate)
+ vcc->qos.txtp.pcr = iadev->LineRate;
+ ia_vcc->pcr = vcc->qos.txtp.pcr;
+
+ if (ia_vcc->pcr > (iadev->LineRate / 6) ) ia_vcc->ltimeout = HZ / 10;
+ else if (ia_vcc->pcr > (iadev->LineRate / 130)) ia_vcc->ltimeout = HZ;
+ else if (ia_vcc->pcr <= 170) ia_vcc->ltimeout = 16 * HZ;
+ else ia_vcc->ltimeout = 2700 * HZ / ia_vcc->pcr;
+ if (ia_vcc->pcr < iadev->rate_limit)
+ skb_queue_head_init (&ia_vcc->txing_skb);
+ if (ia_vcc->pcr < iadev->rate_limit) {
+ if (vcc->qos.txtp.max_sdu != 0) {
+ if (ia_vcc->pcr > 60000)
+ vcc->sk->sndbuf = vcc->qos.txtp.max_sdu * 5;
+ else if (ia_vcc->pcr > 2000)
+ vcc->sk->sndbuf = vcc->qos.txtp.max_sdu * 4;
+ else
+ vcc->sk->sndbuf = 3*vcc->qos.txtp.max_sdu;
+ }
+ else
+ vcc->sk->sndbuf = 24576;
+ }
+
+ vc = (struct main_vc *)iadev->MAIN_VC_TABLE_ADDR;
+ evc = (struct ext_vc *)iadev->EXT_VC_TABLE_ADDR;
+ vc += vcc->vci;
+ evc += vcc->vci;
+ memset((caddr_t)vc, 0, sizeof(struct main_vc));
+ memset((caddr_t)evc, 0, sizeof(struct ext_vc));
+
+ /* store the most significant 4 bits of vci as the last 4 bits
+ of first part of atm header.
+ store the last 12 bits of vci as first 12 bits of the second
+ part of the atm header.
+ */
+ evc->atm_hdr1 = (vcc->vci >> 12) & 0x000f;
+ evc->atm_hdr2 = (vcc->vci & 0x0fff) << 4;
+
+ /* check the following for different traffic classes */
+ if (vcc->qos.txtp.traffic_class == ATM_UBR)
+ {
+ vc->type = UBR;
+ vc->status = CRC_APPEND;
+ vc->acr = cellrate_to_float(iadev->LineRate);
+ if (vcc->qos.txtp.pcr > 0)
+ vc->acr = cellrate_to_float(vcc->qos.txtp.pcr);
+ IF_UBR(printk("UBR: txtp.pcr = 0x%d f_rate = 0x%x\n",
+ vcc->qos.txtp.max_pcr,vc->acr);)
+ }
+ else if (vcc->qos.txtp.traffic_class == ATM_ABR)
+ { srv_cls_param_t srv_p;
+ IF_ABR(printk("Tx ABR VCC\n");)
+ init_abr_vc(iadev, &srv_p);
+ if (vcc->qos.txtp.pcr > 0)
+ srv_p.pcr = vcc->qos.txtp.pcr;
+ if (vcc->qos.txtp.min_pcr > 0) {
+ int tmpsum = iadev->sum_mcr+iadev->sum_cbr+vcc->qos.txtp.min_pcr;
+ if (tmpsum > iadev->LineRate)
+ return -EBUSY;
+ srv_p.mcr = vcc->qos.txtp.min_pcr;
+ iadev->sum_mcr += vcc->qos.txtp.min_pcr;
+ }
+ else srv_p.mcr = 0;
+ if (vcc->qos.txtp.icr)
+ srv_p.icr = vcc->qos.txtp.icr;
+ if (vcc->qos.txtp.tbe)
+ srv_p.tbe = vcc->qos.txtp.tbe;
+ if (vcc->qos.txtp.frtt)
+ srv_p.frtt = vcc->qos.txtp.frtt;
+ if (vcc->qos.txtp.rif)
+ srv_p.rif = vcc->qos.txtp.rif;
+ if (vcc->qos.txtp.rdf)
+ srv_p.rdf = vcc->qos.txtp.rdf;
+ if (vcc->qos.txtp.nrm_pres)
+ srv_p.nrm = vcc->qos.txtp.nrm;
+ if (vcc->qos.txtp.trm_pres)
+ srv_p.trm = vcc->qos.txtp.trm;
+ if (vcc->qos.txtp.adtf_pres)
+ srv_p.adtf = vcc->qos.txtp.adtf;
+ if (vcc->qos.txtp.cdf_pres)
+ srv_p.cdf = vcc->qos.txtp.cdf;
+ if (srv_p.icr > srv_p.pcr)
+ srv_p.icr = srv_p.pcr;
+ IF_ABR(printk("ABR:vcc->qos.txtp.max_pcr = %d mcr = %d\n",
+ srv_p.pcr, srv_p.mcr);)
+ ia_open_abr_vc(iadev, &srv_p, vcc, 1);
+ } else if (vcc->qos.txtp.traffic_class == ATM_CBR) {
+ if (iadev->phy_type & FE_25MBIT_PHY) {
+ printk("IA: CBR not support\n");
+ return -EINVAL;
+ }
+ if (vcc->qos.txtp.max_pcr > iadev->LineRate) {
+ IF_CBR(printk("PCR is not availble\n");)
+ return -1;
+ }
+ vc->type = CBR;
+ vc->status = CRC_APPEND;
+ if ((ret = ia_cbr_setup (iadev, vcc)) < 0) {
+ return ret;
+ }
+ }
+ else
+ printk("iadev: Non UBR, ABR and CBR traffic not supportedn");
+
+ iadev->testTable[vcc->vci]->vc_status |= VC_ACTIVE;
+ IF_EVENT(printk("ia open_tx returning \n");)
+ return 0;
+}
+
+
+static int tx_init(struct atm_dev *dev)
+{
+ IADEV *iadev;
+ struct tx_buf_desc *buf_desc_ptr;
+ unsigned int tx_pkt_start;
+ u32 *dle_addr;
+ int i;
+ u_short tcq_st_adr;
+ u_short *tcq_start;
+ u_short prq_st_adr;
+ u_short *prq_start;
+ struct main_vc *vc;
+ struct ext_vc *evc;
+ u_short tmp16;
+ u32 vcsize_sel;
+
+ iadev = INPH_IA_DEV(dev);
+ spin_lock_init(&iadev->tx_lock);
+
+ IF_INIT(printk("Tx MASK REG: 0x%0x\n",
+ readw(iadev->seg_reg+SEG_MASK_REG));)
+ /*---------- Initializing Transmit DLEs ----------*/
+ /* allocating 8k memory for transmit DLEs */
+ dle_addr = (u32*)kmalloc(2*sizeof(struct dle)*DLE_ENTRIES, GFP_KERNEL);
+ if (!dle_addr)
+ {
+ printk(KERN_ERR DEV_LABEL " can't allocate TX DLEs\n");
+ return -ENOMEM;
+ }
+
+ /* find 4k byte boundary within the 8k allocated */
+ dle_addr = (u32*)(((u32)dle_addr+(4096-1)) & ~(4096-1));
+ iadev->tx_dle_q.start = (struct dle*)dle_addr;
+ iadev->tx_dle_q.read = iadev->tx_dle_q.start;
+ iadev->tx_dle_q.write = iadev->tx_dle_q.start;
+ iadev->tx_dle_q.end = (struct dle*)((u32)dle_addr+sizeof(struct dle)*DLE_ENTRIES);
+
+ /* write the upper 20 bits of the start address to tx list address register */
+ writel(virt_to_bus(dle_addr) & 0xfffff000, iadev->dma+IPHASE5575_TX_LIST_ADDR);
+ writew(0xffff, iadev->seg_reg+SEG_MASK_REG);
+ writew(0, iadev->seg_reg+MODE_REG_0);
+ writew(RESET_SEG, iadev->seg_reg+SEG_COMMAND_REG);
+ iadev->MAIN_VC_TABLE_ADDR = iadev->seg_ram+MAIN_VC_TABLE*iadev->memSize;
+ iadev->EXT_VC_TABLE_ADDR = iadev->seg_ram+EXT_VC_TABLE*iadev->memSize;
+ iadev->ABR_SCHED_TABLE_ADDR=iadev->seg_ram+ABR_SCHED_TABLE*iadev->memSize;
+
+ /*
+ Transmit side control memory map
+ --------------------------------
+ Buffer descr 0x0000 (128 - 4K)
+ Commn queues 0x1000 Transmit comp, Packet ready(0x1400)
+ (512 - 1K) each
+ TCQ - 4K, PRQ - 5K
+ CBR Table 0x1800 (as needed) - 6K
+ UBR Table 0x3000 (1K - 4K) - 12K
+ UBR Wait queue 0x4000 (1K - 4K) - 16K
+ ABR sched 0x5000 and ABR wait queue (1K - 2K) each
+ ABR Tbl - 20K, ABR Wq - 22K
+ extended VC 0x6000 (1K - 8K) - 24K
+ VC Table 0x8000 (1K - 32K) - 32K
+
+ Between 0x2000 (8K) and 0x3000 (12K) there is 4K space left for VBR Tbl
+ and Wait q, which can be allotted later.
+ */
+
+ /* Buffer Descriptor Table Base address */
+ writew(TX_DESC_BASE, iadev->seg_reg+SEG_DESC_BASE);
+
+ /* initialize each entry in the buffer descriptor table */
+ buf_desc_ptr =(struct tx_buf_desc *)(iadev->seg_ram+TX_DESC_BASE);
+ memset((caddr_t)buf_desc_ptr, 0, sizeof(struct tx_buf_desc));
+ buf_desc_ptr++;
+ tx_pkt_start = TX_PACKET_RAM;
+ for(i=1; i<=iadev->num_tx_desc; i++)
+ {
+ memset((caddr_t)buf_desc_ptr, 0, sizeof(struct tx_buf_desc));
+ buf_desc_ptr->desc_mode = AAL5;
+ buf_desc_ptr->buf_start_hi = tx_pkt_start >> 16;
+ buf_desc_ptr->buf_start_lo = tx_pkt_start & 0x0000ffff;
+ buf_desc_ptr++;
+ tx_pkt_start += iadev->tx_buf_sz;
+ }
+ iadev->tx_buf= (caddr_t*)kmalloc(iadev->num_tx_desc*sizeof(caddr_t),
+ GFP_KERNEL);
+ if (!iadev->tx_buf) {
+ printk(KERN_ERR DEV_LABEL " couldn't get mem\n");
+ return -EAGAIN;
+ }
+ for (i= 0; i< iadev->num_tx_desc; i++)
+ {
+
+ iadev->tx_buf[i] =(caddr_t)kmalloc(sizeof(struct cpcs_trailer),
+ GFP_KERNEL|GFP_DMA);
+ if(!iadev->tx_buf[i]) {
+ printk(KERN_ERR DEV_LABEL " couldn't get freepage\n");
+ return -EAGAIN;
+ }
+ }
+ iadev->desc_tbl = (struct desc_tbl_t *)kmalloc(iadev->num_tx_desc *
+ sizeof(struct desc_tbl_t), GFP_KERNEL);
+ if (!iadev->desc_tbl)
+ return -EAGAIN;
+
+ /* Communication Queues base address */
+ i = TX_COMP_Q * iadev->memSize;
+ writew(i >> 16, iadev->seg_reg+SEG_QUEUE_BASE);
+
+ /* Transmit Complete Queue */
+ writew(i, iadev->seg_reg+TCQ_ST_ADR);
+ writew(i, iadev->seg_reg+TCQ_RD_PTR);
+ writew(i+iadev->num_tx_desc*sizeof(u_short),iadev->seg_reg+TCQ_WR_PTR);
+ iadev->host_tcq_wr = i + iadev->num_tx_desc*sizeof(u_short);
+ writew(i+2 * iadev->num_tx_desc * sizeof(u_short),
+ iadev->seg_reg+TCQ_ED_ADR);
+ /* Fill the TCQ with all the free descriptors. */
+ tcq_st_adr = readw(iadev->seg_reg+TCQ_ST_ADR);
+ tcq_start = (u_short *)(iadev->seg_ram+tcq_st_adr);
+ for(i=1; i<=iadev->num_tx_desc; i++)
+ {
+ *tcq_start = (u_short)i;
+ tcq_start++;
+ }
+
+ /* Packet Ready Queue */
+ i = PKT_RDY_Q * iadev->memSize;
+ writew(i, iadev->seg_reg+PRQ_ST_ADR);
+ writew(i+2 * iadev->num_tx_desc * sizeof(u_short),
+ iadev->seg_reg+PRQ_ED_ADR);
+ writew(i, iadev->seg_reg+PRQ_RD_PTR);
+ writew(i, iadev->seg_reg+PRQ_WR_PTR);
+
+ /* Load local copy of PRQ and TCQ ptrs */
+ iadev->ffL.prq_st = readw(iadev->seg_reg+PRQ_ST_ADR) & 0xffff;
+ iadev->ffL.prq_ed = readw(iadev->seg_reg+PRQ_ED_ADR) & 0xffff;
+ iadev->ffL.prq_wr = readw(iadev->seg_reg+PRQ_WR_PTR) & 0xffff;
+
+ iadev->ffL.tcq_st = readw(iadev->seg_reg+TCQ_ST_ADR) & 0xffff;
+ iadev->ffL.tcq_ed = readw(iadev->seg_reg+TCQ_ED_ADR) & 0xffff;
+ iadev->ffL.tcq_rd = readw(iadev->seg_reg+TCQ_RD_PTR) & 0xffff;
+
+ /* Just for safety, initialize all queue entries to 0 */
+ /* Fill the PRQ with all the free descriptors. */
+ prq_st_adr = readw(iadev->seg_reg+PRQ_ST_ADR);
+ prq_start = (u_short *)(iadev->seg_ram+prq_st_adr);
+ for(i=1; i<=iadev->num_tx_desc; i++)
+ {
+ *prq_start = (u_short)0; /* zero-fill all entries */
+ prq_start++;
+ }
+ /* CBR Table */
+ IF_INIT(printk("Start CBR Init\n");)
+#if 1 /* for 1K VC board, CBR_PTR_BASE is 0 */
+ writew(0,iadev->seg_reg+CBR_PTR_BASE);
+#else /* Charlie's logic is wrong ? */
+ tmp16 = (iadev->seg_ram+CBR_SCHED_TABLE*iadev->memSize)>>17;
+ IF_INIT(printk("cbr_ptr_base = 0x%x ", tmp16);)
+ writew(tmp16,iadev->seg_reg+CBR_PTR_BASE);
+#endif
+
+ IF_INIT(printk("value in register = 0x%x\n",
+ readw(iadev->seg_reg+CBR_PTR_BASE));)
+ tmp16 = (CBR_SCHED_TABLE*iadev->memSize) >> 1;
+ writew(tmp16, iadev->seg_reg+CBR_TAB_BEG);
+ IF_INIT(printk("cbr_tab_beg = 0x%x in reg = 0x%x \n", tmp16,
+ readw(iadev->seg_reg+CBR_TAB_BEG));)
+ writew(tmp16, iadev->seg_reg+CBR_TAB_END+1); // CBR_PTR;
+ tmp16 = (CBR_SCHED_TABLE*iadev->memSize + iadev->num_vc*6 - 2) >> 1;
+ writew(tmp16, iadev->seg_reg+CBR_TAB_END);
+ IF_INIT(printk("iadev->seg_reg = 0x%x CBR_PTR_BASE = 0x%x\n",
+ (u32)iadev->seg_reg, readw(iadev->seg_reg+CBR_PTR_BASE));)
+ IF_INIT(printk("CBR_TAB_BEG = 0x%x, CBR_TAB_END = 0x%x, CBR_PTR = 0x%x\n",
+ readw(iadev->seg_reg+CBR_TAB_BEG), readw(iadev->seg_reg+CBR_TAB_END),
+ readw(iadev->seg_reg+CBR_TAB_END+1));)
+ tmp16 = (iadev->seg_ram+CBR_SCHED_TABLE*iadev->memSize);
+
+ /* Initialize the CBR Scheduling Table */
+ memset((caddr_t)(iadev->seg_ram+CBR_SCHED_TABLE*iadev->memSize),
+ 0, iadev->num_vc*6);
+ iadev->CbrRemEntries = iadev->CbrTotEntries = iadev->num_vc*3;
+ iadev->CbrEntryPt = 0;
+ iadev->Granularity = MAX_ATM_155 / iadev->CbrTotEntries;
+ iadev->NumEnabledCBR = 0;
+
+ /* UBR scheduling Table and wait queue */
+ /* initialize all bytes of UBR scheduler table and wait queue to 0
+ - SCHEDSZ is 1K (# of entries).
+ - UBR Table size is 4K
+ - UBR wait queue is 4K
+ since the table and wait queues are contiguous, all the bytes
+ can be initialized by one memset.
+ */
+
+ /* vcsize_sel = log2(8K / num_vc); it encodes the VC table size */
+ vcsize_sel = 0;
+ i = 8*1024;
+ while (i != iadev->num_vc) {
+ i /= 2;
+ vcsize_sel++;
+ }
+
+ i = MAIN_VC_TABLE * iadev->memSize;
+ writew(vcsize_sel | ((i >> 8) & 0xfff8),iadev->seg_reg+VCT_BASE);
+ i = EXT_VC_TABLE * iadev->memSize;
+ writew((i >> 8) & 0xfffe, iadev->seg_reg+VCTE_BASE);
+ i = UBR_SCHED_TABLE * iadev->memSize;
+ writew((i & 0xffff) >> 11, iadev->seg_reg+UBR_SBPTR_BASE);
+ i = UBR_WAIT_Q * iadev->memSize;
+ writew((i >> 7) & 0xffff, iadev->seg_reg+UBRWQ_BASE);
+ memset((caddr_t)(iadev->seg_ram+UBR_SCHED_TABLE*iadev->memSize),
+ 0, iadev->num_vc*8);
+ /* ABR scheduling Table(0x5000-0x57ff) and wait queue(0x5800-0x5fff)*/
+ /* initialize all bytes of ABR scheduler table and wait queue to 0
+ - SCHEDSZ is 1K (# of entries).
+ - ABR Table size is 2K
+ - ABR wait queue is 2K
+ since the table and wait queues are contiguous, all the bytes
+ can be initialized by one memset.
+ */
+ i = ABR_SCHED_TABLE * iadev->memSize;
+ writew((i >> 11) & 0xffff, iadev->seg_reg+ABR_SBPTR_BASE);
+ i = ABR_WAIT_Q * iadev->memSize;
+ writew((i >> 7) & 0xffff, iadev->seg_reg+ABRWQ_BASE);
+
+ i = ABR_SCHED_TABLE*iadev->memSize;
+ memset((caddr_t)(iadev->seg_ram+i), 0, iadev->num_vc*4);
+ vc = (struct main_vc *)iadev->MAIN_VC_TABLE_ADDR;
+ evc = (struct ext_vc *)iadev->EXT_VC_TABLE_ADDR;
+ iadev->testTable = (struct testTable_t **)
+ kmalloc(sizeof(long)*iadev->num_vc, GFP_KERNEL);
+ if (!iadev->testTable) {
+ printk("ia: kmalloc of testTable failed\n");
+ return -EAGAIN;
+ }
+ for(i=0; i<iadev->num_vc; i++)
+ {
+ memset((caddr_t)vc, 0, sizeof(struct main_vc));
+ memset((caddr_t)evc, 0, sizeof(struct ext_vc));
+ iadev->testTable[i] = (struct testTable_t *)
+ kmalloc(sizeof(struct testTable_t), GFP_KERNEL);
+ if (!iadev->testTable[i])
+ return -ENOMEM;
+ iadev->testTable[i]->lastTime = 0;
+ iadev->testTable[i]->fract = 0;
+ iadev->testTable[i]->vc_status = VC_UBR;
+ vc++;
+ evc++;
+ }
+
+ /* Other Initialization */
+
+ /* Max Rate Register */
+ if (iadev->phy_type & FE_25MBIT_PHY) {
+ writew(RATE25, iadev->seg_reg+MAXRATE);
+ writew((UBR_EN | (0x23 << 2)), iadev->seg_reg+STPARMS);
+ }
+ else {
+ writew(cellrate_to_float(iadev->LineRate),iadev->seg_reg+MAXRATE);
+ writew((UBR_EN | ABR_EN | (0x23 << 2)), iadev->seg_reg+STPARMS);
+ }
+ /* Set Idle Header Registers to be sure */
+ writew(0, iadev->seg_reg+IDLEHEADHI);
+ writew(0, iadev->seg_reg+IDLEHEADLO);
+
+ /* Program ABR UBR Priority Register as PRI_ABR_UBR_EQUAL */
+ writew(0xaa00, iadev->seg_reg+ABRUBR_ARB);
+
+ iadev->close_pending = 0;
+#if LINUX_VERSION_CODE >= 0x20303
+ init_waitqueue_head(&iadev->close_wait);
+ init_waitqueue_head(&iadev->timeout_wait);
+#else
+ iadev->close_wait = NULL;
+ iadev->timeout_wait = NULL;
+#endif
+ skb_queue_head_init(&iadev->tx_dma_q);
+ ia_init_rtn_q(&iadev->tx_return_q);
+
+ /* RM Cell Protocol ID and Message Type */
+ writew(RM_TYPE_4_0, iadev->seg_reg+RM_TYPE);
+ skb_queue_head_init (&iadev->tx_backlog);
+
+ /* Mode Register 1 */
+ writew(MODE_REG_1_VAL, iadev->seg_reg+MODE_REG_1);
+
+ /* Mode Register 0 */
+ writew(T_ONLINE, iadev->seg_reg+MODE_REG_0);
+
+ /* Interrupt Status Register - read to clear */
+ readw(iadev->seg_reg+SEG_INTR_STATUS_REG);
+
+ /* Interrupt Mask Reg- don't mask TCQ_NOT_EMPTY interrupt generation */
+ writew(~(TRANSMIT_DONE | TCQ_NOT_EMPTY), iadev->seg_reg+SEG_MASK_REG);
+ writew(TRANSMIT_DONE, iadev->seg_reg+SEG_INTR_STATUS_REG);
+ iadev->tx_pkt_cnt = 0;
+ iadev->rate_limit = iadev->LineRate / 3;
+
+ return 0;
+}
+
+static void ia_int(int irq, void *dev_id, struct pt_regs *regs)
+{
+ struct atm_dev *dev;
+ IADEV *iadev;
+ unsigned int status;
+
+ dev = dev_id;
+ iadev = INPH_IA_DEV(dev);
+ while( (status = readl(iadev->reg+IPHASE5575_BUS_STATUS_REG) & 0x7f))
+ {
+ IF_EVENT(printk("ia_int: status = 0x%x\n", status);)
+ if (status & STAT_REASSINT)
+ {
+ /* do something */
+ IF_EVENT(printk("REASSINT Bus status reg: %08x\n", status);)
+ rx_intr(dev);
+ }
+ if (status & STAT_DLERINT)
+ {
+ /* Clear this bit by writing a 1 to it. */
+ *(u_int *)(iadev->reg+IPHASE5575_BUS_STATUS_REG) = STAT_DLERINT;
+ rx_dle_intr(dev);
+ }
+ if (status & STAT_SEGINT)
+ {
+ /* do something */
+ IF_EVENT(printk("IA: tx_intr \n");)
+ tx_intr(dev);
+ }
+ if (status & STAT_DLETINT)
+ {
+ *(u_int *)(iadev->reg+IPHASE5575_BUS_STATUS_REG) = STAT_DLETINT;
+ tx_dle_intr(dev);
+ }
+ if (status & (STAT_FEINT | STAT_ERRINT | STAT_MARKINT))
+ {
+ if (status & STAT_FEINT)
+ IaFrontEndIntr(iadev);
+ }
+ }
+}
+
+
+
+/*----------------------------- entries --------------------------------*/
+static int get_esi(struct atm_dev *dev)
+{
+ IADEV *iadev;
+ int i;
+ u32 mac1;
+ u16 mac2;
+
+ iadev = INPH_IA_DEV(dev);
+ mac1 = cpu_to_be32(le32_to_cpu(readl(
+ iadev->reg+IPHASE5575_MAC1)));
+ mac2 = cpu_to_be16(le16_to_cpu(readl(iadev->reg+IPHASE5575_MAC2)));
+ IF_INIT(printk("ESI: 0x%08x%04x\n", mac1, mac2);)
+ for (i=0; i<MAC1_LEN; i++)
+ dev->esi[i] = mac1 >>(8*(MAC1_LEN-1-i));
+
+ for (i=0; i<MAC2_LEN; i++)
+ dev->esi[i+MAC1_LEN] = mac2 >>(8*(MAC2_LEN - 1 -i));
+ return 0;
+}
+
+static int reset_sar(struct atm_dev *dev)
+{
+ IADEV *iadev;
+ int i, error = 1;
+ unsigned int pci[64];
+
+ iadev = INPH_IA_DEV(dev);
+ for(i=0; i<64; i++)
+ if ((error = pci_read_config_dword(iadev->pci,
+ i*4, &pci[i])) != PCIBIOS_SUCCESSFUL)
+ return error;
+ writel(0, iadev->reg+IPHASE5575_EXT_RESET);
+ for(i=0; i<64; i++)
+ if ((error = pci_write_config_dword(iadev->pci,
+ i*4, pci[i])) != PCIBIOS_SUCCESSFUL)
+ return error;
+ udelay(5);
+ return 0;
+}
+
+
+#if LINUX_VERSION_CODE >= 0x20312
+static int __init ia_init(struct atm_dev *dev)
+#else
+__initfunc(static int ia_init(struct atm_dev *dev))
+#endif
+{
+ IADEV *iadev;
+ unsigned int real_base, base;
+ unsigned short command;
+ unsigned char revision;
+ int error, i;
+
+ /* The device has been identified and registered. Now we read
+ necessary configuration info like memory base address,
+ interrupt number etc */
+
+ IF_INIT(printk(">ia_init\n");)
+ dev->ci_range.vpi_bits = 0;
+ dev->ci_range.vci_bits = NR_VCI_LD;
+
+ iadev = INPH_IA_DEV(dev);
+
+ if ((error = pci_read_config_word(iadev->pci, PCI_COMMAND,&command))
+ || (error = pci_read_config_dword(iadev->pci,
+ PCI_BASE_ADDRESS_0,&real_base))
+ || (error = pci_read_config_byte(iadev->pci,
+ PCI_INTERRUPT_LINE,&iadev->irq))
+ || (error = pci_read_config_byte(iadev->pci,
+ PCI_REVISION_ID,&revision)))
+ {
+ printk(KERN_ERR DEV_LABEL "(itf %d): init error 0x%x\n",
+ dev->number,error);
+ return -EINVAL;
+ }
+ IF_INIT(printk(DEV_LABEL "(itf %d): rev.%d,realbase=0x%x,irq=%d\n",
+ dev->number, revision, real_base, iadev->irq);)
+
+ /* find mapping size of board */
+
+ /* write all 1's into the base address register.
+ read the register which returns us 0's in the don't care bits.
+ size is calculated as ~(don't care bits) + 1 */
+
+ if (pci_write_config_dword(iadev->pci,
+ PCI_BASE_ADDRESS_0, 0xffffffff)!=PCIBIOS_SUCCESSFUL)
+ {
+ printk(DEV_LABEL "(itf %d): init error 0x%x\n",dev->number,
+ error);
+ return -EINVAL;
+ }
+ if(pci_read_config_dword(iadev->pci, PCI_BASE_ADDRESS_0,
+ &(iadev->pci_map_size)) !=PCIBIOS_SUCCESSFUL)
+ {
+ printk(DEV_LABEL "(itf %d): init error 0x%x\n",dev->number,
+ error);
+ return -EINVAL;
+ }
+ iadev->pci_map_size &= PCI_BASE_ADDRESS_MEM_MASK;
+ iadev->pci_map_size = ~iadev->pci_map_size + 1;
+ if (iadev->pci_map_size == 0x100000){
+ iadev->num_vc = 4096;
+ dev->ci_range.vci_bits = NR_VCI_4K_LD;
+ iadev->memSize = 4;
+ }
+ else if (iadev->pci_map_size == 0x40000) {
+ iadev->num_vc = 1024;
+ iadev->memSize = 1;
+ }
+ else {
+ printk("Unknown pci_map_size = 0x%x\n", iadev->pci_map_size);
+ return -EINVAL;
+ }
+ IF_INIT(printk (DEV_LABEL "map size: %i\n", iadev->pci_map_size);)
+ if(pci_write_config_dword(iadev->pci, PCI_BASE_ADDRESS_0,
+ real_base)!=PCIBIOS_SUCCESSFUL)
+ {
+ printk(DEV_LABEL "(itf %d): init error 0x%x\n",dev->number,
+ error);
+ return -EINVAL;
+ }
+
+ /* strip flags (last 4 bits ) ---> mask with 0xfffffff0 */
+ real_base &= MEM_VALID;
+ /* enabling the responses in memory space */
+ command |= (PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER);
+ if ((error = pci_write_config_word(iadev->pci,
+ PCI_COMMAND, command)))
+ {
+ printk(DEV_LABEL "(itf %d): can't enable memory (0x%x)\n",
+ dev->number,error);
+ return error;
+ }
+ /*
+ * Delay at least 1us before doing any mem accesses (how 'bout 10?)
+ */
+ udelay(10);
+
+ /* mapping the physical address to a virtual address in address space */
+ base=(unsigned long)ioremap((unsigned long)real_base,iadev->pci_map_size); /* ioremap is not resolved ??? */
+
+ if (!base)
+ {
+ printk(DEV_LABEL " (itf %d): can't set up page mapping\n",
+ dev->number);
+ return -ENOMEM;
+ }
+ IF_INIT(printk(DEV_LABEL " (itf %d): rev.%d,base=0x%x,irq=%d\n",
+ dev->number, revision, base, iadev->irq);)
+
+ /* filling the iphase dev structure */
+ iadev->mem = iadev->pci_map_size /2;
+ iadev->base_diff = real_base - base;
+ iadev->real_base = real_base;
+ iadev->base = base;
+
+ /* Bus Interface Control Registers */
+ iadev->reg = (u32 *) (base + REG_BASE);
+ /* Segmentation Control Registers */
+ iadev->seg_reg = (u32 *) (base + SEG_BASE);
+ /* Reassembly Control Registers */
+ iadev->reass_reg = (u32 *) (base + REASS_BASE);
+ /* Front end/ DMA control registers */
+ iadev->phy = (u32 *) (base + PHY_BASE);
+ iadev->dma = (u32 *) (base + PHY_BASE);
+ /* RAM - Segmentation RAM and Reassembly RAM */
+ iadev->ram = (u32 *) (base + ACTUAL_RAM_BASE);
+ iadev->seg_ram = (base + ACTUAL_SEG_RAM_BASE);
+ iadev->reass_ram = (base + ACTUAL_REASS_RAM_BASE);
+
+ /* let's print out the above */
+ IF_INIT(printk("Base addrs: %08x %08x %08x \n %08x %08x %08x %08x\n",
+ (u32)iadev->reg,(u32)iadev->seg_reg,(u32)iadev->reass_reg,
+ (u32)iadev->phy, (u32)iadev->ram, (u32)iadev->seg_ram,
+ (u32)iadev->reass_ram);)
+
+ /* let's try reading the MAC address */
+ error = get_esi(dev);
+ if (error) return error;
+ printk("IA: ");
+ for (i=0; i < ESI_LEN; i++)
+ printk("%s%02X",i ? "-" : "",dev->esi[i]);
+ printk("\n");
+
+ /* reset SAR */
+ if (reset_sar(dev)) {
+ printk("IA: reset SAR fail, please try again\n");
+ return 1;
+ }
+ return 0;
+}
+
+static void ia_update_stats(IADEV *iadev) {
+ if (!iadev->carrier_detect)
+ return;
+ iadev->rx_cell_cnt += readw(iadev->reass_reg+CELL_CTR0)&0xffff;
+ iadev->rx_cell_cnt += (readw(iadev->reass_reg+CELL_CTR1) & 0xffff) << 16;
+ iadev->drop_rxpkt += readw(iadev->reass_reg + DRP_PKT_CNTR ) & 0xffff;
+ iadev->drop_rxcell += readw(iadev->reass_reg + ERR_CNTR) & 0xffff;
+ iadev->tx_cell_cnt += readw(iadev->seg_reg + CELL_CTR_LO_AUTO)&0xffff;
+ iadev->tx_cell_cnt += (readw(iadev->seg_reg+CELL_CTR_HIGH_AUTO)&0xffff)<<16;
+ return;
+}
+
+static void ia_led_timer(unsigned long arg) {
+ unsigned long flags;
+ static u_char blinking[8] = {0, 0, 0, 0, 0, 0, 0, 0};
+ u_char i;
+ static u32 ctrl_reg;
+ for (i = 0; i < iadev_count; i++) {
+ if (ia_dev[i]) {
+ ctrl_reg = readl(ia_dev[i]->reg+IPHASE5575_BUS_CONTROL_REG);
+ if (blinking[i] == 0) {
+ blinking[i]++;
+ ctrl_reg &= (~CTRL_LED);
+ writel(ctrl_reg, ia_dev[i]->reg+IPHASE5575_BUS_CONTROL_REG);
+ ia_update_stats(ia_dev[i]);
+ }
+ else {
+ blinking[i] = 0;
+ ctrl_reg |= CTRL_LED;
+ writel(ctrl_reg, ia_dev[i]->reg+IPHASE5575_BUS_CONTROL_REG);
+ spin_lock_irqsave(&ia_dev[i]->tx_lock, flags);
+ if (ia_dev[i]->close_pending)
+ wake_up(&ia_dev[i]->close_wait);
+ ia_tx_poll(ia_dev[i]);
+ spin_unlock_irqrestore(&ia_dev[i]->tx_lock, flags);
+ }
+ }
+ }
+ mod_timer(&ia_timer, jiffies + HZ / 4);
+ return;
+}
+
+static void ia_phy_put(struct atm_dev *dev, unsigned char value,
+ unsigned long addr)
+{
+ writel(value, INPH_IA_DEV(dev)->phy+addr);
+}
+
+static unsigned char ia_phy_get(struct atm_dev *dev, unsigned long addr)
+{
+ return readl(INPH_IA_DEV(dev)->phy+addr);
+}
+
+#if LINUX_VERSION_CODE >= 0x20312
+static int __init ia_start(struct atm_dev *dev)
+#else
+__initfunc(static int ia_start(struct atm_dev *dev))
+#endif
+{
+ IADEV *iadev;
+ int error = 1;
+ unsigned char phy;
+ u32 ctrl_reg;
+ IF_EVENT(printk(">ia_start\n");)
+ iadev = INPH_IA_DEV(dev);
+ if (request_irq(iadev->irq, &ia_int, SA_SHIRQ, DEV_LABEL, dev)) {
+ printk(KERN_ERR DEV_LABEL "(itf %d): IRQ%d is already in use\n",
+ dev->number, iadev->irq);
+ return -EAGAIN;
+ }
+ /* @@@ should release IRQ on error */
+ /* enabling memory + master */
+ if ((error = pci_write_config_word(iadev->pci,
+ PCI_COMMAND,
+ PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER )))
+ {
+ printk(KERN_ERR DEV_LABEL "(itf %d): can't enable memory+"
+ "master (0x%x)\n",dev->number, error);
+ free_irq(iadev->irq, dev);
+ return error;
+ }
+ udelay(10);
+
+ /* Maybe we should reset the front end, initialize Bus Interface Control
+ Registers and see. */
+
+ IF_INIT(printk("Bus ctrl reg: %08x\n",
+ readl(iadev->reg+IPHASE5575_BUS_CONTROL_REG));)
+ ctrl_reg = readl(iadev->reg+IPHASE5575_BUS_CONTROL_REG);
+ ctrl_reg = (ctrl_reg & (CTRL_LED | CTRL_FE_RST))
+ | CTRL_B8
+ | CTRL_B16
+ | CTRL_B32
+ | CTRL_B48
+ | CTRL_B64
+ | CTRL_B128
+ | CTRL_ERRMASK
+ | CTRL_DLETMASK /* should be removed later */
+ | CTRL_DLERMASK
+ | CTRL_SEGMASK
+ | CTRL_REASSMASK
+ | CTRL_FEMASK
+ | CTRL_CSPREEMPT;
+
+ writel(ctrl_reg, iadev->reg+IPHASE5575_BUS_CONTROL_REG);
+
+ IF_INIT(printk("Bus ctrl reg after initializing: %08x\n",
+ readl(iadev->reg+IPHASE5575_BUS_CONTROL_REG));
+ printk("Bus status reg after init: %08x\n",
+ readl(iadev->reg+IPHASE5575_BUS_STATUS_REG));)
+
+ ia_hw_type(iadev);
+ error = tx_init(dev);
+ if (error) {
+ free_irq(iadev->irq, dev);
+ return error;
+ }
+ error = rx_init(dev);
+ if (error) {
+ free_irq(iadev->irq, dev);
+ return error;
+ }
+
+ ctrl_reg = readl(iadev->reg+IPHASE5575_BUS_CONTROL_REG);
+ writel(ctrl_reg | CTRL_FE_RST, iadev->reg+IPHASE5575_BUS_CONTROL_REG);
+ IF_INIT(printk("Bus ctrl reg after initializing: %08x\n",
+ readl(iadev->reg+IPHASE5575_BUS_CONTROL_REG));)
+ phy = 0; /* resolve compiler complaint */
+ IF_INIT (
+ if ((phy=ia_phy_get(dev,0)) == 0x30)
+ printk("IA: pm5346,rev.%d\n",phy&0x0f);
+ else
+ printk("IA: utopia,rev.%0x\n",phy);)
+
+ if (iadev->phy_type & FE_25MBIT_PHY) {
+ ia_mb25_init(iadev);
+ return 0;
+ }
+ if (iadev->phy_type & (FE_DS3_PHY | FE_E3_PHY)) {
+ ia_suni_pm7345_init(iadev);
+ return 0;
+ }
+
+ error = suni_init(dev);
+ if (error) {
+ free_irq(iadev->irq, dev);
+ return error;
+ }
+
+ /* Enable interrupt on loss of signal SUNI_RSOP_CIE 0x10
+ SUNI_RSOP_CIE_LOSE - 0x04
+ */
+ ia_phy_put(dev, ia_phy_get(dev,0x10) | 0x04, 0x10);
+#ifndef MODULE
+ error = dev->phy->start(dev);
+ if (error) {
+ free_irq(iadev->irq, dev);
+ return error;
+ }
+#endif
+ /* Get iadev->carrier_detect status */
+ IaFrontEndIntr(iadev);
+ return 0;
+}
+
+static void ia_close(struct atm_vcc *vcc)
+{
+ u16 *vc_table;
+ IADEV *iadev;
+ struct ia_vcc *ia_vcc;
+ struct sk_buff *skb = NULL;
+ struct sk_buff_head tmp_tx_backlog, tmp_vcc_backlog;
+ unsigned long closetime, flags;
+ int ctimeout;
+
+ iadev = INPH_IA_DEV(vcc->dev);
+ ia_vcc = INPH_IA_VCC(vcc);
+ if (!ia_vcc) return;
+
+ IF_EVENT(printk("ia_close: ia_vcc->vc_desc_cnt = %d vci = %d\n",
+ ia_vcc->vc_desc_cnt,vcc->vci);)
+ vcc->flags &= ~ATM_VF_READY;
+ skb_queue_head_init (&tmp_tx_backlog);
+ skb_queue_head_init (&tmp_vcc_backlog);
+ if (vcc->qos.txtp.traffic_class != ATM_NONE) {
+ iadev->close_pending++;
+ sleep_on_timeout(&iadev->timeout_wait, 50);
+ spin_lock_irqsave(&iadev->tx_lock, flags);
+ while((skb = skb_dequeue(&iadev->tx_backlog))) {
+ if (ATM_SKB(skb)->vcc == vcc){
+ if (vcc->pop) vcc->pop(vcc, skb);
+ else dev_kfree_skb(skb);
+ }
+ else
+ skb_queue_tail(&tmp_tx_backlog, skb);
+ }
+ while((skb = skb_dequeue(&tmp_tx_backlog)))
+ skb_queue_tail(&iadev->tx_backlog, skb);
+ IF_EVENT(printk("IA TX Done desc_cnt = %d\n", ia_vcc->vc_desc_cnt);)
+ closetime = jiffies;
+ ctimeout = 300000 / ia_vcc->pcr;
+ if (ctimeout == 0)
+ ctimeout = 1;
+ while (ia_vcc->vc_desc_cnt > 0){
+ if ((jiffies - closetime) >= ctimeout)
+ break;
+ spin_unlock_irqrestore(&iadev->tx_lock, flags);
+ sleep_on(&iadev->close_wait);
+ spin_lock_irqsave(&iadev->tx_lock, flags);
+ }
+ iadev->close_pending--;
+ iadev->testTable[vcc->vci]->lastTime = 0;
+ iadev->testTable[vcc->vci]->fract = 0;
+ iadev->testTable[vcc->vci]->vc_status = VC_UBR;
+ if (vcc->qos.txtp.traffic_class == ATM_ABR) {
+ if (vcc->qos.txtp.min_pcr > 0)
+ iadev->sum_mcr -= vcc->qos.txtp.min_pcr;
+ }
+ if (vcc->qos.txtp.traffic_class == ATM_CBR) {
+ ia_vcc = INPH_IA_VCC(vcc);
+ iadev->sum_mcr -= ia_vcc->NumCbrEntry*iadev->Granularity;
+ ia_cbrVc_close (vcc);
+ }
+ spin_unlock_irqrestore(&iadev->tx_lock, flags);
+ }
+
+ if (vcc->qos.rxtp.traffic_class != ATM_NONE) {
+ // reset reass table
+ vc_table = (u16 *)(iadev->reass_ram+REASS_TABLE*iadev->memSize);
+ vc_table += vcc->vci;
+ *vc_table = NO_AAL5_PKT;
+ // reset vc table
+ vc_table = (u16 *)(iadev->reass_ram+RX_VC_TABLE*iadev->memSize);
+ vc_table += vcc->vci;
+ *vc_table = (vcc->vci << 6) | 15;
+ if (vcc->qos.rxtp.traffic_class == ATM_ABR) {
+ struct abr_vc_table *abr_vc_table = (struct abr_vc_table *)
+ (iadev->reass_ram+ABR_VC_TABLE*iadev->memSize);
+ abr_vc_table += vcc->vci;
+ abr_vc_table->rdf = 0x0003;
+ abr_vc_table->air = 0x5eb1;
+ }
+ // Drain the packets
+ rx_dle_intr(vcc->dev);
+ iadev->rx_open[vcc->vci] = 0;
+ }
+ kfree(INPH_IA_VCC(vcc));
+ ia_vcc = NULL;
+ INPH_IA_VCC(vcc) = NULL;
+ vcc->flags &= ~ATM_VF_ADDR;
+ return;
+}
+
+static int ia_open(struct atm_vcc *vcc, short vpi, int vci)
+{
+ IADEV *iadev;
+ struct ia_vcc *ia_vcc;
+ int error;
+ if (!(vcc->flags & ATM_VF_PARTIAL))
+ {
+ IF_EVENT(printk("ia: not partially allocated resources\n");)
+ INPH_IA_VCC(vcc) = NULL;
+ }
+ iadev = INPH_IA_DEV(vcc->dev);
+ error = atm_find_ci(vcc, &vpi, &vci);
+ if (error)
+ {
+ printk("iadev: atm_find_ci returned error %d\n", error);
+ return error;
+ }
+ vcc->vpi = vpi;
+ vcc->vci = vci;
+ if (vpi != ATM_VPI_UNSPEC && vci != ATM_VCI_UNSPEC)
+ {
+ IF_EVENT(printk("iphase open: unspec part\n");)
+ vcc->flags |= ATM_VF_ADDR;
+ }
+ if (vcc->qos.aal != ATM_AAL5)
+ return -EINVAL;
+ IF_EVENT(printk(DEV_LABEL "(itf %d): open %d.%d\n",
+ vcc->dev->number, vcc->vpi, vcc->vci);)
+
+ /* Device dependent initialization */
+ ia_vcc = kmalloc(sizeof(struct ia_vcc), GFP_KERNEL);
+ if (!ia_vcc) return -ENOMEM;
+ INPH_IA_VCC(vcc) = ia_vcc;
+
+ if ((error = open_rx(vcc)))
+ {
+ IF_EVENT(printk("iadev: error in open_rx, closing\n");)
+ ia_close(vcc);
+ return error;
+ }
+
+ if ((error = open_tx(vcc)))
+ {
+ IF_EVENT(printk("iadev: error in open_tx, closing\n");)
+ ia_close(vcc);
+ return error;
+ }
+
+ vcc->flags |= ATM_VF_READY;
+
+#ifndef MODULE
+ {
+ static u8 first = 1;
+ if (first) {
+ ia_timer.next = NULL;
+ ia_timer.prev = NULL;
+ ia_timer.expires = jiffies + 3*HZ;
+ ia_timer.data = 0UL;
+ ia_timer.function = ia_led_timer;
+ add_timer(&ia_timer);
+ first = 0;
+ }
+ }
+#endif
+ IF_EVENT(printk("ia open returning\n");)
+ return 0;
+}
+
+static int ia_change_qos(struct atm_vcc *vcc, struct atm_qos *qos, int flags)
+{
+ IF_EVENT(printk(">ia_change_qos\n");)
+ return 0;
+}
+
+static int ia_ioctl(struct atm_dev *dev, unsigned int cmd, void *arg)
+{
+ PIA_CMDBUF ia_cmds;
+ IADEV *iadev;
+ int i, board;
+ u16 *tmps;
+ IF_EVENT(printk(">ia_ioctl\n");)
+ if (cmd != IA_CMD) {
+ if (!dev->phy->ioctl) return -EINVAL;
+ return dev->phy->ioctl(dev,cmd,arg);
+ }
+ ia_cmds = (PIA_CMDBUF)arg;
+ board = ia_cmds->status;
+ if ((board < 0) || (board >= iadev_count))
+ board = 0;
+ iadev = ia_dev[board];
+ switch (ia_cmds->cmd) {
+ case MEMDUMP:
+ {
+ switch (ia_cmds->sub_cmd) {
+ case MEMDUMP_DEV:
+ memcpy((char*)ia_cmds->buf, (char*)iadev,
+ sizeof(IADEV));
+ ia_cmds->status = 0;
+ break;
+ case MEMDUMP_SEGREG:
+ tmps = (u16 *)ia_cmds->buf;
+ for(i=0; i<0x80; i+=2, tmps++)
+ *tmps = *(u16*)(iadev->seg_reg+i);
+ ia_cmds->status = 0;
+ ia_cmds->len = 0x80;
+ break;
+ case MEMDUMP_REASSREG:
+ tmps = (u16 *)ia_cmds->buf;
+ for(i=0; i<0x80; i+=2, tmps++)
+ *tmps = *(u16*)(iadev->reass_reg+i);
+ ia_cmds->status = 0;
+ ia_cmds->len = 0x80;
+ break;
+ case MEMDUMP_FFL:
+ {
+ ia_regs_t regs_local;
+ ffredn_t *ffL = &regs_local.ffredn;
+ rfredn_t *rfL = &regs_local.rfredn;
+
+ /* Copy real rfred registers into the local copy */
+ for (i=0; i<(sizeof (rfredn_t))/4; i++)
+ ((u_int *)rfL)[i] = ((u_int *)iadev->reass_reg)[i] & 0xffff;
+ /* Copy real ffred registers into the local copy */
+ for (i=0; i<(sizeof (ffredn_t))/4; i++)
+ ((u_int *)ffL)[i] = ((u_int *)iadev->seg_reg)[i] & 0xffff;
+
+ memcpy((char*)ia_cmds->buf,(char*)&regs_local,sizeof(ia_regs_t));
+ printk("Board %d registers dumped\n", board);
+ ia_cmds->status = 0;
+ }
+ break;
+ case READ_REG:
+ {
+ desc_dbg(iadev);
+ ia_cmds->status = 0;
+ }
+ break;
+ case 0x6:
+ {
+ ia_cmds->status = 0;
+ printk("skb = 0x%lx\n", (long)skb_peek(&iadev->tx_backlog));
+ printk("rtn_q: 0x%lx\n",(long)ia_deque_rtn_q(&iadev->tx_return_q));
+ }
+ break;
+ case 0x8:
+ {
+ struct sonet_stats *stats;
+ stats = &PRIV(_ia_dev[board])->sonet_stats;
+ printk("section_bip: %d\n", stats->section_bip);
+ printk("line_bip : %d\n", stats->line_bip);
+ printk("path_bip : %d\n", stats->path_bip);
+ printk("line_febe : %d\n", stats->line_febe);
+ printk("path_febe : %d\n", stats->path_febe);
+ printk("corr_hcs : %d\n", stats->corr_hcs);
+ printk("uncorr_hcs : %d\n", stats->uncorr_hcs);
+ printk("tx_cells : %d\n", stats->tx_cells);
+ printk("rx_cells : %d\n", stats->rx_cells);
+ }
+ ia_cmds->status = 0;
+ break;
+ case 0x9:
+ for (i = 1; i <= iadev->num_rx_desc; i++)
+ free_desc(_ia_dev[board], i);
+ writew( ~(RX_FREEQ_EMPT | RX_EXCP_RCVD),
+ iadev->reass_reg+REASS_MASK_REG);
+ iadev->rxing = 1;
+
+ ia_cmds->status = 0;
+ break;
+
+ case 0xb:
+ IaFrontEndIntr(iadev);
+ break;
+ case 0xa:
+ {
+ ia_cmds->status = 0;
+ IADebugFlag = ia_cmds->maddr;
+ printk("New debug option loaded\n");
+ }
+ break;
+ default:
+ memcpy((char*)ia_cmds->buf, (char*)ia_cmds->maddr, ia_cmds->len);
+ ia_cmds->status = 0;
+ break;
+ }
+ }
+ break;
+ default:
+ break;
+
+ }
+ return 0;
+}
+
+static int ia_getsockopt(struct atm_vcc *vcc, int level, int optname,
+ void *optval, int optlen)
+{
+ IF_EVENT(printk(">ia_getsockopt\n");)
+ return -EINVAL;
+}
+
+static int ia_setsockopt(struct atm_vcc *vcc, int level, int optname,
+ void *optval, int optlen)
+{
+ IF_EVENT(printk(">ia_setsockopt\n");)
+ return -EINVAL;
+}
+
+static int ia_pkt_tx (struct atm_vcc *vcc, struct sk_buff *skb) {
+ IADEV *iadev;
+ struct dle *wr_ptr;
+ struct tx_buf_desc *buf_desc_ptr;
+ int desc;
+ int comp_code;
+ unsigned int addr;
+ int total_len, pad, last;
+ struct cpcs_trailer *trailer;
+ struct ia_vcc *iavcc;
+ iadev = INPH_IA_DEV(vcc->dev);
+ iavcc = INPH_IA_VCC(vcc);
+ if (!iavcc->txing) {
+ printk("discard packet on closed VC\n");
+ if (vcc->pop) vcc->pop(vcc, skb);
+ else dev_kfree_skb(skb);
+ return 0;
+ }
+
+ if (skb->len > iadev->tx_buf_sz - 8) {
+ printk("Transmit size over tx buffer size\n");
+ if (vcc->pop)
+ vcc->pop(vcc, skb);
+ else
+ dev_kfree_skb(skb);
+ return 0;
+ }
+ if ((u32)skb->data & 3) {
+ printk("Misaligned SKB\n");
+ if (vcc->pop)
+ vcc->pop(vcc, skb);
+ else
+ dev_kfree_skb(skb);
+ return 0;
+ }
+ /* Get a descriptor number from our free descriptor queue.
+ The descriptor number comes from the TCQ, which is used here
+ as a free descriptor queue. Initially the TCQ is filled with
+ all the descriptors and is hence full.
+ */
+ desc = get_desc (iadev, iavcc);
+ if (desc == 0xffff)
+ return 1;
+ comp_code = desc >> 13;
+ desc &= 0x1fff;
+
+ if ((desc == 0) || (desc > iadev->num_tx_desc))
+ {
+ IF_ERR(printk(DEV_LABEL "invalid desc for send: %d\n", desc);)
+ vcc->stats->tx++;
+ if (vcc->pop)
+ vcc->pop(vcc, skb);
+ else
+ dev_kfree_skb(skb);
+ return 0; /* return SUCCESS */
+ }
+
+ if (comp_code)
+ {
+ IF_ERR(printk(DEV_LABEL "send desc:%d completion code %d error\n",
+ desc, comp_code);)
+ }
+
+ /* remember the desc and vcc mapping */
+ iavcc->vc_desc_cnt++;
+ iadev->desc_tbl[desc-1].iavcc = iavcc;
+ iadev->desc_tbl[desc-1].txskb = skb;
+ IA_SKB_STATE(skb) = 0;
+
+ iadev->ffL.tcq_rd += 2;
+ if (iadev->ffL.tcq_rd > iadev->ffL.tcq_ed)
+ iadev->ffL.tcq_rd = iadev->ffL.tcq_st;
+ writew(iadev->ffL.tcq_rd, iadev->seg_reg+TCQ_RD_PTR);
+
+ /* Put the descriptor number in the packet ready queue
+ and put the updated write pointer in the DLE field
+ */
+ *(u16*)(iadev->seg_ram+iadev->ffL.prq_wr) = desc;
+
+ iadev->ffL.prq_wr += 2;
+ if (iadev->ffL.prq_wr > iadev->ffL.prq_ed)
+ iadev->ffL.prq_wr = iadev->ffL.prq_st;
+
+ /* Figure out the exact length of the packet and padding required to
+ make it aligned on a 48 byte boundary. */
+ total_len = skb->len + sizeof(struct cpcs_trailer);
+ last = total_len % 48;
+ pad = last ? (48 - last) : 0;
+ total_len = pad + total_len;
+ IF_TX(printk("ia packet len:%d padding:%d\n", total_len, pad);)
+
+ /* Put the packet in a tx buffer */
+ if (!iadev->tx_buf[desc-1])
+ printk("couldn't get free page\n");
+
+ IF_TX(printk("Sent: skb = 0x%x skb->data: 0x%x len: %d, desc: %d\n",
+ (u32)skb, (u32)skb->data, skb->len, desc);)
+ addr = virt_to_bus(skb->data);
+ trailer = (struct cpcs_trailer*)iadev->tx_buf[desc-1];
+ trailer->control = 0;
+ /*big endian*/
+ trailer->length = ((skb->len & 0xff) << 8) | ((skb->len & 0xff00) >> 8);
+ trailer->crc32 = 0; /* not needed - dummy bytes */
+
+ /* Display the packet */
+ IF_TXPKT(printk("Sent data: len = %d MsgNum = %d\n",
+ skb->len, tcnter++);
+ xdump(skb->data, skb->len, "TX: ");
+ printk("\n");)
+
+ /* Build the buffer descriptor */
+ buf_desc_ptr = (struct tx_buf_desc *)(iadev->seg_ram+TX_DESC_BASE);
+ buf_desc_ptr += desc; /* points to the corresponding entry */
+ buf_desc_ptr->desc_mode = AAL5 | EOM_EN | APP_CRC32 | CMPL_INT;
+ writew(TRANSMIT_DONE, iadev->seg_reg+SEG_INTR_STATUS_REG);
+ buf_desc_ptr->vc_index = vcc->vci;
+ buf_desc_ptr->bytes = total_len;
+
+ if (vcc->qos.txtp.traffic_class == ATM_ABR)
+ clear_lockup (vcc, iadev);
+
+ /* Build the DLE structure */
+ wr_ptr = iadev->tx_dle_q.write;
+ memset((caddr_t)wr_ptr, 0, sizeof(struct dle));
+ wr_ptr->sys_pkt_addr = addr;
+ wr_ptr->local_pkt_addr = (buf_desc_ptr->buf_start_hi << 16) |
+ buf_desc_ptr->buf_start_lo;
+ /* wr_ptr->bytes = swap(total_len); didn't seem to affect ?? */
+ wr_ptr->bytes = skb->len;
+
+ /* hw bug - DLEs of 0x2d, 0x2e, 0x2f cause DMA lockup */
+ if ((wr_ptr->bytes >> 2) == 0xb)
+ wr_ptr->bytes = 0x30;
+
+ wr_ptr->mode = TX_DLE_PSI;
+ wr_ptr->prq_wr_ptr_data = 0;
+
+ /* end is not to be used for the DLE q */
+ if (++wr_ptr == iadev->tx_dle_q.end)
+ wr_ptr = iadev->tx_dle_q.start;
+
+ /* Build trailer dle */
+ wr_ptr->sys_pkt_addr = virt_to_bus(iadev->tx_buf[desc-1]);
+ wr_ptr->local_pkt_addr = ((buf_desc_ptr->buf_start_hi << 16) |
+ buf_desc_ptr->buf_start_lo) + total_len - sizeof(struct cpcs_trailer);
+
+ wr_ptr->bytes = sizeof(struct cpcs_trailer);
+ wr_ptr->mode = DMA_INT_ENABLE;
+ wr_ptr->prq_wr_ptr_data = iadev->ffL.prq_wr;
+
+ /* end is not to be used for the DLE q */
+ if (++wr_ptr == iadev->tx_dle_q.end)
+ wr_ptr = iadev->tx_dle_q.start;
+
+ iadev->tx_dle_q.write = wr_ptr;
+ ATM_DESC(skb) = vcc->vci;
+ skb_queue_tail(&iadev->tx_dma_q, skb);
+
+ vcc->stats->tx++;
+ iadev->tx_pkt_cnt++;
+ /* Increment transaction counter */
+ writel(2, iadev->dma+IPHASE5575_TX_COUNTER);
+
+#if 0
+ /* add flow control logic */
+ if (vcc->stats->tx % 20 == 0) {
+ if (iavcc->vc_desc_cnt > 10) {
+ vcc->tx_quota = vcc->tx_quota * 3 / 4;
+ printk("Tx1: vcc->tx_quota = %d \n", (u32)vcc->tx_quota );
+ iavcc->flow_inc = -1;
+ iavcc->saved_tx_quota = vcc->tx_quota;
+ } else if ((iavcc->flow_inc < 0) && (iavcc->vc_desc_cnt < 3)) {
+ // vcc->tx_quota = 3 * iavcc->saved_tx_quota / 4;
+ printk("Tx2: vcc->tx_quota = %d \n", (u32)vcc->tx_quota );
+ iavcc->flow_inc = 0;
+ }
+ }
+#endif
+ IF_TX(printk("ia send done\n");)
+ return 0;
+}
+
+static int ia_send(struct atm_vcc *vcc, struct sk_buff *skb)
+{
+ IADEV *iadev;
+ struct ia_vcc *iavcc;
+ unsigned long flags;
+
+ iadev = INPH_IA_DEV(vcc->dev);
+ iavcc = INPH_IA_VCC(vcc);
+ if (!skb) {
+ printk(KERN_CRIT "null skb in ia_send\n");
+ return -EINVAL;
+ }
+ if (skb->len > (iadev->tx_buf_sz - sizeof(struct cpcs_trailer))) {
+ dev_kfree_skb(skb);
+ return -EINVAL;
+ }
+ spin_lock_irqsave(&iadev->tx_lock, flags);
+ if ((vcc->flags & ATM_VF_READY) == 0){
+ dev_kfree_skb(skb);
+ spin_unlock_irqrestore(&iadev->tx_lock, flags);
+ return -EINVAL;
+ }
+ ATM_SKB(skb)->vcc = vcc;
+
+ if (skb_peek(&iadev->tx_backlog)) {
+ skb_queue_tail(&iadev->tx_backlog, skb);
+ }
+ else {
+ if (ia_pkt_tx (vcc, skb)) {
+ skb_queue_tail(&iadev->tx_backlog, skb);
+ }
+ }
+ spin_unlock_irqrestore(&iadev->tx_lock, flags);
+ return 0;
+
+}
+
+static int ia_sg_send(struct atm_vcc *vcc, unsigned long start,
+ unsigned long size)
+{
+ IF_EVENT(printk(">ia_sg_send\n");)
+ return 0;
+}
+
+
+static int ia_proc_read(struct atm_dev *dev,loff_t *pos,char *page)
+{
+ int left = *pos, n;
+ char *tmpPtr;
+ IADEV *iadev = INPH_IA_DEV(dev);
+ if(!left--) {
+ if (iadev->phy_type == FE_25MBIT_PHY) {
+ n = sprintf(page, " Board Type : Iphase5525-1KVC-128K\n");
+ return n;
+ }
+ if (iadev->phy_type == FE_DS3_PHY)
+ n = sprintf(page, " Board Type : Iphase-ATM-DS3");
+ else if (iadev->phy_type == FE_E3_PHY)
+ n = sprintf(page, " Board Type : Iphase-ATM-E3");
+ else if (iadev->phy_type == FE_UTP_OPTION)
+ n = sprintf(page, " Board Type : Iphase-ATM-UTP155");
+ else
+ n = sprintf(page, " Board Type : Iphase-ATM-OC3");
+ tmpPtr = page + n;
+ if (iadev->pci_map_size == 0x40000)
+ n += sprintf(tmpPtr, "-1KVC-");
+ else
+ n += sprintf(tmpPtr, "-4KVC-");
+ tmpPtr = page + n;
+ if ((iadev->memType & MEM_SIZE_MASK) == MEM_SIZE_1M)
+ n += sprintf(tmpPtr, "1M \n");
+ else if ((iadev->memType & MEM_SIZE_MASK) == MEM_SIZE_512K)
+ n += sprintf(tmpPtr, "512K\n");
+ else
+ n += sprintf(tmpPtr, "128K\n");
+ return n;
+ }
+ if (!left) {
+ return sprintf(page, " Number of Tx Buffer: %u\n"
+ " Size of Tx Buffer : %u\n"
+ " Number of Rx Buffer: %u\n"
+ " Size of Rx Buffer : %u\n"
+ " Packets Received  : %u\n"
+ " Packets Transmitted: %u\n"
+ " Cells Received : %u\n"
+ " Cells Transmitted : %u\n"
+ " Board Dropped Cells: %u\n"
+ " Board Dropped Pkts : %u\n",
+ iadev->num_tx_desc, iadev->tx_buf_sz,
+ iadev->num_rx_desc, iadev->rx_buf_sz,
+ iadev->rx_pkt_cnt, iadev->tx_pkt_cnt,
+ iadev->rx_cell_cnt, iadev->tx_cell_cnt,
+ iadev->drop_rxcell, iadev->drop_rxpkt);
+ }
+ return 0;
+}
+
+static const struct atmdev_ops ops = {
+ open: ia_open,
+ close: ia_close,
+ ioctl: ia_ioctl,
+ getsockopt: ia_getsockopt,
+ setsockopt: ia_setsockopt,
+ send: ia_send,
+ sg_send: ia_sg_send,
+ phy_put: ia_phy_put,
+ phy_get: ia_phy_get,
+ change_qos: ia_change_qos,
+ proc_read: ia_proc_read
+};
+
+
+#if LINUX_VERSION_CODE >= 0x20312
+int __init ia_detect(void)
+#else
+__initfunc(int ia_detect(void))
+#endif
+{
+ struct atm_dev *dev;
+ IADEV *iadev;
+ unsigned long flags;
+ int index = 0;
+ struct pci_dev *prev_dev;
+ if (!pci_present()) {
+ printk(KERN_ERR DEV_LABEL " driver but no PCI BIOS ?\n");
+ return 0;
+ }
+ iadev = (IADEV *)kmalloc(sizeof(IADEV), GFP_KERNEL);
+ if (!iadev) return -ENOMEM;
+ memset((char*)iadev, 0, sizeof(IADEV));
+ prev_dev = NULL;
+ while((iadev->pci = pci_find_device(PCI_VENDOR_ID_IPHASE,
+ PCI_DEVICE_ID_IPHASE_5575, prev_dev))) {
+ IF_INIT(printk("ia detected at bus:%d dev: %d function:%d\n",
+ iadev->pci->bus->number, PCI_SLOT(iadev->pci->devfn),
+ PCI_FUNC(iadev->pci->devfn));)
+ dev = atm_dev_register(DEV_LABEL, &ops, -1, 0);
+ if (!dev) break;
+ IF_INIT(printk(DEV_LABEL " registered at (itf: %d)\n",
+ dev->number);)
+ INPH_IA_DEV(dev) = iadev;
+ // TODO: multi_board using ia_boards logic in cleanup_module
+ ia_dev[index] = iadev;
+ _ia_dev[index] = dev;
+ IF_INIT(printk("dev_id = 0x%x iadev->LineRate = %d \n",
+ (u32)dev, iadev->LineRate);)
+ iadev_count++;
+ spin_lock_init(&iadev->misc_lock);
+ spin_lock_irqsave(&iadev->misc_lock, flags);
+ if (ia_init(dev) || ia_start(dev)) {
+ atm_dev_deregister(dev);
+ IF_INIT(printk("IA register failed!\n");)
+ ia_dev[index] = NULL;
+ _ia_dev[index] = NULL;
+ iadev_count--;
+ spin_unlock_irqrestore(&iadev->misc_lock, flags);
+ return -EINVAL;
+ }
+ spin_unlock_irqrestore(&iadev->misc_lock, flags);
+ IF_EVENT(printk("iadev_count = %d\n", iadev_count);)
+ prev_dev = iadev->pci;
+ iadev->next_board = ia_boards;
+ ia_boards = dev;
+ iadev = (IADEV *)kmalloc(sizeof(IADEV), GFP_KERNEL);
+ if (!iadev) break;
+ memset((char*)iadev, 0, sizeof(IADEV));
+ index++;
+ dev = NULL;
+ }
+ return index;
+}
+
+
+#ifdef MODULE
+
+int init_module(void)
+{
+ IF_EVENT(printk(">ia init_module\n");)
+ if (!ia_detect()) {
+ printk(KERN_ERR DEV_LABEL ": no adapter found\n");
+ return -ENXIO;
+ }
+ // MOD_INC_USE_COUNT;
+ ia_timer.next = NULL;
+ ia_timer.prev = NULL;
+ ia_timer.expires = jiffies + 3*HZ;
+ ia_timer.data = 0UL;
+ ia_timer.function = ia_led_timer;
+ add_timer(&ia_timer);
+
+ return 0;
+}
+
+
+void cleanup_module(void)
+{
+ struct atm_dev *dev;
+ IADEV *iadev;
+ unsigned short command;
+ int i, j= 0;
+
+ IF_EVENT(printk(">ia cleanup_module\n");)
+ // MOD_DEC_USE_COUNT;
+ if (MOD_IN_USE)
+ printk("ia: module in use\n");
+ del_timer(&ia_timer);
+ while(ia_dev[j])
+ {
+ dev = ia_boards;
+ iadev = INPH_IA_DEV(dev);
+ ia_boards = iadev->next_board;
+
+ /* disable interrupt of lost signal */
+ ia_phy_put(dev, ia_phy_get(dev,0x10) & ~(0x4), 0x10);
+ udelay(1);
+
+ /* De-register device */
+ atm_dev_deregister(dev);
+ IF_EVENT(printk("iav deregistered at (itf:%d)\n", dev->number);)
+ for (i= 0; i< iadev->num_tx_desc; i++)
+ kfree(iadev->tx_buf[i]);
+ kfree(iadev->tx_buf);
+ /* Disable memory mapping and busmastering */
+ if (pci_read_config_word(iadev->pci,
+ PCI_COMMAND, &command) != 0)
+ {
+ printk("ia: can't read PCI_COMMAND.\n");
+ }
+ command &= ~(PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER);
+ if (pci_write_config_word(iadev->pci,
+ PCI_COMMAND, command) != 0)
+ {
+ printk("ia: can't write PCI_COMMAND.\n");
+ }
+ free_irq(iadev->irq, dev);
+ iounmap((void *) iadev->base);
+ kfree(iadev);
+ j++;
+ }
+ /* and voila whatever we tried seems to work. I don't know if it will
+ fix suni errors though. Really doubt that. */
+ for (i = 0; i<8; i++) {
+ ia_dev[i] = NULL;
+ _ia_dev[i] = NULL;
+ }
+}
+
+#endif
+
--- /dev/null
+/******************************************************************************
+ Device driver for Interphase ATM PCI adapter cards
+ Author: Peter Wang <pwang@iphase.com>
+ Interphase Corporation <www.iphase.com>
+ Version: 1.0
+ iphase.h: This is the header file for iphase.c.
+*******************************************************************************
+
+ This software may be used and distributed according to the terms
+ of the GNU General Public License (GPL), incorporated herein by reference.
+ Drivers based on this skeleton fall under the GPL and must retain
+ the authorship (implicit copyright) notice.
+
+ This program is distributed in the hope that it will be useful, but
+ WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ General Public License for more details.
+
+ Modified from an incomplete driver for Interphase 5575 1KVC 1M card which
+ was originally written by Monalisa Agrawal at UNH. Now this driver
+ supports a variety of variants of the Interphase ATM PCI (i)Chip adapter
+ card family (See www.iphase.com/products/ClassSheet.cfm?ClassID=ATM)
+ in terms of PHY type, the size of control memory and the size of
+ packet memory. The following is the change log and history:
+
+ Fix bugs in Monalisa's UBR driver.
+ Modify the basic memory allocation and dma logic.
+ Port the driver to the latest kernel from 2.0.46.
+ Complete the ABR logic of the driver, and add the ABR work-
+ around for the hardware anomalies.
+ Add the CBR support.
+ Add the flow control logic to the driver to allow rate-limited VCs.
+ Add 4K VC support to the board with 512K control memory.
+ Add the support of all the variants of the Interphase ATM PCI
+ (i)Chip adapter cards including x575 (155M OC3 and UTP155), x525
+ (25M UTP25) and x531 (DS3 and E3).
+ Add SMP support.
+
+ Support and updates available at: ftp://ftp.iphase.com/pub/atm
+
+*******************************************************************************/
+
+#ifndef IPHASE_H
+#define IPHASE_H
+
+#include <linux/config.h>
+
+/************************ IADBG DEFINE *********************************/
+/* IADebugFlag Bit Map */
+#define IF_IADBG_INIT_ADAPTER 0x00000001 // init adapter info
+#define IF_IADBG_TX 0x00000002 // debug TX
+#define IF_IADBG_RX 0x00000004 // debug RX
+#define IF_IADBG_QUERY_INFO 0x00000008 // debug Request call
+#define IF_IADBG_SHUTDOWN 0x00000010 // debug shutdown event
+#define IF_IADBG_INTR 0x00000020 // debug interrupt DPC
+#define IF_IADBG_TXPKT 0x00000040 // debug TX PKT
+#define IF_IADBG_RXPKT 0x00000080 // debug RX PKT
+#define IF_IADBG_ERR 0x00000100 // debug system error
+#define IF_IADBG_EVENT 0x00000200 // debug event
+#define IF_IADBG_DIS_INTR 0x00001000 // debug disable interrupt
+#define IF_IADBG_EN_INTR 0x00002000 // debug enable interrupt
+#define IF_IADBG_LOUD 0x00004000 // debugging info
+#define IF_IADBG_VERY_LOUD 0x00008000 // excessive debugging info
+#define IF_IADBG_CBR 0x00100000 //
+#define IF_IADBG_UBR 0x00200000 //
+#define IF_IADBG_ABR 0x00400000 //
+#define IF_IADBG_DESC 0x01000000 //
+#define IF_IADBG_SUNI_STAT 0x02000000 // suni statistics
+#define IF_IADBG_RESET 0x04000000
+
+extern unsigned int IADebugFlag;
+
+#define IF_IADBG(f) if (IADebugFlag & (f))
+
+#ifdef CONFIG_ATM_IA_DEBUG /* Debug build */
+
+#define IF_LOUD(A) IF_IADBG(IF_IADBG_LOUD) { A }
+#define IF_ERR(A) IF_IADBG(IF_IADBG_ERR) { A }
+#define IF_VERY_LOUD(A) IF_IADBG( IF_IADBG_VERY_LOUD ) { A }
+
+#define IF_INIT_ADAPTER(A) IF_IADBG( IF_IADBG_INIT_ADAPTER ) { A }
+#define IF_INIT(A) IF_IADBG( IF_IADBG_INIT_ADAPTER ) { A }
+#define IF_SUNI_STAT(A) IF_IADBG( IF_IADBG_SUNI_STAT ) { A }
+#define IF_QUERY_INFO(A) IF_IADBG( IF_IADBG_QUERY_INFO ) { A }
+#define IF_COPY_OVER(A) IF_IADBG( IF_IADBG_COPY_OVER ) { A }
+
+#define IF_INTR(A) IF_IADBG( IF_IADBG_INTR ) { A }
+#define IF_DIS_INTR(A) IF_IADBG( IF_IADBG_DIS_INTR ) { A }
+#define IF_EN_INTR(A) IF_IADBG( IF_IADBG_EN_INTR ) { A }
+
+#define IF_TX(A) IF_IADBG( IF_IADBG_TX ) { A }
+#define IF_RX(A) IF_IADBG( IF_IADBG_RX ) { A }
+#define IF_TXPKT(A) IF_IADBG( IF_IADBG_TXPKT ) { A }
+#define IF_RXPKT(A) IF_IADBG( IF_IADBG_RXPKT ) { A }
+
+#define IF_SHUTDOWN(A) IF_IADBG(IF_IADBG_SHUTDOWN) { A }
+#define IF_CBR(A) IF_IADBG( IF_IADBG_CBR ) { A }
+#define IF_UBR(A) IF_IADBG( IF_IADBG_UBR ) { A }
+#define IF_ABR(A) IF_IADBG( IF_IADBG_ABR ) { A }
+#define IF_EVENT(A) IF_IADBG( IF_IADBG_EVENT) { A }
+
+#else /* free build */
+#define IF_LOUD(A)
+#define IF_VERY_LOUD(A)
+#define IF_INIT_ADAPTER(A)
+#define IF_INIT(A)
+#define IF_SUNI_STAT(A)
+#define IF_PVC_CHKPKT(A)
+#define IF_QUERY_INFO(A)
+#define IF_COPY_OVER(A)
+#define IF_HANG(A)
+#define IF_INTR(A)
+#define IF_DIS_INTR(A)
+#define IF_EN_INTR(A)
+#define IF_TX(A)
+#define IF_RX(A)
+#define IF_TXDEBUG(A)
+#define IF_VC(A)
+#define IF_ERR(A)
+#define IF_CBR(A)
+#define IF_UBR(A)
+#define IF_ABR(A)
+#define IF_SHUTDOWN(A)
+#define DbgPrint(A)
+#define IF_EVENT(A)
+#define IF_TXPKT(A)
+#define IF_RXPKT(A)
+#endif /* CONFIG_ATM_IA_DEBUG */
+
+#define isprint(a) (((a) >= ' ') && ((a) <= '~'))
+#define ATM_DESC(skb) (skb->protocol)
+#define IA_SKB_STATE(skb) (skb->protocol)
+#define IA_DLED 1
+#define IA_TX_DONE 2
+
+/* iadbg defines */
+#define IA_CMD 0x7749
+typedef struct {
+ int cmd;
+ int sub_cmd;
+ int len;
+ u32 maddr;
+ int status;
+ void *buf;
+} IA_CMDBUF, *PIA_CMDBUF;
+
+/* cmds */
+#define MEMDUMP 0x01
+
+/* sub_cmds */
+#define MEMDUMP_SEGREG 0x2
+#define MEMDUMP_DEV 0x1
+#define MEMDUMP_REASSREG 0x3
+#define MEMDUMP_FFL 0x4
+#define READ_REG 0x5
+#define WAKE_DBG_WAIT 0x6
+
+/************************ IADBG DEFINE END ***************************/
+
+#define Boolean(x) ((x) ? 1 : 0)
+#define NR_VCI 1024 /* number of VCIs */
+#define NR_VCI_LD 10 /* log2(NR_VCI) */
+#define NR_VCI_4K 4096 /* number of VCIs */
+#define NR_VCI_4K_LD 12 /* log2(NR_VCI) */
+#define MEM_VALID 0xfffffff0 /* mask base address with this */
+
+#ifndef PCI_VENDOR_ID_IPHASE
+#define PCI_VENDOR_ID_IPHASE 0x107e
+#endif
+#ifndef PCI_DEVICE_ID_IPHASE_5575
+#define PCI_DEVICE_ID_IPHASE_5575 0x0008
+#endif
+#define DEV_LABEL "ia"
+#define PCR 207692
+#define ICR 100000
+#define MCR 0
+#define TBE 1000
+#define FRTT 1
+#define RIF 2
+#define RDF 4
+#define NRMCODE 5 /* 0 - 7 */
+#define TRMCODE 3 /* 0 - 7 */
+#define CDFCODE 6
+#define ATDFCODE 2 /* 0 - 15 */
+
+/*---------------------- Packet/Cell Memory ------------------------*/
+#define TX_PACKET_RAM 0x00000 /* start of Transmit Packet memory - 0 */
+#define DFL_TX_BUF_SZ 10240 /* 10 K buffers */
+#define DFL_TX_BUFFERS 50 /* number of packet buffers for Tx
+ - descriptor 0 unused */
+#define REASS_RAM_SIZE 0x10000 /* for 64K 1K VC board */
+#define RX_PACKET_RAM 0x80000 /* start of Receive Packet memory - 512K */
+#define DFL_RX_BUF_SZ 10240 /* 10k buffers */
+#define DFL_RX_BUFFERS 50 /* number of packet buffers for Rx
+ - descriptor 0 unused */
+
+struct cpcs_trailer
+{
+ u_short control;
+ u_short length;
+ u_int crc32;
+};
+
+struct ia_vcc
+{
+ int rxing;
+ int txing;
+ int NumCbrEntry;
+ u32 pcr;
+ u32 saved_tx_quota;
+ int flow_inc;
+ struct sk_buff_head txing_skb;
+ int ltimeout;
+ u8 vc_desc_cnt;
+
+};
+
+struct abr_vc_table
+{
+ u_char status;
+ u_char rdf;
+ u_short air;
+ u_int res[3];
+ u_int req_rm_cell_data1;
+ u_int req_rm_cell_data2;
+ u_int add_rm_cell_data1;
+ u_int add_rm_cell_data2;
+};
+
+/* 32 byte entries */
+struct main_vc
+{
+ u_short type;
+#define ABR 0x8000
+#define UBR 0xc000
+#define CBR 0x0000
+ /* ABR fields */
+ u_short nrm;
+ u_short trm;
+ u_short rm_timestamp_hi;
+ u_short rm_timestamp_lo:8,
+ crm:8;
+ u_short remainder; /* ABR and UBR fields - last 10 bits*/
+ u_short next_vc_sched;
+ u_short present_desc; /* all classes */
+ u_short last_cell_slot; /* ABR and UBR */
+ u_short pcr;
+ u_short fraction;
+ u_short icr;
+ u_short atdf;
+ u_short mcr;
+ u_short acr;
+ u_short unack:8,
+ status:8; /* all classes */
+#define UIOLI 0x80
+#define CRC_APPEND 0x40 /* for status field - CRC-32 append */
+#define ABR_STATE 0x02
+
+};
+
+
+/* 8 byte entries */
+struct ext_vc
+{
+ u_short atm_hdr1;
+ u_short atm_hdr2;
+ u_short last_desc;
+ u_short out_of_rate_link; /* reserved for UBR and CBR */
+};
+
+
+#define DLE_ENTRIES 256
+#define DMA_INT_ENABLE 0x0002 /* use for both Tx and Rx */
+#define TX_DLE_PSI 0x0001
+
+/* Descriptor List Entries (DLE) */
+struct dle
+{
+ u32 sys_pkt_addr;
+ u32 local_pkt_addr;
+ u32 bytes;
+ u16 prq_wr_ptr_data;
+ u16 mode;
+};
+
+struct dle_q
+{
+ struct dle *start;
+ struct dle *end;
+ struct dle *read;
+ struct dle *write;
+};
+
+struct free_desc_q
+{
+ int desc; /* Descriptor number */
+ struct free_desc_q *next;
+};
+
+struct tx_buf_desc {
+ unsigned short desc_mode;
+ unsigned short vc_index;
+ unsigned short res1; /* reserved field */
+ unsigned short bytes;
+ unsigned short buf_start_hi;
+ unsigned short buf_start_lo;
+ unsigned short res2[10]; /* reserved field */
+};
+
+
+struct rx_buf_desc {
+ unsigned short desc_mode;
+ unsigned short vc_index;
+ unsigned short vpi;
+ unsigned short bytes;
+ unsigned short buf_start_hi;
+ unsigned short buf_start_lo;
+ unsigned short dma_start_hi;
+ unsigned short dma_start_lo;
+ unsigned short crc_upper;
+ unsigned short crc_lower;
+ unsigned short res:8, timeout:8;
+ unsigned short res2[5]; /* reserved field */
+};
+
+/*--------SAR stuff ---------------------*/
+
+#define EPROM_SIZE 0x40000 /* says 64K in the docs ??? */
+#define MAC1_LEN 4
+#define MAC2_LEN 2
+
+/*------------ PCI Memory Space Map, 128K SAR memory ----------------*/
+#define IPHASE5575_PCI_CONFIG_REG_BASE 0x0000
+#define IPHASE5575_BUS_CONTROL_REG_BASE 0x1000 /* offsets 0x00 - 0x3c */
+#define IPHASE5575_FRAG_CONTROL_REG_BASE 0x2000
+#define IPHASE5575_REASS_CONTROL_REG_BASE 0x3000
+#define IPHASE5575_DMA_CONTROL_REG_BASE 0x4000
+#define IPHASE5575_FRONT_END_REG_BASE IPHASE5575_DMA_CONTROL_REG_BASE
+#define IPHASE5575_FRAG_CONTROL_RAM_BASE 0x10000
+#define IPHASE5575_REASS_CONTROL_RAM_BASE 0x20000
+
+/*------------ Bus interface control registers -----------------*/
+#define IPHASE5575_BUS_CONTROL_REG 0x00
+#define IPHASE5575_BUS_STATUS_REG 0x01 /* actual offset 0x04 */
+#define IPHASE5575_MAC1 0x02
+#define IPHASE5575_REV 0x03
+#define IPHASE5575_MAC2 0x03 /*actual offset 0x0e-reg 0x0c*/
+#define IPHASE5575_EXT_RESET 0x04
+#define IPHASE5575_INT_RESET 0x05 /* addr 1c ?? reg 0x06 */
+#define IPHASE5575_PCI_ADDR_PAGE 0x07 /* reg 0x08, 0x09 ?? */
+#define IPHASE5575_EEPROM_ACCESS 0x0a /* actual offset 0x28 */
+#define IPHASE5575_CELL_FIFO_QUEUE_SZ 0x0b
+#define IPHASE5575_CELL_FIFO_MARK_STATE 0x0c
+#define IPHASE5575_CELL_FIFO_READ_PTR 0x0d
+#define IPHASE5575_CELL_FIFO_WRITE_PTR 0x0e
+#define IPHASE5575_CELL_FIFO_CELLS_AVL 0x0f /* actual offset 0x3c */
+
+/* Bus Interface Control Register bits */
+#define CTRL_FE_RST 0x80000000
+#define CTRL_LED 0x40000000
+#define CTRL_25MBPHY 0x10000000
+#define CTRL_ENCMBMEM 0x08000000
+#define CTRL_ENOFFSEG 0x01000000
+#define CTRL_ERRMASK 0x00400000
+#define CTRL_DLETMASK 0x00100000
+#define CTRL_DLERMASK 0x00080000
+#define CTRL_FEMASK 0x00040000
+#define CTRL_SEGMASK 0x00020000
+#define CTRL_REASSMASK 0x00010000
+#define CTRL_CSPREEMPT 0x00002000
+#define CTRL_B128 0x00000200
+#define CTRL_B64 0x00000100
+#define CTRL_B48 0x00000080
+#define CTRL_B32 0x00000040
+#define CTRL_B16 0x00000020
+#define CTRL_B8 0x00000010
+
+/* Bus Interface Status Register bits */
+#define STAT_CMEMSIZ 0xc0000000
+#define STAT_ADPARCK 0x20000000
+#define STAT_RESVD 0x1fffff80
+#define STAT_ERRINT 0x00000040
+#define STAT_MARKINT 0x00000020
+#define STAT_DLETINT 0x00000010
+#define STAT_DLERINT 0x00000008
+#define STAT_FEINT 0x00000004
+#define STAT_SEGINT 0x00000002
+#define STAT_REASSINT 0x00000001
+
+
+/*--------------- Segmentation control registers -----------------*/
+/* The segmentation registers are accessed 16 bits at a time, and the
+ addresses are defined accordingly, so they are the actual "offsets" */
+#define IDLEHEADHI 0x00
+#define IDLEHEADLO 0x01
+#define MAXRATE 0x02
+/* Values for MAXRATE register for 155Mbps and 25.6 Mbps operation */
+#define RATE155 0x64b1 // 16 bits float format
+#define MAX_ATM_155 352768 // Cells/second p.118
+#define RATE25 0x5f9d
+
+#define STPARMS 0x03
+#define STPARMS_1K 0x008c
+#define STPARMS_2K 0x0049
+#define STPARMS_4K 0x0026
+#define COMP_EN 0x4000
+#define CBR_EN 0x2000
+#define ABR_EN 0x0800
+#define UBR_EN 0x0400
+
+#define ABRUBR_ARB 0x04
+#define RM_TYPE 0x05
+/* Value for RM_TYPE register for ATM Forum Traffic Management 4.0 support */
+#define RM_TYPE_4_0 0x0100
+
+#define SEG_COMMAND_REG 0x17
+/* Values for the command register */
+#define RESET_SEG 0x0055
+#define RESET_SEG_STATE 0x00aa
+#define RESET_TX_CELL_CTR 0x00cc
+
+#define CBR_PTR_BASE 0x20
+#define ABR_SBPTR_BASE 0x22
+#define UBR_SBPTR_BASE 0x23
+#define ABRWQ_BASE 0x26
+#define UBRWQ_BASE 0x27
+#define VCT_BASE 0x28
+#define VCTE_BASE 0x29
+#define CBR_TAB_BEG 0x2c
+#define CBR_TAB_END 0x2d
+#define PRQ_ST_ADR 0x30
+#define PRQ_ED_ADR 0x31
+#define PRQ_RD_PTR 0x32
+#define PRQ_WR_PTR 0x33
+#define TCQ_ST_ADR 0x34
+#define TCQ_ED_ADR 0x35
+#define TCQ_RD_PTR 0x36
+#define TCQ_WR_PTR 0x37
+#define SEG_QUEUE_BASE 0x40
+#define SEG_DESC_BASE 0x41
+#define MODE_REG_0 0x45
+#define T_ONLINE 0x0002 /* (i)chipSAR is online */
+
+#define MODE_REG_1 0x46
+#define MODE_REG_1_VAL 0x0400 /* for proper device operation */
+
+#define SEG_INTR_STATUS_REG 0x47
+#define SEG_MASK_REG 0x48
+#define TRANSMIT_DONE 0x0200
+#define TCQ_NOT_EMPTY 0x1000 /* this can be used for both the interrupt
+ status registers as well as the mask register */
+
+#define CELL_CTR_HIGH_AUTO 0x49
+#define CELL_CTR_HIGH_NOAUTO 0xc9
+#define CELL_CTR_LO_AUTO 0x4a
+#define CELL_CTR_LO_NOAUTO 0xca
+
+/* Diagnostic registers */
+#define NEXTDESC 0x59
+#define NEXTVC 0x5a
+#define PSLOTCNT 0x5d
+#define NEWDN 0x6a
+#define NEWVC 0x6b
+#define SBPTR 0x6c
+#define ABRWQ_WRPTR 0x6f
+#define ABRWQ_RDPTR 0x70
+#define UBRWQ_WRPTR 0x71
+#define UBRWQ_RDPTR 0x72
+#define CBR_VC 0x73
+#define ABR_SBVC 0x75
+#define UBR_SBVC 0x76
+#define ABRNEXTLINK 0x78
+#define UBRNEXTLINK 0x79
+
+
+/*----------------- Reassembly control registers ---------------------*/
+/* The reassembly registers are accessed 16 bits at a time, and the
+ addresses are defined accordingly, so they are the actual "offsets" */
+#define MODE_REG 0x00
+#define R_ONLINE 0x0002 /* (i)chip is online */
+#define IGN_RAW_FL 0x0004
+
+#define PROTOCOL_ID 0x01
+#define REASS_MASK_REG 0x02
+#define REASS_INTR_STATUS_REG 0x03
+/* Interrupt Status register bits */
+#define RX_PKT_CTR_OF 0x8000
+#define RX_ERR_CTR_OF 0x4000
+#define RX_CELL_CTR_OF 0x1000
+#define RX_FREEQ_EMPT 0x0200
+#define RX_EXCPQ_FL 0x0080
+#define RX_RAWQ_FL 0x0010
+#define RX_EXCP_RCVD 0x0008
+#define RX_PKT_RCVD 0x0004
+#define RX_RAW_RCVD 0x0001
+
+#define DRP_PKT_CNTR 0x04
+#define ERR_CNTR 0x05
+#define RAW_BASE_ADR 0x08
+#define CELL_CTR0 0x0c
+#define CELL_CTR1 0x0d
+#define REASS_COMMAND_REG 0x0f
+/* Values for command register */
+#define RESET_REASS 0x0055
+#define RESET_REASS_STATE 0x00aa
+#define RESET_DRP_PKT_CNTR 0x00f1
+#define RESET_ERR_CNTR 0x00f2
+#define RESET_CELL_CNTR 0x00f8
+#define RESET_REASS_ALL_REGS 0x00ff
+
+#define REASS_DESC_BASE 0x10
+#define VC_LKUP_BASE 0x11
+#define REASS_TABLE_BASE 0x12
+#define REASS_QUEUE_BASE 0x13
+#define PKT_TM_CNT 0x16
+#define TMOUT_RANGE 0x17
+#define INTRVL_CNTR 0x18
+#define TMOUT_INDX 0x19
+#define VP_LKUP_BASE 0x1c
+#define VP_FILTER 0x1d
+#define ABR_LKUP_BASE 0x1e
+#define FREEQ_ST_ADR 0x24
+#define FREEQ_ED_ADR 0x25
+#define FREEQ_RD_PTR 0x26
+#define FREEQ_WR_PTR 0x27
+#define PCQ_ST_ADR 0x28
+#define PCQ_ED_ADR 0x29
+#define PCQ_RD_PTR 0x2a
+#define PCQ_WR_PTR 0x2b
+#define EXCP_Q_ST_ADR 0x2c
+#define EXCP_Q_ED_ADR 0x2d
+#define EXCP_Q_RD_PTR 0x2e
+#define EXCP_Q_WR_PTR 0x2f
+#define CC_FIFO_ST_ADR 0x34
+#define CC_FIFO_ED_ADR 0x35
+#define CC_FIFO_RD_PTR 0x36
+#define CC_FIFO_WR_PTR 0x37
+#define STATE_REG 0x38
+#define BUF_SIZE 0x42
+#define XTRA_RM_OFFSET 0x44
+#define DRP_PKT_CNTR_NC 0x84
+#define ERR_CNTR_NC 0x85
+#define CELL_CNTR0_NC 0x8c
+#define CELL_CNTR1_NC 0x8d
+
+/* State Register bits */
+#define EXCPQ_EMPTY 0x0040
+#define PCQ_EMPTY 0x0010
+#define FREEQ_EMPTY 0x0004
+
+
+/*----------------- Front End registers/ DMA control --------------*/
+/* There are a number of documentation errors regarding these offsets,
+ e.g. two offsets (0x800 and 0xa00) are given for the rx counter,
+ and similarly for many others.
+ Remember again that each byte offset is 4 * the register number, so
+ the #defines here use register numbers rather than byte offsets.
+*/
+#define IPHASE5575_TX_COUNTER 0x200 /* offset - 0x800 */
+#define IPHASE5575_RX_COUNTER 0x280 /* offset - 0xa00 */
+#define IPHASE5575_TX_LIST_ADDR 0x300 /* offset - 0xc00 */
+#define IPHASE5575_RX_LIST_ADDR 0x380 /* offset - 0xe00 */
+
+/*--------------------------- RAM ---------------------------*/
+/* These memory maps are actually offsets from the segmentation and reassembly RAM base addresses */
+
+/* Segmentation Control Memory map */
+#define TX_DESC_BASE 0x0000 /* Buffer Descriptor Table */
+#define TX_COMP_Q 0x1000 /* Transmit Complete Queue */
+#define PKT_RDY_Q 0x1400 /* Packet Ready Queue */
+#define CBR_SCHED_TABLE 0x1800 /* CBR Table */
+#define UBR_SCHED_TABLE 0x3000 /* UBR Table */
+#define UBR_WAIT_Q 0x4000 /* UBR Wait Queue */
+#define ABR_SCHED_TABLE 0x5000 /* ABR Table */
+#define ABR_WAIT_Q 0x5800 /* ABR Wait Queue */
+#define EXT_VC_TABLE 0x6000 /* Extended VC Table */
+#define MAIN_VC_TABLE 0x8000 /* Main VC Table */
+#define SCHEDSZ 1024 /* ABR and UBR Scheduling Table size */
+#define TX_DESC_TABLE_SZ 128 /* Number of entries in the Transmit
+ Buffer Descriptor Table */
+
+/* These are used as table offsets in Descriptor Table address generation */
+#define DESC_MODE 0x0
+#define VC_INDEX 0x1
+#define BYTE_CNT 0x3
+#define PKT_START_HI 0x4
+#define PKT_START_LO 0x5
+
+/* Descriptor Mode Word Bits */
+#define EOM_EN 0x0800
+#define AAL5 0x0100
+#define APP_CRC32 0x0400
+#define CMPL_INT 0x1000
+
+#define TABLE_ADDRESS(db, dn, to) \
+ ((((unsigned long)((db) & 0x04)) << 16) | ((dn) << 5) | ((to) << 1))
+
+/* Reassembly Control Memory Map */
+#define RX_DESC_BASE 0x0000 /* Buffer Descriptor Table */
+#define VP_TABLE 0x5c00 /* VP Table */
+#define EXCEPTION_Q 0x5e00 /* Exception Queue */
+#define FREE_BUF_DESC_Q 0x6000 /* Free Buffer Descriptor Queue */
+#define PKT_COMP_Q 0x6800 /* Packet Complete Queue */
+#define REASS_TABLE 0x7000 /* Reassembly Table */
+#define RX_VC_TABLE 0x7800 /* VC Table */
+#define ABR_VC_TABLE 0x8000 /* ABR VC Table */
+#define RX_DESC_TABLE_SZ 736 /* Number of entries in the Receive
+ Buffer Descriptor Table */
+#define VP_TABLE_SZ 256 /* Number of entries in VPTable */
+#define RX_VC_TABLE_SZ 1024 /* Number of entries in VC Table */
+#define REASS_TABLE_SZ 1024 /* Number of entries in Reassembly Table */
+ /* Buffer Descriptor Table */
+#define RX_ACT 0x8000
+#define RX_VPVC 0x4000
+#define RX_CNG 0x0040
+#define RX_CER 0x0008
+#define RX_PTE 0x0004
+#define RX_OFL 0x0002
+#define NUM_RX_EXCP 32
+
+/* Reassembly Table */
+#define NO_AAL5_PKT 0x0000
+#define AAL5_PKT_REASSEMBLED 0x4000
+#define AAL5_PKT_TERMINATED 0x8000
+#define RAW_PKT 0xc000
+#define REASS_ABR 0x2000
+
+/*-------------------- Base Registers --------------------*/
+#define REG_BASE IPHASE5575_BUS_CONTROL_REG_BASE
+#define RAM_BASE IPHASE5575_FRAG_CONTROL_RAM_BASE
+#define PHY_BASE IPHASE5575_FRONT_END_REG_BASE
+#define SEG_BASE IPHASE5575_FRAG_CONTROL_REG_BASE
+#define REASS_BASE IPHASE5575_REASS_CONTROL_REG_BASE
+
+typedef volatile u_int freg_t;
+typedef u_int rreg_t;
+
+typedef struct _ffredn_t {
+ freg_t idlehead_high; /* Idle cell header (high) */
+ freg_t idlehead_low; /* Idle cell header (low) */
+ freg_t maxrate; /* Maximum rate */
+ freg_t stparms; /* Traffic Management Parameters */
+ freg_t abrubr_abr; /* ABRUBR Priority Byte 1, TCR Byte 0 */
+ freg_t rm_type; /* */
+ u_int filler5[0x17 - 0x06];
+ freg_t cmd_reg; /* Command register */
+ u_int filler18[0x20 - 0x18];
+ freg_t cbr_base; /* CBR Pointer Base */
+ freg_t vbr_base; /* VBR Pointer Base */
+ freg_t abr_base; /* ABR Pointer Base */
+ freg_t ubr_base; /* UBR Pointer Base */
+ u_int filler24;
+ freg_t vbrwq_base; /* VBR Wait Queue Base */
+ freg_t abrwq_base; /* ABR Wait Queue Base */
+ freg_t ubrwq_base; /* UBR Wait Queue Base */
+ freg_t vct_base; /* Main VC Table Base */
+ freg_t vcte_base; /* Extended Main VC Table Base */
+ u_int filler2a[0x2C - 0x2A];
+ freg_t cbr_tab_beg; /* CBR Table Begin */
+ freg_t cbr_tab_end; /* CBR Table End */
+ freg_t cbr_pointer; /* CBR Pointer */
+ u_int filler2f[0x30 - 0x2F];
+ freg_t prq_st_adr; /* Packet Ready Queue Start Address */
+ freg_t prq_ed_adr; /* Packet Ready Queue End Address */
+ freg_t prq_rd_ptr; /* Packet Ready Queue read pointer */
+ freg_t prq_wr_ptr; /* Packet Ready Queue write pointer */
+ freg_t tcq_st_adr; /* Transmit Complete Queue Start Address*/
+ freg_t tcq_ed_adr; /* Transmit Complete Queue End Address */
+ freg_t tcq_rd_ptr; /* Transmit Complete Queue read pointer */
+ freg_t tcq_wr_ptr; /* Transmit Complete Queue write pointer*/
+ u_int filler38[0x40 - 0x38];
+ freg_t queue_base; /* Base address for PRQ and TCQ */
+ freg_t desc_base; /* Base address of descriptor table */
+ u_int filler42[0x45 - 0x42];
+ freg_t mode_reg_0; /* Mode register 0 */
+ freg_t mode_reg_1; /* Mode register 1 */
+ freg_t intr_status_reg;/* Interrupt Status register */
+ freg_t mask_reg; /* Mask Register */
+ freg_t cell_ctr_high1; /* Total cell transfer count (high) */
+ freg_t cell_ctr_lo1; /* Total cell transfer count (low) */
+ freg_t state_reg; /* Status register */
+ u_int filler4c[0x58 - 0x4c];
+ freg_t curr_desc_num; /* Contains the current descriptor num */
+ freg_t next_desc; /* Next descriptor */
+ freg_t next_vc; /* Next VC */
+ u_int filler5b[0x5d - 0x5b];
+ freg_t present_slot_cnt;/* Present slot count */
+ u_int filler5e[0x6a - 0x5e];
+ freg_t new_desc_num; /* New descriptor number */
+ freg_t new_vc; /* New VC */
+ freg_t sched_tbl_ptr; /* Schedule table pointer */
+ freg_t vbrwq_wptr; /* VBR wait queue write pointer */
+ freg_t vbrwq_rptr; /* VBR wait queue read pointer */
+ freg_t abrwq_wptr; /* ABR wait queue write pointer */
+ freg_t abrwq_rptr; /* ABR wait queue read pointer */
+ freg_t ubrwq_wptr; /* UBR wait queue write pointer */
+ freg_t ubrwq_rptr; /* UBR wait queue read pointer */
+ freg_t cbr_vc; /* CBR VC */
+ freg_t vbr_sb_vc; /* VBR SB VC */
+ freg_t abr_sb_vc; /* ABR SB VC */
+ freg_t ubr_sb_vc; /* UBR SB VC */
+ freg_t vbr_next_link; /* VBR next link */
+ freg_t abr_next_link; /* ABR next link */
+ freg_t ubr_next_link; /* UBR next link */
+ u_int filler7a[0x7c-0x7a];
+ freg_t out_rate_head; /* Out of rate head */
+ u_int filler7d[0xca-0x7d]; /* pad out to full address space */
+ freg_t cell_ctr_high1_nc;/* Total cell transfer count (high) */
+ freg_t cell_ctr_lo1_nc;/* Total cell transfer count (low) */
+ u_int fillercc[0x100-0xcc]; /* pad out to full address space */
+} ffredn_t;
+
+typedef struct _rfredn_t {
+ rreg_t mode_reg_0; /* Mode register 0 */
+ rreg_t protocol_id; /* Protocol ID */
+ rreg_t mask_reg; /* Mask Register */
+ rreg_t intr_status_reg;/* Interrupt status register */
+ rreg_t drp_pkt_cntr; /* Dropped packet cntr (clear on read) */
+ rreg_t err_cntr; /* Error Counter (cleared on read) */
+ u_int filler6[0x08 - 0x06];
+ rreg_t raw_base_adr; /* Base addr for raw cell Q */
+ u_int filler2[0x0c - 0x09];
+ rreg_t cell_ctr0; /* Cell Counter 0 (cleared when read) */
+ rreg_t cell_ctr1; /* Cell Counter 1 (cleared when read) */
+ u_int filler3[0x0f - 0x0e];
+ rreg_t cmd_reg; /* Command register */
+ rreg_t desc_base; /* Base address for description table */
+ rreg_t vc_lkup_base; /* Base address for VC lookup table */
+ rreg_t reass_base; /* Base address for reassembler table */
+ rreg_t queue_base; /* Base address for Communication queue */
+ u_int filler14[0x16 - 0x14];
+ rreg_t pkt_tm_cnt; /* Packet Timeout and count register */
+ rreg_t tmout_range; /* Range of reassembly IDs for timeout */
+ rreg_t intrvl_cntr; /* Packet aging interval counter */
+ rreg_t tmout_indx; /* index of pkt being tested for aging */
+ u_int filler1a[0x1c - 0x1a];
+ rreg_t vp_lkup_base; /* Base address for VP lookup table */
+ rreg_t vp_filter; /* VP filter register */
+ rreg_t abr_lkup_base; /* Base address of ABR VC Table */
+ u_int filler1f[0x24 - 0x1f];
+ rreg_t fdq_st_adr; /* Free desc queue start address */
+ rreg_t fdq_ed_adr; /* Free desc queue end address */
+ rreg_t fdq_rd_ptr; /* Free desc queue read pointer */
+ rreg_t fdq_wr_ptr; /* Free desc queue write pointer */
+ rreg_t pcq_st_adr; /* Packet Complete queue start address */
+ rreg_t pcq_ed_adr; /* Packet Complete queue end address */
+ rreg_t pcq_rd_ptr; /* Packet Complete queue read pointer */
+ rreg_t pcq_wr_ptr; /* Packet Complete queue write pointer */
+ rreg_t excp_st_adr; /* Exception queue start address */
+ rreg_t excp_ed_adr; /* Exception queue end address */
+ rreg_t excp_rd_ptr; /* Exception queue read pointer */
+ rreg_t excp_wr_ptr; /* Exception queue write pointer */
+ u_int filler30[0x34 - 0x30];
+ rreg_t raw_st_adr; /* Raw Cell start address */
+ rreg_t raw_ed_adr; /* Raw Cell end address */
+ rreg_t raw_rd_ptr; /* Raw Cell read pointer */
+ rreg_t raw_wr_ptr; /* Raw Cell write pointer */
+ rreg_t state_reg; /* State Register */
+ u_int filler39[0x42 - 0x39];
+ rreg_t buf_size; /* Buffer size */
+ u_int filler43;
+ rreg_t xtra_rm_offset; /* Offset of the additional turnaround RM */
+ u_int filler45[0x84 - 0x45];
+ rreg_t drp_pkt_cntr_nc;/* Dropped Packet cntr, Not clear on rd */
+ rreg_t err_cntr_nc; /* Error Counter, Not clear on read */
+ u_int filler86[0x8c - 0x86];
+ rreg_t cell_ctr0_nc; /* Cell Counter 0, Not clear on read */
+ rreg_t cell_ctr1_nc; /* Cell Counter 1, Not clear on read */
+ u_int filler8e[0x100-0x8e]; /* pad out to full address space */
+} rfredn_t;
+
+typedef struct {
+ /* Atlantic */
+ ffredn_t ffredn; /* F FRED */
+ rfredn_t rfredn; /* R FRED */
+} ia_regs_t;
+
+typedef struct {
+ u_short f_vc_type; /* VC type */
+ u_short f_nrm; /* Nrm */
+ u_short f_nrmexp; /* Nrm Exp */
+ u_short reserved6; /* */
+ u_short f_crm; /* Crm */
+ u_short reserved10; /* Reserved */
+ u_short reserved12; /* Reserved */
+ u_short reserved14; /* Reserved */
+ u_short last_cell_slot; /* last_cell_slot_count */
+ u_short f_pcr; /* Peak Cell Rate */
+ u_short fraction; /* fraction */
+ u_short f_icr; /* Initial Cell Rate */
+ u_short f_cdf; /* */
+ u_short f_mcr; /* Minimum Cell Rate */
+ u_short f_acr; /* Allowed Cell Rate */
+ u_short f_status; /* */
+} f_vc_abr_entry;
+
+typedef struct {
+ u_short r_status_rdf; /* status + RDF */
+ u_short r_air; /* AIR */
+ u_short reserved4[14]; /* Reserved */
+} r_vc_abr_entry;
+
+#define MRM 3
+#define MIN(x,y) (((x) < (y)) ? (x) : (y))
+
+typedef struct srv_cls_param {
+ u32 class_type; /* CBR/VBR/ABR/UBR; use the enum above */
+ u32 pcr; /* Peak Cell Rate (24-bit) */
+ /* VBR parameters */
+ u32 scr; /* sustainable cell rate */
+ u32 max_burst_size; /* ?? cell rate or data rate */
+
+ /* ABR only UNI 4.0 Parameters */
+ u32 mcr; /* Min Cell Rate (24-bit) */
+ u32 icr; /* Initial Cell Rate (24-bit) */
+ u32 tbe; /* Transient Buffer Exposure (24-bit) */
+ u32 frtt; /* Fixed Round Trip Time (24-bit) */
+
+#if 0 /* Additional Parameters of TM 4.0 */
+bits 31 30 29 28 27-25 24-22 21-19 18-9
+-----------------------------------------------------------------------------
+| NRM present | TRM prsnt | CDF prsnt | ADTF prsnt | NRM | TRM | CDF | ADTF |
+-----------------------------------------------------------------------------
+#endif /* 0 */
+
+ u8 nrm; /* Max # of Cells for each forward RM
+ cell (3-bit) */
+ u8 trm; /* Time between forward RM cells (3-bit) */
+ u16 adtf; /* ACR Decrease Time Factor (10-bit) */
+ u8 cdf; /* Cutoff Decrease Factor (3-bit) */
+ u8 rif; /* Rate Increment Factor (4-bit) */
+ u8 rdf; /* Rate Decrease Factor (4-bit) */
+ u8 reserved; /* 8 bits to keep structure word aligned */
+} srv_cls_param_t;
+
+struct testTable_t {
+ u16 lastTime;
+ u16 fract;
+ u8 vc_status;
+};
+
+typedef struct {
+ u16 vci;
+ u16 error;
+} RX_ERROR_Q;
+
+typedef struct {
+ u8 active: 1;
+ u8 abr: 1;
+ u8 ubr: 1;
+ u8 cnt: 5;
+#define VC_ACTIVE 0x01
+#define VC_ABR 0x02
+#define VC_UBR 0x04
+} vcstatus_t;
+
+struct ia_rfL_t {
+ u32 fdq_st; /* Free desc queue start address */
+ u32 fdq_ed; /* Free desc queue end address */
+ u32 fdq_rd; /* Free desc queue read pointer */
+ u32 fdq_wr; /* Free desc queue write pointer */
+ u32 pcq_st; /* Packet Complete queue start address */
+ u32 pcq_ed; /* Packet Complete queue end address */
+ u32 pcq_rd; /* Packet Complete queue read pointer */
+ u32 pcq_wr; /* Packet Complete queue write pointer */
+};
+
+struct ia_ffL_t {
+ u32 prq_st; /* Packet Ready Queue Start Address */
+ u32 prq_ed; /* Packet Ready Queue End Address */
+ u32 prq_wr; /* Packet Ready Queue write pointer */
+ u32 tcq_st; /* Transmit Complete Queue Start Address*/
+ u32 tcq_ed; /* Transmit Complete Queue End Address */
+ u32 tcq_rd; /* Transmit Complete Queue read pointer */
+};
+
+struct desc_tbl_t {
+ u32 timestamp;
+ struct ia_vcc *iavcc;
+ struct sk_buff *txskb;
+};
+
+typedef struct ia_rtn_q {
+ struct desc_tbl_t data;
+ struct ia_rtn_q *next, *tail;
+} IARTN_Q;
+
+#define SUNI_LOSV 0x04
+typedef struct {
+ u32 suni_master_reset; /* SUNI Master Reset and Identity */
+ u32 suni_master_config; /* SUNI Master Configuration */
+ u32 suni_master_intr_stat; /* SUNI Master Interrupt Status */
+ u32 suni_reserved1; /* Reserved */
+ u32 suni_master_clk_monitor;/* SUNI Master Clock Monitor */
+ u32 suni_master_control; /* SUNI Master Control */
+ u32 suni_reserved2[10]; /* Reserved */
+
+ u32 suni_rsop_control; /* RSOP Control/Interrupt Enable */
+ u32 suni_rsop_status; /* RSOP Status/Interrupt States */
+ u32 suni_rsop_section_bip8l;/* RSOP Section BIP-8 LSB */
+ u32 suni_rsop_section_bip8m;/* RSOP Section BIP-8 MSB */
+
+ u32 suni_tsop_control; /* TSOP Control */
+ u32 suni_tsop_diag; /* TSOP Diagnostics */
+ u32 suni_tsop_reserved[2]; /* TSOP Reserved */
+
+ u32 suni_rlop_cs; /* RLOP Control/Status */
+ u32 suni_rlop_intr; /* RLOP Interrupt Enable/Status */
+ u32 suni_rlop_line_bip24l; /* RLOP Line BIP-24 LSB */
+ u32 suni_rlop_line_bip24; /* RLOP Line BIP-24 */
+ u32 suni_rlop_line_bip24m; /* RLOP Line BIP-24 MSB */
+ u32 suni_rlop_line_febel; /* RLOP Line FEBE LSB */
+ u32 suni_rlop_line_febe; /* RLOP Line FEBE */
+ u32 suni_rlop_line_febem; /* RLOP Line FEBE MSB */
+
+ u32 suni_tlop_control; /* TLOP Control */
+ u32 suni_tlop_disg; /* TLOP Diagnostics */
+ u32 suni_tlop_reserved[14]; /* TLOP Reserved */
+
+ u32 suni_rpop_cs; /* RPOP Status/Control */
+ u32 suni_rpop_intr; /* RPOP Interrupt/Status */
+ u32 suni_rpop_reserved; /* RPOP Reserved */
+ u32 suni_rpop_intr_ena; /* RPOP Interrupt Enable */
+ u32 suni_rpop_reserved1[3]; /* RPOP Reserved */
+ u32 suni_rpop_path_sig; /* RPOP Path Signal Label */
+ u32 suni_rpop_bip8l; /* RPOP Path BIP-8 LSB */
+ u32 suni_rpop_bip8m; /* RPOP Path BIP-8 MSB */
+ u32 suni_rpop_febel; /* RPOP Path FEBE LSB */
+ u32 suni_rpop_febem; /* RPOP Path FEBE MSB */
+ u32 suni_rpop_reserved2[4]; /* RPOP Reserved */
+
+ u32 suni_tpop_cntrl_daig; /* TPOP Control/Diagnostics */
+ u32 suni_tpop_pointer_ctrl; /* TPOP Pointer Control */
+ u32 suni_tpop_sourcer_ctrl; /* TPOP Source Control */
+ u32 suni_tpop_reserved1[2]; /* TPOP Reserved */
+ u32 suni_tpop_arb_prtl; /* TPOP Arbitrary Pointer LSB */
+ u32 suni_tpop_arb_prtm; /* TPOP Arbitrary Pointer MSB */
+ u32 suni_tpop_reserved2; /* TPOP Reserved */
+ u32 suni_tpop_path_sig; /* TPOP Path Signal Label */
+ u32 suni_tpop_path_status; /* TPOP Path Status */
+ u32 suni_tpop_reserved3[6]; /* TPOP Reserved */
+
+ u32 suni_racp_cs; /* RACP Control/Status */
+ u32 suni_racp_intr; /* RACP Interrupt Enable/Status */
+ u32 suni_racp_hdr_pattern; /* RACP Match Header Pattern */
+ u32 suni_racp_hdr_mask; /* RACP Match Header Mask */
+ u32 suni_racp_corr_hcs; /* RACP Correctable HCS Error Count */
+ u32 suni_racp_uncorr_hcs; /* RACP Uncorrectable HCS Error Count */
+ u32 suni_racp_reserved[10]; /* RACP Reserved */
+
+ u32 suni_tacp_control; /* TACP Control */
+ u32 suni_tacp_idle_hdr_pat; /* TACP Idle Cell Header Pattern */
+ u32 suni_tacp_idle_pay_pay; /* TACP Idle Cell Payload Octet Pattern */
+ u32 suni_tacp_reserved[5]; /* TACP Reserved */
+
+ u32 suni_reserved3[24]; /* Reserved */
+
+ u32 suni_master_test; /* SUNI Master Test */
+ u32 suni_reserved_test; /* SUNI Reserved for Test */
+} IA_SUNI;
+
+
+typedef struct _SUNI_STATS_
+{
+ u32 valid; // 1 = oc3 PHY card
+ u32 carrier_detect; // GPIN input
+ // RSOP: receive section overhead processor
+ u16 rsop_oof_state; // 1 = out of frame
+ u16 rsop_lof_state; // 1 = loss of frame
+ u16 rsop_los_state; // 1 = loss of signal
+ u32 rsop_los_count; // loss of signal count
+ u32 rsop_bse_count; // section BIP-8 error count
+ // RLOP: receive line overhead processor
+ u16 rlop_ferf_state; // 1 = far end receive failure
+ u16 rlop_lais_state; // 1 = line AIS
+ u32 rlop_lbe_count; // BIP-24 count
+ u32 rlop_febe_count; // FEBE count;
+ // RPOP: receive path overhead processor
+ u16 rpop_lop_state; // 1 = LOP
+ u16 rpop_pais_state; // 1 = path AIS
+ u16 rpop_pyel_state; // 1 = path yellow alert
+ u32 rpop_bip_count; // path BIP-8 error count
+ u32 rpop_febe_count; // path FEBE error count
+ u16 rpop_psig; // path signal label value
+ // RACP: receive ATM cell processor
+ u16 racp_hp_state; // hunt/presync state
+ u32 racp_fu_count; // FIFO underrun count
+ u32 racp_fo_count; // FIFO overrun count
+ u32 racp_chcs_count; // correctable HCS error count
+ u32 racp_uchcs_count; // uncorrectable HCS error count
+} IA_SUNI_STATS;
+
+typedef struct iadev_t {
+ /*-----base pointers into (i)chipSAR+ address space */
+ u32 *phy; /* base pointer into phy(SUNI) */
+ u32 *dma; /* base pointer into DMA control
+ registers */
+ u32 *reg; /* base pointer to SAR registers
+ - Bus Interface Control Regs */
+ u32 *seg_reg; /* base pointer to segmentation engine
+ internal registers */
+ u32 *reass_reg; /* base pointer to reassembly engine
+ internal registers */
+ u32 *ram; /* base pointer to SAR RAM */
+ unsigned int seg_ram;
+ unsigned int reass_ram;
+ struct dle_q tx_dle_q;
+ struct free_desc_q *tx_free_desc_qhead;
+ struct sk_buff_head tx_dma_q, tx_backlog;
+ spinlock_t tx_lock;
+ IARTN_Q tx_return_q;
+ u32 close_pending;
+#if LINUX_VERSION_CODE >= 0x20303
+ wait_queue_head_t close_wait;
+ wait_queue_head_t timeout_wait;
+#else
+ struct wait_queue *close_wait;
+ struct wait_queue *timeout_wait;
+#endif
+ caddr_t *tx_buf;
+ u16 num_tx_desc, tx_buf_sz, rate_limit;
+ u32 tx_cell_cnt, tx_pkt_cnt;
+ u32 MAIN_VC_TABLE_ADDR, EXT_VC_TABLE_ADDR, ABR_SCHED_TABLE_ADDR;
+ struct dle_q rx_dle_q;
+ struct free_desc_q *rx_free_desc_qhead;
+ struct sk_buff_head rx_dma_q;
+ spinlock_t rx_lock, misc_lock;
+ struct atm_vcc **rx_open; /* list of all open VCs */
+ u16 num_rx_desc, rx_buf_sz, rxing;
+ u32 rx_pkt_ram, rx_tmp_cnt, rx_tmp_jif;
+ u32 RX_DESC_BASE_ADDR;
+ u32 drop_rxpkt, drop_rxcell, rx_cell_cnt, rx_pkt_cnt;
+ struct atm_dev *next_board; /* other iphase devices */
+ struct pci_dev *pci;
+ int mem;
+ unsigned long base_diff; /* virtual - real base address */
+ unsigned int real_base, base; /* real and virtual base address */
+ unsigned int pci_map_size; /*pci map size of board */
+ unsigned char irq;
+ unsigned char bus;
+ unsigned char dev_fn;
+ u_short phy_type;
+ u_short num_vc, memSize, memType;
+ struct ia_ffL_t ffL;
+ struct ia_rfL_t rfL;
+ /* Suni stat */
+ // IA_SUNI_STATS suni_stats;
+ unsigned char carrier_detect;
+ /* CBR related */
+ // transmit DMA & Receive
+ unsigned int tx_dma_cnt; // number of elements on dma queue
+ unsigned int rx_dma_cnt; // number of elements on rx dma queue
+ unsigned int NumEnabledCBR; // number of CBR VCIs enabled
+ // receive MARK for Cell FIFO
+ unsigned int rx_mark_cnt; // number of elements on mark queue
+ unsigned int CbrTotEntries; // Total CBR Entries in Scheduling Table.
+ unsigned int CbrRemEntries; // Remaining CBR Entries in Scheduling Table.
+ unsigned int CbrEntryPt; // CBR Sched Table Entry Point.
+ unsigned int Granularity; // CBR Granularity given Table Size.
+ /* ABR related */
+ unsigned int sum_mcr, sum_cbr, LineRate;
+ unsigned int n_abr;
+ struct desc_tbl_t *desc_tbl;
+ u_short host_tcq_wr;
+ struct testTable_t **testTable;
+} IADEV;
+
+
+#define INPH_IA_DEV(d) ((IADEV *) (d)->dev_data)
+#define INPH_IA_VCC(v) ((struct ia_vcc *) (v)->dev_data)
+
+/******************* IDT77105 25MB/s PHY DEFINE *****************************/
+typedef struct {
+ u_int mb25_master_ctrl; /* Master control */
+ u_int mb25_intr_status; /* Interrupt status */
+ u_int mb25_diag_control; /* Diagnostic control */
+ u_int mb25_led_hec; /* LED driver and HEC status/control */
+ u_int mb25_low_byte_counter; /* Low byte counter */
+ u_int mb25_high_byte_counter; /* High byte counter */
+} ia_mb25_t;
+
+/*
+ * Master Control
+ */
+#define MB25_MC_UPLO 0x80 /* UPLO */
+#define MB25_MC_DREC 0x40 /* Discard receive cell errors */
+#define MB25_MC_ECEIO 0x20 /* Enable Cell Error Interrupts Only */
+#define MB25_MC_TDPC 0x10 /* Transmit data parity check */
+#define MB25_MC_DRIC 0x08 /* Discard receive idle cells */
+#define MB25_MC_HALTTX 0x04 /* Halt Tx */
+#define MB25_MC_UMS 0x02 /* UTOPIA mode select */
+#define MB25_MC_ENABLED 0x01 /* Enable interrupt */
+
+/*
+ * Interrupt Status
+ */
+#define MB25_IS_GSB 0x40 /* GOOD Symbol Bit */
+#define MB25_IS_HECECR 0x20 /* HEC error cell received */
+#define MB25_IS_SCR 0x10 /* "Short Cell" Received */
+#define MB25_IS_TPE 0x08 /* Transmit Parity Error */
+#define MB25_IS_RSCC 0x04 /* Receive Signal Condition change */
+#define MB25_IS_RCSE 0x02 /* Received Cell Symbol Error */
+#define MB25_IS_RFIFOO 0x01 /* Received FIFO Overrun */
+
+/*
+ * Diagnostic Control
+ */
+#define MB25_DC_FTXCD 0x80 /* Force TxClav deassert */
+#define MB25_DC_RXCOS 0x40 /* RxClav operation select */
+#define MB25_DC_ECEIO 0x20 /* Single/Multi-PHY config select */
+#define MB25_DC_RLFLUSH 0x10 /* Clear receive FIFO */
+#define MB25_DC_IXPE 0x08 /* Insert xmit payload error */
+#define MB25_DC_IXHECE 0x04 /* Insert Xmit HEC Error */
+#define MB25_DC_LB_MASK 0x03 /* Loopback control mask */
+
+#define MB25_DC_LL 0x03 /* Line Loopback */
+#define MB25_DC_PL 0x02 /* PHY Loopback */
+#define MB25_DC_NM 0x00
+
+#define FE_MASK 0x00F0
+#define FE_MULTI_MODE 0x0000
+#define FE_SINGLE_MODE 0x0010
+#define FE_UTP_OPTION 0x0020
+#define FE_25MBIT_PHY 0x0040
+#define FE_DS3_PHY 0x0080 /* DS3 */
+#define FE_E3_PHY 0x0090 /* E3 */
+
+extern void ia_mb25_init (IADEV *);
+
+/*********************** SUNI_PM7345 PHY DEFINE HERE *********************/
+typedef struct _suni_pm7345_t
+{
+ u_int suni_config; /* SUNI Configuration */
+ u_int suni_intr_enbl; /* SUNI Interrupt Enable */
+ u_int suni_intr_stat; /* SUNI Interrupt Status */
+ u_int suni_control; /* SUNI Control */
+ u_int suni_id_reset; /* SUNI Reset and Identity */
+ u_int suni_data_link_ctrl;
+ u_int suni_rboc_conf_intr_enbl;
+ u_int suni_rboc_stat;
+ u_int suni_ds3_frm_cfg;
+ u_int suni_ds3_frm_intr_enbl;
+ u_int suni_ds3_frm_intr_stat;
+ u_int suni_ds3_frm_stat;
+ u_int suni_rfdl_cfg;
+ u_int suni_rfdl_enbl_stat;
+ u_int suni_rfdl_stat;
+ u_int suni_rfdl_data;
+ u_int suni_pmon_chng;
+ u_int suni_pmon_intr_enbl_stat;
+ u_int suni_reserved1[0x13-0x11];
+ u_int suni_pmon_lcv_evt_cnt_lsb;
+ u_int suni_pmon_lcv_evt_cnt_msb;
+ u_int suni_pmon_fbe_evt_cnt_lsb;
+ u_int suni_pmon_fbe_evt_cnt_msb;
+ u_int suni_pmon_sez_det_cnt_lsb;
+ u_int suni_pmon_sez_det_cnt_msb;
+ u_int suni_pmon_pe_evt_cnt_lsb;
+ u_int suni_pmon_pe_evt_cnt_msb;
+ u_int suni_pmon_ppe_evt_cnt_lsb;
+ u_int suni_pmon_ppe_evt_cnt_msb;
+ u_int suni_pmon_febe_evt_cnt_lsb;
+ u_int suni_pmon_febe_evt_cnt_msb;
+ u_int suni_ds3_tran_cfg;
+ u_int suni_ds3_tran_diag;
+ u_int suni_reserved2[0x23-0x21];
+ u_int suni_xfdl_cfg;
+ u_int suni_xfdl_intr_st;
+ u_int suni_xfdl_xmit_data;
+ u_int suni_xboc_code;
+ u_int suni_splr_cfg;
+ u_int suni_splr_intr_en;
+ u_int suni_splr_intr_st;
+ u_int suni_splr_status;
+ u_int suni_splt_cfg;
+ u_int suni_splt_cntl;
+ u_int suni_splt_diag_g1;
+ u_int suni_splt_f1;
+ u_int suni_cppm_loc_meters;
+ u_int suni_cppm_chng_of_cppm_perf_meter;
+ u_int suni_cppm_b1_err_cnt_lsb;
+ u_int suni_cppm_b1_err_cnt_msb;
+ u_int suni_cppm_framing_err_cnt_lsb;
+ u_int suni_cppm_framing_err_cnt_msb;
+ u_int suni_cppm_febe_cnt_lsb;
+ u_int suni_cppm_febe_cnt_msb;
+ u_int suni_cppm_hcs_err_cnt_lsb;
+ u_int suni_cppm_hcs_err_cnt_msb;
+ u_int suni_cppm_idle_un_cell_cnt_lsb;
+ u_int suni_cppm_idle_un_cell_cnt_msb;
+ u_int suni_cppm_rcv_cell_cnt_lsb;
+ u_int suni_cppm_rcv_cell_cnt_msb;
+ u_int suni_cppm_xmit_cell_cnt_lsb;
+ u_int suni_cppm_xmit_cell_cnt_msb;
+ u_int suni_rxcp_ctrl;
+ u_int suni_rxcp_fctrl;
+ u_int suni_rxcp_intr_en_sts;
+ u_int suni_rxcp_idle_pat_h1;
+ u_int suni_rxcp_idle_pat_h2;
+ u_int suni_rxcp_idle_pat_h3;
+ u_int suni_rxcp_idle_pat_h4;
+ u_int suni_rxcp_idle_mask_h1;
+ u_int suni_rxcp_idle_mask_h2;
+ u_int suni_rxcp_idle_mask_h3;
+ u_int suni_rxcp_idle_mask_h4;
+ u_int suni_rxcp_cell_pat_h1;
+ u_int suni_rxcp_cell_pat_h2;
+ u_int suni_rxcp_cell_pat_h3;
+ u_int suni_rxcp_cell_pat_h4;
+ u_int suni_rxcp_cell_mask_h1;
+ u_int suni_rxcp_cell_mask_h2;
+ u_int suni_rxcp_cell_mask_h3;
+ u_int suni_rxcp_cell_mask_h4;
+ u_int suni_rxcp_hcs_cs;
+ u_int suni_rxcp_lcd_cnt_threshold;
+ u_int suni_reserved3[0x57-0x54];
+ u_int suni_txcp_ctrl;
+ u_int suni_txcp_intr_en_sts;
+ u_int suni_txcp_idle_pat_h1;
+ u_int suni_txcp_idle_pat_h2;
+ u_int suni_txcp_idle_pat_h3;
+ u_int suni_txcp_idle_pat_h4;
+ u_int suni_txcp_idle_pat_h5;
+ u_int suni_txcp_idle_payload;
+ u_int suni_e3_frm_fram_options;
+ u_int suni_e3_frm_maint_options;
+ u_int suni_e3_frm_fram_intr_enbl;
+ u_int suni_e3_frm_fram_intr_ind_stat;
+ u_int suni_e3_frm_maint_intr_enbl;
+ u_int suni_e3_frm_maint_intr_ind;
+ u_int suni_e3_frm_maint_stat;
+ u_int suni_reserved4;
+ u_int suni_e3_tran_fram_options;
+ u_int suni_e3_tran_stat_diag_options;
+ u_int suni_e3_tran_bip_8_err_mask;
+ u_int suni_e3_tran_maint_adapt_options;
+ u_int suni_ttb_ctrl;
+ u_int suni_ttb_trail_trace_id_stat;
+ u_int suni_ttb_ind_addr;
+ u_int suni_ttb_ind_data;
+ u_int suni_ttb_exp_payload_type;
+ u_int suni_ttb_payload_type_ctrl_stat;
+ u_int suni_pad5[0x7f-0x71];
+ u_int suni_master_test;
+ u_int suni_pad6[0xff-0x80];
+}suni_pm7345_t;
+
+#define SUNI_PM7345_T suni_pm7345_t
+#define SUNI_PM7345 0x20 /* Suni chip type */
+#define SUNI_PM5346 0x30 /* Suni chip type */
+/*
+ * SUNI_PM7345 Configuration
+ */
+#define SUNI_PM7345_CLB 0x01 /* Cell loopback */
+#define SUNI_PM7345_PLB 0x02 /* Payload loopback */
+#define SUNI_PM7345_DLB 0x04 /* Diagnostic loopback */
+#define SUNI_PM7345_LLB 0x80 /* Line loopback */
+#define SUNI_PM7345_E3ENBL 0x40 /* E3 enable bit */
+#define SUNI_PM7345_LOOPT 0x10 /* LOOPT enable bit */
+#define SUNI_PM7345_FIFOBP 0x20 /* FIFO bypass */
+#define SUNI_PM7345_FRMRBP 0x08 /* Framer bypass */
+/*
+ * DS3 FRMR Interrupt Enable
+ */
+#define SUNI_DS3_COFAE 0x80 /* Enable change of frame align */
+#define SUNI_DS3_REDE 0x40 /* Enable DS3 RED state intr */
+#define SUNI_DS3_CBITE 0x20 /* Enable Appl ID channel intr */
+#define SUNI_DS3_FERFE 0x10 /* Enable Far End Receive Failure intr*/
+#define SUNI_DS3_IDLE 0x08 /* Enable Idle signal intr */
+#define SUNI_DS3_AISE 0x04 /* Enable Alarm Indication signal intr*/
+#define SUNI_DS3_OOFE 0x02 /* Enable Out of frame intr */
+#define SUNI_DS3_LOSE 0x01 /* Enable Loss of signal intr */
+
+/*
+ * DS3 FRMR Status
+ */
+#define SUNI_DS3_ACE 0x80 /* Additional Configuration Reg */
+#define SUNI_DS3_REDV 0x40 /* DS3 RED state */
+#define SUNI_DS3_CBITV 0x20 /* Application ID channel state */
+#define SUNI_DS3_FERFV 0x10 /* Far End Receive Failure state*/
+#define SUNI_DS3_IDLV 0x08 /* Idle signal state */
+#define SUNI_DS3_AISV 0x04 /* Alarm Indication signal state*/
+#define SUNI_DS3_OOFV 0x02 /* Out of frame state */
+#define SUNI_DS3_LOSV 0x01 /* Loss of signal state */
+
+/*
+ * E3 FRMR Interrupt/Status
+ */
+#define SUNI_E3_CZDI 0x40 /* Consecutive Zeros indicator */
+#define SUNI_E3_LOSI 0x20 /* Loss of signal intr status */
+#define SUNI_E3_LCVI 0x10 /* Line code violation intr */
+#define SUNI_E3_COFAI 0x08 /* Change of frame align intr */
+#define SUNI_E3_OOFI 0x04 /* Out of frame intr status */
+#define SUNI_E3_LOS 0x02 /* Loss of signal state */
+#define SUNI_E3_OOF 0x01 /* Out of frame state */
+
+/*
+ * E3 FRMR Maintenance Status
+ */
+#define SUNI_E3_AISD 0x80 /* Alarm Indication signal state*/
+#define SUNI_E3_FERF_RAI 0x40 /* FERF/RAI indicator */
+#define SUNI_E3_FEBE 0x20 /* Far End Block Error indicator*/
+
+/*
+ * RXCP Control/Status
+ */
+#define SUNI_DS3_HCSPASS 0x80 /* Pass cell with HEC errors */
+#define SUNI_DS3_HCSDQDB 0x40 /* Control octets in HCS calc */
+#define SUNI_DS3_HCSADD 0x20 /* Add coset poly */
+#define SUNI_DS3_HCK 0x10 /* Control FIFO data path integ chk*/
+#define SUNI_DS3_BLOCK 0x08 /* Enable cell filtering */
+#define SUNI_DS3_DSCR 0x04 /* Disable payload descrambling */
+#define SUNI_DS3_OOCDV 0x02 /* Cell delineation state */
+#define SUNI_DS3_FIFORST 0x01 /* Cell FIFO reset */
+
+/*
+ * RXCP Interrupt Enable/Status
+ */
+#define SUNI_DS3_OOCDE 0x80 /* Intr enable, change in CDS */
+#define SUNI_DS3_HCSE 0x40 /* Intr enable, corr HCS errors */
+#define SUNI_DS3_FIFOE 0x20 /* Intr enable, unco HCS errors */
+#define SUNI_DS3_OOCDI 0x10 /* SYNC state */
+#define SUNI_DS3_UHCSI 0x08 /* Uncorr. HCS errors detected */
+#define SUNI_DS3_COCAI 0x04 /* Corr. HCS errors detected */
+#define SUNI_DS3_FOVRI 0x02 /* FIFO overrun */
+#define SUNI_DS3_FUDRI 0x01 /* FIFO underrun */
+
+extern void ia_suni_pm7345_init (IADEV *iadev);
+
+///////////////////SUNI_PM7345 PHY DEFINE END /////////////////////////////
+
+/* ia_eeprom define*/
+#define MEM_SIZE_MASK 0x000F /* mask of 4 bits defining memory size*/
+#define MEM_SIZE_128K 0x0000 /* board has 128k buffer */
+#define MEM_SIZE_512K 0x0001 /* board has 512K of buffer */
+#define MEM_SIZE_1M 0x0002 /* board has 1M of buffer */
+ /* 0x3 to 0xF are reserved for future use */
+
+#define FE_MASK 0x00F0 /* mask of 4 bits defining FE type */
+#define FE_MULTI_MODE 0x0000 /* 155 MBit multimode fiber */
+#define FE_SINGLE_MODE 0x0010 /* 155 MBit single mode laser */
+#define FE_UTP_OPTION 0x0020 /* 155 MBit UTP front end */
+
+#define NOVRAM_SIZE 64
+#define CMD_LEN 10
+
+/***********
+ *
+ * Switches and defines for header files.
+ *
+ * The following defines are used to turn on and off
+ * various options in the header files. Primarily useful
+ * for debugging.
+ *
+ ***********/
+
+/*
+ * a list of the commands that can be sent to the NOVRAM
+ */
+
+#define EXTEND 0x100
+#define IAWRITE 0x140
+#define IAREAD 0x180
+#define ERASE 0x1c0
+
+#define EWDS 0x00
+#define WRAL 0x10
+#define ERAL 0x20
+#define EWEN 0x30
+
+/*
+ * these bits duplicate the hw_flip.h register settings
+ * note how the data in/out bits are defined in the flipper specification
+ */
+
+#define NVCE 0x02
+#define NVSK 0x01
+#define NVDO 0x08
+#define NVDI 0x04
+/***********************
+ *
+ * This define ANDs the value with the current config register and writes
+ * the result back to the config register
+ *
+ ***********************/
+
+#define CFG_AND(val) { \
+ u32 t; \
+ t = readl(iadev->reg+IPHASE5575_EEPROM_ACCESS); \
+ t &= (val); \
+ writel(t, iadev->reg+IPHASE5575_EEPROM_ACCESS); \
+ }
+
+/***********************
+ *
+ * This define ORs the value with the current config register and writes
+ * the result back to the config register
+ *
+ ***********************/
+
+#define CFG_OR(val) { \
+ u32 t; \
+ t = readl(iadev->reg+IPHASE5575_EEPROM_ACCESS); \
+ t |= (val); \
+ writel(t, iadev->reg+IPHASE5575_EEPROM_ACCESS); \
+ }
+
+/***********************
+ *
+ * Send a command to the NOVRAM, the command is in cmd.
+ *
+ * clear CE and SK. Then assert CE.
+ * Clock each of the command bits out in the correct order with SK
+ * exit with CE still asserted
+ *
+ ***********************/
+
+#define NVRAM_CMD(cmd) { \
+ int i; \
+ u_short c = cmd; \
+ CFG_AND(~(NVCE|NVSK)); \
+ CFG_OR(NVCE); \
+ for (i=0; i<CMD_LEN; i++) { \
+ NVRAM_CLKOUT((c & (1 << (CMD_LEN - 1))) ? 1 : 0); \
+ c <<= 1; \
+ } \
+ }
+
+/***********************
+ *
+ * clear the CE, this must be used after each command is complete
+ *
+ ***********************/
+
+#define NVRAM_CLR_CE {CFG_AND(~NVCE)}
+
+/***********************
+ *
+ * clock the data bit in bitval out to the NOVRAM. The bitval must be
+ * a 1 or 0, or the clockout operation is undefined
+ *
+ ***********************/
+
+#define NVRAM_CLKOUT(bitval) { \
+ CFG_AND(~NVDI); \
+ CFG_OR((bitval) ? NVDI : 0); \
+ CFG_OR(NVSK); \
+ CFG_AND( ~NVSK); \
+ }
+
+/***********************
+ *
+ * clock the data bit in and return a 1 or 0, depending on the value
+ * that was received from the NOVRAM
+ *
+ ***********************/
+
+#define NVRAM_CLKIN(value) { \
+ u32 _t; \
+ CFG_OR(NVSK); \
+ CFG_AND(~NVSK); \
+ _t = readl(iadev->reg+IPHASE5575_EEPROM_ACCESS); \
+ value = (_t & NVDO) ? 1 : 0; \
+ }
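The CFG_AND/CFG_OR/NVRAM_CMD macros above bit-bang a Microwire-style serial EEPROM: CE is asserted, then each command bit is placed on DI and latched into the device by pulsing SK. The sketch below replays that sequence against a plain variable standing in for the EEPROM-access register (readl/writel and the register name are dropped for a host-side model; function names are illustrative), checking that the command loop shifts CMD_LEN bits out MSB first:

```c
#include <assert.h>

#define CMD_LEN 10
#define IAREAD  0x180

#define NVCE 0x02
#define NVSK 0x01
#define NVDI 0x04

/* 'cfg' models IPHASE5575_EEPROM_ACCESS; 'shifted' collects the DI bits
 * the simulated EEPROM samples on each rising edge of SK. */
static unsigned cfg;
static unsigned shifted;
static int nbits;

static void cfg_and(unsigned v) { cfg &= v; }

static void cfg_or(unsigned v)
{
    unsigned rising = v & NVSK & ~cfg;   /* SK going 0 -> 1? */
    cfg |= v;
    if (rising) {                        /* device samples DI on SK rise */
        shifted = (shifted << 1) | ((cfg & NVDI) ? 1 : 0);
        nbits++;
    }
}

/* Same sequence as NVRAM_CMD: drop CE and SK, assert CE, then clock the
 * command out MSB first with the NVRAM_CLKOUT steps inlined. */
static void nvram_cmd(unsigned short cmd)
{
    int i;
    unsigned short c = cmd;

    cfg_and(~(NVCE | NVSK));
    cfg_or(NVCE);
    for (i = 0; i < CMD_LEN; i++) {
        int bit = (c & (1 << (CMD_LEN - 1))) ? 1 : 0;
        cfg_and(~NVDI);
        cfg_or(bit ? NVDI : 0);
        cfg_or(NVSK);
        cfg_and(~NVSK);
        c <<= 1;
    }
}
```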
+
+
+#endif /* IPHASE_H */
#ifdef CONFIG_ATM_NICSTAR_USE_SUNI
#include "suni.h"
#endif /* CONFIG_ATM_NICSTAR_USE_SUNI */
+#ifdef CONFIG_ATM_NICSTAR_USE_IDT77105
+#include "idt77105.h"
+#endif /* CONFIG_ATM_NICSTAR_USE_IDT77105 */
/* Additional code ************************************************************/
#define NS_DELAY mdelay(1)
-#define ALIGN_ADDRESS(addr, alignment) \
+#define ALIGN_BUS_ADDR(addr, alignment) \
((((u32) (addr)) + (((u32) (alignment)) - 1)) & ~(((u32) (alignment)) - 1))
+#define ALIGN_ADDRESS(addr, alignment) \
+ bus_to_virt(ALIGN_BUS_ADDR(virt_to_bus(addr), alignment))
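The rounding in ALIGN_BUS_ADDR is the usual add-then-mask idiom and assumes a power-of-two alignment. A host-side copy on plain integers (virt_to_bus/bus_to_virt dropped so the arithmetic stands alone) shows the behavior:

```c
#include <stdint.h>
#include <assert.h>

/* Round addr up to the next multiple of alignment (power of two only). */
#define ALIGN_BUS_ADDR(addr, alignment) \
    ((((uint32_t) (addr)) + (((uint32_t) (alignment)) - 1)) & \
     ~(((uint32_t) (alignment)) - 1))
```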
#undef CEIL(d)
static unsigned num_cards = 0;
static struct atmdev_ops atm_ops =
{
- NULL, /* dev_close */
- ns_open, /* open */
- ns_close, /* close */
- ns_ioctl, /* ioctl */
- NULL, /* getsockopt */
- NULL, /* setsockopt */
- ns_send, /* send */
- NULL, /* sg_send */
- NULL, /* send_oam */
- ns_phy_put, /* phy_put */
- ns_phy_get, /* phy_get */
- NULL, /* feedback */
- NULL, /* change_qos */
- NULL, /* free_rx_skb */
- ns_proc_read /* proc_read */
+ open: ns_open,
+ close: ns_close,
+ ioctl: ns_ioctl,
+ send: ns_send,
+ phy_put: ns_phy_put,
+ phy_get: ns_phy_get,
+ proc_read: ns_proc_read
};
static struct timer_list ns_timer;
static char *mac[NS_MAX_CARDS] = { NULL
card = cards[i];
+#ifdef CONFIG_ATM_NICSTAR_USE_IDT77105
+ if (card->max_pcr == IDT_25_PCR) {
+ idt77105_stop(card->atmdev);
+ }
+#endif /* CONFIG_ATM_NICSTAR_USE_IDT77105 */
+
/* Stop everything */
writel(0x00000000, card->membase + CFG);
cards[i] = card;
card->index = i;
+ card->atmdev = NULL;
card->pcidev = pcidev;
- card->membase = (u32) pcidev->resource[1].start;
+ card->membase = pcidev->resource[1].start;
#ifdef __powerpc__
/* Compensate for different memory map between host CPU and PCI bus.
Shouldn't we use a macro for this? */
card->membase += KERNELBASE;
#endif /* __powerpc__ */
- card->membase = (u32) ioremap(card->membase, NS_IOREMAP_SIZE);
- if (card->membase == (u32) (NULL))
+ card->membase = (unsigned long) ioremap(card->membase, NS_IOREMAP_SIZE);
+ if (card->membase == 0)
{
printk("nicstar%d: can't ioremap() membase.\n",i);
error = 3;
ns_init_card_error(card, error);
return error;
}
+#ifdef NS_PCI_LATENCY
if (pci_latency < NS_PCI_LATENCY)
{
PRINTK("nicstar%d: setting PCI latency timer to %d.\n", i, NS_PCI_LATENCY);
return error;
}
}
+#endif /* NS_PCI_LATENCY */
/* Clear timer overflow */
data = readl(card->membase + STAT);
card->atmdev->ci_range.vpi_bits = card->vpibits;
card->atmdev->ci_range.vci_bits = card->vcibits;
card->atmdev->link_rate = card->max_pcr;
-
card->atmdev->phy = NULL;
+
#ifdef CONFIG_ATM_NICSTAR_USE_SUNI
if (card->max_pcr == ATM_OC3_PCR) {
suni_init(card->atmdev);
#endif /* MODULE */
}
#endif /* CONFIG_ATM_NICSTAR_USE_SUNI */
+
+#ifdef CONFIG_ATM_NICSTAR_USE_IDT77105
+ if (card->max_pcr == IDT_25_PCR) {
+ idt77105_init(card->atmdev);
+ /* Note that for the IDT77105 PHY we don't need the awful
+ * module count hack that the SUNI needs because we can
+ * stop the '105 when the nicstar module is cleaned up.
+ */
+ }
+#endif /* CONFIG_ATM_NICSTAR_USE_IDT77105 */
+
if (card->atmdev->phy && card->atmdev->phy->start)
card->atmdev->phy->start(card->atmdev);
flags = NS_TBD_AAL5;
scqe.word_2 = cpu_to_le32((u32) virt_to_bus(skb->data));
scqe.word_3 = cpu_to_le32((u32) skb->len);
- scqe.word_4 = cpu_to_le32(((u32) vcc->vpi) << NS_TBD_VPI_SHIFT |
- ((u32) vcc->vci) << NS_TBD_VCI_SHIFT);
+ scqe.word_4 = ns_tbd_mkword_4(0, (u32) vcc->vpi, (u32) vcc->vci, 0,
+ ATM_SKB(skb)->atm_options & ATM_ATMOPT_CLP ? 1 : 0);
flags |= NS_TBD_EOPDU;
}
else /* (vcc->qos.aal == ATM_AAL0) */
{
u32 scdi;
scq_info *scq;
- ns_tsi *previous, *one_ahead, *two_ahead;
+ ns_tsi *previous = NULL, *one_ahead, *two_ahead;
 int serviced_entries; /* flag indicating at least one entry was serviced */
serviced_entries = 0;
card->intcnt = 0;
return retval;
}
+#if 0
/* Dump 25.6 Mbps PHY registers */
+ /* Now that there's a 25.6 Mbps PHY driver, this code isn't needed. I left
+ it here just in case it's needed for debugging. */
if (card->max_pcr == IDT_25_PCR && !left--)
{
u32 phy_regs[4];
return sprintf(page, "PHY regs: 0x%02X 0x%02X 0x%02X 0x%02X \n",
phy_regs[0], phy_regs[1], phy_regs[2], phy_regs[3]);
}
+#endif /* 0 - Dump 25.6 Mbps PHY registers */
#if 0
/* Dump TST */
if (left-- < NS_TST_NUM_ENTRIES)
break;
default:
- return -EINVAL;
+ return -ENOIOCTLCMD;
}
if (!copy_to_user((pool_levels *) arg, &pl, sizeof(pl)))
else {
printk("nicstar%d: %s == NULL \n", card->index,
dev->phy ? "dev->phy->ioctl" : "dev->phy");
- return -EINVAL;
+ return -ENOIOCTLCMD;
}
}
}
128K x 32bit SRAM will limit the maximum
VCI. */
-#define NS_PCI_LATENCY 64 /* Must be a multiple of 32 */
+/*#define NS_PCI_LATENCY 64*/ /* Must be a multiple of 32 */
/* Number of buffers initially allocated */
#define NUM_SB 32 /* Must be even */
#define NS_TBD_VCI_SHIFT 4
#define ns_tbd_mkword_1(flags, m, n, buflen) \
- (cpu_to_le32(flags | m << 23 | n << 16 | buflen))
+ (cpu_to_le32((flags) | (m) << 23 | (n) << 16 | (buflen)))
#define ns_tbd_mkword_1_novbr(flags, buflen) \
- (cpu_to_le32(flags | buflen | 0x00810000))
+ (cpu_to_le32((flags) | (buflen) | 0x00810000))
#define ns_tbd_mkword_3(control, pdulen) \
- (cpu_to_le32(control << 16 | pdulen))
+ (cpu_to_le32((control) << 16 | (pdulen)))
#define ns_tbd_mkword_4(gfc, vpi, vci, pt, clp) \
- (cpu_to_le32(gfc << 28 | vpi << 20 | vci << 4 | pt << 1 | clp)))
+ (cpu_to_le32((gfc) << 28 | (vpi) << 20 | (vci) << 4 | (pt) << 1 | (clp)))
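Parenthesizing the macro arguments matters because `<<` binds more tightly than `|`: a caller passing a bitwise-or expression as, say, vci would otherwise have it torn apart by the shift. A host-side sketch of the word-4 encoder (cpu_to_le32 omitted so the arithmetic can be checked on its own):

```c
#include <stdint.h>
#include <assert.h>

/* Host-side model of the TBD word-4 layout:
 * GFC in bits 31-28, VPI 27-20, VCI 19-4, PT 3-1, CLP 0. */
#define ns_tbd_mkword_4(gfc, vpi, vci, pt, clp) \
    ((uint32_t) ((gfc) << 28 | (vpi) << 20 | (vci) << 4 | (pt) << 1 | (clp)))
```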
#define NS_TSR_INTENABLE 0x20000000
/* NISCtAR operation registers ************************************************/
+/* See Section 3.4 of `IDT77211 NICStAR User Manual' from www.idt.com */
+
enum ns_regs
{
- DR0 = 0x00,
- DR1 = 0x04,
- DR2 = 0x08,
- DR3 = 0x0C,
- CMD = 0x10,
- CFG = 0x14,
- STAT = 0x18,
- RSQB = 0x1C,
- RSQT = 0x20,
- RSQH = 0x24,
- CDC = 0x28,
- VPEC = 0x2C,
- ICC = 0x30,
- RAWCT = 0x34,
- TMR = 0x38,
- TSTB = 0x3C,
- TSQB = 0x40,
- TSQT = 0x44,
- TSQH = 0x48,
- GP = 0x4C,
- VPM = 0x50
+ DR0 = 0x00, /* Data Register 0 R/W*/
+ DR1 = 0x04, /* Data Register 1 W */
+ DR2 = 0x08, /* Data Register 2 W */
+ DR3 = 0x0C, /* Data Register 3 W */
+ CMD = 0x10, /* Command W */
+ CFG = 0x14, /* Configuration R/W */
+ STAT = 0x18, /* Status R/W */
+ RSQB = 0x1C, /* Receive Status Queue Base W */
+ RSQT = 0x20, /* Receive Status Queue Tail R */
+ RSQH = 0x24, /* Receive Status Queue Head W */
+ CDC = 0x28, /* Cell Drop Counter R/clear */
+ VPEC = 0x2C, /* VPI/VCI Lookup Error Count R/clear */
+ ICC = 0x30, /* Invalid Cell Count R/clear */
+ RAWCT = 0x34, /* Raw Cell Tail R */
+ TMR = 0x38, /* Timer R */
+ TSTB = 0x3C, /* Transmit Schedule Table Base R/W */
+ TSQB = 0x40, /* Transmit Status Queue Base W */
+ TSQT = 0x44, /* Transmit Status Queue Tail R */
+ TSQH = 0x48, /* Transmit Status Queue Head W */
+ GP = 0x4C, /* General Purpose R/W */
+ VPM = 0x50 /* VPI/VCI Mask W */
};
/* NICStAR commands issued to the CMD register ********************************/
+
+/* Top 4 bits are command opcode, lower 28 are parameters. */
+
#define NS_CMD_NO_OPERATION 0x00000000
+ /* params always 0 */
+
#define NS_CMD_OPENCLOSE_CONNECTION 0x20000000
+ /* b19{1=open,0=close} b18-2{SRAM addr} */
+
#define NS_CMD_WRITE_SRAM 0x40000000
+ /* b18-2{SRAM addr} b1-0{burst size} */
+
#define NS_CMD_READ_SRAM 0x50000000
+ /* b18-2{SRAM addr} */
+
#define NS_CMD_WRITE_FREEBUFQ 0x60000000
+ /* b0{large buf indicator} */
+
#define NS_CMD_READ_UTILITY 0x80000000
+ /* b8{1=select UTL_CS1} b9{1=select UTL_CS0} b7-0{bus addr} */
+
#define NS_CMD_WRITE_UTILITY 0x90000000
+ /* b8{1=select UTL_CS1} b9{1=select UTL_CS0} b7-0{bus addr} */
#define NS_CMD_OPEN_CONNECTION (NS_CMD_OPENCLOSE_CONNECTION | 0x00080000)
#define NS_CMD_CLOSE_CONNECTION NS_CMD_OPENCLOSE_CONNECTION
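Per the layout notes above (opcode in the top 4 bits, bit 19 selecting open vs. close, SRAM address in bits 18-2), an open-connection command can be composed as below. ns_open_conn_cmd is a hypothetical helper for illustration, not part of the driver:

```c
#include <stdint.h>
#include <assert.h>

#define NS_CMD_OPENCLOSE_CONNECTION 0x20000000
#define NS_CMD_OPEN_CONNECTION  (NS_CMD_OPENCLOSE_CONNECTION | 0x00080000)
#define NS_CMD_CLOSE_CONNECTION NS_CMD_OPENCLOSE_CONNECTION

/* Place a word-aligned SRAM address into bits 18-2 of an open command. */
static uint32_t ns_open_conn_cmd(uint32_t sram_addr)
{
    return NS_CMD_OPEN_CONNECTION | ((sram_addr & 0x1FFFF) << 2);
}
```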
/* NICStAR configuration bits *************************************************/
-#define NS_CFG_SWRST 0x80000000
-#define NS_CFG_RXPATH 0x20000000
-#define NS_CFG_SMBUFSIZE_MASK 0x18000000
-#define NS_CFG_LGBUFSIZE_MASK 0x06000000
-#define NS_CFG_EFBIE 0x01000000
-#define NS_CFG_RSQSIZE_MASK 0x00C00000
-#define NS_CFG_ICACCEPT 0x00200000
-#define NS_CFG_IGNOREGFC 0x00100000
-#define NS_CFG_VPIBITS_MASK 0x000C0000
-#define NS_CFG_RCTSIZE_MASK 0x00030000
-#define NS_CFG_VCERRACCEPT 0x00008000
-#define NS_CFG_RXINT_MASK 0x00007000
-#define NS_CFG_RAWIE 0x00000800
-#define NS_CFG_RSQAFIE 0x00000400
-#define NS_CFG_RXRM 0x00000200
-#define NS_CFG_TMRROIE 0x00000080
-#define NS_CFG_TXEN 0x00000020
-#define NS_CFG_TXIE 0x00000010
-#define NS_CFG_TXURIE 0x00000008
-#define NS_CFG_UMODE 0x00000004
-#define NS_CFG_TSQFIE 0x00000002
-#define NS_CFG_PHYIE 0x00000001
+#define NS_CFG_SWRST 0x80000000 /* Software Reset */
+#define NS_CFG_RXPATH 0x20000000 /* Receive Path Enable */
+#define NS_CFG_SMBUFSIZE_MASK 0x18000000 /* Small Receive Buffer Size */
+#define NS_CFG_LGBUFSIZE_MASK 0x06000000 /* Large Receive Buffer Size */
+#define NS_CFG_EFBIE 0x01000000 /* Empty Free Buffer Queue
+ Interrupt Enable */
+#define NS_CFG_RSQSIZE_MASK 0x00C00000 /* Receive Status Queue Size */
+#define NS_CFG_ICACCEPT 0x00200000 /* Invalid Cell Accept */
+#define NS_CFG_IGNOREGFC 0x00100000 /* Ignore General Flow Control */
+#define NS_CFG_VPIBITS_MASK 0x000C0000 /* VPI/VCI Bits Size Select */
+#define NS_CFG_RCTSIZE_MASK 0x00030000 /* Receive Connection Table Size */
+#define NS_CFG_VCERRACCEPT 0x00008000 /* VPI/VCI Error Cell Accept */
+#define NS_CFG_RXINT_MASK 0x00007000 /* End of Receive PDU Interrupt
+ Handling */
+#define NS_CFG_RAWIE 0x00000800 /* Raw Cell Queue Interrupt Enable */
+#define NS_CFG_RSQAFIE 0x00000400 /* Receive Queue Almost Full
+ Interrupt Enable */
+#define NS_CFG_RXRM 0x00000200 /* Receive RM Cells */
+#define NS_CFG_TMRROIE 0x00000080 /* Timer Roll Over Interrupt
+ Enable */
+#define NS_CFG_TXEN 0x00000020 /* Transmit Operation Enable */
+#define NS_CFG_TXIE 0x00000010 /* Transmit Status Interrupt
+ Enable */
+#define NS_CFG_TXURIE 0x00000008 /* Transmit Under-run Interrupt
+ Enable */
+#define NS_CFG_UMODE 0x00000004 /* Utopia Mode (cell/byte) Select */
+#define NS_CFG_TSQFIE 0x00000002 /* Transmit Status Queue Full
+ Interrupt Enable */
+#define NS_CFG_PHYIE 0x00000001 /* PHY Interrupt Enable */
#define NS_CFG_SMBUFSIZE_48 0x00000000
#define NS_CFG_SMBUFSIZE_96 0x08000000
/* NICStAR STATus bits ********************************************************/
-#define NS_STAT_SFBQC_MASK 0xFF000000
-#define NS_STAT_LFBQC_MASK 0x00FF0000
-#define NS_STAT_TSIF 0x00008000
-#define NS_STAT_TXICP 0x00004000
-#define NS_STAT_TSQF 0x00001000
-#define NS_STAT_TMROF 0x00000800
-#define NS_STAT_PHYI 0x00000400
-#define NS_STAT_CMDBZ 0x00000200
-#define NS_STAT_SFBQF 0x00000100
-#define NS_STAT_LFBQF 0x00000080
-#define NS_STAT_RSQF 0x00000040
-#define NS_STAT_EOPDU 0x00000020
-#define NS_STAT_RAWCF 0x00000010
-#define NS_STAT_SFBQE 0x00000008
-#define NS_STAT_LFBQE 0x00000004
-#define NS_STAT_RSQAF 0x00000002
+#define NS_STAT_SFBQC_MASK 0xFF000000 /* hi 8 bits Small Buffer Queue Count */
+#define NS_STAT_LFBQC_MASK 0x00FF0000 /* hi 8 bits Large Buffer Queue Count */
+#define NS_STAT_TSIF 0x00008000 /* Transmit Status Queue Indicator */
+#define NS_STAT_TXICP 0x00004000 /* Transmit Incomplete PDU */
+#define NS_STAT_TSQF 0x00001000 /* Transmit Status Queue Full */
+#define NS_STAT_TMROF 0x00000800 /* Timer Overflow */
+#define NS_STAT_PHYI 0x00000400 /* PHY Device Interrupt */
+#define NS_STAT_CMDBZ 0x00000200 /* Command Busy */
+#define NS_STAT_SFBQF 0x00000100 /* Small Buffer Queue Full */
+#define NS_STAT_LFBQF 0x00000080 /* Large Buffer Queue Full */
+#define NS_STAT_RSQF 0x00000040 /* Receive Status Queue Full */
+#define NS_STAT_EOPDU 0x00000020 /* End of PDU */
+#define NS_STAT_RAWCF 0x00000010 /* Raw Cell Flag */
+#define NS_STAT_SFBQE 0x00000008 /* Small Buffer Queue Empty */
+#define NS_STAT_LFBQE 0x00000004 /* Large Buffer Queue Empty */
+#define NS_STAT_RSQAF 0x00000002 /* Receive Status Queue Almost Full */
#define ns_stat_sfbqc_get(stat) (((stat) & NS_STAT_SFBQC_MASK) >> 23)
#define ns_stat_lfbqc_get(stat) (((stat) & NS_STAT_LFBQC_MASK) >> 15)
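Note that the `ns_stat_*_get` macros shift by one bit less than the mask position (23 rather than 24, 15 rather than 16), which doubles the extracted field; a plausible reading is that the NICStAR reports free-buffer counts in units of two buffers. A minimal stand-alone sketch of the mask-and-shift extraction (the register value below is made up for illustration):

```c
#include <assert.h>
#include <stdint.h>

#define NS_STAT_SFBQC_MASK 0xFF000000 /* Small Buffer Queue Count */
#define NS_STAT_LFBQC_MASK 0x00FF0000 /* Large Buffer Queue Count */

/* Shifting by 23/15 instead of 24/16 multiplies the raw field by two. */
static inline uint32_t ns_stat_sfbqc_get(uint32_t stat)
{
	return (stat & NS_STAT_SFBQC_MASK) >> 23;
}

static inline uint32_t ns_stat_lfbqc_get(uint32_t stat)
{
	return (stat & NS_STAT_LFBQC_MASK) >> 15;
}
```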
{
int index; /* Card ID to the device driver */
int sram_size; /* In k x 32bit words. 32 or 128 */
- u32 membase; /* Card's memory base address */
+ unsigned long membase; /* Card's memory base address */
unsigned long max_pcr;
int rct_size; /* Number of entries */
int vpibits;
/* drivers/atm/suni.c - PMC SUNI (PHY) driver */
-/* Written 1995-1999 by Werner Almesberger, EPFL LRC/ICA */
+/* Written 1995-2000 by Werner Almesberger, EPFL LRC/ICA */
#include <linux/module.h>
case SONET_GETFRSENSE:
return -EINVAL;
case SUNI_SETLOOP:
- if (!capable(CAP_NET_ADMIN)) return -EPERM;
- if ((int) arg < 0 || (int) arg > SUNI_LM_LOOP)
- return -EINVAL;
- PUT((GET(MCT) & ~(SUNI_MCT_DLE | SUNI_MCT_LLE)) |
- ((int) arg == SUNI_LM_DIAG ? SUNI_MCT_DLE : 0) |
- ((int) arg == SUNI_LM_LOOP ? SUNI_MCT_LLE : 0),MCT);
- PRIV(dev)->loop_mode = (int) arg;
- return 0;
+ {
+ int int_arg = (int) (long) arg;
+
+ if (!capable(CAP_NET_ADMIN)) return -EPERM;
+ if (int_arg < 0 || int_arg > SUNI_LM_LOOP)
+ return -EINVAL;
+ PUT((GET(MCT) & ~(SUNI_MCT_DLE | SUNI_MCT_LLE))
+ | (int_arg == SUNI_LM_DIAG ? SUNI_MCT_DLE :
+ 0) | (int_arg == SUNI_LM_LOOP ?
+ SUNI_MCT_LLE : 0),MCT);
+ PRIV(dev)->loop_mode = int_arg;
+ return 0;
+ }
case SUNI_GETLOOP:
return put_user(PRIV(dev)->loop_mode,(int *) arg) ?
-EFAULT : 0;
default:
- return -EINVAL;
+ return -ENOIOCTLCMD;
}
}
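The SUNI_SETLOOP path above is a read-modify-write of the master control register: both loopback bits are cleared, then at most one is set back depending on the requested mode. A hedged sketch of that pattern (the bit values and names below are invented for illustration; the real ones live in suni.h):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative values only, not the real SUNI_MCT_* definitions. */
#define MCT_DLE 0x02 /* diagnostic loopback enable */
#define MCT_LLE 0x04 /* line loopback enable */

enum { LM_NONE, LM_DIAG, LM_LOOP };

/* Clear both loopback bits, then set the one the mode asks for. */
static uint8_t set_loop_mode(uint8_t mct, int mode)
{
	mct &= ~(MCT_DLE | MCT_LLE);
	if (mode == LM_DIAG)
		mct |= MCT_DLE;
	else if (mode == LM_LOOP)
		mct |= MCT_LLE;
	return mct;
}
```

Clearing first means switching between modes never leaves both bits set at once.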
/* drivers/atm/uPD98402.c - NEC uPD98402 (PHY) declarations */
-/* Written 1995-1998 by Werner Almesberger, EPFL LRC/ICA */
+/* Written 1995-2000 by Werner Almesberger, EPFL LRC/ICA */
#include <linux/module.h>
case SONET_GETFRSENSE:
return get_sense(dev,arg);
default:
- return -EINVAL;
+ return -ENOIOCTLCMD;
}
}
/* drivers/atm/zatm.c - ZeitNet ZN122x device driver */
-/* Written 1995-1999 by Werner Almesberger, EPFL LRC/ICA */
+/* Written 1995-2000 by Werner Almesberger, EPFL LRC/ICA */
#include <linux/config.h>
zpokel(zatm_dev,(zpeekl(zatm_dev,pos) & ~(0xffff << shift)) |
((zatm_vcc->rx_chan | uPD98401_RXLT_ENBL) << shift),pos);
restore_flags(flags);
-/* Ugly hack to ensure that ttcp_atm will work with the current allocation
- scheme. @@@ */
-if (vcc->rx_quota < 200000) vcc->rx_quota = 200000;
return 0;
}
}
#endif
default:
- if (!dev->phy->ioctl) return -EINVAL;
+ if (!dev->phy->ioctl) return -ENOIOCTLCMD;
return dev->phy->ioctl(dev,cmd,arg);
}
}
static int zatm_getsockopt(struct atm_vcc *vcc,int level,int optname,
void *optval,int optlen)
{
-#ifdef CONFIG_MMU_HACKS
-
-static const struct atm_buffconst bctx = { PAGE_SIZE,0,PAGE_SIZE,0,0,0 };
-static const struct atm_buffconst bcrx = { PAGE_SIZE,0,PAGE_SIZE,0,0,0 };
-
-#else
-
-static const struct atm_buffconst bctx = { 4,0,4,0,0,0 };
-static const struct atm_buffconst bcrx = { 4,0,4,0,0,0 };
-
-#endif
- if (level == SOL_AAL && (optname == SO_BCTXOPT ||
- optname == SO_BCRXOPT))
- return copy_to_user(optval,optname == SO_BCTXOPT ? &bctx :
- &bcrx,sizeof(struct atm_buffconst)) ? -EFAULT : 0;
return -EINVAL;
}
static const struct atmdev_ops ops = {
- NULL, /* no dev_close */
- zatm_open,
- zatm_close,
- zatm_ioctl,
- zatm_getsockopt,
- zatm_setsockopt,
- zatm_send,
- NULL /*zatm_sg_send*/,
- NULL, /* no send_oam */
- zatm_phy_put,
- zatm_phy_get,
- zatm_feedback,
- zatm_change_qos,
- NULL, /* no free_rx_skb */
- NULL /* no proc_read */
+ open: zatm_open,
+ close: zatm_close,
+ ioctl: zatm_ioctl,
+ getsockopt: zatm_getsockopt,
+ setsockopt: zatm_setsockopt,
+ send: zatm_send,
+ /*zatm_sg_send*/
+ phy_put: zatm_phy_put,
+ phy_get: zatm_phy_get,
+ feedback: zatm_feedback,
+ change_qos: zatm_change_qos,
};
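The `open: zatm_open` form in the ops table above is the old GNU labeled-field extension; C99 standardized the equivalent `.open = zatm_open` designated-initializer syntax, and fields left unnamed are implicitly zeroed, so unused hooks stay NULL without explicit placeholders. A small sketch with a made-up ops struct:

```c
#include <assert.h>
#include <stddef.h>

struct dev_ops {
	int (*open)(void);
	void (*close)(void);
	int (*ioctl)(int cmd);
};

static int demo_open(void) { return 0; }

/* Unnamed fields are zero-initialized, so close/ioctl stay NULL. */
static const struct dev_ops ops = {
	.open = demo_open,
};
```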
EXPORT_SYMBOL(blk_init_queue);
EXPORT_SYMBOL(blk_cleanup_queue);
EXPORT_SYMBOL(blk_queue_headactive);
+EXPORT_SYMBOL(generic_make_request);
Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
+#include <linux/config.h>
#include <linux/raid/md.h>
-#include <linux/raid/xor.h>
#ifdef CONFIG_KMOD
#include <linux/kmod.h>
md_register_reboot_notifier(&md_notifier);
-#ifdef CONFIG_MD_HSM
- hsm_init ();
-#endif
-#ifdef CONFIG_MD_TRANSLUCENT
- translucent_init ();
-#endif
#ifdef CONFIG_MD_LINEAR
linear_init ();
#endif
MD_EXPORT_SYMBOL(find_rdev_nr);
MD_EXPORT_SYMBOL(md_interrupt_thread);
MD_EXPORT_SYMBOL(mddev_map);
+MD_EXPORT_SYMBOL(md_check_ordering);
struct vortex_private {
/* The Rx and Tx rings should be quad-word-aligned. */
- struct boom_rx_desc rx_ring[RX_RING_SIZE];
- struct boom_tx_desc tx_ring[TX_RING_SIZE];
+ struct boom_rx_desc* rx_ring;
+ struct boom_tx_desc* tx_ring;
+ dma_addr_t rx_ring_dma;
+ dma_addr_t tx_ring_dma;
/* The addresses of transmit- and receive-in-place skbuffs. */
struct sk_buff* rx_skbuff[RX_RING_SIZE];
struct sk_buff* tx_skbuff[TX_RING_SIZE];
struct net_device *next_module;
void *priv_addr;
+ dma_addr_t ring_dma;
unsigned int cur_rx, cur_tx; /* The next free ring entry */
unsigned int dirty_rx, dirty_tx; /* The ring entries to be free()ed. */
struct net_device_stats stats;
struct sk_buff *tx_skb; /* Packet being eaten by bus master ctrl. */
+ dma_addr_t tx_skb_dma; /* Allocated DMA address for bus master ctrl DMA. */
/* PCI configuration space information. */
u8 pci_bus, pci_devfn; /* PCI bus location, for power management. */
char *cb_fn_base; /* CardBus function status addr space. */
int chip_id;
+ struct pci_dev *pdev; /* Device for DMA mapping */
/* The remainder are related to chip state, mostly media selection. */
unsigned long in_interrupt;
dev->mtu = mtu;
/* Make certain the descriptor lists are aligned. */
- {
- void *mem = kmalloc(sizeof(*vp) + 15, GFP_KERNEL);
- vp = (void *)(((long)mem + 15) & ~15);
- vp->priv_addr = mem;
- }
+ vp = kmalloc(sizeof(*vp), GFP_KERNEL);
+
memset(vp, 0, sizeof(*vp));
dev->priv = vp;
vp->chip_id = chip_idx;
vp->pci_bus = pci_bus;
vp->pci_devfn = pci_devfn;
+ vp->pdev = pci_find_slot(pci_bus, pci_devfn);
+
+ vp->priv_addr = pci_alloc_consistent(vp->pdev, sizeof(struct boom_rx_desc) * RX_RING_SIZE
+ + sizeof(struct boom_tx_desc) * TX_RING_SIZE
+ + 15, &vp->ring_dma);
+ /* Make sure rings are 16 byte aligned. */
+ vp->rx_ring = (void *)(((long)vp->priv_addr + 15) & ~15);
+ vp->tx_ring = (struct boom_tx_desc *)(vp->rx_ring + RX_RING_SIZE);
+ vp->rx_ring_dma = (vp->ring_dma + 15) & ~15;
+ vp->tx_ring_dma = vp->rx_ring_dma + sizeof(struct boom_rx_desc) * RX_RING_SIZE;
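Allocating `size + 15` bytes and rounding the returned address up with `(addr + 15) & ~15` guarantees a 16-byte-aligned ring somewhere inside the block; applying the same rounding to the DMA cookie keeps the CPU and bus views of the rings in step. The rounding itself, as a stand-alone sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Round addr up to the next 16-byte boundary (no-op if already aligned). */
static uintptr_t align16(uintptr_t addr)
{
	return (addr + 15) & ~(uintptr_t)15;
}
```

Because at most 15 bytes are skipped, the aligned ring always fits in the `+ 15` slack.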
/* The lower four bits are the media type. */
if (dev->mem_start)
printk(KERN_DEBUG "%s: Filling in the Rx ring.\n", dev->name);
for (i = 0; i < RX_RING_SIZE; i++) {
struct sk_buff *skb;
- vp->rx_ring[i].next = cpu_to_le32(virt_to_bus(&vp->rx_ring[i+1]));
+ vp->rx_ring[i].next = cpu_to_le32(vp->rx_ring_dma + sizeof(struct boom_rx_desc) * (i+1));
vp->rx_ring[i].status = 0; /* Clear complete bit. */
vp->rx_ring[i].length = cpu_to_le32(PKT_BUF_SZ | LAST_FRAG);
skb = dev_alloc_skb(PKT_BUF_SZ);
break; /* Bad news! */
skb->dev = dev; /* Mark as being used by this device. */
skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
- vp->rx_ring[i].addr = cpu_to_le32(virt_to_bus(skb->tail));
+ vp->rx_ring[i].addr = cpu_to_le32(pci_map_single(vp->pdev, skb->tail, PKT_BUF_SZ));
}
/* Wrap the ring. */
- vp->rx_ring[i-1].next = cpu_to_le32(virt_to_bus(&vp->rx_ring[0]));
- outl(virt_to_bus(&vp->rx_ring[0]), ioaddr + UpListPtr);
+ vp->rx_ring[i-1].next = cpu_to_le32(vp->rx_ring_dma);
+ outl(vp->rx_ring_dma, ioaddr + UpListPtr);
}
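With the rings moved into consistent DMA memory, the bus address of descriptor i is simply the ring's DMA base plus `i * sizeof(descriptor)`, and the last descriptor's `next` points back at the base to close the ring, as the wrap above does. A sketch with a stand-in descriptor type (the layout is illustrative, not the real `boom_rx_desc`):

```c
#include <assert.h>
#include <stdint.h>

#define RING_SIZE 32

struct desc { uint32_t next, status, length, addr; };

/* Bus address of entry i, given the ring's DMA base address. */
static uint32_t desc_dma(uint32_t ring_dma, unsigned int i)
{
	return ring_dma + i * (uint32_t)sizeof(struct desc);
}

/* Chain each descriptor to the next, wrapping the last to the base. */
static void ring_link(struct desc *ring, uint32_t ring_dma)
{
	unsigned int i;

	for (i = 0; i < RING_SIZE; i++)
		ring[i].next = desc_dma(ring_dma, (i + 1) % RING_SIZE);
}
```

This is why `virt_to_bus(&vp->rx_ring[i+1])` can be replaced everywhere by base-plus-offset arithmetic on the DMA cookie.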
if (vp->full_bus_master_tx) { /* Boomerang bus master Tx. */
dev->hard_start_xmit = &boomerang_start_xmit;
printk(KERN_DEBUG "%s: Resetting the Tx ring pointer.\n",
dev->name);
if (vp->cur_tx - vp->dirty_tx > 0 && inl(ioaddr + DownListPtr) == 0)
- outl(virt_to_bus(&vp->tx_ring[vp->dirty_tx % TX_RING_SIZE]),
+ outl(vp->tx_ring_dma + (vp->dirty_tx % TX_RING_SIZE) * sizeof(struct boom_tx_desc),
ioaddr + DownListPtr);
if (vp->tx_full && (vp->cur_tx - vp->dirty_tx <= TX_RING_SIZE - 1)) {
vp->tx_full = 0;
outl(skb->len, ioaddr + TX_FIFO);
if (vp->bus_master) {
/* Set the bus-master controller to transfer the packet. */
- outl(virt_to_bus(skb->data), ioaddr + Wn7_MasterAddr);
- outw((skb->len + 3) & ~3, ioaddr + Wn7_MasterLen);
+ int len = (skb->len + 3) & ~3;
+ outl(vp->tx_skb_dma = pci_map_single(vp->pdev, skb->data, len), ioaddr + Wn7_MasterAddr);
+ outw(len, ioaddr + Wn7_MasterLen);
vp->tx_skb = skb;
outw(StartDMADown, ioaddr + EL3_CMD);
/* dev->tbusy will be cleared at the DMADone interrupt. */
}
vp->tx_skbuff[entry] = skb;
vp->tx_ring[entry].next = 0;
- vp->tx_ring[entry].addr = cpu_to_le32(virt_to_bus(skb->data));
+ vp->tx_ring[entry].addr = cpu_to_le32(pci_map_single(vp->pdev, skb->data, skb->len));
vp->tx_ring[entry].length = cpu_to_le32(skb->len | LAST_FRAG);
vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded);
for (i = 600; i >= 0 ; i--)
if ( (inw(ioaddr + EL3_STATUS) & CmdInProgress) == 0)
break;
- prev_entry->next = cpu_to_le32(virt_to_bus(&vp->tx_ring[entry]));
+ prev_entry->next = cpu_to_le32(vp->tx_ring_dma + entry * sizeof(struct boom_tx_desc));
if (inl(ioaddr + DownListPtr) == 0) {
- outl(virt_to_bus(&vp->tx_ring[entry]), ioaddr + DownListPtr);
+ outl(vp->tx_ring_dma + entry * sizeof(struct boom_tx_desc), ioaddr + DownListPtr);
queued_packet++;
}
outw(DownUnstall, ioaddr + EL3_CMD);
while (vp->cur_tx - dirty_tx > 0) {
int entry = dirty_tx % TX_RING_SIZE;
if (inl(ioaddr + DownListPtr) ==
- virt_to_bus(&vp->tx_ring[entry]))
+ vp->tx_ring_dma + entry * sizeof(struct boom_tx_desc))
break; /* It still hasn't been processed. */
if (vp->tx_skbuff[entry]) {
+ struct sk_buff *skb = vp->tx_skbuff[entry];
+
+ pci_unmap_single(vp->pdev, le32_to_cpu(vp->tx_ring[entry].addr), skb->len);
DEV_FREE_SKB(vp->tx_skbuff[entry]);
vp->tx_skbuff[entry] = 0;
}
if (status & DMADone) {
if (inw(ioaddr + Wn7_MasterStatus) & 0x1000) {
outw(0x1000, ioaddr + Wn7_MasterStatus); /* Ack the event. */
+ pci_unmap_single(vp->pdev, vp->tx_skb_dma, (vp->tx_skb->len + 3) & ~3);
DEV_FREE_SKB(vp->tx_skb); /* Release the transferred buffer */
if (inw(ioaddr + TxFree) > 1536) {
clear_bit(0, (void*)&dev->tbusy);
/* 'skb_put()' points to the start of sk_buff data area. */
if (vp->bus_master &&
! (inw(ioaddr + Wn7_MasterStatus) & 0x8000)) {
- outl(virt_to_bus(skb_put(skb, pkt_len)),
- ioaddr + Wn7_MasterAddr);
+ dma_addr_t dma = pci_map_single(vp->pdev, skb_put(skb, pkt_len),
+ pkt_len);
+ outl(dma, ioaddr + Wn7_MasterAddr);
outw((skb->len + 3) & ~3, ioaddr + Wn7_MasterLen);
outw(StartDMAUp, ioaddr + EL3_CMD);
while (inw(ioaddr + Wn7_MasterStatus) & 0x8000)
;
+ pci_unmap_single(vp->pdev, dma, pkt_len);
} else {
insl(ioaddr + RX_FIFO, skb_put(skb, pkt_len),
(pkt_len + 3) >> 2);
/* The packet length: up to 4.5K! */
int pkt_len = rx_status & 0x1fff;
struct sk_buff *skb;
+ dma_addr_t dma = le32_to_cpu(vp->rx_ring[entry].addr);
vp->stats.rx_bytes += pkt_len;
if (vortex_debug > 4)
&& (skb = dev_alloc_skb(pkt_len + 2)) != 0) {
skb->dev = dev;
skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
+ pci_dma_sync_single(vp->pdev, dma, PKT_BUF_SZ);
/* 'skb_put()' points to the start of sk_buff data area. */
memcpy(skb_put(skb, pkt_len),
- bus_to_virt(le32_to_cpu(vp->rx_ring[entry].addr)),
+ vp->rx_skbuff[entry]->tail,
pkt_len);
rx_copy++;
} else {
- void *temp;
/* Pass up the skbuff already on the Rx ring. */
skb = vp->rx_skbuff[entry];
vp->rx_skbuff[entry] = NULL;
- temp = skb_put(skb, pkt_len);
- /* Remove this checking code for final release. */
- if (bus_to_virt(le32_to_cpu(vp->rx_ring[entry].addr)) != temp)
- printk(KERN_ERR "%s: Warning -- the skbuff addresses do not match"
- " in boomerang_rx: %p vs. %p.\n", dev->name,
- bus_to_virt(le32_to_cpu(vp->rx_ring[entry].addr)),
- temp);
+ skb_put(skb, pkt_len);
+ pci_unmap_single(vp->pdev, dma, PKT_BUF_SZ);
rx_nocopy++;
}
skb->protocol = eth_type_trans(skb, dev);
break; /* Bad news! */
skb->dev = dev; /* Mark as being used by this device. */
skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
- vp->rx_ring[entry].addr = cpu_to_le32(virt_to_bus(skb->tail));
+ vp->rx_ring[entry].addr = cpu_to_le32(pci_map_single(vp->pdev, skb->tail, PKT_BUF_SZ));
vp->rx_skbuff[entry] = skb;
}
vp->rx_ring[entry].status = 0; /* Clear complete bit. */
#if LINUX_VERSION_CODE < 0x20100
vp->rx_skbuff[i]->free = 1;
#endif
+ pci_unmap_single(vp->pdev, le32_to_cpu(vp->rx_ring[i].addr), PKT_BUF_SZ);
DEV_FREE_SKB(vp->rx_skbuff[i]);
vp->rx_skbuff[i] = 0;
}
outl(0, ioaddr + DownListPtr);
for (i = 0; i < TX_RING_SIZE; i++)
if (vp->tx_skbuff[i]) {
- DEV_FREE_SKB(vp->tx_skbuff[i]);
+ struct sk_buff *skb = vp->tx_skbuff[i];
+
+ pci_unmap_single(vp->pdev, le32_to_cpu(vp->tx_ring[i].addr), skb->len);
+ DEV_FREE_SKB(skb);
vp->tx_skbuff[i] = 0;
}
}
release_region(root_vortex_dev->base_addr,
pci_tbl[vp->chip_id].io_size);
kfree(root_vortex_dev);
- kfree(vp->priv_addr);
+ pci_free_consistent(vp->pdev, sizeof(struct boom_rx_desc) * RX_RING_SIZE
+ + sizeof(struct boom_tx_desc) * TX_RING_SIZE
+ + 15, vp->priv_addr, vp->ring_dma);
+ kfree(vp);
root_vortex_dev = next_dev;
}
}
* Additional work by Pete Wyckoff <wyckoff@ca.sandia.gov> for initial
* Alpha and trace dump support. The trace dump support has not
* yet been integrated, however.
+ *
+ * Big-endian+Sparc fixes and conversion to new PCI dma mapping
+ * infrastructure by David S. Miller <davem@redhat.com>.
*/
#include <linux/config.h>
static int probed __initdata = 0;
+void ace_free_descriptors(struct net_device *dev)
+{
+ struct ace_private *ap = dev->priv;
+ int size;
+
+ if (ap->rx_std_ring != NULL) {
+ size = (sizeof(struct rx_desc) *
+ (RX_STD_RING_ENTRIES +
+ RX_JUMBO_RING_ENTRIES +
+ RX_MINI_RING_ENTRIES +
+ RX_RETURN_RING_ENTRIES));
+ pci_free_consistent(ap->pdev, size,
+ ap->rx_std_ring,
+ ap->rx_ring_base_dma);
+ ap->rx_std_ring = NULL;
+ ap->rx_jumbo_ring = NULL;
+ ap->rx_mini_ring = NULL;
+ ap->rx_return_ring = NULL;
+ }
+ if (ap->evt_ring != NULL) {
+ size = (sizeof(struct event) * EVT_RING_ENTRIES);
+ pci_free_consistent(ap->pdev, size,
+ ap->evt_ring,
+ ap->evt_ring_dma);
+ ap->evt_ring = NULL;
+ }
+ if (ap->evt_prd != NULL) {
+ pci_free_consistent(ap->pdev, sizeof(u32),
+ (void *)ap->evt_prd, ap->evt_prd_dma);
+ ap->evt_prd = NULL;
+ }
+ if (ap->rx_ret_prd != NULL) {
+ pci_free_consistent(ap->pdev, sizeof(u32),
+ (void *)ap->rx_ret_prd, ap->rx_ret_prd_dma);
+ ap->rx_ret_prd = NULL;
+ }
+ if (ap->tx_csm != NULL) {
+ pci_free_consistent(ap->pdev, sizeof(u32),
+ (void *)ap->tx_csm, ap->tx_csm_dma);
+ ap->tx_csm = NULL;
+ }
+}
+
+int ace_allocate_descriptors(struct net_device *dev)
+{
+ struct ace_private *ap = dev->priv;
+ int size;
+
+ size = (sizeof(struct rx_desc) *
+ (RX_STD_RING_ENTRIES +
+ RX_JUMBO_RING_ENTRIES +
+ RX_MINI_RING_ENTRIES +
+ RX_RETURN_RING_ENTRIES));
+
+ ap->rx_std_ring = pci_alloc_consistent(ap->pdev, size,
+ &ap->rx_ring_base_dma);
+ if (ap->rx_std_ring == NULL)
+ goto fail;
+
+ ap->rx_jumbo_ring = ap->rx_std_ring + RX_STD_RING_ENTRIES;
+ ap->rx_mini_ring = ap->rx_jumbo_ring + RX_JUMBO_RING_ENTRIES;
+ ap->rx_return_ring = ap->rx_mini_ring + RX_MINI_RING_ENTRIES;
+
+ size = (sizeof(struct event) * EVT_RING_ENTRIES);
+
+ ap->evt_ring = pci_alloc_consistent(ap->pdev, size,
+ &ap->evt_ring_dma);
+
+ if (ap->evt_ring == NULL)
+ goto fail;
+
+ ap->evt_prd = pci_alloc_consistent(ap->pdev, sizeof(u32),
+ &ap->evt_prd_dma);
+ if (ap->evt_prd == NULL)
+ goto fail;
+
+ ap->rx_ret_prd = pci_alloc_consistent(ap->pdev, sizeof(u32),
+ &ap->rx_ret_prd_dma);
+ if (ap->rx_ret_prd == NULL)
+ goto fail;
+
+ ap->tx_csm = pci_alloc_consistent(ap->pdev, sizeof(u32),
+ &ap->tx_csm_dma);
+ if (ap->tx_csm == NULL)
+ goto fail;
+
+ return 0;
+
+fail:
+ /* Clean up. */
+ ace_free_descriptors(dev);
+ iounmap(ap->regs);
+ unregister_netdev(dev);
+ return 1;
+}
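`ace_allocate_descriptors()` above follows the common kernel "goto fail" idiom: each allocation is checked in order, and a single failure path releases everything acquired so far via `ace_free_descriptors()`, which tolerates NULL members. A user-space sketch of the same shape, with hypothetical buffers in place of the DMA rings:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct rings { void *rx; void *evt; void *csm; };

/* Single cleanup routine; NULL members are skipped by free(),
 * so partially completed allocations can reuse this path. */
static void rings_free(struct rings *r)
{
	free(r->rx);  r->rx = NULL;
	free(r->evt); r->evt = NULL;
	free(r->csm); r->csm = NULL;
}

static int rings_alloc(struct rings *r, size_t n)
{
	memset(r, 0, sizeof(*r));
	if ((r->rx = malloc(n)) == NULL)
		goto fail;
	if ((r->evt = malloc(n)) == NULL)
		goto fail;
	if ((r->csm = malloc(sizeof(int))) == NULL)
		goto fail;
	return 0;
fail:
	rings_free(r);
	return 1;
}
```

Zeroing the struct up front is what makes the shared failure path safe at any point.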
int __init acenic_probe(void)
{
pci_set_master(pdev);
+#ifdef __sparc__
+ /* NOTE: Cache line size is in 32-bit word units. */
+ pci_write_config_byte(pdev, PCI_CACHE_LINE_SIZE, 0x10);
+#endif
/*
* Remap the regs into kernel space - this is abuse of
* dev->base_addr since it was meant for I/O ports
}
#endif
+ if (ace_allocate_descriptors(dev))
+ continue;
+
#ifdef MODULE
if (ace_init(dev, boards_found))
continue;
synchronize_irq();
for (i = 0; i < RX_STD_RING_ENTRIES; i++) {
- if (ap->skb->rx_std_skbuff[i]) {
+ struct sk_buff *skb = ap->skb->rx_std_skbuff[i].skb;
+
+ if (skb) {
+ dma_addr_t mapping;
+
+ mapping = ap->skb->rx_std_skbuff[i].mapping;
+
ap->rx_std_ring[i].size = 0;
- set_aceaddr_bus(&ap->rx_std_ring[i].addr, 0);
- dev_kfree_skb(ap->skb->rx_std_skbuff[i]);
+ set_aceaddr(&ap->rx_std_ring[i].addr, 0);
+ pci_unmap_single(ap->pdev, mapping,
+ ACE_STD_BUFSIZE - (2 + 16));
+ dev_kfree_skb(skb);
+ ap->skb->rx_std_skbuff[i].skb = NULL;
}
}
if (ap->version >= 2) {
for (i = 0; i < RX_MINI_RING_ENTRIES; i++) {
- if (ap->skb->rx_mini_skbuff[i]) {
+ struct sk_buff *skb = ap->skb->rx_mini_skbuff[i].skb;
+
+ if (skb) {
+ dma_addr_t mapping;
+
+ mapping = ap->skb->rx_mini_skbuff[i].mapping;
+
ap->rx_mini_ring[i].size = 0;
- set_aceaddr_bus(&ap->rx_mini_ring[i].addr, 0);
- dev_kfree_skb(ap->skb->rx_mini_skbuff[i]);
+ set_aceaddr(&ap->rx_mini_ring[i].addr, 0);
+ pci_unmap_single(ap->pdev, mapping,
+ ACE_MINI_BUFSIZE - (2 + 16));
+ dev_kfree_skb(skb);
}
}
}
+ ace_free_descriptors(root_dev);
+
iounmap(regs);
if(ap->trace_buf)
kfree(ap->trace_buf);
- kfree(ap->info);
+ pci_free_consistent(ap->pdev, sizeof(struct ace_info),
+ ap->info, ap->info_dma);
kfree(ap->skb);
free_irq(root_dev->irq, root_dev);
unregister_netdev(root_dev);
/*
* Don't access any other registers before this point!
*/
-#ifdef __BIG_ENDIAN
- writel(((BYTE_SWAP | WORD_SWAP | CLR_INT) |
- ((BYTE_SWAP | WORD_SWAP | CLR_INT) << 24)),
- ®s->HostCtrl);
-#else
writel((CLR_INT | WORD_SWAP | ((CLR_INT | WORD_SWAP) << 24)),
®s->HostCtrl);
-#endif
mb();
/*
* value a second time works as well. This is what caused the
* `Firmware not running' problem on the Tigon II.
*/
-#ifdef __LITTLE_ENDIAN
writel(ACE_BYTE_SWAP_DATA | ACE_WARN | ACE_FATAL |
ACE_WORD_SWAP | ACE_NO_JUMBO_FRAG, ®s->ModeStat);
-#else
-#error "this driver doesn't run on big-endian machines yet!"
-#endif
mac1 = 0;
for(i = 0; i < 4; i++){
* and the control blocks for the transmit and receive rings
* as they need to be setup once and for all.
*/
- if (!(info = kmalloc(sizeof(struct ace_info), GFP_KERNEL)))
- return -EAGAIN;
+ info = pci_alloc_consistent(ap->pdev, sizeof(struct ace_info),
+ &ap->info_dma);
+ if (info == NULL)
+ goto fail;
/*
* Get the memory for the skb rings.
*/
if (!(ap->skb = kmalloc(sizeof(struct ace_skb), GFP_KERNEL)))
- return -EAGAIN;
+ goto fail;
+
+ memset(ap->skb, 0, sizeof(struct ace_skb));
if (request_irq(dev->irq, ace_interrupt, SA_SHIRQ, ap->name, dev)) {
printk(KERN_WARNING "%s: Requested IRQ %d is busy\n",
dev->name, dev->irq);
- return -EAGAIN;
+ goto fail;
}
/*
root_dev = dev;
ap->info = info;
- memset(info, 0, sizeof(struct ace_info));
- memset(ap->skb, 0, sizeof(struct ace_skb));
ace_load_firmware(dev);
ap->fw_running = 0;
- tmp_ptr = virt_to_bus((void *)info);
+ tmp_ptr = (unsigned long) ap->info_dma;
#if (BITS_PER_LONG == 64)
writel(tmp_ptr >> 32, ®s->InfoPtrHi);
#else
memset(ap->evt_ring, 0, EVT_RING_ENTRIES * sizeof(struct event));
- set_aceaddr(&info->evt_ctrl.rngptr, ap->evt_ring);
+ set_aceaddr(&info->evt_ctrl.rngptr, ap->evt_ring_dma);
info->evt_ctrl.flags = 0;
- set_aceaddr(&info->evt_prd_ptr, &ap->evt_prd);
- ap->evt_prd = 0;
+ set_aceaddr(&info->evt_prd_ptr, ap->evt_prd_dma);
+ *(ap->evt_prd) = 0;
wmb();
writel(0, ®s->EvtCsm);
- set_aceaddr_bus(&info->cmd_ctrl.rngptr, (void *)0x100);
+ set_aceaddr(&info->cmd_ctrl.rngptr, 0x100);
info->cmd_ctrl.flags = 0;
info->cmd_ctrl.max_len = 0;
writel(0, ®s->CmdPrd);
writel(0, ®s->CmdCsm);
- set_aceaddr(&info->stats2_ptr, &info->s.stats);
+ tmp_ptr = ap->info_dma;
+ tmp_ptr += (unsigned long) &(((struct ace_info *)0)->s.stats);
+ set_aceaddr(&info->stats2_ptr, (dma_addr_t) tmp_ptr);
- set_aceaddr(&info->rx_std_ctrl.rngptr, ap->rx_std_ring);
- info->rx_std_ctrl.max_len = ACE_STD_MTU + ETH_HLEN + 4;
- info->rx_std_ctrl.flags = RCB_FLG_TCP_UDP_SUM;
+ set_aceaddr(&info->rx_std_ctrl.rngptr, ap->rx_ring_base_dma);
+ info->rx_std_ctrl.max_len = cpu_to_le16(ACE_STD_MTU + ETH_HLEN + 4);
+ info->rx_std_ctrl.flags = cpu_to_le16(RCB_FLG_TCP_UDP_SUM);
memset(ap->rx_std_ring, 0,
RX_STD_RING_ENTRIES * sizeof(struct rx_desc));
for (i = 0; i < RX_STD_RING_ENTRIES; i++)
- ap->rx_std_ring[i].flags = BD_FLG_TCP_UDP_SUM;
+ ap->rx_std_ring[i].flags = cpu_to_le16(BD_FLG_TCP_UDP_SUM);
ap->rx_std_skbprd = 0;
atomic_set(&ap->cur_rx_bufs, 0);
- set_aceaddr(&info->rx_jumbo_ctrl.rngptr, ap->rx_jumbo_ring);
+ set_aceaddr(&info->rx_jumbo_ctrl.rngptr,
+ (ap->rx_ring_base_dma +
+ (sizeof(struct rx_desc) * RX_STD_RING_ENTRIES)));
info->rx_jumbo_ctrl.max_len = 0;
- info->rx_jumbo_ctrl.flags = RCB_FLG_TCP_UDP_SUM;
+ info->rx_jumbo_ctrl.flags = cpu_to_le16(RCB_FLG_TCP_UDP_SUM);
memset(ap->rx_jumbo_ring, 0,
RX_JUMBO_RING_ENTRIES * sizeof(struct rx_desc));
for (i = 0; i < RX_JUMBO_RING_ENTRIES; i++)
- ap->rx_jumbo_ring[i].flags = BD_FLG_TCP_UDP_SUM | BD_FLG_JUMBO;
+ ap->rx_jumbo_ring[i].flags =
+ cpu_to_le16(BD_FLG_TCP_UDP_SUM | BD_FLG_JUMBO);
ap->rx_jumbo_skbprd = 0;
atomic_set(&ap->cur_jumbo_bufs, 0);
RX_MINI_RING_ENTRIES * sizeof(struct rx_desc));
if (ap->version >= 2) {
- set_aceaddr(&info->rx_mini_ctrl.rngptr, ap->rx_mini_ring);
- info->rx_mini_ctrl.max_len = ACE_MINI_SIZE;
- info->rx_mini_ctrl.flags = RCB_FLG_TCP_UDP_SUM;
+ set_aceaddr(&info->rx_mini_ctrl.rngptr,
+ (ap->rx_ring_base_dma +
+ (sizeof(struct rx_desc) *
+ (RX_STD_RING_ENTRIES +
+ RX_JUMBO_RING_ENTRIES))));
+ info->rx_mini_ctrl.max_len = cpu_to_le16(ACE_MINI_SIZE);
+ info->rx_mini_ctrl.flags = cpu_to_le16(RCB_FLG_TCP_UDP_SUM);
for (i = 0; i < RX_MINI_RING_ENTRIES; i++)
ap->rx_mini_ring[i].flags =
- BD_FLG_TCP_UDP_SUM | BD_FLG_MINI;
+ cpu_to_le16(BD_FLG_TCP_UDP_SUM | BD_FLG_MINI);
} else {
set_aceaddr(&info->rx_mini_ctrl.rngptr, 0);
- info->rx_mini_ctrl.flags = RCB_FLG_RNG_DISABLE;
+ info->rx_mini_ctrl.flags = cpu_to_le16(RCB_FLG_RNG_DISABLE);
info->rx_mini_ctrl.max_len = 0;
}
ap->rx_mini_skbprd = 0;
atomic_set(&ap->cur_mini_bufs, 0);
- set_aceaddr(&info->rx_return_ctrl.rngptr, ap->rx_return_ring);
+ set_aceaddr(&info->rx_return_ctrl.rngptr,
+ (ap->rx_ring_base_dma +
+ (sizeof(struct rx_desc) *
+ (RX_STD_RING_ENTRIES +
+ RX_JUMBO_RING_ENTRIES +
+ RX_MINI_RING_ENTRIES))));
info->rx_return_ctrl.flags = 0;
- info->rx_return_ctrl.max_len = RX_RETURN_RING_ENTRIES;
+ info->rx_return_ctrl.max_len = cpu_to_le16(RX_RETURN_RING_ENTRIES);
memset(ap->rx_return_ring, 0,
RX_RETURN_RING_ENTRIES * sizeof(struct rx_desc));
- set_aceaddr(&info->rx_ret_prd_ptr, &ap->rx_ret_prd);
+ set_aceaddr(&info->rx_ret_prd_ptr, ap->rx_ret_prd_dma);
+ *(ap->rx_ret_prd) = 0;
writel(TX_RING_BASE, ®s->WinBase);
ap->tx_ring = (struct tx_desc *)regs->Window;
writel(0, (unsigned long)ap->tx_ring + i * 4);
}
- set_aceaddr_bus(&info->tx_ctrl.rngptr, (void *)TX_RING_BASE);
- info->tx_ctrl.max_len = TX_RING_ENTRIES;
+ set_aceaddr(&info->tx_ctrl.rngptr, TX_RING_BASE);
+ info->tx_ctrl.max_len = cpu_to_le16(TX_RING_ENTRIES);
#if TX_COAL_INTS_ONLY
- info->tx_ctrl.flags = RCB_FLG_COAL_INT_ONLY;
+ info->tx_ctrl.flags = cpu_to_le16(RCB_FLG_COAL_INT_ONLY);
#else
info->tx_ctrl.flags = 0;
#endif
- set_aceaddr(&info->tx_csm_ptr, &ap->tx_csm);
+ set_aceaddr(&info->tx_csm_ptr, ap->tx_csm_dma);
/*
* Potential item for tuning parameter
*/
ap->tx_full = 0;
ap->cur_rx = 0;
- ap->tx_prd = ap->tx_csm = ap->tx_ret_csm = 0;
+ ap->tx_prd = *(ap->tx_csm) = ap->tx_ret_csm = 0;
wmb();
writel(0, ®s->TxPrd);
"the RX mini ring\n", dev->name);
}
return 0;
+
+fail:
+ if (info != NULL)
+ pci_free_consistent(ap->pdev, sizeof(struct ace_info),
+ info, ap->info_dma);
+ if (ap->skb != NULL) {
+ kfree(ap->skb);
+ ap->skb = NULL;
+ }
+
+ return -EAGAIN;
}
* seconds and there is data in the transmit queue, thus we
* assume the card is stuck.
*/
- if (ap->tx_csm != ap->tx_ret_csm){
+ if (le32_to_cpu(*(ap->tx_csm)) != ap->tx_ret_csm){
printk(KERN_WARNING "%s: Transmitter is stuck, %08x\n",
dev->name, (unsigned int)readl(®s->HostCtrl));
}
for (i = 0; i < nr_bufs; i++) {
struct sk_buff *skb;
struct rx_desc *rd;
+ dma_addr_t mapping;
skb = alloc_skb(ACE_STD_BUFSIZE, GFP_ATOMIC);
/*
* Make sure IP header starts on a fresh cache line.
*/
skb_reserve(skb, 2 + 16);
- ap->skb->rx_std_skbuff[idx] = skb;
+ mapping = pci_map_single(ap->pdev, skb->data,
+ ACE_STD_BUFSIZE - (2 + 16));
+ ap->skb->rx_std_skbuff[idx].skb = skb;
+ ap->skb->rx_std_skbuff[idx].mapping = mapping;
rd = &ap->rx_std_ring[idx];
- set_aceaddr(&rd->addr, skb->data);
- rd->size = ACE_STD_MTU + ETH_HLEN + 4;
- rd->idx = idx;
+ set_aceaddr(&rd->addr, mapping);
+ rd->size = cpu_to_le16(ACE_STD_MTU + ETH_HLEN + 4);
+ rd->idx = cpu_to_le16(idx);
idx = (idx + 1) % RX_STD_RING_ENTRIES;
}
for (i = 0; i < nr_bufs; i++) {
struct sk_buff *skb;
struct rx_desc *rd;
+ dma_addr_t mapping;
skb = alloc_skb(ACE_MINI_BUFSIZE, GFP_ATOMIC);
/*
* Make sure the IP header ends up on a fresh cache line
*/
skb_reserve(skb, 2 + 16);
- ap->skb->rx_mini_skbuff[idx] = skb;
+ mapping = pci_map_single(ap->pdev, skb->data,
+ ACE_MINI_BUFSIZE - (2 + 16));
+ ap->skb->rx_mini_skbuff[idx].skb = skb;
+ ap->skb->rx_mini_skbuff[idx].mapping = mapping;
rd = &ap->rx_mini_ring[idx];
- set_aceaddr(&rd->addr, skb->data);
- rd->size = ACE_MINI_SIZE;
- rd->idx = idx;
+ set_aceaddr(&rd->addr, mapping);
+ rd->size = cpu_to_le16(ACE_MINI_SIZE);
+ rd->idx = cpu_to_le16(idx);
idx = (idx + 1) % RX_MINI_RING_ENTRIES;
}
for (i = 0; i < nr_bufs; i++) {
struct sk_buff *skb;
struct rx_desc *rd;
+ dma_addr_t mapping;
skb = alloc_skb(ACE_JUMBO_BUFSIZE, GFP_ATOMIC);
/*
* Make sure the IP header ends up on a fresh cache line
*/
skb_reserve(skb, 2 + 16);
- ap->skb->rx_jumbo_skbuff[idx] = skb;
+ mapping = pci_map_single(ap->pdev, skb->data,
+ ACE_JUMBO_BUFSIZE - (2 + 16));
+ ap->skb->rx_jumbo_skbuff[idx].skb = skb;
+ ap->skb->rx_jumbo_skbuff[idx].mapping = mapping;
rd = &ap->rx_jumbo_ring[idx];
- set_aceaddr(&rd->addr, skb->data);
- rd->size = ACE_JUMBO_MTU + ETH_HLEN + 4;
- rd->idx = idx;
+ set_aceaddr(&rd->addr, mapping);
+ rd->size = cpu_to_le16(ACE_JUMBO_MTU + ETH_HLEN + 4);
+ rd->idx = cpu_to_le16(idx);
idx = (idx + 1) % RX_JUMBO_RING_ENTRIES;
}
ace_issue_cmd(regs, &cmd);
for (i = 0; i < RX_JUMBO_RING_ENTRIES; i++) {
- if (ap->skb->rx_jumbo_skbuff[i]) {
+ struct sk_buff *skb;
+
+ skb = ap->skb->rx_jumbo_skbuff[i].skb;
+ if (skb) {
+ dma_addr_t mapping;
+
+ mapping = ap->skb->rx_jumbo_skbuff[i].mapping;
+
ap->rx_jumbo_ring[i].size = 0;
- set_aceaddr_bus(&ap->rx_jumbo_ring[i].addr, 0);
- dev_kfree_skb(ap->skb->rx_jumbo_skbuff[i]);
+ set_aceaddr(&ap->rx_jumbo_ring[i].addr, 0);
+ pci_unmap_single(ap->pdev, mapping,
+ ACE_JUMBO_BUFSIZE - (2 + 16));
+ dev_kfree_skb(skb);
+ ap->skb->rx_jumbo_skbuff[i].skb = NULL;
}
}
}else
ap = (struct ace_private *)dev->priv;
while (evtcsm != evtprd){
- switch (ap->evt_ring[evtcsm].evt){
+ struct event evt_local;
+
+ memcpy(&evt_local, &ap->evt_ring[evtcsm], sizeof(evt_local));
+ evt_local.u.word = le32_to_cpu(evt_local.u.word);
+ switch (evt_local.u.data.evt){
case E_FW_RUNNING:
printk(KERN_INFO "%s: Firmware up and running\n",
dev->name);
break;
case E_LNK_STATE:
{
- u16 code = ap->evt_ring[evtcsm].code;
+ u16 code = evt_local.u.data.code;
if (code == E_C_LINK_UP){
printk(KERN_WARNING "%s: Optical link UP\n",
dev->name);
break;
}
case E_ERROR:
- switch(ap->evt_ring[evtcsm].code){
+ switch(evt_local.u.data.code){
case E_C_ERR_INVAL_CMD:
printk(KERN_ERR "%s: invalid command error\n",
dev->name);
break;
default:
printk(KERN_ERR "%s: unknown error %02x\n",
- dev->name, ap->evt_ring[evtcsm].code);
+ dev->name, evt_local.u.data.code);
}
break;
case E_RESET_JUMBO_RNG:
break;
default:
printk(KERN_ERR "%s: Unhandled event 0x%02x\n",
- dev->name, ap->evt_ring[evtcsm].evt);
+ dev->name, evt_local.u.data.evt);
}
evtcsm = (evtcsm + 1) % EVT_RING_ENTRIES;
}
idx = rxretcsm;
while (idx != rxretprd){
- struct sk_buff *skb, **oldskb_p;
+ struct ring_info *rip;
+ struct sk_buff *skb;
struct rx_desc *rxdesc;
+ dma_addr_t mapping;
u32 skbidx;
- int desc_type;
+ int desc_type, mapsize;
u16 csum;
- skbidx = ap->rx_return_ring[idx].idx;
- desc_type = ap->rx_return_ring[idx].flags &
+ skbidx = le16_to_cpu(ap->rx_return_ring[idx].idx);
+ desc_type = le16_to_cpu(ap->rx_return_ring[idx].flags) &
(BD_FLG_JUMBO | BD_FLG_MINI);
switch(desc_type) {
* atomic operations for each packet arriving.
*/
case 0:
- oldskb_p = &ap->skb->rx_std_skbuff[skbidx];
+ rip = &ap->skb->rx_std_skbuff[skbidx];
+ mapsize = ACE_STD_BUFSIZE - (2 + 16);
rxdesc = &ap->rx_std_ring[skbidx];
std_count++;
break;
case BD_FLG_JUMBO:
- oldskb_p = &ap->skb->rx_jumbo_skbuff[skbidx];
+ rip = &ap->skb->rx_jumbo_skbuff[skbidx];
+ mapsize = ACE_JUMBO_BUFSIZE - (2 + 16);
rxdesc = &ap->rx_jumbo_ring[skbidx];
atomic_dec(&ap->cur_jumbo_bufs);
break;
case BD_FLG_MINI:
- oldskb_p = &ap->skb->rx_mini_skbuff[skbidx];
+ rip = &ap->skb->rx_mini_skbuff[skbidx];
+ mapsize = ACE_MINI_BUFSIZE - (2 + 16);
rxdesc = &ap->rx_mini_ring[skbidx];
mini_count++;
break;
default:
printk(KERN_INFO "%s: unknown frame type (0x%02x) "
"returned by NIC\n", dev->name,
- ap->rx_return_ring[idx].flags);
+ le16_to_cpu(ap->rx_return_ring[idx].flags));
goto error;
}
- skb = *oldskb_p;
+ skb = rip->skb;
+ mapping = rip->mapping;
#if DEBUG
if (skb == NULL) {
printk("Mayday! illegal skb received! (idx %i)\n", skbidx);
goto error;
}
#endif
- *oldskb_p = NULL;
- skb_put(skb, rxdesc->size);
+ rip->skb = NULL;
+ pci_unmap_single(ap->pdev, mapping, mapsize);
+ skb_put(skb, le16_to_cpu(rxdesc->size));
rxdesc->size = 0;
/*
* Fly baby, fly!
*/
- csum = ap->rx_return_ring[idx].tcp_udp_csum;
+ csum = le16_to_cpu(ap->rx_return_ring[idx].tcp_udp_csum);
skb->dev = dev;
skb->protocol = eth_type_trans(skb, dev);
* working on the other stuff - hey we don't need a spin lock
* anymore.
*/
- rxretprd = ap->rx_ret_prd;
+ rxretprd = le32_to_cpu(*(ap->rx_ret_prd));
rxretcsm = ap->cur_rx;
if (rxretprd != rxretcsm)
ace_rx_int(dev, rxretprd, rxretcsm);
- txcsm = ap->tx_csm;
+ txcsm = le32_to_cpu(*(ap->tx_csm));
idx = ap->tx_ret_csm;
if (txcsm != idx) {
do {
+ struct sk_buff *skb;
+ dma_addr_t mapping;
+
+ skb = ap->skb->tx_skbuff[idx].skb;
+ mapping = ap->skb->tx_skbuff[idx].mapping;
+
ap->stats.tx_packets++;
- ap->stats.tx_bytes += ap->skb->tx_skbuff[idx]->len;
- dev_kfree_skb(ap->skb->tx_skbuff[idx]);
- ap->skb->tx_skbuff[idx] = NULL;
+ ap->stats.tx_bytes += skb->len;
+ pci_unmap_single(ap->pdev, mapping, skb->len);
+ dev_kfree_skb(skb);
+
+ ap->skb->tx_skbuff[idx].skb = NULL;
/*
* Question here is whether one should not skip
}
evtcsm = readl(®s->EvtCsm);
- evtprd = ap->evt_prd;
+ evtprd = le32_to_cpu(*(ap->evt_prd));
if (evtcsm != evtprd) {
evtcsm = ace_handle_event(dev, evtcsm, evtprd);
cli();
for (i = 0; i < TX_RING_ENTRIES; i++) {
- if (ap->skb->tx_skbuff[i]) {
+ struct sk_buff *skb;
+ dma_addr_t mapping;
+
+ skb = ap->skb->tx_skbuff[i].skb;
+ mapping = ap->skb->tx_skbuff[i].mapping;
+ if (skb) {
writel(0, &ap->tx_ring[i].addr.addrhi);
writel(0, &ap->tx_ring[i].addr.addrlo);
writel(0, &ap->tx_ring[i].flagsize);
- dev_kfree_skb(ap->skb->tx_skbuff[i]);
+ pci_unmap_single(ap->pdev, mapping, skb->len);
+ dev_kfree_skb(skb);
}
}
return 1;
}
- ap->skb->tx_skbuff[idx] = skb;
- addr = virt_to_bus(skb->data);
+ ap->skb->tx_skbuff[idx].skb = skb;
+ ap->skb->tx_skbuff[idx].mapping =
+ pci_map_single(ap->pdev, skb->data, skb->len);
+ addr = (unsigned long) ap->skb->tx_skbuff[idx].mapping;
#if (BITS_PER_LONG == 64)
writel(addr >> 32, &ap->tx_ring[idx].addr.addrhi);
#endif
}
-void __init ace_copy(struct ace_regs *regs, void *src, u32 dest, int size)
+void __init ace_copy(struct ace_regs *regs, void *src, unsigned long dest, int size)
{
unsigned long tdest;
u32 *wsrc;
- short tsize, i;
+ unsigned long tsize, i;
if (size <= 0)
return;
 tdest = (unsigned long)&regs->Window +
(dest & (ACE_WINDOW_SIZE - 1));
 writel(dest & ~(ACE_WINDOW_SIZE - 1), &regs->WinBase);
-#ifdef __BIG_ENDIAN
-#error "data must be swapped here"
-#else
wsrc = src;
for (i = 0; i < (tsize / 4); i++){
writel(wsrc[i], tdest + i*4);
}
-#endif
dest += tsize;
src += tsize;
size -= tsize;
}
-void __init ace_clear(struct ace_regs *regs, u32 dest, int size)
+void __init ace_clear(struct ace_regs *regs, unsigned long dest, int size)
{
unsigned long tdest;
- short tsize = 0, i;
+ unsigned long tsize = 0, i;
if (size <= 0)
return;
} aceaddr;
-static inline void set_aceaddr(aceaddr *aa, volatile void *addr)
+static inline void set_aceaddr(aceaddr *aa, dma_addr_t addr)
{
- unsigned long baddr = virt_to_bus((void *)addr);
+ unsigned long baddr = (unsigned long) addr;
#if (BITS_PER_LONG == 64)
- aa->addrlo = baddr & 0xffffffff;
- aa->addrhi = baddr >> 32;
+ aa->addrlo = cpu_to_le32(baddr & 0xffffffff);
+ aa->addrhi = cpu_to_le32(baddr >> 32);
#else
/* Don't bother setting zero every time */
- aa->addrlo = baddr;
+ aa->addrlo = cpu_to_le32(baddr);
#endif
mb();
}
-
-static inline void set_aceaddr_bus(aceaddr *aa, volatile void *addr)
-{
- unsigned long baddr = (unsigned long)addr;
-#if (BITS_PER_LONG == 64)
- aa->addrlo = baddr & 0xffffffff;
- aa->addrhi = baddr >> 32;
-#else
- /* Don't bother setting zero every time */
- aa->addrlo = baddr;
-#endif
- mb();
-}
-
-
-static inline void *get_aceaddr(aceaddr *aa)
-{
- unsigned long addr;
- mb();
-#if (BITS_PER_LONG == 64)
- addr = (u64)aa->addrhi << 32 | aa->addrlo;
-#else
- addr = aa->addrlo;
-#endif
- return bus_to_virt(addr);
-}
-
-
-static inline void *get_aceaddr_bus(aceaddr *aa)
-{
- unsigned long addr;
- mb();
-#if (BITS_PER_LONG == 64)
- addr = (u64)aa->addrhi << 32 | aa->addrlo;
-#else
- addr = aa->addrlo;
-#endif
- return (void *)addr;
-}
-
-
struct ace_regs {
u32 pad0[16]; /* PCI control registers */
#define EVT_RING_SIZE (EVT_RING_ENTRIES * sizeof(struct event))
struct event {
-#ifdef __LITTLE_ENDIAN
- u32 idx:12;
- u32 code:12;
- u32 evt:8;
+ union {
+ u32 word;
+ struct {
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ u32 idx:12;
+ u32 code:12;
+ u32 evt:8;
#else
- u32 evt:8;
- u32 code:12;
- u32 idx:12;
+ u32 evt:8;
+ u32 code:12;
+ u32 idx:12;
#endif
+ } data;
+ } u;
u32 pad;
};
#define CMD_RING_ENTRIES 64
struct cmd {
-#ifdef __LITTLE_ENDIAN
+#if defined(__LITTLE_ENDIAN_BITFIELD)
u32 idx:12;
u32 code:12;
u32 evt:8;
* This is in PCI shared mem and must be accessed with readl/writel
* real layout is:
*/
-#if __LITTLE_ENDIAN
u16 flags;
u16 size;
u16 vlan;
u16 reserved;
-#else
- u16 size;
- u16 flags;
- u16 reserved;
- u16 vlan;
-#endif
#endif
u32 vlanres;
};
struct rx_desc{
aceaddr addr;
-#ifdef __LITTLE_ENDIAN
u16 size;
u16 idx;
-#else
- u16 idx;
- u16 size;
-#endif
-#ifdef __LITTLE_ENDIAN
u16 flags;
u16 type;
-#else
- u16 type;
- u16 flags;
-#endif
-#ifdef __LITTLE_ENDIAN
u16 tcp_udp_csum;
u16 ip_csum;
-#else
- u16 ip_csum;
- u16 tcp_udp_csum;
-#endif
-#ifdef __LITTLE_ENDIAN
u16 vlan;
u16 err_flags;
-#else
- u16 err_flags;
- u16 vlan;
-#endif
u32 reserved;
u32 opague;
};
*/
struct ring_ctrl {
aceaddr rngptr;
-#ifdef __LITTLE_ENDIAN
u16 flags;
u16 max_len;
-#else
- u16 max_len;
- u16 flags;
-#endif
u32 pad;
};
* pointers, but I don't see any other smart mode to do this in an
* efficient manner ;-(
*/
+struct ring_info {
+ struct sk_buff *skb;
+ dma_addr_t mapping;
+};
+
struct ace_skb
{
- struct sk_buff *tx_skbuff[TX_RING_ENTRIES];
- struct sk_buff *rx_std_skbuff[RX_STD_RING_ENTRIES];
- struct sk_buff *rx_mini_skbuff[RX_MINI_RING_ENTRIES];
- struct sk_buff *rx_jumbo_skbuff[RX_JUMBO_RING_ENTRIES];
+ struct ring_info tx_skbuff[TX_RING_ENTRIES];
+ struct ring_info rx_std_skbuff[RX_STD_RING_ENTRIES];
+ struct ring_info rx_mini_skbuff[RX_MINI_RING_ENTRIES];
+ struct ring_info rx_jumbo_skbuff[RX_JUMBO_RING_ENTRIES];
};
{
struct ace_skb *skb;
struct ace_regs *regs; /* register base */
- int version, fw_running, fw_up, link;
+ volatile int fw_running;
+ int version, fw_up, link;
int promisc, mcast_all;
/*
* The send ring is located in the shared memory window
*/
struct ace_info *info;
+ dma_addr_t info_dma;
struct tx_desc *tx_ring;
u32 tx_prd, tx_full, tx_ret_csm;
struct timer_list timer;
u32 cur_rx;
struct tq_struct immediate;
int bh_pending, jumbo;
- struct rx_desc rx_std_ring[RX_STD_RING_ENTRIES]
- __attribute__ ((aligned (L1_CACHE_BYTES)));
- struct rx_desc rx_jumbo_ring[RX_JUMBO_RING_ENTRIES];
- struct rx_desc rx_mini_ring[RX_MINI_RING_ENTRIES];
- struct rx_desc rx_return_ring[RX_RETURN_RING_ENTRIES];
- struct event evt_ring[EVT_RING_ENTRIES];
- volatile u32 evt_prd
- __attribute__ ((aligned (L1_CACHE_BYTES)));
- volatile u32 rx_ret_prd
- __attribute__ ((aligned (L1_CACHE_BYTES)));
- volatile u32 tx_csm
- __attribute__ ((aligned (L1_CACHE_BYTES)));
+
+ /* These elements are allocated using consistent PCI
+ * dma memory.
+ */
+ struct rx_desc *rx_std_ring;
+ struct rx_desc *rx_jumbo_ring;
+ struct rx_desc *rx_mini_ring;
+ struct rx_desc *rx_return_ring;
+ dma_addr_t rx_ring_base_dma;
+
+ struct event *evt_ring;
+ dma_addr_t evt_ring_dma;
+
+ volatile u32 *evt_prd, *rx_ret_prd, *tx_csm;
+ dma_addr_t evt_prd_dma, rx_ret_prd_dma, tx_csm_dma;
+
unsigned char *trace_buf;
struct pci_dev *pdev;
struct net_device *next;
struct de4x5_private {
char adapter_name[80]; /* Adapter name */
u_long interrupt; /* Aligned ISR flag */
- struct de4x5_desc rx_ring[NUM_RX_DESC]; /* RX descriptor ring */
- struct de4x5_desc tx_ring[NUM_TX_DESC]; /* TX descriptor ring */
+ struct de4x5_desc *rx_ring; /* RX descriptor ring */
+ struct de4x5_desc *tx_ring; /* TX descriptor ring */
struct sk_buff *tx_skb[NUM_TX_DESC]; /* TX skb for freeing when sent */
struct sk_buff *rx_skb[NUM_RX_DESC]; /* RX skb's */
int rx_new, rx_old; /* RX descriptor ring pointers */
int tmp; /* Temporary global per card */
struct {
void *priv; /* Original kmalloc'd mem addr */
- void *buf; /* Original kmalloc'd mem addr */
u_long lock; /* Lock the cache accesses */
s32 csr0; /* Saved Bus Mode Register */
s32 csr6; /* Saved Operating Mode Reg. */
u_char *rst; /* Pointer to Type 5 reset info */
u_char ibn; /* Infoblock number */
struct parameters params; /* Command line/ #defined params */
+ struct pci_dev *pdev; /* Device cookie for DMA alloc */
+ dma_addr_t dma_rings; /* DMA handle for rings */
+ int dma_size; /* Size of the DMA area */
+ char *rx_bufs; /* rx bufs on alpha, sparc, ... */
};
/*
/*
** Private functions
*/
-static int de4x5_hw_init(struct net_device *dev, u_long iobase);
+static int de4x5_hw_init(struct net_device *dev, u_long iobase, struct pci_dev *pdev);
static int de4x5_init(struct net_device *dev);
static int de4x5_sw_reset(struct net_device *dev);
static int de4x5_rx(struct net_device *dev);
}
static int __init
-de4x5_hw_init(struct net_device *dev, u_long iobase)
+de4x5_hw_init(struct net_device *dev, u_long iobase, struct pci_dev *pdev)
{
struct bus_type *lp = &bus;
int i, status=0;
char *tmp;
+ dma_addr_t dma_rx_bufs;
/* Ensure we're not sleeping */
if (lp->bus == EISA) {
lp->asBitValid = TRUE;
lp->timeout = -1;
lp->useSROM = useSROM;
+ lp->pdev = pdev;
memcpy((char *)&lp->srom,(char *)&bus.srom,sizeof(struct de4x5_srom));
lp->lock = (spinlock_t) SPIN_LOCK_UNLOCKED;
de4x5_parse_params(dev);
}
lp->fdx = lp->params.fdx;
sprintf(lp->adapter_name,"%s (%s)", name, dev->name);
-
+
+ lp->dma_size = (NUM_RX_DESC + NUM_TX_DESC) * sizeof(struct de4x5_desc);
+#if defined(__alpha__) || defined(__powerpc__) || defined(__sparc_v9__) || defined(DE4X5_DO_MEMCPY)
+ lp->dma_size += RX_BUFF_SZ * NUM_RX_DESC + ALIGN;
+#endif
+ lp->rx_ring = pci_alloc_consistent(pdev, lp->dma_size, &lp->dma_rings);
+ if (lp->rx_ring == NULL) {
+ kfree(lp->cache.priv);
+ lp->cache.priv = NULL;
+ return -ENOMEM;
+ }
+
+ lp->tx_ring = lp->rx_ring + NUM_RX_DESC;
+
/*
** Set up the RX descriptor ring (Intels)
** Allocate contiguous receive buffers, long word aligned (Alphas)
#if !defined(__alpha__) && !defined(__powerpc__) && !defined(__sparc_v9__) && !defined(DE4X5_DO_MEMCPY)
for (i=0; i<NUM_RX_DESC; i++) {
lp->rx_ring[i].status = 0;
- lp->rx_ring[i].des1 = RX_BUFF_SZ;
+ lp->rx_ring[i].des1 = cpu_to_le32(RX_BUFF_SZ);
lp->rx_ring[i].buf = 0;
lp->rx_ring[i].next = 0;
lp->rx_skb[i] = (struct sk_buff *) 1; /* Dummy entry */
}
#else
- if ((tmp = (void *)kmalloc(RX_BUFF_SZ * NUM_RX_DESC + ALIGN,
- GFP_KERNEL)) == NULL) {
- kfree(lp->cache.priv);
- lp->cache.priv = NULL;
- return -ENOMEM;
- }
-
- lp->cache.buf = tmp;
- tmp = (char *)(((u_long) tmp + ALIGN) & ~ALIGN);
+ dma_rx_bufs = lp->dma_rings + (NUM_RX_DESC + NUM_TX_DESC)
+ * sizeof(struct de4x5_desc);
+ dma_rx_bufs = (dma_rx_bufs + ALIGN) & ~ALIGN;
+ lp->rx_bufs = (char *)(((long)(lp->rx_ring + NUM_RX_DESC
+ + NUM_TX_DESC) + ALIGN) & ~ALIGN);
for (i=0; i<NUM_RX_DESC; i++) {
lp->rx_ring[i].status = 0;
lp->rx_ring[i].des1 = cpu_to_le32(RX_BUFF_SZ);
- lp->rx_ring[i].buf = cpu_to_le32(virt_to_bus(tmp+i*RX_BUFF_SZ));
+ lp->rx_ring[i].buf = cpu_to_le32(dma_rx_bufs+i*RX_BUFF_SZ);
lp->rx_ring[i].next = 0;
lp->rx_skb[i] = (struct sk_buff *) 1; /* Dummy entry */
}
/* Write the end of list marker to the descriptor lists */
lp->rx_ring[lp->rxRingSize - 1].des1 |= cpu_to_le32(RD_RER);
lp->tx_ring[lp->txRingSize - 1].des1 |= cpu_to_le32(TD_TER);
-
+
/* Tell the adapter where the TX/RX rings are located. */
- outl(virt_to_bus(lp->rx_ring), DE4X5_RRBA);
- outl(virt_to_bus(lp->tx_ring), DE4X5_TRBA);
+ outl(lp->dma_rings, DE4X5_RRBA);
+ outl(lp->dma_rings + NUM_RX_DESC * sizeof(struct de4x5_desc),
+ DE4X5_TRBA);
/* Initialise the IRQ mask and Enable/Disable */
lp->irq_mask = IMR_RIM | IMR_TIM | IMR_TUM | IMR_UNM;
omr |= (OMR_SDP | OMR_SB);
}
lp->setup_f = PERFECT;
- outl(virt_to_bus(lp->rx_ring), DE4X5_RRBA);
- outl(virt_to_bus(lp->tx_ring), DE4X5_TRBA);
+ outl(lp->dma_rings, DE4X5_RRBA);
+ outl(lp->dma_rings + NUM_RX_DESC * sizeof(struct de4x5_desc),
+ DE4X5_TRBA);
lp->rx_new = lp->rx_old = 0;
lp->tx_new = lp->tx_old = 0;
/* Build the setup frame depending on filtering mode */
SetMulticastFilter(dev);
- load_packet(dev, lp->setup_frame, PERFECT_F|TD_SET|SETUP_FRAME_LEN, NULL);
+ load_packet(dev, lp->setup_frame, PERFECT_F|TD_SET|SETUP_FRAME_LEN, (struct sk_buff *)1);
outl(omr|OMR_ST, DE4X5_OMR);
/* Poll for setup frame completion (adapter interrupts are disabled now) */
return -1;
/* Transmit descriptor ring full or stale skb */
- if (dev->tbusy || lp->tx_skb[lp->tx_new]) {
+ if (dev->tbusy || (u_long) lp->tx_skb[lp->tx_new] > 1) {
if (lp->interrupt) {
de4x5_putb_cache(dev, skb); /* Requeue the buffer */
} else {
de4x5_put_cache(dev, skb);
}
if (de4x5_debug & DEBUG_TX) {
- printk("%s: transmit busy, lost media or stale skb found:\n STS:%08x\n tbusy:%ld\n IMR:%08x\n OMR:%08x\n Stale skb: %s\n",dev->name, inl(DE4X5_STS), dev->tbusy, inl(DE4X5_IMR), inl(DE4X5_OMR), (lp->tx_skb[lp->tx_new] ? "YES" : "NO"));
+ printk("%s: transmit busy, lost media or stale skb found:\n STS:%08x\n tbusy:%ld\n IMR:%08x\n OMR:%08x\n Stale skb: %s\n",dev->name, inl(DE4X5_STS), dev->tbusy, inl(DE4X5_IMR), inl(DE4X5_OMR), ((u_long) lp->tx_skb[lp->tx_new] > 1) ? "YES" : "NO");
}
} else if (skb->len > 0) {
/* If we already have stuff queued locally, use that first */
skb = de4x5_get_cache(dev);
}
- while (skb && !dev->tbusy && !lp->tx_skb[lp->tx_new]) {
+ while (skb && !dev->tbusy && (u_long) lp->tx_skb[lp->tx_new] <= 1) {
spin_lock_irqsave(&lp->lock, flags);
test_and_set_bit(0, (void*)&dev->tbusy);
load_packet(dev, skb->data, TD_IC | TD_LS | TD_FS | skb->len, skb);
return 0;
}
+static inline void
+de4x5_free_tx_buff(struct de4x5_private *lp, int entry)
+{
+ pci_unmap_single(lp->pdev, le32_to_cpu(lp->tx_ring[entry].buf),
+ le32_to_cpu(lp->tx_ring[entry].des1) & TD_TBS1);
+ if ((u_long) lp->tx_skb[entry] > 1)
+ dev_kfree_skb(lp->tx_skb[entry]);
+ lp->tx_skb[entry] = NULL;
+}
+
/*
** Buffer sent - check for TX buffer errors.
*/
((status & TD_CC) >> 3));
/* Free the buffer. */
- if (lp->tx_skb[entry] != NULL) {
- dev_kfree_skb(lp->tx_skb[entry]);
- lp->tx_skb[entry] = NULL;
- }
+ if (lp->tx_skb[entry] != NULL)
+ de4x5_free_tx_buff(lp, entry);
}
/* Update all the pointers */
{
struct de4x5_private *lp = (struct de4x5_private *)dev->priv;
int entry = (lp->tx_new ? lp->tx_new-1 : lp->txRingSize-1);
+ dma_addr_t buf_dma = pci_map_single(lp->pdev, buf, flags & TD_TBS1);
- lp->tx_ring[lp->tx_new].buf = cpu_to_le32(virt_to_bus(buf));
+ lp->tx_ring[lp->tx_new].buf = cpu_to_le32(buf_dma);
lp->tx_ring[lp->tx_new].des1 &= cpu_to_le32(TD_TER);
lp->tx_ring[lp->tx_new].des1 |= cpu_to_le32(flags);
lp->tx_skb[lp->tx_new] = skb;
lp->tx_ring[lp->tx_new].status = cpu_to_le32(T_OWN);
barrier();
-
- return;
}
/*
} else {
SetMulticastFilter(dev);
load_packet(dev, lp->setup_frame, TD_IC | PERFECT_F | TD_SET |
- SETUP_FRAME_LEN, NULL);
+ SETUP_FRAME_LEN, (struct sk_buff *)1);
lp->tx_new = (++lp->tx_new) % lp->txRingSize;
outl(POLL_DEMAND, DE4X5_TPD); /* Start the TX */
DevicePresent(EISA_APROM);
dev->irq = irq;
- if ((status = de4x5_hw_init(dev, iobase)) == 0) {
+ if ((status = de4x5_hw_init(dev, iobase, NULL)) == 0) {
num_de4x5s++;
if (loading_module) link_modules(lastModule, dev);
lastEISA = i;
DevicePresent(DE4X5_APROM);
if (check_region(iobase, DE4X5_PCI_TOTAL_SIZE) == 0) {
dev->irq = irq;
- if ((status = de4x5_hw_init(dev, iobase)) == 0) {
+ if ((status = de4x5_hw_init(dev, iobase, pdev)) == 0) {
num_de4x5s++;
lastPCI = index;
if (loading_module) link_modules(lastModule, dev);
lp->timeout = msec/100;
lp->tmp = lp->tx_new; /* Remember the ring position */
- load_packet(dev, lp->frame, TD_LS | TD_FS | sizeof(lp->frame), NULL);
+ load_packet(dev, lp->frame, TD_LS | TD_FS | sizeof(lp->frame), (struct sk_buff *)1);
lp->tx_new = (++lp->tx_new) % lp->txRingSize;
outl(POLL_DEMAND, DE4X5_TPD);
}
skb_reserve(p, 2); /* Align */
if (index < lp->rx_old) { /* Wrapped buffer */
short tlen = (lp->rxRingSize - lp->rx_old) * RX_BUFF_SZ;
- memcpy(skb_put(p,tlen),
- bus_to_virt(le32_to_cpu(lp->rx_ring[lp->rx_old].buf)),tlen);
- memcpy(skb_put(p,len-tlen),
- bus_to_virt(le32_to_cpu(lp->rx_ring[0].buf)), len-tlen);
+ memcpy(skb_put(p,tlen),lp->rx_bufs + lp->rx_old * RX_BUFF_SZ,tlen);
+ memcpy(skb_put(p,len-tlen),lp->rx_bufs,len-tlen);
} else { /* Linear buffer */
- memcpy(skb_put(p,len),
- bus_to_virt(le32_to_cpu(lp->rx_ring[lp->rx_old].buf)),len);
+ memcpy(skb_put(p,len),lp->rx_bufs + lp->rx_old * RX_BUFF_SZ,len);
}
return p;
int i;
for (i=0; i<lp->txRingSize; i++) {
- if (lp->tx_skb[i]) {
- dev_kfree_skb(lp->tx_skb[i]);
- lp->tx_skb[i] = NULL;
- }
+ if (lp->tx_skb[i])
+ de4x5_free_tx_buff(lp, i);
lp->tx_ring[i].status = 0;
}
if (lp->cache.save_cnt) {
STOP_DE4X5;
- outl(virt_to_bus(lp->rx_ring), DE4X5_RRBA);
- outl(virt_to_bus(lp->tx_ring), DE4X5_TRBA);
+ outl(lp->dma_rings, DE4X5_RRBA);
+ outl(lp->dma_rings + NUM_RX_DESC * sizeof(struct de4x5_desc),
+ DE4X5_TRBA);
lp->rx_new = lp->rx_old = 0;
lp->tx_new = lp->tx_old = 0;
/* Set up the descriptor and give ownership to the card */
while (test_and_set_bit(0, (void *)&dev->tbusy) != 0) barrier();
load_packet(dev, lp->setup_frame, TD_IC | PERFECT_F | TD_SET |
- SETUP_FRAME_LEN, NULL);
+ SETUP_FRAME_LEN, (struct sk_buff *)1);
lp->tx_new = (++lp->tx_new) % lp->txRingSize;
outl(POLL_DEMAND, DE4X5_TPD); /* Start the TX */
dev->tbusy = 0; /* Unlock the TX ring */
release_region(p->base_addr, (lp->bus == PCI ?
DE4X5_PCI_TOTAL_SIZE :
DE4X5_EISA_TOTAL_SIZE));
- if (lp->cache.buf) { /* MAC buffers allocated? */
- kfree(lp->cache.buf); /* Free the MAC buffers */
- }
if (lp->cache.priv) { /* Private area allocated? */
kfree(lp->cache.priv); /* Free the private area */
}
+ if (lp->rx_ring) {
+ pci_free_consistent(lp->pdev, lp->dma_size, lp->rx_ring,
+ lp->dma_rings);
+ }
}
kfree(p);
} else {
struct de4x5_private *lp = (struct de4x5_private *)p->priv;
next = lp->next_module;
- if (lp->cache.buf) { /* MAC buffers allocated? */
- kfree(lp->cache.buf); /* Free the MAC buffers */
+ if (lp->rx_ring) {
+ pci_free_consistent(lp->pdev, lp->dma_size, lp->rx_ring,
+ lp->dma_rings);
}
release_region(p->base_addr, (lp->bus == PCI ?
DE4X5_PCI_TOTAL_SIZE :
#endif
#define RUN_AT(x) (jiffies + (x))
-/* Condensed bus+endian portability operations. */
-#define virt_to_le32bus(addr) cpu_to_le32(virt_to_bus(addr))
-#define le32bus_to_virt(addr) bus_to_virt(le32_to_cpu(addr))
#if (LINUX_VERSION_CODE < 0x20123)
#define test_and_set_bit(val, addr) set_bit(val, addr)
#elif defined(__powerpc__)
#define clear_suspend(cmd) clear_bit(6, &(cmd)->cmd_status)
#else
+#if 0
# error You are probably in trouble: clear_suspend() MUST be atomic.
+#endif
# define clear_suspend(cmd) (cmd)->cmd_status &= cpu_to_le32(~CmdSuspend)
#endif
/* Do not change the position (alignment) of the first few elements!
The later elements are grouped for cache locality. */
struct speedo_private {
- struct TxFD tx_ring[TX_RING_SIZE]; /* Commands (usually CmdTxPacket). */
+ struct TxFD *tx_ring; /* Commands (usually CmdTxPacket). */
struct RxFD *rx_ringp[RX_RING_SIZE]; /* Rx descriptor, used as ring. */
/* The addresses of a Tx/Rx-in-place packets/buffers. */
struct sk_buff* tx_skbuff[TX_RING_SIZE];
struct sk_buff* rx_skbuff[RX_RING_SIZE];
+ dma_addr_t rx_ring_dma[RX_RING_SIZE];
+ dma_addr_t tx_ring_dma;
struct descriptor *last_cmd; /* Last command sent. */
unsigned int cur_tx, dirty_tx; /* The ring entries to be free()ed. */
spinlock_t lock; /* Group with Tx control cache line. */
struct net_device *next_module;
void *priv_addr; /* Unaligned address for kfree */
struct enet_statistics stats;
- struct speedo_stats lstats;
+ struct speedo_stats *lstats;
int chip_id;
unsigned char pci_bus, pci_devfn, acpi_pwr;
+ struct pci_dev *pdev;
struct timer_list timer; /* Media selection timer. */
int mc_setup_frm_len; /* The length of an allocated.. */
struct descriptor *mc_setup_frm; /* ..multicast setup frame. */
int mc_setup_busy; /* Avoid double-use of setup frame. */
- int in_interrupt; /* Word-aligned dev->interrupt */
+ dma_addr_t mc_setup_dma;
+ unsigned long in_interrupt; /* Word-aligned dev->interrupt */
char rx_mode; /* Current PROMISC/ALLMULTI setting. */
unsigned int tx_full:1; /* The Tx queue is full. */
unsigned int full_duplex:1; /* Full-duplex operation requested. */
for (; pci_index < 8; pci_index++) {
unsigned char pci_bus, pci_device_fn, pci_latency;
- u32 pciaddr;
long ioaddr;
int irq;
{
struct pci_dev *pdev = pci_find_slot(pci_bus, pci_device_fn);
#ifdef USE_IO
- pciaddr = pdev->resource[1].start;
+ ioaddr = pdev->resource[1].start;
#else
- pciaddr = pdev->resource[0].start;
+ ioaddr = pdev->resource[0].start;
#endif
irq = pdev->irq;
}
#else
{
+ u32 pciaddr;
u8 pci_irq_line;
pcibios_read_config_byte(pci_bus, pci_device_fn,
PCI_INTERRUPT_LINE, &pci_irq_line);
pcibios_read_config_dword(pci_bus, pci_device_fn,
PCI_BASE_ADDRESS_0, &pciaddr);
#endif
+ ioaddr = pciaddr;
irq = pci_irq_line;
}
#endif
/* Remove I/O space marker in bit 0. */
#ifdef USE_IO
- ioaddr = pciaddr;
if (check_region(ioaddr, 32))
continue;
#else
- if ((ioaddr = (long)ioremap(pciaddr & ~0xfUL, 0x1000)) == 0) {
- printk(KERN_INFO "Failed to map PCI address %#x.\n",
- pciaddr);
- continue;
+ {
+ unsigned long orig_ioaddr = ioaddr;
+
+ if ((ioaddr = (long)ioremap(ioaddr & ~0xfUL, 0x1000)) == 0) {
+ printk(KERN_INFO "Failed to map PCI address %#lx.\n",
+ orig_ioaddr);
+ continue;
+ }
}
#endif
if (speedo_debug > 2)
{
struct net_device *dev;
struct speedo_private *sp;
+ struct pci_dev *pdev;
+ unsigned char *tx_ring;
+ dma_addr_t tx_ring_dma;
const char *product;
int i, option;
u16 eeprom[0x100];
printk(version);
#endif
+ pdev = pci_find_slot(pci_bus, pci_devfn);
+
+ tx_ring = pci_alloc_consistent(pdev, TX_RING_SIZE * sizeof(struct TxFD)
+ + sizeof(struct speedo_stats), &tx_ring_dma);
+ if (!tx_ring) {
+ printk(KERN_ERR "Could not allocate DMA memory.\n");
+ return NULL;
+ }
+
dev = init_etherdev(NULL, sizeof(struct speedo_private));
+ if (dev == NULL) {
+ pci_free_consistent(pdev, TX_RING_SIZE * sizeof(struct TxFD)
+ + sizeof(struct speedo_stats),
+ tx_ring, tx_ring_dma);
+ return NULL;
+ }
if (dev->mem_start > 0)
option = dev->mem_start;
{
const char *connectors[] = {" RJ45", " BNC", " AUI", " MII"};
/* The self-test results must be paragraph aligned. */
- s32 str[6], *volatile self_test_results;
+ volatile s32 *self_test_results = (volatile s32 *)tx_ring;
int boguscnt = 16000; /* Timeout for set-test. */
if (eeprom[3] & 0x03)
printk(KERN_INFO " Receiver lock-up bug exists -- enabling"
((option & 0x10) ? 0x0100 : 0)); /* Full duplex? */
}
- /* Perform a system self-test. */
- self_test_results = (s32*) ((((long) str) + 15) & ~0xf);
+ /* Perform a system self-test. Use the tx_ring consistent DMA mapping for it. */
self_test_results[0] = 0;
self_test_results[1] = -1;
- outl(virt_to_bus(self_test_results) | PortSelfTest, ioaddr + SCBPort);
+ outl(tx_ring_dma | PortSelfTest, ioaddr + SCBPort);
do {
udelay(10);
} while (self_test_results[1] == -1 && --boguscnt >= 0);
sp->pci_bus = pci_bus;
sp->pci_devfn = pci_devfn;
+ sp->pdev = pdev;
sp->chip_id = chip_idx;
sp->acpi_pwr = acpi_idle_state;
-
+ sp->tx_ring = (struct TxFD *)tx_ring;
+ sp->tx_ring_dma = tx_ring_dma;
+ sp->lstats = (struct speedo_stats *)(sp->tx_ring + TX_RING_SIZE);
+
sp->full_duplex = option >= 0 && (option & 0x10) ? 1 : 0;
if (card_idx >= 0) {
if (full_duplex[card_idx] >= 0)
sp->dirty_tx = 0;
sp->last_cmd = 0;
sp->tx_full = 0;
- sp->lock = (spinlock_t) SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&sp->lock);
sp->in_interrupt = 0;
/* .. we can safely take handler calls during init. */
wait_for_cmd_done(ioaddr + SCBCmd);
/* Load the statistics block and rx ring addresses. */
- outl(virt_to_bus(&sp->lstats), ioaddr + SCBPointer);
+ outl(sp->tx_ring_dma + sizeof(struct TxFD) * TX_RING_SIZE, ioaddr + SCBPointer);
outb(CUStatsAddr, ioaddr + SCBCmd);
- sp->lstats.done_marker = 0;
+ sp->lstats->done_marker = 0;
wait_for_cmd_done(ioaddr + SCBCmd);
- outl(virt_to_bus(sp->rx_ringp[sp->cur_rx % RX_RING_SIZE]),
+ outl(sp->rx_ring_dma[sp->cur_rx % RX_RING_SIZE],
ioaddr + SCBPointer);
outb(RxStart, ioaddr + SCBCmd);
wait_for_cmd_done(ioaddr + SCBCmd);
/* Avoid a bug(?!) here by marking the command already completed. */
cur_cmd->cmd_status = cpu_to_le32((CmdSuspend | CmdIASetup) | 0xa000);
cur_cmd->link =
- virt_to_le32bus(&sp->tx_ring[sp->cur_tx % TX_RING_SIZE]);
+ cpu_to_le32(sp->tx_ring_dma + (sp->cur_tx % TX_RING_SIZE)
+ * sizeof(struct TxFD));
memcpy(cur_cmd->params, dev->dev_addr, 6);
if (sp->last_cmd)
clear_suspend(sp->last_cmd);
/* Start the chip's Tx process and unmask interrupts. */
wait_for_cmd_done(ioaddr + SCBCmd);
- outl(virt_to_bus(&sp->tx_ring[sp->dirty_tx % TX_RING_SIZE]),
+ outl(sp->tx_ring_dma
+ + (sp->dirty_tx % TX_RING_SIZE) * sizeof(struct TxFD),
ioaddr + SCBPointer);
outw(CUStart, ioaddr + SCBCmd);
}
skb->dev = dev; /* Mark as being used by this device. */
rxf = (struct RxFD *)skb->tail;
sp->rx_ringp[i] = rxf;
+ sp->rx_ring_dma[i] =
+ pci_map_single(sp->pdev, rxf, PKT_BUF_SZ + sizeof(struct RxFD));
skb_reserve(skb, sizeof(struct RxFD));
if (last_rxf)
- last_rxf->link = virt_to_le32bus(rxf);
+ last_rxf->link = cpu_to_le32(sp->rx_ring_dma[i]);
last_rxf = rxf;
rxf->status = cpu_to_le32(0x00000001); /* '1' is flag value only. */
rxf->link = 0; /* None yet. */
- /* This field unused by i82557, we use it as a consistency check. */
-#ifdef final_version
+ /* This field unused by i82557. */
rxf->rx_buf_addr = 0xffffffff;
-#else
- rxf->rx_buf_addr = virt_to_bus(skb->tail);
-#endif
rxf->count = cpu_to_le32(PKT_BUF_SZ << 16);
}
sp->dirty_rx = (unsigned int)(i - RX_RING_SIZE);
/* Only the command unit has stopped. */
printk(KERN_WARNING "%s: Trying to restart the transmitter...\n",
dev->name);
- outl(virt_to_bus(&sp->tx_ring[sp->dirty_tx % TX_RING_SIZE]),
+ outl(sp->tx_ring_dma
+ + (sp->dirty_tx % TX_RING_SIZE) * sizeof(struct TxFD),
ioaddr + SCBPointer);
outw(CUStart, ioaddr + SCBCmd);
} else {
sp->tx_ring[entry].status =
cpu_to_le32(CmdSuspend | CmdTx | CmdTxFlex);
sp->tx_ring[entry].link =
- virt_to_le32bus(&sp->tx_ring[sp->cur_tx % TX_RING_SIZE]);
+ cpu_to_le32(sp->tx_ring_dma
+ + (sp->cur_tx % TX_RING_SIZE)
+ * sizeof(struct TxFD));
sp->tx_ring[entry].tx_desc_addr =
- virt_to_le32bus(&sp->tx_ring[entry].tx_buf_addr0);
+ cpu_to_le32(sp->tx_ring_dma
+ + ((long)&sp->tx_ring[entry].tx_buf_addr0
+ - (long)sp->tx_ring));
/* The data region is always in one buffer descriptor. */
sp->tx_ring[entry].count = cpu_to_le32(sp->tx_threshold);
- sp->tx_ring[entry].tx_buf_addr0 = virt_to_le32bus(skb->data);
+ sp->tx_ring[entry].tx_buf_addr0 =
+ cpu_to_le32(pci_map_single(sp->pdev, skb->data,
+ skb->len));
sp->tx_ring[entry].tx_buf_size0 = cpu_to_le32(skb->len);
/* Todo: perhaps leave the interrupt bit set if the Tx queue is more
than half full. Argument against: we should be receiving packets
outw(RxResumeNoResources, ioaddr + SCBCmd);
else if ((status & 0x003c) == 0x0008) { /* No resources (why?!) */
/* No idea of what went wrong. Restart the receiver. */
- outl(virt_to_bus(sp->rx_ringp[sp->cur_rx % RX_RING_SIZE]),
+ outl(sp->rx_ring_dma[sp->cur_rx % RX_RING_SIZE],
ioaddr + SCBPointer);
outw(RxStart, ioaddr + SCBCmd);
}
#if LINUX_VERSION_CODE > 0x20127
sp->stats.tx_bytes += sp->tx_skbuff[entry]->len;
#endif
+ pci_unmap_single(sp->pdev,
+ le32_to_cpu(sp->tx_ring[entry].tx_buf_addr0),
+ sp->tx_skbuff[entry]->len);
dev_free_skb(sp->tx_skbuff[entry]);
sp->tx_skbuff[entry] = 0;
- } else if ((status & 0x70000) == CmdNOp)
+ } else if ((status & 0x70000) == CmdNOp) {
+ if (sp->mc_setup_busy)
+ pci_unmap_single(sp->pdev,
+ sp->mc_setup_dma,
+ sp->mc_setup_frm_len);
sp->mc_setup_busy = 0;
+ }
dirty_tx++;
}
skb->dev = dev;
skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
/* 'skb_put()' points to the start of sk_buff data area. */
+ pci_dma_sync_single(sp->pdev, sp->rx_ring_dma[entry],
+ PKT_BUF_SZ + sizeof(struct RxFD));
#if 1 || USE_IP_CSUM
/* Packet is in one chunk -- we can copy + cksum. */
eth_copy_and_sum(skb, sp->rx_skbuff[entry]->tail, pkt_len, 0);
}
sp->rx_skbuff[entry] = NULL;
temp = skb_put(skb, pkt_len);
-#if !defined(final_version) && !defined(__powerpc__)
- if (bus_to_virt(sp->rx_ringp[entry]->rx_buf_addr) != temp)
- printk(KERN_ERR "%s: Rx consistency error -- the skbuff "
- "addresses do not match in speedo_rx: %p vs. %p "
- "/ %p.\n", dev->name,
- bus_to_virt(sp->rx_ringp[entry]->rx_buf_addr),
- skb->head, temp);
-#endif
sp->rx_ringp[entry] = NULL;
+ pci_unmap_single(sp->pdev, sp->rx_ring_dma[entry],
+ PKT_BUF_SZ + sizeof(struct RxFD));
}
skb->protocol = eth_type_trans(skb, dev);
netif_rx(skb);
break; /* Better luck next time! */
}
rxf = sp->rx_ringp[entry] = (struct RxFD *)skb->tail;
+ sp->rx_ring_dma[entry] =
+ pci_map_single(sp->pdev, rxf, PKT_BUF_SZ
+ + sizeof(struct RxFD));
skb->dev = dev;
skb_reserve(skb, sizeof(struct RxFD));
- rxf->rx_buf_addr = virt_to_bus(skb->tail);
+ rxf->rx_buf_addr = 0xffffffff;
} else {
rxf = sp->rx_ringp[entry];
}
rxf->status = cpu_to_le32(0xC0000001); /* '1' for driver use only. */
rxf->link = 0; /* None yet. */
rxf->count = cpu_to_le32(PKT_BUF_SZ << 16);
- sp->last_rxf->link = virt_to_le32bus(rxf);
+ sp->last_rxf->link = cpu_to_le32(sp->rx_ring_dma[entry]);
sp->last_rxf->status &= cpu_to_le32(~0xC0000000);
sp->last_rxf = rxf;
}
sp->rx_skbuff[i] = 0;
/* Clear the Rx descriptors. */
if (skb) {
+ pci_unmap_single(sp->pdev,
+ sp->rx_ring_dma[i],
+ PKT_BUF_SZ + sizeof(struct RxFD));
#if LINUX_VERSION_CODE < 0x20100
skb->free = 1;
#endif
for (i = 0; i < TX_RING_SIZE; i++) {
struct sk_buff *skb = sp->tx_skbuff[i];
sp->tx_skbuff[i] = 0;
+
/* Clear the Tx descriptors. */
- if (skb)
+ if (skb) {
+ pci_unmap_single(sp->pdev,
+ le32_to_cpu(sp->tx_ring[i].tx_buf_addr0),
+ skb->len);
dev_free_skb(skb);
+ }
}
if (sp->mc_setup_frm) {
kfree(sp->mc_setup_frm);
long ioaddr = dev->base_addr;
/* Update only if the previous dump finished. */
- if (sp->lstats.done_marker == le32_to_cpu(0xA007)) {
- sp->stats.tx_aborted_errors += le32_to_cpu(sp->lstats.tx_coll16_errs);
- sp->stats.tx_window_errors += le32_to_cpu(sp->lstats.tx_late_colls);
- sp->stats.tx_fifo_errors += le32_to_cpu(sp->lstats.tx_underruns);
- sp->stats.tx_fifo_errors += le32_to_cpu(sp->lstats.tx_lost_carrier);
- /*sp->stats.tx_deferred += le32_to_cpu(sp->lstats.tx_deferred);*/
- sp->stats.collisions += le32_to_cpu(sp->lstats.tx_total_colls);
- sp->stats.rx_crc_errors += le32_to_cpu(sp->lstats.rx_crc_errs);
- sp->stats.rx_frame_errors += le32_to_cpu(sp->lstats.rx_align_errs);
- sp->stats.rx_over_errors += le32_to_cpu(sp->lstats.rx_resource_errs);
- sp->stats.rx_fifo_errors += le32_to_cpu(sp->lstats.rx_overrun_errs);
- sp->stats.rx_length_errors += le32_to_cpu(sp->lstats.rx_runt_errs);
- sp->lstats.done_marker = 0x0000;
+ if (sp->lstats->done_marker == le32_to_cpu(0xA007)) {
+ sp->stats.tx_aborted_errors += le32_to_cpu(sp->lstats->tx_coll16_errs);
+ sp->stats.tx_window_errors += le32_to_cpu(sp->lstats->tx_late_colls);
+ sp->stats.tx_fifo_errors += le32_to_cpu(sp->lstats->tx_underruns);
+ sp->stats.tx_fifo_errors += le32_to_cpu(sp->lstats->tx_lost_carrier);
+ /*sp->stats.tx_deferred += le32_to_cpu(sp->lstats->tx_deferred);*/
+ sp->stats.collisions += le32_to_cpu(sp->lstats->tx_total_colls);
+ sp->stats.rx_crc_errors += le32_to_cpu(sp->lstats->rx_crc_errs);
+ sp->stats.rx_frame_errors += le32_to_cpu(sp->lstats->rx_align_errs);
+ sp->stats.rx_over_errors += le32_to_cpu(sp->lstats->rx_resource_errs);
+ sp->stats.rx_fifo_errors += le32_to_cpu(sp->lstats->rx_overrun_errs);
+ sp->stats.rx_length_errors += le32_to_cpu(sp->lstats->rx_runt_errs);
+ sp->lstats->done_marker = 0x0000;
if (dev->start) {
wait_for_cmd_done(ioaddr + SCBCmd);
outw(CUDumpStats, ioaddr + SCBCmd);
sp->tx_skbuff[entry] = 0; /* Redundant. */
sp->tx_ring[entry].status = cpu_to_le32(CmdSuspend | CmdConfigure);
sp->tx_ring[entry].link =
- virt_to_le32bus(&sp->tx_ring[(entry + 1) % TX_RING_SIZE]);
+ cpu_to_le32(sp->tx_ring_dma + ((entry + 1) % TX_RING_SIZE)
+ * sizeof(struct TxFD));
config_cmd_data = (void *)&sp->tx_ring[entry].tx_desc_addr;
/* Construct a full CmdConfig frame. */
memcpy(config_cmd_data, i82558_config_cmd, sizeof(i82558_config_cmd));
sp->tx_skbuff[entry] = 0;
sp->tx_ring[entry].status = cpu_to_le32(CmdSuspend | CmdMulticastList);
sp->tx_ring[entry].link =
- virt_to_le32bus(&sp->tx_ring[(entry + 1) % TX_RING_SIZE]);
+ cpu_to_le32(sp->tx_ring_dma + ((entry + 1) % TX_RING_SIZE)
+ * sizeof(struct TxFD));
sp->tx_ring[entry].tx_desc_addr = 0; /* Really MC list count. */
setup_params = (u16 *)&sp->tx_ring[entry].tx_desc_addr;
*setup_params++ = cpu_to_le16(dev->mc_count*6);
struct descriptor *mc_setup_frm = sp->mc_setup_frm;
int i;
+ /* If we are busy, someone might be quickly adding to the MC list.
+ Try again later when the list updates stop. */
+ if (sp->mc_setup_busy) {
+ sp->rx_mode = -1;
+ return;
+ }
if (sp->mc_setup_frm_len < 10 + dev->mc_count*6
|| sp->mc_setup_frm == NULL) {
/* Allocate a full setup frame, 10bytes + <max addrs>. */
if (sp->mc_setup_frm)
kfree(sp->mc_setup_frm);
- sp->mc_setup_busy = 0;
sp->mc_setup_frm_len = 10 + multicast_filter_limit*6;
sp->mc_setup_frm = kmalloc(sp->mc_setup_frm_len, GFP_ATOMIC);
if (sp->mc_setup_frm == NULL) {
return;
}
}
- /* If we are busy, someone might be quickly adding to the MC list.
- Try again later when the list updates stop. */
- if (sp->mc_setup_busy) {
- sp->rx_mode = -1;
- return;
- }
mc_setup_frm = sp->mc_setup_frm;
/* Fill the setup frame. */
if (speedo_debug > 1)
entry = sp->cur_tx++ % TX_RING_SIZE;
last_cmd = sp->last_cmd;
sp->last_cmd = mc_setup_frm;
- sp->mc_setup_busy++;
+ sp->mc_setup_busy = 1;
/* Change the command to a NoOp, pointing to the CmdMulti command. */
sp->tx_skbuff[entry] = 0;
sp->tx_ring[entry].status = cpu_to_le32(CmdNOp);
- sp->tx_ring[entry].link = virt_to_le32bus(mc_setup_frm);
+ sp->mc_setup_dma = pci_map_single(sp->pdev, mc_setup_frm, sp->mc_setup_frm_len);
+ sp->tx_ring[entry].link = cpu_to_le32(sp->mc_setup_dma);
/* Set the link in the setup frame. */
mc_setup_frm->link =
- virt_to_le32bus(&(sp->tx_ring[(entry+1) % TX_RING_SIZE]));
+ cpu_to_le32(sp->tx_ring_dma + ((entry + 1) % TX_RING_SIZE)
+ * sizeof(struct TxFD));
wait_for_cmd_done(ioaddr + SCBCmd);
clear_suspend(last_cmd);
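The eepro100 hunks above replace virt_to_bus() on ring entries with a bus address computed from the ring's base DMA handle: `tx_ring_dma + ((entry + 1) % TX_RING_SIZE) * sizeof(struct TxFD)`. A minimal userspace sketch of that index arithmetic follows; the struct layout and names are illustrative stand-ins, not the driver's actual definitions.

```c
#include <stdint.h>

/* Illustrative stand-in for the driver's TxFD descriptor (4 x 32-bit words). */
struct TxFD {
	uint32_t status;
	uint32_t link;
	uint32_t tx_desc_addr;
	uint32_t count;
};

/* Bus address of the descriptor after `entry`, wrapping at ring_size.
 * Mirrors: tx_ring_dma + ((entry + 1) % TX_RING_SIZE) * sizeof(struct TxFD)
 * The point of the conversion: the device sees ring_dma (the DMA handle),
 * never a kernel virtual address, so links must be derived from ring_dma. */
static uint32_t next_desc_bus(uint32_t ring_dma, unsigned entry,
			      unsigned ring_size)
{
	return ring_dma + ((entry + 1) % ring_size) * (uint32_t)sizeof(struct TxFD);
}
```

For a four-entry ring based at bus address 0x1000, entry 0 links to 0x1010 and the last entry wraps back to 0x1000.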
*
********************************************************************/
-#include <linux/config.h>
#include <linux/module.h>
-
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/skbuff.h>
*
********************************************************************/
-#include <linux/config.h>
#include <linux/module.h>
-
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/skbuff.h>
/* No user serviceable parts below here */
#include <linux/module.h>
-#include <linux/config.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/skbuff.h>
const char *name;
u16 vendor_id, device_id, device_id_mask, flags;
int io_size;
- struct net_device *(*probe1)(int pci_bus, int pci_devfn, long ioaddr, int irq, int chip_idx, int fnd_cnt);
+ struct net_device *(*probe1)(struct pci_dev *pdev, int pci_bus, int pci_devfn, long ioaddr, int irq, int chip_idx, int fnd_cnt);
};
-static struct net_device * rtl8129_probe1(int pci_bus, int pci_devfn, long ioaddr,
- int irq, int chp_idx, int fnd_cnt);
+static struct net_device * rtl8129_probe1(struct pci_dev *pdev, int pci_bus,
+ int pci_devfn, long ioaddr,
+ int irq, int chp_idx, int fnd_cnt);
static struct pci_id_info pci_tbl[] =
{{ "RealTek RTL8129 Fast Ethernet",
{0x0bb39de43,0x0bb39ce43,0x0bb39ce83,0x0bb39ce83}
};
+struct ring_info {
+ struct sk_buff *skb;
+ dma_addr_t mapping;
+};
+
struct rtl8129_private {
char devname[8]; /* Used only for kernel debugging. */
const char *product_name;
struct net_device *next_module;
+ struct pci_dev *pdev;
int chip_id;
int chip_revision;
unsigned char pci_bus, pci_devfn;
unsigned int cur_rx; /* Index into the Rx buffer of next Rx pkt. */
unsigned int cur_tx, dirty_tx, tx_flag;
/* The saved address of a sent-in-place packet/buffer, for dev_free_skb(). */
- struct sk_buff* tx_skbuff[NUM_TX_DESC];
+ struct ring_info tx_info[NUM_TX_DESC];
unsigned char *tx_buf[NUM_TX_DESC]; /* Tx bounce buffers */
unsigned char *rx_ring;
unsigned char *tx_bufs; /* Tx bounce buffer region. */
+ dma_addr_t rx_ring_dma;
+ dma_addr_t tx_bufs_dma;
char phys[4]; /* MII device addresses. */
char twistie, twist_cnt; /* Twister tune state. */
unsigned int tx_full:1; /* The Tx queue is full. */
return -ENODEV;
for (; pci_index < 0xff; pci_index++) {
+ struct pci_dev *pdev;
u16 vendor, device, pci_command, new_command;
int chip_idx, irq;
long ioaddr;
if (pci_tbl[chip_idx].vendor_id == 0) /* Compiled out! */
continue;
- {
-#if defined(PCI_SUPPORT_VER2)
- struct pci_dev *pdev = pci_find_slot(pci_bus, pci_device_fn);
- ioaddr = pdev->resource[0].start;
- irq = pdev->irq;
-#else
- u32 pci_ioaddr;
- u8 pci_irq_line;
- pcibios_read_config_byte(pci_bus, pci_device_fn,
- PCI_INTERRUPT_LINE, &pci_irq_line);
- pcibios_read_config_dword(pci_bus, pci_device_fn,
- PCI_BASE_ADDRESS_0, &pci_ioaddr);
- ioaddr = pci_ioaddr & ~3;
- irq = pci_irq_line;
-#endif
- }
+ pdev = pci_find_slot(pci_bus, pci_device_fn);
+ ioaddr = pdev->resource[0].start;
+ irq = pdev->irq;
if ((pci_tbl[chip_idx].flags & PCI_USES_IO) &&
check_region(ioaddr, pci_tbl[chip_idx].io_size))
PCI_COMMAND, new_command);
}
- dev = pci_tbl[chip_idx].probe1(pci_bus, pci_device_fn, ioaddr, irq, chip_idx, cards_found);
+ dev = pci_tbl[chip_idx].probe1(pdev, pci_bus, pci_device_fn, ioaddr, irq, chip_idx, cards_found);
if (dev && (pci_tbl[chip_idx].flags & PCI_COMMAND_MASTER)) {
u8 pci_latency;
return cards_found ? 0 : -ENODEV;
}
-static struct net_device *rtl8129_probe1(int pci_bus, int pci_devfn, long ioaddr,
- int irq, int chip_idx, int found_cnt)
+static struct net_device *rtl8129_probe1(struct pci_dev *pdev, int pci_bus,
+ int pci_devfn, long ioaddr,
+ int irq, int chip_idx, int found_cnt)
{
static int did_version = 0; /* Already printed version info. */
struct rtl8129_private *tp;
tp->next_module = root_rtl8129_dev;
root_rtl8129_dev = dev;
+ tp->pdev = pdev;
tp->chip_id = chip_idx;
tp->pci_bus = pci_bus;
tp->pci_devfn = pci_devfn;
MOD_INC_USE_COUNT;
- tp->tx_bufs = kmalloc(TX_BUF_SIZE * NUM_TX_DESC, GFP_KERNEL);
- tp->rx_ring = kmalloc(RX_BUF_LEN + 16, GFP_KERNEL);
+ tp->tx_bufs = pci_alloc_consistent(tp->pdev,
+ TX_BUF_SIZE * NUM_TX_DESC,
+ &tp->tx_bufs_dma);
+ tp->rx_ring = pci_alloc_consistent(tp->pdev,
+ RX_BUF_LEN + 16,
+ &tp->rx_ring_dma);
if (tp->tx_bufs == NULL || tp->rx_ring == NULL) {
free_irq(dev->irq, dev);
if (tp->tx_bufs)
- kfree(tp->tx_bufs);
+ pci_free_consistent(tp->pdev,
+ TX_BUF_SIZE * NUM_TX_DESC,
+ tp->tx_bufs, tp->tx_bufs_dma);
+ if (tp->rx_ring)
+ pci_free_consistent(tp->pdev,
+ RX_BUF_LEN + 16,
+ tp->rx_ring, tp->rx_ring_dma);
if (rtl8129_debug > 0)
printk(KERN_ERR "%s: Couldn't allocate a %d byte receive ring.\n",
dev->name, RX_BUF_LEN);
outb(tp->full_duplex ? 0x60 : 0x20, ioaddr + Config1);
outb(0x00, ioaddr + Cfg9346);
- outl(virt_to_bus(tp->rx_ring), ioaddr + RxBuf);
+ outl(tp->rx_ring_dma, ioaddr + RxBuf);
/* Start the chip's Tx and Rx process. */
outl(0, ioaddr + RxMissed);
{ /* Save the unsent Tx packets. */
struct sk_buff *saved_skb[NUM_TX_DESC], *skb;
int j;
- for (j = 0; tp->cur_tx - tp->dirty_tx > 0 ; j++, tp->dirty_tx++)
- saved_skb[j] = tp->tx_skbuff[tp->dirty_tx % NUM_TX_DESC];
+ for (j = 0; tp->cur_tx - tp->dirty_tx > 0 ; j++, tp->dirty_tx++) {
+ struct ring_info *rp = &tp->tx_info[tp->dirty_tx % NUM_TX_DESC];
+
+ saved_skb[j] = rp->skb;
+ if (rp->mapping != 0) {
+ pci_unmap_single(tp->pdev, rp->mapping, rp->skb->len);
+ rp->mapping = 0;
+ }
+ }
tp->dirty_tx = tp->cur_tx = 0;
for (i = 0; i < j; i++) {
- skb = tp->tx_skbuff[i] = saved_skb[i];
+ skb = tp->tx_info[i].skb = saved_skb[i];
if ((long)skb->data & 3) { /* Must use alignment buffer. */
memcpy(tp->tx_buf[i], skb->data, skb->len);
- outl(virt_to_bus(tp->tx_buf[i]), ioaddr + TxAddr0 + i*4);
- } else
- outl(virt_to_bus(skb->data), ioaddr + TxAddr0 + i*4);
+ outl(tp->tx_bufs_dma + (tp->tx_buf[i] - tp->tx_bufs),
+ ioaddr + TxAddr0 + i*4);
+ } else {
+ tp->tx_info[i].mapping =
+ pci_map_single(tp->pdev, skb->data, skb->len);
+ outl(tp->tx_info[i].mapping, ioaddr + TxAddr0 + i*4);
+ }
/* Note: the chip doesn't have auto-pad! */
outl(tp->tx_flag | (skb->len >= ETH_ZLEN ? skb->len : ETH_ZLEN),
ioaddr + TxStatus0 + i*4);
}
tp->cur_tx = i;
- while (i < NUM_TX_DESC)
- tp->tx_skbuff[i++] = 0;
+ while (i < NUM_TX_DESC) {
+ tp->tx_info[i].skb = NULL;
+ tp->tx_info[i].mapping = 0;
+ i++;
+ }
if (tp->cur_tx - tp->dirty_tx < NUM_TX_DESC) {/* Typical path */
dev->tbusy = 0;
tp->tx_full = 0;
tp->dirty_tx = tp->cur_tx = 0;
for (i = 0; i < NUM_TX_DESC; i++) {
- tp->tx_skbuff[i] = 0;
tp->tx_buf[i] = &tp->tx_bufs[i*TX_BUF_SIZE];
+ tp->tx_info[i].skb = NULL;
+ tp->tx_info[i].mapping = 0;
}
}
/* Calculate the next Tx descriptor entry. */
entry = tp->cur_tx % NUM_TX_DESC;
- tp->tx_skbuff[entry] = skb;
+ tp->tx_info[entry].skb = skb;
if ((long)skb->data & 3) { /* Must use alignment buffer. */
+ tp->tx_info[entry].mapping = 0;
memcpy(tp->tx_buf[entry], skb->data, skb->len);
- outl(virt_to_bus(tp->tx_buf[entry]), ioaddr + TxAddr0 + entry*4);
- } else
- outl(virt_to_bus(skb->data), ioaddr + TxAddr0 + entry*4);
+ outl(tp->tx_bufs_dma + (tp->tx_buf[entry] - tp->tx_bufs),
+ ioaddr + TxAddr0 + entry*4);
+ } else {
+ tp->tx_info[entry].mapping =
+ pci_map_single(tp->pdev, skb->data, skb->len);
+ outl(tp->tx_info[entry].mapping, ioaddr + TxAddr0 + entry*4);
+ }
/* Note: the chip doesn't have auto-pad! */
outl(tp->tx_flag | (skb->len >= ETH_ZLEN ? skb->len : ETH_ZLEN),
ioaddr + TxStatus0 + entry*4);
tp->stats.tx_packets++;
}
+ if (tp->tx_info[entry].mapping != 0) {
+ pci_unmap_single(tp->pdev,
+ tp->tx_info[entry].mapping,
+ tp->tx_info[entry].skb->len);
+ tp->tx_info[entry].mapping = 0;
+ }
+
/* Free the original skb. */
- dev_free_skb(tp->tx_skbuff[entry]);
- tp->tx_skbuff[entry] = 0;
+ dev_free_skb(tp->tx_info[entry].skb);
+ tp->tx_info[entry].skb = NULL;
if (tp->tx_full) {
/* The ring is no longer full, clear tbusy. */
tp->tx_full = 0;
free_irq(dev->irq, dev);
for (i = 0; i < NUM_TX_DESC; i++) {
- if (tp->tx_skbuff[i])
- dev_free_skb(tp->tx_skbuff[i]);
- tp->tx_skbuff[i] = 0;
+ struct sk_buff *skb = tp->tx_info[i].skb;
+ dma_addr_t mapping = tp->tx_info[i].mapping;
+
+ if (skb) {
+ if (mapping)
+ pci_unmap_single(tp->pdev, mapping, skb->len);
+ dev_free_skb(skb);
+ }
+ tp->tx_info[i].skb = NULL;
+ tp->tx_info[i].mapping = 0;
}
- kfree(tp->rx_ring);
- kfree(tp->tx_bufs);
+ pci_free_consistent(tp->pdev, RX_BUF_LEN + 16,
+ tp->rx_ring, tp->rx_ring_dma);
+ pci_free_consistent(tp->pdev, TX_BUF_SIZE * NUM_TX_DESC,
+ tp->tx_bufs, tp->tx_bufs_dma);
+ tp->rx_ring = NULL;
+ tp->tx_bufs = NULL;
/* Green! Put the chip in low-power mode. */
outb(0xC0, ioaddr + Cfg9346);
caddr_t pDescrMem; /* Pointer to the descriptor area */
+ dma_addr_t pDescrMemDMA; /* PCI DMA address of area */
+
/* the port structures with descriptor rings */
TX_PORT TxPort[SK_MAX_MACS][2];
RX_PORT RxPort[SK_MAX_MACS];
AllocLength = (RX_RING_SIZE + TX_RING_SIZE) * pAC->GIni.GIMacsFound
+ RX_RING_SIZE + 8;
#endif
- pDescrMem = kmalloc(AllocLength, GFP_KERNEL);
+ pDescrMem = pci_alloc_consistent(&pAC->PciDev, AllocLength,
+ &pAC->pDescrMemDMA);
if (pDescrMem == NULL) {
return (SK_FALSE);
}
pAC->pDescrMem = pDescrMem;
- memset(pDescrMem, 0, AllocLength);
- /* Descriptors need 8 byte alignment */
- BusAddr = virt_to_bus(pDescrMem);
- if (BusAddr & (DESCR_ALIGN-1)) {
- pDescrMem += DESCR_ALIGN - (BusAddr & (DESCR_ALIGN-1));
- }
+
+ /* Descriptors need 8 byte alignment, and this is ensured
+ * by pci_alloc_consistent.
+ */
+ BusAddr = (unsigned long) pAC->pDescrMemDMA;
for (i=0; i<pAC->GIni.GIMacsFound; i++) {
- if ((virt_to_bus(pDescrMem) & ~0xFFFFFFFFULL) !=
- (virt_to_bus(pDescrMem+TX_RING_SIZE) & ~0xFFFFFFFFULL)) {
- pDescrMem += TX_RING_SIZE;
- }
SK_DBG_MSG(NULL, SK_DBGMOD_DRV, SK_DBGCAT_DRV_TX_PROGRESS,
("TX%d/A: pDescrMem: %lX, PhysDescrMem: %lX\n",
i, (unsigned long) pDescrMem,
- (unsigned long)virt_to_bus(pDescrMem)));
+ BusAddr));
pAC->TxPort[i][0].pTxDescrRing = pDescrMem;
- pAC->TxPort[i][0].VTxDescrRing = virt_to_bus(pDescrMem);
+ pAC->TxPort[i][0].VTxDescrRing = BusAddr;
pDescrMem += TX_RING_SIZE;
+ BusAddr += TX_RING_SIZE;
- if ((virt_to_bus(pDescrMem) & ~0xFFFFFFFFULL) !=
- (virt_to_bus(pDescrMem+RX_RING_SIZE) & ~0xFFFFFFFFULL)) {
- pDescrMem += RX_RING_SIZE;
- }
SK_DBG_MSG(NULL, SK_DBGMOD_DRV, SK_DBGCAT_DRV_TX_PROGRESS,
("RX%d: pDescrMem: %lX, PhysDescrMem: %lX\n",
i, (unsigned long) pDescrMem,
- (unsigned long)(virt_to_bus(pDescrMem))));
+ (unsigned long)BusAddr));
pAC->RxPort[i].pRxDescrRing = pDescrMem;
- pAC->RxPort[i].VRxDescrRing = virt_to_bus(pDescrMem);
+ pAC->RxPort[i].VRxDescrRing = BusAddr;
pDescrMem += RX_RING_SIZE;
+ BusAddr += RX_RING_SIZE;
} /* for */
return (SK_TRUE);
static void BoardFreeMem(
SK_AC *pAC)
{
+size_t AllocLength; /* length of complete descriptor area */
+
SK_DBG_MSG(NULL, SK_DBGMOD_DRV, SK_DBGCAT_DRV_ENTRY,
("BoardFreeMem\n"));
- kfree(pAC->pDescrMem);
+#if (BITS_PER_LONG == 32)
+ AllocLength = (RX_RING_SIZE + TX_RING_SIZE) * pAC->GIni.GIMacsFound + 8;
+#else
+ AllocLength = (RX_RING_SIZE + TX_RING_SIZE) * pAC->GIni.GIMacsFound
+ + RX_RING_SIZE + 8;
+#endif
+ pci_free_consistent(&pAC->PciDev, AllocLength,
+ pAC->pDescrMem, pAC->pDescrMemDMA);
+ pAC->pDescrMem = NULL;
} /* BoardFreeMem */
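The BoardFreeMem change above recomputes AllocLength so that pci_free_consistent() is handed exactly the size passed to pci_alloc_consistent() in BoardAllocMem. A small sketch of that shared size computation, with illustrative ring-size constants (the driver's real values come from its headers):

```c
#include <stddef.h>

/* Illustrative ring sizes; sk98lin defines its own in its headers. */
#define RX_RING_SIZE 2048
#define TX_RING_SIZE 2048

/* Length of the shared descriptor area. The same function (or expression)
 * must be used at both pci_alloc_consistent() and pci_free_consistent()
 * time; a mismatch would free the wrong amount of consistent memory. */
static size_t descr_area_len(int macs_found, int bits_per_long)
{
	if (bits_per_long == 32)
		return (size_t)(RX_RING_SIZE + TX_RING_SIZE) * macs_found + 8;
	/* 64-bit build reserves an extra RX_RING_SIZE of slack,
	 * matching the allocation path in the hunk above. */
	return (size_t)(RX_RING_SIZE + TX_RING_SIZE) * macs_found
		+ RX_RING_SIZE + 8;
}
```

With one MAC found, the 32-bit path yields 4104 bytes and the 64-bit path 6152 bytes under these example constants.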
if (Rc == 0) {
/* transmitter out of resources */
set_bit(0, (void*) &dev->tbusy);
- return (0);
+
+ /* give buffer ownership back to the queueing layer */
+ return (1);
}
dev->trans_start = jiffies;
return (0);
SK_DBGCAT_DRV_TX_PROGRESS,
("XmitFrame failed\n"));
/* this message can not be sent now */
- DEV_KFREE_SKB(pMessage);
return (0);
}
}
#endif
/* set up descriptor and CONTROL dword */
- PhysAddr = virt_to_bus(pMessage->data);
+ PhysAddr = (SK_U64) pci_map_single(&pAC->PciDev,
+ pMessage->data,
+ pMessage->len);
pTxd->VDataLow = (SK_U32) (PhysAddr & 0xffffffff);
pTxd->VDataHigh = (SK_U32) (PhysAddr >> 32);
pTxd->pMBuf = pMessage;
TXD *pTxd; /* pointer to the checked descriptor */
TXD *pNewTail; /* pointer to 'end' of the ring */
SK_U32 Control; /* TBControl field of descriptor */
+SK_U64 PhysAddr; /* address of DMA mapping */
pNewTail = pTxPort->pTxdRingTail;
pTxd = pNewTail;
return;
}
+ /* release the DMA mapping */
+ PhysAddr = ((SK_U64) pTxd->VDataHigh) << (SK_U64) 32;
+ PhysAddr |= (SK_U64) pTxd->VDataLow;
+ pci_unmap_single(&pAC->PciDev, PhysAddr,
+ pTxd->pMBuf->len);
+
DEV_KFREE_SKB(pTxd->pMBuf); /* free message */
pTxPort->TxdRingFree++;
pTxd->TBControl &= ~TX_CTRL_SOFTWARE;
pRxPort->pRxdRingTail = pRxd->pNextRxd;
pRxPort->RxdRingFree--;
Length = pAC->RxBufSize;
- PhysAddr = virt_to_bus(pMsgBlock->data);
+ PhysAddr = (SK_U64) pci_map_single(&pAC->PciDev,
+ pMsgBlock->data,
+ pAC->RxBufSize - 2);
pRxd->VDataLow = (SK_U32) (PhysAddr & 0xffffffff);
pRxd->VDataHigh = (SK_U32) (PhysAddr >> 32);
pRxd->pMBuf = pMsgBlock;
unsigned short Csum2;
unsigned short Type;
int Result;
+SK_U64 PhysAddr;
+
rx_start:
/* do forever; exit if RX_CTRL_OWN_BMU found */
/*
* if short frame then copy data to reduce memory waste
*/
+ pNewMsg = NULL;
if (FrameLength < SK_COPY_THRESHOLD) {
pNewMsg = alloc_skb(FrameLength+2, GFP_ATOMIC);
- if (pNewMsg == NULL) {
- /* use original skb */
- /* set length in message */
- skb_put(pMsg, FrameLength);
- }
- else {
- /* alloc new skb and copy data */
+ if (pNewMsg != NULL) {
+ PhysAddr = ((SK_U64) pRxd->VDataHigh) << (SK_U64)32;
+ PhysAddr |= (SK_U64) pRxd->VDataLow;
+
+ /* use new skb and copy data */
skb_reserve(pNewMsg, 2);
skb_put(pNewMsg, FrameLength);
+ pci_dma_sync_single(&pAC->PciDev,
+ (dma_addr_t) PhysAddr,
+ FrameLength);
eth_copy_and_sum(pNewMsg, pMsg->data,
FrameLength, 0);
ReQueueRxBuffer(pAC, pRxPort, pMsg,
pMsg = pNewMsg;
}
}
- else {
+
+ /*
+ * if large frame, or SKB allocation failed, pass
+ * the SKB directly to the networking
+ */
+ if (pNewMsg == NULL) {
+ PhysAddr = ((SK_U64) pRxd->VDataHigh) << (SK_U64)32;
+ PhysAddr |= (SK_U64) pRxd->VDataLow;
+
+ /* release the DMA mapping */
+ pci_unmap_single(&pAC->PciDev,
+ PhysAddr,
+ pAC->RxBufSize - 2);
+
/* set length in message */
skb_put(pMsg, FrameLength);
/* hardware checksum */
/* remove error frame */
SK_DBG_MSG(NULL, SK_DBGMOD_DRV, SK_DBGCAT_DRV_ERROR,
("Schrottdescriptor, length: 0x%x\n", FrameLength));
+
+ /* release the DMA mapping */
+ PhysAddr = ((SK_U64) pRxd->VDataHigh) << (SK_U64)32;
+ PhysAddr |= (SK_U64) pRxd->VDataLow;
+ pci_unmap_single(&pAC->PciDev,
+ PhysAddr,
+ pAC->RxBufSize - 2);
DEV_KFREE_SKB(pRxd->pMBuf);
pRxd->pMBuf = NULL;
pRxPort->RxdRingFree++;
{
RXD *pRxd; /* pointer to the current descriptor */
unsigned int Flags;
+ SK_U64 PhysAddr;
if (pRxPort->RxdRingFree == pAC->RxDescrPerRing) {
return;
pRxd = pRxPort->pRxdRingHead;
do {
if (pRxd->pMBuf != NULL) {
+ PhysAddr = ((SK_U64) pRxd->VDataHigh) << (SK_U64)32;
+ PhysAddr |= (SK_U64) pRxd->VDataLow;
+ pci_unmap_single(&pAC->PciDev,
+ PhysAddr,
+ pAC->RxBufSize - 2);
DEV_KFREE_SKB(pRxd->pMBuf);
pRxd->pMBuf = NULL;
}
const char *name;
u16 vendor_id, device_id, device_id_mask, flags;
int io_size;
- struct net_device *(*probe1)(int pci_bus, int pci_devfn, long ioaddr, int irq, int chip_idx, int fnd_cnt);
+ struct net_device *(*probe1)(struct pci_dev *pdev, int pci_bus, int pci_devfn, long ioaddr, int irq, int chip_idx, int fnd_cnt);
};
-static struct net_device *starfire_probe1(int pci_bus, int pci_devfn, long ioaddr,
- int irq, int chp_idx, int fnd_cnt);
+static struct net_device *starfire_probe1(struct pci_dev *pdev, int pci_bus,
+ int pci_devfn, long ioaddr,
+ int irq, int chp_idx, int fnd_cnt);
#if 0
#define ADDR_64BITS 1 /* This chip uses 64 bit addresses. */
#endif
};
+struct ring_info {
+ struct sk_buff *skb;
+ dma_addr_t mapping;
+};
+
struct netdev_private {
/* Descriptor rings first for alignment. */
struct starfire_rx_desc *rx_ring;
struct starfire_tx_desc *tx_ring;
+ dma_addr_t rx_ring_dma;
+ dma_addr_t tx_ring_dma;
struct net_device *next_module; /* Link for devices of this type. */
const char *product_name;
/* The addresses of rx/tx-in-place skbuffs. */
- struct sk_buff* rx_skbuff[RX_RING_SIZE];
- struct sk_buff* tx_skbuff[TX_RING_SIZE];
+ struct ring_info rx_info[RX_RING_SIZE];
+ struct ring_info tx_info[TX_RING_SIZE];
/* Pointers to completion queues (full pages). I should cache line pad..*/
u8 pad0[100];
struct rx_done_desc *rx_done_q;
+ dma_addr_t rx_done_q_dma;
unsigned int rx_done;
struct tx_done_report *tx_done_q;
unsigned int tx_done;
+ dma_addr_t tx_done_q_dma;
struct net_device_stats stats;
struct timer_list timer; /* Media monitoring timer. */
/* Frequently used values: keep some adjacent for cache effect. */
int chip_id;
+ struct pci_dev *pdev;
unsigned char pci_bus, pci_devfn;
unsigned int cur_rx, dirty_rx; /* Producer/consumer ring indices */
unsigned int cur_tx, dirty_tx;
return -ENODEV;
for (;pci_index < 0xff; pci_index++) {
+ struct pci_dev *pdev;
u16 vendor, device, pci_command, new_command;
int chip_idx, irq;
long pciaddr;
if (pci_tbl[chip_idx].vendor_id == 0) /* Compiled out! */
continue;
- {
- struct pci_dev *pdev = pci_find_slot(pci_bus, pci_device_fn);
+ pdev = pci_find_slot(pci_bus, pci_device_fn);
+ {
pciaddr = pdev->resource[0].start;
#if defined(ADDR_64BITS) && defined(__alpha__)
pciaddr |= ((long)pdev->base_address[1]) << 32;
PCI_COMMAND, new_command);
}
- dev = pci_tbl[chip_idx].probe1(pci_bus, pci_device_fn, ioaddr,
+ dev = pci_tbl[chip_idx].probe1(pdev, pci_bus, pci_device_fn, ioaddr,
irq, chip_idx, cards_found);
if (dev && (pci_tbl[chip_idx].flags & PCI_COMMAND_MASTER)) {
static struct net_device *
-starfire_probe1(int pci_bus, int pci_devfn, long ioaddr, int irq, int chip_id, int card_idx)
+starfire_probe1(struct pci_dev *pdev, int pci_bus, int pci_devfn, long ioaddr, int irq, int chip_id, int card_idx)
{
struct netdev_private *np;
int i, option = card_idx < MAX_UNITS ? options[card_idx] : 0;
np->next_module = root_net_dev;
root_net_dev = dev;
+ np->pdev = pdev;
np->pci_bus = pci_bus;
np->pci_devfn = pci_devfn;
np->chip_id = chip_id;
dev->name, dev->irq);
/* Allocate the various queues, failing gracefully. */
if (np->tx_done_q == 0)
- np->tx_done_q = (struct tx_done_report *)get_free_page(GFP_KERNEL);
+ np->tx_done_q = pci_alloc_consistent(np->pdev, PAGE_SIZE, &np->tx_done_q_dma);
if (np->rx_done_q == 0)
- np->rx_done_q = (struct rx_done_desc *)get_free_page(GFP_KERNEL);
+ np->rx_done_q = pci_alloc_consistent(np->pdev, PAGE_SIZE, &np->rx_done_q_dma);
if (np->tx_ring == 0)
- np->tx_ring = (struct starfire_tx_desc *)get_free_page(GFP_KERNEL);
+ np->tx_ring = pci_alloc_consistent(np->pdev, PAGE_SIZE, &np->tx_ring_dma);
if (np->rx_ring == 0)
- np->rx_ring = (struct starfire_rx_desc *)get_free_page(GFP_KERNEL);
+ np->rx_ring = pci_alloc_consistent(np->pdev, PAGE_SIZE, &np->rx_ring_dma);
if (np->tx_done_q == 0 || np->rx_done_q == 0
- || np->rx_ring == 0 || np->tx_ring == 0)
+ || np->rx_ring == 0 || np->tx_ring == 0) {
+ if (np->tx_done_q)
+ pci_free_consistent(np->pdev, PAGE_SIZE,
+ np->tx_done_q, np->tx_done_q_dma);
+ if (np->rx_done_q)
+ pci_free_consistent(np->pdev, PAGE_SIZE,
+ np->rx_done_q, np->rx_done_q_dma);
+ if (np->tx_ring)
+ pci_free_consistent(np->pdev, PAGE_SIZE,
+ np->tx_ring, np->tx_ring_dma);
+ if (np->rx_ring)
+ pci_free_consistent(np->pdev, PAGE_SIZE,
+ np->rx_ring, np->rx_ring_dma);
return -ENOMEM;
+ }
MOD_INC_USE_COUNT;
writel(0x02000401, ioaddr + TxDescCtrl);
#if defined(ADDR_64BITS) && defined(__alpha__)
- writel(virt_to_bus(np->rx_ring) >> 32, ioaddr + RxDescQHiAddr);
- writel(virt_to_bus(np->tx_ring) >> 32, ioaddr + TxRingHiAddr);
+ /* XXX We really need a 64-bit PCI dma interface too... -DaveM */
+ writel(np->rx_ring_dma >> 32, ioaddr + RxDescQHiAddr);
+ writel(np->tx_ring_dma >> 32, ioaddr + TxRingHiAddr);
#else
writel(0, ioaddr + RxDescQHiAddr);
writel(0, ioaddr + TxRingHiAddr);
writel(0, ioaddr + CompletionHiAddr);
#endif
- writel(virt_to_bus(np->rx_ring), ioaddr + RxDescQAddr);
- writel(virt_to_bus(np->tx_ring), ioaddr + TxRingPtr);
+ writel(np->rx_ring_dma, ioaddr + RxDescQAddr);
+ writel(np->tx_ring_dma, ioaddr + TxRingPtr);
- writel(virt_to_bus(np->tx_done_q), ioaddr + TxCompletionAddr);
- writel(virt_to_bus(np->rx_done_q), ioaddr + RxCompletionAddr);
+ writel(np->tx_done_q_dma, ioaddr + TxCompletionAddr);
+ writel(np->rx_done_q_dma, ioaddr + RxCompletionAddr);
if (debug > 1)
printk(KERN_DEBUG "%s: Filling in the station address.\n", dev->name);
/* Fill in the Rx buffers. Handle allocation failure gracefully. */
for (i = 0; i < RX_RING_SIZE; i++) {
struct sk_buff *skb = dev_alloc_skb(np->rx_buf_sz);
- np->rx_skbuff[i] = skb;
+ np->rx_info[i].skb = skb;
if (skb == NULL)
break;
+ np->rx_info[i].mapping = pci_map_single(np->pdev, skb->tail, np->rx_buf_sz);
skb->dev = dev; /* Mark as being used by this device. */
/* Grrr, we cannot offset to correctly align the IP header. */
- np->rx_ring[i].rxaddr = cpu_to_le32(virt_to_bus(skb->tail) | RxDescValid);
+ np->rx_ring[i].rxaddr = cpu_to_le32(np->rx_info[i].mapping | RxDescValid);
}
writew(i-1, dev->base_addr + RxDescQIdx);
np->dirty_rx = (unsigned int)(i - RX_RING_SIZE);
/* Clear the remainder of the Rx buffer ring. */
for ( ; i < RX_RING_SIZE; i++) {
np->rx_ring[i].rxaddr = 0;
- np->rx_skbuff[i] = 0;
+ np->rx_info[i].skb = NULL;
+ np->rx_info[i].mapping = 0;
}
/* Mark the last entry as wrapping the ring. */
np->rx_ring[i-1].rxaddr |= cpu_to_le32(RxDescEndRing);
}
for (i = 0; i < TX_RING_SIZE; i++) {
- np->tx_skbuff[i] = 0;
+ np->tx_info[i].skb = NULL;
+ np->tx_info[i].mapping = 0;
np->tx_ring[i].status = 0;
}
return;
/* Calculate the next Tx descriptor entry. */
entry = np->cur_tx % TX_RING_SIZE;
- np->tx_skbuff[entry] = skb;
+ np->tx_info[entry].skb = skb;
+ np->tx_info[entry].mapping =
+ pci_map_single(np->pdev, skb->data, skb->len);
- np->tx_ring[entry].addr = cpu_to_le32(virt_to_bus(skb->data));
+ np->tx_ring[entry].addr = cpu_to_le32(np->tx_info[entry].mapping);
/* Add |TxDescIntr to generate Tx-done interrupts. */
np->tx_ring[entry].status = cpu_to_le32(skb->len | TxDescID);
if (debug > 5) {
if ((tx_status & 0xe0000000) == 0xa0000000) {
np->stats.tx_packets++;
} else if ((tx_status & 0xe0000000) == 0x80000000) {
+ struct sk_buff *skb;
u16 entry = tx_status; /* Implicit truncate */
entry >>= 3;
+
+ skb = np->tx_info[entry].skb;
+ pci_unmap_single(np->pdev,
+ np->tx_info[entry].mapping,
+ skb->len);
+
/* Scavenge the descriptor. */
- dev_kfree_skb(np->tx_skbuff[entry]);
- np->tx_skbuff[entry] = 0;
+ dev_kfree_skb(skb);
+ np->tx_info[entry].skb = NULL;
+ np->tx_info[entry].mapping = 0;
np->dirty_tx++;
}
np->tx_done_q[np->tx_done].status = 0;
&& (skb = dev_alloc_skb(pkt_len + 2)) != NULL) {
skb->dev = dev;
skb_reserve(skb, 2); /* 16 byte align the IP header */
+ pci_dma_sync_single(np->pdev,
+ np->rx_info[entry].mapping,
+ pkt_len);
#if HAS_IP_COPYSUM /* Call copy + cksum if available. */
- eth_copy_and_sum(skb, np->rx_skbuff[entry]->tail, pkt_len, 0);
+ eth_copy_and_sum(skb, np->rx_info[entry].skb->tail, pkt_len, 0);
skb_put(skb, pkt_len);
#else
- memcpy(skb_put(skb, pkt_len), np->rx_skbuff[entry]->tail,
+ memcpy(skb_put(skb, pkt_len), np->rx_info[entry].skb->tail,
pkt_len);
#endif
} else {
- char *temp = skb_put(skb = np->rx_skbuff[entry], pkt_len);
- np->rx_skbuff[entry] = NULL;
-#ifndef final_version /* Remove after testing. */
- if (bus_to_virt(le32_to_cpu(np->rx_ring[entry].rxaddr) & ~3) != temp)
- printk(KERN_ERR "%s: Internal fault: The skbuff addresses "
- "do not match in netdev_rx: %p vs. %p / %p.\n",
- dev->name, bus_to_virt(le32_to_cpu(np->rx_ring[entry].rxaddr)),
- skb->head, temp);
-#endif
+ char *temp;
+
+ pci_unmap_single(np->pdev, np->rx_info[entry].mapping, np->rx_buf_sz);
+ skb = np->rx_info[entry].skb;
+ temp = skb_put(skb, pkt_len);
+ np->rx_info[entry].skb = NULL;
+ np->rx_info[entry].mapping = 0;
}
#ifndef final_version /* Remove after testing. */
/* You will want this info for the initial debug. */
for (; np->cur_rx - np->dirty_rx > 0; np->dirty_rx++) {
struct sk_buff *skb;
int entry = np->dirty_rx % RX_RING_SIZE;
- if (np->rx_skbuff[entry] == NULL) {
+ if (np->rx_info[entry].skb == NULL) {
skb = dev_alloc_skb(np->rx_buf_sz);
- np->rx_skbuff[entry] = skb;
+ np->rx_info[entry].skb = skb;
if (skb == NULL)
break; /* Better luck next round. */
+ np->rx_info[entry].mapping =
+ pci_map_single(np->pdev, skb->tail, np->rx_buf_sz);
skb->dev = dev; /* Mark as being used by this device. */
- np->rx_ring[entry].rxaddr = cpu_to_le32(virt_to_bus(skb->tail) | RxDescValid);
+ np->rx_ring[entry].rxaddr =
+ cpu_to_le32(np->rx_info[entry].mapping | RxDescValid);
}
if (entry == RX_RING_SIZE - 1)
np->rx_ring[entry].rxaddr |= cpu_to_le32(RxDescEndRing);
#ifdef __i386__
if (debug > 2) {
printk("\n"KERN_DEBUG" Tx ring at %8.8x:\n",
- (int)virt_to_bus(np->tx_ring));
+ np->tx_ring_dma);
for (i = 0; i < 8 /* TX_RING_SIZE */; i++)
printk(KERN_DEBUG " #%d desc. %8.8x %8.8x -> %8.8x.\n",
i, le32_to_cpu(np->tx_ring[i].status),
le32_to_cpu(np->tx_ring[i].addr),
le32_to_cpu(np->tx_done_q[i].status));
printk(KERN_DEBUG " Rx ring at %8.8x -> %p:\n",
- (int)virt_to_bus(np->rx_ring), np->rx_done_q);
+ np->rx_ring_dma, np->rx_done_q);
if (np->rx_done_q)
for (i = 0; i < 8 /* RX_RING_SIZE */; i++) {
printk(KERN_DEBUG " #%d desc. %8.8x -> %8.8x\n",
/* Free all the skbuffs in the Rx queue. */
for (i = 0; i < RX_RING_SIZE; i++) {
np->rx_ring[i].rxaddr = cpu_to_le32(0xBADF00D0); /* An invalid address. */
- if (np->rx_skbuff[i]) {
-#if LINUX_VERSION_CODE < 0x20100
- np->rx_skbuff[i]->free = 1;
-#endif
- dev_kfree_skb(np->rx_skbuff[i]);
+ if (np->rx_info[i].skb != NULL) {
+ pci_unmap_single(np->pdev, np->rx_info[i].mapping, np->rx_buf_sz);
+ dev_kfree_skb(np->rx_info[i].skb);
}
- np->rx_skbuff[i] = 0;
+ np->rx_info[i].skb = NULL;
+ np->rx_info[i].mapping = 0;
}
for (i = 0; i < TX_RING_SIZE; i++) {
- if (np->tx_skbuff[i])
- dev_kfree_skb(np->tx_skbuff[i]);
- np->tx_skbuff[i] = 0;
+ struct sk_buff *skb = np->tx_info[i].skb;
+ if (skb != NULL) {
+ pci_unmap_single(np->pdev,
+ np->tx_info[i].mapping,
+ skb->len);
+ dev_kfree_skb(skb);
+ }
+ np->tx_info[i].skb = NULL;
+ np->tx_info[i].mapping = 0;
}
MOD_DEC_USE_COUNT;
next_dev = np->next_module;
unregister_netdev(root_net_dev);
iounmap((char *)root_net_dev->base_addr);
- if (np->tx_done_q) free_page((long)np->tx_done_q);
- if (np->rx_done_q) free_page((long)np->rx_done_q);
+ if (np->tx_done_q)
+ pci_free_consistent(np->pdev, PAGE_SIZE,
+ np->tx_done_q, np->tx_done_q_dma);
+ if (np->rx_done_q)
+ pci_free_consistent(np->pdev, PAGE_SIZE,
+ np->rx_done_q, np->rx_done_q_dma);
+ if (np->tx_ring)
+ pci_free_consistent(np->pdev, PAGE_SIZE,
+ np->tx_ring, np->tx_ring_dma);
+ if (np->rx_ring)
+ pci_free_consistent(np->pdev, PAGE_SIZE,
+ np->rx_ring, np->rx_ring_dma);
kfree(root_net_dev);
root_net_dev = next_dev;
}
if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
+ mainmenu_option next_comment
comment 'Linux/SPARC audio subsystem (EXPERIMENTAL)'
tristate 'Audio support (EXPERIMENTAL)' CONFIG_SPARCAUDIO
dep_tristate ' CS4231 Lowlevel Driver' CONFIG_SPARCAUDIO_CS4231 $CONFIG_SPARCAUDIO
dep_tristate ' DBRI Lowlevel Driver' CONFIG_SPARCAUDIO_DBRI $CONFIG_SPARCAUDIO
dep_tristate ' Dummy Lowlevel Driver' CONFIG_SPARCAUDIO_DUMMY $CONFIG_SPARCAUDIO
+ endmenu
fi
+mainmenu_option next_comment
comment 'Misc Linux/SPARC drivers'
tristate '/dev/openprom device support' CONFIG_SUN_OPENPROMIO
tristate 'Mostek real time clock support' CONFIG_SUN_MOSTEK_RTC
tristate 'Bidirectional parallel port support (OBSOLETE)' CONFIG_SUN_BPP
tristate 'Videopix Frame Grabber (EXPERIMENTAL)' CONFIG_SUN_VIDEOPIX
tristate 'Aurora Multiboard 1600se (EXPERIMENTAL)' CONFIG_SUN_AURORA
- tristate 'Tadpole TS102 Microcontroller support (EXPERIMENTAL)' CONFIG_TADPOLE_TS102_UCTRL
- tristate 'JavaStation OS Flash SIMM (EXPERIMENTAL)' CONFIG_SUN_JSFLASH
- # XXX Why don't we do "source drivers/char/Config.in" somewhere?
- if [ "$CONFIG_PCI" = "y" ]; then
- define_bool CONFIG_APM_RTC_IS_GMT y # no shit
- bool 'PC-style RTC' CONFIG_RTC
+ if [ "$ARCH" = "sparc" ]; then
+ tristate 'Tadpole TS102 Microcontroller support (EXPERIMENTAL)' CONFIG_TADPOLE_TS102_UCTRL
+
+ tristate 'JavaStation OS Flash SIMM (EXPERIMENTAL)' CONFIG_SUN_JSFLASH
+ # XXX Why don't we do "source drivers/char/Config.in" somewhere?
+ if [ "$CONFIG_PCI" = "y" ]; then
+ define_bool CONFIG_APM_RTC_IS_GMT y # no shit
+ bool 'PC-style RTC' CONFIG_RTC
+ fi
fi
fi
+endmenu
#include <linux/fcntl.h>
#include <linux/poll.h>
#include <linux/init.h>
+#include <linux/string.h>
#if 0 /* P3 from mem.c */
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <asm/sbus.h>
#include <asm/ebus.h>
#endif
+#include <asm/pcic.h>
+#include <asm/oplib.h>
#include <asm/jsflash.h> /* ioctl arguments. <linux/> ?? */
#define JSFIDSZ (sizeof(struct jsflash_ident_arg))
#ifdef MODULE
int init_module(void)
#else
-int /* __init */ jsflash_init(void)
+int __init jsflash_init(void)
#endif
{
int rc;
+ char banner[128];
+
+ /* FIXME: Really autodetect things */
+ prom_getproperty(prom_root_node, "banner-name", banner, 128);
+ if (strcmp (banner, "JavaStation-NC") && strcmp (banner, "JavaStation-E"))
+ return -ENXIO;
/* extern enum sparc_cpu sparc_cpu_model; */ /* in <asm/system.h> */
if (sparc_cpu_model == sun4m && jsf0.base == 0) {
*/
#define RES_QUEUE_LEN ((QLOGICISP_REQ_QUEUE_LEN + 1) / 8 - 1)
#define QUEUE_ENTRY_LEN 64
+#define QSIZE(entries) (((entries) + 1) * QUEUE_ENTRY_LEN)
+
+struct isp_queue_entry {
+ char __opaque[QUEUE_ENTRY_LEN];
+};
struct isp1020_hostdata {
u_long memaddr;
struct dev_param dev_param[MAX_TARGETS];
struct pci_dev *pci_dev;
+ struct isp_queue_entry *res_cpu; /* CPU-side address of response queue. */
+ struct isp_queue_entry *req_cpu; /* CPU-side address of request queue. */
+
/* result and request queues (shared with isp1020): */
u_int req_in_ptr; /* index of next request slot */
u_int res_out_ptr; /* index of next result slot */
/* this is here so the queues are nicely aligned */
long send_marker; /* do we need to send a marker? */
- char res[RES_QUEUE_LEN+1][QUEUE_ENTRY_LEN];
- char req[QLOGICISP_REQ_QUEUE_LEN+1][QUEUE_ENTRY_LEN];
+ /* The cmd->handle has a fixed size, and is only 32 bits. We
+ * need to take care to handle 64-bit systems correctly; thus what
+ * we actually place in cmd->handle is an index into the following
+ * table. Kudos to Matt Jacob for the technique. -DaveM
+ */
+ Scsi_Cmnd *cmd_slots[QLOGICISP_REQ_QUEUE_LEN + 1];
+
+ dma_addr_t res_dma; /* PCI side view of response queue */
+ dma_addr_t req_dma; /* PCI side view of request queue */
};
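The cmd_slots table added above implements the handle-indexing technique the comment credits to Matt Jacob: since the firmware's 32-bit handle field cannot hold a 64-bit pointer, the driver stores a small table index in the hardware descriptor and keeps the real pointer host-side. A hedged userspace sketch (queue constants and function names are illustrative):

```c
#include <stdint.h>
#include <stddef.h>

#define QLOGICISP_REQ_QUEUE_LEN 63	/* illustrative; must be 2^n - 1 */
#define QUEUE_ENTRY_LEN 64
#define QSIZE(entries) (((entries) + 1) * QUEUE_ENTRY_LEN)

/* 64-bit-safe command handle: the 32-bit value handed to the chip is an
 * index into this host-side slot table, never a truncated pointer. */
struct cmd_slots {
	void *slot[QLOGICISP_REQ_QUEUE_LEN + 1];
};

static uint32_t cmd_to_handle(struct cmd_slots *t, unsigned idx, void *cmd)
{
	t->slot[idx] = cmd;	/* keep the full pointer on the host side */
	return (uint32_t)idx;	/* only the small index crosses to the chip */
}

static void *handle_to_cmd(struct cmd_slots *t, uint32_t handle)
{
	return t->slot[handle];
}
```

QSIZE() likewise gives one expression usable for both the pci_alloc_consistent() and pci_free_consistent() calls on each queue.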
/* queue length's _must_ be power of two: */
hostdata->pci_dev = pdev;
- if (isp1020_init(host)) {
- scsi_unregister(host);
- continue;
- }
+ if (isp1020_init(host))
+ goto fail_and_unregister;
if (isp1020_reset_hardware(host)
#if USE_NVRAM_DEFAULTS
|| isp1020_load_parameters(host)) {
iounmap((void *)hostdata->memaddr);
release_region(host->io_port, 0xff);
- scsi_unregister(host);
- continue;
+ goto fail_and_unregister;
}
host->this_id = hostdata->host_param.initiator_scsi_id;
host->irq);
iounmap((void *)hostdata->memaddr);
release_region(host->io_port, 0xff);
- scsi_unregister(host);
- continue;
+ goto fail_and_unregister;
}
isp_outw(0x0, host, PCI_SEMAPHORE);
isp1020_enable_irqs(host);
hosts++;
+ continue;
+
+ fail_and_unregister:
+ if (hostdata->res_cpu)
+ pci_free_consistent(hostdata->pci_dev,
+ QSIZE(RES_QUEUE_LEN),
+ hostdata->res_cpu,
+ hostdata->res_dma);
+ if (hostdata->req_cpu)
+ pci_free_consistent(hostdata->pci_dev,
+ QSIZE(QLOGICISP_REQ_QUEUE_LEN),
+ hostdata->req_cpu,
+ hostdata->req_dma);
+ scsi_unregister(host);
}
LEAVE("isp1020_detect");
*/
int isp1020_queuecommand(Scsi_Cmnd *Cmnd, void (*done)(Scsi_Cmnd *))
{
- int i, sg_count, n, num_free;
+ int i, n, num_free;
u_int in_ptr, out_ptr;
struct dataseg * ds;
struct scatterlist *sg;
DEBUG(printk("qlogicisp : request queue depth %d\n",
REQ_QUEUE_DEPTH(in_ptr, out_ptr)));
- cmd = (struct Command_Entry *) &hostdata->req[in_ptr][0];
+ cmd = (struct Command_Entry *) &hostdata->req_cpu[in_ptr];
in_ptr = (in_ptr + 1) & QLOGICISP_REQ_QUEUE_LEN;
if (in_ptr == out_ptr) {
printk("qlogicisp : request queue overflow\n");
return 1;
}
- cmd = (struct Command_Entry *) &hostdata->req[in_ptr][0];
+ cmd = (struct Command_Entry *) &hostdata->req_cpu[in_ptr];
in_ptr = (in_ptr + 1) & QLOGICISP_REQ_QUEUE_LEN;
}
cmd->hdr.entry_type = ENTRY_COMMAND;
cmd->hdr.entry_cnt = 1;
- cmd->handle = cpu_to_le32((u_int) virt_to_bus(Cmnd));
cmd->target_lun = Cmnd->lun;
cmd->target_id = Cmnd->target;
cmd->cdb_length = cpu_to_le16(Cmnd->cmd_len);
memcpy(cmd->cdb, Cmnd->cmnd, Cmnd->cmd_len);
if (Cmnd->use_sg) {
- cmd->segment_cnt = cpu_to_le16(sg_count = Cmnd->use_sg);
+ int sg_count;
+
sg = (struct scatterlist *) Cmnd->request_buffer;
ds = cmd->dataseg;
+ sg_count = pci_map_sg(hostdata->pci_dev, sg, Cmnd->use_sg);
+
+ cmd->segment_cnt = cpu_to_le16(sg_count);
+
/* fill in first four sg entries: */
n = sg_count;
if (n > 4)
n = 4;
for (i = 0; i < n; i++) {
- ds[i].d_base = cpu_to_le32((u_int) virt_to_bus(sg->address));
- ds[i].d_count = cpu_to_le32(sg->length);
+ ds[i].d_base = cpu_to_le32(sg_dma_address(sg));
+ ds[i].d_count = cpu_to_le32(sg_dma_len(sg));
++sg;
}
sg_count -= 4;
while (sg_count > 0) {
++cmd->hdr.entry_cnt;
cont = (struct Continuation_Entry *)
- &hostdata->req[in_ptr][0];
+ &hostdata->req_cpu[in_ptr];
in_ptr = (in_ptr + 1) & QLOGICISP_REQ_QUEUE_LEN;
if (in_ptr == out_ptr) {
printk("isp1020: unexpected request queue "
if (n > 7)
n = 7;
for (i = 0; i < n; ++i) {
- ds[i].d_base = cpu_to_le32((u_int)virt_to_bus(sg->address));
- ds[i].d_count = cpu_to_le32(sg->length);
+ ds[i].d_base = cpu_to_le32(sg_dma_address(sg));
+ ds[i].d_count = cpu_to_le32(sg_dma_len(sg));
++sg;
}
sg_count -= n;
}
} else {
+ Cmnd->SCp.ptr = (char *)(unsigned long)
+ pci_map_single(hostdata->pci_dev,
+ Cmnd->request_buffer,
+ Cmnd->request_bufflen);
+
cmd->dataseg[0].d_base =
- cpu_to_le32((u_int) virt_to_bus(Cmnd->request_buffer));
+ cpu_to_le32((u32)(long)Cmnd->SCp.ptr);
cmd->dataseg[0].d_count =
- cpu_to_le32((u_int) Cmnd->request_bufflen);
+ cpu_to_le32((u32)Cmnd->request_bufflen);
cmd->segment_cnt = cpu_to_le16(1);
}
+ /* Committed, record the Scsi_Cmnd so we can find it later. */
+ cmd->handle = in_ptr;
+ hostdata->cmd_slots[in_ptr] = Cmnd;
+
isp_outw(in_ptr, host, MBOX4);
hostdata->req_in_ptr = in_ptr;
QUEUE_DEPTH(in_ptr, out_ptr, RES_QUEUE_LEN)));
while (out_ptr != in_ptr) {
- sts = (struct Status_Entry *) &hostdata->res[out_ptr][0];
+ u_int cmd_slot;
+
+ sts = (struct Status_Entry *) &hostdata->res_cpu[out_ptr];
out_ptr = (out_ptr + 1) & RES_QUEUE_LEN;
- Cmnd = (Scsi_Cmnd *) bus_to_virt(le32_to_cpu(sts->handle));
+ cmd_slot = sts->handle;
+ Cmnd = hostdata->cmd_slots[cmd_slot];
+ hostdata->cmd_slots[cmd_slot] = NULL;
TRACE("done", out_ptr, Cmnd);
else
Cmnd->result = DID_ERROR << 16;
+ if (Cmnd->use_sg)
+ pci_unmap_sg(hostdata->pci_dev,
+ (struct scatterlist *)Cmnd->buffer,
+ Cmnd->use_sg);
+ else
+ pci_unmap_single(hostdata->pci_dev,
+ (u32)((long)Cmnd->SCp.ptr),
+ Cmnd->request_bufflen);
+
isp_outw(out_ptr, host, MBOX5);
(*Cmnd->scsi_done)(Cmnd);
}
struct Scsi_Host *host;
struct isp1020_hostdata *hostdata;
int return_status = SCSI_ABORT_SUCCESS;
- u_int cmdaddr = virt_to_bus(Cmnd);
+ u_int cmd_cookie;
+ int i;
ENTER("isp1020_abort");
host = Cmnd->host;
hostdata = (struct isp1020_hostdata *) host->hostdata;
+ for (i = 0; i < QLOGICISP_REQ_QUEUE_LEN + 1; i++)
+ if (hostdata->cmd_slots[i] == Cmnd)
+ break;
+ cmd_cookie = i;
+
isp1020_disable_irqs(host);
param[0] = MBOX_ABORT;
param[1] = (((u_short) Cmnd->target) << 8) | Cmnd->lun;
- param[2] = cmdaddr >> 16;
- param[3] = cmdaddr & 0xffff;
+ param[2] = cmd_cookie >> 16;
+ param[3] = cmd_cookie & 0xffff;
isp1020_mbox_command(host, param);
sh->max_id = MAX_TARGETS;
sh->max_lun = MAX_LUNS;
+ hostdata->res_cpu = pci_alloc_consistent(hostdata->pci_dev,
+ QSIZE(RES_QUEUE_LEN),
+ &hostdata->res_dma);
+ if (hostdata->res_cpu == NULL) {
+ printk("qlogicisp : can't allocate response queue\n");
+ return 1;
+ }
+
+ hostdata->req_cpu = pci_alloc_consistent(hostdata->pci_dev,
+ QSIZE(QLOGICISP_REQ_QUEUE_LEN),
+ &hostdata->req_dma);
+ if (hostdata->req_cpu == NULL) {
+ pci_free_consistent(hostdata->pci_dev,
+ QSIZE(RES_QUEUE_LEN),
+ hostdata->res_cpu,
+ hostdata->res_dma);
+ printk("qlogicisp : can't allocate request queue\n");
+ return 1;
+ }
+
LEAVE("isp1020_init");
return 0;
}
}
- queue_addr = (u_int) virt_to_bus(&hostdata->res[0][0]);
+ queue_addr = hostdata->res_dma;
param[0] = MBOX_INIT_RES_QUEUE;
param[1] = RES_QUEUE_LEN + 1;
return 1;
}
- queue_addr = (u_int) virt_to_bus(&hostdata->req[0][0]);
+ queue_addr = hostdata->req_dma;
param[0] = MBOX_INIT_REQ_QUEUE;
param[1] = QLOGICISP_REQ_QUEUE_LEN + 1;
/*****************************************************************************/
-#include <linux/config.h>
#include <linux/version.h>
#include <linux/module.h>
#include <linux/string.h>
/*****************************************************************************/
-#include <linux/config.h>
#include <linux/version.h>
#include <linux/module.h>
* off of it; go on, I dare you.
*/
-#include <linux/config.h>
#define __NO_VERSION__
#include <linux/pci.h>
#include <linux/module.h>
* "Channel Binding" ioctl extension
*/
-#include <linux/config.h>
#include <linux/module.h>
#include <linux/version.h>
#include <linux/string.h>
comment 'USB Controllers'
dep_tristate ' UHCI (Intel PIIX4, VIA, ...) support' CONFIG_USB_UHCI $CONFIG_USB
dep_tristate ' UHCI Alternate Driver (JE) support' CONFIG_USB_UHCI_ALT $CONFIG_USB
+ if [ "$CONFIG_USB_UHCI_ALT" != "n" ]; then
+ bool ' UHCI unlink optimizations (EXPERIMENTAL)' CONFIG_USB_UHCI_ALT_UNLINK_OPTIMIZE
+ fi
dep_tristate ' OHCI (Compaq, iMacs, OPTi, SiS, ALi, ...) support' CONFIG_USB_OHCI $CONFIG_USB
comment 'Miscellaneous USB options'
# Translate to Rules.make lists.
-O_OBJS := $(filter-out $(export-objs), $(obj-y))
-OX_OBJS := $(filter $(export-objs), $(obj-y))
+O_OBJS := $(sort $(filter-out $(export-objs), $(obj-y)))
+OX_OBJS := $(sort $(filter $(export-objs), $(obj-y)))
M_OBJS := $(sort $(filter-out $(export-objs), $(obj-m)))
MX_OBJS := $(sort $(filter $(export-objs), $(obj-m)))
MI_OBJS := $(sort $(filter-out $(export-objs), $(int-m)))
struct usb_audiodev {
struct list_head list;
struct usb_audio_state *state;
- int remove_pending;
/* soundcore stuff */
int dev_audio;
#if 0
printk(KERN_DEBUG "usbin_completed: status %d errcnt %d flags 0x%x\n", urb->status, urb->error_count, u->flags);
#endif
- if (as->remove_pending)
- return;
if (urb == &u->durb[0].urb)
mask = FLG_URB0RUNNING;
else if (urb == &u->durb[1].urb)
#if 0
printk(KERN_DEBUG "usbin_sync_completed: status %d errcnt %d flags 0x%x\n", urb->status, urb->error_count, u->flags);
#endif
- if (as->remove_pending)
- return;
-
if (urb == &u->surb[0].urb)
mask = FLG_SYNC0RUNNING;
else if (urb == &u->surb[1].urb)
#if 0
printk(KERN_DEBUG "usbout_sync_completed: status %d errcnt %d flags 0x%x\n", urb->status, urb->error_count, u->flags);
#endif
- if (as->remove_pending)
- return;
if (urb == &u->surb[0].urb)
mask = FLG_SYNC0RUNNING;
else if (urb == &u->surb[1].urb)
d->srate = fmt->sratelo;
if (d->srate > fmt->sratehi)
d->srate = fmt->sratehi;
+printk(KERN_DEBUG "usb_audio: set_format_in: usb_set_interface %u %u\n", alts->bInterfaceNumber, fmt->altsetting);
if (usb_set_interface(dev, alts->bInterfaceNumber, fmt->altsetting) < 0) {
printk(KERN_WARNING "usbaudio: usb_set_interface failed, device %d interface %d altsetting %d\n",
dev->devnum, u->interface, fmt->altsetting);
u->datapipe = usb_sndisocpipe(dev, alts->endpoint[0].bEndpointAddress & 0xf);
u->syncpipe = u->syncinterval = 0;
if ((alts->endpoint[0].bmAttributes & 0x0c) == 0x04) {
-
+#if 0
printk(KERN_DEBUG "bNumEndpoints 0x%02x endpoint[1].bmAttributes 0x%02x\n"
KERN_DEBUG "endpoint[1].bSynchAddress 0x%02x endpoint[1].bEndpointAddress 0x%02x\n"
KERN_DEBUG "endpoint[0].bSynchAddress 0x%02x\n", alts->bNumEndpoints,
alts->endpoint[1].bmAttributes, alts->endpoint[1].bSynchAddress,
alts->endpoint[1].bEndpointAddress, alts->endpoint[0].bSynchAddress);
-
+#endif
if (alts->bNumEndpoints < 2 ||
alts->endpoint[1].bmAttributes != 0x01 ||
alts->endpoint[1].bSynchAddress != 0 ||
u->syncpipe = usb_rcvisocpipe(dev, alts->endpoint[1].bEndpointAddress & 0xf);
u->syncinterval = alts->endpoint[1].bRefresh;
}
-
- printk(KERN_DEBUG "datapipe 0x%x syncpipe 0x%x\n", u->datapipe, u->syncpipe);
-
-
if (d->srate < fmt->sratelo)
d->srate = fmt->sratelo;
if (d->srate > fmt->sratehi)
d->srate = fmt->sratehi;
+printk(KERN_DEBUG "usb_audio: set_format_out: usb_set_interface %u %u\n", alts->bInterfaceNumber, fmt->altsetting);
if (usb_set_interface(dev, u->interface, fmt->altsetting) < 0) {
printk(KERN_WARNING "usbaudio: usb_set_interface failed, device %d interface %d altsetting %d\n",
dev->devnum, u->interface, fmt->altsetting);
file->private_data = as;
as->open_mode |= file->f_mode & (FMODE_READ | FMODE_WRITE);
s->count++;
- as->remove_pending=0;
MOD_INC_USE_COUNT;
up(&open_sem);
return 0;
}
if (state->nrchannels > 2)
printk(KERN_WARNING "usbaudio: feature unit %u: OSS mixer interface does not support more than 2 channels\n", ftr[3]);
- if (ftr[0] < 7+ftr[5]*(1+state->nrchannels)) {
- printk(KERN_ERR "usbaudio: unit %u: invalid FEATURE_UNIT descriptor\n", ftr[3]);
- return;
+ if (state->nrchannels == 1 && ftr[0] == 7+ftr[5]) {
+ printk(KERN_WARNING "usbaudio: workaround for broken Philips Camera Microphone descriptor enabled\n");
+ mchftr = ftr[6];
+ chftr = 0;
+ } else {
+ if (ftr[0] < 7+ftr[5]*(1+state->nrchannels)) {
+ printk(KERN_ERR "usbaudio: unit %u: invalid FEATURE_UNIT descriptor\n", ftr[3]);
+ return;
+ }
+ mchftr = ftr[6];
+ chftr = ftr[6+ftr[5]];
+ if (state->nrchannels > 1)
+ chftr &= ftr[6+2*ftr[5]];
}
- mchftr = ftr[6];
- chftr = ftr[6+ftr[5]];
- if (state->nrchannels > 1)
- chftr &= ftr[6+2*ftr[5]];
/* volume control */
if (chftr & 2) {
ch = getmixchannel(state, getvolchannel(state));
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*
- * $Id: devio.c,v 1.6 2000/01/11 23:26:33 tom Exp $
+ * $Id: devio.c,v 1.7 2000/02/01 17:28:48 fliegl Exp $
*
* This file implements the usbdevfs/x/y files, where
* x is the bus number and y the device number.
urb->transfer_buffer = data;
urb->transfer_buffer_length = size;
ret = do_sync(urb, timeout);
- if (ret >= 0)
- ret = urb->status;
if (ret >= 0)
ret = urb->actual_length;
kfree(urb->setup_packet);
urb->transfer_buffer = data;
urb->transfer_buffer_length = len;
ret = do_sync(urb, timeout);
- if (ret >= 0)
- ret = urb->status;
if (ret >= 0 && actual_length != NULL)
*actual_length = urb->actual_length;
usb_free_urb(urb);
struct dev_state *ps = as->ps;
struct siginfo sinfo;
-#if 0
+#if 1
printk(KERN_DEBUG "usbdevfs: async_completed: status %d errcount %d actlen %d pipe 0x%x\n",
urb->status, urb->error_count, urb->actual_length, urb->pipe);
#endif
unsigned long flags;
spin_lock_irqsave(&ps->lock, flags);
- if (!list_empty(&ps->async_pending)) {
+ while (!list_empty(&ps->async_pending)) {
as = list_entry(ps->async_pending.next, struct async, asynclist);
list_del(&as->asynclist);
INIT_LIST_HEAD(&as->asynclist);
static int rh_submit_urb(urb_t *urb);
static int rh_unlink_urb(urb_t *urb);
-static int uhci_get_current_frame_number(struct usb_device *usb_dev);
+static int uhci_get_current_frame_number(struct usb_device *dev);
+static void uhci_stop_hc_schedule(struct uhci *uhci);
+static void uhci_start_hc_schedule(struct uhci *uhci);
+static int uhci_unlink_urb(urb_t *urb);
#define min(a,b) (((a)<(b))?(a):(b))
/*
* Only the USB core should call uhci_alloc_dev and uhci_free_dev
*/
-static int uhci_alloc_dev(struct usb_device *usb_dev)
+static int uhci_alloc_dev(struct usb_device *dev)
{
return 0;
}
-static int uhci_free_dev(struct usb_device *usb_dev)
+static int uhci_free_dev(struct usb_device *dev)
{
+ urb_t *u;
+ struct uhci *uhci = (struct uhci *)dev->bus->hcpriv;
+ struct list_head *tmp, *head = &uhci->urb_list;
+ unsigned long flags;
+
+ /* Walk through the entire URB list and forcefully remove any */
+ /* URBs that are still active for that device */
+ nested_lock(&uhci->urblist_lock, flags);
+ tmp = head->next;
+ while (tmp != head) {
+ u = list_entry(tmp, urb_t, urb_list);
+
+ tmp = tmp->next;
+
+ if (u->dev == dev)
+ uhci_unlink_urb(u);
+ }
+ nested_unlock(&uhci->urblist_lock, flags);
+
return 0;
}
{
unsigned long flags;
- spin_lock_irqsave(&uhci->urblist_lock, flags);
+ nested_lock(&uhci->urblist_lock, flags);
list_add(&urb->urb_list, &uhci->urb_list);
- spin_unlock_irqrestore(&uhci->urblist_lock, flags);
+ nested_unlock(&uhci->urblist_lock, flags);
}
static void uhci_remove_urb_list(struct uhci *uhci, struct urb *urb)
{
unsigned long flags;
- spin_lock_irqsave(&uhci->urblist_lock, flags);
+ nested_lock(&uhci->urblist_lock, flags);
if (urb->urb_list.next != &urb->urb_list) {
list_del(&urb->urb_list);
INIT_LIST_HEAD(&urb->urb_list);
}
- spin_unlock_irqrestore(&uhci->urblist_lock, flags);
+ nested_unlock(&uhci->urblist_lock, flags);
}
/*
prevtd->link = UHCI_PTR_TERM;
}
-static struct uhci_td *uhci_td_alloc(struct usb_device *dev)
+static struct uhci_td *uhci_alloc_td(struct usb_device *dev)
{
struct uhci_td *td;
return td;
}
-static void uhci_td_free(struct uhci_td *td)
+static void uhci_free_td(struct uhci_td *td)
{
kmem_cache_free(uhci_td_cachep, td);
usb_dec_dev_use(td->dev);
}
-static void uhci_schedule_delete_td(struct uhci *uhci, struct uhci_td *td)
-{
- unsigned long flags;
-
- spin_lock_irqsave(&uhci->freelist_lock, flags);
- list_add(&td->list, &uhci->td_free_list);
- if (td->dev) {
- usb_dec_dev_use(td->dev);
- td->dev = NULL;
- }
- spin_unlock_irqrestore(&uhci->freelist_lock, flags);
-}
-
-static struct uhci_qh *uhci_qh_alloc(struct usb_device *dev)
+static struct uhci_qh *uhci_alloc_qh(struct usb_device *dev)
{
struct uhci_qh *qh;
return qh;
}
-static void uhci_qh_free(struct uhci_qh *qh)
+static void uhci_free_qh(struct uhci_qh *qh)
{
kmem_cache_free(uhci_qh_cachep, qh);
usb_dec_dev_use(qh->dev);
}
-static void uhci_schedule_delete_qh(struct uhci *uhci, struct uhci_qh *qh)
-{
- unsigned long flags;
-
- spin_lock_irqsave(&uhci->freelist_lock, flags);
- list_add(&qh->list, &uhci->qh_free_list);
- if (qh->dev) {
- usb_dec_dev_use(qh->dev);
- qh->dev = NULL;
- }
- spin_unlock_irqrestore(&uhci->freelist_lock, flags);
-}
-
static void uhci_insert_qh(struct uhci *uhci, struct uhci_qh *skelqh, struct uhci_qh *qh)
{
unsigned long flags;
urbp->begin = td;
}
+void uhci_inc_fsbr(struct uhci *uhci)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&uhci->framelist_lock, flags);
+
+ if (!uhci->fsbr++)
+ uhci->skel_term_qh.link = virt_to_bus(&uhci->skel_hs_control_qh) | UHCI_PTR_QH;
+
+ spin_unlock_irqrestore(&uhci->framelist_lock, flags);
+}
+
+void uhci_dec_fsbr(struct uhci *uhci)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&uhci->framelist_lock, flags);
+
+ if (!--uhci->fsbr)
+ uhci->skel_term_qh.link = UHCI_PTR_TERM;
+
+ spin_unlock_irqrestore(&uhci->framelist_lock, flags);
+}
+
/*
* Map status to standard result codes
*
struct uhci_td *td;
struct uhci_qh *qh;
unsigned long destination, status;
- struct uhci *uhci = urb->dev->bus->hcpriv;
+ struct uhci *uhci = (struct uhci *)urb->dev->bus->hcpriv;
int maxsze = usb_maxpacket(urb->dev, urb->pipe, usb_pipeout(urb->pipe));
int len = urb->transfer_buffer_length;
unsigned char *data = urb->transfer_buffer;
/*
* Build the TD for the control request
*/
- td = uhci_td_alloc(urb->dev);
+ td = uhci_alloc_td(urb->dev);
if (!td)
return -ENOMEM;
/*
* Build the DATA TD's
*/
- td = uhci_td_alloc(urb->dev);
+ td = uhci_alloc_td(urb->dev);
if (!td) {
/* FIXME: Free the TD's */
return -ENOMEM;
data += pktsze;
len -= pktsze;
- td = uhci_td_alloc(urb->dev);
+ td = uhci_alloc_td(urb->dev);
if (!td)
/* FIXME: Free all of the previously allocated td's */
return -ENOMEM;
uhci_add_irq_list(uhci, td);
- qh = uhci_qh_alloc(urb->dev);
+ qh = uhci_alloc_qh(urb->dev);
if (!qh) {
/* FIXME: Free all of the TD's */
return -ENOMEM;
}
uhci_insert_tds_in_qh(qh, urbp->begin);
- uhci_insert_qh(uhci, &uhci->skel_control_qh, qh);
+ if (!(urb->pipe & TD_CTRL_LS)) {
+ uhci_insert_qh(uhci, &uhci->skel_hs_control_qh, qh);
+ uhci_inc_fsbr(uhci);
+ } else
+ uhci_insert_qh(uhci, &uhci->skel_ls_control_qh, qh);
+
urbp->qh = qh;
uhci_add_urb_list(uhci, urb);
{
struct urb_priv *urbp = urb->hcpriv;
struct uhci_td *td;
- struct uhci *uhci = urb->dev->bus->hcpriv;
+ struct uhci *uhci = (struct uhci *)urb->dev->bus->hcpriv;
+ int notfinished;
if (!urbp)
return -EINVAL;
+ notfinished = (urb->status == -EINPROGRESS);
+
+ if (notfinished)
+ uhci_stop_hc_schedule(uhci);
+
+ if (!(urb->pipe & TD_CTRL_LS))
+ uhci_dec_fsbr(uhci);
+
uhci_remove_qh(uhci, urbp->qh);
- uhci_schedule_delete_qh(uhci, urbp->qh);
+ uhci_free_qh(urbp->qh);
/* Go through the rest of the TD's, deleting them, then scheduling */
/* their deletion */
if (td->status & TD_CTRL_IOC)
uhci_remove_irq_list(uhci, td);
- uhci_schedule_delete_td(uhci, td);
+ uhci_free_td(td);
td = next;
}
+ if (notfinished)
+ uhci_start_hc_schedule(uhci);
+
kfree(urbp);
urb->hcpriv = NULL;
struct urb_priv *urbp = urb->hcpriv;
struct uhci_td *td;
unsigned int status;
- int ret;
td = urbp->begin;
if (!td) /* Nothing to do */
/* If SPD is set then we received a short packet */
/* There will be no status phase at the end */
+ /* FIXME: Re-setup the queue to run the STATUS phase? */
if (td->status & TD_CTRL_SPD && (uhci_actual_length(td->status) < uhci_expected_length(td->info)))
- goto td_success;
+ return 0;
if (status)
goto td_error;
if (td->status & TD_CTRL_IOC &&
status & TD_CTRL_ACTIVE &&
status & TD_CTRL_NAK)
- goto td_success;
+ return 0;
if (status & TD_CTRL_ACTIVE)
return -EINPROGRESS;
if (status)
goto td_error;
-td_success:
- uhci_unlink_control(urb);
-
return 0;
td_error:
/* endpoint has stalled - mark it halted */
usb_endpoint_halt(urb->dev, uhci_endpoint(td->info),
uhci_packetout(td->info));
- uhci_unlink_control(urb);
-
- return -EPIPE;
}
- ret = uhci_map_status(status, uhci_packetout(td->info));
-
- uhci_unlink_control(urb);
-
- return ret;
+ return uhci_map_status(status, uhci_packetout(td->info));
}
/*
{
struct uhci_td *td;
unsigned long destination, status;
- struct uhci *uhci = urb->dev->bus->hcpriv;
+ struct uhci *uhci = (struct uhci *)urb->dev->bus->hcpriv;
struct urb_priv *urbp;
if (urb->transfer_buffer_length > usb_maxpacket(urb->dev, urb->pipe, usb_pipeout(urb->pipe)))
urb->hcpriv = urbp;
- td = uhci_td_alloc(urb->dev);
+ td = uhci_alloc_td(urb->dev);
if (!td)
return -ENOMEM;
{
struct urb_priv *urbp = urb->hcpriv;
struct uhci_td *td;
- struct uhci *uhci = urb->dev->bus->hcpriv;
+ struct uhci *uhci = (struct uhci *)urb->dev->bus->hcpriv;
+ int notfinished;
if (!urbp)
return -EINVAL;
+ notfinished = (urb->status == -EINPROGRESS);
+
+ if (notfinished)
+ uhci_stop_hc_schedule(uhci);
+
td = urbp->begin;
uhci_remove_td(uhci, td);
if (td->status & TD_CTRL_IOC)
uhci_remove_irq_list(uhci, td);
- uhci_schedule_delete_td(uhci, td);
+ uhci_free_td(td);
+
+ if (notfinished)
+ uhci_start_hc_schedule(uhci);
kfree(urbp);
urb->hcpriv = NULL;
if (status & TD_CTRL_ACTIVE)
return -EINPROGRESS;
- if (status)
- return uhci_map_status(status, uhci_packetout(td->info));
-
- urb->actual_length += uhci_actual_length(td->status);
+ if (!status)
+ urb->actual_length = uhci_actual_length(td->status);
- return 0;
+ return uhci_map_status(status, uhci_packetout(td->info));
}
static void uhci_reset_interrupt(urb_t *urb)
struct urb_priv *urbp = (struct urb_priv *)urb->hcpriv;
struct uhci_td *td;
- if (urb->interval) {
- td = urbp->begin;
+ td = urbp->begin;
- usb_dotoggle(urb->dev, usb_pipeendpoint(urb->pipe), usb_pipeout(urb->pipe));
- td->status = (td->status & 0x2F000000) | TD_CTRL_ACTIVE | TD_CTRL_IOC;
- td->info &= ~(1 << TD_TOKEN_TOGGLE);
- td->info |= (usb_gettoggle(urb->dev, usb_pipeendpoint(urb->pipe), usb_pipeout(urb->pipe)) << TD_TOKEN_TOGGLE);
+ usb_dotoggle(urb->dev, usb_pipeendpoint(urb->pipe), usb_pipeout(urb->pipe));
+ td->status = (td->status & 0x2F000000) | TD_CTRL_ACTIVE | TD_CTRL_IOC;
+ td->info &= ~(1 << TD_TOKEN_TOGGLE);
+ td->info |= (usb_gettoggle(urb->dev, usb_pipeendpoint(urb->pipe), usb_pipeout(urb->pipe)) << TD_TOKEN_TOGGLE);
- urb->status = -EINPROGRESS;
- } else
- uhci_unlink_interrupt(urb);
+ urb->status = -EINPROGRESS;
}
/*
struct uhci_td *td;
struct uhci_qh *qh;
unsigned long destination, status;
- struct uhci *uhci = urb->dev->bus->hcpriv;
+ struct uhci *uhci = (struct uhci *)urb->dev->bus->hcpriv;
int maxsze = usb_maxpacket(urb->dev, urb->pipe, usb_pipeout(urb->pipe));
int len = urb->transfer_buffer_length;
unsigned char *data = urb->transfer_buffer;
if (pktsze > maxsze)
pktsze = maxsze;
- td = uhci_td_alloc(urb->dev);
+ td = uhci_alloc_td(urb->dev);
if (!td) {
/* FIXME: Free the TD's */
return -ENOMEM;
usb_pipeout(urb->pipe));
}
- qh = uhci_qh_alloc(urb->dev);
+ qh = uhci_alloc_qh(urb->dev);
if (!qh) {
/* FIXME: Free all of the TD's */
return -ENOMEM;
usb_inc_dev_use(urb->dev);
+ uhci_inc_fsbr(uhci);
+
return -EINPROGRESS;
}
uhci_packetout(td->info),
uhci_toggle(td->info) ^ 1);
- goto td_success;
+ return 0;
}
if (status)
goto td_error;
}
-td_success:
- uhci_unlink_bulk(urb);
-
return 0;
td_error:
/* endpoint has stalled - mark it halted */
usb_endpoint_halt(urb->dev, uhci_endpoint(td->info),
uhci_packetout(td->info));
- return -EPIPE;
}
return uhci_map_status(status, uhci_packetout(td->info));
struct list_head *tmp, *head = &uhci->urb_list;
unsigned long flags;
- spin_lock_irqsave(&uhci->urblist_lock, flags);
+ nested_lock(&uhci->urblist_lock, flags);
tmp = head->next;
while (tmp != head) {
u = list_entry(tmp, urb_t, urb_list);
}
tmp = tmp->next;
}
- spin_unlock_irqrestore(&uhci->urblist_lock, flags);
+ nested_unlock(&uhci->urblist_lock, flags);
if (last_urb) {
*end = (last_urb->start_frame + last_urb->number_of_packets) & 1023;
static int uhci_submit_isochronous(urb_t *urb)
{
struct uhci_td *td;
- struct uhci *uhci = urb->dev->bus->hcpriv;
+ struct uhci *uhci = (struct uhci *)urb->dev->bus->hcpriv;
struct urb_priv *urbp;
int i, ret, framenum;
int status, destination;
if (!urb->iso_frame_desc[i].length)
continue;
- td = uhci_td_alloc(urb->dev);
+ td = uhci_alloc_td(urb->dev);
if (!td) {
/* FIXME: Free the TD's */
return -ENOMEM;
{
struct urb_priv *urbp = urb->hcpriv;
struct uhci_td *td;
- struct uhci *uhci = urb->dev->bus->hcpriv;
+ struct uhci *uhci = (struct uhci *)urb->dev->bus->hcpriv;
+ int notfinished;
if (!urbp)
return -EINVAL;
+ notfinished = (urb->status == -EINPROGRESS);
+
+ if (notfinished)
+ uhci_stop_hc_schedule(uhci);
+
/* Go through the rest of the TD's, deleting them, then scheduling */
/* their deletion */
td = urbp->begin;
if (td->status & TD_CTRL_IOC)
uhci_remove_irq_list(uhci, td);
- uhci_schedule_delete_td(uhci, td);
+ uhci_free_td(td);
td = next;
}
+ if (notfinished)
+ uhci_start_hc_schedule(uhci);
+
kfree(urbp);
urb->hcpriv = NULL;
if (!td) /* Nothing to do */
return -EINVAL;
- status = uhci_status_bits(td->status);
+ status = uhci_status_bits(td->status);
if (status & TD_CTRL_ACTIVE)
return -EINPROGRESS;
- /* Assume no errors, we'll overwrite this if not */
- urb->status = 0;
-
urb->actual_length = 0;
+
for (i = 0, td = urbp->begin; td; i++, td = td->next) {
int actlength;
}
}
- uhci_unlink_isochronous(urb);
-
- return status;
+ return ret;
}
static int uhci_submit_urb(urb_t *urb)
ret = uhci_result_control(urb);
break;
case PIPE_INTERRUPT:
- /* Interrupts are an exception */
- urb->status = uhci_result_interrupt(urb);
- if (urb->status != -EINPROGRESS) {
- urb->complete(urb);
- uhci_reset_interrupt(urb);
- }
- return;
+ ret = uhci_result_interrupt(urb);
+ break;
case PIPE_BULK:
ret = uhci_result_bulk(urb);
break;
if (urb->status == -EINPROGRESS)
return;
+ switch (usb_pipetype(urb->pipe)) {
+ case PIPE_CONTROL:
+ uhci_unlink_control(urb);
+ break;
+ case PIPE_INTERRUPT:
+ /* Interrupts are an exception */
+ urb->complete(urb);
+ if (urb->interval)
+ uhci_reset_interrupt(urb);
+ else
+ uhci_unlink_interrupt(urb);
+ return;
+ case PIPE_BULK:
+ uhci_unlink_bulk(urb);
+ break;
+ case PIPE_ISOCHRONOUS:
+ uhci_unlink_isochronous(urb);
+ break;
+ }
+
if (urb->next) {
turb = urb->next;
do {
if (!urb)
return -EINVAL;
+ if (!urb->dev || !urb->dev->bus)
+ return -ENODEV;
+
uhci = (struct uhci *)urb->dev->bus->hcpriv;
if (usb_pipedevice(urb->pipe) == uhci->rh.devnum)
if (urb->complete)
urb->complete(urb);
+#ifndef CONFIG_USB_UHCI_ALT_UNLINK_OPTIMIZE
if (in_interrupt()) { /* wait at least 1 frame */
int errorcount = 10;
udelay(1000);
} else
schedule_timeout(1+1*HZ/1000);
+#endif
- usb_dec_dev_use(urb->dev);
+ urb->status = -ENOENT;
}
- urb->status = -ENOENT;
-
return ret;
}
urb->status = USB_ST_NOERROR;
if ((data > 0) && (uhci->rh.send != 0)) {
-#ifdef DEBUG /* JE */
-static int foo=5;
-if (foo--)
-#endif
dbg("root-hub INT complete: port1: %x port2: %x data: %x",
inw(io_addr + USBPORTSC1), inw(io_addr + USBPORTSC2), data);
urb->complete(urb);
}
/*-------------------------------------------------------------------*/
-void uhci_free_pending(struct uhci *uhci)
-{
- struct list_head *tmp, *head;
-
- /* Free all of the pending QH's and TD's */
- head = &uhci->td_free_list;
- tmp = head->next;
- while (tmp != head) {
- struct uhci_td *td = list_entry(tmp, struct uhci_td, list);
-
- tmp = tmp->next;
-
- list_del(&td->list);
- INIT_LIST_HEAD(&td->list);
-
- uhci_td_free(td);
- }
-
- head = &uhci->qh_free_list;
- tmp = head->next;
- while (tmp != head) {
- struct uhci_qh *qh = list_entry(tmp, struct uhci_qh, list);
-
- tmp = tmp->next;
-
- list_del(&qh->list);
- INIT_LIST_HEAD(&qh->list);
-
- uhci_qh_free(qh);
- }
-}
-
static void uhci_interrupt(int irq, void *__uhci, struct pt_regs *regs)
{
struct uhci *uhci = __uhci;
}
}
- /* Free all of the pending QH's and TD's */
- spin_lock(&uhci->freelist_lock);
- uhci_free_pending(uhci);
- spin_unlock(&uhci->freelist_lock);
-
/* Walk the list of pending TD's to see which ones completed.. */
nested_lock(&uhci->irqlist_lock, flags);
head = &uhci->interrupt_list;
nested_unlock(&uhci->irqlist_lock, flags);
}
+static void uhci_stop_hc_schedule(struct uhci *uhci)
+{
+#ifdef CONFIG_USB_UHCI_ALT_UNLINK_OPTIMIZE
+ unsigned int cmdreg, timeout = 1000;
+
+ cmdreg = inw(uhci->io_addr + USBCMD);
+ outw(cmdreg & ~USBCMD_RS, uhci->io_addr + USBCMD);
+
+ while (!(inw(uhci->io_addr + USBSTS) & USBSTS_HCH)) {
+ if (!--timeout) {
+ printk(KERN_ERR "uhci: stop_hc_schedule failed, HC still running\n");
+ break;
+ }
+ }
+#endif
+}
+
+static void uhci_start_hc_schedule(struct uhci *uhci)
+{
+#ifdef CONFIG_USB_UHCI_ALT_UNLINK_OPTIMIZE
+ unsigned int cmdreg, timeout = 1000;
+
+ cmdreg = inw(uhci->io_addr + USBCMD);
+ outw(cmdreg | USBCMD_RS, uhci->io_addr + USBCMD);
+
+ while (inw(uhci->io_addr + USBSTS) & USBSTS_HCH) {
+ if (!--timeout) {
+ printk(KERN_ERR "uhci: start_hc_schedule failed, HC still halted\n");
+ break;
+ }
+ }
+#endif
+}
+
static void reset_hc(struct uhci *uhci)
{
unsigned int io_addr = uhci->io_addr;
INIT_LIST_HEAD(&uhci->interrupt_list);
INIT_LIST_HEAD(&uhci->urb_list);
- INIT_LIST_HEAD(&uhci->td_free_list);
- INIT_LIST_HEAD(&uhci->qh_free_list);
- spin_lock_init(&uhci->urblist_lock);
spin_lock_init(&uhci->framelist_lock);
- spin_lock_init(&uhci->freelist_lock);
+ nested_init(&uhci->urblist_lock);
nested_init(&uhci->irqlist_lock);
+ uhci->fsbr = 0;
+
/* We need exactly one page (per UHCI specs), how convenient */
/* We assume that one page is atleast 4k (1024 frames * 4 bytes) */
uhci->fl = (void *)__get_free_page(GFP_KERNEL);
uhci_fill_td(&uhci->skel_int1_td, 0, (UHCI_NULL_DATA_SIZE << 21) | (0x7f << 8) | USB_PID_IN, 0);
- uhci->skel_int1_td.link = virt_to_bus(&uhci->skel_control_qh) | UHCI_PTR_QH;
+ uhci->skel_int1_td.link = virt_to_bus(&uhci->skel_ls_control_qh) | UHCI_PTR_QH;
- uhci->skel_control_qh.link = virt_to_bus(&uhci->skel_bulk_qh) | UHCI_PTR_QH;
- uhci->skel_control_qh.element = UHCI_PTR_TERM;
+ uhci->skel_ls_control_qh.link = virt_to_bus(&uhci->skel_hs_control_qh) | UHCI_PTR_QH;
+ uhci->skel_ls_control_qh.element = UHCI_PTR_TERM;
- uhci->skel_bulk_qh.link = UHCI_PTR_TERM;
+ uhci->skel_hs_control_qh.link = virt_to_bus(&uhci->skel_bulk_qh) | UHCI_PTR_QH;
+ uhci->skel_hs_control_qh.element = UHCI_PTR_TERM;
+
+ uhci->skel_bulk_qh.link = virt_to_bus(&uhci->skel_term_qh) | UHCI_PTR_QH;
uhci->skel_bulk_qh.element = UHCI_PTR_TERM;
+ uhci->skel_term_qh.link = UHCI_PTR_TERM;
+ uhci->skel_term_qh.element = UHCI_PTR_TERM;
+
/*
* Fill the frame list: make all entries point to
* the proper interrupt queue.
uhci->irq = irq;
if (!uhci_start_root_hub(uhci)) {
- struct pm_dev *pmdev;
+ struct pm_dev *pmdev;
- pmdev = pm_register(PM_PCI_DEV,
- PM_PCI_ID(dev),
- handle_pm_event);
- if (pmdev)
- pmdev->data = uhci;
+ pmdev = pm_register(PM_PCI_DEV,
+ PM_PCI_ID(dev),
+ handle_pm_event);
+ if (pmdev)
+ pmdev->data = uhci;
return 0;
- }
+ }
}
/* Couldn't allocate IRQ if we got here */
void uhci_cleanup(void)
{
struct list_head *next, *tmp, *head = &uhci_list;
- unsigned long flags;
tmp = head->next;
while (tmp != head) {
reset_hc(uhci);
release_region(uhci->io_addr, uhci->io_size);
- /* Free any outstanding TD's and QH's */
- spin_lock_irqsave(&uhci->freelist_lock, flags);
- uhci_free_pending(uhci);
- spin_unlock_irqrestore(&uhci->freelist_lock, flags);
-
release_uhci(uhci);
tmp = next;
#define skel_int128_td skeltd[7]
#define skel_int256_td skeltd[8]
-#define UHCI_NUM_SKELQH 2
-#define skel_control_qh skelqh[0]
-#define skel_bulk_qh skelqh[1]
+#define UHCI_NUM_SKELQH 4
+#define skel_ls_control_qh skelqh[0]
+#define skel_hs_control_qh skelqh[1]
+#define skel_bulk_qh skelqh[2]
+#define skel_term_qh skelqh[3]
/*
* Search tree for determining where <interval> fits in the
struct s_nested_lock irqlist_lock;
struct list_head interrupt_list; /* List of interrupt-active TD's for this uhci */
- spinlock_t urblist_lock;
+ struct s_nested_lock urblist_lock;
struct list_head urb_list;
spinlock_t framelist_lock;
- spinlock_t freelist_lock;
- struct list_head td_free_list;
- struct list_head qh_free_list;
+ int fsbr; /* Full speed bandwidth reclamation */
struct virt_root_hub rh; /* private data of the virtual root hub */
};
* $Id: usb-uhci.c,v 1.185 2000/02/05 21:29:19 acher Exp $
*/
-#include <linux/config.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/kernel.h>
uhci_t *s = (uhci_t*) dev->data;
dbg("handle_apm_event(%d)", rqst);
if (s) {
- switch (rqst) {
- case PM_SUSPEND:
- reset_hc (s);
- break;
- case PM_RESUME:
- start_hc (s);
- break;
+ switch (rqst) {
+ case PM_SUSPEND:
+ reset_hc (s);
+ break;
+ case PM_RESUME:
+ start_hc (s);
+ break;
+ }
}
return 0;
}
-/* $Id: atyfb.c,v 1.136 2000/01/06 23:53:29 davem Exp $
+/* $Id: atyfb.c,v 1.137 2000/01/09 03:11:49 davem Exp $
* linux/drivers/video/atyfb.c -- Frame buffer device for ATI Mach64
*
* Copyright (C) 1997-1998 Geert Uytterhoeven
flush_tlb_mm(mm);
}
+/*
+ * Flush a specified range of user-mapping page tables
+ * from the TLB.
+ * Although Alpha uses VPTE caches, this can be a no-op: Alpha does
+ * not have fine-grained TLB flushing, so the VPTE entries will be
+ * flushed during the next flush_tlb_range.
+ */
+static inline void flush_tlb_pgtables(struct mm_struct *mm,
+ unsigned long start, unsigned long end)
+{
+}
+
#else /* __SMP__ */
extern void flush_tlb_all(void);
*/
#include <asm/proc/cache.h>
+extern __inline__ void flush_tlb_pgtables(struct mm_struct *mm,
+ unsigned long start,
+ unsigned long end)
+{
+}
+
/*
* Page table cache stuff
*/
* - flush_tlb_mm(mm) flushes the specified mm context TLB's
* - flush_tlb_page(vma, vmaddr) flushes one page
* - flush_tlb_range(mm, start, end) flushes a range of pages
+ * - flush_tlb_pgtables(mm, start, end) flushes a range of page tables
*
* ..but the i386 has somewhat limited tlb flushing capabilities,
* and page-granular flushes are available only on i486 and up.
#endif
+extern inline void flush_tlb_pgtables(struct mm_struct *mm,
+ unsigned long start, unsigned long end)
+{
+ /* i386 does not keep any page table caches in TLB */
+}
+
#endif /* _I386_PGALLOC_H */
__asm__ __volatile__("pflush #4,#4,(%0)" : : "a" (addr));
}
+extern inline void flush_tlb_pgtables(struct mm_struct *mm,
+ unsigned long start, unsigned long end)
+{
+}
+
#endif /* _M68K_PGALLOC_H */
unsigned long end);
extern void (*flush_tlb_page)(struct vm_area_struct *vma, unsigned long page);
+extern __inline__ void flush_tlb_pgtables(struct mm_struct *mm, unsigned long start, unsigned long end)
+{
+}
+
/*
* - add_wired_entry() add a fixed TLB entry, and move wired register
*/
#define flush_tlb_page local_flush_tlb_page
#define flush_tlb_range local_flush_tlb_range
+extern inline void flush_tlb_pgtables(struct mm_struct *mm,
+ unsigned long start, unsigned long end)
+{
+ /* PPC has hw page tables. */
+}
+
/*
* No cache flushing is required when address mappings are
* changed, because the caches on PowerPCs are physically
extern void flush_tlb_range(struct mm_struct *mm, unsigned long start,
unsigned long end);
extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
+extern inline void flush_tlb_pgtables(struct mm_struct *mm,
+ unsigned long start, unsigned long end)
+{
+}
/*
* Basically we have the same two-level (which is the logical three level
#define PCIBIOS_MIN_IO 0UL
#define PCIBIOS_MIN_MEM 0UL
+#ifdef __KERNEL__
+
+/* Dynamic DMA mapping stuff.
+ */
+
+#include <asm/scatterlist.h>
+
+struct pci_dev;
+
+/* Allocate and map kernel buffer using consistent mode DMA for a device.
+ * hwdev should be valid struct pci_dev pointer for PCI devices.
+ */
+extern void *pci_alloc_consistent(struct pci_dev *hwdev, size_t size, dma_addr_t *dma_handle);
+
+/* Free and unmap a consistent DMA buffer.
+ * cpu_addr is what was returned from pci_alloc_consistent,
+ * size must be the same as what was passed into pci_alloc_consistent,
+ * and likewise dma_addr must be the same as what *dma_handle was set to.
+ *
+ * References to the memory and mappings associated with cpu_addr/dma_addr
+ * past this call are illegal.
+ */
+extern void pci_free_consistent(struct pci_dev *hwdev, size_t size, void *vaddr, dma_addr_t dma_handle);
+
+/* Map a single buffer of the indicated size for DMA in streaming mode.
+ * The 32-bit bus address to use is returned.
+ *
+ * Once the device is given the dma address, the device owns this memory
+ * until either pci_unmap_single or pci_dma_sync_single is performed.
+ */
+extern dma_addr_t pci_map_single(struct pci_dev *hwdev, void *ptr, size_t size);
+
+/* Unmap a single streaming mode DMA translation. The dma_addr and size
+ * must match what was provided for in a previous pci_map_single call. All
+ * other usages are undefined.
+ *
+ * After this call, reads by the cpu to the buffer are guaranteed to see
+ * whatever the device wrote there.
+ */
+extern void pci_unmap_single(struct pci_dev *hwdev, dma_addr_t dma_addr, size_t size);
+
+/* Map a set of buffers described by scatterlist in streaming
+ * mode for DMA. This is the scatter-gather version of the
+ * above pci_map_single interface. Here the scatter-gather list
+ * elements are each tagged with the appropriate dma address
+ * and length. They are obtained via sg_dma_{address,length}(SG).
+ *
+ * NOTE: An implementation may be able to use a smaller number of
+ * DMA address/length pairs than there are SG table elements.
+ * (for example via virtual mapping capabilities)
+ * The routine returns the number of addr/length pairs actually
+ * used, at most nents.
+ *
+ * Device ownership issues as mentioned above for pci_map_single are
+ * the same here.
+ */
+extern int pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents);
+
+/* Unmap a set of streaming mode DMA translations.
+ * Again, cpu read rules concerning calls here are the same as for
+ * pci_unmap_single() above.
+ */
+extern void pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nhwents);
+
+/* Make physical memory consistent for a single
+ * streaming mode DMA translation after a transfer.
+ *
+ * If you perform a pci_map_single() but wish to interrogate the
+ * buffer using the cpu, yet do not wish to tear down the PCI dma
+ * mapping, you must call this function before doing so. At the
+ * next point you give the PCI dma address back to the card, the
+ * device again owns the buffer.
+ */
+extern void pci_dma_sync_single(struct pci_dev *hwdev, dma_addr_t dma_handle, size_t size);
+
+/* Make physical memory consistent for a set of streaming
+ * mode DMA translations after a transfer.
+ *
+ * The same as pci_dma_sync_single but for a scatter-gather list,
+ * same rules and usage.
+ */
+extern void pci_dma_sync_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nelems);
+
+#endif /* __KERNEL__ */
+
#endif /* __SPARC_PCI_H */
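The ownership discipline the comments above describe (map, then the device owns the buffer until a sync or unmap hands it back to the CPU) can be sketched in plain user-space C. This is not kernel code: the three stub functions below are hypothetical stand-ins for pci_map_single/pci_dma_sync_single/pci_unmap_single and model only the ownership hand-offs, not real IOMMU work.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef uint32_t dma_addr_t;
struct pci_dev { int unused; };

static int device_owns;  /* 1 while the device owns the buffer */

/* Stub map: identity-map the buffer; the device takes ownership. */
static dma_addr_t map_single(struct pci_dev *hwdev, void *ptr, size_t size)
{
    (void)hwdev; (void)size;
    device_owns = 1;
    return (dma_addr_t)(uintptr_t)ptr;
}

/* Stub sync: the CPU may read the buffer; the mapping stays live. */
static void dma_sync_single(struct pci_dev *hwdev, dma_addr_t h, size_t size)
{
    (void)hwdev; (void)h; (void)size;
    device_owns = 0;
}

/* Stub unmap: mapping torn down; the CPU owns the buffer for good. */
static void unmap_single(struct pci_dev *hwdev, dma_addr_t h, size_t size)
{
    (void)hwdev; (void)h; (void)size;
    device_owns = 0;
}

/* Walk the ownership protocol once; returns final owner (0 = CPU). */
static int streaming_dma_demo(void)
{
    struct pci_dev dev;
    char buf[64];
    memset(buf, 0, sizeof buf);

    dma_addr_t h = map_single(&dev, buf, sizeof buf);
    assert(device_owns);                  /* CPU must not touch buf here */
    dma_sync_single(&dev, h, sizeof buf); /* interrogate without unmapping */
    assert(!device_owns);
    h = map_single(&dev, buf, sizeof buf); /* hand it back to the card */
    unmap_single(&dev, h, sizeof buf);
    return device_owns;
}
```

The point of the sketch is the state machine, not the addresses: a sync returns ownership to the CPU without destroying the mapping, while an unmap ends both.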
-/* $Id: pgalloc.h,v 1.2 2000/01/15 00:51:42 anton Exp $ */
+/* $Id: pgalloc.h,v 1.3 2000/02/03 10:13:31 jj Exp $ */
#ifndef _SPARC_PGALLOC_H
#define _SPARC_PGALLOC_H
BTFIXUPDEF_CALL(void, flush_tlb_range, struct mm_struct *, unsigned long, unsigned long)
BTFIXUPDEF_CALL(void, flush_tlb_page, struct vm_area_struct *, unsigned long)
+extern __inline__ void flush_tlb_pgtables(struct mm_struct *mm, unsigned long start, unsigned long end)
+{
+}
+
#define flush_tlb_all() BTFIXUP_CALL(flush_tlb_all)()
#define flush_tlb_mm(mm) BTFIXUP_CALL(flush_tlb_mm)(mm)
#define flush_tlb_range(mm,start,end) BTFIXUP_CALL(flush_tlb_range)(mm,start,end)
-/* $Id: pgtable.h,v 1.87 1999/12/27 06:37:14 anton Exp $ */
+/* $Id: pgtable.h,v 1.88 2000/02/06 22:56:09 zaitcev Exp $ */
#ifndef _SPARC_PGTABLE_H
#define _SPARC_PGTABLE_H
#define mmu_release_scsi_one(vaddr,len,sbus) BTFIXUP_CALL(mmu_release_scsi_one)(vaddr,len,sbus)
#define mmu_release_scsi_sgl(sg,sz,sbus) BTFIXUP_CALL(mmu_release_scsi_sgl)(sg,sz,sbus)
-/* mmu_map/unmap is provided by iommu/iounit; mmu_flush/inval probably belongs to CPU... */
+/*
+ * mmu_map/unmap are provided by iommu/iounit; invalid to call on IIep.
+ * mmu_flush/inval belong to the CPU; valid on IIep.
+ */
BTFIXUPDEF_CALL(void, mmu_map_dma_area, unsigned long va, __u32 addr, int len)
-BTFIXUPDEF_CALL(void, mmu_unmap_dma_area, unsigned long addr, int len)
-BTFIXUPDEF_CALL(void, mmu_inval_dma_area, unsigned long addr, int len)
-BTFIXUPDEF_CALL(void, mmu_flush_dma_area, unsigned long addr, int len)
+BTFIXUPDEF_CALL(unsigned long /*phys*/, mmu_translate_dvma, unsigned long busa)
+BTFIXUPDEF_CALL(void, mmu_unmap_dma_area, unsigned long busa, int len)
+BTFIXUPDEF_CALL(void, mmu_inval_dma_area, unsigned long virt, int len)
+BTFIXUPDEF_CALL(void, mmu_flush_dma_area, unsigned long virt, int len)
#define mmu_map_dma_area(va, ba,len) BTFIXUP_CALL(mmu_map_dma_area)(va,ba,len)
#define mmu_unmap_dma_area(ba,len) BTFIXUP_CALL(mmu_unmap_dma_area)(ba,len)
-#define mmu_inval_dma_area(va,len) BTFIXUP_CALL(mmu_unmap_dma_area)(va,len)
-#define mmu_flush_dma_area(va,len) BTFIXUP_CALL(mmu_unmap_dma_area)(va,len)
+#define mmu_translate_dvma(ba) BTFIXUP_CALL(mmu_translate_dvma)(ba)
+#define mmu_inval_dma_area(va,len) BTFIXUP_CALL(mmu_inval_dma_area)(va,len)
+#define mmu_flush_dma_area(va,len) BTFIXUP_CALL(mmu_flush_dma_area)(va,len)
BTFIXUPDEF_SIMM13(pmd_shift)
BTFIXUPDEF_SETHI(pmd_size)
-/* $Id: io.h,v 1.30 2000/01/28 13:43:14 jj Exp $ */
+/* $Id: io.h,v 1.31 2000/02/08 05:11:38 jj Exp $ */
#ifndef __SPARC64_IO_H
#define __SPARC64_IO_H
#define __SLOW_DOWN_IO do { } while (0)
#define SLOW_DOWN_IO do { } while (0)
-#define NEW_PCI_DMA_MAP
-
-#ifndef NEW_PCI_DMA_MAP
-#define PCI_DVMA_HASHSZ 256
-
-extern unsigned long pci_dvma_v2p_hash[PCI_DVMA_HASHSZ];
-extern unsigned long pci_dvma_p2v_hash[PCI_DVMA_HASHSZ];
-
-#define pci_dvma_ahashfn(addr) (((addr) >> 24) & 0xff)
-
-extern __inline__ unsigned long virt_to_bus(volatile void *addr)
-{
- unsigned long vaddr = (unsigned long)addr;
- unsigned long off;
-
- /* Handle kernel variable pointers... */
- if (vaddr < PAGE_OFFSET)
- vaddr += PAGE_OFFSET - (unsigned long)&empty_zero_page;
-
- off = pci_dvma_v2p_hash[pci_dvma_ahashfn(vaddr - PAGE_OFFSET)];
- return vaddr + off;
-}
-
-extern __inline__ void *bus_to_virt(unsigned long addr)
-{
- unsigned long paddr = addr & 0xffffffffUL;
- unsigned long off;
-
- off = pci_dvma_p2v_hash[pci_dvma_ahashfn(paddr)];
- return (void *)(paddr + off);
-}
-#else
extern unsigned long virt_to_bus_not_defined_use_pci_map(volatile void *addr);
#define virt_to_bus virt_to_bus_not_defined_use_pci_map
extern unsigned long bus_to_virt_not_defined_use_pci_map(volatile void *addr);
#define bus_to_virt bus_to_virt_not_defined_use_pci_map
-#endif
/* Different PCI controllers we support have their PCI MEM space
* mapped to an either 2GB (Psycho) or 4GB (Sabre) aligned area,
-/* $Id: mmu_context.h,v 1.41 1999/09/10 15:39:03 jj Exp $ */
+/* $Id: mmu_context.h,v 1.42 2000/02/08 07:47:03 davem Exp $ */
#ifndef __SPARC64_MMU_CONTEXT_H
#define __SPARC64_MMU_CONTEXT_H
#endif /* ! __SMP__ */
+/* This will change for Cheetah and later chips. */
+#define VPTE_BASE 0xfffffffe00000000
+
+extern __inline__ void flush_tlb_pgtables(struct mm_struct *mm, unsigned long start,
+ unsigned long end)
+{
+ /* Note the signed type. */
+ long s = start, e = end;
+ if (s > e)
+ /* Nobody should call us with start below the VM hole and end
+ above it; BUG() if that ever happens. */
+ BUG();
+#if 0
+ /* Currently free_pgtables guarantees this. */
+ s &= PMD_MASK;
+ e = (e + PMD_SIZE - 1) & PMD_MASK;
+#endif
+ flush_tlb_range(mm,
+ VPTE_BASE + (s >> (PAGE_SHIFT - 3)),
+ VPTE_BASE + (e >> (PAGE_SHIFT - 3)));
+}
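The shift in the flush_tlb_range() call above converts a user address range into the matching range of the linear VPTE map: one 8-byte PTE per page, so dividing by the page size and multiplying by 8 collapses into a single shift by (PAGE_SHIFT - 3). A minimal user-space sketch; the PAGE_SHIFT of 13 (8 KB pages, as on sparc64 of this era) is an assumption of the demo.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 13                     /* assumed: 8 KB pages, as on sparc64 */
#define VPTE_BASE  0xfffffffe00000000ULL  /* value from the hunk above */

/* Address of the PTE mapping user address `va` in the linear VPTE
 * region: one 8-byte PTE per page, hence the (PAGE_SHIFT - 3) shift. */
static uint64_t vpte_addr(uint64_t va)
{
    return VPTE_BASE + (va >> (PAGE_SHIFT - 3));
}
```

Flushing the page tables for user addresses [start, end) thus reduces to a flush_tlb_range over [vpte_addr(start), vpte_addr(end)), which is exactly what the function above does.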
+
/* Page table allocation/freeing. */
#ifdef __SMP__
/* Sliiiicck */
/* atm.h - general ATM declarations */
-/* Written 1995-1999 by Werner Almesberger, EPFL LRC/ICA */
+/* Written 1995-2000 by Werner Almesberger, EPFL LRC/ICA */
/*
#include <linux/socket.h>
#include <linux/types.h>
#endif
+#include <linux/atmapi.h>
#include <linux/atmsap.h>
#include <linux/atmioc.h>
* please speak up ...
*/
-/* socket layer */
-#define SO_BCTXOPT __SO_ENCODE(SOL_SOCKET,16,struct atm_buffconst)
- /* not ATM specific - should go somewhere else */
-#define SO_BCRXOPT __SO_ENCODE(SOL_SOCKET,17,struct atm_buffconst)
-
-
-/* for SO_BCTXOPT and SO_BCRXOPT */
-
-struct atm_buffconst {
- unsigned long buf_fac; /* buffer alignment factor */
- unsigned long buf_off; /* buffer alignment offset */
- unsigned long size_fac; /* buffer size factor */
- unsigned long size_off; /* buffer size offset */
- unsigned long min_size; /* minimum size */
- unsigned long max_size; /* maximum size, 0 = unlimited */
-};
-
/* ATM cell header (for AAL0) */
int min_pcr; /* minimum PCR in cells per second */
int max_cdv; /* maximum CDV in microseconds */
int max_sdu; /* maximum SDU in bytes */
+ /* extra params for ABR */
+ unsigned int icr; /* Initial Cell Rate (24-bit) */
+ unsigned int tbe; /* Transient Buffer Exposure (24-bit) */
+ unsigned int frtt : 24; /* Fixed Round Trip Time (24-bit) */
+ unsigned int rif : 4; /* Rate Increment Factor (4-bit) */
+ unsigned int rdf : 4; /* Rate Decrease Factor (4-bit) */
+ unsigned int nrm_pres :1; /* nrm present bit */
+ unsigned int trm_pres :1; /* trm present bit */
+ unsigned int adtf_pres :1; /* adtf present bit */
+ unsigned int cdf_pres :1; /* cdf present bit */
+ unsigned int nrm :3; /* Max # of Cells for each forward RM cell (3-bit) */
+ unsigned int trm :3; /* Time between forward RM cells (3-bit) */
+ unsigned int adtf :10; /* ACR Decrease Time Factor (10-bit) */
+ unsigned int cdf :3; /* Cutoff Decrease Factor (3-bit) */
+ unsigned int spare :9; /* spare bits */
};
struct atm_qos {
struct atm_trafprm txtp; /* parameters in TX direction */
- struct atm_trafprm rxtp; /* parameters in RX direction */
- unsigned char aal;
+ struct atm_trafprm rxtp __ATM_API_ALIGN;
+ /* parameters in RX direction */
+ unsigned char aal __ATM_API_ALIGN;
};
/* PVC addressing */
short itf; /* ATM interface */
short vpi; /* VPI (only 8 bits at UNI) */
int vci; /* VCI (only 16 bits at UNI) */
- } sap_addr; /* PVC address */
+ } sap_addr __ATM_API_ALIGN; /* PVC address */
};
/* SVC addressing */
/* unused addresses must be bzero'ed */
char lij_type; /* role in LIJ call; one of ATM_LIJ* */
uint32_t lij_id; /* LIJ call identifier */
- } sas_addr; /* SVC address */
+ } sas_addr __ATM_API_ALIGN; /* SVC address */
};
};
-#define ATM_CREATE_LEAF _IO('a',ATMIOC_SPECIAL+2)
- /* create a point-to-multipoint leaf socket */
-
-
#ifdef __KERNEL__
#include <linux/net.h> /* struct net_proto */
/* atm_eni.h - Driver-specific declarations of the ENI driver (for use by
driver-specific utilities) */
-/* Written 1995-1997 by Werner Almesberger, EPFL LRC */
+/* Written 1995-2000 by Werner Almesberger, EPFL LRC/ICA */
#ifndef LINUX_ATM_ENI_H
#include <linux/atmioc.h>
+
+struct eni_multipliers {
+ int tx,rx; /* values are in percent and must be > 100 */
+};
+
+
#define ENI_MEMDUMP _IOW('a',ATMIOC_SARPRV,struct atmif_sioc)
/* printk memory map */
+#define ENI_SETMULT _IOW('a',ATMIOC_SARPRV+7,struct atmif_sioc)
+ /* set buffer multipliers */
#endif
--- /dev/null
+/* atm_idt77105.h - Driver-specific declarations of the IDT77105 driver (for
+ * use by driver-specific utilities) */
+
+/* Written 1999 by Greg Banks <gnb@linuxfan.com>. Copied from atm_suni.h. */
+
+
+#ifndef LINUX_ATM_IDT77105_H
+#define LINUX_ATM_IDT77105_H
+
+#include <asm/types.h>
+#include <linux/atmioc.h>
+
+/*
+ * Structure for IDT77105_GETSTAT and IDT77105_GETSTATZ ioctls.
+ * Pointed to by `arg' in atmif_sioc.
+ */
+struct idt77105_stats {
+ __u32 symbol_errors; /* wire symbol errors */
+ __u32 tx_cells; /* cells transmitted */
+ __u32 rx_cells; /* cells received */
+ __u32 rx_hec_errors; /* Header Error Check errors on receive */
+};
+
+#define IDT77105_GETLOOP _IOW('a',ATMIOC_PHYPRV,struct atmif_sioc) /* get loopback mode */
+#define IDT77105_SETLOOP _IOW('a',ATMIOC_PHYPRV+1,struct atmif_sioc) /* set loopback mode */
+#define IDT77105_GETSTAT _IOW('a',ATMIOC_PHYPRV+2,struct atmif_sioc) /* get stats */
+#define IDT77105_GETSTATZ _IOW('a',ATMIOC_PHYPRV+3,struct atmif_sioc) /* get stats and zero */
+
+
+/*
+ * TODO: what we need is a global loopback mode get/set ioctl for
+ * all devices, not these device-specific hacks -- Greg Banks
+ */
+#define IDT77105_LM_NONE 0 /* no loopback */
+#define IDT77105_LM_DIAG 1 /* diagnostic (i.e. loop TX to RX)
+ * (a.k.a. local loopback) */
+#define IDT77105_LM_LOOP 2 /* line (i.e. loop RX to TX)
+ * (a.k.a. remote loopback) */
+
+#endif
* sys/types.h for struct timeval
*/
+#include <linux/atmapi.h>
#include <linux/atmioc.h>
#define NS_GETPSTAT _IOWR('a',ATMIOC_SARPRV+1,struct atmif_sioc)
unsigned min;
unsigned init;
unsigned max;
-} buf_nr;
+}buf_nr;
typedef struct pool_levels
/* atm_tcp.h - Driver-specific declarations of the ATMTCP driver (for use by
driver-specific utilities) */
-/* Written 1997-1999 by Werner Almesberger, EPFL LRC/ICA */
+/* Written 1997-2000 by Werner Almesberger, EPFL LRC/ICA */
#ifndef LINUX_ATM_TCP_H
#define LINUX_ATM_TCP_H
+#include <linux/atmapi.h>
+
#ifdef __KERNEL__
#include <linux/types.h>
#endif
struct atmtcp_control {
struct atmtcp_hdr hdr; /* must be first */
- int type; /* message type; both directions */
- unsigned long vcc; /* both directions */
+ int type; /* message type; both directions */
+ atm_kptr_t vcc; /* both directions */
struct sockaddr_atmpvc addr; /* suggested value from kernel */
struct atm_qos qos; /* both directions */
int result; /* to kernel only */
-};
+} __ATM_API_ALIGN;
/*
* Field usage:
/* atm_zatm.h - Driver-specific declarations of the ZATM driver (for use by
driver-specific utilities) */
-/* Written 1995-1997 by Werner Almesberger, EPFL LRC */
+/* Written 1995-1999 by Werner Almesberger, EPFL LRC/ICA */
#ifndef LINUX_ATM_ZATM_H
* sys/types.h for struct timeval
*/
+#include <linux/atmapi.h>
#include <linux/atmioc.h>
#define ZATM_GETPOOL _IOW('a',ATMIOC_SARPRV+1,struct atmif_sioc)
--- /dev/null
+/* atmapi.h - ATM API user space/kernel compatibility */
+
+/* Written 1999,2000 by Werner Almesberger, EPFL ICA */
+
+
+#ifndef _LINUX_ATMAPI_H
+#define _LINUX_ATMAPI_H
+
+#ifdef __sparc__
+/* such alignment is not required on 32-bit sparcs, but we can't
+ tell whether we are on a sparc64 while compiling user-space programs. */
+#define __ATM_API_ALIGN __attribute__((aligned(8)))
+#else
+#define __ATM_API_ALIGN
+#endif
+
+
+/*
+ * Opaque type for kernel pointers. Note that _ is never accessed. We need
+ * the struct in order to hide the array, so that we can make simple assignments
+ * instead of being forced to use memcpy. It also improves error reporting for
+ * code that still assumes that we're passing unsigned longs.
+ *
+ * Convention: NULL pointers are passed as a field of all zeroes.
+ */
+
+typedef struct { unsigned char _[8]; } atm_kptr_t;
+
+#endif
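A user-space sketch of how the opaque type behaves: the struct wrapper copies with plain assignment, rejects assignment to an unsigned long at compile time, and makes the all-zeroes NULL convention checkable. `kptr_is_null` and `kptr_copy` are hypothetical helpers, not part of the header.

```c
#include <assert.h>
#include <string.h>

#ifdef __sparc__
#define __ATM_API_ALIGN __attribute__((aligned(8)))
#else
#define __ATM_API_ALIGN
#endif

typedef struct { unsigned char _[8]; } atm_kptr_t;

/* Hypothetical helper: the NULL convention is "all bytes zero". */
static int kptr_is_null(atm_kptr_t p)
{
    static const atm_kptr_t zero;   /* zero-initialized */
    return memcmp(&p, &zero, sizeof p) == 0;
}

/* Because atm_kptr_t is a struct, it copies with plain assignment
 * (no memcpy needed), and `unsigned long x = p;` is a compile error
 * rather than a silent truncation on 32-bit user space. */
static atm_kptr_t kptr_copy(atm_kptr_t src)
{
    atm_kptr_t dst = src;           /* simple struct assignment */
    return dst;
}
```

The 8-byte payload is wide enough for a sparc64 kernel pointer even when the daemon talking to the kernel is a 32-bit binary.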
/* atmarp.h - ATM ARP protocol and kernel-demon interface definitions */
-/* Written 1995-1998 by Werner Almesberger, EPFL LRC/ICA */
+/* Written 1995-1999 by Werner Almesberger, EPFL LRC/ICA */
#ifndef _LINUX_ATMARP_H
#ifdef __KERNEL__
#include <linux/types.h>
#endif
+#include <linux/atmapi.h>
#include <linux/atmioc.h>
#include <linux/config.h>
+#include <linux/atmapi.h>
+#include <linux/atm.h>
#include <linux/atmioc.h>
struct atm_aal_stats {
- long tx,tx_err; /* TX okay and errors */
- long rx,rx_err; /* RX okay and errors */
- long rx_drop; /* RX out of memory */
+ int tx,tx_err; /* TX okay and errors */
+ int rx,rx_err; /* RX okay and errors */
+ int rx_drop; /* RX out of memory */
};
struct atm_aal_stats aal0;
struct atm_aal_stats aal34;
struct atm_aal_stats aal5;
-};
+} __ATM_API_ALIGN;
#define ATM_GETLINKRATE _IOW('a',ATMIOC_ITF+1,struct atmif_sioc)
#define ATM_VS2TXT_MAP \
"IDLE", "CONNECTED", "CLOSING", "LISTEN", "INUSE", "BOUND"
+#define ATM_VF2TXT_MAP \
+ "ADDR", "READY", "PARTIAL", "REGIS", \
+ "RELEASED", "HASQOS", "LISTEN", "META", \
+ "256", "512", "1024", "2048", \
+ "SESSION", "HASSAP", "BOUND", "CLOSE"
+
#ifdef __KERNEL__
#include <linux/skbuff.h> /* struct sk_buff */
#include <linux/atm.h>
#include <linux/uio.h>
+#include <net/sock.h>
#include <asm/atomic.h>
#ifdef CONFIG_PROC_FS
#define ATM_VF_META 128 /* SVC socket isn't used for normal data
traffic and doesn't depend on signaling
to be available */
-#define ATM_VF_AQREL 256 /* Arequipa VC is being released */
-#define ATM_VF_AQDANG 512 /* VC is in Arequipa's dangling list */
-#define ATM_VF_SCRX ATM_SC_RX /* 1024; allow single-copy in the RX dir. */
-#define ATM_VF_SCTX ATM_SC_TX /* 2048; allow single-copy in the TX dir. */
+ /* 256; unused */
+ /* 512; unused */
+ /* 1024; unused */
+ /* 2048; unused */
#define ATM_VF_SESSION 4096 /* VCC is p2mp session control descriptor */
#define ATM_VF_HASSAP 8192 /* SAP has been set */
#define ATM_VF_CLOSE 32768 /* asynchronous close - treat like VF_RELEASED*/
struct atm_dev *dev; /* device back pointer */
struct atm_qos qos; /* QOS */
struct atm_sap sap; /* SAP */
- unsigned long tx_quota,rx_quota; /* buffer quotas */
atomic_t tx_inuse,rx_inuse; /* buffer space in use */
void (*push)(struct atm_vcc *vcc,struct sk_buff *skb);
void (*pop)(struct atm_vcc *vcc,struct sk_buff *skb); /* optional */
struct atm_aal_stats *stats; /* pointer to AAL stats group */
wait_queue_head_t sleep; /* if socket is busy */
wait_queue_head_t wsleep; /* if waiting for write buffer space */
+ struct sock *sk; /* socket backpointer */
struct atm_vcc *prev,*next;
/* SVC part --- may move later ------------------------------------- */
short itf; /* interface number */
/* Multipoint part ------------------------------------------------- */
struct atm_vcc *session; /* session VCC descriptor */
/* Other stuff ----------------------------------------------------- */
- void *user_back; /* user backlink - not touched */
+ void *user_back; /* user backlink - not touched by */
+ /* native ATM stack. Currently used */
+ /* by CLIP and sch_atm. */
};
}
+static __inline__ int atm_may_send(struct atm_vcc *vcc,unsigned int size)
+{
+ return size+atomic_read(&vcc->tx_inuse)+ATM_PDU_OVHD < vcc->sk->sndbuf;
+}
+
+
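The send-budget rule in atm_may_send() above can be modelled with stand-in types: a PDU may be queued only while the bytes already in flight plus overhead stay below the socket send buffer. The fake_sock/fake_vcc structs and an ATM_PDU_OVHD of 0 are assumptions of this sketch.

```c
#include <assert.h>

#define ATM_PDU_OVHD 0   /* per-PDU accounting overhead; 0 for the sketch */

struct fake_sock { int sndbuf; };
struct fake_vcc  { int tx_inuse; struct fake_sock *sk; };

/* Mirror of atm_may_send(): size + in-flight TX bytes + overhead
 * must stay strictly below sk->sndbuf. */
static int may_send(struct fake_vcc *vcc, unsigned int size)
{
    return size + vcc->tx_inuse + ATM_PDU_OVHD < vcc->sk->sndbuf;
}
```

Note that `may_send(vcc, 0)` is a pure "is there room again?" probe, which is how clip_pop() below uses it to clear dev->tbusy once buffer space drains.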
int atm_charge(struct atm_vcc *vcc,int truesize);
struct sk_buff *atm_alloc_charge(struct atm_vcc *vcc,int pdu_size,
int gfp_flags);
/* atmioc.h - ranges for ATM-related ioctl numbers */
-/* Written 1995-1998 by Werner Almesberger, EPFL LRC */
+/* Written 1995-1999 by Werner Almesberger, EPFL LRC/ICA */
/*
- * See http://lrcwww.epfl.ch/linux-atm/magic.html for the complete list of
+ * See http://icawww1.epfl.ch/linux-atm/magic.html for the complete list of
* "magic" ioctl numbers.
*/
#ifndef _ATMLEC_H_
#define _ATMLEC_H_
+#include <linux/atmapi.h>
#include <linux/atmioc.h>
#include <linux/atm.h>
#include <linux/if_ether.h>
struct atmlec_config_msg {
unsigned int maximum_unknown_frame_count;
- unsigned long max_unknown_frame_time;
+ unsigned int max_unknown_frame_time;
unsigned short max_retry_count;
- unsigned long aging_time;
- unsigned long forward_delay_time;
- unsigned long arp_response_time;
- unsigned long flush_timeout;
- unsigned long path_switching_delay;
+ unsigned int aging_time;
+ unsigned int forward_delay_time;
+ unsigned int arp_response_time;
+ unsigned int flush_timeout;
+ unsigned int path_switching_delay;
unsigned int lane_version; /* LANE2: 1 for LANEv1, 2 for LANEv2 */
int mtu;
+ int is_proxy;
};
struct atmlec_msg {
struct {
unsigned char mac_addr[ETH_ALEN];
unsigned char atm_addr[ATM_ESA_LEN];
- unsigned long flag;/* Topology_change flag,
+ unsigned int flag;/* Topology_change flag,
remoteflag, permanent flag,
lecid, transaction id */
unsigned int targetless_le_arp; /* LANE2 */
uint32_t tran_id; /* transaction id */
unsigned char mac_addr[ETH_ALEN]; /* dst mac addr */
unsigned char atm_addr[ATM_ESA_LEN]; /* requestor ATM addr */
- } proxy; /* For mapping LE_ARP requests to responses. Filled by */
+ } proxy;
+ /* For mapping LE_ARP requests to responses. Filled by */
} content; /* zeppelin, returned by kernel. Used only when proxying */
-};
+} __ATM_API_ALIGN;
struct atmlec_ioc {
int dev_num;
#ifndef _ATMMPC_H_
#define _ATMMPC_H_
+#include <linux/atmapi.h>
#include <linux/atmioc.h>
#include <linux/atm.h>
uint16_t holding_time;
} eg_ctrl_info;
-struct mpc_parameters{
+struct mpc_parameters {
uint16_t mpc_p1; /* Shortcut-Setup Frame Count */
uint16_t mpc_p2; /* Shortcut-Setup Frame Time */
uint8_t mpc_p3[8]; /* Flow-detection Protocols */
uint16_t mpc_p4; /* MPC Initial Retry Time */
uint16_t mpc_p5; /* MPC Retry Time Maximum */
uint16_t mpc_p6; /* Hold Down Time */
-};
+} ;
-struct k_message{
+struct k_message {
uint16_t type;
uint32_t ip_mask;
uint8_t MPS_ctrl[ATM_ESA_LEN];
struct mpc_parameters params;
} content;
struct atm_qos qos;
-};
+} __ATM_API_ALIGN;
-struct llc_snap_hdr { /* RFC 1483 LLC/SNAP encapsulation for routed IP PDUs */
+struct llc_snap_hdr {
+ /* RFC 1483 LLC/SNAP encapsulation for routed IP PDUs */
uint8_t dsap; /* Destination Service Access Point (0xAA) */
uint8_t ssap; /* Source Service Access Point (0xAA) */
uint8_t ui; /* Unnumbered Information (0x03) */
#define RELOAD 301 /* kill -HUP the daemon for reload */
#endif /* _ATMMPC_H_ */
-
/* atmsap.h - ATM Service Access Point addressing definitions */
-/* Written 1995-1998 by Werner Almesberger, EPFL LRC/ICA */
+/* Written 1995-1999 by Werner Almesberger, EPFL LRC/ICA */
#ifndef _LINUX_ATMSAP_H
#define _LINUX_ATMSAP_H
+#include <linux/atmapi.h>
+
/*
* BEGIN_xx and END_xx markers are used for automatic generation of
* documentation. Do not change them.
unsigned char def_size; /* default packet size (log2), 4-12 (0 to */
/* omit) */
unsigned char window;/* packet window size, 1-127 (0 to omit) */
- } itu; /* ITU-T ecoding */
+ } itu; /* ITU-T encoding */
unsigned char user; /* user specified l3 information */
- struct { /* if l3_proto = ATM_L3_H310 */
- unsigned char term_type; /* terminal type */
+ struct { /* if l3_proto = ATM_L3_H310 */
+ unsigned char term_type; /* terminal type */
unsigned char fw_mpx_cap; /* forward multiplexing capability */
/* only if term_type != ATM_TT_NONE */
unsigned char bw_mpx_cap; /* backward multiplexing capability */
/* only if term_type != ATM_TT_NONE */
} h310;
- struct { /* if l3_proto = ATM_L3_TR9577 */
- unsigned char ipi; /* initial protocol id */
+ struct { /* if l3_proto = ATM_L3_TR9577 */
+ unsigned char ipi; /* initial protocol id */
unsigned char snap[5];/* IEEE 802.1 SNAP identifier */
/* (only if ipi == NLPID_IEEE802_1_SNAP) */
} tr9577;
} l3;
- struct atm_blli *next; /* next BLLI or NULL (undefined when used in */
- /* atmsvc_msg) ONLY USED IN OLD-STYLE API */
-};
+} __ATM_API_ALIGN;
struct atm_bhli {
struct atm_sap {
struct atm_bhli bhli; /* local SAP, high-layer information */
- struct atm_blli blli[ATM_MAX_BLLI]; /* local SAP, low-layer info */
+ struct atm_blli blli[ATM_MAX_BLLI] __ATM_API_ALIGN;
+ /* local SAP, low-layer info */
};
/* atmsvc.h - ATM signaling kernel-demon interface definitions */
-/* Written 1995-1999 by Werner Almesberger, EPFL LRC/ICA */
+/* Written 1995-2000 by Werner Almesberger, EPFL LRC/ICA */
#ifndef _LINUX_ATMSVC_H
#define _LINUX_ATMSVC_H
+#include <linux/atmapi.h>
#include <linux/atm.h>
#include <linux/atmioc.h>
struct atmsvc_msg {
enum atmsvc_msg_type type;
- unsigned long vcc;
- unsigned long listen_vcc; /* indicate */
+ atm_kptr_t vcc;
+ atm_kptr_t listen_vcc; /* indicate */
int reply; /* for okay and close: */
/* < 0: error before active */
/* (sigd has discarded ctx) */
struct sockaddr_atmsvc local; /* local SVC address */
struct atm_qos qos; /* QOS parameters */
struct atm_sap sap; /* SAP */
- unsigned long session; /* for p2pm */
+ unsigned int session; /* for p2pm */
struct sockaddr_atmsvc svc; /* SVC address */
-};
+} __ATM_API_ALIGN;
/*
- * Message contents: see ftp://lrcftp.epfl.ch/pub/linux/atm/docs/isp-*.tar.gz
+ * Message contents: see ftp://icaftp.epfl.ch/pub/linux/atm/docs/isp-*.tar.gz
*/
/*
+#ifndef _PPP_CHANNEL_H_
+#define _PPP_CHANNEL_H_
/*
* Definitions for the interface between the generic PPP code
* and a PPP channel.
* ==FILEVERSION 990909==
*/
-/* $Id: ppp_channel.h,v 1.2 1999/09/15 11:21:53 paulus Exp $ */
+/* $Id: ppp_channel.h,v 1.3 2000/01/31 01:42:48 davem Exp $ */
#include <linux/list.h>
#include <linux/skbuff.h>
extern void ppp_unregister_channel(struct ppp_channel *);
#endif /* __KERNEL__ */
+#endif
#define _MD_H
#include <linux/mm.h>
+#include <linux/config.h>
#include <linux/fs.h>
#include <linux/blkdev.h>
#include <asm/semaphore.h>
/* sonet.h - SONET/SHD physical layer control */
-/* Written 1995 by Werner Almesberger, EPFL LRC */
+/* Written 1995-1999 by Werner Almesberger, EPFL LRC/ICA */
#ifndef LINUX_SONET_H
#define LINUX_SONET_H
struct sonet_stats {
- long section_bip; /* section parity errors (B1) */
- long line_bip; /* line parity errors (B2) */
- long path_bip; /* path parity errors (B3) */
- long line_febe; /* line parity errors at remote */
- long path_febe; /* path parity errors at remote */
- long corr_hcs; /* correctable header errors */
- long uncorr_hcs; /* uncorrectable header errors */
- long tx_cells; /* cells sent */
- long rx_cells; /* cells received */
-};
+ int section_bip; /* section parity errors (B1) */
+ int line_bip; /* line parity errors (B2) */
+ int path_bip; /* path parity errors (B3) */
+ int line_febe; /* line parity errors at remote */
+ int path_febe; /* path parity errors at remote */
+ int corr_hcs; /* correctable header errors */
+ int uncorr_hcs; /* uncorrectable header errors */
+ int tx_cells; /* cells sent */
+ int rx_cells; /* cells received */
+} __attribute__ ((packed));
#define SONET_GETSTAT _IOR('a',ATMIOC_PHYTYP,struct sonet_stats)
/* get statistics */
/* net/atm/atmarp.h - RFC1577 ATM ARP */
-/* Written 1995-1998 by Werner Almesberger, EPFL LRC/ICA */
+/* Written 1995-1999 by Werner Almesberger, EPFL LRC/ICA */
#ifndef _ATMCLIP_H
unsigned long last_use; /* last send or receive operation */
unsigned long idle_timeout; /* keep open idle for so many jiffies*/
void (*old_push)(struct atm_vcc *vcc,struct sk_buff *skb);
- /* keep old push fn for detaching */
+ /* keep old push fn for chaining */
+ void (*old_pop)(struct atm_vcc *vcc,struct sk_buff *skb);
+ /* keep old pop fn for chaining */
struct clip_vcc *next; /* next VCC */
};
/* include/net/dsfield.h - Manipulation of the Differentiated Services field */
-/* Written 1998 by Werner Almesberger, EPFL ICA */
+/* Written 1998-2000 by Werner Almesberger, EPFL ICA */
#ifndef __NET_DSFIELD_H
__u16 tmp;
tmp = ntohs(*(__u16 *) ipv6h);
- tmp = (tmp & (mask << 4)) | (value << 4);
+ tmp = (tmp & ((mask << 4) | 0xf00f)) | (value << 4);
*(__u16 *) ipv6h = htons(tmp);
}
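The one-line change above widens the AND mask so the IPv6 version nibble and the top flow-label nibble survive a traffic-class update. A user-space before/after sketch on the host-order 16-bit word (version : traffic class : top 4 flow-label bits); the function names are illustrative, not the kernel's.

```c
#include <assert.h>
#include <stdint.h>

/* Buggy form: keeps only (mask << 4), clobbering the version and
 * flow-label nibbles around the traffic-class field. */
static uint16_t change_dsfield_old(uint16_t tmp, uint8_t mask, uint8_t value)
{
    return (tmp & (mask << 4)) | (value << 4);
}

/* Fixed form: OR in 0xf00f so the bits outside the 8-bit
 * traffic-class field are always preserved. */
static uint16_t change_dsfield_new(uint16_t tmp, uint8_t mask, uint8_t value)
{
    return (tmp & ((mask << 4) | 0xf00f)) | (value << 4);
}
```

With tmp = 0x6ABC (version 6, TC 0xAB), mask 0x03 and value 0x40, both forms compute the same new traffic class, but only the fixed form leaves the version nibble intact.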
__u32 lrcvtime; /* timestamp of last received data packet*/
__u16 last_seg_size; /* Size of last incoming segment */
__u16 rcv_mss; /* MSS used for delayed ACK decisions */
+ __u32 rcv_segs; /* Number of received segments since last ack */
} ack;
/* Data for direct copy to user */
__u32 rcv_tsecr; /* Time stamp echo reply */
__u32 ts_recent; /* Time stamp to echo next */
long ts_recent_stamp;/* Time we stored ts_recent (for aging) */
- __u32 last_ack_sent; /* last ack we sent (RTTM/PAWS) */
/* SACKs data */
struct tcp_sack_block selective_acks[4]; /* The SACKS themselves*/
#define TCP_DEBUG 1
#undef TCP_FORMAL_WINDOW
+#define TCP_MORE_COARSE_ACKS
+#undef TCP_LESS_COARSE_ACKS
#include <linux/config.h>
#include <linux/tcp.h>
* TIME-WAIT timer.
*/
-#define TCP_DELACK_MAX (HZ/2) /* maximal time to delay before sending an ACK */
+#define TCP_DELACK_MAX (HZ/5) /* maximal time to delay before sending an ACK */
#define TCP_DELACK_MIN (2) /* minimal time to delay before sending an ACK,
- * 2 scheduler ticks, not depending on HZ */
-#define TCP_ATO_MAX ((TCP_DELACK_MAX*4)/5) /* ATO producing TCP_DELACK_MAX */
+ * 2 scheduler ticks, not depending on HZ. */
+#define TCP_ATO_MAX (HZ/2) /* Clamp ATO estimator at this value. */
#define TCP_ATO_MIN 2
#define TCP_RTO_MAX (120*HZ)
#define TCP_RTO_MIN (HZ/5)
static __inline__ void tcp_dec_quickack_mode(struct tcp_opt *tp)
{
- if (tp->ack.quick && --tp->ack.quick == 0 && !tp->ack.pingpong) {
- /* Leaving quickack mode we deflate ATO to give peer
- * a time to adapt to new worse(!) RTO. It is not required
- * in pingpong mode, when ACKs were delayed in any case.
- */
+ if (tp->ack.quick && --tp->ack.quick == 0) {
+ /* Leaving quickack mode we deflate ATO. */
tp->ack.ato = TCP_ATO_MIN;
}
}
* Don't update rcv_wup/rcv_wnd here or else
* we will not be able to advertise a zero
* window in time. --DaveM
+ *
+ * Relax Will Robinson.
*/
new_win = cur_win;
- } else {
- tp->rcv_wnd = new_win;
- tp->rcv_wup = tp->rcv_nxt;
}
+ tp->rcv_wnd = new_win;
+ tp->rcv_wup = tp->rcv_nxt;
/* RFC1323 scaling applied */
new_win >>= tp->rcv_wscale;
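The final `new_win >>= tp->rcv_wscale` applies RFC 1323 window scaling: the 16-bit header field carries the window divided by 2^wscale, and the peer reconstructs it by shifting back, so the advertised window is rounded down to a multiple of 1 << wscale. A small sketch of the round-trip; `advertise`/`reconstruct` are illustrative names, not kernel functions.

```c
#include <assert.h>
#include <stdint.h>

/* What goes on the wire: the window right-shifted by the scale. */
static uint32_t advertise(uint32_t win, int wscale)
{
    return win >> wscale;
}

/* What the peer recovers: the field left-shifted by the scale. */
static uint32_t reconstruct(uint32_t field, int wscale)
{
    return field << wscale;
}
```

The reconstructed window never exceeds the original, and the rounding error is always below the window granularity 1 << wscale.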
void kernel_to_ipc64_perm(struct kern_ipc_perm *in, struct ipc64_perm *out);
void ipc64_perm_to_ipc_perm(struct ipc64_perm *in, struct ipc_perm *out);
-#if BITS_PER_LONG < 64
int ipc_parse_version (int *cmd);
-#else
-
-#define ipc_parse_version(cmd) IPC_64
-
-#endif
no_mmaps:
first = first >> PGDIR_SHIFT;
last = last >> PGDIR_SHIFT;
- if (last > first)
+ if (last > first) {
clear_page_tables(mm, first, last-first);
+ flush_tlb_pgtables(mm, first << PGDIR_SHIFT, last << PGDIR_SHIFT);
+ }
}
/* Munmap is split into 2 main parts -- this part which finds
#include <asm/atomic.h>
#include <asm/errno.h>
-#include "tunable.h"
-
int atm_charge(struct atm_vcc *vcc,int truesize)
{
atm_force_charge(vcc,truesize);
- if (atomic_read(&vcc->rx_inuse) <= vcc->rx_quota) return 1;
+ if (atomic_read(&vcc->rx_inuse) <= vcc->sk->rcvbuf) return 1;
atm_return(vcc,truesize);
vcc->stats->rx_drop++;
return 0;
int guess = atm_guess_pdu2truesize(pdu_size);
atm_force_charge(vcc,guess);
- if (atomic_read(&vcc->rx_inuse) <= vcc->rx_quota) {
+ if (atomic_read(&vcc->rx_inuse) <= vcc->sk->rcvbuf) {
struct sk_buff *skb = alloc_skb(pdu_size,gfp_flags);
if (skb) {
/* net/atm/clip.c - RFC1577 Classical IP over ATM */
-/* Written 1995-1999 by Werner Almesberger, EPFL LRC/ICA */
+/* Written 1995-2000 by Werner Almesberger, EPFL LRC/ICA */
#include <linux/config.h>
#include <asm/atomic.h>
#include "common.h"
-#include "tunable.h"
#include "resources.h"
#include "ipcommon.h"
#include <net/atmclip.h>
}
+static void clip_pop(struct atm_vcc *vcc,struct sk_buff *skb)
+{
+ DPRINTK("clip_pop(vcc %p)\n",vcc);
+ CLIP_VCC(vcc)->old_pop(vcc,skb);
+ /* skb->dev == NULL in outbound ARP packets */
+ if (atm_may_send(vcc,0) && skb->dev) {
+ skb->dev->tbusy = 0;
+ mark_bh(NET_BH);
+ }
+}
+
+
static void clip_neigh_destroy(struct neighbour *neigh)
{
DPRINTK("clip_neigh_destroy (neigh %p)\n",neigh);
static int clip_start_xmit(struct sk_buff *skb,struct net_device *dev)
{
struct atmarp_entry *entry;
+ struct atm_vcc *vcc;
DPRINTK("clip_start_xmit (skb %p)\n",skb);
if (!skb->dst) {
return 0;
}
DPRINTK("neigh %p, vccs %p\n",entry,entry->vccs);
- ATM_SKB(skb)->vcc = entry->vccs->vcc;
- DPRINTK("using neighbour %p, vcc %p\n",skb->dst->neighbour,
- ATM_SKB(skb)->vcc);
+ ATM_SKB(skb)->vcc = vcc = entry->vccs->vcc;
+ DPRINTK("using neighbour %p, vcc %p\n",skb->dst->neighbour,vcc);
if (entry->vccs->encap) {
void *here;
memcpy(here,llc_oui,sizeof(llc_oui));
((u16 *) here)[3] = skb->protocol;
}
- atomic_add(skb->truesize,&ATM_SKB(skb)->vcc->tx_inuse);
+ atomic_add(skb->truesize,&vcc->tx_inuse);
+ dev->tbusy = !atm_may_send(vcc,0);
ATM_SKB(skb)->iovcnt = 0;
- ATM_SKB(skb)->atm_options = ATM_SKB(skb)->vcc->atm_options;
+ ATM_SKB(skb)->atm_options = vcc->atm_options;
entry->vccs->last_use = jiffies;
- DPRINTK("atm_skb(%p)->vcc(%p)->dev(%p)\n",skb,ATM_SKB(skb)->vcc,
- ATM_SKB(skb)->vcc->dev);
+ DPRINTK("atm_skb(%p)->vcc(%p)->dev(%p)\n",skb,vcc,vcc->dev);
PRIV(dev)->stats.tx_packets++;
PRIV(dev)->stats.tx_bytes += skb->len;
- (void) ATM_SKB(skb)->vcc->dev->ops->send(ATM_SKB(skb)->vcc,skb);
+ (void) vcc->dev->ops->send(vcc,skb);
return 0;
}
clip_vcc->last_use = jiffies;
clip_vcc->idle_timeout = timeout*HZ;
clip_vcc->old_push = vcc->push;
+ clip_vcc->old_pop = vcc->pop;
save_flags(flags);
cli();
vcc->push = clip_push;
+ vcc->pop = clip_pop;
	skb_migrate(&vcc->recvq,&copy);
restore_flags(flags);
/* re-process everything received between connection setup and MKIP */
dev->hard_header_len = RFC1483LLC_LEN;
dev->mtu = RFC1626_MTU;
dev->addr_len = 0;
- dev->tx_queue_len = 0;
+ dev->tx_queue_len = 100; /* "normal" queue */
+ /* When using a "real" qdisc, the qdisc determines the queue */
+ /* length. tx_queue_len is only used for the default case, */
+ /* without any more elaborate queuing. 100 is a reasonable */
+ /* compromise between decent burst-tolerance and protection */
+ /* against memory hogs. */
dev->flags = 0;
dev_init_buffers(dev); /* is this ever supposed to be used ? */
return 0;
static struct atmdev_ops atmarpd_dev_ops = {
- NULL, /* no dev_close */
- NULL, /* no open */
- atmarpd_close, /* close */
- NULL, /* no ioctl */
- NULL, /* no getsockopt */
- NULL, /* no setsockopt */
- NULL, /* send */
- NULL, /* no sg_send */
- NULL, /* no send_oam */
- NULL, /* no phy_put */
- NULL, /* no phy_get */
- NULL, /* no feedback */
- NULL, /* no change_qos */
- NULL /* no free_rx_skb */
+ close: atmarpd_close,
};
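The `close: atmarpd_close` form replacing the NULL-padded table is gcc's labeled-element initializer (ISO C99 spells it `.close = ...`); members not named are implicitly zero-initialized, which is what makes dropping the long NULL lists safe. A minimal illustration with a toy ops table:

```c
#include <assert.h>
#include <stddef.h>

/* Toy ops table mirroring the pattern above; the real atmdev_ops has
 * many more members, all of which default to NULL when unnamed. */
struct toy_ops {
    int (*open)(void);
    int (*close)(void);
    int (*send)(void);
};

static int toy_close(void) { return 42; }

/* C99 spelling of the gcc `close:` labeled-element syntax used in the
 * patch; only the named member is set. */
static struct toy_ops toy_dev_ops = {
    .close = toy_close,
};
```

Unlike positional initializers, this stays correct if members are added to or reordered in the ops struct.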
/* net/atm/common.c - ATM sockets (common part for PVC and SVC) */
-/* Written 1995-1999 by Werner Almesberger, EPFL LRC/ICA */
+/* Written 1995-2000 by Werner Almesberger, EPFL LRC/ICA */
#include <linux/config.h>
#include <asm/uaccess.h>
#include <asm/poll.h>
-#ifdef CONFIG_MMU_HACKS
-#include <linux/mmuio.h>
-#include <linux/uio.h>
-#endif
-
#if defined(CONFIG_ATM_LANE) || defined(CONFIG_ATM_LANE_MODULE)
#include <linux/atmlec.h>
#include "lec.h"
#include "resources.h" /* atm_find_dev */
#include "common.h" /* prototypes */
#include "protocols.h" /* atm_init_<transport> */
-#include "tunable.h" /* tunable parameters */
#include "addr.h" /* address registry */
#ifdef CONFIG_ATM_CLIP
#include <net/atmclip.h> /* for clip_create */
{
struct sk_buff *skb;
- if (atomic_read(&vcc->tx_inuse) && size+atomic_read(&vcc->tx_inuse)+
- ATM_PDU_OVHD > vcc->tx_quota) {
- DPRINTK("Sorry: tx_inuse = %d, size = %d, tx_quota = %ld\n",
- atomic_read(&vcc->tx_inuse),size,vcc->tx_quota);
+ if (atomic_read(&vcc->tx_inuse) && !atm_may_send(vcc,size)) {
+ DPRINTK("Sorry: tx_inuse = %d, size = %d, sndbuf = %d\n",
+ atomic_read(&vcc->tx_inuse),size,vcc->sk->sndbuf);
return NULL;
}
while (!(skb = alloc_skb(size,GFP_KERNEL))) schedule();
if (sock->type == SOCK_STREAM) return -EINVAL;
if (!(sk = alloc_atm_vcc_sk(family))) return -ENOMEM;
vcc = sk->protinfo.af_atm;
- vcc->flags = ATM_VF_SCRX | ATM_VF_SCTX;
+ vcc->flags = 0;
vcc->dev = NULL;
vcc->family = sock->ops->family;
vcc->alloc_tx = alloc_tx;
vcc->callback = NULL;
memset(&vcc->local,0,sizeof(struct sockaddr_atmsvc));
memset(&vcc->remote,0,sizeof(struct sockaddr_atmsvc));
- vcc->tx_quota = ATM_TXBQ_DEF;
- vcc->rx_quota = ATM_RXBQ_DEF;
atomic_set(&vcc->tx_inuse,0);
atomic_set(&vcc->rx_inuse,0);
vcc->push = NULL;
else vcc->dev->ops->free_rx_skb(vcc, skb);
return error ? error : eff_len;
}
-#ifdef CONFIG_MMU_HACKS
- if (vcc->flags & ATM_VF_SCRX) {
- mmucp_tofs((unsigned long) buff,eff_len,skb,
- (unsigned long) skb->data);
- return eff_len;
- }
- else
-#endif
- {
- error = copy_to_user(buff,skb->data,eff_len) ? -EFAULT : 0;
- if (!vcc->dev->ops->free_rx_skb) kfree_skb(skb);
- else vcc->dev->ops->free_rx_skb(vcc, skb);
- }
+ error = copy_to_user(buff,skb->data,eff_len) ? -EFAULT : 0;
+ if (!vcc->dev->ops->free_rx_skb) kfree_skb(skb);
+ else vcc->dev->ops->free_rx_skb(vcc, skb);
return error ? error : eff_len;
}
if (!(vcc->flags & ATM_VF_READY)) return -EPIPE;
if (!size) return 0;
/* verify_area is done by net/socket.c */
-#ifdef CONFIG_MMU_HACKS
- if ((vcc->flags & ATM_VF_SCTX) && vcc->dev->ops->sg_send &&
- vcc->dev->ops->sg_send(vcc,(unsigned long) buff,size)) {
- int res,max_iov;
-
- max_iov = 2+size/PAGE_SIZE;
- /*
- * Doesn't use alloc_tx yet - this will change later. @@@
- */
- while (!(skb = alloc_skb(sizeof(struct iovec)*max_iov,
- GFP_KERNEL))) {
- if (m->msg_flags & MSG_DONTWAIT) return -EAGAIN;
- interruptible_sleep_on(&vcc->wsleep);
- if (signal_pending(current)) return -ERESTARTSYS;
- }
- skb_put(skb,size);
- res = lock_user((unsigned long) buff,size,max_iov,
- (struct iovec *) skb->data);
- if (res < 0) {
- kfree_skb(skb);
- if (res != -EAGAIN) return res;
- }
- else {
- DPRINTK("res is %d\n",res);
- DPRINTK("Asnd %d += %d\n",vcc->tx_inuse,skb->truesize);
- atomic_add(skb->truesize+ATM_PDU_OVHD,&vcc->tx_inuse);
- ATM_SKB(skb)->iovcnt = res;
- error = vcc->dev->ops->send(vcc,skb);
- /* FIXME: security: may send up to 3 "garbage" bytes */
- return error ? error : size;
- }
- }
-#endif
eff = (size+3) & ~3; /* align to word boundary */
while (!(skb = vcc->alloc_tx(vcc,eff))) {
if (m->msg_flags & MSG_DONTWAIT) return -EAGAIN;
return vcc->reply;
if (!(vcc->flags & ATM_VF_READY)) return -EPIPE;
}
+ skb->dev = NULL; /* for paths shared with net_device interfaces */
ATM_SKB(skb)->iovcnt = 0;
ATM_SKB(skb)->atm_options = vcc->atm_options;
if (copy_from_user(skb_put(skb,size),buff,size)) {
if (sock->state != SS_CONNECTING) {
if (vcc->qos.txtp.traffic_class != ATM_NONE &&
vcc->qos.txtp.max_sdu+atomic_read(&vcc->tx_inuse)+
- ATM_PDU_OVHD <= vcc->tx_quota)
+ ATM_PDU_OVHD <= vcc->sk->sndbuf)
mask |= POLLOUT | POLLWRNORM;
}
else if (vcc->reply != WAITING) {
vcc = ATM_SD(sock);
switch (cmd) {
- case TIOCOUTQ:
+ case SIOCOUTQ:
if (sock->state != SS_CONNECTED ||
!(vcc->flags & ATM_VF_READY)) return -EINVAL;
- return put_user(vcc->tx_quota-
+ return put_user(vcc->sk->sndbuf-
atomic_read(&vcc->tx_inuse)-ATM_PDU_OVHD,
(int *) arg) ? -EFAULT : 0;
- case TIOCINQ:
+ case SIOCINQ:
{
struct sk_buff *skb;
return copy_to_user((void *) arg,&vcc->timestamp,
sizeof(struct timeval)) ? -EFAULT : 0;
case ATM_SETSC:
- if (arg & ~(ATM_VF_SCRX | ATM_VF_SCTX)) return -EINVAL;
- /* @@@ race condition - should split flags into
- "volatile" and non-volatile part */
- vcc->flags = (vcc->flags & ~(ATM_VF_SCRX |
- ATM_VF_SCTX)) | arg;
+ printk(KERN_WARNING "ATM_SETSC is obsolete\n");
return 0;
case ATMSIGD_CTRL:
if (!capable(CAP_NET_ADMIN)) return -EPERM;
error = sigd_attach(vcc);
if (!error) sock->state = SS_CONNECTED;
return error;
-#ifdef WE_DONT_SUPPORT_P2MP_YET
- case ATM_CREATE_LEAF:
- {
- struct socket *session;
-
- if (!(session = sockfd_lookup(arg,&error)))
- return error;
- if (sock->ops->family != PF_ATMSVC ||
- session->ops->family != PF_ATMSVC)
- return -EPROTOTYPE;
- return create_leaf(sock,session);
- }
-#endif
#ifdef CONFIG_ATM_CLIP
case SIOCMKCLIP:
if (!capable(CAP_NET_ADMIN)) return -EPERM;
default:
if (!dev->ops->ioctl) return -EINVAL;
size = dev->ops->ioctl(dev,cmd,buf);
- if (size < 0) return size;
+ if (size < 0)
+ return size == -ENOIOCTLCMD ? -EINVAL : size;
}
if (!size) return 0;
return put_user(size,&((struct atmif_sioc *) arg)->length) ?
vcc = ATM_SD(sock);
switch (optname) {
- case SO_SNDBUF:
- if (get_user(value,(unsigned long *) optval))
- return -EFAULT;
- if (!value) value = ATM_TXBQ_DEF;
- if (value < ATM_TXBQ_MIN) value = ATM_TXBQ_MIN;
- if (value > ATM_TXBQ_MAX) value = ATM_TXBQ_MAX;
- vcc->tx_quota = value;
- return 0;
- case SO_RCVBUF:
- if (get_user(value,(unsigned long *) optval))
- return -EFAULT;
- if (!value) value = ATM_RXBQ_DEF;
- if (value < ATM_RXBQ_MIN) value = ATM_RXBQ_MIN;
- if (value > ATM_RXBQ_MAX) value = ATM_RXBQ_MAX;
- vcc->rx_quota = value;
- return 0;
case SO_ATMQOS:
{
struct atm_qos qos;
vcc = ATM_SD(sock);
switch (optname) {
- case SO_SNDBUF:
- return put_user(vcc->tx_quota,(unsigned long *) optval)
- ? -EFAULT : 0;
- case SO_RCVBUF:
- return put_user(vcc->rx_quota,(unsigned long *) optval)
- ? -EFAULT : 0;
- case SO_BCTXOPT:
- /* fall through */
- case SO_BCRXOPT:
- printk(KERN_WARNING "Warning: SO_BCTXOPT/SO_BCRXOPT "
- "are obsolete\n");
- break;
case SO_ATMQOS:
if (!(vcc->flags & ATM_VF_HASQOS)) return -EINVAL;
return copy_to_user(optval,&vcc->qos,sizeof(vcc->qos)) ?
#include "lec.h"
#include "lec_arpc.h"
-#include "tunable.h"
#include "resources.h" /* for bind_vcc() */
#if 0
static int lec_send_packet(struct sk_buff *skb, struct net_device *dev);
static int lec_close(struct net_device *dev);
static struct net_device_stats *lec_get_stats(struct net_device *dev);
-static int lec_init(struct net_device *dev);
+static void lec_init(struct net_device *dev);
static __inline__ struct lec_arp_table* lec_arp_find(struct lec_priv *priv,
unsigned char *mac_addr);
static __inline__ int lec_arp_remove(struct lec_arp_table **lec_arp_tables,
NULL /* associate indicator, spec 3.1.5 */
};
-/* will be lec0, lec1, lec2 etc. */
-static char myname[] = "lecxx";
-
static unsigned char bus_mac[ETH_ALEN] = {0xff,0xff,0xff,0xff,0xff,0xff};
/* Device structures */
lec_h = (struct lecdatahdr_8023*)skb->data;
lec_h->le_header = htons(priv->lecid);
+#ifdef CONFIG_TR
+ /* Ugly. Use this to realign Token Ring packets for
+ * drivers such as the PCA-200E. */
+ if (priv->is_trdev) {
+ skb2 = skb_realloc_headroom(skb, LEC_HEADER_LEN);
+ kfree_skb(skb);
+ if (skb2 == NULL) return 0;
+ skb = skb2;
+ }
+#endif
+
#if DUMP_PACKETS > 0
printk("%s: send datalen:%ld lecid:%4.4x\n", dev->name,
skb->len, priv->lecid);
if (dev->change_mtu(dev, mesg->content.config.mtu))
printk("%s: change_mtu to %d failed\n", dev->name,
mesg->content.config.mtu);
+ priv->is_proxy = mesg->content.config.is_proxy;
break;
case l_flush_tran_id:
lec_set_flush_tran_id(priv, mesg->content.normal.atm_addr,
}
static struct atmdev_ops lecdev_ops = {
- NULL, /*dev_close*/
- NULL, /*open*/
- lec_atm_close, /*close*/
- NULL, /*ioctl*/
- NULL, /*getsockopt */
- NULL, /*setsockopt */
- lec_atm_send, /*send */
- NULL, /*sg_send */
-#if 0 /* these are disabled in <linux/atmdev.h> too */
- NULL, /*poll */
- NULL, /*send_iovec*/
-#endif
- NULL, /*send_oam*/
- NULL, /*phy_put*/
- NULL, /*phy_get*/
- NULL, /*feedback*/
- NULL, /* change_qos*/
- NULL /* free_rx_skb*/
+ close: lec_atm_close,
+ send: lec_atm_send
};
static struct atm_dev lecatm_dev = {
return 0;
}
-static int
+static void
lec_init(struct net_device *dev)
{
- struct lec_priv *priv;
-
- priv = (struct lec_priv *)dev->priv;
- if (priv->is_trdev) {
-#ifdef CONFIG_TR
- init_trdev(dev, 0);
-#endif
- } else ether_setup(dev);
dev->change_mtu = lec_change_mtu;
dev->open = lec_open;
dev->stop = lec_close;
dev->set_multicast_list = NULL;
dev->do_ioctl = NULL;
printk("%s: Initialized!\n",dev->name);
- return 0;
+ return;
}
static unsigned char lec_ctrl_magic[] = {
{
struct net_device *dev = (struct net_device *)vcc->proto_data;
struct lec_priv *priv = (struct lec_priv *)dev->priv;
- struct lecdatahdr_8023 *hdr;
#if DUMP_PACKETS >0
int i=0;
skb_queue_tail(&vcc->recvq, skb);
wake_up(&vcc->sleep);
} else { /* Data frame, queue to protocol handlers */
+ unsigned char *dst;
+
atm_return(vcc,skb->truesize);
- hdr = (struct lecdatahdr_8023 *)skb->data;
- if (hdr->le_header == htons(priv->lecid) ||
+ if (*(uint16_t *)skb->data == htons(priv->lecid) ||
!priv->lecd) {
/* Probably looping back, or if lecd is missing,
lecd has gone down */
dev_kfree_skb(skb);
return;
}
- if (priv->lec_arp_empty_ones) { /* FILTER DATA!!!! */
+#ifdef CONFIG_TR
+ if (priv->is_trdev) dst = ((struct lecdatahdr_8025 *)skb->data)->h_dest;
+ else
+#endif
+ dst = ((struct lecdatahdr_8023 *)skb->data)->h_dest;
+
+ if (!(dst[0]&0x01) && /* Never filter Multi/Broadcast */
+ !priv->is_proxy && /* Proxy wants all the packets */
+ memcmp(dst, dev->dev_addr, sizeof(dev->dev_addr))) {
+ dev_kfree_skb(skb);
+ return;
+ }
+ if (priv->lec_arp_empty_ones) {
lec_arp_check_empties(priv, vcc, skb);
}
skb->dev = dev;
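The new filter accepts a frame in exactly three cases: the destination has the multicast/broadcast (I/G) bit set in its first byte, the client is a proxy, or the address matches the interface. A standalone sketch with hypothetical names (the kernel works on struct lecdatahdr_* inside the skb):

```c
#include <assert.h>
#include <string.h>

#define TOY_ALEN 6   /* ETH_ALEN stand-in */

/* Returns 1 if the frame should be delivered, 0 if it should be
 * dropped, following the three conditions in the patch. */
static int toy_lec_accept(const unsigned char *dst,
                          const unsigned char *dev_addr, int is_proxy)
{
    if (dst[0] & 0x01)   /* never filter multicast/broadcast (I/G bit) */
        return 1;
    if (is_proxy)        /* a proxy client wants all the packets */
        return 1;
    /* otherwise only frames addressed to this interface pass */
    return memcmp(dst, dev_addr, TOY_ALEN) == 0;
}
```

The I/G bit test works because the bit is in the least significant position of the first octet for both Ethernet and Token Ring canonical addresses.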
int
lecd_attach(struct atm_vcc *vcc, int arg)
{
- int i, result;
+ int i;
struct lec_priv *priv;
if (arg<0)
return -EINVAL;
#endif
if (!dev_lec[i]) {
- dev_lec[i] = (struct net_device*)
- kmalloc(sizeof(struct net_device)+sizeof(myname)+1,
- GFP_KERNEL);
- if (!dev_lec[i])
- return -ENOMEM;
- memset(dev_lec[i],0,sizeof(struct net_device)+sizeof(myname)+1);
+ int is_trdev, size;
- dev_lec[i]->priv = kmalloc(sizeof(struct lec_priv), GFP_KERNEL);
- if (!dev_lec[i]->priv)
+ is_trdev = 0;
+ if (i >= (MAX_LEC_ITF - NUM_TR_DEVS))
+ is_trdev = 1;
+
+ size = sizeof(struct lec_priv);
+#ifdef CONFIG_TR
+ if (is_trdev)
+ dev_lec[i] = init_trdev(NULL, size);
+ else
+#endif
+ dev_lec[i] = init_etherdev(NULL, size);
+ if (!dev_lec[i])
return -ENOMEM;
- memset(dev_lec[i]->priv,0,sizeof(struct lec_priv));
- priv = (struct lec_priv *)dev_lec[i]->priv;
- if (i >= (MAX_LEC_ITF - NUM_TR_DEVS))
- priv->is_trdev = 1;
-
- dev_lec[i]->name = (char*)(dev_lec[i]+1);
- sprintf(dev_lec[i]->name, "lec%d",i);
- dev_lec[i]->init = lec_init;
- if ((result = register_netdev(dev_lec[i])) !=0)
- return result;
- sprintf(dev_lec[i]->name, "lec%d", i); /* init_trdev globbers device name */
+ priv = dev_lec[i]->priv;
+ priv->is_trdev = is_trdev;
+ sprintf(dev_lec[i]->name, "lec%d", i);
+ lec_init(dev_lec[i]);
} else {
- priv = (struct lec_priv *)dev_lec[i]->priv;
+ priv = dev_lec[i]->priv;
if (priv->lecd)
return -EADDRINUSE;
}
#endif
} else
unregister_netdev(dev_lec[i]);
- kfree(dev_lec[i]->priv);
kfree(dev_lec[i]);
dev_lec[i] = NULL;
}
if (entry)
entry->next = to_remove->next;
}
- if (!entry)
+ if (!entry) {
if (to_remove == priv->lec_no_forward) {
priv->lec_no_forward = to_remove->next;
} else {
if (entry)
entry->next = to_remove->next;
}
+ }
lec_arp_clear_vccs(to_remove);
kfree(to_remove);
}
#include "lec.h"
#include "mpc.h"
-#include "tunable.h"
#include "resources.h" /* for bind_vcc() */
/*
return;
}
-static const char *mpoa_device_type_string (char type)
+static const char * __attribute__ ((unused)) mpoa_device_type_string(char type)
{
switch(type) {
case NON_MPOA:
dprintk("mpoa: (%s) mpc_vcc_close:\n", dev->name);
in_entry = mpc->in_ops->search_by_vcc(vcc, mpc);
if (in_entry) {
- unsigned char *ip = (unsigned char *)&in_entry->ctrl_info.in_dst_ip;
+ unsigned char *ip __attribute__ ((unused)) =
+ (unsigned char *)&in_entry->ctrl_info.in_dst_ip;
dprintk("mpoa: (%s) mpc_vcc_close: ingress SVC closed ip = %u.%u.%u.%u\n",
mpc->dev->name, ip[0], ip[1], ip[2], ip[3]);
in_entry->shortcut = NULL;
}
static struct atmdev_ops mpc_ops = { /* only send is required */
- NULL, /* dev_close */
- NULL, /* open */
- mpoad_close, /* close */
- NULL, /* ioctl */
- NULL, /* getsockopt */
- NULL, /* setsockopt */
- msg_from_mpoad, /* send */
- NULL, /* sg_send */
- NULL, /* send_oam */
- NULL, /* phy_put */
- NULL, /* phy_get */
- NULL, /* feedback */
- NULL, /* change_qos */
- NULL, /* free_rx_skb */
- NULL /* proc_read */
+ close: mpoad_close,
+ send: msg_from_mpoad
};
static struct atm_dev mpc_dev = {
*/
static void check_qos_and_open_shortcut(struct k_message *msg, struct mpoa_client *client, in_cache_entry *entry){
uint32_t dst_ip = msg->content.in_info.in_dst_ip;
- unsigned char *ip = (unsigned char *)&dst_ip;
+ unsigned char *ip __attribute__ ((unused)) = (unsigned char *)&dst_ip;
struct atm_mpoa_qos *qos = atm_mpoa_search_qos(dst_ip);
eg_cache_entry *eg_entry = client->eg_ops->search_by_src_ip(dst_ip, client);
if(eg_entry && eg_entry->shortcut){
struct mpoa_client *client)
{
unsigned long flags;
- unsigned char *ip = (unsigned char *)&dst_ip;
+ unsigned char *ip __attribute__ ((unused)) = (unsigned char *)&dst_ip;
in_cache_entry* entry = kmalloc(sizeof(in_cache_entry), GFP_KERNEL);
if (entry == NULL) {
if( entry->count > mpc->parameters.mpc_p1 &&
entry->entry_state == INGRESS_INVALID){
- unsigned char *ip = (unsigned char *)&entry->ctrl_info.in_dst_ip;
+ unsigned char *ip __attribute__ ((unused)) =
+ (unsigned char *)&entry->ctrl_info.in_dst_ip;
dprintk("mpoa: (%s) mpoa_caches.c: threshold exceeded for ip %u.%u.%u.%u, sending MPOA res req\n", mpc->dev->name, ip[0], ip[1], ip[2], ip[3]);
entry->entry_state = INGRESS_RESOLVING;
while(eg_entry != NULL){
for(i=0;i<ATM_ESA_LEN;i++){
length += sprintf((char *)page + length,"%02x",eg_entry->ctrl_info.in_MPC_data_ATM_addr[i]);}
- length += sprintf((char *)page + length,"\n%-16lu%s%-14lu%-15u",ntohl(eg_entry->ctrl_info.cache_id), egress_state_string(eg_entry->entry_state), (eg_entry->ctrl_info.holding_time-(now.tv_sec-eg_entry->tv.tv_sec)), eg_entry->packets_rcvd);
+ length += sprintf((char *)page + length,"\n%-16lu%s%-14lu%-15u",(unsigned long) ntohl(eg_entry->ctrl_info.cache_id), egress_state_string(eg_entry->entry_state), (eg_entry->ctrl_info.holding_time-(now.tv_sec-eg_entry->tv.tv_sec)), eg_entry->packets_rcvd);
/* latest IP address */
temp = (unsigned char *)&eg_entry->latest_ip_addr;
/* net/atm/proc.c - ATM /proc interface */
-/* Written 1995-1999 by Werner Almesberger, EPFL LRC/ICA */
+/* Written 1995-2000 by Werner Almesberger, EPFL LRC/ICA */
/*
* The mechanism used here isn't designed for speed but rather for convenience
&proc_spec_atm_operations, /* default ATM directory file-ops */
};
+
static void add_stats(char *buf,const char *aal,
const struct atm_aal_stats *stats)
{
- sprintf(strchr(buf,0),"%s ( %ld %ld %ld %ld %ld )",aal,stats->tx,
+ sprintf(strchr(buf,0),"%s ( %d %d %d %d %d )",aal,stats->tx,
stats->tx_err,stats->rx,stats->rx_err,stats->rx_drop);
}
len = strlen(addr->sas_addr.pub);
buf += len;
if (*addr->sas_addr.pub) {
- *buf += '+';
+ *buf++ = '+';
len++;
}
}
}
+static void vc_info(struct atm_vcc *vcc,char *buf)
+{
+ char *here;
+
+ here = buf+sprintf(buf,"%p ",vcc);
+ if (!vcc->dev) here += sprintf(here,"Unassigned ");
+ else here += sprintf(here,"%3d %3d %5d ",vcc->dev->number,vcc->vpi,
+ vcc->vci);
+ switch (vcc->family) {
+ case AF_ATMPVC:
+ here += sprintf(here,"PVC");
+ break;
+ case AF_ATMSVC:
+ here += sprintf(here,"SVC");
+ break;
+ default:
+ here += sprintf(here,"%3d",vcc->family);
+ }
+ here += sprintf(here," %04x %5d %7d/%7d %7d/%7d\n",vcc->flags,
+ vcc->reply,
+ atomic_read(&vcc->tx_inuse),vcc->sk->sndbuf,
+ atomic_read(&vcc->rx_inuse),vcc->sk->rcvbuf);
+}
+
+
static void svc_info(struct atm_vcc *vcc,char *buf)
{
char *here;
int i;
- if (!vcc->dev) sprintf(buf,"Unassigned ");
+ if (!vcc->dev)
+ sprintf(buf,sizeof(void *) == 4 ? "N/A@%p%6s" : "N/A@%p%2s",
+ vcc,"");
else sprintf(buf,"%3d %3d %5d ",vcc->dev->number,vcc->vpi,vcc->vci);
here = strchr(buf,0);
here += sprintf(here,"%-10s ",vcc_state(vcc));
lec_info(struct lec_arp_table *entry, char *buf)
{
int j, offset=0;
-
for(j=0;j<ETH_ALEN;j++) {
offset+=sprintf(buf+offset,"%2.2x",0xff&entry->mac_addr[j]);
return 0;
}
+
+static int atm_vc_info(loff_t pos,char *buf)
+{
+ struct atm_dev *dev;
+ struct atm_vcc *vcc;
+ int left;
+
+ if (!pos)
+ return sprintf(buf,sizeof(void *) == 4 ? "%-8s%s" : "%-16s%s",
+ "Address"," Itf VPI VCI Fam Flags Reply Send buffer"
+ " Recv buffer\n");
+ left = pos-1;
+ for (dev = atm_devs; dev; dev = dev->next)
+ for (vcc = dev->vccs; vcc; vcc = vcc->next)
+ if (!left--) {
+ vc_info(vcc,buf);
+ return strlen(buf);
+ }
+ for (vcc = nodev_vccs; vcc; vcc = vcc->next)
+ if (!left--) {
+ vc_info(vcc,buf);
+ return strlen(buf);
+ }
+
+ return 0;
+}
+
+
static int atm_svc_info(loff_t pos,char *buf)
{
struct atm_dev *dev;
struct lec_arp_table *entry;
int i, count, d, e;
struct net_device **dev_lec;
+
if (!pos) {
return sprintf(buf,"Itf MAC ATM destination"
" Status Flags "
if (count < 0) return -EINVAL;
page = get_free_page(GFP_KERNEL);
if (!page) return -ENOMEM;
- dev = ((struct proc_dir_entry *)file->f_dentry->d_inode->u.generic_ip)->data;
+ dev = ((struct proc_dir_entry *) file->f_dentry->d_inode->u.generic_ip)
+ ->data;
if (!dev->ops->proc_read)
length = -EINVAL;
else {
return length;
}
+
static ssize_t proc_spec_atm_read(struct file *file,char *buf,size_t count,
loff_t *pos)
{
unsigned long page;
int length;
int (*info)(loff_t,char *);
- info = ((struct proc_dir_entry *)file->f_dentry->d_inode->u.generic_ip)->data;
+ info = ((struct proc_dir_entry *) file->f_dentry->d_inode->u.generic_ip)
+ ->data;
if (count < 0) return -EINVAL;
page = get_free_page(GFP_KERNEL);
return length;
}
+
struct proc_dir_entry *atm_proc_root;
EXPORT_SYMBOL(atm_proc_root);
+
int atm_proc_dev_register(struct atm_dev *dev)
{
int digits,num;
kfree(dev->proc_name);
}
+
+#define CREATE_ENTRY(name) \
+ name = create_proc_entry(#name,0,atm_proc_root); \
+ if (!name) goto cleanup; \
+ name->data = atm_##name##_info; \
+ name->ops = &proc_spec_atm_inode_operations
+
+
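CREATE_ENTRY leans on two preprocessor operators: `#name` stringizes the argument into the /proc filename, and `atm_##name##_info` pastes it into the matching info-function identifier. A reduced, testable version of the same trick, with toy names:

```c
#include <assert.h>
#include <string.h>

/* Stand-in for one of the atm_*_info() callbacks. */
static int atm_demo_info(void) { return 7; }

/* Same #/## mechanics as CREATE_ENTRY above, shrunk to something that
 * can run standalone: bind the stringized name and the pasted function
 * pointer together. */
#define TOY_ENTRY(name) \
    const char *name##_fname = #name; \
    int (*name##_fn)(void) = atm_##name##_info

TOY_ENTRY(demo);   /* expands to demo_fname = "demo", demo_fn = atm_demo_info */
```

This is why each /proc entry name in the patch must match its info-function suffix exactly; a mismatch fails at compile time rather than at runtime.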
int __init atm_proc_init(void)
{
- struct proc_dir_entry *dev=NULL,*pvc=NULL,*svc=NULL,*arp=NULL,*lec=NULL;
+ struct proc_dir_entry *devices = NULL,*pvc = NULL,*svc = NULL;
+ struct proc_dir_entry *arp = NULL,*lec = NULL,*vc = NULL;
+
atm_proc_root = proc_mkdir("atm", &proc_root);
if (!atm_proc_root)
return -ENOMEM;
- dev = create_proc_entry("devices",0,atm_proc_root);
- if (!dev)
- goto cleanup;
- dev->data = atm_devices_info;
- dev->ops = &proc_spec_atm_inode_operations;
- pvc = create_proc_entry("pvc",0,atm_proc_root);
- if (!pvc)
- goto cleanup;
- pvc->data = atm_pvc_info;
- pvc->ops = &proc_spec_atm_inode_operations;
- svc = create_proc_entry("svc",0,atm_proc_root);
- if (!svc)
- goto cleanup;
- svc->data = atm_svc_info;
- svc->ops = &proc_spec_atm_inode_operations;
+ CREATE_ENTRY(devices);
+ CREATE_ENTRY(pvc);
+ CREATE_ENTRY(svc);
+ CREATE_ENTRY(vc);
#ifdef CONFIG_ATM_CLIP
- arp = create_proc_entry("arp",0,atm_proc_root);
- if (!arp)
- goto cleanup;
- arp->data = atm_arp_info;
- arp->ops = &proc_spec_atm_inode_operations;
+ CREATE_ENTRY(arp);
#endif
#if defined(CONFIG_ATM_LANE) || defined(CONFIG_ATM_LANE_MODULE)
- lec = create_proc_entry("lec",0,atm_proc_root);
- if (!lec)
- goto cleanup;
- lec->data = atm_lec_info;
- lec->ops = &proc_spec_atm_inode_operations;
+ CREATE_ENTRY(lec);
#endif
return 0;
+
cleanup:
- if (dev) remove_proc_entry("devices",atm_proc_root);
+ if (devices) remove_proc_entry("devices",atm_proc_root);
if (pvc) remove_proc_entry("pvc",atm_proc_root);
if (svc) remove_proc_entry("svc",atm_proc_root);
if (arp) remove_proc_entry("arp",atm_proc_root);
if (lec) remove_proc_entry("lec",atm_proc_root);
+ if (vc) remove_proc_entry("vc",atm_proc_root);
remove_proc_entry("atm",&proc_root);
return -ENOMEM;
}
/* Written 1995-1999 by Werner Almesberger, EPFL LRC/ICA */
-#include <linux/config.h>
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/atmdev.h>
#include <linux/skbuff.h>
#include <linux/mm.h>
-#ifdef CONFIG_MMU_HACKS
-#include <linux/mmuio.h>
-#include <linux/uio.h>
-#endif
-
#include "common.h"
#include "protocols.h"
-#include "tunable.h" /* tunable parameters */
#if 0
static void atm_pop_raw(struct atm_vcc *vcc,struct sk_buff *skb)
{
-#ifdef CONFIG_MMU_HACKS
- if (ATM_SKB(skb)->iovcnt)
- unlock_user(ATM_SKB(skb)->iovcnt,(struct iovec *) skb->data);
-#endif
DPRINTK("APopR (%d) %d -= %d\n",vcc->vci,vcc->tx_inuse,skb->truesize);
atomic_sub(skb->truesize+ATM_PDU_OVHD,&vcc->tx_inuse);
dev_kfree_skb(skb);
sk_free(sk);
return NULL;
}
+ sock_init_data(NULL,sk);
sk->destruct = atm_free_sock;
memset(vcc,0,sizeof(*vcc));
+ vcc->sk = sk;
if (nodev_vccs) nodev_vccs->prev = vcc;
vcc->prev = NULL;
vcc->next = nodev_vccs;
/* net/atm/signaling.c - ATM signaling */
-/* Written 1995-1999 by Werner Almesberger, EPFL LRC/ICA */
+/* Written 1995-2000 by Werner Almesberger, EPFL LRC/ICA */
#include <linux/errno.h> /* error codes */
#include <linux/atmsvc.h>
#include <linux/atmdev.h>
-#include "tunable.h"
#include "resources.h"
#include "signaling.h"
msg = (struct atmsvc_msg *) skb->data;
atomic_sub(skb->truesize+ATM_PDU_OVHD,&vcc->tx_inuse);
- DPRINTK("sigd_send %d (0x%lx)\n",(int) msg->type,msg->vcc);
- vcc = (struct atm_vcc *) msg->vcc;
+ DPRINTK("sigd_send %d (0x%lx)\n",(int) msg->type,
+ (unsigned long) msg->vcc);
+ vcc = *(struct atm_vcc **) &msg->vcc;
switch (msg->type) {
case as_okay:
vcc->reply = msg->reply;
vcc->reply = msg->reply;
break;
case as_indicate:
- vcc = (struct atm_vcc *) msg->listen_vcc;
+ vcc = *(struct atm_vcc **) &msg->listen_vcc;
DPRINTK("as_indicate!!!\n");
if (!vcc->backlog_quota) {
sigd_enq(0,as_reject,vcc,NULL,NULL);
void sigd_enq(struct atm_vcc *vcc,enum atmsvc_msg_type type,
- const struct atm_vcc *listen_vcc,const struct sockaddr_atmpvc *pvc,
+ struct atm_vcc *listen_vcc,const struct sockaddr_atmpvc *pvc,
const struct sockaddr_atmsvc *svc)
{
struct sk_buff *skb;
while (!(skb = alloc_skb(sizeof(struct atmsvc_msg),GFP_KERNEL)))
schedule();
msg = (struct atmsvc_msg *) skb_put(skb,sizeof(struct atmsvc_msg));
+ memset(msg,0,sizeof(*msg));
msg->type = type;
- msg->vcc = (unsigned long) vcc;
- msg->listen_vcc = (unsigned long) listen_vcc;
+ *(struct atm_vcc **) &msg->vcc = vcc;
+ *(struct atm_vcc **) &msg->listen_vcc = listen_vcc;
msg->reply = 0; /* other ISP applications may use this field */
if (vcc) {
msg->qos = vcc->qos;
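msg->vcc and msg->listen_vcc are integer-typed fields in struct atmsvc_msg, and the patch now writes the pointer through a cast of the field's address rather than converting it to `unsigned long`. A toy model of the round-trip; like the kernel code, it assumes a pointer fits in the field:

```c
#include <assert.h>

/* Toy message with an integer-typed cookie field, like msg->vcc. */
struct toy_msg {
    unsigned long vcc;
};

static int answer = 99;

/* Mirrors `*(struct atm_vcc **) &msg->vcc = vcc`: store the pointer
 * bit-for-bit, with no integer conversion in between. */
static void toy_store(struct toy_msg *msg, int *p)
{
    *(int **) &msg->vcc = p;
}

/* Read it back the same way, as sigd_send() does on receipt. */
static int *toy_load(struct toy_msg *msg)
{
    return *(int **) &msg->vcc;
}
```

The memset() added to sigd_enq() matters here too: zeroing the whole message keeps any field bytes not covered by the pointer deterministic.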
static struct atmdev_ops sigd_dev_ops = {
- NULL, /* no dev_close */
- NULL, /* no open */
- sigd_close, /* close */
- NULL, /* no ioctl */
- NULL, /* no getsockopt */
- NULL, /* no setsockopt */
- sigd_send, /* send */
- NULL, /* no sg_send */
- NULL, /* no send_oam */
- NULL, /* no phy_put */
- NULL, /* no phy_get */
- NULL, /* no feedback */
- NULL, /* no change_qos */
- NULL /* no free_rx_skb */
+ close: sigd_close,
+ send: sigd_send
};
/* net/atm/signaling.h - ATM signaling */
-/* Written 1995-1999 by Werner Almesberger, EPFL LRC/ICA */
+/* Written 1995-2000 by Werner Almesberger, EPFL LRC/ICA */
#ifndef NET_ATM_SIGNALING_H
void sigd_enq(struct atm_vcc *vcc,enum atmsvc_msg_type type,
- const struct atm_vcc *listen_vcc,const struct sockaddr_atmpvc *pvc,
+ struct atm_vcc *listen_vcc,const struct sockaddr_atmpvc *pvc,
const struct sockaddr_atmsvc *svc);
int sigd_attach(struct atm_vcc *vcc);
void signaling_init(void);
/* net/atm/svc.c - ATM SVC sockets */
-/* Written 1995-1999 by Werner Almesberger, EPFL LRC/ICA */
+/* Written 1995-2000 by Werner Almesberger, EPFL LRC/ICA */
#include <linux/string.h>
new_vcc->qos = msg->qos;
new_vcc->flags |= ATM_VF_HASQOS;
new_vcc->remote = msg->svc;
+ new_vcc->local = msg->local;
new_vcc->sap = msg->sap;
error = atm_connect(newsock,msg->pvc.sap_addr.itf,
msg->pvc.sap_addr.vpi,msg->pvc.sap_addr.vci);
+++ /dev/null
-/* net/atm/tunable.h - Tunable parameters of ATM support */
-
-/* Written 1995-1999 by Werner Almesberger, EPFL LRC/ICA */
-
-
-#ifndef NET_ATM_TUNABLE_H
-#define NET_ATM_TUNABLE_H
-
-#define ATM_RXBQ_DEF ( 64*1024) /* default RX buffer quota, in bytes */
-#define ATM_TXBQ_DEF ( 64*1024) /* default TX buffer quota, in bytes */
-#define ATM_RXBQ_MIN ( 1*1024) /* RX buffer minimum, in bytes */
-#define ATM_TXBQ_MIN ( 1*1024) /* TX buffer minimum, in bytes */
-#define ATM_RXBQ_MAX (1024*1024) /* RX buffer quota limit, in bytes */
-#define ATM_TXBQ_MAX (1024*1024) /* TX buffer quota limit, in bytes */
-
-#endif
o Hello messages should be generated for each primary address on each
interface.
+ o Add more information into /proc/net/decnet and finalise the format to
+ allow DECnet support in netstat.
+
+ o Make sure that returned connect messages are generated when they should
+ be, and that the correct error messages are sent too. Ensure that the
+ conninit receiving routine does not accept conninits with parameters
+ that we cannot handle.
+
* Steve Whitehouse: Now handles returned conninit frames.
* David S. Miller: New socket locking
* Steve Whitehouse: Fixed lockup when socket filtering was enabled.
+ * Paul Koning: Fix to push CC sockets into RUN when acks are
+ * received.
*/
/******************************************************************************
}
/*
- * Copy of sock_queue_rcv_skb (from sock.h) with out
+ * Copy of sock_queue_rcv_skb (from sock.h) without
* bh_lock_sock() (its already held when this is called) which
* also allows data and other data to be queued to a socket.
*/
if (sk != NULL) {
struct dn_scp *scp = &sk->protinfo.dn;
int ret;
- /* printk(KERN_DEBUG "dn_nsp_rx: Found a socket\n"); */
/* Reset backoff */
scp->nsp_rxtshift = 0;
} else {
int other = 1;
+ /* both data and ack frames can kick a CC socket into RUN */
+ if ((scp->state == DN_CC) && !sk->dead) {
+ scp->state = DN_RUN;
+ sk->state = TCP_ESTABLISHED;
+ sk->state_change(sk);
+ }
+
if ((cb->nsp_flags & 0x1c) == 0)
other = 0;
if (cb->nsp_flags == 0x04)
/*
* If we've some sort of data here then call a
* suitable routine for dealing with it, otherwise
- * the packet is an ack and can be discarded. All
- * data frames can also kick a CC socket into RUN.
+ * the packet is an ack and can be discarded.
*/
if ((cb->nsp_flags & 0x0c) == 0) {
- if ((scp->state == DN_CC) && !sk->dead) {
- scp->state = DN_RUN;
- sk->state = TCP_ESTABLISHED;
- sk->state_change(sk);
- }
-
if (scp->state != DN_RUN)
goto free_out;
panic("Failed to allocate DECnet route cache hash table\n");
printk(KERN_INFO
- "DECnet: Routing cache hash table of %u buckets, %dKbytes\n",
+ "DECnet: Routing cache hash table of %u buckets, %ldKbytes\n",
dn_rt_hash_mask,
- (dn_rt_hash_mask*sizeof(struct dn_rt_hash_bucket))/1024);
+ (long)(dn_rt_hash_mask*sizeof(struct dn_rt_hash_bucket))/1024);
dn_rt_hash_mask--;
for(i = 0; i <= dn_rt_hash_mask; i++) {
#else /* no CONFIG_RTNETLINK */
-#define dn_rt_msg_fib(event,f,z,tb_id,nlh,req)
+#define dn_rtmsg_fib(event,f,z,tb_id,nlh,req)
#endif /* CONFIG_RTNETLINK */
*
* PF_INET protocol family socket handler.
*
- * Version: $Id: af_inet.c,v 1.104 2000/01/18 08:24:14 davem Exp $
+ * Version: $Id: af_inet.c,v 1.106 2000/02/04 21:04:06 davem Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
sin->sin_family = AF_INET;
if (peer) {
- if (!sk->dport)
+ if (!sk->dport)
+ return -ENOTCONN;
+ if (((1<<sk->state)&(TCPF_CLOSE|TCPF_SYN_SENT)) && peer == 1)
return -ENOTCONN;
sin->sin_port = sk->dport;
sin->sin_addr.s_addr = sk->daddr;
*
* The Internet Protocol (IP) output module.
*
- * Version: $Id: ip_output.c,v 1.78 2000/01/16 05:11:22 davem Exp $
+ * Version: $Id: ip_output.c,v 1.79 2000/02/08 21:27:11 davem Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
return;
daddr = ipc.addr = rt->rt_src;
- ipc.opt = &replyopts.opt;
+ ipc.opt = NULL;
+
+ if (replyopts.opt.optlen) {
+ ipc.opt = &replyopts.opt;
+
+ if (ipc.opt->srr)
+ daddr = replyopts.opt.faddr;
+ }
- if (ipc.opt->srr)
- daddr = replyopts.opt.faddr;
if (ip_route_output(&rt, daddr, rt->rt_spec_dst, RT_TOS(skb->nh.iph->tos), 0))
return;
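The fix guards the srr dereference: ipc.opt starts out NULL and only points at replyopts.opt when options were actually present (optlen != 0), so a reply to an option-less packet no longer consults unparsed option fields. The control flow, as a trimmed standalone sketch (field names follow the kernel, the structs are stand-ins):

```c
#include <assert.h>
#include <stddef.h>

/* Trimmed stand-in for struct ip_options. */
struct toy_opts {
    int optlen;       /* 0 when the incoming packet carried no options */
    int srr;          /* nonzero when a source route was recorded */
    unsigned faddr;   /* first hop of the (reversed) source route */
};

/* Pick the reply destination the way the fixed code does: default to
 * the route source, and only redirect to faddr when options exist and
 * contain a source route. */
static unsigned toy_reply_daddr(unsigned rt_src,
                                const struct toy_opts *replyopts)
{
    unsigned daddr = rt_src;
    const struct toy_opts *opt = NULL;

    if (replyopts->optlen) {
        opt = replyopts;
        if (opt->srr)
            daddr = replyopts->faddr;
    }
    return daddr;
}
```

Before the fix, the srr test ran unconditionally against whatever happened to be in the unparsed replyopts.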
*
* Implementation of the Transmission Control Protocol(TCP).
*
- * Version: $Id: tcp.c,v 1.161 2000/01/31 01:21:16 davem Exp $
+ * Version: $Id: tcp.c,v 1.163 2000/02/08 21:27:13 davem Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
{
struct tcp_opt *tp = &(sk->tp_pinfo.af_tcp);
struct sk_buff *skb;
- int time_to_ack;
-
+ int time_to_ack = 0;
+
/* NOTE! The socket must be locked, so that we don't get
* a messed-up receive queue.
*/
tcp_eat_skb(sk, skb);
}
- /* Delayed ACKs frequently hit locked sockets during bulk receive. */
- time_to_ack = tp->ack.blocked && tp->ack.pending;
-#ifdef CONFIG_TCP_MORE_COARSE_ACKS
- if (tp->ack.pending &&
- (tp->rcv_nxt - tp->rcv_wup) > tp->ack.rcv_mss)
- time_to_ack = 1;
+ if (tp->ack.pending) {
+ /* Delayed ACKs frequently hit locked sockets during bulk receive. */
+ if (tp->ack.blocked
+#ifdef TCP_MORE_COARSE_ACKS
+ /* Once-per-two-segments ACK was not sent by tcp_input.c */
+ || tp->rcv_nxt - tp->rcv_wup > tp->ack.rcv_mss
#endif
+ /*
+ * If this read emptied the read buffer, we send an ACK when:
+ *
+ * -- The ATO estimator has diverged. In this case it is
+ * useless to delay the ACK; it will miss in any case.
+ *
+ * -- We have not ACKed 8 segments, regardless of their
+ * size. Linux senders allocate a full-sized frame even for
+ * one-byte packets, so the default queue for MTU=8K can hold
+ * only 8 packets. Note that no workaround other than
+ * counting packets is possible. If the sender selected
+ * a small sndbuf or has a larger MTU, the lockup will still
+ * occur. Well, not a lockup, but a 10-20msec gap.
+ * It is essentially a dead lockup for 1Gb ethernet
+ * and loopback :-). The value 8 covers all reasonable
+ * cases, and we may now receive packets of any size
+ * at the maximal possible rate.
+ */
+ || (copied > 0 &&
+ (tp->ack.ato >= TCP_DELACK_MAX || tp->ack.rcv_segs > 7) &&
+ !tp->ack.pingpong &&
+ atomic_read(&sk->rmem_alloc) == 0)) {
+ time_to_ack = 1;
+ }
+ }
/* We send an ACK if we can now advertise a non-zero window
* which has been raised "significantly".
__u32 rcv_window_now = tcp_receive_window(tp);
__u32 new_window = __tcp_select_window(sk);
- /* We won't be raising the window any further than
- * the window-clamp allows. Our window selection
- * also keeps things a nice multiple of MSS. These
- * checks are necessary to prevent spurious ACKs
- * which don't advertize a larger window.
+ /* Send an ACK now if this read freed lots of space
+ * in our buffer. new_window is the window we could
+ * advertise now; we do so if it is not less than the
+ * current one. "Lots" means "at least twice" here.
*/
- if((new_window && (new_window >= rcv_window_now * 2)) &&
- ((rcv_window_now + tp->ack.rcv_mss) <= tp->window_clamp))
+ if(new_window && new_window >= 2*rcv_window_now)
time_to_ack = 1;
}
if (time_to_ack)
copied += chunk;
}
}
-#ifdef CONFIG_TCP_MORE_COARSE_ACKS
- if (tp->ack.pending &&
- (tp->rcv_nxt - tp->rcv_wup) > tp->ack.rcv_mss)
- tcp_send_ack(sk);
-#endif
}
continue;
skb->used = 1;
tcp_eat_skb(sk, skb);
-#ifdef CONFIG_TCP_LESS_COARSE_ACKS
+#ifdef TCP_LESS_COARSE_ACKS
/* Possible improvement. When sender is faster than receiver,
* traffic looks like: fill window ... wait for window open ...
* fill window. We lose at least one rtt, because call
*
* Implementation of the Transmission Control Protocol(TCP).
*
- * Version: $Id: tcp_input.c,v 1.186 2000/01/31 20:26:13 davem Exp $
+ * Version: $Id: tcp_input.c,v 1.188 2000/02/08 21:27:14 davem Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
tp->ack.rcv_mss = len;
tp->ack.last_seg_size = len;
}
-
-#if 0
- /* Tiny-grams with PSH set artifically deflate our
- * ato measurement.
- *
- * Mmm... I copied this test from tcp_remember_ack(), but
- * I did not understand this. Is it to speedup nagling sender?
- * It does not because classic (non-Minshall) sender nagles
- * guided by not-acked frames not depending on size.
- * And it does not help NODELAY sender, because latency
- * is too high in any case. The only result is timer trashing
- * and redundant ACKs. Grr... Seems, I missed something. --ANK
- *
- * Let me to comment out this yet... TCP should work
- * perfectly without this. --ANK
- */
- if (len < (tp->ack.rcv_mss >> 1) && skb->h.th->psh)
- tp->ack.ato = TCP_ATO_MIN;
-#endif
}
}
tcp_measure_rcv_mss(tp, skb);
tp->ack.pending = 1;
+ tp->ack.rcv_segs++;
now = tcp_time_stamp;
} else {
if (m <= 0)
m = TCP_ATO_MIN/2;
- tp->ack.ato = (tp->ack.ato >> 1) + m;
+ if (m <= tp->ack.ato)
+ tp->ack.ato = (tp->ack.ato >> 1) + m;
}
}
tp->ack.lrcvtime = now;
extern __inline__ void
tcp_replace_ts_recent(struct sock *sk, struct tcp_opt *tp, u32 seq)
{
- if (!after(seq, tp->last_ack_sent)) {
+ if (!after(seq, tp->rcv_wup)) {
/* PAWS bug workaround wrt. ACK frames, the PAWS discard
* extra check below makes sure this can only happen
* for pure ACK frames. -DaveM
if(atomic_read(&sk->rmem_alloc) < (sk->rcvbuf << 1))
return 0;
+ NET_INC_STATS_BH(RcvPruned);
+
/* Massive buffer overcommit. */
return -1;
}
goto slow_path;
/* Predicted packet is in window by definition.
- * seq == rcv_nxt and last_ack_sent <= rcv_nxt.
- * Hence, check seq<=last_ack_sent reduces to:
+ * seq == rcv_nxt and rcv_wup <= rcv_nxt.
+ * Hence, check seq<=rcv_wup reduces to:
*/
- if (tp->rcv_nxt == tp->last_ack_sent) {
+ if (tp->rcv_nxt == tp->rcv_wup) {
tp->ts_recent = tp->rcv_tsval;
tp->ts_recent_stamp = xtime.tv_sec;
}
tcp_event_data_recv(tp, skb);
-#if 1/*def CONFIG_TCP_MORE_COARSE_ACKS*/
+#ifdef TCP_MORE_COARSE_ACKS
if (eaten) {
if (tcp_in_quickack_mode(tp)) {
tcp_send_ack(sk);
newtp->copied_seq = req->rcv_isn + 1;
newtp->saw_tstamp = 0;
- newtp->last_ack_sent = req->rcv_isn + 1;
newtp->probes_out = 0;
newtp->syn_seq = req->rcv_isn;
tp->ack.pending = 1;
tp->ack.lrcvtime = tcp_time_stamp;
tcp_enter_quickack_mode(tp);
- tp->ack.pingpong = 1;
tp->ack.ato = TCP_ATO_MIN;
tcp_reset_xmit_timer(sk, TCP_TIME_DACK, TCP_DELACK_MIN);
goto discard;
*
* Implementation of the Transmission Control Protocol(TCP).
*
- * Version: $Id: tcp_ipv4.c,v 1.198 2000/01/31 01:21:20 davem Exp $
+ * Version: $Id: tcp_ipv4.c,v 1.199 2000/02/08 21:27:17 davem Exp $
*
* IPv4 specific functions
*
tcp_parse_options(NULL, th, &tp, want_cookie);
+ if (tp.saw_tstamp && tp.rcv_tsval == 0) {
+ /* Some OSes (unknown ones, but I see them on a web server
+ * containing information of interest only to Windows
+ * users) do not send their timestamp in SYN. This is the
+ * easy case: we simply do not advertise TS support.
+ */
+ tp.saw_tstamp = 0;
+ tp.tstamp_ok = 0;
+ }
+
tcp_openreq_init(req, &tp, skb);
req->af.v4_req.loc_addr = daddr;
*
* Implementation of the Transmission Control Protocol(TCP).
*
- * Version: $Id: tcp_output.c,v 1.120 2000/01/31 01:21:22 davem Exp $
+ * Version: $Id: tcp_output.c,v 1.121 2000/02/08 21:27:19 davem Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
{
struct tcp_opt *tp = &(sk->tp_pinfo.af_tcp);
- tp->last_ack_sent = tp->rcv_nxt;
tcp_dec_quickack_mode(tp);
tp->ack.pending = 0;
+ tp->ack.rcv_segs = 0;
tcp_clear_xmit_timer(sk, TCP_TIME_DACK);
}
/* Bound mss with half of window */
if (tp->max_window && mss_now > (tp->max_window>>1))
- mss_now = max((tp->max_window>>1), 1);
+ mss_now = max((tp->max_window>>1), 68 - tp->tcp_header_len);
/* And store cached results */
tp->pmtu_cookie = pmtu;
if (tp->window_clamp < mss)
mss = tp->window_clamp;
- if ((free_space < (min((int)tp->window_clamp, tcp_full_space(sk)) / 2)) &&
- (free_space < ((int) (mss/2)))) {
- window = 0;
-
+ if (free_space < min((int)tp->window_clamp, tcp_full_space(sk)) / 2) {
/* THIS IS a _VERY_ GOOD PLACE to play with window clamp:
* if free_space becomes suspiciously low, verify the ratio
* rmem_alloc/(rcv_nxt - copied_seq); if rmem_alloc is about
* to run over rcvbuf*2, shrink window_clamp. This would
* eliminate most prune events! Very simple; it is the next
* thing to do. --ANK
+ *
+ * Provided we found a way to raise it back... --ANK
*/
- } else {
- /* Get the largest window that is a nice multiple of mss.
- * Window clamp already applied above.
- * If our current window offering is within 1 mss of the
- * free space we just keep it. This prevents the divide
- * and multiply from happening most of the time.
- * We also don't do any window rounding when the free space
- * is too small.
- */
- window = tp->rcv_wnd;
- if ((((int) window) <= (free_space - ((int) mss))) ||
- (((int) window) > free_space))
- window = (((unsigned int) free_space)/mss)*mss;
+ tp->ack.quick = 0;
+
+ if (free_space < ((int) (mss/2)))
+ return 0;
}
+
+ /* Get the largest window that is a nice multiple of mss.
+ * Window clamp already applied above.
+ * If our current window offering is within 1 mss of the
+ * free space we just keep it. This prevents the divide
+ * and multiply from happening most of the time.
+ * We also don't do any window rounding when the free space
+ * is too small.
+ */
+ window = tp->rcv_wnd;
+ if ((((int) window) <= (free_space - ((int) mss))) ||
+ (((int) window) > free_space))
+ window = (((unsigned int) free_space)/mss)*mss;
+
return window;
}
unsigned long timeout;
/* Stay within the limit we were given */
- timeout = tp->ack.ato;
- timeout += jiffies + (timeout>>2);
+ timeout = jiffies + tp->ack.ato;
/* Use new timeout only if there wasn't a older one earlier. */
spin_lock_bh(&sk->timer_lock);
buff = alloc_skb(MAX_TCP_HEADER + 15, GFP_ATOMIC);
if (buff == NULL) {
tp->ack.pending = 1;
+ tp->ack.ato = TCP_ATO_MAX;
tcp_reset_xmit_timer(sk, TCP_TIME_DACK, TCP_DELACK_MAX);
return;
}
*
* Implementation of the Transmission Control Protocol(TCP).
*
- * Version: $Id: tcp_timer.c,v 1.71 2000/01/18 08:24:19 davem Exp $
+ * Version: $Id: tcp_timer.c,v 1.72 2000/02/08 21:27:20 davem Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
}
if (tp->ack.pending) {
- /* Delayed ACK missed: inflate ATO, leave pingpong mode */
- tp->ack.ato = min(tp->ack.ato<<1, TCP_ATO_MAX);
- tp->ack.pingpong = 0;
+ if (!tp->ack.pingpong) {
+ /* Delayed ACK missed: inflate ATO. */
+ tp->ack.ato = min(tp->ack.ato<<1, TCP_ATO_MAX);
+ } else {
+ /* Delayed ACK missed: leave pingpong mode and
+ * deflate ATO.
+ */
+ tp->ack.pingpong = 0;
+ tp->ack.ato = TCP_ATO_MIN;
+ }
tcp_send_ack(sk);
NET_INC_STATS_BH(DelayedACKs);
}
*
* Adapted from linux/net/ipv4/af_inet.c
*
- * $Id: af_inet6.c,v 1.52 2000/01/18 08:24:21 davem Exp $
+ * $Id: af_inet6.c,v 1.53 2000/02/04 21:04:08 davem Exp $
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
if (peer) {
if (!sk->dport)
return -ENOTCONN;
+ if (((1<<sk->state)&(TCPF_CLOSE|TCPF_SYN_SENT)) && peer == 1)
+ return -ENOTCONN;
sin->sin6_port = sk->dport;
memcpy(&sin->sin6_addr, &sk->net_pinfo.af_inet6.daddr,
sizeof(struct in6_addr));
* Authors:
* Pedro Roque <roque@di.fc.ul.pt>
*
- * $Id: mcast.c,v 1.29 2000/01/18 08:24:21 davem Exp $
+ * $Id: mcast.c,v 1.30 2000/02/08 21:27:23 davem Exp $
*
* Based on linux/ipv4/igmp.c and linux/ipv4/ip_sockglue.c
*
}
}
read_unlock_bh(&idev->lock);
+ in6_dev_put(idev);
}
- in6_dev_put(idev);
return 0;
}
*
* PACKET - implements raw packet sockets.
*
- * Version: $Id: af_packet.c,v 1.28 2000/01/24 23:35:59 davem Exp $
+ * Version: $Id: af_packet.c,v 1.30 2000/02/01 12:38:30 freitag Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
struct sk_buff *skb;
int copied, err;
+ err = -EINVAL;
+ if (flags & ~(MSG_PEEK|MSG_DONTWAIT|MSG_TRUNC))
+ goto out;
+
#if 0
/* What error should we return now? EUNATTACH? */
if (sk->protinfo.af_packet->ifindex < 0)
/* net/sched/sch_atm.c - ATM VC selection "queueing discipline" */
-/* Written 1998,1999 by Werner Almesberger, EPFL ICA */
+/* Written 1998-2000 by Werner Almesberger, EPFL ICA */
#include <linux/config.h>
#define PRIV(sch) ((struct atm_qdisc_data *) (sch)->data)
+#define VCC2FLOW(vcc) ((struct atm_flow_data *) ((vcc)->user_back))
struct atm_flow_data {
struct Qdisc *q; /* FIFO, TBF, etc. */
struct tcf_proto *filter_list;
struct atm_vcc *vcc; /* VCC; NULL if VCC is closed */
+ void (*old_pop)(struct atm_vcc *vcc,struct sk_buff *skb); /* chaining */
struct socket *sock; /* for closing */
u32 classid; /* x:y type ID */
int ref; /* reference count */
static unsigned long atm_tc_get(struct Qdisc *sch,u32 classid)
{
- struct atm_qdisc_data *p = PRIV(sch);
+ struct atm_qdisc_data *p __attribute__((unused)) = PRIV(sch);
struct atm_flow_data *flow;
DPRINTK("atm_tc_get(sch %p,[qdisc %p],classid %x)\n",sch,p,classid);
}
if (flow->sock) {
DPRINTK("atm_tc_put: f_count %d\n",file_count(flow->sock->file));
+ flow->vcc->pop = flow->old_pop;
sockfd_put(flow->sock);
}
if (flow->excess) atm_tc_put(sch,(unsigned long) flow->excess);
}
+static void sch_atm_pop(struct atm_vcc *vcc,struct sk_buff *skb)
+{
+ VCC2FLOW(vcc)->old_pop(vcc,skb);
+ mark_bh(NET_BH); /* may allow to send more */
+}
+
+
static int atm_tc_change(struct Qdisc *sch, u32 classid, u32 parent,
struct rtattr **tca, unsigned long *arg)
{
DPRINTK("atm_tc_change: qdisc %p\n",flow->q);
flow->sock = sock;
flow->vcc = ATM_SD(sock); /* speedup */
+ flow->vcc->user_back = flow;
DPRINTK("atm_tc_change: vcc %p\n",flow->vcc);
+ flow->old_pop = flow->vcc->pop;
+ flow->vcc->pop = sch_atm_pop;
flow->classid = classid;
flow->ref = 1;
flow->excess = excess;
* little bursts. Otherwise, it may ... @@@
*/
while ((skb = flow->q->dequeue(flow->q))) {
+ if (!atm_may_send(flow->vcc,skb->truesize)) {
+ flow->q->ops->requeue(skb,flow->q);
+ break;
+ }
sch->q.qlen--;
D2PRINTK("atm_tc_dequeue: sending on class %p\n",flow);
/* remove any LL header somebody else has attached */
}
+static int atm_tc_requeue(struct sk_buff *skb,struct Qdisc *sch)
+{
+ struct atm_qdisc_data *p = PRIV(sch);
+ int ret;
+
+ D2PRINTK("atm_tc_requeue(skb %p,sch %p,[qdisc %p])\n",skb,sch,p);
+ ret = p->link.q->ops->requeue(skb,p->link.q);
+ if (!ret) sch->q.qlen++;
+ else {
+ sch->stats.drops++;
+ p->link.stats.drops++;
+ }
+ return ret;
+}
+
+
static int atm_tc_drop(struct Qdisc *sch)
{
struct atm_qdisc_data *p = PRIV(sch);
atm_tc_enqueue, /* enqueue */
atm_tc_dequeue, /* dequeue */
- atm_tc_enqueue, /* requeue; we're cheating a little */
+ atm_tc_requeue, /* requeue */
atm_tc_drop, /* drop */
atm_tc_init, /* init */
goto out;
}
+ sock->file = file;
file->f_op = &socket_file_ops;
file->f_mode = 3;
file->f_flags = O_RDWR;
file->f_pos = 0;
fd_install(fd, file);
- sock->file = file;
}
out:
goto out_release;
if (upeer_sockaddr) {
- if(newsock->ops->getname(newsock, (struct sockaddr *)address, &len, 1)<0) {
+ if(newsock->ops->getname(newsock, (struct sockaddr *)address, &len, 2)<0) {
err = -ECONNABORTED;
goto out_release;
}
goto out_release;
}
- /* File flags are inherited via accept(). It looks silly, but we
- * have to be compatible with another OSes.
- */
+ /* File flags are not inherited via accept(), unlike on other OSes. */
+
if ((err = sock_map_fd(newsock)) < 0)
goto out_release;