S: Australia
N: Jeffrey A. Uphoff
-E: juphoff@nrao.edu
+E: juphoff@transmeta.com
E: jeff.uphoff@linux.org
P: 1024/9ED505C5 D7 BB CA AA 10 45 40 1B 16 19 0A C0 38 A0 3E CB
D: Linux Security/Alert mailing lists' moderator/maintainer.
D: PAM S/Key module developer.
D: 'dip' contributor.
D: AIPS port, astronomical community support.
-S: National Radio Astronomy Observatory
-S: 520 Edgemont Road
-S: Charlottesville, Virginia 22903
+S: Transmeta Corporation
+S: 2540 Mission College Blvd.
+S: Santa Clara, CA 95054
S: USA
N: Matthias Urlichs
- Bash 1.14.7 ; bash -version
- Ncpfs 2.2.0 ; ncpmount -v
- Pcmcia-cs 3.0.7 ; cardmgr -V
-- PPP 2.3.5 ; pppd --version
+- PPP 2.3.9 ; pppd --version
- Util-linux 2.9i ; chsh -v
Upgrade notes
PPP
===
- Due to changes in the routing code, those of you using PPP
-networking will need to upgrade your pppd.
+ Due to changes in the PPP driver and routing code, those of you
+using PPP networking will need to upgrade your pppd.
iBCS
====
PPP
===
-The 2.3.5 release:
-ftp://cs.anu.edu.au/pub/software/ppp/ppp-2.3.5.tar.gz
+The 2.3.9 release:
+ftp://cs.anu.edu.au/pub/software/ppp/ppp-2.3.9.tar.gz
IP Chains
=========
end of the link as well. It's good enough, for example, to run IP
over the async ports of a Camtec JNT Pad. If unsure, say N.
-PPP (point-to-point) support
+PPP (point-to-point protocol) support
CONFIG_PPP
PPP (Point to Point Protocol) is a newer and better SLIP. It serves
the same purpose: sending Internet traffic over telephone (and other
- serial) lines. Ask your access provider if they support it, because
- otherwise you can't use it (not quite true any more: the free
- program SLiRP can emulate a PPP line if you just have a regular dial
- up shell account on some UNIX computer; get it via FTP (user:
- anonymous) from
- ftp://metalab.unc.edu/pub/Linux/system/network/serial/). Note that
- you don't need "PPP support" if you just want to run term (term is a
- program which gives you almost full Internet connectivity if you
- have a regular dial up shell account on some Internet connected UNIX
- computer. Read
- http://www.bart.nl/~patrickr/term-howto/Term-HOWTO.html (to browse
- the WWW, you need to have access to a machine on the Internet that
- has a program like lynx or netscape)).
+ serial) lines. Most ISPs these days support PPP rather than SLIP.
To use PPP, you need an additional program called pppd as described
in Documentation/networking/ppp.txt and in the PPP-HOWTO. If you upgrade
from an older kernel, you might need to upgrade pppd as well. The
PPP option enlarges your kernel by about 16 KB.
+ Almost always, if you answer Y or M to this question, you should
+ give the same answer to the next question, about PPP support for
+ async serial ports.
+
This driver is also available as a module ( = code which can be
inserted in and removed from the running kernel whenever you want).
If you said Y to "Version information on all symbols" above, then
you cannot compile the PPP driver into the kernel; you can then only
- compile it as a module. The module will be called ppp.o. If you want
- to compile it as a module, say M here and read
+ compile it as a module. The module will be called ppp_generic.o. If
+ you want to compile it as a module, say M here and read
Documentation/modules.txt as well as
- Documentation/networking/net-modules.txt. Note that, no matter what
- you do, the BSD compression code (used to compress the IP packets
- sent over the serial line; has to be supported at the other end as
- well) will always be compiled as a module; it is called bsd_comp.o
- and will show up in the directory modules once you have said "make
- modules". If unsure, say N.
+ Documentation/networking/net-modules.txt.
+
+PPP support for async serial ports
+CONFIG_PPP_ASYNC
+ Say Y (or M) here if you want to be able to use PPP over standard
+ asynchronous serial ports, such as COM1 or COM2 on a PC. If you use
+ a modem (not a synchronous or ISDN modem) to contact your ISP, you
+ need this option.
+
+ This code is also available as a module (code which can be inserted
+ into and removed from the running kernel). If you want to compile
+ it as a module, say M here and read Documentation/modules.txt.
+
+PPP Deflate compression
+CONFIG_PPP_DEFLATE
+ Support for the Deflate compression method for PPP, which uses the
+ Deflate algorithm (the same algorithm that gzip uses) to compress
+ each PPP packet before it is sent over the wire. The peer (the
+ machine at the other end of the PPP link, usually your ISP) has to
+ support the Deflate compression method as well for this to be
+ useful.
+
+ This code is also available as a module (code which can be inserted
+ into and removed from the running kernel). If you want to compile
+ it as a module, say M here and read Documentation/modules.txt.
+
+PPP BSD-Compress compression
+CONFIG_PPP_BSDCOMP
+ Support for the BSD-Compress compression method for PPP, which uses
+ the LZW compression method to compress each PPP packet before it is
+ sent over the wire. The peer (the other end of the PPP link) has to
+ support the BSD-Compress compression method as well for this to be
+ useful. The PPP Deflate compression method is preferable to
+ BSD-Compress, because it compresses better and is patent-free.
+
+ Note that the BSD compression code will always be compiled as a
+ module; it is called bsd_comp.o and will show up in the directory
+ modules once you have said "make modules". If unsure, say N.
Wireless LAN (non-hamradio)
CONFIG_NET_RADIO
0xffffffff);
pcibios_read_config_dword(bus->number, dev->devfn, off, &base);
if (!base) {
- /* this base-address register is unused */
- dev->base_address[idx] = 0;
+ /* This base-address register is unused. */
+ dev->resource[idx].start = 0;
+ dev->resource[idx].end = 0;
+ dev->resource[idx].flags = 0;
continue;
}
new_io_reset(dev, off, orig_base);
handle = PCI_HANDLE(bus->number) | base | 1;
- dev->base_address[idx] = handle;
+ dev->resource[idx].start
+ = handle & PCI_BASE_ADDRESS_IO_MASK;
+ dev->resource[idx].end
+ = dev->resource[idx].start + size - 1;
+ dev->resource[idx].flags
+ = handle & ~PCI_BASE_ADDRESS_IO_MASK;
DBG_DEVS(("layout_dev: dev 0x%x IO @ 0x%lx (0x%x)\n",
dev->device, handle, size));
new_io_reset(dev, off, orig_base);
handle = PCI_HANDLE(bus->number) | base;
- dev->base_address[idx] = handle;
+ dev->resource[idx].start
+ = handle & PCI_BASE_ADDRESS_MEM_MASK;
+ dev->resource[idx].end
+ = dev->resource[idx].start + size - 1;
+ dev->resource[idx].flags
+ = handle & ~PCI_BASE_ADDRESS_MEM_MASK;
/*
* Currently for 64-bit cards, we simply do the usual
new_io_reset (dev, off+4, orig_base2);
}
/* Bypass hi reg in the loop. */
- dev->base_address[++idx] = 0;
+ dev->resource[++idx].start = 0;
+ dev->resource[idx].end = 0;
+ dev->resource[idx].flags = 0;
printk("bios32 WARNING: "
"handling 64-bit device in "
extern asmlinkage void entInt(void);
\f
-/*
- * Process bootcommand SMP options, like "nosmp" and "maxcpus=".
- */
-void __init
-smp_setup(char *str, int *ints)
+static int __init nosmp(char *str)
{
- if (ints && ints[0] > 0)
- max_cpus = ints[1];
- else
- max_cpus = 0;
+ max_cpus = 0;
+ return 1;
}
+__setup("nosmp", nosmp);
+
+static int __init maxcpus(char *str)
+{
+ get_option(&str, &max_cpus);
+ return 1;
+}
+
+__setup("maxcpus", maxcpus);
+
+
/*
* Called by both boot and secondaries to move global data into
* per-processor storage.
/* Setup the scheduler for this processor. */
init_idle();
+ /* ??? This should be in init_idle. */
+ atomic_inc(&init_mm.mm_count);
+ current->active_mm = &init_mm;
+
/* Get our local ticker going. */
smp_setup_percpu_timer(cpuid);
init_idle();
+ /* ??? This should be in init_idle. */
+ atomic_inc(&init_mm.mm_count);
+ current->active_mm = &init_mm;
+
/* Nothing to do on a UP box, or when told not to. */
if (smp_num_probed == 1 || max_cpus == 0) {
printk(KERN_INFO "SMP mode deactivated.\n");
ipi_flush_tlb_mm(void *x)
{
struct mm_struct *mm = (struct mm_struct *) x;
- if (mm == current->mm)
+ if (mm == current->active_mm)
flush_tlb_current(mm);
}
void
flush_tlb_mm(struct mm_struct *mm)
{
- if (mm == current->mm) {
+ if (mm == current->active_mm) {
flush_tlb_current(mm);
- if (atomic_read(&mm->count) == 1)
+ if (atomic_read(&mm->mm_users) <= 1)
return;
} else
flush_tlb_other(mm);
ipi_flush_tlb_page(void *x)
{
struct flush_tlb_page_struct *data = (struct flush_tlb_page_struct *)x;
- if (data->mm == current->mm)
+ if (data->mm == current->active_mm)
flush_tlb_current_page(data->mm, data->vma, data->addr);
}
struct flush_tlb_page_struct data;
struct mm_struct *mm = vma->vm_mm;
- if (mm == current->mm) {
+ if (mm == current->active_mm) {
flush_tlb_current_page(mm, vma, addr);
- if (atomic_read(&mm->count) == 1)
+ if (atomic_read(&mm->mm_users) <= 1)
return;
} else
flush_tlb_other(mm);
OBJS = __divqu.o __remqu.o __divlu.o __remlu.o memset.o memcpy.o io.o \
checksum.o csum_partial_copy.o strlen.o \
strcat.o strcpy.o strncat.o strncpy.o stxcpy.o stxncpy.o \
- strchr.o strrchr.o \
+ strchr.o strrchr.o memchr.o \
copy_user.o clear_user.o strncpy_from_user.o strlen_user.o \
csum_ipv6_magic.o strcasecmp.o semaphore.o \
srm_dispatch.o srm_fixup.o srm_puts.o srm_printk.o
--- /dev/null
+/* Copyright (C) 1996 Free Software Foundation, Inc.
+ This file is part of the GNU C Library.
+ Contributed by David Mosberger (davidm@cs.arizona.edu).
+
+ The GNU C Library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Library General Public License as
+ published by the Free Software Foundation; either version 2 of the
+ License, or (at your option) any later version.
+
+ The GNU C Library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Library General Public License for more details.
+
+ You should have received a copy of the GNU Library General Public
+ License along with the GNU C Library; see the file COPYING.LIB. If not,
+ write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ Boston, MA 02111-1307, USA. */
+
+/* Finds characters in a memory area. Optimized for the Alpha:
+
+ - memory accessed as aligned quadwords only
+ - uses cmpbge to compare 8 bytes in parallel
+ - does binary search to find 0 byte in last
+ quadword (HAKMEM needed 12 instructions to
+ do this instead of the 9 instructions that
+ binary search needs).
+
+For correctness consider that:
+
+ - only minimum number of quadwords may be accessed
+ - the third argument is an unsigned long
+*/
+
+ .set noreorder
+ .set noat
+
+ .globl memchr
+ .ent memchr
+memchr:
+ .frame $30,0,$26,0
+ .prologue 0
+
+ # Hack -- if someone passes in (size_t)-1, hoping to just
+ # search til the end of the address space, we will overflow
+ # below when we find the address of the last byte. Given
+ # that we will never have a 56-bit address space, cropping
+ # the length is the easiest way to avoid trouble.
+ zap $18, 0x80, $5 #-e0 :
+
+ beq $18, $not_found # .. e1 :
+ ldq_u $1, 0($16) # e1 : load first quadword
+ insbl $17, 1, $2 # .. e0 : $2 = 000000000000ch00
+ and $17, 0xff, $17 #-e0 : $17 = 00000000000000ch
+ cmpult $18, 9, $4 # .. e1 :
+ or $2, $17, $17 # e0 : $17 = 000000000000chch
+ lda $3, -1($31) # .. e1 :
+ sll $17, 16, $2 #-e0 : $2 = 00000000chch0000
+ addq $16, $5, $5 # .. e1 :
+ or $2, $17, $17 # e1 : $17 = 00000000chchchch
+ unop # :
+ sll $17, 32, $2 #-e0 : $2 = chchchch00000000
+ or $2, $17, $17 # e1 : $17 = chchchchchchchch
+ extql $1, $16, $7 # e0 :
+ beq $4, $first_quad # .. e1 :
+
+ ldq_u $6, -1($5) #-e1 : eight or less bytes to search
+ extqh $6, $16, $6 # .. e0 :
+ mov $16, $0 # e0 :
+ or $7, $6, $1 # .. e1 : $1 = quadword starting at $16
+
+ # Deal with the case where at most 8 bytes remain to be searched
+ # in $1. E.g.:
+ # $18 = 6
+ # $1 = ????c6c5c4c3c2c1
+$last_quad:
+ negq $18, $6 #-e0 :
+ xor $17, $1, $1 # .. e1 :
+ srl $3, $6, $6 # e0 : $6 = mask of $18 bits set
+ cmpbge $31, $1, $2 # .. e1 :
+ and $2, $6, $2 #-e0 :
+ beq $2, $not_found # .. e1 :
+
+$found_it:
+ # Now, determine which byte matched:
+ negq $2, $3 # e0 :
+ and $2, $3, $2 # e1 :
+
+ and $2, 0x0f, $1 #-e0 :
+ addq $0, 4, $3 # .. e1 :
+ cmoveq $1, $3, $0 # e0 :
+
+ addq $0, 2, $3 # .. e1 :
+ and $2, 0x33, $1 #-e0 :
+ cmoveq $1, $3, $0 # .. e1 :
+
+ and $2, 0x55, $1 # e0 :
+ addq $0, 1, $3 # .. e1 :
+ cmoveq $1, $3, $0 #-e0 :
+
+$done: ret # .. e1 :
+
+ # Deal with the case where $18 > 8 bytes remain to be
+ # searched. $16 may not be aligned.
+ .align 4
+$first_quad:
+ andnot $16, 0x7, $0 #-e1 :
+ insqh $3, $16, $2 # .. e0 : $2 = 0000ffffffffffff ($16<0:2> ff)
+ xor $1, $17, $1 # e0 :
+ or $1, $2, $1 # e1 : $1 = ====ffffffffffff
+ cmpbge $31, $1, $2 #-e0 :
+ bne $2, $found_it # .. e1 :
+
+ # At least one byte left to process.
+
+ ldq $1, 8($0) # e0 :
+ subq $5, 1, $18 # .. e1 :
+ addq $0, 8, $0 #-e0 :
+
+ # Make $18 point to last quad to be accessed (the
+ # last quad may or may not be partial).
+
+ andnot $18, 0x7, $18 # .. e1 :
+ cmpult $0, $18, $2 # e0 :
+ beq $2, $final # .. e1 :
+
+ # At least two quads remain to be accessed.
+
+ subq $18, $0, $4 #-e0 : $4 <- nr quads to be processed
+ and $4, 8, $4 # e1 : odd number of quads?
+ bne $4, $odd_quad_count # e1 :
+
+ # At least three quads remain to be accessed
+
+ mov $1, $4 # e0 : move prefetched value to correct reg
+
+ .align 4
+$unrolled_loop:
+ ldq $1, 8($0) #-e0 : prefetch $1
+ xor $17, $4, $2 # .. e1 :
+ cmpbge $31, $2, $2 # e0 :
+ bne $2, $found_it # .. e1 :
+
+ addq $0, 8, $0 #-e0 :
+$odd_quad_count:
+ xor $17, $1, $2 # .. e1 :
+ ldq $4, 8($0) # e0 : prefetch $4
+ cmpbge $31, $2, $2 # .. e1 :
+ addq $0, 8, $6 #-e0 :
+ bne $2, $found_it # .. e1 :
+
+ cmpult $6, $18, $6 # e0 :
+ addq $0, 8, $0 # .. e1 :
+ bne $6, $unrolled_loop #-e1 :
+
+ mov $4, $1 # e0 : move prefetched value into $1
+$final: subq $5, $0, $18 # .. e1 : $18 <- number of bytes left to do
+ bne $18, $last_quad # e1 :
+
+$not_found:
+ mov $31, $0 #-e0 :
+ ret # .. e1 :
+
+ .end memchr
{
struct vm_area_struct * vma;
struct mm_struct *mm = current->mm;
- unsigned fixup;
+ unsigned int fixup;
+ int fault;
/* As of EV6, a load into $31/$f31 is a prefetch, and never faults
(or is suppressed by the PALcode). Support that for older CPUs
}
}
+ /* If we're in an interrupt context, or have no user context,
+ we must not take the fault. */
+ if (!mm || in_interrupt())
+ goto no_context;
+
down(&mm->mmap_sem);
- lock_kernel();
vma = find_vma(mm, address);
if (!vma)
goto bad_area;
if (!(vma->vm_flags & VM_WRITE))
goto bad_area;
}
- handle_mm_fault(current, vma, address, cause > 0);
+
+ /*
+ * If for any reason at all we couldn't handle the fault,
+ * make sure we exit gracefully rather than endlessly redo
+ * the fault.
+ */
+ fault = handle_mm_fault(current, vma, address, cause > 0);
up(&mm->mmap_sem);
- goto out;
+
+ if (fault < 0)
+ goto out_of_memory;
+ if (fault == 0)
+ goto do_sigbus;
+
+ return;
/*
* Something tried to access memory that isn't in our memory map..
if (user_mode(regs)) {
force_sig(SIGSEGV, current);
- goto out;
+ return;
}
+no_context:
/* Are we prepared to handle this fault as an exception? */
if ((fixup = search_exception_table(regs->pc)) != 0) {
unsigned long newpc;
printk("%s: Exception at [<%lx>] (%lx)\n",
current->comm, regs->pc, newpc);
regs->pc = newpc;
- goto out;
+ return;
}
/*
"virtual address %016lx\n", address);
die_if_kernel("Oops", regs, cause, (unsigned long*)regs - 16);
do_exit(SIGKILL);
- out:
- unlock_kernel();
-}
+/*
+ * We ran out of memory, or some other thing happened to us that made
+ * us unable to handle the page fault gracefully.
+ */
+out_of_memory:
+ printk(KERN_ALERT "VM: killing process %s(%d)\n",
+ current->comm, current->pid);
+ if (!user_mode(regs))
+ goto no_context;
+ do_exit(SIGKILL);
+
+do_sigbus:
+ /*
+ * Send a sigbus, regardless of whether we were in kernel
+ * or user mode.
+ */
+ force_sig(SIGBUS, current);
+ if (!user_mode(regs))
+ goto no_context;
+ return;
+}
return p - buf;
}
-void __init apm_setup(char *str, int *dummy)
-{
- int invert;
-
- while ((str != NULL) && (*str != '\0')) {
- if (strncmp(str, "off", 3) == 0)
- apm_disabled = 1;
- if (strncmp(str, "on", 2) == 0)
- apm_disabled = 0;
- invert = (strncmp(str, "no-", 3) == 0);
- if (invert)
- str += 3;
- if (strncmp(str, "debug", 5) == 0)
- debug = !invert;
- if (strncmp(str, "smp-power-off", 13) == 0)
- smp_hack = !invert;
- str = strchr(str, ',');
- if (str != NULL)
- str += strspn(str, ", \t");
- }
-}
-
static int apm(void *unused)
{
unsigned short bx;
return 0;
}
+static int __init apm_setup(char *str)
+{
+ int invert;
+
+ while ((str != NULL) && (*str != '\0')) {
+ if (strncmp(str, "off", 3) == 0)
+ apm_disabled = 1;
+ if (strncmp(str, "on", 2) == 0)
+ apm_disabled = 0;
+ invert = (strncmp(str, "no-", 3) == 0);
+ if (invert)
+ str += 3;
+ if (strncmp(str, "debug", 5) == 0)
+ debug = !invert;
+ if (strncmp(str, "smp-power-off", 13) == 0)
+ smp_hack = !invert;
+ str = strchr(str, ',');
+ if (str != NULL)
+ str += strspn(str, ", \t");
+ }
+ return 1;
+}
+
+__setup("apm=", apm_setup);
+
/*
* Just start the APM thread. We do NOT want to do APM BIOS
* calls from anything but the APM thread, if for no other reason
int i, max;
int ints[MAX_PIRQS+1];
- get_options(str, MAX_PIRQS+1, ints);
+ get_options(str, ARRAY_SIZE(ints), ints);
for (i = 0; i < MAX_PIRQS; i++)
pirq_entries[i] = -1;
stack = (unsigned long *) &stack;
for (i = 40; i ; i--) {
unsigned long x = *++stack;
- if (x > (unsigned long) &get_options && x < (unsigned long) &vsprintf) {
+ if (x > (unsigned long) &get_option && x < (unsigned long) &vsprintf) {
printk("<[%08lx]> ", x);
}
}
asmlinkage void ret_from_fork(void) __asm__("ret_from_fork");
-static int hlt_counter=0;
+int hlt_counter=0;
void disable_hlt(void)
{
-/* $Id: debuglocks.c,v 1.7 1999/04/21 02:26:58 anton Exp $
+/* $Id: debuglocks.c,v 1.8 1999/08/05 09:49:59 anton Exp $
* debuglocks.c: Debugging versions of SMP locking primitives.
*
* Copyright (C) 1997 David S. Miller (davem@caip.rutgers.edu)
lock, cpu, caller, lock->owner_pc & ~3, lock->owner_pc & 3);
for(i = 0; i < NR_CPUS; i++)
- printk(" reader[i]=%08lx", lock->reader_pc[i]);
+ printk(" reader[%d]=%08lx", i, lock->reader_pc[i]);
printk("\n");
}
-# $Id: config.in,v 1.72 1999/08/04 03:19:17 davem Exp $
+# $Id: config.in,v 1.73 1999/08/06 12:11:47 davem Exp $
# For a description of the syntax of this configuration file,
# see the Configure script.
#
dep_tristate ' SCSI emulation support' CONFIG_BLK_DEV_IDESCSI $CONFIG_BLK_DEV_IDE
define_bool CONFIG_BLK_DEV_IDEPCI y
define_bool CONFIG_BLK_DEV_IDEDMA y
- define_bool CONFIG_IDEDMA_PCI_AUTO y
+ define_bool CONFIG_IDEDMA_AUTO y
+ define_bool IDEDMA_NEW_DRIVE_LISTINGS y
define_bool CONFIG_BLK_DEV_NS87415 y
define_bool CONFIG_BLK_DEV_CMD646 y
fi
# CONFIG_BLK_DEV_IDESCSI is not set
CONFIG_BLK_DEV_IDEPCI=y
CONFIG_BLK_DEV_IDEDMA=y
-CONFIG_IDEDMA_PCI_AUTO=y
+CONFIG_IDEDMA_AUTO=y
+IDEDMA_NEW_DRIVE_LISTINGS=y
CONFIG_BLK_DEV_NS87415=y
CONFIG_BLK_DEV_CMD646=y
-/* $Id: ebus.c,v 1.36 1999/05/04 03:21:42 davem Exp $
+/* $Id: ebus.c,v 1.38 1999/08/06 10:37:32 davem Exp $
* ebus.c: PCI to EBus bridge device.
*
* Copyright (C) 1997 Eddie C. Dost (ecd@skynet.be)
#include <asm/ebus.h>
#include <asm/oplib.h>
#include <asm/bpp.h>
+#include <asm/irq.h>
#undef PROM_DEBUG
#undef DEBUG_FILL_EBUS_DEV
return mem;
}
-__initfunc(void ebus_intmap_match(struct linux_ebus *ebus,
- struct linux_prom_registers *reg,
- int *interrupt))
+void __init ebus_intmap_match(struct linux_ebus *ebus,
+ struct linux_prom_registers *reg,
+ int *interrupt)
{
unsigned int hi, lo, irq;
int i;
prom_halt();
}
-__initfunc(void fill_ebus_child(int node, struct linux_prom_registers *preg,
- struct linux_ebus_child *dev))
+void __init fill_ebus_child(int node, struct linux_prom_registers *preg,
+ struct linux_ebus_child *dev)
{
int regs[PROMREG_MAX];
int irqs[PROMREG_MAX];
}
#ifdef DEBUG_FILL_EBUS_DEV
- dprintf("child '%s': address%s\n", dev->prom_name,
+ dprintf("child '%s': address%s ", dev->prom_name,
dev->num_addrs > 1 ? "es" : "");
for (i = 0; i < dev->num_addrs; i++)
- dprintf(" %016lx\n", dev->base_address[i]);
+ dprintf("%016lx ", dev->base_address[i]);
+ dprintf("\n");
if (dev->num_irqs) {
dprintf(" IRQ%s", dev->num_irqs > 1 ? "s" : "");
for (i = 0; i < dev->num_irqs; i++)
#endif
}
-__initfunc(void fill_ebus_device(int node, struct linux_ebus_device *dev))
+void __init fill_ebus_device(int node, struct linux_ebus_device *dev)
{
struct linux_prom_registers regs[PROMREG_MAX];
struct linux_ebus_child *child;
for (i = 0; i < dev->num_addrs; i++) {
n = (regs[i].which_io - 0x10) >> 2;
- dev->base_address[i] = dev->bus->self->base_address[n];
+ dev->base_address[i] = dev->bus->self->resource[n].start;
dev->base_address[i] += (unsigned long)regs[i].phys_addr;
}
}
#ifdef DEBUG_FILL_EBUS_DEV
- dprintf("'%s': address%s\n", dev->prom_name,
+ dprintf("'%s': address%s ", dev->prom_name,
dev->num_addrs > 1 ? "es" : "");
for (i = 0; i < dev->num_addrs; i++)
- dprintf(" %016lx\n", dev->base_address[i]);
+ dprintf("%016lx ", dev->base_address[i]);
+ dprintf("\n");
if (dev->num_irqs) {
dprintf(" IRQ%s", dev->num_irqs > 1 ? "s" : "");
for (i = 0; i < dev->num_irqs; i++)
extern void clock_probe(void);
-__initfunc(void ebus_init(void))
+void __init ebus_init(void)
{
struct linux_prom_pci_registers regs[PROMREG_MAX];
struct linux_pbm_info *pbm;
struct pci_dev *pdev;
struct pcidev_cookie *cookie;
char lbuf[128];
- unsigned long addr, *base;
+ struct resource *base;
+ unsigned long addr;
unsigned short pci_command;
int nd, len, ebusnd;
int reg, rng, nreg;
}
nreg = len / sizeof(struct linux_prom_pci_registers);
- base = &ebus->self->base_address[0];
+ base = &ebus->self->resource[0];
for (reg = 0; reg < nreg; reg++) {
if (!(regs[reg].phys_hi & 0x03000000))
continue;
addr += (u64)regs[reg].phys_mid << 32UL;
addr += (u64)rp->parent_phys_lo;
addr += (u64)rp->parent_phys_hi << 32UL;
- *base++ = (unsigned long)__va(addr);
+
+ base->name = "EBUS";
+ base->start = (unsigned long)__va(addr);
+ base->end = base->start + regs[reg].size_lo - 1;
+ base->flags = 0;
+ request_resource(&ioport_resource, base);
+
+ base += 1;
printk(" %lx[%x]", (unsigned long)__va(addr),
regs[reg].size_lo);
if(cmd == SIOCETHTOOL)
len = sizeof(struct ethtool_cmd);
if(cmd == SIOCGPPPVER)
- len = strlen(PPP_VERSION) + 1;
+ len = strlen((char *)ifr.ifr_data) + 1;
else if(cmd == SIOCGPPPCSTATS)
len = sizeof(struct ppp_comp_stats);
else
-/* $Id: psycho.c,v 1.87 1999/07/23 01:56:45 davem Exp $
+/* $Id: psycho.c,v 1.89 1999/08/06 10:37:35 davem Exp $
* psycho.c: Ultra/AX U2P PCI controller support.
*
* Copyright (C) 1997 David S. Miller (davem@caipfs.rutgers.edu)
}
}
+extern struct pci_bus pci_root;
+extern struct pci_dev *pci_devices;
+static struct pci_dev **pci_last_dev_p = &pci_devices;
+extern int pci_reverse;
+
+extern void pci_namedevice(struct pci_dev *);
+
+static void __init sparc64_pci_read_bases(struct pci_dev *dev, unsigned int howmany)
+{
+ unsigned int reg;
+ u32 l;
+
+ for(reg=0; reg < howmany; reg++) {
+ struct resource *res = dev->resource + reg;
+ unsigned long mask;
+ unsigned int newval, size;
+
+ res->name = dev->name;
+ pci_read_config_dword(dev, PCI_BASE_ADDRESS_0 + (reg << 2), &l);
+ if (l == 0xffffffff)
+ continue;
+
+ pci_write_config_dword(dev, PCI_BASE_ADDRESS_0 + (reg << 2), 0xffffffff);
+ pci_read_config_dword(dev, PCI_BASE_ADDRESS_0 + (reg << 2), &newval);
+ pci_write_config_dword(dev, PCI_BASE_ADDRESS_0 + (reg << 2), l);
+
+ mask = PCI_BASE_ADDRESS_MEM_MASK;
+ if (l & PCI_BASE_ADDRESS_SPACE_IO)
+ mask = PCI_BASE_ADDRESS_IO_MASK;
+
+ newval &= mask;
+ if (!newval)
+ continue;
+
+ res->start = l & mask;
+ res->flags = l & ~mask;
+
+ size = 1;
+ do {
+ size <<= 1;
+ } while (!(size & newval));
+
+ /* 64-bit memory? */
+ if ((l & (PCI_BASE_ADDRESS_SPACE | PCI_BASE_ADDRESS_MEM_TYPE_MASK))
+ == (PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64)) {
+ unsigned int high;
+ reg++;
+ pci_read_config_dword(dev, PCI_BASE_ADDRESS_0 + (reg << 2), &high);
+ if (high)
+ res->start |= ((unsigned long) high) << 32;
+ }
+ res->end = res->start + size - 1;
+ }
+}
+
+static unsigned int __init sparc64_pci_scan_bus(struct pci_bus *bus)
+{
+ unsigned int devfn, l, max, class;
+ unsigned char cmd, irq, tmp, hdr_type, is_multi = 0;
+ struct pci_dev *dev, **bus_last;
+ struct pci_bus *child;
+
+ bus_last = &bus->devices;
+ max = bus->secondary;
+ for (devfn = 0; devfn < 0xff; ++devfn) {
+ if (PCI_FUNC(devfn) && !is_multi) {
+ /* not a multi-function device */
+ continue;
+ }
+ if (pcibios_read_config_byte(bus->number, devfn, PCI_HEADER_TYPE, &hdr_type))
+ continue;
+ if (!PCI_FUNC(devfn))
+ is_multi = hdr_type & 0x80;
+
+ if (pcibios_read_config_dword(bus->number, devfn, PCI_VENDOR_ID, &l) ||
+ /* some broken boards return 0 if a slot is empty: */
+ l == 0xffffffff || l == 0x00000000 || l == 0x0000ffff || l == 0xffff0000)
+ continue;
+
+ dev = kmalloc(sizeof(*dev), GFP_ATOMIC);
+ if(dev==NULL)
+ {
+ printk(KERN_ERR "pci: out of memory.\n");
+ continue;
+ }
+ memset(dev, 0, sizeof(*dev));
+ dev->bus = bus;
+ dev->devfn = devfn;
+ dev->vendor = l & 0xffff;
+ dev->device = (l >> 16) & 0xffff;
+ pci_namedevice(dev);
+
+ /* non-destructively determine if device can be a master: */
+ pcibios_read_config_byte(bus->number, devfn, PCI_COMMAND, &cmd);
+ pcibios_write_config_byte(bus->number, devfn, PCI_COMMAND, cmd | PCI_COMMAND_MASTER);
+ pcibios_read_config_byte(bus->number, devfn, PCI_COMMAND, &tmp);
+ dev->master = ((tmp & PCI_COMMAND_MASTER) != 0);
+ pcibios_write_config_byte(bus->number, devfn, PCI_COMMAND, cmd);
+
+ pcibios_read_config_dword(bus->number, devfn, PCI_CLASS_REVISION, &class);
+ class >>= 8; /* upper 3 bytes */
+ dev->class = class;
+ class >>= 8;
+ dev->hdr_type = hdr_type;
+
+ switch (hdr_type & 0x7f) { /* header type */
+ case PCI_HEADER_TYPE_NORMAL: /* standard header */
+ if (class == PCI_CLASS_BRIDGE_PCI)
+ goto bad;
+ /*
+ * If the card generates interrupts, read IRQ number
+ * (some architectures change it during pcibios_fixup())
+ */
+ pcibios_read_config_byte(bus->number, dev->devfn, PCI_INTERRUPT_PIN, &irq);
+ if (irq)
+ pcibios_read_config_byte(bus->number, dev->devfn, PCI_INTERRUPT_LINE, &irq);
+ dev->irq = irq;
+ sparc64_pci_read_bases(dev, 6);
+ pcibios_read_config_dword(bus->number, devfn, PCI_ROM_ADDRESS, &l);
+ dev->rom_address = (l == 0xffffffff) ? 0 : l;
+ break;
+ case PCI_HEADER_TYPE_BRIDGE: /* bridge header */
+ if (class != PCI_CLASS_BRIDGE_PCI)
+ goto bad;
+ sparc64_pci_read_bases(dev, 2);
+ pcibios_read_config_dword(bus->number, devfn, PCI_ROM_ADDRESS1, &l);
+ dev->rom_address = (l == 0xffffffff) ? 0 : l;
+ break;
+ case PCI_HEADER_TYPE_CARDBUS: /* CardBus bridge header */
+ if (class != PCI_CLASS_BRIDGE_CARDBUS)
+ goto bad;
+ sparc64_pci_read_bases(dev, 1);
+ break;
+ default: /* unknown header */
+ bad:
+ printk(KERN_ERR "PCI: %02x:%02x [%04x/%04x/%06x] has unknown header type %02x, ignoring.\n",
+ bus->number, dev->devfn, dev->vendor, dev->device, class, hdr_type);
+ continue;
+ }
+
+ /*
+ * Put it into the global PCI device chain. It's used to
+ * find devices once everything is set up.
+ */
+ if (!pci_reverse) {
+ *pci_last_dev_p = dev;
+ pci_last_dev_p = &dev->next;
+ } else {
+ dev->next = pci_devices;
+ pci_devices = dev;
+ }
+
+ /*
+ * Now insert it into the list of devices held
+ * by the parent bus.
+ */
+ *bus_last = dev;
+ bus_last = &dev->sibling;
+ }
+
+ /*
+ * After performing arch-dependent fixup of the bus, look behind
+ * all PCI-to-PCI bridges on this bus.
+ */
+ pcibios_fixup_bus(bus);
+ for(dev=bus->devices; dev; dev=dev->sibling)
+ /*
+ * If it's a bridge, scan the bus behind it.
+ */
+ if ((dev->class >> 8) == PCI_CLASS_BRIDGE_PCI) {
+ unsigned int buses;
+ unsigned int devfn = dev->devfn;
+ unsigned short cr;
+
+ /*
+ * Insert it into the tree of buses.
+ */
+ child = kmalloc(sizeof(*child), GFP_ATOMIC);
+ if(child==NULL)
+ {
+ printk(KERN_ERR "pci: out of memory for bridge.\n");
+ continue;
+ }
+ memset(child, 0, sizeof(*child));
+ child->next = bus->children;
+ bus->children = child;
+ child->self = dev;
+ child->parent = bus;
+
+ /*
+ * Set up the primary, secondary and subordinate
+ * bus numbers.
+ */
+ child->number = child->secondary = ++max;
+ child->primary = bus->secondary;
+ child->subordinate = 0xff;
+ /*
+ * Clear all status bits and turn off memory,
+ * I/O and master enables.
+ */
+ pcibios_read_config_word(bus->number, devfn, PCI_COMMAND, &cr);
+ pcibios_write_config_word(bus->number, devfn, PCI_COMMAND, 0x0000);
+ pcibios_write_config_word(bus->number, devfn, PCI_STATUS, 0xffff);
+ /*
+ * Read the existing primary/secondary/subordinate bus
+ * number configuration to determine if the PCI bridge
+ * has already been configured by the system. If so,
+ * do not modify the configuration, merely note it.
+ */
+ pcibios_read_config_dword(bus->number, devfn, PCI_PRIMARY_BUS, &buses);
+ if ((buses & 0xFFFFFF) != 0)
+ {
+ unsigned int cmax;
+
+ child->primary = buses & 0xFF;
+ child->secondary = (buses >> 8) & 0xFF;
+ child->subordinate = (buses >> 16) & 0xFF;
+ child->number = child->secondary;
+ cmax = sparc64_pci_scan_bus(child);
+ if (cmax > max) max = cmax;
+ }
+ else
+ {
+ /*
+ * Configure the bus numbers for this bridge:
+ */
+ buses &= 0xff000000;
+ buses |=
+ (((unsigned int)(child->primary) << 0) |
+ ((unsigned int)(child->secondary) << 8) |
+ ((unsigned int)(child->subordinate) << 16));
+ pcibios_write_config_dword(bus->number, devfn, PCI_PRIMARY_BUS, buses);
+ /*
+ * Now we can scan all subordinate buses:
+ */
+ max = sparc64_pci_scan_bus(child);
+ /*
+ * Set the subordinate bus number to its real
+ * value:
+ */
+ child->subordinate = max;
+ buses = (buses & 0xff00ffff)
+ | ((unsigned int)(child->subordinate) << 16);
+ pcibios_write_config_dword(bus->number, devfn, PCI_PRIMARY_BUS, buses);
+ }
+ pcibios_write_config_word(bus->number, devfn, PCI_COMMAND, cr);
+ }
+
+ /*
+ * We've scanned the bus and so we know all about what's on
+ * the other side of any bridges that may be on this bus plus
+ * any devices.
+ *
+ * Return how far we've got finding sub-buses.
+ */
+ return max;
+}
+
static void __init sabre_probe(struct linux_psycho *sabre)
{
struct pci_bus *pbus = sabre->pci_bus;
pbus->number = pbus->secondary = busno;
pbus->sysdata = sabre;
- pbus->subordinate = pci_scan_bus(pbus);
+ pbus->subordinate = sparc64_pci_scan_bus(pbus);
busno = pbus->subordinate + 1;
for(pbus = pbus->children; pbus; pbus = pbus->next) {
pbm_fixup_busno(pbm, busno);
- pbus->subordinate = pci_scan_bus(pbus);
+ pbus->subordinate = sparc64_pci_scan_bus(pbus);
/*
* Set the maximum subordinate bus of this pbm.
int bustype = (pregs[preg].phys_hi >> 24) & 0x3;
int bsreg, brindex;
unsigned int rtmp;
- u64 pci_addr;
+ u64 pci_addr, pci_size;
if(bustype == 0) {
/* Config space cookie, nothing to do. */
/* Now construct UPA physical address. */
pci_addr = (((u64)pregs[preg].phys_mid) << 32UL);
pci_addr |= (((u64)pregs[preg].phys_lo));
+ pci_size = (((u64)pregs[preg].size_hi) << 32UL);
+ pci_size |= (((u64)pregs[preg].size_lo));
if(ap) {
pci_addr += ((u64)ap->phys_lo);
/* Final step, apply PBM range. */
for(rng = 0; rng < pbm->num_pbm_ranges; rng++) {
- struct linux_prom_pci_ranges *rp = &pbm->pbm_ranges[rng];
- int space = (rp->child_phys_hi >> 24) & 3;
+ struct linux_prom_pci_ranges *rngp = &pbm->pbm_ranges[rng];
+ int space = (rngp->child_phys_hi >> 24) & 3;
if(space == bustype) {
- pci_addr += ((u64)rp->parent_phys_lo);
- pci_addr += (((u64)rp->parent_phys_hi) << 32UL);
+ pci_addr += ((u64)rngp->parent_phys_lo);
+ pci_addr += (((u64)rngp->parent_phys_hi) << 32UL);
break;
}
}
*/
pci_read_config_dword(pdev, PCI_ROM_ADDRESS, &rtmp);
pci_write_config_dword(pdev, PCI_ROM_ADDRESS, rtmp & ~1);
- } else
- pdev->base_address[brindex] = (unsigned long)__va(pci_addr);
-
- /* Preserve I/O space bit. */
- if(bustype == 0x1) {
- pdev->base_address[brindex] |= 1;
- IO_seen = 1;
} else {
- MEM_seen = 1;
+ struct resource *root, *rp;
+
+ rp = &pdev->resource[brindex];
+
+ pci_read_config_dword(pdev, PCI_BASE_ADDRESS_0 + (brindex * 4), &rtmp);
+ if (rtmp & 0x1)
+ rtmp &= 0x1;
+ else
+ rtmp &= 0xf;
+
+ rp->name = pdev->name;
+ rp->start = (unsigned long)__va(pci_addr);
+ rp->end = rp->start + pci_size - 1;
+
+ /* Keep track of what we've seen so far. */
+ if(rtmp & 0x1) {
+ IO_seen = 1;
+ root = &ioport_resource;
+ } else {
+ MEM_seen = 1;
+ root = &iomem_resource;
+ }
+ rp->flags = rtmp;
+ request_resource(root, rp);
}
}
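The masking in the hunk above follows the standard PCI base address register layout: bit 0 distinguishes I/O space from memory space, and only the low bits carry flags (bit 0 for I/O BARs, bits 0-3 for memory BARs). A minimal userspace sketch of the same decoding (plain C, not the kernel code; `bar_flags`/`bar_base` are names invented here):

```c
/* PCI BAR layout: bit 0 set means I/O space, where bit 0 is the only
 * flag bit; bit 0 clear means memory space, where bits 0-3 encode the
 * type and prefetchability.  The remaining bits form the base address. */
static unsigned int bar_flags(unsigned int bar)
{
	return (bar & 0x1) ? (bar & 0x1) : (bar & 0xf);
}

static unsigned int bar_base(unsigned int bar)
{
	return (bar & 0x1) ? (bar & ~0x3u) : (bar & ~0xfu);
}
```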
int breg;
for(breg = PCI_BASE_ADDRESS_0; breg <= PCI_BASE_ADDRESS_5; breg += 4) {
+ struct resource *rp;
int io;
ridx = ((breg - PCI_BASE_ADDRESS_0) >> 2);
- base = (unsigned int)pdev->base_address[ridx];
+ rp = &pdev->resource[ridx];
+ base = (unsigned int)rp->start;
- if(pdev->base_address[ridx] > PAGE_OFFSET)
+ /* Already handled? */
+ if(rp->start > PAGE_OFFSET)
continue;
- io = (base & PCI_BASE_ADDRESS_SPACE)==PCI_BASE_ADDRESS_SPACE_IO;
- base &= ~((io ?
- PCI_BASE_ADDRESS_IO_MASK :
- PCI_BASE_ADDRESS_MEM_MASK));
+ pci_read_config_dword(pdev, breg, &rtmp);
+ io = (rtmp & 0x1);
offset = (pdev->bus->number << 16) | (pdev->devfn << 8) | breg;
vp = pci_find_vma(pbm, base, offset, io);
if(!vp || vp->start > base) {
unsigned int size, new_base;
- pci_read_config_dword(pdev, breg, &rtmp);
pci_write_config_dword(pdev, breg, 0xffffffff);
pci_read_config_dword(pdev, breg, &size);
+
if(io)
- size &= ~1;
+ size &= ~0x1;
+ else
+ size &= ~0xf;
+
size = (~(size) + 1);
if(!size)
continue;
/* Apply PBM ranges and update pci_dev. */
pci_addr = new_base;
for(rng = 0; rng < pbm->num_pbm_ranges; rng++) {
- struct linux_prom_pci_ranges *rp;
+ struct linux_prom_pci_ranges *rngp;
int rspace;
- rp = &pbm->pbm_ranges[rng];
- rspace = (rp->child_phys_hi >> 24) & 3;
+ rngp = &pbm->pbm_ranges[rng];
+ rspace = (rngp->child_phys_hi >> 24) & 3;
if(io && rspace != 1)
continue;
else if(!io && rspace != 2)
continue;
- pci_addr += ((u64)rp->parent_phys_lo);
- pci_addr += (((u64)rp->parent_phys_hi)<<32UL);
+ pci_addr += ((u64)rngp->parent_phys_lo);
+ pci_addr += (((u64)rngp->parent_phys_hi)<<32UL);
break;
}
if(rng == pbm->num_pbm_ranges) {
prom_printf("fixup_doit: YIEEE, cannot find "
"PBM ranges\n");
}
- pdev->base_address[ridx] = (unsigned long)__va(pci_addr);
+ rp->name = pdev->name;
+ rp->start = (unsigned long) __va(pci_addr);
+ rp->end = rp->start + size - 1;
- /* Preserve I/O space bit. */
+ /* Keep track of what we've seen so far. */
if(io) {
- pdev->base_address[ridx] |= 1;
IO_seen = 1;
+ rp->flags = rtmp & 0x1;
+ request_resource(&ioport_resource, rp);
} else {
MEM_seen = 1;
+ rp->flags = rtmp & 0xf;
+ request_resource(&iomem_resource, rp);
}
}
}
/* Apply PBM ranges and update pci_dev. */
pci_addr = new_base;
for(rng = 0; rng < pbm->num_pbm_ranges; rng++) {
- struct linux_prom_pci_ranges *rp;
+ struct linux_prom_pci_ranges *rngp;
int rspace;
- rp = &pbm->pbm_ranges[rng];
- rspace = (rp->child_phys_hi >> 24) & 3;
+ rngp = &pbm->pbm_ranges[rng];
+ rspace = (rngp->child_phys_hi >> 24) & 3;
if(rspace != 2)
continue;
- pci_addr += ((u64)rp->parent_phys_lo);
- pci_addr += (((u64)rp->parent_phys_hi)<<32UL);
+ pci_addr += ((u64)rngp->parent_phys_lo);
+ pci_addr += (((u64)rngp->parent_phys_hi)<<32UL);
break;
}
if(rng == pbm->num_pbm_ranges) {
#ifdef FIXUP_REGS_DEBUG
dprintf("REG_FIXUP[%04x,%04x]: ", pdev->vendor, pdev->device);
for(preg = 0; preg < 6; preg++) {
- if(pdev->base_address[preg] != 0)
- dprintf("%d[%016lx] ", preg, pdev->base_address[preg]);
+ if(pdev->resource[preg].start != 0)
+ dprintf("%d[%016lx] ", preg, pdev->resource[preg].start);
}
dprintf("\n");
#endif
static inline int
sabre_out_of_range(unsigned char devfn)
{
- return ((PCI_SLOT(devfn) == 0) && (PCI_FUNC(devfn) > 0)) ||
- ((PCI_SLOT(devfn) == 1) && (PCI_FUNC(devfn) > 1)) ||
- (PCI_SLOT(devfn) > 1);
+ return (((PCI_SLOT(devfn) == 0) && (PCI_FUNC(devfn) > 0)) ||
+ ((PCI_SLOT(devfn) == 1) && (PCI_FUNC(devfn) > 1)) ||
+ (PCI_SLOT(devfn) > 1) ||
+ (pci_probe_enable == 0));
}
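sabre_out_of_range() rejects config-space accesses to devfn values the SABRE host bridge cannot decode. The slot/function packing it relies on is the standard one; a small sketch of the check (the macros mirror PCI_SLOT/PCI_FUNC/PCI_DEVFN from <linux/pci.h>, and the `pci_probe_enable` kernel global is omitted here):

```c
#define PCI_DEVFN(slot, func)	((((slot) & 0x1f) << 3) | ((func) & 0x07))
#define PCI_SLOT(devfn)		(((devfn) >> 3) & 0x1f)
#define PCI_FUNC(devfn)		((devfn) & 0x07)

/* Nonzero when SABRE cannot decode this devfn: slot 0 has only
 * function 0, slot 1 has functions 0 and 1, higher slots don't exist. */
static int sabre_devfn_out_of_range(unsigned char devfn)
{
	return ((PCI_SLOT(devfn) == 0 && PCI_FUNC(devfn) > 0) ||
		(PCI_SLOT(devfn) == 1 && PCI_FUNC(devfn) > 1) ||
		PCI_SLOT(devfn) > 1);
}
```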
static int
-/* $Id: signal.c,v 1.43 1999/07/30 09:35:24 davem Exp $
+/* $Id: signal.c,v 1.44 1999/08/04 07:04:13 jj Exp $
* arch/sparc64/kernel/signal.c
*
* Copyright (C) 1991, 1992 Linus Torvalds
struct sparc_stackf ss;
siginfo_t info;
struct pt_regs regs;
- sigset_t mask;
__siginfo_fpu_t * fpu_save;
stack_t stack;
+ sigset_t mask;
__siginfo_fpu_t fpu_state;
};
-/* $Id: sparc64_ksyms.c,v 1.61 1999/07/23 01:56:48 davem Exp $
+/* $Id: sparc64_ksyms.c,v 1.62 1999/08/06 01:42:48 davem Exp $
* arch/sparc64/kernel/sparc64_ksyms.c: Sparc64 specific ksyms support.
*
* Copyright (C) 1996 David S. Miller (davem@caip.rutgers.edu)
/* Kernel thread creation. */
EXPORT_SYMBOL(kernel_thread);
-EXPORT_SYMBOL(init_mm);
/* prom symbols */
EXPORT_SYMBOL(idprom);
-/* $Id: sys_sparc.c,v 1.28 1999/07/30 09:35:27 davem Exp $
+/* $Id: sys_sparc.c,v 1.29 1999/08/04 07:04:10 jj Exp $
* linux/arch/sparc64/kernel/sys_sparc.c
*
* This file contains various random system calls that
asmlinkage unsigned long sparc_brk(unsigned long brk)
{
- if(brk >= 0x80000000000UL) /* VM hole */
+ if((brk >= 0x80000000000UL && brk < PAGE_OFFSET) ||
+ (brk - current->mm->brk > 0x80000000000UL &&
+ brk - current->mm->brk < PAGE_OFFSET)) /* VM hole */
return current->mm->brk;
return sys_brk(brk);
}
(addr < 0x80000000000UL &&
addr > 0x80000000000UL-len))
goto out_putf;
- if (addr >= 0x80000000000ULL && addr < 0xfffff80000000000UL) {
+ if (addr >= 0x80000000000UL && addr < PAGE_OFFSET) {
/* VM hole */
retval = current->mm->brk;
goto out_putf;
. = ALIGN(8192);
__init_end = .;
. = ALIGN(64);
- .data.cacheline_aligned : { *(.data-cacheline_aligned) }
+ .data.cacheline_aligned : { *(.data.cacheline_aligned) }
__bss_start = .;
.sbss : { *(.sbss) *(.scommon) }
.bss :
#define SEL_DLY (2*HZ/100)
-#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
/*
* this struct defines the different floppy drive types.
*/
* = ((hwif->channel ? 2 : 0) + (drive->select.b.unit & 0x01));
*/
-#include <linux/config.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/delay.h>
* = ((hwif->channel ? 2 : 0) + (drive->select.b.unit & 0x01));
*/
-#include <linux/config.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/delay.h>
-- Now default to checking media type.
-- CDROM_SEND_PACKET ioctl added. The infrastructure was in place for
doing this anyway, with the generic_packet addition.
+
+ 3.01 Aug 6, 1999 - Jens Axboe <axboe@image.dk>
+ -- Fix up the sysctl handling so that the option flags get set
+ correctly.
+ -- Fix up ioctl handling so the device specific ones actually get
+ called :).
-------------------------------------------------------------------------*/
-#define REVISION "Revision: 3.00"
-#define VERSION "Id: cdrom.c 3.00 1999/08/05"
+#define REVISION "Revision: 3.01"
+#define VERSION "Id: cdrom.c 3.01 1999/08/06"
/* I use an error-log mask to give fine grain control over the type of
messages dumped to the system logs. The available masks include: */
let it go through the device specific ones. */
if (CDROM_CAN(CDC_GENERIC_PACKET)) {
ret = mmc_ioctl(cdi, cmd, arg);
- if (ret != -ENOTTY)
+ if (ret != -ENOTTY) {
return ret;
+ }
}
-
- /* Now all the audio-ioctls follow, they are all routed through the
- same call audio_ioctl(). */
- if (!CDROM_CAN(CDC_PLAY_AUDIO)) {
- if (CDROM_CAN(CDC_IOCTLS))
- return cdo->dev_ioctl(cdi, cmd, arg);
- return -ENOSYS;
- }
-
+
/* note: most of the cdinfo() calls are commented out here,
because they fill up the sys log when CD players poll
the drive. */
case CDROMSUBCHNL: {
struct cdrom_subchnl q;
u_char requested, back;
+ if (!CDROM_CAN(CDC_PLAY_AUDIO))
+ return -ENOSYS;
/* cdinfo(CD_DO_IOCTL,"entering CDROMSUBCHNL\n");*/
IOCTL_IN(arg, struct cdrom_subchnl, q);
requested = q.cdsc_format;
}
case CDROMREADTOCHDR: {
struct cdrom_tochdr header;
+ if (!CDROM_CAN(CDC_PLAY_AUDIO))
+ return -ENOSYS;
/* cdinfo(CD_DO_IOCTL, "entering CDROMREADTOCHDR\n"); */
IOCTL_IN(arg, struct cdrom_tochdr, header);
if ((ret=cdo->audio_ioctl(cdi, cmd, &header)))
case CDROMREADTOCENTRY: {
struct cdrom_tocentry entry;
u_char requested_format;
+ if (!CDROM_CAN(CDC_PLAY_AUDIO))
+ return -ENOSYS;
/* cdinfo(CD_DO_IOCTL, "entering CDROMREADTOCENTRY\n"); */
IOCTL_IN(arg, struct cdrom_tocentry, entry);
requested_format = entry.cdte_format;
}
case CDROMPLAYMSF: {
struct cdrom_msf msf;
+ if (!CDROM_CAN(CDC_PLAY_AUDIO))
+ return -ENOSYS;
cdinfo(CD_DO_IOCTL, "entering CDROMPLAYMSF\n");
IOCTL_IN(arg, struct cdrom_msf, msf);
CHECKAUDIO;
}
case CDROMPLAYTRKIND: {
struct cdrom_ti ti;
+ if (!CDROM_CAN(CDC_PLAY_AUDIO))
+ return -ENOSYS;
cdinfo(CD_DO_IOCTL, "entering CDROMPLAYTRKIND\n");
IOCTL_IN(arg, struct cdrom_ti, ti);
CHECKAUDIO;
}
case CDROMVOLCTRL: {
struct cdrom_volctrl volume;
+ if (!CDROM_CAN(CDC_PLAY_AUDIO))
+ return -ENOSYS;
cdinfo(CD_DO_IOCTL, "entering CDROMVOLCTRL\n");
IOCTL_IN(arg, struct cdrom_volctrl, volume);
return cdo->audio_ioctl(cdi, cmd, &volume);
}
case CDROMVOLREAD: {
struct cdrom_volctrl volume;
+ if (!CDROM_CAN(CDC_PLAY_AUDIO))
+ return -ENOSYS;
cdinfo(CD_DO_IOCTL, "entering CDROMVOLREAD\n");
if ((ret=cdo->audio_ioctl(cdi, cmd, &volume)))
return ret;
case CDROMSTOP:
case CDROMPAUSE:
case CDROMRESUME: {
+ if (!CDROM_CAN(CDC_PLAY_AUDIO))
+ return -ENOSYS;
cdinfo(CD_DO_IOCTL, "doing audio ioctl (start/stop/pause/resume)\n");
CHECKAUDIO;
return cdo->audio_ioctl(cdi, cmd, NULL);
}
} /* switch */
+ /* do the device specific ioctls */
+ if (CDROM_CAN(CDC_IOCTLS))
+ return cdo->dev_ioctl(cdi, cmd, arg);
+
return -ENOSYS;
}
return proc_dostring(ctl, write, filp, buffer, lenp);
}
+/* Unfortunately, per device settings are not implemented through
+ procfs/sysctl yet. When they are, this will naturally disappear. For now
+ just update all drives. Later this will become the template on which
+ new registered drives will be based. */
+void cdrom_update_settings(void)
+{
+ struct cdrom_device_info *cdi;
+
+ for (cdi = topCdromPtr; cdi != NULL; cdi = cdi->next) {
+ if (autoclose && CDROM_CAN(CDC_CLOSE_TRAY))
+ cdi->options |= CDO_AUTO_CLOSE;
+ else if (!autoclose)
+ cdi->options &= ~CDO_AUTO_CLOSE;
+ if (autoeject && CDROM_CAN(CDC_OPEN_TRAY))
+ cdi->options |= CDO_AUTO_EJECT;
+ else if (!autoeject)
+ cdi->options &= ~CDO_AUTO_EJECT;
+ if (lockdoor && CDROM_CAN(CDC_LOCK))
+ cdi->options |= CDO_LOCK;
+ else if (!lockdoor)
+ cdi->options &= ~CDO_LOCK;
+ if (check_media_type)
+ cdi->options |= CDO_CHECK_TYPE;
+ else
+ cdi->options &= ~CDO_CHECK_TYPE;
+ }
+}
+
static int cdrom_sysctl_handler(ctl_table *ctl, int write, struct file * filp,
void *buffer, size_t *lenp)
{
ret = proc_dointvec(ctl, write, filp, buffer, lenp);
- /* FIXME: only 1's and 0's should be accepted */
if (write && *valp != val) {
+
+ /* we only care for 1 or 0. */
+ if (*valp)
+ *valp = 1;
+ else
+ *valp = 0;
switch (ctl->ctl_name) {
case DEV_CDROM_AUTOCLOSE: {
break;
}
}
+ /* update the option flags according to the changes. we
+ don't have per-device options through sysctl yet, but
+ once we do, this will disappear. */
+ cdrom_update_settings();
}
return ret;
cdrom_sysctl_header = register_sysctl_table(cdrom_root_table, 1);
cdrom_root_table->child->de->fill_inode = &cdrom_procfs_modcount;
-
+
/* set the defaults */
cdrom_sysctl_settings.autoclose = autoclose;
cdrom_sysctl_settings.autoeject = autoeject;
* Support polled I2O PCI controllers.
*/
-#include <linux/config.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/pci.h>
#define FMT_U64_HEX "0x%08x%08x"
#define U64_VAL(pu64) *((u32*)(pu64)+1), *((u32*)(pu64))
-#include <linux/config.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/i2o.h>
/* get IRQ */
cs->irq = dev_diva->irq;
/* get IO address */
- cs->hw.diva.cfg_reg = dev_diva->base_address[2]
+ cs->hw.diva.cfg_reg = dev_diva->resource[2].start
& PCI_BASE_ADDRESS_IO_MASK;
} else if ((dev_diva_u = pci_find_device(PCI_VENDOR_EICON_DIEHL,
PCI_DIVA20_U_ID, dev_diva_u))) {
/* get IRQ */
cs->irq = dev_diva_u->irq;
/* get IO address */
- cs->hw.diva.cfg_reg = dev_diva_u->base_address[2]
+ cs->hw.diva.cfg_reg = dev_diva_u->resource[2].start
& PCI_BASE_ADDRESS_IO_MASK;
} else {
printk(KERN_WARNING "Diva: No PCI card found\n");
dev_qs1000))) {
cs->subtyp = ELSA_QS1000PCI;
cs->irq = dev_qs1000->irq;
- cs->hw.elsa.cfg = dev_qs1000->base_address[1] &
+ cs->hw.elsa.cfg = dev_qs1000->resource[1].start &
PCI_BASE_ADDRESS_IO_MASK;
- cs->hw.elsa.base = dev_qs1000->base_address[3] &
+ cs->hw.elsa.base = dev_qs1000->resource[3].start &
PCI_BASE_ADDRESS_IO_MASK;
} else if ((dev_qs3000 = pci_find_device(PCI_VENDOR_ELSA,
PCI_QS3000_ID, dev_qs3000))) {
cs->subtyp = ELSA_QS3000PCI;
cs->irq = dev_qs3000->irq;
- cs->hw.elsa.cfg = dev_qs3000->base_address[1] &
+ cs->hw.elsa.cfg = dev_qs3000->resource[1].start &
PCI_BASE_ADDRESS_IO_MASK;
- cs->hw.elsa.base = dev_qs3000->base_address[3] &
+ cs->hw.elsa.base = dev_qs3000->resource[3].start &
PCI_BASE_ADDRESS_IO_MASK;
} else {
printk(KERN_WARNING "Elsa: No PCI card found\n");
printk(KERN_WARNING "NETjet: No IRQ for PCI card found\n");
return(0);
}
- cs->hw.njet.base = dev_netjet->base_address[0] &
+ cs->hw.njet.base = dev_netjet->resource[0].start &
PCI_BASE_ADDRESS_IO_MASK;
if (!cs->hw.njet.base) {
printk(KERN_WARNING "NETjet: No IO-Adr for PCI card found\n");
return(0);
}
cs->irq = niccy_dev->irq;
- if (!niccy_dev->base_address[0]) {
+ if (!niccy_dev->resource[0].start) {
printk(KERN_WARNING "Niccy: No IO-Adr for PCI cfg found\n");
return(0);
}
- cs->hw.niccy.cfg_reg = niccy_dev->base_address[0] & PCI_BASE_ADDRESS_IO_MASK;
- if (!niccy_dev->base_address[1]) {
+ cs->hw.niccy.cfg_reg = niccy_dev->resource[0].start & PCI_BASE_ADDRESS_IO_MASK;
+ if (!niccy_dev->resource[1].start) {
printk(KERN_WARNING "Niccy: No IO-Adr for PCI card found\n");
return(0);
}
- pci_ioaddr = niccy_dev->base_address[1] & PCI_BASE_ADDRESS_IO_MASK;
+ pci_ioaddr = niccy_dev->resource[1].start & PCI_BASE_ADDRESS_IO_MASK;
cs->hw.niccy.isac = pci_ioaddr + ISAC_PCI_DATA;
cs->hw.niccy.isac_ale = pci_ioaddr + ISAC_PCI_ADDR;
cs->hw.niccy.hscx = pci_ioaddr + HSCX_PCI_DATA;
dep_tristate 'PLIP (parallel port) support' CONFIG_PLIP $CONFIG_PARPORT
fi
-tristate 'PPP (point-to-point) support' CONFIG_PPP
+tristate 'PPP (point-to-point protocol) support' CONFIG_PPP
if [ ! "$CONFIG_PPP" = "n" ]; then
- comment 'CCP compressors for PPP are only built as modules.'
+ dep_tristate 'PPP support for async serial ports' CONFIG_PPP_ASYNC $CONFIG_PPP
+ dep_tristate 'PPP Deflate compression' CONFIG_PPP_DEFLATE $CONFIG_PPP
+ dep_tristate 'PPP BSD-Compress compression' CONFIG_PPP_BSDCOMP m
fi
tristate 'SLIP (serial line) support' CONFIG_SLIP
# bsd_comp.o is *always* a module, for some documented reason
# (licensing).
ifeq ($(CONFIG_PPP),y)
-LX_OBJS += ppp.o
-M_OBJS += bsd_comp.o
+LX_OBJS += ppp_generic.o
CONFIG_SLHC_BUILTIN = y
-CONFIG_PPPDEF_BUILTIN = y
+ ifeq ($(CONFIG_PPP_ASYNC),y)
+ LX_OBJS += ppp_async.o
+ else
+ ifeq ($(CONFIG_PPP_ASYNC),m)
+ MX_OBJS += ppp_async.o
+ endif
+ endif
+ ifeq ($(CONFIG_PPP_DEFLATE),y)
+ CONFIG_PPPDEF_BUILTIN = y
+ else
+ ifeq ($(CONFIG_PPP_DEFLATE),m)
+ CONFIG_PPPDEF_MODULE = y
+ endif
+ endif
+ ifeq ($(CONFIG_PPP_BSDCOMP),m)
+ M_OBJS += bsd_comp.o
+ endif
else
ifeq ($(CONFIG_PPP),m)
+ MX_OBJS += ppp_generic.o
CONFIG_SLHC_MODULE = y
- CONFIG_PPPDEF_MODULE = y
- MX_OBJS += ppp.o
- M_OBJS += bsd_comp.o
+ ifeq ($(CONFIG_PPP_ASYNC),m)
+ MX_OBJS += ppp_async.o
+ endif
+ ifeq ($(CONFIG_PPP_DEFLATE),m)
+ CONFIG_PPPDEF_MODULE = y
+ endif
+ ifeq ($(CONFIG_PPP_BSDCOMP),m)
+ M_OBJS += bsd_comp.o
+ endif
endif
endif
# if anything built-in uses ppp_deflate, then build it into the kernel also.
# If not, but a module uses it, build as a module.
-# ... NO!!! ppp_deflate.o does not work as resident;
-# it works only as a module!
ifdef CONFIG_PPPDEF_BUILTIN
-MX_OBJS += ppp_deflate.o
+L_OBJS += ppp_deflate.o
else
ifdef CONFIG_PPPDEF_MODULE
- MX_OBJS += ppp_deflate.o
+ M_OBJS += ppp_deflate.o
endif
endif
#error You must compile this driver with "-O".
#endif
-#include <linux/config.h>
#include <linux/version.h>
#include <linux/module.h>
#ifdef MODVERSIONS
/*****************************************************************************/
-#include <linux/config.h>
#include <linux/version.h>
#include <linux/module.h>
#include <linux/kernel.h>
/*****************************************************************************/
-#include <linux/config.h>
#include <linux/version.h>
#include <linux/module.h>
#include <linux/ioport.h>
/*****************************************************************************/
-#include <linux/config.h>
#include <linux/version.h>
#include <linux/module.h>
#include <linux/ioport.h>
--- /dev/null
+/*
+ * PPP async serial channel driver for Linux.
+ *
+ * Copyright 1999 Paul Mackerras.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * This driver provides the encapsulation and framing for sending
+ * and receiving PPP frames over async serial lines. It relies on
+ * the generic PPP layer to give it frames to send and to process
+ * received frames. It implements the PPP line discipline.
+ *
+ * Part of the code in this driver was inspired by the old async-only
+ * PPP driver, written by Michael Callahan and Al Longyear, and
+ * subsequently hacked by Paul Mackerras.
+ *
+ * ==FILEVERSION 990806==
+ */
+
+/* $Id$ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/skbuff.h>
+#include <linux/tty.h>
+#include <linux/netdevice.h>
+#include <linux/poll.h>
+#include <linux/ppp_defs.h>
+#include <linux/if_ppp.h>
+#include <linux/ppp_channel.h>
+#include <asm/uaccess.h>
+
+#define PPP_VERSION "2.4.0"
+
+#define OBUFSIZE 256
+
+/* Structure for storing local state. */
+struct asyncppp {
+ struct tty_struct *tty;
+ unsigned int flags;
+ unsigned int state;
+ unsigned int rbits;
+ int mru;
+ unsigned long busy;
+ u32 xaccm[8];
+ u32 raccm;
+ unsigned int bytes_sent;
+ unsigned int bytes_rcvd;
+
+ struct sk_buff *tpkt;
+ int tpkt_pos;
+ u16 tfcs;
+ unsigned char *optr;
+ unsigned char *olim;
+ struct sk_buff_head xq;
+ unsigned long last_xmit;
+
+ struct sk_buff *rpkt;
+ struct sk_buff_head rq;
+ wait_queue_head_t rwait;
+
+ struct ppp_channel chan; /* interface to generic ppp layer */
+ int connected;
+ unsigned char obuf[OBUFSIZE];
+};
+
+/* Bit numbers in busy */
+#define XMIT_BUSY 0
+#define RECV_BUSY 1
+#define XMIT_WAKEUP 2
+#define XMIT_FULL 3
+
+/* State bits */
+#define SC_TOSS 0x20000000
+#define SC_ESCAPE 0x40000000
+
+/* Bits in rbits */
+#define SC_RCV_BITS (SC_RCV_B7_1|SC_RCV_B7_0|SC_RCV_ODDP|SC_RCV_EVNP)
+
+#define PPPASYNC_MAX_RQLEN 32 /* arbitrary */
+
+static int flag_time = HZ;
+MODULE_PARM(flag_time, "i");
+
+/*
+ * Prototypes.
+ */
+static int ppp_async_encode(struct asyncppp *ap);
+static int ppp_async_send(struct ppp_channel *chan, struct sk_buff *skb);
+static int ppp_async_push(struct asyncppp *ap);
+static void ppp_async_flush_output(struct asyncppp *ap);
+static void ppp_async_input(struct asyncppp *ap, const unsigned char *buf,
+ char *flags, int count);
+
+struct ppp_channel_ops async_ops = {
+ ppp_async_send
+};
+
+/*
+ * Routines for locking and unlocking the transmit and receive paths.
+ */
+static inline void
+lock_path(struct asyncppp *ap, int bit)
+{
+ do {
+ while (test_bit(bit, &ap->busy))
+ mb();
+ } while (test_and_set_bit(bit, &ap->busy));
+ mb();
+}
+
+static inline int
+trylock_path(struct asyncppp *ap, int bit)
+{
+ if (test_and_set_bit(bit, &ap->busy))
+ return 0;
+ mb();
+ return 1;
+}
+
+static inline void
+unlock_path(struct asyncppp *ap, int bit)
+{
+ mb();
+ clear_bit(bit, &ap->busy);
+}
+
+#define lock_xmit_path(ap) lock_path(ap, XMIT_BUSY)
+#define trylock_xmit_path(ap) trylock_path(ap, XMIT_BUSY)
+#define unlock_xmit_path(ap) unlock_path(ap, XMIT_BUSY)
+#define lock_recv_path(ap) lock_path(ap, RECV_BUSY)
+#define trylock_recv_path(ap) trylock_path(ap, RECV_BUSY)
+#define unlock_recv_path(ap) unlock_path(ap, RECV_BUSY)
+
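The lock_path()/unlock_path() pair above implements a simple spinning bit lock over the `busy` word, serializing the transmit and receive paths. A userspace approximation using C11 atomics (a sketch only; the kernel version uses test_and_set_bit/clear_bit with explicit mb() barriers):

```c
#include <stdatomic.h>

/* Spin until the bit was previously clear, i.e. until our fetch-or is
 * the one that set it; acquire/release ordering takes the place of the
 * mb() calls in the kernel version. */
static void lock_bit(atomic_uint *word, unsigned int bit)
{
	unsigned int mask = 1u << bit;

	while (atomic_fetch_or_explicit(word, mask,
					memory_order_acquire) & mask)
		; /* spin: someone else holds the bit */
}

static void unlock_bit(atomic_uint *word, unsigned int bit)
{
	atomic_fetch_and_explicit(word, ~(1u << bit),
				  memory_order_release);
}
```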
+static inline void
+flush_skb_queue(struct sk_buff_head *q)
+{
+ struct sk_buff *skb;
+
+ while ((skb = skb_dequeue(q)) != 0)
+ kfree_skb(skb);
+}
+
+/*
+ * Routines implementing the PPP line discipline.
+ */
+
+/*
+ * Called when a tty is put into PPP line discipline.
+ */
+static int
+ppp_async_open(struct tty_struct *tty)
+{
+ struct asyncppp *ap;
+
+ ap = kmalloc(sizeof(*ap), GFP_KERNEL);
+ if (ap == 0)
+ return -ENOMEM;
+
+ MOD_INC_USE_COUNT;
+
+ /* initialize the asyncppp structure */
+ memset(ap, 0, sizeof(*ap));
+ ap->tty = tty;
+ ap->mru = PPP_MRU;
+ ap->xaccm[0] = ~0U;
+ ap->xaccm[3] = 0x60000000U;
+ ap->raccm = ~0U;
+ ap->optr = ap->obuf;
+ ap->olim = ap->obuf;
+ skb_queue_head_init(&ap->xq);
+ skb_queue_head_init(&ap->rq);
+ init_waitqueue_head(&ap->rwait);
+
+ tty->disc_data = ap;
+
+ return 0;
+}
+
+/*
+ * Called when the tty is put into another line discipline
+ * (or it hangs up).
+ */
+static void
+ppp_async_close(struct tty_struct *tty)
+{
+ struct asyncppp *ap = tty->disc_data;
+
+ if (ap == 0)
+ return;
+ tty->disc_data = 0;
+ lock_xmit_path(ap);
+ lock_recv_path(ap);
+ if (ap->rpkt != 0)
+ kfree_skb(ap->rpkt);
+ flush_skb_queue(&ap->rq);
+ if (ap->tpkt != 0)
+ kfree_skb(ap->tpkt);
+ flush_skb_queue(&ap->xq);
+ if (ap->connected)
+ ppp_unregister_channel(&ap->chan);
+ kfree(ap);
+ MOD_DEC_USE_COUNT;
+}
+
+/*
+ * Read a PPP frame. pppd can use this to negotiate over the
+ * channel before it joins it to a bundle.
+ */
+static ssize_t
+ppp_async_read(struct tty_struct *tty, struct file *file,
+ unsigned char *buf, size_t count)
+{
+ struct asyncppp *ap = tty->disc_data;
+ DECLARE_WAITQUEUE(wait, current);
+ ssize_t ret;
+ struct sk_buff *skb = 0;
+
+ ret = -ENXIO;
+ if (ap == 0)
+ goto out; /* should never happen */
+
+ add_wait_queue(&ap->rwait, &wait);
+ current->state = TASK_INTERRUPTIBLE;
+ for (;;) {
+ ret = -EAGAIN;
+ skb = skb_dequeue(&ap->rq);
+ if (skb)
+ break;
+ if (file->f_flags & O_NONBLOCK)
+ break;
+ ret = -ERESTARTSYS;
+ if (signal_pending(current))
+ break;
+ schedule();
+ }
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&ap->rwait, &wait);
+
+ if (skb == 0)
+ goto out;
+
+ ret = -EOVERFLOW;
+ if (skb->len > count)
+ goto outf;
+ ret = -EFAULT;
+ if (copy_to_user(buf, skb->data, skb->len))
+ goto outf;
+ ret = skb->len;
+
+ outf:
+ kfree_skb(skb);
+ out:
+ return ret;
+}
+
+/*
+ * Write a ppp frame. pppd can use this to send frames over
+ * this particular channel.
+ */
+static ssize_t
+ppp_async_write(struct tty_struct *tty, struct file *file,
+ const unsigned char *buf, size_t count)
+{
+ struct asyncppp *ap = tty->disc_data;
+ struct sk_buff *skb;
+ ssize_t ret;
+
+ ret = -ENXIO;
+ if (ap == 0)
+ goto out; /* should never happen */
+
+ ret = -ENOMEM;
+ skb = alloc_skb(count + 2, GFP_KERNEL);
+ if (skb == 0)
+ goto out;
+ skb_reserve(skb, 2);
+ ret = -EFAULT;
+ if (copy_from_user(skb_put(skb, count), buf, count)) {
+ kfree_skb(skb);
+ goto out;
+ }
+
+ skb_queue_tail(&ap->xq, skb);
+ ppp_async_push(ap);
+
+ ret = count;
+
+ out:
+ return ret;
+}
+
+static int
+ppp_async_ioctl(struct tty_struct *tty, struct file *file,
+ unsigned int cmd, unsigned long arg)
+{
+ struct asyncppp *ap = tty->disc_data;
+ int err, val;
+ u32 accm[8];
+ struct sk_buff *skb;
+
+ err = -ENXIO;
+ if (ap == 0)
+ goto out; /* should never happen */
+ err = -EPERM;
+ if (!capable(CAP_NET_ADMIN))
+ goto out;
+
+ err = -EFAULT;
+ switch (cmd) {
+ case PPPIOCGFLAGS:
+ val = ap->flags | ap->rbits;
+ if (put_user(val, (int *) arg))
+ break;
+ err = 0;
+ break;
+ case PPPIOCSFLAGS:
+ if (get_user(val, (int *) arg))
+ break;
+ ap->flags = val & ~SC_RCV_BITS;
+ ap->rbits = val & SC_RCV_BITS;
+ err = 0;
+ break;
+
+ case PPPIOCGASYNCMAP:
+ if (put_user(ap->xaccm[0], (u32 *) arg))
+ break;
+ err = 0;
+ break;
+ case PPPIOCSASYNCMAP:
+ if (get_user(ap->xaccm[0], (u32 *) arg))
+ break;
+ err = 0;
+ break;
+
+ case PPPIOCGRASYNCMAP:
+ if (put_user(ap->raccm, (u32 *) arg))
+ break;
+ err = 0;
+ break;
+ case PPPIOCSRASYNCMAP:
+ if (get_user(ap->raccm, (u32 *) arg))
+ break;
+ err = 0;
+ break;
+
+ case PPPIOCGXASYNCMAP:
+ if (copy_to_user((void *) arg, ap->xaccm, sizeof(ap->xaccm)))
+ break;
+ err = 0;
+ break;
+ case PPPIOCSXASYNCMAP:
+ if (copy_from_user(accm, (void *) arg, sizeof(accm)))
+ break;
+ accm[2] &= ~0x40000000U; /* can't escape 0x5e */
+ accm[3] |= 0x60000000U; /* must escape 0x7d, 0x7e */
+ memcpy(ap->xaccm, accm, sizeof(ap->xaccm));
+ err = 0;
+ break;
+
+ case PPPIOCGMRU:
+ if (put_user(ap->mru, (int *) arg))
+ break;
+ err = 0;
+ break;
+ case PPPIOCSMRU:
+ if (get_user(val, (int *) arg))
+ break;
+ if (val < PPP_MRU)
+ val = PPP_MRU;
+ ap->mru = val;
+ err = 0;
+ break;
+
+ case PPPIOCATTACH:
+ if (get_user(val, (int *) arg))
+ break;
+ err = -EALREADY;
+ if (ap->connected)
+ break;
+ ap->chan.private = ap;
+ ap->chan.ops = &async_ops;
+ err = ppp_register_channel(&ap->chan, val);
+ if (err != 0)
+ break;
+ ap->connected = 1;
+ break;
+ case PPPIOCDETACH:
+ err = -ENXIO;
+ if (!ap->connected)
+ break;
+ ppp_unregister_channel(&ap->chan);
+ ap->connected = 0;
+ err = 0;
+ break;
+
+ case TCGETS:
+ case TCGETA:
+ err = n_tty_ioctl(tty, file, cmd, arg);
+ break;
+
+ case TCFLSH:
+ /* flush our buffers and the serial port's buffer */
+ if (arg == TCIFLUSH || arg == TCIOFLUSH)
+ flush_skb_queue(&ap->rq);
+ if (arg == TCIOFLUSH || arg == TCOFLUSH)
+ ppp_async_flush_output(ap);
+ err = n_tty_ioctl(tty, file, cmd, arg);
+ break;
+
+ case FIONREAD:
+ val = 0;
+ if ((skb = skb_peek(&ap->rq)) != 0)
+ val = skb->len;
+ if (put_user(val, (int *) arg))
+ break;
+ err = 0;
+ break;
+
+ default:
+ err = -ENOIOCTLCMD;
+ }
+ out:
+ return err;
+}
+
+static unsigned int
+ppp_async_poll(struct tty_struct *tty, struct file *file, poll_table *wait)
+{
+ struct asyncppp *ap = tty->disc_data;
+ unsigned int mask;
+
+ if (ap == 0)
+ return 0; /* should never happen */
+ poll_wait(file, &ap->rwait, wait);
+ mask = POLLOUT | POLLWRNORM;
+ if (skb_peek(&ap->rq))
+ mask |= POLLIN | POLLRDNORM;
+ if (test_bit(TTY_OTHER_CLOSED, &tty->flags) || tty_hung_up_p(file))
+ mask |= POLLHUP;
+ return mask;
+}
+
+static int
+ppp_async_room(struct tty_struct *tty)
+{
+ return 65535;
+}
+
+static void
+ppp_async_receive(struct tty_struct *tty, const unsigned char *buf,
+ char *flags, int count)
+{
+ struct asyncppp *ap = tty->disc_data;
+
+ if (ap == 0)
+ return;
+ trylock_recv_path(ap);
+ ppp_async_input(ap, buf, flags, count);
+ unlock_recv_path(ap);
+ if (test_and_clear_bit(TTY_THROTTLED, &tty->flags)
+ && tty->driver.unthrottle)
+ tty->driver.unthrottle(tty);
+}
+
+static void
+ppp_async_wakeup(struct tty_struct *tty)
+{
+ struct asyncppp *ap = tty->disc_data;
+
+ clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
+ if (ap == 0)
+ return;
+ if (ppp_async_push(ap) && ap->connected)
+ ppp_output_wakeup(&ap->chan);
+}
+
+
+static struct tty_ldisc ppp_ldisc = {
+ magic: TTY_LDISC_MAGIC,
+ name: "ppp",
+ open: ppp_async_open,
+ close: ppp_async_close,
+ read: ppp_async_read,
+ write: ppp_async_write,
+ ioctl: ppp_async_ioctl,
+ poll: ppp_async_poll,
+ receive_room: ppp_async_room,
+ receive_buf: ppp_async_receive,
+ write_wakeup: ppp_async_wakeup,
+};
+
+int
+ppp_async_init(void)
+{
+ int err;
+
+ err = tty_register_ldisc(N_PPP, &ppp_ldisc);
+ if (err != 0)
+ printk(KERN_ERR "PPP_async: error %d registering line disc.\n",
+ err);
+ return err;
+}
+
+/*
+ * Procedures for encapsulation and framing.
+ */
+
+u16 ppp_crc16_table[256] = {
+ 0x0000, 0x1189, 0x2312, 0x329b, 0x4624, 0x57ad, 0x6536, 0x74bf,
+ 0x8c48, 0x9dc1, 0xaf5a, 0xbed3, 0xca6c, 0xdbe5, 0xe97e, 0xf8f7,
+ 0x1081, 0x0108, 0x3393, 0x221a, 0x56a5, 0x472c, 0x75b7, 0x643e,
+ 0x9cc9, 0x8d40, 0xbfdb, 0xae52, 0xdaed, 0xcb64, 0xf9ff, 0xe876,
+ 0x2102, 0x308b, 0x0210, 0x1399, 0x6726, 0x76af, 0x4434, 0x55bd,
+ 0xad4a, 0xbcc3, 0x8e58, 0x9fd1, 0xeb6e, 0xfae7, 0xc87c, 0xd9f5,
+ 0x3183, 0x200a, 0x1291, 0x0318, 0x77a7, 0x662e, 0x54b5, 0x453c,
+ 0xbdcb, 0xac42, 0x9ed9, 0x8f50, 0xfbef, 0xea66, 0xd8fd, 0xc974,
+ 0x4204, 0x538d, 0x6116, 0x709f, 0x0420, 0x15a9, 0x2732, 0x36bb,
+ 0xce4c, 0xdfc5, 0xed5e, 0xfcd7, 0x8868, 0x99e1, 0xab7a, 0xbaf3,
+ 0x5285, 0x430c, 0x7197, 0x601e, 0x14a1, 0x0528, 0x37b3, 0x263a,
+ 0xdecd, 0xcf44, 0xfddf, 0xec56, 0x98e9, 0x8960, 0xbbfb, 0xaa72,
+ 0x6306, 0x728f, 0x4014, 0x519d, 0x2522, 0x34ab, 0x0630, 0x17b9,
+ 0xef4e, 0xfec7, 0xcc5c, 0xddd5, 0xa96a, 0xb8e3, 0x8a78, 0x9bf1,
+ 0x7387, 0x620e, 0x5095, 0x411c, 0x35a3, 0x242a, 0x16b1, 0x0738,
+ 0xffcf, 0xee46, 0xdcdd, 0xcd54, 0xb9eb, 0xa862, 0x9af9, 0x8b70,
+ 0x8408, 0x9581, 0xa71a, 0xb693, 0xc22c, 0xd3a5, 0xe13e, 0xf0b7,
+ 0x0840, 0x19c9, 0x2b52, 0x3adb, 0x4e64, 0x5fed, 0x6d76, 0x7cff,
+ 0x9489, 0x8500, 0xb79b, 0xa612, 0xd2ad, 0xc324, 0xf1bf, 0xe036,
+ 0x18c1, 0x0948, 0x3bd3, 0x2a5a, 0x5ee5, 0x4f6c, 0x7df7, 0x6c7e,
+ 0xa50a, 0xb483, 0x8618, 0x9791, 0xe32e, 0xf2a7, 0xc03c, 0xd1b5,
+ 0x2942, 0x38cb, 0x0a50, 0x1bd9, 0x6f66, 0x7eef, 0x4c74, 0x5dfd,
+ 0xb58b, 0xa402, 0x9699, 0x8710, 0xf3af, 0xe226, 0xd0bd, 0xc134,
+ 0x39c3, 0x284a, 0x1ad1, 0x0b58, 0x7fe7, 0x6e6e, 0x5cf5, 0x4d7c,
+ 0xc60c, 0xd785, 0xe51e, 0xf497, 0x8028, 0x91a1, 0xa33a, 0xb2b3,
+ 0x4a44, 0x5bcd, 0x6956, 0x78df, 0x0c60, 0x1de9, 0x2f72, 0x3efb,
+ 0xd68d, 0xc704, 0xf59f, 0xe416, 0x90a9, 0x8120, 0xb3bb, 0xa232,
+ 0x5ac5, 0x4b4c, 0x79d7, 0x685e, 0x1ce1, 0x0d68, 0x3ff3, 0x2e7a,
+ 0xe70e, 0xf687, 0xc41c, 0xd595, 0xa12a, 0xb0a3, 0x8238, 0x93b1,
+ 0x6b46, 0x7acf, 0x4854, 0x59dd, 0x2d62, 0x3ceb, 0x0e70, 0x1ff9,
+ 0xf78f, 0xe606, 0xd49d, 0xc514, 0xb1ab, 0xa022, 0x92b9, 0x8330,
+ 0x7bc7, 0x6a4e, 0x58d5, 0x495c, 0x3de3, 0x2c6a, 0x1ef1, 0x0f78
+};
+EXPORT_SYMBOL(ppp_crc16_table);
+#define fcstab ppp_crc16_table /* for PPP_FCS macro */
+
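The table above is the standard reflected CRC-16 (FCS-16) lookup table from RFC 1662, for the polynomial x^16 + x^12 + x^5 + 1 (0x8408 in reflected form). Each entry can be regenerated bitwise; a sketch (`crc16_entry` is a name invented here):

```c
#include <stdint.h>

/* Bitwise generator for one entry of the FCS-16 lookup table:
 * ppp_crc16_table[b] == crc16_entry(b) for every byte value b. */
static uint16_t crc16_entry(uint8_t b)
{
	uint16_t v = b;
	int i;

	for (i = 0; i < 8; i++)
		v = (v & 1) ? (v >> 1) ^ 0x8408 : v >> 1;
	return v;
}
```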
+/*
+ * Procedure to encode the data for async serial transmission.
+ * Does octet stuffing (escaping), puts the address/control bytes
+ * on if A/C compression is disabled, and does protocol compression.
+ * Assumes ap->tpkt != 0 on entry.
+ * Returns 1 if we finished the current frame, 0 otherwise.
+ */
+
+#define PUT_BYTE(ap, buf, c, islcp) do { \
+ if ((islcp && c < 0x20) || (ap->xaccm[c >> 5] & (1 << (c & 0x1f)))) {\
+ *buf++ = PPP_ESCAPE; \
+ *buf++ = c ^ 0x20; \
+ } else \
+ *buf++ = c; \
+} while (0)
+
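PUT_BYTE above implements RFC 1662 octet stuffing: a byte that must not appear on the wire is sent as PPP_ESCAPE followed by the byte XORed with 0x20. A standalone sketch of the same rule (`stuff_byte` is a name invented here; the xaccm layout matches the 256-bit async control character map used in the driver):

```c
#include <stdint.h>
#include <stddef.h>

#define PPP_FLAG	0x7e
#define PPP_ESCAPE	0x7d

/* A byte is escaped when the frame is an LCP negotiation packet and
 * the byte is a control character, or when its bit is set in the
 * 256-bit async control character map.  Returns the number of output
 * bytes written (1 or 2); out must have room for 2 bytes. */
static size_t stuff_byte(uint8_t c, int islcp, const uint32_t xaccm[8],
			 uint8_t *out)
{
	if ((islcp && c < 0x20) || (xaccm[c >> 5] & (1u << (c & 0x1f)))) {
		out[0] = PPP_ESCAPE;
		out[1] = c ^ 0x20;
		return 2;
	}
	out[0] = c;
	return 1;
}
```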
+static int
+ppp_async_encode(struct asyncppp *ap)
+{
+ int fcs, i, count, c, proto;
+ unsigned char *buf, *buflim;
+ unsigned char *data;
+ int islcp;
+
+ buf = ap->obuf;
+ ap->olim = buf;
+ ap->optr = buf;
+ i = ap->tpkt_pos;
+ data = ap->tpkt->data;
+ count = ap->tpkt->len;
+ fcs = ap->tfcs;
+ proto = (data[0] << 8) + data[1];
+
+ /*
+ * LCP packets with code values between 1 (configure-request)
+ * and 7 (code-reject) must be sent as though no options
+ * had been negotiated.
+ */
+ islcp = proto == PPP_LCP && 1 <= data[2] && data[2] <= 7;
+
+ if (i == 0) {
+ /*
+ * Start of a new packet - insert the leading FLAG
+ * character if necessary.
+ */
+ if (islcp || flag_time == 0
+ || jiffies - ap->last_xmit >= flag_time)
+ *buf++ = PPP_FLAG;
+ ap->last_xmit = jiffies;
+ fcs = PPP_INITFCS;
+
+ /*
+ * Put in the address/control bytes if necessary
+ */
+ if ((ap->flags & SC_COMP_AC) == 0 || islcp) {
+ PUT_BYTE(ap, buf, 0xff, islcp);
+ fcs = PPP_FCS(fcs, 0xff);
+ PUT_BYTE(ap, buf, 0x03, islcp);
+ fcs = PPP_FCS(fcs, 0x03);
+ }
+ }
+
+ /*
+ * Once we put in the last byte, we need to put in the FCS
+ * and closing flag, so make sure there are at least 7 bytes
+ * of free space in the output buffer.
+ */
+ buflim = ap->obuf + OBUFSIZE - 6;
+ while (i < count && buf < buflim) {
+ c = data[i++];
+ if (i == 1 && c == 0 && (ap->flags & SC_COMP_PROT))
+ continue; /* compress protocol field */
+ fcs = PPP_FCS(fcs, c);
+ PUT_BYTE(ap, buf, c, islcp);
+ }
+
+ if (i < count) {
+ /*
+ * Remember where we are up to in this packet.
+ */
+ ap->olim = buf;
+ ap->tpkt_pos = i;
+ ap->tfcs = fcs;
+ return 0;
+ }
+
+ /*
+ * We have finished the packet. Add the FCS and flag.
+ */
+ fcs = ~fcs;
+ c = fcs & 0xff;
+ PUT_BYTE(ap, buf, c, islcp);
+ c = (fcs >> 8) & 0xff;
+ PUT_BYTE(ap, buf, c, islcp);
+ *buf++ = PPP_FLAG;
+ ap->olim = buf;
+
+ kfree_skb(ap->tpkt);
+ ap->tpkt = 0;
+ return 1;
+}
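The escaping and FCS handling in ppp_async_encode() follow RFC 1662 async HDLC framing. As a standalone illustration (not part of the patch), here is a user-space sketch of the same byte-stuffing rule and the bit-by-bit FCS-16 that the table-driven PPP_FCS macro computes; the names `fcs16` and `stuff_byte` are ours, not the kernel's.

```c
#include <assert.h>

/* Bit-by-bit FCS-16 over buf (reflected polynomial 0x8408), equivalent
 * to the kernel's table-driven PPP_FCS macro.  Per RFC 1662,
 * PPP_INITFCS = 0xffff and PPP_GOODFCS = 0xf0b8. */
static unsigned short fcs16(unsigned short fcs, const unsigned char *buf, int len)
{
	int i;

	while (len-- > 0) {
		fcs ^= *buf++;
		for (i = 0; i < 8; i++)
			fcs = (fcs & 1) ? (fcs >> 1) ^ 0x8408 : fcs >> 1;
	}
	return fcs;
}

/* Same escaping rule as PUT_BYTE: a byte whose bit is set in the
 * 256-bit ACCM (8 x 32-bit words) is sent as 0x7d (PPP_ESCAPE)
 * followed by the byte XORed with 0x20.  Returns bytes emitted. */
static int stuff_byte(unsigned char c, unsigned char out[2],
		      const unsigned int accm[8])
{
	if (accm[c >> 5] & (1u << (c & 0x1f))) {
		out[0] = 0x7d;
		out[1] = c ^ 0x20;
		return 2;
	}
	out[0] = c;
	return 1;
}
```

Appending the one's complement of the running FCS, low byte first (exactly what the code after the encode loop does), and then re-running the FCS over the whole frame yields the constant 0xf0b8 — which is the check performed on receive in process_input_packet().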
+
+/*
+ * Transmit-side routines.
+ */
+
+/*
+ * Send a packet to the peer over an async tty line.
+ * Returns 1 iff the packet was accepted.
+ * If the packet was not accepted, we will call ppp_output_wakeup
+ * at some later time.
+ */
+static int
+ppp_async_send(struct ppp_channel *chan, struct sk_buff *skb)
+{
+ struct asyncppp *ap = chan->private;
+
+ ppp_async_push(ap);
+
+ if (test_and_set_bit(XMIT_FULL, &ap->busy))
+ return 0; /* already full */
+ ap->tpkt = skb;
+ ap->tpkt_pos = 0;
+
+ ppp_async_push(ap);
+ return 1;
+}
+
+/*
+ * Push as much data as possible out to the tty.
+ */
+static int
+ppp_async_push(struct asyncppp *ap)
+{
+ int avail, sent, done = 0;
+ struct tty_struct *tty = ap->tty;
+ int tty_stuffed = 0;
+
+ if (!trylock_xmit_path(ap)) {
+ set_bit(XMIT_WAKEUP, &ap->busy);
+ return 0;
+ }
+ for (;;) {
+ if (test_and_clear_bit(XMIT_WAKEUP, &ap->busy))
+ tty_stuffed = 0;
+ if (!tty_stuffed && ap->optr < ap->olim) {
+ avail = ap->olim - ap->optr;
+ set_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
+ sent = tty->driver.write(tty, 0, ap->optr, avail);
+ if (sent < 0)
+ goto flush; /* error, e.g. loss of CD */
+ ap->optr += sent;
+ if (sent < avail)
+ tty_stuffed = 1;
+ continue;
+ }
+ if (ap->optr == ap->olim && ap->tpkt != 0) {
+ if (ppp_async_encode(ap)) {
+ /* finished processing ap->tpkt */
+ struct sk_buff *skb = skb_dequeue(&ap->xq);
+ if (skb != 0) {
+ ap->tpkt = skb;
+ } else {
+ clear_bit(XMIT_FULL, &ap->busy);
+ done = 1;
+ }
+ }
+ continue;
+ }
+ /* haven't made any progress */
+ unlock_xmit_path(ap);
+ if (!(test_bit(XMIT_WAKEUP, &ap->busy)
+ || (!tty_stuffed && ap->tpkt != 0)))
+ break;
+ if (!trylock_xmit_path(ap))
+ break;
+ }
+ return done;
+
+flush:
+ if (ap->tpkt != 0) {
+ kfree_skb(ap->tpkt);
+ ap->tpkt = 0;
+ clear_bit(XMIT_FULL, &ap->busy);
+ done = 1;
+ }
+ ap->optr = ap->olim;
+ unlock_xmit_path(ap);
+ return done;
+}
+
+/*
+ * Flush output from our internal buffers.
+ * Called for the TCFLSH ioctl.
+ */
+static void
+ppp_async_flush_output(struct asyncppp *ap)
+{
+ int done = 0;
+
+ flush_skb_queue(&ap->xq);
+ lock_xmit_path(ap);
+ ap->optr = ap->olim;
+ if (ap->tpkt != NULL) {
+ kfree_skb(ap->tpkt);
+ ap->tpkt = 0;
+ clear_bit(XMIT_FULL, &ap->busy);
+ done = 1;
+ }
+ unlock_xmit_path(ap);
+ if (done && ap->connected)
+ ppp_output_wakeup(&ap->chan);
+}
+
+/*
+ * Receive-side routines.
+ */
+
+/* see how many ordinary chars there are at the start of buf */
+static inline int
+scan_ordinary(struct asyncppp *ap, const unsigned char *buf, int count)
+{
+ int i, c;
+
+ for (i = 0; i < count; ++i) {
+ c = buf[i];
+ if (c == PPP_ESCAPE || c == PPP_FLAG
+ || (c < 0x20 && (ap->raccm & (1 << c)) != 0))
+ break;
+ }
+ return i;
+}
+
+/* called when a flag is seen - do end-of-packet processing */
+static inline void
+process_input_packet(struct asyncppp *ap)
+{
+ struct sk_buff *skb;
+ unsigned char *p;
+ unsigned int len, fcs;
+ int code = 0;
+
+ skb = ap->rpkt;
+ ap->rpkt = 0;
+ if ((ap->state & (SC_TOSS | SC_ESCAPE)) || skb == 0) {
+ ap->state &= ~(SC_TOSS | SC_ESCAPE);
+ if (skb != 0)
+ kfree_skb(skb);
+ return;
+ }
+
+ /* check the FCS */
+ p = skb->data;
+ len = skb->len;
+ if (len < 3)
+ goto err; /* too short */
+ fcs = PPP_INITFCS;
+ for (; len > 0; --len)
+ fcs = PPP_FCS(fcs, *p++);
+ if (fcs != PPP_GOODFCS)
+ goto err; /* bad FCS */
+ skb_trim(skb, skb->len - 2);
+
+ /* check for address/control and protocol compression */
+ p = skb->data;
+ if (p[0] == PPP_ALLSTATIONS && p[1] == PPP_UI) {
+ /* chop off address/control */
+ if (skb->len < 3)
+ goto err;
+ p = skb_pull(skb, 2);
+ }
+ if (p[0] & 1) {
+ /* protocol is compressed */
+ skb_push(skb, 1)[0] = 0;
+ } else if (skb->len < 2)
+ goto err;
+
+ /* all OK, give it to the generic layer or queue it */
+ if (ap->connected) {
+ ppp_input(&ap->chan, skb);
+ } else {
+ skb_queue_tail(&ap->rq, skb);
+ /* drop old frames if queue too long */
+ while (ap->rq.qlen > PPPASYNC_MAX_RQLEN
+ && (skb = skb_dequeue(&ap->rq)) != 0)
+				kfree_skb(skb);
+ wake_up_interruptible(&ap->rwait);
+ }
+ return;
+
+ err:
+ kfree_skb(skb);
+ if (ap->connected)
+ ppp_input_error(&ap->chan, code);
+}
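After the FCS check, process_input_packet() undoes the two PPP header compressions negotiated by LCP: an uncompressed FF 03 address/control pair is chopped off, and a protocol field compressed down to a single odd byte is expanded back to two bytes. As a hedged illustration, the same logic on a flat buffer instead of an skb (the helper name `unframe` is ours):

```c
#include <assert.h>

/* Illustrative un-framing of a received PPP frame body (FCS already
 * stripped): drop an uncompressed FF 03 (PPP_ALLSTATIONS, PPP_UI)
 * address/control pair, and expand a protocol field compressed to one
 * odd byte back to two bytes.  Returns the new length, or -1 if the
 * frame is too short, mirroring the err cases above. */
static int unframe(unsigned char *p, int len)
{
	int i;

	if (len >= 2 && p[0] == 0xff && p[1] == 0x03) {
		len -= 2;
		for (i = 0; i < len; i++)	/* chop off address/control */
			p[i] = p[i + 2];
	}
	if (len < 1)
		return -1;
	if (p[0] & 1) {
		/* compressed protocol: make room for the high 0x00 byte */
		for (i = len; i > 0; i--)
			p[i] = p[i - 1];
		p[0] = 0;
		len++;
	} else if (len < 2)
		return -1;
	return len;
}
```

The caller must provide one spare byte of room for the protocol expansion, which is why the skb version reserves headroom and uses skb_push() instead of shifting.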
+
+static inline void
+input_error(struct asyncppp *ap, int code)
+{
+ ap->state |= SC_TOSS;
+ if (ap->connected)
+ ppp_input_error(&ap->chan, code);
+}
+
+/* called when the tty driver has data for us. */
+static void
+ppp_async_input(struct asyncppp *ap, const unsigned char *buf,
+ char *flags, int count)
+{
+ struct sk_buff *skb;
+ int c, i, j, n, s, f;
+ unsigned char *sp;
+
+ /* update bits used for 8-bit cleanness detection */
+ if (~ap->rbits & SC_RCV_BITS) {
+ s = 0;
+ for (i = 0; i < count; ++i) {
+ c = buf[i];
+ if (flags != 0 && flags[i] != 0)
+ continue;
+ s |= (c & 0x80)? SC_RCV_B7_1: SC_RCV_B7_0;
+ c = ((c >> 4) ^ c) & 0xf;
+ s |= (0x6996 & (1 << c))? SC_RCV_ODDP: SC_RCV_EVNP;
+ }
+ ap->rbits |= s;
+ }
+
+ while (count > 0) {
+ /* scan through and see how many chars we can do in bulk */
+ if ((ap->state & SC_ESCAPE) && buf[0] == PPP_ESCAPE)
+ n = 1;
+ else
+ n = scan_ordinary(ap, buf, count);
+
+ f = 0;
+ if (flags != 0 && (ap->state & SC_TOSS) == 0) {
+ /* check the flags to see if any char had an error */
+ for (j = 0; j < n; ++j)
+ if ((f = flags[j]) != 0)
+ break;
+ }
+ if (f != 0) {
+ /* start tossing */
+ input_error(ap, f);
+
+ } else if (n > 0 && (ap->state & SC_TOSS) == 0) {
+ /* stuff the chars in the skb */
+ skb = ap->rpkt;
+ if (skb == 0) {
+ skb = alloc_skb(ap->mru + PPP_HDRLEN + 2,
+ GFP_ATOMIC);
+ if (skb == 0)
+ goto nomem;
+ /* Try to get the payload 4-byte aligned */
+ if (buf[0] != PPP_ALLSTATIONS)
+ skb_reserve(skb, 2 + (buf[0] & 1));
+ ap->rpkt = skb;
+ }
+ if (n > skb_tailroom(skb)) {
+ /* packet overflowed MRU */
+ input_error(ap, 1);
+ } else {
+ sp = skb_put(skb, n);
+ memcpy(sp, buf, n);
+ if (ap->state & SC_ESCAPE) {
+ sp[0] ^= 0x20;
+ ap->state &= ~SC_ESCAPE;
+ }
+ }
+ }
+
+ if (n >= count)
+ break;
+
+ c = buf[n];
+ if (c == PPP_FLAG) {
+ process_input_packet(ap);
+ } else if (c == PPP_ESCAPE) {
+ ap->state |= SC_ESCAPE;
+ }
+ /* otherwise it's a char in the recv ACCM */
+ ++n;
+
+ buf += n;
+ if (flags != 0)
+ flags += n;
+ count -= n;
+ }
+ return;
+
+ nomem:
+ printk(KERN_ERR "PPPasync: no memory (input pkt)\n");
+ input_error(ap, 0);
+}
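The 8-bit-cleanness detection at the top of ppp_async_input() uses the constant 0x6996 as a 16-entry parity table: a byte is folded to four bits, and bit i of 0x6996 is the parity of i. A minimal standalone sketch of that trick (the helper name `byte_parity` is ours):

```c
#include <assert.h>

/* Parity of a byte via the 0x6996 nibble-fold trick used in the
 * SC_RCV_ODDP/SC_RCV_EVNP detection: XOR the two nibbles together,
 * then index the 16-bit constant 0x6996, whose bit i holds the parity
 * of the 4-bit value i.  Returns 1 for odd parity, 0 for even. */
static int byte_parity(unsigned char c)
{
	c = ((c >> 4) ^ c) & 0xf;
	return (0x6996 >> c) & 1;
}
```

Accumulating these bits (together with a bit-7 seen/clear flag) over received data lets pppd later diagnose a serial link that is stripping the top bit or imposing parity.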
+
+#ifdef MODULE
+int
+init_module(void)
+{
+ return ppp_async_init();
+}
+
+void
+cleanup_module(void)
+{
+ if (tty_register_ldisc(N_PPP, NULL) != 0)
+ printk(KERN_ERR "failed to unregister PPP line discipline\n");
+}
+#endif /* MODULE */
--- /dev/null
+/*
+ * Generic PPP layer for Linux.
+ *
+ * Copyright 1999 Paul Mackerras.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * The generic PPP layer handles the PPP network interfaces, the
+ * /dev/ppp device, packet and VJ compression, and multilink.
+ * It talks to PPP `channels' via the interface defined in
+ * include/linux/ppp_channel.h. Channels provide the basic means for
+ * sending and receiving PPP frames on some kind of communications
+ * channel.
+ *
+ * Part of the code in this driver was inspired by the old async-only
+ * PPP driver, written by Michael Callahan and Al Longyear, and
+ * subsequently hacked by Paul Mackerras.
+ *
+ * ==FILEVERSION 990806==
+ */
+
+/* $Id$ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/kmod.h>
+#include <linux/list.h>
+#include <linux/netdevice.h>
+#include <linux/poll.h>
+#include <linux/ppp_defs.h>
+#include <linux/if_ppp.h>
+#include <linux/ppp_channel.h>
+#include <linux/ppp-comp.h>
+#include <linux/skbuff.h>
+#include <linux/rtnetlink.h>
+#include <linux/if_arp.h>
+#include <linux/ip.h>
+#include <linux/tcp.h>
+#include <net/slhc_vj.h>
+#include <asm/spinlock.h>
+
+#define PPP_VERSION "2.4.0"
+
+EXPORT_SYMBOL(ppp_register_channel);
+EXPORT_SYMBOL(ppp_unregister_channel);
+EXPORT_SYMBOL(ppp_input);
+EXPORT_SYMBOL(ppp_input_error);
+EXPORT_SYMBOL(ppp_output_wakeup);
+EXPORT_SYMBOL(ppp_register_compressor);
+EXPORT_SYMBOL(ppp_unregister_compressor);
+
+/*
+ * Network protocols we support.
+ */
+#define NP_IP 0 /* Internet Protocol V4 */
+#define NP_IPV6 1 /* Internet Protocol V6 */
+#define NP_IPX 2 /* IPX protocol */
+#define NP_AT 3 /* Appletalk protocol */
+#define NUM_NP 4 /* Number of NPs. */
+
+/*
+ * Data structure describing one ppp unit.
+ * A ppp unit corresponds to a ppp network interface device
+ * and represents a multilink bundle.
+ * It may have 0 or more ppp channels connected to it.
+ */
+struct ppp {
+ struct list_head list; /* link in list of ppp units */
+ int index; /* interface unit number */
+ char name[16]; /* unit name */
+ int refcnt; /* # open /dev/ppp attached */
+ unsigned long busy; /* lock and other bits */
+ struct list_head channels; /* list of attached channels */
+ int n_channels; /* how many channels are attached */
+ int mru; /* max receive unit */
+ unsigned int flags; /* control bits */
+ unsigned int xstate; /* transmit state bits */
+ unsigned int rstate; /* receive state bits */
+ int debug; /* debug flags */
+ struct slcompress *vj; /* state for VJ header compression */
+ struct sk_buff_head xq; /* pppd transmit queue */
+ struct sk_buff_head rq; /* receive queue for pppd */
+ wait_queue_head_t rwait; /* for poll on reading /dev/ppp */
+ enum NPmode npmode[NUM_NP]; /* what to do with each net proto */
+ struct sk_buff *xmit_pending; /* a packet ready to go out */
+ struct sk_buff_head recv_pending;/* pending input packets */
+ struct compressor *xcomp; /* transmit packet compressor */
+ void *xc_state; /* its internal state */
+ struct compressor *rcomp; /* receive decompressor */
+ void *rc_state; /* its internal state */
+ unsigned long last_xmit; /* jiffies when last pkt sent */
+ unsigned long last_recv; /* jiffies when last pkt rcvd */
+ struct device dev; /* network interface device */
+ struct net_device_stats stats; /* statistics */
+};
+
+static LIST_HEAD(all_ppp_units);
+static spinlock_t all_ppp_lock = SPIN_LOCK_UNLOCKED;
+
+/*
+ * Private data structure for each channel.
+ * Ultimately this will have multilink stuff etc. in it.
+ */
+struct channel {
+ struct list_head list; /* link in list of channels per unit */
+ struct ppp_channel *chan; /* public channel data structure */
+ int blocked; /* if channel refused last packet */
+ struct ppp *ppp; /* ppp unit we're connected to */
+};
+
+/* Bit numbers in busy */
+#define XMIT_BUSY 0
+#define RECV_BUSY 1
+#define XMIT_WAKEUP 2
+
+/*
+ * Bits in flags: SC_NO_TCP_CCID, SC_CCP_OPEN, SC_CCP_UP, SC_LOOP_TRAFFIC.
+ * Bits in rstate: SC_DECOMP_RUN, SC_DC_ERROR, SC_DC_FERROR.
+ * Bits in xstate: SC_COMP_RUN
+ */
+#define SC_FLAG_BITS (SC_NO_TCP_CCID|SC_CCP_OPEN|SC_CCP_UP|SC_LOOP_TRAFFIC)
+
+/* Get the PPP protocol number from a skb */
+#define PPP_PROTO(skb) (((skb)->data[0] << 8) + (skb)->data[1])
+
+/* We limit the length of ppp->rq to this (arbitrary) value */
+#define PPP_MAX_RQLEN 32
+
+/* Prototypes. */
+static void ppp_xmit_unlock(struct ppp *ppp);
+static void ppp_send_frame(struct ppp *ppp, struct sk_buff *skb);
+static void ppp_push(struct ppp *ppp);
+static void ppp_recv_unlock(struct ppp *ppp);
+static void ppp_receive_frame(struct ppp *ppp, struct sk_buff *skb);
+static struct sk_buff *ppp_decompress_frame(struct ppp *ppp,
+ struct sk_buff *skb);
+static int ppp_set_compress(struct ppp *ppp, unsigned long arg);
+static void ppp_ccp_peek(struct ppp *ppp, struct sk_buff *skb, int inbound);
+static void ppp_ccp_closed(struct ppp *ppp);
+static struct compressor *find_compressor(int type);
+static void ppp_get_stats(struct ppp *ppp, struct ppp_stats *st);
+static struct ppp *ppp_create_unit(int unit, int *retp);
+static struct ppp *ppp_find_unit(int unit);
+
+/* Translates a PPP protocol number to a NP index (NP == network protocol) */
+static inline int proto_to_npindex(int proto)
+{
+ switch (proto) {
+ case PPP_IP:
+ return NP_IP;
+ case PPP_IPV6:
+ return NP_IPV6;
+ case PPP_IPX:
+ return NP_IPX;
+ case PPP_AT:
+ return NP_AT;
+ }
+ return -EINVAL;
+}
+
+/* Translates an NP index into a PPP protocol number */
+static const int npindex_to_proto[NUM_NP] = {
+ PPP_IP,
+ PPP_IPV6,
+ PPP_IPX,
+ PPP_AT,
+};
+
+/* Translates an ethertype into an NP index */
+static inline int ethertype_to_npindex(int ethertype)
+{
+ switch (ethertype) {
+ case ETH_P_IP:
+ return NP_IP;
+ case ETH_P_IPV6:
+ return NP_IPV6;
+ case ETH_P_IPX:
+ return NP_IPX;
+ case ETH_P_PPPTALK:
+ case ETH_P_ATALK:
+ return NP_AT;
+ }
+ return -1;
+}
+
+/* Translates an NP index into an ethertype */
+static const int npindex_to_ethertype[NUM_NP] = {
+ ETH_P_IP,
+ ETH_P_IPV6,
+ ETH_P_IPX,
+ ETH_P_PPPTALK,
+};
+
+/*
+ * Routines for locking and unlocking the transmit and receive paths
+ * of each unit.
+ */
+static inline void
+lock_path(struct ppp *ppp, int bit)
+{
+ int timeout = 1000000;
+
+ do {
+ while (test_bit(bit, &ppp->busy)) {
+ mb();
+ if (--timeout == 0) {
+ printk(KERN_ERR "lock_path timeout ppp=%p bit=%x\n", ppp, bit);
+ return;
+ }
+ }
+ } while (test_and_set_bit(bit, &ppp->busy));
+ mb();
+}
+
+static inline int
+trylock_path(struct ppp *ppp, int bit)
+{
+ if (test_and_set_bit(bit, &ppp->busy))
+ return 0;
+ mb();
+ return 1;
+}
+
+static inline void
+unlock_path(struct ppp *ppp, int bit)
+{
+ mb();
+ clear_bit(bit, &ppp->busy);
+}
+
+#define lock_xmit_path(ppp) lock_path(ppp, XMIT_BUSY)
+#define trylock_xmit_path(ppp) trylock_path(ppp, XMIT_BUSY)
+#define unlock_xmit_path(ppp) unlock_path(ppp, XMIT_BUSY)
+#define lock_recv_path(ppp) lock_path(ppp, RECV_BUSY)
+#define trylock_recv_path(ppp) trylock_path(ppp, RECV_BUSY)
+#define unlock_recv_path(ppp) unlock_path(ppp, RECV_BUSY)
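The transmit and receive paths are each serialized by a single bit in ppp->busy, taken with test_and_set_bit and released with clear_bit; lock_path additionally spins with a timeout as a debugging guard. A hedged user-space sketch of the same busy-bit pattern using C11 atomics (`struct path_lock` and these helper names are ours, not the kernel's):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* One busy bit per path, as in ppp->busy's XMIT_BUSY/RECV_BUSY bits. */
struct path_lock {
	atomic_flag busy;
};

/* Returns true iff we acquired the lock (the bit was previously clear),
 * like trylock_path()'s test_and_set_bit. */
static bool try_lock_path(struct path_lock *l)
{
	return !atomic_flag_test_and_set_explicit(&l->busy,
						  memory_order_acquire);
}

/* Release the bit; the release ordering plays the role of the mb()
 * barriers around clear_bit above. */
static void release_path(struct path_lock *l)
{
	atomic_flag_clear_explicit(&l->busy, memory_order_release);
}
```

Callers that fail try_lock_path() do not block: they set XMIT_WAKEUP (or queue the packet) and let whoever holds the bit finish the work, which is the pattern ppp_async_push() and ppp_xmit_unlock() both follow.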
+
+static inline void
+free_skbs(struct sk_buff_head *head)
+{
+ struct sk_buff *skb;
+
+ while ((skb = skb_dequeue(head)) != 0)
+ kfree_skb(skb);
+}
+
+/*
+ * /dev/ppp device routines.
+ * The /dev/ppp device is used by pppd to control the ppp unit.
+ * It supports the read, write, ioctl and poll functions.
+ */
+static int ppp_open(struct inode *inode, struct file *file)
+{
+ /*
+ * This could (should?) be enforced by the permissions on /dev/ppp.
+ */
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+ MOD_INC_USE_COUNT;
+ return 0;
+}
+
+static int ppp_release(struct inode *inode, struct file *file)
+{
+ struct ppp *ppp = (struct ppp *) file->private_data;
+ struct list_head *list, *next;
+ int ref;
+
+ if (ppp == 0)
+ goto out;
+ file->private_data = 0;
+ spin_lock(&all_ppp_lock);
+ ref = --ppp->refcnt;
+ if (ref == 0)
+ list_del(&ppp->list);
+ spin_unlock(&all_ppp_lock);
+ if (ref != 0)
+ goto out;
+
+ /* Last fd open to this ppp unit is being closed -
+ mark the interface down, free the ppp unit */
+ rtnl_lock();
+ dev_close(&ppp->dev);
+ rtnl_unlock();
+ for (list = ppp->channels.next; list != &ppp->channels; list = next) {
+ /* forcibly detach this channel */
+ struct channel *chan;
+ chan = list_entry(list, struct channel, list);
+ chan->chan->ppp = 0;
+ next = list->next;
+ kfree(chan);
+ }
+
+ /* Free up resources. */
+ ppp_ccp_closed(ppp);
+ lock_xmit_path(ppp);
+ lock_recv_path(ppp);
+ if (ppp->vj) {
+ slhc_free(ppp->vj);
+ ppp->vj = 0;
+ }
+ free_skbs(&ppp->xq);
+ free_skbs(&ppp->rq);
+ free_skbs(&ppp->recv_pending);
+ unregister_netdev(&ppp->dev);
+ kfree(ppp);
+
+ out:
+ MOD_DEC_USE_COUNT;
+ return 0;
+}
+
+static ssize_t ppp_read(struct file *file, char *buf,
+ size_t count, loff_t *ppos)
+{
+ struct ppp *ppp = (struct ppp *) file->private_data;
+ DECLARE_WAITQUEUE(wait, current);
+ ssize_t ret;
+ struct sk_buff *skb = 0;
+
+ ret = -ENXIO;
+ if (ppp == 0)
+ goto out; /* not currently attached */
+
+ add_wait_queue(&ppp->rwait, &wait);
+ current->state = TASK_INTERRUPTIBLE;
+ for (;;) {
+ ret = -EAGAIN;
+ skb = skb_dequeue(&ppp->rq);
+ if (skb)
+ break;
+ if (file->f_flags & O_NONBLOCK)
+ break;
+ ret = -ERESTARTSYS;
+ if (signal_pending(current))
+ break;
+ schedule();
+ }
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&ppp->rwait, &wait);
+
+ if (skb == 0)
+ goto out;
+
+ ret = -EOVERFLOW;
+ if (skb->len > count)
+ goto outf;
+ ret = -EFAULT;
+ if (copy_to_user(buf, skb->data, skb->len))
+ goto outf;
+ ret = skb->len;
+
+ outf:
+ kfree_skb(skb);
+ out:
+ return ret;
+}
+
+static ssize_t ppp_write(struct file *file, const char *buf,
+ size_t count, loff_t *ppos)
+{
+ struct ppp *ppp = (struct ppp *) file->private_data;
+ struct sk_buff *skb;
+ ssize_t ret;
+
+ ret = -ENXIO;
+ if (ppp == 0)
+ goto out;
+
+ ret = -ENOMEM;
+ skb = alloc_skb(count + 2, GFP_KERNEL);
+ if (skb == 0)
+ goto out;
+ skb_reserve(skb, 2);
+ ret = -EFAULT;
+ if (copy_from_user(skb_put(skb, count), buf, count)) {
+ kfree_skb(skb);
+ goto out;
+ }
+
+ skb_queue_tail(&ppp->xq, skb);
+ if (trylock_xmit_path(ppp))
+ ppp_xmit_unlock(ppp);
+
+ ret = count;
+
+ out:
+ return ret;
+}
+
+static unsigned int ppp_poll(struct file *file, poll_table *wait)
+{
+ struct ppp *ppp = (struct ppp *) file->private_data;
+ unsigned int mask;
+
+ if (ppp == 0)
+ return 0;
+ poll_wait(file, &ppp->rwait, wait);
+ mask = POLLOUT | POLLWRNORM;
+ if (skb_peek(&ppp->rq) != 0)
+ mask |= POLLIN | POLLRDNORM;
+ return mask;
+}
+
+static int ppp_ioctl(struct inode *inode, struct file *file,
+ unsigned int cmd, unsigned long arg)
+{
+ struct ppp *ppp = (struct ppp *) file->private_data;
+ int err, val, val2, i;
+ struct ppp_idle idle;
+ struct npioctl npi;
+
+ if (cmd == PPPIOCNEWUNIT) {
+ /* Create a new ppp unit */
+ int unit, ret;
+
+ if (ppp != 0)
+ return -EINVAL;
+ if (get_user(unit, (int *) arg))
+ return -EFAULT;
+ ppp = ppp_create_unit(unit, &ret);
+ if (ppp == 0)
+ return ret;
+ file->private_data = ppp;
+ if (put_user(ppp->index, (int *) arg))
+ return -EFAULT;
+ return 0;
+ }
+ if (cmd == PPPIOCATTACH) {
+ /* Attach to an existing ppp unit */
+ int unit;
+
+ if (ppp != 0)
+ return -EINVAL;
+ if (get_user(unit, (int *) arg))
+ return -EFAULT;
+ spin_lock(&all_ppp_lock);
+ ppp = ppp_find_unit(unit);
+ if (ppp != 0)
+ ++ppp->refcnt;
+ spin_unlock(&all_ppp_lock);
+ if (ppp == 0)
+ return -ENXIO;
+ file->private_data = ppp;
+ return 0;
+ }
+
+ if (ppp == 0)
+ return -ENXIO;
+ err = -EFAULT;
+ switch (cmd) {
+ case PPPIOCSMRU:
+ if (get_user(val, (int *) arg))
+ break;
+ ppp->mru = val;
+ err = 0;
+ break;
+
+ case PPPIOCSFLAGS:
+ if (get_user(val, (int *) arg))
+ break;
+ if (ppp->flags & ~val & SC_CCP_OPEN)
+ ppp_ccp_closed(ppp);
+ ppp->flags = val & SC_FLAG_BITS;
+ err = 0;
+ break;
+
+ case PPPIOCGFLAGS:
+ val = ppp->flags | ppp->xstate | ppp->rstate;
+ if (put_user(val, (int *) arg))
+ break;
+ err = 0;
+ break;
+
+ case PPPIOCSCOMPRESS:
+ err = ppp_set_compress(ppp, arg);
+ break;
+
+ case PPPIOCGUNIT:
+ if (put_user(ppp->index, (int *) arg))
+ break;
+ err = 0;
+ break;
+
+ case PPPIOCSDEBUG:
+ if (get_user(val, (int *) arg))
+ break;
+ ppp->debug = val;
+ err = 0;
+ break;
+
+ case PPPIOCGDEBUG:
+ if (put_user(ppp->debug, (int *) arg))
+ break;
+ err = 0;
+ break;
+
+ case PPPIOCGIDLE:
+ idle.xmit_idle = (jiffies - ppp->last_xmit) / HZ;
+ idle.recv_idle = (jiffies - ppp->last_recv) / HZ;
+ if (copy_to_user((void *) arg, &idle, sizeof(idle)))
+ break;
+ err = 0;
+ break;
+
+ case PPPIOCSMAXCID:
+ if (get_user(val, (int *) arg))
+ break;
+ val2 = 15;
+ if ((val >> 16) != 0) {
+ val2 = val >> 16;
+ val &= 0xffff;
+ }
+ lock_xmit_path(ppp);
+ lock_recv_path(ppp);
+ if (ppp->vj != 0)
+ slhc_free(ppp->vj);
+ ppp->vj = slhc_init(val2+1, val+1);
+ ppp_recv_unlock(ppp);
+ ppp_xmit_unlock(ppp);
+ err = -ENOMEM;
+ if (ppp->vj == 0) {
+ printk(KERN_ERR "PPP: no memory (VJ compressor)\n");
+ break;
+ }
+ err = 0;
+ break;
+
+ case PPPIOCGNPMODE:
+ case PPPIOCSNPMODE:
+ if (copy_from_user(&npi, (void *) arg, sizeof(npi)))
+ break;
+ err = proto_to_npindex(npi.protocol);
+ if (err < 0)
+ break;
+ i = err;
+ if (cmd == PPPIOCGNPMODE) {
+ err = -EFAULT;
+ npi.mode = ppp->npmode[i];
+ if (copy_to_user((void *) arg, &npi, sizeof(npi)))
+ break;
+ } else {
+ ppp->npmode[i] = npi.mode;
+ /* we may be able to transmit more packets now (??) */
+ mark_bh(NET_BH);
+ }
+ err = 0;
+ break;
+
+ default:
+ err = -ENOTTY;
+ }
+ return err;
+}
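The PPPIOCSMAXCID case above packs two VJ slot counts into one int: the low 16 bits give `val` and the optional high 16 bits give `val2`, which defaults to 15 when the high half is zero; slhc_init() is then called with each count plus one. A small sketch of just that unpacking step (the helper name `split_maxcid` is ours):

```c
#include <assert.h>

/* Mirror of the PPPIOCSMAXCID argument decoding: low 16 bits -> *val,
 * high 16 bits -> *val2 if nonzero, else *val2 defaults to 15. */
static void split_maxcid(int arg, int *val, int *val2)
{
	*val2 = 15;
	*val = arg;
	if ((arg >> 16) != 0) {
		*val2 = arg >> 16;
		*val &= 0xffff;
	}
}
```

Keeping both counts in one ioctl argument preserves compatibility with older pppd binaries that only ever passed a single max-CID value.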
+
+static struct file_operations ppp_device_fops = {
+ NULL, /* seek */
+ ppp_read,
+ ppp_write,
+ NULL, /* readdir */
+ ppp_poll,
+ ppp_ioctl,
+ NULL, /* mmap */
+ ppp_open,
+ NULL, /* flush */
+ ppp_release
+};
+
+#define PPP_MAJOR 108
+
+/* Called at boot time if ppp is compiled into the kernel,
+ or at module load time (from init_module) if compiled as a module. */
+int
+ppp_init(struct device *dev)
+{
+ int err;
+#ifndef MODULE
+ extern struct compressor ppp_deflate, ppp_deflate_draft;
+ extern int ppp_async_init(void);
+#endif
+
+ printk(KERN_INFO "PPP generic driver version " PPP_VERSION "\n");
+ err = register_chrdev(PPP_MAJOR, "ppp", &ppp_device_fops);
+ if (err)
+ printk(KERN_ERR "failed to register PPP device (%d)\n", err);
+#ifndef MODULE
+#ifdef CONFIG_PPP_ASYNC
+ ppp_async_init();
+#endif
+#ifdef CONFIG_PPP_DEFLATE
+ if (ppp_register_compressor(&ppp_deflate) == 0)
+ printk(KERN_INFO "PPP Deflate compression module registered\n");
+ ppp_register_compressor(&ppp_deflate_draft);
+#endif
+#endif /* MODULE */
+
+ return -ENODEV;
+}
+
+/*
+ * Network interface unit routines.
+ */
+static int
+ppp_start_xmit(struct sk_buff *skb, struct device *dev)
+{
+ struct ppp *ppp = (struct ppp *) dev->priv;
+ int npi, proto;
+ unsigned char *pp;
+
+ if (skb == 0)
+ return 0;
+ /* can skb->data ever be 0? */
+
+ npi = ethertype_to_npindex(ntohs(skb->protocol));
+ if (npi < 0)
+ goto outf;
+
+ /* Drop, accept or reject the packet */
+ switch (ppp->npmode[npi]) {
+ case NPMODE_PASS:
+ break;
+ case NPMODE_QUEUE:
+ /* it would be nice to have a way to tell the network
+ system to queue this one up for later. */
+ goto outf;
+ case NPMODE_DROP:
+ case NPMODE_ERROR:
+ goto outf;
+ }
+
+ /* The transmit side of the ppp interface is serialized by
+ the XMIT_BUSY bit in ppp->busy. */
+ if (!trylock_xmit_path(ppp)) {
+ dev->tbusy = 1;
+ return 1;
+ }
+ if (ppp->xmit_pending)
+ ppp_push(ppp);
+ if (ppp->xmit_pending) {
+ dev->tbusy = 1;
+ ppp_xmit_unlock(ppp);
+ return 1;
+ }
+ dev->tbusy = 0;
+
+ /* Put the 2-byte PPP protocol number on the front,
+ making sure there is room for the address and control fields. */
+ if (skb_headroom(skb) < PPP_HDRLEN) {
+ struct sk_buff *ns;
+
+ ns = alloc_skb(skb->len + PPP_HDRLEN, GFP_ATOMIC);
+ if (ns == 0)
+ goto outnbusy;
+ skb_reserve(ns, PPP_HDRLEN);
+ memcpy(skb_put(ns, skb->len), skb->data, skb->len);
+ kfree_skb(skb);
+ skb = ns;
+ }
+ pp = skb_push(skb, 2);
+ proto = npindex_to_proto[npi];
+ pp[0] = proto >> 8;
+ pp[1] = proto;
+
+ ppp_send_frame(ppp, skb);
+ ppp_xmit_unlock(ppp);
+ return 0;
+
+ outnbusy:
+ ppp_xmit_unlock(ppp);
+
+ outf:
+ kfree_skb(skb);
+ return 0;
+}
+
+static struct net_device_stats *
+ppp_net_stats(struct device *dev)
+{
+ struct ppp *ppp = (struct ppp *) dev->priv;
+
+ return &ppp->stats;
+}
+
+static int
+ppp_net_ioctl(struct device *dev, struct ifreq *ifr, int cmd)
+{
+ struct ppp *ppp = dev->priv;
+ int err = -EFAULT;
+ void *addr = (void *) ifr->ifr_ifru.ifru_data;
+ struct ppp_stats stats;
+ struct ppp_comp_stats cstats;
+ char *vers;
+
+ switch (cmd) {
+ case SIOCGPPPSTATS:
+ ppp_get_stats(ppp, &stats);
+ if (copy_to_user(addr, &stats, sizeof(stats)))
+ break;
+ err = 0;
+ break;
+
+ case SIOCGPPPCSTATS:
+ memset(&cstats, 0, sizeof(cstats));
+ if (ppp->xc_state != 0)
+ ppp->xcomp->comp_stat(ppp->xc_state, &cstats.c);
+ if (ppp->rc_state != 0)
+ ppp->rcomp->decomp_stat(ppp->rc_state, &cstats.d);
+ if (copy_to_user(addr, &cstats, sizeof(cstats)))
+ break;
+ err = 0;
+ break;
+
+ case SIOCGPPPVER:
+ vers = PPP_VERSION;
+ if (copy_to_user(addr, vers, strlen(vers) + 1))
+ break;
+ err = 0;
+ break;
+
+ default:
+ err = -EINVAL;
+ }
+
+ return err;
+}
+
+int
+ppp_net_init(struct device *dev)
+{
+ dev->hard_header_len = PPP_HDRLEN;
+ dev->mtu = PPP_MTU;
+ dev->hard_start_xmit = ppp_start_xmit;
+ dev->get_stats = ppp_net_stats;
+ dev->do_ioctl = ppp_net_ioctl;
+ dev->addr_len = 0;
+ dev->tx_queue_len = 3;
+ dev->type = ARPHRD_PPP;
+ dev->flags = IFF_POINTOPOINT | IFF_NOARP | IFF_MULTICAST;
+
+ dev_init_buffers(dev);
+ return 0;
+}
+
+/*
+ * Transmit-side routines.
+ */
+
+/*
+ * Called to unlock the transmit side of the ppp unit,
+ * making sure that any work queued up gets done.
+ */
+static void
+ppp_xmit_unlock(struct ppp *ppp)
+{
+ struct sk_buff *skb;
+
+ for (;;) {
+ if (test_and_clear_bit(XMIT_WAKEUP, &ppp->busy))
+ ppp_push(ppp);
+ while (ppp->xmit_pending == 0
+ && (skb = skb_dequeue(&ppp->xq)) != 0)
+ ppp_send_frame(ppp, skb);
+ unlock_xmit_path(ppp);
+ if (!(test_bit(XMIT_WAKEUP, &ppp->busy)
+ || (ppp->xmit_pending == 0 && skb_peek(&ppp->xq))))
+ break;
+ if (!trylock_xmit_path(ppp))
+ break;
+ }
+}
+
+/*
+ * Compress and send a frame.
+ * The caller should have locked the xmit path,
+ * and xmit_pending should be 0.
+ */
+static void
+ppp_send_frame(struct ppp *ppp, struct sk_buff *skb)
+{
+ int proto = PPP_PROTO(skb);
+ struct sk_buff *new_skb;
+ int len;
+ unsigned char *cp;
+
+ ++ppp->stats.tx_packets;
+ ppp->stats.tx_bytes += skb->len - 2;
+
+ switch (proto) {
+ case PPP_IP:
+ if (ppp->vj == 0 || (ppp->flags & SC_COMP_TCP) == 0)
+ break;
+ /* try to do VJ TCP header compression */
+ new_skb = alloc_skb(skb->len + 2, GFP_ATOMIC);
+ if (new_skb == 0) {
+ printk(KERN_ERR "PPP: no memory (VJ comp pkt)\n");
+ goto drop;
+ }
+ skb_reserve(new_skb, 2);
+ cp = skb->data + 2;
+ len = slhc_compress(ppp->vj, cp, skb->len - 2,
+ new_skb->data + 2, &cp,
+ !(ppp->flags & SC_NO_TCP_CCID));
+ if (cp == skb->data + 2) {
+ /* didn't compress */
+ kfree_skb(new_skb);
+ } else {
+ if (cp[0] & SL_TYPE_COMPRESSED_TCP) {
+ proto = PPP_VJC_COMP;
+ cp[0] &= ~SL_TYPE_COMPRESSED_TCP;
+ } else {
+ proto = PPP_VJC_UNCOMP;
+ cp[0] = skb->data[2];
+ }
+ kfree_skb(skb);
+ skb = new_skb;
+ cp = skb_put(skb, len + 2);
+ cp[0] = 0;
+ cp[1] = proto;
+ }
+ break;
+
+ case PPP_CCP:
+ /* peek at outbound CCP frames */
+ ppp_ccp_peek(ppp, skb, 0);
+ break;
+ }
+
+ /* try to do packet compression */
+ if ((ppp->xstate & SC_COMP_RUN) && ppp->xc_state != 0
+ && proto != PPP_LCP && proto != PPP_CCP) {
+ new_skb = alloc_skb(ppp->dev.mtu + PPP_HDRLEN, GFP_ATOMIC);
+ if (new_skb == 0) {
+ printk(KERN_ERR "PPP: no memory (comp pkt)\n");
+ goto drop;
+ }
+
+ /* compressor still expects A/C bytes in hdr */
+ len = ppp->xcomp->compress(ppp->xc_state, skb->data - 2,
+ new_skb->data, skb->len + 2,
+ ppp->dev.mtu + PPP_HDRLEN);
+ if (len > 0 && (ppp->flags & SC_CCP_UP)) {
+ kfree_skb(skb);
+ skb = new_skb;
+ skb_put(skb, len);
+ skb_pull(skb, 2); /* pull off A/C bytes */
+ } else {
+ /* didn't compress, or CCP not up yet */
+ kfree_skb(new_skb);
+ }
+ }
+
+ /*
+ * If we are waiting for traffic (demand dialling),
+ * queue it up for pppd to receive.
+ */
+ if (ppp->flags & SC_LOOP_TRAFFIC) {
+ if (ppp->rq.qlen > PPP_MAX_RQLEN)
+ goto drop;
+ skb_queue_tail(&ppp->rq, skb);
+ wake_up_interruptible(&ppp->rwait);
+ return;
+ }
+
+ ppp->xmit_pending = skb;
+ ppp_push(ppp);
+ return;
+
+ drop:
+ kfree_skb(skb);
+ ++ppp->stats.tx_errors;
+}
+
+/*
+ * Try to send the frame in xmit_pending.
+ * The caller should have the xmit path locked.
+ */
+static void
+ppp_push(struct ppp *ppp)
+{
+ struct list_head *list;
+ struct channel *chan;
+ struct sk_buff *skb = ppp->xmit_pending;
+
+ if (skb == 0)
+ return;
+
+ list = &ppp->channels;
+ if (list_empty(list)) {
+ /* nowhere to send the packet, just drop it */
+ ppp->xmit_pending = 0;
+ kfree_skb(skb);
+ return;
+ }
+
+ /* If we are doing multilink, decide which channel gets the
+ packet, and/or fragment the packet over several links. */
+ /* XXX for now, just take the first channel */
+ list = list->next;
+ chan = list_entry(list, struct channel, list);
+
+ if (chan->chan->ops->start_xmit(chan->chan, skb)) {
+ ppp->xmit_pending = 0;
+ chan->blocked = 0;
+ } else
+ chan->blocked = 1;
+}
+
+/*
+ * Receive-side routines.
+ */
+static inline void
+ppp_do_recv(struct ppp *ppp, struct sk_buff *skb)
+{
+ skb_queue_tail(&ppp->recv_pending, skb);
+ if (trylock_recv_path(ppp))
+ ppp_recv_unlock(ppp);
+}
+
+void
+ppp_input(struct ppp_channel *chan, struct sk_buff *skb)
+{
+ struct channel *pch = chan->ppp;
+
+ if (pch == 0 || skb->len == 0) {
+ kfree_skb(skb);
+ return;
+ }
+ ppp_do_recv(pch->ppp, skb);
+}
+
+/* Put a 0-length skb in the receive queue as an error indication */
+void
+ppp_input_error(struct ppp_channel *chan, int code)
+{
+ struct channel *pch = chan->ppp;
+ struct sk_buff *skb;
+
+ if (pch == 0)
+ return;
+ skb = alloc_skb(0, GFP_ATOMIC);
+ if (skb == 0)
+ return;
+ skb->len = 0; /* probably unnecessary */
+ skb->cb[0] = code;
+ ppp_do_recv(pch->ppp, skb);
+}
+
+static void
+ppp_recv_unlock(struct ppp *ppp)
+{
+ struct sk_buff *skb;
+
+ for (;;) {
+ while ((skb = skb_dequeue(&ppp->recv_pending)) != 0)
+ ppp_receive_frame(ppp, skb);
+ unlock_recv_path(ppp);
+ if (skb_peek(&ppp->recv_pending) == 0)
+ break;
+ if (!trylock_recv_path(ppp))
+ break;
+ }
+}
+
+static void
+ppp_receive_frame(struct ppp *ppp, struct sk_buff *skb)
+{
+ struct sk_buff *ns;
+ int proto, len, npi;
+
+ if (skb->len == 0) {
+ /* XXX should do something with code in skb->cb[0] */
+ goto err; /* error indication */
+ }
+
+ if (skb->len < 2) {
+ ++ppp->stats.rx_length_errors;
+ goto err;
+ }
+
+ /* Decompress the frame, if compressed. */
+ if (ppp->rc_state != 0 && (ppp->rstate & SC_DECOMP_RUN)
+ && (ppp->rstate & (SC_DC_FERROR | SC_DC_ERROR)) == 0)
+ skb = ppp_decompress_frame(ppp, skb);
+
+ proto = PPP_PROTO(skb);
+ switch (proto) {
+ case PPP_VJC_COMP:
+ /* decompress VJ compressed packets */
+ if (ppp->vj == 0 || (ppp->flags & SC_REJ_COMP_TCP))
+ goto err;
+ if (skb_tailroom(skb) < 124) {
+ /* copy to a new sk_buff with more tailroom */
+ ns = alloc_skb(skb->len + 128, GFP_ATOMIC);
+ if (ns == 0) {
+				printk(KERN_ERR "PPP: no memory (VJ decomp)\n");
+ goto err;
+ }
+ skb_reserve(ns, 2);
+ memcpy(skb_put(ns, skb->len), skb->data, skb->len);
+ kfree_skb(skb);
+ skb = ns;
+ }
+ len = slhc_uncompress(ppp->vj, skb->data + 2, skb->len - 2);
+ if (len <= 0) {
+ printk(KERN_ERR "PPP: VJ decompression error\n");
+ goto err;
+ }
+ len += 2;
+ if (len > skb->len)
+ skb_put(skb, len - skb->len);
+ else if (len < skb->len)
+ skb_trim(skb, len);
+ proto = PPP_IP;
+ break;
+
+ case PPP_VJC_UNCOMP:
+ if (ppp->vj == 0 || (ppp->flags & SC_REJ_COMP_TCP))
+ goto err;
+ if (slhc_remember(ppp->vj, skb->data + 2, skb->len - 2) <= 0) {
+ printk(KERN_ERR "PPP: VJ uncompressed error\n");
+ goto err;
+ }
+ proto = PPP_IP;
+ break;
+
+ case PPP_CCP:
+ ppp_ccp_peek(ppp, skb, 1);
+ break;
+ }
+
+ ++ppp->stats.rx_packets;
+ ppp->stats.rx_bytes += skb->len - 2;
+
+ npi = proto_to_npindex(proto);
+ if (npi < 0) {
+ /* control or unknown frame - pass it to pppd */
+ skb_queue_tail(&ppp->rq, skb);
+ /* limit queue length by dropping old frames */
+ while (ppp->rq.qlen > PPP_MAX_RQLEN) {
+ skb = skb_dequeue(&ppp->rq);
+ if (skb)
+ kfree_skb(skb);
+ }
+ /* wake up any process polling or blocking on read */
+ wake_up_interruptible(&ppp->rwait);
+
+ } else {
+ /* network protocol frame - give it to the kernel */
+ ppp->last_recv = jiffies;
+ if ((ppp->dev.flags & IFF_UP) == 0
+ || ppp->npmode[npi] != NPMODE_PASS) {
+ kfree_skb(skb);
+ } else {
+ skb_pull(skb, 2); /* chop off protocol */
+ skb->dev = &ppp->dev;
+ skb->protocol = htons(npindex_to_ethertype[npi]);
+ skb->mac.raw = skb->data;
+ netif_rx(skb);
+ }
+ }
+ return;
+
+ err:
+ ++ppp->stats.rx_errors;
+ if (ppp->vj != 0)
+ slhc_toss(ppp->vj);
+ kfree_skb(skb);
+}
+
+static struct sk_buff *
+ppp_decompress_frame(struct ppp *ppp, struct sk_buff *skb)
+{
+ int proto = PPP_PROTO(skb);
+ struct sk_buff *ns;
+ int len;
+
+ if (proto == PPP_COMP) {
+ ns = alloc_skb(ppp->mru + PPP_HDRLEN, GFP_ATOMIC);
+ if (ns == 0) {
+ printk(KERN_ERR "ppp_receive: no memory\n");
+ goto err;
+ }
+ /* the decompressor still expects the A/C bytes in the hdr */
+ len = ppp->rcomp->decompress(ppp->rc_state, skb->data - 2,
+ skb->len + 2, ns->data, ppp->mru + PPP_HDRLEN);
+ if (len < 0) {
+ /* Pass the compressed frame to pppd as an
+ error indication. */
+ if (len == DECOMP_FATALERROR)
+ ppp->rstate |= SC_DC_FERROR;
+ goto err;
+ }
+
+ kfree_skb(skb);
+ skb = ns;
+ skb_put(skb, len);
+ skb_pull(skb, 2); /* pull off the A/C bytes */
+
+ } else {
+ /* Uncompressed frame - pass to decompressor so it
+ can update its dictionary if necessary. */
+ if (ppp->rcomp->incomp)
+ ppp->rcomp->incomp(ppp->rc_state, skb->data - 2,
+ skb->len + 2);
+ }
+
+ return skb;
+
+ err:
+ ppp->rstate |= SC_DC_ERROR;
+ if (ppp->vj != 0)
+ slhc_toss(ppp->vj);
+ ++ppp->stats.rx_errors;
+ return skb;
+}
+
+/*
+ * Channel interface.
+ */
+
+/*
+ * Connect a channel to a given PPP unit.
+ * The channel MUST NOT be connected to a PPP unit already.
+ */
+int
+ppp_register_channel(struct ppp_channel *chan, int unit)
+{
+ struct ppp *ppp;
+ struct channel *pch;
+ int ret = -ENXIO;
+
+ spin_lock(&all_ppp_lock);
+ ppp = ppp_find_unit(unit);
+ if (ppp == 0)
+ goto out;
+ pch = kmalloc(sizeof(struct channel), GFP_ATOMIC);
+ ret = -ENOMEM;
+ if (pch == 0)
+ goto out;
+ memset(pch, 0, sizeof(struct channel));
+ pch->ppp = ppp;
+ pch->chan = chan;
+ list_add(&pch->list, &ppp->channels);
+ chan->ppp = pch;
+ ++ppp->n_channels;
+ ret = 0;
+ out:
+ spin_unlock(&all_ppp_lock);
+ return ret;
+}
+
+/*
+ * Disconnect a channel from its PPP unit.
+ */
+void
+ppp_unregister_channel(struct ppp_channel *chan)
+{
+ struct channel *pch;
+
+ spin_lock(&all_ppp_lock);
+ if ((pch = chan->ppp) != 0) {
+ chan->ppp = 0;
+ list_del(&pch->list);
+ --pch->ppp->n_channels;
+ kfree(pch);
+ }
+ spin_unlock(&all_ppp_lock);
+}
+
+/*
+ * Callback from a channel when it can accept more to transmit.
+ * This should ideally be called at BH level, not interrupt level.
+ */
+void
+ppp_output_wakeup(struct ppp_channel *chan)
+{
+ struct channel *pch = chan->ppp;
+ struct ppp *ppp;
+
+ if (pch == 0)
+ return;
+ ppp = pch->ppp;
+ pch->blocked = 0;
+ set_bit(XMIT_WAKEUP, &ppp->busy);
+ if (trylock_xmit_path(ppp))
+ ppp_xmit_unlock(ppp);
+ if (ppp->xmit_pending == 0) {
+ ppp->dev.tbusy = 0;
+ mark_bh(NET_BH);
+ }
+}
+
+/*
+ * Compression control.
+ */
+
+/* Process the PPPIOCSCOMPRESS ioctl. */
+static int
+ppp_set_compress(struct ppp *ppp, unsigned long arg)
+{
+ int err;
+ struct compressor *cp;
+ struct ppp_option_data data;
+ unsigned char ccp_option[CCP_MAX_OPTION_LENGTH];
+#ifdef CONFIG_KMOD
+ char modname[32];
+#endif
+
+ err = -EFAULT;
+ if (copy_from_user(&data, (void *) arg, sizeof(data))
+ || (data.length <= CCP_MAX_OPTION_LENGTH
+ && copy_from_user(ccp_option, data.ptr, data.length)))
+ goto out;
+ err = -EINVAL;
+ if (data.length > CCP_MAX_OPTION_LENGTH
+ || ccp_option[1] < 2 || ccp_option[1] > data.length)
+ goto out;
+
+ cp = find_compressor(ccp_option[0]);
+#ifdef CONFIG_KMOD
+ if (cp == 0) {
+ sprintf(modname, "ppp-compress-%d", ccp_option[0]);
+ request_module(modname);
+ cp = find_compressor(ccp_option[0]);
+ }
+#endif /* CONFIG_KMOD */
+ if (cp == 0)
+ goto out;
+
+ err = -ENOBUFS;
+ if (data.transmit) {
+ lock_xmit_path(ppp);
+ ppp->xstate &= ~SC_COMP_RUN;
+ if (ppp->xc_state != 0) {
+ ppp->xcomp->comp_free(ppp->xc_state);
+ ppp->xc_state = 0;
+ }
+
+ ppp->xcomp = cp;
+ ppp->xc_state = cp->comp_alloc(ccp_option, data.length);
+ ppp_xmit_unlock(ppp);
+ if (ppp->xc_state == 0)
+ goto out;
+
+ } else {
+ lock_recv_path(ppp);
+ ppp->rstate &= ~SC_DECOMP_RUN;
+ if (ppp->rc_state != 0) {
+ ppp->rcomp->decomp_free(ppp->rc_state);
+ ppp->rc_state = 0;
+ }
+
+ ppp->rcomp = cp;
+ ppp->rc_state = cp->decomp_alloc(ccp_option, data.length);
+ ppp_recv_unlock(ppp);
+ if (ppp->rc_state == 0)
+ goto out;
+ }
+ err = 0;
+
+ out:
+ return err;
+}
+
+/*
+ * Look at a CCP packet and update our state accordingly.
+ * We assume the caller has the xmit or recv path locked.
+ */
+static void
+ppp_ccp_peek(struct ppp *ppp, struct sk_buff *skb, int inbound)
+{
+ unsigned char *dp = skb->data + 2;
+ int len;
+
+ if (skb->len < CCP_HDRLEN + 2
+ || skb->len < (len = CCP_LENGTH(dp)) + 2)
+ return; /* too short */
+
+ switch (CCP_CODE(dp)) {
+ case CCP_CONFREQ:
+ case CCP_TERMREQ:
+ case CCP_TERMACK:
+ /*
+ * CCP is going down - disable compression.
+ */
+ if (inbound)
+ ppp->rstate &= ~SC_DECOMP_RUN;
+ else
+ ppp->xstate &= ~SC_COMP_RUN;
+ break;
+
+ case CCP_CONFACK:
+ if ((ppp->flags & (SC_CCP_OPEN | SC_CCP_UP)) != SC_CCP_OPEN)
+ break;
+ dp += CCP_HDRLEN;
+ len -= CCP_HDRLEN;
+ if (len < CCP_OPT_MINLEN || len < CCP_OPT_LENGTH(dp))
+ break;
+ if (inbound) {
+ /* we will start receiving compressed packets */
+ if (ppp->rc_state == 0)
+ break;
+ if (ppp->rcomp->decomp_init(ppp->rc_state, dp, len,
+ ppp->index, 0, ppp->mru, ppp->debug)) {
+ ppp->rstate |= SC_DECOMP_RUN;
+ ppp->rstate &= ~(SC_DC_ERROR | SC_DC_FERROR);
+ }
+ } else {
+ /* we will soon start sending compressed packets */
+ if (ppp->xc_state == 0)
+ break;
+ if (ppp->xcomp->comp_init(ppp->xc_state, dp, len,
+ ppp->index, 0, ppp->debug))
+ ppp->xstate |= SC_COMP_RUN;
+ }
+ break;
+
+ case CCP_RESETACK:
+ /* reset the [de]compressor */
+ if ((ppp->flags & SC_CCP_UP) == 0)
+ break;
+ if (inbound) {
+ if (ppp->rc_state && (ppp->rstate & SC_DECOMP_RUN)) {
+ ppp->rcomp->decomp_reset(ppp->rc_state);
+ ppp->rstate &= ~SC_DC_ERROR;
+ }
+ } else {
+ if (ppp->xc_state && (ppp->xstate & SC_COMP_RUN))
+ ppp->xcomp->comp_reset(ppp->xc_state);
+ }
+ break;
+ }
+}
+
+/* Free up compression resources. */
+static void
+ppp_ccp_closed(struct ppp *ppp)
+{
+ ppp->flags &= ~(SC_CCP_OPEN | SC_CCP_UP);
+
+ lock_xmit_path(ppp);
+ ppp->xstate &= ~SC_COMP_RUN;
+ if (ppp->xc_state) {
+ ppp->xcomp->comp_free(ppp->xc_state);
+ ppp->xc_state = 0;
+ }
+ ppp_xmit_unlock(ppp);
+
+ lock_recv_path(ppp);
+ ppp->rstate &= ~SC_DECOMP_RUN;
+ if (ppp->rc_state) {
+ ppp->rcomp->decomp_free(ppp->rc_state);
+ ppp->rc_state = 0;
+ }
+ ppp_recv_unlock(ppp);
+}
+
+/* List of compressors. */
+static LIST_HEAD(compressor_list);
+static spinlock_t compressor_list_lock = SPIN_LOCK_UNLOCKED;
+
+struct compressor_entry {
+ struct list_head list;
+ struct compressor *comp;
+};
+
+static struct compressor_entry *
+find_comp_entry(int proto)
+{
+ struct compressor_entry *ce;
+ struct list_head *list = &compressor_list;
+
+ while ((list = list->next) != &compressor_list) {
+ ce = list_entry(list, struct compressor_entry, list);
+ if (ce->comp->compress_proto == proto)
+ return ce;
+ }
+ return 0;
+}
+
+/* Register a compressor */
+int
+ppp_register_compressor(struct compressor *cp)
+{
+ struct compressor_entry *ce;
+ int ret;
+
+ spin_lock(&compressor_list_lock);
+ ret = -EEXIST;
+ if (find_comp_entry(cp->compress_proto) != 0)
+ goto out;
+ ret = -ENOMEM;
+ ce = kmalloc(sizeof(struct compressor_entry), GFP_KERNEL);
+ if (ce == 0)
+ goto out;
+ ret = 0;
+ ce->comp = cp;
+ list_add(&ce->list, &compressor_list);
+ out:
+ spin_unlock(&compressor_list_lock);
+ return ret;
+}
+
+/* Unregister a compressor */
+void
+ppp_unregister_compressor(struct compressor *cp)
+{
+ struct compressor_entry *ce;
+
+ spin_lock(&compressor_list_lock);
+ ce = find_comp_entry(cp->compress_proto);
+ if (ce != 0 && ce->comp == cp) {
+ list_del(&ce->list);
+ kfree(ce);
+ }
+ spin_unlock(&compressor_list_lock);
+}
+
+/* Find a compressor. */
+static struct compressor *
+find_compressor(int type)
+{
+ struct compressor_entry *ce;
+ struct compressor *cp = 0;
+
+ spin_lock(&compressor_list_lock);
+ ce = find_comp_entry(type);
+ if (ce != 0)
+ cp = ce->comp;
+ spin_unlock(&compressor_list_lock);
+ return cp;
+}
+
+/*
+ * Miscellaneous stuff.
+ */
+
+static void
+ppp_get_stats(struct ppp *ppp, struct ppp_stats *st)
+{
+ struct slcompress *vj = ppp->vj;
+
+ memset(st, 0, sizeof(*st));
+ st->p.ppp_ipackets = ppp->stats.rx_packets;
+ st->p.ppp_ierrors = ppp->stats.rx_errors;
+ st->p.ppp_ibytes = ppp->stats.rx_bytes;
+ st->p.ppp_opackets = ppp->stats.tx_packets;
+ st->p.ppp_oerrors = ppp->stats.tx_errors;
+ st->p.ppp_obytes = ppp->stats.tx_bytes;
+ if (vj == 0)
+ return;
+ st->vj.vjs_packets = vj->sls_o_compressed + vj->sls_o_uncompressed;
+ st->vj.vjs_compressed = vj->sls_o_compressed;
+ st->vj.vjs_searches = vj->sls_o_searches;
+ st->vj.vjs_misses = vj->sls_o_misses;
+ st->vj.vjs_errorin = vj->sls_i_error;
+ st->vj.vjs_tossed = vj->sls_i_tossed;
+ st->vj.vjs_uncompressedin = vj->sls_i_uncompressed;
+ st->vj.vjs_compressedin = vj->sls_i_compressed;
+}
+
+/*
+ * Stuff for handling the list of ppp units and for initialization.
+ */
+
+/*
+ * Create a new ppp unit. Fails if it can't allocate memory or
+ * if there is already a unit with the requested number.
+ * unit == -1 means allocate a new number.
+ */
+static struct ppp *
+ppp_create_unit(int unit, int *retp)
+{
+ struct ppp *ppp;
+ struct list_head *list;
+ int last_unit = -1;
+ int ret = -EEXIST;
+ int i;
+
+ spin_lock(&all_ppp_lock);
+ list = &all_ppp_units;
+ while ((list = list->next) != &all_ppp_units) {
+ ppp = list_entry(list, struct ppp, list);
+ if ((unit < 0 && ppp->index > last_unit + 1)
+ || (unit >= 0 && unit < ppp->index))
+ break;
+ if (unit == ppp->index)
+ goto out; /* unit already exists */
+ last_unit = ppp->index;
+ }
+ if (unit < 0)
+ unit = last_unit + 1;
+
+ /* Create a new ppp structure and link it before `list'. */
+ ret = -ENOMEM;
+ ppp = kmalloc(sizeof(struct ppp), GFP_KERNEL);
+ if (ppp == 0)
+ goto out;
+ memset(ppp, 0, sizeof(struct ppp));
+
+ ppp->index = unit;
+ sprintf(ppp->name, "ppp%d", unit);
+ ppp->mru = PPP_MRU;
+ skb_queue_head_init(&ppp->xq);
+ skb_queue_head_init(&ppp->rq);
+ init_waitqueue_head(&ppp->rwait);
+ ppp->refcnt = 1;
+ for (i = 0; i < NUM_NP; ++i)
+ ppp->npmode[i] = NPMODE_PASS;
+ INIT_LIST_HEAD(&ppp->channels);
+ skb_queue_head_init(&ppp->recv_pending);
+
+ ppp->dev.init = ppp_net_init;
+ ppp->dev.name = ppp->name;
+ ppp->dev.priv = ppp;
+
+ ret = register_netdev(&ppp->dev);
+ if (ret != 0) {
+ printk(KERN_ERR "PPP: couldn't register device (%d)\n", ret);
+ kfree(ppp);
+ goto out;
+ }
+
+ list_add(&ppp->list, list->prev);
+ out:
+ spin_unlock(&all_ppp_lock);
+ *retp = ret;
+ if (ret != 0)
+ ppp = 0;
+ return ppp;
+}
+
+/*
+ * Locate an existing ppp unit.
+ * The caller should have locked the all_ppp_lock.
+ */
+static struct ppp *
+ppp_find_unit(int unit)
+{
+ struct ppp *ppp;
+ struct list_head *list;
+
+ list = &all_ppp_units;
+ while ((list = list->next) != &all_ppp_units) {
+ ppp = list_entry(list, struct ppp, list);
+ if (ppp->index == unit)
+ return ppp;
+ }
+ return 0;
+}
+
+/*
+ * Module stuff.
+ */
+#ifdef MODULE
+int
+init_module(void)
+{
+ ppp_init(0);
+ return 0;
+}
+
+void
+cleanup_module(void)
+{
+ /* should never happen */
+ if (all_ppp_units.next != &all_ppp_units)
+ printk(KERN_ERR "PPP: removing module but units remain!\n");
+ if (unregister_chrdev(PPP_MAJOR, "ppp") != 0)
+ printk(KERN_ERR "PPP: failed to unregister PPP device\n");
+}
+#endif /* MODULE */
/* We use this to acquire receive skb's that we can DMA directly into. */
#define ALIGNED_RX_SKB_ADDR(addr) \
((((unsigned long)(addr) + (64 - 1)) & ~(64 - 1)) - (unsigned long)(addr))
-static inline struct sk_buff *happy_meal_alloc_skb(unsigned int length, int gfp_flags)
-{
- struct sk_buff *skb;
-
- skb = alloc_skb(length + 64, gfp_flags);
- if(skb) {
- int offset = ALIGNED_RX_SKB_ADDR(skb->data);
-
- if(offset)
- skb_reserve(skb, offset);
- }
- return skb;
-}
+#define happy_meal_alloc_skb(__length, __gfp_flags) \
+({ struct sk_buff *__skb; \
+ __skb = alloc_skb((__length) + 64, (__gfp_flags)); \
+ if(__skb) { \
+ int __offset = ALIGNED_RX_SKB_ADDR(__skb->data); \
+ if(__offset) \
+ skb_reserve(__skb, __offset); \
+ } \
+ __skb; \
+})
/* Register/DMA access stuff, used to cope with differences between
* PCI and SBUS happy meals.
*/
-extern inline u32 kva_to_hva(struct happy_meal *hp, char *addr)
-{
-#ifdef CONFIG_PCI
- if(hp->happy_flags & HFLAG_PCI)
- return (u32) virt_to_bus((volatile void *)addr);
- else
-#endif
- {
-#ifdef __sparc_v9__
- if (((unsigned long) addr) >= MAX_DMA_ADDRESS) {
- printk("sunhme: Bogus DMA buffer address "
- "[%016lx]\n", ((unsigned long) addr));
- panic("DMA address too large, tell DaveM");
- }
-#endif
- return sbus_dvma_addr(addr);
- }
-}
-
-extern inline unsigned int hme_read32(struct happy_meal *hp,
- volatile unsigned int *reg)
-{
-#ifdef CONFIG_PCI
- if(hp->happy_flags & HFLAG_PCI)
- return readl((unsigned long)reg);
- else
-#endif
- return *reg;
-}
-
-extern inline void hme_write32(struct happy_meal *hp,
- volatile unsigned int *reg,
- unsigned int val)
-{
-#ifdef CONFIG_PCI
- if(hp->happy_flags & HFLAG_PCI)
- writel(val, (unsigned long)reg);
- else
+#if defined(CONFIG_PCI)
+#define kva_to_hva(__hp, __addr) \
+({ u32 __ret; \
+ if ((__hp)->happy_flags & HFLAG_PCI) \
+ (__ret) = (u32) virt_to_bus((volatile void *)(__addr)); \
+ else \
+ (__ret) = sbus_dvma_addr(__addr); \
+ __ret; \
+})
+#define hme_read32(__hp, __reg) \
+({ unsigned int __ret; \
+ if ((__hp)->happy_flags & HFLAG_PCI) \
+ __ret = readl((unsigned long)(__reg)); \
+ else \
+ __ret = *(__reg); \
+ __ret; \
+})
+#define hme_write32(__hp, __reg, __val) \
+do { if ((__hp)->happy_flags & HFLAG_PCI) \
+ writel((__val), (unsigned long)(__reg)); \
+ else \
+ *(__reg) = (__val); \
+} while(0)
+#else
+#define kva_to_hva(__hp, __addr) ((u32)sbus_dvma_addr(__addr))
+#define hme_read32(__hp, __reg) (*(__reg))
+#define hme_write32(__hp, __reg, __val) ((*(__reg)) = (__val))
#endif
- *reg = val;
-}
#ifdef CONFIG_PCI
#ifdef __sparc_v9__
-extern inline void pcihme_write_rxd(struct happy_meal_rxd *rp,
- unsigned int flags,
- unsigned int addr)
-{
- __asm__ __volatile__("
- stwa %3, [%0] %2
- stwa %4, [%1] %2
-" : /* no outputs */
- : "r" (&rp->rx_addr), "r" (&rp->rx_flags),
- "i" (ASI_PL), "r" (addr), "r" (flags));
-}
-
-extern inline void pcihme_write_txd(struct happy_meal_txd *tp,
- unsigned int flags,
- unsigned int addr)
-{
- __asm__ __volatile__("
- stwa %3, [%0] %2
- stwa %4, [%1] %2
-" : /* no outputs */
- : "r" (&tp->tx_addr), "r" (&tp->tx_flags),
- "i" (ASI_PL), "r" (addr), "r" (flags));
-}
+#define pcihme_write_rxd(__rp, __flags, __addr) \
+ __asm__ __volatile__("stwa %3, [%0] %2\n\t" \
+ "stwa %4, [%1] %2" \
+ : /* no outputs */ \
+ : "r" (&(__rp)->rx_addr), "r" (&(__rp)->rx_flags), \
+ "i" (ASI_PL), "r" (__addr), "r" (__flags))
+
+#define pcihme_write_txd(__tp, __flags, __addr) \
+ __asm__ __volatile__("stwa %3, [%0] %2\n\t" \
+ "stwa %4, [%1] %2" \
+ : /* no outputs */ \
+ : "r" (&(__tp)->tx_addr), "r" (&(__tp)->tx_flags), \
+ "i" (ASI_PL), "r" (__addr), "r" (__flags))
#else
-extern inline void pcihme_write_rxd(struct happy_meal_rxd *rp,
- unsigned int flags,
- unsigned int addr)
-{
- rp->rx_addr = flip_dword(addr);
- rp->rx_flags = flip_dword(flags);
-}
+#define pcihme_write_rxd(__rp, __flags, __addr) \
+do { (__rp)->rx_addr = flip_dword(__addr); \
+ (__rp)->rx_flags = flip_dword(__flags); \
+} while(0)
-extern inline void pcihme_write_txd(struct happy_meal_txd *tp,
- unsigned int flags,
- unsigned int addr)
-{
- tp->tx_addr = flip_dword(addr);
- tp->tx_flags = flip_dword(flags);
-}
+#define pcihme_write_txd(__tp, __flags, __addr) \
+do { (__tp)->tx_addr = flip_dword(__addr); \
+ (__tp)->tx_flags = flip_dword(__flags); \
+} while(0)
#endif /* def __sparc_v9__ */
#endif /* def CONFIG_PCI */
/* Event 6: nAck goes high */
if (parport_wait_peripheral (port,
- PARPORT_STATUS_ACK
- | PARPORT_STATUS_PAPEROUT,
+ PARPORT_STATUS_ACK,
PARPORT_STATUS_ACK)) {
- if (parport_read_status (port) & PARPORT_STATUS_ACK)
- printk (KERN_DEBUG
- "%s: working around buggy peripheral: tell "
- "Tim what make it is\n", port->name);
+ /* This shouldn't really happen with a compliant device. */
DPRINTK (KERN_DEBUG
"%s: Mode 0x%02x not supported? (0x%02x)\n",
port->name, mode, port->ops->read_status (port));
const void *buffer, size_t len,
int flags)
{
+ int no_irq;
ssize_t count = 0;
const unsigned char *addr = buffer;
unsigned char byte;
parport_enable_irq (port);
port->physport->ieee1284.phase = IEEE1284_PH_FWD_DATA;
+ no_irq = polling (dev);
while (count < len) {
long expire = jiffies + dev->timeout;
long wait = (HZ + 99) / 100;
first time around the loop, don't let go of
the port. This way, we find out if we have
our interrupt handler called. */
- if (count && polling (dev)) {
+ if (count && no_irq) {
parport_release (dev);
current->state = TASK_INTERRUPTIBLE;
schedule_timeout (wait);
parport_write_control (port, ctl);
udelay (1); /* hold */
- /* Wait until it's received (up to 20us). */
- for (i = 0; i < 20; i++) {
+ if (no_irq)
+ /* Assume the peripheral received it. */
+ goto done;
+
+ /* Wait until it's received, up to 500us (this ought to be
+ * tuneable). */
+ for (i = 500; i; i--) {
if (!down_trylock (&port->physport->ieee1284.irq) ||
!(parport_read_status (port) & PARPORT_STATUS_ACK))
- break;
+ goto done;
udelay (1);
}
+ /* Two choices:
+ * 1. Assume that the peripheral got the data and just
+ * hasn't acknowledged it yet.
+ * 2. Assume that the peripheral never saw the strobe pulse.
+ *
+ * We can't know for sure, so let's be conservative.
+ */
+ DPRINTK (KERN_DEBUG "%s: no ack", port->name);
+ break;
+
+ done:
count++;
/* Let another process run if it needs to. */
* David Mosberger-Tang, Martin Mares
*/
-#include <linux/config.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/pci.h>
struct pci_bus pci_root;
struct pci_dev *pci_devices = NULL;
+int pci_reverse __initdata = 0;
+
static struct pci_dev **pci_last_dev_p = &pci_devices;
-static int pci_reverse __initdata = 0;
struct pci_dev *
pci_find_slot(unsigned int bus, unsigned int devfn)
kill_procs(drv->sd_siglist,SIGPOLL,S_MSG);
/* Stop the lowlevel driver from outputing. */
- /* drv->ops->stop_output(drv); Should not be necessary -- DJB 5/25/98 */
+ drv->ops->stop_output(drv);
drv->output_active = 0;
/* Wake up any waiting writers or syncers and return. */
case SOUND_MIXER_WRITE_CD:
case SOUND_MIXER_WRITE_LINE:
case SOUND_MIXER_WRITE_IMIX:
- if(get_user(k, arg))
+ if(COPY_IN(arg, k))
return -EFAULT;
tprintk(("setting input volume (0x%x)", k));
if (drv->ops->get_input_channels)
case SOUND_MIXER_WRITE_PCM:
case SOUND_MIXER_WRITE_VOLUME:
case SOUND_MIXER_WRITE_SPEAKER:
- if(get_user(k, arg))
+ if(COPY_IN(arg, k))
return -EFAULT;
tprintk(("setting output volume (0x%x)", k));
if (drv->ops->get_output_channels)
case SOUND_MIXER_WRITE_RECSRC:
if (!drv->ops->set_input_port)
return -EINVAL;
- if(get_user(k, arg))
+ if(COPY_IN(arg, k))
return -EFAULT;
/* only one should ever be selected */
if (k & SOUND_MASK_IMIX) j = AUDIO_ANALOG_LOOPBACK;
/* Else subsequent speed setting changes are ignored by the chip. */
cs4231_disable_play(drv);
+ cs4231_chip->perchip_info.play.active = 0;
}
#endif
/* Else subsequent speed setting changes are ignored by the chip. */
cs4231_chip->regs->dmacsr &= ~(APC_GENL_INT | APC_XINT_ENA | APC_XINT_PLAY
| APC_XINT_GENL | APC_PDMA_READY
- | APC_XINT_PENA );
+ | APC_XINT_PENA | APC_PPAUSE );
+ printk("in cs4231_stop_output: 0x%x\n", cs4231_chip->regs->dmacsr);
cs4231_disable_play(drv);
+ cs4231_chip->perchip_info.play.active = 0;
#endif
}
}
if((dummy & EBUS_DCSR_A_LOADED) == 0) {
- cs4231_chip->perchip_info.play.active = 0;
eb4231_playintr(drv);
eb4231_getsamplecount(drv, cs4231_chip->playlen, 0);
* if anything since we may be doing shared interrupts
*/
- if (dummy & APC_PLAY_INT) {
- if (dummy & APC_XINT_PEMP) {
+ if (dummy & (APC_PLAY_INT|APC_XINT_PNVA|APC_XINT_PLAY|APC_XINT_EMPT|APC_XINT_PEMP)) {
cs4231_chip->perchip_info.play.samples +=
cs4231_length_to_samplecount(&(cs4231_chip->perchip_info.play),
cs4231_chip->playlen);
cs4231_playintr(drv);
- }
/* Any other conditions we need worry about? */
}
}
if (dummy & APC_XINT_EMPT) {
+#if 0 /* Call to stop_output from midlevel will get this */
if (!cs4231_chip->output_next_dma_handle) {
cs4231_chip->regs->dmacsr |= (APC_PPAUSE);
cs4231_disable_play(drv);
}
cs4231_chip->perchip_info.play.active = 0;
cs4231_playintr(drv);
+#endif
cs4231_getsamplecount(drv, cs4231_chip->playlen, 0);
}
*/
struct bpp_regs {
/* DMA registers */
- __u32 p_csr; /* DMA Control/Status Register */
- __u32 p_addr; /* Address Register */
- __u32 p_bcnt; /* Byte Count Register */
- __u32 p_tst_csr; /* Test Control/Status (DMA2 only) */
+ __volatile__ __u32 p_csr; /* DMA Control/Status Register */
+ __volatile__ __u32 p_addr; /* Address Register */
+ __volatile__ __u32 p_bcnt; /* Byte Count Register */
+ __volatile__ __u32 p_tst_csr; /* Test Control/Status (DMA2 only) */
/* Parallel Port registers */
- __u16 p_hcr; /* Hardware Configuration Register */
- __u16 p_ocr; /* Operation Configuration Register */
- __u8 p_dr; /* Parallel Data Register */
- __u8 p_tcr; /* Transfer Control Register */
- __u8 p_or; /* Output Register */
- __u8 p_ir; /* Input Register */
- __u16 p_icr; /* Interrupt Control Register */
+ __volatile__ __u16 p_hcr; /* Hardware Configuration Register */
+ __volatile__ __u16 p_ocr; /* Operation Configuration Register */
+ __volatile__ __u8 p_dr; /* Parallel Data Register */
+ __volatile__ __u8 p_tcr; /* Transfer Control Register */
+ __volatile__ __u8 p_or; /* Output Register */
+ __volatile__ __u8 p_ir; /* Input Register */
+ __volatile__ __u16 p_icr; /* Interrupt Control Register */
};
/* P_CSR. Bits of type RW1 are cleared by writing '1'. */
unsigned char Bus = PCI_Device->bus->number;
unsigned char Device = PCI_Device->devfn >> 3;
unsigned int IRQ_Channel = PCI_Device->irq;
- unsigned long BaseAddress0 = PCI_Device->base_address[0];
- unsigned long BaseAddress1 = PCI_Device->base_address[1];
+ unsigned long BaseAddress0 = PCI_Device->resource[0].start;
+ unsigned long BaseAddress1 = PCI_Device->resource[1].start;
BusLogic_IO_Address_T IO_Address =
BaseAddress0 & PCI_BASE_ADDRESS_IO_MASK;
BusLogic_PCI_Address_T PCI_Address =
unsigned char Device = PCI_Device->devfn >> 3;
unsigned int IRQ_Channel = PCI_Device->irq;
BusLogic_IO_Address_T IO_Address =
- PCI_Device->base_address[0] & PCI_BASE_ADDRESS_IO_MASK;
+ PCI_Device->resource[0].start & PCI_BASE_ADDRESS_IO_MASK;
if (IO_Address == 0 || IRQ_Channel == 0) continue;
for (i = 0; i < BusLogic_ProbeInfoCount; i++)
{
unsigned char Bus = PCI_Device->bus->number;
unsigned char Device = PCI_Device->devfn >> 3;
unsigned int IRQ_Channel = PCI_Device->irq;
- unsigned long BaseAddress0 = PCI_Device->base_address[0];
- unsigned long BaseAddress1 = PCI_Device->base_address[1];
+ unsigned long BaseAddress0 = PCI_Device->resource[0].start;
+ unsigned long BaseAddress1 = PCI_Device->resource[1].start;
BusLogic_IO_Address_T IO_Address =
BaseAddress0 & PCI_BASE_ADDRESS_IO_MASK;
BusLogic_PCI_Address_T PCI_Address =
#define ASOK 0x00
#define ASST 0x01
-#define ARRAY_SIZE(arr) (sizeof (arr) / sizeof (arr)[0])
#define YESNO(a) ((a) ? 'y' : 'n')
#define TLDEV(type) ((type) == TYPE_DISK || (type) == TYPE_ROM)
up(SCpnt->request.sem);
}
-void __init scsi_logging_setup(char *str, int *ints)
+static int __init scsi_logging_setup (char *str)
{
- if (ints[0] != 1) {
- printk("scsi_logging_setup : usage scsi_logging_level=n "
+ int tmp;
+
+ if (get_option(&str, &tmp)==1) {
+ scsi_logging_level = (tmp ? ~0 : 0);
+ return 1;
+ } else {
+ printk("scsi_logging_setup : usage scsi_logging_level=n "
"(n should be 0 or non-zero)\n");
- } else {
- scsi_logging_level = (ints[1])? ~0 : 0;
+ return 0;
}
}
+__setup("scsi_logging=", scsi_logging_setup);
+
#ifdef CONFIG_SCSI_MULTI_LUN
static int max_scsi_luns = 8;
#else
static int max_scsi_luns = 1;
#endif
-void __init scsi_luns_setup(char *str, int *ints)
+static int __init scsi_luns_setup (char *str)
{
- if (ints[0] != 1)
- printk("scsi_luns_setup : usage max_scsi_luns=n (n should be between 1 and 8)\n");
- else
- max_scsi_luns = ints[1];
+ int tmp;
+
+ if (get_option(&str, &tmp)==1) {
+ max_scsi_luns = tmp;
+ return 1;
+ } else {
+ printk("scsi_luns_setup : usage max_scsi_luns=n "
+ "(n should be between 1 and 8)\n");
+ return 0;
+ }
}
+__setup("max_scsi_luns=", scsi_luns_setup);
/*
* Detecting SCSI devices :
* We scan all present host adapter's busses, from ID 0 to ID (max_id).
#define ASOK 0x00
#define ASST 0x91
-#define ARRAY_SIZE(arr) (sizeof (arr) / sizeof (arr)[0])
#define YESNO(a) ((a) ? 'y' : 'n')
#define TLDEV(type) ((type) == TYPE_DISK || (type) == TYPE_ROM)
#define VERSION "1.12"
-#define ARRAY_SIZE(arr) (sizeof (arr) / sizeof (arr)[0])
-
#define PACKED __attribute__((packed))
#define ALIGNED(x) __attribute__((aligned(x)))
/*****************************************************************************/
-#include <linux/config.h>
#include <linux/version.h>
#include <linux/module.h>
#include <linux/string.h>
static int __init es1370_setup(char *str)
{
static unsigned __initdata nr_dev = 0;
- int ints[11];
if (nr_dev >= NR_DEVICE)
return 0;
- get_options(str, ints);
- if (ints[0] >= 1)
- joystick[nr_dev] = ints[1];
- if (ints[0] >= 2)
- lineout[nr_dev] = ints[2];
- if (ints[0] >= 3)
- micbias[nr_dev] = ints[3];
+
+ ( (get_option(&str,&joystick[nr_dev]) == 2)
+ && (get_option(&str,&lineout [nr_dev]) == 2)
+ && get_option(&str,&micbias [nr_dev])
+ );
+
nr_dev++;
return 1;
}
/*****************************************************************************/
-#include <linux/config.h>
#include <linux/version.h>
#include <linux/module.h>
#include <linux/string.h>
if (nr_dev >= NR_DEVICE)
return 0;
- get_options(str, ints);
- if (ints[0] >= 1)
- joystick[nr_dev] = ints[1];
+ get_option(&str, &joystick[nr_dev]);
nr_dev++;
return 1;
}
static int __init sonicvibes_setup(char *str)
{
static unsigned __initdata nr_dev = 0;
- int ints[11];
if (nr_dev >= NR_DEVICE)
return 0;
- get_options(str, ints);
- if (ints[0] >= 1)
- reverb[nr_dev] = ints[1];
+
+ ( (get_option(&str, &reverb [nr_dev]) == 2)
#if 0
- if (ints[0] >= 2)
- wavetable[nr_dev] = ints[2];
+ && get_option(&str, &wavetable[nr_dev])
#endif
+ );
+
nr_dev++;
return 1;
}
if (ACCESS_FBINFO(capable.cross4MB) < 0)
ACCESS_FBINFO(capable.cross4MB) = b->flags & DEVF_CROSS4MB;
if (b->flags & DEVF_SWAPS) {
- ctrlptr_phys = ACCESS_FBINFO(pcidev)->base_address[1] & ~0x3FFF;
- video_base_phys = ACCESS_FBINFO(pcidev)->base_address[0] & ~0x7FFFFF; /* aligned at 8MB (or 16 for Mill 2) */
+ ctrlptr_phys = ACCESS_FBINFO(pcidev)->resource[1].start & ~0x3FFF;
+ video_base_phys = ACCESS_FBINFO(pcidev)->resource[0].start & ~0x7FFFFF; /* aligned at 8MB (or 16 for Mill 2) */
} else {
- ctrlptr_phys = ACCESS_FBINFO(pcidev)->base_address[0] & ~0x3FFF;
- video_base_phys = ACCESS_FBINFO(pcidev)->base_address[1] & ~0x7FFFFF; /* aligned at 8MB */
+ ctrlptr_phys = ACCESS_FBINFO(pcidev)->resource[0].start & ~0x3FFF;
+ video_base_phys = ACCESS_FBINFO(pcidev)->resource[1].start & ~0x7FFFFF; /* aligned at 8MB */
}
if (!ctrlptr_phys) {
printk(KERN_ERR "matroxfb: control registers are not available, matroxfb disabled\n");
return 0;
}
-#ifdef CONFIG_MODULES
int unregister_binfmt(struct linux_binfmt * fmt)
{
struct linux_binfmt ** tmp = &formats;
}
return -EINVAL;
}
-#endif /* CONFIG_MODULES */
/* N.B. Error returns must be < 0 */
int open_dentry(struct dentry * dentry, int mode)
#define L1_CACHE_ALIGN(x) (((x)+(L1_CACHE_BYTES-1))&~(L1_CACHE_BYTES-1))
#define SMP_CACHE_BYTES L1_CACHE_BYTES
-#ifdef MODULE
#define __cacheline_aligned __attribute__((__aligned__(L1_CACHE_BYTES)))
-#else
-#define __cacheline_aligned \
- __attribute__((__aligned__(L1_CACHE_BYTES), \
- __section__(".data.cacheline_aligned")))
-#endif
#endif
#ifdef __KERNEL__
+#include <linux/config.h>
+
#ifndef MAX_HWIFS
#define MAX_HWIFS 4
#endif
#ifndef _ASM_AXP_PARPORT_H
#define _ASM_AXP_PARPORT_H 1
+#include <linux/config.h>
+
/* Maximum number of ports to support. It is useless to set this greater
than PARPORT_MAX (in <linux/parport.h>). */
#define PARPORT_PC_MAX_PORTS 8
#define __HAVE_ARCH_STRCHR
#define __HAVE_ARCH_STRRCHR
#define __HAVE_ARCH_STRLEN
+#define __HAVE_ARCH_MEMCHR
/* The following routine is like memset except that it writes 16-bit
aligned values. The DEST and COUNT parameters must be even for
#ifdef __KERNEL__
+#include <linux/config.h>
+
#ifndef MAX_HWIFS
#define MAX_HWIFS 10
#endif
#ifndef _ASM_I386_PARPORT_H
#define _ASM_I386_PARPORT_H 1
+#include <linux/config.h>
+
/* Maximum number of ports to support. It is useless to set this greater
than PARPORT_MAX (in <linux/parport.h>). */
#define PARPORT_PC_MAX_PORTS 8
#ifdef __KERNEL__
+#include <linux/config.h>
+
#ifndef MAX_HWIFS
#define MAX_HWIFS 6
#endif
#ifdef __KERNEL__
+#include <linux/config.h>
#include <linux/hdreg.h>
#include <linux/ioport.h>
#include <asm/io.h>
-/* $Id: a.out.h,v 1.5 1999/07/30 09:31:09 davem Exp $ */
+/* $Id: a.out.h,v 1.6 1999/08/04 07:04:21 jj Exp $ */
#ifndef __SPARC64_A_OUT_H__
#define __SPARC64_A_OUT_H__
#ifdef __KERNEL__
-#define STACK_TOP (current->thread.flags & SPARC_FLAG_32BIT ? 0xf0000000 : TASK_SIZE)
+#define STACK_TOP (current->thread.flags & SPARC_FLAG_32BIT ? 0xf0000000 : 0x80000000000L)
#endif
#define SMP_CACHE_BYTES 64 /* L2 cache line size. */
+#ifdef MODULE
+#define __cacheline_aligned __attribute__((__aligned__(SMP_CACHE_BYTES)))
+#else
+#define __cacheline_aligned \
+ __attribute__((__aligned__(SMP_CACHE_BYTES), \
+ __section__(".data.cacheline_aligned")))
+#endif
+
#endif
-/* $Id: elf.h,v 1.20 1999/07/30 09:31:14 davem Exp $ */
+/* $Id: elf.h,v 1.21 1999/08/04 07:04:23 jj Exp $ */
#ifndef __ASM_SPARC64_ELF_H
#define __ASM_SPARC64_ELF_H
that it will "exec", and that there is sufficient room for the brk. */
#ifndef ELF_ET_DYN_BASE
-#define ELF_ET_DYN_BASE 0x50000000000
+#define ELF_ET_DYN_BASE 0xfffff80000000000UL
#endif
#ifdef __KERNEL__
+#include <linux/config.h>
#include <asm/pgtable.h>
#include <asm/io.h>
#include <asm/hdreg.h>
#ifndef _ASM_SPARC64_PARPORT_H
#define _ASM_SPARC64_PARPORT_H 1
+#include <linux/config.h>
#include <asm/ebus.h>
#include <asm/ns87303.h>
*/
extern struct apm_bios_info apm_bios_info;
-extern void apm_init(void);
-extern void apm_setup(char *, int *);
-
extern int apm_register_callback(int (*callback)(apm_event_t));
extern void apm_unregister_callback(int (*callback)(apm_event_t));
* I2O Interface Objects
*/
+#include <linux/config.h>
#include <linux/notifier.h>
#include <asm/atomic.h>
*/
/*
- * ==FILEVERSION 990331==
+ * ==FILEVERSION 990806==
*
* NOTE TO MAINTAINERS:
* If you modify this file at all, please set the above date.
#define PPP_MTU 1500 /* Default MTU (size of Info field) */
#define PPP_MAXMRU 65000 /* Largest MRU we allow */
-#define PPP_VERSION "2.3.7"
-#define PPP_MAGIC 0x5002 /* Magic value for the ppp structure */
#define PROTO_IPX 0x002b /* protocol numbers */
#define PROTO_DNA_RT 0x0027 /* DNA Routing */
#define SC_CCP_OPEN 0x00000040 /* Look at CCP packets */
#define SC_CCP_UP 0x00000080 /* May send/recv compressed packets */
#define SC_ENABLE_IP 0x00000100 /* IP packets may be exchanged */
+#define SC_LOOP_TRAFFIC 0x00000200 /* send traffic to pppd */
+#define SC_MULTILINK 0x00000400 /* do multilink encapsulation */
#define SC_COMP_RUN 0x00001000 /* compressor has been inited */
#define SC_DECOMP_RUN 0x00002000 /* decompressor has been inited */
#define SC_DEBUG 0x00010000 /* enable debug messages */
#define SC_LOG_RAWIN 0x00080000 /* log all chars received */
#define SC_LOG_FLUSH 0x00100000 /* log all chars flushed */
#define SC_SYNC 0x00200000 /* synchronous serial mode */
-#define SC_MASK 0x0f2000ff /* bits that user can change */
+#define SC_MASK 0x0f200fff /* bits that user can change */
/* state bits */
#define SC_XMIT_BUSY 0x10000000 /* (used by isdn_ppp?) */
#define PPPIOCGDEBUG _IOR('t', 65, int) /* Read debug level */
#define PPPIOCSDEBUG _IOW('t', 64, int) /* Set debug level */
#define PPPIOCGIDLE _IOR('t', 63, struct ppp_idle) /* get idle time */
+#define PPPIOCNEWUNIT _IOWR('t', 62, int) /* create new ppp unit */
+#define PPPIOCATTACH _IOW('t', 61, int) /* attach to ppp unit */
+#define PPPIOCDETACH _IOW('t', 60, int) /* detach from ppp unit */
#define SIOCGPPPSTATS (SIOCDEVPRIVATE + 0)
#define SIOCGPPPVER (SIOCDEVPRIVATE + 1) /* NEVER change this!! */
*/
/*
- * ==FILEVERSION 990325==
+ * ==FILEVERSION 990806==
*
* NOTE TO MAINTAINERS:
* If you modify this file at all, please set the above date.
/* tty output buffer */
unsigned char obuf[OBUFSIZE]; /* buffer for characters to send */
};
+
+#define PPP_MAGIC 0x5002
+#define PPP_VERSION "2.3.7"
#ifndef MODULE
+#ifndef __ASSEMBLY__
+
/*
* Used for initialization calls..
*/
static char __setup_str_##fn[] __initdata = str; \
static struct kernel_param __setup_##fn __initsetup = { __setup_str_##fn, fn }
+#endif /* __ASSEMBLY__ */
+
/*
* Mark functions and data as being only used at initialization
* or exit time.
#define STACK_MAGIC 0xdeadbeef
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+
#define KERN_EMERG "<0>" /* system is unusable */
#define KERN_ALERT "<1>" /* action must be taken immediately */
#define KERN_CRIT "<2>" /* critical conditions */
#define CI_PREDICTOR_2 2 /* config option for Predictor-2 */
#define CILEN_PREDICTOR_2 2 /* length of its config option */
+#ifdef __KERNEL__
+extern int ppp_register_compressor(struct compressor *);
+extern void ppp_unregister_compressor(struct compressor *);
+#endif /* __KERNEL__ */
+
#endif /* _NET_PPP_COMP_H */
--- /dev/null
+/*
+ * Definitions for the interface between the generic PPP code
+ * and a PPP channel.
+ *
+ * A PPP channel provides a way for the generic PPP code to send
+ * and receive packets over some sort of communications medium.
+ * Packets are stored in sk_buffs and have the 2-byte PPP protocol
+ * number at the start, but not the address and control bytes.
+ *
+ * Copyright 1999 Paul Mackerras.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * ==FILEVERSION 990717==
+ */
+
+/* $Id$ */
+
+#include <linux/list.h>
+#include <linux/skbuff.h>
+
+struct ppp_channel;
+
+struct ppp_channel_ops {
+ /* Send a packet (or multilink fragment) on this channel.
+ Returns 1 if it was accepted, 0 if not. */
+ int (*start_xmit)(struct ppp_channel *, struct sk_buff *);
+
+};
+
+struct ppp_channel {
+ void *private; /* channel private data */
+ struct ppp_channel_ops *ops; /* operations for this channel */
+ int xmit_qlen; /* length of transmit queue (bytes) */
+ int speed; /* transfer rate (bytes/second) */
+ int latency; /* overhead time in milliseconds */
+ struct list_head list; /* link in list of channels per unit */
+ void *ppp; /* opaque to channel */
+};
+
+#ifdef __KERNEL__
+/* Called by the channel when it can send some more data. */
+extern void ppp_output_wakeup(struct ppp_channel *);
+
+/* Called by the channel to process a received PPP packet.
+ The packet should have just the 2-byte PPP protocol header. */
+extern void ppp_input(struct ppp_channel *, struct sk_buff *);
+
+/* Called by the channel when an input error occurs, indicating
+ that we may have missed a packet. */
+extern void ppp_input_error(struct ppp_channel *, int code);
+
+/* Attach a channel to a given PPP unit. */
+extern int ppp_register_channel(struct ppp_channel *, int unit);
+
+/* Detach a channel from its PPP unit (e.g. on hangup). */
+extern void ppp_unregister_channel(struct ppp_channel *);
+
+#endif /* __KERNEL__ */
#ifndef __HAVE_ARCH_MEMCHR
void *memchr(const void *s, int c, size_t n)
{
- unsigned char *p = s;
+ const unsigned char *p = s;
while (n-- != 0) {
if ((unsigned char)c == *p++) {
return p-1;
#include <asm/system.h>
#include <asm/checksum.h>
-#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
-
static int __init eth_setup(char *str)
{
int ints[5];