S: Columbus, Ohio 43210
S: USA
+N: Peter Braam
+E: braam@cs.cmu.edu
+W: http://coda.cs.cmu.edu/~braam
+D: Coda Filesystem
+S: Dept of Computer Science
+S: 5000 Forbes Ave
+S: Pittsburgh PA 15213
+
N: Andries Brouwer
E: aeb@cwi.nl
D: random Linux hacker
S: Australia
N: Dmitry S. Gorodchanin
-E: begemot@bgm.rosprint.net
+E: pgmdsg@ibi.com
D: RISCom/8 driver, misc kernel fixes.
-S: 6/1 M.Koneva bl, apt #125
-S: Poltava 314023
-S: Ukraine
+S: 4 Main Street
+S: Woodbridge, Connecticut 06525
+S: USA
N: Paul Gortmaker
E: gpg109@rsphy1.anu.edu.au
S: 90491 Nuernberg
S: Germany
+N: Petr Vandrovec
+E: vandrove@vc.cvut.cz
+D: Small contributions to ncpfs
+S: Chudenicka 8
+S: 10200 Prague 10, Hostivar
+S: Czech Republic
+
N: Dirk Verworner
D: Co-author of German book ``Linux-Kernel-Programmierung''
D: Co-founder of Berlin Linux User Group
read Documentation/modules.txt. The module will be called
ncpfs.o. Say N unless you are connected to a Novell network.
+Packet signatures
+CONFIG_NCPFS_PACKET_SIGNING
+ NCP allows packets to be signed for stronger security. If you want
+ security, say Y. Normal users can leave it off. To be able to use
+ packet signing you must use ncpfs > 2.0.12.
+
+Proprietary file locking
+CONFIG_NCPFS_IOCTL_LOCKING
+ Allows locking of records on remote volumes. Say N unless you have special
+ applications which are able to utilize this locking scheme.
+
+Clear remove/delete inhibit when needed
+CONFIG_NCPFS_STRONG
+ Allows manipulation of files flagged as Delete or Rename Inhibit.
+ To use this feature you must mount volumes with the ncpmount parameter
+ "-s" (ncpfs-2.0.12 and newer). Say Y unless you mount your volumes
+ read-only (e.g. with -f 444).
+
+Use NFS namespace when available
+CONFIG_NCPFS_NFS_NS
+ Allows you to use the NFS namespace on NetWare servers, which gives
+ you case sensitive filenames. Say Y. You can disable it at mount time
+ with the -N nfs parameter of ncpmount.
+
+Use OS2/LONG namespace when available
+CONFIG_NCPFS_OS2_NS
+ Allows you to use the OS2/LONG namespace on NetWare servers. Filenames
+ in this namespace are limited to 255 characters and are case
+ insensitive but case preserving.
+ Say Y. You can disable it at mount time with the -N os2 parameter of
+ ncpmount.
+
+Allow mounting of volume subdirectories
+CONFIG_NCPFS_MOUNT_SUBDIR
+ Allows you to mount not only whole servers or whole volumes, but also
+ a subdirectory of a volume. This can be used to re-export data and so
+ on. There is no reason to say N, so Y is recommended unless you count
+ every byte.
+ To use this feature you must use ncpfs-2.0.12 or newer.
+
+NDS interserver authentication domains
+CONFIG_NCPFS_NDS_DOMAINS
+ This allows NDS private keys to be stored in kernel space, where they
+ can be used to authenticate to other servers, as interserver NDS
+ access requires. You must use ncpfs-2.0.12.1 or newer to use this
+ feature.
+ Say Y if you use NDS connections to NetWare servers. Do not say Y if
+ security is paramount for you, because root can read your session key
+ (from /proc/kcore).
+
Amiga FFS filesystem support
CONFIG_AFFS_FS
The Fast File System (FFS) is the common filesystem used on
+
+NOTE:
+This is one of the technical documents describing a component of
+Coda -- this document describes the client kernel-Venus interface.
+
+For more information:
+ http://www.coda.cs.cmu.edu
+For user level software needed to run Coda:
+ ftp://ftp.coda.cs.cmu.edu
+
+To run Coda you need to get a user level cache manager for the client,
+named Venus, as well as tools to manipulate ACLs, log in, etc. The
+client needs to have the Coda filesystem selected in the kernel
+configuration.
+
+The server runs entirely at user level and at present does not depend
+on kernel support.
+
+
+
+
+
+
+
The Venus kernel interface
Peter J. Braam
v1.0, Nov 9, 1997
-NET_ALIAS device aliasing v0.4x
-===============================
- The main step taken in versions 0.40+ is the implementation of a
- device aliasing mechanism that creates *actual* devices.
- This development includes NET_ALIAS (generic aliasing) plus IP_ALIAS
- (specific IP) support.
-Features
---------
-o ACTUAL alias devices created & inserted in dev chain
-o AF_ independent: net_alias_type objects. Generic aliasing engine.
-o AF_INET optimized
-o hashed alias address lookup
-o net_alias_type objs registration/unreg., module-ables.
-o /proc/net/aliases & /proc/net/alias_types entries
+IP-Aliasing:
+============
-o IP alias implementation: static or runtime module.
-Usage (IP aliasing)
--------------------
- A very first step to test if you are running a net_alias-ed kernel
- is to check /proc/net/aliases & /proc/net/alias_types entries:
- # cat /proc/net/alias*
-
- For IP aliasing you must have IP_ALIAS support included by
- static linking ('y' to 2nd question above), or runtime module
- insertion ('m' to 2nd q. above):
- # insmod /usr/src/linux/modules/ip_alias.o (1.3.xx)
- # insmod /usr/src/ip_alias/ip_alias.o (1.2.xx) see above.
+o For IP aliasing you must have IP_ALIAS support included by static
+ linking.
o Alias creation.
Alias creation is done by 'magic' iface naming: eg. to create a
for eth0:0)
o Alias deletion.
- Also done by magic naming, eg:
+ Also done by shutting the interface down:
+
+ # ifconfig eth0:0 down
+ ~~~~~~~~~~ -> will delete alias
- # ifconfig eth0:0- 0 (maybe any address)
- ~~~ -> will delete alias (note '-' after dev name)
- alias device is closed before deletion, so all network stuff that
- points to it (routes, arp entries, ...) will be released.
Alias (re-)configuring
- Aliases *are* devices, so you configure and refer to them as usual (ifconfig,
- route, etc).
-
-o Procfs entries
- 2 entries are added to help fetching alias runtime configuration:
- a) /proc/net/alias_types
- Will show you alias_types registered (ie. address families that
- can be aliased).
- eg. for IP aliasing with 1 alias configured:
-
- # cat /proc/net/alias_types
- type name n_attach
- 2 ip 1
-
- b) /proc/net/aliases
- Will show aliased devices info, eg (same as above):
- # cat /proc/net/aliases
- device family address
- eth0:0 2 200.1.1.1
+ Aliases are not real devices, but you should be able to configure and
+ refer to them as usual (ifconfig, route, etc).
Relationship with main device
-----------------------------
- - On main device closing, all aliases will be closed and freed.
- - Each new alias created is inserted in dev_chain just before next
- main device (aliases get 'stacked' after main_dev), eg:
- lo->eth0->eth0:0->eth0:2->eth1->0
- If eth0 is unregistered, all it aliases will also be:
- lo->eth1->0
+
+ - The main device is itself an alias, just like the additional ones,
+ and can be shut down without deleting the other aliases.
Contact
-------
Please finger or e-mail me:
Juan Jose Ciarlante <jjciarla@raiz.uncu.edu.ar>
-
-
+
+Updated by Erik Schoenfelder <schoenfr@gaertner.DE>
+
; local variables:
; mode: indented-text
; mode: auto-fill
This is the README for RISCom/8 multi-port serial driver
- (C) 1994-1996 D.Gorodchanin (begemot@bgm.rosrpint.net)
+ (C) 1994-1996 D.Gorodchanin (pgmdsg@ibi.com)
See file LICENSE for terms and conditions.
NOTE: English is not my native language.
S: Maintained
NCP FILESYSTEM:
+P: Petr Vandrovec
+M: vandrove@vc.cvut.cz
P: Volker Lendecke
M: lendecke@Math.Uni-Goettingen.de
L: linware@sh.cvut.cz
RISCOM8 DRIVER:
P: Dmitry Gorodchanin
-M: begemot@bgm.rosprint.net
+M: pgmdsg@ibi.com
L: linux-kernel@vger.rutgers.edu
S: Maintained
stt $f29,296($30)
stt $f30,304($30)
stt $f0,312($30) # save fpcr in slot of $f31
+ ldt $f0,64($30) # don't let "do_switch_stack" change any fp state.
ret $31,($1),1
.end do_switch_stack
.quad sys_utimes
.quad sys_getrusage
.quad sys_wait4 /* 365 */
- .quad sys_ni_syscall
+ .quad sys_adjtimex
.quad sys_ni_syscall
.quad sys_ni_syscall
.quad sys_ni_syscall /* 369 */
midway = (temp.f[0] & 0x003fffffffffffff) == 0;
if ((midway && (temp.f[0] & 0x0080000000000000)) ||
!midway)
- ++b;
+ ++*b;
}
break;
case ROUND_PINF:
- if ((temp.f[0] & 0x003fffffffffffff) != 0)
- ++b;
+ if ((temp.f[0] & 0x007fffffffffffff) != 0)
+ ++*b;
break;
case ROUND_NINF:
- if ((temp.f[0] & 0x003fffffffffffff) != 0)
- --b;
+ if ((temp.f[0] & 0x007fffffffffffff) != 0)
+ --*b;
break;
case ROUND_CHOP:
/* no action needed */
break;
}
- if ((temp.f[0] & 0x003fffffffffffff) != 0)
+ if ((temp.f[0] & 0x007fffffffffffff) != 0)
res |= FPCR_INE;
if (temp.s) {
extern unsigned int io_apic_irqs;
-#define IO_APIC_IRQ(x) ((1<<x) & io_apic_irqs)
-
#define MAX_IRQ_SOURCES 128
#define MAX_MP_BUSSES 32
enum mp_bustype {
release_irqlock(cpu);
}
+#define IO_APIC_IRQ(x) ((1<<x) & io_apic_irqs)
+
#else
#define irq_enter(cpu, irq) (++local_irq_count[cpu])
#define irq_exit(cpu, irq) (--local_irq_count[cpu])
+/* Make these no-ops when not using SMP */
+#define enable_IO_APIC_irq(x) do { } while (0)
+#define disable_IO_APIC_irq(x) do { } while (0)
+
+#define IO_APIC_IRQ(x) (0)
+
#endif
#define __STR(x) #x
L_TARGET := block.a
-L_OBJS := ll_rw_blk.o genhd.o
+L_OBJS := genhd.o
M_OBJS :=
MOD_LIST_NAME := BLOCK_MODULES
-LX_OBJS :=
+LX_OBJS := ll_rw_blk.o
MX_OBJS :=
ifeq ($(CONFIG_MAC_FLOPPY),y)
if (!drive->proc || !p)
return;
while (p->name != NULL) {
- ent = create_proc_entry(p->name, 0, drive->proc);
+ mode_t mode = S_IFREG|S_IRUSR;
+ if (!strcmp(p->name,"settings"))
+ mode |= S_IWUSR;
+ ent = create_proc_entry(p->name, mode, drive->proc);
if (!ent) return;
+ ent->nlink = 1;
ent->data = drive;
ent->read_proc = p->read_proc;
ent->write_proc = p->write_proc;
if (!hwif_ent) return;
#ifdef CONFIG_PCI
if (!IDE_PCI_DEVID_EQ(hwif->pci_devid, IDE_PCI_DEVID_NULL)) {
- ent = create_proc_entry("config", 0, hwif_ent);
+ ent = create_proc_entry("config", S_IFREG|S_IRUSR|S_IWUSR, hwif_ent);
if (!ent) return;
+ ent->nlink = 1;
ent->data = hwif;
ent->read_proc = proc_ide_read_config;
ent->write_proc = proc_ide_write_config;
EXPORT_SYMBOL(ide_end_request);
EXPORT_SYMBOL(ide_revalidate_disk);
EXPORT_SYMBOL(ide_cmd);
+EXPORT_SYMBOL(ide_wait_cmd);
EXPORT_SYMBOL(ide_stall_queue);
EXPORT_SYMBOL(ide_add_proc_entries);
EXPORT_SYMBOL(ide_remove_proc_entries);
#include <asm/io.h>
#include <linux/blk.h>
+#include <linux/module.h>
+
#define ATOMIC_ON() do { } while (0)
#define ATOMIC_OFF() do { } while (0)
ddv_init();
#endif
return 0;
-}
+};
+
+EXPORT_SYMBOL(io_request_lock);
* for the Specialix IO8+ multiport serial driver.
*
* Copyright (C) 1997 Roger Wolff (R.E.Wolff@BitWizard.nl)
- * Copyright (C) 1994-1996 Dmitry Gorodchanin (begemot@bgm.rosprint.net)
+ * Copyright (C) 1994-1996 Dmitry Gorodchanin (pgmdsg@ibi.com)
*
* Specialix pays for the development and support of this driver.
* Please DO contact io8-linux@specialix.co.uk if you require
static struct vm_operations_struct dummy = { NULL, };
vma->vm_ops = &dummy;
#endif
-#if LINUX_VERSION_CODE >= KERNEL_VER(2,1,45)
- vma->vm_dentry = dget(filep->f_dentry);
-#else
- vma_set_inode (vma, ino);
- inode_inc_count (ino);
-#endif
+ vma->vm_file = filep;
+ filep->f_count++;
}
current->blocked = old_sigmask; /* restore mask */
TRACE_EXIT result;
/*
* linux/drivers/char/riscom.c -- RISCom/8 multiport serial driver.
*
- * Copyright (C) 1994-1996 Dmitry Gorodchanin (begemot@bgm.rosprint.net)
+ * Copyright (C) 1994-1996 Dmitry Gorodchanin (pgmdsg@ibi.com)
*
* This code is loosely based on the Linux serial driver, written by
* Linus Torvalds, Theodore T'so and others. The RISCom/8 card
/*
* linux/drivers/char/riscom8.h -- RISCom/8 multiport serial driver.
*
- * Copyright (C) 1994-1996 Dmitry Gorodchanin (begemot@bgm.rosprint.net)
+ * Copyright (C) 1994-1996 Dmitry Gorodchanin (pgmdsg@ibi.com)
*
* This code is loosely based on the Linux serial driver, written by
* Linus Torvalds, Theodore T'so and others. The RISCom/8 card
* specialix.c -- specialix IO8+ multiport serial driver.
*
* Copyright (C) 1997 Roger Wolff (R.E.Wolff@BitWizard.nl)
- * Copyright (C) 1994-1996 Dmitry Gorodchanin (begemot@bgm.rosprint.net)
+ * Copyright (C) 1994-1996 Dmitry Gorodchanin (pgmdsg@ibi.com)
*
* Specialix pays for the development and support of this driver.
* Please DO contact io8-linux@specialix.co.uk if you require
* Specialix IO8+ multiport serial driver.
*
* Copyright (C) 1997 Roger Wolff (R.E.Wolff@BitWizard.nl)
- * Copyright (C) 1994-1996 Dmitry Gorodchanin (begemot@bgm.rosprint.net)
+ * Copyright (C) 1994-1996 Dmitry Gorodchanin (pgmdsg@ibi.com)
*
*
* Specialix pays for the development and support of this driver.
+++ /dev/null
-#
-# This file is used for selecting non-standard netcard options, and
-# need not be modified for typical use.
-#
-# Drivers are *not* selected in this file, but rather with files
-# automatically generated during the top-level kernel configuration.
-#
-# Special options supported, indexed by their 'config' name:
-#
-# CONFIG_WD80x3 The Western Digital (SMC) WD80x3 driver
-# WD_SHMEM=xxx Forces the address of the shared memory
-# CONFIG_NE2000 The NE-[12]000 clone driver.
-# PACKETBUF_MEMSIZE Allows an extra-large packet buffer to be
-# used. Usually pointless under Linux.
-# show_all_SAPROM Show the entire address PROM, not just the
-# ethernet address, during boot.
-# CONFIG_NE_RW_BUGFIX Patch an obscure bug with a version of the 8390.
-# CONFIG_NE_SANITY Double check the internal card xfer address
-# against the driver's value. Useful for debugging.
-# CONFIG_HPLAN The HP-LAN driver (for 8390-based boards only).
-# rw_bugfix Fix the same obscure bug.
-# CONFIG_EL2 The 3c503 EtherLink II driver
-# EL2_AUI Default to the AUI port instead of the BNC port
-# no_probe_nonshared_memory Don't probe for programmed-I/O boards.
-# EL2MEMTEST Test shared memory at boot-time.
-# CONFIG_PLIP The Crynwr-protocol PL/IP driver
-# INITIALTIMEOUTFACTOR Timing parameters.
-# MAXTIMEOUTFACTOR
-# DE600 The D-Link DE-600 Portable Ethernet Adaptor.
-# DE600_IO The DE600 I/O-port address (0x378 == default)
-# DE600_IRQ The DE600 IRQ number to use (IRQ7 == default)
-# DE600_DEBUG Enable or disable DE600 debugging (default off)
-# DE620 The D-Link DE-600 Portable Ethernet Adaptor.
-# DE620_IO The DE620 I/O-port address (0x378 == default)
-# DE620_IRQ The DE620 IRQ number to use (IRQ7 == default)
-# DE620_DEBUG Enable or disable DE600 debugging (default off)
-# DEPCA The DIGITAL series of LANCE based Ethernet Cards
-# (DEPCA, DE100, DE200/1/2, DE210, DE422 (EISA))
-# EWRK3 The DIGITAL series of AT Ethernet Cards (DE203/4/5)
-# EWRK3_DEBUG Set the desired debug level
-#
-# DE4x5 The DIGITAL series of PCI/EISA Ethernet Cards,
-# DE425, DE434, DE435, DE450, DE500
-# DE4X5_DEBUG Set the desired debug level
-# DEC_ONLY Allows driver to work with DIGITAL cards only -
-# see linux/drivers/net/README.de4x5
-# DE4X5_DO_MEMCPY Forces the Intels to use memory copies into sk_buffs
-# rather than straight DMA.
-# DE4X5_PARM See linux/Documentation/networking/de4x5.txt or the
-# driver source code for detailed information on setting
-# duplex and speed/media on individual adapters.
-#
-# DEFXX The DIGITAL series of FDDI EISA (DEFEA) and PCI (DEFPA)
-# controllers
-# DEFXX_DEBUG Set the desired debug level
-#
-# TULIP Tulip (dc21040/dc21041/ds21140) driver
-# TULIP_PORT specify default if_port
-# 0: 10TP
-# 1: 100Tx(ds21140)/AUI(dc2104x)
-# 2: BNC(dc2104x)
-# TULIP_FIX_PORT don't change if_port automatically if defined
-# TULIP_MAX_CARDS maximum number of probed card
-#
-
-# The following options exist, but cannot be set in this file.
-# lance.c
-# LANCE_DMA Change the default DMA to other than DMA5.
-# 8390.c
-# NO_PINGPONG Disable ping-pong transmit buffers.
-
-
-# Most drivers also have a *_DEBUG setting that may be adjusted.
-# The 8390 drivers share the EI_DEBUG setting.
-
-# General options for Space.c
-CONFIG_Space.o = # -DETH0_ADDR=0x300 -DETH0_IRQ=11
-CONFIG_3c503.o = # -DEL2_AUI
-CONFIG_wd.o = # -DWD_SHMEM=0xDD000
tristate 'Dummy net driver support' CONFIG_DUMMY
tristate 'EQL (serial line load balancing) support' CONFIG_EQUALIZER
if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
- if [ "$CONFIG_NETLINK_DEV" = "y" -o "$CONFIG_NETLINK_DEV" = "m" ]; then
- dep_tristate 'Ethertap network tap' CONFIG_ETHERTAP $CONFIG_NETLINK_DEV
+ if [ "$CONFIG_NETLINK" = "y" ]; then
+ tristate 'Ethertap network tap' CONFIG_ETHERTAP
fi
fi
#
# Makefile for the Linux network (ethercard) device drivers.
#
-# This will go away in some future future: hidden configuration files
-# are difficult for users to deal with.
-include CONFIG
-
SUB_DIRS :=
MOD_SUB_DIRS := $(SUB_DIRS)
ALL_SUB_DIRS := $(SUB_DIRS) hamradio
* the same and just have different names or only have minor differences
* such as more IO ports. As this driver is tested it will
* become more clear on exactly what cards are supported. The driver
- * defaults to using Dayna mode. To change the drivers mode adjust
- * drivers/net/CONFIG, and the line COPS_OPTS = -DDAYNA to -DTANGENT.
+ * defaults to using Dayna mode. To change the driver's mode, simply
+ * select Dayna or Tangent mode when configuring the kernel.
*
* This driver should support:
* TANGENT driver mode:
insmod de4x5 args='eth1:fdx autosense=BNC eth0:autosense=100Mb'.
- For a compiled in driver, in linux/drivers/net/CONFIG, place e.g.
- DE4X5_OPTS = -DDE4X5_PARM='"eth0:fdx autosense=AUI eth2:autosense=TP"'
+ For a compiled-in driver, somewhere in this file, place e.g.
+ #define DE4X5_PARM "eth0:fdx autosense=AUI eth2:autosense=TP"
Yes, I know full duplex isn't permissible on BNC or AUI; they're just
examples. By default, full duplex is turned off and AUTO is the default
** insmod de4x5 args='eth1:fdx autosense=BNC eth0:autosense=100Mb'.
**
** For a compiled in driver, place e.g.
-** DE4X5_OPTS = -DDE4X5_PARM='"eth0:fdx autosense=AUI eth2:autosense=TP"'
-** in linux/drivers/net/CONFIG
+** #define DE4X5_PARM "eth0:fdx autosense=AUI eth2:autosense=TP"
+** somewhere in this file above this point.
*/
#ifdef DE4X5_PARM
static char *args = DE4X5_PARM;
/*
* Enable debugging by "-DDE620_DEBUG=3" when compiling,
- * OR in "./CONFIG"
* OR by enabling the following #define
*
* use 0 for production, 1 for verification, >2 for debug
* even for building bridging tunnels.
*/
+#include <linux/config.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/skbuff.h>
#include <linux/init.h>
+#include <net/sock.h>
#include <linux/netlink.h>
/*
static int ethertap_start_xmit(struct sk_buff *skb, struct device *dev);
static int ethertap_close(struct device *dev);
static struct net_device_stats *ethertap_get_stats(struct device *dev);
-static int ethertap_rx(int id, struct sk_buff *skb);
+static void ethertap_rx(struct sock *sk, int len);
+#ifdef CONFIG_ETHERTAP_MC
+static void set_multicast_list(struct device *dev);
+#endif
static int ethertap_debug = 0;
struct net_local
{
+ struct sock *nl;
+#ifdef CONFIG_ETHERTAP_MC
+ __u32 groups;
+#endif
struct net_device_stats stats;
};
__initfunc(int ethertap_probe(struct device *dev))
{
- memcpy(dev->dev_addr, "\xFD\xFD\x00\x00\x00\x00", 6);
+ memcpy(dev->dev_addr, "\xFE\xFD\x00\x00\x00\x00", 6);
if (dev->mem_start & 0xf)
ethertap_debug = dev->mem_start & 0x7;
dev->hard_start_xmit = ethertap_start_xmit;
dev->stop = ethertap_close;
dev->get_stats = ethertap_get_stats;
+#ifdef CONFIG_ETHERTAP_MC
+ dev->set_multicast_list = set_multicast_list;
+#endif
/*
* Setup the generic properties
ether_setup(dev);
- dev->flags|=IFF_NOARP; /* Need to set ARP - looks like there is a bug
- in the 2.1.x hard header code currently */
+ dev->tx_queue_len = 0;
+ dev->flags|=IFF_NOARP;
tap_map[dev->base_addr]=dev;
-
+
return 0;
}
static int ethertap_open(struct device *dev)
{
- struct in_device *in_dev;
+ struct net_local *lp = (struct net_local*)dev->priv;
+
if (ethertap_debug > 2)
printk("%s: Doing ethertap_open()...", dev->name);
- netlink_attach(dev->base_addr, ethertap_rx);
+
+ MOD_INC_USE_COUNT;
+
+ lp->nl = netlink_kernel_create(dev->base_addr, ethertap_rx);
+ if (lp->nl == NULL) {
+ MOD_DEC_USE_COUNT;
+ return -ENOBUFS;
+ }
+
dev->start = 1;
dev->tbusy = 0;
+ return 0;
+}
- /* Fill in the MAC based on the IP address. We do the same thing
- here as PLIP does */
-
- if((in_dev=dev->ip_ptr)!=NULL)
- {
- /*
- * Any address wil do - we take the first
- */
- struct in_ifaddr *ifa=in_dev->ifa_list;
- if(ifa!=NULL)
- memcpy(dev->dev_addr+2,&ifa->ifa_local,4);
+#ifdef CONFIG_ETHERTAP_MC
+static unsigned ethertap_mc_hash(__u8 *dest)
+{
+ unsigned idx = 0;
+ idx ^= dest[0];
+ idx ^= dest[1];
+ idx ^= dest[2];
+ idx ^= dest[3];
+ idx ^= dest[4];
+ idx ^= dest[5];
+ return 1U << (idx&0x1F);
+}
+
+static void set_multicast_list(struct device *dev)
+{
+ unsigned groups = ~0;
+ struct net_local *lp = (struct net_local *)dev->priv;
+
+ if (!(dev->flags&(IFF_NOARP|IFF_PROMISC|IFF_ALLMULTI))) {
+ struct dev_mc_list *dmi;
+
+ groups = ethertap_mc_hash(dev->broadcast);
+
+ for (dmi=dev->mc_list; dmi; dmi=dmi->next) {
+ if (dmi->dmi_addrlen != 6)
+ continue;
+ groups |= ethertap_mc_hash(dmi->dmi_addr);
+ }
}
- MOD_INC_USE_COUNT;
- return 0;
+ lp->groups = groups;
+ if (lp->nl)
+ lp->nl->protinfo.af_netlink.groups = groups;
}
+#endif
/*
* We transmit by throwing the packet at netlink. We have to clone
static int ethertap_start_xmit(struct sk_buff *skb, struct device *dev)
{
struct net_local *lp = (struct net_local *)dev->priv;
- struct sk_buff *tmp;
- /* copy buffer to tap */
- tmp=skb_clone(skb, GFP_ATOMIC);
- if(tmp)
- {
- if(netlink_post(dev->base_addr, tmp)<0)
- kfree_skb(tmp);
- lp->stats.tx_bytes+=skb->len;
- lp->stats.tx_packets++;
+#ifdef CONFIG_ETHERTAP_MC
+ struct ethhdr *eth = (struct ethhdr*)skb->data;
+#endif
+
+ if (skb_headroom(skb) < 2) {
+ printk(KERN_DEBUG "%s : bug --- xmit with head<2\n", dev->name);
+ dev_kfree_skb(skb);
+ return 0;
+ }
+ skb_push(skb, 2);
+
+ /* Do the same thing that loopback does. */
+ if (skb_shared(skb)) {
+ struct sk_buff *skb2 = skb;
+ skb = skb_clone(skb, GFP_ATOMIC); /* Clone the buffer */
+ if (skb==NULL) {
+ dev_kfree_skb(skb2);
+ return 0;
+ }
+ dev_kfree_skb(skb2);
}
- dev_kfree_skb (skb);
+ /* ... but do not orphan it here, netlink does it in any case. */
+
+ lp->stats.tx_bytes+=skb->len;
+ lp->stats.tx_packets++;
+
+#ifndef CONFIG_ETHERTAP_MC
+ netlink_broadcast(lp->nl, skb, 0, ~0, GFP_ATOMIC);
+#else
+ if (dev->flags&IFF_NOARP) {
+ netlink_broadcast(lp->nl, skb, 0, ~0, GFP_ATOMIC);
+ return 0;
+ }
+
+ if (!(eth->h_dest[0]&1)) {
+ /* Unicast packet */
+ __u32 pid;
+ memcpy(&pid, eth->h_dest+2, 4);
+ netlink_unicast(lp->nl, skb, ntohl(pid), MSG_DONTWAIT);
+ } else
+ netlink_broadcast(lp->nl, skb, 0, ethertap_mc_hash(eth->h_dest), GFP_ATOMIC);
+#endif
return 0;
}
+static __inline__ int ethertap_rx_skb(struct sk_buff *skb, struct device *dev)
+{
+ struct net_local *lp = (struct net_local *)dev->priv;
+#ifdef CONFIG_ETHERTAP_MC
+ struct ethhdr *eth = (struct ethhdr*)(skb->data + 2);
+#endif
+ int len = skb->len;
+
+ if (len < 16) {
+ printk(KERN_DEBUG "%s : rx len = %d\n", dev->name, len);
+ kfree_skb(skb);
+ return -EINVAL;
+ }
+ if (NETLINK_CREDS(skb)->uid) {
+ printk(KERN_INFO "%s : user %d\n", dev->name, NETLINK_CREDS(skb)->uid);
+ kfree_skb(skb);
+ return -EPERM;
+ }
+
+#ifdef CONFIG_ETHERTAP_MC
+ if (!(dev->flags&(IFF_NOARP|IFF_PROMISC))) {
+ int drop = 0;
+
+ if (eth->h_dest[0]&1) {
+ if (!(ethertap_mc_hash(eth->h_dest)&lp->groups))
+ drop = 1;
+ } else if (memcmp(eth->h_dest, dev->dev_addr, 6) != 0)
+ drop = 1;
+
+ if (drop) {
+ if (ethertap_debug > 3)
+ printk(KERN_DEBUG "%s : not for us\n", dev->name);
+ kfree_skb(skb);
+ return -EINVAL;
+ }
+ }
+#endif
+
+ if (skb_shared(skb)) {
+ struct sk_buff *skb2 = skb;
+ skb = skb_clone(skb, GFP_KERNEL); /* Clone the buffer */
+ if (skb==NULL) {
+ kfree_skb(skb2);
+ return -ENOBUFS;
+ }
+ kfree_skb(skb2);
+ } else
+ skb_orphan(skb);
+
+ skb_pull(skb, 2);
+ skb->dev = dev;
+ skb->protocol=eth_type_trans(skb,dev);
+ memset(skb->cb, 0, sizeof(skb->cb));
+ lp->stats.rx_packets++;
+ lp->stats.rx_bytes+=len;
+ netif_rx(skb);
+ return len;
+}
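The clone-when-shared step above follows a general refcounting pattern: if another holder still references the buffer, work on a private copy rather than mutating the shared one. A minimal user-space sketch of the idea (invented toy types, not sk_buff code):

```c
#include <stdlib.h>
#include <string.h>

/* A toy shared buffer with a reference count. */
struct buf {
	int refs;
	char data[32];
};

static struct buf *buf_get(struct buf *b) { b->refs++; return b; }
static void buf_put(struct buf *b) { if (--b->refs == 0) free(b); }

/* Return a buffer we may safely modify: the same one if we are the
 * only holder, otherwise a private copy (dropping our shared ref). */
static struct buf *buf_unshare(struct buf *b)
{
	struct buf *copy;

	if (b->refs == 1)
		return b;
	copy = malloc(sizeof(*copy));
	memcpy(copy, b, sizeof(*copy));
	copy->refs = 1;
	buf_put(b);
	return copy;
}
```

With a single holder buf_unshare() is free; with two holders it copies, so the writer never disturbs data the other holder may still be reading.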
+
/*
* The typical workload of the driver:
* Handle the ether interface interrupts.
* (In this case handle the packets posted from user space..)
*/
-static int ethertap_rx(int id, struct sk_buff *skb)
+static void ethertap_rx(struct sock *sk, int len)
{
- struct device *dev = (struct device *)(tap_map[id]);
- struct net_local *lp;
- int len=skb->len;
-
- if(dev==NULL)
- {
- printk("ethertap: bad unit!\n");
- kfree_skb(skb);
- return -ENXIO;
+ struct device *dev = tap_map[sk->protocol];
+ struct sk_buff *skb;
+
+ if (dev==NULL) {
+ printk(KERN_CRIT "ethertap: bad unit!\n");
+ skb_queue_purge(&sk->receive_queue);
+ return;
}
- lp = (struct net_local *)dev->priv;
if (ethertap_debug > 3)
printk("%s: ethertap_rx()\n", dev->name);
- skb->dev = dev;
- skb->protocol=eth_type_trans(skb,dev);
- lp->stats.rx_packets++;
- lp->stats.rx_bytes+=len;
- netif_rx(skb);
- return len;
+
+ while ((skb = skb_dequeue(&sk->receive_queue)) != NULL)
+ ethertap_rx_skb(skb, dev);
}
static int ethertap_close(struct device *dev)
{
+ struct net_local *lp = (struct net_local *)dev->priv;
+ struct sock *sk = lp->nl;
+
if (ethertap_debug > 2)
- printk("%s: Shutting down tap %ld.\n", dev->name, dev->base_addr);
+ printk("%s: Shutting down.\n", dev->name);
dev->tbusy = 1;
dev->start = 0;
+
+ if (sk) {
+ lp->nl = NULL;
+ sock_release(sk->socket);
+ }
+
MOD_DEC_USE_COUNT;
return 0;
}
sprintf(devicename,"tap%d",unit);
if (dev_get(devicename))
{
- printk(KERN_INFO "ethertap: tap %d already loaded.\n", unit);
+ printk(KERN_INFO "%s already loaded.\n", devicename);
return -EBUSY;
}
if (register_netdev(&dev_ethertap) != 0)
if (tty != NULL && tty->disc_data == ppp)
tty->disc_data = NULL; /* Break the tty->ppp link */
- rtnl_lock();
- /* Strong layering violation. */
- if (dev && dev->flags & IFF_UP) {
- dev_close (dev); /* close the device properly */
- }
- rtnl_unlock();
-
ppp_free_buf (ppp->rbuf);
ppp_free_buf (ppp->wbuf);
ppp_free_buf (ppp->cbuf);
return ppp;
}
+/* Collect hung-up channels */
+
+static void ppp_sync(void)
+{
+ struct device *dev;
+ struct ppp *ppp;
+
+ rtnl_lock();
+ for (ppp = ppp_list; ppp != 0; ppp = ppp->next) {
+ if (!ppp->inuse) {
+ dev = ppp2dev(ppp);
+ if (dev->flags&IFF_UP)
+ dev_close(dev);
+ }
+ }
+ rtnl_unlock();
+}
+
+
/* allocate or create a PPP channel */
static struct ppp *
ppp_alloc (void)
struct device *dev;
struct ppp *ppp;
+ ppp_sync();
+
/* try to find a free device */
if_num = 0;
for (ppp = ppp_list; ppp != 0; ppp = ppp->next) {
- if (!test_and_set_bit(0, &ppp->inuse))
+ if (!test_and_set_bit(0, &ppp->inuse)) {
+
+ /* Reregister device */
+
+ dev = ppp2dev(ppp);
+ unregister_netdev (dev);
+
+ if (register_netdev (dev)) {
+ printk(KERN_DEBUG "cannot reregister ppp device\n");
+ return NULL;
+ }
return ppp;
+ }
++if_num;
}
/*
#include <linux/if_arp.h>
#include <linux/init.h>
#include <net/dst.h>
+#include <net/arp.h>
#include <linux/if_shaper.h>
int sh_debug; /* Debug flag */
return v;
}
+#if 0
static int shaper_cache(struct neighbour *neigh, struct hh_cache *hh)
{
struct shaper *sh=neigh->dev->priv;
printk("Shaper cache update\n");
sh->header_cache_update(hh, sh->dev, haddr);
}
+#endif
+
+static int shaper_neigh_setup(struct neighbour *n)
+{
+ if (n->nud_state == NUD_NONE) {
+ n->ops = &arp_broken_ops;
+ n->output = n->ops->output;
+ }
+ return 0;
+}
+
+static int shaper_neigh_setup_dev(struct device *dev, struct neigh_parms *p)
+{
+ if (p->tbl->family == AF_INET) {
+ p->neigh_setup = shaper_neigh_setup;
+ p->ucast_probes = 0;
+ p->mcast_probes = 0;
+ }
+ return 0;
+}
static int shaper_attach(struct device *shdev, struct shaper *sh, struct device *dev)
{
#else
shdev->header_cache_update = NULL;
shdev->hard_header_cache = NULL;
-#endif
+#endif
+ shdev->neigh_setup = shaper_neigh_setup_dev;
shdev->hard_header_len=dev->hard_header_len;
shdev->type=dev->type;
dev->hard_header = shaper_header;
dev->rebuild_header = shaper_rebuild_header;
+#if 0
dev->hard_header_cache = shaper_cache;
dev->header_cache_update= shaper_cache_update;
+#endif
+ dev->neigh_setup = shaper_neigh_setup_dev;
dev->do_ioctl = shaper_ioctl;
dev->hard_header_len = 0;
dev->type = ARPHRD_ETHER; /* initially */
* from multislip BSDI driver which was written
* by Igor Chechik, RELCOM Corp. Only algorithms
* have been ported to Linux SLIP driver.
+ * Vitaly E. Lavrov : Sane behaviour on tty hangup.
+ * Alexey Kuznetsov : Cleanup interfaces to tty&netdevice modules.
*/
#define SL_CHECK_TRANSMIT
static int sl_ioctl(struct device *dev,struct ifreq *rq,int cmd);
#endif
-/* Find a free SLIP channel, and link in this `tty' line. */
-static inline struct slip *
-sl_alloc(void)
-{
- slip_ctrl_t *slp = NULL;
- int i;
-
- if (slip_ctrls == NULL) return NULL; /* Master array missing ! */
-
- for (i = 0; i < slip_maxdev; i++) {
- slp = slip_ctrls[i];
- /* Not allocated ? */
- if (slp == NULL)
- break;
- /* Not in use ? */
- if (!test_and_set_bit(SLF_INUSE, &slp->ctrl.flags))
- break;
- }
- /* SLP is set.. */
-
- /* Sorry, too many, all slots in use */
- if (i >= slip_maxdev) return NULL;
-
- /* If no channels are available, allocate one */
- if (!slp &&
- (slip_ctrls[i] = (slip_ctrl_t *)kmalloc(sizeof(slip_ctrl_t),
- GFP_KERNEL)) != NULL) {
- slp = slip_ctrls[i];
- memset(slp, 0, sizeof(slip_ctrl_t));
-
- /* Initialize channel control data */
- set_bit(SLF_INUSE, &slp->ctrl.flags);
- slp->ctrl.tty = NULL;
- sprintf(slp->if_name, "sl%d", i);
- slp->dev.name = slp->if_name;
- slp->dev.base_addr = i;
- slp->dev.priv = (void*)&(slp->ctrl);
- slp->dev.next = NULL;
- slp->dev.init = slip_init;
-/* printk(KERN_INFO "slip: kmalloc()ed SLIP control node for line %s\n",
- slp->if_name); */
- }
- if (slp != NULL) {
-
- /* register device so that it can be ifconfig'ed */
- /* slip_init() will be called as a side-effect */
- /* SIDE-EFFECT WARNING: slip_init() CLEARS slp->ctrl ! */
-
- if (register_netdev(&(slp->dev)) == 0) {
- /* (Re-)Set the INUSE bit. Very Important! */
- set_bit(SLF_INUSE, &slp->ctrl.flags);
- slp->ctrl.dev = &(slp->dev);
- slp->dev.priv = (void*)&(slp->ctrl);
+/********************************
+* Buffer administration routines:
+* sl_alloc_bufs()
+* sl_free_bufs()
+* sl_realloc_bufs()
+*
+* NOTE: sl_realloc_bufs != sl_free_bufs + sl_alloc_bufs, because
+* sl_realloc_bufs provides strong atomicity and reallocation
+* on an actively running device.
+*********************************/
+
+/*
+ Allocate channel buffers.
+ */
-/* printk(KERN_INFO "slip: linked in netdev %s for active use\n",
- slp->if_name); */
+static int
+sl_alloc_bufs(struct slip *sl, int mtu)
+{
+ int err = -ENOBUFS;
+ unsigned long len;
+ char * rbuff = NULL;
+ char * xbuff = NULL;
+#ifdef SL_INCLUDE_CSLIP
+ char * cbuff = NULL;
+ struct slcompress *slcomp = NULL;
+#endif
- return (&(slp->ctrl));
+ /*
+ * Allocate the SLIP frame buffers:
+ *
+ * rbuff Receive buffer.
+ * xbuff Transmit buffer.
+ * cbuff Temporary compression buffer.
+ */
+ len = mtu * 2;
- } else {
- clear_bit(SLF_INUSE,&(slp->ctrl.flags));
- printk("sl_alloc() - register_netdev() failure.\n");
- }
+ /*
+ * allow for arrival of larger UDP packets, even if we say not to
+ * also fixes a bug in which SunOS sends 512-byte packets even with
+ * an MSS of 128
+ */
+ if (len < 576 * 2)
+ len = 576 * 2;
+ rbuff = kmalloc(len + 4, GFP_KERNEL);
+ if (rbuff == NULL)
+ goto err_exit;
+ xbuff = kmalloc(len + 4, GFP_KERNEL);
+ if (xbuff == NULL)
+ goto err_exit;
+#ifdef SL_INCLUDE_CSLIP
+ cbuff = kmalloc(len + 4, GFP_KERNEL);
+ if (cbuff == NULL)
+ goto err_exit;
+ slcomp = slhc_init(16, 16);
+ if (slcomp == NULL)
+ goto err_exit;
+#endif
+ start_bh_atomic();
+ if (sl->tty == NULL) {
+ end_bh_atomic();
+ err = -ENODEV;
+ goto err_exit;
}
+ sl->mtu = mtu;
+ sl->buffsize = len;
+ sl->rcount = 0;
+ sl->xleft = 0;
+ rbuff = xchg(&sl->rbuff, rbuff);
+ xbuff = xchg(&sl->xbuff, xbuff);
+#ifdef CONFIG_SLIP_MODE_SLIP6
+ cbuff = xchg(&sl->cbuff, cbuff);
+ slcomp = xchg(&sl->slcomp, slcomp);
+ sl->xdata = 0;
+ sl->xbits = 0;
+#endif
+ end_bh_atomic();
+ err = 0;
- return NULL;
+ /* Cleanup */
+err_exit:
+#ifdef SL_INCLUDE_CSLIP
+ if (cbuff)
+ kfree(cbuff);
+ if (slcomp)
+ slhc_free(slcomp);
+#endif
+ if (xbuff)
+ kfree(xbuff);
+ if (rbuff)
+ kfree(rbuff);
+ return err;
}
-
-/* Free a SLIP channel. */
-static inline void
-sl_free(struct slip *sl)
+/* Free the SLIP channel buffers. */
+static void
+sl_free_bufs(struct slip *sl)
{
+ void * tmp;
+
/* Free all SLIP frame buffers. */
- if (sl->rbuff) {
- kfree(sl->rbuff);
- }
- sl->rbuff = NULL;
- if (sl->xbuff) {
- kfree(sl->xbuff);
- }
- sl->xbuff = NULL;
+ if ((tmp = xchg(&sl->rbuff, NULL)) != NULL)
+ kfree(tmp);
+ if ((tmp = xchg(&sl->xbuff, NULL)) != NULL)
+ kfree(tmp);
#ifdef SL_INCLUDE_CSLIP
- /* Save CSLIP statistics */
- if (sl->slcomp) {
- sl->rx_compressed += sl->slcomp->sls_i_compressed;
- sl->rx_dropped += sl->slcomp->sls_i_tossed;
- sl->tx_compressed += sl->slcomp->sls_o_compressed;
- sl->tx_misses += sl->slcomp->sls_o_misses;
- }
- if (sl->cbuff) {
- kfree(sl->cbuff);
- }
- sl->cbuff = NULL;
- if(sl->slcomp)
- slhc_free(sl->slcomp);
- sl->slcomp = NULL;
+ if ((tmp = xchg(&sl->cbuff, NULL)) != NULL)
+ kfree(tmp);
+ if ((tmp = xchg(&sl->slcomp, NULL)) != NULL)
+ slhc_free(tmp);
#endif
-
- if (!test_and_clear_bit(SLF_INUSE, &sl->flags)) {
- printk("%s: sl_free for already free unit.\n", sl->dev->name);
- }
}
-/* MTU has been changed by the IP layer. Unfortunately we are not told
- about this, but we spot it ourselves and fix things up. We could be
- in an upcall from the tty driver, or in an ip packet queue. */
+/*
+   Reallocate SLIP channel buffers.
+ */
-static void sl_changedmtu(struct slip *sl)
+static int sl_realloc_bufs(struct slip *sl, int mtu)
{
+ int err = 0;
struct device *dev = sl->dev;
- unsigned char *xbuff, *rbuff, *oxbuff, *orbuff;
+ unsigned char *xbuff, *rbuff;
#ifdef SL_INCLUDE_CSLIP
- unsigned char *cbuff, *ocbuff;
+ unsigned char *cbuff;
#endif
- int len;
- unsigned long flags;
+ int len = mtu * 2;
- len = dev->mtu * 2;
/*
* allow for arrival of larger UDP packets, even if we say not to
* also fixes a bug in which SunOS sends 512-byte packets even with
* an MSS of 128
*/
- if (len < 576 * 2) {
+ if (len < 576 * 2)
len = 576 * 2;
- }
xbuff = (unsigned char *) kmalloc (len + 4, GFP_ATOMIC);
rbuff = (unsigned char *) kmalloc (len + 4, GFP_ATOMIC);
cbuff = (unsigned char *) kmalloc (len + 4, GFP_ATOMIC);
#endif
+
#ifdef SL_INCLUDE_CSLIP
if (xbuff == NULL || rbuff == NULL || cbuff == NULL) {
#else
if (xbuff == NULL || rbuff == NULL) {
#endif
- printk("%s: unable to grow slip buffers, MTU change cancelled.\n",
- sl->dev->name);
- dev->mtu = sl->mtu;
- if (xbuff != NULL) {
- kfree(xbuff);
- }
- if (rbuff != NULL) {
- kfree(rbuff);
+ if (mtu >= sl->mtu) {
+ printk("%s: unable to grow slip buffers, MTU change cancelled.\n",
+ dev->name);
+ err = -ENOBUFS;
}
-#ifdef SL_INCLUDE_CSLIP
- if (cbuff != NULL) {
- kfree(cbuff);
- }
-#endif
- return;
+ goto done;
}
- save_flags(flags); cli();
+ start_bh_atomic();
- oxbuff = sl->xbuff;
- sl->xbuff = xbuff;
- orbuff = sl->rbuff;
- sl->rbuff = rbuff;
+ err = -ENODEV;
+ if (sl->tty == NULL)
+ goto done_on_bh;
+
+ xbuff = xchg(&sl->xbuff, xbuff);
+ rbuff = xchg(&sl->rbuff, rbuff);
#ifdef SL_INCLUDE_CSLIP
- ocbuff = sl->cbuff;
- sl->cbuff = cbuff;
+ cbuff = xchg(&sl->cbuff, cbuff);
#endif
if (sl->xleft) {
if (sl->xleft <= len) {
if (sl->rcount) {
if (sl->rcount <= len) {
- memcpy(sl->rbuff, orbuff, sl->rcount);
+ memcpy(sl->rbuff, rbuff, sl->rcount);
} else {
sl->rcount = 0;
sl->rx_over_errors++;
set_bit(SLF_ERROR, &sl->flags);
}
}
- sl->mtu = dev->mtu;
-
+ sl->mtu = mtu;
+ dev->mtu = mtu;
sl->buffsize = len;
+ err = 0;
- restore_flags(flags);
+done_on_bh:
+ end_bh_atomic();
- if (oxbuff != NULL) {
- kfree(oxbuff);
- }
- if (orbuff != NULL) {
- kfree(orbuff);
- }
+done:
+ if (xbuff)
+ kfree(xbuff);
+ if (rbuff)
+ kfree(rbuff);
#ifdef SL_INCLUDE_CSLIP
- if (ocbuff != NULL) {
- kfree(ocbuff);
- }
+ if (cbuff)
+ kfree(cbuff);
#endif
+ return err;
}
unsigned char *p;
int actual, count;
-
- if (sl->mtu != sl->dev->mtu) { /* Someone has been ifconfigging */
-
- sl_changedmtu(sl);
- }
-
if (len > sl->mtu) { /* Sigh, shouldn't occur BUT ... */
- len = sl->mtu;
printk ("%s: truncating oversized transmit packet!\n", sl->dev->name);
sl->tx_dropped++;
sl_unlock(sl);
#endif
sl->xleft = count - actual;
sl->xhead = sl->xbuff + actual;
+#ifdef CONFIG_SLIP_SMART
/* VSV */
clear_bit(SLF_OUTWAIT, &sl->flags); /* reset outfill flag */
+#endif
}
/*
if (!dev->start) {
printk("%s: xmit call when iface is down\n", dev->name);
- return 1;
+ dev_kfree_skb(skb);
+ return 0;
+ }
+ if (sl->tty == NULL) {
+ dev_kfree_skb(skb);
+ return 0;
}
+
/*
* If we are busy already- too bad. We ought to be able
* to queue things at this point, to allow for a little
}
-/* Return the frame type ID. This is normally IP but maybe be AX.25. */
+/******************************************
+ * Routines looking at netdevice side.
+ ******************************************/
+
+/* Netdevice UP -> DOWN routine */
-/* Open the low-level part of the SLIP channel. Easy! */
static int
-sl_open(struct device *dev)
+sl_close(struct device *dev)
{
struct slip *sl = (struct slip*)(dev->priv);
- unsigned long len;
- if (sl->tty == NULL) {
- return -ENODEV;
- }
-
- /*
- * Allocate the SLIP frame buffers:
- *
- * rbuff Receive buffer.
- * xbuff Transmit buffer.
- * cbuff Temporary compression buffer.
- */
- len = dev->mtu * 2;
- /*
- * allow for arrival of larger UDP packets, even if we say not to
- * also fixes a bug in which SunOS sends 512-byte packets even with
- * an MSS of 128
- */
- if (len < 576 * 2) {
- len = 576 * 2;
- }
- sl->rbuff = (unsigned char *) kmalloc(len + 4, GFP_KERNEL);
- if (sl->rbuff == NULL) {
- goto norbuff;
- }
- sl->xbuff = (unsigned char *) kmalloc(len + 4, GFP_KERNEL);
- if (sl->xbuff == NULL) {
- goto noxbuff;
- }
-#ifdef SL_INCLUDE_CSLIP
- sl->cbuff = (unsigned char *) kmalloc(len + 4, GFP_KERNEL);
- if (sl->cbuff == NULL) {
- goto nocbuff;
- }
- sl->slcomp = slhc_init(16, 16);
- if (sl->slcomp == NULL) {
- goto noslcomp;
+ start_bh_atomic();
+ if (sl->tty) {
+ /* TTY discipline is running. */
+ sl->tty->flags &= ~(1 << TTY_DO_WRITE_WAKEUP);
}
-#endif
- sl->mtu = dev->mtu;
- sl->buffsize = len;
+ dev->tbusy = 1;
+ dev->start = 0;
sl->rcount = 0;
sl->xleft = 0;
-#ifdef CONFIG_SLIP_MODE_SLIP6
- sl->xdata = 0;
- sl->xbits = 0;
-#endif
- sl->flags &= (1 << SLF_INUSE); /* Clear ESCAPE & ERROR flags */
-#ifdef CONFIG_SLIP_SMART
- sl->keepalive=0; /* no keepalive by default = VSV */
- init_timer(&sl->keepalive_timer); /* initialize timer_list struct */
- sl->keepalive_timer.data=(unsigned long)sl;
- sl->keepalive_timer.function=sl_keepalive;
- sl->outfill=0; /* & outfill too */
- init_timer(&sl->outfill_timer);
- sl->outfill_timer.data=(unsigned long)sl;
- sl->outfill_timer.function=sl_outfill;
-#endif
- dev->tbusy = 0;
- dev->start = 1;
+ end_bh_atomic();
+ MOD_DEC_USE_COUNT;
return 0;
+}
- /* Cleanup */
+/* Netdevice DOWN -> UP routine */
+
+static int sl_open(struct device *dev)
+{
+ struct slip *sl = (struct slip*)(dev->priv);
+
+ if (sl->tty==NULL)
+ return -ENODEV;
+
+ sl->flags &= (1 << SLF_INUSE);
+ dev->start = 1;
+ dev->tbusy = 0;
+ MOD_INC_USE_COUNT;
+ return 0;
+}
+
+/* Netdevice change MTU request */
+
+static int sl_change_mtu(struct device *dev, int new_mtu)
+{
+ struct slip *sl = (struct slip*)(dev->priv);
+
+ if (new_mtu < 68 || new_mtu > 65534)
+ return -EINVAL;
+
+ if (new_mtu != dev->mtu)
+ return sl_realloc_bufs(sl, new_mtu);
+ return 0;
+}
+
+/* Netdevice get statistics request */
+
+static struct net_device_stats *
+sl_get_stats(struct device *dev)
+{
+ static struct net_device_stats stats;
+ struct slip *sl = (struct slip*)(dev->priv);
#ifdef SL_INCLUDE_CSLIP
-noslcomp:
- kfree(sl->cbuff);
-nocbuff:
+ struct slcompress *comp;
#endif
- kfree(sl->xbuff);
-noxbuff:
- kfree(sl->rbuff);
-norbuff:
- return -ENOMEM;
+
+ memset(&stats, 0, sizeof(struct net_device_stats));
+
+ stats.rx_packets = sl->rx_packets;
+ stats.tx_packets = sl->tx_packets;
+ stats.rx_bytes = sl->rx_bytes;
+ stats.tx_bytes = sl->tx_bytes;
+ stats.rx_dropped = sl->rx_dropped;
+ stats.tx_dropped = sl->tx_dropped;
+ stats.tx_errors = sl->tx_errors;
+ stats.rx_errors = sl->rx_errors;
+ stats.rx_over_errors = sl->rx_over_errors;
+#ifdef SL_INCLUDE_CSLIP
+ stats.rx_fifo_errors = sl->rx_compressed;
+ stats.tx_fifo_errors = sl->tx_compressed;
+ stats.collisions = sl->tx_misses;
+ comp = sl->slcomp;
+ if (comp) {
+ stats.rx_fifo_errors += comp->sls_i_compressed;
+ stats.rx_dropped += comp->sls_i_tossed;
+ stats.tx_fifo_errors += comp->sls_o_compressed;
+ stats.collisions += comp->sls_o_misses;
+ }
+#endif /* SL_INCLUDE_CSLIP */
+ return (&stats);
}
+/* Netdevice register callback */
-/* Close the low-level part of the SLIP channel. Easy! */
-static int
-sl_close(struct device *dev)
+static int sl_init(struct device *dev)
{
struct slip *sl = (struct slip*)(dev->priv);
- if (sl->tty == NULL) {
- return -EBUSY;
- }
- sl->tty->flags &= ~(1 << TTY_DO_WRITE_WAKEUP);
- dev->tbusy = 1;
- dev->start = 0;
+ /*
+ * Finish setting up the DEVICE info.
+ */
+
+ dev->mtu = sl->mtu;
+ dev->hard_start_xmit = sl_xmit;
+ dev->open = sl_open;
+ dev->stop = sl_close;
+ dev->get_stats = sl_get_stats;
+ dev->change_mtu = sl_change_mtu;
+#ifdef CONFIG_SLIP_SMART
+ dev->do_ioctl = sl_ioctl;
+#endif
+ dev->hard_header_len = 0;
+ dev->addr_len = 0;
+ dev->type = ARPHRD_SLIP + sl->mode;
+ dev->tx_queue_len = 10;
+
+ dev_init_buffers(dev);
+
+ /* New-style flags. */
+ dev->flags = IFF_NOARP|IFF_POINTOPOINT|IFF_MULTICAST;
return 0;
}
+
+/******************************************
+ Routines looking at TTY side.
+ ******************************************/
+
+
static int slip_receive_room(struct tty_struct *tty)
{
return 65536; /* We can handle an infinite amount of data. :-) */
if (!sl || sl->magic != SLIP_MAGIC || !sl->dev->start)
return;
- /*
- * Argh! mtu change time! - costs us the packet part received
- * at the change
- */
- if (sl->mtu != sl->dev->mtu) {
-
- sl_changedmtu(sl);
- }
-
/* Read the characters out of the buffer */
while (count--) {
if (fp && *fp++) {
}
}
+/************************************
+ * slip_open helper routines.
+ ************************************/
+
+/* Collect hung-up channels */
+
+static void sl_sync(void)
+{
+ int i;
+
+ for (i = 0; i < slip_maxdev; i++) {
+ slip_ctrl_t *slp = slip_ctrls[i];
+ if (slp == NULL)
+ break;
+ if (slp->ctrl.tty || slp->ctrl.leased)
+ continue;
+ if (slp->dev.flags&IFF_UP)
+ dev_close(&slp->dev);
+ }
+}
+
+/* Find a free SLIP channel, and link in this `tty' line. */
+static struct slip *
+sl_alloc(kdev_t line)
+{
+ struct slip *sl;
+ slip_ctrl_t *slp = NULL;
+ int i;
+ int sel = -1;
+ int score = -1;
+
+ if (slip_ctrls == NULL)
+ return NULL; /* Master array missing ! */
+
+ for (i = 0; i < slip_maxdev; i++) {
+ slp = slip_ctrls[i];
+ if (slp == NULL)
+ break;
+
+ if (slp->ctrl.leased) {
+ if (slp->ctrl.line != line)
+ continue;
+ if (slp->ctrl.tty)
+ return NULL;
+
+ /* Clear ESCAPE & ERROR flags */
+ slp->ctrl.flags &= (1 << SLF_INUSE);
+ return &slp->ctrl;
+ }
+
+ if (slp->ctrl.tty)
+ continue;
+
+ if (current->pid == slp->ctrl.pid) {
+ if (slp->ctrl.line == line && score < 3) {
+ sel = i;
+ score = 3;
+ continue;
+ }
+ if (score < 2) {
+ sel = i;
+ score = 2;
+ }
+ continue;
+ }
+ if (slp->ctrl.line == line && score < 1) {
+ sel = i;
+ score = 1;
+ continue;
+ }
+ if (score < 0) {
+ sel = i;
+ score = 0;
+ }
+ }
+
+ if (sel >= 0) {
+ i = sel;
+ slp = slip_ctrls[i];
+ if (score > 1) {
+ slp->ctrl.flags &= (1 << SLF_INUSE);
+ return &slp->ctrl;
+ }
+ }
+
+ /* Sorry, too many, all slots in use */
+ if (i >= slip_maxdev)
+ return NULL;
+
+ if (slp) {
+ if (test_bit(SLF_INUSE, &slp->ctrl.flags)) {
+ unregister_netdevice(&slp->dev);
+ sl_free_bufs(&slp->ctrl);
+ }
+ } else if ((slp = (slip_ctrl_t *)kmalloc(sizeof(slip_ctrl_t),GFP_KERNEL)) == NULL)
+ return NULL;
+
+ memset(slp, 0, sizeof(slip_ctrl_t));
+
+ sl = &slp->ctrl;
+ /* Initialize channel control data */
+ sl->magic = SLIP_MAGIC;
+ sl->dev = &slp->dev;
+ sl->mode = SL_MODE_DEFAULT;
+ sprintf(slp->if_name, "sl%d", i);
+ slp->dev.name = slp->if_name;
+ slp->dev.base_addr = i;
+ slp->dev.priv = (void*)sl;
+ slp->dev.init = sl_init;
+#ifdef CONFIG_SLIP_SMART
+ init_timer(&sl->keepalive_timer); /* initialize timer_list struct */
+ sl->keepalive_timer.data=(unsigned long)sl;
+ sl->keepalive_timer.function=sl_keepalive;
+ init_timer(&sl->outfill_timer);
+ sl->outfill_timer.data=(unsigned long)sl;
+ sl->outfill_timer.function=sl_outfill;
+#endif
+ slip_ctrls[i] = slp;
+ return &slp->ctrl;
+}
+
/*
* Open the high-level part of the SLIP channel.
* This function is called by the TTY module when the
static int
slip_open(struct tty_struct *tty)
{
- struct slip *sl = (struct slip *) tty->disc_data;
+ struct slip *sl;
int err;
+ MOD_INC_USE_COUNT;
+
+ /* RTnetlink lock is misused here to serialize concurrent
+	   opens of slip channels. There are better ways, but this is
+	   the simplest one.
+ */
+ rtnl_lock();
+
+	/* Collect hung-up channels. */
+ sl_sync();
+
+ sl = (struct slip *) tty->disc_data;
+
+ err = -EEXIST;
/* First make sure we're not already connected. */
- if (sl && sl->magic == SLIP_MAGIC) {
- return -EEXIST;
- }
+ if (sl && sl->magic == SLIP_MAGIC)
+ goto err_exit;
/* OK. Find a free SLIP channel to use. */
- if ((sl = sl_alloc()) == NULL) {
- return -ENFILE;
- }
+ err = -ENFILE;
+ if ((sl = sl_alloc(tty->device)) == NULL)
+ goto err_exit;
sl->tty = tty;
tty->disc_data = sl;
- if (tty->driver.flush_buffer) {
+ sl->line = tty->device;
+ sl->pid = current->pid;
+ if (tty->driver.flush_buffer)
tty->driver.flush_buffer(tty);
- }
- if (tty->ldisc.flush_buffer) {
+ if (tty->ldisc.flush_buffer)
tty->ldisc.flush_buffer(tty);
- }
- /* Restore default settings */
- sl->mode = SL_MODE_DEFAULT;
- sl->dev->type = ARPHRD_SLIP + sl->mode;
- /* Perform the low-level SLIP initialization. */
- if ((err = sl_open(sl->dev))) {
- return err;
+ if (!test_bit(SLF_INUSE, &sl->flags)) {
+ /* Perform the low-level SLIP initialization. */
+ if ((err = sl_alloc_bufs(sl, SL_MTU)) != 0)
+ goto err_free_chan;
+
+ if (register_netdevice(sl->dev)) {
+ sl_free_bufs(sl);
+ goto err_free_chan;
+ }
+
+ set_bit(SLF_INUSE, &sl->flags);
}
- MOD_INC_USE_COUNT;
+#ifdef CONFIG_SLIP_SMART
+ if (sl->keepalive) {
+ sl->keepalive_timer.expires=jiffies+sl->keepalive*HZ;
+ add_timer (&sl->keepalive_timer);
+ }
+ if (sl->outfill) {
+ sl->outfill_timer.expires=jiffies+sl->outfill*HZ;
+ add_timer (&sl->outfill_timer);
+ }
+#endif
/* Done. We have linked the TTY line to a channel. */
+ rtnl_unlock();
return sl->dev->base_addr;
+
+err_free_chan:
+ sl->tty = NULL;
+ tty->disc_data = NULL;
+ clear_bit(SLF_INUSE, &sl->flags);
+
+err_exit:
+ rtnl_unlock();
+
+ /* Count references from TTY module */
+ MOD_DEC_USE_COUNT;
+ return err;
}
+/*
+   Let me complain a bit about the TTY layer:
+   1. The TTY module calls this function from a soft interrupt.
+   2. The TTY module calls this function WITH INTERRUPTS MASKED!
+   3. The TTY module does not notify us about line discipline
+      shutdown.
+
+   It seems clean now. The solution is to treat the netdevice and
+   line discipline sides as two independent threads.
+
+   By-product (not desired): sl? does not notice hangups and remains
+   open. It is assumed that a user-level program (dip, diald,
+   slattach...) will catch SIGHUP and do the rest of the work.
+
+   I see no way to do more with the current tty code. --ANK
+ */
/*
* Close down a SLIP channel.
struct slip *sl = (struct slip *) tty->disc_data;
/* First make sure we're connected. */
- if (!sl || sl->magic != SLIP_MAGIC) {
+ if (!sl || sl->magic != SLIP_MAGIC || sl->tty != tty)
return;
- }
-
- rtnl_lock();
- if (sl->dev->flags & IFF_UP)
- {
- /* STRONG layering violation! --ANK */
- (void) dev_close(sl->dev);
- }
tty->disc_data = 0;
sl->tty = NULL;
+ if (!sl->leased)
+ sl->line = 0;
+
/* VSV = very important to remove timers */
#ifdef CONFIG_SLIP_SMART
if (sl->keepalive)
- (void)del_timer (&sl->keepalive_timer);
+ del_timer (&sl->keepalive_timer);
if (sl->outfill)
- (void)del_timer (&sl->outfill_timer);
-#endif
- sl_free(sl);
- unregister_netdevice(sl->dev);
- rtnl_unlock();
- MOD_DEC_USE_COUNT;
-}
-
-
-static struct net_device_stats *
-sl_get_stats(struct device *dev)
-{
- static struct net_device_stats stats;
- struct slip *sl = (struct slip*)(dev->priv);
-#ifdef SL_INCLUDE_CSLIP
- struct slcompress *comp;
+ del_timer (&sl->outfill_timer);
#endif
- memset(&stats, 0, sizeof(struct net_device_stats));
-
- stats.rx_packets = sl->rx_packets;
- stats.tx_packets = sl->tx_packets;
- stats.rx_bytes = sl->rx_bytes;
- stats.tx_bytes = sl->tx_bytes;
- stats.rx_dropped = sl->rx_dropped;
- stats.tx_dropped = sl->tx_dropped;
- stats.tx_errors = sl->tx_errors;
- stats.rx_errors = sl->rx_errors;
- stats.rx_over_errors = sl->rx_over_errors;
-#ifdef SL_INCLUDE_CSLIP
- stats.rx_fifo_errors = sl->rx_compressed;
- stats.tx_fifo_errors = sl->tx_compressed;
- stats.collisions = sl->tx_misses;
- comp = sl->slcomp;
- if (comp) {
- stats.rx_fifo_errors += comp->sls_i_compressed;
- stats.rx_dropped += comp->sls_i_tossed;
- stats.tx_fifo_errors += comp->sls_o_compressed;
- stats.collisions += comp->sls_o_misses;
- }
-#endif /* CONFIG_INET */
- return (&stats);
+ /* Count references from TTY module */
+ MOD_DEC_USE_COUNT;
}
-
/************************************************************************
* STANDARD SLIP ENCAPSULATION *
************************************************************************/
switch(s) {
case END:
+#ifdef CONFIG_SLIP_SMART
/* drop keeptest bit = VSV */
if (test_bit(SLF_KEEPTEST, &sl->flags))
clear_bit(SLF_KEEPTEST, &sl->flags);
+#endif
if (!test_and_clear_bit(SLF_ERROR, &sl->flags) && (sl->rcount > 2)) {
sl_bump(sl);
unsigned char c;
if (s == 0x70) {
+#ifdef CONFIG_SLIP_SMART
/* drop keeptest bit = VSV */
if (test_bit(SLF_KEEPTEST, &sl->flags))
clear_bit(SLF_KEEPTEST, &sl->flags);
+#endif
if (!test_and_clear_bit(SLF_ERROR, &sl->flags) && (sl->rcount > 2)) {
sl_bump(sl);
slip_ioctl(struct tty_struct *tty, void *file, int cmd, void *arg)
{
struct slip *sl = (struct slip *) tty->disc_data;
- int err;
unsigned int tmp;
/* First make sure we're connected. */
return 0;
case SIOCGIFENCAP:
- err = verify_area(VERIFY_WRITE, arg, sizeof(int));
- if (err) {
- return err;
- }
- put_user(sl->mode, (int *)arg);
+ if (put_user(sl->mode, (int *)arg))
+ return -EFAULT;
return 0;
case SIOCSIFENCAP:
- err = verify_area(VERIFY_READ, arg, sizeof(int));
- if (err) {
- return err;
- }
- get_user(tmp,(int *)arg);
+ if (get_user(tmp,(int *)arg))
+ return -EFAULT;
#ifndef SL_INCLUDE_CSLIP
if (tmp & (SL_MODE_CSLIP|SL_MODE_ADAPTIVE)) {
return -EINVAL;
#ifdef CONFIG_SLIP_SMART
/* VSV changes start here */
case SIOCSKEEPALIVE:
- err = verify_area(VERIFY_READ, arg, sizeof(int));
- if (err) {
- return -err;
- }
- get_user(tmp,(int *)arg);
+ if (get_user(tmp,(int *)arg))
+ return -EFAULT;
if (tmp > 255) /* max for unchar */
return -EINVAL;
+
+ start_bh_atomic();
+ if (!sl->tty) {
+ end_bh_atomic();
+ return -ENODEV;
+ }
if (sl->keepalive)
(void)del_timer (&sl->keepalive_timer);
if ((sl->keepalive = (unchar) tmp) != 0) {
add_timer(&sl->keepalive_timer);
set_bit(SLF_KEEPTEST, &sl->flags);
}
+ end_bh_atomic();
+
return 0;
case SIOCGKEEPALIVE:
- err = verify_area(VERIFY_WRITE, arg, sizeof(int));
- if (err) {
- return -err;
- }
- put_user(sl->keepalive, (int *)arg);
+ if (put_user(sl->keepalive, (int *)arg))
+ return -EFAULT;
return 0;
case SIOCSOUTFILL:
- err = verify_area(VERIFY_READ, arg, sizeof(int));
- if (err) {
- return -err;
- }
- get_user(tmp,(int *)arg);
+ if (get_user(tmp,(int *)arg))
+ return -EFAULT;
if (tmp > 255) /* max for unchar */
return -EINVAL;
+ start_bh_atomic();
+ if (!sl->tty) {
+ end_bh_atomic();
+ return -ENODEV;
+ }
if (sl->outfill)
(void)del_timer (&sl->outfill_timer);
if ((sl->outfill = (unchar) tmp) != 0){
add_timer(&sl->outfill_timer);
set_bit(SLF_OUTWAIT, &sl->flags);
}
+ end_bh_atomic();
return 0;
case SIOCGOUTFILL:
- err = verify_area(VERIFY_WRITE, arg, sizeof(int));
- if (err) {
- return -err;
- }
- put_user(sl->outfill, (int *)arg);
+ if (put_user(sl->outfill, (int *)arg))
+ return -EFAULT;
return 0;
/* VSV changes end */
#endif
if (sl == NULL) /* Allocation failed ?? */
return -ENODEV;
+ start_bh_atomic(); /* Hangup would kill us */
+
+ if (!sl->tty) {
+ end_bh_atomic();
+ return -ENODEV;
+ }
+
switch(cmd){
case SIOCSKEEPALIVE:
/* max for unchar */
case SIOCGOUTFILL:
rq->ifr_data=(caddr_t)((unsigned long)sl->outfill);
+ break;
+
+ case SIOCSLEASE:
+	/* Resolve a race: the device may have been hung up
+	   and then reopened by another process.
+	 */
+ if (sl->tty != current->tty && sl->pid != current->pid) {
+ end_bh_atomic();
+ return -EPERM;
+ }
+ sl->leased = 0;
+ if ((unsigned long)rq->ifr_data)
+ sl->leased = 1;
+ break;
+
+ case SIOCGLEASE:
+ rq->ifr_data=(caddr_t)((unsigned long)sl->leased);
};
+ end_bh_atomic();
return 0;
}
#endif
/* VSV changes end */
-static int sl_open_dev(struct device *dev)
-{
- struct slip *sl = (struct slip*)(dev->priv);
- if(sl->tty==NULL)
- return -ENODEV;
- return 0;
-}
-
/* Initialize SLIP control device -- register SLIP line discipline */
#ifdef MODULE
static int slip_init_ctrl_dev(void)
}
-/* Initialise the SLIP driver. Called by the device init code */
-
-int slip_init(struct device *dev)
-{
- struct slip *sl = (struct slip*)(dev->priv);
-
- if (sl == NULL) /* Allocation failed ?? */
- return -ENODEV;
-
- /* Set up the "SLIP Control Block". (And clear statistics) */
-
- memset(sl, 0, sizeof (struct slip));
- sl->magic = SLIP_MAGIC;
- sl->dev = dev;
-
- /*
- * Finish setting up the DEVICE info.
- */
-
- dev->mtu = SL_MTU;
- dev->hard_start_xmit = sl_xmit;
- dev->open = sl_open_dev;
- dev->stop = sl_close;
- dev->get_stats = sl_get_stats;
-#ifdef CONFIG_SLIP_SMART
- dev->do_ioctl = sl_ioctl;
-#endif
- dev->hard_header_len = 0;
- dev->addr_len = 0;
- dev->type = ARPHRD_SLIP + SL_MODE_DEFAULT;
- dev->tx_queue_len = 10;
-
- dev_init_buffers(dev);
-
- /* New-style flags. */
- dev->flags = IFF_NOARP|IFF_POINTOPOINT|IFF_MULTICAST;
- return 0;
-}
#ifdef MODULE
int
{
int i;
- if (slip_ctrls != NULL)
- {
- for (i = 0; i < slip_maxdev; i++)
- {
- if (slip_ctrls[i])
- {
- /*
- * VSV = if dev->start==0, then device
- * unregistered while close proc.
- */
- if (slip_ctrls[i]->dev.start)
- unregister_netdev(&(slip_ctrls[i]->dev));
-
- kfree(slip_ctrls[i]);
+ if (slip_ctrls != NULL) {
+ unsigned long start = jiffies;
+ int busy = 0;
+
+	/* First of all: check for active disciplines and hang them up.
+	 */
+ do {
+ if (busy) {
+ current->counter = 0;
+ schedule();
+ }
+
+ busy = 0;
+ start_bh_atomic();
+ for (i = 0; i < slip_maxdev; i++) {
+ struct slip_ctrl *slc = slip_ctrls[i];
+ if (slc && slc->ctrl.tty) {
+ busy++;
+ tty_hangup(slc->ctrl.tty);
+ }
+ }
+ end_bh_atomic();
+ } while (busy && jiffies - start < 1*HZ);
+
+ busy = 0;
+ for (i = 0; i < slip_maxdev; i++) {
+ struct slip_ctrl *slc = slip_ctrls[i];
+ if (slc) {
+ unregister_netdev(&slc->dev);
+ if (slc->ctrl.tty) {
+ printk("%s: tty discipline is still running\n", slc->dev.name);
+ /* Pin module forever */
+ MOD_INC_USE_COUNT;
+ busy++;
+ continue;
+ }
+ sl_free_bufs(&slc->ctrl);
+ kfree(slc);
slip_ctrls[i] = NULL;
}
}
- kfree(slip_ctrls);
- slip_ctrls = NULL;
+ if (!busy) {
+ kfree(slip_ctrls);
+ slip_ctrls = NULL;
+ }
}
if ((i = tty_register_ldisc(N_SLIP, NULL)))
{
{
struct slip *sl=(struct slip *)sls;
- if(sls==0L)
+ if (sl==NULL || sl->tty == NULL)
return;
if(sl->outfill)
sl->outfill_timer.expires=jiffies+sl->outfill*HZ;
add_timer(&sl->outfill_timer);
}
- else
- del_timer(&sl->outfill_timer);
}
static void sl_keepalive(unsigned long sls)
{
struct slip *sl=(struct slip *)sls;
- if(sl == NULL)
+ if (sl == NULL || sl->tty == NULL)
return;
if( sl->keepalive)
if(test_bit(SLF_KEEPTEST, &sl->flags))
{
/* keepalive still high :(, we must hangup */
- (void)del_timer(&sl->keepalive_timer);
if( sl->outfill ) /* outfill timer must be deleted too */
(void)del_timer(&sl->outfill_timer);
printk("%s: no packets received during keepalive timeout, hangup.\n", sl->dev->name);
}
else
set_bit(SLF_KEEPTEST, &sl->flags);
- (void)del_timer(&sl->keepalive_timer);
- sl->keepalive_timer.expires=jiffies+sl->keepalive*HZ;
+ sl->keepalive_timer.expires=jiffies+sl->keepalive*HZ;
add_timer(&sl->keepalive_timer);
}
- else
- (void)del_timer(&sl->keepalive_timer);
}
#endif
#define SLF_OUTWAIT 4 /* is outpacket was flag */
unsigned char mode; /* SLIP mode */
+ unsigned char leased;
+ kdev_t line;
+ pid_t pid;
#define SL_MODE_SLIP 0
#define SL_MODE_CSLIP 1
#define SL_MODE_SLIP6 2 /* Matt Dillon's printable slip */
DEVICE( CYCLADES, CYCLOM_Z_Hi, "Cyclom-Z above 1Mbyte"),
DEVICE( O2, O2_6832, "6832"),
DEVICE( 3DFX, 3DFX_VOODOO, "Voodoo"),
+ DEVICE( 3DFX, 3DFX_VOODOO2, "Voodoo2"),
DEVICE( SIGMADES, SIGMADES_6425, "REALmagic64/GX"),
DEVICE( STALLION, STALLION_ECHPCI832,"EasyConnection 8/32"),
DEVICE( STALLION, STALLION_ECHPCI864,"EasyConnection 8/64"),
*/
static struct dev_info device_list[] =
{
+{"Aashima","IMAGERY 2400SP","1.03",BLIST_NOLUN},/* Locks up if polled for lun != 0 */
{"CHINON","CD-ROM CDS-431","H42", BLIST_NOLUN}, /* Locks up if polled for lun != 0 */
{"CHINON","CD-ROM CDS-535","Q14", BLIST_NOLUN}, /* Locks up if polled for lun != 0 */
{"DENON","DRD-25X","V", BLIST_NOLUN}, /* Locks up if probed for lun != 0 */
}
}
- i = flush_buffer(inode, file, /* mtc.mt_op == MTSEEK || */
- mtc.mt_op == MTREW || mtc.mt_op == MTOFFL ||
- mtc.mt_op == MTRETEN || mtc.mt_op == MTEOM ||
- mtc.mt_op == MTLOCK || mtc.mt_op == MTLOAD ||
- mtc.mt_op == MTCOMPRESSION);
+ if (mtc.mt_op == MTSEEK) {
+		/* The old position must be restored if the partition is changed */
+ i = !STp->can_partitions ||
+ (STp->new_partition != STp->partition);
+ }
+ else {
+ i = mtc.mt_op == MTREW || mtc.mt_op == MTOFFL ||
+ mtc.mt_op == MTRETEN || mtc.mt_op == MTEOM ||
+ mtc.mt_op == MTLOCK || mtc.mt_op == MTLOAD ||
+ mtc.mt_op == MTCOMPRESSION;
+ }
+ i = flush_buffer(inode, file, i);
if (i < 0)
return i;
}
fi
if [ "$CONFIG_IPX" != "n" ]; then
tristate 'NCP filesystem support (to mount NetWare volumes)' CONFIG_NCP_FS
+ if [ "$CONFIG_NCP_FS" != "n" ]; then
+ source fs/ncpfs/Config.in
+ fi
fi
tristate 'OS/2 HPFS filesystem support (read only)' CONFIG_HPFS_FS
/*
* Convert a reserved page into buffers ... should happen only rarely.
*/
- if (nr_free_pages > (min_free_pages >> 1) &&
- grow_buffers(GFP_ATOMIC, size)) {
+ if (grow_buffers(GFP_ATOMIC, size)) {
#ifdef BUFFER_DEBUG
printk("refill_freelist: used reserve page\n");
#endif
#include <linux/coda.h>
#include <linux/coda_linux.h>
#include <linux/coda_psdev.h>
-#include <linux/coda_cnode.h>
+#include <linux/coda_fs_i.h>
#include <linux/coda_cache.h>
/* Keep various stats */
list_add(&el->cc_cclist, &sbi->sbi_cchead);
}
-void coda_cninsert(struct coda_cache *el, struct cnode *cnp)
+void coda_cninsert(struct coda_cache *el, struct coda_inode_info *cnp)
{
ENTRY;
if ( !cnp || !el) {
void coda_ccremove(struct coda_cache *el)
{
-ENTRY;
- list_del(&el->cc_cclist);
+ ENTRY;
+ if (el->cc_cclist.next && el->cc_cclist.prev)
+ list_del(&el->cc_cclist);
+ else
+		printk("coda_ccremove: trying to remove a null entry!\n");
}
void coda_cnremove(struct coda_cache *el)
{
-ENTRY;
- list_del(&el->cc_cnlist);
+ ENTRY;
+ if (el->cc_cnlist.next && el->cc_cnlist.prev)
+ list_del(&el->cc_cnlist);
+ else
+		printk("coda_cnremove: trying to remove a null entry!\n");
}
void coda_cache_create(struct inode *inode, int mask)
{
- struct cnode *cnp = ITOC(inode);
+ struct coda_inode_info *cnp = ITOC(inode);
struct super_block *sb = inode->i_sb;
struct coda_cache *cc = NULL;
-ENTRY;
+ ENTRY;
CODA_ALLOC(cc, struct coda_cache *, sizeof(*cc));
if ( !cc ) {
struct coda_cache * coda_cache_find(struct inode *inode)
{
- struct cnode *cnp = ITOC(inode);
+ struct coda_inode_info *cnp = ITOC(inode);
struct list_head *lh, *le;
struct coda_cache *cc = NULL;
}
}
-void coda_cache_clear_cnp(struct cnode *cnp)
+void coda_cache_clear_cnp(struct coda_inode_info *cnp)
{
struct list_head *lh, *le;
struct coda_cache *cc;
int coda_cache_check(struct inode *inode, int mask)
{
- struct cnode *cnp = ITOC(inode);
+ struct coda_inode_info *cnp = ITOC(inode);
struct list_head *lh, *le;
struct coda_cache *cc = NULL;
static void coda_flag_children(struct dentry *parent)
{
struct list_head *child;
- struct cnode *cnp;
+ struct coda_inode_info *cnp;
struct dentry *de;
- char str[50];
child = parent->d_subdirs.next;
while ( child != &parent->d_subdirs ) {
cnp = ITOC(de->d_inode);
if (cnp)
cnp->c_flags |= C_ZAPFID;
- CDEBUG(D_CACHE, "ZAPFID for %s\n", coda_f2s(&cnp->c_fid, str));
+ CDEBUG(D_CACHE, "ZAPFID for %s\n", coda_f2s(&cnp->c_fid));
child = child->next;
}
void coda_dentry_delete(struct dentry *dentry)
{
struct inode *inode = dentry->d_inode;
- struct cnode *cnp = NULL;
+ struct coda_inode_info *cnp = NULL;
ENTRY;
if (inode) {
return;
}
-static void coda_zap_cnode(struct cnode *cnp, int flags)
+static void coda_zap_cnode(struct coda_inode_info *cnp, int flags)
{
cnp->c_flags |= flags;
coda_cache_clear_cnp(cnp);
void coda_zapfid(struct ViceFid *fid, struct super_block *sb, int flag)
{
struct inode *inode = NULL;
- struct cnode *cnp;
+ struct coda_inode_info *cnp;
ENTRY;
struct coda_sb_info *sbi = coda_sbp(sb);
le = lh = &sbi->sbi_volroothead;
while ( (le = le->next) != lh ) {
- cnp = list_entry(le, struct cnode, c_volrootlist);
+ cnp = list_entry(le, struct coda_inode_info, c_volrootlist);
if ( cnp->c_fid.Volume == fid->Volume)
coda_zap_cnode(cnp, flag);
}
return;
}
cnp = ITOC(inode);
- CHECK_CNODE(cnp);
- if ( !cnp ) {
- printk("coda_zapfid: no cnode!\n");
- return;
- }
coda_zap_cnode(cnp, flag);
}
/* cnode related routines for the coda kernel code
- Peter Braam, Sep 1996.
+ (C) 1996 Peter Braam
*/
#include <linux/types.h>
#include <linux/coda.h>
#include <linux/coda_linux.h>
-#include <linux/coda_cnode.h>
+#include <linux/coda_fs_i.h>
#include <linux/coda_psdev.h>
extern int coda_debug;
extern int coda_print_entry;
/* cnode.c */
-static struct cnode *coda_cnode_alloc(void);
-/* return pointer to new empty cnode */
-static struct cnode *coda_cnode_alloc(void)
-{
- struct cnode *result = NULL;
-
- CODA_ALLOC(result, struct cnode *, sizeof(struct cnode));
- if ( !result ) {
- printk("coda_cnode_alloc: kmalloc returned NULL.\n");
- return result;
- }
-
- memset(result, 0, (int) sizeof(struct cnode));
- INIT_LIST_HEAD(&(result->c_cnhead));
- INIT_LIST_HEAD(&(result->c_volrootlist));
- return result;
-}
-
-/* release cnode memory */
-void coda_cnode_free(struct cnode *cinode)
-{
- CODA_FREE(cinode, sizeof(struct cnode));
-}
static void coda_fill_inode (struct inode *inode, struct coda_vattr *attr)
*/
int coda_cnode_make(struct inode **inode, ViceFid *fid, struct super_block *sb)
{
- struct cnode *cnp;
+ struct coda_inode_info *cnp;
struct coda_sb_info *sbi= coda_sbp(sb);
struct coda_vattr attr;
int error;
ino_t ino;
- char str[50];
ENTRY;
error = venus_getattr(sb, fid, &attr);
if ( error ) {
printk("coda_cnode_make: coda_getvattr returned %d for %s.\n",
- error, coda_f2s(fid, str));
+ error, coda_f2s(fid));
*inode = NULL;
return error;
}
return -ENOMEM;
}
- /* link the cnode and the vfs inode
- if this inode is not linked yet
- */
- if ( !(*inode)->u.generic_ip ) {
- cnp = coda_cnode_alloc();
- if ( !cnp ) {
- printk("coda_cnode_make: coda_cnode_alloc failed.\n");
- clear_inode(*inode);
- return -ENOMEM;
- }
- cnp->c_fid = *fid;
- cnp->c_magic = CODA_CNODE_MAGIC;
+ cnp = ITOC(*inode);
+ if ( cnp->c_magic == 0 ) {
+ memset(cnp, 0, (int) sizeof(struct coda_inode_info));
+ cnp->c_fid = *fid;
+ cnp->c_magic = CODA_CNODE_MAGIC;
cnp->c_flags = C_VATTR;
- cnp->c_vnode = *inode;
- (*inode)->u.generic_ip = (void *) cnp;
- CDEBUG(D_CNODE, "LINKING: ino %ld, count %d at 0x%x with cnp 0x%x, cnp->c_vnode 0x%x, in->u.generic_ip 0x%x\n", (*inode)->i_ino, (*inode)->i_count, (int) (*inode), (int) cnp, (int)cnp->c_vnode, (int) (*inode)->u.generic_ip);
+ cnp->c_vnode = *inode;
+ INIT_LIST_HEAD(&(cnp->c_cnhead));
+ INIT_LIST_HEAD(&(cnp->c_volrootlist));
} else {
- cnp = (struct cnode *)(*inode)->u.generic_ip;
- CDEBUG(D_CNODE, "FOUND linked: ino %ld, count %d, at 0x%x with cnp 0x%x, cnp->c_vnode 0x%x\n", (*inode)->i_ino, (*inode)->i_count, (int) (*inode), (int) cnp, (int)cnp->c_vnode);
+		printk("coda_cnode_make on initialized inode %ld, %s!\n",
+ (*inode)->i_ino, coda_f2s(&cnp->c_fid));
}
- CHECK_CNODE(cnp);
/* fill in the inode attributes */
if ( coda_fid_is_volroot(fid) )
/* convert a fid to an inode. Avoids having a hash table
such as present in the Mach minicache */
-struct inode *coda_fid_to_inode(ViceFid *fid, struct super_block *sb) {
+struct inode *coda_fid_to_inode(ViceFid *fid, struct super_block *sb)
+{
ino_t nr;
struct inode *inode;
- struct cnode *cnp;
- char str[50];
+ struct coda_inode_info *cnp;
ENTRY;
- CDEBUG(D_INODE, "%s\n", coda_f2s(fid, str));
+ CDEBUG(D_INODE, "%s\n", coda_f2s(fid));
nr = coda_f2i(fid);
inode = iget(sb, nr);
}
/* check if this inode is linked to a cnode */
- cnp = (struct cnode *) inode->u.generic_ip;
- if ( cnp == NULL ) {
+ cnp = ITOC(inode);
+
+ if ( cnp->c_magic != CODA_CNODE_MAGIC ) {
iput(inode);
- EXIT;
return NULL;
}
+
/* make sure fid is the one we want */
if ( !coda_fideq(fid, &(cnp->c_fid)) ) {
printk("coda_fid2inode: bad cnode! Tell Peter.\n");
#include <linux/coda.h>
#include <linux/coda_linux.h>
#include <linux/coda_psdev.h>
-#include <linux/coda_cnode.h>
+#include <linux/coda_fs_i.h>
#include <linux/coda_cache.h>
/* initialize the debugging variables */
int coda_access_cache = 1;
-/* caller must allocate 36 byte string ! */
+/* returns a pointer to a static buffer; the caller need not allocate */
-char * coda_f2s(ViceFid *f, char *s)
+char * coda_f2s(ViceFid *f)
{
+ static char s[50];
if ( f ) {
- sprintf(s, "(%-#10lx,%-#10lx,%-#10lx)",
+ sprintf(s, "(%10lx,%10lx,%10lx)",
f->Volume, f->Vnode, f->Unique);
}
return s;
#include <linux/coda.h>
#include <linux/coda_linux.h>
#include <linux/coda_psdev.h>
-#include <linux/coda_cnode.h>
+#include <linux/coda_fs_i.h>
#include <linux/coda_cache.h>
/* dir inode-ops */
/* acces routines: lookup, readlink, permission */
static int coda_lookup(struct inode *dir, struct dentry *entry)
{
- struct cnode *dircnp;
+ struct coda_inode_info *dircnp;
struct inode *res_inode = NULL;
struct ViceFid resfid;
int dropme = 0; /* to indicate entry should not be cached */
int error = 0;
const char *name = entry->d_name.name;
size_t length = entry->d_name.len;
- char str[50];
ENTRY;
CDEBUG(D_INODE, "name %s, len %d in ino %ld\n",
if ( length > CFS_MAXNAMLEN ) {
printk("name too long: lookup, %s (%*s)\n",
- coda_f2s(&dircnp->c_fid, str), length, name);
+ coda_f2s(&dircnp->c_fid), length, name);
return -ENAMETOOLONG;
}
CDEBUG(D_INODE, "lookup: %*s in %s\n", length, name,
- coda_f2s(&dircnp->c_fid, str));
+ coda_f2s(&dircnp->c_fid));
/* control object, create inode on the fly */
if (coda_isroot(dir) && coda_iscontrol(name, length)) {
return -error;
} else if (error != -ENOENT) {
CDEBUG(D_INODE, "error for %s(%*s)%d\n",
- coda_f2s(&dircnp->c_fid, str), length, name, error);
+ coda_f2s(&dircnp->c_fid), length, name, error);
return error;
}
CDEBUG(D_INODE, "lookup: %s is (%s) type %d result %d, dropme %d\n",
- name, coda_f2s(&resfid, str), type, error, dropme);
+ name, coda_f2s(&resfid), type, error, dropme);
exit:
entry->d_time = 0;
int coda_permission(struct inode *inode, int mask)
{
- struct cnode *cp;
+ struct coda_inode_info *cp;
int error;
- char str[50];
ENTRY;
error = venus_access(inode->i_sb, &(cp->c_fid), mask);
CDEBUG(D_INODE, "fid: %s, ino: %ld (mask: %o) error: %d\n",
- coda_f2s(&(cp->c_fid), str), inode->i_ino, mask, error);
+ coda_f2s(&(cp->c_fid)), inode->i_ino, mask, error);
if ( error == 0 ) {
coda_cache_enter(inode, mask);
static int coda_create(struct inode *dir, struct dentry *de, int mode)
{
int error=0;
- struct cnode *dircnp;
+ struct coda_inode_info *dircnp;
const char *name=de->d_name.name;
int length=de->d_name.len;
struct inode *result = NULL;
if ( length > CFS_MAXNAMLEN ) {
char str[50];
printk("name too long: create, %s(%s)\n",
- coda_f2s(&dircnp->c_fid, str), name);
+ coda_f2s(&dircnp->c_fid), name);
return -ENAMETOOLONG;
}
if ( error ) {
char str[50];
CDEBUG(D_INODE, "create: %s, result %d\n",
- coda_f2s(&newfid, str), error);
+ coda_f2s(&newfid), error);
d_drop(de);
return error;
}
static int coda_mkdir(struct inode *dir, struct dentry *de, int mode)
{
- struct cnode *dircnp;
+ struct coda_inode_info *dircnp;
struct inode *inode;
struct coda_vattr attr;
const char *name = de->d_name.name;
int len = de->d_name.len;
int error;
struct ViceFid newfid;
- char fidstr[50];
if (!dir || !S_ISDIR(dir->i_mode)) {
CHECK_CNODE(dircnp);
CDEBUG(D_INODE, "mkdir %s (len %d) in %s, mode %o.\n",
- name, len, coda_f2s(&(dircnp->c_fid), fidstr), mode);
+ name, len, coda_f2s(&(dircnp->c_fid)), mode);
attr.va_mode = mode;
error = venus_mkdir(dir->i_sb, &(dircnp->c_fid),
if ( error ) {
CDEBUG(D_INODE, "mkdir error: %s result %d\n",
- coda_f2s(&newfid, fidstr), error);
+ coda_f2s(&newfid), error);
d_drop(de);
return error;
}
CDEBUG(D_INODE, "mkdir: new dir has fid %s.\n",
- coda_f2s(&newfid, fidstr));
+ coda_f2s(&newfid));
error = coda_cnode_make(&inode, &newfid, dir->i_sb);
if ( error ) {
struct inode *inode = source_de->d_inode;
const char * name = de->d_name.name;
int len = de->d_name.len;
- struct cnode *dir_cnp, *cnp;
+ struct coda_inode_info *dir_cnp, *cnp;
char str[50];
int error;
cnp = ITOC(inode);
CHECK_CNODE(cnp);
- CDEBUG(D_INODE, "old: fid: %s\n", coda_f2s(&(cnp->c_fid), str));
- CDEBUG(D_INODE, "directory: %s\n", coda_f2s(&(dir_cnp->c_fid), str));
+ CDEBUG(D_INODE, "old: fid: %s\n", coda_f2s(&(cnp->c_fid)));
+ CDEBUG(D_INODE, "directory: %s\n", coda_f2s(&(dir_cnp->c_fid)));
if ( len > CFS_MAXNAMLEN ) {
printk("coda_link: name too long. \n");
{
const char *name = de->d_name.name;
int len = de->d_name.len;
- struct cnode *dir_cnp = ITOC(dir_inode);
+ struct coda_inode_info *dir_cnp = ITOC(dir_inode);
int symlen;
int error=0;
int coda_unlink(struct inode *dir, struct dentry *de)
{
- struct cnode *dircnp;
+ struct coda_inode_info *dircnp;
int error;
const char *name = de->d_name.name;
int len = de->d_name.len;
CHECK_CNODE(dircnp);
CDEBUG(D_INODE, " %s in %s, ino %ld\n", name ,
- coda_f2s(&(dircnp->c_fid), fidstr), dir->i_ino);
+ coda_f2s(&(dircnp->c_fid)), dir->i_ino);
/* this file should no longer be in the namecache! */
int coda_rmdir(struct inode *dir, struct dentry *de)
{
- struct cnode *dircnp;
+ struct coda_inode_info *dircnp;
const char *name = de->d_name.name;
int len = de->d_name.len;
int error, rehash = 0;
int new_length = new_dentry->d_name.len;
struct inode *old_inode = old_dentry->d_inode;
struct inode *new_inode = new_dentry->d_inode;
- struct cnode *new_cnp, *old_cnp;
+ struct coda_inode_info *new_cnp, *old_cnp;
int error, rehash = 0, update = 1;
ENTRY;
old_cnp = ITOC(old_dir);
int coda_readdir(struct file *file, void *dirent, filldir_t filldir)
{
int result = 0;
- struct cnode *cnp;
+ struct coda_inode_info *cnp;
struct file open_file;
struct dentry open_dentry;
struct inode *inode=file->f_dentry->d_inode;
{
ino_t ino;
dev_t dev;
- struct cnode *cnp;
+ struct coda_inode_info *cnp;
int error = 0;
struct inode *cont_inode = NULL;
unsigned short flags = f->f_flags;
int coda_release(struct inode *i, struct file *f)
{
- struct cnode *cnp;
+ struct coda_inode_info *cnp;
int error;
unsigned short flags = f->f_flags;
unsigned short cflags = coda_flags_to_cflags(flags);
#include <linux/coda.h>
#include <linux/coda_linux.h>
-#include <linux/coda_cnode.h>
+#include <linux/coda_fs_i.h>
#include <linux/coda_psdev.h>
#include <linux/coda_cache.h>
struct inode *inode = de->d_inode;
struct dentry cont_dentry;
struct inode *cont_inode;
- struct cnode *cnp;
+ struct coda_inode_info *cnp;
ENTRY;
static int coda_file_mmap(struct file * file, struct vm_area_struct * vma)
{
- struct cnode *cnp;
+ struct coda_inode_info *cnp;
cnp = ITOC(file->f_dentry->d_inode);
cnp->c_mmcount++;
static ssize_t coda_file_read(struct file *coda_file, char *buff,
size_t count, loff_t *ppos)
{
- struct cnode *cnp;
+ struct coda_inode_info *cnp;
struct inode *coda_inode = coda_file->f_dentry->d_inode;
struct inode *cont_inode = NULL;
struct file cont_file;
static ssize_t coda_file_write(struct file *coda_file, const char *buff,
size_t count, loff_t *ppos)
{
- struct cnode *cnp;
+ struct coda_inode_info *cnp;
struct inode *coda_inode = coda_file->f_dentry->d_inode;
struct inode *cont_inode = NULL;
struct file cont_file;
int coda_fsync(struct file *coda_file, struct dentry *coda_dentry)
{
- struct cnode *cnp;
+ struct coda_inode_info *cnp;
struct inode *coda_inode = coda_dentry->d_inode;
struct inode *cont_inode = NULL;
struct file cont_file;
{
coda_file->f_pos = open_file->f_pos;
/* XXX what about setting the mtime here too? */
- coda_inode->i_mtime = open_inode->i_mtime;
+ /* coda_inode->i_mtime = open_inode->i_mtime; */
coda_inode->i_size = open_inode->i_size;
return;
}
{
struct super_block *sbptr;
- sbptr = get_super(to_kdev_t(dev));
+ sbptr = get_super(dev);
if ( !sbptr ) {
printk("coda_inode_grab: coda_find_super returns NULL.\n");
#include <linux/coda.h>
#include <linux/coda_linux.h>
-#include <linux/coda_cnode.h>
+#include <linux/coda_fs_i.h>
#include <linux/coda_cache.h>
#include <linux/coda_psdev.h>
int error;
struct PioctlData data;
struct inode *target_inode = NULL;
- struct cnode *cnp;
+ struct coda_inode_info *cnp;
ENTRY;
/* get the Pioctl data arguments from user space */
* Look up the pathname. Note that the pathname is in
* user memory, and namei takes care of this
*/
- CDEBUG(D_PIOCTL, "namei, data.follow = %d\n", data.follow);
+ CDEBUG(D_PIOCTL, "namei, data.follow = %d\n",
+ data.follow);
if ( data.follow ) {
target_de = namei(data.path);
} else {
target_de = lnamei(data.path);
}
-
- if (!target_de) {
+
+ if ( PTR_ERR(target_de) == -ENOENT ) {
CDEBUG(D_PIOCTL, "error: lookup fails.\n");
- return -EINVAL;
+ return PTR_ERR(target_de);
} else {
target_inode = target_de->d_inode;
}
- CDEBUG(D_PIOCTL, "target ino: 0x%ld, dev: %s\n",
- target_inode->i_ino, kdevname(target_inode->i_dev));
+ CDEBUG(D_PIOCTL, "target ino: %ld, dev: 0x%x\n",
+ target_inode->i_ino, target_inode->i_dev);
/* return if it is not a Coda inode */
if ( target_inode->i_sb != inode->i_sb ) {
#include <linux/coda.h>
#include <linux/coda_linux.h>
-#include <linux/coda_cnode.h>
+#include <linux/coda_fs_i.h>
#include <linux/coda_psdev.h>
#include <linux/coda_cache.h>
#include <linux/coda_sysctl.h>
can profit from setting the C_DYING flag on the root
cnode of Coda filesystems */
if (coda_super_info[minor].sbi_root) {
- struct cnode *cnp = ITOC(coda_super_info[minor].sbi_root);
+ struct coda_inode_info *cnp =
+ ITOC(coda_super_info[minor].sbi_root);
cnp->c_flags |= C_DYING;
} else
vcp->vc_inuse = 0;
#include <linux/coda.h>
#include <linux/coda_linux.h>
#include <linux/coda_psdev.h>
-#include <linux/coda_cnode.h>
+#include <linux/coda_fs_i.h>
#include <linux/coda_cache.h>
unlock_super(sb);
goto error;
}
- printk("coda_read_super: rootfid is %s\n", coda_f2s(&fid, str));
+ printk("coda_read_super: rootfid is %s\n", coda_f2s(&fid));
/* make root inode */
error = coda_cnode_make(&root, &fid, sb);
goto error;
}
- printk("coda_read_super: rootinode is %ld dev %s\n",
- root->i_ino, kdevname(root->i_dev));
+ printk("coda_read_super: rootinode is %ld dev %d\n",
+ root->i_ino, root->i_dev);
sbi->sbi_root = root;
sb->s_root = d_alloc_root(root, NULL);
unlock_super(sb);
}
if (root) {
iput(root);
- coda_cnode_free(ITOC(root));
}
sb->s_dev = 0;
return NULL;
/* all filling in of inodes postponed until lookup */
static void coda_read_inode(struct inode *inode)
{
+ struct coda_inode_info *cnp;
ENTRY;
- inode->u.generic_ip = NULL;
+ cnp = ITOC(inode);
+ cnp->c_magic = 0;
return;
}
{
ENTRY;
- CDEBUG(D_INODE,"ino: %ld, cnp: %p\n", in->i_ino, in->u.generic_ip);
+ CDEBUG(D_INODE,"ino: %ld, count %d\n", in->i_ino, in->i_count);
+
+ if ( in->i_count == 1 )
+ in->i_nlink = 0;
+
}
static void coda_delete_inode(struct inode *inode)
{
- struct cnode *cnp;
+ struct coda_inode_info *cnp;
struct inode *open_inode;
ENTRY;
CDEBUG(D_SUPER, " inode->ino: %ld, count: %d\n",
inode->i_ino, inode->i_count);
- if ( inode->i_ino == CTL_INO ) {
+ cnp = ITOC(inode);
+ if ( inode->i_ino == CTL_INO || cnp->c_magic != CODA_CNODE_MAGIC ) {
clear_inode(inode);
return;
}
- cnp = ITOC(inode);
if ( coda_fid_is_volroot(&cnp->c_fid) )
list_del(&cnp->c_volrootlist);
coda_cache_clear_cnp(cnp);
inode->u.generic_ip = NULL;
- coda_cnode_free(cnp);
clear_inode(inode);
EXIT;
}
static int coda_notify_change(struct dentry *de, struct iattr *iattr)
{
struct inode *inode = de->d_inode;
- struct cnode *cnp;
+ struct coda_inode_info *cnp;
struct coda_vattr vattr;
int error;
if (MINOR(psdev->i_rdev) >= MAX_CODADEVS) {
printk("minor %d not an allocated Coda PSDEV\n",
- MINOR(psdev->i_rdev));
+ psdev->i_rdev);
return 1;
}
#include <linux/coda.h>
#include <linux/coda_linux.h>
#include <linux/coda_psdev.h>
-#include <linux/coda_cnode.h>
+#include <linux/coda_fs_i.h>
#include <linux/coda_cache.h>
static int coda_readlink(struct dentry *de, char *buffer, int length);
NULL, /* mknod */
NULL, /* rename */
coda_readlink, /* readlink */
- coda_follow_link, /* follow_link */
+ coda_follow_link, /* follow_link */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* bmap */
NULL, /* truncate */
- NULL, /* permission */
- NULL, /* smap */
- NULL, /* update page */
- NULL /* revalidate */
+ NULL, /* permission */
+ NULL, /* smap */
+ NULL, /* update page */
+ NULL /* revalidate */
};
static int coda_readlink(struct dentry *de, char *buffer, int length)
{
struct inode *inode = de->d_inode;
- int len;
+ int len;
int error;
- char *buf;
- struct cnode *cp;
- ENTRY;
+ char *buf;
+ struct coda_inode_info *cp;
+ ENTRY;
- cp = ITOC(inode);
- CHECK_CNODE(cp);
+ cp = ITOC(inode);
+ CHECK_CNODE(cp);
- /* the maximum length we receive is len */
- if ( length > CFS_MAXPATHLEN )
- len = CFS_MAXPATHLEN;
+ /* the maximum length we receive is len */
+ if ( length > CFS_MAXPATHLEN )
+ len = CFS_MAXPATHLEN;
else
- len = length;
+ len = length;
CODA_ALLOC(buf, char *, len);
if ( !buf )
- return -ENOMEM;
+ return -ENOMEM;
error = venus_readlink(inode->i_sb, &(cp->c_fid), buf, &len);
- CDEBUG(D_INODE, "result %s\n", buf);
+ CDEBUG(D_INODE, "result %s\n", buf);
if (! error) {
copy_to_user(buffer, buf, len);
put_user('\0', buffer + len);
{
struct inode *inode = de->d_inode;
int error;
- struct cnode *cnp;
+ struct coda_inode_info *cnp;
unsigned int len;
char mem[CFS_MAXPATHLEN];
char *path;
ENTRY;
- CDEBUG(D_INODE, "(%s/%ld)\n", kdevname(inode->i_dev), inode->i_ino);
+ CDEBUG(D_INODE, "(%x/%ld)\n", inode->i_dev, inode->i_ino);
- cnp = ITOC(inode);
- CHECK_CNODE(cnp);
+ cnp = ITOC(inode);
+ CHECK_CNODE(cnp);
len = CFS_MAXPATHLEN;
error = venus_readlink(inode->i_sb, &(cnp->c_fid), mem, &len);
#include <linux/coda.h>
#include <linux/coda_linux.h>
-#include <linux/coda_cnode.h>
+#include <linux/coda_fs_i.h>
#include <linux/coda_psdev.h>
#include <linux/coda_cache.h>
#include <linux/coda_sysctl.h>
#include <asm/system.h>
#include <asm/segment.h>
-
+#include <asm/signal.h>
+#include <linux/signal.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/coda.h>
#include <linux/coda_linux.h>
#include <linux/coda_psdev.h>
-#include <linux/coda_cnode.h>
+#include <linux/coda_fs_i.h>
#include <linux/coda_cache.h>
#define UPARG(op)\
if (error) {
printk("coda_pioctl: Venus returns: %d for %s\n",
- error, coda_f2s(fid, str));
+ error, coda_f2s(fid));
goto exit;
}
static inline void coda_waitfor_upcall(struct vmsg *vmp)
{
struct wait_queue wait = { current, NULL };
+ old_sigset_t pending;
vmp->vm_posttime = jiffies;
else
current->state = TASK_UNINTERRUPTIBLE;
+ /* got a reply */
if ( vmp->vm_flags & VM_WRITE )
break;
- if (signal_pending(current) &&
- (jiffies > vmp->vm_posttime + coda_timeout * HZ) )
+
+ if ( ! signal_pending(current) )
+ schedule();
+ /* signal is present: after timeout always return */
+ if ( jiffies > vmp->vm_posttime + coda_timeout * HZ )
+ break;
+
+ spin_lock_irq(&current->sigmask_lock);
+ pending = current->blocked.sig[0] & current->signal.sig[0];
+ spin_unlock_irq(&current->sigmask_lock);
+
+ /* if this process really wants to die, let it go */
+ if ( sigismember(&pending, SIGKILL) ||
+ sigismember(&pending, SIGINT) )
break;
- schedule();
+ else
+ schedule();
}
remove_wait_queue(&vmp->vm_sleep, &wait);
current->state = TASK_RUNNING;
printk("ZAPDIR: Null fid\n");
return 0;
}
- CDEBUG(D_DOWNCALL, "zapdir: fid = %s\n", coda_f2s(fid, str));
+ CDEBUG(D_DOWNCALL, "zapdir: fid = %s\n", coda_f2s(fid));
clstats(CFS_ZAPDIR);
coda_zapfid(fid, sb, C_ZAPDIR);
return(0);
printk("ZAPVNODE: Null fid or cred\n");
return 0;
}
- CDEBUG(D_DOWNCALL, "zapvnode: fid = %s\n", coda_f2s(fid, str));
+ CDEBUG(D_DOWNCALL, "zapvnode: fid = %s\n", coda_f2s(fid));
coda_zapfid(fid, sb, C_ZAPFID);
coda_cache_clear_cred(sb, cred);
clstats(CFS_ZAPVNODE);
printk("ZAPFILE: Null fid\n");
return 0;
}
- CDEBUG(D_DOWNCALL, "zapfile: fid = %s\n", coda_f2s(fid, str));
+ CDEBUG(D_DOWNCALL, "zapfile: fid = %s\n", coda_f2s(fid));
coda_zapfid(fid, sb, C_ZAPFID);
return 0;
}
printk("PURGEFID: Null fid\n");
return 0;
}
- CDEBUG(D_DOWNCALL, "purgefid: fid = %s\n", coda_f2s(fid, str));
+ CDEBUG(D_DOWNCALL, "purgefid: fid = %s\n", coda_f2s(fid));
clstats(CFS_PURGEFID);
coda_zapfid(fid, sb, C_ZAPDIR);
return 0;
*
* Author: Marco van Wieringen <mvw@mcs.ow.nl> <mvw@tnix.net>
*
- * Fixes: Dmitry Gorodchanin <begemot@bgm.rosprint.net>, 11 Feb 96
+ * Fixes: Dmitry Gorodchanin <pgmdsg@ibi.com>, 11 Feb 96
* removed race conditions in dqput(), dqget() and iput().
* Andi Kleen removed all verify_area() calls, 31 Dec 96
* Nick Kralevich <nickkral@cal.alumni.berkeley.edu>, 21 Jul 97
return MSDOS_SB(inode->i_sb)->cvf_format
->cvf_file_read(filp,buf,count,ppos);
- if (!MSDOS_I(inode)->i_binary)
+ /*
+ * MS-DOS filesystems with a blocksize > 512 may have blocks
+ * spread over several hardware sectors (unaligned), which
+ * is not something the generic routines can (or would want
+ * to) handle.
+ */
+ if (!MSDOS_I(inode)->i_binary || inode->i_sb->s_blocksize > 512)
return fat_file_read_text(filp, buf, count, ppos);
return generic_file_read(filp, buf, count, ppos);
}
*
* Removed some race conditions in flock_lock_file(), marked other possible
* races. Just grep for FIXME to see them.
- * Dmitry Gorodchanin (begemot@bgm.rosprint.net), February 09, 1996.
+ * Dmitry Gorodchanin (pgmdsg@ibi.com), February 09, 1996.
*
* Addressed Dmitry's concerns. Deadlock checking no longer recursive.
* Lock allocation changed to GFP_ATOMIC as we can't afford to sleep
--- /dev/null
+#
+# NCP Filesystem configuration
+#
+# bool ' Packet signatures' CONFIG_NCPFS_PACKET_SIGNING
+bool ' Proprietary file locking' CONFIG_NCPFS_IOCTL_LOCKING
+bool ' Clear remove/delete inhibit when needed' CONFIG_NCPFS_STRONG
+bool ' Use NFS namespace if available' CONFIG_NCPFS_NFS_NS
+bool ' Use LONG (OS/2) namespace if available' CONFIG_NCPFS_OS2_NS
+bool ' Allow mounting of volume subdirectories' CONFIG_NCPFS_MOUNT_SUBDIR
+# bool ' NDS interserver authentication support' CONFIG_NCPFS_NDS_DOMAINS
*
*/
+#include <linux/config.h>
+
#include <linux/sched.h>
#include <linux/errno.h>
#include <linux/stat.h>
* This is the callback when the dcache has a lookup hit.
*/
+
+#ifdef CONFIG_NCPFS_STRONG
+/* try to delete a readonly file (NW R bit set) */
+
+static int
+ncp_force_unlink(struct inode *dir, struct dentry* dentry)
+{
+ int res=0x9c,res2;
+ struct iattr ia;
+
+ /* remove the Read-Only flag on the NW server */
+
+ memset(&ia,0,sizeof(struct iattr));
+ ia.ia_mode = dentry->d_inode->i_mode;
+ ia.ia_mode |= NCP_SERVER(dir)->m.file_mode & 0222; /* set write bits */
+ ia.ia_valid = ATTR_MODE;
+
+ res2=ncp_notify_change(dentry, &ia);
+ if (res2)
+ {
+ goto leave_me;
+ }
+
+ /* now try again the delete operation */
+
+ res = ncp_del_file_or_subdir2(NCP_SERVER(dir), dentry);
+
+ if (res) /* delete failed, set R bit again */
+ {
+ memset(&ia,0,sizeof(struct iattr));
+ ia.ia_mode = dentry->d_inode->i_mode;
+ ia.ia_mode &= ~(NCP_SERVER(dir)->m.file_mode & 0222); /* clear write bits */
+ ia.ia_valid = ATTR_MODE;
+
+ res2=ncp_notify_change(dentry, &ia);
+ if (res2)
+ {
+ goto leave_me;
+ }
+ }
+leave_me:
+ return(res);
+}
+#endif /* CONFIG_NCPFS_STRONG */
+
+#ifdef CONFIG_NCPFS_STRONG
+static int
+ncp_force_rename(struct inode *old_dir, struct dentry* old_dentry, char *_old_name,
+ struct inode *new_dir, struct dentry* new_dentry, char *_new_name)
+{
+ int res=0x90,res2;
+ struct iattr ia;
+
+ /* remove the Read-Only flag on the NW server */
+
+ memset(&ia,0,sizeof(struct iattr));
+ ia.ia_mode = old_dentry->d_inode->i_mode;
+ ia.ia_mode |= NCP_SERVER(old_dir)->m.file_mode & 0222; /* set write bits */
+ ia.ia_valid = ATTR_MODE;
+
+ res2=ncp_notify_change(old_dentry, &ia);
+ if (res2)
+ {
+ goto leave_me;
+ }
+
+ /* now try again the rename operation */
+ res = ncp_ren_or_mov_file_or_subdir(NCP_SERVER(old_dir),
+ old_dir, _old_name,
+ new_dir, _new_name);
+
+ memset(&ia,0,sizeof(struct iattr));
+ ia.ia_mode = old_dentry->d_inode->i_mode;
+ ia.ia_mode &= ~(NCP_SERVER(old_dentry->d_inode)->m.file_mode & 0222); /* clear write bits */
+ ia.ia_valid = ATTR_MODE;
+
+ /* FIXME: uses only inode info, no dentry info... so it is safe to call */
+ /* it now with old dentry. If we use name (in future), we have to move */
+ /* it after dentry_move in caller */
+ res2=ncp_notify_change(old_dentry, &ia);
+ if (res2)
+ {
+ printk(KERN_INFO "ncpfs: ncp_notify_change (2) failed: %08x\n",res2);
+ goto leave_me;
+ }
+
+ leave_me:
+ return(res);
+}
+#endif /* CONFIG_NCPFS_STRONG */
+
+
static int
ncp_lookup_validate(struct dentry * dentry)
{
* If we didn't find it, or if it has a different dirEntNum to
* what we remember, it's not valid any more.
*/
- if (!res)
+ if (!res) {
if (finfo.nw_info.i.dirEntNum == NCP_FINFO(dentry->d_inode)->dirEntNum)
val=1;
#ifdef NCPFS_PARANOIA
else
printk(KERN_DEBUG "ncp_lookup_validate: found, but dirEntNum changed\n");
#endif
+ ncp_update_inode2(dentry->d_inode, &finfo.nw_info);
+ }
if (!val) ncp_invalid_dir_cache(dir);
finished:
int result;
if (ncp_single_volume(server)) {
+ struct dentry* dent;
+
result = -ENOENT;
str_upper(server->m.mounted_vol);
if (ncp_lookup_volume(server, server->m.mounted_vol,
goto out;
}
str_lower(server->root.finfo.i.entryName);
+ dent = server->root_dentry;
+ if (dent) {
+ struct inode* ino = dent->d_inode;
+ if (ino) {
+ NCP_FINFO(ino)->volNumber = server->root.finfo.i.volNumber;
+ NCP_FINFO(ino)->dirEntNum = server->root.finfo.i.dirEntNum;
+ NCP_FINFO(ino)->DosDirNum = server->root.finfo.i.DosDirNum;
+ } else {
+ DPRINTK(KERN_DEBUG "ncpfs: sb->root_dentry->d_inode == NULL!\n");
+ }
+ } else {
+ DPRINTK(KERN_DEBUG "ncpfs: sb->root_dentry == NULL!\n");
+ }
}
result = 0;
finfo.nw_info.access = O_RDWR;
error = ncp_instantiate(dir, dentry, &finfo);
} else {
+ if (result == 0x87) error = -ENAMETOOLONG;
DPRINTK(KERN_DEBUG "ncp_create: %s/%s failed\n",
dentry->d_parent->d_name.name, dentry->d_name.name);
}
static int ncp_unlink(struct inode *dir, struct dentry *dentry)
{
struct inode *inode = dentry->d_inode;
- int error, result;
- __u8 _name[dentry->d_name.len + 1];
+ int error;
DPRINTK(KERN_DEBUG "ncp_unlink: unlinking %s/%s\n",
dentry->d_parent->d_name.name, dentry->d_name.name);
ncp_make_closed(inode);
}
- strncpy(_name, dentry->d_name.name, dentry->d_name.len);
- _name[dentry->d_name.len] = '\0';
- if (!ncp_preserve_case(dir))
- {
- str_upper(_name);
+ error = ncp_del_file_or_subdir2(NCP_SERVER(dir), dentry);
+#ifdef CONFIG_NCPFS_STRONG
+ if (error == 0x9C && NCP_SERVER(dir)->m.flags & NCP_MOUNT_STRONG) { /* R/O */
+ error = ncp_force_unlink(dir, dentry);
}
- error = -EACCES;
- result = ncp_del_file_or_subdir(NCP_SERVER(dir), dir, _name);
- if (!result) {
+#endif
+ if (!error) {
DPRINTK(KERN_DEBUG "ncp: removed %s/%s\n",
dentry->d_parent->d_name.name, dentry->d_name.name);
ncp_invalid_dir_cache(dir);
d_delete(dentry);
- error = 0;
+ } else if (error == 0xFF) {
+ error = -ENOENT;
+ } else {
+ error = -EACCES;
}
+
out:
return error;
}
{
int old_len = old_dentry->d_name.len;
int new_len = new_dentry->d_name.len;
- int error, result;
+ int error;
char _old_name[old_dentry->d_name.len + 1];
char _new_name[new_dentry->d_name.len + 1];
str_upper(_new_name);
}
- error = -EACCES;
- result = ncp_ren_or_mov_file_or_subdir(NCP_SERVER(old_dir),
+ error = ncp_ren_or_mov_file_or_subdir(NCP_SERVER(old_dir),
old_dir, _old_name,
new_dir, _new_name);
- if (result == 0)
+#ifdef CONFIG_NCPFS_STRONG
+ if (error == 0x90 && NCP_SERVER(old_dir)->m.flags & NCP_MOUNT_STRONG) { /* RO */
+ error = ncp_force_rename(old_dir, old_dentry, _old_name, new_dir, new_dentry, _new_name);
+ }
+#endif
+ if (error == 0)
{
DPRINTK(KERN_DEBUG "ncp renamed %s -> %s.\n",
old_dentry->d_name.name,new_dentry->d_name.name);
ncp_invalid_dir_cache(old_dir);
ncp_invalid_dir_cache(new_dir);
d_move(old_dentry,new_dentry);
- error = 0;
+ } else {
+ if (error == 0x9E)
+ error = -ENAMETOOLONG;
+ else if (error == 0xFF)
+ error = -ENOENT;
+ else
+ error = -EACCES;
}
out:
return error;
static void ncp_read_inode(struct inode *);
static void ncp_put_inode(struct inode *);
static void ncp_delete_inode(struct inode *);
-static int ncp_notify_change(struct dentry *, struct iattr *);
static void ncp_put_super(struct super_block *);
static int ncp_statfs(struct super_block *, struct statfs *, int);
NULL /* remount */
};
+extern struct dentry_operations ncp_dentry_operations;
+
static struct nw_file_info *read_nwinfo = NULL;
static struct semaphore read_sem = MUTEX;
#endif
}
+void ncp_update_inode2(struct inode* inode, struct nw_file_info *nwinfo)
+{
+ struct nw_info_struct *nwi = &nwinfo->i;
+ struct ncp_server *server = NCP_SERVER(inode);
+
+ if (!NCP_FINFO(inode)->opened) {
+ if (nwi->attributes & aDIR) {
+ inode->i_mode = server->m.dir_mode;
+ inode->i_size = 512;
+ } else {
+ inode->i_mode = server->m.file_mode;
+ inode->i_size = le32_to_cpu(nwi->dataStreamSize);
+ }
+ if (nwi->attributes & aRONLY) inode->i_mode &= ~0222;
+ }
+ inode->i_blocks = 0;
+ if ((inode->i_size)&&(inode->i_blksize)) {
+ inode->i_blocks = (inode->i_size-1)/(inode->i_blksize)+1;
+ }
+ /* TODO: times? I'm not sure... */
+ NCP_FINFO(inode)->DosDirNum = nwinfo->i.DosDirNum;
+ NCP_FINFO(inode)->dirEntNum = nwinfo->i.dirEntNum;
+ NCP_FINFO(inode)->volNumber = nwinfo->i.volNumber;
+}
+
/*
* Fill in the inode based on the nw_file_info structure.
*/
inode->i_mode = server->m.file_mode;
inode->i_size = le32_to_cpu(nwi->dataStreamSize);
}
+ if (nwi->attributes & aRONLY) inode->i_mode &= ~0222;
DDPRINTK(KERN_DEBUG "ncp_read_inode: inode->i_mode = %u\n", inode->i_mode);
root->finfo.opened= 0;
info->ino = 2; /* tradition */
info->nw_info = root->finfo;
+ return;
}
struct super_block *
struct inode *root_inode;
kdev_t dev = sb->s_dev;
int error;
+#ifdef CONFIG_NCPFS_PACKET_SIGNING
+ int options;
+#endif
struct ncpfs_inode_info finfo;
MOD_INC_USE_COUNT;
goto out_no_data;
if (data->version != NCP_MOUNT_VERSION)
goto out_bad_mount;
- if ((data->ncp_fd >= NR_OPEN) ||
- ((ncp_filp = current->files->fd[data->ncp_fd]) == NULL) ||
- !S_ISSOCK(ncp_filp->f_dentry->d_inode->i_mode))
+ ncp_filp = fget(data->ncp_fd);
+ if (!ncp_filp)
goto out_bad_file;
+ if (!S_ISSOCK(ncp_filp->f_dentry->d_inode->i_mode))
+ goto out_bad_file2;
lock_super(sb);
- ncp_filp->f_count++;
sb->s_blocksize = 1024; /* Eh... Is this correct? */
sb->s_blocksize_bits = 10;
server->packet = NULL;
server->buffer_size = 0;
server->conn_status = 0;
+ server->root_dentry = NULL;
+#ifdef CONFIG_NCPFS_PACKET_SIGNING
+ server->sign_wanted = 0;
+ server->sign_active = 0;
+#endif
+ server->auth.auth_type = NCP_AUTH_NONE;
+ server->auth.object_name_len = 0;
+ server->auth.object_name = NULL;
+ server->auth.object_type = 0;
+ server->priv.len = 0;
+ server->priv.data = NULL;
server->m = *data;
/* Although anything producing this is buggy, it happens
goto out_no_connect;
DPRINTK(KERN_DEBUG "ncp_read_super: NCP_SBP(sb) = %x\n", (int) NCP_SBP(sb));
- error = ncp_negotiate_buffersize(server, NCP_DEFAULT_BUFSIZE,
- &(server->buffer_size));
- if (error)
+#ifdef CONFIG_NCPFS_PACKET_SIGNING
+ if (ncp_negotiate_size_and_options(server, NCP_DEFAULT_BUFSIZE,
+ NCP_DEFAULT_OPTIONS, &(server->buffer_size), &options) == 0)
+ {
+ if (options != NCP_DEFAULT_OPTIONS)
+ {
+ if (ncp_negotiate_size_and_options(server,
+ NCP_DEFAULT_BUFSIZE,
+ options & 2,
+ &(server->buffer_size), &options) != 0)
+
+ {
+ goto out_no_bufsize;
+ }
+ }
+ if (options & 2)
+ server->sign_wanted = 1;
+ }
+ else
+#endif /* CONFIG_NCPFS_PACKET_SIGNING */
+ if (ncp_negotiate_buffersize(server, NCP_DEFAULT_BUFSIZE,
+ &(server->buffer_size)) != 0)
goto out_no_bufsize;
DPRINTK(KERN_DEBUG "ncpfs: bufsize = %d\n", server->buffer_size);
ncp_init_root(server, &finfo);
+ server->name_space[finfo.nw_info.i.volNumber] = NW_NS_DOS;
root_inode = ncp_iget(sb, &finfo);
if (!root_inode)
goto out_no_root;
DPRINTK(KERN_DEBUG "ncp_read_super: root vol=%d\n", NCP_FINFO(root_inode)->volNumber);
- sb->s_root = d_alloc_root(root_inode, NULL);
+ server->root_dentry = sb->s_root = d_alloc_root(root_inode, NULL);
if (!sb->s_root)
goto out_no_root;
-
+ server->root_dentry->d_op = &ncp_dentry_operations;
unlock_super(sb);
return sb;
unlock_super(sb);
goto out;
+out_bad_file2:
+ fput(ncp_filp);
out_bad_file:
printk(KERN_ERR "ncp_read_super: invalid ncp socket\n");
goto out;
fput(server->ncp_filp);
kill_proc(server->m.wdog_pid, SIGTERM, 1);
+ if (server->priv.data)
+ ncp_kfree_s(server->priv.data, server->priv.len);
+ if (server->auth.object_name)
+ ncp_kfree_s(server->auth.object_name, server->auth.object_name_len);
ncp_kfree_s(server->packet, server->packet_size);
ncp_kfree_s(NCP_SBP(sb), sizeof(struct ncp_server));
return copy_to_user(buf, &tmp, bufsiz) ? -EFAULT : 0;
}
-static int ncp_notify_change(struct dentry *dentry, struct iattr *attr)
+int ncp_notify_change(struct dentry *dentry, struct iattr *attr)
{
struct inode *inode = dentry->d_inode;
int result = 0;
info_mask = 0;
memset(&info, 0, sizeof(info));
+#if 1
+ if ((attr->ia_valid & ATTR_MODE) != 0)
+ {
+ if (!S_ISREG(inode->i_mode))
+ {
+ return -EPERM;
+ }
+ else
+ {
+ umode_t newmode;
+
+ info_mask |= DM_ATTRIBUTES;
+ newmode=attr->ia_mode;
+ newmode &= NCP_SERVER(inode)->m.file_mode;
+
+ if (newmode & 0222) /* any write bit set */
+ {
+ info.attributes &= ~0x60001;
+ }
+ else
+ {
+ info.attributes |= 0x60001;
+ }
+ }
+ }
+#endif
+
if ((attr->ia_valid & ATTR_CTIME) != 0) {
info_mask |= (DM_CREATE_TIME | DM_CREATE_DATE);
ncp_date_unix2dos(attr->ia_ctime,
*
*/
+#include <linux/config.h>
+
#include <asm/uaccess.h>
#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/ncp.h>
#include <linux/ncp_fs.h>
+#include "ncplib_kernel.h"
+
+/* maximum limit for ncp_objectname_ioctl */
+#define NCP_OBJECT_NAME_MAX_LEN 4096
+/* maximum limit for ncp_privatedata_ioctl */
+#define NCP_PRIVATE_DATA_MAX_LEN 8192
int ncp_ioctl(struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg)
put_user(server->m.mounted_uid, (uid_t *) arg);
return 0;
+ case NCP_IOC_GETMOUNTUID_INT:
+ if ( (permission(inode, MAY_READ) != 0)
+ && (current->uid != server->m.mounted_uid))
+ {
+ return -EACCES;
+ }
+
+ {
+ unsigned int tmp=server->m.mounted_uid;
+ if (put_user(tmp, (unsigned long*) arg)) return -EFAULT;
+ }
+ return 0;
+
+#ifdef CONFIG_NCPFS_MOUNT_SUBDIR
+ case NCP_IOC_GETROOT:
+ {
+ struct ncp_setroot_ioctl sr;
+
+ if ( (permission(inode, MAY_READ) != 0)
+ && (current->uid != server->m.mounted_uid))
+ {
+ return -EACCES;
+ }
+ if (server->m.mounted_vol[0]) {
+ sr.volNumber = server->root.finfo.i.volNumber;
+ sr.dirEntNum = server->root.finfo.i.dirEntNum;
+ sr.namespace = server->name_space[sr.volNumber];
+ } else {
+ sr.volNumber = -1;
+ sr.namespace = 0;
+ sr.dirEntNum = 0;
+ }
+ if (copy_to_user((struct ncp_setroot_ioctl*)arg,
+ &sr,
+ sizeof(sr))) return -EFAULT;
+ return 0;
+ }
+ case NCP_IOC_SETROOT:
+ {
+ struct ncp_setroot_ioctl sr;
+ struct dentry* dentry;
+
+ if ( (permission(inode, MAY_WRITE) != 0)
+ && (current->uid != server->m.mounted_uid))
+ {
+ return -EACCES;
+ }
+ if (copy_from_user(&sr,
+ (struct ncp_setroot_ioctl*)arg,
+ sizeof(sr))) return -EFAULT;
+ if (sr.volNumber < 0) {
+ server->m.mounted_vol[0] = 0;
+ server->root.finfo.i.volNumber = NCP_NUMBER_OF_VOLUMES + 1;
+ server->root.finfo.i.dirEntNum = 0;
+ server->root.finfo.i.DosDirNum = 0;
+ } else if (sr.volNumber >= NCP_NUMBER_OF_VOLUMES) {
+ return -EINVAL;
+ } else {
+ if (ncp_mount_subdir(server, sr.volNumber, sr.namespace, sr.dirEntNum)) {
+ return -ENOENT;
+ }
+ }
+ dentry = server->root_dentry;
+ if (dentry) {
+ struct inode* inode = dentry->d_inode;
+
+ if (inode) {
+ NCP_FINFO(inode)->volNumber = server->root.finfo.i.volNumber;
+ NCP_FINFO(inode)->dirEntNum = server->root.finfo.i.dirEntNum;
+ NCP_FINFO(inode)->DosDirNum = server->root.finfo.i.DosDirNum;
+ } else {
+ DPRINTK(KERN_DEBUG "ncpfs: root_dentry->d_inode==NULL\n");
+ }
+ } else {
+ DPRINTK(KERN_DEBUG "ncpfs: root_dentry==NULL\n");
+ }
+ return 0;
+ }
+#endif /* CONFIG_NCPFS_MOUNT_SUBDIR */
+
+#ifdef CONFIG_NCPFS_PACKET_SIGNING
+ case NCP_IOC_SIGN_INIT:
+ if ((permission(inode, MAY_WRITE) != 0)
+ && (current->uid != server->m.mounted_uid))
+ {
+ return -EACCES;
+ }
+ if (server->sign_active)
+ {
+ return -EINVAL;
+ }
+ if (server->sign_wanted)
+ {
+ struct ncp_sign_init sign;
+
+ if (copy_from_user(&sign, (struct ncp_sign_init *) arg,
+ sizeof(sign))) return -EFAULT;
+ memcpy(server->sign_root,sign.sign_root,8);
+ memcpy(server->sign_last,sign.sign_last,16);
+ server->sign_active = 1;
+ }
+ /* ignore when signatures not wanted */
+ return 0;
+
+ case NCP_IOC_SIGN_WANTED:
+ if ( (permission(inode, MAY_READ) != 0)
+ && (current->uid != server->m.mounted_uid))
+ {
+ return -EACCES;
+ }
+
+ if (put_user(server->sign_wanted, (int*) arg))
+ return -EFAULT;
+ return 0;
+ case NCP_IOC_SET_SIGN_WANTED:
+ {
+ int newstate;
+
+ if ( (permission(inode, MAY_WRITE) != 0)
+ && (current->uid != server->m.mounted_uid))
+ {
+ return -EACCES;
+ }
+ /* get only low 8 bits... */
+ get_user_ret(newstate, (unsigned char*)arg, -EFAULT);
+ if (server->sign_active) {
+ /* cannot turn signatures OFF when active */
+ if (!newstate) return -EINVAL;
+ } else {
+ server->sign_wanted = newstate != 0;
+ }
+ return 0;
+ }
+
+#endif /* CONFIG_NCPFS_PACKET_SIGNING */
+
+#ifdef CONFIG_NCPFS_IOCTL_LOCKING
+ case NCP_IOC_LOCKUNLOCK:
+ if ( (permission(inode, MAY_WRITE) != 0)
+ && (current->uid != server->m.mounted_uid))
+ {
+ return -EACCES;
+ }
+ {
+ struct ncp_lock_ioctl rqdata;
+ int result;
+
+ if (copy_from_user(&rqdata, (struct ncp_lock_ioctl*)arg,
+ sizeof(rqdata))) return -EFAULT;
+ if (rqdata.origin != 0)
+ return -EINVAL;
+ /* check for cmd */
+ switch (rqdata.cmd) {
+ case NCP_LOCK_EX:
+ case NCP_LOCK_SH:
+ if (rqdata.timeout == 0)
+ rqdata.timeout = NCP_LOCK_DEFAULT_TIMEOUT;
+ else if (rqdata.timeout > NCP_LOCK_MAX_TIMEOUT)
+ rqdata.timeout = NCP_LOCK_MAX_TIMEOUT;
+ break;
+ case NCP_LOCK_LOG:
+				rqdata.timeout = NCP_LOCK_DEFAULT_TIMEOUT;	/* has no effect */
+				/* fall through */
+ case NCP_LOCK_CLEAR:
+ break;
+ default:
+ return -EINVAL;
+ }
+ if ((result = ncp_make_open(inode, O_RDWR)) != 0)
+ {
+ return result;
+ }
+ if (!ncp_conn_valid(server))
+ {
+ return -EIO;
+ }
+ if (!S_ISREG(inode->i_mode))
+ {
+ return -EISDIR;
+ }
+ if (!NCP_FINFO(inode)->opened)
+ {
+ return -EBADFD;
+ }
+ if (rqdata.cmd == NCP_LOCK_CLEAR)
+ {
+ result = ncp_ClearPhysicalRecord(NCP_SERVER(inode),
+ NCP_FINFO(inode)->file_handle,
+ rqdata.offset,
+ rqdata.length);
+ if (result > 0) result = 0; /* no such lock */
+ }
+ else
+ {
+ int lockcmd;
+
+ switch (rqdata.cmd)
+ {
+ case NCP_LOCK_EX: lockcmd=1; break;
+ case NCP_LOCK_SH: lockcmd=3; break;
+ default: lockcmd=0; break;
+ }
+ result = ncp_LogPhysicalRecord(NCP_SERVER(inode),
+ NCP_FINFO(inode)->file_handle,
+ lockcmd,
+ rqdata.offset,
+ rqdata.length,
+ rqdata.timeout);
+ if (result > 0) result = -EAGAIN;
+ }
+ return result;
+ }
+#endif /* CONFIG_NCPFS_IOCTL_LOCKING */
+
+#ifdef CONFIG_NCPFS_NDS_DOMAINS
+ case NCP_IOC_GETOBJECTNAME:
+ if ( (permission(inode, MAY_READ) != 0)
+ && (current->uid != server->m.mounted_uid)) {
+ return -EACCES;
+ }
+ {
+ struct ncp_objectname_ioctl user;
+ int outl;
+
+ if ((result = verify_area(VERIFY_WRITE,
+ (struct ncp_objectname_ioctl*)arg,
+ sizeof(user))) != 0) {
+ return result;
+ }
+ if (copy_from_user(&user,
+ (struct ncp_objectname_ioctl*)arg,
+ sizeof(user))) return -EFAULT;
+ user.auth_type = server->auth.auth_type;
+ outl = user.object_name_len;
+ user.object_name_len = server->auth.object_name_len;
+ if (outl > user.object_name_len)
+ outl = user.object_name_len;
+ if (outl) {
+ if (copy_to_user(user.object_name,
+ server->auth.object_name,
+ outl)) return -EFAULT;
+ }
+ if (copy_to_user((struct ncp_objectname_ioctl*)arg,
+ &user,
+ sizeof(user))) return -EFAULT;
+ return 0;
+ }
+ case NCP_IOC_SETOBJECTNAME:
+ if ( (permission(inode, MAY_WRITE) != 0)
+ && (current->uid != server->m.mounted_uid)) {
+ return -EACCES;
+ }
+ {
+ struct ncp_objectname_ioctl user;
+ void* newname;
+ void* oldname;
+ size_t oldnamelen;
+ void* oldprivate;
+ size_t oldprivatelen;
+
+ if (copy_from_user(&user,
+ (struct ncp_objectname_ioctl*)arg,
+ sizeof(user))) return -EFAULT;
+ if (user.object_name_len > NCP_OBJECT_NAME_MAX_LEN)
+ return -ENOMEM;
+ if (user.object_name_len) {
+ newname = ncp_kmalloc(user.object_name_len, GFP_USER);
+ if (!newname) return -ENOMEM;
+				if (copy_from_user(newname, user.object_name, user.object_name_len)) {
+ ncp_kfree_s(newname, user.object_name_len);
+ return -EFAULT;
+ }
+ } else {
+ newname = NULL;
+ }
+ /* enter critical section */
+			/* kfree may sleep, so swap the pointers here and free */
+			/* the old copies outside the critical section; it is */
+			/* also more SMP friendly (in future...) */
+ oldname = server->auth.object_name;
+ oldnamelen = server->auth.object_name_len;
+ oldprivate = server->priv.data;
+ oldprivatelen = server->priv.len;
+ server->auth.auth_type = user.auth_type;
+ server->auth.object_name_len = user.object_name_len;
+			server->auth.object_name = newname;
+ server->priv.len = 0;
+ server->priv.data = NULL;
+ /* leave critical section */
+ if (oldprivate) ncp_kfree_s(oldprivate, oldprivatelen);
+ if (oldname) ncp_kfree_s(oldname, oldnamelen);
+ return 0;
+ }
+ case NCP_IOC_GETPRIVATEDATA:
+ if ( (permission(inode, MAY_READ) != 0)
+ && (current->uid != server->m.mounted_uid)) {
+ return -EACCES;
+ }
+ {
+ struct ncp_privatedata_ioctl user;
+ int outl;
+
+ if ((result = verify_area(VERIFY_WRITE,
+ (struct ncp_privatedata_ioctl*)arg,
+ sizeof(user))) != 0) {
+ return result;
+ }
+ if (copy_from_user(&user,
+ (struct ncp_privatedata_ioctl*)arg,
+ sizeof(user))) return -EFAULT;
+ outl = user.len;
+ user.len = server->priv.len;
+ if (outl > user.len) outl = user.len;
+ if (outl) {
+ if (copy_to_user(user.data,
+ server->priv.data,
+ outl)) return -EFAULT;
+ }
+ if (copy_to_user((struct ncp_privatedata_ioctl*)arg,
+ &user,
+ sizeof(user))) return -EFAULT;
+ return 0;
+ }
+ case NCP_IOC_SETPRIVATEDATA:
+ if ( (permission(inode, MAY_WRITE) != 0)
+ && (current->uid != server->m.mounted_uid)) {
+ return -EACCES;
+ }
+ {
+ struct ncp_privatedata_ioctl user;
+ void* new;
+ void* old;
+ size_t oldlen;
+
+ if (copy_from_user(&user,
+ (struct ncp_privatedata_ioctl*)arg,
+ sizeof(user))) return -EFAULT;
+ if (user.len > NCP_PRIVATE_DATA_MAX_LEN)
+ return -ENOMEM;
+ if (user.len) {
+ new = ncp_kmalloc(user.len, GFP_USER);
+ if (!new) return -ENOMEM;
+ if (copy_from_user(new, user.data, user.len)) {
+ ncp_kfree_s(new, user.len);
+ return -EFAULT;
+ }
+ } else {
+ new = NULL;
+ }
+ /* enter critical section */
+ old = server->priv.data;
+ oldlen = server->priv.len;
+ server->priv.len = user.len;
+ server->priv.data = new;
+ /* leave critical section */
+ if (old) ncp_kfree_s(old, oldlen);
+ return 0;
+ }
+#endif /* CONFIG_NCPFS_NDS_DOMAINS */
default:
return -EINVAL;
}
-
- return -EINVAL;
}
*/
+#include <linux/config.h>
+
#include "ncplib_kernel.h"
static inline int min(int a, int b)
return 0;
}
+
+/* options:
+ * bit 0 ipx checksum
+ * bit 1 packet signing
+ */
+int
+ncp_negotiate_size_and_options(struct ncp_server *server,
+ int size, int options, int *ret_size, int *ret_options) {
+ int result;
+
+ ncp_init_request(server);
+ ncp_add_word(server, htons(size));
+ ncp_add_byte(server, options);
+
+ if ((result = ncp_request(server, 0x61)) != 0)
+ {
+ ncp_unlock_server(server);
+ return result;
+ }
+
+ *ret_size = min(ntohs(ncp_reply_word(server, 0)), size);
+ *ret_options = ncp_reply_byte(server, 4);
+
+ ncp_unlock_server(server);
+ return 0;
+}
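The option bits negotiated above (bit 0 = ipx checksum, bit 1 = packet signing) and the big-endian encoding of the size word can be sketched in a small userspace test. The `NCP_OPT_*` names below are hypothetical; only the bit positions and the `htons()`/`ntohs()` usage come from the code above.

```c
#include <assert.h>
#include <arpa/inet.h>

/* Hypothetical names for the option bits documented above. */
#define NCP_OPT_IPX_CHECKSUM	0x01	/* bit 0: ipx checksum */
#define NCP_OPT_PKT_SIGNING	0x02	/* bit 1: packet signing */

/* True when the server granted packet signing in *ret_options. */
static int ncp_opt_signing_granted(int ret_options)
{
	return (ret_options & NCP_OPT_PKT_SIGNING) != 0;
}

/* The buffer size goes on the wire via htons() and comes back via
 * ntohs(), so the round trip is the identity on any host. */
static unsigned short wire_roundtrip(unsigned short size)
{
	return ntohs(htons(size));
}
```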
+
int
ncp_get_volume_info_with_number(struct ncp_server *server, int n,
struct ncp_volume_info *target)
ncp_add_byte(server, 6); /* subfunction */
ncp_add_byte(server, server->name_space[volnum]);
ncp_add_byte(server, server->name_space[volnum]); /* N.B. twice ?? */
- ncp_add_word(server, htons(0xff00)); /* get all */
+ ncp_add_word(server, htons(0x0680)); /* get all */
ncp_add_dword(server, RIM_ALL);
ncp_add_handle_path(server, volnum, dirent, 1, path);
return result;
}
+static int
+ncp_obtain_DOS_dir_base(struct ncp_server *server,
+ __u8 volnum, __u32 dirent,
+ char *path, /* At most 1 component */
+ __u32 *DOS_dir_base)
+{
+ int result;
+
+ ncp_init_request(server);
+ ncp_add_byte(server, 6); /* subfunction */
+ ncp_add_byte(server, server->name_space[volnum]);
+ ncp_add_byte(server, server->name_space[volnum]);
+ ncp_add_word(server, htons(0x0680)); /* get all */
+ ncp_add_dword(server, RIM_DIRECTORY);
+ ncp_add_handle_path(server, volnum, dirent, 1, path);
+
+ if ((result = ncp_request(server, 87)) == 0)
+ {
+ if (DOS_dir_base) *DOS_dir_base=ncp_reply_dword(server, 0x34);
+ }
+ ncp_unlock_server(server);
+ return result;
+}
+
static inline int
-ncp_has_os2_namespace(struct ncp_server *server, __u8 volume)
+ncp_get_known_namespace(struct ncp_server *server, __u8 volume)
{
+#if defined(CONFIG_NCPFS_OS2_NS) || defined(CONFIG_NCPFS_NFS_NS)
int result;
__u8 *namespace;
__u16 no_namespaces;
if ((result = ncp_request(server, 87)) != 0) {
ncp_unlock_server(server);
- return 0; /* not result ?? */
+		return NW_NS_DOS; /* on error, fall back to the DOS namespace */
}
+
+ result = NW_NS_DOS;
no_namespaces = ncp_reply_word(server, 0);
namespace = ncp_reply_data(server, 2);
- result = 1;
while (no_namespaces > 0) {
DPRINTK(KERN_DEBUG "get_namespaces: found %d on %d\n", *namespace, volume);
- if (*namespace == 4) {
- DPRINTK(KERN_DEBUG "get_namespaces: found OS2\n");
- goto out;
+#ifdef CONFIG_NCPFS_NFS_NS
+ if ((*namespace == NW_NS_NFS) && !(server->m.flags&NCP_MOUNT_NO_NFS))
+ {
+ result = NW_NS_NFS;
+ break;
}
+#endif /* CONFIG_NCPFS_NFS_NS */
+#ifdef CONFIG_NCPFS_OS2_NS
+ if ((*namespace == NW_NS_OS2) && !(server->m.flags&NCP_MOUNT_NO_OS2))
+ {
+ result = NW_NS_OS2;
+ }
+#endif /* CONFIG_NCPFS_OS2_NS */
namespace += 1;
no_namespaces -= 1;
}
- result = 0;
-out:
ncp_unlock_server(server);
return result;
+#else /* neither OS2 nor NFS - only DOS */
+ return NW_NS_DOS;
+#endif /* defined(CONFIG_NCPFS_OS2_NS) || defined(CONFIG_NCPFS_NFS_NS) */
+}
+
+static int
+ncp_ObtainSpecificDirBase(struct ncp_server *server,
+ __u8 nsSrc, __u8 nsDst, __u8 vol_num, __u32 dir_base,
+ char *path, /* At most 1 component */
+ __u32 *dirEntNum, __u32 *DosDirNum)
+{
+ int result;
+
+ ncp_init_request(server);
+ ncp_add_byte(server, 6); /* subfunction */
+ ncp_add_byte(server, nsSrc);
+ ncp_add_byte(server, nsDst);
+	ncp_add_word(server, htons(0x0680)); /* get all */
+ ncp_add_dword(server, RIM_ALL);
+ ncp_add_handle_path(server, vol_num, dir_base, 1, path);
+
+ if ((result = ncp_request(server, 87)) != 0)
+ {
+ ncp_unlock_server(server);
+ return result;
+ }
+
+ if (dirEntNum)
+ *dirEntNum = ncp_reply_dword(server, 0x30);
+ if (DosDirNum)
+ *DosDirNum = ncp_reply_dword(server, 0x34);
+ ncp_unlock_server(server);
+ return 0;
+}
+
+int
+ncp_mount_subdir(struct ncp_server *server,
+ __u8 volNumber,
+ __u8 srcNS, __u32 dirEntNum)
+{
+ int dstNS;
+ int result;
+ __u32 newDirEnt;
+ __u32 newDosEnt;
+
+ dstNS = ncp_get_known_namespace(server, volNumber);
+ if ((result = ncp_ObtainSpecificDirBase(server, srcNS, dstNS, volNumber,
+ dirEntNum, NULL, &newDirEnt, &newDosEnt)) != 0)
+ {
+ return result;
+ }
+ server->name_space[volNumber] = dstNS;
+ server->root.finfo.i.volNumber = volNumber;
+ server->root.finfo.i.dirEntNum = newDirEnt;
+ server->root.finfo.i.DosDirNum = newDosEnt;
+ server->m.mounted_vol[1] = 0;
+ server->m.mounted_vol[0] = 'X';
+ return 0;
}
int
target->volNumber = volnum = ncp_reply_byte(server, 8);
ncp_unlock_server(server);
- server->name_space[volnum] = ncp_has_os2_namespace(server, volnum) ? 4 : 0;
+ server->name_space[volnum] = ncp_get_known_namespace(server, volnum);
DPRINTK(KERN_DEBUG "lookup_vol: namespace[%d] = %d\n",
volnum, server->name_space[volnum]);
return result;
}
-int ncp_del_file_or_subdir(struct ncp_server *server,
- struct inode *dir, char *name)
+static int
+ncp_DeleteNSEntry(struct ncp_server *server,
+ __u8 have_dir_base, __u8 volnum, __u32 dirent,
+ char* name, __u8 ns, int attr)
{
- __u8 volnum = NCP_FINFO(dir)->volNumber;
- __u32 dirent = NCP_FINFO(dir)->dirEntNum;
int result;
ncp_init_request(server);
ncp_add_byte(server, 8); /* subfunction */
- ncp_add_byte(server, server->name_space[volnum]);
+ ncp_add_byte(server, ns);
ncp_add_byte(server, 0); /* reserved */
- ncp_add_word(server, ntohs(0x0680)); /* search attribs: all */
- ncp_add_handle_path(server, volnum, dirent, 1, name);
+ ncp_add_word(server, attr); /* search attribs: all */
+ ncp_add_handle_path(server, volnum, dirent, have_dir_base, name);
result = ncp_request(server, 87);
ncp_unlock_server(server);
return result;
}
+int
+ncp_del_file_or_subdir2(struct ncp_server *server,
+ struct dentry *dentry)
+{
+ struct inode *inode = dentry->d_inode;
+ __u8 volnum;
+ __u32 dirent;
+
+ if (!inode) {
+#ifdef CONFIG_NCPFS_DEBUGDENTRY
+ printk(KERN_DEBUG "ncpfs: ncpdel2: dentry->d_inode == NULL\n");
+#endif
+ return 0xFF; /* Any error */
+ }
+ volnum = NCP_FINFO(inode)->volNumber;
+ dirent = NCP_FINFO(inode)->DosDirNum;
+ return ncp_DeleteNSEntry(server, 1, volnum, dirent, NULL, NW_NS_DOS, htons(0x0680));
+}
+
+int
+ncp_del_file_or_subdir(struct ncp_server *server,
+ struct inode *dir, char *name)
+{
+ __u8 volnum = NCP_FINFO(dir)->volNumber;
+ __u32 dirent = NCP_FINFO(dir)->dirEntNum;
+
+#ifdef CONFIG_NCPFS_NFS_NS
+ if (server->name_space[volnum]==NW_NS_NFS)
+ {
+ int result;
+
+ result=ncp_obtain_DOS_dir_base(server, volnum, dirent, name, &dirent);
+ if (result) return result;
+ return ncp_DeleteNSEntry(server, 1, volnum, dirent, NULL, NW_NS_DOS, htons(0x0680));
+ }
+ else
+#endif /* CONFIG_NCPFS_NFS_NS */
+ return ncp_DeleteNSEntry(server, 1, volnum, dirent, name, server->name_space[volnum], htons(0x0680));
+}
+
static inline void ConvertToNWfromDWORD(__u32 sfd, __u8 ret[6])
{
__u16 *dest = (__u16 *) ret;
ncp_add_byte(server, 3); /* subfunction */
ncp_add_byte(server, server->name_space[seq->volNumber]);
ncp_add_byte(server, 0); /* data stream (???) */
- ncp_add_word(server, 0xffff); /* Search attribs */
+ ncp_add_word(server, htons(0x0680)); /* Search attribs */
ncp_add_dword(server, RIM_ALL); /* return info mask */
ncp_add_mem(server, seq, 9);
- ncp_add_byte(server, 2); /* 2 byte pattern */
- ncp_add_byte(server, 0xff); /* following is a wildcard */
- ncp_add_byte(server, '*');
-
+#ifdef CONFIG_NCPFS_NFS_NS
+ if (server->name_space[seq->volNumber] == NW_NS_NFS) {
+ ncp_add_byte(server, 0); /* 0 byte pattern */
+ } else
+#endif
+ {
+ ncp_add_byte(server, 2); /* 2 byte pattern */
+ ncp_add_byte(server, 0xff); /* following is a wildcard */
+ ncp_add_byte(server, '*');
+ }
+
if ((result = ncp_request(server, 87)) != 0)
goto out;
memcpy(seq, ncp_reply_data(server, 0), sizeof(*seq));
return result;
}
-int ncp_ren_or_mov_file_or_subdir(struct ncp_server *server,
- struct inode *old_dir, char *old_name,
- struct inode *new_dir, char *new_name)
+int
+ncp_RenameNSEntry(struct ncp_server *server,
+ struct inode *old_dir, char *old_name, int old_type,
+ struct inode *new_dir, char *new_name)
{
int result = -EINVAL;
ncp_add_byte(server, 4); /* subfunction */
ncp_add_byte(server, server->name_space[NCP_FINFO(old_dir)->volNumber]);
ncp_add_byte(server, 1); /* rename flag */
- ncp_add_word(server, ntohs(0x0680)); /* search attributes */
+ ncp_add_word(server, old_type); /* search attributes */
/* source Handle Path */
ncp_add_byte(server, NCP_FINFO(old_dir)->volNumber);
return result;
}
+int ncp_ren_or_mov_file_or_subdir(struct ncp_server *server,
+ struct inode *old_dir, char *old_name,
+ struct inode *new_dir, char *new_name)
+{
+ int result;
+ int old_type = htons(0x0600);
+
+/* If somebody can do it atomic, call me... vandrove@vc.cvut.cz */
+ result = ncp_RenameNSEntry(server, old_dir, old_name, old_type,
+ new_dir, new_name);
+ if (result == 0xFF) /* File Not Found, try directory */
+ {
+ old_type = htons(0x1600);
+ result = ncp_RenameNSEntry(server, old_dir, old_name, old_type,
+ new_dir, new_name);
+ }
+ if (result != 0x92) return result; /* All except NO_FILES_RENAMED */
+ result = ncp_del_file_or_subdir(server, new_dir, new_name);
+ if (result != 0) return -EACCES;
+ result = ncp_RenameNSEntry(server, old_dir, old_name, old_type,
+ new_dir, new_name);
+ return result;
+}
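The fallback above — rename as a plain file (search attributes 0x0600), retry as a directory (0x1600) on completion code 0xFF, and on 0x92 delete the destination and rename once more — can be modeled with stub callbacks. The mock functions are illustrative only; the completion codes and attribute words come from the code above.

```c
#include <assert.h>

/* NetWare completion codes used by the fallback above. */
#define NW_ERR_NOT_FOUND	0xFF
#define NW_ERR_NO_FILES_RENAMED	0x92

typedef int (*rename_fn)(int type, void *ctx);
typedef int (*delete_fn)(void *ctx);

/* Same control flow as ncp_ren_or_mov_file_or_subdir(), with the
 * NCP requests abstracted behind callbacks. */
static int ren_or_mov(rename_fn ren, delete_fn del, void *ctx)
{
	int type = 0x0600;			/* search attributes: file */
	int result = ren(type, ctx);

	if (result == NW_ERR_NOT_FOUND) {	/* not a file: try directory */
		type = 0x1600;
		result = ren(type, ctx);
	}
	if (result != NW_ERR_NO_FILES_RENAMED)
		return result;
	if (del(ctx) != 0)			/* destination exists: drop it */
		return -1;
	return ren(type, ctx);			/* and rename once more */
}

/* Mocks: the file attempt fails, the directory attempt hits an
 * existing target, the retry after deleting the target succeeds. */
static int ren_calls;
static int mock_ren(int type, void *ctx)
{
	(void)type; (void)ctx;
	ren_calls++;
	if (ren_calls == 1) return NW_ERR_NOT_FOUND;
	if (ren_calls == 2) return NW_ERR_NO_FILES_RENAMED;
	return 0;
}
static int mock_del(void *ctx) { (void)ctx; return 0; }
```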
+
/* We have to transfer to/from user space */
int
ncp_unlock_server(server);
return result;
}
+
+#ifdef CONFIG_NCPFS_IOCTL_LOCKING
+int
+ncp_LogPhysicalRecord(struct ncp_server *server, const char *file_id,
+ __u8 locktype, __u32 offset, __u32 length, __u16 timeout)
+{
+ int result;
+
+ ncp_init_request(server);
+ ncp_add_byte(server, locktype);
+ ncp_add_mem(server, file_id, 6);
+ ncp_add_dword(server, htonl(offset));
+ ncp_add_dword(server, htonl(length));
+ ncp_add_word(server, htons(timeout));
+
+ if ((result = ncp_request(server, 0x1A)) != 0)
+ {
+ ncp_unlock_server(server);
+ return result;
+ }
+ ncp_unlock_server(server);
+ return 0;
+}
+
+int
+ncp_ClearPhysicalRecord(struct ncp_server *server, const char *file_id,
+ __u32 offset, __u32 length)
+{
+ int result;
+
+ ncp_init_request(server);
+ ncp_add_byte(server, 0); /* who knows... lanalyzer says that */
+ ncp_add_mem(server, file_id, 6);
+ ncp_add_dword(server, htonl(offset));
+ ncp_add_dword(server, htonl(length));
+
+ if ((result = ncp_request(server, 0x1E)) != 0)
+ {
+ ncp_unlock_server(server);
+ return result;
+ }
+ ncp_unlock_server(server);
+ return 0;
+}
+#endif /* CONFIG_NCPFS_IOCTL_LOCKING */
+
#ifndef _NCPLIB_H
#define _NCPLIB_H
+#include <linux/config.h>
+
#include <linux/fs.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/ncp_fs_sb.h>
int ncp_negotiate_buffersize(struct ncp_server *, int, int *);
+int ncp_negotiate_size_and_options(struct ncp_server *server, int size,
+ int options, int *ret_size, int *ret_options);
int ncp_get_volume_info_with_number(struct ncp_server *, int,
struct ncp_volume_info *);
int ncp_close_file(struct ncp_server *, const char *);
int ncp_modify_file_or_subdir_dos_info(struct ncp_server *, struct inode *,
__u32, struct nw_modify_dos_info *info);
+int ncp_del_file_or_subdir2(struct ncp_server *, struct dentry*);
int ncp_del_file_or_subdir(struct ncp_server *, struct inode *, char *);
int ncp_open_create_file_or_subdir(struct ncp_server *, struct inode *, char *,
int, __u32, int, struct nw_file_info *);
struct inode *, char *, struct inode *, char *);
+#ifdef CONFIG_NCPFS_IOCTL_LOCKING
+int
+ncp_LogPhysicalRecord(struct ncp_server *server,
+		      const char *file_id, __u8 locktype,
+		      __u32 offset, __u32 length, __u16 timeout);
+
+int
+ncp_ClearPhysicalRecord(struct ncp_server *server,
+ const char *file_id,
+ __u32 offset, __u32 length);
+#endif /* CONFIG_NCPFS_IOCTL_LOCKING */
+
+#ifdef CONFIG_NCPFS_MOUNT_SUBDIR
+int
+ncp_mount_subdir(struct ncp_server* server, __u8 volNumber,
+ __u8 srcNS, __u32 srcDirEntNum);
+#endif /* CONFIG_NCPFS_MOUNT_SUBDIR */
#endif /* _NCPLIB_H */
*
*/
+#include <linux/config.h>
+
#include <linux/sched.h>
#include <linux/errno.h>
#include <linux/socket.h>
#include <net/sock.h>
#include <linux/ipx.h>
#include <linux/poll.h>
+#include <linux/file.h>
#include <linux/ncp.h>
#include <linux/ncp_fs.h>
#include <linux/ncp_fs_sb.h>
+#ifdef CONFIG_NCPFS_PACKET_SIGNING
+#include "ncpsign_kernel.h"
+#endif
+
static int _recv(struct socket *sock, unsigned char *ubuf, int size,
unsigned flags)
{
current->timeout = jiffies + timeout;
schedule();
remove_wait_queue(entry.wait_address, &entry.wait);
+ fput(file);
current->state = TASK_RUNNING;
if (signal_pending(current)) {
current->timeout = 0;
continue;
} else
current->timeout = 0;
- } else if (wait_table.nr)
+ } else if (wait_table.nr) {
remove_wait_queue(entry.wait_address, &entry.wait);
+ fput(file);
+ }
current->state = TASK_RUNNING;
/* Get the header from the next packet using a peek, so keep it
if (!ncp_conn_valid(server)) {
return -EIO;
}
+#ifdef CONFIG_NCPFS_PACKET_SIGNING
+ if (server->sign_active)
+ {
+ sign_packet(server, &size);
+ }
+#endif /* CONFIG_NCPFS_PACKET_SIGNING */
result = do_ncp_rpc_call(server, size);
DDPRINTK(KERN_DEBUG "do_ncp_rpc_call returned %d\n", result);
return buf+1;
}
+/*
+ * The task state array is a strange "bitmap" of
+ * reasons to sleep. Thus "running" is zero, and
+ * you can test for combinations of others with
+ * simple bit tests.
+ */
static const char *task_state_array[] = {
- ". Huh?",
- "R (running)",
- "S (sleeping)",
- "D (disk sleep)",
- "Z (zombie)",
- "T (stopped)",
- "W (paging)"
+ "R (running)", /* 0 */
+ "S (sleeping)", /* 1 */
+ "D (disk sleep)", /* 2 */
+ "Z (zombie)", /* 4 */
+ "T (stopped)", /* 8 */
+ "W (paging)" /* 16 */
};
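As the new comment notes, the task state is a bitmap with "running" as zero, so the lowest set bit selects the array slot. A userspace sketch of that lookup (this is not the kernel's actual get_task_state(), only an illustration of why the slot comments read 0, 1, 2, 4, 8, 16):

```c
#include <assert.h>
#include <string.h>

/* Map a task-state bitmap to its /proc letter: slot 0 is "running"
 * (state == 0); the other states are single bits, so the lowest set
 * bit selects slot (bit position + 1). */
static const char *state_name(unsigned long state)
{
	static const char *names[] = {
		"R (running)",		/* 0 */
		"S (sleeping)",		/* 1 */
		"D (disk sleep)",	/* 2 */
		"Z (zombie)",		/* 4 */
		"T (stopped)",		/* 8 */
		"W (paging)"		/* 16 */
	};
	unsigned int bit;

	for (bit = 0; bit < 5; bit++)
		if (state & (1UL << bit))
			return names[bit + 1];
	return names[0];
}
```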
static inline const char * get_task_state(struct task_struct *tsk)
}
int
-smb_readpage(struct dentry *dentry, struct page *page)
+smb_readpage(struct file *file, struct page *page)
{
+ struct dentry *dentry = file->f_dentry;
int error;
pr_debug("SMB: smb_readpage %08lx\n", page_address(page));
* (for now), and we currently do this synchronously only.
*/
static int
-smb_writepage(struct dentry *dentry, struct page *page)
+smb_writepage(struct file *file, struct page *page)
{
+ struct dentry *dentry = file->f_dentry;
int result;
#ifdef SMBFS_PARANOIA
}
static int
-smb_updatepage(struct dentry *dentry, struct page *page, const char *buffer,
+smb_updatepage(struct file *file, struct page *page, const char *buffer,
unsigned long offset, unsigned int count, int sync)
{
+ struct dentry *dentry = file->f_dentry;
unsigned long page_addr = page_address(page);
int result;
* by Riccardo Facchetti
*/
-#include <linux/fs.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/malloc.h>
+#include <linux/fs.h>
+#include <linux/file.h>
#include <linux/stat.h>
#include <linux/fcntl.h>
#include <linux/dcache.h>
#ifdef SMBFS_DEBUG_VERBOSE
printk("smb_newconn: fd=%d, pid=%d\n", opt->fd, current->pid);
#endif
- error = -EBADF;
- if (opt->fd < 0 || opt->fd >= NR_OPEN)
- goto out;
- if (!(filp = current->files->fd[opt->fd]))
- goto out;
- if (!smb_valid_socket(filp->f_dentry->d_inode))
- goto out;
-
- error = -EACCES;
- if ((current->uid != server->mnt->mounted_uid) && !suser())
- goto out;
-
/*
* Make sure we don't already have a pid ...
*/
error = -EINVAL;
if (server->conn_pid)
- {
- printk("SMBFS: invalid ioctl call\n");
goto out;
- }
- server->conn_pid = current->pid;
- filp->f_count += 1;
+ error = -EACCES;
+ if (current->uid != server->mnt->mounted_uid && !suser())
+ goto out;
+
+ error = -EBADF;
+ filp = fget(opt->fd);
+ if (!filp)
+ goto out;
+ if (!smb_valid_socket(filp->f_dentry->d_inode))
+ goto out_putf;
+
server->sock_file = filp;
+ server->conn_pid = current->pid;
smb_catch_keepalive(server);
server->opt = *opt;
server->generation += 1;
out:
wake_up_interruptible(&server->wait);
return error;
+
+out_putf:
+ fput(filp);
+ goto out;
}
/* smb_setup_header: We completely set up the packet. You only have to
#define __NR_utimes 363
#define __NR_getrusage 364
#define __NR_wait4 365
-
+#define __NR_adjtimex 366
#if defined(__LIBRARY__) && defined(__GNUC__)
#include <sys/types.h>
#endif
-#ifdef __linux__
+#ifdef DJGPP
+#ifdef KERNEL
+typedef unsigned long u_long;
+typedef unsigned int u_int;
+typedef unsigned short u_short;
+typedef u_long ino_t;
+typedef u_long dev_t;
+typedef void * caddr_t;
+typedef u_long u_quad_t;
+
+#define inline
+
+struct timespec {
+ long ts_sec;
+ long ts_nsec;
+};
+#else /* DJGPP but not KERNEL */
+#include <sys/types.h>
+#include <sys/time.h>
+typedef u_long u_quad_t;
+#endif /* !KERNEL */
+#endif /* DJGPP */
+
+
+#if defined(__linux__) || defined(__CYGWIN32__)
#define cdev_t u_quad_t
#if !defined(_UQUAD_T_) && (!defined(__GLIBC__) || __GLIBC__ < 2)
#define _UQUAD_T_ 1
typedef unsigned long long u_quad_t;
-#endif
+#endif
#else
#define cdev_t dev_t
#endif
+#ifdef __CYGWIN32__
+typedef unsigned char u_int8_t;
+struct timespec {
+ time_t tv_sec; /* seconds */
+ long tv_nsec; /* nanoseconds */
+};
+#endif
+
/*
* Cfs constants
/*
* File types
*/
-#define DT_UNKNOWN 0
-#define DT_FIFO 1
-#define DT_CHR 2
-#define DT_DIR 4
-#define DT_BLK 6
-#define DT_REG 8
-#define DT_LNK 10
-#define DT_SOCK 12
-#define DT_WHT 14
+#define CDT_UNKNOWN 0
+#define CDT_FIFO 1
+#define CDT_CHR 2
+#define CDT_DIR 4
+#define CDT_BLK 6
+#define CDT_REG 8
+#define CDT_LNK 10
+#define CDT_SOCK 12
+#define CDT_WHT 14
/*
* Convert between stat structure types and directory types.
*/
-#define IFTODT(mode) (((mode) & 0170000) >> 12)
-#define DTTOIF(dirtype) ((dirtype) << 12)
+#define IFTOCDT(mode) (((mode) & 0170000) >> 12)
+#define CDTTOIF(dirtype) ((dirtype) << 12)
#endif
#define _CODACRED_T_
struct coda_cred {
vuid_t cr_uid, cr_euid, cr_suid, cr_fsuid; /* Real, efftve, set, fs uid*/
- vgid_t cr_gid, cr_egid, cr_sgid, cr_fsgid; /* same for groups */
+#if defined(__NetBSD__) || defined(__FreeBSD__)
+ vgid_t cr_groupid, cr_egid, cr_sgid, cr_fsgid; /* same for groups */
+#else
+ vgid_t cr_gid, cr_egid, cr_sgid, cr_fsgid; /* same for groups */
+#endif
};
#endif
#define CFS_ZAPDIR ((u_long) 28)
#define CFS_ZAPVNODE ((u_long) 29)
#define CFS_PURGEFID ((u_long) 30)
-#define CFS_NCALLS 31
+#define CFS_OPEN_BY_PATH ((u_long) 31)
+#define CFS_NCALLS 32
#define DOWNCALL(opcode) (opcode >= CFS_REPLACE && opcode <= CFS_PURGEFID)
ViceFid OldFid;
};
+/* cfs_open_by_path: */
+struct cfs_open_by_path_in {
+ struct cfs_in_hdr ih;
+ ViceFid VFid;
+ int flags;
+};
+
+struct cfs_open_by_path_out {
+ struct cfs_out_hdr oh;
+ int path;
+};
+
/*
* Occasionally, don't cache the fid returned by CFS_LOOKUP. For instance, if
* the fid is inconsistent. This case is handled by setting the top bit of the
struct cfs_inactive_in cfs_inactive;
struct cfs_vget_in cfs_vget;
struct cfs_rdwr_in cfs_rdwr;
+ struct cfs_open_by_path_in cfs_open_by_path;
};
union outputArgs {
struct cfs_purgefid_out cfs_purgefid;
struct cfs_rdwr_out cfs_rdwr;
struct cfs_replace_out cfs_replace;
+ struct cfs_open_by_path_out cfs_open_by_path;
};
union cfs_downcalls {
};
void coda_ccinsert(struct coda_cache *el, struct super_block *sb);
-void coda_cninsert(struct coda_cache *el, struct cnode *cnp);
+void coda_cninsert(struct coda_cache *el, struct coda_inode_info *cnp);
void coda_ccremove(struct coda_cache *el);
void coda_cnremove(struct coda_cache *el);
void coda_cache_create(struct inode *inode, int mask);
struct coda_cache *coda_cache_find(struct inode *inode);
void coda_cache_enter(struct inode *inode, int mask);
-void coda_cache_clear_cnp(struct cnode *cnp);
+void coda_cache_clear_cnp(struct coda_inode_info *cnp);
void coda_cache_clear_all(struct super_block *sb);
void coda_cache_clear_cred(struct super_block *sb, struct coda_cred *cred);
int coda_cache_check(struct inode *inode, int mask);
+++ /dev/null
-/*
- * Cnode definitions for Coda.
- * Original version: (C) 1996 Peter Braam
- * Rewritten for Linux 2.1: (C) 1997 Carnegie Mellon University
- *
- * Carnegie Mellon encourages users of this code to contribute improvements
- * to the Coda project. Contact Peter Braam <coda@cs.cmu.edu>.
- */
-
-
-#ifndef _CNODE_H_
-#define _CNODE_H_
-#include <linux/coda.h>
-
-
-#define CODA_CNODE_MAGIC 0x47114711
-
-/* defintion of cnode, which combines ViceFid with inode information */
-struct cnode {
- struct inode *c_vnode; /* inode associated with cnode */
- ViceFid c_fid; /* Coda identifier */
- u_short c_flags; /* flags (see below) */
- int c_magic; /* to verify the data structure */
- u_short c_ocount; /* count of openers */
- u_short c_owrite; /* count of open for write */
- u_short c_mmcount; /* count of mmappers */
- struct inode *c_ovp; /* open vnode pointer */
- struct list_head c_cnhead; /* head of cache entries */
- struct list_head c_volrootlist; /* list of volroot cnoddes */
-};
-
-/* flags */
-#define C_VATTR 0x1 /* Validity of vattr in the cnode */
-#define C_SYMLINK 0x2 /* Validity of symlink pointer in the cnode */
-#define C_DYING 0x4 /* Set for outstanding cnodes from venus (which died) */
-#define C_ZAPFID 0x8
-#define C_ZAPDIR 0x10
-
-void coda_cnode_free(struct cnode *);
-int coda_cnode_make(struct inode **, struct ViceFid *, struct super_block *);
-int coda_cnode_makectl(struct inode **inode, struct super_block *sb);
-struct inode *coda_fid_to_inode(ViceFid *fid, struct super_block *sb);
-
-/* inode to cnode */
-static inline struct cnode *ITOC(struct inode *inode)
-{
- return ((struct cnode *)inode->u.generic_ip);
-}
-
-/* cnode to inode */
-static inline struct inode *CTOI(struct cnode *cnode)
-{
- return (cnode->c_vnode);
-}
-
-#endif
-
--- /dev/null
+/*
+ * coda_fs_i.h
+ *
+ * Copyright (C) 1998 Carnegie Mellon University
+ *
+ */
+
+#ifndef _LINUX_CODA_FS_I
+#define _LINUX_CODA_FS_I
+
+#ifdef __KERNEL__
+#include <linux/types.h>
+#include <linux/list.h>
+#include <linux/coda.h>
+
+
+
+#define CODA_CNODE_MAGIC 0x47114711
+/*
+ * smb fs inode data (in memory only)
+ */
+struct coda_inode_info {
+ struct ViceFid c_fid; /* Coda identifier */
+ u_short c_flags; /* flags (see below) */
+ u_short c_ocount; /* count of openers */
+ u_short c_owrite; /* count of open for write */
+ u_short c_mmcount; /* count of mmappers */
+ struct inode *c_ovp; /* open inode pointer */
+ struct list_head c_cnhead; /* head of cache entries */
+        struct list_head    c_volrootlist;  /* list of volroot cnodes */
+ struct inode *c_vnode; /* inode associated with cnode */
+ int c_magic; /* to verify the data structure */
+};
+
+/* flags */
+#define C_VATTR 0x1 /* Validity of vattr in the cnode */
+#define C_SYMLINK 0x2 /* Validity of symlink pointer in the cnode */
+#define C_DYING 0x4 /* Set for outstanding cnodes from venus (which died) */
+#define C_ZAPFID 0x8
+#define C_ZAPDIR 0x10
+#define C_INITED 0x20
+
+int coda_cnode_make(struct inode **, struct ViceFid *, struct super_block *);
+int coda_cnode_makectl(struct inode **inode, struct super_block *sb);
+struct inode *coda_fid_to_inode(ViceFid *fid, struct super_block *sb);
+
+/* inode to cnode */
+#define ITOC(inode) ((struct coda_inode_info *)&((inode)->u.coda_i))
+
+
+#endif
+#endif
extern int coda_access_cache;
/* this file: heloers */
-char *coda_f2s(ViceFid *f, char *s);
+char *coda_f2s(ViceFid *f);
int coda_isroot(struct inode *i);
int coda_fid_is_volroot(struct ViceFid *);
int coda_iscontrol(const char *name, size_t length);
#define EXIT \
if(coda_print_entry) printk("Process %d leaving %s\n",current->pid,__FUNCTION__)
-
-
-#define CHECK_CNODE(c) \
-do { \
- if ( coda_debug ) {\
- struct cnode *cnode = (c); \
- if (!cnode) \
- printk ("%s(%d): cnode is null\n", __FUNCTION__, __LINE__); \
- if (cnode->c_magic != CODA_CNODE_MAGIC) \
- printk ("%s(%d): cnode magic wrong\n", __FUNCTION__, __LINE__); \
- if (!cnode->c_vnode) \
- printk ("%s(%d): cnode has null inode\n", __FUNCTION__, __LINE__); \
- if ( (struct cnode *)cnode->c_vnode->u.generic_ip != cnode ) \
- printk("AAooh, %s(%d) cnode doesn't link right!\n", __FUNCTION__,__LINE__);\
-}} while (0);
-
+#define CHECK_CNODE(c) do { } while (0);
#define CODA_ALLOC(ptr, cast, size) \
do { \
extern struct device * init_etherdev(struct device *, int);
#ifdef CONFIG_IP_ROUTER
-static void inline eth_copy_and_sum (struct sk_buff *dest, unsigned char *src, int len, int base)
+static __inline__ void eth_copy_and_sum (struct sk_buff *dest, unsigned char *src, int len, int base)
{
memcpy (dest->data, src, len);
}
/* And dynamically-tunable limits and defaults: */
extern int max_inodes;
extern int max_files, nr_files, nr_free_files;
-#define NR_INODE 4096 /* this should be bigger than NR_FILE */
-#define NR_FILE 1024 /* this can well be larger on a larger system */
+#define NR_INODE 4096 /* This should no longer be bigger than NR_FILE */
+#define NR_FILE 4096 /* this can well be larger on a larger system */
#define NR_RESERVED_FILES 10 /* reserved for root */
#define MAY_EXEC 1
#include <linux/sysv_fs_i.h>
#include <linux/affs_fs_i.h>
#include <linux/ufs_fs_i.h>
+#include <linux/coda_fs_i.h>
#include <linux/romfs_fs_i.h>
#include <linux/smb_fs_i.h>
#include <linux/hfs_fs_i.h>
struct affs_inode_info affs_i;
struct ufs_inode_info ufs_i;
struct romfs_inode_info romfs_i;
+ struct coda_inode_info coda_i;
struct smb_inode_info smbfs_i;
struct hfs_inode_info hfs_i;
struct adfs_inode_info adfs_i;
#define SIOCGKEEPALIVE (SIOCDEVPRIVATE+1) /* Get keepalive timeout */
#define SIOCSOUTFILL (SIOCDEVPRIVATE+2) /* Set outfill timeout */
#define SIOCGOUTFILL (SIOCDEVPRIVATE+3) /* Get outfill timeout */
+#define SIOCSLEASE (SIOCDEVPRIVATE+4) /* Set "leased" line type */
+#define SIOCGLEASE (SIOCDEVPRIVATE+5) /* Get line type */
#endif
#define RTCF_DEAD RTNH_F_DEAD
#define RTCF_ONLINK RTNH_F_ONLINK
+/* Obsolete flag. About to be deleted */
#define RTCF_NOPMTUDISC RTM_F_NOPMTUDISC
#define RTCF_NOTIFY 0x00010000
#ifndef _LINUX_INETDEVICE_H
#define _LINUX_INETDEVICE_H
-/* IPv4 specific flags. They are initialized from global sysctl variables,
- when IPv4 is initialized.
- */
+#ifdef __KERNEL__
-#define IFF_IP_FORWARD 1
-#define IFF_IP_PROXYARP 2
-#define IFF_IP_RXREDIRECTS 4
-#define IFF_IP_TXREDIRECTS 8
-#define IFF_IP_SHAREDMEDIA 0x10
-#define IFF_IP_MFORWARD 0x20
-#define IFF_IP_RPFILTER 0x40
+struct ipv4_devconf
+{
+ int accept_redirects;
+ int send_redirects;
+ int secure_redirects;
+ int shared_media;
+ int accept_source_route;
+ int rp_filter;
+ int proxy_arp;
+ int bootp_relay;
+ int log_martians;
+ int forwarding;
+ int mc_forwarding;
+ void *sysctl;
+};
-#ifdef __KERNEL__
+extern struct ipv4_devconf ipv4_devconf;
struct in_device
{
unsigned long mr_v1_seen;
unsigned flags;
struct neigh_parms *arp_parms;
+ struct ipv4_devconf cnf;
};
+#define IN_DEV_FORWARD(in_dev) ((in_dev)->cnf.forwarding)
+#define IN_DEV_MFORWARD(in_dev) (ipv4_devconf.mc_forwarding && (in_dev)->cnf.mc_forwarding)
+#define IN_DEV_RPFILTER(in_dev) (ipv4_devconf.rp_filter && (in_dev)->cnf.rp_filter)
+#define IN_DEV_SOURCE_ROUTE(in_dev) (ipv4_devconf.accept_source_route && (in_dev)->cnf.accept_source_route)
+#define IN_DEV_BOOTP_RELAY(in_dev) (ipv4_devconf.bootp_relay && (in_dev)->cnf.bootp_relay)
+
+#define IN_DEV_LOG_MARTIANS(in_dev) (ipv4_devconf.log_martians || (in_dev)->cnf.log_martians)
+#define IN_DEV_PROXY_ARP(in_dev) (ipv4_devconf.proxy_arp || (in_dev)->cnf.proxy_arp)
+#define IN_DEV_SHARED_MEDIA(in_dev) (ipv4_devconf.shared_media || (in_dev)->cnf.shared_media)
+#define IN_DEV_TX_REDIRECTS(in_dev) (ipv4_devconf.send_redirects || (in_dev)->cnf.send_redirects)
+#define IN_DEV_SEC_REDIRECTS(in_dev) (ipv4_devconf.secure_redirects || (in_dev)->cnf.secure_redirects)
-#define IN_DEV_RPFILTER(in_dev) (ipv4_config.rfc1812_filter && ((in_dev)->flags&IFF_IP_RPFILTER))
-#define IN_DEV_MFORWARD(in_dev) (ipv4_config.multicast_route && ((in_dev)->flags&IFF_IP_MFORWARD))
-#define IN_DEV_PROXY_ARP(in_dev) (ipv4_config.proxy_arp || (in_dev)->flags&IFF_IP_PROXYARP)
-#define IN_DEV_FORWARD(in_dev) (IS_ROUTER || ((in_dev)->flags&IFF_IP_FORWARD))
-#define IN_DEV_SHARED_MEDIA(in_dev) (ipv4_config.rfc1620_redirects || (in_dev)->flags&IFF_IP_SHAREDMEDIA)
-#define IN_DEV_RX_REDIRECTS(in_dev) (ipv4_config.accept_redirects || (in_dev)->flags&IFF_IP_RXREDIRECTS)
-#define IN_DEV_TX_REDIRECTS(in_dev) (/*ipv4_config.send_redirects ||*/ (in_dev)->flags&IFF_IP_TXREDIRECTS)
+#define IN_DEV_RX_REDIRECTS(in_dev) \
+ ((IN_DEV_FORWARD(in_dev) && \
+ (ipv4_devconf.accept_redirects && (in_dev)->cnf.accept_redirects)) \
+ || (!IN_DEV_FORWARD(in_dev) && \
+ (ipv4_devconf.accept_redirects || (in_dev)->cnf.accept_redirects)))
struct in_ifaddr
{
extern struct in_ifaddr *inet_ifa_byprefix(struct in_device *in_dev, u32 prefix, u32 mask);
extern int inet_add_bootp_addr(struct device *dev);
extern void inet_del_bootp_addr(struct device *dev);
+extern void inet_forward_change(void);
extern __inline__ int inet_ifa_match(u32 addr, struct in_ifaddr *ifa)
{
#ifndef _LINUX_IPV6_ROUTE_H
#define _LINUX_IPV6_ROUTE_H
+enum
+{
+ RTA_IPV6_UNSPEC,
+ RTA_IPV6_HOPLIMIT,
+};
+
+#define RTA_IPV6_MAX RTA_IPV6_HOPLIMIT
+
#define RTF_DEFAULT 0x00010000 /* default - learned via ND */
#define RTF_ALLONLINK 0x00020000 /* fallback, no routers on link */
struct ncp_fs_info {
int version;
struct sockaddr_ipx addr;
- uid_t mounted_uid;
+ __kernel_uid_t mounted_uid;
int connection; /* Connection number the server assigned us */
int buffer_size; /* The negotiated buffer size, to be
used for read/write requests! */
__u32 directory_id;
};
+struct ncp_sign_init
+{
+ char sign_root[8];
+ char sign_last[16];
+};
+
+struct ncp_lock_ioctl
+{
+#define NCP_LOCK_LOG 0
+#define NCP_LOCK_SH 1
+#define NCP_LOCK_EX 2
+#define NCP_LOCK_CLEAR 256
+ int cmd;
+ int origin;
+ unsigned int offset;
+ unsigned int length;
+#define NCP_LOCK_DEFAULT_TIMEOUT 18
+#define NCP_LOCK_MAX_TIMEOUT 180
+ int timeout;
+};
+
+struct ncp_setroot_ioctl
+{
+ int volNumber;
+ int namespace;
+ __u32 dirEntNum;
+};
+
+struct ncp_objectname_ioctl
+{
+#define NCP_AUTH_NONE 0x00
+#define NCP_AUTH_BIND 0x31
+#define NCP_AUTH_NDS 0x32
+ int auth_type;
+ size_t object_name_len;
+	void*	object_name;	/* userspace data, in most cases the user name */
+};
+
+struct ncp_privatedata_ioctl
+{
+ size_t len;
+ void* data; /* ~1000 for NDS */
+};
+
#define NCP_IOC_NCPREQUEST _IOR('n', 1, struct ncp_ioctl_request)
#define NCP_IOC_GETMOUNTUID _IOW('n', 2, uid_t)
+#define NCP_IOC_GETMOUNTUID_INT _IOW('n', 2, unsigned int)
#define NCP_IOC_CONN_LOGGED_IN _IO('n', 3)
#define NCP_GET_FS_INFO_VERSION (1)
#define NCP_IOC_GET_FS_INFO _IOWR('n', 4, struct ncp_fs_info)
+#define NCP_IOC_SIGN_INIT _IOR('n', 5, struct ncp_sign_init)
+#define NCP_IOC_SIGN_WANTED _IOR('n', 6, int)
+#define NCP_IOC_SET_SIGN_WANTED _IOW('n', 6, int)
+
+#define NCP_IOC_LOCKUNLOCK _IOR('n', 7, struct ncp_lock_ioctl)
+
+#define NCP_IOC_GETROOT _IOW('n', 8, struct ncp_setroot_ioctl)
+#define NCP_IOC_SETROOT _IOR('n', 8, struct ncp_setroot_ioctl)
+
+#define NCP_IOC_GETOBJECTNAME _IOWR('n', 9, struct ncp_objectname_ioctl)
+#define NCP_IOC_SETOBJECTNAME _IOR('n', 9, struct ncp_objectname_ioctl)
+#define NCP_IOC_GETPRIVATEDATA _IOWR('n', 10, struct ncp_privatedata_ioctl)
+#define NCP_IOC_SETPRIVATEDATA _IOR('n', 10, struct ncp_privatedata_ioctl)
+
/*
* The packet size to allocate. One page should be enough.
*/
#ifdef __KERNEL__
+#include <linux/config.h>
+
#undef NCPFS_PARANOIA
+#ifndef DEBUG_NCP
#define DEBUG_NCP 0
+#endif
#if DEBUG_NCP > 0
#define DPRINTK(format, args...) printk(format , ## args)
#else
#endif /* DEBUG_NCP_MALLOC */
/* linux/fs/ncpfs/inode.c */
+int ncp_notify_change(struct dentry *, struct iattr *attr);
struct super_block *ncp_read_super(struct super_block *, void *, int);
struct inode *ncp_iget(struct super_block *, struct ncpfs_inode_info *);
void ncp_update_inode(struct inode *, struct nw_file_info *);
+void ncp_update_inode2(struct inode *, struct nw_file_info *);
extern int init_ncp_fs(void);
/* linux/fs/ncpfs/dir.c */
static inline int ncp_preserve_case(struct inode *i)
{
- return (ncp_namespace(i) == NW_NS_OS2);
+#if defined(CONFIG_NCPFS_NFS_NS) || defined(CONFIG_NCPFS_OS2_NS)
+ int ns = ncp_namespace(i);
+#endif
+ return
+#ifdef CONFIG_NCPFS_OS2_NS
+ (ns == NW_NS_OS2) ||
+#endif /* CONFIG_NCPFS_OS2_NS */
+#ifdef CONFIG_NCPFS_NFS_NS
+ (ns == NW_NS_NFS) ||
+#endif /* CONFIG_NCPFS_NFS_NS */
+ 0;
}
static inline int ncp_case_sensitive(struct inode *i)
{
+#ifdef CONFIG_NCPFS_NFS_NS
+ return ncp_namespace(i) == NW_NS_NFS;
+#else
return 0;
+#endif /* CONFIG_NCPFS_NFS_NS */
}
#endif /* __KERNEL__ */
#ifdef __KERNEL__
#define NCP_DEFAULT_BUFSIZE 1024
+#define NCP_DEFAULT_OPTIONS 0 /* 2 for packet signatures */
struct ncp_server {
interest for us later, so we store
it completely. */
- __u8 name_space[NCP_NUMBER_OF_VOLUMES];
+ __u8 name_space[NCP_NUMBER_OF_VOLUMES + 2];
struct file *ncp_filp; /* File pointer to ncp socket */
int ncp_reply_size;
struct ncp_inode_info root;
+#if 0
char root_path; /* '\0' */
+#else
+ struct dentry* root_dentry;
+#endif
+
+/* info for packet signing */
+ int sign_wanted; /* 1=Server needs signed packets */
+ int sign_active; /* 0=don't do signing, 1=do */
+ char sign_root[8]; /* generated from password and encr. key */
+ char sign_last[16];
+
+ /* Authentication info: NDS or BINDERY, username */
+ struct {
+ int auth_type;
+ size_t object_name_len;
+ void* object_name;
+ int object_type;
+ } auth;
+ /* Password info */
+ struct {
+ size_t len;
+ void* data;
+ } priv;
};
static inline int ncp_conn_valid(struct ncp_server *server)
#endif /* __KERNEL__ */
#endif
+
/* Values for flags */
#define NCP_MOUNT_SOFT 0x0001
#define NCP_MOUNT_INTR 0x0002
+#define NCP_MOUNT_STRONG 0x0004 /* enable delete/rename of r/o files */
+#define NCP_MOUNT_NO_OS2 0x0008
+#define NCP_MOUNT_NO_NFS 0x0010
struct ncp_mount_data {
int version;
unsigned int ncp_fd; /* The socket to the ncp port */
- uid_t mounted_uid; /* Who may umount() this filesystem? */
- pid_t wdog_pid; /* Who cares for our watchdog packets? */
+ __kernel_uid_t mounted_uid; /* Who may umount() this filesystem? */
+ __kernel_pid_t wdog_pid; /* Who cares for our watchdog packets? */
unsigned char mounted_vol[NCP_VOLNAME_LEN + 1];
unsigned int time_out; /* How long should I wait after
unsigned int retry_count; /* And how often should I retry? */
unsigned int flags;
- uid_t uid;
- gid_t gid;
- mode_t file_mode;
- mode_t dir_mode;
+ __kernel_uid_t uid;
+ __kernel_gid_t gid;
+ __kernel_mode_t file_mode;
+ __kernel_mode_t dir_mode;
};
#endif
#endif
struct neighbour;
+struct neigh_parms;
struct sk_buff;
/*
int (*hard_header_parse)(struct sk_buff *skb,
unsigned char *haddr);
- int (*neigh_setup)(struct neighbour *n);
+ int (*neigh_setup)(struct device *dev, struct neigh_parms *);
int (*accept_fastpath)(struct device *, struct dst_entry*);
#ifdef CONFIG_NET_FASTROUTE
extern int unregister_netdevice_notifier(struct notifier_block *nb);
extern int dev_new_index(void);
extern struct device *dev_get_by_index(int ifindex);
-extern int register_gifconf(int family, int (*func)(struct device *dev, char *bufptr, int len));
extern int dev_restart(struct device *dev);
+typedef int gifconf_func_t(struct device * dev, char * bufptr, int len);
+extern int register_gifconf(unsigned int family, gifconf_func_t * gifconf);
+extern __inline__ int unregister_gifconf(unsigned int family)
+{
+ return register_gifconf(family, 0);
+}
+
#define HAVE_NETIF_RX 1
extern void netif_rx(struct sk_buff *skb);
extern void net_bh(void);
#define NLM_F_REPLACE 0x100 /* Override existing */
#define NLM_F_EXCL 0x200 /* Do not touch, if it exists */
#define NLM_F_CREATE 0x400 /* Create, if it does not exist */
+#define NLM_F_APPEND 0x800 /* Add to end of list */
/*
4.4BSD ADD NLM_F_CREATE|NLM_F_EXCL
(struct nlmsghdr*)(((char*)(nlh)) + NLMSG_ALIGN((nlh)->nlmsg_len)))
#define NLMSG_OK(nlh,len) ((nlh)->nlmsg_len >= sizeof(struct nlmsghdr) && \
(nlh)->nlmsg_len <= (len))
+#define NLMSG_PAYLOAD(nlh,len) ((nlh)->nlmsg_len - NLMSG_SPACE((len)))
#define NLMSG_NOOP 0x1 /* Nothing. */
#define NLMSG_ERROR 0x2 /* Error */
#define PCI_VENDOR_ID_3DFX 0x121a
#define PCI_DEVICE_ID_3DFX_VOODOO 0x0001
+#define PCI_DEVICE_ID_3DFX_VOODOO2 0x0002
#define PCI_VENDOR_ID_SIGMADES 0x1236
#define PCI_DEVICE_ID_SIGMADES_6425 0x6401
*
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
*
+ * Changes:
+ * Mike McLagan : Routing by source
+ *
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
#include <linux/netlink.h>
#define RTNL_DEBUG 1
+#define CONFIG_RTNL_OLD_IFINFO 1
/****
#define RTM_DELRULE (RTM_BASE+17)
#define RTM_GETRULE (RTM_BASE+18)
-#define RTM_MAX (RTM_BASE+19)
+#define RTM_NEWQDISC (RTM_BASE+20)
+#define RTM_DELQDISC	(RTM_BASE+21)
+#define RTM_GETQDISC (RTM_BASE+22)
+#define RTM_NEWTFLOW (RTM_BASE+24)
+#define RTM_DELTFLOW (RTM_BASE+25)
+#define RTM_GETTFLOW (RTM_BASE+26)
-/* Generic structure for encapsulation optional route
- information. It is reminiscent of sockaddr, but with sa_family
- replaced with attribute type.
- It would be good, if constructions of sort:
- struct something {
- struct rtattr rta;
- struct a_content a;
- }
- had correct alignment. It is true for x86, but I have no idea
- how to make it on 64bit architectures. Please, teach me. --ANK
+#define RTM_NEWTFILTER (RTM_BASE+28)
+#define RTM_DELTFILTER (RTM_BASE+29)
+#define RTM_GETTFILTER (RTM_BASE+30)
+
+#define RTM_MAX (RTM_BASE+31)
+
+/*
+   Generic structure for encapsulating optional route information.
+ It is reminiscent of sockaddr, but with sa_family replaced
+ with attribute type.
*/
struct rtattr
unsigned short rta_type;
};
-enum rtattr_type_t
-{
- RTA_UNSPEC,
- RTA_DST,
- RTA_SRC,
- RTA_IIF,
- RTA_OIF,
- RTA_GATEWAY,
- RTA_PRIORITY,
- RTA_PREFSRC,
- RTA_WINDOW,
- RTA_RTT,
- RTA_MTU,
- RTA_IFNAME,
- RTA_CACHEINFO
-};
-
-#define RTA_MAX RTA_CACHEINFO
-
/* Macros to handle rtattributes */
#define RTA_ALIGNTO 4
#define RTA_LENGTH(len) (RTA_ALIGN(sizeof(struct rtattr)) + (len))
#define RTA_SPACE(len) RTA_ALIGN(RTA_LENGTH(len))
#define RTA_DATA(rta) ((void*)(((char*)(rta)) + RTA_LENGTH(0)))
+#define RTA_PAYLOAD(rta) ((rta)->rta_len - RTA_LENGTH(0))
-struct rta_cacheinfo
-{
- __u32 rta_clntref;
- __u32 rta_lastuse;
- __s32 rta_expires;
- __u32 rta_error;
- __u32 rta_used;
-};
-
-/*
- * "struct rtnexthop" describres all necessary nexthop information,
- * i.e. parameters of path to a destination via this nextop.
- *
- * At the moment it is impossible to set different prefsrc, mtu, window
- * and rtt for different paths from multipath.
- */
-
-struct rtnexthop
-{
- unsigned short rtnh_len;
- unsigned char rtnh_flags;
- unsigned char rtnh_hops;
- int rtnh_ifindex;
-};
-
-/* rtnh_flags */
-
-#define RTNH_F_DEAD 1 /* Nexthop is dead (used by multipath) */
-#define RTNH_F_PERVASIVE 2 /* Do recursive gateway lookup */
-#define RTNH_F_ONLINK 4 /* Gateway is forced on link */
-/* Macros to handle hexthops */
-#define RTNH_ALIGNTO 4
-#define RTNH_ALIGN(len) ( ((len)+RTNH_ALIGNTO-1) & ~(RTNH_ALIGNTO-1) )
-#define RTNH_OK(rtnh,len) ((rtnh)->rtnh_len >= sizeof(struct rtnexthop) && \
- (rtnh)->rtnh_len <= (len))
-#define RTNH_NEXT(rtnh) ((struct rtnexthop*)(((char*)(rtnh)) + RTNH_ALIGN((rtnh)->rtnh_len)))
-#define RTNH_LENGTH(len) (RTNH_ALIGN(sizeof(struct rtnexthop)) + (len))
-#define RTNH_SPACE(len) RTNH_ALIGN(RTNH_LENGTH(len))
-#define RTNH_DATA(rtnh) ((struct rtattr*)(((char*)(rtnh)) + RTNH_LENGTH(0)))
+/******************************************************************************
+ * Definitions used in routing table administration.
+ ****/
struct rtmsg
{
unsigned char rtm_dst_len;
unsigned char rtm_src_len;
unsigned char rtm_tos;
+
unsigned char rtm_table; /* Routing table id */
unsigned char rtm_protocol; /* Routing protocol; see below */
+#ifdef CONFIG_RTNL_OLD_IFINFO
unsigned char rtm_nhs; /* Number of nexthops */
+#else
+ unsigned char rtm_scope; /* See below */
+#endif
unsigned char rtm_type; /* See below */
+
+#ifdef CONFIG_RTNL_OLD_IFINFO
unsigned short rtm_optlen; /* Byte length of rtm_opt */
unsigned char rtm_scope; /* See below */
unsigned char rtm_whatsit; /* Unused byte */
+#endif
unsigned rtm_flags;
};
-#define RTM_RTA(r) ((struct rtattr*)(((char*)(r)) + NLMSG_ALIGN(sizeof(struct rtmsg))))
-#define RTM_RTNH(r) ((struct rtnexthop*)(((char*)(r)) + NLMSG_ALIGN(sizeof(struct rtmsg)) \
- + NLMSG_ALIGN((r)->rtm_optlen)))
-#define RTM_NHLEN(nlh,r) ((nlh)->nlmsg_len - NLMSG_SPACE(sizeof(struct rtmsg)) - NLMSG_ALIGN((r)->rtm_optlen))
-
/* rtm_type */
enum
#define RTN_MAX RTN_XRESOLVE
+
/* rtm_protocol */
#define RTPROT_UNSPEC 0
Really it is not scope, but sort of distance to the destination.
NOWHERE are reserved for not existing destinations, HOST is our
- local addresses, LINK are destinations, locate on directly attached
- link and UNIVERSE is everywhere in the Universe :-)
+ local addresses, LINK are destinations, located on directly attached
+ link and UNIVERSE is everywhere in the Universe.
Intermediate values are also possible f.e. interior routes
could be assigned a value between UNIVERSE and LINK.
enum rt_scope_t
{
RT_SCOPE_UNIVERSE=0,
-/* User defined values f.e. "site" */
+/* User defined values */
RT_SCOPE_SITE=200,
RT_SCOPE_LINK=253,
RT_SCOPE_HOST=254,
#define RTM_F_NOTIFY 0x100 /* Notify user of route change */
#define RTM_F_CLONED 0x200 /* This route is cloned */
-#define RTM_F_NOPMTUDISC 0x400 /* Do not make PMTU discovery */
-#define RTM_F_EQUALIZE 0x800 /* Multipath equalizer: NI */
+#define RTM_F_EQUALIZE 0x400 /* Multipath equalizer: NI */
+#ifdef CONFIG_RTNL_OLD_IFINFO
+#define RTM_F_NOPMTUDISC 0x800 /* Do not make PMTU discovery */
+#endif
/* Reserved table identifiers */
#define RT_TABLE_MAX RT_TABLE_LOCAL
+
+/* Routing message attributes */
+
+enum rtattr_type_t
+{
+ RTA_UNSPEC,
+ RTA_DST,
+ RTA_SRC,
+ RTA_IIF,
+ RTA_OIF,
+ RTA_GATEWAY,
+ RTA_PRIORITY,
+ RTA_PREFSRC,
+#ifndef CONFIG_RTNL_OLD_IFINFO
+ RTA_METRICS,
+ RTA_MULTIPATH,
+ RTA_PROTOINFO,
+ RTA_FLOW,
+#else
+ RTA_WINDOW,
+ RTA_RTT,
+ RTA_MTU,
+ RTA_IFNAME,
+#endif
+ RTA_CACHEINFO
+};
+
+#define RTA_MAX RTA_CACHEINFO
+
+#define RTM_RTA(r) ((struct rtattr*)(((char*)(r)) + NLMSG_ALIGN(sizeof(struct rtmsg))))
+#define RTM_PAYLOAD(n) NLMSG_PAYLOAD(n,sizeof(struct rtmsg))
+
+/* RTA_MULTIPATH --- array of struct rtnexthop.
+ *
+ * "struct rtnexthop" describes all necessary nexthop information,
+ * i.e. parameters of the path to a destination via this nexthop.
+ *
+ * At the moment it is impossible to set different prefsrc, mtu, window
+ * and rtt for different paths from multipath.
+ */
+
+struct rtnexthop
+{
+ unsigned short rtnh_len;
+ unsigned char rtnh_flags;
+ unsigned char rtnh_hops;
+ int rtnh_ifindex;
+};
+
+/* rtnh_flags */
+
+#define RTNH_F_DEAD 1 /* Nexthop is dead (used by multipath) */
+#define RTNH_F_PERVASIVE 2 /* Do recursive gateway lookup */
+#define RTNH_F_ONLINK 4 /* Gateway is forced on link */
+
+/* Macros to handle nexthops */
+
+#define RTNH_ALIGNTO 4
+#define RTNH_ALIGN(len) ( ((len)+RTNH_ALIGNTO-1) & ~(RTNH_ALIGNTO-1) )
+#define RTNH_OK(rtnh,len) ((rtnh)->rtnh_len >= sizeof(struct rtnexthop) && \
+ (rtnh)->rtnh_len <= (len))
+#define RTNH_NEXT(rtnh) ((struct rtnexthop*)(((char*)(rtnh)) + RTNH_ALIGN((rtnh)->rtnh_len)))
+#define RTNH_LENGTH(len) (RTNH_ALIGN(sizeof(struct rtnexthop)) + (len))
+#define RTNH_SPACE(len) RTNH_ALIGN(RTNH_LENGTH(len))
+#define RTNH_DATA(rtnh) ((struct rtattr*)(((char*)(rtnh)) + RTNH_LENGTH(0)))
+
+#ifdef CONFIG_RTNL_OLD_IFINFO
+#define RTM_RTNH(r) ((struct rtnexthop*)(((char*)(r)) + NLMSG_ALIGN(sizeof(struct rtmsg)) \
+ + NLMSG_ALIGN((r)->rtm_optlen)))
+#define RTM_NHLEN(nlh,r) ((nlh)->nlmsg_len - NLMSG_SPACE(sizeof(struct rtmsg)) - NLMSG_ALIGN((r)->rtm_optlen))
+#endif
+
+/* RTA_CACHEINFO */
+
+struct rta_cacheinfo
+{
+ __u32 rta_clntref;
+ __u32 rta_lastuse;
+ __s32 rta_expires;
+ __u32 rta_error;
+ __u32 rta_used;
+};
+
+/* RTA_METRICS --- array of struct rtattr with types of RTAX_* */
+
+enum
+{
+ RTAX_UNSPEC,
+ RTAX_LOCK,
+ RTAX_MTU,
+ RTAX_WINDOW,
+ RTAX_RTT,
+ RTAX_HOPS,
+ RTAX_SSTHRESH,
+ RTAX_CWND,
+};
+
+#define RTAX_MAX RTAX_CWND
+
+
+
/*********************************************************
* Interface address.
****/
#define IFA_RTA(r) ((struct rtattr*)(((char*)(r)) + NLMSG_ALIGN(sizeof(struct ifaddrmsg))))
+#define IFA_PAYLOAD(n) NLMSG_PAYLOAD(n,sizeof(struct ifaddrmsg))
/*
Important comment:
struct ndmsg
{
unsigned char ndm_family;
+ unsigned char ndm_pad1;
+ unsigned short ndm_pad2;
int ndm_ifindex; /* Link index */
__u16 ndm_state;
__u8 ndm_flags;
#define NDA_MAX NDA_CACHEINFO
#define NDA_RTA(r) ((struct rtattr*)(((char*)(r)) + NLMSG_ALIGN(sizeof(struct ndmsg))))
+#define NDA_PAYLOAD(n) NLMSG_PAYLOAD(n,sizeof(struct ndmsg))
+
+/*
+ * Neighbor Cache Entry Flags
+ */
+
+#define NTF_PROXY 0x08 /* == ATF_PUBL */
+#define NTF_ROUTER 0x80
+
+/*
+ * Neighbor Cache Entry States.
+ */
+
+#define NUD_INCOMPLETE 0x01
+#define NUD_REACHABLE 0x02
+#define NUD_STALE 0x04
+#define NUD_DELAY 0x08
+#define NUD_PROBE 0x10
+#define NUD_FAILED 0x20
+
+/* Dummy states */
+#define NUD_NOARP 0x40
+#define NUD_PERMANENT 0x80
+#define NUD_NONE 0x00
+
struct nda_cacheinfo
{
* on network protocol.
*/
+#ifdef CONFIG_RTNL_OLD_IFINFO
struct ifinfomsg
{
unsigned char ifi_family; /* Dummy */
IFLA_STATS
};
+#else
+
+struct ifinfomsg
+{
+ unsigned char ifi_family;
+ unsigned char __ifi_pad;
+ unsigned short ifi_type; /* ARPHRD_* */
+ int ifi_index; /* Link index */
+ unsigned ifi_flags; /* IFF_* flags */
+ unsigned ifi_change; /* IFF_* change mask */
+};
+
+enum
+{
+ IFLA_UNSPEC,
+ IFLA_ADDRESS,
+ IFLA_BROADCAST,
+ IFLA_IFNAME,
+ IFLA_MTU,
+ IFLA_LINK,
+ IFLA_QDISC,
+ IFLA_STATS
+};
+
+#endif
+
+
#define IFLA_MAX IFLA_STATS
#define IFLA_RTA(r) ((struct rtattr*)(((char*)(r)) + NLMSG_ALIGN(sizeof(struct ifinfomsg))))
+#define IFLA_PAYLOAD(n) NLMSG_PAYLOAD(n,sizeof(struct ifinfomsg))
/* ifi_flags.
for IPIP tunnels, when route to endpoint is allowed to change)
*/
+/*****************************************************************
+ * Traffic control messages.
+ ****/
+
+struct tcmsg
+{
+ unsigned char tcm_family;
+ unsigned char tcm__pad1;
+ unsigned short tcm__pad2;
+ int tcm_ifindex;
+ __u32 tcm_handle;
+ __u32 tcm_parent;
+ __u32 tcm_info;
+};
+
+enum
+{
+ TCA_UNSPEC,
+ TCA_KIND,
+ TCA_OPTIONS,
+ TCA_STATS,
+ TCA_XSTATS
+};
+
+#define TCA_MAX TCA_XSTATS
+
+#define TCA_RTA(r) ((struct rtattr*)(((char*)(r)) + NLMSG_ALIGN(sizeof(struct tcmsg))))
+#define TCA_PAYLOAD(n) NLMSG_PAYLOAD(n,sizeof(struct tcmsg))
+
+
+/* SUMMARY: maximal rtattr understood by kernel */
+
+#define RTATTR_MAX RTA_MAX
+
+/* RTnetlink multicast groups */
+
#define RTMGRP_LINK 1
#define RTMGRP_NOTIFY 2
#define RTMGRP_NEIGH 4
#define RTMGRP_IPV6_MROUTE 0x200
#define RTMGRP_IPV6_ROUTE 0x400
+/* End of information exported to user level */
#ifdef __KERNEL__
-struct kern_rta
-{
- void *rta_dst;
- void *rta_src;
- int *rta_iif;
- int *rta_oif;
- void *rta_gw;
- u32 *rta_priority;
- void *rta_prefsrc;
- unsigned *rta_window;
- unsigned *rta_rtt;
- unsigned *rta_mtu;
- unsigned char *rta_ifname;
- struct rta_cacheinfo *rta_ci;
-};
-
-struct kern_ifa
-{
- void *ifa_address;
- void *ifa_local;
- unsigned char *ifa_label;
- void *ifa_broadcast;
- void *ifa_anycast;
-};
-
-
extern atomic_t rtnl_rlockct;
extern struct wait_queue *rtnl_wait;
NET_CORE_DESTROY_DELAY,
NET_CORE_MAX_BACKLOG,
NET_CORE_FASTROUTE,
+ NET_CORE_MSG_COST,
+ NET_CORE_MSG_BURST,
};
/* /proc/sys/net/ethernet */
/* /proc/sys/net/ipv4 */
enum
{
- NET_IPV4_TCP_HOE_RETRANSMITS=1,
+	/* v2.0 compatible variables */
+ NET_IPV4_FORWARD = 8,
+ NET_IPV4_DYNADDR = 9,
+
+ NET_IPV4_CONF = 16,
+ NET_IPV4_NEIGH = 17,
+ NET_IPV4_ROUTE = 18,
+ NET_IPV4_FIB_HASH = 19,
+
+ NET_IPV4_TCP_HOE_RETRANSMITS=32,
NET_IPV4_TCP_SACK,
NET_IPV4_TCP_TSACK,
NET_IPV4_TCP_TIMESTAMPS,
NET_IPV4_TCP_WINDOW_SCALING,
NET_IPV4_TCP_VEGAS_CONG_AVOID,
- NET_IPV4_FORWARDING,
NET_IPV4_DEFAULT_TTL,
- NET_IPV4_RFC1812_FILTER,
- NET_IPV4_LOG_MARTIANS,
- NET_IPV4_SOURCE_ROUTE,
- NET_IPV4_SEND_REDIRECTS,
NET_IPV4_AUTOCONFIG,
- NET_IPV4_BOOTP_RELAY,
- NET_IPV4_PROXY_ARP,
NET_IPV4_NO_PMTU_DISC,
- NET_IPV4_ACCEPT_REDIRECTS,
- NET_IPV4_SECURE_REDIRECTS,
- NET_IPV4_RFC1620_REDIRECTS,
- NET_IPV4_RTCACHE_FLUSH,
NET_IPV4_TCP_SYN_RETRIES,
NET_IPV4_IPFRAG_HIGH_THRESH,
NET_IPV4_IPFRAG_LOW_THRESH,
NET_IPV4_TCP_RETRIES2,
NET_IPV4_TCP_MAX_DELAY_ACKS,
NET_IPV4_TCP_FIN_TIMEOUT,
- NET_IPV4_IGMP_MAX_HOST_REPORT_DELAY,
- NET_IPV4_IGMP_TIMER_SCALE,
- NET_IPV4_IGMP_AGE_THRESHOLD,
- NET_IPV4_IP_DYNADDR,
NET_IPV4_IP_MASQ_DEBUG,
NET_TCP_SYNCOOKIES,
NET_TCP_STDURG,
NET_IPV4_ICMP_TIMEEXCEED_RATE,
NET_IPV4_ICMP_PARAMPROB_RATE,
NET_IPV4_ICMP_ECHOREPLY_RATE,
- NET_IPV4_NEIGH,
};
+enum {
+ NET_IPV4_ROUTE_FLUSH = 1,
+ NET_IPV4_ROUTE_MIN_DELAY,
+ NET_IPV4_ROUTE_MAX_DELAY,
+ NET_IPV4_ROUTE_GC_THRESH,
+ NET_IPV4_ROUTE_MAX_SIZE,
+ NET_IPV4_ROUTE_GC_MIN_INTERVAL,
+ NET_IPV4_ROUTE_GC_TIMEOUT,
+ NET_IPV4_ROUTE_GC_INTERVAL,
+ NET_IPV4_ROUTE_REDIRECT_LOAD,
+ NET_IPV4_ROUTE_REDIRECT_NUMBER,
+ NET_IPV4_ROUTE_REDIRECT_SILENCE,
+ NET_IPV4_ROUTE_ERROR_COST,
+ NET_IPV4_ROUTE_ERROR_BURST,
+};
+
+enum
+{
+ NET_PROTO_CONF_ALL = -2,
+ NET_PROTO_CONF_DEFAULT = -3,
+
+ /* And device ifindices ... */
+};
+
+enum
+{
+ NET_IPV4_CONF_FORWARDING = 1,
+ NET_IPV4_CONF_MC_FORWARDING,
+ NET_IPV4_CONF_PROXY_ARP,
+ NET_IPV4_CONF_ACCEPT_REDIRECTS,
+ NET_IPV4_CONF_SECURE_REDIRECTS,
+ NET_IPV4_CONF_SEND_REDIRECTS,
+ NET_IPV4_CONF_SHARED_MEDIA,
+ NET_IPV4_CONF_RP_FILTER,
+ NET_IPV4_CONF_ACCEPT_SOURCE_ROUTE,
+ NET_IPV4_CONF_BOOTP_RELAY,
+ NET_IPV4_CONF_LOG_MARTIANS,
+};
/* /proc/sys/net/ipv6 */
enum {
- NET_IPV6_FORWARDING = 1,
- NET_IPV6_HOPLIMIT,
+ NET_IPV6_CONF = 16,
+ NET_IPV6_NEIGH = 17,
+ NET_IPV6_ROUTE = 18,
+};
+enum {
+ NET_IPV6_ROUTE_FLUSH = 1,
+ NET_IPV6_ROUTE_GC_THRESH,
+ NET_IPV6_ROUTE_MAX_SIZE,
+ NET_IPV6_ROUTE_GC_MIN_INTERVAL,
+ NET_IPV6_ROUTE_GC_TIMEOUT,
+ NET_IPV6_ROUTE_GC_INTERVAL,
+};
+
+enum {
+ NET_IPV6_FORWARDING = 1,
+ NET_IPV6_HOP_LIMIT,
+ NET_IPV6_MTU,
NET_IPV6_ACCEPT_RA,
NET_IPV6_ACCEPT_REDIRECTS,
-
NET_IPV6_AUTOCONF,
NET_IPV6_DAD_TRANSMITS,
NET_IPV6_RTR_SOLICITS,
NET_IPV6_RTR_SOLICIT_INTERVAL,
NET_IPV6_RTR_SOLICIT_DELAY,
-
- NET_IPV6_ICMPV6_TIME,
- NET_IPV6_NEIGH,
};
/* /proc/sys/net/<protocol>/neigh/<dev> */
__constant_htonl(0x2));
}
+extern __inline__ int ipv6_addr_is_multicast(struct in6_addr *addr)
+{
+ return (addr->s6_addr32[0] & __constant_htonl(0xFF000000)) == __constant_htonl(0xFF000000);
+}
#endif
#endif
extern int arp_bind_neighbour(struct dst_entry *dst);
extern int arp_mc_map(u32 addr, u8 *haddr, struct device *dev, int dir);
extern void arp_ifdown(struct device *dev);
+
+extern struct neigh_ops arp_broken_ops;
+
#endif /* _ARP_H */
int obsolete;
__u32 priority;
unsigned long lastuse;
+ unsigned mxlock;
unsigned window;
unsigned pmtu;
unsigned rtt;
{
unsigned short family;
unsigned short protocol;
+ unsigned gc_thresh;
+
+ int (*gc)(void);
struct dst_entry * (*check)(struct dst_entry *, __u32 cookie);
struct dst_entry * (*reroute)(struct dst_entry *,
struct sk_buff *);
void (*destroy)(struct dst_entry *);
struct dst_entry * (*negative_advice)(struct dst_entry *);
+ void (*link_failure)(struct sk_buff *);
+
+ atomic_t entries;
};
#ifdef __KERNEL__
if (dst && dst->ops->negative_advice)
*dst_p = dst->ops->negative_advice(dst);
}
+
+extern __inline__ void dst_link_failure(struct sk_buff *skb)
+{
+ struct dst_entry * dst = skb->dst;
+ if (dst && dst->ops && dst->ops->link_failure)
+ dst->ops->link_failure(skb);
+}
#endif
#endif /* _NET_DST_H */
#define IFA_SITE IPV6_ADDR_SITELOCAL
#define IFA_GLOBAL 0x0000U
+struct ipv6_devconf
+{
+ int forwarding;
+ int hop_limit;
+ int mtu6;
+ int accept_ra;
+ int accept_redirects;
+ int autoconf;
+ int dad_transmits;
+ int rtr_solicits;
+ int rtr_solicit_interval;
+ int rtr_solicit_delay;
+
+ void *sysctl;
+};
+
struct inet6_dev
{
struct device *dev;
struct inet6_ifaddr *addr_list;
struct ifmcaddr6 *mc_list;
-
__u32 if_flags;
- __u32 router:1,
- unused:31;
struct neigh_parms *nd_parms;
struct inet6_dev *next;
+ struct ipv6_devconf cnf;
};
+extern struct ipv6_devconf ipv6_devconf;
extern __inline__ void ipv6_eth_mc_map(struct in6_addr *addr, char *buf)
{
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
* Alan Cox, <gw4pts@gw4pts.ampr.org>
*
+ * Changes:
+ * Mike McLagan : Routing by source
+ *
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
extern int __ip_finish_output(struct sk_buff *skb);
-
-extern struct ip_mib ip_statistics;
-
struct ipv4_config
{
- int accept_redirects;
- int secure_redirects;
- int rfc1620_redirects;
- int rfc1812_filter;
- int send_redirects;
int log_martians;
- int source_route;
- int multicast_route;
- int proxy_arp;
- int bootp_relay;
int autoconfig;
int no_pmtu_disc;
};
extern struct ipv4_config ipv4_config;
-extern int sysctl_local_port_range[2];
+extern struct ip_mib ip_statistics;
-#define IS_ROUTER (ip_statistics.IpForwarding == 1)
+extern int sysctl_local_port_range[2];
extern __inline__ int ip_finish_output(struct sk_buff *skb)
{
u32 rt6i_flags;
u32 rt6i_metric;
+ u8 rt6i_hoplimit;
unsigned long rt6i_expires;
union {
extern void inet6_rt_notify(int event, struct rt6_info *rt);
+extern void fib6_run_gc(unsigned long dummy);
+
#endif
#endif
extern struct rt6_info ip6_null_entry;
+extern int ip6_rt_max_size;
+extern int ip6_rt_gc_min;
+extern int ip6_rt_gc_timeout;
+extern int ip6_rt_gc_interval;
+
extern void ip6_route_input(struct sk_buff *skb);
extern struct dst_entry * ip6_route_output(struct sock *sk,
#include <linux/config.h>
+struct kern_rta
+{
+ void *rta_dst;
+ void *rta_src;
+ int *rta_iif;
+ int *rta_oif;
+ void *rta_gw;
+ u32 *rta_priority;
+ void *rta_prefsrc;
+#ifdef CONFIG_RTNL_OLD_IFINFO
+ unsigned *rta_window;
+ unsigned *rta_rtt;
+ unsigned *rta_mtu;
+ unsigned char *rta_ifname;
+#else
+ struct rtattr *rta_mx;
+ struct rtattr *rta_mp;
+ unsigned char *rta_protoinfo;
+ unsigned char *rta_flow;
+#endif
+ struct rta_cacheinfo *rta_ci;
+};
+
struct fib_nh
{
struct device *nh_dev;
unsigned fib_flags;
int fib_protocol;
u32 fib_prefsrc;
+#ifdef CONFIG_RTNL_OLD_IFINFO
unsigned fib_mtu;
unsigned fib_rtt;
unsigned fib_window;
+#else
+#define FIB_MAX_METRICS RTAX_RTT
+ unsigned fib_metrics[FIB_MAX_METRICS];
+#define fib_mtu fib_metrics[RTAX_MTU-1]
+#define fib_window fib_metrics[RTAX_WINDOW-1]
+#define fib_rtt fib_metrics[RTAX_RTT-1]
+#endif
int fib_nhs;
#ifdef CONFIG_IP_ROUTE_MULTIPATH
int fib_power;
extern struct ipv6_mib ipv6_statistics;
-struct ipv6_config {
- int forwarding;
- int hop_limit;
- int accept_ra;
- int accept_redirects;
-
- int autoconf;
- int dad_transmits;
- int rtr_solicits;
- int rtr_solicit_interval;
- int rtr_solicit_delay;
-
- int rt_cache_timeout;
- int rt_gc_period;
-};
-
-extern struct ipv6_config ipv6_config;
-
struct ipv6_frag {
__u16 offset;
__u16 len;
struct neigh_parms
{
struct neigh_parms *next;
+ int (*neigh_setup)(struct neighbour *);
+ struct neigh_table *tbl;
+ int entries;
+ void *priv;
+
void *sysctl_table;
+
int base_reachable_time;
int retrans_time;
int gc_staletime;
u8 *lladdr, void *saddr,
struct device *dev);
-extern struct neigh_parms *neigh_parms_alloc(struct neigh_table *tbl);
+extern struct neigh_parms *neigh_parms_alloc(struct device *dev, struct neigh_table *tbl);
extern void neigh_parms_release(struct neigh_table *tbl, struct neigh_parms *parms);
extern unsigned long neigh_rand_reach_time(unsigned long base);
extern int neigh_delete(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg);
extern void neigh_app_ns(struct neighbour *n);
-extern void *neigh_sysctl_register(struct device *dev, struct neigh_parms *p,
- int p_id, int pdev_id, char *p_name);
+extern int neigh_sysctl_register(struct device *dev, struct neigh_parms *p,
+ int p_id, int pdev_id, char *p_name);
extern void neigh_sysctl_unregister(struct neigh_parms *p);
/*
* Alan Cox : Reformatted. Added ip_rt_local()
* Alan Cox : Support for TCP parameters.
* Alexey Kuznetsov: Major changes for new routing code.
+ * Mike McLagan : Routing by source
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
#include <linux/rtnetlink.h>
#define RT_HASH_DIVISOR 256
-#define RT_CACHE_MAX_SIZE 256
-
-/*
- * Maximal time to live for unused entry.
- */
-#define RT_CACHE_TIMEOUT (HZ*300)
-
-/*
- * Periodic timer frequency
- */
-#define RT_GC_INTERVAL (HZ*60)
-
-/*
- * Cache invalidations can be delayed by:
- */
-#define RT_FLUSH_DELAY (5*HZ)
-
-#define RT_REDIRECT_NUMBER 9
-#define RT_REDIRECT_LOAD (HZ/50) /* 20 msec */
-#define RT_REDIRECT_SILENCE (RT_REDIRECT_LOAD<<(RT_REDIRECT_NUMBER+1))
-/* 20sec */
-
-#define RT_ERROR_LOAD (1*HZ)
-
/*
* Prevents LRU trashing, entries considered equivalent,
#define RTO_ONLINK 0x01
#define RTO_TPROXY 0x80000000
+#ifdef CONFIG_IP_TRANSPARENT_PROXY
+#define RTO_CONN RTO_TPROXY
+#else
+#define RTO_CONN 0
+#endif
+
struct rt_key
{
__u32 dst;
__u32 rt_src_map;
__u32 rt_dst_map;
#endif
-
- /* ICMP statistics */
- unsigned long last_error;
- unsigned long errors;
};
#ifdef __KERNEL__
struct ucred creds; /* Skb credentials */
struct scm_fp_list *fp; /* Passed files */
unsigned long seq; /* Connection seqno */
+ struct file *file; /* file for socket */
struct socket *sock; /* Passed socket */
};
struct in6_addr daddr;
__u32 flow_lbl;
+ int hop_limit;
+ int mcast_hops;
__u8 priority;
- __u8 hop_limit;
- __u8 mcast_hops;
/* sockopt flags */
extern __inline__ int sock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
{
- if (atomic_read(&sk->rmem_alloc) + skb->truesize >= sk->rcvbuf)
+	/* Cast sk->rcvbuf to unsigned... It's pointless, but reduces
+ number of warnings when compiling with -W --ANK
+ */
+ if (atomic_read(&sk->rmem_alloc) + skb->truesize >= (unsigned)sk->rcvbuf)
return -ENOMEM;
skb_set_owner_r(skb, sk);
extern __inline__ int __sock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
{
- if (atomic_read(&sk->rmem_alloc) + skb->truesize >= sk->rcvbuf)
+	/* Cast sk->rcvbuf to unsigned... It's pointless, but reduces
+ number of warnings when compiling with -W --ANK
+ */
+ if (atomic_read(&sk->rmem_alloc) + skb->truesize >= (unsigned)sk->rcvbuf)
return -ENOMEM;
skb_set_owner_r(skb, sk);
__skb_queue_tail(&sk->receive_queue,skb);
extern __inline__ int sock_queue_err_skb(struct sock *sk, struct sk_buff *skb)
{
- if (atomic_read(&sk->rmem_alloc) + skb->truesize >= sk->rcvbuf)
+	/* Cast sk->rcvbuf to unsigned... It's pointless, but reduces
+ number of warnings when compiling with -W --ANK
+ */
+ if (atomic_read(&sk->rmem_alloc) + skb->truesize >= (unsigned)sk->rcvbuf)
return -ENOMEM;
skb_set_owner_r(skb, sk);
__skb_queue_tail(&sk->error_queue,skb);
* break TCP port selection. This function must also NOT wrap around
* when the next number exceeds the largest possible port (2^16-1).
*/
-static __inline__ int tcp_bhashnext(__u16 short lport, __u16 h)
+static __inline__ int tcp_bhashnext(__u16 lport, __u16 h)
{
__u32 s; /* don't change this to a smaller type! */
#endif
extern char *get_options(char *str, int *ints);
-extern void set_device_ro(int dev,int flag);
+extern void set_device_ro(kdev_t dev,int flag);
extern struct file_operations * get_blkfops(unsigned int);
extern int blkdev_release(struct inode * inode);
#if !defined(CONFIG_NFSD) && defined(CONFIG_NFSD_MODULE)
#include <linux/smp_lock.h>
#include <linux/blkdev.h>
#include <linux/file.h>
+#include <linux/swapctl.h>
#include <asm/system.h>
#include <asm/pgtable.h>
switch (atomic_read(&page->count)) {
case 1:
- /* If it has been referenced recently, don't free it */
- if (test_and_clear_bit(PG_referenced, &page->flags))
- break;
-
/* is it a swap-cache or page-cache page? */
if (page->inode) {
+ if (test_and_clear_bit(PG_referenced, &page->flags)) {
+ touch_page(page);
+ break;
+ }
+ age_page(page);
+ if (page->age)
+ break;
if (PageSwapCache(page)) {
delete_from_swap_cache(page);
return 1;
__free_page(page);
return 1;
}
+ /* It's not a cache page, so we don't do aging.
+ * If it has been referenced recently, don't free it */
+ if (test_and_clear_bit(PG_referenced, &page->flags))
+ break;
/* is it a buffer cache page? */
if ((gfp_mask & __GFP_IO) && bh && try_to_free_buffer(bh, &bh, 6))
* but this had better return false if any reasonable "get_free_page()"
* allocation could currently fail..
*
- * Right now we just require that the highest memory order should
- * have at least two entries. Whether this makes sense or not
- * under real load is to be tested, but it also gives us some
- * guarantee about memory fragmentation (essentially, it means
- * that there should be at least two large areas available).
+ * Currently we approve of the following situations:
+ * - the highest memory order has two entries
+ * - the highest memory order has one free entry and:
+ * - the next-highest memory order has two free entries
+ * - the highest memory order has one free entry and:
+ * - the next-highest memory order has one free entry
+ * - the next-next-highest memory order has two free entries
+ *
+ * [previously, there had to be two entries of the highest memory
+ * order, but this lead to problems on large-memory machines.]
*/
int free_memory_available(void)
{
- int retval;
+ int i, retval = 0;
unsigned long flags;
- struct free_area_struct * last = free_area + NR_MEM_LISTS - 1;
+ struct free_area_struct * list = NULL;
spin_lock_irqsave(&page_alloc_lock, flags);
- retval = (last->next != memory_head(last)) && (last->next->next != memory_head(last));
+ /* We fall through the loop if the list contains one
+ * item. -- thanks to Colin Plumb <colin@nyx.net>
+ */
+ for (i = 1; i < 4; ++i) {
+ list = free_area + NR_MEM_LISTS - i;
+ if (list->next == memory_head(list))
+ break;
+ if (list->next->next == memory_head(list))
+ continue;
+ retval = 1;
+ break;
+ }
spin_unlock_irqrestore(&page_alloc_lock, flags);
return retval;
}
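The fall-through loop above can be modelled in plain userspace C. `free_count` and the fixed `NR_MEM_LISTS` here are illustrative stand-ins for the buddy free lists, not the kernel structures:

```c
#include <assert.h>
#include <string.h>

#define NR_MEM_LISTS 10

/* Illustrative stand-in for the buddy free lists: free_count[o] is the
 * number of free blocks of order o (an assumption, not the kernel layout). */
static int free_count[NR_MEM_LISTS];

/* Mirrors the fall-through loop: succeed as soon as one of the top three
 * orders holds two or more free blocks, but only while every order above
 * it holds at least one. */
static int free_memory_available(void)
{
	int i;

	for (i = 1; i < 4; ++i) {
		int order = NR_MEM_LISTS - i;

		if (free_count[order] == 0)
			return 0;	/* list empty: give up */
		if (free_count[order] == 1)
			continue;	/* single entry: fall through */
		return 1;		/* two or more entries: enough */
	}
	return 0;
}
```

Note how a single free block at the top order no longer fails outright; the check just falls through to the next order, which is exactly the large-memory fix described in the comment.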
* Go through process' page directory.
*/
address = p->swap_address;
- p->swap_address = 0;
/*
* Find the proper vm-area
*/
vma = find_vma(p->mm, address);
- if (!vma)
+ if (!vma) {
+ p->swap_address = 0;
return 0;
+ }
if (address < vma->vm_start)
address = vma->vm_start;
init_swap_timer();
add_wait_queue(&kswapd_wait, &wait);
while (1) {
- int async;
+ int tries;
kswapd_awake = 0;
flush_signals(current);
kswapd_awake = 1;
swapstats.wakeups++;
/* Do the background pageout:
- * We now only swap out as many pages as needed.
- * When we are truly low on memory, we swap out
- * synchronously (WAIT == 1). -- Rik.
- * If we've had too many consecutive failures,
- * go back to sleep to let other tasks run.
+ * When we've got loads of memory, we try
+ * (free_pages_high - nr_free_pages) times to
+ * free memory. As memory gets tighter, kswapd
+	 * gets more and more aggressive. -- Rik.
*/
- async = 1;
- for (;;) {
+ tries = free_pages_high - nr_free_pages;
+ if (tries < min_free_pages) {
+ tries = min_free_pages;
+ }
+ else if (nr_free_pages < (free_pages_high + free_pages_low) / 2) {
+ tries <<= 1;
+ if (nr_free_pages < free_pages_low) {
+ tries <<= 1;
+ if (nr_free_pages <= min_free_pages) {
+ tries <<= 1;
+ }
+ }
+ }
+ while (tries--) {
int gfp_mask;
if (free_memory_available())
break;
gfp_mask = __GFP_IO;
- if (!async)
- gfp_mask |= __GFP_WAIT;
- async = try_to_free_page(gfp_mask);
- if (!(gfp_mask & __GFP_WAIT) || async)
- continue;
-
+ try_to_free_page(gfp_mask);
/*
- * Not good. We failed to free a page even though
- * we were synchronous. Complain and give up..
+ * Syncing large chunks is faster than swapping
+ * synchronously (less head movement). -- Rik.
*/
- printk("kswapd: failed to free page\n");
- break;
+ if (atomic_read(&nr_async_pages) >= SWAP_CLUSTER_MAX)
+ run_task_queue(&tq_disk);
+
}
+#if 0
+ /*
+ * Report failure if we couldn't even reach min_free_pages.
+ */
+ if (nr_free_pages < min_free_pages)
+ printk("kswapd: failed, got %d of %d\n",
+ nr_free_pages, min_free_pages);
+#endif
}
/* As if we could ever get here - maybe we want to make this killable */
remove_wait_queue(&kswapd_wait, &wait);
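The `tries` scaling in the kswapd hunk above (doubling each time free memory drops past a watermark) can be sketched as a standalone function; the parameter names mirror `min_free_pages`/`free_pages_low`/`free_pages_high` but are assumptions for illustration only:

```c
#include <assert.h>

/* Standalone model of the kswapd "tries" computation: start from the
 * shortfall against the high watermark, then double the effort once per
 * watermark crossed as memory gets tighter. */
static int compute_tries(int nr_free, int min_free, int low, int high)
{
	int tries = high - nr_free;

	if (tries < min_free) {
		tries = min_free;
	} else if (nr_free < (high + low) / 2) {
		tries <<= 1;			/* below the midpoint: double */
		if (nr_free < low) {
			tries <<= 1;		/* below low watermark: double again */
			if (nr_free <= min_free)
				tries <<= 1;	/* desperate: double once more */
		}
	}
	return tries;
}
```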
* Thomas Bogendoerfer : Return ENODEV for dev_open, if there
* is no device open function.
* Andi Kleen : Fix error reporting for SIOCGIFCONF
- * Régis Duchesne : Fix the argument check in dev_ioctl()
+ * Michael Chastain : Fix signed/unsigned for SIOCGIFCONF
*
*/
#include <net/pkt_sched.h>
#include <net/profile.h>
#include <linux/init.h>
-#ifdef CONFIG_KERNELD
#include <linux/kerneld.h>
-#endif
#ifdef CONFIG_NET_RADIO
#include <linux/wireless.h>
#endif /* CONFIG_NET_RADIO */
void dev_load(const char *name)
{
- if(!dev_get(name))
+ if(!dev_get(name) && suser())
request_module(name);
}
+#else
+
+extern inline void dev_load(const char *unused){;}
+
#endif
-static int
-default_rebuild_header(struct sk_buff *skb)
+static int default_rebuild_header(struct sk_buff *skb)
{
printk(KERN_DEBUG "%s: default_rebuild_header called -- BUG!\n", skb->dev ? skb->dev->name : "NULL!!!");
kfree_skb(skb);
/* Protocol dependent address dumping routines */
-static int (*gifconf[NPROTO])(struct device *dev, char *bufptr, int len);
+static gifconf_func_t * gifconf_list [NPROTO];
-int register_gifconf(int family, int (*func)(struct device *dev, char *bufptr, int len))
+int register_gifconf(unsigned int family, gifconf_func_t * gifconf)
{
- if (family<0 || family>=NPROTO)
+ if (family>=NPROTO)
return -EINVAL;
- gifconf[family] = func;
+ gifconf_list[family] = gifconf;
return 0;
}
struct ifconf ifc;
struct device *dev;
char *pos;
- unsigned int len;
- int err;
+ int len;
+ int total;
+ int i;
/*
* Fetch the caller's info block.
*/
- err = copy_from_user(&ifc, arg, sizeof(struct ifconf));
- if (err)
+ if (copy_from_user(&ifc, arg, sizeof(struct ifconf)))
return -EFAULT;
pos = ifc.ifc_buf;
- if (pos==NULL)
- ifc.ifc_len=0;
len = ifc.ifc_len;
/*
* Loop over the interfaces, and write an info block for each.
*/
+ total = 0;
for (dev = dev_base; dev != NULL; dev = dev->next) {
- int i;
for (i=0; i<NPROTO; i++) {
- int done;
-
- if (gifconf[i] == NULL)
- continue;
-
- done = gifconf[i](dev, pos, len);
-
- if (done<0)
- return -EFAULT;
-
- len -= done;
- if (pos)
- pos += done;
+ if (gifconf_list[i]) {
+ int done;
+ if (pos==NULL) {
+ done = gifconf_list[i](dev, NULL, 0);
+ } else {
+ done = gifconf_list[i](dev, pos+total, len-total);
+ }
+ if (done<0)
+ return -EFAULT;
+ total += done;
+ }
}
}
/*
* All done. Write the updated control block back to the caller.
*/
- ifc.ifc_len -= len;
+ ifc.ifc_len = total;
if (copy_to_user(arg, &ifc, sizeof(struct ifconf)))
return -EFAULT;
return -EINVAL;
}
+
/*
* This function handles all "interface"-type I/O control requests. The actual
* 'doing' part of this is dev_ifsioc above.
{
struct ifreq ifr;
int ret;
-#ifdef CONFIG_NET_ALIAS
char *colon;
-#endif
/* One special case: SIOCGIFCONF takes ifconf argument
and requires shared lock, because it sleeps writing
return dev_ifname((struct ifreq *)arg);
}
- /*
- * Fetch the interface name from the info block.
- */
-
if (copy_from_user(&ifr, arg, sizeof(struct ifreq)))
return -EFAULT;
+
ifr.ifr_name[IFNAMSIZ-1] = 0;
-#ifdef CONFIG_NET_ALIAS
+
colon = strchr(ifr.ifr_name, ':');
if (colon)
*colon = 0;
-#endif
+ /*
+ * See which interface the caller is talking about.
+ */
+
switch(cmd)
{
/*
case SIOCGIFMAP:
case SIOCGIFINDEX:
case SIOCGIFTXQLEN:
-#ifdef CONFIG_KERNELD
dev_load(ifr.ifr_name);
-#endif
ret = dev_ifsioc(&ifr, cmd);
if (!ret) {
-#ifdef CONFIG_NET_ALIAS
if (colon)
*colon = ':';
-#endif
if (copy_to_user(arg, &ifr, sizeof(struct ifreq)))
return -EFAULT;
}
case SIOCSIFTXQLEN:
if (!suser())
return -EPERM;
-#ifdef CONFIG_KERNELD
dev_load(ifr.ifr_name);
-#endif
rtnl_lock();
ret = dev_ifsioc(&ifr, cmd);
rtnl_unlock();
default:
if (cmd >= SIOCDEVPRIVATE &&
cmd <= SIOCDEVPRIVATE + 15) {
-#ifdef CONFIG_KERNELD
dev_load(ifr.ifr_name);
-#endif
rtnl_lock();
ret = dev_ifsioc(&ifr, cmd);
rtnl_unlock();
}
#ifdef CONFIG_NET_RADIO
if (cmd >= SIOCIWFIRST && cmd <= SIOCIWLAST) {
+ dev_load(ifr.ifr_name);
if (IW_IS_SET(cmd)) {
if (!suser())
return -EPERM;
-#ifdef CONFIG_KERNELD
- dev_load(ifr.ifr_name);
-#endif
rtnl_lock();
}
-#ifdef CONFIG_KERNELD
- else
- dev_load(ifr.ifr_name);
-#endif
ret = dev_ifsioc(&ifr, cmd);
if (IW_IS_SET(cmd))
rtnl_unlock();
}
}
-int dev_new_index()
+int dev_new_index(void)
{
static int ifindex;
for (;;) {
#ifdef CONFIG_PROC_FS
proc_net_register(&proc_net_dev);
- if (1) {
+ {
struct proc_dir_entry *ent = create_proc_entry("net/dev_stat", 0, 0);
ent->read_proc = dev_proc_stats;
}
void * dst_alloc(int size, struct dst_ops * ops)
{
struct dst_entry * dst;
+
+ if (ops->gc && atomic_read(&ops->entries) > ops->gc_thresh) {
+ if (ops->gc())
+ return NULL;
+ }
dst = kmalloc(size, GFP_ATOMIC);
if (!dst)
return NULL;
dst->input = dst_discard;
dst->output = dst_blackhole;
atomic_inc(&dst_total);
+ atomic_inc(&ops->entries);
return dst;
}
neigh_release(neigh);
}
+ atomic_dec(&dst->ops->entries);
+
if (dst->ops->destroy)
dst->ops->destroy(dst);
atomic_dec(&dst_total);
int verify_iovec(struct msghdr *m, struct iovec *iov, char *address, int mode)
{
- int err=0;
- int len=0;
- int ct;
+ int size = m->msg_iovlen * sizeof(struct iovec);
+ int err, ct;
if(m->msg_namelen)
{
{
err=move_addr_to_kernel(m->msg_name, m->msg_namelen, address);
if(err<0)
- return err;
+ goto out;
}
m->msg_name = address;
if (m->msg_iovlen > UIO_FASTIOV)
{
- iov = kmalloc(m->msg_iovlen*sizeof(struct iovec), GFP_KERNEL);
+ err = -ENOMEM;
+ iov = kmalloc(size, GFP_KERNEL);
if (!iov)
- return -ENOMEM;
+ goto out;
}
- err = copy_from_user(iov, m->msg_iov, sizeof(struct iovec)*m->msg_iovlen);
- if (err)
- {
- if (m->msg_iovlen > UIO_FASTIOV)
- kfree(iov);
- return -EFAULT;
- }
+ if (copy_from_user(iov, m->msg_iov, size))
+ goto out_free;
+ m->msg_iov=iov;
- for(ct=0;ct<m->msg_iovlen;ct++)
- len+=iov[ct].iov_len;
+ for (err = 0, ct = 0; ct < m->msg_iovlen; ct++)
+ err += iov[ct].iov_len;
+out:
+ return err;
- m->msg_iov=iov;
- return len;
+out_free:
+ err = -EFAULT;
+ if (m->msg_iovlen > UIO_FASTIOV)
+ kfree(iov);
+ goto out;
}
/*
int memcpy_toiovec(struct iovec *iov, unsigned char *kdata, int len)
{
- int err;
+ int err = -EFAULT;
+
while(len>0)
{
if(iov->iov_len)
{
- int copy = min(iov->iov_len,len);
- err = copy_to_user(iov->iov_base,kdata,copy);
- if (err)
- return err;
+ int copy = min(iov->iov_len, len);
+ if (copy_to_user(iov->iov_base, kdata, copy))
+ goto out;
kdata+=copy;
len-=copy;
iov->iov_len-=copy;
}
iov++;
}
- return 0;
+ err = 0;
+out:
+ return err;
}
/*
int memcpy_fromiovec(unsigned char *kdata, struct iovec *iov, int len)
{
- int err;
+ int err = -EFAULT;
+
while(len>0)
{
if(iov->iov_len)
{
- int copy=min(len,iov->iov_len);
- err = copy_from_user(kdata, iov->iov_base, copy);
- if (err)
- {
- return -EFAULT;
- }
+ int copy = min(len, iov->iov_len);
+ if (copy_from_user(kdata, iov->iov_base, copy))
+ goto out;
len-=copy;
kdata+=copy;
iov->iov_base+=copy;
}
iov++;
}
- return 0;
+ err = 0;
+out:
+ return err;
}
int memcpy_fromiovecend(unsigned char *kdata, struct iovec *iov, int offset,
int len)
{
- int err;
+ int err = -EFAULT;
+
while(offset>0)
{
if (offset > iov->iov_len)
{
offset -= iov->iov_len;
-
}
else
{
- u8 *base;
- int copy;
+ u8 *base = iov->iov_base + offset;
+ int copy = min(len, iov->iov_len - offset);
- base = iov->iov_base + offset;
- copy = min(len, iov->iov_len - offset);
offset = 0;
- err = copy_from_user(kdata, base, copy);
- if (err)
- {
- return -EFAULT;
- }
+ if (copy_from_user(kdata, base, copy))
+ goto out;
len-=copy;
kdata+=copy;
}
while (len>0)
{
- int copy=min(len, iov->iov_len);
- err = copy_from_user(kdata, iov->iov_base, copy);
- if (err)
- {
- return -EFAULT;
- }
+ int copy = min(len, iov->iov_len);
+
+ if (copy_from_user(kdata, iov->iov_base, copy))
+ goto out;
len-=copy;
kdata+=copy;
iov++;
}
- return 0;
+ err = 0;
+out:
+ return err;
}
/*
do {
int copy = iov->iov_len - offset;
- if (copy >= 0) {
+ if (copy > 0) {
u8 *base = iov->iov_base + offset;
/* Normal case (single iov component) is fastly detected */
if (len <= copy) {
*csump = csum_and_copy_from_user(base, kdata,
len, *csump, &err);
- return err;
+ goto out;
}
partial_cnt = copy % 4;
if (partial_cnt) {
copy -= partial_cnt;
- err |= copy_from_user(kdata+copy, base+copy, partial_cnt);
+ if (copy_from_user(kdata + copy, base + copy,
+ partial_cnt))
+ goto out_fault;
}
- *csump = csum_and_copy_from_user(base, kdata,
- copy, *csump, &err);
-
+ *csump = csum_and_copy_from_user(base, kdata, copy,
+ *csump, &err);
+ if (err)
+ goto out;
len -= copy + partial_cnt;
kdata += copy + partial_cnt;
iov++;
csum = *csump;
- while (len>0)
+ while (len > 0)
{
u8 *base = iov->iov_base;
unsigned int copy = min(len, iov->iov_len);
/* iov component is too short ... */
if (par_len > copy) {
- err |= copy_from_user(kdata, base, copy);
+ if (copy_from_user(kdata, base, copy))
+ goto out_fault;
+ kdata += copy;
base += copy;
partial_cnt += copy;
- kdata += copy;
len -= copy;
iov++;
if (len)
continue;
- *csump = csum_partial(kdata-partial_cnt, partial_cnt, csum);
- return err;
+ *csump = csum_partial(kdata - partial_cnt,
+ partial_cnt, csum);
+ goto out;
}
- err |= copy_from_user(kdata, base, par_len);
- csum = csum_partial(kdata-partial_cnt, 4, csum);
+ if (copy_from_user(kdata, base, par_len))
+ goto out_fault;
+ csum = csum_partial(kdata - partial_cnt, 4, csum);
+ kdata += par_len;
base += par_len;
copy -= par_len;
len -= par_len;
- kdata += par_len;
partial_cnt = 0;
}
if (partial_cnt)
{
copy -= partial_cnt;
- err |= copy_from_user(kdata+copy, base + copy, partial_cnt);
+ if (copy_from_user(kdata + copy, base + copy,
+ partial_cnt))
+ goto out_fault;
}
}
- if (copy == 0)
+ /* Why do we want to break?? There may be more to copy ... */
+ if (copy == 0) {
+		if (len > partial_cnt)
+			printk("csum_iovec: early break? len=%d, partial=%d\n", len, partial_cnt);
break;
+ }
csum = csum_and_copy_from_user(base, kdata, copy, csum, &err);
+ if (err)
+ goto out;
len -= copy + partial_cnt;
kdata += copy + partial_cnt;
iov++;
}
*csump = csum;
+out:
return err;
+
+out_fault:
+ err = -EFAULT;
+ goto out;
}
return NULL;
memset(n, 0, tbl->entry_size);
-
+
skb_queue_head_init(&n->arp_queue);
n->updated = n->used = jiffies;
n->nud_state = NUD_NONE;
}
/* Device specific setup. */
- if (dev->neigh_setup && dev->neigh_setup(n) < 0) {
+ if (n->parms && n->parms->neigh_setup && n->parms->neigh_setup(n) < 0) {
neigh_destroy(n);
return NULL;
}
lladdr = neigh->ha;
}
- neigh->used = jiffies;
neigh_sync(neigh);
old = neigh->nud_state;
if (new&NUD_CONNECTED)
}
-struct neigh_parms *neigh_parms_alloc(struct neigh_table *tbl)
+struct neigh_parms *neigh_parms_alloc(struct device *dev, struct neigh_table *tbl)
{
struct neigh_parms *p;
p = kmalloc(sizeof(*p), GFP_KERNEL);
if (p) {
memcpy(p, &tbl->parms, sizeof(*p));
+ p->tbl = tbl;
p->reachable_time = neigh_rand_reach_time(p->base_reachable_time);
- start_bh_atomic();
+ if (dev && dev->neigh_setup) {
+ if (dev->neigh_setup(dev, p)) {
+ kfree(p);
+ return NULL;
+ }
+ }
p->next = tbl->parms.next;
+ /* ATOMIC_SET */
tbl->parms.next = p;
- end_bh_atomic();
}
return p;
}
return;
for (p = &tbl->parms.next; *p; p = &(*p)->next) {
if (*p == parms) {
- start_bh_atomic();
+ /* ATOMIC_SET */
*p = parms->next;
- end_bh_atomic();
#ifdef CONFIG_SYSCTL
neigh_sysctl_unregister(parms);
#endif
nlmsg_failure:
rtattr_failure:
- skb_put(skb, b - skb->tail);
+ skb_trim(skb, b - skb->data);
return -1;
}
&proc_dointvec},
{NET_NEIGH_REACHABLE_TIME, "base_reachable_time",
NULL, sizeof(int), 0644, NULL,
- &proc_dointvec},
+ &proc_dointvec_jiffies},
{NET_NEIGH_DELAY_PROBE_TIME, "delay_first_probe_time",
NULL, sizeof(int), 0644, NULL,
- &proc_dointvec},
+ &proc_dointvec_jiffies},
{NET_NEIGH_GC_STALE_TIME, "gc_stale_time",
NULL, sizeof(int), 0644, NULL,
- &proc_dointvec},
+ &proc_dointvec_jiffies},
{NET_NEIGH_UNRES_QLEN, "unres_qlen",
NULL, sizeof(int), 0644, NULL,
&proc_dointvec},
&proc_dointvec},
{NET_NEIGH_GC_INTERVAL, "gc_interval",
NULL, sizeof(int), 0644, NULL,
- &proc_dointvec},
+ &proc_dointvec_jiffies},
{NET_NEIGH_GC_THRESH1, "gc_thresh1",
NULL, sizeof(int), 0644, NULL,
&proc_dointvec},
{{CTL_NET, "net", NULL, 0, 0555, NULL},{0}}
};
-void * neigh_sysctl_register(struct device *dev, struct neigh_parms *p,
- int p_id, int pdev_id, char *p_name)
+int neigh_sysctl_register(struct device *dev, struct neigh_parms *p,
+ int p_id, int pdev_id, char *p_name)
{
struct neigh_sysctl_table *t;
t = kmalloc(sizeof(*t), GFP_KERNEL);
if (t == NULL)
- return NULL;
+ return -ENOBUFS;
memcpy(t, &neigh_sysctl_template, sizeof(*t));
- t->neigh_vars[0].data = &p->mcast_probes;
t->neigh_vars[1].data = &p->ucast_probes;
t->neigh_vars[2].data = &p->app_probes;
t->neigh_vars[3].data = &p->retrans_time;
t->sysctl_header = register_sysctl_table(t->neigh_root_dir, 0);
if (t->sysctl_header == NULL) {
kfree(t);
- return NULL;
+ return -ENOBUFS;
}
- return t;
+ p->sysctl_table = t;
+ return 0;
}
void neigh_sysctl_unregister(struct neigh_parms *p)
#define _X 2 /* exclusive access to tables required */
#define _G 4 /* GET request */
-static unsigned char rtm_properties[RTM_MAX-RTM_BASE+1] =
+static const int rtm_min[(RTM_MAX+1-RTM_BASE)/4] =
{
- _S|_X, /* RTM_NEWLINK */
- _S|_X, /* RTM_DELLINK */
- _G, /* RTM_GETLINK */
- 0,
-
- _S|_X, /* RTM_NEWADDR */
- _S|_X, /* RTM_DELADDR */
- _G, /* RTM_GETADDR */
- 0,
-
- _S|_X, /* RTM_NEWROUTE */
- _S|_X, /* RTM_DELROUTE */
- _G, /* RTM_GETROUTE */
- 0,
-
- _S|_X, /* RTM_NEWNEIGH */
- _S|_X, /* RTM_DELNEIGH */
- _G, /* RTM_GETNEIGH */
- 0,
-
- _S|_X, /* RTM_NEWRULE */
- _S|_X, /* RTM_DELRULE */
- _G, /* RTM_GETRULE */
- 0
+ NLMSG_LENGTH(sizeof(struct ifinfomsg)),
+ NLMSG_LENGTH(sizeof(struct ifaddrmsg)),
+ NLMSG_LENGTH(sizeof(struct rtmsg)),
+ NLMSG_LENGTH(sizeof(struct ndmsg)),
+ NLMSG_LENGTH(sizeof(struct rtmsg)),
+ NLMSG_LENGTH(sizeof(struct tcmsg)),
+ NLMSG_LENGTH(sizeof(struct tcmsg)),
+ NLMSG_LENGTH(sizeof(struct tcmsg))
};
-static int rtnetlink_get_rta(struct kern_rta *rta, struct rtattr *attr, int attrlen)
+static const int rta_max[(RTM_MAX+1-RTM_BASE)/4] =
{
- void **rta_data = (void**)rta;
-
- while (RTA_OK(attr, attrlen)) {
- int type = attr->rta_type;
- if (type != RTA_UNSPEC) {
- if (type > RTA_MAX)
- return -EINVAL;
- rta_data[type-1] = RTA_DATA(attr);
- }
- attr = RTA_NEXT(attr, attrlen);
- }
- return 0;
-}
-
-static int rtnetlink_get_ifa(struct kern_ifa *ifa, struct rtattr *attr, int attrlen)
-{
- void **ifa_data = (void**)ifa;
-
- while (RTA_OK(attr, attrlen)) {
- int type = attr->rta_type;
- if (type != IFA_UNSPEC) {
- if (type > IFA_MAX)
- return -EINVAL;
- ifa_data[type-1] = RTA_DATA(attr);
- }
- attr = RTA_NEXT(attr, attrlen);
- }
- return 0;
-}
-
-static int rtnetlink_get_ga(struct rtattr **rta, int sz,
- struct rtattr *attr, int attrlen)
-{
- while (RTA_OK(attr, attrlen)) {
- int type = attr->rta_type;
- if (type > 0) {
- if (type > sz)
- return -EINVAL;
- rta[type-1] = attr;
- }
- attr = RTA_NEXT(attr, attrlen);
- }
- return 0;
-}
-
+ IFLA_MAX,
+ IFA_MAX,
+ RTA_MAX,
+ NDA_MAX,
+ RTA_MAX,
+ TCA_MAX,
+ TCA_MAX,
+ TCA_MAX
+};
void __rta_fill(struct sk_buff *skb, int attrtype, int attrlen, const void *data)
{
memcpy(RTA_DATA(rta), data, attrlen);
}
+#ifdef CONFIG_RTNL_OLD_IFINFO
static int rtnetlink_fill_ifinfo(struct sk_buff *skb, struct device *dev,
int type, pid_t pid, u32 seq)
{
nlmsg_failure:
rtattr_failure:
- skb_put(skb, b - skb->tail);
+ skb_trim(skb, b - skb->data);
+ return -1;
+}
+#else
+static int rtnetlink_fill_ifinfo(struct sk_buff *skb, struct device *dev,
+ int type, pid_t pid, u32 seq)
+{
+ struct ifinfomsg *r;
+ struct nlmsghdr *nlh;
+ unsigned char *b = skb->tail;
+
+ nlh = NLMSG_PUT(skb, pid, seq, type, sizeof(*r));
+ if (pid) nlh->nlmsg_flags |= NLM_F_MULTI;
+ r = NLMSG_DATA(nlh);
+ r->ifi_family = AF_UNSPEC;
+ r->ifi_type = dev->type;
+ r->ifi_index = dev->ifindex;
+ r->ifi_flags = dev->flags;
+ r->ifi_change = ~0U;
+
+ RTA_PUT(skb, IFLA_IFNAME, strlen(dev->name)+1, dev->name);
+ if (dev->addr_len) {
+ RTA_PUT(skb, IFLA_ADDRESS, dev->addr_len, dev->dev_addr);
+ RTA_PUT(skb, IFLA_BROADCAST, dev->addr_len, dev->broadcast);
+ }
+ if (1) {
+ unsigned mtu = dev->mtu;
+ RTA_PUT(skb, IFLA_MTU, sizeof(mtu), &mtu);
+ }
+ if (dev->ifindex != dev->iflink)
+ RTA_PUT(skb, IFLA_LINK, sizeof(int), &dev->iflink);
+ if (dev->qdisc_sleeping->ops)
+ RTA_PUT(skb, IFLA_QDISC,
+ strlen(dev->qdisc_sleeping->ops->id) + 1,
+ dev->qdisc_sleeping->ops->id);
+ if (dev->get_stats) {
+ struct net_device_stats *stats = dev->get_stats(dev);
+ if (stats)
+ RTA_PUT(skb, IFLA_STATS, sizeof(*stats), stats);
+ }
+ nlh->nlmsg_len = skb->tail - b;
+ return skb->len;
+
+nlmsg_failure:
+rtattr_failure:
+ skb_trim(skb, b - skb->data);
return -1;
}
+#endif
int rtnetlink_dump_ifinfo(struct sk_buff *skb, struct netlink_callback *cb)
{
return skb->len;
}
-
void rtmsg_ifinfo(int type, struct device *dev)
{
struct sk_buff *skb;
+#ifdef CONFIG_RTNL_OLD_IFINFO
int size = NLMSG_SPACE(sizeof(struct ifinfomsg)+
RTA_LENGTH(sizeof(struct net_device_stats)));
+#else
+ int size = NLMSG_GOODSIZE;
+#endif
skb = alloc_skb(size, GFP_KERNEL);
if (!skb)
extern __inline__ int
rtnetlink_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, int *errp)
{
- union {
- struct kern_rta rta;
- struct kern_ifa ifa;
- struct rtattr *ga[RTA_MAX-1];
- } u;
- struct rtmsg *rtm;
- struct ifaddrmsg *ifm;
- struct ndmsg *ndm;
+ struct rtnetlink_link *link;
+ struct rtnetlink_link *link_tab;
+ struct rtattr *rta[RTATTR_MAX];
+
int exclusive = 0;
+ int sz_idx, kind;
+ int min_len;
int family;
int type;
int err;
+ /* Only requests are handled by kernel now */
if (!(nlh->nlmsg_flags&NLM_F_REQUEST))
return 0;
+
type = nlh->nlmsg_type;
+
+ /* A control message: ignore them */
if (type < RTM_BASE)
return 0;
+
+ /* Unknown message: reply with EINVAL */
if (type > RTM_MAX)
goto err_inval;
+ type -= RTM_BASE;
+
+ /* All the messages must have at least 1 byte length */
if (nlh->nlmsg_len < NLMSG_LENGTH(sizeof(struct rtgenmsg)))
return 0;
+
family = ((struct rtgenmsg*)NLMSG_DATA(nlh))->rtgen_family;
- if (family > NPROTO || rtnetlink_links[family] == NULL) {
+ if (family > NPROTO) {
*errp = -EAFNOSUPPORT;
return -1;
}
- if (rtm_properties[type-RTM_BASE]&_S) {
- if (NETLINK_CREDS(skb)->uid) {
- *errp = -EPERM;
- return -1;
- }
+
+ link_tab = rtnetlink_links[family];
+ if (link_tab == NULL)
+ link_tab = rtnetlink_links[AF_UNSPEC];
+ link = &link_tab[type];
+
+ sz_idx = type>>2;
+ kind = type&3;
+
+ if (kind != 2 && NETLINK_CREDS(skb)->uid) {
+ *errp = -EPERM;
+ return -1;
}
- if (rtm_properties[type-RTM_BASE]&_G && nlh->nlmsg_flags&NLM_F_DUMP) {
- if (rtnetlink_links[family][type-RTM_BASE].dumpit == NULL)
+
+ if (kind == 2 && nlh->nlmsg_flags&NLM_F_DUMP) {
+ if (link->dumpit == NULL)
+ link = &(rtnetlink_links[AF_UNSPEC][type]);
+
+ if (link->dumpit == NULL)
goto err_inval;
/* Super-user locks all the tables to get atomic snapshot */
if (NETLINK_CREDS(skb)->uid == 0 && nlh->nlmsg_flags&NLM_F_ATOMIC)
atomic_inc(&rtnl_rlockct);
if ((*errp = netlink_dump_start(rtnl, skb, nlh,
- rtnetlink_links[family][type-RTM_BASE].dumpit,
+ link->dumpit,
rtnetlink_done)) != 0) {
if (NETLINK_CREDS(skb)->uid == 0 && nlh->nlmsg_flags&NLM_F_ATOMIC)
atomic_dec(&rtnl_rlockct);
skb_pull(skb, NLMSG_ALIGN(nlh->nlmsg_len));
return -1;
}
- if (rtm_properties[type-RTM_BASE]&_X) {
+
+ if (kind != 2) {
if (rtnl_exlock_nowait()) {
*errp = 0;
return -1;
}
exclusive = 1;
}
-
- memset(&u, 0, sizeof(u));
-
- switch (nlh->nlmsg_type) {
- case RTM_NEWROUTE:
- case RTM_DELROUTE:
- case RTM_GETROUTE:
- case RTM_NEWRULE:
- case RTM_DELRULE:
- case RTM_GETRULE:
- rtm = NLMSG_DATA(nlh);
- if (nlh->nlmsg_len < sizeof(*rtm))
- goto err_inval;
-
- if (rtm->rtm_optlen &&
- rtnetlink_get_rta(&u.rta, RTM_RTA(rtm), rtm->rtm_optlen) < 0)
- goto err_inval;
- break;
-
- case RTM_NEWADDR:
- case RTM_DELADDR:
- case RTM_GETADDR:
- ifm = NLMSG_DATA(nlh);
- if (nlh->nlmsg_len < sizeof(*ifm))
- goto err_inval;
-
- if (nlh->nlmsg_len > NLMSG_LENGTH(sizeof(*ifm)) &&
- rtnetlink_get_ifa(&u.ifa, IFA_RTA(ifm),
- nlh->nlmsg_len - NLMSG_LENGTH(sizeof(*ifm))) < 0)
- goto err_inval;
- break;
- case RTM_NEWNEIGH:
- case RTM_DELNEIGH:
- case RTM_GETNEIGH:
- ndm = NLMSG_DATA(nlh);
- if (nlh->nlmsg_len < sizeof(*ndm))
- goto err_inval;
- if (nlh->nlmsg_len > NLMSG_LENGTH(sizeof(*ndm)) &&
- rtnetlink_get_ga(u.ga, NDA_MAX, NDA_RTA(ndm),
- nlh->nlmsg_len - NLMSG_LENGTH(sizeof(*ndm))) < 0)
- goto err_inval;
- break;
+ memset(&rta, 0, sizeof(rta));
- case RTM_NEWLINK:
- case RTM_DELLINK:
- case RTM_GETLINK:
- /* Not urgent and even not necessary */
- default:
+ min_len = rtm_min[sz_idx];
+ if (nlh->nlmsg_len < min_len)
goto err_inval;
+
+ if (nlh->nlmsg_len > min_len) {
+ int attrlen = nlh->nlmsg_len - NLMSG_ALIGN(min_len);
+ struct rtattr *attr = (void*)nlh + NLMSG_ALIGN(min_len);
+
+ while (RTA_OK(attr, attrlen)) {
+ unsigned flavor = attr->rta_type;
+ if (flavor) {
+ if (flavor > rta_max[sz_idx])
+ goto err_inval;
+ rta[flavor-1] = attr;
+ }
+ attr = RTA_NEXT(attr, attrlen);
+ }
}
- if (rtnetlink_links[family][type-RTM_BASE].doit == NULL)
+ if (link->doit == NULL)
+ link = &(rtnetlink_links[AF_UNSPEC][type]);
+ if (link->doit == NULL)
goto err_inval;
- err = rtnetlink_links[family][type-RTM_BASE].doit(skb, nlh, (void *)&u);
+ err = link->doit(skb, nlh, (void *)&rta);
if (exclusive)
rtnl_exunlock();
{ NULL, rtnetlink_dump_all, },
{ NULL, NULL, },
- { NULL, NULL, },
- { NULL, NULL, },
+ { neigh_add, NULL, },
+ { neigh_delete, NULL, },
{ NULL, neigh_dump_info, },
{ NULL, NULL, },
void __scm_destroy(struct scm_cookie *scm)
{
struct scm_fp_list *fpl = scm->fp;
+ struct file *file;
int i;
if (fpl) {
fput(fpl->fp[i]);
kfree(fpl);
}
+
+ file = scm->file;
+ if (file) {
+ scm->sock = NULL;
+ scm->file = NULL;
+ fput(file);
+ }
}
int __scm_send(struct socket *sock, struct msghdr *msg, struct scm_cookie *p)
{
- int err;
struct cmsghdr *cmsg;
struct file *file;
- int acc_fd;
- unsigned scm_flags=0;
+ int acc_fd, err;
+ unsigned int scm_flags=0;
for (cmsg = CMSG_FIRSTHDR(msg); cmsg; cmsg = CMSG_NXTHDR(msg, cmsg))
{
memcpy(&acc_fd, CMSG_DATA(cmsg), sizeof(int));
p->sock = NULL;
if (acc_fd != -1) {
- if (acc_fd < 0 || acc_fd >= NR_OPEN ||
- (file=current->files->fd[acc_fd])==NULL)
- return -EBADF;
- if (!file->f_dentry->d_inode || !file->f_dentry->d_inode->i_sock)
- return -ENOTSOCK;
+ err = -EBADF;
+ file = fget(acc_fd);
+ if (!file)
+ goto error;
+ p->file = file;
+ err = -ENOTSOCK;
+ if (!file->f_dentry->d_inode ||
+ !file->f_dentry->d_inode->i_sock)
+ goto error;
p->sock = &file->f_dentry->d_inode->u.socket_i;
+ err = -EINVAL;
if (p->sock->state != SS_UNCONNECTED)
- return -EINVAL;
+ goto error;
}
scm_flags |= MSG_SYN;
break;
sk->bound_dev_if = 0;
}
else {
- if (copy_from_user(&req, optval, sizeof(req)) < 0)
+ if (copy_from_user(&req, optval, sizeof(req)))
return -EFAULT;
/* Remove any cached route for this socket. */
- if (sk->dst_cache) {
- ip_rt_put((struct rtable*)sk->dst_cache);
- sk->dst_cache = NULL;
- }
+ dst_release(xchg(&sk->dst_cache, NULL));
if (req.ifr_ifrn.ifrn_name[0] == '\0') {
sk->bound_dev_if = 0;
- }
- else {
+ } else {
struct device *dev = dev_get(req.ifr_ifrn.ifrn_name);
if (!dev)
return -EINVAL;
sk->bound_dev_if = dev->ifindex;
- if (sk->daddr) {
- int ret;
- ret = ip_route_output((struct rtable**)&sk->dst_cache,
- sk->daddr, sk->saddr,
- sk->ip_tos, sk->bound_dev_if);
- if (ret)
- return ret;
- }
}
}
return 0;
*/
atomic_add(size, &sk->wmem_alloc);
mem = kmalloc(size, priority);
- if (mem) {
- /* Recheck because kmalloc might sleep */
- if (atomic_read(&sk->wmem_alloc)+size < sk->sndbuf)
- return mem;
- kfree_s(mem, size);
- }
+ if (mem)
+ return mem;
atomic_sub(size, &sk->wmem_alloc);
}
return mem;
extern int netdev_max_backlog;
extern int netdev_fastroute;
+extern int net_msg_cost;
+extern int net_msg_burst;
extern __u32 sysctl_wmem_max;
extern __u32 sysctl_rmem_max;
&netdev_fastroute, sizeof(int), 0644, NULL,
&proc_dointvec},
#endif
+ {NET_CORE_MSG_COST, "message_cost",
+ &net_msg_cost, sizeof(int), 0644, NULL,
+ &proc_dointvec_jiffies},
+ {NET_CORE_MSG_BURST, "message_burst",
+ &net_msg_burst, sizeof(int), 0644, NULL,
+ &proc_dointvec_jiffies},
{ 0 }
};
#endif
net_random();
}
+int net_msg_cost = 5*HZ;
+int net_msg_burst = 10*5*HZ;
/*
* This enforces a rate limit: not more than one kernel message
*/
int net_ratelimit(void)
{
+ static unsigned long toks = 10*5*HZ;
static unsigned long last_msg;
static int missed;
-
- if ((jiffies - last_msg) >= 5*HZ) {
- if (missed)
- printk(KERN_WARNING "ipv4: (%d messages suppressed. Flood?)\n", missed);
- missed = 0;
- last_msg = jiffies;
+ unsigned long now = jiffies;
+
+ toks += now - xchg(&last_msg, now);
+ if (toks > net_msg_burst)
+ toks = net_msg_burst;
+ if (toks >= net_msg_cost) {
+ toks -= net_msg_cost;
+ if (missed)
+ printk(KERN_WARNING "NET: %d messages suppressed.\n", missed);
+ missed = 0;
return 1;
}
missed++;
- return 0;
+ return 0;
}
* Jonathan Layes : Added arpd support through kerneld
* message queue (960314)
* Mike Shaver : /proc/sys/net/ipv4/arp_* support
+ * Mike McLagan : Routing by source
* Stuart Cheshire : Metricom and grat arp fixes
* *** FOR 2.1 clean this up ***
* Lawrence V. Stefani: (08/12/96) Added FDDI support.
ip_acct_output
};
-#if defined(CONFIG_AX25) || defined(CONFIG_AX25)
-static struct neigh_ops arp_broken_ops =
+#if defined(CONFIG_AX25) || defined(CONFIG_AX25_MODULE) || \
+ defined(CONFIG_SHAPER) || defined(CONFIG_SHAPER_MODULE)
+struct neigh_ops arp_broken_ops =
{
AF_INET,
NULL,
- NULL,
- NULL,
- neigh_compat_output,
- neigh_compat_output,
+ arp_solicit,
+ arp_error_report,
neigh_compat_output,
neigh_compat_output,
+ ip_acct_output,
+ ip_acct_output,
};
#endif
NULL,
NULL,
parp_redo,
- { NULL, NULL, 30*HZ, 1*HZ, 60*HZ, 30*HZ, 5*HZ, 3, 3, 0, 3, 1*HZ, (8*HZ)/10, 1*HZ, 64 },
+ { NULL, NULL, &arp_tbl, 0, NULL, NULL,
+ 30*HZ, 1*HZ, 60*HZ, 30*HZ, 5*HZ, 3, 3, 0, 3, 1*HZ, (8*HZ)/10, 1*HZ, 64 },
30*HZ, 128, 512, 1024,
};
* is not from an IP number. We can't currently handle this, so toss
* it.
*/
- if (arp->ar_hln != dev->addr_len ||
+ if (in_dev == NULL ||
+ arp->ar_hln != dev->addr_len ||
dev->flags & IFF_NOARP ||
skb->pkt_type == PACKET_OTHERHOST ||
skb->pkt_type == PACKET_LOOPBACK ||
#endif
}
+	/* Understand only these message types */
+
+ if (arp->ar_op != __constant_htons(ARPOP_REPLY) &&
+ arp->ar_op != __constant_htons(ARPOP_REQUEST))
+ goto out;
+
/*
* Extract fields
*/
* and in the case of requests for us we add the requester to the arp
* cache.
*/
- switch (arp->ar_op) {
- case __constant_htons(ARPOP_REQUEST):
- if (ip_route_input(skb, tip, sip, 0, dev))
- goto out;
+
+ /* Special case: IPv4 duplicate address detection packet (RFC2131) */
+ if (sip == 0) {
+ if (arp->ar_op == __constant_htons(ARPOP_REQUEST) &&
+ inet_addr_type(tip) == RTN_LOCAL)
+ arp_send(ARPOP_REPLY,ETH_P_ARP,tip,dev,tip,sha,dev->dev_addr,dev->dev_addr);
+ goto out;
+ }
+
+ if (arp->ar_op == __constant_htons(ARPOP_REQUEST) &&
+ ip_route_input(skb, tip, sip, 0, dev) == 0) {
+
rt = (struct rtable*)skb->dst;
addr_type = rt->rt_type;
if (addr_type == RTN_LOCAL) {
- struct neighbour *n;
n = neigh_event_ns(&arp_tbl, sha, &sip, dev);
if (n) {
arp_send(ARPOP_REPLY,ETH_P_ARP,sip,dev,tip,sha,dev->dev_addr,sha);
neigh_release(n);
}
- } else if (in_dev && IN_DEV_FORWARD(in_dev)) {
+ goto out;
+ } else if (IN_DEV_FORWARD(in_dev)) {
if ((rt->rt_flags&RTCF_DNAT) ||
(addr_type == RTN_UNICAST && rt->u.dst.dev != dev &&
(IN_DEV_PROXY_ARP(in_dev) || pneigh_lookup(&arp_tbl, &tip, dev, 0)))) {
pneigh_enqueue(&arp_tbl, in_dev->arp_parms, skb);
return 0;
}
+ goto out;
}
}
- break;
+ }
- case __constant_htons(ARPOP_REPLY):
- if (inet_addr_type(sip) != RTN_UNICAST)
- goto out;
+ /* Update our ARP tables */
+
+ n = __neigh_lookup(&arp_tbl, &sip, dev, 0);
+
+#ifdef CONFIG_IP_ACCEPT_UNSOLICITED_ARP
+ /* Unsolicited ARP is not accepted by default.
+	   It is possible that this option should be enabled for some
+	   devices (strip is a candidate).
+ */
+ if (n == NULL &&
+ arp->ar_op == __constant_htons(ARPOP_REPLY) &&
+ inet_addr_type(sip) == RTN_UNICAST)
n = __neigh_lookup(&arp_tbl, &sip, dev, -1);
- if (n) {
- int state = NUD_REACHABLE;
- int override = 1;
- if (jiffies - n->updated < n->parms->locktime &&
- jiffies - n->updated >= 0)
- override = 0;
- if (skb->pkt_type != PACKET_HOST)
- state = NUD_STALE;
- neigh_update(n, sha, state, override, 1);
- neigh_release(n);
- }
- break;
+#endif
+
+ if (n) {
+ int state = NUD_REACHABLE;
+ int override = 0;
+
+	/* If several different ARP replies follow back-to-back,
+	   use the FIRST one. This can happen if several proxy
+	   agents are active. Taking the first reply prevents
+	   ARP thrashing and chooses the fastest router.
+ */
+ if (jiffies - n->updated >= n->parms->locktime)
+ override = 1;
+
+ /* Broadcast replies and request packets
+ do not assert neighbour reachability.
+ */
+ if (arp->ar_op != __constant_htons(ARPOP_REPLY) ||
+ skb->pkt_type != PACKET_HOST)
+ state = NUD_STALE;
+ neigh_update(n, sha, state, override, 1);
+ neigh_release(n);
}
out:
return 0;
}
if (dev == NULL) {
- ipv4_config.proxy_arp = 1;
+ ipv4_devconf.proxy_arp = 1;
return 0;
}
if (dev->ip_ptr) {
- ((struct in_device*)dev->ip_ptr)->flags |= IFF_IP_PROXYARP;
+ ((struct in_device*)dev->ip_ptr)->cnf.proxy_arp = 1;
return 0;
}
return -ENXIO;
{
unsigned flags = 0;
if (neigh->nud_state&NUD_PERMANENT)
- flags = ATF_PERM;
+ flags = ATF_PERM|ATF_COM;
else if (neigh->nud_state&NUD_VALID)
flags = ATF_COM;
return flags;
return pneigh_delete(&arp_tbl, &ip, dev);
if (mask == 0) {
if (dev == NULL) {
- ipv4_config.proxy_arp = 0;
+ ipv4_devconf.proxy_arp = 0;
return 0;
}
if (dev->ip_ptr) {
- ((struct in_device*)dev->ip_ptr)->flags &= ~IFF_IP_PROXYARP;
+ ((struct in_device*)dev->ip_ptr)->cnf.proxy_arp = 0;
return 0;
}
return -ENXIO;
proc_net_register(&proc_net_arp);
#endif
#ifdef CONFIG_SYSCTL
- arp_tbl.parms.sysctl_table = neigh_sysctl_register(NULL, &arp_tbl.parms, NET_IPV4, NET_IPV4_NEIGH, "ipv4");
+ neigh_sysctl_register(NULL, &arp_tbl.parms, NET_IPV4, NET_IPV4_NEIGH, "ipv4");
#endif
}
#include <net/route.h>
#include <net/ip_fib.h>
+struct ipv4_devconf ipv4_devconf = { 1, 1, 1, 1, 0, };
+static struct ipv4_devconf ipv4_devconf_dflt = { 1, 1, 1, 1, 1, };
+
#ifdef CONFIG_RTNETLINK
static void rtmsg_ifa(int event, struct in_ifaddr *);
#else
static struct notifier_block *inetaddr_chain;
static void inet_del_ifa(struct in_device *in_dev, struct in_ifaddr **ifap, int destroy);
-
+#ifdef CONFIG_SYSCTL
+static void devinet_sysctl_register(struct in_device *in_dev, struct ipv4_devconf *p);
+static void devinet_sysctl_unregister(struct ipv4_devconf *p);
+#endif
int inet_ifa_count;
int inet_dev_count;
return NULL;
inet_dev_count++;
memset(in_dev, 0, sizeof(*in_dev));
+ memcpy(&in_dev->cnf, &ipv4_devconf_dflt, sizeof(in_dev->cnf));
+ in_dev->cnf.sysctl = NULL;
in_dev->dev = dev;
- if ((in_dev->arp_parms = neigh_parms_alloc(&arp_tbl)) == NULL)
- in_dev->arp_parms = &arp_tbl.parms;
+ if ((in_dev->arp_parms = neigh_parms_alloc(dev, &arp_tbl)) == NULL) {
+ kfree(in_dev);
+ return NULL;
+ }
#ifdef CONFIG_SYSCTL
- else
- in_dev->arp_parms->sysctl_table =
- neigh_sysctl_register(dev, in_dev->arp_parms, NET_IPV4, NET_IPV4_NEIGH, "ipv4");
+ neigh_sysctl_register(dev, in_dev->arp_parms, NET_IPV4, NET_IPV4_NEIGH, "ipv4");
#endif
dev->ip_ptr = in_dev;
+#ifdef CONFIG_SYSCTL
+ devinet_sysctl_register(in_dev, &in_dev->cnf);
+#endif
if (dev->flags&IFF_UP)
ip_mc_up(in_dev);
return in_dev;
inet_free_ifa(ifa);
}
+#ifdef CONFIG_SYSCTL
+ devinet_sysctl_unregister(&in_dev->cnf);
+#endif
in_dev->dev->ip_ptr = NULL;
neigh_parms_release(&arp_tbl, in_dev->arp_parms);
kfree(in_dev);
int
inet_rtm_deladdr(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg)
{
- struct kern_ifa *k_ifa = arg;
+ struct rtattr **rta = arg;
struct in_device *in_dev;
struct ifaddrmsg *ifm = NLMSG_DATA(nlh);
struct in_ifaddr *ifa, **ifap;
return -EADDRNOTAVAIL;
for (ifap=&in_dev->ifa_list; (ifa=*ifap)!=NULL; ifap=&ifa->ifa_next) {
- if ((k_ifa->ifa_local && memcmp(k_ifa->ifa_local, &ifa->ifa_local, 4)) ||
- (k_ifa->ifa_label && strcmp(k_ifa->ifa_label, ifa->ifa_label)) ||
- (k_ifa->ifa_address &&
+ if ((rta[IFA_LOCAL-1] && memcmp(RTA_DATA(rta[IFA_LOCAL-1]), &ifa->ifa_local, 4)) ||
+ (rta[IFA_LABEL-1] && strcmp(RTA_DATA(rta[IFA_LABEL-1]), ifa->ifa_label)) ||
+ (rta[IFA_ADDRESS-1] &&
(ifm->ifa_prefixlen != ifa->ifa_prefixlen ||
- !inet_ifa_match(*(u32*)k_ifa->ifa_address, ifa))))
+ !inet_ifa_match(*(u32*)RTA_DATA(rta[IFA_ADDRESS-1]), ifa))))
continue;
inet_del_ifa(in_dev, ifap, 1);
return 0;
int
inet_rtm_newaddr(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg)
{
- struct kern_ifa *k_ifa = arg;
+ struct rtattr **rta = arg;
struct device *dev;
struct in_device *in_dev;
struct ifaddrmsg *ifm = NLMSG_DATA(nlh);
struct in_ifaddr *ifa;
- if (ifm->ifa_prefixlen > 32 || k_ifa->ifa_local == NULL)
+ if (ifm->ifa_prefixlen > 32 || rta[IFA_LOCAL-1] == NULL)
return -EINVAL;
if ((dev = dev_get_by_index(ifm->ifa_index)) == NULL)
if ((ifa = inet_alloc_ifa()) == NULL)
return -ENOBUFS;
- if (k_ifa->ifa_address == NULL)
- k_ifa->ifa_address = k_ifa->ifa_local;
- memcpy(&ifa->ifa_local, k_ifa->ifa_local, 4);
- memcpy(&ifa->ifa_address, k_ifa->ifa_address, 4);
+ if (rta[IFA_ADDRESS-1] == NULL)
+ rta[IFA_ADDRESS-1] = rta[IFA_LOCAL-1];
+ memcpy(&ifa->ifa_local, RTA_DATA(rta[IFA_LOCAL-1]), 4);
+ memcpy(&ifa->ifa_address, RTA_DATA(rta[IFA_ADDRESS-1]), 4);
ifa->ifa_prefixlen = ifm->ifa_prefixlen;
ifa->ifa_mask = inet_make_mask(ifm->ifa_prefixlen);
- if (k_ifa->ifa_broadcast)
- memcpy(&ifa->ifa_broadcast, k_ifa->ifa_broadcast, 4);
- if (k_ifa->ifa_anycast)
- memcpy(&ifa->ifa_anycast, k_ifa->ifa_anycast, 4);
+ if (rta[IFA_BROADCAST-1])
+ memcpy(&ifa->ifa_broadcast, RTA_DATA(rta[IFA_BROADCAST-1]), 4);
+ if (rta[IFA_ANYCAST-1])
+ memcpy(&ifa->ifa_anycast, RTA_DATA(rta[IFA_ANYCAST-1]), 4);
ifa->ifa_flags = ifm->ifa_flags;
ifa->ifa_scope = ifm->ifa_scope;
ifa->ifa_dev = in_dev;
- if (k_ifa->ifa_label)
- memcpy(ifa->ifa_label, k_ifa->ifa_label, IFNAMSIZ);
+ if (rta[IFA_LABEL-1])
+ memcpy(ifa->ifa_label, RTA_DATA(rta[IFA_LABEL-1]), IFNAMSIZ);
else
memcpy(ifa->ifa_label, dev->name, IFNAMSIZ);
case SIOCGIFBRDADDR: /* Get the broadcast address */
case SIOCGIFDSTADDR: /* Get the destination address */
case SIOCGIFNETMASK: /* Get the netmask for the interface */
- case SIOCGIFPFLAGS: /* Get per device sysctl controls */
/* Note that this ioctls will not sleep,
so that we do not impose a lock.
One day we will be forced to put shlock here (I mean SMP)
break;
case SIOCSIFFLAGS:
- case SIOCSIFPFLAGS: /* Set per device sysctl controls */
if (!suser())
return -EACCES;
rtnl_lock();
sin->sin_addr.s_addr = ifa->ifa_mask;
goto rarok;
- case SIOCGIFPFLAGS:
- ifr.ifr_flags = in_dev->flags;
- goto rarok;
-
case SIOCSIFFLAGS:
#ifdef CONFIG_IP_ALIAS
if (colon) {
ret = dev_change_flags(dev, ifr.ifr_flags);
break;
- case SIOCSIFPFLAGS:
- in_dev->flags = ifr.ifr_flags;
- break;
-
case SIOCSIFADDR: /* Set interface address (and family) */
if (inet_abc_len(sin->sin_addr.s_addr) < 0) {
ret = -EINVAL;
done += sizeof(ifr);
continue;
}
- if (len < sizeof(ifr))
+ if (len < (int) sizeof(ifr))
return done;
memset(&ifr, 0, sizeof(struct ifreq));
if (ifa->ifa_label)
nlmsg_failure:
rtattr_failure:
- skb_put(skb, b - skb->tail);
+ skb_trim(skb, b - skb->data);
return -1;
}
{
{ NULL, NULL, },
{ NULL, NULL, },
- { NULL, rtnetlink_dump_ifinfo, },
+ { NULL, NULL, },
{ NULL, NULL, },
{ inet_rtm_newaddr, NULL, },
{ inet_rtm_getroute, inet_dump_fib, },
{ NULL, NULL, },
- { neigh_add, NULL, },
- { neigh_delete, NULL, },
- { NULL, neigh_dump_info, },
+ { NULL, NULL, },
+ { NULL, NULL, },
+ { NULL, NULL, },
{ NULL, NULL, },
#ifdef CONFIG_IP_MULTIPLE_TABLES
#endif /* CONFIG_RTNETLINK */
+
+#ifdef CONFIG_SYSCTL
+
+void inet_forward_change(void)
+{
+ struct device *dev;
+ int on = ipv4_devconf.forwarding;
+
+ ipv4_devconf.accept_redirects = !on;
+ ipv4_devconf_dflt.forwarding = on;
+
+ for (dev = dev_base; dev; dev = dev->next) {
+ struct in_device *in_dev = dev->ip_ptr;
+ if (in_dev)
+ in_dev->cnf.forwarding = on;
+ }
+
+ rt_cache_flush(0);
+
+ ip_statistics.IpForwarding = on ? 1 : 2;
+}
+
+static
+int devinet_sysctl_forward(ctl_table *ctl, int write, struct file * filp,
+ void *buffer, size_t *lenp)
+{
+ int *valp = ctl->data;
+ int val = *valp;
+ int ret;
+
+ ret = proc_dointvec(ctl, write, filp, buffer, lenp);
+
+ if (write && *valp != val) {
+ if (valp == &ipv4_devconf.forwarding)
+ inet_forward_change();
+ else if (valp != &ipv4_devconf_dflt.forwarding)
+ rt_cache_flush(0);
+ }
+
+ return ret;
+}
+
+static struct devinet_sysctl_table
+{
+ struct ctl_table_header *sysctl_header;
+ ctl_table devinet_vars[12];
+ ctl_table devinet_dev[2];
+ ctl_table devinet_conf_dir[2];
+ ctl_table devinet_proto_dir[2];
+ ctl_table devinet_root_dir[2];
+} devinet_sysctl = {
+ NULL,
+ {{NET_IPV4_CONF_FORWARDING, "forwarding",
+ &ipv4_devconf.forwarding, sizeof(int), 0644, NULL,
+ &devinet_sysctl_forward},
+ {NET_IPV4_CONF_MC_FORWARDING, "mc_forwarding",
+ &ipv4_devconf.mc_forwarding, sizeof(int), 0444, NULL,
+ &proc_dointvec},
+ {NET_IPV4_CONF_ACCEPT_REDIRECTS, "accept_redirects",
+ &ipv4_devconf.accept_redirects, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+ {NET_IPV4_CONF_SECURE_REDIRECTS, "secure_redirects",
+ &ipv4_devconf.secure_redirects, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+ {NET_IPV4_CONF_SHARED_MEDIA, "shared_media",
+ &ipv4_devconf.shared_media, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+ {NET_IPV4_CONF_RP_FILTER, "rp_filter",
+ &ipv4_devconf.rp_filter, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+ {NET_IPV4_CONF_SEND_REDIRECTS, "send_redirects",
+ &ipv4_devconf.send_redirects, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+ {NET_IPV4_CONF_ACCEPT_SOURCE_ROUTE, "accept_source_route",
+ &ipv4_devconf.accept_source_route, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+ {NET_IPV4_CONF_PROXY_ARP, "proxy_arp",
+ &ipv4_devconf.proxy_arp, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+ {NET_IPV4_CONF_BOOTP_RELAY, "bootp_relay",
+ &ipv4_devconf.bootp_relay, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+ {NET_IPV4_CONF_LOG_MARTIANS, "log_martians",
+ &ipv4_devconf.log_martians, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+ {0}},
+
+ {{NET_PROTO_CONF_ALL, "all", NULL, 0, 0555, devinet_sysctl.devinet_vars},{0}},
+ {{NET_IPV4_CONF, "conf", NULL, 0, 0555, devinet_sysctl.devinet_dev},{0}},
+ {{NET_IPV4, "ipv4", NULL, 0, 0555, devinet_sysctl.devinet_conf_dir},{0}},
+ {{CTL_NET, "net", NULL, 0, 0555, devinet_sysctl.devinet_proto_dir},{0}}
+};
+
+static void devinet_sysctl_register(struct in_device *in_dev, struct ipv4_devconf *p)
+{
+ int i;
+ struct device *dev = in_dev ? in_dev->dev : NULL;
+ struct devinet_sysctl_table *t;
+
+ t = kmalloc(sizeof(*t), GFP_KERNEL);
+ if (t == NULL)
+ return;
+ memcpy(t, &devinet_sysctl, sizeof(*t));
+ for (i=0; i<sizeof(t->devinet_vars)/sizeof(t->devinet_vars[0])-1; i++) {
+ t->devinet_vars[i].data += (char*)p - (char*)&ipv4_devconf;
+ t->devinet_vars[i].de = NULL;
+ }
+ if (dev) {
+ t->devinet_dev[0].procname = dev->name;
+ t->devinet_dev[0].ctl_name = dev->ifindex;
+ } else {
+ t->devinet_dev[0].procname = "default";
+ t->devinet_dev[0].ctl_name = NET_PROTO_CONF_DEFAULT;
+ }
+ t->devinet_dev[0].child = t->devinet_vars;
+ t->devinet_dev[0].de = NULL;
+ t->devinet_conf_dir[0].child = t->devinet_dev;
+ t->devinet_conf_dir[0].de = NULL;
+ t->devinet_proto_dir[0].child = t->devinet_conf_dir;
+ t->devinet_proto_dir[0].de = NULL;
+ t->devinet_root_dir[0].child = t->devinet_proto_dir;
+ t->devinet_root_dir[0].de = NULL;
+
+ t->sysctl_header = register_sysctl_table(t->devinet_root_dir, 0);
+ if (t->sysctl_header == NULL)
+ kfree(t);
+}
+
+static void devinet_sysctl_unregister(struct ipv4_devconf *p)
+{
+ if (p->sysctl) {
+ struct devinet_sysctl_table *t = p->sysctl;
+ p->sysctl = NULL;
+ unregister_sysctl_table(t->sysctl_header);
+ kfree(t);
+ }
+}
+#endif
+
#ifdef CONFIG_IP_PNP_BOOTP
/*
#ifdef CONFIG_RTNETLINK
rtnetlink_links[AF_INET] = inet_rtnetlink_table;
#endif
+#ifdef CONFIG_SYSCTL
+ devinet_sysctl.sysctl_header =
+ register_sysctl_table(devinet_sysctl.devinet_root_dir, 0);
+ devinet_sysctl_register(NULL, &ipv4_devconf_dflt);
+#endif
}
#endif /* CONFIG_IP_MULTIPLE_TABLES */
if (flushed)
- rt_cache_flush(RT_FLUSH_DELAY);
+ rt_cache_flush(-1);
}
#ifdef CONFIG_RTNETLINK
+static int inet_check_attr(struct rtmsg *r, struct rtattr **rta)
+{
+ int i;
+
+ for (i=1; i<=RTA_MAX; i++) {
+ struct rtattr *attr = rta[i-1];
+ if (attr) {
+ if (RTA_PAYLOAD(attr) < 4)
+ return -EINVAL;
+#ifndef CONFIG_RTNL_OLD_IFINFO
+ if (i != RTA_MULTIPATH && i != RTA_METRICS)
+#endif
+ rta[i-1] = (struct rtattr*)RTA_DATA(attr);
+ }
+ }
+ return 0;
+}
+
int inet_rtm_delroute(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg)
{
struct fib_table * tb;
- struct kern_rta *rta = arg;
+ struct rtattr **rta = arg;
struct rtmsg *r = NLMSG_DATA(nlh);
+ if (inet_check_attr(r, rta))
+ return -EINVAL;
+
tb = fib_get_table(r->rtm_table);
if (tb)
- return tb->tb_delete(tb, r, rta, nlh, &NETLINK_CB(skb));
+ return tb->tb_delete(tb, r, (struct kern_rta*)rta, nlh, &NETLINK_CB(skb));
return -ESRCH;
}
int inet_rtm_newroute(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg)
{
struct fib_table * tb;
- struct kern_rta *rta = arg;
+ struct rtattr **rta = arg;
struct rtmsg *r = NLMSG_DATA(nlh);
+ if (inet_check_attr(r, rta))
+ return -EINVAL;
+
tb = fib_new_table(r->rtm_table);
if (tb)
- return tb->tb_insert(tb, r, rta, nlh, &NETLINK_CB(skb));
+ return tb->tb_insert(tb, r, (struct kern_rta*)rta, nlh, &NETLINK_CB(skb));
return -ENOBUFS;
}
req.nlh.nlmsg_len = sizeof(req);
req.nlh.nlmsg_type = cmd;
- req.nlh.nlmsg_flags = NLM_F_REQUEST|NLM_F_CREATE;
+ req.nlh.nlmsg_flags = NLM_F_REQUEST|NLM_F_CREATE|NLM_F_APPEND;
req.nlh.nlmsg_pid = 0;
req.nlh.nlmsg_seq = 0;
switch (event) {
case NETDEV_UP:
fib_add_ifaddr(ifa);
- rt_cache_flush(2*HZ);
+ rt_cache_flush(-1);
break;
case NETDEV_DOWN:
fib_del_ifaddr(ifa);
- rt_cache_flush(1*HZ);
+ rt_cache_flush(-1);
break;
}
return NOTIFY_DONE;
#ifdef CONFIG_IP_ROUTE_MULTIPATH
fib_sync_up(dev);
#endif
- rt_cache_flush(2*HZ);
+ rt_cache_flush(-1);
break;
case NETDEV_DOWN:
if (fib_sync_down(0, dev, 0))
break;
case NETDEV_CHANGEMTU:
case NETDEV_CHANGE:
- rt_cache_flush(1*HZ);
+ rt_cache_flush(0);
break;
}
return NOTIFY_DONE;
&& f->fn_tos == tos
#endif
) {
+ struct fib_node **ins_fp;
+
state = f->fn_state;
if (n->nlmsg_flags&NLM_F_EXCL && !(state&FN_S_ZOMBIE))
return -EEXIST;
f->fn_state = 0;
fib_release_info(old_fi);
if (state&FN_S_ACCESSED)
- rt_cache_flush(RT_FLUSH_DELAY);
+ rt_cache_flush(-1);
return 0;
}
+
+ ins_fp = fp;
+
for ( ; (f = *fp) != NULL && fn_key_eq(f->fn_key, key)
#ifdef CONFIG_IP_ROUTE_TOS
&& f->fn_tos == tos
f->fn_state = 0;
rtmsg_fib(RTM_NEWROUTE, f, z, tb->tb_id, n, req);
if (state&FN_S_ACCESSED)
- rt_cache_flush(RT_FLUSH_DELAY);
+ rt_cache_flush(-1);
return 0;
}
return -EEXIST;
}
}
+ if (!(n->nlmsg_flags&NLM_F_APPEND)) {
+ fp = ins_fp;
+ f = *fp;
+ }
} else {
if (!(n->nlmsg_flags&NLM_F_CREATE))
return -ENOENT;
* Insert new entry to the list.
*/
- start_bh_atomic();
new_f->fn_next = f;
+ /* ATOMIC_SET */
*fp = new_f;
- end_bh_atomic();
fz->fz_nent++;
rtmsg_fib(RTM_NEWROUTE, new_f, z, tb->tb_id, n, req);
- rt_cache_flush(RT_FLUSH_DELAY);
+ rt_cache_flush(-1);
return 0;
}
rtmsg_fib(RTM_DELROUTE, f, z, tb->tb_id, n, req);
if (f->fn_state&FN_S_ACCESSED) {
f->fn_state &= ~FN_S_ACCESSED;
- rt_cache_flush(RT_FLUSH_DELAY);
+ rt_cache_flush(-1);
}
if (++fib_hash_zombies > 128)
fib_flush();
#define FRprintk(a...)
+#ifndef CONFIG_RTNL_OLD_IFINFO
+#define RTA_IFNAME RTA_IIF
+#endif
+
struct fib_rule
{
struct fib_rule *r_next;
- unsigned r_preference;
+ u32 r_preference;
unsigned char r_table;
unsigned char r_action;
unsigned char r_dst_len;
int inet_rtm_delrule(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg)
{
- struct kern_rta *rta = arg;
+ struct rtattr **rta = arg;
struct rtmsg *rtm = NLMSG_DATA(nlh);
struct fib_rule *r, **rp;
for (rp=&fib_rules; (r=*rp) != NULL; rp=&r->r_next) {
- if ((!rta->rta_src || memcmp(rta->rta_src, &r->r_src, 4) == 0) &&
+ if ((!rta[RTA_SRC-1] || memcmp(RTA_DATA(rta[RTA_SRC-1]), &r->r_src, 4) == 0) &&
rtm->rtm_src_len == r->r_src_len &&
rtm->rtm_dst_len == r->r_dst_len &&
- (!rta->rta_dst || memcmp(rta->rta_dst, &r->r_dst, 4) == 0) &&
+ (!rta[RTA_DST-1] || memcmp(RTA_DATA(rta[RTA_DST-1]), &r->r_dst, 4) == 0) &&
rtm->rtm_tos == r->r_tos &&
rtm->rtm_type == r->r_action &&
- (!rta->rta_priority || *rta->rta_priority == r->r_preference) &&
- (!rta->rta_ifname || strcmp(rta->rta_ifname, r->r_ifname) == 0) &&
+ (!rta[RTA_PRIORITY-1] || memcmp(RTA_DATA(rta[RTA_PRIORITY-1]), &r->r_preference, 4) == 0) &&
+ (!rta[RTA_IFNAME-1] || strcmp(RTA_DATA(rta[RTA_IFNAME-1]), r->r_ifname) == 0) &&
(!rtm->rtm_table || (r && rtm->rtm_table == r->r_table))) {
*rp = r->r_next;
if (r != &default_rule && r != &main_rule && r != &local_rule)
int inet_rtm_newrule(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg)
{
- struct kern_rta *rta = arg;
+ struct rtattr **rta = arg;
struct rtmsg *rtm = NLMSG_DATA(nlh);
struct fib_rule *r, *new_r, **rp;
unsigned char table_id;
(rtm->rtm_tos & ~IPTOS_TOS_MASK))
return -EINVAL;
+ if (rta[RTA_IFNAME-1] && RTA_PAYLOAD(rta[RTA_IFNAME-1]) > IFNAMSIZ)
+ return -EINVAL;
+
table_id = rtm->rtm_table;
if (table_id == RT_TABLE_UNSPEC) {
struct fib_table *table;
if (!new_r)
return -ENOMEM;
memset(new_r, 0, sizeof(*new_r));
- if (rta->rta_src)
- memcpy(&new_r->r_src, rta->rta_src, 4);
- if (rta->rta_dst)
- memcpy(&new_r->r_dst, rta->rta_dst, 4);
- if (rta->rta_gw)
- memcpy(&new_r->r_srcmap, rta->rta_gw, 4);
+ if (rta[RTA_SRC-1])
+ memcpy(&new_r->r_src, RTA_DATA(rta[RTA_SRC-1]), 4);
+ if (rta[RTA_DST-1])
+ memcpy(&new_r->r_dst, RTA_DATA(rta[RTA_DST-1]), 4);
+ if (rta[RTA_GATEWAY-1])
+ memcpy(&new_r->r_srcmap, RTA_DATA(rta[RTA_GATEWAY-1]), 4);
new_r->r_src_len = rtm->rtm_src_len;
new_r->r_dst_len = rtm->rtm_dst_len;
new_r->r_srcmask = inet_make_mask(rtm->rtm_src_len);
new_r->r_tos = rtm->rtm_tos;
new_r->r_action = rtm->rtm_type;
new_r->r_flags = rtm->rtm_flags;
- if (rta->rta_priority)
- new_r->r_preference = *rta->rta_priority;
+ if (rta[RTA_PRIORITY-1])
+ memcpy(&new_r->r_preference, RTA_DATA(rta[RTA_PRIORITY-1]), 4);
new_r->r_table = table_id;
- if (rta->rta_ifname) {
+ if (rta[RTA_IFNAME-1]) {
struct device *dev;
- memcpy(new_r->r_ifname, rta->rta_ifname, IFNAMSIZ);
+ memcpy(new_r->r_ifname, RTA_DATA(rta[RTA_IFNAME-1]), IFNAMSIZ);
+ new_r->r_ifname[IFNAMSIZ-1] = 0;
new_r->r_ifindex = -1;
- dev = dev_get(rta->rta_ifname);
+ dev = dev_get(new_r->r_ifname);
if (dev)
new_r->r_ifindex = dev->ifindex;
}
rtm->rtm_table = r->r_table;
rtm->rtm_protocol = 0;
rtm->rtm_scope = 0;
+#ifdef CONFIG_RTNL_OLD_IFINFO
rtm->rtm_nhs = 0;
- rtm->rtm_type = r->r_action;
rtm->rtm_optlen = 0;
+#endif
+ rtm->rtm_type = r->r_action;
rtm->rtm_flags = r->r_flags;
if (r->r_dst_len)
return 0;
}
+#ifndef CONFIG_RTNL_OLD_IFINFO
+static int
+fib_count_nexthops(struct rtattr *rta)
+{
+ int nhs = 0;
+ struct rtnexthop *nhp = RTA_DATA(rta);
+ int nhlen = RTA_PAYLOAD(rta);
+
+ while (nhlen >= sizeof(struct rtnexthop)) {
+ if ((nhlen -= nhp->rtnh_len) < 0)
+ return 0;
+ nhs++;
+ nhp = RTNH_NEXT(nhp);
+	}
+ return nhs;
+}
+#endif
+
+#ifdef CONFIG_RTNL_OLD_IFINFO
static int
fib_get_nhs(struct fib_info *fi, const struct nlmsghdr *nlh, const struct rtmsg *r)
{
struct rtnexthop *nhp = RTM_RTNH(r);
int nhlen = RTM_NHLEN(nlh, r);
+#else
+static int
+fib_get_nhs(struct fib_info *fi, const struct rtattr *rta, const struct rtmsg *r)
+{
+ struct rtnexthop *nhp = RTA_DATA(rta);
+ int nhlen = RTA_PAYLOAD(rta);
+#endif
change_nexthops(fi) {
int attrlen = nhlen - sizeof(struct rtnexthop);
}
#ifdef CONFIG_IP_ROUTE_MULTIPATH
+#ifdef CONFIG_RTNL_OLD_IFINFO
if (r->rtm_nhs == 0)
return 0;
nhp = RTM_RTNH(r);
nhlen = RTM_NHLEN(nlh, r);
+#else
+ if (rta->rta_mp == NULL)
+ return 0;
+ nhp = RTA_DATA(rta->rta_mp);
+ nhlen = RTA_PAYLOAD(rta->rta_mp);
+#endif
for_nexthops(fi) {
int attrlen = nhlen - sizeof(struct rtnexthop);
struct fib_info *fi = NULL;
struct fib_info *ofi;
#ifdef CONFIG_IP_ROUTE_MULTIPATH
+#ifdef CONFIG_RTNL_OLD_IFINFO
int nhs = r->rtm_nhs ? : 1;
+#else
+ int nhs = 1;
+#endif
#else
const int nhs = 1;
#endif
/* Fast check to catch the most weird cases */
- if (fib_props[r->rtm_type].scope > r->rtm_scope) {
- printk("Einval 1\n");
+ if (fib_props[r->rtm_type].scope > r->rtm_scope)
goto err_inval;
+
+#ifdef CONFIG_IP_ROUTE_MULTIPATH
+#ifndef CONFIG_RTNL_OLD_IFINFO
+ if (rta->rta_mp) {
+ nhs = fib_count_nexthops(rta->rta_mp);
+ if (nhs == 0)
+ goto err_inval;
}
+#endif
+#endif
fi = kmalloc(sizeof(*fi)+nhs*sizeof(struct fib_nh), GFP_KERNEL);
err = -ENOBUFS;
fi->fib_protocol = r->rtm_protocol;
fi->fib_nhs = nhs;
fi->fib_flags = r->rtm_flags;
+#ifdef CONFIG_RTNL_OLD_IFINFO
if (rta->rta_mtu)
fi->fib_mtu = *rta->rta_mtu;
if (rta->rta_rtt)
fi->fib_rtt = *rta->rta_rtt;
if (rta->rta_window)
fi->fib_window = *rta->rta_window;
+#else
+ if (rta->rta_mx) {
+ int attrlen = RTA_PAYLOAD(rta->rta_mx);
+ struct rtattr *attr = RTA_DATA(rta->rta_mx);
+
+ while (RTA_OK(attr, attrlen)) {
+ unsigned flavor = attr->rta_type;
+ if (flavor) {
+ if (flavor > FIB_MAX_METRICS)
+ goto failure;
+ fi->fib_metrics[flavor-1] = *(unsigned*)RTA_DATA(attr);
+ }
+ attr = RTA_NEXT(attr, attrlen);
+ }
+ }
+#endif
if (rta->rta_prefsrc)
memcpy(&fi->fib_prefsrc, rta->rta_prefsrc, 4);
+#ifndef CONFIG_RTNL_OLD_IFINFO
+ if (rta->rta_mp) {
+#else
if (r->rtm_nhs) {
+#endif
#ifdef CONFIG_IP_ROUTE_MULTIPATH
+#ifdef CONFIG_RTNL_OLD_IFINFO
if ((err = fib_get_nhs(fi, nlh, r)) != 0)
+#else
+ if ((err = fib_get_nhs(fi, rta->rta_mp, r)) != 0)
+#endif
goto failure;
if (rta->rta_oif && fi->fib_nh->nh_oif != *rta->rta_oif)
goto err_inval;
#endif
if (fib_props[r->rtm_type].error) {
+#ifndef CONFIG_RTNL_OLD_IFINFO
+ if (rta->rta_gw || rta->rta_oif || rta->rta_mp)
+#else
if (rta->rta_gw || rta->rta_oif || r->rtm_nhs)
+#endif
goto err_inval;
goto link_it;
}
struct rtmsg *rtm;
struct nlmsghdr *nlh;
unsigned char *b = skb->tail;
+#ifdef CONFIG_RTNL_OLD_IFINFO
unsigned char *o;
+#endif
nlh = NLMSG_PUT(skb, pid, seq, event, sizeof(*rtm));
rtm = NLMSG_DATA(nlh);
rtm->rtm_type = type;
rtm->rtm_flags = fi->fib_flags;
rtm->rtm_scope = scope;
+#ifdef CONFIG_RTNL_OLD_IFINFO
rtm->rtm_nhs = 0;
o = skb->tail;
+#endif
if (rtm->rtm_dst_len)
RTA_PUT(skb, RTA_DST, 4, dst);
rtm->rtm_protocol = fi->fib_protocol;
+#ifdef CONFIG_RTNL_OLD_IFINFO
if (fi->fib_mtu)
RTA_PUT(skb, RTA_MTU, sizeof(unsigned), &fi->fib_mtu);
if (fi->fib_window)
RTA_PUT(skb, RTA_WINDOW, sizeof(unsigned), &fi->fib_window);
if (fi->fib_rtt)
RTA_PUT(skb, RTA_RTT, sizeof(unsigned), &fi->fib_rtt);
+#else
+ if (fi->fib_mtu || fi->fib_window || fi->fib_rtt) {
+ int i;
+ struct rtattr *mx = (struct rtattr *)skb->tail;
+ RTA_PUT(skb, RTA_METRICS, 0, NULL);
+ for (i=0; i<FIB_MAX_METRICS; i++) {
+ if (fi->fib_metrics[i])
+ RTA_PUT(skb, i+1, sizeof(unsigned), fi->fib_metrics + i);
+ }
+ mx->rta_len = skb->tail - (u8*)mx;
+ }
+#endif
if (fi->fib_prefsrc)
RTA_PUT(skb, RTA_PREFSRC, 4, &fi->fib_prefsrc);
if (fi->fib_nhs == 1) {
if (fi->fib_nh->nh_oif)
RTA_PUT(skb, RTA_OIF, sizeof(int), &fi->fib_nh->nh_oif);
}
+#ifdef CONFIG_RTNL_OLD_IFINFO
rtm->rtm_optlen = skb->tail - o;
+#endif
#ifdef CONFIG_IP_ROUTE_MULTIPATH
if (fi->fib_nhs > 1) {
struct rtnexthop *nhp;
+#ifndef CONFIG_RTNL_OLD_IFINFO
+ struct rtattr *mp_head;
+ if (skb_tailroom(skb) <= RTA_SPACE(0))
+ goto rtattr_failure;
+ mp_head = (struct rtattr*)skb_put(skb, RTA_SPACE(0));
+#endif
for_nexthops(fi) {
if (skb_tailroom(skb) < RTA_ALIGN(RTA_ALIGN(sizeof(*nhp)) + 4))
goto rtattr_failure;
if (nh->nh_gw)
RTA_PUT(skb, RTA_GATEWAY, 4, &nh->nh_gw);
nhp->rtnh_len = skb->tail - (unsigned char*)nhp;
+#ifdef CONFIG_RTNL_OLD_IFINFO
rtm->rtm_nhs++;
+#endif
} endfor_nexthops(fi);
+#ifndef CONFIG_RTNL_OLD_IFINFO
+ mp_head->rta_type = RTA_MULTIPATH;
+ mp_head->rta_len = skb->tail - (u8*)mp_head;
+#endif
}
#endif
nlh->nlmsg_len = skb->tail - b;
nlmsg_failure:
rtattr_failure:
- skb_put(skb, b - skb->tail);
+ skb_trim(skb, b - skb->data);
return -1;
}
nl->nlmsg_flags = 0;
} else {
nl->nlmsg_type = RTM_NEWROUTE;
- nl->nlmsg_flags = NLM_F_CREATE;
+ nl->nlmsg_flags = NLM_F_REQUEST|NLM_F_CREATE;
rtm->rtm_protocol = RTPROT_BOOT;
- if (plen != 0)
- nl->nlmsg_flags |= NLM_F_REPLACE;
}
rtm->rtm_dst_len = plen;
ptr = &((struct sockaddr_in*)&r->rt_gateway)->sin_addr.s_addr;
if (r->rt_gateway.sa_family == AF_INET && *ptr) {
rta->rta_gw = ptr;
- if (r->rt_flags&RTF_GATEWAY)
+ if (r->rt_flags&RTF_GATEWAY && inet_addr_type(*ptr) == RTN_UNICAST)
rtm->rtm_scope = RT_SCOPE_UNIVERSE;
}
if (r->rt_flags&RTF_GATEWAY && rta->rta_gw == NULL)
return -EINVAL;
+#ifdef CONFIG_RTNL_OLD_IFINFO
/* Ugly conversion from rtentry types to unsigned */
if (r->rt_flags&RTF_IRTT) {
if (sizeof(*rta->rta_mtu) != sizeof(r->rt_mtu))
*rta->rta_mtu = r->rt_mtu;
}
+#else
+ if (r->rt_flags&(RTF_MTU|RTF_WINDOW|RTF_IRTT))
+		printk(KERN_DEBUG "SIOCRT*: mtu/window/irtt are not implemented.\n");
+#endif
return 0;
}
struct in_ifaddr *ifa;
u32 mask;
- if (!ipv4_config.log_martians ||
- !IS_ROUTER ||
- !in_dev || !in_dev->ifa_list ||
+ if (!in_dev || !in_dev->ifa_list ||
+ !IN_DEV_LOG_MARTIANS(in_dev) ||
+ !IN_DEV_FORWARD(in_dev) ||
len < 4 ||
!(rt->rt_flags&RTCF_DIRECTSRC))
return;
* contradict to specs provided this delay is small enough.
*/
-#define IGMP_V1_SEEN(in_dev) ((in_dev)->mr_v1_seen && jiffies - (in_dev)->mr_v1_seen < 0)
+#define IGMP_V1_SEEN(in_dev) ((in_dev)->mr_v1_seen && (long)(jiffies - (in_dev)->mr_v1_seen) < 0)
/*
* Timer management
if (LOCAL_MCAST(im->multiaddr))
continue;
im->unsolicit_count = 0;
- if (im->tm_running && im->timer.expires-jiffies > max_delay)
+ if (im->tm_running && (long)(im->timer.expires-jiffies) > max_delay)
igmp_stop_timer(im);
igmp_start_timer(im, max_delay);
}
* use output device for accounting.
* Jos Vos : Call forward firewall after routing
* (always use output device).
+ * Mike McLagan : Routing by source
*/
#include <linux/config.h>
qp->dev = skb->dev;
/* Start a timer for this entry. */
+ init_timer(&qp->timer);
qp->timer.expires = jiffies + sysctl_ipfrag_time; /* about 30 seconds */
qp->timer.data = (unsigned long) qp; /* pointer to queue */
qp->timer.function = ip_expire; /* expire function */
}
if (tunnel->parms.i_flags&GRE_SEQ) {
if (!(flags&GRE_SEQ) ||
- (tunnel->i_seqno && seqno - tunnel->i_seqno < 0)) {
+ (tunnel->i_seqno && (s32)(seqno - tunnel->i_seqno) < 0)) {
tunnel->stat.rx_fifo_errors++;
tunnel->stat.rx_errors++;
goto drop;
if (jiffies - tunnel->err_time < IPTUNNEL_ERR_TIMEO) {
tunnel->err_count--;
- if (skb->protocol == __constant_htons(ETH_P_IP))
- icmp_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_UNREACH, 0);
-#ifdef CONFIG_IPV6
- else if (skb->protocol == __constant_htons(ETH_P_IPV6))
- icmpv6_send(skb, ICMPV6_DEST_UNREACH, ICMPV6_ADDR_UNREACH, 0, dev);
-#endif
+ dst_link_failure(skb);
} else
tunnel->err_count = 0;
}
return 0;
tx_error_icmp:
- if (skb->protocol == __constant_htons(ETH_P_IP))
- icmp_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_UNREACH, 0);
-#ifdef CONFIG_IPV6
- else if (skb->protocol == __constant_htons(ETH_P_IPV6))
- icmpv6_send(skb, ICMPV6_DEST_UNREACH, ICMPV6_ADDR_UNREACH, 0, dev);
-#endif
+ dst_link_failure(skb);
tx_error:
stats->tx_errors++;
* Alan Cox : Multicast routing hooks
* Jos Vos : Do accounting *before* call_in_firewall
* Willy Konynenberg : Transparent proxying support
+ * Mike McLagan : Routing by source
*
*
*
opt = &(IPCB(skb)->opt);
if (opt->srr) {
- if (!ipv4_config.source_route) {
- if (ipv4_config.log_martians && net_ratelimit())
+ struct in_device *in_dev = dev->ip_ptr;
+ if (in_dev && !IN_DEV_SOURCE_ROUTE(in_dev)) {
+ if (IN_DEV_LOG_MARTIANS(in_dev) && net_ratelimit())
printk(KERN_INFO "source route option %08lx -> %08lx\n",
ntohl(iph->saddr), ntohl(iph->daddr));
goto drop;
}
- if (((struct rtable*)skb->dst)->rt_type == RTN_LOCAL &&
- ip_options_rcv_srr(skb))
+ if (ip_options_rcv_srr(skb))
goto drop;
}
}
-
+
/*
* See if the firewall wants to dispose of the packet.
*/
* Alexander Demenshin: Missing sk/skb free in ip_queue_xmit
* (in case if packet not accepted by
* output firewall rules)
+ * Mike McLagan : Routing by source
* Alexey Kuznetsov: use new route cache
* Andi Kleen: Fix broken PMTU recovery and remove
* some redundant tests.
daddr = opt->faddr;
err = ip_route_output(&rt, daddr, saddr, RT_TOS(sk->ip_tos) |
-#ifdef CONFIG_IP_TRANSPARENT_PROXY
- /* Rationale: this routine is used only
- by TCP, so that validity of saddr is already
- checked and we can safely use RTO_TPROXY.
- */
- RTO_TPROXY |
-#endif
- (sk->localroute||0), sk->bound_dev_if);
+ RTO_CONN | sk->localroute, sk->bound_dev_if);
if (err)
{
ip_statistics.IpOutNoRoutes++;
iph->tos = sk->ip_tos;
iph->frag_off = 0;
if (sk->ip_pmtudisc == IP_PMTUDISC_WANT &&
- !(rt->rt_flags & RTCF_NOPMTUDISC))
+ !(rt->u.dst.mxlock&(1<<RTAX_MTU)))
iph->frag_off |= htons(IP_DF);
iph->ttl = sk->ip_ttl;
iph->daddr = rt->rt_dst;
sk->dst_cache = NULL;
ip_rt_put(rt);
err = ip_route_output(&rt, daddr, sk->saddr, RT_TOS(sk->ip_tos) |
- (sk->localroute||0), sk->bound_dev_if);
+ RTO_CONN | sk->localroute, sk->bound_dev_if);
if (err)
return err;
sk->dst_cache = &rt->u.dst;
iph->tos = sk->ip_tos;
iph->frag_off = 0;
if (sk->ip_pmtudisc == IP_PMTUDISC_WANT &&
- !(rt->rt_flags & RTCF_NOPMTUDISC))
+ !(rt->u.dst.mxlock&(1<<RTAX_MTU)))
iph->frag_off |= htons(IP_DF);
iph->ttl = sk->ip_ttl;
iph->daddr = rt->rt_dst;
#endif
skb->dev = dev;
+ skb->protocol = __constant_htons(ETH_P_IP);
/*
* Multicasts are looped back for other local users
Essentially it is "ip_reroute_output" function. --ANK
*/
struct rtable *nrt;
- if (ip_route_output(&nrt, rt->key.dst, rt->key.src, rt->key.tos,
+ if (ip_route_output(&nrt, rt->key.dst, rt->key.src,
+ rt->key.tos | RTO_CONN,
sk?sk->bound_dev_if:0))
goto drop;
skb->dst = &nrt->u.dst;
#endif
if (sk->ip_pmtudisc == IP_PMTUDISC_DONT ||
- rt->rt_flags&RTCF_NOPMTUDISC)
+ (rt->u.dst.mxlock&(1<<RTAX_MTU)))
df = 0;
* Martin Mares : TOS setting fixed.
* Alan Cox : Fixed a couple of oopses in Martin's
* TOS tweaks.
+ * Mike McLagan : Routing by source
*/
#include <linux/config.h>
if (IPTOS_PREC(val) >= IPTOS_PREC_CRITIC_ECP && !suser())
return -EPERM;
if (sk->ip_tos != val) {
- start_bh_atomic();
sk->ip_tos=val;
sk->priority = rt_tos2priority(val);
- if (sk->dst_cache) {
- dst_release(sk->dst_cache);
- sk->dst_cache = NULL;
- }
- end_bh_atomic();
+ dst_release(xchg(&sk->dst_cache, NULL));
}
sk->priority = rt_tos2priority(val);
return 0;
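Several hunks in this patch collapse the open-coded "test, release, clear" sequence on sk->dst_cache into dst_release(xchg(&sk->dst_cache, NULL)), detaching the cached route and dropping the reference in one atomic step. A user-space sketch of the pattern using C11 atomics (types and names here are illustrative, and the release is simplified to a refcount decrement):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct dst { int refcnt; };

/* Illustrative release: the real dst_release() frees the entry
 * once its reference count reaches zero. */
static void dst_release(struct dst *d)
{
	if (d)
		d->refcnt--;
}

/* Swap-then-release: atomically detach the cached pointer and drop
 * the reference, so no window exists in which another context can
 * observe a half-cleared cache slot. */
static void clear_dst_cache(_Atomic(struct dst *) *cache)
{
	dst_release(atomic_exchange(cache, NULL));
}
```

Calling it on an already-empty cache is safe, since dst_release() tolerates NULL, just as the kernel version does.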
if (tunnel->err_count > 0) {
if (jiffies - tunnel->err_time < IPTUNNEL_ERR_TIMEO) {
tunnel->err_count--;
- icmp_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_UNREACH, 0);
+ dst_link_failure(skb);
} else
tunnel->err_count = 0;
}
return 0;
tx_error_icmp:
- icmp_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_UNREACH, 0);
+ dst_link_failure(skb);
tx_error:
stats->tx_errors++;
dev_kfree_skb(skb);
* Michael Chastain : Incorrect size of copying.
* Alan Cox : Added the cache manager code
* Alan Cox : Fixed the clone/copy bug and device race.
+ * Mike McLagan : Routing by source
* Malcolm Beattie : Buffer handling fixes.
* Alexey Kuznetsov : Double buffer free and other fixes.
* SVR Anand : Fixed several multicast bugs and problems.
in_dev = dev->ip_ptr;
if (in_dev == NULL && (in_dev = inetdev_init(dev)) == NULL)
goto failure;
+ in_dev->cnf.rp_filter = 0;
if (dev_open(dev))
goto failure;
if ((in_dev = inetdev_init(dev)) == NULL)
goto failure;
+ in_dev->cnf.rp_filter = 0;
+
if (dev_open(dev))
goto failure;
vifc_map &= ~(1<<vifi);
if ((in_dev = dev->ip_ptr) != NULL)
- in_dev->flags &= ~IFF_IP_MFORWARD;
+ in_dev->cnf.mc_forwarding = 0;
dev_set_allmulti(dev, -1);
ip_rt_multicast_event(in_dev);
static void mrtsock_destruct(struct sock *sk)
{
if (sk == mroute_socket) {
- ipv4_config.multicast_route = 0;
+ ipv4_devconf.mc_forwarding = 0;
mroute_socket=NULL;
mroute_close(sk);
}
if(mroute_socket)
return -EADDRINUSE;
mroute_socket=sk;
- ipv4_config.multicast_route = 1;
+ ipv4_devconf.mc_forwarding = 1;
if (ip_ra_control(sk, 1, mrtsock_destruct) == 0)
return 0;
mrtsock_destruct(sk);
if ((in_dev = dev->ip_ptr) == NULL)
return -EADDRNOTAVAIL;
- if (in_dev->flags & IFF_IP_MFORWARD)
+ if (in_dev->cnf.mc_forwarding)
return -EADDRINUSE;
- in_dev->flags |= IFF_IP_MFORWARD;
+ in_dev->cnf.mc_forwarding = 1;
dev_set_allmulti(dev, +1);
ip_rt_multicast_event(in_dev);
struct rtnexthop *nhp;
struct device *dev = vif_table[c->mfc_parent].dev;
+#ifdef CONFIG_RTNL_OLD_IFINFO
if (dev) {
u8 *o = skb->tail;
RTA_PUT(skb, RTA_IIF, 4, &dev->ifindex);
rtm->rtm_optlen += skb->tail - o;
}
+#else
+ struct rtattr *mp_head;
+
+ if (dev)
+ RTA_PUT(skb, RTA_IIF, 4, &dev->ifindex);
+
+ mp_head = (struct rtattr*)skb_put(skb, RTA_LENGTH(0));
+#endif
for (ct = c->mfc_minvif; ct < c->mfc_maxvif; ct++) {
if (c->mfc_ttls[ct] < 255) {
nhp->rtnh_hops = c->mfc_ttls[ct];
nhp->rtnh_ifindex = vif_table[ct].dev->ifindex;
nhp->rtnh_len = sizeof(*nhp);
+#ifdef CONFIG_RTNL_OLD_IFINFO
rtm->rtm_nhs++;
+#endif
}
}
+#ifndef CONFIG_RTNL_OLD_IFINFO
+ mp_head->rta_type = RTA_MULTIPATH;
+ mp_head->rta_len = skb->tail - (u8*)mp_head;
+#endif
rtm->rtm_type = RTN_MULTICAST;
return 1;
* Fixes
* Alan Cox : Rarp delete on device down needed as
* reported by Walter Wolfgang.
+ * Mike McLagan : Routing by source
*
*/
daddr = ipc.opt->faddr;
}
}
- tos = RT_TOS(sk->ip_tos) | (sk->localroute || (msg->msg_flags&MSG_DONTROUTE));
+ tos = RT_TOS(sk->ip_tos) | sk->localroute;
+ if (msg->msg_flags&MSG_DONTROUTE)
+ tos |= RTO_ONLINK;
if (MULTICAST(daddr)) {
if (!ipc.oif)
sk->rcv_saddr = sk->saddr = addr->sin_addr.s_addr;
if(chk_addr_ret == RTN_MULTICAST || chk_addr_ret == RTN_BROADCAST)
sk->saddr = 0; /* Use device */
- dst_release(sk->dst_cache);
- sk->dst_cache = NULL;
+ dst_release(xchg(&sk->dst_cache, NULL));
return 0;
}
}
err = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied);
+ if (err)
+ goto done;
+
sk->stamp=skb->stamp;
/* Copy the address. */
}
if (sk->ip_cmsg_flags)
ip_cmsg_recv(msg, skb);
+done:
skb_free_datagram(sk, skb);
- return err ? err : (copied);
+ return (err ? : copied);
}
static int raw_init(struct sock *sk)
* Bjorn Ekwall : Kerneld route support.
* Alan Cox : Multicast fixed (I hope)
* Pavel Krauz : Limited broadcast fixed
+ * Mike McLagan : Routing by source
* Alexey Kuznetsov : End of old history. Splitted to fib.c and
* route.c and rewritten from scratch.
* Andi Kleen : Load-limit warning messages.
#include <net/arp.h>
#include <net/tcp.h>
#include <net/icmp.h>
+#ifdef CONFIG_SYSCTL
+#include <linux/sysctl.h>
+#endif
+
+#define RT_GC_TIMEOUT (300*HZ)
+
+int ip_rt_min_delay = 2*HZ;
+int ip_rt_max_delay = 10*HZ;
+int ip_rt_gc_thresh = RT_HASH_DIVISOR;
+int ip_rt_max_size = RT_HASH_DIVISOR*16;
+int ip_rt_gc_timeout = RT_GC_TIMEOUT;
+int ip_rt_gc_interval = 60*HZ;
+int ip_rt_gc_min_interval = 5*HZ;
+int ip_rt_redirect_number = 9;
+int ip_rt_redirect_load = HZ/50;
+int ip_rt_redirect_silence = ((HZ/50) << (9+1));
+int ip_rt_error_cost = HZ;
+int ip_rt_error_burst = 5*HZ;
+
+static unsigned long rt_deadline = 0;
#define RTprint(a...) printk(KERN_DEBUG a)
+static void rt_run_flush(unsigned long dummy);
+
static struct timer_list rt_flush_timer =
- { NULL, NULL, 0, 0L, NULL };
+ { NULL, NULL, 0, 0L, rt_run_flush };
static struct timer_list rt_periodic_timer =
{ NULL, NULL, 0, 0L, NULL };
static struct dst_entry * ipv4_dst_reroute(struct dst_entry * dst,
struct sk_buff *);
static struct dst_entry * ipv4_negative_advice(struct dst_entry *);
+static void ipv4_link_failure(struct sk_buff *skb);
+static int rt_garbage_collect(void);
struct dst_ops ipv4_dst_ops =
{
AF_INET,
__constant_htons(ETH_P_IP),
+ RT_HASH_DIVISOR,
+
+ rt_garbage_collect,
ipv4_dst_check,
ipv4_dst_reroute,
NULL,
- ipv4_negative_advice
+ ipv4_negative_advice,
+ ipv4_link_failure,
};
__u8 ip_tos2prio[16] = {
* Route cache.
*/
-static atomic_t rt_cache_size = ATOMIC_INIT(0);
static struct rtable *rt_hash_table[RT_HASH_DIVISOR];
static struct rtable * rt_intern_hash(unsigned hash, struct rtable * rth, u16 protocol);
}
#endif
-static void __inline__ rt_free(struct rtable *rt)
+static __inline__ void rt_free(struct rtable *rt)
{
dst_free(&rt->u.dst);
}
*/
if (!atomic_read(&rth->u.dst.use) &&
- (now - rth->u.dst.lastuse > RT_CACHE_TIMEOUT)) {
+ (now - rth->u.dst.lastuse > ip_rt_gc_timeout)) {
*rthp = rth_next;
- atomic_dec(&rt_cache_size);
#if RT_CACHE_DEBUG >= 2
printk("rt_check_expire clean %02x@%08x\n", rover, rth->rt_dst);
#endif
if (!rth_next)
break;
- if ( rth_next->u.dst.lastuse - rth->u.dst.lastuse > RT_CACHE_BUBBLE_THRESHOLD ||
- (rth->u.dst.lastuse - rth_next->u.dst.lastuse < 0 &&
+ if ( (long)(rth_next->u.dst.lastuse - rth->u.dst.lastuse) > RT_CACHE_BUBBLE_THRESHOLD ||
+ ((long)(rth->u.dst.lastuse - rth_next->u.dst.lastuse) < 0 &&
atomic_read(&rth->u.dst.refcnt) < atomic_read(&rth_next->u.dst.refcnt))) {
#if RT_CACHE_DEBUG >= 2
printk("rt_check_expire bubbled %02x@%08x<->%08x\n", rover, rth->rt_dst, rth_next->rt_dst);
rthp = &rth->u.rt_next;
}
}
- rt_periodic_timer.expires = now + RT_GC_INTERVAL;
+ rt_periodic_timer.expires = now + ip_rt_gc_interval;
add_timer(&rt_periodic_timer);
}
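The rt_check_expire hunk above replaces raw unsigned comparisons of u.dst.lastuse deltas with casts to (long), so jiffies wraparound no longer inverts the ordering. A minimal user-space sketch of the same idiom (time_after32() is an illustrative name, not the kernel macro):

```c
#include <assert.h>
#include <stdint.h>

/* Wraparound-safe "a is later than b": the unsigned difference is
 * cast to a signed type, which stays correct across the 2^32 wrap
 * as long as the two timestamps are less than 2^31 ticks apart. */
static int time_after32(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) > 0;
}
```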
for (; rth; rth=next) {
next = rth->u.rt_next;
- atomic_dec(&rt_cache_size);
nr++;
rth->u.rt_next = NULL;
rt_free(rth);
void rt_cache_flush(int delay)
{
+ if (delay < 0)
+ delay = ip_rt_min_delay;
+
start_bh_atomic();
- if (delay && rt_flush_timer.function &&
- rt_flush_timer.expires - jiffies < delay) {
- end_bh_atomic();
- return;
- }
- if (rt_flush_timer.function) {
- del_timer(&rt_flush_timer);
- rt_flush_timer.function = NULL;
+
+ if (del_timer(&rt_flush_timer) && delay > 0 && rt_deadline) {
+ long tmo = (long)(rt_deadline - rt_flush_timer.expires);
+
+	/* If the flush timer is already running
+	   and the flush request is not immediate (delay > 0):
+
+	   if the deadline has not been reached, prolong the timer to "delay",
+	   otherwise fire it at the deadline.
+	 */
+
+ if (delay > tmo)
+ delay = tmo;
}
- if (delay == 0) {
+
+ if (delay <= 0) {
+ rt_deadline = 0;
end_bh_atomic();
+
rt_run_flush(0);
return;
}
- rt_flush_timer.function = rt_run_flush;
+
+ if (rt_deadline == 0)
+ rt_deadline = jiffies + ip_rt_max_delay;
+
rt_flush_timer.expires = jiffies + delay;
add_timer(&rt_flush_timer);
end_bh_atomic();
}
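The rewritten rt_cache_flush() above lets a later flush request extend the timer by `delay` ticks, but never past the deadline recorded when the first request arrived. The clamp reduces to (a hypothetical helper, not kernel code):

```c
#include <assert.h>

/* A new request may push the flush out by `delay` ticks, but never
 * past the fixed deadline; the unsigned subtraction is cast to long
 * so the comparison survives jiffies wraparound. */
static long clamp_flush_delay(long delay, unsigned long now,
			      unsigned long deadline)
{
	long tmo = (long)(deadline - now);

	return delay > tmo ? tmo : delay;
}
```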
-
-static void rt_garbage_collect(void)
+static int rt_garbage_collect(void)
{
int i;
- static unsigned expire = RT_CACHE_TIMEOUT>>1;
+ static unsigned expire = RT_GC_TIMEOUT>>1;
static unsigned long last_gc;
struct rtable *rth, **rthp;
- unsigned long now;
+ unsigned long now = jiffies;
start_bh_atomic();
- now = jiffies;
/*
* Garbage collection is pretty expensive,
* do not make it too frequently, but just increase expire strength.
*/
- if (now - last_gc < 1*HZ) {
- expire >>= 1;
- end_bh_atomic();
- return;
- }
+ if (now - last_gc < ip_rt_gc_min_interval)
+ goto out;
expire++;
if (atomic_read(&rth->u.dst.use) ||
now - rth->u.dst.lastuse < expire)
continue;
- atomic_dec(&rt_cache_size);
*rthp = rth->u.rt_next;
rth->u.rt_next = NULL;
rt_free(rth);
}
last_gc = now;
- if (atomic_read(&rt_cache_size) < RT_CACHE_MAX_SIZE)
- expire = RT_CACHE_TIMEOUT>>1;
- else
- expire >>= 1;
+ if (atomic_read(&ipv4_dst_ops.entries) < ipv4_dst_ops.gc_thresh)
+ expire = ip_rt_gc_timeout;
+
+out:
+ expire >>= 1;
end_bh_atomic();
+ return (atomic_read(&ipv4_dst_ops.entries) > ip_rt_max_size);
}
static struct rtable *rt_intern_hash(unsigned hash, struct rtable * rt, u16 protocol)
if (rt->rt_type == RTN_UNICAST || rt->key.iif == 0)
arp_bind_neighbour(&rt->u.dst);
- if (atomic_read(&rt_cache_size) >= RT_CACHE_MAX_SIZE)
- rt_garbage_collect();
-
rt->u.rt_next = rt_hash_table[hash];
#if RT_CACHE_DEBUG >= 2
if (rt->u.rt_next) {
}
#endif
rt_hash_table[hash] = rt;
- atomic_inc(&rt_cache_size);
end_bh_atomic();
return rt;
tos &= IPTOS_TOS_MASK;
- if (!in_dev || new_gw == old_gw || !IN_DEV_RX_REDIRECTS(in_dev)
+ if (!in_dev)
+ return;
+
+ if (new_gw == old_gw || !IN_DEV_RX_REDIRECTS(in_dev)
|| MULTICAST(new_gw) || BADCLASS(new_gw) || ZERONET(new_gw))
goto reject_redirect;
reject_redirect:
#ifdef CONFIG_IP_ROUTE_VERBOSE
- if (ipv4_config.log_martians && net_ratelimit())
+ if (IN_DEV_LOG_MARTIANS(in_dev) && net_ratelimit())
printk(KERN_INFO "Redirect from %lX/%s to %lX ignored."
"Path = %lX -> %lX, tos %02x\n",
ntohl(old_gw), dev->name, ntohl(new_gw),
/*
* Algorithm:
- * 1. The first RT_REDIRECT_NUMBER redirects are sent
+ * 1. The first ip_rt_redirect_number redirects are sent
* with exponential backoff, then we stop sending them at all,
* assuming that the host ignores our redirects.
* 2. If we did not see packets requiring redirects
- * during RT_REDIRECT_SILENCE, we assume that the host
+ * during ip_rt_redirect_silence, we assume that the host
* forgot redirected route and start to send redirects again.
*
* This algorithm is much cheaper and more intelligent than dumb load limiting
{
struct rtable *rt = (struct rtable*)skb->dst;
- /* No redirected packets during RT_REDIRECT_SILENCE;
+ /* No redirected packets during ip_rt_redirect_silence;
* reset the algorithm.
*/
- if (jiffies - rt->last_error > RT_REDIRECT_SILENCE)
- rt->errors = 0;
+ if (jiffies - rt->u.dst.rate_last > ip_rt_redirect_silence)
+ rt->u.dst.rate_tokens = 0;
/* Too many ignored redirects; do not send anything
- * set last_error to the last seen redirected packet.
+ * set u.dst.rate_last to the last seen redirected packet.
*/
- if (rt->errors >= RT_REDIRECT_NUMBER) {
- rt->last_error = jiffies;
+ if (rt->u.dst.rate_tokens >= ip_rt_redirect_number) {
+ rt->u.dst.rate_last = jiffies;
return;
}
- /* Check for load limit; set last_error to the latest sent
+ /* Check for load limit; set rate_last to the latest sent
* redirect.
*/
- if (jiffies - rt->last_error > (RT_REDIRECT_LOAD<<rt->errors)) {
+ if (jiffies - rt->u.dst.rate_last > (ip_rt_redirect_load<<rt->u.dst.rate_tokens)) {
icmp_send(skb, ICMP_REDIRECT, ICMP_REDIR_HOST, rt->rt_gateway);
- rt->last_error = jiffies;
- ++rt->errors;
+ rt->u.dst.rate_last = jiffies;
+ ++rt->u.dst.rate_tokens;
#ifdef CONFIG_IP_ROUTE_VERBOSE
- if (ipv4_config.log_martians && rt->errors == RT_REDIRECT_NUMBER && net_ratelimit())
+ if (skb->dev->ip_ptr && IN_DEV_LOG_MARTIANS((struct in_device*)skb->dev->ip_ptr) &&
+ rt->u.dst.rate_tokens == ip_rt_redirect_number && net_ratelimit())
printk(KERN_WARNING "host %08x/if%d ignores redirects for %08x to %08x.\n",
rt->rt_src, rt->rt_iif, rt->rt_dst, rt->rt_gateway);
#endif
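The send-redirects path above implements the backoff described in the algorithm comment: each successive redirect must wait load << tokens ticks, after `number` redirects we go silent entirely, and `silence` ticks of quiet reset the counter. An illustrative stand-alone version (names and constants are not the kernel's):

```c
#include <assert.h>

/* Exponential-backoff throttle mirroring ip_rt_send_redirect():
 * returns 1 when a redirect may be sent at time `now`. */
static int redirect_allowed(unsigned long *tokens, unsigned long *last,
			    unsigned long now, unsigned long load,
			    unsigned long number, unsigned long silence)
{
	if (now - *last > silence)
		*tokens = 0;		/* host was quiet: start over */
	if (*tokens >= number) {
		*last = now;		/* host ignores us: stay silent */
		return 0;
	}
	if (now - *last > (load << *tokens)) {
		*last = now;		/* backoff interval elapsed */
		(*tokens)++;
		return 1;
	}
	return 0;
}
```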
static int ip_error(struct sk_buff *skb)
{
struct rtable *rt = (struct rtable*)skb->dst;
+ unsigned long now;
int code;
switch (rt->u.dst.error) {
code = ICMP_PKT_FILTERED;
break;
}
- if (jiffies - rt->last_error > RT_ERROR_LOAD) {
+
+ now = jiffies;
+ if ((rt->u.dst.rate_tokens += now - rt->u.dst.rate_last) > ip_rt_error_burst)
+ rt->u.dst.rate_tokens = ip_rt_error_burst;
+ if (rt->u.dst.rate_tokens >= ip_rt_error_cost) {
+ rt->u.dst.rate_tokens -= ip_rt_error_cost;
icmp_send(skb, ICMP_DEST_UNREACH, code, 0);
- rt->last_error = jiffies;
+ rt->u.dst.rate_last = now;
}
+
kfree_skb(skb);
return 0;
}
rth->rt_src == iph->saddr &&
rth->key.tos == tos &&
rth->key.iif == 0 &&
- !(rth->rt_flags&RTCF_NOPMTUDISC)) {
+ !(rth->u.dst.mxlock&(1<<RTAX_MTU))) {
unsigned short mtu = new_mtu;
if (new_mtu < 68 || new_mtu >= old_mtu) {
return NULL;
}
+static void ipv4_link_failure(struct sk_buff *skb)
+{
+ icmp_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_UNREACH, 0);
+}
+
static int ip_rt_bug(struct sk_buff *skb)
{
printk(KERN_DEBUG "ip_rt_bug: %08x -> %08x, %s\n", skb->nh.iph->saddr,
rth->u.dst.pmtu = res.fi->fib_mtu ? : out_dev->dev->mtu;
rth->u.dst.window=res.fi->fib_window ? : 0;
rth->u.dst.rtt = res.fi->fib_rtt ? : TCP_TIMEOUT_INIT;
- rth->u.dst.rate_last = rth->u.dst.rate_tokens = 0;
+#ifndef CONFIG_RTNL_OLD_IFINFO
+ rth->u.dst.mxlock = res.fi->fib_metrics[RTAX_LOCK-1];
+#endif
if (FIB_RES_GW(res) && FIB_RES_NH(res).nh_scope == RT_SCOPE_LINK)
rth->rt_gateway = FIB_RES_GW(res);
*/
martian_destination:
#ifdef CONFIG_IP_ROUTE_VERBOSE
- if (ipv4_config.log_martians && net_ratelimit())
+ if (IN_DEV_LOG_MARTIANS(in_dev) && net_ratelimit())
printk(KERN_WARNING "martian destination %08x from %08x, dev %s\n", daddr, saddr, dev->name);
#endif
return -EINVAL;
martian_source:
#ifdef CONFIG_IP_ROUTE_VERBOSE
- if (ipv4_config.log_martians && net_ratelimit()) {
+ if (IN_DEV_LOG_MARTIANS(in_dev) && net_ratelimit()) {
/*
	 * RFC1812 recommendation, if source is martian,
* the only hint is MAC header.
else if (BADCLASS(key.dst) || ZERONET(key.dst))
return -EINVAL;
+ if (dev_out->flags&IFF_LOOPBACK)
+ flags |= RTCF_LOCAL;
+
if (res.type == RTN_BROADCAST) {
flags |= RTCF_BROADCAST;
- if (!(dev_out->flags&IFF_LOOPBACK) && dev_out->flags&IFF_BROADCAST)
+ if (dev_out->flags&IFF_BROADCAST)
flags |= RTCF_LOCAL;
} else if (res.type == RTN_MULTICAST) {
- flags |= RTCF_MULTICAST;
- if (ip_check_mc(dev_out, daddr))
- flags |= RTCF_LOCAL;
+ flags |= RTCF_MULTICAST|RTCF_LOCAL;
+ if (!ip_check_mc(dev_out, daddr))
+ flags &= ~RTCF_LOCAL;
}
rth = dst_alloc(sizeof(struct rtable), &ipv4_dst_ops);
rth->u.dst.pmtu = res.fi->fib_mtu ? : dev_out->mtu;
rth->u.dst.window=res.fi->fib_window ? : 0;
rth->u.dst.rtt = res.fi->fib_rtt ? : TCP_TIMEOUT_INIT;
+#ifndef CONFIG_RTNL_OLD_IFINFO
+ rth->u.dst.mxlock = res.fi->fib_metrics[RTAX_LOCK-1];
+#endif
} else {
rth->u.dst.pmtu = dev_out->mtu;
rth->u.dst.window=0;
rth->u.dst.rtt = TCP_TIMEOUT_INIT;
}
- rth->u.dst.rate_last = rth->u.dst.rate_tokens = 0;
rth->rt_flags = flags;
rth->rt_type = res.type;
hash = rt_hash_code(daddr, saddr^(oif<<5), tos);
int inet_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr* nlh, void *arg)
{
- struct kern_rta *rta = arg;
+ struct rtattr **rta = arg;
struct rtmsg *rtm = NLMSG_DATA(nlh);
struct rtable *rt = NULL;
u32 dst = 0;
u32 src = 0;
+ int iif = 0;
int err;
struct sk_buff *skb;
struct rta_cacheinfo ci;
- u8 *o;
+#ifdef CONFIG_RTNL_OLD_IFINFO
+ unsigned char *o;
+#else
+ struct rtattr *mx;
+#endif
skb = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL);
if (skb == NULL)
skb->mac.raw = skb->data;
skb_reserve(skb, MAX_HEADER + sizeof(struct iphdr));
- if (rta->rta_dst)
- memcpy(&dst, rta->rta_dst, 4);
- if (rta->rta_src)
- memcpy(&src, rta->rta_src, 4);
+ if (rta[RTA_SRC-1])
+ memcpy(&src, RTA_DATA(rta[RTA_SRC-1]), 4);
+ if (rta[RTA_DST-1])
+ memcpy(&dst, RTA_DATA(rta[RTA_DST-1]), 4);
+ if (rta[RTA_IIF-1])
+ memcpy(&iif, RTA_DATA(rta[RTA_IIF-1]), sizeof(int));
- if (rta->rta_iif) {
+ if (iif) {
struct device *dev;
- dev = dev_get_by_index(*rta->rta_iif);
+ dev = dev_get_by_index(iif);
if (!dev)
return -ENODEV;
skb->protocol = __constant_htons(ETH_P_IP);
if (!err && rt->u.dst.error)
err = rt->u.dst.error;
} else {
- err = ip_route_output(&rt, dst, src, rtm->rtm_tos,
- rta->rta_oif ? *rta->rta_oif : 0);
+ int oif = 0;
+ if (rta[RTA_OIF-1])
+ memcpy(&oif, RTA_DATA(rta[RTA_OIF-1]), sizeof(int));
+ err = ip_route_output(&rt, dst, src, rtm->rtm_tos, oif);
}
if (err) {
kfree_skb(skb);
rtm->rtm_scope = RT_SCOPE_UNIVERSE;
rtm->rtm_protocol = RTPROT_UNSPEC;
rtm->rtm_flags = (rt->rt_flags&~0xFFFF) | RTM_F_CLONED;
+#ifdef CONFIG_RTNL_OLD_IFINFO
rtm->rtm_nhs = 0;
o = skb->tail;
+#endif
RTA_PUT(skb, RTA_DST, 4, &rt->rt_dst);
RTA_PUT(skb, RTA_SRC, 4, &rt->rt_src);
if (rt->u.dst.dev)
RTA_PUT(skb, RTA_OIF, sizeof(int), &rt->u.dst.dev->ifindex);
if (rt->rt_dst != rt->rt_gateway)
RTA_PUT(skb, RTA_GATEWAY, 4, &rt->rt_gateway);
+#ifdef CONFIG_RTNL_OLD_IFINFO
RTA_PUT(skb, RTA_MTU, sizeof(unsigned), &rt->u.dst.pmtu);
RTA_PUT(skb, RTA_WINDOW, sizeof(unsigned), &rt->u.dst.window);
RTA_PUT(skb, RTA_RTT, sizeof(unsigned), &rt->u.dst.rtt);
+#else
+ mx = (struct rtattr*)skb->tail;
+ RTA_PUT(skb, RTA_METRICS, 0, NULL);
+ if (rt->u.dst.mxlock)
+ RTA_PUT(skb, RTAX_LOCK, sizeof(unsigned), &rt->u.dst.mxlock);
+ if (rt->u.dst.pmtu)
+ RTA_PUT(skb, RTAX_MTU, sizeof(unsigned), &rt->u.dst.pmtu);
+ if (rt->u.dst.window)
+ RTA_PUT(skb, RTAX_WINDOW, sizeof(unsigned), &rt->u.dst.window);
+ if (rt->u.dst.rtt)
+ RTA_PUT(skb, RTAX_RTT, sizeof(unsigned), &rt->u.dst.rtt);
+ mx->rta_len = skb->tail - (u8*)mx;
+#endif
RTA_PUT(skb, RTA_PREFSRC, 4, &rt->rt_spec_dst);
ci.rta_lastuse = jiffies - rt->u.dst.lastuse;
ci.rta_used = atomic_read(&rt->u.dst.refcnt);
ci.rta_expires = 0;
ci.rta_error = rt->u.dst.error;
RTA_PUT(skb, RTA_CACHEINFO, sizeof(ci), &ci);
+#ifdef CONFIG_RTNL_OLD_IFINFO
rtm->rtm_optlen = skb->tail - o;
- if (rta->rta_iif) {
+#endif
+ if (iif) {
#ifdef CONFIG_IP_MROUTE
- if (MULTICAST(dst) && !LOCAL_MCAST(dst) && ipv4_config.multicast_route) {
+ if (MULTICAST(dst) && !LOCAL_MCAST(dst) && ipv4_devconf.mc_forwarding) {
NETLINK_CB(skb).pid = NETLINK_CB(in_skb).pid;
err = ipmr_get_route(skb, rtm);
if (err <= 0)
} else
#endif
{
- RTA_PUT(skb, RTA_IIF, 4, rta->rta_iif);
+ RTA_PUT(skb, RTA_IIF, sizeof(int), &iif);
+#ifdef CONFIG_RTNL_OLD_IFINFO
rtm->rtm_optlen = skb->tail - o;
+#endif
}
}
nlh->nlmsg_len = skb->tail - (u8*)nlh;
void ip_rt_multicast_event(struct in_device *in_dev)
{
- rt_cache_flush(1*HZ);
+ rt_cache_flush(0);
}
+
+
+#ifdef CONFIG_SYSCTL
+
+static int flush_delay;
+
+static
+int ipv4_sysctl_rtcache_flush(ctl_table *ctl, int write, struct file * filp,
+ void *buffer, size_t *lenp)
+{
+ if (write) {
+ proc_dointvec(ctl, write, filp, buffer, lenp);
+ rt_cache_flush(flush_delay);
+ return 0;
+ } else
+ return -EINVAL;
+}
+
+ctl_table ipv4_route_table[] = {
+ {NET_IPV4_ROUTE_FLUSH, "flush",
+ &flush_delay, sizeof(int), 0644, NULL,
+ &ipv4_sysctl_rtcache_flush},
+ {NET_IPV4_ROUTE_MIN_DELAY, "min_delay",
+ &ip_rt_min_delay, sizeof(int), 0644, NULL,
+ &proc_dointvec_jiffies},
+ {NET_IPV4_ROUTE_MAX_DELAY, "max_delay",
+ &ip_rt_max_delay, sizeof(int), 0644, NULL,
+ &proc_dointvec_jiffies},
+ {NET_IPV4_ROUTE_GC_THRESH, "gc_thresh",
+ &ipv4_dst_ops.gc_thresh, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+ {NET_IPV4_ROUTE_MAX_SIZE, "max_size",
+ &ip_rt_max_size, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+ {NET_IPV4_ROUTE_GC_MIN_INTERVAL, "gc_min_interval",
+ &ip_rt_gc_min_interval, sizeof(int), 0644, NULL,
+ &proc_dointvec_jiffies},
+ {NET_IPV4_ROUTE_GC_TIMEOUT, "gc_timeout",
+ &ip_rt_gc_timeout, sizeof(int), 0644, NULL,
+ &proc_dointvec_jiffies},
+ {NET_IPV4_ROUTE_GC_INTERVAL, "gc_interval",
+ &ip_rt_gc_interval, sizeof(int), 0644, NULL,
+ &proc_dointvec_jiffies},
+ {NET_IPV4_ROUTE_REDIRECT_LOAD, "redirect_load",
+ &ip_rt_redirect_load, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+ {NET_IPV4_ROUTE_REDIRECT_NUMBER, "redirect_number",
+ &ip_rt_redirect_number, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+ {NET_IPV4_ROUTE_REDIRECT_SILENCE, "redirect_silence",
+ &ip_rt_redirect_silence, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+ {NET_IPV4_ROUTE_ERROR_COST, "error_cost",
+ &ip_rt_error_cost, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+ {NET_IPV4_ROUTE_ERROR_BURST, "error_burst",
+ &ip_rt_error_burst, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+ {0}
+};
+#endif
+
__initfunc(void ip_rt_init(void))
{
devinet_init();
/* All the timers, started at system startup tend
to synchronize. Perturb it a bit.
*/
- rt_periodic_timer.expires = jiffies + net_random()%RT_GC_INTERVAL + RT_GC_INTERVAL;
+ rt_periodic_timer.expires = jiffies + net_random()%ip_rt_gc_interval
+ + ip_rt_gc_interval;
add_timer(&rt_periodic_timer);
#ifdef CONFIG_PROC_FS
opt &&
opt->srr ? opt->faddr : req->af.v4_req.rmt_addr,
req->af.v4_req.loc_addr,
- sk->ip_tos,
+ sk->ip_tos | RTO_CONN,
0)) {
tcp_openreq_free(req);
return NULL;
extern int tcp_sysctl_congavoid(ctl_table *ctl, int write, struct file * filp,
void *buffer, size_t *lenp);
-struct ipv4_config ipv4_config = { 1, 1, 1, 0, };
+struct ipv4_config ipv4_config;
-#ifdef CONFIG_SYSCTL
+extern ctl_table ipv4_route_table[];
-struct ipv4_config ipv4_def_router_config = { 0, 1, 1, 1, 1, 1, 1, };
-struct ipv4_config ipv4_def_host_config = { 1, 1, 1, 0, };
+#ifdef CONFIG_SYSCTL
static
-int ipv4_sysctl_forwarding(ctl_table *ctl, int write, struct file * filp,
- void *buffer, size_t *lenp)
+int ipv4_sysctl_forward(ctl_table *ctl, int write, struct file * filp,
+ void *buffer, size_t *lenp)
{
- int val = IS_ROUTER;
+ int val = ipv4_devconf.forwarding;
int ret;
ret = proc_dointvec(ctl, write, filp, buffer, lenp);
- if (write && IS_ROUTER != val) {
- if (IS_ROUTER)
- ipv4_config = ipv4_def_router_config;
- else
- ipv4_config = ipv4_def_host_config;
- rt_cache_flush(0);
- }
+ if (write && ipv4_devconf.forwarding != val)
+ inet_forward_change();
+
return ret;
}
-static
-int ipv4_sysctl_rtcache_flush(ctl_table *ctl, int write, struct file * filp,
- void *buffer, size_t *lenp)
-{
- if (write) {
- rt_cache_flush(0);
- return 0;
- } else
- return -EINVAL;
-}
ctl_table ipv4_table[] = {
{NET_IPV4_TCP_HOE_RETRANSMITS, "tcp_hoe_retransmits",
{NET_IPV4_TCP_VEGAS_CONG_AVOID, "tcp_vegas_cong_avoid",
&sysctl_tcp_cong_avoidance, sizeof(int), 0644,
NULL, &tcp_sysctl_congavoid },
- {NET_IPV4_FORWARDING, "ip_forwarding",
- &ip_statistics.IpForwarding, sizeof(int), 0644, NULL,
- &ipv4_sysctl_forwarding},
+ {NET_IPV4_FORWARD, "ip_forward",
+ &ipv4_devconf.forwarding, sizeof(int), 0644, NULL,
+ &ipv4_sysctl_forward},
{NET_IPV4_DEFAULT_TTL, "ip_default_ttl",
&ip_statistics.IpDefaultTTL, sizeof(int), 0644, NULL,
&proc_dointvec},
- {NET_IPV4_RFC1812_FILTER, "ip_rfc1812_filter",
- &ipv4_config.rfc1812_filter, sizeof(int), 0644, NULL,
- &proc_dointvec},
- {NET_IPV4_LOG_MARTIANS, "ip_log_martians",
- &ipv4_config.log_martians, sizeof(int), 0644, NULL,
- &proc_dointvec},
- {NET_IPV4_SOURCE_ROUTE, "ip_source_route",
- &ipv4_config.source_route, sizeof(int), 0644, NULL,
- &proc_dointvec},
- {NET_IPV4_SEND_REDIRECTS, "ip_send_redirects",
- &ipv4_config.send_redirects, sizeof(int), 0644, NULL,
- &proc_dointvec},
{NET_IPV4_AUTOCONFIG, "ip_autoconfig",
&ipv4_config.autoconfig, sizeof(int), 0644, NULL,
&proc_dointvec},
- {NET_IPV4_BOOTP_RELAY, "ip_bootp_relay",
- &ipv4_config.bootp_relay, sizeof(int), 0644, NULL,
- &proc_dointvec},
- {NET_IPV4_PROXY_ARP, "ip_proxy_arp",
- &ipv4_config.proxy_arp, sizeof(int), 0644, NULL,
- &proc_dointvec},
{NET_IPV4_NO_PMTU_DISC, "ip_no_pmtu_disc",
&ipv4_config.no_pmtu_disc, sizeof(int), 0644, NULL,
&proc_dointvec},
- {NET_IPV4_ACCEPT_REDIRECTS, "ip_accept_redirects",
- &ipv4_config.accept_redirects, sizeof(int), 0644, NULL,
- &proc_dointvec},
- {NET_IPV4_SECURE_REDIRECTS, "ip_secure_redirects",
- &ipv4_config.secure_redirects, sizeof(int), 0644, NULL,
- &proc_dointvec},
- {NET_IPV4_RFC1620_REDIRECTS, "ip_rfc1620_redirects",
- &ipv4_config.rfc1620_redirects, sizeof(int), 0644, NULL,
- &proc_dointvec},
- {NET_IPV4_RTCACHE_FLUSH, "ip_rtcache_flush",
- NULL, sizeof(int), 0644, NULL,
- &ipv4_sysctl_rtcache_flush},
{NET_IPV4_TCP_SYN_RETRIES, "tcp_syn_retries",
&sysctl_tcp_syn_retries, sizeof(int), 0644, NULL, &proc_dointvec},
{NET_IPV4_IPFRAG_HIGH_THRESH, "ipfrag_high_thresh",
&sysctl_ipfrag_high_thresh, sizeof(int), 0644, NULL, &proc_dointvec},
{NET_IPV4_IPFRAG_LOW_THRESH, "ipfrag_low_thresh",
&sysctl_ipfrag_low_thresh, sizeof(int), 0644, NULL, &proc_dointvec},
- {NET_IPV4_IP_DYNADDR, "ip_dynaddr",
+ {NET_IPV4_DYNADDR, "ip_dynaddr",
&sysctl_ip_dynaddr, sizeof(int), 0644, NULL, &proc_dointvec},
#ifdef CONFIG_IP_MASQUERADE
{NET_IPV4_IP_MASQ_DEBUG, "ip_masq_debug",
&sysctl_icmp_paramprob_time, sizeof(int), 0644, NULL, &proc_dointvec},
{NET_IPV4_ICMP_ECHOREPLY_RATE, "icmp_echoreply_rate",
&sysctl_icmp_echoreply_time, sizeof(int), 0644, NULL, &proc_dointvec},
+ {NET_IPV4_ROUTE, "route", NULL, 0, 0555, ipv4_route_table},
{0}
};
* improvement.
* Stefan Magdalinski : adjusted tcp_readable() to fix FIONREAD
* Willy Konynenberg : Transparent proxying support.
+ * Mike McLagan : Routing by source
 * Keith Owens : Do proper merging with partial SKB's in
* tcp_do_sendmsg to avoid burstiness.
* Eric Schenk : Fix fast close down bug with
/* FIXME: must check that ts_recent is not
* more than 24 days old here. Yuck.
*/
- return (tp->rcv_tsval-tp->ts_recent < 0);
+ return ((s32)(tp->rcv_tsval-tp->ts_recent) < 0);
}
* Added tail drop and some other bugfixes.
 * Added new listen semantics (ifdefed by
* NEW_LISTEN for now)
+ * Mike McLagan : Routing by source
* Juan Jose Ciarlante: ip_dynaddr bits
* Andi Kleen: various fixes.
* Vitaly E. Lavrov : Transparent proxy revived after year coma.
printk(KERN_DEBUG "%s forgot to set AF_INET in " __FUNCTION__ "\n", current->comm);
}
- if (sk->dst_cache) {
- dst_release(sk->dst_cache);
- sk->dst_cache = NULL;
- }
+ dst_release(xchg(&sk->dst_cache, NULL));
tmp = ip_route_connect(&rt, usin->sin_addr.s_addr, sk->saddr,
- RT_TOS(sk->ip_tos)|(sk->localroute || 0), sk->bound_dev_if);
+ RT_TOS(sk->ip_tos)|sk->localroute, sk->bound_dev_if);
if (tmp < 0)
return tmp;
sk->mtu = rt->u.dst.pmtu;
if ((sk->ip_pmtudisc == IP_PMTUDISC_DONT ||
(sk->ip_pmtudisc == IP_PMTUDISC_WANT &&
- rt->rt_flags&RTCF_NOPMTUDISC)) &&
+ (rt->u.dst.mxlock&(1<<RTAX_MTU)))) &&
rt->u.dst.pmtu > 576)
sk->mtu = 576;
if (ip_route_output(&rt,
newsk->opt && newsk->opt->srr ?
newsk->opt->faddr : newsk->daddr,
- newsk->saddr, newsk->ip_tos, 0)) {
+ newsk->saddr, newsk->ip_tos|RTO_CONN, 0)) {
sk_free(newsk);
return NULL;
}
/* Query new route */
tmp = ip_route_connect(&rt, rt->rt_dst, 0,
- RT_TOS(sk->ip_tos)|(sk->localroute||0),
+ RT_TOS(sk->ip_tos)|sk->localroute,
sk->bound_dev_if);
/* Only useful if different source addrs */
} else
if (rt->u.dst.obsolete) {
int err;
- err = ip_route_output(&rt, rt->rt_dst, rt->rt_src, rt->key.tos, rt->key.oif);
+ err = ip_route_output(&rt, rt->rt_dst, rt->rt_src, rt->key.tos|RTO_CONN, rt->key.oif);
if (err) {
sk->err_soft=-err;
sk->error_report(skb->sk);
* Mike Shaver : RFC1122 checks.
* Alan Cox : Nonblocking error fix.
* Willy Konynenberg : Transparent proxying support.
+ * Mike McLagan : Routing by source
* David S. Miller : New socket lookup architecture.
* Last socket cache retained as it
* does have a high hit rate.
struct sk_buff *skb;
unsigned long amount;
- if (sk->state == TCP_LISTEN) return(-EINVAL);
+ if (sk->state == TCP_LISTEN)
+ return(-EINVAL);
amount = 0;
+ /* N.B. Is this interrupt safe?? */
skb = skb_peek(&sk->receive_queue);
if (skb != NULL) {
/*
*/
int udp_recvmsg(struct sock *sk, struct msghdr *msg, int len,
- int noblock, int flags,int *addr_len)
+ int noblock, int flags, int *addr_len)
{
- int copied = 0;
- int truesize;
+ struct sockaddr_in *sin = (struct sockaddr_in *)msg->msg_name;
struct sk_buff *skb;
- int er;
- struct sockaddr_in *sin=(struct sockaddr_in *)msg->msg_name;
+ int copied, err;
/*
* Check any passed addresses
*addr_len=sizeof(*sin);
if (sk->ip_recverr && (skb = skb_dequeue(&sk->error_queue)) != NULL) {
- er = sock_error(sk);
- if (msg->msg_controllen == 0) {
- skb_free_datagram(sk, skb);
- return er;
+ err = sock_error(sk);
+ if (msg->msg_controllen != 0) {
+ put_cmsg(msg, SOL_IP, IP_RECVERR, skb->len, skb->data);
+ err = 0;
}
- put_cmsg(msg, SOL_IP, IP_RECVERR, skb->len, skb->data);
- skb_free_datagram(sk, skb);
- return 0;
+ goto out_free;
}
/*
* the finished NET3, it will do _ALL_ the work!
*/
- skb=skb_recv_datagram(sk,flags,noblock,&er);
- if(skb==NULL)
- return er;
+ skb = skb_recv_datagram(sk, flags, noblock, &err);
+ if (!skb)
+ goto out;
- truesize = skb->len - sizeof(struct udphdr);
- copied = truesize;
- if (len < truesize)
+ copied = skb->len - sizeof(struct udphdr);
+ if (copied > len)
{
- msg->msg_flags |= MSG_TRUNC;
copied = len;
+ msg->msg_flags |= MSG_TRUNC;
}
/*
* FIXME : should use udp header size info value
*/
- er = skb_copy_datagram_iovec(skb,sizeof(struct udphdr),msg->msg_iov,copied);
- if (er)
- return er;
+ err = skb_copy_datagram_iovec(skb, sizeof(struct udphdr), msg->msg_iov,
+ copied);
+ if (err)
+ goto out_free;
sk->stamp=skb->stamp;
/* Copy the address. */
}
if (sk->ip_cmsg_flags)
ip_cmsg_recv(msg, skb);
+ err = copied;
+out_free:
skb_free_datagram(sk, skb);
- return(copied);
+out:
+ return err;
}
int udp_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
if (usin->sin_family && usin->sin_family != AF_INET)
return(-EAFNOSUPPORT);
- dst_release(sk->dst_cache);
- sk->dst_cache = NULL;
+ dst_release(xchg(&sk->dst_cache, NULL));
err = ip_route_connect(&rt, usin->sin_addr.s_addr, sk->saddr,
sk->ip_tos|sk->localroute, sk->bound_dev_if);
#define ADBG(x)
#endif
+#ifdef CONFIG_SYSCTL
+static void addrconf_sysctl_register(struct inet6_dev *idev, struct ipv6_devconf *p);
+static void addrconf_sysctl_unregister(struct ipv6_devconf *p);
+#endif
+
/*
* Configured unicast address list
*/
static void addrconf_rs_timer(unsigned long data);
static void ipv6_ifa_notify(int event, struct inet6_ifaddr *ifa);
+struct ipv6_devconf ipv6_devconf =
+{
+ 0, /* forwarding */
+ IPV6_DEFAULT_HOPLIMIT, /* hop limit */
+ 576, /* mtu */
+ 1, /* accept RAs */
+ 1, /* accept redirects */
+ 1, /* autoconfiguration */
+ 1, /* dad transmits */
+ MAX_RTR_SOLICITATIONS, /* router solicits */
+ RTR_SOLICITATION_INTERVAL, /* rtr solicit interval */
+ MAX_RTR_SOLICITATION_DELAY, /* rtr solicit delay */
+};
+
+static struct ipv6_devconf ipv6_devconf_dflt =
+{
+ 0, /* forwarding */
+ IPV6_DEFAULT_HOPLIMIT, /* hop limit */
+ 576, /* mtu */
+ 1, /* accept RAs */
+ 1, /* accept redirects */
+ 1, /* autoconfiguration */
+ 1, /* dad transmits */
+ MAX_RTR_SOLICITATIONS, /* router solicits */
+ RTR_SOLICITATION_INTERVAL, /* rtr solicit interval */
+ MAX_RTR_SOLICITATION_DELAY, /* rtr solicit delay */
+};
+
int ipv6_addr_type(struct in6_addr *addr)
{
u32 st;
struct inet6_dev *ndev, **bptr, *iter;
int hash;
+ if (dev->mtu < 576)
+ return NULL;
+
ndev = kmalloc(sizeof(struct inet6_dev), gfp_any());
if (ndev) {
memset(ndev, 0, sizeof(struct inet6_dev));
ndev->dev = dev;
- ndev->nd_parms = neigh_parms_alloc(&nd_tbl);
- if (ndev->nd_parms == NULL)
- ndev->nd_parms = &nd_tbl.parms;
+ memcpy(&ndev->cnf, &ipv6_devconf_dflt, sizeof(ndev->cnf));
+ ndev->cnf.mtu6 = dev->mtu;
+ ndev->cnf.sysctl = NULL;
+ ndev->nd_parms = neigh_parms_alloc(dev, &nd_tbl);
+ if (ndev->nd_parms == NULL) {
+ kfree(ndev);
+ return NULL;
+ }
#ifdef CONFIG_SYSCTL
- else
- ndev->nd_parms->sysctl_table =
- neigh_sysctl_register(dev, ndev->nd_parms, NET_IPV6, NET_IPV6_NEIGH, "ipv6");
+ neigh_sysctl_register(dev, ndev->nd_parms, NET_IPV6, NET_IPV6_NEIGH, "ipv6");
+ addrconf_sysctl_register(ndev, &ndev->cnf);
#endif
hash = ipv6_devindex_hash(dev->ifindex);
bptr = &inet6_dev_lst[hash];
return idev;
}
-void addrconf_forwarding_on(void)
+static void addrconf_forward_change(struct inet6_dev *idev)
{
- struct inet6_dev *idev;
int i;
- for (i = 0; i < IN6_ADDR_HSIZE; i++) {
- for (idev = inet6_dev_lst[i]; idev; idev = idev->next) {
-#if ACONF_DEBUG >= 2
- printk(KERN_DEBUG "dev %s\n", idev->dev->name);
-#endif
+ if (idev)
+ return;
- idev->router = 1;
- }
+ for (i = 0; i < IN6_ADDR_HSIZE; i++) {
+ for (idev = inet6_dev_lst[i]; idev; idev = idev->next)
+ idev->cnf.forwarding = ipv6_devconf.forwarding;
}
}
__u32 prefered_lft;
int addr_type;
unsigned long rt_expires;
+ struct inet6_dev *in6_dev = ipv6_get_idev(dev);
+
+ if (in6_dev == NULL) {
+ printk(KERN_DEBUG "addrconf: device %s not configured\n", dev->name);
+ return;
+ }
pinfo = (struct prefix_info *) opt;
/* Try to figure out our local address for this prefix */
- if (pinfo->autoconf && ipv6_config.autoconf) {
+ if (pinfo->autoconf && in6_dev->cnf.autoconf) {
struct inet6_ifaddr * ifp;
struct in6_addr addr;
int plen;
ifp = ipv6_chk_addr(&addr, dev, 1);
if ((ifp == NULL || (ifp->flags&ADDR_INVALID)) && valid_lft) {
- struct inet6_dev *in6_dev = ipv6_get_idev(dev);
-
- if (in6_dev == NULL) {
- printk(KERN_DEBUG "addrconf: device %s not configured\n", dev->name);
- return;
- }
if (ifp == NULL)
ifp = ipv6_add_addr(in6_dev, &addr, addr_type & IPV6_ADDR_SCOPE_MASK);
if (!suser())
return -EPERM;
-
+
if (copy_from_user(&ireq, arg, sizeof(struct in6_ifreq)))
return -EFAULT;
rtnl_lock();
- err = inet6_addr_add(ireq.ifr6_ifindex, &ireq.ifr6_addr, ireq.ifr6_prefixlen);
+ err = inet6_addr_del(ireq.ifr6_ifindex, &ireq.ifr6_addr, ireq.ifr6_prefixlen);
rtnl_unlock();
return err;
}
dev->dev_addr, dev->addr_len);
addrconf_add_linklocal(idev, &addr);
#endif
-
- if (ipv6_config.forwarding)
- idev->router = 1;
}
static void addrconf_sit_config(struct device *dev)
#endif
break;
+ case NETDEV_CHANGEMTU:
+ /* BUGGG... Should scan FIB to change pmtu on routes. --ANK */
+ if (dev->mtu >= 576)
+ break;
+
+ /* MTU fell below 576. Stop IPv6 on this interface. */
+
case NETDEV_DOWN:
case NETDEV_UNREGISTER:
/*
* Remove all addresses from this interface.
*/
- if (addrconf_ifdown(dev, event == NETDEV_UNREGISTER) == 0) {
+ if (addrconf_ifdown(dev, event != NETDEV_DOWN) == 0) {
#ifdef CONFIG_IPV6_NETLINK
rt6_sndmsg(RTMSG_DELDEVICE, NULL, NULL, NULL, dev, 0, 0, 0, 0);
#endif
}
break;
- case NETDEV_CHANGEMTU:
case NETDEV_CHANGE:
- /* BUGGG... Should scan FIB to change pmtu on routes. --ANK */
break;
};
if (idev->dev == dev) {
*bidev = idev->next;
neigh_parms_release(&nd_tbl, idev->nd_parms);
+#ifdef CONFIG_SYSCTL
+ addrconf_sysctl_unregister(&idev->cnf);
+#endif
kfree(idev);
break;
}
ifp = (struct inet6_ifaddr *) data;
- if (ipv6_config.forwarding)
+ if (ifp->idev->cnf.forwarding)
return;
if (ifp->idev->if_flags & IF_RA_RCVD) {
return;
}
- if (ifp->probes++ <= ipv6_config.rtr_solicits) {
+ if (ifp->probes++ <= ifp->idev->cnf.rtr_solicits) {
struct in6_addr all_routers;
ipv6_addr_all_routers(&all_routers);
ifp->timer.function = addrconf_rs_timer;
ifp->timer.expires = (jiffies +
- ipv6_config.rtr_solicit_interval);
+ ifp->idev->cnf.rtr_solicit_interval);
add_timer(&ifp->timer);
} else {
struct in6_rtmsg rtmsg;
net_srandom(ifp->addr.s6_addr32[3]);
- ifp->probes = ipv6_config.dad_transmits;
+ ifp->probes = ifp->idev->cnf.dad_transmits;
ifp->flags |= DAD_INCOMPLETE;
- rand_num = net_random() % ipv6_config.rtr_solicit_delay;
+ rand_num = net_random() % ifp->idev->cnf.rtr_solicit_delay;
ifp->timer.function = addrconf_dad_timer;
ifp->timer.expires = jiffies + rand_num;
ndisc_send_ns(ifp->idev->dev, NULL, &ifp->addr, &mcaddr, &unspec);
#endif
- ifp->timer.expires = jiffies + ipv6_config.rtr_solicit_interval;
+ ifp->timer.expires = jiffies + ifp->idev->cnf.rtr_solicit_interval;
add_timer(&ifp->timer);
}
start sending router solicitations.
*/
- if (ipv6_config.forwarding == 0 &&
+ if (ifp->idev->cnf.forwarding == 0 &&
(dev->flags&(IFF_NOARP|IFF_LOOPBACK)) == 0 &&
(ipv6_addr_type(&ifp->addr) & IPV6_ADDR_LINKLOCAL)) {
struct in6_addr all_routers;
ifp->probes = 1;
ifp->timer.function = addrconf_rs_timer;
ifp->timer.expires = (jiffies +
- ipv6_config.rtr_solicit_interval);
+ ifp->idev->cnf.rtr_solicit_interval);
ifp->idev->if_flags |= IF_RS_SENT;
add_timer(&ifp->timer);
}
}
#ifdef CONFIG_RTNETLINK
+
static int
inet6_rtm_deladdr(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg)
{
- struct kern_ifa *k_ifa = arg;
+ struct rtattr **rta = arg;
struct ifaddrmsg *ifm = NLMSG_DATA(nlh);
struct in6_addr *pfx;
pfx = NULL;
- if (k_ifa->ifa_address)
- pfx = k_ifa->ifa_address;
- if (k_ifa->ifa_local) {
- if (pfx && memcmp(pfx, k_ifa->ifa_local, sizeof(*pfx)))
+ if (rta[IFA_ADDRESS-1]) {
+ if (RTA_PAYLOAD(rta[IFA_ADDRESS-1]) < sizeof(*pfx))
return -EINVAL;
- pfx = k_ifa->ifa_local;
+ pfx = RTA_DATA(rta[IFA_ADDRESS-1]);
+ }
+ if (rta[IFA_LOCAL-1]) {
+ if (pfx && memcmp(pfx, RTA_DATA(rta[IFA_LOCAL-1]), sizeof(*pfx)))
+ return -EINVAL;
+ pfx = RTA_DATA(rta[IFA_LOCAL-1]);
}
return inet6_addr_del(ifm->ifa_index, pfx, ifm->ifa_prefixlen);
static int
inet6_rtm_newaddr(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg)
{
- struct kern_ifa *k_ifa = arg;
+ struct rtattr **rta = arg;
struct ifaddrmsg *ifm = NLMSG_DATA(nlh);
struct in6_addr *pfx;
pfx = NULL;
- if (k_ifa->ifa_address)
- pfx = k_ifa->ifa_address;
- if (k_ifa->ifa_local) {
- if (pfx && memcmp(pfx, k_ifa->ifa_local, sizeof(*pfx)))
+ if (rta[IFA_ADDRESS-1]) {
+ if (RTA_PAYLOAD(rta[IFA_ADDRESS-1]) < sizeof(*pfx))
return -EINVAL;
- pfx = k_ifa->ifa_local;
+ pfx = RTA_DATA(rta[IFA_ADDRESS-1]);
+ }
+ if (rta[IFA_LOCAL-1]) {
+ if (pfx && memcmp(pfx, RTA_DATA(rta[IFA_LOCAL-1]), sizeof(*pfx)))
+ return -EINVAL;
+ pfx = RTA_DATA(rta[IFA_LOCAL-1]);
}
return inet6_addr_add(ifm->ifa_index, pfx, ifm->ifa_prefixlen);
nlmsg_failure:
rtattr_failure:
- skb_put(skb, b - skb->tail);
+ skb_trim(skb, b - skb->data);
return -1;
}
struct sk_buff *skb;
int size = NLMSG_SPACE(sizeof(struct ifaddrmsg)+128);
- skb = alloc_skb(size, GFP_KERNEL);
+ skb = alloc_skb(size, GFP_ATOMIC);
if (!skb) {
netlink_set_err(rtnl, 0, RTMGRP_IPV6_IFADDR, ENOBUFS);
return;
{
{ NULL, NULL, },
{ NULL, NULL, },
- { NULL, rtnetlink_dump_ifinfo, },
+ { NULL, NULL, },
{ NULL, NULL, },
{ inet6_rtm_newaddr, NULL, },
{ inet6_rtm_delroute, NULL, },
{ NULL, inet6_dump_fib, },
{ NULL, NULL, },
-
- { neigh_add, NULL, },
- { neigh_delete, NULL, },
- { NULL, neigh_dump_info, },
- { NULL, NULL, },
-
- { NULL, NULL, },
- { NULL, NULL, },
- { NULL, NULL, },
- { NULL, NULL, },
};
#endif
}
}
+#ifdef CONFIG_SYSCTL
+
+static
+int addrconf_sysctl_forward(ctl_table *ctl, int write, struct file * filp,
+ void *buffer, size_t *lenp)
+{
+ int *valp = ctl->data;
+ int val = *valp;
+ int ret;
+
+ ret = proc_dointvec(ctl, write, filp, buffer, lenp);
+
+ if (write && *valp != val && valp != &ipv6_devconf_dflt.forwarding) {
+ struct inet6_dev *idev = NULL;
+
+ if (valp != &ipv6_devconf.forwarding) {
+ struct device *dev = dev_get_by_index(ctl->ctl_name);
+ if (dev)
+ idev = ipv6_get_idev(dev);
+ if (idev == NULL)
+ return ret;
+ } else
+ ipv6_devconf_dflt.forwarding = ipv6_devconf.forwarding;
+
+ addrconf_forward_change(idev);
+
+ if (*valp)
+ rt6_purge_dflt_routers(0);
+ }
+
+ return ret;
+}
+
+static struct addrconf_sysctl_table
+{
+ struct ctl_table_header *sysctl_header;
+ ctl_table addrconf_vars[11];
+ ctl_table addrconf_dev[2];
+ ctl_table addrconf_conf_dir[2];
+ ctl_table addrconf_proto_dir[2];
+ ctl_table addrconf_root_dir[2];
+} addrconf_sysctl = {
+ NULL,
+ {{NET_IPV6_FORWARDING, "forwarding",
+ &ipv6_devconf.forwarding, sizeof(int), 0644, NULL,
+ &addrconf_sysctl_forward},
+
+ {NET_IPV6_HOP_LIMIT, "hop_limit",
+ &ipv6_devconf.hop_limit, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+
+ {NET_IPV6_MTU, "mtu",
+ &ipv6_devconf.mtu6, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+
+ {NET_IPV6_ACCEPT_RA, "accept_ra",
+ &ipv6_devconf.accept_ra, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+
+ {NET_IPV6_ACCEPT_REDIRECTS, "accept_redirects",
+ &ipv6_devconf.accept_redirects, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+
+ {NET_IPV6_AUTOCONF, "autoconf",
+ &ipv6_devconf.autoconf, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+
+ {NET_IPV6_DAD_TRANSMITS, "dad_transmits",
+ &ipv6_devconf.dad_transmits, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+
+ {NET_IPV6_RTR_SOLICITS, "router_solicitations",
+ &ipv6_devconf.rtr_solicits, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+
+ {NET_IPV6_RTR_SOLICIT_INTERVAL, "router_solicitation_interval",
+ &ipv6_devconf.rtr_solicit_interval, sizeof(int), 0644, NULL,
+ &proc_dointvec_jiffies},
+
+ {NET_IPV6_RTR_SOLICIT_DELAY, "router_solicitation_delay",
+ &ipv6_devconf.rtr_solicit_delay, sizeof(int), 0644, NULL,
+ &proc_dointvec_jiffies},
+
+ {0}},
+
+ {{NET_PROTO_CONF_ALL, "all", NULL, 0, 0555, addrconf_sysctl.addrconf_vars},{0}},
+ {{NET_IPV6_CONF, "conf", NULL, 0, 0555, addrconf_sysctl.addrconf_dev},{0}},
+ {{NET_IPV6, "ipv6", NULL, 0, 0555, addrconf_sysctl.addrconf_conf_dir},{0}},
+ {{CTL_NET, "net", NULL, 0, 0555, addrconf_sysctl.addrconf_proto_dir},{0}}
+};
+
+static void addrconf_sysctl_register(struct inet6_dev *idev, struct ipv6_devconf *p)
+{
+ int i;
+ struct device *dev = idev ? idev->dev : NULL;
+ struct addrconf_sysctl_table *t;
+
+ t = kmalloc(sizeof(*t), GFP_KERNEL);
+ if (t == NULL)
+ return;
+ memcpy(t, &addrconf_sysctl, sizeof(*t));
+ for (i=0; i<sizeof(t->addrconf_vars)/sizeof(t->addrconf_vars[0])-1; i++) {
+ t->addrconf_vars[i].data += (char*)p - (char*)&ipv6_devconf;
+ t->addrconf_vars[i].de = NULL;
+ }
+ if (dev) {
+ t->addrconf_dev[0].procname = dev->name;
+ t->addrconf_dev[0].ctl_name = dev->ifindex;
+ } else {
+ t->addrconf_dev[0].procname = "default";
+ t->addrconf_dev[0].ctl_name = NET_PROTO_CONF_DEFAULT;
+ }
+ t->addrconf_dev[0].child = t->addrconf_vars;
+ t->addrconf_dev[0].de = NULL;
+ t->addrconf_conf_dir[0].child = t->addrconf_dev;
+ t->addrconf_conf_dir[0].de = NULL;
+ t->addrconf_proto_dir[0].child = t->addrconf_conf_dir;
+ t->addrconf_proto_dir[0].de = NULL;
+ t->addrconf_root_dir[0].child = t->addrconf_proto_dir;
+ t->addrconf_root_dir[0].de = NULL;
+
+ t->sysctl_header = register_sysctl_table(t->addrconf_root_dir, 0);
+ if (t->sysctl_header == NULL)
+ kfree(t);
+}
+
+static void addrconf_sysctl_unregister(struct ipv6_devconf *p)
+{
+ if (p->sysctl) {
+ struct addrconf_sysctl_table *t = p->sysctl;
+ p->sysctl = NULL;
+ unregister_sysctl_table(t->sysctl_header);
+ kfree(t);
+ }
+}
+
+
+#endif
/*
* Init / cleanup code
#ifdef CONFIG_RTNETLINK
rtnetlink_links[AF_INET6] = inet6_rtnetlink_table;
#endif
+#ifdef CONFIG_SYSCTL
+ addrconf_sysctl.sysctl_header =
+ register_sysctl_table(addrconf_sysctl.addrconf_root_dir, 0);
+ addrconf_sysctl_register(NULL, &ipv6_devconf_dflt);
+#endif
}
#ifdef MODULE
#ifdef CONFIG_RTNETLINK
rtnetlink_links[AF_INET6] = NULL;
#endif
- start_bh_atomic();
+#ifdef CONFIG_SYSCTL
+ addrconf_sysctl_unregister(&ipv6_devconf_dflt);
+ addrconf_sysctl_unregister(&ipv6_devconf);
+#endif
del_timer(&addr_chk_timer);
}
}
+ start_bh_atomic();
/*
* clean addr_list
*/
sk->timer.data = (unsigned long)sk;
sk->timer.function = &net_timer;
- sk->net_pinfo.af_inet6.hop_limit = ipv6_config.hop_limit;
- sk->net_pinfo.af_inet6.mcast_hops = IPV6_DEFAULT_MCASTHOPS;
+ sk->net_pinfo.af_inet6.hop_limit = -1;
+ sk->net_pinfo.af_inet6.mcast_hops = -1;
sk->net_pinfo.af_inet6.mc_loop = 1;
/* Init the ipv4 part of the socket since we can have sockets
static __u32 rt_sernum = 0;
-static void fib6_run_gc(unsigned long);
-
static struct timer_list ip6_fib_timer = {
NULL, NULL,
0,
(ipv6_addr_cmp(&iter->rt6i_gateway,
&rt->rt6i_gateway) == 0)) {
if (rt->rt6i_expires == 0 ||
- rt->rt6i_expires - iter->rt6i_expires > 0)
+ (long)(rt->rt6i_expires - iter->rt6i_expires) > 0)
rt->rt6i_expires = iter->rt6i_expires;
return -EEXIST;
}
{
if ((ip6_fib_timer.expires == 0) &&
(rt->rt6i_flags & (RTF_ADDRCONF | RTF_CACHE))) {
- ip6_fib_timer.expires = jiffies + ipv6_config.rt_gc_period;
+ del_timer(&ip6_fib_timer);
+ ip6_fib_timer.expires = jiffies + ip6_rt_gc_interval;
add_timer(&ip6_fib_timer);
}
}
for (rt = fn->leaf; rt;) {
if ((rt->rt6i_flags & RTF_CACHE) && atomic_read(&rt->rt6i_use) == 0) {
- if (now - rt->rt6i_tstamp > timeout) {
+ if ((long)(now - rt->rt6i_tstamp) >= timeout) {
struct rt6_info *old;
old = rt;
* Seems, radix tree walking is absolutely broken,
* but we will try in any case --ANK
*/
- if (rt->rt6i_expires && now - rt->rt6i_expires < 0) {
+ if (rt->rt6i_expires && (long)(now - rt->rt6i_expires) < 0) {
struct rt6_info *old;
old = rt;
}
}
-static void fib6_run_gc(unsigned long dummy)
+void fib6_run_gc(unsigned long dummy)
{
struct fib6_gc_args arg = {
- ipv6_config.rt_cache_timeout,
+ ip6_rt_gc_timeout,
0
};
+ del_timer(&ip6_fib_timer);
+
+ if (dummy)
+ arg.timeout = dummy;
+
if (fib6_walk_count == 0)
fib6_walk_tree(&ip6_routing_table, fib6_garbage_collect, &arg, 0);
else
arg.more = 1;
if (arg.more) {
- ip6_fib_timer.expires = jiffies + ipv6_config.rt_gc_period;
+ ip6_fib_timer.expires = jiffies + ip6_rt_gc_interval;
add_timer(&ip6_fib_timer);
} else {
ip6_fib_timer.expires = 0;
skb->protocol = __constant_htons(ETH_P_IPV6);
skb->dev = dev;
+ if (ipv6_addr_is_multicast(&skb->nh.ipv6h->daddr)) {
+ if (!(dev->flags&IFF_LOOPBACK) &&
+ (skb->sk == NULL || skb->sk->net_pinfo.af_inet6.mc_loop) &&
+ ipv6_chk_mcast_addr(dev, &skb->nh.ipv6h->daddr)) {
+ /* Do not check for IFF_ALLMULTI; multicast routing
+ is not supported in any case.
+ */
+ dev_loopback_xmit(skb);
+
+ if (skb->nh.ipv6h->hop_limit == 0) {
+ kfree_skb(skb);
+ return 0;
+ }
+ }
+ }
+
if (hh) {
#ifdef __alpha__
 /* Alpha has disgusting memcpy. Help it. */
hdr->payload_len = htons(seg_len - sizeof(struct ipv6hdr));
hdr->nexthdr = fl->proto;
- hdr->hop_limit = np ? np->hop_limit : ipv6_config.hop_limit;
-
+ if (np == NULL || np->hop_limit < 0)
+ hdr->hop_limit = ((struct rt6_info*)dst)->rt6i_hoplimit;
+ else
+ hdr->hop_limit = np->hop_limit;
+
ipv6_addr_copy(&hdr->saddr, fl->nl_u.ip6_u.saddr);
ipv6_addr_copy(&hdr->daddr, fl->nl_u.ip6_u.daddr);
pktlength = length;
- if (hlimit < 0)
- hlimit = np->hop_limit;
+ if (hlimit < 0) {
+ if (ipv6_addr_is_multicast(fl->nl_u.ip6_u.daddr))
+ hlimit = np->mcast_hops;
+ else
+ hlimit = np->hop_limit;
+ if (hlimit < 0)
+ hlimit = ((struct rt6_info*)dst)->rt6i_hoplimit;
+ }
if (!sk->ip_hdrincl) {
pktlength += sizeof(struct ipv6hdr);
hdr = (struct ipv6hdr *) skb->tail;
skb->nh.ipv6h = hdr;
-
+
if (!sk->ip_hdrincl) {
ip6_bld_1(sk, skb, fl, hlimit, pktlength);
#if 0
struct ipv6hdr *hdr = skb->nh.ipv6h;
int size;
- if (ipv6_config.forwarding == 0) {
+ if (ipv6_devconf.forwarding == 0) {
kfree_skb(skb);
return -EINVAL;
}
break;
case IPV6_UNICAST_HOPS:
- if (val > 255)
+ if (val > 255 || val < -1)
retv = -EINVAL;
else {
np->hop_limit = val;
break;
case IPV6_MULTICAST_HOPS:
- if (val > 255)
+ if (val > 255 || val < -1)
retv = -EINVAL;
else {
np->mcast_hops = val;
retv = 0;
}
 break;
case IPV6_MULTICAST_LOOP:
- np->mc_loop = val;
+ np->mc_loop = (val != 0);
+ retv = 0;
break;
case IPV6_MULTICAST_IF:
pndisc_constructor,
pndisc_destructor,
pndisc_redo,
- { NULL, NULL, 30*HZ, 1*HZ, 60*HZ, 30*HZ, 5*HZ, 3, 3, 0, 3, 1*HZ, (8*HZ)/10, 0, 64 },
+ { NULL, NULL, &nd_tbl, 0, NULL, NULL,
+ 30*HZ, 1*HZ, 60*HZ, 30*HZ, 5*HZ, 3, 3, 0, 3, 1*HZ, (8*HZ)/10, 0, 64 },
30*HZ, 128, 512, 1024,
};
ND_PRINTK1("RA: can't find in6 device\n");
return;
}
-
+ if (in6_dev->cnf.forwarding || !in6_dev->cnf.accept_ra)
+ return;
+
if (in6_dev->if_flags & IF_RS_SENT) {
/*
* flag that an RA was received after an RS was sent
rt->rt6i_expires = jiffies + (HZ * lifetime);
if (ra_msg->icmph.icmp6_hop_limit)
- ipv6_config.hop_limit = ra_msg->icmph.icmp6_hop_limit;
+ in6_dev->cnf.hop_limit = ra_msg->icmph.icmp6_hop_limit;
/*
* Update Reachable Time and Retrans Timer
break;
case ND_OPT_MTU:
- if (rt) {
+ {
int mtu;
- struct device *dev;
 mtu = ntohl(*(__u32 *)(opt+4));
- dev = rt->rt6i_dev;
-
- if (dev == NULL)
- break;
- if (mtu < 576) {
+ if (mtu < 576 || mtu > skb->dev->mtu) {
ND_PRINTK0("NDISC: router "
"announcement with mtu = %d\n",
mtu);
break;
}
-#if 0
- /* Bad idea. Sorry, this thing is not
- so easy to implement --ANK
- */
- if (dev->change_mtu)
- dev->change_mtu(dev, mtu);
- else
- dev->mtu = mtu;
-#endif
+ if (in6_dev->cnf.mtu6 != mtu) {
+ in6_dev->cnf.mtu6 = mtu;
+
+ if (rt)
+ rt->u.dst.pmtu = mtu;
+
+ /* BUGGG... Scan routing tables and
+ adjust mtu on routes going
+ via this device
+ */
+ }
}
break;
}
}
-void ndisc_forwarding_on(void)
-{
-
- /*
- * Forwarding was turned on.
- */
-
- rt6_purge_dflt_routers(0);
-}
-
-void ndisc_forwarding_off(void)
-{
- /*
- * Forwarding was turned off.
- */
-}
-
static void ndisc_redirect_rcv(struct sk_buff *skb)
{
+ struct inet6_dev *in6_dev;
struct icmp6hdr *icmph;
struct in6_addr *dest;
struct in6_addr *target; /* new first hop to destination */
return;
}
+ in6_dev = ipv6_get_idev(skb->dev);
+ if (!in6_dev || in6_dev->cnf.forwarding || !in6_dev->cnf.accept_redirects)
+ return;
+
/* passed validation tests
NOTE We should not install redirect if sender did not supply
ipv6_addr_all_nodes(&maddr);
ndisc_send_na(dev, NULL, &maddr, &ifp->addr,
- ifp->idev->router, 0, 1, 1);
+ ifp->idev->cnf.forwarding, 0, 1, 1);
return 0;
}
if (neigh) {
ndisc_send_na(dev, neigh, saddr, &ifp->addr,
- ifp->idev->router, 1, inc, inc);
+ ifp->idev->cnf.forwarding, 1, inc, inc);
neigh_release(neigh);
}
}
struct inet6_dev *in6_dev = ipv6_get_idev(dev);
int addr_type = ipv6_addr_type(saddr);
- if (in6_dev && in6_dev->router &&
+ if (in6_dev && in6_dev->cnf.forwarding &&
(addr_type & IPV6_ADDR_UNICAST) &&
pneigh_lookup(&nd_tbl, &msg->target, dev, 0)) {
int inc = ipv6_addr_type(daddr)&IPV6_ADDR_MULTICAST;
neigh_release(neigh);
}
break;
- };
- if (ipv6_config.forwarding == 0) {
- switch (msg->icmph.icmp6_type) {
- case NDISC_ROUTER_ADVERTISEMENT:
- if (ipv6_config.accept_ra)
- ndisc_router_discovery(skb);
- break;
+ case NDISC_ROUTER_ADVERTISEMENT:
+ ndisc_router_discovery(skb);
+ break;
- case NDISC_REDIRECT:
- if (ipv6_config.accept_redirects)
- ndisc_redirect_rcv(skb);
- break;
- };
- }
+ case NDISC_REDIRECT:
+ ndisc_redirect_rcv(skb);
+ break;
+ };
return 0;
}
sk->allocation = GFP_ATOMIC;
sk->net_pinfo.af_inet6.hop_limit = 255;
sk->net_pinfo.af_inet6.priority = 15;
+ /* Do not loopback ndisc messages */
+ sk->net_pinfo.af_inet6.mc_loop = 0;
sk->num = 256;
/*
#endif
#endif
#ifdef CONFIG_SYSCTL
- nd_tbl.parms.sysctl_table = neigh_sysctl_register(NULL, &nd_tbl.parms, NET_IPV6, NET_IPV6_NEIGH, "ipv6");
+ neigh_sysctl_register(NULL, &nd_tbl.parms, NET_IPV6, NET_IPV6_NEIGH, "ipv6");
#endif
}
SOCKHASH_UNLOCK();
}
-static int __inline__ inet6_mc_check(struct sock *sk, struct in6_addr *addr)
+static __inline__ int inet6_mc_check(struct sock *sk, struct in6_addr *addr)
{
struct ipv6_mc_socklist *mc;
*/
int rawv6_recvmsg(struct sock *sk, struct msghdr *msg, int len,
- int noblock, int flags,int *addr_len)
+ int noblock, int flags, int *addr_len)
{
- struct sockaddr_in6 *sin6=(struct sockaddr_in6 *)msg->msg_name;
+ struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *)msg->msg_name;
struct sk_buff *skb;
- int copied=0;
- int err;
-
+ int copied, err;
if (flags & MSG_OOB)
return -EOPNOTSUPP;
if (addr_len)
*addr_len=sizeof(*sin6);
- skb=skb_recv_datagram(sk, flags, noblock, &err);
- if(skb==NULL)
- return err;
+ skb = skb_recv_datagram(sk, flags, noblock, &err);
+ if (!skb)
+ goto out;
copied = min(len, skb->tail - skb->h.raw);
err = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied);
sk->stamp=skb->stamp;
-
if (err)
- return err;
+ goto out_free;
/* Copy the address. */
if (sin6) {
sin6->sin6_family = AF_INET6;
memcpy(&sin6->sin6_addr, &skb->nh.ipv6h->saddr,
sizeof(struct in6_addr));
-
- *addr_len = sizeof(struct sockaddr_in6);
}
if (msg->msg_controllen)
datagram_recv_ctl(sk, msg, skb);
+ err = copied;
+out_free:
skb_free_datagram(sk, skb);
- return (copied);
+out:
+ return err;
}
/*
#include <asm/uaccess.h>
+#ifdef CONFIG_SYSCTL
+#include <linux/sysctl.h>
+#endif
+
#undef CONFIG_RT6_POLICY
/* Set to 3 to get tracing. */
#define RDBG(x)
#endif
+int ip6_rt_max_size = 4096;
+int ip6_rt_gc_min_interval = 5*HZ;
+int ip6_rt_gc_timeout = 60*HZ;
+int ip6_rt_gc_interval = 30*HZ;
+
static struct rt6_info * ip6_rt_copy(struct rt6_info *ort);
static struct dst_entry *ip6_dst_check(struct dst_entry *dst, u32 cookie);
static struct dst_entry *ip6_dst_reroute(struct dst_entry *dst,
struct sk_buff *skb);
static struct dst_entry *ip6_negative_advice(struct dst_entry *);
+static int ip6_dst_gc(void);
static int ip6_pkt_discard(struct sk_buff *skb);
+static void ip6_link_failure(struct sk_buff *skb);
struct dst_ops ip6_dst_ops = {
AF_INET6,
__constant_htons(ETH_P_IPV6),
+ 1024,
+
+ ip6_dst_gc,
ip6_dst_check,
ip6_dst_reroute,
NULL,
- ip6_negative_advice
+ ip6_negative_advice,
+ ip6_link_failure,
};
struct rt6_info ip6_null_entry = {
{{NULL, ATOMIC_INIT(0), ATOMIC_INIT(0), NULL,
- -1, 0, 0, 0, 0, 0, 0, 0,
+ -1, 0, 0, 0, 0, 0, 0, 0, 0,
-ENETUNREACH, NULL, NULL,
ip6_pkt_discard, ip6_pkt_discard, &ip6_dst_ops}},
NULL, {{{0}}}, 256, RTF_REJECT|RTF_NONEXTHOP, ~0U,
- 0, {NULL}, {{{{0}}}, 128}, {{{{0}}}, 128}
+ 0, 255, {NULL}, {{{{0}}}, 128}, {{{{0}}}, 128}
};
struct fib6_node ip6_routing_table = {
struct device *dev,
int strict)
{
+ struct rt6_info *local = NULL;
struct rt6_info *sprt;
RDBG(("rt6_device_match: (%p,%p,%d) ", rt, dev, strict));
RDBG(("match --> %p\n", sprt));
return sprt;
}
+ if (sprt->rt6i_dev && (sprt->rt6i_dev->flags&IFF_LOOPBACK))
+ local = sprt;
}
+ if (local)
+ return local;
+
if (strict) {
RDBG(("nomatch & STRICT --> ip6_null_entry\n"));
return &ip6_null_entry;
struct dst_entry *dst;
RDBG(("ip6_route_input(%p) from %p\n", skb, __builtin_return_address(0)));
+ if ((dst = skb->dst) != NULL)
+ goto looped_back;
rt6_lock();
fn = fib6_lookup(&ip6_routing_table, &skb->nh.ipv6h->daddr,
&skb->nh.ipv6h->saddr);
rt6_unlock();
skb->dst = dst;
+looped_back:
dst->input(skb);
}
return NULL;
}
+static void ip6_link_failure(struct sk_buff *skb)
+{
+ icmpv6_send(skb, ICMPV6_DEST_UNREACH, ICMPV6_ADDR_UNREACH, 0, skb->dev);
+}
+
+static int ip6_dst_gc(void)
+{
+ static unsigned expire = 30*HZ;
+ static unsigned long last_gc;
+ unsigned long now = jiffies;
+
+ start_bh_atomic();
+ if ((long)(now - last_gc) < ip6_rt_gc_min_interval)
+ goto out;
+
+ expire++;
+ fib6_run_gc(expire);
+ last_gc = now;
+ if (atomic_read(&ip6_dst_ops.entries) < ip6_dst_ops.gc_thresh)
+ expire = ip6_rt_gc_timeout;
+
+out:
+ expire >>= 1;
+ end_bh_atomic();
+ return (atomic_read(&ip6_dst_ops.entries) > ip6_rt_max_size);
+}
/* Clean host part of a prefix. Not necessary in radix tree,
but results in cleaner routing tables.
pfx->s6_addr[plen>>3] &= (0xFF<<(8-b));
}
+static int ipv6_get_mtu(struct device *dev)
+{
+ struct inet6_dev *idev;
+
+ idev = ipv6_get_idev(dev);
+ if (idev)
+ return idev->cnf.mtu6;
+ else
+ return 576;
+}
+
+static int ipv6_get_hoplimit(struct device *dev)
+{
+ struct inet6_dev *idev;
+
+ idev = ipv6_get_idev(dev);
+ if (idev)
+ return idev->cnf.hop_limit;
+ else
+ return ipv6_devconf.hop_limit;
+}
+
/*
*
*/
rt->rt6i_metric = rtmsg->rtmsg_metric;
rt->rt6i_dev = dev;
- rt->u.dst.pmtu = dev->mtu;
+ rt->u.dst.pmtu = ipv6_get_mtu(dev);
+ if (ipv6_addr_is_multicast(&rt->rt6i_dst.addr))
+ rt->rt6i_hoplimit = IPV6_DEFAULT_MCASTHOPS;
+ else
+ rt->rt6i_hoplimit = ipv6_get_hoplimit(dev);
rt->rt6i_flags = rtmsg->rtmsg_flags;
RDBG(("rt6ins(%p) ", rt));
int ip6_route_del(struct in6_rtmsg *rtmsg)
{
+ struct fib6_node *fn;
struct rt6_info *rt;
- struct device *dev=NULL;
- /*
- * Find device
- */
- if(rtmsg->rtmsg_ifindex) {
- dev=dev_get_by_index(rtmsg->rtmsg_ifindex);
- if (dev == NULL)
- return -ENODEV;
- }
- /*
- * Find route
- */
- rt=rt6_lookup(&rtmsg->rtmsg_dst, &rtmsg->rtmsg_src, dev, RTF_LINKRT);
+ rt6_lock();
+ fn = fib6_lookup(&ip6_routing_table, &rtmsg->rtmsg_dst, &rtmsg->rtmsg_src);
+ rt = fn->leaf;
/*
* Blow it away
if (fn->fn_flags & RTN_ROOT)
break;
if (fn->fn_flags & RTN_RTINFO) {
- rt = rt6_device_match(fn->leaf, dev, RTF_LINKRT);
+ rt = fn->leaf;
goto restart;
}
}
}
+
if (rt->rt6i_dst.plen == rtmsg->rtmsg_dst_len) {
- ip6_del_rt(rt);
- return 0;
+ for ( ; rt; rt = rt->u.next) {
+ if (rtmsg->rtmsg_ifindex &&
+ (rt->rt6i_dev == NULL ||
+ rt->rt6i_dev->ifindex != rtmsg->rtmsg_ifindex))
+ continue;
+ if (rtmsg->rtmsg_flags&RTF_GATEWAY &&
+ ipv6_addr_cmp(&rtmsg->rtmsg_gateway, &rt->rt6i_gateway))
+ continue;
+ if (rtmsg->rtmsg_metric &&
+ rtmsg->rtmsg_metric != rt->rt6i_metric)
+ continue;
+ ip6_del_rt(rt);
+ rt6_unlock();
+ return 0;
+ }
}
}
+ rt6_unlock();
return -ESRCH;
}
ipv6_addr_copy(&nrt->rt6i_gateway, target);
nrt->rt6i_nexthop = ndisc_get_neigh(nrt->rt6i_dev, target);
nrt->rt6i_dev = dev;
- nrt->u.dst.pmtu = dev->mtu;
+ nrt->u.dst.pmtu = ipv6_get_mtu(dev);
+ if (!ipv6_addr_is_multicast(&nrt->rt6i_dst.addr))
+ nrt->rt6i_hoplimit = ipv6_get_hoplimit(dev);
rt6_lock();
rt6_ins(nrt);
rt->u.dst.output = ort->u.dst.output;
rt->u.dst.pmtu = ort->u.dst.pmtu;
+ rt->rt6i_hoplimit = ort->rt6i_hoplimit;
rt->rt6i_dev = ort->rt6i_dev;
-
+
ipv6_addr_copy(&rt->rt6i_gateway, &ort->rt6i_gateway);
rt->rt6i_keylen = ort->rt6i_keylen;
rt->rt6i_flags = ort->rt6i_flags;
rt->u.dst.input = ip6_input;
rt->u.dst.output = ip6_output;
rt->rt6i_dev = dev_get("lo");
- rt->u.dst.pmtu = rt->rt6i_dev->mtu;
+ rt->u.dst.pmtu = ipv6_get_mtu(rt->rt6i_dev);
+ rt->rt6i_hoplimit = ipv6_get_hoplimit(rt->rt6i_dev);
rt->u.dst.obsolete = -1;
rt->rt6i_flags = RTF_UP | RTF_NONEXTHOP;
#ifdef CONFIG_RTNETLINK
-static void inet6_rtm_to_rtmsg(struct rtmsg *r, struct kern_rta *rta,
- struct in6_rtmsg *rtmsg)
+static int inet6_rtm_to_rtmsg(struct rtmsg *r, struct rtattr **rta,
+ struct in6_rtmsg *rtmsg)
{
memset(rtmsg, 0, sizeof(*rtmsg));
+
rtmsg->rtmsg_dst_len = r->rtm_dst_len;
rtmsg->rtmsg_src_len = r->rtm_src_len;
rtmsg->rtmsg_flags = RTF_UP;
rtmsg->rtmsg_metric = IP6_RT_PRIO_USER;
- if (rta->rta_gw) {
- memcpy(&rtmsg->rtmsg_gateway, rta->rta_gw, 16);
+
+ if (rta[RTA_GATEWAY-1]) {
+ if (rta[RTA_GATEWAY-1]->rta_len != RTA_LENGTH(16))
+ return -EINVAL;
+ memcpy(&rtmsg->rtmsg_gateway, RTA_DATA(rta[RTA_GATEWAY-1]), 16);
rtmsg->rtmsg_flags |= RTF_GATEWAY;
}
- if (rta->rta_dst)
- memcpy(&rtmsg->rtmsg_dst, rta->rta_dst, 16);
- if (rta->rta_src)
- memcpy(&rtmsg->rtmsg_src, rta->rta_src, 16);
- if (rta->rta_oif)
- memcpy(&rtmsg->rtmsg_ifindex, rta->rta_oif, 4);
- if (rta->rta_priority)
- memcpy(&rtmsg->rtmsg_metric, rta->rta_priority, 4);
+ if (rta[RTA_DST-1]) {
+ if (RTA_PAYLOAD(rta[RTA_DST-1]) < ((r->rtm_dst_len+7)>>3))
+ return -EINVAL;
+ memcpy(&rtmsg->rtmsg_dst, RTA_DATA(rta[RTA_DST-1]), ((r->rtm_dst_len+7)>>3));
+ }
+ if (rta[RTA_SRC-1]) {
+ if (RTA_PAYLOAD(rta[RTA_SRC-1]) < ((r->rtm_src_len+7)>>3))
+ return -EINVAL;
+ memcpy(&rtmsg->rtmsg_src, RTA_DATA(rta[RTA_SRC-1]), ((r->rtm_src_len+7)>>3));
+ }
+ if (rta[RTA_OIF-1]) {
+ if (rta[RTA_OIF-1]->rta_len != RTA_LENGTH(sizeof(int)))
+ return -EINVAL;
+ memcpy(&rtmsg->rtmsg_ifindex, RTA_DATA(rta[RTA_OIF-1]), sizeof(int));
+ }
+ if (rta[RTA_PRIORITY-1]) {
+ if (rta[RTA_PRIORITY-1]->rta_len != RTA_LENGTH(4))
+ return -EINVAL;
+ memcpy(&rtmsg->rtmsg_metric, RTA_DATA(rta[RTA_PRIORITY-1]), 4);
+ }
+ return 0;
}
int inet6_rtm_delroute(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg)
{
- struct kern_rta *rta = arg;
struct rtmsg *r = NLMSG_DATA(nlh);
struct in6_rtmsg rtmsg;
- inet6_rtm_to_rtmsg(r, rta, &rtmsg);
+ if (inet6_rtm_to_rtmsg(r, arg, &rtmsg))
+ return -EINVAL;
return ip6_route_del(&rtmsg);
}
int inet6_rtm_newroute(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg)
{
- struct kern_rta *rta = arg;
struct rtmsg *r = NLMSG_DATA(nlh);
struct in6_rtmsg rtmsg;
int err = 0;
- inet6_rtm_to_rtmsg(r, rta, &rtmsg);
+ if (inet6_rtm_to_rtmsg(r, arg, &rtmsg))
+ return -EINVAL;
ip6_route_add(&rtmsg, &err);
return err;
}
struct rtmsg *rtm;
struct nlmsghdr *nlh;
unsigned char *b = skb->tail;
+#ifdef CONFIG_RTNL_OLD_IFINFO
unsigned char *o;
+#else
+ struct rtattr *mx;
+#endif
struct rta_cacheinfo ci;
nlh = NLMSG_PUT(skb, pid, seq, type, sizeof(*rtm));
rtm->rtm_type = RTN_UNICAST;
rtm->rtm_flags = 0;
rtm->rtm_scope = RT_SCOPE_UNIVERSE;
+#ifdef CONFIG_RTNL_OLD_IFINFO
rtm->rtm_nhs = 0;
+#endif
rtm->rtm_protocol = RTPROT_BOOT;
if (rt->rt6i_flags&RTF_DYNAMIC)
rtm->rtm_protocol = RTPROT_REDIRECT;
if (rt->rt6i_flags&RTF_CACHE)
rtm->rtm_flags |= RTM_F_CLONED;
+#ifdef CONFIG_RTNL_OLD_IFINFO
o = skb->tail;
+#endif
if (rtm->rtm_dst_len)
RTA_PUT(skb, RTA_DST, 16, &rt->rt6i_dst.addr);
if (rtm->rtm_src_len)
RTA_PUT(skb, RTA_SRC, 16, &rt->rt6i_src.addr);
+#ifdef CONFIG_RTNL_OLD_IFINFO
if (rt->u.dst.pmtu)
RTA_PUT(skb, RTA_MTU, sizeof(unsigned), &rt->u.dst.pmtu);
if (rt->u.dst.window)
RTA_PUT(skb, RTA_WINDOW, sizeof(unsigned), &rt->u.dst.window);
if (rt->u.dst.rtt)
RTA_PUT(skb, RTA_RTT, sizeof(unsigned), &rt->u.dst.rtt);
+#else
+ mx = (struct rtattr*)skb->tail;
+ RTA_PUT(skb, RTA_METRICS, 0, NULL);
+ if (rt->u.dst.pmtu)
+ RTA_PUT(skb, RTAX_MTU, sizeof(unsigned), &rt->u.dst.pmtu);
+ if (rt->u.dst.window)
+ RTA_PUT(skb, RTAX_WINDOW, sizeof(unsigned), &rt->u.dst.window);
+ if (rt->u.dst.rtt)
+ RTA_PUT(skb, RTAX_RTT, sizeof(unsigned), &rt->u.dst.rtt);
+ mx->rta_len = skb->tail - (u8*)mx;
+#endif
if (rt->u.dst.neighbour)
RTA_PUT(skb, RTA_GATEWAY, 16, &rt->u.dst.neighbour->primary_key);
if (rt->u.dst.dev)
ci.rta_clntref = atomic_read(&rt->u.dst.use);
ci.rta_error = rt->u.dst.error;
RTA_PUT(skb, RTA_CACHEINFO, sizeof(ci), &ci);
+#ifdef CONFIG_RTNL_OLD_IFINFO
rtm->rtm_optlen = skb->tail - o;
+#endif
nlh->nlmsg_len = skb->tail - b;
return skb->len;
nlmsg_failure:
rtattr_failure:
- skb_put(skb, b - skb->tail);
+ skb_trim(skb, b - skb->data);
return -1;
}
};
#endif /* CONFIG_PROC_FS */
+#ifdef CONFIG_SYSCTL
+
+static int flush_delay;
+
+static
+int ipv6_sysctl_rtcache_flush(ctl_table *ctl, int write, struct file * filp,
+ void *buffer, size_t *lenp)
+{
+ if (write) {
+ proc_dointvec(ctl, write, filp, buffer, lenp);
+ if (flush_delay < 0)
+ flush_delay = 0;
+ start_bh_atomic();
+ fib6_run_gc((unsigned long)flush_delay);
+ end_bh_atomic();
+ return 0;
+ } else
+ return -EINVAL;
+}
+
+ctl_table ipv6_route_table[] = {
+ {NET_IPV6_ROUTE_FLUSH, "flush",
+ &flush_delay, sizeof(int), 0644, NULL,
+ &ipv6_sysctl_rtcache_flush},
+ {NET_IPV6_ROUTE_GC_THRESH, "gc_thresh",
+ &ip6_dst_ops.gc_thresh, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+ {NET_IPV6_ROUTE_MAX_SIZE, "max_size",
+ &ip6_rt_max_size, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+ {NET_IPV6_ROUTE_GC_MIN_INTERVAL, "gc_min_interval",
+ &ip6_rt_gc_min_interval, sizeof(int), 0644, NULL,
+ &proc_dointvec_jiffies},
+ {NET_IPV6_ROUTE_GC_TIMEOUT, "gc_timeout",
+ &ip6_rt_gc_timeout, sizeof(int), 0644, NULL,
+ &proc_dointvec_jiffies},
+ {NET_IPV6_ROUTE_GC_INTERVAL, "gc_interval",
+ &ip6_rt_gc_interval, sizeof(int), 0644, NULL,
+ &proc_dointvec_jiffies},
+ {0}
+};
+
+#endif
+
+
__initfunc(void ip6_route_init(void))
{
#ifdef CONFIG_PROC_FS
if (tunnel->err_count > 0) {
if (jiffies - tunnel->err_time < IPTUNNEL_ERR_TIMEO) {
tunnel->err_count--;
- icmpv6_send(skb, ICMPV6_DEST_UNREACH, ICMPV6_ADDR_UNREACH, 0, skb->dev);
+ dst_link_failure(skb);
} else
tunnel->err_count = 0;
}
return 0;
tx_error_icmp:
- icmpv6_send(skb, ICMPV6_DEST_UNREACH, ICMPV6_ADDR_UNREACH, 0, dev);
+ dst_link_failure(skb);
tx_error:
stats->tx_errors++;
dev_kfree_skb(skb);
#include <net/ipv6.h>
#include <net/addrconf.h>
-struct ipv6_config ipv6_config =
-{
- 0, /* forwarding */
- IPV6_DEFAULT_HOPLIMIT, /* hop limit */
- 1, /* accept RAs */
- 1, /* accept redirects */
-
- 1, /* autoconfiguration */
- 1, /* dad transmits */
- MAX_RTR_SOLICITATIONS, /* router solicits */
- RTR_SOLICITATION_INTERVAL, /* rtr solicit interval */
- MAX_RTR_SOLICITATION_DELAY, /* rtr solicit delay */
-
- 60*HZ, /* rt cache timeout */
- 30*HZ, /* rt gc period */
-};
+extern ctl_table ipv6_route_table[];
#ifdef CONFIG_SYSCTL
-int ipv6_sysctl_forwarding(ctl_table *ctl, int write, struct file * filp,
- void *buffer, size_t *lenp)
-{
- int val = ipv6_config.forwarding;
- int retv;
-
- retv = proc_dointvec(ctl, write, filp, buffer, lenp);
-
- if (write) {
- if (ipv6_config.forwarding && val == 0) {
- printk(KERN_DEBUG "sysctl: IPv6 forwarding enabled\n");
- ndisc_forwarding_on();
- addrconf_forwarding_on();
- }
-
- if (ipv6_config.forwarding == 0 && val)
- ndisc_forwarding_off();
- }
- return retv;
-}
-
ctl_table ipv6_table[] = {
- {NET_IPV6_FORWARDING, "forwarding",
- &ipv6_config.forwarding, sizeof(int), 0644, NULL,
- &ipv6_sysctl_forwarding},
-
- {NET_IPV6_HOPLIMIT, "hop_limit",
- &ipv6_config.hop_limit, sizeof(int), 0644, NULL,
- &proc_dointvec},
-
- {NET_IPV6_ACCEPT_RA, "accept_ra",
- &ipv6_config.accept_ra, sizeof(int), 0644, NULL,
- &proc_dointvec},
-
- {NET_IPV6_ACCEPT_REDIRECTS, "accept_redirects",
- &ipv6_config.accept_redirects, sizeof(int), 0644, NULL,
- &proc_dointvec},
-
- {NET_IPV6_AUTOCONF, "autoconf",
- &ipv6_config.autoconf, sizeof(int), 0644, NULL,
- &proc_dointvec},
-
- {NET_IPV6_DAD_TRANSMITS, "dad_transmits",
- &ipv6_config.dad_transmits, sizeof(int), 0644, NULL,
- &proc_dointvec},
-
- {NET_IPV6_RTR_SOLICITS, "router_solicitations",
- &ipv6_config.rtr_solicits, sizeof(int), 0644, NULL,
- &proc_dointvec},
-
- {NET_IPV6_RTR_SOLICIT_INTERVAL, "router_solicitation_interval",
- &ipv6_config.rtr_solicit_interval, sizeof(int), 0644, NULL,
- &proc_dointvec},
-
- {NET_IPV6_RTR_SOLICIT_DELAY, "router_solicitation_delay",
- &ipv6_config.rtr_solicit_delay, sizeof(int), 0644, NULL,
- &proc_dointvec},
-
+ {NET_IPV6_ROUTE, "route", NULL, 0, 0555, ipv6_route_table},
{0}
};
int udpv6_recvmsg(struct sock *sk, struct msghdr *msg, int len,
int noblock, int flags, int *addr_len)
{
- int copied = 0;
- int truesize;
struct sk_buff *skb;
- int err;
+ int copied, err;
/*
* Check any passed addresses
*/
skb = skb_recv_datagram(sk, flags, noblock, &err);
- if(skb==NULL)
- return err;
+ if (!skb)
+ goto out;
- truesize=ntohs(((struct udphdr *)skb->h.raw)->len) - sizeof(struct udphdr);
-
- copied=truesize;
-
- if(copied>len) {
- copied=len;
- msg->msg_flags|=MSG_TRUNC;
+ copied = ntohs(((struct udphdr *)skb->h.raw)->len) - sizeof(struct udphdr);
+ if (copied > len) {
+ copied = len;
+ msg->msg_flags |= MSG_TRUNC;
}
/*
err = skb_copy_datagram_iovec(skb, sizeof(struct udphdr),
msg->msg_iov, copied);
if (err)
- return err;
+ goto out_free;
sk->stamp=skb->stamp;
struct sockaddr_in6 *sin6;
sin6 = (struct sockaddr_in6 *) msg->msg_name;
-
sin6->sin6_family = AF_INET6;
sin6->sin6_port = skb->h.uh->source;
datagram_recv_ctl(sk, msg, skb);
}
}
-
- skb_free_datagram(sk, skb);
- return(copied);
+ err = copied;
+
+out_free:
+ skb_free_datagram(sk, skb);
+out:
+ return err;
}
void udpv6_err(int type, int code, unsigned char *buff, __u32 info,
return 0;
}
-static int __inline__ inet6_mc_check(struct sock *sk, struct in6_addr *addr)
+static __inline__ int inet6_mc_check(struct sock *sk, struct in6_addr *addr)
{
struct ipv6_mc_socklist *mc;
{
struct sock *sk, *sk2;
+ SOCKHASH_LOCK();
sk = udp_hash[ntohs(uh->dest) & (UDP_HTABLE_SIZE - 1)];
sk = udp_v6_mcast_next(sk, uh->dest, daddr, uh->source, saddr);
if(sk) {
uh->dest, saddr,
uh->source, daddr))) {
struct sk_buff *buff = skb_clone(skb, GFP_ATOMIC);
- if(sock_queue_rcv_skb(sk, buff) < 0) {
+ if (buff && sock_queue_rcv_skb(sk2, buff) < 0) {
buff->sk = NULL;
kfree_skb(buff);
}
skb->sk = NULL;
kfree_skb(skb);
}
+ SOCKHASH_UNLOCK();
}
int udpv6_rcv(struct sk_buff *skb, struct device *dev,
return -EPROTONOSUPPORT;
dev=dev_get(idef->ipx_device);
- if(dev==NULL) return -ENODEV;
+ if (dev==NULL)
+ return -ENODEV;
intrfc = ipxitf_find_using_phys(dev, dlink_type);
if (intrfc != NULL) {
sipx->sipx_family=AF_IPX;
sipx->sipx_network=ipxif->if_netnum;
memcpy(sipx->sipx_node, ipxif->if_node, sizeof(sipx->sipx_node));
- err = copy_to_user(arg,&ifr,sizeof(ifr));
- if (err)
- return -EFAULT;
+ err = -EFAULT;
+ if (!copy_to_user(arg, &ifr, sizeof(ifr)))
+ err = 0;
return err;
}
case SIOCAIPXITFCRT:
struct sock *sk=sock->sk;
struct sockaddr_ipx *sipx=(struct sockaddr_ipx *)msg->msg_name;
struct ipxhdr *ipx = NULL;
- int copied = 0;
- int truesize;
struct sk_buff *skb;
- int err;
+ int copied, err;
if (sk->zapped)
return -ENOTCONN;
skb=skb_recv_datagram(sk,flags&~MSG_DONTWAIT,flags&MSG_DONTWAIT,&err);
- if(skb==NULL)
- return err;
+ if (!skb)
+ goto out;
ipx = skb->nh.ipxh;
- truesize=ntohs(ipx->ipx_pktsize) - sizeof(struct ipxhdr);
-
- copied = truesize;
+ copied = ntohs(ipx->ipx_pktsize) - sizeof(struct ipxhdr);
if(copied > size)
{
copied=size;
msg->msg_flags|=MSG_TRUNC;
}
- err = skb_copy_datagram_iovec(skb,sizeof(struct ipxhdr),msg->msg_iov,copied);
-
+ err = skb_copy_datagram_iovec(skb, sizeof(struct ipxhdr), msg->msg_iov,
+ copied);
if (err)
- return err;
+ goto out_free;
msg->msg_namelen = sizeof(*sipx);
sipx->sipx_network=ipx->ipx_source.net;
sipx->sipx_type = ipx->ipx_type;
}
- skb_free_datagram(sk, skb);
+ err = copied;
- return(copied);
+out_free:
+ skb_free_datagram(sk, skb);
+out:
+ return err;
}
/*
{
if(sk->stamp.tv_sec==0)
return -ENOENT;
- ret = copy_to_user((void *)arg,&sk->stamp,sizeof(struct timeval));
- if (ret)
- ret = -EFAULT;
+ ret = -EFAULT;
+ if (!copy_to_user((void *)arg, &sk->stamp,
+ sizeof(struct timeval)))
+ ret = 0;
}
- return 0;
+ return ret;
}
case SIOCGIFDSTADDR:
case SIOCSIFDSTADDR:
#ifdef NL_EMULATE_DEV
if (sk->protinfo.af_netlink.handler) {
+ skb_orphan(skb);
len = sk->protinfo.af_netlink.handler(protocol, skb);
netlink_unlock(sk);
return len;
{
#ifdef NL_EMULATE_DEV
if (sk->protinfo.af_netlink.handler) {
+ skb_orphan(skb);
sk->protinfo.af_netlink.handler(sk->protocol, skb);
return 0;
} else
int netlink_post(int unit, struct sk_buff *skb)
{
if (netlink_kernel[unit]) {
+ memset(skb->cb, 0, sizeof(skb->cb));
netlink_broadcast(netlink_kernel[unit]->sk, skb, 0, ~0, GFP_ATOMIC);
return 0;
}
	return -EUNATCH;
}
-EXPORT_SYMBOL(netlink_attach);
-EXPORT_SYMBOL(netlink_detach);
-EXPORT_SYMBOL(netlink_post);
-
#endif
#if 0
EXPORT_SYMBOL(icmp_send);
EXPORT_SYMBOL(ip_options_compile);
EXPORT_SYMBOL(arp_send);
+#ifdef CONFIG_SHAPER_MODULE
+EXPORT_SYMBOL(arp_broken_ops);
+#endif
EXPORT_SYMBOL(ip_id_count);
EXPORT_SYMBOL(ip_send_check);
EXPORT_SYMBOL(ip_fragment);
EXPORT_SYMBOL(ip_mc_inc_group);
EXPORT_SYMBOL(ip_mc_dec_group);
EXPORT_SYMBOL(__ip_finish_output);
+EXPORT_SYMBOL(inet_dgram_ops);
/* needed for ip_gre -cw */
EXPORT_SYMBOL(ip_statistics);
#ifdef CONFIG_IPV6_MODULE
/* inet functions common to v4 and v6 */
EXPORT_SYMBOL(inet_stream_ops);
-EXPORT_SYMBOL(inet_dgram_ops);
EXPORT_SYMBOL(inet_release);
EXPORT_SYMBOL(inet_stream_connect);
EXPORT_SYMBOL(inet_dgram_connect);
EXPORT_SYMBOL(xrlim_allow);
#endif
+#ifdef CONFIG_NETLINK
+EXPORT_SYMBOL(netlink_set_err);
+EXPORT_SYMBOL(netlink_broadcast);
+EXPORT_SYMBOL(netlink_unicast);
+EXPORT_SYMBOL(netlink_kernel_create);
+EXPORT_SYMBOL(netlink_dump_start);
+EXPORT_SYMBOL(netlink_ack);
+#if defined(CONFIG_NETLINK_DEV) || defined(CONFIG_NETLINK_DEV_MODULE)
+EXPORT_SYMBOL(netlink_attach);
+EXPORT_SYMBOL(netlink_detach);
+EXPORT_SYMBOL(netlink_post);
+#endif
+#endif
+
#ifdef CONFIG_RTNETLINK
EXPORT_SYMBOL(rtnetlink_links);
EXPORT_SYMBOL(__rta_fill);
EXPORT_SYMBOL(rtnetlink_dump_ifinfo);
-EXPORT_SYMBOL(netlink_set_err);
-EXPORT_SYMBOL(netlink_broadcast);
EXPORT_SYMBOL(rtnl_wlockct);
EXPORT_SYMBOL(rtnl);
EXPORT_SYMBOL(neigh_delete);
EXPORT_SYMBOL(dev_alloc_name);
EXPORT_SYMBOL(dev_ioctl);
EXPORT_SYMBOL(dev_queue_xmit);
+EXPORT_SYMBOL(netdev_dropping);
#ifdef CONFIG_NET_FASTROUTE
EXPORT_SYMBOL(dev_fastroute_stat);
#endif
+#ifdef CONFIG_NET_HW_FLOWCONTROL
+EXPORT_SYMBOL(netdev_register_fc);
+EXPORT_SYMBOL(netdev_unregister_fc);
+EXPORT_SYMBOL(netdev_fc_xoff);
+#endif
#ifdef CONFIG_IP_ACCT
EXPORT_SYMBOL(ip_acct_output);
#endif
EXPORT_SYMBOL(unregister_qdisc);
EXPORT_SYMBOL(noop_qdisc);
+EXPORT_SYMBOL(register_gifconf);
+
#endif /* CONFIG_NET */
#include <linux/module.h>
#include <linux/init.h>
-#if defined(CONFIG_DLCI) || defined(CONFIG_DLCI_MODULE)
-#include <linux/if_frad.h>
+#ifdef CONFIG_INET
+#include <net/inet_common.h>
#endif
#ifdef CONFIG_BRIDGE
int flags, struct scm_cookie *scm)
{
struct sock *sk = sock->sk;
- int copied=0;
struct sk_buff *skb;
- int err;
+ int copied, err;
#if 0
/* What error should we return now? EUNATTACH? */
*/
if(skb==NULL)
- return err;
+ goto out;
/*
* You lose any data beyond the buffer you gave. If it worries a
*/
copied = skb->len;
- if(copied>len)
+ if (copied > len)
{
copied=len;
msg->msg_flags|=MSG_TRUNC;
/* We can't use skb_copy_datagram here */
err = memcpy_toiovec(msg->msg_iov, skb->data, copied);
if (err)
- {
- return -EFAULT;
- }
+ goto out_free;
sk->stamp=skb->stamp;
memcpy(msg->msg_name, skb->cb, msg->msg_namelen);
/*
- * Free or return the buffer as appropriate. Again this hides all the
- * races and re-entrancy issues from us.
+ * Free or return the buffer as appropriate. Again this
+ * hides all the races and re-entrancy issues from us.
*/
+ err = copied;
+out_free:
skb_free_datagram(sk, skb);
-
- return(copied);
+out:
+ return err;
}
#ifdef CONFIG_SOCK_PACKET
err = -EFAULT;
return err;
case SIOCGIFFLAGS:
+#ifndef CONFIG_INET
case SIOCSIFFLAGS:
+#endif
case SIOCGIFCONF:
case SIOCGIFMETRIC:
case SIOCSIFMETRIC:
return -ENOPKG;
#endif
+#ifdef CONFIG_INET
+ case SIOCADDRT:
+ case SIOCDELRT:
+ case SIOCDARP:
+ case SIOCGARP:
+ case SIOCSARP:
+ case SIOCDRARP:
+ case SIOCGRARP:
+ case SIOCSRARP:
+ case SIOCGIFADDR:
+ case SIOCSIFADDR:
+ case SIOCGIFBRDADDR:
+ case SIOCSIFBRDADDR:
+ case SIOCGIFNETMASK:
+ case SIOCSIFNETMASK:
+ case SIOCGIFDSTADDR:
+ case SIOCSIFDSTADDR:
+ case SIOCSIFFLAGS:
case SIOCADDDLCI:
case SIOCDELDLCI:
-#ifdef CONFIG_DLCI
- return(dlci_ioctl(cmd, (void *) arg));
-#endif
-
-#ifdef CONFIG_DLCI_MODULE
-
-#ifdef CONFIG_KERNELD
- if (dlci_ioctl_hook == NULL)
- request_module("dlci");
+ return inet_dgram_ops.ioctl(sock, cmd, arg);
#endif
- if (dlci_ioctl_hook)
- return((*dlci_ioctl_hook)(cmd, (void *) arg));
-#endif
- return -ENOPKG;
-
default:
if ((cmd >= SIOCDEVPRIVATE) &&
(cmd <= (SIOCDEVPRIVATE + 15)))
* Anonymous : NOTSOCK/BADF cleanup. Error fix in
* shutdown()
* Alan Cox : verify_area() fixes
- * Alan Cox : Removed DDI
+ * Alan Cox : Removed DDI
* Jonathan Kamens : SOCK_DGRAM reconnect bug
* Alan Cox : Moved a load of checks to the very
* top level.
the AF_UNIX size (see net/unix/af_unix.c
:unix_mkname()).
*/
-
+
int move_addr_to_kernel(void *uaddr, int ulen, void *kaddr)
{
if(ulen<0||ulen>MAX_SOCK_ADDR)
* "fromlen shall refer to the value before truncation.."
* 1003.1g
*/
- return __put_user(klen, ulen);
+ return __put_user(klen, ulen);
}
/*
*/
inode->i_count++;
- current->files->fd[fd] = file;
+ fd_install(fd, file);
file->f_op = &socket_file_ops;
file->f_mode = 3;
file->f_flags = O_RDWR;
* Go from a file number to its socket slot.
*/
-extern __inline__ struct socket *sockfd_lookup(int fd, int *err)
+extern struct socket *sockfd_lookup(int fd, int *err)
{
struct file *file;
struct inode *inode;
+ struct socket *sock;
if (!(file = fget(fd)))
{
}
inode = file->f_dentry->d_inode;
- if (!inode || !inode->i_sock || !socki_lookup(inode))
+ if (!inode || !inode->i_sock || !(sock = socki_lookup(inode)))
{
*err = -ENOTSOCK;
fput(file);
return NULL;
}
- return socki_lookup(inode);
+ if (sock->file != file) {
+		printk(KERN_ERR "sockfd_lookup: socket file changed!\n");
+ sock->file = file;
+ }
+ return sock;
}
extern __inline__ void sockfd_put(struct socket *sock)
void sock_release(struct socket *sock)
{
- int oldstate;
-
- if ((oldstate = sock->state) != SS_UNCONNECTED)
+ if (sock->state != SS_UNCONNECTED)
sock->state = SS_DISCONNECTING;
if (sock->ops)
sock->ops->release(sock, NULL);
+ if (sock->fasync_list)
+ printk(KERN_ERR "sock_release: fasync list not empty!\n");
+
--sockets_in_use; /* Bookkeeping.. */
sock->file=NULL;
iput(sock->inode);
struct scm_cookie scm;
err = scm_send(sock, msg, &scm);
- if (err < 0)
- return err;
-
- err = sock->ops->sendmsg(sock, msg, size, &scm);
-
- scm_destroy(&scm);
-
+ if (err >= 0) {
+ err = sock->ops->sendmsg(sock, msg, size, &scm);
+ scm_destroy(&scm);
+ }
return err;
}
memset(&scm, 0, sizeof(scm));
size = sock->ops->recvmsg(sock, msg, size, flags, &scm);
-
- if (size < 0)
- return size;
-
- scm_recv(sock, msg, &scm, flags);
+ if (size >= 0)
+ scm_recv(sock, msg, &scm, flags);
return size;
}
unsigned long arg)
{
struct socket *sock = socki_lookup(inode);
- return sock->ops->ioctl(sock, cmd, arg);
+ return sock->ops->ioctl(sock, cmd, arg);
}
/*
* Update the socket async list
*/
-
+
static int sock_fasync(struct file *filp, int on)
{
struct fasync_struct *fa, *fna=NULL, **prev;
int i;
struct socket *sock;
- /*
- * Check protocol is in range
- */
- if(family<0||family>=NPROTO)
+ /*
+ * Check protocol is in range
+ */
+ if(family<0||family>=NPROTO)
return -EINVAL;
-
+
#if defined(CONFIG_KERNELD) && defined(CONFIG_NET)
/* Attempt to load a protocol module if the find failed.
*
#endif
if (net_families[family]==NULL)
- return -EINVAL;
+ return -EINVAL;
/*
* Check that this is a type that we know how to manipulate and
* the protocol makes sense here. The family can still reject the
* protocol later.
*/
-
+
if ((type != SOCK_STREAM && type != SOCK_DGRAM &&
type != SOCK_SEQPACKET && type != SOCK_RAW && type != SOCK_RDM &&
#ifdef CONFIG_XTP
asmlinkage int sys_socketpair(int family, int type, int protocol, int usockvec[2])
{
- int fd1, fd2, i;
- struct socket *sock1=NULL, *sock2=NULL;
- int err;
+ struct socket *sock1, *sock2;
+ int fd1, fd2, err;
lock_kernel();
* supports the socketpair call.
*/
- if ((fd1 = sys_socket(family, type, protocol)) < 0) {
- err = fd1;
+ err = sys_socket(family, type, protocol);
+ if (err < 0)
goto out;
- }
+ fd1 = err;
- sock1 = sockfd_lookup(fd1, &err);
- if (!sock1)
- goto out;
/*
- * Now grab another socket and try to connect the two together.
+ * Now grab another socket
*/
err = -EINVAL;
- if ((fd2 = sys_socket(family, type, protocol)) < 0)
- {
- sys_close(fd1);
- goto out;
- }
+ fd2 = sys_socket(family, type, protocol);
+ if (fd2 < 0)
+ goto out_close1;
- sock2 = sockfd_lookup(fd2,&err);
+ /*
+ * Get the sockets for the two fd's
+ */
+ sock1 = sockfd_lookup(fd1, &err);
+ if (!sock1)
+ goto out_close2;
+ sock2 = sockfd_lookup(fd2, &err);
if (!sock2)
- goto out;
- if ((i = sock1->ops->socketpair(sock1, sock2)) < 0)
- {
- sys_close(fd1);
+ goto out_put1;
+
+ /* try to connect the two sockets together */
+ err = sock1->ops->socketpair(sock1, sock2);
+ if (err < 0)
+ goto out_put2;
+
+ err = put_user(fd1, &usockvec[0]);
+ if (err)
+ goto out_put2;
+ err = put_user(fd2, &usockvec[1]);
+
+out_put2:
+ sockfd_put(sock2);
+out_put1:
+ sockfd_put(sock1);
+
+ if (err) {
+ out_close2:
sys_close(fd2);
- err = i;
- }
- else
- {
- err = put_user(fd1, &usockvec[0]);
- if (!err)
- err = put_user(fd2, &usockvec[1]);
- if (err) {
- sys_close(fd1);
- sys_close(fd2);
- }
+ out_close1:
+ sys_close(fd1);
}
out:
- if(sock1)
- sockfd_put(sock1);
- if(sock2)
- sockfd_put(sock2);
unlock_kernel();
return err;
}
* We move the socket address to kernel space before we call
* the protocol layer (having also checked the address is ok).
*/
-
+
asmlinkage int sys_bind(int fd, struct sockaddr *umyaddr, int addrlen)
{
struct socket *sock;
int len;
lock_kernel();
+ sock = sockfd_lookup(fd, &err);
+ if (!sock)
+ goto out;
+
restart:
- if ((sock = sockfd_lookup(fd, &err))!=NULL)
- {
- if (!(newsock = sock_alloc()))
- {
- err=-EMFILE;
- goto out;
- }
+ err = -EMFILE;
+ if (!(newsock = sock_alloc()))
+ goto out_put;
- inode = newsock->inode;
- newsock->type = sock->type;
+ inode = newsock->inode;
+ newsock->type = sock->type;
- if ((err = sock->ops->dup(newsock, sock)) < 0)
- {
- sock_release(newsock);
- goto out;
- }
+ err = sock->ops->dup(newsock, sock);
+ if (err < 0)
+ goto out_release;
- err = newsock->ops->accept(sock, newsock, current->files->fd[fd]->f_flags);
+ err = newsock->ops->accept(sock, newsock, sock->file->f_flags);
+ if (err < 0)
+ goto out_release;
+ newsock = socki_lookup(inode);
- if (err < 0)
- {
- sock_release(newsock);
- goto out;
- }
- newsock = socki_lookup(inode);
+ if ((err = get_fd(inode)) < 0)
+ goto out_inval;
+ newsock->file = current->files->fd[err];
- if ((err = get_fd(inode)) < 0)
+ if (upeer_sockaddr)
+ {
+		/* Handle the race where the accept works and the
+		   connection closes again before we can getname */
+		if (newsock->ops->getname(newsock, (struct sockaddr *)address, &len, 1) < 0)
{
- sock_release(newsock);
- err=-EINVAL;
- goto out;
+ sys_close(err);
+ goto restart;
}
+ move_addr_to_user(address, len, upeer_sockaddr, upeer_addrlen);
+ }
- newsock->file = current->files->fd[err];
-
- if (upeer_sockaddr)
- {
- /* Handle the race where the accept works and we
- then getname after it has closed again */
- if(newsock->ops->getname(newsock, (struct sockaddr *)address, &len, 1)<0)
- {
- sys_close(err);
- goto restart;
- }
- move_addr_to_user(address,len, upeer_sockaddr, upeer_addrlen);
- }
+out_put:
+ sockfd_put(sock);
out:
- sockfd_put(sock);
- }
unlock_kernel();
return err;
+
+out_inval:
+ err = -EINVAL;
+out_release:
+ sock_release(newsock);
+ goto out_put;
}
* other SEQPACKET protocols that take time to connect() as it doesn't
* include the -EINPROGRESS status for such sockets.
*/
-
+
asmlinkage int sys_connect(int fd, struct sockaddr *uservaddr, int addrlen)
{
struct socket *sock;
int err;
lock_kernel();
- if ((sock = sockfd_lookup(fd,&err))!=NULL)
- {
- if((err=move_addr_to_kernel(uservaddr,addrlen,address))>=0)
- err = sock->ops->connect(sock, (struct sockaddr *)address, addrlen,
- current->files->fd[fd]->f_flags);
- sockfd_put(sock);
- }
+ sock = sockfd_lookup(fd, &err);
+ if (!sock)
+ goto out;
+ err = move_addr_to_kernel(uservaddr, addrlen, address);
+ if (err < 0)
+ goto out_put;
+ err = sock->ops->connect(sock, (struct sockaddr *) address, addrlen,
+ sock->file->f_flags);
+out_put:
+ sockfd_put(sock);
+out:
unlock_kernel();
return err;
}
{
struct socket *sock;
char address[MAX_SOCK_ADDR];
- int len;
- int err;
+ int len, err;
lock_kernel();
- if ((sock = sockfd_lookup(fd, &err))!=NULL)
- {
- if((err=sock->ops->getname(sock, (struct sockaddr *)address, &len, 0))==0)
- err=move_addr_to_user(address,len, usockaddr, usockaddr_len);
- sockfd_put(sock);
- }
+ sock = sockfd_lookup(fd, &err);
+ if (!sock)
+ goto out;
+ err = sock->ops->getname(sock, (struct sockaddr *)address, &len, 0);
+ if (err)
+ goto out_put;
+ err = move_addr_to_user(address, len, usockaddr, usockaddr_len);
+
+out_put:
+ sockfd_put(sock);
+out:
unlock_kernel();
return err;
}
* Get the remote address ('name') of a socket object. Move the obtained
* name to user space.
*/
-
+
asmlinkage int sys_getpeername(int fd, struct sockaddr *usockaddr, int *usockaddr_len)
{
struct socket *sock;
struct iovec iov;
lock_kernel();
- if ((sock = sockfd_lookup(fd, &err))!=NULL)
- {
- if(len>=0)
- {
- iov.iov_base=buff;
- iov.iov_len=len;
- msg.msg_name=NULL;
- msg.msg_namelen=0;
- msg.msg_iov=&iov;
- msg.msg_iovlen=1;
- msg.msg_control=NULL;
- msg.msg_controllen=0;
- if (current->files->fd[fd]->f_flags & O_NONBLOCK)
- flags |= MSG_DONTWAIT;
- msg.msg_flags=flags;
- err=sock_sendmsg(sock, &msg, len);
- }
- else
- err=-EINVAL;
- sockfd_put(sock);
- }
+ sock = sockfd_lookup(fd, &err);
+ if (!sock)
+ goto out;
+ err = -EINVAL;
+ if (len < 0)
+ goto out_put;
+
+ iov.iov_base=buff;
+ iov.iov_len=len;
+ msg.msg_name=NULL;
+ msg.msg_namelen=0;
+ msg.msg_iov=&iov;
+ msg.msg_iovlen=1;
+ msg.msg_control=NULL;
+ msg.msg_controllen=0;
+ if (sock->file->f_flags & O_NONBLOCK)
+ flags |= MSG_DONTWAIT;
+ msg.msg_flags = flags;
+ err = sock_sendmsg(sock, &msg, len);
+
+out_put:
+ sockfd_put(sock);
+out:
unlock_kernel();
return err;
}
struct iovec iov;
lock_kernel();
- if ((sock = sockfd_lookup(fd,&err))!=NULL)
+ sock = sockfd_lookup(fd, &err);
+ if (!sock)
+ goto out;
+ iov.iov_base=buff;
+ iov.iov_len=len;
+ msg.msg_name=NULL;
+ msg.msg_iov=&iov;
+ msg.msg_iovlen=1;
+ msg.msg_control=NULL;
+ msg.msg_controllen=0;
+ msg.msg_namelen=addr_len;
+ if(addr)
{
- iov.iov_base=buff;
- iov.iov_len=len;
- msg.msg_name=NULL;
- msg.msg_iov=&iov;
- msg.msg_iovlen=1;
- msg.msg_control=NULL;
- msg.msg_controllen=0;
- msg.msg_namelen=addr_len;
- if(addr)
- {
- err=move_addr_to_kernel(addr,addr_len,address);
- if (err < 0)
- goto bad;
- msg.msg_name=address;
- }
- if (current->files->fd[fd]->f_flags & O_NONBLOCK)
- flags |= MSG_DONTWAIT;
- msg.msg_flags=flags;
- err=sock_sendmsg(sock, &msg, len);
-bad:
- sockfd_put(sock);
+ err = move_addr_to_kernel(addr, addr_len, address);
+ if (err < 0)
+ goto out_put;
+ msg.msg_name=address;
}
+ if (sock->file->f_flags & O_NONBLOCK)
+ flags |= MSG_DONTWAIT;
+ msg.msg_flags = flags;
+ err = sock_sendmsg(sock, &msg, len);
+
+out_put:
+ sockfd_put(sock);
+out:
unlock_kernel();
return err;
}
-
/*
* Receive a frame from the socket and optionally record the address of the
* sender. We verify the buffers are writable and if needed move the
int err,err2;
lock_kernel();
- if ((sock = sockfd_lookup(fd, &err))!=NULL)
- {
- msg.msg_control=NULL;
- msg.msg_controllen=0;
- msg.msg_iovlen=1;
- msg.msg_iov=&iov;
- iov.iov_len=size;
- iov.iov_base=ubuf;
- msg.msg_name=address;
- msg.msg_namelen=MAX_SOCK_ADDR;
- err=sock_recvmsg(sock, &msg, size,
- (current->files->fd[fd]->f_flags & O_NONBLOCK) ? (flags | MSG_DONTWAIT) : flags);
- if(err>=0 && addr!=NULL)
- {
- err2=move_addr_to_user(address, msg.msg_namelen, addr, addr_len);
- if(err2<0)
- err=err2;
- }
- sockfd_put(sock);
- }
+ sock = sockfd_lookup(fd, &err);
+ if (!sock)
+ goto out;
+
+ msg.msg_control=NULL;
+ msg.msg_controllen=0;
+ msg.msg_iovlen=1;
+ msg.msg_iov=&iov;
+ iov.iov_len=size;
+ iov.iov_base=ubuf;
+ msg.msg_name=address;
+ msg.msg_namelen=MAX_SOCK_ADDR;
+ if (sock->file->f_flags & O_NONBLOCK)
+ flags |= MSG_DONTWAIT;
+ err=sock_recvmsg(sock, &msg, size, flags);
+
+ if(err >= 0 && addr != NULL)
+ {
+ err2=move_addr_to_user(address, msg.msg_namelen, addr, addr_len);
+ if(err2<0)
+ err=err2;
+ }
+ sockfd_put(sock);
+out:
unlock_kernel();
return err;
}
* Set a socket option. Because we don't know the option lengths we have
* to pass the user mode parameter for the protocols to sort out.
*/
-
+
asmlinkage int sys_setsockopt(int fd, int level, int optname, char *optval, int optlen)
{
int err;
/*
* Shutdown a socket.
*/
-
+
asmlinkage int sys_shutdown(int fd, int how)
{
int err;
/*
* BSD sendmsg interface
*/
-
+
asmlinkage int sys_sendmsg(int fd, struct msghdr *msg, unsigned flags)
{
struct socket *sock;
lock_kernel();
+ err=-EFAULT;
if (copy_from_user(&msg_sys,msg,sizeof(struct msghdr)))
- {
- err=-EFAULT;
goto out;
- }
/* do not move before msg_sys is valid */
if (msg_sys.msg_iovlen>UIO_MAXIOV)
goto out;
/* Note - when this code becomes multithreaded on
* SMP machines you have a race to fix here.
*/
+ err = -ENOBUFS;
ctl_buf = sock_kmalloc(sock->sk, msg_sys.msg_controllen,
GFP_KERNEL);
if (ctl_buf == NULL)
- {
- err = -ENOBUFS;
goto failed2;
- }
}
+ err = -EFAULT;
if (copy_from_user(ctl_buf, msg_sys.msg_control,
- msg_sys.msg_controllen)) {
- err = -EFAULT;
+ msg_sys.msg_controllen))
goto failed;
- }
msg_sys.msg_control = ctl_buf;
}
msg_sys.msg_flags = flags;
- if (current->files->fd[fd]->f_flags & O_NONBLOCK)
+ if (sock->file->f_flags & O_NONBLOCK)
msg_sys.msg_flags |= MSG_DONTWAIT;
err = sock_sendmsg(sock, &msg_sys, total_len);
+
failed:
if (ctl_buf != ctl)
sock_kfree_s(sock->sk, ctl_buf, msg_sys.msg_controllen);
/*
* BSD recvmsg interface
*/
-
+
asmlinkage int sys_recvmsg(int fd, struct msghdr *msg, unsigned int flags)
{
struct socket *sock;
if ((sock = sockfd_lookup(fd, &err))!=NULL)
{
- if (current->files->fd[fd]->f_flags&O_NONBLOCK)
+ if (sock->file->f_flags & O_NONBLOCK)
flags |= MSG_DONTWAIT;
err=sock_recvmsg(sock, &msg_sys, total_len, flags);
if(err>=0)
if (uaddr != NULL && err>=0)
err = move_addr_to_user(addr, msg_sys.msg_namelen, uaddr, uaddr_len);
- if (err>=0) {
- err = __put_user(msg_sys.msg_flags, &msg->msg_flags);
- if (!err)
- err = __put_user((unsigned long)msg_sys.msg_control-cmsg_ptr,
+ if (err < 0)
+ goto out;
+ err = __put_user(msg_sys.msg_flags, &msg->msg_flags);
+ if (err)
+ goto out;
+ err = __put_user((unsigned long)msg_sys.msg_control-cmsg_ptr,
&msg->msg_controllen);
- }
out:
unlock_kernel();
if(err<0)
* advertise its address family, and have it linked into the
* SOCKET module.
*/
-
+
int sock_register(struct net_proto_family *ops)
{
if (ops->family >= NPROTO) {
* remove its address family, and have it unlinked from the
* SOCKET module.
*/
-
+
int sock_unregister(int family)
{
if (family < 0 || family >= NPROTO)
size=len-sent;
- if (size>(sk->sndbuf-sizeof(struct sk_buff))/2) /* Keep two messages in the pipe so it schedules better */
- size=(sk->sndbuf-sizeof(struct sk_buff))/2;
+ /* Keep two messages in the pipe so it schedules better */
+ if (size > (sk->sndbuf - sizeof(struct sk_buff)) / 2)
+ size = (sk->sndbuf - sizeof(struct sk_buff)) / 2;
/*
* Keep to page sized kmalloc()'s as various people
if (skb==NULL)
{
if (sent)
- return sent;
+ goto out;
return err;
}
if (scm->fp)
unix_attach_fds(scm, skb);
+ /* N.B. this could fail with -EFAULT */
memcpy_fromiovec(skb_put(skb,size), msg->msg_iov, size);
other=unix_peer(sk);
{
kfree_skb(skb);
if(sent)
- return sent;
+ goto out;
send_sig(SIGPIPE,current,0);
return -EPIPE;
}
other->data_ready(other,size);
sent+=size;
}
+out:
return sent;
}
msg->msg_namelen = 0;
- skb=skb_recv_datagram(sk, flags, noblock, &err);
- if(skb==NULL)
- return err;
+ skb = skb_recv_datagram(sk, flags, noblock, &err);
+ if (!skb)
+ goto out;
if (msg->msg_name)
{
+ msg->msg_namelen = sizeof(short);
if (skb->sk->protinfo.af_unix.addr)
{
- memcpy(msg->msg_name, skb->sk->protinfo.af_unix.addr->name,
- skb->sk->protinfo.af_unix.addr->len);
msg->msg_namelen=skb->sk->protinfo.af_unix.addr->len;
+ memcpy(msg->msg_name,
+ skb->sk->protinfo.af_unix.addr->name,
+ skb->sk->protinfo.af_unix.addr->len);
}
- else
- msg->msg_namelen=sizeof(short);
}
if (size > skb->len)
else if (size < skb->len)
msg->msg_flags |= MSG_TRUNC;
- if (skb_copy_datagram_iovec(skb, 0, msg->msg_iov, size))
- return -EFAULT;
+ err = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, size);
+ if (err)
+ goto out_free;
scm->creds = *UNIXCREDS(skb);
if (UNIXCB(skb).fp)
scm->fp = scm_fp_dup(UNIXCB(skb).fp);
}
+ err = size;
+
+out_free:
skb_free_datagram(sk,skb);
- return size;
+out:
+ return err;
}
if (flags&MSG_OOB)
return -EOPNOTSUPP;
- if(flags&MSG_WAITALL)
+ if (flags&MSG_WAITALL)
target = size;
/* Copy address just once */
if (sunaddr)
{
+ msg->msg_namelen = sizeof(short);
if (skb->sk->protinfo.af_unix.addr)
{
- memcpy(sunaddr, skb->sk->protinfo.af_unix.addr->name,
- skb->sk->protinfo.af_unix.addr->len);
msg->msg_namelen=skb->sk->protinfo.af_unix.addr->len;
+ memcpy(sunaddr,
+ skb->sk->protinfo.af_unix.addr->name,
+ skb->sk->protinfo.af_unix.addr->len);
}
- else
- msg->msg_namelen=sizeof(short);
sunaddr = NULL;
}
chunk = min(skb->len, size);
+ /* N.B. This could fail with -EFAULT */
memcpy_toiovec(msg->msg_iov, skb->data, chunk);
copied += chunk;
size -= chunk;
newsk = skb->sk;
newsk->pair = NULL;
+ newsk->socket = newsock;
+ newsk->sleep = &newsock->wait;
sti();
/* Now attach up the new socket */