If you want to compile it as a module, say M here and read
Documentation/modules.txt. If unsure, say `N'.
+IRC Send/Chat support
+CONFIG_IP_NF_IRC
+ There is a commonly-used extension to IRC called
+ Direct Client-to-Client Protocol (DCC). This enables users to send
+  files to each other, and also chat to each other without the need
+  for a server. DCC Sending is used anywhere you send files over IRC,
+ and DCC Chat is most commonly used by Eggdrop bots. If you are
+ using NAT, this extension will enable you to send files and initiate
+  chats. Note that you do NOT need this extension to get files or
+  have others initiate chats, or for anything else in IRC.
+
+  If you want to compile it as a module, say M here and read
+  Documentation/modules.txt. If unsure, say `N'.
+
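The NAT helper is needed because a DCC request carries the sender's own IP address and TCP port inside the IRC message text, which a NAT box must find and rewrite. A minimal sketch of pulling those fields out of a DCC SEND payload (per the de facto DCC convention "DCC SEND <file> <ip-as-decimal> <port>"; the function name is illustrative, not the kernel helper's):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Parse the payload of a CTCP "DCC SEND" request.  DCC embeds the
 * sender's address as a decimal 32-bit integer and a TCP port in the
 * message text itself, which is exactly what a NAT helper must locate
 * and rewrite.  Returns 1 on success, 0 on a malformed payload. */
static int parse_dcc_send(const char *payload, char *file, size_t filelen,
                          unsigned long *ip, unsigned int *port)
{
	char name[64];

	if (sscanf(payload, "DCC SEND %63s %lu %u", name, ip, port) != 3)
		return 0;
	if (*port == 0 || *port > 65535)
		return 0;
	strncpy(file, name, filelen - 1);
	file[filelen - 1] = '\0';
	return 1;
}
```

Here 3232235521 decodes to 192.168.0.1, a private address that would be useless to the peer unless the helper rewrites it to the NAT box's public address.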
FTP protocol support
CONFIG_IP_NF_FTP
Tracking FTP connections is problematic: special helpers are
etc) subsystems now use this: say `Y' or `M' here if you want to use
either of those.
+ If you want to compile it as a module, say M here and read
+ Documentation/modules.txt. If unsure, say `N'.
+CONFIG_IP6_NF_MATCH_LIMIT
+ limit matching allows you to control the rate at which a rule can be
+ matched: mainly useful in combination with the LOG target ("LOG
+ target support", below) and to avoid some Denial of Service attacks.
+
If you want to compile it as a module, say M here and read
Documentation/modules.txt. If unsure, say `N'.
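The limit match behaves like a token bucket: a rule may fire at a sustained average rate, with a small burst allowance on top. A simplified sketch of that mechanism, using abstract integer ticks rather than kernel jiffies (names and field layout are illustrative, not the netfilter implementation):

```c
#include <assert.h>

/* Token-bucket rate limiter, the idea behind the iptables `limit'
 * match.  Credit accrues with time up to a burst ceiling; each match
 * spends one token. */
struct bucket {
	long tokens;     /* current credit */
	long burst;      /* maximum credit */
	long per_token;  /* ticks needed to earn one token */
	long last;       /* tick of the previous refill */
};

static int bucket_allow(struct bucket *b, long now)
{
	long earned = (now - b->last) / b->per_token;

	b->tokens += earned;
	b->last += earned * b->per_token;  /* keep the fractional remainder */
	if (b->tokens > b->burst)
		b->tokens = b->burst;
	if (b->tokens > 0) {
		b->tokens--;
		return 1;	/* rule matches */
	}
	return 0;		/* rate exceeded; packet falls through */
}
```

With a burst of 3 and one token per 10 ticks, the first three probes at tick 0 match, the fourth does not, and one more is admitted at tick 10.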
+MAC address match support
+CONFIG_IP6_NF_MATCH_MAC
+  MAC matching allows you to match packets based on the source
+  Ethernet address of the packet.
+
+ If you want to compile it as a module, say M here and read
+ Documentation/modules.txt. If unsure, say `N'.
+
+Multiple port match support
+CONFIG_IP6_NF_MATCH_MULTIPORT
+ Multiport matching allows you to match TCP or UDP packets based on
+ a series of source or destination ports: normally a rule can only
+ match a single range of ports.
+
+ If you want to compile it as a module, say M here and read
+ Documentation/modules.txt. If unsure, say `N'.
+
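The point of multiport is that one rule can test against a short list of arbitrary ports, where the plain tcp/udp match only tests a single contiguous range. A sketch of the comparison (illustrative, not the netfilter code):

```c
#include <assert.h>

/* Multiport-style test: does the packet's port appear in a small,
 * possibly non-contiguous list?  A single-range match can only express
 * lo <= port <= hi. */
static int multiport_match(unsigned short port,
                           const unsigned short *ports, int nports)
{
	int i;

	for (i = 0; i < nports; i++)
		if (port == ports[i])
			return 1;
	return 0;
}
```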
+Owner match support (EXPERIMENTAL)
+CONFIG_IP6_NF_MATCH_OWNER
+ Packet owner matching allows you to match locally-generated packets
+ based on who created them: the user, group, process or session.
+
+ If you want to compile it as a module, say M here and read
+ Documentation/modules.txt. If unsure, say `N'.
+
limit match support
CONFIG_IP_NF_MATCH_LIMIT
limit matching allows you to control the rate at which a rule can be
If you want to compile it as a module, say M here and read
Documentation/modules.txt. If unsure, say `N'.
+TTL match support
+CONFIG_IP_NF_MATCH_TTL
+  This adds the CONFIG_IP_NF_MATCH_TTL option, which enables the
+  user to match packets by their TTL value.
+
+ If you want to compile it as a module, say M here and read
+ Documentation/modules.txt. If unsure, say `N'.
+
+length match support
+CONFIG_IP_NF_MATCH_LENGTH
+ This option allows you to match the length of a packet against a
+ specific value or range of values.
+
+ If you want to compile it as a module, say M here and read
+ Documentation/modules.txt. If unsure, say `N'.
+
TOS match support
CONFIG_IP_NF_MATCH_TOS
TOS matching allows you to match packets based on the Type Of
If you want to compile it as a module, say M here and read
Documentation/modules.txt. If unsure, say `N'.
+Basic SNMP-ALG support
+CONFIG_IP_NF_NAT_SNMP_BASIC
+
+ This module implements an Application Layer Gateway (ALG) for
+ SNMP payloads. In conjunction with NAT, it allows a network
+ management system to access multiple private networks with
+ conflicting addresses. It works by modifying IP addresses
+  inside SNMP payloads to match the IP-layer NAT mapping.
+
+  This is the "basic" form of SNMP-ALG, as described in RFC 2962.
+
+ If you want to compile it as a module, say M here and read
+ Documentation/modules.txt. If unsure, say `N'.
+
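The ALG's job is to keep addresses embedded in SNMP payloads consistent with the IP-layer NAT mapping. The real module parses the BER/ASN.1 encoding and only touches IpAddress fields; the sketch below illustrates only the substitution idea by rewriting every occurrence of a 4-byte address in an opaque buffer (a deliberate oversimplification):

```c
#include <assert.h>
#include <string.h>

/* Simplified ALG idea: replace each occurrence of the inside address
 * with its NAT-mapped address in a payload buffer.  Scanning raw bytes
 * like this can false-match; the real SNMP-ALG walks the BER encoding
 * and rewrites only IpAddress fields.  Returns the number of hits. */
static int rewrite_addr(unsigned char *buf, int len,
                        const unsigned char from[4],
                        const unsigned char to[4])
{
	int i, hits = 0;

	for (i = 0; i + 4 <= len; i++) {
		if (memcmp(buf + i, from, 4) == 0) {
			memcpy(buf + i, to, 4);
			hits++;
		}
	}
	return hits;
}
```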
REDIRECT target support
CONFIG_IP_NF_TARGET_REDIRECT
REDIRECT is a special case of NAT: all incoming connections are
If you want to compile it as a module, say M here and read
Documentation/modules.txt. If unsure, say `N'.
+LOG target support
+CONFIG_IP6_NF_TARGET_LOG
+  This option adds a `LOG' target, which allows you to create rules in
+  any ip6tables table that record the packet header to the syslog.
+
+ If you want to compile it as a module, say M here and read
+ Documentation/modules.txt. If unsure, say `N'.
+
Packet filtering
CONFIG_IP6_NF_FILTER
Packet filtering defines a table `filter', which has a series of
VERSION = 2
PATCHLEVEL = 4
SUBLEVEL = 14
-EXTRAVERSION =-pre4
+EXTRAVERSION =-pre6
KERNELRELEASE=$(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION)
* 1.08 11 Dec 2000, Richard Schaal <richard.schaal@intel.com> and
* Tigran Aivazian <tigran@veritas.com>
* Intel Pentium 4 processor support and bugfixes.
+ * 1.09 30 Oct 2001, Tigran Aivazian <tigran@veritas.com>
+ * Bugfix for HT (Hyper-Threading) enabled processors
+ * whereby processor resources are shared by all logical processors
+ * in a single CPU package.
*/
#include <linux/init.h>
#include <asm/uaccess.h>
#include <asm/processor.h>
-#define MICROCODE_VERSION "1.08"
+#define MICROCODE_VERSION "1.09"
MODULE_DESCRIPTION("Intel CPU (IA-32) microcode update driver");
MODULE_AUTHOR("Tigran Aivazian <tigran@veritas.com>");
printk(KERN_ERR
"microcode: CPU%d not 'upgrading' to earlier revision"
" %d (current=%d)\n", cpu_num, microcode[i].rev, rev);
- } else if (microcode[i].rev == rev) {
- printk(KERN_ERR
- "microcode: CPU%d already up-to-date (revision %d)\n",
- cpu_num, rev);
} else {
int sum = 0;
struct microcode *m = &microcode[i];
CONFIG_FB=y
CONFIG_DUMMY_CONSOLE=y
# CONFIG_FB_CYBER2000 is not set
-# CONFIG_FB_E1355 is not set
CONFIG_FB_SBUS=y
CONFIG_FB_CGSIX=y
CONFIG_FB_BWTWO=y
# CONFIG_MD_RAID0 is not set
# CONFIG_MD_RAID1 is not set
# CONFIG_MD_RAID5 is not set
+# CONFIG_MD_MULTIPATH is not set
# CONFIG_BLK_DEV_LVM is not set
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_SIZE=4096
CONFIG_IPV6=m
# CONFIG_KHTTPD is not set
# CONFIG_ATM is not set
+# CONFIG_VLAN_8021Q is not set
#
#
CONFIG_VFAT_FS=m
CONFIG_EFS_FS=m
# CONFIG_JFFS_FS is not set
+# CONFIG_JFFS2_FS is not set
# CONFIG_CRAMFS is not set
# CONFIG_TMPFS is not set
# CONFIG_RAMFS is not set
echo " sizeof(struct $2_struct)," >> $4
;;
-ints)
- sed -n -e '/check_asm_data:/,/\.size/p' <$2 | sed -e 's/check_asm_data://' -e 's/\.size.*//' -e 's/\.long[ ]\([0-9]*\)/\1,/' >>$3
+ sed -n -e '/check_asm_data:/,/\.size/p' <$2 | sed -e 's/check_asm_data://' -e 's/\.size.*//' -e 's/\.ident.*//' -e 's/\.long[ ]\([0-9]*\)/\1,/' >>$3
;;
*)
exit 1
-/* $Id: ioport.c,v 1.44 2001/02/13 04:07:38 davem Exp $
+/* $Id: ioport.c,v 1.45 2001/10/30 04:54:21 davem Exp $
* ioport.c: Simple io mapping allocator.
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
}
}
}
-#endif CONFIG_PCI
+#endif /* CONFIG_PCI */
#ifdef CONFIG_PROC_FS
return p-buf;
}
-#endif CONFIG_PROC_FS
+#endif /* CONFIG_PROC_FS */
/*
* This is a version of find_resource and it belongs to kernel/resource.c.
-/* $Id: sparc-stub.c,v 1.27 2000/10/03 07:28:49 anton Exp $
+/* $Id: sparc-stub.c,v 1.28 2001/10/30 04:54:21 davem Exp $
* sparc-stub.c: KGDB support for the Linux kernel.
*
* Modifications to run under Linux
* to arrange for a "return 0" upon a memory fault
*/
__asm__(
- "1: ldub [%0], %1
- inc %0
- .section .fixup,#alloc,#execinstr
- .align 4
- 2: retl
- mov 0, %%o0
- .section __ex_table, #alloc
- .align 4
- .word 1b, 2b
- .text"
- : "=r" (mem), "=r" (ch) : "0" (mem));
+ "\n1:\n\t"
+ "ldub [%0], %1\n\t"
+ "inc %0\n\t"
+ ".section .fixup,#alloc,#execinstr\n\t"
+ ".align 4\n"
+ "2:\n\t"
+ "retl\n\t"
+ " mov 0, %%o0\n\t"
+ ".section __ex_table, #alloc\n\t"
+ ".align 4\n\t"
+ ".word 1b, 2b\n\t"
+ ".text\n"
+ : "=r" (mem), "=r" (ch) : "0" (mem));
*buf++ = hexchars[ch >> 4];
*buf++ = hexchars[ch & 0xf];
}
ch |= hex(*buf++);
/* Assembler code is *mem++ = ch; with return 0 on fault */
__asm__(
- "1: stb %1, [%0]
- inc %0
- .section .fixup,#alloc,#execinstr
- .align 4
- 2: retl
- mov 0, %%o0
- .section __ex_table, #alloc
- .align 4
- .word 1b, 2b
- .text"
- : "=r" (mem) : "r" (ch) , "0" (mem));
+ "\n1:\n\t"
+ "stb %1, [%0]\n\t"
+ "inc %0\n\t"
+ ".section .fixup,#alloc,#execinstr\n\t"
+ ".align 4\n"
+ "2:\n\t"
+ "retl\n\t"
+ " mov 0, %%o0\n\t"
+ ".section __ex_table, #alloc\n\t"
+ ".align 4\n\t"
+ ".word 1b, 2b\n\t"
+ ".text\n"
+ : "=r" (mem) : "r" (ch) , "0" (mem));
}
return mem;
}
/* Again, watch those c-prefixes for ELF kernels */
#if defined(__svr4__) || defined(__ELF__)
- asm(" .globl breakinst
-
- breakinst: ta 1
- ");
+ asm(".globl breakinst\n"
+ "breakinst:\n\t"
+ "ta 1\n");
#else
- asm(" .globl _breakinst
-
- _breakinst: ta 1
- ");
+ asm(".globl _breakinst\n"
+ "_breakinst:\n\t"
+ "ta 1\n");
#endif
}
-/* $Id: time.c,v 1.58 2001/01/11 15:07:09 davem Exp $
+/* $Id: time.c,v 1.59 2001/10/30 04:54:21 davem Exp $
* linux/arch/sparc/kernel/time.c
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
* is guaranteed to be atomic, which is why we can run this
* with interrupts on full blast. Don't touch this... -DaveM
*/
- __asm__ __volatile__("
- sethi %hi(master_l10_counter), %o1
- ld [%o1 + %lo(master_l10_counter)], %g3
- sethi %hi(xtime), %g2
-1: ldd [%g2 + %lo(xtime)], %o4
- ld [%g3], %o1
- ldd [%g2 + %lo(xtime)], %o2
- xor %o4, %o2, %o2
- xor %o5, %o3, %o3
- orcc %o2, %o3, %g0
- bne 1b
- cmp %o1, 0
- bge 1f
- srl %o1, 0xa, %o1
- sethi %hi(tick), %o3
- ld [%o3 + %lo(tick)], %o3
- sethi %hi(0x1fffff), %o2
- or %o2, %lo(0x1fffff), %o2
- add %o5, %o3, %o5
- and %o1, %o2, %o1
-1: add %o5, %o1, %o5
- sethi %hi(1000000), %o2
- or %o2, %lo(1000000), %o2
- cmp %o5, %o2
- bl,a 1f
- st %o4, [%o0 + 0x0]
- add %o4, 0x1, %o4
- sub %o5, %o2, %o5
- st %o4, [%o0 + 0x0]
-1: st %o5, [%o0 + 0x4]");
+ __asm__ __volatile__(
+ "sethi %hi(master_l10_counter), %o1\n\t"
+ "ld [%o1 + %lo(master_l10_counter)], %g3\n\t"
+ "sethi %hi(xtime), %g2\n"
+ "1:\n\t"
+ "ldd [%g2 + %lo(xtime)], %o4\n\t"
+ "ld [%g3], %o1\n\t"
+ "ldd [%g2 + %lo(xtime)], %o2\n\t"
+ "xor %o4, %o2, %o2\n\t"
+ "xor %o5, %o3, %o3\n\t"
+ "orcc %o2, %o3, %g0\n\t"
+ "bne 1b\n\t"
+ " cmp %o1, 0\n\t"
+ "bge 1f\n\t"
+ " srl %o1, 0xa, %o1\n\t"
+ "sethi %hi(tick), %o3\n\t"
+ "ld [%o3 + %lo(tick)], %o3\n\t"
+ "sethi %hi(0x1fffff), %o2\n\t"
+ "or %o2, %lo(0x1fffff), %o2\n\t"
+ "add %o5, %o3, %o5\n\t"
+ "and %o1, %o2, %o1\n"
+ "1:\n\t"
+ "add %o5, %o1, %o5\n\t"
+ "sethi %hi(1000000), %o2\n\t"
+ "or %o2, %lo(1000000), %o2\n\t"
+ "cmp %o5, %o2\n\t"
+ "bl,a 1f\n\t"
+ " st %o4, [%o0 + 0x0]\n\t"
+ "add %o4, 0x1, %o4\n\t"
+ "sub %o5, %o2, %o5\n\t"
+ "st %o4, [%o0 + 0x0]\n"
+ "1:\n\t"
+ "st %o5, [%o0 + 0x4]\n");
}
void do_settimeofday(struct timeval *tv)
register int ctr asm("g5");
ctr = 0;
- __asm__ __volatile__("
-1:
- ld [%%g6 + %2], %%g4
- orcc %%g0, %%g4, %%g0
- add %0, 1, %0
- bne 1b
- save %%sp, -64, %%sp
-2:
- subcc %0, 1, %0
- bne 2b
- restore %%g0, %%g0, %%g0"
+ __asm__ __volatile__(
+ "\n1:\n\t"
+ "ld [%%g6 + %2], %%g4\n\t"
+ "orcc %%g0, %%g4, %%g0\n\t"
+ "add %0, 1, %0\n\t"
+ "bne 1b\n\t"
+ " save %%sp, -64, %%sp\n"
+ "2:\n\t"
+ "subcc %0, 1, %0\n\t"
+ "bne 2b\n\t"
+ " restore %%g0, %%g0, %%g0\n"
: "=&r" (ctr)
: "0" (ctr),
"i" ((const unsigned long)(&(((struct task_struct *)0)->thread.uwinmask)))
-/* $Id: fault.c,v 1.120 2001/07/18 13:40:05 anton Exp $
+/* $Id: fault.c,v 1.121 2001/10/30 04:54:22 davem Exp $
* fault.c: Page fault handlers for the Sparc.
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
memset (&regs, 0, sizeof (regs));
regs.pc = pc;
regs.npc = pc + 4;
- __asm__ __volatile__ ("
- rd %%psr, %0
- nop
- nop
- nop" : "=r" (regs.psr));
+ __asm__ __volatile__ (
+ "rd %%psr, %0\n\t"
+ "nop\n\t"
+ "nop\n\t"
+ "nop\n" : "=r" (regs.psr));
unhandled_fault (address, current, &regs);
/* Not reached */
return 0;
-/* $Id: srmmu.c,v 1.231 2001/09/20 00:35:31 davem Exp $
+/* $Id: srmmu.c,v 1.232 2001/10/30 04:54:22 davem Exp $
* srmmu.c: SRMMU specific routines for memory management.
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
static pte_t *srmmu_pte_alloc_one(struct mm_struct *mm, unsigned long address)
{
BUG();
+ return NULL;
}
static void srmmu_free_pte_fast(pte_t *pte)
static void cypress_flush_tlb_mm(struct mm_struct *mm)
{
FLUSH_BEGIN(mm)
- __asm__ __volatile__("
- lda [%0] %3, %%g5
- sta %2, [%0] %3
- sta %%g0, [%1] %4
- sta %%g5, [%0] %3"
+ __asm__ __volatile__(
+ "lda [%0] %3, %%g5\n\t"
+ "sta %2, [%0] %3\n\t"
+ "sta %%g0, [%1] %4\n\t"
+ "sta %%g5, [%0] %3\n"
: /* no outputs */
: "r" (SRMMU_CTX_REG), "r" (0x300), "r" (mm->context),
"i" (ASI_M_MMUREGS), "i" (ASI_M_FLUSH_PROBE)
FLUSH_BEGIN(mm)
start &= SRMMU_PGDIR_MASK;
size = SRMMU_PGDIR_ALIGN(end) - start;
- __asm__ __volatile__("
- lda [%0] %5, %%g5
- sta %1, [%0] %5
- 1: subcc %3, %4, %3
- bne 1b
- sta %%g0, [%2 + %3] %6
- sta %%g5, [%0] %5"
+ __asm__ __volatile__(
+ "lda [%0] %5, %%g5\n\t"
+ "sta %1, [%0] %5\n"
+ "1:\n\t"
+ "subcc %3, %4, %3\n\t"
+ "bne 1b\n\t"
+ " sta %%g0, [%2 + %3] %6\n\t"
+ "sta %%g5, [%0] %5\n"
: /* no outputs */
: "r" (SRMMU_CTX_REG), "r" (mm->context), "r" (start | 0x200),
"r" (size), "r" (SRMMU_PGDIR_SIZE), "i" (ASI_M_MMUREGS),
struct mm_struct *mm = vma->vm_mm;
FLUSH_BEGIN(mm)
- __asm__ __volatile__("
- lda [%0] %3, %%g5
- sta %1, [%0] %3
- sta %%g0, [%2] %4
- sta %%g5, [%0] %3"
+ __asm__ __volatile__(
+ "lda [%0] %3, %%g5\n\t"
+ "sta %1, [%0] %3\n\t"
+ "sta %%g0, [%2] %4\n\t"
+ "sta %%g5, [%0] %3\n"
: /* no outputs */
: "r" (SRMMU_CTX_REG), "r" (mm->context), "r" (page & PAGE_MASK),
"i" (ASI_M_MMUREGS), "i" (ASI_M_FLUSH_PROBE)
-/* $Id: sun4c.c,v 1.207 2001/07/17 16:17:33 anton Exp $
+/* $Id: sun4c.c,v 1.208 2001/10/30 04:54:22 davem Exp $
* sun4c.c: Doing in software what should be done in hardware.
*
* Copyright (C) 1996 David S. Miller (davem@caip.rutgers.edu)
unsigned long nbytes = SUN4C_VAC_SIZE;
unsigned long lsize = sun4c_vacinfo.linesize;
- __asm__ __volatile__("
- add %2, %2, %%g1
- add %2, %%g1, %%g2
- add %2, %%g2, %%g3
- add %2, %%g3, %%g4
- add %2, %%g4, %%g5
- add %2, %%g5, %%o4
- add %2, %%o4, %%o5
-1: subcc %0, %%o5, %0
- sta %%g0, [%0] %3
- sta %%g0, [%0 + %2] %3
- sta %%g0, [%0 + %%g1] %3
- sta %%g0, [%0 + %%g2] %3
- sta %%g0, [%0 + %%g3] %3
- sta %%g0, [%0 + %%g4] %3
- sta %%g0, [%0 + %%g5] %3
- bg 1b
- sta %%g0, [%1 + %%o4] %3
-" : "=&r" (nbytes)
+ __asm__ __volatile__(
+ "add %2, %2, %%g1\n\t"
+ "add %2, %%g1, %%g2\n\t"
+ "add %2, %%g2, %%g3\n\t"
+ "add %2, %%g3, %%g4\n\t"
+ "add %2, %%g4, %%g5\n\t"
+ "add %2, %%g5, %%o4\n\t"
+ "add %2, %%o4, %%o5\n"
+ "1:\n\t"
+ "subcc %0, %%o5, %0\n\t"
+ "sta %%g0, [%0] %3\n\t"
+ "sta %%g0, [%0 + %2] %3\n\t"
+ "sta %%g0, [%0 + %%g1] %3\n\t"
+ "sta %%g0, [%0 + %%g2] %3\n\t"
+ "sta %%g0, [%0 + %%g3] %3\n\t"
+ "sta %%g0, [%0 + %%g4] %3\n\t"
+ "sta %%g0, [%0 + %%g5] %3\n\t"
+ "bg 1b\n\t"
+ " sta %%g0, [%1 + %%o4] %3\n"
+ : "=&r" (nbytes)
: "0" (nbytes), "r" (lsize), "i" (ASI_FLUSHCTX)
: "g1", "g2", "g3", "g4", "g5", "o4", "o5", "cc");
}
unsigned long nbytes = SUN4C_VAC_SIZE;
unsigned long lsize = sun4c_vacinfo.linesize;
- __asm__ __volatile__("
- add %2, %2, %%g1
- add %2, %%g1, %%g2
- add %2, %%g2, %%g3
- add %2, %%g3, %%g4
- add %2, %%g4, %%g5
- add %2, %%g5, %%o4
- add %2, %%o4, %%o5
-1: subcc %1, %%o5, %1
- sta %%g0, [%0] %6
- sta %%g0, [%0 + %2] %6
- sta %%g0, [%0 + %%g1] %6
- sta %%g0, [%0 + %%g2] %6
- sta %%g0, [%0 + %%g3] %6
- sta %%g0, [%0 + %%g4] %6
- sta %%g0, [%0 + %%g5] %6
- sta %%g0, [%0 + %%o4] %6
- bg 1b
- add %0, %%o5, %0
-" : "=&r" (addr), "=&r" (nbytes), "=&r" (lsize)
+ __asm__ __volatile__(
+ "add %2, %2, %%g1\n\t"
+ "add %2, %%g1, %%g2\n\t"
+ "add %2, %%g2, %%g3\n\t"
+ "add %2, %%g3, %%g4\n\t"
+ "add %2, %%g4, %%g5\n\t"
+ "add %2, %%g5, %%o4\n\t"
+ "add %2, %%o4, %%o5\n"
+ "1:\n\t"
+ "subcc %1, %%o5, %1\n\t"
+ "sta %%g0, [%0] %6\n\t"
+ "sta %%g0, [%0 + %2] %6\n\t"
+ "sta %%g0, [%0 + %%g1] %6\n\t"
+ "sta %%g0, [%0 + %%g2] %6\n\t"
+ "sta %%g0, [%0 + %%g3] %6\n\t"
+ "sta %%g0, [%0 + %%g4] %6\n\t"
+ "sta %%g0, [%0 + %%g5] %6\n\t"
+ "sta %%g0, [%0 + %%o4] %6\n\t"
+ "bg 1b\n\t"
+ " add %0, %%o5, %0\n"
+ : "=&r" (addr), "=&r" (nbytes), "=&r" (lsize)
: "0" (addr), "1" (nbytes), "2" (lsize),
"i" (ASI_FLUSHSEG)
: "g1", "g2", "g3", "g4", "g5", "o4", "o5", "cc");
unsigned long left = PAGE_SIZE;
unsigned long lsize = sun4c_vacinfo.linesize;
- __asm__ __volatile__("
- add %2, %2, %%g1
- add %2, %%g1, %%g2
- add %2, %%g2, %%g3
- add %2, %%g3, %%g4
- add %2, %%g4, %%g5
- add %2, %%g5, %%o4
- add %2, %%o4, %%o5
-1: subcc %1, %%o5, %1
- sta %%g0, [%0] %6
- sta %%g0, [%0 + %2] %6
- sta %%g0, [%0 + %%g1] %6
- sta %%g0, [%0 + %%g2] %6
- sta %%g0, [%0 + %%g3] %6
- sta %%g0, [%0 + %%g4] %6
- sta %%g0, [%0 + %%g5] %6
- sta %%g0, [%0 + %%o4] %6
- bg 1b
- add %0, %%o5, %0
-" : "=&r" (addr), "=&r" (left), "=&r" (lsize)
+ __asm__ __volatile__(
+ "add %2, %2, %%g1\n\t"
+ "add %2, %%g1, %%g2\n\t"
+ "add %2, %%g2, %%g3\n\t"
+ "add %2, %%g3, %%g4\n\t"
+ "add %2, %%g4, %%g5\n\t"
+ "add %2, %%g5, %%o4\n\t"
+ "add %2, %%o4, %%o5\n"
+ "1:\n\t"
+ "subcc %1, %%o5, %1\n\t"
+ "sta %%g0, [%0] %6\n\t"
+ "sta %%g0, [%0 + %2] %6\n\t"
+ "sta %%g0, [%0 + %%g1] %6\n\t"
+ "sta %%g0, [%0 + %%g2] %6\n\t"
+ "sta %%g0, [%0 + %%g3] %6\n\t"
+ "sta %%g0, [%0 + %%g4] %6\n\t"
+ "sta %%g0, [%0 + %%g5] %6\n\t"
+ "sta %%g0, [%0 + %%o4] %6\n\t"
+ "bg 1b\n\t"
+ " add %0, %%o5, %0\n"
+ : "=&r" (addr), "=&r" (left), "=&r" (lsize)
: "0" (addr), "1" (left), "2" (lsize),
"i" (ASI_FLUSHPG)
: "g1", "g2", "g3", "g4", "g5", "o4", "o5", "cc");
if (sun4c_vacinfo.linesize == 32) {
while (begin < end) {
- __asm__ __volatile__("
- ld [%0 + 0x00], %%g0
- ld [%0 + 0x20], %%g0
- ld [%0 + 0x40], %%g0
- ld [%0 + 0x60], %%g0
- ld [%0 + 0x80], %%g0
- ld [%0 + 0xa0], %%g0
- ld [%0 + 0xc0], %%g0
- ld [%0 + 0xe0], %%g0
- ld [%0 + 0x100], %%g0
- ld [%0 + 0x120], %%g0
- ld [%0 + 0x140], %%g0
- ld [%0 + 0x160], %%g0
- ld [%0 + 0x180], %%g0
- ld [%0 + 0x1a0], %%g0
- ld [%0 + 0x1c0], %%g0
- ld [%0 + 0x1e0], %%g0
- " : : "r" (begin));
+ __asm__ __volatile__(
+ "ld [%0 + 0x00], %%g0\n\t"
+ "ld [%0 + 0x20], %%g0\n\t"
+ "ld [%0 + 0x40], %%g0\n\t"
+ "ld [%0 + 0x60], %%g0\n\t"
+ "ld [%0 + 0x80], %%g0\n\t"
+ "ld [%0 + 0xa0], %%g0\n\t"
+ "ld [%0 + 0xc0], %%g0\n\t"
+ "ld [%0 + 0xe0], %%g0\n\t"
+ "ld [%0 + 0x100], %%g0\n\t"
+ "ld [%0 + 0x120], %%g0\n\t"
+ "ld [%0 + 0x140], %%g0\n\t"
+ "ld [%0 + 0x160], %%g0\n\t"
+ "ld [%0 + 0x180], %%g0\n\t"
+ "ld [%0 + 0x1a0], %%g0\n\t"
+ "ld [%0 + 0x1c0], %%g0\n\t"
+ "ld [%0 + 0x1e0], %%g0\n"
+ : : "r" (begin));
begin += 512;
}
} else {
while (begin < end) {
- __asm__ __volatile__("
- ld [%0 + 0x00], %%g0
- ld [%0 + 0x10], %%g0
- ld [%0 + 0x20], %%g0
- ld [%0 + 0x30], %%g0
- ld [%0 + 0x40], %%g0
- ld [%0 + 0x50], %%g0
- ld [%0 + 0x60], %%g0
- ld [%0 + 0x70], %%g0
- ld [%0 + 0x80], %%g0
- ld [%0 + 0x90], %%g0
- ld [%0 + 0xa0], %%g0
- ld [%0 + 0xb0], %%g0
- ld [%0 + 0xc0], %%g0
- ld [%0 + 0xd0], %%g0
- ld [%0 + 0xe0], %%g0
- ld [%0 + 0xf0], %%g0
- " : : "r" (begin));
+ __asm__ __volatile__(
+ "ld [%0 + 0x00], %%g0\n\t"
+ "ld [%0 + 0x10], %%g0\n\t"
+ "ld [%0 + 0x20], %%g0\n\t"
+ "ld [%0 + 0x30], %%g0\n\t"
+ "ld [%0 + 0x40], %%g0\n\t"
+ "ld [%0 + 0x50], %%g0\n\t"
+ "ld [%0 + 0x60], %%g0\n\t"
+ "ld [%0 + 0x70], %%g0\n\t"
+ "ld [%0 + 0x80], %%g0\n\t"
+ "ld [%0 + 0x90], %%g0\n\t"
+ "ld [%0 + 0xa0], %%g0\n\t"
+ "ld [%0 + 0xb0], %%g0\n\t"
+ "ld [%0 + 0xc0], %%g0\n\t"
+ "ld [%0 + 0xd0], %%g0\n\t"
+ "ld [%0 + 0xe0], %%g0\n\t"
+ "ld [%0 + 0xf0], %%g0\n"
+ : : "r" (begin));
begin += 256;
}
}
-/* $Id: console.c,v 1.24 2001/04/27 07:02:42 davem Exp $
+/* $Id: console.c,v 1.25 2001/10/30 04:54:22 davem Exp $
* console.c: Routines that deal with sending and receiving IO
* to/from the current console device using the PROM.
*
}
break;
default:
- }
+ ;
+ };
return PROMDEV_O_UNK;
}
# Networking options
#
CONFIG_PACKET=y
-# CONFIG_PACKET_MMAP is not set
-# CONFIG_NETLINK is not set
+CONFIG_PACKET_MMAP=y
+CONFIG_NETLINK=y
+CONFIG_RTNETLINK=y
+CONFIG_NETLINK_DEV=y
# CONFIG_NETFILTER is not set
# CONFIG_FILTER is not set
CONFIG_UNIX=y
# CONFIG_IP_PNP is not set
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE is not set
+CONFIG_ARPD=y
CONFIG_INET_ECN=y
# CONFIG_SYN_COOKIES is not set
CONFIG_IPV6=m
# CONFIG_KHTTPD is not set
# CONFIG_ATM is not set
+CONFIG_VLAN_8021Q=m
#
#
CONFIG_BONDING=m
CONFIG_EQUALIZER=m
CONFIG_TUN=m
+# CONFIG_ETHERTAP is not set
#
# Ethernet (10 or 100Mbit)
CONFIG_NE2K_PCI=m
# CONFIG_NE3210 is not set
# CONFIG_ES3210 is not set
+# CONFIG_8139CP is not set
CONFIG_8139TOO=m
# CONFIG_8139TOO_PIO is not set
# CONFIG_8139TOO_TUNE_TWISTER is not set
CONFIG_EFS_FS=m
# CONFIG_JFFS_FS is not set
# CONFIG_JFFS2_FS is not set
-CONFIG_CRAMFS=m
+# CONFIG_CRAMFS is not set
# CONFIG_TMPFS is not set
CONFIG_RAMFS=m
CONFIG_ISO9660_FS=m
-/* $Id: setup.c,v 1.69 2001/10/18 09:40:00 davem Exp $
+/* $Id: setup.c,v 1.70 2001/10/25 18:48:03 davem Exp $
* linux/arch/sparc64/kernel/setup.c
*
* Copyright (C) 1995,1996 David S. Miller (davem@caip.rutgers.edu)
pgd_t *pgdp;
pmd_t *pmdp;
pte_t *ptep;
+ int error;
+ if ((va >= LOW_OBP_ADDRESS) && (va < HI_OBP_ADDRESS)) {
+ tte = prom_virt_to_phys(va, &error);
+ if (!error)
+ res = PROM_TRUE;
+ goto done;
+ }
pgdp = pgd_offset_k(va);
if (pgd_none(*pgdp))
goto done;
extern unsigned long xcall_flush_dcache_page_cheetah;
extern unsigned long xcall_flush_dcache_page_spitfire;
-static spinlock_t dcache_xcall_lock = SPIN_LOCK_UNLOCKED;
-static struct page *dcache_page;
#ifdef DCFLUSH_DEBUG
extern atomic_t dcpage_flushes;
extern atomic_t dcpage_flushes_xcall;
#endif
-static __inline__ void __smp_flush_dcache_page_client(struct page *page)
+static __inline__ void __local_flush_dcache_page(struct page *page)
{
#if (L1DCACHE_SIZE > PAGE_SIZE)
__flush_dcache_page(page->virtual,
#endif
}
-void smp_flush_dcache_page_client(void)
-{
- __smp_flush_dcache_page_client(dcache_page);
- spin_unlock(&dcache_xcall_lock);
-}
-
-void smp_flush_dcache_page_impl(struct page *page)
+void smp_flush_dcache_page_impl(struct page *page, int cpu)
{
if (smp_processors_ready) {
- int cpu = dcache_dirty_cpu(page);
unsigned long mask = 1UL << cpu;
#ifdef DCFLUSH_DEBUG
atomic_inc(&dcpage_flushes);
#endif
if (cpu == smp_processor_id()) {
- __smp_flush_dcache_page_client(page);
+ __local_flush_dcache_page(page);
} else if ((cpu_present_map & mask) != 0) {
u64 data0;
if (tlb_type == spitfire) {
- spin_lock(&dcache_xcall_lock);
- dcache_page = page;
data0 = ((u64)&xcall_flush_dcache_page_spitfire);
- spitfire_xcall_deliver(data0, 0, 0, mask);
- /* Target cpu drops dcache_xcall_lock. */
+ if (page->mapping != NULL)
+ data0 |= ((u64)1 << 32);
+ spitfire_xcall_deliver(data0,
+ __pa(page->virtual),
+ (u64) page->virtual,
+ mask);
} else {
- /* Look mom, no locks... */
data0 = ((u64)&xcall_flush_dcache_page_cheetah);
cheetah_xcall_deliver(data0,
- (u64) page->virtual,
+ __pa(page->virtual),
0, mask);
}
#ifdef DCFLUSH_DEBUG
-/* $Id: sparc64_ksyms.c,v 1.113 2001/10/17 18:26:58 davem Exp $
+/* $Id: sparc64_ksyms.c,v 1.116 2001/10/26 15:49:21 davem Exp $
* arch/sparc64/kernel/sparc64_ksyms.c: Sparc64 specific ksyms support.
*
* Copyright (C) 1996 David S. Miller (davem@caip.rutgers.edu)
/* Hard IRQ locking */
EXPORT_SYMBOL(global_irq_holder);
+#ifdef CONFIG_SMP
EXPORT_SYMBOL(synchronize_irq);
+#endif
EXPORT_SYMBOL(__global_cli);
EXPORT_SYMBOL(__global_sti);
EXPORT_SYMBOL(__global_save_flags);
EXPORT_SYMBOL(cpu_data);
/* Misc SMP information */
+#ifdef CONFIG_SMP
EXPORT_SYMBOL(smp_num_cpus);
+#endif
EXPORT_SYMBOL(__cpu_number_map);
EXPORT_SYMBOL(__cpu_logical_map);
EXPORT_SYMBOL(_do_write_unlock);
#endif
+#ifdef CONFIG_SMP
EXPORT_SYMBOL(smp_call_function);
+#endif
#endif
EXPORT_SYMBOL(sys_ioctl);
EXPORT_SYMBOL(sys32_ioctl);
EXPORT_SYMBOL(sparc32_open);
-EXPORT_SYMBOL(move_addr_to_kernel);
-EXPORT_SYMBOL(move_addr_to_user);
#endif
/* Special internal versions of library functions. */
-/* $Id: sys_sparc.c,v 1.52 2001/04/14 01:12:02 davem Exp $
+/* $Id: sys_sparc.c,v 1.54 2001/10/28 20:49:13 davem Exp $
* linux/arch/sparc64/kernel/sys_sparc.c
*
* This file contains various random system calls that
asmlinkage int sparc64_personality(unsigned long personality)
{
- int ret;
- if (current->personality == PER_LINUX32 && personality == PER_LINUX)
- personality = PER_LINUX32;
- ret = sys_personality(personality);
- if (ret == PER_LINUX32)
+ unsigned long ret, trying, orig_ret;
+
+ trying = ret = personality;
+
+ if (current->personality == PER_LINUX32 &&
+ trying == PER_LINUX)
+ trying = ret = PER_LINUX32;
+
+ /* For PER_LINUX32 we want to retain &default_exec_domain. */
+ if (trying == PER_LINUX32)
ret = PER_LINUX;
- return ret;
+
+ orig_ret = ret;
+ ret = sys_personality(ret);
+
+ if (orig_ret == PER_LINUX && trying == PER_LINUX32) {
+ current->personality = PER_LINUX32;
+ ret = PER_LINUX;
+ }
+
+ return (int) ret;
}
/* Linux version of mmap */
-/* $Id: init.c,v 1.194 2001/10/17 18:26:58 davem Exp $
+/* $Id: init.c,v 1.199 2001/10/25 18:48:03 davem Exp $
* arch/sparc64/mm/init.c
*
* Copyright (C) 1996-1999 David S. Miller (davem@caip.rutgers.edu)
#endif
}
+#define PG_dcache_dirty PG_arch_1
+
+#define dcache_dirty_cpu(page) \
+ (((page)->flags >> 24) & (NR_CPUS - 1UL))
+
+static __inline__ void set_dcache_dirty(struct page *page)
+{
+ unsigned long mask = smp_processor_id();
+ unsigned long non_cpu_bits = (1UL << 24UL) - 1UL;
+ mask = (mask << 24) | (1UL << PG_dcache_dirty);
+ __asm__ __volatile__("1:\n\t"
+ "ldx [%2], %%g7\n\t"
+ "and %%g7, %1, %%g5\n\t"
+ "or %%g5, %0, %%g5\n\t"
+ "casx [%2], %%g7, %%g5\n\t"
+ "cmp %%g7, %%g5\n\t"
+ "bne,pn %%xcc, 1b\n\t"
+ " nop"
+ : /* no outputs */
+ : "r" (mask), "r" (non_cpu_bits), "r" (&page->flags)
+ : "g5", "g7");
+}
+
+static __inline__ void clear_dcache_dirty_cpu(struct page *page, unsigned long cpu)
+{
+ unsigned long mask = (1UL << PG_dcache_dirty);
+
+ __asm__ __volatile__("! test_and_clear_dcache_dirty\n"
+ "1:\n\t"
+ "ldx [%2], %%g7\n\t"
+ "srlx %%g7, 24, %%g5\n\t"
+ "cmp %%g5, %0\n\t"
+ "bne,pn %%icc, 2f\n\t"
+ " andn %%g7, %1, %%g5\n\t"
+ "casx [%2], %%g7, %%g5\n\t"
+ "cmp %%g7, %%g5\n\t"
+ "bne,pn %%xcc, 1b\n\t"
+ " nop\n"
+ "2:"
+ : /* no outputs */
+ : "r" (cpu), "r" (mask), "r" (&page->flags)
+ : "g5", "g7");
+}
+
void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t pte)
{
struct page *page = pte_page(pte);
+ unsigned long pg_flags;
+
+ if (VALID_PAGE(page) &&
+ page->mapping &&
+ ((pg_flags = page->flags) & (1UL << PG_dcache_dirty))) {
+ int cpu = (pg_flags >> 24);
- if (VALID_PAGE(page) && page->mapping &&
- test_bit(PG_dcache_dirty, &page->flags)) {
/* This is just to optimize away some function calls
* in the SMP case.
*/
- if (dcache_dirty_cpu(page) == smp_processor_id())
+ if (cpu == smp_processor_id())
flush_dcache_page_impl(page);
else
- smp_flush_dcache_page_impl(page);
+ smp_flush_dcache_page_impl(page, cpu);
- clear_dcache_dirty(page);
+ clear_dcache_dirty_cpu(page, cpu);
}
__update_mmu_cache(vma, address, pte);
}
if (dirty) {
if (dirty_cpu == smp_processor_id())
return;
- smp_flush_dcache_page_impl(page);
+ smp_flush_dcache_page_impl(page, dirty_cpu);
}
set_dcache_dirty(page);
} else {
prom_halt();
}
+#define BASE_PAGE_SIZE 8192
+static pmd_t *prompmd;
+
+/*
+ * Translate a PROM virtual address, using the mappings we capture at
+ * boot time, into a physical address.  The second parameter is only
+ * set from prom_callback() invocations.
+ */
+unsigned long prom_virt_to_phys(unsigned long promva, int *error)
+{
+ pmd_t *pmdp = prompmd + ((promva >> 23) & 0x7ff);
+ pte_t *ptep;
+ unsigned long base;
+
+ if (pmd_none(*pmdp)) {
+ if (error)
+ *error = 1;
+ return(0);
+ }
+ ptep = (pte_t *)pmd_page(*pmdp) + ((promva >> 13) & 0x3ff);
+ if (!pte_present(*ptep)) {
+ if (error)
+ *error = 1;
+ return(0);
+ }
+ if (error) {
+ *error = 0;
+ return(pte_val(*ptep));
+ }
+ base = pte_val(*ptep) & _PAGE_PADDR;
+ return(base + (promva & (BASE_PAGE_SIZE - 1)));
+}
+
static void inherit_prom_mappings(void)
{
struct linux_prom_translation *trans;
unsigned long phys_page, tte_vaddr, tte_data;
void (*remap_func)(unsigned long, unsigned long, int);
- pmd_t *pmdp, *pmd;
+ pmd_t *pmdp;
pte_t *ptep;
int node, n, i, tsz;
extern unsigned int obp_iaddr_patch[2], obp_daddr_patch[2];
* in inherit_locked_prom_mappings()).
*/
#define OBP_PMD_SIZE 2048
-#define BASE_PAGE_SIZE 8192
- pmd = __alloc_bootmem(OBP_PMD_SIZE, OBP_PMD_SIZE, 0UL);
- if (pmd == NULL)
+ prompmd = __alloc_bootmem(OBP_PMD_SIZE, OBP_PMD_SIZE, 0UL);
+ if (prompmd == NULL)
early_pgtable_allocfail("pmd");
- memset(pmd, 0, OBP_PMD_SIZE);
+ memset(prompmd, 0, OBP_PMD_SIZE);
for (i = 0; i < n; i++) {
unsigned long vaddr;
- if (trans[i].virt >= 0xf0000000 && trans[i].virt < 0x100000000) {
+ if (trans[i].virt >= LOW_OBP_ADDRESS && trans[i].virt < HI_OBP_ADDRESS) {
for (vaddr = trans[i].virt;
- vaddr < trans[i].virt + trans[i].size;
+ ((vaddr < trans[i].virt + trans[i].size) &&
+ (vaddr < HI_OBP_ADDRESS));
vaddr += BASE_PAGE_SIZE) {
unsigned long val;
- pmdp = pmd + ((vaddr >> 23) & 0x7ff);
+ pmdp = prompmd + ((vaddr >> 23) & 0x7ff);
if (pmd_none(*pmdp)) {
ptep = __alloc_bootmem(BASE_PAGE_SIZE,
BASE_PAGE_SIZE,
}
}
}
- phys_page = __pa(pmd);
+ phys_page = __pa(prompmd);
obp_iaddr_patch[0] |= (phys_page >> 10);
obp_iaddr_patch[1] |= (phys_page & 0x3ff);
flushi((long)&obp_iaddr_patch[0]);
-/* $Id: ultra.S,v 1.63 2001/10/17 19:30:21 davem Exp $
+/* $Id: ultra.S,v 1.67 2001/10/23 14:28:20 davem Exp $
* ultra.S: Don't expand these all over the place...
*
* Copyright (C) 1997, 2000 David S. Miller (davem@redhat.com)
.align 32
.globl xcall_flush_dcache_page_cheetah
-xcall_flush_dcache_page_cheetah:
+xcall_flush_dcache_page_cheetah: /* %g1 == physical page address */
sethi %hi(PAGE_SIZE), %g3
1: subcc %g3, (1 << 5), %g3
stxa %g0, [%g1 + %g3] ASI_DCACHE_INVALIDATE
nop
.globl xcall_flush_dcache_page_spitfire
-xcall_flush_dcache_page_spitfire:
- rdpr %pstate, %g2
- wrpr %g2, PSTATE_IG | PSTATE_AG, %pstate
- rdpr %pil, %g2
- wrpr %g0, 15, %pil
- sethi %hi(109f), %g7
- b,pt %xcc, etrap_irq
-109: or %g7, %lo(109b), %g7
- call smp_flush_dcache_page_client
+xcall_flush_dcache_page_spitfire: /* %g1 == physical page address
+ %g7 == kernel page virtual address
+ %g5 == (page->mapping != NULL) */
+#if (L1DCACHE_SIZE > PAGE_SIZE)
+	srlx		%g1, (13 - 2), %g1	! Form tag comparator
+ sethi %hi(L1DCACHE_SIZE), %g3 ! D$ size == 16K
+ sub %g3, (1 << 5), %g3 ! D$ linesize == 32
+1: ldxa [%g3] ASI_DCACHE_TAG, %g2
+ andcc %g2, 0x3, %g0
+ be,pn %xcc, 2f
+ andn %g2, 0x3, %g2
+ cmp %g2, %g1
+
+ bne,pt %xcc, 2f
nop
- b,pt %xcc, rtrap
- clr %l6
+ stxa %g0, [%g3] ASI_DCACHE_TAG
+ membar #Sync
+2: cmp %g3, 0
+ bne,pt %xcc, 1b
+ sub %g3, (1 << 5), %g3
+
+ brz,pn %g5, 2f
+#endif /* L1DCACHE_SIZE > PAGE_SIZE */
+ sethi %hi(PAGE_SIZE), %g3
+
+1: flush %g7
+ subcc %g3, (1 << 5), %g3
+ bne,pt %icc, 1b
+ add %g7, (1 << 5), %g7
+
+2: retry
+ nop
+ nop
.globl xcall_capture
xcall_capture:
/* Mapping support (drm_vm.h) */
#if LINUX_VERSION_CODE < 0x020317
extern unsigned long DRM(vm_nopage)(struct vm_area_struct *vma,
- unsigned long address);
+ unsigned long address,
+ int unused);
extern unsigned long DRM(vm_shm_nopage)(struct vm_area_struct *vma,
- unsigned long address);
+ unsigned long address,
+ int unused);
extern unsigned long DRM(vm_dma_nopage)(struct vm_area_struct *vma,
- unsigned long address);
+ unsigned long address,
+ int unused);
extern unsigned long DRM(vm_sg_nopage)(struct vm_area_struct *vma,
- unsigned long address);
+ unsigned long address,
+ int unused);
#else
/* Return type changed in 2.3.23 */
extern struct page *DRM(vm_nopage)(struct vm_area_struct *vma,
- unsigned long address);
+ unsigned long address,
+ int unused);
extern struct page *DRM(vm_shm_nopage)(struct vm_area_struct *vma,
- unsigned long address);
+ unsigned long address,
+ int unused);
extern struct page *DRM(vm_dma_nopage)(struct vm_area_struct *vma,
- unsigned long address);
+ unsigned long address,
+ int unused);
extern struct page *DRM(vm_sg_nopage)(struct vm_area_struct *vma,
- unsigned long address);
+ unsigned long address,
+ int unused);
#endif
extern void DRM(vm_open)(struct vm_area_struct *vma);
extern void DRM(vm_close)(struct vm_area_struct *vma);
#if LINUX_VERSION_CODE < 0x020317
unsigned long DRM(vm_nopage)(struct vm_area_struct *vma,
- unsigned long address)
+ unsigned long address,
+ int unused)
#else
/* Return type changed in 2.3.23 */
struct page *DRM(vm_nopage)(struct vm_area_struct *vma,
- unsigned long address)
+ unsigned long address,
+ int unused)
#endif
{
#if __REALLY_HAVE_AGP
#if LINUX_VERSION_CODE < 0x020317
unsigned long DRM(vm_shm_nopage)(struct vm_area_struct *vma,
- unsigned long address)
+ unsigned long address,
+ int unused)
#else
/* Return type changed in 2.3.23 */
struct page *DRM(vm_shm_nopage)(struct vm_area_struct *vma,
- unsigned long address)
+ unsigned long address,
+ int unused)
#endif
{
#if LINUX_VERSION_CODE >= 0x020300
#if LINUX_VERSION_CODE < 0x020317
unsigned long DRM(vm_dma_nopage)(struct vm_area_struct *vma,
- unsigned long address)
+ unsigned long address,
+ int unused)
#else
/* Return type changed in 2.3.23 */
struct page *DRM(vm_dma_nopage)(struct vm_area_struct *vma,
- unsigned long address)
+ unsigned long address,
+ int unused)
#endif
{
drm_file_t *priv = vma->vm_file->private_data;
#if LINUX_VERSION_CODE < 0x020317
unsigned long DRM(vm_sg_nopage)(struct vm_area_struct *vma,
- unsigned long address)
+ unsigned long address,
+ int unused)
#else
/* Return type changed in 2.3.23 */
struct page *DRM(vm_sg_nopage)(struct vm_area_struct *vma,
- unsigned long address)
+ unsigned long address,
+ int unused)
#endif
{
#if LINUX_VERSION_CODE >= 0x020300
if (r->entropy_count < nbytes * 8 &&
r->entropy_count < r->poolinfo.POOLBITS) {
- int nwords = min(r->poolinfo.poolwords - r->entropy_count/32,
- sizeof(tmp) / 4);
+ int nwords = min_t(int,
+ r->poolinfo.poolwords - r->entropy_count/32,
+ sizeof(tmp) / 4);
DEBUG_ENT("xfer %d from primary to %s (have %d, need %d)\n",
nwords * 32,
-/* $Id: sunbmac.c,v 1.27 2001/04/23 03:57:48 davem Exp $
+/* $Id: sunbmac.c,v 1.28 2001/10/21 06:35:29 davem Exp $
* sunbmac.c: Driver for Sparc BigMAC 100baseT ethernet adapters.
*
* Copyright (C) 1997, 1998, 1999 David S. Miller (davem@redhat.com)
-/* $Id: sunlance.c,v 1.108 2001/04/19 22:32:41 davem Exp $
+/* $Id: sunlance.c,v 1.109 2001/10/21 06:35:29 davem Exp $
* lance.c: Linux/Sparc/Lance driver
*
* Written 1995, 1996 by Miguel de Icaza
-/* $Id: aurora.c,v 1.17 2001/10/13 08:27:50 davem Exp $
+/* $Id: aurora.c,v 1.18 2001/10/26 17:59:31 davem Exp $
* linux/drivers/sbus/char/aurora.c -- Aurora multiport driver
*
* Copyright (c) 1999 by Oliver Aldulea (oli at bv dot ro)
-/* $Id: zs.c,v 1.67 2001/10/13 08:27:50 davem Exp $
+/* $Id: zs.c,v 1.68 2001/10/25 18:48:03 davem Exp $
* zs.c: Zilog serial port driver for the Sparc.
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
static void show_serial_version(void)
{
- char *revision = "$Revision: 1.67 $";
+ char *revision = "$Revision: 1.68 $";
char *version, *p;
version = strchr(revision, ' ');
if (mapped_addr != 0) {
return (struct sun_zslayout *) mapped_addr;
} else {
- pgd_t *pgd = pgd_offset_k((unsigned long)vaddr[0]);
- pmd_t *pmd = pmd_offset(pgd, (unsigned long)vaddr[0]);
- pte_t *pte = pte_offset(pmd, (unsigned long)vaddr[0]);
- unsigned long base = pte_val(*pte) & _PAGE_PADDR;
-
- /* Translate PROM's mapping we captured at boot
- * time into physical address.
- */
- base += ((unsigned long)vaddr[0] & ~PAGE_MASK);
- return (struct sun_zslayout *) base;
+ return (struct sun_zslayout *) prom_virt_to_phys((unsigned long)vaddr[0], 0);
}
}
#else /* !(__sparc_v9__) */
#include <asm/ptrace.h>
#include <asm/pgtable.h>
#include <asm/oplib.h>
-#include <asm/vaddrs.h>
#include <asm/io.h>
#include <asm/irq.h>
break;
}
}
+ tb->sg[0].page = NULL;
if (tb->sg[segs].address == NULL) {
kfree(tb);
tb = NULL;
tb = NULL;
break;
}
+ tb->sg[segs].page = NULL;
tb->sg[segs].length = b_size;
got += b_size;
segs++;
normalize_buffer(STbuffer);
return FALSE;
}
+ STbuffer->sg[segs].page = NULL;
STbuffer->sg[segs].length = b_size;
STbuffer->sg_segs += 1;
got += b_size;
dbg("qh has not QH_TYPE");
return;
}
- dbg("QH @ %p/%08llX:", qh, (u64)qh->dma_addr);
+ dbg("QH @ %p/%08llX:", qh, (unsigned long long)qh->dma_addr);
if (qh->hw.qh.head & UHCI_PTR_TERM)
dbg(" Head Terminate");
if (!options || !*options)
return 0;
- while (this_opt = strsep(&options, ",")) {
+ while ((this_opt = strsep(&options, ",")) != NULL) {
if (!strncmp(this_opt, "font:", 5)) {
char *p;
int i;
inode->i_flags |= S_NOQUOTA;
dqopt->files[type] = f;
+ sb->dq_op = &dquot_operations;
set_enable_flags(dqopt, type);
dquot = dqget(sb, 0, type);
dqopt->block_expire[type] = (dquot != NODQUOT) ? dquot->dq_btime : MAX_DQ_TIME;
dqput(dquot);
- sb->dq_op = &dquot_operations;
add_dquot_ref(sb, type);
up(&dqopt->dqoff_sem);
hold we did free all buffers in tree balance structure
(get_empty_nodes and get_nodes_for_preserving) or in path structure
only (get_new_buffer) just before calling this */
-void wait_buffer_until_released (struct buffer_head * bh)
+void wait_buffer_until_released (const struct buffer_head * bh)
{
int repeat_counter = 0;
INITIALIZE_PATH (path_to_entry);
struct buffer_head * bh;
int item_num, entry_num;
- struct key * rkey;
+ const struct key * rkey;
struct item_head * ih, tmp_ih;
int search_res;
char * local_buf;
RFALSE( ih_item_len(ih) + IH_SIZE != -tb->insert_size[0],
"vs-12013: mode Delete, insert size %d, ih to be deleted %h",
- ih);
+ -tb->insert_size [0], ih);
bi.tb = tb;
bi.bi_bh = tbS0;
break;
default:
- reiserfs_panic (tb->tb_sb, "internal_define_dest_src_infos", "shift type is unknown (%d)", shift_mode);
+ reiserfs_panic (tb->tb_sb, "internal_define_dest_src_infos: shift type is unknown (%d)", shift_mode);
}
}
return;
}
- reiserfs_panic (tb->tb_sb, "balance_internal_when_delete", "unexpected tb->lnum[%d]==%d or tb->rnum[%d]==%d",
+ reiserfs_panic (tb->tb_sb, "balance_internal_when_delete: unexpected tb->lnum[%d]==%d or tb->rnum[%d]==%d",
h, tb->lnum[h], h, tb->rnum[h]);
}
if ( tb->blknum[h] != 1 )
- reiserfs_panic(0, "balance_internal", "One new node required for creating the new root");
+ reiserfs_panic(0, "balance_internal: One new node required for creating the new root");
/* S[h] = empty buffer from the list FEB. */
tbSh = get_FEB (tb);
blkh = B_BLK_HEAD(tbSh);
//
// when key is 0, do not set version and short key
//
-inline void make_le_item_head (struct item_head * ih, struct cpu_key * key, int version,
- loff_t offset, int type, int length, int entry_count/*or ih_free_space*/)
+inline void make_le_item_head (struct item_head * ih, const struct cpu_key * key,
+ int version,
+ loff_t offset, int type, int length,
+ int entry_count/*or ih_free_space*/)
{
if (key) {
ih->ih_key.k_dir_id = cpu_to_le32 (key->on_disk_key.k_dir_id);
}
-struct inode * reiserfs_iget (struct super_block * s, struct cpu_key * key)
+struct inode * reiserfs_iget (struct super_block * s, const struct cpu_key * key)
{
struct inode * inode;
struct reiserfs_iget4_args args ;
//
// from ext2_prepare_write, but modified
//
-int reiserfs_prepare_write(struct file *f, struct page *page, unsigned from, unsigned to) {
+int reiserfs_prepare_write(struct file *f, struct page *page,
+ unsigned from, unsigned to) {
struct inode *inode = page->mapping->host ;
reiserfs_wait_on_write_block(inode->i_sb) ;
fix_tail_page_for_writing(page) ;
}
/* buffer is in current transaction */
-inline int buffer_journaled(struct buffer_head *bh) {
+inline int buffer_journaled(const struct buffer_head *bh) {
if (bh)
- return test_bit(BH_JDirty, &bh->b_state) ;
+ return test_bit(BH_JDirty, ( struct buffer_head * ) &bh->b_state) ;
else
return 0 ;
}
/* disk block was taken off free list before being in a finished transaction, or written to disk
** journal_new blocks can be reused immediately, for any purpose
*/
-inline int buffer_journal_new(struct buffer_head *bh) {
+inline int buffer_journal_new(const struct buffer_head *bh) {
if (bh)
- return test_bit(BH_JNew, &bh->b_state) ;
+ return test_bit(BH_JNew, ( struct buffer_head * )&bh->b_state) ;
else
return 0 ;
}
retry_count++ ;
goto retry;
}
- reiserfs_panic(s, "journal-563: flush_commit_list: BAD, j_commit_left is %lu, should be 1\n",
- atomic_read(&(jl->j_commit_left)));
+ reiserfs_panic(s, "journal-563: flush_commit_list: BAD, j_commit_left is %u, should be 1\n",
+ atomic_read(&(jl->j_commit_left)));
}
mark_buffer_dirty(jl->j_commit_bh) ;
static void reiserfs_end_buffer_io_sync(struct buffer_head *bh, int uptodate) {
if (buffer_journaled(bh)) {
- reiserfs_warning("clm-2084: pinned buffer %u:%s sent to disk\n",
+ reiserfs_warning("clm-2084: pinned buffer %lu:%s sent to disk\n",
bh->b_blocknr, kdevname(bh->b_dev)) ;
}
mark_buffer_uptodate(bh, uptodate) ;
}
if (th->t_trans_id != SB_JOURNAL(p_s_sb)->j_trans_id) {
- reiserfs_panic(th->t_super, "journal-1577: handle trans id %d != current trans id %d\n",
+ reiserfs_panic(th->t_super, "journal-1577: handle trans id %ld != current trans id %ld\n",
th->t_trans_id, SB_JOURNAL(p_s_sb)->j_trans_id);
}
p_s_sb->s_dirt = 1 ;
int wait_on_commit = flags & WAIT ;
if (th->t_trans_id != SB_JOURNAL(p_s_sb)->j_trans_id) {
- reiserfs_panic(th->t_super, "journal-1577: handle trans id %d != current trans id %d\n",
+ reiserfs_panic(th->t_super, "journal-1577: handle trans id %ld != current trans id %ld\n",
th->t_trans_id, SB_JOURNAL(p_s_sb)->j_trans_id);
}
}
if (SB_JOURNAL(p_s_sb)->j_start > JOURNAL_BLOCK_COUNT) {
- reiserfs_panic(p_s_sb, "journal-003: journal_end: j_start (%d) is too high\n", SB_JOURNAL(p_s_sb)->j_start) ;
+ reiserfs_panic(p_s_sb, "journal-003: journal_end: j_start (%ld) is too high\n", SB_JOURNAL(p_s_sb)->j_start) ;
}
return 1 ;
}
/* merge to right only part of item */
RFALSE( ih_item_len(ih) <= bytes_or_entries,
"vs-10060: not enough bytes %lu (needed %lu)",
- ih_item_len(ih), bytes_or_entries);
+ ( unsigned long )ih_item_len(ih), ( unsigned long )bytes_or_entries);
/* change first item key of the DEST */
if ( is_direct_le_ih (dih) ) {
t_dc = B_N_CHILD (dest_bi->bi_parent, dest_bi->bi_position);
RFALSE( dc_block_number(t_dc) != dest->b_blocknr,
"vs-10160: block number in bh does not match to field in disk_child structure %lu and %lu",
- dest->b_blocknr, dc_block_number(t_dc));
+ ( long unsigned ) dest->b_blocknr,
+ ( long unsigned ) dc_block_number(t_dc));
put_dc_size( t_dc, dc_size(t_dc) + (j - last_inserted_loc + IH_SIZE * cpy_num ) );
do_balance_mark_internal_dirty (dest_bi->tb, dest_bi->bi_parent, 0);
if (is_indirect_le_ih (ih)) {
RFALSE( cpy_bytes == ih_item_len(ih) && get_ih_free_space(ih),
"vs-10180: when whole indirect item is bottle to left neighbor, it must have free_space==0 (not %lu)",
- get_ih_free_space (ih));
+ ( long unsigned ) get_ih_free_space (ih));
set_ih_free_space (&n_ih, 0);
}
RFALSE( is_statdata_le_ih (ih), "10195: item is stat data");
RFALSE( pos_in_item && pos_in_item + cut_size != ih_item_len(ih),
"10200: invalid offset (%lu) or trunc_size (%lu) or ih_item_len (%lu)",
- pos_in_item, cut_size, ih_item_len (ih));
+ ( long unsigned ) pos_in_item, ( long unsigned ) cut_size,
+ ( long unsigned ) ih_item_len (ih));
/* shift item body to left if cut is from the head of item */
if (pos_in_item == 0) {
*/
/* The function is NOT SCHEDULE-SAFE! */
-int search_by_entry_key (struct super_block * sb, struct cpu_key * key,
+int search_by_entry_key (struct super_block * sb, const struct cpu_key * key,
struct path * path, struct reiserfs_dir_entry * de)
{
int retval;
{
if (le32_to_cpu (map[0]) != 1)
reiserfs_panic (s, "vs-15010: check_objectid_map: map corrupted: %lx",
- le32_to_cpu (map[0]));
+ ( long unsigned int ) le32_to_cpu (map[0]));
// FIXME: add something else here
}
}
reiserfs_warning ("vs-15010: reiserfs_release_objectid: tried to free free object id (%lu)",
- objectid_to_release);
+ ( long unsigned ) objectid_to_release);
}
do_reiserfs_warning(fmt);
printk ( KERN_EMERG "%s", error_buf);
BUG ();
- // console_print (error_buf);
- // for (;;);
-
- /* comment before release */
- //for (;;);
-
-#if 0 /* this is not needed, the state is ignored */
- if (sb && !(sb->s_flags & MS_RDONLY)) {
- sb->u.reiserfs_sb.s_mount_state |= REISERFS_ERROR_FS;
- sb->u.reiserfs_sb.s_rs->s_state = REISERFS_ERROR_FS;
-
- mark_buffer_dirty(sb->u.reiserfs_sb.s_sbh) ;
- sb->s_dirt = 1;
- }
-#endif
-
- /* this is to prevent panic from syncing this filesystem */
- if (sb)
- sb->s_flags |= MS_RDONLY;
+ /* this is not actually called, but makes reiserfs_panic() "noreturn" */
panic ("REISERFS: panic (device %s): %s\n",
sb ? kdevname(sb->s_dev) : "sb == 0", error_buf);
}
#include <linux/smp_lock.h>
/* Does the buffer contain a disk block which is in the tree. */
-inline int B_IS_IN_TREE (struct buffer_head * p_s_bh)
+inline int B_IS_IN_TREE (const struct buffer_head * p_s_bh)
{
RFALSE( B_LEVEL (p_s_bh) > MAX_HEIGHT,
-inline void copy_short_key (void * to, void * from)
+inline void copy_short_key (void * to, const void * from)
{
memcpy (to, from, SHORT_KEY_SIZE);
}
//
// to gets item head in le form
//
-inline void copy_item_head(void * p_v_to, void * p_v_from)
+inline void copy_item_head(struct item_head * p_v_to,
+ const struct item_head * p_v_from)
{
memcpy (p_v_to, p_v_from, IH_SIZE);
}
Returns: -1 if key1 < key2
0 if key1 == key2
1 if key1 > key2 */
-inline int comp_short_keys (struct key * le_key, struct cpu_key * cpu_key)
+inline int comp_short_keys (const struct key * le_key,
+ const struct cpu_key * cpu_key)
{
__u32 * p_s_le_u32, * p_s_cpu_u32;
int n_key_length = REISERFS_SHORT_KEY_LEN;
Compare keys using all 4 key fields.
   Returns: -1 if key1 < key2
            0 if key1 == key2
            1 if key1 > key2 */
-inline int comp_keys (struct key * le_key, struct cpu_key * cpu_key)
+inline int comp_keys (const struct key * le_key, const struct cpu_key * cpu_key)
{
int retval;
//
// FIXME: not used yet
//
-inline int comp_cpu_keys (struct cpu_key * key1, struct cpu_key * key2)
+inline int comp_cpu_keys (const struct cpu_key * key1,
+ const struct cpu_key * key2)
{
if (key1->on_disk_key.k_dir_id < key2->on_disk_key.k_dir_id)
return -1;
return 0;
}
-inline int comp_short_le_keys (struct key * key1, struct key * key2)
+inline int comp_short_le_keys (const struct key * key1, const struct key * key2)
{
__u32 * p_s_1_u32, * p_s_2_u32;
int n_key_length = REISERFS_SHORT_KEY_LEN;
return 0;
}
-inline int comp_short_cpu_keys (struct cpu_key * key1,
- struct cpu_key * key2)
+inline int comp_short_cpu_keys (const struct cpu_key * key1,
+ const struct cpu_key * key2)
{
__u32 * p_s_1_u32, * p_s_2_u32;
int n_key_length = REISERFS_SHORT_KEY_LEN;
-inline void cpu_key2cpu_key (struct cpu_key * to, struct cpu_key * from)
+inline void cpu_key2cpu_key (struct cpu_key * to, const struct cpu_key * from)
{
memcpy (to, from, sizeof (struct cpu_key));
}
-inline void le_key2cpu_key (struct cpu_key * to, struct key * from)
+inline void le_key2cpu_key (struct cpu_key * to, const struct key * from)
{
to->on_disk_key.k_dir_id = le32_to_cpu (from->k_dir_id);
to->on_disk_key.k_objectid = le32_to_cpu (from->k_objectid);
// this does not say which one is bigger, it only returns 1 if keys
// are not equal, 0 otherwise
-inline int comp_le_keys (struct key * k1, struct key * k2)
+inline int comp_le_keys (const struct key * k1, const struct key * k2)
{
return memcmp (k1, k2, sizeof (struct key));
}
cut the number of possible items it could be by one more than half rounded down,
or we find it. */
inline int bin_search (
- void * p_v_key, /* Key to search for. */
- void * p_v_base, /* First item in the array. */
+ const void * p_v_key, /* Key to search for. */
+ const void * p_v_base,/* First item in the array. */
int p_n_num, /* Number of items in the array. */
int p_n_width, /* Item size in the array.
searched. Lest the reader be
/* Minimal possible key. It is never in the tree. */
-struct key MIN_KEY = {0, 0, {{0, 0},}};
+const struct key MIN_KEY = {0, 0, {{0, 0},}};
/* Maximal possible key. It is never in the tree. */
-struct key MAX_KEY = {0xffffffff, 0xffffffff, {{0xffffffff, 0xffffffff},}};
+const struct key MAX_KEY = {0xffffffff, 0xffffffff, {{0xffffffff, 0xffffffff},}};
/* Get delimiting key of the buffer by looking for it in the buffers in the path, starting from the bottom
of the path, and going upwards. We must check the path's validity at each step. If the key is not in
the path, there is no delimiting key in the tree (buffer is first or last buffer in tree), and in this
case we return a special key, either MIN_KEY or MAX_KEY. */
-inline struct key * get_lkey (
- struct path * p_s_chk_path,
- struct super_block * p_s_sb
+inline const struct key * get_lkey (
+ const struct path * p_s_chk_path,
+ const struct super_block * p_s_sb
) {
int n_position, n_path_offset = p_s_chk_path->path_length;
struct buffer_head * p_s_parent;
/* Get delimiting key of the buffer at the path and its right neighbor. */
-inline struct key * get_rkey (
- struct path * p_s_chk_path,
- struct super_block * p_s_sb
+inline const struct key * get_rkey (
+ const struct path * p_s_chk_path,
+ const struct super_block * p_s_sb
) {
int n_position,
n_path_offset = p_s_chk_path->path_length;
this case get_lkey and get_rkey return a special key which is MIN_KEY or MAX_KEY. */
static inline int key_in_buffer (
struct path * p_s_chk_path, /* Path which should be checked. */
- struct cpu_key * p_s_key, /* Key which should be checked. */
+ const struct cpu_key * p_s_key, /* Key which should be checked. */
struct super_block * p_s_sb /* Super block pointer. */
) {
correctness of the bottom of the path */
/* The function is NOT SCHEDULE-SAFE! */
int search_by_key (struct super_block * p_s_sb,
- struct cpu_key * p_s_key, /* Key to search. */
+ const struct cpu_key * p_s_key, /* Key to search. */
struct path * p_s_search_path, /* This structure was
allocated and initialized
by the calling
// certain level
if (!is_tree_node (p_s_bh, expected_level)) {
reiserfs_warning ("vs-5150: search_by_key: "
- "invalid format found in block %d. Fsck?\n", p_s_bh->b_blocknr);
+ "invalid format found in block %ld. Fsck?\n",
+ p_s_bh->b_blocknr);
pathrelse (p_s_search_path);
return IO_ERROR;
}
n_node_level = B_LEVEL (p_s_bh);
RFALSE( n_node_level < n_stop_level,
- "vs-5152: tree level is less than stop level (%d)",
+ "vs-5152: tree level (%d) is less than stop level (%d)",
n_node_level, n_stop_level);
n_retval = bin_search( p_s_key, B_N_PITEM_HEAD(p_s_bh, 0),
/* The function is NOT SCHEDULE-SAFE! */
int search_for_position_by_key (struct super_block * p_s_sb, /* Pointer to the super block. */
- struct cpu_key * p_cpu_key, /* Key to search (cpu variable) */
+ const struct cpu_key * p_cpu_key, /* Key to search (cpu variable) */
struct path * p_s_search_path /* Filled up by this function. */
) {
struct item_head * p_le_ih; /* pointer to on-disk structure */
/* Compare given item and item pointed to by the path. */
-int comp_items (struct item_head * stored_ih, struct path * p_s_path)
+int comp_items (const struct item_head * stored_ih, const struct path * p_s_path)
{
struct buffer_head * p_s_bh;
struct item_head * ih;
/* we need only to know, whether it is the same item */
ih = get_ih (p_s_path);
return memcmp (stored_ih, ih, IH_SIZE);
-
-#if 0
- /* Get item at the path. */
- p_s_path_item = PATH_PITEM_HEAD(p_s_path);
- /* Compare keys. */
- if ( COMP_KEYS(&(p_s_path_item->ih_key), &(p_cpu_ih->ih_key)) )
- return 1;
-
- /* Compare other items fields. */
- if( ih_entry_count(p_s_path_item) != ih_entry_count(p_cpu_ih) ||
- ih_item_len(p_s_path_item) != ih_item_len(p_cpu_ih) ||
- ih_location(p_s_path_item) != ih_location(p_cpu_ih) )
- return 1;
-
- /* Items are equal. */
- return 0;
-#endif
}
struct reiserfs_transaction_handle *th,
struct inode * inode,
struct path * p_s_path,
- struct cpu_key * p_s_item_key,
+ const struct cpu_key * p_s_item_key,
int * p_n_removed, /* Number of unformatted nodes which were removed
from end of the file. */
int * p_n_cut_size,
/* Delete object item. */
int reiserfs_delete_item (struct reiserfs_transaction_handle *th,
struct path * p_s_path, /* Path to the deleted item. */
- struct cpu_key * p_s_item_key, /* Key to search for the deleted item. */
+ const struct cpu_key * p_s_item_key, /* Key to search for the deleted item. */
struct inode * p_s_inode,/* inode is here just to update i_blocks */
struct buffer_head * p_s_un_bh) /* NULL or unformatted node pointer. */
{
struct inode * p_s_inode,
struct page *page,
struct path * p_s_path,
- struct cpu_key * p_s_item_key,
+ const struct cpu_key * p_s_item_key,
loff_t n_new_file_size,
char * p_c_mode
) {
indirect_to_direct_roll_back (th, p_s_inode, p_s_path);
}
if (n_ret_value == NO_DISK_SPACE)
- reiserfs_warning ("");
+ reiserfs_warning ("NO_DISK_SPACE");
unfix_nodes (&s_cut_balance);
return -EIO;
}
if (c_mode == M_DELETE && ih_item_len(le_ih) != UNFM_P_SIZE)
reiserfs_panic (p_s_sb, "vs-5653: reiserfs_cut_from_item: "
- "completing indirect2direct conversion indirect item %h"
+ "completing indirect2direct conversion indirect item %h "
"being deleted must be 4 bytes long", le_ih);
if (c_mode == M_CUT && s_cut_balance.insert_size[0] != -UNFM_P_SIZE) {
#ifdef CONFIG_REISERFS_CHECK
// this makes sure, that we __append__, not overwrite or add holes
-static void check_research_for_paste (struct path * path, struct cpu_key * p_s_key)
+static void check_research_for_paste (struct path * path,
+ const struct cpu_key * p_s_key)
{
struct item_head * found_ih = get_ih (path);
/* Paste bytes to the existing item. Returns bytes number pasted into the item. */
int reiserfs_paste_into_item (struct reiserfs_transaction_handle *th,
struct path * p_s_search_path, /* Path to the pasted item. */
- struct cpu_key * p_s_key, /* Key to search for the needed item.*/
+ const struct cpu_key * p_s_key, /* Key to search for the needed item.*/
const char * p_c_body, /* Pointer to the bytes to paste. */
int n_pasted_size) /* Size of pasted bytes. */
{
/* Insert new item into the buffer at the path. */
int reiserfs_insert_item(struct reiserfs_transaction_handle *th,
struct path * p_s_path, /* Path to the inserted item. */
- struct cpu_key * key,
+ const struct cpu_key * key,
struct item_head * p_s_ih, /* Pointer to the item header to insert.*/
const char * p_c_body) /* Pointer to the bytes to insert. */
{
#define REISERFS_OLD_BLOCKSIZE 4096
#define REISERFS_SUPER_MAGIC_STRING_OFFSET_NJ 20
-
+char reiserfs_super_magic_string[] = REISERFS_SUPER_MAGIC_STRING;
+char reiser2fs_super_magic_string[] = REISER2FS_SUPER_MAGIC_STRING;
//
// a portion of this function, particularly the VFS interface portion,
struct inode * p_s_inode,
struct page *page,
struct path * p_s_path, /* path to the indirect item. */
- struct cpu_key * p_s_item_key, /* Key to look for unformatted node pointer to be cut. */
+ const struct cpu_key * p_s_item_key, /* Key to look for unformatted node pointer to be cut. */
loff_t n_new_file_size, /* New file size. */
char * p_c_mode)
{
vfsmnt->mnt_root = dget(sb->s_root);
bdput(bdev); /* sb holds a reference */
+
+#ifdef CONFIG_ROOT_NFS
attach_it:
+#endif
root_nd.mnt = root_vfsmnt;
root_nd.dentry = root_vfsmnt->mnt_sb->s_root;
graft_tree(vfsmnt, &root_nd);
ptr = &v->counter;
increment = i;
- __asm__ __volatile__("
- mov %%o7, %%g4
- call ___atomic_add
- add %%o7, 8, %%o7
-" : "=&r" (increment)
+ __asm__ __volatile__(
+ "mov %%o7, %%g4\n\t"
+ "call ___atomic_add\n\t"
+ " add %%o7, 8, %%o7\n"
+ : "=&r" (increment)
: "0" (increment), "r" (ptr)
: "g3", "g4", "g7", "memory", "cc");
ptr = &v->counter;
increment = i;
- __asm__ __volatile__("
- mov %%o7, %%g4
- call ___atomic_sub
- add %%o7, 8, %%o7
-" : "=&r" (increment)
+ __asm__ __volatile__(
+ "mov %%o7, %%g4\n\t"
+ "call ___atomic_sub\n\t"
+ " add %%o7, 8, %%o7\n"
+ : "=&r" (increment)
: "0" (increment), "r" (ptr)
: "g3", "g4", "g7", "memory", "cc");
-/* $Id: bitops.h,v 1.64 2001/07/18 13:48:23 anton Exp $
+/* $Id: bitops.h,v 1.65 2001/10/30 04:08:26 davem Exp $
* bitops.h: Bit string operations on the Sparc.
*
* Copyright 1995 David S. Miller (davem@caip.rutgers.edu)
ADDR = ((unsigned long *) addr) + (nr >> 5);
mask = 1 << (nr & 31);
- __asm__ __volatile__("
- mov %%o7, %%g4
- call ___set_bit
- add %%o7, 8, %%o7
-" : "=&r" (mask)
+ __asm__ __volatile__(
+ "mov %%o7, %%g4\n\t"
+ "call ___set_bit\n\t"
+ " add %%o7, 8, %%o7\n"
+ : "=&r" (mask)
: "0" (mask), "r" (ADDR)
: "g3", "g4", "g5", "g7", "memory", "cc");
ADDR = ((unsigned long *) addr) + (nr >> 5);
mask = 1 << (nr & 31);
- __asm__ __volatile__("
- mov %%o7, %%g4
- call ___set_bit
- add %%o7, 8, %%o7
-" : "=&r" (mask)
+ __asm__ __volatile__(
+ "mov %%o7, %%g4\n\t"
+ "call ___set_bit\n\t"
+ " add %%o7, 8, %%o7\n"
+ : "=&r" (mask)
: "0" (mask), "r" (ADDR)
: "g3", "g4", "g5", "g7", "cc");
}
ADDR = ((unsigned long *) addr) + (nr >> 5);
mask = 1 << (nr & 31);
- __asm__ __volatile__("
- mov %%o7, %%g4
- call ___clear_bit
- add %%o7, 8, %%o7
-" : "=&r" (mask)
+ __asm__ __volatile__(
+ "mov %%o7, %%g4\n\t"
+ "call ___clear_bit\n\t"
+ " add %%o7, 8, %%o7\n"
+ : "=&r" (mask)
: "0" (mask), "r" (ADDR)
: "g3", "g4", "g5", "g7", "memory", "cc");
ADDR = ((unsigned long *) addr) + (nr >> 5);
mask = 1 << (nr & 31);
- __asm__ __volatile__("
- mov %%o7, %%g4
- call ___clear_bit
- add %%o7, 8, %%o7
-" : "=&r" (mask)
+ __asm__ __volatile__(
+ "mov %%o7, %%g4\n\t"
+ "call ___clear_bit\n\t"
+ " add %%o7, 8, %%o7\n"
+ : "=&r" (mask)
: "0" (mask), "r" (ADDR)
: "g3", "g4", "g5", "g7", "cc");
}
ADDR = ((unsigned long *) addr) + (nr >> 5);
mask = 1 << (nr & 31);
- __asm__ __volatile__("
- mov %%o7, %%g4
- call ___change_bit
- add %%o7, 8, %%o7
-" : "=&r" (mask)
+ __asm__ __volatile__(
+ "mov %%o7, %%g4\n\t"
+ "call ___change_bit\n\t"
+ " add %%o7, 8, %%o7\n"
+ : "=&r" (mask)
: "0" (mask), "r" (ADDR)
: "g3", "g4", "g5", "g7", "memory", "cc");
ADDR = ((unsigned long *) addr) + (nr >> 5);
mask = 1 << (nr & 31);
- __asm__ __volatile__("
- mov %%o7, %%g4
- call ___change_bit
- add %%o7, 8, %%o7
-" : "=&r" (mask)
+ __asm__ __volatile__(
+ "mov %%o7, %%g4\n\t"
+ "call ___change_bit\n\t"
+ " add %%o7, 8, %%o7\n"
+ : "=&r" (mask)
: "0" (mask), "r" (ADDR)
: "g3", "g4", "g5", "g7", "cc");
}
-/* $Id: checksum.h,v 1.31 2000/01/31 01:26:52 davem Exp $ */
+/* $Id: checksum.h,v 1.32 2001/10/30 04:32:24 davem Exp $ */
#ifndef __SPARC_CHECKSUM_H
#define __SPARC_CHECKSUM_H
register char *d asm("o1") = dst;
register int l asm("g1") = len;
- __asm__ __volatile__ ("
- call " C_LABEL_STR(__csum_partial_copy_sparc_generic) "
- mov %4, %%g7
- " : "=r" (ret) : "0" (ret), "r" (d), "r" (l), "r" (sum) :
+ __asm__ __volatile__ (
+ "call " C_LABEL_STR(__csum_partial_copy_sparc_generic) "\n\t"
+ " mov %4, %%g7\n"
+ : "=r" (ret) : "0" (ret), "r" (d), "r" (l), "r" (sum) :
"o1", "o2", "o3", "o4", "o5", "o7", "g1", "g2", "g3", "g4", "g5", "g7");
return ret;
}
register int l asm("g1") = len;
register unsigned int s asm("g7") = sum;
- __asm__ __volatile__ ("
- .section __ex_table,#alloc
- .align 4
- .word 1f,2
- .previous
-1:
- call " C_LABEL_STR(__csum_partial_copy_sparc_generic) "
- st %5, [%%sp + 64]
- " : "=r" (ret) : "0" (ret), "r" (d), "r" (l), "r" (s), "r" (err) :
+ __asm__ __volatile__ (
+ ".section __ex_table,#alloc\n\t"
+ ".align 4\n\t"
+ ".word 1f,2\n\t"
+ ".previous\n"
+ "1:\n\t"
+ "call " C_LABEL_STR(__csum_partial_copy_sparc_generic) "\n\t"
+ " st %5, [%%sp + 64]\n"
+ : "=r" (ret) : "0" (ret), "r" (d), "r" (l), "r" (s), "r" (err) :
"o1", "o2", "o3", "o4", "o5", "o7", "g1", "g2", "g3", "g4", "g5", "g7");
return ret;
}
register int l asm("g1") = len;
register unsigned int s asm("g7") = sum;
- __asm__ __volatile__ ("
- .section __ex_table,#alloc
- .align 4
- .word 1f,1
- .previous
-1:
- call " C_LABEL_STR(__csum_partial_copy_sparc_generic) "
- st %5, [%%sp + 64]
- " : "=r" (ret) : "0" (ret), "r" (d), "r" (l), "r" (s), "r" (err) :
+ __asm__ __volatile__ (
+ ".section __ex_table,#alloc\n\t"
+ ".align 4\n\t"
+ ".word 1f,1\n\t"
+ ".previous\n"
+ "1:\n\t"
+ "call " C_LABEL_STR(__csum_partial_copy_sparc_generic) "\n\t"
+ " st %5, [%%sp + 64]\n"
+ : "=r" (ret) : "0" (ret), "r" (d), "r" (l), "r" (s), "r" (err) :
"o1", "o2", "o3", "o4", "o5", "o7", "g1", "g2", "g3", "g4", "g5", "g7");
return ret;
}
unsigned short proto,
unsigned int sum)
{
- __asm__ __volatile__ ("
- addcc %3, %4, %%g4
- addxcc %5, %%g4, %%g4
- ld [%2 + 0x0c], %%g2
- ld [%2 + 0x08], %%g3
- addxcc %%g2, %%g4, %%g4
- ld [%2 + 0x04], %%g2
- addxcc %%g3, %%g4, %%g4
- ld [%2 + 0x00], %%g3
- addxcc %%g2, %%g4, %%g4
- ld [%1 + 0x0c], %%g2
- addxcc %%g3, %%g4, %%g4
- ld [%1 + 0x08], %%g3
- addxcc %%g2, %%g4, %%g4
- ld [%1 + 0x04], %%g2
- addxcc %%g3, %%g4, %%g4
- ld [%1 + 0x00], %%g3
- addxcc %%g2, %%g4, %%g4
- addxcc %%g3, %%g4, %0
- addx 0, %0, %0
- "
+ __asm__ __volatile__ (
+ "addcc %3, %4, %%g4\n\t"
+ "addxcc %5, %%g4, %%g4\n\t"
+ "ld [%2 + 0x0c], %%g2\n\t"
+ "ld [%2 + 0x08], %%g3\n\t"
+ "addxcc %%g2, %%g4, %%g4\n\t"
+ "ld [%2 + 0x04], %%g2\n\t"
+ "addxcc %%g3, %%g4, %%g4\n\t"
+ "ld [%2 + 0x00], %%g3\n\t"
+ "addxcc %%g2, %%g4, %%g4\n\t"
+ "ld [%1 + 0x0c], %%g2\n\t"
+ "addxcc %%g3, %%g4, %%g4\n\t"
+ "ld [%1 + 0x08], %%g3\n\t"
+ "addxcc %%g2, %%g4, %%g4\n\t"
+ "ld [%1 + 0x04], %%g2\n\t"
+ "addxcc %%g3, %%g4, %%g4\n\t"
+ "ld [%1 + 0x00], %%g3\n\t"
+ "addxcc %%g2, %%g4, %%g4\n\t"
+ "addxcc %%g3, %%g4, %0\n\t"
+ "addx 0, %0, %0\n"
: "=&r" (sum)
: "r" (saddr), "r" (daddr),
"r"(htonl(len)), "r"(htonl(proto)), "r"(sum)
-/* $Id: scatterlist.h,v 1.6 2001/10/09 02:24:35 davem Exp $ */
+/* $Id: scatterlist.h,v 1.7 2001/10/30 04:34:57 davem Exp $ */
#ifndef _SPARC_SCATTERLIST_H
#define _SPARC_SCATTERLIST_H
#include <linux/types.h>
struct scatterlist {
- char * address; /* Location data is to be transferred to */
- unsigned int length;
+ /* This will disappear in 2.5.x */
+ char *address;
- __u32 dvma_address; /* A place to hang host-specific addresses at. */
- __u32 dvma_length;
+ /* These two are only valid if ADDRESS member of this
+ * struct is NULL.
+ */
+ struct page *page;
+ unsigned int offset;
+
+ unsigned int length;
+
+ __u32 dvma_address; /* A place to hang host-specific addresses at. */
+ __u32 dvma_length;
};
#define sg_dma_address(sg) ((sg)->dvma_address)
ptr = &(sem->count.counter);
increment = 1;
- __asm__ __volatile__("
- mov %%o7, %%g4
- call ___atomic_sub
- add %%o7, 8, %%o7
- tst %%g2
- bl 2f
- nop
-1:
- .subsection 2
-2: save %%sp, -64, %%sp
- mov %%g1, %%l1
- mov %%g5, %%l5
- call %3
- mov %%g1, %%o0
- mov %%l1, %%g1
- ba 1b
- restore %%l5, %%g0, %%g5
- .previous\n"
+ __asm__ __volatile__(
+ "mov %%o7, %%g4\n\t"
+ "call ___atomic_sub\n\t"
+ " add %%o7, 8, %%o7\n\t"
+ "tst %%g2\n\t"
+ "bl 2f\n\t"
+ " nop\n"
+ "1:\n\t"
+ ".subsection 2\n"
+ "2:\n\t"
+ "save %%sp, -64, %%sp\n\t"
+ "mov %%g1, %%l1\n\t"
+ "mov %%g5, %%l5\n\t"
+ "call %3\n\t"
+ " mov %%g1, %%o0\n\t"
+ "mov %%l1, %%g1\n\t"
+ "ba 1b\n\t"
+ " restore %%l5, %%g0, %%g5\n\t"
+ ".previous\n"
: "=&r" (increment)
: "0" (increment), "r" (ptr), "i" (__down)
: "g3", "g4", "g7", "memory", "cc");
ptr = &(sem->count.counter);
increment = 1;
- __asm__ __volatile__("
- mov %%o7, %%g4
- call ___atomic_sub
- add %%o7, 8, %%o7
- tst %%g2
- bl 2f
- clr %%g2
-1:
- .subsection 2
-2: save %%sp, -64, %%sp
- mov %%g1, %%l1
- mov %%g5, %%l5
- call %3
- mov %%g1, %%o0
- mov %%l1, %%g1
- mov %%l5, %%g5
- ba 1b
- restore %%o0, %%g0, %%g2
- .previous\n"
+ __asm__ __volatile__(
+ "mov %%o7, %%g4\n\t"
+ "call ___atomic_sub\n\t"
+ " add %%o7, 8, %%o7\n\t"
+ "tst %%g2\n\t"
+ "bl 2f\n\t"
+ " clr %%g2\n"
+ "1:\n\t"
+ ".subsection 2\n"
+ "2:\n\t"
+ "save %%sp, -64, %%sp\n\t"
+ "mov %%g1, %%l1\n\t"
+ "mov %%g5, %%l5\n\t"
+ "call %3\n\t"
+ " mov %%g1, %%o0\n\t"
+ "mov %%l1, %%g1\n\t"
+ "mov %%l5, %%g5\n\t"
+ "ba 1b\n\t"
+ " restore %%o0, %%g0, %%g2\n\t"
+ ".previous\n"
: "=&r" (increment)
: "0" (increment), "r" (ptr), "i" (__down_interruptible)
: "g3", "g4", "g7", "memory", "cc");
ptr = &(sem->count.counter);
increment = 1;
- __asm__ __volatile__("
- mov %%o7, %%g4
- call ___atomic_sub
- add %%o7, 8, %%o7
- tst %%g2
- bl 2f
- clr %%g2
-1:
- .subsection 2
-2: save %%sp, -64, %%sp
- mov %%g1, %%l1
- mov %%g5, %%l5
- call %3
- mov %%g1, %%o0
- mov %%l1, %%g1
- mov %%l5, %%g5
- ba 1b
- restore %%o0, %%g0, %%g2
- .previous\n"
+ __asm__ __volatile__(
+ "mov %%o7, %%g4\n\t"
+ "call ___atomic_sub\n\t"
+ " add %%o7, 8, %%o7\n\t"
+ "tst %%g2\n\t"
+ "bl 2f\n\t"
+ " clr %%g2\n"
+ "1:\n\t"
+ ".subsection 2\n"
+ "2:\n\t"
+ "save %%sp, -64, %%sp\n\t"
+ "mov %%g1, %%l1\n\t"
+ "mov %%g5, %%l5\n\t"
+ "call %3\n\t"
+ " mov %%g1, %%o0\n\t"
+ "mov %%l1, %%g1\n\t"
+ "mov %%l5, %%g5\n\t"
+ "ba 1b\n\t"
+ " restore %%o0, %%g0, %%g2\n\t"
+ ".previous\n"
: "=&r" (increment)
: "0" (increment), "r" (ptr), "i" (__down_trylock)
: "g3", "g4", "g7", "memory", "cc");
ptr = &(sem->count.counter);
increment = 1;
- __asm__ __volatile__("
- mov %%o7, %%g4
- call ___atomic_add
- add %%o7, 8, %%o7
- tst %%g2
- ble 2f
- nop
-1:
- .subsection 2
-2: save %%sp, -64, %%sp
- mov %%g1, %%l1
- mov %%g5, %%l5
- call %3
- mov %%g1, %%o0
- mov %%l1, %%g1
- ba 1b
- restore %%l5, %%g0, %%g5
- .previous\n"
+ __asm__ __volatile__(
+ "mov %%o7, %%g4\n\t"
+ "call ___atomic_add\n\t"
+ " add %%o7, 8, %%o7\n\t"
+ "tst %%g2\n\t"
+ "ble 2f\n\t"
+ " nop\n"
+ "1:\n\t"
+ ".subsection 2\n"
+ "2:\n\t"
+ "save %%sp, -64, %%sp\n\t"
+ "mov %%g1, %%l1\n\t"
+ "mov %%g5, %%l5\n\t"
+ "call %3\n\t"
+ " mov %%g1, %%o0\n\t"
+ "mov %%l1, %%g1\n\t"
+ "ba 1b\n\t"
+ " restore %%l5, %%g0, %%g5\n\t"
+ ".previous\n"
: "=&r" (increment)
: "0" (increment), "r" (ptr), "i" (__up)
: "g3", "g4", "g7", "memory", "cc");
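The down()/down_interruptible()/down_trylock()/up() hunks above all convert the same pattern: an atomic count update via ___atomic_sub/___atomic_add with a branch to an out-of-line slow path in `.subsection 2`. A portable sketch of the trylock flavour, using the GCC `__atomic` builtins in place of the sparc helper (an assumption for illustration, not the kernel's code):

```c
#include <assert.h>

/* Sketch only: decrement the count and take the slow path when it
 * goes negative, as the "bl 2f" branch above does.  The kernel would
 * call __down_trylock here; we just undo the decrement and report
 * contention. */
static int sketch_down_trylock(int *count)
{
	if (__atomic_sub_fetch(count, 1, __ATOMIC_ACQUIRE) < 0) {
		/* contended: restore the count, caller must not enter */
		__atomic_add_fetch(count, 1, __ATOMIC_RELEASE);
		return 1;
	}
	return 0; /* acquired */
}
```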
extern __inline__ void spin_lock(spinlock_t *lock)
{
- __asm__ __volatile__("
-1: ldstub [%0], %%g2
- orcc %%g2, 0x0, %%g0
- bne,a 2f
- ldub [%0], %%g2
- .subsection 2
-2: orcc %%g2, 0x0, %%g0
- bne,a 2b
- ldub [%0], %%g2
- b,a 1b
- .previous
-" : /* no outputs */
+ __asm__ __volatile__(
+ "\n1:\n\t"
+ "ldstub [%0], %%g2\n\t"
+ "orcc %%g2, 0x0, %%g0\n\t"
+ "bne,a 2f\n\t"
+ " ldub [%0], %%g2\n\t"
+ ".subsection 2\n"
+ "2:\n\t"
+ "orcc %%g2, 0x0, %%g0\n\t"
+ "bne,a 2b\n\t"
+ " ldub [%0], %%g2\n\t"
+ "b,a 1b\n\t"
+ ".previous\n"
+ : /* no outputs */
: "r" (lock)
: "g2", "memory", "cc");
}
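The spin_lock asm above is a classic test-and-set loop: `ldstub` atomically loads the lock byte and stores all-ones, and the inner `ldub` loop spins on plain reads until the byte looks free before retrying. A portable sketch with GCC `__atomic` builtins standing in for `ldstub` (an assumption, not the kernel's implementation):

```c
#include <assert.h>

/* 0 = free, 0xff = held (ldstub writes all-ones on acquire) */
typedef struct { unsigned char lock; } sketch_spinlock_t;

static void sketch_spin_lock(sketch_spinlock_t *lp)
{
	/* ldstub equivalent: atomically fetch old value, store 0xff */
	while (__atomic_exchange_n(&lp->lock, 0xff, __ATOMIC_ACQUIRE) != 0) {
		/* the "ldub" inner loop: spin on reads, not on stores */
		while (__atomic_load_n(&lp->lock, __ATOMIC_RELAXED) != 0)
			;
	}
}

static void sketch_spin_unlock(sketch_spinlock_t *lp)
{
	__atomic_store_n(&lp->lock, 0, __ATOMIC_RELEASE);
}
```

Spinning on reads between exchange attempts keeps the lock cacheline shared while held, which is why the sparc code retries `ldstub` only after `ldub` sees zero.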
{
register rwlock_t *lp asm("g1");
lp = rw;
- __asm__ __volatile__("
- mov %%o7, %%g4
- call ___rw_read_enter
- ldstub [%%g1 + 3], %%g2
-" : /* no outputs */
+ __asm__ __volatile__(
+ "mov %%o7, %%g4\n\t"
+ "call ___rw_read_enter\n\t"
+ " ldstub [%%g1 + 3], %%g2\n"
+ : /* no outputs */
: "r" (lp)
: "g2", "g4", "memory", "cc");
}
{
register rwlock_t *lp asm("g1");
lp = rw;
- __asm__ __volatile__("
- mov %%o7, %%g4
- call ___rw_read_exit
- ldstub [%%g1 + 3], %%g2
-" : /* no outputs */
+ __asm__ __volatile__(
+ "mov %%o7, %%g4\n\t"
+ "call ___rw_read_exit\n\t"
+ " ldstub [%%g1 + 3], %%g2\n"
+ : /* no outputs */
: "r" (lp)
: "g2", "g4", "memory", "cc");
}
{
register rwlock_t *lp asm("g1");
lp = rw;
- __asm__ __volatile__("
- mov %%o7, %%g4
- call ___rw_write_enter
- ldstub [%%g1 + 3], %%g2
-" : /* no outputs */
+ __asm__ __volatile__(
+ "mov %%o7, %%g4\n\t"
+ "call ___rw_write_enter\n\t"
+ " ldstub [%%g1 + 3], %%g2\n"
+ : /* no outputs */
: "r" (lp)
: "g2", "g4", "memory", "cc");
}
-/* $Id: system.h,v 1.84 2000/09/23 02:11:22 davem Exp $ */
+/* $Id: system.h,v 1.86 2001/10/30 04:57:10 davem Exp $ */
#include <linux/config.h>
#ifndef __SPARC_SYSTEM_H
: "g1", "g2", "g3", "g4", "g5", "g7", "l0", "l1", \
"l4", "l5", "l6", "l7", "i0", "i1", "i2", "i3", "i4", "i5", "o0", "o1", "o2", \
"o3"); \
-here: } while(0)
+here:; } while(0)
/*
* Changing the IRQ level on the Sparc.
*/
extern __inline__ void setipl(unsigned long __orig_psr)
{
- __asm__ __volatile__("
- wr %0, 0x0, %%psr
- nop; nop; nop
-" : /* no outputs */
+ __asm__ __volatile__(
+ "wr %0, 0x0, %%psr\n\t"
+ "nop; nop; nop\n"
+ : /* no outputs */
: "r" (__orig_psr)
: "memory", "cc");
}
{
unsigned long tmp;
- __asm__ __volatile__("
- rd %%psr, %0
- nop; nop; nop; /* Sun4m + Cypress + SMP bug */
- or %0, %1, %0
- wr %0, 0x0, %%psr
- nop; nop; nop
-" : "=r" (tmp)
+ __asm__ __volatile__(
+ "rd %%psr, %0\n\t"
+ "nop; nop; nop;\n\t" /* Sun4m + Cypress + SMP bug */
+ "or %0, %1, %0\n\t"
+ "wr %0, 0x0, %%psr\n\t"
+ "nop; nop; nop\n"
+ : "=r" (tmp)
: "i" (PSR_PIL)
: "memory");
}
{
unsigned long tmp;
- __asm__ __volatile__("
- rd %%psr, %0
- nop; nop; nop; /* Sun4m + Cypress + SMP bug */
- andn %0, %1, %0
- wr %0, 0x0, %%psr
- nop; nop; nop
-" : "=r" (tmp)
+ __asm__ __volatile__(
+ "rd %%psr, %0\n\t"
+ "nop; nop; nop;\n\t" /* Sun4m + Cypress + SMP bug */
+ "andn %0, %1, %0\n\t"
+ "wr %0, 0x0, %%psr\n\t"
+ "nop; nop; nop\n"
+ : "=r" (tmp)
: "i" (PSR_PIL)
: "memory");
}
{
unsigned long retval;
- __asm__ __volatile__("
- rd %%psr, %0
- nop; nop; nop; /* Sun4m + Cypress + SMP bug */
- and %0, %2, %%g1
- and %1, %2, %%g2
- xorcc %%g1, %%g2, %%g0
- be 1f
- nop
- wr %0, %2, %%psr
- nop; nop; nop;
-1:
-" : "=r" (retval)
+ __asm__ __volatile__(
+ "rd %%psr, %0\n\t"
+ "nop; nop; nop;\n\t" /* Sun4m + Cypress + SMP bug */
+ "and %0, %2, %%g1\n\t"
+ "and %1, %2, %%g2\n\t"
+ "xorcc %%g1, %%g2, %%g0\n\t"
+ "be 1f\n\t"
+ " nop\n\t"
+ "wr %0, %2, %%psr\n\t"
+ "nop; nop; nop;\n"
+ "1:\n"
+ : "=r" (retval)
: "r" (__new_psr), "i" (PSR_PIL)
: "g1", "g2", "memory", "cc");
{
unsigned long retval;
- __asm__ __volatile__("
- rd %%psr, %0
- nop; nop; nop; /* Sun4m + Cypress + SMP bug */
- or %0, %1, %%g1
- wr %%g1, 0x0, %%psr
- nop; nop; nop
-" : "=r" (retval)
+ __asm__ __volatile__(
+ "rd %%psr, %0\n\t"
+ "nop; nop; nop;\n\t" /* Sun4m + Cypress + SMP bug */
+ "or %0, %1, %%g1\n\t"
+ "wr %%g1, 0x0, %%psr\n\t"
+ "nop; nop; nop\n\t"
+ : "=r" (retval)
: "i" (PSR_PIL)
: "g1", "memory");
/* Note: this is magic and the nop there is
really needed. */
- __asm__ __volatile__("
- mov %%o7, %%g4
- call ___f____xchg32
- nop
-" : "=&r" (ret)
+ __asm__ __volatile__(
+ "mov %%o7, %%g4\n\t"
+ "call ___f____xchg32\n\t"
+ " nop\n\t"
+ : "=&r" (ret)
: "0" (ret), "r" (ptr)
: "g3", "g4", "g7", "memory", "cc");
-/* $Id: uaccess.h,v 1.23 2001/09/24 03:51:39 davem Exp $
+/* $Id: uaccess.h,v 1.24 2001/10/30 04:32:24 davem Exp $
 * uaccess.h: User space memory access functions.
*
* Copyright (C) 1996 David S. Miller (davem@caip.rutgers.edu)
extern __inline__ __kernel_size_t __clear_user(void *addr, __kernel_size_t size)
{
__kernel_size_t ret;
- __asm__ __volatile__ ("
- .section __ex_table,#alloc
- .align 4
- .word 1f,3
- .previous
- mov %2, %%o1
-1: call __bzero
- mov %1, %%o0
- mov %%o0, %0
- " : "=r" (ret) : "r" (addr), "r" (size) :
+ __asm__ __volatile__ (
+ ".section __ex_table,#alloc\n\t"
+ ".align 4\n\t"
+ ".word 1f,3\n\t"
+ ".previous\n\t"
+ "mov %2, %%o1\n"
+ "1:\n\t"
+ "call __bzero\n\t"
+ " mov %1, %%o0\n\t"
+ "mov %%o0, %0\n"
+ : "=r" (ret) : "r" (addr), "r" (size) :
"o0", "o1", "o2", "o3", "o4", "o5", "o7",
"g1", "g2", "g3", "g4", "g5", "g7", "cc");
return ret;
int lines = bytes / (sizeof (long)) / 8;
do {
- __asm__ __volatile__("
- ldd [%0 + 0x00], %%g2
- ldd [%0 + 0x08], %%g4
- ldd [%0 + 0x10], %%o0
- ldd [%0 + 0x18], %%o2
- ldd [%1 + 0x00], %%o4
- ldd [%1 + 0x08], %%l0
- ldd [%1 + 0x10], %%l2
- ldd [%1 + 0x18], %%l4
- xor %%g2, %%o4, %%g2
- xor %%g3, %%o5, %%g3
- xor %%g4, %%l0, %%g4
- xor %%g5, %%l1, %%g5
- xor %%o0, %%l2, %%o0
- xor %%o1, %%l3, %%o1
- xor %%o2, %%l4, %%o2
- xor %%o3, %%l5, %%o3
- std %%g2, [%0 + 0x00]
- std %%g4, [%0 + 0x08]
- std %%o0, [%0 + 0x10]
- std %%o2, [%0 + 0x18]
- "
+ __asm__ __volatile__(
+ "ldd [%0 + 0x00], %%g2\n\t"
+ "ldd [%0 + 0x08], %%g4\n\t"
+ "ldd [%0 + 0x10], %%o0\n\t"
+ "ldd [%0 + 0x18], %%o2\n\t"
+ "ldd [%1 + 0x00], %%o4\n\t"
+ "ldd [%1 + 0x08], %%l0\n\t"
+ "ldd [%1 + 0x10], %%l2\n\t"
+ "ldd [%1 + 0x18], %%l4\n\t"
+ "xor %%g2, %%o4, %%g2\n\t"
+ "xor %%g3, %%o5, %%g3\n\t"
+ "xor %%g4, %%l0, %%g4\n\t"
+ "xor %%g5, %%l1, %%g5\n\t"
+ "xor %%o0, %%l2, %%o0\n\t"
+ "xor %%o1, %%l3, %%o1\n\t"
+ "xor %%o2, %%l4, %%o2\n\t"
+ "xor %%o3, %%l5, %%o3\n\t"
+ "std %%g2, [%0 + 0x00]\n\t"
+ "std %%g4, [%0 + 0x08]\n\t"
+ "std %%o0, [%0 + 0x10]\n\t"
+ "std %%o2, [%0 + 0x18]\n"
:
: "r" (p1), "r" (p2)
: "g2", "g3", "g4", "g5",
int lines = bytes / (sizeof (long)) / 8;
do {
- __asm__ __volatile__("
- ldd [%0 + 0x00], %%g2
- ldd [%0 + 0x08], %%g4
- ldd [%0 + 0x10], %%o0
- ldd [%0 + 0x18], %%o2
- ldd [%1 + 0x00], %%o4
- ldd [%1 + 0x08], %%l0
- ldd [%1 + 0x10], %%l2
- ldd [%1 + 0x18], %%l4
- xor %%g2, %%o4, %%g2
- xor %%g3, %%o5, %%g3
- ldd [%2 + 0x00], %%o4
- xor %%g4, %%l0, %%g4
- xor %%g5, %%l1, %%g5
- ldd [%2 + 0x08], %%l0
- xor %%o0, %%l2, %%o0
- xor %%o1, %%l3, %%o1
- ldd [%2 + 0x10], %%l2
- xor %%o2, %%l4, %%o2
- xor %%o3, %%l5, %%o3
- ldd [%2 + 0x18], %%l4
- xor %%g2, %%o4, %%g2
- xor %%g3, %%o5, %%g3
- xor %%g4, %%l0, %%g4
- xor %%g5, %%l1, %%g5
- xor %%o0, %%l2, %%o0
- xor %%o1, %%l3, %%o1
- xor %%o2, %%l4, %%o2
- xor %%o3, %%l5, %%o3
- std %%g2, [%0 + 0x00]
- std %%g4, [%0 + 0x08]
- std %%o0, [%0 + 0x10]
- std %%o2, [%0 + 0x18]
- "
+ __asm__ __volatile__(
+ "ldd [%0 + 0x00], %%g2\n\t"
+ "ldd [%0 + 0x08], %%g4\n\t"
+ "ldd [%0 + 0x10], %%o0\n\t"
+ "ldd [%0 + 0x18], %%o2\n\t"
+ "ldd [%1 + 0x00], %%o4\n\t"
+ "ldd [%1 + 0x08], %%l0\n\t"
+ "ldd [%1 + 0x10], %%l2\n\t"
+ "ldd [%1 + 0x18], %%l4\n\t"
+ "xor %%g2, %%o4, %%g2\n\t"
+ "xor %%g3, %%o5, %%g3\n\t"
+ "ldd [%2 + 0x00], %%o4\n\t"
+ "xor %%g4, %%l0, %%g4\n\t"
+ "xor %%g5, %%l1, %%g5\n\t"
+ "ldd [%2 + 0x08], %%l0\n\t"
+ "xor %%o0, %%l2, %%o0\n\t"
+ "xor %%o1, %%l3, %%o1\n\t"
+ "ldd [%2 + 0x10], %%l2\n\t"
+ "xor %%o2, %%l4, %%o2\n\t"
+ "xor %%o3, %%l5, %%o3\n\t"
+ "ldd [%2 + 0x18], %%l4\n\t"
+ "xor %%g2, %%o4, %%g2\n\t"
+ "xor %%g3, %%o5, %%g3\n\t"
+ "xor %%g4, %%l0, %%g4\n\t"
+ "xor %%g5, %%l1, %%g5\n\t"
+ "xor %%o0, %%l2, %%o0\n\t"
+ "xor %%o1, %%l3, %%o1\n\t"
+ "xor %%o2, %%l4, %%o2\n\t"
+ "xor %%o3, %%l5, %%o3\n\t"
+ "std %%g2, [%0 + 0x00]\n\t"
+ "std %%g4, [%0 + 0x08]\n\t"
+ "std %%o0, [%0 + 0x10]\n\t"
+ "std %%o2, [%0 + 0x18]\n"
:
: "r" (p1), "r" (p2), "r" (p3)
: "g2", "g3", "g4", "g5",
int lines = bytes / (sizeof (long)) / 8;
do {
- __asm__ __volatile__("
- ldd [%0 + 0x00], %%g2
- ldd [%0 + 0x08], %%g4
- ldd [%0 + 0x10], %%o0
- ldd [%0 + 0x18], %%o2
- ldd [%1 + 0x00], %%o4
- ldd [%1 + 0x08], %%l0
- ldd [%1 + 0x10], %%l2
- ldd [%1 + 0x18], %%l4
- xor %%g2, %%o4, %%g2
- xor %%g3, %%o5, %%g3
- ldd [%2 + 0x00], %%o4
- xor %%g4, %%l0, %%g4
- xor %%g5, %%l1, %%g5
- ldd [%2 + 0x08], %%l0
- xor %%o0, %%l2, %%o0
- xor %%o1, %%l3, %%o1
- ldd [%2 + 0x10], %%l2
- xor %%o2, %%l4, %%o2
- xor %%o3, %%l5, %%o3
- ldd [%2 + 0x18], %%l4
- xor %%g2, %%o4, %%g2
- xor %%g3, %%o5, %%g3
- ldd [%3 + 0x00], %%o4
- xor %%g4, %%l0, %%g4
- xor %%g5, %%l1, %%g5
- ldd [%3 + 0x08], %%l0
- xor %%o0, %%l2, %%o0
- xor %%o1, %%l3, %%o1
- ldd [%3 + 0x10], %%l2
- xor %%o2, %%l4, %%o2
- xor %%o3, %%l5, %%o3
- ldd [%3 + 0x18], %%l4
- xor %%g2, %%o4, %%g2
- xor %%g3, %%o5, %%g3
- xor %%g4, %%l0, %%g4
- xor %%g5, %%l1, %%g5
- xor %%o0, %%l2, %%o0
- xor %%o1, %%l3, %%o1
- xor %%o2, %%l4, %%o2
- xor %%o3, %%l5, %%o3
- std %%g2, [%0 + 0x00]
- std %%g4, [%0 + 0x08]
- std %%o0, [%0 + 0x10]
- std %%o2, [%0 + 0x18]
- "
+ __asm__ __volatile__(
+ "ldd [%0 + 0x00], %%g2\n\t"
+ "ldd [%0 + 0x08], %%g4\n\t"
+ "ldd [%0 + 0x10], %%o0\n\t"
+ "ldd [%0 + 0x18], %%o2\n\t"
+ "ldd [%1 + 0x00], %%o4\n\t"
+ "ldd [%1 + 0x08], %%l0\n\t"
+ "ldd [%1 + 0x10], %%l2\n\t"
+ "ldd [%1 + 0x18], %%l4\n\t"
+ "xor %%g2, %%o4, %%g2\n\t"
+ "xor %%g3, %%o5, %%g3\n\t"
+ "ldd [%2 + 0x00], %%o4\n\t"
+ "xor %%g4, %%l0, %%g4\n\t"
+ "xor %%g5, %%l1, %%g5\n\t"
+ "ldd [%2 + 0x08], %%l0\n\t"
+ "xor %%o0, %%l2, %%o0\n\t"
+ "xor %%o1, %%l3, %%o1\n\t"
+ "ldd [%2 + 0x10], %%l2\n\t"
+ "xor %%o2, %%l4, %%o2\n\t"
+ "xor %%o3, %%l5, %%o3\n\t"
+ "ldd [%2 + 0x18], %%l4\n\t"
+ "xor %%g2, %%o4, %%g2\n\t"
+ "xor %%g3, %%o5, %%g3\n\t"
+ "ldd [%3 + 0x00], %%o4\n\t"
+ "xor %%g4, %%l0, %%g4\n\t"
+ "xor %%g5, %%l1, %%g5\n\t"
+ "ldd [%3 + 0x08], %%l0\n\t"
+ "xor %%o0, %%l2, %%o0\n\t"
+ "xor %%o1, %%l3, %%o1\n\t"
+ "ldd [%3 + 0x10], %%l2\n\t"
+ "xor %%o2, %%l4, %%o2\n\t"
+ "xor %%o3, %%l5, %%o3\n\t"
+ "ldd [%3 + 0x18], %%l4\n\t"
+ "xor %%g2, %%o4, %%g2\n\t"
+ "xor %%g3, %%o5, %%g3\n\t"
+ "xor %%g4, %%l0, %%g4\n\t"
+ "xor %%g5, %%l1, %%g5\n\t"
+ "xor %%o0, %%l2, %%o0\n\t"
+ "xor %%o1, %%l3, %%o1\n\t"
+ "xor %%o2, %%l4, %%o2\n\t"
+ "xor %%o3, %%l5, %%o3\n\t"
+ "std %%g2, [%0 + 0x00]\n\t"
+ "std %%g4, [%0 + 0x08]\n\t"
+ "std %%o0, [%0 + 0x10]\n\t"
+ "std %%o2, [%0 + 0x18]\n"
:
: "r" (p1), "r" (p2), "r" (p3), "r" (p4)
: "g2", "g3", "g4", "g5",
int lines = bytes / (sizeof (long)) / 8;
do {
- __asm__ __volatile__("
- ldd [%0 + 0x00], %%g2
- ldd [%0 + 0x08], %%g4
- ldd [%0 + 0x10], %%o0
- ldd [%0 + 0x18], %%o2
- ldd [%1 + 0x00], %%o4
- ldd [%1 + 0x08], %%l0
- ldd [%1 + 0x10], %%l2
- ldd [%1 + 0x18], %%l4
- xor %%g2, %%o4, %%g2
- xor %%g3, %%o5, %%g3
- ldd [%2 + 0x00], %%o4
- xor %%g4, %%l0, %%g4
- xor %%g5, %%l1, %%g5
- ldd [%2 + 0x08], %%l0
- xor %%o0, %%l2, %%o0
- xor %%o1, %%l3, %%o1
- ldd [%2 + 0x10], %%l2
- xor %%o2, %%l4, %%o2
- xor %%o3, %%l5, %%o3
- ldd [%2 + 0x18], %%l4
- xor %%g2, %%o4, %%g2
- xor %%g3, %%o5, %%g3
- ldd [%3 + 0x00], %%o4
- xor %%g4, %%l0, %%g4
- xor %%g5, %%l1, %%g5
- ldd [%3 + 0x08], %%l0
- xor %%o0, %%l2, %%o0
- xor %%o1, %%l3, %%o1
- ldd [%3 + 0x10], %%l2
- xor %%o2, %%l4, %%o2
- xor %%o3, %%l5, %%o3
- ldd [%3 + 0x18], %%l4
- xor %%g2, %%o4, %%g2
- xor %%g3, %%o5, %%g3
- ldd [%4 + 0x00], %%o4
- xor %%g4, %%l0, %%g4
- xor %%g5, %%l1, %%g5
- ldd [%4 + 0x08], %%l0
- xor %%o0, %%l2, %%o0
- xor %%o1, %%l3, %%o1
- ldd [%4 + 0x10], %%l2
- xor %%o2, %%l4, %%o2
- xor %%o3, %%l5, %%o3
- ldd [%4 + 0x18], %%l4
- xor %%g2, %%o4, %%g2
- xor %%g3, %%o5, %%g3
- xor %%g4, %%l0, %%g4
- xor %%g5, %%l1, %%g5
- xor %%o0, %%l2, %%o0
- xor %%o1, %%l3, %%o1
- xor %%o2, %%l4, %%o2
- xor %%o3, %%l5, %%o3
- std %%g2, [%0 + 0x00]
- std %%g4, [%0 + 0x08]
- std %%o0, [%0 + 0x10]
- std %%o2, [%0 + 0x18]
- "
+ __asm__ __volatile__(
+ "ldd [%0 + 0x00], %%g2\n\t"
+ "ldd [%0 + 0x08], %%g4\n\t"
+ "ldd [%0 + 0x10], %%o0\n\t"
+ "ldd [%0 + 0x18], %%o2\n\t"
+ "ldd [%1 + 0x00], %%o4\n\t"
+ "ldd [%1 + 0x08], %%l0\n\t"
+ "ldd [%1 + 0x10], %%l2\n\t"
+ "ldd [%1 + 0x18], %%l4\n\t"
+ "xor %%g2, %%o4, %%g2\n\t"
+ "xor %%g3, %%o5, %%g3\n\t"
+ "ldd [%2 + 0x00], %%o4\n\t"
+ "xor %%g4, %%l0, %%g4\n\t"
+ "xor %%g5, %%l1, %%g5\n\t"
+ "ldd [%2 + 0x08], %%l0\n\t"
+ "xor %%o0, %%l2, %%o0\n\t"
+ "xor %%o1, %%l3, %%o1\n\t"
+ "ldd [%2 + 0x10], %%l2\n\t"
+ "xor %%o2, %%l4, %%o2\n\t"
+ "xor %%o3, %%l5, %%o3\n\t"
+ "ldd [%2 + 0x18], %%l4\n\t"
+ "xor %%g2, %%o4, %%g2\n\t"
+ "xor %%g3, %%o5, %%g3\n\t"
+ "ldd [%3 + 0x00], %%o4\n\t"
+ "xor %%g4, %%l0, %%g4\n\t"
+ "xor %%g5, %%l1, %%g5\n\t"
+ "ldd [%3 + 0x08], %%l0\n\t"
+ "xor %%o0, %%l2, %%o0\n\t"
+ "xor %%o1, %%l3, %%o1\n\t"
+ "ldd [%3 + 0x10], %%l2\n\t"
+ "xor %%o2, %%l4, %%o2\n\t"
+ "xor %%o3, %%l5, %%o3\n\t"
+ "ldd [%3 + 0x18], %%l4\n\t"
+ "xor %%g2, %%o4, %%g2\n\t"
+ "xor %%g3, %%o5, %%g3\n\t"
+ "ldd [%4 + 0x00], %%o4\n\t"
+ "xor %%g4, %%l0, %%g4\n\t"
+ "xor %%g5, %%l1, %%g5\n\t"
+ "ldd [%4 + 0x08], %%l0\n\t"
+ "xor %%o0, %%l2, %%o0\n\t"
+ "xor %%o1, %%l3, %%o1\n\t"
+ "ldd [%4 + 0x10], %%l2\n\t"
+ "xor %%o2, %%l4, %%o2\n\t"
+ "xor %%o3, %%l5, %%o3\n\t"
+ "ldd [%4 + 0x18], %%l4\n\t"
+ "xor %%g2, %%o4, %%g2\n\t"
+ "xor %%g3, %%o5, %%g3\n\t"
+ "xor %%g4, %%l0, %%g4\n\t"
+ "xor %%g5, %%l1, %%g5\n\t"
+ "xor %%o0, %%l2, %%o0\n\t"
+ "xor %%o1, %%l3, %%o1\n\t"
+ "xor %%o2, %%l4, %%o2\n\t"
+ "xor %%o3, %%l5, %%o3\n\t"
+ "std %%g2, [%0 + 0x00]\n\t"
+ "std %%g4, [%0 + 0x08]\n\t"
+ "std %%o0, [%0 + 0x10]\n\t"
+ "std %%o2, [%0 + 0x18]\n"
:
: "r" (p1), "r" (p2), "r" (p3), "r" (p4), "r" (p5)
: "g2", "g3", "g4", "g5",
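The unrolled `ldd`/`xor`/`std` sequences above each process 8 longs per loop iteration, XORing one or more source blocks into the destination. What the two-source variant computes, restated in portable C (a sketch of the effect, not the sparc code):

```c
#include <assert.h>

/* XOR `bytes` bytes of p2 into p1, 8 longs at a time as in the
 * unrolled sparc loops above. */
static void sketch_xor_2(unsigned long bytes, unsigned long *p1,
                         const unsigned long *p2)
{
	int lines = bytes / sizeof(long) / 8;

	while (lines--) {
		int i;
		for (i = 0; i < 8; i++)
			p1[i] ^= p2[i];
		p1 += 8;
		p2 += 8;
	}
}
```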
-/* $Id: floppy.h,v 1.31 2001/08/22 17:46:31 davem Exp $
+/* $Id: floppy.h,v 1.32 2001/10/26 17:59:36 davem Exp $
* asm-sparc64/floppy.h: Sparc specific parts of the Floppy driver.
*
* Copyright (C) 1996 David S. Miller (davem@caip.rutgers.edu)
-/* $Id: pgalloc.h,v 1.26 2001/10/18 09:06:37 davem Exp $ */
+/* $Id: pgalloc.h,v 1.29 2001/10/20 12:38:51 davem Exp $ */
#ifndef _SPARC64_PGALLOC_H
#define _SPARC64_PGALLOC_H
extern void __flush_icache_page(unsigned long);
extern void flush_dcache_page_impl(struct page *page);
#ifdef CONFIG_SMP
-extern void smp_flush_dcache_page_impl(struct page *page);
+extern void smp_flush_dcache_page_impl(struct page *page, int cpu);
#else
-#define smp_flush_dcache_page_impl flush_dcache_page_impl
+#define smp_flush_dcache_page_impl(page,cpu) flush_dcache_page_impl(page)
#endif
extern void flush_dcache_page(struct page *page);
-/* $Id: pgtable.h,v 1.147 2001/10/17 18:26:58 davem Exp $
+/* $Id: pgtable.h,v 1.151 2001/10/25 18:48:03 davem Exp $
* pgtable.h: SpitFire page table operations.
*
* Copyright 1996,1997 David S. Miller (davem@caip.rutgers.edu)
#ifndef __ASSEMBLY__
-#define PG_dcache_dirty PG_arch_1
-
-#define dcache_dirty_cpu(page) \
- (((page)->flags >> 24) & (NR_CPUS - 1UL))
-
-#define set_dcache_dirty(PAGE) \
-do { unsigned long mask = smp_processor_id(); \
- unsigned long non_cpu_bits = (1UL << 24UL) - 1UL; \
- mask = (mask << 24) | (1UL << PG_dcache_dirty); \
- __asm__ __volatile__("1:\n\t" \
- "ldx [%2], %%g7\n\t" \
- "and %%g7, %1, %%g5\n\t" \
- "or %%g5, %0, %%g5\n\t" \
- "casx [%2], %%g7, %%g5\n\t" \
- "cmp %%g7, %%g5\n\t" \
- "bne,pn %%xcc, 1b\n\t" \
- " nop" \
- : /* no outputs */ \
- : "r" (mask), "r" (non_cpu_bits), "r" (&(PAGE)->flags) \
- : "g5", "g7"); \
-} while (0)
-
-#define clear_dcache_dirty(PAGE) \
- clear_bit(PG_dcache_dirty, &(PAGE)->flags)
-
/* Certain architectures need to do special things when pte's
* within a page table are directly modified. Thus, the following
* hook is made available.
#define VMALLOC_START 0x0000000140000000UL
#define VMALLOC_VMADDR(x) ((unsigned long)(x))
#define VMALLOC_END 0x0000000200000000UL
+#define LOW_OBP_ADDRESS 0xf0000000UL
+#define HI_OBP_ADDRESS 0x100000000UL
#define pte_ERROR(e) __builtin_trap()
#define pmd_ERROR(e) __builtin_trap()
#define pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define swp_entry_to_pte(x) ((pte_t) { (x).val })
+extern unsigned long prom_virt_to_phys(unsigned long, int *);
+#define LOW_OBP_ADDRESS 0xf0000000UL
+#define HI_OBP_ADDRESS 0x100000000UL
+
extern __inline__ unsigned long
sun4u_get_pte (unsigned long addr)
{
if (addr >= PAGE_OFFSET)
return addr & _PAGE_PADDR;
+ if ((addr >= LOW_OBP_ADDRESS) && (addr < HI_OBP_ADDRESS))
+ return prom_virt_to_phys(addr, 0);
pgdp = pgd_offset_k (addr);
pmdp = pmd_offset (pgdp, addr);
ptep = pte_offset (pmdp, addr);
#include <linux/types.h> /* for "__kernel_caddr_t" et al */
#include <linux/socket.h> /* for "struct sockaddr" et al */
-/* Standard interface flags. */
+/* Standard interface flags (netdevice->flags). */
#define IFF_UP 0x1 /* interface is up */
#define IFF_BROADCAST 0x2 /* broadcast address valid */
#define IFF_DEBUG 0x4 /* turn on debugging */
#define IFF_AUTOMEDIA 0x4000 /* auto media select active */
#define IFF_DYNAMIC 0x8000 /* dialup device with changing addresses*/
+/* Private (from user) interface flags (netdevice->priv_flags). */
+#define IFF_802_1Q_VLAN 0x1 /* 802.1Q VLAN device. */
+
/*
* Device mapping structure. I'd just gone off and designed a
* beautiful scheme using only loadable modules with arguments
#define ARPHRD_ATM 19 /* ATM */
#define ARPHRD_METRICOM 23 /* Metricom STRIP (new IANA id) */
#define ARPHRD_IEEE1394 24 /* IEEE 1394 IPv4 - RFC 2734 */
+#define ARPHRD_EUI64 27 /* EUI-64 */
/* Dummy types for non ARP hardware */
#define ARPHRD_SLIP 256
#define ETH_P_RARP 0x8035 /* Reverse Addr Res packet */
#define ETH_P_ATALK 0x809B /* Appletalk DDP */
#define ETH_P_AARP 0x80F3 /* Appletalk AARP */
+#define ETH_P_8021Q 0x8100 /* 802.1Q VLAN Extended Header */
#define ETH_P_IPX 0x8137 /* IPX over DIX */
#define ETH_P_IPV6 0x86DD /* IPv6 over bluebook */
#define ETH_P_PPP_DISC 0x8863 /* PPPoE discovery messages */
--- /dev/null
+/*
+ * VLAN An implementation of 802.1Q VLAN tagging.
+ *
+ * Authors: Ben Greear <greearb@candelatech.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#ifndef _LINUX_IF_VLAN_H_
+#define _LINUX_IF_VLAN_H_
+
+#ifdef __KERNEL__
+
+/* externally defined structs */
+struct vlan_group;
+struct net_device;
+struct sk_buff;
+struct packet_type;
+struct vlan_collection;
+struct vlan_dev_info;
+
+#include <linux/proc_fs.h> /* for proc_dir_entry */
+#include <linux/netdevice.h>
+
+#define VLAN_HLEN 4 /* The additional bytes (on top of the Ethernet header)
+ * that VLAN requires.
+ */
+#define VLAN_ETH_ALEN 6 /* Octets in one ethernet addr */
+#define VLAN_ETH_HLEN 18 /* Total octets in header. */
+#define VLAN_ETH_ZLEN 64 /* Min. octets in frame sans FCS */
+
+/*
+ * According to 802.3ac, the packet can be 4 bytes longer. --Klika Jan
+ */
+#define VLAN_ETH_DATA_LEN 1500 /* Max. octets in payload */
+#define VLAN_ETH_FRAME_LEN 1518 /* Max. octets in frame sans FCS */
+
+struct vlan_ethhdr {
+ unsigned char h_dest[ETH_ALEN]; /* destination eth addr */
+ unsigned char h_source[ETH_ALEN]; /* source ether addr */
+ unsigned short h_vlan_proto; /* Should always be 0x8100 */
+ unsigned short h_vlan_TCI; /* Encapsulates priority and VLAN ID */
+ unsigned short h_vlan_encapsulated_proto; /* packet type ID field (or len) */
+};
+
+struct vlan_hdr {
+ unsigned short h_vlan_TCI; /* Encapsulates priority and VLAN ID */
+ unsigned short h_vlan_encapsulated_proto; /* packet type ID field (or len) */
+};
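The `h_vlan_TCI` field packs three things into 16 bits. Hypothetical helpers (not part of the patch) splitting a TCI, taken here in host byte order, into the 802.1Q priority, CFI, and VLAN ID fields:

```c
#include <assert.h>

static unsigned short vlan_tci_vid(unsigned short tci)
{
	return tci & 0x0FFF;        /* low 12 bits: VLAN ID */
}

static unsigned char vlan_tci_prio(unsigned short tci)
{
	return (tci >> 13) & 0x7;   /* top 3 bits: priority */
}

static unsigned char vlan_tci_cfi(unsigned short tci)
{
	return (tci >> 12) & 0x1;   /* bit 12: canonical format indicator */
}
```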
+
+/* Find a VLAN device by the MAC address of its Ethernet device, and
+ * its VLAN ID. The default configuration is for VLANs to be scoped
+ * box-wide, so the MAC will be ignored. The MAC will only be
+ * looked at if we are configured to have a separate set of VLANs per
+ * each MAC-addressable interface. Note that this latter option does
+ * NOT follow the spec for VLANs, but may be useful for doing very
+ * large quantities of VLAN MUX/DEMUX onto Frame Relay or ATM PVCs.
+ */
+struct net_device *find_802_1Q_vlan_dev(struct net_device* real_dev,
+ unsigned short VID); /* vlan.c */
+
+/* found in af_inet.c */
+extern int (*vlan_ioctl_hook)(unsigned long arg);
+
+/* found in vlan_dev.c */
+struct net_device_stats* vlan_dev_get_stats(struct net_device* dev);
+int vlan_dev_rebuild_header(struct sk_buff *skb);
+int vlan_skb_recv(struct sk_buff *skb, struct net_device *dev,
+ struct packet_type* ptype);
+int vlan_dev_hard_header(struct sk_buff *skb, struct net_device *dev,
+ unsigned short type, void *daddr, void *saddr,
+ unsigned len);
+int vlan_dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev);
+int vlan_dev_change_mtu(struct net_device *dev, int new_mtu);
+int vlan_dev_set_mac_address(struct net_device *dev, void* addr);
+int vlan_dev_open(struct net_device* dev);
+int vlan_dev_stop(struct net_device* dev);
+int vlan_dev_init(struct net_device* dev);
+void vlan_dev_destruct(struct net_device* dev);
+void vlan_dev_copy_and_sum(struct sk_buff *dest, unsigned char *src,
+ int length, int base);
+int vlan_dev_set_ingress_priority(char* dev_name, __u32 skb_prio, short vlan_prio);
+int vlan_dev_set_egress_priority(char* dev_name, __u32 skb_prio, short vlan_prio);
+int vlan_dev_set_vlan_flag(char* dev_name, __u32 flag, short flag_val);
+
+/* VLAN multicast stuff */
+/* Delete all of the MC list entries from this vlan device. Also deals
+ * with the underlying device...
+ */
+void vlan_flush_mc_list(struct net_device* dev);
+/* copy the mc_list into the vlan_info structure. */
+void vlan_copy_mc_list(struct dev_mc_list* mc_list, struct vlan_dev_info* vlan_info);
+/** dmi is a single entry (one node) in a dev_mc_list; mc_list is
+ * an entire list, which we'll iterate through.
+ */
+int vlan_should_add_mc(struct dev_mc_list *dmi, struct dev_mc_list *mc_list);
+/** Taken from Gleb + Lennert's VLAN code, and modified... */
+void vlan_dev_set_multicast_list(struct net_device *vlan_dev);
+
+int vlan_collection_add_vlan(struct vlan_collection* vc, unsigned short vlan_id,
+ unsigned short flags);
+int vlan_collection_remove_vlan(struct vlan_collection* vc,
+ struct net_device* vlan_dev);
+int vlan_collection_remove_vlan_id(struct vlan_collection* vc, unsigned short vlan_id);
+
+/* found in vlan.c */
+/* Our listing of VLAN group(s) */
+extern struct vlan_group* p802_1Q_vlan_list;
+
+#define VLAN_NAME "vlan"
+
+/* If this changes, the algorithm will have to be reworked, because it
+ * depends on completely exhausting the VLAN identifier space. Thus
+ * it gives constant-time look-up, but in many cases it wastes memory.
+ */
+#define VLAN_GROUP_ARRAY_LEN 4096
+
+struct vlan_group {
+ int real_dev_ifindex; /* The ifindex of the ethernet(like) device the vlan is attached to. */
+ struct net_device *vlan_devices[VLAN_GROUP_ARRAY_LEN];
+
+ struct vlan_group *next; /* the next in the list */
+};
+
+struct vlan_priority_tci_mapping {
+ unsigned long priority;
+ unsigned short vlan_qos; /* This should be shifted when first set, so we only do it
+ * at provisioning time.
+ * ((skb->priority << 13) & 0xE000)
+ */
+ struct vlan_priority_tci_mapping *next;
+};
+
+/* Holds information that makes sense if this device is a VLAN device. */
+struct vlan_dev_info {
+ /** This will be the mapping that correlates skb->priority to
+ * 3 bits of VLAN QOS tags...
+ */
+ unsigned long ingress_priority_map[8];
+ struct vlan_priority_tci_mapping *egress_priority_map[16]; /* hash table */
+
+ unsigned short vlan_id; /* The VLAN Identifier for this interface. */
+ unsigned short flags; /* (1 << 0) re_order_header This option will cause the
+ * VLAN code to move around the ethernet header on
+ * ingress to make the skb look **exactly** like it
+ * came in from an ethernet port. This destroys some of
+ * the VLAN information in the skb, but it fixes programs
+ * like DHCP that use packet-filtering and don't understand
+ * 802.1Q
+ */
+ struct dev_mc_list *old_mc_list; /* old multi-cast list for the VLAN interface..
+ * we save this so we can tell what changes were
+ * made, in order to feed the right changes down
+ * to the real hardware...
+ */
+ int old_allmulti; /* similar to above. */
+ int old_promiscuity; /* similar to above. */
+ struct net_device *real_dev; /* the underlying device/interface */
+ struct proc_dir_entry *dent; /* Holds the proc data */
+ unsigned long cnt_inc_headroom_on_tx; /* How many times did we have to grow the skb on TX. */
+ unsigned long cnt_encap_on_xmit; /* How many times did we have to encapsulate the skb on TX. */
+ struct net_device_stats dev_stats; /* Device stats (rx-bytes, tx-pkts, etc...) */
+};
+
+#define VLAN_DEV_INFO(x) ((struct vlan_dev_info *)(x->priv))
+
+/* inline functions */
+
+/* Used in vlan_skb_recv */
+static inline struct sk_buff *vlan_check_reorder_header(struct sk_buff *skb)
+{
+ if (VLAN_DEV_INFO(skb->dev)->flags & 1) {
+ skb = skb_share_check(skb, GFP_ATOMIC);
+ if (skb) {
+ /* Lifted from Gleb's VLAN code... */
+ memmove(skb->data - ETH_HLEN,
+ skb->data - VLAN_ETH_HLEN, 12);
+ skb->mac.raw += VLAN_HLEN;
+ }
+ }
+
+ return skb;
+}
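The memmove in vlan_check_reorder_header slides the two 6-byte MAC addresses up over the 4-byte tag, so the frame looks as if it arrived untagged. A standalone sketch of the same move on a plain buffer (offsets assume the ETH_HLEN of 14 and the VLAN_ETH_HLEN of 18 defined above):

```c
#include <assert.h>
#include <string.h>

/* bytes 0..11: dest + src MAC; 12..15: 802.1Q tag; 16..17: type.
 * After the move, a normal 14-byte Ethernet header begins at
 * frame + 4 (VLAN_HLEN). */
static unsigned char *sketch_reorder_header(unsigned char *frame)
{
	memmove(frame + 4, frame, 12);
	return frame + 4; /* start of the untagged-looking header */
}
```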
+
+static inline unsigned short vlan_dev_get_egress_qos_mask(struct net_device* dev,
+ struct sk_buff* skb)
+{
+ struct vlan_priority_tci_mapping *mp =
+ VLAN_DEV_INFO(dev)->egress_priority_map[(skb->priority & 0xF)];
+
+ while (mp) {
+ if (mp->priority == skb->priority) {
+ return mp->vlan_qos; /* This should already be shifted to mask
+ * correctly with the VLAN's TCI
+ */
+ }
+ mp = mp->next;
+ }
+ return 0;
+}
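Because `vlan_qos` is stored pre-shifted, per the `((skb->priority << 13) & 0xE000)` comment above, the transmit path only has to OR it with the 12-bit VLAN ID to build a TCI. A sketch of that combination (the helper name is an illustration, not from the patch):

```c
#include <assert.h>

static unsigned short sketch_build_tci(unsigned short vid,
                                       unsigned short vlan_qos)
{
	/* vlan_qos was shifted at provisioning time:
	 * ((skb->priority << 13) & 0xE000) */
	return (unsigned short)((vid & 0x0FFF) | vlan_qos);
}
```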
+
+static inline int vlan_dmi_equals(struct dev_mc_list *dmi1,
+ struct dev_mc_list *dmi2)
+{
+ return ((dmi1->dmi_addrlen == dmi2->dmi_addrlen) &&
+ (memcmp(dmi1->dmi_addr, dmi2->dmi_addr, dmi1->dmi_addrlen) == 0));
+}
+
+static inline void vlan_destroy_mc_list(struct dev_mc_list *mc_list)
+{
+ struct dev_mc_list *dmi = mc_list;
+ struct dev_mc_list *next;
+
+ while(dmi) {
+ next = dmi->next;
+ kfree(dmi);
+ dmi = next;
+ }
+}
+
+#endif /* __KERNEL__ */
+
+/* VLAN IOCTLs are found in sockios.h */
+
+/* Passed in vlan_ioctl_args structure to determine behaviour. */
+enum vlan_ioctl_cmds {
+ ADD_VLAN_CMD,
+ DEL_VLAN_CMD,
+ SET_VLAN_INGRESS_PRIORITY_CMD,
+ SET_VLAN_EGRESS_PRIORITY_CMD,
+ GET_VLAN_INGRESS_PRIORITY_CMD,
+ GET_VLAN_EGRESS_PRIORITY_CMD,
+ SET_VLAN_NAME_TYPE_CMD,
+ SET_VLAN_FLAG_CMD
+};
+
+enum vlan_name_types {
+ VLAN_NAME_TYPE_PLUS_VID, /* Name will look like: vlan0005 */
+ VLAN_NAME_TYPE_RAW_PLUS_VID, /* Name will look like: eth1.0005 */
+ VLAN_NAME_TYPE_PLUS_VID_NO_PAD, /* Name will look like: vlan5 */
+ VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD, /* Name will look like: eth0.5 */
+ VLAN_NAME_TYPE_HIGHEST
+};
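The four name types above differ only in prefix and zero-padding. A sketch of the formatting each would produce, with format strings inferred from the comments (an assumption, not the kernel's code):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* type values follow the enum order above:
 * 0 vlan0005, 1 eth1.0005, 2 vlan5, 3 eth0.5 */
static void sketch_vlan_name(int type, const char *real_dev, int vid,
                             char *buf, size_t len)
{
	switch (type) {
	case 0: snprintf(buf, len, "vlan%.4i", vid); break;
	case 1: snprintf(buf, len, "%s.%.4i", real_dev, vid); break;
	case 2: snprintf(buf, len, "vlan%i", vid); break;
	case 3: snprintf(buf, len, "%s.%i", real_dev, vid); break;
	}
}
```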
+
+struct vlan_ioctl_args {
+ int cmd; /* Should be one of the vlan_ioctl_cmds enum above. */
+ char device1[24];
+
+ union {
+ char device2[24];
+ int VID;
+ unsigned int skb_priority;
+ unsigned int name_type;
+ unsigned int bind_type;
+ unsigned int flag; /* Matches vlan_dev_info flags */
+ } u;
+
+ short vlan_qos;
+};
+
+#endif /* !(_LINUX_IF_VLAN_H_) */
struct vm_operations_struct {
void (*open)(struct vm_area_struct * area);
void (*close)(struct vm_area_struct * area);
- struct page * (*nopage)(struct vm_area_struct * area, unsigned long address);
+ struct page * (*nopage)(struct vm_area_struct * area, unsigned long address, int unused);
};
/*
/*
* There is only one 'core' page-freeing function.
*/
-extern void FASTCALL(free_lru_page(struct page *));
extern void FASTCALL(__free_pages(struct page *page, unsigned int order));
extern void FASTCALL(free_pages(unsigned long addr, unsigned int order));
extern void clear_page_tables(struct mm_struct *, unsigned long, int);
extern int fail_writepage(struct page *);
-struct page * shmem_nopage(struct vm_area_struct * vma, unsigned long address);
+struct page * shmem_nopage(struct vm_area_struct * vma, unsigned long address, int unused);
struct file *shmem_file_setup(char * name, loff_t size);
extern void shmem_lock(struct file * file, int lock);
extern int shmem_zero_setup(struct vm_area_struct *);
/* generic vm_area_ops exported for stackable file systems */
extern int filemap_sync(struct vm_area_struct *, unsigned long, size_t, unsigned int);
-extern struct page *filemap_nopage(struct vm_area_struct *, unsigned long);
+extern struct page *filemap_nopage(struct vm_area_struct *, unsigned long, int);
/*
* GFP bitmasks..
{
struct hh_cache *hh_next; /* Next entry */
atomic_t hh_refcnt; /* number of users */
- unsigned short hh_type; /* protocol identifier, f.e ETH_P_IP */
	unsigned short hh_type; /* protocol identifier, f.e ETH_P_IP
+ * NOTE: For VLANs, this will be the
+ * encapsulated type. --BLG
+ */
int hh_len; /* length of header */
int (*hh_output)(struct sk_buff *skb);
rwlock_t hh_lock;
unsigned short flags; /* interface flags (a la BSD) */
unsigned short gflags;
+ unsigned short priv_flags; /* Like 'flags' but invisible to userspace. */
+ unsigned short unused_alignment_fixer; /* Because we need priv_flags,
+ * and we want to be 32-bit aligned.
+ */
+
unsigned mtu; /* interface MTU value */
unsigned short type; /* interface hardware type */
unsigned short hard_header_len; /* hardware hdr length */
#include <linux/netfilter_ipv4/ip_conntrack_ftp.h>
+#if defined(CONFIG_IP_NF_IRC) || defined(CONFIG_IP_NF_IRC_MODULE)
+#include <linux/netfilter_ipv4/ip_conntrack_irc.h>
+#endif
+
struct ip_conntrack
{
/* Usage count in here is 1 for hash table/destruct timer, 1 per skb,
union {
struct ip_ct_ftp ct_ftp_info;
+#if defined(CONFIG_IP_NF_IRC) || defined(CONFIG_IP_NF_IRC_MODULE)
+ struct ip_ct_irc ct_irc_info;
+#endif
} help;
#ifdef CONFIG_IP_NF_NAT_NEEDED
--- /dev/null
+/* IRC extension for IP connection tracking.
+ * (C) 2000 by Harald Welte <laforge@gnumonks.org>
+ * based on RR's ip_conntrack_ftp.h
+ *
+ * ip_conntrack_irc.h,v 1.6 2000/11/07 18:26:42 laforge Exp
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ *
+ */
+#ifndef _IP_CONNTRACK_IRC_H
+#define _IP_CONNTRACK_IRC_H
+
+#ifndef __KERNEL__
+#error Only in kernel.
+#endif
+
+#include <linux/netfilter_ipv4/lockhelp.h>
+
+#define IP_CONNTR_IRC 2
+
+struct dccproto {
+ char* match;
+ int matchlen;
+};
+
+/* Protects irc part of conntracks */
+DECLARE_LOCK_EXTERN(ip_irc_lock);
+
+/* We record seq number and length of irc ip/port text here: all in
+ host order. */
+struct ip_ct_irc
+{
+ /* This tells NAT that this is an IRC connection */
+ int is_irc;
+ /* sequence number where address part of DCC command begins */
+ u_int32_t seq;
+ /* 0 means not found yet */
+ u_int32_t len;
+ /* Port that was to be used */
+ u_int16_t port;
+};
+
+#endif /* _IP_CONNTRACK_IRC_H */
--- /dev/null
+#ifndef _IPT_LENGTH_H
+#define _IPT_LENGTH_H
+
+struct ipt_length_info {
+ u_int16_t min, max;
+ u_int8_t invert;
+};
+
+#endif /*_IPT_LENGTH_H*/
--- /dev/null
+/* IP tables module for matching the value of the TTL
+ * (C) 2000 by Harald Welte <laforge@gnumonks.org> */
+
+#ifndef _IPT_TTL_H
+#define _IPT_TTL_H
+
+enum {
+ IPT_TTL_EQ = 0, /* equals */
+ IPT_TTL_NE, /* not equals */
+ IPT_TTL_LT, /* less than */
+ IPT_TTL_GT, /* greater than */
+};
+
+
+struct ipt_ttl_info {
+ u_int8_t mode;
+ u_int8_t ttl;
+};
+
+
+#endif
#define PAGE_CACHE_ALIGN(addr) (((addr)+PAGE_CACHE_SIZE-1)&PAGE_CACHE_MASK)
#define page_cache_get(x) get_page(x)
-#define page_cache_release(x) free_lru_page(x)
+extern void FASTCALL(page_cache_release(struct page *));
static inline struct page *page_cache_alloc(struct address_space *x)
{
#define PCI_AGP_COMMAND_64BIT 0x0020 /* Allow processing of 64-bit addresses */
#define PCI_AGP_COMMAND_FW 0x0010 /* Force FW transfers */
#define PCI_AGP_COMMAND_RATE4 0x0004 /* Use 4x rate */
-#define PCI_AGP_COMMAND_RATE2 0x0002 /* Use 4x rate */
-#define PCI_AGP_COMMAND_RATE1 0x0001 /* Use 4x rate */
+#define PCI_AGP_COMMAND_RATE2 0x0002 /* Use 2x rate */
+#define PCI_AGP_COMMAND_RATE1 0x0001 /* Use 1x rate */
#define PCI_AGP_SIZEOF 12
/* Slot Identification */
#define RFALSE( cond, format, args... ) do {;} while( 0 )
#endif
+#define CONSTF __attribute__( ( const ) )
/*
* Disk Data Structures
*/
#define REISERFS_SUPER_MAGIC_STRING "ReIsErFs"
#define REISER2FS_SUPER_MAGIC_STRING "ReIsEr2Fs"
-static inline int is_reiserfs_magic_string (struct reiserfs_super_block * rs)
+extern char reiserfs_super_magic_string[];
+extern char reiser2fs_super_magic_string[];
+
+static inline int is_reiserfs_magic_string (const struct reiserfs_super_block * rs)
{
- return (!strncmp (rs->s_magic, REISERFS_SUPER_MAGIC_STRING,
- strlen ( REISERFS_SUPER_MAGIC_STRING)) ||
- !strncmp (rs->s_magic, REISER2FS_SUPER_MAGIC_STRING,
- strlen ( REISER2FS_SUPER_MAGIC_STRING)));
+ return (!strncmp (rs->s_magic, reiserfs_super_magic_string,
+ strlen ( reiserfs_super_magic_string)) ||
+ !strncmp (rs->s_magic, reiser2fs_super_magic_string,
+ strlen ( reiser2fs_super_magic_string)));
}
/* ReiserFS leaves the first 64k unused,
*/
#define MIN_PACK_ON_CLOSE 512
-/* the defines below say, that if file size is >=
- DIRECT_TAIL_SUPPRESSION_SIZE * blocksize, then if tail is longer
- than MAX_BYTES_SUPPRESS_DIRECT_TAIL, it will be stored in
- unformatted node */
-#define DIRECT_TAIL_SUPPRESSION_SIZE 1024
-#define MAX_BYTES_SUPPRESS_DIRECT_TAIL 1024
-
-#if 0
-
-//
-#define mark_file_with_tail(inode,offset) \
-{\
-inode->u.reiserfs_i.i_has_tail = 1;\
-}
-
-#define mark_file_without_tail(inode) \
-{\
-inode->u.reiserfs_i.i_has_tail = 0;\
-}
-
-#endif
-
// this says about version of all items (but stat data) the object
// consists of
#define inode_items_version(inode) ((inode)->u.reiserfs_i.i_version)
-/* We store tail in unformatted node if it is too big to fit into a
- formatted node or if DIRECT_TAIL_SUPPRESSION_SIZE,
- MAX_BYTES_SUPPRESS_DIRECT_TAIL and file size say that. */
-/* #define STORE_TAIL_IN_UNFM(n_file_size,n_tail_size,n_block_size) \ */
-/* ( ((n_tail_size) > MAX_DIRECT_ITEM_LEN(n_block_size)) || \ */
-/* ( ( (n_file_size) >= (n_block_size) * DIRECT_TAIL_SUPPRESSION_SIZE ) && \ */
-/* ( (n_tail_size) >= MAX_BYTES_SUPPRESS_DIRECT_TAIL ) ) ) */
-
/* This is an aggressive tail suppression policy, I am hoping it
improves our benchmarks. The principle behind it is that
percentage space saving is what matters, not absolute space
#define put_ih_item_len(ih, val) do { (ih)->ih_item_len = cpu_to_le16(val); } while (0)
-// FIXME: now would that work for other than i386 archs
#define unreachable_item(ih) (ih_version(ih) & (1 << 15))
#define get_ih_free_space(ih) (ih_version (ih) == ITEM_VERSION_2 ? 0 : ih_free_space (ih))
//
// here are conversion routines
//
+static inline int uniqueness2type (__u32 uniqueness) CONSTF;
static inline int uniqueness2type (__u32 uniqueness)
{
switch (uniqueness) {
return TYPE_ANY;
}
+static inline __u32 type2uniqueness (int type) CONSTF;
static inline __u32 type2uniqueness (int type)
{
switch (type) {
// there is no way to get version of object from key, so, provide
// version to these defines
//
-static inline loff_t le_key_k_offset (int version, struct key * key)
+static inline loff_t le_key_k_offset (int version, const struct key * key)
{
return (version == ITEM_VERSION_1) ?
le32_to_cpu( key->u.k_offset_v1.k_offset ) :
offset_v2_k_offset( &(key->u.k_offset_v2) );
}
-static inline loff_t le_ih_k_offset (struct item_head * ih)
+
+static inline loff_t le_ih_k_offset (const struct item_head * ih)
{
return le_key_k_offset (ih_version (ih), &(ih->ih_key));
}
-
-static inline loff_t le_key_k_type (int version, struct key * key)
+static inline loff_t le_key_k_type (int version, const struct key * key)
{
return (version == ITEM_VERSION_1) ?
uniqueness2type( le32_to_cpu( key->u.k_offset_v1.k_uniqueness)) :
offset_v2_k_type( &(key->u.k_offset_v2) );
}
-static inline loff_t le_ih_k_type (struct item_head * ih)
+
+static inline loff_t le_ih_k_type (const struct item_head * ih)
{
return le_key_k_type (ih_version (ih), &(ih->ih_key));
}
//
// key is pointer to cpu key, result is cpu
//
-static inline loff_t cpu_key_k_offset (struct cpu_key * key)
+static inline loff_t cpu_key_k_offset (const struct cpu_key * key)
{
return (key->version == ITEM_VERSION_1) ?
key->on_disk_key.u.k_offset_v1.k_offset :
key->on_disk_key.u.k_offset_v2.k_offset;
}
-static inline loff_t cpu_key_k_type (struct cpu_key * key)
+static inline loff_t cpu_key_k_type (const struct cpu_key * key)
{
return (key->version == ITEM_VERSION_1) ?
uniqueness2type (key->on_disk_key.u.k_offset_v1.k_uniqueness) :
#define I_DEH_N_ENTRY_LENGTH(ih,deh,i) \
((i) ? (deh_location((deh)-1) - deh_location((deh))) : (ih_item_len((ih)) - deh_location((deh))))
*/
-static inline int entry_length (struct buffer_head * bh, struct item_head * ih,
- int pos_in_item)
+static inline int entry_length (const struct buffer_head * bh,
+ const struct item_head * ih, int pos_in_item)
{
struct reiserfs_de_head * deh;
// reiserfs version 2 has max offset 60 bits. Version 1 - 32 bit offset
#define U32_MAX (~(__u32)0)
-static inline loff_t max_reiserfs_offset (struct inode * inode)
+static inline loff_t max_reiserfs_offset (const struct inode * inode)
{
if (inode_items_version (inode) == ITEM_VERSION_1)
return (loff_t)U32_MAX;
see FILESYSTEM_CHANGED() macro in reiserfs_fs.h */
} ;
-
-#if 0
- /* when balancing we potentially affect a 3 node wide column of nodes
- in the tree (the top of the column may be tapered). C is the nodes
- at the center of this column, and L and R are the nodes to the
- left and right. */
- struct seal * L_path_seals[MAX_HEIGHT];
- struct seal * C_path_seals[MAX_HEIGHT];
- struct seal * R_path_seals[MAX_HEIGHT];
- char L_path_lock_types[MAX_HEIGHT]; /* 'r', 'w', or 'n' for read, write, or none */
- char C_path_lock_types[MAX_HEIGHT];
- char R_path_lock_types[MAX_HEIGHT];
-
-
- struct seal_list_elem * C_seal[MAX_HEIGHT]; /* array of seals on nodes in the path */
- struct seal_list_elem * L_seal[MAX_HEIGHT]; /* array of seals on left neighbors of nodes in the path */
- struct seal_list_elem * R_seal[MAX_HEIGHT]; /* array of seals on right neighbors of nodes in the path*/
- struct seal_list_elem * FL_seal[MAX_HEIGHT]; /* array of seals on fathers of the left neighbors */
- struct seal_list_elem * FR_seal[MAX_HEIGHT]; /* array of seals on fathers of the right neighbors */
- struct seal_list_elem * CFL_seal[MAX_HEIGHT]; /* array of seals on common parents of center node and its left neighbor */
- struct seal_list_elem * CFR_seal[MAX_HEIGHT]; /* array of seals on common parents of center node and its right neighbor */
-
- struct char C_desired_lock_type[MAX_HEIGHT]; /* 'r', 'w', or 'n' for read, write, or none */
- struct char L_desired_lock_type[MAX_HEIGHT];
- struct char R_desired_lock_type[MAX_HEIGHT];
- struct char FL_desired_lock_type[MAX_HEIGHT];
- struct char FR_desired_lock_type[MAX_HEIGHT];
- struct char CFL_desired_lock_type[MAX_HEIGHT];
- struct char CFR_desired_lock_type[MAX_HEIGHT];
-#endif
-
-
-
-
-
/* These are modes of balancing */
/* When inserting an item. */
#define B_I_POS_UNFM_POINTER(bh,ih,pos) le32_to_cpu(*(((unp_t *)B_I_PITEM(bh,ih)) + (pos)))
#define PUT_B_I_POS_UNFM_POINTER(bh,ih,pos, val) do {*(((unp_t *)B_I_PITEM(bh,ih)) + (pos)) = cpu_to_le32(val); } while (0)
-/* Reiserfs buffer cache statistics. */
-#ifdef REISERFS_CACHE_STAT
- struct reiserfs_cache_stat
- {
- int nr_reiserfs_ll_r_block; /* Number of block reads. */
- int nr_reiserfs_ll_w_block; /* Number of block writes. */
- int nr_reiserfs_schedule; /* Number of locked buffers waits. */
- unsigned long nr_reiserfs_bread; /* Number of calls to reiserfs_bread function */
- unsigned long nr_returns; /* Number of breads of buffers that were hoped to contain a key but did not after bread completed
- (usually due to object shifting while bread was executing.)
- In the code this manifests as the number
- of times that the repeat variable is nonzero in search_by_key.*/
- unsigned long nr_fixed; /* number of calls of fix_nodes function */
- unsigned long nr_failed; /* number of calls of fix_nodes in which schedule occurred while the function worked */
- unsigned long nr_find1; /* How many times we access a child buffer using its direct pointer from an internal node.*/
- unsigned long nr_find2; /* Number of times there is neither a direct pointer to
- nor any entry in the child list pointing to the buffer. */
- unsigned long nr_find3; /* When parent is locked (meaning that there are no direct pointers)
- or parent is leaf and buffer to be found is an unformatted node. */
- } cache_stat;
-#endif
-
struct reiserfs_iget4_args {
__u32 objectid ;
} ;
int remove_from_transaction(struct super_block *p_s_sb, unsigned long blocknr, int already_cleaned) ;
int remove_from_journal_list(struct super_block *s, struct reiserfs_journal_list *jl, struct buffer_head *bh, int remove_freed) ;
-int buffer_journaled(struct buffer_head *bh) ;
+int buffer_journaled(const struct buffer_head *bh) ;
int mark_buffer_journal_new(struct buffer_head *bh) ;
int reiserfs_sync_all_buffers(kdev_t dev, int wait) ;
int reiserfs_sync_buffers(kdev_t dev, int wait) ;
int reiserfs_allocate_list_bitmaps(struct super_block *s, struct reiserfs_list_bitmap *, int) ;
/* why is this kerplunked right here? */
-static inline int reiserfs_buffer_prepared(struct buffer_head *bh) {
- if (bh && test_bit(BH_JPrepared, &bh->b_state))
+static inline int reiserfs_buffer_prepared(const struct buffer_head *bh) {
+ if (bh && test_bit(BH_JPrepared, ( unsigned long * ) &bh->b_state))
return 1 ;
else
return 0 ;
}
/* buffer was journaled, waiting to get to disk */
-static inline int buffer_journal_dirty(struct buffer_head *bh) {
+static inline int buffer_journal_dirty(const struct buffer_head *bh) {
if (bh)
- return test_bit(BH_JDirty_wait, &bh->b_state) ;
+ return test_bit(BH_JDirty_wait, ( unsigned long * ) &bh->b_state) ;
else
return 0 ;
}
int reiserfs_convert_objectid_map_v1(struct super_block *) ;
/* stree.c */
-int B_IS_IN_TREE(struct buffer_head *);
-extern inline void copy_short_key (void * to, void * from);
-extern inline void copy_item_head(void * p_v_to, void * p_v_from);
+int B_IS_IN_TREE(const struct buffer_head *);
+extern inline void copy_short_key (void * to, const void * from);
+extern inline void copy_item_head(struct item_head * p_v_to,
+ const struct item_head * p_v_from);
// first key is in cpu form, second - le
-extern inline int comp_keys (struct key * le_key, struct cpu_key * cpu_key);
-extern inline int comp_short_keys (struct key * le_key, struct cpu_key * cpu_key);
-extern inline void le_key2cpu_key (struct cpu_key * to, struct key * from);
+extern inline int comp_keys (const struct key * le_key,
+ const struct cpu_key * cpu_key);
+extern inline int comp_short_keys (const struct key * le_key,
+ const struct cpu_key * cpu_key);
+extern inline void le_key2cpu_key (struct cpu_key * to, const struct key * from);
// both are cpu keys
-extern inline int comp_cpu_keys (struct cpu_key *, struct cpu_key *);
-extern inline int comp_short_cpu_keys (struct cpu_key *, struct cpu_key *);
-extern inline void cpu_key2cpu_key (struct cpu_key *, struct cpu_key *);
+extern inline int comp_cpu_keys (const struct cpu_key *, const struct cpu_key *);
+extern inline int comp_short_cpu_keys (const struct cpu_key *,
+ const struct cpu_key *);
+extern inline void cpu_key2cpu_key (struct cpu_key *, const struct cpu_key *);
// both are in le form
-extern inline int comp_le_keys (struct key *, struct key *);
-extern inline int comp_short_le_keys (struct key *, struct key *);
+extern inline int comp_le_keys (const struct key *, const struct key *);
+extern inline int comp_short_le_keys (const struct key *, const struct key *);
//
// get key version from on disk key - kludge
//
-static inline int le_key_version (struct key * key)
+static inline int le_key_version (const struct key * key)
{
int type;
}
-static inline void copy_key (void * to, void * from)
+static inline void copy_key (struct key *to, const struct key *from)
{
memcpy (to, from, KEY_SIZE);
}
-int comp_items (struct item_head * p_s_ih, struct path * p_s_path);
-struct key * get_rkey (struct path * p_s_chk_path, struct super_block * p_s_sb);
-inline int bin_search (void * p_v_key, void * p_v_base, int p_n_num, int p_n_width, int * p_n_pos);
-int search_by_key (struct super_block *, struct cpu_key *, struct path *, int);
+int comp_items (const struct item_head * stored_ih, const struct path * p_s_path);
+const struct key * get_rkey (const struct path * p_s_chk_path,
+ const struct super_block * p_s_sb);
+inline int bin_search (const void * p_v_key, const void * p_v_base,
+ int p_n_num, int p_n_width, int * p_n_pos);
+int search_by_key (struct super_block *, const struct cpu_key *,
+ struct path *, int);
#define search_item(s,key,path) search_by_key (s, key, path, DISK_LEAF_NODE_LEVEL)
-int search_for_position_by_key (struct super_block * p_s_sb, struct cpu_key * p_s_cpu_key, struct path * p_s_search_path);
+int search_for_position_by_key (struct super_block * p_s_sb,
+ const struct cpu_key * p_s_cpu_key,
+ struct path * p_s_search_path);
extern inline void decrement_bcount (struct buffer_head * p_s_bh);
void decrement_counters_in_path (struct path * p_s_search_path);
void pathrelse (struct path * p_s_search_path);
int reiserfs_insert_item (struct reiserfs_transaction_handle *th,
struct path * path,
- struct cpu_key * key,
+ const struct cpu_key * key,
struct item_head * ih, const char * body);
int reiserfs_paste_into_item (struct reiserfs_transaction_handle *th,
struct path * path,
- struct cpu_key * key,
+ const struct cpu_key * key,
const char * body, int paste_size);
int reiserfs_cut_from_item (struct reiserfs_transaction_handle *th,
int reiserfs_delete_item (struct reiserfs_transaction_handle *th,
struct path * path,
- struct cpu_key * key,
+ const struct cpu_key * key,
struct inode * inode,
struct buffer_head * p_s_un_bh);
void reiserfs_truncate_file(struct inode *, int update_timestamps) ;
void make_cpu_key (struct cpu_key * cpu_key, const struct inode * inode, loff_t offset,
int type, int key_length);
-void make_le_item_head (struct item_head * ih, struct cpu_key * key, int version,
- loff_t offset, int type, int length, int entry_count);
+void make_le_item_head (struct item_head * ih, const struct cpu_key * key,
+ int version,
+ loff_t offset, int type, int length, int entry_count);
/*void store_key (struct key * key);
void forget_key (struct key * key);*/
int reiserfs_get_block (struct inode * inode, long block,
struct buffer_head * bh_result, int create);
-struct inode * reiserfs_iget (struct super_block * s, struct cpu_key * key);
+struct inode * reiserfs_iget (struct super_block * s,
+ const struct cpu_key * key);
void reiserfs_read_inode (struct inode * inode) ;
void reiserfs_read_inode2(struct inode * inode, void *p) ;
void reiserfs_delete_inode (struct inode * inode);
/* we don't mark inodes dirty, we just log them */
void reiserfs_dirty_inode (struct inode * inode) ;
-struct inode * reiserfs_new_inode (struct reiserfs_transaction_handle *th, const struct inode * dir, int mode,
+struct inode * reiserfs_new_inode (struct reiserfs_transaction_handle *th,
+ const struct inode * dir, int mode,
const char * symname, int item_len,
struct dentry *dentry, struct inode *inode, int * err);
int reiserfs_sync_inode (struct reiserfs_transaction_handle *th, struct inode * inode);
/* namei.c */
inline void set_de_name_and_namelen (struct reiserfs_dir_entry * de);
-int search_by_entry_key (struct super_block * sb, struct cpu_key * key, struct path * path,
+int search_by_entry_key (struct super_block * sb, const struct cpu_key * key,
+ struct path * path,
struct reiserfs_dir_entry * de);
struct dentry * reiserfs_lookup (struct inode * dir, struct dentry *dentry);
int reiserfs_create (struct inode * dir, struct dentry *dentry, int mode);
/* super.c */
inline void reiserfs_mark_buffer_dirty (struct buffer_head * bh, int flag);
inline void reiserfs_mark_buffer_clean (struct buffer_head * bh);
-void reiserfs_panic (struct super_block * s, const char * fmt, ...);
void reiserfs_write_super (struct super_block * s);
void reiserfs_put_super (struct super_block * s);
int reiserfs_remount (struct super_block * s, int * flags, char * data);
/* tail_conversion.c */
int direct2indirect (struct reiserfs_transaction_handle *, struct inode *, struct path *, struct buffer_head *, loff_t);
-int indirect2direct (struct reiserfs_transaction_handle *, struct inode *, struct page *, struct path *, struct cpu_key *, loff_t, char *);
+int indirect2direct (struct reiserfs_transaction_handle *, struct inode *, struct page *, struct path *, const struct cpu_key *, loff_t, char *);
void reiserfs_unmap_buffer(struct buffer_head *) ;
/* buffer2.c */
struct buffer_head * reiserfs_getblk (kdev_t n_dev, int n_block, int n_size);
-void wait_buffer_until_released (struct buffer_head * bh);
+void wait_buffer_until_released (const struct buffer_head * bh);
struct buffer_head * reiserfs_bread (kdev_t n_dev, int n_block, int n_size);
/* fix_nodes.c */
void * reiserfs_kmalloc (size_t size, int flags, struct super_block * s);
void reiserfs_kfree (const void * vp, size_t size, struct super_block * s);
-int fix_nodes (int n_op_mode, struct tree_balance * p_s_tb, struct item_head * p_s_ins_ih, const void *);
+int fix_nodes (int n_op_mode, struct tree_balance * p_s_tb,
+ struct item_head * p_s_ins_ih, const void *);
void unfix_nodes (struct tree_balance *);
void free_buffers_in_tb (struct tree_balance * p_s_tb);
/* prints.c */
-void reiserfs_panic (struct super_block * s, const char * fmt, ...);
+void reiserfs_panic (struct super_block * s, const char * fmt, ...)
+__attribute__ ( ( noreturn ) );/* __attribute__( ( format ( printf, 2, 3 ) ) ) */
void reiserfs_warning (const char * fmt, ...);
+/* __attribute__( ( format ( printf, 1, 2 ) ) ); */
void reiserfs_debug (struct super_block *s, int level, const char * fmt, ...);
+/* __attribute__( ( format ( printf, 3, 4 ) ) ); */
void print_virtual_node (struct virtual_node * vn);
void print_indirect_item (struct buffer_head * bh, int item_num);
void store_print_tb (struct tree_balance * tb);
__u32 r5_hash (const signed char *msg, int len);
/* version.c */
-char *reiserfs_get_version_string(void) ;
+const char *reiserfs_get_version_string(void) CONSTF;
/* the ext2 bit routines adjust for big or little endian as
** appropriate for the arch, so in our laziness we use them rather
#ifdef __i386__
static __inline__ int
-find_first_nonzero_bit(void * addr, unsigned size) {
+find_first_nonzero_bit(const void * addr, unsigned size) {
int res;
int __d0;
void *__d1;
#else /* __i386__ */
-static __inline__ int find_next_nonzero_bit(void * addr, unsigned size, unsigned offset)
+static __inline__ int find_next_nonzero_bit(const void * addr, unsigned size,
+ unsigned offset)
{
unsigned int * p = ((unsigned int *) addr) + (offset >> 5);
unsigned int result = offset & ~31UL;
absolutely safe */
#define SPARE_SPACE 500
-static inline unsigned long reiserfs_get_journal_block(struct super_block *s) {
+static inline unsigned long reiserfs_get_journal_block(const struct super_block *s) {
return le32_to_cpu(SB_DISK_SUPER_BLOCK(s)->s_journal_block) ;
}
-static inline unsigned long reiserfs_get_journal_orig_size(struct super_block *s) {
+static inline unsigned long reiserfs_get_journal_orig_size(const struct super_block *s) {
return le32_to_cpu(SB_DISK_SUPER_BLOCK(s)->s_orig_journal_size) ;
}
#define PF_DUMPCORE 0x00000200 /* dumped core */
#define PF_SIGNALED 0x00000400 /* killed by a signal */
#define PF_MEMALLOC 0x00000800 /* Allocating memory */
+#define PF_MEMDIE 0x00001000 /* Killed for out-of-memory */
#define PF_FREE_PAGES 0x00002000 /* per process page freeing */
#define PF_USEDFPU 0x00100000 /* task used FPU this quantum (SMP) */
#define SIOCADDDLCI 0x8980 /* Create new DLCI device */
#define SIOCDELDLCI 0x8981 /* Delete DLCI device */
+#define SIOCGIFVLAN 0x8982 /* 802.1Q VLAN support */
+#define SIOCSIFVLAN 0x8983 /* Set 802.1Q VLAN options */
+
/* Device private ioctl calls */
/*
#ifndef _INET_ECN_H_
#define _INET_ECN_H_
-#include <linux/config.h>
-
-#ifdef CONFIG_INET_ECN
-
static inline int INET_ECN_is_ce(__u8 dsfield)
{
return (dsfield&3) == 3;
(label) |= __constant_htons(2 << 4); \
} while (0)
-
-#else
-#define INET_ECN_is_ce(x...) (0)
-#define INET_ECN_is_not_ce(x...) (0)
-#define INET_ECN_is_capable(x...) (0)
-#define INET_ECN_encapsulate(x, y) (x)
-#define IP6_ECN_flow_init(x...) do { } while (0)
-#define IP6_ECN_flow_xmit(x...) do { } while (0)
-#define INET_ECN_xmit(x...) do { } while (0)
-#define INET_ECN_dontxmit(x...) do { } while (0)
-#endif
-
static inline void IP_ECN_set_ce(struct iphdr *iph)
{
u32 check = iph->check;
dst_release(&rt->u.dst);
}
-#ifdef CONFIG_INET_ECN
#define IPTOS_RT_MASK (IPTOS_TOS_MASK & ~3)
-#else
-#define IPTOS_RT_MASK IPTOS_TOS_MASK
-#endif
-
extern __u8 ip_tos2prio[16];
unsigned int keepalive_time; /* time before keep alive takes place */
unsigned int keepalive_intvl; /* time interval between keep alive probes */
int linger2;
+
+ unsigned long last_synq_overflow;
};
struct sk_buff *skb,
struct open_request *req,
struct dst_entry *dst);
-
- int (*hash_connecting) (struct sock *sk);
-
+
int (*remember_stamp) (struct sock *sk);
__u16 net_header_len;
struct sockaddr *uaddr,
int addr_len);
-extern int tcp_connect(struct sock *sk,
- struct sk_buff *skb);
+extern void tcp_connect_init(struct sock *sk);
+
+extern void tcp_connect_send(struct sock *sk, struct sk_buff *skb);
extern struct sk_buff * tcp_make_synack(struct sock *sk,
struct dst_entry *dst,
#ifndef _NET_TCP_ECN_H_
#define _NET_TCP_ECN_H_ 1
-#include <linux/config.h>
-
-#ifdef CONFIG_INET_ECN
-
#include <net/inet_ecn.h>
#define TCP_HP_BITS (~(TCP_RESERVED_BITS|TCP_FLAG_PSH)|TCP_FLAG_ECE|TCP_FLAG_CWR)
req->ecn_ok = 1;
}
-
-
-#else
-
-#define TCP_HP_BITS (~(TCP_RESERVED_BITS|TCP_FLAG_PSH))
-
-
-#define TCP_ECN_send_syn(x...) do { } while (0)
-#define TCP_ECN_send_synack(x...) do { } while (0)
-#define TCP_ECN_make_synack(x...) do { } while (0)
-#define TCP_ECN_send(x...) do { } while (0)
-
-#define TCP_ECN_queue_cwr(x...) do { } while (0)
-
-#define TCP_ECN_accept_cwr(x...) do { } while (0)
-#define TCP_ECN_check_ce(x...) do { } while (0)
-#define TCP_ECN_rcv_synack(x...) do { } while (0)
-#define TCP_ECN_rcv_syn(x...) do { } while (0)
-#define TCP_ECN_rcv_ecn_echo(x...) (0)
-#define TCP_ECN_openreq_child(x...) do { } while (0)
-#define TCP_ECN_create_request(x...) do { } while (0)
-#define TCP_ECN_withdraw_cwr(x...) do { } while (0)
-
-
-#endif
-
#endif
EXPORT_SYMBOL(alloc_pages_node);
EXPORT_SYMBOL(__get_free_pages);
EXPORT_SYMBOL(get_zeroed_page);
+EXPORT_SYMBOL(page_cache_release);
EXPORT_SYMBOL(__free_pages);
EXPORT_SYMBOL(free_pages);
EXPORT_SYMBOL(num_physpages);
* it in the page cache, and handles the special cases reasonably without
* having a lot of duplicated code.
*/
-struct page * filemap_nopage(struct vm_area_struct * area, unsigned long address)
+struct page * filemap_nopage(struct vm_area_struct * area, unsigned long address, int unused)
{
int error;
struct file *file = area->vm_file;
return do_anonymous_page(mm, vma, page_table, write_access, address);
spin_unlock(&mm->page_table_lock);
- new_page = vma->vm_ops->nopage(vma, address & PAGE_MASK);
+ new_page = vma->vm_ops->nopage(vma, address & PAGE_MASK, 0);
if (new_page == NULL) /* no page was available -- SIGBUS */
return 0;
* exit() and clear out its resources quickly...
*/
p->counter = 5 * HZ;
- p->flags |= PF_MEMALLOC;
+ p->flags |= PF_MEMALLOC | PF_MEMDIE;
/* This process has hardware access, be more careful. */
if (cap_t(p->cap_effective) & CAP_TO_MASK(CAP_SYS_RAWIO)) {
return;
}
-static inline int node_zones_low(pg_data_t *pgdat)
-{
- zone_t * zone;
- int i;
-
- for (i = pgdat->nr_zones-1; i >= 0; i--) {
- zone = pgdat->node_zones + i;
-
- if (zone->free_pages > (zone->pages_low))
- return 0;
-
- }
- return 1;
-}
-
-static int all_zones_low(void)
-{
- pg_data_t * pgdat = pgdat_list;
-
- pgdat = pgdat_list;
- do {
- if (node_zones_low(pgdat))
- continue;
- return 0;
- } while ((pgdat = pgdat->node_next));
-
- return 1;
-}
-
/**
* out_of_memory - is the system out of memory?
*
*/
int out_of_memory(void)
{
- long cache_mem, limit;
+ static unsigned long first, last, count;
+ unsigned long now = jiffies;
+ unsigned long since = now - last;
- /* Enough free memory? Not OOM. */
- if (!all_zones_low())
+ /*
+ * If more than a second has passed since the last query,
+ * we're not oom.
+ */
+ last = now;
+ if (since > HZ) {
+ first = now;
+ count = 0;
return 0;
+ }
- /* Enough swap space left? Not OOM. */
- if (nr_swap_pages > 0)
+ /*
+ * If we have seen fewer than 100 failures,
+ * we're not really oom.
+ */
+ if (++count < 100)
return 0;
/*
- * If the buffer and page cache (including swap cache) are over
- * their (/proc tunable) minimum, we're still not OOM. We test
- * this to make sure we don't return OOM when the system simply
- * has a hard time with the cache.
+ * If we haven't tried for at least one second,
+ * we're not really oom.
*/
- cache_mem = atomic_read(&page_cache_size);
- limit = 2;
- limit *= num_physpages / 100;
+ since = now - first;
+ if (since < HZ)
+ return 0;
- if (cache_mem > limit)
+ /*
+ * Enough swap space left? Not OOM.
+ */
+ if (nr_swap_pages > 0)
return 0;
- /* Else... */
+ /*
+ * Ok, really out of memory.
+ *
+ * Reset test logic, let the poor sucker
+ * we selected die in peace (this will
+ * delay the next oom kill for at least
+ * another second and another X failures).
+ */
+ first = now;
+ count = 0;
return 1;
}
/* here we're in the low on memory slow path */
rebalance:
- if (current->flags & PF_MEMALLOC) {
+ if (current->flags & (PF_MEMALLOC | PF_MEMDIE)) {
zone = zonelist->zones;
for (;;) {
zone_t *z = *(zone++);
return 0;
}
-void free_lru_page(struct page *page)
+void page_cache_release(struct page *page)
{
if (!PageReserved(page) && put_page_testzero(page)) {
if (PageActive(page) || PageInactive(page))
return error;
}
-struct page * shmem_nopage(struct vm_area_struct * vma, unsigned long address)
+struct page * shmem_nopage(struct vm_area_struct * vma, unsigned long address, int unused)
{
struct page * page;
unsigned int idx;
}
spin_unlock(&pagemap_lru_lock);
- if (nr_pages <= 0)
- return 0;
-
- /*
- * If swapping out isn't appropriate, and
- * we still fail, try the other (usually smaller)
- * caches instead.
- */
- shrink_dcache_memory(priority, gfp_mask);
- shrink_icache_memory(priority, gfp_mask);
-#ifdef CONFIG_QUOTA
- shrink_dqcache_memory(DEF_PRIORITY, gfp_mask);
-#endif
-
return nr_pages;
}
ratio = (unsigned long) nr_pages * nr_active_pages / ((nr_inactive_pages + 1) * 2);
refill_inactive(ratio);
- return shrink_cache(nr_pages, classzone, gfp_mask, priority);
+ nr_pages = shrink_cache(nr_pages, classzone, gfp_mask, priority);
+ if (nr_pages <= 0)
+ return 0;
+
+ shrink_dcache_memory(priority, gfp_mask);
+ shrink_icache_memory(priority, gfp_mask);
+#ifdef CONFIG_QUOTA
+ shrink_dqcache_memory(DEF_PRIORITY, gfp_mask);
+#endif
+
+ return nr_pages;
}
int try_to_free_pages(zone_t *classzone, unsigned int gfp_mask, unsigned int order)
{
- int ret = 0;
int priority = DEF_PRIORITY;
int nr_pages = SWAP_CLUSTER_MAX;
return 1;
} while (--priority);
- return ret;
+ /*
+ * Hmm.. Cache shrink failed - time to kill something?
+ * Mhwahahhaha! This is the part I really like. Giggle.
+ */
+ if (out_of_memory())
+ oom_kill();
+
+ return 0;
}
DECLARE_WAIT_QUEUE_HEAD(kswapd_wait);
do
need_more_balance |= kswapd_balance_pgdat(pgdat);
while ((pgdat = pgdat->node_next));
- if (need_more_balance && out_of_memory()) {
- oom_kill();
- }
} while (need_more_balance);
}
cl2llc.c: cl2llc.pre
sed -f ./pseudo/opcd2num.sed cl2llc.pre >cl2llc.c
-
-tar:
- tar -cvf /dev/f1 .
--- /dev/null
+#
+# Makefile for the Linux VLAN layer.
+#
+# Note! Dependencies are done automagically by 'make dep', which also
+# removes any old dependencies. DON'T put your own dependencies here
+# unless it's something special (ie not a .c file).
+#
+# Note 2! The CFLAGS definition is now in the main makefile...
+
+O_TARGET := 8021q.o
+
+obj-y := vlan.o vlanproc.o vlan_dev.o
+obj-m := $(O_TARGET)
+
+include $(TOPDIR)/Rules.make
--- /dev/null
+/*
+ * INET An implementation of the TCP/IP protocol suite for the LINUX
+ * operating system. INET is implemented using the BSD Socket
+ * interface as the means of communication with the user level.
+ *
+ * Ethernet-type device handling.
+ *
+ * Authors: Ben Greear <greearb@candelatech.com>, <greearb@agcs.com>
+ *
+ * Fixes:
+ * Fix for packet capture - Nick Eggleston <nick@dccinc.com>;
+ *
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <asm/uaccess.h> /* for copy_from_user */
+#include <linux/module.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <net/datalink.h>
+#include <linux/mm.h>
+#include <linux/in.h>
+#include <linux/init.h>
+#include <net/p8022.h>
+#include <net/arp.h>
+#include <linux/rtnetlink.h>
+#include <linux/brlock.h>
+#include <linux/notifier.h>
+
+#include <linux/if_vlan.h>
+#include "vlan.h"
+#include "vlanproc.h"
+
+/* Global VLAN variables */
+
+/* Our listing of VLAN group(s) */
+struct vlan_group *p802_1Q_vlan_list;
+
+static char vlan_fullname[] = "802.1Q VLAN Support";
+static unsigned int vlan_version = 1;
+static unsigned int vlan_release = 5;
+static char vlan_copyright[] = " Ben Greear <greearb@candelatech.com>";
+
+static int vlan_device_event(struct notifier_block *, unsigned long, void *);
+
+struct notifier_block vlan_notifier_block = {
+ notifier_call: vlan_device_event,
+};
+
+/* These may be changed at run-time through IOCTLs */
+
+/* Determines interface naming scheme. */
+unsigned short vlan_name_type = VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD;
+
+/* Counter for how many NON-VLAN protos we've received on a VLAN. */
+unsigned long vlan_bad_proto_recvd = 0;
+
+/* DO reorder the header by default */
+unsigned short vlan_default_dev_flags = 1;
+
+static struct packet_type vlan_packet_type = {
+ type: __constant_htons(ETH_P_8021Q),
+ dev: NULL,
+ func: vlan_skb_recv, /* VLAN receive method */
+ data: (void *)(-1), /* Set to '(void *)1' once this code can share SKBs */
+ next: NULL
+};
+
+/* End of global variables definitions. */
+
+/*
+ * Function vlan_proto_init (void)
+ *
+ * Initialize the VLAN protocol layer.
+ *
+ */
+static int __init vlan_proto_init(void)
+{
+ int err;
+
+ printk(VLAN_INF "%s v%u.%u %s\n",
+ vlan_fullname, vlan_version, vlan_release, vlan_copyright);
+
+ /* proc file system initialization */
+ err = vlan_proc_init();
+ if (err < 0) {
+ printk(KERN_ERR __FUNCTION__
+ ": %s: can't create entry in proc filesystem!\n",
+ VLAN_NAME);
+ return err;
+ }
+
+ dev_add_pack(&vlan_packet_type);
+
+ /* Register us to receive netdevice events */
+ register_netdevice_notifier(&vlan_notifier_block);
+
+ vlan_ioctl_hook = vlan_ioctl_handler;
+
+ printk(VLAN_INF "%s Initialization complete.\n", VLAN_NAME);
+ return 0;
+}
+
+/*
+ * Module 'remove' entry point.
+ * o delete the /proc/net/vlan directory and static entries.
+ */
+static void __exit vlan_cleanup_module(void)
+{
+ /* Un-register us from receiving netdevice events */
+ unregister_netdevice_notifier(&vlan_notifier_block);
+
+ dev_remove_pack(&vlan_packet_type);
+ vlan_proc_cleanup();
+
+ vlan_ioctl_hook = NULL;
+}
+
+module_init(vlan_proto_init);
+module_exit(vlan_cleanup_module);
+
+/** Will search linearly for now, based on device index. Could
+ * hash, or directly link, this some day. --Ben
+ * TODO: Potential performance issue here. Linear search where N is
+ * the number of 'real' devices used by VLANs.
+ */
+struct vlan_group* vlan_find_group(int real_dev_ifindex)
+{
+ struct vlan_group *grp = NULL;
+
+ br_read_lock_bh(BR_NETPROTO_LOCK);
+ for (grp = p802_1Q_vlan_list;
+ ((grp != NULL) && (grp->real_dev_ifindex != real_dev_ifindex));
+ grp = grp->next) {
+ /* nothing */ ;
+ }
+ br_read_unlock_bh(BR_NETPROTO_LOCK);
+
+ return grp;
+}
+
+/* Find the protocol handler. Assumes VID < 0xFFF.
+ */
+struct net_device *find_802_1Q_vlan_dev(struct net_device *real_dev,
+ unsigned short VID)
+{
+ struct vlan_group *grp = vlan_find_group(real_dev->ifindex);
+
+ if (grp)
+ return grp->vlan_devices[VID];
+
+ return NULL;
+}
+
+/** This method will explicitly do a dev_put on the device if do_dev_put
+ * is TRUE. This gets around a difficulty with reference counting, and
+ * the unregister-by-name (below). If do_locks is true, it will grab
+ * a lock before un-registering. If do_locks is false, it is assumed that
+ * the lock has already been grabbed externally... --Ben
+ */
+int unregister_802_1Q_vlan_dev(int real_dev_ifindex, unsigned short vlan_id,
+ int do_dev_put, int do_locks)
+{
+ struct net_device *dev = NULL;
+ struct vlan_group *grp;
+
+#ifdef VLAN_DEBUG
+ printk(VLAN_DBG __FUNCTION__ ": VID: %i\n", vlan_id);
+#endif
+
+ /* sanity check */
+ if ((vlan_id >= 0xFFF) || (vlan_id <= 0))
+ return -EINVAL;
+
+ grp = vlan_find_group(real_dev_ifindex);
+ if (grp) {
+ dev = grp->vlan_devices[vlan_id];
+ if (dev) {
+ /* Remove proc entry */
+ vlan_proc_rem_dev(dev);
+
+ /* Take it out of our own structures */
+ grp->vlan_devices[vlan_id] = NULL;
+
+ /* Take it out of the global list of devices.
+ * NOTE: This deletes dev, don't access it again!!
+ */
+
+ if (do_dev_put)
+ dev_put(dev);
+
+ /* TODO: Please review this code. */
+ if (do_locks) {
+ rtnl_lock();
+ unregister_netdevice(dev);
+ rtnl_unlock();
+ } else {
+ unregister_netdevice(dev);
+ }
+
+ MOD_DEC_USE_COUNT;
+ }
+ }
+
+ return 0;
+}
+
+int unregister_802_1Q_vlan_device(const char *vlan_IF_name)
+{
+ struct net_device *dev = NULL;
+
+#ifdef VLAN_DEBUG
+ printk(VLAN_DBG __FUNCTION__ ": unregister VLAN by name, name -:%s:-\n",
+ vlan_IF_name);
+#endif
+
+ dev = dev_get_by_name(vlan_IF_name);
+ if (dev) {
+ if (dev->priv_flags & IFF_802_1Q_VLAN) {
+ return unregister_802_1Q_vlan_dev(
+ VLAN_DEV_INFO(dev)->real_dev->ifindex,
+ (unsigned short)(VLAN_DEV_INFO(dev)->vlan_id),
+ 1 /* do dev_put */, 1 /* do locking */);
+ } else {
+ printk(VLAN_ERR __FUNCTION__
+ ": ERROR: Tried to remove a non-vlan device "
+ "with VLAN code, name: %s priv_flags: %hX\n",
+ dev->name, dev->priv_flags);
+ dev_put(dev);
+ return -EPERM;
+ }
+ } else {
+#ifdef VLAN_DEBUG
+ printk(VLAN_DBG __FUNCTION__ ": WARNING: Could not find dev.\n");
+#endif
+ return -EINVAL;
+ }
+}
+
+/* Attach a VLAN device to a mac address (ie Ethernet Card).
+ * Returns the device that was created, or NULL if there was
+ * an error of some kind.
+ */
+struct net_device *register_802_1Q_vlan_device(const char* eth_IF_name,
+ unsigned short VLAN_ID)
+{
+ struct vlan_group *grp;
+ struct net_device *new_dev;
+ struct net_device *real_dev; /* the ethernet device */
+ int malloc_size = 0;
+
+#ifdef VLAN_DEBUG
+ printk(VLAN_DBG __FUNCTION__ ": if_name -:%s:- vid: %i\n",
+ eth_IF_name, VLAN_ID);
+#endif
+
+ if (VLAN_ID >= 0xfff)
+ goto out_ret_null;
+
+ /* find the device relating to eth_IF_name. */
+ real_dev = dev_get_by_name(eth_IF_name);
+ if (!real_dev)
+ goto out_ret_null;
+
+ /* TODO: Make sure this device can really handle having a VLAN attached
+ * to it...
+ */
+ if (find_802_1Q_vlan_dev(real_dev, VLAN_ID)) {
+ /* was already registered. */
+ printk(VLAN_DBG __FUNCTION__ ": ALREADY had VLAN registered\n");
+ dev_put(real_dev);
+ return NULL;
+ }
+
+ malloc_size = (sizeof(struct net_device));
+ new_dev = (struct net_device *) kmalloc(malloc_size, GFP_KERNEL);
+ VLAN_MEM_DBG("net_device malloc, addr: %p size: %i\n",
+ new_dev, malloc_size);
+
+ if (new_dev == NULL)
+ goto out_put_dev;
+
+ memset(new_dev, 0, malloc_size);
+
+ /* set us up to not use a Qdisc, as the underlying Hardware device
+ * can do all the queueing we could want.
+ */
+ /* new_dev->qdisc_sleeping = &noqueue_qdisc; Not needed it seems. */
+ new_dev->tx_queue_len = 0; /* This should effectively give us no queue. */
+
+ /* Gotta set up the fields for the device. */
+#ifdef VLAN_DEBUG
+ printk(VLAN_DBG "About to allocate name, vlan_name_type: %i\n",
+ vlan_name_type);
+#endif
+ switch (vlan_name_type) {
+ case VLAN_NAME_TYPE_RAW_PLUS_VID:
+ /* name will look like: eth1.0005 */
+ sprintf(new_dev->name, "%s.%.4i", real_dev->name, VLAN_ID);
+ break;
+ case VLAN_NAME_TYPE_PLUS_VID_NO_PAD:
+ /* Put our vlan.VID in the name.
+ * Name will look like: vlan5
+ */
+ sprintf(new_dev->name, "vlan%i", VLAN_ID);
+ break;
+ case VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD:
+ /* Put our vlan.VID in the name.
+ * Name will look like: eth0.5
+ */
+ sprintf(new_dev->name, "%s.%i", real_dev->name, VLAN_ID);
+ break;
+ case VLAN_NAME_TYPE_PLUS_VID:
+ /* Put our vlan.VID in the name.
+ * Name will look like: vlan0005
+ */
+ default:
+ sprintf(new_dev->name, "vlan%.4i", VLAN_ID);
+ };
+
+#ifdef VLAN_DEBUG
+ printk(VLAN_DBG "Allocated new name -:%s:-\n", new_dev->name);
+#endif
+ /* set up method calls */
+ new_dev->init = vlan_dev_init;
+ new_dev->destructor = vlan_dev_destruct;
+
+ /* new_dev->ifindex = 0; it will be set when added to
+ * the global list.
+ * iflink is set as well.
+ */
+ new_dev->get_stats = vlan_dev_get_stats;
+
+ /* IFF_BROADCAST|IFF_MULTICAST; ??? */
+ new_dev->flags = real_dev->flags;
+ new_dev->flags &= ~IFF_UP;
+
+ /* Make this thing known as a VLAN device */
+ new_dev->priv_flags |= IFF_802_1Q_VLAN;
+
+ /* need 4 bytes for extra VLAN header info,
+ * hope the underlying device can handle it.
+ */
+ new_dev->mtu = real_dev->mtu;
+ new_dev->change_mtu = vlan_dev_change_mtu;
+
+ /* TODO: maybe just assign it to be ETHERNET? */
+ new_dev->type = real_dev->type;
+
+ /* Regular ethernet + 4 bytes (18 total). */
+ new_dev->hard_header_len = VLAN_HLEN + real_dev->hard_header_len;
+
+ new_dev->priv = kmalloc(sizeof(struct vlan_dev_info),
+ GFP_KERNEL);
+ VLAN_MEM_DBG("new_dev->priv malloc, addr: %p size: %i\n",
+ new_dev->priv,
+ sizeof(struct vlan_dev_info));
+
+ if (new_dev->priv == NULL) {
+ kfree(new_dev);
+ goto out_put_dev;
+ }
+
+ memset(new_dev->priv, 0, sizeof(struct vlan_dev_info));
+
+ memcpy(new_dev->broadcast, real_dev->broadcast, real_dev->addr_len);
+ memcpy(new_dev->dev_addr, real_dev->dev_addr, real_dev->addr_len);
+ new_dev->addr_len = real_dev->addr_len;
+
+ new_dev->open = vlan_dev_open;
+ new_dev->stop = vlan_dev_stop;
+ new_dev->hard_header = vlan_dev_hard_header;
+
+ new_dev->hard_start_xmit = vlan_dev_hard_start_xmit;
+ new_dev->rebuild_header = vlan_dev_rebuild_header;
+ new_dev->hard_header_parse = real_dev->hard_header_parse;
+ new_dev->set_mac_address = vlan_dev_set_mac_address;
+ new_dev->set_multicast_list = vlan_dev_set_multicast_list;
+
+ VLAN_DEV_INFO(new_dev)->vlan_id = VLAN_ID; /* 1 through 0xFFF */
+ VLAN_DEV_INFO(new_dev)->real_dev = real_dev;
+ VLAN_DEV_INFO(new_dev)->dent = NULL;
+ VLAN_DEV_INFO(new_dev)->flags = vlan_default_dev_flags;
+
+#ifdef VLAN_DEBUG
+ printk(VLAN_DBG "About to go find the group for idx: %i\n",
+ real_dev->ifindex);
+#endif
+
+ /* So, got the sucker initialized, now lets place
+ * it into our local structure.
+ */
+ grp = vlan_find_group(real_dev->ifindex);
+ if (!grp) { /* need to add a new group */
+ grp = kmalloc(sizeof(struct vlan_group), GFP_KERNEL);
+ VLAN_MEM_DBG("grp malloc, addr: %p size: %i\n",
+ grp, sizeof(struct vlan_group));
+ if (!grp) {
+ kfree(new_dev->priv);
+ VLAN_FMEM_DBG("new_dev->priv free, addr: %p\n",
+ new_dev->priv);
+ kfree(new_dev);
+ VLAN_FMEM_DBG("new_dev free, addr: %p\n", new_dev);
+
+ goto out_put_dev;
+ }
+
+ printk(KERN_ALERT "VLAN REGISTER: Allocated new group.\n");
+ memset(grp, 0, sizeof(struct vlan_group));
+ grp->real_dev_ifindex = real_dev->ifindex;
+
+ br_write_lock_bh(BR_NETPROTO_LOCK);
+ grp->next = p802_1Q_vlan_list;
+ p802_1Q_vlan_list = grp;
+ br_write_unlock_bh(BR_NETPROTO_LOCK);
+ }
+
+ grp->vlan_devices[VLAN_ID] = new_dev;
+ vlan_proc_add_dev(new_dev); /* create its proc entry */
+
+ /* TODO: Please check this: RTNL --Ben */
+ rtnl_lock();
+ register_netdevice(new_dev);
+ rtnl_unlock();
+
+ /* NOTE: We have a reference to the real device,
+ * so hold on to the reference.
+ */
+ MOD_INC_USE_COUNT; /* Add was a success!! */
+#ifdef VLAN_DEBUG
+ printk(VLAN_DBG "Allocated new device successfully, returning.\n");
+#endif
+ return new_dev;
+
+out_put_dev:
+ dev_put(real_dev);
+
+out_ret_null:
+ return NULL;
+}
+
+static int vlan_device_event(struct notifier_block *unused, unsigned long event, void *ptr)
+{
+ struct net_device *dev = (struct net_device *)(ptr);
+ struct vlan_group *grp = NULL;
+ int i = 0;
+ struct net_device *vlandev = NULL;
+
+ switch (event) {
+ case NETDEV_CHANGEADDR:
+ /* Ignore for now */
+ break;
+
+ case NETDEV_GOING_DOWN:
+ /* Ignore for now */
+ break;
+
+ case NETDEV_DOWN:
+ /* TODO: Please review this code. */
+ /* put all related VLANs in the down state too. */
+ for (grp = p802_1Q_vlan_list; grp != NULL; grp = grp->next) {
+ int flgs = 0;
+
+ for (i = 0; i < VLAN_GROUP_ARRAY_LEN; i++) {
+ vlandev = grp->vlan_devices[i];
+ if (!vlandev ||
+ (VLAN_DEV_INFO(vlandev)->real_dev != dev) ||
+ (!(vlandev->flags & IFF_UP)))
+ continue;
+
+ flgs = vlandev->flags;
+ flgs &= ~IFF_UP;
+ dev_change_flags(vlandev, flgs);
+ }
+ }
+ break;
+
+ case NETDEV_UP:
+ /* TODO: Please review this code. */
+ /* put all related VLANs back in the up state. */
+ for (grp = p802_1Q_vlan_list; grp != NULL; grp = grp->next) {
+ int flgs;
+
+ for (i = 0; i < VLAN_GROUP_ARRAY_LEN; i++) {
+ vlandev = grp->vlan_devices[i];
+ if (!vlandev ||
+ (VLAN_DEV_INFO(vlandev)->real_dev != dev) ||
+ (vlandev->flags & IFF_UP))
+ continue;
+
+ flgs = vlandev->flags;
+ flgs |= IFF_UP;
+ dev_change_flags(vlandev, flgs);
+ }
+ }
+ break;
+
+ case NETDEV_UNREGISTER:
+ /* TODO: Please review this code. */
+ /* delete all related VLANs. */
+ for (grp = p802_1Q_vlan_list; grp != NULL; grp = grp->next) {
+ for (i = 0; i < VLAN_GROUP_ARRAY_LEN; i++) {
+ vlandev = grp->vlan_devices[i];
+ if (!vlandev ||
+ (VLAN_DEV_INFO(vlandev)->real_dev != dev))
+ continue;
+
+ unregister_802_1Q_vlan_dev(
+ VLAN_DEV_INFO(vlandev)->real_dev->ifindex,
+ VLAN_DEV_INFO(vlandev)->vlan_id,
+ 0, 0);
+ vlandev = NULL;
+ }
+ }
+ break;
+ };
+
+ return NOTIFY_DONE;
+}
+
+/*
+ * VLAN IOCTL handler.
+ * o execute requested action or pass command to the device driver
+ * arg is really a void* to a vlan_ioctl_args structure.
+ */
+int vlan_ioctl_handler(unsigned long arg)
+{
+ int err = 0;
+ struct vlan_ioctl_args args;
+
+ /* everything here needs root permissions, except arguably the
+ * hack ioctls for sending packets. However, I know _I_ don't
+ * want users running that on my network! --BLG
+ */
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+
+ if (copy_from_user(&args, (void*)arg,
+ sizeof(struct vlan_ioctl_args)))
+ return -EFAULT;
+
+ /* Null terminate this sucker, just in case. */
+ args.device1[23] = 0;
+ args.u.device2[23] = 0;
+
+#ifdef VLAN_DEBUG
+ printk(VLAN_DBG __FUNCTION__ ": args.cmd: %x\n", args.cmd);
+#endif
+
+ switch (args.cmd) {
+ case SET_VLAN_INGRESS_PRIORITY_CMD:
+ err = vlan_dev_set_ingress_priority(args.device1,
+ args.u.skb_priority,
+ args.vlan_qos);
+ break;
+
+ case SET_VLAN_EGRESS_PRIORITY_CMD:
+ err = vlan_dev_set_egress_priority(args.device1,
+ args.u.skb_priority,
+ args.vlan_qos);
+ break;
+
+ case SET_VLAN_FLAG_CMD:
+ err = vlan_dev_set_vlan_flag(args.device1,
+ args.u.flag,
+ args.vlan_qos);
+ break;
+
+ case SET_VLAN_NAME_TYPE_CMD:
+ if ((args.u.name_type >= 0) &&
+ (args.u.name_type < VLAN_NAME_TYPE_HIGHEST)) {
+ vlan_name_type = args.u.name_type;
+ err = 0;
+ } else {
+ err = -EINVAL;
+ }
+ break;
+
+ /* TODO: Figure out how to pass info back...
+ case GET_VLAN_INGRESS_PRIORITY_IOCTL:
+ err = vlan_dev_get_ingress_priority(args);
+ break;
+
+ case GET_VLAN_EGRESS_PRIORITY_IOCTL:
+ err = vlan_dev_get_egress_priority(args);
+ break;
+ */
+
+ case ADD_VLAN_CMD:
+ /* we have been given the name of the Ethernet Device we want to
+ * talk to: args.dev1 We also have the
+ * VLAN ID: args.u.VID
+ */
+ if (register_802_1Q_vlan_device(args.device1, args.u.VID)) {
+ err = 0;
+ } else {
+ err = -EINVAL;
+ }
+ break;
+
+ case DEL_VLAN_CMD:
+ /* Here, the args.dev1 is the actual VLAN we want
+ * to get rid of.
+ */
+ err = unregister_802_1Q_vlan_device(args.device1);
+ break;
+
+ default:
+ /* pass on to underlying device instead?? */
+ printk(VLAN_DBG __FUNCTION__ ": Unknown VLAN CMD: %x \n",
+ args.cmd);
+ return -EINVAL;
+ };
+
+ return err;
+}
+
+
--- /dev/null
+#ifndef __BEN_VLAN_802_1Q_INC__
+#define __BEN_VLAN_802_1Q_INC__
+
+#include <linux/if_vlan.h>
+
+/* Uncomment this if you want debug traces to be shown. */
+/* #define VLAN_DEBUG */
+
+#define VLAN_ERR KERN_ERR
+#define VLAN_INF KERN_ALERT
+#define VLAN_DBG KERN_ALERT /* change these to a debug level eventually; having
+ * a hard time changing the log level at run-time,
+ * for some reason.
+ */
+
+/*
+
+These I use for memory debugging. I feared a leak at one time, but
+I never found it, and the problem seems to have disappeared. Still,
+I'll bet they might prove useful again... --Ben
+
+
+#define VLAN_MEM_DBG(x, y, z) printk(VLAN_DBG __FUNCTION__ ": " x, y, z);
+#define VLAN_FMEM_DBG(x, y) printk(VLAN_DBG __FUNCTION__ ": " x, y);
+*/
+
+/* This way they don't do anything! */
+#define VLAN_MEM_DBG(x, y, z)
+#define VLAN_FMEM_DBG(x, y)
+
+
+extern unsigned short vlan_name_type;
+
+/* Counter for how many NON-VLAN protos we've received on a VLAN. */
+extern unsigned long vlan_bad_proto_recvd;
+
+int vlan_ioctl_handler(unsigned long arg);
+
+/* Add some headers for the public VLAN methods. */
+int unregister_802_1Q_vlan_device(const char* vlan_IF_name);
+struct net_device *register_802_1Q_vlan_device(const char* eth_IF_name,
+ unsigned short VID);
+
+#endif /* !(__BEN_VLAN_802_1Q_INC__) */
--- /dev/null
+/*
+ * INET An implementation of the TCP/IP protocol suite for the LINUX
+ * operating system. INET is implemented using the BSD Socket
+ * interface as the means of communication with the user level.
+ *
+ * Ethernet-type device handling.
+ *
+ * Authors: Ben Greear <greearb@candelatech.com>, <greearb@agcs.com>
+ *
+ * Fixes: Mar 22 2001: Martin Bokaemper <mbokaemper@unispherenetworks.com>
+ * - reset skb->pkt_type on incoming packets when MAC was changed
+ * - see that changed MAC is saddr for outgoing packets
+ * Oct 20, 2001: Ard van Breeman:
+ * - Fix MC-list, finally.
+ * - Flush MC-list on VLAN destroy.
+ *
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <linux/in.h>
+#include <linux/init.h>
+#include <asm/uaccess.h> /* for copy_from_user */
+#include <linux/skbuff.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <net/datalink.h>
+#include <net/p8022.h>
+#include <net/arp.h>
+#include <linux/brlock.h>
+
+#include "vlan.h"
+#include "vlanproc.h"
+#include <linux/if_vlan.h>
+#include <net/ip.h>
+
+struct net_device_stats *vlan_dev_get_stats(struct net_device *dev)
+{
+ return &(((struct vlan_dev_info *)(dev->priv))->dev_stats);
+}
+
+
+/*
+ * Rebuild the Ethernet MAC header. This is called after an ARP
+ * (or in future other address resolution) has completed on this
+ * sk_buff. We now let ARP fill in the other fields.
+ *
+ * This routine CANNOT use cached dst->neigh!
+ * Really, it is used only when dst->neigh is wrong.
+ *
+ * TODO: This needs a checkup, I'm ignorant here. --BLG
+ */
+int vlan_dev_rebuild_header(struct sk_buff *skb)
+{
+ struct net_device *dev = skb->dev;
+ struct vlan_ethhdr *veth = (struct vlan_ethhdr *)(skb->data);
+
+ switch (veth->h_vlan_encapsulated_proto) {
+#ifdef CONFIG_INET
+ case __constant_htons(ETH_P_IP):
+
+ /* TODO: Confirm this will work with VLAN headers... */
+ return arp_find(veth->h_dest, skb);
+#endif
+ default:
+ printk(VLAN_DBG
+ "%s: unable to resolve type %X addresses.\n",
+ dev->name, (int)veth->h_vlan_encapsulated_proto);
+
+ memcpy(veth->h_source, dev->dev_addr, ETH_ALEN);
+ break;
+ };
+
+ return 0;
+}
+
+/*
+ * Determine the packet's protocol ID. The rule here is that we
+ * assume 802.3 if the type field is short enough to be a length.
+ * This is normal practice and works for any 'now in use' protocol.
+ *
+ * Also, at this point we assume that we ARE dealing exclusively with
+ * VLAN packets, or packets that should be made into VLAN packets based
+ * on a default VLAN ID.
+ *
+ * NOTE: Should be similar to ethernet/eth.c.
+ *
+ * SANITY NOTE: This method is called when a packet is moving up the stack
+ * towards userland. To get here, it would have already passed
+ * through the ethernet/eth.c eth_type_trans() method.
+ * SANITY NOTE 2: We are referring to the VLAN_HDR fields, which MAY be
+ * stored UNALIGNED in memory. RISC systems don't like
+ * such cases very much...
+ * SANITY NOTE 2a: According to Dave Miller & Alexey, it will always be aligned,
+ * so there doesn't need to be any of the unaligned stuff. It has
+ * been commented out now... --Ben
+ *
+ */
+int vlan_skb_recv(struct sk_buff *skb, struct net_device *dev,
+ struct packet_type* ptype)
+{
+ unsigned char *rawp = NULL;
+ struct vlan_hdr *vhdr = (struct vlan_hdr *)(skb->data);
+ unsigned short vid;
+ struct net_device_stats *stats;
+ unsigned short vlan_TCI;
+ unsigned short proto;
+
+ /* vlan_TCI = ntohs(get_unaligned(&vhdr->h_vlan_TCI)); */
+ vlan_TCI = ntohs(vhdr->h_vlan_TCI);
+
+ vid = (vlan_TCI & 0xFFF);
+
+#ifdef VLAN_DEBUG
+ printk(VLAN_DBG __FUNCTION__ ": skb: %p vlan_id: %hx\n",
+ skb, vid);
+#endif
+
+ /* Ok, we will find the correct VLAN device, strip the header,
+ * and then go on as usual.
+ */
+
+ /* we have 12 bits of vlan ID. */
+ /* If it's NULL, we will tag it to be junked below */
+ skb->dev = find_802_1Q_vlan_dev(dev, vid);
+
+ if (!skb->dev) {
+#ifdef VLAN_DEBUG
+ printk(VLAN_DBG __FUNCTION__ ": ERROR: No net_device for VID: %i on dev: %s [%i]\n",
+ (unsigned int)(vid), dev->name, dev->ifindex);
+#endif
+ kfree_skb(skb);
+ return -1;
+ }
+
+ /* Bump the rx counters for the VLAN device. */
+ stats = vlan_dev_get_stats(skb->dev);
+ stats->rx_packets++;
+ stats->rx_bytes += skb->len;
+
+ skb_pull(skb, VLAN_HLEN); /* take off the VLAN header (4 bytes currently) */
+
+ /* Ok, lets check to make sure the device (dev) we
+ * came in on is what this VLAN is attached to.
+ */
+
+ if (dev != VLAN_DEV_INFO(skb->dev)->real_dev) {
+#ifdef VLAN_DEBUG
+ printk(VLAN_DBG __FUNCTION__ ": dropping skb: %p because came in on wrong device, dev: %s real_dev: %s, skb_dev: %s\n",
+ skb, dev->name, VLAN_DEV_INFO(skb->dev)->real_dev->name, skb->dev->name);
+#endif
+ kfree_skb(skb);
+ stats->rx_errors++;
+ return -1;
+ }
+
+ /*
+ * Deal with ingress priority mapping.
+ */
+ skb->priority = VLAN_DEV_INFO(skb->dev)->ingress_priority_map[(ntohs(vhdr->h_vlan_TCI) >> 13) & 0x7];
+
+#ifdef VLAN_DEBUG
+ printk(VLAN_DBG __FUNCTION__ ": priority: %lu for TCI: %hu (hbo)\n",
+ (unsigned long)(skb->priority), ntohs(vhdr->h_vlan_TCI));
+#endif
+
+ /* The ethernet driver already did the pkt_type calculations
+ * for us...
+ */
+ switch (skb->pkt_type) {
+ case PACKET_BROADCAST: /* Yeah, stats collect these together.. */
+ /* stats->broadcast++; -- no such counter :-( */
+ case PACKET_MULTICAST:
+ stats->multicast++;
+ break;
+ case PACKET_OTHERHOST:
+ /* Our lower layer thinks this is not local, let's make sure.
+ * This allows the VLAN to have a different MAC than the underlying
+ * device, and still route correctly.
+ */
+ if (memcmp(skb->mac.ethernet->h_dest, skb->dev->dev_addr, ETH_ALEN) == 0) {
+ /* It is for our (changed) MAC-address! */
+ skb->pkt_type = PACKET_HOST;
+ }
+ break;
+ default:
+ break;
+ };
+
+ /* Was a VLAN packet, grab the encapsulated protocol, which the layer
+ * three protocols care about.
+ */
+ /* proto = get_unaligned(&vhdr->h_vlan_encapsulated_proto); */
+ proto = vhdr->h_vlan_encapsulated_proto;
+
+ skb->protocol = proto;
+ if (ntohs(proto) >= 1536) {
+ /* place it back on the queue to be handled by
+ * true layer 3 protocols.
+ */
+
+ /* See if we are configured to re-write the VLAN header
+ * to make it look like ethernet...
+ */
+ skb = vlan_check_reorder_header(skb);
+
+ /* Can be null if skb-clone fails when re-ordering */
+ if (skb) {
+ netif_rx(skb);
+ } else {
+ /* TODO: Add a more specific counter here. */
+ stats->rx_errors++;
+ }
+ return 0;
+ }
+
+ rawp = skb->data;
+
+ /*
+ * This is a magic hack to spot IPX packets. Older Novell breaks
+ * the protocol design and runs IPX over 802.3 without an 802.2 LLC
+ * layer. We look for FFFF which isn't a used 802.2 SSAP/DSAP. This
+ * won't work for fault tolerant netware but does for the rest.
+ */
+ if (*(unsigned short *)rawp == 0xFFFF) {
+ skb->protocol = __constant_htons(ETH_P_802_3);
+ /* place it back on the queue to be handled by true layer 3 protocols.
+ */
+
+ /* See if we are configured to re-write the VLAN header
+ * to make it look like ethernet...
+ */
+ skb = vlan_check_reorder_header(skb);
+
+ /* Can be null if skb-clone fails when re-ordering */
+ if (skb) {
+ netif_rx(skb);
+ } else {
+ /* TODO: Add a more specific counter here. */
+ stats->rx_errors++;
+ }
+ return 0;
+ }
+
+ /*
+ * Real 802.2 LLC
+ */
+ skb->protocol = __constant_htons(ETH_P_802_2);
+ /* place it back on the queue to be handled by upper layer protocols.
+ */
+
+ /* See if we are configured to re-write the VLAN header
+ * to make it look like ethernet...
+ */
+ skb = vlan_check_reorder_header(skb);
+
+ /* Can be null if skb-clone fails when re-ordering */
+ if (skb) {
+ netif_rx(skb);
+ } else {
+ /* TODO: Add a more specific counter here. */
+ stats->rx_errors++;
+ }
+ return 0;
+}
+
+/*
+ * Create the VLAN header for an arbitrary protocol layer
+ *
+ * saddr=NULL means use device source address
+ * daddr=NULL means leave destination address (eg unresolved arp)
+ *
+ * This is called when the SKB is moving down the stack towards the
+ * physical devices.
+ */
+int vlan_dev_hard_header(struct sk_buff *skb, struct net_device *dev,
+ unsigned short type, void *daddr, void *saddr,
+ unsigned len)
+{
+ struct vlan_hdr *vhdr;
+ unsigned short veth_TCI = 0;
+ int rc = 0;
+ int build_vlan_header = 0;
+ struct net_device *vdev = dev; /* save this for the bottom of the method */
+
+#ifdef VLAN_DEBUG
+ printk(VLAN_DBG __FUNCTION__ ": skb: %p type: %hx len: %x vlan_id: %hx, daddr: %p\n",
+ skb, type, len, VLAN_DEV_INFO(dev)->vlan_id, daddr);
+#endif
+
+ /* Build the VLAN header only if the re_order_header flag is NOT set.
+ * This fixes some programs that get confused when they see a VLAN
+ * device sending a frame that is VLAN encoded (the consensus is that
+ * the VLAN device should look completely like an Ethernet device when
+ * the REORDER_HEADER flag is set). The drawback to this is some extra
+ * header shuffling in hard_start_xmit. Users can turn off this
+ * REORDER behaviour with the vconfig tool.
+ */
+ build_vlan_header = ((VLAN_DEV_INFO(dev)->flags & 1) == 0);
+
+ if (build_vlan_header) {
+ vhdr = (struct vlan_hdr *) skb_push(skb, VLAN_HLEN);
+
+ /* build the four bytes that make this a VLAN header. */
+
+ /* Now, construct the second two bytes. This field looks something
+ * like:
+ * usr_priority: 3 bits (high bits)
+ * CFI 1 bit
+ * VLAN ID 12 bits (low bits)
+ *
+ */
+ veth_TCI = VLAN_DEV_INFO(dev)->vlan_id;
+ veth_TCI |= vlan_dev_get_egress_qos_mask(dev, skb);
+
+ vhdr->h_vlan_TCI = htons(veth_TCI);
+
+ /*
+ * Set the protocol type.
+ * For a packet of type ETH_P_802_3 we put the length in here instead.
+ * It is up to the 802.2 layer to carry protocol information.
+ */
+
+ if (type != ETH_P_802_3) {
+ vhdr->h_vlan_encapsulated_proto = htons(type);
+ } else {
+ vhdr->h_vlan_encapsulated_proto = htons(len);
+ }
+ }
+
+ /* Before delegating work to the lower layer, enter our MAC-address */
+ if (saddr == NULL)
+ saddr = dev->dev_addr;
+
+ dev = VLAN_DEV_INFO(dev)->real_dev;
+
+ /* MPLS can send us skbuffs w/out enough space. This check will grow the
+ * skb if it doesn't have enough headroom. Not a beautiful solution, so
+ * I'll tick a counter so that users can know it's happening... If they
+ * care...
+ */
+
+ /* NOTE: This may still break if the underlying device is not the final
+ * device (and thus there are more headers to add...) It should work for
+ * good-ole-ethernet though.
+ */
+ if (skb_headroom(skb) < dev->hard_header_len) {
+ struct sk_buff *sk_tmp = skb;
+ skb = skb_realloc_headroom(sk_tmp, dev->hard_header_len);
+ kfree_skb(sk_tmp);
+ if (skb == NULL) {
+ struct net_device_stats *stats = vlan_dev_get_stats(vdev);
+ stats->tx_dropped++;
+ return -ENOMEM;
+ }
+ VLAN_DEV_INFO(vdev)->cnt_inc_headroom_on_tx++;
+#ifdef VLAN_DEBUG
+ printk(VLAN_DBG __FUNCTION__ ": %s: had to grow skb.\n", vdev->name);
+#endif
+ }
+
+ if (build_vlan_header) {
+ /* Now make the underlying real hard header */
+ rc = dev->hard_header(skb, dev, ETH_P_8021Q, daddr, saddr, len + VLAN_HLEN);
+
+ if (rc > 0) {
+ rc += VLAN_HLEN;
+ } else if (rc < 0) {
+ rc -= VLAN_HLEN;
+ }
+ } else {
+ /* If here, then we'll just make a normal looking ethernet frame,
+ * but, the hard_start_xmit method will insert the tag (it has to
+ * be able to do this for bridged and other skbs that don't come
+ * down the protocol stack in an orderly manner.
+ */
+ rc = dev->hard_header(skb, dev, type, daddr, saddr, len);
+ }
+
+ return rc;
+}
+
+int vlan_dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct net_device_stats *stats = vlan_dev_get_stats(dev);
+ struct vlan_ethhdr *veth = (struct vlan_ethhdr *)(skb->data);
+
+ /* Handle non-VLAN frames if they are sent to us, for example by DHCP.
+ *
+ * NOTE: THIS ASSUMES DIX ETHERNET, SPECIFICALLY NOT SUPPORTING
+ * OTHER THINGS LIKE FDDI/TokenRing/802.3 SNAPs...
+ */
+
+ if (veth->h_vlan_proto != __constant_htons(ETH_P_8021Q)) {
+ /* This is not a VLAN frame...but we can fix that! */
+ unsigned short veth_TCI = 0;
+ VLAN_DEV_INFO(dev)->cnt_encap_on_xmit++;
+
+#ifdef VLAN_DEBUG
+ printk(VLAN_DBG __FUNCTION__ ": proto to encap: 0x%hx (hbo)\n",
+ ntohs(veth->h_vlan_proto));
+#endif
+
+ if (skb_headroom(skb) < VLAN_HLEN) {
+ struct sk_buff *sk_tmp = skb;
+ skb = skb_realloc_headroom(sk_tmp, VLAN_HLEN);
+ kfree_skb(sk_tmp);
+ if (skb == NULL) {
+ stats->tx_dropped++;
+ return -ENOMEM;
+ }
+ VLAN_DEV_INFO(dev)->cnt_inc_headroom_on_tx++;
+ } else {
+ if (!(skb = skb_unshare(skb, GFP_ATOMIC))) {
+ printk(KERN_ERR "vlan: failed to unshare skbuff\n");
+ stats->tx_dropped++;
+ return -ENOMEM;
+ }
+ }
+ veth = (struct vlan_ethhdr *)skb_push(skb, VLAN_HLEN);
+
+ /* Move the mac addresses to the beginning of the new header. */
+ memmove(skb->data, skb->data + VLAN_HLEN, 12);
+
+ /* first, the ethernet type */
+ /* put_unaligned(__constant_htons(ETH_P_8021Q), &veth->h_vlan_proto); */
+ veth->h_vlan_proto = __constant_htons(ETH_P_8021Q);
+
+ /* Now, construct the second two bytes. This field looks something
+ * like:
+ * usr_priority: 3 bits (high bits)
+ * CFI 1 bit
+ * VLAN ID 12 bits (low bits)
+ */
+ veth_TCI = VLAN_DEV_INFO(dev)->vlan_id;
+ veth_TCI |= vlan_dev_get_egress_qos_mask(dev, skb);
+
+ veth->h_vlan_TCI = htons(veth_TCI);
+ }
+
+ skb->dev = VLAN_DEV_INFO(dev)->real_dev;
+
+#ifdef VLAN_DEBUG
+ printk(VLAN_DBG __FUNCTION__ ": about to send skb: %p to dev: %s\n",
+ skb, skb->dev->name);
+ printk(VLAN_DBG " %2hx.%2hx.%2hx.%2hx.%2hx.%2hx %2hx.%2hx.%2hx.%2hx.%2hx.%2hx %4hx %4hx %4hx\n",
+ veth->h_dest[0], veth->h_dest[1], veth->h_dest[2], veth->h_dest[3], veth->h_dest[4], veth->h_dest[5],
+ veth->h_source[0], veth->h_source[1], veth->h_source[2], veth->h_source[3], veth->h_source[4], veth->h_source[5],
+ veth->h_vlan_proto, veth->h_vlan_TCI, veth->h_vlan_encapsulated_proto);
+#endif
+
+ /* Account for the packet before handing it off: dev_queue_xmit()
+ * consumes the skb, so skb->len must not be read afterwards.
+ */
+ stats->tx_packets++; /* for statistics only */
+ stats->tx_bytes += skb->len;
+
+ dev_queue_xmit(skb);
+ return 0;
+}
+
+int vlan_dev_change_mtu(struct net_device *dev, int new_mtu)
+{
+ /* TODO: gotta make sure the underlying layer can handle it,
+ * maybe an IFF_VLAN_CAPABLE flag for devices?
+ */
+ if (VLAN_DEV_INFO(dev)->real_dev->mtu < new_mtu)
+ return -ERANGE;
+
+ dev->mtu = new_mtu;
+
+ return 0;
+}
+
+int vlan_dev_open(struct net_device *dev)
+{
+ if (!(VLAN_DEV_INFO(dev)->real_dev->flags & IFF_UP))
+ return -ENETDOWN;
+
+ return 0;
+}
+
+int vlan_dev_stop(struct net_device *dev)
+{
+ vlan_flush_mc_list(dev);
+ return 0;
+}
+
+int vlan_dev_init(struct net_device *dev)
+{
+ /* TODO: figure this out, maybe do nothing?? */
+ return 0;
+}
+
+void vlan_dev_destruct(struct net_device *dev)
+{
+ if (dev) {
+ vlan_flush_mc_list(dev);
+ if (dev->priv) {
+ dev_put(VLAN_DEV_INFO(dev)->real_dev);
+ if (VLAN_DEV_INFO(dev)->dent) {
+ printk(KERN_ERR __FUNCTION__ ": dent is NOT NULL!\n");
+
+ /* If we ever get here, there is a serious bug
+ * that must be fixed.
+ */
+ }
+
+ kfree(dev->priv);
+
+ VLAN_FMEM_DBG("dev->priv free, addr: %p\n", dev->priv);
+ dev->priv = NULL;
+ }
+
+ kfree(dev);
+ VLAN_FMEM_DBG("net_device free, addr: %p\n", dev);
+ dev = NULL;
+ }
+}
+
+int vlan_dev_set_ingress_priority(char *dev_name, __u32 skb_prio, short vlan_prio)
+{
+ struct net_device *dev = dev_get_by_name(dev_name);
+
+ if (dev) {
+ if (dev->priv_flags & IFF_802_1Q_VLAN) {
+ /* see if a priority mapping exists.. */
+ VLAN_DEV_INFO(dev)->ingress_priority_map[vlan_prio & 0x7] = skb_prio;
+ dev_put(dev);
+ return 0;
+ }
+
+ dev_put(dev);
+ }
+ return -EINVAL;
+}
+
+int vlan_dev_set_egress_priority(char *dev_name, __u32 skb_prio, short vlan_prio)
+{
+ struct net_device *dev = dev_get_by_name(dev_name);
+ struct vlan_priority_tci_mapping *mp = NULL;
+ struct vlan_priority_tci_mapping *np;
+
+ if (dev) {
+ if (dev->priv_flags & IFF_802_1Q_VLAN) {
+ /* See if a priority mapping exists already... */
+ mp = VLAN_DEV_INFO(dev)->egress_priority_map[skb_prio & 0xF];
+ while (mp) {
+ if (mp->priority == skb_prio) {
+ mp->vlan_qos = ((vlan_prio << 13) & 0xE000);
+ dev_put(dev);
+ return 0;
+ }
+ mp = mp->next; /* advance, or this loop never terminates */
+ }
+
+ /* Create a new mapping then. */
+ mp = VLAN_DEV_INFO(dev)->egress_priority_map[skb_prio & 0xF];
+ np = kmalloc(sizeof(struct vlan_priority_tci_mapping), GFP_KERNEL);
+ if (np) {
+ np->next = mp;
+ np->priority = skb_prio;
+ np->vlan_qos = ((vlan_prio << 13) & 0xE000);
+ VLAN_DEV_INFO(dev)->egress_priority_map[skb_prio & 0xF] = np;
+ dev_put(dev);
+ return 0;
+ } else {
+ dev_put(dev);
+ return -ENOBUFS;
+ }
+ }
+ dev_put(dev);
+ }
+ return -EINVAL;
+}
+
+/* Flags are defined in the vlan_dev_info structure in include/linux/if_vlan.h. */
+int vlan_dev_set_vlan_flag(char *dev_name, __u32 flag, short flag_val)
+{
+ struct net_device *dev = dev_get_by_name(dev_name);
+
+ if (dev) {
+ if (dev->priv_flags & IFF_802_1Q_VLAN) {
+ /* verify flag is supported */
+ if (flag == 1) {
+ if (flag_val) {
+ VLAN_DEV_INFO(dev)->flags |= 1;
+ } else {
+ VLAN_DEV_INFO(dev)->flags &= ~1;
+ }
+ dev_put(dev);
+ return 0;
+ } else {
+ printk(KERN_ERR __FUNCTION__ ": flag %i is not valid.\n",
+ (int)(flag));
+ dev_put(dev);
+ return -EINVAL;
+ }
+ } else {
+ printk(KERN_ERR __FUNCTION__
+ ": %s is not a vlan device, priv_flags: %hX.\n",
+ dev->name, dev->priv_flags);
+ dev_put(dev);
+ }
+ } else {
+ printk(KERN_ERR __FUNCTION__ ": Could not find device: %s\n", dev_name);
+ }
+
+ return -EINVAL;
+}
+
+int vlan_dev_set_mac_address(struct net_device *dev, void *addr_struct_p)
+{
+ struct sockaddr *addr = (struct sockaddr *)(addr_struct_p);
+ int i;
+
+ if (netif_running(dev))
+ return -EBUSY;
+
+ memcpy(dev->dev_addr, addr->sa_data, dev->addr_len);
+
+ printk("%s: Setting MAC address to ", dev->name);
+ for (i = 0; i < 6; i++)
+ printk(" %2.2x", dev->dev_addr[i]);
+ printk(".\n");
+
+ if (memcmp(VLAN_DEV_INFO(dev)->real_dev->dev_addr,
+ dev->dev_addr,
+ dev->addr_len) != 0) {
+ if (!(VLAN_DEV_INFO(dev)->real_dev->flags & IFF_PROMISC)) {
+ int flgs = VLAN_DEV_INFO(dev)->real_dev->flags;
+
+ /* Increment our in-use promiscuity counter */
+ dev_set_promiscuity(VLAN_DEV_INFO(dev)->real_dev, 1);
+
+ /* Make PROMISC visible to the user. */
+ flgs |= IFF_PROMISC;
+ printk("VLAN (%s): Setting underlying device (%s) to promiscuous mode.\n",
+ dev->name, VLAN_DEV_INFO(dev)->real_dev->name);
+ dev_change_flags(VLAN_DEV_INFO(dev)->real_dev, flgs);
+ }
+ } else {
+ printk("VLAN (%s): Underlying device (%s) has same MAC, not checking promiscuous mode.\n",
+ dev->name, VLAN_DEV_INFO(dev)->real_dev->name);
+ }
+
+ return 0;
+}
+
+/** Taken from Gleb + Lennert's VLAN code, and modified... */
+void vlan_dev_set_multicast_list(struct net_device *vlan_dev)
+{
+ struct dev_mc_list *dmi;
+ struct net_device *real_dev;
+ int inc;
+
+ if (vlan_dev && (vlan_dev->priv_flags & IFF_802_1Q_VLAN)) {
+ /* Then it's a real vlan device, as far as we can tell.. */
+ real_dev = VLAN_DEV_INFO(vlan_dev)->real_dev;
+
+ /* compare the current promiscuity to the last promisc we had.. */
+ inc = vlan_dev->promiscuity - VLAN_DEV_INFO(vlan_dev)->old_promiscuity;
+ if (inc) {
+ printk(KERN_INFO "%s: dev_set_promiscuity(master, %d)\n",
+ vlan_dev->name, inc);
+ dev_set_promiscuity(real_dev, inc); /* found in dev.c */
+ VLAN_DEV_INFO(vlan_dev)->old_promiscuity = vlan_dev->promiscuity;
+ }
+
+ inc = vlan_dev->allmulti - VLAN_DEV_INFO(vlan_dev)->old_allmulti;
+ if (inc) {
+ printk(KERN_INFO "%s: dev_set_allmulti(master, %d)\n",
+ vlan_dev->name, inc);
+ dev_set_allmulti(real_dev, inc); /* dev.c */
+ VLAN_DEV_INFO(vlan_dev)->old_allmulti = vlan_dev->allmulti;
+ }
+
+ /* looking for addresses to add to master's list */
+ for (dmi = vlan_dev->mc_list; dmi != NULL; dmi = dmi->next) {
+ if (vlan_should_add_mc(dmi, VLAN_DEV_INFO(vlan_dev)->old_mc_list)) {
+ dev_mc_add(real_dev, dmi->dmi_addr, dmi->dmi_addrlen, 0);
+ printk(KERN_INFO "%s: add %.2x:%.2x:%.2x:%.2x:%.2x:%.2x mcast address to master interface\n",
+ vlan_dev->name,
+ dmi->dmi_addr[0],
+ dmi->dmi_addr[1],
+ dmi->dmi_addr[2],
+ dmi->dmi_addr[3],
+ dmi->dmi_addr[4],
+ dmi->dmi_addr[5]);
+ }
+ }
+
+ /* looking for addresses to delete from master's list */
+ for (dmi = VLAN_DEV_INFO(vlan_dev)->old_mc_list; dmi != NULL; dmi = dmi->next) {
+ if (vlan_should_add_mc(dmi, vlan_dev->mc_list)) {
+ /* if we think we should add it to the new list, then we should really
+ * delete it from the real list on the underlying device.
+ */
+ dev_mc_delete(real_dev, dmi->dmi_addr, dmi->dmi_addrlen, 0);
+ printk(KERN_INFO "%s: del %.2x:%.2x:%.2x:%.2x:%.2x:%.2x mcast address from master interface\n",
+ vlan_dev->name,
+ dmi->dmi_addr[0],
+ dmi->dmi_addr[1],
+ dmi->dmi_addr[2],
+ dmi->dmi_addr[3],
+ dmi->dmi_addr[4],
+ dmi->dmi_addr[5]);
+ }
+ }
+
+ /* save multicast list */
+ vlan_copy_mc_list(vlan_dev->mc_list, VLAN_DEV_INFO(vlan_dev));
+ }
+}
+
+/** dmi is a single entry into a dev_mc_list, a single node. mc_list is
+ * an entire list, and we'll iterate through it.
+ */
+int vlan_should_add_mc(struct dev_mc_list *dmi, struct dev_mc_list *mc_list)
+{
+ struct dev_mc_list *idmi;
+
+ for (idmi = mc_list; idmi != NULL; idmi = idmi->next) {
+ if (vlan_dmi_equals(dmi, idmi))
+ return (dmi->dmi_users > idmi->dmi_users);
+ }
+
+ return 1;
+}
+
+void vlan_copy_mc_list(struct dev_mc_list *mc_list, struct vlan_dev_info *vlan_info)
+{
+ struct dev_mc_list *dmi, *new_dmi;
+
+ vlan_destroy_mc_list(vlan_info->old_mc_list);
+ vlan_info->old_mc_list = NULL;
+
+ for (dmi = mc_list; dmi != NULL; dmi = dmi->next) {
+ new_dmi = kmalloc(sizeof(*new_dmi), GFP_ATOMIC);
+ if (new_dmi == NULL) {
+ printk(KERN_ERR "vlan: cannot allocate memory. "
+ "Multicast may not work properly from now on.\n");
+ return;
+ }
+
+ /* Copy whole structure, then make new 'next' pointer */
+ *new_dmi = *dmi;
+ new_dmi->next = vlan_info->old_mc_list;
+ vlan_info->old_mc_list = new_dmi;
+ }
+}
+
+void vlan_flush_mc_list(struct net_device *dev)
+{
+ struct dev_mc_list *dmi = dev->mc_list;
+
+ while (dmi) {
+ dev_mc_delete(dev, dmi->dmi_addr, dmi->dmi_addrlen, 0);
+ printk(KERN_INFO "%s: del %.2x:%.2x:%.2x:%.2x:%.2x:%.2x mcast address from vlan interface\n",
+ dev->name,
+ dmi->dmi_addr[0],
+ dmi->dmi_addr[1],
+ dmi->dmi_addr[2],
+ dmi->dmi_addr[3],
+ dmi->dmi_addr[4],
+ dmi->dmi_addr[5]);
+ dmi = dev->mc_list;
+ }
+
+ /* dev->mc_list is NULL by the time we get here. */
+ vlan_destroy_mc_list(VLAN_DEV_INFO(dev)->old_mc_list);
+ VLAN_DEV_INFO(dev)->old_mc_list = NULL;
+}
--- /dev/null
+/******************************************************************************
+ * vlanproc.c VLAN Module. /proc filesystem interface.
+ *
+ * This module is completely hardware-independent and provides
+ * access to the router using Linux /proc filesystem.
+ *
+ * Author: Ben Greear, <greearb@candelatech.com> copied from wanproc.c
+ * by: Gene Kozin <genek@compuserve.com>
+ *
+ * Copyright: (c) 1998 Ben Greear
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ * ============================================================================
+ * Jan 20, 1998 Ben Greear Initial Version
+ *****************************************************************************/
+
+#include <linux/config.h>
+#include <linux/stddef.h> /* offsetof(), etc. */
+#include <linux/errno.h> /* return codes */
+#include <linux/kernel.h>
+#include <linux/malloc.h> /* kmalloc(), kfree() */
+#include <linux/mm.h> /* verify_area(), etc. */
+#include <linux/string.h> /* inline mem*, str* functions */
+#include <linux/init.h> /* __initfunc et al. */
+#include <asm/segment.h> /* kernel <-> user copy */
+#include <asm/byteorder.h> /* htons(), etc. */
+#include <asm/uaccess.h> /* copy_to_user */
+#include <asm/io.h>
+#include <linux/proc_fs.h>
+#include <linux/fs.h>
+#include <linux/netdevice.h>
+#include <linux/if_vlan.h>
+#include "vlanproc.h"
+#include "vlan.h"
+
+/****** Function Prototypes *************************************************/
+
+#ifdef CONFIG_PROC_FS
+
+/* Proc filesystem interface */
+static ssize_t vlan_proc_read(struct file *file, char *buf, size_t count,
+ loff_t *ppos);
+
+/* Methods for preparing data for reading proc entries */
+
+static int vlan_config_get_info(char *buf, char **start, off_t offs, int len);
+static int vlandev_get_info(char *buf, char **start, off_t offs, int len);
+
+/* Miscellaneous */
+
+/*
+ * Global Data
+ */
+
+/*
+ * Names of the proc directory entries
+ */
+
+static char name_root[] = "vlan";
+static char name_conf[] = "config";
+static char term_msg[] = "***KERNEL: Out of buffer space!***\n";
+
+/*
+ * Structures for interfacing with the /proc filesystem.
+ * VLAN creates its own directory /proc/net/vlan with the following
+ * entries:
+ * config device status/configuration
+ * <device> entry for each device
+ */
+
+/*
+ * Generic /proc/net/vlan/<file> file and inode operations
+ */
+
+static struct file_operations vlan_fops = {
+ read: vlan_proc_read,
+ ioctl: NULL, /* vlan_proc_ioctl */
+};
+
+/*
+ * /proc/net/vlan/<device> file and inode operations
+ */
+
+static struct file_operations vlandev_fops = {
+ read: vlan_proc_read,
+ ioctl: NULL, /* vlan_proc_ioctl */
+};
+
+/*
+ * Proc filesystem directory entries.
+ */
+
+/*
+ * /proc/net/vlan
+ */
+
+static struct proc_dir_entry *proc_vlan_dir;
+
+/*
+ * /proc/net/vlan/config
+ */
+
+static struct proc_dir_entry *proc_vlan_conf;
+
+/* Strings */
+static char conf_hdr[] = "VLAN Dev name | VLAN ID\n";
+
+/*
+ * Interface functions
+ */
+
+/*
+ * Clean up /proc/net/vlan entries
+ */
+
+void __exit vlan_proc_cleanup(void)
+{
+ if (proc_vlan_conf)
+ remove_proc_entry(name_conf, proc_vlan_dir);
+
+ if (proc_vlan_dir)
+ proc_net_remove(name_root);
+
+ /* Dynamically added entries should be cleaned up as their vlan_device
+ * is removed, so we should not have to take care of it here...
+ */
+}
+
+/*
+ * Create /proc/net/vlan entries
+ */
+
+int __init vlan_proc_init(void)
+{
+ proc_vlan_dir = proc_mkdir(name_root, proc_net);
+ if (proc_vlan_dir) {
+ proc_vlan_conf = create_proc_entry(name_conf,
+ S_IFREG|S_IRUSR|S_IWUSR,
+ proc_vlan_dir);
+ if (proc_vlan_conf) {
+ proc_vlan_conf->proc_fops = &vlan_fops;
+ proc_vlan_conf->get_info = vlan_config_get_info;
+ return 0;
+ }
+ }
+ vlan_proc_cleanup();
+ return -ENOBUFS;
+}
+
+/*
+ * Add directory entry for VLAN device.
+ */
+
+int vlan_proc_add_dev (struct net_device *vlandev)
+{
+ struct vlan_dev_info *dev_info = VLAN_DEV_INFO(vlandev);
+
+ if (!(vlandev->priv_flags & IFF_802_1Q_VLAN)) {
+ printk(KERN_ERR
+ "ERROR: vlan_proc_add, device -:%s:- is NOT a VLAN\n",
+ vlandev->name);
+ return -EINVAL;
+ }
+
+ dev_info->dent = create_proc_entry(vlandev->name,
+ S_IFREG|S_IRUSR|S_IWUSR,
+ proc_vlan_dir);
+ if (!dev_info->dent)
+ return -ENOBUFS;
+
+ dev_info->dent->proc_fops = &vlandev_fops;
+ dev_info->dent->get_info = &vlandev_get_info;
+ dev_info->dent->data = vlandev;
+
+#ifdef VLAN_DEBUG
+ printk(KERN_ERR "vlan_proc_add, device -:%s:- being added.\n",
+ vlandev->name);
+#endif
+ return 0;
+}
+
+/*
+ * Delete directory entry for VLAN device.
+ */
+int vlan_proc_rem_dev(struct net_device *vlandev)
+{
+ if (!vlandev) {
+ printk(VLAN_ERR __FUNCTION__ ": invalid argument: %p\n",
+ vlandev);
+ return -EINVAL;
+ }
+
+ if (!(vlandev->priv_flags & IFF_802_1Q_VLAN)) {
+ printk(VLAN_DBG __FUNCTION__ ": invalid argument, device: %s is not a VLAN device, priv_flags: 0x%4hX.\n",
+ vlandev->name, vlandev->priv_flags);
+ return -EINVAL;
+ }
+
+#ifdef VLAN_DEBUG
+ printk(VLAN_DBG __FUNCTION__ ": dev: %p\n", vlandev);
+#endif
+
+ /** NOTE: This will consume the memory pointed to by dent, it seems. */
+ remove_proc_entry(VLAN_DEV_INFO(vlandev)->dent->name, proc_vlan_dir);
+ VLAN_DEV_INFO(vlandev)->dent = NULL;
+
+ return 0;
+}
+
+/****** Proc filesystem entry points ****************************************/
+
+/*
+ * Read VLAN proc directory entry.
+ * This is a universal routine for reading all entries in the /proc/net/vlan
+ * directory. Each directory entry contains a pointer to the 'method' for
+ * preparing data for that entry.
+ * o verify arguments
+ * o allocate kernel buffer
+ * o call get_info() to prepare data
+ * o copy data to user space
+ * o release kernel buffer
+ *
+ * Return: number of bytes copied to user space (0, if no data)
+ * <0 error
+ */
+static ssize_t vlan_proc_read(struct file *file, char *buf,
+ size_t count, loff_t *ppos)
+{
+ struct inode *inode = file->f_dentry->d_inode;
+ struct proc_dir_entry *dent;
+ char *page;
+ int pos, offs, len;
+
+ if (count <= 0)
+ return 0;
+
+ dent = inode->u.generic_ip;
+ if ((dent == NULL) || (dent->get_info == NULL))
+ return 0;
+
+ page = kmalloc(VLAN_PROC_BUFSZ, GFP_KERNEL);
+ VLAN_MEM_DBG("page malloc, addr: %p size: %i\n",
+ page, VLAN_PROC_BUFSZ);
+
+ if (page == NULL)
+ return -ENOBUFS;
+
+ pos = dent->get_info(page, dent->data, 0, 0);
+ offs = file->f_pos;
+ if (offs < pos) {
+ len = min_t(int, pos - offs, count);
+ if (copy_to_user(buf, (page + offs), len)) {
+ kfree(page);
+ return -EFAULT;
+ }
+
+ file->f_pos += len;
+ } else {
+ len = 0;
+ }
+
+ kfree(page);
+ VLAN_FMEM_DBG("page free, addr: %p\n", page);
+ return len;
+}
+
+/*
+ * The following few functions build the content of /proc/net/vlan/config
+ */
+
+static int vlan_proc_get_vlan_info(char* buf, unsigned int cnt)
+{
+ struct net_device *vlandev = NULL;
+ struct vlan_group *grp = NULL;
+ int i = 0;
+ char *nm_type = NULL;
+ struct vlan_dev_info *dev_info = NULL;
+
+#ifdef VLAN_DEBUG
+ printk(VLAN_DBG __FUNCTION__ ": cnt == %i\n", cnt);
+#endif
+
+ if (vlan_name_type == VLAN_NAME_TYPE_RAW_PLUS_VID) {
+ nm_type = "VLAN_NAME_TYPE_RAW_PLUS_VID";
+ } else if (vlan_name_type == VLAN_NAME_TYPE_PLUS_VID_NO_PAD) {
+ nm_type = "VLAN_NAME_TYPE_PLUS_VID_NO_PAD";
+ } else if (vlan_name_type == VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD) {
+ nm_type = "VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD";
+ } else if (vlan_name_type == VLAN_NAME_TYPE_PLUS_VID) {
+ nm_type = "VLAN_NAME_TYPE_PLUS_VID";
+ } else {
+ nm_type = "UNKNOWN";
+ }
+
+ cnt += sprintf(buf + cnt, "Name-Type: %s bad_proto_recvd: %lu\n",
+ nm_type, vlan_bad_proto_recvd);
+
+ for (grp = p802_1Q_vlan_list; grp != NULL; grp = grp->next) {
+ /* loop through all devices for this device */
+#ifdef VLAN_DEBUG
+ printk(VLAN_DBG __FUNCTION__ ": found a group, addr: %p\n",grp);
+#endif
+ for (i = 0; i < VLAN_GROUP_ARRAY_LEN; i++) {
+ vlandev = grp->vlan_devices[i];
+ if (!vlandev)
+ continue;
+#ifdef VLAN_DEBUG
+ printk(VLAN_DBG __FUNCTION__
+ ": found a vlan_dev, addr: %p\n", vlandev);
+#endif
+ if ((cnt + 100) > VLAN_PROC_BUFSZ) {
+ if ((cnt+strlen(term_msg)) < VLAN_PROC_BUFSZ)
+ cnt += sprintf(buf+cnt, "%s", term_msg);
+
+ return cnt;
+ }
+ if (!vlandev->priv) {
+ printk(KERN_ERR __FUNCTION__
+ ": ERROR: vlandev->priv is NULL\n");
+ continue;
+ }
+
+ dev_info = VLAN_DEV_INFO(vlandev);
+
+#ifdef VLAN_DEBUG
+ printk(VLAN_DBG __FUNCTION__
+ ": got a good vlandev, addr: %p\n",
+ VLAN_DEV_INFO(vlandev));
+#endif
+ cnt += sprintf(buf + cnt, "%-15s| %d | %s\n",
+ vlandev->name, dev_info->vlan_id,
+ dev_info->real_dev->name);
+ }
+ }
+ return cnt;
+}
+
+/*
+ * Prepare data for reading 'Config' entry.
+ * Return length of data.
+ */
+
+static int vlan_config_get_info(char *buf, char **start,
+ off_t offs, int len)
+{
+ strcpy(buf, conf_hdr);
+ return vlan_proc_get_vlan_info(buf, (unsigned int)(strlen(conf_hdr)));
+}
+
+/*
+ * Prepare data for reading <device> entry.
+ * Return length of data.
+ *
+ * On entry, the 'start' argument will contain a pointer to VLAN device
+ * data space.
+ */
+
+static int vlandev_get_info(char *buf, char **start,
+ off_t offs, int len)
+{
+ struct net_device *vlandev = (void *) start;
+ struct net_device_stats *stats = NULL;
+ struct vlan_dev_info *dev_info = NULL;
+ struct vlan_priority_tci_mapping *mp;
+ int cnt = 0;
+ int i;
+
+#ifdef VLAN_DEBUG
+ printk(VLAN_DBG __FUNCTION__ ": vlandev: %p\n", vlandev);
+#endif
+
+ if ((vlandev == NULL) || !(vlandev->priv_flags & IFF_802_1Q_VLAN))
+ return 0;
+
+ dev_info = VLAN_DEV_INFO(vlandev);
+
+ cnt += sprintf(buf + cnt, "%s VID: %d REORDER_HDR: %i dev->priv_flags: %hx\n",
+ vlandev->name, dev_info->vlan_id,
+ (int)(dev_info->flags & 1), vlandev->priv_flags);
+
+ stats = vlan_dev_get_stats(vlandev);
+
+ cnt += sprintf(buf + cnt, "%30s: %12lu\n",
+ "total frames received", stats->rx_packets);
+
+ cnt += sprintf(buf + cnt, "%30s: %12lu\n",
+ "total bytes received", stats->rx_bytes);
+
+ cnt += sprintf(buf + cnt, "%30s: %12lu\n",
+ "Broadcast/Multicast Rcvd", stats->multicast);
+
+ cnt += sprintf(buf + cnt, "\n%30s: %12lu\n",
+ "total frames transmitted", stats->tx_packets);
+
+ cnt += sprintf(buf + cnt, "%30s: %12lu\n",
+ "total bytes transmitted", stats->tx_bytes);
+
+ cnt += sprintf(buf + cnt, "%30s: %12lu\n",
+ "total headroom inc", dev_info->cnt_inc_headroom_on_tx);
+
+ cnt += sprintf(buf + cnt, "%30s: %12lu\n",
+ "total encap on xmit", dev_info->cnt_encap_on_xmit);
+
+ cnt += sprintf(buf + cnt, "Device: %s", dev_info->real_dev->name);
+
+ /* now show all PRIORITY mappings relating to this VLAN */
+ cnt += sprintf(buf + cnt, "\nINGRESS priority mappings: 0:%lu 1:%lu 2:%lu 3:%lu 4:%lu 5:%lu 6:%lu 7:%lu\n",
+ dev_info->ingress_priority_map[0],
+ dev_info->ingress_priority_map[1],
+ dev_info->ingress_priority_map[2],
+ dev_info->ingress_priority_map[3],
+ dev_info->ingress_priority_map[4],
+ dev_info->ingress_priority_map[5],
+ dev_info->ingress_priority_map[6],
+ dev_info->ingress_priority_map[7]);
+
+ if ((cnt + 100) > VLAN_PROC_BUFSZ) {
+ if ((cnt + strlen(term_msg)) >= VLAN_PROC_BUFSZ) {
+ /* should never get here */
+ return cnt;
+ } else {
+ cnt += sprintf(buf + cnt, "%s", term_msg);
+ return cnt;
+ }
+ }
+
+ cnt += sprintf(buf + cnt, "EGRESS priority mappings: ");
+
+ for (i = 0; i<16; i++) {
+ mp = dev_info->egress_priority_map[i];
+ while (mp) {
+ cnt += sprintf(buf + cnt, "%lu:%hu ",
+ mp->priority, ((mp->vlan_qos >> 13) & 0x7));
+
+ if ((cnt + 100) > VLAN_PROC_BUFSZ) {
+ if ((cnt + strlen(term_msg)) >= VLAN_PROC_BUFSZ) {
+ /* should never get here */
+ return cnt;
+ } else {
+ cnt += sprintf(buf + cnt, "%s", term_msg);
+ return cnt;
+ }
+ }
+ mp = mp->next;
+ }
+ }
+
+ cnt += sprintf(buf + cnt, "\n");
+
+ return cnt;
+}
+
+#else /* No CONFIG_PROC_FS */
+
+/*
+ * No /proc - output stubs
+ */
+
+int __init vlan_proc_init (void)
+{
+ return 0;
+}
+
+void __exit vlan_proc_cleanup(void)
+{
+ return;
+}
+
+
+int vlan_proc_add_dev(struct net_device *vlandev)
+{
+ return 0;
+}
+
+int vlan_proc_rem_dev(struct net_device *vlandev)
+{
+ return 0;
+}
+
+#endif /* No CONFIG_PROC_FS */
--- /dev/null
+#ifndef __BEN_VLAN_PROC_INC__
+#define __BEN_VLAN_PROC_INC__
+
+int vlan_proc_init(void);
+
+int vlan_proc_rem_dev(struct net_device *vlandev);
+int vlan_proc_add_dev (struct net_device *vlandev);
+void vlan_proc_cleanup (void);
+
+#define VLAN_PROC_BUFSZ (4096) /* buffer size for printing proc info */
+
+#endif /* !(__BEN_VLAN_PROC_INC__) */
tristate ' Multi-Protocol Over ATM (MPOA) support' CONFIG_ATM_MPOA
fi
fi
+
+ dep_tristate '802.1Q VLAN Support (EXPERIMENTAL)' CONFIG_VLAN_8021Q $CONFIG_EXPERIMENTAL
+
fi
comment ' '
subdir-$(CONFIG_ATM) += atm
subdir-$(CONFIG_DECNET) += decnet
subdir-$(CONFIG_ECONET) += econet
+subdir-$(CONFIG_VLAN_8021Q) += 8021q
obj-y := socket.o $(join $(subdir-y), $(patsubst %,/%.o,$(notdir $(subdir-y))))
obj-$(CONFIG_NET_PROFILE) += profile.o
include $(TOPDIR)/Rules.make
-
-tar:
- tar -cvf /dev/f1 .
extern int plip_init(void);
#endif
+
/* This define, if set, will randomly drop a packet when congestion
* is more than moderate. It helps fairness in the multi-interface
* case when one of them is a hog, but it kills performance for the
* and the routines to invoke.
*
* Why 16. Because with 16 the only overlap we get on a hash of the
- * low nibble of the protocol value is RARP/SNAP/X.25.
+ * low nibble of the protocol value is RARP/SNAP/X.25.
+ *
+ * NOTE: That is no longer true with the addition of VLAN tags. Not
+ * sure which should go first, but I bet it won't make much
+ * difference if we are running VLANs. The good news is that
+ * this protocol won't be in the list unless compiled in, so
+ * the average user (w/out VLANs) will not be adversely affected.
+ * --BLG
*
* 0800 IP
+ * 8100 802.1Q VLAN
* 0001 802.3
* 0002 AX.25
* 0004 802.2
obj-$(CONFIG_NET) := $(OBJS) $(OBJ2)
include $(TOPDIR)/Rules.make
-
-tar:
- tar -cvf /dev/f1 .
obj-$(CONFIG_IP_PNP) += ipconfig.o
include $(TOPDIR)/Rules.make
-
-tar:
- tar -cvf /dev/f1 .
*
* PF_INET protocol family socket handler.
*
- * Version: $Id: af_inet.c,v 1.133 2001/08/06 13:21:16 davem Exp $
+ * Version: $Id: af_inet.c,v 1.135 2001/10/27 03:27:13 davem Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
int (*br_ioctl_hook)(unsigned long);
#endif
+#if defined(CONFIG_VLAN_8021Q) || defined(CONFIG_VLAN_8021Q_MODULE)
+int (*vlan_ioctl_hook)(unsigned long arg);
+#endif
+
/* The inetsw table contains everything that inet_create needs to
* build a new socket.
*/
if (sk->state != TCP_CLOSE)
goto out;
- err = -EAGAIN;
- if (sk->num == 0) {
- if (sk->prot->get_port(sk, 0) != 0)
- goto out;
- sk->sport = htons(sk->num);
- }
-
err = sk->prot->connect(sk, uaddr, addr_len);
if (err < 0)
goto out;
#endif
return -ENOPKG;
+ case SIOCGIFVLAN:
+ case SIOCSIFVLAN:
+#if defined(CONFIG_VLAN_8021Q) || defined(CONFIG_VLAN_8021Q_MODULE)
+#ifdef CONFIG_KMOD
+ if (vlan_ioctl_hook == NULL)
+ request_module("8021q");
+#endif
+ if (vlan_ioctl_hook != NULL)
+ return vlan_ioctl_hook(arg);
+#endif
+ return -ENOPKG;
+
case SIOCGIFDIVERT:
case SIOCSIFDIVERT:
#ifdef CONFIG_NET_DIVERT
static inline u8
ipgre_ecn_encapsulate(u8 tos, struct iphdr *old_iph, struct sk_buff *skb)
{
-#ifdef CONFIG_INET_ECN
u8 inner = 0;
if (skb->protocol == __constant_htons(ETH_P_IP))
inner = old_iph->tos;
else if (skb->protocol == __constant_htons(ETH_P_IPV6))
inner = ip6_get_dsfield((struct ipv6hdr*)old_iph);
return INET_ECN_encapsulate(tos, inner);
-#else
- return tos;
-#endif
}
int ipgre_rcv(struct sk_buff *skb)
*
* The IP to API glue.
*
- * Version: $Id: ip_sockglue.c,v 1.60 2001/09/18 22:29:09 davem Exp $
+ * Version: $Id: ip_sockglue.c,v 1.61 2001/10/20 00:00:11 davem Exp $
*
* Authors: see ip.c
*
sk->protinfo.af_inet.cmsg_flags &= ~IP_CMSG_RETOPTS;
break;
case IP_TOS: /* This sets both TOS and Precedence */
- /* Reject setting of unused bits */
-#ifndef CONFIG_INET_ECN
- if (val & ~(IPTOS_TOS_MASK|IPTOS_PREC_MASK))
- goto e_inval;
-#else
if (sk->type == SOCK_STREAM) {
val &= ~3;
val |= sk->protinfo.af_inet.tos & 3;
}
-#endif
if (IPTOS_PREC(val) >= IPTOS_PREC_CRITIC_ECP &&
!capable(CAP_NET_ADMIN)) {
err = -EPERM;
/*
- * $Id: ipconfig.c,v 1.39 2001/10/13 01:47:31 davem Exp $
+ * $Id: ipconfig.c,v 1.40 2001/10/30 03:08:02 davem Exp $
*
* Automatic Configuration of IP -- use DHCP, BOOTP, RARP, or
* user-supplied information to configure own IP address and routes.
printk(" by server %u.%u.%u.%u\n",
NIPQUAD(ic_servaddr));
#endif
+ /* The server address indicated by the DHCP
+ * option takes precedence over the one in the
+ * BOOTP header if they differ.
+ */
+ if ((server_id != INADDR_NONE) &&
+ (b->server_ip != server_id))
+ b->server_ip = ic_servaddr;
break;
case DHCPACK:
tristate 'Connection tracking (required for masq/NAT)' CONFIG_IP_NF_CONNTRACK
if [ "$CONFIG_IP_NF_CONNTRACK" != "n" ]; then
dep_tristate ' FTP protocol support' CONFIG_IP_NF_FTP $CONFIG_IP_NF_CONNTRACK
+ dep_tristate ' IRC protocol support' CONFIG_IP_NF_IRC $CONFIG_IP_NF_CONNTRACK
fi
if [ "$CONFIG_EXPERIMENTAL" = "y" -a "$CONFIG_NETLINK" = "y" ]; then
dep_tristate ' netfilter MARK match support' CONFIG_IP_NF_MATCH_MARK $CONFIG_IP_NF_IPTABLES
dep_tristate ' Multiple port match support' CONFIG_IP_NF_MATCH_MULTIPORT $CONFIG_IP_NF_IPTABLES
dep_tristate ' TOS match support' CONFIG_IP_NF_MATCH_TOS $CONFIG_IP_NF_IPTABLES
+ dep_tristate ' LENGTH match support' CONFIG_IP_NF_MATCH_LENGTH $CONFIG_IP_NF_IPTABLES
+ dep_tristate ' TTL match support' CONFIG_IP_NF_MATCH_TTL $CONFIG_IP_NF_IPTABLES
dep_tristate ' tcpmss match support' CONFIG_IP_NF_MATCH_TCPMSS $CONFIG_IP_NF_IPTABLES
if [ "$CONFIG_IP_NF_CONNTRACK" != "n" ]; then
dep_tristate ' Connection state match support' CONFIG_IP_NF_MATCH_STATE $CONFIG_IP_NF_CONNTRACK $CONFIG_IP_NF_IPTABLES
define_bool CONFIG_IP_NF_NAT_NEEDED y
dep_tristate ' MASQUERADE target support' CONFIG_IP_NF_TARGET_MASQUERADE $CONFIG_IP_NF_NAT
dep_tristate ' REDIRECT target support' CONFIG_IP_NF_TARGET_REDIRECT $CONFIG_IP_NF_NAT
+ if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
+ dep_tristate ' Basic SNMP-ALG support (EXPERIMENTAL)' CONFIG_IP_NF_NAT_SNMP_BASIC $CONFIG_IP_NF_NAT
+ fi
+ if [ "$CONFIG_IP_NF_IRC" = "m" ]; then
+ define_tristate CONFIG_IP_NF_NAT_IRC m
+ else
+ if [ "$CONFIG_IP_NF_IRC" = "y" ]; then
+ define_tristate CONFIG_IP_NF_NAT_IRC $CONFIG_IP_NF_NAT
+ fi
+ fi
# If they want FTP, set to $CONFIG_IP_NF_NAT (m or y),
# or $CONFIG_IP_NF_FTP (m or y), whichever is weaker. Argh.
if [ "$CONFIG_IP_NF_FTP" = "m" ]; then
# connection tracking
obj-$(CONFIG_IP_NF_CONNTRACK) += ip_conntrack.o
+# IRC support
+obj-$(CONFIG_IP_NF_IRC) += ip_conntrack_irc.o
+obj-$(CONFIG_IP_NF_NAT_IRC) += ip_nat_irc.o
+
# connection tracking helpers
obj-$(CONFIG_IP_NF_FTP) += ip_conntrack_ftp.o
obj-$(CONFIG_IP_NF_MATCH_MULTIPORT) += ipt_multiport.o
obj-$(CONFIG_IP_NF_MATCH_OWNER) += ipt_owner.o
obj-$(CONFIG_IP_NF_MATCH_TOS) += ipt_tos.o
+
+obj-$(CONFIG_IP_NF_MATCH_LENGTH) += ipt_length.o
+
+obj-$(CONFIG_IP_NF_MATCH_TTL) += ipt_ttl.o
obj-$(CONFIG_IP_NF_MATCH_STATE) += ipt_state.o
obj-$(CONFIG_IP_NF_MATCH_UNCLEAN) += ipt_unclean.o
obj-$(CONFIG_IP_NF_MATCH_TCPMSS) += ipt_tcpmss.o
obj-$(CONFIG_IP_NF_TARGET_MARK) += ipt_MARK.o
obj-$(CONFIG_IP_NF_TARGET_MASQUERADE) += ipt_MASQUERADE.o
obj-$(CONFIG_IP_NF_TARGET_REDIRECT) += ipt_REDIRECT.o
+obj-$(CONFIG_IP_NF_NAT_SNMP_BASIC) += ip_nat_snmp_basic.o
obj-$(CONFIG_IP_NF_TARGET_LOG) += ipt_LOG.o
obj-$(CONFIG_IP_NF_TARGET_TCPMSS) += ipt_TCPMSS.o
LOCK_BH(&ip_ftp_lock);
if (htonl((array[0] << 24) | (array[1] << 16) | (array[2] << 8) | array[3])
== ct->tuplehash[dir].tuple.src.ip) {
- info->is_ftp = 1;
+ info->is_ftp = 21;
info->seq = ntohl(tcph->seq) + matchoff;
info->len = matchlen;
info->ftptype = search[i].ftptype;
--- /dev/null
+/* IRC extension for IP connection tracking, Version 1.19
+ * (C) 2000 by Harald Welte <laforge@gnumonks.org>
+ * based on RR's ip_conntrack_ftp.c
+ *
+ * ip_conntrack_irc.c,v 1.19 2001/10/25 14:34:21 laforge Exp
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * Module load syntax:
+ * insmod ip_conntrack_irc.o ports=port1,port2,...port<MAX_PORTS>
+ *
+ * Please give the ports of all IRC servers you wish to connect to.
+ * If you don't specify ports, the default will be port 6667.
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/netfilter.h>
+#include <linux/ip.h>
+#include <net/checksum.h>
+#include <net/tcp.h>
+
+#include <linux/netfilter_ipv4/lockhelp.h>
+#include <linux/netfilter_ipv4/ip_conntrack_helper.h>
+#include <linux/netfilter_ipv4/ip_conntrack_irc.h>
+
+#define MAX_PORTS 8
+static int ports[MAX_PORTS];
+static int ports_n_c = 0;
+
+MODULE_AUTHOR("Harald Welte <laforge@gnumonks.org>");
+MODULE_DESCRIPTION("IRC (DCC) connection tracking module");
+MODULE_LICENSE("GPL");
+#ifdef MODULE_PARM
+MODULE_PARM(ports, "1-" __MODULE_STRING(MAX_PORTS) "i");
+MODULE_PARM_DESC(ports, "port numbers of IRC servers");
+#endif
+
+#define NUM_DCCPROTO 5
+struct dccproto dccprotos[NUM_DCCPROTO] = {
+ {"SEND ", 5},
+ {"CHAT ", 5},
+ {"MOVE ", 5},
+ {"TSEND ", 6},
+ {"SCHAT ", 6}
+};
+#define MAXMATCHLEN 6
+
+DECLARE_LOCK(ip_irc_lock);
+struct module *ip_conntrack_irc = THIS_MODULE;
+
+#if 0
+#define DEBUGP(format, args...) printk(KERN_DEBUG __FILE__ ":" __FUNCTION__ \
+ ":" format, ## args)
+#else
+#define DEBUGP(format, args...)
+#endif
+
+int parse_dcc(char *data, char *data_end, u_int32_t * ip, u_int16_t * port,
+ char **ad_beg_p, char **ad_end_p)
+/* tries to get the ip_addr and port out of a dcc command
+ return value: -1 on failure, 0 on success
+ data pointer to first byte of DCC command data
+ data_end pointer to last byte of dcc command data
+ ip returns parsed ip of dcc command
+ port returns parsed port of dcc command
+ ad_beg_p returns pointer to first byte of addr data
+ ad_end_p returns pointer to last byte of addr data */
+{
+
+ /* at least 12: "AAAAAAAA P\1\n" */
+ while (*data++ != ' ')
+ if (data > data_end - 12)
+ return -1;
+
+ *ad_beg_p = data;
+ *ip = simple_strtoul(data, &data, 10);
+
+ /* skip blanks between ip and port */
+ while (*data == ' ')
+ data++;
+
+
+ *port = simple_strtoul(data, &data, 10);
+ *ad_end_p = data;
+
+ return 0;
+}
+
+
+/* FIXME: This should be in userspace. Later. */
+static int help(const struct iphdr *iph, size_t len,
+ struct ip_conntrack *ct, enum ip_conntrack_info ctinfo)
+{
+ /* tcplen not being negative is guaranteed by ip_conntrack_tcp.c */
+ struct tcphdr *tcph = (void *) iph + iph->ihl * 4;
+ const char *data = (const char *) tcph + tcph->doff * 4;
+ const char *_data = data;
+ char *data_limit;
+ u_int32_t tcplen = len - iph->ihl * 4;
+ u_int32_t datalen = tcplen - tcph->doff * 4;
+ int dir = CTINFO2DIR(ctinfo);
+ struct ip_conntrack_tuple t, mask;
+
+ u_int32_t dcc_ip;
+ u_int16_t dcc_port;
+ int i;
+ char *addr_beg_p, *addr_end_p;
+
+ struct ip_ct_irc *info = &ct->help.ct_irc_info;
+
+ memset(&mask, 0, sizeof(struct ip_conntrack_tuple));
+ mask.dst.u.tcp.port = 0xFFFF;
+ mask.dst.protonum = 0xFFFF;
+
+ DEBUGP("entered\n");
+ /* Can't track connections formed before we registered */
+ if (!info)
+ return NF_ACCEPT;
+
+ /* If packet is coming from IRC server */
+ if (dir == IP_CT_DIR_REPLY)
+ return NF_ACCEPT;
+
+ /* Until there's been traffic both ways, don't look in packets. */
+ if (ctinfo != IP_CT_ESTABLISHED
+ && ctinfo != IP_CT_ESTABLISHED + IP_CT_IS_REPLY) {
+ DEBUGP("Conntrackinfo = %u\n", ctinfo);
+ return NF_ACCEPT;
+ }
+
+ /* Not whole TCP header? */
+ if (tcplen < sizeof(struct tcphdr) || tcplen < tcph->doff * 4) {
+ DEBUGP("tcplen = %u\n", (unsigned) tcplen);
+ return NF_ACCEPT;
+ }
+
+ /* Checksum invalid? Ignore. */
+ /* FIXME: Source route IP option packets --RR */
+ if (tcp_v4_check(tcph, tcplen, iph->saddr, iph->daddr,
+ csum_partial((char *) tcph, tcplen, 0))) {
+ DEBUGP("bad csum: %p %u %u.%u.%u.%u %u.%u.%u.%u\n",
+ tcph, tcplen, NIPQUAD(iph->saddr),
+ NIPQUAD(iph->daddr));
+ return NF_ACCEPT;
+ }
+
+ data_limit = (char *) data + datalen;
+ while (data < (data_limit - (22 + MAXMATCHLEN))) {
+ if (memcmp(data, "\1DCC ", 5)) {
+ data++;
+ continue;
+ }
+
+ data += 5;
+
+ DEBUGP("DCC found in master %u.%u.%u.%u:%u %u.%u.%u.%u:%u...\n",
+ NIPQUAD(iph->saddr), ntohs(tcph->source),
+ NIPQUAD(iph->daddr), ntohs(tcph->dest));
+
+ for (i = 0; i < NUM_DCCPROTO; i++) {
+ if (memcmp(data, dccprotos[i].match,
+ dccprotos[i].matchlen)) {
+ /* no match */
+ continue;
+ }
+
+ DEBUGP("DCC %s detected\n", dccprotos[i].match);
+ data += dccprotos[i].matchlen;
+ if (parse_dcc((char *) data, data_limit, &dcc_ip,
+ &dcc_port, &addr_beg_p, &addr_end_p)) {
+ /* unable to parse */
+ DEBUGP("unable to parse dcc command\n");
+ continue;
+ }
+ DEBUGP("DCC bound ip/port: %u.%u.%u.%u:%u\n",
+ HIPQUAD(dcc_ip), dcc_port);
+
+ if (ct->tuplehash[dir].tuple.src.ip != htonl(dcc_ip)) {
+ if (net_ratelimit())
+ printk(KERN_WARNING
+ "Forged DCC command from "
+ "%u.%u.%u.%u: %u.%u.%u.%u:%u\n",
+ NIPQUAD(ct->tuplehash[dir].tuple.src.ip),
+ HIPQUAD(dcc_ip), dcc_port);
+
+ continue;
+ }
+
+ LOCK_BH(&ip_irc_lock);
+
+ /* save position of address in dcc string,
+ * necessary for NAT */
+ info->is_irc = IP_CONNTR_IRC;
+ DEBUGP("tcph->seq = %u\n", tcph->seq);
+ info->seq = ntohl(tcph->seq) + (addr_beg_p - _data);
+ info->len = (addr_end_p - addr_beg_p);
+ info->port = dcc_port;
+ DEBUGP("wrote info seq=%u (ofs=%u), len=%d\n",
+ info->seq, (addr_end_p - _data), info->len);
+
+ memset(&t, 0, sizeof(t));
+ t.src.ip = 0;
+ t.src.u.tcp.port = 0;
+ t.dst.ip = htonl(dcc_ip);
+ t.dst.u.tcp.port = htons(info->port);
+ t.dst.protonum = IPPROTO_TCP;
+
+ DEBUGP("expect_related %u.%u.%u.%u:%u-%u.%u.%u.%u:%u\n",
+ NIPQUAD(t.src.ip),
+ ntohs(t.src.u.tcp.port),
+ NIPQUAD(t.dst.ip),
+ ntohs(t.dst.u.tcp.port));
+
+ ip_conntrack_expect_related(ct, &t, &mask, NULL);
+ UNLOCK_BH(&ip_irc_lock);
+
+ return NF_ACCEPT;
+ } /* for .. NUM_DCCPROTO */
+ } /* while data < ... */
+
+ return NF_ACCEPT;
+}
+
+static struct ip_conntrack_helper irc_helpers[MAX_PORTS];
+
+static void fini(void);
+
+static int __init init(void)
+{
+ int i, ret;
+
+ /* If no port given, default to standard irc port */
+ if (ports[0] == 0)
+ ports[0] = 6667;
+
+ for (i = 0; (i < MAX_PORTS) && ports[i]; i++) {
+ memset(&irc_helpers[i], 0,
+ sizeof(struct ip_conntrack_helper));
+ irc_helpers[i].tuple.src.u.tcp.port = htons(ports[i]);
+ irc_helpers[i].tuple.dst.protonum = IPPROTO_TCP;
+ irc_helpers[i].mask.src.u.tcp.port = 0xFFFF;
+ irc_helpers[i].mask.dst.protonum = 0xFFFF;
+ irc_helpers[i].help = help;
+
+ DEBUGP("port #%d: %d\n", i, ports[i]);
+
+ ret = ip_conntrack_helper_register(&irc_helpers[i]);
+
+ if (ret) {
+ printk("ip_conntrack_irc: ERROR registering port %d\n",
+ ports[i]);
+ fini();
+ return -EBUSY;
+ }
+ ports_n_c++;
+ }
+ return 0;
+}
+
+/* This function is intentionally _NOT_ defined as __exit, because
+ * it is needed by the init function */
+static void fini(void)
+{
+ int i;
+ for (i = 0; (i < MAX_PORTS) && ports[i]; i++) {
+ DEBUGP("unregistering port %d\n",
+ ports[i]);
+ ip_conntrack_helper_unregister(&irc_helpers[i]);
+ }
+}
+
+module_init(init);
+module_exit(fini);
ftpinfo = &master->help.ct_ftp_info;
LOCK_BH(&ip_ftp_lock);
- if (!ftpinfo->is_ftp) {
+ if (ftpinfo->is_ftp != 21) {
UNLOCK_BH(&ip_ftp_lock);
DEBUGP("nat_expected: master not ftp\n");
return 0;
--- /dev/null
+/* IRC extension for TCP NAT alteration.
+ * (C) 2000 by Harald Welte <laforge@gnumonks.org>
+ * based on a copy of RR's ip_nat_ftp.c
+ *
+ * ip_nat_irc.c,v 1.15 2001/10/22 10:43:53 laforge Exp
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * Module load syntax:
+ * insmod ip_nat_irc.o ports=port1,port2,...port<MAX_PORTS>
+ *
+ * Please give the ports of all IRC servers you wish to connect to.
+ * If you don't specify ports, the default will be port 6667.
+ */
+
+#include <linux/module.h>
+#include <linux/netfilter_ipv4.h>
+#include <linux/ip.h>
+#include <linux/tcp.h>
+#include <linux/kernel.h>
+#include <net/tcp.h>
+#include <linux/netfilter_ipv4/ip_nat.h>
+#include <linux/netfilter_ipv4/ip_nat_helper.h>
+#include <linux/netfilter_ipv4/ip_nat_rule.h>
+#include <linux/netfilter_ipv4/ip_conntrack_irc.h>
+#include <linux/netfilter_ipv4/ip_conntrack_helper.h>
+
+#if 0
+#define DEBUGP printk
+#else
+#define DEBUGP(format, args...)
+#endif
+
+#define MAX_PORTS 8
+static int ports[MAX_PORTS];
+static int ports_c = 0;
+
+MODULE_AUTHOR("Harald Welte <laforge@gnumonks.org>");
+MODULE_DESCRIPTION("IRC (DCC) network address translation module");
+MODULE_LICENSE("GPL");
+#ifdef MODULE_PARM
+MODULE_PARM(ports, "1-" __MODULE_STRING(MAX_PORTS) "i");
+MODULE_PARM_DESC(ports, "port numbers of IRC servers");
+#endif
+
+/* protects irc part of conntracks */
+DECLARE_LOCK_EXTERN(ip_irc_lock);
+
+/* FIXME: Time out? --RR */
+
+static int
+irc_nat_expected(struct sk_buff **pskb,
+ unsigned int hooknum,
+ struct ip_conntrack *ct,
+ struct ip_nat_info *info,
+ struct ip_conntrack *master,
+ struct ip_nat_info *masterinfo, unsigned int *verdict)
+{
+ struct ip_nat_multi_range mr;
+ u_int32_t newdstip, newsrcip, newip;
+ struct ip_ct_irc *ircinfo;
+
+ IP_NF_ASSERT(info);
+ IP_NF_ASSERT(master);
+ IP_NF_ASSERT(masterinfo);
+
+ IP_NF_ASSERT(!(info->initialized & (1 << HOOK2MANIP(hooknum))));
+
+ DEBUGP("nat_expected: We have a connection!\n");
+
+ /* Master must be an irc connection */
+ ircinfo = &master->help.ct_irc_info;
+ LOCK_BH(&ip_irc_lock);
+ if (ircinfo->is_irc != IP_CONNTR_IRC) {
+ UNLOCK_BH(&ip_irc_lock);
+ DEBUGP("nat_expected: master not irc\n");
+ return 0;
+ }
+
+ newdstip = master->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.ip;
+ newsrcip = master->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.ip;
+ DEBUGP("nat_expected: DCC cmd. %u.%u.%u.%u->%u.%u.%u.%u\n",
+ NIPQUAD(newsrcip), NIPQUAD(newdstip));
+
+ UNLOCK_BH(&ip_irc_lock);
+
+ if (HOOK2MANIP(hooknum) == IP_NAT_MANIP_SRC)
+ newip = newsrcip;
+ else
+ newip = newdstip;
+
+ DEBUGP("nat_expected: IP to %u.%u.%u.%u\n", NIPQUAD(newip));
+
+ mr.rangesize = 1;
+ /* We don't want to manip the per-protocol, just the IPs. */
+ mr.range[0].flags = IP_NAT_RANGE_MAP_IPS;
+ mr.range[0].min_ip = mr.range[0].max_ip = newip;
+
+ *verdict = ip_nat_setup_info(ct, &mr, hooknum);
+
+ return 1;
+}
+
+static int irc_data_fixup(const struct ip_ct_irc *ct_irc_info,
+ struct ip_conntrack *ct,
+ unsigned int datalen,
+ struct sk_buff **pskb,
+ enum ip_conntrack_info ctinfo)
+{
+ u_int32_t newip;
+ struct ip_conntrack_tuple t;
+ struct iphdr *iph = (*pskb)->nh.iph;
+ struct tcphdr *tcph = (void *) iph + iph->ihl * 4;
+ int port;
+
+ /* "4294967295 65535 " */
+ char buffer[18];
+
+ MUST_BE_LOCKED(&ip_irc_lock);
+
+ DEBUGP("IRC_NAT: info (seq %u + %u) packet(seq %u + %u)\n",
+ ct_irc_info->seq, ct_irc_info->len,
+ ntohl(tcph->seq), datalen);
+
+ newip = ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip;
+
+ /* Alter conntrack's expectations. */
+
+ /* We can read expect here without conntrack lock, since it's
+ only set in ip_conntrack_irc, with ip_irc_lock held
+ writable */
+
+ t = ct->expected.tuple;
+ t.dst.ip = newip;
+ for (port = ct_irc_info->port; port != 0; port++) {
+ t.dst.u.tcp.port = htons(port);
+ if (ip_conntrack_expect_related(ct, &t,
+ &ct->expected.mask,
+ NULL) == 0) {
+ DEBUGP("using port %d", port);
+ break;
+ }
+
+ }
+ if (port == 0)
+ return 0;
+
+ /* strlen("\1DCC CHAT chat AAAAAAAA P\1\n")=27
+ * strlen("\1DCC SCHAT chat AAAAAAAA P\1\n")=28
+ * strlen("\1DCC SEND F AAAAAAAA P S\1\n")=26
+ * strlen("\1DCC MOVE F AAAAAAAA P S\1\n")=26
+ * strlen("\1DCC TSEND F AAAAAAAA P S\1\n")=27
+ * AAAAAAAA: bound addr (1.0.0.0==16777216, min 8 digits,
+ * 255.255.255.255==4294967295, 10 digits)
+ * P: bound port (min 1 digit, max 5 digits (65535))
+ * F: filename (min 1 digit)
+ * S: size (min 1 digit)
+ * 0x01, \n: terminators
+ */
+
+ sprintf(buffer, "%u %u", ntohl(newip), port);
+ DEBUGP("ip_nat_irc: Inserting '%s' == %u.%u.%u.%u, port %u\n",
+ buffer, NIPQUAD(newip), port);
+
+ return ip_nat_mangle_tcp_packet(pskb, ct, ctinfo,
+ ct_irc_info->seq - ntohl(tcph->seq),
+ ct_irc_info->len, buffer,
+ strlen(buffer));
+}
+
+static unsigned int help(struct ip_conntrack *ct,
+ struct ip_nat_info *info,
+ enum ip_conntrack_info ctinfo,
+ unsigned int hooknum, struct sk_buff **pskb)
+{
+ struct iphdr *iph = (*pskb)->nh.iph;
+ struct tcphdr *tcph = (void *) iph + iph->ihl * 4;
+ unsigned int datalen;
+ int dir;
+ int score;
+ struct ip_ct_irc *ct_irc_info = &ct->help.ct_irc_info;
+
+ /* Delete SACK_OK on initial TCP SYNs. */
+ if (tcph->syn && !tcph->ack)
+ ip_nat_delete_sack(*pskb, tcph);
+
+ /* Only mangle things once: original direction in POST_ROUTING
+ and reply direction on PRE_ROUTING. */
+ dir = CTINFO2DIR(ctinfo);
+ if (!((hooknum == NF_IP_POST_ROUTING && dir == IP_CT_DIR_ORIGINAL)
+ || (hooknum == NF_IP_PRE_ROUTING && dir == IP_CT_DIR_REPLY))) {
+ DEBUGP("nat_irc: Not touching dir %s at hook %s\n",
+ dir == IP_CT_DIR_ORIGINAL ? "ORIG" : "REPLY",
+ hooknum == NF_IP_POST_ROUTING ? "POSTROUTING"
+ : hooknum == NF_IP_PRE_ROUTING ? "PREROUTING"
+ : hooknum == NF_IP_LOCAL_OUT ? "OUTPUT" : "???");
+ return NF_ACCEPT;
+ }
+ DEBUGP("got beyond not touching\n");
+
+ datalen = (*pskb)->len - iph->ihl * 4 - tcph->doff * 4;
+ score = 0;
+ LOCK_BH(&ip_irc_lock);
+ if (ct_irc_info->len) {
+ DEBUGP("got beyond ct_irc_info->len\n");
+
+ /* If it's in the right range... */
+ score += between(ct_irc_info->seq, ntohl(tcph->seq),
+ ntohl(tcph->seq) + datalen);
+ score += between(ct_irc_info->seq + ct_irc_info->len,
+ ntohl(tcph->seq),
+ ntohl(tcph->seq) + datalen);
+ if (score == 1) {
+ /* Half a match? This means a partial retransmission.
+ It's a cracker being funky. */
+ if (net_ratelimit()) {
+ printk
+ ("IRC_NAT: partial packet %u/%u in %u/%u\n",
+ ct_irc_info->seq, ct_irc_info->len,
+ ntohl(tcph->seq),
+ ntohl(tcph->seq) + datalen);
+ }
+ UNLOCK_BH(&ip_irc_lock);
+ return NF_DROP;
+ } else if (score == 2) {
+ DEBUGP("IRC_NAT: score=2, calling fixup\n");
+ if (!irc_data_fixup(ct_irc_info, ct, datalen,
+ pskb, ctinfo)) {
+ UNLOCK_BH(&ip_irc_lock);
+ return NF_DROP;
+ }
+ /* skb may have been reallocated */
+ iph = (*pskb)->nh.iph;
+ tcph = (void *) iph + iph->ihl * 4;
+ }
+ }
+
+ UNLOCK_BH(&ip_irc_lock);
+
+ ip_nat_seq_adjust(*pskb, ct, ctinfo);
+
+ return NF_ACCEPT;
+}
+
+static struct ip_nat_helper ip_nat_irc_helpers[MAX_PORTS];
+static char ip_nih_names[MAX_PORTS][6];
+
+static struct ip_nat_expect irc_expect
+ = { {NULL, NULL}, irc_nat_expected };
+
+
+/* This function is intentionally _NOT_ defined as __exit, because
+ * it is needed by init() */
+static void fini(void)
+{
+ int i;
+
+ for (i = 0; i < ports_c; i++) {
+ DEBUGP("ip_nat_irc: unregistering helper for port %d\n",
+ ports[i]);
+ ip_nat_helper_unregister(&ip_nat_irc_helpers[i]);
+ }
+ ip_nat_expect_unregister(&irc_expect);
+}
+static int __init init(void)
+{
+ int ret;
+ int i;
+ struct ip_nat_helper *hlpr;
+ char *tmpname;
+
+ ret = ip_nat_expect_register(&irc_expect);
+ if (ret == 0) {
+
+ if (ports[0] == 0) {
+ ports[0] = 6667;
+ }
+
+ for (i = 0; (i < MAX_PORTS) && ports[i] != 0; i++) {
+ hlpr = &ip_nat_irc_helpers[i];
+ memset(hlpr, 0,
+ sizeof(struct ip_nat_helper));
+
+ hlpr->tuple.dst.protonum = IPPROTO_TCP;
+ hlpr->tuple.src.u.tcp.port = htons(ports[i]);
+ hlpr->mask.src.u.tcp.port = 0xFFFF;
+ hlpr->mask.dst.protonum = 0xFFFF;
+ hlpr->help = help;
+
+ tmpname = &ip_nih_names[i][0];
+ sprintf(tmpname, "irc%2.2d", i);
+
+ hlpr->name = tmpname;
+ DEBUGP
+ ("ip_nat_irc: Trying to register helper for port %d: name %s\n",
+ ports[i], hlpr->name);
+ ret = ip_nat_helper_register(hlpr);
+
+ if (ret) {
+ printk
+ ("ip_nat_irc: error registering helper for port %d\n",
+ ports[i]);
+ fini();
+ return -EBUSY;
+ }
+ ports_c++;
+ }
+ }
+ return ret;
+}
+
+
+module_init(init);
+module_exit(fini);
--- /dev/null
+/*
+ * ip_nat_snmp_basic.c
+ *
+ * Basic SNMP Application Layer Gateway
+ *
+ * This IP NAT module is intended for use with SNMP network
+ * discovery and monitoring applications where target networks use
+ * conflicting private address realms.
+ *
+ * Static NAT is used to remap the networks from the view of the network
+ * management system at the IP layer, and this module remaps some application
+ * layer addresses to match.
+ *
+ * The simplest form of ALG is performed, where only tagged IP addresses
+ * are modified. The module does not need to be MIB aware and only scans
+ * messages at the ASN.1/BER level.
+ *
+ * Currently, only SNMPv1 and SNMPv2 are supported.
+ *
+ * More information on ALG and associated issues can be found in
+ * RFC 2962
+ *
+ * The ASN.1/BER parsing code is derived from the gxsnmp package by Gregory
+ * McLean & Jochen Friedrich, stripped down for use in the kernel.
+ *
+ * Copyright (c) 2000 RP Internet (www.rpi.net.au).
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: James Morris <jmorris@intercode.com.au>
+ *
+ * Updates:
+ * 2000-08-06: Convert to new helper API (Harald Welte).
+ *
+ */
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/netfilter_ipv4.h>
+#include <linux/netfilter_ipv4/ip_nat.h>
+#include <linux/netfilter_ipv4/ip_nat_helper.h>
+#include <linux/brlock.h>
+#include <linux/types.h>
+#include <linux/ip.h>
+#include <net/udp.h>
+#include <asm/uaccess.h>
+#include <asm/checksum.h>
+
+
+
+#define SNMP_PORT 161
+#define SNMP_TRAP_PORT 162
+#define NOCT1(n) (u_int8_t )((n) & 0xff)
+
+static int debug = 0;
+static spinlock_t snmp_lock = SPIN_LOCK_UNLOCKED;
+
+/*
+ * Application layer address mapping mimics the NAT mapping, but
+ * only for the first octet in this case (a more flexible system
+ * can be implemented if needed).
+ */
+struct oct1_map
+{
+ u_int8_t from;
+ u_int8_t to;
+};
+
+
+/*****************************************************************************
+ *
+ * Basic ASN.1 decoding routines (gxsnmp author Dirk Wisse)
+ *
+ *****************************************************************************/
+
+/* Class */
+#define ASN1_UNI 0 /* Universal */
+#define ASN1_APL 1 /* Application */
+#define ASN1_CTX 2 /* Context */
+#define ASN1_PRV 3 /* Private */
+
+/* Tag */
+#define ASN1_EOC 0 /* End Of Contents */
+#define ASN1_BOL 1 /* Boolean */
+#define ASN1_INT 2 /* Integer */
+#define ASN1_BTS 3 /* Bit String */
+#define ASN1_OTS 4 /* Octet String */
+#define ASN1_NUL 5 /* Null */
+#define ASN1_OJI 6 /* Object Identifier */
+#define ASN1_OJD 7 /* Object Description */
+#define ASN1_EXT 8 /* External */
+#define ASN1_SEQ 16 /* Sequence */
+#define ASN1_SET 17 /* Set */
+#define ASN1_NUMSTR 18 /* Numerical String */
+#define ASN1_PRNSTR 19 /* Printable String */
+#define ASN1_TEXSTR 20 /* Teletext String */
+#define ASN1_VIDSTR 21 /* Video String */
+#define ASN1_IA5STR 22 /* IA5 String */
+#define ASN1_UNITIM 23 /* Universal Time */
+#define ASN1_GENTIM 24 /* General Time */
+#define ASN1_GRASTR 25 /* Graphical String */
+#define ASN1_VISSTR 26 /* Visible String */
+#define ASN1_GENSTR 27 /* General String */
+
+/* Primitive / Constructed methods*/
+#define ASN1_PRI 0 /* Primitive */
+#define ASN1_CON 1 /* Constructed */
+
+/*
+ * Error codes.
+ */
+#define ASN1_ERR_NOERROR 0
+#define ASN1_ERR_DEC_EMPTY 2
+#define ASN1_ERR_DEC_EOC_MISMATCH 3
+#define ASN1_ERR_DEC_LENGTH_MISMATCH 4
+#define ASN1_ERR_DEC_BADVALUE 5
+
+/*
+ * ASN.1 context.
+ */
+struct asn1_ctx
+{
+ int error; /* Error condition */
+ unsigned char *pointer; /* Octet just to be decoded */
+ unsigned char *begin; /* First octet */
+ unsigned char *end; /* Octet after last octet */
+};
+
+/*
+ * Octet string (not null terminated)
+ */
+struct asn1_octstr
+{
+ unsigned char *data;
+ unsigned int len;
+};
+
+static void asn1_open(struct asn1_ctx *ctx,
+ unsigned char *buf,
+ unsigned int len)
+{
+ ctx->begin = buf;
+ ctx->end = buf + len;
+ ctx->pointer = buf;
+ ctx->error = ASN1_ERR_NOERROR;
+}
+
+static unsigned char asn1_octet_decode(struct asn1_ctx *ctx, unsigned char *ch)
+{
+ if (ctx->pointer >= ctx->end) {
+ ctx->error = ASN1_ERR_DEC_EMPTY;
+ return 0;
+ }
+ *ch = *(ctx->pointer)++;
+ return 1;
+}
+
+static unsigned char asn1_tag_decode(struct asn1_ctx *ctx, unsigned int *tag)
+{
+ unsigned char ch;
+
+ *tag = 0;
+
+ do
+ {
+ if (!asn1_octet_decode(ctx, &ch))
+ return 0;
+ *tag <<= 7;
+ *tag |= ch & 0x7F;
+ } while ((ch & 0x80) == 0x80);
+ return 1;
+}
+
+static unsigned char asn1_id_decode(struct asn1_ctx *ctx,
+ unsigned int *cls,
+ unsigned int *con,
+ unsigned int *tag)
+{
+ unsigned char ch;
+
+ if (!asn1_octet_decode(ctx, &ch))
+ return 0;
+
+ *cls = (ch & 0xC0) >> 6;
+ *con = (ch & 0x20) >> 5;
+ *tag = (ch & 0x1F);
+
+ if (*tag == 0x1F) {
+ if (!asn1_tag_decode(ctx, tag))
+ return 0;
+ }
+ return 1;
+}
+
+static unsigned char asn1_length_decode(struct asn1_ctx *ctx,
+ unsigned int *def,
+ unsigned int *len)
+{
+ unsigned char ch, cnt;
+
+ if (!asn1_octet_decode(ctx, &ch))
+ return 0;
+
+ if (ch == 0x80)
+ *def = 0;
+ else {
+ *def = 1;
+
+ if (ch < 0x80)
+ *len = ch;
+ else {
+ cnt = (unsigned char) (ch & 0x7F);
+ *len = 0;
+
+ while (cnt > 0) {
+ if (!asn1_octet_decode(ctx, &ch))
+ return 0;
+ *len <<= 8;
+ *len |= ch;
+ cnt--;
+ }
+ }
+ }
+ return 1;
+}
+
+static unsigned char asn1_header_decode(struct asn1_ctx *ctx,
+ unsigned char **eoc,
+ unsigned int *cls,
+ unsigned int *con,
+ unsigned int *tag)
+{
+ unsigned int def, len;
+
+ if (!asn1_id_decode(ctx, cls, con, tag))
+ return 0;
+
+ if (!asn1_length_decode(ctx, &def, &len))
+ return 0;
+
+ if (def)
+ *eoc = ctx->pointer + len;
+ else
+ *eoc = 0;
+ return 1;
+}
+
+static unsigned char asn1_eoc_decode(struct asn1_ctx *ctx, unsigned char *eoc)
+{
+ unsigned char ch;
+
+ if (eoc == 0) {
+ if (!asn1_octet_decode(ctx, &ch))
+ return 0;
+
+ if (ch != 0x00) {
+ ctx->error = ASN1_ERR_DEC_EOC_MISMATCH;
+ return 0;
+ }
+
+ if (!asn1_octet_decode(ctx, &ch))
+ return 0;
+
+ if (ch != 0x00) {
+ ctx->error = ASN1_ERR_DEC_EOC_MISMATCH;
+ return 0;
+ }
+ return 1;
+ } else {
+ if (ctx->pointer != eoc) {
+ ctx->error = ASN1_ERR_DEC_LENGTH_MISMATCH;
+ return 0;
+ }
+ return 1;
+ }
+}
+
+static unsigned char asn1_null_decode(struct asn1_ctx *ctx, unsigned char *eoc)
+{
+ ctx->pointer = eoc;
+ return 1;
+}
+
+static unsigned char asn1_long_decode(struct asn1_ctx *ctx,
+ unsigned char *eoc,
+ long *integer)
+{
+ unsigned char ch;
+ unsigned int len;
+
+ if (!asn1_octet_decode(ctx, &ch))
+ return 0;
+
+ *integer = (signed char) ch;
+ len = 1;
+
+ while (ctx->pointer < eoc) {
+ if (++len > sizeof (long)) {
+ ctx->error = ASN1_ERR_DEC_BADVALUE;
+ return 0;
+ }
+
+ if (!asn1_octet_decode(ctx, &ch))
+ return 0;
+
+ *integer <<= 8;
+ *integer |= ch;
+ }
+ return 1;
+}
+
+static unsigned char asn1_uint_decode(struct asn1_ctx *ctx,
+ unsigned char *eoc,
+ unsigned int *integer)
+{
+ unsigned char ch;
+ unsigned int len;
+
+ if (!asn1_octet_decode(ctx, &ch))
+ return 0;
+
+ *integer = ch;
+ if (ch == 0) len = 0;
+ else len = 1;
+
+ while (ctx->pointer < eoc) {
+ if (++len > sizeof (unsigned int)) {
+ ctx->error = ASN1_ERR_DEC_BADVALUE;
+ return 0;
+ }
+
+ if (!asn1_octet_decode(ctx, &ch))
+ return 0;
+
+ *integer <<= 8;
+ *integer |= ch;
+ }
+ return 1;
+}
+
+static unsigned char asn1_ulong_decode(struct asn1_ctx *ctx,
+ unsigned char *eoc,
+ unsigned long *integer)
+{
+ unsigned char ch;
+ unsigned int len;
+
+ if (!asn1_octet_decode(ctx, &ch))
+ return 0;
+
+ *integer = ch;
+ if (ch == 0) len = 0;
+ else len = 1;
+
+ while (ctx->pointer < eoc) {
+ if (++len > sizeof (unsigned long)) {
+ ctx->error = ASN1_ERR_DEC_BADVALUE;
+ return 0;
+ }
+
+ if (!asn1_octet_decode(ctx, &ch))
+ return 0;
+
+ *integer <<= 8;
+ *integer |= ch;
+ }
+ return 1;
+}
+
+static unsigned char asn1_octets_decode(struct asn1_ctx *ctx,
+ unsigned char *eoc,
+ unsigned char **octets,
+ unsigned int *len)
+{
+ unsigned char *ptr;
+
+ *len = 0;
+
+ *octets = kmalloc(eoc - ctx->pointer, GFP_ATOMIC);
+ if (*octets == NULL) {
+ if (net_ratelimit())
+ printk("OOM in bsalg (%d)\n", __LINE__);
+ return 0;
+ }
+
+ ptr = *octets;
+ while (ctx->pointer < eoc) {
+ if (!asn1_octet_decode(ctx, (unsigned char *)ptr++)) {
+ kfree(*octets);
+ *octets = NULL;
+ return 0;
+ }
+ (*len)++;
+ }
+ return 1;
+}
+
+static unsigned char asn1_subid_decode(struct asn1_ctx *ctx,
+ unsigned long *subid)
+{
+ unsigned char ch;
+
+ *subid = 0;
+
+ do {
+ if (!asn1_octet_decode(ctx, &ch))
+ return 0;
+
+ *subid <<= 7;
+ *subid |= ch & 0x7F;
+ } while ((ch & 0x80) == 0x80);
+ return 1;
+}
+
+static unsigned char asn1_oid_decode(struct asn1_ctx *ctx,
+ unsigned char *eoc,
+ unsigned long **oid,
+ unsigned int *len)
+{
+ unsigned long subid;
+ unsigned int size;
+ unsigned long *optr;
+
+ size = eoc - ctx->pointer + 1;
+ *oid = kmalloc(size * sizeof(unsigned long), GFP_ATOMIC);
+ if (*oid == NULL) {
+ if (net_ratelimit())
+ printk("OOM in bsalg (%d)\n", __LINE__);
+ return 0;
+ }
+
+ optr = *oid;
+
+ if (!asn1_subid_decode(ctx, &subid)) {
+ kfree(*oid);
+ *oid = NULL;
+ return 0;
+ }
+
+ if (subid < 40) {
+ optr [0] = 0;
+ optr [1] = subid;
+ } else if (subid < 80) {
+ optr [0] = 1;
+ optr [1] = subid - 40;
+ } else {
+ optr [0] = 2;
+ optr [1] = subid - 80;
+ }
+
+ *len = 2;
+ optr += 2;
+
+ while (ctx->pointer < eoc) {
+ if (++(*len) > size) {
+ ctx->error = ASN1_ERR_DEC_BADVALUE;
+ kfree(*oid);
+ *oid = NULL;
+ return 0;
+ }
+
+ if (!asn1_subid_decode(ctx, optr++)) {
+ kfree(*oid);
+ *oid = NULL;
+ return 0;
+ }
+ }
+ return 1;
+}
+
+/*****************************************************************************
+ *
+ * SNMP decoding routines (gxsnmp author Dirk Wisse)
+ *
+ *****************************************************************************/
+
+/* SNMP Versions */
+#define SNMP_V1 0
+#define SNMP_V2C 1
+#define SNMP_V2 2
+#define SNMP_V3 3
+
+/* Default Sizes */
+#define SNMP_SIZE_COMM 256
+#define SNMP_SIZE_OBJECTID 128
+#define SNMP_SIZE_BUFCHR 256
+#define SNMP_SIZE_BUFINT 128
+#define SNMP_SIZE_SMALLOBJECTID 16
+
+/* Requests */
+#define SNMP_PDU_GET 0
+#define SNMP_PDU_NEXT 1
+#define SNMP_PDU_RESPONSE 2
+#define SNMP_PDU_SET 3
+#define SNMP_PDU_TRAP1 4
+#define SNMP_PDU_BULK 5
+#define SNMP_PDU_INFORM 6
+#define SNMP_PDU_TRAP2 7
+
+/* Errors */
+#define SNMP_NOERROR 0
+#define SNMP_TOOBIG 1
+#define SNMP_NOSUCHNAME 2
+#define SNMP_BADVALUE 3
+#define SNMP_READONLY 4
+#define SNMP_GENERROR 5
+#define SNMP_NOACCESS 6
+#define SNMP_WRONGTYPE 7
+#define SNMP_WRONGLENGTH 8
+#define SNMP_WRONGENCODING 9
+#define SNMP_WRONGVALUE 10
+#define SNMP_NOCREATION 11
+#define SNMP_INCONSISTENTVALUE 12
+#define SNMP_RESOURCEUNAVAILABLE 13
+#define SNMP_COMMITFAILED 14
+#define SNMP_UNDOFAILED 15
+#define SNMP_AUTHORIZATIONERROR 16
+#define SNMP_NOTWRITABLE 17
+#define SNMP_INCONSISTENTNAME 18
+
+/* General SNMP V1 Traps */
+#define SNMP_TRAP_COLDSTART 0
+#define SNMP_TRAP_WARMSTART 1
+#define SNMP_TRAP_LINKDOWN 2
+#define SNMP_TRAP_LINKUP 3
+#define SNMP_TRAP_AUTFAILURE 4
+#define SNMP_TRAP_EQPNEIGHBORLOSS 5
+#define SNMP_TRAP_ENTSPECIFIC 6
+
+/* SNMPv1 Types */
+#define SNMP_NULL 0
+#define SNMP_INTEGER 1 /* l */
+#define SNMP_OCTETSTR 2 /* c */
+#define SNMP_DISPLAYSTR 2 /* c */
+#define SNMP_OBJECTID 3 /* ul */
+#define SNMP_IPADDR 4 /* uc */
+#define SNMP_COUNTER 5 /* ul */
+#define SNMP_GAUGE 6 /* ul */
+#define SNMP_TIMETICKS 7 /* ul */
+#define SNMP_OPAQUE 8 /* c */
+
+/* Additional SNMPv2 Types */
+#define SNMP_UINTEGER 5 /* ul */
+#define SNMP_BITSTR 9 /* uc */
+#define SNMP_NSAP 10 /* uc */
+#define SNMP_COUNTER64 11 /* ul */
+#define SNMP_NOSUCHOBJECT 12
+#define SNMP_NOSUCHINSTANCE 13
+#define SNMP_ENDOFMIBVIEW 14
+
+union snmp_syntax
+{
+ unsigned char uc[0]; /* 8 bit unsigned */
+ char c[0]; /* 8 bit signed */
+ unsigned long ul[0]; /* 32 bit unsigned */
+ long l[0]; /* 32 bit signed */
+};
+
+struct snmp_object
+{
+ unsigned long *id;
+ unsigned int id_len;
+ unsigned short type;
+ unsigned int syntax_len;
+ union snmp_syntax syntax;
+};
+
+struct snmp_request
+{
+ unsigned long id;
+ unsigned int error_status;
+ unsigned int error_index;
+};
+
+struct snmp_v1_trap
+{
+ unsigned long *id;
+ unsigned int id_len;
+ unsigned long ip_address; /* pointer */
+ unsigned int general;
+ unsigned int specific;
+ unsigned long time;
+};
+
+/* SNMP types */
+#define SNMP_IPA 0
+#define SNMP_CNT 1
+#define SNMP_GGE 2
+#define SNMP_TIT 3
+#define SNMP_OPQ 4
+#define SNMP_C64 6
+
+/* SNMP errors */
+#define SERR_NSO 0
+#define SERR_NSI 1
+#define SERR_EOM 2
+
+static inline void mangle_address(unsigned char *begin,
+ unsigned char *addr,
+ const struct oct1_map *map,
+ u_int16_t *check);
+struct snmp_cnv
+{
+ unsigned int class;
+ unsigned int tag;
+ int syntax;
+};
+
+static struct snmp_cnv snmp_conv [] =
+{
+ {ASN1_UNI, ASN1_NUL, SNMP_NULL},
+ {ASN1_UNI, ASN1_INT, SNMP_INTEGER},
+ {ASN1_UNI, ASN1_OTS, SNMP_OCTETSTR},
+ {ASN1_UNI, ASN1_OTS, SNMP_DISPLAYSTR},
+ {ASN1_UNI, ASN1_OJI, SNMP_OBJECTID},
+ {ASN1_APL, SNMP_IPA, SNMP_IPADDR},
+ {ASN1_APL, SNMP_CNT, SNMP_COUNTER}, /* Counter32 */
+ {ASN1_APL, SNMP_GGE, SNMP_GAUGE}, /* Gauge32 == Unsigned32 */
+ {ASN1_APL, SNMP_TIT, SNMP_TIMETICKS},
+ {ASN1_APL, SNMP_OPQ, SNMP_OPAQUE},
+
+ /* SNMPv2 data types and errors */
+ {ASN1_UNI, ASN1_BTS, SNMP_BITSTR},
+ {ASN1_APL, SNMP_C64, SNMP_COUNTER64},
+ {ASN1_CTX, SERR_NSO, SNMP_NOSUCHOBJECT},
+ {ASN1_CTX, SERR_NSI, SNMP_NOSUCHINSTANCE},
+ {ASN1_CTX, SERR_EOM, SNMP_ENDOFMIBVIEW},
+ {0, 0, -1}
+};
+
+static unsigned char snmp_tag_cls2syntax(unsigned int tag,
+ unsigned int cls,
+ unsigned short *syntax)
+{
+ struct snmp_cnv *cnv;
+
+ cnv = snmp_conv;
+
+ while (cnv->syntax != -1) {
+ if (cnv->tag == tag && cnv->class == cls) {
+ *syntax = cnv->syntax;
+ return 1;
+ }
+ cnv++;
+ }
+ return 0;
+}
+
+static unsigned char snmp_object_decode(struct asn1_ctx *ctx,
+ struct snmp_object **obj)
+{
+ unsigned int cls, con, tag, len, idlen;
+ unsigned short type;
+ unsigned char *eoc, *end, *p;
+ unsigned long *lp, *id;
+ unsigned long ul;
+ long l;
+
+ *obj = NULL;
+ id = NULL;
+
+ if (!asn1_header_decode(ctx, &eoc, &cls, &con, &tag))
+ return 0;
+
+ if (cls != ASN1_UNI || con != ASN1_CON || tag != ASN1_SEQ)
+ return 0;
+
+ if (!asn1_header_decode(ctx, &end, &cls, &con, &tag))
+ return 0;
+
+ if (cls != ASN1_UNI || con != ASN1_PRI || tag != ASN1_OJI)
+ return 0;
+
+ if (!asn1_oid_decode(ctx, end, &id, &idlen))
+ return 0;
+
+ if (!asn1_header_decode(ctx, &end, &cls, &con, &tag)) {
+ kfree(id);
+ return 0;
+ }
+
+ if (con != ASN1_PRI) {
+ kfree(id);
+ return 0;
+ }
+
+ if (!snmp_tag_cls2syntax(tag, cls, &type)) {
+ kfree(id);
+ return 0;
+ }
+
+ switch (type) {
+ case SNMP_INTEGER:
+ len = sizeof(long);
+ if (!asn1_long_decode(ctx, end, &l)) {
+ kfree(id);
+ return 0;
+ }
+ *obj = kmalloc(sizeof(struct snmp_object) + len,
+ GFP_ATOMIC);
+ if (*obj == NULL) {
+ kfree(id);
+ if (net_ratelimit())
+ printk("OOM in bsalg (%d)\n", __LINE__);
+ return 0;
+ }
+ (*obj)->syntax.l[0] = l;
+ break;
+ case SNMP_OCTETSTR:
+ case SNMP_OPAQUE:
+ if (!asn1_octets_decode(ctx, end, &p, &len)) {
+ kfree(id);
+ return 0;
+ }
+ *obj = kmalloc(sizeof(struct snmp_object) + len,
+ GFP_ATOMIC);
+ if (*obj == NULL) {
+ kfree(id);
+ if (net_ratelimit())
+ printk("OOM in bsalg (%d)\n", __LINE__);
+ return 0;
+ }
+ memcpy((*obj)->syntax.c, p, len);
+ kfree(p);
+ break;
+ case SNMP_NULL:
+ case SNMP_NOSUCHOBJECT:
+ case SNMP_NOSUCHINSTANCE:
+ case SNMP_ENDOFMIBVIEW:
+ len = 0;
+ *obj = kmalloc(sizeof(struct snmp_object), GFP_ATOMIC);
+ if (*obj == NULL) {
+ kfree(id);
+ if (net_ratelimit())
+ printk("OOM in bsalg (%d)\n", __LINE__);
+ return 0;
+ }
+ if (!asn1_null_decode(ctx, end)) {
+ kfree(id);
+ kfree(*obj);
+ *obj = NULL;
+ return 0;
+ }
+ break;
+ case SNMP_OBJECTID:
+ if (!asn1_oid_decode(ctx, end, (unsigned long **)&lp, &len)) {
+ kfree(id);
+ return 0;
+ }
+ len *= sizeof(unsigned long);
+ *obj = kmalloc(sizeof(struct snmp_object) + len, GFP_ATOMIC);
+ if (*obj == NULL) {
+ kfree(id);
+ if (net_ratelimit())
+ printk("OOM in bsalg (%d)\n", __LINE__);
+ return 0;
+ }
+ memcpy((*obj)->syntax.ul, lp, len);
+ kfree(lp);
+ break;
+ case SNMP_IPADDR:
+ if (!asn1_octets_decode(ctx, end, &p, &len)) {
+ kfree(id);
+ return 0;
+ }
+ if (len != 4) {
+ kfree(p);
+ kfree(id);
+ return 0;
+ }
+ *obj = kmalloc(sizeof(struct snmp_object) + len, GFP_ATOMIC);
+ if (*obj == NULL) {
+ kfree(p);
+ kfree(id);
+ if (net_ratelimit())
+ printk("OOM in bsalg (%d)\n", __LINE__);
+ return 0;
+ }
+ memcpy((*obj)->syntax.uc, p, len);
+ kfree(p);
+ break;
+ case SNMP_COUNTER:
+ case SNMP_GAUGE:
+ case SNMP_TIMETICKS:
+ len = sizeof(unsigned long);
+ if (!asn1_ulong_decode(ctx, end, &ul)) {
+ kfree(id);
+ return 0;
+ }
+ *obj = kmalloc(sizeof(struct snmp_object) + len, GFP_ATOMIC);
+ if (*obj == NULL) {
+ kfree(id);
+ if (net_ratelimit())
+ printk("OOM in bsalg (%d)\n", __LINE__);
+ return 0;
+ }
+ (*obj)->syntax.ul[0] = ul;
+ break;
+ default:
+ kfree(id);
+ return 0;
+ }
+
+ (*obj)->syntax_len = len;
+ (*obj)->type = type;
+ (*obj)->id = id;
+ (*obj)->id_len = idlen;
+
+ if (!asn1_eoc_decode(ctx, eoc)) {
+ kfree(id);
+ kfree(*obj);
+ *obj = NULL;
+ return 0;
+ }
+ return 1;
+}
+
+static unsigned char snmp_request_decode(struct asn1_ctx *ctx,
+ struct snmp_request *request)
+{
+ unsigned int cls, con, tag;
+ unsigned char *end;
+
+ if (!asn1_header_decode(ctx, &end, &cls, &con, &tag))
+ return 0;
+
+ if (cls != ASN1_UNI || con != ASN1_PRI || tag != ASN1_INT)
+ return 0;
+
+ if (!asn1_ulong_decode(ctx, end, &request->id))
+ return 0;
+
+ if (!asn1_header_decode(ctx, &end, &cls, &con, &tag))
+ return 0;
+
+ if (cls != ASN1_UNI || con != ASN1_PRI || tag != ASN1_INT)
+ return 0;
+
+ if (!asn1_uint_decode(ctx, end, &request->error_status))
+ return 0;
+
+ if (!asn1_header_decode(ctx, &end, &cls, &con, &tag))
+ return 0;
+
+ if (cls != ASN1_UNI || con != ASN1_PRI || tag != ASN1_INT)
+ return 0;
+
+ if (!asn1_uint_decode(ctx, end, &request->error_index))
+ return 0;
+
+ return 1;
+}
+
+static unsigned char snmp_trap_decode(struct asn1_ctx *ctx,
+ struct snmp_v1_trap *trap,
+ const struct oct1_map *map,
+ u_int16_t *check)
+{
+ unsigned int cls, con, tag, len;
+ unsigned char *end;
+
+ if (!asn1_header_decode(ctx, &end, &cls, &con, &tag))
+ return 0;
+
+ if (cls != ASN1_UNI || con != ASN1_PRI || tag != ASN1_OJI)
+ return 0;
+
+ if (!asn1_oid_decode(ctx, end, &trap->id, &trap->id_len))
+ return 0;
+
+ if (!asn1_header_decode(ctx, &end, &cls, &con, &tag))
+ goto err_id_free;
+
+ if (!((cls == ASN1_APL && con == ASN1_PRI && tag == SNMP_IPA) ||
+ (cls == ASN1_UNI && con == ASN1_PRI && tag == ASN1_OTS)))
+ goto err_id_free;
+
+ if (!asn1_octets_decode(ctx, end, (unsigned char **)&trap->ip_address, &len))
+ goto err_id_free;
+
+ /* IPv4 only */
+ if (len != 4)
+ goto err_addr_free;
+
+ mangle_address(ctx->begin, ctx->pointer - 4, map, check);
+
+ if (!asn1_header_decode(ctx, &end, &cls, &con, &tag))
+ goto err_addr_free;
+
+ if (cls != ASN1_UNI || con != ASN1_PRI || tag != ASN1_INT)
+ goto err_addr_free;
+
+ if (!asn1_uint_decode(ctx, end, &trap->general))
+ goto err_addr_free;
+
+ if (!asn1_header_decode(ctx, &end, &cls, &con, &tag))
+ goto err_addr_free;
+
+ if (cls != ASN1_UNI || con != ASN1_PRI || tag != ASN1_INT)
+ goto err_addr_free;
+
+ if (!asn1_uint_decode(ctx, end, &trap->specific))
+ goto err_addr_free;
+
+ if (!asn1_header_decode(ctx, &end, &cls, &con, &tag))
+ goto err_addr_free;
+
+ if (!((cls == ASN1_APL && con == ASN1_PRI && tag == SNMP_TIT) ||
+ (cls == ASN1_UNI && con == ASN1_PRI && tag == ASN1_INT)))
+ goto err_addr_free;
+
+ if (!asn1_ulong_decode(ctx, end, &trap->time))
+ goto err_addr_free;
+
+ return 1;
+
+err_addr_free:
+ kfree((unsigned long *)trap->ip_address);
+
+err_id_free:
+ kfree(trap->id);
+
+ return 0;
+}
+
+/*****************************************************************************
+ *
+ * Misc. routines
+ *
+ *****************************************************************************/
+
+static void hex_dump(unsigned char *buf, size_t len)
+{
+ size_t i;
+
+ for (i = 0; i < len; i++) {
+ if (i && !(i % 16))
+ printk("\n");
+ printk("%02x ", *(buf + i));
+ }
+ printk("\n");
+}
+
+/*
+ * Fast checksum update for possibly oddly-aligned UDP byte, from the
+ * code example in the draft.
+ */
+static void fast_csum(unsigned char *csum,
+ const unsigned char *optr,
+ const unsigned char *nptr,
+ int odd)
+{
+ long x, old, new;
+
+ x = csum[0] * 256 + csum[1];
+
+ x =~ x & 0xFFFF;
+
+ if (odd) old = optr[0] * 256;
+ else old = optr[0];
+
+ x -= old & 0xFFFF;
+ if (x <= 0) {
+ x--;
+ x &= 0xFFFF;
+ }
+
+ if (odd) new = nptr[0] * 256;
+ else new = nptr[0];
+
+ x += new & 0xFFFF;
+ if (x & 0x10000) {
+ x++;
+ x &= 0xFFFF;
+ }
+
+ x =~ x & 0xFFFF;
+ csum[0] = x / 256;
+ csum[1] = x & 0xFF;
+}
+
+/*
+ * Mangle IP address.
+ * - begin points to the start of the snmp message
+ * - addr points to the start of the address
+ */
+static inline void mangle_address(unsigned char *begin,
+ unsigned char *addr,
+ const struct oct1_map *map,
+ u_int16_t *check)
+{
+ if (map->from == NOCT1(*addr)) {
+ u_int32_t old;
+
+ if (debug)
+ memcpy(&old, (unsigned char *)addr, sizeof(old));
+
+ *addr = map->to;
+
+ /* Update UDP checksum if being used */
+ if (*check) {
+ unsigned char odd = !((addr - begin) % 2);
+
+ fast_csum((unsigned char *)check,
+ &map->from, &map->to, odd);
+
+ }
+
+ if (debug)
+ printk(KERN_DEBUG "bsalg: mapped %u.%u.%u.%u to "
+ "%u.%u.%u.%u\n", NIPQUAD(old), NIPQUAD(*addr));
+ }
+}
+
+/*
+ * Parse and mangle SNMP message according to mapping.
+ * (And this is only the 'basic' method.)
+ */
+static int snmp_parse_mangle(unsigned char *msg,
+ u_int16_t len,
+ const struct oct1_map *map,
+ u_int16_t *check)
+{
+ unsigned char *eoc, *end;
+ unsigned int cls, con, tag, vers, pdutype;
+ struct asn1_ctx ctx;
+ struct asn1_octstr comm;
+ struct snmp_object **obj;
+
+ if (debug > 1)
+ hex_dump(msg, len);
+
+ asn1_open(&ctx, msg, len);
+
+ /*
+ * Start of SNMP message.
+ */
+ if (!asn1_header_decode(&ctx, &eoc, &cls, &con, &tag))
+ return 0;
+ if (cls != ASN1_UNI || con != ASN1_CON || tag != ASN1_SEQ)
+ return 0;
+
+ /*
+ * Version 1 or 2 handled.
+ */
+ if (!asn1_header_decode(&ctx, &end, &cls, &con, &tag))
+ return 0;
+ if (cls != ASN1_UNI || con != ASN1_PRI || tag != ASN1_INT)
+ return 0;
+ if (!asn1_uint_decode (&ctx, end, &vers))
+ return 0;
+ if (debug > 1)
+ printk(KERN_DEBUG "bsalg: snmp version: %u\n", vers + 1);
+ if (vers > 1)
+ return 1;
+
+ /*
+ * Community.
+ */
+ if (!asn1_header_decode (&ctx, &end, &cls, &con, &tag))
+ return 0;
+ if (cls != ASN1_UNI || con != ASN1_PRI || tag != ASN1_OTS)
+ return 0;
+ if (!asn1_octets_decode(&ctx, end, &comm.data, &comm.len))
+ return 0;
+ if (debug > 1) {
+ unsigned int i;
+
+ printk(KERN_DEBUG "bsalg: community: ");
+ for (i = 0; i < comm.len; i++)
+ printk("%c", comm.data[i]);
+ printk("\n");
+ }
+ kfree(comm.data);
+
+ /*
+ * PDU type
+ */
+ if (!asn1_header_decode(&ctx, &eoc, &cls, &con, &pdutype))
+ return 0;
+ if (cls != ASN1_CTX || con != ASN1_CON)
+ return 0;
+ if (debug > 1) {
+ unsigned char *pdus[] = {
+ [SNMP_PDU_GET] = "get",
+ [SNMP_PDU_NEXT] = "get-next",
+ [SNMP_PDU_RESPONSE] = "response",
+ [SNMP_PDU_SET] = "set",
+ [SNMP_PDU_TRAP1] = "trapv1",
+ [SNMP_PDU_BULK] = "bulk",
+ [SNMP_PDU_INFORM] = "inform",
+ [SNMP_PDU_TRAP2] = "trapv2"
+ };
+
+ if (pdutype > SNMP_PDU_TRAP2)
+ printk(KERN_DEBUG "bsalg: bad pdu type %u\n", pdutype);
+ else
+ printk(KERN_DEBUG "bsalg: pdu: %s\n", pdus[pdutype]);
+ }
+ if (pdutype != SNMP_PDU_RESPONSE &&
+ pdutype != SNMP_PDU_TRAP1 && pdutype != SNMP_PDU_TRAP2)
+ return 1;
+
+ /*
+ * Request header or v1 trap
+ */
+ if (pdutype == SNMP_PDU_TRAP1) {
+ struct snmp_v1_trap trap;
+ unsigned char ret = snmp_trap_decode(&ctx, &trap, map, check);
+
+ /* Discard trap allocations regardless */
+ kfree(trap.id);
+ kfree((unsigned long *)trap.ip_address);
+
+ if (!ret)
+ return ret;
+
+ } else {
+ struct snmp_request req;
+
+ if (!snmp_request_decode(&ctx, &req))
+ return 0;
+
+ if (debug > 1)
+ printk(KERN_DEBUG "bsalg: request: id=0x%lx error_status=%u "
+ "error_index=%u\n", req.id, req.error_status,
+ req.error_index);
+ }
+
+ /*
+ * Loop through objects, look for IP addresses to mangle.
+ */
+ if (!asn1_header_decode(&ctx, &eoc, &cls, &con, &tag))
+ return 0;
+
+ if (cls != ASN1_UNI || con != ASN1_CON || tag != ASN1_SEQ)
+ return 0;
+
+ obj = kmalloc(sizeof(struct snmp_object), GFP_ATOMIC);
+ if (obj == NULL) {
+ if (net_ratelimit())
+ printk(KERN_WARNING "OOM in bsalg(%d)\n", __LINE__);
+ return 0;
+ }
+
+ while (!asn1_eoc_decode(&ctx, eoc)) {
+ unsigned int i;
+
+ if (!snmp_object_decode(&ctx, obj)) {
+ if (*obj) {
+ if ((*obj)->id)
+ kfree((*obj)->id);
+ kfree(*obj);
+ }
+ kfree(obj);
+ return 0;
+ }
+
+ if (debug > 1) {
+ printk(KERN_DEBUG "bsalg: object: ");
+ for (i = 0; i < (*obj)->id_len; i++) {
+ if (i > 0)
+ printk(".");
+ printk("%lu", (*obj)->id[i]);
+ }
+ printk(": type=%u\n", (*obj)->type);
+
+ }
+
+ if ((*obj)->type == SNMP_IPADDR)
+ mangle_address(ctx.begin, ctx.pointer - 4 , map, check);
+
+ kfree((*obj)->id);
+ kfree(*obj);
+ }
+ kfree(obj);
+
+ if (!asn1_eoc_decode(&ctx, eoc))
+ return 0;
+
+ return 1;
+}
+
+/*****************************************************************************
+ *
+ * NAT routines.
+ *
+ *****************************************************************************/
+
+/*
+ * SNMP translation routine.
+ */
+static int snmp_translate(struct ip_conntrack *ct,
+ struct ip_nat_info *info,
+ enum ip_conntrack_info ctinfo,
+ unsigned int hooknum,
+ struct sk_buff **pskb)
+{
+ struct iphdr *iph = (*pskb)->nh.iph;
+ struct udphdr *udph = (struct udphdr *)((u_int32_t *)iph + iph->ihl);
+ u_int16_t udplen = ntohs(udph->len);
+ u_int16_t paylen = udplen - sizeof(struct udphdr);
+ int dir = CTINFO2DIR(ctinfo);
+ struct oct1_map map;
+
+ /*
+ * Determine mapping for application layer addresses based
+ * on NAT manipulations for the packet.
+ */
+ if (dir == IP_CT_DIR_ORIGINAL) {
+ /* SNAT traps */
+ map.from = NOCT1(ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.ip);
+ map.to = NOCT1(ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip);
+ } else {
+ /* DNAT replies */
+ map.from = NOCT1(ct->tuplehash[IP_CT_DIR_REPLY].tuple.src.ip);
+ map.to = NOCT1(ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.ip);
+ }
+
+ if (map.from == map.to)
+ return NF_ACCEPT;
+
+ if (!snmp_parse_mangle((unsigned char *)udph + sizeof(struct udphdr),
+ paylen, &map, &udph->check)) {
+ printk(KERN_WARNING "bsalg: parser failed\n");
+ return NF_DROP;
+ }
+ return NF_ACCEPT;
+}
+
+/*
+ * NAT helper function, packets arrive here from NAT code.
+ */
+static unsigned int nat_help(struct ip_conntrack *ct,
+ struct ip_nat_info *info,
+ enum ip_conntrack_info ctinfo,
+ unsigned int hooknum,
+ struct sk_buff **pskb)
+{
+ int dir = CTINFO2DIR(ctinfo);
+ struct iphdr *iph = (*pskb)->nh.iph;
+ struct udphdr *udph = (struct udphdr *)((u_int32_t *)iph + iph->ihl);
+
+ spin_lock_bh(&snmp_lock);
+
+ /*
+ * Translate snmp replies on pre-routing (DNAT) and snmp traps
+ * on post routing (SNAT).
+ */
+ if (!((dir == IP_CT_DIR_REPLY && hooknum == NF_IP_PRE_ROUTING &&
+ udph->source == __constant_ntohs(SNMP_PORT)) ||
+ (dir == IP_CT_DIR_ORIGINAL && hooknum == NF_IP_POST_ROUTING &&
+ udph->dest == __constant_ntohs(SNMP_TRAP_PORT)))) {
+ spin_unlock_bh(&snmp_lock);
+ return NF_ACCEPT;
+ }
+
+ if (debug > 1) {
+ printk(KERN_DEBUG "bsalg: dir=%s hook=%d manip=%s len=%d "
+ "src=%u.%u.%u.%u:%u dst=%u.%u.%u.%u:%u "
+ "osrc=%u.%u.%u.%u odst=%u.%u.%u.%u "
+ "rsrc=%u.%u.%u.%u rdst=%u.%u.%u.%u "
+ "\n",
+ dir == IP_CT_DIR_REPLY ? "reply" : "orig", hooknum,
+ HOOK2MANIP(hooknum) == IP_NAT_MANIP_SRC ? "snat" :
+ "dnat", (*pskb)->len,
+ NIPQUAD(iph->saddr), ntohs(udph->source),
+ NIPQUAD(iph->daddr), ntohs(udph->dest),
+ NIPQUAD(ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.ip),
+ NIPQUAD(ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.ip),
+ NIPQUAD(ct->tuplehash[IP_CT_DIR_REPLY].tuple.src.ip),
+ NIPQUAD(ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip));
+ }
+
+ /*
+ * Make sure the packet length is ok. So far, we were only guaranteed
+ * to have a valid length IP header plus 8 bytes, which means we have
+ * enough room for a UDP header. Just verify the UDP length field so we
+ * can mess around with the payload.
+ */
+ if (ntohs(udph->len) == (*pskb)->len - (iph->ihl << 2)) {
+ int ret = snmp_translate(ct, info, ctinfo, hooknum, pskb);
+ spin_unlock_bh(&snmp_lock);
+ return ret;
+ }
+
+ if (net_ratelimit())
+ printk(KERN_WARNING "bsalg: dropping malformed packet "
+ "src=%u.%u.%u.%u dst=%u.%u.%u.%u\n",
+ NIPQUAD(iph->saddr), NIPQUAD(iph->daddr));
+ spin_unlock_bh(&snmp_lock);
+ return NF_DROP;
+}
+
+static struct ip_nat_helper snmp = { { NULL, NULL },
+ { { 0, { __constant_htons(SNMP_PORT) } },
+ { 0, { 0 }, IPPROTO_UDP } },
+ { { 0, { 0xFFFF } },
+ { 0, { 0 }, 0xFFFF } },
+ nat_help, "snmp" };
+
+static struct ip_nat_helper snmp_trap = { { NULL, NULL },
+ { { 0, { __constant_htons(SNMP_TRAP_PORT) } },
+ { 0, { 0 }, IPPROTO_UDP } },
+ { { 0, { 0xFFFF } },
+ { 0, { 0 }, 0xFFFF } },
+ nat_help, "snmp_trap" };
+
+/*****************************************************************************
+ *
+ * Module stuff.
+ *
+ *****************************************************************************/
+
+static int __init init(void)
+{
+ int ret = 0;
+
+ ret = ip_nat_helper_register(&snmp);
+ if (ret < 0)
+ return ret;
+ ret = ip_nat_helper_register(&snmp_trap);
+ if (ret < 0) {
+ ip_nat_helper_unregister(&snmp);
+ return ret;
+ }
+ return ret;
+}
+
+static void __exit fini(void)
+{
+ ip_nat_helper_unregister(&snmp);
+ ip_nat_helper_unregister(&snmp_trap);
+ br_write_lock_bh(BR_NETPROTO_LOCK);
+ br_write_unlock_bh(BR_NETPROTO_LOCK);
+}
+
+module_init(init);
+module_exit(fini);
+
+MODULE_PARM(debug, "i");
+MODULE_DESCRIPTION("Basic SNMP Application Layer Gateway");
+MODULE_LICENSE("GPL");
if (len != sizeof(tmp) + tmp.size)
return -ENOPROTOOPT;
+ /* Pedantry: prevent them from hitting BUG() in vmalloc.c --RR */
+ if ((SMP_ALIGN(tmp.size) >> PAGE_SHIFT) + 2 > num_physpages)
+ return -ENOMEM;
+
newinfo = vmalloc(sizeof(struct ipt_table_info)
+ SMP_ALIGN(tmp.size) * smp_num_cpus);
if (!newinfo)
--- /dev/null
+/* Kernel module to match packet length. */
+#include <linux/module.h>
+#include <linux/skbuff.h>
+
+#include <linux/netfilter_ipv4/ipt_length.h>
+#include <linux/netfilter_ipv4/ip_tables.h>
+
+MODULE_AUTHOR("James Morris <jmorris@intercode.com.au>");
+MODULE_DESCRIPTION("IP tables packet length matching module");
+MODULE_LICENSE("GPL");
+
+static int
+match(const struct sk_buff *skb,
+ const struct net_device *in,
+ const struct net_device *out,
+ const void *matchinfo,
+ int offset,
+ const void *hdr,
+ u_int16_t datalen,
+ int *hotdrop)
+{
+ const struct ipt_length_info *info = matchinfo;
+ u_int16_t pktlen = ntohs(skb->nh.iph->tot_len);
+
+ return (pktlen >= info->min && pktlen <= info->max) ^ info->invert;
+}
+
+static int
+checkentry(const char *tablename,
+ const struct ipt_ip *ip,
+ void *matchinfo,
+ unsigned int matchsize,
+ unsigned int hook_mask)
+{
+ if (matchsize != IPT_ALIGN(sizeof(struct ipt_length_info)))
+ return 0;
+
+ return 1;
+}
+
+static struct ipt_match length_match
+= { { NULL, NULL }, "length", &match, &checkentry, NULL, THIS_MODULE };
+
+static int __init init(void)
+{
+ return ipt_register_match(&length_match);
+}
+
+static void __exit fini(void)
+{
+ ipt_unregister_match(&length_match);
+}
+
+module_init(init);
+module_exit(fini);
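The match above is a single range test XORed with the invert flag, so the negated form (`! --length`) falls out for free. A minimal user-space sketch of that predicate — the `length_info` struct here is a hypothetical stand-in for the fields of `struct ipt_length_info`:

```c
/* Hypothetical stand-in for the fields of struct ipt_length_info. */
struct length_info {
	unsigned short min;	/* smallest total length that matches */
	unsigned short max;	/* largest total length that matches */
	unsigned char invert;	/* non-zero (0 or 1) flips the result */
};

/* Same predicate as the match() hook: in-range, optionally inverted. */
static int length_matches(unsigned short pktlen,
			  const struct length_info *info)
{
	return (pktlen >= info->min && pktlen <= info->max) ^ info->invert;
}
```

The XOR idiom relies on both operands being 0 or 1, which is why the range comparison is written as a boolean expression rather than a count.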
--- /dev/null
+/* IP tables module for matching the value of the TTL
+ *
+ * ipt_ttl.c,v 1.5 2000/11/13 11:16:08 laforge Exp
+ *
+ * (C) 2000,2001 by Harald Welte <laforge@gnumonks.org>
+ *
+ * This software is distributed under the terms of the GNU GPL
+ */
+
+#include <linux/module.h>
+#include <linux/skbuff.h>
+
+#include <linux/netfilter_ipv4/ipt_ttl.h>
+#include <linux/netfilter_ipv4/ip_tables.h>
+
+MODULE_AUTHOR("Harald Welte <laforge@gnumonks.org>");
+MODULE_DESCRIPTION("IP tables TTL matching module");
+MODULE_LICENSE("GPL");
+
+static int match(const struct sk_buff *skb, const struct net_device *in,
+ const struct net_device *out, const void *matchinfo,
+ int offset, const void *hdr, u_int16_t datalen,
+ int *hotdrop)
+{
+ const struct ipt_ttl_info *info = matchinfo;
+ const struct iphdr *iph = skb->nh.iph;
+
+ switch (info->mode) {
+ case IPT_TTL_EQ:
+ return (iph->ttl == info->ttl);
+ break;
+ case IPT_TTL_NE:
+ return (!(iph->ttl == info->ttl));
+ break;
+ case IPT_TTL_LT:
+ return (iph->ttl < info->ttl);
+ break;
+ case IPT_TTL_GT:
+ return (iph->ttl > info->ttl);
+ break;
+ default:
+ printk(KERN_WARNING "ipt_ttl: unknown mode %d\n",
+ info->mode);
+ return 0;
+ }
+
+ return 0;
+}
+
+static int checkentry(const char *tablename, const struct ipt_ip *ip,
+ void *matchinfo, unsigned int matchsize,
+ unsigned int hook_mask)
+{
+ if (matchsize != IPT_ALIGN(sizeof(struct ipt_ttl_info)))
+ return 0;
+
+ return 1;
+}
+
+static struct ipt_match ttl_match = { { NULL, NULL }, "ttl", &match,
+ &checkentry, NULL, THIS_MODULE };
+
+static int __init init(void)
+{
+ return ipt_register_match(&ttl_match);
+}
+
+static void __exit fini(void)
+{
+ ipt_unregister_match(&ttl_match);
+
+}
+
+module_init(init);
+module_exit(fini);
*
* ROUTE - implementation of the IP router.
*
- * Version: $Id: route.c,v 1.100 2001/10/15 12:34:50 davem Exp $
+ * Version: $Id: route.c,v 1.101 2001/10/20 00:00:11 davem Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
entry_size: sizeof(struct rtable),
};
-#ifdef CONFIG_INET_ECN
#define ECN_OR_COST(class) TC_PRIO_##class
-#else
-#define ECN_OR_COST(class) TC_PRIO_FILLER
-#endif
__u8 ip_tos2prio[16] = {
TC_PRIO_BESTEFFORT,
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*
- * $Id: syncookies.c,v 1.15 2001/10/15 12:34:50 davem Exp $
+ * $Id: syncookies.c,v 1.17 2001/10/26 14:55:41 davem Exp $
*
* Missing: IPv6 support.
*/
extern int sysctl_tcp_syncookies;
-static unsigned long tcp_lastsynq_overflow;
-
/*
* This table has to be sorted and terminated with (__u16)-1.
* XXX generate a better table.
int mssind;
const __u16 mss = *mssp;
- tcp_lastsynq_overflow = jiffies;
+
+ sk->tp_pinfo.af_tcp.last_synq_overflow = jiffies;
+
/* XXX sort msstab[] by probability? Binary search? */
for (mssind = 0; mss > msstab[mssind + 1]; mssind++)
;
* Check if a ack sequence number is a valid syncookie.
* Return the decoded mss if it is, or 0 if not.
*/
-static inline int cookie_check(struct sk_buff *skb, __u32 cookie)
+static inline int cookie_check(struct sk_buff *skb, __u32 cookie)
{
__u32 seq;
__u32 mssind;
- if ((jiffies - tcp_lastsynq_overflow) > TCP_TIMEOUT_INIT)
- return 0;
-
seq = ntohl(skb->h.th->seq)-1;
mssind = check_tcp_syn_cookie(cookie,
skb->nh.iph->saddr, skb->nh.iph->daddr,
if (!sysctl_tcp_syncookies || !skb->h.th->ack)
goto out;
- mss = cookie_check(skb, cookie);
- if (!mss) {
+ if (time_after(jiffies, sk->tp_pinfo.af_tcp.last_synq_overflow + TCP_TIMEOUT_INIT) ||
+ (mss = cookie_check(skb, cookie)) == 0) {
NET_INC_STATS_BH(SyncookiesFailed);
goto out;
}
/*
* sysctl_net_ipv4.c: sysctl interface to net IPV4 subsystem.
*
- * $Id: sysctl_net_ipv4.c,v 1.49 2001/08/22 20:38:41 davem Exp $
+ * $Id: sysctl_net_ipv4.c,v 1.50 2001/10/20 00:00:11 davem Exp $
*
* Begun April 1, 1996, Mike Shaver.
* Added /proc/sys/net/ipv4 directory entry (empty =) ). [MS]
&sysctl_tcp_fack, sizeof(int), 0644, NULL, &proc_dointvec},
{NET_TCP_REORDERING, "tcp_reordering",
&sysctl_tcp_reordering, sizeof(int), 0644, NULL, &proc_dointvec},
-#ifdef CONFIG_INET_ECN
{NET_TCP_ECN, "tcp_ecn",
&sysctl_tcp_ecn, sizeof(int), 0644, NULL, &proc_dointvec},
-#endif
{NET_TCP_DSACK, "tcp_dsack",
&sysctl_tcp_dsack, sizeof(int), 0644, NULL, &proc_dointvec},
{NET_TCP_MEM, "tcp_mem",
*
* Implementation of the Transmission Control Protocol(TCP).
*
- * Version: $Id: tcp.c,v 1.213 2001/10/10 23:54:50 davem Exp $
+ * Version: $Id: tcp.c,v 1.214 2001/10/20 00:00:11 davem Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
info.tcpi_snd_wscale = 0;
info.tcpi_rcv_wscale = 0;
}
-#ifdef CONFIG_INET_ECN
if (tp->ecn_flags&TCP_ECN_OK)
info.tcpi_options |= TCPI_OPT_ECN;
-#endif
info.tcpi_rto = (1000000*tp->rto)/HZ;
info.tcpi_ato = (1000000*tp->ack.ato)/HZ;
*
* Implementation of the Transmission Control Protocol(TCP).
*
- * Version: $Id: tcp_input.c,v 1.237 2001/09/21 21:27:34 davem Exp $
+ * Version: $Id: tcp_input.c,v 1.238 2001/10/20 00:00:11 davem Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
#include <net/inet_common.h>
#include <linux/ipsec.h>
-
-/* These are on by default so the code paths get tested.
- * For the final 2.2 this may be undone at our discretion. -DaveM
- */
int sysctl_tcp_timestamps = 1;
int sysctl_tcp_window_scaling = 1;
int sysctl_tcp_sack = 1;
*
* Implementation of the Transmission Control Protocol(TCP).
*
- * Version: $Id: tcp_ipv4.c,v 1.234 2001/10/18 09:49:08 davem Exp $
+ * Version: $Id: tcp_ipv4.c,v 1.235 2001/10/26 14:51:13 davem Exp $
*
* IPv4 specific functions
*
}
}
-static __inline__ void __tcp_v4_hash(struct sock *sk)
+static __inline__ void __tcp_v4_hash(struct sock *sk,const int listen_possible)
{
struct sock **skp;
rwlock_t *lock;
BUG_TRAP(sk->pprev==NULL);
- if(sk->state == TCP_LISTEN) {
+ if(listen_possible && sk->state == TCP_LISTEN) {
skp = &tcp_listening_hash[tcp_sk_listen_hashfn(sk)];
lock = &tcp_lhash_lock;
tcp_listen_wlock();
sk->pprev = skp;
sock_prot_inc_use(sk->prot);
write_unlock(lock);
- if (sk->state == TCP_LISTEN)
+ if (listen_possible && sk->state == TCP_LISTEN)
wake_up(&tcp_lhash_wait);
}
{
if (sk->state != TCP_CLOSE) {
local_bh_disable();
- __tcp_v4_hash(sk);
+ __tcp_v4_hash(sk, 1);
local_bh_enable();
}
}
{
rwlock_t *lock;
+ if (!sk->pprev)
+ goto ende;
+
if (sk->state == TCP_LISTEN) {
local_bh_disable();
tcp_listen_wlock();
sock_prot_dec_use(sk->prot);
}
write_unlock_bh(lock);
+
+ ende:
if (sk->state == TCP_LISTEN)
wake_up(&tcp_lhash_wait);
}
skb->h.th->source);
}
-static int tcp_v4_check_established(struct sock *sk)
+/* called with local bh disabled */
+static int __tcp_v4_check_established(struct sock *sk, __u16 lport)
{
u32 daddr = sk->rcv_saddr;
u32 saddr = sk->daddr;
int dif = sk->bound_dev_if;
TCP_V4_ADDR_COOKIE(acookie, saddr, daddr)
- __u32 ports = TCP_COMBINED_PORTS(sk->dport, sk->num);
- int hash = tcp_hashfn(daddr, sk->num, saddr, sk->dport);
+ __u32 ports = TCP_COMBINED_PORTS(sk->dport, lport);
+ int hash = tcp_hashfn(daddr, lport, saddr, sk->dport);
struct tcp_ehash_bucket *head = &tcp_ehash[hash];
struct sock *sk2, **skp;
struct tcp_tw_bucket *tw;
- write_lock_bh(&head->lock);
+ write_lock(&head->lock);
/* Check TIME-WAIT sockets first. */
for(skp = &(head + tcp_ehash_size)->chain; (sk2=*skp) != NULL;
sk->pprev = skp;
sk->hashent = hash;
sock_prot_inc_use(sk->prot);
- write_unlock_bh(&head->lock);
+ write_unlock(&head->lock);
if (tw) {
/* Silly. Should hash-dance instead... */
- local_bh_disable();
tcp_tw_deschedule(tw);
tcp_timewait_kill(tw);
NET_INC_STATS_BH(TimeWaitRecycled);
- local_bh_enable();
tcp_tw_put(tw);
}
return 0;
not_unique:
- write_unlock_bh(&head->lock);
+ write_unlock(&head->lock);
return -EADDRNOTAVAIL;
}
-/* Hash SYN-SENT socket to established hash table after
- * checking that it is unique. Note, that without kernel lock
- * we MUST make these two operations atomically.
- *
- * Optimization: if it is bound and tcp_bind_bucket has the only
- * owner (us), we need not to scan established bucket.
- */
-
-int tcp_v4_hash_connecting(struct sock *sk)
+/*
+ * Bind a port for a connect operation and hash it.
+ */
+static int tcp_v4_hash_connect(struct sock *sk, struct sockaddr_in *dst)
{
unsigned short snum = sk->num;
- struct tcp_bind_hashbucket *head = &tcp_bhash[tcp_bhashfn(snum)];
- struct tcp_bind_bucket *tb = (struct tcp_bind_bucket *)sk->prev;
+ struct tcp_bind_hashbucket *head;
+ struct tcp_bind_bucket *tb;
+
+ if (snum == 0) {
+ int rover;
+ int low = sysctl_local_port_range[0];
+ int high = sysctl_local_port_range[1];
+ int remaining = (high - low) + 1;
+
+ local_bh_disable();
+ spin_lock(&tcp_portalloc_lock);
+ rover = tcp_port_rover;
+
+ do {
+ rover++;
+ if ((rover < low) || (rover > high))
+ rover = low;
+ head = &tcp_bhash[tcp_bhashfn(rover)];
+ spin_lock(&head->lock);
+
+ /* Does not bother with rcv_saddr checks,
+ * because the established check is already
+ * unique enough.
+ */
+ for (tb = head->chain; tb; tb = tb->next) {
+ if (tb->port == rover) {
+ if (!tb->owners)
+ goto ok;
+ if (!tb->fastreuse)
+ goto next_port;
+ if (!__tcp_v4_check_established(sk,rover))
+ goto ok;
+ goto next_port;
+ }
+ }
+
+ tb = tcp_bucket_create(head, rover);
+ if (!tb) {
+ spin_unlock(&head->lock);
+ break;
+ }
+ goto ok;
+ next_port:
+ spin_unlock(&head->lock);
+ } while (--remaining > 0);
+ tcp_port_rover = rover;
+
+ spin_unlock(&tcp_portalloc_lock);
+ local_bh_enable();
+
+ return -EADDRNOTAVAIL;
+
+ ok:
+ /* All locks still held and bhs disabled */
+ tcp_port_rover = rover;
+ tcp_bind_hash(sk, tb, rover);
+ sk->sport = htons(rover);
+ spin_unlock(&tcp_portalloc_lock);
+ __tcp_v4_hash(sk, 0);
+ /* fastreuse state of tb is never changed in connect */
+ spin_unlock(&head->lock);
+ local_bh_enable();
+ return 0;
+ }
+
+ head = &tcp_bhash[tcp_bhashfn(snum)];
+ tb = (struct tcp_bind_bucket *)sk->prev;
spin_lock_bh(&head->lock);
if (tb->owners == sk && sk->bind_next == NULL) {
- __tcp_v4_hash(sk);
+ __tcp_v4_hash(sk, 0);
spin_unlock_bh(&head->lock);
return 0;
} else {
- spin_unlock_bh(&head->lock);
-
+ int ret;
+ spin_unlock(&head->lock);
/* No definite answer... Walk to established hash table */
- return tcp_v4_check_established(sk);
+ ret = __tcp_v4_check_established(sk, snum);
+ local_bh_enable();
+ return ret;
}
}
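The anonymous-port search above starts one past the last port handed out (`tcp_port_rover`) and wraps within the sysctl-configured local port range. The wrap rule in isolation looks like this — `next_port` is an illustrative name, not a kernel symbol:

```c
/* Advance the rover by one port, wrapping into [low, high], as in
 * the do/while loop of tcp_v4_hash_connect. */
static int next_port(int rover, int low, int high)
{
	rover++;
	if (rover < low || rover > high)
		rover = low;
	return rover;
}
```

Starting from the previous rover rather than from `low` each time spreads allocations across the range, so a just-released port is not immediately reused.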
{
struct tcp_opt *tp = &(sk->tp_pinfo.af_tcp);
struct sockaddr_in *usin = (struct sockaddr_in *) uaddr;
- struct sk_buff *buff;
struct rtable *rt;
u32 daddr, nexthop;
int tmp;
if (!sk->protinfo.af_inet.opt || !sk->protinfo.af_inet.opt->srr)
daddr = rt->rt_dst;
- err = -ENOBUFS;
- buff = alloc_skb(MAX_TCP_HEADER + 15, sk->allocation);
-
- if (buff == NULL)
- goto failure;
-
if (!sk->saddr)
sk->saddr = rt->rt_src;
sk->rcv_saddr = sk->saddr;
tp->mss_clamp = 536;
- err = tcp_connect(sk, buff);
- if (err == 0)
- return 0;
+ /* Initialise common fields */
+ tcp_connect_init(sk);
+
+ /* Socket identity change complete, no longer
+ * in TCP_CLOSE, so enter ourselves into the
+ * hash tables.
+ */
+ tcp_set_state(sk,TCP_SYN_SENT);
+ err = tcp_v4_hash_connect(sk, usin);
+ if (!err) {
+ struct sk_buff *buff;
+
+ err = -ENOBUFS;
+ buff = alloc_skb(MAX_TCP_HEADER + 15, sk->allocation);
+ if (buff != NULL) {
+ tcp_connect_send(sk, buff);
+ return 0;
+ }
+ }
-failure:
+ tcp_set_state(sk, TCP_CLOSE);
__sk_dst_reset(sk);
sk->route_caps = 0;
sk->dport = 0;
newtp->advmss = dst->advmss;
tcp_initialize_rcv_mss(newsk);
- __tcp_v4_hash(newsk);
+ __tcp_v4_hash(newsk, 0);
__tcp_inherit_port(sk, newsk);
return newsk;
tcp_v4_rebuild_header,
tcp_v4_conn_request,
tcp_v4_syn_recv_sock,
- tcp_v4_hash_connecting,
tcp_v4_remember_stamp,
sizeof(struct iphdr),
*
* Implementation of the Transmission Control Protocol(TCP).
*
- * Version: $Id: tcp_output.c,v 1.142 2001/09/21 21:27:34 davem Exp $
+ * Version: $Id: tcp_output.c,v 1.143 2001/10/26 14:51:13 davem Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
return skb;
}
-int tcp_connect(struct sock *sk, struct sk_buff *buff)
+/*
+ * Do all connect socket setups that can be done AF independent.
+ * Could be inlined.
+ */
+void tcp_connect_init(struct sock *sk)
{
struct dst_entry *dst = __sk_dst_get(sk);
struct tcp_opt *tp = &(sk->tp_pinfo.af_tcp);
- /* Reserve space for headers. */
- skb_reserve(buff, MAX_TCP_HEADER);
-
/* We'll fix this up when we get a response from the other end.
* See tcp_input.c:tcp_rcv_state_process case TCP_SYN_SENT.
*/
tp->rcv_ssthresh = tp->rcv_wnd;
- /* Socket identity change complete, no longer
- * in TCP_CLOSE, so enter ourselves into the
- * hash tables.
- */
- tcp_set_state(sk,TCP_SYN_SENT);
- if (tp->af_specific->hash_connecting(sk))
- goto err_out;
-
sk->err = 0;
sk->done = 0;
tp->snd_wnd = 0;
tp->rto = TCP_TIMEOUT_INIT;
tp->retransmits = 0;
tcp_clear_retrans(tp);
+}
+
+/*
+ * Build a SYN and send it off.
+ */
+void tcp_connect_send(struct sock *sk, struct sk_buff *buff)
+{
+ struct tcp_opt *tp = &(sk->tp_pinfo.af_tcp);
+
+ /* Reserve space for headers. */
+ skb_reserve(buff, MAX_TCP_HEADER);
TCP_SKB_CB(buff)->flags = TCPCB_FLAG_SYN;
TCP_ECN_send_syn(tp, buff);
/* Timer for repeating the SYN until an answer. */
tcp_reset_xmit_timer(sk, TCP_TIME_RETRANS, tp->rto);
- return 0;
-
-err_out:
- tcp_set_state(sk,TCP_CLOSE);
- kfree_skb(buff);
- return -EADDRNOTAVAIL;
}
/* Send out a delayed ack, the caller does the policy checking
if [ "$CONFIG_IP6_NF_IPTABLES" != "n" ]; then
# The simple matches.
dep_tristate ' limit match support' CONFIG_IP6_NF_MATCH_LIMIT $CONFIG_IP6_NF_IPTABLES
+ dep_tristate ' MAC address match support' CONFIG_IP6_NF_MATCH_MAC $CONFIG_IP6_NF_IPTABLES
+ dep_tristate ' Multiple port match support' CONFIG_IP6_NF_MATCH_MULTIPORT $CONFIG_IP6_NF_IPTABLES
+ if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
+ dep_tristate ' Owner match support (EXPERIMENTAL)' CONFIG_IP6_NF_MATCH_OWNER $CONFIG_IP6_NF_IPTABLES
+ fi
# dep_tristate ' MAC address match support' CONFIG_IP6_NF_MATCH_MAC $CONFIG_IP6_NF_IPTABLES
dep_tristate ' netfilter MARK match support' CONFIG_IP6_NF_MATCH_MARK $CONFIG_IP6_NF_IPTABLES
# dep_tristate ' Multiple port match support' CONFIG_IP6_NF_MATCH_MULTIPORT $CONFIG_IP6_NF_IPTABLES
# The targets
dep_tristate ' Packet filtering' CONFIG_IP6_NF_FILTER $CONFIG_IP6_NF_IPTABLES
+ if [ "$CONFIG_IP6_NF_FILTER" != "n" ]; then
+ dep_tristate ' LOG target support' CONFIG_IP6_NF_TARGET_LOG $CONFIG_IP6_NF_FILTER
+ fi
# if [ "$CONFIG_IP6_NF_FILTER" != "n" ]; then
# dep_tristate ' REJECT target support' CONFIG_IP6_NF_TARGET_REJECT $CONFIG_IP6_NF_FILTER
obj-$(CONFIG_IP6_NF_MATCH_MARK) += ip6t_mark.o
obj-$(CONFIG_IP6_NF_MATCH_MAC) += ip6t_mac.o
obj-$(CONFIG_IP6_NF_MATCH_MULTIPORT) += ip6t_multiport.o
+obj-$(CONFIG_IP6_NF_MATCH_OWNER) += ip6t_owner.o
obj-$(CONFIG_IP6_NF_FILTER) += ip6table_filter.o
obj-$(CONFIG_IP6_NF_MANGLE) += ip6table_mangle.o
obj-$(CONFIG_IP6_NF_TARGET_MARK) += ip6t_MARK.o
+obj-$(CONFIG_IP6_NF_TARGET_LOG) += ip6t_LOG.o
include $(TOPDIR)/Rules.make
if (copy_from_user(&tmp, user, sizeof(tmp)) != 0)
return -EFAULT;
+ /* Pedantry: prevent them from hitting BUG() in vmalloc.c --RR */
+ if ((SMP_ALIGN(tmp.size) >> PAGE_SHIFT) + 2 > num_physpages)
+ return -ENOMEM;
+
newinfo = vmalloc(sizeof(struct ip6t_table_info)
+ SMP_ALIGN(tmp.size) * smp_num_cpus);
if (!newinfo)
--- /dev/null
+/*
+ * This is a module which is used for logging packets.
+ */
+#include <linux/module.h>
+#include <linux/skbuff.h>
+#include <linux/ip.h>
+#include <linux/spinlock.h>
+#include <linux/icmpv6.h>
+#include <net/udp.h>
+#include <net/tcp.h>
+#include <net/ipv6.h>
+#include <linux/netfilter_ipv6/ip6_tables.h>
+
+MODULE_AUTHOR("Jan Rekorajski <baggins@pld.org.pl>");
+MODULE_DESCRIPTION("IP6 tables LOG target module");
+MODULE_LICENSE("GPL");
+
+struct in_device;
+#include <net/route.h>
+#include <linux/netfilter_ipv6/ip6t_LOG.h>
+
+#if 0
+#define DEBUGP printk
+#else
+#define DEBUGP(format, args...)
+#endif
+
+#define NIP6(addr) \
+ ntohs((addr).s6_addr16[0]), \
+ ntohs((addr).s6_addr16[1]), \
+ ntohs((addr).s6_addr16[2]), \
+ ntohs((addr).s6_addr16[3]), \
+ ntohs((addr).s6_addr16[4]), \
+ ntohs((addr).s6_addr16[5]), \
+ ntohs((addr).s6_addr16[6]), \
+ ntohs((addr).s6_addr16[7])
+
+struct esphdr {
+ __u32 spi;
+}; /* FIXME evil kludge */
+
+/* Use lock to serialize, so printks don't overlap */
+static spinlock_t log_lock = SPIN_LOCK_UNLOCKED;
+
+/* takes in current header and pointer to the header */
+/* if another header exists, sets hdrptr to the next header
+ and returns the new header value, else returns 0 */
+static u_int8_t ip6_nexthdr(u_int8_t currenthdr, u_int8_t **hdrptr)
+{
+ u_int8_t hdrlen, nexthdr = 0;
+
+ switch (currenthdr) {
+ case IPPROTO_AH:
+ /* Note: the AH length field counts 32-bit units
+ (minus 2), unlike the other extension headers,
+ which count 64-bit units. See RFC 2402. */
+ nexthdr = **hdrptr;
+ hdrlen = (*hdrptr)[1] * 4 + 8;
+ *hdrptr = *hdrptr + hdrlen;
+ break;
+ /* Length in 64-bit units, not counting the first 8 octets */
+ case IPPROTO_DSTOPTS:
+ case IPPROTO_ROUTING:
+ case IPPROTO_HOPOPTS:
+ nexthdr = **hdrptr;
+ hdrlen = (*hdrptr)[1] * 8 + 8;
+ *hdrptr = *hdrptr + hdrlen;
+ break;
+ case IPPROTO_FRAGMENT:
+ nexthdr = **hdrptr;
+ *hdrptr = *hdrptr + 8;
+ break;
+ }
+ return nexthdr;
+
+}
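The walker above depends on the per-header length encodings: AH's length field counts 32-bit words minus two, the hop-by-hop/routing/destination-options headers count 64-bit units excluding the first 8 octets, and the fragment header is a fixed 8 bytes. The same rules in isolation — `ext_hdr_bytes` is an illustrative sketch, with IANA protocol numbers written out as literals:

```c
/* Total byte length of an IPv6 extension header, given its protocol
 * number and the raw "Hdr Ext Len" octet found at offset 1. */
static unsigned int ext_hdr_bytes(unsigned char proto,
				  unsigned char hdr_ext_len)
{
	switch (proto) {
	case 51:	/* AH: 32-bit words, minus 2 (RFC 2402) */
		return hdr_ext_len * 4u + 8u;
	case 0:		/* hop-by-hop options */
	case 43:	/* routing header */
	case 60:	/* destination options: 64-bit units,
			   excluding the first 8 octets (RFC 2460) */
		return hdr_ext_len * 8u + 8u;
	case 44:	/* fragment header: fixed size */
		return 8u;
	default:	/* not an extension header this walker handles */
		return 0u;
	}
}
```

For example, AH carrying a 96-bit ICV has a length field of 4, giving (4 + 2) * 4 = 24 bytes.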
+
+/* One level of recursion won't kill us */
+static void dump_packet(const struct ip6t_log_info *info,
+ struct ipv6hdr *ipv6h, int recurse)
+{
+ u_int8_t currenthdr = ipv6h->nexthdr;
+ u_int8_t *hdrptr;
+ int fragment;
+
+ /* Max length: 88 "SRC=0000:0000:0000:0000:0000:0000:0000:0000 DST=0000:0000:0000:0000:0000:0000:0000:0000" */
+ printk("SRC=%04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x ", NIP6(ipv6h->saddr));
+ printk("DST=%04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x ", NIP6(ipv6h->daddr));
+
+ /* Max length: 44 "LEN=65535 TC=255 HOPLIMIT=255 FLOWLBL=FFFFF " */
+ printk("LEN=%u TC=%u HOPLIMIT=%u FLOWLBL=%u ",
+ ntohs(ipv6h->payload_len) + sizeof(struct ipv6hdr),
+ (ntohl(*(u_int32_t *)ipv6h) & 0x0ff00000) >> 20,
+ ipv6h->hop_limit,
+ (ntohl(*(u_int32_t *)ipv6h) & 0x000fffff));
+
+ fragment = 0;
+ hdrptr = (u_int8_t *)(ipv6h + 1);
+ while (currenthdr) {
+ if ((currenthdr == IPPROTO_TCP) ||
+ (currenthdr == IPPROTO_UDP) ||
+ (currenthdr == IPPROTO_ICMPV6))
+ break;
+ /* Max length: 48 "OPT (...) " */
+ printk("OPT ( ");
+ switch (currenthdr) {
+ case IPPROTO_FRAGMENT: {
+ struct frag_hdr *fhdr = (struct frag_hdr *)hdrptr;
+
+ /* Max length: 11 "FRAG:65535 " */
+ printk("FRAG:%u ", ntohs(fhdr->frag_off) & 0xFFF8);
+
+ /* Max length: 11 "INCOMPLETE " */
+ if (fhdr->frag_off & __constant_htons(0x0001))
+ printk("INCOMPLETE ");
+
+ printk("ID:%08x ", ntohl(fhdr->identification));
+
+ if (ntohs(fhdr->frag_off) & 0xFFF8)
+ fragment = 1;
+
+ break;
+ }
+ case IPPROTO_DSTOPTS:
+ case IPPROTO_ROUTING:
+ case IPPROTO_HOPOPTS:
+ break;
+ /* Max Length */
+ case IPPROTO_AH:
+ case IPPROTO_ESP:
+ if (info->logflags & IP6T_LOG_IPOPT) {
+ struct esphdr *esph = (struct esphdr *)hdrptr;
+ int esp = (currenthdr == IPPROTO_ESP);
+
+ /* Max length: 4 "ESP " */
+ printk("%s ",esp ? "ESP" : "AH");
+
+ /* Length: 15 "SPI=0xF1234567 " */
+ printk("SPI=0x%x ", ntohl(esph->spi));
+ break;
+ }
+ default:
+ break;
+ }
+ printk(") ");
+ currenthdr = ip6_nexthdr(currenthdr, &hdrptr);
+ }
+
+ switch (currenthdr) {
+ case IPPROTO_TCP: {
+ struct tcphdr *tcph = (struct tcphdr *)hdrptr;
+
+ /* Max length: 10 "PROTO=TCP " */
+ printk("PROTO=TCP ");
+
+ if (fragment)
+ break;
+
+ /* Max length: 20 "SPT=65535 DPT=65535 " */
+ printk("SPT=%u DPT=%u ",
+ ntohs(tcph->source), ntohs(tcph->dest));
+ /* Max length: 30 "SEQ=4294967295 ACK=4294967295 " */
+ if (info->logflags & IP6T_LOG_TCPSEQ)
+ printk("SEQ=%u ACK=%u ",
+ ntohl(tcph->seq), ntohl(tcph->ack_seq));
+ /* Max length: 13 "WINDOW=65535 " */
+ printk("WINDOW=%u ", ntohs(tcph->window));
+ /* Max length: 9 "RES=0x3F " */
+ printk("RES=0x%02x ", (u_int8_t)(ntohl(tcp_flag_word(tcph) & TCP_RESERVED_BITS) >> 22));
+ /* Max length: 36 "URG ACK PSH RST SYN FIN " */
+ if (tcph->urg)
+ printk("URG ");
+ if (tcph->ack)
+ printk("ACK ");
+ if (tcph->psh)
+ printk("PSH ");
+ if (tcph->rst)
+ printk("RST ");
+ if (tcph->syn)
+ printk("SYN ");
+ if (tcph->fin)
+ printk("FIN ");
+ /* Max length: 11 "URGP=65535 " */
+ printk("URGP=%u ", ntohs(tcph->urg_ptr));
+
+ if ((info->logflags & IP6T_LOG_TCPOPT)
+ && tcph->doff * 4 != sizeof(struct tcphdr)) {
+ unsigned int i;
+
+ /* Max length: 127 "OPT (" 15*4*2chars ") " */
+ printk("OPT (");
+ for (i =sizeof(struct tcphdr); i < tcph->doff * 4; i++)
+ printk("%02X", ((u_int8_t *)tcph)[i]);
+ printk(") ");
+ }
+ break;
+ }
+ case IPPROTO_UDP: {
+ struct udphdr *udph = (struct udphdr *)hdrptr;
+
+ /* Max length: 10 "PROTO=UDP " */
+ printk("PROTO=UDP ");
+
+ if (fragment)
+ break;
+
+ /* Max length: 20 "SPT=65535 DPT=65535 " */
+ printk("SPT=%u DPT=%u LEN=%u ",
+ ntohs(udph->source), ntohs(udph->dest),
+ ntohs(udph->len));
+ break;
+ }
+ case IPPROTO_ICMPV6: {
+ struct icmp6hdr *icmp6h = (struct icmp6hdr *)hdrptr;
+
+ /* Max length: 13 "PROTO=ICMPv6 " */
+ printk("PROTO=ICMPv6 ");
+
+ if (fragment)
+ break;
+
+ /* Max length: 18 "TYPE=255 CODE=255 " */
+ printk("TYPE=%u CODE=%u ", icmp6h->icmp6_type, icmp6h->icmp6_code);
+
+ switch (icmp6h->icmp6_type) {
+ case ICMPV6_ECHO_REQUEST:
+ case ICMPV6_ECHO_REPLY:
+ /* Max length: 19 "ID=65535 SEQ=65535 " */
+ printk("ID=%u SEQ=%u ",
+ ntohs(icmp6h->icmp6_identifier),
+ ntohs(icmp6h->icmp6_sequence));
+ break;
+ case ICMPV6_MGM_QUERY:
+ case ICMPV6_MGM_REPORT:
+ case ICMPV6_MGM_REDUCTION:
+ break;
+
+ case ICMPV6_PARAMPROB:
+ /* Max length: 17 "POINTER=ffffffff " */
+ printk("POINTER=%08x ", ntohl(icmp6h->icmp6_pointer));
+ /* Fall through */
+ case ICMPV6_DEST_UNREACH:
+ case ICMPV6_PKT_TOOBIG:
+ case ICMPV6_TIME_EXCEED:
+ /* Max length: 3+maxlen */
+ if (recurse) {
+ printk("[");
+ dump_packet(info, (struct ipv6hdr *)(icmp6h + 1), 0);
+ printk("] ");
+ }
+
+ /* Max length: 10 "MTU=65535 " */
+ if (icmp6h->icmp6_type == ICMPV6_PKT_TOOBIG)
+ printk("MTU=%u ", ntohl(icmp6h->icmp6_mtu));
+ }
+ break;
+ }
+ /* Max length: 10 "PROTO=255 " */
+ default:
+ printk("PROTO=%u ", currenthdr);
+ }
+}
+
+static unsigned int
+ip6t_log_target(struct sk_buff **pskb,
+ unsigned int hooknum,
+ const struct net_device *in,
+ const struct net_device *out,
+ const void *targinfo,
+ void *userinfo)
+{
+ struct ipv6hdr *ipv6h = (*pskb)->nh.ipv6h;
+ const struct ip6t_log_info *loginfo = targinfo;
+ char level_string[4] = "< >";
+
+ level_string[1] = '0' + (loginfo->level % 8);
+ spin_lock_bh(&log_lock);
+ printk(level_string);
+ printk("%sIN=%s OUT=%s ",
+ loginfo->prefix,
+ in ? in->name : "",
+ out ? out->name : "");
+ if (in && !out) {
+ /* MAC logging for input chain only. */
+ printk("MAC=");
+ if ((*pskb)->dev && (*pskb)->dev->hard_header_len && (*pskb)->mac.raw != (void*)ipv6h) {
+ int i;
+ unsigned char *p = (*pskb)->mac.raw;
+ for (i = 0; i < (*pskb)->dev->hard_header_len; i++,p++)
+ printk("%02x%c", *p,
+ i==(*pskb)->dev->hard_header_len - 1
+ ? ' ':':');
+ } else
+ printk(" ");
+ }
+
+ dump_packet(loginfo, ipv6h, 1);
+ printk("\n");
+ spin_unlock_bh(&log_lock);
+
+ return IP6T_CONTINUE;
+}
+
+static int ip6t_log_checkentry(const char *tablename,
+ const struct ip6t_entry *e,
+ void *targinfo,
+ unsigned int targinfosize,
+ unsigned int hook_mask)
+{
+ const struct ip6t_log_info *loginfo = targinfo;
+
+ if (targinfosize != IP6T_ALIGN(sizeof(struct ip6t_log_info))) {
+ DEBUGP("LOG: targinfosize %u != %u\n",
+ targinfosize, IP6T_ALIGN(sizeof(struct ip6t_log_info)));
+ return 0;
+ }
+
+ if (loginfo->level >= 8) {
+ DEBUGP("LOG: level %u >= 8\n", loginfo->level);
+ return 0;
+ }
+
+ if (loginfo->prefix[sizeof(loginfo->prefix)-1] != '\0') {
+ DEBUGP("LOG: prefix term %i\n",
+ loginfo->prefix[sizeof(loginfo->prefix)-1]);
+ return 0;
+ }
+
+ return 1;
+}
+
+static struct ip6t_target ip6t_log_reg
+= { { NULL, NULL }, "LOG", ip6t_log_target, ip6t_log_checkentry, NULL,
+ THIS_MODULE };
+
+static int __init init(void)
+{
+ if (ip6t_register_target(&ip6t_log_reg))
+ return -EINVAL;
+
+ return 0;
+}
+
+static void __exit fini(void)
+{
+ ip6t_unregister_target(&ip6t_log_reg);
+}
+
+module_init(init);
+module_exit(fini);
#include <linux/interrupt.h>
#include <linux/netfilter_ipv6/ip6_tables.h>
-#include <linux/netfilter_ipv4/ipt_limit.h>
+#include <linux/netfilter_ipv6/ip6t_limit.h>
/* The algorithm used is the Simple Token Bucket Filter (TBF)
* see net/sched/sch_tbf.c in the linux source tree
#define CREDITS_PER_JIFFY 128
static int
-ipt_limit_match(const struct sk_buff *skb,
+ip6t_limit_match(const struct sk_buff *skb,
const struct net_device *in,
const struct net_device *out,
const void *matchinfo,
u_int16_t datalen,
int *hotdrop)
{
- struct ipt_rateinfo *r = ((struct ipt_rateinfo *)matchinfo)->master;
+ struct ip6t_rateinfo *r = ((struct ip6t_rateinfo *)matchinfo)->master;
unsigned long now = jiffies;
spin_lock_bh(&limit_lock);
/* If multiplying would overflow... */
if (user > 0xFFFFFFFF / (HZ*CREDITS_PER_JIFFY))
/* Divide first. */
- return (user / IPT_LIMIT_SCALE) * HZ * CREDITS_PER_JIFFY;
+ return (user / IP6T_LIMIT_SCALE) * HZ * CREDITS_PER_JIFFY;
- return (user * HZ * CREDITS_PER_JIFFY) / IPT_LIMIT_SCALE;
+ return (user * HZ * CREDITS_PER_JIFFY) / IP6T_LIMIT_SCALE;
}
static int
-ipt_limit_checkentry(const char *tablename,
+ip6t_limit_checkentry(const char *tablename,
const struct ip6t_ip6 *ip,
void *matchinfo,
unsigned int matchsize,
unsigned int hook_mask)
{
- struct ipt_rateinfo *r = matchinfo;
+ struct ip6t_rateinfo *r = matchinfo;
- if (matchsize != IP6T_ALIGN(sizeof(struct ipt_rateinfo)))
+ if (matchsize != IP6T_ALIGN(sizeof(struct ip6t_rateinfo)))
return 0;
/* Check for overflow. */
if (r->burst == 0
|| user2credits(r->avg * r->burst) < user2credits(r->avg)) {
- printk("Call rusty: overflow in ipt_limit: %u/%u\n",
+ printk("Call rusty: overflow in ip6t_limit: %u/%u\n",
r->avg, r->burst);
return 0;
}
- /* User avg in seconds * IPT_LIMIT_SCALE: convert to jiffies *
+ /* User avg in seconds * IP6T_LIMIT_SCALE: convert to jiffies *
128. */
r->prev = jiffies;
r->credit = user2credits(r->avg * r->burst); /* Credits full. */
return 1;
}
-static struct ip6t_match ipt_limit_reg
-= { { NULL, NULL }, "limit", ipt_limit_match, ipt_limit_checkentry, NULL,
+static struct ip6t_match ip6t_limit_reg
+= { { NULL, NULL }, "limit", ip6t_limit_match, ip6t_limit_checkentry, NULL,
THIS_MODULE };
static int __init init(void)
{
- if (ip6t_register_match(&ipt_limit_reg))
+ if (ip6t_register_match(&ip6t_limit_reg))
return -EINVAL;
return 0;
}
static void __exit fini(void)
{
- ip6t_unregister_match(&ipt_limit_reg);
+ ip6t_unregister_match(&ip6t_limit_reg);
}
module_init(init);
}
static int
-ipt_mac_checkentry(const char *tablename,
+ip6t_mac_checkentry(const char *tablename,
const struct ip6t_ip6 *ip,
void *matchinfo,
unsigned int matchsize,
{
if (hook_mask
& ~((1 << NF_IP6_PRE_ROUTING) | (1 << NF_IP6_LOCAL_IN))) {
- printk("ipt_mac: only valid for PRE_ROUTING or LOCAL_IN.\n");
+ printk("ip6t_mac: only valid for PRE_ROUTING or LOCAL_IN.\n");
return 0;
}
}
static struct ip6t_match mac_match
-= { { NULL, NULL }, "mac", &match, &ipt_mac_checkentry, NULL, THIS_MODULE };
+= { { NULL, NULL }, "mac", &match, &ip6t_mac_checkentry, NULL, THIS_MODULE };
static int __init init(void)
{
if (offset == 0 && datalen < sizeof(struct udphdr)) {
/* We've been asked to examine this packet, and we
can't. Hence, no choice but to drop. */
- duprintf("ipt_multiport:"
+ duprintf("ip6t_multiport:"
" Dropping evil offset=0 tinygram.\n");
*hotdrop = 1;
return 0;
--- /dev/null
+/* Kernel module to match various things tied to sockets associated with
+ locally generated outgoing packets.
+
+ Copyright (C) 2000,2001 Marc Boucher
+ */
+#include <linux/module.h>
+#include <linux/skbuff.h>
+#include <linux/file.h>
+#include <net/sock.h>
+
+#include <linux/netfilter_ipv6/ip6t_owner.h>
+#include <linux/netfilter_ipv6/ip6_tables.h>
+
+MODULE_AUTHOR("Marc Boucher <marc@mbsi.ca>");
+MODULE_DESCRIPTION("IP6 tables owner matching module");
+MODULE_LICENSE("GPL");
+
+static int
+match_pid(const struct sk_buff *skb, pid_t pid)
+{
+ struct task_struct *p;
+ struct files_struct *files;
+ int i;
+
+ read_lock(&tasklist_lock);
+ p = find_task_by_pid(pid);
+ if (!p)
+ goto out;
+ task_lock(p);
+ files = p->files;
+ if (files) {
+ read_lock(&files->file_lock);
+ for (i=0; i < files->max_fds; i++) {
+ if (fcheck_files(files, i) == skb->sk->socket->file) {
+ read_unlock(&files->file_lock);
+ task_unlock(p);
+ read_unlock(&tasklist_lock);
+ return 1;
+ }
+ }
+ read_unlock(&files->file_lock);
+ }
+ task_unlock(p);
+out:
+ read_unlock(&tasklist_lock);
+ return 0;
+}
+
+static int
+match_sid(const struct sk_buff *skb, pid_t sid)
+{
+ struct task_struct *p;
+ struct file *file = skb->sk->socket->file;
+ int i, found = 0;
+
+ read_lock(&tasklist_lock);
+ for_each_task(p) {
+ struct files_struct *files;
+ if (p->session != sid)
+ continue;
+
+ task_lock(p);
+ files = p->files;
+ if (files) {
+ read_lock(&files->file_lock);
+ for (i=0; i < files->max_fds; i++) {
+ if (fcheck_files(files, i) == file) {
+ found = 1;
+ break;
+ }
+ }
+ read_unlock(&files->file_lock);
+ }
+ task_unlock(p);
+ if (found)
+ break;
+ }
+ read_unlock(&tasklist_lock);
+
+ return found;
+}
+
+static int
+match(const struct sk_buff *skb,
+ const struct net_device *in,
+ const struct net_device *out,
+ const void *matchinfo,
+ int offset,
+ const void *hdr,
+ u_int16_t datalen,
+ int *hotdrop)
+{
+ const struct ip6t_owner_info *info = matchinfo;
+
+ if (!skb->sk || !skb->sk->socket || !skb->sk->socket->file)
+ return 0;
+
+ if (info->match & IP6T_OWNER_UID) {
+ if ((skb->sk->socket->file->f_uid != info->uid) ^
+ !!(info->invert & IP6T_OWNER_UID))
+ return 0;
+ }
+
+ if (info->match & IP6T_OWNER_GID) {
+ if ((skb->sk->socket->file->f_gid != info->gid) ^
+ !!(info->invert & IP6T_OWNER_GID))
+ return 0;
+ }
+
+ if (info->match & IP6T_OWNER_PID) {
+ if (!match_pid(skb, info->pid) ^
+ !!(info->invert & IP6T_OWNER_PID))
+ return 0;
+ }
+
+ if (info->match & IP6T_OWNER_SID) {
+ if (!match_sid(skb, info->sid) ^
+ !!(info->invert & IP6T_OWNER_SID))
+ return 0;
+ }
+
+ return 1;
+}
+
+static int
+checkentry(const char *tablename,
+ const struct ip6t_ip6 *ip,
+ void *matchinfo,
+ unsigned int matchsize,
+ unsigned int hook_mask)
+{
+ if (hook_mask
+ & ~((1 << NF_IP6_LOCAL_OUT) | (1 << NF_IP6_POST_ROUTING))) {
+ printk("ip6t_owner: only valid for LOCAL_OUT or POST_ROUTING.\n");
+ return 0;
+ }
+
+ if (matchsize != IP6T_ALIGN(sizeof(struct ip6t_owner_info)))
+ return 0;
+
+ return 1;
+}
+
+static struct ip6t_match owner_match
+= { { NULL, NULL }, "owner", &match, &checkentry, NULL, THIS_MODULE };
+
+static int __init init(void)
+{
+ return ip6t_register_match(&owner_match);
+}
+
+static void __exit fini(void)
+{
+ ip6t_unregister_match(&owner_match);
+}
+
+module_init(init);
+module_exit(fini);
* Authors:
* Pedro Roque <roque@di.fc.ul.pt>
*
- * $Id: tcp_ipv6.c,v 1.140 2001/10/15 12:34:50 davem Exp $
+ * $Id: tcp_ipv6.c,v 1.141 2001/10/26 14:51:13 davem Exp $
*
* Based on:
* linux/net/ipv4/tcp.c
return -EADDRNOTAVAIL;
}
-static int tcp_v6_hash_connecting(struct sock *sk)
+static int tcp_v6_hash_connect(struct sock *sk, struct sockaddr_in6 *dst)
{
- unsigned short snum = sk->num;
- struct tcp_bind_hashbucket *head = &tcp_bhash[tcp_bhashfn(snum)];
- struct tcp_bind_bucket *tb = head->chain;
+ struct tcp_bind_hashbucket *head;
+ struct tcp_bind_bucket *tb;
+
+ /* XXX */
+ if (sk->num == 0) {
+ int err = tcp_v6_get_port(sk, sk->num);
+ if (err)
+ return err;
+ sk->sport = htons(sk->num);
+ }
+
+ head = &tcp_bhash[tcp_bhashfn(sk->num)];
+ tb = head->chain;
spin_lock_bh(&head->lock);
struct in6_addr saddr_buf;
struct flowi fl;
struct dst_entry *dst;
- struct sk_buff *buff;
int addr_type;
int err;
tp->ext_header_len = np->opt->opt_flen+np->opt->opt_nflen;
tp->mss_clamp = IPV6_MIN_MTU - sizeof(struct tcphdr) - sizeof(struct ipv6hdr);
- err = -ENOBUFS;
- buff = alloc_skb(MAX_TCP_HEADER + 15, sk->allocation);
-
- if (buff == NULL)
- goto failure;
-
sk->dport = usin->sin6_port;
/*
tp->write_seq = secure_tcpv6_sequence_number(np->saddr.s6_addr32,
np->daddr.s6_addr32,
sk->sport, sk->dport);
+ tcp_connect_init(sk);
+
+ tcp_set_state(sk, TCP_SYN_SENT);
+ err = tcp_v6_hash_connect(sk, usin);
+ if (!err) {
+ struct sk_buff *buff;
+ err = -ENOBUFS;
+ buff = alloc_skb(MAX_TCP_HEADER + 15, sk->allocation);
+ if (buff != NULL) {
+ tcp_connect_send(sk, buff);
+ return 0;
+ }
+ }
- err = tcp_connect(sk, buff);
- if (err == 0)
- return 0;
-failure:
+ tcp_set_state(sk, TCP_CLOSE);
+ failure:
__sk_dst_reset(sk);
sk->dport = 0;
sk->route_caps = 0;
tcp_v6_rebuild_header,
tcp_v6_conn_request,
tcp_v6_syn_recv_sock,
- tcp_v6_hash_connecting,
tcp_v6_remember_stamp,
sizeof(struct ipv6hdr),
tcp_v4_rebuild_header,
tcp_v6_conn_request,
tcp_v6_syn_recv_sock,
- tcp_v4_hash_connecting,
tcp_v4_remember_stamp,
sizeof(struct iphdr),
obj-$(CONFIG_SPX) += af_spx.o
include $(TOPDIR)/Rules.make
-
-tar:
- tar -cvf /dev/f1 .
endif
include $(TOPDIR)/Rules.make
-
-tar:
- tar -cvf /dev/f1 .
-
-
-
-
#include <net/pkt_sched.h>
#include <net/scm.h>
#include <linux/if_bridge.h>
+#include <linux/if_vlan.h>
#include <linux/random.h>
#ifdef CONFIG_NET_DIVERT
#include <linux/divert.h>
EXPORT_SYMBOL(destroy_EII_client);
#endif
+/* for 802.1Q VLAN support */
+#if defined(CONFIG_VLAN_8021Q) || defined(CONFIG_VLAN_8021Q_MODULE)
+EXPORT_SYMBOL(dev_change_flags);
+EXPORT_SYMBOL(vlan_ioctl_hook);
+#endif
+
EXPORT_SYMBOL(sklist_destroy_socket);
EXPORT_SYMBOL(sklist_insert_socket);
EXPORT_SYMBOL(tcp_v4_syn_recv_sock);
EXPORT_SYMBOL(tcp_v4_do_rcv);
EXPORT_SYMBOL(tcp_v4_connect);
-EXPORT_SYMBOL(tcp_v4_hash_connecting);
EXPORT_SYMBOL(tcp_unhash);
EXPORT_SYMBOL(udp_prot);
EXPORT_SYMBOL(tcp_prot);
EXPORT_SYMBOL(ipv4_specific);
EXPORT_SYMBOL(tcp_simple_retransmit);
EXPORT_SYMBOL(tcp_transmit_skb);
-EXPORT_SYMBOL(tcp_connect);
+EXPORT_SYMBOL(tcp_connect_init);
+EXPORT_SYMBOL(tcp_connect_send);
EXPORT_SYMBOL(tcp_make_synack);
EXPORT_SYMBOL(tcp_tw_deschedule);
EXPORT_SYMBOL(tcp_delete_keepalive_timer);
EXPORT_SYMBOL(rtnl_lock);
EXPORT_SYMBOL(rtnl_unlock);
+/* ABI emulation layers need this */
+EXPORT_SYMBOL(move_addr_to_kernel);
+EXPORT_SYMBOL(move_addr_to_user);
/* Used by at least ipip.c. */
EXPORT_SYMBOL(ipv4_config);
*
* PACKET - implements raw packet sockets.
*
- * Version: $Id: af_packet.c,v 1.56 2001/08/06 13:21:16 davem Exp $
+ * Version: $Id: af_packet.c,v 1.57 2001/10/30 03:38:37 davem Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
h->tp_status = status;
mb();
+ {
+ struct page *p_start, *p_end;
+ u8 *h_end = (u8 *)h + macoff + snaplen - 1;
+
+ p_start = virt_to_page(h);
+ p_end = virt_to_page(h_end);
+ while (p_start <= p_end) {
+ flush_dcache_page(p_start);
+ p_start++;
+ }
+ }
+
sk->data_ready(sk, 0);
drop_n_restore: