S: 1131 Budapest
S: Hungary
+N: Ben Fennema
+E: bfennema@falcon.csc.calpoly.edu
+W: http://www.csc.calpoly.edu/~bfennema
+D: UDF filesystem
+S: 21760 Irma Lyle Drive
+S: Los Gatos, CA 95033-8942
+S: USA
+
N: Jürgen Fischer
E: fischer@norbit.de (Jürgen Fischer)
D: Author of Adaptec AHA-152x SCSI driver
5Mbit/s. Configure this driver after connecting the USB cable via
ifconfig plusb0 10.0.0.1 pointopoint 10.0.0.2
(and vice versa on the other host).
-
+
+USB Diamond Rio500 support
+CONFIG_USB_RIO500
+ Say Y here if you want to connect a USB rio500 to your
+ computer's USB port. Please read Documentation/usb/rio.txt
+ for more information.
+
+ This code is also available as a module ( = code which can be
+ inserted in and removed from the running kernel whenever you want).
+ The module will be called rio500.o. If you want to compile it as
+ a module, say M here and read Documentation/modules.txt.
+
ACPI support
CONFIG_ACPI
Advanced Configuration and Power Interface (ACPI) is an interface
Intel processors in P6 family, e.g. Pentium Pro, Pentium II,
Pentium III, Xeon etc. You will obviously need the actual microcode
binary data itself which is not shipped with the Linux kernel.
- Contact Intel to obtain the latest revision of microcode for
- your CPU(s). With this support compiled you can use dd(1) to write
- microcode, for example:
+ With this support compiled you can use dd(1) to write microcode,
+ for example:
+
+ # dd if=/etc/microcode of=/dev/cpu/microcode bs=98304 count=1
- # dd if=/etc/microcode of=/proc/driver/microcode bs=98304 count=1
+ You need to be superuser to do that. For latest news and information
+ on obtaining all the required ingredients for this driver, check:
- You need to be superuser to do that.
+ http://www.ocston.org/~tigran/patches/microcode
This driver is also available as a module ( = code which can be
inserted in and removed from the running kernel whenever you want).
*
* ./Documentation/filesystems/udf.txt
*
-UDF Filesystem version 0.9.0
+UDF Filesystem version 0.9.1
If you encounter problems with reading UDF discs using this driver,
please report them to linux_udf@hootie.lvld.hp.com, which is the
gid= Set the default group.
umask= Set the default umask.
uid= Set the default user.
+ bs= Set the block size.
unhide Show otherwise hidden files.
undelete Show deleted files in lists.
+ adinicb Embed data in the inode (default)
+ noadinicb Don't embed data in the inode
+ shortad Use short ad's
+ longad Use long ad's (default)
strict Set strict conformance (unused)
The remaining are for debugging and disaster recovery:
- bs= Set the block size. (may not work unless 2048)
novrs Skip volume sequence recognition
	The following expect an offset from 0.
http://www.trylinux.com/projects/udf/index.html
For the latest version and toolset see:
- http://www.csc.calpoly.edu/~bfennema/udf.html
+ http://www.csc.calpoly.edu/~bfennema/udf.html
+ http://linux-udf.sourceforge.net/
Documentation on UDF and ECMA 167 is available FREE from:
http://www.osta.org/
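Putting the options above together, a typical mount might look like the
following (the device path and uid/gid values are placeholders; bs= is
normally only needed when the default block size is wrong for the disc):

```shell
# Mount a UDF disc read-only, overriding ownership, umask, and
# block size.  /dev/cdrom stands in for your actual drive.
mount -t udf -o ro,uid=500,gid=500,umask=022,bs=2048 /dev/cdrom /mnt/udf
```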
--- /dev/null
+Copyright (C) 1999, 2000 Bruce Tenison
+Portions Copyright (C) 1999, 2000 David Nelson
+Thanks to David Nelson for guidance and for the use of the scanner.txt
+and scanner.c files as models for our driver and this informative file.
+
+Mar. 2, 2000
+
+CHANGES
+
+- Initial Revision
+
+
+OVERVIEW
+
+This README addresses how to configure the kernel to access a
+RIO 500 mp3 player.
+Before I explain how to use this to access the Rio500, please be warned:
+
+W A R N I N G:
+--------------
+
+Please note that this software is still under development. The authors
+are in no way responsible for any damage that may occur, no matter how
+inconsequential.
+
+It seems that the Rio has problems sending .mp3 files when its batteries
+are low. If the batteries are low and you want to transfer files, I
+suggest you replace them with fresh ones first. In my case, I lost two
+16kb blocks (they are no longer usable for storing information), but I
+don't know whether that's normal or simply a problem with the flash
+memory.
+
+In an extreme case, I left my Rio playing overnight and the batteries wore
+down to nothing and appear to have corrupted the flash memory. My RIO
+needed to be replaced as a result. Diamond tech support is aware of the
+problem. Do NOT allow your batteries to wear down to nothing before
+changing them. It appears RIO 500 firmware does not handle low battery
+power well at all.
+
+On systems with OHCI controllers, the kernel OHCI code appears to have
+power on problems with some chipsets. If you are having problems
+connecting to your RIO 500, try turning it on first and then plugging it
+into the USB cable.
+
+Contact information:
+--------------------
+
+  The main page for the project is hosted at sourceforge.net at the
+  following address: http://rio500.sourceforge.net  You can also go to
+  the sourceforge project page at:
+  http://sourceforge.net/project/?group_id=1944
+  There is also a mailing list: rio500-users@lists.sourceforge.net
+
+Authors:
+-------
+
+Most of the code was written by Cesar Miquel <miquel@df.uba.ar>. Keith
+Clayton <kclayton@jps.net> is in charge of the PPC port and making sure
+things work there. Bruce Tenison <btenison@dibbs.net> is adding support
+for .fon files and also does testing. The program will most likely be
+rewritten, and Pete Ikusz along with the rest will redesign it. I would
+also like to thank Tri Nguyen <tmn_3022000@hotmail.com> who provided us
+with some important information regarding communication with the Rio.
+
+ADDITIONAL INFORMATION and Userspace tools
+
+http://rio500.sourceforge.net/
+
+
+REQUIREMENTS
+
+A host with a USB port. Either a UHCI (Intel) or an OHCI (Compaq and
+others) controller should work.
+
+A Linux development kernel (2.3.x) with USB support enabled or a
+backported version to linux-2.2.x. See http://www.linux-usb.org for
+more information on accomplishing this.
+
+A Linux kernel with RIO 500 support enabled.
+
+'lspci', which is needed only to determine the type of USB hardware
+available in your machine.
+
+CONFIGURATION
+
+Using `lspci -v`, determine the type of USB hardware available.
+
+ If you see something like:
+
+ USB Controller: ......
+ Flags: .....
+ I/O ports at ....
+
+ Then you have a UHCI based controller.
+
+ If you see something like:
+
+ USB Controller: .....
+ Flags: ....
+ Memory at .....
+
+    Then you have an OHCI based controller.
+
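The check above can also be done non-interactively; a convenience sketch
(the exact lspci output format varies between pciutils versions):

```shell
# UHCI controllers report "I/O ports", OHCI controllers report "Memory".
lspci -v | grep -i -A 4 'USB' | grep -E 'I/O ports|Memory at'
```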
+Using `make menuconfig` or your preferred method for configuring the
+kernel, select 'Support for USB', 'OHCI/UHCI' depending on your
+hardware (determined from the steps above), 'USB Diamond Rio500 support', and
+'Preliminary USB device filesystem'. Compile and install the modules
+(you may need to execute `depmod -a` to update the module
+dependencies).
+
+Add a device for the USB rio500:
+ `mknod /dev/usb/rio500 c 180 64`
+
+Set appropriate permissions for /dev/usb/rio500 (don't forget about
+group and world permissions). Both read and write permissions are
+required for proper operation.
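The two steps above can be combined as follows (the major/minor numbers
180/64 come from the text above; the group name is an assumption, adjust
to taste):

```shell
# Create the device node (char major 180, minor 64) and grant
# read/write access to root and the audio group.
mknod /dev/usb/rio500 c 180 64
chown root:audio /dev/usb/rio500
chmod 660 /dev/usb/rio500
```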
+
+Load the appropriate modules (if compiled as modules):
+
+ OHCI:
+ modprobe usbcore
+ modprobe usb-ohci
+ modprobe rio500
+
+ UHCI:
+ modprobe usbcore
+ modprobe usb-uhci (or uhci)
+ modprobe rio500
+
+That's it. The Rio500 Utils from http://rio500.sourceforge.net should
+now be able to access the Rio500.
+
+BUGS
+
+If you encounter any problems feel free to drop me an email.
+
+Bruce Tenison
+btenison@dibbs.net
+
L: linux-via@gtf.org
S: Maintained
+USB DIAMOND RIO500 DRIVER
+P: Cesar Miquel
+M: miquel@df.uba.ar
+L: rio500-users@lists.sourceforge.net
+W: http://rio500.sourceforge.net
+S: Maintained
+
VIDEO FOR LINUX
P: Alan Cox
M: Alan.Cox@linux.org
* Window 1 is direct access 1GB at 1GB
* Window 2 is scatter-gather 8MB at 8MB (for isa)
*/
- hose->sg_isa = iommu_arena_new(0x00800000, 0x00800000, PAGE_SIZE);
+ hose->sg_isa = iommu_arena_new(hose, 0x00800000, 0x00800000, 0);
hose->sg_pci = NULL;
__direct_map_base = 0x40000000;
__direct_map_size = 0x40000000;
* ??? We ought to scale window 1 with memory.
*/
- /* NetBSD hints that page tables must be aligned to 32K due
- to a hardware bug. No description of what models affected. */
- hose->sg_isa = iommu_arena_new(0x00800000, 0x00800000, 32768);
- hose->sg_pci = iommu_arena_new(0x40000000, 0x08000000, 32768);
+ /* ??? NetBSD hints that page tables must be aligned to 32K,
+ possibly due to a hardware bug. This is over-aligned
+ from the 8K alignment one would expect for an 8MB window.
+ No description of what CIA revisions affected. */
+ hose->sg_isa = iommu_arena_new(hose, 0x00800000, 0x00800000, 0x8000);
+ hose->sg_pci = iommu_arena_new(hose, 0x40000000, 0x08000000, 0);
__direct_map_base = 0x80000000;
__direct_map_size = 0x80000000;
* Window 0 is direct access 1GB at 1GB
* Window 1 is scatter-gather 8MB at 8MB (for isa)
*/
- hose->sg_isa = iommu_arena_new(0x00800000, 0x00800000, PAGE_SIZE);
+ hose->sg_isa = iommu_arena_new(hose, 0x00800000, 0x00800000, 0);
hose->sg_pci = NULL;
__direct_map_base = 0x40000000;
__direct_map_size = 0x40000000;
* ??? We ought to scale window 1 with memory.
*/
- hose->sg_isa = iommu_arena_new(0x00800000, 0x00800000, PAGE_SIZE);
- hose->sg_pci = iommu_arena_new(0x40000000, 0x08000000, PAGE_SIZE);
+ hose->sg_isa = iommu_arena_new(hose, 0x00800000, 0x00800000, 0);
+ hose->sg_pci = iommu_arena_new(hose, 0x40000000, 0x08000000, 0);
__direct_map_base = 0x80000000;
__direct_map_size = 0x80000000;
*/
#define DEBUG_CONFIG 0
-
#if DEBUG_CONFIG
# define DBG_CNF(args) printk args
#else
ctrl = *(vuip)PYXIS_CTRL;
*(vuip)PYXIS_CTRL = ctrl | 4;
mb();
+ *(vuip)PYXIS_CTRL;
+ mb();
/* Read from PCI dense memory space at TBI_ADDR, skipping 64k
on each read. This forces SG TLB misses. It appears that
mb();
*(vuip)PYXIS_CTRL = ctrl;
mb();
+ *(vuip)PYXIS_CTRL;
+ mb();
__restore_flags(flags);
}
struct pci_controler *hose;
unsigned int temp;
-#if 0
- printk("pyxis_init: PYXIS_ERR_MASK 0x%x\n", *(vuip)PYXIS_ERR_MASK);
- printk("pyxis_init: PYXIS_ERR 0x%x\n", *(vuip)PYXIS_ERR);
- printk("pyxis_init: PYXIS_INT_REQ 0x%lx\n", *(vulp)PYXIS_INT_REQ);
- printk("pyxis_init: PYXIS_INT_MASK 0x%lx\n", *(vulp)PYXIS_INT_MASK);
- printk("pyxis_init: PYXIS_INT_ROUTE 0x%lx\n", *(vulp)PYXIS_INT_ROUTE);
- printk("pyxis_init: PYXIS_INT_HILO 0x%lx\n", *(vulp)PYXIS_INT_HILO);
- printk("pyxis_init: PYXIS_INT_CNFG 0x%x\n", *(vuip)PYXIS_INT_CNFG);
- printk("pyxis_init: PYXIS_RT_COUNT 0x%lx\n", *(vulp)PYXIS_RT_COUNT);
-#endif
-
- /*
- * Set up error reporting. Make sure CPU_PE is OFF in the mask.
- */
+ /* Set up error reporting. Make sure CPU_PE is OFF in the mask. */
temp = *(vuip)PYXIS_ERR_MASK;
- temp &= ~4;
- *(vuip)PYXIS_ERR_MASK = temp;
- mb();
- *(vuip)PYXIS_ERR_MASK; /* re-read to force write */
+ *(vuip)PYXIS_ERR_MASK = temp & ~4;
+ /* Enable master/target abort. */
temp = *(vuip)PYXIS_ERR;
- temp |= 0x180; /* master/target abort */
- *(vuip)PYXIS_ERR = temp;
+ *(vuip)PYXIS_ERR = temp | 0x180;
+
+ /* Clear the PYXIS_CFG register, which gets used for PCI Config
+ Space accesses. That is the way we want to use it, and we do
+ not want to depend on what ARC or SRM might have left behind. */
+ *(vuip)PYXIS_CFG = 0;
+
+ /* Zero the HAEs. */
+ *(vuip)PYXIS_HAE_MEM = 0;
+ *(vuip)PYXIS_HAE_IO = 0;
+
+ /* Finally, check that the PYXIS_CTRL1 has IOA_BEN set for
+ enabling byte/word PCI bus space(s) access. */
+ temp = *(vuip)PYXIS_CTRL1;
+ *(vuip)PYXIS_CTRL1 = temp | 1;
+
+	/* Synchronize with all previous changes.  */
mb();
- *(vuip)PYXIS_ERR; /* re-read to force write */
+ *(vuip)PYXIS_REV;
/*
* Create our single hose.
* address range.
*/
- /* NetBSD hints that page tables must be aligned to 32K due
- to a hardware bug. No description of what models affected. */
- hose->sg_isa = iommu_arena_new(0x00800000, 0x00800000, 32768);
- hose->sg_pci = iommu_arena_new(0xc0000000, 0x08000000, 32768);
+#if 1
+	/* ??? There's some bit of synchronization wrt writing new tlb
+ entries that's missing. Sometimes it works, sometimes invalid
+ tlb machine checks, sometimes hard lockup. And this just within
+ the boot sequence.
+
+ I've tried extra memory barriers, extra alignment, pyxis
+ register reads, tlb flushes, and loopback tlb accesses.
+
+ I guess the pyxis revision in the sx164 is just too buggy... */
+
+ hose->sg_isa = hose->sg_pci = NULL;
+ __direct_map_base = 0x40000000;
+ __direct_map_size = 0x80000000;
+
+ *(vuip)PYXIS_W0_BASE = 0x40000000 | 1;
+ *(vuip)PYXIS_W0_MASK = (0x40000000 - 1) & 0xfff00000;
+ *(vuip)PYXIS_T0_BASE = 0;
+
+ *(vuip)PYXIS_W1_BASE = 0x80000000 | 1;
+ *(vuip)PYXIS_W1_MASK = (0x40000000 - 1) & 0xfff00000;
+ *(vuip)PYXIS_T1_BASE = 0;
+
+ *(vuip)PYXIS_W2_BASE = 0;
+ *(vuip)PYXIS_W3_BASE = 0;
+
+ alpha_mv.mv_pci_tbi = NULL;
+ mb();
+#else
+ /* ??? NetBSD hints that page tables must be aligned to 32K,
+ possibly due to a hardware bug. This is over-aligned
+ from the 8K alignment one would expect for an 8MB window.
+ No description of what CIA revisions affected. */
+ hose->sg_isa = iommu_arena_new(hose, 0x00800000, 0x00800000, 0x08000);
+ hose->sg_pci = iommu_arena_new(hose, 0xc0000000, 0x08000000, 0x20000);
__direct_map_base = 0x40000000;
__direct_map_size = 0x80000000;
pyxis_enable_broken_tbi(hose->sg_pci);
alpha_mv.mv_pci_tbi(hose, 0, -1);
-alpha_mv.mv_pci_tbi = 0;
-
- /*
- * Next, clear the PYXIS_CFG register, which gets used
- * for PCI Config Space accesses. That is the way
- * we want to use it, and we do not want to depend on
- * what ARC or SRM might have left behind...
- */
- temp = *(vuip)PYXIS_CFG;
- if (temp != 0) {
- *(vuip)PYXIS_CFG = 0;
- mb();
- *(vuip)PYXIS_CFG; /* re-read to force write */
- }
-
- /* Zero the HAE. */
- *(vuip)PYXIS_HAE_MEM = 0U; mb();
- *(vuip)PYXIS_HAE_MEM; /* re-read to force write */
- *(vuip)PYXIS_HAE_IO = 0; mb();
- *(vuip)PYXIS_HAE_IO; /* re-read to force write */
-
- /*
- * Finally, check that the PYXIS_CTRL1 has IOA_BEN set for
- * enabling byte/word PCI bus space(s) access.
- */
- temp = *(vuip) PYXIS_CTRL1;
- if (!(temp & 1)) {
- *(vuip)PYXIS_CTRL1 = temp | 1;
- mb();
- *(vuip)PYXIS_CTRL1; /* re-read */
- }
+#endif
}
static inline void
* because of an idiot-syncrasy of the CYPRESS chip. It may
* respond to a PCI bus address in the last 1MB of the 4GB
* address range.
- *
- * Note that the TLB lookup logic uses bitwise concatenation,
- * not addition, so the required arena alignment is based on
- * the size of the window.
*/
- hose->sg_isa = iommu_arena_new(0x00800000, 0x00800000, 0x00800000>>10);
- hose->sg_pci = iommu_arena_new(0xc0000000, 0x08000000, 0x08000000>>10);
+ hose->sg_isa = iommu_arena_new(hose, 0x00800000, 0x00800000, 0);
+ hose->sg_pci = iommu_arena_new(hose, 0xc0000000, 0x08000000, 0);
__direct_map_base = 0x40000000;
__direct_map_size = 0x80000000;
* Alpha specific irq code.
*/
+#include <linux/config.h>
#include <linux/init.h>
#include <linux/sched.h>
#include <linux/irq.h>
* Started hacking from linux-2.3.30pre6/arch/i386/kernel/i8259.c.
*/
+#include <linux/config.h>
#include <linux/init.h>
#include <linux/cache.h>
#include <linux/sched.h>
mv_switch_mm: ev4_switch_mm, \
mv_activate_mm: ev4_activate_mm, \
mv_flush_tlb_current: ev4_flush_tlb_current, \
- mv_flush_tlb_other: ev4_flush_tlb_other, \
mv_flush_tlb_current_page: ev4_flush_tlb_current_page
#define DO_EV5_MMU \
mv_switch_mm: ev5_switch_mm, \
mv_activate_mm: ev5_activate_mm, \
mv_flush_tlb_current: ev5_flush_tlb_current, \
- mv_flush_tlb_other: ev5_flush_tlb_other, \
mv_flush_tlb_current_page: ev5_flush_tlb_current_page
#define DO_EV6_MMU \
mv_switch_mm: ev5_switch_mm, \
mv_activate_mm: ev5_activate_mm, \
mv_flush_tlb_current: ev5_flush_tlb_current, \
- mv_flush_tlb_other: ev5_flush_tlb_other, \
mv_flush_tlb_current_page: ev5_flush_tlb_current_page
#define IO_LITE(UP,low) \
_ctl_; })
+/* A PCI IOMMU allocation arena. There are typically two of these
+ regions per bus. */
+/* ??? The 8400 has a 32-byte pte entry, and the entire table apparently
+ lives directly on the host bridge (no tlb?). We don't support this
+ machine, but if we ever did, we'd need to parameterize all this quite
+ a bit further. Probably with per-bus operation tables. */
+
+struct pci_iommu_arena
+{
+ spinlock_t lock;
+ struct pci_controler *hose;
+ unsigned long *ptes;
+ dma_addr_t dma_base;
+ unsigned int size;
+ unsigned int next_entry;
+};
+
+
/* The hose list. */
extern struct pci_controler *hose_head, **hose_tail;
extern struct pci_controler *pci_isa_hose;
extern struct pci_controler *alloc_pci_controler(void);
extern struct resource *alloc_resource(void);
-extern struct pci_iommu_arena *iommu_arena_new(dma_addr_t, unsigned long,
- unsigned long);
+extern struct pci_iommu_arena *iommu_arena_new(struct pci_controler *,
+ dma_addr_t, unsigned long,
+ unsigned long);
extern long iommu_arena_alloc(struct pci_iommu_arena *arena, long n);
extern const char *const pci_io_names[];
# define DBGA2(args...)
#endif
+#define DEBUG_NODIRECT 0
+
static inline unsigned long
mk_iommu_pte(unsigned long paddr)
}
\f
struct pci_iommu_arena *
-iommu_arena_new(dma_addr_t base, unsigned long window_size,
- unsigned long align)
+iommu_arena_new(struct pci_controler *hose, dma_addr_t base,
+ unsigned long window_size, unsigned long align)
{
- unsigned long entries, mem_size, mem_pages;
+ unsigned long mem_size;
struct pci_iommu_arena *arena;
- entries = window_size >> PAGE_SHIFT;
- mem_size = entries * sizeof(unsigned long);
- mem_pages = calc_npages(mem_size);
+ mem_size = window_size / (PAGE_SIZE / sizeof(unsigned long));
+
+ /* Note that the TLB lookup logic uses bitwise concatenation,
+ not addition, so the required arena alignment is based on
+ the size of the window. Retain the align parameter so that
+ particular systems can over-align the arena. */
+ if (align < mem_size)
+ align = mem_size;
arena = alloc_bootmem(sizeof(*arena));
- arena->ptes = __alloc_bootmem(mem_pages * PAGE_SIZE, align, 0);
+ arena->ptes = __alloc_bootmem(mem_size, align, 0);
spin_lock_init(&arena->lock);
+ arena->hose = hose;
arena->dma_base = base;
arena->size = window_size;
- arena->alloc_hint = 0;
+ arena->next_entry = 0;
return arena;
}
/* Search forward for the first sequence of N empty ptes. */
beg = arena->ptes;
end = beg + (arena->size >> PAGE_SHIFT);
- p = beg + arena->alloc_hint;
+ p = beg + arena->next_entry;
i = 0;
while (i < n && p < end)
i = (*p++ == 0 ? i + 1 : 0);
- if (p >= end) {
- /* Failure. Assume the hint was wrong and go back to
+ if (i < n) {
+ /* Reached the end. Flush the TLB and restart the
search from the beginning. */
+ alpha_mv.mv_pci_tbi(arena->hose, 0, -1);
+
p = beg;
i = 0;
while (i < n && p < end)
i = (*p++ == 0 ? i + 1 : 0);
- if (p >= end) {
+ if (i < n) {
spin_unlock_irqrestore(&arena->lock, flags);
return -1;
}
for (p = p - n, i = 0; i < n; ++i)
p[i] = ~1UL;
- arena->alloc_hint = p - beg + n;
+ arena->next_entry = p - beg + n;
spin_unlock_irqrestore(&arena->lock, flags);
return p - beg;
p = arena->ptes + ofs;
for (i = 0; i < n; ++i)
p[i] = 0;
- arena->alloc_hint = ofs;
}
\f
/* Map a single buffer of the indicate size for PCI DMA in streaming
paddr = virt_to_phys(cpu_addr);
+#if !DEBUG_NODIRECT
/* First check to see if we can use the direct map window. */
if (paddr + size + __direct_map_base - 1 <= max_dma
&& paddr + size <= __direct_map_size) {
return ret;
}
+#endif
/* If the machine doesn't define a pci_tbi routine, we have to
assume it doesn't support sg mapping. */
if (direction == PCI_DMA_NONE)
BUG();
+#if !DEBUG_NODIRECT
if (dma_addr >= __direct_map_base
&& dma_addr < __direct_map_base + __direct_map_size) {
/* Nothing to do. */
return;
}
+#endif
arena = hose->sg_pci;
if (!arena || dma_addr < arena->dma_base)
npages = calc_npages((dma_addr & ~PAGE_MASK) + size);
iommu_arena_free(arena, dma_ofs, npages);
- alpha_mv.mv_pci_tbi(hose, dma_addr, dma_addr + size - 1);
- DBGA2("pci_unmap_single: sg [%x,%lx] np %ld from %p\n",
- dma_addr, size, npages, __builtin_return_address(0));
+ DBGA("pci_unmap_single: sg [%x,%lx] np %ld from %p\n",
+ dma_addr, size, npages, __builtin_return_address(0));
}
unsigned long *ptes;
long npages, dma_ofs, i;
+#if !DEBUG_NODIRECT
/* If everything is physically contiguous, and the addresses
fall into the direct-map window, use it. */
if (leader->dma_address == 0
return 0;
}
+#endif
/* Otherwise, we'll use the iommu to make the pages virtually
contiguous. */
DBGA(" sg_fill: [%p,%lx] -> sg %x np %ld\n",
leader->address, size, out->dma_address, npages);
+ /* All virtually contiguous. We need to find the length of each
+ physically contiguous subsegment to fill in the ptes. */
ptes = &arena->ptes[dma_ofs];
sg = leader;
- if (0 && leader->dma_address == 0) {
- /* All physically contiguous. We already have the
- length, all we need is to fill in the ptes. */
-
- paddr = virt_to_phys(sg->address) & PAGE_MASK;
- for (i = 0; i < npages; ++i, paddr += PAGE_SIZE)
- *ptes++ = mk_iommu_pte(paddr);
-
-#if DEBUG_ALLOC > 0
- DBGA(" (0) [%p,%x] np %ld\n",
- sg->address, sg->length, npages);
- for (++sg; sg < end && (int) sg->dma_address < 0; ++sg)
- DBGA(" (%ld) [%p,%x] cont\n",
- sg - leader, sg->address, sg->length);
-#endif
- } else {
- /* All virtually contiguous. We need to find the
- length of each physically contiguous subsegment
- to fill in the ptes. */
- do {
- struct scatterlist *last_sg = sg;
+ do {
+ struct scatterlist *last_sg = sg;
- size = sg->length;
- paddr = virt_to_phys(sg->address);
+ size = sg->length;
+ paddr = virt_to_phys(sg->address);
- while (sg+1 < end && (int) sg[1].dma_address == -1) {
- size += sg[1].length;
- sg++;
- }
+ while (sg+1 < end && (int) sg[1].dma_address == -1) {
+ size += sg[1].length;
+ sg++;
+ }
- npages = calc_npages((paddr & ~PAGE_MASK) + size);
+ npages = calc_npages((paddr & ~PAGE_MASK) + size);
- paddr &= PAGE_MASK;
- for (i = 0; i < npages; ++i, paddr += PAGE_SIZE)
- *ptes++ = mk_iommu_pte(paddr);
+ paddr &= PAGE_MASK;
+ for (i = 0; i < npages; ++i, paddr += PAGE_SIZE)
+ *ptes++ = mk_iommu_pte(paddr);
#if DEBUG_ALLOC > 0
- DBGA(" (%ld) [%p,%x] np %ld\n",
+ DBGA(" (%ld) [%p,%x] np %ld\n",
+ last_sg - leader, last_sg->address,
+ last_sg->length, npages);
+ while (++last_sg <= sg) {
+ DBGA(" (%ld) [%p,%x] cont\n",
last_sg - leader, last_sg->address,
- last_sg->length, npages);
- while (++last_sg <= sg) {
- DBGA(" (%ld) [%p,%x] cont\n",
- last_sg - leader, last_sg->address,
- last_sg->length);
- }
+ last_sg->length);
+ }
#endif
- } while (++sg < end && (int) sg->dma_address < 0);
- }
+ } while (++sg < end && (int) sg->dma_address < 0);
return 1;
}
/* Third, iterate over the scatterlist leaders and allocate
dma space as needed. */
for (out = sg; sg < end; ++sg) {
- int ret;
-
if ((int) sg->dma_address < 0)
continue;
-
- ret = sg_fill(sg, end, out, arena, max_dma);
- if (ret < 0)
+ if (sg_fill(sg, end, out, arena, max_dma) < 0)
goto error;
out++;
}
struct pci_iommu_arena *arena;
struct scatterlist *end;
dma_addr_t max_dma;
- dma_addr_t fstart, fend;
if (direction == PCI_DMA_NONE)
BUG();
if (!arena || arena->dma_base + arena->size > max_dma)
arena = hose->sg_isa;
- fstart = -1;
- fend = 0;
for (end = sg + nents; sg < end; ++sg) {
unsigned long addr, size;
+ long npages, ofs;
addr = sg->dma_address;
size = sg->dma_length;
-
if (!size)
break;
+#if !DEBUG_NODIRECT
if (addr >= __direct_map_base
&& addr < __direct_map_base + __direct_map_size) {
/* Nothing to do. */
DBGA(" (%ld) direct [%lx,%lx]\n",
sg - end + nents, addr, size);
- } else {
- long npages, ofs;
- dma_addr_t tend;
-
- DBGA(" (%ld) sg [%lx,%lx]\n",
- sg - end + nents, addr, size);
+ continue;
+ }
+#endif
- npages = calc_npages((addr & ~PAGE_MASK) + size);
- ofs = (addr - arena->dma_base) >> PAGE_SHIFT;
- iommu_arena_free(arena, ofs, npages);
+ DBGA(" (%ld) sg [%lx,%lx]\n",
+ sg - end + nents, addr, size);
- tend = addr + size - 1;
- if (fstart > addr)
- fstart = addr;
- if (fend < tend)
- fend = tend;
- }
+ npages = calc_npages((addr & ~PAGE_MASK) + size);
+ ofs = (addr - arena->dma_base) >> PAGE_SHIFT;
+ iommu_arena_free(arena, ofs, npages);
}
- if (fend)
- alpha_mv.mv_pci_tbi(hose, fstart, fend);
DBGA("pci_unmap_sg: %d entries\n", nents - (end - sg));
}
struct pci_controler *hose;
struct pci_iommu_arena *arena;
+#if !DEBUG_NODIRECT
/* If there exists a direct map, and the mask fits either
MAX_DMA_ADDRESS defined such that GFP_DMA does something
useful, or the total system memory as shifted by the
&& (__direct_map_base + MAX_DMA_ADDRESS-IDENT_ADDR-1 <= mask
|| __direct_map_base + (max_low_pfn<<PAGE_SHIFT)-1 <= mask))
return 1;
+#endif
/* Check that we have a scatter-gather arena that fits. */
hose = pdev ? pdev->sysdata : pci_isa_hose;
flush_tlb_mm(mm);
}
+static void
+ipi_flush_icache_page(void *x)
+{
+ struct mm_struct *mm = (struct mm_struct *) x;
+ if (mm == current->active_mm)
+ __load_new_mm_context(mm);
+}
+
+void
+flush_icache_page(struct vm_area_struct *vma, struct page *page)
+{
+ struct mm_struct *mm = vma->vm_mm;
+
+ if ((vma->vm_flags & VM_EXEC) == 0)
+ return;
+
+ mm->context = 0;
+ if (mm == current->active_mm) {
+ __load_new_mm_context(mm);
+ if (atomic_read(&mm->mm_users) <= 1)
+ return;
+ }
+
+ if (smp_call_function(ipi_flush_icache_page, mm, 1, 1)) {
+ printk(KERN_CRIT "flush_icache_page: timed out\n");
+ }
+}
\f
int
smp_info(char *buffer)
unsigned long last_asn = ASN_FIRST_VERSION;
#endif
-void ev5_flush_tlb_current(struct mm_struct *mm)
+extern void
+__load_new_mm_context(struct mm_struct *next_mm)
{
- ev5_activate_mm(NULL, mm, smp_processor_id());
+ unsigned long mmc;
+
+ mmc = __get_new_mm_context(next_mm, smp_processor_id());
+ next_mm->context = mmc;
+ current->thread.asn = mmc & HARDWARE_ASN_MASK;
+ current->thread.ptbr
+ = ((unsigned long) next_mm->pgd - IDENT_ADDR) >> PAGE_SHIFT;
+
+ __reload_thread(¤t->thread);
}
define_bool CONFIG_X86_USE_3DNOW y
fi
-if [ "$CONFIG_PROC_FS" = "y" ]; then
- tristate '/proc/driver/microcode - Intel P6 CPU microcode support' CONFIG_MICROCODE
+if [ "$CONFIG_DEVFS_FS" = "y" ]; then
+ tristate '/dev/cpu/microcode - Intel P6 CPU microcode support' CONFIG_MICROCODE
fi
choice 'High Memory Support' \
CONFIG_X86_TSC=y
CONFIG_X86_GOOD_APIC=y
CONFIG_X86_PGE=y
-# CONFIG_MICROCODE is not set
CONFIG_NOHIGHMEM=y
# CONFIG_HIGHMEM4G is not set
# CONFIG_HIGHMEM64G is not set
# CONFIG_BLK_DEV_HD_IDE is not set
CONFIG_BLK_DEV_IDEDISK=y
# CONFIG_IDEDISK_MULTI_MODE is not set
+# CONFIG_BLK_DEV_IDECS is not set
CONFIG_BLK_DEV_IDECD=y
# CONFIG_BLK_DEV_IDETAPE is not set
# CONFIG_BLK_DEV_IDEFLOPPY is not set
endif
endif
-ifeq ($(CONFIG_PROC_FS),y)
ifeq ($(CONFIG_MICROCODE),y)
OX_OBJS += microcode.o
else
MX_OBJS += microcode.o
endif
endif
-endif
ifeq ($(CONFIG_ACPI),y)
O_OBJS += acpi.o
* 1.02 21 February 2000, Tigran Aivazian <tigran@sco.com>
* Added 'device trimming' support. open(O_WRONLY) zeroes
* and frees the saved copy of applied microcode.
+ * 1.03 29 February 2000, Tigran Aivazian <tigran@sco.com>
+ * Made to use devfs (/dev/cpu/microcode) + cleanups.
*/
#include <linux/init.h>
+#include <linux/slab.h>
#include <linux/sched.h>
#include <linux/module.h>
#include <linux/vmalloc.h>
#include <linux/smp_lock.h>
-#include <linux/proc_fs.h>
+#include <linux/devfs_fs_kernel.h>
#include <asm/msr.h>
#include <asm/uaccess.h>
#include <asm/processor.h>
-#define MICROCODE_VERSION "1.02"
+#define MICROCODE_VERSION "1.03"
MODULE_DESCRIPTION("CPU (P6) microcode update driver");
MODULE_AUTHOR("Tigran Aivazian <tigran@ocston.org>");
static unsigned long microcode_status = 0;
/* the actual array of microcode blocks, each 2048 bytes */
-static struct microcode * microcode = NULL;
+static struct microcode *microcode = NULL;
static unsigned int microcode_num = 0;
static char *mc_applied = NULL; /* holds an array of applied microcode blocks */
+static unsigned int mc_fsize; /* used often, so compute once at microcode_init() */
static struct file_operations microcode_fops = {
read: microcode_read,
release: microcode_release,
};
-static struct proc_dir_entry *proc_microcode;
+static devfs_handle_t devfs_handle;
static int __init microcode_init(void)
{
- proc_microcode = create_proc_entry("microcode", S_IWUSR|S_IRUSR, proc_root_driver);
- if (!proc_microcode) {
- printk(KERN_ERR "microcode: can't create /proc/driver/microcode\n");
- return -ENOMEM;
- }
- proc_microcode->proc_fops = µcode_fops;
+ devfs_handle = devfs_register(NULL, "cpu/microcode", 0, DEVFS_FL_DEFAULT, 0, 0,
+ S_IFREG | S_IRUSR | S_IWUSR, 0, 0, µcode_fops, NULL);
+ if (!devfs_handle) {
+ printk(KERN_ERR "microcode: can't create /dev/cpu/microcode\n");
+ return -ENOMEM;
+ }
+ /* XXX assume no hotplug CPUs so smp_num_cpus does not change */
+ mc_fsize = smp_num_cpus * sizeof(struct microcode);
printk(KERN_INFO "P6 Microcode Update Driver v%s registered\n", MICROCODE_VERSION);
return 0;
}
static void __exit microcode_exit(void)
{
- remove_proc_entry("microcode", proc_root_driver);
+ devfs_unregister(devfs_handle);
if (mc_applied)
kfree(mc_applied);
printk(KERN_INFO "P6 Microcode Update Driver v%s unregistered\n", MICROCODE_VERSION);
if (test_and_set_bit(MICROCODE_IS_OPEN, µcode_status))
return -EBUSY;
- if ((file->f_flags & O_ACCMODE) == O_WRONLY) {
- proc_microcode->size = 0;
- if (mc_applied) {
- memset(mc_applied, 0, smp_num_cpus * sizeof(struct microcode));
- kfree(mc_applied);
- mc_applied = NULL;
- }
+ if ((file->f_flags & O_ACCMODE) == O_WRONLY && mc_applied) {
+ devfs_set_file_size(devfs_handle, 0);
+ memset(mc_applied, 0, mc_fsize);
+ kfree(mc_applied);
+ mc_applied = NULL;
}
MOD_INC_USE_COUNT;
-
return 0;
}
static int microcode_release(struct inode *inode, struct file *file)
{
- MOD_DEC_USE_COUNT;
-
clear_bit(MICROCODE_IS_OPEN, µcode_status);
+ MOD_DEC_USE_COUNT;
return 0;
}
memcpy(m, µcode[update_req[i].slot], sizeof(struct microcode));
}
}
- return error ? -EIO : 0;
+ return error;
}
static void do_update_one(void *arg)
{
- struct update_req *req;
- struct cpuinfo_x86 * c;
+ int cpu_num = smp_processor_id();
+ struct cpuinfo_x86 *c = cpu_data + cpu_num;
+ struct update_req *req = (struct update_req *)arg + cpu_num;
unsigned int pf = 0, val[2], rev, sig;
- int i, cpu_num;
+ int i;
- cpu_num = smp_processor_id();
- c = cpu_data + cpu_num;
- req = (struct update_req *)arg + cpu_num;
req->err = 1; /* be pessimistic */
if (c->x86_vendor != X86_VENDOR_INTEL || c->x86 < 6)
req->err = 0;
req->slot = i;
- printk(KERN_ERR "microcode: CPU%d microcode updated "
- "from revision %d to %d, date=%08x\n",
+ printk(KERN_ERR "microcode: CPU%d updated from revision "
+ "%d to %d, date=%08x\n",
cpu_num, rev, val[1], m->date);
}
break;
static ssize_t microcode_read(struct file *file, char *buf, size_t len, loff_t *ppos)
{
- size_t fsize = smp_num_cpus * sizeof(struct microcode);
-
- if (!proc_microcode->size || *ppos >= fsize)
- return 0; /* EOF */
- if (*ppos + len > fsize)
- len = fsize - *ppos;
+ if (*ppos >= mc_fsize)
+ return 0;
+ if (*ppos + len > mc_fsize)
+ len = mc_fsize - *ppos;
if (copy_to_user(buf, mc_applied + *ppos, len))
return -EFAULT;
*ppos += len;
return -EINVAL;
}
if (!mc_applied) {
- int size = smp_num_cpus * sizeof(struct microcode);
- mc_applied = kmalloc(size, GFP_KERNEL);
+ mc_applied = kmalloc(mc_fsize, GFP_KERNEL);
if (!mc_applied) {
- printk(KERN_ERR "microcode: can't allocate memory for saved microcode\n");
+ printk(KERN_ERR "microcode: out of memory for saved microcode\n");
return -ENOMEM;
}
- memset(mc_applied, 0, size);
+ memset(mc_applied, 0, mc_fsize);
}
lock_kernel();
microcode_num = len/sizeof(struct microcode);
microcode = vmalloc(len);
if (!microcode) {
- unlock_kernel();
- return -ENOMEM;
+ ret = -ENOMEM;
+ goto out_unlock;
}
if (copy_from_user(microcode, buf, len)) {
- vfree(microcode);
- unlock_kernel();
- return -EFAULT;
+ ret = -EFAULT;
+ goto out_vfree;
}
- ret = do_microcode_update();
- if (!ret) {
- proc_microcode->size = smp_num_cpus * sizeof(struct microcode);
- ret = (ssize_t)len;
+	if (do_microcode_update()) {
+ ret = -EIO;
+ goto out_vfree;
}
+ devfs_set_file_size(devfs_handle, mc_fsize);
+ ret = (ssize_t)len;
+out_vfree:
vfree(microcode);
+out_unlock:
unlock_kernel();
return ret;
}
-/* $Id: pcic.c,v 1.13 2000/02/12 03:05:37 zaitcev Exp $
+/* $Id: pcic.c,v 1.14 2000/03/01 02:53:28 davem Exp $
* pcic.c: Sparc/PCI controller support
*
* Copyright (C) 1998 V. Roganov and G. Raiko
{
}
-#if 0
-int pci_assign_resource(struct pci_dev *dev, int i)
-{
- return -ENOSYS; /* :-)... actually implement this soon */
-}
-#endif
-
int pcibios_enable_device(struct pci_dev *pdev)
{
return 0;
-/* $Id: process.c,v 1.145 2000/01/29 01:08:56 anton Exp $
+/* $Id: process.c,v 1.146 2000/03/01 02:53:27 davem Exp $
* linux/arch/sparc/kernel/process.c
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
if(regs->u_regs[UREG_G1] == 0)
base = 1;
- lock_kernel();
filename = getname((char *)regs->u_regs[base + UREG_I0]);
error = PTR_ERR(filename);
if(IS_ERR(filename))
(char **) regs->u_regs[base + UREG_I2], regs);
putname(filename);
out:
- unlock_kernel();
return error;
}
-/* $Id: irq.c,v 1.84 2000/02/25 05:44:41 davem Exp $
+/* $Id: irq.c,v 1.85 2000/03/02 02:00:24 davem Exp $
* irq.c: UltraSparc IRQ handling/init/registry.
*
* Copyright (C) 1997 David S. Miller (davem@caip.rutgers.edu)
void init_timers(void (*cfunc)(int, void *, struct pt_regs *),
unsigned long *clock)
{
- unsigned long flags;
+ unsigned long pstate;
extern unsigned long timer_tick_offset;
int node, err;
#ifdef __SMP__
prom_halt();
}
- save_and_cli(flags);
+	/* Guarantee that the following sequences execute
+ * uninterrupted.
+ */
+ __asm__ __volatile__("rdpr %%pstate, %0\n\t"
+ "wrpr %0, %1, %%pstate"
+ : "=r" (pstate)
+ : "i" (PSTATE_IE));
/* Set things up so user can access tick register for profiling
- * purposes.
+ * purposes. Also workaround BB_ERRATA_1 by doing a dummy
+ * read back of %tick after writing it.
*/
__asm__ __volatile__("
sethi %%hi(0x80000000), %%g1
- sllx %%g1, 32, %%g1
- rd %%tick, %%g2
+ ba,pt %%xcc, 1f
+ sllx %%g1, 32, %%g1
+ .align 64
+ 1: rd %%tick, %%g2
add %%g2, 6, %%g2
andn %%g2, %%g1, %%g2
wrpr %%g2, 0, %%tick
-" : /* no outputs */
+ rdpr %%tick, %%g0"
+ : /* no outputs */
: /* no inputs */
: "g1", "g2");
+ /* Workaround for Spitfire Errata (#54 I think??), I discovered
+ * this via Sun BugID 4008234, mentioned in Solaris-2.5.1 patch
+ * number 103640.
+ *
+ * On Blackbird writes to %tick_cmpr can fail, the
+ * workaround seems to be to execute the wr instruction
+ * at the start of an I-cache line, and perform a dummy
+ * read back from %tick_cmpr right after writing to it. -DaveM
+ */
__asm__ __volatile__("
rd %%tick, %%g1
- add %%g1, %0, %%g1
- wr %%g1, 0x0, %%tick_cmpr"
+ ba,pt %%xcc, 1f
+ add %%g1, %0, %%g1
+ .align 64
+ 1: wr %%g1, 0x0, %%tick_cmpr
+ rd %%tick_cmpr, %%g0"
: /* no outputs */
: "r" (timer_tick_offset)
: "g1");
- restore_flags(flags);
+ /* Restore PSTATE_IE. */
+ __asm__ __volatile__("wrpr %0, 0x0, %%pstate"
+ : /* no outputs */
+ : "r" (pstate));
+
sti();
}
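Each of these sparc64 hunks follows the same save/disable/restore discipline: read %pstate, drop the interrupt-enable bit around the critical sequence, then write the saved value back. A rough user-space model of that pattern (the register and bit value are invented for illustration; the real code toggles IE with `wrpr %0, PSTATE_IE, %pstate`, an xor, which clears the bit because it starts out set):

```c
#define PSTATE_IE 0x2UL                 /* invented bit value for this sketch */

static unsigned long pstate_reg = PSTATE_IE;   /* model register: IE set */
static int ran_with_ie_clear;

static unsigned long rdpr_pstate(void)            { return pstate_reg; }
static void          wrpr_pstate(unsigned long v) { pstate_reg = v; }

static void timer_setup(void)
{
    /* Save the current state, then clear the interrupt-enable bit so
     * the sequence below cannot be interrupted. */
    unsigned long pstate = rdpr_pstate();
    wrpr_pstate(pstate & ~PSTATE_IE);

    /* ... critical %tick / %tick_cmpr work would go here ... */
    ran_with_ie_clear = !(rdpr_pstate() & PSTATE_IE);

    /* Restore exactly the state the caller had. */
    wrpr_pstate(pstate);
}
```

Restoring the saved value, rather than unconditionally re-enabling, keeps the function correct when the caller already had interrupts disabled.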
-/* $Id: pci.c,v 1.15 2000/02/08 05:11:29 jj Exp $
+/* $Id: pci.c,v 1.16 2000/03/01 02:53:33 davem Exp $
* pci.c: UltraSparc PCI controller support.
*
* Copyright (C) 1997, 1998, 1999 David S. Miller (davem@redhat.com)
{
}
-int pci_assign_resource(struct pci_dev *dev, int i)
-{
- return -ENOSYS; /* :-)... actually implement this soon */
-}
-
int pcibios_enable_device(struct pci_dev *pdev)
{
return 0;
-/* $Id: process.c,v 1.103 2000/01/21 11:38:53 jj Exp $
+/* $Id: process.c,v 1.104 2000/03/01 02:53:32 davem Exp $
* arch/sparc64/kernel/process.c
*
* Copyright (C) 1995, 1996 David S. Miller (davem@caip.rutgers.edu)
/* User register window flush is done by entry.S */
/* Check for indirect call. */
- if(regs->u_regs[UREG_G1] == 0)
+ if (regs->u_regs[UREG_G1] == 0)
base = 1;
- lock_kernel();
filename = getname((char *)regs->u_regs[base + UREG_I0]);
error = PTR_ERR(filename);
- if(IS_ERR(filename))
+ if (IS_ERR(filename))
goto out;
error = do_execve(filename, (char **) regs->u_regs[base + UREG_I1],
(char **) regs->u_regs[base + UREG_I2], regs);
putname(filename);
- if(!error) {
+ if (!error) {
fprs_write(0);
current->thread.xfsr[0] = 0;
current->thread.fpsaved[0] = 0;
regs->tstate &= ~TSTATE_PEF;
}
out:
- unlock_kernel();
return error;
}
void __init smp_callin(void)
{
int cpuid = hard_smp_processor_id();
+ unsigned long pstate;
inherit_locked_prom_mappings(0);
cpu_probe();
- /* Master did this already, now is the time for us to do it. */
+	/* Guarantee that the following sequences execute
+ * uninterrupted.
+ */
+ __asm__ __volatile__("rdpr %%pstate, %0\n\t"
+ "wrpr %0, %1, %%pstate"
+ : "=r" (pstate)
+ : "i" (PSTATE_IE));
+
+ /* Set things up so user can access tick register for profiling
+ * purposes. Also workaround BB_ERRATA_1 by doing a dummy
+ * read back of %tick after writing it.
+ */
__asm__ __volatile__("
sethi %%hi(0x80000000), %%g1
- sllx %%g1, 32, %%g1
- rd %%tick, %%g2
+ ba,pt %%xcc, 1f
+ sllx %%g1, 32, %%g1
+ .align 64
+1: rd %%tick, %%g2
add %%g2, 6, %%g2
andn %%g2, %%g1, %%g2
wrpr %%g2, 0, %%tick
-" : /* no outputs */
+ rdpr %%tick, %%g0"
+ : /* no outputs */
: /* no inputs */
: "g1", "g2");
+ /* Restore PSTATE_IE. */
+ __asm__ __volatile__("wrpr %0, 0x0, %%pstate"
+ : /* no outputs */
+ : "r" (pstate));
+
smp_setup_percpu_timer();
__sti();
void smp_percpu_timer_interrupt(struct pt_regs *regs)
{
- unsigned long compare, tick;
+ unsigned long compare, tick, pstate;
int cpu = smp_processor_id();
int user = user_mode(regs);
prof_counter(cpu) = prof_multiplier(cpu);
}
+	/* Guarantee that the following sequences execute
+ * uninterrupted.
+ */
+ __asm__ __volatile__("rdpr %%pstate, %0\n\t"
+ "wrpr %0, %1, %%pstate"
+ : "=r" (pstate)
+ : "i" (PSTATE_IE));
+
+ /* Workaround for Spitfire Errata (#54 I think??), I discovered
+ * this via Sun BugID 4008234, mentioned in Solaris-2.5.1 patch
+ * number 103640.
+ *
+ * On Blackbird writes to %tick_cmpr can fail, the
+ * workaround seems to be to execute the wr instruction
+ * at the start of an I-cache line, and perform a dummy
+ * read back from %tick_cmpr right after writing to it. -DaveM
+ *
+ * Just to be anal we add a workaround for Spitfire
+ * Errata 50 by preventing pipeline bypasses on the
+ * final read of the %tick register into a compare
+ * instruction. The Errata 50 description states
+ * that %tick is not prone to this bug, but I am not
+ * taking any chances.
+ */
__asm__ __volatile__("rd %%tick_cmpr, %0\n\t"
- "add %0, %2, %0\n\t"
- "wr %0, 0x0, %%tick_cmpr\n\t"
- "rd %%tick, %1"
+ "ba,pt %%xcc, 1f\n\t"
+ " add %0, %2, %0\n\t"
+ ".align 64\n"
+ "1: wr %0, 0x0, %%tick_cmpr\n\t"
+ "rd %%tick_cmpr, %%g0\n\t"
+ "rd %%tick, %1\n\t"
+ "mov %1, %1"
: "=&r" (compare), "=r" (tick)
: "r" (current_tick_offset));
+
+ /* Restore PSTATE_IE. */
+ __asm__ __volatile__("wrpr %0, 0x0, %%pstate"
+ : /* no outputs */
+ : "r" (pstate));
} while (tick >= compare);
}
static void __init smp_setup_percpu_timer(void)
{
int cpu = smp_processor_id();
+ unsigned long pstate;
prof_counter(cpu) = prof_multiplier(cpu) = 1;
- __asm__ __volatile__("rd %%tick, %%g1\n\t"
- "add %%g1, %0, %%g1\n\t"
- "wr %%g1, 0x0, %%tick_cmpr"
+	/* Guarantee that the following sequences execute
+ * uninterrupted.
+ */
+ __asm__ __volatile__("rdpr %%pstate, %0\n\t"
+ "wrpr %0, %1, %%pstate"
+ : "=r" (pstate)
+ : "i" (PSTATE_IE));
+
+ /* Workaround for Spitfire Errata (#54 I think??), I discovered
+ * this via Sun BugID 4008234, mentioned in Solaris-2.5.1 patch
+ * number 103640.
+ *
+ * On Blackbird writes to %tick_cmpr can fail, the
+ * workaround seems to be to execute the wr instruction
+ * at the start of an I-cache line, and perform a dummy
+ * read back from %tick_cmpr right after writing to it. -DaveM
+ */
+ __asm__ __volatile__("
+ rd %%tick, %%g1
+ ba,pt %%xcc, 1f
+ add %%g1, %0, %%g1
+ .align 64
+ 1: wr %%g1, 0x0, %%tick_cmpr
+ rd %%tick_cmpr, %%g0"
+ : /* no outputs */
+ : "r" (current_tick_offset)
+ : "g1");
+
+ /* Restore PSTATE_IE. */
+ __asm__ __volatile__("wrpr %0, 0x0, %%pstate"
: /* no outputs */
- : "r" (current_tick_offset)
- : "g1");
+ : "r" (pstate));
}
void __init smp_tick_init(void)
-/* $Id: sys_sparc32.c,v 1.132 2000/02/16 07:31:35 davem Exp $
+/* $Id: sys_sparc32.c,v 1.133 2000/03/01 02:53:33 davem Exp $
* sys_sparc32.c: Conversion between 32bit and 64bit native syscalls.
*
* Copyright (C) 1997,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
bprm.p = PAGE_SIZE*MAX_ARG_PAGES-sizeof(void *);
memset(bprm.page, 0, MAX_ARG_PAGES * sizeof(bprm.page[0]));
+ lock_kernel();
dentry = open_namei(filename, 0, 0);
+ unlock_kernel();
+
retval = PTR_ERR(dentry);
if (IS_ERR(dentry))
return retval;
bprm.loader = 0;
bprm.exec = 0;
if ((bprm.argc = count32(argv)) < 0) {
+ lock_kernel();
dput(dentry);
+ unlock_kernel();
return bprm.argc;
}
if ((bprm.envc = count32(envp)) < 0) {
+ lock_kernel();
dput(dentry);
+ unlock_kernel();
return bprm.envc;
}
out:
/* Something went wrong, return the inode and free the argument pages*/
- if (bprm.dentry)
+ if (bprm.dentry) {
+ lock_kernel();
dput(bprm.dentry);
+ unlock_kernel();
+ }
for (i=0 ; i<MAX_ARG_PAGES ; i++)
if (bprm.page[i])
if((u32)regs->u_regs[UREG_G1] == 0)
base = 1;
- lock_kernel();
filename = getname32((char *)AA(regs->u_regs[base + UREG_I0]));
error = PTR_ERR(filename);
if(IS_ERR(filename))
regs->tstate &= ~TSTATE_PEF;
}
out:
- unlock_kernel();
return error;
}
-/* $Id: time.c,v 1.23 1999/09/21 14:35:27 davem Exp $
+/* $Id: time.c,v 1.24 2000/03/02 02:00:25 davem Exp $
* time.c: UltraSparc timer and TOD clock support.
*
* Copyright (C) 1997 David S. Miller (davem@caip.rutgers.edu)
static void timer_interrupt(int irq, void *dev_id, struct pt_regs * regs)
{
- unsigned long ticks;
+ unsigned long ticks, pstate;
write_lock(&xtime_lock);
do {
do_timer(regs);
+	/* Guarantee that the following sequences execute
+ * uninterrupted.
+ */
+ __asm__ __volatile__("rdpr %%pstate, %0\n\t"
+ "wrpr %0, %1, %%pstate"
+ : "=r" (pstate)
+ : "i" (PSTATE_IE));
+
+ /* Workaround for Spitfire Errata (#54 I think??), I discovered
+ * this via Sun BugID 4008234, mentioned in Solaris-2.5.1 patch
+ * number 103640.
+ *
+ * On Blackbird writes to %tick_cmpr can fail, the
+ * workaround seems to be to execute the wr instruction
+ * at the start of an I-cache line, and perform a dummy
+ * read back from %tick_cmpr right after writing to it. -DaveM
+ *
+ * Just to be anal we add a workaround for Spitfire
+ * Errata 50 by preventing pipeline bypasses on the
+ * final read of the %tick register into a compare
+ * instruction. The Errata 50 description states
+ * that %tick is not prone to this bug, but I am not
+ * taking any chances.
+ */
__asm__ __volatile__("
rd %%tick_cmpr, %0
- add %0, %2, %0
- wr %0, 0, %%tick_cmpr
- rd %%tick, %1"
+ ba,pt %%xcc, 1f
+ add %0, %2, %0
+ .align 64
+ 1: wr %0, 0, %%tick_cmpr
+ rd %%tick_cmpr, %%g0
+ rd %%tick, %1
+ mov %1, %1"
: "=&r" (timer_tick_compare), "=r" (ticks)
: "r" (timer_tick_offset));
+
+ /* Restore PSTATE_IE. */
+ __asm__ __volatile__("wrpr %0, 0x0, %%pstate"
+ : /* no outputs */
+ : "r" (pstate));
} while (ticks >= timer_tick_compare);
timer_check_rtc();
if [ "$CONFIG_BLK_DEV_IDEDISK" != "n" ]; then
bool ' Use multi-mode by default' CONFIG_IDEDISK_MULTI_MODE
fi
+ dep_tristate ' PCMCIA IDE support' CONFIG_BLK_DEV_IDECS $CONFIG_BLK_DEV_IDE $CONFIG_PCMCIA
dep_tristate ' Include IDE/ATAPI CDROM support' CONFIG_BLK_DEV_IDECD $CONFIG_BLK_DEV_IDE
dep_tristate ' Include IDE/ATAPI TAPE support' CONFIG_BLK_DEV_IDETAPE $CONFIG_BLK_DEV_IDE
dep_tristate ' Include IDE/ATAPI FLOPPY support' CONFIG_BLK_DEV_IDEFLOPPY $CONFIG_BLK_DEV_IDE
############
+ifeq ($(CONFIG_BLK_DEV_IDECS),y)
+L_OBJS += ide-cs.o
+else
+ ifeq ($(CONFIG_BLK_DEV_IDECS),m)
+ M_OBJS += ide-cs.o
+ endif
+endif
+
ifeq ($(CONFIG_BLK_DEV_IDEDISK),y)
L_OBJS += ide-disk.o
else
--- /dev/null
+/*======================================================================
+
+ A driver for PCMCIA IDE/ATA disk cards
+
+ ide_cs.c 1.26 1999/11/16 02:10:49
+
+ The contents of this file are subject to the Mozilla Public
+ License Version 1.1 (the "License"); you may not use this file
+ except in compliance with the License. You may obtain a copy of
+ the License at http://www.mozilla.org/MPL/
+
+ Software distributed under the License is distributed on an "AS
+ IS" basis, WITHOUT WARRANTY OF ANY KIND, either express or
+ implied. See the License for the specific language governing
+ rights and limitations under the License.
+
+ The initial developer of the original code is David A. Hinds
+ <dhinds@pcmcia.sourceforge.org>. Portions created by David A. Hinds
+ are Copyright (C) 1999 David A. Hinds. All Rights Reserved.
+
+ Alternatively, the contents of this file may be used under the
+ terms of the GNU Public License version 2 (the "GPL"), in which
+ case the provisions of the GPL are applicable instead of the
+ above. If you wish to allow the use of your version of this file
+ only under the terms of the GPL and not to allow others to use
+ your version of this file under the MPL, indicate your decision
+ by deleting the provisions above and replace them with the notice
+ and other provisions required by the GPL. If you do not delete
+ the provisions above, a recipient may use your version of this
+ file under either the MPL or the GPL.
+
+======================================================================*/
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/ptrace.h>
+#include <linux/malloc.h>
+#include <linux/string.h>
+#include <linux/timer.h>
+#include <linux/ioport.h>
+#include <linux/hdreg.h>
+#include <linux/major.h>
+
+#include <asm/io.h>
+#include <asm/system.h>
+
+#include <pcmcia/version.h>
+#include <pcmcia/cs_types.h>
+#include <pcmcia/cs.h>
+#include <pcmcia/cistpl.h>
+#include <pcmcia/ds.h>
+#include <pcmcia/cisreg.h>
+
+#ifdef PCMCIA_DEBUG
+static int pc_debug = PCMCIA_DEBUG;
+MODULE_PARM(pc_debug, "i");
+#define DEBUG(n, args...) if (pc_debug>(n)) printk(KERN_DEBUG args)
+static char *version =
+"ide_cs.c 1.26 1999/11/16 02:10:49 (David Hinds)";
+#else
+#define DEBUG(n, args...)
+#endif
+
+/*====================================================================*/
+
+/* Parameters that can be set with 'insmod' */
+
+/* Bit map of interrupts to choose from */
+static u_int irq_mask = 0xdeb8;
+static int irq_list[4] = { -1 };
+
+MODULE_PARM(irq_mask, "i");
+MODULE_PARM(irq_list, "1-4i");
+
+/*====================================================================*/
+
+static const char ide_major[] = {
+ IDE0_MAJOR, IDE1_MAJOR, IDE2_MAJOR, IDE3_MAJOR,
+#ifdef IDE4_MAJOR
+ IDE4_MAJOR, IDE5_MAJOR
+#endif
+};
+
+typedef struct ide_info_t {
+ dev_link_t link;
+ int ndev;
+ dev_node_t node;
+ int hd;
+} ide_info_t;
+
+static void ide_config(dev_link_t *link);
+static void ide_release(u_long arg);
+static int ide_event(event_t event, int priority,
+ event_callback_args_t *args);
+
+static dev_info_t dev_info = "ide_cs";
+
+static dev_link_t *ide_attach(void);
+static void ide_detach(dev_link_t *);
+
+static dev_link_t *dev_list = NULL;
+
+/*====================================================================*/
+
+static void cs_error(client_handle_t handle, int func, int ret)
+{
+ error_info_t err = { func, ret };
+ CardServices(ReportError, handle, &err);
+}
+
+/*======================================================================
+
+ ide_attach() creates an "instance" of the driver, allocating
+ local data structures for one device. The device is registered
+ with Card Services.
+
+======================================================================*/
+
+static dev_link_t *ide_attach(void)
+{
+ ide_info_t *info;
+ dev_link_t *link;
+ client_reg_t client_reg;
+ int i, ret;
+
+ DEBUG(0, "ide_attach()\n");
+
+ /* Create new ide device */
+ info = kmalloc(sizeof(*info), GFP_KERNEL);
+ if (!info) return NULL;
+ memset(info, 0, sizeof(*info));
+ link = &info->link; link->priv = info;
+
+ link->release.function = &ide_release;
+ link->release.data = (u_long)link;
+ link->io.Attributes1 = IO_DATA_PATH_WIDTH_AUTO;
+ link->io.Attributes2 = IO_DATA_PATH_WIDTH_8;
+ link->io.IOAddrLines = 3;
+ link->irq.Attributes = IRQ_TYPE_EXCLUSIVE;
+ link->irq.IRQInfo1 = IRQ_INFO2_VALID|IRQ_LEVEL_ID;
+ if (irq_list[0] == -1)
+ link->irq.IRQInfo2 = irq_mask;
+ else
+ for (i = 0; i < 4; i++)
+ link->irq.IRQInfo2 |= 1 << irq_list[i];
+ link->conf.Attributes = CONF_ENABLE_IRQ;
+ link->conf.Vcc = 50;
+ link->conf.IntType = INT_MEMORY_AND_IO;
+
+ /* Register with Card Services */
+ link->next = dev_list;
+ dev_list = link;
+ client_reg.dev_info = &dev_info;
+ client_reg.Attributes = INFO_IO_CLIENT | INFO_CARD_SHARE;
+ client_reg.EventMask =
+ CS_EVENT_CARD_INSERTION | CS_EVENT_CARD_REMOVAL |
+ CS_EVENT_RESET_PHYSICAL | CS_EVENT_CARD_RESET |
+ CS_EVENT_PM_SUSPEND | CS_EVENT_PM_RESUME;
+ client_reg.event_handler = &ide_event;
+ client_reg.Version = 0x0210;
+ client_reg.event_callback_args.client_data = link;
+ ret = CardServices(RegisterClient, &link->handle, &client_reg);
+ if (ret != CS_SUCCESS) {
+ cs_error(link->handle, RegisterClient, ret);
+ ide_detach(link);
+ return NULL;
+ }
+
+ return link;
+} /* ide_attach */
+
+/*======================================================================
+
+ This deletes a driver "instance". The device is de-registered
+ with Card Services. If it has been released, all local data
+ structures are freed. Otherwise, the structures will be freed
+ when the device is released.
+
+======================================================================*/
+
+static void ide_detach(dev_link_t *link)
+{
+ dev_link_t **linkp;
+ long flags;
+ int ret;
+
+ DEBUG(0, "ide_detach(0x%p)\n", link);
+
+ /* Locate device structure */
+ for (linkp = &dev_list; *linkp; linkp = &(*linkp)->next)
+ if (*linkp == link) break;
+ if (*linkp == NULL)
+ return;
+
+ save_flags(flags);
+ cli();
+ if (link->state & DEV_RELEASE_PENDING) {
+ del_timer(&link->release);
+ link->state &= ~DEV_RELEASE_PENDING;
+ }
+ restore_flags(flags);
+
+ if (link->state & DEV_CONFIG)
+ ide_release((u_long)link);
+
+ if (link->handle) {
+ ret = CardServices(DeregisterClient, link->handle);
+ if (ret != CS_SUCCESS)
+ cs_error(link->handle, DeregisterClient, ret);
+ }
+
+ /* Unlink, free device structure */
+ *linkp = link->next;
+ kfree(link->priv);
+
+} /* ide_detach */
+
+/*======================================================================
+
+ ide_config() is scheduled to run after a CARD_INSERTION event
+ is received, to configure the PCMCIA socket, and to make the
+ ide device available to the system.
+
+======================================================================*/
+
+#define CS_CHECK(fn, args...) \
+while ((last_ret=CardServices(last_fn=(fn), args))!=0) goto cs_failed
+
+#define CFG_CHECK(fn, args...) \
+if (CardServices(fn, args) != 0) goto next_entry
+
+void ide_config(dev_link_t *link)
+{
+ client_handle_t handle = link->handle;
+ ide_info_t *info = link->priv;
+ tuple_t tuple;
+ u_short buf[128];
+ cisparse_t parse;
+ config_info_t conf;
+ cistpl_cftable_entry_t *cfg = &parse.cftable_entry;
+ cistpl_cftable_entry_t dflt = { 0 };
+ int i, pass, last_ret, last_fn, hd, io_base, ctl_base;
+
+ DEBUG(0, "ide_config(0x%p)\n", link);
+
+ tuple.TupleData = (cisdata_t *)buf;
+ tuple.TupleOffset = 0; tuple.TupleDataMax = 255;
+ tuple.Attributes = 0;
+ tuple.DesiredTuple = CISTPL_CONFIG;
+ CS_CHECK(GetFirstTuple, handle, &tuple);
+ CS_CHECK(GetTupleData, handle, &tuple);
+ CS_CHECK(ParseTuple, handle, &tuple, &parse);
+ link->conf.ConfigBase = parse.config.base;
+ link->conf.Present = parse.config.rmask[0];
+
+ /* Configure card */
+ link->state |= DEV_CONFIG;
+
+ /* Not sure if this is right... look up the current Vcc */
+ CS_CHECK(GetConfigurationInfo, handle, &conf);
+ link->conf.Vcc = conf.Vcc;
+
+ pass = io_base = ctl_base = 0;
+ tuple.DesiredTuple = CISTPL_CFTABLE_ENTRY;
+ tuple.Attributes = 0;
+ CS_CHECK(GetFirstTuple, handle, &tuple);
+ while (1) {
+ CFG_CHECK(GetTupleData, handle, &tuple);
+ CFG_CHECK(ParseTuple, handle, &tuple, &parse);
+
+ /* Check for matching Vcc, unless we're desperate */
+ if (!pass) {
+ if (cfg->vcc.present & (1<<CISTPL_POWER_VNOM)) {
+ if (conf.Vcc != cfg->vcc.param[CISTPL_POWER_VNOM]/10000)
+ goto next_entry;
+ } else if (dflt.vcc.present & (1<<CISTPL_POWER_VNOM)) {
+ if (conf.Vcc != dflt.vcc.param[CISTPL_POWER_VNOM]/10000)
+ goto next_entry;
+ }
+ }
+
+ if (cfg->vpp1.present & (1<<CISTPL_POWER_VNOM))
+ link->conf.Vpp1 = link->conf.Vpp2 =
+ cfg->vpp1.param[CISTPL_POWER_VNOM]/10000;
+ else if (dflt.vpp1.present & (1<<CISTPL_POWER_VNOM))
+ link->conf.Vpp1 = link->conf.Vpp2 =
+ dflt.vpp1.param[CISTPL_POWER_VNOM]/10000;
+
+ if ((cfg->io.nwin > 0) || (dflt.io.nwin > 0)) {
+ cistpl_io_t *io = (cfg->io.nwin) ? &cfg->io : &dflt.io;
+ link->conf.ConfigIndex = cfg->index;
+ link->io.BasePort1 = io->win[0].base;
+ link->io.IOAddrLines = io->flags & CISTPL_IO_LINES_MASK;
+ if (!(io->flags & CISTPL_IO_16BIT))
+ link->io.Attributes1 = IO_DATA_PATH_WIDTH_8;
+ if (io->nwin == 2) {
+ link->io.NumPorts1 = 8;
+ link->io.BasePort2 = io->win[1].base;
+ link->io.NumPorts2 = 1;
+ CFG_CHECK(RequestIO, link->handle, &link->io);
+ io_base = link->io.BasePort1;
+ ctl_base = link->io.BasePort2;
+ } else if ((io->nwin == 1) && (io->win[0].len >= 16)) {
+ link->io.NumPorts1 = io->win[0].len;
+ link->io.NumPorts2 = 0;
+ CFG_CHECK(RequestIO, link->handle, &link->io);
+ io_base = link->io.BasePort1;
+ ctl_base = link->io.BasePort1+0x0e;
+ } else goto next_entry;
+ /* If we've got this far, we're done */
+ break;
+ }
+
+ next_entry:
+ if (cfg->flags & CISTPL_CFTABLE_DEFAULT) dflt = *cfg;
+ if (pass) {
+ CS_CHECK(GetNextTuple, handle, &tuple);
+ } else if (CardServices(GetNextTuple, handle, &tuple) != 0) {
+ CS_CHECK(GetFirstTuple, handle, &tuple);
+ memset(&dflt, 0, sizeof(dflt));
+ pass++;
+ }
+ }
+
+ CS_CHECK(RequestIRQ, handle, &link->irq);
+ CS_CHECK(RequestConfiguration, handle, &link->conf);
+
+ /* deal with brain dead IDE resource management */
+ release_region(link->io.BasePort1, link->io.NumPorts1);
+ if (link->io.NumPorts2)
+ release_region(link->io.BasePort2, link->io.NumPorts2);
+
+ /* retry registration in case device is still spinning up */
+ for (i = 0; i < 10; i++) {
+ hd = ide_register(io_base, ctl_base, link->irq.AssignedIRQ);
+ if (hd >= 0) break;
+ if (link->io.NumPorts1 == 0x20) {
+ hd = ide_register(io_base+0x10, ctl_base+0x10,
+ link->irq.AssignedIRQ);
+ if (hd >= 0) {
+ io_base += 0x10; ctl_base += 0x10;
+ break;
+ }
+ }
+ __set_current_state(TASK_UNINTERRUPTIBLE);
+ schedule_timeout(HZ/10);
+ }
+
+ if (hd < 0) {
+ printk(KERN_NOTICE "ide_cs: ide_register() at 0x%3x & 0x%3x"
+ ", irq %u failed\n", io_base, ctl_base,
+ link->irq.AssignedIRQ);
+ goto failed;
+ }
+
+ MOD_INC_USE_COUNT;
+ info->ndev = 1;
+ sprintf(info->node.dev_name, "hd%c", 'a'+(hd*2));
+ info->node.major = ide_major[hd];
+ info->node.minor = 0;
+ info->hd = hd;
+ link->dev = &info->node;
+ printk(KERN_INFO "ide_cs: %s: Vcc = %d.%d, Vpp = %d.%d\n",
+ info->node.dev_name, link->conf.Vcc/10, link->conf.Vcc%10,
+ link->conf.Vpp1/10, link->conf.Vpp1%10);
+
+ link->state &= ~DEV_CONFIG_PENDING;
+ return;
+
+cs_failed:
+ cs_error(link->handle, last_fn, last_ret);
+failed:
+ ide_release((u_long)link);
+
+} /* ide_config */
+
+/*======================================================================
+
+ After a card is removed, ide_release() will unregister the net
+ device, and release the PCMCIA configuration. If the device is
+ still open, this will be postponed until it is closed.
+
+======================================================================*/
+
+void ide_release(u_long arg)
+{
+ dev_link_t *link = (dev_link_t *)arg;
+ ide_info_t *info = link->priv;
+
+ DEBUG(0, "ide_release(0x%p)\n", link);
+
+ if (info->ndev) {
+ ide_unregister(info->hd);
+ MOD_DEC_USE_COUNT;
+ }
+ info->ndev = 0;
+ link->dev = NULL;
+
+ CardServices(ReleaseConfiguration, link->handle);
+ CardServices(ReleaseIO, link->handle, &link->io);
+ CardServices(ReleaseIRQ, link->handle, &link->irq);
+
+ link->state &= ~DEV_CONFIG;
+
+} /* ide_release */
+
+/*======================================================================
+
+ The card status event handler. Mostly, this schedules other
+ stuff to run after an event is received. A CARD_REMOVAL event
+ also sets some flags to discourage the ide drivers from
+ talking to the ports.
+
+======================================================================*/
+
+int ide_event(event_t event, int priority,
+ event_callback_args_t *args)
+{
+ dev_link_t *link = args->client_data;
+
+ DEBUG(1, "ide_event(0x%06x)\n", event);
+
+ switch (event) {
+ case CS_EVENT_CARD_REMOVAL:
+ link->state &= ~DEV_PRESENT;
+ if (link->state & DEV_CONFIG) {
+ link->release.expires = jiffies + HZ/20;
+ link->state |= DEV_RELEASE_PENDING;
+ add_timer(&link->release);
+ }
+ break;
+ case CS_EVENT_CARD_INSERTION:
+ link->state |= DEV_PRESENT | DEV_CONFIG_PENDING;
+ ide_config(link);
+ break;
+ case CS_EVENT_PM_SUSPEND:
+ link->state |= DEV_SUSPEND;
+ /* Fall through... */
+ case CS_EVENT_RESET_PHYSICAL:
+ if (link->state & DEV_CONFIG)
+ CardServices(ReleaseConfiguration, link->handle);
+ break;
+ case CS_EVENT_PM_RESUME:
+ link->state &= ~DEV_SUSPEND;
+ /* Fall through... */
+ case CS_EVENT_CARD_RESET:
+ if (DEV_OK(link))
+ CardServices(RequestConfiguration, link->handle, &link->conf);
+ break;
+ }
+ return 0;
+} /* ide_event */
+
+/*====================================================================*/
+
+static int __init init_ide_cs(void)
+{
+ servinfo_t serv;
+ DEBUG(0, "%s\n", version);
+ CardServices(GetCardServicesInfo, &serv);
+ if (serv.Revision != CS_RELEASE_CODE) {
+ printk(KERN_NOTICE "ide_cs: Card Services release "
+ "does not match!\n");
+ return -1;
+ }
+ register_pccard_driver(&dev_info, &ide_attach, &ide_detach);
+ return 0;
+}
+
+static void __exit exit_ide_cs(void)
+{
+ DEBUG(0, "ide_cs: unloading\n");
+ unregister_pccard_driver(&dev_info);
+ while (dev_list != NULL)
+ ide_detach(dev_list);
+}
+
+module_init(init_ide_cs);
+module_exit(exit_ide_cs);
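ide_config() above drives Card Services through the CS_CHECK macro, which records which call failed and its return code, then jumps to one common error label. The shape of that macro, reduced to a self-contained user-space sketch (the toy "service" calls are invented):

```c
#include <stdio.h>

/* Toy service calls: return 0 on success, a nonzero error code on failure. */
static int svc_a(void) { return 0; }
static int svc_b(void) { return 7; }   /* always fails in this sketch */

static int last_ret, last_fn;

/* Remember which call failed and bail to a single cleanup label,
 * like ide_cs.c's CS_CHECK(fn, args...). */
#define CHECK(id, call) \
    do { if ((last_ret = (call)) != 0) { last_fn = (id); goto failed; } } while (0)

static int configure(void)
{
    CHECK(1, svc_a());
    CHECK(2, svc_b());
    return 0;
failed:
    /* cs_error()-style report: which function, which code */
    fprintf(stderr, "configure: call %d failed with %d\n", last_fn, last_ret);
    return -1;
}
```

The payoff is the same as in the driver: a long chain of fallible calls reads linearly, with one place that reports and unwinds.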
static int crd_load(struct file *fp, struct file *outfp);
#ifdef CONFIG_BLK_DEV_INITRD
-static int initrd_users = 0;
+static int initrd_users;
#endif
#endif
#ifndef MODULE
-int rd_doload = 0; /* 1 = load RAM disk, 0 = don't load */
+int rd_doload; /* 1 = load RAM disk, 0 = don't load */
int rd_prompt = 1; /* 1 = prompt for RAM disk, 0 = don't prompt */
-int rd_image_start = 0; /* starting block # of image */
+int rd_image_start; /* starting block # of image */
#ifdef CONFIG_BLK_DEV_INITRD
-unsigned long initrd_start,initrd_end;
+unsigned long initrd_start, initrd_end;
int mount_initrd = 1; /* zero if initrd should not be mounted */
-int initrd_below_start_ok = 0;
+int initrd_below_start_ok;
static int __init no_initrd(char *str)
{
/* special: we want to release the ramdisk memory,
it's not like with the other blockdevices where
this ioctl only flushes away the buffer cache. */
- if ((atomic_read(&inode->i_bdev->bd_openers) > 1))
+ if ((atomic_read(&inode->i_bdev->bd_openers) > 2))
return -EBUSY;
destroy_buffers(inode->i_rdev);
rd_blocksizes[minor] = 0;
if (DEVICE_NR(inode->i_rdev) == INITRD_MINOR) {
if (!initrd_start) return -ENODEV;
initrd_users++;
+ filp->f_op = &initrd_fops;
return 0;
}
#endif
{
int i;
- for (i = 0 ; i < NUM_RAMDISKS; i++)
+ for (i = 0 ; i < NUM_RAMDISKS; i++) {
+ struct block_device *bdev;
+ bdev = bdget(kdev_t_to_nr(MKDEV(MAJOR_NR,i)));
+ atomic_dec(&bdev->bd_openers);
destroy_buffers(MKDEV(MAJOR_NR, i));
+ }
devfs_unregister (devfs_handle);
unregister_blkdev( MAJOR_NR, "ramdisk" );
hardsect_size[MAJOR_NR] = rd_hardsec; /* Size of the RAM disk blocks */
blksize_size[MAJOR_NR] = rd_blocksizes; /* Avoid set_blocksize() check */
- for (i = 0; i < NUM_RAMDISKS; i++)
+ for (i = 0; i < NUM_RAMDISKS; i++) {
+ struct block_device *bdev;
register_disk(NULL, MKDEV(MAJOR_NR,i), 1, &fd_fops, rd_size<<1);
+ bdev = bdget(kdev_t_to_nr(MKDEV(MAJOR_NR,i)));
+ atomic_inc(&bdev->bd_openers); /* avoid invalidate_buffers() */
+ }
#ifdef CONFIG_BLK_DEV_INITRD
/* We ought to separate initrd operations here */
infile.f_mode = 1; /* read only */
infile.f_dentry = &in_dentry;
in_dentry.d_inode = inode;
- infile.f_op = &initrd_fops;
+ infile.f_op = &def_blk_fops;
init_special_inode(inode, S_IFBLK | S_IRUSR, kdev_t_to_nr(device));
if ((out_inode = get_empty_inode()) == NULL)
* Revision 1.8: Jul 1 1997
* port to linux-2.1.43 kernel.
* Revision 1.9: Oct 9 1998
- * Added stuff for the IO8+/PCI version. .
+ * Added stuff for the IO8+/PCI version.
+ * Revision 1.10: Oct 22 1999 / Jan 21 2000.
+ * Added stuff for setserial.
+ * Nicolas Mailhot (Nicolas.Mailhot@email.enst.fr)
*
*/
-#define VERSION "1.8"
+#define VERSION "1.10"
/*
/*
 * Now we must calculate some speed dependent things
*/
-
+
/* Set baud rate for port */
- tmp = (((SX_OSCFREQ + baud_table[baud]/2) / baud_table[baud] +
- CD186x_TPC/2) / CD186x_TPC);
+	tmp = port->custom_divisor;
+	if (tmp)
+		printk(KERN_INFO "sx%d: Using custom baud rate divisor %ld.\n"
+		       "This is an untested option, please be careful.\n",
+		       port_No(port), tmp);
+ else
+ tmp = (((SX_OSCFREQ + baud_table[baud]/2) / baud_table[baud] +
+ CD186x_TPC/2) / CD186x_TPC);
+
if ((tmp < 0x10) && time_before(again, jiffies)) {
again = jiffies + HZ * 60;
/* Page 48 of version 2.0 of the CL-CD1865 databook */
sx_out(bp, CD186x_TBPRH, (tmp >> 8) & 0xff);
sx_out(bp, CD186x_RBPRL, tmp & 0xff);
sx_out(bp, CD186x_TBPRL, tmp & 0xff);
-
- baud = (baud_table[baud] + 5) / 10; /* Estimated CPS */
-
+
+ if (port->custom_divisor) {
+ baud = (SX_OSCFREQ + port->custom_divisor/2) / port->custom_divisor;
+ baud = ( baud + 5 ) / 10;
+ } else
+ baud = (baud_table[baud] + 5) / 10; /* Estimated CPS */
+
/* Two timer ticks seems enough to wakeup something like SLIP driver */
tmp = ((baud + HZ/2) / HZ) * 2 - CD186x_NFIFO;
port->wakeup_chars = (tmp < 0) ? 0 : ((tmp >= SERIAL_XMIT_SIZE) ?
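The divisor math in this hunk is all rounded integer division: the oscillator frequency divided by the requested bit rate, then by the timer prescale, with half the divisor added before each division to round to nearest, and a ~10-bits-per-character CPS estimate at the end. A standalone sketch of those formulas (the clock and prescale constants here are illustrative, not the real SX_OSCFREQ/CD186x_TPC values):

```c
/* Illustrative constants -- not taken from the CL-CD186x databook. */
#define OSCFREQ 9830400UL   /* board oscillator, Hz (assumed) */
#define TPC     16UL        /* ticks per character clock (assumed) */

/* Rounded divide: add half the divisor before truncating. */
static unsigned long div_round(unsigned long n, unsigned long d)
{
    return (n + d / 2) / d;
}

/* Baud-rate generator divisor for a requested bit rate. */
static unsigned long brg_divisor(unsigned long baud)
{
    return div_round(div_round(OSCFREQ, baud), TPC);
}

/* Estimated characters per second: roughly 10 bits per character. */
static unsigned long est_cps(unsigned long baud)
{
    return (baud + 5) / 10;
}
```

With a custom divisor the driver runs the first formula backwards (rate = clock / divisor) before estimating CPS the same way.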
if (error)
return error;
- copy_from_user(&tmp, newinfo, sizeof(tmp));
+ if (copy_from_user(&tmp, newinfo, sizeof(tmp)))
+ return -EFAULT;
#if 0
if ((tmp.irq != bp->irq) ||
change_speed = ((port->flags & ASYNC_SPD_MASK) !=
(tmp.flags & ASYNC_SPD_MASK));
+ change_speed |= (tmp.custom_divisor != port->custom_divisor);
if (!capable(CAP_SYS_ADMIN)) {
if ((tmp.close_delay != port->close_delay) ||
return -EPERM;
port->flags = ((port->flags & ~ASYNC_USR_MASK) |
(tmp.flags & ASYNC_USR_MASK));
+ port->custom_divisor = tmp.custom_divisor;
} else {
port->flags = ((port->flags & ~ASYNC_FLAGS) |
(tmp.flags & ASYNC_FLAGS));
port->close_delay = tmp.close_delay;
port->closing_wait = tmp.closing_wait;
+ port->custom_divisor = tmp.custom_divisor;
}
if (change_speed) {
save_flags(flags); cli();
tmp.baud_base = (SX_OSCFREQ + CD186x_TPC/2) / CD186x_TPC;
tmp.close_delay = port->close_delay * HZ/100;
tmp.closing_wait = port->closing_wait * HZ/100;
+ tmp.custom_divisor = port->custom_divisor;
tmp.xmit_fifo_size = CD186x_NFIFO;
- copy_to_user(retinfo, &tmp, sizeof(tmp));
+ if (copy_to_user(retinfo, &tmp, sizeof(tmp)))
+ return -EFAULT;
return 0;
}
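The two fixes above make the ioctl paths honour the return value of copy_from_user()/copy_to_user(), which report the number of bytes left uncopied. A user-space stand-in for the corrected pattern (the fake copier below only simulates the kernel helper):

```c
#include <string.h>
#include <errno.h>

/* Stand-in for copy_from_user(): returns the number of bytes that
 * could NOT be copied (0 on success), like the kernel helper. */
static unsigned long fake_copy(void *dst, const void *src, unsigned long n)
{
    if (src == NULL)            /* simulate a faulting user pointer */
        return n;
    memcpy(dst, src, n);
    return 0;
}

static int demo_src = 42;
static int demo_dst;

/* The corrected pattern: any nonzero residue means -EFAULT. */
static int get_settings(int *out, const int *user_ptr)
{
    int tmp;
    if (fake_copy(&tmp, user_ptr, sizeof(tmp)))
        return -EFAULT;
    *out = tmp;
    return 0;
}
```

Ignoring the residue, as the old code did, silently proceeds with a partially filled structure on a bad user pointer.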
Center of Excellence in Space Data and Information Sciences
Code 930.5, Goddard Space Flight Center, Greenbelt MD 20771
+ 2/2/00- Added support for kernel-level ISAPnP
+ by Stephen Frost <sfrost@snowman.net> and Alessandro Zummo
Cleaned up for 2.3.x/softnet by Jeff Garzik and Alan Cox.
*/
#include <linux/module.h>
#include <linux/version.h>
+#include <linux/isapnp.h>
#include <linux/kernel.h>
#include <linux/sched.h>
{ "Default", 0, 0xFF, XCVR_10baseT, 10000},
};
+#ifdef CONFIG_ISAPNP
+struct corkscrew_isapnp_adapters_struct {
+ unsigned short vendor, function;
+ char *name;
+};
+struct corkscrew_isapnp_adapters_struct corkscrew_isapnp_adapters[] = {
+ {ISAPNP_VENDOR('T', 'C', 'M'), ISAPNP_FUNCTION(0x5051), "3Com Fast EtherLink ISA"},
+ {0, }
+};
+int corkscrew_isapnp_phys_addr[3] = {
+ 0, 0, 0
+};
+#endif
+static int nopnp;
+
static int corkscrew_scan(struct net_device *dev);
static struct net_device *corkscrew_found_device(struct net_device *dev,
int ioaddr, int irq,
static int corkscrew_scan(struct net_device *dev)
{
int cards_found = 0;
- static int ioaddr = 0x100;
+ short i;
+ static int ioaddr;
+	static int pnp_cards;
+
+#ifdef CONFIG_ISAPNP
+ if(nopnp == 1)
+ goto no_pnp;
+ for(i=0; corkscrew_isapnp_adapters[i].vendor != 0; i++) {
+ struct pci_dev *idev = NULL;
+ int irq, j;
+ while((idev = isapnp_find_dev(NULL,
+ corkscrew_isapnp_adapters[i].vendor,
+ corkscrew_isapnp_adapters[i].function,
+ idev))) {
+
+ if(idev->active) idev->deactivate(idev);
+
+ if(idev->prepare(idev)<0)
+ continue;
+ if (!(idev->resource[0].flags & IORESOURCE_IO))
+ continue;
+ if(idev->activate(idev)<0) {
+ printk("isapnp configure failed (out of resources?)\n");
+ return -ENOMEM;
+ }
+ if (!idev->resource[0].start || check_region(idev->resource[0].start,16))
+ continue;
+ ioaddr = idev->resource[0].start;
+ irq = idev->irq_resource[0].start;
+ if(corkscrew_debug)
+ printk ("ISAPNP reports %s at i/o 0x%x, irq %d\n",
+ corkscrew_isapnp_adapters[i].name,ioaddr, irq);
+
+ if ((inw(ioaddr + 0x2002) & 0x1f0) != (ioaddr & 0x1f0))
+ continue;
+ /* Verify by reading the device ID from the EEPROM. */
+ {
+ int timer;
+ outw(EEPROM_Read + 7, ioaddr + Wn0EepromCmd);
+ /* Pause for at least 162 us. for the read to take place. */
+ for (timer = 4; timer >= 0; timer--) {
+ udelay(162);
+ if ((inw(ioaddr + Wn0EepromCmd) & 0x0200)
+ == 0)
+ break;
+ }
+ if (inw(ioaddr + Wn0EepromData) != 0x6d50)
+ continue;
+ }
+ printk(KERN_INFO "3c515 Resource configuration register %#4.4x, DCR %4.4x.\n",
+ inl(ioaddr + 0x2002), inw(ioaddr + 0x2000));
+ /* irq = inw(ioaddr + 0x2002) & 15; */ /* Use the irq from isapnp */
+ corkscrew_isapnp_phys_addr[pnp_cards] = ioaddr;
+ corkscrew_found_device(dev, ioaddr, irq, CORKSCREW_ID, dev
+ && dev->mem_start ? dev->
+ mem_start : options[cards_found]);
+ dev = 0;
+ pnp_cards++;
+ cards_found++;
+ }
+ }
+no_pnp:
+#endif /* CONFIG_ISAPNP */
/* Check all locations on the ISA bus -- evil! */
- for (; ioaddr < 0x400; ioaddr += 0x20) {
+ for (ioaddr = 0x100; ioaddr < 0x400; ioaddr += 0x20) {
int irq;
+#ifdef CONFIG_ISAPNP
+ /* Make sure this was not already picked up by isapnp */
+ if(ioaddr == corkscrew_isapnp_phys_addr[0]) continue;
+ if(ioaddr == corkscrew_isapnp_phys_addr[1]) continue;
+ if(ioaddr == corkscrew_isapnp_phys_addr[2]) continue;
+#endif
if (check_region(ioaddr, CORKSCREW_TOTAL_SIZE))
continue;
/* Check the resource configuration for a matching ioaddr. */
dev = 0;
cards_found++;
}
-
if (corkscrew_debug)
printk(KERN_INFO "%d 3c515 cards found.\n", cards_found);
return cards_found;
* Status: Stable.
* Author: Dag Brattli <dagb@cs.uit.no>
* Created at: Sat Nov 7 21:43:15 1998
- * Modified at: Fri Feb 18 01:48:51 2000
+ * Modified at: Wed Mar 1 11:29:34 2000
* Modified by: Dag Brattli <dagb@cs.uit.no>
*
* Copyright (c) 1998-2000 Dag Brattli <dagb@cs.uit.no>
switch_bank(iobase, BANK2);
outb(EXCR2_RFSIZ|EXCR2_TFSIZ, iobase+EXCR2);
- /* IRCR2: FEND_MD is set */
+ /* IRCR2: FEND_MD is not set */
switch_bank(iobase, BANK5);
- outb(0x2a, iobase+4);
+ outb(0x02, iobase+4);
/* Make sure that some defaults are OK */
switch_bank(iobase, BANK6);
/*
* rrunner.c: Linux driver for the Essential RoadRunner HIPPI board.
*
- * Written 1998 by Jes Sorensen, <Jes.Sorensen@cern.ch>.
+ * Copyright (C) 1998-2000 by Jes Sorensen, <Jes.Sorensen@cern.ch>.
*
* Thanks to Essential Communication for providing us with hardware
* and very comprehensive documentation without which I would not have
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
+ *
+ * Thanks to Jayaram Bhat from ODS/Essential for fixing some of the
+ * stupid bugs in my code.
+ *
+ * Softnet support and various other patches from Val Henson of
+ * ODS/Essential.
*/
#define DEBUG 1
#define PKT_COPY_THRESHOLD 512
#include <linux/module.h>
-
+#include <linux/version.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/ioport.h>
#include <linux/init.h>
#include <linux/delay.h>
#include <linux/mm.h>
-#include <linux/cache.h>
#include <net/sock.h>
#include <asm/system.h>
+#include <asm/cache.h>
#include <asm/byteorder.h>
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/uaccess.h>
+#if (LINUX_VERSION_CODE < 0x02030e)
+#define net_device device
+#endif
+
+#if (LINUX_VERSION_CODE >= 0x02031b)
+#define NEW_NETINIT
+#endif
+
+#if (LINUX_VERSION_CODE < 0x02032b)
+/*
+ * SoftNet changes
+ */
+#define dev_kfree_skb_irq(a) dev_kfree_skb(a)
+#define netif_wake_queue(dev) clear_bit(0, &dev->tbusy)
+#define netif_stop_queue(dev) set_bit(0, &dev->tbusy)
+
+static inline void netif_start_queue(struct net_device *dev)
+{
+ dev->tbusy = 0;
+ dev->start = 1;
+}
+
+#define rr_mark_net_bh(foo) mark_bh(foo)
+#define rr_if_busy(dev) dev->tbusy
+#define rr_if_running(dev) dev->start /* Currently unused. */
+#define rr_if_down(dev) {do{dev->start = 0;}while (0);}
+#else
+#define NET_BH 0
+#define rr_mark_net_bh(foo) {do{} while(0);}
+#define rr_if_busy(dev) test_bit(LINK_STATE_XOFF, &dev->state)
+#define rr_if_running(dev) test_bit(LINK_STATE_START, &dev->state)
+#define rr_if_down(dev) {do{} while(0);}
+#endif
+
#include "rrunner.h"
+#define RUN_AT(x) (jiffies + (x))
+
/*
* Implementation notes:
* stack will need to know about I/O vectors or something similar.
*/
-static const char __initdata *version = "rrunner.c: v0.17 03/09/99 Jes Sorensen (Jes.Sorensen@cern.ch)\n";
+static const char __initdata *version = "rrunner.c: v0.22 03/01/2000 Jes Sorensen (Jes.Sorensen@cern.ch)\n";
+
+static struct net_device *root_dev = NULL;
/*
static int probed __initdata = 0;
+#ifdef NEW_NETINIT
int __init rr_hippi_probe (void)
+#else
+int __init rr_hippi_probe (struct net_device *dev)
+#endif
{
+#ifdef NEW_NETINIT
+ struct net_device *dev;
+#endif
int boards_found = 0;
int version_disp; /* was version info already displayed? */
- struct net_device *dev;
struct pci_dev *pdev = NULL;
struct pci_dev *opdev = NULL;
u8 pci_latency;
dev->get_stats = &rr_get_stats;
dev->do_ioctl = &rr_ioctl;
- /*
- * Dummy value.
- */
- dev->base_addr = 42;
+#if (LINUX_VERSION_CODE < 0x02030d)
+ dev->base_addr = pdev->base_address[0];
+#else
+ dev->base_addr = pdev->resource[0].start;
+#endif
/* display version info if adapter is found */
if (!version_disp)
printk(KERN_INFO "%s: Essential RoadRunner serial HIPPI "
"at 0x%08lx, irq %i, PCI latency %i\n", dev->name,
- pdev->resource[0].start, dev->irq, pci_latency);
+ dev->base_addr, dev->irq, pci_latency);
/*
* Remap the regs into kernel space.
*/
rrpriv->regs = (struct rr_regs *)
- ioremap(pdev->resource[0].start, 0x1000);
+ ioremap(dev->base_addr, 0x1000);
if (!rrpriv->regs){
printk(KERN_ERR "%s: Unable to map I/O register, "
* 1 or more boards. Otherwise, return failure (-ENODEV).
*/
+#ifdef MODULE
return boards_found;
+#else
+ if (boards_found > 0)
+ return 0;
+ else
+ return -ENODEV;
+#endif
}
-static struct net_device *root_dev = NULL;
#ifdef MODULE
#if LINUX_VERSION_CODE > 0x20118
MODULE_DESCRIPTION("Essential RoadRunner HIPPI driver");
#endif
-
int init_module(void)
{
- return rr_hippi_probe()? 0 : -ENODEV;
+ int cards;
+
+ root_dev = NULL;
+
+#ifdef NEW_NETINIT
+ cards = rr_hippi_probe();
+#else
+ cards = rr_hippi_probe(NULL);
+#endif
+ return cards ? 0 : -ENODEV;
}
void cleanup_module(void)
idx = rrpriv->info->cmd_ctrl.pi;
writel(*(u32*)(cmd), &regs->CmdRing[idx]);
- mb();
+ wmb();
idx = (idx - 1) % CMD_RING_ENTRIES;
rrpriv->info->cmd_ctrl.pi = idx;
- mb();
+ wmb();
if (readl(&regs->Mode) & FATAL_ERR)
printk("error code %02x\n", readl(&regs->Fail1));
/*
* Why 32 ? is this not cache line size dependant?
*/
- writel(WBURST_32, &regs->PciState);
- mb();
+ writel(RBURST_64|WBURST_64, &regs->PciState);
+ wmb();
start_pc = rr_read_eeprom_word(rrpriv, &hw->rncd_info.FwStart);
#endif
writel(start_pc + 0x800, &regs->Pc);
- mb();
+ wmb();
udelay(5);
writel(start_pc, &regs->Pc);
- mb();
+ wmb();
return 0;
}
{
struct rr_private *rrpriv;
struct rr_regs *regs;
+ struct eeprom *hw = NULL;
u32 sram_size, rev;
+ int i;
rrpriv = (struct rr_private *)dev->priv;
regs = rrpriv->regs;
printk(" Maximum receive rings %i\n", readl(&regs->MaxRxRng));
#endif
+ /*
+ * Read the hardware address from the eeprom. The HW address
+ * is not really necessary for HIPPI but awfully convenient.
+ * The pointer arithmetic to put it in dev_addr is ugly, but
+ * Donald Becker does it this way for the GigE version of this
+ * card and it's shorter and more portable than any
+ * other method I've seen. -VAL
+ */
+
+ *(u16 *)(dev->dev_addr) =
+ htons(rr_read_eeprom_word(rrpriv, &hw->manf.BoardULA));
+ *(u32 *)(dev->dev_addr+2) =
+ htonl(rr_read_eeprom_word(rrpriv, &hw->manf.BoardULA[4]));
+
+ printk(" MAC: ");
+
+ for (i = 0; i < 5; i++)
+ printk("%2.2x:", dev->dev_addr[i]);
+ printk("%2.2x\n", dev->dev_addr[i]);
+
sram_size = rr_read_eeprom_word(rrpriv, (void *)8);
printk(" SRAM size 0x%06x\n", sram_size);
{
struct rr_private *rrpriv;
struct rr_regs *regs;
- u32 hostctrl;
unsigned long myjif, flags;
struct cmd cmd;
+ u32 hostctrl;
+ int ecode = 0;
short i;
rrpriv = (struct rr_private *)dev->priv;
hostctrl = readl(&regs->HostCtrl);
writel(hostctrl | HALT_NIC | RR_CLEAR_INT, &regs->HostCtrl);
- mb();
+ wmb();
if (hostctrl & PARITY_ERR){
printk("%s: Parity error halting NIC - this is serious!\n",
dev->name);
spin_unlock_irqrestore(&rrpriv->lock, flags);
- return -EFAULT;
+ ecode = -EFAULT;
+ goto error;
}
set_rxaddr(regs, rrpriv->rx_ctrl);
rr_reset(dev);
- writel(0x60, &regs->IntrTmr);
- /*
- * These seem to have no real effect as the Firmware sets
- * it's own default values
- */
- writel(0x10, &regs->WriteDmaThresh);
- writel(0x20, &regs->ReadDmaThresh);
+ /* Tuning values */
+ writel(0x5000, &regs->ConRetry);
+ writel(0x100, &regs->ConRetryTmr);
+ writel(0x500000, &regs->ConTmout);
+ writel(0x60, &regs->IntrTmr);
+ writel(0x500000, &regs->TxDataMvTimeout);
+ writel(0x200000, &regs->RxDataMvTimeout);
+ writel(0x80, &regs->WriteDmaThresh);
+ writel(0x80, &regs->ReadDmaThresh);
rrpriv->fw_running = 0;
- mb();
+ wmb();
hostctrl &= ~(HALT_NIC | INVALID_INST_B | PARITY_ERR);
writel(hostctrl, &regs->HostCtrl);
- mb();
+ wmb();
spin_unlock_irqrestore(&rrpriv->lock, flags);
- udelay(1000);
-
- /*
- * Now start the FirmWare.
- */
- cmd.code = C_START_FW;
- cmd.ring = 0;
- cmd.index = 0;
-
- rr_issue_cmd(rrpriv, &cmd);
-
- /*
- * Give the FirmWare time to chew on the `get running' command.
- */
- myjif = jiffies + 5 * HZ;
- while ((jiffies < myjif) && !rrpriv->fw_running);
-
for (i = 0; i < RX_RING_ENTRIES; i++) {
struct sk_buff *skb;
rrpriv->rx_ring[i].mode = 0;
skb = alloc_skb(dev->mtu + HIPPI_HLEN, GFP_ATOMIC);
+ if (!skb) {
+ printk(KERN_WARNING "%s: Unable to allocate memory "
+ "for receive ring - halting NIC\n", dev->name);
+ ecode = -ENOMEM;
+ goto error;
+ }
rrpriv->rx_skbuff[i] = skb;
/*
* Sanity test to see if we conflict with the DMA
rrpriv->rx_ctrl[4].entries = RX_RING_ENTRIES;
rrpriv->rx_ctrl[4].mode = 8;
rrpriv->rx_ctrl[4].pi = 0;
- mb();
+ wmb();
set_rraddr(&rrpriv->rx_ctrl[4].rngptr, rrpriv->rx_ring);
- cmd.code = C_NEW_RNG;
- cmd.ring = 4;
+ udelay(1000);
+
+ /*
+ * Now start the FirmWare.
+ */
+ cmd.code = C_START_FW;
+ cmd.ring = 0;
cmd.index = 0;
+
rr_issue_cmd(rrpriv, &cmd);
-#if 0
-{
- u32 tmp;
- tmp = readl(&regs->ExtIo);
- writel(0x80, &regs->ExtIo);
-
- i = jiffies + 1 * HZ;
- while (jiffies < i);
- writel(tmp, &regs->ExtIo);
-}
-#endif
- dev->tbusy = 0;
- dev->start = 1;
- return 0;
+ /*
+ * Give the FirmWare time to chew on the `get running' command.
+ */
+ myjif = jiffies + 5 * HZ;
+ while ((jiffies < myjif) && !rrpriv->fw_running);
+
+ netif_start_queue(dev);
+
+ return ecode;
+
+ error:
+ /*
+ * We might have gotten here because we are out of memory,
+ * make sure we release everything we allocated before failing
+ */
+ for (i = 0; i < RX_RING_ENTRIES; i++) {
+ if (rrpriv->rx_skbuff[i]) {
+ rrpriv->rx_ring[i].size = 0;
+ set_rraddr(&rrpriv->rx_ring[i].addr, 0);
+ dev_kfree_skb(rrpriv->rx_skbuff[i]);
+ }
+ }
+ return ecode;
}
switch (rrpriv->evt_ring[eidx].code){
case E_NIC_UP:
tmp = readl(&regs->FwRev);
- printk("%s: Firmware revision %i.%i.%i up and running\n",
- dev->name, (tmp >> 16), ((tmp >> 8) & 0xff),
- (tmp & 0xff));
+ printk(KERN_INFO "%s: Firmware revision %i.%i.%i "
+ "up and running\n", dev->name,
+ (tmp >> 16), ((tmp >> 8) & 0xff), (tmp & 0xff));
rrpriv->fw_running = 1;
- mb();
+ writel(RX_RING_ENTRIES - 1, &regs->IpRxPi);
+ wmb();
break;
case E_LINK_ON:
- printk("%s: Optical link ON\n", dev->name);
+ printk(KERN_INFO "%s: Optical link ON\n", dev->name);
break;
case E_LINK_OFF:
- printk("%s: Optical link OFF\n", dev->name);
+ printk(KERN_INFO "%s: Optical link OFF\n", dev->name);
break;
case E_RX_IDLE:
- printk("%s: RX data not moving\n", dev->name);
+ printk(KERN_WARNING "%s: RX data not moving\n",
+ dev->name);
break;
case E_WATCHDOG:
- printk("%s: The watchdog is here to see us\n",
+ printk(KERN_INFO "%s: The watchdog is here to see "
+ "us\n", dev->name);
+ break;
+ case E_INTERN_ERR:
+ printk(KERN_ERR "%s: HIPPI Internal NIC error\n",
dev->name);
+ writel(readl(&regs->HostCtrl)|HALT_NIC|RR_CLEAR_INT,
+ &regs->HostCtrl);
+ wmb();
+ break;
+ case E_HOST_ERR:
+ printk(KERN_ERR "%s: Host software error\n",
+ dev->name);
+ writel(readl(&regs->HostCtrl)|HALT_NIC|RR_CLEAR_INT,
+ &regs->HostCtrl);
+ wmb();
break;
/*
* TX events.
*/
case E_CON_REJ:
- printk("%s: Connection rejected\n", dev->name);
+ printk(KERN_WARNING "%s: Connection rejected\n",
+ dev->name);
rrpriv->stats.tx_aborted_errors++;
break;
case E_CON_TMOUT:
- printk("%s: Connection timeout\n", dev->name);
+ printk(KERN_WARNING "%s: Connection timeout\n",
+ dev->name);
break;
case E_DISC_ERR:
- printk("%s: HIPPI disconnect error\n", dev->name);
+ printk(KERN_WARNING "%s: HIPPI disconnect error\n",
+ dev->name);
rrpriv->stats.tx_aborted_errors++;
break;
+ case E_INT_PRTY:
+ printk(KERN_ERR "%s: HIPPI Internal Parity error\n",
+ dev->name);
+ writel(readl(&regs->HostCtrl)|HALT_NIC|RR_CLEAR_INT,
+ &regs->HostCtrl);
+ wmb();
+ break;
case E_TX_IDLE:
- printk("%s: Transmitter idle\n", dev->name);
+ printk(KERN_WARNING "%s: Transmitter idle\n",
+ dev->name);
break;
case E_TX_LINK_DROP:
- printk("%s: Link lost during transmit\n", dev->name);
+ printk(KERN_WARNING "%s: Link lost during transmit\n",
+ dev->name);
rrpriv->stats.tx_aborted_errors++;
+ writel(readl(&regs->HostCtrl)|HALT_NIC|RR_CLEAR_INT,
+ &regs->HostCtrl);
+ wmb();
+ break;
+ case E_TX_INV_RNG:
+ printk(KERN_ERR "%s: Invalid send ring block\n",
+ dev->name);
+ writel(readl(&regs->HostCtrl)|HALT_NIC|RR_CLEAR_INT,
+ &regs->HostCtrl);
+ wmb();
+ break;
+ case E_TX_INV_BUF:
+ printk(KERN_ERR "%s: Invalid send buffer address\n",
+ dev->name);
+ writel(readl(&regs->HostCtrl)|HALT_NIC|RR_CLEAR_INT,
+ &regs->HostCtrl);
+ wmb();
+ break;
+ case E_TX_INV_DSC:
+ printk(KERN_ERR "%s: Invalid descriptor address\n",
+ dev->name);
+ writel(readl(&regs->HostCtrl)|HALT_NIC|RR_CLEAR_INT,
+ &regs->HostCtrl);
+ wmb();
break;
/*
* RX events.
*/
- case E_VAL_RNG: /* Should be ignored */
-#if (DEBUG > 2)
- printk("%s: RX ring valid event\n", dev->name);
-#endif
- writel(RX_RING_ENTRIES - 1, &regs->IpRxPi);
- break;
- case E_INV_RNG:
- printk("%s: RX ring invalid event\n", dev->name);
- break;
case E_RX_RNG_OUT:
- printk("%s: Receive ring full\n", dev->name);
+ printk(KERN_INFO "%s: Receive ring full\n", dev->name);
break;
case E_RX_PAR_ERR:
- printk("%s: Receive parity error.\n", dev->name);
+ printk(KERN_WARNING "%s: Receive parity error\n",
+ dev->name);
break;
case E_RX_LLRC_ERR:
- printk("%s: Receive LLRC error.\n", dev->name);
+ printk(KERN_WARNING "%s: Receive LLRC error\n",
+ dev->name);
break;
case E_PKT_LN_ERR:
- printk("%s: Receive packet length error.\n",
+ printk(KERN_WARNING "%s: Receive packet length "
+ "error\n", dev->name);
+ break;
+ case E_RX_INV_BUF:
+ printk(KERN_ERR "%s: Invalid receive buffer "
+ "address\n", dev->name);
+ writel(readl(&regs->HostCtrl)|HALT_NIC|RR_CLEAR_INT,
+ &regs->HostCtrl);
+ wmb();
+ break;
+ case E_RX_INV_DSC:
+ printk(KERN_ERR "%s: Invalid receive descriptor "
+ "address\n", dev->name);
+ writel(readl(&regs->HostCtrl)|HALT_NIC|RR_CLEAR_INT,
+ &regs->HostCtrl);
+ wmb();
+ break;
+ case E_RNG_BLK:
+ printk(KERN_ERR "%s: Invalid ring block\n",
dev->name);
+ writel(readl(&regs->HostCtrl)|HALT_NIC|RR_CLEAR_INT,
+ &regs->HostCtrl);
+ wmb();
break;
default:
- printk("%s: Unhandled event 0x%02x\n",
+ printk(KERN_WARNING "%s: Unhandled event 0x%02x\n",
dev->name, rrpriv->evt_ring[eidx].code);
}
eidx = (eidx + 1) % EVT_RING_ENTRIES;
}
rrpriv->info->evt_ctrl.pi = eidx;
- mb();
+ wmb();
return eidx;
}
static void rx_int(struct net_device *dev, u32 rxlimit, u32 index)
{
struct rr_private *rrpriv = (struct rr_private *)dev->priv;
- u32 pkt_len;
struct rr_regs *regs = rrpriv->regs;
do {
+ u32 pkt_len;
pkt_len = rrpriv->rx_ring[index].size;
#if (DEBUG > 2)
printk("index %i, rxlimit %i\n", index, rxlimit);
if (pkt_len < PKT_COPY_THRESHOLD) {
skb = alloc_skb(pkt_len, GFP_ATOMIC);
if (skb == NULL){
- printk("%s: Out of memory deferring "
- "packet\n", dev->name);
+ printk(KERN_WARNING "%s: Unable to allocate skb (%i bytes), deferring packet\n", dev->name, pkt_len);
rrpriv->stats.rx_dropped++;
goto defer;
}else
} while(index != rxlimit);
rrpriv->cur_rx = index;
- mb();
+ wmb();
}
struct rr_regs *regs;
struct net_device *dev = (struct net_device *)dev_id;
u32 prodidx, rxindex, eidx, txcsmr, rxlimit, txcon;
- unsigned long flags;
rrpriv = (struct rr_private *)dev->priv;
regs = rrpriv->regs;
if (!(readl(&regs->HostCtrl) & RR_INT))
return;
- spin_lock_irqsave(&rrpriv->lock, flags);
+ spin_lock(&rrpriv->lock);
prodidx = readl(&regs->EvtPrd);
txcsmr = (prodidx >> 8) & 0xff;
do {
rrpriv->stats.tx_packets++;
rrpriv->stats.tx_bytes +=rrpriv->tx_skbuff[txcon]->len;
- dev_kfree_skb(rrpriv->tx_skbuff[txcon]);
+ dev_kfree_skb_irq(rrpriv->tx_skbuff[txcon]);
rrpriv->tx_skbuff[txcon] = NULL;
rrpriv->tx_ring[txcon].size = 0;
txcon = (txcon + 1) % TX_RING_ENTRIES;
} while (txcsmr != txcon);
- mb();
+ wmb();
rrpriv->dirty_tx = txcon;
- if (rrpriv->tx_full && dev->tbusy &&
+ if (rrpriv->tx_full && rr_if_busy(dev) &&
(((rrpriv->info->tx_ctrl.pi + 1) % TX_RING_ENTRIES)
!= rrpriv->dirty_tx)){
rrpriv->tx_full = 0;
- dev->tbusy = 0;
- mark_bh(NET_BH);
+ netif_wake_queue(dev);
+ rr_mark_net_bh(NET_BH);
}
}
eidx |= ((txcsmr << 8) | (rxlimit << 16));
writel(eidx, &regs->EvtCon);
- mb();
+ wmb();
- spin_unlock_irqrestore(&rrpriv->lock, flags);
+ spin_unlock(&rrpriv->lock);
+}
+
+
+static void rr_timer(unsigned long data)
+{
+ struct net_device *dev = (struct net_device *)data;
+ struct rr_private *rrpriv = (struct rr_private *)dev->priv;
+ struct rr_regs *regs = rrpriv->regs;
+ unsigned long flags;
+ int i;
+
+ if (readl(&regs->HostCtrl) & NIC_HALTED){
+ printk("%s: Restarting nic\n", dev->name);
+ memset(rrpriv->rx_ctrl, 0, 256 * sizeof(struct ring_ctrl));
+ memset(rrpriv->info, 0, sizeof(struct rr_info));
+ wmb();
+ for (i = 0; i < TX_RING_ENTRIES; i++) {
+ if (rrpriv->tx_skbuff[i]) {
+ rrpriv->tx_ring[i].size = 0;
+ set_rraddr(&rrpriv->tx_ring[i].addr, 0);
+ dev_kfree_skb(rrpriv->tx_skbuff[i]);
+ rrpriv->tx_skbuff[i] = NULL;
+ }
+ }
+
+ for (i = 0; i < RX_RING_ENTRIES; i++) {
+ if (rrpriv->rx_skbuff[i]) {
+ rrpriv->rx_ring[i].size = 0;
+ set_rraddr(&rrpriv->rx_ring[i].addr, 0);
+ dev_kfree_skb(rrpriv->rx_skbuff[i]);
+ rrpriv->rx_skbuff[i] = NULL;
+ }
+ }
+ if (rr_init1(dev)) {
+ spin_lock_irqsave(&rrpriv->lock, flags);
+ writel(readl(&regs->HostCtrl)|HALT_NIC|RR_CLEAR_INT,
+ &regs->HostCtrl);
+ spin_unlock_irqrestore(&rrpriv->lock, flags);
+ }
+ }
+ rrpriv->timer.expires = RUN_AT(5*HZ);
+ add_timer(&rrpriv->timer);
}
goto error;
}
- rrpriv->rx_ctrl = kmalloc(256*sizeof(struct ring_ctrl),
- GFP_KERNEL | GFP_DMA);
+ rrpriv->rx_ctrl = kmalloc(256*sizeof(struct ring_ctrl), GFP_KERNEL);
if (!rrpriv->rx_ctrl) {
ecode = -ENOMEM;
goto error;
}
- rrpriv->info = kmalloc(sizeof(struct rr_info), GFP_KERNEL | GFP_DMA);
+ rrpriv->info = kmalloc(sizeof(struct rr_info), GFP_KERNEL);
if (!rrpriv->info){
- kfree(rrpriv->rx_ctrl);
+ rrpriv->rx_ctrl = NULL;
ecode = -ENOMEM;
goto error;
}
memset(rrpriv->rx_ctrl, 0, 256 * sizeof(struct ring_ctrl));
memset(rrpriv->info, 0, sizeof(struct rr_info));
- mb();
+ wmb();
spin_lock_irqsave(&rrpriv->lock, flags);
writel(readl(&regs->HostCtrl)|HALT_NIC|RR_CLEAR_INT, &regs->HostCtrl);
goto error;
}
- rr_init1(dev);
+ if ((ecode = rr_init1(dev)))
+ goto error;
- dev->tbusy = 0;
- dev->start = 1;
+ /* Set the timer to switch to check for link beat and perhaps switch
+ to an alternate media type. */
+ init_timer(&rrpriv->timer);
+ rrpriv->timer.expires = RUN_AT(5*HZ); /* 5 sec. watchdog */
+ rrpriv->timer.data = (unsigned long)dev;
+ rrpriv->timer.function = &rr_timer; /* timer handler */
+ add_timer(&rrpriv->timer);
+
+ netif_start_queue(dev);
MOD_INC_USE_COUNT;
- return 0;
+ return ecode;
error:
spin_lock_irqsave(&rrpriv->lock, flags);
writel(readl(&regs->HostCtrl)|HALT_NIC|RR_CLEAR_INT, &regs->HostCtrl);
spin_unlock_irqrestore(&rrpriv->lock, flags);
- dev->tbusy = 1;
- dev->start = 0;
- return -ENOMEM;
+ if (rrpriv->info) {
+ kfree(rrpriv->info);
+ rrpriv->info = NULL;
+ }
+ if (rrpriv->rx_ctrl) {
+ kfree(rrpriv->rx_ctrl);
+ rrpriv->rx_ctrl = NULL;
+ }
+
+ netif_stop_queue(dev);
+ rr_if_down(dev);
+
+ return ecode;
}
u32 tmp;
short i;
- dev->start = 0;
- set_bit(0, (void*)&dev->tbusy);
-
+ netif_stop_queue(dev);
+ rr_if_down(dev);
+
rrpriv = (struct rr_private *)dev->priv;
regs = rrpriv->regs;
}else{
tmp |= HALT_NIC | RR_CLEAR_INT;
writel(tmp, &regs->HostCtrl);
- mb();
+ wmb();
}
rrpriv->fw_running = 0;
+ del_timer(&rrpriv->timer);
+
writel(0, &regs->TxPi);
writel(0, &regs->IpRxPi);
rrpriv->tx_ring[i].size = 0;
set_rraddr(&rrpriv->tx_ring[i].addr, 0);
dev_kfree_skb(rrpriv->tx_skbuff[i]);
+ rrpriv->tx_skbuff[i] = NULL;
}
}
rrpriv->rx_ring[i].size = 0;
set_rraddr(&rrpriv->rx_ring[i].addr, 0);
dev_kfree_skb(rrpriv->rx_skbuff[i]);
+ rrpriv->rx_skbuff[i] = NULL;
}
}
- kfree(rrpriv->rx_ctrl);
- kfree(rrpriv->info);
+ if (rrpriv->rx_ctrl) {
+ kfree(rrpriv->rx_ctrl);
+ rrpriv->rx_ctrl = NULL;
+ }
+ if (rrpriv->info) {
+ kfree(rrpriv->info);
+ rrpriv->info = NULL;
+ }
free_irq(dev->irq, dev);
spin_unlock(&rrpriv->lock);
printk("incoming skb too small - reallocating\n");
if (!(new_skb = dev_alloc_skb(len + 8))) {
dev_kfree_skb(skb);
- dev->tbusy = 0;
+ netif_wake_queue(dev);
return -EBUSY;
}
skb_reserve(new_skb, 8);
rrpriv->tx_ring[index].size = len + 8; /* include IFIELD */
rrpriv->tx_ring[index].mode = PACKET_START | PACKET_END;
txctrl->pi = (index + 1) % TX_RING_ENTRIES;
+ wmb();
writel(txctrl->pi, &regs->TxPi);
if (txctrl->pi == rrpriv->dirty_tx){
rrpriv->tx_full = 1;
- set_bit(0, (void*)&dev->tbusy);
+ netif_stop_queue(dev);
}
spin_unlock_irqrestore(&rrpriv->lock, flags);
* Changes by Jochen Friedrich to enable RFC1469 Option 2 multicasting
* i.e. using functional address C0 00 00 04 00 00 to transmit and
* receive multicast packets.
+ *
+ * Changes by Mike Sullivan (based on an original sram patch by Dave
+ * Grothe) to support windowing into on-adapter shared RAM.
+ * i.e. Use LANAID to setup a PnP configuration with 16K RAM. Paging
+ * will shift this 16K window over the entire available shared RAM.
*/
/* change the define of IBMTR_DEBUG_MESSAGES to a nonzero value
#define NO_AUTODETECT 1
#undef NO_AUTODETECT
-#undef ENABLE_PAGING
+/* #undef ENABLE_PAGING */
+#define ENABLE_PAGING 1
#define FALSE 0
static char *version =
"ibmtr.c: v1.3.57 8/ 7/94 Peter De Schrijver and Mark Swanson\n"
" v2.1.125 10/20/98 Paul Norton <pnorton@ieee.org>\n"
-" v2.2.0 12/30/98 Joel Sloan <jjs@c-me.com>\n";
+" v2.2.0 12/30/98 Joel Sloan <jjs@c-me.com>\n"
+" v2.2.1 02/08/00 Mike Sullivan <sullivam@us.ibm.com>\n";
static char pcchannelid[] = {
0x05, 0x00, 0x04, 0x09,
ti->mapped_ram_size = ti->avail_shared_ram;
} else {
#ifdef ENABLE_PAGING
- unsigned char pg_size;
+ unsigned char pg_size=0;
#endif
#if !TR_NEWFORMAT
pg_size=64; /* 32KB page size */
break;
case 0xc:
- ti->page_mask=(ti->mapped_ram_size==32) ? 0xc0 : 0;
- ti->page_mask=(ti->mapped_ram_size==64) ? 0x80 : 0;
- DPRINTK("Dual size shared RAM page (code=0xC), don't support it!\n");
- /* nb/dwm: I did this because RRR (3,2) bits are documented as
- R/O and I can't find how to select which page size
- Also, the above conditional statement sequence is invalid
- as page_mask will always be set by the second stmt */
- kfree_s(ti, sizeof(struct tok_info));
- return -ENODEV;
+ switch (ti->mapped_ram_size) {
+ case 32:
+ ti->page_mask=0xc0;
+ pg_size=32;
+ break;
+ case 64:
+ ti->page_mask=0x80;
+ pg_size=64;
+ break;
+ }
break;
default:
DPRINTK("Unknown shared ram paging info %01X\n",ti->shared_ram_paging);
return -ENODEV;
break;
}
+
+ if (ibmtr_debug_trace & TRC_INIT)
+ DPRINTK("Shared RAM paging code: "
+ "%02X mapped RAM size: %dK shared RAM size: %dK page mask: %02X\n",
+ ti->shared_ram_paging, ti->mapped_ram_size/2, ti->avail_shared_ram/2, ti->page_mask);
+
if (ti->page_mask) {
if (pg_size > ti->mapped_ram_size) {
DPRINTK("Page size (%d) > mapped ram window (%d), can't page.\n",
- pg_size, ti->mapped_ram_size);
+ pg_size/2, ti->mapped_ram_size/2);
ti->page_mask = 0; /* reset paging */
- } else {
- ti->mapped_ram_size=ti->avail_shared_ram;
- DPRINTK("Shared RAM paging enabled. Page size : %uK\n",
- ((ti->page_mask^ 0xff)+1)>>2);
- }
+ }
+ } else if (pg_size > ti->mapped_ram_size) {
+ DPRINTK("Page size (%d) > mapped ram window (%d), can't page.\n",
+ pg_size/2, ti->mapped_ram_size/2);
+ }
#endif
}
/* finish figuring the shared RAM address */
DPRINTK("Hardware address : %02X:%02X:%02X:%02X:%02X:%02X\n",
dev->dev_addr[0], dev->dev_addr[1], dev->dev_addr[2],
dev->dev_addr[3], dev->dev_addr[4], dev->dev_addr[5]);
+ if (ti->page_mask)
+ DPRINTK("Shared RAM paging enabled. Page size: %uK Shared Ram size %dK\n",
+ ((ti->page_mask ^ 0xff)+1)>>2,ti->avail_shared_ram/2);
+ else
+ DPRINTK("Shared RAM paging disabled. ti->page_mask %x\n",ti->page_mask);
#endif
/* Calculate the maximum DHB we can use */
- switch (ti->mapped_ram_size) {
+ if (!ti->page_mask) {
+ ti->avail_shared_ram=ti->mapped_ram_size;
+ }
+ switch (ti->avail_shared_ram) {
case 16 : /* 8KB shared RAM */
ti->dhb_size4mb = MIN(ti->dhb_size4mb, 2048);
ti->rbuf_len4 = 1032;
ti->rbuf_cnt16 = 2;
break;
case 32 : /* 16KB shared RAM */
- ti->dhb_size4mb = MIN(ti->dhb_size4mb, 4464);
+ ti->dhb_size4mb = MIN(ti->dhb_size4mb, 2048);
ti->rbuf_len4 = 520;
ti->rbuf_cnt4 = 9;
- ti->dhb_size16mb = MIN(ti->dhb_size16mb, 4096);
+ ti->dhb_size16mb = MIN(ti->dhb_size16mb, 2048);
ti->rbuf_len16 = 1032; /* 1024 usable */
ti->rbuf_cnt16 = 4;
break;
case 64 : /* 32KB shared RAM */
- ti->dhb_size4mb = MIN(ti->dhb_size4mb, 4464);
+ ti->dhb_size4mb = MIN(ti->dhb_size4mb, 2048);
ti->rbuf_len4 = 1032;
ti->rbuf_cnt4 = 6;
- ti->dhb_size16mb = MIN(ti->dhb_size16mb, 10240);
+ ti->dhb_size16mb = MIN(ti->dhb_size16mb, 2048);
ti->rbuf_len16 = 1032;
ti->rbuf_cnt16 = 10;
break;
case 127 : /* 63KB shared RAM */
- ti->dhb_size4mb = MIN(ti->dhb_size4mb, 4464);
+ ti->dhb_size4mb = MIN(ti->dhb_size4mb, 2048);
ti->rbuf_len4 = 1032;
ti->rbuf_cnt4 = 6;
- ti->dhb_size16mb = MIN(ti->dhb_size16mb, 16384);
+ ti->dhb_size16mb = MIN(ti->dhb_size16mb, 2048);
ti->rbuf_len16 = 1032;
ti->rbuf_cnt16 = 16;
break;
case 128 : /* 64KB shared RAM */
- ti->dhb_size4mb = MIN(ti->dhb_size4mb, 4464);
+ ti->dhb_size4mb = MIN(ti->dhb_size4mb, 2048);
ti->rbuf_len4 = 1032;
ti->rbuf_cnt4 = 6;
- ti->dhb_size16mb = MIN(ti->dhb_size16mb, 17960);
+ ti->dhb_size16mb = MIN(ti->dhb_size16mb, 2048);
ti->rbuf_len16 = 1032;
ti->rbuf_cnt16 = 18;
break;
{
struct tok_info *ti=(struct tok_info *)dev->priv;
+ SET_PAGE(ti->srb_page);
ti->open_status = CLOSED;
dev->init = tok_init_card;
address[3] |= mclist->dmi_addr[5];
mclist = mclist->next;
}
- SET_PAGE(ti->srb);
+ SET_PAGE(ti->srb_page);
for (i=0; i<sizeof(struct srb_set_funct_addr); i++)
isa_writeb(0, ti->srb+i);
struct tok_info *ti=(struct tok_info *) dev->priv;
netif_stop_queue(dev);
-
+ SET_PAGE(ti->srb_page);
isa_writeb(DIR_CLOSE_ADAPTER,
ti->srb + offsetof(struct srb_close_adapter, command));
isa_writeb(CMD_IN_SRB, ti->mmio + ACA_OFFSET + ACA_SET + ISRA_ODD);
sleep_on(&ti->wait_for_tok_int);
+ SET_PAGE(ti->srb_page);
if (isa_readb(ti->srb + offsetof(struct srb_close_adapter, ret_code)))
DPRINTK("close adapter failed: %02X\n",
(int)isa_readb(ti->srb + offsetof(struct srb_close_adapter, ret_code)));
unsigned char status;
struct tok_info *ti;
struct net_device *dev;
+#ifdef ENABLE_PAGING
+ unsigned char save_srpr;
+#endif
dev = dev_id;
#if TR_VERBOSE
#endif
ti = (struct tok_info *) dev->priv;
spin_lock(&(ti->lock));
+#ifdef ENABLE_PAGING
+ save_srpr=isa_readb(ti->mmio+ACA_OFFSET+ACA_RW+SRPR_EVEN);
+#endif
/* Disable interrupts till processing is finished */
isa_writeb((~INT_ENABLE), ti->mmio + ACA_OFFSET + ACA_RESET + ISRP_EVEN);
if (status == 0xFF)
{
DPRINTK("PCMCIA card removed.\n");
- spin_unlock(&(ti->lock));
- return;
+ goto return_point;
}
/* Check ISRP EVEN too. */
if ( isa_readb (ti->mmio + ACA_OFFSET + ACA_RW + ISRP_EVEN) == 0xFF)
{
DPRINTK("PCMCIA card removed.\n");
- spin_unlock(&(ti->lock));
- return;
- }
+ goto return_point;
+ }
#endif
int i;
__u32 check_reason;
+ __u8 check_reason_page=0;
- check_reason=ti->mmio + ntohs(isa_readw(ti->sram + ACA_OFFSET + ACA_RW +WWCR_EVEN));
+ check_reason=ntohs(isa_readw(ti->sram + ACA_OFFSET + ACA_RW +WWCR_EVEN));
+ if (ti->page_mask) {
+ check_reason_page=(check_reason>>8) & ti->page_mask;
+ check_reason &= ~(ti->page_mask << 8);
+ }
+ check_reason += ti->sram;
+ SET_PAGE(check_reason_page);
DPRINTK("Adapter check interrupt\n");
DPRINTK("8 reason bytes follow: ");
/* SRB, ASB, ARB or SSB response */
if (status & SRB_RESP_INT) { /* SRB response */
+ SET_PAGE(ti->srb_page);
+#if TR_VERBOSE
+ DPRINTK("SRB resp: cmd=%02X rsp=%02X\n",
+ isa_readb(ti->srb),
+ isa_readb(ti->srb + offsetof(struct srb_xmit, ret_code)));
+#endif
switch(isa_readb(ti->srb)) { /* SRB command check */
DPRINTK("error on xmit_dir_frame request: %02X\n",
xmit_ret_code);
if (ti->current_skb) {
- dev_kfree_skb(ti->current_skb);
+ dev_kfree_skb_irq(ti->current_skb);
ti->current_skb=NULL;
}
netif_wake_queue(dev);
DPRINTK("error on xmit_ui_frame request: %02X\n",
xmit_ret_code);
if (ti->current_skb) {
- dev_kfree_skb(ti->current_skb);
+ dev_kfree_skb_irq(ti->current_skb);
ti->current_skb=NULL;
}
netif_wake_queue(dev);
unsigned char open_ret_code;
__u16 open_error_code;
- ti->srb=ti->sram+ntohs(isa_readw(ti->init_srb +offsetof(struct srb_open_response, srb_addr)));
- ti->ssb=ti->sram+ntohs(isa_readw(ti->init_srb +offsetof(struct srb_open_response, ssb_addr)));
- ti->arb=ti->sram+ntohs(isa_readw(ti->init_srb +offsetof(struct srb_open_response, arb_addr)));
- ti->asb=ti->sram+ntohs(isa_readw(ti->init_srb +offsetof(struct srb_open_response, asb_addr)));
+ ti->srb=ntohs(isa_readw(ti->init_srb +offsetof(struct srb_open_response, srb_addr)));
+ ti->ssb=ntohs(isa_readw(ti->init_srb +offsetof(struct srb_open_response, ssb_addr)));
+ ti->arb=ntohs(isa_readw(ti->init_srb +offsetof(struct srb_open_response, arb_addr)));
+ ti->asb=ntohs(isa_readw(ti->init_srb +offsetof(struct srb_open_response, asb_addr)));
+ if (ti->page_mask) {
+ ti->srb_page=(ti->srb>>8) & ti->page_mask;
+ ti->srb &= ~(ti->page_mask<<8);
+ ti->ssb_page=(ti->ssb>>8) & ti->page_mask;
+ ti->ssb &= ~(ti->page_mask<<8);
+ ti->arb_page=(ti->arb>>8) & ti->page_mask;
+ ti->arb &= ~(ti->page_mask<<8);
+ ti->asb_page=(ti->asb>>8) & ti->page_mask;
+ ti->asb &= ~(ti->page_mask<<8);
+ }
+ ti->srb+=ti->sram;
+ ti->ssb+=ti->sram;
+ ti->arb+=ti->sram;
+ ti->asb+=ti->sram;
+
ti->current_skb=NULL;
open_ret_code = isa_readb(ti->init_srb +offsetof(struct srb_open_response, ret_code));
} /* SRB response */
if (status & ASB_FREE_INT) { /* ASB response */
+ SET_PAGE(ti->asb_page);
+#if TR_VERBOSE
+ DPRINTK("ASB resp: cmd=%02X\n", isa_readb(ti->asb));
+#endif
switch(isa_readb(ti->asb)) { /* ASB command check */
} /* ASB response */
if (status & ARB_CMD_INT) { /* ARB response */
+ SET_PAGE(ti->arb_page);
+#if TR_VERBOSE
+ DPRINTK("ARB resp: cmd=%02X rsp=%02X\n",
+ isa_readb(ti->arb),
+ isa_readb(ti->arb + offsetof(struct arb_dlc_status, status)));
+#endif
switch (isa_readb(ti->arb)) { /* ARB command check */
if (status & SSB_RESP_INT) { /* SSB response */
unsigned char retcode;
+ SET_PAGE(ti->ssb_page);
+#if TR_VERBOSE
+ DPRINTK("SSB resp: cmd=%02X rsp=%02X\n",
+ isa_readb(ti->ssb), isa_readb(ti->ssb+2));
+#endif
switch (isa_readb(ti->ssb)) { /* SSB command check */
case XMIT_DIR_FRAME:
case XMIT_XID_CMD:
DPRINTK("xmit xid ret_code: %02X\n", (int)isa_readb(ti->ssb+2));
+ break;
default:
DPRINTK("Unknown command %02X in ssb\n", (int)isa_readb(ti->ssb));
DPRINTK("Unexpected interrupt from tr adapter\n");
}
+#ifdef PCMCIA
+ return_point:
+#endif
+#ifdef ENABLE_PAGING
+ isa_writeb(save_srpr, ti->mmio+ACA_OFFSET+ACA_RW+SRPR_EVEN);
+#endif
+
spin_unlock(&(ti->lock));
}
isa_writeb(ti->sram_base, ti->mmio + ACA_OFFSET + ACA_RW + RRR_EVEN);
ti->sram=((__u32)ti->sram_base << 12);
}
- ti->init_srb=ti->sram
- +ntohs((unsigned short)isa_readw(ti->mmio+ ACA_OFFSET + WRBR_EVEN));
- SET_PAGE(ntohs((unsigned short)isa_readw(ti->mmio+ACA_OFFSET + WRBR_EVEN)));
+ ti->init_srb=ntohs((unsigned short)isa_readw(ti->mmio+ ACA_OFFSET + WRBR_EVEN));
+ if (ti->page_mask) {
+ ti->init_srb_page=(ti->init_srb>>8)&ti->page_mask;
+ ti->init_srb &= ~(ti->page_mask<<8);
+ }
+ ti->init_srb+=ti->sram;
+
+ if (ti->avail_shared_ram == 127) {
+ int i;
+ int last_512=0xfe00;
+ if (ti->page_mask) {
+ last_512 &= ~(ti->page_mask<<8);
+ }
+ /* initialize high section of ram (if necessary) */
+ SET_PAGE(0xc0);
+ for (i=0; i<512; i++) {
+ isa_writeb(0,ti->sram+last_512+i);
+ }
+ }
+ SET_PAGE(ti->init_srb_page);
dev->mem_start = ti->sram;
dev->mem_end = ti->sram + (ti->mapped_ram_size<<9) - 1;
#if TR_VERBOSE
{
int i;
- DPRINTK("init_srb(%p):", ti->init_srb);
+ DPRINTK("init_srb(%lx):", (long)ti->init_srb);
for (i=0;i<17;i++) printk("%02X ", (int)isa_readb(ti->init_srb+i));
printk("\n");
}
/* Reset adapter */
netif_stop_queue(dev);
-#ifdef ENABLE_PAGING
- if(ti->page_mask)
- isa_writeb(SRPR_ENABLE_PAGING, ti->mmio + ACA_OFFSET + ACA_RW + SRPR_EVEN);
-#endif
-
isa_writeb(~INT_ENABLE, ti->mmio + ACA_OFFSET + ACA_RESET + ISRP_EVEN);
#if !TR_NEWFORMAT
outb(0, PIOaddr+ADAPTRESET);
for (i=jiffies+TR_RESET_INTERVAL; time_before_eq(jiffies, i);); /* wait 50ms */
outb(0,PIOaddr+ADAPTRESETREL);
+#ifdef ENABLE_PAGING
+ if(ti->page_mask)
+ isa_writeb(SRPR_ENABLE_PAGING, ti->mmio + ACA_OFFSET + ACA_RW + SRPR_EVEN);
+#endif
#if !TR_NEWFORMAT
DPRINTK("card reset\n");
int i;
struct tok_info *ti=(struct tok_info *) dev->priv;
- SET_PAGE(ti->srb);
+ SET_PAGE(ti->srb_page);
for (i=0; i<sizeof(struct dlc_open_sap); i++)
isa_writeb(0, ti->srb+i);
ti->init_srb + offsetof(struct dir_open_adapter, dlc_max_sta));
ti->srb=ti->init_srb; /* We use this one in the interrupt handler */
+ ti->srb_page=ti->init_srb_page;
+ DPRINTK("Opened adapter: Xmit bfrs: %d X %d, Rcv bfrs: %d X %d\n",
+ isa_readb(ti->init_srb+offsetof(struct dir_open_adapter,num_dhb)),
+ ntohs(isa_readw(ti->init_srb+offsetof(struct dir_open_adapter,dhb_length))),
+ ntohs(isa_readw(ti->init_srb+offsetof(struct dir_open_adapter,num_rcv_buf))),
+ ntohs(isa_readw(ti->init_srb+offsetof(struct dir_open_adapter,rcv_buf_len))) );
isa_writeb(INT_ENABLE, ti->mmio + ACA_OFFSET + ACA_SET + ISRP_EVEN);
isa_writeb(CMD_IN_SRB, ti->mmio + ACA_OFFSET + ACA_SET + ISRA_ODD);
unsigned char xmit_command;
int i;
struct trllc *llc;
+ struct srb_xmit xsrb;
+ __u8 dhb_page=0;
+ __u8 llc_ssap;
+ SET_PAGE(ti->asb_page);
if (isa_readb(ti->asb + offsetof(struct asb_xmit_resp, ret_code))!=0xFF)
DPRINTK("ASB not free !!!\n");
providing a shared memory address for us
to stuff with data. Here we compute the
effective address where we will place data.*/
- dhb=ti->sram
- +ntohs(isa_readw(ti->arb + offsetof(struct arb_xmit_req, dhb_address)));
+ SET_PAGE(ti->arb_page);
+ dhb=ntohs(isa_readw(ti->arb + offsetof(struct arb_xmit_req, dhb_address)));
+ if (ti->page_mask) {
+ dhb_page=(dhb >> 8) & ti->page_mask;
+ dhb &= ~(ti->page_mask << 8);
+ }
+ dhb+=ti->sram;
/* Figure out the size of the 802.5 header */
if (!(trhdr->saddr[0] & 0x80)) /* RIF present? */
llc = (struct trllc *)(ti->current_skb->data + hdr_len);
- xmit_command = isa_readb(ti->srb + offsetof(struct srb_xmit, command));
-
+ llc_ssap=llc->ssap;
+ SET_PAGE(ti->srb_page);
+ isa_memcpy_fromio(&xsrb, ti->srb, sizeof(xsrb));
+ SET_PAGE(ti->asb_page);
+ xmit_command=xsrb.command;
+
isa_writeb(xmit_command, ti->asb + offsetof(struct asb_xmit_resp, command));
- isa_writew(isa_readb(ti->srb + offsetof(struct srb_xmit, station_id)),
+ isa_writew(xsrb.station_id,
ti->asb + offsetof(struct asb_xmit_resp, station_id));
- isa_writeb(llc->ssap, ti->asb + offsetof(struct asb_xmit_resp, rsap_value));
- isa_writeb(isa_readb(ti->srb + offsetof(struct srb_xmit, cmd_corr)),
+ isa_writeb(llc_ssap, ti->asb + offsetof(struct asb_xmit_resp, rsap_value));
+ isa_writeb(xsrb.cmd_corr,
ti->asb + offsetof(struct asb_xmit_resp, cmd_corr));
isa_writeb(0, ti->asb + offsetof(struct asb_xmit_resp, ret_code));
isa_writew(htons(0x11),
ti->asb + offsetof(struct asb_xmit_resp, frame_length));
isa_writeb(0x0e, ti->asb + offsetof(struct asb_xmit_resp, hdr_length));
+ SET_PAGE(dhb_page);
isa_writeb(AC, dhb);
isa_writeb(LLC_FRAME, dhb+1);
isa_writew(htons(ti->current_skb->len),
ti->asb + offsetof(struct asb_xmit_resp, frame_length));
+ SET_PAGE(dhb_page);
isa_memcpy_toio(dhb, ti->current_skb->data, ti->current_skb->len);
isa_writeb(RESP_IN_ASB, ti->mmio + ACA_OFFSET + ACA_SET + ISRA_ODD);
ti->tr_stats.tx_bytes+=ti->current_skb->len;
- dev_kfree_skb(ti->current_skb);
+ dev_kfree_skb_irq(ti->current_skb);
ti->current_skb=NULL;
netif_wake_queue(dev);
if (ti->readlog_pending) ibmtr_readlog(dev);
{
struct tok_info *ti=(struct tok_info *) dev->priv;
__u32 rbuffer, rbufdata;
+ __u8 rbuffer_page=0;
__u32 llc;
unsigned char *data;
unsigned int rbuffer_len, lan_hdr_len, hdr_len, ip_len, length;
int IPv4_p = 0;
unsigned int chksum = 0;
struct iphdr *iph;
+ struct arb_rec_req rarb;
- rbuffer=(ti->sram
- +ntohs(isa_readw(ti->arb + offsetof(struct arb_rec_req, rec_buf_addr))))+2;
-
+ SET_PAGE(ti->arb_page);
+ isa_memcpy_fromio(&rarb, ti->arb, sizeof(rarb));
+ rbuffer=ntohs(rarb.rec_buf_addr)+2;
+ if (ti->page_mask) {
+ rbuffer_page=(rbuffer >> 8) & ti->page_mask;
+ rbuffer &= ~(ti->page_mask<<8);
+ }
+ rbuffer += ti->sram;
+
+ SET_PAGE(ti->asb_page);
if(isa_readb(ti->asb + offsetof(struct asb_rec, ret_code))!=0xFF)
DPRINTK("ASB not free !!!\n");
isa_writeb(REC_DATA,
ti->asb + offsetof(struct asb_rec, command));
- isa_writew(isa_readw(ti->arb + offsetof(struct arb_rec_req, station_id)),
+ isa_writew(rarb.station_id,
ti->asb + offsetof(struct asb_rec, station_id));
- isa_writew(isa_readw(ti->arb + offsetof(struct arb_rec_req, rec_buf_addr)),
+ isa_writew(rarb.rec_buf_addr,
ti->asb + offsetof(struct asb_rec, rec_buf_addr));
- lan_hdr_len=isa_readb(ti->arb + offsetof(struct arb_rec_req, lan_hdr_len));
+ lan_hdr_len=rarb.lan_hdr_len;
hdr_len = lan_hdr_len + sizeof(struct trllc) + sizeof(struct iphdr);
-
+
+ SET_PAGE(rbuffer_page);
llc=(rbuffer + offsetof(struct rec_buf, data) + lan_hdr_len);
#if TR_VERBOSE
DPRINTK("offsetof data: %02X lan_hdr_len: %02X\n",
(unsigned int)offsetof(struct rec_buf,data), (unsigned int)lan_hdr_len);
- DPRINTK("llc: %08X rec_buf_addr: %04X ti->sram: %p\n", llc,
- ntohs(isa_readw(ti->arb + offsetof(struct arb_rec_req, rec_buf_addr))),
- ti->sram);
+ DPRINTK("llc: %08X rec_buf_addr: %04X ti->sram: %lx\n", llc,
+ ntohs(rarb.rec_buf_addr),
+ (long)ti->sram);
DPRINTK("dsap: %02X, ssap: %02X, llc: %02X, protid: %02X%02X%02X, "
"ethertype: %04X\n",
(int)isa_readb(llc + offsetof(struct trllc, dsap)),
(int)isa_readw(llc + offsetof(struct trllc, ethertype)));
#endif
if (isa_readb(llc + offsetof(struct trllc, llc))!=UI_CMD) {
+ SET_PAGE(ti->asb_page);
isa_writeb(DATA_LOST, ti->asb + offsetof(struct asb_rec, ret_code));
ti->tr_stats.rx_dropped++;
isa_writeb(RESP_IN_ASB, ti->mmio + ACA_OFFSET + ACA_SET + ISRA_ODD);
return;
}
- length = ntohs(isa_readw(ti->arb+offsetof(struct arb_rec_req, frame_len)));
- if ((isa_readb(llc + offsetof(struct trllc, dsap))==EXTENDED_SAP) &&
+ length = ntohs(rarb.frame_len);
+ if ((isa_readb(llc + offsetof(struct trllc, dsap))==EXTENDED_SAP) &&
(isa_readb(llc + offsetof(struct trllc, ssap))==EXTENDED_SAP) &&
(length>=hdr_len)) {
IPv4_p = 1;
}
#endif
- skb_size = length-lan_hdr_len+sizeof(struct trh_hdr)+sizeof(struct trllc);
+ skb_size = length;
if (!(skb=dev_alloc_skb(skb_size))) {
DPRINTK("out of memory. frame dropped.\n");
ti->tr_stats.rx_dropped++;
+ SET_PAGE(ti->asb_page);
isa_writeb(DATA_LOST, ti->asb + offsetof(struct asb_rec, ret_code));
isa_writeb(RESP_IN_ASB, ti->mmio + ACA_OFFSET + ACA_SET + ISRA_ODD);
return;
break;
length -= rbuffer_len;
data += rbuffer_len;
+ if (ti->page_mask) {
+ rbuffer_page=(rbuffer>>8) & ti->page_mask;
+ rbuffer &= ~(ti->page_mask << 8);
+ }
rbuffer += ti->sram;
+ SET_PAGE(rbuffer_page);
rbuffer_len = ntohs(isa_readw(rbuffer + offsetof(struct rec_buf, buf_len)));
rbufdata = rbuffer + offsetof(struct rec_buf, data);
}
+ SET_PAGE(ti->asb_page);
isa_writeb(0, ti->asb + offsetof(struct asb_rec, ret_code));
isa_writeb(RESP_IN_ASB, ti->mmio + ACA_OFFSET + ACA_SET + ISRA_ODD);
/* Save skb; we'll need it when the adapter asks for the data */
ti->current_skb=skb;
+ SET_PAGE(ti->srb_page);
isa_writeb(XMIT_UI_FRAME, ti->srb + offsetof(struct srb_xmit, command));
isa_writew(ti->exsap_station_id, ti->srb
+offsetof(struct srb_xmit, station_id));
ti=(struct tok_info *) dev->priv;
ti->readlog_pending = 0;
+ SET_PAGE(ti->srb_page);
isa_writeb(DIR_READ_LOG, ti->srb);
isa_writeb(INT_ENABLE, ti->mmio + ACA_OFFSET + ACA_SET + ISRP_EVEN);
isa_writeb(CMD_IN_SRB, ti->mmio + ACA_OFFSET + ACA_SET + ISRA_ODD);
#define TCR_ODD 0x0D
#define TVR_EVEN 0x0E /* Timer value registers - even and odd */
#define TVR_ODD 0x0F
-#define SRPR_EVEN 0x10 /* Shared RAM paging registers - even and odd */
+#define SRPR_EVEN 0x18 /* Shared RAM paging registers - even and odd */
#define SRPR_ENABLE_PAGING 0xc0
-#define SRPR_ODD 0x11 /* Not used. */
+#define SRPR_ODD 0x19 /* Not used. */
#define TOKREAD 0x60
#define TOKOR 0x40
#define TOKAND 0x20
#define ACA_RW 0x00
#ifdef ENABLE_PAGING
-#define SET_PAGE(x) (isa_writeb(((x>>8)&ti.page_mask), \
+#define SET_PAGE(x) (isa_writeb((x), \
ti->mmio + ACA_OFFSET + ACA_RW + SRPR_EVEN))
#else
#define SET_PAGE(x)
__u32 ssb; /* System Status Block address */
__u32 arb; /* Adapter Request Block address */
__u32 asb; /* Adapter Status Block address */
+ __u8 init_srb_page;
+ __u8 srb_page;
+ __u8 ssb_page;
+ __u8 arb_page;
+ __u8 asb_page;
unsigned short exsap_station_id;
unsigned short global_int_enable;
struct sk_buff *current_skb;
DEBUG(3, "cs: read_cb_mem(%d, %#x, %u)\n", space, addr, len);
+ if (!s->cb_config)
+ goto fail;
+
dev = &s->cb_config[fn].dev;
/* Config space? */
dep_tristate ' DABUSB driver' CONFIG_USB_DABUSB $CONFIG_USB
dep_tristate ' PLUSB Prolific USB-Network driver' CONFIG_USB_PLUSB $CONFIG_USB
dep_tristate ' USB ADMteks Pegasus based devices support' CONFIG_USB_PEGASUS $CONFIG_USB
+ dep_tristate ' USB Diamond Rio500 support' CONFIG_USB_RIO500 $CONFIG_USB
comment 'USB HID'
dep_tristate ' USB Human Interface Device (HID) support' CONFIG_USB_HID $CONFIG_USB
obj-$(CONFIG_USB_PLUSB) += plusb.o
obj-$(CONFIG_USB_OV511) += ov511.o
obj-$(CONFIG_USB_PEGASUS) += pegasus.o
+obj-$(CONFIG_USB_RIO500) += rio500.o
# Extract lists of the multi-part drivers.
# The 'int-*' lists are the intermediate files used to build the multi's.
--- /dev/null
+/* -*- linux-c -*- */
+
+/*
+ * Driver for USB Rio 500
+ *
+ * Cesar Miquel (miquel@df.uba.ar)
+ *
+ * based on hp_scanner.c by David E. Nelson (dnelson@jump.net)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ * Based upon mouse.c (Brad Keryan) and printer.c (Michael Gee).
+ *
+ * */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/errno.h>
+#include <linux/miscdevice.h>
+#include <linux/random.h>
+#include <linux/poll.h>
+#include <linux/init.h>
+#include <linux/malloc.h>
+#include <linux/spinlock.h>
+
+#include "usb.h"
+
+#include "rio500_usb.h"
+
+#define RIO_MINOR 64
+
+/* stall/wait timeout for rio */
+#define NAK_TIMEOUT (HZ)
+
+/* size of the bulk-in (read) transfer buffer */
+#define IBUF_SIZE 128
+
+/* size of the bulk-out (write) transfer buffer */
+#define OBUF_SIZE 0x10000
+
+struct rio_usb_data {
+ struct usb_device *rio_dev; /* init: probe_rio */
+ unsigned int ifnum; /* Interface number of the USB device */
+ int isopen; /* nz if open */
+ int present; /* Device is present on the bus */
+ char *obuf, *ibuf; /* transfer buffers */
+ char bulk_in_ep, bulk_out_ep; /* Endpoint assignments */
+ wait_queue_head_t wait_q; /* for timeouts */
+};
+
+static struct rio_usb_data rio_instance;
+
+static int open_rio(struct inode *inode, struct file *file)
+{
+ struct rio_usb_data *rio = &rio_instance;
+
+ if (rio->isopen || !rio->present) {
+ return -EBUSY;
+ }
+ rio->isopen = 1;
+
+ init_waitqueue_head(&rio->wait_q);
+
+ MOD_INC_USE_COUNT;
+
+ info("Rio opened.");
+
+ return 0;
+}
+
+static int close_rio(struct inode *inode, struct file *file)
+{
+ struct rio_usb_data *rio = &rio_instance;
+
+ rio->isopen = 0;
+
+ MOD_DEC_USE_COUNT;
+
+ info("Rio closed.");
+ return 0;
+}
+
+static int
+ioctl_rio(struct inode *inode, struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ struct RioCommand rio_cmd;
+ struct rio_usb_data *rio = &rio_instance;
+ void *data;
+ unsigned char *buffer;
+ int result, requesttype;
+ int retries;
+
+ /* Sanity check to make sure rio is connected, powered, etc */
+ if ( rio == NULL ||
+ rio->present == 0 ||
+ rio->rio_dev == NULL )
+ return -ENODEV;
+
+ switch (cmd) {
+ case RIO_RECV_COMMAND:
+ data = (void *) arg;
+ if (data == NULL)
+ break;
+ copy_from_user_ret(&rio_cmd, data, sizeof(struct RioCommand),
+ -EFAULT);
+ if (rio_cmd.length < 0 || rio_cmd.length > PAGE_SIZE)
+ return -EINVAL;
+ buffer = (unsigned char *) __get_free_page(GFP_KERNEL);
+ if (buffer == NULL)
+ return -ENOMEM;
+ if (copy_from_user(buffer, rio_cmd.buffer, rio_cmd.length)) {
+ free_page((unsigned long) buffer);
+ return -EFAULT;
+ }
+
+ requesttype = rio_cmd.requesttype | USB_DIR_IN |
+ USB_TYPE_VENDOR | USB_RECIP_DEVICE;
+ dbg("sending command: reqtype=%0x req=%0x value=%0x index=%0x len=%0x",
+ requesttype, rio_cmd.request, rio_cmd.value,
+ rio_cmd.index, rio_cmd.length);
+ /* Send rio control message */
+ retries = 3;
+ while (retries) {
+ result = usb_control_msg(rio->rio_dev,
+ usb_rcvctrlpipe(rio->rio_dev, 0),
+ rio_cmd.request,
+ requesttype,
+ rio_cmd.value,
+ rio_cmd.index, buffer,
+ rio_cmd.length,
+ rio_cmd.timeout);
+ if (result == -ETIMEDOUT)
+ retries--;
+ else if (result < 0) {
+ err("Error executing ioctl. code = %d",
+ le32_to_cpu(result));
+ retries = 0;
+ } else {
+ dbg("Executed ioctl. Result = %d (data=%04x)",
+ le32_to_cpu(result),
+ le32_to_cpu(*((long *) buffer)));
+ copy_to_user_ret(rio_cmd.buffer, buffer,
+ rio_cmd.length, -EFAULT);
+ retries = 0;
+ }
+
+ /* rio_cmd.buffer contains a raw stream of single byte
+ data which has been returned from rio. Data is
+ interpreted at application level. For data that
+ will be cast to data types longer than 1 byte, data
+ will be little_endian and will potentially need to
+ be swapped at the app level */
+
+ }
+ free_page((unsigned long) buffer);
+ break;
+
+ case RIO_SEND_COMMAND:
+ data = (void *) arg;
+ if (data == NULL)
+ break;
+ copy_from_user_ret(&rio_cmd, data, sizeof(struct RioCommand),
+ -EFAULT);
+ if (rio_cmd.length < 0 || rio_cmd.length > PAGE_SIZE)
+ return -EINVAL;
+ buffer = (unsigned char *) __get_free_page(GFP_KERNEL);
+ if (buffer == NULL)
+ return -ENOMEM;
+ if (copy_from_user(buffer, rio_cmd.buffer, rio_cmd.length)) {
+ free_page((unsigned long) buffer);
+ return -EFAULT;
+ }
+
+ requesttype = rio_cmd.requesttype | USB_DIR_OUT |
+ USB_TYPE_VENDOR | USB_RECIP_DEVICE;
+ dbg("sending command: reqtype=%0x req=%0x value=%0x index=%0x len=%0x",
+ requesttype, rio_cmd.request, rio_cmd.value,
+ rio_cmd.index, rio_cmd.length);
+ /* Send rio control message */
+ retries = 3;
+ while (retries) {
+ result = usb_control_msg(rio->rio_dev,
+ usb_sndctrlpipe(rio->rio_dev, 0),
+ rio_cmd.request,
+ requesttype,
+ rio_cmd.value,
+ rio_cmd.index, buffer,
+ rio_cmd.length,
+ rio_cmd.timeout);
+ if (result == -ETIMEDOUT)
+ retries--;
+ else if (result < 0) {
+ err("Error executing ioctl. code = %d",
+ le32_to_cpu(result));
+ retries = 0;
+ } else {
+ dbg("Executed ioctl. Result = %d",
+ le32_to_cpu(result));
+ retries = 0;
+
+ }
+
+ }
+ free_page((unsigned long) buffer);
+ break;
+
+ default:
+ return -ENOIOCTLCMD;
+ break;
+ }
+
+ return 0;
+}
+
+static ssize_t
+write_rio(struct file *file, const char *buffer,
+ size_t count, loff_t * ppos)
+{
+ struct rio_usb_data *rio = &rio_instance;
+
+ unsigned long copy_size;
+ unsigned long bytes_written = 0;
+ unsigned int partial;
+
+ int result = 0;
+ int maxretry;
+
+ /* Sanity check to make sure rio is connected, powered, etc */
+ if ( rio == NULL ||
+ rio->present == 0 ||
+ rio->rio_dev == NULL )
+ return -ENODEV;
+
+ do {
+ unsigned long thistime;
+ char *obuf = rio->obuf;
+
+ thistime = copy_size =
+ (count >= OBUF_SIZE) ? OBUF_SIZE : count;
+ if (copy_from_user(rio->obuf, buffer, copy_size))
+ return -EFAULT;
+ maxretry = 5;
+ while (thistime) {
+ if (!rio->rio_dev)
+ return -ENODEV;
+ if (signal_pending(current)) {
+ return bytes_written ? bytes_written : -EINTR;
+ }
+
+ result = usb_bulk_msg(rio->rio_dev,
+ usb_sndbulkpipe(rio->rio_dev, 2),
+ obuf, thistime, &partial, 5 * HZ);
+
+ dbg("write stats: result:%d thistime:%lu partial:%u",
+ result, thistime, partial);
+
+ if (result == USB_ST_TIMEOUT) { /* NAK - so hold for a while */
+ if (!maxretry--) {
+ return -ETIME;
+ }
+ interruptible_sleep_on_timeout(&rio->wait_q, NAK_TIMEOUT);
+ continue;
+ } else if (!result && partial) {
+ obuf += partial;
+ thistime -= partial;
+ } else
+ break;
+ }
+ if (result) {
+ err("Write Whoops - %x", result);
+ return -EIO;
+ }
+ bytes_written += copy_size;
+ count -= copy_size;
+ buffer += copy_size;
+ } while (count > 0);
+
+ return bytes_written ? bytes_written : -EIO;
+}
+
+static ssize_t
+read_rio(struct file *file, char *buffer, size_t count, loff_t * ppos)
+{
+ struct rio_usb_data *rio = &rio_instance;
+ ssize_t read_count;
+ unsigned int partial;
+ int this_read;
+ int result;
+ int maxretry = 10;
+ char *ibuf = rio->ibuf;
+
+ /* Sanity check to make sure rio is connected, powered, etc */
+ if ( rio == NULL ||
+ rio->present == 0 ||
+ rio->rio_dev == NULL )
+ return -ENODEV;
+
+ read_count = 0;
+
+ while (count > 0) {
+ if (signal_pending(current)) {
+ return read_count ? read_count : -EINTR;
+ }
+ if (!rio->rio_dev)
+ return -ENODEV;
+ this_read = (count >= IBUF_SIZE) ? IBUF_SIZE : count;
+
+ result = usb_bulk_msg(rio->rio_dev,
+ usb_rcvbulkpipe(rio->rio_dev, 1),
+ ibuf, this_read, &partial,
+ (int) (HZ * .1));
+
+ dbg("read stats: result:%d this_read:%u partial:%u",
+ result, this_read, partial);
+
+ if (partial) {
+ count = this_read = partial;
+ } else if (result == USB_ST_TIMEOUT || result == 15) { /* FIXME: 15 ??? */
+ if (!maxretry--) {
+ err("read_rio: maxretry timeout");
+ return -ETIME;
+ }
+ interruptible_sleep_on_timeout(&rio->wait_q,
+ NAK_TIMEOUT);
+ continue;
+ } else if (result != USB_ST_DATAUNDERRUN) {
+ err("Read Whoops - result:%u partial:%u this_read:%u",
+ result, partial, this_read);
+ return -EIO;
+ } else {
+ return (0);
+ }
+
+ if (this_read) {
+ if (copy_to_user(buffer, ibuf, this_read))
+ return -EFAULT;
+ count -= this_read;
+ read_count += this_read;
+ buffer += this_read;
+ }
+ }
+ return read_count;
+}
+
+static void *probe_rio(struct usb_device *dev, unsigned int ifnum)
+{
+ struct rio_usb_data *rio = &rio_instance;
+
+ if (dev->descriptor.idVendor != 0x841) {
+ return NULL;
+ }
+
+ if (dev->descriptor.idProduct != 0x1 /* RIO 500 */ ) {
+ warn("Rio player model not supported/tested.");
+ return NULL;
+ }
+
+ info("USB Rio found at address %d", dev->devnum);
+
+ rio->present = 1;
+ rio->rio_dev = dev;
+
+ if (!(rio->obuf = (char *) kmalloc(OBUF_SIZE, GFP_KERNEL))) {
+ err("probe_rio: Not enough memory for the output buffer");
+ return NULL;
+ }
+ dbg("probe_rio: obuf address:%p", rio->obuf);
+
+ if (!(rio->ibuf = (char *) kmalloc(IBUF_SIZE, GFP_KERNEL))) {
+ err("probe_rio: Not enough memory for the input buffer");
+ kfree(rio->obuf);
+ return NULL;
+ }
+ dbg("probe_rio: ibuf address:%p", rio->ibuf);
+
+ return rio;
+}
+
+static void disconnect_rio(struct usb_device *dev, void *ptr)
+{
+ struct rio_usb_data *rio = (struct rio_usb_data *) ptr;
+
+ if (rio->isopen) {
+ rio->isopen = 0;
+ /* better let it finish - the release will do what's needed */
+ rio->rio_dev = NULL;
+ return;
+ }
+ kfree(rio->ibuf);
+ kfree(rio->obuf);
+
+ info("USB Rio disconnected.");
+
+ rio->present = 0;
+}
+
+static struct file_operations usb_rio_fops = {
+ NULL, /* seek */
+ read_rio,
+ write_rio,
+ NULL, /* readdir */
+ NULL, /* poll */
+ ioctl_rio, /* ioctl */
+ NULL, /* mmap */
+ open_rio,
+ NULL, /* flush */
+ close_rio,
+ NULL,
+ NULL, /* fasync */
+};
+
+static struct usb_driver rio_driver = {
+ "rio500",
+ probe_rio,
+ disconnect_rio,
+ {NULL, NULL},
+ &usb_rio_fops,
+ RIO_MINOR
+};
+
+int usb_rio_init(void)
+{
+ if (usb_register(&rio_driver) < 0)
+ return -1;
+
+ info("USB Rio support registered.");
+ return 0;
+}
+
+void usb_rio_cleanup(void)
+{
+ struct rio_usb_data *rio = &rio_instance;
+
+ rio->present = 0;
+ usb_deregister(&rio_driver);
+}
+
+module_init(usb_rio_init);
+module_exit(usb_rio_cleanup);
+
--- /dev/null
+/* ----------------------------------------------------------------------
+
+ Copyright (C) 2000 Cesar Miquel (miquel@df.uba.ar)
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+ ---------------------------------------------------------------------- */
+
+
+
+#define RIO_SEND_COMMAND 0x1
+#define RIO_RECV_COMMAND 0x2
+
+#define RIO_DIR_OUT 0x0
+#define RIO_DIR_IN 0x1
+
+struct RioCommand {
+ short length;
+ int request;
+ int requesttype;
+ int value;
+ int index;
+ void *buffer;
+ int timeout;
+};
#include <linux/fb.h>
#include <linux/init.h>
#include <linux/selection.h>
-#include <linux/delay.h>
#include <asm/pgtable.h>
#include <asm/io.h>
#ifdef CONFIG_ZORRO
if (!(ia_valid & ATTR_MTIME_SET))
attr->ia_mtime = now;
- if (inode->i_sb && inode->i_sb->s_op &&
- inode->i_op->setattr)
+ if (inode && inode->i_op && inode->i_op->setattr)
error = inode->i_op->setattr(dentry, attr);
else {
error = inode_change_ok(inode, attr);
* ftp://prep.ai.mit.edu/pub/gnu/GPL
* Each contributing author retains all rights to their own work.
*
- * (C) 1999 Ben Fennema
+ * (C) 1999-2000 Ben Fennema
* (C) 1999 Stelias Computing Inc
*
* HISTORY
#include "udfdecl.h"
#include <linux/fs.h>
#include <linux/locks.h>
+#include <linux/quotaops.h>
#include <linux/udf_fs.h>
#include <asm/bitops.h>
unsigned int block_group)
{
int slot;
- int nr_groups = (UDF_SB_PARTLEN(sb, UDF_SB_PARTITION(sb)) +
- (sizeof(struct SpaceBitmapDesc) << 3) + (sb->s_blocksize * 8) - 1) / (sb->s_blocksize * 8);
+ int nr_groups = (UDF_SB_PARTLEN(sb, UDF_SB_PARTITION(sb)) +
+ (sizeof(struct SpaceBitmapDesc) << 3) + (sb->s_blocksize * 8) - 1) / (sb->s_blocksize * 8);
if (UDF_SB_LOADED_BLOCK_BITMAPS(sb) > 0 &&
UDF_SB_BLOCK_BITMAP_NUMBER(sb, 0) == block_group &&
udf_debug("bit %ld already set\n", bit + i);
udf_debug("byte=%2x\n", ((char *)bh->b_data)[(bit + i) >> 3]);
}
- else if (UDF_SB_LVIDBH(sb))
+ else
{
- UDF_SB_LVID(sb)->freeSpaceTable[UDF_SB_PARTITION(sb)] =
- cpu_to_le32(le32_to_cpu(UDF_SB_LVID(sb)->freeSpaceTable[UDF_SB_PARTITION(sb)])+1);
+ DQUOT_FREE_BLOCK(sb, inode, 1);
+ if (UDF_SB_LVIDBH(sb))
+ {
+ UDF_SB_LVID(sb)->freeSpaceTable[UDF_SB_PARTITION(sb)] =
+ cpu_to_le32(le32_to_cpu(UDF_SB_LVID(sb)->freeSpaceTable[UDF_SB_PARTITION(sb)])+1);
+ }
}
}
mark_buffer_dirty(bh, 1);
return;
}
-int udf_alloc_blocks(const struct inode * inode, Uint16 partition,
+int udf_prealloc_blocks(const struct inode * inode, Uint16 partition,
Uint32 first_block, Uint32 block_count)
{
int alloc_count = 0;
{
if (!udf_test_bit(bit, bh->b_data))
goto out;
- if (!udf_clear_bit(bit, bh->b_data))
+ else if (DQUOT_PREALLOC_BLOCK(sb, inode, 1))
+ goto out;
+ else if (!udf_clear_bit(bit, bh->b_data))
{
udf_debug("bit already cleared for block %d\n", bit);
+ DQUOT_FREE_BLOCK(sb, inode, 1);
goto out;
}
block_count --;
alloc_count ++;
bit ++;
block ++;
-
}
mark_buffer_dirty(bh, 1);
if (block_count > 0)
for (i=0; i<7 && bit > (group_start << 3) && udf_test_bit(bit - 1, bh->b_data); i++, bit--);
got_block:
+
+ /*
+ * Check quota for allocation of this block.
+ */
+ if (DQUOT_ALLOC_BLOCK(sb, inode, 1))
+ {
+ unlock_super(sb);
+ *err = -EDQUOT;
+ return 0;
+ }
+
newblock = bit + (block_group << (sb->s_blocksize_bits + 3)) -
(sizeof(struct SpaceBitmapDesc) << 3);
* ftp://prep.ai.mit.edu/pub/gnu/GPL
* Each contributing author retains all rights to their own work.
*
- * (C) 1998-1999 Ben Fennema
+ * (C) 1998-2000 Ben Fennema
*
* HISTORY
*
/* readdir and lookup functions */
struct file_operations udf_dir_operations = {
- read: generic_read_dir,
- readdir: udf_readdir,
- ioctl: udf_ioctl,
- fsync: udf_sync_file,
+ read: generic_read_dir,
+ readdir: udf_readdir,
+ ioctl: udf_ioctl,
+ fsync: udf_sync_file,
};
/*
{
filp->f_pos = nf_pos;
- fi = udf_fileident_read(dir, &nf_pos, &fibh, &cfi, &bloc, &extoffset, &offset, &bh);
+ fi = udf_fileident_read(dir, &nf_pos, &fibh, &cfi, &bloc, &extoffset, &eloc, &elen, &offset, &bh);
if (!fi)
{
if ( (cfi.fileCharacteristics & FILE_DELETED) != 0 )
{
- if ( !IS_UNDELETE(dir->i_sb) )
+ if ( !UDF_QUERY_FLAG(dir->i_sb, UDF_FLAG_UNDELETE) )
continue;
}
if ( (cfi.fileCharacteristics & FILE_HIDDEN) != 0 )
{
- if ( !IS_UNHIDE(dir->i_sb) )
+ if ( !UDF_QUERY_FLAG(dir->i_sb, UDF_FLAG_UNHIDE) )
continue;
}
struct udf_fileident_bh *fibh,
struct FileIdentDesc *cfi,
lb_addr *bloc, Uint32 *extoffset,
+ lb_addr *eloc, Uint32 *elen,
Uint32 *offset, struct buffer_head **bh)
{
struct FileIdentDesc *fi;
- lb_addr eloc;
- Uint32 elen;
int block;
fibh->soffset = fibh->eoffset;
{
int lextoffset = *extoffset;
- if (udf_next_aext(dir, bloc, extoffset, &eloc, &elen, bh, 1) !=
+ if (udf_next_aext(dir, bloc, extoffset, eloc, elen, bh, 1) !=
EXTENT_RECORDED_ALLOCATED)
{
return NULL;
}
- block = udf_get_lb_pblock(dir->i_sb, eloc, *offset);
+ block = udf_get_lb_pblock(dir->i_sb, *eloc, *offset);
(*offset) ++;
- if ((*offset << dir->i_sb->s_blocksize_bits) >= elen)
+ if ((*offset << dir->i_sb->s_blocksize_bits) >= *elen)
*offset = 0;
else
*extoffset = lextoffset;
{
int lextoffset = *extoffset;
- if (udf_next_aext(dir, bloc, extoffset, &eloc, &elen, bh, 1) !=
+ if (udf_next_aext(dir, bloc, extoffset, eloc, elen, bh, 1) !=
EXTENT_RECORDED_ALLOCATED)
{
return NULL;
}
- block = udf_get_lb_pblock(dir->i_sb, eloc, *offset);
+ block = udf_get_lb_pblock(dir->i_sb, *eloc, *offset);
(*offset) ++;
- if ((*offset << dir->i_sb->s_blocksize_bits) >= elen)
+ if ((*offset << dir->i_sb->s_blocksize_bits) >= *elen)
*offset = 0;
else
*extoffset = lextoffset;
* Each contributing author retains all rights to their own work.
*
* (C) 1998-1999 Dave Boynton
- * (C) 1998-1999 Ben Fennema
- * (C) 1999 Stelias Computing Inc
+ * (C) 1998-2000 Ben Fennema
+ * (C) 1999-2000 Stelias Computing Inc
*
* HISTORY
*
*/
#include "udfdecl.h"
-#include <linux/config.h>
#include <linux/fs.h>
#include <linux/udf_fs.h>
#include <asm/uaccess.h>
#include "udf_i.h"
#include "udf_sb.h"
-/*
- * Make sure the offset never goes beyond the 32-bit mark..
- */
-static loff_t udf_file_llseek(struct file * file, loff_t offset, int origin)
-{
- struct inode * inode = file->f_dentry->d_inode;
-
- switch (origin)
- {
- case 2:
- {
- offset += inode->i_size;
- break;
- }
- case 1:
- {
- offset += file->f_pos;
- break;
- }
- }
- if (offset != file->f_pos)
- {
- file->f_pos = offset;
- file->f_reada = 0;
- file->f_version = ++event;
- }
- return offset;
-}
-
static int udf_adinicb_readpage(struct dentry *dentry, struct page * page)
{
- struct inode *inode = dentry->d_inode;
+ struct inode *inode = (struct inode *)page->mapping->host;
struct buffer_head *bh;
- unsigned long kaddr = 0;
+ int block;
+ char *kaddr;
if (!PageLocked(page))
PAGE_BUG(page);
- kaddr = kmap(page);
- memset((char *)kaddr, 0, PAGE_CACHE_SIZE);
- bh = getblk (inode->i_dev, inode->i_ino, inode->i_sb->s_blocksize);
- ll_rw_block (READ, 1, &bh);
- wait_on_buffer(bh);
- memcpy((char *)kaddr, bh->b_data + udf_ext0_offset(inode),
- inode->i_size);
+ kaddr = (char *)kmap(page);
+ memset(kaddr, 0, PAGE_CACHE_SIZE);
+ block = udf_get_lb_pblock(inode->i_sb, UDF_I_LOCATION(inode), 0);
+ bh = bread (inode->i_dev, block, inode->i_sb->s_blocksize);
+ memcpy(kaddr, bh->b_data + udf_ext0_offset(inode), inode->i_size);
brelse(bh);
SetPageUptodate(page);
kunmap(page);
static int udf_adinicb_writepage(struct dentry *dentry, struct page *page)
{
- struct inode *inode = dentry->d_inode;
+ struct inode *inode = (struct inode *)page->mapping->host;
struct buffer_head *bh;
- unsigned long kaddr = 0;
+ int block;
+ char *kaddr;
if (!PageLocked(page))
- BUG();
+ PAGE_BUG(page);
- kaddr = kmap(page);
- bh = getblk (inode->i_dev, inode->i_ino, inode->i_sb->s_blocksize);
- if (!buffer_uptodate(bh))
- {
- ll_rw_block (READ, 1, &bh);
- wait_on_buffer(bh);
- }
- memcpy(bh->b_data + udf_ext0_offset(inode), (char *)kaddr,
- inode->i_size);
- ll_rw_block (WRITE, 1, &bh);
- wait_on_buffer(bh);
+ kaddr = (char *)kmap(page);
+ block = udf_get_lb_pblock(inode->i_sb, UDF_I_LOCATION(inode), 0);
+ bh = bread (inode->i_dev, block, inode->i_sb->s_blocksize);
+ memcpy(bh->b_data + udf_ext0_offset(inode), kaddr, inode->i_size);
+ mark_buffer_dirty(bh, 0);
brelse(bh);
SetPageUptodate(page);
kunmap(page);
static int udf_adinicb_commit_write(struct file *file, struct page *page, unsigned offset, unsigned to)
{
- struct inode *inode = file->f_dentry->d_inode;
+ struct inode *inode = (struct inode *)page->mapping->host;
+
struct buffer_head *bh;
+ int block;
char *kaddr = (char*)page_address(page);
- bh = bread (inode->i_dev, inode->i_ino, inode->i_sb->s_blocksize);
- if (!buffer_uptodate(bh)) {
- ll_rw_block (READ, 1, &bh);
- wait_on_buffer(bh);
- }
+
+ block = udf_get_lb_pblock(inode->i_sb, UDF_I_LOCATION(inode), 0);
+ bh = bread (inode->i_dev, block, inode->i_sb->s_blocksize);
memcpy(bh->b_data + udf_file_entry_alloc_offset(inode) + offset,
kaddr + offset, to-offset);
mark_buffer_dirty(bh, 0);
brelse(bh);
- kunmap(page);
SetPageUptodate(page);
+ kunmap(page);
/* only one page here */
if (to > inode->i_size)
inode->i_size = to;
}
struct address_space_operations udf_adinicb_aops = {
- readpage: udf_adinicb_readpage,
- writepage: udf_adinicb_writepage,
- prepare_write: udf_adinicb_prepare_write,
- commit_write: udf_adinicb_commit_write
+ readpage: udf_adinicb_readpage,
+ writepage: udf_adinicb_writepage,
+ prepare_write: udf_adinicb_prepare_write,
+ commit_write: udf_adinicb_commit_write,
};
static ssize_t udf_file_write(struct file * file, const char * buf,
struct inode *inode = file->f_dentry->d_inode;
int err, pos;
- if (UDF_I_ALLOCTYPE(inode) == ICB_FLAG_AD_IN_ICB) {
+ if (UDF_I_ALLOCTYPE(inode) == ICB_FLAG_AD_IN_ICB)
+ {
if (file->f_flags & O_APPEND)
pos = inode->i_size;
else
pos = *ppos;
- if (inode->i_sb->s_blocksize <
- (udf_file_entry_alloc_offset(inode) + pos + count)) {
- udf_expand_file_adinicb(file, pos + count, &err);
- if (UDF_I_ALLOCTYPE(inode) == ICB_FLAG_AD_IN_ICB) {
+ if (inode->i_sb->s_blocksize < (udf_file_entry_alloc_offset(inode) +
+ pos + count))
+ {
+ udf_expand_file_adinicb(inode, pos + count, &err);
+ if (UDF_I_ALLOCTYPE(inode) == ICB_FLAG_AD_IN_ICB)
+ {
udf_debug("udf_expand_adinicb: err=%d\n", err);
return err;
}
- } else {
+ }
+ else
+ {
if (pos + count > inode->i_size)
UDF_I_LENALLOC(inode) = pos + count;
else
}
retval = generic_file_write(file, buf, count, ppos);
- if (retval > 0) {
+
+ if (retval > 0)
+ {
UDF_I_UCTIME(inode) = UDF_I_UMTIME(inode) = CURRENT_UTIME;
mark_inode_dirty(inode);
}
}
struct file_operations udf_file_operations = {
- llseek: udf_file_llseek,
- read: generic_file_read,
- write: udf_file_write,
- ioctl: udf_ioctl,
- mmap: generic_file_mmap,
- open: udf_open_file,
- release: udf_release_file,
- fsync: udf_sync_file,
+ read: generic_file_read,
+ ioctl: udf_ioctl,
+ open: udf_open_file,
+ mmap: generic_file_mmap,
+ write: udf_file_write,
+ release: udf_release_file,
+ fsync: udf_sync_file,
};
struct inode_operations udf_file_inode_operations = {
-#if CONFIG_UDF_RW == 1
- truncate: udf_truncate,
-#endif
+ truncate: udf_truncate,
};
#include "udfdecl.h"
#include <linux/fs.h>
#include <linux/locks.h>
+#include <linux/quotaops.h>
#include <linux/udf_fs.h>
#include "udf_i.h"
ino = inode->i_ino;
+ /*
+ * Note: we must free any quota before locking the superblock,
+ * as writing the quota to disk may need the lock as well.
+ */
+ DQUOT_FREE_INODE(sb, inode);
+ DQUOT_DROP(inode);
+
lock_super(sb);
is_directory = S_ISDIR(inode->i_mode);
inode->i_nlink = 1;
inode->i_dev = sb->s_dev;
inode->i_uid = current->fsuid;
- if (dir->i_mode & S_ISGID)
+ if (test_opt (sb, GRPID))
+ inode->i_gid = dir->i_gid;
+ else if (dir->i_mode & S_ISGID)
{
inode->i_gid = dir->i_gid;
if (S_ISDIR(mode))
}
else
inode->i_gid = current->fsgid;
+
UDF_I_LOCATION(inode).logicalBlockNum = block;
UDF_I_LOCATION(inode).partitionReferenceNum = UDF_I_LOCATION(dir).partitionReferenceNum;
inode->i_ino = udf_get_lb_pblock(sb, UDF_I_LOCATION(inode), 0);
inode->i_size = 0;
UDF_I_LENEATTR(inode) = 0;
UDF_I_LENALLOC(inode) = 0;
- UDF_I_ALLOCTYPE(inode) = ICB_FLAG_AD_IN_ICB;
+ if (UDF_QUERY_FLAG(inode->i_sb, UDF_FLAG_USE_EXTENDED_FE))
+ {
+ UDF_I_EXTENDED_FE(inode) = 1;
+ UDF_UPDATE_UDFREV(inode->i_sb, UDF_VERS_USE_EXTENDED_FE);
+ }
+ else
+ UDF_I_EXTENDED_FE(inode) = 0;
+ if (UDF_QUERY_FLAG(inode->i_sb, UDF_FLAG_USE_AD_IN_ICB))
+ UDF_I_ALLOCTYPE(inode) = ICB_FLAG_AD_IN_ICB;
+ else if (UDF_QUERY_FLAG(inode->i_sb, UDF_FLAG_USE_SHORT_AD))
+ UDF_I_ALLOCTYPE(inode) = ICB_FLAG_AD_SHORT;
+ else
+ UDF_I_ALLOCTYPE(inode) = ICB_FLAG_AD_LONG;
inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
UDF_I_UMTIME(inode) = UDF_I_UATIME(inode) = UDF_I_UCTIME(inode) = CURRENT_UTIME;
+ UDF_I_NEW_INODE(inode) = 1;
insert_inode_hash(inode);
mark_inode_dirty(inode);
+
unlock_super(sb);
+ if (DQUOT_ALLOC_INODE(sb, inode))
+ {
+ sb->dq_op->drop(inode);
+ inode->i_nlink = 0;
+ iput(inode);
+ *err = -EDQUOT;
+ return NULL;
+ }
+
*err = 0;
return inode;
}
*/
void udf_put_inode(struct inode * inode)
{
+ lock_kernel();
udf_discard_prealloc(inode);
+ write_inode_now(inode);
+ unlock_kernel();
}
/*
*/
void udf_delete_inode(struct inode * inode)
{
+ lock_kernel();
+
+ if (is_bad_inode(inode))
+ {
+ clear_inode(inode);
+ goto out;
+ }
+
inode->i_size = 0;
- if (inode->i_blocks)
- udf_truncate(inode);
+ udf_truncate(inode);
+ write_inode_now(inode);
udf_free_inode(inode);
+out:
+ unlock_kernel();
}
void udf_discard_prealloc(struct inode * inode)
udf_trunc(inode);
}
-static int udf_alloc_block(struct inode *inode, Uint16 partition,
- Uint32 goal, int *err)
-{
- int result = 0;
- wait_on_super(inode->i_sb);
-
- result = udf_new_block(inode, partition, goal, err);
-
- return result;
-}
-
static int udf_writepage(struct dentry *dentry, struct page *page)
{
- return block_write_full_page(page,udf_get_block);
+ return block_write_full_page(page, udf_get_block);
}
+
static int udf_readpage(struct dentry *dentry, struct page *page)
{
- return block_read_full_page(page,udf_get_block);
+ return block_read_full_page(page, udf_get_block);
}
+
static int udf_prepare_write(struct page *page, unsigned from, unsigned to)
{
- return block_prepare_write(page,from,to,udf_get_block);
+ return block_prepare_write(page, from, to, udf_get_block);
}
+
static int udf_bmap(struct address_space *mapping, long block)
{
return generic_block_bmap(mapping,block,udf_get_block);
}
-static struct address_space_operations udf_aops = {
- readpage: udf_readpage,
- writepage: udf_writepage,
- prepare_write: udf_prepare_write,
- commit_write: generic_commit_write,
- bmap: udf_bmap
+
+struct address_space_operations udf_aops = {
+ readpage: udf_readpage,
+ writepage: udf_writepage,
+ prepare_write: udf_prepare_write,
+ commit_write: generic_commit_write,
+ bmap: udf_bmap,
};
-void udf_expand_file_adinicb(struct file * filp, int newsize, int * err)
+void udf_expand_file_adinicb(struct inode * inode, int newsize, int * err)
{
- struct inode * inode = filp->f_dentry->d_inode;
struct buffer_head *bh = NULL;
struct page *page;
unsigned long kaddr = 0;
+ int block;
/* from now on we have normal address_space methods */
inode->i_data.a_ops = &udf_aops;
if (!UDF_I_LENALLOC(inode))
{
- UDF_I_ALLOCTYPE(inode) = ICB_FLAG_AD_LONG;
+ if (UDF_QUERY_FLAG(inode->i_sb, UDF_FLAG_USE_SHORT_AD))
+ UDF_I_ALLOCTYPE(inode) = ICB_FLAG_AD_SHORT;
+ else
+ UDF_I_ALLOCTYPE(inode) = ICB_FLAG_AD_LONG;
mark_inode_dirty(inode);
return;
}
- bh = udf_tread(inode->i_sb, inode->i_ino, inode->i_sb->s_blocksize);
+ block = udf_get_lb_pblock(inode->i_sb, UDF_I_LOCATION(inode), 0);
+ bh = udf_tread(inode->i_sb, block, inode->i_sb->s_blocksize);
if (!bh)
return;
- page = grab_cache_page(&inode->i_data, 0);
+ page = grab_cache_page(inode->i_mapping, 0);
if (!PageLocked(page))
- BUG();
+ PAGE_BUG(page);
if (!Page_Uptodate(page))
{
kaddr = kmap(page);
PAGE_CACHE_SIZE - UDF_I_LENALLOC(inode));
memcpy((char *)kaddr, bh->b_data + udf_file_entry_alloc_offset(inode),
UDF_I_LENALLOC(inode));
+ SetPageUptodate(page);
kunmap(page);
}
memset(bh->b_data + udf_file_entry_alloc_offset(inode),
0, UDF_I_LENALLOC(inode));
UDF_I_LENALLOC(inode) = 0;
- UDF_I_ALLOCTYPE(inode) = ICB_FLAG_AD_LONG;
+ if (UDF_QUERY_FLAG(inode->i_sb, UDF_FLAG_USE_SHORT_AD))
+ UDF_I_ALLOCTYPE(inode) = ICB_FLAG_AD_SHORT;
+ else
+ UDF_I_ALLOCTYPE(inode) = ICB_FLAG_AD_LONG;
inode->i_blocks = inode->i_sb->s_blocksize / 512;
mark_buffer_dirty(bh, 1);
udf_release_data(bh);
- inode->i_data.a_ops->writepage(filp->f_dentry, page);
+ inode->i_data.a_ops->writepage(NULL, page);
UnlockPage(page);
page_cache_release(page);
struct buffer_head * udf_expand_dir_adinicb(struct inode *inode, int *block, int *err)
{
- long_ad newad;
int newblock;
struct buffer_head *sbh = NULL, *dbh = NULL;
+ lb_addr bloc, eloc;
+ Uint32 elen, extoffset;
struct udf_fileident_bh sfibh, dfibh;
loff_t f_pos = udf_ext0_offset(inode) >> 2;
if (!inode->i_size)
{
- UDF_I_ALLOCTYPE(inode) = ICB_FLAG_AD_LONG;
+ if (UDF_QUERY_FLAG(inode->i_sb, UDF_FLAG_USE_SHORT_AD))
+ UDF_I_ALLOCTYPE(inode) = ICB_FLAG_AD_SHORT;
+ else
+ UDF_I_ALLOCTYPE(inode) = ICB_FLAG_AD_LONG;
mark_inode_dirty(inode);
return NULL;
}
/* alloc block, and copy data to it */
- *block = udf_alloc_block(inode,
+ *block = udf_new_block(inode,
UDF_I_LOCATION(inode).partitionReferenceNum,
UDF_I_LOCATION(inode).logicalBlockNum, err);
dfibh.sbh = dfibh.ebh = dbh;
while ( (f_pos < size) )
{
- sfi = udf_fileident_read(inode, &f_pos, &sfibh, &cfi, NULL, NULL, NULL, NULL);
+ sfi = udf_fileident_read(inode, &f_pos, &sfibh, &cfi, NULL, NULL, NULL, NULL, NULL, NULL);
if (!sfi)
{
udf_release_data(sbh);
memset(sbh->b_data + udf_file_entry_alloc_offset(inode),
0, UDF_I_LENALLOC(inode));
- memset(&newad, 0x00, sizeof(long_ad));
- newad.extLength = inode->i_size;
- newad.extLocation.logicalBlockNum = *block;
- newad.extLocation.partitionReferenceNum = UDF_I_LOCATION(inode).partitionReferenceNum;
+ UDF_I_LENALLOC(inode) = 0;
+ if (UDF_QUERY_FLAG(inode->i_sb, UDF_FLAG_USE_SHORT_AD))
+ UDF_I_ALLOCTYPE(inode) = ICB_FLAG_AD_SHORT;
+ else
+ UDF_I_ALLOCTYPE(inode) = ICB_FLAG_AD_LONG;
+ bloc = UDF_I_LOCATION(inode);
+ eloc.logicalBlockNum = *block;
+ eloc.partitionReferenceNum = UDF_I_LOCATION(inode).partitionReferenceNum;
+ elen = inode->i_size;
+ extoffset = udf_file_entry_alloc_offset(inode);
+ udf_add_aext(inode, &bloc, &extoffset, eloc, elen, &sbh, 0);
/* UniqueID stuff */
- memcpy(sbh->b_data + udf_file_entry_alloc_offset(inode),
- &newad, sizeof(newad));
-
- UDF_I_LENALLOC(inode) = sizeof(newad);
- UDF_I_ALLOCTYPE(inode) = ICB_FLAG_AD_LONG;
inode->i_blocks = inode->i_sb->s_blocksize / 512;
mark_buffer_dirty(sbh, 1);
udf_release_data(sbh);
if (etype == -1)
{
endnum = startnum = ((count > 1) ? 1 : count);
+ if (laarr[c].extLength & (inode->i_sb->s_blocksize - 1))
+ {
+ laarr[c].extLength =
+ (laarr[c].extLength & UDF_EXTENT_FLAG_MASK) |
+ (((laarr[c].extLength & UDF_EXTENT_LENGTH_MASK) +
+ inode->i_sb->s_blocksize - 1) &
+ ~(inode->i_sb->s_blocksize - 1));
+ }
c = !c;
laarr[c].extLength = (EXTENT_NOT_RECORDED_NOT_ALLOCATED << 30) |
((offset + 1) << inode->i_sb->s_blocksize_bits);
goal = UDF_I_LOCATION(inode).logicalBlockNum + 1;
}
- if (!(newblocknum = udf_alloc_block(inode,
+ if (!(newblocknum = udf_new_block(inode,
UDF_I_LOCATION(inode).partitionReferenceNum, goal, err)))
{
udf_release_data(pbh);
{
int start, length = 0, currlength = 0, i;
- if (*endnum == (c+1) && !lastblock)
+ if (*endnum >= (c+1) && !lastblock)
return;
if ((laarr[c+1].extLength >> 30) == EXTENT_NOT_RECORDED_ALLOCATED)
int next = laarr[start].extLocation.logicalBlockNum +
(((laarr[start].extLength & UDF_EXTENT_LENGTH_MASK) +
inode->i_sb->s_blocksize - 1) >> inode->i_sb->s_blocksize_bits);
- int numalloc = udf_alloc_blocks(inode,
+ int numalloc = udf_prealloc_blocks(inode,
laarr[start].extLocation.partitionReferenceNum,
next, (UDF_DEFAULT_PREALLOC_BLOCKS > length ? length :
UDF_DEFAULT_PREALLOC_BLOCKS) - currlength);
*/
inode->i_blksize = PAGE_SIZE;
- inode->i_version = 1;
bh = udf_read_ptagged(inode->i_sb, UDF_I_LOCATION(inode), 0, &ident);
long convtime_usec;
int offset, alen;
+ inode->i_version = ++event;
+ UDF_I_NEW_INODE(inode) = 0;
+
fe = (struct FileEntry *)bh->b_data;
efe = (struct ExtendedFileEntry *)bh->b_data;
void udf_write_inode(struct inode * inode)
{
+ lock_kernel();
udf_update_inode(inode, 0);
+ unlock_kernel();
}
int udf_sync_inode(struct inode * inode)
}
fe = (struct FileEntry *)bh->b_data;
efe = (struct ExtendedFileEntry *)bh->b_data;
+ if (UDF_I_NEW_INODE(inode) == 1)
+ {
+ if (UDF_I_EXTENDED_FE(inode) == 0)
+			memset(bh->b_data, 0x00, sizeof(struct FileEntry));
+ else
+ memset(bh->b_data, 0x00, sizeof(struct ExtendedFileEntry));
+ memset(bh->b_data + udf_file_entry_alloc_offset(inode) +
+			UDF_I_LENALLOC(inode), 0x00, inode->i_sb->s_blocksize -
+ udf_file_entry_alloc_offset(inode) - UDF_I_LENALLOC(inode));
+ UDF_I_NEW_INODE(inode) = 0;
+ }
if (inode->i_uid != UDF_SB(inode->i_sb)->s_uid)
fe->uid = cpu_to_le32(inode->i_uid);
else
fe->fileLinkCount = cpu_to_le16(inode->i_nlink);
-
fe->informationLength = cpu_to_le64(inode->i_size);
if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode))
efe->descTag.tagIdent = le16_to_cpu(TID_EXTENDED_FILE_ENTRY);
crclen = sizeof(struct ExtendedFileEntry);
}
- fe->icbTag.strategyType = UDF_I_STRAT4096(inode) ? cpu_to_le16(4096) :
- cpu_to_le16(4);
+ if (UDF_I_STRAT4096(inode))
+ {
+ fe->icbTag.strategyType = cpu_to_le16(4096);
+ fe->icbTag.strategyParameter = cpu_to_le16(1);
+ fe->icbTag.numEntries = cpu_to_le16(2);
+ }
+ else
+ {
+ fe->icbTag.strategyType = cpu_to_le16(4);
+ fe->icbTag.numEntries = cpu_to_le16(1);
+ }
if (S_ISDIR(inode->i_mode))
fe->icbTag.fileType = FILE_TYPE_DIRECTORY;
char *sptr, *dptr;
struct buffer_head *nbh;
int err, loffset;
- Uint32 lblock = bloc->logicalBlockNum;
- Uint16 lpart = bloc->partitionReferenceNum;
+ lb_addr obloc = *bloc;
if (!(bloc->logicalBlockNum = udf_new_block(inode,
- lpart, lblock, &err)))
+ obloc.partitionReferenceNum, obloc.logicalBlockNum, &err)))
{
return -1;
}
return -1;
}
aed = (struct AllocExtDesc *)(nbh->b_data);
- aed->previousAllocExtLocation = cpu_to_le32(lblock);
+ aed->previousAllocExtLocation = cpu_to_le32(obloc.logicalBlockNum);
if (*extoffset + adsize > inode->i_sb->s_blocksize)
{
loffset = *extoffset;
sptr = (*bh)->b_data + *extoffset;
*extoffset = sizeof(struct AllocExtDesc);
- if (UDF_I_LOCATION(inode).logicalBlockNum == lblock)
- UDF_I_LENALLOC(inode) += adsize;
- else
+ if (memcmp(&UDF_I_LOCATION(inode), &obloc, sizeof(lb_addr)))
{
aed = (struct AllocExtDesc *)(*bh)->b_data;
aed->lengthAllocDescs =
cpu_to_le32(le32_to_cpu(aed->lengthAllocDescs) + adsize);
}
+ else
+ {
+ UDF_I_LENALLOC(inode) += adsize;
+ mark_inode_dirty(inode);
+ }
}
udf_new_tag(nbh->b_data, TID_ALLOC_EXTENT_DESC, 2, 1,
bloc->logicalBlockNum, sizeof(tag));
short_ad *sad = NULL;
long_ad *lad = NULL;
- if (!(*bh))
+ if (!(*bh))
{
if (!(*bh = udf_tread(inode->i_sb,
udf_get_lb_pblock(inode->i_sb, bloc, 0),
udf_update_tag((*bh)->b_data,
le32_to_cpu(aed->lengthAllocDescs) + sizeof(struct AllocExtDesc));
}
+ else
+ mark_inode_dirty(inode);
mark_buffer_dirty(*bh, 1);
}
case ICB_FLAG_AD_IN_ICB:
{
- *bloc = *eloc = UDF_I_LOCATION(inode);
- *elen = UDF_I_LENALLOC(inode);
- *extoffset = udf_file_entry_alloc_offset(inode);
+ if (UDF_I_LENALLOC(inode) == 0)
+ return -1;
etype = EXTENT_RECORDED_ALLOCATED;
+ *eloc = UDF_I_LOCATION(inode);
+ *elen = UDF_I_LENALLOC(inode);
break;
}
default:
if (*elen)
return etype;
- udf_debug("Empty Extent, inode=%ld, alloctype=%d, elen=%d, etype=%d, extoffset=%d\n",
- inode->i_ino, UDF_I_ALLOCTYPE(inode), *elen, etype, *extoffset);
+ udf_debug("Empty Extent, inode=%ld, alloctype=%d, eloc=%d, elen=%d, etype=%d, extoffset=%d\n",
+ inode->i_ino, UDF_I_ALLOCTYPE(inode), eloc->logicalBlockNum, *elen, etype, *extoffset);
if (UDF_I_ALLOCTYPE(inode) == ICB_FLAG_AD_SHORT)
*extoffset -= sizeof(short_ad);
else if (UDF_I_ALLOCTYPE(inode) == ICB_FLAG_AD_LONG)
int inode_bmap(struct inode *inode, int block, lb_addr *bloc, Uint32 *extoffset,
lb_addr *eloc, Uint32 *elen, Uint32 *offset, struct buffer_head **bh)
{
- int etype, lbcount = 0, b_off;
+ int etype, lbcount = 0;
if (block < 0)
{
return -1;
}
- *extoffset = udf_file_entry_alloc_offset(inode);
+ *extoffset = 0;
*elen = 0;
- b_off = block << inode->i_sb->s_blocksize_bits;
*bloc = UDF_I_LOCATION(inode);
do
{
- lbcount += *elen;
-
if ((etype = udf_next_aext(inode, bloc, extoffset, eloc, elen, bh, 1)) == -1)
{
- *offset = (b_off - lbcount) >> inode->i_sb->s_blocksize_bits;
+ *offset = block - lbcount;
return -1;
}
- } while (lbcount + *elen <= b_off);
+ lbcount += ((*elen + inode->i_sb->s_blocksize - 1) >>
+ inode->i_sb->s_blocksize_bits);
+ } while (lbcount <= block);
- *offset = (b_off - lbcount) >> inode->i_sb->s_blocksize_bits;
+ *offset = block + ((*elen + inode->i_sb->s_blocksize - 1) >>
+ inode->i_sb->s_blocksize_bits) - lbcount;
return etype;
}
if (bh)
udf_release_data(bh);
- if (UDF_SB(inode->i_sb)->s_flags & UDF_FLAG_VARCONV)
+ if (UDF_QUERY_FLAG(inode->i_sb, UDF_FLAG_VARCONV))
return udf_fixed_to_variable(ret);
else
return ret;
}
unsigned int
-udf_get_last_block(struct super_block *sb, int *flags)
+udf_get_last_block(struct super_block *sb)
{
extern int *blksize_size[];
kdev_t dev = sb->s_dev;
struct block_device *bdev = sb->s_bdev;
int ret;
- unsigned long lblock;
- unsigned int hbsize = get_hardblocksize(dev);
- unsigned int secsize = 512;
- unsigned int mult = 0;
- unsigned int div = 0;
+ unsigned long lblock = 0;
- if (!hbsize)
- hbsize = blksize_size[MAJOR(dev)][MINOR(dev)];
+ ret = ioctl_by_bdev(bdev, CDROM_LAST_WRITTEN, (unsigned long) &lblock);
- if (secsize > hbsize)
- mult = secsize / hbsize;
- else if (hbsize > secsize)
- div = hbsize / secsize;
+ if (ret) /* Hard Disk */
+ {
+ unsigned int hbsize = get_hardblocksize(dev);
+ unsigned int blocksize = sb->s_blocksize;
+ unsigned int mult = 0;
+ unsigned int div = 0;
- lblock = 0;
- ret = ioctl_by_bdev(bdev, BLKGETSIZE, (unsigned long) &lblock);
+ if (!hbsize)
+ hbsize = blksize_size[MAJOR(dev)][MINOR(dev)];
- if (!ret && lblock != 0x7FFFFFFF) /* Hard Disk */
- {
- if (mult)
- lblock *= mult;
- else if (div)
- lblock /= div;
- }
- else /* CDROM */
- {
- ret = ioctl_by_bdev(bdev, CDROM_LAST_WRITTEN, (unsigned long) &lblock);
+ if (hbsize > blocksize)
+ mult = hbsize / blocksize;
+ else if (blocksize > hbsize)
+ div = blocksize / hbsize;
+
+ ret = ioctl_by_bdev(bdev, BLKGETSIZE, (unsigned long) &lblock);
+
+ if (!ret && lblock != 0x7FFFFFFF)
+ {
+ if (mult)
+ lblock *= mult;
+ else if (div)
+ lblock /= div;
+ }
}
if (!ret && lblock)
extern struct buffer_head *
udf_tread(struct super_block *sb, int block, int size)
{
- if (UDF_SB(sb)->s_flags & UDF_FLAG_VARCONV)
+ if (UDF_QUERY_FLAG(sb, UDF_FLAG_VARCONV))
return bread(sb->s_dev, udf_fixed_to_variable(block), size);
else
return bread(sb->s_dev, block, size);
#include "udfdecl.h"
-#if defined(__linux__) && defined(__KERNEL__)
-#include <linux/config.h>
-#include <linux/version.h>
#include "udf_i.h"
#include "udf_sb.h"
#include <linux/string.h>
#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/malloc.h>
+#include <linux/quotaops.h>
#include <linux/udf_fs.h>
-#endif
static inline int udf_match(int len, const char * const name, struct qstr *qs)
{
if (!(fibh->sbh = fibh->ebh = udf_tread(dir->i_sb, block, dir->i_sb->s_blocksize)))
{
- udf_debug("udf_tread failed: block=%d\n", block);
udf_release_data(bh);
return NULL;
}
while ( (f_pos < size) )
{
- fi = udf_fileident_read(dir, &f_pos, fibh, cfi, &bloc, &extoffset, &offset, &bh);
+ fi = udf_fileident_read(dir, &f_pos, fibh, cfi, &bloc, &extoffset, &eloc, &elen, &offset, &bh);
if (!fi)
{
if ( (cfi->fileCharacteristics & FILE_DELETED) != 0 )
{
- if ( !IS_UNDELETE(dir->i_sb) )
+ if ( !UDF_QUERY_FLAG(dir->i_sb, UDF_FLAG_UNDELETE) )
continue;
}
if ( (cfi->fileCharacteristics & FILE_HIDDEN) != 0 )
{
- if ( !IS_UNHIDE(dir->i_sb) )
+ if ( !UDF_QUERY_FLAG(dir->i_sb, UDF_FLAG_UNHIDE) )
continue;
}
return NULL;
sb = dir->i_sb;
- if (!dentry->d_name.len)
- return NULL;
-
- if (dir->i_size == 0)
+ if (dentry->d_name.len)
{
- *err = -ENOENT;
- return NULL;
- }
+ if ( !(udf_char_to_ustr(&unifilename, dentry->d_name.name, dentry->d_name.len)) )
+ {
+ *err = -ENAMETOOLONG;
+ return NULL;
+ }
- if ( !(udf_char_to_ustr(&unifilename, dentry->d_name.name, dentry->d_name.len)) )
+ if ( !(namelen = udf_UTF8toCS0(name, &unifilename, UDF_NAME_LEN)) )
+ {
+ *err = -ENAMETOOLONG;
+ return NULL;
+ }
+ }
+ else if (dir->i_size != 0)
{
- *err = -ENAMETOOLONG;
+ *err = -ENOENT;
return NULL;
}
-
- if ( !(namelen = udf_UTF8toCS0(name, &unifilename, UDF_NAME_LEN)) )
- return 0;
+ else /* .. */
+ namelen = 0;
nfidlen = (sizeof(struct FileIdentDesc) + 0 + namelen + 3) & ~3;
}
else
offset = 0;
- }
- else
- {
- udf_release_data(bh);
- return NULL;
- }
- if (!(fibh->sbh = fibh->ebh = udf_tread(dir->i_sb, block, dir->i_sb->s_blocksize)))
- return NULL;
-
- block = UDF_I_LOCATION(dir).logicalBlockNum;
-
- while ( (f_pos < size) )
- {
- fi = udf_fileident_read(dir, &f_pos, fibh, cfi, &bloc, &extoffset, &offset, &bh);
-
- if (!fi)
+ if (!(fibh->sbh = fibh->ebh = udf_tread(dir->i_sb, block, dir->i_sb->s_blocksize)))
{
- if (fibh->sbh != fibh->ebh)
- udf_release_data(fibh->ebh);
- udf_release_data(fibh->sbh);
udf_release_data(bh);
return NULL;
}
-
- liu = le16_to_cpu(cfi->lengthOfImpUse);
- lfi = cfi->lengthFileIdent;
-
- if (fibh->sbh == fibh->ebh)
- nameptr = fi->fileIdent + liu;
- else
+
+ block = UDF_I_LOCATION(dir).logicalBlockNum;
+
+ while ( (f_pos < size) )
{
- int poffset; /* Unpaded ending offset */
-
- poffset = fibh->soffset + sizeof(struct FileIdentDesc) + liu + lfi;
-
- if (poffset >= lfi)
- nameptr = (char *)(fibh->ebh->b_data + poffset - lfi);
- else
+ fi = udf_fileident_read(dir, &f_pos, fibh, cfi, &bloc, &extoffset, &eloc, &elen, &offset, &bh);
+
+ if (!fi)
{
- nameptr = fname;
- memcpy(nameptr, fi->fileIdent + liu, lfi - poffset);
- memcpy(nameptr + lfi - poffset, fibh->ebh->b_data, poffset);
+ if (fibh->sbh != fibh->ebh)
+ udf_release_data(fibh->ebh);
+ udf_release_data(fibh->sbh);
+ udf_release_data(bh);
+ return NULL;
}
- }
-
- if ( (cfi->fileCharacteristics & FILE_DELETED) != 0 )
- {
- if (((sizeof(struct FileIdentDesc) + liu + lfi + 3) & ~3) == nfidlen)
+
+ liu = le16_to_cpu(cfi->lengthOfImpUse);
+ lfi = cfi->lengthFileIdent;
+
+ if (fibh->sbh == fibh->ebh)
+ nameptr = fi->fileIdent + liu;
+ else
{
- udf_release_data(bh);
- cfi->descTag.tagSerialNum = cpu_to_le16(1);
- cfi->fileVersionNum = cpu_to_le16(1);
- cfi->fileCharacteristics = 0;
- cfi->lengthFileIdent = namelen;
- cfi->lengthOfImpUse = cpu_to_le16(0);
- if (!udf_write_fi(cfi, fi, fibh, NULL, name))
- return fi;
+			int poffset;	/* Unpadded ending offset */
+
+ poffset = fibh->soffset + sizeof(struct FileIdentDesc) + liu + lfi;
+
+ if (poffset >= lfi)
+ nameptr = (char *)(fibh->ebh->b_data + poffset - lfi);
else
+ {
+ nameptr = fname;
+ memcpy(nameptr, fi->fileIdent + liu, lfi - poffset);
+ memcpy(nameptr + lfi - poffset, fibh->ebh->b_data, poffset);
+ }
+ }
+
+ if ( (cfi->fileCharacteristics & FILE_DELETED) != 0 )
+ {
+ if (((sizeof(struct FileIdentDesc) + liu + lfi + 3) & ~3) == nfidlen)
+ {
+ udf_release_data(bh);
+ cfi->descTag.tagSerialNum = cpu_to_le16(1);
+ cfi->fileVersionNum = cpu_to_le16(1);
+ cfi->fileCharacteristics = 0;
+ cfi->lengthFileIdent = namelen;
+ cfi->lengthOfImpUse = cpu_to_le16(0);
+ if (!udf_write_fi(cfi, fi, fibh, NULL, name))
+ return fi;
+ else
+ return NULL;
+ }
+ }
+
+ if (!lfi)
+ continue;
+
+ if ((flen = udf_get_filename(nameptr, fname, lfi)))
+ {
+ if (udf_match(flen, fname, &(dentry->d_name)))
+ {
+ if (fibh->sbh != fibh->ebh)
+ udf_release_data(fibh->ebh);
+ udf_release_data(fibh->sbh);
+ udf_release_data(bh);
+ *err = -EEXIST;
return NULL;
+ }
}
}
-
- if (!lfi)
- continue;
-
- if ((flen = udf_get_filename(nameptr, fname, lfi)))
+ }
+ else
+ {
+ block = udf_get_lb_pblock(dir->i_sb, UDF_I_LOCATION(dir), 0);
+ if (UDF_I_ALLOCTYPE(dir) == ICB_FLAG_AD_IN_ICB)
{
- if (udf_match(flen, fname, &(dentry->d_name)))
- {
- if (fibh->sbh != fibh->ebh)
- udf_release_data(fibh->ebh);
- udf_release_data(fibh->sbh);
- udf_release_data(bh);
- *err = -EEXIST;
- return NULL;
- }
+ fibh->sbh = fibh->ebh = udf_tread(dir->i_sb, block, dir->i_sb->s_blocksize);
+ fibh->soffset = fibh->eoffset = udf_file_entry_alloc_offset(dir);
+ }
+ else
+ {
+ fibh->sbh = fibh->ebh = NULL;
+ fibh->soffset = fibh->eoffset = sb->s_blocksize;
}
}
if (!(fibh->sbh = fibh->ebh = udf_expand_dir_adinicb(dir, &block, err)))
return NULL;
bloc = UDF_I_LOCATION(dir);
+ eloc.logicalBlockNum = block;
+ eloc.partitionReferenceNum = UDF_I_LOCATION(dir).partitionReferenceNum;
+ elen = dir->i_sb->s_blocksize;
extoffset = udf_file_entry_alloc_offset(dir);
- }
- else
- {
if (UDF_I_ALLOCTYPE(dir) == ICB_FLAG_AD_SHORT)
- extoffset -= sizeof(short_ad);
+ extoffset += sizeof(short_ad);
else if (UDF_I_ALLOCTYPE(dir) == ICB_FLAG_AD_LONG)
- extoffset -= sizeof(long_ad);
+ extoffset += sizeof(long_ad);
}
if (sb->s_blocksize - fibh->eoffset >= nfidlen)
}
if (UDF_I_ALLOCTYPE(dir) != ICB_FLAG_AD_IN_ICB)
- {
- Uint32 lextoffset = extoffset;
- if (udf_next_aext(dir, &bloc, &extoffset, &eloc, &elen, &bh, 1) !=
- EXTENT_RECORDED_ALLOCATED)
- {
- udf_release_data(bh);
- udf_release_data(fibh->sbh);
- return NULL;
- }
- else
- {
- if (elen & (sb->s_blocksize - 1))
- elen += nfidlen;
- block = eloc.logicalBlockNum + ((elen - 1) >>
- dir->i_sb->s_blocksize_bits);
- elen = (EXTENT_RECORDED_ALLOCATED << 30) | elen;
- udf_write_aext(dir, bloc, &lextoffset, eloc, elen, &bh, 1);
- }
- }
+ block = eloc.logicalBlockNum + ((elen - 1) >>
+ dir->i_sb->s_blocksize_bits);
else
block = UDF_I_LOCATION(dir).logicalBlockNum;
}
else
{
- Uint32 lextoffset = extoffset;
-
fibh->soffset = fibh->eoffset - sb->s_blocksize;
fibh->eoffset += nfidlen - sb->s_blocksize;
if (fibh->sbh != fibh->ebh)
fibh->sbh = fibh->ebh;
}
- if (udf_next_aext(dir, &bloc, &extoffset, &eloc, &elen, &bh, 1) !=
- EXTENT_RECORDED_ALLOCATED)
- {
- udf_release_data(bh);
- udf_release_data(fibh->sbh);
- return NULL;
- }
- else
- {
- elen = ((elen + sb->s_blocksize - 1) & ~(sb->s_blocksize - 1));
- block = eloc.logicalBlockNum +
- ((elen - 1) >> dir->i_sb->s_blocksize_bits);
- elen = (EXTENT_RECORDED_ALLOCATED << 30) | elen;
- udf_write_aext(dir, bloc, &lextoffset, eloc, elen, &bh, 0);
- }
+ block = eloc.logicalBlockNum + ((elen - 1) >>
+ dir->i_sb->s_blocksize_bits);
*err = -ENOSPC;
if (!(fibh->ebh = udf_bread(dir, f_pos >> (dir->i_sb->s_blocksize_bits - 2), 1, err)))
udf_release_data(fibh->sbh);
return NULL;
}
+
if (!(fibh->soffset))
{
- if (udf_next_aext(dir, &bloc, &lextoffset, &eloc, &elen, &bh, 1) ==
+ if (udf_next_aext(dir, &bloc, &extoffset, &eloc, &elen, &bh, 1) ==
EXTENT_RECORDED_ALLOCATED)
{
- if (block == (eloc.logicalBlockNum +
- ((elen - 1) >> dir->i_sb->s_blocksize_bits)))
- {
- if (udf_next_aext(dir, &bloc, &lextoffset, &eloc, &elen, &bh, 1) !=
- EXTENT_RECORDED_ALLOCATED)
- {
- udf_release_data(bh);
- udf_release_data(fibh->sbh);
- udf_release_data(fibh->ebh);
- udf_debug("next extent not recorded and allocated\n");
- return NULL;
- }
- }
+ block = eloc.logicalBlockNum + ((elen - 1) >>
+ dir->i_sb->s_blocksize_bits);
}
else
- {
- udf_release_data(bh);
- udf_release_data(fibh->sbh);
- udf_release_data(fibh->ebh);
- udf_debug("next extent not recorded and allocated\n");
- return NULL;
- }
- block = eloc.logicalBlockNum + ((elen - 1) >>
- dir->i_sb->s_blocksize_bits);
- }
+			block++;
- fi = (struct FileIdentDesc *)(fibh->sbh->b_data + sb->s_blocksize + fibh->soffset);
+ udf_release_data(fibh->sbh);
+ fibh->sbh = fibh->ebh;
+ fi = (struct FileIdentDesc *)(fibh->sbh->b_data);
+ }
+ else
+ {
+ fi = (struct FileIdentDesc *)
+ (fibh->sbh->b_data + sb->s_blocksize + fibh->soffset);
+ }
}
memset(cfi, 0, sizeof(struct FileIdentDesc));
if (!inode)
return err;
- inode->i_data.a_ops = &udf_adinicb_aops;
+ if (UDF_I_ALLOCTYPE(inode) == ICB_FLAG_AD_IN_ICB)
+ inode->i_data.a_ops = &udf_adinicb_aops;
+ else
+ inode->i_data.a_ops = &udf_aops;
inode->i_op = &udf_file_inode_operations;
inode->i_fop = &udf_file_operations;
inode->i_mode = mode;
if (!(fi = udf_add_entry(dir, dentry, &fibh, &cfi, &err)))
{
- udf_debug("udf_add_entry failure!\n");
inode->i_nlink --;
mark_inode_dirty(inode);
iput(inode);
init_special_inode(inode, mode, rdev);
if (!(fi = udf_add_entry(dir, dentry, &fibh, &cfi, &err)))
{
- udf_debug("udf_add_entry failure!\n");
inode->i_nlink --;
mark_inode_dirty(inode);
iput(inode);
struct inode * inode;
struct udf_fileident_bh fibh;
int err;
- struct FileEntry *fe;
struct FileIdentDesc cfi, *fi;
- Uint32 loc;
+ struct dentry parent;
err = -EMLINK;
if (dir->i_nlink >= (256<<sizeof(dir->i_nlink))-1)
inode->i_op = &udf_dir_inode_operations;
inode->i_fop = &udf_dir_operations;
- inode->i_size = (sizeof(struct FileIdentDesc) + 3) & ~3;
- UDF_I_LENALLOC(inode) = inode->i_size;
- loc = UDF_I_LOCATION(inode).logicalBlockNum;
- fibh.sbh = udf_tread(inode->i_sb, inode->i_ino, inode->i_sb->s_blocksize);
-
- if (!fibh.sbh)
+ parent.d_name.len = 0;
+ parent.d_name.name = NULL;
+ inode->i_size = 0;
+ if (!(fi = udf_add_entry(inode, &parent, &fibh, &cfi, &err)))
{
inode->i_nlink--;
mark_inode_dirty(inode);
goto out;
}
inode->i_nlink = 2;
- fe = (struct FileEntry *)fibh.sbh->b_data;
- fi = (struct FileIdentDesc *)&(fe->extendedAttr[UDF_I_LENEATTR(inode)]);
- udf_new_tag((char *)&cfi, TID_FILE_IDENT_DESC, 2, 1, loc,
- sizeof(struct FileIdentDesc));
- cfi.fileVersionNum = cpu_to_le16(1);
- cfi.fileCharacteristics = FILE_DIRECTORY | FILE_PARENT;
- cfi.lengthFileIdent = 0;
cfi.icb.extLength = cpu_to_le32(inode->i_sb->s_blocksize);
cfi.icb.extLocation = cpu_to_lelb(UDF_I_LOCATION(dir));
*(Uint32 *)((struct ADImpUse *)cfi.icb.impUse)->impUse =
cpu_to_le32(UDF_I_UNIQUE(dir) & 0x00000000FFFFFFFFUL);
- cfi.lengthOfImpUse = cpu_to_le16(0);
- fibh.ebh = fibh.sbh;
- fibh.soffset = sizeof(struct FileEntry);
- fibh.eoffset = sizeof(struct FileEntry) + inode->i_size;
+ cfi.fileCharacteristics = FILE_DIRECTORY | FILE_PARENT;
udf_write_fi(&cfi, fi, &fibh, NULL, NULL);
udf_release_data(fibh.sbh);
inode->i_mode = S_IFDIR | (mode & (S_IRWXUGO|S_ISVTX) & ~current->fs->umask);
if (!(fi = udf_add_entry(dir, dentry, &fibh, &cfi, &err)))
{
- udf_debug("udf_add_entry failure!\n");
inode->i_nlink = 0;
mark_inode_dirty(inode);
iput(inode);
while ( (f_pos < size) )
{
- fi = udf_fileident_read(dir, &f_pos, &fibh, &cfi, &bloc, &extoffset, &offset, &bh);
+ fi = udf_fileident_read(dir, &f_pos, &fibh, &cfi, &bloc, &extoffset, &eloc, &elen, &offset, &bh);
if (!fi)
{
goto out;
inode = dentry->d_inode;
+ DQUOT_INIT(inode);
retval = -EIO;
if (udf_get_lb_pblock(dir->i_sb, lelb_to_cpu(cfi.icb.extLocation), 0) != inode->i_ino)
goto out;
inode = dentry->d_inode;
+ DQUOT_INIT(inode);
retval = -EIO;
{
struct inode * inode;
struct PathComponent *pc;
+ char *compstart;
struct udf_fileident_bh fibh;
struct buffer_head *bh = NULL;
int eoffset, elen = 0;
struct FileIdentDesc cfi;
char *ea;
int err;
+ int block;
if (!(inode = udf_new_inode(dir, S_IFLNK, &err)))
goto out;
inode->i_data.a_ops = &udf_symlink_aops;
inode->i_op = &page_symlink_inode_operations;
- bh = udf_tread(inode->i_sb, inode->i_ino, inode->i_sb->s_blocksize);
- ea = bh->b_data + udf_file_entry_alloc_offset(inode);
+ if (UDF_I_ALLOCTYPE(inode) == ICB_FLAG_AD_IN_ICB)
+ {
+ struct buffer_head *bh = NULL;
+ lb_addr bloc, eloc;
+ Uint32 elen, extoffset;
+
+ block = udf_new_block(inode,
+ UDF_I_LOCATION(inode).partitionReferenceNum,
+ UDF_I_LOCATION(inode).logicalBlockNum, &err);
+ if (!block)
+ goto out_no_entry;
+ bloc = UDF_I_LOCATION(inode);
+ eloc.logicalBlockNum = block;
+ eloc.partitionReferenceNum = UDF_I_LOCATION(inode).partitionReferenceNum;
+ elen = inode->i_sb->s_blocksize;
+ extoffset = udf_file_entry_alloc_offset(inode);
+ udf_add_aext(inode, &bloc, &extoffset, eloc, elen, &bh, 0);
+ udf_release_data(bh);
+
+ inode->i_blocks = inode->i_sb->s_blocksize / 512;
+ block = udf_get_pblock(inode->i_sb, block,
+ UDF_I_LOCATION(inode).partitionReferenceNum, 0);
+ }
+ else
+ block = udf_get_lb_pblock(inode->i_sb, UDF_I_LOCATION(inode), 0);
+
+ bh = udf_tread(inode->i_sb, block, inode->i_sb->s_blocksize);
+ ea = bh->b_data + udf_ext0_offset(inode);
eoffset = inode->i_sb->s_blocksize - (ea - bh->b_data);
pc = (struct PathComponent *)ea;
elen += sizeof(struct PathComponent);
}
- while (*symname && eoffset > elen + sizeof(struct PathComponent))
+ err = -ENAMETOOLONG;
+
+ while (*symname)
{
- char *compstart;
+ if (elen + sizeof(struct PathComponent) > eoffset)
+ goto out_no_entry;
+
pc = (struct PathComponent *)(ea + elen);
compstart = (char *)symname;
if (pc->componentType == 5)
{
if (elen + sizeof(struct PathComponent) + symname - compstart > eoffset)
- pc->lengthComponentIdent = eoffset - elen - sizeof(struct PathComponent);
+ goto out_no_entry;
else
pc->lengthComponentIdent = symname - compstart;
}
udf_release_data(bh);
- UDF_I_LENALLOC(inode) = inode->i_size = elen;
+ inode->i_size = elen;
+ if (UDF_I_ALLOCTYPE(inode) == ICB_FLAG_AD_IN_ICB)
+ UDF_I_LENALLOC(inode) = inode->i_size;
mark_inode_dirty(inode);
if (!(fi = udf_add_entry(dir, dentry, &fibh, &cfi, &err)))
- goto out;
+ goto out_no_entry;
cfi.icb.extLength = cpu_to_le32(inode->i_sb->s_blocksize);
cfi.icb.extLocation = cpu_to_lelb(UDF_I_LOCATION(inode));
- if (UDF_SB_LVIDBH(inode->i_sb))
- {
- struct LogicalVolHeaderDesc *lvhd;
- Uint64 uniqueID;
- lvhd = (struct LogicalVolHeaderDesc *)(UDF_SB_LVID(inode->i_sb)->logicalVolContentsUse);
- uniqueID = le64_to_cpu(lvhd->uniqueID);
+ if (UDF_SB_LVIDBH(inode->i_sb))
+ {
+ struct LogicalVolHeaderDesc *lvhd;
+ Uint64 uniqueID;
+ lvhd = (struct LogicalVolHeaderDesc *)(UDF_SB_LVID(inode->i_sb)->logicalVolContentsUse);
+ uniqueID = le64_to_cpu(lvhd->uniqueID);
*(Uint32 *)((struct ADImpUse *)cfi.icb.impUse)->impUse =
- le32_to_cpu(uniqueID & 0x00000000FFFFFFFFUL);
- if (!(++uniqueID & 0x00000000FFFFFFFFUL))
- uniqueID += 16;
- lvhd->uniqueID = cpu_to_le64(uniqueID);
- mark_buffer_dirty(UDF_SB_LVIDBH(inode->i_sb), 1);
+ cpu_to_le32(uniqueID & 0x00000000FFFFFFFFUL);
+ if (!(++uniqueID & 0x00000000FFFFFFFFUL))
+ uniqueID += 16;
+ lvhd->uniqueID = cpu_to_le64(uniqueID);
+ mark_buffer_dirty(UDF_SB_LVIDBH(inode->i_sb), 1);
}
udf_write_fi(&cfi, fi, &fibh, NULL, NULL);
if (UDF_I_ALLOCTYPE(dir) == ICB_FLAG_AD_IN_ICB)
out:
return err;
+
+out_no_entry:
+ inode->i_nlink--;
+ mark_inode_dirty(inode);
+ iput(inode);
+ goto out;
}
static int udf_link(struct dentry * old_dentry, struct inode * dir,
return err;
cfi.icb.extLength = cpu_to_le32(inode->i_sb->s_blocksize);
cfi.icb.extLocation = cpu_to_lelb(UDF_I_LOCATION(inode));
- if (UDF_SB_LVIDBH(inode->i_sb))
- {
- struct LogicalVolHeaderDesc *lvhd;
- Uint64 uniqueID;
- lvhd = (struct LogicalVolHeaderDesc *)(UDF_SB_LVID(inode->i_sb)->logicalVolContentsUse);
- uniqueID = le64_to_cpu(lvhd->uniqueID);
+ if (UDF_SB_LVIDBH(inode->i_sb))
+ {
+ struct LogicalVolHeaderDesc *lvhd;
+ Uint64 uniqueID;
+ lvhd = (struct LogicalVolHeaderDesc *)(UDF_SB_LVID(inode->i_sb)->logicalVolContentsUse);
+ uniqueID = le64_to_cpu(lvhd->uniqueID);
*(Uint32 *)((struct ADImpUse *)cfi.icb.impUse)->impUse =
cpu_to_le32(uniqueID & 0x00000000FFFFFFFFUL);
- if (!(++uniqueID & 0x00000000FFFFFFFFUL))
- uniqueID += 16;
- lvhd->uniqueID = cpu_to_le64(uniqueID);
- mark_buffer_dirty(UDF_SB_LVIDBH(inode->i_sb), 1);
+ if (!(++uniqueID & 0x00000000FFFFFFFFUL))
+ uniqueID += 16;
+ lvhd->uniqueID = cpu_to_le64(uniqueID);
+ mark_buffer_dirty(UDF_SB_LVIDBH(inode->i_sb), 1);
}
udf_write_fi(&cfi, fi, &fibh, NULL, NULL);
if (UDF_I_ALLOCTYPE(dir) == ICB_FLAG_AD_IN_ICB)
}
else
{
-/*
DQUOT_INIT(new_inode);
-*/
}
}
if (S_ISDIR(old_inode->i_mode))
}
struct inode_operations udf_dir_inode_operations = {
- lookup: udf_lookup,
-#if CONFIG_UDF_RW == 1
- create: udf_create,
- link: udf_link,
- unlink: udf_unlink,
- symlink: udf_symlink,
- mkdir: udf_mkdir,
- rmdir: udf_rmdir,
- mknod: udf_mknod,
- rename: udf_rename,
-#endif
+ lookup: udf_lookup,
+ create: udf_create,
+ link: udf_link,
+ unlink: udf_unlink,
+ symlink: udf_symlink,
+ mkdir: udf_mkdir,
+ rmdir: udf_rmdir,
+ mknod: udf_mknod,
+ rename: udf_rename,
};
* ftp://prep.ai.mit.edu/pub/gnu/GPL
* Each contributing author retains all rights to their own work.
*
- * (C) 1998-1999 Ben Fennema
+ * (C) 1998-2000 Ben Fennema
*
* HISTORY
*
/* These are the "meat" - everything else is stuffing */
static struct super_block *udf_read_super(struct super_block *, void *, int);
static void udf_put_super(struct super_block *);
+static void udf_write_super(struct super_block *);
static int udf_remount_fs(struct super_block *, int *, char *);
static int udf_check_valid(struct super_block *, int, int);
static int udf_vrs(struct super_block *sb, int silent);
/* UDF filesystem type */
static struct file_system_type udf_fstype = {
- "udf", /* name */
- FS_REQUIRES_DEV, /* fs_flags */
- udf_read_super, /* read_super */
- NULL /* next */
+ name: "udf",
+ fs_flags: FS_REQUIRES_DEV,
+ read_super: udf_read_super,
};
/* Superblock operations */
-static struct super_operations udf_sb_ops =
-{
- read_inode: udf_read_inode,
- put_inode: udf_put_inode,
- put_super: udf_put_super,
- statfs: udf_statfs,
- remount_fs: udf_remount_fs,
-#if CONFIG_UDF_RW == 1
- write_inode: udf_write_inode,
- delete_inode: udf_delete_inode,
-#endif
+static struct super_operations udf_sb_ops = {
+ read_inode: udf_read_inode,
+ write_inode: udf_write_inode,
+ put_inode: udf_put_inode,
+ delete_inode: udf_delete_inode,
+ put_super: udf_put_super,
+ write_super: udf_write_super,
+ statfs: udf_statfs,
+ remount_fs: udf_remount_fs,
};
struct udf_options
* gid= Set the default group.
* umask= Set the default umask.
* uid= Set the default user.
+ * bs= Set the block size.
* unhide Show otherwise hidden files.
* undelete Show deleted files in lists.
+ * adinicb Embed data in the inode (default)
+ * noadinicb Don't embed data in the inode
+ * shortad Use short ad's
+ * longad Use long ad's (default)
* strict Set strict conformance (unused)
*
* The remaining are for debugging and disaster recovery:
*
- * bs= Set the block size. (may not work unless 2048)
- * novrs Skip volume sequence recognition
+ * novrs Skip volume sequence recognition
*
* The following expect a offset from 0.
*
char *opt, *val;
uopt->novrs = 0;
- uopt->blocksize = 2048;
+ uopt->blocksize = 512;
uopt->partition = 0xFFFF;
uopt->session = 0xFFFFFFFF;
uopt->lastblock = 0xFFFFFFFF;
if (!options)
return 1;
- for (opt = strtok(options, ","); opt; opt = strtok(NULL, ","))
+ for (opt = strtok(options, ","); opt; opt = strtok(NULL, ","))
{
/* Make "opt=val" into two strings */
val = strchr(opt, '=');
else if (!strcmp(opt, "bs") && val)
uopt->blocksize = simple_strtoul(val, NULL, 0);
else if (!strcmp(opt, "unhide") && !val)
- uopt->flags |= UDF_FLAG_UNHIDE;
+ uopt->flags |= (1 << UDF_FLAG_UNHIDE);
else if (!strcmp(opt, "undelete") && !val)
- uopt->flags |= UDF_FLAG_UNDELETE;
+ uopt->flags |= (1 << UDF_FLAG_UNDELETE);
+ else if (!strcmp(opt, "noadinicb") && !val)
+ uopt->flags &= ~(1 << UDF_FLAG_USE_AD_IN_ICB);
+ else if (!strcmp(opt, "adinicb") && !val)
+ uopt->flags |= (1 << UDF_FLAG_USE_AD_IN_ICB);
+ else if (!strcmp(opt, "shortad") && !val)
+ uopt->flags |= (1 << UDF_FLAG_USE_SHORT_AD);
+ else if (!strcmp(opt, "longad") && !val)
+ uopt->flags &= ~(1 << UDF_FLAG_USE_SHORT_AD);
else if (!strcmp(opt, "gid") && val)
uopt->gid = simple_strtoul(val, NULL, 0);
else if (!strcmp(opt, "umask") && val)
uopt->umask = simple_strtoul(val, NULL, 0);
else if (!strcmp(opt, "strict") && !val)
- uopt->flags |= UDF_FLAG_STRICT;
+ uopt->flags |= (1 << UDF_FLAG_STRICT);
else if (!strcmp(opt, "uid") && val)
uopt->uid = simple_strtoul(val, NULL, 0);
else if (!strcmp(opt, "session") && val)
return 1;
}
+void
+udf_write_super(struct super_block *sb)
+{
+ if (!(sb->s_flags & MS_RDONLY))
+ udf_open_lvid(sb);
+ sb->s_dirt = 0;
+}
+
static int
udf_remount_fs(struct super_block *sb, int *flags, char *options)
{
struct udf_options uopt;
- uopt.flags = UDF_SB(sb)->s_flags ;
- uopt.uid = UDF_SB(sb)->s_uid ;
- uopt.gid = UDF_SB(sb)->s_gid ;
- uopt.umask = UDF_SB(sb)->s_umask ;
+ uopt.flags = UDF_SB(sb)->s_flags ;
+ uopt.uid = UDF_SB(sb)->s_uid ;
+ uopt.gid = UDF_SB(sb)->s_gid ;
+ uopt.umask = UDF_SB(sb)->s_umask ;
if ( !udf_parse_options(options, &uopt) )
return -EINVAL;
- UDF_SB(sb)->s_flags = uopt.flags;
- UDF_SB(sb)->s_uid = uopt.uid;
- UDF_SB(sb)->s_gid = uopt.gid;
- UDF_SB(sb)->s_umask = uopt.umask;
+ UDF_SB(sb)->s_flags = uopt.flags;
+ UDF_SB(sb)->s_uid = uopt.uid;
+ UDF_SB(sb)->s_gid = uopt.gid;
+ UDF_SB(sb)->s_umask = uopt.umask;
+
+#if CONFIG_UDF_RW != 1
+ *flags |= MS_RDONLY;
+#endif
if ((*flags & MS_RDONLY) == (sb->s_flags & MS_RDONLY))
return 0;
udf_set_blocksize(struct super_block *sb, int bsize)
{
/* Use specified block size if specified */
- sb->s_blocksize = get_hardblocksize(sb->s_dev);
- sb->s_blocksize = sb->s_blocksize ? sb->s_blocksize : 2048;
+ if (!(sb->s_blocksize = get_hardblocksize(sb->s_dev)))
+ sb->s_blocksize = 2048;
if (bsize > sb->s_blocksize)
sb->s_blocksize = bsize;
for (;!nsr02 && !nsr03; sector += 2048)
{
/* Read a block */
- bh = udf_tread(sb, sector >> sb->s_blocksize_bits, 2048);
+ bh = udf_tread(sb, sector >> sb->s_blocksize_bits, sb->s_blocksize);
if (!bh)
break;
}
else if (location == udf_variable_to_fixed(last[i]) - UDF_SB_SESSION(sb))
{
- UDF_SB(sb)->s_flags |= UDF_FLAG_VARCONV;
+ UDF_SET_FLAG(sb, UDF_FLAG_VARCONV);
lastblock = UDF_SB_ANCHOR(sb)[0] = udf_variable_to_fixed(last[i]);
UDF_SB_ANCHOR(sb)[1] = lastblock - 256;
}
if (ident == TID_ANCHOR_VOL_DESC_PTR &&
location == udf_variable_to_fixed(last[i]) - 256)
{
- UDF_SB(sb)->s_flags |= UDF_FLAG_VARCONV;
+ UDF_SET_FLAG(sb, UDF_FLAG_VARCONV);
lastblock = udf_variable_to_fixed(last[i]);
UDF_SB_ANCHOR(sb)[1] = lastblock - 256;
}
udf_release_data(bh);
if (ident == TID_ANCHOR_VOL_DESC_PTR && location == 256)
- UDF_SB(sb)->s_flags |= UDF_FLAG_VARCONV;
+ UDF_SET_FLAG(sb, UDF_FLAG_VARCONV);
}
}
if ( udf_stamp_to_time(&recording, &recording_usec,
lets_to_cpu(pvoldesc->recordingDateAndTime)) )
{
- timestamp ts;
- ts = lets_to_cpu(pvoldesc->recordingDateAndTime);
+ timestamp ts;
+ ts = lets_to_cpu(pvoldesc->recordingDateAndTime);
udf_debug("recording time %ld/%ld, %04u/%02u/%02u %02u:%02u (%x)\n",
recording, recording_usec,
ts.year, ts.month, ts.day, ts.hour, ts.minute, ts.typeAndTimezone);
- UDF_SB_RECORDTIME(sb) = recording;
+ UDF_SB_RECORDTIME(sb) = recording;
}
if ( !udf_build_ustr(&instr, pvoldesc->volIdent, 32) )
{
- if (!udf_CS0toUTF8(&outstr, &instr))
+ if (udf_CS0toUTF8(&outstr, &instr))
{
udf_debug("volIdent[] = '%s'\n", outstr.u_name);
strncpy( UDF_SB_VOLIDENT(sb), outstr.u_name, outstr.u_len);
if ( !udf_build_ustr(&instr, pvoldesc->volSetIdent, 128) )
{
- if (!udf_CS0toUTF8(&outstr, &instr))
+ if (udf_CS0toUTF8(&outstr, &instr))
udf_debug("volSetIdent[] = '%s'\n", outstr.u_name);
}
}
Uint16 ident;
struct buffer_head *bh;
long main_s, main_e, reserve_s, reserve_e;
- int i;
+ int i, j;
if (!sb)
- return 1;
+ return 1;
for (i=0; i<sizeof(UDF_SB_ANCHOR(sb))/sizeof(int); i++)
{
return 1;
}
- if (i == 0)
- ino.partitionReferenceNum = i+1;
- else
- ino.partitionReferenceNum = i-1;
+ for (j=0; j<UDF_SB_NUMPARTS(sb); j++)
+ {
+ if (j != i &&
+ UDF_SB_PARTVSN(sb,i) == UDF_SB_PARTVSN(sb,j) &&
+ UDF_SB_PARTNUM(sb,i) == UDF_SB_PARTNUM(sb,j))
+ {
+ ino.partitionReferenceNum = j;
+ ino.logicalBlockNum = UDF_SB_LASTBLOCK(sb) -
+ UDF_SB_PARTROOT(sb,j);
+ break;
+ }
+ }
- ino.logicalBlockNum = UDF_SB_LASTBLOCK(sb) - UDF_SB_PARTROOT(sb,ino.partitionReferenceNum);
+ if (j == UDF_SB_NUMPARTS(sb))
+ return 1;
if (!(UDF_SB_VAT(sb) = udf_iget(sb, ino)))
return 1;
static void udf_open_lvid(struct super_block *sb)
{
-#if CONFIG_UDF_RW == 1
if (UDF_SB_LVIDBH(sb))
{
int i;
((Uint8 *)&(UDF_SB_LVID(sb)->descTag))[i];
mark_buffer_dirty(UDF_SB_LVIDBH(sb), 1);
+ sb->s_dirt = 0;
}
-#endif
}
static void udf_close_lvid(struct super_block *sb)
{
-#if CONFIG_UDF_RW == 1
if (UDF_SB_LVIDBH(sb) &&
UDF_SB_LVID(sb)->integrityType == INTEGRITY_TYPE_OPEN)
{
UDF_SB_LVIDIU(sb)->impIdent.identSuffix[1] = UDF_OS_ID_LINUX;
if (udf_time_to_stamp(&cpu_time, CURRENT_TIME, CURRENT_UTIME))
UDF_SB_LVID(sb)->recordingDateAndTime = cpu_to_lets(cpu_time);
-
+ if (UDF_MAX_WRITE_VERSION > le16_to_cpu(UDF_SB_LVIDIU(sb)->maxUDFWriteRev))
+ UDF_SB_LVIDIU(sb)->maxUDFWriteRev = cpu_to_le16(UDF_MAX_WRITE_VERSION);
+ if (UDF_SB_UDFREV(sb) > le16_to_cpu(UDF_SB_LVIDIU(sb)->minUDFReadRev))
+ UDF_SB_LVIDIU(sb)->minUDFReadRev = cpu_to_le16(UDF_SB_UDFREV(sb));
+ if (UDF_SB_UDFREV(sb) > le16_to_cpu(UDF_SB_LVIDIU(sb)->minUDFWriteRev))
+ UDF_SB_LVIDIU(sb)->minUDFWriteRev = cpu_to_le16(UDF_SB_UDFREV(sb));
UDF_SB_LVID(sb)->integrityType = INTEGRITY_TYPE_CLOSE;
UDF_SB_LVID(sb)->descTag.descCRC =
mark_buffer_dirty(UDF_SB_LVIDBH(sb), 1);
}
-#endif
}
/*
lb_addr rootdir, fileset;
int i;
- uopt.flags = 0;
+ uopt.flags = (1 << UDF_FLAG_USE_AD_IN_ICB);
uopt.uid = -1;
uopt.gid = -1;
uopt.umask = 0;
MOD_INC_USE_COUNT;
lock_super(sb);
+ memset(UDF_SB(sb), 0x00, sizeof(struct udf_sb_info));
- UDF_SB_PARTMAPS(sb) = NULL;
- UDF_SB_LVIDBH(sb) = NULL;
- UDF_SB_VAT(sb) = NULL;
+#if CONFIG_UDF_RW != 1
+ sb->s_flags |= MS_RDONLY;
+#endif
if (!udf_parse_options((char *)options, &uopt))
goto error_out;
- memset(UDF_SB_ANCHOR(sb), 0x00, sizeof(UDF_SB_ANCHOR(sb)));
fileset.logicalBlockNum = 0xFFFFFFFF;
fileset.partitionReferenceNum = 0xFFFF;
- UDF_SB_RECORDTIME(sb)=0;
- UDF_SB_VOLIDENT(sb)[0]=0;
UDF_SB(sb)->s_flags = uopt.flags;
UDF_SB(sb)->s_uid = uopt.uid;
udf_debug("Multi-session=%d\n", UDF_SB_SESSION(sb));
if ( uopt.lastblock == 0xFFFFFFFF )
- UDF_SB_LASTBLOCK(sb) = udf_get_last_block(sb, &(UDF_SB(sb)->s_flags));
+ UDF_SB_LASTBLOCK(sb) = udf_get_last_block(sb);
else
UDF_SB_LASTBLOCK(sb) = uopt.lastblock;
if (udf_check_valid(sb, uopt.novrs, silent)) /* read volume recognition sequences */
{
- udf_debug("No VRS found\n");
+ printk("UDF-fs: No VRS found\n");
goto error_out;
}
if (udf_load_partition(sb, &fileset))
{
- udf_debug("No partition found (1)\n");
+ printk("UDF-fs: No partition found (1)\n");
goto error_out;
}
+ if ( UDF_SB_LVIDBH(sb) )
+ {
+ Uint16 minUDFReadRev = le16_to_cpu(UDF_SB_LVIDIU(sb)->minUDFReadRev);
+ Uint16 minUDFWriteRev = le16_to_cpu(UDF_SB_LVIDIU(sb)->minUDFWriteRev);
+ /* Uint16 maxUDFWriteRev = le16_to_cpu(UDF_SB_LVIDIU(sb)->maxUDFWriteRev); */
+
+ if (minUDFReadRev > UDF_MAX_READ_VERSION)
+ {
+ printk("UDF-fs: minUDFReadRev=%x (max is %x)\n",
+ UDF_SB_LVIDIU(sb)->minUDFReadRev, UDF_MAX_READ_VERSION);
+ goto error_out;
+ }
+ else if (minUDFWriteRev > UDF_MAX_WRITE_VERSION)
+ {
+ sb->s_flags |= MS_RDONLY;
+ }
+
+ if (minUDFReadRev >= UDF_VERS_USE_EXTENDED_FE)
+ UDF_SET_FLAG(sb, UDF_FLAG_USE_EXTENDED_FE);
+ if (minUDFReadRev >= UDF_VERS_USE_STREAMS)
+ UDF_SET_FLAG(sb, UDF_FLAG_USE_STREAMS);
+ }
+
if ( !UDF_SB_NUMPARTS(sb) )
{
- udf_debug("No partition found (2)\n");
+ printk("UDF-fs: No partition found (2)\n");
goto error_out;
}
if ( udf_find_fileset(sb, &fileset, &rootdir) )
{
- udf_debug("No fileset found\n");
+ printk("UDF-fs: No fileset found\n");
goto error_out;
}
inode = udf_iget(sb, rootdir);
if (!inode)
{
- udf_debug("Error in udf_iget, block=%d, partition=%d\n",
+ printk("UDF-fs: Error in udf_iget, block=%d, partition=%d\n",
rootdir.logicalBlockNum, rootdir.partitionReferenceNum);
goto error_out;
}
sb->s_root = d_alloc_root(inode);
if (!sb->s_root)
{
+ printk("UDF-fs: Couldn't allocate root dentry\n");
iput(inode);
- udf_debug("Couldn't allocate root dentry\n");
goto error_out;
}
* symlinks can't do much...
*/
struct address_space_operations udf_symlink_aops = {
- readpage: udf_symlink_filler,
+ readpage: udf_symlink_filler,
};
#include "udf_i.h"
#include "udf_sb.h"
-static void extent_trunc(struct inode * inode, lb_addr bloc, int *extoffset,
- lb_addr eloc, Uint8 etype, Uint32 elen, struct buffer_head **bh, Uint32 offset)
+static void extent_trunc(struct inode * inode, lb_addr bloc, int extoffset,
+ lb_addr eloc, Uint8 etype, Uint32 elen, struct buffer_head **bh, Uint32 nelen)
{
lb_addr neloc = { 0, 0 };
- int nelen = 0;
int blocks = inode->i_sb->s_blocksize / 512;
int last_block = (elen + inode->i_sb->s_blocksize - 1) >> inode->i_sb->s_blocksize_bits;
+ int first_block = (nelen + inode->i_sb->s_blocksize - 1) >> inode->i_sb->s_blocksize_bits;
- if (offset)
+ if (nelen)
{
- nelen = (etype << 30) |
- (((offset - 1) << inode->i_sb->s_blocksize_bits) +
- (inode->i_size & (inode->i_sb->s_blocksize - 1)));
neloc = eloc;
+ nelen = (etype << 30) | nelen;
+ }
+
+ if (elen != nelen)
+ {
+ udf_write_aext(inode, bloc, &extoffset, neloc, nelen, bh, 0);
+ if (last_block - first_block > 0)
+ {
+ if (etype == EXTENT_RECORDED_ALLOCATED)
+ {
+ inode->i_blocks -= (blocks * (last_block - first_block));
+ mark_inode_dirty(inode);
+ }
+ if (etype != EXTENT_NOT_RECORDED_NOT_ALLOCATED)
+ udf_free_blocks(inode, eloc, first_block, last_block - first_block);
+ }
}
- if (etype == EXTENT_RECORDED_ALLOCATED)
- inode->i_blocks -= (blocks * (last_block - offset));
- udf_write_aext(inode, bloc, extoffset, neloc, nelen, bh, 1);
- mark_inode_dirty(inode);
- if (etype != EXTENT_NOT_RECORDED_NOT_ALLOCATED)
- udf_free_blocks(inode, eloc, offset, last_block - offset);
}
void udf_trunc(struct inode * inode)
lb_addr bloc, eloc, neloc = { 0, 0 };
Uint32 extoffset, elen, offset, nelen = 0, lelen = 0, lenalloc;
int etype;
- int first_block = (inode->i_size + inode->i_sb->s_blocksize - 1) >> inode->i_sb->s_blocksize_bits;
+ int first_block = inode->i_size >> inode->i_sb->s_blocksize_bits;
struct buffer_head *bh = NULL;
int adsize;
else
adsize = 0;
- if ((etype = inode_bmap(inode, first_block, &bloc, &extoffset, &eloc, &elen, &offset, &bh)) != -1)
+ etype = inode_bmap(inode, first_block, &bloc, &extoffset, &eloc, &elen, &offset, &bh);
+ offset = (offset << inode->i_sb->s_blocksize_bits) |
+ (inode->i_size & (inode->i_sb->s_blocksize - 1));
+ if (etype != -1)
{
extoffset -= adsize;
- extent_trunc(inode, bloc, &extoffset, eloc, etype, elen, &bh, offset);
+ extent_trunc(inode, bloc, extoffset, eloc, etype, elen, &bh, offset);
+ extoffset += adsize;
if (offset)
lenalloc = extoffset;
else
{
if (!memcmp(&UDF_I_LOCATION(inode), &bloc, sizeof(lb_addr)))
+ {
UDF_I_LENALLOC(inode) = lenalloc;
+ mark_inode_dirty(inode);
+ }
else
{
struct AllocExtDesc *aed = (struct AllocExtDesc *)(bh->b_data);
aed->lengthAllocDescs = cpu_to_le32(lenalloc);
+ udf_update_tag(bh->b_data, lenalloc +
+ sizeof(struct AllocExtDesc));
+ mark_buffer_dirty(bh, 1);
}
}
lelen = 1;
}
else
- extent_trunc(inode, bloc, &extoffset, eloc, etype, elen, &bh, 0);
+ {
+ extent_trunc(inode, bloc, extoffset, eloc, etype, elen, &bh, 0);
+ extoffset += adsize;
+ }
}
if (lelen)
else
{
if (!memcmp(&UDF_I_LOCATION(inode), &bloc, sizeof(lb_addr)))
+ {
UDF_I_LENALLOC(inode) = lenalloc;
+ mark_inode_dirty(inode);
+ }
else
{
struct AllocExtDesc *aed = (struct AllocExtDesc *)(bh->b_data);
aed->lengthAllocDescs = cpu_to_le32(lenalloc);
+ udf_update_tag(bh->b_data, lenalloc +
+ sizeof(struct AllocExtDesc));
+ mark_buffer_dirty(bh, 1);
}
}
}
else if (inode->i_size)
{
- char tetype;
-
if (offset)
{
extoffset -= adsize;
- tetype = udf_next_aext(inode, &bloc, &extoffset, &eloc, &elen, &bh, 1);
- if (tetype == EXTENT_NOT_RECORDED_NOT_ALLOCATED)
+ etype = udf_next_aext(inode, &bloc, &extoffset, &eloc, &elen, &bh, 1);
+ if (etype == EXTENT_NOT_RECORDED_NOT_ALLOCATED)
{
extoffset -= adsize;
- elen = (EXTENT_NOT_RECORDED_NOT_ALLOCATED << 30) |
- (elen + (offset << inode->i_sb->s_blocksize_bits));
+ elen = (EXTENT_NOT_RECORDED_NOT_ALLOCATED << 30) | (elen + offset);
udf_write_aext(inode, bloc, &extoffset, eloc, elen, &bh, 0);
}
+ else if (etype == EXTENT_NOT_RECORDED_ALLOCATED)
+ {
+ lb_addr neloc = { 0, 0 };
+ extoffset -= adsize;
+ nelen = (EXTENT_NOT_RECORDED_NOT_ALLOCATED << 30) |
+ ((elen + offset + inode->i_sb->s_blocksize - 1) &
+ ~(inode->i_sb->s_blocksize - 1));
+ udf_write_aext(inode, bloc, &extoffset, neloc, nelen, &bh, 1);
+ udf_add_aext(inode, &bloc, &extoffset, eloc, (etype << 30) | elen, &bh, 1);
+ }
else
{
if (elen & (inode->i_sb->s_blocksize - 1))
udf_write_aext(inode, bloc, &extoffset, eloc, elen, &bh, 1);
}
memset(&eloc, 0x00, sizeof(lb_addr));
- elen = (EXTENT_NOT_RECORDED_NOT_ALLOCATED << 30) |
- (offset << inode->i_sb->s_blocksize_bits);
+ elen = (EXTENT_NOT_RECORDED_NOT_ALLOCATED << 30) | offset;
udf_add_aext(inode, &bloc, &extoffset, eloc, elen, &bh, 1);
}
}
void udf_truncate(struct inode * inode)
{
+ int err;
+
if (!(S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode) ||
S_ISLNK(inode->i_mode)))
return;
return;
if (UDF_I_ALLOCTYPE(inode) == ICB_FLAG_AD_IN_ICB)
- UDF_I_LENALLOC(inode) = inode->i_size;
+ {
+ if (inode->i_sb->s_blocksize < (udf_file_entry_alloc_offset(inode) +
+ inode->i_size))
+ {
+ udf_expand_file_adinicb(inode, inode->i_size, &err);
+ if (UDF_I_ALLOCTYPE(inode) == ICB_FLAG_AD_IN_ICB)
+ {
+ inode->i_size = UDF_I_LENALLOC(inode);
+ return;
+ }
+ else
+ udf_trunc(inode);
+ }
+ else
+ UDF_I_LENALLOC(inode) = inode->i_size;
+ }
else
udf_trunc(inode);
#define UDF_I_ALLOCTYPE(X) ( UDF_I(X)->i_alloc_type )
#define UDF_I_EXTENDED_FE(X)( UDF_I(X)->i_extended_fe )
#define UDF_I_STRAT4096(X) ( UDF_I(X)->i_strat_4096 )
+#define UDF_I_NEW_INODE(X) ( UDF_I(X)->i_new_inode )
#define UDF_I_NEXT_ALLOC_BLOCK(X) ( UDF_I(X)->i_next_alloc_block )
#define UDF_I_NEXT_ALLOC_GOAL(X) ( UDF_I(X)->i_next_alloc_goal )
#define UDF_I_UATIME(X) ( UDF_I(X)->i_uatime )
#define __LINUX_UDF_SB_H
/* Since UDF 1.50 is ISO 13346 based... */
-#define UDF_SUPER_MAGIC 0x15013346
+#define UDF_SUPER_MAGIC 0x15013346
-#define UDF_FLAG_STRICT 0x00000001U
-#define UDF_FLAG_UNDELETE 0x00000002U
-#define UDF_FLAG_UNHIDE 0x00000004U
-#define UDF_FLAG_VARCONV 0x00000008U
+#define UDF_MAX_READ_VERSION 0x0200
+#define UDF_MAX_WRITE_VERSION 0x0200
+#define UDF_FLAG_USE_EXTENDED_FE 0
+#define UDF_VERS_USE_EXTENDED_FE 0x0200
+#define UDF_FLAG_USE_STREAMS 1
+#define UDF_VERS_USE_STREAMS 0x0200
+#define UDF_FLAG_USE_SHORT_AD 2
+#define UDF_FLAG_USE_AD_IN_ICB 3
+#define UDF_FLAG_USE_FILE_CTIME_EA 4
+#define UDF_FLAG_STRICT 5
+#define UDF_FLAG_UNDELETE 6
+#define UDF_FLAG_UNHIDE 7
+#define UDF_FLAG_VARCONV 8
+
#define UDF_SB_FREE(X)\
{\
if (UDF_SB(X))\
memset(UDF_SB_PARTMAPS(X), 0x00, sizeof(struct udf_part_map) * Y);\
}
-#define IS_STRICT(X) ( UDF_SB(X)->s_flags & UDF_FLAG_STRICT )
-#define IS_UNDELETE(X) ( UDF_SB(X)->s_flags & UDF_FLAG_UNDELETE )
-#define IS_UNHIDE(X) ( UDF_SB(X)->s_flags & UDF_FLAG_UNHIDE )
-
-#define UDF_SB_SESSION(X) ( UDF_SB(X)->s_session )
-#define UDF_SB_ANCHOR(X) ( UDF_SB(X)->s_anchor )
-#define UDF_SB_NUMPARTS(X) ( UDF_SB(X)->s_partitions )
-#define UDF_SB_VOLUME(X) ( UDF_SB(X)->s_thisvolume )
-#define UDF_SB_LASTBLOCK(X) ( UDF_SB(X)->s_lastblock )
-#define UDF_SB_VOLDESC(X) ( UDF_SB(X)->s_voldesc )
-#define UDF_SB_LVIDBH(X) ( UDF_SB(X)->s_lvidbh )
-#define UDF_SB_LVID(X) ( (struct LogicalVolIntegrityDesc *)UDF_SB_LVIDBH(X)->b_data )
-#define UDF_SB_LVIDIU(X) ( (struct LogicalVolIntegrityDescImpUse *)&(UDF_SB_LVID(sb)->impUse[UDF_SB_LVID(sb)->numOfPartitions * 2 * sizeof(Uint32)/sizeof(Uint8)]) )
-#define UDF_SB_PARTITION(X) ( UDF_SB(X)->s_partition )
-#define UDF_SB_RECORDTIME(X) ( UDF_SB(X)->s_recordtime )
-#define UDF_SB_VOLIDENT(X) ( UDF_SB(X)->s_volident )
-#define UDF_SB_PARTMAPS(X) ( UDF_SB(X)->s_partmaps )
-#define UDF_SB_SERIALNUM(X) ( UDF_SB(X)->s_serialnum )
-#define UDF_SB_VAT(X) ( UDF_SB(X)->s_vat )
-
-#define UDF_SB_BLOCK_BITMAP_NUMBER(X,Y) ( UDF_SB(X)->s_block_bitmap_number[Y] )
-#define UDF_SB_BLOCK_BITMAP(X,Y) ( UDF_SB(X)->s_block_bitmap[Y] )
-#define UDF_SB_LOADED_BLOCK_BITMAPS(X) ( UDF_SB(X)->s_loaded_block_bitmaps )
+#define UDF_QUERY_FLAG(X,Y) ( UDF_SB(X)->s_flags & ( 1 << (Y) ) )
+#define UDF_SET_FLAG(X,Y) ( UDF_SB(X)->s_flags |= ( 1 << (Y) ) )
+#define UDF_CLEAR_FLAG(X,Y) ( UDF_SB(X)->s_flags &= ~( 1 << (Y) ) )
+
+#define UDF_UPDATE_UDFREV(X,Y) ( ((Y) > UDF_SB_UDFREV(X)) ? UDF_SB_UDFREV(X) = (Y) : UDF_SB_UDFREV(X) )
+
+#define UDF_SB_PARTMAPS(X) ( UDF_SB(X)->s_partmaps )
+#define UDF_SB_PARTTYPE(X,Y) ( UDF_SB_PARTMAPS(X)[(Y)].s_partition_type )
+#define UDF_SB_PARTROOT(X,Y) ( UDF_SB_PARTMAPS(X)[(Y)].s_partition_root )
+#define UDF_SB_PARTLEN(X,Y) ( UDF_SB_PARTMAPS(X)[(Y)].s_partition_len )
+#define UDF_SB_PARTVSN(X,Y) ( UDF_SB_PARTMAPS(X)[(Y)].s_volumeseqnum )
+#define UDF_SB_PARTNUM(X,Y) ( UDF_SB_PARTMAPS(X)[(Y)].s_partition_num )
+#define UDF_SB_TYPESPAR(X,Y) ( UDF_SB_PARTMAPS(X)[(Y)].s_type_specific.s_sparing )
+#define UDF_SB_TYPEVIRT(X,Y) ( UDF_SB_PARTMAPS(X)[(Y)].s_type_specific.s_virtual )
+#define UDF_SB_PARTFUNC(X,Y) ( UDF_SB_PARTMAPS(X)[(Y)].s_partition_func )
-#define UDF_SB_PARTTYPE(X,Y) ( UDF_SB_PARTMAPS(X)[Y].s_partition_type )
-#define UDF_SB_PARTROOT(X,Y) ( UDF_SB_PARTMAPS(X)[Y].s_partition_root )
-#define UDF_SB_PARTLEN(X,Y) ( UDF_SB_PARTMAPS(X)[Y].s_partition_len )
-#define UDF_SB_PARTVSN(X,Y) ( UDF_SB_PARTMAPS(X)[Y].s_volumeseqnum )
-#define UDF_SB_PARTNUM(X,Y) ( UDF_SB_PARTMAPS(X)[Y].s_partition_num )
-#define UDF_SB_TYPESPAR(X,Y) ( UDF_SB_PARTMAPS(X)[Y].s_type_specific.s_sparing )
-#define UDF_SB_TYPEVIRT(X,Y) ( UDF_SB_PARTMAPS(X)[Y].s_type_specific.s_virtual )
-#define UDF_SB_PARTFUNC(X,Y) ( UDF_SB_PARTMAPS(X)[Y].s_partition_func )
+#define UDF_SB_VOLIDENT(X) ( UDF_SB(X)->s_volident )
+#define UDF_SB_NUMPARTS(X) ( UDF_SB(X)->s_partitions )
+#define UDF_SB_PARTITION(X) ( UDF_SB(X)->s_partition )
+#define UDF_SB_SESSION(X) ( UDF_SB(X)->s_session )
+#define UDF_SB_ANCHOR(X) ( UDF_SB(X)->s_anchor )
+#define UDF_SB_LASTBLOCK(X) ( UDF_SB(X)->s_lastblock )
+#define UDF_SB_LVIDBH(X) ( UDF_SB(X)->s_lvidbh )
+#define UDF_SB_LVID(X) ( (struct LogicalVolIntegrityDesc *)UDF_SB_LVIDBH(X)->b_data )
+#define UDF_SB_LVIDIU(X) ( (struct LogicalVolIntegrityDescImpUse *)&(UDF_SB_LVID(X)->impUse[UDF_SB_LVID(X)->numOfPartitions * 2 * sizeof(Uint32)/sizeof(Uint8)]) )
+
+#define UDF_SB_LOADED_BLOCK_BITMAPS(X) ( UDF_SB(X)->s_loaded_block_bitmaps )
+#define UDF_SB_BLOCK_BITMAP_NUMBER(X,Y) ( UDF_SB(X)->s_block_bitmap_number[(Y)] )
+#define UDF_SB_BLOCK_BITMAP(X,Y) ( UDF_SB(X)->s_block_bitmap[(Y)] )
+#define UDF_SB_UMASK(X) ( UDF_SB(X)->s_umask )
+#define UDF_SB_GID(X) ( UDF_SB(X)->s_gid )
+#define UDF_SB_UID(X) ( UDF_SB(X)->s_uid )
+#define UDF_SB_RECORDTIME(X) ( UDF_SB(X)->s_recordtime )
+#define UDF_SB_SERIALNUM(X) ( UDF_SB(X)->s_serialnum )
+#define UDF_SB_UDFREV(X) ( UDF_SB(X)->s_udfrev )
+#define UDF_SB_FLAGS(X) ( UDF_SB(X)->s_flags )
+#define UDF_SB_VAT(X) ( UDF_SB(X)->s_vat )
#endif /* __LINUX_UDF_SB_H */
#ifndef __UDF_DECL_H
#define __UDF_DECL_H
-#define UDF_VERSION_NOTICE "v0.9.0"
-
#include <linux/udf_167.h>
#include <linux/udf_udf.h>
#include "udfend.h"
+#include <linux/udf_fs.h>
+
#ifdef __KERNEL__
#include <linux/types.h>
-#include <linux/udf_fs.h>
-#include <linux/config.h>
#ifndef LINUX_VERSION_CODE
#include <linux/version.h>
struct super_block;
extern struct inode_operations udf_dir_inode_operations;
-extern struct inode_operations udf_file_inode_operations;
extern struct file_operations udf_dir_operations;
+extern struct inode_operations udf_file_inode_operations;
extern struct file_operations udf_file_operations;
+extern struct address_space_operations udf_aops;
extern struct address_space_operations udf_adinicb_aops;
extern struct address_space_operations udf_symlink_aops;
/* inode.c */
extern struct inode *udf_iget(struct super_block *, lb_addr);
extern int udf_sync_inode(struct inode *);
-extern void udf_expand_file_adinicb(struct file *, int, int *);
+extern void udf_expand_file_adinicb(struct inode *, int, int *);
extern struct buffer_head * udf_expand_dir_adinicb(struct inode *, int *, int *);
extern struct buffer_head * udf_getblk(struct inode *, long, int, int *);
extern struct buffer_head * udf_bread(struct inode *, int, int, int *);
/* lowlevel.c */
extern unsigned int udf_get_last_session(struct super_block *);
-extern unsigned int udf_get_last_block(struct super_block *, int *);
+extern unsigned int udf_get_last_block(struct super_block *);
/* partition.c */
extern Uint32 udf_get_pblock(struct super_block *, Uint32, Uint16, Uint32);
/* balloc.c */
extern void udf_free_blocks(const struct inode *, lb_addr, Uint32, Uint32);
-extern int udf_alloc_blocks(const struct inode *, Uint16, Uint32, Uint32);
+extern int udf_prealloc_blocks(const struct inode *, Uint16, Uint32, Uint32);
extern int udf_new_block(const struct inode *, Uint16, Uint32, int *);
extern int udf_sync_file(struct file *, struct dentry *);
/* directory.c */
extern Uint8 * udf_filead_read(struct inode *, Uint8 *, Uint8, lb_addr, int *, int *, struct buffer_head **, int *);
-extern struct FileIdentDesc * udf_fileident_read(struct inode *, loff_t *, struct udf_fileident_bh *, struct FileIdentDesc *, lb_addr *, Uint32 *, Uint32 *, struct buffer_head **);
+extern struct FileIdentDesc * udf_fileident_read(struct inode *, loff_t *, struct udf_fileident_bh *, struct FileIdentDesc *, lb_addr *, Uint32 *, lb_addr *, Uint32 *, Uint32 *, struct buffer_head **);
#endif /* __KERNEL__ */
/* How many days come before each month (0-12). */
const unsigned short int __mon_yday[2][13] =
- {
- /* Normal years. */
- { 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365 },
- /* Leap years. */
- { 0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366 }
- };
+{
+ /* Normal years. */
+ { 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365 },
+ /* Leap years. */
+ { 0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366 }
+};
#define MAX_YEAR_SECONDS 69
#define SPD 0x15180 /*3600*24*/
extern struct timezone sys_tz;
#endif
-#define SECS_PER_HOUR (60 * 60)
-#define SECS_PER_DAY (SECS_PER_HOUR * 24)
+#define SECS_PER_HOUR (60 * 60)
+#define SECS_PER_DAY (SECS_PER_HOUR * 24)
time_t *
udf_stamp_to_time(time_t *dest, long *dest_usec, timestamp src)
gettimeofday(&tv, &sys_tz);
#endif
- offset = (-sys_tz.tz_minuteswest);
+ offset = -sys_tz.tz_minuteswest;
- if (!dest)
- return NULL;
+ if (!dest)
+ return NULL;
dest->typeAndTimezone = 0x1000 | (offset & 0x0FFF);
dest->hundredsOfMicroseconds = (tv_usec - dest->centiseconds * 10000) / 100;
dest->microseconds = (tv_usec - dest->centiseconds * 10000 -
dest->hundredsOfMicroseconds * 100);
- return dest;
+ return dest;
}
/* EOF */
#ifdef __KERNEL__
#include <linux/kernel.h>
-#include <linux/string.h> /* for memset */
+#include <linux/string.h> /* for memset */
#include <linux/udf_fs.h>
#else
#include <string.h>
*/
int udf_build_ustr(struct ustr *dest, dstring *ptr, int size)
{
- int usesize;
+ int usesize;
if ( (!dest) || (!ptr) || (!size) )
return -1;
*/
int udf_build_ustr_exact(struct ustr *dest, dstring *ptr, int exactsize)
{
- if ( (!dest) || (!ptr) || (!exactsize) )
- return -1;
-
- memset(dest, 0, sizeof(struct ustr));
- dest->u_cmpID=ptr[0];
- dest->u_len=exactsize-1;
- memcpy(dest->u_name, ptr+1, exactsize-1);
- return 0;
+ if ( (!dest) || (!ptr) || (!exactsize) )
+ return -1;
+
+ memset(dest, 0, sizeof(struct ustr));
+ dest->u_cmpID=ptr[0];
+ dest->u_len=exactsize-1;
+ memcpy(dest->u_name, ptr+1, exactsize-1);
+ return 0;
}
/*
void (*mv_switch_mm)(struct mm_struct *, struct mm_struct *,
struct task_struct *, long);
- void (*mv_activate_mm)(struct mm_struct *, struct mm_struct *, long);
+ void (*mv_activate_mm)(struct mm_struct *, struct mm_struct *);
void (*mv_flush_tlb_current)(struct mm_struct *);
- void (*mv_flush_tlb_other)(struct mm_struct *);
void (*mv_flush_tlb_current_page)(struct mm_struct * mm,
struct vm_area_struct *vma,
unsigned long addr);
#include <asm/io.h>
#endif
-static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk, unsigned cpu)
+static inline void
+enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk, unsigned cpu)
{
}
* in use) and the Valid bit set, then entries can also effectively be
* made coherent by assigning a new, unused ASN to the currently
* running process and not reusing the previous ASN before calling the
- * appropriate PALcode routine to invalidate the translation buffer
- * (TB)".
+ * appropriate PALcode routine to invalidate the translation buffer (TB)".
*
* In short, the EV4 has a "kind of" ASN capability, but it doesn't actually
* work correctly and can thus not be used (explaining the lack of PAL-code
#define __MMU_EXTERN_INLINE
#endif
-extern void get_new_mm_context(struct task_struct *p, struct mm_struct *mm);
-
static inline unsigned long
__get_new_mm_context(struct mm_struct *mm, long cpu)
{
if ((asn & HARDWARE_ASN_MASK) >= MAX_ASN) {
tbiap();
+ imb();
next = (asn & ~HARDWARE_ASN_MASK) + ASN_FIRST_VERSION;
}
cpu_last_asn(cpu) = next;
return next;
}
-__EXTERN_INLINE void
-ev4_switch_mm(struct mm_struct *prev_mm, struct mm_struct *next_mm,
- struct task_struct *next, long cpu)
-{
- /* As described, ASN's are broken. But we can optimize for
- switching between threads -- if the mm is unchanged from
- current we needn't flush. */
- /* ??? May not be needed because EV4 PALcode recognizes that
- ASN's are broken and does a tbiap itself on swpctx, under
- the "Must set ASN or flush" rule. At least this is true
- for a 1992 SRM, reports Joseph Martin (jmartin@hlo.dec.com).
- I'm going to leave this here anyway, just to Be Sure. -- r~ */
-
- if (prev_mm != next_mm)
- tbiap();
-}
-
-__EXTERN_INLINE void
-ev4_activate_mm(struct mm_struct *prev_mm, struct mm_struct *next_mm, long cpu)
-{
- /* This is only called after changing mm on current. */
- tbiap();
-
- current->thread.ptbr
- = ((unsigned long) next_mm->pgd - IDENT_ADDR) >> PAGE_SHIFT;
-
- __reload_thread(¤t->thread);
-}
-
__EXTERN_INLINE void
ev5_switch_mm(struct mm_struct *prev_mm, struct mm_struct *next_mm,
struct task_struct *next, long cpu)
}
__EXTERN_INLINE void
-ev5_activate_mm(struct mm_struct *prev_mm, struct mm_struct *next_mm, long cpu)
+ev4_switch_mm(struct mm_struct *prev_mm, struct mm_struct *next_mm,
+ struct task_struct *next, long cpu)
{
- unsigned long mmc = __get_new_mm_context(next_mm, cpu);
- next_mm->context = mmc;
- current->thread.asn = mmc & HARDWARE_ASN_MASK;
- current->thread.ptbr
- = ((unsigned long) next_mm->pgd - IDENT_ADDR) >> PAGE_SHIFT;
+ /* As described, ASN's are broken for TLB usage. But we can
+ optimize for switching between threads -- if the mm is
+ unchanged from current we needn't flush. */
+ /* ??? May not be needed because EV4 PALcode recognizes that
+ ASN's are broken and does a tbiap itself on swpctx, under
+ the "Must set ASN or flush" rule. At least this is true
+ for a 1992 SRM, reports Joseph Martin (jmartin@hlo.dec.com).
+ I'm going to leave this here anyway, just to Be Sure. -- r~ */
+ if (prev_mm != next_mm)
+ tbiap();
+
+ /* Do continue to allocate ASNs, because we can still use them
+ to avoid flushing the icache. */
+ ev5_switch_mm(prev_mm, next_mm, next, cpu);
+}
- __reload_thread(&current->thread);
+extern void __load_new_mm_context(struct mm_struct *);
+
+__EXTERN_INLINE void
+ev5_activate_mm(struct mm_struct *prev_mm, struct mm_struct *next_mm)
+{
+ __load_new_mm_context(next_mm);
}
+__EXTERN_INLINE void
+ev4_activate_mm(struct mm_struct *prev_mm, struct mm_struct *next_mm)
+{
+ __load_new_mm_context(next_mm);
+ tbiap();
+}
#ifdef CONFIG_ALPHA_GENERIC
-# define switch_mm alpha_mv.mv_switch_mm
-# define activate_mm(x,y) alpha_mv.mv_activate_mm((x),(y),smp_processor_id())
+# define switch_mm(a,b,c,d) alpha_mv.mv_switch_mm((a),(b),(c),(d))
+# define activate_mm(x,y) alpha_mv.mv_activate_mm((x),(y))
#else
# ifdef CONFIG_ALPHA_EV4
-# define switch_mm ev4_switch_mm
-# define activate_mm(x,y) ev4_activate_mm((x),(y),smp_processor_id())
+# define switch_mm(a,b,c,d) ev4_switch_mm((a),(b),(c),(d))
+# define activate_mm(x,y) ev4_activate_mm((x),(y))
# else
-# define switch_mm ev5_switch_mm
-# define activate_mm(x,y) ev5_activate_mm((x),(y),smp_processor_id())
+# define switch_mm(a,b,c,d) ev5_switch_mm((a),(b),(c),(d))
+# define activate_mm(x,y) ev5_activate_mm((x),(y))
# endif
#endif
struct pci_dev;
struct pci_bus;
struct resource;
-
-/* A PCI IOMMU allocation arena. There are typically two of these
- regions per bus. */
-/* ??? The 8400 has a 32-byte pte entry, and the entire table apparently
- lives directly on the host bridge (no tlb?). We don't support this
- machine, but if we ever did, we'd need to parameterize all this quite
- a bit further. Probably with per-bus operation tables. */
-
-struct pci_iommu_arena
-{
- spinlock_t lock;
- unsigned long *ptes;
- dma_addr_t dma_base;
- unsigned int size;
- unsigned int alloc_hint;
-};
+struct pci_iommu_arena;
/* A controller. Used to manage multiple PCI busses. */
#include <linux/config.h>
+#ifndef __EXTERN_INLINE
+#define __EXTERN_INLINE extern inline
+#define __MMU_EXTERN_INLINE
+#endif
+
+extern void __load_new_mm_context(struct mm_struct *);
+
+
/* Caches aren't brain-dead on the Alpha. */
#define flush_cache_all() do { } while (0)
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(mm, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
-/*
- * The icache is not coherent with the dcache on alpha, thus before
- * running self modified code like kernel modules we must always run
- * an imb().
- */
+
+/* Note that the following two definitions are _highly_ dependent
+ on the contexts in which they are used in the kernel. I personally
+ think it is criminal how loosely defined these macros are. */
+
+/* We need to flush the kernel's icache after loading modules. The
+ only other use of this macro is in load_aout_interp which is not
+ used on Alpha.
+
+ Note that this definition should *not* be used for userspace
+ icache flushing. While functional, it is _way_ overkill. The
+ icache is tagged with ASNs and it suffices to allocate a new ASN
+ for the process. */
#ifndef __SMP__
#define flush_icache_range(start, end) imb()
#else
#define flush_icache_range(start, end) smp_imb()
extern void smp_imb(void);
#endif
-#define flush_icache_page(vma, page) do { } while (0)
+
+/* We need to flush the userspace icache after setting breakpoints in
+ ptrace. I don't think it's needed in do_swap_page, or do_no_page,
+ but I don't know how to get rid of it either.
+
+ Instead of indiscriminately using imb, take advantage of the fact
+ that icache entries are tagged with the ASN and load a new mm context. */
+/* ??? Ought to use this in arch/alpha/kernel/signal.c too. */
+
+#ifndef __SMP__
+static inline void
+flush_icache_page(struct vm_area_struct *vma, struct page *page)
+{
+ if (vma->vm_flags & VM_EXEC) {
+ struct mm_struct *mm = vma->vm_mm;
+ mm->context = 0;
+ if (current->active_mm == mm)
+ __load_new_mm_context(mm);
+ }
+}
+#else
+extern void flush_icache_page(struct vm_area_struct *vma, struct page *page);
+#endif
+
/*
* Use a few helper functions to hide the ugly broken ASN
* numbers on early Alphas (ev4 and ev45)
*/
-#ifndef __EXTERN_INLINE
-#define __EXTERN_INLINE extern inline
-#define __MMU_EXTERN_INLINE
-#endif
-
__EXTERN_INLINE void
ev4_flush_tlb_current(struct mm_struct *mm)
{
+ __load_new_mm_context(mm);
tbiap();
}
__EXTERN_INLINE void
-ev4_flush_tlb_other(struct mm_struct *mm)
+ev5_flush_tlb_current(struct mm_struct *mm)
{
+ __load_new_mm_context(mm);
}
-extern void ev5_flush_tlb_current(struct mm_struct *mm);
-
-__EXTERN_INLINE void
-ev5_flush_tlb_other(struct mm_struct *mm)
+extern inline void
+flush_tlb_other(struct mm_struct *mm)
{
mm->context = 0;
}
struct vm_area_struct *vma,
unsigned long addr)
{
- tbi(2 + ((vma->vm_flags & VM_EXEC) != 0), addr);
+ int tbi_flag = 2;
+ if (vma->vm_flags & VM_EXEC) {
+ __load_new_mm_context(mm);
+ tbi_flag = 3;
+ }
+ tbi(tbi_flag, addr);
}
__EXTERN_INLINE void
unsigned long addr)
{
if (vma->vm_flags & VM_EXEC)
- ev5_flush_tlb_current(mm);
+ __load_new_mm_context(mm);
else
tbi(2, addr);
}
#ifdef CONFIG_ALPHA_GENERIC
# define flush_tlb_current alpha_mv.mv_flush_tlb_current
-# define flush_tlb_other alpha_mv.mv_flush_tlb_other
# define flush_tlb_current_page alpha_mv.mv_flush_tlb_current_page
#else
# ifdef CONFIG_ALPHA_EV4
# define flush_tlb_current ev4_flush_tlb_current
-# define flush_tlb_other ev4_flush_tlb_other
# define flush_tlb_current_page ev4_flush_tlb_current_page
# else
# define flush_tlb_current ev5_flush_tlb_current
-# define flush_tlb_other ev5_flush_tlb_other
# define flush_tlb_current_page ev5_flush_tlb_current_page
# endif
#endif
#define PTRS_PER_PMD (1UL << (PAGE_SHIFT-3))
#define PTRS_PER_PGD ((1UL << (PAGE_SHIFT-3))-1)
#define USER_PTRS_PER_PGD (TASK_SIZE / PGDIR_SIZE)
+#define FIRST_USER_PGD_NR 0
/* Number of pointers that fit on a page: this will go away. */
#define PTRS_PER_PAGE (1UL << (PAGE_SHIFT-3))
#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
#define PGDIR_MASK (~(PGDIR_SIZE-1))
-#define USER_PTRS_PER_PGD (TASK_SIZE/PGDIR_SIZE)
+#define FIRST_USER_PGD_NR 1
+#define USER_PTRS_PER_PGD ((TASK_SIZE/PGDIR_SIZE) - FIRST_USER_PGD_NR)
/*
* The table below defines the page protection levels that we insert into our
#define PGDIR_MASK (~(PGDIR_SIZE-1))
#define USER_PTRS_PER_PGD (TASK_SIZE/PGDIR_SIZE)
+#define FIRST_USER_PGD_NR 0
#define USER_PGD_PTRS (PAGE_OFFSET >> PGDIR_SHIFT)
#define KERNEL_PGD_PTRS (PTRS_PER_PGD-USER_PGD_PTRS)
#define PGDIR_MASK (~(PGDIR_SIZE-1))
#define PTRS_PER_PGD (__IA64_UL(1) << (PAGE_SHIFT-3))
#define USER_PTRS_PER_PGD PTRS_PER_PGD
+#define FIRST_USER_PGD_NR 0
/*
* Definitions for second level:
#define PTRS_PER_PMD 8
#define PTRS_PER_PGD 128
#define USER_PTRS_PER_PGD (TASK_SIZE/PGDIR_SIZE)
+#define FIRST_USER_PGD_NR 0
/* Virtual address region for use by kernel_map() */
#define KMAP_START 0xd0000000
#define PTRS_PER_PMD 1
#define PTRS_PER_PGD 1024
#define USER_PTRS_PER_PGD (TASK_SIZE/PGDIR_SIZE)
+#define FIRST_USER_PGD_NR 0
#define VMALLOC_START KSEG2
#define VMALLOC_VMADDR(x) ((unsigned long)(x))
#define PTRS_PER_PMD 1
#define PTRS_PER_PGD 1024
#define USER_PTRS_PER_PGD (TASK_SIZE / PGDIR_SIZE)
+#define FIRST_USER_PGD_NR 0
#define USER_PGD_PTRS (PAGE_OFFSET >> PGDIR_SHIFT)
#define KERNEL_PGD_PTRS (PTRS_PER_PGD-USER_PGD_PTRS)
#define PGDIR_MASK (~(PGDIR_SIZE-1))
#define USER_PTRS_PER_PGD (TASK_SIZE/PGDIR_SIZE)
+#define FIRST_USER_PGD_NR 0
#define USER_PGD_PTRS (PAGE_OFFSET >> PGDIR_SHIFT)
#define KERNEL_PGD_PTRS (PTRS_PER_PGD-USER_PGD_PTRS)
#define PTRS_PER_PMD BTFIXUP_SIMM13(ptrs_per_pmd)
#define PTRS_PER_PGD BTFIXUP_SIMM13(ptrs_per_pgd)
#define USER_PTRS_PER_PGD BTFIXUP_SIMM13(user_ptrs_per_pgd)
+#define FIRST_USER_PGD_NR 0
#define PAGE_NONE __pgprot(BTFIXUP_INT(page_none))
#define PAGE_SHARED __pgprot(BTFIXUP_INT(page_shared))
/* Kernel has a separate 44bit address space. */
#define USER_PTRS_PER_PGD ((const int)((current->thread.flags & SPARC_FLAG_32BIT) ? \
(1) : (PTRS_PER_PGD)))
+#define FIRST_USER_PGD_NR 0
#define PTE_TABLE_SIZE 0x2000 /* 1024 entries 8 bytes each */
#define PMD_TABLE_SIZE 0x2000 /* 2048 entries 4 bytes each */
#include <linux/types.h>
#include <linux/skbuff.h>
#include <linux/net.h>
+#include <linux/if.h>
#include <linux/wait.h>
#include <linux/list.h>
#endif
extern void netfilter_init(void);
/* Largest hook number + 1 */
-#define NF_MAX_HOOKS 5
+#define NF_MAX_HOOKS 8
struct sk_buff;
struct net_device;
const struct net_device *out,
int (*okfn)(struct sk_buff *));
-typedef unsigned int nf_cacheflushfn(const void *packet,
- const struct net_device *in,
- const struct net_device *out,
- u_int32_t packetcount,
- u_int32_t bytecount);
-
struct nf_hook_ops
{
struct list_head list;
/* User fills in from here down. */
nf_hookfn *hook;
- nf_cacheflushfn *flush;
int pf;
int hooknum;
/* Hooks are ordered in ascending priority. */
int (*get)(struct sock *sk, int optval, void *user, int *len);
};
+/* Each queued (to userspace) skbuff has one of these. */
+struct nf_info
+{
+ /* The ops struct which sent us to userspace. */
+ struct nf_hook_ops *elem;
+
+ /* If we're sent to userspace, this keeps housekeeping info */
+ int pf;
+ unsigned int hook;
+ struct net_device *indev, *outdev;
+ int (*okfn)(struct sk_buff *);
+};
+
/* Function to register/unregister hook points. */
int nf_register_hook(struct nf_hook_ops *reg);
void nf_unregister_hook(struct nf_hook_ops *reg);
extern struct list_head nf_hooks[NPROTO][NF_MAX_HOOKS];
-/* Activate hook/flush; either okfn or kfree_skb called, unless a hook
+/* Activate hook; either okfn or kfree_skb called, unless a hook
returns NF_STOLEN (in which case, it's up to the hook to deal with
the consequences).
struct net_device *indev, struct net_device *outdev,
int (*okfn)(struct sk_buff *));
-void nf_cacheflush(int pf, unsigned int hook, const void *packet,
- const struct net_device *indev, const struct net_device *outdev,
- __u32 packetcount, __u32 bytecount);
-
/* Call setsockopt() */
int nf_setsockopt(struct sock *sk, int pf, int optval, char *opt,
int len);
int nf_getsockopt(struct sock *sk, int pf, int optval, char *opt,
int *len);
-struct nf_wakeme
-{
- wait_queue_head_t sleep;
- struct sk_buff_head skbq;
-};
-
-/* For netfilter device. */
-struct nf_interest
-{
- struct list_head list;
-
- int pf;
- /* Bitmask of hook numbers to match (1 << hooknum). */
- unsigned int hookmask;
- /* If non-zero, only catch packets with this mark. */
- unsigned int mark;
- /* If non-zero, only catch packets of this reason. */
- unsigned int reason;
-
- struct nf_wakeme *wake;
-};
-
-/* For asynchronous packet handling. */
-extern void nf_register_interest(struct nf_interest *interest);
-extern void nf_unregister_interest(struct nf_interest *interest);
-extern void nf_getinfo(const struct sk_buff *skb,
- struct net_device **indev,
- struct net_device **outdev,
- unsigned long *mark);
+/* Packet queuing */
+typedef int (*nf_queue_outfn_t)(struct sk_buff *skb,
+ struct nf_info *info, void *data);
+extern int nf_register_queue_handler(int pf,
+ nf_queue_outfn_t outfn, void *data);
+extern int nf_unregister_queue_handler(int pf);
extern void nf_reinject(struct sk_buff *skb,
- unsigned long mark,
+ struct nf_info *info,
unsigned int verdict);
#ifdef CONFIG_NETFILTER_DEBUG
#define NF_DN_LOCAL_OUT 3
/* Packets about to hit the wire. */
#define NF_DN_POST_ROUTING 4
-#define NF_DN_NUMHOOKS 5
+/* Input Hello Packets */
+#define NF_DN_HELLO 5
+/* Input Routing Packets */
+#define NF_DN_ROUTE 6
+#define NF_DN_NUMHOOKS 7
#endif /*__LINUX_DECNET_NETFILTER_H*/
#ifdef CONFIG_NETFILTER_DEBUG
#ifdef __KERNEL__
-void debug_print_hooks_ip(unsigned int nf_debug);
void nf_debug_ip_local_deliver(struct sk_buff *skb);
void nf_debug_ip_loopback_xmit(struct sk_buff *newskb);
void nf_debug_ip_finish_output2(struct sk_buff *skb);
#define INTEGRITY_TYPE_CLOSE 1
/* Recorded Address (ECMA 167 4/7.1) */
-#ifndef _LINUX_UDF_FS_I_H
-/* Declared in udf_fs_i.h */
typedef struct {
Uint32 logicalBlockNum;
Uint16 partitionReferenceNum;
} lb_addr;
-#endif
/* Extent interpretation (ECMA 167 4/14.14.1.1) */
#define EXTENT_RECORDED_ALLOCATED 0x00
*
* HISTORY
*
- * 10/02/98 dgb rearranged all headers
- * 11/26/98 blf added byte order macros
- * 12/05/98 dgb removed other includes to reduce kernel namespace pollution.
- * This should only be included by the kernel now!
*/
#if !defined(_LINUX_UDF_FS_H)
#define UDF_PREALLOCATE
#define UDF_DEFAULT_PREALLOC_BLOCKS 8
-#define UDF_DEFAULT_PREALLOC_DIR_BLOCKS 0
-#define UDFFS_DATE "2000/01/17"
-#define UDFFS_VERSION "0.9.0"
+#define UDFFS_DATE "2000/02/29"
+#define UDFFS_VERSION "0.9.1"
+
#define UDFFS_DEBUG
#ifdef UDFFS_DEBUG
#define udf_info(f, a...) \
printk (KERN_INFO "UDF-fs INFO " ## f, ## a);
-/* Prototype for fs/filesystem.c (the only thing really required in this file) */
+#ifdef __KERNEL__
+/*
+ * Function prototypes (all other prototypes included in udfdecl.h)
+ */
extern int init_udf_fs(void);
+#endif /* __KERNEL__ */
+
#endif /* !defined(_LINUX_UDF_FS_H) */
unsigned i_alloc_type : 3;
unsigned i_extended_fe : 1;
unsigned i_strat_4096 : 1;
- unsigned reserved : 27;
+ unsigned i_new_inode : 1;
+ unsigned reserved : 26;
};
#endif
/* Fileset Info */
__u16 s_serialnum;
+ /* highest UDF revision we have recorded to this media */
+ __u16 s_udfrev;
+
/* Miscellaneous flags */
__u32 s_flags;
extern void dn_neigh_init(void);
extern void dn_neigh_cleanup(void);
extern struct neighbour *dn_neigh_lookup(struct neigh_table *tbl, void *ptr);
-extern void dn_neigh_router_hello(struct sk_buff *skb);
-extern void dn_neigh_endnode_hello(struct sk_buff *skb);
+extern int dn_neigh_router_hello(struct sk_buff *skb);
+extern int dn_neigh_endnode_hello(struct sk_buff *skb);
extern void dn_neigh_pointopoint_hello(struct sk_buff *skb);
extern int dn_neigh_elist(struct net_device *dev, unsigned char *ptr, int n);
if (mm->map_count)
printk("exit_mmap: map count is %d\n", mm->map_count);
- clear_page_tables(mm, 0, USER_PTRS_PER_PGD);
+ clear_page_tables(mm, FIRST_USER_PGD_NR, USER_PTRS_PER_PGD);
}
/* Insert vm structure into process list sorted by address
zone_t *zone;
int i;
- sum = nr_lru_pages - atomic_read(&page_cache_size);
+ sum = nr_lru_pages;
for (i = 0; i < NUMNODES; i++)
for (zone = NODE_DATA(i)->node_zones; zone <= NODE_DATA(i)->node_zones+ZONE_NORMAL; zone++)
sum += zone->free_pages;
* Authors:
* Lennert Buytenhek <buytenh@gnu.org>
*
- * $Id: br_device.c,v 1.2 2000/02/24 19:48:06 davem Exp $
+ * $Id: br_device.c,v 1.3 2000/03/01 02:58:09 davem Exp $
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
static void __inline__ handle_bridge(struct sk_buff *skb,
struct packet_type *pt_prev)
{
- if (pt_prev)
- deliver_to_old_ones(pt_prev, skb, 0);
- else {
- atomic_inc(&skb->users);
- pt_prev->func(skb, skb->dev, pt_prev);
+ if (pt_prev) {
+ if (!pt_prev->data)
+ deliver_to_old_ones(pt_prev, skb, 0);
+ else {
+ atomic_inc(&skb->users);
+ pt_prev->func(skb, skb->dev, pt_prev);
+ }
}
br_handle_frame_hook(skb);
* way.
*
* Rusty Russell (C)1998 -- This code is GPL.
+ *
+ * February 2000: Modified by James Morris to have 1 queue per protocol.
*/
#include <linux/config.h>
#include <linux/netfilter.h>
#include <linux/interrupt.h>
#include <linux/if.h>
#include <linux/netdevice.h>
-#include <linux/spinlock.h>
+#include <linux/brlock.h>
#define __KERNEL_SYSCALLS__
#include <linux/unistd.h>
#define NFDEBUG(format, args...)
#endif
-/* Each queued (to userspace) skbuff has one of these. */
-struct nf_info
-{
- /* The ops struct which sent us to userspace. */
- struct nf_hook_ops *elem;
-
- /* If we're sent to userspace, this keeps housekeeping info */
- int pf;
- unsigned long mark;
- unsigned int hook;
- struct net_device *indev, *outdev;
- int (*okfn)(struct sk_buff *);
-};
-
-static rwlock_t nf_lock = RW_LOCK_UNLOCKED;
+/* Sockopts only registered and called from user context, so
+ BR_NETPROTO_LOCK would be overkill. Also, [gs]etsockopt calls may
+ sleep. */
static DECLARE_MUTEX(nf_sockopt_mutex);
struct list_head nf_hooks[NPROTO][NF_MAX_HOOKS];
static LIST_HEAD(nf_sockopts);
-static LIST_HEAD(nf_interested);
+
+/*
+ * A queue handler may be registered for each protocol. Each is protected by
+ * a long-term mutex. The handler must provide an outfn() to accept packets
+ * for queueing and must reinject all packets it receives, no matter what.
+ */
+static struct nf_queue_handler_t {
+ nf_queue_outfn_t outfn;
+ void *data;
+} queue_handler[NPROTO];
int nf_register_hook(struct nf_hook_ops *reg)
{
struct list_head *i;
-#ifdef CONFIG_NETFILTER_DEBUG
- if (reg->pf<0 || reg->pf>=NPROTO || reg->hooknum >= NF_MAX_HOOKS) {
- NFDEBUG("nf_register_hook: bad vals: pf=%i, hooknum=%u.\n",
- reg->pf, reg->hooknum);
- return -EINVAL;
- }
-#endif
NFDEBUG("nf_register_hook: pf=%i hook=%u.\n", reg->pf, reg->hooknum);
-
- write_lock_bh(&nf_lock);
+
+ br_write_lock_bh(BR_NETPROTO_LOCK);
for (i = nf_hooks[reg->pf][reg->hooknum].next;
i != &nf_hooks[reg->pf][reg->hooknum];
i = i->next) {
break;
}
list_add(&reg->list, i->prev);
- write_unlock_bh(&nf_lock);
+ br_write_unlock_bh(BR_NETPROTO_LOCK);
return 0;
}
void nf_unregister_hook(struct nf_hook_ops *reg)
{
-#ifdef CONFIG_NETFILTER_DEBUG
- if (reg->pf<0 || reg->pf>=NPROTO || reg->hooknum >= NF_MAX_HOOKS) {
- NFDEBUG("nf_unregister_hook: bad vals: pf=%i, hooknum=%u.\n",
- reg->pf, reg->hooknum);
- return;
- }
-#endif
- write_lock_bh(&nf_lock);
+ br_write_lock_bh(BR_NETPROTO_LOCK);
list_del(&reg->list);
- write_unlock_bh(&nf_lock);
+ br_write_unlock_bh(BR_NETPROTO_LOCK);
}
/* Do exclusive ranges overlap? */
struct list_head *i;
int ret = 0;
-#ifdef CONFIG_NETFILTER_DEBUG
- if (reg->pf<0 || reg->pf>=NPROTO) {
- NFDEBUG("nf_register_sockopt: bad val: pf=%i.\n", reg->pf);
- return -EINVAL;
- }
- if (reg->set_optmin > reg->set_optmax) {
- NFDEBUG("nf_register_sockopt: bad set val: min=%i max=%i.\n",
- reg->set_optmin, reg->set_optmax);
- return -EINVAL;
- }
- if (reg->get_optmin > reg->get_optmax) {
- NFDEBUG("nf_register_sockopt: bad get val: min=%i max=%i.\n",
- reg->get_optmin, reg->get_optmax);
- return -EINVAL;
- }
-#endif
if (down_interruptible(&nf_sockopt_mutex) != 0)
return -EINTR;
void nf_unregister_sockopt(struct nf_sockopt_ops *reg)
{
-#ifdef CONFIG_NETFILTER_DEBUG
- if (reg->pf<0 || reg->pf>=NPROTO) {
- NFDEBUG("nf_register_sockopt: bad val: pf=%i.\n", reg->pf);
- return;
- }
-#endif
/* No point being interruptible: we're probably in cleanup_module() */
down(&nf_sockopt_mutex);
list_del(®->list);
#include <net/tcp.h>
#include <linux/netfilter_ipv4.h>
+static void debug_print_hooks_ip(unsigned int nf_debug)
+{
+ if (nf_debug & (1 << NF_IP_PRE_ROUTING)) {
+ printk("PRE_ROUTING ");
+ nf_debug ^= (1 << NF_IP_PRE_ROUTING);
+ }
+ if (nf_debug & (1 << NF_IP_LOCAL_IN)) {
+ printk("LOCAL_IN ");
+ nf_debug ^= (1 << NF_IP_LOCAL_IN);
+ }
+ if (nf_debug & (1 << NF_IP_FORWARD)) {
+ printk("FORWARD ");
+ nf_debug ^= (1 << NF_IP_FORWARD);
+ }
+ if (nf_debug & (1 << NF_IP_LOCAL_OUT)) {
+ printk("LOCAL_OUT ");
+ nf_debug ^= (1 << NF_IP_LOCAL_OUT);
+ }
+ if (nf_debug & (1 << NF_IP_POST_ROUTING)) {
+ printk("POST_ROUTING ");
+ nf_debug ^= (1 << NF_IP_POST_ROUTING);
+ }
+ if (nf_debug)
+ printk("Crap bits: 0x%04X", nf_debug);
+ printk("\n");
+}
+
void nf_dump_skb(int pf, struct sk_buff *skb)
{
printk("skb: pf=%i %s dev=%s len=%u\n",
{
/* If it's owned, it must have gone through the
* NF_IP_LOCAL_OUT and NF_IP_POST_ROUTING.
- * Otherwise, must have gone through NF_IP_RAW_INPUT,
+ * Otherwise, must have gone through
* NF_IP_PRE_ROUTING, NF_IP_FORWARD and NF_IP_POST_ROUTING.
*/
if (skb->sk) {
}
} else {
if (skb->nf_debug != ((1 << NF_IP_PRE_ROUTING)
-#ifdef CONFIG_IP_NETFILTER_RAW_INPUT
- | (1 << NF_IP_RAW_INPUT)
-#endif
| (1 << NF_IP_FORWARD)
| (1 << NF_IP_POST_ROUTING))) {
printk("ip_finish_output: bad unowned skb = %p: ",skb);
}
}
}
-
-
#endif /*CONFIG_NETFILTER_DEBUG*/
-void nf_cacheflush(int pf, unsigned int hook, const void *packet,
- const struct net_device *indev, const struct net_device *outdev,
- __u32 packetcount, __u32 bytecount)
-{
- struct list_head *i;
-
- read_lock_bh(&nf_lock);
- for (i = nf_hooks[pf][hook].next;
- i != &nf_hooks[pf][hook];
- i = i->next) {
- if (((struct nf_hook_ops *)i)->flush)
- ((struct nf_hook_ops *)i)->flush(packet, indev,
- outdev,
- packetcount,
- bytecount);
- }
- read_unlock_bh(&nf_lock);
-}
-
/* Call get/setsockopt() */
static int nf_sockopt(struct sock *sk, int pf, int val,
char *opt, int *len, int get)
struct nf_hook_ops *elem = (struct nf_hook_ops *)*i;
switch (elem->hook(hook, skb, indev, outdev, okfn)) {
case NF_QUEUE:
- NFDEBUG("nf_iterate: NF_QUEUE for %p.\n", *skb);
return NF_QUEUE;
case NF_STOLEN:
- NFDEBUG("nf_iterate: NF_STOLEN for %p.\n", *skb);
return NF_STOLEN;
case NF_DROP:
- NFDEBUG("nf_iterate: NF_DROP for %p.\n", *skb);
return NF_DROP;
#ifdef CONFIG_NETFILTER_DEBUG
return NF_ACCEPT;
}
+int nf_register_queue_handler(int pf, nf_queue_outfn_t outfn, void *data)
+{
+ int ret;
+
+ br_write_lock_bh(BR_NETPROTO_LOCK);
+ if (queue_handler[pf].outfn)
+ ret = -EBUSY;
+ else {
+ queue_handler[pf].outfn = outfn;
+ queue_handler[pf].data = data;
+ ret = 0;
+ }
+ br_write_unlock_bh(BR_NETPROTO_LOCK);
+
+ return ret;
+}
+
+/* The caller must flush their queue before this */
+int nf_unregister_queue_handler(int pf)
+{
+ NFDEBUG("Unregistering Netfilter queue handler for pf=%d\n", pf);
+ br_write_lock_bh(BR_NETPROTO_LOCK);
+ queue_handler[pf].outfn = NULL;
+ queue_handler[pf].data = NULL;
+ br_write_unlock_bh(BR_NETPROTO_LOCK);
+ return 0;
+}
+
+/*
+ * Any packet that leaves via this function must come back
+ * through nf_reinject().
+ */
static void nf_queue(struct sk_buff *skb,
struct list_head *elem,
int pf, unsigned int hook,
struct net_device *outdev,
int (*okfn)(struct sk_buff *))
{
- struct list_head *i;
+ int status;
+ struct nf_info *info;
- struct nf_info *info = kmalloc(sizeof(*info), GFP_ATOMIC);
+ if (!queue_handler[pf].outfn) {
+ NFDEBUG("nf_queue: no one wants the packet, dropping it.\n");
+ kfree_skb(skb);
+ return;
+ }
+
+ info = kmalloc(sizeof(*info), GFP_ATOMIC);
if (!info) {
- NFDEBUG("nf_hook: OOM.\n");
+ if (net_ratelimit())
+ printk(KERN_ERR "OOM queueing packet %p\n",
+ skb);
kfree_skb(skb);
return;
}
- /* Can't do struct assignments with arrays in them. Damn. */
- info->elem = (struct nf_hook_ops *)elem;
- info->mark = skb->nfmark;
- info->pf = pf;
- info->hook = hook;
- info->okfn = okfn;
- info->indev = indev;
- info->outdev = outdev;
- skb->nfmark = (unsigned long)info;
+ *info = (struct nf_info) {
+ (struct nf_hook_ops *)elem, pf, hook, indev, outdev, okfn };
/* Bump dev refs so they don't vanish while packet is out */
if (indev) dev_hold(indev);
if (outdev) dev_hold(outdev);
- for (i = nf_interested.next; i != &nf_interested; i = i->next) {
- struct nf_interest *recip = (struct nf_interest *)i;
-
- if ((recip->hookmask & (1 << info->hook))
- && info->pf == recip->pf
- && (!recip->mark || info->mark == recip->mark)
- && (!recip->reason || skb->nfreason == recip->reason)) {
- /* FIXME: Andi says: use netlink. Hmmm... --RR */
- if (skb_queue_len(&recip->wake->skbq) >= 100) {
- NFDEBUG("nf_hook: queue to long.\n");
- goto free_discard;
- }
- /* Hand it to userspace for collection */
- skb_queue_tail(&recip->wake->skbq, skb);
- NFDEBUG("Waking up pf=%i hook=%u mark=%lu reason=%u\n",
- pf, hook, skb->nfmark, skb->nfreason);
- wake_up_interruptible(&recip->wake->sleep);
-
- return;
- }
+ status = queue_handler[pf].outfn(skb, info, queue_handler[pf].data);
+ if (status < 0) {
+ /* James M doesn't say fuck enough. */
+ if (indev) dev_put(indev);
+ if (outdev) dev_put(outdev);
+ kfree_s(info, sizeof(*info));
+ kfree_skb(skb);
+ return;
}
- NFDEBUG("nf_hook: noone wants the packet.\n");
-
- free_discard:
- if (indev) dev_put(indev);
- if (outdev) dev_put(outdev);
-
- kfree_s(info, sizeof(*info));
- kfree_skb(skb);
}
-/* nf_hook() doesn't have lock, so may give false positive. */
+/* We have BR_NETPROTO_LOCK here */
int nf_hook_slow(int pf, unsigned int hook, struct sk_buff *skb,
struct net_device *indev,
struct net_device *outdev,
unsigned int verdict;
int ret = 0;
-#ifdef CONFIG_NETFILTER_DEBUG
- if (pf < 0 || pf >= NPROTO || hook >= NF_MAX_HOOKS) {
- NFDEBUG("nf_hook: bad vals: pf=%i, hook=%u.\n",
- pf, hook);
- kfree_skb(skb);
- return -EINVAL; /* -ECODERFUCKEDUP ?*/
- }
-
- if (skb->nf_debug & (1 << hook)) {
- NFDEBUG("nf_hook: hook %i already set.\n", hook);
- nf_dump_skb(pf, skb);
- }
- skb->nf_debug |= (1 << hook);
-#endif
- read_lock_bh(&nf_lock);
elem = &nf_hooks[pf][hook];
verdict = nf_iterate(&nf_hooks[pf][hook], &skb, hook, indev,
outdev, &elem, okfn);
NFDEBUG("nf_hook: Verdict = QUEUE.\n");
nf_queue(skb, elem, pf, hook, indev, outdev, okfn);
}
- read_unlock_bh(&nf_lock);
switch (verdict) {
case NF_ACCEPT:
return ret;
}
-struct nf_waitinfo {
- unsigned int verdict;
- struct task_struct *owner;
-};
-
-/* For netfilter device. */
-void nf_register_interest(struct nf_interest *interest)
+void nf_reinject(struct sk_buff *skb, struct nf_info *info,
+ unsigned int verdict)
{
- /* First in, best dressed. */
- write_lock_bh(&nf_lock);
- list_add(&interest->list, &nf_interested);
- write_unlock_bh(&nf_lock);
-}
-
-void nf_unregister_interest(struct nf_interest *interest)
-{
- struct sk_buff *skb;
-
- write_lock_bh(&nf_lock);
- list_del(&interest->list);
- write_unlock_bh(&nf_lock);
-
- /* Blow away any queued skbs; this is overzealous. */
- while ((skb = skb_dequeue(&interest->wake->skbq)) != NULL)
- nf_reinject(skb, 0, NF_DROP);
-}
-
-void nf_getinfo(const struct sk_buff *skb,
- struct net_device **indev,
- struct net_device **outdev,
- unsigned long *mark)
-{
- const struct nf_info *info = (const struct nf_info *)skb->nfmark;
-
- *indev = info->indev;
- *outdev = info->outdev;
- *mark = info->mark;
-}
-
-void nf_reinject(struct sk_buff *skb, unsigned long mark, unsigned int verdict)
-{
- struct nf_info *info = (struct nf_info *)skb->nfmark;
struct list_head *elem = &info->elem->list;
struct list_head *i;
- read_lock_bh(&nf_lock);
-
+ /* We don't have BR_NETPROTO_LOCK here */
+ br_read_lock_bh(BR_NETPROTO_LOCK);
for (i = nf_hooks[info->pf][info->hook].next; i != elem; i = i->next) {
if (i == &nf_hooks[info->pf][info->hook]) {
/* The module which sent it to userspace is gone. */
+ NFDEBUG("%s: module disappeared, dropping packet.\n",
+ __FUNCTION__);
verdict = NF_DROP;
break;
}
}
- /* Continue traversal iff userspace said ok, and devices still
- exist... */
+ /* Continue traversal iff userspace said ok... */
if (verdict == NF_ACCEPT) {
- skb->nfmark = mark;
verdict = nf_iterate(&nf_hooks[info->pf][info->hook],
&skb, info->hook,
info->indev, info->outdev, &elem,
info->okfn);
}
- if (verdict == NF_QUEUE) {
- nf_queue(skb, elem, info->pf, info->hook,
- info->indev, info->outdev, info->okfn);
- }
- read_unlock_bh(&nf_lock);
-
switch (verdict) {
case NF_ACCEPT:
- local_bh_disable();
info->okfn(skb);
- local_bh_enable();
break;
+ case NF_QUEUE:
+ nf_queue(skb, elem, info->pf, info->hook,
+ info->indev, info->outdev, info->okfn);
+ break;
+
case NF_DROP:
kfree_skb(skb);
break;
/* Release those devices we held, or Alexey will kill me. */
if (info->indev) dev_put(info->indev);
if (info->outdev) dev_put(info->outdev);
-
+
kfree_s(info, sizeof(*info));
return;
}
-/* FIXME: Before cache is ever used, this must be implemented for real. */
-void nf_invalidate_cache(int pf)
-{
-}
-
-#ifdef CONFIG_NETFILTER_DEBUG
-
-void debug_print_hooks_ip(unsigned int nf_debug)
-{
- if (nf_debug & (1 << NF_IP_PRE_ROUTING)) {
- printk("PRE_ROUTING ");
- nf_debug ^= (1 << NF_IP_PRE_ROUTING);
- }
- if (nf_debug & (1 << NF_IP_LOCAL_IN)) {
- printk("LOCAL_IN ");
- nf_debug ^= (1 << NF_IP_LOCAL_IN);
- }
- if (nf_debug & (1 << NF_IP_FORWARD)) {
- printk("FORWARD ");
- nf_debug ^= (1 << NF_IP_FORWARD);
- }
- if (nf_debug & (1 << NF_IP_LOCAL_OUT)) {
- printk("LOCAL_OUT ");
- nf_debug ^= (1 << NF_IP_LOCAL_OUT);
- }
- if (nf_debug & (1 << NF_IP_POST_ROUTING)) {
- printk("POST_ROUTING ");
- nf_debug ^= (1 << NF_IP_POST_ROUTING);
- }
- if (nf_debug)
- printk("Crap bits: 0x%04X", nf_debug);
- printk("\n");
-}
-#endif /* CONFIG_NETFILTER_DEBUG */
-
void __init netfilter_init(void)
{
int i, h;
- for (i = 0; i < NPROTO; i++)
+ for (i = 0; i < NPROTO; i++) {
for (h = 0; h < NF_MAX_HOOKS; h++)
INIT_LIST_HEAD(&nf_hooks[i][h]);
+ }
}
o sendmsg() in the raw socket layer (yes, its for sending routing messages)
- o Better filtering of traffic in raw sockets. Aside from receiving routing
- messages, there really doesn't seem to be a lot else that raw sockets
- could be useful for... suggestions on a postcard please :-)
-
o Fix /proc for raw sockets
o Lots of testing with real applications
#include <linux/netdevice.h>
#include <linux/inet.h>
#include <linux/route.h>
+#include <linux/netfilter.h>
#include <net/sock.h>
#include <asm/segment.h>
#include <asm/system.h>
memcpy(&newsk->protinfo.dn.addr, &sk->protinfo.dn.addr, sizeof(struct sockaddr_dn));
+ /*
+ * If we are listening on a wild socket, we don't want
+ * the newly created socket on the wrong hash queue.
+ */
+ newsk->protinfo.dn.addr.sdn_flags &= ~SDF_WILD;
+
skb_pull(skb, dn_username2sockaddr(skb->data, skb->len, &newsk->protinfo.dn.addr, &type));
skb_pull(skb, dn_username2sockaddr(skb->data, skb->len, &newsk->protinfo.dn.peer, &type));
*(dn_address *)newsk->protinfo.dn.peer.sdn_add.a_addr = cb->src;
struct dn_scp *scp = &sk->protinfo.dn;
struct optdata_dn opt;
struct accessdata_dn acc;
-#ifdef CONFIG_DECNET_FW
- char tmp_fw[MAX(sizeof(struct dn_fwtest),sizeof(struct dn_fwnew))];
-#endif
int err;
if (optlen && !optval)
dn_nsp_send_disc(sk, 0x38, 0, GFP_KERNEL);
break;
-#ifdef CONFIG_DECNET_FW
- case DN_FW_APPEND:
- case DN_FW_REPLACE:
- case DN_FW_DELETE:
- case DN_FW_DELETE_NUM:
- case DN_FW_INSERT:
- case DN_FW_FLUSH:
- case DN_FW_ZERO:
- case DN_FW_CHECK:
- case DN_FW_CREATECHAIN:
- case DN_FW_DELETECHAIN:
- case DN_FW_POLICY:
-
- if (!capable(CAP_NET_ADMIN))
- return -EACCES;
- if ((optlen > sizeof(tmp_fw)) || (optlen < 1))
- return -EINVAL;
- if (copy_from_user(&tmp_fw, optval, optlen))
- return -EFAULT;
- err = dn_fw_ctl(optname, &tmp_fw, optlen);
- return err;
-#endif
default:
+#ifdef CONFIG_NETFILTER
+ return nf_setsockopt(sk, PF_DECnet, optname, optval, optlen);
+#endif
case DSO_LINKINFO:
case DSO_STREAM:
case DSO_SEQPACKET:
- return -EOPNOTSUPP;
+ return -ENOPROTOOPT;
}
return 0;
return -EFAULT;
break;
+ default:
+#ifdef CONFIG_NETFILTER
+ {
+ int val, len = *optlen;
+ val = nf_getsockopt(sk, PF_DECnet, optname,
+ optval, &len);
+ if (val >= 0)
+ val = put_user(len, optlen);
+ return val;
+ }
+#endif
case DSO_STREAM:
case DSO_SEQPACKET:
case DSO_CONACCEPT:
case DSO_CONREJECT:
- default:
- return -EOPNOTSUPP;
+ return -ENOPROTOOPT;
}
return 0;
__constant_htons(ETH_P_DNA_RT),
NULL, /* All devices */
dn_route_rcv,
- NULL,
+ (void*)1,
NULL,
};
/*
* Ethernet router hello message received
*/
-void dn_neigh_router_hello(struct sk_buff *skb)
+int dn_neigh_router_hello(struct sk_buff *skb)
{
struct rtnode_hello_message *msg = (struct rtnode_hello_message *)skb->data;
}
kfree_skb(skb);
+ return 0;
}
/*
* Endnode hello message received
*/
-void dn_neigh_endnode_hello(struct sk_buff *skb)
+int dn_neigh_endnode_hello(struct sk_buff *skb)
{
struct endnode_hello_message *msg = (struct endnode_hello_message *)skb->data;
struct neighbour *neigh;
}
kfree_skb(skb);
+ return 0;
}
struct dn_scp *scp = &sk->protinfo.dn;
unsigned short reason;
- if (skb->len != 2)
+ if (skb->len < 2)
goto out;
reason = dn_ntohs(*(__u16 *)skb->data);
* Moved output state machine into one function
* Steve Whitehouse: New output state machine
* Paul Koning: Connect Confirm message fix.
+ * Eduardo Serrat: Fix to stop dn_nsp_do_disc() sending malformed packets.
*/
/******************************************************************************
int ddl, unsigned char *dd, __u16 rem, __u16 loc)
{
struct sk_buff *skb = NULL;
- int size = 7 + (ddl ? (ddl + 1) : 0);
+ int size = 8 + ddl;
unsigned char *msg;
if ((dst == NULL) || (rem == 0)) {
msg += 2;
*(__u16 *)msg = dn_htons(reason);
msg += 2;
+ *msg++ = ddl;
if (ddl) {
- *msg++ = ddl;
memcpy(msg, dd, ddl);
}
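The hunk above fixes dn_nsp_do_disc() emitting malformed disconnect packets: the old code sized the buffer as 7 + (ddl ? ddl + 1 : 0) and only wrote the data-length byte inside the if (ddl) branch, so a disconnect with no optional data had no count field at all, while the receiver still expects one. The fix sizes the message as 8 + ddl and writes the count byte unconditionally. A standalone sketch of the corrected framing (this is an illustration of the layout, not the kernel function; the name put_disc_tail is made up for the example):

```c
#include <assert.h>
#include <string.h>

typedef unsigned short u16;

/* Build the tail of a disconnect message: a 16-bit reason code
 * followed by a one-byte data-length count and 'ddl' bytes of
 * optional data.  Mirroring the fix above, the count byte is
 * written even when ddl == 0, so the message is always
 * well-formed.  Returns the number of bytes written. */
static int put_disc_tail(unsigned char *msg, u16 reason,
                         int ddl, const unsigned char *dd)
{
    int n = 0;

    memcpy(msg + n, &reason, sizeof(reason));  /* reason code      */
    n += 2;
    msg[n++] = (unsigned char)ddl;             /* always present   */
    if (ddl) {
        memcpy(msg + n, dd, (size_t)ddl);      /* optional data    */
        n += ddl;
    }
    return n;
}
```

With ddl == 0 this emits 3 bytes (reason + a zero count); the old code emitted only the 2-byte reason, which is exactly the malformed case the changelog credits Eduardo Serrat with fixing.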
* my copying the IPv4 routing code. The
* hooks here are modified and will continue
* to evolve for a while.
+ * Steve Whitehouse : Real SMP at last :-) Also new netfilter
+ * stuff. Look out raw sockets your days
+ * are numbered!
*/
/******************************************************************************
return 0;
}
+static int dn_route_discard(struct sk_buff *skb)
+{
+ kfree_skb(skb);
+ return 0;
+}
+
+static int dn_route_ptp_hello(struct sk_buff *skb)
+{
+ dn_dev_hello(skb);
+ dn_neigh_pointopoint_hello(skb);
+ return 0;
+}
+
int dn_route_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt)
{
- struct dn_skb_cb *cb = (struct dn_skb_cb *)skb->cb;
+ struct dn_skb_cb *cb;
unsigned char flags = 0;
int padlen = 0;
__u16 len = dn_ntohs(*(__u16 *)skb->data);
if (dn == NULL)
goto dump_it;
- cb->stamp = jiffies;
- cb->iif = dev->ifindex;
+ if ((skb = skb_share_check(skb, GFP_ATOMIC)) == NULL)
+ goto out;
skb_pull(skb, 2);
flags = *skb->data;
+ cb = (struct dn_skb_cb *)skb->cb;
+ cb->stamp = jiffies;
+ cb->iif = dev->ifindex;
+
/*
* If we have padding, remove it.
*/
switch(flags & DN_RT_CNTL_MSK) {
case DN_RT_PKT_HELO:
- dn_dev_hello(skb);
- dn_neigh_pointopoint_hello(skb);
- return 0;
+ NF_HOOK(PF_DECnet, NF_DN_HELLO, skb, skb->dev, NULL, dn_route_ptp_hello);
+ goto out;
case DN_RT_PKT_L1RT:
case DN_RT_PKT_L2RT:
-#ifdef CONFIG_DECNET_ROUTER
- return dn_fib_rt_message(skb);
-#else
- break;
-#endif /* CONFIG_DECNET_ROUTER */
+ NF_HOOK(PF_DECnet, NF_DN_ROUTE, skb, skb->dev, NULL, dn_route_discard);
+ goto out;
case DN_RT_PKT_ERTH:
- dn_neigh_router_hello(skb);
- return 0;
+ NF_HOOK(PF_DECnet, NF_DN_HELLO, skb, skb->dev, NULL, dn_neigh_router_hello);
+ goto out;
case DN_RT_PKT_EEDH:
- dn_neigh_endnode_hello(skb);
- return 0;
+ NF_HOOK(PF_DECnet, NF_DN_HELLO, skb, skb->dev, NULL, dn_neigh_endnode_hello);
+ goto out;
}
} else {
if (dn->parms.state != DN_DEV_S_RU)
dump_it:
kfree_skb(skb);
+out:
return 0;
}
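The dn_route_rcv() hunk above replaces direct calls to the hello and routing handlers with NF_HOOK() dispatch, which is also why the handlers (dn_route_ptp_hello(), dn_neigh_router_hello(), dn_neigh_endnode_hello()) change from void to int: NF_HOOK() invokes them as the "okfn" continuation, and only when every registered hook returns an accept verdict. A minimal userspace sketch of that verdict-then-continue pattern (the hook table, struct pkt, and function names here are illustrative stand-ins, not the kernel's API):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative verdicts, mirroring netfilter's NF_DROP/NF_ACCEPT. */
#define NF_DROP   0
#define NF_ACCEPT 1

struct pkt { int is_hello; int freed; };

typedef unsigned (*hook_fn)(struct pkt *p);
typedef int (*ok_fn)(struct pkt *p);

#define MAX_HOOKS 4
static hook_fn hooks[MAX_HOOKS];
static int nhooks;

static void register_hook(hook_fn f) { hooks[nhooks++] = f; }

/* Sketch of the NF_HOOK() idea: run every registered hook; only if
 * all of them accept does the continuation (okfn) fire.  On a drop
 * verdict the packet is freed instead, just as a dropped skb is. */
static int nf_hook_run(struct pkt *p, ok_fn okfn)
{
    int i;

    for (i = 0; i < nhooks; i++) {
        if (hooks[i](p) == NF_DROP) {
            p->freed = 1;        /* stand-in for kfree_skb() */
            return 0;
        }
    }
    return okfn(p);              /* all hooks accepted */
}

static int hellos_seen;

/* Continuation, analogous to dn_neigh_router_hello() now
 * returning int so it can be passed to NF_HOOK(). */
static int handle_hello(struct pkt *p)
{
    hellos_seen++;
    p->freed = 1;                /* handler consumes the packet */
    return 0;
}

/* A hook that filters out everything that is not a hello. */
static unsigned only_hellos(struct pkt *p)
{
    return p->is_hello ? NF_ACCEPT : NF_DROP;
}
```

This is also why dn_route_discard() exists in the hunk: with CONFIG_DECNET_ROUTER gone from this path, L1/L2 routing messages still need an okfn, and discarding the skb with a 0 return is the accept-time continuation.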
* Authors:
* Pedro Roque <roque@di.fc.ul.pt>
*
- * $Id: ip6_output.c,v 1.25 2000/02/27 19:42:53 davem Exp $
+ * $Id: ip6_output.c,v 1.26 2000/03/01 02:58:12 davem Exp $
*
* Based on linux/net/ipv4/ip_output.c
*
#include <linux/smp_lock.h>
SOCKOPS_WRAP(ipx_dgram, PF_IPX);
-
-/* Called by protocol.c on kernel start up */
-
static struct packet_type ipx_8023_packet_type =
{
- 0, /* MUTTER ntohs(ETH_P_802_3),*/
+ __constant_htons(ETH_P_802_3),
NULL, /* All devices */
ipx_rcv,
NULL,
static struct packet_type ipx_dix_packet_type =
{
- 0, /* MUTTER ntohs(ETH_P_IPX),*/
+ __constant_htons(ETH_P_IPX),
NULL, /* All devices */
ipx_rcv,
NULL,
static unsigned char ipx_8022_type = 0xE0;
static unsigned char ipx_snap_id[5] = { 0x0, 0x0, 0x0, 0x81, 0x37 };
+
+
+/* Called by protocols.c on kernel start up */
+
void ipx_proto_init(struct net_proto *pro)
{
(void) sock_register(&ipx_family_ops);
pEII_datalink = make_EII_client();
- ipx_dix_packet_type.type = htons(ETH_P_IPX);
dev_add_pack(&ipx_dix_packet_type);
p8023_datalink = make_8023_client();
- ipx_8023_packet_type.type = htons(ETH_P_802_3);
dev_add_pack(&ipx_8023_packet_type);
if((p8022_datalink = register_8022_client(ipx_8022_type,ipx_rcv)) == NULL)
#include <asm/segment.h>
#include <asm/uaccess.h>
#include <asm/dma.h>
+#include <asm/io.h>
#include <net/pkt_sched.h>
* Status: Experimental.
* Author: Dag Brattli <dagb@cs.uit.no>
* Created at: Thu Aug 21 00:02:07 1997
- * Modified at: Sat Dec 25 21:09:47 1999
+ * Modified at: Wed Mar 1 11:28:34 2000
* Modified by: Dag Brattli <dagb@cs.uit.no>
*
- * Copyright (c) 1997, 1999 Dag Brattli <dagb@cs.uit.no>,
+ * Copyright (c) 1997, 1999-2000 Dag Brattli <dagb@cs.uit.no>,
* All Rights Reserved.
*
* This program is free software; you can redistribute it and/or
switch (event) {
case IAP_RECV_F_LST:
- iriap_send_ack(self);
+ /*iriap_send_ack(self);*/
/*LM_Idle_request(idle); */
iriap_next_call_state(self, S_WAIT_FOR_CALL);
for (p = fmt; *p != '\0'; p++) {
switch (*p) {
case 'b': /* 8 bits unsigned byte */
- buf[n++] = va_arg(args, __u8);
+ buf[n++] = (__u8)va_arg(args, int);
break;
case 's': /* 16 bits unsigned short */
- arg.s = va_arg(args, __u16);
+ arg.s = (__u16)va_arg(args, int);
put_unaligned(arg.s, (__u16 *)(buf+n)); n+=2;
break;
case 'i': /* 32 bits unsigned integer */
arg.i = va_arg(args, __u32);
put_unaligned(arg.i, (__u32 *)(buf+n)); n+=4;
break;
+#if 0
case 'c': /* \0 terminated string */
arg.c = va_arg(args, char *);
strcpy(buf+n, arg.c);
n += strlen(arg.c) + 1;
break;
+#endif
default:
va_end(args);
return -1;
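The 'b' and 's' cases above now read the variadic argument as int and narrow afterwards, because C's default argument promotions widen any argument smaller than int to int when it passes through "..."; reading it back with va_arg(args, __u8) or va_arg(args, __u16) is undefined behaviour. A minimal illustration of the corrected idiom (the pack() function and typedefs here are made up for the example, in the style of the IrDA parameter packer):

```c
#include <assert.h>
#include <stdarg.h>
#include <string.h>

typedef unsigned char  u8;
typedef unsigned short u16;

/* Pack variadic arguments into buf per a format string:
 * 'b' = one byte, 's' = a 16-bit value stored via memcpy so the
 * destination need not be aligned.  Returns bytes written, -1 on
 * a bad format character.
 *
 * Arguments narrower than int arrive promoted to int, so they
 * must be fetched with va_arg(args, int) and cast down, exactly
 * as in the hunk above. */
static int pack(unsigned char *buf, const char *fmt, ...)
{
    va_list args;
    int n = 0;
    const char *p;

    va_start(args, fmt);
    for (p = fmt; *p != '\0'; p++) {
        switch (*p) {
        case 'b': {
            u8 b = (u8)va_arg(args, int);    /* promoted from u8  */
            buf[n++] = b;
            break;
        }
        case 's': {
            u16 s = (u16)va_arg(args, int);  /* promoted from u16 */
            memcpy(buf + n, &s, sizeof(s));  /* unaligned-safe    */
            n += 2;
            break;
        }
        default:
            va_end(args);
            return -1;
        }
    }
    va_end(args);
    return n;
}
```

The same promotion rule is why the removed #if 0 'c' case was harmless as written: char * is not promoted, so va_arg(args, char *) stays correct; only the sub-int integer cases were broken.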
EXPORT_SYMBOL(nf_unregister_hook);
EXPORT_SYMBOL(nf_register_sockopt);
EXPORT_SYMBOL(nf_unregister_sockopt);
-EXPORT_SYMBOL(nf_getinfo);
EXPORT_SYMBOL(nf_reinject);
-EXPORT_SYMBOL(nf_register_interest);
-EXPORT_SYMBOL(nf_unregister_interest);
+EXPORT_SYMBOL(nf_register_queue_handler);
+EXPORT_SYMBOL(nf_unregister_queue_handler);
EXPORT_SYMBOL(nf_hook_slow);
EXPORT_SYMBOL(nf_hooks);
#endif