S: Crimea
S: UKRAINE, 334320
+N: Walt Drummond
+E: drummond@valinux.com
+D: Linux/IA-64
+S: 1382 Bordeaux Drive
+S: Sunnyvale, CA 94087
+S: USA
+
+N: Don Dugger
+E: n0ano@valinux.com
+D: Linux/IA-64
+S: 1209 Pearl Street, #12
+S: Boulder, CO 80302
+S: USA
+
N: Thomas Dunbar
E: tdunbar@vtaix.cc.vt.edu
D: TeX & METAFONT hacking/maintenance
@@ ... @@
S: S-114 53 Stockholm
S: Sweden
+N: Stephane Eranian
+E: eranian@hpl.hp.com
+D: Linux/ia64
+S: 1501 Page Mill Rd, MS 1U17
+S: Palo Alto, CA 94304
+S: USA
+
N: Paal-Kristian Engstad
E: engstad@intermetrics.com
D: Kernel smbfs (to mount WfW, NT and OS/2 network drives.)
S: 17101 Springdale Street #225
S: Huntington Beach, California 92649
S: USA
+N: Johannes Erdfelt
+E: jerdfelt@valinux.com
+D: Linux/IA-64 bootloader and kernel goop, USB
+S: 6350 Stoneridge Mall Road
+S: Pleasanton, CA 94588
+S: USA
+
N: Doug Evans
E: dje@cygnus.com
D: Wrote Xenix FS (part of standard kernel since 0.99.15)
@@ ... @@
E: Kai.Makisara@metla.fi
D: SCSI Tape Driver
+N: Asit Mallick
+E: asit.k.mallick@intel.com
+D: Linux/IA-64
+S: 2200 Mission College Blvd
+S: Santa Clara, CA 95052
+S: USA
+
N: Martin Mares
E: mj@atrey.karlin.mff.cuni.cz
W: http://atrey.karlin.mff.cuni.cz/~mj/
@@ ... @@
S: 13349 Berlin
S: Germany
+N: Nicolas Pitre
+E: nico@cam.org
+D: StrongARM SA1100 support integrator & hacker
+S: Montreal, Quebec, Canada
+
N: Emanuel Pirker
E: epirker@edu.uni-klu.ac.at
D: AIC5800 IEEE 1394, RAW I/O on 1394
@@ ... @@
S: 89020-350 Blumenau - Santa Catarina
S: Brazil
+N: Goutham Rao
+E: goutham.rao@intel.com
+D: Linux/IA-64
+S: 2200 Mission College Blvd
+S: Santa Clara, CA 95052
+S: USA
+
N: Eric S. Raymond
E: esr@thyrsus.com
W: http://www.tuxedo.org/~esr/
--- /dev/null
+++ b/CREDITS
+ This is at least a partial credits-file of people that have
+ contributed to the Linux project. It is sorted by name and
+ formatted to allow easy grepping and beautification by
+ scripts. The fields are: name (N), email (E), web-address
+ (W), PGP key ID and fingerprint (P), description (D), and
+ snail-mail address (S).
+ Thanks,
+
+ Linus
+----------
+
+N: Matti Aarnio
+E: mea@nic.funet.fi
+D: Alpha systems hacking, IPv6 and other network related stuff
+D: One of the assisting postmasters for vger.rutgers.edu's lists
+S: (ask for current address)
+S: Finland
+
+N: Dragos Acostachioaie
+E: dragos@iname.com
+W: http://www.arbornet.org/~dragos
+D: /proc/sysvipc
+S: C. Negri 6, bl. D3
+S: Iasi 6600
+S: Romania
+
+N: Dave Airlie
+E: airlied@linux.ie
+W: http://www.csn.ul.ie/~airlied
+D: NFS over TCP patches
+S: University of Limerick
+S: Ireland
+
+N: Tigran A. Aivazian
+E: tigran@ocston.org
+W: http://www.ocston.org/~tigran
+D: BFS filesystem
+S: United Kingdom
+
+N: Werner Almesberger
+E: werner.almesberger@lrc.di.epfl.ch
+D: dosfs, LILO, some fd features, various other hacks here and there
+S: Ecole Polytechnique Federale de Lausanne
+S: DI-LRC
+S: INR (Ecublens)
+S: CH-1015 Lausanne
+S: Switzerland
+
+N: Tim Alpaerts
+E: tim_alpaerts@toyota-motor-europe.com
+D: 802.2 class II logical link control layer,
+D: the humble start of an opening towards the IBM SNA protocols
+S: Klaproosstraat 72 c 10
+S: B-2610 Wilrijk-Antwerpen
+S: Belgium
+
+N: C. Scott Ananian
+E: cananian@alumni.princeton.edu
+W: http://www.pdos.lcs.mit.edu/~cananian
+P: 1024/85AD9EED AD C0 49 08 91 67 DF D7 FA 04 1A EE 09 E8 44 B0
+D: Unix98 pty support.
+D: APM update to 1.2 spec.
+
+N: Erik Andersen
+E: andersee@debian.org
+W: http://www.xmission.com/~andersen
+P: 1024/FC4CFFED 78 3C 6A 19 FA 5D 92 5A FB AC 7B A5 A5 E1 FF 8E
+D: Maintainer of ide-cd and Uniform CD-ROM driver,
+D: ATAPI CD-Changer support, Major 2.1.x CD-ROM update.
+S: 4538 South Carnegie Tech Street
+S: Salt Lake City, Utah 84120
+S: USA
+
+N: H. Peter Anvin
+E: hpa@zytor.com
+W: http://www.zytor.com/~hpa/
+P: 2047/2A960705 BA 03 D3 2C 14 A8 A8 BD 1E DF FE 69 EE 35 BD 74
+D: Author of the SYSLINUX boot loader, maintainer of the linux.* news
+D: hierarchy and the Linux Device List; various kernel hacks
+S: 4390 Albany Drive #46
+S: San Jose, California 95129
+S: USA
+
+N: Andrea Arcangeli
+E: andrea@e-mind.com
+W: http://e-mind.com/~andrea/
+P: 1024/CB4660B9 CC A0 71 81 F4 A0 63 AC C0 4B 81 1D 8C 15 C8 E5
+D: Parport hacker
+D: Implemented a workaround for some interrupt buggy printers
+D: Author of pscan that helps to fix lp/parport bug
+D: Author of lil (Linux Interrupt Latency benchmark)
+D: Fixed the shm swap deallocation at swapoff time (try_to_unuse message)
+D: Various other kernel hacks
+S: Via Ciaclini 26
+S: Imola 40026
+S: Italy
+
+N: Derek Atkins
+E: warlord@MIT.EDU
+D: Linux-AFS Port, random kernel hacker,
+D: VFS fixes (new notify_change in particular)
+D: Moving all VFS access checks into the file systems
+S: MIT Room E15-341
+S: 20 Ames Street
+S: Cambridge, Massachusetts 02139
+S: USA
+
+N: Michel Aubry
+E: giovanni <giovanni@sudfr.com>
+D: Aladdin 1533/1543(C) chipset IDE
+D: VIA MVP-3/TX Pro III chipset IDE
+
+N: Jens Axboe
+E: axboe@image.dk
+D: Linux CD-ROM maintainer
+D: jiffies wrap fixes + schedule timeouts depending on HZ == 100
+S: Peter Bangs Vej 258, 2TH
+S: 2500 Valby
+S: Denmark
+
+N: John Aycock
+E: aycock@cpsc.ucalgary.ca
+D: Adaptec 274x driver
+S: Department of Computer Science
+S: University of Calgary
+S: Calgary, Alberta
+S: Canada
+
+N: Ralf Baechle
+E: ralf@gnu.org
+P: 1024/AF7B30C1 CF 97 C2 CC 6D AE A7 FE C8 BA 9C FC 88 DE 32 C3
+D: Linux/MIPS port
+D: Linux/68k hacker
+S: Hauptstrasse 19
+S: 79837 St. Blasien
+S: Germany
+
+N: Krishna Balasubramanian
+E: balasub@cis.ohio-state.edu
+D: Wrote SYS V IPC (part of standard kernel since 0.99.10)
+
+N: Dario Ballabio
+E: dario@milano.europe.dg.com
+D: Author and maintainer of the Ultrastor 14F/34F SCSI driver
+D: Author and maintainer of the EATA ISA/EISA/PCI SCSI driver
+S: Data General Corporation
+S: Milano
+S: Italy
+
+N: Arindam Banerji
+E: axb@cse.nd.edu
+D: Contributed ESDI driver routines needed to port LINUX to the PS/2 MCA.
+S: Department of Computer Science & Eng.
+S: University of Notre Dame
+S: Notre Dame, Indiana
+S: USA
+
+N: James Banks
+E: james@sovereign.org
+D: TLAN network driver
+D: Logitech Busmouse driver
+
+N: Krzysztof G. Baranowski
+E: kgb@manjak.knm.org.pl
+P: 1024/FA6F16D1 96 D1 1A CF 5F CA 69 EC F9 4F 36 1F 6D 60 7B DA
+D: Maintainer of the System V file system.
+D: System V fs update for 2.1.x dcache.
+D: Forward ported a couple of SCSI drivers.
+D: Various bugfixes.
+S: ul. Koscielna 12a
+S: 62-300 Wrzesnia
+S: Poland
+
+N: Paul Barton-Davis
+E: pbd@op.net
+D: Driver for WaveFront soundcards (Turtle Beach Maui, Tropez, Tropez+)
+D: Various bugfixes and changes to sound drivers
+S: USA
+
+N: Carlos Henrique Bauer
+E: chbauer@acm.org
+E: bauer@atlas.unisinos.br
+D: Some new sysctl entries for the parport driver.
+D: New sysctl function for handling unsigned longs
+S: Universidade do Vale do Rio dos Sinos - UNISINOS
+S: DSI/IDASI
+S: Av. Unisinos, 950
+S: 93022000 Sao Leopoldo RS
+S: Brazil
+
+N: Peter Bauer
+E: 100136.3530@compuserve.com
+D: Driver for depca-ethernet-board
+S: 69259 Wilhelmsfeld
+S: Rainweg 15
+S: Germany
+
+N: Fred Baumgarten
+E: dc6iq@insl1.etec.uni-karlsruhe.de
+E: dc6iq@adacom.org
+E: dc6iq@db0ais.#hes.deu.eu (packet radio)
+D: NET-2 & netstat(8)
+S: Soevener Strasse 11
+S: 53773 Hennef
+S: Germany
+
+N: Donald Becker
+E: becker@cesdis.gsfc.nasa.gov
+D: General low-level networking hacker
+D: Most of the ethercard drivers
+D: Original author of the NFS server
+S: USRA Center of Excellence in Space Data and Information Sciences
+S: Code 930.5, Goddard Space Flight Center
+S: Greenbelt, Maryland 20771
+S: USA
+
+N: Randolph Bentson
+E: bentson@grieg.seaslug.org
+W: http://www.aa.net/~bentson/
+P: 1024/39ED5729 5C A8 7A F4 B2 7A D1 3E B5 3B 81 CF 47 30 11 71
+D: Author of driver for Cyclom-Y and Cyclades-Z async mux
+S: 2322 37th Ave SW
+S: Seattle, Washington 98126-2010
+S: USA
+
+N: Stephen R. van den Berg (AKA BuGless)
+E: berg@pool.informatik.rwth-aachen.de
+D: General kernel, gcc, and libc hacker
+D: Specialisation: tweaking, ensuring portability, tweaking, cleaning,
+D: tweaking and occasionally debugging :-)
+S: Bouwensstraat 22
+S: 6369 BG Simpelveld
+S: The Netherlands
+
+N: Hennus Bergman
+E: hennus@cybercomm.nl
+W: http://www.cybercomm.nl/~hennus/
+P: 1024/77D50909 76 99 FD 31 91 E1 96 1C 90 BB 22 80 62 F6 BD 63
+D: Author and maintainer of the QIC-02 tape driver
+S: The Netherlands
+
+N: Ross Biro
+E: bir7@leland.Stanford.Edu
+D: Original author of the Linux networking code
+
+N: Anton Blanchard
+E: anton@progsoc.uts.edu.au
+W: http://www.progsoc.uts.edu.au/~anton/
+P: 1024/8462A731 4C 55 86 34 44 59 A7 99 2B 97 88 4A 88 9A 0D 97
+D: sun4 port
+S: 47 Robert Street
+S: Marrickville NSW 2204
+S: Australia
+
+N: Philip Blundell
+E: philb@gnu.org
+D: Linux/ARM hacker
+D: Device driver hacker (eexpress, 3c505, c-qcam, ...)
+D: m68k port to HP9000/300
+D: AUN network protocols
+D: Co-architect of the parallel port sharing system
+S: Nexus Electronics Ltd
+S: 10 St Barnabas Road, Cambridge CB1 2BY
+S: United Kingdom
+
+N: Thomas Bogendörfer
+E: tsbogend@alpha.franken.de
+D: PCnet32 driver, SONIC driver, JAZZ_ESP driver
+D: newport abscon driver, g364 framebuffer driver
+D: strace for Linux/Alpha
+D: Linux/MIPS hacker
+S: Schafhofstr. 40
+S: 90556 Cadolzburg
+S: Germany
+
+N: Bill Bogstad
+E: bogstad@pobox.com
+D: wrote /proc/self hack, minor samba & dosemu patches
+
+N: Axel Boldt
+E: boldt@math.ucsb.edu
+W: http://math-www.uni-paderborn.de/~axel/
+D: Configuration help text support
+D: Linux CD and Support Giveaway List
+
+N: Erik Inge Bolsø
+E: knan@mo.himolde.no
+D: Misc kernel hacks
+
+N: Andreas E. Bombe
+E: andreas.bombe@munich.netsurf.de
+W: http://home.pages.de/~andreas.bombe/
+P: 1024/04880A44 72E5 7031 4414 2EB6 F6B4 4CBD 1181 7032 0488 0A44
+D: IEEE 1394 subsystem rewrite and maintainer
+D: Texas Instruments PCILynx IEEE 1394 driver
+
+N: Zoltán Böszörményi
+E: zboszor@mail.externet.hu
+D: MTRR emulation with Cyrix style ARR registers, Athlon MTRR support
+
+N: John Boyd
+E: boyd@cis.ohio-state.edu
+D: Co-author of wd7000 SCSI driver
+S: 101 Curl Drive #591
+S: Columbus, Ohio 43210
+S: USA
+
+N: Peter Braam
+E: braam@cs.cmu.edu
+W: http://coda.cs.cmu.edu/~braam
+D: Coda Filesystem
+S: Dept of Computer Science
+S: 5000 Forbes Avenue
+S: Pittsburgh, Pennsylvania 15213
+S: USA
+
+N: Derrick J. Brashear
+E: shadow@dementia.org
+W: http://www.dementia.org/~shadow
+P: 512/71EC9367 C5 29 0F BC 83 51 B9 F0 BC 05 89 A0 4F 1F 30 05
+D: Author of Sparc CS4231 audio driver, random Sparc work
+S: 403 Gilmore Avenue
+S: Trafford, Pennsylvania 15085
+S: USA
+
+N: Dag Brattli
+E: dagb@cs.uit.no
+W: http://www.cs.uit.no/~dagb
+D: IrDA Subsystem
+S: 19. Wellington Road
+S: Lancaster, LA1 4DN
+S: England, UK
+
+N: Andries Brouwer
+E: aeb@cwi.nl
+D: random Linux hacker
+S: Bessemerstraat 21
+S: Amsterdam
+S: The Netherlands
+
+N: Zach Brown
+E: zab@zabbo.net
+D: maestro pci sound
+
+N: Ray Burr
+E: ryb@nightmare.com
+D: Original author of Amiga FFS filesystem
+S: Orlando, Florida
+S: USA
+
+N: Michael Callahan
+E: callahan@maths.ox.ac.uk
+D: PPP for Linux
+S: The Mathematical Institute
+S: 25-29 St Giles
+S: Oxford
+S: United Kingdom
+
+N: Remy Card
+E: Remy.Card@masi.ibp.fr
+E: Remy.Card@linux.org
+D: Extended file system [defunct] designer and developer
+D: Second extended file system designer and developer
+S: Institut Blaise Pascal
+S: 4 Place Jussieu
+S: 75252 Paris Cedex 05
+S: France
+
+N: Ulf Carlsson
+D: SGI Indy audio (HAL2) drivers
+E: ulfc@bun.falkenberg.se
+
+N: Ed Carp
+E: ecarp@netcom.com
+D: uucp, elm, pine, pico port
+D: cron, at(1) developer
+S: 48287 Sawleaf
+S: Fremont, California 94539
+S: USA
+
+N: Gordon Chaffee
+E: chaffee@cs.berkeley.edu
+W: http://bmrc.berkeley.edu/people/chaffee/
+D: vfat, fat32, joliet, native language support
+S: 3700 Warwick Road
+S: Fremont, California 94555
+S: USA
+
+N: Chih-Jen Chang
+E: chihjenc@scf.usc.edu
+E: chihjen@iis.sinica.edu.tw
+D: IGMP(Internet Group Management Protocol) version 2
+S: 3F, 65 Tajen street
+S: Tamsui town, Taipei county,
+S: Taiwan 251
+S: Republic of China
+
+N: Raymond Chen
+E: raymondc@microsoft.com
+D: Author of Configure script
+S: 14509 NE 39th Street #1096
+S: Bellevue, Washington 98007
+S: USA
+
+N: Stuart Cheshire
+E: cheshire@cs.stanford.edu
+D: Author of Starmode Radio IP (STRIP) driver
+D: Originator of design for new combined interrupt handlers
+S: William Gates Department
+S: Stanford University
+S: Stanford, California 94305
+S: USA
+
+N: Juan Jose Ciarlante
+W: http://juanjox.linuxhq.com/
+E: jjciarla@raiz.uncu.edu.ar
+E: jjo@mendoza.gov.ar
+D: Network driver alias support
+D: IP masq hashing and app modules
+D: IP masq 2.1 features and bugs
+S: Las Cuevas 2385 - Bo Guemes
+S: Las Heras, Mendoza CP 5539
+S: Argentina
+
+N: Hamish Coleman
+E: hamish@zot.apana.org.au
+D: SEEQ8005 network driver
+S: 98 Paxton Street
+S: East Malvern, Victoria, 3145
+S: Australia
+
+N: Neil Conway
+E: nconway.list@ukaea.org.uk
+D: Assorted sched/mm titbits
+S: Oxfordshire, UK.
+
+N: Alan Cox
+W: http://roadrunner.swansea.linux.org.uk/alan.shtml
+E: alan@lxorguk.ukuu.org.uk
+E: alan@www.linux.org.uk (linux.org.uk stuff)
+E: Alan.Cox@linux.org (if others fail)
+D: Linux Networking (0.99.10->2.0.29)
+D: Original Appletalk, AX.25, and IPX code
+D: Current 3c501 hacker. >>More 3c501 info/tricks wanted<<.
+D: Watchdog timer drivers
+D: Linux/SMP x86 (up to 2.0 only)
+D: Initial Mac68K port
+D: Video4Linux design, bw-qcam and PMS driver ports.
+D: 2.1.x modular sound
+S: c/o Red Hat UK Ltd
+S: Alexandra House
+S: Alexandra Terrace
+S: Guildford, GU1 3DA
+S: United Kingdom
+
+N: Laurence Culhane
+E: loz@holmes.demon.co.uk
+D: Wrote the initial alpha SLIP code
+S: 81 Hood Street
+S: Northampton
+S: NN1 3QT
+S: United Kingdom
+
+N: Ray Dassen
+E: jdassen@wi.LeidenUniv.nl
+W: http://www.wi.leidenuniv.nl/~jdassen/
+P: 1024/672D05C1 DD 60 32 60 F7 90 64 80 E7 6F D4 E4 F8 C9 4A 58
+D: Debian GNU/Linux: www.debian.org maintainer, FAQ co-maintainer,
+D: packages testing, nit-picking & fixing. Enjoying BugFree (TM) kernels.
+S: Zuidsingel 10A
+S: 2312 SB Leiden
+S: The Netherlands
+
+N: David Davies
+E: davies@wanton.lkg.dec.com
+D: Network driver author - depca, ewrk3 and de4x5
+D: Wrote shared interrupt support
+S: Digital Equipment Corporation
+S: 550 King Street
+S: Littleton, Massachusetts 01460
+S: USA
+
+N: Wayne Davison
+E: davison@borland.com
+D: Second extended file system co-designer
+
+N: Terry Dawson
+E: terry@perf.no.itg.telecom.com.au
+E: terry@albert.vk2ktj.ampr.org (Amateur Radio use only)
+D: trivial hack to add variable address length routing to Rose.
+D: AX25-HOWTO, HAM-HOWTO, IPX-HOWTO, NET-2-HOWTO
+D: ax25-utils maintainer.
+
+N: Peter Denison
+E: peterd@pnd-pc.demon.co.uk
+W: http://www.pnd-pc.demon.co.uk/promise/
+D: Promise DC4030VL caching HD controller drivers
+
+N: Todd J. Derr
+E: tjd@fore.com
+W: http://www.wordsmith.org/~tjd
+D: Random console hacks and other miscellaneous stuff
+S: 3000 FORE Drive
+S: Warrendale, Pennsylvania 15086
+S: USA
+
+N: Alex deVries
+E: adevries@thepuffingroup.com
+D: Various SGI parts, bits of HAL2 and Newport, PA-RISC Linux.
+S: 41.5 William Street
+S: Ottawa, Ontario
+S: K1N 6Z9
+S: CANADA
+
+N: Eddie C. Dost
+E: ecd@skynet.be
+D: Linux/Sparc kernel hacker
+D: Linux/Sparc maintainer
+S: Rue de la Chapelle 51
+S: 4850 Moresnet
+S: Belgium
+
+N: Cort Dougan
+E: cort@ppc.kernel.org
+W: http://www.ppc.kernel.org/~cort/
+D: PowerPC
+S: Computer Science Department
+S: New Mexico Tech
+S: Socorro, New Mexico 87801
+S: USA
+
+N: Oleg Drokin
+E: green@ccssu.crimea.ua
+W: http://www.ccssu.crimea.ua/~green
+D: Cleaning up sound drivers.
+S: Skvoznoy per., 14a
+S: Evpatoria
+S: Crimea
+S: UKRAINE, 334320
+
+N: Thomas Dunbar
+E: tdunbar@vtaix.cc.vt.edu
+D: TeX & METAFONT hacking/maintenance
+S: Dean, Graduate School
+S: Virginia Tech
+S: Blacksburg, Virginia 24061
+S: USA
+
+N: Randy Dunlap
+E: randy.dunlap@intel.com
+W: http://home.att.net/~randy.dunlap/
+W: http://www.linux-usb.org
+D: Linux-USB subsystem, USB core/UHCI/printer/storage drivers
+S: 5200 NE Elam Young Pkwy., M/S HF3-77
+S: Hillsboro, Oregon 97124
+S: USA
+
+N: Cyrus Durgin
+E: cider@speakeasy.org
+W: http://www.speakeasy.org/~cider/
+D: implemented kmod
+
+N: Torsten Duwe
+E: Torsten.Duwe@informatik.uni-erlangen.de
+D: Part-time kernel hacker
+D: The Linux Support Team Erlangen
+S: Grevenbroicher Str. 17
+S: 47807 Krefeld
+S: Germany
+
+N: Tom Dyas
+E: tdyas@eden.rutgers.edu
+D: minor hacks and some sparc port stuff
+S: New Jersey
+S: USA
+
+N: Drew Eckhardt
+E: drew@PoohSticks.ORG
+D: SCSI code
+D: Assorted snippets elsewhere
+D: Boot sector "..." printing
+S: 2037 Walnut #6
+S: Boulder, Colorado 80302
+S: USA
+
+N: Heiko Eissfeldt
+E: heiko@colossus.escape.de
+E: heiko@unifix.de
+D: verify_area stuff, generic SCSI fixes
+D: SCSI Programming HOWTO
+D: POSIX.1 compliance testing
+S: Unifix Software GmbH
+S: Bueltenweg 27a
+S: D-38106 Braunschweig
+S: Germany
+
+N: Bjorn Ekwall
+E: bj0rn@blox.se
+W: http://www.pi.se/blox/
+D: Extended support for loadable modules
+D: D-Link pocket adapter drivers
+S: Grevgatan 11
+S: S-114 53 Stockholm
+S: Sweden
+
+N: Paal-Kristian Engstad
+E: engstad@intermetrics.com
+D: Kernel smbfs (to mount WfW, NT and OS/2 network drives.)
+S: 17101 Springdale Street #225
+S: Huntington Beach, California 92649
+S: USA
+
+N: Doug Evans
+E: dje@cygnus.com
+D: Wrote Xenix FS (part of standard kernel since 0.99.15)
+
+N: Riccardo Facchetti
+E: fizban@tin.it
+P: 1024/6E657BB5 AF 22 90 33 78 76 04 8B AF F9 97 1E B5 E2 65 30
+D: Audio Excel DSP 16 init driver author
+D: libmodem author
+D: Yet Another Micro Monitor port and current maintainer
+D: First ELF-HOWTO author
+D: random kernel hacker
+S: Via Paolo VI n.29
+S: 23900 - LECCO (Lc)
+S: Italy
+
+N: Rik Faith
+E: faith@cs.unc.edu
+E: faith@acm.org
+D: Author: Future Domain TMC-16x0 SCSI driver
+D: Debugging: SCSI code; Cyclades serial driver; APM driver
+D: Debugging: XFree86 Mach 32 server, accelerated server code
+
+N: János Farkas
+E: chexum@shadow.banki.hu
+D: romfs, various (mostly networking) fixes
+P: 1024/F81FB2E1 41 B7 E4 E6 3E D4 A6 71 6D 9C F3 9F F2 BF DF 6E
+S: Madarász Viktor utca 25
+S: 1131 Budapest
+S: Hungary
+
+N: Jürgen Fischer
+E: fischer@norbit.de (Jürgen Fischer)
+D: Author of Adaptec AHA-152x SCSI driver
+S: Schulstraße 18
+S: 26506 Norden
+S: Germany
+
+N: Jeremy Fitzhardinge
+E: jeremy@zip.com.au
+D: Improved mmap and munmap handling
+D: General mm minor tidyups
+S: 67 Surrey St.
+S: Darlinghurst, Sydney
+S: New South Wales 2010
+S: Australia
+
+N: Ralf Flaxa
+E: rfflaxa@immd4.informatik.uni-erlangen.de
+D: The Linux Support Team Erlangen
+D: Creator of LST distribution
+D: Author of installation tool LISA
+S: Pfitznerweg 6
+S: 74523 Schwaebisch Hall
+S: Germany
+
+N: Lawrence Foard
+E: entropy@world.std.com
+D: Floppy track reading, fs code
+S: 217 Park Avenue, Suite 108
+S: Worcester, Massachusetts 01609
+S: USA
+
+N: Karl Fogel
+E: kfogel@cs.oberlin.edu
+D: Contributor, Linux User's Guide
+S: 1123 North Oak Park Avenue
+S: Oak Park, Illinois 60302
+S: USA
+
+N: Daniel J. Frasnelli
+E: dfrasnel@alphalinux.org
+W: http://www.alphalinux.org/
+P: 1024/3EF87611 B9 F1 44 50 D3 E8 C2 80 DA E5 55 AA 56 7C 42 DA
+D: DEC Alpha hacker
+D: Miscellaneous bug squisher
+
+N: Jim Freeman
+E: jfree@sovereign.org
+W: http://www.sovereign.org/
+D: Initial GPL'd Frame Relay driver
+D: Dynamic PPP devices
+D: Sundry modularizations (PPP, IPX, ...) and fixes
+
+N: Bob Frey
+E: bobf@advansys.com
+D: AdvanSys SCSI driver
+S: 1150 Ringwood Court
+S: San Jose, California 95131
+S: USA
+
+N: Nigel Gamble
+E: nigel@nrg.org
+E: nigel@sgi.com
+D: Interrupt-driven printer driver
+S: 120 Alley Way
+S: Mountain View, California 94040
+S: USA
+
+N: Jeff Garzik
+E: jgarzik@mandrakesoft.com
+
+N: Jacques Gelinas
+E: jacques@solucorp.qc.ca
+D: Author of the Umsdos file system
+S: 1326 De Val-Brillant
+S: Laval, Quebec
+S: Canada H7Y 1V9
+
+N: David Gentzel
+E: gentzel@telerama.lm.com
+D: Original BusLogic driver and original UltraStor driver
+S: Whitfield Software Services
+S: 600 North Bell Avenue, Suite 160
+S: Carnegie, Pennsylvania 15106-4304
+S: USA
+
+N: Philip Gladstone
+E: philip@raptor.com
+D: Kernel / timekeeping stuff
+
+N: Richard E. Gooch
+E: rgooch@atnf.csiro.au
+D: parent process death signal to children
+D: prctl() syscall
+D: /proc/mtrr support to manipulate MTRRs on Intel P6 family
+S: CSIRO Australia Telescope National Facility
+S: P.O. Box 76, Epping
+S: New South Wales, 2121
+S: Australia
+
+N: Dmitry S. Gorodchanin
+E: pgmdsg@ibi.com
+D: RISCom/8 driver, misc kernel fixes.
+S: 4 Main Street
+S: Woodbridge, Connecticut 06525
+S: USA
+
+N: Paul Gortmaker
+E: p_gortmaker@yahoo.com
+D: Author of RTC driver & several net drivers, Ethernet & BootPrompt Howto.
+D: Made support for modules, ramdisk, generic-serial, etc. optional.
+D: Transformed old user space bdflush into 1st kernel thread - kflushd.
+D: Many other patches, documentation files, mini kernels, utilities, ...
+
+N: John E. Gotts
+E: jgotts@linuxsavvy.com
+D: kernel hacker
+S: 8124 Constitution Apt. 7
+S: Sterling Heights, Michigan 48313
+S: USA
+
+N: Tristan Greaves
+E: Tristan.Greaves@icl.com
+E: tmg296@ecs.soton.ac.uk
+W: http://www.ecs.soton.ac.uk/~tmg296
+D: Miscellaneous ipv4 sysctl patches
+S: 15 Little Mead
+S: Denmead
+S: Hampshire
+S: PO7 6HS
+S: United Kingdom
+
+N: Michael A. Griffith
+E: grif@cs.ucr.edu
+W: http://www.cs.ucr.edu/~grif
+D: Loopback speedup, qlogic SCSI hacking, VT_LOCKSWITCH
+S: Department of Computer Science
+S: University of California, Riverside
+S: Riverside, California 92521-0304
+S: USA
+
+N: Grant Guenther
+E: grant@torque.net
+W: http://www.torque.net/linux-pp.html
+D: original author of ppa driver for parallel port ZIP drive
+D: original architect of the parallel-port sharing scheme
+D: PARIDE subsystem: drivers for parallel port IDE & ATAPI devices
+S: 44 St. Joseph Street, Suite 506
+S: Toronto, Ontario, M4Y 2W4
+S: Canada
+
+N: Richard Günther
+E: richard.guenther@student.uni-tuebingen.de
+P: 2048/2E829319 2F 83 FC 93 E9 E4 19 E2 93 7A 32 42 45 37 23 57
+D: binfmt_misc
+S: Fichtenweg 3/511
+S: 72076 Tübingen
+S: Germany
+
+N: Danny ter Haar
+E: dth@cistron.nl
+D: /proc/procinfo, reboot on panic, kernel pre-patch tester ;)
+S: Cistron Internet Services
+S: PO-Box 297
+S: 2400 AG, Alphen aan den Rijn
+S: The Netherlands
+
+N: Bruno Haible
+E: haible@ma2s2.mathematik.uni-karlsruhe.de
+D: SysV FS, shm swapping, memory management fixes
+S: 17 rue Danton
+S: F - 94270 Le Kremlin-Bicêtre
+S: France
+
+N: Greg Hankins
+E: gregh@cc.gatech.edu
+D: fixed keyboard driver to separate LED and locking status
+S: 25360 Georgia Tech Station
+S: Atlanta, Georgia 30332
+S: USA
+
+N: Angelo Haritsis
+E: ah@computer.org
+D: kernel patches (serial, watchdog)
+D: xringd, vuzkern, greekXfonts
+S: 77 Clarence Mews
+S: London SE16 1GD
+S: United Kingdom
+
+N: Kai Harrekilde-Petersen
+E: khp@olicom.dk
+D: Original author of the ftape-HOWTO, i82078 fdc detection code.
+
+N: Bart Hartgers
+E: bart@etpmod.phys.tue.nl
+D: MTRR emulation with Centaur MCRs
+S: Gen Stedmanstraat 212
+S: 5623 HZ Eindhoven
+S: The Netherlands
+
+N: Andrew Haylett
+E: ajh@primag.co.uk
+D: Selection mechanism
+
+N: Andre Hedrick
+E: andre@suse.com
+D: Random SMP kernel hacker...
+D: Uniform Multi-Platform E-IDE driver
+D: AEC6210UF Ultra33
+D: Aladdin 1533/1543(C) chipset
+D: Active-Chipset madness..........
+D: HighPoint HPT343/5 Ultra/33 & HPT366 Ultra/66 chipsets
+D: Intel PIIX chipset
+D: Promise PDC20246/20247 & PDC20262 chipsets
+D: SiS5513 Ultra/66/33 chipsets
+D: VIA 82C586/596/686 chipsets
+S: 580 Second Street, Suite 2
+S: Oakland, CA
+S: USA
+
+N: Jochen Hein
+E: jochen@jochen.org
+P: 1024/4A27F015 25 72 FB E3 85 9F DE 3B CB 0A DA DA 40 77 05 6C
+D: National Language Support
+D: Linux Internationalization Project
+D: German Localization for Linux and GNU software
+S: Frankenstraße 33
+S: 34131 Kassel
+S: Germany
+
+N: Richard Henderson
+E: rth@twiddle.net
+E: rth@cygnus.com
+D: Alpha hacker, kernel and userland
+S: 50 E. Middlefield #10
+S: Mountain View, California 94043
+S: USA
+
+N: Benjamin Herrenschmidt
+E: bh40@calva.net
+E: benh@mipsys.com
+D: PowerMac booter (BootX)
+D: Additional PowerBook support
+S: 22, rue des Marguettes
+S: 75012 Paris
+S: France
+
+N: Sebastian Hetze
+E: she@lunetix.de
+D: German Linux Documentation,
+D: Organization of German Linux Conferences
+S: Danckelmannstr. 48
+S: 14059 Berlin
+S: Germany
+
+N: David Hinds
+E: dhinds@zen.stanford.edu
+W: http://tao.stanford.edu/~dhinds
+D: PCMCIA and CardBus stuff, PCMCIA-HOWTO, PCMCIA client drivers
+S: 2019 W. Middlefield Rd #1
+S: Mountain View, CA 94043
+S: USA
+
+N: Michael Hipp
+E: hippm@informatik.uni-tuebingen.de
+D: drivers for the racal ni5210 & ni6510 Ethernet-boards
+S: Talstr. 1
+S: D - 72072 Tuebingen
+S: Germany
+
+N: Jauder Ho
+E: jauderho@carumba.com
+W: http://www.carumba.com/
+D: bug toaster (A1 sauce makes all the difference)
+D: Random linux hacker
+
+N: Dirk Hohndel
+E: hohndel@suse.de
+D: The XFree86[tm] Project
+D: USB mouse maintainer
+S: SuSE Rhein/Main AG
+S: Mergenthalerallee 45-47
+S: 65760 Eschborn
+S: Germany
+
+N: Kenji Hollis
+E: kenji@bitgate.com
+W: http://www.bitgate.com/
+D: Berkshire PC Watchdog Driver
+D: Small/Industrial Driver Project
+
+N: Nick Holloway
+E: Nick.Holloway@alfie.demon.co.uk
+E: Nick.Holloway@parallax.co.uk
+W: http://www.alfie.demon.co.uk/
+P: 1024/75C49395 3A F0 E3 4E B7 9F E0 7E 47 A3 B0 D5 68 6A C2 FB
+D: Occasional Linux hacker...
+S: 15 Duke Street
+S: Chapelfields
+S: Coventry
+S: CV5 8BZ
+S: United Kingdom
+
+N: Ron Holt
+E: ron@sovereign.org
+W: http://www.holt.org/
+W: http://www.ronholt.com/
+D: Kernel development
+D: Kernel LDT modifications to support Wabi and Wine
+S: Holtron Internetics, Inc.
+S: 998 East 900 South, Suite 26
+S: Provo, Utah 84606-5607
+S: USA
+
+N: Rob W. W. Hooft
+E: hooft@EMBL-Heidelberg.DE
+D: Shared libs for graphics-tools and for the f2c compiler
+D: Some kernel programming on the floppy and sound drivers in early days
+D: Some other hacks to get different kinds of programs to work for linux
+S: Panoramastrasse 18
+S: D-69126 Heidelberg
+S: Germany
+
+N: Christopher Horn
+E: chorn@warwick.net
+D: Miscellaneous sysctl hacks
+S: 36 Mudtown Road
+S: Wantage, New Jersey 07461
+S: USA
+
+N: Harald Hoyer
+E: HarryH@Royal.Net
+W: http://hot.spotline.de/
+W: http://home.pages.de/~saturn
+D: ip_masq_quake
+D: md boot support
+S: Alleenstrasse 27
+S: D-71679 Asperg
+S: Germany
+
+N: Kenn Humborg
+E: kenn@wombat.ie
+D: Mods to loop device to support sparse backing files
+S: Ballinagard
+S: Roscommon
+S: Ireland
+
+N: Miguel de Icaza Amozurrutia
+E: miguel@nuclecu.unam.mx
+D: Linux/SPARC team, Midnight Commander maintainer
+S: Avenida Copilco 162, 22-1003
+S: Mexico, DF
+S: Mexico
+
+N: Ian Jackson
+E: iwj10@cus.cam.ac.uk
+E: ijackson@nyx.cs.du.edu
+D: FAQ maintainer and poster of the daily postings
+D: FSSTND group member
+D: Debian core team member and maintainer of several Debian packages
+S: 2 Lexington Close
+S: Cambridge
+S: CB3 0DS
+S: United Kingdom
+
+N: Andreas Jaeger
+E: aj@suse.de
+D: Various smaller kernel fixes
+D: glibc developer
+S: Gottfried-Kinkel-Str. 18
+S: D 67659 Kaiserslautern
+S: Germany
+
+N: Mike Jagdis
+E: jaggy@purplet.demon.co.uk
+E: Mike.Jagdis@purplet.demon.co.uk
+D: iBCS personalities, socket and X interfaces, x.out loader, syscalls...
+D: Purple Distribution maintainer
+D: UK FidoNet support
+D: ISODE && PP
+D: Kernel and device driver hacking
+S: 280 Silverdale Road
+S: Earley
+S: Reading
+S: RG6 2NU
+S: United Kingdom
+
+N: Jakub Jelinek
+E: jakub@redhat.com
+W: http://sunsite.mff.cuni.cz/~jj
+P: 1024/0F7623C5 53 95 71 3C EB 73 99 97 02 49 40 47 F9 19 68 20
+D: Sparc hacker, SILO, mc
+D: Maintain sunsite.mff.cuni.cz
+S: K osmidomkum 723
+S: 160 00 Praha 6
+S: Czech Republic
+
+N: Niels Kristian Bech Jensen
+E: nkbj@image.dk
+W: http://www.image.dk/~nkbj
+D: 4.4BSD and NeXTstep filesystem support in the old ufs.
+D: Openstep filesystem and NeXTstep CDROM support in the new ufs.
+D: Danish HOWTO, Linux+FreeBSD mini-HOWTO.
+S: Dr. Holsts Vej 34, lejl. 164
+S: DK-8230 Åbyhøj
+S: Denmark
+
+N: Michael K. Johnson
+E: johnsonm@redhat.com
+W: http://www.redhat.com/~johnsonm
+P: 1024/4536A8DD 2A EC 88 08 40 64 CE D8 DD F8 12 2B 61 43 83 15
+D: The Linux Documentation Project
+D: Kernel Hackers' Guide
+D: Procps
+D: Proc filesystem
+D: Maintain tsx-11.mit.edu
+D: LP driver
+S: 201 Howell Street, Apartment 1C
+S: Chapel Hill, North Carolina 27514-4818
+S: USA
+
+N: Dave Jones
+E: dave@powertweak.com
+E: djones2@glam.ac.uk
+W: http://linux.powertweak.com
+D: Moved PCI bridge tuning to userspace (Powertweak).
+D: Centaur/IDT Winchip/Winchip 2 tweaks.
+D: Misc clean ups and other random hacking.
+S: 28, Laura Street,
+S: Treforest, Pontypridd,
+S: Mid Glamorgan, CF37 1NW,
+S: Wales, United Kingdom
+
+N: Bernhard Kaindl
+E: bkaindl@netway.at
+E: edv@bartelt.via.at
+D: Author of a menu based configuration tool, kmenu, which
+D: is the predecessor of 'make menuconfig' and 'make xconfig'.
+D: digiboard driver update (modularisation work and 2.1.x update)
+S: Tallak 95
+S: 8103 Rein
+S: Austria
+
+N: Jan Kara
+E: jack@atrey.karlin.mff.cuni.cz
+E: jack@suse.cz
+D: Quota fixes for 2.2 kernel
+D: Quota fixes for 2.3 kernel
+D: Few other fixes in filesystem area (buffer cache, isofs, loopback)
+W: http://atrey.karlin.mff.cuni.cz/~jack/
+S: Krosenska' 543
+S: 181 00 Praha 8
+S: Czech Republic
+
+N: Jan "Yenya" Kasprzak
+E: kas@fi.muni.cz
+D: Author of the COSA/SRP sync serial board driver.
+D: Port of the syncppp.c from the 2.0 to the 2.1 kernel.
+P: 1024/D3498839 0D 99 A7 FB 20 66 05 D7 8B 35 FC DE 05 B1 8A 5E
+W: http://www.fi.muni.cz/~kas/
+S: c/o Faculty of Informatics, Masaryk University
+S: Botanicka' 68a
+S: 602 00 Brno
+S: Czech Republic
+
+N: Fred N. van Kempen
+E: waltje@linux.com
+D: NET-2
+D: Drivers
+D: Kernel cleanups
+S: Korte Heul 95
+S: 1403 ND BUSSUM
+S: The Netherlands
+
+N: Karl Keyte
+E: karl@koft.com
+D: Disk usage statistics and modifications to line printer driver
+S: 26a Sheen Road
+S: Richmond
+S: Surrey
+S: TW9 1AE
+S: United Kingdom
+
+N: Russell King
+E: rmk@arm.uk.linux.org
+D: Linux/arm integrator, maintainer & hacker
+S: Burgh Heath, Tadworth, Surrey.
+S: England
+
+N: Olaf Kirch
+E: okir@monad.swb.de
+D: Author of the Linux Network Administrators' Guide
+S: Kattreinstr 38
+S: D-64295
+S: Germany
+
+N: Andi Kleen
+E: ak@muc.de
+D: network hacker, syncookies
+S: Schwalbenstr. 96
+S: 85551 Ottobrunn
+S: Germany
+
+N: Ian Kluft
+E: ikluft@thunder.sbay.org
+W: http://www.kluft.com/~ikluft/
+D: NET-1 beta testing & minor patches, original Smail binary packages for
+D: Slackware and Debian, vote-taker for 2nd comp.os.linux reorganization
+S: Post Office Box 611311
+S: San Jose, California 95161-1311
+S: USA
+
+N: Thorsten Knabe
+E: Thorsten Knabe <tek@rbg.informatik.tu-darmstadt.de>
+E: Thorsten Knabe <tek01@hrzpub.tu-darmstadt.de>
+W: http://www.student.informatik.tu-darmstadt.de/~tek
+W: http://www.tu-darmstadt.de/~tek01
+P: 1024/3BC8D885 8C 29 C5 0A C0 D1 D6 F4 20 D4 2D AB 29 F6 D0 60
+D: AD1816 sound driver
+S: Am Bergfried 10
+S: 63225 Langen
+S: Germany
+
+N: Alain L. Knaff
+E: Alain.Knaff@poboxes.com
+D: floppy driver
+S: 19, rue Jean l'Aveugle
+S: L-1148 Luxembourg-City
+S: Luxembourg
+
+N: Gerd Knorr
+E: kraxel@goldbach.in-berlin.de
+D: SCSI CD-ROM driver hacking, vesafb, v4l, minor bug fixes
+
+N: Harald Koenig
+E: koenig@tat.physik.uni-tuebingen.de
+D: XFree86 (S3), DCF77, some kernel hacks and fixes
+S: Koenigsberger Str. 90
+S: D-72336 Balingen
+S: Germany
+
+N: Rudolf Koenig
+E: rfkoenig@immd4.informatik.uni-erlangen.de
+D: The Linux Support Team Erlangen
+
+N: Andreas Koensgen
+E: ajk@iehk.rwth-aachen.de
+D: 6pack driver for AX.25
+
+N: Willy Konynenberg
+E: willy@xos.nl
+W: http://www.xos.nl/
+D: IP transparent proxy support
+S: X/OS Experts in Open Systems BV
+S: Kruislaan 419
+S: 1098 VA Amsterdam
+S: The Netherlands
+
+N: Gene Kozin
+E: 74604.152@compuserve.com
+W: http://www.sangoma.com
+D: WAN Router & Sangoma WAN drivers
+S: Sangoma Technologies Inc.
+S: 7170 Warden Avenue, Unit 2
+S: Markham, Ontario
+S: L3R 8B2
+S: Canada
+
+N: Andreas S. Krebs
+E: akrebs@altavista.net
+D: CYPRESS CY82C693 chipset IDE, Digital's PC-Alpha 164SX boards
+
+N: Russell Kroll
+E: rkroll@exploits.org
+W: http://www.exploits.org/
+D: V4L Aztech radio card driver, mods to Aimslab driver
+S: Post Office Box 49458
+S: Colorado Springs, Colorado 80949-9458
+S: USA
+
+N: Andrzej M. Krzysztofowicz
+E: ankry@mif.pg.gda.pl
+D: XT disk driver
+D: Aladdin 1533/1543(C) chipset IDE
+D: PIIX chipset IDE
+S: ul. Matemblewska 1B/10
+S: 80-283 Gdansk
+S: Poland
+
+N: Gero Kuhlmann
+E: gero@gkminix.han.de
+D: mounting root via NFS
+S: Donarweg 4
+S: D-30657 Hannover
+S: Germany
+
+N: Markus Kuhn
+E: mskuhn@cip.informatik.uni-erlangen.de
+W: http://wwwcip.informatik.uni-erlangen.de/user/mskuhn
+D: Unicode, real-time, time, standards
+S: Schlehenweg 9
+S: D-91080 Uttenreuth
+S: Germany
+
+N: Jaroslav Kysela
+E: perex@jcu.cz
+W: http://www.pf.jcu.cz/~perex
+D: Original Author and Maintainer for HP 10/100 Mbit Network Adapters
+S: Unix Centre of Pedagogical Faculty, University of South Bohemia
+
+N: Bas Laarhoven
+E: bas@vimec.nl
+D: Loadable modules and ftape driver
+S: J. Obrechtstr 23
+S: NL-5216 GP 's-Hertogenbosch
+S: The Netherlands
+
+N: Savio Lam
+E: lam836@cs.cuhk.hk
+D: Author of the dialog utility, foundation
+D: for Menuconfig's lxdialog.
+
+N: Tom Lees
+E: tom@lpsg.demon.co.uk
+W: http://www.lpsg.demon.co.uk/
+P: 1024/87D4D065 2A 66 86 9D 02 4D A6 1E B8 A2 17 9D 4F 9B 89 D6
+D: Original author and current maintainer of
+D: PnP code.
+
+N: David van Leeuwen
+E: david@tm.tno.nl
+D: Philips/LMS cm206 cdrom driver, generic cdrom driver
+S: Scheltemalaan 14
+S: 3817 KS Amersfoort
+S: The Netherlands
+
+N: Volker Lendecke
+E: vl@kki.org
+D: Kernel smbfs (to mount WfW, NT and OS/2 network drives.)
+D: NCP filesystem support (to mount NetWare volumes)
+S: Von Ossietzky Str. 12
+S: 37085 Goettingen
+S: Germany
+
+N: Kevin Lentin
+E: kevinl@cs.monash.edu.au
+D: NCR53C400/T130B SCSI extension to NCR5380 driver.
+S: 18 Board Street
+S: Doncaster VIC 3108
+S: Australia
+
+N: Hans Lermen
+E: lermen@elserv.ffm.fgan.de
+D: Author of the LOADLIN Linux loader, hacking on boot stuff
+D: Coordinator of DOSEMU releases
+S: Am Muehlenweg 38
+S: D53424 Remagen
+S: Germany
+
+N: Achim Leubner
+E: achim@vortex.de
+D: GDT SCSI Disk Array Controller driver
+S: ICP vortex Computersysteme GmbH
+S: Flein
+S: Germany
+
+N: Phil Lewis
+E: beans@bucket.ualr.edu
+D: Promised to send money if I would put his name in the source tree.
+S: Post Office Box 371
+S: North Little Rock, Arkansas 72115
+S: USA
+
+N: Stephan Linz
+E: linz@mazet.de
+E: Stephan.Linz@gmx.de
+W: http://www.crosswinds.net/~tuxer
+D: PCILynx patch to work with 1394a PHY and without local RAM
+S: (ask for current address)
+S: Germany
+
+N: Siegfried "Frieder" Loeffler (dg1sek)
+E: floeff@tunix.mathematik.uni-stuttgart.de
+E: fl@LF.net
+W: http://www.mathematik.uni-stuttgart.de/~floeff
+D: Busmaster driver for HP 10/100 Mbit Network Adapters
+S: University of Stuttgart, Germany and
+S: Ecole Nationale Superieure des Telecommunications, Paris
+
+N: Jamie Lokier
+E: jamie@imbolc.ucc.ie
+D: Reboot-through-BIOS for broken 486 motherboards
+S: 11 Goodson Walk
+S: Marston
+S: Oxford
+S: OX3 0HX
+S: United Kingdom
+
+N: Mark Lord
+E: mlord@pobox.com
+D: EIDE driver, hd.c support
+D: EIDE PCI and bus-master DMA support
+D: Hard Disk Parameter (hdparm) utility
+S: 33 Ridgefield Cr
+S: Nepean, Ontario
+S: Canada K2H 6S3
+
+N: Warner Losh
+E: imp@village.org
+D: Linux/MIPS Deskstation support, Provided OI/OB for Linux
+S: 8786 Niwot Road
+S: Niwot, Colorado 80503
+S: USA
+
+N: Martin von Löwis
+E: loewis@informatik.hu-berlin.de
+D: script binary format
+D: NTFS driver
+
+N: H.J. Lu
+E: hjl@gnu.ai.mit.edu
+D: GCC + libraries hacker
+
+N: Tuomas J. Lukka
+E: Tuomas.Lukka@Helsinki.FI
+D: Original dual-monitor patches
+D: Console-mouse-tracking patches
+S: Puistokaari 1 E 18
+S: 00200 Helsinki
+S: Finland
+
+N: Hamish Macdonald
+E: hamishm@lucent.com
+D: Linux/68k port
+S: 32 Clydesdale Avenue
+S: Kanata, Ontario
+S: Canada K2M-2G7
+
+N: Peter MacDonald
+D: SLS distribution
+D: Initial implementation of VC's, pty's and select()
+
+N: Pavel Machek
+E: pavel@atrey.karlin.mff.cuni.cz
+D: Softcursor for vga, hypertech cdrom support, vcsa bugfix
+D: Network block device, sun4/330 port
+S: Volkova 1131
+S: 198 00 Praha 9
+S: Czech Republic
+
+N: Paul Mackerras
+E: paulus@linuxcare.com
+D: Linux port for PCI Power Macintosh
+S: Linuxcare, Inc.
+S: 24 Marcus Clarke Street
+S: Canberra ACT 2601
+S: Australia
+
+N: Pat Mackinlay
+E: pat@it.com.au
+D: 8 bit XT hard disk driver
+D: Miscellaneous ST0x, TMC-8xx and other SCSI hacking
+S: 25 McMillan Street
+S: Victoria Park 6100
+S: Australia
+
+N: James B. MacLean
+E: macleajb@ednet.ns.ca
+W: http://www.ednet.ns.ca/~macleajb/dosemu.html
+D: Former Coordinator of DOSEMU releases
+D: Program in DOSEMU
+S: PO BOX 220, HFX. CENTRAL
+S: Halifax, Nova Scotia
+S: Canada B3J 3C8
+
+N: Kai Mäkisara
+E: Kai.Makisara@metla.fi
+D: SCSI Tape Driver
+
+N: Martin Mares
+E: mj@atrey.karlin.mff.cuni.cz
+W: http://atrey.karlin.mff.cuni.cz/~mj/
+D: BIOS video mode handling code
+D: MOXA C-218 serial board driver
+D: Network autoconfiguration
+D: Random kernel hacking
+S: Kankovskeho 1241
+S: 182 00 Praha 8
+S: Czech Republic
+
+N: John A. Martin
+E: jam@acm.org
+W: http://www.tux.org/~jam/
+P: 1024/04456D53 9D A3 6C 6B 88 80 8A 61 D7 06 22 4F 95 40 CE D2
+P: 1024/3B986635 5A61 7EE6 9E20 51FB 59FB 2DA5 3E18 DD55 3B98 6635
+D: FSSTND contributor
+D: Credit file compilator
+
+N: Kevin E. Martin
+E: martin@cs.unc.edu
+D: Developed original accelerated X servers included in XFree86
+D: XF86_Mach64
+D: XF86_Mach32
+D: XF86_Mach8
+D: XF86_8514
+D: cfdisk (curses based disk partitioning program)
+
+N: Mike McLagan
+E: mike.mclagan@linux.org
+W: http://www.invlogic.com/~mmclagan
+D: DLCI/FRAD drivers for Sangoma SDLAs
+S: Innovative Logic Corp
+S: Post Office Box 1068
+S: Laurel, Maryland 20732
+S: USA
+
+N: Bradley McLean
+E: brad@bradpc.gaylord.com
+D: Device driver hacker
+D: General kernel debugger
+S: 249 Nichols Avenue
+S: Syracuse, New York 13206
+S: USA
+
+N: Dirk Melchers
+E: dirk@merlin.nbg.sub.org
+D: 8 bit XT hard disk driver for OMTI5520
+S: Schloessleinsgasse 31
+S: D-90453 Nuernberg
+S: Germany
+
+N: Arnaldo Carvalho de Melo
+E: acme@conectiva.com.br
+W: http://www.conectiva.com.br/~acme
+D: wanrouter hacking
+D: USB hacking
+D: miscellaneous Makefile & Config.in fixes
+D: Cyclom 2X synchronous card driver
+D: i18n for minicom, net-tools, util-linux, fetchmail, etc
+S: Conectiva S.A.
+S: R. Tocantins, 89 - Cristo Rei
+S: 80050-430 - Curitiba - Paraná
+S: Brazil
+
+N: Michael Meskes
+E: meskes@debian.org
+P: 1024/04B6E8F5 6C 77 33 CA CC D6 22 03 AB AB 15 A3 AE AD 39 7D
+D: Kernel hacker. PostgreSQL hacker. Software watchdog daemon.
+D: Maintainer of several Debian packages
+S: Th.-Heuss-Str. 61
+S: D-41812 Erkelenz
+S: Germany
+
+N: Nigel Metheringham
+E: Nigel.Metheringham@ThePLAnet.net
+P: 1024/31455639 B7 99 BD B8 00 17 BD 46 C1 15 B8 AB 87 BC 25 FA
+D: IP Masquerading work and minor fixes
+S: Planet Online
+S: The White House, Melbourne Street, LEEDS
+S: LS2 7PS, United Kingdom
+
+N: Craig Metz
+E: cmetz@inner.net
+D: Some of PAS 16 mixer & PCM support, inet6-apps
+
+N: William (Bill) Metzenthen
+E: billm@suburbia.net
+D: Author of the FPU emulator.
+D: Minor kernel hacker for other lost causes (Hercules mono, etc).
+S: 22 Parker Street
+S: Ormond
+S: Victoria 3163
+S: Australia
+
+N: Pauline Middelink
+E: middelin@polyware.nl
+D: General low-level bug fixes, /proc fixes, identd support
+D: Author of IP masquerading
+D: Zoran ZR36120 Video For Linux driver
+S: Boterkorfhoek 34
+S: 7546 JA Enschede
+S: Netherlands
+
+N: David S. Miller
+E: davem@redhat.com
+D: Sparc and blue box hacker
+D: Vger Linux mailing list co-maintainer
+D: Linux Emacs elf/qmagic support + other libc/gcc things
+D: Yee bore de yee bore! ;-)
+S: 750 N. Shoreline Blvd.
+S: Apt. #111
+S: Mountain View, California 94043
+S: USA
+
+N: Rick Miller
+E: rdmiller@execpc.com
+W: http://www.execpc.com/~rdmiller/
+D: Original Linux Device Registrar (Major/minor numbers)
+D: au-play, bwBASIC
+S: S78 W16203 Woods Road
+S: Muskego, Wisconsin 53150
+S: USA
+
+N: Harald Milz
+E: hm@seneca.linux.de
+D: Linux Projects Map, Linux Commercial-HOWTO
+D: general Linux publicity in Germany, vacation port
+D: UUCP and CNEWS binary packages for LST
+S: Editorial Board iX Mag
+S: Helstorfer Str. 7
+S: D-30625 Hannover
+S: Germany
+
+N: Corey Minyard
+E: minyard@wf-rch.cirr.com
+D: Sony CDU31A CDROM Driver
+S: 1805 Marquette
+S: Richardson, Texas 75081
+S: USA
+
+N: Eberhard Moenkeberg
+E: emoenke@gwdg.de
+D: CDROM driver "sbpcd" (Matsushita/Panasonic/Soundblaster)
+S: Reinholdstrasse 14
+S: D-37083 Goettingen
+S: Germany
+
+N: David Mosberger-Tang
+E: davidm@hpl.hp.com if IA-64 related, else David.Mosberger@acm.org
+D: Linux/Alpha and Linux/ia64
+S: 35706 Runckel Lane
+S: Fremont, California 94536
+S: USA
+
+N: Ian A. Murdock
+E: imurdock@gnu.ai.mit.edu
+D: Creator of Debian distribution
+S: 30 White Tail Lane
+S: Lafayette, Indiana 47905
+S: USA
+
+N: Trond Myklebust
+E: trond.myklebust@fys.uio.no
+D: current NFS client hacker.
+S: Dagaliveien 31e
+S: N-0391 Oslo
+S: Norway
+
+N: Johan Myreen
+E: jem@iki.fi
+D: PS/2 mouse driver writer etc.
+S: Dragonvagen 1 A 13
+S: FIN-00330 Helsingfors
+S: Finland
+
+N: Matija Nalis
+E: mnalis@jagor.srce.hr
+E: mnalis@voyager.hr
+D: Maintainer of the Umsdos file system
+S: Listopadska 7
+S: 10000 Zagreb
+S: Croatia
+
+N: Jonathan Naylor
+E: g4klx@g4klx.demon.co.uk
+E: g4klx@amsat.org
+W: http://zone.pspt.fi/~jsn/
+D: AX.25, NET/ROM and ROSE amateur radio protocol suites
+D: CCITT X.25 PLP and LAPB.
+S: 24 Castle View Drive
+S: Cromford
+S: Matlock
+S: Derbyshire DE4 3RL
+S: United Kingdom
+
+N: Russell Nelson
+E: nelson@crynwr.com
+W: http://www.crynwr.com/~nelson
+P: 1024/83942741 FF 68 EE 27 A0 5A AA C3 F5 DC 05 62 BD 5B 20 2F
+D: Author of cs89x0, maintainer of kernel changelog through 1.3.3
+D: Wrote many packet drivers, from which some Ethernet drivers are derived.
+S: 521 Pleasant Valley Road
+S: Potsdam, New York 13676
+S: USA
+
+N: Michael Neuffer
+E: mike@i-Connect.Net
+E: neuffer@goofy.zdv.uni-mainz.de
+W: http://www.i-Connect.Net/~mike/
+D: Developer and maintainer of the EATA-DMA SCSI driver
+D: Co-developer EATA-PIO SCSI driver
+D: /proc/scsi and assorted other snippets
+S: Zum Schiersteiner Grund 2
+S: 55127 Mainz
+S: Germany
+
+N: David C. Niemi
+E: niemi@tux.org
+W: http://www.tux.org/~niemi/
+D: Assistant maintainer of Mtools, fdutils, and floppy driver
+D: Administrator of Tux.Org Linux Server, http://www.tux.org
+S: 2364 Old Trail Drive
+S: Reston, Virginia 20191
+S: USA
+
+N: Michael O'Reilly
+E: michael@iinet.com.au
+E: oreillym@tartarus.uwa.edu.au
+D: Wrote the original dynamic sized disk cache stuff. I think the only
+D: part that remains is the GFP_KERNEL et al #defines. :)
+S: 192 Nicholson Road
+S: Subiaco, 6008
+S: Perth, Western Australia
+S: Australia
+
+N: Greg Page
+E: gpage@sovereign.org
+D: IPX development and support
+
+N: David Parsons
+E: orc@pell.chi.il.us
+D: improved memory detection code.
+
+N: Ivan Passos
+E: ivan@cyclades.com
+D: Author of the Cyclades-PC300 synchronous card driver
+D: Maintainer of the Cyclom-Y/Cyclades-Z asynchronous card driver
+S: Cyclades Corp
+S: 41934 Christy St
+S: Fremont, CA 94538
+S: USA
+
+N: Mikulas Patocka
+E: mikulas@artax.karlin.mff.cuni.cz
+W: http://artax.karlin.mff.cuni.cz/~mikulas/
+P: 1024/BB11D2D5 A0 F1 28 4A C4 14 1E CF 92 58 7A 8F 69 BC A4 D3
+D: Read/write HPFS filesystem
+S: Weissova 8
+S: 644 00 Brno
+S: Czech Republic
+
+N: Vojtech Pavlik
+E: vojtech@suse.cz
+D: Joystick driver
+D: arcnet-hardware readme
+D: Minor ARCnet hacking
+D: USB (HID, ACM, Printer ...)
+S: Ucitelska 1576
+S: 182 00 Prague 8
+S: Czech Republic
+
+N: Barak A. Pearlmutter
+E: bap@cs.unm.edu
+W: http://www.cs.unm.edu/~bap/
+P: 512/602D785D 9B A1 83 CD EE CB AD 93 20 C6 4C B7 F5 E9 60 D4
+D: Author of mark-and-sweep GC integrated by Alan Cox
+S: Computer Science Department
+S: FEC 313
+S: University of New Mexico
+S: Albuquerque, New Mexico 87131
+S: USA
+
+N: Avery Pennarun
+E: apenwarr@worldvisions.ca
+W: http://www.worldvisions.ca/~apenwarr/
+D: ARCnet driver
+D: "make xconfig" improvements
+D: Various minor hacking
+S: RR #5, 497 Pole Line Road
+S: Thunder Bay, Ontario
+S: CANADA P7C 5M9
+
+N: Yuri Per
+E: yuri@pts.mipt.ru
+D: Some smbfs fixes
+S: Demonstratsii 8-382
+S: Tula 300000
+S: Russia
+
+N: Gordon Peters
+E: GordPeters@smarttech.com
+D: Isochronous receive for IEEE 1394 driver (OHCI module).
+D: Bugfixes for the aforementioned.
+S: Calgary, Alberta
+S: Canada
+
+N: Johnnie Peters
+E: jpeters@phx.mcd.mot.com
+D: Motorola PowerPC changes for PReP
+S: 2900 S. Diable Way
+S: Tempe, Arizona 85282
+S: USA
+
+N: Kirk Petersen
+E: kirk@speakeasy.org
+W: http://www.speakeasy.org/~kirk/
+D: implemented kmod
+D: modularized BSD Unix domain sockets
+
+N: Reed H. Petty
+E: rhp@draper.net
+W: http://www.draper.net
+D: Loop device driver extensions
+D: Encryption transfer modules (no export)
+S: Post Office Box 1815
+S: Harrison, Arkansas 72602-1815
+S: USA
+
+N: Kai Petzke
+E: wpp@marie.physik.tu-berlin.de
+W: http://physik.tu-berlin.de/~wpp
+P: 1024/B42868C1 D9 59 B9 98 BB 93 05 38 2E 3E 31 79 C3 65 5D E1
+D: Driver for Laser Magnetic Storage CD-ROM
+D: Some kernel bug fixes
+D: Port of the database Postgres
+D: "Unix fuer Jedermann" a German introduction to linux (see my web page)
+S: M"ullerstr. 69
+S: 13349 Berlin
+S: Germany
+
+N: Nicolas Pitre
+E: nico@cam.org
+D: StrongARM SA1100 support integrator & hacker
+S: Montreal, Quebec, Canada
+
+N: Emanuel Pirker
+E: epirker@edu.uni-klu.ac.at
+D: AIC5800 IEEE 1394, RAW I/O on 1394
+D: Starter of Linux1394 effort
+S: ask per mail for current address
+
+N: Ken Pizzini
+E: ken@halcyon.com
+D: CDROM driver "sonycd535" (Sony CDU-535/531)
+
+N: Frederic Potter
+E: Frederic.Potter@masi.ibp.fr
+D: Some PCI kernel support
+
+N: Stefan Probst
+E: sp@caldera.de
+D: The Linux Support Team Erlangen, 1993-97
+S: Caldera (Deutschland) GmbH
+S: Lazarettstrasse 8
+S: 91054 Erlangen
+S: Germany
+
+N: Daniel Quinlan
+E: quinlan@pathname.com
+W: http://www.pathname.com/~quinlan/
+D: FSSTND coordinator; FHS editor
+D: random Linux documentation, patches, and hacks
+S: 4390 Albany Drive #41A
+S: San Jose, California 95129
+S: USA
+
+N: Augusto Cesar Radtke
+E: bishop@sekure.org
+W: http://bishop.sekure.org
+D: {copy,get,put}_user calls updates
+D: Miscellaneous hacks
+S: R. Otto Marquardt, 226 - Garcia
+S: 89020-350 Blumenau - Santa Catarina
+S: Brazil
+
+N: Eric S. Raymond
+E: esr@thyrsus.com
+W: http://www.tuxedo.org/~esr/
+D: terminfo master file maintainer
+D: Editor: Installation HOWTO, Distributions HOWTO, XFree86 HOWTO
+D: Author: fetchmail, Emacs VC mode, Emacs GUD mode
+S: 6 Karen Drive
+S: Malvern, Pennsylvania 19355
+S: USA
+
+N: Stefan Reinauer
+E: stepan@linux.de
+W: http://www.freiburg.linux.de/~stepan/
+D: Modularization of some filesystems
+D: /proc/sound, minor fixes
+S: Schlossbergring 9
+S: 79098 Freiburg
+S: Germany
+
+N: Joerg Reuter
+E: jreuter@poboxes.com
+W: http://poboxes.com/jreuter/
+W: http://qsl.net/dl1bke/
+D: Generic Z8530 driver, AX.25 DAMA slave implementation
+D: Several AX.25 hacks
+
+N: Francois-Rene Rideau
+E: fare@tunes.org
+W: http://www.tunes.org/~fare
+D: petty kernel janitor (byteorder, ufs)
+S: 6, rue Augustin Thierry
+S: 75019 Paris
+S: France
+
+N: Rik van Riel
+E: riel@nl.linux.org
+W: http://www.nl.linux.org/~riel/
+D: Linux-MM site, Documentation/sysctl/*, swap/mm readaround
+D: clustering contributor, kswapd fixes, random kernel hacker,
+D: nl.linux.org maintainer, minor scheduler additions
+S: IJsselstraat 23a
+S: 9725 GA Groningen
+S: The Netherlands
+
+N: William E. Roadcap
+E: roadcapw@cfw.com
+W: http://www.cfw.com/~roadcapw
+D: Author of menu based configuration tool, Menuconfig.
+S: 1407 Broad Street
+S: Waynesboro, Virginia 22980
+S: USA
+
+N: Andrew J. Robinson
+E: arobinso@nyx.net
+W: http://www.nyx.net/~arobinso
+D: Hayes ESP serial port driver
+
+N: Florian La Roche
+E: rzsfl@rz.uni-sb.de
+E: flla@stud.uni-sb.de
+D: Net programs and kernel net hacker
+S: Gaildorfer Str. 27
+S: 7000 Stuttgart 50
+S: Germany
+
+N: Stephen Rothwell
+E: sfr@linuxcare.com
+W: http://linuxcare.com.au/sfr
+P: 1024/BD8C7805 CD A4 9D 01 10 6E 7E 3B 91 88 FA D9 C8 40 AA 02
+D: Boot/setup/build work for setup > 2K
+D: Author, APM driver
+S: 66 Maltby Circuit
+S: Wanniassa ACT 2903
+S: Australia
+
+N: Gerard Roudier
+E: groudier@iplus.fr
+D: Contributed to asynchronous read-ahead improvement
+S: 21 Rue Carnot
+S: 95170 Deuil La Barre
+S: France
+
+N: Sebastien Rougeaux
+E: Sebastien.Rougeaux@syseng.anu.edu.au
+D: IEEE 1394 OHCI module
+S: Research School of Information Science and Engineering
+S: The Australian National University, ACT 0200
+S: Australia
+
+N: Alessandro Rubini
+E: rubini@ipvvis.unipv.it
+D: the gpm mouse server and kernel support for it
+
+N: Philipp Rumpf
+E: prumpf@jcsbs.lanobis.de
+D: ipi_count for x86
+D: random bugfixes
+S: Rueting 4
+S: 23743 Groemitz
+S: Germany
+
+N: Paul `Rusty' Russell
+E: rusty@linuxcare.com
+W: http://www.rustcorp.com
+D: Ruggedly handsome.
+D: netfilter, ipchains with Michael Neuling.
+S: 301/222 City Walk
+S: Canberra ACT 2601
+S: Australia
+
+N: Thomas Sailer
+E: sailer@ife.ee.ethz.ch
+E: HB9JNX@HB9W.CHE.EU (packet radio)
+D: hfmodem, Baycom and sound card radio modem driver
+S: Weinbergstrasse 76
+S: 8408 Winterthur
+S: Switzerland
+
+N: Robert Sanders
+E: gt8134b@prism.gatech.edu
+D: Dosemu
+
+N: Hannu Savolainen
+E: hannu@voxware.pp.fi
+D: Kernel sound drivers
+S: Hiekkalaiturintie 3 A 8
+S: 00980 Helsinki
+S: Finland
+
+N: Eric Schenk
+E: Eric.Schenk@dna.lth.se
+D: Random kernel debugging.
+D: SYSV Semaphore code rewrite.
+D: Network layer debugging.
+D: Dial on demand facility (diald).
+S: Dag Hammerskjolds v. 3E
+S: S-226 64 LUND
+S: Sweden
+
+N: Henning P. Schmiedehausen
+E: hps@tanstaafl.de
+D: added PCI support to the serial driver
+S: Buckenhof, Germany
+
+N: Michael Schmitz
+E:
+D: Macintosh IDE Driver
+
+N: Peter De Schrijver
+E: stud11@cc4.kuleuven.ac.be
+D: Mitsumi CD-ROM driver patches (March version)
+S: Molenbaan 29
+S: B2240 Zandhoven
+S: Belgium
+
+N: Martin Schulze
+E: joey@linux.de
+W: http://home.pages.de/~joey/
+D: Random Linux Hacker, Linux Promoter
+D: CD-List, Books-List, Ex-FAQ
+D: Linux-Support, -Mailbox, -Stammtisch
+D: several improvements to system programs
+S: Oldenburg
+S: Germany
+
+N: Darren Senn
+E: sinster@darkwater.com
+D: Whatever I notice needs doing (so far: itimers, /proc)
+S: Post Office Box 64132
+S: Sunnyvale, California 94088-4132
+S: USA
+
+N: Simon Shapiro
+E: shimon@i-Connect.Net
+W: http://www.i-Connect.Net/~shimon
+D: SCSI debugging
+D: Maintainer of the Debian Kernel packages
+S: 14355 SW Allen Blvd., Suite #140
+S: Beaverton, Oregon 97008
+S: USA
+
+N: Mike Shaver
+E: shaver@hungry.org
+W: http://www.hungry.org/~shaver/
+D: MIPS work, /proc/sys/net, misc net hacking
+S: 149 Union St.
+S: Kingston, Ontario
+S: Canada K7L 2P4
+
+N: John Shifflett
+E: john@geolog.com
+E: jshiffle@netcom.com
+D: Always IN2000 SCSI driver
+D: wd33c93 SCSI driver (linux-m68k)
+S: San Jose, California
+S: USA
+
+N: Jaspreet Singh
+E: jaspreet@sangoma.com
+W: http://www.sangoma.com
+D: WANPIPE drivers & API Support for Sangoma S508/FT1 cards
+S: Sangoma Technologies Inc.,
+S: 1001 Denison Street
+S: Suite 101
+S: Markham, Ontario L3R 2Z6
+S: Canada
+
+N: Rick Sladkey
+E: jrs@world.std.com
+D: utility hacker: Emacs, NFS server, mount, kmem-ps, UPS debugger, strace, GDB
+D: library hacker: RPC, profil(3), realpath(3), regexp.h
+D: kernel hacker: unnamed block devs, NFS client, fast select, precision timer
+S: 24 Avon Place
+S: Arlington, Massachusetts 02174
+S: USA
+
+N: Craig Small
+E: csmall@triode.apana.org.au
+E: vk2xlz@gonzo.vk2xlz.ampr.org (packet radio)
+D: Gracilis PackeTwin device driver
+D: RSPF daemon
+S: 10 Stockalls Place
+S: Minto, NSW, 2566
+S: Australia
+
+N: Chris Smith
+E: csmith@convex.com
+D: Read only HPFS filesystem
+S: Richardson, Texas
+S: USA
+
+N: Miquel van Smoorenburg
+E: miquels@cistron.nl
+D: Kernel and net hacker. Sysvinit, minicom. doing Debian stuff.
+S: Cistron Internet Services
+S: PO-Box 297
+S: 2400 AG, Alphen aan den Rijn
+S: The Netherlands
+
+N: Scott Snyder
+E: snyder@fnald0.fnal.gov
+D: ATAPI cdrom driver
+S: MS 352, Fermilab
+S: Post Office Box 500
+S: Batavia, Illinois 60510
+S: USA
+
+N: Leo Spiekman
+E: leo@netlabs.net
+W: http://www.netlabs.net/hp/leo/
+D: Optics Storage 8000AT cdrom driver
+S: Cliffwood, New Jersey 07721
+S: USA
+
+N: Henrik Storner
+E: storner@image.dk
+W: http://www.image.dk/~storner/
+W: http://www.sslug.dk/
+D: Configure script: Invented tristate for module-configuration
+D: vfat/msdos integration, kerneld docs, Linux promotion
+D: Miscellaneous bug-fixes
+S: Chr. Winthersvej 1 B, st.th.
+S: DK-1860 Frederiksberg C
+S: Denmark
+
+N: Drew Sullivan
+E: drew@ss.org
+W: http://www.ss.org/
+P: 1024/ACFFA969 5A 9C 42 AB E4 24 82 31 99 56 00 BF D3 2B 25 46
+D: iBCS2 developer
+S: 22 Irvington Cres.
+S: Willowdale, Ontario
+S: Canada M2N 2Z1
+
+N: Adrian Sun
+E: asun@cobaltnet.com
+D: hfs support
+D: alpha rtc port, random appletalk fixes
+S: Department of Zoology, University of Washington
+S: Seattle, WA 98195-1800
+S: USA
+
+N: Corey Thomas
+E: corey@world.std.com
+W: http://world.std.com/~corey/index.html
+D: Raylink/WebGear wireless LAN device driver (ray_cs) author
+S: 145 Howard St.
+S: Northborough, MA 01532
+S: USA
+
+N: Tommy Thorn
+E: Tommy.Thorn@irisa.fr
+W: http://www.irisa.fr/prive/thorn/index.html
+P: 512/B4AFC909 BC BF 6D B1 52 26 1E D6 E3 2F A3 24 2A 84 FE 21
+D: Device driver hacker (aha1542 & plip)
+S: IRISA
+S: Université de Rennes I
+S: F-35042 Rennes Cedex
+S: France
+
+N: Jon Tombs
+E: jon@gte.esi.us.es
+W: http://www.esi.us.es/~jon
+D: NFS mmap()
+D: XF86_S3
+D: Kernel modules
+D: Parts of various other programs (xfig, open, ...)
+S: C/ Federico Garcia Lorca 1 10-A
+S: Sevilla 41005
+S: Spain
+
+N: Linus Torvalds
+E: torvalds@transmeta.com
+W: http://www.cs.helsinki.fi/Linus.Torvalds
+P: 1024/A86B35C5 96 54 50 29 EC 11 44 7A BE 67 3C 24 03 13 62 C8
+D: Original kernel hacker
+S: 1050 Woodduck Avenue
+S: Santa Clara, California 95051
+S: USA
+
+N: Marcelo W. Tosatti
+E: marcelo@conectiva.com.br
+W: http://bazar.conectiva.com.br/~marcelo/
+D: Miscellaneous kernel hacker
+D: Cyclom 2X driver, drbd hacker
+D: linuxconf apache & proftpd module maintainer
+S: Conectiva S.A.
+S: R. Tocantins, 89 - Cristo Rei
+S: 80050-430 - Curitiba - Paraná
+S: Brazil
+
+N: Stefan Traby
+E: stefan@quant-x.com
+D: Minor Alpha kernel hacks
+S: Mitterlasznitzstr. 13
+S: 8302 Nestelbach
+S: Austria
+
+N: Jeff Tranter
+E: Jeff_Tranter@Mitel.COM
+D: Enhancements to Joystick driver
+D: Author of Sound HOWTO and CD-ROM HOWTO
+D: Author of several small utilities
+D: (bogomips, scope, eject, statserial)
+S: 1 Laurie Court
+S: Kanata, Ontario
+S: Canada K2L 1S2
+
+N: Andrew Tridgell
+E: tridge@samba.org
+W: http://linuxcare.com.au/tridge/
+D: dosemu, networking, samba
+S: 3 Ballow Crescent
+S: MacGregor A.C.T 2615
+S: Australia
+
+N: Winfried Trümper
+E: winni@xpilot.org
+W: http://www.shop.de/~winni/
+D: German HOWTO, Crash-Kurs Linux (German, 100 comprehensive pages)
+D: CD-Writing HOWTO, various mini-HOWTOs
+D: One-week tutorials on Linux twice a year (free of charge)
+D: Linux-Workshop Köln (aka LUG Cologne, Germany), Installfests
+S: Tacitusstr. 6
+S: D-50968 Köln
+
+N: Tsu-Sheng Tsao
+E: tsusheng@scf.usc.edu
+D: IGMP(Internet Group Management Protocol) version 2
+S: 2F 14 ALY 31 LN 166 SEC 1 SHIH-PEI RD
+S: Taipei
+S: Taiwan 112
+S: Republic of China
+S: 24335 Delta Drive
+S: Diamond Bar, California 91765
+S: USA
+
+N: Theodore Ts'o
+E: tytso@mit.edu
+D: Random Linux hacker
+D: Maintainer of tsx-11.mit.edu ftp archive
+D: Maintainer of c.o.l.* Usenet<->mail gateway
+D: Author of serial driver
+D: Author of the new e2fsck
+D: Author of job control and system call restart code
+D: Author of ramdisk device driver
+D: Author of loopback device driver
+S: MIT Room E40-343
+S: 1 Amherst Street
+S: Cambridge, Massachusetts 02139
+S: USA
+
+N: Simmule Turner
+E: sturner@tele-tv.com
+D: Added swapping to filesystem
+S: 4226 Landgreen Street
+S: Rockville, Maryland 20853
+S: USA
+
+N: Stephen Tweedie
+E: sct@dcs.ed.ac.uk
+P: 1024/E7A417AD E2 FE A4 20 34 EC ED FC 7D 7E 67 8D E0 31 D1 69
+D: Second extended file system developer
+D: General filesystem hacker
+D: kswap vm management code
+S: Dept. of Computer Science
+S: University of Edinburgh
+S: JCMB, The King's Buildings
+S: Mayfield Road
+S: Edinburgh
+S: EH9 3JZ
+S: United Kingdom
+
+N: Thomas Uhl
+E: uhl@sun1.rz.fh-heilbronn.de
+D: Application programmer
+D: Linux promoter
+D: Author of a German book on Linux
+S: Obere Heerbergstrasse 17
+S: 97078 Wuerzburg
+S: Germany
+
+N: Greg Ungerer
+E: gerg@stallion.com
+D: Author of Stallion multiport serial drivers
+S: Stallion Technologies
+S: 33 Woodstock Rd
+S: Toowong, QLD. 4066
+S: Australia
+
+N: Jeffrey A. Uphoff
+E: juphoff@transmeta.com
+E: jeff.uphoff@linux.org
+P: 1024/9ED505C5 D7 BB CA AA 10 45 40 1B 16 19 0A C0 38 A0 3E CB
+D: Linux Security/Alert mailing lists' moderator/maintainer.
+D: NSM (rpc.statd) developer.
+D: PAM S/Key module developer.
+D: 'dip' contributor.
+D: AIPS port, astronomical community support.
+S: Transmeta Corporation
+S: 2540 Mission College Blvd.
+S: Santa Clara, CA 95054
+S: USA
+
+N: Matthias Urlichs
+E: urlichs@noris.de
+E: urlichs@smurf.sub.org
+D: Consultant, developer, kernel hacker
+D: Playing with Streams, ISDN, and BSD networking code for Linux
+S: Schleiermacherstrasse 12
+S: 90491 Nuernberg
+S: Germany
+
+N: Geert Uytterhoeven
+E: geert@linux-m68k.org
+W: http://www.cs.kuleuven.ac.be/~geert/
+P: 1024/EC4A1EE1 8B 88 38 35 88 1E 95 A1 CD 9E AE DC 4B 4A 2F 41
+D: m68k/Amiga and PPC/CHRP Longtrail coordinator
+D: Frame buffer device and XF68_FBDev maintainer
+D: m68k IDE maintainer
+D: Amiga Zorro maintainer
+D: Amiga Buddha and Catweasel chipset IDE
+D: Atari Falcon chipset IDE
+D: Amiga Gayle chipset IDE
+S: C. Huysmansstraat 12
+S: B-3128 Baal
+S: Belgium
+
+N: Petr Vandrovec
+E: vandrove@vc.cvut.cz
+D: Small contributions to ncpfs
+S: Chudenicka 8
+S: 10200 Prague 10, Hostivar
+S: Czech Republic
+
+N: James R. Van Zandt
+E: jrv@vanzandt.mv.com
+P: 1024/E298966D F0 37 4F FD E5 7E C5 E6 F1 A0 1E 22 6F 46 DA 0C
+D: Author and maintainer of the Double Talk speech synthesizer driver
+S: 27 Spencer Drive
+S: Nashua, New Hampshire 03062
+S: USA
+
+N: Andrew Veliath
+E: andrewtv@usa.net
+D: Turtle Beach MultiSound sound driver
+S: USA
+
+N: Dirk Verworner
+D: Co-author of German book ``Linux-Kernel-Programmierung''
+D: Co-founder of Berlin Linux User Group
+
+N: Patrick Volkerding
+E: volkerdi@ftp.cdrom.com
+D: Produced the Slackware distribution, updated the SVGAlib
+D: patches for ghostscript, worked on color 'ls', etc.
+S: 301 15th Street S.
+S: Moorhead, Minnesota 56560
+S: USA
+
+N: Jos Vos
+E: jos@xos.nl
+W: http://www.xos.nl/
+D: Various IP firewall updates, ipfwadm
+S: X/OS Experts in Open Systems BV
+S: Kruislaan 419
+S: 1098 VA Amsterdam
+S: The Netherlands
+
+N: Tim Waugh
+E: tim@cyberelk.demon.co.uk
+D: Co-architect of the parallel-port sharing system
+S: 34 Bladon Close
+S: GUILDFORD
+S: Surrey
+S: GU1 1TY
+S: United Kingdom
+
+N: Juergen Weigert
+E: jnweiger@immd4.informatik.uni-erlangen.de
+D: The Linux Support Team Erlangen
+
+N: David Weinehall
+E: tao@acc.umu.se
+W: http://www.acc.umu.se/~tao/
+W: http://www.acc.umu.se/~mcalinux/
+D: Fixes for the NE/2-driver
+D: Miscellaneous MCA-support
+D: Cleanup of the Config-files
+S: Axtorpsvagen 40:20
+S: S-903 37 UMEA
+S: Sweden
+
+N: Matt Welsh
+E: mdw@metalab.unc.edu
+W: http://www.cs.berkeley.edu/~mdw
+D: Original Linux Documentation Project coordinator
+D: Author, "Running Linux" (O'Reilly)
+D: Author, "Linux Installation and Getting Started" (LDP) and several HOWTOs
+D: Linuxdoc-SGML formatting system (now SGML-Tools)
+D: Device drivers for various high-speed network interfaces (Myrinet, ATM)
+D: Keithley DAS1200 device driver
+D: Original maintainer of sunsite WWW and FTP sites
+D: Original moderator of c.o.l.announce and c.o.l.answers
+S: Computer Science Division
+S: UC Berkeley
+S: Berkeley, CA 94720-1776
+S: USA
+
+N: Greg Wettstein
+E: greg@wind.rmcc.com
+D: Filesystem valid flag for MINIX filesystem.
+D: Minor kernel debugging.
+D: Development and maintenance of sysklogd.
+D: Monitoring of development kernels for long-term stability.
+D: Early implementations of Linux in a commercial environment.
+S: Dr. Greg Wettstein, Ph.D.
+S: Oncology Research Division Computing Facility
+S: Roger Maris Cancer Center
+S: 820 4th St. N.
+S: Fargo, North Dakota 58122
+S: USA
+
+N: Steven Whitehouse
+E: SteveW@ACM.org
+W: http://www-sigproc.eng.cam.ac.uk/~sjw44/
+D: Linux DECnet project: http://www.sucs.swan.ac.uk/~rohan/DECnet/index.html
+D: Minor debugging of other networking protocols.
+D: Misc bug fixes and filesystem development
+
+N: Hans-Joachim Widmaier
+E: hjw@zvw.de
+D: AFFS rewrite
+S: Eichenweg 16
+S: 73650 Winterbach
+S: Germany
+
+N: Marco van Wieringen
+E: mvw@planets.elm.net
+D: Author of process accounting and diskquota
+S: Breeburgsingel 12
+S: 2135 CN Hoofddorp
+S: The Netherlands
+
+N: G\"unter Windau
+E: gunter@mbfys.kun.nl
+D: Some bug fixes in the polling printer driver (lp.c)
+S: University of Nijmegen
+S: Geert-Grooteplein Noord 21
+S: 6525 EZ Nijmegen
+S: The Netherlands
+
+N: Ulrich Windl
+E: Ulrich.Windl@rz.uni-regensburg.de
+P: 1024/E843660D CF D7 43 A1 5A 49 14 25 7C 04 A0 6E 4C 3A AC 6D
+D: Supports NTP on Linux. Added PPS code. Fixed bugs in adjtimex().
+S: Alte Regensburger Str. 11a
+S: 93149 Nittenau
+S: Germany
+
+N: Lars Wirzenius
+E: liw@iki.fi
+D: Linux System Administrator's Guide, author, former maintainer
+D: comp.os.linux.announce, former moderator
+D: Linux Documentation Project, co-founder
+D: Original sprintf in kernel
+D: Original kernel README (for version 0.97)
+D: Linux News (electronic magazine, now dead), founder and former editor
+D: Meta-FAQ, originator, former maintainer
+D: INFO-SHEET, former maintainer
+D: Author of the longest-living linux bug
+
+N: Jonathan Woithe
+E: jwoithe@physics.adelaide.edu.au
+W: http://www.physics.adelaide.edu.au/~jwoithe
+D: ALS-007 sound card extensions to Sound Blaster driver
+S: 4/36 Trevelyan St
+S: Wayville SA 5034
+S: Australia
+
+N: Clifford Wolf
+E: god@clifford.at
+W: http://www.clifford.at/
+D: Menuconfig/lxdialog improvement
+S: Foehrengasse 16
+S: A-2333 Leopoldsdorf b. Wien
+S: Austria
+
+N: Roger E. Wolff
+E: R.E.Wolff@BitWizard.nl
+D: Wrote kmalloc/kfree
+D: Wrote the Specialix IO8+ driver
+D: Wrote the Specialix SX driver
+S: van Bronckhorststraat 12
+S: 2612 XV Delft
+S: The Netherlands
+
+N: David Woodhouse
+E: David.Woodhouse@mvhi.com
+E: Dave@imladris.demon.co.uk
+D: Extensive ARCnet rewrite
+D: ARCnet COM20020, COM90xx IO-MAP drivers
+D: SO_BINDTODEVICE in 2.1.x (from Elliot Poger's code in 2.0.31)
+D: Contributed to NCPFS rewrite for 2.1.x dcache
+D: Alpha platforms: SX164, LX164 and Ruffian ported to 2.1.x
+S: 29, David Bull Way
+S: Milton, Cambridge. CB4 6DP
+S: England
+
+N: Frank Xia
+E: qx@math.columbia.edu
+D: Xiafs filesystem [defunct]
+S: 542 West 112th Street, 5N
+S: New York, New York 10025
+S: USA
+
+N: Victor Yodaiken
+E: yodaiken@fsmlabs.com
+D: RTLinux (RealTime Linux)
+S: POB 1822
+S: Socorro NM, 87801
+S: USA
+
+N: Eric Youngdale
+E: eric@andante.org
+W: http://www.andante.org
+D: General kernel hacker
+D: SCSI iso9660 and ELF
+S: 17 Canterbury Square #101
+S: Alexandria, Virginia 22304
+S: USA
+
+N: Niibe Yutaka
+E: gniibe@mri.co.jp
+D: PLIP driver
+D: Asynchronous socket I/O in the NET code
+S: Mitsubishi Research Institute, Inc.
+S: ARCO Tower 1-8-1 Shimomeguro Meguro-ku
+S: Tokyo 153
+S: Japan
+
+N: Orest Zborowski
+E: orestz@eskimo.com
+D: XFree86 and kernel development
+S: 1507 145th Place SE #B5
+S: Bellevue, Washington 98007
+S: USA
+
+N: Richard Zidlicky
+E: rdzidlic@geocities.com
+E: rdzidlic@cip.informatik.uni-erlangen.de
+W: http://www.geocities.com/SiliconValley/Bay/2602/
+D: Q40 port - see arch/m68k/q40/README
+S: Germany
+
+N: Werner Zimmermann
+E: Werner.Zimmermann@fht-esslingen.de
+D: CDROM driver "aztcd" (Aztech/Okano/Orchid/Wearnes)
+S: Flandernstrasse 101
+S: D-73732 Esslingen
+S: Germany
+
+N: Leonard N. Zubkoff
+E: lnz@dandelion.com
+W: http://www.dandelion.com/Linux/
+D: BusLogic SCSI driver
+D: Mylex DAC960 PCI RAID driver
+D: Miscellaneous kernel fixes
+S: 3078 Sulphur Spring Court
+S: San Jose, California 95148
+S: USA
+
+N: Marc Zyngier
+E: maz@wild-wind.fr.eu.org
+D: MD driver
+S: 11 rue Victor HUGO
+S: 95560 Montsoult
+S: France
+
+# Don't add your name here, unless you really _are_ after Marc
+# alphabetically. Leonard used to be very proud of being the
+# last entry, and he'll get positively pissed if he can't even
+# be second-to-last. (and this file really _is_ supposed to be
+# in alphabetic order)
Acorn VIDC support
CONFIG_FB_ACORN
This is the frame buffer device driver for the Acorn VIDC graphics
- chipset.
+ hardware found in Acorn RISC PCs and other ARM-based machines. If
+ unsure, say N.
Apollo frame buffer device
CONFIG_FB_APOLLO
kernel. Please note that this driver DOES NOT support the
Cybervision 64 3D card, as they use incompatible video chips.
+CyberPro 20x0 support
+CONFIG_FB_CYBER2000
+ This enables support for the Integraphics CyberPro 20x0 and 5000
+  VGA chips used in the Rebel.com NetWinder and other machines.
+ Say Y if you have a NetWinder or a graphics card containing this
+ device, otherwise say N.
+
Amiga CyberVision3D support (EXPERIMENTAL)
CONFIG_FB_VIRGE
This enables support for the Cybervision 64/3D graphics card from
# LocalWords: adbmouse DRI DRM dlabs GMX PLCs Applicom fieldbus applicom int
# LocalWords: VWSND eg ESSSOLO CFU CFNR scribed eiconctrl eicon hylafax KFPU
# LocalWords: EXTRAPREC fpu mainboards KHTTPD kHTTPd khttpd Xcelerator
-# LocalWords: LOGIBUSMOUSE OV511 ov511
+# LocalWords: LOGIBUSMOUSE OV511 ov511 Integraphics
0xc0008000 as well.
But before executing the kernel, a ramdisk image must also be loaded in
-memory. Use memory address 0x00800000 for this.
+memory. Use memory address 0xd8000000 for this. Note that the file
+containing the (compressed) ramdisk image must not exceed 4 MB.
Currently supported:
- RS232 serial ports
- LCD screen
 - keyboard (badly needs to be cleaned up... any volunteer?)
+The actual Brutus support may be incomplete without extra patches.
+If such patches exist, they can be found at
+ftp.netwinder.org/users/n/nico.
+
Full PCMCIA support is still missing, although it is possible, with
small modifications, to hack some drivers so that they drive cards
already inserted at boot time.
remains to be done, and other ideas for the emulator.
Bug reports, comments, suggestions should be directed to me at
-<scottb@netwinder.com>. General reports of "this program doesn't
+<scottb@netwinder.org>. General reports of "this program doesn't
work correctly when your emulator is installed" are useful for
determining that bugs still exist; but are virtually useless when
attempting to isolate the problem. Please report them, but don't
-The BFS filesystem is used on SCO UnixWare machines for /stand slice.
-By default, if you attempt to mount it read-write it will be automatically
-mounted read-only. If you want to enable (limited) write support, you need
-to select "BFS write support" when configuring the kernel. The write support
-at this stage is limited to the blocks preallocated for a given inode.
-This means that writes beyond the value of inode->iu_eblock will fail with EIO.
-In particular, this means you can create empty files but not write data to them
-or you can write data to the existing files and increase their size but not the
-number of blocks allocated to them. I am currently working on removing this
-limitation, i.e. ability to migrate inodes within BFS filesystem.
+BFS FILESYSTEM FOR LINUX
+========================
+
+The BFS filesystem is used by SCO UnixWare OS for the /stand slice, which
+usually contains the kernel image and a few other files required for the
+boot process.
In order to access /stand partition under Linux you obviously need to
know the partition number and the kernel must support UnixWare disk slices
# mount -t bfs -o loop stand.img /mnt/stand
this will allocate the first available loopback device (and load loop.o
-kernel module if necessary) automatically. Beware that umount will not
+kernel module if necessary) automatically. If the loopback driver is not
+loaded automatically, make sure that your kernel is compiled with kmod
+support (CONFIG_KMOD) enabled. Beware that umount will not
deallocate /dev/loopN device if /etc/mtab file on your system is a
symbolic link to /proc/mounts. You will need to do it manually using
"-d" switch of losetup(8). Read losetup(8) manpage for more info.
# od -Ad -tx4 stand.img | more
-The first 4 bytes should be 0x1BADFACE.
+The first 4 bytes should be 0x1badface.
-If you have any questions or suggestions regarding this BFS implementation
-please contact me:
+If you have any patches, questions or suggestions regarding this BFS
+implementation please contact the author:
Tigran A. Aivazian <tigran@ocston.org>.
--- /dev/null
+ Linux kernel release 2.3.xx for the IA-64 Platform
+
+	These are the release notes for Linux version 2.3 for the IA-64
+	platform.  This document provides information specific to IA-64
+	ONLY; for additional information about the Linux kernel, also
+	read the original Linux README provided with the kernel.
+
+INSTALLING the kernel:
+
+  - IA-64 kernel installation is the same as on other platforms; see
+    the original README for details.
+
+
+SOFTWARE REQUIREMENTS
+
+	Compiling and running this kernel requires an IA-64 compliant
+	GCC compiler, as well as various software packages that have
+	also been compiled with an IA-64 compliant GCC compiler.
+
+
+CONFIGURING the kernel:
+
+	Configuration is the same; see the original README for details.
+
+
+COMPILING the kernel:
+
+  - Compiling this kernel doesn't differ from other platforms, so read
+    the original README for details, BUT make sure you have an IA-64
+    compliant GCC compiler.
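+
+    For example, a cross-compile might look like this (the ia64-linux-
+    prefix is illustrative; use whatever your toolchain is named):
+
+	make ARCH=ia64 CROSS_COMPILE=ia64-linux- vmlinux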
+
+IA-64 SPECIFICS
+
+ - Security related issues:
+
+    o mmap needs to check whether a mapping would overlap with the
+      address-space hole in a region or whether the mapping would be
+      across regions.  In both cases, mmap should fail (see the sketch
+      after this list).
+
+ o ptrace is a huge security hole right now as it does not reject
+ writing to security sensitive bits (such as the PSR!).
+
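+      A minimal sketch of the region-crossing half of that mmap check
+      (the region is selected by address bits 63-61 on IA-64; the
+      function name is illustrative, not the kernel's):
+
+	static int crosses_region (unsigned long addr, unsigned long len)
+	{
+		/* assumes len > 0; a mapping must stay within one of
+		   the eight regions selected by the top three bits */
+		return (addr >> 61) != ((addr + len - 1) >> 61);
+	}
+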
+ - General issues:
+
+ o Kernel modules aren't supported yet.
+
+ o For non-RT signals, siginfo isn't passed through from the kernel
+ to the point where the signal is actually delivered. Also, we
+ should make sure the siginfo data is compliant with the UNIX
+ ABI.
+
+ o Hardly any performance tuning has been done. Obvious targets
+ include the library routines (memcpy, IP checksum, etc.). Less
+ obvious targets include making sure we don't flush the TLB
+ needlessly, etc. Also, the TLB handlers should probably try to
+ do a speculative load from the virtually mapped linear page
+ table and only if that fails fall back on walking the page table
+ tree.
+
+    o Discontiguous large memory support; memory above 4GB will be
+      discontiguous since the 64MB directly below 4GB is reserved
+      for firmware and I/O space.
+
+ o Correct mapping for PAL runtime code; PAL code needs to be
+ mapped by a TR.
+
+    o Make the current IRQ/IOSAPIC handling closer to IA32, e.g.
+      disable/enable interrupts, use of the INPROGRESS flag, etc.
+
+    o clone system call implementation; needs to set up a proper
+      backing store
+
+ o SMP locks cleanup/optimization
+
+ o IA32 support. Currently experimental. It mostly works but
+ there are problems with some dynamically loaded programs.
+(C) 1997-1998 Caldera, Inc.
+(C) 1998 James Banks
+(C) 1999-2000 Torben Mathiasen <torben.mathiasen@compaq.com>
-
-I haven't had any time to do anything for a long time, and this isn't
-likely to change. So there's a driver here for anyone looking to
-carry forward a project :)
-
-For those who are looking for help, I can't. I haven't looked at
-a kernel since the early 2.0 series, so I won't know what's going on.
-Your best chance at help would be joining the TLAN mailing list and
-posting your question there.
-
-You can join by sending "subscribe tlan" in the body of an email to
-majordomo@vuser.vu.union.edu.
-
-Thanks to those who have (and who will ;) put work in to keep the TLAN
-driver working as the kernel moves on.
-
-James
-james@sovereign.org
-
-
-TLAN driver for Linux, version 1.0
+TLAN driver for Linux, version 1.3
README
but I do not expect any problems.
-II. Building the Driver.
-
- The TLAN driver may be compiled into the kernel, or it may be compiled
- as a module separately, or in the kernel. A patch is included for
- 2.0.29 (which also works for 2.0.30, 2.0.31, and 2.0.32).
-
- To compile it as part of the kernel:
- 1. Download and untar the TLAN driver package.
- 2. If your kernel is 2.1.45 or later, you do not need to patch the
- kernel sources. Copy the tlan.c and tlan.h to drivers/net in
- the kernel source tree.
- 3. Otherwise, apply the appropriate patch for your kernel. For
- example:
-
- cd /usr/src/linux
- patch -p1 < kernel.2.0.29
-
- 4. Copy the files tlan.c and tlan.h from the TLAN package to the
- directory drivers/net in the Linux kernel source tree.
- 5. Configure your kernel for the TLAN driver. Answer 'Y' when
- prompted to ask about experimental code (the first question).
- Then answer 'Y' when prompted if to include TI ThunderLAN
- support. If you want the driver compiled as a module, answer 'M'
- instead of 'Y'.
- 6. Make the kernel and, if necessary, the modules.
-
- To compile the TLAN driver independently:
- 1. Download and untar the TLAN driver package.
- 2. Change to the tlan directory.
- 3. If you are NOT using a versioned kernel (ie, want an non-
- versioned module), edit the Makefile, and comment out the
- line:
- MODVERSIONS = -DMODVERSIONS
- 4. Run 'make'.
-
-
-III. Driver Options
+II. Driver Options
1. You can append debug=x to the end of the insmod line to get
debug messages, where x is a bit field where the bits mean
the following:
device that does not have an AUI/BNC connector will probably
cause it to not function correctly.)
- 4. You can set duplex=1 to force half duplex, and duplex=2 to
+ 3. You can set duplex=1 to force half duplex, and duplex=2 to
force full duplex.
- 5. You can set speed=10 to force 10Mbs operation, and speed=100Mbs
+ 4. You can set speed=10 to force 10Mbs operation, and speed=100
to force 100Mbs operation. (I'm not sure what will happen
if a card which only supports 10Mbs is forced into 100Mbs
mode.)
- 3. If the driver is built into the kernel, you can use the 3rd
+ 5. If the driver is built into the kernel, you can use the 3rd
and 4th parameters to set aui and debug respectively. For
example:
+/* kernel-parameters are currently not supported. I will fix this asap. */
+
ether=0,0,0x1,0x7,eth0
This sets aui to 0x1 and debug to 0x7, assuming eth0 is a
The bits in the third byte are assigned as follows:
0x01 = aui
- 0x02 = use SA_INTERRUPT flag when reserving the irq.
0x04 = use half duplex
0x08 = use full duplex
0x10 = use 10BaseT
0x20 = use 100BaseTx
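
   For example, to ask a built-in driver for full duplex over
   100BaseTx, OR the bits together (0x08 | 0x20 = 0x28):

   ether=0,0,0x28,0,eth0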
-IV. Things to try if you have problems.
+III. Things to try if you have problems.
1. Make sure your card's PCI id is among those listed in
section I, above.
- 1. Make sure routing is correct.
- 2. If you are using a 2.1.x kernel, try to duplicate the
- problem on a 2.0.x (preferably 2.0.29 or 2.0.30) kernel.
+ 2. Make sure routing is correct.
+ 3. Try forcing different speed/duplex settings
There is also a tlan mailing list which you can join by sending "subscribe tlan"
L: linux-kernel@vger.rutgers.edu
S: Maintained
+SA1100 SUPPORT
+P: Nicolas Pitre
+M: nico@cam.org
+L: sa1100-linux@pa.dec.com
+S: Maintained
+
SBPCD CDROM DRIVER
P: Eberhard Moenkeberg
M: emoenke@gwdg.de
# CONFIG_FTAPE is not set
# CONFIG_DRM is not set
# CONFIG_DRM_TDFX is not set
+
+#
+# PCMCIA character device support
+#
+# CONFIG_PCMCIA_SERIAL_CS is not set
# CONFIG_AGP is not set
#
#
# CONFIG_WAN is not set
+#
+# PCMCIA network device support
+#
+# CONFIG_NET_PCMCIA is not set
+
#
# SCSI support
#
CONFIG_PARPORT=y
CONFIG_PARPORT_PC=y
CONFIG_PARPORT_PC_FIFO=y
+# CONFIG_PARPORT_PC_PCMCIA is not set
# CONFIG_PARPORT_ARC is not set
# CONFIG_PARPORT_AMIGA is not set
# CONFIG_PARPORT_MFC3 is not set
# CONFIG_DRM is not set
# CONFIG_DRM_TDFX is not set
+#
+# PCMCIA character device support
+#
+# CONFIG_PCMCIA_SERIAL_CS is not set
+# CONFIG_AGP is not set
+
#
# Support for USB
#
#
# CONFIG_WAN is not set
+#
+# PCMCIA network device support
+#
+# CONFIG_NET_PCMCIA is not set
+
#
# SCSI support
#
CONFIG_PARPORT=y
CONFIG_PARPORT_PC=y
# CONFIG_PARPORT_PC_FIFO is not set
+# CONFIG_PARPORT_PC_PCMCIA is not set
# CONFIG_PARPORT_ARC is not set
# CONFIG_PARPORT_AMIGA is not set
# CONFIG_PARPORT_MFC3 is not set
# CONFIG_FTAPE is not set
# CONFIG_DRM is not set
# CONFIG_DRM_TDFX is not set
+
+#
+# PCMCIA character device support
+#
+# CONFIG_PCMCIA_SERIAL_CS is not set
# CONFIG_AGP is not set
CONFIG_RPCMOUSE=y
#
# CONFIG_WAN is not set
+#
+# PCMCIA network device support
+#
+# CONFIG_NET_PCMCIA is not set
+
#
# SCSI support
#
CONFIG_PARPORT=y
CONFIG_PARPORT_PC=y
CONFIG_PARPORT_PC_FIFO=y
+# CONFIG_PARPORT_PC_PCMCIA is not set
# CONFIG_PARPORT_ARC is not set
# CONFIG_PARPORT_AMIGA is not set
# CONFIG_PARPORT_MFC3 is not set
# CONFIG_FTAPE is not set
# CONFIG_DRM is not set
# CONFIG_DRM_TDFX is not set
+
+#
+# PCMCIA character device support
+#
+# CONFIG_PCMCIA_SERIAL_CS is not set
# CONFIG_AGP is not set
#
#
# CONFIG_WAN is not set
+#
+# PCMCIA network device support
+#
+# CONFIG_NET_PCMCIA is not set
+
#
# SCSI support
#
ISA_DMA_OBJS += dma-isa.o
endif
-O_OBJS_arc = dma-arc.o iic.o fiq.o oldlatches.o
-O_OBJS_a5k = dma-a5k.o iic.o fiq.o
-O_OBJS_rpc = dma-rpc.o iic.o fiq.o
+O_OBJS_arc = dma-arc.o iic.o fiq.o time-acorn.o oldlatches.o
+O_OBJS_a5k = dma-a5k.o iic.o fiq.o time-acorn.o
+O_OBJS_rpc = dma-rpc.o iic.o fiq.o time-acorn.o
O_OBJS_ebsa110 = dma-dummy.o
O_OBJS_footbridge = dma-footbridge.o $(ISA_DMA_OBJS) isa.o
O_OBJS_nexuspci = dma-dummy.o
* Assign any unassigned resources. Note that we really ought to
* have min/max stuff here - max mem address is 0x0fffffff
*/
- pci_assign_unassigned_resources(hw_pci->io_start, hw_pci->mem_start);
+ pci_assign_unassigned_resources();
pci_fixup_irqs(hw_pci->swizzle, hw_pci->map_irq);
pci_set_bus_ranges();
return str;
}
-/*
- * Assign new address to PCI resource. We hope our resource information
- * is complete.
- *
- * Expects start=0, end=size-1, flags=resource type.
- */
-int pci_assign_resource(struct pci_dev *dev, int i)
+void pcibios_align_resource(void *data, struct resource *res, unsigned long size)
{
- return 0;
}
-void pcibios_align_resource(void *data, struct resource *res, unsigned long size)
+int pcibios_enable_device(struct pci_dev *dev)
{
+ u16 cmd, old_cmd;
+ int idx;
+ struct resource *r;
+
+ pci_read_config_word(dev, PCI_COMMAND, &cmd);
+ old_cmd = cmd;
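+	/*
+	 * Walk the six base address registers: a resource with a zero
+	 * start but a non-zero end was never assigned (it collided
+	 * with something), so the device must not be enabled.
+	 */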
+ for (idx = 0; idx < 6; idx++) {
+ r = dev->resource + idx;
+ if (!r->start && r->end) {
+ printk(KERN_ERR "PCI: Device %s not available because"
+ " of resource collisions\n", dev->slot_name);
+ return -EINVAL;
+ }
+ if (r->flags & IORESOURCE_IO)
+ cmd |= PCI_COMMAND_IO;
+ if (r->flags & IORESOURCE_MEM)
+ cmd |= PCI_COMMAND_MEMORY;
+ }
+ if (cmd != old_cmd) {
+ printk("PCI: enabling device %s (%04x -> %04x)\n",
+ dev->slot_name, old_cmd, cmd);
+ pci_write_config_word(dev, PCI_COMMAND, cmd);
+ }
+ return 0;
}
--- /dev/null
+/*
+ * linux/arch/arm/kernel/time-acorn.c
+ *
+ * Copyright (c) 1996-2000 Russell King.
+ *
+ * Changelog:
+ * 24-Sep-1996 RMK Created
+ * 10-Oct-1996 RMK Brought up to date with arch-sa110eval
+ * 04-Dec-1997 RMK Updated for new arch/arm/time.c
+ */
+#include <linux/sched.h>
+#include <linux/interrupt.h>
+#include <linux/init.h>
+
+#include <asm/hardware.h>
+#include <asm/io.h>
+#include <asm/ioc.h>
+#include <asm/irq.h>
+
+extern unsigned long (*gettimeoffset)(void);
+
+static unsigned long ioctime_gettimeoffset(void)
+{
+ unsigned int count1, count2, status1, status2;
+ unsigned long offset = 0;
+
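+	/*
+	 * Latch and read the timer-0 count twice, sampling the IRQ
+	 * status around the reads; comparing the two counts below
+	 * tells us whether the counter reloaded in between.
+	 */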
+ status1 = inb(IOC_IRQREQA);
+ barrier ();
+ outb (0, IOC_T0LATCH);
+ barrier ();
+ count1 = inb(IOC_T0CNTL) | (inb(IOC_T0CNTH) << 8);
+ barrier ();
+ status2 = inb(IOC_IRQREQA);
+ barrier ();
+ outb (0, IOC_T0LATCH);
+ barrier ();
+ count2 = inb(IOC_T0CNTL) | (inb(IOC_T0CNTH) << 8);
+
+ if (count2 < count1) {
+ /*
+ * This means that we haven't just had an interrupt
+ * while reading into status2.
+ */
+ if (status2 & (1 << 5))
+ offset = tick;
+ count1 = count2;
+ } else if (count2 > count1) {
+ /*
+ * We have just had another interrupt while reading
+ * status2.
+ */
+ offset += tick;
+ count1 = count2;
+ }
+
+ count1 = LATCH - count1;
+ /*
+ * count1 = number of clock ticks since last interrupt
+ */
+ offset += count1 * tick / LATCH;
+ return offset;
+}
+
+void __init ioctime_init(void)
+{
+ outb(LATCH & 255, IOC_T0LTCHL);
+ outb(LATCH >> 8, IOC_T0LTCHH);
+ outb(0, IOC_T0GO);
+
+ gettimeoffset = ioctime_gettimeoffset;
+}
#include <linux/errno.h>
#include <linux/sched.h>
#include <linux/kernel.h>
-#include <linux/param.h>
-#include <linux/string.h>
-#include <linux/mm.h>
#include <linux/interrupt.h>
#include <linux/time.h>
-#include <linux/delay.h>
#include <linux/init.h>
#include <linux/smp.h>
#include <asm/hardware.h>
extern int setup_arm_irq(int, struct irqaction *);
+extern void setup_timer(void);
extern volatile unsigned long lost_ticks;
/* change this if you have some constant time drift */
#define BIN_TO_BCD(val) ((val)=(((val)/10)<<4) + (val)%10)
#endif
+static int dummy_set_rtc(void)
+{
+ return 0;
+}
+
+/*
+ * hook for setting the RTC's idea of the current time.
+ */
+int (*set_rtc)(void) = dummy_set_rtc;
+
+static unsigned long dummy_gettimeoffset(void)
+{
+ return 0;
+}
+
+/*
+ * hook for getting the time offset
+ */
+unsigned long (*gettimeoffset)(void) = dummy_gettimeoffset;
+
/* Converts Gregorian date to seconds since 1970-01-01 00:00:00.
* Assumes input in normal date format, i.e. 1980-12-31 23:59:59
* => year=1980, mon=12, day=31, hour=23, min=59, sec=59.
 * machines where long is 32-bit! (However, as time_t is signed, we
* will already get problems at other places on 2038-01-19 03:14:08)
*/
-unsigned long mktime(unsigned int year, unsigned int mon,
- unsigned int day, unsigned int hour,
- unsigned int min, unsigned int sec)
+unsigned long
+mktime(unsigned int year, unsigned int mon, unsigned int day,
+ unsigned int hour, unsigned int min, unsigned int sec)
{
if (0 >= (int) (mon -= 2)) { /* 1..12 -> 11,12,1..10 */
mon += 12; /* Puts Feb last since it has leap day */
}
/*
- * Handle profile stuff...
+ * Handle kernel profile stuff...
*/
-static void do_profile(unsigned long pc)
+static inline void do_profile(struct pt_regs *regs)
{
- if (prof_buffer && current->pid) {
+ if (!user_mode(regs) &&
+ prof_buffer &&
+ current->pid) {
+ unsigned long pc = instruction_pointer(regs);
extern int _stext;
pc -= (unsigned long)&_stext;
}
}
-#include <asm/arch/time.h>
+static long next_rtc_update;
+
+/*
+ * If we have an externally synchronized linux clock, then update
+ * CMOS clock accordingly every ~11 minutes. set_rtc() has to be
+ * called as close as possible to 500 ms before the new second
+ * starts.
+ */
+static inline void do_set_rtc(void)
+{
+ if (time_status & STA_UNSYNC || set_rtc == NULL)
+ return;
+
+ if (next_rtc_update &&
+ time_before(xtime.tv_sec, next_rtc_update))
+ return;
+
+	if (xtime.tv_usec < 50000 - (tick >> 1) ||
+ xtime.tv_usec >= 50000 + (tick >> 1))
+ return;
+
+ if (set_rtc())
+ /*
+ * rtc update failed. Try again in 60s
+ */
+ next_rtc_update = xtime.tv_sec + 60;
+ else
+ next_rtc_update = xtime.tv_sec + 660;
+}
+
+#ifdef CONFIG_LEDS
-static unsigned long do_gettimeoffset(void)
+#include <asm/leds.h>
+
+static void do_leds(void)
{
- return gettimeoffset ();
+ static unsigned int count = 50;
+ static int last_pid;
+
+ if (current->pid != last_pid) {
+ last_pid = current->pid;
+ if (last_pid)
+ leds_event(led_idle_end);
+ else
+ leds_event(led_idle_start);
+ }
+
+ if (--count == 0) {
+ count = 50;
+ leds_event(led_timer);
+ }
}
+#else
+#define do_leds()
+#endif
void do_gettimeofday(struct timeval *tv)
{
save_flags_cli (flags);
*tv = xtime;
- tv->tv_usec += do_gettimeoffset();
+ tv->tv_usec += gettimeoffset();
/*
* xtime is atomically updated in timer_bh. lost_ticks is
* Discover what correction gettimeofday
* would have done, and then undo it!
*/
- tv->tv_usec -= do_gettimeoffset();
+ tv->tv_usec -= gettimeoffset();
if (tv->tv_usec < 0) {
tv->tv_usec += 1000000;
sti();
}
+static struct irqaction timer_irq = {
+ NULL, 0, 0, "timer", NULL, NULL
+};
+
+/*
+ * Include architecture specific code
+ */
+#include <asm/arch/time.h>
+
+/*
+ * This must cause the timer to start ticking.
+ * It doesn't have to set the current time from an
+ * RTC, though - that can be done later once we have
+ * some buses initialised.
+ */
void __init time_init(void)
{
xtime.tv_usec = 0;
+ xtime.tv_sec = 0;
setup_timer();
}
asmlinkage void
baddataabort(int code, unsigned long instr, struct pt_regs *regs)
{
- unsigned long phys, addr = instruction_pointer(regs);
+ unsigned long addr = instruction_pointer(regs);
#ifdef CONFIG_DEBUG_ERRORS
dump_instr(addr, 1);
pmd_t *pmd;
pmd = pmd_offset (pgd, addr);
printk (", *pmd = %08lx", pmd_val (*pmd));
- if (!pmd_none (*pmd)) {
- unsigned long ptr = pte_page(*pte_offset(pmd, addr));
- printk (", *pte = %08lx", pte_val (*pte_offset (pmd, addr)));
- phys = ptr + (addr & 0x7fff);
- }
+ if (!pmd_none (*pmd))
+ printk (", *pte = %08lx", pte_val(*pte_offset (pmd, addr)));
}
printk ("\n");
}
#include <linux/module.h>
+#include <linux/types.h>
#include <asm/io.h>
* Copy data from IO memory space to "real" memory space.
* This needs to be optimized.
*/
-void _memcpy_fromio(void * to, unsigned long from, unsigned long count)
+void _memcpy_fromio(void * to, unsigned long from, size_t count)
{
while (count) {
count--;
* Copy data from "real" memory space to IO memory space.
* This needs to be optimized.
*/
-void _memcpy_toio(unsigned long to, const void * from, unsigned long count)
+void _memcpy_toio(unsigned long to, const void * from, size_t count)
{
while (count) {
count--;
* "memset" on IO memory space.
* This needs to be optimized.
*/
-void _memset_io(unsigned long dst, int c, unsigned long count)
+void _memset_io(unsigned long dst, int c, size_t count)
{
while (count) {
count--;
* 'flags' are the extra L_PTE_ flags that you want to specify for this
* mapping. See include/asm-arm/proc-armv/pgtable.h for more information.
*/
-void * __ioremap(unsigned long phys_addr, unsigned long size, unsigned long flags)
+void * __ioremap(unsigned long phys_addr, size_t size, unsigned long flags)
{
void * addr;
struct vm_struct * area;
{
cr_alignment &= ~4;
cr_no_alignment &= ~4;
+ flush_cache_all();
set_cr(cr_alignment);
return 1;
}
{
cr_alignment &= ~(8|4);
cr_no_alignment &= ~(8|4);
+ flush_cache_all();
set_cr(cr_alignment);
return 1;
}
void __init pagetable_init(void)
{
- struct map_desc *init_maps, *p;
+ struct map_desc *init_maps, *p, *q;
unsigned long address = 0;
int i;
* pgdir entries that are not in the description.
*/
i = 0;
+ q = init_maps;
do {
- if (address < init_maps->virtual || init_maps == p) {
+ if (address < q->virtual || q == p) {
clear_mapping(address);
address += PGDIR_SIZE;
} else {
- create_mapping(init_maps);
+ create_mapping(q);
- address = init_maps->virtual + init_maps->length;
+ address = q->virtual + q->length;
address = (address + PGDIR_SIZE - 1) & PGDIR_MASK;
- init_maps ++;
+ q ++;
}
} while (address != 0);
EXPORT_SYMBOL(enable_irq);
EXPORT_SYMBOL(disable_irq);
EXPORT_SYMBOL(disable_irq_nosync);
+EXPORT_SYMBOL(probe_irq_mask);
EXPORT_SYMBOL(kernel_thread);
EXPORT_SYMBOL(acpi_idle);
EXPORT_SYMBOL(acpi_power_off);
--- /dev/null
+#
+# ia64/Makefile
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 1998, 1999 by David Mosberger-Tang <davidm@hpl.hp.com>
+#
+
+NM := $(CROSS_COMPILE)nm -B
+
+LINKFLAGS = -static -T arch/$(ARCH)/vmlinux.lds
+# next line is for HP compiler backend:
+#AFLAGS += -DGCC_RETVAL_POINTER_IN_R8
+# The next line is needed when compiling with the July snapshot of the Cygnus compiler:
+#EXTRA = -ma0-bugs -D__GCC_DOESNT_KNOW_IN_REGS__
+# next two lines are for the September snapshot of the Cygnus compiler:
+AFLAGS += -D__GCC_MULTIREG_RETVALS__
+EXTRA = -ma0-bugs -D__GCC_MULTIREG_RETVALS__
+
+CFLAGS := -g $(CFLAGS) -pipe $(EXTRA) -ffixed-r13 -mfixed-range=f10-f15,f32-f127
+
+ifdef CONFIG_IA64_GENERIC
+ CORE_FILES := arch/$(ARCH)/hp/hp.a \
+ arch/$(ARCH)/sn/sn.a \
+ arch/$(ARCH)/dig/dig.a \
+ $(CORE_FILES)
+ SUBDIRS := arch/$(ARCH)/hp \
+ arch/$(ARCH)/sn/sn1 \
+ arch/$(ARCH)/sn \
+ arch/$(ARCH)/dig \
+ $(SUBDIRS)
+
+else # !GENERIC
+
+ifeq ($(CONFIG_IA64_HP_SIM),y)
+ SUBDIRS := arch/$(ARCH)/hp \
+ $(SUBDIRS)
+ CORE_FILES := arch/$(ARCH)/hp/hp.a \
+ $(CORE_FILES)
+endif
+
+ifeq ($(CONFIG_IA64_SGI_SN1_SIM),y)
+ SUBDIRS := arch/$(ARCH)/sn/sn1 \
+ arch/$(ARCH)/sn \
+ $(SUBDIRS)
+ CORE_FILES := arch/$(ARCH)/sn/sn.a \
+ $(CORE_FILES)
+endif
+
+ifeq ($(CONFIG_IA64_SOFTSDV),y)
+ SUBDIRS := arch/$(ARCH)/dig \
+ $(SUBDIRS)
+ CORE_FILES := arch/$(ARCH)/dig/dig.a \
+ $(CORE_FILES)
+endif
+
+ifeq ($(CONFIG_IA64_DIG),y)
+ SUBDIRS := arch/$(ARCH)/dig \
+ $(SUBDIRS)
+ CORE_FILES := arch/$(ARCH)/dig/dig.a \
+ $(CORE_FILES)
+endif
+
+endif # !GENERIC
+
+ifeq ($(CONFIG_IA32_SUPPORT),y)
+ SUBDIRS := arch/$(ARCH)/ia32 $(SUBDIRS)
+ CORE_FILES := arch/$(ARCH)/ia32/ia32.o $(CORE_FILES)
+endif
+
+ifdef CONFIG_KDB
+ LIBS := $(LIBS) $(TOPDIR)/arch/$(ARCH)/kdb/kdb.a
+ SUBDIRS := $(SUBDIRS) arch/$(ARCH)/kdb
+endif
+
+HEAD := arch/$(ARCH)/kernel/head.o arch/ia64/kernel/init_task.o
+
+SUBDIRS := arch/$(ARCH)/tools arch/$(ARCH)/kernel arch/$(ARCH)/mm arch/$(ARCH)/lib $(SUBDIRS)
+CORE_FILES := arch/$(ARCH)/kernel/kernel.o arch/$(ARCH)/mm/mm.o $(CORE_FILES)
+
+LIBS := $(TOPDIR)/arch/$(ARCH)/lib/lib.a $(LIBS) \
+ $(TOPDIR)/arch/$(ARCH)/lib/lib.a
+
+MAKEBOOT = $(MAKE) -C arch/$(ARCH)/boot
+
+vmlinux: arch/$(ARCH)/vmlinux.lds
+
+arch/$(ARCH)/vmlinux.lds: arch/$(ARCH)/vmlinux.lds.S FORCE
+ gcc -D__ASSEMBLY__ -E -C -P -I$(HPATH) -I$(HPATH)/asm-$(ARCH) \
+ arch/$(ARCH)/vmlinux.lds.S > $@
+
+FORCE: ;
+
+rawboot:
+ @$(MAKEBOOT) rawboot
+
+#
+# My boot writes directly to a specific disk partition, I doubt most
+# people will want to do that without changes..
+#
+msb my-special-boot:
+ @$(MAKEBOOT) msb
+
+bootimage:
+ @$(MAKEBOOT) bootimage
+
+srmboot:
+ @$(MAKEBOOT) srmboot
+
+archclean:
+ @$(MAKE) -C arch/$(ARCH)/kernel clean
+ @$(MAKE) -C arch/$(ARCH)/tools clean
+ @$(MAKEBOOT) clean
+
+archmrproper:
+ rm -f arch/$(ARCH)/vmlinux.lds
+ @$(MAKE) -C arch/$(ARCH)/tools mrproper
+
+archdep:
+ @$(MAKEBOOT) dep
+
+bootpfile:
+ @$(MAKEBOOT) bootpfile
--- /dev/null
+#
+# ia64/boot/Makefile
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 1998 by David Mosberger-Tang <davidm@hpl.hp.com>
+#
+
+LINKFLAGS = -static -T bootloader.lds
+
+.S.s:
+	$(CC) -D__ASSEMBLY__ $(AFLAGS) -traditional -E -o $*.s $<
+.S.o:
+ $(CC) -D__ASSEMBLY__ $(AFLAGS) -traditional -c -o $*.o $<
+
+OBJECTS = bootloader.o
+TARGETS =
+
+ifdef CONFIG_IA64_HP_SIM
+ TARGETS += bootloader
+endif
+
+all: $(TARGETS)
+
+bootloader: $(OBJECTS)
+ $(LD) $(LINKFLAGS) $(OBJECTS) $(LIBS) -o bootloader
+
+clean:
+ rm -f $(TARGETS)
+
+dep:
--- /dev/null
+/*
+ * arch/ia64/boot/bootloader.c
+ *
+ * Loads an ELF kernel.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998, 1999 Stephane Eranian <eranian@hpl.hp.com>
+ *
+ * 01/07/99 S.Eranian modified to pass command line arguments to kernel
+ */
+#include <linux/elf.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+
+#include <asm/elf.h>
+#include <asm/pal.h>
+#include <asm/pgtable.h>
+#include <asm/sal.h>
+#include <asm/system.h>
+
+/* Simulator system calls: */
+
+#define SSC_CONSOLE_INIT 20
+#define SSC_GETCHAR 21
+#define SSC_PUTCHAR 31
+#define SSC_OPEN 50
+#define SSC_CLOSE 51
+#define SSC_READ 52
+#define SSC_WRITE 53
+#define SSC_GET_COMPLETION 54
+#define SSC_WAIT_COMPLETION 55
+#define SSC_CONNECT_INTERRUPT 58
+#define SSC_GENERATE_INTERRUPT 59
+#define SSC_SET_PERIODIC_INTERRUPT 60
+#define SSC_GET_RTC 65
+#define SSC_EXIT 66
+#define SSC_LOAD_SYMBOLS 69
+#define SSC_GET_TOD 74
+
+#define SSC_GET_ARGS 75
+
+struct disk_req {
+ unsigned long addr;
+ unsigned len;
+};
+
+struct disk_stat {
+ int fd;
+ unsigned count;
+};
+
+#include "../kernel/fw-emu.c"
+
+static void
+cons_write (const char *buf)
+{
+ unsigned long ch;
+
+ while ((ch = *buf++) != '\0') {
+ ssc(ch, 0, 0, 0, SSC_PUTCHAR);
+ if (ch == '\n')
+ ssc('\r', 0, 0, 0, SSC_PUTCHAR);
+ }
+}
+
+void
+enter_virtual_mode (unsigned long new_psr)
+{
+ asm volatile ("mov cr.ipsr=%0" :: "r"(new_psr));
+ asm volatile ("mov cr.iip=%0" :: "r"(&&target));
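+	/* "&&target" is gcc's address-of-label extension; rfi resumes there */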
+ asm volatile ("mov cr.ifs=r0");
+ asm volatile ("rfi;;"); /* must be last insn in an insn group */
+
+  target:
+	;	/* null statement - a label must precede a statement */
+}
+
+
+#define MAX_ARGS 32
+
+void
+_start (void)
+{
+ register long sp asm ("sp");
+ static char stack[16384] __attribute__ ((aligned (16)));
+ static char mem[4096];
+ static char buffer[1024];
+ unsigned long flags, off;
+ int fd, i;
+ struct disk_req req;
+ struct disk_stat stat;
+ struct elfhdr *elf;
+ struct elf_phdr *elf_phdr; /* program header */
+ unsigned long e_entry, e_phoff, e_phnum;
+ char *kpath, *args;
+ long arglen = 0;
+
+ asm volatile ("movl gp=__gp" ::: "memory");
+ asm volatile ("mov sp=%0" :: "r"(stack) : "memory");
+ asm volatile ("bsw.1;;");
+#ifdef CONFIG_ITANIUM_ASTEP_SPECIFIC
+	asm volatile ("nop 0;; nop 0;; nop 0;;");
+#endif /* CONFIG_ITANIUM_ASTEP_SPECIFIC */
+
+ ssc(0, 0, 0, 0, SSC_CONSOLE_INIT);
+
+ /*
+ * S.Eranian: extract the commandline argument from the
+ * simulator
+ *
+ * The expected format is as follows:
+ *
+ * kernelname args...
+ *
+ * Both are optional but you can't have the second one without the
+ * first.
+ */
+ arglen = ssc((long) buffer, 0, 0, 0, SSC_GET_ARGS);
+
+ kpath = "vmlinux";
+ args = buffer;
+ if (arglen > 0) {
+ kpath = buffer;
+ while (*args != ' ' && *args != '\0')
+ ++args, --arglen;
+ if (*args == ' ')
+ *args++ = '\0', --arglen;
+ }
+
+ if (arglen <= 0) {
+ args = "";
+ arglen = 1;
+ }
+
+ fd = ssc((long) kpath, 1, 0, 0, SSC_OPEN);
+
+ if (fd < 0) {
+ cons_write(kpath);
+ cons_write(": file not found, reboot now\n");
+ for(;;);
+ }
+ stat.fd = fd;
+ off = 0;
+
+ req.len = sizeof(mem);
+ req.addr = (long) mem;
+ ssc(fd, 1, (long) &req, off, SSC_READ);
+ ssc((long) &stat, 0, 0, 0, SSC_WAIT_COMPLETION);
+
+ elf = (struct elfhdr *) mem;
+	if (elf->e_ident[0] != 0x7f || strncmp(elf->e_ident + 1, "ELF", 3) != 0) {
+ cons_write("not an ELF file\n");
+ return;
+ }
+ if (elf->e_type != ET_EXEC) {
+ cons_write("not an ELF executable\n");
+ return;
+ }
+ if (!elf_check_arch(elf->e_machine)) {
+ cons_write("kernel not for this processor\n");
+ return;
+ }
+
+ e_entry = elf->e_entry;
+ e_phnum = elf->e_phnum;
+ e_phoff = elf->e_phoff;
+
+ cons_write("loading ");
+ cons_write(kpath);
+ cons_write("...\n");
+
+ for (i = 0; i < e_phnum; ++i) {
+ req.len = sizeof(*elf_phdr);
+ req.addr = (long) mem;
+ ssc(fd, 1, (long) &req, e_phoff, SSC_READ);
+ ssc((long) &stat, 0, 0, 0, SSC_WAIT_COMPLETION);
+ if (stat.count != sizeof(*elf_phdr)) {
+ cons_write("failed to read phdr\n");
+ return;
+ }
+ e_phoff += sizeof(*elf_phdr);
+
+ elf_phdr = (struct elf_phdr *) mem;
+ req.len = elf_phdr->p_filesz;
+ req.addr = __pa(elf_phdr->p_vaddr);
+ ssc(fd, 1, (long) &req, elf_phdr->p_offset, SSC_READ);
+ ssc((long) &stat, 0, 0, 0, SSC_WAIT_COMPLETION);
+ memset((char *)__pa(elf_phdr->p_vaddr) + elf_phdr->p_filesz, 0,
+ elf_phdr->p_memsz - elf_phdr->p_filesz);
+ }
+ ssc(fd, 0, 0, 0, SSC_CLOSE);
+
+ cons_write("starting kernel...\n");
+
+ /* fake an I/O base address: */
+ asm volatile ("mov ar.k0=%0" :: "r"(0xffffc000000UL));
+
+ /*
+ * Install a translation register that identity maps the
+ * kernel's 256MB page.
+ */
+ ia64_clear_ic(flags);
+ ia64_set_rr( 0, (0x1000 << 8) | (_PAGE_SIZE_1M << 2));
+ ia64_set_rr(PAGE_OFFSET, (ia64_rid(0, PAGE_OFFSET) << 8) | (_PAGE_SIZE_256M << 2));
+ ia64_srlz_d();
+ ia64_itr(0x3, 0, 1024*1024,
+ pte_val(mk_pte_phys(1024*1024, __pgprot(__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RWX))),
+ _PAGE_SIZE_1M);
+ ia64_itr(0x3, 1, PAGE_OFFSET,
+ pte_val(mk_pte_phys(0, __pgprot(__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RWX))),
+ _PAGE_SIZE_256M);
+ ia64_srlz_i();
+
+ enter_virtual_mode(flags | IA64_PSR_IT | IA64_PSR_IC | IA64_PSR_DT | IA64_PSR_RT
+ | IA64_PSR_DFH | IA64_PSR_BN);
+
+ sys_fw_init(args, arglen);
+
+ ssc(0, (long) kpath, 0, 0, SSC_LOAD_SYMBOLS);
+
+ /*
+ * Install the kernel's command line argument on ZERO_PAGE
+	 * just after the bootparam structure.
+	 * If we don't have any arguments, we just put '\0' there.
+ */
+ memcpy(((struct ia64_boot_param *)ZERO_PAGE_ADDR) + 1, args, arglen);
+ sp = __pa(&stack);
+
+ asm volatile ("br.sptk.few %0" :: "b"(e_entry));
+
+ cons_write("kernel returned!\n");
+ ssc(-1, 0, 0, 0, SSC_EXIT);
+}
--- /dev/null
+OUTPUT_FORMAT("elf64-ia64-little")
+OUTPUT_ARCH(ia64)
+ENTRY(_start)
+SECTIONS
+{
+ /* Read-only sections, merged into text segment: */
+ . = 0x100000;
+
+ _text = .;
+ .text : { *(__ivt_section) *(.text) }
+ _etext = .;
+
+ /* Global data */
+ _data = .;
+ .rodata : { *(.rodata) }
+ .data : { *(.data) *(.gnu.linkonce.d*) CONSTRUCTORS }
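+  /* Bias gp by 2MB so the signed 22-bit gp-relative immediates can
+     reach a 4MB window of global data. */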
+ __gp = ALIGN (8) + 0x200000;
+ .got : { *(.got.plt) *(.got) }
+ /* We want the small data sections together, so single-instruction offsets
+ can access them all, and initialized data all before uninitialized, so
+ we can shorten the on-disk segment size. */
+ .sdata : { *(.sdata) }
+ _edata = .;
+
+ _bss = .;
+ .sbss : { *(.sbss) *(.scommon) }
+ .bss : { *(.bss) *(COMMON) }
+ . = ALIGN(64 / 8);
+ _end = . ;
+
+ /* Stabs debugging sections. */
+ .stab 0 : { *(.stab) }
+ .stabstr 0 : { *(.stabstr) }
+ .stab.excl 0 : { *(.stab.excl) }
+ .stab.exclstr 0 : { *(.stab.exclstr) }
+ .stab.index 0 : { *(.stab.index) }
+ .stab.indexstr 0 : { *(.stab.indexstr) }
+ .comment 0 : { *(.comment) }
+ /* DWARF debug sections.
+ Symbols in the DWARF debugging sections are relative to the beginning
+ of the section so we begin them at 0. */
+ /* DWARF 1 */
+ .debug 0 : { *(.debug) }
+ .line 0 : { *(.line) }
+ /* GNU DWARF 1 extensions */
+ .debug_srcinfo 0 : { *(.debug_srcinfo) }
+ .debug_sfnames 0 : { *(.debug_sfnames) }
+ /* DWARF 1.1 and DWARF 2 */
+ .debug_aranges 0 : { *(.debug_aranges) }
+ .debug_pubnames 0 : { *(.debug_pubnames) }
+ /* DWARF 2 */
+ .debug_info 0 : { *(.debug_info) }
+ .debug_abbrev 0 : { *(.debug_abbrev) }
+ .debug_line 0 : { *(.debug_line) }
+ .debug_frame 0 : { *(.debug_frame) }
+ .debug_str 0 : { *(.debug_str) }
+ .debug_loc 0 : { *(.debug_loc) }
+ .debug_macinfo 0 : { *(.debug_macinfo) }
+ /* SGI/MIPS DWARF 2 extensions */
+ .debug_weaknames 0 : { *(.debug_weaknames) }
+ .debug_funcnames 0 : { *(.debug_funcnames) }
+ .debug_typenames 0 : { *(.debug_typenames) }
+ .debug_varnames 0 : { *(.debug_varnames) }
+ /* These must appear regardless of . */
+}
--- /dev/null
+mainmenu_name "Kernel configuration of Linux for IA-64 machines"
+
+mainmenu_option next_comment
+comment 'General setup'
+
+choice 'IA-64 system type' \
+ "Generic CONFIG_IA64_GENERIC \
+ HP-simulator CONFIG_IA64_HP_SIM \
+ SN1-simulator CONFIG_IA64_SGI_SN1_SIM \
+ DIG-compliant CONFIG_IA64_DIG" Generic
+
+choice 'Kernel page size' \
+ "4KB CONFIG_IA64_PAGE_SIZE_4KB \
+ 8KB CONFIG_IA64_PAGE_SIZE_8KB \
+ 16KB CONFIG_IA64_PAGE_SIZE_16KB \
+ 64KB CONFIG_IA64_PAGE_SIZE_64KB" 16KB
+
+if [ "$CONFIG_IA64_DIG" = "y" ]; then
+ bool ' Enable Itanium A-step specific code' CONFIG_ITANIUM_ASTEP_SPECIFIC
+ bool ' Enable SoftSDV hacks' CONFIG_IA64_SOFTSDV_HACKS n
+ bool ' Enable BigSur hacks' CONFIG_IA64_BIGSUR_HACKS y
+ bool ' Enable Lion hacks' CONFIG_IA64_LION_HACKS n
+ bool ' Emulate PAL/SAL/EFI firmware' CONFIG_IA64_FW_EMU n
+ bool ' Get PCI IRQ routing from firmware/ACPI' CONFIG_IA64_IRQ_ACPI y
+fi
+
+if [ "$CONFIG_IA64_GENERIC" = "y" ]; then
+ define_bool CONFIG_IA64_SOFTSDV_HACKS y
+fi
+
+if [ "$CONFIG_IA64_SGI_SN1_SIM" = "y" ]; then
+ define_bool CONFIG_NUMA y
+ define_bool CONFIG_IA64_SOFTSDV_HACKS y
+fi
+
+define_bool CONFIG_KCORE_ELF y # On IA-64, we always want an ELF /dev/kcore.
+
+bool 'SMP support' CONFIG_SMP n
+bool 'Performance monitor support' CONFIG_PERFMON n
+
+bool 'Networking support' CONFIG_NET n
+bool 'System V IPC' CONFIG_SYSVIPC n
+bool 'BSD Process Accounting' CONFIG_BSD_PROCESS_ACCT n
+bool 'Sysctl support' CONFIG_SYSCTL n
+tristate 'Kernel support for ELF binaries' CONFIG_BINFMT_ELF
+tristate 'Kernel support for MISC binaries' CONFIG_BINFMT_MISC
+
+bool 'PCI support' CONFIG_PCI n
+source drivers/pci/Config.in
+
+source drivers/pcmcia/Config.in
+
+mainmenu_option next_comment
+ comment 'Code maturity level options'
+ bool 'Prompt for development and/or incomplete code/drivers' \
+ CONFIG_EXPERIMENTAL n
+endmenu
+
+mainmenu_option next_comment
+ comment 'Loadable module support'
+ bool 'Enable loadable module support' CONFIG_MODULES n
+ if [ "$CONFIG_MODULES" = "y" ]; then
+ bool 'Set version information on all symbols for modules' CONFIG_MODVERSIONS n
+ bool 'Kernel module loader' CONFIG_KMOD n
+ fi
+endmenu
+
+source drivers/parport/Config.in
+
+endmenu
+
+source drivers/pnp/Config.in
+source drivers/block/Config.in
+source drivers/i2o/Config.in
+
+if [ "$CONFIG_NET" = "y" ]; then
+ source net/Config.in
+fi
+
+mainmenu_option next_comment
+comment 'SCSI support'
+
+tristate 'SCSI support' CONFIG_SCSI
+
+if [ "$CONFIG_SCSI" != "n" ]; then
+ source drivers/scsi/Config.in
+ bool 'Simulated SCSI disk' CONFIG_SCSI_SIM n
+fi
+endmenu
+
+if [ "$CONFIG_NET" = "y" ]; then
+ mainmenu_option next_comment
+ comment 'Network device support'
+
+ bool 'Network device support' CONFIG_NETDEVICES n
+ if [ "$CONFIG_NETDEVICES" = "y" ]; then
+ source drivers/net/Config.in
+ fi
+ endmenu
+fi
+
+source net/ax25/Config.in
+
+mainmenu_option next_comment
+comment 'ISDN subsystem'
+
+tristate 'ISDN support' CONFIG_ISDN
+if [ "$CONFIG_ISDN" != "n" ]; then
+ source drivers/isdn/Config.in
+fi
+endmenu
+
+mainmenu_option next_comment
+comment 'CD-ROM drivers (not for SCSI or IDE/ATAPI drives)'
+
+bool 'Support non-SCSI/IDE/ATAPI drives' CONFIG_CD_NO_IDESCSI n
+if [ "$CONFIG_CD_NO_IDESCSI" != "n" ]; then
+ source drivers/cdrom/Config.in
+fi
+endmenu
+
+source drivers/char/Config.in
+source drivers/usb/Config.in
+source drivers/misc/Config.in
+
+source fs/Config.in
+
+source fs/nls/Config.in
+
+if [ "$CONFIG_VT" = "y" ]; then
+ mainmenu_option next_comment
+ comment 'Console drivers'
+ bool 'VGA text console' CONFIG_VGA_CONSOLE n
+ if [ "$CONFIG_FB" = "y" ]; then
+ define_bool CONFIG_PCI_CONSOLE y
+ fi
+ source drivers/video/Config.in
+ endmenu
+fi
+
+mainmenu_option next_comment
+comment 'Sound'
+
+tristate 'Sound card support' CONFIG_SOUND
+if [ "$CONFIG_SOUND" != "n" ]; then
+ source drivers/sound/Config.in
+fi
+endmenu
+
+mainmenu_option next_comment
+comment 'Kernel hacking'
+
+#bool 'Debug kmalloc/kfree' CONFIG_DEBUG_MALLOC
+if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
+ tristate 'Kernel support for IA-32 emulation' CONFIG_IA32_SUPPORT
+ tristate 'Kernel FP software completion' CONFIG_MATHEMU
+else
+ define_bool CONFIG_MATHEMU y
+fi
+
+bool 'Magic SysRq key' CONFIG_MAGIC_SYSRQ n
+bool 'Early printk support (requires VGA!)' CONFIG_IA64_EARLY_PRINTK n
+bool 'Turn on compare-and-exchange bug checking (slow!)' CONFIG_IA64_DEBUG_CMPXCHG n
+bool 'Turn on irq debug checks (slow!)' CONFIG_IA64_DEBUG_IRQ n
+bool 'Print possible IA64 hazards to console' CONFIG_IA64_PRINT_HAZARDS n
+bool 'Built-in Kernel Debugger support' CONFIG_KDB
+if [ "$CONFIG_KDB" = "y" ]; then
+ bool 'Compile the kernel with frame pointers' CONFIG_KDB_FRAMEPTR
+ int 'KDB Kernel Symbol Table size?' CONFIG_KDB_STBSIZE 10000
+fi
+
+endmenu
--- /dev/null
+#
+# Automatically generated make config: don't edit
+#
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+
+#
+# Loadable module support
+#
+# CONFIG_MODULES is not set
+
+#
+# General setup
+#
+CONFIG_IA64_SIM=y
+CONFIG_PCI=y
+# CONFIG_PCI_QUIRKS is not set
+CONFIG_PCI_OLD_PROC=y
+# CONFIG_NET is not set
+# CONFIG_SYSVIPC is not set
+# CONFIG_BSD_PROCESS_ACCT is not set
+# CONFIG_SYSCTL is not set
+# CONFIG_BINFMT_ELF is not set
+# CONFIG_BINFMT_MISC is not set
+# CONFIG_BINFMT_JAVA is not set
+# CONFIG_BINFMT_EM86 is not set
+# CONFIG_PARPORT is not set
+
+#
+# Plug and Play support
+#
+# CONFIG_PNP is not set
+
+#
+# Block devices
+#
+# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_DEV_IDE is not set
+
+#
+# Please see Documentation/ide.txt for help/info on IDE drives
+#
+# CONFIG_BLK_DEV_HD_ONLY is not set
+
+#
+# Additional Block Devices
+#
+# CONFIG_BLK_DEV_LOOP is not set
+# CONFIG_BLK_DEV_NBD is not set
+# CONFIG_BLK_DEV_MD is not set
+# CONFIG_BLK_DEV_RAM is not set
+# CONFIG_BLK_DEV_XD is not set
+CONFIG_PARIDE_PARPORT=y
+# CONFIG_PARIDE is not set
+# CONFIG_BLK_DEV_HD is not set
+
+#
+# SCSI support
+#
+# CONFIG_SCSI is not set
+# CONFIG_SCSI_G_NCR5380_PORT is not set
+# CONFIG_SCSI_G_NCR5380_MEM is not set
+
+#
+# Amateur Radio support
+#
+# CONFIG_HAMRADIO is not set
+
+#
+# ISDN subsystem
+#
+# CONFIG_ISDN is not set
+
+#
+# CD-ROM drivers (not for SCSI or IDE/ATAPI drives)
+#
+# CONFIG_CD_NO_IDESCSI is not set
+
+#
+# Character devices
+#
+# CONFIG_VT is not set
+# CONFIG_SERIAL is not set
+# CONFIG_SERIAL_EXTENDED is not set
+# CONFIG_SERIAL_NONSTANDARD is not set
+# CONFIG_UNIX98_PTYS is not set
+# CONFIG_MOUSE is not set
+# CONFIG_QIC02_TAPE is not set
+# CONFIG_WATCHDOG is not set
+# CONFIG_RTC is not set
+CONFIG_EFI_RTC=y
+# CONFIG_VIDEO_DEV is not set
+# CONFIG_NVRAM is not set
+# CONFIG_JOYSTICK is not set
+
+#
+# Ftape, the floppy tape device driver
+#
+# CONFIG_FTAPE is not set
+# CONFIG_FT_NORMAL_DEBUG is not set
+# CONFIG_FT_FULL_DEBUG is not set
+# CONFIG_FT_NO_TRACE is not set
+# CONFIG_FT_NO_TRACE_AT_ALL is not set
+# CONFIG_FT_STD_FDC is not set
+# CONFIG_FT_MACH2 is not set
+# CONFIG_FT_PROBE_FC10 is not set
+# CONFIG_FT_ALT_FDC is not set
+
+#
+# Filesystems
+#
+# CONFIG_QUOTA is not set
+# CONFIG_MINIX_FS is not set
+# CONFIG_EXT2_FS is not set
+# CONFIG_ISO9660_FS is not set
+# CONFIG_FAT_FS is not set
+# CONFIG_PROC_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_NTFS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_ROMFS_FS is not set
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_UFS_FS is not set
+# CONFIG_BSD_DISKLABEL is not set
+# CONFIG_SMD_DISKLABEL is not set
+# CONFIG_SOLARIS_X86_PARTITION is not set
+# CONFIG_ADFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_MAC_PARTITION is not set
+# CONFIG_NLS is not set
+
+#
+# Sound
+#
+# CONFIG_SOUND is not set
+
+#
+# Kernel hacking
+#
+# CONFIG_MATHEMU is not set
+# CONFIG_MAGIC_SYSRQ is not set
--- /dev/null
+#
+# ia64/platform/dig/Makefile
+#
+# Copyright (C) 1999 Silicon Graphics, Inc.
+# Copyright (C) Srinivasa Thirumalachar (sprasad@engr.sgi.com)
+#
+
+.S.s:
+ $(CC) -D__ASSEMBLY__ $(AFLAGS) -E -o $*.s $<
+.S.o:
+ $(CC) -D__ASSEMBLY__ $(AFLAGS) -c -o $*.o $<
+
+all: dig.a
+
+O_TARGET = dig.a
+O_OBJS = iosapic.o setup.o
+
+ifeq ($(CONFIG_IA64_GENERIC),y)
+O_OBJS += machvec.o
+endif
+
+clean::
+
+include $(TOPDIR)/Rules.make
--- /dev/null
+/*
+ * Streamlined APIC support.
+ *
+ * Copyright (C) 1999 Intel Corp.
+ * Copyright (C) 1999 Asit Mallick <asit.k.mallick@intel.com>
+ * Copyright (C) 1999-2000 Hewlett-Packard Co.
+ * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999 VA Linux Systems
+ * Copyright (C) 1999,2000 Walt Drummond <drummond@valinux.com>
+ */
+#include <linux/config.h>
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/string.h>
+
+#include <asm/io.h>
+#include <asm/iosapic.h>
+#include <asm/irq.h>
+#include <asm/ptrace.h>
+#include <asm/system.h>
+#include <asm/delay.h>
+#include <asm/processor.h>
+
+#undef DEBUG_IRQ_ROUTING
+
+/*
+ * IRQ vectors 0..15 are treated as the legacy interrupts of the PC-AT
+ * platform. No new drivers should ever ask for specific irqs, but we
+ * provide compatibility here in case there is an old driver that does
+ * ask for specific irqs (serial, keyboard, stuff like that). Since
+ * IA-64 doesn't allow irq 0..15 to be used for external interrupts
+ * anyhow, this in no way prevents us from doing the Right Thing
+ * with new drivers.
+ */
+struct iosapic_vector iosapic_vector[NR_IRQS] = {
+ [0 ... NR_IRQS-1] = { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }
+};
+
+#ifndef CONFIG_IA64_IRQ_ACPI
+/*
+ * Defines the default interrupt routing information for the LION platform.
+ * XXX - this information should be obtained from ACPI, but is hardcoded
+ * here since we do not have ACPI AML support.
+ */
+
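+/*
+ * Judging from iosapic_get_PCI_irq_vector() below, the first four
+ * fields of each entry are {srcbus, srcbusno, srcbusirq, iosapic_pin};
+ * for PCI sources, srcbusirq encodes (slot << 2) | pin, so e.g. 0x04
+ * means slot 1, INTA.
+ */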
+struct intr_routing_entry intr_routing[] = {
+ {0,0,0,2,0,0,0,0},
+ {0,0,1,1,0,0,0,0},
+ {0,0,2,0xff,0,0,0,0},
+ {0,0,3,3,0,0,0,0},
+ {0,0,4,4,0,0,0,0},
+ {0,0,5,5,0,0,0,0},
+ {0,0,6,6,0,0,0,0},
+ {0,0,7,7,0,0,0,0},
+ {0,0,8,8,0,0,0,0},
+ {0,0,9,9,0,0,0,0},
+ {0,0,10,10,0,0,0,0},
+ {0,0,11,11,0,0,0,0},
+ {0,0,12,12,0,0,0,0},
+ {0,0,13,13,0,0,0,0},
+ {0,0,14,14,0,0,0,0},
+ {0,0,15,15,0,0,0,0},
+#ifdef CONFIG_IA64_LION_HACKS
+ {1, 0, 0x04, 16, 0, 0, 1, 1}, /* bus 0, device id 1, INTA */
+ {1, 0, 0x05, 26, 0, 0, 1, 1}, /* bus 0, device id 1, INTB */
+ {1, 0, 0x06, 36, 0, 0, 1, 1}, /* bus 0, device id 1, INTC */
+ {1, 0, 0x07, 42, 0, 0, 1, 1}, /* bus 0, device id 1, INTD */
+
+ {1, 0, 0x08, 17, 0, 0, 1, 1}, /* bus 0, device id 2, INTA */
+ {1, 0, 0x09, 27, 0, 0, 1, 1}, /* bus 0, device id 2, INTB */
+ {1, 0, 0x0a, 37, 0, 0, 1, 1}, /* bus 0, device id 2, INTC */
+ {1, 0, 0x0b, 42, 0, 0, 1, 1}, /* bus 0, device id 2, INTD */
+
+ {1, 0, 0x0f, 50, 0, 0, 1, 1}, /* bus 0, device id 3, INTD */
+
+ {1, 0, 0x14, 51, 0, 0, 1, 1}, /* bus 0, device id 5, INTA */
+
+ {1, 0, 0x18, 49, 0, 0, 1, 1}, /* bus 0, device id 6, INTA */
+
+ {1, 1, 0x04, 18, 0, 0, 1, 1}, /* bus 1, device id 1, INTA */
+ {1, 1, 0x05, 28, 0, 0, 1, 1}, /* bus 1, device id 1, INTB */
+ {1, 1, 0x06, 38, 0, 0, 1, 1}, /* bus 1, device id 1, INTC */
+ {1, 1, 0x07, 43, 0, 0, 1, 1}, /* bus 1, device id 1, INTD */
+
+ {1, 1, 0x08, 48, 0, 0, 1, 1}, /* bus 1, device id 2, INTA */
+
+ {1, 1, 0x0c, 19, 0, 0, 1, 1}, /* bus 1, device id 3, INTA */
+ {1, 1, 0x0d, 29, 0, 0, 1, 1}, /* bus 1, device id 3, INTB */
+ {1, 1, 0x0e, 38, 0, 0, 1, 1}, /* bus 1, device id 3, INTC */
+ {1, 1, 0x0f, 44, 0, 0, 1, 1}, /* bus 1, device id 3, INTD */
+
+ {1, 1, 0x10, 20, 0, 0, 1, 1}, /* bus 1, device id 4, INTA */
+ {1, 1, 0x11, 30, 0, 0, 1, 1}, /* bus 1, device id 4, INTB */
+ {1, 1, 0x12, 39, 0, 0, 1, 1}, /* bus 1, device id 4, INTC */
+ {1, 1, 0x13, 45, 0, 0, 1, 1}, /* bus 1, device id 4, INTD */
+
+ {1, 2, 0x04, 21, 0, 0, 1, 1}, /* bus 2, device id 1, INTA */
+ {1, 2, 0x05, 31, 0, 0, 1, 1}, /* bus 2, device id 1, INTB */
+ {1, 2, 0x06, 39, 0, 0, 1, 1}, /* bus 2, device id 1, INTC */
+ {1, 2, 0x07, 45, 0, 0, 1, 1}, /* bus 2, device id 1, INTD */
+
+ {1, 2, 0x08, 22, 0, 0, 1, 1}, /* bus 2, device id 2, INTA */
+ {1, 2, 0x09, 32, 0, 0, 1, 1}, /* bus 2, device id 2, INTB */
+ {1, 2, 0x0a, 40, 0, 0, 1, 1}, /* bus 2, device id 2, INTC */
+ {1, 2, 0x0b, 46, 0, 0, 1, 1}, /* bus 2, device id 2, INTD */
+
+ {1, 2, 0x0c, 23, 0, 0, 1, 1}, /* bus 2, device id 3, INTA */
+ {1, 2, 0x0d, 33, 0, 0, 1, 1}, /* bus 2, device id 3, INTB */
+ {1, 2, 0x0e, 40, 0, 0, 1, 1}, /* bus 2, device id 3, INTC */
+ {1, 2, 0x0f, 46, 0, 0, 1, 1}, /* bus 2, device id 3, INTD */
+
+ {1, 3, 0x04, 24, 0, 0, 1, 1}, /* bus 3, device id 1, INTA */
+ {1, 3, 0x05, 34, 0, 0, 1, 1}, /* bus 3, device id 1, INTB */
+ {1, 3, 0x06, 41, 0, 0, 1, 1}, /* bus 3, device id 1, INTC */
+ {1, 3, 0x07, 47, 0, 0, 1, 1}, /* bus 3, device id 1, INTD */
+
+ {1, 3, 0x08, 25, 0, 0, 1, 1}, /* bus 3, device id 2, INTA */
+ {1, 3, 0x09, 35, 0, 0, 1, 1}, /* bus 3, device id 2, INTB */
+ {1, 3, 0x0a, 41, 0, 0, 1, 1}, /* bus 3, device id 2, INTC */
+ {1, 3, 0x0b, 47, 0, 0, 1, 1}, /* bus 3, device id 2, INTD */
+#else
+ /*
+ * BigSur platform, bus 0, device 1,2,4 and bus 1 device 0-3
+ */
+ {1,1,0x0,19,0,0,1,1}, /* bus 1, device id 0, INTA */
+ {1,1,0x1,18,0,0,1,1}, /* bus 1, device id 0, INTB */
+ {1,1,0x2,17,0,0,1,1}, /* bus 1, device id 0, INTC */
+ {1,1,0x3,16,0,0,1,1}, /* bus 1, device id 0, INTD */
+
+ {1,1,0x4,23,0,0,1,1}, /* bus 1, device id 1, INTA */
+ {1,1,0x5,22,0,0,1,1}, /* bus 1, device id 1, INTB */
+ {1,1,0x6,21,0,0,1,1}, /* bus 1, device id 1, INTC */
+ {1,1,0x7,20,0,0,1,1}, /* bus 1, device id 1, INTD */
+
+ {1,1,0x8,27,0,0,1,1}, /* bus 1, device id 2, INTA */
+ {1,1,0x9,26,0,0,1,1}, /* bus 1, device id 2, INTB */
+ {1,1,0xa,25,0,0,1,1}, /* bus 1, device id 2, INTC */
+ {1,1,0xb,24,0,0,1,1}, /* bus 1, device id 2, INTD */
+
+ {1,1,0xc,31,0,0,1,1}, /* bus 1, device id 3, INTA */
+ {1,1,0xd,30,0,0,1,1}, /* bus 1, device id 3, INTB */
+ {1,1,0xe,29,0,0,1,1}, /* bus 1, device id 3, INTC */
+ {1,1,0xf,28,0,0,1,1}, /* bus 1, device id 3, INTD */
+
+ {1,0,0x4,35,0,0,1,1}, /* bus 0, device id 1, INTA */
+ {1,0,0x5,34,0,0,1,1}, /* bus 0, device id 1, INTB */
+ {1,0,0x6,33,0,0,1,1}, /* bus 0, device id 1, INTC */
+ {1,0,0x7,32,0,0,1,1}, /* bus 0, device id 1, INTD */
+
+ {1,0,0x8,39,0,0,1,1}, /* bus 0, device id 2, INTA */
+ {1,0,0x9,38,0,0,1,1}, /* bus 0, device id 2, INTB */
+ {1,0,0xa,37,0,0,1,1}, /* bus 0, device id 2, INTC */
+ {1,0,0xb,36,0,0,1,1}, /* bus 0, device id 2, INTD */
+
+ {1,0,0x10,43,0,0,1,1}, /* bus 0, device id 4, INTA */
+ {1,0,0x11,42,0,0,1,1}, /* bus 0, device id 4, INTB */
+ {1,0,0x12,41,0,0,1,1}, /* bus 0, device id 4, INTC */
+ {1,0,0x13,40,0,0,1,1}, /* bus 0, device id 4, INTD */
+
+ {1,0,0x14,17,0,0,1,1}, /* bus 0, device id 5, INTA */
+ {1,0,0x18,18,0,0,1,1}, /* bus 0, device id 6, INTA */
+ {1,0,0x1c,19,0,0,1,1}, /* bus 0, device id 7, INTA */
+#endif
+ {0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff},
+};
+
+int
+iosapic_get_PCI_irq_vector(int bus, int slot, int pci_pin)
+{
+ int i = -1;
+
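+	/*
+	 * For PCI entries, intr_routing[].srcbusirq encodes the device
+	 * as (slot << 2) | INTx-pin, so match on that.
+	 */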
+ while (intr_routing[++i].srcbus != 0xff) {
+ if (intr_routing[i].srcbus == BUS_PCI) {
+ if ((intr_routing[i].srcbusirq == ((slot << 2) | pci_pin))
+ && (intr_routing[i].srcbusno == bus)) {
+ return(intr_routing[i].iosapic_pin);
+ }
+ }
+ }
+ return -1;
+}
+
+#else /* CONFIG_IA64_IRQ_ACPI */
+
+/*
+ * find the IRQ in the IOSAPIC map for the PCI device on bus/slot/pin
+ */
+int
+iosapic_get_PCI_irq_vector(int bus, int slot, int pci_pin)
+{
+ int i;
+
+ for (i = 0; i < NR_IRQS; i++) {
+ if ((iosapic_bustype(i) == BUS_PCI) &&
+ (iosapic_bus(i) == bus) &&
+ (iosapic_busdata(i) == ((slot << 16) | pci_pin))) {
+ return i;
+ }
+ }
+
+ return -1;
+}
+#endif /* !CONFIG_IA64_IRQ_ACPI */
+
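+/*
+ * Program an IOSAPIC redirection table entry (RTE).  Each 64-bit RTE
+ * is written as two 32-bit halves through the indirect REG_SELECT/
+ * WINDOW register pair: polarity, trigger, delivery mode and vector
+ * form the low word; the destination id/eid pair forms the high word.
+ */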
+static void
+set_rte (unsigned long iosapic_addr, int entry, int pol, int trigger, int delivery,
+ long dest, int vector)
+{
+ int low32;
+ int high32;
+
+ low32 = ((pol << IO_SAPIC_POLARITY_SHIFT) |
+ (trigger << IO_SAPIC_TRIGGER_SHIFT) |
+ (delivery << IO_SAPIC_DELIVERY_SHIFT) |
+ vector);
+
+ /* dest contains both id and eid */
+ high32 = (dest << IO_SAPIC_DEST_SHIFT);
+
+ /*
+ * program the rte
+ */
+ writel(IO_SAPIC_RTE_HIGH(entry), iosapic_addr + IO_SAPIC_REG_SELECT);
+ writel(high32, iosapic_addr + IO_SAPIC_WINDOW);
+ writel(IO_SAPIC_RTE_LOW(entry), iosapic_addr + IO_SAPIC_REG_SELECT);
+ writel(low32, iosapic_addr + IO_SAPIC_WINDOW);
+}
+
+
+static void
+enable_pin (unsigned int pin, unsigned long iosapic_addr)
+{
+ int low32;
+
+ writel(IO_SAPIC_RTE_LOW(pin), iosapic_addr + IO_SAPIC_REG_SELECT);
+ low32 = readl(iosapic_addr + IO_SAPIC_WINDOW);
+
+ low32 &= ~(1 << IO_SAPIC_MASK_SHIFT); /* Zero only the mask bit */
+ writel(low32, iosapic_addr + IO_SAPIC_WINDOW);
+}
+
+
+static void
+disable_pin (unsigned int pin, unsigned long iosapic_addr)
+{
+ int low32;
+
+ writel(IO_SAPIC_RTE_LOW(pin), iosapic_addr + IO_SAPIC_REG_SELECT);
+ low32 = readl(iosapic_addr + IO_SAPIC_WINDOW);
+
+ low32 |= (1 << IO_SAPIC_MASK_SHIFT); /* Set only the mask bit */
+ writel(low32, iosapic_addr + IO_SAPIC_WINDOW);
+}
+
+#define iosapic_shutdown_irq iosapic_disable_irq
+
+static void
+iosapic_enable_irq (unsigned int irq)
+{
+ int pin = iosapic_pin(irq);
+
+ if (pin < 0)
+ /* happens during irq auto probing... */
+ return;
+ enable_pin(pin, iosapic_addr(irq));
+}
+
+static void
+iosapic_disable_irq (unsigned int irq)
+{
+ int pin = iosapic_pin(irq);
+
+ if (pin < 0)
+ return;
+ disable_pin(pin, iosapic_addr(irq));
+}
+
+unsigned int
+iosapic_version(unsigned long base_addr)
+{
+ /*
+ * IOSAPIC Version Register return 32 bit structure like:
+ * {
+ * unsigned int version : 8;
+ * unsigned int reserved1 : 8;
+ * unsigned int pins : 8;
+ * unsigned int reserved2 : 8;
+ * }
+ */
+ writel(IO_SAPIC_VERSION, base_addr + IO_SAPIC_REG_SELECT);
+	return readl(base_addr + IO_SAPIC_WINDOW);
+}
+
+static int
+iosapic_handle_irq (unsigned int irq, struct pt_regs *regs)
+{
+ struct irqaction *action = 0;
+ struct irq_desc *id = irq_desc + irq;
+ unsigned int status;
+ int retval;
+
+ spin_lock(&irq_controller_lock);
+ {
+ status = id->status;
+
+ /* do we need to do something IOSAPIC-specific to ACK the irq here??? */
+ /* Yes, but only level-triggered interrupts. We'll do that later */
+ if ((status & IRQ_INPROGRESS) == 0 && (status & IRQ_ENABLED) != 0) {
+ action = id->action;
+ status |= IRQ_INPROGRESS;
+ }
+ id->status = status & ~(IRQ_REPLAY | IRQ_WAITING);
+ }
+ spin_unlock(&irq_controller_lock);
+
+ if (!action) {
+ if (!(id->status & IRQ_AUTODETECT))
+			printk("iosapic_handle_irq: unexpected interrupt %u; "
+			       "disabling it (status=%x)\n", irq, id->status);
+ /*
+ * If we don't have a handler, disable the pin so we
+ * won't get any further interrupts (until
+ * re-enabled). --davidm 99/12/17
+ */
+ iosapic_disable_irq(irq);
+ return 0;
+ }
+
+ retval = invoke_irq_handlers (irq, regs, action);
+
+ if (iosapic_trigger(irq) == IO_SAPIC_LEVEL) /* ACK Level trigger interrupts */
+ writel(irq, iosapic_addr(irq) + IO_SAPIC_EOI);
+
+ spin_lock(&irq_controller_lock);
+ {
+ status = (id->status & ~IRQ_INPROGRESS);
+ id->status = status;
+ }
+ spin_unlock(&irq_controller_lock);
+
+ return retval;
+}
+
+void __init
+iosapic_init (unsigned long addr)
+{
+ int i;
+#ifdef CONFIG_IA64_IRQ_ACPI
+ struct pci_vector_struct *vectors;
+ int irq;
+#else
+ int vector;
+#endif
+
+ /*
+ * Disable all local interrupts
+ */
+
+ ia64_set_itv(0, 1);
+ ia64_set_lrr0(0, 1);
+ ia64_set_lrr1(0, 1);
+
+ /*
+ * Disable the compatibility mode interrupts (8259 style), needs IN/OUT support
+ * enabled.
+ */
+
+ outb(0xff, 0xA1);
+ outb(0xff, 0x21);
+
+#if defined(CONFIG_IA64_SOFTSDV_HACKS)
+ memset(iosapic_vector, 0x0, sizeof(iosapic_vector));
+ for (i = 0; i < NR_IRQS; i++) {
+ iosapic_pin(i) = 0xff;
+ iosapic_addr(i) = (unsigned long) ioremap(IO_SAPIC_DEFAULT_ADDR, 0);
+ }
+ /* XXX this should come from systab or some such: */
+ iosapic_pin(TIMER_IRQ) = 5; /* System Clock Interrupt */
+ iosapic_pin(0x40) = 3; /* Keyboard */
+ iosapic_pin(0x92) = 9; /* COM1 Serial Port */
+ iosapic_pin(0x80) = 4; /* Periodic Interrupt */
+ iosapic_pin(0xc0) = 2; /* Mouse */
+ iosapic_pin(0xe0) = 1; /* IDE Disk */
+ iosapic_pin(0xf0) = 6; /* E-IDE CDROM */
+ iosapic_pin(0xa0) = 10; /* Real PCI Interrupt */
+#elif !defined(CONFIG_IA64_IRQ_ACPI)
+ /*
+ * For systems where the routing info in ACPI is
+ * unavailable/wrong, use the intr_routing information to
+ * initialize the iosapic array
+ */
+ i = -1;
+ while (intr_routing[++i].srcbus != 0xff) {
+ if (intr_routing[i].srcbus == BUS_ISA) {
+ vector = map_legacy_irq(intr_routing[i].srcbusirq);
+ } else if (intr_routing[i].srcbus == BUS_PCI) {
+ vector = intr_routing[i].iosapic_pin;
+ } else {
+ printk("unknown bus type %d for intr_routing[%d]\n",
+ intr_routing[i].srcbus, i);
+ continue;
+ }
+ iosapic_pin(vector) = intr_routing[i].iosapic_pin;
+ iosapic_dmode(vector) = intr_routing[i].mode;
+ iosapic_polarity(vector) = intr_routing[i].polarity;
+ iosapic_trigger(vector) = intr_routing[i].trigger;
+# ifdef DEBUG_IRQ_ROUTING
+ printk("irq[0x%x(0x%x)]:0x%x, %d, %d, %d\n", vector, intr_routing[i].srcbusirq,
+ iosapic_pin(vector), iosapic_dmode(vector), iosapic_polarity(vector),
+ iosapic_trigger(vector));
+# endif
+ }
+#else /* !defined(CONFIG_IA64_SOFTSDV_HACKS) && !defined(CONFIG_IA64_IRQ_ACPI) */
+ /*
+	 * Map the legacy ISA devices into the IOSAPIC data; we'll override these
+ * later with data from the ACPI Interrupt Source Override table.
+ *
+ * Huh, the Lion w/ FPSWA firmware has entries for _all_ of the legacy IRQs,
+ * including those that are not different from PC/AT standard. I don't know
+ * if this is a bug in the other firmware or not. I'm going to leave this code
+ * here, so that this works on BigSur but will go ask Intel. --wfd 2000-Jan-19
+ *
+ */
+	for (i = 0; i < IA64_MIN_VECTORED_IRQ; i++) {
+ irq = map_legacy_irq(i);
+ iosapic_pin(irq) = i;
+ iosapic_bus(irq) = BUS_ISA;
+ iosapic_busdata(irq) = 0;
+ iosapic_dmode(irq) = IO_SAPIC_LOWEST_PRIORITY;
+ iosapic_trigger(irq) = IO_SAPIC_EDGE;
+ iosapic_polarity(irq) = IO_SAPIC_POL_HIGH;
+#ifdef DEBUG_IRQ_ROUTING
+ printk("ISA: IRQ %02x -> Vector %02x IOSAPIC Pin %d\n", i, irq, iosapic_pin(irq));
+#endif
+ }
+
+ /*
+ * Map the PCI Interrupt data into the ACPI IOSAPIC data using
+ * the info that the bootstrap loader passed to us.
+ */
+ ia64_boot_param.pci_vectors = (__u64) __va(ia64_boot_param.pci_vectors);
+ vectors = (struct pci_vector_struct *) ia64_boot_param.pci_vectors;
+ for (i = 0; i < ia64_boot_param.num_pci_vectors; i++) {
+ irq = map_legacy_irq(vectors[i].irq);
+
+ iosapic_bustype(irq) = BUS_PCI;
+ iosapic_pin(irq) = irq - iosapic_baseirq(irq);
+ iosapic_bus(irq) = vectors[i].bus;
+ /*
+ * Map the PCI slot and pin data into iosapic_busdata()
+ */
+ iosapic_busdata(irq) = (vectors[i].pci_id & 0xffff0000) | vectors[i].pin;
+
+ /* Default settings for PCI */
+ iosapic_dmode(irq) = IO_SAPIC_LOWEST_PRIORITY;
+ iosapic_trigger(irq) = IO_SAPIC_LEVEL;
+ iosapic_polarity(irq) = IO_SAPIC_POL_LOW;
+
+#ifdef DEBUG_IRQ_ROUTING
+ printk("PCI: BUS %d Slot %x Pin %x IRQ %02x --> Vector %02x IOSAPIC Pin %d\n",
+ vectors[i].bus, vectors[i].pci_id>>16, vectors[i].pin, vectors[i].irq,
+ irq, iosapic_pin(irq));
+#endif
+ }
+#endif /* !CONFIG_IA64_IRQ_ACPI */
+}
+
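+/*
+ * Program and unmask the RTE for this IRQ, targeting the local CPU
+ * (the id/eid pair taken from cr.lid).  The timer interrupt is
+ * delivered internally and has no IOSAPIC pin, so it is skipped.
+ */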
+static void
+iosapic_startup_irq (unsigned int irq)
+{
+ int pin;
+
+ if (irq == TIMER_IRQ)
+ return;
+ pin = iosapic_pin(irq);
+ if (pin < 0)
+ /* happens during irq auto probing... */
+ return;
+ set_rte(iosapic_addr(irq), pin, iosapic_polarity(irq), iosapic_trigger(irq),
+ iosapic_dmode(irq), (ia64_get_lid() >> 16) & 0xffff, irq);
+ enable_pin(pin, iosapic_addr(irq));
+}
+
+struct hw_interrupt_type irq_type_iosapic = {
+ "IOSAPIC",
+ iosapic_init,
+ iosapic_startup_irq,
+ iosapic_shutdown_irq,
+ iosapic_handle_irq,
+ iosapic_enable_irq,
+ iosapic_disable_irq
+};
+
+void
+dig_irq_init (struct irq_desc desc[NR_IRQS])
+{
+ int i;
+
+ /*
+ * Claim all non-legacy irq vectors as ours unless they're
+ * claimed by someone else already (e.g., timer or IPI are
+ * handled internally).
+ */
+ for (i = IA64_MIN_VECTORED_IRQ; i <= IA64_MAX_VECTORED_IRQ; ++i) {
+ if (irq_desc[i].handler == &irq_type_default)
+ irq_desc[i].handler = &irq_type_iosapic;
+ }
+}
+
+void
+dig_pci_fixup (void)
+{
+ struct pci_dev *dev;
+ int irq;
+ unsigned char pin;
+
+ pci_for_each_dev(dev) {
+ pci_read_config_byte(dev, PCI_INTERRUPT_PIN, &pin);
+ if (pin) {
+ pin--; /* interrupt pins are numbered starting from 1 */
+ irq = iosapic_get_PCI_irq_vector(dev->bus->number, PCI_SLOT(dev->devfn),
+ pin);
+ if (irq < 0 && dev->bus->parent) { /* go back to the bridge */
+ struct pci_dev * bridge = dev->bus->self;
+
+ /* do the bridge swizzle... */
+ pin = (pin + PCI_SLOT(dev->devfn)) % 4;
+ irq = iosapic_get_PCI_irq_vector(bridge->bus->number,
+ PCI_SLOT(bridge->devfn), pin);
+ if (irq >= 0)
+ printk(KERN_WARNING
+ "PCI: using PPB(B%d,I%d,P%d) to get irq %02x\n",
+ bridge->bus->number, PCI_SLOT(bridge->devfn),
+ pin, irq);
+ }
+ if (irq >= 0) {
+ printk("PCI->APIC IRQ transform: (B%d,I%d,P%d) -> %02x\n",
+ dev->bus->number, PCI_SLOT(dev->devfn), pin, irq);
+ dev->irq = irq;
+ }
+ }
+		/*
+		 * Fix out-of-range IRQ numbers.
+		 */
+ if (dev->irq >= NR_IRQS)
+ dev->irq = 15; /* Spurious interrupts */
+ }
+}
--- /dev/null
+#include <asm/machvec_init.h>
+#include <asm/machvec_dig.h>
+
+MACHVEC_DEFINE(dig)
--- /dev/null
+/*
+ * Platform dependent support for DIG-compliant platforms (e.g., Lion, BigSur).
+ *
+ * Copyright (C) 1999 Intel Corp.
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999 VA Linux Systems
+ * Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
+ * Copyright (C) 1999 Vijay Chander <vijay@engr.sgi.com>
+ */
+#include <linux/config.h>
+
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/kernel.h>
+#include <linux/kdev_t.h>
+#include <linux/string.h>
+#include <linux/tty.h>
+#include <linux/console.h>
+#include <linux/timex.h>
+#include <linux/sched.h>
+#include <linux/mc146818rtc.h>
+
+#include <asm/io.h>
+#include <asm/machvec.h>
+#include <asm/system.h>
+
+#ifdef CONFIG_IA64_FW_EMU
+# include "../../kernel/fw-emu.c"
+#endif
+
+/*
+ * This is here so we can use the CMOS detection in ide-probe.c to
+ * determine what drives are present. In theory, we don't need this
+ * as the auto-detection could be done via ide-probe.c:do_probe() but
+ * in practice that would be much slower, which is painful when
+ * running in the simulator. Note that passing zeroes in DRIVE_INFO
+ * is sufficient (the IDE driver will autodetect the drive geometry).
+ */
+char drive_info[4*16];
+
+unsigned char aux_device_present = 0xaa; /* XXX remove this when legacy I/O is gone */
+
+void __init
+dig_setup (char **cmdline_p)
+{
+ unsigned int orig_x, orig_y, num_cols, num_rows, font_height;
+
+ /*
+ * This assumes that the EFI partition is physical disk 1
+ * partition 1 and the Linux root disk is physical disk 1
+ * partition 2.
+ */
+#ifdef CONFIG_IA64_LION_HACKS
+ /* default to /dev/sda2 on Lion... */
+ ROOT_DEV = to_kdev_t(0x0802); /* default to second partition on first drive */
+#else
+	/* default to /dev/hda2 on BigSur... */
+ ROOT_DEV = to_kdev_t(0x0302); /* default to second partition on first drive */
+#endif
+
+#ifdef CONFIG_SMP
+ init_smp_config();
+#endif
+
+ memset(&screen_info, 0, sizeof(screen_info));
+
+ if (!ia64_boot_param.console_info.num_rows
+ || !ia64_boot_param.console_info.num_cols)
+ {
+ printk("dig_setup: warning: invalid screen-info, guessing 80x25\n");
+ orig_x = 0;
+ orig_y = 0;
+ num_cols = 80;
+ num_rows = 25;
+ font_height = 16;
+ } else {
+ orig_x = ia64_boot_param.console_info.orig_x;
+ orig_y = ia64_boot_param.console_info.orig_y;
+ num_cols = ia64_boot_param.console_info.num_cols;
+ num_rows = ia64_boot_param.console_info.num_rows;
+ font_height = 400 / num_rows;
+ }
+
+ screen_info.orig_x = orig_x;
+ screen_info.orig_y = orig_y;
+ screen_info.orig_video_cols = num_cols;
+ screen_info.orig_video_lines = num_rows;
+ screen_info.orig_video_points = font_height;
+ screen_info.orig_video_mode = 3; /* XXX fake */
+ screen_info.orig_video_isVGA = 1; /* XXX fake */
+ screen_info.orig_video_ega_bx = 3; /* XXX fake */
+}
--- /dev/null
+#
+# ia64/platform/hp/Makefile
+#
+# Copyright (C) 1999 Silicon Graphics, Inc.
+# Copyright (C) Srinivasa Thirumalachar (sprasad@engr.sgi.com)
+#
+
+all: hp.a
+
+O_TARGET = hp.a
+O_OBJS = hpsim_console.o hpsim_irq.o hpsim_setup.o
+
+ifeq ($(CONFIG_IA64_GENERIC),y)
+O_OBJS += hpsim_machvec.o
+endif
+
+clean::
+
+include $(TOPDIR)/Rules.make
--- /dev/null
+/*
+ * Platform dependent support for HP simulator.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999 Vijay Chander <vijay@engr.sgi.com>
+ */
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/param.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kdev_t.h>
+#include <linux/console.h>
+
+#include <asm/delay.h>
+#include <asm/irq.h>
+#include <asm/pal.h>
+#include <asm/machvec.h>
+#include <asm/pgtable.h>
+#include <asm/sal.h>
+
+#include "hpsim_ssc.h"
+
+static int simcons_init (struct console *, char *);
+static void simcons_write (struct console *, const char *, unsigned);
+static int simcons_wait_key (struct console *);
+static kdev_t simcons_console_device (struct console *);
+
+struct console hpsim_cons = {
+ "simcons",
+ simcons_write, /* write */
+ NULL, /* read */
+ simcons_console_device, /* device */
+ simcons_wait_key, /* wait_key */
+ NULL, /* unblank */
+ simcons_init, /* setup */
+ CON_PRINTBUFFER, /* flags */
+ -1, /* index */
+ 0, /* cflag */
+ NULL /* next */
+};
+
+static int
+simcons_init (struct console *cons, char *options)
+{
+ return 0;
+}
+
+static void
+simcons_write (struct console *cons, const char *buf, unsigned count)
+{
+ unsigned long ch;
+
+ while (count-- > 0) {
+ ch = *buf++;
+ ia64_ssc(ch, 0, 0, 0, SSC_PUTCHAR);
+ if (ch == '\n')
+ ia64_ssc('\r', 0, 0, 0, SSC_PUTCHAR);
+ }
+}
+
+static int
+simcons_wait_key (struct console *cons)
+{
+ char ch;
+
+ do {
+ ch = ia64_ssc(0, 0, 0, 0, SSC_GETCHAR);
+ } while (ch == '\0');
+ return ch;
+}
+
+static kdev_t
+simcons_console_device (struct console *c)
+{
+ return MKDEV(TTY_MAJOR, 64 + c->index);
+}
--- /dev/null
+/*
+ * Platform dependent support for HP simulator.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999 Vijay Chander <vijay@engr.sgi.com>
+ */
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/param.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kdev_t.h>
+#include <linux/console.h>
+
+#include <asm/delay.h>
+#include <asm/irq.h>
+#include <asm/pal.h>
+#include <asm/machvec.h>
+#include <asm/pgtable.h>
+#include <asm/sal.h>
+
+
+static int
+irq_hp_sim_handle_irq (unsigned int irq, struct pt_regs *regs)
+{
+ struct irqaction *action = 0;
+ struct irq_desc *id = irq_desc + irq;
+ unsigned int status;
+ int retval;
+
+ spin_lock(&irq_controller_lock);
+ {
+ status = id->status;
+ if ((status & IRQ_INPROGRESS) == 0 && (status & IRQ_ENABLED) != 0) {
+ action = id->action;
+ status |= IRQ_INPROGRESS;
+ }
+ id->status = status & ~(IRQ_REPLAY | IRQ_WAITING);
+ }
+ spin_unlock(&irq_controller_lock);
+
+ if (!action) {
+ if (!(id->status & IRQ_AUTODETECT))
+			printk("irq_hp_sim_handle_irq: unexpected interrupt %u\n", irq);
+ return 0;
+ }
+
+ retval = invoke_irq_handlers(irq, regs, action);
+
+ spin_lock(&irq_controller_lock);
+ {
+ id->status &= ~IRQ_INPROGRESS;
+ }
+ spin_unlock(&irq_controller_lock);
+
+ return retval;
+}
+
+static void
+irq_hp_sim_noop (unsigned int irq)
+{
+}
+
+static struct hw_interrupt_type irq_type_hp_sim = {
+ "hp_sim",
+ (void (*)(unsigned long)) irq_hp_sim_noop, /* init */
+ irq_hp_sim_noop, /* startup */
+ irq_hp_sim_noop, /* shutdown */
+ irq_hp_sim_handle_irq, /* handle */
+ irq_hp_sim_noop, /* enable */
+ irq_hp_sim_noop, /* disable */
+};
+
+void
+hpsim_irq_init (struct irq_desc desc[NR_IRQS])
+{
+ int i;
+
+ for (i = IA64_MIN_VECTORED_IRQ; i <= IA64_MAX_VECTORED_IRQ; ++i) {
+ irq_desc[i].handler = &irq_type_hp_sim;
+ }
+}
--- /dev/null
+#include <asm/machvec_init.h>
+#include <asm/machvec_hpsim.h>
+
+MACHVEC_DEFINE(hpsim)
--- /dev/null
+/*
+ * Platform dependent support for HP simulator.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999 Vijay Chander <vijay@engr.sgi.com>
+ */
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/param.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kdev_t.h>
+#include <linux/console.h>
+
+#include <asm/delay.h>
+#include <asm/irq.h>
+#include <asm/pal.h>
+#include <asm/machvec.h>
+#include <asm/pgtable.h>
+#include <asm/sal.h>
+
+#include "hpsim_ssc.h"
+
+extern struct console hpsim_cons;
+
+/*
+ * Simulator system call.
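+ * The four arguments go in the first four stacked input registers,
+ * the request number in r15; "break 0x80001" traps to the simulator
+ * and the result is returned in r8.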
+ */
+inline long
+ia64_ssc (long arg0, long arg1, long arg2, long arg3, int nr)
+{
+#ifdef __GCC_DOESNT_KNOW_IN_REGS__
+ register long in0 asm ("r32") = arg0;
+ register long in1 asm ("r33") = arg1;
+ register long in2 asm ("r34") = arg2;
+ register long in3 asm ("r35") = arg3;
+#else
+ register long in0 asm ("in0") = arg0;
+ register long in1 asm ("in1") = arg1;
+ register long in2 asm ("in2") = arg2;
+ register long in3 asm ("in3") = arg3;
+#endif
+ register long r8 asm ("r8");
+ register long r15 asm ("r15") = nr;
+
+ asm volatile ("break 0x80001"
+ : "=r"(r8)
+ : "r"(r15), "r"(in0), "r"(in1), "r"(in2), "r"(in3));
+ return r8;
+}
+
+void
+ia64_ssc_connect_irq (long intr, long irq)
+{
+ ia64_ssc(intr, irq, 0, 0, SSC_CONNECT_INTERRUPT);
+}
+
+void
+ia64_ctl_trace (long on)
+{
+ ia64_ssc(on, 0, 0, 0, SSC_CTL_TRACE);
+}
+
+void __init
+hpsim_setup (char **cmdline_p)
+{
+ ROOT_DEV = to_kdev_t(0x0801); /* default to first SCSI drive */
+
+ register_console (&hpsim_cons);
+}
--- /dev/null
+/*
+ * Platform dependent support for HP simulator.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999 Vijay Chander <vijay@engr.sgi.com>
+ */
+#ifndef _IA64_PLATFORM_HPSIM_SSC_H
+#define _IA64_PLATFORM_HPSIM_SSC_H
+
+/* Simulator system calls: */
+
+#define SSC_CONSOLE_INIT 20
+#define SSC_GETCHAR 21
+#define SSC_PUTCHAR 31
+#define SSC_CONNECT_INTERRUPT 58
+#define SSC_GENERATE_INTERRUPT 59
+#define SSC_SET_PERIODIC_INTERRUPT 60
+#define SSC_GET_RTC 65
+#define SSC_EXIT 66
+#define SSC_LOAD_SYMBOLS 69
+#define SSC_GET_TOD 74
+#define SSC_CTL_TRACE 76
+
+#define SSC_NETDEV_PROBE 100
+#define SSC_NETDEV_SEND 101
+#define SSC_NETDEV_RECV 102
+#define SSC_NETDEV_ATTACH 103
+#define SSC_NETDEV_DETACH 104
+
+/*
+ * Simulator system call.
+ */
+extern long ia64_ssc (long arg0, long arg1, long arg2, long arg3, int nr);
+
+#endif /* _IA64_PLATFORM_HPSIM_SSC_H */
--- /dev/null
+#
+# Makefile for the ia32 kernel emulation subsystem.
+#
+
+.S.s:
+ $(CC) -D__ASSEMBLY__ $(AFLAGS) -E -o $*.s $<
+.S.o:
+ $(CC) -D__ASSEMBLY__ $(AFLAGS) -c -o $*.o $<
+
+all: ia32.o
+
+O_TARGET := ia32.o
+O_OBJS := ia32_entry.o ia32_signal.o sys_ia32.o ia32_support.o binfmt_elf32.o
+
+clean::
+
+include $(TOPDIR)/Rules.make
--- /dev/null
+/*
+ * IA-32 ELF support.
+ *
+ * Copyright (C) 1999 Arun Sharma <arun.sharma@intel.com>
+ */
+#include <linux/posix_types.h>
+
+#include <asm/signal.h>
+#include <asm/ia32.h>
+
+#define CONFIG_BINFMT_ELF32
+
+/* Override some function names */
+#undef start_thread
+#define start_thread ia32_start_thread
+#define init_elf_binfmt init_elf32_binfmt
+
+#undef CONFIG_BINFMT_ELF
+#ifdef CONFIG_BINFMT_ELF32
+# define CONFIG_BINFMT_ELF CONFIG_BINFMT_ELF32
+#endif
+
+#undef CONFIG_BINFMT_ELF_MODULE
+#ifdef CONFIG_BINFMT_ELF32_MODULE
+# define CONFIG_BINFMT_ELF_MODULE CONFIG_BINFMT_ELF32_MODULE
+#endif
+
+void ia64_elf32_init(struct pt_regs *regs);
+#define ELF_PLAT_INIT(_r) ia64_elf32_init(_r)
+
+#define setup_arg_pages(bprm) ia32_setup_arg_pages(bprm)
+
+/* Ugly but avoids duplication */
+#include "../../../fs/binfmt_elf.c"
+
+/* Global descriptor table */
+unsigned long *ia32_gdt_table, *ia32_tss;
+
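+/*
+ * Map an already-allocated kernel page into the task's address space
+ * at a fixed user address, so that IA-32 code can reach the GDT and
+ * TSS set up by ia32_gdt_init().
+ */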
+struct page *
+put_shared_page(struct task_struct * tsk, struct page *page, unsigned long address)
+{
+ pgd_t * pgd;
+ pmd_t * pmd;
+ pte_t * pte;
+
+ if (page_count(page) != 1)
+ printk("mem_map disagrees with %p at %08lx\n", page, address);
+ pgd = pgd_offset(tsk->mm, address);
+ pmd = pmd_alloc(pgd, address);
+ if (!pmd) {
+ __free_page(page);
+ oom(tsk);
+ return 0;
+ }
+ pte = pte_alloc(pmd, address);
+ if (!pte) {
+ __free_page(page);
+ oom(tsk);
+ return 0;
+ }
+ if (!pte_none(*pte)) {
+ pte_ERROR(*pte);
+ __free_page(page);
+ return 0;
+ }
+ flush_page_to_ram(page);
+ set_pte(pte, pte_mkwrite(mk_pte(page, PAGE_SHARED)));
+ /* no need for flush_tlb */
+ return page;
+}
+
+void ia64_elf32_init(struct pt_regs *regs)
+{
+ int nr;
+
+ put_shared_page(current, mem_map + MAP_NR(ia32_gdt_table), IA32_PAGE_OFFSET);
+ if (PAGE_SHIFT <= IA32_PAGE_SHIFT)
+ put_shared_page(current, mem_map + MAP_NR(ia32_tss), IA32_PAGE_OFFSET + PAGE_SIZE);
+
+ nr = smp_processor_id();
+
+ /* Do all the IA-32 setup here */
+
+ /* CS descriptor */
+ __asm__("mov ar.csd = %0" : /* no outputs */
+ : "r" IA64_SEG_DESCRIPTOR(0L, 0xFFFFFL, 0xBL, 1L,
+ 3L, 1L, 1L, 1L));
+ /* SS descriptor */
+ __asm__("mov ar.ssd = %0" : /* no outputs */
+ : "r" IA64_SEG_DESCRIPTOR(0L, 0xFFFFFL, 0x3L, 1L,
+ 3L, 1L, 1L, 1L));
+ /* EFLAGS */
+ __asm__("mov ar.eflag = %0" : /* no outputs */ : "r" (IA32_EFLAG));
+
+ /* Control registers */
+ __asm__("mov ar.cflg = %0"
+ : /* no outputs */
+ : "r" (((ulong) IA32_CR4 << 32) | IA32_CR0));
+ __asm__("mov ar.fsr = %0"
+ : /* no outputs */
+ : "r" ((ulong)IA32_FSR_DEFAULT));
+ __asm__("mov ar.fcr = %0"
+ : /* no outputs */
+ : "r" ((ulong)IA32_FCR_DEFAULT));
+ __asm__("mov ar.fir = r0");
+ __asm__("mov ar.fdr = r0");
+ /* TSS */
+ __asm__("mov ar.k1 = %0"
+ : /* no outputs */
+ : "r" IA64_SEG_DESCRIPTOR(IA32_PAGE_OFFSET + PAGE_SIZE,
+ 0x1FFFL, 0xBL, 1L,
+ 3L, 1L, 1L, 1L));
+
+ /* Get the segment selectors right */
+ regs->r16 = (__USER_DS << 16) | (__USER_DS); /* ES == DS, GS, FS are zero */
+ regs->r17 = (_TSS(nr) << 48) | (_LDT(nr) << 32)
+ | (__USER_DS << 16) | __USER_CS;
+
+ /* Setup other segment descriptors - ESD, DSD, FSD, GSD */
+ regs->r24 = IA64_SEG_DESCRIPTOR(0L, 0xFFFFFL, 0x3L, 1L, 3L, 1L, 1L, 1L);
+ regs->r27 = IA64_SEG_DESCRIPTOR(0L, 0xFFFFFL, 0x3L, 1L, 3L, 1L, 1L, 1L);
+ regs->r28 = IA64_SEG_DESCRIPTOR(0L, 0xFFFFFL, 0x3L, 1L, 3L, 1L, 1L, 1L);
+ regs->r29 = IA64_SEG_DESCRIPTOR(0L, 0xFFFFFL, 0x3L, 1L, 3L, 1L, 1L, 1L);
+
+ /* Setup the LDT and GDT */
+ regs->r30 = ia32_gdt_table[_LDT(nr)];
+ regs->r31 = IA64_SEG_DESCRIPTOR(0xc0000000L, 0x400L, 0x3L, 1L, 3L,
+ 1L, 1L, 1L);
+
+ /* Clear psr.ac */
+ regs->cr_ipsr &= ~IA64_PSR_AC;
+
+ regs->loadrs = 0;
+}
+
+#undef STACK_TOP
+#define STACK_TOP ((IA32_PAGE_OFFSET/3) * 2)
+
+int ia32_setup_arg_pages(struct linux_binprm *bprm)
+{
+ unsigned long stack_base;
+ struct vm_area_struct *mpnt;
+ int i;
+
+ stack_base = STACK_TOP - MAX_ARG_PAGES*PAGE_SIZE;
+
+ bprm->p += stack_base;
+ if (bprm->loader)
+ bprm->loader += stack_base;
+ bprm->exec += stack_base;
+
+ mpnt = kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL);
+ if (!mpnt)
+ return -ENOMEM;
+
+ {
+ mpnt->vm_mm = current->mm;
+ mpnt->vm_start = PAGE_MASK & (unsigned long) bprm->p;
+ mpnt->vm_end = STACK_TOP;
+ mpnt->vm_page_prot = PAGE_COPY;
+ mpnt->vm_flags = VM_STACK_FLAGS;
+ mpnt->vm_ops = NULL;
+ mpnt->vm_pgoff = 0;
+ mpnt->vm_file = NULL;
+ mpnt->vm_private_data = 0;
+ insert_vm_struct(current->mm, mpnt);
+ current->mm->total_vm = (mpnt->vm_end - mpnt->vm_start) >> PAGE_SHIFT;
+ }
+
+ for (i = 0 ; i < MAX_ARG_PAGES ; i++) {
+ if (bprm->page[i]) {
+ current->mm->rss++;
+ put_dirty_page(current,bprm->page[i],stack_base);
+ }
+ stack_base += PAGE_SIZE;
+ }
+
+ return 0;
+}
--- /dev/null
+#include <asm/offsets.h>
+#include <asm/signal.h>
+
+ .global ia32_ret_from_syscall
+	.proc ia32_ret_from_syscall
+ia32_ret_from_syscall:
+ cmp.ge p6,p7=r8,r0 // syscall executed successfully?
+ adds r2=IA64_PT_REGS_R8_OFFSET+16,sp // r2 = &pt_regs.r8
+ ;;
+ st8 [r2]=r8 // store return value in slot for r8
+	br.cond.sptk.few ia64_leave_kernel
+	.endp ia32_ret_from_syscall
+
+ //
+ // Invoke a system call, but do some tracing before and after the call.
+ // We MUST preserve the current register frame throughout this routine
+ // because some system calls (such as ia64_execve) directly
+ // manipulate ar.pfs.
+ //
+ // Input:
+ // r15 = syscall number
+ // b6 = syscall entry point
+ //
+ .global ia32_trace_syscall
+ .proc ia32_trace_syscall
+ia32_trace_syscall:
+ br.call.sptk.few rp=invoke_syscall_trace // give parent a chance to catch syscall args
+.Lret4: br.call.sptk.few rp=b6 // do the syscall
+.Lret5: cmp.lt p6,p0=r8,r0 // syscall failed?
+ adds r2=IA64_PT_REGS_R8_OFFSET+16,sp // r2 = &pt_regs.r8
+ ;;
+ st8.spill [r2]=r8 // store return value in slot for r8
+ br.call.sptk.few rp=invoke_syscall_trace // give parent a chance to catch return value
+.Lret6: br.cond.sptk.many ia64_leave_kernel // rp MUST be != ia64_leave_kernel!
+
+ .endp ia32_trace_syscall
+
+ .align 16
+ .global sys32_fork
+ .proc sys32_fork
+sys32_fork:
+ alloc r16=ar.pfs,2,2,3,0;;
+ movl r28=1f
+ mov loc1=rp
+ br.cond.sptk.many save_switch_stack
+1:
+ mov loc0=r16 // save ar.pfs across do_fork
+ adds out2=IA64_SWITCH_STACK_SIZE+16,sp
+ adds r2=IA64_SWITCH_STACK_SIZE+IA64_PT_REGS_R12_OFFSET+16,sp
+ mov out0=SIGCHLD // out0 = clone_flags
+ ;;
+ ld8 out1=[r2] // fetch usp from pt_regs.r12
+ br.call.sptk.few rp=do_fork
+.ret1:
+ mov ar.pfs=loc0
+ adds sp=IA64_SWITCH_STACK_SIZE,sp // pop the switch stack
+ mov rp=loc1
+ ;;
+ br.ret.sptk.many rp
+ .endp sys32_fork
+
+ .rodata
+ .align 8
+ .globl ia32_syscall_table
+ia32_syscall_table:
+	data8 sys_ni_syscall	  /* 0 - old "setup()" system call */
+ data8 sys_exit
+ data8 sys32_fork
+ data8 sys_read
+ data8 sys_write
+ data8 sys_open /* 5 */
+ data8 sys_close
+ data8 sys32_waitpid
+ data8 sys_creat
+ data8 sys_link
+ data8 sys_unlink /* 10 */
+ data8 sys32_execve
+ data8 sys_chdir
+ data8 sys_ni_syscall /* sys_time is not supported on ia64 */
+ data8 sys_mknod
+ data8 sys_chmod /* 15 */
+ data8 sys_lchown
+ data8 sys_ni_syscall /* old break syscall holder */
+ data8 sys_ni_syscall
+ data8 sys_lseek
+ data8 sys_getpid /* 20 */
+ data8 sys_mount
+ data8 sys_oldumount
+ data8 sys_setuid
+ data8 sys_getuid
+ data8 sys_ni_syscall /* sys_stime is not supported on IA64 */ /* 25 */
+ data8 sys_ptrace
+ data8 sys32_alarm
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall /* 30 */
+ data8 sys_ni_syscall /* old stty syscall holder */
+ data8 sys_ni_syscall /* old gtty syscall holder */
+ data8 sys_access
+ data8 sys_nice
+ data8 sys_ni_syscall /* 35 */ /* old ftime syscall holder */
+ data8 sys_sync
+ data8 sys_kill
+ data8 sys_rename
+ data8 sys_mkdir
+ data8 sys_rmdir /* 40 */
+ data8 sys_dup
+ data8 sys32_pipe
+ data8 sys_times
+ data8 sys_ni_syscall /* old prof syscall holder */
+ data8 sys_brk /* 45 */
+ data8 sys_setgid
+ data8 sys_getgid
+ data8 sys_ni_syscall
+ data8 sys_geteuid
+ data8 sys_getegid /* 50 */
+ data8 sys_acct
+	data8 sys_umount	/* recycled never used phys() */
+ data8 sys_ni_syscall /* old lock syscall holder */
+ data8 sys_ioctl
+ data8 sys_fcntl /* 55 */
+ data8 sys_ni_syscall /* old mpx syscall holder */
+ data8 sys_setpgid
+ data8 sys_ni_syscall /* old ulimit syscall holder */
+ data8 sys_ni_syscall
+ data8 sys_umask /* 60 */
+ data8 sys_chroot
+ data8 sys_ustat
+ data8 sys_dup2
+ data8 sys_getppid
+ data8 sys_getpgrp /* 65 */
+ data8 sys_setsid
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall
+ data8 sys_ni_syscall
+ data8 sys_setreuid /* 70 */
+ data8 sys_setregid
+ data8 sys_ni_syscall
+ data8 sys_sigpending
+ data8 sys_sethostname
+ data8 sys32_setrlimit /* 75 */
+ data8 sys32_getrlimit
+ data8 sys_getrusage
+ data8 sys32_gettimeofday
+ data8 sys32_settimeofday
+ data8 sys_getgroups /* 80 */
+ data8 sys_setgroups
+ data8 sys_ni_syscall
+ data8 sys_symlink
+ data8 sys_ni_syscall
+ data8 sys_readlink /* 85 */
+ data8 sys_uselib
+ data8 sys_swapon
+ data8 sys_reboot
+ data8 sys32_readdir
+ data8 sys32_mmap /* 90 */
+ data8 sys_munmap
+ data8 sys_truncate
+ data8 sys_ftruncate
+ data8 sys_fchmod
+ data8 sys_fchown /* 95 */
+ data8 sys_getpriority
+ data8 sys_setpriority
+ data8 sys_ni_syscall /* old profil syscall holder */
+ data8 sys32_statfs
+ data8 sys32_fstatfs /* 100 */
+ data8 sys_ioperm
+ data8 sys32_socketcall
+ data8 sys_syslog
+ data8 sys32_setitimer
+ data8 sys32_getitimer /* 105 */
+ data8 sys32_newstat
+ data8 sys32_newlstat
+ data8 sys32_newfstat
+ data8 sys_ni_syscall
+ data8 sys_iopl /* 110 */
+ data8 sys_vhangup
+ data8 sys_ni_syscall // used to be sys_idle
+ data8 sys_ni_syscall
+ data8 sys32_wait4
+ data8 sys_swapoff /* 115 */
+ data8 sys_sysinfo
+ data8 sys32_ipc
+ data8 sys_fsync
+ data8 sys32_sigreturn
+ data8 sys_clone /* 120 */
+ data8 sys_setdomainname
+ data8 sys_newuname
+ data8 sys_modify_ldt
+ data8 sys_adjtimex
+ data8 sys32_mprotect /* 125 */
+ data8 sys_sigprocmask
+ data8 sys_create_module
+ data8 sys_init_module
+ data8 sys_delete_module
+ data8 sys_get_kernel_syms /* 130 */
+ data8 sys_quotactl
+ data8 sys_getpgid
+ data8 sys_fchdir
+ data8 sys_bdflush
+ data8 sys_sysfs /* 135 */
+ data8 sys_personality
+ data8 sys_ni_syscall /* for afs_syscall */
+ data8 sys_setfsuid
+ data8 sys_setfsgid
+ data8 sys_llseek /* 140 */
+ data8 sys32_getdents
+ data8 sys32_select
+ data8 sys_flock
+ data8 sys_msync
+ data8 sys32_readv /* 145 */
+ data8 sys32_writev
+ data8 sys_getsid
+ data8 sys_fdatasync
+ data8 sys_sysctl
+ data8 sys_mlock /* 150 */
+ data8 sys_munlock
+ data8 sys_mlockall
+ data8 sys_munlockall
+ data8 sys_sched_setparam
+ data8 sys_sched_getparam /* 155 */
+ data8 sys_sched_setscheduler
+ data8 sys_sched_getscheduler
+ data8 sys_sched_yield
+ data8 sys_sched_get_priority_max
+ data8 sys_sched_get_priority_min /* 160 */
+ data8 sys_sched_rr_get_interval
+ data8 sys32_nanosleep
+ data8 sys_mremap
+ data8 sys_setresuid
+ data8 sys_getresuid /* 165 */
+ data8 sys_vm86
+ data8 sys_query_module
+ data8 sys_poll
+ data8 sys_nfsservctl
+ data8 sys_setresgid /* 170 */
+ data8 sys_getresgid
+ data8 sys_prctl
+ data8 sys32_rt_sigreturn
+ data8 sys32_rt_sigaction
+ data8 sys32_rt_sigprocmask /* 175 */
+ data8 sys_rt_sigpending
+ data8 sys_rt_sigtimedwait
+ data8 sys_rt_sigqueueinfo
+ data8 sys_rt_sigsuspend
+ data8 sys_pread /* 180 */
+ data8 sys_pwrite
+ data8 sys_chown
+ data8 sys_getcwd
+ data8 sys_capget
+ data8 sys_capset /* 185 */
+ data8 sys_sigaltstack
+ data8 sys_sendfile
+ data8 sys_ni_syscall /* streams1 */
+ data8 sys_ni_syscall /* streams2 */
+ data8 sys32_vfork /* 190 */
--- /dev/null
+/*
+ * IA32 Architecture-specific signal handling support.
+ *
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999 Arun Sharma <arun.sharma@intel.com>
+ * Copyright (C) 2000 VA Linux Co
+ * Copyright (C) 2000 Don Dugger <n0ano@valinux.com>
+ *
+ * Derived from i386 and Alpha versions.
+ */
+
+#include <linux/config.h>
+
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/ptrace.h>
+#include <linux/sched.h>
+#include <linux/signal.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/stddef.h>
+#include <linux/unistd.h>
+#include <linux/wait.h>
+
+#include <asm/uaccess.h>
+#include <asm/rse.h>
+#include <asm/sigcontext.h>
+#include <asm/segment.h>
+#include <asm/ia32.h>
+
+#define DEBUG_SIG 0
+#define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
+
+
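+/*
+ * These mirror the i386 signal frame layout; the pointer members are
+ * 32-bit ints because the frames live on the IA-32 user stack.
+ */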
+struct sigframe_ia32
+{
+ int pretcode;
+ int sig;
+ struct sigcontext_ia32 sc;
+ struct _fpstate_ia32 fpstate;
+ unsigned int extramask[_IA32_NSIG_WORDS-1];
+ char retcode[8];
+};
+
+struct rt_sigframe_ia32
+{
+ int pretcode;
+ int sig;
+ int pinfo;
+ int puc;
+ struct siginfo info;
+ struct ucontext_ia32 uc;
+ struct _fpstate_ia32 fpstate;
+ char retcode[8];
+};
+
+static int
+setup_sigcontext_ia32(struct sigcontext_ia32 *sc, struct _fpstate_ia32 *fpstate,
+ struct pt_regs *regs, unsigned long mask)
+{
+ int err = 0;
+
+	err |= __put_user((regs->r16 >> 32) & 0xffff, (unsigned int *)&sc->fs);
+	err |= __put_user((regs->r16 >> 48) & 0xffff, (unsigned int *)&sc->gs);
+
+	/* es lives in bits 16-31 of r16 (see ia64_elf32_init) */
+	err |= __put_user((regs->r16 >> 16) & 0xffff, (unsigned int *)&sc->es);
+ err |= __put_user(regs->r16 & 0xffff, (unsigned int *)&sc->ds);
+ err |= __put_user(regs->r15, &sc->edi);
+ err |= __put_user(regs->r14, &sc->esi);
+ err |= __put_user(regs->r13, &sc->ebp);
+ err |= __put_user(regs->r12, &sc->esp);
+ err |= __put_user(regs->r11, &sc->ebx);
+ err |= __put_user(regs->r10, &sc->edx);
+ err |= __put_user(regs->r9, &sc->ecx);
+ err |= __put_user(regs->r8, &sc->eax);
+#if 0
+ err |= __put_user(current->tss.trap_no, &sc->trapno);
+ err |= __put_user(current->tss.error_code, &sc->err);
+#endif
+ err |= __put_user(regs->cr_iip, &sc->eip);
+ err |= __put_user(regs->r17 & 0xffff, (unsigned int *)&sc->cs);
+#if 0
+ err |= __put_user(regs->eflags, &sc->eflags);
+#endif
+
+ err |= __put_user(regs->r12, &sc->esp_at_signal);
+ err |= __put_user((regs->r17 >> 16) & 0xffff, (unsigned int *)&sc->ss);
+
+#if 0
+ tmp = save_i387(fpstate);
+ if (tmp < 0)
+ err = 1;
+ else
+ err |= __put_user(tmp ? fpstate : NULL, &sc->fpstate);
+
+ /* non-iBCS2 extensions.. */
+ err |= __put_user(mask, &sc->oldmask);
+ err |= __put_user(current->tss.cr2, &sc->cr2);
+#endif
+
+ return err;
+}
+
+static int
+restore_sigcontext_ia32(struct pt_regs *regs, struct sigcontext_ia32 *sc, int *peax)
+{
+ unsigned int err = 0;
+
+#define COPY(ia64x, ia32x) err |= __get_user(regs->ia64x, &sc->ia32x)
+
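+/*
+ * The IA-32 segment selectors are kept packed in two ia64 scratch
+ * registers: r16 holds ds, es, fs, gs (16 bits each, low to high) and
+ * r17 holds cs, ss, ldt, tss in the same fashion.
+ */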
+#define copyseg_gs(tmp) (regs->r16 |= (unsigned long) tmp << 48)
+#define copyseg_fs(tmp) (regs->r16 |= (unsigned long) tmp << 32)
+#define copyseg_cs(tmp) (regs->r17 |= tmp)
+#define copyseg_ss(tmp) (regs->r17 |= (unsigned long) tmp << 16)
+#define copyseg_es(tmp) (regs->r16 |= (unsigned long) tmp << 16)
+#define copyseg_ds(tmp) (regs->r16 |= tmp)
+
+#define COPY_SEG(seg) \
+ { unsigned short tmp; \
+ err |= __get_user(tmp, &sc->seg); \
+ copyseg_##seg(tmp); }
+
+#define COPY_SEG_STRICT(seg) \
+ { unsigned short tmp; \
+ err |= __get_user(tmp, &sc->seg); \
+ copyseg_##seg(tmp|3); }
+
+ /* To make COPY_SEGs easier, we zero r16, r17 */
+ regs->r16 = 0;
+ regs->r17 = 0;
+
+ COPY_SEG(gs);
+ COPY_SEG(fs);
+ COPY_SEG(es);
+ COPY_SEG(ds);
+ COPY(r15, edi);
+ COPY(r14, esi);
+ COPY(r13, ebp);
+ COPY(r12, esp);
+ COPY(r11, ebx);
+ COPY(r10, edx);
+ COPY(r9, ecx);
+ COPY(cr_iip, eip);
+ COPY_SEG_STRICT(cs);
+ COPY_SEG_STRICT(ss);
+#if 0
+ {
+ unsigned int tmpflags;
+ err |= __get_user(tmpflags, &sc->eflags);
+ /* XXX: Change this to ar.eflags */
+ regs->eflags = (regs->eflags & ~0x40DD5) | (tmpflags & 0x40DD5);
+ regs->orig_eax = -1; /* disable syscall checks */
+ }
+
+ {
+ struct _fpstate * buf;
+ err |= __get_user(buf, &sc->fpstate);
+ if (buf) {
+ if (verify_area(VERIFY_READ, buf, sizeof(*buf)))
+ goto badframe;
+ err |= restore_i387(buf);
+ }
+ }
+#endif
+
+ err |= __get_user(*peax, &sc->eax);
+ return err;
+
+#if 0
+badframe:
+ return 1;
+#endif
+
+}
+
+/*
+ * Determine which stack to use..
+ */
+static inline void *
+get_sigframe(struct k_sigaction *ka, struct pt_regs * regs, size_t frame_size)
+{
+ unsigned long esp;
+ unsigned int xss;
+
+ /* Default to using normal stack */
+ esp = regs->r12;
+	xss = regs->r17 >> 16;	/* ss is kept in bits 16-31 of r17 */
+
+ /* This is the X/Open sanctioned signal stack switching. */
+ if (ka->sa.sa_flags & SA_ONSTACK) {
+ if (! on_sig_stack(esp))
+ esp = current->sas_ss_sp + current->sas_ss_size;
+ }
+ /* Legacy stack switching not supported */
+
+ return (void *)((esp - frame_size) & -8ul);
+}
+
+static void
+setup_frame_ia32(int sig, struct k_sigaction *ka, sigset_t *set,
+ struct pt_regs * regs)
+{
+ struct sigframe_ia32 *frame;
+ int err = 0;
+
+ frame = get_sigframe(ka, regs, sizeof(*frame));
+
+ if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+ goto give_sigsegv;
+
+ err |= __put_user((current->exec_domain
+ && current->exec_domain->signal_invmap
+ && sig < 32
+ ? (int)(current->exec_domain->signal_invmap[sig])
+ : sig),
+ &frame->sig);
+
+ err |= setup_sigcontext_ia32(&frame->sc, &frame->fpstate, regs, set->sig[0]);
+
+ if (_NSIG_WORDS > 1) {
+ err |= __copy_to_user(frame->extramask, &set->sig[1],
+ sizeof(frame->extramask));
+ }
+
+ /* Set up to return from userspace. If provided, use a stub
+ already in userspace. */
+ err |= __put_user(frame->retcode, &frame->pretcode);
+	/* This is popl %eax ; movl $__IA32_NR_sigreturn,%eax ; int $0x80 */
+ err |= __put_user(0xb858, (short *)(frame->retcode+0));
+#define __IA32_NR_sigreturn 119
+ err |= __put_user(__IA32_NR_sigreturn & 0xffff, (short *)(frame->retcode+2));
+ err |= __put_user(__IA32_NR_sigreturn >> 16, (short *)(frame->retcode+4));
+ err |= __put_user(0x80cd, (short *)(frame->retcode+6));
+
+ if (err)
+ goto give_sigsegv;
+
+ /* Set up registers for signal handler */
+ regs->r12 = (unsigned long) frame;
+ regs->cr_iip = (unsigned long) ka->sa.sa_handler;
+
+ set_fs(USER_DS);
+ regs->r16 = (__USER_DS << 16) | (__USER_DS); /* ES == DS, GS, FS are zero */
+ regs->r17 = (__USER_DS << 16) | __USER_CS;
+
+#if 0
+ regs->eflags &= ~TF_MASK;
+#endif
+
+#if 1
+ printk("SIG deliver (%s:%d): sp=%p pc=%lx ra=%x\n",
+ current->comm, current->pid, frame, regs->cr_iip, frame->pretcode);
+#endif
+
+ return;
+
+give_sigsegv:
+ if (sig == SIGSEGV)
+ ka->sa.sa_handler = SIG_DFL;
+ force_sig(SIGSEGV, current);
+}
+
+static void
+setup_rt_frame_ia32(int sig, struct k_sigaction *ka, siginfo_t *info,
+ sigset_t *set, struct pt_regs * regs)
+{
+ struct rt_sigframe_ia32 *frame;
+ int err = 0;
+
+ frame = get_sigframe(ka, regs, sizeof(*frame));
+
+ if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+ goto give_sigsegv;
+
+ err |= __put_user((current->exec_domain
+ && current->exec_domain->signal_invmap
+ && sig < 32
+ ? current->exec_domain->signal_invmap[sig]
+ : sig),
+ &frame->sig);
+ err |= __put_user(&frame->info, &frame->pinfo);
+ err |= __put_user(&frame->uc, &frame->puc);
+ err |= __copy_to_user(&frame->info, info, sizeof(*info));
+
+ /* Create the ucontext. */
+ err |= __put_user(0, &frame->uc.uc_flags);
+ err |= __put_user(0, &frame->uc.uc_link);
+ err |= __put_user(current->sas_ss_sp, &frame->uc.uc_stack.ss_sp);
+ err |= __put_user(sas_ss_flags(regs->r12),
+ &frame->uc.uc_stack.ss_flags);
+ err |= __put_user(current->sas_ss_size, &frame->uc.uc_stack.ss_size);
+ err |= setup_sigcontext_ia32(&frame->uc.uc_mcontext, &frame->fpstate,
+ regs, set->sig[0]);
+ err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
+
+ err |= __put_user(frame->retcode, &frame->pretcode);
+	/* This is movl $__IA32_NR_rt_sigreturn,%eax ; int $0x80 */
+ err |= __put_user(0xb8, (char *)(frame->retcode+0));
+#define __IA32_NR_rt_sigreturn 173
+ err |= __put_user(__IA32_NR_rt_sigreturn, (int *)(frame->retcode+1));
+ err |= __put_user(0x80cd, (short *)(frame->retcode+5));
+
+ if (err)
+ goto give_sigsegv;
+
+ /* Set up registers for signal handler */
+ regs->r12 = (unsigned long) frame;
+ regs->cr_iip = (unsigned long) ka->sa.sa_handler;
+
+ set_fs(USER_DS);
+
+ regs->r16 = (__USER_DS << 16) | (__USER_DS); /* ES == DS, GS, FS are zero */
+ regs->r17 = (__USER_DS << 16) | __USER_CS;
+
+#if 0
+ regs->eflags &= ~TF_MASK;
+#endif
+
+#if 1
+ printk("SIG deliver (%s:%d): sp=%p pc=%lx ra=%x\n",
+ current->comm, current->pid, frame, regs->cr_iip, frame->pretcode);
+#endif
+
+ return;
+
+give_sigsegv:
+ if (sig == SIGSEGV)
+ ka->sa.sa_handler = SIG_DFL;
+ force_sig(SIGSEGV, current);
+}
+
+long
+ia32_setup_frame1 (int sig, struct k_sigaction *ka, siginfo_t *info,
+ sigset_t *set, struct pt_regs *regs)
+{
+ /* Set up the stack frame */
+ if (ka->sa.sa_flags & SA_SIGINFO)
+ setup_rt_frame_ia32(sig, ka, info, set, regs);
+ else
+ setup_frame_ia32(sig, ka, set, regs);
+	return 0;
+}
+
+asmlinkage int
+sys32_sigreturn(int arg1, int arg2, int arg3, int arg4, int arg5, unsigned long stack)
+{
+ struct pt_regs *regs = (struct pt_regs *) &stack;
+	struct sigframe_ia32 *frame = (struct sigframe_ia32 *)(regs->r12 - 8);
+ sigset_t set;
+ int eax;
+
+ if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
+ goto badframe;
+
+ if (__get_user(set.sig[0], &frame->sc.oldmask)
+ || (_IA32_NSIG_WORDS > 1
+ && __copy_from_user((((char *) &set.sig) + 4),
+ &frame->extramask,
+ sizeof(frame->extramask))))
+ goto badframe;
+
+ sigdelsetmask(&set, ~_BLOCKABLE);
+	spin_lock_irq(&current->sigmask_lock);
+	current->blocked = set;
+	recalc_sigpending(current);
+	spin_unlock_irq(&current->sigmask_lock);
+
+ if (restore_sigcontext_ia32(regs, &frame->sc, &eax))
+ goto badframe;
+ return eax;
+
+badframe:
+ force_sig(SIGSEGV, current);
+ return 0;
+}
+
+asmlinkage int
+sys32_rt_sigreturn(int arg1, int arg2, int arg3, int arg4, int arg5, unsigned long stack)
+{
+ struct pt_regs *regs = (struct pt_regs *) &stack;
+ struct rt_sigframe_ia32 *frame = (struct rt_sigframe_ia32 *)(regs->r12 - 4);
+ sigset_t set;
+ stack_t st;
+ int eax;
+
+ if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
+ goto badframe;
+ if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
+ goto badframe;
+
+ sigdelsetmask(&set, ~_BLOCKABLE);
+	spin_lock_irq(&current->sigmask_lock);
+	current->blocked = set;
+	recalc_sigpending(current);
+	spin_unlock_irq(&current->sigmask_lock);
+
+ if (restore_sigcontext_ia32(regs, &frame->uc.uc_mcontext, &eax))
+ goto badframe;
+
+ if (__copy_from_user(&st, &frame->uc.uc_stack, sizeof(st)))
+ goto badframe;
+ /* It is more difficult to avoid calling this function than to
+ call it and ignore errors. */
+ do_sigaltstack(&st, NULL, regs->r12);
+
+ return eax;
+
+badframe:
+ force_sig(SIGSEGV, current);
+ return 0;
+}
+
--- /dev/null
+/*
+ * IA32 helper functions
+ */
+#include <linux/config.h>
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+
+#include <asm/page.h>
+#include <asm/pgtable.h>
+#include <asm/system.h>
+#include <asm/processor.h>
+#include <asm/ia32.h>
+
+extern unsigned long *ia32_gdt_table, *ia32_tss;
+
+extern void die_if_kernel (char *str, struct pt_regs *regs, long err);
+
+/*
+ * Setup IA32 GDT and TSS
+ */
+void
+ia32_gdt_init(void)
+{
+ unsigned long gdt_and_tss_page;
+
+ /* allocate two IA-32 pages of memory: */
+ gdt_and_tss_page = __get_free_pages(GFP_KERNEL,
+ (IA32_PAGE_SHIFT < PAGE_SHIFT)
+ ? 0 : (IA32_PAGE_SHIFT + 1) - PAGE_SHIFT);
+ ia32_gdt_table = (unsigned long *) gdt_and_tss_page;
+ ia32_tss = (unsigned long *) (gdt_and_tss_page + IA32_PAGE_SIZE);
+
+ /* Zero the gdt and tss */
+ memset((void *) gdt_and_tss_page, 0, 2*IA32_PAGE_SIZE);
+
+ /* CS descriptor in IA-32 format */
+ ia32_gdt_table[4] = IA32_SEG_DESCRIPTOR(0L, 0xBFFFFFFFL, 0xBL, 1L,
+ 3L, 1L, 1L, 1L, 1L);
+
+ /* DS descriptor in IA-32 format */
+ ia32_gdt_table[5] = IA32_SEG_DESCRIPTOR(0L, 0xBFFFFFFFL, 0x3L, 1L,
+ 3L, 1L, 1L, 1L, 1L);
+}
+
+/*
+ * Handle bad IA32 interrupt via syscall
+ */
+void
+ia32_bad_interrupt (unsigned long int_num, struct pt_regs *regs)
+{
+ siginfo_t siginfo;
+
+ die_if_kernel("Bad IA-32 interrupt", regs, int_num);
+
+ siginfo.si_signo = SIGTRAP;
+ siginfo.si_errno = int_num; /* XXX is it legal to abuse si_errno like this? */
+ siginfo.si_code = TRAP_BRKPT;
+ force_sig_info(SIGTRAP, &siginfo, current);
+}
+
--- /dev/null
+/*
+ * sys_ia32.c: Conversion between 32bit and 64bit native syscalls. Based on
+ * sys_sparc32
+ *
+ * Copyright (C) 2000 VA Linux Co
+ * Copyright (C) 2000 Don Dugger <n0ano@valinux.com>
+ * Copyright (C) 1999 Arun Sharma <arun.sharma@intel.com>
+ * Copyright (C) 1997,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
+ * Copyright (C) 1997 David S. Miller (davem@caip.rutgers.edu)
+ *
+ * These routines maintain argument size conversion between 32bit and 64bit
+ * environment.
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/fs.h>
+#include <linux/file.h>
+#include <linux/signal.h>
+#include <linux/utime.h>
+#include <linux/resource.h>
+#include <linux/times.h>
+#include <linux/utsname.h>
+#include <linux/timex.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/sem.h>
+#include <linux/msg.h>
+#include <linux/shm.h>
+#include <linux/malloc.h>
+#include <linux/uio.h>
+#include <linux/nfs_fs.h>
+#include <linux/smb_fs.h>
+#include <linux/smb_mount.h>
+#include <linux/ncp_fs.h>
+#include <linux/quota.h>
+#include <linux/module.h>
+#include <linux/sunrpc/svc.h>
+#include <linux/nfsd/nfsd.h>
+#include <linux/nfsd/cache.h>
+#include <linux/nfsd/xdr.h>
+#include <linux/nfsd/syscall.h>
+#include <linux/poll.h>
+#include <linux/personality.h>
+#include <linux/stat.h>
+
+#include <linux/ipc.h>
+
+#include <asm/types.h>
+#include <asm/uaccess.h>
+#include <asm/semaphore.h>
+#include <asm/ipc.h>
+
+#include <net/scm.h>
+#include <net/sock.h>
+#include <asm/ia32.h>
+
+#define A(__x) ((unsigned long)(__x))
+#define AA(__x) ((unsigned long)(__x))
+
+/*
+ * This is trivial, and on the face of it looks like it
+ * could equally well be done in user mode.
+ *
+ * Not so, for quite unobvious reasons - register pressure.
+ * In user mode vfork() cannot have a stack frame, and if
+ * done by calling the "clone()" system call directly, you
+ * do not have enough call-clobbered registers to hold all
+ * the information you need.
+ */
+asmlinkage int
+sys32_vfork (int dummy0, int dummy1, int dummy2, int dummy3, int dummy4,
+	     int dummy5, int dummy6, int dummy7, int stack)
+{
+ struct pt_regs *regs = (struct pt_regs *)&stack;
+
+ return do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, regs->r12, regs);
+}
+
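+/*
+ * Count the entries of a NULL-terminated array of 32-bit user-space
+ * pointers (argv/envp); if `ap' is non-NULL, also widen each entry
+ * into the 64-bit pointer array it points to.
+ */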
+static int
+nargs(unsigned int arg, char **ap)
+{
+ char *ptr;
+ int n, err;
+
+ n = 0;
+ do {
+		if ((err = get_user(ptr, (int *)arg)) != 0)
+ return(err);
+ if (ap)
+ *ap++ = ptr;
+ arg += sizeof(unsigned int);
+ n++;
+ } while (ptr);
+ return(n - 1);
+}
+
+asmlinkage long
+sys32_execve (char *filename, unsigned int argv, unsigned int envp,
+	      int dummy3, int dummy4, int dummy5, int dummy6, int dummy7,
+	      int stack)
+{
+ struct pt_regs *regs = (struct pt_regs *)&stack;
+ char **av, **ae;
+ int na, ne, r, len;
+
+ na = nargs(argv, NULL);
+ ne = nargs(envp, NULL);
+ len = (na + ne + 2) * sizeof(*av);
+ /*
+ * kmalloc won't work because the `sys_exec' code will attempt
+ * to do a `get_user' on the arg list and `get_user' will fail
+ * on a kernel address (simplifies `get_user'). Instead we
+ * do an mmap to get a user address. Note that since a successful
+ * `execve' frees all current memory we only have to do an
+	 * `munmap' if the `execve' fails.
+ */
+	down(&current->mm->mmap_sem);
+ lock_kernel();
+
+	av = (char **) do_mmap_pgoff(0, NULL, len,
+ PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, 0);
+
+ unlock_kernel();
+	up(&current->mm->mmap_sem);
+
+	if (IS_ERR(av))
+		return((long) av);
+ ae = av + na + 1;
+ av[na] = (char *)0;
+ ae[ne] = (char *)0;
+ (void)nargs(argv, av);
+ (void)nargs(envp, ae);
+ r = sys_execve(filename, av, ae, regs);
+ if (IS_ERR(r))
+ sys_munmap(av, len);
+ return(r);
+}
+
+static inline int
+putstat(struct stat32 *ubuf, struct stat *kbuf)
+{
+ int err;
+
+ err = put_user (kbuf->st_dev, &ubuf->st_dev);
+ err |= __put_user (kbuf->st_ino, &ubuf->st_ino);
+ err |= __put_user (kbuf->st_mode, &ubuf->st_mode);
+ err |= __put_user (kbuf->st_nlink, &ubuf->st_nlink);
+ err |= __put_user (kbuf->st_uid, &ubuf->st_uid);
+ err |= __put_user (kbuf->st_gid, &ubuf->st_gid);
+ err |= __put_user (kbuf->st_rdev, &ubuf->st_rdev);
+ err |= __put_user (kbuf->st_size, &ubuf->st_size);
+ err |= __put_user (kbuf->st_atime, &ubuf->st_atime);
+ err |= __put_user (kbuf->st_mtime, &ubuf->st_mtime);
+ err |= __put_user (kbuf->st_ctime, &ubuf->st_ctime);
+ err |= __put_user (kbuf->st_blksize, &ubuf->st_blksize);
+ err |= __put_user (kbuf->st_blocks, &ubuf->st_blocks);
+ return err;
+}
+
+extern asmlinkage int sys_newstat(char * filename, struct stat * statbuf);
+
+asmlinkage int
+sys32_newstat(char * filename, struct stat32 *statbuf)
+{
+ int ret;
+ struct stat s;
+ mm_segment_t old_fs = get_fs();
+
+ set_fs (KERNEL_DS);
+ ret = sys_newstat(filename, &s);
+ set_fs (old_fs);
+ if (putstat (statbuf, &s))
+ return -EFAULT;
+ return ret;
+}
+
+extern asmlinkage int sys_newlstat(char * filename, struct stat * statbuf);
+
+asmlinkage int
+sys32_newlstat(char * filename, struct stat32 *statbuf)
+{
+ int ret;
+ struct stat s;
+ mm_segment_t old_fs = get_fs();
+
+ set_fs (KERNEL_DS);
+ ret = sys_newlstat(filename, &s);
+ set_fs (old_fs);
+ if (putstat (statbuf, &s))
+ return -EFAULT;
+ return ret;
+}
+
+extern asmlinkage int sys_newfstat(unsigned int fd, struct stat * statbuf);
+
+asmlinkage int
+sys32_newfstat(unsigned int fd, struct stat32 *statbuf)
+{
+ int ret;
+ struct stat s;
+ mm_segment_t old_fs = get_fs();
+
+ set_fs (KERNEL_DS);
+ ret = sys_newfstat(fd, &s);
+ set_fs (old_fs);
+ if (putstat (statbuf, &s))
+ return -EFAULT;
+ return ret;
+}
+
+#define ALIGN4K(a) (((a) + 0xfff) & ~0xfff)
+#define OFFSET4K(a) ((a) & 0xfff)
+
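+/*
+ * Emulate an IA-32 mmap that is not aligned to the (possibly larger)
+ * ia64 kernel page size: preserve the partial pages bordering the
+ * request, map anonymous memory over the containing kernel pages,
+ * restore the borders, and read the file contents in by hand.
+ */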
+unsigned long
+do_mmap_fake(struct file *file, unsigned long addr, unsigned long len,
+ unsigned long prot, unsigned long flags, unsigned long off)
+{
+ struct inode *inode;
+ void *front, *back;
+ unsigned long baddr;
+ int r;
+ char c;
+
+ if (OFFSET4K(addr) || OFFSET4K(off))
+ return -EINVAL;
+ if (prot & PROT_WRITE)
+ prot |= PROT_EXEC;
+ front = NULL;
+ back = NULL;
+ if ((baddr = (addr & PAGE_MASK)) != addr && get_user(c, (char *)baddr) == 0) {
+ front = kmalloc(addr - baddr, GFP_KERNEL);
+ memcpy(front, (void *)baddr, addr - baddr);
+ }
+ if ((addr + len) & ~PAGE_MASK && get_user(c, (char *)(addr + len)) == 0) {
+ back = kmalloc(PAGE_SIZE - ((addr + len) & ~PAGE_MASK), GFP_KERNEL);
+		memcpy(back, (void *)(addr + len), PAGE_SIZE - ((addr + len) & ~PAGE_MASK));
+ }
+ if ((r = do_mmap(0, baddr, len + (addr - baddr), prot, flags | MAP_ANONYMOUS, 0)) < 0)
+ return(r);
+ if (back) {
+		memcpy((void *)(addr + len), back, PAGE_SIZE - ((addr + len) & ~PAGE_MASK));
+ kfree(back);
+ }
+ if (front) {
+ memcpy((void *)baddr, front, addr - baddr);
+ kfree(front);
+ }
+ if (flags & MAP_ANONYMOUS) {
+		memset((void *)addr, 0, len);
+ return(addr);
+ }
+ if (!file)
+ return -EINVAL;
+ inode = file->f_dentry->d_inode;
+ if (!inode->i_op || !inode->i_op->default_file_ops)
+ return -EINVAL;
+ if (!file->f_op->read)
+ return -EINVAL;
+ if (file->f_op->llseek) {
+ if (file->f_op->llseek(file,off,0) != off)
+ return -EINVAL;
+ } else
+ file->f_pos = off;
+ r = file->f_op->read(file, (char *)addr, len, &file->f_pos);
+ return (r < 0) ? -EINVAL : addr;
+}
+
+/*
+ * Linux/i386 didn't use to be able to handle more than
+ * 4 system call parameters, so these system calls used a memory
+ * block for parameter passing..
+ */
+
+struct mmap_arg_struct {
+ unsigned int addr;
+ unsigned int len;
+ unsigned int prot;
+ unsigned int flags;
+ unsigned int fd;
+ unsigned int offset;
+};
+
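+/*
+ * A 32-bit caller fills one mmap_arg_struct in its own address space
+ * and passes a single pointer to it (entry 90 in the syscall table
+ * above).  Illustrative user-side sketch only:
+ *
+ *	struct mmap_arg_struct a = { addr, len, prot, flags, fd, off };
+ *	ret = syscall(90, &a);
+ */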
+asmlinkage int
+sys32_mmap(struct mmap_arg_struct *arg)
+{
+ int error = -EFAULT;
+ struct file * file = NULL;
+ struct mmap_arg_struct a;
+
+ if (copy_from_user(&a, arg, sizeof(a)))
+ return -EFAULT;
+
+	down(&current->mm->mmap_sem);
+ lock_kernel();
+ if (!(a.flags & MAP_ANONYMOUS)) {
+ error = -EBADF;
+ file = fget(a.fd);
+ if (!file)
+ goto out;
+ }
+ a.flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
+
+ if ((a.flags & MAP_FIXED) && ((a.addr & ~PAGE_MASK) || (a.offset & ~PAGE_MASK))) {
+ unlock_kernel();
+		up(&current->mm->mmap_sem);
+		error = do_mmap_fake(file, a.addr, a.len, a.prot, a.flags, a.offset);
+		down(&current->mm->mmap_sem);
+ lock_kernel();
+ } else
+ error = do_mmap(file, a.addr, a.len, a.prot, a.flags, a.offset);
+ if (file)
+ fput(file);
+out:
+ unlock_kernel();
+	up(&current->mm->mmap_sem);
+ return error;
+}
+
+asmlinkage long
+sys32_pipe(int *fd)
+{
+ int retval;
+ int fds[2];
+
+ lock_kernel();
+ retval = do_pipe(fds);
+ if (retval)
+ goto out;
+ if (copy_to_user(fd, fds, sizeof(fds)))
+ retval = -EFAULT;
+ out:
+ unlock_kernel();
+ return retval;
+}
+
+asmlinkage long
+sys32_mprotect(unsigned long start, size_t len, unsigned long prot)
+{
+
+ if (prot == 0)
+ return(0);
+ len += start & ~PAGE_MASK;
+ if ((start & ~PAGE_MASK) && (prot & PROT_WRITE))
+ prot |= PROT_EXEC;
+ return(sys_mprotect(start & PAGE_MASK, len & PAGE_MASK, prot));
+}
+
+asmlinkage int
+sys32_rt_sigaction(int sig, struct sigaction32 *act,
+ struct sigaction32 *oact, unsigned int sigsetsize)
+{
+ struct k_sigaction new_ka, old_ka;
+ int ret;
+ sigset32_t set32;
+
+ /* XXX: Don't preclude handling different sized sigset_t's. */
+ if (sigsetsize != sizeof(sigset32_t))
+ return -EINVAL;
+
+ if (act) {
+ ret = get_user((long)new_ka.sa.sa_handler, &act->sa_handler);
+ ret |= __copy_from_user(&set32, &act->sa_mask,
+ sizeof(sigset32_t));
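+		/* Fold pairs of 32-bit mask words into 64-bit sigset
+		   words; the cases fall through intentionally. */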
+ switch (_NSIG_WORDS) {
+ case 4: new_ka.sa.sa_mask.sig[3] = set32.sig[6]
+ | (((long)set32.sig[7]) << 32);
+ case 3: new_ka.sa.sa_mask.sig[2] = set32.sig[4]
+ | (((long)set32.sig[5]) << 32);
+ case 2: new_ka.sa.sa_mask.sig[1] = set32.sig[2]
+ | (((long)set32.sig[3]) << 32);
+ case 1: new_ka.sa.sa_mask.sig[0] = set32.sig[0]
+ | (((long)set32.sig[1]) << 32);
+ }
+ ret |= __get_user(new_ka.sa.sa_flags, &act->sa_flags);
+
+ if (ret)
+ return -EFAULT;
+ }
+
+ ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
+
+ if (!ret && oact) {
+ switch (_NSIG_WORDS) {
+ case 4:
+ set32.sig[7] = (old_ka.sa.sa_mask.sig[3] >> 32);
+ set32.sig[6] = old_ka.sa.sa_mask.sig[3];
+ case 3:
+ set32.sig[5] = (old_ka.sa.sa_mask.sig[2] >> 32);
+ set32.sig[4] = old_ka.sa.sa_mask.sig[2];
+ case 2:
+ set32.sig[3] = (old_ka.sa.sa_mask.sig[1] >> 32);
+ set32.sig[2] = old_ka.sa.sa_mask.sig[1];
+ case 1:
+ set32.sig[1] = (old_ka.sa.sa_mask.sig[0] >> 32);
+ set32.sig[0] = old_ka.sa.sa_mask.sig[0];
+ }
+ ret = put_user((long)old_ka.sa.sa_handler, &oact->sa_handler);
+ ret |= __copy_to_user(&oact->sa_mask, &set32,
+ sizeof(sigset32_t));
+ ret |= __put_user(old_ka.sa.sa_flags, &oact->sa_flags);
+ }
+
+ return ret;
+}
+
+
+extern asmlinkage int sys_rt_sigprocmask(int how, sigset_t *set, sigset_t *oset,
+ size_t sigsetsize);
+
+asmlinkage int
+sys32_rt_sigprocmask(int how, sigset32_t *set, sigset32_t *oset,
+ unsigned int sigsetsize)
+{
+ sigset_t s;
+ sigset32_t s32;
+ int ret;
+ mm_segment_t old_fs = get_fs();
+
+ if (set) {
+ if (copy_from_user (&s32, set, sizeof(sigset32_t)))
+ return -EFAULT;
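+		/* Same 32->64-bit sigset folding as in sys32_rt_sigaction(). */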
+ switch (_NSIG_WORDS) {
+ case 4: s.sig[3] = s32.sig[6] | (((long)s32.sig[7]) << 32);
+ case 3: s.sig[2] = s32.sig[4] | (((long)s32.sig[5]) << 32);
+ case 2: s.sig[1] = s32.sig[2] | (((long)s32.sig[3]) << 32);
+ case 1: s.sig[0] = s32.sig[0] | (((long)s32.sig[1]) << 32);
+ }
+ }
+ set_fs (KERNEL_DS);
+ ret = sys_rt_sigprocmask(how, set ? &s : NULL, oset ? &s : NULL,
+ sigsetsize);
+ set_fs (old_fs);
+ if (ret) return ret;
+ if (oset) {
+ switch (_NSIG_WORDS) {
+ case 4: s32.sig[7] = (s.sig[3] >> 32); s32.sig[6] = s.sig[3];
+ case 3: s32.sig[5] = (s.sig[2] >> 32); s32.sig[4] = s.sig[2];
+ case 2: s32.sig[3] = (s.sig[1] >> 32); s32.sig[2] = s.sig[1];
+ case 1: s32.sig[1] = (s.sig[0] >> 32); s32.sig[0] = s.sig[0];
+ }
+ if (copy_to_user (oset, &s32, sizeof(sigset32_t)))
+ return -EFAULT;
+ }
+ return 0;
+}
+
+static inline int
+put_statfs (struct statfs32 *ubuf, struct statfs *kbuf)
+{
+ int err;
+
+ err = put_user (kbuf->f_type, &ubuf->f_type);
+ err |= __put_user (kbuf->f_bsize, &ubuf->f_bsize);
+ err |= __put_user (kbuf->f_blocks, &ubuf->f_blocks);
+ err |= __put_user (kbuf->f_bfree, &ubuf->f_bfree);
+ err |= __put_user (kbuf->f_bavail, &ubuf->f_bavail);
+ err |= __put_user (kbuf->f_files, &ubuf->f_files);
+ err |= __put_user (kbuf->f_ffree, &ubuf->f_ffree);
+ err |= __put_user (kbuf->f_namelen, &ubuf->f_namelen);
+ err |= __put_user (kbuf->f_fsid.val[0], &ubuf->f_fsid.val[0]);
+ err |= __put_user (kbuf->f_fsid.val[1], &ubuf->f_fsid.val[1]);
+ return err;
+}
+
+extern asmlinkage int sys_statfs(const char * path, struct statfs * buf);
+
+asmlinkage int
+sys32_statfs(const char * path, struct statfs32 *buf)
+{
+ int ret;
+ struct statfs s;
+ mm_segment_t old_fs = get_fs();
+
+ set_fs (KERNEL_DS);
+ ret = sys_statfs((const char *)path, &s);
+ set_fs (old_fs);
+ if (put_statfs(buf, &s))
+ return -EFAULT;
+ return ret;
+}
+
+extern asmlinkage int sys_fstatfs(unsigned int fd, struct statfs * buf);
+
+asmlinkage int
+sys32_fstatfs(unsigned int fd, struct statfs32 *buf)
+{
+ int ret;
+ struct statfs s;
+ mm_segment_t old_fs = get_fs();
+
+ set_fs (KERNEL_DS);
+ ret = sys_fstatfs(fd, &s);
+ set_fs (old_fs);
+ if (put_statfs(buf, &s))
+ return -EFAULT;
+ return ret;
+}
+
+struct timeval32
+{
+ int tv_sec, tv_usec;
+};
+
+struct itimerval32
+{
+ struct timeval32 it_interval;
+ struct timeval32 it_value;
+};
+
+static inline long
+get_tv32(struct timeval *o, struct timeval32 *i)
+{
+ return (!access_ok(VERIFY_READ, i, sizeof(*i)) ||
+ (__get_user(o->tv_sec, &i->tv_sec) |
+ __get_user(o->tv_usec, &i->tv_usec)));
+}
+
+static inline long
+put_tv32(struct timeval32 *o, struct timeval *i)
+{
+ return (!access_ok(VERIFY_WRITE, o, sizeof(*o)) ||
+ (__put_user(i->tv_sec, &o->tv_sec) |
+ __put_user(i->tv_usec, &o->tv_usec)));
+}
+
+static inline long
+get_it32(struct itimerval *o, struct itimerval32 *i)
+{
+ return (!access_ok(VERIFY_READ, i, sizeof(*i)) ||
+ (__get_user(o->it_interval.tv_sec, &i->it_interval.tv_sec) |
+ __get_user(o->it_interval.tv_usec, &i->it_interval.tv_usec) |
+ __get_user(o->it_value.tv_sec, &i->it_value.tv_sec) |
+ __get_user(o->it_value.tv_usec, &i->it_value.tv_usec)));
+}
+
+static inline long
+put_it32(struct itimerval32 *o, struct itimerval *i)
+{
+ return (!access_ok(VERIFY_WRITE, o, sizeof(*o)) ||
+ (__put_user(i->it_interval.tv_sec, &o->it_interval.tv_sec) |
+ __put_user(i->it_interval.tv_usec, &o->it_interval.tv_usec) |
+ __put_user(i->it_value.tv_sec, &o->it_value.tv_sec) |
+ __put_user(i->it_value.tv_usec, &o->it_value.tv_usec)));
+}
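+
+/*
+ * The get_/put_ helpers above return non-zero on any fault; callers
+ * map that straight to -EFAULT, e.g.:
+ *
+ *     if (get_tv32(&ktv, tv))
+ *             return -EFAULT;
+ */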
+
+extern int do_getitimer(int which, struct itimerval *value);
+
+asmlinkage int
+sys32_getitimer(int which, struct itimerval32 *it)
+{
+ struct itimerval kit;
+ int error;
+
+ error = do_getitimer(which, &kit);
+ if (!error && put_it32(it, &kit))
+ error = -EFAULT;
+
+ return error;
+}
+
+extern int do_setitimer(int which, struct itimerval *, struct itimerval *);
+
+asmlinkage int
+sys32_setitimer(int which, struct itimerval32 *in, struct itimerval32 *out)
+{
+ struct itimerval kin, kout;
+ int error;
+
+ if (in) {
+ if (get_it32(&kin, in))
+ return -EFAULT;
+ } else
+ memset(&kin, 0, sizeof(kin));
+
+ error = do_setitimer(which, &kin, out ? &kout : NULL);
+ if (error || !out)
+ return error;
+ if (put_it32(out, &kout))
+ return -EFAULT;
+
+ return 0;
+
+}
+asmlinkage unsigned long
+sys32_alarm(unsigned int seconds)
+{
+ struct itimerval it_new, it_old;
+ unsigned int oldalarm;
+
+ it_new.it_interval.tv_sec = it_new.it_interval.tv_usec = 0;
+ it_new.it_value.tv_sec = seconds;
+ it_new.it_value.tv_usec = 0;
+ do_setitimer(ITIMER_REAL, &it_new, &it_old);
+ oldalarm = it_old.it_value.tv_sec;
+ /* ehhh.. We can't return 0 if we have an alarm pending.. */
+ /* And it's better to return too much than too little anyway */
+ if (it_old.it_value.tv_usec)
+ oldalarm++;
+ return oldalarm;
+}
+
+/* Translations due to time_t size differences, which affect all
+ sorts of things like timeval and itimerval. */
+
+extern struct timezone sys_tz;
+extern int do_sys_settimeofday(struct timeval *tv, struct timezone *tz);
+
+asmlinkage int
+sys32_gettimeofday(struct timeval32 *tv, struct timezone *tz)
+{
+ if (tv) {
+ struct timeval ktv;
+ do_gettimeofday(&ktv);
+ if (put_tv32(tv, &ktv))
+ return -EFAULT;
+ }
+ if (tz) {
+ if (copy_to_user(tz, &sys_tz, sizeof(sys_tz)))
+ return -EFAULT;
+ }
+ return 0;
+}
+
+asmlinkage int
+sys32_settimeofday(struct timeval32 *tv, struct timezone *tz)
+{
+ struct timeval ktv;
+ struct timezone ktz;
+
+ if (tv) {
+ if (get_tv32(&ktv, tv))
+ return -EFAULT;
+ }
+ if (tz) {
+ if (copy_from_user(&ktz, tz, sizeof(ktz)))
+ return -EFAULT;
+ }
+
+ return do_sys_settimeofday(tv ? &ktv : NULL, tz ? &ktz : NULL);
+}
+
+struct dirent32 {
+ unsigned int d_ino;
+ unsigned int d_off;
+ unsigned short d_reclen;
+ char d_name[NAME_MAX + 1];
+};
+
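+/*
+ * Shrink each 64-bit dirent record into the packed 32-bit layout,
+ * walking the buffer by d_reclen.  The rewrite can happen in place
+ * because no dirent32 field is wider than its 64-bit counterpart, so
+ * each record is converted before the walk moves past it.
+ */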
+static void
+xlate_dirent(void *dirent, long n)
+{
+ long off;
+ struct dirent *dirp;
+ struct dirent32 *dirp32;
+
+ off = 0;
+ while (off < n) {
+ dirp = (struct dirent *)(dirent + off);
+ off += dirp->d_reclen;
+ dirp32 = (struct dirent32 *)dirp;
+ dirp32->d_ino = dirp->d_ino;
+ dirp32->d_off = (unsigned int)dirp->d_off;
+ dirp32->d_reclen = dirp->d_reclen;
+ strncpy(dirp32->d_name, dirp->d_name, dirp->d_reclen - ((3 * 4) + 2));
+ }
+ return;
+}
+
+asmlinkage long
+sys32_getdents(unsigned int fd, void * dirent, unsigned int count)
+{
+ long n;
+
+ if ((n = sys_getdents(fd, dirent, count)) < 0)
+ return(n);
+ xlate_dirent(dirent, n);
+ return(n);
+}
+
+asmlinkage int
+sys32_readdir(unsigned int fd, void * dirent, unsigned int count)
+{
+ int n;
+ struct dirent *dirp;
+
+ if ((n = old_readdir(fd, dirent, count)) < 0)
+ return(n);
+ dirp = (struct dirent *)dirent;
+ xlate_dirent(dirent, dirp->d_reclen);
+ return(n);
+}
+
+/*
+ * We can actually return ERESTARTSYS instead of EINTR, but I'd
+ * like to be certain this leads to no problems. So I return
+ * EINTR just for safety.
+ *
+ * Update: ERESTARTSYS breaks at least the xview clock binary, so
+ * I'm trying ERESTARTNOHAND, which restarts only when you want it to.
+ */
+#define MAX_SELECT_SECONDS \
+ ((unsigned long) (MAX_SCHEDULE_TIMEOUT / HZ)-1)
+#define ROUND_UP(x,y) (((x)+(y)-1)/(y))
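+
+/*
+ * ROUND_UP converts microseconds to jiffies, rounding up so a short
+ * timeout is never rounded down to zero.  For example, with HZ=100
+ * (so 10000us per tick), 15000us gives ROUND_UP(15000, 10000) = 2
+ * jiffies rather than 1.
+ */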
+
+asmlinkage int
+sys32_select(int n, fd_set *inp, fd_set *outp, fd_set *exp, struct timeval32 *tvp32)
+{
+ fd_set_bits fds;
+ char *bits;
+ long timeout;
+ int ret, size;
+
+ timeout = MAX_SCHEDULE_TIMEOUT;
+ if (tvp32) {
+ time_t sec, usec;
+
+ get_user(sec, &tvp32->tv_sec);
+ get_user(usec, &tvp32->tv_usec);
+
+ ret = -EINVAL;
+ if (sec < 0 || usec < 0)
+ goto out_nofds;
+
+ if ((unsigned long) sec < MAX_SELECT_SECONDS) {
+ timeout = ROUND_UP(usec, 1000000/HZ);
+ timeout += sec * (unsigned long) HZ;
+ }
+ }
+
+ ret = -EINVAL;
+ if (n < 0)
+ goto out_nofds;
+
+ if (n > current->files->max_fdset)
+ n = current->files->max_fdset;
+
+ /*
+ * We need 6 bitmaps (in/out/ex for both incoming and outgoing);
+ * since we work on fd_set bitmaps, memory must be allocated in
+ * units of long words.
+ */
+ ret = -ENOMEM;
+ size = FDS_BYTES(n);
+ bits = kmalloc(6 * size, GFP_KERNEL);
+ if (!bits)
+ goto out_nofds;
+ fds.in = (unsigned long *) bits;
+ fds.out = (unsigned long *) (bits + size);
+ fds.ex = (unsigned long *) (bits + 2*size);
+ fds.res_in = (unsigned long *) (bits + 3*size);
+ fds.res_out = (unsigned long *) (bits + 4*size);
+ fds.res_ex = (unsigned long *) (bits + 5*size);
+
+ if ((ret = get_fd_set(n, inp, fds.in)) ||
+ (ret = get_fd_set(n, outp, fds.out)) ||
+ (ret = get_fd_set(n, exp, fds.ex)))
+ goto out;
+ zero_fd_set(n, fds.res_in);
+ zero_fd_set(n, fds.res_out);
+ zero_fd_set(n, fds.res_ex);
+
+ ret = do_select(n, &fds, &timeout);
+
+ if (tvp32 && !(current->personality & STICKY_TIMEOUTS)) {
+ time_t sec = 0, usec = 0;
+ if (timeout) {
+ sec = timeout / HZ;
+ usec = timeout % HZ;
+ usec *= (1000000/HZ);
+ }
+ put_user(sec, (int *)&tvp32->tv_sec);
+ put_user(usec, (int *)&tvp32->tv_usec);
+ }
+
+ if (ret < 0)
+ goto out;
+ if (!ret) {
+ ret = -ERESTARTNOHAND;
+ if (signal_pending(current))
+ goto out;
+ ret = 0;
+ }
+
+ set_fd_set(n, inp, fds.res_in);
+ set_fd_set(n, outp, fds.res_out);
+ set_fd_set(n, exp, fds.res_ex);
+
+out:
+ kfree(bits);
+out_nofds:
+ return ret;
+}
+
+struct rusage32 {
+ struct timeval32 ru_utime;
+ struct timeval32 ru_stime;
+ int ru_maxrss;
+ int ru_ixrss;
+ int ru_idrss;
+ int ru_isrss;
+ int ru_minflt;
+ int ru_majflt;
+ int ru_nswap;
+ int ru_inblock;
+ int ru_oublock;
+ int ru_msgsnd;
+ int ru_msgrcv;
+ int ru_nsignals;
+ int ru_nvcsw;
+ int ru_nivcsw;
+};
+
+static int
+put_rusage (struct rusage32 *ru, struct rusage *r)
+{
+ int err;
+
+ err = put_user (r->ru_utime.tv_sec, &ru->ru_utime.tv_sec);
+ err |= __put_user (r->ru_utime.tv_usec, &ru->ru_utime.tv_usec);
+ err |= __put_user (r->ru_stime.tv_sec, &ru->ru_stime.tv_sec);
+ err |= __put_user (r->ru_stime.tv_usec, &ru->ru_stime.tv_usec);
+ err |= __put_user (r->ru_maxrss, &ru->ru_maxrss);
+ err |= __put_user (r->ru_ixrss, &ru->ru_ixrss);
+ err |= __put_user (r->ru_idrss, &ru->ru_idrss);
+ err |= __put_user (r->ru_isrss, &ru->ru_isrss);
+ err |= __put_user (r->ru_minflt, &ru->ru_minflt);
+ err |= __put_user (r->ru_majflt, &ru->ru_majflt);
+ err |= __put_user (r->ru_nswap, &ru->ru_nswap);
+ err |= __put_user (r->ru_inblock, &ru->ru_inblock);
+ err |= __put_user (r->ru_oublock, &ru->ru_oublock);
+ err |= __put_user (r->ru_msgsnd, &ru->ru_msgsnd);
+ err |= __put_user (r->ru_msgrcv, &ru->ru_msgrcv);
+ err |= __put_user (r->ru_nsignals, &ru->ru_nsignals);
+ err |= __put_user (r->ru_nvcsw, &ru->ru_nvcsw);
+ err |= __put_user (r->ru_nivcsw, &ru->ru_nivcsw);
+ return err;
+}
+
+extern asmlinkage int sys_wait4(pid_t pid,unsigned int * stat_addr,
+ int options, struct rusage * ru);
+
+asmlinkage int
+sys32_wait4(__kernel_pid_t32 pid, unsigned int *stat_addr, int options,
+ struct rusage32 *ru)
+{
+ if (!ru)
+ return sys_wait4(pid, stat_addr, options, NULL);
+ else {
+ struct rusage r;
+ int ret;
+ unsigned int status;
+ mm_segment_t old_fs = get_fs();
+
+ set_fs (KERNEL_DS);
+ ret = sys_wait4(pid, stat_addr ? &status : NULL, options, &r);
+ set_fs (old_fs);
+ if (put_rusage (ru, &r)) return -EFAULT;
+ if (stat_addr && put_user (status, stat_addr))
+ return -EFAULT;
+ return ret;
+ }
+}
+
+asmlinkage int
+sys32_waitpid(__kernel_pid_t32 pid, unsigned int *stat_addr, int options)
+{
+ return sys32_wait4(pid, stat_addr, options, NULL);
+}
+
+struct timespec32 {
+ int tv_sec;
+ int tv_nsec;
+};
+
+extern asmlinkage int sys_nanosleep(struct timespec *rqtp,
+ struct timespec *rmtp);
+
+asmlinkage int
+sys32_nanosleep(struct timespec32 *rqtp, struct timespec32 *rmtp)
+{
+ struct timespec t;
+ int ret;
+ mm_segment_t old_fs = get_fs ();
+
+ if (get_user (t.tv_sec, &rqtp->tv_sec) ||
+ __get_user (t.tv_nsec, &rqtp->tv_nsec))
+ return -EFAULT;
+ set_fs (KERNEL_DS);
+ ret = sys_nanosleep(&t, rmtp ? &t : NULL);
+ set_fs (old_fs);
+ if (rmtp && ret == -EINTR) {
+ if (__put_user (t.tv_sec, &rmtp->tv_sec) ||
+ __put_user (t.tv_nsec, &rmtp->tv_nsec))
+ return -EFAULT;
+ }
+ return ret;
+}
+
+struct iovec32 { unsigned int iov_base; int iov_len; };
+
+typedef ssize_t (*IO_fn_t)(struct file *, char *, size_t, loff_t *);
+
+static long
+do_readv_writev32(int type, struct file *file, const struct iovec32 *vector,
+ u32 count)
+{
+ unsigned long tot_len;
+ struct iovec iovstack[UIO_FASTIOV];
+ struct iovec *iov=iovstack, *ivp;
+ struct inode *inode;
+ long retval, i;
+ IO_fn_t fn;
+
+ /* First get the "struct iovec" from user memory and
+ * verify all the pointers
+ */
+ if (!count)
+ return 0;
+ if(verify_area(VERIFY_READ, vector, sizeof(struct iovec32)*count))
+ return -EFAULT;
+ if (count > UIO_MAXIOV)
+ return -EINVAL;
+ if (count > UIO_FASTIOV) {
+ iov = kmalloc(count*sizeof(struct iovec), GFP_KERNEL);
+ if (!iov)
+ return -ENOMEM;
+ }
+
+ tot_len = 0;
+ i = count;
+ ivp = iov;
+ while(i > 0) {
+ u32 len;
+ u32 buf;
+
+ __get_user(len, &vector->iov_len);
+ __get_user(buf, &vector->iov_base);
+ tot_len += len;
+ ivp->iov_base = (void *)A(buf);
+ ivp->iov_len = (__kernel_size_t) len;
+ vector++;
+ ivp++;
+ i--;
+ }
+
+ inode = file->f_dentry->d_inode;
+ /* VERIFY_WRITE actually means a read, as we write to user space */
+ retval = locks_verify_area((type == VERIFY_WRITE
+ ? FLOCK_VERIFY_READ : FLOCK_VERIFY_WRITE),
+ inode, file, file->f_pos, tot_len);
+ if (retval) {
+ if (iov != iovstack)
+ kfree(iov);
+ return retval;
+ }
+
+ /* Then do the actual IO. Note that sockets need to be handled
+ * specially as they have atomicity guarantees and can handle
+ * iovec's natively
+ */
+ if (inode->i_sock) {
+ int err;
+ err = sock_readv_writev(type, inode, file, iov, count, tot_len);
+ if (iov != iovstack)
+ kfree(iov);
+ return err;
+ }
+
+ if (!file->f_op) {
+ if (iov != iovstack)
+ kfree(iov);
+ return -EINVAL;
+ }
+ /* VERIFY_WRITE actually means a read, as we write to user space */
+ fn = file->f_op->read;
+ if (type == VERIFY_READ)
+ fn = (IO_fn_t) file->f_op->write;
+ ivp = iov;
+ while (count > 0) {
+ void * base;
+ int len, nr;
+
+ base = ivp->iov_base;
+ len = ivp->iov_len;
+ ivp++;
+ count--;
+ nr = fn(file, base, len, &file->f_pos);
+ if (nr < 0) {
+ if (retval)
+ break;
+ retval = nr;
+ break;
+ }
+ retval += nr;
+ if (nr != len)
+ break;
+ }
+ if (iov != iovstack)
+ kfree(iov);
+ return retval;
+}
+
+asmlinkage long
+sys32_readv(int fd, struct iovec32 *vector, u32 count)
+{
+ struct file *file;
+ long ret = -EBADF;
+
+ lock_kernel();
+ file = fget(fd);
+ if(!file)
+ goto bad_file;
+
+ if(!(file->f_mode & 1))
+ goto out;
+
+ ret = do_readv_writev32(VERIFY_WRITE, file,
+ vector, count);
+out:
+ fput(file);
+bad_file:
+ unlock_kernel();
+ return ret;
+}
+
+asmlinkage long
+sys32_writev(int fd, struct iovec32 *vector, u32 count)
+{
+ struct file *file;
+ int ret = -EBADF;
+
+ lock_kernel();
+ file = fget(fd);
+ if(!file)
+ goto bad_file;
+
+ if(!(file->f_mode & 2))
+ goto out;
+
+ down(&file->f_dentry->d_inode->i_sem);
+ ret = do_readv_writev32(VERIFY_READ, file,
+ vector, count);
+ up(&file->f_dentry->d_inode->i_sem);
+out:
+ fput(file);
+bad_file:
+ unlock_kernel();
+ return ret;
+}
+
+#define RLIM_INFINITY32 0x7fffffff
+#define RESOURCE32(x) ((x > RLIM_INFINITY32) ? RLIM_INFINITY32 : x)
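+
+/*
+ * IA-32's RLIM_INFINITY is 0x7fffffff, so RESOURCE32() clamps any
+ * 64-bit limit beyond the 31-bit range to look infinite to the
+ * 32-bit process.
+ */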
+
+struct rlimit32 {
+ int rlim_cur;
+ int rlim_max;
+};
+
+extern asmlinkage int sys_getrlimit(unsigned int resource, struct rlimit *rlim);
+
+asmlinkage int
+sys32_getrlimit(unsigned int resource, struct rlimit32 *rlim)
+{
+ struct rlimit r;
+ int ret;
+ mm_segment_t old_fs = get_fs ();
+
+ set_fs (KERNEL_DS);
+ ret = sys_getrlimit(resource, &r);
+ set_fs (old_fs);
+ if (!ret) {
+ ret = put_user (RESOURCE32(r.rlim_cur), &rlim->rlim_cur);
+ ret |= __put_user (RESOURCE32(r.rlim_max), &rlim->rlim_max);
+ }
+ return ret;
+}
+
+extern asmlinkage int sys_setrlimit(unsigned int resource, struct rlimit *rlim);
+
+asmlinkage int
+sys32_setrlimit(unsigned int resource, struct rlimit32 *rlim)
+{
+ struct rlimit r;
+ int ret;
+ mm_segment_t old_fs = get_fs ();
+
+ if (resource >= RLIM_NLIMITS) return -EINVAL;
+ if (get_user (r.rlim_cur, &rlim->rlim_cur) ||
+ __get_user (r.rlim_max, &rlim->rlim_max))
+ return -EFAULT;
+ if (r.rlim_cur == RLIM_INFINITY32)
+ r.rlim_cur = RLIM_INFINITY;
+ if (r.rlim_max == RLIM_INFINITY32)
+ r.rlim_max = RLIM_INFINITY;
+ set_fs (KERNEL_DS);
+ ret = sys_setrlimit(resource, &r);
+ set_fs (old_fs);
+ return ret;
+}
+
+/* Argument list sizes for sys_socketcall */
+#define AL(x) ((x) * sizeof(u32))
+static unsigned char nas[18]={AL(0),AL(3),AL(3),AL(3),AL(2),AL(3),
+ AL(3),AL(3),AL(4),AL(4),AL(4),AL(6),
+ AL(6),AL(2),AL(5),AL(5),AL(3),AL(3)};
+#undef AL
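+
+/*
+ * nas[] gives, for each SYS_* socketcall number, how many argument
+ * bytes to copy from the 32-bit argument block.  Each argument is one
+ * u32, so e.g. SYS_SOCKETPAIR (4 args) copies AL(4) = 16 bytes and
+ * SYS_SENDTO (6 args) copies AL(6) = 24.
+ */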
+
+extern asmlinkage int sys_bind(int fd, struct sockaddr *umyaddr, int addrlen);
+extern asmlinkage int sys_connect(int fd, struct sockaddr *uservaddr,
+ int addrlen);
+extern asmlinkage int sys_accept(int fd, struct sockaddr *upeer_sockaddr,
+ int *upeer_addrlen);
+extern asmlinkage int sys_getsockname(int fd, struct sockaddr *usockaddr,
+ int *usockaddr_len);
+extern asmlinkage int sys_getpeername(int fd, struct sockaddr *usockaddr,
+ int *usockaddr_len);
+extern asmlinkage int sys_send(int fd, void *buff, size_t len, unsigned flags);
+extern asmlinkage int sys_sendto(int fd, u32 buff, __kernel_size_t32 len,
+ unsigned flags, u32 addr, int addr_len);
+extern asmlinkage int sys_recv(int fd, void *ubuf, size_t size, unsigned flags);
+extern asmlinkage int sys_recvfrom(int fd, u32 ubuf, __kernel_size_t32 size,
+ unsigned flags, u32 addr, u32 addr_len);
+extern asmlinkage int sys_setsockopt(int fd, int level, int optname,
+ char *optval, int optlen);
+extern asmlinkage int sys_getsockopt(int fd, int level, int optname,
+ u32 optval, u32 optlen);
+
+extern asmlinkage int sys_socket(int family, int type, int protocol);
+extern asmlinkage int sys_socketpair(int family, int type, int protocol,
+ int usockvec[2]);
+extern asmlinkage int sys_shutdown(int fd, int how);
+extern asmlinkage int sys_listen(int fd, int backlog);
+
+asmlinkage int sys32_socketcall(int call, u32 *args)
+{
+ int i, ret;
+ u32 a[6];
+ u32 a0,a1;
+
+ if (call<SYS_SOCKET||call>SYS_RECVMSG)
+ return -EINVAL;
+ if (copy_from_user(a, args, nas[call]))
+ return -EFAULT;
+ a0=a[0];
+ a1=a[1];
+
+ switch(call)
+ {
+ case SYS_SOCKET:
+ ret = sys_socket(a0, a1, a[2]);
+ break;
+ case SYS_BIND:
+ ret = sys_bind(a0, (struct sockaddr *)A(a1), a[2]);
+ break;
+ case SYS_CONNECT:
+ ret = sys_connect(a0, (struct sockaddr *)A(a1), a[2]);
+ break;
+ case SYS_LISTEN:
+ ret = sys_listen(a0, a1);
+ break;
+ case SYS_ACCEPT:
+ ret = sys_accept(a0, (struct sockaddr *)A(a1),
+ (int *)A(a[2]));
+ break;
+ case SYS_GETSOCKNAME:
+ ret = sys_getsockname(a0, (struct sockaddr *)A(a1),
+ (int *)A(a[2]));
+ break;
+ case SYS_GETPEERNAME:
+ ret = sys_getpeername(a0, (struct sockaddr *)A(a1),
+ (int *)A(a[2]));
+ break;
+ case SYS_SOCKETPAIR:
+ ret = sys_socketpair(a0, a1, a[2], (int *)A(a[3]));
+ break;
+ case SYS_SEND:
+ ret = sys_send(a0, (void *)A(a1), a[2], a[3]);
+ break;
+ case SYS_SENDTO:
+ ret = sys_sendto(a0, a1, a[2], a[3], a[4], a[5]);
+ break;
+ case SYS_RECV:
+ ret = sys_recv(a0, (void *)A(a1), a[2], a[3]);
+ break;
+ case SYS_RECVFROM:
+ ret = sys_recvfrom(a0, a1, a[2], a[3], a[4], a[5]);
+ break;
+ case SYS_SHUTDOWN:
+ ret = sys_shutdown(a0,a1);
+ break;
+ case SYS_SETSOCKOPT:
+ ret = sys_setsockopt(a0, a1, a[2], (char *)A(a[3]),
+ a[4]);
+ break;
+ case SYS_GETSOCKOPT:
+ ret = sys_getsockopt(a0, a1, a[2], a[3], a[4]);
+ break;
+ case SYS_SENDMSG:
+ ret = sys32_sendmsg(a0, (struct msghdr32 *)A(a1),
+ a[2]);
+ break;
+ case SYS_RECVMSG:
+ ret = sys32_recvmsg(a0, (struct msghdr32 *)A(a1),
+ a[2]);
+ break;
+ default:
+ ret = -EINVAL;
+ break;
+ }
+ return ret;
+}
+
+/*
+ * Declare the IA32 version of the msghdr
+ */
+
+struct msghdr32 {
+ unsigned int msg_name; /* Socket name */
+ int msg_namelen; /* Length of name */
+ unsigned int msg_iov; /* Data blocks */
+ unsigned int msg_iovlen; /* Number of blocks */
+ unsigned int msg_control; /* Per protocol magic (eg BSD file descriptor passing) */
+ unsigned int msg_controllen; /* Length of cmsg list */
+ unsigned msg_flags;
+};
+
+static inline int
+shape_msg(struct msghdr *mp, struct msghdr32 *mp32)
+{
+ unsigned int i;
+
+ if (!access_ok(VERIFY_READ, mp32, sizeof(*mp32)))
+ return(-EFAULT);
+ __get_user(i, &mp32->msg_name);
+ mp->msg_name = (void *)i;
+ __get_user(mp->msg_namelen, &mp32->msg_namelen);
+ __get_user(i, &mp32->msg_iov);
+ mp->msg_iov = (struct iovec *)i;
+ __get_user(mp->msg_iovlen, &mp32->msg_iovlen);
+ __get_user(i, &mp32->msg_control);
+ mp->msg_control = (void *)i;
+ __get_user(mp->msg_controllen, &mp32->msg_controllen);
+ __get_user(mp->msg_flags, &mp32->msg_flags);
+ return(0);
+}
+
+/*
+ * Verify & re-shape IA32 iovec. The caller must ensure that the
+ * iovec is big enough to hold the re-shaped message iovec.
+ *
+ * Save time not doing verify_area. copy_*_user will make this work
+ * in any case.
+ *
+ * Don't need to check the total size for overflow (cf net/core/iovec.c),
+ * 32-bit sizes can't overflow a 64-bit count.
+ */
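+
+/*
+ * The conversion loop below runs backwards (ct--): each 16-byte
+ * struct iovec slot is written only after the 8-byte iovec32 entries
+ * it overlays have already been consumed, which is what makes the
+ * in-place expansion safe.
+ */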
+
+static inline int
+verify_iovec32(struct msghdr *m, struct iovec *iov, char *address, int mode)
+{
+ int size, err, ct;
+ struct iovec32 *iov32;
+
+ if(m->msg_namelen)
+ {
+ if(mode==VERIFY_READ)
+ {
+ err=move_addr_to_kernel(m->msg_name, m->msg_namelen, address);
+ if(err<0)
+ goto out;
+ }
+
+ m->msg_name = address;
+ } else
+ m->msg_name = NULL;
+
+ err = -EFAULT;
+ size = m->msg_iovlen * sizeof(struct iovec32);
+ if (copy_from_user(iov, m->msg_iov, size))
+ goto out;
+ m->msg_iov=iov;
+
+ err = 0;
+ iov32 = (struct iovec32 *)iov;
+ for (ct = m->msg_iovlen; ct-- > 0; ) {
+ iov[ct].iov_len = (__kernel_size_t)iov32[ct].iov_len;
+ iov[ct].iov_base = (void *)iov32[ct].iov_base;
+ err += iov[ct].iov_len;
+ }
+out:
+ return err;
+}
+
+extern __inline__ void
+sockfd_put(struct socket *sock)
+{
+ fput(sock->file);
+}
+
+/* XXX This really belongs in some header file... -DaveM */
+#define MAX_SOCK_ADDR 128 /* 108 for Unix domain -
+ 16 for IP, 16 for IPX,
+ 24 for IPv6,
+ about 80 for AX.25 */
+
+extern struct socket *sockfd_lookup(int fd, int *err);
+
+/*
+ * BSD sendmsg interface
+ */
+
+asmlinkage int sys32_sendmsg(int fd, struct msghdr32 *msg, unsigned flags)
+{
+ struct socket *sock;
+ char address[MAX_SOCK_ADDR];
+ struct iovec iovstack[UIO_FASTIOV], *iov = iovstack;
+ unsigned char ctl[sizeof(struct cmsghdr) + 20]; /* 20 is size of ipv6_pktinfo */
+ unsigned char *ctl_buf = ctl;
+ struct msghdr msg_sys;
+ int err, ctl_len, iov_size, total_len;
+
+ err = -EFAULT;
+ if (shape_msg(&msg_sys, msg))
+ goto out;
+
+ sock = sockfd_lookup(fd, &err);
+ if (!sock)
+ goto out;
+
+ /* do not move before msg_sys is valid */
+ err = -EINVAL;
+ if (msg_sys.msg_iovlen > UIO_MAXIOV)
+ goto out_put;
+
+ /* Check whether to allocate the iovec area */
+ err = -ENOMEM;
+ iov_size = msg_sys.msg_iovlen * sizeof(struct iovec); /* must hold the re-shaped 64-bit iovec, cf. verify_iovec32() */
+ if (msg_sys.msg_iovlen > UIO_FASTIOV) {
+ iov = sock_kmalloc(sock->sk, iov_size, GFP_KERNEL);
+ if (!iov)
+ goto out_put;
+ }
+
+ /* This will also move the address data into kernel space */
+ err = verify_iovec32(&msg_sys, iov, address, VERIFY_READ);
+ if (err < 0)
+ goto out_freeiov;
+ total_len = err;
+
+ err = -ENOBUFS;
+
+ if (msg_sys.msg_controllen > INT_MAX)
+ goto out_freeiov;
+ ctl_len = msg_sys.msg_controllen;
+ if (ctl_len)
+ {
+ if (ctl_len > sizeof(ctl))
+ {
+ err = -ENOBUFS;
+ ctl_buf = sock_kmalloc(sock->sk, ctl_len, GFP_KERNEL);
+ if (ctl_buf == NULL)
+ goto out_freeiov;
+ }
+ err = -EFAULT;
+ if (copy_from_user(ctl_buf, msg_sys.msg_control, ctl_len))
+ goto out_freectl;
+ msg_sys.msg_control = ctl_buf;
+ }
+ msg_sys.msg_flags = flags;
+
+ if (sock->file->f_flags & O_NONBLOCK)
+ msg_sys.msg_flags |= MSG_DONTWAIT;
+ err = sock_sendmsg(sock, &msg_sys, total_len);
+
+out_freectl:
+ if (ctl_buf != ctl)
+ sock_kfree_s(sock->sk, ctl_buf, ctl_len);
+out_freeiov:
+ if (iov != iovstack)
+ sock_kfree_s(sock->sk, iov, iov_size);
+out_put:
+ sockfd_put(sock);
+out:
+ return err;
+}
+
+/*
+ * BSD recvmsg interface
+ */
+
+asmlinkage int sys32_recvmsg(int fd, struct msghdr32 *msg, unsigned int flags)
+{
+ struct socket *sock;
+ struct iovec iovstack[UIO_FASTIOV];
+ struct iovec *iov=iovstack;
+ struct msghdr msg_sys;
+ unsigned long cmsg_ptr;
+ int err, iov_size, total_len, len;
+
+ /* kernel mode address */
+ char addr[MAX_SOCK_ADDR];
+
+ /* user mode address pointers */
+ struct sockaddr *uaddr;
+ int *uaddr_len;
+
+ err=-EFAULT;
+ if (shape_msg(&msg_sys, msg))
+ goto out;
+
+ sock = sockfd_lookup(fd, &err);
+ if (!sock)
+ goto out;
+
+ err = -EINVAL;
+ if (msg_sys.msg_iovlen > UIO_MAXIOV)
+ goto out_put;
+
+ /* Check whether to allocate the iovec area */
+ err = -ENOMEM;
+ iov_size = msg_sys.msg_iovlen * sizeof(struct iovec);
+ if (msg_sys.msg_iovlen > UIO_FASTIOV) {
+ iov = sock_kmalloc(sock->sk, iov_size, GFP_KERNEL);
+ if (!iov)
+ goto out_put;
+ }
+
+ /*
+ * Save the user-mode address (verify_iovec will change the
+ * kernel msghdr to use the kernel address space)
+ */
+
+ uaddr = msg_sys.msg_name;
+ uaddr_len = &msg->msg_namelen;
+ err = verify_iovec32(&msg_sys, iov, addr, VERIFY_WRITE);
+ if (err < 0)
+ goto out_freeiov;
+ total_len=err;
+
+ cmsg_ptr = (unsigned long)msg_sys.msg_control;
+ msg_sys.msg_flags = 0;
+
+ if (sock->file->f_flags & O_NONBLOCK)
+ flags |= MSG_DONTWAIT;
+ err = sock_recvmsg(sock, &msg_sys, total_len, flags);
+ if (err < 0)
+ goto out_freeiov;
+ len = err;
+
+ if (uaddr != NULL) {
+ err = move_addr_to_user(addr, msg_sys.msg_namelen, uaddr, uaddr_len);
+ if (err < 0)
+ goto out_freeiov;
+ }
+ err = __put_user(msg_sys.msg_flags, &msg->msg_flags);
+ if (err)
+ goto out_freeiov;
+ err = __put_user((unsigned long)msg_sys.msg_control-cmsg_ptr,
+ &msg->msg_controllen);
+ if (err)
+ goto out_freeiov;
+ err = len;
+
+out_freeiov:
+ if (iov != iovstack)
+ sock_kfree_s(sock->sk, iov, iov_size);
+out_put:
+ sockfd_put(sock);
+out:
+ return err;
+}
+
+/*
+ * sys32_ipc() is the de-multiplexer for the SysV IPC calls in 32bit emulation..
+ *
+ * This is really horribly ugly.
+ */
+
+struct msgbuf32 { s32 mtype; char mtext[1]; };
+
+struct ipc_perm32
+{
+ key_t key;
+ __kernel_uid_t32 uid;
+ __kernel_gid_t32 gid;
+ __kernel_uid_t32 cuid;
+ __kernel_gid_t32 cgid;
+ __kernel_mode_t32 mode;
+ unsigned short seq;
+};
+
+struct semid_ds32 {
+ struct ipc_perm32 sem_perm; /* permissions .. see ipc.h */
+ __kernel_time_t32 sem_otime; /* last semop time */
+ __kernel_time_t32 sem_ctime; /* last change time */
+ u32 sem_base; /* ptr to first semaphore in array */
+ u32 sem_pending; /* pending operations to be processed */
+ u32 sem_pending_last; /* last pending operation */
+ u32 undo; /* undo requests on this array */
+ unsigned short sem_nsems; /* no. of semaphores in array */
+};
+
+struct msqid_ds32
+{
+ struct ipc_perm32 msg_perm;
+ u32 msg_first;
+ u32 msg_last;
+ __kernel_time_t32 msg_stime;
+ __kernel_time_t32 msg_rtime;
+ __kernel_time_t32 msg_ctime;
+ u32 wwait;
+ u32 rwait;
+ unsigned short msg_cbytes;
+ unsigned short msg_qnum;
+ unsigned short msg_qbytes;
+ __kernel_ipc_pid_t32 msg_lspid;
+ __kernel_ipc_pid_t32 msg_lrpid;
+};
+
+struct shmid_ds32 {
+ struct ipc_perm32 shm_perm;
+ int shm_segsz;
+ __kernel_time_t32 shm_atime;
+ __kernel_time_t32 shm_dtime;
+ __kernel_time_t32 shm_ctime;
+ __kernel_ipc_pid_t32 shm_cpid;
+ __kernel_ipc_pid_t32 shm_lpid;
+ unsigned short shm_nattch;
+};
+
+#define IPCOP_MASK(__x) (1UL << (__x))
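+
+/*
+ * IPCOP_MASK() turns a command number into a single bit so that a
+ * whole set of commands can be tested with one AND, e.g.:
+ *
+ *     if (IPCOP_MASK(cmd) & (IPCOP_MASK(IPC_STAT) | IPCOP_MASK(IPC_RMID)))
+ *             ...
+ */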
+
+static int
+do_sys32_semctl(int first, int second, int third, void *uptr)
+{
+ union semun fourth;
+ u32 pad;
+ int err = -EINVAL;
+
+ if (!uptr)
+ goto out;
+ err = -EFAULT;
+ if (get_user (pad, (u32 *)uptr))
+ goto out;
+ if(third == SETVAL)
+ fourth.val = (int)pad;
+ else
+ fourth.__pad = (void *)A(pad);
+ if (IPCOP_MASK (third) &
+ (IPCOP_MASK (IPC_INFO) | IPCOP_MASK (SEM_INFO) |
+ IPCOP_MASK (GETVAL) | IPCOP_MASK (GETPID) |
+ IPCOP_MASK (GETNCNT) | IPCOP_MASK (GETZCNT) |
+ IPCOP_MASK (GETALL) | IPCOP_MASK (SETALL) |
+ IPCOP_MASK (IPC_RMID))) {
+ err = sys_semctl (first, second, third, fourth);
+ } else {
+ struct semid_ds s;
+ struct semid_ds32 *usp = (struct semid_ds32 *)A(pad);
+ mm_segment_t old_fs;
+ int need_back_translation;
+
+ if (third == IPC_SET) {
+ err = get_user (s.sem_perm.uid, &usp->sem_perm.uid);
+ err |= __get_user(s.sem_perm.gid, &usp->sem_perm.gid);
+ err |= __get_user(s.sem_perm.mode, &usp->sem_perm.mode);
+ if (err)
+ goto out;
+ fourth.__pad = &s;
+ }
+ need_back_translation =
+ (IPCOP_MASK (third) &
+ (IPCOP_MASK (SEM_STAT) | IPCOP_MASK (IPC_STAT))) != 0;
+ if (need_back_translation)
+ fourth.__pad = &s;
+ old_fs = get_fs ();
+ set_fs (KERNEL_DS);
+ err = sys_semctl (first, second, third, fourth);
+ set_fs (old_fs);
+ if (need_back_translation) {
+ int err2 = put_user(s.sem_perm.key, &usp->sem_perm.key);
+ err2 |= __put_user(s.sem_perm.uid, &usp->sem_perm.uid);
+ err2 |= __put_user(s.sem_perm.gid, &usp->sem_perm.gid);
+ err2 |= __put_user(s.sem_perm.cuid,
+ &usp->sem_perm.cuid);
+ err2 |= __put_user (s.sem_perm.cgid,
+ &usp->sem_perm.cgid);
+ err2 |= __put_user (s.sem_perm.mode,
+ &usp->sem_perm.mode);
+ err2 |= __put_user (s.sem_perm.seq, &usp->sem_perm.seq);
+ err2 |= __put_user (s.sem_otime, &usp->sem_otime);
+ err2 |= __put_user (s.sem_ctime, &usp->sem_ctime);
+ err2 |= __put_user (s.sem_nsems, &usp->sem_nsems);
+ if (err2) err = -EFAULT;
+ }
+ }
+out:
+ return err;
+}
+
+static int
+do_sys32_msgsnd (int first, int second, int third, void *uptr)
+{
+ struct msgbuf *p = kmalloc (second + sizeof (struct msgbuf)
+ + 4, GFP_USER);
+ struct msgbuf32 *up = (struct msgbuf32 *)uptr;
+ mm_segment_t old_fs;
+ int err;
+
+ if (!p)
+ return -ENOMEM;
+ err = get_user (p->mtype, &up->mtype);
+ err |= __copy_from_user (p->mtext, &up->mtext, second);
+ if (err)
+ goto out;
+ old_fs = get_fs ();
+ set_fs (KERNEL_DS);
+ err = sys_msgsnd (first, p, second, third);
+ set_fs (old_fs);
+out:
+ kfree (p);
+ return err;
+}
+
+static int
+do_sys32_msgrcv (int first, int second, int msgtyp, int third,
+ int version, void *uptr)
+{
+ struct msgbuf32 *up;
+ struct msgbuf *p;
+ mm_segment_t old_fs;
+ int err;
+
+ if (!version) {
+ struct ipc_kludge *uipck = (struct ipc_kludge *)uptr;
+ struct ipc_kludge ipck;
+
+ err = -EINVAL;
+ if (!uptr)
+ goto out;
+ err = -EFAULT;
+ if (copy_from_user (&ipck, uipck, sizeof (struct ipc_kludge)))
+ goto out;
+ uptr = (void *)A(ipck.msgp);
+ msgtyp = ipck.msgtyp;
+ }
+ err = -ENOMEM;
+ p = kmalloc (second + sizeof (struct msgbuf) + 4, GFP_USER);
+ if (!p)
+ goto out;
+ old_fs = get_fs ();
+ set_fs (KERNEL_DS);
+ err = sys_msgrcv (first, p, second + 4, msgtyp, third);
+ set_fs (old_fs);
+ if (err < 0)
+ goto free_then_out;
+ up = (struct msgbuf32 *)uptr;
+ if (put_user (p->mtype, &up->mtype) ||
+ __copy_to_user (&up->mtext, p->mtext, err))
+ err = -EFAULT;
+free_then_out:
+ kfree (p);
+out:
+ return err;
+}
+
+static int
+do_sys32_msgctl (int first, int second, void *uptr)
+{
+ int err;
+
+ if (IPCOP_MASK (second) &
+ (IPCOP_MASK (IPC_INFO) | IPCOP_MASK (MSG_INFO) |
+ IPCOP_MASK (IPC_RMID))) {
+ err = sys_msgctl (first, second, (struct msqid_ds *)uptr);
+ } else {
+ struct msqid_ds m;
+ struct msqid_ds32 *up = (struct msqid_ds32 *)uptr;
+ mm_segment_t old_fs;
+
+ if (second == IPC_SET) {
+ err = get_user (m.msg_perm.uid, &up->msg_perm.uid);
+ err |= __get_user (m.msg_perm.gid, &up->msg_perm.gid);
+ err |= __get_user (m.msg_perm.mode, &up->msg_perm.mode);
+ err |= __get_user (m.msg_qbytes, &up->msg_qbytes);
+ if (err)
+ goto out;
+ }
+ old_fs = get_fs ();
+ set_fs (KERNEL_DS);
+ err = sys_msgctl (first, second, &m);
+ set_fs (old_fs);
+ if (IPCOP_MASK (second) &
+ (IPCOP_MASK (MSG_STAT) | IPCOP_MASK (IPC_STAT))) {
+ int err2 = put_user (m.msg_perm.key, &up->msg_perm.key);
+ err2 |= __put_user(m.msg_perm.uid, &up->msg_perm.uid);
+ err2 |= __put_user(m.msg_perm.gid, &up->msg_perm.gid);
+ err2 |= __put_user(m.msg_perm.cuid, &up->msg_perm.cuid);
+ err2 |= __put_user(m.msg_perm.cgid, &up->msg_perm.cgid);
+ err2 |= __put_user(m.msg_perm.mode, &up->msg_perm.mode);
+ err2 |= __put_user(m.msg_perm.seq, &up->msg_perm.seq);
+ err2 |= __put_user(m.msg_stime, &up->msg_stime);
+ err2 |= __put_user(m.msg_rtime, &up->msg_rtime);
+ err2 |= __put_user(m.msg_ctime, &up->msg_ctime);
+ err2 |= __put_user(m.msg_cbytes, &up->msg_cbytes);
+ err2 |= __put_user(m.msg_qnum, &up->msg_qnum);
+ err2 |= __put_user(m.msg_qbytes, &up->msg_qbytes);
+ err2 |= __put_user(m.msg_lspid, &up->msg_lspid);
+ err2 |= __put_user(m.msg_lrpid, &up->msg_lrpid);
+ if (err2)
+ err = -EFAULT;
+ }
+ }
+
+out:
+ return err;
+}
+
+static int
+do_sys32_shmat (int first, int second, int third, int version, void *uptr)
+{
+ unsigned long raddr;
+ u32 *uaddr = (u32 *)A((u32)third);
+ int err = -EINVAL;
+
+ if (version == 1)
+ goto out;
+ err = sys_shmat (first, uptr, second, &raddr);
+ if (err)
+ goto out;
+ err = put_user (raddr, uaddr);
+out:
+ return err;
+}
+
+static int
+do_sys32_shmctl (int first, int second, void *uptr)
+{
+ int err;
+
+ if (IPCOP_MASK (second) &
+ (IPCOP_MASK (IPC_INFO) | IPCOP_MASK (SHM_LOCK)
+ | IPCOP_MASK (SHM_UNLOCK) | IPCOP_MASK (IPC_RMID))) {
+ err = sys_shmctl (first, second, (struct shmid_ds *)uptr);
+ } else {
+ struct shmid_ds s;
+ struct shmid_ds32 *up = (struct shmid_ds32 *)uptr;
+ mm_segment_t old_fs;
+
+ if (second == IPC_SET) {
+ err = get_user (s.shm_perm.uid, &up->shm_perm.uid);
+ err |= __get_user (s.shm_perm.gid, &up->shm_perm.gid);
+ err |= __get_user (s.shm_perm.mode, &up->shm_perm.mode);
+ if (err)
+ goto out;
+ }
+ old_fs = get_fs ();
+ set_fs (KERNEL_DS);
+ err = sys_shmctl (first, second, &s);
+ set_fs (old_fs);
+ if (err < 0)
+ goto out;
+
+ /* Mask it even in this case so it becomes a CSE. */
+ if (second == SHM_INFO) {
+ struct shm_info32 {
+ int used_ids;
+ u32 shm_tot, shm_rss, shm_swp;
+ u32 swap_attempts, swap_successes;
+ } *uip = (struct shm_info32 *)uptr;
+ struct shm_info *kp = (struct shm_info *)&s;
+ int err2 = put_user (kp->used_ids, &uip->used_ids);
+ err2 |= __put_user (kp->shm_tot, &uip->shm_tot);
+ err2 |= __put_user (kp->shm_rss, &uip->shm_rss);
+ err2 |= __put_user (kp->shm_swp, &uip->shm_swp);
+ err2 |= __put_user (kp->swap_attempts,
+ &uip->swap_attempts);
+ err2 |= __put_user (kp->swap_successes,
+ &uip->swap_successes);
+ if (err2)
+ err = -EFAULT;
+ } else if (IPCOP_MASK (second) &
+ (IPCOP_MASK (SHM_STAT) | IPCOP_MASK (IPC_STAT))) {
+ int err2 = put_user (s.shm_perm.key, &up->shm_perm.key);
+ err2 |= __put_user (s.shm_perm.uid, &up->shm_perm.uid);
+ err2 |= __put_user (s.shm_perm.gid, &up->shm_perm.gid);
+ err2 |= __put_user (s.shm_perm.cuid,
+ &up->shm_perm.cuid);
+ err2 |= __put_user (s.shm_perm.cgid,
+ &up->shm_perm.cgid);
+ err2 |= __put_user (s.shm_perm.mode,
+ &up->shm_perm.mode);
+ err2 |= __put_user (s.shm_perm.seq, &up->shm_perm.seq);
+ err2 |= __put_user (s.shm_atime, &up->shm_atime);
+ err2 |= __put_user (s.shm_dtime, &up->shm_dtime);
+ err2 |= __put_user (s.shm_ctime, &up->shm_ctime);
+ err2 |= __put_user (s.shm_segsz, &up->shm_segsz);
+ err2 |= __put_user (s.shm_nattch, &up->shm_nattch);
+ err2 |= __put_user (s.shm_cpid, &up->shm_cpid);
+ err2 |= __put_user (s.shm_lpid, &up->shm_lpid);
+ if (err2)
+ err = -EFAULT;
+ }
+ }
+out:
+ return err;
+}
+
+asmlinkage int
+sys32_ipc (u32 call, int first, int second, int third, u32 ptr, u32 fifth)
+{
+ int version, err;
+
+ lock_kernel();
+ version = call >> 16; /* hack for backward compatibility */
+ call &= 0xffff;
+
+ if (call <= SEMCTL)
+ switch (call) {
+ case SEMOP:
+ /* struct sembuf is the same on 32 and 64bit :)) */
+ err = sys_semop (first, (struct sembuf *)AA(ptr),
+ second);
+ goto out;
+ case SEMGET:
+ err = sys_semget (first, second, third);
+ goto out;
+ case SEMCTL:
+ err = do_sys32_semctl (first, second, third,
+ (void *)AA(ptr));
+ goto out;
+ default:
+ err = -EINVAL;
+ goto out;
+ }
+ if (call <= MSGCTL)
+ switch (call) {
+ case MSGSND:
+ err = do_sys32_msgsnd (first, second, third,
+ (void *)AA(ptr));
+ goto out;
+ case MSGRCV:
+ err = do_sys32_msgrcv (first, second, fifth, third,
+ version, (void *)AA(ptr));
+ goto out;
+ case MSGGET:
+ err = sys_msgget ((key_t) first, second);
+ goto out;
+ case MSGCTL:
+ err = do_sys32_msgctl (first, second, (void *)AA(ptr));
+ goto out;
+ default:
+ err = -EINVAL;
+ goto out;
+ }
+ if (call <= SHMCTL)
+ switch (call) {
+ case SHMAT:
+ err = do_sys32_shmat (first, second, third,
+ version, (void *)AA(ptr));
+ goto out;
+ case SHMDT:
+ err = sys_shmdt ((char *)AA(ptr));
+ goto out;
+ case SHMGET:
+ err = sys_shmget (first, second, third);
+ goto out;
+ case SHMCTL:
+ err = do_sys32_shmctl (first, second, (void *)AA(ptr));
+ goto out;
+ default:
+ err = -EINVAL;
+ goto out;
+ }
+
+ err = -EINVAL;
+
+out:
+ unlock_kernel();
+ return err;
+}
+
+#ifdef NOTYET /* UNTESTED FOR IA64 FROM HERE DOWN */
+
+/* In order to reduce some races, while at the same time doing additional
+ * checking and hopefully speeding things up, we copy filenames to the
+ * kernel data space before using them..
+ *
+ * POSIX.1 2.4: an empty pathname is invalid (ENOENT).
+ */
+static inline int
+do_getname32(const char *filename, char *page)
+{
+ int retval;
+
+ /* 32bit pointer will be always far below TASK_SIZE :)) */
+ retval = strncpy_from_user((char *)page, (char *)filename, PAGE_SIZE);
+ if (retval > 0) {
+ if (retval < PAGE_SIZE)
+ return 0;
+ return -ENAMETOOLONG;
+ } else if (!retval)
+ retval = -ENOENT;
+ return retval;
+}
+
+char *
+getname32(const char *filename)
+{
+ char *tmp, *result;
+
+ result = ERR_PTR(-ENOMEM);
+ tmp = (char *)__get_free_page(GFP_KERNEL);
+ if (tmp) {
+ int retval = do_getname32(filename, tmp);
+
+ result = tmp;
+ if (retval < 0) {
+ putname(tmp);
+ result = ERR_PTR(retval);
+ }
+ }
+ return result;
+}
+
+/* 32-bit timeval and related flotsam. */
+
+extern asmlinkage int sys_ioperm(unsigned long from, unsigned long num, int on);
+
+asmlinkage int
+sys32_ioperm(u32 from, u32 num, int on)
+{
+ return sys_ioperm((unsigned long)from, (unsigned long)num, on);
+}
+
+static inline int
+get_flock(struct flock *kfl, struct flock32 *ufl)
+{
+ int err;
+
+ err = get_user(kfl->l_type, &ufl->l_type);
+ err |= __get_user(kfl->l_whence, &ufl->l_whence);
+ err |= __get_user(kfl->l_start, &ufl->l_start);
+ err |= __get_user(kfl->l_len, &ufl->l_len);
+ err |= __get_user(kfl->l_pid, &ufl->l_pid);
+ return err;
+}
+
+static inline int
+put_flock(struct flock *kfl, struct flock32 *ufl)
+{
+ int err;
+
+ err = __put_user(kfl->l_type, &ufl->l_type);
+ err |= __put_user(kfl->l_whence, &ufl->l_whence);
+ err |= __put_user(kfl->l_start, &ufl->l_start);
+ err |= __put_user(kfl->l_len, &ufl->l_len);
+ err |= __put_user(kfl->l_pid, &ufl->l_pid);
+ return err;
+}
+
+extern asmlinkage long sys_fcntl(unsigned int fd, unsigned int cmd,
+ unsigned long arg);
+
+asmlinkage long
+sys32_fcntl(unsigned int fd, unsigned int cmd, unsigned long arg)
+{
+ switch (cmd) {
+ case F_GETLK:
+ case F_SETLK:
+ case F_SETLKW:
+ {
+ struct flock f;
+ mm_segment_t old_fs;
+ long ret;
+
+ if(get_flock(&f, (struct flock32 *)arg))
+ return -EFAULT;
+ old_fs = get_fs(); set_fs (KERNEL_DS);
+ ret = sys_fcntl(fd, cmd, (unsigned long)&f);
+ set_fs (old_fs);
+ if(put_flock(&f, (struct flock32 *)arg))
+ return -EFAULT;
+ return ret;
+ }
+ default:
+ return sys_fcntl(fd, cmd, (unsigned long)arg);
+ }
+}
+
+struct dqblk32 {
+ __u32 dqb_bhardlimit;
+ __u32 dqb_bsoftlimit;
+ __u32 dqb_curblocks;
+ __u32 dqb_ihardlimit;
+ __u32 dqb_isoftlimit;
+ __u32 dqb_curinodes;
+ __kernel_time_t32 dqb_btime;
+ __kernel_time_t32 dqb_itime;
+};
+
+extern asmlinkage int sys_quotactl(int cmd, const char *special, int id,
+ caddr_t addr);
+
+asmlinkage int
+sys32_quotactl(int cmd, const char *special, int id, unsigned long addr)
+{
+ int cmds = cmd >> SUBCMDSHIFT;
+ int err;
+ struct dqblk d;
+ mm_segment_t old_fs;
+ char *spec;
+
+ switch (cmds) {
+ case Q_GETQUOTA:
+ break;
+ case Q_SETQUOTA:
+ case Q_SETUSE:
+ case Q_SETQLIM:
+ if (copy_from_user (&d, (struct dqblk32 *)addr,
+ sizeof (struct dqblk32)))
+ return -EFAULT;
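+ /* The 32-bit image now occupies the start of the 64-bit
+ * struct dqblk; pull the time stamps out of their 32-bit
+ * slots and widen them in place.  itime is widened first,
+ * presumably because the 64-bit btime field can overlap
+ * the 32-bit itime slot. */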
+ d.dqb_itime = ((struct dqblk32 *)&d)->dqb_itime;
+ d.dqb_btime = ((struct dqblk32 *)&d)->dqb_btime;
+ break;
+ default:
+ return sys_quotactl(cmd, special,
+ id, (caddr_t)addr);
+ }
+ spec = getname32 (special);
+ err = PTR_ERR(spec);
+ if (IS_ERR(spec)) return err;
+ old_fs = get_fs ();
+ set_fs (KERNEL_DS);
+ err = sys_quotactl(cmd, (const char *)spec, id, (caddr_t)&d);
+ set_fs (old_fs);
+ putname (spec);
+ if (cmds == Q_GETQUOTA) {
+ __kernel_time_t b = d.dqb_btime, i = d.dqb_itime;
+ ((struct dqblk32 *)&d)->dqb_itime = i;
+ ((struct dqblk32 *)&d)->dqb_btime = b;
+ if (copy_to_user ((struct dqblk32 *)addr, &d,
+ sizeof (struct dqblk32)))
+ return -EFAULT;
+ }
+ return err;
+}
+
+extern asmlinkage int sys_utime(char * filename, struct utimbuf * times);
+
+struct utimbuf32 {
+ __kernel_time_t32 actime, modtime;
+};
+
+asmlinkage int
+sys32_utime(char * filename, struct utimbuf32 *times)
+{
+ struct utimbuf t;
+ mm_segment_t old_fs;
+ int ret;
+ char *filenam;
+
+ if (!times)
+ return sys_utime(filename, NULL);
+ if (get_user (t.actime, &times->actime) ||
+ __get_user (t.modtime, &times->modtime))
+ return -EFAULT;
+ filenam = getname32 (filename);
+ ret = PTR_ERR(filenam);
+ if (!IS_ERR(filenam)) {
+ old_fs = get_fs();
+ set_fs (KERNEL_DS);
+ ret = sys_utime(filenam, &t);
+ set_fs (old_fs);
+ putname (filenam);
+ }
+ return ret;
+}
+
+/*
+ * Ooo, nasty. We need here to frob 32-bit unsigned longs to
+ * 64-bit unsigned longs.
+ */
+
+static inline int
+get_fd_set32(unsigned long n, unsigned long *fdset, u32 *ufdset)
+{
+ if (ufdset) {
+ unsigned long odd;
+
+ if (verify_area(VERIFY_WRITE, ufdset, n*sizeof(u32)))
+ return -EFAULT;
+
+ odd = n & 1UL;
+ n &= ~1UL;
+ while (n) {
+ unsigned long h, l;
+ __get_user(l, ufdset);
+ __get_user(h, ufdset+1);
+ ufdset += 2;
+ *fdset++ = h << 32 | l;
+ n -= 2;
+ }
+ if (odd)
+ __get_user(*fdset, ufdset);
+ } else {
+ /* Tricky: the full final unsigned long in the kernel
+ * fdset must be cleared; rounding n up to an even
+ * number of u32 words makes sure that happens.
+ */
+ memset(fdset, 0, ((n + 1) & ~1)*sizeof(u32));
+ }
+ return 0;
+}
+
+static inline void
+set_fd_set32(unsigned long n, u32 *ufdset, unsigned long *fdset)
+{
+ unsigned long odd;
+
+ if (!ufdset)
+ return;
+
+ odd = n & 1UL;
+ n &= ~1UL;
+ while (n) {
+ unsigned long h, l;
+ l = *fdset++;
+ h = l >> 32;
+ __put_user(l, ufdset);
+ __put_user(h, ufdset+1);
+ ufdset += 2;
+ n -= 2;
+ }
+ if (odd)
+ __put_user(*fdset, ufdset);
+}
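+
+/*
+ * These two helpers repack fd_set words between the 32-bit and 64-bit
+ * layouts: two u32 words merge into one unsigned long as (h << 32 | l)
+ * and split back the same way, with an odd trailing word moved on its
+ * own.
+ */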
+
+extern asmlinkage int sys_sysfs(int option, unsigned long arg1,
+ unsigned long arg2);
+
+asmlinkage int
+sys32_sysfs(int option, u32 arg1, u32 arg2)
+{
+ return sys_sysfs(option, arg1, arg2);
+}
+
+struct ncp_mount_data32 {
+ int version;
+ unsigned int ncp_fd;
+ __kernel_uid_t32 mounted_uid;
+ __kernel_pid_t32 wdog_pid;
+ unsigned char mounted_vol[NCP_VOLNAME_LEN + 1];
+ unsigned int time_out;
+ unsigned int retry_count;
+ unsigned int flags;
+ __kernel_uid_t32 uid;
+ __kernel_gid_t32 gid;
+ __kernel_mode_t32 file_mode;
+ __kernel_mode_t32 dir_mode;
+};
+
+static void *
+do_ncp_super_data_conv(void *raw_data)
+{
+ struct ncp_mount_data *n = (struct ncp_mount_data *)raw_data;
+ struct ncp_mount_data32 *n32 = (struct ncp_mount_data32 *)raw_data;
+
+ n->dir_mode = n32->dir_mode;
+ n->file_mode = n32->file_mode;
+ n->gid = n32->gid;
+ n->uid = n32->uid;
+ memmove (n->mounted_vol, n32->mounted_vol,
+ (sizeof (n32->mounted_vol) + 3 * sizeof (unsigned int)));
+ n->wdog_pid = n32->wdog_pid;
+ n->mounted_uid = n32->mounted_uid;
+ return raw_data;
+}
+
+struct smb_mount_data32 {
+ int version;
+ __kernel_uid_t32 mounted_uid;
+ __kernel_uid_t32 uid;
+ __kernel_gid_t32 gid;
+ __kernel_mode_t32 file_mode;
+ __kernel_mode_t32 dir_mode;
+};
+
+static void *
+do_smb_super_data_conv(void *raw_data)
+{
+ struct smb_mount_data *s = (struct smb_mount_data *)raw_data;
+ struct smb_mount_data32 *s32 = (struct smb_mount_data32 *)raw_data;
+
+ s->version = s32->version;
+ s->mounted_uid = s32->mounted_uid;
+ s->uid = s32->uid;
+ s->gid = s32->gid;
+ s->file_mode = s32->file_mode;
+ s->dir_mode = s32->dir_mode;
+ return raw_data;
+}
+
+static int
+copy_mount_stuff_to_kernel(const void *user, unsigned long *kernel)
+{
+ int i;
+ unsigned long page;
+ struct vm_area_struct *vma;
+
+ *kernel = 0;
+ if(!user)
+ return 0;
+ vma = find_vma(current->mm, (unsigned long)user);
+ if(!vma || (unsigned long)user < vma->vm_start)
+ return -EFAULT;
+ if(!(vma->vm_flags & VM_READ))
+ return -EFAULT;
+ i = vma->vm_end - (unsigned long) user;
+ if(PAGE_SIZE <= (unsigned long) i)
+ i = PAGE_SIZE - 1;
+ if(!(page = __get_free_page(GFP_KERNEL)))
+ return -ENOMEM;
+ if(copy_from_user((void *) page, user, i)) {
+ free_page(page);
+ return -EFAULT;
+ }
+ *kernel = page;
+ return 0;
+}
+
+extern asmlinkage int sys_mount(char * dev_name, char * dir_name, char * type,
+ unsigned long new_flags, void *data);
+
+#define SMBFS_NAME "smbfs"
+#define NCPFS_NAME "ncpfs"
+
+asmlinkage int
+sys32_mount(char *dev_name, char *dir_name, char *type,
+ unsigned long new_flags, u32 data)
+{
+ unsigned long type_page;
+ int err, is_smb, is_ncp;
+
+ if(!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+ is_smb = is_ncp = 0;
+ err = copy_mount_stuff_to_kernel((const void *)type, &type_page);
+ if(err)
+ return err;
+ if(type_page) {
+ is_smb = !strcmp((char *)type_page, SMBFS_NAME);
+ is_ncp = !strcmp((char *)type_page, NCPFS_NAME);
+ }
+ if(!is_smb && !is_ncp) {
+ if(type_page)
+ free_page(type_page);
+ return sys_mount(dev_name, dir_name, type, new_flags,
+ (void *)AA(data));
+ } else {
+ unsigned long dev_page, dir_page, data_page;
+ mm_segment_t old_fs;
+
+ err = copy_mount_stuff_to_kernel((const void *)dev_name,
+ &dev_page);
+ if(err)
+ goto out;
+ err = copy_mount_stuff_to_kernel((const void *)dir_name,
+ &dir_page);
+ if(err)
+ goto dev_out;
+ err = copy_mount_stuff_to_kernel((const void *)AA(data),
+ &data_page);
+ if(err)
+ goto dir_out;
+ if(is_ncp)
+ do_ncp_super_data_conv((void *)data_page);
+ else if(is_smb)
+ do_smb_super_data_conv((void *)data_page);
+ else
+ panic("The problem is here...");
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_mount((char *)dev_page, (char *)dir_page,
+ (char *)type_page, new_flags,
+ (void *)data_page);
+ set_fs(old_fs);
+
+ if(data_page)
+ free_page(data_page);
+ dir_out:
+ if(dir_page)
+ free_page(dir_page);
+ dev_out:
+ if(dev_page)
+ free_page(dev_page);
+ out:
+ if(type_page)
+ free_page(type_page);
+ return err;
+ }
+}
+
+struct sysinfo32 {
+ s32 uptime;
+ u32 loads[3];
+ u32 totalram;
+ u32 freeram;
+ u32 sharedram;
+ u32 bufferram;
+ u32 totalswap;
+ u32 freeswap;
+ unsigned short procs;
+ char _f[22];
+};
+
+extern asmlinkage int sys_sysinfo(struct sysinfo *info);
+
+asmlinkage int
+sys32_sysinfo(struct sysinfo32 *info)
+{
+ struct sysinfo s;
+ int ret, err;
+ mm_segment_t old_fs = get_fs ();
+
+ set_fs (KERNEL_DS);
+ ret = sys_sysinfo(&s);
+ set_fs (old_fs);
+ err = put_user (s.uptime, &info->uptime);
+ err |= __put_user (s.loads[0], &info->loads[0]);
+ err |= __put_user (s.loads[1], &info->loads[1]);
+ err |= __put_user (s.loads[2], &info->loads[2]);
+ err |= __put_user (s.totalram, &info->totalram);
+ err |= __put_user (s.freeram, &info->freeram);
+ err |= __put_user (s.sharedram, &info->sharedram);
+ err |= __put_user (s.bufferram, &info->bufferram);
+ err |= __put_user (s.totalswap, &info->totalswap);
+ err |= __put_user (s.freeswap, &info->freeswap);
+ err |= __put_user (s.procs, &info->procs);
+ if (err)
+ return -EFAULT;
+ return ret;
+}
+
+extern asmlinkage int sys_sched_rr_get_interval(pid_t pid,
+ struct timespec *interval);
+
+asmlinkage int
+sys32_sched_rr_get_interval(__kernel_pid_t32 pid, struct timespec32 *interval)
+{
+ struct timespec t;
+ int ret;
+ mm_segment_t old_fs = get_fs ();
+
+ set_fs (KERNEL_DS);
+ ret = sys_sched_rr_get_interval(pid, &t);
+ set_fs (old_fs);
+ if (put_user (t.tv_sec, &interval->tv_sec) ||
+ __put_user (t.tv_nsec, &interval->tv_nsec))
+ return -EFAULT;
+ return ret;
+}
+
+extern asmlinkage int sys_sigprocmask(int how, old_sigset_t *set,
+ old_sigset_t *oset);
+
+asmlinkage int
+sys32_sigprocmask(int how, old_sigset_t32 *set, old_sigset_t32 *oset)
+{
+ old_sigset_t s;
+ int ret;
+ mm_segment_t old_fs = get_fs();
+
+ if (set && get_user (s, set)) return -EFAULT;
+ set_fs (KERNEL_DS);
+ ret = sys_sigprocmask(how, set ? &s : NULL, oset ? &s : NULL);
+ set_fs (old_fs);
+ if (ret) return ret;
+ if (oset && put_user (s, oset)) return -EFAULT;
+ return 0;
+}
+
+extern asmlinkage int sys_sigpending(old_sigset_t *set);
+
+asmlinkage int
+sys32_sigpending(old_sigset_t32 *set)
+{
+ old_sigset_t s;
+ int ret;
+ mm_segment_t old_fs = get_fs();
+
+ set_fs (KERNEL_DS);
+ ret = sys_sigpending(&s);
+ set_fs (old_fs);
+ if (put_user (s, set)) return -EFAULT;
+ return ret;
+}
+
+extern asmlinkage int sys_rt_sigpending(sigset_t *set, size_t sigsetsize);
+
+asmlinkage int
+sys32_rt_sigpending(sigset_t32 *set, __kernel_size_t32 sigsetsize)
+{
+ sigset_t s;
+ sigset_t32 s32;
+ int ret;
+ mm_segment_t old_fs = get_fs();
+
+ set_fs (KERNEL_DS);
+ ret = sys_rt_sigpending(&s, sigsetsize);
+ set_fs (old_fs);
+ if (!ret) {
+ switch (_NSIG_WORDS) {
+ case 4: s32.sig[7] = (s.sig[3] >> 32); s32.sig[6] = s.sig[3];
+ case 3: s32.sig[5] = (s.sig[2] >> 32); s32.sig[4] = s.sig[2];
+ case 2: s32.sig[3] = (s.sig[1] >> 32); s32.sig[2] = s.sig[1];
+ case 1: s32.sig[1] = (s.sig[0] >> 32); s32.sig[0] = s.sig[0];
+ }
+ if (copy_to_user (set, &s32, sizeof(sigset_t32)))
+ return -EFAULT;
+ }
+ return ret;
+}
+
+siginfo_t32 *
+siginfo64to32(siginfo_t32 *d, siginfo_t *s)
+{
+ memset (d, 0, sizeof(siginfo_t32));
+ d->si_signo = s->si_signo;
+ d->si_errno = s->si_errno;
+ d->si_code = s->si_code;
+ if (s->si_signo >= SIGRTMIN) {
+ d->si_pid = s->si_pid;
+ d->si_uid = s->si_uid;
+ /* XXX: Ouch, how to find this out??? */
+ d->si_int = s->si_int;
+ } else switch (s->si_signo) {
+ /* XXX: What about POSIX1.b timers */
+ case SIGCHLD:
+ d->si_pid = s->si_pid;
+ d->si_status = s->si_status;
+ d->si_utime = s->si_utime;
+ d->si_stime = s->si_stime;
+ break;
+ case SIGSEGV:
+ case SIGBUS:
+ case SIGFPE:
+ case SIGILL:
+ d->si_addr = (long)(s->si_addr);
+ /* XXX: Do we need to translate this from ia64 to ia32 traps? */
+ d->si_trapno = s->si_trapno;
+ break;
+ case SIGPOLL:
+ d->si_band = s->si_band;
+ d->si_fd = s->si_fd;
+ break;
+ default:
+ d->si_pid = s->si_pid;
+ d->si_uid = s->si_uid;
+ break;
+ }
+ return d;
+}
+
+siginfo_t *
+siginfo32to64(siginfo_t *d, siginfo_t32 *s)
+{
+ d->si_signo = s->si_signo;
+ d->si_errno = s->si_errno;
+ d->si_code = s->si_code;
+ if (s->si_signo >= SIGRTMIN) {
+ d->si_pid = s->si_pid;
+ d->si_uid = s->si_uid;
+ /* XXX: Ouch, how to find this out??? */
+ d->si_int = s->si_int;
+ } else switch (s->si_signo) {
+ /* XXX: What about POSIX1.b timers */
+ case SIGCHLD:
+ d->si_pid = s->si_pid;
+ d->si_status = s->si_status;
+ d->si_utime = s->si_utime;
+ d->si_stime = s->si_stime;
+ break;
+ case SIGSEGV:
+ case SIGBUS:
+ case SIGFPE:
+ case SIGILL:
+ d->si_addr = (void *)A(s->si_addr);
+ /* XXX: Do we need to translate this from ia32 to ia64 traps? */
+ d->si_trapno = s->si_trapno;
+ break;
+ case SIGPOLL:
+ d->si_band = s->si_band;
+ d->si_fd = s->si_fd;
+ break;
+ default:
+ d->si_pid = s->si_pid;
+ d->si_uid = s->si_uid;
+ break;
+ }
+ return d;
+}
+
+extern asmlinkage int
+sys_rt_sigtimedwait(const sigset_t *uthese, siginfo_t *uinfo,
+ const struct timespec *uts, size_t sigsetsize);
+
+asmlinkage int
+sys32_rt_sigtimedwait(sigset_t32 *uthese, siginfo_t32 *uinfo,
+ struct timespec32 *uts, __kernel_size_t32 sigsetsize)
+{
+ sigset_t s;
+ sigset_t32 s32;
+ struct timespec t;
+ int ret;
+ mm_segment_t old_fs = get_fs();
+ siginfo_t info;
+ siginfo_t32 info32;
+
+ if (copy_from_user (&s32, uthese, sizeof(sigset_t32)))
+ return -EFAULT;
+ switch (_NSIG_WORDS) {
+ case 4: s.sig[3] = s32.sig[6] | (((long)s32.sig[7]) << 32);
+ case 3: s.sig[2] = s32.sig[4] | (((long)s32.sig[5]) << 32);
+ case 2: s.sig[1] = s32.sig[2] | (((long)s32.sig[3]) << 32);
+ case 1: s.sig[0] = s32.sig[0] | (((long)s32.sig[1]) << 32);
+ }
+ if (uts) {
+ ret = get_user (t.tv_sec, &uts->tv_sec);
+ ret |= __get_user (t.tv_nsec, &uts->tv_nsec);
+ if (ret)
+ return -EFAULT;
+ }
+ set_fs (KERNEL_DS);
+ ret = sys_rt_sigtimedwait(&s, &info, &t, sigsetsize);
+ set_fs (old_fs);
+ if (ret >= 0 && uinfo) {
+ if (copy_to_user (uinfo, siginfo64to32(&info32, &info),
+ sizeof(siginfo_t32)))
+ return -EFAULT;
+ }
+ return ret;
+}
+
+extern asmlinkage int
+sys_rt_sigqueueinfo(int pid, int sig, siginfo_t *uinfo);
+
+asmlinkage int
+sys32_rt_sigqueueinfo(int pid, int sig, siginfo_t32 *uinfo)
+{
+ siginfo_t info;
+ siginfo_t32 info32;
+ int ret;
+ mm_segment_t old_fs = get_fs();
+
+ if (copy_from_user (&info32, uinfo, sizeof(siginfo_t32)))
+ return -EFAULT;
+ /* XXX: Is this correct? */
+ siginfo32to64(&info, &info32);
+ set_fs (KERNEL_DS);
+ ret = sys_rt_sigqueueinfo(pid, sig, &info);
+ set_fs (old_fs);
+ return ret;
+}
+
+extern asmlinkage int sys_setreuid(uid_t ruid, uid_t euid);
+
+asmlinkage int sys32_setreuid(__kernel_uid_t32 ruid, __kernel_uid_t32 euid)
+{
+ uid_t sruid, seuid;
+
+ sruid = (ruid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)ruid);
+ seuid = (euid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)euid);
+ return sys_setreuid(sruid, seuid);
+}
+
+extern asmlinkage int sys_setresuid(uid_t ruid, uid_t euid, uid_t suid);
+
+asmlinkage int
+sys32_setresuid(__kernel_uid_t32 ruid, __kernel_uid_t32 euid,
+ __kernel_uid_t32 suid)
+{
+ uid_t sruid, seuid, ssuid;
+
+ sruid = (ruid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)ruid);
+ seuid = (euid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)euid);
+ ssuid = (suid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)suid);
+ return sys_setresuid(sruid, seuid, ssuid);
+}
+
+extern asmlinkage int sys_getresuid(uid_t *ruid, uid_t *euid, uid_t *suid);
+
+asmlinkage int
+sys32_getresuid(__kernel_uid_t32 *ruid, __kernel_uid_t32 *euid,
+ __kernel_uid_t32 *suid)
+{
+ uid_t a, b, c;
+ int ret;
+ mm_segment_t old_fs = get_fs();
+
+ set_fs (KERNEL_DS);
+ ret = sys_getresuid(&a, &b, &c);
+ set_fs (old_fs);
+ if (put_user (a, ruid) || put_user (b, euid) || put_user (c, suid))
+ return -EFAULT;
+ return ret;
+}
+
+extern asmlinkage int sys_setregid(gid_t rgid, gid_t egid);
+
+asmlinkage int
+sys32_setregid(__kernel_gid_t32 rgid, __kernel_gid_t32 egid)
+{
+ gid_t srgid, segid;
+
+ srgid = (rgid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)rgid);
+ segid = (egid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)egid);
+ return sys_setregid(srgid, segid);
+}
+
+extern asmlinkage int sys_setresgid(gid_t rgid, gid_t egid, gid_t sgid);
+
+asmlinkage int
+sys32_setresgid(__kernel_gid_t32 rgid, __kernel_gid_t32 egid,
+ __kernel_gid_t32 sgid)
+{
+ gid_t srgid, segid, ssgid;
+
+ srgid = (rgid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)rgid);
+ segid = (egid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)egid);
+ ssgid = (sgid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)sgid);
+ return sys_setresgid(srgid, segid, ssgid);
+}
+
+extern asmlinkage int sys_getresgid(gid_t *rgid, gid_t *egid, gid_t *sgid);
+
+asmlinkage int
+sys32_getresgid(__kernel_gid_t32 *rgid, __kernel_gid_t32 *egid,
+ __kernel_gid_t32 *sgid)
+{
+ gid_t a, b, c;
+ int ret;
+ mm_segment_t old_fs = get_fs();
+
+ set_fs (KERNEL_DS);
+ ret = sys_getresgid(&a, &b, &c);
+ set_fs (old_fs);
+ if (!ret) {
+ ret = put_user (a, rgid);
+ ret |= put_user (b, egid);
+ ret |= put_user (c, sgid);
+ }
+ return ret;
+}
+
+struct tms32 {
+ __kernel_clock_t32 tms_utime;
+ __kernel_clock_t32 tms_stime;
+ __kernel_clock_t32 tms_cutime;
+ __kernel_clock_t32 tms_cstime;
+};
+
+extern asmlinkage long sys_times(struct tms * tbuf);
+
+asmlinkage long
+sys32_times(struct tms32 *tbuf)
+{
+ struct tms t;
+ long ret;
+ mm_segment_t old_fs = get_fs ();
+ int err;
+
+ set_fs (KERNEL_DS);
+ ret = sys_times(tbuf ? &t : NULL);
+ set_fs (old_fs);
+ if (tbuf) {
+ err = put_user (t.tms_utime, &tbuf->tms_utime);
+ err |= __put_user (t.tms_stime, &tbuf->tms_stime);
+ err |= __put_user (t.tms_cutime, &tbuf->tms_cutime);
+ err |= __put_user (t.tms_cstime, &tbuf->tms_cstime);
+ if (err)
+ ret = -EFAULT;
+ }
+ return ret;
+}
+
+extern asmlinkage int sys_getgroups(int gidsetsize, gid_t *grouplist);
+
+asmlinkage int
+sys32_getgroups(int gidsetsize, __kernel_gid_t32 *grouplist)
+{
+ gid_t gl[NGROUPS];
+ int ret, i;
+ mm_segment_t old_fs = get_fs ();
+
+ set_fs (KERNEL_DS);
+ ret = sys_getgroups(gidsetsize, gl);
+ set_fs (old_fs);
+ if (gidsetsize && ret > 0 && ret <= NGROUPS)
+ for (i = 0; i < ret; i++, grouplist++)
+ if (__put_user (gl[i], grouplist))
+ return -EFAULT;
+ return ret;
+}
+
+extern asmlinkage int sys_setgroups(int gidsetsize, gid_t *grouplist);
+
+asmlinkage int
+sys32_setgroups(int gidsetsize, __kernel_gid_t32 *grouplist)
+{
+ gid_t gl[NGROUPS];
+ int ret, i;
+ mm_segment_t old_fs = get_fs ();
+
+ if ((unsigned) gidsetsize > NGROUPS)
+ return -EINVAL;
+ for (i = 0; i < gidsetsize; i++, grouplist++)
+ if (__get_user (gl[i], grouplist))
+ return -EFAULT;
+ set_fs (KERNEL_DS);
+ ret = sys_setgroups(gidsetsize, gl);
+ set_fs (old_fs);
+ return ret;
+}
+
+extern asmlinkage int
+sys_getrusage(int who, struct rusage *ru);
+
+asmlinkage int
+sys32_getrusage(int who, struct rusage32 *ru)
+{
+ struct rusage r;
+ int ret;
+ mm_segment_t old_fs = get_fs();
+
+ set_fs (KERNEL_DS);
+ ret = sys_getrusage(who, &r);
+ set_fs (old_fs);
+ if (put_rusage (ru, &r)) return -EFAULT;
+ return ret;
+}
+
+
+/* XXX These as well... */
+extern __inline__ struct socket *
+socki_lookup(struct inode *inode)
+{
+ return &inode->u.socket_i;
+}
+
+extern __inline__ struct socket *
+sockfd_lookup(int fd, int *err)
+{
+ struct file *file;
+ struct inode *inode;
+
+ if (!(file = fget(fd)))
+ {
+ *err = -EBADF;
+ return NULL;
+ }
+
+ inode = file->f_dentry->d_inode;
+ if (!inode || !inode->i_sock || !socki_lookup(inode))
+ {
+ *err = -ENOTSOCK;
+ fput(file);
+ return NULL;
+ }
+
+ return socki_lookup(inode);
+}
+
+struct msghdr32 {
+ u32 msg_name;
+ int msg_namelen;
+ u32 msg_iov;
+ __kernel_size_t32 msg_iovlen;
+ u32 msg_control;
+ __kernel_size_t32 msg_controllen;
+ unsigned msg_flags;
+};
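+/* The u32 members above hold 32-bit user-space pointers; they are widened
+ * to native pointers with the A() macro before the kernel dereferences
+ * them (see msghdr_from_user32_to_kern() below).
+ */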
+
+struct cmsghdr32 {
+ __kernel_size_t32 cmsg_len;
+ int cmsg_level;
+ int cmsg_type;
+};
+
+/* Bleech... */
+#define __CMSG32_NXTHDR(ctl, len, cmsg, cmsglen) \
+ __cmsg32_nxthdr((ctl),(len),(cmsg),(cmsglen))
+#define CMSG32_NXTHDR(mhdr, cmsg, cmsglen) \
+ cmsg32_nxthdr((mhdr), (cmsg), (cmsglen))
+
+#define CMSG32_ALIGN(len) ( ((len)+sizeof(int)-1) & ~(sizeof(int)-1) )
+
+#define CMSG32_DATA(cmsg) \
+ ((void *)((char *)(cmsg) + CMSG32_ALIGN(sizeof(struct cmsghdr32))))
+#define CMSG32_SPACE(len) \
+ (CMSG32_ALIGN(sizeof(struct cmsghdr32)) + CMSG32_ALIGN(len))
+#define CMSG32_LEN(len) (CMSG32_ALIGN(sizeof(struct cmsghdr32)) + (len))
+
+#define __CMSG32_FIRSTHDR(ctl,len) ((len) >= sizeof(struct cmsghdr32) ? \
+ (struct cmsghdr32 *)(ctl) : \
+ (struct cmsghdr32 *)NULL)
+#define CMSG32_FIRSTHDR(msg) \
+ __CMSG32_FIRSTHDR((msg)->msg_control, (msg)->msg_controllen)
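+
+/* A worked example of the 32-bit arithmetic, assuming the usual ILP32
+ * layout (sizeof(struct cmsghdr32) == 12, alignment unit == 4):
+ *
+ *	CMSG32_LEN(4)   == 12 + 4 == 16
+ *	CMSG32_SPACE(5) == 12 + CMSG32_ALIGN(5) == 12 + 8 == 20
+ *
+ * The native macros use a 16-byte header aligned to 8 bytes, so the same
+ * payloads give CMSG_LEN(4) == 20 and CMSG_SPACE(5) == 24; this is why each
+ * entry must be re-packed rather than block-copied.
+ */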
+
+__inline__ struct cmsghdr32 *
+__cmsg32_nxthdr(void *__ctl, __kernel_size_t __size,
+ struct cmsghdr32 *__cmsg, int __cmsg_len)
+{
+ struct cmsghdr32 * __ptr;
+
+ __ptr = (struct cmsghdr32 *)(((unsigned char *) __cmsg) +
+ CMSG32_ALIGN(__cmsg_len));
+ if ((unsigned long)((char*)(__ptr+1) - (char *) __ctl) > __size)
+ return NULL;
+
+ return __ptr;
+}
+
+__inline__ struct cmsghdr32 *
+cmsg32_nxthdr (struct msghdr *__msg, struct cmsghdr32 *__cmsg, int __cmsg_len)
+{
+ return __cmsg32_nxthdr(__msg->msg_control, __msg->msg_controllen,
+ __cmsg, __cmsg_len);
+}
+
+static inline int
+iov_from_user32_to_kern(struct iovec *kiov, struct iovec32 *uiov32, int niov)
+{
+ int tot_len = 0;
+
+ while(niov > 0) {
+ u32 len, buf;
+
+ if(get_user(len, &uiov32->iov_len) ||
+ get_user(buf, &uiov32->iov_base)) {
+ tot_len = -EFAULT;
+ break;
+ }
+ tot_len += len;
+ kiov->iov_base = (void *)A(buf);
+ kiov->iov_len = (__kernel_size_t) len;
+ uiov32++;
+ kiov++;
+ niov--;
+ }
+ return tot_len;
+}
+
+static inline int
+msghdr_from_user32_to_kern(struct msghdr *kmsg, struct msghdr32 *umsg)
+{
+ u32 tmp1, tmp2, tmp3;
+ int err;
+
+ err = get_user(tmp1, &umsg->msg_name);
+ err |= __get_user(tmp2, &umsg->msg_iov);
+ err |= __get_user(tmp3, &umsg->msg_control);
+ if (err)
+ return -EFAULT;
+
+ kmsg->msg_name = (void *)A(tmp1);
+ kmsg->msg_iov = (struct iovec *)A(tmp2);
+ kmsg->msg_control = (void *)A(tmp3);
+
+ err = get_user(kmsg->msg_namelen, &umsg->msg_namelen);
+ err |= get_user(kmsg->msg_iovlen, &umsg->msg_iovlen);
+ err |= get_user(kmsg->msg_controllen, &umsg->msg_controllen);
+ err |= get_user(kmsg->msg_flags, &umsg->msg_flags);
+
+ return err;
+}
+
+/* I've named the args so it is easy to tell whose space the pointers are in. */
+static int
+verify_iovec32(struct msghdr *kern_msg, struct iovec *kern_iov,
+ char *kern_address, int mode)
+{
+ int tot_len;
+
+ if(kern_msg->msg_namelen) {
+ if(mode==VERIFY_READ) {
+ int err = move_addr_to_kernel(kern_msg->msg_name,
+ kern_msg->msg_namelen,
+ kern_address);
+ if(err < 0)
+ return err;
+ }
+ kern_msg->msg_name = kern_address;
+ } else
+ kern_msg->msg_name = NULL;
+
+ if(kern_msg->msg_iovlen > UIO_FASTIOV) {
+ kern_iov = kmalloc(kern_msg->msg_iovlen * sizeof(struct iovec),
+ GFP_KERNEL);
+ if(!kern_iov)
+ return -ENOMEM;
+ }
+
+ tot_len = iov_from_user32_to_kern(kern_iov,
+ (struct iovec32 *)kern_msg->msg_iov,
+ kern_msg->msg_iovlen);
+ if(tot_len >= 0)
+ kern_msg->msg_iov = kern_iov;
+ else if(kern_msg->msg_iovlen > UIO_FASTIOV)
+ kfree(kern_iov);
+
+ return tot_len;
+}
+
+/* There is a lot of hair here because the alignment rules (and
+ * thus placement) of cmsg headers and length are different for
+ * 32-bit apps. -DaveM
+ */
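+/* Sketch of the conversion, under the sizes assumed in the example above:
+ * a 32-bit entry of length ucmlen carries ucmlen - 12 payload bytes, so it
+ * re-packs into tmp = (ucmlen - 12) + 16 bytes behind a native header;
+ * that is the `tmp' computed in both passes below.
+ */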
+static int
+cmsghdr_from_user32_to_kern(struct msghdr *kmsg, unsigned char *stackbuf,
+ int stackbuf_size)
+{
+ struct cmsghdr32 *ucmsg;
+ struct cmsghdr *kcmsg, *kcmsg_base;
+ __kernel_size_t32 ucmlen;
+ __kernel_size_t kcmlen, tmp;
+
+ kcmlen = 0;
+ kcmsg_base = kcmsg = (struct cmsghdr *)stackbuf;
+ ucmsg = CMSG32_FIRSTHDR(kmsg);
+ while(ucmsg != NULL) {
+ if(get_user(ucmlen, &ucmsg->cmsg_len))
+ return -EFAULT;
+
+ /* Catch bogons. */
+ if(CMSG32_ALIGN(ucmlen) <
+ CMSG32_ALIGN(sizeof(struct cmsghdr32)))
+ return -EINVAL;
+ if((unsigned long)(((char *)ucmsg - (char *)kmsg->msg_control)
+ + ucmlen) > kmsg->msg_controllen)
+ return -EINVAL;
+
+ tmp = ((ucmlen - CMSG32_ALIGN(sizeof(*ucmsg))) +
+ CMSG_ALIGN(sizeof(struct cmsghdr)));
+ kcmlen += tmp;
+ ucmsg = CMSG32_NXTHDR(kmsg, ucmsg, ucmlen);
+ }
+ if(kcmlen == 0)
+ return -EINVAL;
+
+ /* The kcmlen holds the 64-bit version of the control length.
+ * It may not be modified as we do not stick it into the kmsg
+ * until we have successfully copied over all of the data
+ * from the user.
+ */
+ if(kcmlen > stackbuf_size)
+ kcmsg_base = kcmsg = kmalloc(kcmlen, GFP_KERNEL);
+ if(kcmsg == NULL)
+ return -ENOBUFS;
+
+ /* Now copy them over neatly. */
+ memset(kcmsg, 0, kcmlen);
+ ucmsg = CMSG32_FIRSTHDR(kmsg);
+ while(ucmsg != NULL) {
+ __get_user(ucmlen, &ucmsg->cmsg_len);
+ tmp = ((ucmlen - CMSG32_ALIGN(sizeof(*ucmsg))) +
+ CMSG_ALIGN(sizeof(struct cmsghdr)));
+ kcmsg->cmsg_len = tmp;
+ __get_user(kcmsg->cmsg_level, &ucmsg->cmsg_level);
+ __get_user(kcmsg->cmsg_type, &ucmsg->cmsg_type);
+
+ /* Copy over the data. */
+ if(copy_from_user(CMSG_DATA(kcmsg),
+ CMSG32_DATA(ucmsg),
+ (ucmlen - CMSG32_ALIGN(sizeof(*ucmsg)))))
+ goto out_free_efault;
+
+ /* Advance. */
+ kcmsg = (struct cmsghdr *)((char *)kcmsg + CMSG_ALIGN(tmp));
+ ucmsg = CMSG32_NXTHDR(kmsg, ucmsg, ucmlen);
+ }
+
+ /* Ok, looks like we made it. Hook it up and return success. */
+ kmsg->msg_control = kcmsg_base;
+ kmsg->msg_controllen = kcmlen;
+ return 0;
+
+out_free_efault:
+ if(kcmsg_base != (struct cmsghdr *)stackbuf)
+ kfree(kcmsg_base);
+ return -EFAULT;
+}
+
+static void
+put_cmsg32(struct msghdr *kmsg, int level, int type, int len, void *data)
+{
+ struct cmsghdr32 *cm = (struct cmsghdr32 *) kmsg->msg_control;
+ struct cmsghdr32 cmhdr;
+ int cmlen = CMSG32_LEN(len);
+
+ if(cm == NULL || kmsg->msg_controllen < sizeof(*cm)) {
+ kmsg->msg_flags |= MSG_CTRUNC;
+ return;
+ }
+
+ if(kmsg->msg_controllen < cmlen) {
+ kmsg->msg_flags |= MSG_CTRUNC;
+ cmlen = kmsg->msg_controllen;
+ }
+ cmhdr.cmsg_level = level;
+ cmhdr.cmsg_type = type;
+ cmhdr.cmsg_len = cmlen;
+
+ if(copy_to_user(cm, &cmhdr, sizeof cmhdr))
+ return;
+ if(copy_to_user(CMSG32_DATA(cm), data,
+ cmlen - sizeof(struct cmsghdr32)))
+ return;
+ cmlen = CMSG32_SPACE(len);
+ kmsg->msg_control += cmlen;
+ kmsg->msg_controllen -= cmlen;
+}
+
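+/* scm_detach_fds32 is the 32-bit twin of scm_detach_fds(): it installs the
+ * received SCM_RIGHTS file descriptors into the process and rewrites the
+ * cmsg header in user space using the 32-bit layout macros above.
+ */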
+static void scm_detach_fds32(struct msghdr *kmsg, struct scm_cookie *scm)
+{
+ struct cmsghdr32 *cm = (struct cmsghdr32 *) kmsg->msg_control;
+ int fdmax = (kmsg->msg_controllen - sizeof(struct cmsghdr32))
+ / sizeof(int);
+ int fdnum = scm->fp->count;
+ struct file **fp = scm->fp->fp;
+ int *cmfptr;
+ int err = 0, i;
+
+ if (fdnum < fdmax)
+ fdmax = fdnum;
+
+ for (i = 0, cmfptr = (int *) CMSG32_DATA(cm);
+ i < fdmax;
+ i++, cmfptr++) {
+ int new_fd;
+ err = get_unused_fd();
+ if (err < 0)
+ break;
+ new_fd = err;
+ err = put_user(new_fd, cmfptr);
+ if (err) {
+ put_unused_fd(new_fd);
+ break;
+ }
+ /* Bump the usage count and install the file. */
+ fp[i]->f_count++;
+ current->files->fd[new_fd] = fp[i];
+ }
+
+ if (i > 0) {
+ int cmlen = CMSG32_LEN(i * sizeof(int));
+ if (!err)
+ err = put_user(SOL_SOCKET, &cm->cmsg_level);
+ if (!err)
+ err = put_user(SCM_RIGHTS, &cm->cmsg_type);
+ if (!err)
+ err = put_user(cmlen, &cm->cmsg_len);
+ if (!err) {
+ cmlen = CMSG32_SPACE(i * sizeof(int));
+ kmsg->msg_control += cmlen;
+ kmsg->msg_controllen -= cmlen;
+ }
+ }
+ if (i < fdnum)
+ kmsg->msg_flags |= MSG_CTRUNC;
+
+ /*
+ * All of the files that fit in the message have had their
+ * usage counts incremented, so we just free the list.
+ */
+ __scm_destroy(scm);
+}
+
+/* In these cases we (currently) can just copy the data over verbatim
+ * because all CMSGs created by the kernel have well defined types which
+ * have the same layout in both the 32-bit and 64-bit API. One must add
+ * some special cased conversions here if we start sending control messages
+ * with incompatible types.
+ *
+ * SCM_RIGHTS and SCM_CREDENTIALS are done by hand in recvmsg32 right after
+ * we do our work. The remaining cases are:
+ *
+ * SOL_IP IP_PKTINFO struct in_pktinfo 32-bit clean
+ * IP_TTL int 32-bit clean
+ * IP_TOS __u8 32-bit clean
+ * IP_RECVOPTS variable length 32-bit clean
+ * IP_RETOPTS variable length 32-bit clean
+ * (these last two are clean because the types are defined
+ * by the IPv4 protocol)
+ * IP_RECVERR struct sock_extended_err +
+ * struct sockaddr_in 32-bit clean
+ * SOL_IPV6 IPV6_RECVERR struct sock_extended_err +
+ * struct sockaddr_in6 32-bit clean
+ * IPV6_PKTINFO struct in6_pktinfo 32-bit clean
+ * IPV6_HOPLIMIT int 32-bit clean
+ * IPV6_FLOWINFO u32 32-bit clean
+ * IPV6_HOPOPTS ipv6 hop exthdr 32-bit clean
+ * IPV6_DSTOPTS ipv6 dst exthdr(s) 32-bit clean
+ * IPV6_RTHDR ipv6 routing exthdr 32-bit clean
+ * IPV6_AUTHHDR ipv6 auth exthdr 32-bit clean
+ */
+static void
+cmsg32_recvmsg_fixup(struct msghdr *kmsg, unsigned long orig_cmsg_uptr)
+{
+ unsigned char *workbuf, *wp;
+ unsigned long bufsz, space_avail;
+ struct cmsghdr *ucmsg;
+
+ bufsz = ((unsigned long)kmsg->msg_control) - orig_cmsg_uptr;
+ space_avail = kmsg->msg_controllen + bufsz;
+ wp = workbuf = kmalloc(bufsz, GFP_KERNEL);
+ if(workbuf == NULL)
+ goto fail;
+
+ /* To make this more sane we assume the kernel sends back properly
+ * formatted control messages. Because of how the kernel will truncate
+ * the cmsg_len for MSG_TRUNC cases, we need not check that case either.
+ */
+ ucmsg = (struct cmsghdr *) orig_cmsg_uptr;
+ while(((unsigned long)ucmsg) < ((unsigned long)kmsg->msg_control)) {
+ struct cmsghdr32 *kcmsg32 = (struct cmsghdr32 *) wp;
+ int clen64, clen32;
+
+ /* UCMSG is the 64-bit format CMSG entry in user-space.
+ * KCMSG32 is within the kernel space temporary buffer
+ * we use to convert into a 32-bit style CMSG.
+ */
+ __get_user(kcmsg32->cmsg_len, &ucmsg->cmsg_len);
+ __get_user(kcmsg32->cmsg_level, &ucmsg->cmsg_level);
+ __get_user(kcmsg32->cmsg_type, &ucmsg->cmsg_type);
+
+ clen64 = kcmsg32->cmsg_len;
+ copy_from_user(CMSG32_DATA(kcmsg32), CMSG_DATA(ucmsg),
+ clen64 - CMSG_ALIGN(sizeof(*ucmsg)));
+ clen32 = ((clen64 - CMSG_ALIGN(sizeof(*ucmsg))) +
+ CMSG32_ALIGN(sizeof(struct cmsghdr32)));
+ kcmsg32->cmsg_len = clen32;
+
+ ucmsg = (struct cmsghdr *) (((char *)ucmsg) +
+ CMSG_ALIGN(clen64));
+ wp = (((char *)kcmsg32) + CMSG32_ALIGN(clen32));
+ }
+
+ /* Copy back fixed up data, and adjust pointers. */
+ bufsz = (wp - workbuf);
+ copy_to_user((void *)orig_cmsg_uptr, workbuf, bufsz);
+
+ kmsg->msg_control = (struct cmsghdr *)
+ (((char *)orig_cmsg_uptr) + bufsz);
+ kmsg->msg_controllen = space_avail - bufsz;
+
+ kfree(workbuf);
+ return;
+
+fail:
+ /* If we leave the 64-bit format CMSG chunks in there,
+ * the application could get confused and crash. So to
+ * ensure greater recovery, we report no CMSGs.
+ */
+ kmsg->msg_controllen += bufsz;
+ kmsg->msg_control = (void *) orig_cmsg_uptr;
+}
+
+asmlinkage int
+sys32_sendmsg(int fd, struct msghdr32 *user_msg, unsigned user_flags)
+{
+ struct socket *sock;
+ char address[MAX_SOCK_ADDR];
+ struct iovec iov[UIO_FASTIOV];
+ unsigned char ctl[sizeof(struct cmsghdr) + 20];
+ unsigned char *ctl_buf = ctl;
+ struct msghdr kern_msg;
+ int err, total_len;
+
+ if(msghdr_from_user32_to_kern(&kern_msg, user_msg))
+ return -EFAULT;
+ if(kern_msg.msg_iovlen > UIO_MAXIOV)
+ return -EINVAL;
+ err = verify_iovec32(&kern_msg, iov, address, VERIFY_READ);
+ if (err < 0)
+ goto out;
+ total_len = err;
+
+ if(kern_msg.msg_controllen) {
+ err = cmsghdr_from_user32_to_kern(&kern_msg, ctl, sizeof(ctl));
+ if(err)
+ goto out_freeiov;
+ ctl_buf = kern_msg.msg_control;
+ }
+ kern_msg.msg_flags = user_flags;
+
+ lock_kernel();
+ sock = sockfd_lookup(fd, &err);
+ if (sock != NULL) {
+ if (sock->file->f_flags & O_NONBLOCK)
+ kern_msg.msg_flags |= MSG_DONTWAIT;
+ err = sock_sendmsg(sock, &kern_msg, total_len);
+ sockfd_put(sock);
+ }
+ unlock_kernel();
+
+ /* N.B. Use kfree here, as kern_msg.msg_controllen might change? */
+ if(ctl_buf != ctl)
+ kfree(ctl_buf);
+out_freeiov:
+ if(kern_msg.msg_iov != iov)
+ kfree(kern_msg.msg_iov);
+out:
+ return err;
+}
+
+asmlinkage int
+sys32_recvmsg(int fd, struct msghdr32 *user_msg, unsigned int user_flags)
+{
+ struct iovec iovstack[UIO_FASTIOV];
+ struct msghdr kern_msg;
+ char addr[MAX_SOCK_ADDR];
+ struct socket *sock;
+ struct iovec *iov = iovstack;
+ struct sockaddr *uaddr;
+ int *uaddr_len;
+ unsigned long cmsg_ptr;
+ int err, total_len, len = 0;
+
+ if(msghdr_from_user32_to_kern(&kern_msg, user_msg))
+ return -EFAULT;
+ if(kern_msg.msg_iovlen > UIO_MAXIOV)
+ return -EINVAL;
+
+ uaddr = kern_msg.msg_name;
+ uaddr_len = &user_msg->msg_namelen;
+ err = verify_iovec32(&kern_msg, iov, addr, VERIFY_WRITE);
+ if (err < 0)
+ goto out;
+ total_len = err;
+
+ cmsg_ptr = (unsigned long) kern_msg.msg_control;
+ kern_msg.msg_flags = 0;
+
+ lock_kernel();
+ sock = sockfd_lookup(fd, &err);
+ if (sock != NULL) {
+ struct scm_cookie scm;
+
+ if (sock->file->f_flags & O_NONBLOCK)
+ user_flags |= MSG_DONTWAIT;
+ memset(&scm, 0, sizeof(scm));
+ err = sock->ops->recvmsg(sock, &kern_msg, total_len,
+ user_flags, &scm);
+ if(err >= 0) {
+ len = err;
+ if(!kern_msg.msg_control) {
+ if(sock->passcred || scm.fp)
+ kern_msg.msg_flags |= MSG_CTRUNC;
+ if(scm.fp)
+ __scm_destroy(&scm);
+ } else {
+ /* If recvmsg processing itself placed some
+			 * control messages into user space, it is
+ * using 64-bit CMSG processing, so we need
+ * to fix it up before we tack on more stuff.
+ */
+ if((unsigned long) kern_msg.msg_control
+ != cmsg_ptr)
+ cmsg32_recvmsg_fixup(&kern_msg,
+ cmsg_ptr);
+
+ /* Wheee... */
+ if(sock->passcred)
+ put_cmsg32(&kern_msg,
+ SOL_SOCKET, SCM_CREDENTIALS,
+ sizeof(scm.creds),
+ &scm.creds);
+ if(scm.fp != NULL)
+ scm_detach_fds32(&kern_msg, &scm);
+ }
+ }
+ sockfd_put(sock);
+ }
+ unlock_kernel();
+
+ if(uaddr != NULL && err >= 0)
+ err = move_addr_to_user(addr, kern_msg.msg_namelen, uaddr,
+ uaddr_len);
+ if(cmsg_ptr != 0 && err >= 0) {
+ unsigned long ucmsg_ptr = ((unsigned long)kern_msg.msg_control);
+ __kernel_size_t32 uclen = (__kernel_size_t32) (ucmsg_ptr
+ - cmsg_ptr);
+ err |= __put_user(uclen, &user_msg->msg_controllen);
+ }
+ if(err >= 0)
+ err = __put_user(kern_msg.msg_flags, &user_msg->msg_flags);
+ if(kern_msg.msg_iov != iov)
+ kfree(kern_msg.msg_iov);
+out:
+ if(err < 0)
+ return err;
+ return len;
+}
+
+extern void check_pending(int signum);
+
+asmlinkage int
+sys32_sigaction (int sig, struct old_sigaction32 *act,
+ struct old_sigaction32 *oact)
+{
+ struct k_sigaction new_ka, old_ka;
+ int ret;
+
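+	/* Historically, a negative signal number from a 32-bit caller
+	 * requests "new-style" signal semantics; note that and recover
+	 * the real signal number.
+	 */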
+ if(sig < 0) {
+ current->tss.new_signal = 1;
+ sig = -sig;
+ }
+
+ if (act) {
+ old_sigset_t32 mask;
+
+ ret = get_user((long)new_ka.sa.sa_handler, &act->sa_handler);
+ ret |= __get_user(new_ka.sa.sa_flags, &act->sa_flags);
+ ret |= __get_user(mask, &act->sa_mask);
+ if (ret)
+ return ret;
+ siginitset(&new_ka.sa.sa_mask, mask);
+ }
+
+ ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
+
+ if (!ret && oact) {
+ ret = put_user((long)old_ka.sa.sa_handler, &oact->sa_handler);
+ ret |= __put_user(old_ka.sa.sa_flags, &oact->sa_flags);
+ ret |= __put_user(old_ka.sa.sa_mask.sig[0], &oact->sa_mask);
+ }
+
+ return ret;
+}
+
+#ifdef CONFIG_MODULES
+
+extern asmlinkage unsigned long sys_create_module(const char *name_user,
+ size_t size);
+
+asmlinkage unsigned long
+sys32_create_module(const char *name_user, __kernel_size_t32 size)
+{
+ return sys_create_module(name_user, (size_t)size);
+}
+
+extern asmlinkage int sys_init_module(const char *name_user,
+ struct module *mod_user);
+
+/* Hey, when you're trying to init a module, take the time to prepare us a
+ * nice 64-bit module structure, even when calling from 32-bit modutils...
+ * Why pollute the kernel... :))
+ */
+asmlinkage int
+sys32_init_module(const char *name_user, struct module *mod_user)
+{
+ return sys_init_module(name_user, mod_user);
+}
+
+extern asmlinkage int sys_delete_module(const char *name_user);
+
+asmlinkage int
+sys32_delete_module(const char *name_user)
+{
+ return sys_delete_module(name_user);
+}
+
+struct module_info32 {
+ u32 addr;
+ u32 size;
+ u32 flags;
+ s32 usecount;
+};
+
+/* Query various bits about modules. */
+
+static inline long
+get_mod_name(const char *user_name, char **buf)
+{
+ unsigned long page;
+ long retval;
+
+ if ((unsigned long)user_name >= TASK_SIZE
+ && !segment_eq(get_fs (), KERNEL_DS))
+ return -EFAULT;
+
+ page = __get_free_page(GFP_KERNEL);
+ if (!page)
+ return -ENOMEM;
+
+ retval = strncpy_from_user((char *)page, user_name, PAGE_SIZE);
+ if (retval > 0) {
+ if (retval < PAGE_SIZE) {
+ *buf = (char *)page;
+ return retval;
+ }
+ retval = -ENAMETOOLONG;
+ } else if (!retval)
+ retval = -EINVAL;
+
+ free_page(page);
+ return retval;
+}
+
+static inline void
+put_mod_name(char *buf)
+{
+ free_page((unsigned long)buf);
+}
+
+static __inline__ struct module *
+find_module(const char *name)
+{
+ struct module *mod;
+
+ for (mod = module_list; mod ; mod = mod->next) {
+ if (mod->flags & MOD_DELETED)
+ continue;
+ if (!strcmp(mod->name, name))
+ break;
+ }
+
+ return mod;
+}
+
+static int
+qm_modules(char *buf, size_t bufsize, __kernel_size_t32 *ret)
+{
+ struct module *mod;
+ size_t nmod, space, len;
+
+ nmod = space = 0;
+
+ for (mod = module_list; mod->next != NULL; mod = mod->next, ++nmod) {
+ len = strlen(mod->name)+1;
+ if (len > bufsize)
+ goto calc_space_needed;
+ if (copy_to_user(buf, mod->name, len))
+ return -EFAULT;
+ buf += len;
+ bufsize -= len;
+ space += len;
+ }
+
+ if (put_user(nmod, ret))
+ return -EFAULT;
+ else
+ return 0;
+
+calc_space_needed:
+ space += len;
+ while ((mod = mod->next)->next != NULL)
+ space += strlen(mod->name)+1;
+
+ if (put_user(space, ret))
+ return -EFAULT;
+ else
+ return -ENOSPC;
+}
+
+static int
+qm_deps(struct module *mod, char *buf, size_t bufsize, __kernel_size_t32 *ret)
+{
+ size_t i, space, len;
+
+ if (mod->next == NULL)
+ return -EINVAL;
+ if ((mod->flags & (MOD_RUNNING | MOD_DELETED)) != MOD_RUNNING)
+ if (put_user(0, ret))
+ return -EFAULT;
+ else
+ return 0;
+
+ space = 0;
+ for (i = 0; i < mod->ndeps; ++i) {
+ const char *dep_name = mod->deps[i].dep->name;
+
+ len = strlen(dep_name)+1;
+ if (len > bufsize)
+ goto calc_space_needed;
+ if (copy_to_user(buf, dep_name, len))
+ return -EFAULT;
+ buf += len;
+ bufsize -= len;
+ space += len;
+ }
+
+ if (put_user(i, ret))
+ return -EFAULT;
+ else
+ return 0;
+
+calc_space_needed:
+ space += len;
+ while (++i < mod->ndeps)
+ space += strlen(mod->deps[i].dep->name)+1;
+
+ if (put_user(space, ret))
+ return -EFAULT;
+ else
+ return -ENOSPC;
+}
+
+static int
+qm_refs(struct module *mod, char *buf, size_t bufsize, __kernel_size_t32 *ret)
+{
+ size_t nrefs, space, len;
+ struct module_ref *ref;
+
+ if (mod->next == NULL)
+ return -EINVAL;
+ if ((mod->flags & (MOD_RUNNING | MOD_DELETED)) != MOD_RUNNING)
+ if (put_user(0, ret))
+ return -EFAULT;
+ else
+ return 0;
+
+ space = 0;
+ for (nrefs = 0, ref = mod->refs; ref ; ++nrefs, ref = ref->next_ref) {
+ const char *ref_name = ref->ref->name;
+
+ len = strlen(ref_name)+1;
+ if (len > bufsize)
+ goto calc_space_needed;
+ if (copy_to_user(buf, ref_name, len))
+ return -EFAULT;
+ buf += len;
+ bufsize -= len;
+ space += len;
+ }
+
+ if (put_user(nrefs, ret))
+ return -EFAULT;
+ else
+ return 0;
+
+calc_space_needed:
+ space += len;
+ while ((ref = ref->next_ref) != NULL)
+ space += strlen(ref->ref->name)+1;
+
+ if (put_user(space, ret))
+ return -EFAULT;
+ else
+ return -ENOSPC;
+}
+
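+/* qm_symbols lays the answer out as an array of {value, offset} u32 pairs
+ * followed by the NUL-terminated symbol names; `space' below doubles as
+ * the running offset of each name from the start of the user buffer.
+ */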
+static inline int
+qm_symbols(struct module *mod, char *buf, size_t bufsize,
+ __kernel_size_t32 *ret)
+{
+ size_t i, space, len;
+ struct module_symbol *s;
+ char *strings;
+ unsigned *vals;
+
+ if ((mod->flags & (MOD_RUNNING | MOD_DELETED)) != MOD_RUNNING)
+ if (put_user(0, ret))
+ return -EFAULT;
+ else
+ return 0;
+
+ space = mod->nsyms * 2*sizeof(u32);
+
+ i = len = 0;
+ s = mod->syms;
+
+ if (space > bufsize)
+ goto calc_space_needed;
+
+ if (!access_ok(VERIFY_WRITE, buf, space))
+ return -EFAULT;
+
+ bufsize -= space;
+ vals = (unsigned *)buf;
+ strings = buf+space;
+
+ for (; i < mod->nsyms ; ++i, ++s, vals += 2) {
+ len = strlen(s->name)+1;
+ if (len > bufsize)
+ goto calc_space_needed;
+
+ if (copy_to_user(strings, s->name, len)
+ || __put_user(s->value, vals+0)
+ || __put_user(space, vals+1))
+ return -EFAULT;
+
+ strings += len;
+ bufsize -= len;
+ space += len;
+ }
+
+ if (put_user(i, ret))
+ return -EFAULT;
+ else
+ return 0;
+
+calc_space_needed:
+ for (; i < mod->nsyms; ++i, ++s)
+ space += strlen(s->name)+1;
+
+ if (put_user(space, ret))
+ return -EFAULT;
+ else
+ return -ENOSPC;
+}
+
+static inline int
+qm_info(struct module *mod, char *buf, size_t bufsize, __kernel_size_t32 *ret)
+{
+ int error = 0;
+
+ if (mod->next == NULL)
+ return -EINVAL;
+
+ if (sizeof(struct module_info32) <= bufsize) {
+ struct module_info32 info;
+ info.addr = (unsigned long)mod;
+ info.size = mod->size;
+ info.flags = mod->flags;
+ info.usecount =
+ ((mod_member_present(mod, can_unload)
+ && mod->can_unload)
+ ? -1 : atomic_read(&mod->uc.usecount));
+
+ if (copy_to_user(buf, &info, sizeof(struct module_info32)))
+ return -EFAULT;
+ } else
+ error = -ENOSPC;
+
+ if (put_user(sizeof(struct module_info32), ret))
+ return -EFAULT;
+
+ return error;
+}
+
+asmlinkage int
+sys32_query_module(char *name_user, int which, char *buf,
+ __kernel_size_t32 bufsize, u32 ret)
+{
+ struct module *mod;
+ int err;
+
+ lock_kernel();
+ if (name_user == 0) {
+ /* This finds "kernel_module" which is not exported. */
+ for(mod = module_list; mod->next != NULL; mod = mod->next)
+ ;
+ } else {
+ long namelen;
+ char *name;
+
+ if ((namelen = get_mod_name(name_user, &name)) < 0) {
+ err = namelen;
+ goto out;
+ }
+ err = -ENOENT;
+ if (namelen == 0) {
+ /* This finds "kernel_module" which is not exported. */
+ for(mod = module_list;
+ mod->next != NULL;
+ mod = mod->next) ;
+ } else if ((mod = find_module(name)) == NULL) {
+ put_mod_name(name);
+ goto out;
+ }
+ put_mod_name(name);
+ }
+
+ switch (which)
+ {
+ case 0:
+ err = 0;
+ break;
+ case QM_MODULES:
+ err = qm_modules(buf, bufsize, (__kernel_size_t32 *)AA(ret));
+ break;
+ case QM_DEPS:
+ err = qm_deps(mod, buf, bufsize, (__kernel_size_t32 *)AA(ret));
+ break;
+ case QM_REFS:
+ err = qm_refs(mod, buf, bufsize, (__kernel_size_t32 *)AA(ret));
+ break;
+ case QM_SYMBOLS:
+ err = qm_symbols(mod, buf, bufsize,
+ (__kernel_size_t32 *)AA(ret));
+ break;
+ case QM_INFO:
+ err = qm_info(mod, buf, bufsize, (__kernel_size_t32 *)AA(ret));
+ break;
+ default:
+ err = -EINVAL;
+ break;
+ }
+out:
+ unlock_kernel();
+ return err;
+}
+
+struct kernel_sym32 {
+ u32 value;
+ char name[60];
+};
+
+extern asmlinkage int sys_get_kernel_syms(struct kernel_sym *table);
+
+asmlinkage int
+sys32_get_kernel_syms(struct kernel_sym32 *table)
+{
+ int len, i;
+ struct kernel_sym *tbl;
+ mm_segment_t old_fs;
+
+ len = sys_get_kernel_syms(NULL);
+ if (!table) return len;
+ tbl = kmalloc (len * sizeof (struct kernel_sym), GFP_KERNEL);
+ if (!tbl) return -ENOMEM;
+ old_fs = get_fs();
+ set_fs (KERNEL_DS);
+ sys_get_kernel_syms(tbl);
+ set_fs (old_fs);
+	for (i = 0; i < len; i++, table++) {
+ if (put_user (tbl[i].value, &table->value) ||
+ copy_to_user (table->name, tbl[i].name, 60))
+ break;
+ }
+ kfree (tbl);
+ return i;
+}
+
+#else /* CONFIG_MODULES */
+
+asmlinkage unsigned long
+sys32_create_module(const char *name_user, size_t size)
+{
+ return -ENOSYS;
+}
+
+asmlinkage int
+sys32_init_module(const char *name_user, struct module *mod_user)
+{
+ return -ENOSYS;
+}
+
+asmlinkage int
+sys32_delete_module(const char *name_user)
+{
+ return -ENOSYS;
+}
+
+asmlinkage int
+sys32_query_module(const char *name_user, int which, char *buf, size_t bufsize,
+ size_t *ret)
+{
+ /* Let the program know about the new interface. Not that
+ it'll do them much good. */
+ if (which == 0)
+ return 0;
+
+ return -ENOSYS;
+}
+
+asmlinkage int
+sys32_get_kernel_syms(struct kernel_sym *table)
+{
+ return -ENOSYS;
+}
+
+#endif /* CONFIG_MODULES */
+
+/* Stuff for NFS server syscalls... */
+struct nfsctl_svc32 {
+ u16 svc32_port;
+ s32 svc32_nthreads;
+};
+
+struct nfsctl_client32 {
+ s8 cl32_ident[NFSCLNT_IDMAX+1];
+ s32 cl32_naddr;
+ struct in_addr cl32_addrlist[NFSCLNT_ADDRMAX];
+ s32 cl32_fhkeytype;
+ s32 cl32_fhkeylen;
+ u8 cl32_fhkey[NFSCLNT_KEYMAX];
+};
+
+struct nfsctl_export32 {
+ s8 ex32_client[NFSCLNT_IDMAX+1];
+ s8 ex32_path[NFS_MAXPATHLEN+1];
+ __kernel_dev_t32 ex32_dev;
+ __kernel_ino_t32 ex32_ino;
+ s32 ex32_flags;
+ __kernel_uid_t32 ex32_anon_uid;
+ __kernel_gid_t32 ex32_anon_gid;
+};
+
+struct nfsctl_uidmap32 {
+ u32 ug32_ident; /* char * */
+ __kernel_uid_t32 ug32_uidbase;
+ s32 ug32_uidlen;
+ u32 ug32_udimap; /* uid_t * */
+ __kernel_uid_t32 ug32_gidbase;
+ s32 ug32_gidlen;
+ u32 ug32_gdimap; /* gid_t * */
+};
+
+struct nfsctl_fhparm32 {
+ struct sockaddr gf32_addr;
+ __kernel_dev_t32 gf32_dev;
+ __kernel_ino_t32 gf32_ino;
+ s32 gf32_version;
+};
+
+struct nfsctl_arg32 {
+ s32 ca32_version; /* safeguard */
+ union {
+ struct nfsctl_svc32 u32_svc;
+ struct nfsctl_client32 u32_client;
+ struct nfsctl_export32 u32_export;
+ struct nfsctl_uidmap32 u32_umap;
+ struct nfsctl_fhparm32 u32_getfh;
+ u32 u32_debug;
+ } u;
+#define ca32_svc u.u32_svc
+#define ca32_client u.u32_client
+#define ca32_export u.u32_export
+#define ca32_umap u.u32_umap
+#define ca32_getfh u.u32_getfh
+#define ca32_authd u.u32_authd
+#define ca32_debug u.u32_debug
+};
+
+union nfsctl_res32 {
+ struct knfs_fh cr32_getfh;
+ u32 cr32_debug;
+};
+
+static int
+nfs_svc32_trans(struct nfsctl_arg *karg, struct nfsctl_arg32 *arg32)
+{
+ int err;
+
+ err = __get_user(karg->ca_version, &arg32->ca32_version);
+ err |= __get_user(karg->ca_svc.svc_port, &arg32->ca32_svc.svc32_port);
+ err |= __get_user(karg->ca_svc.svc_nthreads,
+ &arg32->ca32_svc.svc32_nthreads);
+ return err;
+}
+
+static int
+nfs_clnt32_trans(struct nfsctl_arg *karg, struct nfsctl_arg32 *arg32)
+{
+ int err;
+
+ err = __get_user(karg->ca_version, &arg32->ca32_version);
+ err |= copy_from_user(&karg->ca_client.cl_ident[0],
+ &arg32->ca32_client.cl32_ident[0],
+ NFSCLNT_IDMAX);
+ err |= __get_user(karg->ca_client.cl_naddr,
+ &arg32->ca32_client.cl32_naddr);
+ err |= copy_from_user(&karg->ca_client.cl_addrlist[0],
+ &arg32->ca32_client.cl32_addrlist[0],
+ (sizeof(struct in_addr) * NFSCLNT_ADDRMAX));
+ err |= __get_user(karg->ca_client.cl_fhkeytype,
+ &arg32->ca32_client.cl32_fhkeytype);
+ err |= __get_user(karg->ca_client.cl_fhkeylen,
+ &arg32->ca32_client.cl32_fhkeylen);
+ err |= copy_from_user(&karg->ca_client.cl_fhkey[0],
+ &arg32->ca32_client.cl32_fhkey[0],
+ NFSCLNT_KEYMAX);
+ return err;
+}
+
+static int
+nfs_exp32_trans(struct nfsctl_arg *karg, struct nfsctl_arg32 *arg32)
+{
+ int err;
+
+ err = __get_user(karg->ca_version, &arg32->ca32_version);
+ err |= copy_from_user(&karg->ca_export.ex_client[0],
+ &arg32->ca32_export.ex32_client[0],
+ NFSCLNT_IDMAX);
+ err |= copy_from_user(&karg->ca_export.ex_path[0],
+ &arg32->ca32_export.ex32_path[0],
+ NFS_MAXPATHLEN);
+ err |= __get_user(karg->ca_export.ex_dev,
+ &arg32->ca32_export.ex32_dev);
+ err |= __get_user(karg->ca_export.ex_ino,
+ &arg32->ca32_export.ex32_ino);
+ err |= __get_user(karg->ca_export.ex_flags,
+ &arg32->ca32_export.ex32_flags);
+ err |= __get_user(karg->ca_export.ex_anon_uid,
+ &arg32->ca32_export.ex32_anon_uid);
+ err |= __get_user(karg->ca_export.ex_anon_gid,
+ &arg32->ca32_export.ex32_anon_gid);
+ return err;
+}
+
+static int
+nfs_uud32_trans(struct nfsctl_arg *karg, struct nfsctl_arg32 *arg32)
+{
+ u32 uaddr;
+ int i;
+ int err;
+
+ memset(karg, 0, sizeof(*karg));
+ if(__get_user(karg->ca_version, &arg32->ca32_version))
+ return -EFAULT;
+ karg->ca_umap.ug_ident = (char *)get_free_page(GFP_USER);
+ if(!karg->ca_umap.ug_ident)
+ return -ENOMEM;
+ err = __get_user(uaddr, &arg32->ca32_umap.ug32_ident);
+ if(strncpy_from_user(karg->ca_umap.ug_ident,
+ (char *)A(uaddr), PAGE_SIZE) <= 0)
+ return -EFAULT;
+ err |= __get_user(karg->ca_umap.ug_uidbase,
+ &arg32->ca32_umap.ug32_uidbase);
+ err |= __get_user(karg->ca_umap.ug_uidlen,
+ &arg32->ca32_umap.ug32_uidlen);
+ err |= __get_user(uaddr, &arg32->ca32_umap.ug32_udimap);
+ if (err)
+ return -EFAULT;
+ karg->ca_umap.ug_udimap = kmalloc((sizeof(uid_t) *
+ karg->ca_umap.ug_uidlen),
+ GFP_USER);
+ if(!karg->ca_umap.ug_udimap)
+ return -ENOMEM;
+ for(i = 0; i < karg->ca_umap.ug_uidlen; i++)
+ err |= __get_user(karg->ca_umap.ug_udimap[i],
+ &(((__kernel_uid_t32 *)A(uaddr))[i]));
+ err |= __get_user(karg->ca_umap.ug_gidbase,
+ &arg32->ca32_umap.ug32_gidbase);
+	err |= __get_user(karg->ca_umap.ug_gidlen,
+ &arg32->ca32_umap.ug32_gidlen);
+ err |= __get_user(uaddr, &arg32->ca32_umap.ug32_gdimap);
+ if (err)
+ return -EFAULT;
+ karg->ca_umap.ug_gdimap = kmalloc((sizeof(gid_t) *
+					   karg->ca_umap.ug_gidlen),
+ GFP_USER);
+ if(!karg->ca_umap.ug_gdimap)
+ return -ENOMEM;
+ for(i = 0; i < karg->ca_umap.ug_gidlen; i++)
+ err |= __get_user(karg->ca_umap.ug_gdimap[i],
+ &(((__kernel_gid_t32 *)A(uaddr))[i]));
+
+ return err;
+}
+
+static int
+nfs_getfh32_trans(struct nfsctl_arg *karg, struct nfsctl_arg32 *arg32)
+{
+ int err;
+
+ err = __get_user(karg->ca_version, &arg32->ca32_version);
+ err |= copy_from_user(&karg->ca_getfh.gf_addr,
+ &arg32->ca32_getfh.gf32_addr,
+ (sizeof(struct sockaddr)));
+ err |= __get_user(karg->ca_getfh.gf_dev,
+ &arg32->ca32_getfh.gf32_dev);
+ err |= __get_user(karg->ca_getfh.gf_ino,
+ &arg32->ca32_getfh.gf32_ino);
+ err |= __get_user(karg->ca_getfh.gf_version,
+ &arg32->ca32_getfh.gf32_version);
+ return err;
+}
+
+static int
+nfs_getfh32_res_trans(union nfsctl_res *kres, union nfsctl_res32 *res32)
+{
+ int err;
+
+ err = copy_to_user(&res32->cr32_getfh,
+ &kres->cr_getfh,
+ sizeof(res32->cr32_getfh));
+ err |= __put_user(kres->cr_debug, &res32->cr32_debug);
+ return err;
+}
+
+extern asmlinkage int sys_nfsservctl(int cmd, void *arg, void *resp);
+
+int asmlinkage
+sys32_nfsservctl(int cmd, struct nfsctl_arg32 *arg32, union nfsctl_res32 *res32)
+{
+ struct nfsctl_arg *karg = NULL;
+ union nfsctl_res *kres = NULL;
+ mm_segment_t oldfs;
+ int err;
+
+ karg = kmalloc(sizeof(*karg), GFP_USER);
+ if(!karg)
+ return -ENOMEM;
+ if(res32) {
+ kres = kmalloc(sizeof(*kres), GFP_USER);
+ if(!kres) {
+ kfree(karg);
+ return -ENOMEM;
+ }
+ }
+ switch(cmd) {
+ case NFSCTL_SVC:
+ err = nfs_svc32_trans(karg, arg32);
+ break;
+ case NFSCTL_ADDCLIENT:
+ err = nfs_clnt32_trans(karg, arg32);
+ break;
+ case NFSCTL_DELCLIENT:
+ err = nfs_clnt32_trans(karg, arg32);
+ break;
+ case NFSCTL_EXPORT:
+ err = nfs_exp32_trans(karg, arg32);
+ break;
+	/* This one is unimplemented, but we're ready for it. */
+ case NFSCTL_UGIDUPDATE:
+ err = nfs_uud32_trans(karg, arg32);
+ break;
+ case NFSCTL_GETFH:
+ err = nfs_getfh32_trans(karg, arg32);
+ break;
+ default:
+ err = -EINVAL;
+ break;
+ }
+ if(err)
+ goto done;
+ oldfs = get_fs();
+ set_fs(KERNEL_DS);
+ err = sys_nfsservctl(cmd, karg, kres);
+ set_fs(oldfs);
+
+ if(!err && cmd == NFSCTL_GETFH)
+ err = nfs_getfh32_res_trans(kres, res32);
+
+done:
+ if(karg) {
+ if(cmd == NFSCTL_UGIDUPDATE) {
+ if(karg->ca_umap.ug_ident)
+				free_page((unsigned long)karg->ca_umap.ug_ident);
+ if(karg->ca_umap.ug_udimap)
+ kfree(karg->ca_umap.ug_udimap);
+ if(karg->ca_umap.ug_gdimap)
+ kfree(karg->ca_umap.ug_gdimap);
+ }
+ kfree(karg);
+ }
+ if(kres)
+ kfree(kres);
+ return err;
+}
+
+asmlinkage int sys_utimes(char *, struct timeval *);
+
+asmlinkage int
+sys32_utimes(char *filename, struct timeval32 *tvs)
+{
+ char *kfilename;
+ struct timeval ktvs[2];
+ mm_segment_t old_fs;
+ int ret;
+
+ kfilename = getname32(filename);
+ ret = PTR_ERR(kfilename);
+ if (!IS_ERR(kfilename)) {
+		if (tvs) {
+			if (get_tv32(&ktvs[0], tvs) ||
+			    get_tv32(&ktvs[1], 1+tvs)) {
+				putname(kfilename);
+				return -EFAULT;
+			}
+		}
+
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ ret = sys_utimes(kfilename, &ktvs[0]);
+ set_fs(old_fs);
+
+ putname(kfilename);
+ }
+ return ret;
+}
+
+/* These are here just in case some old ia32 binary calls it. */
+asmlinkage int
+sys32_pause(void)
+{
+ current->state = TASK_INTERRUPTIBLE;
+ schedule();
+ return -ERESTARTNOHAND;
+}
+
+/* PCI config space poking. */
+extern asmlinkage int sys_pciconfig_read(unsigned long bus,
+ unsigned long dfn,
+ unsigned long off,
+ unsigned long len,
+ unsigned char *buf);
+
+extern asmlinkage int sys_pciconfig_write(unsigned long bus,
+ unsigned long dfn,
+ unsigned long off,
+ unsigned long len,
+ unsigned char *buf);
+
+asmlinkage int
+sys32_pciconfig_read(u32 bus, u32 dfn, u32 off, u32 len, u32 ubuf)
+{
+ return sys_pciconfig_read((unsigned long) bus,
+ (unsigned long) dfn,
+ (unsigned long) off,
+ (unsigned long) len,
+ (unsigned char *)AA(ubuf));
+}
+
+asmlinkage int
+sys32_pciconfig_write(u32 bus, u32 dfn, u32 off, u32 len, u32 ubuf)
+{
+ return sys_pciconfig_write((unsigned long) bus,
+ (unsigned long) dfn,
+ (unsigned long) off,
+ (unsigned long) len,
+ (unsigned char *)AA(ubuf));
+}
+
+extern asmlinkage int sys_prctl(int option, unsigned long arg2,
+ unsigned long arg3, unsigned long arg4,
+ unsigned long arg5);
+
+asmlinkage int
+sys32_prctl(int option, u32 arg2, u32 arg3, u32 arg4, u32 arg5)
+{
+ return sys_prctl(option,
+ (unsigned long) arg2,
+ (unsigned long) arg3,
+ (unsigned long) arg4,
+ (unsigned long) arg5);
+}
+
+
+extern asmlinkage int sys_newuname(struct new_utsname * name);
+
+asmlinkage int
+sys32_newuname(struct new_utsname * name)
+{
+ int ret = sys_newuname(name);
+
+ if (current->personality == PER_LINUX32 && !ret) {
+ ret = copy_to_user(name->machine, "sparc\0\0", 8);
+ }
+ return ret;
+}
+
+extern asmlinkage ssize_t sys_pread(unsigned int fd, char * buf,
+ size_t count, loff_t pos);
+
+extern asmlinkage ssize_t sys_pwrite(unsigned int fd, const char * buf,
+ size_t count, loff_t pos);
+
+typedef __kernel_ssize_t32 ssize_t32;
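+/* The 64-bit file offset arrives from the 32-bit side split across two
+ * registers; e.g. poshi = 0x00000001 and poslo = 0x80000000 reassemble to
+ * pos = 0x0000000180000000 in the shift-and-or below.
+ */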
+
+asmlinkage ssize_t32
+sys32_pread(unsigned int fd, char *ubuf, __kernel_size_t32 count,
+ u32 poshi, u32 poslo)
+{
+ return sys_pread(fd, ubuf, count,
+ ((loff_t)AA(poshi) << 32) | AA(poslo));
+}
+
+asmlinkage ssize_t32
+sys32_pwrite(unsigned int fd, char *ubuf, __kernel_size_t32 count,
+ u32 poshi, u32 poslo)
+{
+ return sys_pwrite(fd, ubuf, count,
+ ((loff_t)AA(poshi) << 32) | AA(poslo));
+}
+
+
+extern asmlinkage int sys_personality(unsigned long);
+
+asmlinkage int
+sys32_personality(unsigned long personality)
+{
+ int ret;
+ lock_kernel();
+ if (current->personality == PER_LINUX32 && personality == PER_LINUX)
+ personality = PER_LINUX32;
+ ret = sys_personality(personality);
+ unlock_kernel();
+ if (ret == PER_LINUX32)
+ ret = PER_LINUX;
+ return ret;
+}
+
+extern asmlinkage ssize_t sys_sendfile(int out_fd, int in_fd, off_t *offset,
+ size_t count);
+
+asmlinkage int
+sys32_sendfile(int out_fd, int in_fd, __kernel_off_t32 *offset, s32 count)
+{
+ mm_segment_t old_fs = get_fs();
+ int ret;
+ off_t of;
+
+ if (offset && get_user(of, offset))
+ return -EFAULT;
+
+ set_fs(KERNEL_DS);
+ ret = sys_sendfile(out_fd, in_fd, offset ? &of : NULL, count);
+ set_fs(old_fs);
+
+ if (!ret && offset && put_user(of, offset))
+ return -EFAULT;
+
+ return ret;
+}
+
+/* Handle adjtimex compatibility. */
+
+struct timex32 {
+ u32 modes;
+ s32 offset, freq, maxerror, esterror;
+ s32 status, constant, precision, tolerance;
+ struct timeval32 time;
+ s32 tick;
+ s32 ppsfreq, jitter, shift, stabil;
+ s32 jitcnt, calcnt, errcnt, stbcnt;
+ s32 :32; s32 :32; s32 :32; s32 :32;
+ s32 :32; s32 :32; s32 :32; s32 :32;
+ s32 :32; s32 :32; s32 :32; s32 :32;
+};
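+/* The unnamed `s32 :32' bitfields mirror the reserved `int :32' padding at
+ * the end of the native struct timex, so the two layouts stay in step.
+ */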
+
+extern int do_adjtimex(struct timex *);
+
+asmlinkage int
+sys32_adjtimex(struct timex32 *utp)
+{
+ struct timex txc;
+ int ret;
+
+ memset(&txc, 0, sizeof(struct timex));
+
+ if(get_user(txc.modes, &utp->modes) ||
+ __get_user(txc.offset, &utp->offset) ||
+ __get_user(txc.freq, &utp->freq) ||
+ __get_user(txc.maxerror, &utp->maxerror) ||
+ __get_user(txc.esterror, &utp->esterror) ||
+ __get_user(txc.status, &utp->status) ||
+ __get_user(txc.constant, &utp->constant) ||
+ __get_user(txc.precision, &utp->precision) ||
+ __get_user(txc.tolerance, &utp->tolerance) ||
+ __get_user(txc.time.tv_sec, &utp->time.tv_sec) ||
+ __get_user(txc.time.tv_usec, &utp->time.tv_usec) ||
+ __get_user(txc.tick, &utp->tick) ||
+ __get_user(txc.ppsfreq, &utp->ppsfreq) ||
+ __get_user(txc.jitter, &utp->jitter) ||
+ __get_user(txc.shift, &utp->shift) ||
+ __get_user(txc.stabil, &utp->stabil) ||
+ __get_user(txc.jitcnt, &utp->jitcnt) ||
+ __get_user(txc.calcnt, &utp->calcnt) ||
+ __get_user(txc.errcnt, &utp->errcnt) ||
+ __get_user(txc.stbcnt, &utp->stbcnt))
+ return -EFAULT;
+
+ ret = do_adjtimex(&txc);
+
+ if(put_user(txc.modes, &utp->modes) ||
+ __put_user(txc.offset, &utp->offset) ||
+ __put_user(txc.freq, &utp->freq) ||
+ __put_user(txc.maxerror, &utp->maxerror) ||
+ __put_user(txc.esterror, &utp->esterror) ||
+ __put_user(txc.status, &utp->status) ||
+ __put_user(txc.constant, &utp->constant) ||
+ __put_user(txc.precision, &utp->precision) ||
+ __put_user(txc.tolerance, &utp->tolerance) ||
+ __put_user(txc.time.tv_sec, &utp->time.tv_sec) ||
+ __put_user(txc.time.tv_usec, &utp->time.tv_usec) ||
+ __put_user(txc.tick, &utp->tick) ||
+ __put_user(txc.ppsfreq, &utp->ppsfreq) ||
+ __put_user(txc.jitter, &utp->jitter) ||
+ __put_user(txc.shift, &utp->shift) ||
+ __put_user(txc.stabil, &utp->stabil) ||
+ __put_user(txc.jitcnt, &utp->jitcnt) ||
+ __put_user(txc.calcnt, &utp->calcnt) ||
+ __put_user(txc.errcnt, &utp->errcnt) ||
+ __put_user(txc.stbcnt, &utp->stbcnt))
+ ret = -EFAULT;
+
+ return ret;
+}
+#endif /* NOTYET */
+
--- /dev/null
+#
+# Makefile for ia64-specific kdb files.
+#
+# Copyright 1999, Silicon Graphics Inc.
+#
+# Written March 1999 by Scott Lurndal at Silicon Graphics, Inc.
+# Code for IA64 written by Goutham Rao <goutham.rao@intel.com> and
+# Sreenivas Subramoney <sreenivas.subramoney@intel.com>
+#
+
+SUB_DIRS :=
+MOD_SUB_DIRS := $(SUB_DIRS)
+ALL_SUB_DIRS := $(SUB_DIRS)
+
+.S.o:
+ $(CC) -D__ASSEMBLY__ $(AFLAGS) -traditional -c $< -o $*.o
+
+L_TARGET = kdb.a
+L_OBJS = kdbsupport.o kdb_io.o kdb_bt.o kdb_traps.o
+
+include $(TOPDIR)/Rules.make
--- /dev/null
+/**
+ * Minimalist Kernel Debugger
+ * Machine dependent stack traceback code for IA-64.
+ *
+ * Copyright (C) 1999 Goutham Rao <goutham.rao@intel.com>
+ * Copyright (C) 1999 Sreenivas Subramoney <sreenivas.subramoney@intel.com>
+ * Intel Corporation, August 1999.
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * 99/12/03 D. Mosberger Reimplemented based on <asm-ia64/unwind.h> API.
+ * 99/12/06 D. Mosberger Added support for backtracing other processes.
+ */
+
+#include <linux/ctype.h>
+#include <linux/string.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/kdb.h>
+#include <asm/system.h>
+#include <asm/current.h>
+#include <asm/kdbsupport.h>
+
+/*
+ * Minimal stack back trace functionality.
+ */
+int
+kdb_bt (int argc, const char **argv, const char **envp, struct pt_regs *regs)
+{
+ struct task_struct *task = current;
+ struct ia64_frame_info info;
+ char *name;
+ int diag;
+
+ if (strcmp(argv[0], "btp") == 0) {
+ unsigned long pid;
+
+ diag = kdbgetularg(argv[1], &pid);
+ if (diag)
+ return diag;
+
+ task = find_task_by_pid(pid);
+ if (!task) {
+			kdb_printf("No process with pid == %lu found\n", pid);
+ return 0;
+ }
+ regs = ia64_task_regs(task);
+ } else if (argc) {
+ kdb_printf("bt <address> is unsupported for IA-64\n");
+ return 0;
+ }
+
+ if (task == current) {
+ /*
+ * Upon entering kdb, the stack frame looks like this:
+ *
+ * +---------------------+
+ * | struct pt_regs |
+ * +---------------------+
+ * | |
+ * | kernel stack |
+ * | |
+ * +=====================+ <--- top of stack upon entering kdb
+ * | struct pt_regs |
+ * +---------------------+
+ * | struct switch_stack |
+ * +---------------------+
+ */
+ if (user_mode(regs)) {
+ /* We are not implementing stack backtrace from user mode code */
+ kdb_printf ("Not in Kernel\n");
+ return 0;
+ }
+ ia64_unwind_init_from_current(&info, regs);
+ } else {
+ /*
+ * For a blocked task, the stack frame looks like this:
+ *
+ * +---------------------+
+ * | struct pt_regs |
+ * +---------------------+
+ * | |
+ * | kernel stack |
+ * | |
+ * +---------------------+
+ * | struct switch_stack |
+ * +=====================+ <--- task->thread.ksp
+ */
+ ia64_unwind_init_from_blocked_task(&info, task);
+ }
+
+ kdb_printf("Ret Address Reg Stack base Name\n\n") ;
+ do {
+ unsigned long ip = ia64_unwind_get_ip(&info);
+
+ name = kdbnearsym(ip);
+ if (!name) {
+ kdb_printf("Interrupt\n");
+ return 0;
+ }
+ kdb_printf("0x%016lx: [0x%016lx] %s\n", ip, ia64_unwind_get_bsp(&info), name);
+ } while (ia64_unwind_to_previous_frame(&info) >= 0);
+ return 0;
+}
--- /dev/null
+/*
+ * Kernel Debugger Console I/O handler
+ *
+ * Copyright (C) 1999 Silicon Graphics, Inc.
+ * Copyright (C) Scott Lurndal (slurn@engr.sgi.com)
+ * Copyright (C) Scott Foehner (sfoehner@engr.sgi.com)
+ * Copyright (C) Srinivasa Thirumalachar (sprasad@engr.sgi.com)
+ *
+ * Written March 1999 by Scott Lurndal at Silicon Graphics, Inc.
+ *
+ * Modifications from:
+ * Chuck Fleckenstein 1999/07/20
+ * Move kdb_info struct declaration to this file
+ * for cases where serial support is not compiled into
+ * the kernel.
+ *
+ * Masahiro Adegawa 1999/07/20
+ * Handle some peculiarities of japanese 86/106
+ * keyboards.
+ *
+ * marc@mucom.co.il 1999/07/20
+ * Catch buffer overflow for serial input.
+ *
+ * Scott Foehner
+ * Port to ia64
+ */
+
+#include <linux/kernel.h>
+#include <linux/console.h>
+#include <linux/serial_reg.h>
+#include <linux/spinlock.h>
+
+#include <asm/io.h>
+
+#include "pc_keyb.h"
+
+int kdb_port = 0;
+
+/*
+ * This module contains code to read characters from the keyboard or a serial
+ * port.
+ *
+ * It is used by the kernel debugger, and is polled, not interrupt driven.
+ *
+ */
+
+/*
+ * send: Send a byte to the keyboard controller. Used primarily to
+ * alter LED settings.
+ */
+
+static void
+kdb_kbdsend(unsigned char byte)
+{
+ while (inb(KBD_STATUS_REG) & KBD_STAT_IBF)
+ ;
+ outb(KBD_DATA_REG, byte);
+}
+
+static void
+kdb_kbdsetled(int leds)
+{
+ kdb_kbdsend(KBD_CMD_SET_LEDS);
+ kdb_kbdsend((unsigned char)leds);
+}
+
+static void
+console_read (char *buffer, size_t bufsize)
+{
+ struct console *in;
+ struct console *out;
+ char *cp, ch;
+
+ for (in = console_drivers; in; in = in->next) {
+ if ((in->flags & CON_ENABLED) && (in->read || in->wait_key))
+ break;
+ }
+ for (out = console_drivers; out; out = out->next) {
+ if ((out->flags & CON_ENABLED) && out->write)
+ break;
+ }
+
+	if (!in || !out || (!in->read && !in->wait_key) || !out->write) {
+ panic("kdb_io: can't do console i/o!");
+ }
+
+ if (in->read) {
+ /* this is untested... */
+ (*in->read)(in, buffer, bufsize);
+ return;
+ }
+
+ bufsize -= 2; /* leave room for CR & NUL terminator */
+ cp = buffer;
+ while (1) {
+ ch = (*in->wait_key)(in);
+ switch (ch) {
+ case '\b':
+ if (cp > buffer) {
+ --cp, ++bufsize;
+ (*out->write)(out, "\b \b", 3);
+ }
+ break;
+
+ case '\025':
+ while (cp > buffer) {
+ --cp, ++bufsize;
+ (*out->write)(out, "\b \b", 3);
+ }
+ break;
+
+ case '\r':
+ case '\n':
+ (*out->write)(out, "\r\n", 2);
+ *cp++ = '\n';
+ *cp++ = '\0';
+ return;
+
+ default:
+ if (bufsize > 0) {
+ (*out->write)(out, &ch, 1);
+ --bufsize;
+ *cp++ = ch;
+ }
+ break;
+ }
+ }
+}
+
+char *
+kdb_getscancode(char *buffer, size_t bufsize)
+{
+ /*
+ * XXX Shouldn't kdb _always_ use console based I/O? That's what the console
+ * abstraction is for, after all... ---davidm
+ */
+#ifdef CONFIG_IA64_HP_SIM
+ extern spinlock_t console_lock;
+ unsigned long flags;
+
+ spin_lock_irqsave(&console_lock, flags);
+ console_read(buffer, bufsize);
+ spin_unlock_irqrestore(&console_lock, flags);
+ return buffer;
+#else /* !CONFIG_IA64_HP_SIM */
+ char *cp = buffer;
+ int scancode, scanstatus;
+ static int shift_lock = 0; /* CAPS LOCK state (0-off, 1-on) */
+ static int shift_key = 0; /* Shift next keypress */
+ static int ctrl_key = 0;
+ static int leds = 2; /* Num lock */
+ u_short keychar;
+ extern u_short plain_map[], shift_map[], ctrl_map[];
+
+ bufsize -= 2; /* Reserve space for newline and null byte */
+
+ /*
+ * If we came in via a serial console, we allow that to
+ * be the input window for kdb.
+ */
+ if (kdb_port != 0) {
+ char ch;
+ int status;
+#define serial_inp(info, offset) inb((info) + (offset))
+#define serial_out(info, offset, v) outb((v), (info) + (offset))
+
+ while(1) {
+ while ((status = serial_inp(kdb_port, UART_LSR))
+ & UART_LSR_DR) {
+ ch = serial_inp(kdb_port, UART_RX);
+ if (ch == 8) { /* BS */
+ if (cp > buffer) {
+ --cp, bufsize++;
+ printk("%c %c", 0x08, 0x08);
+ }
+ continue;
+ }
+ serial_out(kdb_port, UART_TX, ch);
+ if (ch == 13) { /* CR */
+ *cp++ = '\n';
+ *cp++ = '\0';
+ serial_out(kdb_port, UART_TX, 10);
+ return(buffer);
+ }
+ /*
+ * Discard excess characters
+ */
+ if (bufsize > 0) {
+ *cp++ = ch;
+ bufsize--;
+ }
+ }
+ while (((status = serial_inp(kdb_port, UART_LSR))
+ & UART_LSR_DR) == 0);
+ }
+ }
+
+ while (1) {
+
+ /*
+ * Wait for a valid scancode
+ */
+
+ while ((inb(KBD_STATUS_REG) & KBD_STAT_OBF) == 0)
+ ;
+
+ /*
+ * Fetch the scancode
+ */
+ scancode = inb(KBD_DATA_REG);
+ scanstatus = inb(KBD_STATUS_REG);
+
+ /*
+ * Ignore mouse events.
+ */
+ if (scanstatus & KBD_STAT_MOUSE_OBF)
+ continue;
+
+ /*
+ * Ignore release, trigger on make
+ * (except for shift keys, where we want to
+ * keep the shift state so long as the key is
+ * held down).
+ */
+
+ if (((scancode&0x7f) == 0x2a)
+ || ((scancode&0x7f) == 0x36)) {
+ /*
+ * Next key may use shift table
+ */
+ if ((scancode & 0x80) == 0) {
+ shift_key=1;
+ } else {
+ shift_key=0;
+ }
+ continue;
+ }
+
+ if ((scancode&0x7f) == 0x1d) {
+ /*
+ * Left ctrl key
+ */
+ if ((scancode & 0x80) == 0) {
+ ctrl_key = 1;
+ } else {
+ ctrl_key = 0;
+ }
+ continue;
+ }
+
+ if ((scancode & 0x80) != 0)
+ continue;
+
+ scancode &= 0x7f;
+
+ /*
+ * Translate scancode
+ */
+
+ if (scancode == 0x3a) {
+ /*
+ * Toggle caps lock
+ */
+ shift_lock ^= 1;
+ leds ^= 0x4; /* toggle caps lock led */
+
+ kdb_kbdsetled(leds);
+ continue;
+ }
+
+ if (scancode == 0x0e) {
+ /*
+ * Backspace
+ */
+ if (cp > buffer) {
+ --cp, bufsize++;
+
+ /*
+ * XXX - erase character on screen
+ */
+ printk("%c %c", 0x08, 0x08);
+ }
+ continue;
+ }
+
+ if (scancode == 0xe0) {
+ continue;
+ }
+
+ /*
+ * For Japanese 86/106 keyboards
+ * See comment in drivers/char/pc_keyb.c.
+ * - Masahiro Adegawa
+ */
+ if (scancode == 0x73) {
+ scancode = 0x59;
+ } else if (scancode == 0x7d) {
+ scancode = 0x7c;
+ }
+
+		/* ctrl takes precedence over shift */
+		if (ctrl_key) {
+			keychar = ctrl_map[scancode];
+		} else if (shift_lock || shift_key) {
+			keychar = shift_map[scancode];
+		} else {
+			keychar = plain_map[scancode];
+		}
+
+ if ((scancode & 0x7f) == 0x1c) {
+ /*
+ * enter key. All done.
+ */
+ printk("\n");
+ break;
+ }
+
+ /*
+ * echo the character.
+ */
+ printk("%c", keychar&0xff);
+
+ if (bufsize) {
+ --bufsize;
+ *cp++ = keychar&0xff;
+ } else {
+ printk("buffer overflow\n");
+ break;
+ }
+
+ }
+
+ *cp++ = '\n'; /* White space for parser */
+ *cp++ = '\0'; /* String termination */
+
+#if defined(NOTNOW)
+ cp = buffer;
+ while (*cp) {
+ printk("char 0x%x\n", *cp++);
+ }
+#endif
+
+ return buffer;
+#endif /* !CONFIG_IA64_HP_SIM */
+}
+
--- /dev/null
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/kdb.h>
+
+static struct kdb_bp_support {
+ unsigned long addr ;
+ int slot ;
+} kdb_bp_info[NR_CPUS] ;
+
+
+extern void kdb_bp_install (void);
+
+/*
+ * This gets invoked right before a call to ia64_fault().
+ * Returns zero if the normal fault handler should be invoked.
+ */
+long
+ia64_kdb_fault_handler (unsigned long vector, unsigned long isr, unsigned long ifa,
+ unsigned long iim, unsigned long itir, unsigned long arg5,
+ unsigned long arg6, unsigned long arg7, unsigned long stack)
+{
+ struct switch_stack *sw = (struct switch_stack *) &stack;
+ struct pt_regs *regs = (struct pt_regs *) (sw + 1);
+ int bundle_slot;
+
+ /*
+ * TBD
+ * If KDB is configured, enter KDB for any fault.
+ */
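+	/* Vector 29 is the ia64 debug fault; vectors 35 and 36 are the
+	 * taken-branch and single-step traps, respectively.
+	 */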
+ if ((vector == 29) || (vector == 35) || (vector == 36)) {
+ if (!user_mode(regs)) {
+ bundle_slot = ia64_psr(regs)->ri;
+ if (vector == 29) {
+ if (bundle_slot == 0) {
+ kdb_bp_info[0].addr = regs->cr_iip;
+ kdb_bp_info[0].slot = bundle_slot;
+ kdb(KDB_REASON_FLTDBG, 0, regs);
+ } else {
+ if ((bundle_slot < 3) &&
+ (kdb_bp_info[0].addr == regs->cr_iip))
+ {
+ ia64_psr(regs)->id = 1;
+ ia64_psr(regs)->db = 1;
+ kdb_bp_install() ;
+ } else /* some error ?? */
+ kdb(KDB_REASON_FLTDBG, 0, regs);
+ }
+ } else /* single step or taken branch */
+ kdb(KDB_REASON_DEBUG, 0, regs);
+ return 1;
+ }
+ }
+ return 0;
+}
--- /dev/null
+/*
+ * Minimalist Kernel Debugger
+ *
+ * Copyright (C) 1999 Silicon Graphics, Inc.
+ * Copyright (C) Scott Lurndal (slurn@engr.sgi.com)
+ * Copyright (C) Scott Foehner (sfoehner@engr.sgi.com)
+ * Copyright (C) Srinivasa Thirumalachar (sprasad@engr.sgi.com)
+ * Copyright (C) David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * Written March 1999 by Scott Lurndal at Silicon Graphics, Inc.
+ *
+ * Modifications from:
+ * Richard Bass 1999/07/20
+ * Many bug fixes and enhancements.
+ * Scott Foehner
+ * Port to ia64
+ * Srinivasa Thirumalachar
+ * RSE support for ia64
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/kdb.h>
+#include <linux/stddef.h>
+#include <linux/vmalloc.h>
+
+#include <asm/uaccess.h>
+#include <asm/kdbsupport.h>
+#include <asm/rse.h>
+
+extern kdb_state_t kdb_state ;
+k_machreg_t dbregs[KDB_DBREGS];
+
+static int __init
+kdb_setup (char *str)
+{
+ kdb_flags |= KDB_FLAG_EARLYKDB;
+ return 1;
+}
+
+__setup("kdb", kdb_setup);
+
+static int
+kdb_ia64_sir (int argc, const char **argv, const char **envp, struct pt_regs *regs)
+{
+ u64 lid, tpr, lrr0, lrr1, itv, pmv, cmcv;
+
+ asm ("mov %0=cr.lid" : "=r"(lid));
+ asm ("mov %0=cr.tpr" : "=r"(tpr));
+ asm ("mov %0=cr.lrr0" : "=r"(lrr0));
+ asm ("mov %0=cr.lrr1" : "=r"(lrr1));
+	printk ("lid=0x%lx, tpr=0x%lx, lrr0=0x%lx, lrr1=0x%lx\n", lid, tpr, lrr0, lrr1);
+
+ asm ("mov %0=cr.itv" : "=r"(itv));
+ asm ("mov %0=cr.pmv" : "=r"(pmv));
+ asm ("mov %0=cr.cmcv" : "=r"(cmcv));
+ printk ("itv=0x%lx, pmv=0x%lx, cmcv=0x%lx\n", itv, pmv, cmcv);
+
+ printk ("irr=0x%016lx,0x%016lx,0x%016lx,0x%016lx\n",
+ ia64_get_irr0(), ia64_get_irr1(), ia64_get_irr2(), ia64_get_irr3());
+ return 0;
+}
+
+void __init
+kdb_init (void)
+{
+ extern void kdb_inittab(void);
+ unsigned long reg;
+
+ kdb_inittab();
+ kdb_initbptab();
+#if 0
+ kdb_disinit();
+#endif
+ kdb_printf("kdb version %d.%d by Scott Lurndal. "\
+ "Copyright SGI, All Rights Reserved\n",
+ KDB_MAJOR_VERSION, KDB_MINOR_VERSION);
+
+ /* Enable debug registers */
+ __asm__ ("mov %0=psr":"=r"(reg));
+ reg |= IA64_PSR_DB;
+ __asm__ ("mov psr.l=%0"::"r"(reg));
+ ia64_srlz_d();
+
+ /* Init kdb state */
+ kdb_state.bkpt_handling_state = BKPTSTATE_NOT_HANDLED ;
+
+ kdb_register("irr", kdb_ia64_sir, "", "Show interrupt registers", 0);
+}
+
+/*
+ * kdbprintf
+ * kdbgetword
+ * kdb_getstr
+ */
+
+char *
+kbd_getstr(char *buffer, size_t bufsize, char *prompt)
+{
+ extern char* kdb_getscancode(char *, size_t);
+
+#if defined(CONFIG_SMP)
+ kdb_printf(prompt, smp_processor_id());
+#else
+ kdb_printf("%s", prompt);
+#endif
+
+ return kdb_getscancode(buffer, bufsize);
+
+}
+
+int
+kdb_printf(const char *fmt, ...)
+{
+ char buffer[256];
+ va_list ap;
+ int diag;
+ int linecount;
+
+ diag = kdbgetintenv("LINES", &linecount);
+ if (diag)
+ linecount = 22;
+
+ va_start(ap, fmt);
+ vsprintf(buffer, fmt, ap);
+ va_end(ap);
+
+ printk("%s", buffer);
+#if 0
+ if (strchr(buffer, '\n') != NULL) {
+ kdb_nextline++;
+ }
+
+ if (kdb_nextline == linecount) {
+ char buf1[16];
+ char buf2[32];
+ extern char* kdb_getscancode(char *, size_t);
+ char *moreprompt;
+
+ /*
+ * Pause until cr.
+ */
+ moreprompt = kdbgetenv("MOREPROMPT");
+ if (moreprompt == NULL) {
+ moreprompt = "more> ";
+ }
+
+#if defined(CONFIG_SMP)
+ if (strchr(moreprompt, '%')) {
+ sprintf(buf2, moreprompt, smp_processor_id());
+ moreprompt = buf2;
+ }
+#endif
+
+ printk(moreprompt);
+ (void) kdb_getscancode(buf1, sizeof(buf1));
+
+ kdb_nextline = 1;
+
+ if ((buf1[0] == 'q')
+ || (buf1[0] == 'Q')) {
+ kdb_longjmp(&kdbjmpbuf, 1);
+ }
+ }
+#endif
+ return 0;
+}
+
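+/* kdbgetword(addr, width) fetches a 1-, 2-, 4-, or 8-byte value after
+ * validating addr: user addresses go through get_user(), kernel addresses
+ * must lie below high_memory or inside a vmalloc region.
+ */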
+unsigned long
+kdbgetword(unsigned long addr, int width)
+{
+ /*
+ * This function checks the address for validity. Any address
+ * in the range PAGE_OFFSET to high_memory is legal, any address
+ * which maps to a vmalloc region is legal, and any address which
+ * is a user address, we use get_user() to verify validity.
+ */
+
+ if (addr < PAGE_OFFSET) {
+ /*
+ * Usermode address.
+ */
+ unsigned long diag;
+ unsigned long ulval;
+
+ switch (width) {
+ case 8:
+ { unsigned long *lp;
+
+ lp = (unsigned long *) addr;
+ diag = get_user(ulval, lp);
+ break;
+ }
+ case 4:
+ { unsigned int *ip;
+
+ ip = (unsigned int *) addr;
+ diag = get_user(ulval, ip);
+ break;
+ }
+ case 2:
+ { unsigned short *sp;
+
+ sp = (unsigned short *) addr;
+ diag = get_user(ulval, sp);
+ break;
+ }
+ case 1:
+ { unsigned char *cp;
+
+ cp = (unsigned char *) addr;
+ diag = get_user(ulval, cp);
+ break;
+ }
+ default:
+ printk("kdbgetword: Bad width\n");
+ return 0L;
+ }
+
+ if (diag) {
+ if ((kdb_flags & KDB_FLAG_SUPRESS) == 0) {
+ printk("kdb: Bad user address 0x%lx\n", addr);
+ kdb_flags |= KDB_FLAG_SUPRESS;
+ }
+ return 0L;
+ }
+ kdb_flags &= ~KDB_FLAG_SUPRESS;
+ return ulval;
+ }
+
+ if (addr > (unsigned long)high_memory) {
+ extern int kdb_vmlist_check(unsigned long, unsigned long);
+
+ if (!kdb_vmlist_check(addr, addr+width)) {
+ /*
+ * Would appear to be an illegal kernel address;
+ * Print a message once, and don't print again until
+ * a legal address is used.
+ */
+ if ((kdb_flags & KDB_FLAG_SUPRESS) == 0) {
+ printk("kdb: Bad kernel address 0x%lx\n", addr);
+ kdb_flags |= KDB_FLAG_SUPRESS;
+ }
+ return 0L;
+ }
+ }
+
+ /*
+ * A good address. Reset error flag.
+ */
+ kdb_flags &= ~KDB_FLAG_SUPRESS;
+
+ switch (width) {
+ case 8:
+ { unsigned long *lp;
+
+ lp = (unsigned long *)(addr);
+ return *lp;
+ }
+ case 4:
+ { unsigned int *ip;
+
+ ip = (unsigned int *)(addr);
+ return *ip;
+ }
+ case 2:
+ { unsigned short *sp;
+
+ sp = (unsigned short *)(addr);
+ return *sp;
+ }
+ case 1:
+ { unsigned char *cp;
+
+ cp = (unsigned char *)(addr);
+ return *cp;
+ }
+ }
+
+ printk("kdbgetword: Bad width\n");
+ return 0L;
+}
+
+/*
+ * Start of breakpoint management routines
+ */
+
+/*
+ * Arg: bp structure
+ */
+
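+/* A data breakpoint claims the first free hardware debug register pair;
+ * entries of dbregs[] holding 0xffffffff (see kdb_initdbregs()) mark free
+ * slots. Instruction breakpoints patch the bundle instead and need no
+ * register, so 0 is returned for them.
+ */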
+int
+kdb_allocdbreg(kdb_bp_t *bp)
+{
+ int i=0;
+
+ /* For inst bkpt, just return. No hw reg alloc to be done. */
+
+ if (bp->bp_mode == BKPTMODE_INST) {
+ return i;
+ } else if (bp->bp_mode == BKPTMODE_DATAW) {
+ for(i=0; i<KDB_DBREGS; i++) {
+ if (dbregs[i] == 0xffffffff) {
+ dbregs[i] = 0;
+ return i;
+ }
+ }
+ }
+
+ return -1;
+}
+
+void
+kdb_freedbreg(kdb_bp_t *bp)
+{
+ if (bp->bp_mode == BKPTMODE_DATAW)
+ dbregs[bp->bp_reg] = 0xffffffff;
+}
+
+void
+kdb_initdbregs(void)
+{
+ int i;
+
+ for(i=0; i<KDB_DBREGS; i++) {
+ dbregs[i] = 0xffffffff;
+ }
+}
+int
+kdbinstalltrap(int type, handler_t newh, handler_t *oldh)
+{
+ /*
+ * Usurp INTn. XXX - TBD.
+ */
+
+ return 0;
+}
+
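+/* Instruction breakpoints save the whole 16-byte bundle and overwrite
+ * slot 0 with a BREAK instruction; the original bundle is restored in
+ * remove_instbkpt() below.
+ */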
+int
+install_instbkpt(kdb_bp_t *bp)
+{
+ unsigned long *addr = (unsigned long *)bp->bp_addr ;
+ bundle_t *bundle = (bundle_t *)bp->bp_longinst;
+
+ /* save current bundle */
+ *bundle = *(bundle_t *)addr ;
+
+ /* Set the break point! */
+ ((bundle_t *)addr)->lform.low8 = (
+ (((bundle_t *)addr)->lform.low8 & ~INST_SLOT0_MASK) |
+ BREAK_INSTR);
+
+ /* set flag */
+ bp->bp_instvalid = 1 ;
+
+ /* flush icache as it is stale now */
+ ia64_flush_icache_page((unsigned long)addr) ;
+
+#ifdef KDB_DEBUG
+ kdb_printf ("[0x%016lx]: install 0x%016lx with 0x%016lx\n",
+ addr, bundle->lform.low8, addr[0]) ;
+#endif
+ return 0 ;
+}
+
+int
+install_databkpt(kdb_bp_t *bp)
+{
+ unsigned long dbreg_addr = bp->bp_reg * 2;
+ unsigned long dbreg_cond = dbreg_addr + 1;
+ unsigned long value = 0x8fffffffffffffff;
+ unsigned long addr = (unsigned long)bp->bp_addr;
+ __asm__ ("mov dbr[%0]=%1"::"r"(dbreg_cond),"r"(value));
+// __asm__ ("movl %0,%%db0\n\t"::"r"(contents));
+ __asm__ ("mov dbr[%0]=%1"::"r"(dbreg_addr),"r"(addr));
+ ia64_insn_group_barrier();
+ ia64_srlz_i();
+ ia64_insn_group_barrier();
+
+#ifdef KDB_DEBUG
+ kdb_printf("installed dbkpt at 0x%016lx\n", addr) ;
+#endif
+ return 0;
+}
+
+int
+kdbinstalldbreg(kdb_bp_t *bp)
+{
+ if (bp->bp_mode == BKPTMODE_INST) {
+ return install_instbkpt(bp) ;
+ } else if (bp->bp_mode == BKPTMODE_DATAW) {
+ return install_databkpt(bp) ;
+ }
+ return 0;
+}
+
+void
+remove_instbkpt(kdb_bp_t *bp)
+{
+ unsigned long *addr = (unsigned long *)bp->bp_addr ;
+ bundle_t *bundle = (bundle_t *)bp->bp_longinst;
+
+ if (!bp->bp_instvalid)
+ /* Nothing to remove. If we just alloced the bkpt
+ * but never resumed, the bp_inst will not be valid. */
+ return ;
+
+#ifdef KDB_DEBUG
+ kdb_printf ("[0x%016lx]: remove 0x%016lx with 0x%016lx\n",
+ addr, addr[0], bundle->lform.low8) ;
+#endif
+
+ /* restore current bundle */
+ *(bundle_t *)addr = *bundle ;
+ /* reset the flag */
+ bp->bp_instvalid = 0 ;
+ ia64_flush_icache_page((unsigned long)addr) ;
+}
+
+void
+remove_databkpt(kdb_bp_t *bp)
+{
+ int regnum = bp->bp_reg ;
+ unsigned long dbreg_addr = regnum * 2;
+ unsigned long dbreg_cond = dbreg_addr + 1;
+ unsigned long value = 0x0fffffffffffffff;
+ __asm__ ("mov dbr[%0]=%1"::"r"(dbreg_cond),"r"(value));
+// __asm__ ("movl %0,%%db0\n\t"::"r"(contents));
+ ia64_insn_group_barrier();
+ ia64_srlz_i();
+ ia64_insn_group_barrier();
+
+#ifdef KDB_DEBUG
+ kdb_printf("removed dbkpt at 0x%016lx\n", bp->bp_addr) ;
+#endif
+}
+
+void
+kdbremovedbreg(kdb_bp_t *bp)
+{
+ if (bp->bp_mode == BKPTMODE_INST) {
+ remove_instbkpt(bp) ;
+ } else if (bp->bp_mode == BKPTMODE_DATAW) {
+ remove_databkpt(bp) ;
+ }
+}
+
+k_machreg_t
+kdb_getdr6(void)
+{
+ return kdb_getdr(6);
+}
+
+k_machreg_t
+kdb_getdr7(void)
+{
+ return kdb_getdr(7);
+}
+
+k_machreg_t
+kdb_getdr(int regnum)
+{
+ k_machreg_t contents = 0;
+ unsigned long reg = (unsigned long)regnum;
+
+ __asm__ ("mov %0=ibr[%1]"::"r"(contents),"r"(reg));
+// __asm__ ("mov ibr[%0]=%1"::"r"(dbreg_cond),"r"(value));
+
+ return contents;
+}
+
+
+k_machreg_t
+kdb_getcr(int regnum)
+{
+ k_machreg_t contents = 0;
+ return contents;
+}
+
+void
+kdb_putdr6(k_machreg_t contents)
+{
+ kdb_putdr(6, contents);
+}
+
+void
+kdb_putdr7(k_machreg_t contents)
+{
+ kdb_putdr(7, contents);
+}
+
+void
+kdb_putdr(int regnum, k_machreg_t contents)
+{
+}
+
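+/*
+ * get_fault_regs
+ *
+ * Read the interruption faulting address (cr.ifa) and the
+ * interruption status register (cr.isr).  The reads are bracketed
+ * by rsm/ssm psr.ic because the interruption control registers are
+ * intended to be accessed with interruption state collection
+ * disabled; srlz.d serializes each psr change.
+ */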
+void
+get_fault_regs(fault_regs_t *fr)
+{
+ fr->ifa = 0 ;
+ fr->isr = 0 ;
+
+ __asm__ ("rsm psr.ic;;") ;
+ ia64_srlz_d();
+ __asm__ ("mov %0=cr.ifa" : "=r"(fr->ifa));
+ __asm__ ("mov %0=cr.isr" : "=r"(fr->isr));
+ __asm__ ("ssm psr.ic;;") ;
+ ia64_srlz_d();
+}
+
+/*
+ * kdb_db_trap
+ *
+ * Perform breakpoint processing upon entry to the
+ * processor debugger fault. Determine and print
+ * the active breakpoint.
+ *
+ * Parameters:
+ * ef Exception frame containing machine register state
+ * reason Why did we enter kdb - fault or break
+ * Outputs:
+ * None.
+ * Returns:
+ * 0 Standard instruction or data breakpoint encountered
+ * 1 Single Step fault ('ss' command)
+ * 2 Single Step fault, caller should continue ('ssb' command)
+ * Locking:
+ * None.
+ * Remarks:
+ * Yup, there be goto's here.
+ */
+
+int
+kdb_db_trap(struct pt_regs *ef, int reason)
+{
+ int i, rv=0;
+
+ /* Trying very hard to not change the interface to kdb.
+ * So, even though we have these values in the fault function,
+ * they are not passed in but are read again here.
+ */
+ fault_regs_t faultregs ;
+
+ if (reason == KDB_REASON_FLTDBG)
+ get_fault_regs(&faultregs) ;
+
+ /* NOTE : XXX: This has to be done only for data bkpts */
+ /* Prevent it from continuously faulting */
+ ef->cr_ipsr |= 0x0000002000000000;
+
+ if (ef->cr_ipsr & 0x0000010000000000) {
+ /* single step */
+ ef->cr_ipsr &= 0xfffffeffffffffff;
+ if ((kdb_state.bkpt_handling_state == BKPTSTATE_HANDLED)
+ && (kdb_state.cmd_given == CMDGIVEN_GO))
+ ;
+ else
+ kdb_printf("SS trap at 0x%lx\n", ef->cr_iip + ia64_psr(ef)->ri);
+ rv = 1;
+ kdb_state.reason_for_entry = ENTRYREASON_SSTEP ;
+ goto handled;
+ } else
+ kdb_state.reason_for_entry = ENTRYREASON_GO ;
+
+ /*
+ * Determine which breakpoint was encountered.
+ */
+ for(i=0; i<KDB_MAXBPT; i++) {
+ if ((breakpoints[i].bp_enabled)
+ && ((breakpoints[i].bp_addr == ef->cr_iip) ||
+ ((faultregs.ifa) &&
+ (breakpoints[i].bp_addr == faultregs.ifa)))) {
+ /*
+ * Hit this breakpoint.  Remove it while we are
+ * handling the hit, to avoid recursion.  XXX ??
+ */
+ if (breakpoints[i].bp_addr == faultregs.ifa)
+ kdb_printf("Data breakpoint #%d for 0x%lx at 0x%lx\n",
+ i, breakpoints[i].bp_addr, ef->cr_iip + ia64_psr(ef)->ri);
+ else
+ kdb_printf("%s breakpoint #%d at 0x%lx\n",
+ rwtypes[0],
+ i, breakpoints[i].bp_addr);
+
+ /*
+ * For an instruction breakpoint, disassemble
+ * the current instruction.
+ */
+#if 0
+ if (rw == 0) {
+ kdb_id1(ef->eip);
+ }
+#endif
+
+ goto handled;
+ }
+ }
+
+#if 0
+unknown:
+#endif
+ kdb_printf("Unknown breakpoint. Should forward. \n");
+ /* Need a flag for this. The skip should be done XXX
+ * when a go or single step command is done for this session.
+ * For now it is here.
+ */
+ ia64_increment_ip(ef) ;
+ return rv ;
+
+handled:
+
+ /* We are here after handling a break inst/data bkpt */
+ if (kdb_state.bkpt_handling_state == BKPTSTATE_NOT_HANDLED) {
+ kdb_state.bkpt_handling_state = BKPTSTATE_HANDLED ;
+ if (kdb_state.reason_for_entry == ENTRYREASON_GO) {
+ kdb_setsinglestep(ef) ;
+ kdb_state.kdb_action = ACTION_NOBPINSTALL;
+ /* We don't want breakpoints installed just this once */
+ kdb_state.cmd_given = CMDGIVEN_UNKNOWN ;
+ }
+ } else if (kdb_state.bkpt_handling_state == BKPTSTATE_HANDLED) {
+ kdb_state.bkpt_handling_state = BKPTSTATE_NOT_HANDLED ;
+ if (kdb_state.reason_for_entry == ENTRYREASON_SSTEP) {
+ if (kdb_state.cmd_given == CMDGIVEN_GO)
+ kdb_state.kdb_action = ACTION_NOPROMPT ;
+ kdb_state.cmd_given = CMDGIVEN_UNKNOWN ;
+ }
+ } else
+ kdb_printf("Unknown value of bkpt state\n") ;
+
+ return rv;
+
+}
+
+void
+kdb_setsinglestep(struct pt_regs *regs)
+{
+ regs->cr_ipsr |= 0x0000010000000000;
+#if 0
+ regs->eflags |= EF_TF;
+#endif
+}
+
+/*
+ * Symbol table functions.
+ */
+
+/*
+ * kdbgetsym
+ *
+ * Return the symbol table entry for the given symbol
+ *
+ * Parameters:
+ * symname Character string containing symbol name
+ * Outputs:
+ * Returns:
+ * NULL Symbol doesn't exist
+ * ksp Pointer to symbol table entry
+ * Locking:
+ * None.
+ * Remarks:
+ */
+
+__ksymtab_t *
+kdbgetsym(const char *symname)
+{
+ __ksymtab_t *ksp = __kdbsymtab;
+ int i;
+
+ if (symname == NULL)
+ return NULL;
+
+ for (i=0; i<__kdbsymtabsize; i++, ksp++) {
+ if (ksp->name && (strcmp(ksp->name, symname)==0)) {
+ return ksp;
+ }
+ }
+
+ return NULL;
+}
+
+/*
+ * kdbgetsymval
+ *
+ * Return the address of the given symbol.
+ *
+ * Parameters:
+ * symname Character string containing symbol name
+ * Outputs:
+ * Returns:
+ * 0 Symbol name is NULL
+ * addr Address corresponding to symname
+ * Locking:
+ * None.
+ * Remarks:
+ */
+
+unsigned long
+kdbgetsymval(const char *symname)
+{
+ __ksymtab_t *ksp = kdbgetsym(symname);
+
+ return (ksp?ksp->value:0);
+}
+
+/*
+ * kdbaddmodsym
+ *
+ * Add a symbol to the kernel debugger symbol table. Called when
+ * a new module is loaded into the kernel.
+ *
+ * Parameters:
+ * symname Character string containing symbol name
+ * value Value of symbol
+ * Outputs:
+ * Returns:
+ * 0 Successfully added to table.
+ * 1 Duplicate symbol
+ * 2 Symbol table full
+ * Locking:
+ * None.
+ * Remarks:
+ */
+
+int
+kdbaddmodsym(char *symname, unsigned long value)
+{
+
+ /*
+ * Check for duplicate symbols.
+ */
+ if (kdbgetsym(symname)) {
+ printk("kdb: Attempt to register duplicate symbol '%s' @ 0x%lx\n",
+ symname, value);
+ return 1;
+ }
+
+ if (__kdbsymtabsize < __kdbmaxsymtabsize) {
+ __ksymtab_t *ksp = &__kdbsymtab[__kdbsymtabsize++];
+
+ ksp->name = symname;
+ ksp->value = value;
+ return 0;
+ }
+
+ /*
+ * No room left in kernel symbol table.
+ */
+ {
+ static int __kdbwarn = 0;
+
+ if (__kdbwarn == 0) {
+ __kdbwarn++;
+ printk("kdb: Exceeded symbol table size. Increase CONFIG_KDB_SYMTAB_SIZE in kernel configuration\n");
+ }
+ }
+
+ return 2;
+}
+
+/*
+ * kdbdelmodsym
+ *
+ * Remove a symbol from the kernel debugger symbol table. Called when
+ * a module is unloaded from the kernel.
+ *
+ * Parameters:
+ * symname Character string containing symbol name
+ * Outputs:
+ * Returns:
+ * 0 Successfully removed from table.
+ * 1 Symbol not found
+ * Locking:
+ * None.
+ * Remarks:
+ */
+
+int
+kdbdelmodsym(const char *symname)
+{
+ __ksymtab_t *ksp, *endksp;
+
+ if (symname == NULL)
+ return 1;
+
+ /*
+ * Search for the symbol. If found, move
+ * all successive symbols down one position
+ * in the symbol table to avoid leaving holes.
+ */
+ endksp = &__kdbsymtab[__kdbsymtabsize];
+ for (ksp = __kdbsymtab; ksp < endksp; ksp++) {
+ if (ksp->name && (strcmp(ksp->name, symname) == 0)) {
+ endksp--;
+ for ( ; ksp < endksp; ksp++) {
+ *ksp = *(ksp + 1);
+ }
+ __kdbsymtabsize--;
+ return 0;
+ }
+ }
+
+ return 1;
+}
+
+/*
+ * kdbnearsym
+ *
+ * Return the name of the symbol with the nearest address
+ * less than 'addr'.
+ *
+ * Parameters:
+ * addr Address to check for symbol near
+ * Outputs:
+ * Returns:
+ * NULL No symbol with address less than 'addr'
+ * symbol Returns the actual name of the symbol.
+ * Locking:
+ * None.
+ * Remarks:
+ */
+
+char *
+kdbnearsym(unsigned long addr)
+{
+ __ksymtab_t *ksp = __kdbsymtab;
+ __ksymtab_t *kpp = NULL;
+ int i;
+
+ for(i=0; i<__kdbsymtabsize; i++, ksp++) {
+ if (!ksp->name)
+ continue;
+
+ if (addr == ksp->value) {
+ kpp = ksp;
+ break;
+ }
+ if (addr > ksp->value) {
+ if ((kpp == NULL)
+ || (ksp->value > kpp->value)) {
+ kpp = ksp;
+ }
+ }
+ }
+
+ /*
+ * If more than 128k away, don't bother.
+ */
+ if ((kpp == NULL)
+ || ((addr - kpp->value) > 0x20000)) {
+ return NULL;
+ }
+
+ return kpp->name;
+}
+
+/*
+ * kdbgetregcontents
+ *
+ * Return the contents of the register specified by the
+ * input string argument. Return an error if the string
+ * does not match a machine register.
+ *
+ * The following pseudo register names are supported:
+ * &regs - Prints address of exception frame
+ * kesp - Prints kernel stack pointer at time of fault
+ * sstk - Prints switch stack for ia64
+ * %<regname> - Uses the value of the registers at the
+ * last time the user process entered kernel
+ * mode, instead of the registers at the time
+ * kdb was entered.
+ *
+ * Parameters:
+ * regname Pointer to string naming register
+ * regs Pointer to structure containing registers.
+ * Outputs:
+ * *contents Pointer to unsigned long to receive register contents
+ * Returns:
+ * 0 Success
+ * KDB_BADREG Invalid register name
+ * Locking:
+ * None.
+ * Remarks:
+ *
+ * Note that this function is really machine independent. The kdb
+ * register list is not, however.
+ */
+
+static struct kdbregs {
+ char *reg_name;
+ size_t reg_offset;
+} kdbreglist[] = {
+ { " psr", offsetof(struct pt_regs, cr_ipsr) },
+ { " ifs", offsetof(struct pt_regs, cr_ifs) },
+ { " ip", offsetof(struct pt_regs, cr_iip) },
+
+ { "unat", offsetof(struct pt_regs, ar_unat) },
+ { " pfs", offsetof(struct pt_regs, ar_pfs) },
+ { " rsc", offsetof(struct pt_regs, ar_rsc) },
+
+ { "rnat", offsetof(struct pt_regs, ar_rnat) },
+ { "bsps", offsetof(struct pt_regs, ar_bspstore) },
+ { " pr", offsetof(struct pt_regs, pr) },
+
+ { "ldrs", offsetof(struct pt_regs, loadrs) },
+ { " ccv", offsetof(struct pt_regs, ar_ccv) },
+ { "fpsr", offsetof(struct pt_regs, ar_fpsr) },
+
+ { " b0", offsetof(struct pt_regs, b0) },
+ { " b6", offsetof(struct pt_regs, b6) },
+ { " b7", offsetof(struct pt_regs, b7) },
+
+ { " r1",offsetof(struct pt_regs, r1) },
+ { " r2",offsetof(struct pt_regs, r2) },
+ { " r3",offsetof(struct pt_regs, r3) },
+
+ { " r8",offsetof(struct pt_regs, r8) },
+ { " r9",offsetof(struct pt_regs, r9) },
+ { " r10",offsetof(struct pt_regs, r10) },
+
+ { " r11",offsetof(struct pt_regs, r11) },
+ { " r12",offsetof(struct pt_regs, r12) },
+ { " r13",offsetof(struct pt_regs, r13) },
+
+ { " r14",offsetof(struct pt_regs, r14) },
+ { " r15",offsetof(struct pt_regs, r15) },
+ { " r16",offsetof(struct pt_regs, r16) },
+
+ { " r17",offsetof(struct pt_regs, r17) },
+ { " r18",offsetof(struct pt_regs, r18) },
+ { " r19",offsetof(struct pt_regs, r19) },
+
+ { " r20",offsetof(struct pt_regs, r20) },
+ { " r21",offsetof(struct pt_regs, r21) },
+ { " r22",offsetof(struct pt_regs, r22) },
+
+ { " r23",offsetof(struct pt_regs, r23) },
+ { " r24",offsetof(struct pt_regs, r24) },
+ { " r25",offsetof(struct pt_regs, r25) },
+
+ { " r26",offsetof(struct pt_regs, r26) },
+ { " r27",offsetof(struct pt_regs, r27) },
+ { " r28",offsetof(struct pt_regs, r28) },
+
+ { " r29",offsetof(struct pt_regs, r29) },
+ { " r30",offsetof(struct pt_regs, r30) },
+ { " r31",offsetof(struct pt_regs, r31) },
+
+};
+
+static const int nkdbreglist = sizeof(kdbreglist) / sizeof(struct kdbregs);
+
+int
+kdbgetregcontents(const char *regname,
+ struct pt_regs *regs,
+ unsigned long *contents)
+{
+ int i;
+
+ if (strcmp(regname, "®s") == 0) {
+ *contents = (unsigned long)regs;
+ return 0;
+ }
+
+ if (strcmp(regname, "sstk") == 0) {
+ *contents = (unsigned long)getprsregs(regs) ;
+ return 0;
+ }
+
+ if (strcmp(regname, "isr") == 0) {
+ fault_regs_t fr ;
+ get_fault_regs(&fr) ;
+ *contents = fr.isr ;
+ return 0 ;
+ }
+
+#if 0
+ /* XXX need to verify this */
+ if (strcmp(regname, "kesp") == 0) {
+ *contents = (unsigned long)regs + sizeof(struct pt_regs);
+ return 0;
+ }
+
+ if (regname[0] == '%') {
+ /* User registers: %%e[a-c]x, etc */
+ regname++;
+ regs = (struct pt_regs *)
+ (current->thread.ksp - sizeof(struct pt_regs));
+ }
+#endif
+
+ for (i=0; i<nkdbreglist; i++) {
+ if (strstr(kdbreglist[i].reg_name, regname))
+ break;
+ }
+
+ if (i == nkdbreglist) {
+ /* Lets check the rse maybe */
+ if (regname[0] == 'r')
+ if (show_cur_stack_frame(regs, simple_strtoul(regname+1, 0, 0) - 31,
+ contents))
+ return 0 ;
+ return KDB_BADREG;
+ }
+
+ *contents = *(unsigned long *)((unsigned long)regs +
+ kdbreglist[i].reg_offset);
+
+ return 0;
+}
+
+/*
+ * kdbsetregcontents
+ *
+ * Set the contents of the register specified by the
+ * input string argument. Return an error if the string
+ * does not match a machine register.
+ *
+ * Supports modification of user-mode registers via
+ * %<register-name>
+ *
+ * Parameters:
+ * regname Pointer to string naming register
+ * regs Pointer to structure containing registers.
+ * contents Unsigned long containing new register contents
+ * Outputs:
+ * Returns:
+ * 0 Success
+ * KDB_BADREG Invalid register name
+ * Locking:
+ * None.
+ * Remarks:
+ */
+
+int
+kdbsetregcontents(const char *regname,
+ struct pt_regs *regs,
+ unsigned long contents)
+{
+ int i;
+
+ if (regname[0] == '%') {
+ regname++;
+ regs = (struct pt_regs *)
+ (current->thread.ksp - sizeof(struct pt_regs));
+ }
+
+ for (i=0; i<nkdbreglist; i++) {
+ if (strnicmp(kdbreglist[i].reg_name,
+ regname,
+ strlen(regname)) == 0)
+ break;
+ }
+
+ if ((i == nkdbreglist)
+ || (strlen(kdbreglist[i].reg_name) != strlen(regname))) {
+ return KDB_BADREG;
+ }
+
+ *(unsigned long *)((unsigned long)regs + kdbreglist[i].reg_offset) =
+ contents;
+
+ return 0;
+}
+
+/*
+ * kdbdumpregs
+ *
+ * Dump the specified register set to the display.
+ *
+ * Parameters:
+ * regs Pointer to structure containing registers.
+ * type Character string identifying register set to dump
+ * extra string further identifying register (optional)
+ * Outputs:
+ * Returns:
+ * 0 Success
+ * Locking:
+ * None.
+ * Remarks:
+ * This function will dump the general register set if the type
+ * argument is NULL (struct pt_regs). The alternate register
+ * set types supported by this function:
+ *
+ * d Debug registers
+ * c Control registers
+ * u User registers at most recent entry to kernel
+ * Following not yet implemented:
+ * m Model Specific Registers (extra defines register #)
+ * r Memory Type Range Registers (extra defines register)
+ *
+ * For now, all registers are covered as follows:
+ *
+ * rd - dumps all regs
+ * rd %isr - current interrupt status reg, read freshly
+ * rd s - valid stacked regs
+ * rd %sstk - gets switch stack addr. dump memory and search
+ * rd d - debug regs, may not be too useful
+ *
+ * ARs TB Done
+ * Interrupt regs TB Done ??
+ * OTHERS TB Decided ??
+ *
+ * Intel wish list
+ * These will be implemented later - Srinivasa
+ *
+ * type action
+ * ---- ------
+ * g dump all General static registers
+ * s dump all general Stacked registers
+ * f dump all Floating Point registers
+ * p dump all Predicate registers
+ * b dump all Branch registers
+ * a dump all Application registers
+ * c dump all Control registers
+ *
+ */
+
+int
+kdbdumpregs(struct pt_regs *regs,
+ const char *type,
+ const char *extra)
+
+{
+ int i;
+ int count = 0;
+
+ if (type
+ && (type[0] == 'u')) {
+ type = NULL;
+ regs = (struct pt_regs *)
+ (current->thread.ksp - sizeof(struct pt_regs));
+ }
+
+ if (type == NULL) {
+ for (i=0; i<nkdbreglist; i++) {
+ kdb_printf("%s: 0x%16.16lx ",
+ kdbreglist[i].reg_name,
+ *(unsigned long *)((unsigned long)regs +
+ kdbreglist[i].reg_offset));
+
+ if ((++count % 3) == 0)
+ kdb_printf("\n");
+ }
+
+ kdb_printf("®s = 0x%16.16lx\n", regs);
+
+ return 0;
+ }
+
+ switch (type[0]) {
+ case 'd':
+ {
+ for(i=0; i<8; i+=2) {
+ kdb_printf("idr%d: 0x%16.16lx idr%d: 0x%16.16lx\n", i,
+ kdb_getdr(i), i+1, kdb_getdr(i+1));
+
+ }
+ return 0;
+ }
+#if 0
+ case 'c':
+ {
+ unsigned long cr[5];
+
+ for (i=0; i<5; i++) {
+ cr[i] = kdb_getcr(i);
+ }
+ kdb_printf("cr0 = 0x%8.8x cr1 = 0x%8.8x cr2 = 0x%8.8x cr3 = 0x%8.8x\ncr4 = 0x%8.8x\n",
+ cr[0], cr[1], cr[2], cr[3], cr[4]);
+ return 0;
+ }
+#endif
+ case 'm':
+ break;
+ case 'r':
+ break;
+
+ case 's':
+ {
+ show_cur_stack_frame(regs, 0, NULL) ;
+
+ return 0 ;
+ }
+
+ case '%':
+ {
+ unsigned long contents ;
+
+ if (!kdbgetregcontents(type+1, regs, &contents))
+ kdb_printf("%s = 0x%16.16lx\n", type+1, contents) ;
+ else
+ kdb_printf("diag: Invalid register %s\n", type+1) ;
+
+ return 0 ;
+ }
+
+ default:
+ return KDB_BADREG;
+ }
+
+ /* NOTREACHED */
+ return 0;
+}
+
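+/*
+ * kdb_getpc/kdb_setpc
+ *
+ * On IA-64 the program counter is a 16-byte-aligned bundle address
+ * (cr.iip) plus a slot number 0-2 held in psr.ri.  kdb folds both
+ * into a single value: the low bits carry the slot, the rest the
+ * bundle address.  For example, a pc of 0xe000000000501232 denotes
+ * slot 2 of the bundle at 0xe000000000501230.
+ */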
+k_machreg_t
+kdb_getpc(struct pt_regs *regs)
+{
+ return regs->cr_iip + ia64_psr(regs)->ri;
+}
+
+int
+kdb_setpc(struct pt_regs *regs, k_machreg_t newpc)
+{
+ regs->cr_iip = newpc & ~0xf;
+ ia64_psr(regs)->ri = newpc & 0x3;
+ return 0;
+}
+
+void
+kdb_disableint(kdbintstate_t *state)
+{
+ int *fp = (int *)state;
+ int flags;
+
+ __save_flags(flags);
+ __cli();
+
+ *fp = flags;
+}
+
+void
+kdb_restoreint(kdbintstate_t *state)
+{
+ int flags = *(int *)state;
+ __restore_flags(flags);
+}
+
+int
+kdb_putword(unsigned long addr, unsigned long contents)
+{
+ *(unsigned long *)addr = contents;
+ return 0;
+}
+
+int
+kdb_getcurrentframe(struct pt_regs *regs)
+{
+#if 0
+ regs->xcs = 0;
+#if defined(CONFIG_KDB_FRAMEPTR)
+ asm volatile("movl %%ebp,%0":"=m" (*(int *)®s->ebp));
+#endif
+ asm volatile("movl %%esp,%0":"=m" (*(int *)®s->esp));
+#endif
+ return 0;
+}
+
+unsigned long
+show_cur_stack_frame(struct pt_regs *regs, int regno, unsigned long *contents)
+{
+ long sof = regs->cr_ifs & ((1<<7)-1) ; /* size of frame */
+ unsigned long i ;
+ int j;
+ struct switch_stack *prs_regs = getprsregs(regs) ;
+ unsigned long *sofptr = (prs_regs? ia64_rse_skip_regs(
+ (unsigned long *)prs_regs->ar_bspstore, -sof) : NULL) ;
+
+ if (!sofptr) {
+ printk("Unable to display Current Stack Frame\n") ;
+ return 0 ;
+ }
+
+ if (regno < 0)
+ return 0 ;
+
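+ /*
+ * Walk the registers of the current frame in the RSE backing
+ * store.  Every 64th backing store slot (one whose address has
+ * bits 8:3 all set) holds the RNaT collection rather than a
+ * stacked register, so it is skipped as we go.
+ */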
+ for (i=sof, j=0;i;i--,j++) {
+ /* remember to skip the nat collection dword */
+ if ((((unsigned long)sofptr>>3) & (((1<<6)-1)))
+ == ((1<<6)-1))
+ sofptr++ ;
+
+ /* return the value in the reg if regno is non zero */
+
+ if (regno) {
+ if ((j+1) == regno) {
+ if (contents)
+ *contents = *sofptr ;
+ return -1;
+ }
+ sofptr++ ;
+ } else {
+ printk(" r%d: %016lx ", 32+j, *sofptr++) ;
+ if (!((j+1)%3)) printk("\n") ;
+ }
+ }
+
+ if (regno) {
+ if (!i) /* bogus rse number */
+ return 0 ;
+ } else
+ printk("\n") ;
+
+ return 0 ;
+}
--- /dev/null
+/*
+ * linux/drivers/char/pc_keyb.h
+ *
+ * PC Keyboard And Keyboard Controller
+ *
+ * (c) 1997 Martin Mares <mj@atrey.karlin.mff.cuni.cz>
+ */
+
+/*
+ * Configuration Switches
+ */
+
+#undef KBD_REPORT_ERR /* Report keyboard errors */
+#define KBD_REPORT_UNKN /* Report unknown scan codes */
+#define KBD_REPORT_TIMEOUTS /* Report keyboard timeouts */
+#undef KBD_IS_FOCUS_9000 /* We have the brain-damaged FOCUS-9000 keyboard */
+#undef INITIALIZE_MOUSE /* Define if your PS/2 mouse needs initialization. */
+
+
+
+#define KBD_INIT_TIMEOUT 1000 /* Timeout in ms for initializing the keyboard */
+#define KBC_TIMEOUT 250 /* Timeout in ms for sending to keyboard controller */
+#define KBD_TIMEOUT 1000 /* Timeout in ms for keyboard command acknowledge */
+
+/*
+ * Internal variables of the driver
+ */
+
+extern unsigned char pckbd_read_mask;
+extern unsigned char aux_device_present;
+
+/*
+ * Keyboard Controller Registers
+ */
+
+#define KBD_STATUS_REG 0x64 /* Status register (R) */
+#define KBD_CNTL_REG 0x64 /* Controller command register (W) */
+#define KBD_DATA_REG 0x60 /* Keyboard data register (R/W) */
+
+/*
+ * Keyboard Controller Commands
+ */
+
+#define KBD_CCMD_READ_MODE 0x20 /* Read mode bits */
+#define KBD_CCMD_WRITE_MODE 0x60 /* Write mode bits */
+#define KBD_CCMD_GET_VERSION 0xA1 /* Get controller version */
+#define KBD_CCMD_MOUSE_DISABLE 0xA7 /* Disable mouse interface */
+#define KBD_CCMD_MOUSE_ENABLE 0xA8 /* Enable mouse interface */
+#define KBD_CCMD_TEST_MOUSE 0xA9 /* Mouse interface test */
+#define KBD_CCMD_SELF_TEST 0xAA /* Controller self test */
+#define KBD_CCMD_KBD_TEST 0xAB /* Keyboard interface test */
+#define KBD_CCMD_KBD_DISABLE 0xAD /* Keyboard interface disable */
+#define KBD_CCMD_KBD_ENABLE 0xAE /* Keyboard interface enable */
+#define KBD_CCMD_WRITE_AUX_OBUF 0xD3 /* Write to output buffer as if
+ initiated by the auxiliary device */
+#define KBD_CCMD_WRITE_MOUSE 0xD4 /* Write the following byte to the mouse */
+
+/*
+ * Keyboard Commands
+ */
+
+#define KBD_CMD_SET_LEDS 0xED /* Set keyboard leds */
+#define KBD_CMD_SET_RATE 0xF3 /* Set typematic rate */
+#define KBD_CMD_ENABLE 0xF4 /* Enable scanning */
+#define KBD_CMD_DISABLE 0xF5 /* Disable scanning */
+#define KBD_CMD_RESET 0xFF /* Reset */
+
+/*
+ * Keyboard Replies
+ */
+
+#define KBD_REPLY_POR 0xAA /* Power on reset */
+#define KBD_REPLY_ACK 0xFA /* Command ACK */
+#define KBD_REPLY_RESEND 0xFE /* Command NACK, send the cmd again */
+
+/*
+ * Status Register Bits
+ */
+
+#define KBD_STAT_OBF 0x01 /* Keyboard output buffer full */
+#define KBD_STAT_IBF 0x02 /* Keyboard input buffer full */
+#define KBD_STAT_SELFTEST 0x04 /* Self test successful */
+#define KBD_STAT_CMD 0x08 /* Last write was a command write (0=data) */
+#define KBD_STAT_UNLOCKED 0x10 /* Zero if keyboard locked */
+#define KBD_STAT_MOUSE_OBF 0x20 /* Mouse output buffer full */
+#define KBD_STAT_GTO 0x40 /* General receive/xmit timeout */
+#define KBD_STAT_PERR 0x80 /* Parity error */
+
+#define AUX_STAT_OBF (KBD_STAT_OBF | KBD_STAT_MOUSE_OBF)
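+
+/*
+ * Typical use of the status/data register definitions above (an
+ * illustrative sketch, not code from this driver): poll the status
+ * register until the output buffer is full, then read the byte from
+ * the data register.  KBD_STAT_MOUSE_OBF tells mouse bytes apart
+ * from keyboard bytes.
+ *
+ *     while (!(inb(KBD_STATUS_REG) & KBD_STAT_OBF))
+ *             ;       (a real driver would also enforce a timeout)
+ *     scancode = inb(KBD_DATA_REG);
+ */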
+
+/*
+ * Controller Mode Register Bits
+ */
+
+#define KBD_MODE_KBD_INT 0x01 /* Keyboard data generate IRQ1 */
+#define KBD_MODE_MOUSE_INT 0x02 /* Mouse data generate IRQ12 */
+#define KBD_MODE_SYS 0x04 /* The system flag (?) */
+#define KBD_MODE_NO_KEYLOCK 0x08 /* The keylock doesn't affect the keyboard if set */
+#define KBD_MODE_DISABLE_KBD 0x10 /* Disable keyboard interface */
+#define KBD_MODE_DISABLE_MOUSE 0x20 /* Disable mouse interface */
+#define KBD_MODE_KCC 0x40 /* Scan code conversion to PC format */
+#define KBD_MODE_RFU 0x80
+
+/*
+ * Mouse Commands
+ */
+
+#define AUX_SET_RES 0xE8 /* Set resolution */
+#define AUX_SET_SCALE11 0xE6 /* Set 1:1 scaling */
+#define AUX_SET_SCALE21 0xE7 /* Set 2:1 scaling */
+#define AUX_GET_SCALE 0xE9 /* Get scaling factor */
+#define AUX_SET_STREAM 0xEA /* Set stream mode */
+#define AUX_SET_SAMPLE 0xF3 /* Set sample rate */
+#define AUX_ENABLE_DEV 0xF4 /* Enable aux device */
+#define AUX_DISABLE_DEV 0xF5 /* Disable aux device */
+#define AUX_RESET 0xFF /* Reset aux device */
+
+#define AUX_BUF_SIZE 2048
+
+struct aux_queue {
+ unsigned long head;
+ unsigned long tail;
+ struct wait_queue *proc_list;
+ struct fasync_struct *fasync;
+ unsigned char buf[AUX_BUF_SIZE];
+};
+
--- /dev/null
+#
+# Makefile for the linux kernel.
+#
+# Note! Dependencies are done automagically by 'make dep', which also
+# removes any old dependencies. DON'T put your own dependencies here
+# unless it's something special (ie not a .c file).
+#
+# Note 2! The CFLAGS definitions are now in the main makefile...
+
+.S.s:
+ $(CC) -D__ASSEMBLY__ $(AFLAGS) -E -o $*.s $<
+.S.o:
+ $(CC) -D__ASSEMBLY__ $(AFLAGS) -c -o $*.o $<
+
+all: kernel.o head.o init_task.o
+
+O_TARGET := kernel.o
+O_OBJS := acpi.o entry.o gate.o efi.o efi_stub.o irq.o irq_default.o irq_internal.o ivt.o \
+ pal.o process.o perfmon.o ptrace.o sal.o sal_stub.o semaphore.o setup.o signal.o \
+ sys_ia64.o traps.o time.o unaligned.o unwind.o
+#O_OBJS := fpreg.o
+#OX_OBJS := ia64_ksyms.o
+
+ifeq ($(CONFIG_IA64_GENERIC),y)
+O_OBJS += machvec.o
+endif
+
+ifdef CONFIG_PCI
+O_OBJS += pci.o
+endif
+
+ifdef CONFIG_SMP
+O_OBJS += smp.o irq_lock.o
+endif
+
+ifeq ($(CONFIG_MCA),y)
+O_OBJS += mca.o mca_asm.o
+endif
+
+clean::
+
+include $(TOPDIR)/Rules.make
--- /dev/null
+/*
+ * Advanced Configuration and Power Interface
+ *
+ * Based on 'ACPI Specification 1.0b' February 2, 1999 and
+ * 'IA-64 Extensions to ACPI Specification' Revision 0.6
+ *
+ * Copyright (C) 1999 VA Linux Systems
+ * Copyright (C) 1999,2000 Walt Drummond <drummond@valinux.com>
+ */
+
+#include <linux/config.h>
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/smp.h>
+#include <linux/string.h>
+#include <linux/types.h>
+
+#include <asm/acpi-ext.h>
+#include <asm/page.h>
+#include <asm/efi.h>
+#include <asm/io.h>
+#include <asm/iosapic.h>
+#include <asm/irq.h>
+
+#undef ACPI_DEBUG /* Guess what this does? */
+
+#ifdef CONFIG_SMP
+extern unsigned long ipi_base_addr;
+#endif
+
+/* These are ugly but will be reclaimed by the kernel */
+int __initdata acpi_cpus = 0;
+int __initdata acpi_apic_map[32];
+int __initdata cpu_cnt = 0;
+
+void (*acpi_idle) (void);
+
+/*
+ * Identify usable CPUs and remember them for SMP bringup later.
+ */
+static void __init
+acpi_lsapic(char *p)
+{
+ int add = 1;
+
+ acpi_entry_lsapic_t *lsapic = (acpi_entry_lsapic_t *) p;
+
+ if ((lsapic->flags & LSAPIC_PRESENT) == 0)
+ return;
+
+ printk(" CPU %d (%.04x:%.04x): ", cpu_cnt, lsapic->eid, lsapic->id);
+
+ if ((lsapic->flags & LSAPIC_ENABLED) == 0) {
+ printk("Disabled.\n");
+ add = 0;
+ } else if (lsapic->flags & LSAPIC_PERFORMANCE_RESTRICTED) {
+ printk("Performance Restricted; ignoring.\n");
+ add = 0;
+ }
+
+ if (add) {
+ printk("Available.\n");
+ acpi_cpus++;
+ acpi_apic_map[cpu_cnt] = (lsapic->id << 8) | lsapic->eid;
+ }
+
+ cpu_cnt++;
+}
+
+/*
+ * Find all IOSAPICs and tag the iosapic_vector structure with the appropriate
+ * base addresses.
+ */
+static void __init
+acpi_iosapic(char *p)
+{
+ /*
+ * This is not good. ACPI is not necessarily limited to CONFIG_IA64_SV, yet
+ * ACPI does not necessarily imply IOSAPIC either. Perhaps there should be
+ * a means for platform_setup() to register ACPI handlers?
+ */
+#ifdef CONFIG_IA64_DIG
+ acpi_entry_iosapic_t *iosapic = (acpi_entry_iosapic_t *) p;
+ unsigned int ver;
+ int l, v, pins;
+
+ ver = iosapic_version(iosapic->address);
+ pins = (ver >> 16) & 0xff;
+
+ printk("IOSAPIC Version %x.%x: address 0x%lx IRQs 0x%x - 0x%x\n",
+ (ver & 0xf0) >> 4, (ver & 0x0f), iosapic->address,
+ iosapic->irq_base, iosapic->irq_base + pins);
+
+ for (l = 0; l < pins; l++) {
+ v = map_legacy_irq(iosapic->irq_base + l);
+ if (v > IA64_MAX_VECTORED_IRQ) {
+ printk(" !!! IRQ %d > 255\n", v);
+ continue;
+ }
+ /* XXX Check for IOSAPIC collisions */
+ iosapic_addr(v) = (unsigned long) ioremap(iosapic->address, 0);
+ iosapic_baseirq(v) = iosapic->irq_base;
+ }
+ iosapic_init(iosapic->address);
+#endif
+}
+
+
+/*
+ * Configure legacy IRQ information in iosapic_vector
+ */
+static void __init
+acpi_legacy_irq(char *p)
+{
+ /*
+ * This is not good. ACPI is not necessarily limited to CONFIG_IA64_SV, yet
+ * ACPI does not necessarily imply IOSAPIC either. Perhaps there should be
+ * a means for platform_setup() to register ACPI handlers?
+ */
+#ifdef CONFIG_IA64_IRQ_ACPI
+ acpi_entry_int_override_t *legacy = (acpi_entry_int_override_t *) p;
+ unsigned char vector;
+ int i;
+
+ vector = map_legacy_irq(legacy->isa_irq);
+
+ /*
+ * Clobber any old pin mapping. It may be that it gets replaced later on
+ */
+ for (i = 0; i < IA64_MAX_VECTORED_IRQ; i++) {
+ if (i == vector)
+ continue;
+ if (iosapic_pin(i) == iosapic_pin(vector))
+ iosapic_pin(i) = 0xff;
+ }
+
+ iosapic_pin(vector) = legacy->pin;
+ iosapic_bus(vector) = BUS_ISA; /* This table only overrides the ISA devices */
+ iosapic_busdata(vector) = 0;
+
+ /*
+ * External timer tick is special...
+ */
+ if (vector != TIMER_IRQ)
+ iosapic_dmode(vector) = IO_SAPIC_LOWEST_PRIORITY;
+ else
+ iosapic_dmode(vector) = IO_SAPIC_FIXED;
+
+ /* See MPS 1.4 section 4.3.4 */
+ switch (legacy->flags) {
+ case 0x5:
+ iosapic_polarity(vector) = IO_SAPIC_POL_HIGH;
+ iosapic_trigger(vector) = IO_SAPIC_EDGE;
+ break;
+ case 0x8:
+ iosapic_polarity(vector) = IO_SAPIC_POL_LOW;
+ iosapic_trigger(vector) = IO_SAPIC_EDGE;
+ break;
+ case 0xd:
+ iosapic_polarity(vector) = IO_SAPIC_POL_HIGH;
+ iosapic_trigger(vector) = IO_SAPIC_LEVEL;
+ break;
+ case 0xf:
+ iosapic_polarity(vector) = IO_SAPIC_POL_LOW;
+ iosapic_trigger(vector) = IO_SAPIC_LEVEL;
+ break;
+ default:
+ printk(" ACPI Legacy IRQ 0x%02x: Unknown flags 0x%x\n", legacy->isa_irq,
+ legacy->flags);
+ break;
+ }
+
+#ifdef ACPI_DEBUG
+ printk("Legacy ISA IRQ %x -> IA64 Vector %x IOSAPIC Pin %x Active %s %s Trigger\n",
+ legacy->isa_irq, vector, iosapic_pin(vector),
+ ((iosapic_polarity(vector) == IO_SAPIC_POL_LOW) ? "Low" : "High"),
+ ((iosapic_trigger(vector) == IO_SAPIC_LEVEL) ? "Level" : "Edge"));
+#endif /* ACPI_DEBUG */
+
+#endif /* CONFIG_IA64_IRQ_ACPI */
+}
+
+/*
+ * Info on platform interrupt sources: NMI, PMI, INIT, etc.
+ */
+static void __init
+acpi_platform(char *p)
+{
+ acpi_entry_platform_src_t *plat = (acpi_entry_platform_src_t *) p;
+
+ printk("PLATFORM: IOSAPIC %x -> Vector %lx on CPU %.04u:%.04u\n",
+ plat->iosapic_vector, plat->global_vector, plat->eid, plat->id);
+}
+
+/*
+ * Parse the ACPI Multiple SAPIC Table
+ */
+static void __init
+acpi_parse_msapic(acpi_sapic_t *msapic)
+{
+ char *p, *end;
+
+ memset(&acpi_apic_map, -1, sizeof(acpi_apic_map));
+
+#ifdef CONFIG_SMP
+ /* Base address of IPI Message Block */
+ ipi_base_addr = ioremap(msapic->interrupt_block, 0);
+#endif
+
+ p = (char *) (msapic + 1);
+ end = p + (msapic->header.length - sizeof(acpi_sapic_t));
+
+ while (p < end) {
+
+ switch (*p) {
+ case ACPI_ENTRY_LOCAL_SAPIC:
+ acpi_lsapic(p);
+ break;
+
+ case ACPI_ENTRY_IO_SAPIC:
+ acpi_iosapic(p);
+ break;
+
+ case ACPI_ENTRY_INT_SRC_OVERRIDE:
+ acpi_legacy_irq(p);
+ break;
+
+ case ACPI_ENTRY_PLATFORM_INT_SOURCE:
+ acpi_platform(p);
+ break;
+
+ default:
+ break;
+ }
+
+ /* Move to next table entry. */
+ p += *(p + 1);
+ }
+
+ /* Make bootup pretty */
+ printk(" %d CPUs available, %d CPUs total\n", acpi_cpus, cpu_cnt);
+}
+
+int __init
+acpi_parse(acpi_rsdp_t *rsdp)
+{
+ acpi_rsdt_t *rsdt;
+ acpi_desc_table_hdr_t *hdrp;
+ long tables, i;
+
+ if (!rsdp) {
+ printk("Uh-oh, no ACPI Root System Description Pointer table!\n");
+ return 0;
+ }
+
+ if (strncmp(rsdp->signature, ACPI_RSDP_SIG, ACPI_RSDP_SIG_LEN)) {
+ printk("Uh-oh, ACPI RSDP signature incorrect!\n");
+ return 0;
+ }
+
+ rsdp->rsdt = __va(rsdp->rsdt);
+ rsdt = rsdp->rsdt;
+ if (strncmp(rsdt->header.signature, ACPI_RSDT_SIG, ACPI_RSDT_SIG_LEN)) {
+ printk("Uh-oh, ACPI RDST signature incorrect!\n");
+ return 0;
+ }
+
+ printk("ACPI: %.6s %.8s %d.%d\n", rsdt->header.oem_id, rsdt->header.oem_table_id,
+ rsdt->header.oem_revision >> 16, rsdt->header.oem_revision & 0xffff);
+
+ tables = (rsdt->header.length - sizeof(acpi_desc_table_hdr_t)) / 8;
+ for (i = 0; i < tables; i++) {
+ hdrp = (acpi_desc_table_hdr_t *) __va(rsdt->entry_ptrs[i]);
+
+ /* Only interested in the MSAPIC table for now ... */
+ if (strncmp(hdrp->signature, ACPI_SAPIC_SIG, ACPI_SAPIC_SIG_LEN) != 0)
+ continue;
+
+ acpi_parse_msapic((acpi_sapic_t *) hdrp);
+ } /* for() */
+
+ if (acpi_cpus == 0) {
+ printk("ACPI: Found 0 CPUS; assuming 1\n");
+ acpi_cpus = 1; /* We've got at least one of these, no? */
+ }
+ return 1;
+}
+
+const char *
+acpi_get_sysname (void)
+{
+ /* the following should go away once we have an ACPI parser: */
+#ifdef CONFIG_IA64_GENERIC
+ return "hpsim";
+#else
+# if defined (CONFIG_IA64_HP_SIM)
+ return "hpsim";
+# elif defined (CONFIG_IA64_SGI_SN1_SIM)
+ return "sn1";
+# elif defined (CONFIG_IA64_DIG)
+ return "dig";
+# else
+# error Unknown platform. Fix acpi.c.
+# endif
+#endif
+}
--- /dev/null
+/*
+ * Extensible Firmware Interface
+ *
+ * Based on Extensible Firmware Interface Specification version 0.9 April 30, 1999
+ *
+ * Copyright (C) 1999 VA Linux Systems
+ * Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
+ * Copyright (C) 1999 Hewlett-Packard Co.
+ * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
+ *
+ * All EFI Runtime Services are not implemented yet as EFI only
+ * supports physical mode addressing on SoftSDV. This is to be fixed
+ * in a future version. --drummond 1999-07-20
+ *
+ * Implemented EFI runtime services and virtual mode calls. --davidm
+ */
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/time.h>
+
+#include <asm/efi.h>
+#include <asm/io.h>
+#include <asm/processor.h>
+
+#define EFI_DEBUG
+
+extern efi_status_t efi_call_phys (void *, ...);
+
+struct efi efi;
+
+static efi_runtime_services_t *runtime;
+
+static efi_status_t
+phys_get_time (efi_time_t *tm, efi_time_cap_t *tc)
+{
+ return efi_call_phys(__va(runtime->get_time), __pa(tm), __pa(tc));
+}
+
+static efi_status_t
+phys_set_time (efi_time_t *tm)
+{
+ return efi_call_phys(__va(runtime->set_time), __pa(tm));
+}
+
+static efi_status_t
+phys_get_wakeup_time (efi_bool_t *enabled, efi_bool_t *pending, efi_time_t *tm)
+{
+ return efi_call_phys(__va(runtime->get_wakeup_time), __pa(enabled), __pa(pending),
+ __pa(tm));
+}
+
+static efi_status_t
+phys_set_wakeup_time (efi_bool_t enabled, efi_time_t *tm)
+{
+ return efi_call_phys(__va(runtime->set_wakeup_time), enabled, __pa(tm));
+}
+
+static efi_status_t
+phys_get_variable (efi_char16_t *name, efi_guid_t *vendor, u32 *attr,
+ unsigned long *data_size, void *data)
+{
+ return efi_call_phys(__va(runtime->get_variable), __pa(name), __pa(vendor), __pa(attr),
+ __pa(data_size), __pa(data));
+}
+
+static efi_status_t
+phys_get_next_variable (unsigned long *name_size, efi_char16_t *name, efi_guid_t *vendor)
+{
+ return efi_call_phys(__va(runtime->get_next_variable), __pa(name_size), __pa(name),
+ __pa(vendor));
+}
+
+static efi_status_t
+phys_set_variable (efi_char16_t *name, efi_guid_t *vendor, u32 attr,
+ unsigned long data_size, void *data)
+{
+ return efi_call_phys(__va(runtime->set_variable), __pa(name), __pa(vendor), attr,
+ data_size, __pa(data));
+}
+
+static efi_status_t
+phys_get_next_high_mono_count (u64 *count)
+{
+ return efi_call_phys(__va(runtime->get_next_high_mono_count), __pa(count));
+}
+
+static void
+phys_reset_system (int reset_type, efi_status_t status,
+ unsigned long data_size, efi_char16_t *data)
+{
+ efi_call_phys(__va(runtime->reset_system), status, data_size, __pa(data));
+}
+
+/*
+ * Converts Gregorian date to seconds since 1970-01-01 00:00:00.
+ * Assumes input in normal date format, i.e. 1980-12-31 23:59:59
+ * => year=1980, mon=12, day=31, hour=23, min=59, sec=59.
+ *
+ * [For the Julian calendar (which was used in Russia before 1917,
+ * Britain & colonies before 1752, anywhere else before 1582,
+ * and is still in use by some communities) leave out the
+ * -year/100+year/400 terms, and add 10.]
+ *
+ * This algorithm was first published by Gauss (I think).
+ *
+ * WARNING: this function will overflow on 2106-02-07 06:28:16 on
+ * machines where long is 32-bit! (However, as time_t is signed, we
+ * will already get problems at other places on 2038-01-19 03:14:08)
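+ *
+ * For example, mktime(2000, 1, 1, 0, 0, 0) evaluates to 946684800:
+ * 30 years of 365 days plus 7 leap days between 1970-01-01 00:00:00
+ * and 2000-01-01 00:00:00, i.e. 10957 * 86400 seconds.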
+ */
+static inline unsigned long
+mktime (unsigned int year, unsigned int mon, unsigned int day, unsigned int hour,
+ unsigned int min, unsigned int sec)
+{
+ if (0 >= (int) (mon -= 2)) { /* 1..12 -> 11,12,1..10 */
+ mon += 12; /* Puts Feb last since it has leap day */
+ year -= 1;
+ }
+ return ((((unsigned long)(year/4 - year/100 + year/400 + 367*mon/12 + day)
+ + year*365 - 719499
+ )*24 + hour /* now have hours */
+ )*60 + min /* now have minutes */
+ )*60 + sec; /* finally seconds */
+}
+
+void
+efi_gettimeofday (struct timeval *tv)
+{
+ efi_time_t tm;
+
+ memset(tv, 0, sizeof(*tv));
+ if ((*efi.get_time)(&tm, 0) != EFI_SUCCESS)
+ return;
+
+ tv->tv_sec = mktime(tm.year, tm.month, tm.day, tm.hour, tm.minute, tm.second);
+ tv->tv_usec = tm.nanosecond / 1000;
+}
+
+/*
+ * Walks the EFI memory map and calls CALLBACK once for each EFI
+ * memory descriptor that has memory that is available for OS use.
+ */
+void
+efi_memmap_walk (efi_freemem_callback_t callback, void *arg)
+{
+ int prev_valid = 0;
+ struct range {
+ u64 start;
+ u64 end;
+ } prev, curr;
+ void *efi_map_start, *efi_map_end, *p;
+ efi_memory_desc_t *md;
+ u64 efi_desc_size, start, end;
+
+ efi_map_start = __va(ia64_boot_param.efi_memmap);
+ efi_map_end = efi_map_start + ia64_boot_param.efi_memmap_size;
+ efi_desc_size = ia64_boot_param.efi_memdesc_size;
+
+ for (p = efi_map_start; p < efi_map_end; p += efi_desc_size) {
+ md = p;
+ switch (md->type) {
+ case EFI_LOADER_CODE:
+ case EFI_LOADER_DATA:
+ case EFI_BOOT_SERVICES_CODE:
+ case EFI_BOOT_SERVICES_DATA:
+ case EFI_CONVENTIONAL_MEMORY:
+#ifndef CONFIG_IA64_VIRTUAL_MEM_MAP
+ if (md->phys_addr > 1024*1024*1024UL) {
+ printk("Warning: ignoring %luMB of memory above 1GB!\n",
+ md->num_pages >> 8);
+ md->type = EFI_UNUSABLE_MEMORY;
+ continue;
+ }
+#endif
+
+ curr.start = PAGE_OFFSET + md->phys_addr;
+ curr.end = curr.start + (md->num_pages << 12);
+
+ if (!prev_valid) {
+ prev = curr;
+ prev_valid = 1;
+ } else {
+ if (curr.start < prev.start)
+ printk("Oops: EFI memory table not ordered!\n");
+
+ if (prev.end == curr.start) {
+ /* merge two consecutive memory ranges */
+ prev.end = curr.end;
+ } else {
+ start = PAGE_ALIGN(prev.start);
+ end = prev.end & PAGE_MASK;
+ if ((end > start) && (*callback)(start, end, arg) < 0)
+ return;
+ prev = curr;
+ }
+ }
+ break;
+
+ default:
+ continue;
+ }
+ }
+ if (prev_valid) {
+ start = PAGE_ALIGN(prev.start);
+ end = prev.end & PAGE_MASK;
+ if (end > start)
+ (*callback)(start, end, arg);
+ }
+}
+
+void __init
+efi_init (void)
+{
+ void *efi_map_start, *efi_map_end, *p;
+ efi_config_table_t *config_tables;
+ efi_memory_desc_t *md;
+ efi_char16_t *c16;
+ u64 efi_desc_size;
+ char vendor[100] = "unknown";
+ int i;
+
+ efi.systab = __va(ia64_boot_param.efi_systab);
+
+ /*
+ * Verify the EFI Table
+ */
+ if (efi.systab == NULL)
+ panic("Woah! Can't find EFI system table.\n");
+ if (efi.systab->hdr.signature != EFI_SYSTEM_TABLE_SIGNATURE)
+ panic("Woah! EFI system table signature incorrect\n");
+ if (efi.systab->hdr.revision != EFI_SYSTEM_TABLE_REVISION)
+ printk("Warning: EFI system table version mismatch: "
+ "got %d.%02d, expected %d.%02d\n",
+ efi.systab->hdr.revision >> 16, efi.systab->hdr.revision & 0xffff,
+ EFI_SYSTEM_TABLE_REVISION >> 16, EFI_SYSTEM_TABLE_REVISION & 0xffff);
+
+ config_tables = __va(efi.systab->tables);
+
+ /* Show what we know for posterity */
+ c16 = __va(efi.systab->fw_vendor);
+ if (c16) {
+ for (i = 0; i < sizeof(vendor) - 1 && *c16; ++i)
+ vendor[i] = *c16++;
+ vendor[i] = '\0';
+ }
+
+ printk("EFI v%u.%.02u by %s:",
+ efi.systab->hdr.revision >> 16, efi.systab->hdr.revision & 0xffff, vendor);
+
+ for (i = 0; i < efi.systab->nr_tables; i++) {
+ if (efi_guidcmp(config_tables[i].guid, MPS_TABLE_GUID) == 0) {
+ efi.mps = __va(config_tables[i].table);
+ printk(" MPS=0x%lx", config_tables[i].table);
+ } else if (efi_guidcmp(config_tables[i].guid, ACPI_TABLE_GUID) == 0) {
+ efi.acpi = __va(config_tables[i].table);
+ printk(" ACPI=0x%lx", config_tables[i].table);
+ } else if (efi_guidcmp(config_tables[i].guid, SMBIOS_TABLE_GUID) == 0) {
+ efi.smbios = __va(config_tables[i].table);
+ printk(" SMBIOS=0x%lx", config_tables[i].table);
+ } else if (efi_guidcmp(config_tables[i].guid, SAL_SYSTEM_TABLE_GUID) == 0) {
+ efi.sal_systab = __va(config_tables[i].table);
+ printk(" SALsystab=0x%lx", config_tables[i].table);
+ }
+ }
+ printk("\n");
+
+ runtime = __va(efi.systab->runtime);
+ efi.get_time = phys_get_time;
+ efi.set_time = phys_set_time;
+ efi.get_wakeup_time = phys_get_wakeup_time;
+ efi.set_wakeup_time = phys_set_wakeup_time;
+ efi.get_variable = phys_get_variable;
+ efi.get_next_variable = phys_get_next_variable;
+ efi.set_variable = phys_set_variable;
+ efi.get_next_high_mono_count = phys_get_next_high_mono_count;
+ efi.reset_system = phys_reset_system;
+
+ efi_map_start = __va(ia64_boot_param.efi_memmap);
+ efi_map_end = efi_map_start + ia64_boot_param.efi_memmap_size;
+ efi_desc_size = ia64_boot_param.efi_memdesc_size;
+
+#ifdef EFI_DEBUG
+ /* print EFI memory map: */
+ for (i = 0, p = efi_map_start; p < efi_map_end; ++i, p += efi_desc_size) {
+ md = p;
+ printk("mem%02u: type=%u, attr=0x%lx, range=[0x%016lx-0x%016lx) (%luMB)\n",
+ i, md->type, md->attribute,
+ md->phys_addr, md->phys_addr + (md->num_pages<<12) - 1, md->num_pages >> 8);
+ }
+#endif
+}
+
+void
+efi_enter_virtual_mode (void)
+{
+ void *efi_map_start, *efi_map_end, *p;
+ efi_memory_desc_t *md;
+ efi_status_t status;
+ u64 efi_desc_size;
+
+ efi_map_start = __va(ia64_boot_param.efi_memmap);
+ efi_map_end = efi_map_start + ia64_boot_param.efi_memmap_size;
+ efi_desc_size = ia64_boot_param.efi_memdesc_size;
+
+ for (p = efi_map_start; p < efi_map_end; p += efi_desc_size) {
+ md = p;
+ if (md->attribute & EFI_MEMORY_RUNTIME) {
+ /*
+ * Some descriptors have multiple bits set, so the order of
+ * the tests is relevant.
+ */
+ if (md->attribute & EFI_MEMORY_WB) {
+ md->virt_addr = (u64) __va(md->phys_addr);
+ } else if (md->attribute & EFI_MEMORY_UC) {
+ md->virt_addr = (u64) ioremap(md->phys_addr, 0);
+ } else if (md->attribute & EFI_MEMORY_WC) {
+#if 0
+ md->virt_addr = ia64_remap(md->phys_addr, (_PAGE_A | _PAGE_P
+ | _PAGE_D
+ | _PAGE_MA_WC
+ | _PAGE_PL_0
+ | _PAGE_AR_RW));
+#else
+ printk("EFI_MEMORY_WC mapping\n");
+ md->virt_addr = (u64) ioremap(md->phys_addr, 0);
+#endif
+ } else if (md->attribute & EFI_MEMORY_WT) {
+#if 0
+ md->virt_addr = ia64_remap(md->phys_addr, (_PAGE_A | _PAGE_P
+ | _PAGE_D | _PAGE_MA_WT
+ | _PAGE_PL_0
+ | _PAGE_AR_RW));
+#else
+ printk("EFI_MEMORY_WT mapping\n");
+ md->virt_addr = (u64) ioremap(md->phys_addr, 0);
+#endif
+ }
+ }
+ }
+
+ status = efi_call_phys(__va(runtime->set_virtual_address_map),
+ ia64_boot_param.efi_memmap_size,
+ efi_desc_size, ia64_boot_param.efi_memdesc_version,
+ ia64_boot_param.efi_memmap);
+ if (status != EFI_SUCCESS) {
+ printk("Warning: unable to switch EFI into virtual mode (status=%lu)\n", status);
+ return;
+ }
+
+ /*
+ * Now that EFI is in virtual mode, we arrange for EFI functions to be
+ * called directly:
+ */
+ efi.get_time = __va(runtime->get_time);
+ efi.set_time = __va(runtime->set_time);
+ efi.get_wakeup_time = __va(runtime->get_wakeup_time);
+ efi.set_wakeup_time = __va(runtime->set_wakeup_time);
+ efi.get_variable = __va(runtime->get_variable);
+ efi.get_next_variable = __va(runtime->get_next_variable);
+ efi.set_variable = __va(runtime->set_variable);
+ efi.get_next_high_mono_count = __va(runtime->get_next_high_mono_count);
+ efi.reset_system = __va(runtime->reset_system);
+}
--- /dev/null
+/*
+ * EFI call stub.
+ *
+ * Copyright (C) 1999 David Mosberger <davidm@hpl.hp.com>
+ *
+ * This stub allows us to make EFI calls in physical mode with interrupts
+ * turned off. We need this because we can't call SetVirtualMap() until
+ * the kernel has booted far enough to allow allocation of struct vma_struct
+ * entries (which we would need to map stuff with memory attributes other
+ * than uncached or writeback...). Since the GetTime() service gets called
+ * earlier than that, we need to be able to make physical mode EFI calls from
+ * the kernel.
+ */
+
+/*
+ * PSR settings as per SAL spec (Chapter 8 in the "IA-64 System
+ * Abstraction Layer Specification", revision 2.6e). Note that
+ * psr.dfl and psr.dfh MUST be cleared, despite what this manual says.
+ * Otherwise, SAL dies whenever it's trying to do an IA-32 BIOS call
+ * (the br.ia instruction fails unless psr.dfl and psr.dfh are
+ * cleared). Fortunately, SAL promises not to touch the floating
+ * point regs, so at least we don't have to save f2-f127.
+ */
+#define PSR_BITS_TO_CLEAR \
+ (IA64_PSR_I | IA64_PSR_IT | IA64_PSR_DT | IA64_PSR_RT | \
+ IA64_PSR_DD | IA64_PSR_SS | IA64_PSR_RI | IA64_PSR_ED | \
+ IA64_PSR_DFL | IA64_PSR_DFH)
+
+#define PSR_BITS_TO_SET \
+ (IA64_PSR_BN)
+
+#include <asm/processor.h>
+
+ .text
+ .psr abi64
+ .psr lsb
+ .lsb
+
+ .text
+
+/*
+ * Switch execution mode from virtual to physical or vice versa.
+ *
+ * Inputs:
+ * r16 = new psr to establish
+ */
+ .proc switch_mode
+switch_mode:
+ {
+ alloc r2=ar.pfs,0,0,0,0
+ rsm psr.i | psr.ic // disable interrupts and interrupt collection
+ mov r15=ip
+ }
+ ;;
+ {
+ flushrs // must be first insn in group
+ srlz.i
+ shr.u r19=r15,61 // r19 <- top 3 bits of current IP
+ }
+ ;;
+ mov cr.ipsr=r16 // set new PSR
+ add r3=1f-switch_mode,r15
+ xor r15=0x7,r19 // flip the region bits
+
+ mov r17=ar.bsp
+ mov r14=rp // get return address into a general register
+
+ // switch RSE backing store:
+ ;;
+ dep r17=r15,r17,61,3 // make ar.bsp physical or virtual
+ mov r18=ar.rnat // save ar.rnat
+ ;;
+ mov ar.bspstore=r17 // this steps on ar.rnat
+ dep r3=r15,r3,61,3 // make rfi return address physical or virtual
+ ;;
+ mov cr.iip=r3
+ mov cr.ifs=r0
+ dep sp=r15,sp,61,3 // make stack pointer physical or virtual
+ ;;
+ mov ar.rnat=r18 // restore ar.rnat
+ dep r14=r15,r14,61,3 // make function return address physical or virtual
+ rfi // must be last insn in group
+ ;;
+1: mov rp=r14
+ br.ret.sptk.few rp
+ .endp switch_mode
+
+/*
+ * Inputs:
+ * in0 = address of function descriptor of EFI routine to call
+ * in1..in7 = arguments to routine
+ *
+ * Outputs:
+ * r8 = EFI_STATUS returned by called function
+ */
+
+ .global efi_call_phys
+ .proc efi_call_phys
+efi_call_phys:
+
+ alloc loc0=ar.pfs,8,5,7,0
+ ld8 r2=[in0],8 // load EFI function's entry point
+ mov loc1=rp
+ ;;
+ mov loc2=gp // save global pointer
+ mov loc4=ar.rsc // save RSE configuration
+ mov ar.rsc=r0 // put RSE in enforced lazy, LE mode
+ ;;
+
+ ld8 gp=[in0] // load EFI function's global pointer
+ mov out0=in1
+ mov out1=in2
+ movl r16=PSR_BITS_TO_CLEAR
+
+ mov loc3=psr // save processor status word
+ movl r17=PSR_BITS_TO_SET
+ ;;
+ mov out2=in3
+ or loc3=loc3,r17
+ mov b6=r2
+ ;;
+ andcm r16=loc3,r16 // get psr with IT, DT, and RT bits cleared
+ mov out3=in4
+ br.call.sptk.few rp=switch_mode
+.ret0:
+ mov out4=in5
+ mov out5=in6
+ mov out6=in7
+ br.call.sptk.few rp=b6 // call the EFI function
+.ret1:
+ mov ar.rsc=r0 // put RSE in enforced lazy, LE mode
+ mov r16=loc3
+ br.call.sptk.few rp=switch_mode // return to virtual mode
+.ret2:
+ mov ar.rsc=loc4 // restore RSE configuration
+ mov ar.pfs=loc0
+ mov rp=loc1
+ mov gp=loc2
+ br.ret.sptk.few rp
+
+ .endp efi_call_phys
--- /dev/null
+/*
+ * ia64/kernel/entry.S
+ *
+ * Kernel entry points.
+ *
+ * Copyright (C) 1998-2000 Hewlett-Packard Co
+ * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999 VA Linux Systems
+ * Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
+ * Copyright (C) 1999 Asit Mallick <Asit.K.Mallick@intel.com>
+ * Copyright (C) 1999 Don Dugger <Don.Dugger@intel.com>
+ */
+/*
+ * Global (preserved) predicate usage on syscall entry/exit path:
+ *
+ *
+ * pEOI: See entry.h.
+ * pKern: See entry.h.
+ * pSys: See entry.h.
+ * pNonSys: !pSys
+ * p2: (Alias of pKern!) True if any signals are pending.
+ * p16/p17: Used by stubs calling ia64_do_signal to indicate if current task
+ * has PF_PTRACED flag bit set. p16 is true if so, p17 is the complement.
+ */
+
+#include <linux/config.h>
+
+#include <asm/errno.h>
+#include <asm/offsets.h>
+#include <asm/processor.h>
+#include <asm/unistd.h>
+
+#include "entry.h"
+
+ .text
+ .psr abi64
+ .psr lsb
+ .lsb
+
+ /*
+ * execve() is special because in case of success, we need to
+ * setup a null register window frame.
+ */
+ .align 16
+ .proc ia64_execve
+ia64_execve:
+ alloc loc0=ar.pfs,3,2,4,0
+ mov loc1=rp
+ mov out0=in0 // filename
+ ;; // stop bit between alloc and call
+ mov out1=in1 // argv
+ mov out2=in2 // envp
+ add out3=16,sp // regs
+ br.call.sptk.few rp=sys_execve
+.ret0: cmp4.ge p6,p0=r8,r0
+ mov ar.pfs=loc0 // restore ar.pfs
+ ;;
+(p6) mov ar.pfs=r0 // clear ar.pfs in case of success
+ sxt4 r8=r8 // return 64-bit result
+ mov rp=loc1
+
+ br.ret.sptk.few rp
+ .endp ia64_execve
+
+ .align 16
+ .global sys_clone
+ .proc sys_clone
+sys_clone:
+ alloc r16=ar.pfs,2,2,3,0;;
+ movl r28=1f
+ mov loc1=rp
+ br.cond.sptk.many save_switch_stack
+1:
+ mov loc0=r16 // save ar.pfs across do_fork
+ adds out2=IA64_SWITCH_STACK_SIZE+16,sp
+ adds r2=IA64_SWITCH_STACK_SIZE+IA64_PT_REGS_R12_OFFSET+16,sp
+ cmp.eq p8,p9=in1,r0 // usp == 0?
+ mov out0=in0 // out0 = clone_flags
+ ;;
+(p8) ld8 out1=[r2] // fetch usp from pt_regs.r12
+(p9) mov out1=in1
+ br.call.sptk.few rp=do_fork
+.ret1:
+ mov ar.pfs=loc0
+ adds sp=IA64_SWITCH_STACK_SIZE,sp // pop the switch stack
+ mov rp=loc1
+ ;;
+ br.ret.sptk.many rp
+ .endp sys_clone
+
+/*
+ * prev_task <- switch_to(struct task_struct *next)
+ */
+ .align 16
+ .global ia64_switch_to
+ .proc ia64_switch_to
+ia64_switch_to:
+ alloc r16=ar.pfs,1,0,0,0
+ movl r28=1f
+ br.cond.sptk.many save_switch_stack
+1:
+ // disable interrupts to ensure atomicity for next few instructions:
+ mov r17=psr // M-unit
+ ;;
+ rsm psr.i // M-unit
+ dep r18=-1,r0,0,61 // build mask 0x1fffffffffffffff
+ ;;
+ srlz.d
+ ;;
+ adds r22=IA64_TASK_THREAD_KSP_OFFSET,r13
+ adds r21=IA64_TASK_THREAD_KSP_OFFSET,in0
+ ;;
+ st8 [r22]=sp // save kernel stack pointer of old task
+ ld8 sp=[r21] // load kernel stack pointer of new task
+ and r20=in0,r18 // physical address of "current"
+ ;;
+ mov r8=r13 // return pointer to previously running task
+ mov r13=in0 // set "current" pointer
+ mov ar.k6=r20 // copy "current" into ar.k6
+ ;;
+ // restore interrupts
+ mov psr.l=r17
+ ;;
+ srlz.d
+
+ movl r28=1f
+ br.cond.sptk.many load_switch_stack
+1:
+ br.ret.sptk.few rp
+ .endp ia64_switch_to
+
+ /*
+ * Like save_switch_stack, but also save the stack frame that is active
+ * at the time this function is called.
+ */
+ .align 16
+ .proc save_switch_stack_with_current_frame
+save_switch_stack_with_current_frame:
+1: {
+ alloc r16=ar.pfs,0,0,0,0 // pass ar.pfs to save_switch_stack
+ mov r28=ip
+ }
+ ;;
+ adds r28=1f-1b,r28
+ br.cond.sptk.many save_switch_stack
+1: br.ret.sptk.few rp
+ .endp save_switch_stack_with_current_frame
+/*
+ * Note that interrupts are enabled during save_switch_stack and
+ * load_switch_stack. This means that we may get an interrupt with
+ * "sp" pointing to the new kernel stack while ar.bspstore is still
+ * pointing to the old kernel backing store area. Since ar.rsc,
+ * ar.rnat, ar.bsp, and ar.bspstore are all preserved by interrupts,
+ * this is not a problem.
+ */
+
+/*
+ * save_switch_stack:
+ * - r16 holds ar.pfs
+ * - r28 holds address to return to
+ * - rp (b0) holds return address to save
+ */
+ .align 16
+ .global save_switch_stack
+ .proc save_switch_stack
+save_switch_stack:
+ flushrs // flush dirty regs to backing store (must be first in insn group)
+ mov r17=ar.unat // preserve caller's
+ adds r2=-IA64_SWITCH_STACK_SIZE+16,sp // r2 = &sw->caller_unat
+ ;;
+ mov r18=ar.fpsr // preserve fpsr
+ mov ar.rsc=r0 // put RSE in mode: enforced lazy, little endian, pl 0
+ ;;
+ mov r19=ar.rnat
+ adds r3=-IA64_SWITCH_STACK_SIZE+24,sp // r3 = &sw->ar_fpsr
+
+ // Note: the instruction ordering is important here: we can't
+ // store anything to the switch stack before sp is updated
+ // as otherwise an interrupt might overwrite the memory!
+ adds sp=-IA64_SWITCH_STACK_SIZE,sp
+ ;;
+ st8 [r2]=r17,16
+ st8 [r3]=r18,24
+ ;;
+ stf.spill [r2]=f2,32
+ stf.spill [r3]=f3,32
+ mov r21=b0
+ ;;
+ stf.spill [r2]=f4,32
+ stf.spill [r3]=f5,32
+ ;;
+ stf.spill [r2]=f10,32
+ stf.spill [r3]=f11,32
+ mov r22=b1
+ ;;
+ stf.spill [r2]=f12,32
+ stf.spill [r3]=f13,32
+ mov r23=b2
+ ;;
+ stf.spill [r2]=f14,32
+ stf.spill [r3]=f15,32
+ mov r24=b3
+ ;;
+ stf.spill [r2]=f16,32
+ stf.spill [r3]=f17,32
+ mov r25=b4
+ ;;
+ stf.spill [r2]=f18,32
+ stf.spill [r3]=f19,32
+ mov r26=b5
+ ;;
+ stf.spill [r2]=f20,32
+ stf.spill [r3]=f21,32
+ mov r17=ar.lc // I-unit
+ ;;
+ stf.spill [r2]=f22,32
+ stf.spill [r3]=f23,32
+ ;;
+ stf.spill [r2]=f24,32
+ stf.spill [r3]=f25,32
+ ;;
+ stf.spill [r2]=f26,32
+ stf.spill [r3]=f27,32
+ ;;
+ stf.spill [r2]=f28,32
+ stf.spill [r3]=f29,32
+ ;;
+ stf.spill [r2]=f30,32
+ stf.spill [r3]=f31,24
+ ;;
+ st8.spill [r2]=r4,16
+ st8.spill [r3]=r5,16
+ ;;
+ st8.spill [r2]=r6,16
+ st8.spill [r3]=r7,16
+ ;;
+ st8 [r2]=r21,16 // save b0
+ st8 [r3]=r22,16 // save b1
+ /* since we're done with the spills, read and save ar.unat: */
+ mov r18=ar.unat // M-unit
+ mov r20=ar.bspstore // M-unit
+ ;;
+ st8 [r2]=r23,16 // save b2
+ st8 [r3]=r24,16 // save b3
+ ;;
+ st8 [r2]=r25,16 // save b4
+ st8 [r3]=r26,16 // save b5
+ ;;
+ st8 [r2]=r16,16 // save ar.pfs
+ st8 [r3]=r17,16 // save ar.lc
+ mov r21=pr
+ ;;
+ st8 [r2]=r18,16 // save ar.unat
+ st8 [r3]=r19,16 // save ar.rnat
+ mov b7=r28
+ ;;
+ st8 [r2]=r20 // save ar.bspstore
+ st8 [r3]=r21 // save predicate registers
+ mov ar.rsc=3 // put RSE back into eager mode, pl 0
+ br.cond.sptk.few b7
+ .endp save_switch_stack
+
+/*
+ * load_switch_stack:
+ * - r28 holds address to return to
+ */
+ .align 16
+ .proc load_switch_stack
+load_switch_stack:
+ invala // invalidate ALAT
+ adds r2=IA64_SWITCH_STACK_B0_OFFSET+16,sp // get pointer to switch_stack.b0
+ mov ar.rsc=r0 // put RSE into enforced lazy mode
+ adds r3=IA64_SWITCH_STACK_B0_OFFSET+24,sp // get pointer to switch_stack.b1
+ ;;
+ ld8 r21=[r2],16 // restore b0
+ ld8 r22=[r3],16 // restore b1
+ ;;
+ ld8 r23=[r2],16 // restore b2
+ ld8 r24=[r3],16 // restore b3
+ ;;
+ ld8 r25=[r2],16 // restore b4
+ ld8 r26=[r3],16 // restore b5
+ ;;
+ ld8 r16=[r2],16 // restore ar.pfs
+ ld8 r17=[r3],16 // restore ar.lc
+ ;;
+ ld8 r18=[r2],16 // restore ar.unat
+ ld8 r19=[r3],16 // restore ar.rnat
+ mov b0=r21
+ ;;
+ ld8 r20=[r2] // restore ar.bspstore
+ ld8 r21=[r3] // restore predicate registers
+ mov ar.pfs=r16
+ ;;
+ mov ar.bspstore=r20
+ ;;
+ loadrs // invalidate stacked regs outside current frame
+ adds r2=16-IA64_SWITCH_STACK_SIZE,r2 // get pointer to switch_stack.caller_unat
+ ;; // stop bit for rnat dependency
+ mov ar.rnat=r19
+ mov ar.unat=r18 // establish unat holding the NaT bits for r4-r7
+ adds r3=16-IA64_SWITCH_STACK_SIZE,r3 // get pointer to switch_stack.ar_fpsr
+ ;;
+ ld8 r18=[r2],16 // restore caller's unat
+ ld8 r19=[r3],24 // restore fpsr
+ mov ar.lc=r17
+ ;;
+ ldf.fill f2=[r2],32
+ ldf.fill f3=[r3],32
+ mov pr=r21,-1
+ ;;
+ ldf.fill f4=[r2],32
+ ldf.fill f5=[r3],32
+ ;;
+ ldf.fill f10=[r2],32
+ ldf.fill f11=[r3],32
+ mov b1=r22
+ ;;
+ ldf.fill f12=[r2],32
+ ldf.fill f13=[r3],32
+ mov b2=r23
+ ;;
+ ldf.fill f14=[r2],32
+ ldf.fill f15=[r3],32
+ mov b3=r24
+ ;;
+ ldf.fill f16=[r2],32
+ ldf.fill f17=[r3],32
+ mov b4=r25
+ ;;
+ ldf.fill f18=[r2],32
+ ldf.fill f19=[r3],32
+ mov b5=r26
+ ;;
+ ldf.fill f20=[r2],32
+ ldf.fill f21=[r3],32
+ ;;
+ ldf.fill f22=[r2],32
+ ldf.fill f23=[r3],32
+ ;;
+ ldf.fill f24=[r2],32
+ ldf.fill f25=[r3],32
+ ;;
+ ldf.fill f26=[r2],32
+ ldf.fill f27=[r3],32
+ ;;
+ ldf.fill f28=[r2],32
+ ldf.fill f29=[r3],32
+ ;;
+ ldf.fill f30=[r2],32
+ ldf.fill f31=[r3],24
+ ;;
+ ld8.fill r4=[r2],16
+ ld8.fill r5=[r3],16
+ mov b7=r28
+ ;;
+ ld8.fill r6=[r2],16
+ ld8.fill r7=[r3],16
+ mov ar.unat=r18 // restore caller's unat
+ mov ar.fpsr=r19 // restore fpsr
+ mov ar.rsc=3 // put RSE back into eager mode, pl 0
+ adds sp=IA64_SWITCH_STACK_SIZE,sp // pop switch_stack
+ br.cond.sptk.few b7
+ .endp load_switch_stack
+
+ .align 16
+ .global __ia64_syscall
+ .proc __ia64_syscall
+__ia64_syscall:
+ .regstk 6,0,0,0
+ mov r15=in5 // put syscall number in place
+ break __BREAK_SYSCALL
+ movl r2=errno
+ cmp.eq p6,p7=-1,r10
+ ;;
+(p6) st4 [r2]=r8
+(p6) mov r8=-1
+ br.ret.sptk.few rp
+ .endp __ia64_syscall
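+
+/*
+ * (Editorial sketch) The convention implemented above: the syscall
+ * number goes in r15, the result comes back in r8, and r10 is -1 on
+ * failure, in which case the stub stores the errno value and returns
+ * -1.  An illustrative C-level use -- the wrapper name is hypothetical:
+ *
+ *	extern long __ia64_syscall (long a0, long a1, long a2, long a3,
+ *				    long a4, long nr);
+ *
+ *	long
+ *	my_write (int fd, const void *buf, long len)	// hypothetical
+ *	{
+ *		return __ia64_syscall(fd, (long) buf, len, 0, 0, __NR_write);
+ *	}
+ */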
+
+ //
+ // We invoke syscall_trace through this intermediate function to
+ // ensure that the syscall input arguments are not clobbered. We
+ // also use it to preserve b6, which contains the syscall entry point.
+ //
+ .align 16
+ .global invoke_syscall_trace
+ .proc invoke_syscall_trace
+invoke_syscall_trace:
+ alloc loc0=ar.pfs,8,3,0,0
+ ;; // WAW on CFM at the br.call
+ mov loc1=rp
+ br.call.sptk.many rp=save_switch_stack_with_current_frame // must preserve b6!!
+.ret2: mov loc2=b6
+ br.call.sptk.few rp=syscall_trace
+.ret3: adds sp=IA64_SWITCH_STACK_SIZE,sp // drop switch_stack frame
+ mov rp=loc1
+ mov ar.pfs=loc0
+ mov b6=loc2
+ ;;
+ br.ret.sptk.few rp
+ .endp invoke_syscall_trace
+
+ //
+ // Invoke a system call, but do some tracing before and after the call.
+ // We MUST preserve the current register frame throughout this routine
+ // because some system calls (such as ia64_execve) directly
+ // manipulate ar.pfs.
+ //
+ // Input:
+ // r15 = syscall number
+ // b6 = syscall entry point
+ //
+ .global ia64_trace_syscall
+ .global ia64_strace_leave_kernel
+ .global ia64_strace_clear_r8
+
+ .proc ia64_strace_clear_r8
+ia64_strace_clear_r8: // this is where we return after cloning when PF_TRACESYS is on
+# ifdef CONFIG_SMP
+ br.call.sptk.few rp=invoke_schedule_tail
+# endif
+ mov r8=0
+ br strace_check_retval
+ .endp ia64_strace_clear_r8
+
+ .proc ia64_trace_syscall
+ia64_trace_syscall:
+ br.call.sptk.few rp=invoke_syscall_trace // give parent a chance to catch syscall args
+.ret4: br.call.sptk.few rp=b6 // do the syscall
+strace_check_retval:
+.ret5: cmp.lt p6,p0=r8,r0 // syscall failed?
+ ;;
+ adds r2=IA64_PT_REGS_R8_OFFSET+16,sp // r2 = &pt_regs.r8
+ adds r3=IA64_PT_REGS_R8_OFFSET+32,sp // r3 = &pt_regs.r10
+ mov r10=0
+(p6) br.cond.sptk.few strace_error // syscall failed ->
+ ;; // avoid RAW on r10
+strace_save_retval:
+ st8.spill [r2]=r8 // store return value in slot for r8
+ st8.spill [r3]=r10 // clear error indication in slot for r10
+ia64_strace_leave_kernel:
+ br.call.sptk.few rp=invoke_syscall_trace // give parent a chance to catch return value
+.ret6: br.cond.sptk.many ia64_leave_kernel
+
+strace_error:
+ ld8 r3=[r2] // load pt_regs.r8
+ sub r9=0,r8 // negate return value to get errno value
+ ;;
+ cmp.ne p6,p0=r3,r0 // is pt_regs.r8!=0?
+ adds r3=16,r2 // r3=&pt_regs.r10
+ ;;
+(p6) mov r10=-1
+(p6) mov r8=r9
+ br.cond.sptk.few strace_save_retval
+ .endp ia64_trace_syscall
+
+/*
+ * A couple of convenience macros to help implement/understand the state
+ * restoration that happens at the end of ia64_ret_from_syscall.
+ */
+#define rARPR r31
+#define rCRIFS r30
+#define rCRIPSR r29
+#define rCRIIP r28
+#define rARRSC r27
+#define rARPFS r26
+#define rARUNAT r25
+#define rARRNAT r24
+#define rARBSPSTORE r23
+#define rKRBS r22
+#define rB6 r21
+
+ .align 16
+ .global ia64_ret_from_syscall
+ .global ia64_ret_from_syscall_clear_r8
+ .global ia64_leave_kernel
+ .proc ia64_ret_from_syscall
+ia64_ret_from_syscall_clear_r8:
+#ifdef CONFIG_SMP
+ // In SMP mode, we need to call schedule_tail to complete the scheduling process.
+ // Called by ia64_switch_to after do_fork()->copy_thread(). r8 contains the
+ // address of the previously executing task.
+ br.call.sptk.few rp=invoke_schedule_tail
+.ret7:
+#endif
+ mov r8=0
+	;;			// stop bit prevents a RAW dependency on r8
+ia64_ret_from_syscall:
+ cmp.ge p6,p7=r8,r0 // syscall executed successfully?
+ adds r2=IA64_PT_REGS_R8_OFFSET+16,sp // r2 = &pt_regs.r8
+ adds r3=IA64_PT_REGS_R8_OFFSET+32,sp // r3 = &pt_regs.r10
+ ;;
+(p6) st8.spill [r2]=r8 // store return value in slot for r8 and set unat bit
+(p6) st8.spill [r3]=r0 // clear error indication in slot for r10 and set unat bit
+(p7) br.cond.spnt.few handle_syscall_error // handle potential syscall failure
+
+ia64_leave_kernel:
+ // check & deliver software interrupts (bottom half handlers):
+
+ movl r2=bh_active // sheesh, why aren't these two in
+ movl r3=bh_mask // a struct??
+ ;;
+ ld8 r2=[r2]
+ ld8 r3=[r3]
+ ;;
+ and r2=r2,r3
+ ;;
+ cmp.ne p6,p7=r2,r0 // any soft interrupts ready for delivery?
+(p6) br.call.dpnt.few rp=invoke_do_bottom_half
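+	// (Editorial note) In C terms, the check above is roughly
+	//	if (bh_active & bh_mask)
+	//		do_bottom_half();
+	// with the call routed through invoke_do_bottom_half so that
+	// in0-in7 survive a possible system-call restart.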
+1:
+(pKern) br.cond.dpnt.many restore_all // returning to kernel mode -> skip check for rescheduling & signal delivery
+
+ // call schedule() until we find a task that doesn't have need_resched set:
+
+back_from_resched:
+ { .mii
+ adds r2=IA64_TASK_NEED_RESCHED_OFFSET,r13
+ mov r3=ip
+ adds r14=IA64_TASK_SIGPENDING_OFFSET,r13
+ }
+ ;;
+ ld8 r2=[r2]
+ ld4 r14=[r14]
+ mov rp=r3 // arrange for schedule() to return to back_from_resched
+ ;;
+ /*
+ * If pEOI is set, we need to write the cr.eoi now and then
+ * clear pEOI because both invoke_schedule() and
+ * handle_signal_delivery() may call the scheduler. Since
+ * we're returning to user-level, we get at most one nested
+ * interrupt of the same priority level, which doesn't tax the
+ * kernel stack too much.
+ */
+(pEOI) mov cr.eoi=r0
+ cmp.ne p6,p0=r2,r0
+ cmp.ne p2,p0=r14,r0 // NOTE: pKern is an alias for p2!!
+(pEOI) cmp.ne pEOI,p0=r0,r0 // clear pEOI before calling schedule()
+ srlz.d
+(p6) br.call.spnt.many b6=invoke_schedule // ignore return value
+2:
+ // check & deliver pending signals:
+(p2) br.call.spnt.few rp=handle_signal_delivery
+restore_all:
+
+ // start restoring the state saved on the kernel stack (struct pt_regs):
+
+ adds r2=IA64_PT_REGS_R8_OFFSET+16,r12
+ adds r3=IA64_PT_REGS_R8_OFFSET+24,r12
+ ;;
+ ld8.fill r8=[r2],16
+ ld8.fill r9=[r3],16
+ ;;
+ ld8.fill r10=[r2],16
+ ld8.fill r11=[r3],16
+ ;;
+ ld8.fill r16=[r2],16
+ ld8.fill r17=[r3],16
+ ;;
+ ld8.fill r18=[r2],16
+ ld8.fill r19=[r3],16
+ ;;
+ ld8.fill r20=[r2],16
+ ld8.fill r21=[r3],16
+ ;;
+ ld8.fill r22=[r2],16
+ ld8.fill r23=[r3],16
+ ;;
+ ld8.fill r24=[r2],16
+ ld8.fill r25=[r3],16
+ ;;
+ ld8.fill r26=[r2],16
+ ld8.fill r27=[r3],16
+ ;;
+ ld8.fill r28=[r2],16
+ ld8.fill r29=[r3],16
+ ;;
+ ld8.fill r30=[r2],16
+ ld8.fill r31=[r3],16
+ ;;
+ ld8 r1=[r2],16 // ar.ccv
+ ld8 r13=[r3],16 // ar.fpsr
+ ;;
+ ld8 r14=[r2],16 // b0
+ ld8 r15=[r3],16+8 // b7
+ ;;
+ ldf.fill f6=[r2],32
+ ldf.fill f7=[r3],32
+ ;;
+ ldf.fill f8=[r2],32
+ ldf.fill f9=[r3],32
+ ;;
+ mov ar.ccv=r1
+ mov ar.fpsr=r13
+ mov b0=r14
+ // turn off interrupts, interrupt collection, & data translation
+ rsm psr.i | psr.ic | psr.dt
+ ;;
+ srlz.i // EAS 2.5
+ mov b7=r15
+ ;;
+ invala // invalidate ALAT
+ dep r12=0,r12,61,3 // convert sp to physical address
+ bsw.0;; // switch back to bank 0 (must be last in insn group)
+ ;;
+#ifdef CONFIG_ITANIUM_ASTEP_SPECIFIC
+ nop.i 0x0
+ ;;
+ nop.i 0x0
+ ;;
+ nop.i 0x0
+ ;;
+#endif
+ adds r16=16,r12
+ adds r17=24,r12
+ ;;
+ ld8 rCRIPSR=[r16],16 // load cr.ipsr
+ ld8 rCRIIP=[r17],16 // load cr.iip
+ ;;
+ ld8 rCRIFS=[r16],16 // load cr.ifs
+ ld8 rARUNAT=[r17],16 // load ar.unat
+ ;;
+ ld8 rARPFS=[r16],16 // load ar.pfs
+ ld8 rARRSC=[r17],16 // load ar.rsc
+ ;;
+ ld8 rARRNAT=[r16],16 // load ar.rnat (may be garbage)
+ ld8 rARBSPSTORE=[r17],16 // load ar.bspstore (may be garbage)
+ ;;
+ ld8 rARPR=[r16],16 // load predicates
+ ld8 rB6=[r17],16 // load b6
+ ;;
+ ld8 r18=[r16],16 // load ar.rsc value for "loadrs"
+ ld8.fill r1=[r17],16 // load r1
+ ;;
+ ld8.fill r2=[r16],16
+ ld8.fill r3=[r17],16
+ ;;
+ ld8.fill r12=[r16],16
+ ld8.fill r13=[r17],16
+	extr.u r19=rCRIPSR,32,2	// extract psr.cpl
+ ;;
+ ld8.fill r14=[r16],16
+ ld8.fill r15=[r17],16
+ cmp.eq p6,p7=r0,r19 // are we returning to kernel mode? (psr.cpl==0)
+ ;;
+ mov b6=rB6
+ mov ar.pfs=rARPFS
+(p6) br.cond.dpnt.few skip_rbs_switch
+
+ /*
+ * Restore user backing store.
+ *
+ * NOTE: alloc, loadrs, and cover can't be predicated.
+ *
+ * XXX This needs some scheduling/tuning once we believe it
+ * really does work as intended.
+ */
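+	// (Editorial sketch) The arithmetic below, in C terms:
+	//	dirty_bytes = new_bsp - old_bsp;	// bytes added by "cover"
+	//	loadrs_val += dirty_bytes << 16;	// ar.rsc.loadrs is bits 16-29
+	// so that the later "loadrs" also accounts for the frame just covered.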
+ mov r16=ar.bsp // get existing backing store pointer
+(pNonSys) br.cond.dpnt.few dont_preserve_current_frame
+ cover // add current frame into dirty partition
+ ;;
+ mov rCRIFS=cr.ifs // fetch the cr.ifs value that "cover" produced
+ mov r17=ar.bsp // get new backing store pointer
+ ;;
+ sub r16=r17,r16 // calculate number of bytes that were added to rbs
+ ;;
+ shl r16=r16,16 // shift additional frame size into position for loadrs
+ ;;
+ add r18=r16,r18 // adjust the loadrs value
+ ;;
+#ifdef CONFIG_IA64_SOFTSDV_HACKS
+ // Reset ITM if we've missed a timer tick. Workaround for SoftSDV bug
+ mov r16 = r2
+ mov r2 = ar.itc
+ mov r17 = cr.itm
+ ;;
+ cmp.gt p6,p7 = r2, r17
+(p6) addl r17 = 100, r2
+ ;;
+ mov cr.itm = r17
+ mov r2 = r16
+#endif
+dont_preserve_current_frame:
+ alloc r16=ar.pfs,0,0,0,0 // drop the current call frame (noop for syscalls)
+ ;;
+ mov ar.rsc=r18 // load ar.rsc to be used for "loadrs"
+#ifdef CONFIG_IA32_SUPPORT
+ tbit.nz p6,p0=rCRIPSR,IA64_PSR_IS_BIT
+ ;;
+(p6) mov ar.rsc=r0 // returning to IA32 mode
+#endif
+ ;;
+ loadrs
+ ;;
+ mov ar.bspstore=rARBSPSTORE
+ ;;
+ mov ar.rnat=rARRNAT // must happen with RSE in lazy mode
+
+skip_rbs_switch:
+ mov ar.rsc=rARRSC
+ mov ar.unat=rARUNAT
+ mov cr.ifs=rCRIFS // restore cr.ifs only if not a (synchronous) syscall
+(pEOI) mov cr.eoi=r0
+ mov pr=rARPR,-1
+ mov cr.iip=rCRIIP
+ mov cr.ipsr=rCRIPSR
+ ;;
+ rfi;; // must be last instruction in an insn group
+
+handle_syscall_error:
+ /*
+ * Some system calls (e.g., ptrace, mmap) can return arbitrary
+ * values which could lead us to mistake a negative return
+	 * value for a failed syscall.  Those syscalls must deposit
+ * a non-zero value in pt_regs.r8 to indicate an error.
+ * If pt_regs.r8 is zero, we assume that the call completed
+ * successfully.
+ */
+ ld8 r3=[r2] // load pt_regs.r8
+ sub r9=0,r8 // negate return value to get errno
+ ;;
+ mov r10=-1 // return -1 in pt_regs.r10 to indicate error
+ cmp.eq p6,p7=r3,r0 // is pt_regs.r8==0?
+ adds r3=16,r2 // r3=&pt_regs.r10
+ ;;
+(p6) mov r9=r8
+(p6) mov r10=0
+ ;;
+ st8.spill [r2]=r9 // store errno in pt_regs.r8 and set unat bit
+ st8.spill [r3]=r10 // store error indication in pt_regs.r10 and set unat bit
+ br.cond.sptk.many ia64_leave_kernel
+	.endp ia64_ret_from_syscall
+
+#ifdef CONFIG_SMP
+ /*
+ * Invoke schedule_tail(task) while preserving in0-in7, which may be needed
+ * in case a system call gets restarted.
+ */
+ .proc invoke_schedule_tail
+invoke_schedule_tail:
+ alloc loc0=ar.pfs,8,2,1,0
+ mov loc1=rp
+ mov out0=r8 // Address of previous task
+ ;;
+ br.call.sptk.few rp=schedule_tail
+.ret8:
+ mov ar.pfs=loc0
+ mov rp=loc1
+ br.ret.sptk.many rp
+ .endp invoke_schedule_tail
+#endif /* CONFIG_SMP */
+
+ /*
+ * Invoke do_bottom_half() while preserving in0-in7, which may be needed
+ * in case a system call gets restarted.
+ */
+ .proc invoke_do_bottom_half
+invoke_do_bottom_half:
+ alloc loc0=ar.pfs,8,2,0,0
+ mov loc1=rp
+ ;;
+ br.call.sptk.few rp=do_bottom_half
+.ret9:
+ mov ar.pfs=loc0
+ mov rp=loc1
+ br.ret.sptk.many rp
+ .endp invoke_do_bottom_half
+
+ /*
+ * Invoke schedule() while preserving in0-in7, which may be needed
+ * in case a system call gets restarted.
+ */
+ .proc invoke_schedule
+invoke_schedule:
+ alloc loc0=ar.pfs,8,2,0,0
+ mov loc1=rp
+ ;;
+ br.call.sptk.few rp=schedule
+.ret10:
+ mov ar.pfs=loc0
+ mov rp=loc1
+ br.ret.sptk.many rp
+ .endp invoke_schedule
+
+ //
+	// Set up the stack and call ia64_do_signal.  Note that pSys and pNonSys need to
+ // be set up by the caller. We declare 8 input registers so the system call
+ // args get preserved, in case we need to restart a system call.
+ //
+ .align 16
+ .proc handle_signal_delivery
+handle_signal_delivery:
+ alloc loc0=ar.pfs,8,2,3,0 // preserve all eight input regs in case of syscall restart!
+ mov r9=ar.unat
+
+ // If the process is being ptraced, the signal may not actually be delivered to
+ // the process. Instead, SIGCHLD will be sent to the parent. We need to
+	// set up a switch_stack so ptrace can inspect the process's state if necessary.
+ adds r2=IA64_TASK_FLAGS_OFFSET,r13
+ ;;
+ ld8 r2=[r2]
+ mov out0=0 // there is no "oldset"
+ adds out1=16,sp // out1=&pt_regs
+ ;;
+(pSys) mov out2=1 // out2==1 => we're in a syscall
+ tbit.nz p16,p17=r2,PF_PTRACED_BIT
+(p16) br.cond.spnt.many setup_switch_stack
+ ;;
+back_from_setup_switch_stack:
+(pNonSys) mov out2=0 // out2==0 => not a syscall
+ adds r3=-IA64_SWITCH_STACK_SIZE+IA64_SWITCH_STACK_CALLER_UNAT_OFFSET+16,sp
+(p17) adds sp=-IA64_SWITCH_STACK_SIZE,sp // make space for (dummy) switch_stack
+ ;;
+(p17) st8 [r3]=r9 // save ar.unat in sw->caller_unat
+ mov loc1=rp // save return address
+ br.call.sptk.few rp=ia64_do_signal
+.ret11:
+ adds r3=IA64_SWITCH_STACK_CALLER_UNAT_OFFSET+16,sp
+ ;;
+ ld8 r9=[r3] // load new unat from sw->caller_unat
+ mov rp=loc1
+ ;;
+(p17) adds sp=IA64_SWITCH_STACK_SIZE,sp // drop (dummy) switch_stack
+(p17) mov ar.unat=r9
+(p17) mov ar.pfs=loc0
+(p17) br.ret.sptk.many rp
+
+ // restore the switch stack (ptrace may have modified it):
+ movl r28=1f
+ br.cond.sptk.many load_switch_stack
+1: br.ret.sptk.many rp
+ // NOT REACHED
+
+setup_switch_stack:
+ movl r28=back_from_setup_switch_stack
+ mov r16=loc0
+ br.cond.sptk.many save_switch_stack
+ // NOT REACHED
+
+ .endp handle_signal_delivery
+
+ .align 16
+ .proc sys_rt_sigsuspend
+ .global sys_rt_sigsuspend
+sys_rt_sigsuspend:
+ alloc loc0=ar.pfs,2,2,3,0
+ mov r9=ar.unat
+
+ // If the process is being ptraced, the signal may not actually be delivered to
+ // the process. Instead, SIGCHLD will be sent to the parent. We need to
+	// set up a switch_stack so ptrace can inspect the process's state if necessary.
+ adds r2=IA64_TASK_FLAGS_OFFSET,r13
+ ;;
+ ld8 r2=[r2]
+ mov out0=in0 // mask
+ mov out1=in1 // sigsetsize
+ ;;
+	adds out2=16,sp				// out2=&pt_regs
+ tbit.nz p16,p17=r2,PF_PTRACED_BIT
+(p16) br.cond.spnt.many sigsuspend_setup_switch_stack
+ ;;
+back_from_sigsuspend_setup_switch_stack:
+ adds r3=-IA64_SWITCH_STACK_SIZE+IA64_SWITCH_STACK_CALLER_UNAT_OFFSET+16,sp
+(p17) adds sp=-IA64_SWITCH_STACK_SIZE,sp // make space for (dummy) switch_stack
+ ;;
+(p17) st8 [r3]=r9 // save ar.unat in sw->caller_unat
+ mov loc1=rp // save return address
+ br.call.sptk.many rp=ia64_rt_sigsuspend
+.ret12:
+ adds r3=IA64_SWITCH_STACK_CALLER_UNAT_OFFSET+16,sp
+ ;;
+ ld8 r9=[r3] // load new unat from sw->caller_unat
+ mov rp=loc1
+ ;;
+(p17) adds sp=IA64_SWITCH_STACK_SIZE,sp // drop (dummy) switch_stack
+(p17) mov ar.unat=r9
+(p17) mov ar.pfs=loc0
+(p17) br.ret.sptk.many rp
+
+ // restore the switch stack (ptrace may have modified it):
+ movl r28=1f
+ br.cond.sptk.many load_switch_stack
+1: br.ret.sptk.many rp
+ // NOT REACHED
+
+sigsuspend_setup_switch_stack:
+ movl r28=back_from_sigsuspend_setup_switch_stack
+ mov r16=loc0
+ br.cond.sptk.many save_switch_stack
+ // NOT REACHED
+
+ .endp sys_rt_sigsuspend
+
+ .align 16
+ .proc sys_rt_sigreturn
+sys_rt_sigreturn:
+ alloc loc0=ar.pfs,8,1,1,0 // preserve all eight input regs in case of syscall restart!
+ adds out0=16,sp // out0 = &pt_regs
+ ;;
+ adds sp=-IA64_SWITCH_STACK_SIZE,sp // make space for unat and padding
+ br.call.sptk.few rp=ia64_rt_sigreturn
+.ret13:
+ adds r3=IA64_SWITCH_STACK_CALLER_UNAT_OFFSET+16,sp
+ ;;
+ ld8 r9=[r3] // load new ar.unat
+ mov rp=r8
+ ;;
+ adds sp=IA64_SWITCH_STACK_SIZE,sp // drop (dummy) switch-stack frame
+ mov ar.unat=r9
+ mov ar.pfs=loc0
+ br.ret.sptk.many rp
+ .endp sys_rt_sigreturn
+
+ .align 16
+ .global ia64_prepare_handle_unaligned
+ .proc ia64_prepare_handle_unaligned
+ia64_prepare_handle_unaligned:
+ movl r28=1f
+ //
+ // r16 = fake ar.pfs, we simply need to make sure
+ // privilege is still 0
+ //
+ mov r16=r0
+ br.cond.sptk.few save_switch_stack
+1: br.call.sptk.few rp=ia64_handle_unaligned // stack frame setup in ivt
+.ret14:
+ movl r28=2f
+ br.cond.sptk.many load_switch_stack
+2: br.cond.sptk.many rp // goes to ia64_leave_kernel
+ .endp ia64_prepare_handle_unaligned
+
+#ifdef CONFIG_KDB
+ //
+ // This gets called from ivt.S with:
+ // SAVE MIN with cover done
+ // SAVE REST done
+ // no parameters
+ // r15 has return value = ia64_leave_kernel
+ //
+ .align 16
+ .global ia64_invoke_kdb
+ .proc ia64_invoke_kdb
+ia64_invoke_kdb:
+ alloc r16=ar.pfs,0,0,4,0
+ movl r28=1f // save_switch_stack protocol
+ ;; // avoid WAW on CFM
+ br.cond.sptk.many save_switch_stack // to flushrs
+1: mov out0=4 // kdb entry reason
+ mov out1=0 // err number
+ adds out2=IA64_SWITCH_STACK_SIZE+16,sp // pt_regs
+ add out3=16,sp // switch_stack
+ br.call.sptk.few rp=kdb
+.ret15:
+ movl r28=1f // load_switch_stack proto
+ br.cond.sptk.many load_switch_stack
+1: br.ret.sptk.many rp
+ .endp ia64_invoke_kdb
+
+ //
+ // When KDB is compiled in, we intercept each fault and give
+ // kdb a chance to run before calling the normal fault handler.
+ //
+ .align 16
+ .global ia64_invoke_kdb_fault_handler
+ .proc ia64_invoke_kdb_fault_handler
+ia64_invoke_kdb_fault_handler:
+ alloc r16=ar.pfs,5,1,5,0
+ movl r28=1f
+ mov loc0=rp // save this
+ br.cond.sptk.many save_switch_stack // to flushrs
+ ;; // avoid WAW on CFM
+1: mov out0=in0 // vector number
+ mov out1=in1 // cr.isr
+ mov out2=in2 // cr.ifa
+ mov out3=in3 // cr.iim
+ mov out4=in4 // cr.itir
+ br.call.sptk.few rp=ia64_kdb_fault_handler
+.ret16:
+
+ movl r28=1f
+ br.cond.sptk.many load_switch_stack
+1: cmp.ne p6,p0=r8,r0 // did ia64_kdb_fault_handler return 0?
+ mov rp=loc0
+(p6) br.ret.spnt.many rp // no, we're done
+ ;; // avoid WAW on rp
+ mov out0=in0 // vector number
+ mov out1=in1 // cr.isr
+ mov out2=in2 // cr.ifa
+ mov out3=in3 // cr.iim
+ mov out4=in4 // cr.itir
+ mov in0=ar.pfs // preserve ar.pfs returned by load_switch_stack
+ br.call.sptk.few rp=ia64_fault // yup -> we need to invoke normal fault handler now
+.ret17:
+ mov ar.pfs=in0
+ mov rp=loc0
+ br.ret.sptk.many rp
+
+ .endp ia64_invoke_kdb_fault_handler
+
+#endif /* CONFIG_KDB */
+
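+	/*
+	 * (Editorial sketch) IA-64 syscall numbers start at 1024 (note the
+	 * "// 1025" annotation on sys_exit below), so the dispatch done in
+	 * ivt.S is roughly the following C; the bounds check and the
+	 * NR_syscalls constant are illustrative assumptions:
+	 *
+	 *	typedef long (*ia64_syscall_t) (long, long, long, long, long);
+	 *	extern ia64_syscall_t sys_call_table[];
+	 *
+	 *	long dispatch (long nr, long a0, long a1, long a2, long a3, long a4)
+	 *	{
+	 *		if (nr < 1024 || nr >= 1024 + NR_syscalls)
+	 *			return -ENOSYS;
+	 *		return sys_call_table[nr - 1024](a0, a1, a2, a3, a4);
+	 *	}
+	 */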
+ .rodata
+ .align 8
+ .globl sys_call_table
+sys_call_table:
+ data8 sys_ni_syscall // This must be sys_ni_syscall! See ivt.S.
+ data8 sys_exit // 1025
+ data8 sys_read
+ data8 sys_write
+ data8 sys_open
+ data8 sys_close
+ data8 sys_creat // 1030
+ data8 sys_link
+ data8 sys_unlink
+ data8 ia64_execve
+ data8 sys_chdir
+ data8 sys_fchdir // 1035
+ data8 sys_utimes
+ data8 sys_mknod
+ data8 sys_chmod
+ data8 sys_chown
+ data8 sys_lseek // 1040
+ data8 sys_getpid
+ data8 sys_getppid
+ data8 sys_mount
+ data8 sys_umount
+ data8 sys_setuid // 1045
+ data8 sys_getuid
+ data8 sys_geteuid
+ data8 sys_ptrace
+ data8 sys_access
+ data8 sys_sync // 1050
+ data8 sys_fsync
+ data8 sys_fdatasync
+ data8 sys_kill
+ data8 sys_rename
+ data8 sys_mkdir // 1055
+ data8 sys_rmdir
+ data8 sys_dup
+ data8 sys_pipe
+ data8 sys_times
+ data8 ia64_brk // 1060
+ data8 sys_setgid
+ data8 sys_getgid
+ data8 sys_getegid
+ data8 sys_acct
+ data8 sys_ioctl // 1065
+ data8 sys_fcntl
+ data8 sys_umask
+ data8 sys_chroot
+ data8 sys_ustat
+ data8 sys_dup2 // 1070
+ data8 sys_setreuid
+ data8 sys_setregid
+ data8 sys_getresuid
+ data8 sys_setresuid
+ data8 sys_getresgid // 1075
+ data8 sys_setresgid
+ data8 sys_getgroups
+ data8 sys_setgroups
+ data8 sys_getpgid
+ data8 sys_setpgid // 1080
+ data8 sys_setsid
+ data8 sys_getsid
+ data8 sys_sethostname
+ data8 sys_setrlimit
+ data8 sys_getrlimit // 1085
+ data8 sys_getrusage
+ data8 sys_gettimeofday
+ data8 sys_settimeofday
+ data8 sys_select
+ data8 sys_poll // 1090
+ data8 sys_symlink
+ data8 sys_readlink
+ data8 sys_uselib
+ data8 sys_swapon
+ data8 sys_swapoff // 1095
+ data8 sys_reboot
+ data8 sys_truncate
+ data8 sys_ftruncate
+ data8 sys_fchmod
+ data8 sys_fchown // 1100
+ data8 ia64_getpriority
+ data8 sys_setpriority
+ data8 sys_statfs
+ data8 sys_fstatfs
+ data8 sys_ioperm // 1105
+ data8 sys_semget
+ data8 sys_semop
+ data8 sys_semctl
+ data8 sys_msgget
+ data8 sys_msgsnd // 1110
+ data8 sys_msgrcv
+ data8 sys_msgctl
+ data8 sys_shmget
+ data8 ia64_shmat
+ data8 sys_shmdt // 1115
+ data8 sys_shmctl
+ data8 sys_syslog
+ data8 sys_setitimer
+ data8 sys_getitimer
+ data8 sys_newstat // 1120
+ data8 sys_newlstat
+ data8 sys_newfstat
+ data8 sys_vhangup
+ data8 sys_lchown
+ data8 sys_vm86 // 1125
+ data8 sys_wait4
+ data8 sys_sysinfo
+ data8 sys_clone
+ data8 sys_setdomainname
+ data8 sys_newuname // 1130
+ data8 sys_adjtimex
+ data8 sys_create_module
+ data8 sys_init_module
+ data8 sys_delete_module
+ data8 sys_get_kernel_syms // 1135
+ data8 sys_query_module
+ data8 sys_quotactl
+ data8 sys_bdflush
+ data8 sys_sysfs
+ data8 sys_personality // 1140
+ data8 ia64_ni_syscall // sys_afs_syscall
+ data8 sys_setfsuid
+ data8 sys_setfsgid
+ data8 sys_getdents
+ data8 sys_flock // 1145
+ data8 sys_readv
+ data8 sys_writev
+ data8 sys_pread
+ data8 sys_pwrite
+ data8 sys_sysctl // 1150
+ data8 sys_mmap
+ data8 sys_munmap
+ data8 sys_mlock
+ data8 sys_mlockall
+ data8 sys_mprotect // 1155
+ data8 sys_mremap
+ data8 sys_msync
+ data8 sys_munlock
+ data8 sys_munlockall
+ data8 sys_sched_getparam // 1160
+ data8 sys_sched_setparam
+ data8 sys_sched_getscheduler
+ data8 sys_sched_setscheduler
+ data8 sys_sched_yield
+ data8 sys_sched_get_priority_max // 1165
+ data8 sys_sched_get_priority_min
+ data8 sys_sched_rr_get_interval
+ data8 sys_nanosleep
+ data8 sys_nfsservctl
+ data8 sys_prctl // 1170
+ data8 sys_getpagesize
+ data8 sys_mmap2
+ data8 sys_pciconfig_read
+ data8 sys_pciconfig_write
+ data8 sys_perfmonctl // 1175
+ data8 sys_sigaltstack
+ data8 sys_rt_sigaction
+ data8 sys_rt_sigpending
+ data8 sys_rt_sigprocmask
+ data8 sys_rt_sigqueueinfo // 1180
+ data8 sys_rt_sigreturn
+ data8 sys_rt_sigsuspend
+ data8 sys_rt_sigtimedwait
+ data8 sys_getcwd
+ data8 sys_capget // 1185
+ data8 sys_capset
+ data8 sys_sendfile
+ data8 sys_ni_syscall // sys_getpmsg (STREAMS)
+ data8 sys_ni_syscall // sys_putpmsg (STREAMS)
+ data8 sys_socket // 1190
+ data8 sys_bind
+ data8 sys_connect
+ data8 sys_listen
+ data8 sys_accept
+ data8 sys_getsockname // 1195
+ data8 sys_getpeername
+ data8 sys_socketpair
+ data8 sys_send
+ data8 sys_sendto
+ data8 sys_recv // 1200
+ data8 sys_recvfrom
+ data8 sys_shutdown
+ data8 sys_setsockopt
+ data8 sys_getsockopt
+ data8 sys_sendmsg // 1205
+ data8 sys_recvmsg
+ data8 sys_pivot_root
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall // 1210
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall // 1215
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall // 1220
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall // 1225
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall // 1230
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall // 1235
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall // 1240
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall // 1245
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall // 1250
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall // 1255
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall // 1260
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall // 1265
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall // 1270
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall // 1275
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+ data8 ia64_ni_syscall
+
--- /dev/null
+/*
+ * Preserved registers that are shared between code in ivt.S and entry.S. Be
+ * careful not to step on these!
+ */
+#define pEOI p1 /* should leave_kernel write EOI? */
+#define pKern p2 /* will leave_kernel return to kernel-mode? */
+#define pSys p4 /* are we processing a (synchronous) system call? */
+#define pNonSys p5 /* complement of pSys */
--- /dev/null
+/*
+ * PAL & SAL emulation.
+ *
+ * Copyright (C) 1998-2000 Hewlett-Packard Co
+ * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * For the HP simulator, this file gets included in boot/bootloader.c.
+ * For SoftSDV, this file gets included in sys_softsdv.c.
+ */
+#include <linux/config.h>
+
+#ifdef CONFIG_PCI
+# include <linux/pci.h>
+#endif
+
+#include <asm/efi.h>
+#include <asm/io.h>
+#include <asm/pal.h>
+#include <asm/sal.h>
+
+#define MB (1024*1024UL)
+
+#define NUM_MEM_DESCS 3
+
+static char fw_mem[( sizeof(efi_system_table_t)
+ + sizeof(efi_runtime_services_t)
+ + 1*sizeof(efi_config_table_t)
+ + sizeof(struct ia64_sal_systab)
+ + sizeof(struct ia64_sal_desc_entry_point)
+ + NUM_MEM_DESCS*(sizeof(efi_memory_desc_t))
+ + 1024)] __attribute__ ((aligned (8)));
+
+#ifdef CONFIG_IA64_HP_SIM
+
+/* Simulator system calls: */
+
+#define SSC_EXIT 66
+
+/*
+ * Simulator system call.
+ */
+static long
+ssc (long arg0, long arg1, long arg2, long arg3, int nr)
+{
+ register long r8 asm ("r8");
+
+ asm volatile ("mov r15=%1\n\t"
+ "break 0x80001"
+ : "=r"(r8)
+ : "r"(nr), "r"(arg0), "r"(arg1), "r"(arg2), "r"(arg3));
+ return r8;
+}
+
+#define SECS_PER_HOUR (60 * 60)
+#define SECS_PER_DAY (SECS_PER_HOUR * 24)
+
+/* Compute the `struct tm' representation of *T,
+ offset OFFSET seconds east of UTC,
+ and store year, yday, mon, mday, wday, hour, min, sec into *TP.
+ Return nonzero if successful. */
+int
+offtime (unsigned long t, efi_time_t *tp)
+{
+ const unsigned short int __mon_yday[2][13] =
+ {
+ /* Normal years. */
+ { 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365 },
+ /* Leap years. */
+ { 0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366 }
+ };
+ long int days, rem, y;
+ const unsigned short int *ip;
+
+ days = t / SECS_PER_DAY;
+ rem = t % SECS_PER_DAY;
+ while (rem < 0) {
+ rem += SECS_PER_DAY;
+ --days;
+ }
+ while (rem >= SECS_PER_DAY) {
+ rem -= SECS_PER_DAY;
+ ++days;
+ }
+ tp->hour = rem / SECS_PER_HOUR;
+ rem %= SECS_PER_HOUR;
+ tp->minute = rem / 60;
+ tp->second = rem % 60;
+ /* January 1, 1970 was a Thursday. */
+ y = 1970;
+
+# define DIV(a, b) ((a) / (b) - ((a) % (b) < 0))
+# define LEAPS_THRU_END_OF(y) (DIV (y, 4) - DIV (y, 100) + DIV (y, 400))
+# define __isleap(year) \
+ ((year) % 4 == 0 && ((year) % 100 != 0 || (year) % 400 == 0))
+
+ while (days < 0 || days >= (__isleap (y) ? 366 : 365)) {
+ /* Guess a corrected year, assuming 365 days per year. */
+ long int yg = y + days / 365 - (days % 365 < 0);
+
+ /* Adjust DAYS and Y to match the guessed year. */
+ days -= ((yg - y) * 365 + LEAPS_THRU_END_OF (yg - 1)
+ - LEAPS_THRU_END_OF (y - 1));
+ y = yg;
+ }
+ tp->year = y;
+ ip = __mon_yday[__isleap(y)];
+ for (y = 11; days < (long int) ip[y]; --y)
+ continue;
+ days -= ip[y];
+ tp->month = y + 1;
+ tp->day = days + 1;
+ return 1;
+}
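+
+/*
+ * (Editorial sketch) A quick sanity check of offtime(), not part of the
+ * build; the second timestamp sits just before a leap-day boundary:
+ *
+ *	efi_time_t tm;
+ *
+ *	offtime(0, &tm);		// => 1970-01-01 00:00:00
+ *	offtime(951782399, &tm);	// => 2000-02-28 23:59:59 (2000 is a leap year)
+ */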
+
+#endif /* CONFIG_IA64_HP_SIM */
+
+/*
+ * Very ugly, but we need this in the simulator only. Once we run on
+ * real hw, this can all go away.
+ */
+extern void pal_emulator_static (void);
+
+asm ("
+ .proc pal_emulator_static
+pal_emulator_static:
+ mov r8=-1
+ cmp.eq p6,p7=6,r28 /* PAL_PTCE_INFO */
+(p7) br.cond.sptk.few 1f
+ ;;
+ mov r8=0 /* status = 0 */
+ movl r9=0x100000000 /* tc.base */
+ movl r10=0x0000000200000003 /* count[0], count[1] */
+ movl r11=0x1000000000002000 /* stride[0], stride[1] */
+ br.cond.sptk.few rp
+
+1: cmp.eq p6,p7=14,r28 /* PAL_FREQ_RATIOS */
+(p7) br.cond.sptk.few 1f
+ mov r8=0 /* status = 0 */
+ movl r9 =0x100000064 /* proc_ratio (1/100) */
+ movl r10=0x100000100 /* bus_ratio<<32 (1/256) */
+ movl r11=0x100000064 /* itc_ratio<<32 (1/100) */
+1: br.cond.sptk.few rp
+ .endp pal_emulator_static\n");
+
+/* Macro to emulate SAL call using legacy IN and OUT calls to CF8, CFC etc.. */
+
+#define BUILD_CMD(addr) ((0x80000000 | (addr)) & ~3)
+
+#define REG_OFFSET(addr) (0x00000000000000FF & (addr))
+#define DEVICE_FUNCTION(addr) (0x000000000000FF00 & (addr))
+#define BUS_NUMBER(addr) (0x0000000000FF0000 & (addr))
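+
+/*
+ * (Editorial example) For bus 0, device 3, function 1, register 0x0c,
+ * assuming the usual type-1 dev/fn split within DEVICE_FUNCTION:
+ *
+ *	addr             = (0 << 16) | (3 << 11) | (1 << 8) | 0x0c = 0x190c
+ *	BUILD_CMD(addr)  = (0x80000000 | 0x190c) & ~3 = 0x8000190c -> port 0xCF8
+ *	REG_OFFSET(addr) = 0x0c; dword-aligned, so data moves through 0xCFC
+ */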
+
+static efi_status_t
+efi_get_time (efi_time_t *tm, efi_time_cap_t *tc)
+{
+#ifdef CONFIG_IA64_HP_SIM
+ struct {
+ int tv_sec; /* must be 32bits to work */
+ int tv_usec;
+ } tv32bits;
+
+ ssc((unsigned long) &tv32bits, 0, 0, 0, SSC_GET_TOD);
+
+ memset(tm, 0, sizeof(*tm));
+ offtime(tv32bits.tv_sec, tm);
+
+ if (tc)
+ memset(tc, 0, sizeof(*tc));
+#else
+# error Not implemented yet...
+#endif
+ return EFI_SUCCESS;
+}
+
+static void
+efi_reset_system (int reset_type, efi_status_t status, unsigned long data_size, efi_char16_t *data)
+{
+#ifdef CONFIG_IA64_HP_SIM
+ ssc(status, 0, 0, 0, SSC_EXIT);
+#else
+# error Not implemented yet...
+#endif
+}
+
+static efi_status_t
+efi_unimplemented (void)
+{
+ return EFI_UNSUPPORTED;
+}
+
+static long
+sal_emulator (long index, unsigned long in1, unsigned long in2,
+ unsigned long in3, unsigned long in4, unsigned long in5,
+ unsigned long in6, unsigned long in7)
+{
+ register long r9 asm ("r9") = 0;
+ register long r10 asm ("r10") = 0;
+ register long r11 asm ("r11") = 0;
+ long status;
+
+ /*
+ * Don't do a "switch" here since that gives us code that
+ * isn't self-relocatable.
+ */
+ status = 0;
+ if (index == SAL_FREQ_BASE) {
+ switch (in1) {
+ case SAL_FREQ_BASE_PLATFORM:
+ r9 = 100000000;
+ break;
+
+ case SAL_FREQ_BASE_INTERVAL_TIMER:
+ /*
+ * Is this supposed to be the cr.itc frequency
+ * or something platform specific? The SAL
+ * doc ain't exactly clear on this...
+ */
+#if defined(CONFIG_IA64_SOFTSDV_HACKS)
+ r9 = 4000000;
+#elif defined(CONFIG_IA64_SDV)
+ r9 = 300000000;
+#else
+ r9 = 700000000;
+#endif
+ break;
+
+ case SAL_FREQ_BASE_REALTIME_CLOCK:
+ r9 = 1;
+ break;
+
+ default:
+ status = -1;
+ break;
+ }
+ } else if (index == SAL_SET_VECTORS) {
+ ;
+ } else if (index == SAL_GET_STATE_INFO) {
+ ;
+ } else if (index == SAL_GET_STATE_INFO_SIZE) {
+ ;
+ } else if (index == SAL_CLEAR_STATE_INFO) {
+ ;
+ } else if (index == SAL_MC_RENDEZ) {
+ ;
+ } else if (index == SAL_MC_SET_PARAMS) {
+ ;
+ } else if (index == SAL_CACHE_FLUSH) {
+ ;
+ } else if (index == SAL_CACHE_INIT) {
+ ;
+#ifdef CONFIG_PCI
+ } else if (index == SAL_PCI_CONFIG_READ) {
+ /*
+ * in1 contains the PCI configuration address and in2
+ * the size of the read. The value that is read is
+ * returned via the general register r9.
+ */
+ outl(BUILD_CMD(in1), 0xCF8);
+ if (in2 == 1) /* Reading byte */
+ r9 = inb(0xCFC + ((REG_OFFSET(in1) & 3)));
+ else if (in2 == 2) /* Reading word */
+ r9 = inw(0xCFC + ((REG_OFFSET(in1) & 2)));
+ else /* Reading dword */
+ r9 = inl(0xCFC);
+ status = PCIBIOS_SUCCESSFUL;
+ } else if (index == SAL_PCI_CONFIG_WRITE) {
+ /*
+ * in1 contains the PCI configuration address, in2 the
+ * size of the write, and in3 the actual value to be
+ * written out.
+ */
+ outl(BUILD_CMD(in1), 0xCF8);
+ if (in2 == 1) /* Writing byte */
+ outb(in3, 0xCFC + ((REG_OFFSET(in1) & 3)));
+ else if (in2 == 2) /* Writing word */
+ outw(in3, 0xCFC + ((REG_OFFSET(in1) & 2)));
+ else /* Writing dword */
+ outl(in3, 0xCFC);
+ status = PCIBIOS_SUCCESSFUL;
+#endif /* CONFIG_PCI */
+ } else if (index == SAL_UPDATE_PAL) {
+ ;
+ } else {
+ status = -1;
+ }
+ asm volatile ("" :: "r"(r9), "r"(r10), "r"(r11));
+ return status;
+}
+
+
+/*
+ * This is here to work around a bug in egcs-1.1.1b that causes the
+ * compiler to crash (seems like a bug in the new alias analysis code.
+ */
+void *
+id (long addr)
+{
+ return (void *) addr;
+}
+
+void
+sys_fw_init (const char *args, int arglen)
+{
+ efi_system_table_t *efi_systab;
+ efi_runtime_services_t *efi_runtime;
+ efi_config_table_t *efi_tables;
+ struct ia64_sal_systab *sal_systab;
+ efi_memory_desc_t *efi_memmap, *md;
+ unsigned long *pal_desc, *sal_desc;
+ struct ia64_sal_desc_entry_point *sal_ed;
+ struct ia64_boot_param *bp;
+ unsigned char checksum = 0;
+ char *cp, *cmd_line;
+
+ memset(fw_mem, 0, sizeof(fw_mem));
+
+ pal_desc = (unsigned long *) &pal_emulator_static;
+ sal_desc = (unsigned long *) &sal_emulator;
+
+ cp = fw_mem;
+ efi_systab = (void *) cp; cp += sizeof(*efi_systab);
+ efi_runtime = (void *) cp; cp += sizeof(*efi_runtime);
+ efi_tables = (void *) cp; cp += sizeof(*efi_tables);
+ sal_systab = (void *) cp; cp += sizeof(*sal_systab);
+ sal_ed = (void *) cp; cp += sizeof(*sal_ed);
+ efi_memmap = (void *) cp; cp += NUM_MEM_DESCS*sizeof(*efi_memmap);
+ cmd_line = (void *) cp;
+
+ if (args) {
+ if (arglen >= 1024)
+ arglen = 1023;
+ memcpy(cmd_line, args, arglen);
+ } else {
+ arglen = 0;
+ }
+ cmd_line[arglen] = '\0';
+
+	memset(efi_systab, 0, sizeof(*efi_systab));
+ efi_systab->hdr.signature = EFI_SYSTEM_TABLE_SIGNATURE;
+ efi_systab->hdr.revision = EFI_SYSTEM_TABLE_REVISION;
+ efi_systab->hdr.headersize = sizeof(efi_systab->hdr);
+ efi_systab->fw_vendor = __pa("H\0e\0w\0l\0e\0t\0t\0-\0P\0a\0c\0k\0a\0r\0d\0\0");
+ efi_systab->fw_revision = 1;
+ efi_systab->runtime = __pa(efi_runtime);
+ efi_systab->nr_tables = 1;
+ efi_systab->tables = __pa(efi_tables);
+
+ efi_runtime->hdr.signature = EFI_RUNTIME_SERVICES_SIGNATURE;
+ efi_runtime->hdr.revision = EFI_RUNTIME_SERVICES_REVISION;
+ efi_runtime->hdr.headersize = sizeof(efi_runtime->hdr);
+ efi_runtime->get_time = __pa(&efi_get_time);
+ efi_runtime->set_time = __pa(&efi_unimplemented);
+ efi_runtime->get_wakeup_time = __pa(&efi_unimplemented);
+ efi_runtime->set_wakeup_time = __pa(&efi_unimplemented);
+ efi_runtime->set_virtual_address_map = __pa(&efi_unimplemented);
+ efi_runtime->get_variable = __pa(&efi_unimplemented);
+ efi_runtime->get_next_variable = __pa(&efi_unimplemented);
+ efi_runtime->set_variable = __pa(&efi_unimplemented);
+ efi_runtime->get_next_high_mono_count = __pa(&efi_unimplemented);
+ efi_runtime->reset_system = __pa(&efi_reset_system);
+
+ efi_tables->guid = SAL_SYSTEM_TABLE_GUID;
+ efi_tables->table = __pa(sal_systab);
+
+ /* fill in the SAL system table: */
+ memcpy(sal_systab->signature, "SST_", 4);
+ sal_systab->size = sizeof(*sal_systab);
+ sal_systab->sal_rev_minor = 1;
+ sal_systab->sal_rev_major = 0;
+ sal_systab->entry_count = 1;
+ sal_systab->ia32_bios_present = 0;
+
+#ifdef CONFIG_IA64_GENERIC
+ strcpy(sal_systab->oem_id, "Generic");
+ strcpy(sal_systab->product_id, "IA-64 system");
+#endif
+
+#ifdef CONFIG_IA64_HP_SIM
+ strcpy(sal_systab->oem_id, "Hewlett-Packard");
+ strcpy(sal_systab->product_id, "HP-simulator");
+#endif
+
+#ifdef CONFIG_IA64_SDV
+ strcpy(sal_systab->oem_id, "Intel");
+ strcpy(sal_systab->product_id, "SDV");
+#endif
+
+#ifdef CONFIG_IA64_SGI_SN1_SIM
+ strcpy(sal_systab->oem_id, "SGI");
+ strcpy(sal_systab->product_id, "SN1");
+#endif
+
+ /* fill in an entry point: */
+ sal_ed->type = SAL_DESC_ENTRY_POINT;
+ sal_ed->pal_proc = __pa(pal_desc[0]);
+ sal_ed->sal_proc = __pa(sal_desc[0]);
+ sal_ed->gp = __pa(sal_desc[1]);
+
+ for (cp = (char *) sal_systab; cp < (char *) efi_memmap; ++cp)
+ checksum += *cp;
+
+ sal_systab->checksum = -checksum;
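+	/*
+	 * (Editorial note) The byte-wise sum over [sal_systab, efi_memmap)
+	 * now comes out to 0 modulo 256: the range was summed while the
+	 * checksum field was still zero (fw_mem was memset above), so
+	 * storing -checksum cancels the total.  Illustrative re-check:
+	 *
+	 *	unsigned char sum = 0;
+	 *	for (cp = (char *) sal_systab; cp < (char *) efi_memmap; ++cp)
+	 *		sum += *cp;
+	 *	// sum == 0 here
+	 */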
+
+ /* fill in a memory descriptor: */
+ md = &efi_memmap[0];
+ md->type = EFI_CONVENTIONAL_MEMORY;
+ md->pad = 0;
+ md->phys_addr = 2*MB;
+ md->virt_addr = 0;
+ md->num_pages = (64*MB) >> 12; /* 64MB (in 4KB pages) */
+ md->attribute = EFI_MEMORY_WB;
+
+ /* descriptor for firmware emulator: */
+ md = &efi_memmap[1];
+ md->type = EFI_RUNTIME_SERVICES_DATA;
+ md->pad = 0;
+ md->phys_addr = 1*MB;
+ md->virt_addr = 0;
+ md->num_pages = (1*MB) >> 12; /* 1MB (in 4KB pages) */
+ md->attribute = EFI_MEMORY_WB;
+
+ /* descriptor for high memory (>4GB): */
+ md = &efi_memmap[2];
+ md->type = EFI_CONVENTIONAL_MEMORY;
+ md->pad = 0;
+ md->phys_addr = 4096*MB;
+ md->virt_addr = 0;
+ md->num_pages = (32*MB) >> 12; /* 32MB (in 4KB pages) */
+ md->attribute = EFI_MEMORY_WB;
+
+ bp = id(ZERO_PAGE_ADDR);
+ bp->efi_systab = __pa(&fw_mem);
+ bp->efi_memmap = __pa(efi_memmap);
+ bp->efi_memmap_size = NUM_MEM_DESCS*sizeof(efi_memory_desc_t);
+ bp->efi_memdesc_size = sizeof(efi_memory_desc_t);
+ bp->efi_memdesc_version = 1;
+ bp->command_line = __pa(cmd_line);
+ bp->console_info.num_cols = 80;
+ bp->console_info.num_rows = 25;
+ bp->console_info.orig_x = 0;
+ bp->console_info.orig_y = 24;
+ bp->num_pci_vectors = 0;
+ bp->fpswa = 0;
+}
--- /dev/null
+/*
+ * This file contains the code that gets mapped at the upper end of
+ * each task's text region. For now, it contains the signal
+ * trampoline code only.
+ *
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <asm/offsets.h>
+#include <asm/sigcontext.h>
+#include <asm/system.h>
+#include <asm/unistd.h>
+#include <asm/page.h>
+
+ .psr abi64
+ .psr lsb
+ .lsb
+
+ .section __gate_section,"ax"
+
+ .align PAGE_SIZE
+
+# define SIGINFO_OFF 16
+# define SIGCONTEXT_OFF (SIGINFO_OFF + ((IA64_SIGINFO_SIZE + 15) & ~15))
+# define FLAGS_OFF IA64_SIGCONTEXT_FLAGS_OFFSET
+# define CFM_OFF IA64_SIGCONTEXT_CFM_OFFSET
+# define FR6_OFF IA64_SIGCONTEXT_FR6_OFFSET
+# define BSP_OFF IA64_SIGCONTEXT_AR_BSP_OFFSET
+# define RNAT_OFF IA64_SIGCONTEXT_AR_RNAT_OFFSET
+# define base0 r2
+# define base1 r3
+ /*
+ * When we get here, the memory stack looks like this:
+ *
+ * +===============================+
+ * | |
+ * // struct sigcontext //
+ * | |
+ * +===============================+ <-- sp+SIGCONTEXT_OFF
+ * | |
+ * // rest of siginfo //
+ * | |
+ * + +---------------+
+ * | | siginfo.code |
+ * +---------------+---------------+
+ * | siginfo.errno | siginfo.signo |
+ * +-------------------------------+ <-- sp+SIGINFO_OFF
+ * | 16 byte of scratch |
+ * | space |
+ * +-------------------------------+ <-- sp
+ *
+ * The register stack looks _exactly_ the way it looked at the
+ * time the signal occurred. In other words, we're treading
+ * on a potential mine-field: each incoming general register
+ * may be a NaT value (including sp, in which case the process
+ * ends up dying with a SIGSEGV).
+ *
+ * The first thing we need to do is a cover to get the registers onto
+ * the backing store. Once that is done, we invoke the signal
+ * handler which may modify some of the machine state. After
+ * returning from the signal handler, we return control to the
+ * previous context by executing a sigreturn system call. A
+ * signal handler may call the rt_sigreturn() function to
+ * directly return to a given sigcontext. However, the
+ * user-level sigreturn() needs to do much more than calling
+ * the rt_sigreturn() system call as it needs to unwind the
+ * stack to restore preserved registers that may have been
+ * saved on the signal handler's call stack.
+ *
+ * On entry:
+ * r2 = signal number
+ * r3 = plabel of signal handler
+ * r15 = new register backing store (ignored)
+ * [sp+16] = sigframe
+ */
+
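+	/*
+	 * (Editorial sketch) The "plabel" in r3 is an IA-64 function
+	 * descriptor: two 8-byte words, entry point first, then the
+	 * global pointer -- exactly the order of the two ld8's below.
+	 * In C terms:
+	 *
+	 *	struct plabel_sketch {		// name is illustrative
+	 *		unsigned long entry;	// loaded into b6
+	 *		unsigned long gp;	// callee's global pointer
+	 *	};
+	 */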
+ .global ia64_sigtramp
+ .proc ia64_sigtramp
+ia64_sigtramp:
+ ld8 r10=[r3],8 // get signal handler entry point
+ br.call.sptk.many rp=invoke_sighandler
+.ret0: mov r15=__NR_rt_sigreturn
+ break __BREAK_SYSCALL
+	.endp ia64_sigtramp
+
+ .proc invoke_sighandler
+invoke_sighandler:
+ ld8 gp=[r3] // get signal handler's global pointer
+ mov b6=r10
+ cover // push args in interrupted frame onto backing store
+ ;;
+ alloc r8=ar.pfs,0,1,3,0 // get CFM0, EC0, and CPL0 into r8
+ mov r17=ar.bsp // fetch ar.bsp
+ mov loc0=rp // save return pointer
+ ;;
+ cmp.ne p8,p0=r15,r0 // do we need to switch the rbs?
+ mov out0=r2 // signal number
+(p8) br.cond.spnt.few setup_rbs // yup -> (clobbers r14 and r16)
+back_from_setup_rbs:
+ adds base0=(BSP_OFF+SIGCONTEXT_OFF),sp
+ ;;
+ st8 [base0]=r17,(CFM_OFF-BSP_OFF) // save sc_ar_bsp
+ adds base1=(FR6_OFF+16+SIGCONTEXT_OFF),sp
+ ;;
+
+ st8 [base0]=r8 // save CFM0, EC0, and CPL0
+ adds base0=(FR6_OFF+SIGCONTEXT_OFF),sp
+ ;;
+ stf.spill [base0]=f6,32
+ stf.spill [base1]=f7,32
+ ;;
+ stf.spill [base0]=f8,32
+ stf.spill [base1]=f9,32
+ ;;
+ stf.spill [base0]=f10,32
+ stf.spill [base1]=f11,32
+ adds out1=SIGINFO_OFF,sp // siginfo pointer
+ ;;
+ stf.spill [base0]=f12,32
+ stf.spill [base1]=f13,32
+ adds out2=SIGCONTEXT_OFF,sp // sigcontext pointer
+ ;;
+ stf.spill [base0]=f14,32
+ stf.spill [base1]=f15,32
+ br.call.sptk.few rp=b6 // call the signal handler
+.ret2: adds base0=(BSP_OFF+SIGCONTEXT_OFF),sp
+ ;;
+ ld8 r15=[base0],(CFM_OFF-BSP_OFF) // fetch sc_ar_bsp and advance to CFM_OFF
+ mov r14=ar.bsp
+ ;;
+ ld8 r8=[base0] // restore (perhaps modified) CFM0, EC0, and CPL0
+ cmp.ne p8,p0=r14,r15 // do we need to restore the rbs?
+(p8) br.cond.spnt.few restore_rbs // yup -> (clobbers r14 and r16)
+back_from_restore_rbs:
+ {
+ and r9=0x7f,r8 // r9 <- CFM0.sof
+ extr.u r10=r8,7,7 // r10 <- CFM0.sol
+ mov r11=ip
+ }
+ ;;
+ adds base0=(FR6_OFF+SIGCONTEXT_OFF),sp
+ adds r11=(cont-back_from_restore_rbs),r11
+ sub r9=r9,r10 // r9 <- CFM0.sof - CFM0.sol == CFM0.nout
+ ;;
+ adds base1=(FR6_OFF+16+SIGCONTEXT_OFF),sp
+ dep r9=r9,r9,7,7 // r9.sol = r9.sof
+ mov b6=r11
+ ;;
+ ldf.fill f6=[base0],32
+ ldf.fill f7=[base1],32
+ mov rp=loc0 // copy return pointer out of stacked register
+ ;;
+ ldf.fill f8=[base0],32
+ ldf.fill f9=[base1],32
+ ;;
+ ldf.fill f10=[base0],32
+ ldf.fill f11=[base1],32
+ ;;
+ ldf.fill f12=[base0],32
+ ldf.fill f13=[base1],32
+ mov ar.pfs=r9
+ ;;
+ ldf.fill f14=[base0],32
+ ldf.fill f15=[base1],32
+ br.ret.sptk.few b6
+cont: mov ar.pfs=r8 // ar.pfs = CFM0
+ br.ret.sptk.few rp // re-establish CFM0
+	.endp invoke_sighandler
+
+ .proc setup_rbs
+setup_rbs:
+	flushrs					// must be first in insn group
+ ;;
+ mov ar.rsc=r0 // put RSE into enforced lazy mode
+ adds r16=(RNAT_OFF+SIGCONTEXT_OFF),sp
+ mov r14=ar.rnat // get rnat as updated by flushrs
+ ;;
+ mov ar.bspstore=r15 // set new register backing store area
+ st8 [r16]=r14 // save sc_ar_rnat
+ ;;
+ mov ar.rsc=0xf // set RSE into eager mode, pl 3
+ invala // invalidate ALAT
+	br.cond.sptk.many back_from_setup_rbs
+	.endp setup_rbs
+
+ .proc restore_rbs
+restore_rbs:
+ flushrs
+ mov ar.rsc=r0 // put RSE into enforced lazy mode
+ adds r16=(RNAT_OFF+SIGCONTEXT_OFF),sp
+ ;;
+ ld8 r14=[r16] // get new rnat
+ mov ar.bspstore=r15 // set old register backing store area
+ ;;
+ mov ar.rnat=r14 // establish new rnat
+ mov ar.rsc=0xf // (will be restored later on from sc_ar_rsc)
+ // invala not necessary as that will happen when returning to user-mode
+ br.cond.sptk.many back_from_restore_rbs
+
+ .endp restore_rbs
--- /dev/null
+/*
+ * Here is where the ball gets rolling as far as the kernel is concerned.
+ * When control is transferred to _start, the bootloader has already
+ * loaded us to the correct address. All that's left to do here is
+ * to set up the kernel's global pointer and jump to the kernel
+ * entry point.
+ *
+ * Copyright (C) 1998-2000 Hewlett-Packard Co
+ * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999 VA Linux Systems
+ * Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
+ * Copyright (C) 1999 Intel Corp.
+ * Copyright (C) 1999 Asit Mallick <Asit.K.Mallick@intel.com>
+ * Copyright (C) 1999 Don Dugger <Don.Dugger@intel.com>
+ */
+
+#include <linux/config.h>
+
+#include <asm/fpu.h>
+#include <asm/pal.h>
+#include <asm/offsets.h>
+#include <asm/processor.h>
+#include <asm/ptrace.h>
+#include <asm/system.h>
+
+ .psr abi64
+ .psr lsb
+ .lsb
+
+ .section __special_page_section,"ax"
+
+ .global empty_zero_page
+empty_zero_page:
+ .skip PAGE_SIZE
+
+ .global swapper_pg_dir
+swapper_pg_dir:
+ .skip PAGE_SIZE
+
+ .global empty_bad_page
+empty_bad_page:
+ .skip PAGE_SIZE
+
+ .global empty_bad_pte_table
+empty_bad_pte_table:
+ .skip PAGE_SIZE
+
+ .global empty_bad_pmd_table
+empty_bad_pmd_table:
+ .skip PAGE_SIZE
+
+ .rodata
+halt_msg:
+ stringz "Halting kernel\n"
+
+ .text
+ .align 16
+ .global _start
+ .proc _start
+_start:
+ // set IVT entry point---can't access I/O ports without it
+ movl r3=ia64_ivt
+ ;;
+ mov cr.iva=r3
+ movl r2=FPSR_DEFAULT
+ ;;
+ srlz.i
+ movl gp=__gp
+
+ mov ar.fpsr=r2
+ ;;
+
+#ifdef CONFIG_IA64_EARLY_PRINTK
+ mov r2=6
+ mov r3=(8<<8) | (28<<2)
+ ;;
+ mov rr[r2]=r3
+ ;;
+ srlz.i
+ ;;
+#endif
+
+#define isAP p2 // are we booting an Application Processor (not the BSP)?
+
+ // Find the init_task for the currently booting CPU. At poweron, and in
+ // UP mode, cpu_now_booting is 0
+ movl r3=cpu_now_booting
+ ;;
+ ld4 r3=[r3]
+ movl r2=init_tasks
+ ;;
+ shladd r2=r3,3,r2
+ ;;
+ ld8 r2=[r2]
+	cmp4.ne isAP,p0=r3,r0	// isAP == true if this is an application processor (AP)
+ ;; // RAW on r2
+ extr r3=r2,0,61 // r3 == phys addr of task struct
+ ;;
+
+ // load the "current" pointer (r13) and ar.k6 with the current task
+ mov r13=r2
+ mov ar.k6=r3 // Physical address
+ ;;
+ /*
+ * Reserve space at the top of the stack for "struct pt_regs". Kernel threads
+ * don't store interesting values in that structure, but the space still needs
+ * to be there because time-critical stuff such as the context switching can
+ * be implemented more efficiently (for example, __switch_to()
+ * always sets the psr.dfh bit of the task it is switching to).
+ */
+ addl r12=IA64_STK_OFFSET-IA64_PT_REGS_SIZE,r2
+ addl r2=IA64_RBS_OFFSET,r2 // initialize the RSE
+ mov ar.rsc=r0 // place RSE in enforced lazy mode
+ ;;
+ mov ar.bspstore=r2 // establish the new RSE stack
+ ;;
+ loadrs // load zero bytes from the register stack
+ ;;
+ mov ar.rsc=0x3 // place RSE in eager mode
+ ;;
+
+#ifdef CONFIG_IA64_EARLY_PRINTK
+ .rodata
+alive_msg:
+ stringz "I'm alive and well\n"
+ .previous
+
+ alloc r2=ar.pfs,0,0,2,0
+ movl out0=alive_msg
+ ;;
+ br.call.sptk.few rp=early_printk
+1: // force new bundle
+#endif /* CONFIG_IA64_EARLY_PRINTK */
+
+ alloc r2=ar.pfs,8,0,2,0
+#ifdef CONFIG_SMP
+(isAP) br.call.sptk.few rp=smp_callin
+.ret1:
+(isAP) br.cond.sptk.few self
+#endif
+
+#undef isAP
+
+ // This is executed by the bootstrap processor (bsp) only:
+
+#ifdef CONFIG_IA64_FW_EMU
+ // initialize PAL & SAL emulator:
+ br.call.sptk.few rp=sys_fw_init
+ ;;
+#endif
+ br.call.sptk.few rp=start_kernel
+.ret2:
+ addl r2=@ltoff(halt_msg),gp
+ ;;
+ ld8 out0=[r2]
+ br.call.sptk.few b0=console_print
+self: br.sptk.few self // endless loop
+ .endp _start
+
+ .align 16
+ .global ia64_save_debug_regs
+ .proc ia64_save_debug_regs
+ia64_save_debug_regs:
+ alloc r16=ar.pfs,1,0,0,0
+ mov r20=ar.lc // preserve ar.lc
+ mov ar.lc=IA64_NUM_DBG_REGS-1
+ mov r18=0
+ add r19=IA64_NUM_DBG_REGS*8,in0
+ ;;
+1: mov r16=dbr[r18]
+ mov r17=ibr[r18]
+ add r18=1,r18
+ ;;
+ st8.nta [in0]=r16,8
+ st8.nta [r19]=r17,8
+ br.cloop.sptk.few 1b
+
+ mov ar.lc=r20 // restore ar.lc
+ br.ret.sptk.few b0
+ .endp ia64_save_debug_regs
+
+ .align 16
+ .global ia64_load_debug_regs
+ .proc ia64_load_debug_regs
+ia64_load_debug_regs:
+ alloc r16=ar.pfs,1,0,0,0
+ lfetch.nta [in0]
+ mov r20=ar.lc // preserve ar.lc
+ add r19=IA64_NUM_DBG_REGS*8,in0
+ mov ar.lc=IA64_NUM_DBG_REGS-1
+ mov r18=-1
+ ;;
+1: ld8.nta r16=[in0],8
+ ld8.nta r17=[r19],8
+ add r18=1,r18
+ ;;
+ mov dbr[r18]=r16
+ mov ibr[r18]=r17
+ br.cloop.sptk.few 1b
+
+ mov ar.lc=r20 // restore ar.lc
+ br.ret.sptk.few b0
+ .endp ia64_load_debug_regs
+
+ .align 16
+ .global __ia64_save_fpu
+ .proc __ia64_save_fpu
+__ia64_save_fpu:
+ alloc r2=ar.pfs,1,0,0,0
+ adds r3=16,in0
+ ;;
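+	// (Editorial note) in0 and r3 walk the save area 16 bytes apart and
+	// each post-increments by 32, so every stop-bit group below issues
+	// two independent 16-byte spills, covering f32..f127 of the high
+	// floating-point partition without a loop.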
+ stf.spill.nta [in0]=f32,32
+ stf.spill.nta [ r3]=f33,32
+ ;;
+ stf.spill.nta [in0]=f34,32
+ stf.spill.nta [ r3]=f35,32
+ ;;
+ stf.spill.nta [in0]=f36,32
+ stf.spill.nta [ r3]=f37,32
+ ;;
+ stf.spill.nta [in0]=f38,32
+ stf.spill.nta [ r3]=f39,32
+ ;;
+ stf.spill.nta [in0]=f40,32
+ stf.spill.nta [ r3]=f41,32
+ ;;
+ stf.spill.nta [in0]=f42,32
+ stf.spill.nta [ r3]=f43,32
+ ;;
+ stf.spill.nta [in0]=f44,32
+ stf.spill.nta [ r3]=f45,32
+ ;;
+ stf.spill.nta [in0]=f46,32
+ stf.spill.nta [ r3]=f47,32
+ ;;
+ stf.spill.nta [in0]=f48,32
+ stf.spill.nta [ r3]=f49,32
+ ;;
+ stf.spill.nta [in0]=f50,32
+ stf.spill.nta [ r3]=f51,32
+ ;;
+ stf.spill.nta [in0]=f52,32
+ stf.spill.nta [ r3]=f53,32
+ ;;
+ stf.spill.nta [in0]=f54,32
+ stf.spill.nta [ r3]=f55,32
+ ;;
+ stf.spill.nta [in0]=f56,32
+ stf.spill.nta [ r3]=f57,32
+ ;;
+ stf.spill.nta [in0]=f58,32
+ stf.spill.nta [ r3]=f59,32
+ ;;
+ stf.spill.nta [in0]=f60,32
+ stf.spill.nta [ r3]=f61,32
+ ;;
+ stf.spill.nta [in0]=f62,32
+ stf.spill.nta [ r3]=f63,32
+ ;;
+ stf.spill.nta [in0]=f64,32
+ stf.spill.nta [ r3]=f65,32
+ ;;
+ stf.spill.nta [in0]=f66,32
+ stf.spill.nta [ r3]=f67,32
+ ;;
+ stf.spill.nta [in0]=f68,32
+ stf.spill.nta [ r3]=f69,32
+ ;;
+ stf.spill.nta [in0]=f70,32
+ stf.spill.nta [ r3]=f71,32
+ ;;
+ stf.spill.nta [in0]=f72,32
+ stf.spill.nta [ r3]=f73,32
+ ;;
+ stf.spill.nta [in0]=f74,32
+ stf.spill.nta [ r3]=f75,32
+ ;;
+ stf.spill.nta [in0]=f76,32
+ stf.spill.nta [ r3]=f77,32
+ ;;
+ stf.spill.nta [in0]=f78,32
+ stf.spill.nta [ r3]=f79,32
+ ;;
+ stf.spill.nta [in0]=f80,32
+ stf.spill.nta [ r3]=f81,32
+ ;;
+ stf.spill.nta [in0]=f82,32
+ stf.spill.nta [ r3]=f83,32
+ ;;
+ stf.spill.nta [in0]=f84,32
+ stf.spill.nta [ r3]=f85,32
+ ;;
+ stf.spill.nta [in0]=f86,32
+ stf.spill.nta [ r3]=f87,32
+ ;;
+ stf.spill.nta [in0]=f88,32
+ stf.spill.nta [ r3]=f89,32
+ ;;
+ stf.spill.nta [in0]=f90,32
+ stf.spill.nta [ r3]=f91,32
+ ;;
+ stf.spill.nta [in0]=f92,32
+ stf.spill.nta [ r3]=f93,32
+ ;;
+ stf.spill.nta [in0]=f94,32
+ stf.spill.nta [ r3]=f95,32
+ ;;
+ stf.spill.nta [in0]=f96,32
+ stf.spill.nta [ r3]=f97,32
+ ;;
+ stf.spill.nta [in0]=f98,32
+ stf.spill.nta [ r3]=f99,32
+ ;;
+ stf.spill.nta [in0]=f100,32
+ stf.spill.nta [ r3]=f101,32
+ ;;
+ stf.spill.nta [in0]=f102,32
+ stf.spill.nta [ r3]=f103,32
+ ;;
+ stf.spill.nta [in0]=f104,32
+ stf.spill.nta [ r3]=f105,32
+ ;;
+ stf.spill.nta [in0]=f106,32
+ stf.spill.nta [ r3]=f107,32
+ ;;
+ stf.spill.nta [in0]=f108,32
+ stf.spill.nta [ r3]=f109,32
+ ;;
+ stf.spill.nta [in0]=f110,32
+ stf.spill.nta [ r3]=f111,32
+ ;;
+ stf.spill.nta [in0]=f112,32
+ stf.spill.nta [ r3]=f113,32
+ ;;
+ stf.spill.nta [in0]=f114,32
+ stf.spill.nta [ r3]=f115,32
+ ;;
+ stf.spill.nta [in0]=f116,32
+ stf.spill.nta [ r3]=f117,32
+ ;;
+ stf.spill.nta [in0]=f118,32
+ stf.spill.nta [ r3]=f119,32
+ ;;
+ stf.spill.nta [in0]=f120,32
+ stf.spill.nta [ r3]=f121,32
+ ;;
+ stf.spill.nta [in0]=f122,32
+ stf.spill.nta [ r3]=f123,32
+ ;;
+ stf.spill.nta [in0]=f124,32
+ stf.spill.nta [ r3]=f125,32
+ ;;
+ stf.spill.nta [in0]=f126,32
+ stf.spill.nta [ r3]=f127,32
+ br.ret.sptk.few rp
+ .endp __ia64_save_fpu
+
+ .align 16
+ .global __ia64_load_fpu
+ .proc __ia64_load_fpu
+__ia64_load_fpu:
+ alloc r2=ar.pfs,1,0,0,0
+ adds r3=16,in0
+ ;;
+ ldf.fill.nta f32=[in0],32
+ ldf.fill.nta f33=[ r3],32
+ ;;
+ ldf.fill.nta f34=[in0],32
+ ldf.fill.nta f35=[ r3],32
+ ;;
+ ldf.fill.nta f36=[in0],32
+ ldf.fill.nta f37=[ r3],32
+ ;;
+ ldf.fill.nta f38=[in0],32
+ ldf.fill.nta f39=[ r3],32
+ ;;
+ ldf.fill.nta f40=[in0],32
+ ldf.fill.nta f41=[ r3],32
+ ;;
+ ldf.fill.nta f42=[in0],32
+ ldf.fill.nta f43=[ r3],32
+ ;;
+ ldf.fill.nta f44=[in0],32
+ ldf.fill.nta f45=[ r3],32
+ ;;
+ ldf.fill.nta f46=[in0],32
+ ldf.fill.nta f47=[ r3],32
+ ;;
+ ldf.fill.nta f48=[in0],32
+ ldf.fill.nta f49=[ r3],32
+ ;;
+ ldf.fill.nta f50=[in0],32
+ ldf.fill.nta f51=[ r3],32
+ ;;
+ ldf.fill.nta f52=[in0],32
+ ldf.fill.nta f53=[ r3],32
+ ;;
+ ldf.fill.nta f54=[in0],32
+ ldf.fill.nta f55=[ r3],32
+ ;;
+ ldf.fill.nta f56=[in0],32
+ ldf.fill.nta f57=[ r3],32
+ ;;
+ ldf.fill.nta f58=[in0],32
+ ldf.fill.nta f59=[ r3],32
+ ;;
+ ldf.fill.nta f60=[in0],32
+ ldf.fill.nta f61=[ r3],32
+ ;;
+ ldf.fill.nta f62=[in0],32
+ ldf.fill.nta f63=[ r3],32
+ ;;
+ ldf.fill.nta f64=[in0],32
+ ldf.fill.nta f65=[ r3],32
+ ;;
+ ldf.fill.nta f66=[in0],32
+ ldf.fill.nta f67=[ r3],32
+ ;;
+ ldf.fill.nta f68=[in0],32
+ ldf.fill.nta f69=[ r3],32
+ ;;
+ ldf.fill.nta f70=[in0],32
+ ldf.fill.nta f71=[ r3],32
+ ;;
+ ldf.fill.nta f72=[in0],32
+ ldf.fill.nta f73=[ r3],32
+ ;;
+ ldf.fill.nta f74=[in0],32
+ ldf.fill.nta f75=[ r3],32
+ ;;
+ ldf.fill.nta f76=[in0],32
+ ldf.fill.nta f77=[ r3],32
+ ;;
+ ldf.fill.nta f78=[in0],32
+ ldf.fill.nta f79=[ r3],32
+ ;;
+ ldf.fill.nta f80=[in0],32
+ ldf.fill.nta f81=[ r3],32
+ ;;
+ ldf.fill.nta f82=[in0],32
+ ldf.fill.nta f83=[ r3],32
+ ;;
+ ldf.fill.nta f84=[in0],32
+ ldf.fill.nta f85=[ r3],32
+ ;;
+ ldf.fill.nta f86=[in0],32
+ ldf.fill.nta f87=[ r3],32
+ ;;
+ ldf.fill.nta f88=[in0],32
+ ldf.fill.nta f89=[ r3],32
+ ;;
+ ldf.fill.nta f90=[in0],32
+ ldf.fill.nta f91=[ r3],32
+ ;;
+ ldf.fill.nta f92=[in0],32
+ ldf.fill.nta f93=[ r3],32
+ ;;
+ ldf.fill.nta f94=[in0],32
+ ldf.fill.nta f95=[ r3],32
+ ;;
+ ldf.fill.nta f96=[in0],32
+ ldf.fill.nta f97=[ r3],32
+ ;;
+ ldf.fill.nta f98=[in0],32
+ ldf.fill.nta f99=[ r3],32
+ ;;
+ ldf.fill.nta f100=[in0],32
+ ldf.fill.nta f101=[ r3],32
+ ;;
+ ldf.fill.nta f102=[in0],32
+ ldf.fill.nta f103=[ r3],32
+ ;;
+ ldf.fill.nta f104=[in0],32
+ ldf.fill.nta f105=[ r3],32
+ ;;
+ ldf.fill.nta f106=[in0],32
+ ldf.fill.nta f107=[ r3],32
+ ;;
+ ldf.fill.nta f108=[in0],32
+ ldf.fill.nta f109=[ r3],32
+ ;;
+ ldf.fill.nta f110=[in0],32
+ ldf.fill.nta f111=[ r3],32
+ ;;
+ ldf.fill.nta f112=[in0],32
+ ldf.fill.nta f113=[ r3],32
+ ;;
+ ldf.fill.nta f114=[in0],32
+ ldf.fill.nta f115=[ r3],32
+ ;;
+ ldf.fill.nta f116=[in0],32
+ ldf.fill.nta f117=[ r3],32
+ ;;
+ ldf.fill.nta f118=[in0],32
+ ldf.fill.nta f119=[ r3],32
+ ;;
+ ldf.fill.nta f120=[in0],32
+ ldf.fill.nta f121=[ r3],32
+ ;;
+ ldf.fill.nta f122=[in0],32
+ ldf.fill.nta f123=[ r3],32
+ ;;
+ ldf.fill.nta f124=[in0],32
+ ldf.fill.nta f125=[ r3],32
+ ;;
+ ldf.fill.nta f126=[in0],32
+ ldf.fill.nta f127=[ r3],32
+ br.ret.sptk.few rp
+ .endp __ia64_load_fpu
+
+ .align 16
+ .global __ia64_init_fpu
+ .proc __ia64_init_fpu
+__ia64_init_fpu:
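+	// (Editorial note) Spilling f0 deposits the canonical 16-byte zero
+	// at [sp]; each group below then fills two registers from that
+	// stored zero while a "mov fN=f0" clears a third, initializing
+	// three registers per group from a single store.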
+ alloc r2=ar.pfs,0,0,0,0
+ stf.spill [sp]=f0
+ mov f32=f0
+ ;;
+ ldf.fill f33=[sp]
+ ldf.fill f34=[sp]
+ mov f35=f0
+ ;;
+ ldf.fill f36=[sp]
+ ldf.fill f37=[sp]
+ mov f38=f0
+ ;;
+ ldf.fill f39=[sp]
+ ldf.fill f40=[sp]
+ mov f41=f0
+ ;;
+ ldf.fill f42=[sp]
+ ldf.fill f43=[sp]
+ mov f44=f0
+ ;;
+ ldf.fill f45=[sp]
+ ldf.fill f46=[sp]
+ mov f47=f0
+ ;;
+ ldf.fill f48=[sp]
+ ldf.fill f49=[sp]
+ mov f50=f0
+ ;;
+ ldf.fill f51=[sp]
+ ldf.fill f52=[sp]
+ mov f53=f0
+ ;;
+ ldf.fill f54=[sp]
+ ldf.fill f55=[sp]
+ mov f56=f0
+ ;;
+ ldf.fill f57=[sp]
+ ldf.fill f58=[sp]
+ mov f59=f0
+ ;;
+ ldf.fill f60=[sp]
+ ldf.fill f61=[sp]
+ mov f62=f0
+ ;;
+ ldf.fill f63=[sp]
+ ldf.fill f64=[sp]
+ mov f65=f0
+ ;;
+ ldf.fill f66=[sp]
+ ldf.fill f67=[sp]
+ mov f68=f0
+ ;;
+ ldf.fill f69=[sp]
+ ldf.fill f70=[sp]
+ mov f71=f0
+ ;;
+ ldf.fill f72=[sp]
+ ldf.fill f73=[sp]
+ mov f74=f0
+ ;;
+ ldf.fill f75=[sp]
+ ldf.fill f76=[sp]
+ mov f77=f0
+ ;;
+ ldf.fill f78=[sp]
+ ldf.fill f79=[sp]
+ mov f80=f0
+ ;;
+ ldf.fill f81=[sp]
+ ldf.fill f82=[sp]
+ mov f83=f0
+ ;;
+ ldf.fill f84=[sp]
+ ldf.fill f85=[sp]
+ mov f86=f0
+ ;;
+ ldf.fill f87=[sp]
+ ldf.fill f88=[sp]
+ mov f89=f0
+ ;;
+ ldf.fill f90=[sp]
+ ldf.fill f91=[sp]
+ mov f92=f0
+ ;;
+ ldf.fill f93=[sp]
+ ldf.fill f94=[sp]
+ mov f95=f0
+ ;;
+ ldf.fill f96=[sp]
+ ldf.fill f97=[sp]
+ mov f98=f0
+ ;;
+ ldf.fill f99=[sp]
+ ldf.fill f100=[sp]
+ mov f101=f0
+ ;;
+ ldf.fill f102=[sp]
+ ldf.fill f103=[sp]
+ mov f104=f0
+ ;;
+ ldf.fill f105=[sp]
+ ldf.fill f106=[sp]
+ mov f107=f0
+ ;;
+ ldf.fill f108=[sp]
+ ldf.fill f109=[sp]
+ mov f110=f0
+ ;;
+ ldf.fill f111=[sp]
+ ldf.fill f112=[sp]
+ mov f113=f0
+ ;;
+ ldf.fill f114=[sp]
+ ldf.fill f115=[sp]
+ mov f116=f0
+ ;;
+ ldf.fill f117=[sp]
+ ldf.fill f118=[sp]
+ mov f119=f0
+ ;;
+ ldf.fill f120=[sp]
+ ldf.fill f121=[sp]
+ mov f122=f0
+ ;;
+ ldf.fill f123=[sp]
+ ldf.fill f124=[sp]
+ mov f125=f0
+ ;;
+ ldf.fill f126=[sp]
+ mov f127=f0
+ br.ret.sptk.few rp
+ .endp __ia64_init_fpu
--- /dev/null
+/*
+ * This is where we statically allocate and initialize the initial
+ * task.
+ *
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+
+#include <asm/uaccess.h>
+#include <asm/pgtable.h>
+
+static struct vm_area_struct init_mmap = INIT_MMAP;
+static struct fs_struct init_fs = INIT_FS;
+static struct files_struct init_files = INIT_FILES;
+static struct signal_struct init_signals = INIT_SIGNALS;
+struct mm_struct init_mm = INIT_MM(init_mm);
+
+/*
+ * Initial task structure.
+ *
+ * We need to make sure that this is page aligned due to the way
+ * process stacks are handled. This is done by having a special
+ * "init_task" linker map entry..
+ */
+union task_union init_task_union
+ __attribute__((section("init_task"))) =
+ { INIT_TASK(init_task_union.task) };
--- /dev/null
+/*
+ * linux/arch/ia64/kernel/irq.c
+ *
+ * Copyright (C) 1998-2000 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 Stephane Eranian <eranian@hpl.hp.com>
+ * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * 6/10/99: Updated to bring in sync with x86 version to facilitate
+ * support for SMP and different interrupt controllers.
+ */
+
+#include <linux/config.h>
+
+#include <linux/sched.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/ioport.h>
+#include <linux/kernel_stat.h>
+#include <linux/malloc.h>
+#include <linux/ptrace.h>
+#include <linux/random.h> /* for rand_initialize_irq() */
+#include <linux/signal.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/threads.h>
+
+#ifdef CONFIG_KDB
+# include <linux/kdb.h>
+#endif
+
+#include <asm/bitops.h>
+#include <asm/delay.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/machvec.h>
+#include <asm/pgtable.h>
+#include <asm/system.h>
+
+/* This is used to detect bad usage of probe_irq_on()/probe_irq_off(). */
+#define PROBE_IRQ_COOKIE 0xfeedC0FFEE
+
+struct irq_desc irq_desc[NR_IRQS];
+
+/*
+ * Micro-access to controllers is serialized over the whole
+ * system. We never hold this lock when we call the actual
+ * IRQ handler.
+ */
+spinlock_t irq_controller_lock;
+
+#ifdef CONFIG_ITANIUM_ASTEP_SPECIFIC
+spinlock_t ivr_read_lock;
+#endif
+
+unsigned int local_bh_count[NR_CPUS];
+/*
+ * used in irq_enter()/irq_exit()
+ */
+unsigned int local_irq_count[NR_CPUS];
+
+static struct irqaction timer_action = { NULL, 0, 0, NULL, NULL, NULL};
+
+#ifdef CONFIG_SMP
+static struct irqaction ipi_action = { NULL, 0, 0, NULL, NULL, NULL};
+#endif
+
+/*
+ * Legacy IRQ to IA-64 vector translation table. Any vector not in
+ * this table maps to itself (i.e., irq 0x30 => IA64 vector 0x30)
+ */
+__u8 irq_to_vector_map[IA64_MIN_VECTORED_IRQ] = {
+ /* 8259 IRQ translation, first 16 entries */
+ TIMER_IRQ, 0x50, 0x0f, 0x51, 0x52, 0x53, 0x43, 0x54,
+ 0x55, 0x56, 0x57, 0x58, 0x59, 0x5a, 0x40, 0x41,
+};
+
+/*
+ * Reverse of the above table.
+ */
+static __u8 vector_to_legacy_map[256];
+
+/*
+ * used by proc fs (/proc/interrupts)
+ */
+int
+get_irq_list (char *buf)
+{
+ int i;
+ struct irqaction * action;
+ char *p = buf;
+
+#ifdef CONFIG_SMP
+ p += sprintf(p, " ");
+ for (i = 0; i < smp_num_cpus; i++)
+ p += sprintf(p, "CPU%d ", i);
+ *p++ = '\n';
+#endif
+ /*
+ * Simply scans the external vectored interrupts
+ */
+ for (i = 0; i < NR_IRQS; i++) {
+ action = irq_desc[i].action;
+ if (!action)
+ continue;
+ p += sprintf(p, "%3d: ",i);
+#ifndef CONFIG_SMP
+ p += sprintf(p, "%10u ", kstat_irqs(i));
+#else
+ {
+ int j;
+ for (j = 0; j < smp_num_cpus; j++)
+ p += sprintf(p, "%10u ",
+ kstat.irqs[cpu_logical_map(j)][i]);
+ }
+#endif
+ p += sprintf(p, " %14s", irq_desc[i].handler->typename);
+ p += sprintf(p, " %c%s", (action->flags & SA_INTERRUPT) ? '+' : ' ',
+ action->name);
+
+ for (action = action->next; action; action = action->next) {
+ p += sprintf(p, ", %c%s",
+ (action->flags & SA_INTERRUPT)?'+':' ',
+ action->name);
+ }
+ *p++ = '\n';
+ }
+ return p - buf;
+}
+
+/*
+ * That's where the IVT branches when we get an external
+ * interrupt. This branches to the correct hardware IRQ handler via
+ * function ptr.
+ */
+void
+ia64_handle_irq (unsigned long irq, struct pt_regs *regs)
+{
+ unsigned long bsp, sp, saved_tpr;
+
+#ifdef CONFIG_ITANIUM_ASTEP_SPECIFIC
+# ifndef CONFIG_SMP
+ static unsigned int max_prio = 0;
+# endif
+ unsigned int prev_prio;
+ unsigned long eoi_ptr;
+
+# ifdef CONFIG_USB
+ disable_usb();
+# endif
+ /*
+ * Stop IPIs by getting the ivr_read_lock
+ */
+ spin_lock(&ivr_read_lock);
+
+ /*
+ * Disable PCI writes
+ */
+ outl(0x80ff81c0, 0xcf8);
+ outl(0x73002188, 0xcfc);
+ eoi_ptr = inl(0xcfc);
+
+ irq = ia64_get_ivr();
+
+ /*
+ * Enable PCI writes
+ */
+ outl(0x73182188, 0xcfc);
+
+ spin_unlock(&ivr_read_lock);
+
+# ifdef CONFIG_USB
+ reenable_usb();
+# endif
+
+# ifndef CONFIG_SMP
+ prev_prio = max_prio;
+ if (irq < max_prio) {
+ printk ("ia64_handle_irq: got irq %lu while %u was in progress!\n",
+ irq, max_prio);
+
+ } else
+ max_prio = irq;
+# endif /* !CONFIG_SMP */
+#endif /* CONFIG_ITANIUM_ASTEP_SPECIFIC */
+
+ /* Always set TPR to limit maximum interrupt nesting depth to
+ * 16 (without this, it would be ~240, which could easily lead
+ * to kernel stack overflows).
+ */
+ saved_tpr = ia64_get_tpr();
+ ia64_srlz_d();
+ ia64_set_tpr(irq);
+ ia64_srlz_d();
+
+ asm ("mov %0=ar.bsp" : "=r"(bsp));
+ asm ("mov %0=sp" : "=r"(sp));
+
+ if ((sp - bsp) < 1024) {
+ static long last_time;
+ static unsigned char count;
+
+ if (count > 5 && jiffies - last_time > 5*HZ)
+ count = 0;
+ if (++count < 5) {
+ last_time = jiffies;
+ printk("ia64_handle_irq: DANGER: less than 1KB of free stack space!!\n"
+ "(bsp=0x%lx, sp=%lx)\n", bsp, sp);
+ }
+#ifdef CONFIG_KDB
+ kdb(KDB_REASON_PANIC, 0, regs);
+#endif
+ }
+
+ /*
+ * The interrupt is now said to be in service
+ */
+ if (irq >= NR_IRQS) {
+ printk("handle_irq: invalid irq=%lu\n", irq);
+ goto out;
+ }
+
+ ++kstat.irqs[smp_processor_id()][irq];
+
+ if (irq == IA64_SPURIOUS_INT) {
+ printk("handle_irq: spurious interrupt\n");
+ goto out;
+ }
+
+ /*
+ * Handle the interrupt by calling the hardware specific handler (IOSAPIC, Internal, etc).
+ */
+ (*irq_desc[irq].handler->handle)(irq, regs);
+ out:
+#ifdef CONFIG_ITANIUM_ASTEP_SPECIFIC
+ {
+ long pEOI;
+
+ asm ("mov %0=0;; (p1) mov %0=1" : "=r"(pEOI));
+ if (!pEOI) {
+ printk("Yikes: ia64_handle_irq() without pEOI!!\n");
+ asm volatile ("cmp.eq p1,p0=r0,r0" : "=r"(pEOI));
+# ifdef CONFIG_KDB
+ kdb(KDB_REASON_PANIC, 0, regs);
+# endif
+ }
+ }
+
+ local_irq_disable();
+# ifndef CONFIG_SMP
+ if (max_prio == irq)
+ max_prio = prev_prio;
+# endif /* !CONFIG_SMP */
+#endif /* CONFIG_ITANIUM_ASTEP_SPECIFIC */
+ ia64_srlz_d();
+ ia64_set_tpr(saved_tpr);
+ ia64_srlz_d();
+}
+
+
+/*
+ * This should really return information about whether we should do
+ * bottom half handling etc. Right now we end up _always_ checking the
+ * bottom half, which is a waste of time and is not what some drivers
+ * would prefer.
+ */
+int
+invoke_irq_handlers (unsigned int irq, struct pt_regs *regs, struct irqaction *action)
+{
+ void (*handler)(int, void *, struct pt_regs *);
+ unsigned long flags, flags_union = 0;
+ int cpu = smp_processor_id();
+ unsigned int requested_irq;
+ void *dev_id;
+
+ irq_enter(cpu, irq);
+
+ if ((action->flags & SA_INTERRUPT) == 0)
+ __sti();
+
+ do {
+ flags = action->flags;
+ requested_irq = irq;
+ if ((flags & SA_LEGACY) != 0)
+ requested_irq = vector_to_legacy_map[irq];
+ flags_union |= flags;
+ handler = action->handler;
+ dev_id = action->dev_id;
+ action = action->next;
+ (*handler)(requested_irq, dev_id, regs);
+ } while (action);
+ if ((flags_union & SA_SAMPLE_RANDOM) != 0)
+ add_interrupt_randomness(irq);
+ __cli();
+
+ irq_exit(cpu, irq);
+ return flags_union | 1; /* force the "do bottom halves" bit */
+}
+
+void
+disable_irq_nosync (unsigned int irq)
+{
+ unsigned long flags;
+
+ irq = map_legacy_irq(irq);
+
+ spin_lock_irqsave(&irq_controller_lock, flags);
+ if (irq_desc[irq].depth++ > 0) {
+ irq_desc[irq].status &= ~IRQ_ENABLED;
+ irq_desc[irq].handler->disable(irq);
+ }
+ spin_unlock_irqrestore(&irq_controller_lock, flags);
+}
+
+/*
+ * Synchronous version of the above, making sure the IRQ is
+ * no longer running on any other CPU.
+ */
+void
+disable_irq (unsigned int irq)
+{
+ disable_irq_nosync(irq);
+
+ irq = map_legacy_irq(irq);
+
+ if (!local_irq_count[smp_processor_id()]) {
+ do {
+ barrier();
+ } while ((irq_desc[irq].status & IRQ_INPROGRESS) != 0);
+ }
+}
+
+void
+enable_irq (unsigned int irq)
+{
+ unsigned long flags;
+
+ irq = map_legacy_irq(irq);
+
+ spin_lock_irqsave(&irq_controller_lock, flags);
+ switch (irq_desc[irq].depth) {
+ case 1:
+ irq_desc[irq].status |= IRQ_ENABLED;
+ (*irq_desc[irq].handler->enable)(irq);
+ /* fall through */
+ default:
+ --irq_desc[irq].depth;
+ break;
+
+ case 0:
+ printk("enable_irq: unbalanced from %p\n", __builtin_return_address(0));
+ }
+ spin_unlock_irqrestore(&irq_controller_lock, flags);
+}
+
+/*
+ * This function encapsulates the initialization that needs to be
+ * performed under the protection of lock irq_controller_lock. The
+ * lock must have been acquired by the time this is called.
+ */
+static inline int
+setup_irq (unsigned int irq, struct irqaction *new)
+{
+ int shared = 0;
+ struct irqaction *old, **p;
+
+ p = &irq_desc[irq].action;
+ old = *p;
+ if (old) {
+ if (!(old->flags & new->flags & SA_SHIRQ)) {
+ return -EBUSY;
+ }
+ /* add new interrupt at end of irq queue */
+ do {
+ p = &old->next;
+ old = *p;
+ } while (old);
+ shared = 1;
+ }
+ *p = new;
+
+ /* when sharing do not unmask */
+ if (!shared) {
+ irq_desc[irq].depth = 0;
+ irq_desc[irq].status |= IRQ_ENABLED;
+ (*irq_desc[irq].handler->startup)(irq);
+ }
+ return 0;
+}
+
+int
+request_irq (unsigned int requested_irq, void (*handler)(int, void *, struct pt_regs *),
+ unsigned long irqflags, const char * devname, void *dev_id)
+{
+ int retval, need_kfree = 0;
+ struct irqaction *action;
+ unsigned long flags;
+ unsigned int irq;
+
+#ifdef IA64_DEBUG
+ printk("request_irq(0x%x) called\n", requested_irq);
+#endif
+ /*
+ * Sanity-check: shared interrupts should REALLY pass in
+ * a real dev-ID, otherwise we'll have trouble later trying
+ * to figure out which interrupt is which (messes up the
+ * interrupt freeing logic etc).
+ */
+ if ((irqflags & SA_SHIRQ) && !dev_id)
+ printk("Bad boy: %s (at %p) called us without a dev_id!\n",
+ devname, current_text_addr());
+
+ irq = map_legacy_irq(requested_irq);
+ if (irq != requested_irq)
+ irqflags |= SA_LEGACY;
+
+ if (irq >= NR_IRQS)
+ return -EINVAL;
+
+ if (!handler)
+ return -EINVAL;
+
+ /*
+ * The timer_action and ipi_action cannot be allocated
+ * dynamically because their initialization happens really early
+ * on in init/main.c; at that point the memory allocator has
+ * not yet been initialized. So we use statically reserved
+ * buffers for them. In some sense that's no big deal because we
+ * need them no matter what.
+ */
+ if (irq == TIMER_IRQ)
+ action = &timer_action;
+#ifdef CONFIG_SMP
+ else if (irq == IPI_IRQ)
+ action = &ipi_action;
+#endif
+ else {
+ action = kmalloc(sizeof(struct irqaction), GFP_KERNEL);
+ need_kfree = 1;
+ }
+
+ if (!action)
+ return -ENOMEM;
+
+ action->handler = handler;
+ action->flags = irqflags;
+ action->mask = 0;
+ action->name = devname;
+ action->next = NULL;
+ action->dev_id = dev_id;
+
+ if ((irqflags & SA_SAMPLE_RANDOM) != 0)
+ rand_initialize_irq(irq);
+
+ spin_lock_irqsave(&irq_controller_lock, flags);
+ retval = setup_irq(irq, action);
+ spin_unlock_irqrestore(&irq_controller_lock, flags);
+
+ if (need_kfree && retval)
+ kfree(action);
+
+ return retval;
+}
+
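+/*
+ * Typical caller sketch (illustrative only; my_intr() and "mydev" are
+ * made-up names, not part of this file):
+ *
+ *     static void my_intr(int irq, void *dev_id, struct pt_regs *regs);
+ *     ...
+ *     if (request_irq(dev->irq, my_intr, SA_SHIRQ, "mydev", dev) != 0)
+ *             ... bail out: the vector is busy or the arguments are bad ...
+ *     ...
+ *     free_irq(dev->irq, dev);        -- must pass the same dev_id
+ */
+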
+void
+free_irq (unsigned int irq, void *dev_id)
+{
+ struct irqaction *action, **p;
+ unsigned long flags;
+
+ /*
+ * some sanity checks first
+ */
+ if (irq >= NR_IRQS) {
+ printk("Trying to free IRQ%d\n",irq);
+ return;
+ }
+
+ irq = map_legacy_irq(irq);
+
+ /*
+ * Find the corresponding irqaction
+ */
+ spin_lock_irqsave(&irq_controller_lock, flags);
+ for (p = &irq_desc[irq].action; (action = *p) != NULL; p = &action->next) {
+ if (action->dev_id != dev_id)
+ continue;
+
+ /* Found it - now remove it from the list of entries */
+ *p = action->next;
+ if (!irq_desc[irq].action) {
+ irq_desc[irq].status &= ~IRQ_ENABLED;
+ (*irq_desc[irq].handler->shutdown)(irq);
+ }
+
+ spin_unlock_irqrestore(&irq_controller_lock, flags);
+
+#ifdef CONFIG_SMP
+ /* Wait to make sure it's not being used on another CPU */
+ while (irq_desc[irq].status & IRQ_INPROGRESS)
+ barrier();
+#endif
+
+ if (action != &timer_action
+#ifdef CONFIG_SMP
+ && action != &ipi_action
+#endif
+ )
+ kfree(action);
+ return;
+ }
+ printk("Trying to free free IRQ%d\n", irq);
+}
+
+/*
+ * IRQ autodetection code. Note that the return value of
+ * probe_irq_on() is no longer being used (its role has been replaced
+ * by the IRQ_AUTODETECT flag).
+ */
+unsigned long
+probe_irq_on (void)
+{
+ struct irq_desc *id;
+ unsigned long delay;
+
+#ifdef IA64_DEBUG
+ printk("probe_irq_on() called\n");
+#endif
+
+ spin_lock_irq(&irq_controller_lock);
+ for (id = irq_desc; id < irq_desc + NR_IRQS; ++id) {
+ if (!id->action) {
+ id->status |= IRQ_AUTODETECT | IRQ_WAITING;
+ (*id->handler->startup)(id - irq_desc);
+ }
+ }
+ spin_unlock_irq(&irq_controller_lock);
+
+ /* wait for spurious interrupts to trigger: */
+
+ for (delay = jiffies + HZ/10; time_after(delay, jiffies); )
+ /* about 100ms delay */
+ synchronize_irq();
+
+ /* filter out obviously spurious interrupts: */
+ spin_lock_irq(&irq_controller_lock);
+ for (id = irq_desc; id < irq_desc + NR_IRQS; ++id) {
+ unsigned int status = id->status;
+
+ if (!(status & IRQ_AUTODETECT))
+ continue;
+
+ if (!(status & IRQ_WAITING)) {
+ id->status = status & ~IRQ_AUTODETECT;
+ (*id->handler->shutdown)(id - irq_desc);
+ }
+ }
+ spin_unlock_irq(&irq_controller_lock);
+ return PROBE_IRQ_COOKIE; /* meaningless return value; probe_irq_off() merely sanity-checks it */
+}
+
+int
+probe_irq_off (unsigned long cookie)
+{
+ int irq_found, nr_irqs;
+ struct irq_desc *id;
+
+#ifdef IA64_DEBUG
+ printk("probe_irq_off(cookie=0x%lx) -> ", cookie);
+#endif
+
+ if (cookie != PROBE_IRQ_COOKIE)
+ printk("bad irq probe from %p\n", __builtin_return_address(0));
+
+ nr_irqs = 0;
+ irq_found = 0;
+ spin_lock_irq(&irq_controller_lock);
+ for (id = irq_desc + IA64_MIN_VECTORED_IRQ; id < irq_desc + NR_IRQS; ++id) {
+ unsigned int status = id->status;
+
+ if (!(status & IRQ_AUTODETECT))
+ continue;
+
+ if (!(status & IRQ_WAITING)) {
+ if (!nr_irqs)
+ irq_found = (id - irq_desc);
+ ++nr_irqs;
+ }
+ id->status = status & ~IRQ_AUTODETECT;
+ (*id->handler->shutdown)(id - irq_desc);
+ }
+ spin_unlock_irq(&irq_controller_lock);
+
+ if (nr_irqs > 1)
+ irq_found = -irq_found;
+
+#ifdef IA64_DEBUG
+ printk("%d\n", irq_found);
+#endif
+ return irq_found;
+}
+
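+/*
+ * Sketch of the classic driver-side probing sequence (illustrative only):
+ *
+ *     unsigned long cookie = probe_irq_on();
+ *     ... make the device raise one interrupt ...
+ *     irq = probe_irq_off(cookie);    (>0: unique irq, 0: none, <0: several)
+ */
+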
+#ifdef CONFIG_SMP
+
+void __init
+init_IRQ_SMP (void)
+{
+ if (request_irq(IPI_IRQ, handle_IPI, 0, "IPI", NULL))
+ panic("Could not allocate IPI Interrupt Handler!");
+}
+
+#endif
+
+void __init
+init_IRQ (void)
+{
+ int i;
+
+ for (i = 0; i < IA64_MIN_VECTORED_IRQ; ++i)
+ vector_to_legacy_map[irq_to_vector_map[i]] = i;
+
+ for (i = 0; i < NR_IRQS; ++i) {
+ irq_desc[i].handler = &irq_type_default;
+ }
+
+ irq_desc[TIMER_IRQ].handler = &irq_type_ia64_internal;
+#ifdef CONFIG_SMP
+ /*
+ * Configure the IPI vector and handler
+ */
+ irq_desc[IPI_IRQ].handler = &irq_type_ia64_internal;
+ init_IRQ_SMP();
+#endif
+
+ platform_irq_init(irq_desc);
+
+ /* clear TPR to enable all interrupt classes: */
+ ia64_set_tpr(0);
+}
+
+/* TBD:
+ * Certain IA64 platforms can have inter-processor interrupt support.
+ * This interface is supposed to default to the IA64 IPI block-based
+ * mechanism if the platform doesn't provide a separate mechanism
+ * for IPIs.
+ * Choices : (1) Extend hw_interrupt_type interfaces
+ * (2) Use machine vector mechanism
+ * For now, define the following interface as a placeholder.
+ */
+void
+ipi_send (int cpu, int vector, int delivery_mode)
+{
+}
--- /dev/null
+#include <linux/kernel.h>
+#include <linux/sched.h>
+
+#include <asm/irq.h>
+#include <asm/processor.h>
+#include <asm/ptrace.h>
+
+
+static int
+irq_default_handle_irq (unsigned int irq, struct pt_regs *regs)
+{
+ printk("Unexpected irq vector 0x%x on CPU %u!\n", irq, smp_processor_id());
+ return 0; /* don't call do_bottom_half() for spurious interrupts */
+}
+
+static void
+irq_default_noop (unsigned int irq)
+{
+ /* nothing to do... */
+}
+
+struct hw_interrupt_type irq_type_default = {
+ "default",
+ (void (*)(unsigned long)) irq_default_noop, /* init */
+ irq_default_noop, /* startup */
+ irq_default_noop, /* shutdown */
+ irq_default_handle_irq, /* handle */
+ irq_default_noop, /* enable */
+ irq_default_noop /* disable */
+};
--- /dev/null
+/*
+ * Internal Interrupt Vectors
+ *
+ * This takes care of interrupts that are generated by the CPU
+ * internally, such as the ITC and IPI interrupts.
+ *
+ * Copyright (C) 1999 VA Linux Systems
+ * Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
+ */
+
+#include <linux/kernel.h>
+#include <linux/sched.h>
+
+#include <asm/irq.h>
+#include <asm/processor.h>
+#include <asm/ptrace.h>
+
+/*
+ * This is identical to IOSAPIC handle_irq. It may go away...
+ */
+static int
+internal_handle_irq (unsigned int irq, struct pt_regs *regs)
+{
+ struct irqaction *action = 0;
+ struct irq_desc *id = irq_desc + irq;
+ unsigned int status;
+ int retval;
+
+ spin_lock(&irq_controller_lock);
+ {
+ status = id->status;
+ if ((status & IRQ_ENABLED) != 0)
+ action = id->action;
+ id->status = status & ~(IRQ_REPLAY | IRQ_WAITING);
+ }
+ spin_unlock(&irq_controller_lock);
+
+ if (!action) {
+ if (!(id->status & IRQ_AUTODETECT))
+ printk("internal_handle_irq: unexpected interrupt %u\n", irq);
+ return 0;
+ }
+
+ retval = invoke_irq_handlers(irq, regs, action);
+
+ spin_lock(&irq_controller_lock);
+ {
+ status = (id->status & ~IRQ_INPROGRESS);
+ id->status = status;
+ }
+ spin_unlock(&irq_controller_lock);
+
+ return retval;
+}
+
+static void
+internal_noop (unsigned int irq)
+{
+ /* nothing to do... */
+}
+
+struct hw_interrupt_type irq_type_ia64_internal = {
+ "IA64 internal",
+ (void (*)(unsigned long)) internal_noop, /* init */
+ internal_noop, /* startup */
+ internal_noop, /* shutdown */
+ internal_handle_irq, /* handle */
+ internal_noop, /* enable */
+ internal_noop /* disable */
+};
+
--- /dev/null
+/*
+ * SMP IRQ Lock support
+ *
+ * Global interrupt locks for SMP. Allow interrupts to come in on any
+ * CPU, yet make cli/sti act globally to protect critical regions.
+ * These functions usually appear in irq.c, but I think it's cleaner this way.
+ *
+ * Copyright (C) 1999 VA Linux Systems
+ * Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
+ */
+
+#include <linux/config.h>
+#include <linux/sched.h>
+#include <linux/interrupt.h>
+#include <linux/smp.h>
+#include <linux/threads.h>
+#include <linux/init.h>
+
+#include <asm/system.h>
+#include <asm/processor.h>
+#include <asm/irq.h>
+#include <asm/bitops.h>
+#include <asm/pgtable.h>
+#include <asm/delay.h>
+
+int global_irq_holder = NO_PROC_ID;
+spinlock_t global_irq_lock;
+atomic_t global_irq_count;
+atomic_t global_bh_count;
+atomic_t global_bh_lock;
+
+#define INIT_STUCK (1<<26)
+
+void
+irq_enter(int cpu, int irq)
+{
+ int stuck = INIT_STUCK;
+
+ hardirq_enter(cpu, irq);
+ barrier();
+ while (global_irq_lock.lock) {
+ if (cpu == global_irq_holder) {
+ break;
+ }
+
+ if (!--stuck) {
+ printk("irq_enter stuck (irq=%d, cpu=%d, global=%d)\n",
+ irq, cpu,global_irq_holder);
+ stuck = INIT_STUCK;
+ }
+ barrier();
+ }
+}
+
+void
+irq_exit(int cpu, int irq)
+{
+ hardirq_exit(cpu, irq);
+ release_irqlock(cpu);
+}
+
+static void
+show(char * str)
+{
+ int i;
+ unsigned long *stack;
+ int cpu = smp_processor_id();
+
+ printk("\n%s, CPU %d:\n", str, cpu);
+ printk("irq: %d [%d %d]\n",
+ atomic_read(&global_irq_count), local_irq_count[0], local_irq_count[1]);
+ printk("bh: %d [%d %d]\n",
+ atomic_read(&global_bh_count), local_bh_count[0], local_bh_count[1]);
+
+ stack = (unsigned long *) &stack;
+ for (i = 40; i ; i--) {
+ unsigned long x = *++stack;
+ if (x > (unsigned long) &get_options && x < (unsigned long) &vsprintf) {
+ printk("<[%08lx]> ", x);
+ }
+ }
+}
+
+#define MAXCOUNT 100000000
+
+static inline void
+wait_on_bh(void)
+{
+ int count = MAXCOUNT;
+ do {
+ if (!--count) {
+ show("wait_on_bh");
+ count = ~0;
+ }
+ /* nothing .. wait for the other bh's to go away */
+ } while (atomic_read(&global_bh_count) != 0);
+}
+
+static inline void
+wait_on_irq(int cpu)
+{
+ int count = MAXCOUNT;
+
+ for (;;) {
+
+ /*
+ * Wait until all interrupts are gone. Wait
+ * for bottom half handlers unless we're
+ * already executing in one..
+ */
+ if (!atomic_read(&global_irq_count)) {
+ if (local_bh_count[cpu] || !atomic_read(&global_bh_count))
+ break;
+ }
+
+ /* Duh, we have to loop. Release the lock to avoid deadlocks */
+ spin_unlock(&global_irq_lock);
+ mb();
+
+ for (;;) {
+ if (!--count) {
+ show("wait_on_irq");
+ count = ~0;
+ }
+ __sti();
+ udelay(cpu + 1);
+ __cli();
+ if (atomic_read(&global_irq_count))
+ continue;
+ if (global_irq_lock.lock)
+ continue;
+ if (!local_bh_count[cpu] && atomic_read(&global_bh_count))
+ continue;
+ if (spin_trylock(&global_irq_lock))
+ break;
+ }
+ }
+}
+
+/*
+ * This is called when we want to synchronize with
+ * bottom half handlers. We need to wait until
+ * no other CPU is executing any bottom half handler.
+ *
+ * Don't wait if we're already running in an interrupt
+ * context or are inside a bh handler.
+ */
+void
+synchronize_bh(void)
+{
+ if (atomic_read(&global_bh_count)) {
+ int cpu = smp_processor_id();
+ if (!local_irq_count[cpu] && !local_bh_count[cpu]) {
+ wait_on_bh();
+ }
+ }
+}
+
+
+/*
+ * This is called when we want to synchronize with
+ * interrupts. We may for example tell a device to
+ * stop sending interrupts: but to make sure there
+ * are no interrupts that are executing on another
+ * CPU we need to call this function.
+ */
+void
+synchronize_irq(void)
+{
+ int cpu = smp_processor_id();
+ int local_count;
+ int global_count;
+
+ mb();
+ do {
+ local_count = local_irq_count[cpu];
+ global_count = atomic_read(&global_irq_count);
+ } while (global_count != local_count);
+}
+
+static inline void
+get_irqlock(int cpu)
+{
+ if (!spin_trylock(&global_irq_lock)) {
+ /* do we already hold the lock? */
+ if ((unsigned char) cpu == global_irq_holder)
+ return;
+ /* Uhhuh.. Somebody else got it. Wait.. */
+ spin_lock(&global_irq_lock);
+ }
+ /*
+ * We also need to make sure that nobody else is running
+ * in an interrupt context.
+ */
+ wait_on_irq(cpu);
+
+ /*
+ * Ok, finally..
+ */
+ global_irq_holder = cpu;
+}
+
+/*
+ * A global "cli()" while in an interrupt context
+ * turns into just a local cli(). Interrupts
+ * should use spinlocks for the (very unlikely)
+ * case that they ever want to protect against
+ * each other.
+ *
+ * If we already have local interrupts disabled,
+ * this will not turn a local disable into a
+ * global one (problems with spinlocks: this makes
+ * save_flags+cli+sti usable inside a spinlock).
+ */
+void
+__global_cli(void)
+{
+ unsigned long flags;
+
+ __save_flags(flags);
+ if (flags & IA64_PSR_I) {
+ int cpu = smp_processor_id();
+ __cli();
+ if (!local_irq_count[cpu])
+ get_irqlock(cpu);
+ }
+}
+
+void
+__global_sti(void)
+{
+ int cpu = smp_processor_id();
+
+ if (!local_irq_count[cpu])
+ release_irqlock(cpu);
+ __sti();
+}
+
+/*
+ * SMP flags value to restore to:
+ * 0 - global cli
+ * 1 - global sti
+ * 2 - local cli
+ * 3 - local sti
+ */
+unsigned long
+__global_save_flags(void)
+{
+ int retval;
+ int local_enabled;
+ unsigned long flags;
+
+ __save_flags(flags);
+ local_enabled = flags & IA64_PSR_I;
+ /* default to local */
+ retval = 2 + local_enabled;
+
+ /* check for global flags if we're not in an interrupt */
+ if (!local_irq_count[smp_processor_id()]) {
+ if (local_enabled)
+ retval = 1;
+ if (global_irq_holder == (unsigned char) smp_processor_id())
+ retval = 0;
+ }
+ return retval;
+}
+
+void
+__global_restore_flags(unsigned long flags)
+{
+ switch (flags) {
+ case 0:
+ __global_cli();
+ break;
+ case 1:
+ __global_sti();
+ break;
+ case 2:
+ __cli();
+ break;
+ case 3:
+ __sti();
+ break;
+ default:
+ printk("global_restore_flags: %08lx (%08lx)\n",
+ flags, (&flags)[-1]);
+ }
+}
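+
+/*
+ * Usage sketch for the encoding above (illustrative only): on SMP builds
+ * the generic save_flags()/cli()/restore_flags() macros are normally
+ * wired to these __global_* routines, so a caller simply does
+ *
+ *     unsigned long flags;
+ *
+ *     save_flags(flags);      -- yields one of the four values above
+ *     cli();
+ *     ... critical section ...
+ *     restore_flags(flags);   -- re-establishes the saved state
+ */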
--- /dev/null
+/*
+ * arch/ia64/kernel/ivt.S
+ *
+ * Copyright (C) 1998-2000 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 Stephane Eranian <eranian@hpl.hp.com>
+ * Copyright (C) 1998-2000 David Mosberger <davidm@hpl.hp.com>
+ */
+
+#include <linux/config.h>
+
+#include <asm/break.h>
+#include <asm/offsets.h>
+#include <asm/pgtable.h>
+#include <asm/processor.h>
+#include <asm/ptrace.h>
+#include <asm/system.h>
+#include <asm/unistd.h>
+
+#include "entry.h"
+
+/*
+ * A couple of convenience macros that make writing and reading
+ * SAVE_MIN and SAVE_REST easier.
+ */
+#define rARPR r31
+#define rCRIFS r30
+#define rCRIPSR r29
+#define rCRIIP r28
+#define rARRSC r27
+#define rARPFS r26
+#define rARUNAT r25
+#define rARRNAT r24
+#define rARBSPSTORE r23
+#define rKRBS r22
+#define rB6 r21
+#define rR1 r20
+
+/*
+ * DO_SAVE_MIN switches to the kernel stacks (if necessary) and saves
+ * the minimum state necessary that allows us to turn psr.ic back
+ * on.
+ *
+ * Assumed state upon entry:
+ * psr.ic: off
+ * psr.dt: off
+ * r31: contains saved predicates (pr)
+ *
+ * Upon exit, the state is as follows:
+ * psr.ic: off
+ * psr.dt: off
+ * r2 = points to &pt_regs.r16
+ * r12 = kernel sp (kernel virtual address)
+ * r13 = points to current task_struct (kernel virtual address)
+ * p15 = TRUE if psr.i is set in cr.ipsr
+ * predicate registers (other than p6, p7, and p15), b6, r3, r8, r9, r10, r11, r14, r15:
+ * preserved
+ *
+ * Note that psr.ic is NOT turned on by this macro. This is so that
+ * we can pass interruption state as arguments to a handler.
+ */
+#define DO_SAVE_MIN(COVER,EXTRA) \
+ mov rARRSC=ar.rsc; \
+ mov rARPFS=ar.pfs; \
+ mov rR1=r1; \
+ mov rARUNAT=ar.unat; \
+ mov rCRIPSR=cr.ipsr; \
+ mov rB6=b6; /* rB6 = branch reg 6 */ \
+ mov rCRIIP=cr.iip; \
+ mov r1=ar.k6; /* r1 = current */ \
+ ;; \
+ invala; \
+ extr.u r16=rCRIPSR,32,2; /* extract psr.cpl */ \
+ ;; \
+ cmp.eq pKern,p7=r0,r16; /* are we in kernel mode already? (psr.cpl==0) */ \
+ /* switch from user to kernel RBS: */ \
+ COVER; \
+ ;; \
+(p7) mov ar.rsc=r0; /* set enforced lazy mode, pl 0, little-endian, loadrs=0 */ \
+(p7) addl rKRBS=IA64_RBS_OFFSET,r1; /* compute base of register backing store */ \
+ ;; \
+(p7) mov rARRNAT=ar.rnat; \
+(pKern) dep r1=0,sp,61,3; /* compute physical addr of sp */ \
+(p7) addl r1=IA64_STK_OFFSET-IA64_PT_REGS_SIZE,r1; /* compute base of memory stack */ \
+(p7) mov rARBSPSTORE=ar.bspstore; /* save ar.bspstore */ \
+(p7) dep rKRBS=-1,rKRBS,61,3; /* compute kernel virtual addr of RBS */ \
+ ;; \
+(pKern) addl r1=-IA64_PT_REGS_SIZE,r1; /* if in kernel mode, use sp (r12) */ \
+(p7) mov ar.bspstore=rKRBS; /* switch to kernel RBS */ \
+ ;; \
+(p7) mov r18=ar.bsp; \
+(p7) mov ar.rsc=0x3; /* set eager mode, pl 0, little-endian, loadrs=0 */ \
+ \
+ mov r16=r1; /* initialize first base pointer */ \
+ adds r17=8,r1; /* initialize second base pointer */ \
+ ;; \
+ st8 [r16]=rCRIPSR,16; /* save cr.ipsr */ \
+ st8 [r17]=rCRIIP,16; /* save cr.iip */ \
+(pKern) mov r18=r0; /* make sure r18 isn't NaT */ \
+ ;; \
+ st8 [r16]=rCRIFS,16; /* save cr.ifs */ \
+ st8 [r17]=rARUNAT,16; /* save ar.unat */ \
+(p7) sub r18=r18,rKRBS; /* r18=RSE.ndirty*8 */ \
+ ;; \
+ st8 [r16]=rARPFS,16; /* save ar.pfs */ \
+ st8 [r17]=rARRSC,16; /* save ar.rsc */ \
+ tbit.nz p15,p0=rCRIPSR,IA64_PSR_I_BIT \
+ ;; /* avoid RAW on r16 & r17 */ \
+(pKern) adds r16=16,r16; /* skip over ar_rnat field */ \
+(pKern) adds r17=16,r17; /* skip over ar_bspstore field */ \
+(p7) st8 [r16]=rARRNAT,16; /* save ar.rnat */ \
+(p7) st8 [r17]=rARBSPSTORE,16; /* save ar.bspstore */ \
+ ;; \
+ st8 [r16]=rARPR,16; /* save predicates */ \
+ st8 [r17]=rB6,16; /* save b6 */ \
+ shl r18=r18,16; /* compute ar.rsc to be used for "loadrs" */ \
+ ;; \
+ st8 [r16]=r18,16; /* save ar.rsc value for "loadrs" */ \
+ st8.spill [r17]=rR1,16; /* save original r1 */ \
+ cmp.ne pEOI,p0=r0,r0 /* clear pEOI by default */ \
+ ;; \
+ st8.spill [r16]=r2,16; \
+ st8.spill [r17]=r3,16; \
+ adds r2=IA64_PT_REGS_R16_OFFSET,r1; \
+ ;; \
+ st8.spill [r16]=r12,16; \
+ st8.spill [r17]=r13,16; \
+ cmp.eq pNonSys,pSys=r0,r0 /* initialize pSys=0, pNonSys=1 */ \
+ ;; \
+ st8.spill [r16]=r14,16; \
+ st8.spill [r17]=r15,16; \
+ dep r14=-1,r0,61,3; \
+ ;; \
+ st8.spill [r16]=r8,16; \
+ st8.spill [r17]=r9,16; \
+ adds r12=-16,r1; /* switch to kernel memory stack (with 16 bytes of scratch) */ \
+ ;; \
+ st8.spill [r16]=r10,16; \
+ st8.spill [r17]=r11,16; \
+ mov r13=ar.k6; /* establish `current' */ \
+ ;; \
+ or r2=r2,r14; /* make first base a kernel virtual address */ \
+ EXTRA; \
+ movl r1=__gp; /* establish kernel global pointer */ \
+ ;; \
+ or r12=r12,r14; /* make sp a kernel virtual address */ \
+ or r13=r13,r14; /* make `current' a kernel virtual address */ \
+ bsw.1;; /* switch back to bank 1 (must be last in insn group) */
+
+#ifdef CONFIG_ITANIUM_ASTEP_SPECIFIC
+# define STOPS nop.i 0x0;; nop.i 0x0;; nop.i 0x0;;
+#else
+# define STOPS
+#endif
+
+#define SAVE_MIN_WITH_COVER DO_SAVE_MIN(cover;; mov rCRIFS=cr.ifs,) STOPS
+#define SAVE_MIN_WITH_COVER_R19 DO_SAVE_MIN(cover;; mov rCRIFS=cr.ifs, mov r15=r19) STOPS
+#define SAVE_MIN DO_SAVE_MIN(mov rCRIFS=r0,) STOPS
+
+/*
+ * SAVE_REST saves the remainder of pt_regs (with psr.ic on). This
+ * macro guarantees to preserve all predicate registers, r8, r9, r10,
+ * r11, r14, and r15.
+ *
+ * Assumed state upon entry:
+ * psr.ic: on
+ * psr.dt: on
+ * r2: points to &pt_regs.r16
+ * r3: points to &pt_regs.r17
+ */
+#define SAVE_REST \
+ st8.spill [r2]=r16,16; \
+ st8.spill [r3]=r17,16; \
+ ;; \
+ st8.spill [r2]=r18,16; \
+ st8.spill [r3]=r19,16; \
+ ;; \
+ mov r16=ar.ccv; /* M-unit */ \
+ movl r18=FPSR_DEFAULT /* L-unit */ \
+ ;; \
+ mov r17=ar.fpsr; /* M-unit */ \
+ mov ar.fpsr=r18; /* M-unit */ \
+ ;; \
+ st8.spill [r2]=r20,16; \
+ st8.spill [r3]=r21,16; \
+ mov r18=b0; \
+ ;; \
+ st8.spill [r2]=r22,16; \
+ st8.spill [r3]=r23,16; \
+ mov r19=b7; \
+ ;; \
+ st8.spill [r2]=r24,16; \
+ st8.spill [r3]=r25,16; \
+ ;; \
+ st8.spill [r2]=r26,16; \
+ st8.spill [r3]=r27,16; \
+ ;; \
+ st8.spill [r2]=r28,16; \
+ st8.spill [r3]=r29,16; \
+ ;; \
+ st8.spill [r2]=r30,16; \
+ st8.spill [r3]=r31,16; \
+ ;; \
+ st8 [r2]=r16,16; /* ar.ccv */ \
+ st8 [r3]=r17,16; /* ar.fpsr */ \
+ ;; \
+ st8 [r2]=r18,16; /* b0 */ \
+ st8 [r3]=r19,16+8; /* b7 */ \
+ ;; \
+ stf.spill [r2]=f6,32; \
+ stf.spill [r3]=f7,32; \
+ ;; \
+ stf.spill [r2]=f8,32; \
+ stf.spill [r3]=f9,32
+
+/*
+ * This file defines the interrupt vector table used by the CPU.
+ * It does not include one entry per possible cause of interruption.
+ *
+ * External interrupts only use 1 entry. All others are internal interrupts
+ *
+ * The first 20 entries of the table contain 64 bundles each while the
+ * remaining 48 entries contain only 16 bundles each.
+ *
+ * The 64 bundles are used to allow inlining the whole handler for critical
+ * interrupts like TLB misses.
+ *
+ * For each entry, the comment is as follows:
+ *
+ * // 0x1c00 Entry 7 (size 64 bundles) Data Key Miss (12,51)
+ * entry offset ----/     /         /          /            /
+ * entry number ---------/         /          /            /
+ * size of the entry -------------/          /            /
+ * vector name -------------------------------/            /
+ * related interrupts (what is the real interrupt?) ------/
+ *
+ * The table is 32KB in size and must be aligned on 32KB boundary.
+ * (The CPU ignores the 15 lower bits of the address)
+ *
+ * Table is based upon EAS2.4 (June 1998)
+ */
+
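+// Size check on the layout described above: 20 entries * 64 bundles * 16 bytes
+// + 48 entries * 16 bundles * 16 bytes = 20KB + 12KB = 32KB.
+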
+#define FAULT(n) \
+ rsm psr.dt; /* avoid nested faults due to TLB misses... */ \
+ ;; \
+ srlz.d; /* ensure everyone knows psr.dt is off... */ \
+ mov r31=pr; \
+ mov r19=n;; /* prepare to save predicates */ \
+ br.cond.sptk.many dispatch_to_fault_handler
+
+/*
+ * As we don't (hopefully) use the space available, we need to fill it with
+ * nops. The parameter may be used for debugging and represents the entry
+ * number.
+ */
+#define BREAK_BUNDLE(a) break.m (a); \
+ break.i (a); \
+ break.i (a)
+/*
+ * 4 break bundles altogether
+ */
+#define BREAK_BUNDLE4(a); BREAK_BUNDLE(a); BREAK_BUNDLE(a); BREAK_BUNDLE(a); BREAK_BUNDLE(a)
+
+/*
+ * 8 break bundles altogether (too lazy to use only 4 at a time!)
+ */
+#define BREAK_BUNDLE8(a); BREAK_BUNDLE4(a); BREAK_BUNDLE4(a)
+
+ .psr abi64
+ .psr lsb
+ .lsb
+
+ .section __ivt_section,"ax"
+
+ .align 32768 // align on 32KB boundary
+ .global ia64_ivt
+ia64_ivt:
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x0000 Entry 0 (size 64 bundles) VHPT Translation (8,20,47)
+ /*
+ * The VHPT vector is invoked when the TLB entry for the virtual page table
+ * is missing. This happens only as a result of a previous
+ * (the "original") TLB miss, which may either be caused by an instruction
+ * fetch or a data access (or non-access).
+ *
+ * What we do here is normal TLB miss handling for the _original_ miss, followed
+ * by inserting the TLB entry for the virtual page table page that the VHPT
+ * walker was attempting to access. The latter gets inserted as long
+ * as both L1 and L2 have valid mappings for the faulting address.
+ * The TLB entry for the original miss gets inserted only if
+ * the L3 entry indicates that the page is present.
+ *
+ * do_page_fault gets invoked in the following cases:
+ * - the faulting virtual address uses unimplemented address bits
+ * - the faulting virtual address has no L1, L2, or L3 mapping
+ */
+ mov r16=cr.ifa // get address that caused the TLB miss
+ ;;
+ rsm psr.dt // use physical addressing for data
+ mov r31=pr // save the predicate registers
+ mov r19=ar.k7 // get page table base address
+ shl r21=r16,3 // shift bit 60 into sign bit
+ shr.u r17=r16,61 // get the region number into r17
+ ;;
+ cmp.eq p6,p7=5,r17 // is IFA pointing into region 5?
+ shr.u r18=r16,PGDIR_SHIFT // get bits 33-63 of the faulting address
+ ;;
+(p7) dep r17=r17,r19,(PAGE_SHIFT-3),3 // put region number bits in place
+ srlz.d // ensure "rsm psr.dt" has taken effect
+(p6) movl r19=__pa(SWAPPER_PGD_ADDR) // region 5 is rooted at swapper_pg_dir
+(p6) shr r21=r21,PGDIR_SHIFT+PAGE_SHIFT-1
+(p7) shr r21=r21,PGDIR_SHIFT+PAGE_SHIFT-4
+ ;;
+(p6) dep r17=r18,r19,3,(PAGE_SHIFT-3) // r17=PTA + IFA(33,42)*8
+(p7) dep r17=r18,r17,3,(PAGE_SHIFT-6) // r17=PTA + (((IFA(61,63) << 7) | IFA(33,39))*8)
+ cmp.eq p7,p6=0,r21 // unused address bits all zeroes?
+ shr.u r18=r16,PMD_SHIFT // shift L2 index into position
+ ;;
+(p6) cmp.eq p7,p6=-1,r21 // unused address bits all ones?
+ ld8 r17=[r17] // fetch the L1 entry (may be 0)
+ ;;
+(p7) cmp.eq p6,p7=r17,r0 // was L1 entry NULL?
+ dep r17=r18,r17,3,(PAGE_SHIFT-3) // compute address of L2 page table entry
+ ;;
+(p7) ld8 r17=[r17] // fetch the L2 entry (may be 0)
+ shr.u r19=r16,PAGE_SHIFT // shift L3 index into position
+ ;;
+(p7) cmp.eq.or.andcm p6,p7=r17,r0 // was L2 entry NULL?
+ dep r17=r19,r17,3,(PAGE_SHIFT-3) // compute address of L3 page table entry
+ ;;
+(p7) ld8 r18=[r17] // read the L3 PTE
+ mov r19=cr.isr // cr.isr bit 0 tells us if this is an insn miss
+ ;;
+(p7) tbit.z p6,p7=r18,0 // page present bit cleared?
+ mov r21=cr.iha // get the VHPT address that caused the TLB miss
+ ;; // avoid RAW on p7
+(p7) tbit.nz.unc p10,p11=r19,32 // is it an instruction TLB miss?
+ dep r17=0,r17,0,PAGE_SHIFT // clear low bits to get page address
+ ;;
+(p10) itc.i r18;; // insert the instruction TLB entry (EAS2.6: must be last in insn group!)
+(p11) itc.d r18;; // insert the data TLB entry (EAS2.6: must be last in insn group!)
+(p6) br.spnt.few page_fault // handle bad address/page not present (page fault)
+ mov cr.ifa=r21
+
+ // Now compute and insert the TLB entry for the virtual page table.
+ // We never execute in a page table page so there is no need to set
+ // the exception deferral bit.
+ adds r16=__DIRTY_BITS_NO_ED|_PAGE_PL_0|_PAGE_AR_RW,r17
+ ;;
+(p7) itc.d r16;; // EAS2.6: must be last in insn group!
+ mov pr=r31,-1 // restore predicate registers
+ rfi;; // must be last insn in an insn group
+
+ .align 1024
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x0400 Entry 1 (size 64 bundles) ITLB (21)
+ /*
+ * The ITLB basically does the same as the VHPT handler except
+ * that we always insert exactly one instruction TLB entry.
+ */
+ mov r16=cr.ifa // get address that caused the TLB miss
+ ;;
+ rsm psr.dt // use physical addressing for data
+ mov r31=pr // save the predicate registers
+ mov r19=ar.k7 // get page table base address
+ shl r21=r16,3 // shift bit 60 into sign bit
+ shr.u r17=r16,61 // get the region number into r17
+ ;;
+ cmp.eq p6,p7=5,r17 // is IFA pointing into region 5?
+ shr.u r18=r16,PGDIR_SHIFT // get bits 33-63 of the faulting address
+ ;;
+(p7) dep r17=r17,r19,(PAGE_SHIFT-3),3 // put region number bits in place
+ srlz.d // ensure "rsm psr.dt" has taken effect
+(p6) movl r19=__pa(SWAPPER_PGD_ADDR) // region 5 is rooted at swapper_pg_dir
+(p6) shr r21=r21,PGDIR_SHIFT+PAGE_SHIFT-1
+(p7) shr r21=r21,PGDIR_SHIFT+PAGE_SHIFT-4
+ ;;
+(p6) dep r17=r18,r19,3,(PAGE_SHIFT-3) // r17=PTA + IFA(33,42)*8
+(p7) dep r17=r18,r17,3,(PAGE_SHIFT-6) // r17=PTA + (((IFA(61,63) << 7) | IFA(33,39))*8)
+ cmp.eq p7,p6=0,r21 // unused address bits all zeroes?
+ shr.u r18=r16,PMD_SHIFT // shift L2 index into position
+ ;;
+(p6) cmp.eq p7,p6=-1,r21 // unused address bits all ones?
+ ld8 r17=[r17] // fetch the L1 entry (may be 0)
+ ;;
+(p7) cmp.eq p6,p7=r17,r0 // was L1 entry NULL?
+ dep r17=r18,r17,3,(PAGE_SHIFT-3) // compute address of L2 page table entry
+ ;;
+(p7) ld8 r17=[r17] // fetch the L2 entry (may be 0)
+ shr.u r19=r16,PAGE_SHIFT // shift L3 index into position
+ ;;
+(p7) cmp.eq.or.andcm p6,p7=r17,r0 // was L2 entry NULL?
+ dep r17=r19,r17,3,(PAGE_SHIFT-3) // compute address of L3 page table entry
+ ;;
+(p7) ld8 r18=[r17] // read the L3 PTE
+ ;;
+(p7) tbit.z p6,p7=r18,0 // page present bit cleared?
+ ;;
+(p7) itc.i r18;; // insert the instruction TLB entry (EAS2.6: must be last in insn group!)
+(p6) br.spnt.few page_fault // handle bad address/page not present (page fault)
+ ;;
+ mov pr=r31,-1 // restore predicate registers
+ rfi;; // must be last insn in an insn group
+
+ .align 1024
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x0800 Entry 2 (size 64 bundles) DTLB (9,48)
+ /*
+ * The DTLB basically does the same as the VHPT handler except
+ * that we always insert exactly one data TLB entry.
+ */
+ mov r16=cr.ifa // get address that caused the TLB miss
+ ;;
+ rsm psr.dt // use physical addressing for data
+ mov r31=pr // save the predicate registers
+ mov r19=ar.k7 // get page table base address
+ shl r21=r16,3 // shift bit 60 into sign bit
+ shr.u r17=r16,61 // get the region number into r17
+ ;;
+ cmp.eq p6,p7=5,r17 // is IFA pointing into region 5?
+ shr.u r18=r16,PGDIR_SHIFT // get bits 33-63 of the faulting address
+ ;;
+(p7) dep r17=r17,r19,(PAGE_SHIFT-3),3 // put region number bits in place
+ srlz.d // ensure "rsm psr.dt" has taken effect
+(p6) movl r19=__pa(SWAPPER_PGD_ADDR) // region 5 is rooted at swapper_pg_dir
+(p6) shr r21=r21,PGDIR_SHIFT+PAGE_SHIFT-1
+(p7) shr r21=r21,PGDIR_SHIFT+PAGE_SHIFT-4
+ ;;
+(p6) dep r17=r18,r19,3,(PAGE_SHIFT-3) // r17=PTA + IFA(33,42)*8
+(p7) dep r17=r18,r17,3,(PAGE_SHIFT-6) // r17=PTA + (((IFA(61,63) << 7) | IFA(33,39))*8)
+ cmp.eq p7,p6=0,r21 // unused address bits all zeroes?
+ shr.u r18=r16,PMD_SHIFT // shift L2 index into position
+ ;;
+(p6) cmp.eq p7,p6=-1,r21 // unused address bits all ones?
+ ld8 r17=[r17] // fetch the L1 entry (may be 0)
+ ;;
+(p7) cmp.eq p6,p7=r17,r0 // was L1 entry NULL?
+ dep r17=r18,r17,3,(PAGE_SHIFT-3) // compute address of L2 page table entry
+ ;;
+(p7) ld8 r17=[r17] // fetch the L2 entry (may be 0)
+ shr.u r19=r16,PAGE_SHIFT // shift L3 index into position
+ ;;
+(p7) cmp.eq.or.andcm p6,p7=r17,r0 // was L2 entry NULL?
+ dep r17=r19,r17,3,(PAGE_SHIFT-3) // compute address of L3 page table entry
+ ;;
+(p7) ld8 r18=[r17] // read the L3 PTE
+ ;;
+(p7) tbit.z p6,p7=r18,0 // page present bit cleared?
+ ;;
+(p7) itc.d r18;; // insert the data TLB entry (EAS2.6: must be last in insn group!)
+(p6) br.spnt.few page_fault // handle bad address/page not present (page fault)
+ ;;
+ mov pr=r31,-1 // restore predicate registers
+ rfi;; // must be last insn in an insn group
+
+ //-----------------------------------------------------------------------------------
+ // call do_page_fault (predicates are in r31, psr.dt is off, r16 is faulting address)
+page_fault:
+ SAVE_MIN_WITH_COVER
+ //
+ // Copy control registers to temporary registers, then turn on psr bits,
+ // then copy the temporary regs to the output regs. We have to do this
+ // because the "alloc" can cause a mandatory store which could lead to
+ // an "Alt DTLB" fault which we can handle only if psr.ic is on.
+ //
+ mov r8=cr.ifa
+ mov r9=cr.isr
+ adds r3=8,r2 // set up second base pointer
+ ;;
+ ssm psr.ic | psr.dt
+ ;;
+ srlz.d // guarantee that interrupt collection is enabled
+(p15) ssm psr.i // restore psr.i
+ ;;
+ srlz.i // must precede "alloc"! (srlz.i implies srlz.d)
+ movl r14=ia64_leave_kernel
+ ;;
+ alloc r15=ar.pfs,0,0,3,0 // must be first in insn group
+ mov out0=r8
+ mov out1=r9
+ ;;
+ SAVE_REST
+ mov rp=r14
+ ;;
+ adds out2=16,r12 // out2 = pointer to pt_regs
+ br.call.sptk.few b6=ia64_do_page_fault // ignore return address
+
+ .align 1024
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x0c00 Entry 3 (size 64 bundles) Alt ITLB (19)
+ mov r16=cr.ifa // get address that caused the TLB miss
+ movl r17=__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RX
+ ;;
+ shr.u r18=r16,57 // move address bit 61 to bit 4
+ dep r16=0,r16,52,12 // clear top 12 bits of address
+ ;;
+ andcm r18=0x10,r18 // bit 4=~address-bit(61)
+ dep r16=r17,r16,0,12 // insert PTE control bits into r16
+ ;;
+ or r16=r16,r18 // set bit 4 (uncached) if the access was to region 6
+ ;;
+ itc.i r16;; // insert the TLB entry(EAS2.6: must be last in insn group!)
+ rfi;; // must be last insn in an insn group
+
+ .align 1024
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x1000 Entry 4 (size 64 bundles) Alt DTLB (7,46)
+ mov r16=cr.ifa // get address that caused the TLB miss
+ movl r17=__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RW
+ ;;
+ shr.u r18=r16,57 // move address bit 61 to bit 4
+ dep r16=0,r16,52,12 // clear top 12 bits of address
+ ;;
+ andcm r18=0x10,r18 // bit 4=~address-bit(61)
+ dep r16=r17,r16,0,12 // insert PTE control bits into r16
+ ;;
+ or r16=r16,r18 // set bit 4 (uncached) if the access was to region 6
+ ;;
+ itc.d r16;; // insert the TLB entry (EAS2.6: must be last in insn group!)
+ rfi;; // must be last insn in an insn group
+
+ .align 1024
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x1400 Entry 5 (size 64 bundles) Data nested TLB (6,45)
+ //
+ // In the absence of kernel bugs, we get here when the Dirty-bit, Instruction
+ // Access-bit, or Data Access-bit faults cause a nested fault because the
+ // dTLB entry for the virtual page table isn't present. In such a case,
+ // we lookup the pte for the faulting address by walking the page table
+ // and return to the continuation point passed in register r30.
+ // In accessing the page tables, we don't need to check for NULL entries
+ // because if the page tables didn't map the faulting address, it would not
+ // be possible to receive one of the above faults.
+ //
+ // Input: r16: faulting address
+ // r29: saved b0
+ // r30: continuation address
+ //
+ // Output: r17: physical address of L3 PTE of faulting address
+ // r29: saved b0
+ // r30: continuation address
+ //
+ // Clobbered: b0, r18, r19, r21, r31, psr.dt (cleared)
+ //
+ rsm psr.dt // switch to using physical data addressing
+ mov r19=ar.k7 // get the page table base address
+ shl r21=r16,3 // shift bit 60 into sign bit
+ ;;
+ mov r31=pr // save the predicate registers
+ shr.u r17=r16,61 // get the region number into r17
+ ;;
+ cmp.eq p6,p7=5,r17 // is faulting address in region 5?
+ shr.u r18=r16,PGDIR_SHIFT // get bits 33-63 of faulting address
+ ;;
+(p7) dep r17=r17,r19,(PAGE_SHIFT-3),3 // put region number bits in place
+ srlz.d
+(p6) movl r17=__pa(SWAPPER_PGD_ADDR) // region 5 is rooted at swapper_pg_dir
+(p6) shr r21=r21,PGDIR_SHIFT+PAGE_SHIFT-1
+(p7) shr r21=r21,PGDIR_SHIFT+PAGE_SHIFT-4
+ ;;
+(p6) dep r17=r18,r17,3,(PAGE_SHIFT-3) // r17=PTA + IFA(33,42)*8
+(p7) dep r17=r18,r17,3,(PAGE_SHIFT-6) // r17=PTA + (((IFA(61,63) << 7) | IFA(33,39))*8)
+ shr.u r18=r16,PMD_SHIFT // shift L2 index into position
+ ;;
+ ld8 r17=[r17] // fetch the L1 entry
+ mov b0=r30
+ ;;
+ dep r17=r18,r17,3,(PAGE_SHIFT-3) // compute address of L2 page table entry
+ ;;
+ ld8 r17=[r17] // fetch the L2 entry
+ shr.u r19=r16,PAGE_SHIFT // shift L3 index into position
+ ;;
+ dep r17=r19,r17,3,(PAGE_SHIFT-3) // compute address of L3 page table entry
+ ;;
+ mov pr=r31,-1 // restore predicates
+ br.cond.sptk.few b0 // return to continuation point
+
+ .align 1024
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x1800 Entry 6 (size 64 bundles) Instruction Key Miss (24)
+ FAULT(6)
+
+ .align 1024
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x1c00 Entry 7 (size 64 bundles) Data Key Miss (12,51)
+ FAULT(7)
+
+ .align 1024
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x2000 Entry 8 (size 64 bundles) Dirty-bit (54)
+ //
+ // What we do here is to simply turn on the dirty bit in the PTE. We need
+ // to update both the page-table and the TLB entry. To efficiently access
+ // the PTE, we address it through the virtual page table. Most likely, the
+ // TLB entry for the relevant virtual page table page is still present in
+ // the TLB so we can normally do this without additional TLB misses.
+ // In case the necessary virtual page table TLB entry isn't present, we take
+ // a nested TLB miss hit where we look up the physical address of the L3 PTE
+ // and then continue at label 1 below.
+ //
+ mov r16=cr.ifa // get the address that caused the fault
+ movl r30=1f // load continuation point in case of nested fault
+ ;;
+ thash r17=r16 // compute virtual address of L3 PTE
+ mov r29=b0 // save b0 in case of nested fault
+ ;;
+1: ld8 r18=[r17]
+ ;; // avoid RAW on r18
+ or r18=_PAGE_D,r18 // set the dirty bit
+ mov b0=r29 // restore b0
+ ;;
+ st8 [r17]=r18 // store back updated PTE
+ itc.d r18;; // install updated PTE (EAS2.6: must be last in insn group!)
+ rfi;; // must be last insn in an insn group
+
+ .align 1024
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x2400 Entry 9 (size 64 bundles) Instruction Access-bit (27)
+ // Like Entry 8, except for instruction access
+ mov r16=cr.ifa // get the address that caused the fault
+#ifdef CONFIG_ITANIUM_ASTEP_SPECIFIC
+ mov r31=pr // save predicates
+ mov r30=cr.ipsr
+ ;;
+ extr.u r17=r30,IA64_PSR_IS_BIT,1 // get instruction arch. indicator
+ ;;
+ cmp.eq p6,p0 = r17,r0 // check if IA64 instruction set
+ ;;
+(p6) mov r16=cr.iip // get real faulting address
+ ;;
+(p6) mov cr.ifa=r16 // reset IFA
+ mov pr=r31,-1
+#endif /* CONFIG_ITANIUM_ASTEP_SPECIFIC */
+ movl r30=1f // load continuation point in case of nested fault
+ ;;
+ thash r17=r16 // compute virtual address of L3 PTE
+ mov r29=b0 // save b0 in case of nested fault
+ ;;
+1: ld8 r18=[r17]
+ ;; // avoid RAW on r18
+ or r18=_PAGE_A,r18 // set the accessed bit
+ mov b0=r29 // restore b0
+ ;;
+ st8 [r17]=r18 // store back updated PTE
+ itc.i r18;; // install updated PTE (EAS2.6: must be last in insn group!)
+ rfi;; // must be last insn in an insn group
+
+ .align 1024
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x2800 Entry 10 (size 64 bundles) Data Access-bit (15,55)
+ // Like Entry 8, except for data access
+ mov r16=cr.ifa // get the address that caused the fault
+ movl r30=1f // load continuation point in case of nested fault
+ ;;
+ thash r17=r16 // compute virtual address of L3 PTE
+ mov r29=b0 // save b0 in case of nested fault
+ ;;
+1: ld8 r18=[r17]
+ ;; // avoid RAW on r18
+ or r18=_PAGE_A,r18 // set the accessed bit
+ mov b0=r29 // restore b0
+ ;;
+ st8 [r17]=r18 // store back updated PTE
+ itc.d r18;; // install updated PTE (EAS2.6: must be last in insn group!)
+ rfi;; // must be last insn in an insn group
+
+ .align 1024
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x2c00 Entry 11 (size 64 bundles) Break instruction (33)
+ mov r16=cr.iim
+ mov r17=__IA64_BREAK_SYSCALL
+ mov r31=pr // prepare to save predicates
+ rsm psr.dt // avoid nested faults due to TLB misses...
+ ;;
+ srlz.d // ensure everyone knows psr.dt is off...
+ cmp.eq p0,p7=r16,r17 // is this a system call? (p7 <- false, if so)
+
+#if 1
+ // Allow syscalls via the old system call number for the time being. This is
+ // so we can transition to the new syscall number in a relatively smooth
+ // fashion.
+ mov r17=0x80000
+ ;;
+(p7) cmp.eq.or.andcm p0,p7=r16,r17 // is this the old syscall number?
+#endif
+
+(p7) br.cond.spnt.many non_syscall
+
+ SAVE_MIN // uses r31; defines r2:
+
+ // turn interrupt collection and data translation back on:
+ ssm psr.ic | psr.dt
+ srlz.d // guarantee that interrupt collection is enabled
+ cmp.eq pSys,pNonSys=r0,r0 // set pSys=1, pNonSys=0
+ ;;
+(p15) ssm psr.i // restore psr.i
+ ;;
+ srlz.i // ensure everybody knows psr.ic and psr.dt are back on
+ adds r8=(IA64_PT_REGS_R8_OFFSET-IA64_PT_REGS_R16_OFFSET),r2
+ ;;
+ stf8 [r8]=f1 // ensure pt_regs.r8 != 0 (see handle_syscall_error)
+ adds r3=8,r2 // set up second base pointer for SAVE_REST
+ ;;
+ SAVE_REST
+ ;; // avoid WAW on r2 & r3
+
+ mov r3=255
+ adds r15=-1024,r15 // r15 contains the syscall number---subtract 1024
+ adds r2=IA64_TASK_FLAGS_OFFSET,r13 // r2 = &current->flags
+
+ ;;
+ cmp.geu.unc p6,p7=r3,r15 // (syscall > 0 && syscall <= 1024+255) ?
+ movl r16=sys_call_table
+ ;;
+(p6) shladd r16=r15,3,r16
+ movl r15=ia64_ret_from_syscall
+(p7) adds r16=(__NR_ni_syscall-1024)*8,r16 // force __NR_ni_syscall
+ ;;
+ ld8 r16=[r16] // load address of syscall entry point
+ mov rp=r15 // set the real return addr
+ ;;
+ ld8 r2=[r2] // r2 = current->flags
+ mov b6=r16
+
+ // arrange things so we skip over break instruction when returning:
+
+ adds r16=16,sp // get pointer to cr_ipsr
+ adds r17=24,sp // get pointer to cr_iip
+ ;;
+ ld8 r18=[r16] // fetch cr_ipsr
+ tbit.z p8,p0=r2,5 // (current->flags & PF_TRACESYS) == 0?
+ ;;
+ ld8 r19=[r17] // fetch cr_iip
+ extr.u r20=r18,41,2 // extract ei field
+ ;;
+ cmp.eq p6,p7=2,r20 // ipsr.ei==2?
+ adds r19=16,r19 // compute address of next bundle
+ ;;
+(p6) mov r20=0 // clear ei to 0
+(p7) adds r20=1,r20 // increment ei to next slot
+ ;;
+(p6) st8 [r17]=r19 // store new cr.iip if cr.ipsr.ei wrapped around
+ dep r18=r20,r18,41,2 // insert new ei into cr.ipsr
+ ;;
+ st8 [r16]=r18 // store new value for cr.ipsr
+
+(p8) br.call.sptk.few b6=b6 // ignore this return addr
+ br.call.sptk.few rp=ia64_trace_syscall // rp will be overwritten (ignored)
+ // NOT REACHED
+
+ .align 1024
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x3000 Entry 12 (size 64 bundles) External Interrupt (4)
+ rsm psr.dt // avoid nested faults due to TLB misses...
+ ;;
+ srlz.d // ensure everyone knows psr.dt is off...
+ mov r31=pr // prepare to save predicates
+ ;;
+
+ SAVE_MIN_WITH_COVER // uses r31; defines r2 and r3
+ ssm psr.ic | psr.dt // turn interrupt collection and data translation back on
+ ;;
+ adds r3=8,r2 // set up second base pointer for SAVE_REST
+ cmp.eq pEOI,p0=r0,r0 // set pEOI flag so that ia64_leave_kernel writes cr.eoi
+ srlz.i // ensure everybody knows psr.ic and psr.dt are back on
+ ;;
+ SAVE_REST
+ ;;
+ alloc r14=ar.pfs,0,0,2,0 // must be first in an insn group
+#ifdef CONFIG_ITANIUM_ASTEP_SPECIFIC
+ mov out0=r0 // defer reading of cr.ivr to handle_irq...
+#else
+ mov out0=cr.ivr // pass cr.ivr as first arg
+#endif
+ add out1=16,sp // pass pointer to pt_regs as second arg
+ ;;
+ srlz.d // make sure we see the effect of cr.ivr
+ movl r14=ia64_leave_kernel
+ ;;
+ mov rp=r14
+ br.call.sptk.few b6=ia64_handle_irq
+
+ .align 1024
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x3400 Entry 13 (size 64 bundles) Reserved
+ FAULT(13)
+
+ .align 1024
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x3800 Entry 14 (size 64 bundles) Reserved
+ FAULT(14)
+
+ .align 1024
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x3c00 Entry 15 (size 64 bundles) Reserved
+ FAULT(15)
+
+ .align 1024
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x4000 Entry 16 (size 64 bundles) Reserved
+ FAULT(16)
+
+#ifdef CONFIG_IA32_SUPPORT
+
+ // There is no particular reason for this code to be here, other than that
+ // there happens to be space here that would go unused otherwise. If this
+ // fault ever gets "unreserved", simply move the following code to a more
+ // suitable spot...
+
+ // IA32 interrupt entry point
+
+dispatch_to_ia32_handler:
+ SAVE_MIN
+ ;;
+ mov r14=cr.isr
+ ssm psr.ic | psr.dt
+ srlz.d // guarantee that interrupt collection is enabled
+ ;;
+(p15) ssm psr.i
+ ;;
+ srlz.d
+ adds r3=8,r2 // Base pointer for SAVE_REST
+ ;;
+ SAVE_REST
+ ;;
+ mov r15=0x80
+ shr r14=r14,16 // Get interrupt number
+ ;;
+ cmp.ne p6,p0=r14,r15
+(p6) br.call.dpnt.few b6=non_ia32_syscall
+
+ adds r14=IA64_PT_REGS_R8_OFFSET + 16,sp // 16 byte hole per SW conventions
+
+ ;;
+ alloc r15=ar.pfs,0,0,6,0 // must be first in an insn group
+ ;;
+ ld4 r8=[r14],8 // r8 == EAX (syscall number)
+ mov r15=0xff
+ ;;
+ cmp.ltu.unc p6,p7=r8,r15
+ ld4 out1=[r14],8 // r9 == ecx
+ ;;
+ ld4 out2=[r14],8 // r10 == edx
+ ;;
+ ld4 out0=[r14] // r11 == ebx
+ adds r14=(IA64_PT_REGS_R8_OFFSET-(8*3)) + 16,sp
+ ;;
+ ld4 out5=[r14],8 // r13 == ebp
+ ;;
+ ld4 out3=[r14],8 // r14 == esi
+ adds r2=IA64_TASK_FLAGS_OFFSET,r13 // r2 = &current->flags
+ ;;
+ ld4 out4=[r14] // R15 == edi
+ movl r16=ia32_syscall_table
+ ;;
+(p6) shladd r16=r8,3,r16 // Force ni_syscall if not valid syscall number
+ ld8 r2=[r2] // r2 = current->flags
+ ;;
+ ld8 r16=[r16]
+ tbit.z p8,p0=r2,5 // (current->flags & PF_TRACESYS) == 0?
+ ;;
+ movl r15=ia32_ret_from_syscall
+ mov b6=r16
+ ;;
+ mov rp=r15
+(p8) br.call.sptk.few b6=b6
+ br.call.sptk.few rp=ia32_trace_syscall // rp will be overwritten (ignored)
+
+non_ia32_syscall:
+ alloc r15=ar.pfs,0,0,2,0
+ mov out0=r14 // interrupt #
+ add out1=16,sp // pointer to pt_regs
+ ;; // avoid WAW on CFM
+ br.call.sptk.few rp=ia32_bad_interrupt
+ ;;
+ movl r15=ia64_leave_kernel
+ ;;
+ mov rp=r15
+ br.ret.sptk.many rp
+
+#endif /* CONFIG_IA32_SUPPORT */
+
+ .align 1024
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x4400 Entry 17 (size 64 bundles) Reserved
+ FAULT(17)
+
+non_syscall:
+
+#ifdef CONFIG_KDB
+ mov r17=__IA64_BREAK_KDB
+ ;;
+ cmp.eq p8,p0=r16,r17 // is this a kernel breakpoint?
+#endif
+
+ SAVE_MIN_WITH_COVER
+
+ // There is no particular reason for this code to be here, other than that
+ // there happens to be space here that would go unused otherwise. If this
+ // fault ever gets "unreserved", simply move the following code to a more
+ // suitable spot...
+
+ mov r8=cr.iim // get break immediate (must be done while psr.ic is off)
+ adds r3=8,r2 // set up second base pointer for SAVE_REST
+
+ // turn interrupt collection and data translation back on:
+ ssm psr.ic | psr.dt
+ srlz.d // guarantee that interrupt collection is enabled
+ ;;
+(p15) ssm psr.i // restore psr.i
+ ;;
+ srlz.i // ensure everybody knows psr.ic and psr.dt are back on
+ movl r15=ia64_leave_kernel
+ ;;
+ alloc r14=ar.pfs,0,0,2,0
+ mov out0=r8 // break number
+ add out1=16,sp // pointer to pt_regs
+ ;;
+ SAVE_REST
+ mov rp=r15
+ ;;
+#ifdef CONFIG_KDB
+(p8) br.call.sptk.few b6=ia64_invoke_kdb
+#endif
+ br.call.sptk.few b6=ia64_bad_break // avoid WAW on CFM and ignore return addr
+
+ .align 1024
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x4800 Entry 18 (size 64 bundles) Reserved
+ FAULT(18)
+
+ // There is no particular reason for this code to be here, other than that
+ // there happens to be space here that would go unused otherwise. If this
+ // fault ever gets "unreserved", simply move the following code to a more
+ // suitable spot...
+
+dispatch_unaligned_handler:
+ SAVE_MIN_WITH_COVER
+ ;;
+ //
+ // we can't have the alloc while psr.ic is cleared because
+ // we might get a mandatory RSE (when you reach the end of the
+ // rotating partition when doing the alloc) spill which could cause
+ // a page fault on the kernel virtual address and the handler
+ // wouldn't get the state to recover.
+ //
+ mov r15=cr.ifa
+ ssm psr.ic | psr.dt
+ srlz.d // guarantee that interrupt collection is enabled
+ ;;
+(p15) ssm psr.i // restore psr.i
+ ;;
+ srlz.i
+ adds r3=8,r2 // set up second base pointer
+ ;;
+ SAVE_REST
+ ;;
+ alloc r14=ar.pfs,0,0,2,0 // now it's safe (must be first in insn group!)
+ ;; // avoid WAW on r14
+ movl r14=ia64_leave_kernel
+ mov out0=r15 // out0 = faulting address
+ adds out1=16,sp // out1 = pointer to pt_regs
+ ;;
+ mov rp=r14
+ br.sptk.few ia64_prepare_handle_unaligned
+
+ .align 1024
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x4c00 Entry 19 (size 64 bundles) Reserved
+ FAULT(19)
+
+ // There is no particular reason for this code to be here, other than that
+ // there happens to be space here that would go unused otherwise. If this
+ // fault ever gets "unreserved", simply move the following code to a more
+ // suitable spot...
+
+dispatch_to_fault_handler:
+ //
+ // Input:
+ // psr.ic: off
+ // psr.dt: off
+ // r19: fault vector number (e.g., 24 for General Exception)
+ // r31: contains saved predicates (pr)
+ //
+ SAVE_MIN_WITH_COVER_R19
+ //
+ // Copy control registers to temporary registers, then turn on psr bits,
+ // then copy the temporary regs to the output regs. We have to do this
+ // because the "alloc" can cause a mandatory store which could lead to
+ // an "Alt DTLB" fault which we can handle only if psr.ic is on.
+ //
+ mov r8=cr.isr
+ mov r9=cr.ifa
+ mov r10=cr.iim
+ mov r11=cr.itir
+ ;;
+ ssm psr.ic | psr.dt
+ srlz.d // guarantee that interrupt collection is enabled
+ ;;
+(p15) ssm psr.i // restore psr.i
+ adds r3=8,r2 // set up second base pointer for SAVE_REST
+ ;;
+ srlz.i // must precede "alloc"!
+ ;;
+ alloc r14=ar.pfs,0,0,5,0 // must be first in insn group
+ mov out0=r15
+ mov out1=r8
+ mov out2=r9
+ mov out3=r10
+ mov out4=r11
+ ;;
+ SAVE_REST
+ movl r14=ia64_leave_kernel
+ ;;
+ mov rp=r14
+#ifdef CONFIG_KDB
+ br.call.sptk.few b6=ia64_invoke_kdb_fault_handler
+#else
+ br.call.sptk.few b6=ia64_fault
+#endif
+//
+// --- End of long entries, Beginning of short entries
+//
+
+ .align 1024
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x5000 Entry 20 (size 16 bundles) Page Not Present (10,22,49)
+ mov r16=cr.ifa
+ rsm psr.dt
+#if 0
+	// If you disable this, you MUST re-enable the update_mmu_cache() code in pgtable.h
+ mov r17=_PAGE_SIZE_4K<<2
+ ;;
+ ptc.l r16,r17
+#endif
+ ;;
+ mov r31=pr
+ srlz.d
+ br.cond.sptk.many page_fault
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x5100 Entry 21 (size 16 bundles) Key Permission (13,25,52)
+ mov r16=cr.ifa
+ rsm psr.dt
+ mov r31=pr
+ ;;
+ srlz.d
+ br.cond.sptk.many page_fault
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x5200 Entry 22 (size 16 bundles) Instruction Access Rights (26)
+ mov r16=cr.ifa
+ rsm psr.dt
+ mov r31=pr
+ ;;
+ srlz.d
+ br.cond.sptk.many page_fault
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x5300 Entry 23 (size 16 bundles) Data Access Rights (14,53)
+ mov r16=cr.ifa
+ rsm psr.dt
+ mov r31=pr
+ ;;
+ srlz.d
+ br.cond.sptk.many page_fault
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x5400 Entry 24 (size 16 bundles) General Exception (5,32,34,36,38,39)
+ FAULT(24)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x5500 Entry 25 (size 16 bundles) Disabled FP-Register (35)
+ rsm psr.dt | psr.dfh // ensure we can access fph
+ ;;
+ srlz.d
+ mov r31=pr
+ mov r19=25
+ br.cond.sptk.many dispatch_to_fault_handler
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x5600 Entry 26 (size 16 bundles) Nat Consumption (11,23,37,50)
+ FAULT(26)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x5700 Entry 27 (size 16 bundles) Speculation (40)
+ //
+ // A [f]chk.[as] instruction needs to take the branch to
+ // the recovery code but this part of the architecture is
+ // not implemented in hardware on some CPUs, such as Itanium.
+ // Thus, in general we need to emulate the behavior.
+ // IIM contains the relative target (not yet sign extended).
+ // So after sign extending it we simply add it to IIP.
+ // We also need to reset the EI field of the IPSR to zero,
+ // i.e., the slot to restart into.
+ //
+	// cr.iim contains zero_ext(imm21)
+ //
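+	// (Worked note on the two shifts below: shl by 43 moves the sign bit of
+	//  imm21 to bit 63; the plain shr, being the arithmetic form, by 39 then
+	//  yields sign_ext(imm21) << 4, i.e. the byte offset of the target
+	//  bundle, which is what gets added to cr.iip.)
+	//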
+ mov r18=cr.iim
+ ;;
+ mov r17=cr.iip
+ shl r18=r18,43 // put sign bit in position (43=64-21)
+ ;;
+
+ mov r16=cr.ipsr
+ shr r18=r18,39 // sign extend (39=43-4)
+ ;;
+
+ add r17=r17,r18 // now add the offset
+ ;;
+ mov cr.iip=r17
+ dep r16=0,r16,41,2 // clear EI
+ ;;
+
+ mov cr.ipsr=r16
+ ;;
+
+ rfi;; // and go back (must be last insn in group)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x5800 Entry 28 (size 16 bundles) Reserved
+ FAULT(28)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x5900 Entry 29 (size 16 bundles) Debug (16,28,56)
+ FAULT(29)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x5a00 Entry 30 (size 16 bundles) Unaligned Reference (57)
+ rsm psr.dt // avoid nested faults due to TLB misses...
+ mov r16=cr.ipsr
+ mov r31=pr // prepare to save predicates
+ ;;
+ srlz.d // ensure everyone knows psr.dt is off
+ mov r19=30 // error vector for fault_handler (when kernel)
+ extr.u r16=r16,32,2 // extract psr.cpl
+ ;;
+ cmp.eq p6,p7=r0,r16 // if kernel cpl then fault else emulate
+(p7) br.cond.sptk.many dispatch_unaligned_handler
+(p6) br.cond.sptk.many dispatch_to_fault_handler
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x5b00 Entry 31 (size 16 bundles) Unsupported Data Reference (57)
+ FAULT(31)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x5c00 Entry 32 (size 16 bundles) Floating-Point Fault (64)
+ FAULT(32)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x5d00 Entry 33 (size 16 bundles) Floating Point Trap (66)
+ FAULT(33)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x5e00 Entry 34 (size 16 bundles) Lower Privilege Transfer Trap (66)
+ FAULT(34)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x5f00 Entry 35 (size 16 bundles) Taken Branch Trap (68)
+ FAULT(35)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x6000 Entry 36 (size 16 bundles) Single Step Trap (69)
+ FAULT(36)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x6100 Entry 37 (size 16 bundles) Reserved
+ FAULT(37)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x6200 Entry 38 (size 16 bundles) Reserved
+ FAULT(38)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x6300 Entry 39 (size 16 bundles) Reserved
+ FAULT(39)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x6400 Entry 40 (size 16 bundles) Reserved
+ FAULT(40)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x6500 Entry 41 (size 16 bundles) Reserved
+ FAULT(41)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x6600 Entry 42 (size 16 bundles) Reserved
+ FAULT(42)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x6700 Entry 43 (size 16 bundles) Reserved
+ FAULT(43)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x6800 Entry 44 (size 16 bundles) Reserved
+ FAULT(44)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x6900 Entry 45 (size 16 bundles) IA-32 Exception (17,18,29,41,42,43,44,58,60,61,62,72,73,75,76,77)
+ FAULT(45)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x6a00 Entry 46 (size 16 bundles) IA-32 Intercept (30,31,59,70,71)
+ FAULT(46)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x6b00 Entry 47 (size 16 bundles) IA-32 Interrupt (74)
+#ifdef CONFIG_IA32_SUPPORT
+ rsm psr.dt
+ ;;
+ srlz.d
+ mov r31=pr
+ br.cond.sptk.many dispatch_to_ia32_handler
+#else
+ FAULT(47)
+#endif
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x6c00 Entry 48 (size 16 bundles) Reserved
+ FAULT(48)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x6d00 Entry 49 (size 16 bundles) Reserved
+ FAULT(49)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x6e00 Entry 50 (size 16 bundles) Reserved
+ FAULT(50)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x6f00 Entry 51 (size 16 bundles) Reserved
+ FAULT(51)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x7000 Entry 52 (size 16 bundles) Reserved
+ FAULT(52)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x7100 Entry 53 (size 16 bundles) Reserved
+ FAULT(53)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x7200 Entry 54 (size 16 bundles) Reserved
+ FAULT(54)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x7300 Entry 55 (size 16 bundles) Reserved
+ FAULT(55)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x7400 Entry 56 (size 16 bundles) Reserved
+ FAULT(56)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x7500 Entry 57 (size 16 bundles) Reserved
+ FAULT(57)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x7600 Entry 58 (size 16 bundles) Reserved
+ FAULT(58)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x7700 Entry 59 (size 16 bundles) Reserved
+ FAULT(59)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x7800 Entry 60 (size 16 bundles) Reserved
+ FAULT(60)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x7900 Entry 61 (size 16 bundles) Reserved
+ FAULT(61)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x7a00 Entry 62 (size 16 bundles) Reserved
+ FAULT(62)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x7b00 Entry 63 (size 16 bundles) Reserved
+ FAULT(63)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x7c00 Entry 64 (size 16 bundles) Reserved
+ FAULT(64)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x7d00 Entry 65 (size 16 bundles) Reserved
+ FAULT(65)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x7e00 Entry 66 (size 16 bundles) Reserved
+ FAULT(66)
+
+ .align 256
+/////////////////////////////////////////////////////////////////////////////////////////
+// 0x7f00 Entry 67 (size 16 bundles) Reserved
+ FAULT(67)
--- /dev/null
+#include <linux/config.h>
+#include <linux/kernel.h>
+
+#include <asm/page.h>
+#include <asm/machvec.h>
+
+struct ia64_machine_vector ia64_mv;
+
+void
+machvec_noop (void)
+{
+}
+
+/*
+ * Most platforms use this routine for mapping page frame addresses
+ * into a memory map index.
+ */
+unsigned long
+map_nr_dense (unsigned long addr)
+{
+ return MAP_NR_DENSE(addr);
+}
+
+static struct ia64_machine_vector *
+lookup_machvec (const char *name)
+{
+ extern struct ia64_machine_vector machvec_start[];
+ extern struct ia64_machine_vector machvec_end[];
+ struct ia64_machine_vector *mv;
+
+ for (mv = machvec_start; mv < machvec_end; ++mv)
+ if (strcmp (mv->name, name) == 0)
+ return mv;
+
+ return 0;
+}
+
+void
+machvec_init (const char *name)
+{
+ struct ia64_machine_vector *mv;
+
+ mv = lookup_machvec(name);
+ if (!mv) {
+ panic("generic kernel failed to find machine vector for platform %s!", name);
+ }
+ ia64_mv = *mv;
+ printk("booting generic kernel on platform %s\n", name);
+}
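+
+/*
+ * Illustrative usage (the platform name below is only an example): a generic
+ * kernel calls machvec_init("dig") early during setup, which looks the name
+ * up in the machvec_start[]..machvec_end[] table and copies the matching
+ * entry into ia64_mv.
+ */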
--- /dev/null
+/*
+ * File: mca.c
+ * Purpose: Generic MCA handling layer
+ *
+ * Copyright (C) 1999 Silicon Graphics, Inc.
+ * Copyright (C) Vijay Chander(vijay@engr.sgi.com)
+ */
+#include <linux/types.h>
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <asm/page.h>
+#include <asm/ptrace.h>
+#include <asm/system.h>
+#include <asm/sal.h>
+#include <asm/mca.h>
+#include <asm/spinlock.h>
+#include <asm/irq.h>
+#include <asm/machvec.h>
+
+
+ia64_mc_info_t ia64_mc_info;
+ia64_mca_sal_to_os_state_t ia64_sal_to_os_handoff_state;
+ia64_mca_os_to_sal_state_t ia64_os_to_sal_handoff_state;
+u64 ia64_mca_proc_state_dump[256];
+u64 ia64_mca_stack[1024];
+u64 ia64_mca_stackframe[32];
+u64 ia64_mca_bspstore[1024];
+
+static void ia64_mca_cmc_vector_setup(int enable,
+ int_vector_t cmc_vector);
+static void ia64_mca_wakeup_ipi_wait(void);
+static void ia64_mca_wakeup(int cpu);
+static void ia64_mca_wakeup_all(void);
+static void ia64_log_init(int,int);
+static void ia64_log_get(int,int, prfunc_t);
+static void ia64_log_clear(int,int,int, prfunc_t);
+
+/*
+ * ia64_mca_cmc_vector_setup
+ * Setup the correctable machine check vector register in the processor
+ * Inputs
+ * Enable (1 - enable cmc interrupt , 0 - disable)
+ *	CMC interrupt vector number (if enabled)
+ *
+ * Outputs
+ * None
+ */
+static void
+ia64_mca_cmc_vector_setup(int enable,
+ int_vector_t cmc_vector)
+{
+ cmcv_reg_t cmcv;
+
+ cmcv.cmcv_regval = 0;
+ cmcv.cmcv_mask = enable;
+ cmcv.cmcv_vector = cmc_vector;
+ ia64_set_cmcv(cmcv.cmcv_regval);
+}
+
+
+#if defined(MCA_TEST)
+
+sal_log_processor_info_t slpi_buf;
+
+void
+mca_test(void)
+{
+ slpi_buf.slpi_valid.slpi_psi = 1;
+ slpi_buf.slpi_valid.slpi_cache_check = 1;
+ slpi_buf.slpi_valid.slpi_tlb_check = 1;
+ slpi_buf.slpi_valid.slpi_bus_check = 1;
+ slpi_buf.slpi_valid.slpi_minstate = 1;
+ slpi_buf.slpi_valid.slpi_bank1_gr = 1;
+ slpi_buf.slpi_valid.slpi_br = 1;
+ slpi_buf.slpi_valid.slpi_cr = 1;
+ slpi_buf.slpi_valid.slpi_ar = 1;
+ slpi_buf.slpi_valid.slpi_rr = 1;
+ slpi_buf.slpi_valid.slpi_fr = 1;
+
+ ia64_os_mca_dispatch();
+}
+
+#endif /* #if defined(MCA_TEST) */
+
+/*
+ * mca_init
+ * Do all the mca specific initialization on a per-processor basis.
+ *
+ * 1. Register spinloop and wakeup request interrupt vectors
+ *
+ * 2. Register OS_MCA handler entry point
+ *
+ * 3. Register OS_INIT handler entry point
+ *
+ * 4. Initialize CMCV register to enable/disable CMC interrupt on the
+ * processor and hook a handler in the platform-specific mca_init.
+ *
+ * 5. Initialize MCA/CMC/INIT related log buffers maintained by the OS.
+ *
+ * Inputs
+ * None
+ * Outputs
+ * None
+ */
+void __init
+mca_init(void)
+{
+ int i;
+
+ MCA_DEBUG("mca_init : begin\n");
+ /* Clear the Rendez checkin flag for all cpus */
+ for(i = 0 ; i < IA64_MAXCPUS; i++)
+ ia64_mc_info.imi_rendez_checkin[i] = IA64_MCA_RENDEZ_CHECKIN_NOTDONE;
+
+ /* NOTE : The actual irqs for the rendez, wakeup and
+ * cmc interrupts are requested in the platform-specific
+ * mca initialization code.
+ */
+ /*
+ * Register the rendezvous spinloop and wakeup mechanism with SAL
+ */
+
+ /* Register the rendezvous interrupt vector with SAL */
+ if (ia64_sal_mc_set_params(SAL_MC_PARAM_RENDEZ_INT,
+ SAL_MC_PARAM_MECHANISM_INT,
+ IA64_MCA_RENDEZ_INT_VECTOR,
+ IA64_MCA_RENDEZ_TIMEOUT))
+ return;
+
+ /* Register the wakeup interrupt vector with SAL */
+ if (ia64_sal_mc_set_params(SAL_MC_PARAM_RENDEZ_WAKEUP,
+ SAL_MC_PARAM_MECHANISM_INT,
+ IA64_MCA_WAKEUP_INT_VECTOR,
+ 0))
+ return;
+
+ MCA_DEBUG("mca_init : registered mca rendezvous spinloop and wakeup mech.\n");
+ /*
+ * Setup the correctable machine check vector
+ */
+ ia64_mca_cmc_vector_setup(IA64_CMC_INT_ENABLE,
+ IA64_MCA_CMC_INT_VECTOR);
+
+ MCA_DEBUG("mca_init : correctable mca vector setup done\n");
+
+ ia64_mc_info.imi_mca_handler = __pa(ia64_os_mca_dispatch);
+ ia64_mc_info.imi_mca_handler_size =
+ __pa(ia64_os_mca_dispatch_end) - __pa(ia64_os_mca_dispatch);
+ /* Register the os mca handler with SAL */
+ if (ia64_sal_set_vectors(SAL_VECTOR_OS_MCA,
+ ia64_mc_info.imi_mca_handler,
+ __pa(ia64_get_gp()),
+ ia64_mc_info.imi_mca_handler_size,
+ 0,0,0))
+
+ return;
+
+ MCA_DEBUG("mca_init : registered os mca handler with SAL\n");
+
+ ia64_mc_info.imi_monarch_init_handler = __pa(ia64_monarch_init_handler);
+ ia64_mc_info.imi_monarch_init_handler_size = IA64_INIT_HANDLER_SIZE;
+ ia64_mc_info.imi_slave_init_handler = __pa(ia64_slave_init_handler);
+ ia64_mc_info.imi_slave_init_handler_size = IA64_INIT_HANDLER_SIZE;
+ /* Register the os init handler with SAL */
+ if (ia64_sal_set_vectors(SAL_VECTOR_OS_INIT,
+ ia64_mc_info.imi_monarch_init_handler,
+ __pa(ia64_get_gp()),
+ ia64_mc_info.imi_monarch_init_handler_size,
+ ia64_mc_info.imi_slave_init_handler,
+ __pa(ia64_get_gp()),
+ ia64_mc_info.imi_slave_init_handler_size))
+
+
+ return;
+
+ MCA_DEBUG("mca_init : registered os init handler with SAL\n");
+
+ /* Initialize the areas set aside by the OS to buffer the
+ * platform/processor error states for MCA/INIT/CMC
+ * handling.
+ */
+ ia64_log_init(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PROCESSOR);
+ ia64_log_init(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PLATFORM);
+ ia64_log_init(SAL_INFO_TYPE_INIT, SAL_SUB_INFO_TYPE_PROCESSOR);
+ ia64_log_init(SAL_INFO_TYPE_INIT, SAL_SUB_INFO_TYPE_PLATFORM);
+ ia64_log_init(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PROCESSOR);
+ ia64_log_init(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PLATFORM);
+
+ mca_init_platform();
+
+ MCA_DEBUG("mca_init : platform-specific mca handling setup done\n");
+
+#if defined(MCA_TEST)
+ mca_test();
+#endif /* #if defined(MCA_TEST) */
+
+	printk("MCA related initialization done\n");
+}
+
+/*
+ * ia64_mca_wakeup_ipi_wait
+ * Wait for the inter-cpu interrupt to be sent by the
+ * monarch processor once it is done with handling the
+ * MCA.
+ * Inputs
+ * None
+ * Outputs
+ * None
+ */
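+/*
+ * Note: the wakeup vector is decomposed below into an IRR register number
+ * (vector >> 6, since each of IRR0-IRR3 covers 64 vectors) and a bit
+ * position within that register (vector & 0x3f).  For example, a vector of
+ * 0x62 would poll bit 0x22 of IRR1.
+ */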
+void
+ia64_mca_wakeup_ipi_wait(void)
+{
+ int irr_num = (IA64_MCA_WAKEUP_INT_VECTOR >> 6);
+ int irr_bit = (IA64_MCA_WAKEUP_INT_VECTOR & 0x3f);
+ u64 irr = 0;
+
+ do {
+ switch(irr_num) {
+ case 0:
+ irr = ia64_get_irr0();
+ break;
+ case 1:
+ irr = ia64_get_irr1();
+ break;
+ case 2:
+ irr = ia64_get_irr2();
+ break;
+ case 3:
+ irr = ia64_get_irr3();
+ break;
+ }
+ } while (!(irr & (1 << irr_bit))) ;
+}
+
+/*
+ * ia64_mca_wakeup
+ *	Send an inter-cpu interrupt to wake up a particular cpu
+ *	and mark that cpu as being out of rendez.
+ * Inputs
+ * cpuid
+ * Outputs
+ * None
+ */
+void
+ia64_mca_wakeup(int cpu)
+{
+ ipi_send(cpu, IA64_MCA_WAKEUP_INT_VECTOR, IA64_IPI_DM_INT);
+ ia64_mc_info.imi_rendez_checkin[cpu] = IA64_MCA_RENDEZ_CHECKIN_NOTDONE;
+
+}
+/*
+ * ia64_mca_wakeup_all
+ *	Wake up all the cpus which have rendez'ed previously.
+ * Inputs
+ * None
+ * Outputs
+ * None
+ */
+void
+ia64_mca_wakeup_all(void)
+{
+ int cpu;
+
+	/* Wake up all cpus which have previously checked in at the rendezvous */
+ for(cpu = 0 ; cpu < IA64_MAXCPUS; cpu++)
+ if (ia64_mc_info.imi_rendez_checkin[cpu] == IA64_MCA_RENDEZ_CHECKIN_DONE)
+ ia64_mca_wakeup(cpu);
+
+}
+/*
+ * ia64_mca_rendez_interrupt_handler
+ *	This is the handler used to put slave processors into a spinloop
+ *	while the monarch processor does the MCA handling, and later
+ *	wakes each slave up once the monarch is done.
+ * Inputs
+ * None
+ * Outputs
+ * None
+ */
+void
+ia64_mca_rendez_int_handler(int rendez_irq, void *arg, struct pt_regs *ptregs)
+{
+ int flags;
+ /* Mask all interrupts */
+ save_and_cli(flags);
+
+ ia64_mc_info.imi_rendez_checkin[ia64_get_cpuid(0)] = IA64_MCA_RENDEZ_CHECKIN_DONE;
+ /* Register with the SAL monarch that the slave has
+ * reached SAL
+ */
+ ia64_sal_mc_rendez();
+
+ /* Wait for the wakeup IPI from the monarch
+ * This waiting is done by polling on the wakeup-interrupt
+ * vector bit in the processor's IRRs
+ */
+ ia64_mca_wakeup_ipi_wait();
+
+ /* Enable all interrupts */
+ restore_flags(flags);
+
+
+}
+
+
+/*
+ * ia64_mca_wakeup_int_handler
+ * The interrupt handler for processing the inter-cpu interrupt to the
+ * slave cpu which was spinning in the rendez loop.
+ * Since this spinning is done by turning off the interrupts and
+ * polling on the wakeup-interrupt bit in the IRR, there is
+ * nothing useful to be done in the handler.
+ * Inputs
+ * wakeup_irq (Wakeup-interrupt bit)
+ * arg (Interrupt handler specific argument)
+ * ptregs (Exception frame at the time of the interrupt)
+ * Outputs
+ *
+ */
+void
+ia64_mca_wakeup_int_handler(int wakeup_irq, void *arg, struct pt_regs *ptregs)
+{
+
+}
+
+/*
+ * ia64_return_to_sal_check
+ *	This is the function called before returning from the OS_MCA handler
+ *	to the OS_MCA dispatch code, which finally passes control back
+ *	to SAL.
+ *	The main purpose of this routine is to set up the OS_MCA to SAL
+ *	return state, which is used by the OS_MCA dispatch code
+ *	just before going back to SAL.
+ * Inputs
+ * None
+ * Outputs
+ * None
+ */
+
+void
+ia64_return_to_sal_check(void)
+{
+ /* Copy over some relevant stuff from the sal_to_os_mca_handoff
+ * so that it can be used at the time of os_mca_to_sal_handoff
+ */
+ ia64_os_to_sal_handoff_state.imots_sal_gp =
+ ia64_sal_to_os_handoff_state.imsto_sal_gp;
+
+ ia64_os_to_sal_handoff_state.imots_sal_check_ra =
+ ia64_sal_to_os_handoff_state.imsto_sal_check_ra;
+
+ /* For now ignore the MCA */
+ ia64_os_to_sal_handoff_state.imots_os_status = IA64_MCA_CORRECTED;
+}
+/*
+ * ia64_mca_ucmc_handler
+ *	This is the uncorrectable machine check handler, called from the
+ *	OS_MCA dispatch code which is in turn called from SAL_CHECK().
+ *	This is the place where the core of OS MCA handling is done.
+ *	Right now the logs are extracted and displayed in a well-defined
+ *	format.  This handler code is supposed to be run only on the
+ *	monarch processor.  Once the monarch is done with MCA handling,
+ *	further MCA logging is enabled by clearing the logs.
+ *	The monarch also has the duty of sending wakeup IPIs to pull the
+ *	slave processors out of the rendezvous spinloop.
+ * Inputs
+ * None
+ * Outputs
+ * None
+ */
+void
+ia64_mca_ucmc_handler(void)
+{
+
+ /* Get the MCA processor log */
+ ia64_log_get(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PROCESSOR, (prfunc_t)printk);
+ /* Get the MCA platform log */
+ ia64_log_get(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PLATFORM, (prfunc_t)printk);
+
+ ia64_log_print(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PROCESSOR, (prfunc_t)printk);
+
+ /*
+ * Do some error handling - Platform-specific mca handler is called at this point
+ */
+
+ mca_handler_platform() ;
+
+ /* Clear the SAL MCA logs */
+ ia64_log_clear(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PROCESSOR, 1, printk);
+ ia64_log_clear(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PLATFORM, 1, printk);
+
+ /* Wakeup all the processors which are spinning in the rendezvous
+ * loop.
+ */
+ ia64_mca_wakeup_all();
+ ia64_return_to_sal_check();
+}
+
+/*
+ * SAL to OS entry point for INIT on the monarch processor
+ * This has been defined for registration purposes with SAL
+ * as a part of mca_init.
+ */
+void
+ia64_monarch_init_handler()
+{
+}
+/*
+ * SAL to OS entry point for INIT on the slave processor
+ * This has been defined for registration purposes with SAL
+ * as a part of mca_init.
+ */
+
+void
+ia64_slave_init_handler()
+{
+}
+/*
+ * ia64_mca_cmc_int_handler
+ *	This is the correctable machine check interrupt handler.
+ * Right now the logs are extracted and displayed in a well-defined
+ * format.
+ * Inputs
+ * None
+ * Outputs
+ * None
+ */
+void
+ia64_mca_cmc_int_handler(int cmc_irq, void *arg, struct pt_regs *ptregs)
+{
+ /* Get the CMC processor log */
+ ia64_log_get(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PROCESSOR, (prfunc_t)printk);
+ /* Get the CMC platform log */
+ ia64_log_get(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PLATFORM, (prfunc_t)printk);
+
+
+ ia64_log_print(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PROCESSOR, (prfunc_t)printk);
+ cmci_handler_platform(cmc_irq, arg, ptregs);
+
+ /* Clear the CMC SAL logs now that they have been saved in the OS buffer */
+ ia64_sal_clear_state_info(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PROCESSOR);
+ ia64_sal_clear_state_info(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PLATFORM);
+}
+
+/*
+ * IA64_MCA log support
+ */
+#define IA64_MAX_LOGS 2 /* Double-buffering for nested MCAs */
+#define IA64_MAX_LOG_TYPES 3 /* MCA, CMC, INIT */
+#define IA64_MAX_LOG_SUBTYPES 2 /* Processor, Platform */
+
+typedef struct ia64_state_log_s {
+ spinlock_t isl_lock;
+ int isl_index;
+ sal_log_header_t isl_log[IA64_MAX_LOGS];
+
+} ia64_state_log_t;
+
+static ia64_state_log_t ia64_state_log[IA64_MAX_LOG_TYPES][IA64_MAX_LOG_SUBTYPES];
+
+#define IA64_LOG_LOCK_INIT(it, sit) spin_lock_init(&ia64_state_log[it][sit].isl_lock)
+#define IA64_LOG_LOCK(it, sit) spin_lock_irqsave(&ia64_state_log[it][sit].isl_lock, s)
+#define IA64_LOG_UNLOCK(it, sit) spin_unlock_irqrestore(&ia64_state_log[it][sit].isl_lock,\
+ s)
+#define IA64_LOG_NEXT_INDEX(it, sit) ia64_state_log[it][sit].isl_index
+#define IA64_LOG_CURR_INDEX(it, sit) 1 - ia64_state_log[it][sit].isl_index
+#define IA64_LOG_INDEX_INC(it, sit) \
+ ia64_state_log[it][sit].isl_index = 1 - ia64_state_log[it][sit].isl_index
+#define IA64_LOG_INDEX_DEC(it, sit) \
+ ia64_state_log[it][sit].isl_index = 1 - ia64_state_log[it][sit].isl_index
+#define IA64_LOG_NEXT_BUFFER(it, sit) (void *)(&(ia64_state_log[it][sit].isl_log[IA64_LOG_NEXT_INDEX(it,sit)]))
+#define IA64_LOG_CURR_BUFFER(it, sit) (void *)(&(ia64_state_log[it][sit].isl_log[IA64_LOG_CURR_INDEX(it,sit)]))
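+
+/*
+ * Sketch of how the double-buffering indices are used (this mirrors
+ * ia64_log_get()/ia64_log_clear() below): a new SAL record is read into the
+ * NEXT buffer and the index is toggled, making that buffer the CURR one for
+ * printing; clearing toggles the index back again:
+ *
+ *	buf = IA64_LOG_NEXT_BUFFER(it, sit);	// read the SAL record here
+ *	IA64_LOG_INDEX_INC(it, sit);		// it is now the CURR buffer
+ *	...
+ *	IA64_LOG_INDEX_DEC(it, sit);		// release it again
+ */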
+
+/*
+ * ia64_log_init
+ * Reset the OS ia64 log buffer
+ * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC})
+ * sub_info_type (SAL_SUB_INFO_TYPE_{PROCESSOR,PLATFORM})
+ * Outputs : None
+ */
+void
+ia64_log_init(int sal_info_type, int sal_sub_info_type)
+{
+ IA64_LOG_LOCK_INIT(sal_info_type, sal_sub_info_type);
+ IA64_LOG_NEXT_INDEX(sal_info_type, sal_sub_info_type) = 0;
+ memset(IA64_LOG_NEXT_BUFFER(sal_info_type, sal_sub_info_type), 0,
+ sizeof(sal_log_header_t) * IA64_MAX_LOGS);
+}
+
+/*
+ * ia64_log_get
+ * Get the current MCA log from SAL and copy it into the OS log buffer.
+ * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC})
+ * sub_info_type (SAL_SUB_INFO_TYPE_{PROCESSOR,PLATFORM})
+ * Outputs : None
+ *
+ */
+void
+ia64_log_get(int sal_info_type, int sal_sub_info_type, prfunc_t prfunc)
+{
+ sal_log_header_t *log_buffer;
+ int s;
+
+ IA64_LOG_LOCK(sal_info_type, sal_sub_info_type);
+
+
+ /* Get the process state information */
+ log_buffer = IA64_LOG_NEXT_BUFFER(sal_info_type, sal_sub_info_type);
+
+	if (ia64_sal_get_state_info(sal_info_type, sal_sub_info_type, (u64 *)log_buffer))
+		prfunc("ia64_log_get : Getting processor log failed\n");
+
+ IA64_LOG_INDEX_INC(sal_info_type, sal_sub_info_type);
+
+ IA64_LOG_UNLOCK(sal_info_type, sal_sub_info_type);
+
+}
+
+/*
+ * ia64_log_clear
+ *	Clear the current MCA log from SAL and, depending on the clear_os_buffer
+ *	flag, clear the OS log buffer as well.
+ * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC})
+ * sub_info_type (SAL_SUB_INFO_TYPE_{PROCESSOR,PLATFORM})
+ * clear_os_buffer
+ * prfunc (print function)
+ * Outputs : None
+ *
+ */
+void
+ia64_log_clear(int sal_info_type, int sal_sub_info_type, int clear_os_buffer, prfunc_t prfunc)
+{
+ if (ia64_sal_clear_state_info(sal_info_type, sal_sub_info_type))
+		prfunc("ia64_log_clear : Clearing processor log failed\n");
+
+ if (clear_os_buffer) {
+ sal_log_header_t *log_buffer;
+ int s;
+
+ IA64_LOG_LOCK(sal_info_type, sal_sub_info_type);
+
+ /* Get the process state information */
+ log_buffer = IA64_LOG_CURR_BUFFER(sal_info_type, sal_sub_info_type);
+
+ memset(log_buffer, 0, sizeof(sal_log_header_t));
+
+ IA64_LOG_INDEX_DEC(sal_info_type, sal_sub_info_type);
+
+ IA64_LOG_UNLOCK(sal_info_type, sal_sub_info_type);
+ }
+
+}
+
+/*
+ * ia64_log_processor_regs_print
+ * Print the contents of the saved processor register(s) in the format
+ * <reg_prefix>[<index>] <value>
+ *
+ * Inputs : regs (Register save buffer)
+ * reg_num (# of registers)
+ * reg_class (application/banked/control/bank1_general)
+ * reg_prefix (ar/br/cr/b1_gr)
+ * Outputs : None
+ *
+ */
+void
+ia64_log_processor_regs_print(u64 *regs,
+ int reg_num,
+ char *reg_class,
+ char *reg_prefix,
+ prfunc_t prfunc)
+{
+ int i;
+
+ prfunc("+%s Registers\n", reg_class);
+ for (i = 0; i < reg_num; i++)
+ prfunc("+ %s[%d] 0x%lx\n", reg_prefix, i, regs[i]);
+}
+
+static char *pal_mesi_state[] = {
+ "Invalid",
+ "Shared",
+ "Exclusive",
+ "Modified",
+ "Reserved1",
+ "Reserved2",
+ "Reserved3",
+ "Reserved4"
+};
+
+static char *pal_cache_op[] = {
+ "Unknown",
+ "Move in",
+ "Cast out",
+ "Coherency check",
+ "Internal",
+ "Instruction fetch",
+ "Implicit Writeback",
+ "Reserved"
+};
+
+/*
+ * ia64_log_cache_check_info_print
+ * Display the machine check information related to cache error(s).
+ * Inputs : i (Multiple errors are logged, i - index of logged error)
+ * info (Machine check info logged by the PAL and later
+ * captured by the SAL)
+ * target_addr (Address which caused the cache error)
+ * Outputs : None
+ */
+void
+ia64_log_cache_check_info_print(int i,
+ pal_cache_check_info_t info,
+ u64 target_addr,
+ prfunc_t prfunc)
+{
+ prfunc("+ Cache check info[%d]\n+", i);
+ prfunc(" Level: L%d",info.level);
+ if (info.mv)
+ prfunc(" ,Mesi: %s",pal_mesi_state[info.mesi]);
+ prfunc(" ,Index: %d,", info.index);
+ if (info.ic)
+ prfunc(" ,Cache: Instruction");
+ if (info.dc)
+ prfunc(" ,Cache: Data");
+ if (info.tl)
+ prfunc(" ,Line: Tag");
+ if (info.dl)
+ prfunc(" ,Line: Data");
+ prfunc(" ,Operation: %s,", pal_cache_op[info.op]);
+ if (info.wv)
+ prfunc(" ,Way: %d,", info.way);
+ if (info.tv)
+ prfunc(" ,Target Addr: 0x%lx", target_addr);
+ if (info.mc)
+ prfunc(" ,MC: Corrected");
+ prfunc("\n");
+}
+
+/*
+ * ia64_log_tlb_check_info_print
+ * Display the machine check information related to tlb error(s).
+ * Inputs : i (Multiple errors are logged, i - index of logged error)
+ * info (Machine check info logged by the PAL and later
+ * captured by the SAL)
+ * Outputs : None
+ */
+
+void
+ia64_log_tlb_check_info_print(int i,
+ pal_tlb_check_info_t info,
+ prfunc_t prfunc)
+{
+ prfunc("+ TLB Check Info [%d]\n+", i);
+ if (info.itc)
+ prfunc(" Failure: Instruction Translation Cache");
+ if (info.dtc)
+ prfunc(" Failure: Data Translation Cache");
+ if (info.itr) {
+ prfunc(" Failure: Instruction Translation Register");
+ prfunc(" ,Slot: %d", info.tr_slot);
+ }
+ if (info.dtr) {
+ prfunc(" Failure: Data Translation Register");
+ prfunc(" ,Slot: %d", info.tr_slot);
+ }
+ if (info.mc)
+ prfunc(" ,MC: Corrected");
+ prfunc("\n");
+}
+
+/*
+ * ia64_log_bus_check_info_print
+ * Display the machine check information related to bus error(s).
+ * Inputs : i (Multiple errors are logged, i - index of logged error)
+ * info (Machine check info logged by the PAL and later
+ * captured by the SAL)
+ * req_addr (Address of the requestor of the transaction)
+ * resp_addr (Address of the responder of the transaction)
+ * target_addr (Address where the data was to be delivered to or
+ * obtained from)
+ * Outputs : None
+ */
+void
+ia64_log_bus_check_info_print(int i,
+ pal_bus_check_info_t info,
+ u64 req_addr,
+ u64 resp_addr,
+ u64 targ_addr,
+ prfunc_t prfunc)
+{
+ prfunc("+ BUS Check Info [%d]\n+", i);
+ prfunc(" Status Info: %d", info.bsi);
+ prfunc(" ,Severity: %d", info.sev);
+ prfunc(" ,Transaction Type: %d", info.type);
+ prfunc(" ,Transaction Size: %d", info.size);
+ if (info.cc)
+ prfunc(" ,Cache-cache-transfer");
+ if (info.ib)
+ prfunc(" ,Error: Internal");
+ if (info.eb)
+ prfunc(" ,Error: External");
+ if (info.mc)
+ prfunc(" ,MC: Corrected");
+ if (info.tv)
+ prfunc(" ,Target Address: 0x%lx", targ_addr);
+ if (info.rq)
+ prfunc(" ,Requestor Address: 0x%lx", req_addr);
+ if (info.tv)
+ prfunc(" ,Responder Address: 0x%lx", resp_addr);
+ prfunc("\n");
+}
+
+/*
+ * ia64_log_processor_info_print
+ * Display the processor-specific information logged by PAL as a part
+ * of MCA or INIT or CMC.
+ * Inputs : lh (Pointer of the sal log header which specifies the format
+ * of SAL state info as specified by the SAL spec).
+ * Outputs : None
+ */
+void
+ia64_log_processor_info_print(sal_log_header_t *lh, prfunc_t prfunc)
+{
+ sal_log_processor_info_t *slpi;
+ int i;
+
+ if (!lh)
+ return;
+
+ if (lh->slh_log_type != SAL_SUB_INFO_TYPE_PROCESSOR)
+ return;
+
+#if defined(MCA_TEST)
+ slpi = &slpi_buf;
+#else
+ slpi = (sal_log_processor_info_t *)lh->slh_log_dev_spec_info;
+#endif /* #if defined(MCA_TEST) */
+
+ if (!slpi) {
+ prfunc("No Processor Error Log found\n");
+ return;
+ }
+
+ /* Print branch register contents if valid */
+ if (slpi->slpi_valid.slpi_br)
+ ia64_log_processor_regs_print(slpi->slpi_br, 8, "Branch", "br", prfunc);
+
+ /* Print control register contents if valid */
+ if (slpi->slpi_valid.slpi_cr)
+ ia64_log_processor_regs_print(slpi->slpi_cr, 128, "Control", "cr", prfunc);
+
+ /* Print application register contents if valid */
+ if (slpi->slpi_valid.slpi_ar)
+		ia64_log_processor_regs_print(slpi->slpi_ar, 128, "Application", "ar", prfunc);
+
+ /* Print region register contents if valid */
+ if (slpi->slpi_valid.slpi_rr)
+ ia64_log_processor_regs_print(slpi->slpi_rr, 8, "Region", "rr", prfunc);
+
+ /* Print floating-point register contents if valid */
+ if (slpi->slpi_valid.slpi_fr)
+ ia64_log_processor_regs_print(slpi->slpi_fr, 128, "Floating-point", "fr",
+ prfunc);
+
+ /* Print bank1-gr NAT register contents if valid */
+ ia64_log_processor_regs_print(&slpi->slpi_bank1_nat_bits, 1, "NAT", "nat", prfunc);
+
+ /* Print bank 1 register contents if valid */
+ if (slpi->slpi_valid.slpi_bank1_gr)
+ ia64_log_processor_regs_print(slpi->slpi_bank1_gr, 16, "Bank1-General", "gr",
+ prfunc);
+
+ /* Print the cache check information if any*/
+ for (i = 0 ; i < MAX_CACHE_ERRORS; i++)
+ ia64_log_cache_check_info_print(i,
+ slpi->slpi_cache_check_info[i].slpi_cache_check,
+ slpi->slpi_cache_check_info[i].slpi_target_address,
+ prfunc);
+ /* Print the tlb check information if any*/
+ for (i = 0 ; i < MAX_TLB_ERRORS; i++)
+ ia64_log_tlb_check_info_print(i,slpi->slpi_tlb_check_info[i], prfunc);
+
+ /* Print the bus check information if any*/
+ for (i = 0 ; i < MAX_BUS_ERRORS; i++)
+ ia64_log_bus_check_info_print(i,
+ slpi->slpi_bus_check_info[i].slpi_bus_check,
+ slpi->slpi_bus_check_info[i].slpi_requestor_addr,
+ slpi->slpi_bus_check_info[i].slpi_responder_addr,
+ slpi->slpi_bus_check_info[i].slpi_target_addr,
+ prfunc);
+
+}
+
+/*
+ * ia64_log_print
+ * Display the contents of the OS error log information
+ * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC})
+ * sub_info_type (SAL_SUB_INFO_TYPE_{PROCESSOR,PLATFORM})
+ * Outputs : None
+ */
+void
+ia64_log_print(int sal_info_type, int sal_sub_info_type, prfunc_t prfunc)
+{
+ char *info_type, *sub_info_type;
+
+ switch(sal_info_type) {
+ case SAL_INFO_TYPE_MCA:
+ info_type = "MCA";
+ break;
+ case SAL_INFO_TYPE_INIT:
+ info_type = "INIT";
+ break;
+ case SAL_INFO_TYPE_CMC:
+ info_type = "CMC";
+ break;
+ default:
+ info_type = "UNKNOWN";
+ break;
+ }
+
+ switch(sal_sub_info_type) {
+ case SAL_SUB_INFO_TYPE_PROCESSOR:
+ sub_info_type = "PROCESSOR";
+ break;
+ case SAL_SUB_INFO_TYPE_PLATFORM:
+ sub_info_type = "PLATFORM";
+ break;
+ default:
+ sub_info_type = "UNKNOWN";
+ break;
+ }
+
+ prfunc("+BEGIN HARDWARE ERROR STATE [%s %s]\n", info_type, sub_info_type);
+ if (sal_sub_info_type == SAL_SUB_INFO_TYPE_PROCESSOR)
+ ia64_log_processor_info_print(
+ IA64_LOG_CURR_BUFFER(sal_info_type, sal_sub_info_type),
+ prfunc);
+ else
+ log_print_platform(IA64_LOG_CURR_BUFFER(sal_info_type, sal_sub_info_type),prfunc);
+ prfunc("+END HARDWARE ERROR STATE [%s %s]\n", info_type, sub_info_type);
+}
--- /dev/null
+#include <asm/processor.h>
+#include <asm/mcaasm.h>
+#include <asm/page.h>
+#include <asm/mca.h>
+
+ .psr abi64
+ .psr lsb
+ .lsb
+
+/*
+ * SAL_TO_OS_MCA_HANDOFF_STATE
+ * 1. GR1 = OS GP
+ * 2. GR8 = PAL_PROC physical address
+ * 3. GR9 = SAL_PROC physical address
+ * 4. GR10 = SAL GP (physical)
+ * 5. GR11 = Rendez state
+ * 6. GR12 = Return address to location within SAL_CHECK
+ */
+#define SAL_TO_OS_MCA_HANDOFF_STATE_SAVE(_tmp) \
+ movl _tmp=ia64_sal_to_os_handoff_state;; \
+ st8 [_tmp]=r1,0x08;; \
+ st8 [_tmp]=r8,0x08;; \
+ st8 [_tmp]=r9,0x08;; \
+ st8 [_tmp]=r10,0x08;; \
+ st8 [_tmp]=r11,0x08;; \
+ st8 [_tmp]=r12,0x08;;
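+
+/*
+ * For reference, a rough sketch of the corresponding C structure (only
+ * imsto_sal_gp and imsto_sal_check_ra are known field names; the others are
+ * illustrative -- the real layout lives in include/asm/mca.h and must match
+ * the store order above):
+ *
+ *	typedef struct {
+ *		u64 imsto_os_gp;		// GR1
+ *		u64 imsto_pal_proc;		// GR8
+ *		u64 imsto_sal_proc;		// GR9
+ *		u64 imsto_sal_gp;		// GR10
+ *		u64 imsto_rendez_state;		// GR11
+ *		u64 imsto_sal_check_ra;		// GR12
+ *	} ia64_mca_sal_to_os_state_t;
+ */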
+
+/*
+ * OS_MCA_TO_SAL_HANDOFF_STATE
+ * 1. GR8 = OS_MCA status
+ * 2. GR9 = SAL GP (physical)
+ * 3. GR22 = New min state save area pointer
+ */
+#define OS_MCA_TO_SAL_HANDOFF_STATE_RESTORE(_tmp) \
+ movl _tmp=ia64_os_to_sal_handoff_state;; \
+ DATA_VA_TO_PA(_tmp);; \
+ ld8 r8=[_tmp],0x08;; \
+ ld8 r9=[_tmp],0x08;; \
+ ld8 r22=[_tmp],0x08;;
+
+/*
+ * BRANCH
+ * Jump to the instruction referenced by
+ * "to_label".
+ * Branch is taken only if the predicate
+ * register "p" is true.
+ * "ip" is the address of the instruction
+ * located at "from_label".
+ * "temp" is a scratch register like r2
+ * "adjust" needed for HP compiler.
+ * A screwup somewhere with constant arithmetic.
+ */
+#define BRANCH(to_label, temp, p, adjust) \
+100: (p) mov temp=ip; \
+ ;; \
+ (p) adds temp=to_label-100b,temp;\
+ (p) adds temp=adjust,temp; \
+ (p) mov b1=temp ; \
+ (p) br b1
+
+ .global ia64_os_mca_dispatch
+ .global ia64_os_mca_dispatch_end
+ .global ia64_sal_to_os_handoff_state
+ .global ia64_os_to_sal_handoff_state
+ .global ia64_os_mca_ucmc_handler
+ .global ia64_mca_proc_state_dump
+ .global ia64_mca_proc_state_restore
+ .global ia64_mca_stack
+ .global ia64_mca_stackframe
+ .global ia64_mca_bspstore
+
+ .text
+ .align 16
+
+ia64_os_mca_dispatch:
+
+#if defined(MCA_TEST)
+ // Pretend that we are in interrupt context
+ mov r2=psr
+ dep r2=0, r2, PSR_IC, 2;
+ mov psr.l = r2
+#endif /* #if defined(MCA_TEST) */
+
+ // Save the SAL to OS MCA handoff state as defined
+ // by SAL SPEC 2.5
+ // NOTE : The order in which the state gets saved
+ // is dependent on the way the C-structure
+ // for ia64_mca_sal_to_os_state_t has been
+ // defined in include/asm/mca.h
+ SAL_TO_OS_MCA_HANDOFF_STATE_SAVE(r2)
+
+ // LOG PROCESSOR STATE INFO FROM HERE ON..
+ ;;
+begin_os_mca_dump:
+ BRANCH(ia64_os_mca_proc_state_dump, r2, p0, 0x0)
+ ;;
+ia64_os_mca_done_dump:
+
+ // Setup new stack frame for OS_MCA handling
+ movl r2=ia64_mca_bspstore // local bspstore area location in r2
+ movl r3=ia64_mca_stackframe // save stack frame to memory in r3
+ rse_switch_context(r6,r3,r2);; // RSC management in this new context
+ movl r12=ia64_mca_stack;;
+
+ // Enter virtual mode from physical mode
+ VIRTUAL_MODE_ENTER(r2, r3, ia64_os_mca_virtual_begin, r4)
+ia64_os_mca_virtual_begin:
+
+ // call our handler
+ movl r2=ia64_mca_ucmc_handler;;
+ mov b6=r2;;
+ br.call.sptk.few b0=b6
+ ;;
+
+ // Revert back to physical mode before going back to SAL
+ PHYSICAL_MODE_ENTER(r2, r3, ia64_os_mca_virtual_end, r4)
+ia64_os_mca_virtual_end:
+
+#if defined(MCA_TEST)
+ // Pretend that we are in interrupt context
+ mov r2=psr
+ dep r2=0, r2, PSR_IC, 2;
+ mov psr.l = r2
+#endif /* #if defined(MCA_TEST) */
+
+ // restore the original stack frame here
+ movl r2=ia64_mca_stackframe // restore stack frame from memory at r2
+ ;;
+ DATA_VA_TO_PA(r2)
+ movl r4=IA64_PSR_MC
+ ;;
+ rse_return_context(r4,r3,r2) // switch from interrupt context for RSE
+
+ // let us restore all the registers from our PSI structure
+ mov r8=gp
+ ;;
+begin_os_mca_restore:
+ BRANCH(ia64_os_mca_proc_state_restore, r2, p0, 0x0)
+ ;;
+
+ia64_os_mca_done_restore:
+ ;;
+#ifdef SOFTSDV
+ VIRTUAL_MODE_ENTER(r2,r3, vmode_enter, r4)
+vmode_enter:
+ br.ret.sptk.few b0
+#else
+ // branch back to SALE_CHECK
+ OS_MCA_TO_SAL_HANDOFF_STATE_RESTORE(r2)
+ ld8 r3=[r2];;
+ mov b0=r3 // SAL_CHECK return address
+ br b0
+ ;;
+#endif /* #ifdef SOFTSDV */
+ia64_os_mca_dispatch_end:
+//EndMain//////////////////////////////////////////////////////////////////////
+
+
+//++
+// Name:
+// ia64_os_mca_proc_state_dump()
+//
+// Stub Description:
+//
+// This stub dumps the processor state during MCHK to a data area
+//
+//--
+
+ia64_os_mca_proc_state_dump:
+// Get and save GR0-31 from Proc. Min. State Save Area to SAL PSI
+ movl r2=ia64_mca_proc_state_dump;; // Os state dump area
+
+// save ar.NaT
+ mov r5=ar.unat // ar.unat
+
+// save banked GRs 16-31 along with NaT bits
+ bsw.1;;
+ st8.spill [r2]=r16,8;;
+ st8.spill [r2]=r17,8;;
+ st8.spill [r2]=r18,8;;
+ st8.spill [r2]=r19,8;;
+ st8.spill [r2]=r20,8;;
+ st8.spill [r2]=r21,8;;
+ st8.spill [r2]=r22,8;;
+ st8.spill [r2]=r23,8;;
+ st8.spill [r2]=r24,8;;
+ st8.spill [r2]=r25,8;;
+ st8.spill [r2]=r26,8;;
+ st8.spill [r2]=r27,8;;
+ st8.spill [r2]=r28,8;;
+ st8.spill [r2]=r29,8;;
+ st8.spill [r2]=r30,8;;
+ st8.spill [r2]=r31,8;;
+
+ mov r4=ar.unat;;
+ st8 [r2]=r4,8 // save User NaT bits for r16-r31
+ mov ar.unat=r5 // restore original unat
+ bsw.0;;
+
+//save BRs
+ add r4=8,r2 // duplicate r2 in r4
+	add r6=2*8,r2		// duplicate r2 in r6
+
+ mov r3=b0
+ mov r5=b1
+ mov r7=b2;;
+ st8 [r2]=r3,3*8
+ st8 [r4]=r5,3*8
+ st8 [r6]=r7,3*8;;
+
+ mov r3=b3
+ mov r5=b4
+ mov r7=b5;;
+ st8 [r2]=r3,3*8
+ st8 [r4]=r5,3*8
+ st8 [r6]=r7,3*8;;
+
+ mov r3=b6
+ mov r5=b7;;
+ st8 [r2]=r3,2*8
+ st8 [r4]=r5,2*8;;
+
+cSaveCRs:
+// save CRs
+ add r4=8,r2 // duplicate r2 in r4
+	add r6=2*8,r2		// duplicate r2 in r6
+
+ mov r3=cr0 // cr.dcr
+ mov r5=cr1 // cr.itm
+ mov r7=cr2;; // cr.iva
+
+ st8 [r2]=r3,8*8
+ st8 [r4]=r5,3*8
+	st8 [r6]=r7,3*8;;	// 48 byte increments
+
+ mov r3=cr8;; // cr.pta
+	st8 [r2]=r3,8*8;;	// 64 byte increments
+
+// if PSR.ic=0, reading interruption registers causes an illegal operation fault
+ mov r3=psr;;
+ tbit.nz.unc p2,p0=r3,PSR_IC;; // PSI Valid Log bit pos. test
+(p2) st8 [r2]=r0,9*8+160 // increment by 168 byte inc.
+begin_skip_intr_regs:
+ BRANCH(SkipIntrRegs, r9, p2, 0x0)
+ ;;
+ add r4=8,r2 // duplicate r2 in r4
+ add r6=2*8,r2 // duplicate r2 in r6
+
+ mov r3=cr16 // cr.ipsr
+ mov r5=cr17 // cr.isr
+ mov r7=r0;; // cr.ida => cr18
+ st8 [r2]=r3,3*8
+ st8 [r4]=r5,3*8
+ st8 [r6]=r7,3*8;;
+
+ mov r3=cr19 // cr.iip
+ mov r5=cr20 // cr.idtr
+ mov r7=cr21;; // cr.iitr
+ st8 [r2]=r3,3*8
+ st8 [r4]=r5,3*8
+ st8 [r6]=r7,3*8;;
+
+ mov r3=cr22 // cr.iipa
+ mov r5=cr23 // cr.ifs
+ mov r7=cr24;; // cr.iim
+ st8 [r2]=r3,3*8
+ st8 [r4]=r5,3*8
+ st8 [r6]=r7,3*8;;
+
+ mov r3=cr25;; // cr.iha
+	st8 [r2]=r3,160;;	// 160 byte increment
+
+SkipIntrRegs:
+	st8 [r2]=r0,168		// another 168 byte increment
+
+ mov r3=cr66;; // cr.lid
+	st8 [r2]=r3,40		// 40 byte increment
+
+ mov r3=cr71;; // cr.ivr
+ st8 [r2]=r3,8
+
+ mov r3=cr72;; // cr.tpr
+ st8 [r2]=r3,24 // 24 byte increment
+
+ mov r3=r0;; // cr.eoi => cr75
+ st8 [r2]=r3,168 // 168 byte inc.
+
+ mov r3=r0;; // cr.irr0 => cr96
+ st8 [r2]=r3,16 // 16 byte inc.
+
+ mov r3=r0;; // cr.irr1 => cr98
+ st8 [r2]=r3,16 // 16 byte inc.
+
+ mov r3=r0;; // cr.irr2 => cr100
+ st8 [r2]=r3,16 // 16 byte inc
+
+	mov r3=r0;;		// cr.irr3 => cr102
+ st8 [r2]=r3,16 // 16b inc.
+
+ mov r3=r0;; // cr.itv => cr114
+ st8 [r2]=r3,16 // 16 byte inc.
+
+ mov r3=r0;; // cr.pmv => cr116
+ st8 [r2]=r3,8
+
+ mov r3=r0;; // cr.lrr0 => cr117
+ st8 [r2]=r3,8
+
+ mov r3=r0;; // cr.lrr1 => cr118
+ st8 [r2]=r3,8
+
+ mov r3=r0;; // cr.cmcv => cr119
+ st8 [r2]=r3,8*10;;
+
+cSaveARs:
+// save ARs
+ add r4=8,r2 // duplicate r2 in r4
+ add r6=2*8,r2 // duplicate r2 in r6
+
+	mov r3=ar0		// ar.kr0
+ mov r5=ar1 // ar.kr1
+ mov r7=ar2;; // ar.kr2
+ st8 [r2]=r3,3*8
+ st8 [r4]=r5,3*8
+ st8 [r6]=r7,3*8;;
+
+ mov r3=ar3 // ar.kr3
+ mov r5=ar4 // ar.kr4
+ mov r7=ar5;; // ar.kr5
+ st8 [r2]=r3,3*8
+ st8 [r4]=r5,3*8
+ st8 [r6]=r7,3*8;;
+
+ mov r3=ar6 // ar.kr6
+ mov r5=ar7 // ar.kr7
+ mov r7=r0;; // ar.kr8
+ st8 [r2]=r3,10*8
+ st8 [r4]=r5,10*8
+	st8 [r6]=r7,10*8;;	// increment by 72 bytes
+
+ mov r3=ar16 // ar.rsc
+ mov ar16=r0 // put RSE in enforced lazy mode
+ mov r5=ar17 // ar.bsp
+ mov r7=ar18;; // ar.bspstore
+ st8 [r2]=r3,3*8
+ st8 [r4]=r5,3*8
+ st8 [r6]=r7,3*8;;
+
+ mov r3=ar19;; // ar.rnat
+ st8 [r2]=r3,8*13 // increment by 13x8 bytes
+
+ mov r3=ar32;; // ar.ccv
+ st8 [r2]=r3,8*4
+
+ mov r3=ar36;; // ar.unat
+ st8 [r2]=r3,8*4
+
+ mov r3=ar40;; // ar.fpsr
+ st8 [r2]=r3,8*4
+
+ mov r3=ar44;; // ar.itc
+ st8 [r2]=r3,160 // 160
+
+ mov r3=ar64;; // ar.pfs
+ st8 [r2]=r3,8
+
+ mov r3=ar65;; // ar.lc
+ st8 [r2]=r3,8
+
+ mov r3=ar66;; // ar.ec
+ st8 [r2]=r3
+ add r2=8*62,r2 //padding
+
+// save RRs
+ mov ar.lc=0x08-1
+ movl r4=0x00;;
+
+cStRR:
+ mov r3=rr[r4];;
+ st8 [r2]=r3,8
+ add r4=1,r4
+ br.cloop.sptk.few cStRR
+ ;;
+end_os_mca_dump:
+ BRANCH(ia64_os_mca_done_dump, r2, p0, -0x10)
+ ;;
+
+//EndStub//////////////////////////////////////////////////////////////////////
+
+
+//++
+// Name:
+// ia64_os_mca_proc_state_restore()
+//
+// Stub Description:
+//
+// This is a stub to restore the saved processor state during MCHK
+//
+//--
+
+ia64_os_mca_proc_state_restore:
+
+// Restore bank1 GR16-31
+ movl r2=ia64_mca_proc_state_dump // Convert virtual address
+ ;; // of OS state dump area
+ DATA_VA_TO_PA(r2) // to physical address
+ ;;
+restore_GRs: // restore bank-1 GRs 16-31
+ bsw.1;;
+ add r3=16*8,r2;; // to get to NaT of GR 16-31
+ ld8 r3=[r3];;
+ mov ar.unat=r3;; // first restore NaT
+
+ ld8.fill r16=[r2],8;;
+ ld8.fill r17=[r2],8;;
+ ld8.fill r18=[r2],8;;
+ ld8.fill r19=[r2],8;;
+ ld8.fill r20=[r2],8;;
+ ld8.fill r21=[r2],8;;
+ ld8.fill r22=[r2],8;;
+ ld8.fill r23=[r2],8;;
+ ld8.fill r24=[r2],8;;
+ ld8.fill r25=[r2],8;;
+ ld8.fill r26=[r2],8;;
+ ld8.fill r27=[r2],8;;
+ ld8.fill r28=[r2],8;;
+ ld8.fill r29=[r2],8;;
+ ld8.fill r30=[r2],8;;
+ ld8.fill r31=[r2],8;;
+
+ ld8 r3=[r2],8;; // increment to skip NaT
+ bsw.0;;
+
+restore_BRs:
+ add r4=8,r2 // duplicate r2 in r4
+	add r6=2*8,r2;;		// duplicate r2 in r6
+
+ ld8 r3=[r2],3*8
+ ld8 r5=[r4],3*8
+ ld8 r7=[r6],3*8;;
+ mov b0=r3
+ mov b1=r5
+ mov b2=r7;;
+
+ ld8 r3=[r2],3*8
+ ld8 r5=[r4],3*8
+ ld8 r7=[r6],3*8;;
+ mov b3=r3
+ mov b4=r5
+ mov b5=r7;;
+
+ ld8 r3=[r2],2*8
+ ld8 r5=[r4],2*8;;
+ mov b6=r3
+ mov b7=r5;;
+
+restore_CRs:
+ add r4=8,r2 // duplicate r2 in r4
+	add r6=2*8,r2;;		// duplicate r2 in r6
+
+ ld8 r3=[r2],8*8
+ ld8 r5=[r4],3*8
+ ld8 r7=[r6],3*8;; // 48 byte increments
+ mov cr0=r3 // cr.dcr
+ mov cr1=r5 // cr.itm
+ mov cr2=r7;; // cr.iva
+
+ ld8 r3=[r2],8*8;; // 64 byte increments
+// mov cr8=r3 // cr.pta
+
+
+// if PSR.ic=1, reading interruption registers causes an illegal operation fault
+ mov r3=psr;;
+ tbit.nz.unc p2,p0=r3,PSR_IC;; // PSI Valid Log bit pos. test
+(p2) st8 [r2]=r0,9*8+160 // increment by 160 byte inc.
+
+begin_rskip_intr_regs:
+ BRANCH(rSkipIntrRegs, r9, p2, 0x0)
+ ;;
+
+ add r4=8,r2 // duplicate r2 in r4
+	add r6=2*8,r2;;		// duplicate r2 in r6
+
+ ld8 r3=[r2],3*8
+ ld8 r5=[r4],3*8
+ ld8 r7=[r6],3*8;;
+ mov cr16=r3 // cr.ipsr
+ mov cr17=r5 // cr.isr is read only
+// mov cr18=r7;; // cr.ida
+
+ ld8 r3=[r2],3*8
+ ld8 r5=[r4],3*8
+ ld8 r7=[r6],3*8;;
+ mov cr19=r3 // cr.iip
+ mov cr20=r5 // cr.idtr
+ mov cr21=r7;; // cr.iitr
+
+ ld8 r3=[r2],3*8
+ ld8 r5=[r4],3*8
+ ld8 r7=[r6],3*8;;
+ mov cr22=r3 // cr.iipa
+ mov cr23=r5 // cr.ifs
+ mov cr24=r7 // cr.iim
+
+ ld8 r3=[r2],160;; // 160 byte increment
+ mov cr25=r3 // cr.iha
+
+rSkipIntrRegs:
+ ld8 r3=[r2],168;; // another 168 byte inc.
+
+ ld8 r3=[r2],40;; // 40 byte increment
+ mov cr66=r3 // cr.lid
+
+ ld8 r3=[r2],8;;
+// mov cr71=r3 // cr.ivr is read only
+ ld8 r3=[r2],24;; // 24 byte increment
+ mov cr72=r3 // cr.tpr
+
+ ld8 r3=[r2],168;; // 168 byte inc.
+// mov cr75=r3 // cr.eoi
+
+ ld8 r3=[r2],16;; // 16 byte inc.
+// mov cr96=r3 // cr.irr0 is read only
+
+ ld8 r3=[r2],16;; // 16 byte inc.
+// mov cr98=r3 // cr.irr1 is read only
+
+ ld8 r3=[r2],16;; // 16 byte inc
+// mov cr100=r3 // cr.irr2 is read only
+
+ ld8 r3=[r2],16;; // 16b inc.
+// mov cr102=r3 // cr.irr3 is read only
+
+ ld8 r3=[r2],16;; // 16 byte inc.
+// mov cr114=r3 // cr.itv
+
+ ld8 r3=[r2],8;;
+// mov cr116=r3 // cr.pmv
+ ld8 r3=[r2],8;;
+// mov cr117=r3 // cr.lrr0
+ ld8 r3=[r2],8;;
+// mov cr118=r3 // cr.lrr1
+ ld8 r3=[r2],8*10;;
+// mov cr119=r3 // cr.cmcv
+
+restore_ARs:
+ add r4=8,r2 // duplicate r2 in r4
+	add r6=2*8,r2;;		// duplicate r2 in r6
+
+ ld8 r3=[r2],3*8
+ ld8 r5=[r4],3*8
+ ld8 r7=[r6],3*8;;
+	mov ar0=r3		// ar.kr0
+ mov ar1=r5 // ar.kr1
+ mov ar2=r7;; // ar.kr2
+
+ ld8 r3=[r2],3*8
+ ld8 r5=[r4],3*8
+ ld8 r7=[r6],3*8;;
+ mov ar3=r3 // ar.kr3
+ mov ar4=r5 // ar.kr4
+ mov ar5=r7;; // ar.kr5
+
+ ld8 r3=[r2],10*8
+ ld8 r5=[r4],10*8
+ ld8 r7=[r6],10*8;;
+ mov ar6=r3 // ar.kr6
+ mov ar7=r5 // ar.kr7
+// mov ar8=r6 // ar.kr8
+ ;;
+
+ ld8 r3=[r2],3*8
+ ld8 r5=[r4],3*8
+ ld8 r7=[r6],3*8;;
+// mov ar16=r3 // ar.rsc
+// mov ar17=r5 // ar.bsp is read only
+ mov ar16=r0 // make sure that RSE is in enforced lazy mode
+ mov ar18=r7;; // ar.bspstore
+
+ ld8 r9=[r2],8*13;;
+ mov ar19=r9 // ar.rnat
+
+ mov ar16=r3 // ar.rsc
+ ld8 r3=[r2],8*4;;
+ mov ar32=r3 // ar.ccv
+
+ ld8 r3=[r2],8*4;;
+ mov ar36=r3 // ar.unat
+
+ ld8 r3=[r2],8*4;;
+ mov ar40=r3 // ar.fpsr
+
+ ld8 r3=[r2],160;; // 160
+// mov ar44=r3 // ar.itc
+
+ ld8 r3=[r2],8;;
+ mov ar64=r3 // ar.pfs
+
+ ld8 r3=[r2],8;;
+ mov ar65=r3 // ar.lc
+
+ ld8 r3=[r2];;
+ mov ar66=r3 // ar.ec
+ add r2=8*62,r2;; // padding
+
+restore_RRs:
+ mov r5=ar.lc
+ mov ar.lc=0x08-1
+ movl r4=0x00
+cStRRr:
+ ld8 r3=[r2],8;;
+//	mov rr[r4]=r3		// what are its access privileges?
+ add r4=1,r4
+ br.cloop.sptk.few cStRRr
+ ;;
+ mov ar.lc=r5
+ ;;
+end_os_mca_restore:
+ BRANCH(ia64_os_mca_done_restore, r2, p0, -0x20)
+ ;;
+//EndStub//////////////////////////////////////////////////////////////////////
--- /dev/null
+/*
+ * PAL Firmware support
+ * IA-64 Processor Programmers Reference Vol 2
+ *
+ * Copyright (C) 1999 Don Dugger <don.dugger@intel.com>
+ * Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
+ * Copyright (C) 1999 David Mosberger <davidm@hpl.hp.com>
+ */
+
+ .text
+ .psr abi64
+ .psr lsb
+ .lsb
+
+ .data
+pal_entry_point:
+ data8 ia64_pal_default_handler
+ .text
+
+/*
+ * Set the PAL entry point address. This could be written in C code, but we do it here
+ * to keep it all in one module (besides, it's so trivial that it's
+ * not a big deal).
+ *
+ * in0 Address of the PAL entry point (text address, NOT a function descriptor).
+ */
+ .align 16
+ .global ia64_pal_handler_init
+ .proc ia64_pal_handler_init
+ia64_pal_handler_init:
+ alloc r3=ar.pfs,1,0,0,0
+ movl r2=pal_entry_point
+ ;;
+ st8 [r2]=in0
+ br.ret.sptk.few rp
+
+ .endp ia64_pal_handler_init
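+
+/*
+ * For reference, a rough C equivalent of the routine above (a sketch only;
+ * the assembly version is what is actually used, to keep everything in this
+ * one module):
+ *
+ *	static u64 pal_entry_point = (u64) ia64_pal_default_handler;
+ *
+ *	void
+ *	ia64_pal_handler_init (u64 pal_text_addr)
+ *	{
+ *		pal_entry_point = pal_text_addr;
+ *	}
+ */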
+
+/*
+ * Default PAL call handler. This needs to be coded in assembly because it uses
+ * the static calling convention, i.e., the RSE may not be used and calls are
+ * done via "br.cond" (not "br.call").
+ */
+ .align 16
+ .global ia64_pal_default_handler
+ .proc ia64_pal_default_handler
+ia64_pal_default_handler:
+ mov r8=-1
+ br.cond.sptk.few rp
+
+/*
+ * Make a PAL call using the static calling convention.
+ *
+ * in0 Pointer to struct ia64_pal_retval
+ * in1 Index of PAL service
+ * in2 - in4 Remaining PAL arguments
+ *
+ */
+
+#ifdef __GCC_MULTIREG_RETVALS__
+# define arg0 in0
+# define arg1 in1
+# define arg2 in2
+# define arg3 in3
+# define arg4 in4
+#else
+# define arg0 in1
+# define arg1 in2
+# define arg2 in3
+# define arg3 in4
+# define arg4 in5
+#endif
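+
+/*
+ * Note on the remapping above: with __GCC_MULTIREG_RETVALS__ the
+ * ia64_pal_retval is returned in r8-r11, so in0 is already the first PAL
+ * argument.  Without it, in0 is the pointer to the result structure (see
+ * the stores to [in0] at .ret0 below) and the PAL arguments shift up by one.
+ */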
+
+ .text
+ .psr abi64
+ .psr lsb
+ .lsb
+
+ .align 16
+ .global ia64_pal_call_static
+ .proc ia64_pal_call_static
+ia64_pal_call_static:
+ alloc loc0 = ar.pfs,6,90,0,0
+ movl loc2 = pal_entry_point
+1: {
+ mov r28 = arg0
+ mov r29 = arg1
+ mov r8 = ip
+ }
+ ;;
+ ld8 loc2 = [loc2] // loc2 <- entry point
+ mov r30 = arg2
+ mov r31 = arg3
+ ;;
+ mov loc3 = psr
+ mov loc1 = rp
+ adds r8 = .ret0-1b,r8
+ ;;
+ rsm psr.i
+ mov b7 = loc2
+ mov rp = r8
+ ;;
+ br.cond.sptk.few b7
+.ret0: mov psr.l = loc3
+#ifndef __GCC_MULTIREG_RETVALS__
+ st8 [in0] = r8, 8
+ ;;
+ st8 [in0] = r9, 8
+ ;;
+ st8 [in0] = r10, 8
+ ;;
+ st8 [in0] = r11, 8
+#endif
+ mov ar.pfs = loc0
+ mov rp = loc1
+ ;;
+	srlz.d				// serialize restoration of psr.l
+ br.ret.sptk.few b0
+ .endp ia64_pal_call_static
--- /dev/null
+/*
+ * Dynamic DMA mapping support.
+ *
+ * This implementation is for IA-64 platforms that do not support
+ * I/O TLBs (aka DMA address translation hardware).
+ *
+ * XXX This doesn't do the right thing yet. It appears we would have
+ * to add additional zones so we can implement the various address
+ * mask constraints that we might encounter. A zone for memory < 32
+ * bits is obviously necessary...
+ */
+
+#include <linux/types.h>
+#include <linux/mm.h>
+#include <linux/string.h>
+#include <linux/pci.h>
+
+#include <asm/io.h>
+
+/* Pure 2^n version of get_order */
+extern __inline__ unsigned long
+get_order (unsigned long size)
+{
+ unsigned long order = ia64_fls(size);
+
+ printk ("get_order: size=%lu, order=%lu\n", size, order);
+
+	if (order > PAGE_SHIFT)
+		order -= PAGE_SHIFT;
+ return order;
+}
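+
+/*
+ * Worked example (assuming 16KB pages, i.e. PAGE_SHIFT == 14, and that
+ * ia64_fls() returns the bit number of the most significant set bit):
+ * size == 2*PAGE_SIZE == 0x8000 gives ia64_fls() == 15, so the order
+ * returned is 1.  Being the pure power-of-two version, sizes that are not
+ * a power of two are effectively rounded down rather than up.
+ */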
+
+void *
+pci_alloc_consistent (struct pci_dev *hwdev, size_t size, dma_addr_t *dma_handle)
+{
+ void *ret;
+ int gfp = GFP_ATOMIC;
+
+ if (!hwdev || hwdev->dma_mask != 0xffffffff)
+ gfp |= GFP_DMA;
+ ret = (void *)__get_free_pages(gfp, get_order(size));
+
+ if (ret) {
+ memset(ret, 0, size);
+ *dma_handle = virt_to_bus(ret);
+ }
+ return ret;
+}
+
+void
+pci_free_consistent (struct pci_dev *hwdev, size_t size, void *vaddr, dma_addr_t dma_handle)
+{
+ free_pages((unsigned long) vaddr, get_order(size));
+}
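+
+/*
+ * Typical driver usage (illustrative only; "pdev" stands for whatever
+ * pci_dev the driver already holds):
+ *
+ *	dma_addr_t dma;
+ *	void *cpu_addr = pci_alloc_consistent(pdev, 4096, &dma);
+ *	if (cpu_addr) {
+ *		... hand "dma" to the device, access the buffer via "cpu_addr" ...
+ *		pci_free_consistent(pdev, 4096, cpu_addr, dma);
+ *	}
+ */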
--- /dev/null
+/*
+ * pci.c - Low-Level PCI Access in IA64
+ *
+ * Derived from bios32.c of i386 tree.
+ *
+ */
+
+#include <linux/config.h>
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+#include <linux/ioport.h>
+#include <linux/malloc.h>
+#include <linux/smp_lock.h>
+#include <linux/spinlock.h>
+
+#include <asm/machvec.h>
+#include <asm/page.h>
+#include <asm/segment.h>
+#include <asm/system.h>
+#include <asm/io.h>
+
+#include <asm/sal.h>
+
+
+#ifdef CONFIG_SMP
+# include <asm/smp.h>
+#endif
+#include <asm/irq.h>
+
+
+#undef DEBUG
+#define DEBUG
+
+#ifdef DEBUG
+#define DBG(x...) printk(x)
+#else
+#define DBG(x...)
+#endif
+
+/*
+ * This interrupt-safe spinlock protects all accesses to PCI
+ * configuration space.
+ */
+
+spinlock_t pci_lock = SPIN_LOCK_UNLOCKED;
+
+struct pci_fixup pcibios_fixups[] = { { 0 } };
+
+#define PCI_NO_CHECKS 0x400
+#define PCI_NO_PEER_FIXUP 0x800
+
+static unsigned int pci_probe = PCI_NO_CHECKS;
+
+/* Macro to build a PCI configuration address to be passed as a parameter to SAL. */
+
+#define PCI_CONFIG_ADDRESS(dev, where) (((u64) dev->bus->number << 16) | ((u64) (dev->devfn & 0xff) << 8) | (where & 0xff))
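+
+/*
+ * For example (values illustrative): bus 1, devfn 0x20 (device 4, function 0),
+ * register offset 0x10 encodes as (1 << 16) | (0x20 << 8) | 0x10 == 0x12010.
+ */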
+
+static int
+pci_conf_read_config_byte(struct pci_dev *dev, int where, u8 *value)
+{
+ s64 status;
+ u64 lval;
+
+ status = ia64_sal_pci_config_read(PCI_CONFIG_ADDRESS(dev, where), 1, &lval);
+ *value = lval;
+ return status;
+}
+
+static int
+pci_conf_read_config_word(struct pci_dev *dev, int where, u16 *value)
+{
+ s64 status;
+ u64 lval;
+
+ status = ia64_sal_pci_config_read(PCI_CONFIG_ADDRESS(dev, where), 2, &lval);
+ *value = lval;
+ return status;
+}
+
+static int
+pci_conf_read_config_dword(struct pci_dev *dev, int where, u32 *value)
+{
+ s64 status;
+ u64 lval;
+
+ status = ia64_sal_pci_config_read(PCI_CONFIG_ADDRESS(dev, where), 4, &lval);
+ *value = lval;
+ return status;
+}
+
+static int
+pci_conf_write_config_byte (struct pci_dev *dev, int where, u8 value)
+{
+ return ia64_sal_pci_config_write(PCI_CONFIG_ADDRESS(dev, where), 1, value);
+}
+
+static int
+pci_conf_write_config_word (struct pci_dev *dev, int where, u16 value)
+{
+ return ia64_sal_pci_config_write(PCI_CONFIG_ADDRESS(dev, where), 2, value);
+}
+
+static int
+pci_conf_write_config_dword (struct pci_dev *dev, int where, u32 value)
+{
+ return ia64_sal_pci_config_write(PCI_CONFIG_ADDRESS(dev, where), 4, value);
+}
+
+
+static struct pci_ops pci_conf = {
+ pci_conf_read_config_byte,
+ pci_conf_read_config_word,
+ pci_conf_read_config_dword,
+ pci_conf_write_config_byte,
+ pci_conf_write_config_word,
+ pci_conf_write_config_dword
+};
+
+/*
+ * Try to find PCI BIOS. This will always work for IA64.
+ */
+
+static struct pci_ops * __init
+pci_find_bios(void)
+{
+ return &pci_conf;
+}
+
+/*
+ * Initialization. Uses the SAL interface
+ */
+
+#define PCI_BUSSES_TO_SCAN 2 /* On "real" ;) hardware this will be 255 */
+
+void __init
+pcibios_init(void)
+{
+ struct pci_ops *ops = NULL;
+ int i;
+
+ if ((ops = pci_find_bios()) == NULL) {
+ printk("PCI: No PCI bus detected\n");
+ return;
+ }
+
+ printk("PCI: Probing PCI hardware\n");
+ for (i = 0; i < PCI_BUSSES_TO_SCAN; i++)
+ pci_scan_bus(i, ops, NULL);
+ platform_pci_fixup();
+ return;
+}
+
+/*
+ * Called after each bus is probed, but before its children
+ * are examined.
+ */
+
+void __init
+pcibios_fixup_bus(struct pci_bus *b)
+{
+ return;
+}
+
+int
+pci_assign_resource (struct pci_dev *dev, int i)
+{
+ printk("pci_assign_resource: not implemented!\n");
+ return -ENODEV;
+}
+
+void __init
+pcibios_update_resource(struct pci_dev *dev, struct resource *root,
+ struct resource *res, int resource)
+{
+ unsigned long where, size;
+ u32 reg;
+
+ where = PCI_BASE_ADDRESS_0 + (resource * 4);
+ size = res->end - res->start;
+	pci_read_config_dword(dev, where, &reg);
+ reg = (reg & size) | (((u32)(res->start - root->start)) & ~size);
+ pci_write_config_dword(dev, where, reg);
+
+ /* ??? FIXME -- record old value for shutdown. */
+}
+
+void __init
+pcibios_update_irq(struct pci_dev *dev, int irq)
+{
+ pci_write_config_byte(dev, PCI_INTERRUPT_LINE, irq);
+
+ /* ??? FIXME -- record old value for shutdown. */
+}
+
+void __init
+pcibios_fixup_pbus_ranges (struct pci_bus * bus, struct pbus_set_ranges_data * ranges)
+{
+ ranges->io_start -= bus->resource[0]->start;
+ ranges->io_end -= bus->resource[0]->start;
+ ranges->mem_start -= bus->resource[1]->start;
+ ranges->mem_end -= bus->resource[1]->start;
+}
+
+int __init
+pcibios_enable_device (struct pci_dev *dev)
+{
+ /* Not needed, since we enable all devices at startup. */
+ return 0;
+}
+
+/*
+ * PCI BIOS setup, always defaults to SAL interface
+ */
+
+char * __init
+pcibios_setup(char *str)
+{
+ pci_probe = PCI_NO_CHECKS;
+ return NULL;
+}
+
+void
+pcibios_align_resource (void *data, struct resource *res, unsigned long size)
+{
+}
+
+#if 0 /*def CONFIG_PROC_FS*/
+/*
+ * This is an ugly hack to get a (weak) unresolved reference to something that is
+ * in drivers/pci/proc.c. Without this, the file does not get linked in at all
+ * (I suspect the reason this isn't needed on Linux/x86 is that most people compile
+ * with module support, in which case the EXPORT_SYMBOL() stuff will ensure the
+ * code gets linked in.)  Sigh... --davidm 99/12/20.
+ */
+asm ("data8 proc_bus_pci_add");
+#endif
--- /dev/null
+#include <linux/config.h>
+
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/smp_lock.h>
+
+#include <asm/errno.h>
+#include <asm/irq.h>
+#include <asm/processor.h>
+#include <asm/system.h>
+#include <asm/uaccess.h>
+
+#ifdef CONFIG_PERFMON
+
+#define MAX_PERF_COUNTER 4 /* true for Itanium, at least */
+#define WRITE_PMCS_AND_START 0xa0
+#define WRITE_PMCS 0xa1
+#define READ_PMDS 0xa2
+#define STOP_PMCS 0xa3
+#define IA64_COUNTER_MASK 0xffffffffffffff6f
+#define PERF_OVFL_VAL 0xffffffff
+
+struct perfmon_counter {
+ unsigned long data;
+ int counter_num;
+};
+
+unsigned long pmds[MAX_PERF_COUNTER];
+struct task_struct *perf_owner;
+
+/*
+ * We set dcr.pp, psr.pp, and the appropriate pmc control values with
+ * this. Notice that we go about modifying _each_ task's pt_regs to
+ * set cr_ipsr.pp. This will start counting when "current" does an
+ * _rfi_. Also, since each task's cr_ipsr.pp is inherited across
+ * forks, we do _not_ need additional code on context switches. On
+ * stopping the counters we don't _need_ to go about changing every
+ * task's cr_ipsr back to where it was, because we could just set
+ * pmc[0]=1. But we do it anyway because we will probably add
+ * thread-specific accounting later.
+ *
+ * The obvious problem with this is that on SMP systems, it is a bit
+ * of work (when someone wants to do it) - it would be easier if we
+ * just added code to the context-switch path. I think we would need
+ * to lock the run queue to ensure no context switches, send an IPI to
+ * each processor, and in that IPI handler, just modify the psr bit of
+ * only the _current_ thread, since we have modified the psr bit
+ * correctly in the kernel stack for every process which is not
+ * running. Might crash on SMP systems without the
+ * lock_kernel(). Hence the lock..
+ */
+asmlinkage unsigned long
+sys_perfmonctl (int cmd1, int cmd2, void *ptr)
+{
+ struct perfmon_counter tmp, *cptr = ptr;
+ unsigned long pmd, cnum, dcr, flags;
+ struct task_struct *p;
+ struct pt_regs *regs;
+ int i;
+
+ switch (cmd1) {
+ case WRITE_PMCS: /* Writes to PMC's and clears PMDs */
+ case WRITE_PMCS_AND_START: /* Also starts counting */
+
+		if (!access_ok(VERIFY_READ, cptr, sizeof(struct perfmon_counter)*cmd2))
+			return -EFAULT;
+
+		if (cmd2 > MAX_PERF_COUNTER)
+			return -EFAULT;
+
+ if (perf_owner && perf_owner != current)
+ return -EBUSY;
+ perf_owner = current;
+
+ for (i = 0; i < cmd2; i++, cptr++) {
+ copy_from_user(&tmp, cptr, sizeof(tmp));
+ /* XXX need to check validity of counter_num and perhaps data!! */
+ ia64_set_pmc(tmp.counter_num, tmp.data);
+ ia64_set_pmd(tmp.counter_num, 0);
+ pmds[tmp.counter_num - 4] = 0;
+ }
+
+ if (cmd1 == WRITE_PMCS_AND_START) {
+ local_irq_save(flags);
+ dcr = ia64_get_dcr();
+ dcr |= IA64_DCR_PP;
+ ia64_set_dcr(dcr);
+ local_irq_restore(flags);
+
+ /*
+ * This is a no can do. It obviously wouldn't
+ * work on SMP where another process may not
+ * be blocked at all.
+ *
+ * Perhaps we need a global predicate in the
+ * leave_kernel path to control if pp should
+ * be on or off?
+ */
+ lock_kernel();
+ for_each_task(p) {
+ regs = (struct pt_regs *) (((char *)p) + IA64_STK_OFFSET) - 1;
+ ia64_psr(regs)->pp = 1;
+ }
+ unlock_kernel();
+ ia64_set_pmc(0, 0);
+ }
+ break;
+
+ case READ_PMDS:
+		if (cmd2 > MAX_PERF_COUNTER)
+			return -EFAULT;
+		if (!access_ok(VERIFY_WRITE, cptr, sizeof(struct perfmon_counter)*cmd2))
+ return -EFAULT;
+ local_irq_save(flags);
+ /* XXX this looks wrong */
+ __asm__ __volatile__("rsm psr.pp\n");
+ dcr = ia64_get_dcr();
+ dcr &= ~IA64_DCR_PP;
+ ia64_set_dcr(dcr);
+ local_irq_restore(flags);
+
+ /*
+ * We cannot touch pmc[0] to stop counting here, as
+ * that particular instruction might cause an overflow
+ * and the mask in pmc[0] might get lost. I'm not very
+ * sure of the hardware behavior here. So we stop
+ * counting by psr.pp = 0. And we reset dcr.pp to
+ * prevent an interrupt from mucking up psr.pp in the
+ * meanwhile. Perfmon interrupts are pended, hence the
+		 * above code should be ok even if one of the above
+		 * instructions causes an overflow. Is this ok? When I
+ * muck with dcr, is the cli/sti needed??
+ */
+		for (i = 0, cnum = 4; i < cmd2; i++, cnum++, cptr++) {
+ pmd = pmds[i] + (ia64_get_pmd(cnum) & PERF_OVFL_VAL);
+ put_user(pmd, &cptr->data);
+ }
+ local_irq_save(flags);
+ /* XXX this looks wrong */
+ __asm__ __volatile__("ssm psr.pp");
+ dcr = ia64_get_dcr();
+ dcr |= IA64_DCR_PP;
+ ia64_set_dcr(dcr);
+ local_irq_restore(flags);
+ break;
+
+ case STOP_PMCS:
+ ia64_set_pmc(0, 1);
+ for (i = 0; i < MAX_PERF_COUNTER; ++i)
+ ia64_set_pmc(i, 0);
+
+ local_irq_save(flags);
+ dcr = ia64_get_dcr();
+ dcr &= ~IA64_DCR_PP;
+ ia64_set_dcr(dcr);
+ local_irq_restore(flags);
+ /*
+ * This is a no can do. It obviously wouldn't
+ * work on SMP where another process may not
+ * be blocked at all.
+ *
+ * Perhaps we need a global predicate in the
+ * leave_kernel path to control if pp should
+ * be on or off?
+ */
+ lock_kernel();
+ for_each_task(p) {
+ regs = (struct pt_regs *) (((char *)p) + IA64_STK_OFFSET) - 1;
+ ia64_psr(regs)->pp = 0;
+ }
+ unlock_kernel();
+ perf_owner = 0;
+ break;
+
+ default:
+ break;
+ }
+ return 0;
+}
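+
+#if 0
+/*
+ * Illustrative user-level usage (editor's sketch, not part of this
+ * patch).  Assumptions are flagged: __NR_perfmonctl is assumed to be
+ * wired up in <asm/unistd.h>, and 0x1234 stands in for a real PMU
+ * event encoding taken from the Itanium PMU documentation.
+ */
+#include <unistd.h>
+#include <sys/syscall.h>
+
+struct perfmon_counter {		/* mirrors the kernel structure above */
+	unsigned long data;
+	int counter_num;
+};
+
+unsigned long
+count_one_event (void)
+{
+	struct perfmon_counter pmc[1];
+
+	pmc[0].counter_num = 4;		/* first generic counter */
+	pmc[0].data = 0x1234;		/* hypothetical event encoding */
+
+	/* program pmc[4], clear pmd[4], and start counting: */
+	syscall(__NR_perfmonctl, WRITE_PMCS_AND_START, 1, pmc);
+
+	/* ... run the workload being measured ... */
+
+	/* fetch the accumulated count, then stop the counters: */
+	syscall(__NR_perfmonctl, READ_PMDS, 1, pmc);
+	syscall(__NR_perfmonctl, STOP_PMCS, 0, 0);
+	return pmc[0].data;
+}
+#endif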
+
+static inline void
+update_counters (void)
+{
+ unsigned long mask, i, cnum, val;
+
+ mask = ia64_get_pmd(0) >> 4;
+ for (i = 0, cnum = 4; i < MAX_PERF_COUNTER; cnum++, i++, mask >>= 1) {
+ if (mask & 0x1)
+ val = PERF_OVFL_VAL;
+ else
+ /* since we got an interrupt, might as well clear every pmd. */
+ val = ia64_get_pmd(cnum) & PERF_OVFL_VAL;
+ pmds[i] += val;
+ ia64_set_pmd(cnum, 0);
+ }
+}
+
+static void
+perfmon_interrupt (int irq, void *arg, struct pt_regs *regs)
+{
+ update_counters();
+ ia64_set_pmc(0, 0);
+ ia64_srlz_d();
+}
+
+void
+perfmon_init (void)
+{
+ if (request_irq(PERFMON_IRQ, perfmon_interrupt, 0, "perfmon", NULL)) {
+ printk("perfmon_init: could not allocate performance monitor vector %u\n",
+ PERFMON_IRQ);
+ return;
+ }
+ ia64_set_pmv(PERFMON_IRQ);
+ ia64_srlz_d();
+}
+
+#else /* !CONFIG_PERFMON */
+
+asmlinkage unsigned long
+sys_perfmonctl (int cmd1, int cmd2, void *ptr)
+{
+ return -ENOSYS;
+}
+
+#endif /* !CONFIG_PERFMON */
--- /dev/null
+/*
+ * Architecture-specific setup.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+#define __KERNEL_SYSCALLS__ /* see <asm/unistd.h> */
+#include <linux/config.h>
+
+#include <linux/acpi.h>
+#include <linux/elf.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/smp_lock.h>
+#include <linux/stddef.h>
+#include <linux/unistd.h>
+
+#include <asm/delay.h>
+#include <asm/efi.h>
+#include <asm/pgtable.h>
+#include <asm/processor.h>
+#include <asm/sal.h>
+#include <asm/uaccess.h>
+#include <asm/user.h>
+
+
+void
+show_regs (struct pt_regs *regs)
+{
+ unsigned long ip = regs->cr_iip + ia64_psr(regs)->ri;
+
+ printk("\npsr : %016lx ifs : %016lx ip : [<%016lx>]\n",
+ regs->cr_ipsr, regs->cr_ifs, ip);
+ printk("unat: %016lx pfs : %016lx rsc : %016lx\n",
+ regs->ar_unat, regs->ar_pfs, regs->ar_rsc);
+ printk("rnat: %016lx bsps: %016lx pr : %016lx\n",
+ regs->ar_rnat, regs->ar_bspstore, regs->pr);
+ printk("ldrs: %016lx ccv : %016lx fpsr: %016lx\n",
+ regs->loadrs, regs->ar_ccv, regs->ar_fpsr);
+ printk("b0 : %016lx b6 : %016lx b7 : %016lx\n", regs->b0, regs->b6, regs->b7);
+ printk("f6 : %05lx%016lx f7 : %05lx%016lx\n",
+ regs->f6.u.bits[1], regs->f6.u.bits[0],
+ regs->f7.u.bits[1], regs->f7.u.bits[0]);
+ printk("f8 : %05lx%016lx f9 : %05lx%016lx\n",
+ regs->f8.u.bits[1], regs->f8.u.bits[0],
+ regs->f9.u.bits[1], regs->f9.u.bits[0]);
+
+ printk("r1 : %016lx r2 : %016lx r3 : %016lx\n", regs->r1, regs->r2, regs->r3);
+ printk("r8 : %016lx r9 : %016lx r10 : %016lx\n", regs->r8, regs->r9, regs->r10);
+ printk("r11 : %016lx r12 : %016lx r13 : %016lx\n", regs->r11, regs->r12, regs->r13);
+ printk("r14 : %016lx r15 : %016lx r16 : %016lx\n", regs->r14, regs->r15, regs->r16);
+ printk("r17 : %016lx r18 : %016lx r19 : %016lx\n", regs->r17, regs->r18, regs->r19);
+ printk("r20 : %016lx r21 : %016lx r22 : %016lx\n", regs->r20, regs->r21, regs->r22);
+ printk("r23 : %016lx r24 : %016lx r25 : %016lx\n", regs->r23, regs->r24, regs->r25);
+ printk("r26 : %016lx r27 : %016lx r28 : %016lx\n", regs->r26, regs->r27, regs->r28);
+ printk("r29 : %016lx r30 : %016lx r31 : %016lx\n", regs->r29, regs->r30, regs->r31);
+
+ /* print the stacked registers if cr.ifs is valid: */
+ if (regs->cr_ifs & 0x8000000000000000) {
+ unsigned long val, sof, *bsp, ndirty;
+ int i, is_nat = 0;
+
+ sof = regs->cr_ifs & 0x7f; /* size of frame */
+ ndirty = (regs->loadrs >> 19);
+ bsp = ia64_rse_skip_regs((unsigned long *) regs->ar_bspstore, ndirty);
+ for (i = 0; i < sof; ++i) {
+ get_user(val, ia64_rse_skip_regs(bsp, i));
+ printk("r%-3u:%c%016lx%s", 32 + i, is_nat ? '*' : ' ', val,
+ ((i == sof - 1) || (i % 3) == 2) ? "\n" : " ");
+ }
+ }
+}
+
+void __attribute__((noreturn))
+cpu_idle (void *unused)
+{
+ /* endless idle loop with no priority at all */
+ init_idle();
+ current->priority = 0;
+ current->counter = -100;
+
+#ifdef CONFIG_SMP
+ if (!current->need_resched)
+ min_xtp();
+#endif
+
+ while (1) {
+ while (!current->need_resched) {
+ continue;
+ }
+#ifdef CONFIG_SMP
+ normal_xtp();
+#endif
+ schedule();
+ check_pgt_cache();
+ if (acpi_idle)
+ (*acpi_idle)();
+ }
+}
+
+/*
+ * Copy the state of an ia-64 thread.
+ *
+ * We get here through the following call chain:
+ *
+ * <clone syscall>
+ * sys_clone
+ * do_fork
+ * copy_thread
+ *
+ * This means that the stack layout is as follows:
+ *
+ * +---------------------+ (highest addr)
+ * | struct pt_regs |
+ * +---------------------+
+ * | struct switch_stack |
+ * +---------------------+
+ * | |
+ * | memory stack |
+ * | | <-- sp (lowest addr)
+ * +---------------------+
+ *
+ * Note: if we get called through kernel_thread() then the memory
+ * above "(highest addr)" is valid kernel stack memory that needs to
+ * be copied as well.
+ *
+ * Observe that we copy the unat values that are in pt_regs and
+ * switch_stack. Since the interpretation of unat is dependent upon
+ * the address to which the registers got spilled, doing this is valid
+ * only as long as we preserve the alignment of the stack. Since the
+ * stack is always page aligned, we know this is the case.
+ *
+ * XXX Actually, the above isn't true when we create kernel_threads().
+ * If we ever need to create kernel_threads() that preserve the unat
+ * values we'll need to fix this. Perhaps an easy workaround would be
+ * to always clear the unat bits in the child thread.
+ */
+int
+copy_thread (int nr, unsigned long clone_flags, unsigned long usp,
+ struct task_struct *p, struct pt_regs *regs)
+{
+ unsigned long rbs, child_rbs, rbs_size, stack_offset, stack_top, stack_used;
+ struct switch_stack *child_stack, *stack;
+ extern char ia64_ret_from_syscall_clear_r8;
+ extern char ia64_strace_clear_r8;
+ struct pt_regs *child_ptregs;
+
+#ifdef CONFIG_SMP
+ /*
+ * For SMP idle threads, fork_by_hand() calls do_fork with
+ * NULL regs.
+ */
+ if (!regs)
+ return 0;
+#endif
+
+ stack_top = (unsigned long) current + IA64_STK_OFFSET;
+ stack = ((struct switch_stack *) regs) - 1;
+ stack_used = stack_top - (unsigned long) stack;
+ stack_offset = IA64_STK_OFFSET - stack_used;
+
+ child_stack = (struct switch_stack *) ((unsigned long) p + stack_offset);
+ child_ptregs = (struct pt_regs *) (child_stack + 1);
+
+ /* copy parent's switch_stack & pt_regs to child: */
+ memcpy(child_stack, stack, stack_used);
+
+ rbs = (unsigned long) current + IA64_RBS_OFFSET;
+ child_rbs = (unsigned long) p + IA64_RBS_OFFSET;
+ rbs_size = stack->ar_bspstore - rbs;
+
+ /* copy the parent's register backing store to the child: */
+ memcpy((void *) child_rbs, (void *) rbs, rbs_size);
+
+ child_ptregs->r8 = 0; /* child gets a zero return value */
+ if (user_mode(child_ptregs))
+ child_ptregs->r12 = usp; /* user stack pointer */
+ else {
+ /*
+ * Note: we simply preserve the relative position of
+ * the stack pointer here. There is no need to
+ * allocate a scratch area here, since that will have
+ * been taken care of by the caller of sys_clone()
+ * already.
+ */
+ child_ptregs->r12 = (unsigned long) (child_ptregs + 1); /* kernel sp */
+ child_ptregs->r13 = (unsigned long) p; /* set `current' pointer */
+ }
+ if (p->flags & PF_TRACESYS)
+ child_stack->b0 = (unsigned long) &ia64_strace_clear_r8;
+ else
+ child_stack->b0 = (unsigned long) &ia64_ret_from_syscall_clear_r8;
+ child_stack->ar_bspstore = child_rbs + rbs_size;
+
+ /* copy the thread_struct: */
+ p->thread.ksp = (unsigned long) child_stack - 16;
+ /*
+ * NOTE: The calling convention considers all floating point
+ * registers in the high partition (fph) to be scratch. Since
+ * the only way to get to this point is through a system call,
+ * we know that the values in fph are all dead. Hence, there
+ * is no need to inherit the fph state from the parent to the
+ * child and all we have to do is to make sure that
+ * IA64_THREAD_FPH_VALID is cleared in the child.
+ *
+ * XXX We could push this optimization a bit further by
+ * clearing IA64_THREAD_FPH_VALID on ANY system call.
+ * However, it's not clear this is worth doing. Also, it
+ * would be a slight deviation from the normal Linux system
+ * call behavior where scratch registers are preserved across
+ * system calls (unless used by the system call itself).
+ *
+ * If we wanted to inherit the fph state from the parent to the
+ * child, we would have to do something along the lines of:
+ *
+ * if (ia64_get_fpu_owner() == current && ia64_psr(regs)->mfh) {
+ * p->thread.flags |= IA64_THREAD_FPH_VALID;
+ * ia64_save_fpu(&p->thread.fph);
+ * } else if (current->thread.flags & IA64_THREAD_FPH_VALID) {
+ * memcpy(p->thread.fph, current->thread.fph, sizeof(p->thread.fph));
+ * }
+ */
+ p->thread.flags = (current->thread.flags & ~IA64_THREAD_FPH_VALID);
+ return 0;
+}
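+
+/*
+ * Worked example (editor's illustration, numbers hypothetical): if
+ * IA64_STK_OFFSET were 32KB and the parent entered the kernel with
+ * pt_regs plus switch_stack occupying the top 1KB of its kernel
+ * stack, then stack_used = 1KB, stack_offset = 31KB, and the child's
+ * switch_stack/pt_regs copy lands 31KB into the child's task area,
+ * i.e., at the same offset relative to the task structure as in the
+ * parent.
+ */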
+
+void
+ia64_elf_core_copy_regs (struct pt_regs *pt, elf_gregset_t dst)
+{
+ struct switch_stack *sw = ((struct switch_stack *) pt) - 1;
+ unsigned long ar_ec, cfm, ar_bsp, ndirty, *krbs;
+
+ ar_ec = (sw->ar_pfs >> 52) & 0x3f;
+
+ cfm = pt->cr_ifs & ((1UL << 63) - 1);
+ if ((pt->cr_ifs & (1UL << 63)) == 0) {
+ /* if cr_ifs isn't valid, we got here through a syscall or a break */
+ cfm = sw->ar_pfs & ((1UL << 38) - 1);
+ }
+
+ krbs = (unsigned long *) current + IA64_RBS_OFFSET/8;
+ ndirty = ia64_rse_num_regs(krbs, krbs + (pt->loadrs >> 16));
+ ar_bsp = (long) ia64_rse_skip_regs((long *) pt->ar_bspstore, ndirty);
+
+ /* r0-r31
+ * NaT bits (for r0-r31; bit N == 1 iff rN is a NaT)
+ * predicate registers (p0-p63)
+ * b0-b7
+ * ip cfm user-mask
+ * ar.rsc ar.bsp ar.bspstore ar.rnat
+ * ar.ccv ar.unat ar.fpsr ar.pfs ar.lc ar.ec
+ */
+	memset(dst, 0, sizeof (elf_gregset_t));	/* don't leak any "random" bits */
+
+ /* r0 is zero */ dst[ 1] = pt->r1; dst[ 2] = pt->r2; dst[ 3] = pt->r3;
+ dst[ 4] = sw->r4; dst[ 5] = sw->r5; dst[ 6] = sw->r6; dst[ 7] = sw->r7;
+ dst[ 8] = pt->r8; dst[ 9] = pt->r9; dst[10] = pt->r10; dst[11] = pt->r11;
+ dst[12] = pt->r12; dst[13] = pt->r13; dst[14] = pt->r14; dst[15] = pt->r15;
+ memcpy(dst + 16, &pt->r16, 16*8); /* r16-r31 are contiguous */
+
+ dst[32] = ia64_get_nat_bits(pt, sw);
+ dst[33] = pt->pr;
+
+ /* branch regs: */
+ dst[34] = pt->b0; dst[35] = sw->b1; dst[36] = sw->b2; dst[37] = sw->b3;
+ dst[38] = sw->b4; dst[39] = sw->b5; dst[40] = pt->b6; dst[41] = pt->b7;
+
+ dst[42] = pt->cr_iip; dst[43] = pt->cr_ifs;
+ dst[44] = pt->cr_ipsr; /* XXX perhaps we should filter out some bits here? --davidm */
+
+ dst[45] = pt->ar_rsc; dst[46] = ar_bsp; dst[47] = pt->ar_bspstore; dst[48] = pt->ar_rnat;
+ dst[49] = pt->ar_ccv; dst[50] = pt->ar_unat; dst[51] = sw->ar_fpsr; dst[52] = pt->ar_pfs;
+ dst[53] = sw->ar_lc; dst[54] = (sw->ar_pfs >> 52) & 0x3f;
+}
+
+int
+dump_fpu (struct pt_regs *pt, elf_fpregset_t dst)
+{
+ struct switch_stack *sw = ((struct switch_stack *) pt) - 1;
+ struct task_struct *fpu_owner = ia64_get_fpu_owner();
+
+	memset(dst, 0, sizeof (elf_fpregset_t));	/* don't leak any "random" bits */
+
+ /* f0 is 0.0 */ /* f1 is 1.0 */ dst[2] = sw->f2; dst[3] = sw->f3;
+ dst[4] = sw->f4; dst[5] = sw->f5; dst[6] = pt->f6; dst[7] = pt->f7;
+ dst[8] = pt->f8; dst[9] = pt->f9;
+ memcpy(dst + 10, &sw->f10, 22*16); /* f10-f31 are contiguous */
+
+ if ((fpu_owner == current) || (current->thread.flags & IA64_THREAD_FPH_VALID)) {
+ if (fpu_owner == current) {
+ __ia64_save_fpu(current->thread.fph);
+ }
+ memcpy(dst + 32, current->thread.fph, 96*16);
+ }
+ return 1; /* f0-f31 are always valid so we always return 1 */
+}
+
+asmlinkage long
+sys_execve (char *filename, char **argv, char **envp, struct pt_regs *regs)
+{
+ int error;
+
+ lock_kernel();
+ filename = getname(filename);
+ error = PTR_ERR(filename);
+ if (IS_ERR(filename))
+ goto out;
+ error = do_execve(filename, argv, envp, regs);
+ putname(filename);
+out:
+ unlock_kernel();
+ return error;
+}
+
+pid_t
+kernel_thread (int (*fn)(void *), void *arg, unsigned long flags)
+{
+ struct task_struct *parent = current;
+ int result;
+
+ clone(flags | CLONE_VM, 0);
+ if (parent != current) {
+ result = (*fn)(arg);
+ _exit(result);
+ }
+ return 0; /* parent: just return */
+}
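+
+#if 0
+/*
+ * Illustrative usage (editor's sketch; "my_daemon" is hypothetical
+ * and not part of this patch): spawn a kernel thread that shares the
+ * VM and file tables of its creator.
+ */
+static int
+my_daemon (void *unused)
+{
+	printk("my_daemon: started as pid %d\n", current->pid);
+	return 0;
+}
+
+void
+spawn_example (void)
+{
+	kernel_thread(my_daemon, NULL, CLONE_FS | CLONE_FILES);
+}
+#endif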
+
+/*
+ * Flush thread state. This is called when a thread does an execve().
+ */
+void
+flush_thread (void)
+{
+ /* drop floating-point and debug-register state if it exists: */
+ current->thread.flags &= ~(IA64_THREAD_FPH_VALID | IA64_THREAD_DBG_VALID);
+
+ if (ia64_get_fpu_owner() == current) {
+ ia64_set_fpu_owner(0);
+ }
+}
+
+/*
+ * Clean up state associated with current thread. This is called when
+ * the thread calls exit().
+ */
+void
+exit_thread (void)
+{
+ if (ia64_get_fpu_owner() == current) {
+ ia64_set_fpu_owner(0);
+ }
+}
+
+/*
+ * Free remaining state associated with DEAD_TASK. This is called
+ * after the parent of DEAD_TASK has collected the exit status of the
+ * task via wait().
+ */
+void
+release_thread (struct task_struct *dead_task)
+{
+ /* nothing to do */
+}
+
+unsigned long
+get_wchan (struct task_struct *p)
+{
+ struct ia64_frame_info info;
+ unsigned long ip;
+ int count = 0;
+ /*
+ * These bracket the sleeping functions..
+ */
+ extern void scheduling_functions_start_here(void);
+ extern void scheduling_functions_end_here(void);
+# define first_sched ((unsigned long) scheduling_functions_start_here)
+# define last_sched ((unsigned long) scheduling_functions_end_here)
+
+ /*
+ * Note: p may not be a blocked task (it could be current or
+ * another process running on some other CPU).  Rather than
+ * trying to determine if p is really blocked, we just assume
+ * it's blocked and rely on the unwind routines to fail
+ * gracefully if the process wasn't really blocked after all.
+ * --davidm 99/12/15
+ */
+ ia64_unwind_init_from_blocked_task(&info, p);
+ do {
+ if (ia64_unwind_to_previous_frame(&info) < 0)
+ return 0;
+ ip = ia64_unwind_get_ip(&info);
+ if (ip < first_sched || ip >= last_sched)
+ return ip;
+ } while (count++ < 16);
+ return 0;
+# undef first_sched
+# undef last_sched
+}
+
+void
+machine_restart (char *restart_cmd)
+{
+ (*efi.reset_system)(EFI_RESET_WARM, 0, 0, 0);
+}
+
+void
+machine_halt (void)
+{
+ printk("machine_halt: need PAL or ACPI version here!!\n");
+ machine_restart(0);
+}
+
+void
+machine_power_off (void)
+{
+ printk("machine_power_off: unimplemented (need ACPI version here)\n");
+ machine_halt ();
+}
--- /dev/null
+/*
+ * Kernel support for the ptrace() and syscall tracing interfaces.
+ *
+ * Copyright (C) 1999-2000 Hewlett-Packard Co
+ * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * Derived from the x86 and Alpha versions. Most of the code in here
+ * could actually be factored into a common set of routines.
+ */
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/errno.h>
+#include <linux/ptrace.h>
+#include <linux/smp_lock.h>
+#include <linux/user.h>
+
+#include <asm/pgtable.h>
+#include <asm/processor.h>
+#include <asm/ptrace_offsets.h>
+#include <asm/rse.h>
+#include <asm/system.h>
+#include <asm/uaccess.h>
+
+/*
+ * Collect the NaT bits for r1-r31 from sw->caller_unat and
+ * sw->ar_unat and return a NaT bitset where bit i is set iff the NaT
+ * bit of register i is set.
+ */
+long
+ia64_get_nat_bits (struct pt_regs *pt, struct switch_stack *sw)
+{
+# define GET_BITS(str, first, last, unat) \
+ ({ \
+ unsigned long bit = ia64_unat_pos(&str->r##first); \
+ unsigned long mask = ((1UL << (last - first + 1)) - 1) << first; \
+ (ia64_rotl(unat, first) >> bit) & mask; \
+ })
+ unsigned long val;
+
+ val = GET_BITS(pt, 1, 3, sw->caller_unat);
+ val |= GET_BITS(pt, 12, 15, sw->caller_unat);
+ val |= GET_BITS(pt, 8, 11, sw->caller_unat);
+ val |= GET_BITS(pt, 16, 31, sw->caller_unat);
+ val |= GET_BITS(sw, 4, 7, sw->ar_unat);
+ return val;
+
+# undef GET_BITS
+}
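+
+/*
+ * Worked example (editor's illustration, spill position hypothetical):
+ * if r8-r11 were spilled at addresses whose ia64_unat_pos() starts at
+ * 16, then GET_BITS(pt, 8, 11, sw->caller_unat) rotates the unat value
+ * left by 8 (the register number), shifts right by 16 (the spill
+ * position), and masks with 0xf00, leaving the NaT bits of r8-r11 in
+ * bits 8-11 of the result.
+ */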
+
+/*
+ * Store the NaT bitset NAT in pt->caller_unat and sw->ar_unat.
+ */
+void
+ia64_put_nat_bits (struct pt_regs *pt, struct switch_stack *sw, unsigned long nat)
+{
+# define PUT_BITS(str, first, last, nat) \
+ ({ \
+ unsigned long bit = ia64_unat_pos(&str->r##first); \
+ unsigned long mask = ((1UL << (last - first + 1)) - 1) << bit; \
+ (ia64_rotr(nat, first) << bit) & mask; \
+ })
+ sw->caller_unat = PUT_BITS(pt, 1, 3, nat);
+ sw->caller_unat |= PUT_BITS(pt, 12, 15, nat);
+ sw->caller_unat |= PUT_BITS(pt, 8, 11, nat);
+ sw->caller_unat |= PUT_BITS(pt, 16, 31, nat);
+ sw->ar_unat = PUT_BITS(sw, 4, 7, nat);
+
+# undef PUT_BITS
+}
+
+#define IA64_MLI_TEMPLATE 0x2
+#define IA64_MOVL_OPCODE 6
+
+void
+ia64_increment_ip (struct pt_regs *regs)
+{
+ unsigned long w0, w1, ri = ia64_psr(regs)->ri + 1;
+
+ if (ri > 2) {
+ ri = 0;
+ regs->cr_iip += 16;
+ } else if (ri == 2) {
+ get_user(w0, (char *) regs->cr_iip + 0);
+ get_user(w1, (char *) regs->cr_iip + 8);
+ if (((w0 >> 1) & 0xf) == IA64_MLI_TEMPLATE && (w1 >> 60) == IA64_MOVL_OPCODE) {
+ /*
+ * rfi'ing to slot 2 of an MLI bundle causes
+ * an illegal operation fault. We don't want
+ * that to happen... Note that we check the
+ * opcode only. "movl" has a vc bit of 0, but
+ * since a vc bit of 1 is currently reserved,
+ * we might just as well treat it like a movl.
+ */
+ ri = 0;
+ regs->cr_iip += 16;
+ }
+ }
+ ia64_psr(regs)->ri = ri;
+}
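+
+/*
+ * Worked example (editor's illustration): IA-64 bundles are 16 bytes
+ * and hold three instruction slots.  With cr_iip = 0x1000 and ri = 2,
+ * the incremented slot number (3) runs past the end of the bundle, so
+ * execution resumes at cr_iip = 0x1010, slot 0.
+ */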
+
+void
+ia64_decrement_ip (struct pt_regs *regs)
+{
+ unsigned long w0, w1, ri = ia64_psr(regs)->ri - 1;
+
+ if (ia64_psr(regs)->ri == 0) {
+ regs->cr_iip -= 16;
+ ri = 2;
+ get_user(w0, (char *) regs->cr_iip + 0);
+ get_user(w1, (char *) regs->cr_iip + 8);
+ if (((w0 >> 1) & 0xf) == IA64_MLI_TEMPLATE && (w1 >> 60) == IA64_MOVL_OPCODE) {
+ /*
+ * rfi'ing to slot 2 of an MLI bundle causes
+ * an illegal operation fault. We don't want
+ * that to happen... Note that we check the
+ * opcode only. "movl" has a vc bit of 0, but
+ * since a vc bit of 1 is currently reserved,
+ * we might just as well treat it like a movl.
+ */
+ ri = 1;
+ }
+ }
+ ia64_psr(regs)->ri = ri;
+}
+
+/*
+ * This routine is used to read the rnat bits that are stored on the
+ * kernel backing store.  Since, in general, the alignments of the user
+ * and kernel backing stores differ, this is not completely trivial.  In
+ * essence, we need to construct the user RNAT based on up to two
+ * kernel RNAT values and/or the RNAT value saved in the child's
+ * pt_regs.
+ *
+ * user rbs
+ *
+ * +--------+ <-- lowest address
+ * | slot62 |
+ * +--------+
+ * | rnat | 0x....1f8
+ * +--------+
+ * | slot00 | \
+ * +--------+ |
+ * | slot01 | > child_regs->ar_rnat
+ * +--------+ |
+ * | slot02 | / kernel rbs
+ * +--------+ +--------+
+ * <- child_regs->ar_bspstore | slot61 | <-- krbs
+ * +- - - - + +--------+
+ * | slot62 |
+ * +- - - - + +--------+
+ * | rnat |
+ * +- - - - + +--------+
+ * vrnat | slot00 |
+ * +- - - - + +--------+
+ * = =
+ * +--------+
+ * | slot00 | \
+ * +--------+ |
+ * | slot01 | > child_stack->ar_rnat
+ * +--------+ |
+ * | slot02 | /
+ * +--------+
+ * <--- child_stack->ar_bspstore
+ *
+ * The way to think of this code is as follows: bit 0 in the user rnat
+ * corresponds to some bit N (0 <= N <= 62) in one of the kernel rnat
+ * values.  The kernel rnat value holding this bit is stored in
+ * variable rnat0.  rnat1 is loaded with the kernel rnat value that
+ * forms the upper bits of the user rnat value.
+ *
+ * Boundary cases:
+ *
+ * o when reading the rnat "below" the first rnat slot on the kernel
+ * backing store, rnat0/rnat1 are set to 0 and the low order bits
+ * are merged in from pt->ar_rnat.
+ *
+ * o when reading the rnat "above" the last rnat slot on the kernel
+ * backing store, rnat0/rnat1 gets its value from sw->ar_rnat.
+ */
+static unsigned long
+get_rnat (struct pt_regs *pt, struct switch_stack *sw,
+ unsigned long *krbs, unsigned long *urnat_addr)
+{
+ unsigned long rnat0 = 0, rnat1 = 0, urnat = 0, *slot0_kaddr, kmask = ~0UL;
+ unsigned long *kbsp, *ubspstore, *rnat0_kaddr, *rnat1_kaddr, shift;
+ long num_regs;
+
+ kbsp = (unsigned long *) sw->ar_bspstore;
+ ubspstore = (unsigned long *) pt->ar_bspstore;
+ /*
+ * First, figure out which bit number slot 0 in user-land maps
+ * to in the kernel rnat. Do this by figuring out how many
+ * register slots we're beyond the user's backingstore and
+ * then computing the equivalent address in kernel space.
+ */
+ num_regs = ia64_rse_num_regs(ubspstore, urnat_addr + 1);
+ slot0_kaddr = ia64_rse_skip_regs(krbs, num_regs);
+ shift = ia64_rse_slot_num(slot0_kaddr);
+ rnat1_kaddr = ia64_rse_rnat_addr(slot0_kaddr);
+ rnat0_kaddr = rnat1_kaddr - 64;
+
+ if (ubspstore + 63 > urnat_addr) {
+ /* some bits need to be merged in from pt->ar_rnat */
+ kmask = ~((1UL << ia64_rse_slot_num(ubspstore)) - 1);
+ urnat = (pt->ar_rnat & ~kmask);
+ }
+ if (rnat0_kaddr >= kbsp) {
+ rnat0 = sw->ar_rnat;
+ } else if (rnat0_kaddr > krbs) {
+ rnat0 = *rnat0_kaddr;
+ }
+ if (rnat1_kaddr >= kbsp) {
+ rnat1 = sw->ar_rnat;
+ } else if (rnat1_kaddr > krbs) {
+ rnat1 = *rnat1_kaddr;
+ }
+ urnat |= ((rnat1 << (63 - shift)) | (rnat0 >> shift)) & kmask;
+ return urnat;
+}
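+
+/*
+ * Worked example (editor's illustration, numbers hypothetical):
+ * suppose slot 0 of the user backing store maps to kernel slot number
+ * 5, i.e., shift = 5.  Then user rnat bits 0-57 are taken from bits
+ * 5-62 of rnat0 and the remaining bits 58-62 from bits 0-4 of rnat1
+ * (bit 63 of an rnat slot is ignored).
+ */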
+
+/*
+ * The reverse of get_rnat.
+ */
+static void
+put_rnat (struct pt_regs *pt, struct switch_stack *sw,
+ unsigned long *krbs, unsigned long *urnat_addr, unsigned long urnat)
+{
+	unsigned long rnat0 = 0, rnat1 = 0, *slot0_kaddr, kmask = ~0UL, mask;
+ unsigned long *kbsp, *ubspstore, *rnat0_kaddr, *rnat1_kaddr, shift;
+ long num_regs;
+
+ kbsp = (unsigned long *) sw->ar_bspstore;
+ ubspstore = (unsigned long *) pt->ar_bspstore;
+ /*
+ * First, figure out which bit number slot 0 in user-land maps
+ * to in the kernel rnat. Do this by figuring out how many
+ * register slots we're beyond the user's backingstore and
+ * then computing the equivalent address in kernel space.
+ */
+ num_regs = (long) ia64_rse_num_regs(ubspstore, urnat_addr + 1);
+ slot0_kaddr = ia64_rse_skip_regs(krbs, num_regs);
+ shift = ia64_rse_slot_num(slot0_kaddr);
+ rnat1_kaddr = ia64_rse_rnat_addr(slot0_kaddr);
+ rnat0_kaddr = rnat1_kaddr - 64;
+
+ if (ubspstore + 63 > urnat_addr) {
+		/* some bits need to be placed in pt->ar_rnat: */
+		kmask = ~((1UL << ia64_rse_slot_num(ubspstore)) - 1);
+		pt->ar_rnat = (pt->ar_rnat & kmask) | (urnat & ~kmask);
+ }
+ /*
+ * Note: Section 11.1 of the EAS guarantees that bit 63 of an
+	 * rnat slot is ignored, so we don't have to clear it here.
+ */
+ rnat0 = (urnat << shift);
+ mask = ~0UL << shift;
+ if (rnat0_kaddr >= kbsp) {
+ sw->ar_rnat = (sw->ar_rnat & ~mask) | (rnat0 & mask);
+ } else if (rnat0_kaddr > krbs) {
+ *rnat0_kaddr = ((*rnat0_kaddr & ~mask) | (rnat0 & mask));
+ }
+
+ rnat1 = (urnat >> (63 - shift));
+ mask = ~0UL >> (63 - shift);
+ if (rnat1_kaddr >= kbsp) {
+ sw->ar_rnat = (sw->ar_rnat & ~mask) | (rnat1 & mask);
+ } else if (rnat1_kaddr > krbs) {
+ *rnat1_kaddr = ((*rnat1_kaddr & ~mask) | (rnat1 & mask));
+ }
+}
+
+long
+ia64_peek (struct pt_regs *regs, struct task_struct *child, unsigned long addr, long *val)
+{
+ unsigned long *bspstore, *krbs, krbs_num_regs, regnum, *rbs_end, *laddr;
+ struct switch_stack *child_stack;
+ struct pt_regs *child_regs;
+ size_t copied;
+ long ret;
+
+ laddr = (unsigned long *) addr;
+ child_regs = ia64_task_regs(child);
+ child_stack = (struct switch_stack *) child_regs - 1;
+ bspstore = (unsigned long *) child_regs->ar_bspstore;
+ krbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
+ krbs_num_regs = ia64_rse_num_regs(krbs, (unsigned long *) child_stack->ar_bspstore);
+ rbs_end = ia64_rse_skip_regs(bspstore, krbs_num_regs);
+ if (laddr >= bspstore && laddr <= ia64_rse_rnat_addr(rbs_end)) {
+ /*
+ * Attempt to read the RBS in an area that's actually
+ * on the kernel RBS => read the corresponding bits in
+ * the kernel RBS.
+ */
+ if (ia64_rse_is_rnat_slot(laddr))
+ ret = get_rnat(child_regs, child_stack, krbs, laddr);
+ else {
+ regnum = ia64_rse_num_regs(bspstore, laddr);
+ laddr = ia64_rse_skip_regs(krbs, regnum);
+ if (regnum >= krbs_num_regs) {
+ ret = 0;
+ } else {
+ if ((unsigned long) laddr >= (unsigned long) high_memory) {
+ printk("yikes: trying to access long at %p\n", laddr);
+ return -EIO;
+ }
+ ret = *laddr;
+ }
+ }
+ } else {
+ copied = access_process_vm(child, addr, &ret, sizeof(ret), 0);
+ if (copied != sizeof(ret))
+ return -EIO;
+ }
+ *val = ret;
+ return 0;
+}
+
+long
+ia64_poke (struct pt_regs *regs, struct task_struct *child, unsigned long addr, long val)
+{
+ unsigned long *bspstore, *krbs, krbs_num_regs, regnum, *rbs_end, *laddr;
+ struct switch_stack *child_stack;
+ struct pt_regs *child_regs;
+
+ laddr = (unsigned long *) addr;
+ child_regs = ia64_task_regs(child);
+ child_stack = (struct switch_stack *) child_regs - 1;
+ bspstore = (unsigned long *) child_regs->ar_bspstore;
+ krbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
+ krbs_num_regs = ia64_rse_num_regs(krbs, (unsigned long *) child_stack->ar_bspstore);
+ rbs_end = ia64_rse_skip_regs(bspstore, krbs_num_regs);
+ if (laddr >= bspstore && laddr <= ia64_rse_rnat_addr(rbs_end)) {
+ /*
+ * Attempt to write the RBS in an area that's actually
+ * on the kernel RBS => write the corresponding bits
+ * in the kernel RBS.
+ */
+ if (ia64_rse_is_rnat_slot(laddr))
+ put_rnat(child_regs, child_stack, krbs, laddr, val);
+ else {
+ regnum = ia64_rse_num_regs(bspstore, laddr);
+ laddr = ia64_rse_skip_regs(krbs, regnum);
+ if (regnum < krbs_num_regs) {
+ *laddr = val;
+ }
+ }
+ } else if (access_process_vm(child, addr, &val, sizeof(val), 1) != sizeof(val)) {
+ return -EIO;
+ }
+ return 0;
+}
+
+/*
+ * Ensure the state in child->thread.fph is up-to-date.
+ */
+static void
+sync_fph (struct task_struct *child)
+{
+ if (ia64_psr(ia64_task_regs(child))->mfh && ia64_get_fpu_owner() == child) {
+ ia64_save_fpu(&child->thread.fph[0]);
+ child->thread.flags |= IA64_THREAD_FPH_VALID;
+ }
+ if (!(child->thread.flags & IA64_THREAD_FPH_VALID)) {
+ memset(&child->thread.fph, 0, sizeof(child->thread.fph));
+ child->thread.flags |= IA64_THREAD_FPH_VALID;
+ }
+}
+
+asmlinkage long
+sys_ptrace (long request, pid_t pid, unsigned long addr, unsigned long data,
+ long arg4, long arg5, long arg6, long arg7, long stack)
+{
+ struct pt_regs *regs = (struct pt_regs *) &stack;
+ struct switch_stack *child_stack;
+ struct pt_regs *child_regs;
+ struct task_struct *child;
+ unsigned long flags, *base;
+ long ret, regnum;
+
+ lock_kernel();
+ ret = -EPERM;
+ if (request == PTRACE_TRACEME) {
+ /* are we already being traced? */
+ if (current->flags & PF_PTRACED)
+ goto out;
+ current->flags |= PF_PTRACED;
+ ret = 0;
+ goto out;
+ }
+
+ ret = -ESRCH;
+ read_lock(&tasklist_lock);
+ child = find_task_by_pid(pid);
+ read_unlock(&tasklist_lock);
+ if (!child)
+ goto out;
+ ret = -EPERM;
+ if (pid == 1) /* no messing around with init! */
+ goto out;
+
+ if (request == PTRACE_ATTACH) {
+ if (child == current)
+ goto out;
+ if ((!child->dumpable ||
+ (current->uid != child->euid) ||
+ (current->uid != child->suid) ||
+ (current->uid != child->uid) ||
+ (current->gid != child->egid) ||
+ (current->gid != child->sgid) ||
+ (!cap_issubset(child->cap_permitted, current->cap_permitted)) ||
+ (current->gid != child->gid)) && !capable(CAP_SYS_PTRACE))
+ goto out;
+ /* the same process cannot be attached many times */
+ if (child->flags & PF_PTRACED)
+ goto out;
+ child->flags |= PF_PTRACED;
+ if (child->p_pptr != current) {
+ unsigned long flags;
+
+ write_lock_irqsave(&tasklist_lock, flags);
+ REMOVE_LINKS(child);
+ child->p_pptr = current;
+ SET_LINKS(child);
+ write_unlock_irqrestore(&tasklist_lock, flags);
+ }
+ send_sig(SIGSTOP, child, 1);
+ ret = 0;
+ goto out;
+ }
+ ret = -ESRCH;
+ if (!(child->flags & PF_PTRACED))
+ goto out;
+ if (child->state != TASK_STOPPED) {
+ if (request != PTRACE_KILL)
+ goto out;
+ }
+ if (child->p_pptr != current)
+ goto out;
+
+ switch (request) {
+ case PTRACE_PEEKTEXT:
+ case PTRACE_PEEKDATA: /* read word at location addr */
+ ret = ia64_peek(regs, child, addr, &data);
+ if (ret == 0) {
+ ret = data;
+ regs->r8 = 0; /* ensure "ret" is not mistaken as an error code */
+ }
+ goto out;
+
+ case PTRACE_POKETEXT:
+ case PTRACE_POKEDATA: /* write the word at location addr */
+ ret = ia64_poke(regs, child, addr, data);
+ goto out;
+
+ case PTRACE_PEEKUSR: /* read the word at addr in the USER area */
+ ret = -EIO;
+ if ((addr & 0x7) != 0)
+ goto out;
+
+ if (addr < PT_CALLER_UNAT) {
+ /* accessing fph */
+ sync_fph(child);
+ addr += (unsigned long) &child->thread.fph;
+ ret = *(unsigned long *) addr;
+ } else if (addr < PT_F9+16) {
+ /* accessing switch_stack or pt_regs: */
+ child_regs = ia64_task_regs(child);
+ child_stack = (struct switch_stack *) child_regs - 1;
+ ret = *(unsigned long *) ((long) child_stack + addr - PT_CALLER_UNAT);
+
+ if (addr == PT_AR_BSP) {
+ /* ret currently contains pt_regs.loadrs */
+ unsigned long *rbs, *bspstore, ndirty;
+
+ rbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
+ bspstore = (unsigned long *) child_regs->ar_bspstore;
+ ndirty = ia64_rse_num_regs(rbs, rbs + (ret >> 19));
+ ret = (unsigned long) ia64_rse_skip_regs(bspstore, ndirty);
+ }
+ } else {
+ if (addr >= PT_IBR) {
+ regnum = (addr - PT_IBR) >> 3;
+ base = &child->thread.ibr[0];
+ } else {
+ regnum = (addr - PT_DBR) >> 3;
+ base = &child->thread.dbr[0];
+ }
+ if (regnum >= 8)
+ goto out;
+			ret = base[regnum];
+ }
+ regs->r8 = 0; /* ensure "ret" is not mistaken as an error code */
+ goto out;
+
+ case PTRACE_POKEUSR: /* write the word at addr in the USER area */
+ ret = -EIO;
+ if ((addr & 0x7) != 0)
+ goto out;
+
+ if (addr < PT_CALLER_UNAT) {
+ /* accessing fph */
+ sync_fph(child);
+ addr += (unsigned long) &child->thread.fph;
+			*(unsigned long *) addr = data;
+ } else if (addr < PT_F9+16) {
+ /* accessing switch_stack or pt_regs */
+ child_regs = ia64_task_regs(child);
+ child_stack = (struct switch_stack *) child_regs - 1;
+
+ if (addr == PT_AR_BSP) {
+ /* compute the loadrs value based on bsp and bspstore: */
+ unsigned long *rbs, *bspstore, ndirty, *kbsp;
+
+ bspstore = (unsigned long *) child_regs->ar_bspstore;
+ ndirty = ia64_rse_num_regs(bspstore, (unsigned long *) data);
+ rbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
+ kbsp = ia64_rse_skip_regs(rbs, ndirty);
+ data = (kbsp - rbs) << 19;
+ }
+ *(unsigned long *) ((long) child_stack + addr - PT_CALLER_UNAT) = data;
+ } else {
+ if (!(child->thread.flags & IA64_THREAD_DBG_VALID)) {
+ child->thread.flags |= IA64_THREAD_DBG_VALID;
+				memset(child->thread.dbr, 0, sizeof child->thread.dbr);
+				memset(child->thread.ibr, 0, sizeof child->thread.ibr);
+ }
+
+ if (addr >= PT_IBR) {
+ regnum = (addr - PT_IBR) >> 3;
+ base = &child->thread.ibr[0];
+ } else {
+ regnum = (addr - PT_DBR) >> 3;
+ base = &child->thread.dbr[0];
+ }
+ if (regnum >= 8)
+ goto out;
+ if (regnum & 1) {
+				/* force breakpoint to be effective at most for user-level: */
+ data &= ~(0x7UL << 56);
+ }
+ base[regnum] = data;
+ }
+ ret = 0;
+ goto out;
+
+ case PTRACE_SYSCALL: /* continue and stop at next (return from) syscall */
+ case PTRACE_CONT: /* restart after signal. */
+ ret = -EIO;
+ if (data > _NSIG)
+ goto out;
+ if (request == PTRACE_SYSCALL)
+ child->flags |= PF_TRACESYS;
+ else
+ child->flags &= ~PF_TRACESYS;
+ child->exit_code = data;
+
+		/* make sure the single-step/taken-branch trap bits are not set: */
+ ia64_psr(ia64_task_regs(child))->ss = 0;
+ ia64_psr(ia64_task_regs(child))->tb = 0;
+
+ wake_up_process(child);
+ ret = 0;
+ goto out;
+
+ case PTRACE_KILL:
+ /*
+ * Make the child exit. Best I can do is send it a
+ * sigkill. Perhaps it should be put in the status
+ * that it wants to exit.
+ */
+ if (child->state == TASK_ZOMBIE) /* already dead */
+ goto out;
+ child->exit_code = SIGKILL;
+
+		/* make sure the single-step/taken-branch trap bits are not set: */
+ ia64_psr(ia64_task_regs(child))->ss = 0;
+ ia64_psr(ia64_task_regs(child))->tb = 0;
+
+ wake_up_process(child);
+ ret = 0;
+ goto out;
+
+ case PTRACE_SINGLESTEP: /* let child execute for one instruction */
+ case PTRACE_SINGLEBLOCK:
+ ret = -EIO;
+ if (data > _NSIG)
+ goto out;
+
+ child->flags &= ~PF_TRACESYS;
+ if (request == PTRACE_SINGLESTEP) {
+ ia64_psr(ia64_task_regs(child))->ss = 1;
+ } else {
+ ia64_psr(ia64_task_regs(child))->tb = 1;
+ }
+ child->exit_code = data;
+
+ /* give it a chance to run. */
+ wake_up_process(child);
+ ret = 0;
+ goto out;
+
+ case PTRACE_DETACH: /* detach a process that was attached. */
+ ret = -EIO;
+ if (data > _NSIG)
+ goto out;
+
+ child->flags &= ~(PF_PTRACED|PF_TRACESYS);
+ child->exit_code = data;
+ write_lock_irqsave(&tasklist_lock, flags);
+ REMOVE_LINKS(child);
+ child->p_pptr = child->p_opptr;
+ SET_LINKS(child);
+ write_unlock_irqrestore(&tasklist_lock, flags);
+
+		/* make sure the single-step/taken-branch trap bits are not set: */
+ ia64_psr(ia64_task_regs(child))->ss = 0;
+ ia64_psr(ia64_task_regs(child))->tb = 0;
+
+ wake_up_process(child);
+ ret = 0;
+ goto out;
+
+ default:
+ ret = -EIO;
+ goto out;
+ }
+ out:
+ unlock_kernel();
+ return ret;
+}
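+
+#if 0
+/*
+ * Illustrative user-level usage (editor's sketch, not part of this
+ * patch): read a stopped child's r8 (the syscall return-value
+ * register) via PTRACE_PEEKUSR.  PT_R8 is assumed to be provided by
+ * <asm/ptrace_offsets.h>; error handling is omitted.
+ */
+#include <sys/ptrace.h>
+#include <asm/ptrace_offsets.h>
+
+long
+peek_r8 (pid_t pid)
+{
+	return ptrace(PTRACE_PEEKUSR, pid, PT_R8, 0);
+}
+#endif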
+
+void
+syscall_trace (void)
+{
+ if ((current->flags & (PF_PTRACED|PF_TRACESYS)) != (PF_PTRACED|PF_TRACESYS))
+ return;
+ current->exit_code = SIGTRAP;
+ set_current_state(TASK_STOPPED);
+ notify_parent(current, SIGCHLD);
+ schedule();
+ /*
+ * This isn't the same as continuing with a signal, but it
+ * will do for normal use. strace only continues with a
+ * signal if the stopping signal is not SIGTRAP. -brl
+ */
+ if (current->exit_code) {
+ send_sig(current->exit_code, current, 1);
+ current->exit_code = 0;
+ }
+}
--- /dev/null
+/*
+ * System Abstraction Layer (SAL) interface routines.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999 VA Linux Systems
+ * Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
+ */
+#include <linux/config.h>
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/smp.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+
+#include <asm/page.h>
+#include <asm/sal.h>
+#include <asm/pal.h>
+
+#define SAL_DEBUG
+
+spinlock_t sal_lock = SPIN_LOCK_UNLOCKED;
+
+static struct {
+ void *addr; /* function entry point */
+ void *gpval; /* gp value to use */
+} pdesc;
+
+static long
+default_handler (void)
+{
+ return -1;
+}
+
+ia64_sal_handler ia64_sal = (ia64_sal_handler) default_handler;
+
+const char *
+ia64_sal_strerror (long status)
+{
+ const char *str;
+ switch (status) {
+ case 0: str = "Call completed without error"; break;
+ case 1: str = "Effect a warm boot of the system to complete "
+ "the update"; break;
+ case -1: str = "Not implemented"; break;
+ case -2: str = "Invalid argument"; break;
+ case -3: str = "Call completed with error"; break;
+ case -4: str = "Virtual address not registered"; break;
+ case -5: str = "No information available"; break;
+ case -6: str = "Insufficient space to add the entry"; break;
+ case -7: str = "Invalid entry_addr value"; break;
+ case -8: str = "Invalid interrupt vector"; break;
+ case -9: str = "Requested memory not available"; break;
+ case -10: str = "Unable to write to the NVM device"; break;
+ case -11: str = "Invalid partition type specified"; break;
+ case -12: str = "Invalid NVM_Object id specified"; break;
+ case -13: str = "NVM_Object already has the maximum number "
+ "of partitions"; break;
+ case -14: str = "Insufficient space in partition for the "
+ "requested write sub-function"; break;
+ case -15: str = "Insufficient data buffer space for the "
+ "requested read record sub-function"; break;
+ case -16: str = "Scratch buffer required for the write/delete "
+ "sub-function"; break;
+ case -17: str = "Insufficient space in the NVM_Object for the "
+ "requested create sub-function"; break;
+ case -18: str = "Invalid value specified in the partition_rec "
+ "argument"; break;
+ case -19: str = "Record oriented I/O not supported for this "
+ "partition"; break;
+ case -20: str = "Bad format of record to be written or "
+ "required keyword variable not "
+ "specified"; break;
+ default: str = "Unknown SAL status code"; break;
+ }
+ return str;
+}
+
+static void __init
+ia64_sal_handler_init (void *entry_point, void *gpval)
+{
+ /* fill in the SAL procedure descriptor and point ia64_sal to it: */
+ pdesc.addr = entry_point;
+ pdesc.gpval = gpval;
+ ia64_sal = (ia64_sal_handler) &pdesc;
+}
+
+
+void __init
+ia64_sal_init (struct ia64_sal_systab *systab)
+{
+ char *p;
+ struct ia64_sal_desc_entry_point *ep;
+ int i;
+
+ if (!systab) {
+ printk("Hmm, no SAL System Table.\n");
+ return;
+ }
+
+ if (strncmp(systab->signature, "SST_", 4) != 0)
+		printk("bad signature in system table!\n");
+
+ printk("SAL v%u.%02u: ia32bios=%s, oem=%.32s, product=%.32s\n",
+ systab->sal_rev_major, systab->sal_rev_minor,
+ systab->ia32_bios_present ? "present" : "absent",
+ systab->oem_id, systab->product_id);
+
+ p = (char *) (systab + 1);
+ for (i = 0; i < systab->entry_count; i++) {
+ /*
+		 * The first byte of each entry contains the type descriptor.
+ */
+ switch (*p) {
+ case SAL_DESC_ENTRY_POINT:
+ ep = (struct ia64_sal_desc_entry_point *) p;
+#ifdef SAL_DEBUG
+ printk("sal[%d] - entry: pal_proc=0x%lx, sal_proc=0x%lx\n",
+ i, ep->pal_proc, ep->sal_proc);
+#endif
+ ia64_pal_handler_init(__va(ep->pal_proc));
+ ia64_sal_handler_init(__va(ep->sal_proc), __va(ep->gp));
+ break;
+
+ case SAL_DESC_AP_WAKEUP:
+#ifdef CONFIG_SMP
+ {
+ struct ia64_sal_desc_ap_wakeup *ap = (void *) p;
+# ifdef SAL_DEBUG
+ printk("sal[%d] - wakeup type %x, 0x%lx\n",
+ i, ap->mechanism, ap->vector);
+# endif
+ switch (ap->mechanism) {
+ case IA64_SAL_AP_EXTERNAL_INT:
+ ap_wakeup_vector = ap->vector;
+# ifdef SAL_DEBUG
+ printk("SAL: AP wakeup using external interrupt; "
+ "vector 0x%lx\n", ap_wakeup_vector);
+# endif
+ break;
+
+ default:
+ printk("SAL: AP wakeup mechanism unsupported!\n");
+ break;
+ }
+ break;
+ }
+#endif
+ }
+ p += SAL_DESC_SIZE(*p);
+ }
+}
--- /dev/null
+/*
+ * gcc currently does not conform to the ia-64 calling convention as far
+ * as returning function values are concerned. Instead of returning
+ * values up to 32 bytes in size in r8-r11, gcc returns any value
+ * bigger than a doubleword via a structure that's allocated by the
+ * caller and whose address is passed into the function. Since
+ * SAL_PROC returns values according to the calling convention, this
+ * stub takes care of copying r8-r11 to the place where gcc expects
+ * them.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
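+/*
+ * Editor's illustration (hypothetical declaration, not part of this
+ * patch): from C, the stub lets a SAL call be treated as returning a
+ * structure by value, e.g.,
+ *
+ *	struct sal_ret { long status, v0, v1, v2; };
+ *	extern struct sal_ret ia64_sal_stub (unsigned long index, ...);
+ *
+ * The stub itself copies r8-r11 into the caller-allocated structure
+ * that gcc passes behind the scenes.
+ */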
+#ifndef __GCC_MULTIREG_RETVALS__
+ .text
+ .psr abi64
+ .psr lsb
+ .lsb
+
+ .align 16
+ .global ia64_sal_stub
+ia64_sal_stub:
+ /*
+ * Sheesh, the Cygnus backend passes the pointer to a return value structure in
+ * in0 whereas the HP backend passes it in r8. Don't you hate those little
+ * differences...
+ */
+#ifdef GCC_RETVAL_POINTER_IN_R8
+ adds r2=-24,sp
+ adds sp=-48,sp
+ mov r14=rp
+ ;;
+ st8 [r2]=r8,8 // save pointer to return value
+ addl r3=@ltoff(ia64_sal),gp
+ ;;
+ ld8 r3=[r3]
+ st8 [r2]=gp,8 // save global pointer
+ ;;
+ ld8 r3=[r3] // fetch the value of ia64_sal
+ st8 [r2]=r14 // save return pointer
+ ;;
+ ld8 r2=[r3],8 // load function's entry point
+ ;;
+ ld8 gp=[r3] // load function's global pointer
+ ;;
+ mov b6=r2
+ br.call.sptk.few rp=b6
+.ret0: adds r2=24,sp
+ ;;
+ ld8 r3=[r2],8 // restore pointer to return value
+ ;;
+ ld8 gp=[r2],8 // restore global pointer
+ st8 [r3]=r8,8
+ ;;
+ ld8 r14=[r2] // restore return pointer
+ st8 [r3]=r9,8
+ ;;
+ mov rp=r14
+ st8 [r3]=r10,8
+ ;;
+ st8 [r3]=r11,8
+ adds sp=48,sp
+ br.sptk.few rp
+#else
+ /*
+ * On input:
+ * in0 = pointer to return value structure
+ * in1 = index of SAL function to call
+ * in2..inN = remaining args to SAL call
+ */
+ /*
+	 * We allocate one input and eight output registers such that the br.call instruction
+ * will rename in1-in7 to in0-in6---exactly what we want because SAL doesn't want to
+ * see the pointer to the return value structure.
+ */
+ alloc r15=ar.pfs,1,0,8,0
+
+ adds r2=-24,sp
+ adds sp=-48,sp
+ mov r14=rp
+ ;;
+ st8 [r2]=r15,8 // save ar.pfs
+ addl r3=@ltoff(ia64_sal),gp
+ ;;
+ ld8 r3=[r3] // get address of ia64_sal
+ st8 [r2]=gp,8 // save global pointer
+ ;;
+ ld8 r3=[r3] // get value of ia64_sal
+ st8 [r2]=r14,8 // save return address (rp)
+ ;;
+ ld8 r2=[r3],8 // load function's entry point
+ ;;
+ ld8 gp=[r3] // load function's global pointer
+ mov b6=r2
+ br.call.sptk.few rp=b6 // make SAL call
+.ret0: adds r2=24,sp
+ ;;
+ ld8 r15=[r2],8 // restore ar.pfs
+ ;;
+ ld8 gp=[r2],8 // restore global pointer
+ st8 [in0]=r8,8 // store 1. dword of return value
+ ;;
+ ld8 r14=[r2] // restore return address (rp)
+ st8 [in0]=r9,8 // store 2. dword of return value
+ ;;
+ mov rp=r14
+ st8 [in0]=r10,8 // store 3. dword of return value
+ ;;
+ st8 [in0]=r11,8
+ adds sp=48,sp // pop stack frame
+ mov ar.pfs=r15
+ br.ret.sptk.few rp
+#endif
+
+ .endp ia64_sal_stub
+#endif /* __GCC_MULTIREG_RETVALS__ */
--- /dev/null
+/*
+ * IA-64 semaphore implementation (derived from x86 version).
+ *
+ * Copyright (C) 1999-2000 Hewlett-Packard Co
+ * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+/*
+ * Semaphores are implemented using a two-way counter: The "count"
+ * variable is decremented for each process that tries to acquire the
+ * semaphore, while the "sleepers" variable is a count of such
+ * acquires.
+ *
+ * Notably, the inline "up()" and "down()" functions can efficiently
+ * test if they need to do any extra work (up needs to do something
+ * only if count was negative before the increment operation).
+ *
+ * "sleepers" and the contention routine ordering is protected by the
+ * semaphore spinlock.
+ *
+ * Note that these functions are only called when there is contention
+ * on the lock, and as such all this is the "non-critical" part of the
+ * whole semaphore business. The critical part is the inline stuff in
+ * <asm/semaphore.h> where we want to avoid any extra jumps and calls.
+ */
+#include <linux/sched.h>
+
+#include <asm/semaphore.h>
+
+/*
+ * Logic:
+ * - Only on a boundary condition do we need to care. When we go
+ * from a negative count to a non-negative, we wake people up.
+ * - When we go from a non-negative count to a negative do we
+ * (a) synchronize with the "sleepers" count and (b) make sure
+ * that we're on the wakeup list before we synchronize so that
+ * we cannot lose wakeup events.
+ */
+
+void
+__up (struct semaphore *sem)
+{
+ wake_up(&sem->wait);
+}
+
+static spinlock_t semaphore_lock = SPIN_LOCK_UNLOCKED;
+
+void
+__down (struct semaphore *sem)
+{
+ struct task_struct *tsk = current;
+ DECLARE_WAITQUEUE(wait, tsk);
+ tsk->state = TASK_UNINTERRUPTIBLE|TASK_EXCLUSIVE;
+ add_wait_queue_exclusive(&sem->wait, &wait);
+
+ spin_lock_irq(&semaphore_lock);
+ sem->sleepers++;
+ for (;;) {
+ int sleepers = sem->sleepers;
+
+ /*
+ * Add "everybody else" into it. They aren't
+ * playing, because we own the spinlock.
+ */
+ if (!atomic_add_negative(sleepers - 1, &sem->count)) {
+ sem->sleepers = 0;
+ break;
+ }
+ sem->sleepers = 1; /* us - see -1 above */
+ spin_unlock_irq(&semaphore_lock);
+
+ schedule();
+ tsk->state = TASK_UNINTERRUPTIBLE|TASK_EXCLUSIVE;
+ spin_lock_irq(&semaphore_lock);
+ }
+ spin_unlock_irq(&semaphore_lock);
+ remove_wait_queue(&sem->wait, &wait);
+ tsk->state = TASK_RUNNING;
+ wake_up(&sem->wait);
+}
+
+int
+__down_interruptible (struct semaphore * sem)
+{
+ int retval = 0;
+ struct task_struct *tsk = current;
+ DECLARE_WAITQUEUE(wait, tsk);
+ tsk->state = TASK_INTERRUPTIBLE|TASK_EXCLUSIVE;
+ add_wait_queue_exclusive(&sem->wait, &wait);
+
+ spin_lock_irq(&semaphore_lock);
+	sem->sleepers++;
+ for (;;) {
+ int sleepers = sem->sleepers;
+
+ /*
+ * With signals pending, this turns into
+ * the trylock failure case - we won't be
+		 * sleeping, and we can't get the lock as
+ * it has contention. Just correct the count
+ * and exit.
+ */
+ if (signal_pending(current)) {
+ retval = -EINTR;
+ sem->sleepers = 0;
+ atomic_add(sleepers, &sem->count);
+ break;
+ }
+
+ /*
+ * Add "everybody else" into it. They aren't
+ * playing, because we own the spinlock. The
+ * "-1" is because we're still hoping to get
+ * the lock.
+ */
+ if (!atomic_add_negative(sleepers - 1, &sem->count)) {
+ sem->sleepers = 0;
+ break;
+ }
+ sem->sleepers = 1; /* us - see -1 above */
+ spin_unlock_irq(&semaphore_lock);
+
+ schedule();
+ tsk->state = TASK_INTERRUPTIBLE|TASK_EXCLUSIVE;
+ spin_lock_irq(&semaphore_lock);
+ }
+ spin_unlock_irq(&semaphore_lock);
+ tsk->state = TASK_RUNNING;
+ remove_wait_queue(&sem->wait, &wait);
+ wake_up(&sem->wait);
+ return retval;
+}
+
+/*
+ * Trylock failed - make sure we correct for having decremented the
+ * count.
+ */
+int
+__down_trylock (struct semaphore *sem)
+{
+ int sleepers;
+
+ spin_lock_irq(&semaphore_lock);
+ sleepers = sem->sleepers + 1;
+ sem->sleepers = 0;
+
+ /*
+ * Add "everybody else" and us into it. They aren't
+ * playing, because we own the spinlock.
+ */
+ if (!atomic_add_negative(sleepers, &sem->count))
+ wake_up(&sem->wait);
+
+ spin_unlock_irq(&semaphore_lock);
+ return 1;
+}
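+
+#if 0
+/*
+ * Illustrative usage (editor's sketch; "my_sem" is hypothetical and
+ * assumes the usual DECLARE_MUTEX() from <asm/semaphore.h>): the
+ * inline fast paths fall through to the contention routines above
+ * only when the semaphore is already held.
+ */
+static DECLARE_MUTEX(my_sem);
+
+void
+critical_section_example (void)
+{
+	down(&my_sem);		/* may call __down() if contended */
+	/* ... serialized work ... */
+	up(&my_sem);		/* may call __up() if there are sleepers */
+}
+#endif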
+
+/*
+ * Helper routines for rw semaphores. These could be optimized some
+ * more, but since they're off the critical path, I prefer clarity for
+ * now...
+ */
+
+/*
+ * This gets called if we failed to acquire the lock, but we're biased
+ * to acquire the lock by virtue of causing the count to change from 0
+ * to -1. Being biased, we sleep and attempt to grab the lock until
+ * we succeed. When this function returns, we own the lock.
+ */
+static inline void
+down_read_failed_biased (struct rw_semaphore *sem)
+{
+ struct task_struct *tsk = current;
+ DECLARE_WAITQUEUE(wait, tsk);
+
+ add_wait_queue(&sem->wait, &wait); /* put ourselves at the head of the list */
+
+ for (;;) {
+ if (sem->read_bias_granted && xchg(&sem->read_bias_granted, 0))
+ break;
+ set_task_state(tsk, TASK_UNINTERRUPTIBLE);
+ if (!sem->read_bias_granted)
+ schedule();
+ }
+ remove_wait_queue(&sem->wait, &wait);
+ tsk->state = TASK_RUNNING;
+}
+
+/*
+ * This gets called if we failed to acquire the lock and we are not
+ * biased to acquire the lock. We undo the decrement that was
+ * done earlier, go to sleep, and then attempt to re-acquire the
+ * lock afterwards.
+ */
+static inline void
+down_read_failed (struct rw_semaphore *sem)
+{
+ struct task_struct *tsk = current;
+ DECLARE_WAITQUEUE(wait, tsk);
+
+ /*
+ * Undo the decrement we did in down_read() and check if we
+ * need to wake up someone.
+ */
+ __up_read(sem);
+
+ add_wait_queue(&sem->wait, &wait);
+ while (sem->count < 0) {
+ set_task_state(tsk, TASK_UNINTERRUPTIBLE);
+ if (sem->count >= 0)
+ break;
+ schedule();
+ }
+ remove_wait_queue(&sem->wait, &wait);
+ tsk->state = TASK_RUNNING;
+}
+
+/*
+ * Wait for the lock to become unbiased. Readers are non-exclusive.
+ */
+void
+__down_read_failed (struct rw_semaphore *sem, long count)
+{
+ struct task_struct *tsk = current;
+ DECLARE_WAITQUEUE(wait, tsk);
+
+ while (1) {
+ if (count == -1) {
+ down_read_failed_biased(sem);
+ return;
+ }
+ /* unbiased */
+ down_read_failed(sem);
+
+ count = ia64_fetch_and_add(-1, &sem->count);
+ if (count >= 0)
+ return;
+ }
+}
+
+static inline void
+down_write_failed_biased (struct rw_semaphore *sem)
+{
+ struct task_struct *tsk = current;
+ DECLARE_WAITQUEUE(wait, tsk);
+
+ /* put ourselves at the end of the list */
+ add_wait_queue_exclusive(&sem->write_bias_wait, &wait);
+
+ for (;;) {
+ if (sem->write_bias_granted && xchg(&sem->write_bias_granted, 0))
+ break;
+ set_task_state(tsk, TASK_UNINTERRUPTIBLE | TASK_EXCLUSIVE);
+ if (!sem->write_bias_granted)
+ schedule();
+ }
+
+ remove_wait_queue(&sem->write_bias_wait, &wait);
+ tsk->state = TASK_RUNNING;
+
+ /*
+ * If the lock is currently unbiased, awaken the sleepers
+ * FIXME: this wakes up the readers early in a bit of a
+ * stampede -> bad!
+ */
+ if (sem->count >= 0)
+ wake_up(&sem->wait);
+}
+
+
+static inline void
+down_write_failed (struct rw_semaphore *sem)
+{
+ struct task_struct *tsk = current;
+ DECLARE_WAITQUEUE(wait, tsk);
+
+ __up_write(sem); /* this takes care of granting the lock */
+
+ add_wait_queue_exclusive(&sem->wait, &wait);
+
+ while (sem->count < 0) {
+ set_task_state(tsk, TASK_UNINTERRUPTIBLE | TASK_EXCLUSIVE);
+ if (sem->count >= 0)
+			break;	/* we must attempt to acquire or bias the lock */
+ schedule();
+ }
+
+ remove_wait_queue(&sem->wait, &wait);
+ tsk->state = TASK_RUNNING;
+}
+
+
+/*
+ * Wait for the lock to become unbiased. Since we're a writer, we'll
+ * make ourselves exclusive.
+ */
+void
+__down_write_failed (struct rw_semaphore *sem, long count)
+{
+ long old_count;
+
+ while (1) {
+ if (count == -RW_LOCK_BIAS) {
+ down_write_failed_biased(sem);
+ return;
+ }
+ down_write_failed(sem);
+
+ do {
+ old_count = sem->count;
+ count = old_count - RW_LOCK_BIAS;
+ } while (cmpxchg(&sem->count, old_count, count) != old_count);
+
+ if (count == 0)
+ return;
+ }
+}
+
+void
+__rwsem_wake (struct rw_semaphore *sem, long count)
+{
+ wait_queue_head_t *wq;
+
+ if (count == 0) {
+ /* wake a writer */
+ if (xchg(&sem->write_bias_granted, 1))
+ BUG();
+ wq = &sem->write_bias_wait;
+ } else {
+ /* wake reader(s) */
+ if (xchg(&sem->read_bias_granted, 1))
+ BUG();
+ wq = &sem->wait;
+ }
+ wake_up(wq); /* wake up everyone on the wait queue */
+}
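+
+#if 0
+/*
+ * Illustrative usage (editor's sketch; "my_rwsem" is hypothetical and
+ * assumed to be initialized elsewhere): several readers may hold the
+ * lock at once, a writer excludes everyone, and the slow paths above
+ * run only when the inline fast path in <asm/semaphore.h> fails.
+ */
+extern struct rw_semaphore my_rwsem;
+
+void
+reader_example (void)
+{
+	down_read(&my_rwsem);
+	/* ... read shared data structures ... */
+	up_read(&my_rwsem);
+}
+#endif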
--- /dev/null
+/*
+ * Architecture-specific setup.
+ *
+ * Copyright (C) 1998-2000 Hewlett-Packard Co
+ * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998, 1999 Stephane Eranian <eranian@hpl.hp.com>
+ * Copyright (C) 2000, Rohit Seth <rohit.seth@intel.com>
+ * Copyright (C) 1999 VA Linux Systems
+ * Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
+ *
+ * 02/04/00 D.Mosberger some more get_cpuinfo fixes...
+ * 02/01/00 R.Seth fixed get_cpuinfo for SMP
+ * 01/07/99 S.Eranian added the support for command line argument
+ * 06/24/99 W.Drummond added boot_cpu_data.
+ */
+#include <linux/config.h>
+#include <linux/init.h>
+
+#include <linux/bootmem.h>
+#include <linux/delay.h>
+#include <linux/kernel.h>
+#include <linux/reboot.h>
+#include <linux/sched.h>
+#include <linux/string.h>
+#include <linux/threads.h>
+#include <linux/console.h>
+
+#include <asm/acpi-ext.h>
+#include <asm/page.h>
+#include <asm/machvec.h>
+#include <asm/processor.h>
+#include <asm/sal.h>
+#include <asm/system.h>
+#include <asm/efi.h>
+
+extern char _end;
+
+/* cpu_data[bootstrap_processor] is data for the bootstrap processor: */
+struct cpuinfo_ia64 cpu_data[NR_CPUS];
+
+unsigned long ia64_cycles_per_usec;
+struct ia64_boot_param ia64_boot_param;
+struct screen_info screen_info;
+unsigned long cpu_initialized = 0;
+/* This tells _start which CPU is booting. */
+int cpu_now_booting = 0;
+
+#define COMMAND_LINE_SIZE 512
+
+char saved_command_line[COMMAND_LINE_SIZE]; /* used in proc filesystem */
+
+static int
+find_max_pfn (unsigned long start, unsigned long end, void *arg)
+{
+ unsigned long *max_pfn = arg, pfn;
+
+ pfn = (PAGE_ALIGN(end - 1) - PAGE_OFFSET) >> PAGE_SHIFT;
+ if (pfn > *max_pfn)
+ *max_pfn = pfn;
+ return 0;
+}
+
+static int
+free_available_memory (unsigned long start, unsigned long end, void *arg)
+{
+# define KERNEL_END ((unsigned long) &_end)
+# define MIN(a,b) ((a) < (b) ? (a) : (b))
+# define MAX(a,b) ((a) > (b) ? (a) : (b))
+ unsigned long range_start, range_end;
+
+ range_start = MIN(start, KERNEL_START);
+ range_end = MIN(end, KERNEL_START);
+
+ /*
+ * XXX This should not be necessary, but the bootmem allocator
+ * is broken and fails to work correctly when the starting
+ * address is not properly aligned.
+ */
+ range_start = PAGE_ALIGN(range_start);
+
+ if (range_start < range_end)
+ free_bootmem(__pa(range_start), range_end - range_start);
+
+ range_start = MAX(start, KERNEL_END);
+ range_end = MAX(end, KERNEL_END);
+
+ /*
+ * XXX This should not be necessary, but the bootmem allocator
+ * is broken and fails to work correctly when the starting
+ * address is not properly aligned.
+ */
+ range_start = PAGE_ALIGN(range_start);
+
+ if (range_start < range_end)
+ free_bootmem(__pa(range_start), range_end - range_start);
+
+ return 0;
+}
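Both callbacks above follow the same efi_memmap_walk() contract: each is
invoked once per available memory range with (start, end, arg) and returns 0
to continue the walk. A stand-alone sketch of the pattern; the walker and
the sample ranges are invented for illustration.

/* Minimal model of the (start, end, arg) memory-map walk. */
#include <stdio.h>

typedef int (*walk_fn)(unsigned long start, unsigned long end, void *arg);

static void memmap_walk(walk_fn fn, void *arg)
{
	static const unsigned long ranges[][2] = {
		{ 0x00100000, 0x00800000 },
		{ 0x01000000, 0x04000000 },
	};
	for (unsigned int i = 0; i < 2; i++)
		if (fn(ranges[i][0], ranges[i][1], arg))
			break;	/* a non-zero return stops the walk */
}

static int count_bytes(unsigned long start, unsigned long end, void *arg)
{
	*(unsigned long *) arg += end - start;
	return 0;
}

int main(void)
{
	unsigned long total = 0;

	memmap_walk(count_bytes, &total);
	printf("total usable: %lu bytes\n", total);
	return 0;
}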
+
+void __init
+setup_arch (char **cmdline_p)
+{
+ unsigned long max_pfn, bootmap_start, bootmap_size;
+
+ /*
+ * The secondary bootstrap loader passes us the boot
+ * parameters at the beginning of the ZERO_PAGE, so let's
+ * stash away those values before ZERO_PAGE gets cleared out.
+ */
+ memcpy(&ia64_boot_param, (void *) ZERO_PAGE_ADDR, sizeof(ia64_boot_param));
+
+ efi_init();
+
+ max_pfn = 0;
+ efi_memmap_walk(find_max_pfn, &max_pfn);
+
+ /*
+ * This is wrong, wrong, wrong. Darn it, you'd think if they
+ * change APIs, they'd do things for the better. Grumble...
+ */
+ bootmap_start = PAGE_ALIGN(__pa(&_end));
+ bootmap_size = init_bootmem(bootmap_start >> PAGE_SHIFT, max_pfn);
+
+ efi_memmap_walk(free_available_memory, 0);
+
+ reserve_bootmem(bootmap_start, bootmap_size);
+#if 0
+ /* XXX fix me */
+ init_mm.start_code = (unsigned long) &_stext;
+ init_mm.end_code = (unsigned long) &_etext;
+ init_mm.end_data = (unsigned long) &_edata;
+ init_mm.brk = (unsigned long) &_end;
+
+ code_resource.start = virt_to_bus(&_text);
+ code_resource.end = virt_to_bus(&_etext) - 1;
+ data_resource.start = virt_to_bus(&_etext);
+ data_resource.end = virt_to_bus(&_edata) - 1;
+#endif
+
+ /* process SAL system table: */
+ ia64_sal_init(efi.sal_systab);
+
+ *cmdline_p = __va(ia64_boot_param.command_line);
+ strncpy(saved_command_line, *cmdline_p, sizeof(saved_command_line));
+ saved_command_line[COMMAND_LINE_SIZE-1] = '\0'; /* for safety */
+
+ printk("args to kernel: %s\n", *cmdline_p);
+
+#ifndef CONFIG_SMP
+ cpu_init();
+ identify_cpu(&cpu_data[0]);
+#endif
+
+ if (efi.acpi) {
+ /* Parse the ACPI tables */
+ acpi_parse(efi.acpi);
+ }
+
+#ifdef CONFIG_IA64_GENERIC
+ machvec_init(acpi_get_sysname());
+#endif
+
+#ifdef CONFIG_VT
+# if defined(CONFIG_VGA_CONSOLE)
+ conswitchp = &vga_con;
+# elif defined(CONFIG_DUMMY_CONSOLE)
+ conswitchp = &dummy_con;
+# endif
+#endif
+ platform_setup(cmdline_p);
+}
+
+/*
+ * Display CPU info for all CPUs.
+ */
+int
+get_cpuinfo (char *buffer)
+{
+ char family[32], model[32], features[128], *cp, *p = buffer;
+ struct cpuinfo_ia64 *c;
+ unsigned long mask;
+
+ for (c = cpu_data; c < cpu_data + NR_CPUS; ++c) {
+ if (!(cpu_initialized & (1UL << (c - cpu_data))))
+ continue;
+
+ mask = c->features;
+
+ if (c->family == 7)
+ memcpy(family, "IA-64", 6);
+ else
+ sprintf(family, "%u", c->family);
+
+ switch (c->model) {
+ case 0: strcpy(model, "Itanium"); break;
+ default: sprintf(model, "%u", c->model); break;
+ }
+
+ /* build the feature string: */
+ memcpy(features, " standard", 10);
+ cp = features;
+ if (mask & 1) {
+ strcpy(cp, " branchlong");
+ cp = strchr(cp, '\0');
+ mask &= ~1UL;
+ }
+ if (mask)
+ sprintf(cp, " 0x%lx", mask);
+
+ p += sprintf(p,
+ "CPU# %lu\n"
+ "\tvendor : %s\n"
+ "\tfamily : %s\n"
+ "\tmodel : %s\n"
+ "\trevision : %u\n"
+ "\tarchrev : %u\n"
+ "\tfeatures :%s\n" /* don't change this---it _is_ right! */
+ "\tcpu number : %lu\n"
+ "\tcpu regs : %u\n"
+ "\tcpu MHz : %lu.%06lu\n"
+ "\titc MHz : %lu.%06lu\n"
+ "\tBogoMIPS : %lu.%02lu\n\n",
+ c - cpu_data, c->vendor, family, model, c->revision, c->archrev,
+ features,
+ c->ppn, c->number, c->proc_freq / 1000000, c->proc_freq % 1000000,
+ c->itc_freq / 1000000, c->itc_freq % 1000000,
+ loops_per_sec() / 500000, (loops_per_sec() / 5000) % 100);
+ }
+ return p - buffer;
+}
+
+void
+identify_cpu (struct cpuinfo_ia64 *c)
+{
+ union {
+ unsigned long bits[5];
+ struct {
+ /* id 0 & 1: */
+ char vendor[16];
+
+ /* id 2 */
+ u64 ppn; /* processor serial number */
+
+ /* id 3: */
+ unsigned number : 8;
+ unsigned revision : 8;
+ unsigned model : 8;
+ unsigned family : 8;
+ unsigned archrev : 8;
+ unsigned reserved : 24;
+
+ /* id 4: */
+ u64 features;
+ } field;
+ } cpuid;
+ int i;
+
+ for (i = 0; i < 5; ++i) {
+ cpuid.bits[i] = ia64_get_cpuid(i);
+ }
+
+#ifdef CONFIG_SMP
+ /*
+ * XXX Instead of copying the ITC info from the bootstrap
+ * processor, ia64_init_itm() should be done per CPU. That
+ * should get you the right info. --davidm 1/24/00
+ */
+ if (c != &cpu_data[bootstrap_processor]) {
+ memset(c, 0, sizeof(struct cpuinfo_ia64));
+ c->proc_freq = cpu_data[bootstrap_processor].proc_freq;
+ c->itc_freq = cpu_data[bootstrap_processor].itc_freq;
+ c->cyc_per_usec = cpu_data[bootstrap_processor].cyc_per_usec;
+ c->usec_per_cyc = cpu_data[bootstrap_processor].usec_per_cyc;
+ }
+#else
+ memset(c, 0, sizeof(struct cpuinfo_ia64));
+#endif
+
+ memcpy(c->vendor, cpuid.field.vendor, 16);
+#ifdef CONFIG_IA64_SOFTSDV_HACKS
+ /* BUG: SoftSDV doesn't support the cpuid registers. */
+ if (c->vendor[0] == '\0')
+ memcpy(c->vendor, "Intel", 6);
+#endif
+ c->ppn = cpuid.field.ppn;
+ c->number = cpuid.field.number;
+ c->revision = cpuid.field.revision;
+ c->model = cpuid.field.model;
+ c->family = cpuid.field.family;
+ c->archrev = cpuid.field.archrev;
+ c->features = cpuid.field.features;
+#ifdef CONFIG_SMP
+ c->loops_per_sec = loops_per_sec;
+#endif
+}
+
+/*
+ * cpu_init() initializes state that is per-CPU. This function acts
+ * as a 'CPU state barrier', nothing should get across.
+ */
+void
+cpu_init (void)
+{
+ int nr = smp_processor_id();
+
+ /* Clear the stack memory reserved for pt_regs: */
+ memset(ia64_task_regs(current), 0, sizeof(struct pt_regs));
+
+ /*
+ * Initialize default control register to defer speculative
+ * faults. On a speculative load, we want to defer access
+ * right, key miss, and key permission faults. We currently
+ * do NOT defer TLB misses, page-not-present, access bit, or
+ * debug faults but kernel code should not rely on any
+ * particular setting of these bits.
+ */
+ ia64_set_dcr(IA64_DCR_DR | IA64_DCR_DK | IA64_DCR_DX | IA64_DCR_PP);
+ ia64_set_fpu_owner(0); /* initialize ar.k5 */
+
+ if (test_and_set_bit(nr, &cpu_initialized)) {
+ printk("CPU#%d already initialized!\n", nr);
+ machine_halt();
+ }
+ atomic_inc(&init_mm.mm_count);
+ current->active_mm = &init_mm;
+}
--- /dev/null
+/*
+ * Architecture-specific signal handling support.
+ *
+ * Copyright (C) 1999-2000 Hewlett-Packard Co
+ * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * Derived from i386 and Alpha versions.
+ */
+
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/ptrace.h>
+#include <linux/sched.h>
+#include <linux/signal.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/stddef.h>
+#include <linux/unistd.h>
+#include <linux/wait.h>
+
+#include <asm/ia32.h>
+#include <asm/uaccess.h>
+#include <asm/rse.h>
+#include <asm/sigcontext.h>
+
+#define DEBUG_SIG 0
+#define STACK_ALIGN 16 /* minimal alignment for stack pointer */
+#define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
+
+#if _NSIG_WORDS > 1
+# define PUT_SIGSET(k,u) __copy_to_user((u)->sig, (k)->sig, sizeof(sigset_t))
+# define GET_SIGSET(k,u) __copy_from_user((k)->sig, (u)->sig, sizeof(sigset_t))
+#else
+# define PUT_SIGSET(k,u) __put_user((k)->sig[0], &(u)->sig[0])
+# define GET_SIGSET(k,u) __get_user((k)->sig[0], &(u)->sig[0])
+#endif
+
+struct sigframe {
+ struct siginfo info;
+ struct sigcontext sc;
+};
+
+extern long sys_wait4 (int, int *, int, struct rusage *);
+extern long ia64_do_signal (sigset_t *, struct pt_regs *, long); /* forward decl */
+
+long
+ia64_rt_sigsuspend (sigset_t *uset, size_t sigsetsize, struct pt_regs *pt)
+{
+ sigset_t oldset, set;
+
+ /* XXX: Don't preclude handling different sized sigset_t's. */
+ if (sigsetsize != sizeof(sigset_t))
+ return -EINVAL;
+ if (GET_SIGSET(&set, uset))
+ return -EFAULT;
+
+ sigdelsetmask(&set, ~_BLOCKABLE);
+
+ spin_lock_irq(&current->sigmask_lock);
+ {
+ oldset = current->blocked;
+ current->blocked = set;
+ recalc_sigpending(current);
+ }
+ spin_unlock_irq(&current->sigmask_lock);
+
+ /*
+ * The return below usually returns to the signal handler. We need to
+ * pre-set the correct error code here to ensure that the right values
+ * get saved in sigcontext by ia64_do_signal.
+ */
+ pt->r8 = EINTR;
+ pt->r10 = -1;
+ while (1) {
+ set_current_state(TASK_INTERRUPTIBLE);
+ schedule();
+ if (ia64_do_signal(&oldset, pt, 1))
+ return -EINTR;
+ }
+}
+
+asmlinkage long
+sys_sigaltstack (const stack_t *uss, stack_t *uoss, long arg2, long arg3, long arg4,
+ long arg5, long arg6, long arg7, long stack)
+{
+ struct pt_regs *pt = (struct pt_regs *) &stack;
+
+ return do_sigaltstack(uss, uoss, pt->r12);
+}
+
+static long
+restore_sigcontext (struct sigcontext *sc, struct pt_regs *pt)
+{
+ struct switch_stack *sw = (struct switch_stack *) pt - 1;
+ unsigned long ip, flags, nat, um;
+ long err;
+
+ /* restore the scratch registers that always get updated during signal delivery: */
+ err = __get_user(flags, &sc->sc_flags);
+
+ err |= __get_user(nat, &sc->sc_nat);
+ err |= __get_user(ip, &sc->sc_ip); /* instruction pointer */
+ err |= __get_user(pt->ar_fpsr, &sc->sc_ar_fpsr);
+ err |= __get_user(pt->ar_pfs, &sc->sc_ar_pfs);
+ err |= __get_user(um, &sc->sc_um); /* user mask */
+ err |= __get_user(pt->ar_rsc, &sc->sc_ar_rsc);
+ err |= __get_user(pt->ar_ccv, &sc->sc_ar_ccv);
+ err |= __get_user(pt->ar_unat, &sc->sc_ar_unat);
+ err |= __get_user(pt->pr, &sc->sc_pr); /* predicates */
+ err |= __get_user(pt->b0, &sc->sc_br[0]); /* b0 (rp) */
+ err |= __get_user(pt->b6, &sc->sc_br[6]);
+ err |= __copy_from_user(&pt->r1, &sc->sc_gr[1], 3*8); /* r1-r3 */
+ err |= __copy_from_user(&pt->r8, &sc->sc_gr[8], 4*8); /* r8-r11 */
+ err |= __copy_from_user(&pt->r12, &sc->sc_gr[12], 4*8); /* r12-r15 */
+ err |= __copy_from_user(&pt->r16, &sc->sc_gr[16], 16*8); /* r16-r31 */
+
+ /* establish new instruction pointer: */
+ pt->cr_iip = ip & ~0x3UL;
+ ia64_psr(pt)->ri = ip & 0x3;
+ pt->cr_ipsr = (pt->cr_ipsr & ~IA64_PSR_UM) | (um & IA64_PSR_UM);
+
+ ia64_put_nat_bits (pt, sw, nat); /* restore the original scratch NaT bits */
+
+ if (flags & IA64_SC_FLAG_FPH_VALID) {
+ struct task_struct *fpu_owner = ia64_get_fpu_owner();
+
+ __copy_from_user(current->thread.fph, &sc->sc_fr[32], 96*16);
+ if (fpu_owner == current) {
+ __ia64_load_fpu(current->thread.fph);
+ }
+ }
+ return err;
+}
+
+/*
+ * When we get here, ((struct switch_stack *) pt - 1) is a
+ * switch_stack frame that has no defined value. Upon return, we
+ * expect sw->caller_unat to contain the new unat value. The reason
+ * we use a full switch_stack frame is so everything is symmetric
+ * with ia64_do_signal().
+ */
+long
+ia64_rt_sigreturn (struct pt_regs *pt)
+{
+ extern char ia64_strace_leave_kernel, ia64_leave_kernel;
+ struct sigcontext *sc;
+ struct siginfo si;
+ sigset_t set;
+ long retval;
+
+ sc = &((struct sigframe *) (pt->r12 + 16))->sc;
+
+ /*
+ * When we return to the previously executing context, r8 and
+ * r10 have already been setup the way we want them. Indeed,
+ * if the signal wasn't delivered while in a system call, we
+ * must not touch r8 or r10, as otherwise user-level state could
+ * be corrupted.
+ */
+ retval = (long) &ia64_leave_kernel | 1;
+ if ((current->flags & PF_TRACESYS)
+ && (sc->sc_flags & IA64_SC_FLAG_IN_SYSCALL))
+ retval = (long) &ia64_strace_leave_kernel;
+
+ if (!access_ok(VERIFY_READ, sc, sizeof(*sc)))
+ goto give_sigsegv;
+
+ if (GET_SIGSET(&set, &sc->sc_mask))
+ goto give_sigsegv;
+
+ sigdelsetmask(&set, ~_BLOCKABLE);
+ spin_lock_irq(&current->sigmask_lock);
+ current->blocked = set;
+ recalc_sigpending(current);
+ spin_unlock_irq(&current->sigmask_lock);
+
+ if (restore_sigcontext(sc, pt))
+ goto give_sigsegv;
+
+#if DEBUG_SIG
+ printk("SIG return (%s:%d): sp=%lx ip=%lx\n",
+ current->comm, current->pid, pt->r12, pt->cr_iip);
+#endif
+ /*
+ * It is more difficult to avoid calling this function than to
+ * call it and ignore errors.
+ */
+ do_sigaltstack(&sc->sc_stack, 0, pt->r12);
+ return retval;
+
+ give_sigsegv:
+ si.si_signo = SIGSEGV;
+ si.si_errno = 0;
+ si.si_code = SI_KERNEL;
+ si.si_pid = current->pid;
+ si.si_uid = current->uid;
+ si.si_addr = sc;
+ force_sig_info(SIGSEGV, &si, current);
+ return retval;
+}
+
+/*
+ * This does just the minimum required setup of sigcontext.
+ * Specifically, it only installs data that is either not knowable at
+ * the user-level or that gets modified before execution in the
+ * trampoline starts. Everything else is done at the user-level.
+ */
+static long
+setup_sigcontext (struct sigcontext *sc, sigset_t *mask, struct pt_regs *pt)
+{
+ struct switch_stack *sw = (struct switch_stack *) pt - 1;
+ struct task_struct *fpu_owner = ia64_get_fpu_owner();
+ unsigned long flags = 0, ifs, nat;
+ long err;
+
+ ifs = pt->cr_ifs;
+
+ if (on_sig_stack((unsigned long) sc))
+ flags |= IA64_SC_FLAG_ONSTACK;
+ if ((ifs & (1UL << 63)) == 0) {
+ /* if cr_ifs isn't valid, we got here through a syscall */
+ flags |= IA64_SC_FLAG_IN_SYSCALL;
+ }
+ if ((fpu_owner == current) || (current->thread.flags & IA64_THREAD_FPH_VALID)) {
+ flags |= IA64_SC_FLAG_FPH_VALID;
+ if (fpu_owner == current) {
+ __ia64_save_fpu(current->thread.fph);
+ }
+ __copy_to_user(&sc->sc_fr[32], current->thread.fph, 96*16);
+ }
+
+ /*
+ * Note: sw->ar_unat is UNDEFINED unless the process is being
+ * PTRACED. However, this is OK because the NaT bits of the
+ * preserved registers (r4-r7) are never looked at by
+ * the signal handler (the registers r4-r7 themselves are used instead).
+ */
+ nat = ia64_get_nat_bits(pt, sw);
+
+ err = __put_user(flags, &sc->sc_flags);
+ err |= __put_user(nat, &sc->sc_nat);
+ err |= PUT_SIGSET(mask, &sc->sc_mask);
+ err |= __put_user(pt->cr_ipsr & IA64_PSR_UM, &sc->sc_um);
+ err |= __put_user(pt->ar_rsc, &sc->sc_ar_rsc);
+ err |= __put_user(pt->ar_ccv, &sc->sc_ar_ccv);
+ err |= __put_user(pt->ar_unat, &sc->sc_ar_unat); /* ar.unat */
+ err |= __put_user(pt->ar_fpsr, &sc->sc_ar_fpsr); /* ar.fpsr */
+ err |= __put_user(pt->ar_pfs, &sc->sc_ar_pfs);
+ err |= __put_user(pt->pr, &sc->sc_pr); /* predicates */
+ err |= __put_user(pt->b0, &sc->sc_br[0]); /* b0 (rp) */
+ err |= __put_user(pt->b6, &sc->sc_br[6]); /* b6 */
+ err |= __put_user(pt->b7, &sc->sc_br[7]); /* b7 */
+
+ err |= __copy_to_user(&sc->sc_gr[1], &pt->r1, 3*8); /* r1-r3 */
+ err |= __copy_to_user(&sc->sc_gr[8], &pt->r8, 4*8); /* r8-r11 */
+ err |= __copy_to_user(&sc->sc_gr[12], &pt->r12, 4*8); /* r12-r15 */
+ err |= __copy_to_user(&sc->sc_gr[16], &pt->r16, 16*8); /* r16-r31 */
+
+ err |= __put_user(pt->cr_iip + ia64_psr(pt)->ri, &sc->sc_ip);
+ err |= __put_user(pt->r12, &sc->sc_gr[12]); /* r12 */
+ return err;
+}
+
+static long
+setup_frame (int sig, struct k_sigaction *ka, siginfo_t *info, sigset_t *set, struct pt_regs *pt)
+{
+ struct switch_stack *sw = (struct switch_stack *) pt - 1;
+ extern char ia64_sigtramp[], __start_gate_section[];
+ unsigned long tramp_addr, new_rbs = 0;
+ struct sigframe *frame;
+ struct siginfo si;
+ long err;
+
+ frame = (void *) pt->r12;
+ tramp_addr = GATE_ADDR + (ia64_sigtramp - __start_gate_section);
+ if ((ka->sa.sa_flags & SA_ONSTACK) != 0 && !on_sig_stack((unsigned long) frame)) {
+ new_rbs = (current->sas_ss_sp + sizeof(long) - 1) & ~(sizeof(long) - 1);
+ frame = (void *) ((current->sas_ss_sp + current->sas_ss_size)
+ & ~(STACK_ALIGN - 1));
+ }
+ frame = (void *) frame - ((sizeof(*frame) + STACK_ALIGN - 1) & ~(STACK_ALIGN - 1));
+
+ if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+ goto give_sigsegv;
+
+ err = __copy_to_user(&frame->info, info, sizeof(siginfo_t));
+
+ err |= __put_user(current->sas_ss_sp, &frame->sc.sc_stack.ss_sp);
+ err |= __put_user(current->sas_ss_size, &frame->sc.sc_stack.ss_size);
+ err |= __put_user(sas_ss_flags(pt->r12), &frame->sc.sc_stack.ss_flags);
+ err |= setup_sigcontext(&frame->sc, set, pt);
+
+ if (err)
+ goto give_sigsegv;
+
+ pt->r12 = (unsigned long) frame - 16; /* new stack pointer */
+ pt->r2 = sig; /* signal number */
+ pt->r3 = (unsigned long) ka->sa.sa_handler; /* addr. of handler's proc. descriptor */
+ pt->r15 = new_rbs;
+ pt->ar_fpsr = FPSR_DEFAULT; /* reset fpsr for signal handler */
+ pt->cr_iip = tramp_addr;
+ ia64_psr(pt)->ri = 0; /* start executing in first slot */
+
+ /*
+ * Note: this affects only the NaT bits of the scratch regs
+ * (the ones saved in pt_regs), which is exactly what we want.
+ * The NaT bits for the preserved regs (r4-r7) are in
+ * sw->ar_unat iff this process is being PTRACED.
+ */
+ sw->caller_unat = 0; /* ensure NaT bits of at least r2, r3, r12, and r15 are clear */
+
+#if DEBUG_SIG
+ printk("SIG deliver (%s:%d): sig=%d sp=%lx ip=%lx handler=%lx\n",
+ current->comm, current->pid, sig, pt->r12, pt->cr_iip, pt->r3);
+#endif
+ return 1;
+
+ give_sigsegv:
+ if (sig == SIGSEGV)
+ ka->sa.sa_handler = SIG_DFL;
+ si.si_signo = SIGSEGV;
+ si.si_errno = 0;
+ si.si_code = SI_KERNEL;
+ si.si_pid = current->pid;
+ si.si_uid = current->uid;
+ si.si_addr = frame;
+ force_sig_info(SIGSEGV, &si, current);
+ return 0;
+}
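Read together with ia64_rt_sigreturn() above, the frame geometry is
symmetric: setup_frame() leaves a 16-byte scratch area below the sigframe
(pt->r12 = frame - 16) and sigreturn recovers the frame with pt->r12 + 16.
A sketch of the resulting user-stack layout:

/*
 *	+--------------------------+  <- sp at delivery (old pt->r12)
 *	| struct sigframe          |
 *	|   .info (siginfo)        |
 *	|   .sc   (sigcontext)     |
 *	+--------------------------+  <- frame == pt->r12 + 16 at sigreturn
 *	| 16-byte scratch area     |
 *	+--------------------------+  <- new pt->r12 == frame - 16
 */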
+
+static long
+handle_signal (unsigned long sig, struct k_sigaction *ka, siginfo_t *info, sigset_t *oldset,
+ struct pt_regs *pt)
+{
+#ifdef CONFIG_IA32_SUPPORT
+ if (IS_IA32_PROCESS(pt)) {
+ /* send signal to IA-32 process */
+ if (!ia32_setup_frame1(sig, ka, info, oldset, pt))
+ return 0;
+ } else
+#endif
+ /* send signal to IA-64 process */
+ if (!setup_frame(sig, ka, info, oldset, pt))
+ return 0;
+
+ if (ka->sa.sa_flags & SA_ONESHOT)
+ ka->sa.sa_handler = SIG_DFL;
+
+ if (!(ka->sa.sa_flags & SA_NODEFER)) {
+ spin_lock_irq(&current->sigmask_lock);
+ sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask);
+ sigaddset(&current->blocked, sig);
+ recalc_sigpending(current);
+ spin_unlock_irq(&current->sigmask_lock);
+ }
+ return 1;
+}
+
+/*
+ * When we get here, `pt' points to struct pt_regs and ((struct
+ * switch_stack *) pt - 1) points to a switch stack structure.
+ * HOWEVER, in the normal case, the ONLY value valid in the
+ * switch_stack is the caller_unat field. The entire switch_stack is
+ * valid ONLY if current->flags has PF_PTRACED set.
+ *
+ * Note that `init' is a special process: it doesn't get signals it
+ * doesn't want to handle. Thus you cannot kill init, not even
+ * with a SIGKILL sent by mistake.
+ *
+ * Note that we go through the signals twice: once to check the
+ * signals that the kernel can handle, and then we build all the
+ * user-level signal handling stack-frames in one go after that.
+ */
+long
+ia64_do_signal (sigset_t *oldset, struct pt_regs *pt, long in_syscall)
+{
+ struct k_sigaction *ka;
+ siginfo_t info;
+ long restart = in_syscall;
+
+ /*
+ * In the ia64_leave_kernel code path, we want the common case
+ * to go fast, which is why we may in certain cases get here
+ * from kernel mode. Just return without doing anything if so.
+ */
+ if (!user_mode(pt))
+ return 0;
+
+ if (!oldset)
+ oldset = &current->blocked;
+
+ if (pt->r10 != -1) {
+ /*
+ * A system call has to be restarted only if one of
+ * the error codes ERESTARTNOHAND, ERESTARTSYS, or
+ * ERESTARTNOINTR is returned. If r10 isn't -1 then
+ * r8 doesn't hold an error code and we don't need to
+ * restart the syscall, so we set in_syscall to zero.
+ */
+ restart = 0;
+ }
+
+ for (;;) {
+ unsigned long signr;
+
+ spin_lock_irq(&current->sigmask_lock);
+ signr = dequeue_signal(&current->blocked, &info);
+ spin_unlock_irq(&current->sigmask_lock);
+
+ if (!signr)
+ break;
+
+ if ((current->flags & PF_PTRACED) && signr != SIGKILL) {
+ /* Let the debugger run. */
+ current->exit_code = signr;
+ set_current_state(TASK_STOPPED);
+ notify_parent(current, SIGCHLD);
+ schedule();
+ signr = current->exit_code;
+
+ /* We're back. Did the debugger cancel the sig? */
+ if (!signr)
+ continue;
+ current->exit_code = 0;
+
+ /* The debugger continued. Ignore SIGSTOP. */
+ if (signr == SIGSTOP)
+ continue;
+
+ /* Update the siginfo structure. Is this good? */
+ if (signr != info.si_signo) {
+ info.si_signo = signr;
+ info.si_errno = 0;
+ info.si_code = SI_USER;
+ info.si_pid = current->p_pptr->pid;
+ info.si_uid = current->p_pptr->uid;
+ }
+
+ /* If the (new) signal is now blocked, requeue it. */
+ if (sigismember(&current->blocked, signr)) {
+ send_sig_info(signr, &info, current);
+ continue;
+ }
+ }
+
+ ka = &current->sig->action[signr - 1];
+ if (ka->sa.sa_handler == SIG_IGN) {
+ if (signr != SIGCHLD)
+ continue;
+ /* Check for SIGCHLD: it's special. */
+ while (sys_wait4(-1, NULL, WNOHANG, NULL) > 0)
+ /* nothing */;
+ continue;
+ }
+
+ if (ka->sa.sa_handler == SIG_DFL) {
+ int exit_code = signr;
+
+ /* Init gets no signals it doesn't want. */
+ if (current->pid == 1)
+ continue;
+
+ switch (signr) {
+ case SIGCONT: case SIGCHLD: case SIGWINCH:
+ continue;
+
+ case SIGTSTP: case SIGTTIN: case SIGTTOU:
+ if (is_orphaned_pgrp(current->pgrp))
+ continue;
+ /* FALLTHRU */
+
+ case SIGSTOP:
+ set_current_state(TASK_STOPPED);
+ current->exit_code = signr;
+ if (!(current->p_pptr->sig->action[SIGCHLD-1].sa.sa_flags
+ & SA_NOCLDSTOP))
+ notify_parent(current, SIGCHLD);
+ schedule();
+ continue;
+
+ case SIGQUIT: case SIGILL: case SIGTRAP:
+ case SIGABRT: case SIGFPE: case SIGSEGV:
+ case SIGBUS: case SIGSYS: case SIGXCPU: case SIGXFSZ:
+ if (do_coredump(signr, pt))
+ exit_code |= 0x80;
+ /* FALLTHRU */
+
+ default:
+ lock_kernel();
+ sigaddset(&current->signal, signr);
+ recalc_sigpending(current);
+ current->flags |= PF_SIGNALED;
+ do_exit(exit_code);
+ /* NOTREACHED */
+ }
+ }
+
+ if (restart) {
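+ /*
+ * Note the unusual control flow below: ERESTARTNOHAND is a case
+ * label *inside* the ERESTARTSYS if-block. An ERESTARTSYS without
+ * SA_RESTART thus shares the "return EINTR" code with
+ * ERESTARTNOHAND, while an ERESTARTSYS with SA_RESTART falls
+ * through to ERESTARTNOINTR and restarts the system call.
+ */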
+ switch (pt->r8) {
+ case ERESTARTSYS:
+ if ((ka->sa.sa_flags & SA_RESTART) == 0) {
+ case ERESTARTNOHAND:
+ pt->r8 = EINTR;
+ /* note: pt->r10 is already -1 */
+ break;
+ }
+ case ERESTARTNOINTR:
+ ia64_decrement_ip(pt);
+ }
+ }
+
+ /* Whee! Actually deliver the signal. If the
+ delivery failed, we need to continue to iterate in
+ this loop so we can deliver the SIGSEGV... */
+ if (handle_signal(signr, ka, &info, oldset, pt))
+ return 1;
+ }
+
+ /* Did we come from a system call? */
+ if (restart) {
+ /* Restart the system call - no handlers present */
+ if (pt->r8 == ERESTARTNOHAND ||
+ pt->r8 == ERESTARTSYS ||
+ pt->r8 == ERESTARTNOINTR) {
+ /*
+ * Note: the syscall number is in r15 which is
+ * saved in pt_regs so all we need to do here
+ * is adjust ip so that the "break"
+ * instruction gets re-executed.
+ */
+ ia64_decrement_ip(pt);
+ }
+ }
+ return 0;
+}
--- /dev/null
+/*
+ * SMP Support
+ *
+ * Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
+ * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * Lots of stuff stolen from arch/alpha/kernel/smp.c
+ *
+ * 99/10/05 davidm Update to bring it in sync with new command-line processing scheme.
+ */
+#define __KERNEL_SYSCALLS__
+
+#include <linux/config.h>
+
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/smp.h>
+#include <linux/kernel_stat.h>
+#include <linux/mm.h>
+
+#include <asm/atomic.h>
+#include <asm/bitops.h>
+#include <asm/current.h>
+#include <asm/delay.h>
+
+#ifdef CONFIG_KDB
+#include <linux/kdb.h>
+void smp_kdb_interrupt (struct pt_regs* regs);
+void kdb_global(int cpuid);
+extern unsigned long smp_kdb_wait;
+extern int kdb_new_cpu;
+#endif
+
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/page.h>
+#include <asm/pgtable.h>
+#include <asm/pgalloc.h>
+#include <asm/processor.h>
+#include <asm/ptrace.h>
+#include <asm/sal.h>
+#include <asm/system.h>
+#include <asm/unistd.h>
+
+extern int cpu_idle(void * unused);
+extern void _start(void);
+
+extern int cpu_now_booting; /* Used by head.S to find idle task */
+extern unsigned long cpu_initialized; /* Bitmap of available cpu's */
+extern struct cpuinfo_ia64 cpu_data[NR_CPUS]; /* Duh... */
+
+spinlock_t kernel_flag = SPIN_LOCK_UNLOCKED;
+
+#ifdef CONFIG_KDB
+unsigned long cpu_online_map = 1;
+#endif
+
+volatile int cpu_number_map[NR_CPUS] = { -1, }; /* SAPIC ID -> Logical ID */
+volatile int __cpu_logical_map[NR_CPUS] = { -1, }; /* logical ID -> SAPIC ID */
+int smp_num_cpus = 1;
+int bootstrap_processor = -1; /* SAPIC ID of BSP */
+int smp_threads_ready = 0; /* Set when the idlers are all forked */
+unsigned long ipi_base_addr = IPI_DEFAULT_BASE_ADDR; /* Base addr of IPI table */
+cycles_t cacheflush_time = 0;
+unsigned long ap_wakeup_vector = -1; /* External Int to use to wakeup AP's */
+static int max_cpus = -1; /* Command line */
+static unsigned long ipi_op[NR_CPUS];
+struct smp_call_struct {
+ void (*func) (void *info);
+ void *info;
+ long wait;
+ atomic_t unstarted_count;
+ atomic_t unfinished_count;
+};
+static struct smp_call_struct *smp_call_function_data;
+
+#ifdef CONFIG_KDB
+unsigned long smp_kdb_wait = 0; /* Bitmask of waiters */
+#endif
+
+#ifdef CONFIG_ITANIUM_ASTEP_SPECIFIC
+extern spinlock_t ivr_read_lock;
+#endif
+
+int use_xtp = 0; /* XXX */
+
+#define IPI_RESCHEDULE 0
+#define IPI_CALL_FUNC 1
+#define IPI_CPU_STOP 2
+#define IPI_KDB_INTERRUPT 4
+
+/*
+ * Setup routine for controlling SMP activation
+ *
+ * Command-line option of "nosmp" or "maxcpus=0" will disable SMP
+ * activation entirely (the MPS table probe still happens, though).
+ *
+ * Command-line option of "maxcpus=<NUM>", where <NUM> is an integer
+ * greater than 0, limits the maximum number of CPUs activated in
+ * SMP mode to <NUM>.
+ */
+
+static int __init nosmp(char *str)
+{
+ max_cpus = 0;
+ return 1;
+}
+
+__setup("nosmp", nosmp);
+
+static int __init maxcpus(char *str)
+{
+ get_option(&str, &max_cpus);
+ return 1;
+}
+
+__setup("maxcpus=", maxcpus);
+
+/*
+ * Yoink this CPU from the runnable list...
+ */
+void
+halt_processor(void)
+{
+ clear_bit(smp_processor_id(), &cpu_initialized);
+ max_xtp();
+ __cli();
+ for (;;)
+ ;
+
+}
+
+void
+handle_IPI(int irq, void *dev_id, struct pt_regs *regs)
+{
+ int this_cpu = smp_processor_id();
+ unsigned long *pending_ipis = &ipi_op[this_cpu];
+ unsigned long ops;
+
+ /* Count this now; we may make a call that never returns. */
+ cpu_data[this_cpu].ipi_count++;
+
+ mb(); /* Order interrupt and bit testing. */
+ while ((ops = xchg(pending_ipis, 0)) != 0) {
+ mb(); /* Order bit clearing and data access. */
+ do {
+ unsigned long which;
+
+ which = ffz(~ops);
+ ops &= ~(1 << which);
+
+ switch (which) {
+ case IPI_RESCHEDULE:
+ /*
+ * Reschedule callback. Everything to be done is done by the
+ * interrupt return path.
+ */
+ break;
+
+ case IPI_CALL_FUNC:
+ {
+ struct smp_call_struct *data;
+ void (*func)(void *info);
+ void *info;
+ int wait;
+
+ data = smp_call_function_data;
+ func = data->func;
+ info = data->info;
+ wait = data->wait;
+
+ mb();
+ atomic_dec (&data->unstarted_count);
+
+ /* At this point the structure may be gone unless wait is true. */
+ (*func)(info);
+
+ /* Notify the sending CPU that the task is done. */
+ mb();
+ if (wait)
+ atomic_dec (&data->unfinished_count);
+ }
+ break;
+
+ case IPI_CPU_STOP:
+ halt_processor();
+ break;
+
+#ifdef CONFIG_KDB
+ case IPI_KDB_INTERRUPT:
+ smp_kdb_interrupt(regs);
+ break;
+#endif
+
+ default:
+ printk(KERN_CRIT "Unknown IPI on CPU %d: %lu\n", this_cpu, which);
+ break;
+ } /* Switch */
+ } while (ops);
+
+ mb(); /* Order data access and bit testing. */
+ }
+}
+
+static inline void
+send_IPI(int dest_cpu, unsigned char vector)
+{
+ unsigned long ipi_addr;
+ unsigned long ipi_data;
+#ifdef CONFIG_ITANIUM_ASTEP_SPECIFIC
+ unsigned long flags;
+#endif
+
+ ipi_data = vector;
+ ipi_addr = ipi_base_addr | ((dest_cpu << 8) << 4); /* 16-bit SAPIC ID's; assume CPU bus 0 */
+ mb();
+
+#ifdef CONFIG_ITANIUM_ASTEP_SPECIFIC
+ /*
+ * Disable IVR reads
+ */
+ save_flags(flags);
+ __cli();
+ spin_lock(&ivr_read_lock);
+ writeq(ipi_data, ipi_addr);
+ spin_unlock(&ivr_read_lock);
+ restore_flags(flags);
+#else
+ writeq(ipi_data, ipi_addr);
+#endif /* CONFIG_ITANIUM_ASTEP_SPECIFIC */
+
+}
+
+static inline void
+send_IPI_single(int dest_cpu, int op)
+{
+
+ if (dest_cpu == -1)
+ return;
+
+ ipi_op[dest_cpu] |= (1 << op);
+ send_IPI(dest_cpu, IPI_IRQ);
+}
+
+static inline void
+send_IPI_allbutself(int op)
+{
+ int i;
+ int cpu_id = 0;
+
+ for (i = 0; i < smp_num_cpus; i++) {
+ cpu_id = __cpu_logical_map[i];
+ if (cpu_id != smp_processor_id())
+ send_IPI_single(cpu_id, op);
+ }
+}
+
+static inline void
+send_IPI_all(int op)
+{
+ int i;
+
+ for (i = 0; i < smp_num_cpus; i++)
+ send_IPI_single(__cpu_logical_map[i], op);
+}
+
+static inline void
+send_IPI_self(int op)
+{
+ send_IPI_single(smp_processor_id(), op);
+}
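The helpers above implement a simple bitmask mailbox: the sender sets bit
<op> in the target CPU's ipi_op word and fires a single interrupt vector,
and handle_IPI() swaps the word to zero and decodes each pending operation
with ffz(~ops). A user-level sketch of the round trip; ffz is reimplemented
here for illustration.

/* Model of the ipi_op encode/decode used by send_IPI_single() and
 * handle_IPI(). */
#include <stdio.h>

#define IPI_RESCHEDULE	0
#define IPI_CALL_FUNC	1
#define IPI_CPU_STOP	2

static unsigned long ffz(unsigned long x)	/* index of first zero bit */
{
	unsigned long i = 0;

	while (x & 1) {
		x >>= 1;
		i++;
	}
	return i;
}

int main(void)
{
	unsigned long pending = 0, ops, which;

	pending |= 1UL << IPI_CALL_FUNC;	/* sender side */
	pending |= 1UL << IPI_CPU_STOP;

	ops = pending;				/* receiver: xchg(&pending, 0) */
	pending = 0;
	while (ops) {
		which = ffz(~ops);		/* lowest set bit of ops */
		ops &= ~(1UL << which);
		printf("handling IPI op %lu\n", which);
	}
	return 0;
}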
+
+void
+smp_send_reschedule(int cpu)
+{
+ send_IPI_single(cpu, IPI_RESCHEDULE);
+}
+
+void
+smp_send_stop(void)
+{
+ send_IPI_allbutself(IPI_CPU_STOP);
+}
+
+/*
+ * Run a function on all other CPUs.
+ * <func> The function to run. This must be fast and non-blocking.
+ * <info> An arbitrary pointer to pass to the function.
+ * <retry> If true, keep retrying until ready.
+ * <wait> If true, wait until function has completed on other CPUs.
+ * [RETURNS] 0 on success, else a negative status code.
+ *
+ * Does not return until remote CPUs are nearly ready to execute <func>,
+ * are executing it, or have already executed it.
+ */
+
+int
+smp_call_function (void (*func) (void *info), void *info, int retry, int wait)
+{
+ struct smp_call_struct data;
+ long timeout;
+ static spinlock_t lock = SPIN_LOCK_UNLOCKED;
+
+ data.func = func;
+ data.info = info;
+ data.wait = wait;
+ atomic_set(&data.unstarted_count, smp_num_cpus - 1);
+ atomic_set(&data.unfinished_count, smp_num_cpus - 1);
+
+ if (retry) {
+ while (1) {
+ if (smp_call_function_data) {
+ schedule (); /* Give a mate a go */
+ continue;
+ }
+ spin_lock (&lock);
+ if (smp_call_function_data) {
+ spin_unlock (&lock); /* Bad luck */
+ continue;
+ }
+ /* Mine, all mine! */
+ break;
+ }
+ }
+ else {
+ if (smp_call_function_data)
+ return -EBUSY;
+ spin_lock (&lock);
+ if (smp_call_function_data) {
+ spin_unlock (&lock);
+ return -EBUSY;
+ }
+ }
+
+ smp_call_function_data = &data;
+ spin_unlock (&lock);
+
+ /* Send a message to all other CPUs and wait for them to respond */
+ send_IPI_allbutself(IPI_CALL_FUNC);
+
+ /* Wait for response */
+ timeout = jiffies + HZ;
+ while ( (atomic_read (&data.unstarted_count) > 0) &&
+ time_before (jiffies, timeout) )
+ barrier ();
+ if (atomic_read (&data.unstarted_count) > 0) {
+ smp_call_function_data = NULL;
+ return -ETIMEDOUT;
+ }
+ if (wait)
+ while (atomic_read (&data.unfinished_count) > 0)
+ barrier ();
+ smp_call_function_data = NULL;
+ return 0;
+}
+
+/*
+ * Flush all other CPU's tlb and then mine. Do this with smp_call_function() as we
+ * want to ensure all TLB's flushed before proceeding.
+ *
+ * XXX: Is it OK to use the same ptc.e info on all cpus?
+ */
+void
+smp_flush_tlb_all(void)
+{
+ smp_call_function((void (*)(void *))__flush_tlb_all, NULL, 1, 1);
+ __flush_tlb_all();
+}
+
+/*
+ * Ideally sets up per-cpu profiling hooks. Doesn't do much now...
+ */
+static inline void __init
+smp_setup_percpu_timer(int cpuid)
+{
+ cpu_data[cpuid].prof_counter = 1;
+ cpu_data[cpuid].prof_multiplier = 1;
+}
+
+void
+smp_do_timer(struct pt_regs *regs)
+{
+ int cpu = smp_processor_id();
+ int user = user_mode(regs);
+ struct cpuinfo_ia64 *data = &cpu_data[cpu];
+
+ extern void update_one_process(struct task_struct *, unsigned long, unsigned long,
+ unsigned long, int);
+ if (!--data->prof_counter) {
+ irq_enter(cpu, TIMER_IRQ);
+
+ update_one_process(current, 1, user, !user, cpu);
+ if (current->pid) {
+ if (--current->counter < 0) {
+ current->counter = 0;
+ current->need_resched = 1;
+ }
+
+ if (user) {
+ if (current->priority < DEF_PRIORITY) {
+ kstat.cpu_nice++;
+ kstat.per_cpu_nice[cpu]++;
+ } else {
+ kstat.cpu_user++;
+ kstat.per_cpu_user[cpu]++;
+ }
+ } else {
+ kstat.cpu_system++;
+ kstat.per_cpu_system[cpu]++;
+ }
+ }
+
+ data->prof_counter = data->prof_multiplier;
+ irq_exit(cpu, TIMER_IRQ);
+ }
+}
+
+
+/*
+ * Called by both boot and secondaries to move global data into
+ * per-processor storage.
+ */
+static inline void __init
+smp_store_cpu_info(int cpuid)
+{
+ struct cpuinfo_ia64 *c = &cpu_data[cpuid];
+
+ identify_cpu(c);
+}
+
+/*
+ * SAL shoves the AP's here when we start them. Physical mode, no kernel TR,
+ * no RRs set, better than even chance that psr is bogus. Fix all that and
+ * call _start. In effect, pretend to be lilo.
+ *
+ * Stolen from lilo_start.c. Thanks David!
+ */
+void
+start_ap(void)
+{
+ unsigned long flags;
+
+ /*
+ * Install a translation register that identity maps the
+ * kernel's 256MB page(s).
+ */
+ ia64_clear_ic(flags);
+ ia64_set_rr( 0, (0x1000 << 8) | (_PAGE_SIZE_1M << 2));
+ ia64_set_rr(PAGE_OFFSET, (ia64_rid(0, PAGE_OFFSET) << 8) | (_PAGE_SIZE_256M << 2));
+ ia64_itr(0x3, 1, PAGE_OFFSET,
+ pte_val(mk_pte_phys(0, __pgprot(__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RWX))),
+ _PAGE_SIZE_256M);
+
+ flags = (IA64_PSR_IT | IA64_PSR_IC | IA64_PSR_DT | IA64_PSR_RT | IA64_PSR_DFH |
+ IA64_PSR_BN);
+
+ asm volatile ("movl r8 = 1f\n"
+ ";;\n"
+ "mov cr.ipsr=%0\n"
+ "mov cr.iip=r8\n"
+ "mov cr.ifs=r0\n"
+ ";;\n"
+ "rfi;;"
+ "1:\n"
+ "movl r1 = __gp" :: "r"(flags) : "r8");
+ _start();
+}
+
+
+/*
+ * AP's start using C here.
+ */
+void __init
+smp_callin(void)
+{
+ extern void ia64_rid_init(void);
+ extern void ia64_init_itm(void);
+ extern void ia64_cpu_local_tick(void);
+
+ ia64_set_dcr(IA64_DCR_DR | IA64_DCR_DK | IA64_DCR_DX | IA64_DCR_PP);
+ ia64_set_fpu_owner(0);
+ ia64_rid_init(); /* initialize region ids */
+
+ cpu_init();
+ __flush_tlb_all();
+
+ smp_store_cpu_info(smp_processor_id());
+ smp_setup_percpu_timer(smp_processor_id());
+
+ while (!smp_threads_ready)
+ mb();
+
+ normal_xtp();
+
+ /* setup the CPU local timer tick */
+ ia64_cpu_local_tick();
+
+ /* Disable all local interrupts */
+ ia64_set_lrr0(0, 1);
+ ia64_set_lrr1(0, 1);
+
+ __sti(); /* Interrupts have been off till now. */
+ cpu_idle(NULL);
+}
+
+/*
+ * Create the idle task for a new AP. DO NOT use kernel_thread() because
+ * that could end up calling schedule() in the ia64_leave_kernel exit
+ * path in which case the new idle task could get scheduled before we
+ * had a chance to remove it from the run-queue...
+ */
+static int __init
+fork_by_hand(void)
+{
+ /*
+ * Don't care about the usp and regs settings since we'll never
+ * reschedule the forked task.
+ */
+ return do_fork(CLONE_VM|CLONE_PID, 0, 0);
+}
+
+/*
+ * Bring one cpu online.
+ *
+ * NB: cpuid is the CPU BUS-LOCAL ID, not the entire SAPIC ID. See asm/smp.h.
+ */
+static int __init
+smp_boot_one_cpu(int cpuid, int cpunum)
+{
+ struct task_struct *idle;
+ long timeout;
+
+ /*
+ * Create an idle task for this CPU. Note that the address we
+ * give to kernel_thread is irrelevant -- it's going to start
+ * where the OS_BOOT_RENDEZ vector in SAL says to start. But
+ * this gets all the other task-y sort of data structures set
+ * up like we wish. We need to pull the just created idle task
+ * off the run queue and stuff it into the init_tasks[] array.
+ * Sheesh . . .
+ */
+ if (fork_by_hand() < 0)
+ panic("failed fork for CPU %d", cpuid);
+ /*
+ * We remove it from the pidhash and the runqueue
+ * once we've got the process:
+ */
+ idle = init_task.prev_task;
+ if (!idle)
+ panic("No idle process for CPU %d", cpuid);
+ init_tasks[cpunum] = idle;
+ del_from_runqueue(idle);
+ unhash_process(idle);
+
+ /* Schedule the first task manually. */
+ idle->processor = cpuid;
+ idle->has_cpu = 1;
+
+ /* Let _start know what logical CPU we're booting (offset into init_tasks[]) */
+ cpu_now_booting = cpunum;
+
+ /* Kick the AP in the butt */
+ send_IPI(cpuid, ap_wakeup_vector);
+ ia64_srlz_i();
+ mb();
+
+ /*
+ * OK, wait a bit for that CPU to finish staggering about. smp_callin() will
+ * call cpu_init() which will set a bit for this AP. When that bit flips, the AP
+ * is waiting for smp_threads_ready to be 1 and we can move on.
+ */
+ for (timeout = 0; timeout < 100000; timeout++) {
+ if (test_bit(cpuid, &cpu_initialized))
+ goto alive;
+ udelay(10);
+ barrier();
+ }
+
+ printk(KERN_ERR "SMP: Processor %d is stuck.\n", cpuid);
+ return -1;
+
+alive:
+ /* Remember the AP data */
+ cpu_number_map[cpuid] = cpunum;
+#ifdef CONFIG_KDB
+ cpu_online_map |= (1<<cpunum);
+ printk ("DEBUGGER: cpu_online_map = 0x%08x\n", cpu_online_map);
+#endif
+ __cpu_logical_map[cpunum] = cpuid;
+ return 0;
+}
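smp_boot_one_cpu() fills in the two ID maps declared at the top of this
file so that they remain mutual inverses. A sketch of the invariant, with
invented IDs:

/* cpu_number_map and __cpu_logical_map are kept as mutual inverses. */
#include <assert.h>

int main(void)
{
	int cpu_number_map[256];	/* SAPIC ID   -> logical ID */
	int cpu_logical_map[4];		/* logical ID -> SAPIC ID   */
	int sapic_id = 0x12, logical = 1;

	cpu_number_map[sapic_id] = logical;
	cpu_logical_map[logical] = sapic_id;

	/* Round trips are the identity in both directions. */
	assert(cpu_logical_map[cpu_number_map[sapic_id]] == sapic_id);
	assert(cpu_number_map[cpu_logical_map[logical]] == logical);
	return 0;
}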
+
+
+
+/*
+ * Called by smp_init to bring all the secondaries online and hold them.
+ * XXX: this is ACPI specific; it uses "magic" variables exported from acpi.c
+ * to 'discover' the AP's. Blech.
+ */
+void __init
+smp_boot_cpus(void)
+{
+ int i, cpu_count = 1;
+ unsigned long bogosum;
+ int sapic_id;
+ extern int acpi_cpus;
+ extern int acpi_apic_map[32];
+
+ /* Take care of some initial bookkeeping. */
+ memset(&cpu_number_map, -1, sizeof(cpu_number_map));
+ memset(&__cpu_logical_map, -1, sizeof(__cpu_logical_map));
+ memset(&ipi_op, 0, sizeof(ipi_op));
+
+ /* Setup BSP mappings */
+ cpu_number_map[bootstrap_processor] = 0;
+ __cpu_logical_map[0] = bootstrap_processor;
+ current->processor = bootstrap_processor;
+
+ /* Mark BSP booted and get active_mm context */
+ cpu_init();
+
+ /* reset XTP for interrupt routing */
+ normal_xtp();
+
+ /* And generate an entry in cpu_data */
+ smp_store_cpu_info(bootstrap_processor);
+#if 0
+ smp_tune_scheduling();
+#endif
+ smp_setup_percpu_timer(bootstrap_processor);
+
+ init_idle();
+
+ /* Nothing to do when told not to. */
+ if (max_cpus == 0) {
+ printk(KERN_INFO "SMP mode deactivated.\n");
+ return;
+ }
+
+ if (acpi_cpus > 1) {
+ printk(KERN_INFO "SMP: starting up secondaries.\n");
+
+ for (i = 0; i < NR_CPUS; i++) {
+ if (acpi_apic_map[i] == -1 ||
+ acpi_apic_map[i] == bootstrap_processor << 8) /* XXX Fix me Walt */
+ continue;
+
+ /*
+ * IA64 SAPIC ID's are 16-bits. See asm/smp.h for more info
+ */
+ sapic_id = acpi_apic_map[i] >> 8;
+ if (smp_boot_one_cpu(sapic_id, cpu_count))
+ continue;
+
+ cpu_count++; /* Count good CPUs only... */
+ }
+ }
+
+ if (cpu_count == 1) {
+ printk(KERN_ERR "SMP: Bootstrap processor only.\n");
+ return;
+ }
+
+ bogosum = 0;
+ for (i = 0; i < NR_CPUS; i++) {
+ if (cpu_initialized & (1L << i))
+ bogosum += cpu_data[i].loops_per_sec;
+ }
+
+ printk(KERN_INFO "SMP: Total of %d processors activated "
+ "(%lu.%02lu BogoMIPS).\n",
+ cpu_count, (bogosum + 2500) / 500000,
+ ((bogosum + 2500) / 5000) % 100);
+
+ smp_num_cpus = cpu_count;
+}
+
+/*
+ * Called from main.c by each AP.
+ */
+void __init
+smp_commence(void)
+{
+ mb();
+}
+
+/*
+ * Not used; part of the i386 bringup
+ */
+void __init
+initialize_secondary(void)
+{
+}
+
+int __init
+setup_profiling_timer(unsigned int multiplier)
+{
+ return -EINVAL;
+}
+
+/*
+ * Assume that CPUs have been discovered by some platform-dependent
+ * interface. For SoftSDV/Lion, that would be ACPI.
+ *
+ * Setup of the IPI irq handler is done in irq.c:init_IRQ_SMP().
+ *
+ * So this just gets the BSP SAPIC ID and prints it out. Dull, huh?
+ *
+ * Not anymore. This also registers the AP OS boot rendezvous address with SAL.
+ */
+void __init
+init_smp_config(void)
+{
+ struct fptr {
+ unsigned long fp;
+ unsigned long gp;
+ } *ap_startup;
+ long sal_ret;
+
+ /* Grab the BSP ID */
+ bootstrap_processor = hard_smp_processor_id();
+
+ /* Tell SAL where to drop the AP's. */
+ ap_startup = (struct fptr *) start_ap;
+ sal_ret = ia64_sal_set_vectors(SAL_VECTOR_OS_BOOT_RENDEZ,
+ __pa(ap_startup->fp), __pa(ap_startup->gp), 0,
+ 0, 0, 0);
+ if (sal_ret < 0) {
+ printk("SMP: Can't set SAL AP Boot Rendezvous: %s\n", ia64_sal_strerror(sal_ret));
+ printk(" Forcing UP mode\n");
+ smp_num_cpus = 1;
+ }
+
+}
+
+#ifdef CONFIG_KDB
+void smp_kdb_stop (int all, struct pt_regs* regs)
+{
+ if (all)
+ {
+ printk ("Sending IPI to all on CPU %i\n", smp_processor_id ());
+ smp_kdb_wait = 0xffffffff;
+ clear_bit (smp_processor_id(), &smp_kdb_wait);
+ send_IPI_allbutself (IPI_KDB_INTERRUPT);
+ }
+ else
+ {
+ printk ("Sending IPI to self on CPU %i\n",
+ smp_processor_id ());
+ set_bit (smp_processor_id(), &smp_kdb_wait);
+ clear_bit (__cpu_logical_map[kdb_new_cpu], &smp_kdb_wait);
+ smp_kdb_interrupt (regs);
+ }
+}
+
+void smp_kdb_interrupt (struct pt_regs* regs)
+{
+ printk ("kdb: IPI on CPU %i with mask 0x%08x\n",
+ smp_processor_id (), smp_kdb_wait);
+
+ /* All CPUs spin here forever */
+ while (test_bit (smp_processor_id(), &smp_kdb_wait));
+
+ /* Enter KDB on CPU selected by KDB on the last CPU */
+ if (__cpu_logical_map[kdb_new_cpu] == smp_processor_id ())
+ {
+ kdb (KDB_REASON_SWITCH, 0, regs);
+ }
+}
+
+#endif
+
--- /dev/null
+/*
+ * This file contains various system calls that have different calling
+ * conventions on different platforms.
+ *
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/mm.h>
+#include <linux/mman.h>
+#include <linux/sched.h>
+#include <linux/file.h> /* doh, must come after sched.h... */
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+
+asmlinkage long
+ia64_getpriority (int which, int who, long arg2, long arg3, long arg4, long arg5, long arg6,
+ long arg7, long stack)
+{
+ struct pt_regs *regs = (struct pt_regs *) &stack;
+ extern long sys_getpriority (int, int);
+ long prio;
+
+ prio = sys_getpriority(which, who);
+ if (prio >= 0) {
+ regs->r8 = 0; /* ensure negative priority is not mistaken as error code */
+ prio = 20 - prio;
+ }
+ return prio;
+}
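The regs->r8 store above reflects the IA-64 syscall return convention also
visible in ia64_rt_sigsuspend() earlier: r8 carries the return value and
r10 flags whether it is an error. A sketch of the consumer side, with plain
variables standing in for the registers:

/* How user level interprets an IA-64 syscall return under the r8/r10
 * convention; simplified to ordinary variables. */
#include <stdio.h>

int main(void)
{
	long r8 = 19, r10 = 0;	/* e.g. getpriority(): value 19, no error */

	if (r10 == -1)
		printf("error: errno=%ld\n", r8);	/* r8 holds an errno */
	else
		printf("result: %ld\n", r8);		/* r8 holds the value */
	return 0;
}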
+
+asmlinkage unsigned long
+sys_getpagesize (void)
+{
+ return PAGE_SIZE;
+}
+
+asmlinkage unsigned long
+ia64_shmat (int shmid, void *shmaddr, int shmflg, long arg3, long arg4, long arg5, long arg6,
+ long arg7, long stack)
+{
+ extern int sys_shmat (int shmid, char *shmaddr, int shmflg, ulong *raddr);
+ struct pt_regs *regs = (struct pt_regs *) &stack;
+ unsigned long raddr;
+ int retval;
+
+ retval = sys_shmat(shmid, shmaddr, shmflg, &raddr);
+ if (retval < 0)
+ return retval;
+
+ regs->r8 = 0; /* ensure negative addresses are not mistaken as an error code */
+ return raddr;
+}
+
+asmlinkage unsigned long
+ia64_brk (long brk, long arg1, long arg2, long arg3,
+ long arg4, long arg5, long arg6, long arg7, long stack)
+{
+ extern unsigned long sys_brk (unsigned long brk);
+ struct pt_regs *regs = (struct pt_regs *) &stack;
+ unsigned long retval;
+
+ retval = sys_brk(brk);
+
+ regs->r8 = 0; /* ensure large retval isn't mistaken as error code */
+ return retval;
+}
+
+/*
+ * On IA-64, we return the two file descriptors in ret0 and ret1 (r8
+ * and r9) as this is faster than doing a copy_to_user().
+ */
+asmlinkage long
+sys_pipe (long arg0, long arg1, long arg2, long arg3,
+ long arg4, long arg5, long arg6, long arg7, long stack)
+{
+ struct pt_regs *regs = (struct pt_regs *) &stack;
+ int fd[2];
+ int retval;
+
+ lock_kernel();
+ retval = do_pipe(fd);
+ if (retval)
+ goto out;
+ retval = fd[0];
+ regs->r9 = fd[1];
+ out:
+ unlock_kernel();
+ return retval;
+}
+
+static inline unsigned long
+do_mmap2 (unsigned long addr, unsigned long len, int prot, int flags, int fd, unsigned long pgoff)
+{
+ struct file *file = 0;
+
+ /*
+ * A zero-length mmap always succeeds in Linux, independent of
+ * whether or not the remaining arguments are valid.
+ */
+ if (PAGE_ALIGN(len) == 0)
+ return addr;
+
+#ifdef notyet
+ /* Don't permit mappings that would cross a region boundary: */
+ region_start = IA64_GET_REGION(addr);
+ region_end = IA64_GET_REGION(addr + len);
+ if (region_start != region_end)
+ return -EINVAL;
+
+ <<x??x>>
+#endif
+
+ flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
+ if (!(flags & MAP_ANONYMOUS)) {
+ file = fget(fd);
+ if (!file)
+ return -EBADF;
+ }
+
+ down(&current->mm->mmap_sem);
+ lock_kernel();
+
+ addr = do_mmap_pgoff(file, addr, len, prot, flags, pgoff);
+
+ unlock_kernel();
+ up(&current->mm->mmap_sem);
+
+ if (file)
+ fput(file);
+ return addr;
+}
+
+/*
+ * mmap2() is like mmap() except that the offset is expressed in units
+ * of PAGE_SIZE (instead of bytes). This allows mmap2() to map (pieces
+ * of) files that are larger than the address space of the CPU.
+ */
+asmlinkage unsigned long
+sys_mmap2 (unsigned long addr, unsigned long len, int prot, int flags, int fd, long pgoff,
+ long arg6, long arg7, long stack)
+{
+ struct pt_regs *regs = (struct pt_regs *) &stack;
+
+ addr = do_mmap2(addr, len, prot, flags, fd, pgoff);
+ if (!IS_ERR(addr))
+ regs->r8 = 0; /* ensure large addresses are not mistaken as failures... */
+ return addr;
+}
+
+asmlinkage unsigned long
+sys_mmap (unsigned long addr, unsigned long len, int prot, int flags,
+ int fd, long off, long arg6, long arg7, long stack)
+{
+ struct pt_regs *regs = (struct pt_regs *) &stack;
+
+ addr = do_mmap2(addr, len, prot, flags, fd, off >> PAGE_SHIFT);
+ if (!IS_ERR(addr))
+ regs->r8 = 0; /* ensure large addresses are not mistaken as failures... */
+ return addr;
+}
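As the comment before sys_mmap2() notes, the two entry points differ only
in the offset unit; sys_mmap() converts bytes to pages with
off >> PAGE_SHIFT. A sketch of the equivalence, with the page size assumed
for illustration:

/* Offset-unit equivalence between sys_mmap() and sys_mmap2(). */
#include <stdio.h>

#define PAGE_SHIFT	14		/* assumed: 16KB pages */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

int main(void)
{
	unsigned long byte_off = 5 * PAGE_SIZE;	/* unit used by mmap()  */
	unsigned long pgoff = byte_off >> PAGE_SHIFT; /* unit used by mmap2() */

	printf("byte offset %lu == page offset %lu\n", byte_off, pgoff);
	return 0;
}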
+
+asmlinkage long
+sys_ioperm (unsigned long from, unsigned long num, int on)
+{
+ printk(KERN_ERR "sys_ioperm(from=%lx, num=%lx, on=%d)\n", from, num, on);
+ return -EIO;
+}
+
+asmlinkage long
+sys_iopl (int level, long arg1, long arg2, long arg3)
+{
+ lock_kernel();
+ printk(KERN_ERR "sys_iopl(level=%d)!\n", level);
+ unlock_kernel();
+ return -ENOSYS;
+}
+
+asmlinkage long
+sys_vm86 (long arg0, long arg1, long arg2, long arg3)
+{
+ lock_kernel();
+ printk(KERN_ERR "sys_vm86(%lx, %lx, %lx, %lx)!\n", arg0, arg1, arg2, arg3);
+ unlock_kernel();
+ return -ENOSYS;
+}
+
+asmlinkage long
+sys_modify_ldt (long arg0, long arg1, long arg2, long arg3)
+{
+ lock_kernel();
+ printk(KERN_ERR "sys_modify_ldt(%lx, %lx, %lx, %lx)!\n", arg0, arg1, arg2, arg3);
+ unlock_kernel();
+ return -ENOSYS;
+}
+
+#ifndef CONFIG_PCI
+
+asmlinkage long
+sys_pciconfig_read (unsigned long bus, unsigned long dfn, unsigned long off, unsigned long len,
+ void *buf)
+{
+ return -ENOSYS;
+}
+
+asmlinkage long
+sys_pciconfig_write (unsigned long bus, unsigned long dfn, unsigned long off, unsigned long len,
+ void *buf)
+{
+ return -ENOSYS;
+}
+
+
+#endif /* CONFIG_PCI */
--- /dev/null
+/*
+ * linux/arch/ia64/kernel/time.c
+ *
+ * Copyright (C) 1998-2000 Hewlett-Packard Co
+ * Copyright (C) 1998-2000 Stephane Eranian <eranian@hpl.hp.com>
+ * Copyright (C) 1999-2000 David Mosberger <davidm@hpl.hp.com>
+ * Copyright (C) 1999 Don Dugger <don.dugger@intel.com>
+ * Copyright (C) 1999-2000 VA Linux Systems
+ * Copyright (C) 1999-2000 Walt Drummond <drummond@valinux.com>
+ */
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/time.h>
+
+#include <asm/delay.h>
+#include <asm/efi.h>
+#include <asm/irq.h>
+#include <asm/machvec.h>
+#include <asm/ptrace.h>
+#include <asm/sal.h>
+#include <asm/system.h>
+
+extern rwlock_t xtime_lock;
+extern volatile unsigned long lost_ticks;
+
+#ifdef CONFIG_IA64_DEBUG_IRQ
+
+unsigned long last_cli_ip;
+
+#endif
+
+static struct {
+ unsigned long delta;
+ unsigned long next[NR_CPUS];
+} itm;
+
+static void
+do_profile (unsigned long ip)
+{
+ extern char _stext;
+
+ if (prof_buffer && current->pid) {
+ ip -= (unsigned long) &_stext;
+ ip >>= prof_shift;
+ /*
+ * Don't ignore out-of-bounds IP values silently,
+ * put them into the last histogram slot, so if
+ * present, they will show up as a sharp peak.
+ */
+ if (ip > prof_len - 1)
+ ip = prof_len - 1;
+
+ atomic_inc((atomic_t *) &prof_buffer[ip]);
+ }
+}
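A worked model of the bucket arithmetic do_profile() performs; the text
base, shift, and sample address below are invented for illustration.

/* Stand-alone model of do_profile()'s histogram indexing. */
#include <stdio.h>

int main(void)
{
	unsigned long stext = 0xe000000000500000UL;	/* pretend _stext */
	unsigned long prof_shift = 4;	/* 16-byte buckets */
	unsigned long prof_len = 1024;	/* histogram entries */
	unsigned long ip = stext + 0x1234;	/* sampled instruction pointer */

	ip -= stext;
	ip >>= prof_shift;
	if (ip > prof_len - 1)
		ip = prof_len - 1;	/* out-of-range IPs pile up in the last slot */

	printf("sample lands in bucket %lu\n", ip);	/* 0x1234 >> 4 == 0x123 */
	return 0;
}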
+
+/*
+ * Return the number of microseconds that elapsed since the last
+ * update to jiffies. The xtime_lock must be at least read-locked when
+ * calling this routine.
+ */
+static inline unsigned long
+gettimeoffset (void)
+{
+ unsigned long now = ia64_get_itc();
+ unsigned long elapsed_cycles, lost;
+
+ elapsed_cycles = now - (itm.next[smp_processor_id()] - itm.delta);
+
+ lost = lost_ticks;
+ if (lost)
+ elapsed_cycles += lost*itm.delta;
+
+ return (elapsed_cycles*my_cpu_data.usec_per_cyc) >> IA64_USEC_PER_CYC_SHIFT;
+}
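The return statement converts cycles to microseconds with a fixed-point
multiply; usec_per_cyc is precomputed in ia64_init_itm() below as
(1000000 << IA64_USEC_PER_CYC_SHIFT) / itc_freq. A worked stand-alone
version; the shift width and ITC frequency are assumptions chosen so the
arithmetic comes out exact.

/* Model of gettimeoffset()'s fixed-point cycles-to-usec conversion. */
#include <stdio.h>

#define USEC_PER_CYC_SHIFT	41	/* assumed shift width */

int main(void)
{
	unsigned long itc_freq = 256000000UL;	/* assumed 256 MHz ITC */
	unsigned long usec_per_cyc =
		(1000000UL << USEC_PER_CYC_SHIFT) / itc_freq;
	unsigned long cycles = 2560;	/* 10 usec worth of ITC cycles */
	unsigned long usec = (cycles * usec_per_cyc) >> USEC_PER_CYC_SHIFT;

	printf("%lu cycles -> %lu usec\n", cycles, usec);	/* prints 10 */
	return 0;
}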
+
+void
+do_settimeofday (struct timeval *tv)
+{
+ write_lock_irq(&xtime_lock);
+ {
+ /*
+ * This is revolting. We need to set the xtime.tv_usec
+ * correctly. However, the value in this location is the
+ * value at the last tick. Discover what
+ * correction gettimeofday would have done, and then
+ * undo it!
+ */
+ tv->tv_usec -= gettimeoffset();
+ while (tv->tv_usec < 0) {
+ tv->tv_usec += 1000000;
+ tv->tv_sec--;
+ }
+
+ xtime = *tv;
+ time_adjust = 0; /* stop active adjtime() */
+ time_status |= STA_UNSYNC;
+ time_maxerror = NTP_PHASE_LIMIT;
+ time_esterror = NTP_PHASE_LIMIT;
+ }
+ write_unlock_irq(&xtime_lock);
+}
+
+void
+do_gettimeofday (struct timeval *tv)
+{
+ unsigned long flags, usec, sec;
+
+ read_lock_irqsave(&xtime_lock, flags);
+ {
+ usec = gettimeoffset();
+
+ sec = xtime.tv_sec;
+ usec += xtime.tv_usec;
+ }
+ read_unlock_irqrestore(&xtime_lock, flags);
+
+ while (usec >= 1000000) {
+ usec -= 1000000;
+ ++sec;
+ }
+
+ tv->tv_sec = sec;
+ tv->tv_usec = usec;
+}
+
+static void
+timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ static unsigned long last_time;
+ static unsigned char count;
+ int cpu = smp_processor_id();
+
+ /*
+ * Here we are in the timer irq handler. We have irqs locally
+ * disabled, but we don't know if the timer_bh is running on
+ * another CPU. We need to avoid an SMP race by acquiring the
+ * xtime_lock.
+ */
+ write_lock(&xtime_lock);
+ while (1) {
+ /* do kernel PC profiling here. */
+ if (!user_mode(regs))
+ do_profile(regs->cr_iip);
+
+#ifdef CONFIG_SMP
+ smp_do_timer(regs);
+ if (smp_processor_id() == bootstrap_processor)
+ do_timer(regs);
+#else
+ do_timer(regs);
+#endif
+
+ itm.next[cpu] += itm.delta;
+ /*
+ * There is a race condition here: to be on the "safe"
+ * side, we process timer ticks until itm.next is
+ * ahead of the itc by at least half the timer
+ * interval. This should give us enough time to set
+ * the new itm value without losing a timer tick.
+ */
+ if (time_after(itm.next[cpu], ia64_get_itc() + itm.delta/2)) {
+ ia64_set_itm(itm.next[cpu]);
+ break;
+ }
+
+#if !(defined(CONFIG_IA64_SOFTSDV_HACKS) && defined(CONFIG_SMP))
+ /*
+ * SoftSDV in SMP mode is _slow_, so we do lose ticks,
+ * but it's really OK...
+ */
+ if (count > 0 && jiffies - last_time > 5*HZ)
+ count = 0;
+ if (count++ == 0) {
+ last_time = jiffies;
+ printk("Lost clock tick on CPU %d (now=%lx, next=%lx)!!\n",
+ cpu, ia64_get_itc(), itm.next[cpu]);
+# ifdef CONFIG_IA64_DEBUG_IRQ
+ printk("last_cli_ip=%lx\n", last_cli_ip);
+# endif
+ }
+#endif
+ }
+ write_unlock(&xtime_lock);
+}
+
+/*
+ * Encapsulate access to the itm structure for SMP.
+ */
+void __init
+ia64_cpu_local_tick(void)
+{
+ /* arrange for the cycle counter to generate a timer interrupt: */
+ ia64_set_itv(TIMER_IRQ, 0);
+ ia64_set_itc(0);
+ itm.next[smp_processor_id()] = ia64_get_itc() + itm.delta;
+ ia64_set_itm(itm.next[smp_processor_id()]);
+}
+
+void __init
+ia64_init_itm (void)
+{
+ unsigned long platform_base_freq, itc_freq, drift;
+ struct pal_freq_ratio itc_ratio, proc_ratio;
+ long status;
+
+ /*
+ * According to SAL v2.6, we need to use a SAL call to determine the
+ * platform base frequency and then a PAL call to determine the
+ * frequency ratio between the ITC and the base frequency.
+ */
+ status = ia64_sal_freq_base(SAL_FREQ_BASE_PLATFORM, &platform_base_freq, &drift);
+ if (status != 0) {
+ printk("SAL_FREQ_BASE_PLATFORM failed: %s\n", ia64_sal_strerror(status));
+ } else {
+ status = ia64_pal_freq_ratios(&proc_ratio, 0, &itc_ratio);
+ if (status != 0)
+ printk("PAL_FREQ_RATIOS failed with status=%ld\n", status);
+ }
+ if (status != 0) {
+ /* invent "random" values */
+ printk("SAL/PAL failed to obtain frequency info---inventing reasonably values\n");
+ platform_base_freq = 100000000;
+ itc_ratio.num = 3;
+ itc_ratio.den = 1;
+ }
+#if defined(CONFIG_IA64_LION_HACKS)
+ /* Our Lion currently returns base freq 104.857MHz, which
+ ain't right (it really is 100MHz). */
+ printk("SAL/PAL returned: base-freq=%lu, itc-ratio=%lu/%lu, proc-ratio=%lu/%lu\n",
+ platform_base_freq, itc_ratio.num, itc_ratio.den,
+ proc_ratio.num, proc_ratio.den);
+ platform_base_freq = 100000000;
+#elif 0 && defined(CONFIG_IA64_BIGSUR_HACKS)
+ /* BigSur with 991020 firmware returned itc-ratio=9/2 and base
+ freq 75MHz, which wasn't right. The 991119 firmware seems
+ to return the right values, so this isn't necessary
+ anymore... */
+ printk("SAL/PAL returned: base-freq=%lu, itc-ratio=%lu/%lu, proc-ratio=%lu/%lu\n",
+ platform_base_freq, itc_ratio.num, itc_ratio.den,
+ proc_ratio.num, proc_ratio.den);
+ platform_base_freq = 100000000;
+ proc_ratio.num = 5; proc_ratio.den = 1;
+ itc_ratio.num = 5; itc_ratio.den = 1;
+#elif defined(CONFIG_IA64_SOFTSDV_HACKS)
+ platform_base_freq = 10000000;
+ proc_ratio.num = 4; proc_ratio.den = 1;
+ itc_ratio.num = 4; itc_ratio.den = 1;
+#else
+ if (platform_base_freq < 40000000) {
+ printk("Platform base frequency %lu bogus---resetting to 75MHz!\n",
+ platform_base_freq);
+ platform_base_freq = 75000000;
+ }
+#endif
+ if (!proc_ratio.den)
+ proc_ratio.num = 1; /* avoid division by zero */
+ if (!itc_ratio.den)
+ itc_ratio.num = 1; /* avoid division by zero */
+
+ itc_freq = (platform_base_freq*itc_ratio.num)/itc_ratio.den;
+ itm.delta = itc_freq / HZ;
+ printk("timer: base freq=%lu.%03luMHz, ITC ratio=%lu/%lu, ITC freq=%lu.%03luMHz\n",
+ platform_base_freq / 1000000, (platform_base_freq / 1000) % 1000,
+ itc_ratio.num, itc_ratio.den, itc_freq / 1000000, (itc_freq / 1000) % 1000);
+
+ my_cpu_data.proc_freq = (platform_base_freq*proc_ratio.num)/proc_ratio.den;
+ my_cpu_data.itc_freq = itc_freq;
+ my_cpu_data.cyc_per_usec = itc_freq / 1000000;
+ my_cpu_data.usec_per_cyc = (1000000UL << IA64_USEC_PER_CYC_SHIFT) / itc_freq;
+
+ /* Setup the CPU local timer tick */
+ ia64_cpu_local_tick();
+}
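Plugging in the fallback values the function picks when SAL/PAL fail
(100MHz base frequency, ITC ratio 3/1) gives the timer constants directly;
the tick rate below is an assumption for illustration.

/* Timer constants derived from the fallback numbers above. */
#include <stdio.h>

int main(void)
{
	unsigned long base_freq = 100000000UL;	/* 100 MHz fallback   */
	unsigned long num = 3, den = 1;		/* fallback ITC ratio */
	unsigned long hz = 1024;		/* assumed tick rate  */
	unsigned long itc_freq = base_freq * num / den;
	unsigned long delta = itc_freq / hz;	/* ITC cycles per tick */

	printf("itc_freq=%lu Hz, delta=%lu cycles per tick\n", itc_freq, delta);
	return 0;
}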
+
+void __init
+time_init (void)
+{
+ /*
+ * Request the IRQ _before_ doing anything to cause that
+ * interrupt to be posted.
+ */
+ if (request_irq(TIMER_IRQ, timer_interrupt, 0, "timer", NULL))
+ panic("Could not allocate timer IRQ!");
+
+ efi_gettimeofday(&xtime);
+ ia64_init_itm();
+}
--- /dev/null
+/*
+ * Architecture-specific trap handling.
+ *
+ * Copyright (C) 1998-2000 Hewlett-Packard Co
+ * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+/*
+ * The fpu_fault() handler needs to be able to access and update all
+ * floating point registers. Those saved in pt_regs can be accessed
+ * through that structure, but those not saved will be accessed
+ * directly. To make this work, we need to ensure that the compiler
+ * does not end up using a preserved floating point register on its
+ * own. The following achieves this by declaring preserved registers
+ * that are not marked as "fixed" as global register variables.
+ */
+register double f2 asm ("f2"); register double f3 asm ("f3");
+register double f4 asm ("f4"); register double f5 asm ("f5");
+
+register long f16 asm ("f16"); register long f17 asm ("f17");
+register long f18 asm ("f18"); register long f19 asm ("f19");
+register long f20 asm ("f20"); register long f21 asm ("f21");
+register long f22 asm ("f22"); register long f23 asm ("f23");
+
+register double f24 asm ("f24"); register double f25 asm ("f25");
+register double f26 asm ("f26"); register double f27 asm ("f27");
+register double f28 asm ("f28"); register double f29 asm ("f29");
+register double f30 asm ("f30"); register double f31 asm ("f31");
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/sched.h>
+
+#ifdef CONFIG_KDB
+# include <linux/kdb.h>
+#endif
+
+#include <asm/processor.h>
+#include <asm/uaccess.h>
+
+#include <asm/fpswa.h>
+
+static fpswa_interface_t *fpswa_interface;
+
+void __init
+trap_init (void)
+{
+ printk("fpswa interface at %lx\n", ia64_boot_param.fpswa);
+ if (ia64_boot_param.fpswa) {
+#define OLD_FIRMWARE
+#ifdef OLD_FIRMWARE
+ /*
+ * HACK to work around broken firmware. This code
+ * applies the label fixup to the FPSWA interface and
+ * works both with old and new (fixed) firmware.
+ */
+ unsigned long addr = (unsigned long) __va(ia64_boot_param.fpswa);
+ unsigned long gp_val = *(unsigned long *)(addr + 8);
+
+ /* go indirect and indexed to get table address */
+ addr = gp_val;
+ gp_val = *(unsigned long *)(addr + 8);
+
+ while (gp_val == *(unsigned long *)(addr + 8)) {
+ *(unsigned long *)addr |= PAGE_OFFSET;
+ *(unsigned long *)(addr + 8) |= PAGE_OFFSET;
+ addr += 16;
+ }
+#endif
+ /* FPSWA fixup: make the interface pointer a kernel virtual address: */
+ fpswa_interface = __va(ia64_boot_param.fpswa);
+ }
+}
+
+void
+die_if_kernel (char *str, struct pt_regs *regs, long err)
+{
+ if (user_mode(regs)) {
+#if 1
+ /* XXX for debugging only */
+ printk ("!!die_if_kernel: %s(%d): %s %ld\n",
+ current->comm, current->pid, str, err);
+ show_regs(regs);
+#endif
+ return;
+ }
+
+ printk("%s[%d]: %s %ld\n", current->comm, current->pid, str, err);
+
+#ifdef CONFIG_KDB
+ while (1) {
+ kdb(KDB_REASON_PANIC, 0, regs);
+ printk("Cant go anywhere from Panic!\n");
+ }
+#endif
+
+ show_regs(regs);
+
+ if (current->thread.flags & IA64_KERNEL_DEATH) {
+ printk("die_if_kernel recursion detected.\n");
+ sti();
+ while (1);
+ }
+ current->thread.flags |= IA64_KERNEL_DEATH;
+ do_exit(SIGSEGV);
+}
+
+void
+ia64_bad_break (unsigned long break_num, struct pt_regs *regs)
+{
+ siginfo_t siginfo;
+
+ /* gdb uses a break number of 0xccccc for debug breakpoints: */
+ if (break_num != 0xccccc)
+ die_if_kernel("Bad break", regs, break_num);
+
+ siginfo.si_signo = SIGTRAP;
+ siginfo.si_errno = break_num; /* XXX is it legal to abuse si_errno like this? */
+ siginfo.si_code = TRAP_BRKPT;
+ send_sig_info(SIGTRAP, &siginfo, current);
+}
+
+/*
+ * Unimplemented system calls. This is called only for stuff that
+ * we're supposed to implement but haven't done so yet. Everything
+ * else goes to sys_ni_syscall.
+ */
+asmlinkage long
+ia64_ni_syscall (unsigned long arg0, unsigned long arg1, unsigned long arg2, unsigned long arg3,
+ unsigned long arg4, unsigned long arg5, unsigned long arg6, unsigned long arg7,
+ unsigned long stack)
+{
+ struct pt_regs *regs = (struct pt_regs *) &stack;
+
+ printk("<sc%ld(%lx,%lx,%lx,%lx)>\n", regs->r15, arg0, arg1, arg2, arg3);
+ return -ENOSYS;
+}
+
+/*
+ * disabled_fph_fault() is called when a user-level process attempts to
+ * access one of the registers f32..f127 while it doesn't own the
+ * fp-high register partition. When this happens, we save the current
+ * fph partition in the task_struct of the fpu-owner (if necessary)
+ * and then load the fp-high partition of the current task (if
+ * necessary).
+ */
+static inline void
+disabled_fph_fault (struct pt_regs *regs)
+{
+ struct task_struct *fpu_owner = ia64_get_fpu_owner();
+
+ regs->cr_ipsr &= ~(IA64_PSR_DFH | IA64_PSR_MFH);
+ if (fpu_owner != current) {
+ ia64_set_fpu_owner(current);
+
+ if (fpu_owner && ia64_psr(ia64_task_regs(fpu_owner))->mfh) {
+ fpu_owner->thread.flags |= IA64_THREAD_FPH_VALID;
+ __ia64_save_fpu(fpu_owner->thread.fph);
+ }
+ if ((current->thread.flags & IA64_THREAD_FPH_VALID) != 0) {
+ __ia64_load_fpu(current->thread.fph);
+ } else {
+ __ia64_init_fpu();
+ }
+ }
+}
+
+static inline int
+fp_emulate (int fp_fault, void *bundle, long *ipsr, long *fpsr, long *isr, long *pr, long *ifs,
+ struct pt_regs *regs)
+{
+ fp_state_t fp_state;
+ fpswa_ret_t ret;
+#ifdef FPSWA_BUG
+ struct ia64_fpreg f6_15[10];
+#endif
+
+ if (!fpswa_interface)
+ return -1;
+
+ memset(&fp_state, 0, sizeof(fp_state_t));
+
+ /*
+ * compute fp_state. Only FP registers f6 - f15 are passed to the
+ * FPSWA handler here, so set those bits in the mask and set the low
+ * volatile pointer to point to these registers.
+ */
+ fp_state.bitmask_low64 = 0xffc0; /* bit6..bit15 */
+#ifndef FPSWA_BUG
+ fp_state.fp_state_low_volatile = &regs->f6;
+#else
+ f6_15[0] = regs->f6;
+ f6_15[1] = regs->f7;
+ f6_15[2] = regs->f8;
+ f6_15[3] = regs->f9;
+ __asm__ ("stf.spill %0=f10" : "=m"(f6_15[4]));
+ __asm__ ("stf.spill %0=f11" : "=m"(f6_15[5]));
+ __asm__ ("stf.spill %0=f12" : "=m"(f6_15[6]));
+ __asm__ ("stf.spill %0=f13" : "=m"(f6_15[7]));
+ __asm__ ("stf.spill %0=f14" : "=m"(f6_15[8]));
+ __asm__ ("stf.spill %0=f15" : "=m"(f6_15[9]));
+ fp_state.fp_state_low_volatile = (fp_state_low_volatile_t *) f6_15;
+#endif
+ /*
+ * unsigned long (*EFI_FPSWA) (
+ * unsigned long trap_type,
+ * void *Bundle,
+ * unsigned long *pipsr,
+ * unsigned long *pfsr,
+ * unsigned long *pisr,
+ * unsigned long *ppreds,
+ * unsigned long *pifs,
+ * void *fp_state);
+ */
+ ret = (*fpswa_interface->fpswa)((unsigned long) fp_fault, bundle,
+ (unsigned long *) ipsr, (unsigned long *) fpsr,
+ (unsigned long *) isr, (unsigned long *) pr,
+ (unsigned long *) ifs, &fp_state);
+#ifdef FPSWA_BUG
+ __asm__ ("ldf.fill f10=%0" :: "m"(f6_15[4]));
+ __asm__ ("ldf.fill f11=%0" :: "m"(f6_15[5]));
+ __asm__ ("ldf.fill f12=%0" :: "m"(f6_15[6]));
+ __asm__ ("ldf.fill f13=%0" :: "m"(f6_15[7]));
+ __asm__ ("ldf.fill f14=%0" :: "m"(f6_15[8]));
+ __asm__ ("ldf.fill f15=%0" :: "m"(f6_15[9]));
+ regs->f6 = f6_15[0];
+ regs->f7 = f6_15[1];
+ regs->f8 = f6_15[2];
+ regs->f9 = f6_15[3];
+#endif
+ return ret.status;
+}
+
+/*
+ * Handle floating-point assist faults and traps.
+ */
+static int
+handle_fpu_swa (int fp_fault, struct pt_regs *regs, unsigned long isr)
+{
+ long exception, bundle[2];
+ unsigned long fault_ip;
+ static int fpu_swa_count = 0;
+ static unsigned long last_time;
+
+ fault_ip = regs->cr_iip;
+ if (!fp_fault && (ia64_psr(regs)->ri == 0))
+ fault_ip -= 16;
+ if (copy_from_user(bundle, (void *) fault_ip, sizeof(bundle)))
+ return -1;
+
+ if (fpu_swa_count > 5 && jiffies - last_time > 5*HZ)
+ fpu_swa_count = 0;
+ if (++fpu_swa_count < 5) {
+ last_time = jiffies;
+ printk("%s(%d): floating-point assist fault at ip %016lx\n",
+ current->comm, current->pid, regs->cr_iip + ia64_psr(regs)->ri);
+ }
+
+ exception = fp_emulate(fp_fault, bundle, &regs->cr_ipsr, &regs->ar_fpsr, &isr, &regs->pr,
+ &regs->cr_ifs, regs);
+ if (fp_fault) {
+ if (exception == 0) {
+ /* emulation was successful */
+ ia64_increment_ip(regs);
+ } else if (exception == -1) {
+ printk("handle_fpu_swa: fp_emulate() returned -1\n");
+ return -2;
+ } else {
+ /* is next instruction a trap? */
+ if (exception & 2) {
+ ia64_increment_ip(regs);
+ }
+ return -1;
+ }
+ } else {
+ if (exception == -1) {
+ printk("handle_fpu_swa: fp_emulate() returned -1\n");
+ return -2;
+ } else if (exception != 0) {
+ /* raise exception */
+ return -1;
+ }
+ }
+ return 0;
+}
+
+void
+ia64_fault (unsigned long vector, unsigned long isr, unsigned long ifa,
+ unsigned long iim, unsigned long itir, unsigned long arg5,
+ unsigned long arg6, unsigned long arg7, unsigned long stack)
+{
+ struct pt_regs *regs = (struct pt_regs *) &stack;
+ unsigned long code, error = isr;
+ struct siginfo siginfo;
+ char buf[128];
+ int result;
+ static const char *reason[] = {
+ "IA-64 Illegal Operation fault",
+ "IA-64 Privileged Operation fault",
+ "IA-64 Privileged Register fault",
+ "IA-64 Reserved Register/Field fault",
+ "Disabled Instruction Set Transition fault",
+ "Unknown fault 5", "Unknown fault 6", "Unknown fault 7", "Illegal Hazard fault",
+ "Unknown fault 9", "Unknown fault 10", "Unknown fault 11", "Unknown fault 12",
+ "Unknown fault 13", "Unknown fault 14", "Unknown fault 15"
+ };
+
+#if 0
+ /* this is for minimal trust debugging; yeah this kind of stuff is useful at times... */
+
+ if (vector != 25) {
+ static unsigned long last_time;
+ static char count;
+ unsigned long n = vector;
+ char buf[32], *cp;
+
+ if (count > 5 && jiffies - last_time > 5*HZ)
+ count = 0;
+
+ if (count++ < 5) {
+ last_time = jiffies;
+ cp = buf + sizeof(buf);
+ *--cp = '\0';
+ while (n) {
+ *--cp = "0123456789abcdef"[n & 0xf];
+ n >>= 4;
+ }
+ printk("<0x%s>", cp);
+ }
+ }
+#endif
+
+ switch (vector) {
+ case 24: /* General Exception */
+ code = (isr >> 4) & 0xf;
+ sprintf(buf, "General Exception: %s%s", reason[code],
+ (code == 3) ? ((isr & (1UL << 37))
+ ? " (RSE access)" : " (data access)") : "");
+#ifndef CONFIG_ITANIUM_ASTEP_SPECIFIC
+ if (code == 8) {
+# ifdef CONFIG_IA64_PRINT_HAZARDS
+ printk("%016lx:possible hazard, pr = %016lx\n", regs->cr_iip, regs->pr);
+# endif
+ return;
+ }
+#endif
+ break;
+
+ case 25: /* Disabled FP-Register */
+ if (isr & 2) {
+ disabled_fph_fault(regs);
+ return;
+ }
+ sprintf(buf, "Disabled FPL fault---not supposed to happen!");
+ break;
+
+ case 29: /* Debug */
+ case 35: /* Taken Branch Trap */
+ case 36: /* Single Step Trap */
+ switch (vector) {
+ case 29: siginfo.si_code = TRAP_BRKPT; break;
+ case 35: siginfo.si_code = TRAP_BRANCH; break;
+ case 36: siginfo.si_code = TRAP_TRACE; break;
+ }
+ siginfo.si_signo = SIGTRAP;
+ siginfo.si_errno = 0;
+ force_sig_info(SIGTRAP, &siginfo, current);
+ return;
+
+ case 30: /* Unaligned fault */
+ sprintf(buf, "Unaligned access in kernel mode---don't do this!");
+ break;
+
+ case 32: /* fp fault */
+ case 33: /* fp trap */
+ result = handle_fpu_swa((vector == 32) ? 1 : 0, regs, isr);
+ if (result < 0) {
+ siginfo.si_signo = SIGFPE;
+ siginfo.si_errno = 0;
+ siginfo.si_code = 0; /* XXX fix me */
+ siginfo.si_addr = (void *) (regs->cr_iip + ia64_psr(regs)->ri);
+ if (result == -1)
+ send_sig_info(SIGFPE, &siginfo, current);
+ else
+ force_sig(SIGFPE, current);
+ }
+ return;
+
+ case 34: /* Unimplemented Instruction Address Trap */
+ if (user_mode(regs)) {
+ printk("Woah! Unimplemented Instruction Address Trap!\n");
+ siginfo.si_code = ILL_BADIADDR;
+ siginfo.si_signo = SIGILL;
+ siginfo.si_errno = 0;
+ force_sig_info(SIGILL, &siginfo, current);
+ return;
+ }
+ sprintf(buf, "Unimplemented Instruction Address fault");
+ break;
+
+ case 45:
+ printk("Unexpected IA-32 exception\n");
+ force_sig(SIGSEGV, current);
+ return;
+
+ case 46:
+ printk("Unexpected IA-32 intercept trap\n");
+ force_sig(SIGSEGV, current);
+ return;
+
+ case 47:
+ sprintf(buf, "IA-32 Interruption Fault (int 0x%lx)", isr >> 16);
+ break;
+
+ default:
+ sprintf(buf, "Fault %lu", vector);
+ break;
+ }
+ die_if_kernel(buf, regs, error);
+ force_sig(SIGILL, current);
+}
--- /dev/null
+/*
+ * Architecture-specific unaligned trap handling.
+ *
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
+ */
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/smp_lock.h>
+#include <asm/uaccess.h>
+#include <asm/rse.h>
+#include <asm/processor.h>
+#include <asm/unaligned.h>
+
+extern void die_if_kernel(char *str, struct pt_regs *regs, long err) __attribute__ ((noreturn));
+
+#undef DEBUG_UNALIGNED_TRAP
+
+#ifdef DEBUG_UNALIGNED_TRAP
+#define DPRINT(a) { printk("%s, line %d: ", __FUNCTION__, __LINE__); printk a;}
+#else
+#define DPRINT(a)
+#endif
+
+#define IA64_FIRST_STACKED_GR 32
+#define IA64_FIRST_ROTATING_FR 32
+#define SIGN_EXT9 __IA64_UL(0xffffffffffffff00)
+
+/*
+ * For M-unit:
+ *
+ * opcode | m | x6 |
+ * --------|------|---------|
+ * [40-37] | [36] | [35:30] |
+ * --------|------|---------|
+ * 4 | 1 | 6 | = 11 bits
+ * --------------------------
+ * However bits [31:30] are not directly useful to distinguish between
+ * load/store so we can use [35:32] instead, which gives the following
+ * mask ([40:32]) using 9 bits. The 'e' comes from the fact that we defer
+ * checking the m-bit until later in the load/store emulation.
+ */
+#define IA64_OPCODE_MASK 0x1ef00000000
+
+/*
+ * Table C-28 Integer Load/Store
+ *
+ * We ignore [35:32]= 0x6, 0x7, 0xE, 0xF
+ *
+ * ld8.fill, st8.fill MUST be aligned because the RNATs are based on
+ * the address (bits [8:3]), so we must failed.
+ */
+#define LD_OP 0x08000000000
+#define LDS_OP 0x08100000000
+#define LDA_OP 0x08200000000
+#define LDSA_OP 0x08300000000
+#define LDBIAS_OP 0x08400000000
+#define LDACQ_OP 0x08500000000
+/* 0x086, 0x087 are not relevant */
+#define LDCCLR_OP 0x08800000000
+#define LDCNC_OP 0x08900000000
+#define LDCCLRACQ_OP 0x08a00000000
+#define ST_OP 0x08c00000000
+#define STREL_OP 0x08d00000000
+/* 0x08e,0x8f are not relevant */
+
+/*
+ * Table C-29 Integer Load +Reg
+ *
+ * we use the ld->m (bit [36:36]) field to determine whether or not we have
+ * a load/store of this form.
+ */
+
+/*
+ * Table C-30 Integer Load/Store +Imm
+ *
+ * We ignore [35:32]= 0x6, 0x7, 0xE, 0xF
+ *
+ * ld8.fill, st8.fill must be aligned because the NaT bits are based on
+ * the address, so we must fail and the program must be fixed.
+ */
+#define LD_IMM_OP 0x0a000000000
+#define LDS_IMM_OP 0x0a100000000
+#define LDA_IMM_OP 0x0a200000000
+#define LDSA_IMM_OP 0x0a300000000
+#define LDBIAS_IMM_OP 0x0a400000000
+#define LDACQ_IMM_OP 0x0a500000000
+/* 0x0a6, 0xa7 are not relevant */
+#define LDCCLR_IMM_OP 0x0a800000000
+#define LDCNC_IMM_OP 0x0a900000000
+#define LDCCLRACQ_IMM_OP 0x0aa00000000
+#define ST_IMM_OP 0x0ac00000000
+#define STREL_IMM_OP 0x0ad00000000
+/* 0x0ae,0xaf are not relevant */
+
+/*
+ * Table C-32 Floating-point Load/Store
+ */
+#define LDF_OP 0x0c000000000
+#define LDFS_OP 0x0c100000000
+#define LDFA_OP 0x0c200000000
+#define LDFSA_OP 0x0c300000000
+/* 0x0c6 is irrelevant */
+#define LDFCCLR_OP 0x0c800000000
+#define LDFCNC_OP 0x0c900000000
+/* 0x0cb is irrelevant */
+#define STF_OP 0x0cc00000000
+
+/*
+ * Table C-33 Floating-point Load +Reg
+ *
+ * we use the ld->m (bit [36:36]) field to determine whether or not we have
+ * a load/store of this form.
+ */
+
+/*
+ * Table C-34 Floating-point Load/Store +Imm
+ */
+#define LDF_IMM_OP 0x0e000000000
+#define LDFS_IMM_OP 0x0e100000000
+#define LDFA_IMM_OP 0x0e200000000
+#define LDFSA_IMM_OP 0x0e300000000
+/* 0x0e6 is irrelevant */
+#define LDFCCLR_IMM_OP 0x0e800000000
+#define LDFCNC_IMM_OP 0x0e900000000
+#define STF_IMM_OP 0x0ec00000000
+
+typedef struct {
+ unsigned long qp:6; /* [0:5] */
+ unsigned long r1:7; /* [6:12] */
+ unsigned long imm:7; /* [13:19] */
+ unsigned long r3:7; /* [20:26] */
+ unsigned long x:1; /* [27:27] */
+ unsigned long hint:2; /* [28:29] */
+ unsigned long x6_sz:2; /* [30:31] */
+ unsigned long x6_op:4; /* [32:35], x6 = x6_sz|x6_op */
+ unsigned long m:1; /* [36:36] */
+ unsigned long op:4; /* [37:40] */
+ unsigned long pad:23; /* [41:63] */
+} load_store_t;
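+
+/*
+ * For illustration: "ld8 r1=[r3]" has op=4, m=0, x6_op=0, x6_sz=3, so
+ * masking with IA64_OPCODE_MASK yields LD_OP, and the access size used
+ * by the emulation below computes as 1 << x6_sz == 8 bytes.
+ */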
+
+
+typedef enum {
+ UPD_IMMEDIATE, /* ldXZ r1=[r3],imm(9) */
+ UPD_REG /* ldXZ r1=[r3],r2 */
+} update_t;
+
+/*
+ * We use tables to keep track of the offsets of registers in the saved state.
+ * This way we save having big switch/case statements.
+ *
+ * We use bit 0 to indicate switch_stack or pt_regs.
+ * The offset is simply shifted by 1 bit.
+ * A 2-byte value should be enough to hold any kind of offset
+ *
+ * In case the calling convention changes (and thus pt_regs/switch_stack)
+ * simply use RSW instead of RPT or vice-versa.
+ */
+
+#define RPO(x) ((size_t) &((struct pt_regs *)0)->x)
+#define RSO(x) ((size_t) &((struct switch_stack *)0)->x)
+
+#define RPT(x) (RPO(x) << 1)
+#define RSW(x) (1| RSO(x)<<1)
+
+#define GR_OFFS(x) (gr_info[x]>>1)
+#define GR_IN_SW(x) (gr_info[x] & 0x1)
+
+#define FR_OFFS(x) (fr_info[x]>>1)
+#define FR_IN_SW(x) (fr_info[x] & 0x1)
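+
+/*
+ * Example: gr_info[4] == RSW(r4), so bit 0 is set, GR_IN_SW(4) == 1
+ * (r4 is a preserved register, found in the switch_stack) and
+ * GR_OFFS(4) recovers the byte offset of r4 within that structure.
+ * A scratch register such as r8 uses RPT() and lives in pt_regs.
+ */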
+
+static u16 gr_info[32]={
+ 0, /* r0 is read-only : WE SHOULD NEVER GET THIS */
+
+ RPT(r1), RPT(r2), RPT(r3),
+
+ RSW(r4), RSW(r5), RSW(r6), RSW(r7),
+
+ RPT(r8), RPT(r9), RPT(r10), RPT(r11),
+ RPT(r12), RPT(r13), RPT(r14), RPT(r15),
+
+ RPT(r16), RPT(r17), RPT(r18), RPT(r19),
+ RPT(r20), RPT(r21), RPT(r22), RPT(r23),
+ RPT(r24), RPT(r25), RPT(r26), RPT(r27),
+ RPT(r28), RPT(r29), RPT(r30), RPT(r31)
+};
+
+static u16 fr_info[32]={
+ 0, /* constant : WE SHOULD NEVER GET THIS */
+ 0, /* constant : WE SHOULD NEVER GET THIS */
+
+ RSW(f2), RSW(f3), RSW(f4), RSW(f5),
+
+ RPT(f6), RPT(f7), RPT(f8), RPT(f9),
+
+ RSW(f10), RSW(f11), RSW(f12), RSW(f13), RSW(f14),
+ RSW(f15), RSW(f16), RSW(f17), RSW(f18), RSW(f19),
+ RSW(f20), RSW(f21), RSW(f22), RSW(f23), RSW(f24),
+ RSW(f25), RSW(f26), RSW(f27), RSW(f28), RSW(f29),
+ RSW(f30), RSW(f31)
+};
+
+/* Invalidate ALAT entry for integer register REGNO. */
+static void
+invala_gr (int regno)
+{
+# define F(reg) case reg: __asm__ __volatile__ ("invala.e r%0" :: "i"(reg)); break
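+ /* For example, F(3) expands to: case 3: __asm__ __volatile__ ("invala.e r3"); break;
+ (the "i" constraint substitutes the literal register number) */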
+
+ switch (regno) {
+ F( 0); F( 1); F( 2); F( 3); F( 4); F( 5); F( 6); F( 7);
+ F( 8); F( 9); F( 10); F( 11); F( 12); F( 13); F( 14); F( 15);
+ F( 16); F( 17); F( 18); F( 19); F( 20); F( 21); F( 22); F( 23);
+ F( 24); F( 25); F( 26); F( 27); F( 28); F( 29); F( 30); F( 31);
+ F( 32); F( 33); F( 34); F( 35); F( 36); F( 37); F( 38); F( 39);
+ F( 40); F( 41); F( 42); F( 43); F( 44); F( 45); F( 46); F( 47);
+ F( 48); F( 49); F( 50); F( 51); F( 52); F( 53); F( 54); F( 55);
+ F( 56); F( 57); F( 58); F( 59); F( 60); F( 61); F( 62); F( 63);
+ F( 64); F( 65); F( 66); F( 67); F( 68); F( 69); F( 70); F( 71);
+ F( 72); F( 73); F( 74); F( 75); F( 76); F( 77); F( 78); F( 79);
+ F( 80); F( 81); F( 82); F( 83); F( 84); F( 85); F( 86); F( 87);
+ F( 88); F( 89); F( 90); F( 91); F( 92); F( 93); F( 94); F( 95);
+ F( 96); F( 97); F( 98); F( 99); F(100); F(101); F(102); F(103);
+ F(104); F(105); F(106); F(107); F(108); F(109); F(110); F(111);
+ F(112); F(113); F(114); F(115); F(116); F(117); F(118); F(119);
+ F(120); F(121); F(122); F(123); F(124); F(125); F(126); F(127);
+ }
+# undef F
+}
+
+/* Invalidate ALAT entry for floating-point register REGNO. */
+static void
+invala_fr (int regno)
+{
+# define F(reg) case reg: __asm__ __volatile__ ("invala.e f%0" :: "i"(reg)); break
+
+ switch (regno) {
+ F( 0); F( 1); F( 2); F( 3); F( 4); F( 5); F( 6); F( 7);
+ F( 8); F( 9); F( 10); F( 11); F( 12); F( 13); F( 14); F( 15);
+ F( 16); F( 17); F( 18); F( 19); F( 20); F( 21); F( 22); F( 23);
+ F( 24); F( 25); F( 26); F( 27); F( 28); F( 29); F( 30); F( 31);
+ F( 32); F( 33); F( 34); F( 35); F( 36); F( 37); F( 38); F( 39);
+ F( 40); F( 41); F( 42); F( 43); F( 44); F( 45); F( 46); F( 47);
+ F( 48); F( 49); F( 50); F( 51); F( 52); F( 53); F( 54); F( 55);
+ F( 56); F( 57); F( 58); F( 59); F( 60); F( 61); F( 62); F( 63);
+ F( 64); F( 65); F( 66); F( 67); F( 68); F( 69); F( 70); F( 71);
+ F( 72); F( 73); F( 74); F( 75); F( 76); F( 77); F( 78); F( 79);
+ F( 80); F( 81); F( 82); F( 83); F( 84); F( 85); F( 86); F( 87);
+ F( 88); F( 89); F( 90); F( 91); F( 92); F( 93); F( 94); F( 95);
+ F( 96); F( 97); F( 98); F( 99); F(100); F(101); F(102); F(103);
+ F(104); F(105); F(106); F(107); F(108); F(109); F(110); F(111);
+ F(112); F(113); F(114); F(115); F(116); F(117); F(118); F(119);
+ F(120); F(121); F(122); F(123); F(124); F(125); F(126); F(127);
+ }
+# undef F
+}
+
+static void
+set_rse_reg(struct pt_regs *regs, unsigned long r1, unsigned long val, int nat)
+{
+ struct switch_stack *sw = (struct switch_stack *)regs - 1;
+ unsigned long *kbs = ((unsigned long *)current) + IA64_RBS_OFFSET/8;
+ unsigned long on_kbs;
+ unsigned long *bsp, *bspstore, *addr, *ubs_end, *slot;
+ unsigned long rnats;
+ long nlocals;
+
+ /*
+ * cr_ifs=[rv:ifm], ifm=[....:sof(6)]
+ * nlocals = number of local (in+loc) registers of the faulting function
+ */
+ nlocals = (regs->cr_ifs) & 0x7f;
+
+ DPRINT(("sw.bsptore=%lx pt.bspstore=%lx\n", sw->ar_bspstore, regs->ar_bspstore));
+ DPRINT(("cr.ifs=%lx sof=%ld sol=%ld\n",
+ regs->cr_ifs, regs->cr_ifs &0x7f, (regs->cr_ifs>>7)&0x7f));
+
+ on_kbs = ia64_rse_num_regs(kbs, (unsigned long *)sw->ar_bspstore);
+ bspstore = (unsigned long *)regs->ar_bspstore;
+
+ DPRINT(("rse_slot_num=0x%lx\n",ia64_rse_slot_num((unsigned long *)sw->ar_bspstore)));
+ DPRINT(("kbs=%p nlocals=%ld\n", kbs, nlocals));
+ DPRINT(("bspstore next rnat slot %p\n",
+ ia64_rse_rnat_addr((unsigned long *)sw->ar_bspstore)));
+ DPRINT(("on_kbs=%ld rnats=%ld\n",
+ on_kbs, ((sw->ar_bspstore-(unsigned long)kbs)>>3) - on_kbs));
+
+ /*
+ * See get_rse_reg() for an explanation on the following instructions
+ */
+ ubs_end = ia64_rse_skip_regs(bspstore, on_kbs);
+ bsp = ia64_rse_skip_regs(ubs_end, -nlocals);
+ addr = slot = ia64_rse_skip_regs(bsp, r1 - 32);
+
+ DPRINT(("ubs_end=%p bsp=%p addr=%p slot=0x%lx\n",
+ ubs_end, bsp, addr, ia64_rse_slot_num(addr)));
+
+ ia64_poke(regs, current, (unsigned long)addr, val);
+
+ /*
+ * addr will now contain the address of the RNAT for the register
+ */
+ addr = ia64_rse_rnat_addr(addr);
+
+ ia64_peek(regs, current, (unsigned long)addr, &rnats);
+ DPRINT(("rnat @%p = 0x%lx nat=%d rnatval=%lx\n",
+ addr, rnats, nat, rnats &ia64_rse_slot_num(slot)));
+
+ if ( nat ) {
+ rnats |= __IA64_UL(1) << ia64_rse_slot_num(slot);
+ } else {
+ rnats &= ~(__IA64_UL(1) << ia64_rse_slot_num(slot));
+ }
+ ia64_poke(regs, current, (unsigned long)addr, rnats);
+
+ DPRINT(("rnat changed to @%p = 0x%lx\n", addr, rnats));
+}
+
+
+static void
+get_rse_reg(struct pt_regs *regs, unsigned long r1, unsigned long *val, int *nat)
+{
+ struct switch_stack *sw = (struct switch_stack *)regs - 1;
+ unsigned long *kbs = (unsigned long *)current + IA64_RBS_OFFSET/8;
+ unsigned long on_kbs;
+ long nlocals;
+ unsigned long *bsp, *addr, *ubs_end, *slot, *bspstore;
+ unsigned long rnats;
+
+ /*
+ * cr_ifs=[rv:ifm], ifm=[....:sof(6)]
+ * nlocals=number of local registers in the faulting function
+ */
+ nlocals = (regs->cr_ifs) & 0x7f;
+
+ /*
+ * save_switch_stack does a flushrs and saves bspstore.
+ * on_kbs = actual number of registers saved on kernel backing store
+ * (taking into account potential RNATs)
+ *
+ * Note that this number can be greater than nlocals if the dirty
+ * partitions included more than one stack frame at the time we
+ * switched to KBS
+ */
+ on_kbs = ia64_rse_num_regs(kbs, (unsigned long *)sw->ar_bspstore);
+ bspstore = (unsigned long *)regs->ar_bspstore;
+
+ /*
+ * To simplify the logic, we calculate everything as if there was only
+ * one backing store i.e., the user one (UBS). We let it to peek/poke
+ * to figure out whether the register we're looking for really is
+ * on the UBS or on KBS.
+ *
+ * regs->ar_bsptore = address of last register saved on UBS (before switch)
+ *
+ * ubs_end = virtual end of the UBS (if everything had been spilled there)
+ *
+ * We know that ubs_end is the point where the last register on the
+ * stack frame we're interested in as been saved. So we need to walk
+ * our way backward to figure out what the BSP "was" for that frame,
+ * this will give us the location of r32.
+ *
+ * bsp = "virtual UBS" address of r32 for our frame
+ *
+ * Finally, get compute the address of the register we're looking for
+ * using bsp as our base (move up again).
+ *
+ * Please note that in our case, we know that the register is necessarily
+ * on the KBS because we are only interested in the current frame at the moment
+ * we got the exception i.e., bsp is not changed until we switch to KBS.
+ */
+ ubs_end = ia64_rse_skip_regs(bspstore, on_kbs);
+ bsp = ia64_rse_skip_regs(ubs_end, -nlocals);
+ addr = slot = ia64_rse_skip_regs(bsp, r1 - 32);
+
+ DPRINT(("ubs_end=%p bsp=%p addr=%p slot=0x%lx\n",
+ ubs_end, bsp, addr, ia64_rse_slot_num(addr)));
+
+ ia64_peek(regs, current, (unsigned long)addr, val);
+
+ /*
+ * addr will now contain the address of the RNAT for the register
+ */
+ addr = ia64_rse_rnat_addr(addr);
+
+ ia64_peek(regs, current, (unsigned long)addr, &rnats);
+ DPRINT(("rnat @%p = 0x%lx\n", addr, rnats));
+
+ if ( nat ) *nat = rnats >> ia64_rse_slot_num(slot) & 0x1;
+}
+
+
+static void
+setreg(unsigned long regnum, unsigned long val, int nat, struct pt_regs *regs)
+{
+ struct switch_stack *sw = (struct switch_stack *)regs -1;
+ unsigned long addr;
+ unsigned long bitmask;
+ unsigned long *unat;
+
+
+ /*
+ * First takes care of stacked registers
+ */
+ if ( regnum >= IA64_FIRST_STACKED_GR ) {
+ set_rse_reg(regs, regnum, val, nat);
+ return;
+ }
+
+ /*
+ * Using r0 as a target raises a General Exception fault which has
+ * higher priority than the Unaligned Reference fault.
+ */
+
+ /*
+ * Now look at registers in [0-31] range and init correct UNAT
+ */
+ if ( GR_IN_SW(regnum) ) {
+ addr = (unsigned long)sw;
+ unat = &sw->ar_unat;
+ } else {
+ addr = (unsigned long)regs;
+ unat = &sw->caller_unat;
+ }
+ DPRINT(("tmp_base=%lx switch_stack=%s offset=%d\n",
+ addr, unat==&sw->ar_unat ? "yes":"no", GR_OFFS(regnum)));
+ /*
+ * add offset from base of struct
+ * and do it !
+ */
+ addr += GR_OFFS(regnum);
+
+ *(unsigned long *)addr = val;
+
+ /*
+ * We need to clear the corresponding UNAT bit to fully emulate the load.
+ * UNAT bit_pos = GR[r3]{8:3}, from EAS-2.4
+ */
+ bitmask = __IA64_UL(1) << (addr >> 3 & 0x3f);
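+ /*
+ * For illustration: if the register was saved at an address ending in
+ * 0x88, its UNAT bit is (0x88 >> 3) & 0x3f == 17; consecutive saved
+ * registers thus map to consecutive UNAT bits.
+ */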
+ DPRINT(("*0x%lx=0x%lx NaT=%d prev_unat @%p=%lx\n", addr, val, nat, unat, *unat));
+ if ( nat ) {
+ *unat |= bitmask;
+ } else {
+ *unat &= ~bitmask;
+ }
+ DPRINT(("*0x%lx=0x%lx NaT=%d new unat: %p=%lx\n", addr, val, nat, unat,*unat));
+}
+
+#define IA64_FPH_OFFS(r) (r - IA64_FIRST_ROTATING_FR)
+
+static void
+setfpreg(unsigned long regnum, struct ia64_fpreg *fpval, struct pt_regs *regs)
+{
+ struct switch_stack *sw = (struct switch_stack *)regs - 1;
+ unsigned long addr;
+
+ /*
+ * From EAS-2.5: FPDisableFault has higher priority than
+ * Unaligned Fault. Thus, when we get here, we know the partition is
+ * enabled.
+ *
+ * The registers [32-127] are usually saved in the tss. When we get here,
+ * they are NECESSARILY live because they are only saved explicitly.
+ * We have 3 ways of updating the values: force a save of the range
+ * in tss, use a gigantic switch/case statement or generate code on the
+ * fly to store to the right register.
+ * For now, we are using the (slow) save/restore way.
+ */
+ if ( regnum >= IA64_FIRST_ROTATING_FR ) {
+ /*
+ * force a save of [32-127] to tss
+ * we use the __() form to avoid fiddling with the dfh bit
+ */
+ __ia64_save_fpu(¤t->thread.fph[0]);
+
+ current->thread.fph[IA64_FPH_OFFS(regnum)] = *fpval;
+
+ __ia64_load_fpu(¤t->thread.fph[0]);
+
+ /*
+ * mark the high partition as being used now
+ *
+ * This is REQUIRED because the disabled_fph_fault() does
+ * not set it, it's relying on the faulting instruction to
+ * do it. In our case the faulting instruction never gets executed
+ * completely, so we need to toggle the bit.
+ */
+ regs->cr_ipsr |= IA64_PSR_MFH;
+ } else {
+ /*
+ * pt_regs or switch_stack ?
+ */
+ if ( FR_IN_SW(regnum) ) {
+ addr = (unsigned long)sw;
+ } else {
+ addr = (unsigned long)regs;
+ }
+
+ DPRINT(("tmp_base=%lx offset=%d\n", addr, FR_OFFS(regnum)));
+
+ addr += FR_OFFS(regnum);
+ *(struct ia64_fpreg *)addr = *fpval;
+
+ /*
+ * mark the low partition as being used now
+ *
+ * It is highly unlikely that this bit is not already set, but
+ * let's do it for safety.
+ */
+ regs->cr_ipsr |= IA64_PSR_MFL;
+
+ }
+}
+
+/*
+ * Those 2 inline functions generate the spilled versions of the constant floating point
+ * registers which can be used with stfX
+ */
+static inline void
+float_spill_f0(struct ia64_fpreg *final)
+{
+ __asm__ __volatile__ ("stf.spill [%0]=f0" :: "r"(final) : "memory");
+}
+
+static inline void
+float_spill_f1(struct ia64_fpreg *final)
+{
+ __asm__ __volatile__ ("stf.spill [%0]=f1" :: "r"(final) : "memory");
+}
+
+static void
+getfpreg(unsigned long regnum, struct ia64_fpreg *fpval, struct pt_regs *regs)
+{
+ struct switch_stack *sw = (struct switch_stack *)regs -1;
+ unsigned long addr;
+
+ /*
+ * From EAS-2.5: FPDisableFault has higher priority than
+ * Unaligned Fault. Thus, when we get here, we know the partition is
+ * enabled.
+ *
+ * When regnum > 31, the register is still live and
+ * we need to force a save to the tss to get access to it.
+ * See discussion in setfpreg() for reasons and other ways of doing this.
+ */
+ if ( regnum >= IA64_FIRST_ROTATING_FR ) {
+
+ /*
+ * force a save of [32-127] to tss
+ * we use the__ia64_save_fpu() form to avoid fiddling with
+ * the dfh bit.
+ */
+ __ia64_save_fpu(¤t->thread.fph[0]);
+
+ *fpval = current->thread.fph[IA64_FPH_OFFS(regnum)];
+ } else {
+ /*
+ * f0 = 0.0, f1= 1.0. Those registers are constant and are thus
+ * not saved, we must generate their spilled form on the fly
+ */
+ switch(regnum) {
+ case 0:
+ float_spill_f0(fpval);
+ break;
+ case 1:
+ float_spill_f1(fpval);
+ break;
+ default:
+ /*
+ * pt_regs or switch_stack ?
+ */
+ addr = FR_IN_SW(regnum) ? (unsigned long)sw
+ : (unsigned long)regs;
+
+ DPRINT(("is_sw=%d tmp_base=%lx offset=0x%x\n",
+ FR_IN_SW(regnum), addr, FR_OFFS(regnum)));
+
+ addr += FR_OFFS(regnum);
+ *fpval = *(struct ia64_fpreg *)addr;
+ }
+ }
+}
+
+
+static void
+getreg(unsigned long regnum, unsigned long *val, int *nat, struct pt_regs *regs)
+{
+ struct switch_stack *sw = (struct switch_stack *)regs -1;
+ unsigned long addr, *unat;
+
+ if ( regnum >= IA64_FIRST_STACKED_GR ) {
+ get_rse_reg(regs, regnum, val, nat);
+ return;
+ }
+
+ /*
+ * take care of r0 (read-only always evaluate to 0)
+ */
+ if ( regnum == 0 ) {
+ *val = 0;
+ *nat = 0;
+ return;
+ }
+
+ /*
+ * Now look at registers in [0-31] range and init correct UNAT
+ */
+ if ( GR_IN_SW(regnum) ) {
+ addr = (unsigned long)sw;
+ unat = &sw->ar_unat;
+ } else {
+ addr = (unsigned long)regs;
+ unat = &sw->caller_unat;
+ }
+
+ DPRINT(("addr_base=%lx offset=0x%x\n", addr, GR_OFFS(regnum)));
+
+ addr += GR_OFFS(regnum);
+
+ *val = *(unsigned long *)addr;
+
+ /*
+ * do it only when requested
+ */
+ if ( nat ) *nat = (*unat >> (addr >> 3 & 0x3f)) & 0x1UL;
+}
+
+static void
+emulate_load_updates(update_t type, load_store_t *ld, struct pt_regs *regs, unsigned long ifa)
+{
+ /*
+ * IMPORTANT:
+ * Given the way we handle unaligned speculative loads, we should
+ * not get to this point in the code but we keep this sanity check,
+ * just in case.
+ */
+ if ( ld->x6_op == 1 || ld->x6_op == 3 ) {
+ printk(KERN_ERR "%s: register update on speculative load, error\n", __FUNCTION__);
+ die_if_kernel("unaligned reference on speculative load with register update\n",
+ regs, 30);
+ }
+
+
+ /*
+ * at this point, we know that the base register to update is valid i.e.,
+ * it's not r0
+ */
+ if ( type == UPD_IMMEDIATE ) {
+ unsigned long imm;
+
+ /*
+ * Load +Imm: ldXZ r1=[r3],imm(9)
+ *
+ *
+ * form imm9: [13:19] contain the first 7 bits
+ */
+ imm = ld->x << 7 | ld->imm;
+
+ /*
+ * sign extend (1+8bits) if m set
+ */
+ if (ld->m) imm |= SIGN_EXT9;
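+ /*
+ * Worked example (for illustration): m=0, x=0, imm7=0x10 yields
+ * imm = 16; m=1, x=1, imm7=0x7f forms the 9-bit value 0x1ff, which
+ * sign-extends to -1, i.e., "ldXZ r1=[r3],-1".
+ */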
+
+ /*
+ * ifa == r3 and we know that the NaT bit on r3 was clear so
+ * we can directly use ifa.
+ */
+ ifa += imm;
+
+ setreg(ld->r3, ifa, 0, regs);
+
+ DPRINT(("ld.x=%d ld.m=%d imm=%ld r3=0x%lx\n", ld->x, ld->m, imm, ifa));
+
+ } else if ( ld->m ) {
+ unsigned long r2;
+ int nat_r2;
+
+ /*
+ * Load +Reg Opcode: ldXZ r1=[r3],r2
+ *
+ * Note: that we update r3 even in the case of ldfX.a
+ * (where the load does not happen)
+ *
+ * The way the load algorithm works, we know that r3 does not
+ * have its NaT bit set (would have gotten NaT consumption
+ * before getting the unaligned fault). So we can use ifa
+ * which equals r3 at this point.
+ *
+ * IMPORTANT:
+ * The above statement holds ONLY because we know that we
+ * never reach this code when trying to do a ldX.s.
+ * If we ever make it to here on an ldfX.s then
+ */
+ getreg(ld->imm, &r2, &nat_r2, regs);
+
+ ifa += r2;
+
+ /*
+ * propagate Nat r2 -> r3
+ */
+ setreg(ld->r3, ifa, nat_r2, regs);
+
+ DPRINT(("imm=%d r2=%ld r3=0x%lx nat_r2=%d\n",ld->imm, r2, ifa, nat_r2));
+ }
+}
+
+
+static int
+emulate_load_int(unsigned long ifa, load_store_t *ld, struct pt_regs *regs)
+{
+ unsigned long val;
+ unsigned int len = 1<< ld->x6_sz;
+
+ /*
+ * The macro assumes sequential access (which is the case): if the
+ * first byte is at an invalid address, we return here. Otherwise
+ * there is a guard page at the top of the user's address space and
+ * the first access would generate a NaT consumption fault and return
+ * with a SIGSEGV, which is what we want.
+ *
+ * Note: the first argument is ignored
+ */
+ if ( !access_ok(VERIFY_READ, (void *)ifa, len) ) {
+ DPRINT(("verify area failed on %lx\n", ifa));
+ return -1;
+ }
+
+ /*
+ * r0, as target, doesn't need to be checked because Illegal Instruction
+ * faults have higher priority than unaligned faults.
+ *
+ * r0 cannot be found as the base as it would never generate an
+ * unaligned reference.
+ */
+
+ /*
+ * ldX.a we don't try to emulate anything but we must
+ * invalidate the ALAT entry.
+ * See comment below for explanation on how we handle ldX.a
+ */
+ if ( ld->x6_op != 0x2 ) {
+ /*
+ * we rely on the macros in unaligned.h for now i.e.,
+ * we let the compiler figure out how to read memory gracefully.
+ *
+ * We need this switch/case because of the way the inline function
+ * works. The code is optimized by the compiler and looks like
+ * a single switch/case.
+ */
+ switch(len) {
+ case 2:
+ val = ia64_get_unaligned((void *)ifa, 2);
+ break;
+ case 4:
+ val = ia64_get_unaligned((void *)ifa, 4);
+ break;
+ case 8:
+ val = ia64_get_unaligned((void *)ifa, 8);
+ break;
+ default:
+ DPRINT(("unknown size: x6=%d\n", ld->x6_sz));
+ return -1;
+ }
+
+ setreg(ld->r1, val, 0, regs);
+ }
+
+ /*
+ * check for updates on any kind of loads
+ */
+ if ( ld->op == 0x5 || ld->m )
+ emulate_load_updates(ld->op == 0x5 ? UPD_IMMEDIATE: UPD_REG,
+ ld, regs, ifa);
+
+ /*
+ * handling of various loads (based on EAS2.4):
+ *
+ * ldX.acq (ordered load):
+ * - acquire semantics would have been used, so force fence instead.
+ *
+ *
+ * ldX.c.clr (check load and clear):
+ * - if we get to this handler, it's because the entry was not in the ALAT.
+ * Therefore the operation reverts to a normal load
+ *
+ * ldX.c.nc (check load no clear):
+ * - same as previous one
+ *
+ * ldX.c.clr.acq (ordered check load and clear):
+ * - same as above for c.clr part. The load needs to have acquire semantics. So
+ * we use the fence semantics which is stronger and thus ensures correctness.
+ *
+ * ldX.a (advanced load):
+ * - suppose ldX.a r1=[r3]. If we get to the unaligned trap it's because the
+ * address doesn't match the requested size alignment. This means that we would
+ * possibly need more than one load to get the result.
+ *
+ * The load part can be handled just like a normal load, however the difficult
+ * part is to get the right thing into the ALAT. The critical piece of information
+ * is the base address of the load & its size. To do that, a ld.a must be executed,
+ * clearly any address can be pushed into the table by using ld1.a r1=[r3]. Now
+ * if we use the same target register, we will be okay for the check.a instruction.
+ * If we look at the store, basically a stX [r3]=r1 checks the ALAT for any entry
+ * which would overlap within [r3,r3+X] (the size of the load was stored in the
+ * ALAT). If such an entry is found the entry is invalidated. But this is not good
+ * enough, take the following example:
+ * r3=3
+ * ld4.a r1=[r3]
+ *
+ * Could be emulated by doing:
+ * ld1.a r1=[r3],1
+ * store to temporary;
+ * ld1.a r1=[r3],1
+ * store & shift to temporary;
+ * ld1.a r1=[r3],1
+ * store & shift to temporary;
+ * ld1.a r1=[r3]
+ * store & shift to temporary;
+ * r1=temporary
+ *
+ * So in this case, you would get the right value in r1 but the wrong info in
+ * the ALAT. Notice that you could do it in reverse to finish with address 3
+ * but you would still get the size wrong. To get the size right, one needs to
+ * execute exactly the same kind of load. You could do it from an aligned
+ * temporary location, but you would get the address wrong.
+ *
+ * So no matter what, it is not possible to emulate an advanced load
+ * correctly. But is that really critical ?
+ *
+ *
+ * Now one has to look at how ld.a is used: one must either do a ld.c.* or
+ * chk.a.* to reuse the value stored in the ALAT. Both can "fail" (meaning no
+ * entry found in ALAT), and that's perfectly ok because:
+ *
+ * - ld.c.*, if the entry is not present a normal load is executed
+ * - chk.a.*, if the entry is not present, execution jumps to recovery code
+ *
+ * In either case, the load can be potentially retried in another form.
+ *
+ * So it's okay NOT to do any actual load on an unaligned ld.a. However the ALAT
+ * must be invalidated for the register (so that chk.a.*/ld.c.* don't pick up
+ * a stale entry later). The register base update MUST also be performed.
+ *
+ * Now what is the content of the register and its NaT bit in the case we don't
+ * do the load? EAS-2.4 says (in case an actual load is needed)
+ *
+ * - r1 = [r3], Nat = 0 if succeeds
+ * - r1 = 0 Nat = 0 if trying to access non-speculative memory
+ *
+ * For us, there is nothing to do, because both ld.c.* and chk.a.* are going to
+ * retry and thus eventually reload the register thereby changing Nat and
+ * register content.
+ */
+
+ /*
+ * when the load has the .acq completer then
+ * use ordering fence.
+ */
+ if (ld->x6_op == 0x5 || ld->x6_op == 0xa)
+ mb();
+
+ /*
+ * invalidate ALAT entry in case of advanced load
+ */
+ if (ld->x6_op == 0x2)
+ invala_gr(ld->r1);
+
+ return 0;
+}
+
+static int
+emulate_store_int(unsigned long ifa, load_store_t *ld, struct pt_regs *regs)
+{
+ unsigned long r2;
+ unsigned int len = 1<< ld->x6_sz;
+
+ /*
+ * The macro assumes sequential access (which is the case): if the
+ * first byte is at an invalid address, we return here. Otherwise
+ * there is a guard page at the top of the user's address space and
+ * the first access would generate a NaT consumption fault and return
+ * with a SIGSEGV, which is what we want.
+ *
+ * Note: the first argument is ignored
+ */
+ if ( !access_ok(VERIFY_WRITE, (void *)ifa, len) ) {
+ DPRINT(("verify area failed on %lx\n",ifa));
+ return -1;
+ }
+
+ /*
+ * If we get to this handler, the NaT bits on both r3 and r2 have already
+ * been checked, so we don't need to do it again.
+ *
+ * extract the value to be stored
+ */
+ getreg(ld->imm, &r2, 0, regs);
+
+ /*
+ * we rely on the macros in unaligned.h for now i.e.,
+ * we let the compiler figure out how to read memory gracefully.
+ *
+ * We need this switch/case because of the way the inline function
+ * works. The code is optimized by the compiler and looks like
+ * a single switch/case.
+ */
+ DPRINT(("st%d [%lx]=%lx\n", len, ifa, r2));
+
+ switch(len) {
+ case 2:
+ ia64_put_unaligned(r2, (void *)ifa, 2);
+ break;
+ case 4:
+ ia64_put_unaligned(r2, (void *)ifa, 4);
+ break;
+ case 8:
+ ia64_put_unaligned(r2, (void *)ifa, 8);
+ break;
+ default:
+ DPRINT(("unknown size: x6=%d\n", ld->x6_sz));
+ return -1;
+ }
+ /*
+ * stX [r3]=r2,imm(9)
+ *
+ * NOTE:
+ * ld->r3 can never be r0, because r0 would not generate an
+ * unaligned access.
+ */
+ if ( ld->op == 0x5 ) {
+ unsigned long imm;
+
+ /*
+ * form imm9: [6:12] of the instruction contain the first 7 bits
+ */
+ imm = ld->x << 7 | ld->r1;
+ /*
+ * sign extend (8bits) if m set
+ */
+ if ( ld->m ) imm |= SIGN_EXT9;
+ /*
+ * ifa == r3 (NaT is necessarily cleared)
+ */
+ ifa += imm;
+
+ DPRINT(("imm=%lx r3=%lx\n", imm, ifa));
+
+ setreg(ld->r3, ifa, 0, regs);
+ }
+ /*
+ * we don't have alat_invalidate_multiple() so we need
+ * to do the complete flush :-<<
+ */
+ ia64_invala();
+
+ /*
+ * stX.rel: use fence instead of release
+ */
+ if ( ld->x6_op == 0xd ) mb();
+
+ return 0;
+}
+
+/*
+ * floating-point load/store sizes in bytes
+ */
+static const unsigned short float_fsz[4]={
+ 16, /* extended precision (e) */
+ 8, /* integer (8) */
+ 4, /* single precision (s) */
+ 8 /* double precision (d) */
+};
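+
+/*
+ * For example, an "ldfs" (single precision, x6_sz == 2) is a 4-byte
+ * access, while an "ldfe" (extended precision, x6_sz == 0) accesses
+ * 16 bytes.
+ */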
+
+static inline void
+mem2float_extended(struct ia64_fpreg *init, struct ia64_fpreg *final)
+{
+ __asm__ __volatile__ ("ldfe f6=[%0];; stf.spill [%1]=f6"
+ :: "r"(init), "r"(final) : "f6","memory");
+}
+
+static inline void
+mem2float_integer(struct ia64_fpreg *init, struct ia64_fpreg *final)
+{
+ __asm__ __volatile__ ("ldf8 f6=[%0];; stf.spill [%1]=f6"
+ :: "r"(init), "r"(final) : "f6","memory");
+}
+
+static inline void
+mem2float_single(struct ia64_fpreg *init, struct ia64_fpreg *final)
+{
+ __asm__ __volatile__ ("ldfs f6=[%0];; stf.spill [%1]=f6"
+ :: "r"(init), "r"(final) : "f6","memory");
+}
+
+static inline void
+mem2float_double(struct ia64_fpreg *init, struct ia64_fpreg *final)
+{
+ __asm__ __volatile__ ("ldfd f6=[%0];; stf.spill [%1]=f6"
+ :: "r"(init), "r"(final) : "f6","memory");
+}
+
+static inline void
+float2mem_extended(struct ia64_fpreg *init, struct ia64_fpreg *final)
+{
+ __asm__ __volatile__ ("ldf.fill f6=[%0];; stfe [%1]=f6"
+ :: "r"(init), "r"(final) : "f6","memory");
+}
+
+static inline void
+float2mem_integer(struct ia64_fpreg *init, struct ia64_fpreg *final)
+{
+ __asm__ __volatile__ ("ldf.fill f6=[%0];; stf8 [%1]=f6"
+ :: "r"(init), "r"(final) : "f6","memory");
+}
+
+static inline void
+float2mem_single(struct ia64_fpreg *init, struct ia64_fpreg *final)
+{
+ __asm__ __volatile__ ("ldf.fill f6=[%0];; stfs [%1]=f6"
+ :: "r"(init), "r"(final) : "f6","memory");
+}
+
+static inline void
+float2mem_double(struct ia64_fpreg *init, struct ia64_fpreg *final)
+{
+ __asm__ __volatile__ ("ldf.fill f6=[%0];; stfd [%1]=f6"
+ :: "r"(init), "r"(final) : "f6","memory");
+}
+
+static int
+emulate_load_floatpair(unsigned long ifa, load_store_t *ld, struct pt_regs *regs)
+{
+ struct ia64_fpreg fpr_init[2];
+ struct ia64_fpreg fpr_final[2];
+ unsigned long len = float_fsz[ld->x6_sz];
+
+ if ( !access_ok(VERIFY_READ, (void *)ifa, len<<1) ) {
+ DPRINT(("verify area failed on %lx\n", ifa));
+ return -1;
+ }
+ /*
+ * fr0 & fr1 don't need to be checked because Illegal Instruction
+ * faults have higher priority than unaligned faults.
+ *
+ * r0 cannot be found as the base as it would never generate an
+ * unaligned reference.
+ */
+
+ /*
+ * make sure we get clean buffers
+ */
+ memset(&fpr_init,0, sizeof(fpr_init));
+ memset(&fpr_final,0, sizeof(fpr_final));
+
+ /*
+ * ldfpX.a: we don't try to emulate anything but we must
+ * invalidate the ALAT entry and execute updates, if any.
+ */
+ if ( ld->x6_op != 0x2 ) {
+ /*
+ * does the unaligned access
+ */
+ memcpy(&fpr_init[0], (void *)ifa, len);
+ memcpy(&fpr_init[1], (void *)(ifa+len), len);
+
+ DPRINT(("ld.r1=%d ld.imm=%d x6_sz=%d\n", ld->r1, ld->imm, ld->x6_sz));
+#ifdef DEBUG_UNALIGNED_TRAP
+ { int i; char *c = (char *)&fpr_init;
+ printk("fpr_init= ");
+ for(i=0; i < len<<1; i++ ) {
+ printk("%02x ", c[i]&0xff);
+ }
+ printk("\n");
+ }
+#endif
+ /*
+ * XXX fixme
+ * Could optimize inlines by using ldfpX & 2 spills
+ */
+ switch( ld->x6_sz ) {
+ case 0:
+ mem2float_extended(&fpr_init[0], &fpr_final[0]);
+ mem2float_extended(&fpr_init[1], &fpr_final[1]);
+ break;
+ case 1:
+ mem2float_integer(&fpr_init[0], &fpr_final[0]);
+ mem2float_integer(&fpr_init[1], &fpr_final[1]);
+ break;
+ case 2:
+ mem2float_single(&fpr_init[0], &fpr_final[0]);
+ mem2float_single(&fpr_init[1], &fpr_final[1]);
+ break;
+ case 3:
+ mem2float_double(&fpr_init[0], &fpr_final[0]);
+ mem2float_double(&fpr_init[1], &fpr_final[1]);
+ break;
+ }
+#ifdef DEBUG_UNALIGNED_TRAP
+ { int i; char *c = (char *)&fpr_final;
+ printk("fpr_final= ");
+ for(i=0; i < len<<1; i++ ) {
+ printk("%02x ", c[i]&0xff);
+ }
+ printk("\n");
+ }
+#endif
+ /*
+ * XXX fixme
+ *
+ * A possible optimization would be to drop fpr_final
+ * and directly use the storage from the saved context i.e.,
+ * the actual final destination (pt_regs, switch_stack or tss).
+ */
+ setfpreg(ld->r1, &fpr_final[0], regs);
+ setfpreg(ld->imm, &fpr_final[1], regs);
+ }
+
+ /*
+ * Check for updates: only immediate updates are available for this
+ * instruction.
+ */
+ if ( ld->m ) {
+
+ /*
+ * the immediate is implicit given the ldsz of the operation:
+ * single: 8 (2x4) and for all others it's 16 (2x8)
+ */
+ ifa += len<<1;
+
+ /*
+ * IMPORTANT:
+ * the fact that we force the NaT of r3 to zero is ONLY valid
+ * as long as we don't come here with a ldfpX.s.
+ * For this reason we keep this sanity check
+ */
+ if ( ld->x6_op == 1 || ld->x6_op == 3 ) {
+ printk(KERN_ERR "%s: register update on speculative load pair, error\n", __FUNCTION__);
+ }
+
+
+ setreg(ld->r3, ifa, 0, regs);
+ }
+
+ /*
+ * Invalidate ALAT entries, if any, for both registers.
+ */
+ if ( ld->x6_op == 0x2 ) {
+ invala_fr(ld->r1);
+ invala_fr(ld->imm);
+ }
+ return 0;
+}
+
+
+static int
+emulate_load_float(unsigned long ifa, load_store_t *ld, struct pt_regs *regs)
+{
+ struct ia64_fpreg fpr_init;
+ struct ia64_fpreg fpr_final;
+ unsigned long len = float_fsz[ld->x6_sz];
+
+ /*
+ * check for load pair because our masking scheme is not fine-grained enough
+ if ( ld->x == 1 ) return emulate_load_floatpair(ifa,ld,regs);
+ */
+
+ if ( !access_ok(VERIFY_READ, (void *)ifa, len) ) {
+ DPRINT(("verify area failed on %lx\n", ifa));
+ return -1;
+ }
+ /*
+ * fr0 & fr1 don't need to be checked because Illegal Instruction
+ * faults have higher priority than unaligned faults.
+ *
+ * r0 cannot be found as the base as it would never generate an
+ * unaligned reference.
+ */
+
+
+ /*
+ * make sure we get clean buffers
+ */
+ memset(&fpr_init,0, sizeof(fpr_init));
+ memset(&fpr_final,0, sizeof(fpr_final));
+
+ /*
+ * ldfX.a we don't try to emulate anything but we must
+ * invalidate the ALAT entry.
+ * See comments in ldX for descriptions on how the various loads are handled.
+ */
+ if ( ld->x6_op != 0x2 ) {
+
+ /*
+ * does the unaligned access
+ */
+ memcpy(&fpr_init, (void *)ifa, len);
+
+ DPRINT(("ld.r1=%d x6_sz=%d\n", ld->r1, ld->x6_sz));
+#ifdef DEBUG_UNALIGNED_TRAP
+ { int i; char *c = (char *)&fpr_init;
+ printk("fpr_init= ");
+ for(i=0; i < len; i++ ) {
+ printk("%02x ", c[i]&0xff);
+ }
+ printk("\n");
+ }
+#endif
+ /*
+ * we only do something for x6_op={0,8,9}
+ */
+ switch( ld->x6_sz ) {
+ case 0:
+ mem2float_extended(&fpr_init, &fpr_final);
+ break;
+ case 1:
+ mem2float_integer(&fpr_init, &fpr_final);
+ break;
+ case 2:
+ mem2float_single(&fpr_init, &fpr_final);
+ break;
+ case 3:
+ mem2float_double(&fpr_init, &fpr_final);
+ break;
+ }
+#ifdef DEBUG_UNALIGNED_TRAP
+ { int i; char *c = (char *)&fpr_final;
+ printk("fpr_final= ");
+ for(i=0; i < len; i++ ) {
+ printk("%02x ", c[i]&0xff);
+ }
+ printk("\n");
+ }
+#endif
+ /*
+ * XXX fixme
+ *
+ * A possible optimization would be to drop fpr_final
+ * and directly use the storage from the saved context i.e.,
+ * the actual final destination (pt_regs, switch_stack or tss).
+ */
+ setfpreg(ld->r1, &fpr_final, regs);
+ }
+
+ /*
+ * check for updates on any loads
+ */
+ if ( ld->op == 0x7 || ld->m )
+ emulate_load_updates(ld->op == 0x7 ? UPD_IMMEDIATE: UPD_REG,
+ ld, regs, ifa);
+
+
+ /*
+ * invalidate ALAT entry in case of advanced floating point loads
+ */
+ if (ld->x6_op == 0x2)
+ invala_fr(ld->r1);
+
+ return 0;
+}
+
+
+static int
+emulate_store_float(unsigned long ifa, load_store_t *ld, struct pt_regs *regs)
+{
+ struct ia64_fpreg fpr_init;
+ struct ia64_fpreg fpr_final;
+ unsigned long len = float_fsz[ld->x6_sz];
+
+ /*
+ * The macro assumes sequential access (which is the case): if the
+ * first byte is at an invalid address, we return here. Otherwise
+ * there is a guard page at the top of the user's address space and
+ * the first access would generate a NaT consumption fault and return
+ * with a SIGSEGV, which is what we want.
+ *
+ * Note: the first argument is ignored
+ */
+ if ( !access_ok(VERIFY_WRITE, (void *)ifa, len) ) {
+ DPRINT(("verify area failed on %lx\n",ifa));
+ return -1;
+ }
+
+ /*
+ * make sure we get clean buffers
+ */
+ memset(&fpr_init,0, sizeof(fpr_init));
+ memset(&fpr_final,0, sizeof(fpr_final));
+
+
+ /*
+ * If we get to this handler, the NaT bits on both r3 and r2 have already
+ * been checked, so we don't need to do it again.
+ *
+ * extract the value to be stored
+ */
+ getfpreg(ld->imm, &fpr_init, regs);
+ /*
+ * during this step, we extract the spilled registers from the saved
+ * context i.e., we refill. Then we store (no spill) to temporary
+ * aligned location
+ */
+ switch( ld->x6_sz ) {
+ case 0:
+ float2mem_extended(&fpr_init, &fpr_final);
+ break;
+ case 1:
+ float2mem_integer(&fpr_init, &fpr_final);
+ break;
+ case 2:
+ float2mem_single(&fpr_init, &fpr_final);
+ break;
+ case 3:
+ float2mem_double(&fpr_init, &fpr_final);
+ break;
+ }
+ DPRINT(("ld.r1=%d x6_sz=%d\n", ld->r1, ld->x6_sz));
+#ifdef DEBUG_UNALIGNED_TRAP
+ { int i; char *c = (char *)&fpr_init;
+ printk("fpr_init= ");
+ for(i=0; i < len; i++ ) {
+ printk("%02x ", c[i]&0xff);
+ }
+ printk("\n");
+ }
+ { int i; char *c = (char *)&fpr_final;
+ printk("fpr_final= ");
+ for(i=0; i < len; i++ ) {
+ printk("%02x ", c[i]&0xff);
+ }
+ printk("\n");
+ }
+#endif
+
+ /*
+ * does the unaligned store
+ */
+ memcpy((void *)ifa, &fpr_final, len);
+
+ /*
+ * stfX [r3]=r2,imm(9)
+ *
+ * NOTE:
+ * ld->r3 can never be r0, because r0 would not generate an
+ * unaligned access.
+ */
+ if ( ld->op == 0x7 ) {
+ unsigned long imm;
+
+ /*
+ * form imm9: [6:12] of the instruction contain the first 7 bits
+ */
+ imm = ld->x << 7 | ld->r1;
+ /*
+ * sign extend (8bits) if m set
+ */
+ if ( ld->m ) imm |= SIGN_EXT9;
+ /*
+ * ifa == r3 (NaT is necessarily cleared)
+ */
+ ifa += imm;
+
+ DPRINT(("imm=%lx r3=%lx\n", imm, ifa));
+
+ setreg(ld->r3, ifa, 0, regs);
+ }
+ /*
+ * we don't have alat_invalidate_multiple() so we need
+ * to do the complete flush :-<<
+ */
+ ia64_invala();
+
+ return 0;
+}
+
+void
+ia64_handle_unaligned(unsigned long ifa, struct pt_regs *regs)
+{
+ static unsigned long unalign_count;
+ static long last_time;
+
+ struct ia64_psr *ipsr = ia64_psr(regs);
+ unsigned long *bundle_addr;
+ unsigned long opcode;
+ unsigned long op;
+ load_store_t *insn;
+ int ret = -1;
+
+ /*
+ * We flag unaligned references while in kernel as
+ * errors: the kernel must be fixed. The switch code
+ * is in ivt.S at entry 30.
+ *
+ * So here we keep a simple sanity check.
+ */
+ if ( !user_mode(regs) ) {
+ die_if_kernel("Unaligned reference while in kernel\n", regs, 30);
+ /* NOT_REACHED */
+ }
+
+ /*
+ * Make sure we log the unaligned access, so that user/sysadmin can notice it
+ * and eventually fix the program.
+ *
+ * We don't want to do that for every access so we pace it with jiffies.
+ */
+ if ( unalign_count > 5 && jiffies - last_time > 5*HZ ) unalign_count = 0;
+ if ( ++unalign_count < 5 ) {
+ last_time = jiffies;
+ printk("%s(%d): unaligned trap accessing %016lx (ip=%016lx)\n",
+ current->comm, current->pid, ifa, regs->cr_iip + ipsr->ri);
+
+ }
+
+ DPRINT(("iip=%lx ifa=%lx isr=%lx\n", regs->cr_iip, ifa, regs->cr_ipsr));
+ DPRINT(("ISR.ei=%d ISR.sp=%d\n", ipsr->ri, ipsr->it));
+
+ bundle_addr = (unsigned long *)(regs->cr_iip);
+
+ /*
+ * extract the instruction from the bundle given the slot number
+ */
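+ /*
+ * An IA-64 bundle is 128 bits: a 5-bit template followed by three
+ * 41-bit instruction slots (slot 0 = bits [5:45], slot 1 = [46:86],
+ * slot 2 = [87:127]). That is why slot 1 has to be stitched together
+ * from both 64-bit halves of the bundle below.
+ */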
+ switch ( ipsr->ri ) {
+ case 0: op = *bundle_addr >> 5;
+ break;
+
+ case 1: op = *bundle_addr >> 46 | (*(bundle_addr+1) & 0x7fffff)<<18;
+ break;
+
+ case 2: op = *(bundle_addr+1) >> 23;
+ break;
+ }
+
+ insn = (load_store_t *)&op;
+ opcode = op & IA64_OPCODE_MASK;
+
+ DPRINT(("opcode=%lx ld.qp=%d ld.r1=%d ld.imm=%d ld.r3=%d ld.x=%d ld.hint=%d "
+ "ld.x6=0x%x ld.m=%d ld.op=%d\n",
+ opcode,
+ insn->qp,
+ insn->r1,
+ insn->imm,
+ insn->r3,
+ insn->x,
+ insn->hint,
+ insn->x6_sz,
+ insn->m,
+ insn->op));
+
+ /*
+ * IMPORTANT:
+ * Notice that the switch statement does NOT cover all possible instructions
+ * that DO generate unaligned references. This is on purpose because for some
+ * instructions it DOES NOT make sense to try and emulate the access. Sometimes it
+ * is WRONG to try and emulate. Here is a list of instructions we don't emulate, i.e.,
+ * the program will get a signal and die:
+ *
+ * load/store:
+ * - ldX.spill
+ * - stX.spill
+ * Reason: RNATs are based on addresses
+ *
+ * synchronization:
+ * - cmpxchg
+ * - fetchadd
+ * - xchg
+ * Reason: ATOMIC operations cannot be emulated properly using multiple
+ * instructions.
+ *
+ * speculative loads:
+ * - ldX.sZ
+ * Reason: side effects, code must be ready to deal with failure so simpler
+ * to let the load fail.
+ * ---------------------------------------------------------------------------------
+ * XXX fixme
+ *
+ * I would like to get rid of this switch case and do something
+ * more elegant.
+ */
+ switch(opcode) {
+ case LDS_OP:
+ case LDSA_OP:
+ case LDS_IMM_OP:
+ case LDSA_IMM_OP:
+ case LDFS_OP:
+ case LDFSA_OP:
+ case LDFS_IMM_OP:
+ /*
+ * The instruction will be retried with deferred exceptions
+ * turned on, and we should get the NaT bit installed
+ *
+ * IMPORTANT:
+ * When PSR_ED is set, the register & immediate update
+ * forms are actually executed even though the operation
+ * failed. So we don't need to take care of this.
+ */
+ DPRINT(("forcing PSR_ED\n"));
+ regs->cr_ipsr |= IA64_PSR_ED;
+ return;
+
+ case LD_OP:
+ case LDA_OP:
+ case LDBIAS_OP:
+ case LDACQ_OP:
+ case LDCCLR_OP:
+ case LDCNC_OP:
+ case LDCCLRACQ_OP:
+ case LD_IMM_OP:
+ case LDA_IMM_OP:
+ case LDBIAS_IMM_OP:
+ case LDACQ_IMM_OP:
+ case LDCCLR_IMM_OP:
+ case LDCNC_IMM_OP:
+ case LDCCLRACQ_IMM_OP:
+ ret = emulate_load_int(ifa, insn, regs);
+ break;
+ case ST_OP:
+ case STREL_OP:
+ case ST_IMM_OP:
+ case STREL_IMM_OP:
+ ret = emulate_store_int(ifa, insn, regs);
+ break;
+ case LDF_OP:
+ case LDFA_OP:
+ case LDFCCLR_OP:
+ case LDFCNC_OP:
+ case LDF_IMM_OP:
+ case LDFA_IMM_OP:
+ case LDFCCLR_IMM_OP:
+ case LDFCNC_IMM_OP:
+ ret = insn->x ?
+ emulate_load_floatpair(ifa, insn, regs):
+ emulate_load_float(ifa, insn, regs);
+ break;
+ case STF_OP:
+ case STF_IMM_OP:
+ ret = emulate_store_float(ifa, insn, regs);
+ }
+
+ DPRINT(("ret=%d\n", ret));
+ if ( ret ) {
+ lock_kernel();
+ force_sig(SIGSEGV, current);
+ unlock_kernel();
+ } else {
+ /*
+ * given today's architecture this case is not likely to happen
+ * because a memory access instruction (M) can never be in the
+ * last slot of a bundle. But let's keep it for now.
+ */
+ if ( ipsr->ri == 2 ) regs->cr_iip += 16;
+ ipsr->ri = (ipsr->ri + 1) & 3;
+ }
+
+ DPRINT(("ipsr->ri=%d iip=%lx\n", ipsr->ri, regs->cr_iip));
+}
--- /dev/null
+/*
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+#include <linux/kernel.h>
+#include <linux/sched.h>
+
+#include <asm/unwind.h>
+
+void
+ia64_unwind_init_from_blocked_task (struct ia64_frame_info *info, struct task_struct *t)
+{
+ struct switch_stack *sw = (struct switch_stack *) (t->thread.ksp + 16);
+ unsigned long sol, limit, top;
+
+ memset(info, 0, sizeof(*info));
+
+ sol = (sw->ar_pfs >> 7) & 0x7f; /* size of locals */
+
+ limit = (unsigned long) t + IA64_RBS_OFFSET;
+ top = sw->ar_bspstore;
+ if (top - (unsigned long) t >= IA64_STK_OFFSET)
+ top = limit;
+
+ info->regstk.limit = (unsigned long *) limit;
+ info->regstk.top = (unsigned long *) top;
+ info->bsp = ia64_rse_skip_regs(info->regstk.top, -sol);
+ info->top_rnat = sw->ar_rnat;
+ info->cfm = sw->ar_pfs;
+ info->ip = sw->b0;
+}
+
+void
+ia64_unwind_init_from_current (struct ia64_frame_info *info, struct pt_regs *regs)
+{
+ struct switch_stack *sw = (struct switch_stack *) regs - 1;
+ unsigned long sol, sof, *bsp, limit, top;
+
+ limit = (unsigned long) current + IA64_RBS_OFFSET;
+ top = sw->ar_bspstore;
+ if (top - (unsigned long) current >= IA64_STK_OFFSET)
+ top = limit;
+
+ memset(info, 0, sizeof(*info));
+
+ sol = (sw->ar_pfs >> 7) & 0x7f; /* size of locals */
+ info->regstk.limit = (unsigned long *) limit;
+ info->regstk.top = (unsigned long *) top;
+ info->top_rnat = sw->ar_rnat;
+
+ /* this gives us the bsp top level frame (kdb interrupt frame): */
+ bsp = ia64_rse_skip_regs((unsigned long *) top, -sol);
+
+ /* now skip past the interrupt frame: */
+ sof = regs->cr_ifs & 0x7f; /* size of frame */
+ info->cfm = regs->cr_ifs;
+ info->bsp = ia64_rse_skip_regs(bsp, -sof);
+ info->ip = regs->cr_iip;
+}
+
+static unsigned long
+read_reg (struct ia64_frame_info *info, int regnum, int *is_nat)
+{
+ unsigned long *addr, *rnat_addr, rnat;
+
+ addr = ia64_rse_skip_regs(info->bsp, regnum);
+ if (addr < info->regstk.limit || addr >= info->regstk.top || ((long) addr & 0x7) != 0) {
+ *is_nat = 1;
+ return 0xdeadbeefdeadbeef;
+ }
+ rnat_addr = ia64_rse_rnat_addr(addr);
+
+ if (rnat_addr >= info->regstk.top)
+ rnat = info->top_rnat;
+ else
+ rnat = *rnat_addr;
+ *is_nat = (rnat & (1UL << ia64_rse_slot_num(addr))) != 0;
+ return *addr;
+}
+
+/*
+ * On entry, info->regstk.top should point to the register backing
+ * store for r32.
+ */
+int
+ia64_unwind_to_previous_frame (struct ia64_frame_info *info)
+{
+ unsigned long sol, cfm = info->cfm;
+ int is_nat;
+
+ sol = (cfm >> 7) & 0x7f; /* size of locals */
+
+ /*
+ * In general, we would have to make use of unwind info to
+ * unwind an IA-64 stack, but for now gcc uses a special
+ * convention that makes this possible without full-fledged
+ * unwind info. Specifically, we expect "rp" in the second-to-last
+ * and "ar.pfs" in the last local register, so the
+ * number of locals in a frame must be at least two. If it's
+ * less than that, we reached the end of the C call stack.
+ */
+ if (sol < 2)
+ return -1;
+
+ info->ip = read_reg(info, sol - 2, &is_nat);
+ if (is_nat)
+ return -1;
+
+ cfm = read_reg(info, sol - 1, &is_nat);
+ if (is_nat)
+ return -1;
+
+ sol = (cfm >> 7) & 0x7f;
+
+ info->cfm = cfm;
+ info->bsp = ia64_rse_skip_regs(info->bsp, -sol);
+ return 0;
+}
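+
+/*
+ * Illustrative use only (a hypothetical backtrace loop, not part of the
+ * interface above):
+ *
+ *	struct ia64_frame_info info;
+ *
+ *	ia64_unwind_init_from_blocked_task(&info, task);
+ *	do {
+ *		printk(" [<%016lx>]\n", info.ip);
+ *	} while (ia64_unwind_to_previous_frame(&info) >= 0);
+ */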
--- /dev/null
+#
+# Makefile for ia64-specific library routines..
+#
+
+.S.o:
+ $(CC) -D__ASSEMBLY__ $(AFLAGS) -traditional -c $< -o $@
+
+OBJS = __divdi3.o __divsi3.o __udivdi3.o __udivsi3.o \
+ __moddi3.o __modsi3.o __umoddi3.o __umodsi3.o \
+ checksum.o clear_page.o csum_partial_copy.o copy_page.o \
+ copy_user.o clear_user.o memset.o strncpy_from_user.o \
+ strlen.o strlen_user.o strnlen_user.o \
+ flush.o do_csum.o
+
+lib.a: $(OBJS)
+ $(AR) rcs lib.a $(OBJS)
+
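+# idiv.S below is assembled eight times with different combinations of
+# -DSINGLE, -DUNSIGNED and -DMODULO; the PASTE macros in idiv.S turn each
+# combination into the corresponding __{u}div*/__{u}mod* entry point.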
+__divdi3.o: idiv.S
+ $(CC) $(AFLAGS) -c -o $@ $<
+
+__divsi3.o: idiv.S
+	$(CC) $(AFLAGS) -DSINGLE -c -o $@ $<
+
+__udivdi3.o: idiv.S
+	$(CC) $(AFLAGS) -DUNSIGNED -c -o $@ $<
+
+__udivsi3.o: idiv.S
+	$(CC) $(AFLAGS) -DUNSIGNED -DSINGLE -c -o $@ $<
+
+__moddi3.o: idiv.S
+	$(CC) $(AFLAGS) -DMODULO -c -o $@ $<
+
+__modsi3.o: idiv.S
+	$(CC) $(AFLAGS) -DMODULO -DSINGLE -c -o $@ $<
+
+__umoddi3.o: idiv.S
+	$(CC) $(AFLAGS) -DMODULO -DUNSIGNED -c -o $@ $<
+
+__umodsi3.o: idiv.S
+	$(CC) $(AFLAGS) -DMODULO -DUNSIGNED -DSINGLE -c -o $@ $<
+
+include $(TOPDIR)/Rules.make
--- /dev/null
+/*
+ * Network checksum routines
+ *
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
+ *
+ * Most of the code coming from arch/alpha/lib/checksum.c
+ *
+ * This file contains network checksum routines that are better done
+ * in an architecture-specific manner due to speed.
+ */
+
+#include <linux/string.h>
+
+#include <asm/byteorder.h>
+
+static inline unsigned short
+from64to16(unsigned long x)
+{
+ /* add up 32-bit words for 33 bits */
+ x = (x & 0xffffffff) + (x >> 32);
+ /* add up 16-bit and 17-bit words for 17+c bits */
+ x = (x & 0xffff) + (x >> 16);
+ /* add up 16-bit and 2-bit for 16+c bit */
+ x = (x & 0xffff) + (x >> 16);
+ /* add up carry.. */
+ x = (x & 0xffff) + (x >> 16);
+ return x;
+}
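+
+/*
+ * Worked example of the folding above: x = 0x00000001ffffffff gives
+ * 0xffffffff + 0x1 = 0x100000000 after the first step, 0x0 + 0x10000
+ * after the second, and 0x0 + 0x1 = 1 after the third; each fold wraps
+ * the carry back in, as one's-complement arithmetic requires.
+ */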
+
+/*
+ * computes the checksum of the TCP/UDP pseudo-header
+ * returns a 16-bit checksum, already complemented.
+ */
+unsigned short int csum_tcpudp_magic(unsigned long saddr,
+ unsigned long daddr,
+ unsigned short len,
+ unsigned short proto,
+ unsigned int sum)
+{
+ return ~from64to16(saddr + daddr + sum +
+ ((unsigned long) ntohs(len) << 16) +
+ ((unsigned long) proto << 8));
+}
+
+unsigned int csum_tcpudp_nofold(unsigned long saddr,
+ unsigned long daddr,
+ unsigned short len,
+ unsigned short proto,
+ unsigned int sum)
+{
+ unsigned long result;
+
+ result = (saddr + daddr + sum +
+ ((unsigned long) ntohs(len) << 16) +
+ ((unsigned long) proto << 8));
+
+	/* Fold down to 32 bits so we don't lose in the typedef-less
+	   network stack.  */
+ /* 64 to 33 */
+ result = (result & 0xffffffff) + (result >> 32);
+ /* 33 to 32 */
+ result = (result & 0xffffffff) + (result >> 32);
+ return result;
+}
+
+extern unsigned long do_csum(const unsigned char *, unsigned int, unsigned int);
+extern unsigned long do_csum_c(const unsigned char *, unsigned int, unsigned int);
+
+/*
+ * This is a version of ip_compute_csum() optimized for IP headers,
+ * which are always checksummed on 4-octet boundaries.
+ */
+unsigned short ip_fast_csum(unsigned char * iph, unsigned int ihl)
+{
+ return ~do_csum(iph,ihl*4,0);
+}
+
+/*
+ * computes the checksum of a memory block at buff, length len,
+ * and adds in "sum" (32-bit)
+ *
+ * returns a 32-bit number suitable for feeding into itself
+ * or csum_tcpudp_magic
+ *
+ * this function must be called with even lengths, except
+ * for the last fragment, which may be odd
+ *
+ * it's best to have buff aligned on a 32-bit boundary
+ */
+unsigned int csum_partial(const unsigned char * buff, int len, unsigned int sum)
+{
+ unsigned long result = do_csum(buff, len, 0);
+
+ /* add in old sum, and carry.. */
+ result += sum;
+ /* 32+c bits -> 32 bits */
+ result = (result & 0xffffffff) + (result >> 32);
+ return result;
+}
+
+
+/*
+ * this routine is used for miscellaneous IP-like checksums, mainly
+ * in icmp.c
+ */
+unsigned short ip_compute_csum(unsigned char * buff, int len)
+{
+ return ~do_csum(buff,len, 0);
+}
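+
+/*
+ * Illustrative sketch (hypothetical caller, not part of this file's
+ * interface): checksumming a UDP packet with the routines above --
+ * sum the payload with csum_partial(), then fold in the pseudo-header
+ * with csum_tcpudp_magic().
+ */
+#if 0
+static unsigned short
+udp_checksum_example (unsigned long saddr, unsigned long daddr,
+		      unsigned char *data, unsigned short udplen)
+{
+	unsigned int sum = csum_partial(data, udplen, 0);
+
+	return csum_tcpudp_magic(saddr, daddr, udplen,
+				 17 /* IPPROTO_UDP */, sum);
+}
+#endif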
--- /dev/null
+/*
+ *
+ * Optimized version of the standard clearpage() function
+ *
+ * Based on comments from ddd. Try not to overflow the write buffer.
+ *
+ * Inputs:
+ * in0: address of page
+ *
+ * Output:
+ * none
+ *
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
+ * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+#include <asm/page.h>
+
+ .text
+ .psr abi64
+ .psr lsb
+ .lsb
+
+ .align 32
+ .global clear_page
+ .proc clear_page
+clear_page:
+ alloc r11=ar.pfs,1,0,0,0
+ mov r16=ar.lc // slow
+ mov r17=PAGE_SIZE/32-1 // -1 = repeat/until
+ ;;
+ adds r18=16,in0
+ mov ar.lc=r17
+ ;;
+1: stf.spill.nta [in0]=f0,32
+ stf.spill.nta [r18]=f0,32
+ br.cloop.dptk.few 1b
+ ;;
+ mov ar.lc=r16 // restore lc
+ br.ret.sptk.few rp
+
+ .endp clear_page
--- /dev/null
+/*
+ * This routine clears to zero a linear memory buffer in user space.
+ *
+ * Inputs:
+ * in0: address of buffer
+ * in1: length of buffer in bytes
+ * Outputs:
+ * r8: number of bytes that didn't get cleared due to a fault
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
+ */
+
+//
+// arguments
+//
+#define buf r32
+#define len r33
+
+//
+// local registers
+//
+#define cnt r16
+#define buf2 r17
+#define saved_lc r18
+#define saved_pr r19
+#define saved_pfs r20
+#define tmp r21
+#define len2 r22
+#define len3 r23
+
+//
+// Theory of operations:
+//	- we check whether or not the buffer is small, i.e., less than 17
+//	  bytes, in which case we use the byte-by-byte loop.
+//
+//	- Otherwise we go progressively from 1-byte to 8-byte stores in
+//	  the head part, the body is a 16-byte store loop, and we finish with
+//	  the tail for the last (up to) 15 bytes.
+//	  The good point about this breakdown is that the long-buffer handling
+//	  contains only 2 branches.
+//
+//	The reason for not using shifting & masking for both the head and the
+//	tail is to stay semantically correct. This routine is not supposed
+//	to write bytes outside of the buffer. While most of the time this would
+//	be ok, we can't tolerate a mistake. A classical example is
+//	multithreaded code where the extra bytes touched are actually owned
+//	by another thread which runs concurrently with ours. Another, less likely,
+//	example is a device driver where reading an I/O-mapped location may
+//	have side effects (same thing for writing).
+//
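+// Rough C equivalent of the breakdown above (a sketch only; it ignores
+// the per-store fault handling that the real code must provide):
+//
+//	if (len <= 16) {
+//		while (len--) *buf++ = 0;			// byte-by-byte loop
+//	} else {
+//		if ((long) buf & 1) { *(char  *) buf = 0; buf += 1; len -= 1; }
+//		if ((long) buf & 2) { *(short *) buf = 0; buf += 2; len -= 2; }
+//		if ((long) buf & 4) { *(int   *) buf = 0; buf += 4; len -= 4; }
+//		if ((long) buf & 8) { *(long  *) buf = 0; buf += 8; len -= 8; }
+//		while (len >= 16) {				// 16-byte body loop
+//			((long *) buf)[0] = ((long *) buf)[1] = 0;
+//			buf += 16; len -= 16;
+//		}
+//		if (len & 8) { *(long  *) buf = 0; buf += 8; }	// tail
+//		if (len & 4) { *(int   *) buf = 0; buf += 4; }
+//		if (len & 2) { *(short *) buf = 0; buf += 2; }
+//		if (len & 1) { *(char  *) buf = 0; }
+//	}
+//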
+
+// The fixup label argument comes first because our store instruction
+// contains commas and would otherwise confuse the preprocessor
+//
+#define EX(y,x...) \
+ .section __ex_table,"a"; \
+ data4 @gprel(99f); \
+ data4 y-99f; \
+ .previous; \
+99: x
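+
+// Each entry emitted into __ex_table is thus a pair: the gp-relative
+// address of the faulting store and the offset from that store to its
+// fixup label.  On a fault, the handler looks the faulting IP up in the
+// table and resumes at the fixup (see arch/ia64/mm/extable.c).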
+
+ .text
+ .psr abi64
+ .psr lsb
+ .lsb
+
+ .align 32
+ .global __do_clear_user
+ .proc __do_clear_user
+
+__do_clear_user:
+ alloc saved_pfs=ar.pfs,2,0,0,0
+ cmp.eq p6,p0=r0,len // check for zero length
+ mov saved_lc=ar.lc // preserve ar.lc (slow)
+ ;; // avoid WAW on CFM
+ adds tmp=-1,len // br.ctop is repeat/until
+ mov ret0=len // return value is length at this point
+(p6) br.ret.spnt.few rp
+ ;;
+ cmp.lt p6,p0=16,len // if len > 16 then long memset
+ mov ar.lc=tmp // initialize lc for small count
+(p6) br.cond.dptk.few long_do_clear
+ ;; // WAR on ar.lc
+ //
+	// worst case 16 cycles, avg 8 cycles
+	//
+	// We could have played with the predicates to use the extra
+	// M slot for 2 stores/iteration, but the cost of initializing
+	// the various counters, compared to how long the loop is supposed
+	// to last on average, does not make this solution viable.
+ //
+1:
+ EX( .Lexit1, st1 [buf]=r0,1 )
+ adds len=-1,len // countdown length using len
+ br.cloop.dptk.few 1b
+ ;; // avoid RAW on ar.lc
+ //
+	// .Lexit1: comes from the byte-by-byte loop
+ // len contains bytes left
+.Lexit1:
+ mov ret0=len // faster than using ar.lc
+ mov ar.lc=saved_lc
+ br.ret.sptk.few rp // end of short clear_user
+
+
+ //
+	// At this point we know we have more than 16 bytes to clear
+ // so we focus on alignment (no branches required)
+ //
+ // The use of len/len2 for countdown of the number of bytes left
+ // instead of ret0 is due to the fact that the exception code
+	// changes the value of r8.
+ //
+long_do_clear:
+ tbit.nz p6,p0=buf,0 // odd alignment (for long_do_clear)
+ ;;
+ EX( .Lexit3, (p6) st1 [buf]=r0,1 ) // 1-byte aligned
+(p6) adds len=-1,len;; // sync because buf is modified
+ tbit.nz p6,p0=buf,1
+ ;;
+ EX( .Lexit3, (p6) st2 [buf]=r0,2 ) // 2-byte aligned
+(p6) adds len=-2,len;;
+ tbit.nz p6,p0=buf,2
+ ;;
+ EX( .Lexit3, (p6) st4 [buf]=r0,4 ) // 4-byte aligned
+(p6) adds len=-4,len;;
+ tbit.nz p6,p0=buf,3
+ ;;
+ EX( .Lexit3, (p6) st8 [buf]=r0,8 ) // 8-byte aligned
+(p6) adds len=-8,len;;
+ shr.u cnt=len,4 // number of 128-bit (2x64bit) words
+ ;;
+ cmp.eq p6,p0=r0,cnt
+ adds tmp=-1,cnt
+(p6) br.cond.dpnt.few .dotail // we have less than 16 bytes left
+ ;;
+ adds buf2=8,buf // setup second base pointer
+ mov ar.lc=tmp
+ ;;
+
+ //
+ // 16bytes/iteration core loop
+ //
+ // The second store can never generate a fault because
+ // we come into the loop only when we are 16-byte aligned.
+ // This means that if we cross a page then it will always be
+ // in the first store and never in the second.
+ //
+ //
+ // We need to keep track of the remaining length. A possible (optimistic)
+	// way would be to use ar.lc and derive how many bytes were left by
+	// doing: left = 16*ar.lc + 16. This would avoid the addition at
+	// every iteration.
+	// However, we need to keep the synchronization point. A template
+	// M;;MB does not exist and thus we can keep the addition at no
+	// extra cycle cost (it uses a nop slot anyway). It also simplifies the
+	// (unlikely) error recovery code.
+ //
+
+2:
+
+ EX(.Lexit3, st8 [buf]=r0,16 )
+ ;; // needed to get len correct when error
+ st8 [buf2]=r0,16
+ adds len=-16,len
+ br.cloop.dptk.few 2b
+ ;;
+ mov ar.lc=saved_lc
+ //
+ // tail correction based on len only
+ //
+ // We alternate the use of len3,len2 to allow parallelism and correct
+ // error handling. We also reuse p6/p7 to return correct value.
+ // The addition of len2/len3 does not cost anything more compared to
+ // the regular memset as we had empty slots.
+ //
+.dotail:
+ mov len2=len // for parallelization of error handling
+ mov len3=len
+ tbit.nz p6,p0=len,3
+ ;;
+ EX( .Lexit2, (p6) st8 [buf]=r0,8 ) // at least 8 bytes
+(p6) adds len3=-8,len2
+ tbit.nz p7,p6=len,2
+ ;;
+ EX( .Lexit2, (p7) st4 [buf]=r0,4 ) // at least 4 bytes
+(p7) adds len2=-4,len3
+ tbit.nz p6,p7=len,1
+ ;;
+ EX( .Lexit2, (p6) st2 [buf]=r0,2 ) // at least 2 bytes
+(p6) adds len3=-2,len2
+ tbit.nz p7,p6=len,0
+ ;;
+ EX( .Lexit2, (p7) st1 [buf]=r0 ) // only 1 byte left
+ mov ret0=r0 // success
+ br.ret.dptk.few rp // end of most likely path
+
+ //
+ // Outlined error handling code
+ //
+
+ //
+ // .Lexit3: comes from core loop, need restore pr/lc
+ // len contains bytes left
+ //
+ //
+ // .Lexit2:
+ // if p6 -> coming from st8 or st2 : len2 contains what's left
+ // if p7 -> coming from st4 or st1 : len3 contains what's left
+	// We must restore lc/pr even though they might not have been used.
+.Lexit2:
+(p6) mov len=len2
+(p7) mov len=len3
+ ;;
+ //
+	// .Lexit3: comes from the head and the core loop
+	//	len contains bytes left
+ //
+.Lexit3:
+ mov ret0=len
+ mov ar.lc=saved_lc
+ br.ret.dptk.few rp
+ .endp
--- /dev/null
+/*
+ *
+ * Optimized version of the standard copy_page() function
+ *
+ * Based on comments from ddd. Try not to overflow write buffer.
+ *
+ * Inputs:
+ * in0: address of target page
+ * in1: address of source page
+ * Output:
+ * no return value
+ *
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
+ */
+#include <asm/page.h>
+
+#define lcount r16
+#define saved_pr r17
+#define saved_lc r18
+#define saved_pfs r19
+#define src1 r20
+#define src2 r21
+#define tgt1 r22
+#define tgt2 r23
+
+ .text
+ .psr abi64
+ .psr lsb
+ .lsb
+
+ .align 32
+ .global copy_page
+ .proc copy_page
+
+copy_page:
+	alloc saved_pfs=ar.pfs,10,0,0,8 // we need 6 rotating (8 minimum)
+ // + 2 input
+
+ .rotr t1[4], t2[4] // our 2 pipelines with depth of 4 each
+
+ mov saved_lc=ar.lc // save ar.lc ahead of time
+ mov saved_pr=pr // rotating predicates are preserved
+	// registers we must save.
+ mov src1=in1 // initialize 1st stream source
+ adds src2=8,in1 // initialize 2nd stream source
+	mov lcount=PAGE_SIZE/16-1 // as many 16-byte chunks as there are in a page
+ // -1 is because br.ctop is repeat/until
+
+ adds tgt2=8,in0 // initialize 2nd stream target
+ mov tgt1=in0 // initialize 1st stream target
+ ;;
+ mov pr.rot=1<<16 // pr16=1 & pr[17-63]=0 , 63 not modified
+
+ mov ar.lc=lcount // set loop counter
+ mov ar.ec=4 // ar.ec must match pipeline depth
+ ;;
+
+ // We need to preload the n-1 stages of the pipeline (n=depth).
+ // We do this during the "prolog" of the loop: we execute
+ // n-1 times the "load" bundle. Then both loads & stores are
+ // enabled until we reach the end of the last word of the page
+ // on the load side. Then, we enter the epilogue (controlled by ec)
+	// where we just do the stores and no loads n-1 times: drain the pipe.
+ //
+ // The initialization of the prolog is done via the predicate registers:
+ // the choice of pr19 DEPENDS on the depth of the pipeline (n).
+	// When lc > 0 pr63=1 and it is fed back into pr16, and pr16-pr62
+	// are then shifted right at every iteration.
+ // Thus by initializing pr16=1 and pr17-19=0 (19=16+4-1) before the loop
+ // we get pr19=1 after 4 iterations (n in our case).
+ //
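+	// Conceptually, the software-pipelined loop below computes
+	// (C sketch):
+	//
+	//	long *s = (long *) in1, *t = (long *) in0;
+	//	for (i = 0; i < PAGE_SIZE/16; i++) {
+	//		t[2*i]     = s[2*i];		// 1st stream
+	//		t[2*i + 1] = s[2*i + 1];	// 2nd stream
+	//	}
+	//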
+1: // engage loop now, let the magic happen...
+(p16) ld8 t1[0]=[src1],16 // new data on top of pipeline in 1st stream
+(p16) ld8 t2[0]=[src2],16 // new data on top of pipeline in 2nd stream
+ nop.i 0x0
+(p19) st8 [tgt1]=t1[3],16 // store top of 1st pipeline
+(p19) st8 [tgt2]=t2[3],16 // store top of 2nd pipeline
+ br.ctop.dptk.few 1b // once lc==0, ec-- & p16=0
+ // stores but no loads anymore
+ ;;
+ mov pr=saved_pr,0xffffffffffff0000 // restore predicates
+ mov ar.pfs=saved_pfs // restore ar.ec
+ mov ar.lc=saved_lc // restore saved lc
+ br.ret.sptk.few rp // bye...
+
+ .endp copy_page
--- /dev/null
+/*
+ * This routine copies a linear memory buffer across the user/kernel boundary. When
+ * reading a byte from the source causes a fault, the remainder of the destination
+ * buffer is zeroed out. Note that this can happen only when copying from user
+ * to kernel memory and we do this to absolutely guarantee that the
+ * kernel doesn't operate on random data.
+ *
+ * This file is derived from arch/alpha/lib/copy_user.S.
+ *
+ * Inputs:
+ * in0: address of destination buffer
+ * in1: address of source buffer
+ * in2: length of buffer in bytes
+ * Outputs:
+ * r8: number of bytes that didn't get copied due to a fault
+ *
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#define EXI(x...) \
+99: x; \
+ .section __ex_table,"a"; \
+ data4 @gprel(99b); \
+ data4 .Lexit_in-99b; \
+ .previous
+
+#define EXO(x...) \
+99: x; \
+ .section __ex_table,"a"; \
+ data4 @gprel(99b); \
+ data4 .Lexit_out-99b; \
+ .previous
+
+ .text
+ .psr abi64
+ .psr lsb
+ .lsb
+
+ .align 32
+ .global __copy_user
+ .proc __copy_user
+__copy_user:
+ alloc r10=ar.pfs,3,0,0,0
+ mov r9=ar.lc // save ar.lc
+ mov ar.lc=in2 // set ar.lc to length of buffer
+ br.sptk.few .Lentr
+
+ // XXX braindead copy loop---this needs to be optimized
+.Loop1:
+ EXI(ld1 r8=[in1],1)
+ ;;
+ EXO(st1 [in0]=r8,1)
+.Lentr: br.cloop.dptk.few .Loop1 // repeat unless ar.lc--==0
+ ;; // avoid RAW on ar.lc
+.Lexit_out:
+ mov r8=ar.lc // return how many bytes we _didn't_ copy
+ mov ar.lc=r9
+ br.ret.sptk.few rp
+
+.Lexit_in:
+ // clear the remainder of the buffer:
+ mov r8=ar.lc // return how many bytes we _didn't_ copy
+.Loop2:
+ st1 [in0]=r0,1 // this cannot fault because we get here only on user->kernel copies
+ br.cloop.dptk.few .Loop2
+ ;; // avoid RAW on ar.lc
+ mov ar.lc=r9
+ br.ret.sptk.few rp
+
+ .endp __copy_user
--- /dev/null
+/*
+ * Network Checksum & Copy routine
+ *
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
+ *
+ * Most of the code has been imported from Linux/Alpha
+ */
+
+#include <linux/types.h>
+#include <linux/string.h>
+
+#include <asm/uaccess.h>
+
+/*
+ * XXX Fixme: those 2 inlines are meant for debugging and will go away
+ */
+static inline unsigned short
+from64to16 (unsigned long x)
+{
+ /* add up 32-bit words for 33 bits */
+ x = (x & 0xffffffff) + (x >> 32);
+ /* add up 16-bit and 17-bit words for 17+c bits */
+ x = (x & 0xffff) + (x >> 16);
+ /* add up 16-bit and 2-bit for 16+c bit */
+ x = (x & 0xffff) + (x >> 16);
+ /* add up carry.. */
+ x = (x & 0xffff) + (x >> 16);
+ return x;
+}
+
+static inline
+unsigned long do_csum_c(const unsigned char * buff, int len, unsigned int psum)
+{
+ int odd, count;
+ unsigned long result = (unsigned long)psum;
+
+ if (len <= 0)
+ goto out;
+ odd = 1 & (unsigned long) buff;
+ if (odd) {
+ result = *buff << 8;
+ len--;
+ buff++;
+ }
+ count = len >> 1; /* nr of 16-bit words.. */
+ if (count) {
+ if (2 & (unsigned long) buff) {
+ result += *(unsigned short *) buff;
+ count--;
+ len -= 2;
+ buff += 2;
+ }
+ count >>= 1; /* nr of 32-bit words.. */
+ if (count) {
+ if (4 & (unsigned long) buff) {
+ result += *(unsigned int *) buff;
+ count--;
+ len -= 4;
+ buff += 4;
+ }
+ count >>= 1; /* nr of 64-bit words.. */
+ if (count) {
+ unsigned long carry = 0;
+ do {
+ unsigned long w = *(unsigned long *) buff;
+ count--;
+ buff += 8;
+ result += carry;
+ result += w;
+ carry = (w > result);
+ } while (count);
+ result += carry;
+ result = (result & 0xffffffff) + (result >> 32);
+ }
+ if (len & 4) {
+ result += *(unsigned int *) buff;
+ buff += 4;
+ }
+ }
+ if (len & 2) {
+ result += *(unsigned short *) buff;
+ buff += 2;
+ }
+ }
+ if (len & 1)
+ result += *buff;
+
+ result = from64to16(result);
+
+ if (odd)
+ result = ((result >> 8) & 0xff) | ((result & 0xff) << 8);
+
+out:
+ return result;
+}
+
+/*
+ * XXX Fixme
+ *
+ * This is very ugly but temporary. THIS NEEDS SERIOUS ENHANCEMENTS.
+ * But it's very tricky to get right even in C.
+ */
+extern unsigned long do_csum(const unsigned char *, int);
+
+static unsigned int
+do_csum_partial_copy_from_user (const char *src, char *dst, int len,
+ unsigned int psum, int *errp)
+{
+ const unsigned char *psrc = src;
+ unsigned long result;
+ int cplen = len;
+ int r = 0;
+
+ /* XXX Fixme
+ * for now we separate the copy from checksum for obvious
+ * alignment difficulties. Look at the Alpha code and you'll be
+ * scared.
+ */
+
+	while (cplen--) r |= __get_user(*dst++, psrc++);
+
+	if (r && errp) *errp = r;
+
+ result = do_csum(src, len);
+
+ /* add in old sum, and carry.. */
+ result += psum;
+ /* 32+c bits -> 32 bits */
+ result = (result & 0xffffffff) + (result >> 32);
+ return result;
+}
+
+unsigned int
+csum_partial_copy_from_user(const char *src, char *dst, int len,
+ unsigned int sum, int *errp)
+{
+ if (!access_ok(src, len, VERIFY_READ)) {
+ *errp = -EFAULT;
+ memset(dst, 0, len);
+ return sum;
+ }
+
+ return do_csum_partial_copy_from_user(src, dst, len, sum, errp);
+}
+
+unsigned int
+csum_partial_copy_nocheck(const char *src, char *dst, int len, unsigned int sum)
+{
+ return do_csum_partial_copy_from_user(src, dst, len, sum, NULL);
+}
+
+unsigned int
+csum_partial_copy (const char *src, char *dst, int len, unsigned int sum)
+{
+ unsigned int ret;
+ int error = 0;
+
+ ret = do_csum_partial_copy_from_user(src, dst, len, sum, &error);
+ if (error)
+ printk("csum_partial_copy_old(): tell mingo to convert me!\n");
+
+ return ret;
+}
+
--- /dev/null
+/*
+ *
+ * Optimized version of the standard do_csum() function
+ *
+ * Return: a 64bit quantity containing the 16bit Internet checksum
+ *
+ * Inputs:
+ * in0: address of buffer to checksum (char *)
+ * in1: length of the buffer (int)
+ *
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
+ *
+ */
+
+//
+// Theory of operations:
+// The goal is to go as quickly as possible to the point where
+// we can checksum 8 bytes/loop. Before reaching that point we must
+// take care of incorrect alignment of first byte.
+//
+// The code hereafter also takes care of the "tail" part of the buffer
+// before entering the core loop, if any. The checksum is a sum so it
+// allows us to commute operations. So we do the "head" and "tail"
+// first, to finish at full speed in the body. Once we get the head and
+// tail values, we feed them into the pipeline, a very handy initialization.
+//
+// Of course we deal with the special case where the whole buffer fits
+// into one 8 byte word. In this case we have only one entry in the pipeline.
+//
+// We use a (3+1)-stage pipeline in the loop to account for possible
+// load latency and also to accommodate the head and tail.
+//
+// The end of the function deals with folding the checksum from 64bits
+// down to 16bits taking care of the carry.
+//
+// This version avoids synchronization in the core loop by also using a
+// pipeline for the accumulation of the checksum in result[].
+//
+// p[]
+// |---|
+// 0| | r32 : new value loaded in pipeline
+// |---|
+// 1| | r33 : in transit data
+// |---|
+// 2| | r34 : current value to add to checksum
+// |---|
+// 3| | r35 : previous value added to checksum (previous iteration)
+// |---|
+//
+// result[]
+// |---|
+// 0| | r36 : new checksum
+// |---|
+// 1| | r37 : previous value of checksum
+// |---|
+// 2| | r38 : final checksum when out of the loop (after 2 epilogue rots)
+// |---|
+//
+//
+// NOT YET DONE:
+//	- Take advantage of the MMI bandwidth to load more than 8 bytes per loop
+//	  iteration
+//	- use the lfetch instruction to increase the chances of the data being in
+// the cache when we need it.
+// - Maybe another algorithm which would take care of the folding at the
+// end in a different manner
+// - Work with people more knowledgeable than me on the network stack
+// to figure out if we could not split the function depending on the
+// type of packet or alignment we get. Like the ip_fast_csum() routine
+//	  where we know we have at least 20 bytes' worth of data to checksum.
+// - Look at RFCs about checksums to see whether or not we can do better
+//
+// - Do a better job of handling small packets.
+//
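+// Worked example of the head/tail masking below (for illustration):
+// for a buffer starting at an address ending in 0x3 with len = 6,
+// firstoff = 3, so hmask = -1 << 24 = 0xffffffffff000000: ANDing the
+// "first" word with it zeroes the 3 bytes that precede the buffer
+// (zero bytes cannot change the sum).  The last byte then sits at
+// offset 0 of the "last" word, so lastoff = 1 and
+// tmask = -1 >> ((8-1)*8) = 0xff keeps only that byte.
+//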
+#define saved_pfs r11
+#define hmask r16
+#define tmask r17
+#define first r18
+#define firstval r19
+#define firstoff r20
+#define last r21
+#define lastval r22
+#define lastoff r23
+#define saved_lc r24
+#define saved_pr r25
+#define tmp1 r26
+#define tmp2 r27
+#define tmp3 r28
+#define carry r29
+
+#define buf in0
+#define len in1
+
+
+ .text
+ .psr abi64
+ .psr lsb
+ .lsb
+
+// unsigned long do_csum(unsigned char *buf,int len)
+
+ .align 32
+ .global do_csum
+ .proc do_csum
+do_csum:
+ alloc saved_pfs=ar.pfs,2,8,0,8
+
+ .rotr p[4], result[3]
+ mov ret0=r0 // in case we have zero length
+ cmp4.lt p0,p6=r0,len // check for zero length or negative (32bit len)
+ ;; // avoid WAW on CFM
+ mov tmp3=0x7 // a temporary mask/value
+ add tmp1=buf,len // last byte's address
+(p6) br.ret.spnt.few rp // return if true (hope we can avoid that)
+
+ and firstoff=7,buf // how many bytes off for first element
+ tbit.nz p10,p0=buf,0 // is buf an odd address ?
+	mov hmask=-1	// initialize head mask
+ ;;
+
+ andcm first=buf,tmp3 // 8byte aligned down address of first element
+ mov tmask=-1 // initialize tail mask
+ adds tmp2=-1,tmp1 // last-1
+ ;;
+ and lastoff=7,tmp1 // how many bytes off for last element
+ andcm last=tmp2,tmp3 // address of word containing last byte
+ mov saved_pr=pr // preserve predicates (rotation)
+ ;;
+ sub tmp3=last,first // tmp3=distance from first to last
+ cmp.eq p8,p9=last,first // everything fits in one word ?
+ sub tmp1=8,lastoff // complement to lastoff
+
+	ld8 firstval=[first],8	// load, ahead of time, "first" word
+ shl tmp2=firstoff,3 // number of bits
+ ;;
+ and tmp1=7, tmp1 // make sure that if tmp1==8 -> tmp1=0
+
+(p9)	ld8 lastval=[last]	// load, ahead of time, "last" word, if needed
+(p8) mov lastval=r0 // we don't need lastval if first==last
+ mov result[1]=r0 // initialize result
+ ;;
+
+ shl tmp1=tmp1,3 // number of bits
+ shl hmask=hmask,tmp2 // build head mask, mask off [0,firstoff[
+ ;;
+ shr.u tmask=tmask,tmp1 // build tail mask, mask off ]8,lastoff]
+ mov saved_lc=ar.lc // save lc
+ ;;
+(p8) and hmask=hmask,tmask // apply tail mask to head mask if 1 word only
+(p9)	and p[1]=lastval,tmask	// mask last value as appropriate
+ shr.u tmp3=tmp3,3 // we do 8 bytes per loop
+ ;;
+ cmp.lt p6,p7=2,tmp3 // tmp3 > 2 ?
+ and p[2]=firstval,hmask // and mask it as appropriate
+ add tmp1=-2,tmp3 // -2 = -1 (br.ctop) -1 (last-first)
+ ;;
+ // XXX Fixme: not very nice initialization here
+ //
+ // Setup loop control registers:
+ //
+ // tmp3=0 (1 word) : lc=0, ec=2, p16=F
+ // tmp3=1 (2 words) : lc=0, ec=3, p16=F
+ // tmp3=2 (3 words) : lc=0, ec=4, p16=T
+ // tmp3>2 (4 or more): lc=tmp3-2, ec=4, p16=T
+ //
+ cmp.eq p8,p9=r0,tmp3 // tmp3 == 0 ?
+(p6) mov ar.lc=tmp1
+(p7) mov ar.lc=0
+ ;;
+ cmp.lt p6,p7=1,tmp3 // tmp3 > 1 ?
+(p8) mov ar.ec=2 // we need the extra rotation on result[]
+(p9) mov ar.ec=3 // hard not to set it twice sometimes
+ ;;
+ mov carry=r0 // initialize carry
+(p6) mov ar.ec=4
+(p6) mov pr.rot=0xffffffffffff0000 // p16=T, p18=T
+
+ cmp.ne p8,p0=r0,r0 // p8 is false
+ mov p[3]=r0 // make sure first compare fails
+(p7) mov pr.rot=0xfffffffffffe0000 // p16=F, p18=T
+ ;;
+1:
+(p16) ld8 p[0]=[first],8 // load next
+(p8) adds carry=1,carry // add carry on prev_prev_value
+(p18) add result[0]=result[1],p[2] // new_res = prev_res + cur_val
+ cmp.ltu p8,p0=result[1],p[3] // p8= prev_result < prev_val
+ br.ctop.dptk.few 1b // loop until lc--==0
+ ;; // RAW on carry when loop exits
+ (p8) adds carry=1,carry;; // correct for carry on prev_value
+ add result[2]=carry,result[2];; // add carry to final result
+ cmp.ltu p6,p7=result[2], carry // check for new carry
+ ;;
+(p6) adds result[2]=1,result[1] // correct if required
+ movl tmp3=0xffffffff
+ ;;
+ // XXX Fixme
+ //
+ // now fold 64 into 16 bits taking care of carry
+ // that's not very good because it has lots of sequentiality
+ //
+ and tmp1=result[2],tmp3
+ shr.u tmp2=result[2],32
+ ;;
+ add result[2]=tmp1,tmp2
+ shr.u tmp3=tmp3,16
+ ;;
+ and tmp1=result[2],tmp3
+ shr.u tmp2=result[2],16
+ ;;
+ add result[2]=tmp1,tmp2
+ ;;
+ and tmp1=result[2],tmp3
+ shr.u tmp2=result[2],16
+ ;;
+ add result[2]=tmp1,tmp2
+ ;;
+ and tmp1=result[2],tmp3
+ shr.u tmp2=result[2],16
+ ;;
+ add ret0=tmp1,tmp2
+ mov pr=saved_pr,0xffffffffffff0000
+ ;;
+ // if buf was odd then swap bytes
+ mov ar.pfs=saved_pfs // restore ar.ec
+(p10) mux1 ret0=ret0,@rev // reverse word
+ ;;
+ mov ar.lc=saved_lc
+(p10) shr.u ret0=ret0,64-16 // + shift back to position = swap bytes
+ br.ret.sptk.few rp
--- /dev/null
+/*
+ * Cache flushing routines.
+ *
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+#include <asm/page.h>
+
+ .text
+ .psr abi64
+ .psr lsb
+ .lsb
+
+ .align 16
+ .global ia64_flush_icache_page
+ .proc ia64_flush_icache_page
+ia64_flush_icache_page:
+ alloc r2=ar.pfs,1,0,0,0
+ mov r3=ar.lc // save ar.lc
+ mov r8=PAGE_SIZE/64-1 // repeat/until loop
+ ;;
+ mov ar.lc=r8
+ add r8=32,in0
+ ;;
+.Loop1: fc in0 // issuable on M0 only
+ add in0=64,in0
+ fc r8
+ add r8=64,r8
+ br.cloop.sptk.few .Loop1
+ ;;
+ sync.i
+ ;;
+ srlz.i
+ ;;
+ mov ar.lc=r3 // restore ar.lc
+ br.ret.sptk.few rp
+ .endp ia64_flush_icache_page
--- /dev/null
+/*
+ * Integer division routine.
+ *
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+/* Simple integer division. It uses the straightforward division
+ algorithm. This may not be the absolutely fastest way to do it,
+ but it's not horrible either. According to ski, the worst case
+ scenario of dividing 0xffffffffffffffff by 1 takes 133 cycles.
+
+ An alternative would be to use an algorithm similar to the
+ floating point division algorithm (Newton-Raphson iteration),
+ but that approach is rather tricky (one has to be very careful
+ to get the last bit right...).
+
+ While this algorithm is straight-forward, it does use a couple
+ of neat ia-64 specific tricks:
+
+ - it uses the floating point unit to determine the initial
+ shift amount (shift = floor(ld(x)) - floor(ld(y)))
+
+ - it uses predication to avoid a branch in the case where
+ x < y (this is what p8 is used for)
+
+ - it uses rotating registers and the br.ctop branch to
+ implement a software-pipelined loop that's unrolled
+ twice (without any code expansion!)
+
+ - the code is relatively well scheduled to avoid unnecessary
+ nops while maximizing parallelism
+*/
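+
+/* Rough C equivalent of the core algorithm (a sketch for the unsigned
+   64-bit case; floor_log2() stands in for the FP-exponent trick used
+   below to compute the initial shift):
+
+	unsigned long q = 0, r = x, bit = 1, y2;
+	int shift = floor_log2(x) - floor_log2(y);
+
+	if (shift >= 0) {		// the p8 case below
+		y2 = y << shift;
+		bit <<= shift;
+		do {			// one iteration per quotient bit
+			if (r >= y2) {
+				r -= y2;
+				q |= bit;
+			}
+			y2 >>= 1;
+			bit >>= 1;
+		} while (bit);
+	}
+	// q is the quotient, r the remainder
+*/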
+
+#include <linux/config.h>
+#include <asm/break.h>
+
+ .text
+ .psr abi64
+#ifdef __BIG_ENDIAN__
+ .psr msb
+ .msb
+#else
+ .psr lsb
+ .lsb
+#endif
+
+#ifdef MODULO
+# define OP mod
+# define Q r9
+# define R r8
+#else
+# define OP div
+# define Q r8
+# define R r9
+#endif
+
+#ifdef SINGLE
+# define PREC si
+#else
+# define PREC di
+#endif
+
+#ifdef UNSIGNED
+# define SGN u
+# define INT_TO_FP(a,b) fma.s0 a=b,f1,f0
+# define FP_TO_INT(a,b) fcvt.fxu.trunc.s0 a=b
+#else
+# define SGN
+# define INT_TO_FP(a,b) fcvt.xf a=b
+# define FP_TO_INT(a,b) fcvt.fx.trunc.s0 a=b
+#endif
+
+#define PASTE1(a,b) a##b
+#define PASTE(a,b) PASTE1(a,b)
+#define NAME PASTE(PASTE(__,SGN),PASTE(OP,PASTE(PREC,3)))
+
+ .align 32
+ .global NAME
+ .proc NAME
+NAME:
+
+ alloc r2=ar.pfs,2,6,0,8
+ mov r18=pr
+#ifdef SINGLE
+# ifdef UNSIGNED
+ zxt4 in0=in0
+ zxt4 in1=in1
+# else
+ sxt4 in0=in0
+ sxt4 in1=in1
+# endif
+ ;;
+#endif
+
+#ifndef UNSIGNED
+ cmp.lt p6,p0=in0,r0 // x negative?
+ cmp.lt p7,p0=in1,r0 // y negative?
+ ;;
+(p6) sub in0=r0,in0 // make x positive
+(p7) sub in1=r0,in1 // ditto for y
+ ;;
+#endif
+
+ setf.sig f8=in0
+ mov r3=ar.lc // save ar.lc
+ setf.sig f9=in1
+ ;;
+ mov Q=0 // initialize q
+ mov R=in0 // stash away x in a static register
+ mov r16=1 // r16 = 1
+ INT_TO_FP(f8,f8)
+ cmp.eq p8,p0=0,in0 // x==0?
+ cmp.eq p9,p0=0,in1 // y==0?
+ ;;
+ INT_TO_FP(f9,f9)
+(p8) br.dpnt.few .L3
+(p9) break __IA64_BREAK_KDB // attempted division by zero (should never happen)
+ mov ar.ec=r0 // epilogue count = 0
+ ;;
+ getf.exp r14=f8 // r14 = exponent of x
+ getf.exp r15=f9 // r15 = exponent of y
+ mov ar.lc=r0 // loop count = 0
+ ;;
+ sub r17=r14,r15 // r17 = (exp of x - exp y) = shift amount
+ cmp.ge p8,p0=r14,r15
+ ;;
+
+ .rotr y[2], mask[2] // in0 and in1 may no longer be valid after
+ // the first write to a rotating register!
+
+(p8) shl y[1]=in1,r17 // y[1] = y<<shift
+(p8) shl mask[1]=r16,r17 // mask[1] = 1<<shift
+
+(p8) mov ar.lc=r17 // loop count = r17
+ ;;
+.L1:
+(p8) cmp.geu.unc p9,p0=R,y[1]// p9 = (x >= y[1])
+(p8) shr.u mask[0]=mask[1],1 // prepare mask[0] and y[0] for next
+(p8) shr.u y[0]=y[1],1 // iteration
+ ;;
+(p9) sub R=R,y[1] // if (x >= y[1]), subtract y[1] from x
+(p9) add Q=Q,mask[1] // and set corresponding bit in q (Q)
+	br.ctop.dptk.few .L1	// repeat unless ar.lc-- == 0
+ ;;
+.L2:
+#ifndef UNSIGNED
+# ifdef MODULO
+(p6) sub R=r0,R // set sign of remainder according to x
+# else
+(p6) sub Q=r0,Q // set sign of quotient
+ ;;
+(p7) sub Q=r0,Q
+# endif
+#endif
+.L3:
+ mov ar.pfs=r2 // restore ar.pfs
+ mov ar.lc=r3 // restore ar.lc
+ mov pr=r18,0xffffffffffff0000 // restore p16-p63
+ br.ret.sptk.few rp
--- /dev/null
+/*
+ *
+ * Optimized version of the standard memset() function
+ *
+ * Return: none
+ *
+ *
+ * Inputs:
+ * in0: address of buffer
+ * in1: byte value to use for storing
+ * in2: length of the buffer
+ *
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
+ */
+
+
+// arguments
+//
+#define buf r32
+#define val r33
+#define len r34
+
+//
+// local registers
+//
+#define saved_pfs r14
+#define cnt r18
+#define buf2 r19
+#define saved_lc r20
+#define saved_pr r21
+#define tmp r22
+
+ .text
+ .psr abi64
+	.psr lsb
+	.lsb
+
+ .align 16
+ .global memset
+ .proc memset
+
+memset:
+ alloc saved_pfs=ar.pfs,3,0,0,0 // cnt is sink here
+ cmp.eq p8,p0=r0,len // check for zero length
+ mov saved_lc=ar.lc // preserve ar.lc (slow)
+ ;;
+ adds tmp=-1,len // br.ctop is repeat/until
+ tbit.nz p6,p0=buf,0 // odd alignment
+(p8) br.ret.spnt.few rp
+
+ cmp.lt p7,p0=16,len // if len > 16 then long memset
+ mux1 val=val,@brcst // prepare value
+(p7) br.cond.dptk.few long_memset
+ ;;
+ mov ar.lc=tmp // initialize lc for small count
+ ;; // avoid RAW and WAW on ar.lc
+1:	// worst case 15 cycles, avg 8 cycles
+ st1 [buf]=val,1
+ br.cloop.dptk.few 1b
+ ;; // avoid RAW on ar.lc
+ mov ar.lc=saved_lc
+ mov ar.pfs=saved_pfs
+ br.ret.sptk.few rp // end of short memset
+
+ // at this point we know we have more than 16 bytes to copy
+ // so we focus on alignment
+long_memset:
+(p6) st1 [buf]=val,1 // 1-byte aligned
+(p6) adds len=-1,len;; // sync because buf is modified
+ tbit.nz p6,p0=buf,1
+ ;;
+(p6) st2 [buf]=val,2 // 2-byte aligned
+(p6) adds len=-2,len;;
+ tbit.nz p6,p0=buf,2
+ ;;
+(p6) st4 [buf]=val,4 // 4-byte aligned
+(p6) adds len=-4,len;;
+ tbit.nz p6,p0=buf,3
+ ;;
+(p6) st8 [buf]=val,8 // 8-byte aligned
+(p6) adds len=-8,len;;
+ shr.u cnt=len,4 // number of 128-bit (2x64bit) words
+ ;;
+ cmp.eq p6,p0=r0,cnt
+ adds tmp=-1,cnt
+(p6) br.cond.dpnt.few .dotail // we have less than 16 bytes left
+ ;;
+ adds buf2=8,buf // setup second base pointer
+ mov ar.lc=tmp
+ ;;
+2: // 16bytes/iteration
+ st8 [buf]=val,16
+ st8 [buf2]=val,16
+ br.cloop.dptk.few 2b
+ ;;
+.dotail: // tail correction based on len only
+ tbit.nz p6,p0=len,3
+ ;;
+(p6) st8 [buf]=val,8 // at least 8 bytes
+ tbit.nz p6,p0=len,2
+ ;;
+(p6) st4 [buf]=val,4 // at least 4 bytes
+ tbit.nz p6,p0=len,1
+ ;;
+(p6) st2 [buf]=val,2 // at least 2 bytes
+ tbit.nz p6,p0=len,0
+ mov ar.lc=saved_lc
+ ;;
+(p6) st1 [buf]=val // only 1 byte left
+ br.ret.dptk.few rp
+ .endp
--- /dev/null
+/*
+ *
+ * Optimized version of the standard strlen() function
+ *
+ *
+ * Inputs:
+ * in0 address of string
+ *
+ * Outputs:
+ * ret0 the number of characters in the string (0 if empty string)
+ * does not count the \0
+ *
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
+ *
+ * 09/24/99 S.Eranian added speculation recovery code
+ */
+
+//
+//
+// This is an enhanced version of the basic strlen. It includes a combination
+// of compute zero index (czx), parallel comparisons, speculative loads and
+// loop unroll using rotating registers.
+//
+// General Ideas about the algorithm:
+// The goal is to look at the string in chunks of 8 bytes.
+// so we need to do a few extra checks at the beginning because the
+// string may not be 8-byte aligned. In this case we load the 8byte
+// quantity which includes the start of the string and mask the unused
+// bytes with 0xff to avoid confusing czx.
+// We use speculative loads and software pipelining to hide memory
+// latency and do read ahead safely. This way we defer any exception.
+//
+// Because we don't want the kernel to be relying on particular
+// settings of the DCR register, we provide recovery code in case
+// speculation fails. The recovery code is going to "redo" the work using
+// only normal loads. If we still get a fault then we generate a
+// kernel panic. Otherwise we return the strlen as usual.
+//
+// The fact that speculation may fail can be caused, for instance, by
+// the DCR.dm bit being set. In this case TLB misses are deferred, i.e.,
+// a NaT bit will be set if the translation is not present. The normal
+// load, on the other hand, will cause the translation to be inserted
+// if the mapping exists.
+//
+// It should be noted that we execute recovery code only when we need
+// to use the data that has been speculatively loaded: we don't execute
+// recovery code on pure read ahead data.
+//
+// Remarks:
+// - the cmp r0,r0 is used as a fast way to initialize a predicate
+// register to 1. This is required to make sure that we get the parallel
+// compare correct.
+//
+// - we don't use the epilogue counter to exit the loop but we need to set
+// it to zero beforehand.
+//
+//	- after the loop we must test for NaT values because neither the
+//	  czx nor the cmp instruction raises a NaT consumption fault. We must be
+//	  careful not to look too far for a NaT we don't care about.
+// For instance we don't need to look at a NaT in val2 if the zero byte
+// was in val1.
+//
+// - Clearly performance tuning is required.
+//
+//
+//
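+// Rough C equivalent of the main loop (a sketch: czx1_r() stands in for
+// the czx1.r instruction, which returns the index of the first zero byte
+// in a word, or 8 if there is none; speculation and the NaT checks are
+// omitted):
+//
+//	unsigned long *p = (unsigned long *) (addr & ~7UL);
+//	unsigned long v = *p++ | mask;	// 0xff out the bytes before the string
+//	unsigned long w = *p++;
+//
+//	while (czx1_r(v) == 8 && czx1_r(w) == 8) {
+//		v = *p++;
+//		w = *p++;
+//	}
+//	// the zero byte is in v, or in w when czx1_r(v) == 8; its index
+//	// plus the distance back to the start of the string is the length
+//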
+#define saved_pfs r11
+#define tmp r10
+#define base r16
+#define orig r17
+#define saved_pr r18
+#define src r19
+#define mask r20
+#define val r21
+#define val1 r22
+#define val2 r23
+
+
+ .text
+ .psr abi64
+ .psr lsb
+ .lsb
+
+ .align 32
+ .global strlen
+ .proc strlen
+strlen:
+ alloc saved_pfs=ar.pfs,11,0,0,8 // rotating must be multiple of 8
+
+ .rotr v[2], w[2] // declares our 4 aliases
+
+ extr.u tmp=in0,0,3 // tmp=least significant 3 bits
+	mov orig=in0		// keep track of initial byte address
+ dep src=0,in0,0,3 // src=8byte-aligned in0 address
+ mov saved_pr=pr // preserve predicates (rotation)
+ ;;
+ ld8 v[1]=[src],8 // must not speculate: can fail here
+ shl tmp=tmp,3 // multiply by 8bits/byte
+ mov mask=-1 // our mask
+ ;;
+ ld8.s w[1]=[src],8 // speculatively load next
+ cmp.eq p6,p0=r0,r0 // sets p6 to true for cmp.and
+ sub tmp=64,tmp // how many bits to shift our mask on the right
+ ;;
+	shr.u mask=mask,tmp	// zero enough bits to hold v[1]'s valid part
+ mov ar.ec=r0 // clear epilogue counter (saved in ar.pfs)
+ ;;
+ add base=-16,src // keep track of aligned base
+ or v[1]=v[1],mask // now we have a safe initial byte pattern
+ ;;
+1:
+ ld8.s v[0]=[src],8 // speculatively load next
+ czx1.r val1=v[1] // search 0 byte from right
+ czx1.r val2=w[1] // search 0 byte from right following 8bytes
+ ;;
+ ld8.s w[0]=[src],8 // speculatively load next to next
+ cmp.eq.and p6,p0=8,val1 // p6 = p6 and val1==8
+	cmp.eq.and p6,p0=8,val2	// p6 = p6 and val2==8
+(p6) br.wtop.dptk.few 1b // loop until p6 == 0
+ ;;
+ //
+	// We must try the recovery code iff
+ // val1_is_nat || (val1==8 && val2_is_nat)
+ //
+ // XXX Fixme
+ // - there must be a better way of doing the test
+ //
+	cmp.eq p8,p9=8,val1	// p9 = val1 had a zero (disambiguate)
+#ifdef notyet
+ tnat.nz p6,p7=val1 // test NaT on val1
+#else
+ tnat.z p7,p6=val1 // test NaT on val1
+#endif
+(p6) br.cond.spnt.few recover// jump to recovery if val1 is NaT
+ ;;
+ //
+ // if we come here p7 is true, i.e., initialized for // cmp
+ //
+ cmp.eq.and p7,p0=8,val1// val1==8?
+ tnat.nz.and p7,p0=val2 // test NaT if val2
+(p7) br.cond.spnt.few recover// jump to recovery if val2 is NaT
+ ;;
+(p8) mov val1=val2 // the other test got us out of the loop
+(p8) adds src=-16,src // correct position when 3 ahead
+(p9) adds src=-24,src // correct position when 4 ahead
+ ;;
+	sub ret0=src,orig	// distance from origin
+ sub tmp=8,val1 // which byte in word
+ mov pr=saved_pr,0xffffffffffff0000
+ ;;
+ sub ret0=ret0,tmp // adjust
+ mov ar.pfs=saved_pfs // because of ar.ec, restore no matter what
+ br.ret.sptk.few rp // end of normal execution
+
+ //
+ // Outlined recovery code when speculation failed
+ //
+ // This time we don't use speculation and rely on the normal exception
+	// mechanism. That's why the loop is not as good as the previous one,
+ // because read ahead is not possible
+ //
+ // IMPORTANT:
+ // Please note that in the case of strlen() as opposed to strlen_user()
+ // we don't use the exception mechanism, as this function is not
+ // supposed to fail. If that happens it means we have a bug and the
+	// code will cause a kernel fault.
+ //
+ // XXX Fixme
+ // - today we restart from the beginning of the string instead
+ // of trying to continue where we left off.
+ //
+recover:
+ ld8 val=[base],8 // will fail if unrecoverable fault
+ ;;
+ or val=val,mask // remask first bytes
+ cmp.eq p0,p6=r0,r0 // nullify first ld8 in loop
+ ;;
+ //
+ // ar.ec is still zero here
+ //
+2:
+(p6) ld8 val=[base],8 // will fail if unrecoverable fault
+ ;;
+ czx1.r val1=val // search 0 byte from right
+ ;;
+ cmp.eq p6,p0=8,val1 // val1==8 ?
+(p6) br.wtop.dptk.few 2b // loop until p6 == 0
+ sub ret0=base,orig // distance from base
+ sub tmp=7,val1 // 7=8-1 because this strlen returns strlen+1
+ mov pr=saved_pr,0xffffffffffff0000
+ ;;
+ sub ret0=ret0,tmp // length=now - back -1
+ mov ar.pfs=saved_pfs // because of ar.ec, restore no matter what
+	br.ret.sptk.few rp	// end of successful recovery code
+
+ .endp strlen
--- /dev/null
+/*
+ * Optimized version of the strlen_user() function
+ *
+ * Inputs:
+ * in0 address of buffer
+ *
+ * Outputs:
+ * ret0 0 in case of fault, strlen(buffer)+1 otherwise
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998, 1999 Stephane Eranian <eranian@hpl.hp.com>
+ *
+ * 01/19/99 S.Eranian heavily enhanced version (see details below)
+ * 09/24/99 S.Eranian added speculation recovery code
+ */
+
+//
+// int strlen_user(char *)
+// ------------------------
+// Returns:
+// - length of string + 1
+// - 0 in case an exception is raised
+//
+// This is an enhanced version of the basic strlen_user. It includes a
+// combination of compute zero index (czx), parallel comparisons, speculative
+// loads and loop unroll using rotating registers.
+//
+// General Ideas about the algorithm:
+// The goal is to look at the string in chunks of 8 bytes.
+// so we need to do a few extra checks at the beginning because the
+// string may not be 8-byte aligned. In this case we load the 8byte
+// quantity which includes the start of the string and mask the unused
+// bytes with 0xff to avoid confusing czx.
+// We use speculative loads and software pipelining to hide memory
+// latency and do read ahead safely. This way we defer any exception.
+//
+// Because we don't want the kernel to be relying on particular
+// settings of the DCR register, we provide recovery code in case
+// speculation fails. The recovery code is going to "redo" the work using
+// only normal loads. If we still get a fault then we return an
+// error (ret0=0). Otherwise we return the strlen+1 as usual.
+// The fact that speculation may fail can be caused, for instance, by
+// the DCR.dm bit being set. In this case TLB misses are deferred, i.e.,
+// a NaT bit will be set if the translation is not present. The normal
+// load, on the other hand, will cause the translation to be inserted
+// if the mapping exists.
+//
+// It should be noted that we execute recovery code only when we need
+// to use the data that has been speculatively loaded: we don't execute
+// recovery code on pure read ahead data.
+//
+// Remarks:
+// - the cmp r0,r0 is used as a fast way to initialize a predicate
+// register to 1. This is required to make sure that we get the parallel
+// compare correct.
+//
+// - we don't use the epilogue counter to exit the loop but we need to set
+// it to zero beforehand.
+//
+//	- after the loop we must test for NaT values because neither the
+//	  czx nor the cmp instruction raises a NaT consumption fault. We must be
+//	  careful not to look too far for a NaT we don't care about.
+// For instance we don't need to look at a NaT in val2 if the zero byte
+// was in val1.
+//
+// - Clearly performance tuning is required.
+//
+//
+//
+
+#define EX(y,x...) \
+ .section __ex_table,"a"; \
+ data4 @gprel(99f); \
+ data4 y-99f; \
+ .previous; \
+99: x
+
+#define saved_pfs r11
+#define tmp r10
+#define base r16
+#define orig r17
+#define saved_pr r18
+#define src r19
+#define mask r20
+#define val r21
+#define val1 r22
+#define val2 r23
+
+
+ .text
+ .psr abi64
+ .psr lsb
+ .lsb
+
+ .align 32
+ .global __strlen_user
+ .proc __strlen_user
+__strlen_user:
+ alloc saved_pfs=ar.pfs,11,0,0,8
+
+ .rotr v[2], w[2] // declares our 4 aliases
+
+ extr.u tmp=in0,0,3 // tmp=least significant 3 bits
+	mov orig=in0		// keep track of initial byte address
+ dep src=0,in0,0,3 // src=8byte-aligned in0 address
+ mov saved_pr=pr // preserve predicates (rotation)
+ ;;
+ ld8.s v[1]=[src],8 // load the initial 8bytes (must speculate)
+ shl tmp=tmp,3 // multiply by 8bits/byte
+ mov mask=-1 // our mask
+ ;;
+ ld8.s w[1]=[src],8 // load next 8 bytes in 2nd pipeline
+ cmp.eq p6,p0=r0,r0 // sets p6 (required because of // cmp.and)
+ sub tmp=64,tmp // how many bits to shift our mask on the right
+ ;;
+	shr.u mask=mask,tmp	// zero enough bits to hold v[1]'s valid part
+ mov ar.ec=r0 // clear epilogue counter (saved in ar.pfs)
+ ;;
+ add base=-16,src // keep track of aligned base
+ chk.s v[1], recover // if already NaT, then directly skip to recover
+ or v[1]=v[1],mask // now we have a safe initial byte pattern
+ ;;
+1:
+ ld8.s v[0]=[src],8 // speculatively load next
+ czx1.r val1=v[1] // search 0 byte from right
+ czx1.r val2=w[1] // search 0 byte from right following 8bytes
+ ;;
+ ld8.s w[0]=[src],8 // speculatively load next to next
+ cmp.eq.and p6,p0=8,val1 // p6 = p6 and val1==8
+	cmp.eq.and p6,p0=8,val2	// p6 = p6 and val2==8
+(p6) br.wtop.dptk.few 1b // loop until p6 == 0
+ ;;
+ //
+	// We must try the recovery code iff
+ // val1_is_nat || (val1==8 && val2_is_nat)
+ //
+ // XXX Fixme
+ // - there must be a better way of doing the test
+ //
+	cmp.eq p8,p9=8,val1	// p9 = val1 had a zero (disambiguate)
+#ifdef notyet
+ tnat.nz p6,p7=val1 // test NaT on val1
+#else
+ tnat.z p7,p6=val1 // test NaT on val1
+#endif
+(p6) br.cond.spnt.few recover// jump to recovery if val1 is NaT
+ ;;
+ //
+ // if we come here p7 is true, i.e., initialized for // cmp
+ //
+ cmp.eq.and p7,p0=8,val1// val1==8?
+ tnat.nz.and p7,p0=val2 // test NaT if val2
+(p7) br.cond.spnt.few recover// jump to recovery if val2 is NaT
+ ;;
+(p8) mov val1=val2 // val2 contains the value
+(p8) adds src=-16,src // correct position when 3 ahead
+(p9) adds src=-24,src // correct position when 4 ahead
+ ;;
+ sub ret0=src,orig // distance from origin
+ sub tmp=7,val1 // 7=8-1 because this strlen returns strlen+1
+ mov pr=saved_pr,0xffffffffffff0000
+ ;;
+ sub ret0=ret0,tmp // length=now - back -1
+ mov ar.pfs=saved_pfs // because of ar.ec, restore no matter what
+ br.ret.sptk.few rp // end of normal execution
+
+ //
+ // Outlined recovery code when speculation failed
+ //
+ // This time we don't use speculation and rely on the normal exception
+	// mechanism. That's why the loop is not as good as the previous one,
+ // because read ahead is not possible
+ //
+ // XXX Fixme
+ // - today we restart from the beginning of the string instead
+ // of trying to continue where we left off.
+ //
+recover:
+ EX(.Lexit1, ld8 val=[base],8) // load the initial bytes
+ ;;
+ or val=val,mask // remask first bytes
+ cmp.eq p0,p6=r0,r0 // nullify first ld8 in loop
+ ;;
+ //
+ // ar.ec is still zero here
+ //
+2:
+ EX(.Lexit1, (p6) ld8 val=[base],8)
+ ;;
+ czx1.r val1=val // search 0 byte from right
+ ;;
+ cmp.eq p6,p0=8,val1 // val1==8 ?
+(p6) br.wtop.dptk.few 2b // loop until p6 == 0
+ ;;
+ sub ret0=base,orig // distance from base
+ sub tmp=7,val1 // 7=8-1 because this strlen returns strlen+1
+ mov pr=saved_pr,0xffffffffffff0000
+ ;;
+ sub ret0=ret0,tmp // length=now - back -1
+ mov ar.pfs=saved_pfs // because of ar.ec, restore no matter what
+	br.ret.sptk.few rp	// end of successful recovery code
+
+ //
+ // We failed even on the normal load (called from exception handler)
+ //
+.Lexit1:
+ mov ret0=0
+ mov pr=saved_pr,0xffffffffffff0000
+ mov ar.pfs=saved_pfs // because of ar.ec, restore no matter what
+ br.ret.sptk.few rp
+
+ .endp __strlen_user
--- /dev/null
+/*
+ * Just like strncpy() except for the return value. If no fault occurs during
+ * the copying, the number of bytes copied is returned. If a fault occurs,
+ * -EFAULT is returned.
+ *
+ * Inputs:
+ * in0: address of destination buffer
+ * in1: address of string to be copied
+ * in2: length of buffer in bytes
+ * Outputs:
+ * r8: -EFAULT in case of fault or number of bytes copied if no fault
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#define EX(x...) \
+99: x; \
+ .section __ex_table,"a"; \
+ data4 @gprel(99b); \
+ data4 .Lexit-99b; \
+ .previous
+
+ .text
+ .psr abi64
+ .psr lsb
+ .lsb
+
+ .align 32
+ .global __strncpy_from_user
+ .proc __strncpy_from_user
+__strncpy_from_user:
+ alloc r11=ar.pfs,3,0,0,0
+ mov r9=in1
+ add r10=in1,in2
+
+ // XXX braindead copy loop---this needs to be optimized
+.Loop1:
+ EX(ld1 r8=[in1],1)
+ ;;
+ st1 [in0]=r8,1
+ cmp.ltu p6,p0=in1,r10
+ ;;
+(p6) cmp.ne.and p6,p0=r8,r0
+ ;;
+(p6) br.cond.dpnt.few .Loop1
+
+1: sub r8=in1,r9 // length of string (including NUL character)
+.Lexit:
+ mov ar.pfs=r11
+ br.ret.sptk.few rp
+
+ .endp __strncpy_from_user
--- /dev/null
+/*
+ * Returns 0 if an exception occurs before the NUL terminator or the
+ * supplied limit (N) is reached, a value greater than N if the string
+ * is longer than the limit, else strlen(buffer)+1.
+ *
+ * Inputs:
+ * in0: address of buffer
+ * in1: string length limit N
+ * Outputs:
+ * r8: 0 in case of fault, strlen(buffer)+1 otherwise
+ *
+ * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+/* If a fault occurs, r8 gets set to -EFAULT and r9 gets cleared. */
+#define EX(x...) \
+ .section __ex_table,"a"; \
+ data4 @gprel(99f); \
+ data4 (.Lexit-99f)|1; \
+	.previous;	\
+99: x;
+
+ .text
+ .psr abi64
+ .psr lsb
+ .lsb
+
+ .align 32
+ .global __strnlen_user
+ .proc __strnlen_user
+__strnlen_user:
+ alloc r2=ar.pfs,2,0,0,0
+ mov r16=ar.lc // preserve ar.lc
+ add r3=-1,in1
+ ;;
+ mov ar.lc=r3
+ mov r9=0
+
+ // XXX braindead strlen loop---this needs to be optimized
+.Loop1:
+ EX(ld1 r8=[in0],1)
+ add r9=1,r9
+ ;;
+ cmp.eq p6,p0=r8,r0
+(p6) br.dpnt.few .Lexit
+ br.cloop.dptk.few .Loop1
+
+ add r9=1,in1 // NUL not found---return N+1
+ ;;
+.Lexit:
+ mov r8=r9
+ mov ar.lc=r16 // restore ar.lc
+ br.ret.sptk.few rp
+
+ .endp __strnlen_user
--- /dev/null
+#
+# Makefile for the ia64-specific parts of the memory manager.
+#
+# Note! Dependencies are done automagically by 'make dep', which also
+# removes any old dependencies. DON'T put your own dependencies here
+# unless it's something special (ie not a .c file).
+#
+# Note 2! The CFLAGS definition is now in the main makefile...
+
+O_TARGET := mm.o
+#O_OBJS := ioremap.o
+O_OBJS := init.o fault.o tlb.o extable.o
+
+include $(TOPDIR)/Rules.make
--- /dev/null
+/*
+ * Kernel exception handling table support. Derived from arch/alpha/mm/extable.c.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <asm/uaccess.h>
+
+extern const struct exception_table_entry __start___ex_table[];
+extern const struct exception_table_entry __stop___ex_table[];
+
+static inline const struct exception_table_entry *
+search_one_table (const struct exception_table_entry *first,
+ const struct exception_table_entry *last,
+ signed long value)
+{
+ /* Abort early if the search value is out of range. */
+ if (value != (signed int)value)
+ return 0;
+
+ while (first <= last) {
+ const struct exception_table_entry *mid;
+ long diff;
+ /*
+ * We know that first and last are both kernel virtual
+ * pointers (region 7) so first+last will cause an
+ * overflow. We fix that by calling __va() on the
+ * result, which will ensure that the top two bits get
+ * set again.
+ */
+ mid = (void *) __va((((__u64) first + (__u64) last)/2/sizeof(*mid))*sizeof(*mid));
+ diff = mid->addr - value;
+ if (diff == 0)
+ return mid;
+ else if (diff < 0)
+ first = mid+1;
+ else
+ last = mid-1;
+ }
+ return 0;
+}
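+
+/*
+ * Illustration of the overflow fix above (with made-up region-7
+ * addresses): for first == last == 0xe000000000001000, the sum wraps to
+ * 0xc000000000002000 (the carry out of bit 63 is lost) and halving
+ * yields 0x6000000000001000; __va() turns the region bits back on,
+ * recovering the correct 0xe000000000001000.
+ */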
+
+register unsigned long gp __asm__("gp");
+
+const struct exception_table_entry *
+search_exception_table (unsigned long addr)
+{
+#ifndef CONFIG_MODULES
+ /* There is only the kernel to search. */
+ return search_one_table(__start___ex_table, __stop___ex_table - 1, addr - gp);
+#else
+ struct exception_table_entry *ret;
+	/* The kernel is the last "module" -- no need to treat it specially. */
+ struct module *mp;
+
+ for (mp = module_list; mp ; mp = mp->next) {
+ if (!mp->ex_table_start)
+ continue;
+ ret = search_one_table(mp->ex_table_start, mp->ex_table_end - 1, addr - mp->gp);
+ if (ret)
+ return ret;
+ }
+ return 0;
+#endif
+}
--- /dev/null
+/*
+ * MMU fault handling support.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/smp_lock.h>
+#include <linux/interrupt.h>
+
+#include <asm/pgtable.h>
+#include <asm/processor.h>
+#include <asm/system.h>
+#include <asm/uaccess.h>
+#include <asm/hardirq.h>
+
+extern void die_if_kernel (char *, struct pt_regs *, long);
+
+/*
+ * This routine is analogous to expand_stack() but instead grows the
+ * register backing store (which grows towards higher addresses).
+ * Since the register backing store is accessed sequentially, we
+ * disallow growing the RBS by more than a page at a time. Note that
+ * the VM_GROWSUP flag can be set on any VM area but that's fine
+ * because the total process size is still limited by RLIMIT_STACK and
+ * RLIMIT_AS.
+ */
+static inline long
+expand_backing_store (struct vm_area_struct *vma, unsigned long address)
+{
+ unsigned long grow;
+
+ grow = PAGE_SIZE >> PAGE_SHIFT;
+ if (address - vma->vm_start > current->rlim[RLIMIT_STACK].rlim_cur
+ || (((vma->vm_mm->total_vm + grow) << PAGE_SHIFT) > current->rlim[RLIMIT_AS].rlim_cur))
+ return -ENOMEM;
+ vma->vm_end += PAGE_SIZE;
+ vma->vm_mm->total_vm += grow;
+ if (vma->vm_flags & VM_LOCKED)
+ vma->vm_mm->locked_vm += grow;
+ return 0;
+}
+
+void
+ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *regs)
+{
+ struct mm_struct *mm = current->mm;
+ const struct exception_table_entry *fix;
+ struct vm_area_struct *vma, *prev_vma;
+ struct siginfo si;
+ int signal = SIGSEGV;
+ unsigned long mask;
+
+ /*
+ * If we're in an interrupt or have no user
+	 * context, we must not take the fault.
+ */
+ if (in_interrupt() || !mm)
+ goto no_context;
+
+ down(&mm->mmap_sem);
+
+ vma = find_vma_prev(mm, address, &prev_vma);
+ if (!vma)
+ goto bad_area;
+
+ /* find_vma_prev() returns vma such that address < vma->vm_end or NULL */
+ if (address < vma->vm_start)
+ goto check_expansion;
+
+ good_area:
+ /* OK, we've got a good vm_area for this memory area. Check the access permissions: */
+
+# define VM_READ_BIT 0
+# define VM_WRITE_BIT 1
+# define VM_EXEC_BIT 2
+
+# if (((1 << VM_READ_BIT) != VM_READ || (1 << VM_WRITE_BIT) != VM_WRITE) \
+ || (1 << VM_EXEC_BIT) != VM_EXEC)
+# error File is out of sync with <linux/mm.h>. Please update.
+# endif
+
+ mask = ( (((isr >> IA64_ISR_X_BIT) & 1UL) << VM_EXEC_BIT)
+ | (((isr >> IA64_ISR_W_BIT) & 1UL) << VM_WRITE_BIT)
+ | (((isr >> IA64_ISR_R_BIT) & 1UL) << VM_READ_BIT));
+
+ if ((vma->vm_flags & mask) != mask)
+ goto bad_area;
+
+ /*
+ * If for any reason at all we couldn't handle the fault, make
+ * sure we exit gracefully rather than endlessly redo the
+ * fault.
+ */
+ if (!handle_mm_fault(current, vma, address, (isr & IA64_ISR_W) != 0)) {
+ /*
+ * We ran out of memory, or some other thing happened
+ * to us that made us unable to handle the page fault
+ * gracefully.
+ */
+ signal = SIGBUS;
+ goto bad_area;
+ }
+ up(&mm->mmap_sem);
+ return;
+
+ check_expansion:
+ if (!(prev_vma && (prev_vma->vm_flags & VM_GROWSUP) && (address == prev_vma->vm_end))) {
+ if (!(vma->vm_flags & VM_GROWSDOWN))
+ goto bad_area;
+ if (expand_stack(vma, address))
+ goto bad_area;
+ } else if (expand_backing_store(prev_vma, address))
+ goto bad_area;
+ goto good_area;
+
+ bad_area:
+ up(&mm->mmap_sem);
+ if (isr & IA64_ISR_SP) {
+ /*
+		 * This fault was due to a speculative load; set the
+ * "ed" bit in the psr to ensure forward progress
+ * (target register will get a NaT).
+ */
+ ia64_psr(regs)->ed = 1;
+ return;
+ }
+ if (user_mode(regs)) {
+#if 0
+printk("%s(%d): segfault accessing %lx\n", current->comm, current->pid, address);
+show_regs(regs);
+#endif
+ si.si_signo = signal;
+ si.si_errno = 0;
+ si.si_code = SI_KERNEL;
+ si.si_addr = (void *) address;
+		force_sig_info(signal, &si, current);
+ return;
+ }
+
+ no_context:
+ fix = search_exception_table(regs->cr_iip);
+ if (fix) {
+ regs->r8 = -EFAULT;
+ if (fix->skip & 1) {
+ regs->r9 = 0;
+ }
+ regs->cr_iip += ((long) fix->skip) & ~15;
+ regs->cr_ipsr &= ~IA64_PSR_RI; /* clear exception slot number */
+ return;
+ }
+
+ /*
+ * Oops. The kernel tried to access some bad page. We'll have
+ * to terminate things with extreme prejudice.
+ */
+ printk(KERN_ALERT "Unable to handle kernel paging request at "
+ "virtual address %016lx\n", address);
+ die_if_kernel("Oops", regs, isr);
+ do_exit(SIGKILL);
+ return;
+}
--- /dev/null
+/*
+ * Initialize MMU support.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+
+#include <linux/bootmem.h>
+#include <linux/mm.h>
+#include <linux/reboot.h>
+#include <linux/slab.h>
+#include <linux/swap.h>
+
+#include <asm/dma.h>
+#include <asm/efi.h>
+#include <asm/ia32.h>
+#include <asm/io.h>
+#include <asm/pgalloc.h>
+#include <asm/sal.h>
+#include <asm/system.h>
+
+/* References to section boundaries: */
+extern char _stext, _etext, _edata, __init_begin, __init_end;
+
+/*
+ * These are allocated in head.S so that we get proper page alignment.
+ * If you change the size of these then change head.S as well.
+ */
+extern char empty_bad_page[PAGE_SIZE];
+extern pmd_t empty_bad_pmd_table[PTRS_PER_PMD];
+extern pte_t empty_bad_pte_table[PTRS_PER_PTE];
+
+extern void ia64_tlb_init (void);
+extern void show_net_buffers (void);
+
+static unsigned long totalram_pages;
+
+/*
+ * Fill in empty_bad_pmd_table with entries pointing to
+ * empty_bad_pte_table and return the address of this PMD table.
+ */
+static pmd_t *
+get_bad_pmd_table (void)
+{
+ pmd_t v;
+ int i;
+
+ pmd_set(&v, empty_bad_pte_table);
+
+ for (i = 0; i < PTRS_PER_PMD; ++i)
+ empty_bad_pmd_table[i] = v;
+
+ return empty_bad_pmd_table;
+}
+
+/*
+ * Fill in empty_bad_pte_table with PTEs pointing to empty_bad_page
+ * and return the address of this PTE table.
+ */
+static pte_t *
+get_bad_pte_table (void)
+{
+ pte_t v;
+ int i;
+
+ set_pte(&v, pte_mkdirty(mk_pte_phys(__pa(empty_bad_page), PAGE_SHARED)));
+
+ for (i = 0; i < PTRS_PER_PTE; ++i)
+ empty_bad_pte_table[i] = v;
+
+ return empty_bad_pte_table;
+}
+
+void
+__handle_bad_pgd (pgd_t *pgd)
+{
+ pgd_ERROR(*pgd);
+ pgd_set(pgd, get_bad_pmd_table());
+}
+
+void
+__handle_bad_pmd (pmd_t *pmd)
+{
+ pmd_ERROR(*pmd);
+ pmd_set(pmd, get_bad_pte_table());
+}
+
+/*
+ * Allocate and initialize an L3 directory page and set
+ * the L2 directory entry PMD to the newly allocated page.
+ */
+pte_t*
+get_pte_slow (pmd_t *pmd, unsigned long offset)
+{
+ pte_t *pte;
+
+ pte = (pte_t *) __get_free_page(GFP_KERNEL);
+ if (pmd_none(*pmd)) {
+ if (pte) {
+ /* everything A-OK */
+ clear_page(pte);
+ pmd_set(pmd, pte);
+ return pte + offset;
+ }
+ pmd_set(pmd, get_bad_pte_table());
+ return NULL;
+ }
+ free_page((unsigned long) pte);
+ if (pmd_bad(*pmd)) {
+ __handle_bad_pmd(pmd);
+ return NULL;
+ }
+ return (pte_t *) pmd_page(*pmd) + offset;
+}
+
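+/*
+ * Trim the page-table quicklists back down to the "low" watermark once
+ * they have grown beyond "high"; returns the number of pages freed.
+ */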
+int
+do_check_pgt_cache (int low, int high)
+{
+ int freed = 0;
+
+ if (pgtable_cache_size > high) {
+ do {
+ if (pgd_quicklist)
+ free_page((unsigned long)get_pgd_fast()), ++freed;
+ if (pmd_quicklist)
+ free_page((unsigned long)get_pmd_fast()), ++freed;
+ if (pte_quicklist)
+ free_page((unsigned long)get_pte_fast()), ++freed;
+ } while (pgtable_cache_size > low);
+ }
+ return freed;
+}
+
+/*
+ * This performs some platform-dependent address space initialization.
+ * On IA-64, we want to set up the VM area for the register backing
+ * store (which grows upwards) and install the gateway page which is
+ * used for signal trampolines, etc.
+ */
+void
+ia64_init_addr_space (void)
+{
+ struct vm_area_struct *vma;
+
+ /*
+ * If we're out of memory and kmem_cache_alloc() returns NULL,
+ * we simply ignore the problem. When the process attempts to
+ * write to the register backing store for the first time, it
+ * will get a SEGFAULT in this case.
+ */
+ vma = kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL);
+ if (vma) {
+ vma->vm_mm = current->mm;
+ vma->vm_start = IA64_RBS_BOT;
+ vma->vm_end = vma->vm_start + PAGE_SIZE;
+ vma->vm_page_prot = PAGE_COPY;
+ vma->vm_flags = VM_READ|VM_WRITE|VM_MAYREAD|VM_MAYWRITE|VM_GROWSUP;
+ vma->vm_ops = NULL;
+ vma->vm_pgoff = 0;
+ vma->vm_file = NULL;
+ vma->vm_private_data = NULL;
+ insert_vm_struct(current->mm, vma);
+ }
+}
+
+void
+free_initmem (void)
+{
+ unsigned long addr;
+
+ addr = (unsigned long) &__init_begin;
+ for (; addr < (unsigned long) &__init_end; addr += PAGE_SIZE) {
+ clear_bit(PG_reserved, &mem_map[MAP_NR(addr)].flags);
+ set_page_count(&mem_map[MAP_NR(addr)], 1);
+ free_page(addr);
+ ++totalram_pages;
+ }
+ printk ("Freeing unused kernel memory: %ldkB freed\n",
+ (&__init_end - &__init_begin) >> 10);
+}
+
+void
+si_meminfo (struct sysinfo *val)
+{
+ val->totalram = totalram_pages;
+ val->sharedram = 0;
+ val->freeram = nr_free_pages();
+ val->bufferram = atomic_read(&buffermem_pages);
+ val->totalhigh = 0;
+ val->freehigh = 0;
+ val->mem_unit = PAGE_SIZE;
+ return;
+}
+
+void
+show_mem (void)
+{
+ int i,free = 0,total = 0,reserved = 0;
+ int shared = 0, cached = 0;
+
+ printk("Mem-info:\n");
+ show_free_areas();
+ printk("Free swap: %6dkB\n", nr_swap_pages<<(PAGE_SHIFT-10));
+ i = max_mapnr;
+ while (i-- > 0) {
+ total++;
+ if (PageReserved(mem_map+i))
+ reserved++;
+ else if (PageSwapCache(mem_map+i))
+ cached++;
+ else if (!page_count(mem_map + i))
+ free++;
+ else
+ shared += page_count(mem_map + i) - 1;
+ }
+ printk("%d pages of RAM\n", total);
+ printk("%d reserved pages\n", reserved);
+ printk("%d pages shared\n", shared);
+ printk("%d pages swap cached\n", cached);
+ printk("%ld pages in page table cache\n", pgtable_cache_size);
+ show_buffers();
+#ifdef CONFIG_NET
+ show_net_buffers();
+#endif
+}
+
+/*
+ * This is like put_dirty_page() but installs a clean page with PAGE_GATE protection
+ * (execute-only, typically).
+ */
+struct page *
+put_gate_page (struct page *page, unsigned long address)
+{
+ pgd_t *pgd;
+ pmd_t *pmd;
+ pte_t *pte;
+
+ if (!PageReserved(page))
+ printk("put_gate_page: gate page at 0x%lx not in reserved memory\n",
+ page_address(page));
+ pgd = pgd_offset_k(address); /* note: this is NOT pgd_offset()! */
+ pmd = pmd_alloc(pgd, address);
+ if (!pmd) {
+ __free_page(page);
+ oom(current);
+ return 0;
+ }
+ pte = pte_alloc(pmd, address);
+ if (!pte) {
+ __free_page(page);
+ oom(current);
+ return 0;
+ }
+ if (!pte_none(*pte)) {
+ pte_ERROR(*pte);
+ __free_page(page);
+ return 0;
+ }
+ flush_page_to_ram(page);
+ set_pte(pte, page_pte_prot(page, PAGE_GATE));
+ /* no need for flush_tlb */
+ return page;
+}
+
+void __init
+ia64_rid_init (void)
+{
+ unsigned long flags, rid, pta;
+
+ /* Set up the kernel identity mappings (regions 6 & 7) and the vmalloc area (region 5): */
+ ia64_clear_ic(flags);
+
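+	/*
+	 * Region register format: bit 0 enables the VHPT walker, bits 2-7
+	 * hold the preferred (log2) page size, and the region id starts
+	 * at bit 8, hence the shifts below.
+	 */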
+ rid = ia64_rid(IA64_REGION_ID_KERNEL, __IA64_UNCACHED_OFFSET);
+ ia64_set_rr(__IA64_UNCACHED_OFFSET, (rid << 8) | (_PAGE_SIZE_256M << 2));
+
+ rid = ia64_rid(IA64_REGION_ID_KERNEL, PAGE_OFFSET);
+ ia64_set_rr(PAGE_OFFSET, (rid << 8) | (_PAGE_SIZE_256M << 2));
+
+ rid = ia64_rid(IA64_REGION_ID_KERNEL, VMALLOC_START);
+ ia64_set_rr(VMALLOC_START, (rid << 8) | (PAGE_SHIFT << 2) | 1);
+
+ __restore_flags(flags);
+
+ /*
+ * Check if the virtually mapped linear page table (VMLPT)
+ * overlaps with a mapped address space. The IA-64
+ * architecture guarantees that at least 50 bits of virtual
+ * address space are implemented but if we pick a large enough
+ * page size (e.g., 64KB), the VMLPT is big enough that it
+ * will overlap with the upper half of the kernel mapped
+ * region. I assume that once we run on machines big enough
+ * to warrant 64KB pages, IMPL_VA_MSB will be significantly
+ * bigger, so we can just adjust the number below to get
+ * things going. Alternatively, we could truncate the upper
+	 * half of each region's address space to not permit mappings
+ * that would overlap with the VMLPT. --davidm 99/11/13
+ */
+# define ld_pte_size 3
+# define ld_max_addr_space_pages 3*(PAGE_SHIFT - ld_pte_size) /* max # of mappable pages */
+# define ld_max_addr_space_size (ld_max_addr_space_pages + PAGE_SHIFT)
+# define ld_max_vpt_size (ld_max_addr_space_pages + ld_pte_size)
+# define POW2(n) (1ULL << (n))
+# define IMPL_VA_MSB 50
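+	/*
+	 * Example: with 8KB pages (PAGE_SHIFT=13), ld_max_addr_space_pages
+	 * is 3*10 = 30, so the check below compares 2^42 + 2^33 against
+	 * 2^50 and passes.  With 64KB pages (PAGE_SHIFT=16), it would be
+	 * 2^54 + 2^42, which exceeds 2^50 and triggers the panic.
+	 */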
+ if (POW2(ld_max_addr_space_size - 1) + POW2(ld_max_vpt_size) > POW2(IMPL_VA_MSB))
+ panic("mm/init: overlap between virtually mapped linear page table and "
+ "mapped kernel space!");
+ pta = POW2(61) - POW2(IMPL_VA_MSB);
+ /*
+ * Set the (virtually mapped linear) page table address. Bit
+ * 8 selects between the short and long format, bits 2-7 the
+ * size of the table, and bit 0 whether the VHPT walker is
+ * enabled.
+ */
+ ia64_set_pta(pta | (0<<8) | ((3*(PAGE_SHIFT-3)+3)<<2) | 1);
+}
+
+#ifdef CONFIG_IA64_VIRTUAL_MEM_MAP
+
+static int
+create_mem_map_page_table (u64 start, u64 end, void *arg)
+{
+ unsigned long address, start_page, end_page;
+ struct page *map_start, *map_end;
+ pgd_t *pgd;
+ pmd_t *pmd;
+ pte_t *pte;
+ void *page;
+
+ map_start = mem_map + MAP_NR(start);
+ map_end = mem_map + MAP_NR(end);
+
+ start_page = (unsigned long) map_start & PAGE_MASK;
+ end_page = PAGE_ALIGN((unsigned long) map_end);
+
+ printk("[%lx,%lx) -> %lx-%lx\n", start, end, start_page, end_page);
+
+ for (address = start_page; address < end_page; address += PAGE_SIZE) {
+ pgd = pgd_offset_k(address);
+ if (pgd_none(*pgd)) {
+ pmd = alloc_bootmem_pages(PAGE_SIZE);
+ clear_page(pmd);
+ pgd_set(pgd, pmd);
+ pmd += (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
+ } else
+ pmd = pmd_offset(pgd, address);
+ if (pmd_none(*pmd)) {
+ pte = alloc_bootmem_pages(PAGE_SIZE);
+ clear_page(pte);
+ pmd_set(pmd, pte);
+ pte += (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
+ } else
+ pte = pte_offset(pmd, address);
+
+ if (pte_none(*pte)) {
+ page = alloc_bootmem_pages(PAGE_SIZE);
+ clear_page(page);
+ set_pte(pte, mk_pte_phys(__pa(page), PAGE_KERNEL));
+ }
+ }
+ return 0;
+}
+
+#endif /* CONFIG_IA64_VIRTUAL_MEM_MAP */
+
+/*
+ * Set up the page tables.
+ */
+void
+paging_init (void)
+{
+ unsigned long max_dma, zones_size[MAX_NR_ZONES];
+
+ clear_page((void *) ZERO_PAGE_ADDR);
+
+ ia64_rid_init();
+ __flush_tlb_all();
+
+ /* initialize mem_map[] */
+
+ memset(zones_size, 0, sizeof(zones_size));
+
+ max_dma = virt_to_phys((void *) MAX_DMA_ADDRESS);
+ if (max_low_pfn < max_dma)
+ zones_size[ZONE_DMA] = max_low_pfn;
+ else {
+ zones_size[ZONE_DMA] = max_dma;
+ zones_size[ZONE_NORMAL] = max_low_pfn - max_dma;
+ }
+ free_area_init(zones_size);
+}
+
+static int
+count_pages (u64 start, u64 end, void *arg)
+{
+ unsigned long *count = arg;
+
+ *count += (end - start) >> PAGE_SHIFT;
+ return 0;
+}
+
+static int
+count_reserved_pages (u64 start, u64 end, void *arg)
+{
+ unsigned long num_reserved = 0;
+ unsigned long *count = arg;
+ struct page *pg;
+
+ for (pg = mem_map + MAP_NR(start); pg < mem_map + MAP_NR(end); ++pg)
+ if (PageReserved(pg))
+ ++num_reserved;
+ *count += num_reserved;
+ return 0;
+}
+
+void
+mem_init (void)
+{
+ extern char __start_gate_section[];
+ long reserved_pages, codesize, datasize, initsize;
+
+ if (!mem_map)
+ BUG();
+
+ num_physpages = 0;
+ efi_memmap_walk(count_pages, &num_physpages);
+
+ max_mapnr = max_low_pfn;
+ high_memory = __va(max_low_pfn * PAGE_SIZE);
+
+ ia64_tlb_init();
+
+ totalram_pages += free_all_bootmem();
+
+ reserved_pages = 0;
+ efi_memmap_walk(count_reserved_pages, &reserved_pages);
+
+ codesize = (unsigned long) &_etext - (unsigned long) &_stext;
+ datasize = (unsigned long) &_edata - (unsigned long) &_etext;
+ initsize = (unsigned long) &__init_end - (unsigned long) &__init_begin;
+
+ printk("Memory: %luk/%luk available (%luk code, %luk reserved, %luk data, %luk init)\n",
+ (unsigned long) nr_free_pages() << (PAGE_SHIFT - 10),
+ max_mapnr << (PAGE_SHIFT - 10), codesize >> 10, reserved_pages << (PAGE_SHIFT - 10),
+ datasize >> 10, initsize >> 10);
+
+ /* install the gate page in the global page table: */
+ put_gate_page(mem_map + MAP_NR(__start_gate_section), GATE_ADDR);
+
+#ifndef CONFIG_IA64_SOFTSDV_HACKS
+ /*
+ * (Some) SoftSDVs seem to have a problem with this call.
+ * Since it's mostly a performance optimization, just don't do
+ * it for now... --davidm 99/12/6
+ */
+ efi_enter_virtual_mode();
+#endif
+
+#ifdef CONFIG_IA32_SUPPORT
+ ia32_gdt_init();
+#endif
+ return;
+}
--- /dev/null
+/*
+ * TLB support routines.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/smp.h>
+#include <linux/mm.h>
+
+#include <asm/mmu_context.h>
+#include <asm/pgalloc.h>
+#include <asm/pal.h>
+
+#define SUPPORTED_PGBITS ( \
+ 1 << _PAGE_SIZE_256M | \
+ 1 << _PAGE_SIZE_64M | \
+ 1 << _PAGE_SIZE_16M | \
+ 1 << _PAGE_SIZE_4M | \
+ 1 << _PAGE_SIZE_1M | \
+ 1 << _PAGE_SIZE_256K | \
+ 1 << _PAGE_SIZE_64K | \
+ 1 << _PAGE_SIZE_16K | \
+ 1 << _PAGE_SIZE_8K | \
+ 1 << _PAGE_SIZE_4K )
+
+static void wrap_context (struct mm_struct *mm);
+
+unsigned long ia64_next_context = (1UL << IA64_HW_CONTEXT_BITS) + 1;
+
+ /*
+ * Put everything in a struct so we avoid the global offset table whenever
+ * possible.
+ */
+ia64_ptce_info_t ia64_ptce_info;
+
+/*
+ * Serialize usage of ptc.g
+ */
+spinlock_t ptcg_lock = SPIN_LOCK_UNLOCKED; /* see <asm/pgtable.h> */
+
+void
+get_new_mmu_context (struct mm_struct *mm)
+{
+ if ((ia64_next_context & IA64_HW_CONTEXT_MASK) == 0) {
+ wrap_context(mm);
+ }
+ mm->context = ia64_next_context++;
+}
+
+/*
+ * This is where we handle the case where (ia64_next_context &
+ * IA64_HW_CONTEXT_MASK) == 0. Whenever this happens, we need to
+ * flush the entire TLB and skip over region id number 0, which is
+ * used by the kernel.
+ */
+static void
+wrap_context (struct mm_struct *mm)
+{
+ struct task_struct *task;
+
+ /*
+	 * We wrapped back to the first region id, so we nuke the TLB
+	 * and switch over to the next generation of region ids.
+ */
+ __flush_tlb_all();
+ if (ia64_next_context++ == 0) {
+ /*
+ * Oops, we've used up all 64 bits of the context
+ * space---walk through task table to ensure we don't
+ * get tricked into using an old context. If this
+ * happens, the machine has been running for a long,
+ * long time!
+ */
+ ia64_next_context = (1UL << IA64_HW_CONTEXT_BITS) + 1;
+
+ read_lock(&tasklist_lock);
+ for_each_task (task) {
+ if (task->mm == mm)
+ continue;
+			flush_tlb_mm(task->mm);
+ }
+ read_unlock(&tasklist_lock);
+ }
+}
+
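+/*
+ * Flush the entire TLB of the local CPU: PAL_PTCE_INFO (obtained in
+ * ia64_tlb_init() below) supplies a base address and two count/stride
+ * pairs, and issuing ptc.e over that nested loop is architecturally
+ * guaranteed to purge all local translation cache entries.
+ */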
+void
+__flush_tlb_all (void)
+{
+ unsigned long i, j, flags, count0, count1, stride0, stride1, addr = ia64_ptce_info.base;
+
+ count0 = ia64_ptce_info.count[0];
+ count1 = ia64_ptce_info.count[1];
+ stride0 = ia64_ptce_info.stride[0];
+ stride1 = ia64_ptce_info.stride[1];
+
+ save_and_cli(flags);
+ for (i = 0; i < count0; ++i) {
+ for (j = 0; j < count1; ++j) {
+ asm volatile ("ptc.e %0" :: "r"(addr));
+ addr += stride1;
+ }
+ addr += stride0;
+ }
+ restore_flags(flags);
+ ia64_insn_group_barrier();
+ ia64_srlz_i(); /* srlz.i implies srlz.d */
+ ia64_insn_group_barrier();
+}
+
+void
+flush_tlb_range (struct mm_struct *mm, unsigned long start, unsigned long end)
+{
+ unsigned long size = end - start;
+ unsigned long nbits;
+
+ if (mm != current->active_mm) {
+ /* this doesn't happen often, if at all, so it's not worth optimizing for... */
+ mm->context = 0;
+ return;
+ }
+
+ nbits = ia64_fls(size + 0xfff);
+ if (((1UL << nbits) & SUPPORTED_PGBITS) == 0) {
+ if (nbits > _PAGE_SIZE_256M)
+ nbits = _PAGE_SIZE_256M;
+ else
+ /*
+ * Some page sizes are not implemented in the
+ * IA-64 arch, so if we get asked to clear an
+ * unsupported page size, round up to the
+ * nearest page size. Note that we depend on
+ * the fact that if page size N is not
+ * implemented, 2*N _is_ implemented.
+ */
+ ++nbits;
+ if (((1UL << nbits) & SUPPORTED_PGBITS) == 0)
+ panic("flush_tlb_range: BUG: nbits=%lu\n", nbits);
+ }
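+	/*
+	 * For example, flushing a 12KB range yields nbits =
+	 * ia64_fls(0x3000 + 0xfff) = 13, i.e., the supported 8KB page
+	 * size, so the loop below purges the aligned range in 8KB steps.
+	 */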
+ start &= ~((1UL << nbits) - 1);
+
+ spin_lock(&ptcg_lock);
+ do {
+#ifdef CONFIG_SMP
+ __asm__ __volatile__ ("ptc.g %0,%1;;srlz.i;;"
+ :: "r"(start), "r"(nbits<<2) : "memory");
+#else
+ __asm__ __volatile__ ("ptc.l %0,%1" :: "r"(start), "r"(nbits<<2) : "memory");
+#endif
+ start += (1UL << nbits);
+ } while (start < end);
+ spin_unlock(&ptcg_lock);
+ ia64_insn_group_barrier();
+ ia64_srlz_i(); /* srlz.i implies srlz.d */
+ ia64_insn_group_barrier();
+}
+
+void
+ia64_tlb_init (void)
+{
+ ia64_get_ptce(&ia64_ptce_info);
+ __flush_tlb_all(); /* nuke left overs from bootstrapping... */
+}
--- /dev/null
+#
+# ia64/sn/Makefile
+#
+# Copyright (C) 1999 Silicon Graphics, Inc.
+# Copyright (C) Srinivasa Thirumalachar (sprasad@engr.sgi.com)
+#
+
+CFLAGS := $(CFLAGS) -DCONFIG_SGI_SN1 -DSN1 -DSN -DSOFTSDV \
+ -DLANGUAGE_C=1 -D_LANGUAGE_C=1
+AFLAGS := $(AFLAGS) -DCONFIG_SGI_SN1 -DSN1 -DSOFTSDV
+
+.S.s:
+ $(CC) -D__ASSEMBLY__ $(AFLAGS) -E -o $*.s $<
+.S.o:
+ $(CC) -D__ASSEMBLY__ $(AFLAGS) -c -o $*.o $<
+
+all: sn.a
+
+O_TARGET = sn.a
+O_HEADERS =
+O_OBJS = sn1/sn1.a
+
+clean::
+
+include $(TOPDIR)/Rules.make
--- /dev/null
+#
+# ia64/platform/sn/sn1/Makefile
+#
+# Copyright (C) 1999 Silicon Graphics, Inc.
+# Copyright (C) Srinivasa Thirumalachar (sprasad@engr.sgi.com)
+#
+
+CFLAGS := $(CFLAGS) -DCONFIG_SGI_SN1 -DSN1 -DSN -DSOFTSDV \
+ -DLANGUAGE_C=1 -D_LANGUAGE_C=1
+AFLAGS := $(AFLAGS) -DCONFIG_SGI_SN1 -DSN1 -DSOFTSDV
+
+.S.s:
+ $(CC) -D__ASSEMBLY__ $(AFLAGS) -E -o $*.s $<
+.S.o:
+ $(CC) -D__ASSEMBLY__ $(AFLAGS) -c -o $*.o $<
+
+all: sn1.a
+
+O_TARGET = sn1.a
+O_HEADERS =
+O_OBJS = irq.o setup.o
+
+ifeq ($(CONFIG_IA64_GENERIC),y)
+O_OBJS += machvec.o
+endif
+
+clean::
+
+include $(TOPDIR)/Rules.make
--- /dev/null
+#include <linux/config.h>
+#include <linux/kernel.h>
+
+#include <asm/irq.h>
+#include <asm/ptrace.h>
+
+static int
+sn1_startup_irq(unsigned int irq)
+{
+ return(0);
+}
+
+static void
+sn1_shutdown_irq(unsigned int irq)
+{
+}
+
+static void
+sn1_disable_irq(unsigned int irq)
+{
+}
+
+static void
+sn1_enable_irq(unsigned int irq)
+{
+}
+
+static int
+sn1_handle_irq(unsigned int irq, struct pt_regs *regs)
+{
+ return(0);
+}
+
+struct hw_interrupt_type irq_type_sn1 = {
+ "sn1_irq",
+ sn1_startup_irq,
+ sn1_shutdown_irq,
+ sn1_handle_irq,
+ sn1_enable_irq,
+ sn1_disable_irq
+};
+
+void
+sn1_irq_init (struct irq_desc desc[NR_IRQS])
+{
+ int i;
+
+ for (i = IA64_MIN_VECTORED_IRQ; i <= IA64_MAX_VECTORED_IRQ; ++i) {
+ irq_desc[i].handler = &irq_type_sn1;
+ }
+}
--- /dev/null
+#include <asm/machvec_init.h>
+#include <asm/machvec_sn1.h>
+
+MACHVEC_DEFINE(sn1)
--- /dev/null
+/*
+ *
+ * Copyright (C) 1999 Silicon Graphics, Inc.
+ * Copyright (C) Vijay Chander(vijay@engr.sgi.com)
+ */
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/kernel.h>
+#include <linux/kdev_t.h>
+#include <linux/string.h>
+#include <linux/tty.h>
+#include <linux/console.h>
+#include <linux/timex.h>
+#include <linux/sched.h>
+
+#include <asm/io.h>
+#include <asm/machvec.h>
+#include <asm/system.h>
+#include <asm/processor.h>
+
+
+/*
+ * The format of "screen_info" is strange; it is due to early i386-setup
+ * code. This is just enough to make the console code think we're on a
+ * VGA color display.
+ */
+struct screen_info sn1_screen_info = {
+ orig_x: 0,
+ orig_y: 0,
+ orig_video_mode: 3,
+ orig_video_cols: 80,
+ orig_video_ega_bx: 3,
+ orig_video_lines: 25,
+ orig_video_isVGA: 1,
+ orig_video_points: 16
+};
+
+/*
+ * This is here so we can use the CMOS detection in ide-probe.c to
+ * determine what drives are present. In theory, we don't need this
+ * as the auto-detection could be done via ide-probe.c:do_probe() but
+ * in practice that would be much slower, which is painful when
+ * running in the simulator. Note that passing zeroes in DRIVE_INFO
+ * is sufficient (the IDE driver will autodetect the drive geometry).
+ */
+char drive_info[4*16];
+
+unsigned long
+sn1_map_nr (unsigned long addr)
+{
+ return MAP_NR_SN1(addr);
+}
+
+void
+sn1_setup(char **cmdline_p)
+{
+ ROOT_DEV = to_kdev_t(0x0301); /* default to first IDE drive */
+
+#if !defined (CONFIG_IA64_SOFTSDV_HACKS)
+ /*
+ * Program the timer to deliver timer ticks. 0x40 is the I/O port
+ * address of PIT counter 0, 0x43 is the I/O port address of the
+ * PIT control word.
+ */
+ request_region(0x40,0x20,"timer");
+	outb(0x34, 0x43);		/* Control word: counter 0, r/w LSB then MSB, mode 2 (rate generator) */
+ outb(LATCH & 0xff , 0x40); /* LSB */
+ outb(LATCH >> 8, 0x40); /* MSB */
+	printk("PIT: LATCH at 0x%02x%02x for %d HZ\n", LATCH >> 8, LATCH & 0xff, HZ);
+#endif
+#ifdef __SMP__
+ init_smp_config();
+#endif
+ screen_info = sn1_screen_info;
+}
--- /dev/null
+CFLAGS = -D__KERNEL__ -g -O2 -Wall -I$(TOPDIR)/include
+
+ifdef CONFIG_SMP
+ CFLAGS += -D__SMP__
+endif
+
+TARGET = $(TOPDIR)/include/asm-ia64/offsets.h
+
+all:
+
+clean:
+ rm -f print_offsets.s print_offsets offsets.h
+
+fastdep: offsets.h
+ @if ! cmp -s offsets.h ${TARGET}; then \
+ echo "Updating ${TARGET}..."; \
+ cp offsets.h ${TARGET}; \
+ else \
+ echo "${TARGET} is up to date"; \
+ fi
+
+#
+# If we're cross-compiling, we use the cross-compiler to translate
+# print_offsets.c into an assembly file and then awk to translate this
+# file into offsets.h. This avoids having to use a simulator to
+# generate this file. This is based on an idea suggested by Asit
+# Mallick. If we're running natively, we can of course just build
+# print_offsets and run it. --davidm
+#
+
+ifeq ($(CROSS_COMPILE),)
+
+offsets.h: print_offsets
+ ./print_offsets > offsets.h
+
+print_offsets: print_offsets.c
+ $(CC) $(CFLAGS) print_offsets.c -o $@
+
+else
+
+offsets.h: print_offsets.s
+ $(AWK) -f print_offsets.awk $^ > $@
+
+print_offsets.s: print_offsets.c
+ $(CC) $(CFLAGS) -S print_offsets.c -o $@
+
+endif
+
+.PHONY: all
--- /dev/null
+BEGIN {
+ print "#ifndef _ASM_IA64_OFFSETS_H"
+ print "#define _ASM_IA64_OFFSETS_H"
+ print "/*"
+ print " * DO NOT MODIFY"
+ print " *"
+ print " * This file was generated by arch/ia64/tools/print_offsets.awk."
+ print " *"
+ print " */"
+ #
+ # This is a cheesy hack. Make sure that
+ # PF_PTRACED == 1<<PF_PTRACED_BIT.
+ #
+ print "#define PF_PTRACED_BIT 4"
+}
+
+# look for .tab:
+# stringz "name"
+# data value
+# sequence
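+#
+# For instance (sizes purely illustrative), the compiler output might
+# contain:
+#	tab:
+#		stringz	"IA64_TASK_SIZE"
+#		data8	2496
+# which the rules below turn into:
+#	#define IA64_TASK_SIZE			2496	/* 0x9c0 */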
+
+/.*[.]size/ {
+ inside_table = 0
+}
+
+/\/\/ end/ {
+ inside_table = 0
+}
+
+{
+ if (inside_table) {
+ if ($1 == "//") getline;
+ name=$2
+ getline
+ getline
+ if ($1 == "//") getline;
+ value=$2
+ len = length(name)
+ name = substr(name, 2, len - 2)
+ len -= 2
+ if (len == 0)
+ print ""
+ else {
+ len += 8
+ if (len >= 40) {
+ space=" "
+ } else {
+ space=""
+ while (len < 40) {
+ len += 8
+ space = space"\t"
+ }
+ }
+ printf("#define %s%s%lu\t/* 0x%lx */\n", name, space, value, value)
+ }
+ }
+}
+
+/tab:/ {
+ inside_table = 1
+}
+
+/tab#:/ {
+ inside_table = 1
+}
+
+END {
+ print ""
+ print "#endif /* _ASM_IA64_OFFSETS_H */"
+}
--- /dev/null
+/*
+ * Utility to generate asm-ia64/offsets.h.
+ *
+ * Copyright (C) 1999-2000 Hewlett-Packard Co
+ * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * Note that this file has dual use: when building the kernel
+ * natively, the file is translated into a binary and executed. When
+ * building the kernel in a cross-development environment, this file
+ * gets translated into an assembly file which, in turn, is processed
+ * by awk to generate offsets.h. So if you make any changes to this
+ * file, be sure to verify that the awk procedure still works (see
+ * print_offsets.awk).
+ */
+#include <linux/sched.h>
+
+#include <asm-ia64/processor.h>
+#include <asm-ia64/ptrace.h>
+#include <asm-ia64/siginfo.h>
+#include <asm-ia64/sigcontext.h>
+
+#ifdef offsetof
+# undef offsetof
+#endif
+
+/*
+ * We _can't_ include the host's standard header file, as those are in
+ * potential conflict with what the Linux kernel declares for the
+ * target system.
+ */
+extern int printf (const char *, ...);
+
+#define offsetof(type,field) ((char *) &((type *) 0)->field - (char *) 0)
+
+struct
+ {
+ const char name[256];
+ unsigned long value;
+ }
+tab[] =
+ {
+ { "IA64_TASK_SIZE", sizeof (struct task_struct) },
+ { "IA64_PT_REGS_SIZE", sizeof (struct pt_regs) },
+ { "IA64_SWITCH_STACK_SIZE", sizeof (struct switch_stack) },
+ { "IA64_SIGINFO_SIZE", sizeof (struct siginfo) },
+ { "", 0 }, /* spacer */
+ { "IA64_TASK_FLAGS_OFFSET", offsetof (struct task_struct, flags) },
+ { "IA64_TASK_SIGPENDING_OFFSET", offsetof (struct task_struct, sigpending) },
+ { "IA64_TASK_NEED_RESCHED_OFFSET", offsetof (struct task_struct, need_resched) },
+ { "IA64_TASK_THREAD_OFFSET", offsetof (struct task_struct, thread) },
+ { "IA64_TASK_THREAD_KSP_OFFSET", offsetof (struct task_struct, thread.ksp) },
+ { "IA64_TASK_PID_OFFSET", offsetof (struct task_struct, pid) },
+ { "IA64_TASK_MM_OFFSET", offsetof (struct task_struct, mm) },
+ { "IA64_PT_REGS_CR_IPSR_OFFSET", offsetof (struct pt_regs, cr_ipsr) },
+ { "IA64_PT_REGS_R12_OFFSET", offsetof (struct pt_regs, r12) },
+ { "IA64_PT_REGS_R8_OFFSET", offsetof (struct pt_regs, r8) },
+ { "IA64_PT_REGS_R16_OFFSET", offsetof (struct pt_regs, r16) },
+ { "IA64_SWITCH_STACK_B0_OFFSET", offsetof (struct switch_stack, b0) },
+ { "IA64_SWITCH_STACK_CALLER_UNAT_OFFSET", offsetof (struct switch_stack, caller_unat) },
+ { "IA64_SIGCONTEXT_AR_BSP_OFFSET", offsetof (struct sigcontext, sc_ar_bsp) },
+ { "IA64_SIGCONTEXT_AR_RNAT_OFFSET", offsetof (struct sigcontext, sc_ar_rnat) },
+ { "IA64_SIGCONTEXT_FLAGS_OFFSET", offsetof (struct sigcontext, sc_flags) },
+ { "IA64_SIGCONTEXT_CFM_OFFSET", offsetof (struct sigcontext, sc_cfm) },
+ { "IA64_SIGCONTEXT_FR6_OFFSET", offsetof (struct sigcontext, sc_fr[6]) },
+};
+
+static const char *tabs = "\t\t\t\t\t\t\t\t\t\t";
+
+int
+main (int argc, char **argv)
+{
+ const char *space;
+ int i, num_tabs;
+ size_t len;
+
+ printf ("#ifndef _ASM_IA64_OFFSETS_H\n");
+ printf ("#define _ASM_IA64_OFFSETS_H\n\n");
+
+ printf ("/*\n * DO NOT MODIFY\n *\n * This file was generated by "
+ "arch/ia64/tools/print_offsets.\n *\n */\n\n");
+
+ /* This is stretching things a bit, but entry.S needs the bit number
+ for PF_PTRACED and it can't include <linux/sched.h> so this seems
+     like a reasonable solution. At least the code won't break should
+     PF_PTRACED ever change. */
+ printf ("#define PF_PTRACED_BIT\t\t\t%u\n\n", ffs (PF_PTRACED) - 1);
+
+ for (i = 0; i < sizeof (tab) / sizeof (tab[0]); ++i)
+ {
+ if (tab[i].name[0] == '\0')
+ printf ("\n");
+ else
+ {
+ len = strlen (tab[i].name);
+
+ num_tabs = (40 - len) / 8;
+ if (num_tabs <= 0)
+ space = " ";
+ else
+ space = strchr(tabs, '\0') - (40 - len) / 8;
+
+ printf ("#define %s%s%lu\t/* 0x%lx */\n",
+ tab[i].name, space, tab[i].value, tab[i].value);
+ }
+ }
+
+ printf ("\n#endif /* _ASM_IA64_OFFSETS_H */\n");
+ return 0;
+}
--- /dev/null
+#include <linux/config.h>
+
+#include <asm/page.h>
+#include <asm/system.h>
+
+OUTPUT_FORMAT("elf64-ia64-little")
+OUTPUT_ARCH(ia64)
+ENTRY(_start)
+SECTIONS
+{
+ v = PAGE_OFFSET; /* this symbol is here to make debugging with kdb easier... */
+
+ . = KERNEL_START;
+
+ _text = .;
+ _stext = .;
+ .text : AT(ADDR(.text) - PAGE_OFFSET)
+ {
+ *(__ivt_section)
+ /* these are not really text pages, but the zero page needs to be in a fixed location: */
+ *(__special_page_section)
+ __start_gate_section = .;
+ *(__gate_section)
+ __stop_gate_section = .;
+ *(.text)
+ }
+ .text2 : AT(ADDR(.text2) - PAGE_OFFSET)
+ { *(.text2) }
+#ifdef CONFIG_SMP
+ .text.lock : AT(ADDR(.text.lock) - PAGE_OFFSET)
+ { *(.text.lock) }
+#endif
+ _etext = .;
+
+ /* Exception table */
+ . = ALIGN(16);
+ __start___ex_table = .;
+ __ex_table : AT(ADDR(__ex_table) - PAGE_OFFSET)
+ { *(__ex_table) }
+ __stop___ex_table = .;
+
+#if defined(CONFIG_KDB)
+ /* Kernel symbols and strings for kdb */
+# define KDB_MEAN_SYMBOL_SIZE 48
+# define KDB_SPACE (CONFIG_KDB_STBSIZE * KDB_MEAN_SYMBOL_SIZE)
+ . = ALIGN(8);
+ _skdb = .;
+ .kdb : AT(ADDR(.kdb) - PAGE_OFFSET)
+ {
+ *(kdbsymtab)
+ *(kdbstrings)
+ }
+ _ekdb = .;
+ . = _skdb + KDB_SPACE;
+#endif
+
+ /* Kernel symbol names for modules: */
+ .kstrtab : AT(ADDR(.kstrtab) - PAGE_OFFSET)
+ { *(.kstrtab) }
+
+ /* The initial task and kernel stack */
+ . = ALIGN(PAGE_SIZE);
+ init_task : AT(ADDR(init_task) - PAGE_OFFSET)
+ { *(init_task) }
+
+ /* Startup code */
+ __init_begin = .;
+ .text.init : AT(ADDR(.text.init) - PAGE_OFFSET)
+ { *(.text.init) }
+ .data.init : AT(ADDR(.data.init) - PAGE_OFFSET)
+ { *(.data.init) }
+ . = ALIGN(16);
+ __setup_start = .;
+ .setup.init : AT(ADDR(.setup.init) - PAGE_OFFSET)
+ { *(.setup.init) }
+ __setup_end = .;
+ __initcall_start = .;
+ .initcall.init : AT(ADDR(.initcall.init) - PAGE_OFFSET)
+ { *(.initcall.init) }
+ __initcall_end = .;
+ . = ALIGN(PAGE_SIZE);
+ __init_end = .;
+
+ .data.page_aligned : AT(ADDR(.data.page_aligned) - PAGE_OFFSET)
+ { *(.data.idt) }
+
+ . = ALIGN(64);
+ .data.cacheline_aligned : AT(ADDR(.data.cacheline_aligned) - PAGE_OFFSET)
+ { *(.data.cacheline_aligned) }
+
+ /* Global data */
+ _data = .;
+
+ .rodata : AT(ADDR(.rodata) - PAGE_OFFSET)
+ { *(.rodata) }
+ .opd : AT(ADDR(.opd) - PAGE_OFFSET)
+ { *(.opd) }
+ .data : AT(ADDR(.data) - PAGE_OFFSET)
+ { *(.data) *(.gnu.linkonce.d*) CONSTRUCTORS }
+
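+  /*
+   * gp-relative (addl) addressing uses a 22-bit signed displacement, so
+   * pointing __gp 2MB past the start of the small-data area puts the
+   * whole 4MB window (.got, .sdata, .sbss, ...) within reach.
+   */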
+ __gp = ALIGN (8) + 0x200000;
+
+ .got : AT(ADDR(.got) - PAGE_OFFSET)
+ { *(.got.plt) *(.got) }
+ /* We want the small data sections together, so single-instruction offsets
+ can access them all, and initialized data all before uninitialized, so
+ we can shorten the on-disk segment size. */
+ .sdata : AT(ADDR(.sdata) - PAGE_OFFSET)
+ { *(.sdata) }
+ _edata = .;
+ _bss = .;
+ .sbss : AT(ADDR(.sbss) - PAGE_OFFSET)
+ { *(.sbss) *(.scommon) }
+ .bss : AT(ADDR(.bss) - PAGE_OFFSET)
+ { *(.bss) *(COMMON) }
+ . = ALIGN(64 / 8);
+ _end = .;
+
+ /* Sections to be discarded */
+ /DISCARD/ : {
+ *(.text.exit)
+ *(.data.exit)
+ }
+
+ /* Stabs debugging sections. */
+ .stab 0 : { *(.stab) }
+ .stabstr 0 : { *(.stabstr) }
+ .stab.excl 0 : { *(.stab.excl) }
+ .stab.exclstr 0 : { *(.stab.exclstr) }
+ .stab.index 0 : { *(.stab.index) }
+ .stab.indexstr 0 : { *(.stab.indexstr) }
+ /* DWARF debug sections.
+ Symbols in the DWARF debugging sections are relative to the beginning
+ of the section so we begin them at 0. */
+ /* DWARF 1 */
+ .debug 0 : { *(.debug) }
+ .line 0 : { *(.line) }
+ /* GNU DWARF 1 extensions */
+ .debug_srcinfo 0 : { *(.debug_srcinfo) }
+ .debug_sfnames 0 : { *(.debug_sfnames) }
+ /* DWARF 1.1 and DWARF 2 */
+ .debug_aranges 0 : { *(.debug_aranges) }
+ .debug_pubnames 0 : { *(.debug_pubnames) }
+ /* DWARF 2 */
+ .debug_info 0 : { *(.debug_info) }
+ .debug_abbrev 0 : { *(.debug_abbrev) }
+ .debug_line 0 : { *(.debug_line) }
+ .debug_frame 0 : { *(.debug_frame) }
+ .debug_str 0 : { *(.debug_str) }
+ .debug_loc 0 : { *(.debug_loc) }
+ .debug_macinfo 0 : { *(.debug_macinfo) }
+ /* SGI/MIPS DWARF 2 extensions */
+ .debug_weaknames 0 : { *(.debug_weaknames) }
+ .debug_funcnames 0 : { *(.debug_funcnames) }
+ .debug_typenames 0 : { *(.debug_typenames) }
+ .debug_varnames 0 : { *(.debug_varnames) }
+ /* These must appear regardless of . */
+ /* Discard them for now since Intel SoftSDV cannot handle them.
+ .comment 0 : { *(.comment) }
+ .note 0 : { *(.note) }
+ */
+ /DISCARD/ : { *(.comment) }
+ /DISCARD/ : { *(.note) }
+}
O_TARGET := acorn-char.o
M_OBJS :=
-O_OBJS :=
+O_OBJS := i2c.o pcf8583.o
O_OBJS_arc := keyb_arc.o
O_OBJS_a5k := keyb_arc.o
O_OBJS_rpc := keyb_ps2.o
-ifeq ($(MACHINE),rpc)
- ifeq ($(CONFIG_BUSMOUSE),y)
- OX_OBJS += mouse_rpc.o
- else
- ifeq ($(CONFIG_BUSMOUSE),m)
- MX_OBJS += mouse_rpc.o
- endif
+ifeq ($(CONFIG_RPCMOUSE),y)
+ OX_OBJS += mouse_rpc.o
+else
+  ifeq ($(CONFIG_RPCMOUSE),m)
+ MX_OBJS += mouse_rpc.o
endif
endif
--- /dev/null
+/*
+ * linux/drivers/acorn/char/i2c.c
+ *
+ * Copyright (C) 2000 Russell King
+ *
+ * ARM IOC/IOMD i2c driver.
+ *
+ * On Acorn machines, the following i2c devices are on the bus:
+ * - PCF8583 real time clock & static RAM
+ */
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/i2c.h>
+#include <linux/i2c-algo-bit.h>
+
+#include <asm/hardware.h>
+#include <asm/io.h>
+#include <asm/ioc.h>
+#include <asm/system.h>
+
+#include "pcf8583.h"
+
+extern unsigned long
+mktime(unsigned int year, unsigned int mon, unsigned int day,
+ unsigned int hour, unsigned int min, unsigned int sec);
+extern int (*set_rtc)(void);
+
+static struct i2c_client *rtc_client;
+
+static inline int rtc_command(int cmd, void *data)
+{
+ int ret = -EIO;
+
+ if (rtc_client)
+ ret = rtc_client->driver->command(rtc_client, cmd, data);
+
+ return ret;
+}
+
+/*
+ * Read the current RTC time and date, and update xtime.
+ */
+static void get_rtc_time(void)
+{
+ unsigned char ctrl;
+ unsigned char year;
+ struct rtc_tm rtctm;
+ struct mem rtcmem = { 0xc0, 1, &year };
+
+ /*
+ * Ensure that the RTC is running.
+ */
+ rtc_command(RTC_GETCTRL, &ctrl);
+ if (ctrl & 0xc0) {
+ unsigned char new_ctrl;
+
+ new_ctrl = ctrl & ~0xc0;
+
+ printk("RTC: resetting control %02X -> %02X\n",
+ ctrl, new_ctrl);
+
+ rtc_command(RTC_SETCTRL, &new_ctrl);
+ }
+
+ /*
+ * Acorn machines store the year in
+ * the static RAM at location 192.
+ */
+ if (rtc_command(MEM_READ, &rtcmem))
+ return;
+
+ if (rtc_command(RTC_GETDATETIME, &rtctm))
+ return;
+
+ if (year < 70)
+ year += 100;
+
+ xtime.tv_usec = rtctm.cs * 10000;
+ xtime.tv_sec = mktime(1900 + year, rtctm.mon, rtctm.mday,
+ rtctm.hours, rtctm.mins, rtctm.secs);
+}
+
+/*
+ * Set the RTC time only. Note that
+ * we do not touch the date.
+ */
+static int set_rtc_time(void)
+{
+ struct rtc_tm new_rtctm, old_rtctm;
+ unsigned long nowtime = xtime.tv_sec;
+
+ if (rtc_command(RTC_GETDATETIME, &old_rtctm))
+ return 0;
+
+ new_rtctm.cs = xtime.tv_usec / 10000;
+ new_rtctm.secs = nowtime % 60; nowtime /= 60;
+ new_rtctm.mins = nowtime % 60; nowtime /= 60;
+ new_rtctm.hours = nowtime % 24;
+
+ /*
+ * avoid writing when we're going to change the day
+ * of the month. We will retry in the next minute.
+	 * This basically means that the RTC must not drift
+ * by more than 1 minute in 11 minutes.
+ *
+ * [ rtc: 1/1/2000 23:58:00, real 2/1/2000 00:01:00,
+ * rtc gets set to 1/1/2000 00:01:00 ]
+ */
+ if ((old_rtctm.hours == 23 && old_rtctm.mins == 59) ||
+ (new_rtctm.hours == 23 && new_rtctm.mins == 59))
+ return 1;
+
+ return rtc_command(RTC_SETTIME, &new_rtctm);
+}
+
+
+#define FORCE_ONES 0xdc
+#define SCL 0x02
+#define SDA 0x01
+
+static int ioc_control;
+
+static void ioc_setscl(void *data, int state)
+{
+ if (state)
+ ioc_control |= SCL;
+ else
+ ioc_control &= ~SCL;
+ outb(ioc_control, IOC_CONTROL);
+}
+
+static void ioc_setsda(void *data, int state)
+{
+ if (state)
+ ioc_control |= SDA;
+ else
+ ioc_control &= ~SDA;
+ outb(ioc_control, IOC_CONTROL);
+}
+
+static int ioc_getscl(void *data)
+{
+ return (inb(IOC_CONTROL) & SCL) != 0;
+}
+
+static int ioc_getsda(void *data)
+{
+ return (inb(IOC_CONTROL) & SDA) != 0;
+}
+
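+/*
+ * Bit-banging algorithm glue; the trailing constants are the bit-algo
+ * timing parameters (udelay, mdelay, timeout).
+ */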
+static struct i2c_algo_bit_data ioc_data = {
+ NULL,
+ ioc_setsda,
+ ioc_setscl,
+ ioc_getsda,
+ ioc_getscl,
+ 80, 80, 100
+};
+
+static int ioc_client_reg(struct i2c_client *client)
+{
+ if (client->id == I2C_DRIVERID_PCF8583 &&
+ client->addr == 0x50) {
+ rtc_client = client;
+ get_rtc_time();
+ set_rtc = set_rtc_time;
+ }
+
+ return 0;
+}
+
+static int ioc_client_unreg(struct i2c_client *client)
+{
+ if (client == rtc_client) {
+ set_rtc = NULL;
+ rtc_client = NULL;
+ }
+
+ return 0;
+}
+
+static struct i2c_adapter ioc_ops = {
+ "IOC/IOMD",
+ I2C_HW_B_IOC,
+ NULL,
+ &ioc_data,
+ NULL,
+ NULL,
+ ioc_client_reg,
+ ioc_client_unreg
+};
+
+static int __init i2c_ioc_init(void)
+{
+ ioc_control = inb(IOC_CONTROL) | FORCE_ONES;
+
+ ioc_setscl(NULL, 1);
+ ioc_setsda(NULL, 1);
+
+ return i2c_bit_add_bus(&ioc_ops);
+}
+
+__initcall(i2c_ioc_init);
--- /dev/null
+/*
+ * linux/drivers/i2c/pcf8583.c
+ *
+ * Copyright (C) 2000 Russell King
+ *
+ * Driver for PCF8583 RTC & RAM chip
+ */
+
+#include <linux/i2c.h>
+#include <linux/malloc.h>
+#include <linux/string.h>
+#include <linux/mc146818rtc.h>
+#include <linux/init.h>
+
+#include "pcf8583.h"
+
+static struct i2c_driver pcf8583_driver;
+
+static unsigned short ignore[] = { I2C_CLIENT_END };
+static unsigned short normal_addr[] = { 0x50, 0x51, I2C_CLIENT_END };
+
+static struct i2c_client_address_data addr_data = {
+ force: ignore,
+ ignore: ignore,
+ ignore_range: ignore,
+ normal_i2c: ignore,
+ normal_i2c_range: normal_addr,
+ probe: ignore,
+ probe_range: ignore
+};
+
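+/*
+ * The chip's control register is cached in the otherwise unused
+ * client->data pointer (via DAT() below), so that reading it back
+ * does not require an i2c transfer.
+ */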
+#define DAT(x) ((unsigned int)(x->data))
+
+static int
+pcf8583_attach(struct i2c_adapter *adap, int addr, unsigned short flags,
+ int kind)
+{
+ struct i2c_client *c;
+ unsigned char buf[1], ad[1] = { 0 };
+ struct i2c_msg msgs[2] = {
+ { addr, 0, 1, ad },
+ { addr, I2C_M_RD, 1, buf }
+ };
+
+ c = kmalloc(sizeof(*c), GFP_KERNEL);
+ if (!c)
+ return -ENOMEM;
+
+ strcpy(c->name, "PCF8583");
+ c->id = pcf8583_driver.id;
+ c->flags = 0;
+ c->addr = addr;
+ c->adapter = adap;
+ c->driver = &pcf8583_driver;
+ c->data = NULL;
+
+ if (i2c_transfer(c->adapter, msgs, 2) == 2)
+ DAT(c) = buf[0];
+
+ return i2c_attach_client(c);
+}
+
+static int
+pcf8583_probe(struct i2c_adapter *adap)
+{
+ return i2c_probe(adap, &addr_data, pcf8583_attach);
+}
+
+static int
+pcf8583_detach(struct i2c_client *client)
+{
+ i2c_detach_client(client);
+ return 0;
+}
+
+static int
+pcf8583_get_datetime(struct i2c_client *client, struct rtc_tm *dt)
+{
+ unsigned char buf[8], addr[1] = { 1 };
+ struct i2c_msg msgs[2] = {
+ { client->addr, 0, 1, addr },
+ { client->addr, I2C_M_RD, 6, buf }
+ };
+ int ret = -EIO;
+
+ if (i2c_transfer(client->adapter, msgs, 2) == 2) {
+ dt->year_off = buf[4] >> 6;
+ dt->wday = buf[5] >> 5;
+
+ buf[4] &= 0x3f;
+ buf[5] &= 0x1f;
+
+ dt->cs = BCD_TO_BIN(buf[0]);
+ dt->secs = BCD_TO_BIN(buf[1]);
+ dt->mins = BCD_TO_BIN(buf[2]);
+ dt->hours = BCD_TO_BIN(buf[3]);
+ dt->mday = BCD_TO_BIN(buf[4]);
+ dt->mon = BCD_TO_BIN(buf[5]);
+
+ ret = 0;
+ }
+
+ return ret;
+}
+
+static int
+pcf8583_set_datetime(struct i2c_client *client, struct rtc_tm *dt, int datetoo)
+{
+ unsigned char buf[8];
+ int ret, len = 6;
+
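+	/*
+	 * Write the cached control byte with CTRL_STOP (0x80) set to halt
+	 * the clock while the time registers are rewritten; the saved
+	 * value is restored (restarting the clock) below.
+	 */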
+ buf[0] = 0;
+ buf[1] = DAT(client) | 0x80;
+ buf[2] = BIN_TO_BCD(dt->cs);
+ buf[3] = BIN_TO_BCD(dt->secs);
+ buf[4] = BIN_TO_BCD(dt->mins);
+ buf[5] = BIN_TO_BCD(dt->hours);
+
+ if (datetoo) {
+ len = 8;
+ buf[6] = BIN_TO_BCD(dt->mday) | (dt->year_off << 6);
+ buf[7] = BIN_TO_BCD(dt->mon) | (dt->wday << 5);
+ }
+
+ ret = i2c_master_send(client, (char *)buf, len);
+
+ buf[1] = DAT(client);
+ i2c_master_send(client, (char *)buf, 2);
+
+ return ret;
+}
+
+static int
+pcf8583_get_ctrl(struct i2c_client *client, unsigned char *ctrl)
+{
+ *ctrl = DAT(client);
+ return 0;
+}
+
+static int
+pcf8583_set_ctrl(struct i2c_client *client, unsigned char *ctrl)
+{
+ unsigned char buf[2];
+
+ buf[0] = 0;
+ buf[1] = *ctrl;
+ DAT(client) = *ctrl;
+
+ return i2c_master_send(client, (char *)buf, 2);
+}
+
+static int
+pcf8583_read_mem(struct i2c_client *client, struct mem *mem)
+{
+ unsigned char addr[1];
+ struct i2c_msg msgs[2] = {
+ { client->addr, 0, 1, addr },
+ { client->addr, I2C_M_RD, 0, mem->data }
+ };
+
+ if (mem->loc < 8)
+ return -EINVAL;
+
+ addr[0] = mem->loc;
+ msgs[1].len = mem->nr;
+
+ return i2c_transfer(client->adapter, msgs, 2) == 2 ? 0 : -EIO;
+}
+
+static int
+pcf8583_write_mem(struct i2c_client *client, struct mem *mem)
+{
+ unsigned char addr[1];
+ struct i2c_msg msgs[2] = {
+ { client->addr, 0, 1, addr },
+ { client->addr, 0, 0, mem->data }
+ };
+
+ if (mem->loc < 8)
+ return -EINVAL;
+
+ addr[0] = mem->loc;
+ msgs[1].len = mem->nr;
+
+ return i2c_transfer(client->adapter, msgs, 2) == 2 ? 0 : -EIO;
+}
+
+static int
+pcf8583_command(struct i2c_client *client, unsigned int cmd, void *arg)
+{
+ switch (cmd) {
+ case RTC_GETDATETIME:
+ return pcf8583_get_datetime(client, arg);
+
+ case RTC_SETTIME:
+ return pcf8583_set_datetime(client, arg, 0);
+
+ case RTC_SETDATETIME:
+ return pcf8583_set_datetime(client, arg, 1);
+
+ case RTC_GETCTRL:
+ return pcf8583_get_ctrl(client, arg);
+
+ case RTC_SETCTRL:
+ return pcf8583_set_ctrl(client, arg);
+
+ case MEM_READ:
+ return pcf8583_read_mem(client, arg);
+
+ case MEM_WRITE:
+ return pcf8583_write_mem(client, arg);
+
+ default:
+ return -EINVAL;
+ }
+}
+
+static struct i2c_driver pcf8583_driver = {
+ "PCF8583",
+ I2C_DRIVERID_PCF8583,
+ I2C_DF_NOTIFY,
+ pcf8583_probe,
+ pcf8583_detach,
+ pcf8583_command
+};
+
+static __init int pcf8583_init(void)
+{
+ return i2c_add_driver(&pcf8583_driver);
+}
+
+__initcall(pcf8583_init);
--- /dev/null
+struct rtc_tm {
+ unsigned char cs;
+ unsigned char secs;
+ unsigned char mins;
+ unsigned char hours;
+ unsigned char mday;
+ unsigned char mon;
+ unsigned char year_off;
+ unsigned char wday;
+};
+
+struct mem {
+ unsigned int loc;
+ unsigned int nr;
+ unsigned char *data;
+};
+
+#define RTC_GETDATETIME 0
+#define RTC_SETTIME 1
+#define RTC_SETDATETIME 2
+#define RTC_GETCTRL 3
+#define RTC_SETCTRL 4
+#define MEM_READ 5
+#define MEM_WRITE 6
+
+#define CTRL_STOP 0x80
+#define CTRL_HOLD 0x40
+#define CTRL_32KHZ 0x00
+#define CTRL_MASK 0x08
+#define CTRL_ALARMEN 0x04
+#define CTRL_ALARM 0x02
+#define CTRL_TIMER 0x01
if [ "$CONFIG_X86" = "y" ]; then
bool ' SiS5513 chipset support' CONFIG_BLK_DEV_SIS5513
fi
+ bool ' Cyrix CS5530 MediaGX chipset support' CONFIG_BLK_DEV_CS5530
fi
if [ "$CONFIG_IDEDMA_PCI_EXPERIMENTAL" = "y" ]; then
bool ' Tekram TRM290 chipset support (EXPERIMENTAL)' CONFIG_BLK_DEV_TRM290
"$CONFIG_BLK_DEV_PDC202XX" = "y" -o \
"$CONFIG_BLK_DEV_PIIX" = "y" -o \
"$CONFIG_BLK_DEV_SIS5513" = "y" -o \
+ "$CONFIG_BLK_DEV_CS5530" = "y" -o \
"$CONFIG_BLK_DEV_SL82C105" = "y" ]; then
define_bool CONFIG_BLK_DEV_IDE_MODES y
else
IDE_OBJS += pdc202xx.o
endif
+ifeq ($(CONFIG_BLK_DEV_CS5530),y)
+IDE_OBJS += cs5530.o
+endif
+
ifeq ($(CONFIG_BLK_DEV_PDC4030),y)
IDE_OBJS += pdc4030.o
endif
--- /dev/null
+/*
+ * linux/drivers/block/cs5530.c Version 0.2 Jan 30, 2000
+ *
+ * Copyright (C) 2000 Mark Lord <mlord@pobox.com>
+ * May be copied or modified under the terms of the GNU General Public License
+ *
+ * Development of this chipset driver was funded
+ * by the nice folks at National Semiconductor.
+ */
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/delay.h>
+#include <linux/timer.h>
+#include <linux/mm.h>
+#include <linux/ioport.h>
+#include <linux/blkdev.h>
+#include <linux/hdreg.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+#include <linux/ide.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include "ide_modes.h"
+
+/*
+ * Return the mode name for a drive transfer mode value:
+ */
+static const char *strmode (byte mode)
+{
+ switch (mode) {
+ case XFER_UDMA_4: return("UDMA4");
+ case XFER_UDMA_3: return("UDMA3");
+ case XFER_UDMA_2: return("UDMA2");
+ case XFER_UDMA_1: return("UDMA1");
+ case XFER_UDMA_0: return("UDMA0");
+ case XFER_MW_DMA_2: return("MDMA2");
+ case XFER_MW_DMA_1: return("MDMA1");
+ case XFER_MW_DMA_0: return("MDMA0");
+ case XFER_SW_DMA_2: return("SDMA2");
+ case XFER_SW_DMA_1: return("SDMA1");
+ case XFER_SW_DMA_0: return("SDMA0");
+ case XFER_PIO_4: return("PIO4");
+ case XFER_PIO_3: return("PIO3");
+ case XFER_PIO_2: return("PIO2");
+ case XFER_PIO_1: return("PIO1");
+ case XFER_PIO_0: return("PIO0");
+ default: return("???");
+ }
+}
+
+/*
+ * Set a new transfer mode at the drive
+ */
+int cs5530_set_xfer_mode (ide_drive_t *drive, byte mode)
+{
+ int i, error = 1;
+ byte stat;
+ ide_hwif_t *hwif = HWIF(drive);
+
+ printk("%s: cs5530_set_xfer_mode(%s)\n", drive->name, strmode(mode));
+ /*
+ * If this is a DMA mode setting, then turn off all DMA bits.
+ * We will set one of them back on afterwards, if all goes well.
+ *
+ * Not sure why this is needed (it looks very silly),
+ * but other IDE chipset drivers also do this fiddling. ???? -ml
+ */
+ switch (mode) {
+ case XFER_UDMA_4:
+ case XFER_UDMA_3:
+ case XFER_UDMA_2:
+ case XFER_UDMA_1:
+ case XFER_UDMA_0:
+ case XFER_MW_DMA_2:
+ case XFER_MW_DMA_1:
+ case XFER_MW_DMA_0:
+ case XFER_SW_DMA_2:
+ case XFER_SW_DMA_1:
+ case XFER_SW_DMA_0:
+ drive->id->dma_ultra &= ~0xFF00;
+ drive->id->dma_mword &= ~0x0F00;
+ drive->id->dma_1word &= ~0x0F00;
+ }
+
+ /*
+ * Select the drive, and issue the SETFEATURES command
+ */
+ disable_irq(hwif->irq);
+ udelay(1);
+ SELECT_DRIVE(HWIF(drive), drive);
+ udelay(1);
+ if (IDE_CONTROL_REG)
+ OUT_BYTE(drive->ctl | 2, IDE_CONTROL_REG);
+ OUT_BYTE(mode, IDE_NSECTOR_REG);
+ OUT_BYTE(SETFEATURES_XFER, IDE_FEATURE_REG);
+ OUT_BYTE(WIN_SETFEATURES, IDE_COMMAND_REG);
+ udelay(1); /* spec allows drive 400ns to assert "BUSY" */
+
+ /*
+ * Wait for drive to become non-BUSY
+ */
+ if ((stat = GET_STAT()) & BUSY_STAT) {
+ unsigned long flags, timeout;
+ __save_flags(flags); /* local CPU only */
+ ide__sti(); /* local CPU only -- for jiffies */
+ timeout = jiffies + WAIT_CMD;
+ while ((stat = GET_STAT()) & BUSY_STAT) {
+ if (0 < (signed long)(jiffies - timeout))
+ break;
+ }
+ __restore_flags(flags); /* local CPU only */
+ }
+
+ /*
+ * Allow status to settle, then read it again.
+ * A few rare drives vastly violate the 400ns spec here,
+ * so we'll wait up to 10usec for a "good" status
+ * rather than expensively fail things immediately.
+ */
+ for (i = 0; i < 10; i++) {
+ udelay(1);
+ if (OK_STAT((stat = GET_STAT()), DRIVE_READY, BUSY_STAT|DRQ_STAT|ERR_STAT)) {
+ error = 0;
+ break;
+ }
+ }
+ enable_irq(hwif->irq);
+
+ /*
+ * Turn dma bit on if all is okay
+ */
+ if (error) {
+ (void) ide_dump_status(drive, "cs5530_set_xfer_mode", stat);
+ } else {
+ switch (mode) {
+ case XFER_UDMA_4: drive->id->dma_ultra |= 0x1010; break;
+ case XFER_UDMA_3: drive->id->dma_ultra |= 0x0808; break;
+ case XFER_UDMA_2: drive->id->dma_ultra |= 0x0404; break;
+ case XFER_UDMA_1: drive->id->dma_ultra |= 0x0202; break;
+ case XFER_UDMA_0: drive->id->dma_ultra |= 0x0101; break;
+ case XFER_MW_DMA_2: drive->id->dma_mword |= 0x0404; break;
+ case XFER_MW_DMA_1: drive->id->dma_mword |= 0x0202; break;
+ case XFER_MW_DMA_0: drive->id->dma_mword |= 0x0101; break;
+ case XFER_SW_DMA_2: drive->id->dma_1word |= 0x0404; break;
+ case XFER_SW_DMA_1: drive->id->dma_1word |= 0x0202; break;
+ case XFER_SW_DMA_0: drive->id->dma_1word |= 0x0101; break;
+ }
+ }
+ return error;
+}
+
+/*
+ * Here are the standard PIO mode 0-4 timings for each "format".
+ * Format-0 uses fast data reg timings, with slower command reg timings.
+ * Format-1 uses fast timings for all registers, but won't work with all drives.
+ */
+static unsigned int cs5530_pio_timings[2][5] =
+ {{0x00009172, 0x00012171, 0x00020080, 0x00032010, 0x00040010},
+ {0xd1329172, 0x71212171, 0x30200080, 0x20102010, 0x00100010}};
+
+/*
+ * After chip reset, the PIO timings are set to 0x0000e132, which is not valid.
+ */
+#define CS5530_BAD_PIO(timings) (((timings)&~0x80000000)==0x0000e132)
+#define CS5530_BASEREG(hwif) (((hwif)->dma_base & ~0xf) + ((hwif)->channel ? 0x30 : 0x20))
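+/*
+ * The timing registers sit at fixed offsets from the 16-byte-aligned
+ * bus-master base: 0x20 for the primary channel, 0x30 for the
+ * secondary; drive 1's registers follow drive 0's by 8 bytes (hence
+ * the "<<3" in cs5530_tuneproc() below).
+ */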
+
+/*
+ * cs5530_tuneproc() handles selection/setting of PIO modes
+ * for both the chipset and drive.
+ *
+ * The ide_init_cs5530() routine guarantees that all drives
+ * will have valid default PIO timings set up before we get here.
+ */
+static void cs5530_tuneproc (ide_drive_t *drive, byte pio) /* pio=255 means "autotune" */
+{
+ ide_hwif_t *hwif = HWIF(drive);
+ unsigned int format, basereg = CS5530_BASEREG(hwif);
+ static byte modes[5] = {XFER_PIO_0, XFER_PIO_1, XFER_PIO_2, XFER_PIO_3, XFER_PIO_4};
+
+ pio = ide_get_best_pio_mode(drive, pio, 4, NULL);
+ if (!cs5530_set_xfer_mode(drive, modes[pio])) {
+ format = (inl(basereg+4) >> 31) & 1;
+ outl(cs5530_pio_timings[format][pio], basereg+(drive->select.b.unit<<3));
+ }
+}
+
+/*
+ * cs5530_config_dma() handles selection/setting of DMA/UDMA modes
+ * for both the chipset and drive.
+ */
+static int cs5530_config_dma (ide_drive_t *drive)
+{
+ int udma_ok = 1, mode = 0;
+ ide_hwif_t *hwif = HWIF(drive);
+ int unit = drive->select.b.unit;
+ ide_drive_t *mate = &hwif->drives[unit^1];
+ struct hd_driveid *id = drive->id;
+ unsigned int basereg, reg, timings;
+
+ /*
+ * Default to DMA-off in case we run into trouble here.
+ */
+ (void)hwif->dmaproc(ide_dma_off_quietly, drive); /* turn off DMA while we fiddle */
+ outb(inb(hwif->dma_base+2)&~(unit?0x40:0x20), hwif->dma_base+2); /* clear DMA_capable bit */
+
+ /*
+ * The CS5530 specifies that two drives sharing a cable cannot
+ * mix UDMA/MDMA. It has to be one or the other, for the pair,
+ * though different timings can still be chosen for each drive.
+ * We could set the appropriate timing bits on the fly,
+ * but that might be a bit confusing. So, for now we statically
+ * handle this requirement by looking at our mate drive to see
+ * what it is capable of, before choosing a mode for our own drive.
+ */
+ if (mate->present) {
+ struct hd_driveid *mateid = mate->id;
+ if (mateid && (mateid->capability & 1) && !hwif->dmaproc(ide_dma_bad_drive, mate)) {
+ if ((mateid->field_valid & 4) && (mateid->dma_ultra & 7))
+ udma_ok = 1;
+ else if ((mateid->field_valid & 2) && (mateid->dma_mword & 7))
+ udma_ok = 0;
+ else
+ udma_ok = 1;
+ }
+ }
+
+ /*
+ * Now see what the current drive is capable of,
+ * selecting UDMA only if the mate said it was ok.
+ */
+ if (id && (id->capability & 1) && hwif->autodma && !hwif->dmaproc(ide_dma_bad_drive, drive)) {
+ if (udma_ok && (id->field_valid & 4) && (id->dma_ultra & 7)) {
+ if (id->dma_ultra & 4)
+ mode = XFER_UDMA_2;
+ else if (id->dma_ultra & 2)
+ mode = XFER_UDMA_1;
+ else if (id->dma_ultra & 1)
+ mode = XFER_UDMA_0;
+ }
+ if (!mode && (id->field_valid & 2) && (id->dma_mword & 7)) {
+ if (id->dma_mword & 4)
+ mode = XFER_MW_DMA_2;
+ else if (id->dma_mword & 2)
+ mode = XFER_MW_DMA_1;
+ else if (id->dma_mword & 1)
+ mode = XFER_MW_DMA_0;
+ }
+ }
+
+ /*
+ * Tell the drive to switch to the new mode; abort on failure.
+ */
+ if (!mode || cs5530_set_xfer_mode(drive, mode))
+ return 1; /* failure */
+
+ /*
+ * Now tune the chipset to match the drive:
+ */
+ switch (mode) {
+ case XFER_UDMA_0: timings = 0x00921250; break;
+ case XFER_UDMA_1: timings = 0x00911140; break;
+ case XFER_UDMA_2: timings = 0x00911030; break;
+ case XFER_MW_DMA_0: timings = 0x00077771; break;
+ case XFER_MW_DMA_1: timings = 0x00012121; break;
+ case XFER_MW_DMA_2: timings = 0x00002020; break;
+ default:
+ printk("%s: cs5530_config_dma: huh? mode=%02x\n", drive->name, mode);
+ return 1; /* failure */
+ }
+ basereg = CS5530_BASEREG(hwif);
+ reg = inl(basereg+4); /* get drive0 config register */
+ timings |= reg & 0x80000000; /* preserve PIO format bit */
+ if (unit == 0) { /* are we configuring drive0? */
+ outl(timings, basereg+4); /* write drive0 config register */
+ } else {
+ if (timings & 0x00100000)
+ reg |= 0x00100000; /* enable UDMA timings for both drives */
+ else
+ reg &= ~0x00100000; /* disable UDMA timings for both drives */
+ outl(reg, basereg+4); /* write drive0 config register */
+ outl(timings, basereg+12); /* write drive1 config register */
+ }
+ outb(inb(hwif->dma_base+2)|(unit?0x40:0x20), hwif->dma_base+2); /* set DMA_capable bit */
+
+ if (!strcmp(drive->name, "hdc")) /* FIXME */
+ return 0;
+ /*
+ * Finally, turn DMA on in software, and exit.
+ */
+ return hwif->dmaproc(ide_dma_on, drive); /* success */
+}
+
+/*
+ * This is a CS5530-specific wrapper for the standard ide_dmaproc().
+ * We need it for our custom "ide_dma_check" function.
+ * All other requests are forwarded to the standard ide_dmaproc().
+ */
+int cs5530_dmaproc (ide_dma_action_t func, ide_drive_t *drive)
+{
+ switch (func) {
+ case ide_dma_check:
+ return cs5530_config_dma(drive);
+ default:
+ return ide_dmaproc(func, drive);
+ }
+}
+
+/*
+ * Initialize the cs5530 bridge for reliable IDE DMA operation.
+ */
+unsigned int __init pci_init_cs5530 (struct pci_dev *dev, const char *name)
+{
+ struct pci_dev *master_0 = NULL, *cs5530_0 = NULL;
+ unsigned short pcicmd = 0;
+ unsigned long flags;
+
+ pci_for_each_dev (dev) {
+ if (dev->vendor == PCI_VENDOR_ID_CYRIX) {
+ switch (dev->device) {
+ case PCI_DEVICE_ID_CYRIX_PCI_MASTER:
+ master_0 = dev;
+ break;
+ case PCI_DEVICE_ID_CYRIX_5530_LEGACY:
+ cs5530_0 = dev;
+ break;
+ }
+ }
+ }
+ if (!master_0) {
+ printk("%s: unable to locate PCI MASTER function\n", name);
+ return 0;
+ }
+ if (!cs5530_0) {
+ printk("%s: unable to locate CS5530 LEGACY function\n", name);
+ return 0;
+ }
+
+ save_flags(flags);
+ cli(); /* all CPUs (there should only be one CPU with this chipset) */
+
+ /*
+ * Enable BusMaster and MemoryWriteAndInvalidate for the cs5530:
+ * --> OR 0x14 into 16-bit PCI COMMAND reg of function 0 of the cs5530
+ */
+ pci_read_config_word (cs5530_0, PCI_COMMAND, &pcicmd);
+ pci_write_config_word(cs5530_0, PCI_COMMAND, pcicmd | PCI_COMMAND_MASTER | PCI_COMMAND_INVALIDATE);
+
+ /*
+ * Set PCI CacheLineSize to 16-bytes:
+ * --> Write 0x04 into 8-bit PCI CACHELINESIZE reg of function 0 of the cs5530
+ */
+ pci_write_config_byte(cs5530_0, PCI_CACHE_LINE_SIZE, 0x04);
+
+ /*
+ * Disable trapping of UDMA register accesses (Win98 hack):
+ * --> Write 0x5006 into 16-bit reg at offset 0xd0 of function 0 of the cs5530
+ */
+ pci_write_config_word(cs5530_0, 0xd0, 0x5006);
+
+ /*
+ * Bit-1 at 0x40 enables MemoryWriteAndInvalidate on internal X-bus:
+ * The other settings are what is necessary to get the register
+ * into a sane state for IDE DMA operation.
+ */
+ pci_write_config_byte(master_0, 0x40, 0x1e);
+
+ /*
+ * Set max PCI burst size (16-bytes seems to work best):
+ * 16bytes: set bit-1 at 0x41 (reg value of 0x16)
+ * all others: clear bit-1 at 0x41, and do:
+ * 128bytes: OR 0x00 at 0x41
+ * 256bytes: OR 0x04 at 0x41
+ * 512bytes: OR 0x08 at 0x41
+ * 1024bytes: OR 0x0c at 0x41
+ */
+ pci_write_config_byte(master_0, 0x41, 0x14);
+
+ /*
+ * These settings are necessary to get the chip
+ * into a sane state for IDE DMA operation.
+ */
+ pci_write_config_byte(master_0, 0x42, 0x00);
+ pci_write_config_byte(master_0, 0x43, 0xc1);
+
+ restore_flags(flags);
+ return 0;
+}
+
+/*
+ * This gets invoked by the IDE driver once for each channel,
+ * and performs channel-specific pre-initialization before drive probing.
+ */
+void __init ide_init_cs5530 (ide_hwif_t *hwif)
+{
+ if (hwif->mate)
+ hwif->serialized = hwif->mate->serialized = 1;
+ if (!hwif->dma_base) {
+ hwif->autodma = 0;
+ } else {
+ unsigned int basereg, d0_timings;
+
+ hwif->dmaproc = &cs5530_dmaproc;
+ hwif->tuneproc = &cs5530_tuneproc;
+ basereg = CS5530_BASEREG(hwif);
+ d0_timings = inl(basereg+0);
+ if (CS5530_BAD_PIO(d0_timings)) { /* PIO timings not initialized? */
+ outl(cs5530_pio_timings[(d0_timings>>31)&1][0], basereg+0);
+ if (!hwif->drives[0].autotune)
+ hwif->drives[0].autotune = 1; /* needs autotuning later */
+ }
+ if (CS5530_BAD_PIO(inl(basereg+8))) { /* PIO timings not initialized? */
+ outl(cs5530_pio_timings[(d0_timings>>31)&1][0], basereg+8);
+ if (!hwif->drives[1].autotune)
+ hwif->drives[1].autotune = 1; /* needs autotuning later */
+ }
+ }
+}
#define DEVID_ALI15X3 ((ide_pci_devid_t){PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M5229})
#define DEVID_CY82C693 ((ide_pci_devid_t){PCI_VENDOR_ID_CONTAQ, PCI_DEVICE_ID_CONTAQ_82C693})
#define DEVID_HINT ((ide_pci_devid_t){0x3388, 0x8013})
-#define DEVID_CX5530 ((ide_pci_devid_t){PCI_VENDOR_ID_CYRIX, PCI_DEVICE_ID_CYRIX_5530_IDE})
+#define DEVID_CS5530 ((ide_pci_devid_t){PCI_VENDOR_ID_CYRIX, PCI_DEVICE_ID_CYRIX_5530_IDE})
#define DEVID_AMD7409 ((ide_pci_devid_t){PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_VIPER_7409})
#define IDE_IGNORE ((void *)-1)
#define INIT_CY82C693 NULL
#endif
-#define INIT_CX5530 NULL
+#ifdef CONFIG_BLK_DEV_CS5530
+extern unsigned int pci_init_cs5530(struct pci_dev *, const char *);
+extern void ide_init_cs5530(ide_hwif_t *);
+#define INIT_CS5530 &ide_init_cs5530
+#define PCI_CS5530 &pci_init_cs5530
+#else
+#define INIT_CS5530 NULL
+#define PCI_CS5530 NULL
+#endif
#ifdef CONFIG_BLK_DEV_HPT34X
extern unsigned int pci_init_hpt34x(struct pci_dev *, const char *);
{DEVID_ALI15X3, "ALI15X3", PCI_ALI15X3, ATA66_ALI15X3, INIT_ALI15X3, DMA_ALI15X3, {{0x00,0x00,0x00}, {0x00,0x00,0x00}}, ON_BOARD, 0 },
{DEVID_CY82C693,"CY82C693", PCI_CY82C693, NULL, INIT_CY82C693, NULL, {{0x00,0x00,0x00}, {0x00,0x00,0x00}}, ON_BOARD, 0 },
{DEVID_HINT, "HINT_IDE", NULL, NULL, NULL, NULL, {{0x00,0x00,0x00}, {0x00,0x00,0x00}}, ON_BOARD, 0 },
- {DEVID_CX5530, "CX5530", NULL, NULL, INIT_CX5530, NULL, {{0x00,0x00,0x00}, {0x00,0x00,0x00}}, ON_BOARD, 0 },
+ {DEVID_CS5530, "CS5530", PCI_CS5530, NULL, INIT_CS5530, NULL, {{0x00,0x00,0x00}, {0x00,0x00,0x00}}, ON_BOARD, 0 },
{DEVID_AMD7409, "AMD7409", NULL, ATA66_AMD7409, INIT_AMD7409, NULL, {{0x40,0x01,0x01}, {0x40,0x02,0x02}}, ON_BOARD, 0 },
{IDE_PCI_DEVID_NULL, "PCI_IDE", NULL, NULL, NULL, NULL, {{0x00,0x00,0x00}, {0x00,0x00,0x00}}, ON_BOARD, 0 }};
IDE_PCI_DEVID_EQ(d->devid, DEVID_HPT34X) ||
#endif /* CONFIG_BLK_DEV_HPT34X */
IDE_PCI_DEVID_EQ(d->devid, DEVID_HPT366) ||
+ IDE_PCI_DEVID_EQ(d->devid, DEVID_CS5530) ||
IDE_PCI_DEVID_EQ(d->devid, DEVID_CY82C693) ||
((dev->class >> 8) == PCI_CLASS_STORAGE_IDE && (dev->class & 0x80))) {
unsigned long dma_base = ide_get_or_set_dma_base(hwif, (!mate && d->extra) ? d->extra : 0, d->name);
# parent makes..
#
+O_OBJS :=
+OX_OBJS :=
+M_OBJS :=
+MX_OBJS :=
+
+# Object file lists.
+
+obj-y :=
+obj-m :=
+obj-n :=
+obj- :=
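+# obj-$(CONFIG_FOO) expands to obj-y, obj-m, obj-n, or obj- according to
+# whether CONFIG_FOO is y, m, n, or unset; only the obj-y objects are
+# linked into the kernel proper, and obj-m objects are built as modules.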
+
SUB_DIRS :=
MOD_SUB_DIRS := $(SUB_DIRS)
ALL_SUB_DIRS := $(SUB_DIRS) ftape joystick pcmcia
FONTMAPFILE = cp437.uni
O_TARGET := char.o
-M_OBJS :=
-O_OBJS := tty_io.o n_tty.o tty_ioctl.o mem.o raw.o
-OX_OBJS := pty.o misc.o random.o
+obj-y += tty_io.o n_tty.o tty_ioctl.o mem.o raw.o pty.o misc.o random.o
+
+# All of the (potential) objects that export symbols.
+# This list comes from 'grep -l EXPORT_SYMBOL *.[hc]'.
+
+export-objs := busmouse.o console.o i2c-old.o keyboard.o \
+ misc.o pty.o random.o selection.o serial.o videodev.o
KEYMAP =defkeymap.o
KEYBD =pc_keyb.o
SERIAL =
endif
-ifdef CONFIG_VT
-O_OBJS += vt.o vc_screen.o consolemap.o consolemap_deftbl.o
-OX_OBJS += $(CONSOLE) selection.o
-endif
-
-ifeq ($(CONFIG_SERIAL),y)
-OX_OBJS += $(SERIAL)
-else
- ifeq ($(CONFIG_SERIAL),m)
- MX_OBJS += $(SERIAL)
- endif
-endif
-
-ifeq ($(CONFIG_SERIAL_21285),y)
-O_OBJS += serial_21285.o
-endif
+obj-$(CONFIG_VT) += vt.o vc_screen.o consolemap.o consolemap_deftbl.o $(CONSOLE) selection.o
+obj-$(CONFIG_SERIAL) += $(SERIAL)
+obj-$(CONFIG_SERIAL_21285) += serial_21285.o
ifndef CONFIG_SUN_KEYBOARD
- ifdef CONFIG_VT
- OX_OBJS += keyboard.o
- O_OBJS += $(KEYMAP) $(KEYBD)
- endif
+ obj-$(CONFIG_VT) += keyboard.o $(KEYMAP) $(KEYBD)
else
- ifdef CONFIG_PCI
- OX_OBJS += keyboard.o
- O_OBJS += $(KEYMAP)
- endif
+ obj-$(CONFIG_PCI) += keyboard.o $(KEYMAP)
endif
-ifdef CONFIG_MAGIC_SYSRQ
-OX_OBJS += sysrq.o
-endif
+obj-$(CONFIG_MAGIC_SYSRQ) += sysrq.o
+obj-$(CONFIG_ATARI_DSP56K) += dsp56k.o
ifeq ($(CONFIG_ATARI_DSP56K),y)
-O_OBJS += dsp56k.o
S = y
else
ifeq ($(CONFIG_ATARI_DSP56K),m)
- M_OBJS += dsp56k.o
SM = y
endif
endif
-ifeq ($(CONFIG_ROCKETPORT),y)
-O_OBJS += rocket.o
-else
- ifeq ($(CONFIG_ROCKETPORT),m)
- M_OBJS += rocket.o
- endif
-endif
-
-ifeq ($(CONFIG_MOXA_SMARTIO),y)
-L_OBJS += mxser.o
-else
- ifeq ($(CONFIG_MOXA_SMARTIO),m)
- M_OBJS += mxser.o
- endif
-endif
-
-ifeq ($(CONFIG_MOXA_INTELLIO),y)
-L_OBJS += moxa.o
-else
- ifeq ($(CONFIG_MOXA_INTELLIO),m)
- M_OBJS += moxa.o
- endif
-endif
-
-ifeq ($(CONFIG_DIGI),y)
-O_OBJS += pcxx.o
-else
- ifeq ($(CONFIG_DIGI),m)
- M_OBJS += pcxx.o
- endif
-endif
-
-ifeq ($(CONFIG_DIGIEPCA),y)
-O_OBJS += epca.o
-else
- ifeq ($(CONFIG_DIGIEPCA),m)
- M_OBJS += epca.o
- endif
-endif
-
-ifeq ($(CONFIG_CYCLADES),y)
-O_OBJS += cyclades.o
-else
- ifeq ($(CONFIG_CYCLADES),m)
- M_OBJS += cyclades.o
- endif
-endif
-
-ifeq ($(CONFIG_STALLION),y)
-O_OBJS += stallion.o
-else
- ifeq ($(CONFIG_STALLION),m)
- M_OBJS += stallion.o
- endif
-endif
-
-ifeq ($(CONFIG_ISTALLION),y)
-O_OBJS += istallion.o
-else
- ifeq ($(CONFIG_ISTALLION),m)
- M_OBJS += istallion.o
- endif
-endif
-
-ifeq ($(CONFIG_COMPUTONE),y)
-O_OBJS += ip2.o ip2main.o
-else
- ifeq ($(CONFIG_COMPUTONE),m)
- M_OBJS += ip2.o ip2main.o
- endif
-endif
-
-ifeq ($(CONFIG_RISCOM8),y)
-O_OBJS += riscom8.o
-else
- ifeq ($(CONFIG_RISCOM8),m)
- M_OBJS += riscom8.o
- endif
-endif
-
-ifeq ($(CONFIG_ISI),y)
-O_OBJS += isicom.o
-else
- ifeq ($(CONFIG_ISI),m)
- M_OBJS += isicom.o
- endif
-endif
-
-ifeq ($(CONFIG_ESPSERIAL),y)
-O_OBJS += esp.o
-else
- ifeq ($(CONFIG_ESPSERIAL),m)
- M_OBJS += esp.o
- endif
-endif
-
-ifeq ($(CONFIG_SYNCLINK),m)
- M_OBJS += synclink.o
-endif
-
-ifeq ($(CONFIG_N_HDLC),m)
- M_OBJS += n_hdlc.o
-endif
-
-ifeq ($(CONFIG_SPECIALIX),y)
-O_OBJS += specialix.o
-else
- ifeq ($(CONFIG_SPECIALIX),m)
- M_OBJS += specialix.o
- endif
-endif
+obj-$(CONFIG_ROCKETPORT) += rocket.o
+obj-$(CONFIG_MOXA_SMARTIO) += mxser.o
+obj-$(CONFIG_MOXA_INTELLIO) += moxa.o
+obj-$(CONFIG_DIGI) += pcxx.o
+obj-$(CONFIG_DIGIEPCA) += epca.o
+obj-$(CONFIG_CYCLADES) += cyclades.o
+obj-$(CONFIG_STALLION) += stallion.o
+obj-$(CONFIG_ISTALLION) += istallion.o
+obj-$(CONFIG_COMPUTONE) += ip2.o ip2main.o
+obj-$(CONFIG_RISCOM8) += riscom8.o
+obj-$(CONFIG_ISI) += isicom.o
+obj-$(CONFIG_ESPSERIAL) += esp.o
+obj-$(CONFIG_SYNCLINK) += synclink.o
+obj-$(CONFIG_N_HDLC) += n_hdlc.o
+obj-$(CONFIG_SPECIALIX) += specialix.o
ifeq ($(CONFIG_SX),y)
-O_OBJS += sx.o generic_serial.o
-else
- ifeq ($(CONFIG_SX),m)
- M_OBJS += sx.o
- endif
-endif
-
-ifeq ($(CONFIG_ATIXL_BUSMOUSE),y)
-O_OBJS += atixlmouse.o
+obj-y += sx.o generic_serial.o
else
- ifeq ($(CONFIG_ATIXL_BUSMOUSE),m)
- M_OBJS += atixlmouse.o
- endif
+ obj-$(CONFIG_SX) += sx.o
endif
-ifeq ($(CONFIG_LOGIBUSMOUSE),y)
-O_OBJS += logibusmouse.o
-else
- ifeq ($(CONFIG_LOGIBUSMOUSE),m)
- M_OBJS += logibusmouse.o
- endif
-endif
-
-ifeq ($(CONFIG_PRINTER),y)
-O_OBJS += lp.o
-else
- ifeq ($(CONFIG_PRINTER),m)
- M_OBJS += lp.o
- endif
-endif
+obj-$(CONFIG_ATIXL_BUSMOUSE) += atixlmouse.o
+obj-$(CONFIG_LOGIBUSMOUSE) += logibusmouse.o
+obj-$(CONFIG_PRINTER) += lp.o
ifeq ($(CONFIG_JOYSTICK),y)
-O_OBJS += joystick/js.o
+obj-y += joystick/js.o
SUB_DIRS += joystick
MOD_SUB_DIRS += joystick
else
endif
endif
+obj-$(CONFIG_BUSMOUSE) += busmouse.o
ifeq ($(CONFIG_BUSMOUSE),y)
M = y
-OX_OBJS += busmouse.o
else
ifeq ($(CONFIG_BUSMOUSE),m)
MM = m
- MX_OBJS += busmouse.o
- endif
-endif
-
-ifeq ($(CONFIG_DTLK),y)
-O_OBJS += dtlk.o
-else
- ifeq ($(CONFIG_DTLK),m)
- M_OBJS += dtlk.o
- endif
-endif
-
-ifeq ($(CONFIG_R3964),y)
-O_OBJS += n_r3964.o
-else
- ifeq ($(CONFIG_R3964),m)
- M_OBJS += n_r3964.o
- endif
-endif
-
-ifeq ($(CONFIG_APPLICOM),y)
-O_OBJS += applicom.o
-else
- ifeq ($(CONFIG_APPLICOM),m)
- M_OBJS += applicom.o
endif
endif
-ifeq ($(CONFIG_MS_BUSMOUSE),y)
-O_OBJS += msbusmouse.o
-else
- ifeq ($(CONFIG_MS_BUSMOUSE),m)
- M_OBJS += msbusmouse.o
- endif
-endif
-
-ifeq ($(CONFIG_82C710_MOUSE),y)
-O_OBJS += qpmouse.o
-else
- ifeq ($(CONFIG_82C710_MOUSE),m)
- M_OBJS += qpmouse.o
- endif
-endif
-
-ifeq ($(CONFIG_SOFT_WATCHDOG),y)
-O_OBJS += softdog.o
-else
- ifeq ($(CONFIG_SOFT_WATCHDOG),m)
- M_OBJS += softdog.o
- endif
+obj-$(CONFIG_DTLK) += dtlk.o
+obj-$(CONFIG_R3964) += n_r3964.o
+obj-$(CONFIG_APPLICOM) += applicom.o
+obj-$(CONFIG_MS_BUSMOUSE) += msbusmouse.o
+obj-$(CONFIG_82C710_MOUSE) += qpmouse.o
+obj-$(CONFIG_SOFT_WATCHDOG) += softdog.o
+obj-$(CONFIG_PCWATCHDOG) += pcwd.o
+obj-$(CONFIG_ACQUIRE_WDT) += acquirewdt.o
+obj-$(CONFIG_MIXCOMWD) += mixcomwd.o
+obj-$(CONFIG_AMIGAMOUSE) += amigamouse.o
+obj-$(CONFIG_ATARIMOUSE) += atarimouse.o
+obj-$(CONFIG_ADBMOUSE) += adbmouse.o
+obj-$(CONFIG_PC110_PAD) += pc110pad.o
+obj-$(CONFIG_WDT) += wdt.o
+obj-$(CONFIG_RTC) += rtc.o
+ifeq ($(CONFIG_PPC),)
+ obj-$(CONFIG_NVRAM) += nvram.o
endif
-ifeq ($(CONFIG_PCWATCHDOG),y)
-O_OBJS += pcwd.o
-else
- ifeq ($(CONFIG_PCWATCHDOG),m)
- M_OBJS += pcwd.o
- endif
-endif
-
-ifeq ($(CONFIG_ACQUIRE_WDT),y)
-O_OBJS += acquirewdt.o
-else
- ifeq ($(CONFIG_ACQUIRE_WDT),m)
- M_OBJS += acquirewdt.o
- endif
-endif
-
-ifeq ($(CONFIG_MIXCOMWD),y)
-O_OBJS += mixcomwd.o
-else
- ifeq ($(CONFIG_MIXCOMWD),m)
- M_OBJS += mixcomwd.o
- endif
-endif
-
-ifeq ($(CONFIG_AMIGAMOUSE),y)
-O_OBJS += amigamouse.o
-else
- ifeq ($(CONFIG_AMIGAMOUSE),m)
- M_OBJS += amigamouse.o
- endif
-endif
-
-ifeq ($(CONFIG_ATARIMOUSE),y)
-O_OBJS += atarimouse.o
-else
- ifeq ($(CONFIG_ATARIMOUSE),m)
- M_OBJS += atarimouse.o
- endif
-endif
-
-ifeq ($(CONFIG_ADBMOUSE),y)
-O_OBJS += adbmouse.o
-else
- ifeq ($(CONFIG_ADBMOUSE),m)
- M_OBJS += adbmouse.o
- endif
-endif
-
-ifeq ($(CONFIG_PC110_PAD),y)
-O_OBJS += pc110pad.o
-else
- ifeq ($(CONFIG_PC110_PAD),m)
- M_OBJS += pc110pad.o
- endif
-endif
-
-ifeq ($(CONFIG_WDT),y)
-O_OBJS += wdt.o
-else
- ifeq ($(CONFIG_WDT),m)
- M_OBJS += wdt.o
- endif
-endif
-
-ifeq ($(CONFIG_RTC),y)
-O_OBJS += rtc.o
-else
- ifeq ($(CONFIG_RTC),m)
- M_OBJS += rtc.o
- endif
-endif
-
-ifeq ($(CONFIG_NVRAM),y)
- ifeq ($(CONFIG_PPC),)
- O_OBJS += nvram.o
- endif
-else
- ifeq ($(CONFIG_NVRAM),m)
- ifeq ($(CONFIG_PPC),)
- M_OBJS += nvram.o
- endif
- endif
-endif
-
-ifeq ($(CONFIG_VIDEO_DEV),y)
-OX_OBJS += videodev.o
-else
- ifeq ($(CONFIG_VIDEO_DEV),m)
- MX_OBJS += videodev.o
- endif
-endif
+obj-$(CONFIG_VIDEO_DEV) += videodev.o
#
# for external dependencies in arm/config.in and video/config.in
L_I2C=y
else
ifeq ($(CONFIG_BUS_I2C),m)
- M_I2C=y
+ L_I2C=m
endif
endif
+obj-$(CONFIG_VIDEO_BT848) += bttv.o msp3400.o tda8425.o tda9855.o tea6300.o
ifeq ($(CONFIG_VIDEO_BT848),y)
-O_OBJS += bttv.o msp3400.o tda8425.o tda9855.o tea6300.o
L_I2C=y
L_TUNERS=y
else
ifeq ($(CONFIG_VIDEO_BT848),m)
- M_OBJS += bttv.o msp3400.o tda8425.o tda9855.o tea6300.o
- M_I2C=y
- M_TUNERS=y
+ L_I2C=m
+ L_TUNERS=m
endif
endif
+obj-$(CONFIG_VIDEO_ZR36120) += zoran.o
ifeq ($(CONFIG_VIDEO_ZR36120),y)
-O_OBJS += zoran.o
L_I2C=y
L_TUNERS=y
L_DECODERS=y
else
ifeq ($(CONFIG_VIDEO_ZR36120),m)
- M_OBJS += zoran.o
- M_I2C=y
- M_TUNERS=y
- M_DECODERS=y
+ L_I2C=m
+ L_TUNERS=m
+ L_DECODERS=m
endif
endif
+obj-$(CONFIG_I2C_PARPORT) += i2c-parport.o
ifeq ($(CONFIG_I2C_PARPORT),y)
-O_OBJS += i2c-parport.o
L_I2C = y
else
ifeq ($(CONFIG_I2C_PARPORT),m)
- M_OBJS += i2c-parport.o
	L_I2C = m
endif
endif
+obj-$(CONFIG_VIDEO_SAA5249) += saa5249.o
ifeq ($(CONFIG_VIDEO_SAA5249),y)
-O_OBJS += saa5249.o
L_I2C=y
else
ifeq ($(CONFIG_VIDEO_SAA5249),m)
- M_OBJS += saa5249.o
- M_I2C=y
- endif
-endif
-
-ifeq ($(CONFIG_VIDEO_CQCAM),y)
-O_OBJS += c-qcam.o
-else
- ifeq ($(CONFIG_VIDEO_CQCAM),m)
- M_OBJS += c-qcam.o
- endif
-endif
-
-ifeq ($(CONFIG_VIDEO_BWQCAM),y)
-O_OBJS += bw-qcam.o
-else
- ifeq ($(CONFIG_VIDEO_BWQCAM),m)
- M_OBJS += bw-qcam.o
+ L_I2C=m
endif
endif
+obj-$(CONFIG_VIDEO_CQCAM) += c-qcam.o
+obj-$(CONFIG_VIDEO_BWQCAM) += bw-qcam.o
+obj-$(CONFIG_VIDEO_ZORAN) += buz.o
ifeq ($(CONFIG_VIDEO_ZORAN),y)
-O_OBJS += buz.o
L_I2C=y
L_DECODERS=y
else
ifeq ($(CONFIG_VIDEO_ZORAN),m)
- M_OBJS += buz.o
- M_I2C=y
- M_DECODERS=y
- endif
-endif
-
-ifeq ($(CONFIG_VIDEO_LML33),y)
-O_OBJS += bt856.o bt819.o
-else
- ifeq ($(CONFIG_VIDEO_LML33),m)
- M_OBJS += bt856.o bt819.o
- endif
-endif
-
-ifeq ($(CONFIG_VIDEO_PMS),y)
-O_OBJS += pms.o
-else
- ifeq ($(CONFIG_VIDEO_PMS),m)
- M_OBJS += pms.o
- endif
-endif
-
-ifeq ($(CONFIG_VIDEO_PLANB),y)
-O_OBJS += planb.o
-else
- ifeq ($(CONFIG_VIDEO_PLANB),m)
- M_OBJS += planb.o
- endif
-endif
-
-ifeq ($(CONFIG_VIDEO_VINO),y)
-O_OBJS += vino.o
-else
- ifeq ($(CONFIG_VIDEO_VINO),m)
- M_OBJS += vino.o
- endif
-endif
-
-ifeq ($(CONFIG_VIDEO_STRADIS),y)
-O_OBJS += vino.o
-else
- ifeq ($(CONFIG_VIDEO_STRADIS),m)
- M_OBJS += stradis.o
- endif
-endif
-
-ifeq ($(CONFIG_RADIO_AZTECH),y)
-O_OBJS += radio-aztech.o
-else
- ifeq ($(CONFIG_RADIO_AZTECH),m)
- M_OBJS += radio-aztech.o
- endif
-endif
-
-ifeq ($(CONFIG_RADIO_RTRACK2),y)
-O_OBJS += radio-rtrack2.o
-else
- ifeq ($(CONFIG_RADIO_RTRACK2),m)
- M_OBJS += radio-rtrack2.o
- endif
-endif
-
-ifeq ($(CONFIG_RADIO_SF16FMI),y)
-O_OBJS += radio-sf16fmi.o
-else
- ifeq ($(CONFIG_RADIO_SF16FMI),m)
- M_OBJS += radio-sf16fmi.o
- endif
-endif
-
-ifeq ($(CONFIG_RADIO_CADET),y)
-O_OBJS += radio-cadet.o
-else
- ifeq ($(CONFIG_RADIO_CADET),m)
- M_OBJS += radio-cadet.o
- endif
-endif
-
-ifeq ($(CONFIG_RADIO_TYPHOON),y)
-O_OBJS += radio-typhoon.o
-else
- ifeq ($(CONFIG_RADIO_TYPHOON),m)
- M_OBJS += radio-typhoon.o
- endif
-endif
-
-ifeq ($(CONFIG_RADIO_TERRATEC),y)
-O_OBJS += radio-terratec.o
-else
- ifeq ($(CONFIG_RADIO_TERRATEC),m)
- M_OBJS += radio-terratec.o
- endif
-endif
-
-ifeq ($(CONFIG_RADIO_RTRACK),y)
-O_OBJS += radio-aimslab.o
-else
- ifeq ($(CONFIG_RADIO_RTRACK),m)
- M_OBJS += radio-aimslab.o
- endif
-endif
-
-ifeq ($(CONFIG_RADIO_ZOLTRIX),y)
-O_OBJS += radio-zoltrix.o
-else
- ifeq ($(CONFIG_RADIO_ZOLTRIX),m)
- M_OBJS += radio-zoltrix.o
- endif
-endif
-
-ifeq ($(CONFIG_RADIO_MIROPCM20),y)
-O_OBJS += radio-miropcm20.o
-else
- ifeq ($(CONFIG_RADIO_MIROPCM20),m)
- M_OBJS += radio-miropcm20.o
- endif
-endif
-
-ifeq ($(CONFIG_RADIO_GEMTEK),y)
-O_OBJS += radio-gemtek.o
-else
- ifeq ($(CONFIG_RADIO_GEMTEK),m)
- M_OBJS += radio-gemtek.o
- endif
-endif
-
-ifeq ($(CONFIG_RADIO_TRUST),y)
-O_OBJS += radio-trust.o
-else
- ifeq ($(CONFIG_RADIO_TRUST),m)
- M_OBJS += radio-trust.o
- endif
-endif
-
-ifeq ($(CONFIG_QIC02_TAPE),y)
-O_OBJS += tpqic02.o
-else
- ifeq ($(CONFIG_QIC02_TAPE),m)
- M_OBJS += tpqic02.o
- endif
-endif
+ L_I2C=m
+ L_DECODERS=m
+ endif
+endif
+
+obj-$(CONFIG_VIDEO_LML33) += bt856.o bt819.o
+obj-$(CONFIG_VIDEO_PMS) += pms.o
+obj-$(CONFIG_VIDEO_PLANB) += planb.o
+obj-$(CONFIG_VIDEO_VINO) += vino.o
+obj-$(CONFIG_VIDEO_STRADIS) += stradis.o
+obj-$(CONFIG_RADIO_AZTECH) += radio-aztech.o
+obj-$(CONFIG_RADIO_RTRACK2) += radio-rtrack2.o
+obj-$(CONFIG_RADIO_SF16FMI) += radio-sf16fmi.o
+obj-$(CONFIG_RADIO_CADET) += radio-cadet.o
+obj-$(CONFIG_RADIO_TYPHOON) += radio-typhoon.o
+obj-$(CONFIG_RADIO_TERRATEC) += radio-terratec.o
+obj-$(CONFIG_RADIO_RTRACK) += radio-aimslab.o
+obj-$(CONFIG_RADIO_ZOLTRIX) += radio-zoltrix.o
+obj-$(CONFIG_RADIO_MIROPCM20) += radio-miropcm20.o
+obj-$(CONFIG_RADIO_GEMTEK) += radio-gemtek.o
+obj-$(CONFIG_RADIO_TRUST) += radio-trust.o
+obj-$(CONFIG_QIC02_TAPE) += tpqic02.o
ifeq ($(CONFIG_FTAPE),y)
-O_OBJS += ftape/ftape.o
+obj-y += ftape/ftape.o
SUB_DIRS += ftape
ifneq ($(CONFIG_ZFTAPE),n)
MOD_SUB_DIRS += ftape
endif
endif
-ifdef CONFIG_H8
-OX_OBJS += h8.o
-endif
-
-ifeq ($(CONFIG_PPDEV),y)
-O_OBJS += ppdev.o
-else
- ifeq ($(CONFIG_PPDEV),m)
- M_OBJS += ppdev.o
- endif
-endif
-
+obj-$(CONFIG_H8) += h8.o
+obj-$(CONFIG_PPDEV) += ppdev.o
# set when a framegrabber supports external tuners
-ifeq ($(L_TUNERS),y)
-O_OBJS += tuner.o
-else
- ifeq ($(M_TUNERS),y)
- M_OBJS += tuner.o
- endif
-endif
+obj-$(L_TUNERS) += tuner.o
# set when a framegrabber supports external decoders
-ifeq ($(L_DECODERS),y)
-O_OBJS += saa7110.o saa7111.o saa7185.o
-else
- ifeq ($(M_DECODERS),y)
- M_OBJS += saa7110.o saa7111.o saa7185.o
- endif
-endif
+obj-$(L_DECODERS) += saa7110.o saa7111.o saa7185.o
# set when a framegrabber implements i2c support
-ifeq ($(L_I2C),y)
-OX_OBJS += i2c-old.o
-else
- ifeq ($(M_I2C),y)
- MX_OBJS += i2c-old.o
- endif
-endif
+obj-$(L_I2C) += i2c-old.o
ifeq ($(CONFIG_DRM),y)
SUB_DIRS += drm
endif
endif
+# Extract lists of the multi-part drivers.
+# The 'int-*' lists are the intermediate files used to build the multi's.
+
+multi-y := $(filter $(list-multi), $(obj-y))
+multi-m := $(filter $(list-multi), $(obj-m))
+int-y := $(sort $(foreach m, $(multi-y), $($(basename $(m))-objs)))
+int-m := $(sort $(foreach m, $(multi-m), $($(basename $(m))-objs)))
+
+# Files that are both resident and modular: remove from modular.
+
+obj-m := $(filter-out $(obj-y), $(obj-m))
+int-m := $(filter-out $(int-y), $(int-m))
+
+# Take multi-part drivers out of obj-y and put components in.
+
+obj-y := $(filter-out $(list-multi), $(obj-y)) $(int-y)
+
+# Translate to Rules.make lists.
+
+O_OBJS := $(filter-out $(export-objs), $(obj-y))
+OX_OBJS := $(filter $(export-objs), $(obj-y))
+M_OBJS := $(sort $(filter-out $(export-objs), $(obj-m)))
+MX_OBJS := $(sort $(filter $(export-objs), $(obj-m)))
+
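+# A hypothetical multi-part driver, to show how the lists above fit
+# together (foo.* are illustrative names, not objects in this dir):
+#
+#	list-multi		+= foo.o
+#	foo-objs		:= foo_core.o foo_ops.o
+#	obj-$(CONFIG_FOO)	+= foo.o
+#
+# With CONFIG_FOO=y, foo.o is filtered out of obj-y and replaced by its
+# components via int-y; with CONFIG_FOO=m it stays in obj-m and the
+# components are gathered in int-m for the module link.
+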
include $(TOPDIR)/Rules.make
fastdep:
#endif
}
-static unsigned char kbd_exists = 1;
-
static inline void handle_keyboard_event(unsigned char scancode)
{
- kbd_exists = 1;
#ifdef CONFIG_VT
if (do_acknowledge(scancode))
handle_scancode(scancode, !(scancode & 0x80));
{
int retries = 3;
- if (!kbd_exists) return 0;
-
do {
unsigned long timeout = KBD_TIMEOUT;
#ifdef KBD_REPORT_TIMEOUTS
printk(KERN_WARNING "keyboard: Timeout - AT keyboard not present?\n");
#endif
- kbd_exists = 0;
return 0;
}
}
#ifdef KBD_REPORT_TIMEOUTS
printk(KERN_WARNING "keyboard: Too many NACKs -- noisy kbd cable?\n");
#endif
- kbd_exists = 0;
return 0;
}
* 1.09 Nikita Schmidt: epoch support and some Alpha cleanup.
* 1.09a Pete Zaitcev: Sun SPARC
* 1.09b Jeff Garzik: Modularize, init cleanup
- *
+ * 1.09c Jeff Garzik: SMP cleanup
+ * 1.10 Paul Barton-Davis: add support for async I/O
*/
-#define RTC_VERSION "1.09b"
+#define RTC_VERSION "1.10"
#define RTC_IRQ 8 /* Can't see this changing soon. */
#define RTC_IO_EXTENT 0x10 /* Only really two ports, but... */
* ioctls.
*/
+static struct fasync_struct *rtc_async_queue;
+
static DECLARE_WAIT_QUEUE_HEAD(rtc_wait);
static spinlock_t rtc_lock = SPIN_LOCK_UNLOCKED;
wake_up_interruptible(&rtc_wait);
+ if (rtc_async_queue)
+ kill_fasync (rtc_async_queue, SIGIO, POLL_IN);
+
if (atomic_read(&rtc_status) & RTC_TIMER_ON)
mod_timer(&rtc_irq_timer, jiffies + HZ/rtc_freq + 2*HZ/100);
}
return 0;
}
+static int rtc_fasync (int fd, struct file *filp, int on)
+{
+ return fasync_helper (fd, filp, on, &rtc_async_queue);
+}
+
static int rtc_release(struct inode *inode, struct file *file)
{
/*
del_timer(&rtc_irq_timer);
}
+ if (file->f_flags & FASYNC) {
+ rtc_fasync (-1, file, 0);
+ }
+
MOD_DEC_USE_COUNT;
spin_lock_irqsave (&rtc_lock, flags);
*/
static struct file_operations rtc_fops = {
- rtc_llseek,
- rtc_read,
- NULL, /* No write */
- NULL, /* No readdir */
- rtc_poll,
- rtc_ioctl,
- NULL, /* No mmap */
- rtc_open,
- NULL, /* flush */
- rtc_release
+ llseek: rtc_llseek,
+ read: rtc_read,
+ poll: rtc_poll,
+ ioctl: rtc_ioctl,
+ open: rtc_open,
+ release: rtc_release,
+ fasync: rtc_fasync,
};
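+
+/*
+ * Note the switch to GCC tagged initializers above: unused slots
+ * (write, readdir, mmap, flush) now default to NULL.  A minimal
+ * user-space sketch of the new async path (illustrative only, error
+ * handling omitted; RTC_UIE_ON is the existing update-interrupt ioctl):
+ */
+#if 0	/* illustration only -- user-space, not driver code */
+#include <fcntl.h>
+#include <signal.h>
+#include <unistd.h>
+#include <sys/ioctl.h>
+#include <linux/rtc.h>
+
+static void on_sigio(int sig) { write(1, "tick\n", 5); }
+
+int main(void)
+{
+	int fd = open("/dev/rtc", O_RDONLY);
+
+	signal(SIGIO, on_sigio);
+	fcntl(fd, F_SETOWN, getpid());		/* route SIGIO to this pid */
+	fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | FASYNC);  /* -> rtc_fasync() */
+	ioctl(fd, RTC_UIE_ON, 0);		/* one interrupt per second */
+	for (;;)
+		pause();	/* rtc_interrupt -> kill_fasync -> SIGIO */
+	return 0;
+}
+#endif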
static struct miscdevice rtc_dev=
ifeq ($(CONFIG_PMAC_PBOOK),y)
L_OBJS += mediabay.o
+else
+ ifeq ($(CONFIG_MAC_FLOPPY),y)
+ L_OBJS += mediabay.o
+ endif
endif
ifeq ($(CONFIG_MAC_SERIAL),y)
ALL_SUB_DIRS := $(SUB_DIRS) fc hamradio irda pcmcia tokenring wan sk98lin arcnet
O_TARGET := net.o
-O_OBJS :=
-M_OBJS :=
MOD_LIST_NAME := NET_MODULES
# All of the (potential) objects that export symbols.
extern int apne_probe(struct net_device *);
extern int bionet_probe(struct net_device *);
extern int pamsnet_probe(struct net_device *);
-extern int tlan_probe(struct net_device *);
extern int cs89x0_probe(struct net_device *dev);
extern int ethertap_probe(struct net_device *dev);
extern int ether1_probe (struct net_device *dev);
#ifdef CONFIG_DE4X5 /* DEC DE425, DE434, DE435 adapters */
{de4x5_probe, 0},
#endif
-#ifdef CONFIG_TLAN
- {tlan_probe, 0},
-#endif
#ifdef CONFIG_ULTRA32
{ultra32_probe, 0},
#endif
extern int sparc_lance_probe(void);
extern int starfire_probe(void);
extern int tc59x_probe(void);
+extern int tlan_probe(void);
extern int tulip_probe(void);
extern int via_rhine_probe(void);
extern int yellowfin_probe(void);
#ifdef CONFIG_EEXPRESS_PRO100 /* Intel EtherExpress Pro/100 */
{eepro100_probe, 0},
#endif
+#ifdef CONFIG_TLAN
+ {tlan_probe, 0},
+#endif
#ifdef CONFIG_DEC_ELCP
{tulip_probe, 0},
#endif
* overwrite timers like TLAN_TIMER_ACTIVITY
* Patch from John Cagle <john.cagle@compaq.com>.
* - Fixed a few compiler warnings.
- *
+ *
+ * v1.3 Feb 04, 2000 - Fixed the remaining HZ issues.
+ * - Removed call to pci_present().
+ * - Removed SA_INTERRUPT flag from irq handler.
+ *			  - Added __init and __initdata to reduce resident
+ * code size.
+ * - Driver now uses module_init/module_exit.
+ * - Rewrote init_module and tlan_probe to
+ * share a lot more code. We now use tlan_probe
+ *			    with both the built-in and module driver.
+ * - Driver ported to new net API.
+ *			  - tlan.txt has been reworked to reflect the
+ *			    current driver (almost).
+ * - Other minor stuff
*
*******************************************************************************/
#include "tlan.h"
+#include <linux/init.h>
#include <linux/ioport.h>
#include <linux/pci.h>
#include <linux/etherdevice.h>
typedef u32 (TLanIntVectorFunc)( struct net_device *, u16 );
-#ifdef MODULE
static struct net_device *TLanDevices = NULL;
static int TLanDevicesInstalled = 0;
+/* Force speed, duplex and aui settings */
static int aui = 0;
-static int sa_int = 0;
-static int duplex = 0;
+static int duplex = 0;
static int speed = 0;
+#ifdef MODULE
+
MODULE_PARM(aui, "i");
-MODULE_PARM(sa_int, "i");
MODULE_PARM(duplex, "i");
MODULE_PARM(speed, "i");
MODULE_PARM(debug, "i");
#endif
-
+/* Turn on debugging. See linux/Documentation/networking/tlan.txt for details */
static int debug = 0;
+
static int bbuf = 0;
static u8 *TLanPadBuffer;
static char TLanSignature[] = "TLAN";
static int TLanVersionMajor = 1;
-static int TLanVersionMinor = 2;
+static int TLanVersionMinor = 3;
-static TLanAdapterEntry TLanAdapterList[] = {
+static TLanAdapterEntry TLanAdapterList[] __initdata = {
{ PCI_VENDOR_ID_COMPAQ,
PCI_DEVICE_ID_NETELLIGENT_10,
"Compaq Netelligent 10 T PCI UTP",
static int TLan_PciProbe( u8 *, u8 *, u8 *, u32 *, u32 * );
-static int TLan_Init( struct net_device * );
+static int TLan_Init( struct net_device * );
static int TLan_Open(struct net_device *dev);
static int TLan_StartTx(struct sk_buff *, struct net_device *);
static void TLan_HandleInterrupt(int, void *, struct pt_regs *);
*****************************************************************************/
-#ifdef MODULE
-
- /***************************************************************
- * init_module
- *
- * Returns:
- * 0 if module installed ok, non-zero if not.
- * Parms:
- * None
- *
- * This function begins the setup of the driver creating a
- * pad buffer, finding all TLAN devices (matching
- * TLanAdapterList entries), and creating and initializing a
- * device structure for each adapter.
- *
- **************************************************************/
-
-extern int init_module(void)
-{
- TLanPrivateInfo *priv;
- struct net_device *dev;
- size_t dev_size;
- u8 dfn;
- u32 index;
- int failed;
- int found;
- u32 io_base;
- u8 irq;
- u8 rev;
-
- printk( "TLAN driver, v%d.%d, (C) 1997-8 Caldera, Inc.\n",
- TLanVersionMajor,
- TLanVersionMinor
- );
- TLanPadBuffer = (u8 *) kmalloc( TLAN_MIN_FRAME_SIZE,
- ( GFP_KERNEL | GFP_DMA )
- );
- if ( TLanPadBuffer == NULL ) {
- printk( "TLAN: Could not allocate memory for pad buffer.\n" );
- return -ENOMEM;
- }
-
- memset( TLanPadBuffer, 0, TLAN_MIN_FRAME_SIZE );
-
- dev_size = sizeof(struct net_device) + sizeof(TLanPrivateInfo);
-
- while ( ( found = TLan_PciProbe( &dfn, &irq, &rev, &io_base, &index ) ) ) {
- dev = (struct net_device *) kmalloc( dev_size, GFP_KERNEL );
- if ( dev == NULL ) {
- printk( "TLAN: Could not allocate memory for device.\n" );
- continue;
- }
- memset( dev, 0, dev_size );
-
- dev->priv = priv = ( (void *) dev ) + sizeof(struct net_device);
- dev->name = priv->devName;
- strcpy( priv->devName, " " );
- dev->base_addr = io_base;
- dev->irq = irq;
- dev->init = TLan_Init;
-
- priv->adapter = &TLanAdapterList[index];
- priv->adapterRev = rev;
- priv->aui = aui;
- if ( ( duplex != 1 ) && ( duplex != 2 ) ) {
- duplex = 0;
- }
- priv->duplex = duplex;
- if ( ( speed != 10 ) && ( speed != 100 ) ) {
- speed = 0;
- }
- priv->speed = speed;
- priv->sa_int = sa_int;
- priv->debug = debug;
-
- spin_lock_init(&priv->lock);
-
- ether_setup( dev );
-
- failed = register_netdev( dev );
-
- if ( failed ) {
- printk( "TLAN: Could not register device.\n" );
- kfree( dev );
- } else {
- priv->nextDevice = TLanDevices;
- TLanDevices = dev;
- TLanDevicesInstalled++;
- printk("TLAN: %s irq=%2d io=%04x, %s, Rev. %d\n",
- dev->name,
- (int) dev->irq,
- (int) dev->base_addr,
- priv->adapter->deviceLabel,
- priv->adapterRev );
- }
- }
-
- /* printk( "TLAN: Found %d device(s).\n", TLanDevicesInstalled ); */
-
- return ( ( TLanDevicesInstalled > 0 ) ? 0 : -ENODEV );
-
-} /* init_module */
-
-
-
/***************************************************************
- * cleanup_module
+ * tlan_exit
*
* Returns:
* Nothing
*
**************************************************************/
-extern void cleanup_module(void)
+
+void __exit tlan_exit(void)
{
struct net_device *dev;
TLanPrivateInfo *priv;
}
kfree( TLanPadBuffer );
-} /* cleanup_module */
-
-
-#else /* MODULE */
-
+}
-
- /***************************************************************
+/*
+ ***************************************************************
* tlan_probe
*
* Returns:
* 0 on success, error code on error
- * Parms:
- * dev device struct to use if adapter is
- * found.
+ * Parms:
+ * none
*
* The name is lower case to fit in with all the rest of
- * the netcard_probe names. This function looks for a/
+ * the netcard_probe names. This function looks for
* another TLan based adapter, setting it up with the
- * provided device struct if one is found.
+ * allocated device struct if one is found.
+ * tlan_probe has been ported to the new net API and
+ * now allocates its own device structure. This function
+ * is also used by modules.
*
**************************************************************/
-
-extern int tlan_probe( struct net_device *dev )
+
+int __init tlan_probe(void)
{
- TLanPrivateInfo *priv;
- static int pad_allocated = 0;
- int found;
- u8 dfn, irq, rev;
- u32 io_base, index;
- found = TLan_PciProbe( &dfn, &irq, &rev, &io_base, &index );
+ struct net_device *dev;
+ TLanPrivateInfo *priv;
+ static int pad_allocated = 0;
+ u8 dfn, irq, rev;
+ u32 io_base, index;
+ int found;
+
+ printk(KERN_INFO "ThunderLAN driver v%d.%d:\n",
+ TLanVersionMajor,
+ TLanVersionMinor);
- if ( ! found ) {
- return -ENODEV;
- }
+ TLanPadBuffer = (u8 *) kmalloc(TLAN_MIN_FRAME_SIZE,
+ (GFP_KERNEL | GFP_DMA));
- dev->priv = kmalloc( sizeof(TLanPrivateInfo), GFP_KERNEL );
-
- if ( dev->priv == NULL ) {
- printk( "TLAN: Could not allocate memory for device.\n" );
- return -ENOMEM;
+ if (TLanPadBuffer == NULL) {
+ printk(KERN_ERR "TLAN: Could not allocate memory for pad buffer.\n");
+ return -ENOMEM;
}
- memset( dev->priv, 0, sizeof(TLanPrivateInfo) );
+ memset(TLanPadBuffer, 0, TLAN_MIN_FRAME_SIZE);
- if ( ! pad_allocated ) {
- TLanPadBuffer = (u8 *) kmalloc( TLAN_MIN_FRAME_SIZE,
-// ( GFP_KERNEL | GFP_DMA )
- ( GFP_KERNEL )
- );
- if ( TLanPadBuffer == NULL ) {
- printk( "TLAN: Could not allocate memory for padding.\n" );
- kfree( dev->priv );
+ while((found = TLan_PciProbe( &dfn, &irq, &rev, &io_base, &index))) {
+ dev = init_etherdev(NULL, sizeof(TLanPrivateInfo));
+ if (dev == NULL) {
+ printk(KERN_ERR "TLAN: Could not allocate memory for device.\n");
return -ENOMEM;
- } else {
- pad_allocated = 1;
- memset( TLanPadBuffer, 0, TLAN_MIN_FRAME_SIZE );
}
- }
-
- priv = (TLanPrivateInfo *) dev->priv;
-
- dev = init_etherdev( dev, sizeof(TLanPrivateInfo) );
-
- dev->base_addr = io_base;
- dev->irq = irq;
-
-
- priv->adapter = &TLanAdapterList[index];
- priv->adapterRev = rev;
- priv->aui = dev->mem_start & 0x01;
- priv->duplex = ( ( dev->mem_start & 0x0C ) == 0x0C ) ? 0 : ( dev->mem_start & 0x0C ) >> 2;
- priv->speed = ( ( dev->mem_start & 0x30 ) == 0x30 ) ? 0 : ( dev->mem_start & 0x30 ) >> 4;
- if ( priv->speed == 0x1 ) {
- priv->speed = TLAN_SPEED_10;
- } else if ( priv->speed == 0x2 ) {
- priv->speed = TLAN_SPEED_100;
- }
- priv->sa_int = dev->mem_start & 0x02;
- priv->debug = dev->mem_end;
- spin_lock_init(&priv->lock);
+	priv = dev->priv;
+	if (priv == NULL) {
+		dev->priv = priv = kmalloc(sizeof(TLanPrivateInfo), GFP_KERNEL);
+		if (priv == NULL) {
+			printk(KERN_ERR "TLAN: Could not allocate memory for device.\n");
+			unregister_netdev(dev);
+			kfree(dev);
+			continue;
+		}
+	}
+	memset(priv, 0, sizeof(TLanPrivateInfo));
+
+ pad_allocated = 1;
+
+ dev->base_addr = io_base;
+ dev->irq = irq;
+ priv->adapter = &TLanAdapterList[index];
+ priv->adapterRev = rev;
+ priv->aui = aui;
+
+ if ( ( duplex != 1 ) && ( duplex != 2 ) )
+ duplex = 0;
+ priv->duplex = duplex;
- printk("TLAN %d.%d: %s irq=%2d io=%04x, %s, Rev. %d\n",
- TLanVersionMajor,
- TLanVersionMinor,
- dev->name,
- (int) irq,
- io_base,
- priv->adapter->deviceLabel,
- priv->adapterRev );
+ if ( ( speed != 10 ) && ( speed != 100 ) )
+ speed = 0;
- TLan_Init( dev );
+ priv->speed = speed;
+ priv->debug = debug;
+ spin_lock_init(&priv->lock);
+
+ if (TLan_Init(dev)) {
+ printk(KERN_ERR "TLAN: Could not register device.\n");
+ unregister_netdev(dev);
+ kfree(dev);
+ } else {
- return 0;
-
-} /* tlan_probe */
-
+ TLanDevicesInstalled++;
+ priv->nextDevice = TLanDevices;
+ TLanDevices = dev;
+ printk(KERN_INFO "TLAN: %s irq=%2d, io=%04x, %s, Rev. %d\n",
+ dev->name,
+ (int) dev->irq,
+ (int) dev->base_addr,
+ priv->adapter->deviceLabel,
+ priv->adapterRev);
+ }
-#endif /* MODULE */
+ }
+
+ printk(KERN_INFO "TLAN: %d device(s) installed\n", TLanDevicesInstalled);
+
+ return ((TLanDevicesInstalled > 0) ? 0 : -ENODEV);
+}
+/* Module loading/unloading */
+module_init(tlan_probe);
+module_exit(tlan_exit);
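+
+/*
+ * The generic pattern used above: one __init probe routine shared by
+ * the built-in and modular builds, one __exit teardown discarded for
+ * built-in kernels.  A minimal sketch with placeholder names:
+ */
+#if 0	/* illustration only */
+static int __init foo_probe(void)
+{
+	/* locate hardware; this becomes init_module() when modular */
+	return 0;
+}
+
+static void __exit foo_exit(void)
+{
+	/* undo foo_probe(); never called for a built-in driver */
+}
+
+module_init(foo_probe);
+module_exit(foo_exit);
+#endif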
int reg;
- if ( ! pci_present() ) {
- printk( "TLAN: PCI Bios not present.\n" );
- return 0;
- }
-
for (; TLanAdapterList[dl_index].vendorId != 0; dl_index++) {
pdev = pci_find_device(
TLAN_DBG(
TLAN_DEBUG_GNRL,
- "TLAN: found: Vendor Id = 0x%hx, Device Id = 0x%hx\n",
+ "found: Vendor Id = 0x%hx, Device Id = 0x%hx\n",
TLanAdapterList[dl_index].vendorId,
TLanAdapterList[dl_index].deviceId
);
pci_read_config_dword( pdev, reg, pci_io_base);
if ((pci_command & PCI_COMMAND_IO) && (*pci_io_base & 0x3)) {
*pci_io_base &= PCI_BASE_ADDRESS_IO_MASK;
- TLAN_DBG( TLAN_DEBUG_GNRL, "TLAN: IO mapping is available at %x.\n", *pci_io_base);
+ TLAN_DBG( TLAN_DEBUG_GNRL, "IO mapping is available at %x.\n", *pci_io_base);
break;
} else {
*pci_io_base = 0;
}
if ( *pci_io_base == 0 )
- printk("TLAN: IO mapping not available, ignoring device.\n");
+ printk(KERN_INFO "TLAN: IO mapping not available, ignoring device.\n");
pci_set_master(pdev);
*
**************************************************************/
-int TLan_Init( struct net_device *dev )
+static int TLan_Init( struct net_device *dev )
{
int dma_size;
int err;
TLanPrivateInfo *priv;
priv = (TLanPrivateInfo *) dev->priv;
-
err = check_region( dev->base_addr, 0x10 );
if ( err ) {
- printk( "TLAN: %s: Io port region 0x%lx size 0x%x in use.\n",
+ printk(KERN_ERR "TLAN: %s: Io port region 0x%lx size 0x%x in use.\n",
dev->name,
dev->base_addr,
0x10 );
return -EIO;
}
+
request_region( dev->base_addr, 0x10, TLanSignature );
-
+
if ( bbuf ) {
dma_size = ( TLAN_NUM_RX_LISTS + TLAN_NUM_TX_LISTS )
* ( sizeof(TLanList) + TLAN_MAX_FRAME_SIZE );
dma_size = ( TLAN_NUM_RX_LISTS + TLAN_NUM_TX_LISTS )
* ( sizeof(TLanList) );
}
-
- priv->dmaStorage = kmalloc( dma_size, GFP_KERNEL | GFP_DMA );
+ priv->dmaStorage = kmalloc(dma_size, GFP_KERNEL | GFP_DMA);
if ( priv->dmaStorage == NULL ) {
- printk( "TLAN: Could not allocate lists and buffers for %s.\n",
+ printk(KERN_ERR "TLAN: Could not allocate lists and buffers for %s.\n",
dev->name );
return -ENOMEM;
}
priv->rxList = (TLanList *)
( ( ( (u32) priv->dmaStorage ) + 7 ) & 0xFFFFFFF8 );
priv->txList = priv->rxList + TLAN_NUM_RX_LISTS;
-
if ( bbuf ) {
priv->rxBuffer = (u8 *) ( priv->txList + TLAN_NUM_TX_LISTS );
priv->txBuffer = priv->rxBuffer
(u8) priv->adapter->addrOfs + i,
(u8 *) &dev->dev_addr[i] );
if ( err ) {
- printk( "TLAN: %s: Error reading MAC from eeprom: %d\n",
+ printk(KERN_ERR "TLAN: %s: Error reading MAC from eeprom: %d\n",
dev->name,
err );
}
-
dev->addr_len = 6;
-
+
+ /* Device methods */
dev->open = &TLan_Open;
dev->hard_start_xmit = &TLan_StartTx;
dev->stop = &TLan_Close;
{
TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
int err;
-
+
+ MOD_INC_USE_COUNT;
+
priv->tlanRev = TLan_DioRead8( dev->base_addr, TLAN_DEF_REVISION );
- if ( priv->sa_int ) {
- TLAN_DBG( TLAN_DEBUG_GNRL, "TLAN: Using SA_INTERRUPT\n" );
- err = request_irq( dev->irq, TLan_HandleInterrupt, SA_SHIRQ | SA_INTERRUPT, TLanSignature, dev );
- } else {
- err = request_irq( dev->irq, TLan_HandleInterrupt, SA_SHIRQ, TLanSignature, dev );
- }
+ err = request_irq( dev->irq, TLan_HandleInterrupt, SA_SHIRQ, TLanSignature, dev );
+
if ( err ) {
- printk( "TLAN: Cannot open %s because IRQ %d is already in use.\n", dev->name, dev->irq );
+ printk(KERN_ERR "TLAN: Cannot open %s because IRQ %d is already in use.\n", dev->name, dev->irq );
return -EAGAIN;
}
- MOD_INC_USE_COUNT;
-
dev->tbusy = 0;
dev->interrupt = 0;
dev->start = 1;
TLan_ReadAndClearStats( dev, TLAN_IGNORE );
TLan_ResetAdapter( dev );
- TLAN_DBG( TLAN_DEBUG_GNRL, "TLAN: %s: Opened. TLAN Chip Rev: %x\n", dev->name, priv->tlanRev );
+ TLAN_DBG( TLAN_DEBUG_GNRL, "%s: Opened. TLAN Chip Rev: %x\n", dev->name, priv->tlanRev );
return 0;
unsigned long flags;
if ( ! priv->phyOnline ) {
- TLAN_DBG( TLAN_DEBUG_TX, "TLAN TRANSMIT: %s PHY is not ready\n", dev->name );
+ TLAN_DBG( TLAN_DEBUG_TX, "TRANSMIT: %s PHY is not ready\n", dev->name );
dev_kfree_skb( skb );
return 0;
}
tail_list = priv->txList + priv->txTail;
if ( tail_list->cStat != TLAN_CSTAT_UNUSED ) {
- TLAN_DBG( TLAN_DEBUG_TX, "TLAN TRANSMIT: %s is busy (Head=%d Tail=%d)\n", dev->name, priv->txHead, priv->txTail );
+ TLAN_DBG( TLAN_DEBUG_TX, "TRANSMIT: %s is busy (Head=%d Tail=%d)\n", dev->name, priv->txHead, priv->txTail );
dev->tbusy = 1;
priv->txBusyCount++;
return 1;
if ( ! priv->txInProgress ) {
priv->txInProgress = 1;
outw( 0x4, dev->base_addr + TLAN_HOST_INT );
- TLAN_DBG( TLAN_DEBUG_TX, "TLAN TRANSMIT: Starting TX on buffer %d\n", priv->txTail );
+ TLAN_DBG( TLAN_DEBUG_TX, "TRANSMIT: Starting TX on buffer %d\n", priv->txTail );
outl( virt_to_bus( tail_list ), dev->base_addr + TLAN_CH_PARM );
outl( TLAN_HC_GO | TLAN_HC_ACK, dev->base_addr + TLAN_HOST_CMD );
} else {
- TLAN_DBG( TLAN_DEBUG_TX, "TLAN TRANSMIT: Adding buffer %d to TX channel\n", priv->txTail );
+ TLAN_DBG( TLAN_DEBUG_TX, "TRANSMIT: Adding buffer %d to TX channel\n", priv->txTail );
if ( priv->txTail == 0 ) {
( priv->txList + ( TLAN_NUM_TX_LISTS - 1 ) )->forward = virt_to_bus( tail_list );
} else {
del_timer( &priv->timer );
free_irq( dev->irq, dev );
TLan_FreeLists( dev );
- TLAN_DBG( TLAN_DEBUG_GNRL, "TLAN: Device %s closed.\n", dev->name );
+ TLAN_DBG( TLAN_DEBUG_GNRL, "Device %s closed.\n", dev->name );
MOD_DEC_USE_COUNT;
/* Should only read stats if open ? */
TLan_ReadAndClearStats( dev, TLAN_RECORD );
- TLAN_DBG( TLAN_DEBUG_RX, "TLAN RECEIVE: %s EOC count = %d\n", dev->name, priv->rxEocCount );
- TLAN_DBG( TLAN_DEBUG_TX, "TLAN TRANSMIT: %s Busy count = %d\n", dev->name, priv->txBusyCount );
+ TLAN_DBG( TLAN_DEBUG_RX, "RECEIVE: %s EOC count = %d\n", dev->name, priv->rxEocCount );
+ TLAN_DBG( TLAN_DEBUG_TX, "TRANSMIT: %s Busy count = %d\n", dev->name, priv->txBusyCount );
if ( debug & TLAN_DEBUG_GNRL ) {
TLan_PrintDio( dev->base_addr );
TLan_PhyPrint( dev );
TLanList *head_list;
u32 ack = 1;
- TLAN_DBG( TLAN_DEBUG_TX, "TLAN TRANSMIT: Handling TX EOF (Head=%d Tail=%d)\n", priv->txHead, priv->txTail );
+ TLAN_DBG( TLAN_DEBUG_TX, "TRANSMIT: Handling TX EOF (Head=%d Tail=%d)\n", priv->txHead, priv->txTail );
host_int = 0;
head_list = priv->txList + priv->txHead;
dev->tbusy = 0;
CIRC_INC( priv->txHead, TLAN_NUM_TX_LISTS );
if ( eoc ) {
- TLAN_DBG( TLAN_DEBUG_TX, "TLAN TRANSMIT: Handling TX EOC (Head=%d Tail=%d)\n", priv->txHead, priv->txTail );
+ TLAN_DBG( TLAN_DEBUG_TX, "TRANSMIT: Handling TX EOC (Head=%d Tail=%d)\n", priv->txHead, priv->txTail );
head_list = priv->txList + priv->txHead;
if ( ( head_list->cStat & TLAN_CSTAT_READY ) == TLAN_CSTAT_READY ) {
outl( virt_to_bus( head_list ), dev->base_addr + TLAN_CH_PARM );
TLanList *tail_list;
void *t;
- TLAN_DBG( TLAN_DEBUG_RX, "TLAN RECEIVE: Handling RX EOF (Head=%d Tail=%d)\n", priv->rxHead, priv->rxTail );
+ TLAN_DBG( TLAN_DEBUG_RX, "RECEIVE: Handling RX EOF (Head=%d Tail=%d)\n", priv->rxHead, priv->rxTail );
host_int = 0;
head_list = priv->rxList + priv->rxHead;
tail_list = priv->rxList + priv->rxTail;
CIRC_INC( priv->rxTail, TLAN_NUM_RX_LISTS );
if ( eoc ) {
- TLAN_DBG( TLAN_DEBUG_RX, "TLAN RECEIVE: Handling RX EOC (Head=%d Tail=%d)\n", priv->rxHead, priv->rxTail );
+ TLAN_DBG( TLAN_DEBUG_RX, "RECEIVE: Handling RX EOC (Head=%d Tail=%d)\n", priv->rxHead, priv->rxTail );
head_list = priv->rxList + priv->rxHead;
outl( virt_to_bus( head_list ), dev->base_addr + TLAN_CH_PARM );
ack |= TLAN_HC_GO | TLAN_HC_RT;
host_int = 0;
if ( priv->tlanRev < 0x30 ) {
- TLAN_DBG( TLAN_DEBUG_TX, "TLAN TRANSMIT: Handling TX EOC (Head=%d Tail=%d) -- IRQ\n", priv->txHead, priv->txTail );
+ TLAN_DBG( TLAN_DEBUG_TX, "TRANSMIT: Handling TX EOC (Head=%d Tail=%d) -- IRQ\n", priv->txHead, priv->txTail );
head_list = priv->txList + priv->txHead;
if ( ( head_list->cStat & TLAN_CSTAT_READY ) == TLAN_CSTAT_READY ) {
outl( virt_to_bus( head_list ), dev->base_addr + TLAN_CH_PARM );
dev->tbusy = 0;
ack = 0;
} else {
- TLAN_DBG( TLAN_DEBUG_GNRL, "TLAN: %s: Status Check\n", dev->name );
+ TLAN_DBG( TLAN_DEBUG_GNRL, "%s: Status Check\n", dev->name );
phy = priv->phy[priv->phyNum];
net_sts = TLan_DioRead8( dev->base_addr, TLAN_NET_STS );
if ( net_sts ) {
TLan_DioWrite8( dev->base_addr, TLAN_NET_STS, net_sts );
- TLAN_DBG( TLAN_DEBUG_GNRL, "TLAN: %s: Net_Sts = %x\n", dev->name, (unsigned) net_sts );
+ TLAN_DBG( TLAN_DEBUG_GNRL, "%s: Net_Sts = %x\n", dev->name, (unsigned) net_sts );
}
if ( ( net_sts & TLAN_NET_STS_MIRQ ) && ( priv->phyNum == 0 ) ) {
TLan_MiiReadReg( dev, phy, TLAN_TLPHY_STS, &tlphy_sts );
host_int = 0;
if ( priv->tlanRev < 0x30 ) {
- TLAN_DBG( TLAN_DEBUG_RX, "TLAN RECEIVE: Handling RX EOC (Head=%d Tail=%d) -- IRQ\n", priv->rxHead, priv->rxTail );
+ TLAN_DBG( TLAN_DEBUG_RX, "RECEIVE: Handling RX EOC (Head=%d Tail=%d) -- IRQ\n", priv->rxHead, priv->rxTail );
head_list = priv->rxList + priv->rxHead;
outl( virt_to_bus( head_list ), dev->base_addr + TLAN_CH_PARM );
ack |= TLAN_HC_GO | TLAN_HC_RT;
udelay( 1000 );
TLan_MiiReadReg( dev, phy, MII_GEN_STS, &status );
if ( status & MII_GS_LINK ) {
- printk( "TLAN: %s: Link active.\n", dev->name );
+ printk( "TLAN: %s: Link active.\n", dev->name );
TLan_DioWrite8( dev->base_addr, TLAN_LED_REG, TLAN_LED_LINK );
}
}
outl( TLAN_HC_GO | TLAN_HC_RT, dev->base_addr + TLAN_HOST_CMD );
} else {
printk( "TLAN: %s: Link inactive, will retry in 10 secs...\n", dev->name );
- TLan_SetTimer( dev, 1000, TLAN_TIMER_FINISH_RESET );
+ TLan_SetTimer( dev, (10*HZ), TLAN_TIMER_FINISH_RESET );
return;
}
TLan_MiiReadReg( dev, phy, MII_GEN_ID_HI, &hi );
TLan_MiiReadReg( dev, phy, MII_GEN_ID_LO, &lo );
if ( ( control != 0xFFFF ) || ( hi != 0xFFFF ) || ( lo != 0xFFFF ) ) {
- TLAN_DBG( TLAN_DEBUG_GNRL, "TLAN: PHY found at %02x %04x %04x %04x\n", phy, control, hi, lo );
+ TLAN_DBG( TLAN_DEBUG_GNRL, "PHY found at %02x %04x %04x %04x\n", phy, control, hi, lo );
if ( ( priv->phy[1] == TLAN_PHY_NONE ) && ( phy != TLAN_PHY_MAX_ADDR ) ) {
priv->phy[1] = phy;
}
TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
u16 value;
- TLAN_DBG( TLAN_DEBUG_GNRL, "TLAN: %s: Powering down PHY(s).\n", dev->name );
+ TLAN_DBG( TLAN_DEBUG_GNRL, "%s: Powering down PHY(s).\n", dev->name );
value = MII_GC_PDOWN | MII_GC_LOOPBK | MII_GC_ISOLATE;
TLan_MiiSync( dev->base_addr );
TLan_MiiWriteReg( dev, priv->phy[priv->phyNum], MII_GEN_CTL, value );
	 * This is arbitrary. It is intended to make sure the
	 * transceiver settles.
*/
- TLan_SetTimer( dev, (50/(1000/HZ)), TLAN_TIMER_PHY_PUP );
+ TLan_SetTimer( dev, (HZ/20), TLAN_TIMER_PHY_PUP );
} /* TLan_PhyPowerDown */
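+
+/*
+ * The HZ fixes in this file all follow the same arithmetic: state the
+ * delay in real time and scale by HZ, rather than hard-coding a tick
+ * count that was only right for HZ=100.  Sketch (MS_TO_JIFFIES is
+ * illustrative, not a macro in this driver):
+ */
+#if 0	/* illustration only */
+#define MS_TO_JIFFIES(ms)	(((ms) * HZ) / 1000)
+	/* old form (50/(1000/HZ)) divides by zero once HZ > 1000 (Alpha) */
+	TLan_SetTimer( dev, MS_TO_JIFFIES(50), TLAN_TIMER_PHY_PUP );	/* == HZ/20 */
+#endif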
TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
u16 value;
- TLAN_DBG( TLAN_DEBUG_GNRL, "TLAN: %s: Powering up PHY.\n", dev->name );
+ TLAN_DBG( TLAN_DEBUG_GNRL, "%s: Powering up PHY.\n", dev->name );
TLan_MiiSync( dev->base_addr );
value = MII_GC_LOOPBK;
TLan_MiiWriteReg( dev, priv->phy[priv->phyNum], MII_GEN_CTL, value );
phy = priv->phy[priv->phyNum];
- TLAN_DBG( TLAN_DEBUG_GNRL, "TLAN: %s: Reseting PHY.\n", dev->name );
+	TLAN_DBG( TLAN_DEBUG_GNRL, "%s: Resetting PHY.\n", dev->name );
TLan_MiiSync( dev->base_addr );
value = MII_GC_LOOPBK | MII_GC_RESET;
TLan_MiiWriteReg( dev, phy, MII_GEN_CTL, value );
phy = priv->phy[priv->phyNum];
- TLAN_DBG( TLAN_DEBUG_GNRL, "TLAN: %s: Trying to activate link.\n", dev->name );
+ TLAN_DBG( TLAN_DEBUG_GNRL, "%s: Trying to activate link.\n", dev->name );
TLan_MiiReadReg( dev, phy, MII_GEN_STS, &status );
if ( ( status & MII_GS_AUTONEG ) &&
( priv->duplex == TLAN_DUPLEX_DEFAULT ) &&
* but the card need additional time to start AN.
* .5 sec should be plenty extra.
*/
- printk( "TLAN: %s: Starting autonegotiation.\n", dev->name );
+ printk( "TLAN: %s: Starting autonegotiation.\n", dev->name );
TLan_SetTimer( dev, (4*HZ), TLAN_TIMER_PHY_FINISH_AN );
return;
}
priv->phyNum = 0;
data = TLAN_NET_CFG_1FRAG | TLAN_NET_CFG_1CHAN | TLAN_NET_CFG_PHY_EN;
TLan_DioWrite16( dev->base_addr, TLAN_NET_CONFIG, data );
- TLan_SetTimer( dev, 4, TLAN_TIMER_PHY_PDOWN );
+		TLan_SetTimer( dev, (40*HZ/1000), TLAN_TIMER_PHY_PDOWN );
return;
} else if ( priv->phyNum == 0 ) {
TLan_MiiReadReg( dev, phy, TLAN_TLPHY_CTL, &tctl );
/* Wait for 8 sec to give the process
* more time. Perhaps we should fail after a while.
*/
- printk( "TLAN: Giving autonegotiation more time.\n" );
+ printk( "TLAN: Giving autonegotiation more time.\n" );
TLan_SetTimer( dev, (8*HZ), TLAN_TIMER_PHY_FINISH_AN );
return;
}
- printk( "TLAN: %s: Autonegotiation complete.\n", dev->name );
+ printk( "TLAN: %s: Autonegotiation complete.\n", dev->name );
TLan_MiiReadReg( dev, phy, MII_AN_ADV, &an_adv );
TLan_MiiReadReg( dev, phy, MII_AN_LPA, &an_lpa );
mode = an_adv & an_lpa & 0x03E0;
priv->phyNum = 0;
data = TLAN_NET_CFG_1FRAG | TLAN_NET_CFG_1CHAN | TLAN_NET_CFG_PHY_EN;
TLan_DioWrite16( dev->base_addr, TLAN_NET_CONFIG, data );
- TLan_SetTimer( dev, 40, TLAN_TIMER_PHY_PDOWN );
+		TLan_SetTimer( dev, (400*HZ/1000), TLAN_TIMER_PHY_PDOWN );
return;
}
-
#define TLAN_IGNORE 0
#define TLAN_RECORD 1
-#define TLAN_DBG(lvl, format, args...) if (debug&lvl) printk( format, ##args );
+#define TLAN_DBG(lvl, format, args...) if (debug&lvl) printk(KERN_DEBUG "TLAN: " format, ##args );
#define TLAN_DEBUG_GNRL 0x0001
#define TLAN_DEBUG_TX 0x0002
#define TLAN_DEBUG_RX 0x0004
u32 duplex;
u32 phy[2];
u32 phyNum;
- u32 sa_int;
u32 speed;
u8 tlanRev;
u8 tlanFullDuplex;
p->dma = PARPORT_DMA_NONE;
}
+#ifdef CONFIG_PARPORT_PC_FIFO
if (p->dma != PARPORT_DMA_NONE) {
if (request_dma (p->dma, p->name)) {
printk (KERN_WARNING "%s: dma %d in use, "
}
}
}
+#endif /* CONFIG_PARPORT_PC_FIFO */
}
/* Done probing. Now put the port into a sensible start-up state.
pci_read_config_word(dev, PCI_STATUS, &status);
if (!(status & PCI_STATUS_CAP_LIST))
return 0;
- pci_read_config_byte(dev, PCI_CAPABILITY_LIST, &pos);
+ switch (dev->hdr_type) {
+ case PCI_HEADER_TYPE_NORMAL:
+ case PCI_HEADER_TYPE_BRIDGE:
+ pci_read_config_byte(dev, PCI_CAPABILITY_LIST, &pos);
+ break;
+ case PCI_HEADER_TYPE_CARDBUS:
+ pci_read_config_byte(dev, PCI_CB_CAPABILITY_LIST, &pos);
+ break;
+ default:
+ return 0;
+ }
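+	/*
+	 * CardBus bridges keep the capability pointer at a different
+	 * config offset than type 0/1 headers, hence the switch above;
+	 * unrecognized header types have no capability list at all.
+	 */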
while (ttl-- && pos >= 0x40) {
pos &= ~3;
pci_read_config_byte(dev, pos + PCI_CAP_LIST_ID, &id);
# Maintained by Martin Mares <pci-ids@ucw.cz>
# If you have any new entries, send them to the maintainer.
#
-# $Id: pci.ids,v 1.46 2000/01/02 20:32:11 mj Exp $
+# $Id: pci.ids,v 1.50 2000/01/23 05:57:06 mj Exp $
#
# Vendors, devices and subsystems. Please keep sorted.
2001 79c978 [HomePNA]
2020 53c974 [PCscsi]
2040 79c974
- 7006 IronGate Host
- 7403 Viper Power Management
- 7408 Viper ISA
- 7409 Viper IDE
- 740B Viper ACPI
- 740C Viper USB
+ 7006 AMD-751 [Irongate] System Controller
+ 7007 AMD-751 [Irongate] AGP Bridge
+ 7400 AMD-755 [Cobra] ISA
+ 7401 AMD-755 [Cobra] IDE
+ 7403 AMD-755 [Cobra] ACPI
+ 7404 AMD-755 [Cobra] USB
+ 7408 AMD-756 [Viper] ISA
+ 7409 AMD-756 [Viper] IDE
+ 740b AMD-756 [Viper] ACPI
+ 740c AMD-756 [Viper] USB
1023 Trident Microsystems
0194 82C194
2000 4DWave DX
fe03 12C01A FireWire Host Controller
104d Sony Corporation
8039 CXD3222 iLINK Controller
+ 8056 Rockwell HCF 56K modem
104e Oak Technology, Inc
0017 OTI-64017
0107 OTI-107 [Spitfire]
1240 ISP1240
2020 ISP2020A
2100 ISP2100
+ 2200 ISP2200
1078 Cyrix Corporation
0000 5510 [Grappa]
0001 PCI Master
1330 PCI-6031E
1350 PCI-6071E
2a60 PCI-6023E
+ b001 IMAQ-PCI-1408
+ b011 IMAQ-PXI-1408
+ b021 IMAQ-PCI-1424
+ b031 IMAQ-PCI-1413
+ b041 IMAQ-PCI-1407
+ b051 IMAQ-PXI-1407
+ b061 IMAQ-PCI-1411
+ b071 IMAQ-PCI-1422
+ b081 IMAQ-PXI-1422
+ b091 IMAQ-PXI-1411
c801 PCI-GPIB
1094 First International Computers [FIC]
1095 CMD Technology Inc
127a 1005 PCI56RVP Modem
13df 1005 PCI56RVP Modem
1436 1005 WS-5614PS3G
+ 2005 HCF 56k V90 FaxModem
8234 RapidFire 616X ATM155 Adapter
127b Pixera Corporation
127c Crosspoint Solutions, Inc.
0005 Permedia
0006 GLINT MX
0007 3D Extreme
+ 0008 GLINT Gamma G1
0009 Permedia II 2D+3D
3d3d 0100 AccelStar II 3D Accelerator
3d3d 0111 Permedia 3:16
3d3d 0120 Santa Ana PCL
3d3d 0125 Oxygen VX1
3d3d 0127 Permedia3 Create!
+ 000a GLINT R3
0100 Permedia II 2D+3D
1004 Permedia
3d04 Permedia
0e11 b0dd NC3131
0e11 b0de NC3132
0e11 b0e1 NC3133
- 1014 005c Ethernet Pro 10/100
+ 1014 005c 82558B Ethernet Pro 10/100
1014 105c Netfinity 10/100
1033 8000 PC-9821X-B06
1033 8016 PK-UG-X006
103c 10C7 MegaRaid T5
1111 1111 MegaRaid 466
113c 03A2 MegaRaid
- 2410 82801 82810 Chipset ISA Bridge (LPC)
- 2411 82801 82810 Chipset IDE
- 2412 82801 82810 Chipset USB
- 2413 82801 82810 Chipset SMBus
- 2415 82801 82810 AC'97 Audio
- 2416 82801 82810 AC'97 Modem
- 2418 82801 82810 PCI Bridge
+ 2410 82801AA 82810 Chipset ISA Bridge (LPC)
+ 2411 82801AA 82810 Chipset IDE
+ 2412 82801AA 82810 Chipset USB
+ 2413 82801AA 82810 Chipset SMBus
+ 2415 82801AA 82810 AC'97 Audio
+ 2416 82801AA 82810 AC'97 Modem
+ 2418 82801AA 82810 PCI Bridge
2420 82801AB 82810 Chipset ISA Bridge (LPC)
2421 82801AB 82810 Chipset IDE
2422 82801AB 82810 Chipset USB
7121 82810 CGC [Chipset Graphics Controller]
7122 82810-DC100 GMCH [Graphics Memory Controller Hub]
7123 82810-DC100 CGC [Chipset Graphics Controller]
+ 7124 82810E GMCH [Graphics Memory Controller Hub]
+ 7125 82810E CGC [Chipset Graphics Controller]
7180 440LX/EX - 82443LX/EX Host bridge
7181 440LX/EX - 82443LX/EX AGP bridge
7190 440BX/ZX - 82443BX/ZX Host bridge
*
* PCI Bus Services -- Exported Symbols
*
- * Copyright 1998 Martin Mares
+ * Copyright 1998--2000 Martin Mares <mj@suse.cz>
*/
#include <linux/config.h>
EXPORT_SYMBOL(pci_register_driver);
EXPORT_SYMBOL(pci_unregister_driver);
EXPORT_SYMBOL(pci_match_device);
+EXPORT_SYMBOL(pci_find_parent_resource);
#ifdef CONFIG_HOTPLUG
EXPORT_SYMBOL(pci_setup_device);
static int do_mtd_request(memory_handle_t handle, mtd_request_t *req,
caddr_t buf)
{
- int ret, tries;
+ int ret=0, tries;
client_t *mtd;
socket_info_t *s;
OS-specific module glue goes here
======================================================================*/
+/* in alpha order */
EXPORT_SYMBOL(pcmcia_access_configuration_register);
EXPORT_SYMBOL(pcmcia_adjust_resource_info);
+EXPORT_SYMBOL(pcmcia_bind_device);
+EXPORT_SYMBOL(pcmcia_bind_mtd);
EXPORT_SYMBOL(pcmcia_check_erase_queue);
EXPORT_SYMBOL(pcmcia_close_memory);
EXPORT_SYMBOL(pcmcia_copy_memory);
EXPORT_SYMBOL(pcmcia_deregister_client);
EXPORT_SYMBOL(pcmcia_deregister_erase_queue);
+EXPORT_SYMBOL(pcmcia_eject_card);
EXPORT_SYMBOL(pcmcia_get_first_client);
EXPORT_SYMBOL(pcmcia_get_card_services_info);
EXPORT_SYMBOL(pcmcia_get_configuration_info);
+EXPORT_SYMBOL(pcmcia_get_mem_page);
EXPORT_SYMBOL(pcmcia_get_next_client);
EXPORT_SYMBOL(pcmcia_get_first_region);
EXPORT_SYMBOL(pcmcia_get_first_tuple);
+EXPORT_SYMBOL(pcmcia_get_first_window);
EXPORT_SYMBOL(pcmcia_get_next_region);
EXPORT_SYMBOL(pcmcia_get_next_tuple);
+EXPORT_SYMBOL(pcmcia_get_next_window);
EXPORT_SYMBOL(pcmcia_get_status);
EXPORT_SYMBOL(pcmcia_get_tuple_data);
+EXPORT_SYMBOL(pcmcia_insert_card);
+EXPORT_SYMBOL(pcmcia_lookup_bus);
EXPORT_SYMBOL(pcmcia_map_mem_page);
EXPORT_SYMBOL(pcmcia_modify_configuration);
EXPORT_SYMBOL(pcmcia_modify_window);
EXPORT_SYMBOL(pcmcia_release_io);
EXPORT_SYMBOL(pcmcia_release_irq);
EXPORT_SYMBOL(pcmcia_release_window);
+EXPORT_SYMBOL(pcmcia_replace_cis);
+EXPORT_SYMBOL(pcmcia_report_error);
EXPORT_SYMBOL(pcmcia_request_configuration);
EXPORT_SYMBOL(pcmcia_request_io);
EXPORT_SYMBOL(pcmcia_request_irq);
EXPORT_SYMBOL(pcmcia_request_window);
EXPORT_SYMBOL(pcmcia_reset_card);
+EXPORT_SYMBOL(pcmcia_resume_card);
EXPORT_SYMBOL(pcmcia_set_event_mask);
+EXPORT_SYMBOL(pcmcia_suspend_card);
EXPORT_SYMBOL(pcmcia_validate_cis);
EXPORT_SYMBOL(pcmcia_write_memory);
-EXPORT_SYMBOL(pcmcia_bind_device);
-EXPORT_SYMBOL(pcmcia_bind_mtd);
-EXPORT_SYMBOL(pcmcia_report_error);
-EXPORT_SYMBOL(pcmcia_suspend_card);
-EXPORT_SYMBOL(pcmcia_resume_card);
-EXPORT_SYMBOL(pcmcia_eject_card);
-EXPORT_SYMBOL(pcmcia_insert_card);
-EXPORT_SYMBOL(pcmcia_replace_cis);
-EXPORT_SYMBOL(pcmcia_get_first_window);
-EXPORT_SYMBOL(pcmcia_get_next_window);
-EXPORT_SYMBOL(pcmcia_get_mem_page);
+EXPORT_SYMBOL(dead_socket);
EXPORT_SYMBOL(register_ss_entry);
EXPORT_SYMBOL(unregister_ss_entry);
EXPORT_SYMBOL(CardServices);
#include <linux/sched.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
+#include <linux/module.h>
#include <pcmcia/ss.h>
yenta_set_mem_map,
yenta_proc_setup
};
+EXPORT_SYMBOL(yenta_operations);
/*
* Ricoh cardbus bridge: standard cardbus, except it needs
static void __init isapnp_peek(unsigned char *data, int bytes)
{
int i, j;
- unsigned char d;
+ unsigned char d=0;
for (i = 1; i <= bytes; i++) {
for (j = 0; j < 10; j++) {
break;
}
- if (min_size > size) {
- /*
- * failed, use ypan
- */
- size = current_par.screen_size;
- var->yres_virtual = size / (font_line_len / fontht);
- } else
- var->yres_virtual = nr_y * fontht;
+ if (var->accel_flags & FB_ACCELF_TEXT) {
+ if (min_size > size) {
+ /*
+ * failed, use ypan
+ */
+ size = current_par.screen_size;
+ var->yres_virtual = size / (font_line_len / fontht);
+ } else
+ var->yres_virtual = nr_y * fontht;
+ }
current_par.screen_end = current_par.screen_base_p + size;
vmode: FB_VMODE_NONINTERLACED
};
-static void __init
+static void __init
acornfb_init_fbinfo(void)
{
static int first = 1;
init_var.height = -1;
init_var.width = -1;
init_var.vmode = FB_VMODE_NONINTERLACED;
+ init_var.accel_flags = FB_ACCELF_TEXT;
current_par.dram_size = 0;
current_par.montype = -1;
* size can optionally be followed by 'M' or 'K' for
* MB or KB respectively.
*/
-static void __init
+static void __init
acornfb_parse_font(char *opt)
{
strcpy(fb_info.fontname, opt);
}
-static void __init
+static void __init
acornfb_parse_mon(char *opt)
{
+ current_par.montype = -2;
+
fb_info.monspecs.hfmin = simple_strtoul(opt, &opt, 0);
if (*opt == '-')
fb_info.monspecs.hfmax = simple_strtoul(opt + 1, &opt, 0);
init_var.height = simple_strtoul(opt + 1, NULL, 0);
}
-static void __init
+static void __init
acornfb_parse_montype(char *opt)
{
current_par.montype = -2;
}
}
-static void __init
+static void __init
acornfb_parse_dram(char *opt)
{
unsigned int size;
{ NULL, NULL }
};
-int __init
+int __init
acornfb_setup(char *options)
{
struct options *optp;
* Detect type of monitor connected
* For now, we just assume SVGA
*/
-static int __init
+static int __init
acornfb_detect_monitortype(void)
{
return 4;
printk("acornfb: freed %dK memory\n", mb_freed);
}
-int __init
+int __init
acornfb_init(void)
{
unsigned long size;
if (current_par.montype == -1)
current_par.montype = acornfb_detect_monitortype();
- if (current_par.montype < 0 || current_par.montype > NR_MONTYPES)
+ if (current_par.montype == -1 || current_par.montype > NR_MONTYPES)
current_par.montype = 4;
- fb_info.monspecs = monspecs[current_par.montype];
+ if (current_par.montype > 0)
+ fb_info.monspecs = monspecs[current_par.montype];
fb_info.monspecs.dpms = current_par.dpms;
/*
for (page = current_par.screen_base + size; page < top; page += PAGE_SIZE)
free_page(page);
current_par.screen_base_p =
- virt_to_phys(current_par.screen_base);
+ virt_to_phys((void *)current_par.screen_base);
}
#endif
#if defined(HAS_VIDC)
-/* $Id: aty128fb.c,v 1.1 1999/10/12 11:00:43 geert Exp $
+/* $Id: aty128fb.c,v 1.1.1.1.36.1 1999/12/11 09:03:05 Exp $
* linux/drivers/video/aty128fb.c -- Frame buffer device for ATI Rage128
*
- * Copyright (C) Summer 1999, Anthony Tong <atong@uiuc.edu>
+ * Copyright (C) 1999-2000, Anthony Tong <atong@uiuc.edu>
*
- * Brad Douglas <brad@neruo.com>
+ * Brad Douglas <brad@neruo.com>
* - x86 support
* - MTRR
* - Probe ROM for PLL
+ * - modedb
+ *
+ * Ani Joshi / Jeff Garzik
+ * - Code cleanup
*
* Based off of Geert's atyfb.c and vfb.c.
*
* TODO:
* - panning
- * - fix 15/16 bpp on big endian arch's
* - monitor sensing (DDC)
+ * - virtual display
* - other platform support (only ppc/x86 supported)
- * - PPLL_REF_DIV & XTALIN calculation
- * - determine MCLK from previous hardware setting
+ * - PPLL_REF_DIV & XTALIN calculation - done for x86
+ * - determine MCLK from previous setting - done for x86
+ * - calculate XCLK, rather than probe BIOS
+ * - hardware cursor support
+ * - acceleration
+ * - ioctl()'s
*/
/*
* example code and hardware. Thanks Nitya. -atong
*/
-#include <linux/config.h>
+
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/fb.h>
#include <linux/init.h>
#include <linux/selection.h>
+#include <linux/console.h>
#include <linux/pci.h>
+#include <linux/ioport.h>
#include <asm/io.h>
#if defined(CONFIG_PPC)
#include <video/macmodes.h>
#endif
+#ifdef CONFIG_FB_COMPAT_XPMAC
+#include <asm/vc_ioctl.h>
+#endif
+
#include <video/fbcon.h>
#include <video/fbcon-cfb8.h>
#include <video/fbcon-cfb16.h>
#ifdef CONFIG_MTRR
#include <asm/mtrr.h>
-#endif
+#endif /* CONFIG_MTRR */
#include "aty128.h"
+/* compatibility with older kernels */
+#ifndef LINUX_VERSION_CODE
+#include <linux/version.h>
+#endif
+
+#ifndef KERNEL_VERSION
+#define KERNEL_VERSION(x,y,z) (((x)<<16)+((y)<<8)+(z))
+#endif
+
+
+/* Debug flag */
#undef DEBUG
-#undef CONFIG_MTRR /* not ready? */
#ifdef DEBUG
-#define DBG(x) printk(KERN_INFO "aty128fb: %s\n",(x));
+#define DBG(x) printk(KERN_DEBUG "aty128fb: %s\n",(x));
#else
#define DBG(x)
#endif
-static char *aty128fb_name = "ATY Rage128";
-
+/* default mode */
static struct fb_var_screeninfo default_var = {
/* 640x480, 60 Hz, Non-Interlaced (25.175 MHz dotclock) */
640, 480, 640, 480, 0, 0, 8, 0,
0, FB_VMODE_NONINTERLACED
};
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,1)
+#ifndef MODULE
+/* default modedb mode */
+static struct fb_videomode defaultmode __initdata = {
+ /* 640x480, 60 Hz, Non-Interlaced (25.172 MHz dotclock) */
+ NULL, 60, 640, 480, 39722, 48, 16, 33, 10, 96, 2,
+ 0, FB_VMODE_NONINTERLACED
+};
+#endif
+#endif
+
+/* chip description information */
+struct aty128_chip_info {
+ const char *name;
+ unsigned short vendor;
+ unsigned short device;
+};
+
+/* supported Rage128 chipsets */
+static const struct aty128_chip_info aty128_pci_probe_list[] =
+{
+ {"PCI_DEVICE_ID_ATI_RAGE128_RE", PCI_VENDOR_ID_ATI, 0x5245},
+ {"PCI_DEVICE_ID_ATI_RAGE128_RF", PCI_VENDOR_ID_ATI, 0x5246},
+ {"PCI_DEVICE_ID_ATI_RAGE128_RK", PCI_VENDOR_ID_ATI, 0x524b},
+ {"PCI_DEVICE_ID_ATI_RAGE128_RL", PCI_VENDOR_ID_ATI, 0x524c},
+ {NULL, 0, 0}
+};
+
+/* packed BIOS settings */
#pragma pack(1)
typedef struct {
u8 clock_chip_type;
} PLL_BLOCK;
#pragma pack()
+/* onboard memory information */
struct aty128_meminfo {
u8 ML;
u8 MB;
u8 Rloop;
};
+/* various memory configurations */
const struct aty128_meminfo sdr_128 = { 4, 4, 3, 3, 1, 3, 1, 16, 30, 16 };
const struct aty128_meminfo sdr_64 = { 4, 8, 3, 3, 1, 3, 1, 17, 46, 17 };
const struct aty128_meminfo sdr_sgram = { 4, 4, 1, 2, 1, 2, 1, 16, 24, 16 };
const struct aty128_meminfo ddr_sgram = { 4, 4, 3, 3, 2, 3, 1, 16, 31, 16 };
static int currcon = 0;
+
+static char *aty128fb_name = "ATY Rage128";
static char fontname[40] __initdata = { 0 };
+static char noaccel __initdata = 0;
+
+#ifndef MODULE
+static const char *mode_option __initdata = NULL;
+#endif
#if defined(CONFIG_PPC)
-static int default_vmode __initdata = VMODE_NVRAM;
-static int default_cmode __initdata = CMODE_NVRAM;
+static int default_vmode __initdata = VMODE_CHOOSE;
+static int default_cmode __initdata = CMODE_8;
#endif
-#if defined(CONFIG_MTRR)
+#ifdef CONFIG_MTRR
static int mtrr = 1;
-#endif
+#endif /* CONFIG_MTRR */
+/* PLL constants */
struct aty128_constants {
u32 dotclock;
u32 ppll_min;
u32 v_total, v_sync_strt_wid;
u32 pitch;
u32 offset, offset_cntl;
+ u32 xoffset, yoffset;
u32 vxres, vyres;
u32 bpp;
};
struct fb_info_aty128 {
struct fb_info fb_info;
+ struct fb_info_aty128 *next;
struct aty128_constants constants;
- unsigned long regbase_phys, regbase;
- unsigned long frame_buffer_phys, frame_buffer;
- const struct aty128_meminfo *mem;
- u32 vram_size;
- u32 BIOS_SEG;
-#ifdef CONFIG_MTRR
- struct { int vram; int vram_valid; } mtrr;
-#endif
+ unsigned long regbase_phys; /* mmio */
+ unsigned long frame_buffer_phys; /* framebuffer memory */
+	unsigned long frame_buffer;		/* remapped framebuffer */
+ void *regbase;
+ const struct aty128_meminfo *mem; /* onboard mem info */
+ u32 vram_size; /* onboard video ram */
+ void *BIOS_SEG; /* BIOS segment */
+ unsigned short card_revision; /* video card revision */
struct aty128fb_par default_par, current_par;
struct display disp;
+ struct display_switch dispsw; /* for cursor and font */
struct { u8 red, green, blue, pad; } palette[256];
union {
#ifdef FBCON_HAS_CFB16
#endif
} fbcon_cmap;
int blitter_may_be_busy;
+#ifdef CONFIG_PCI
+ struct pci_dev *pdev;
+#endif
+#ifdef CONFIG_MTRR
+ struct { int vram; int vram_valid; } mtrr;
+#endif /* CONFIG_MTRR */
};
+static struct fb_info_aty128 *board_list = NULL;
+
#define round_div(n, d) ((n+(d/2))/d)
/*
static int aty128fb_set_cmap(struct fb_cmap *cmap, int kspc, int con,
struct fb_info *info);
static int aty128fb_pan_display(struct fb_var_screeninfo *var, int con,
- struct fb_info *info);
+ struct fb_info *fb);
static int aty128fb_ioctl(struct inode *inode, struct file *file, u_int cmd,
u_long arg, int con, struct fb_info *info);
static int aty128fbcon_switch(int con, struct fb_info *info);
static void aty128fbcon_blank(int blank, struct fb_info *info);
-
/*
* Internal routines
*/
static int aty128_setcolreg(u_int regno, u_int red, u_int green, u_int blue,
u_int transp, struct fb_info *info);
static void do_install_cmap(int con, struct fb_info *info);
+#ifndef CONFIG_FB_OF
static void aty128pci_probe(void);
+static int aty128_pci_register(struct pci_dev *pdev,
+ const struct aty128_chip_info *aci);
+#endif
+static struct fb_info_aty128 *aty128_board_list_add(struct fb_info_aty128
+ *board_list, struct fb_info_aty128 *new_node);
+#ifndef CONFIG_PPC
static int aty128find_ROM(struct fb_info_aty128 *info);
-static void aty128_timings(struct fb_info_aty128 *info);
static void aty128_get_pllinfo(struct fb_info_aty128 *info);
+#endif
+static void aty128_timings(struct fb_info_aty128 *info);
+static void aty128_init_engine(const struct aty128fb_par *par,
+ struct fb_info_aty128 *info);
static void aty128_reset_engine(const struct fb_info_aty128 *info);
static void aty128_flush_pixel_cache(const struct fb_info_aty128 *info);
static void wait_for_fifo(u16 entries, const struct fb_info_aty128 *info);
-static void wait_for_idle(const struct fb_info_aty128 *info);
+static void wait_for_idle(struct fb_info_aty128 *info);
static u32 bpp_to_depth(u32 bpp);
#ifdef FBCON_HAS_CFB8
static struct display_switch fbcon_aty128_8;
+static void fbcon_aty8_putc(struct vc_data *conp, struct display *p,
+ int c, int yy, int xx);
+static void fbcon_aty8_putcs(struct vc_data *conp, struct display *p,
+ const unsigned short *s, int count,
+ int yy, int xx);
+#endif
+#ifdef FBCON_HAS_CFB16
+static struct display_switch fbcon_aty128_16;
+#endif
+#ifdef FBCON_HAS_CFB24
+static struct display_switch fbcon_aty128_24;
+#endif
+#ifdef FBCON_HAS_CFB32
+static struct display_switch fbcon_aty128_32;
#endif
-
static struct fb_ops aty128fb_ops = {
aty128fb_open, aty128fb_release, aty128fb_get_fix,
* or using the other register aperture? TODO.
*/
static inline u32
-_aty_ld_le32(volatile unsigned int regindex,
+_aty_ld_le32(volatile unsigned int regindex,
const struct fb_info_aty128 *info)
{
- unsigned long temp;
+ unsigned long *temp;
u32 val;
#if defined(__powerpc__)
eieio();
temp = info->regbase;
asm("lwbrx %0,%1,%2" : "=b"(val) : "b" (regindex), "b" (temp));
-#elif defined(__sparc_v9__)
- temp = info->regbase + regindex;
- asm("lduwa [%1] %2, %0" : "=r" (val) : "r" (temp), "i" (ASI_PL));
#else
temp = info->regbase+regindex;
- val = le32_to_cpu(*((volatile u32 *)(temp)));
+ val = readl (temp);
#endif
+
return val;
}
static inline void
-_aty_st_le32(volatile unsigned int regindex, u32 val,
+_aty_st_le32(volatile unsigned int regindex, u32 val,
const struct fb_info_aty128 *info)
{
- unsigned long temp;
+ unsigned long *temp;
#if defined(__powerpc__)
- eieio();
temp = info->regbase;
- asm("stwbrx %0,%1,%2" : : "b" (val), "b" (regindex), "b" (temp) :
+ asm("stwbrx %0,%1,%2" : : "r" (val), "b" (regindex), "r" (temp) :
"memory");
-#elif defined(__sparc_v9__)
- temp = info->regbase + regindex;
- asm("stwa %0, [%1] %2" : : "r" (val), "r" (temp), "i" (ASI_PL) : "memory");
+#elif defined(__mc68000__)
+ *((volatile u32 *)(info->regbase+regindex)) = cpu_to_le32(val);
#else
temp = info->regbase+regindex;
- *((volatile u32 *)(temp)) = cpu_to_le32(val);
+ writel (val, temp);
#endif
}
static inline u8
-_aty_ld_8(volatile unsigned int regindex,
- const struct fb_info_aty128 *info)
+_aty_ld_8(unsigned int regindex, const struct fb_info_aty128 *info)
{
#if defined(__powerpc__)
eieio();
#endif
- return *(volatile u8 *)(info->regbase+regindex);
+ return readb (info->regbase + regindex);
}
static inline void
-_aty_st_8(volatile unsigned int regindex, u8 val,
- const struct fb_info_aty128 *info)
+_aty_st_8(unsigned int regindex, u8 val, const struct fb_info_aty128 *info)
{
#if defined(__powerpc__)
eieio();
#endif
- *(volatile u8 *)(info->regbase+regindex) = val;
+ writeb (val, info->regbase + regindex);
}
#define aty_ld_le32(regindex) _aty_ld_le32(regindex, info)
#define aty_ld_pll(pll_index) _aty_ld_pll(pll_index, info)
#define aty_st_pll(pll_index, val) _aty_st_pll(pll_index, val, info)
+
static u32
_aty_ld_pll(unsigned int pll_index,
const struct fb_info_aty128 *info)
{
+#if defined(__powerpc__)
+ eieio();
+#endif
aty_st_8(CLOCK_CNTL_INDEX, pll_index & 0x1F);
return aty_ld_le32(CLOCK_CNTL_DATA);
}
+
static void
_aty_st_pll(unsigned int pll_index, u32 val,
const struct fb_info_aty128 *info)
-{
+{
+#if defined(__powerpc__)
+ eieio();
+#endif
aty_st_8(CLOCK_CNTL_INDEX, (pll_index & 0x1F) | PLL_WR_EN);
aty_st_le32(CLOCK_CNTL_DATA, val);
}
return !(aty_ld_pll(PPLL_REF_DIV) & PPLL_ATOMIC_UPDATE_R);
}
+
static void
aty_pll_wait_readupdate(const struct fb_info_aty128 *info)
{
#ifdef DEBUG
if (reset) /* reset engine?? */
- printk(KERN_ERR "aty128fb: PLL write timeout!");
+ DBG("PLL write timeout!");
#endif
}
+
/* tell PLL to update */
static void
aty_pll_writeupdate(const struct fb_info_aty128 *info)
/*
- * Accelerator functions
+ * Accelerator engine functions
*/
static void
-wait_for_idle(const struct fb_info_aty128 *info)
+wait_for_idle(struct fb_info_aty128 *info)
{
unsigned long timeout = jiffies + HZ/20;
int reset = 1;
if (reset)
aty128_reset_engine(info);
+
+ info->blitter_may_be_busy = 0;
}
aty_st_le32(PC_NGUI_CTLSTAT, aty_ld_le32(PC_NGUI_CTLSTAT) | 0x000000ff);
- while (i && (aty_ld_le32(PC_NGUI_CTLSTAT) & PC_BUSY))
+ while (i && ((aty_ld_le32(PC_NGUI_CTLSTAT) & PC_BUSY) == PC_BUSY))
i--;
}
aty_st_le32(PM4_BUFFER_CNTL, PM4_BUFFER_CNTL_NONPM4);
#ifdef DEBUG
- printk("aty128fb: engine reset\n");
+ DBG("engine reset");
#endif
}
static void
aty128_init_engine(const struct aty128fb_par *par,
- const struct fb_info_aty128 *info)
+ struct fb_info_aty128 *info)
{
- u32 temp;
+ u32 pitch_value;
+
+ /* 3D scaler not spoken here */
aty_st_le32(SCALE_3D_CNTL, 0x00000000);
aty128_reset_engine(info);
- temp = par->crtc.pitch; /* fix this up */
+ pitch_value = par->crtc.pitch; /* fix this up */
if (par->crtc.bpp == 24) {
- temp = temp * 3;
+ pitch_value = pitch_value * 3;
}
/* setup engine offset registers */
- wait_for_fifo(4, info);
+ wait_for_fifo(1, info);
aty_st_le32(DEFAULT_OFFSET, 0x00000000);
/* setup engine pitch registers */
- aty_st_le32(DEFAULT_PITCH, temp);
+ aty_st_le32(DEFAULT_PITCH, pitch_value);
/* set the default scissor register to max dimensions */
wait_for_fifo(1, info);
/* set the drawing controls registers */
wait_for_fifo(1, info);
aty_st_le32(DP_GUI_MASTER_CNTL,
- GMC_SRC_PITCH_OFFSET_DEFAULT |
- GMC_DST_PITCH_OFFSET_DEFAULT |
- GMC_SRC_CLIP_DEFAULT |
- GMC_DST_CLIP_DEFAULT |
- GMC_BRUSH_SOLIDCOLOR |
- (bpp_to_depth(par->crtc.bpp) << 8) |
- GMC_SRC_DSTCOLOR |
- GMC_BYTE_ORDER_MSB_TO_LSB |
- GMC_DP_CONVERSION_TEMP_6500 |
- ROP3_PATCOPY |
- GMC_DP_SRC_RECT |
- GMC_3D_FCN_EN_CLR |
- GMC_DST_CLR_CMP_FCN_CLEAR |
- GMC_AUX_CLIP_CLEAR |
- GMC_WRITE_MASK_SET);
+ GMC_SRC_PITCH_OFFSET_DEFAULT |
+ GMC_DST_PITCH_OFFSET_DEFAULT |
+ GMC_SRC_CLIP_DEFAULT |
+ GMC_DST_CLIP_DEFAULT |
+ GMC_BRUSH_SOLIDCOLOR |
+ (bpp_to_depth(par->crtc.bpp) << 8) |
+ GMC_SRC_DSTCOLOR |
+ GMC_BYTE_ORDER_MSB_TO_LSB |
+ GMC_DP_CONVERSION_TEMP_6500 |
+ ROP3_PATCOPY |
+ GMC_DP_SRC_RECT |
+ GMC_3D_FCN_EN_CLR |
+ GMC_DST_CLR_CMP_FCN_CLEAR |
+ GMC_AUX_CLIP_CLEAR |
+ GMC_WRITE_MASK_SET);
+
wait_for_fifo(8, info);
/* clear the line drawing registers */
aty_st_le32(DST_BRES_DEC, 0);
/* set brush color registers */
- aty_st_le32(DP_BRUSH_FRGD_CLR, 0xFFFFFFFF);
- aty_st_le32(DP_BRUSH_BKGD_CLR, 0x00000000);
+ aty_st_le32(DP_BRUSH_FRGD_CLR, 0xFFFFFFFF); /* white */
+ aty_st_le32(DP_BRUSH_BKGD_CLR, 0x00000000); /* black */
/* set source color registers */
- aty_st_le32(DP_SRC_FRGD_CLR, 0xFFFFFFFF);
- aty_st_le32(DP_SRC_BKGD_CLR, 0x00000000);
+ aty_st_le32(DP_SRC_FRGD_CLR, 0xFFFFFFFF); /* white */
+ aty_st_le32(DP_SRC_BKGD_CLR, 0x00000000); /* black */
/* default write mask */
aty_st_le32(DP_WRITE_MASK, 0xFFFFFFFF);
}
- /*
- * CRTC programming
- */
-
/* convert bpp values to their register representation */
static u32
bpp_to_depth(u32 bpp)
{
if (bpp <= 8)
- return 2;
- else if (bpp <= 15)
- return 3;
+ return DST_8BPP;
else if (bpp <= 16)
-#if 0 /* force 15bpp */
- return 4;
-#else
- return 3;
-#endif
+ return DST_15BPP;
else if (bpp <= 24)
- return 5;
+ return DST_24BPP;
else if (bpp <= 32)
- return 6;
+ return DST_32BPP;
return -EINVAL;
}
+ /*
+ * CRTC programming
+ */
+
+/* Program the CRTC registers */
static void
aty128_set_crtc(const struct aty128_crtc *crtc,
const struct fb_info_aty128 *info)
{
aty_st_le32(CRTC_GEN_CNTL, crtc->gen_cntl);
- // aty_st_le32(CRTC_EXT_CNTL, crtc->ext_cntl);
aty_st_le32(CRTC_H_TOTAL_DISP, crtc->h_total);
aty_st_le32(CRTC_H_SYNC_STRT_WID, crtc->h_sync_strt_wid);
aty_st_le32(CRTC_V_TOTAL_DISP, crtc->v_total);
u32 left, right, upper, lower, hslen, vslen, sync, vmode;
u32 h_total, h_disp, h_sync_strt, h_sync_wid, h_sync_pol;
u32 v_total, v_disp, v_sync_strt, v_sync_wid, v_sync_pol, c_sync;
- u32 depth;
+ u32 depth, bytpp;
u8 hsync_strt_pix[5] = { 0, 0x12, 9, 6, 5 };
+ u8 mode_bytpp[7] = { 0, 0, 1, 2, 2, 3, 4 };
/* input */
xres = var->xres;
if (vyres < yres + yoffset)
vyres = yres + yoffset;
- if (bpp <= 8)
- bpp = 8;
- else if (bpp <= 16)
- bpp = 16;
- else if (bpp <= 32)
- bpp = 32;
+ depth = bpp_to_depth(bpp);
- if (vxres * vyres * (bpp/8) > info->vram_size)
- return -EINVAL;
+ /* make sure we didn't get an invalid depth */
+ if (depth == -EINVAL) {
+ printk(KERN_ERR "aty128fb: Invalid depth\n");
+ return -EINVAL;
+ }
- h_disp = xres / 8 - 1;
- h_total = (xres + right + hslen + left) / 8 - 1;
+ bytpp = mode_bytpp[depth];
+
+ /* make sure there is enough video ram for the mode */
+ if ((u32)(vxres * vyres * bytpp) > info->vram_size) {
+ printk(KERN_ERR "aty128fb: Not enough memory for mode\n");
+ return -EINVAL;
+ }
+
+ h_disp = (xres/8) - 1;
+ h_total = (((xres + right + hslen + left) / 8) - 1) & 0xFFFFL;
v_disp = yres - 1;
- v_total = yres + upper + vslen + lower - 1;
+ v_total = (yres + upper + vslen + lower - 1) & 0xFFFFL;
+
+ /* check to make sure h_total and v_total are in range */
+ if ((h_total/8 - 1) > 0x1ff || (v_total - 1) > 0x7FF) {
+ printk(KERN_ERR "aty128fb: invalid width ranges\n");
+ return -EINVAL;
+ }
- h_sync_wid = hslen / 8;
+ h_sync_wid = (hslen+7)/8;
if (h_sync_wid == 0)
h_sync_wid = 1;
- else if (h_sync_wid > 0x3f)
+ else if (h_sync_wid > 0x3f) /* 0x3f = max hwidth */
h_sync_wid = 0x3f;
- h_sync_strt = (xres + right - 8) + hsync_strt_pix[bpp/8];
+ h_sync_strt = h_disp + (right/8);
- v_disp = yres - 1;
v_sync_wid = vslen;
if (v_sync_wid == 0)
v_sync_wid = 1;
- else if (v_sync_wid > 0x1f)
+ else if (v_sync_wid > 0x1f) /* 0x1f = max vwidth */
v_sync_wid = 0x1f;
- v_sync_strt = yres + lower - 1;
+ v_sync_strt = v_disp + lower;
- h_sync_pol = sync & FB_SYNC_HOR_HIGH_ACT ? 0 : (1 << 23);
- v_sync_pol = sync & FB_SYNC_VERT_HIGH_ACT ? 0 : (1 << 23);
-
- depth = bpp_to_depth(bpp);
+ h_sync_pol = sync & FB_SYNC_HOR_HIGH_ACT ? 0 : 1;
+ v_sync_pol = sync & FB_SYNC_VERT_HIGH_ACT ? 0 : 1;
+
c_sync = sync & FB_SYNC_COMP_HIGH_ACT ? (1 << 4) : 0;
- crtc->gen_cntl = 0x03000000 | c_sync | depth << 8;
+ crtc->gen_cntl = 0x03000000L | c_sync | (depth << 8);
- crtc->h_total = (h_disp << 16) | (h_total & 0x0000FFFF);
- crtc->v_total = (v_disp << 16) | (v_total & 0x0000FFFF);
+ crtc->h_total = h_total | (h_disp << 16);
+ crtc->v_total = v_total | (v_disp << 16);
- crtc->h_sync_strt_wid = (h_sync_wid << 16) | (h_sync_strt) | h_sync_pol;
- crtc->v_sync_strt_wid = (v_sync_wid << 16) | (v_sync_strt) | v_sync_pol;
+ crtc->h_sync_strt_wid = hsync_strt_pix[bytpp] | (h_sync_strt << 3) |
+ (h_sync_wid << 16) | (h_sync_pol << 23);
+ crtc->v_sync_strt_wid = v_sync_strt | (v_sync_wid << 16) |
+ (v_sync_pol << 23);
- crtc->pitch = xres / 8;
+ crtc->pitch = xres >> 3;
crtc->offset = 0;
crtc->offset_cntl = 0;
crtc->vxres = vxres;
crtc->vyres = vyres;
+ crtc->xoffset = xoffset;
+ crtc->yoffset = yoffset;
crtc->bpp = bpp;
return 0;
}
+static int
+aty128_bpp_to_var(int pix_width, struct fb_var_screeninfo *var)
+{
+
+ /* fill in pixel info */
+ switch (pix_width) {
+ case CRTC_PIX_WIDTH_8BPP:
+ var->bits_per_pixel = 8;
+ var->red.offset = 0;
+ var->red.length = 8;
+ var->green.offset = 0;
+ var->green.length = 8;
+ var->blue.offset = 0;
+ var->blue.length = 8;
+ var->transp.offset = 0;
+ var->transp.length = 0;
+ break;
+ case CRTC_PIX_WIDTH_15BPP:
+ case CRTC_PIX_WIDTH_16BPP:
+ var->bits_per_pixel = 16;
+ var->red.offset = 10;
+ var->red.length = 5;
+ var->green.offset = 5;
+ var->green.length = 5;
+ var->blue.offset = 0;
+ var->blue.length = 5;
+ var->transp.offset = 0;
+ var->transp.length = 0;
+ break;
+ case CRTC_PIX_WIDTH_24BPP:
+ var->bits_per_pixel = 24;
+ var->red.offset = 16;
+ var->red.length = 8;
+ var->green.offset = 8;
+ var->green.length = 8;
+ var->blue.offset = 0;
+ var->blue.length = 8;
+ var->transp.offset = 0;
+ var->transp.length = 0;
+ break;
+ case CRTC_PIX_WIDTH_32BPP:
+ var->bits_per_pixel = 32;
+ var->red.offset = 16;
+ var->red.length = 8;
+ var->green.offset = 8;
+ var->green.length = 8;
+ var->blue.offset = 0;
+ var->blue.length = 8;
+ var->transp.offset = 24;
+ var->transp.length = 8;
+ break;
+ default:
+ printk(KERN_ERR "Invalid pixel width\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+
static int
aty128_crtc_to_var(const struct aty128_crtc *crtc,
struct fb_var_screeninfo *var)
{
-#ifdef notyet /* xoffset and yoffset are not correctly calculated */
- u32 xres, yres, bpp, left, right, upper, lower, hslen, vslen, sync;
+ u32 xres, yres, left, right, upper, lower, hslen, vslen, sync;
u32 h_total, h_disp, h_sync_strt, h_sync_dly, h_sync_wid, h_sync_pol;
u32 v_total, v_disp, v_sync_strt, v_sync_wid, v_sync_pol, c_sync;
u32 pix_width;
h_total = crtc->h_total & 0x1ff;
h_disp = (crtc->h_total>>16) & 0xff;
- h_sync_strt = (crtc->h_sync_strt_wid & 0xff) |
- ((crtc->h_sync_strt_wid>>4) & 0x100);
- h_sync_dly = (crtc->h_sync_strt_wid>>8) & 0x7;
- h_sync_wid = (crtc->h_sync_strt_wid>>16) & 0x1f;
- h_sync_pol = (crtc->h_sync_strt_wid>>21) & 0x1;
+ h_sync_strt = (crtc->h_sync_strt_wid>>3) & 0x1ff;
+ h_sync_dly = crtc->h_sync_strt_wid & 0x7;
+ h_sync_wid = (crtc->h_sync_strt_wid>>16) & 0x3f;
+ h_sync_pol = (crtc->h_sync_strt_wid>>23) & 0x1;
v_total = crtc->v_total & 0x7ff;
v_disp = (crtc->v_total>>16) & 0x7ff;
v_sync_strt = crtc->v_sync_strt_wid & 0x7ff;
v_sync_wid = (crtc->v_sync_strt_wid>>16) & 0x1f;
- v_sync_pol = (crtc->v_sync_strt_wid>>21) & 0x1;
+ v_sync_pol = (crtc->v_sync_strt_wid>>23) & 0x1;
c_sync = crtc->gen_cntl & CRTC_CSYNC_EN ? 1 : 0;
pix_width = crtc->gen_cntl & CRTC_PIX_WIDTH_MASK;
lower = v_sync_strt-v_disp;
vslen = v_sync_wid;
sync = (h_sync_pol ? 0 : FB_SYNC_HOR_HIGH_ACT) |
- (v_sync_pol ? 0 : FB_SYNC_VERT_HIGH_ACT) |
- (c_sync ? FB_SYNC_COMP_HIGH_ACT : 0);
+ (v_sync_pol ? 0 : FB_SYNC_VERT_HIGH_ACT) |
+ (c_sync ? FB_SYNC_COMP_HIGH_ACT : 0);
- switch (pix_width) {
-#if 0
- case CRTC_PIX_WIDTH_4BPP:
- bpp = 4;
- var->red.offset = 0;
- var->red.length = 8;
- var->green.offset = 0;
- var->green.length = 8;
- var->blue.offset = 0;
- var->blue.length = 8;
- var->transp.offset = 0;
- var->transp.length = 0;
- break;
-#endif
- case CRTC_PIX_WIDTH_8BPP:
- bpp = 8;
- var->red.offset = 0;
- var->red.length = 8;
- var->green.offset = 0;
- var->green.length = 8;
- var->blue.offset = 0;
- var->blue.length = 8;
- var->transp.offset = 0;
- var->transp.length = 0;
- break;
- case CRTC_PIX_WIDTH_15BPP:
- bpp = 16;
- var->red.offset = 10;
- var->red.length = 5;
- var->green.offset = 5;
- var->green.length = 5;
- var->blue.offset = 0;
- var->blue.length = 5;
- var->transp.offset = 0;
- var->transp.length = 0;
- break;
- case CRTC_PIX_WIDTH_16BPP:
- bpp = 16;
- var->red.offset = 11;
- var->red.length = 5;
- var->green.offset = 5;
- var->green.length = 6;
- var->blue.offset = 0;
- var->blue.length = 5;
- var->transp.offset = 0;
- var->transp.length = 0;
- break;
- case CRTC_PIX_WIDTH_24BPP:
- bpp = 24;
- var->red.offset = 16;
- var->red.length = 8;
- var->green.offset = 8;
- var->green.length = 8;
- var->blue.offset = 0;
- var->blue.length = 8;
- var->transp.offset = 0;
- var->transp.length = 0;
- break;
- case CRTC_PIX_WIDTH_32BPP:
- bpp = 32;
- var->red.offset = 16;
- var->red.length = 8;
- var->green.offset = 8;
- var->green.length = 8;
- var->blue.offset = 0;
- var->blue.length = 8;
- var->transp.offset = 24;
- var->transp.length = 8;
- break;
- default:
- printk(KERN_ERR "Invalid pixel width\n");
- }
+ aty128_bpp_to_var(pix_width, var);
-//Godda do math for xoffset and yoffset: does not exist in crtc
var->xres = xres;
var->yres = yres;
var->xres_virtual = crtc->vxres;
var->yres_virtual = crtc->vyres;
- var->bits_per_pixel = bpp;
var->xoffset = crtc->xoffset;
var->yoffset = crtc->yoffset;
var->left_margin = left;
var->sync = sync;
var->vmode = FB_VMODE_NONINTERLACED;
-#endif /* notyet */
- return 0;
-}
-
-static int
-aty128_bpp_to_var(int bpp, struct fb_var_screeninfo *var)
-{
- /* fill in pixel info */
- switch (bpp) {
- case 8:
- var->red.offset = 0;
- var->red.length = 8;
- var->green.offset = 0;
- var->green.length = 8;
- var->blue.offset = 0;
- var->blue.length = 8;
- var->transp.offset = 0;
- var->transp.length = 0;
- break;
- case 15:
- var->bits_per_pixel = 16;
- var->red.offset = 10;
- var->red.length = 5;
- var->green.offset = 5;
- var->green.length = 5;
- var->blue.offset = 0;
- var->blue.length = 5;
- var->transp.offset = 15;
- var->transp.length = 1;
- break;
- case 16:
- var->bits_per_pixel = 16;
- var->red.offset = 11;
- var->red.length = 5;
- var->green.offset = 5;
- var->green.length = 6;
- var->blue.offset = 0;
- var->blue.length = 5;
- var->transp.offset = 0;
- var->transp.length = 0;
- break;
- case 32:
- var->red.offset = 16;
- var->red.length = 8;
- var->green.offset = 8;
- var->green.length = 8;
- var->blue.offset = 0;
- var->blue.length = 8;
- var->transp.offset = 24;
- var->transp.length = 8;
- break;
- }
-
return 0;
}
aty128_set_pll(struct aty128_pll *pll, const struct fb_info_aty128 *info)
{
int div3;
+
unsigned char post_conv[] = /* register values for post dividers */
- { 2, 0, 1, 4, 2, 2, 6, 2, 3, 2, 2, 2, 7 };
+ { 2, 0, 1, 4, 2, 2, 6, 2, 3, 2, 2, 2, 7 };
/* select PPLL_DIV_3 */
aty_st_le32(CLOCK_CNTL_INDEX, aty_ld_le32(CLOCK_CNTL_INDEX) | (3 << 8));
- /* reset ppll */
+ /* reset PLL */
aty_st_pll(PPLL_CNTL,
aty_ld_pll(PPLL_CNTL) | PPLL_RESET | PPLL_ATOMIC_UPDATE_EN);
+ /* write the reference divider */
+ aty_st_pll(PPLL_REF_DIV, info->constants.ref_divider & 0x3ff);
+ aty_pll_writeupdate(info);
+ aty_pll_wait_readupdate(info);
+
div3 = aty_ld_pll(PPLL_DIV_3);
div3 &= ~PPLL_FB3_DIV_MASK;
static int
-aty128_var_to_pll(u32 vclk_per, struct aty128_pll *pll,
+aty128_var_to_pll(u32 period_in_ps, struct aty128_pll *pll,
const struct fb_info_aty128 *info)
{
const struct aty128_constants c = info->constants;
unsigned char post_dividers [] = {1,2,4,8,3,6,12};
- u32 output_freq, vclk;
+ u32 output_freq;
+ u32 vclk; /* in .01 MHz */
int i;
u32 n, d;
- vclk = 100000000 / vclk_per; /* convert units to 10 kHz */
+ vclk = 100000000 / period_in_ps; /* convert units to 10 kHz */
/* adjust pixel clock if necessary */
if (vclk > c.ppll_max)
vclk = c.ppll_max;
if (vclk * 12 < c.ppll_min)
- vclk = c.ppll_min;
+ vclk = c.ppll_min/12;
/* now, find an acceptable divider */
for (i = 0; i < sizeof(post_dividers); i++) {
if (output_freq >= c.ppll_min && output_freq <= c.ppll_max)
break;
}
- pll->post_divider = post_dividers[i];
/* calculate feedback divider */
n = c.ref_divider * output_freq;
d = c.dotclock;
- pll->feedback_divider = round_div(n, d);
+ pll->post_divider = post_dividers[i];
+ pll->feedback_divider = round_div(n, d);
pll->vclk = vclk;
+
#ifdef DEBUG
- printk("post %x feedback %x vlck %x output %x\n",
- pll->post_divider, pll->feedback_divider, vclk, output_freq);
+ printk(KERN_DEBUG "var_to_pll: post %d feedback %d vlck %d output %d ref_divider %d\n",
+ pll->post_divider, pll->feedback_divider, vclk, output_freq,
+ c.ref_divider);
+ printk(KERN_DEBUG "var_to_pll: vclk_per: %d\n", period_in_ps);
#endif
return 0;
static int
-aty128_pll_to_var(const struct aty128_pll *pll, struct fb_var_screeninfo *var)
+aty128_pll_to_var(const struct aty128_pll *pll, struct fb_var_screeninfo *var,
+ const struct fb_info_aty128 *info)
{
- /* TODO */
+ var->pixclock = 100000000 / pll->vclk;
+
return 0;
}
s32 x, b, p, ron, roff;
u32 n, d;
+ /* 15bpp is really 16bpp */
if (bpp == 15)
bpp = 16;
x;
#ifdef DEBUG
- printk("x %x\n", x);
+ printk(KERN_DEBUG "x %x\n", x);
#endif
b = 0;
while (x) {
x = round_div(n, d);
roff = x * (fifo_depth - 4);
if ((ron + m->Rloop) >= roff) {
- printk("Mode out of range\n");
+ printk(KERN_ERR "Mode out of range\n");
return -EINVAL;
}
#ifdef DEBUG
- printk("p: %x rloop: %x x: %x ron: %x roff: %x\n", p, m->Rloop, x,
- ron, roff);
+ printk(KERN_DEBUG "p: %x rloop: %x x: %x ron: %x roff: %x\n", p,
+ m->Rloop, x, ron, roff);
#endif
dsp->dda_config = p << 16 | m->Rloop << 20 | x;
dsp->dda_on_off = ron << 16 | roff;
u32 config;
info->current_par = *par;
+
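+ /* drain any pending accelerator operations before touching the CRTC */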
+ if (info->blitter_may_be_busy)
+ wait_for_idle(info);
/* clear all registers that may interfere with mode setting */
aty_st_le32(OVR_CLR, 0);
aty_st_le32(MPP_GP_CONFIG, 0);
aty_st_le32(SUBPIC_CNTL, 0);
aty_st_le32(VIPH_CONTROL, 0);
- aty_st_le32(I2C_CNTL_1, 0);
+ aty_st_le32(I2C_CNTL_1, 0); /* turn off i2c */
aty_st_le32(GEN_INT_CNTL, 0); /* turn off interrupts */
aty_st_le32(CAP0_TRIG_CNTL, 0);
aty_st_le32(CAP1_TRIG_CNTL, 0);
#endif
aty_st_le32(CONFIG_CNTL, config);
-
aty_st_8(CRTC_EXT_CNTL + 1, 0); /* turn the video back on */
+
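+ /* bring the acceleration engine back up if the new mode wants accel */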
+ if (par->accel_flags & FB_ACCELF_TEXT)
+ aty128_init_engine(par, info);
}
}
+ /*
+ * encode/decode the User Defined Part of the Display
+ */
+
static int
aty128_decode_var(struct fb_var_screeninfo *var, struct aty128fb_par *par,
const struct fb_info_aty128 *info)
{
int err;
- //memset(var, 0, sizeof(struct fb_var_screeninfo));
-
- /* XXX aty128_*_to_var() aren't fully implemented! */
if ((err = aty128_crtc_to_var(&par->crtc, var)))
return err;
- if ((err = aty128_pll_to_var(&par->pll, var)))
+ if ((err = aty128_pll_to_var(&par->pll, var, info)))
return err;
- if ((err = aty128_bpp_to_var(var->bits_per_pixel, var)))
- return err;
+ var->red.msb_right = 0;
+ var->green.msb_right = 0;
+ var->blue.msb_right = 0;
+ var->transp.msb_right = 0;
+
+ var->nonstd = 0;
+ var->activate = 0;
var->height = -1;
var->width = -1;
display = (con >= 0) ? &fb_display[con] : fb->disp;
+ /* basic (in)sanity checks */
+ if (!var->xres)
+ var->xres = 1;
+ if (!var->yres)
+ var->yres = 1;
+ if (var->xres > var->xres_virtual)
+ var->xres_virtual = var->xres;
+ if (var->yres > var->yres_virtual)
+ var->yres_virtual = var->yres;
+ if (var->bits_per_pixel <= 8)
+ var->bits_per_pixel = 8;
+ else if (var->bits_per_pixel <= 16)
+ var->bits_per_pixel = 16;
+ else if (var->bits_per_pixel <= 24)
+ var->bits_per_pixel = 24;
+ else if (var->bits_per_pixel <= 32)
+ var->bits_per_pixel = 32;
+ else
+ return -EINVAL;
+
if ((err = aty128_decode_var(var, &par, info)))
return err;
oldbpp != var->bits_per_pixel || oldaccel != var->accel_flags) {
struct fb_fix_screeninfo fix;
+
aty128_encode_fix(&fix, &par, info);
- display->screen_base = (char *) info->frame_buffer;
+ display->screen_base = (char *)info->frame_buffer;
display->visual = fix.visual;
display->type = fix.type;
display->type_aux = fix.type_aux;
display->inverse = 0;
accel = var->accel_flags & FB_ACCELF_TEXT;
- aty128_set_disp(display, info, var->bits_per_pixel, accel);
+ aty128_set_disp(display, info, par.crtc.bpp, accel);
-#if 0 /* acceleration is not ready */
if (accel)
- display->scrollmode = 0;
+ display->scrollmode = SCROLL_YNOMOVE;
else
-#endif
display->scrollmode = SCROLL_YREDRAW;
if (info->fb_info.changevar)
do_install_cmap(con, &info->fb_info);
}
+#ifdef CONFIG_FB_COMPAT_XPMAC
+ if (console_fb_info == &info->fb_info) {
+ int vmode, cmode;
+
+ display_info.width = var->xres;
+ display_info.height = var->yres;
+ display_info.depth = var->bits_per_pixel;
+ display_info.pitch = (var->xres_virtual)*(var->bits_per_pixel)/8;
+ if (mac_var_to_vmode(var, &vmode, &cmode))
+ display_info.mode = 0;
+ else
+ display_info.mode = vmode;
+ strcpy(info->fb_info.modename, aty128fb_name);
+ display_info.fb_address = info->frame_buffer_phys;
+ display_info.cmap_adr_address = 0;
+ display_info.cmap_data_address = 0;
+ display_info.disp_reg_address = info->regbase_phys;
+ }
+#endif
+
return 0;
}
switch (bpp) {
#ifdef FBCON_HAS_CFB8
case 8:
- disp->dispsw = accel ? &fbcon_aty128_8 : &fbcon_cfb8;
+ info->dispsw = accel ? fbcon_aty128_8 : fbcon_cfb8;
+ disp->dispsw = &info->dispsw;
break;
#endif
#ifdef FBCON_HAS_CFB16
+ case 15:
case 16:
- disp->dispsw = &fbcon_cfb16;
+ info->dispsw = accel ? fbcon_aty128_16 : fbcon_cfb16;
+ disp->dispsw = &info->dispsw;
disp->dispsw_data = info->fbcon_cmap.cfb16;
break;
#endif
#ifdef FBCON_HAS_CFB24
case 24:
- disp->dispsw = &fbcon_cfb24;
+ info->dispsw = accel ? fbcon_aty128_24 : fbcon_cfb24;
+ disp->dispsw = &info->dispsw;
disp->dispsw_data = info->fbcon_cmap.cfb24;
break;
#endif
#ifdef FBCON_HAS_CFB32
case 32:
- disp->dispsw = &fbcon_cfb32;
+ info->dispsw = accel ? fbcon_aty128_32 : fbcon_cfb32;
+ disp->dispsw = &info->dispsw;
disp->dispsw_data = info->fbcon_cmap.cfb32;
break;
#endif
memset(fix, 0, sizeof(struct fb_fix_screeninfo));
strcpy(fix->id, aty128fb_name);
- fix->smem_start = (long) info->frame_buffer_phys;
- fix->smem_len = info->vram_size;
+ fix->smem_start = (long)info->frame_buffer_phys;
+ fix->smem_len = (u32)info->vram_size;
- fix->mmio_start = (long) info->regbase_phys;
+ fix->mmio_start = (long)info->regbase_phys;
fix->mmio_len = 0x1fff;
fix->type = FB_TYPE_PACKED_PIXELS;
+ fix->type_aux = 0;
fix->line_length = par->crtc.vxres*par->crtc.bpp/8;
fix->visual = par->crtc.bpp <= 8 ? FB_VISUAL_PSEUDOCOLOR
- : FB_VISUAL_DIRECTCOLOR;
-
+ : FB_VISUAL_DIRECTCOLOR;
+ fix->ywrapstep = 0;
fix->xpanstep = 8;
fix->ypanstep = 1;
fix->accel = FB_ACCEL_ATI_RAGE128;
+
return;
}
aty128_decode_var(&fb_display[con].var, &par, info);
aty128_encode_fix(fix, &par, info);
+
return 0;
}
*/
static int
aty128fb_pan_display(struct fb_var_screeninfo *var, int con,
- struct fb_info *info)
+ struct fb_info *fb)
{
- if (var->xoffset != 0 || var->yoffset != 0)
- return -EINVAL;
+ struct fb_info_aty128 *info = (struct fb_info_aty128 *)fb;
+ struct aty128fb_par *par = &info->current_par;
+ u32 xoffset, yoffset;
+ u32 offset;
+ u32 xres, yres;
+
+ xres = (((par->crtc.h_total >> 16) & 0xff) + 1) * 8;
+ yres = ((par->crtc.v_total >> 16) & 0x7ff) + 1;
+
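+ /* round the x offset up to an 8-pixel boundary, matching xpanstep == 8 */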
+ xoffset = (var->xoffset + 7) & ~7;
+ yoffset = var->yoffset;
+
+ if (xoffset+xres > par->crtc.vxres || yoffset+yres > par->crtc.vyres)
+ return -EINVAL;
+
+ par->crtc.xoffset = xoffset;
+ par->crtc.yoffset = yoffset;
+
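+ /* pixel offset * bpp is in bits; >> 6 yields the start offset in 8-byte units */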
+ offset = ((yoffset * par->crtc.vxres + xoffset) * par->crtc.bpp) >> 6;
+
+ aty_st_le32(CRTC_OFFSET, offset);
return 0;
}
aty128fb_get_cmap(struct fb_cmap *cmap, int kspc, int con,
struct fb_info *info)
{
- if (!info->display_fg ||
- con == info->display_fg->vc_num) /* current console ? */
+ if (con == currcon) /* current console? */
return fb_get_cmap(cmap, kspc, aty128_getcolreg, info);
else if (fb_display[con].cmap.len) /* non default colormap? */
fb_copy_cmap(&fb_display[con].cmap, cmap, kspc ? 0 : 2);
int size = (fb_display[con].var.bits_per_pixel <= 8) ? 256 : 32;
fb_copy_cmap(fb_default_cmap(size), cmap, kspc ? 0 : 2);
}
+
return 0;
}
else
disp = info->disp;
if (!disp->cmap.len) { /* no colormap allocated? */
- int size = (disp->var.bits_per_pixel <= 16) ? 256 : 32;
+ int size = (disp->var.bits_per_pixel <= 8) ? 256 : 32;
if ((err = fb_alloc_cmap(&disp->cmap, size, 0)))
return err;
}
- if (!info->display_fg || con == info->display_fg->vc_num)
-/* current console? */
+
+ if (con == currcon) /* current console? */
return fb_set_cmap(cmap, kspc, aty128_setcolreg, info);
else
fb_copy_cmap(cmap, &disp->cmap, kspc ? 0 : 1);
+
return 0;
}
/*
- * Virtual Frame Buffer Specific ioctls
+ * Frame Buffer Specific ioctls
*/
static int
}
+#ifndef MODULE
int __init
aty128fb_setup(char *options)
{
break;
memcpy(fontname, this_opt + 5, i);
fontname[i] = 0;
+ } else if (!strncmp(this_opt, "noaccel", 7)) {
+ noaccel = 1;
}
+#ifdef CONFIG_MTRR
+ else if(!strncmp(this_opt, "nomtrr", 6)) {
+ mtrr = 0;
+ }
+#endif /* CONFIG_MTRR */
#if defined(CONFIG_PPC)
- if (!strncmp(this_opt, "vmode:", 6)) {
+ /* vmode and cmode are deprecated */
+ else if (!strncmp(this_opt, "vmode:", 6)) {
unsigned int vmode = simple_strtoul(this_opt+6, NULL, 0);
if (vmode > 0 && vmode <= VMODE_MAX)
default_vmode = vmode;
break;
}
}
-#endif
-#ifdef CONFIG_MTRR
- if(mtrr) {
- ACCESS_FBINFO(mtrr.vram) =
- mtrr_add(video_base_phys, ACCESS_FBINFO(video.len),
- MTRR_TYPE_WRCOMB, 1);
- ACCESS_FBINFO(mtrr.valid_vram) = 1;
- printk(KERN_INFO "aty128fb: MTRR set to ON\n");
- }
-#endif
+#endif /* CONFIG_PPC */
+ else
+ mode_option = this_opt;
}
return 0;
}
+#endif /* !MODULE */
/*
* Initialisation
*/
-static int
+static int __init
aty128_init(struct fb_info_aty128 *info, const char *name)
{
struct fb_var_screeninfo var;
u8 chip_rev;
if (!register_test(info)) {
- printk("Can't write to video registers\n");
+ printk(KERN_ERR "Can't write to video registers\n");
return 0;
}
chip_rev = (aty_ld_le32(CONFIG_CNTL) >> 16) & 0x1F;
- /* TODO be more verbose */
- printk("aty128fb: Rage128 [rev 0x%x] ", chip_rev);
+ /* TODO: be more verbose */
+ printk(KERN_INFO "aty128fb: Rage128 [chip rev 0x%x] [card rev %x] ",
+ chip_rev, info->card_revision);
if (info->vram_size % (1024 * 1024) == 0)
- printk("%dM ", info->vram_size / (1024*1024));
+ printk("%dM \n", info->vram_size / (1024*1024));
else
- printk("%dk ", info->vram_size / 1024);
-
- var = default_var;
-
-#ifdef CONFIG_PMAC
-
- if (default_vmode == VMODE_NVRAM) {
-#ifdef CONFIG_NVRAM
- default_vmode = nvram_read_byte(NV_VMODE);
- if (default_vmode <= 0 || default_vmode > VMODE_MAX)
-#endif /* CONFIG_NVRAM */
- default_vmode = VMODE_CHOOSE;
- }
-
- if (default_cmode == CMODE_NVRAM) {
-#ifdef CONFIG_NVRAM
- default_cmode = nvram_read_byte(NV_CMODE);
- if (default_cmode < CMODE_8 || default_cmode > CMODE_32)
-#endif /* CONFIG_NVRAM */
- default_vmode = VMODE_CHOOSE;
- }
-
- if (default_vmode != VMODE_CHOOSE &&
- mac_vmode_to_var(default_vmode, default_cmode, &var))
- var = default_var;
-
-#endif /* CONFIG_PMAC */
-
- if (aty128_decode_var(&var, &info->default_par, info)) {
- printk("Cannot set default mode.\n");
- return 0;
- }
+ printk("%dk \n", info->vram_size / 1024);
/* fill in info */
strcpy(info->fb_info.modename, aty128fb_name);
info->fb_info.blank = &aty128fbcon_blank;
info->fb_info.flags = FBINFO_FLAG_DEFAULT;
+#ifdef MODULE
+ var = default_var;
+#else
+ memset(&var, 0, sizeof(var));
+#ifdef CONFIG_PMAC
+ if (default_vmode == VMODE_CHOOSE) {
+ var = default_var;
+#endif /* CONFIG_PMAC */
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,1)
+ if (!fb_find_mode(&var, &info->fb_info, mode_option, NULL, 0,
+ &defaultmode, 8))
+ var = default_var;
+#endif
+
+#ifdef CONFIG_PMAC
+ } else {
+ if (mac_vmode_to_var(default_vmode, default_cmode, &var))
+ var = default_var;
+ }
+#endif /* CONFIG_PMAC */
+#endif /* MODULE */
+
+ if (noaccel)
+ var.accel_flags &= ~FB_ACCELF_TEXT;
+ else
+ var.accel_flags |= FB_ACCELF_TEXT;
+
+ if (aty128_decode_var(&var, &info->default_par, info)) {
+ printk(KERN_ERR "Cannot set default mode.\n");
+ return 0;
+ }
+
+ /* load up the palette with default colors */
for (j = 0; j < 16; j++) {
k = color_table[j];
info->palette[j].red = default_red[k];
}
dac = aty_ld_le32(DAC_CNTL) & 15; /* preserve lower three bits */
- dac |= DAC_8BIT_EN; /* set 8 bit dac */
- dac |= (0xFF << 24); /* set DAC mask */
+ dac |= DAC_8BIT_EN; /* set 8 bit dac */
+ dac |= DAC_MASK; /* set DAC mask */
aty_st_le32(DAC_CNTL, dac);
/* turn off bus mastering, just in case */
aty128fb_set_var(&var, -1, &info->fb_info);
aty128_init_engine(&info->default_par, info);
- printk("\n");
+ board_list = aty128_board_list_add(board_list, info);
+
if (register_framebuffer(&info->fb_info) < 0)
return 0;
- printk("fb%d: %s frame buffer device on %s\n",
+ printk(KERN_INFO "fb%d: %s frame buffer device on %s\n",
GET_FB_IDX(info->fb_info.node), aty128fb_name, name);
return 1; /* success! */
}
+/* add a new card to the list ++ajoshi */
+static struct fb_info_aty128 *
+aty128_board_list_add(struct fb_info_aty128 *board_list,
+ struct fb_info_aty128 *new_node)
+{
+ struct fb_info_aty128 *i_p = board_list;
+
+ new_node->next = NULL;
+ if(board_list == NULL)
+ return new_node;
+ while(i_p->next != NULL)
+ i_p = i_p->next;
+ i_p->next = new_node;
+
+ return board_list;
+}
+
+
void __init
aty128fb_init(void)
{
#if defined(CONFIG_FB_OF)
-/* let offb handle init */
+ /* let offb handle init */
#elif defined (CONFIG_PCI)
aty128pci_probe();
#endif
}
-void
+#ifndef CONFIG_FB_OF
+void __init
aty128pci_probe(void)
{
struct pci_dev *pdev = NULL;
- struct fb_info_aty128 *info;
- unsigned long fb_addr, reg_addr;
+ const struct aty128_chip_info *aci = &aty128_pci_probe_list[0];
+
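+ /* walk the known-chip table, registering every matching PCI device */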
+ while(aci->name != NULL) {
+ pdev = pci_find_device(aci->vendor, aci->device, NULL);
+ while(pdev != NULL) {
+ if(aty128_pci_register(pdev, aci) > 0)
+ return;
+ pdev = pci_find_device(aci->vendor, aci->device, pdev);
+ }
+ aci++;
+ }
+
+ return;
+}
+
+
+/* register a card ++ajoshi */
+static int __init
+aty128_pci_register(struct pci_dev *pdev,
+ const struct aty128_chip_info *aci)
+{
+ struct fb_info_aty128 *info = NULL;
+ unsigned long fb_addr, reg_addr = 0;
u16 tmp;
- while ((pdev = pci_find_device(PCI_VENDOR_ID_ATI, PCI_ANY_ID, pdev))) {
- if ((pdev->class >> 16) == PCI_BASE_CLASS_DISPLAY) {
- struct resource *rp;
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,1)
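+ /* 2.3.x kernels provide the resource API; use it to locate and claim the apertures */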
+ struct resource *rp;
- /* FIXME add other known R128 device ID's */
- switch (pdev->device) {
- case 0x5245:
- case 0x5246:
- case 0x524B:
- case 0x524C:
- break;
- default:
- continue;
- }
-
- rp = &pdev->resource[0];
- fb_addr = rp->start;
- if (!fb_addr)
- continue;
- fb_addr &= PCI_BASE_ADDRESS_MEM_MASK;
-
- rp = &pdev->resource[2];
- reg_addr = rp->start;
- if (!reg_addr)
- continue;
- reg_addr &= PCI_BASE_ADDRESS_MEM_MASK;
-
- info = kmalloc(sizeof(struct fb_info_aty128), GFP_ATOMIC);
- if (!info) {
- printk("aty128fb: can't alloc fb_info_aty128\n");
- return;
- }
- memset(info, 0, sizeof(struct fb_info_aty128));
-
- info->regbase_phys = reg_addr;
- info->regbase = (unsigned long) ioremap(reg_addr, 0x1FFF);
-
- if (!info->regbase) {
- kfree(info);
- return;
- }
-
- info->vram_size = aty_ld_le32(CONFIG_MEMSIZE) & 0x03FFFFFF;
-
- info->frame_buffer = fb_addr;
- info->frame_buffer = (unsigned long)
- ioremap(fb_addr, info->vram_size);
-
- if (!info->frame_buffer) {
- kfree(info);
- return;
- }
-
- pci_read_config_word(pdev, PCI_COMMAND, &tmp);
- if (!(tmp & PCI_COMMAND_MEMORY)) {
- tmp |= PCI_COMMAND_MEMORY;
- pci_write_config_word(pdev, PCI_COMMAND, tmp);
- }
+ rp = &pdev->resource[0];
+ fb_addr = rp->start;
+ fb_addr &= PCI_BASE_ADDRESS_MEM_MASK;
-#if defined(CONFIG_PPC)
- aty128_timings(info);
+ request_mem_region (rp->start, rp->end - rp->start + 1, "aty128fb");
+
+ rp = &pdev->resource[2];
+ reg_addr = rp->start;
+ reg_addr &= PCI_BASE_ADDRESS_MEM_MASK;
+
+ if (!reg_addr) {
+ release_mem_region (pdev->resource[0].start,
+ pdev->resource[0].end -
+ pdev->resource[0].start + 1);
+ return -1;
+ }
#else
- if (!aty128find_ROM(info)) {
- printk("Rage128 BIOS not located. Guessing...\n");
- aty128_timings(info);
- }
- else
- aty128_get_pllinfo(info);
+ fb_addr = pdev->base_address[0] & PCI_BASE_ADDRESS_MEM_MASK;
+ reg_addr = pdev->base_address[2] & PCI_BASE_ADDRESS_MEM_MASK;
+ if (!reg_addr)
+ return -1;
#endif
- if (!aty128_init(info, "PCI")) {
- kfree(info);
- return;
- }
- }
+ info = kmalloc(sizeof(struct fb_info_aty128), GFP_ATOMIC);
+ if(!info) {
+ printk(KERN_ERR "aty128fb: can't alloc fb_info_aty128\n");
+ goto unmap_out;
+ }
+ memset(info, 0, sizeof(struct fb_info_aty128));
+
+ info->pdev = pdev;
+
+ info->regbase_phys = reg_addr;
+ info->regbase = ioremap(reg_addr, 0x1FFF);
+
+ if (!info->regbase)
+ goto err_out;
+
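+ /* config-space offset 0x08 holds the PCI revision ID */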
+ pci_read_config_word(pdev, 0x08, &tmp);
+ info->card_revision = tmp;
+
+ info->vram_size = aty_ld_le32(CONFIG_MEMSIZE) & 0x03FFFFFF;
+
+ info->frame_buffer_phys = fb_addr;
+ info->frame_buffer = (unsigned long)ioremap(fb_addr, info->vram_size);
+
+ if (!info->frame_buffer)
+ goto err_out;
+
+ pci_read_config_word(pdev, PCI_COMMAND, &tmp);
+ if (!(tmp & PCI_COMMAND_MEMORY)) {
+ tmp |= PCI_COMMAND_MEMORY;
+ pci_write_config_word(pdev, PCI_COMMAND, tmp);
+ }
+
+#if defined(CONFIG_PPC)
+ aty128_timings(info);
+#else
+ if (!aty128find_ROM(info)) {
+ printk(KERN_INFO "Rage128 BIOS not located. Guessing...\n");
+ aty128_timings(info);
+ }
+ else
+ aty128_get_pllinfo(info);
+#endif
+#ifdef CONFIG_MTRR
+ if (mtrr) {
+ info->mtrr.vram = mtrr_add(info->frame_buffer_phys, info->vram_size,
+ MTRR_TYPE_WRCOMB, 1);
+ info->mtrr.vram_valid = 1;
+ /* let there be speed */
+ printk(KERN_INFO "aty128fb: Rage128 MTRR set to ON\n");
}
+#endif /* CONFIG_MTRR */
+
+ if (!aty128_init(info, "PCI"))
+ goto err_out;
+
+ return 0;
+
+err_out:
+ kfree (info);
+unmap_out:
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,1)
+ release_mem_region (pdev->resource[0].start,
+ pdev->resource[0].end -
+ pdev->resource[0].start + 1);
+ release_mem_region (pdev->resource[2].start,
+ pdev->resource[2].end -
+ pdev->resource[2].start + 1);
+#endif
+
+ return -1;
}
+#endif /* ! CONFIG_FB_OF */
-static int
+#ifndef CONFIG_PPC
+static int __init
aty128find_ROM(struct fb_info_aty128 *info)
{
u32 segstart;
char *rom;
int stage;
int i;
- char aty_rom_sig[] = "761295520";
- char R128_sig[] = "R128";
+ char aty_rom_sig[] = "761295520"; /* ATI ROM Signature */
+ char R128_sig[] = "R128"; /* Rage128 ROM identifier */
int flag = 0;
-DBG("E aty128find_ROM");
- for (segstart = 0x000c0000; segstart < 0x000f0000; segstart += 0x00001000) {
+ for (segstart=0x000c0000; segstart<0x000f0000; segstart+=0x00001000) {
stage = 1;
rom_base = (char *) ioremap(segstart, 0x1000);
rom_base1 = (char *) (rom_base+1);
- if ((*rom_base == 0x55) && (((*rom_base1) & 0xff) == 0xaa)) {
+ if ((*rom_base == 0x55) && (((*rom_base1) & 0xff) == 0xaa))
stage = 2;
- }
if (stage != 2) {
iounmap(rom_base);
}
rom = rom_base;
+ /* ATI signature found. Let's see if it's a Rage128 */
for (i = 0; (i < 512) && (stage != 4); i++) {
if (R128_sig[0] == *rom) {
if (strncmp(R128_sig, rom, strlen(R128_sig)) == 0) {
continue;
}
- printk("Rage128 BIOS located at segment %4.4X\n", (u32)rom_base);
- info->BIOS_SEG = (u32)rom_base;
+ printk(KERN_INFO "aty128fb: Rage128 BIOS located at segment %4.4X\n",
+ (u32)rom_base);
+ info->BIOS_SEG = rom_base;
flag = 1;
-
break;
}
-DBG("L aty128find_ROM");
return (flag);
}
-static void
+static void __init
aty128_get_pllinfo(struct fb_info_aty128 *info)
{
- u32 bios_header;
- u32 *header_ptr;
+ void *bios_header;
+ void *header_ptr;
u16 bios_header_offset, pll_info_offset;
PLL_BLOCK pll;
-DBG("E aty128_get_pllinfo");
bios_header = info->BIOS_SEG + 0x48L;
- header_ptr = (u32 *)bios_header;
+ header_ptr = bios_header;
- bios_header_offset = (u16)*header_ptr;
- bios_header = info->BIOS_SEG + (u32)bios_header_offset;
+ bios_header_offset = readw(header_ptr);
+ bios_header = info->BIOS_SEG + bios_header_offset;
bios_header += 0x30;
- header_ptr = (u32 *)bios_header;
- pll_info_offset = (u16)*header_ptr;
- header_ptr = (u32 *)(info->BIOS_SEG + (u32)pll_info_offset);
+ header_ptr = bios_header;
+ pll_info_offset = readw(header_ptr);
+ header_ptr = info->BIOS_SEG + pll_info_offset;
- memcpy(&pll, header_ptr, 50);
+ memcpy_fromio(&pll, header_ptr, 50);
info->constants.ppll_max = pll.PCLK_max_freq;
info->constants.ppll_min = pll.PCLK_min_freq;
info->constants.ref_divider = (u32)pll.PCLK_ref_divider;
info->constants.dotclock = (u32)pll.PCLK_ref_freq;
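+ /* program the BIOS-reported reference divider into the PLL */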
+ aty_st_pll(PPLL_REF_DIV, info->constants.ref_divider);
+ aty_pll_writeupdate(info);
+
info->constants.fifo_width = 128;
info->constants.fifo_depth = 32;
+#ifdef DEBUG
+ printk(KERN_DEBUG "get_pllinfo: ppll_max %d ppll_min %d xclk %d "
+ "ref_divider %d dotclock %d\n",
+ info->constants.ppll_max, info->constants.ppll_min,
+ info->constants.xclk, info->constants.ref_divider,
+ info->constants.dotclock);
+#endif
+
switch(aty_ld_le32(MEM_CNTL) & 0x03) {
case 0:
info->mem = &sdr_128;
info->mem = &sdr_sgram;
}
-DBG("L aty128get_pllinfo");
+ /* the BIOS mapping is no longer needed; free it */
+ if (info->BIOS_SEG)
+ iounmap(info->BIOS_SEG);
+
return;
}
+#endif /* ! CONFIG_PPC */
#ifdef CONFIG_FB_OF
-void
+void __init
aty128fb_of_init(struct device_node *dp)
{
unsigned long addr, reg_addr, fb_addr;
reg_addr = dp->addrs[2].address;
break;
default:
- printk("aty128fb: TODO unexpected addresses\n");
+ printk(KERN_ERR "aty128fb: TODO unexpected addresses\n");
return;
}
addr = (unsigned long) ioremap(reg_addr, 0x1FFF);
if (!addr) {
- printk("aty128fb: can't map memory registers\n");
+ printk(KERN_ERR "aty128fb: can't map memory registers\n");
return;
}
info = kmalloc(sizeof(struct fb_info_aty128), GFP_ATOMIC);
if (!info) {
- printk("aty128fb: can't alloc fb_info_aty128\n");
+ printk(KERN_ERR "aty128fb: can't alloc fb_info_aty128\n");
return;
}
- memset(info, 0, sizeof(struct fb_info_aty128));
+ memset((void *) info, 0, sizeof(struct fb_info_aty128));
info->regbase_phys = reg_addr;
- info->regbase = addr;
+ info->regbase = (void *) addr;
/* enabled memory-space accesses using config-space command register */
if (pci_device_loc(dp, &bus, &devfn) == 0) {
info->vram_size = aty_ld_le32(CONFIG_MEMSIZE) & 0x03FFFFFF;
info->frame_buffer_phys = fb_addr;
- info->frame_buffer = (unsigned long) ioremap(fb_addr, info->vram_size);
-
- /*
- * TODO find OF values/hints.
- *
- * If we are booted from BootX, the MacOS ATI driver will likely have
- * left useful tidbits in the DeviceRegistry.
- */
+ info->frame_buffer = (unsigned long) ioremap(fb_addr, info->vram_size);
if (!info->frame_buffer) {
- printk("aty128fb: can't map frame buffer\n");
- return;
+ printk(KERN_ERR "aty128fb: can't map frame buffer\n");
+ kfree(info);
+ return;
}
+ /* fall back to defaults */
aty128_timings(info);
if (!aty128_init(info, dp->full_name)) {
kfree(info);
return;
}
-}
+
+#ifdef CONFIG_FB_COMPAT_XPMAC
+ if (!console_fb_info)
+ console_fb_info = &info->fb_info;
#endif
+}
+#endif /* CONFIG_FB_OF */
/* fill in known card constants if pll_block is not available */
-static void
+static void __init
aty128_timings(struct fb_info_aty128 *info)
{
/* TODO make an attempt at probing */
-
- info->constants.dotclock = 2950;
+ if (!info->constants.dotclock)
+ info->constants.dotclock = 2950;
/* from documentation */
- info->constants.ppll_min = 12500;
- info->constants.ppll_max = 25000; /* 23000 on some cards? */
+ if (!info->constants.ppll_min)
+ info->constants.ppll_min = 12500;
+ if (!info->constants.ppll_max)
+ info->constants.ppll_max = 25000; /* 23000 on some cards? */
#if 1
- /* XXX TODO. Calculate properly. Fix OF's pll ideas. */
- info->constants.ref_divider = 0x3b;
+ /* XXX TODO. Calculate properly. Fix OF's pll ideas. */
+ if (!info->constants.ref_divider)
+ info->constants.ref_divider = 0x3b;
aty_st_pll(PPLL_REF_DIV, info->constants.ref_divider);
aty_pll_writeupdate(info);
#endif
/* TODO. Calculate */
- info->constants.xclk = 0x1d4d; /* same as mclk */
+ if (!info->constants.xclk)
+ info->constants.xclk = 0x1d4d; /* same as mclk */
info->constants.fifo_width = 128;
info->constants.fifo_depth = 32;
static int
aty128fbcon_switch(int con, struct fb_info *fb)
{
- currcon = con;
+ struct fb_info_aty128 *info = (struct fb_info_aty128 *)fb;
+ struct aty128fb_par par;
/* Do we have to save the colormap? */
if (fb_display[currcon].cmap.len)
- fb_get_cmap(&fb_display[currcon].cmap, 1, aty128_getcolreg, fb);
+ fb_get_cmap(&fb_display[currcon].cmap, 1, aty128_getcolreg, fb);
-#if 1
- aty128fb_set_var(&fb_display[con].var, con, fb);
-#else
-{
- struct fb_info_aty128 *info = (struct fb_info_aty128 *) fb;
- struct aty128fb_par par;
+ /* set the current console */
+ currcon = con;
aty128_decode_var(&fb_display[con].var, &par, info);
aty128_set_par(&par, info);
- aty128_set_disp(&fb_display[con], info,
- fb_display[con].var.bits_per_pixel);
+
+ aty128_set_disp(&fb_display[con], info, par.crtc.bpp,
+ par.accel_flags & FB_ACCELF_TEXT);
do_install_cmap(con, fb);
-}
-#endif
return 1;
}
/*
* Blank the display.
*/
-
static void
aty128fbcon_blank(int blank, struct fb_info *fb)
{
*green = (info->palette[regno].green<<8) | info->palette[regno].green;
*blue = (info->palette[regno].blue<<8) | info->palette[regno].blue;
*transp = 0;
+
return 0;
}
aty128_setcolreg(u_int regno, u_int red, u_int green, u_int blue,
u_int transp, struct fb_info *fb)
{
- struct fb_info_aty128 *info = (struct fb_info_aty128 *) fb;
+ struct fb_info_aty128 *info = (struct fb_info_aty128 *)fb;
u32 col;
if (regno > 255)
info->palette[regno].green = green;
info->palette[regno].blue = blue;
- aty_st_8(PALETTE_INDEX, regno);
- col = red << 16 | green << 8 | blue;
+ /* initialize gamma ramp for hi-color+ */
+ if ((info->current_par.crtc.bpp > 8) && (regno == 0)) {
+ int i;
+
+ for (i=16; i<256; i++) {
+ aty_st_8(PALETTE_INDEX, i);
+ col = (i << 16) | (i << 8) | i;
+ aty_st_le32(PALETTE_DATA, col);
+ }
+ }
+
+ /* initialize palette */
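+ /* in 16 bpp the DAC sees only every 8th palette entry for the 5-bit components, hence regno << 3 */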
+ if (info->current_par.crtc.bpp == 16)
+ aty_st_8(PALETTE_INDEX, (regno << 3));
+ else
+ aty_st_8(PALETTE_INDEX, regno);
+ col = (red << 16) | (green << 8) | blue;
aty_st_le32(PALETTE_DATA, col);
if (regno < 16)
switch (info->current_par.crtc.bpp) {
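+ /* the case ranges (a gcc extension) fold intermediate depths onto the packed formats */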
#ifdef FBCON_HAS_CFB16
- case 16:
+ case 9 ... 16:
info->fbcon_cmap.cfb16[regno] = (regno << 10) | (regno << 5) |
- regno;
+ regno;
break;
#endif
#ifdef FBCON_HAS_CFB24
- case 24:
+ case 17 ... 24:
info->fbcon_cmap.cfb24[regno] = (regno << 16) | (regno << 8) |
regno;
break;
#endif
#ifdef FBCON_HAS_CFB32
- case 32:
- {
- u32 i;
- i = (regno << 8) | regno;
- info->fbcon_cmap.cfb32[regno] = (i << 16) | i;
- }
+ case 25 ... 32: {
+ u32 i;
+
+ i = (regno << 8) | regno;
+ info->fbcon_cmap.cfb32[regno] = (i << 16) | i;
break;
+ }
#endif
}
return 0;
{
if (con != currcon)
return;
+
if (fb_display[con].cmap.len)
fb_set_cmap(&fb_display[con].cmap, 1, aty128_setcolreg, info);
else {
aty128_rectdraw(s16 x, s16 y, u16 width, u16 height,
struct fb_info_aty128 *info)
{
- /* perform rectangle fill */
+ /* perform rectangle operation */
wait_for_fifo(2, info);
aty_st_le32(DST_Y_X, (y << 16) | x);
aty_st_le32(DST_HEIGHT_WIDTH, (height << 16) | width);
+
+ info->blitter_may_be_busy = 1;
}
u_int width, u_int height,
struct fb_info_aty128 *info)
{
- u32 direction = DST_LAST_PEL;
- u32 pitch_value;
+ u32 save_dp_datatype, save_dp_cntl;
- if (!width || !height)
- return;
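+ /* save the engine datatype/control state clobbered by this blit */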
+ wait_for_fifo(2, info);
+ save_dp_datatype = aty_ld_le32(DP_DATATYPE);
+ save_dp_cntl = aty_ld_le32(DP_CNTL);
- pitch_value = info->current_par.crtc.vxres;
- if (info->current_par.crtc.bpp == 24) {
- /* In 24 bpp, the engine is in 8 bpp - this requires that all */
- /* horizontal coordinates and widths must be adjusted */
- pitch_value *= 3;
- srcx *= 3;
- dstx *= 3;
- width *= 3;
- }
-
- if (srcy < dsty) {
- dsty += height - 1;
- srcy += height - 1;
- } else
- direction |= DST_Y_TOP_TO_BOTTOM;
-
- if (srcx < dstx) {
- dstx += width - 1;
- srcx += width - 1;
- } else
- direction |= DST_X_LEFT_TO_RIGHT;
-
- wait_for_fifo(4, info);
- aty_st_le32(SRC_Y_X, (srcy << 16) | srcx);
+ wait_for_fifo(6, info);
+ aty_st_le32(DP_DATATYPE, (0 | BRUSH_SOLIDCOLOR << 16) | SRC_DSTCOLOR);
aty_st_le32(DP_MIX, ROP3_SRCCOPY | DP_SRC_RECT);
- aty_st_le32(DP_CNTL, direction);
- aty_st_le32(DP_DATATYPE, aty_ld_le32(DP_DATATYPE) | SRC_DSTCOLOR);
- aty128_rectdraw(dstx, dsty, width, height, info);
+ aty_st_le32(DP_CNTL, DST_X_LEFT_TO_RIGHT | DST_Y_TOP_TO_BOTTOM);
+ aty_st_le32(SRC_Y_X, (srcy << 16) | srcx);
+ aty_st_le32(DST_Y_X, (dsty << 16) | dstx);
+ aty_st_le32(DST_HEIGHT_WIDTH, (height << 16) | width);
+
+ wait_for_fifo(2, info);
+ aty_st_le32(DP_DATATYPE, save_dp_datatype);
+ aty_st_le32(DP_CNTL, save_dp_cntl);
+
+ info->blitter_may_be_busy = 1;
+
+ wait_for_idle(info);
}
+
+ /*
+ * Text mode accelerated functions
+ */
+
+
static void
fbcon_aty128_bmove(struct display *p, int sy, int sx, int dy, int dx,
int height, int width)
(struct fb_info_aty128 *)p->fb_info);
}
+
+/* TODO: Fix accel and add to these structs */
#ifdef FBCON_HAS_CFB8
+static void fbcon_aty8_putc(struct vc_data *conp, struct display *p,
+ int c, int yy, int xx)
+{
+ struct fb_info_aty128 *fb = (struct fb_info_aty128 *)(p->fb_info);
+
+ if (fb->blitter_may_be_busy)
+ wait_for_idle(fb);
+
+ fbcon_cfb8_putc(conp, p, c, yy, xx);
+}
+
+
+static void fbcon_aty8_putcs(struct vc_data *conp, struct display *p,
+ const unsigned short *s, int count,
+ int yy, int xx)
+{
+ struct fb_info_aty128 *fb = (struct fb_info_aty128 *)(p->fb_info);
+
+ if (fb->blitter_may_be_busy)
+ wait_for_idle(fb);
+
+ fbcon_cfb8_putcs(conp, p, s, count, yy, xx);
+}
+
+
+static void fbcon_aty8_clear_margins(struct vc_data *conp,
+ struct display *p, int bottom_only)
+{
+ struct fb_info_aty128 *fb = (struct fb_info_aty128 *)(p->fb_info);
+
+ if (fb->blitter_may_be_busy)
+ wait_for_idle(fb);
+
+ fbcon_cfb8_clear_margins(conp, p, bottom_only);
+}
+
static struct display_switch fbcon_aty128_8 = {
- fbcon_cfb8_setup, fbcon_aty128_bmove, fbcon_cfb8_clear, fbcon_cfb8_putc,
- fbcon_cfb8_putcs, fbcon_cfb8_revc, NULL, NULL, fbcon_cfb8_clear_margins,
+ fbcon_cfb8_setup, fbcon_aty128_bmove, fbcon_cfb8_clear,
+ fbcon_aty8_putc, fbcon_aty8_putcs, fbcon_cfb8_revc, NULL, NULL,
+ fbcon_aty8_clear_margins,
+ FONTWIDTH(4)|FONTWIDTH(8)|FONTWIDTH(12)|FONTWIDTH(16)
+};
+#endif
+#ifdef FBCON_HAS_CFB16
+static void fbcon_aty16_putc(struct vc_data *conp, struct display *p,
+ int c, int yy, int xx)
+{
+ struct fb_info_aty128 *fb = (struct fb_info_aty128 *)(p->fb_info);
+
+ if (fb->blitter_may_be_busy)
+ wait_for_idle(fb);
+
+ fbcon_cfb16_putc(conp, p, c, yy, xx);
+}
+
+
+static void fbcon_aty16_putcs(struct vc_data *conp, struct display *p,
+ const unsigned short *s, int count,
+ int yy, int xx)
+{
+ struct fb_info_aty128 *fb = (struct fb_info_aty128 *)(p->fb_info);
+
+ if (fb->blitter_may_be_busy)
+ wait_for_idle(fb);
+
+ fbcon_cfb16_putcs(conp, p, s, count, yy, xx);
+}
+
+
+static void fbcon_aty16_clear_margins(struct vc_data *conp,
+ struct display *p, int bottom_only)
+{
+ struct fb_info_aty128 *fb = (struct fb_info_aty128 *)(p->fb_info);
+
+ if (fb->blitter_may_be_busy)
+ wait_for_idle(fb);
+
+ fbcon_cfb16_clear_margins(conp, p, bottom_only);
+}
+
+static struct display_switch fbcon_aty128_16 = {
+ fbcon_cfb16_setup, fbcon_aty128_bmove, fbcon_cfb16_clear,
+ fbcon_aty16_putc, fbcon_aty16_putcs, fbcon_cfb16_revc, NULL, NULL,
+ fbcon_aty16_clear_margins,
+ FONTWIDTH(4)|FONTWIDTH(8)|FONTWIDTH(12)|FONTWIDTH(16)
+};
+#endif
+#ifdef FBCON_HAS_CFB24
+static void fbcon_aty24_putc(struct vc_data *conp, struct display *p,
+ int c, int yy, int xx)
+{
+ struct fb_info_aty128 *fb = (struct fb_info_aty128 *)(p->fb_info);
+
+ if (fb->blitter_may_be_busy)
+ wait_for_idle(fb);
+
+ fbcon_cfb24_putc(conp, p, c, yy, xx);
+}
+
+
+static void fbcon_aty24_putcs(struct vc_data *conp, struct display *p,
+ const unsigned short *s, int count,
+ int yy, int xx)
+{
+ struct fb_info_aty128 *fb = (struct fb_info_aty128 *)(p->fb_info);
+
+ if (fb->blitter_may_be_busy)
+ wait_for_idle(fb);
+
+ fbcon_cfb24_putcs(conp, p, s, count, yy, xx);
+}
+
+
+static void fbcon_aty24_clear_margins(struct vc_data *conp,
+ struct display *p, int bottom_only)
+{
+ struct fb_info_aty128 *fb = (struct fb_info_aty128 *)(p->fb_info);
+
+ if (fb->blitter_may_be_busy)
+ wait_for_idle(fb);
+
+ fbcon_cfb24_clear_margins(conp, p, bottom_only);
+}
+
+static struct display_switch fbcon_aty128_24 = {
+ fbcon_cfb24_setup, fbcon_aty128_bmove, fbcon_cfb24_clear,
+ fbcon_aty24_putc, fbcon_aty24_putcs, fbcon_cfb24_revc, NULL, NULL,
+ fbcon_aty24_clear_margins,
FONTWIDTH(4)|FONTWIDTH(8)|FONTWIDTH(12)|FONTWIDTH(16)
};
#endif
+#ifdef FBCON_HAS_CFB32
+static void fbcon_aty32_putc(struct vc_data *conp, struct display *p,
+ int c, int yy, int xx)
+{
+ struct fb_info_aty128 *fb = (struct fb_info_aty128 *)(p->fb_info);
+
+ if (fb->blitter_may_be_busy)
+ wait_for_idle(fb);
+
+ fbcon_cfb32_putc(conp, p, c, yy, xx);
+}
+
+
+static void fbcon_aty32_putcs(struct vc_data *conp, struct display *p,
+ const unsigned short *s, int count,
+ int yy, int xx)
+{
+ struct fb_info_aty128 *fb = (struct fb_info_aty128 *)(p->fb_info);
+
+ if (fb->blitter_may_be_busy)
+ wait_for_idle(fb);
+
+ fbcon_cfb32_putcs(conp, p, s, count, yy, xx);
+}
+
-#if defined(MODULE) && defined(DEBUG)
-int
+static void fbcon_aty32_clear_margins(struct vc_data *conp,
+ struct display *p, int bottom_only)
+{
+ struct fb_info_aty128 *fb = (struct fb_info_aty128 *)(p->fb_info);
+
+ if (fb->blitter_may_be_busy)
+ wait_for_idle(fb);
+
+ fbcon_cfb32_clear_margins(conp, p, bottom_only);
+}
+
+static struct display_switch fbcon_aty128_32 = {
+ fbcon_cfb32_setup, fbcon_aty128_bmove, fbcon_cfb32_clear,
+ fbcon_aty32_putc, fbcon_aty32_putcs, fbcon_cfb32_revc, NULL, NULL,
+ fbcon_aty32_clear_margins,
+ FONTWIDTH(4)|FONTWIDTH(8)|FONTWIDTH(12)|FONTWIDTH(16)
+};
+#endif
+
+#if defined(MODULE)
+MODULE_AUTHOR("(c)1999-2000 Brad Douglas <brad@neruo.com>, Anthony Tong "
+ "<atong@uiuc.edu>");
+MODULE_DESCRIPTION("FBDev driver for ATI Rage128 cards");
+
+int __init
init_module(void)
{
aty128pci_probe();
return 0;
}
-void
+void __exit
cleanup_module(void)
{
-/* XXX unregister! */
+ struct fb_info_aty128 *info = board_list;
+
+ while (board_list) {
+ info = board_list;
+ board_list = board_list->next;
+
+ unregister_framebuffer(&info->fb_info);
+#ifdef CONFIG_MTRR
+ if (info->mtrr.vram_valid)
+ mtrr_del(info->mtrr.vram, info->frame_buffer_phys,
+ info->vram_size);
+#endif /* CONFIG_MTRR */
+ iounmap(info->regbase);
+ iounmap(&info->frame_buffer);
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,1)
+ release_mem_region(info->pdev->resource[0].start,
+ info->pdev->resource[0].end -
+ info->pdev->resource[0].start + 1);
+ release_mem_region(info->pdev->resource[2].start,
+ info->pdev->resource[2].end -
+ info->pdev->resource[2].start + 1);
+#endif
+
+ kfree(info);
+ }
}
#endif /* MODULE */
#include <video/fbcon-cfb16.h>
#include <video/fbcon-cfb24.h>
-/*
- * Some defaults
- */
-#define DEFAULT_XRES 640
-#define DEFAULT_YRES 480
-#define DEFAULT_BPP 8
-
#define MMIO_SIZE 0x000c0000
static char *CyberRegs;
#define VISUALID_16M 4
#define VISUALID_32K 6
-#define K_CAP_X2_CTL1 0x49
-
-#define CAP_X_START 0x60
-#define CAP_X_END 0x62
-#define CAP_Y_START 0x64
-#define CAP_Y_END 0x66
-#define CAP_DDA_X_INIT 0x68
-#define CAP_DDA_X_INC 0x6a
-#define CAP_DDA_Y_INIT 0x6c
-#define CAP_DDA_Y_INC 0x6e
-
-#define EXT_FIFO_CTL 0x74
-
-#define CAP_PIP_X_START 0x80
-#define CAP_PIP_X_END 0x82
-#define CAP_PIP_Y_START 0x84
-#define CAP_PIP_Y_END 0x86
-
-#define CAP_NEW_CTL1 0x88
-
-#define CAP_NEW_CTL2 0x89
-
-#define CAP_MODE1 0xa4
-#define CAP_MODE1_8BIT 0x01 /* enable 8bit capture mode */
-#define CAP_MODE1_CCIR656 0x02 /* CCIR656 mode */
-#define CAP_MODE1_IGNOREVGT 0x04 /* ignore VGT */
-#define CAP_MODE1_ALTFIFO 0x10 /* use alternate FIFO for capture */
-#define CAP_MODE1_SWAPUV 0x20 /* swap UV bytes */
-#define CAP_MODE1_MIRRORY 0x40 /* mirror vertically */
-#define CAP_MODE1_MIRRORX 0x80 /* mirror horizontally */
-
-#define CAP_MODE2 0xa5
-
-#define Y_TV_CTL 0xae
-
-#define EXT_MEM_START 0xc0 /* ext start address 21 bits */
-#define HOR_PHASE_SHIFT 0xc2 /* high 3 bits */
-#define EXT_SRC_WIDTH 0xc3 /* ext offset phase 10 bits */
-#define EXT_SRC_HEIGHT 0xc4 /* high 6 bits */
-#define EXT_X_START 0xc5 /* ext->screen, 16 bits */
-#define EXT_X_END 0xc7 /* ext->screen, 16 bits */
-#define EXT_Y_START 0xc9 /* ext->screen, 16 bits */
-#define EXT_Y_END 0xcb /* ext->screen, 16 bits */
-#define EXT_SRC_WIN_WIDTH 0xcd /* 8 bits */
-#define EXT_COLOUR_COMPARE 0xce /* 24 bits */
-#define EXT_DDA_X_INIT 0xd1 /* ext->screen 16 bits */
-#define EXT_DDA_X_INC 0xd3 /* ext->screen 16 bits */
-#define EXT_DDA_Y_INIT 0xd5 /* ext->screen 16 bits */
-#define EXT_DDA_Y_INC 0xd7 /* ext->screen 16 bits */
-
-#define VID_FIFO_CTL 0xd9
-
-#define VID_CAP_VFC 0xdb
-#define VID_CAP_VFC_YUV422 0x00 /* formats - does this cause conversion? */
-#define VID_CAP_VFC_RGB555 0x01
-#define VID_CAP_VFC_RGB565 0x02
-#define VID_CAP_VFC_RGB888_24 0x03
-#define VID_CAP_VFC_RGB888_32 0x04
-#define VID_CAP_VFC_DUP_PIX_ZOON 0x08 /* duplicate pixel zoom */
-#define VID_CAP_VFC_MOD_3RD_PIX 0x20 /* modify 3rd duplicated pixel */
-#define VID_CAP_VFC_DBL_H_PIX 0x40 /* double horiz pixels */
-#define VID_CAP_VFC_UV128 0x80 /* UV data offset by 128 */
-
-#define VID_DISP_CTL1 0xdc
-#define VID_DISP_CTL1_INTRAM 0x01 /* video pixels go to internal RAM */
-#define VID_DISP_CTL1_IGNORE_CCOMP 0x02 /* ignore colour compare registers */
-#define VID_DISP_CTL1_NOCLIP 0x04 /* do not clip to 16235,16240 */
-#define VID_DISP_CTL1_UV_AVG 0x08 /* U/V data is averaged */
-#define VID_DISP_CTL1_Y128 0x10 /* Y data offset by 128 */
-#define VID_DISP_CTL1_VINTERPOL_OFF 0x20 /* vertical interpolation off */
-#define VID_DISP_CTL1_VID_OUT_WIN_FULL 0x40 /* video out window full */
-#define VID_DISP_CTL1_ENABLE_VID_WINDOW 0x80 /* enable video window */
-
-#define VID_FIFO_CTL1 0xdd
-
-#define VFAC_CTL1 0xe8
-#define VFAC_CTL1_CAPTURE 0x01 /* capture enable */
-#define VFAC_CTL1_VFAC_ENABLE 0x02 /* vfac enable */
-#define VFAC_CTL1_FREEZE_CAPTURE 0x04 /* freeze capture */
-#define VFAC_CTL1_FREEZE_CAPTURE_SYNC 0x08 /* sync freeze capture */
-#define VFAC_CTL1_VALIDFRAME_SRC 0x10 /* select valid frame source */
-#define VFAC_CTL1_PHILIPS 0x40 /* select Philips mode */
-#define VFAC_CTL1_MODVINTERPOLCLK 0x80 /* modify vertical interpolation clocl */
-
-#define VFAC_CTL2 0xe9
-#define VFAC_CTL2_INVERT_VIDDATAVALID 0x01 /* invert video data valid */
-#define VFAC_CTL2_INVERT_GRAPHREADY 0x02 /* invert graphic ready output sig */
-#define VFAC_CTL2_INVERT_DATACLK 0x04 /* invert data clock signal */
-#define VFAC_CTL2_INVERT_HSYNC 0x08 /* invert hsync input */
-#define VFAC_CTL2_INVERT_VSYNC 0x10 /* invert vsync input */
-#define VFAC_CTL2_INVERT_FRAME 0x20 /* invert frame odd/even input */
-#define VFAC_CTL2_INVERT_BLANK 0x40 /* invert blank output */
-#define VFAC_CTL2_INVERT_OVSYNC 0x80 /* invert other vsync input */
-
-#define VFAC_CTL3 0xea
-#define VFAC_CTL3_CAP_IRQ 0x40 /* enable capture interrupt */
-
-#define CAP_MEM_START 0xeb /* 18 bits */
-#define CAP_MAP_WIDTH 0xed /* high 6 bits */
-#define CAP_PITCH 0xee /* 8 bits */
-
-#define CAP_CTL_MISC 0xef
-#define CAP_CTL_MISC_HDIV 0x01
-#define CAP_CTL_MISC_HDIV4 0x02
-#define CAP_CTL_MISC_ODDEVEN 0x04
-#define CAP_CTL_MISC_HSYNCDIV2 0x08
-#define CAP_CTL_MISC_SYNCTZHIGH 0x10
-#define CAP_CTL_MISC_SYNCTZOR 0x20
-#define CAP_CTL_MISC_DISPUSED 0x80
-
-#define REG_BANK 0xfa
-#define REG_BANK_Y 0x01
-#define REG_BANK_K 0x05
+#define FUNC_CTL 0x3c
+#define FUNC_CTL_EXTREGENBL 0x80 /* enable access to 0xbcxxx */
+
+#define BIU_BM_CONTROL 0x3e
+#define BIU_BM_CONTROL_ENABLE 0x01 /* enable bus-master */
+#define BIU_BM_CONTROL_BURST 0x02 /* enable burst */
+#define BIU_BM_CONTROL_BACK2BACK 0x04 /* enable back to back */
+
+#define X_V2_VID_MEM_START 0x40
+#define X_V2_VID_SRC_WIDTH 0x43
+#define X_V2_X_START 0x45
+#define X_V2_X_END 0x47
+#define X_V2_Y_START 0x49
+#define X_V2_Y_END 0x4b
+#define X_V2_VID_SRC_WIN_WIDTH 0x4d
+
+#define Y_V2_DDA_X_INC 0x43
+#define Y_V2_DDA_Y_INC 0x47
+#define Y_V2_VID_FIFO_CTL 0x49
+#define Y_V2_VID_FMT 0x4b
+#define Y_V2_VID_DISP_CTL1 0x4c
+#define Y_V2_VID_FIFO_CTL1 0x4d
+
+#define J_X2_VID_MEM_START 0x40
+#define J_X2_VID_SRC_WIDTH 0x43
+#define J_X2_X_START 0x47
+#define J_X2_X_END 0x49
+#define J_X2_Y_START 0x4b
+#define J_X2_Y_END 0x4d
+#define J_X2_VID_SRC_WIN_WIDTH 0x4f
+
+#define K_X2_DDA_X_INIT 0x40
+#define K_X2_DDA_X_INC 0x42
+#define K_X2_DDA_Y_INIT 0x44
+#define K_X2_DDA_Y_INC 0x46
+#define K_X2_VID_FMT 0x48
+#define K_X2_VID_DISP_CTL1 0x49
#define K_CAP_X2_CTL1 0x49
#define CAP_NEW_CTL2 0x89
#define CAP_MODE1 0xa4
-#define CAP_MODE1_8BIT 0x01 /* enable 8bit capture mode */
-#define CAP_MODE1_CCIR656 0x02 /* CCIR656 mode */
-#define CAP_MODE1_IGNOREVGT 0x04 /* ignore VGT */
-#define CAP_MODE1_ALTFIFO 0x10 /* use alternate FIFO for capture */
-#define CAP_MODE1_SWAPUV 0x20 /* swap UV bytes */
-#define CAP_MODE1_MIRRORY 0x40 /* mirror vertically */
-#define CAP_MODE1_MIRRORX 0x80 /* mirror horizontally */
+#define CAP_MODE1_8BIT 0x01 /* enable 8bit capture mode */
+#define CAP_MODE1_CCIR656 0x02 /* CCIR656 mode */
+#define CAP_MODE1_IGNOREVGT 0x04 /* ignore VGT */
+#define CAP_MODE1_ALTFIFO 0x10 /* use alternate FIFO for capture */
+#define CAP_MODE1_SWAPUV 0x20 /* swap UV bytes */
+#define CAP_MODE1_MIRRORY 0x40 /* mirror vertically */
+#define CAP_MODE1_MIRRORX 0x80 /* mirror horizontally */
#define CAP_MODE2 0xa5
#define Y_TV_CTL 0xae
-#define EXT_MEM_START 0xc0 /* ext start address 21 bits */
-#define HOR_PHASE_SHIFT 0xc2 /* high 3 bits */
-#define EXT_SRC_WIDTH 0xc3 /* ext offset phase 10 bits */
-#define EXT_SRC_HEIGHT 0xc4 /* high 6 bits */
-#define EXT_X_START 0xc5 /* ext->screen, 16 bits */
-#define EXT_X_END 0xc7 /* ext->screen, 16 bits */
-#define EXT_Y_START 0xc9 /* ext->screen, 16 bits */
-#define EXT_Y_END 0xcb /* ext->screen, 16 bits */
-#define EXT_SRC_WIN_WIDTH 0xcd /* 8 bits */
-#define EXT_COLOUR_COMPARE 0xce /* 24 bits */
-#define EXT_DDA_X_INIT 0xd1 /* ext->screen 16 bits */
-#define EXT_DDA_X_INC 0xd3 /* ext->screen 16 bits */
-#define EXT_DDA_Y_INIT 0xd5 /* ext->screen 16 bits */
-#define EXT_DDA_Y_INC 0xd7 /* ext->screen 16 bits */
-
-#define VID_FIFO_CTL 0xd9
-
-#define VID_CAP_VFC 0xdb
-#define VID_CAP_VFC_YUV422 0x00 /* formats - does this cause conversion? */
-#define VID_CAP_VFC_RGB555 0x01
-#define VID_CAP_VFC_RGB565 0x02
-#define VID_CAP_VFC_RGB888_24 0x03
-#define VID_CAP_VFC_RGB888_32 0x04
-#define VID_CAP_VFC_DUP_PIX_ZOON 0x08 /* duplicate pixel zoom */
-#define VID_CAP_VFC_MOD_3RD_PIX 0x20 /* modify 3rd duplicated pixel */
-#define VID_CAP_VFC_DBL_H_PIX 0x40 /* double horiz pixels */
-#define VID_CAP_VFC_UV128 0x80 /* UV data offset by 128 */
-
-#define VID_DISP_CTL1 0xdc
-#define VID_DISP_CTL1_INTRAM 0x01 /* video pixels go to internal RAM */
-#define VID_DISP_CTL1_IGNORE_CCOMP 0x02 /* ignore colour compare registers */
-#define VID_DISP_CTL1_NOCLIP 0x04 /* do not clip to 16235,16240 */
-#define VID_DISP_CTL1_UV_AVG 0x08 /* U/V data is averaged */
-#define VID_DISP_CTL1_Y128 0x10 /* Y data offset by 128 */
-#define VID_DISP_CTL1_VINTERPOL_OFF 0x20 /* vertical interpolation off */
-#define VID_DISP_CTL1_VID_OUT_WIN_FULL 0x40 /* video out window full */
-#define VID_DISP_CTL1_ENABLE_VID_WINDOW 0x80 /* enable video window */
-
-#define VID_FIFO_CTL1 0xdd
+#define EXT_MEM_START 0xc0 /* ext start address 21 bits */
+#define HOR_PHASE_SHIFT 0xc2 /* high 3 bits */
+#define EXT_SRC_WIDTH 0xc3 /* ext offset phase 10 bits */
+#define EXT_SRC_HEIGHT 0xc4 /* high 6 bits */
+#define EXT_X_START 0xc5 /* ext->screen, 16 bits */
+#define EXT_X_END 0xc7 /* ext->screen, 16 bits */
+#define EXT_Y_START 0xc9 /* ext->screen, 16 bits */
+#define EXT_Y_END 0xcb /* ext->screen, 16 bits */
+#define EXT_SRC_WIN_WIDTH 0xcd /* 8 bits */
+#define EXT_COLOUR_COMPARE 0xce /* 24 bits */
+#define EXT_DDA_X_INIT 0xd1 /* ext->screen 16 bits */
+#define EXT_DDA_X_INC 0xd3 /* ext->screen 16 bits */
+#define EXT_DDA_Y_INIT 0xd5 /* ext->screen 16 bits */
+#define EXT_DDA_Y_INC 0xd7 /* ext->screen 16 bits */
+
+#define EXT_VID_FIFO_CTL 0xd9
+
+#define EXT_VID_FMT 0xdb
+#define EXT_VID_FMT_YUV422 0x00 /* formats - does this cause conversion? */
+#define EXT_VID_FMT_RGB555 0x01
+#define EXT_VID_FMT_RGB565 0x02
+#define EXT_VID_FMT_RGB888_24 0x03
+#define EXT_VID_FMT_RGB888_32 0x04
+#define EXT_VID_FMT_DUP_PIX_ZOON 0x08 /* duplicate pixel zoom */
+#define EXT_VID_FMT_MOD_3RD_PIX 0x20 /* modify 3rd duplicated pixel */
+#define EXT_VID_FMT_DBL_H_PIX 0x40 /* double horiz pixels */
+#define EXT_VID_FMT_UV128 0x80 /* UV data offset by 128 */
+
+#define EXT_VID_DISP_CTL1 0xdc
+#define EXT_VID_DISP_CTL1_INTRAM 0x01 /* video pixels go to internal RAM */
+#define EXT_VID_DISP_CTL1_IGNORE_CCOMP 0x02 /* ignore colour compare registers */
+#define EXT_VID_DISP_CTL1_NOCLIP 0x04 /* do not clip to 16235,16240 */
+#define EXT_VID_DISP_CTL1_UV_AVG 0x08 /* U/V data is averaged */
+#define EXT_VID_DISP_CTL1_Y128 0x10 /* Y data offset by 128 */
+#define EXT_VID_DISP_CTL1_VINTERPOL_OFF 0x20 /* vertical interpolation off */
+#define EXT_VID_DISP_CTL1_FULL_WIN 0x40 /* video out window full */
+#define EXT_VID_DISP_CTL1_ENABLE_WINDOW 0x80 /* enable video window */
+
+#define EXT_VID_FIFO_CTL1 0xdd
#define VFAC_CTL1 0xe8
#define VFAC_CTL1_CAPTURE 0x01 /* capture enable */
#define VFAC_CTL2_INVERT_OVSYNC 0x80 /* invert other vsync input */
#define VFAC_CTL3 0xea
+#define VFAC_CTL3_CAP_IRQ 0x40 /* enable capture interrupt */
-#define CAP_MEM_START 0xeb /* 18 bits */
-#define CAP_MAP_WIDTH 0xed /* high 6 bits */
-#define CAP_PITCH 0xee /* 8 bits */
+#define CAP_MEM_START 0xeb /* 18 bits */
+#define CAP_MAP_WIDTH 0xed /* high 6 bits */
+#define CAP_PITCH 0xee /* 8 bits */
#define CAP_CTL_MISC 0xef
#define CAP_CTL_MISC_HDIV 0x01
#define CAP_CTL_MISC_DISPUSED 0x80
#define REG_BANK 0xfa
+#define REG_BANK_X 0x00
#define REG_BANK_Y 0x01
+#define REG_BANK_W 0x02
+#define REG_BANK_T 0x03
+#define REG_BANK_J 0x04
#define REG_BANK_K 0x05
+/*
+ * Bus-master
+ */
+#define BM_ADDRESS_LOW 0xbc080
+#define BM_ADDRESS_HIGH 0xbc084
+#define BM_LENGTH 0xbc088
+#define BM_CONTROL 0xbc08c
+#define BM_CONTROL_ENABLE 0x01 /* enable transfer */
+#define BM_CONTROL_INIT 0x04 /* initialise status & count */
+#define BM_COUNT 0xbc090 /* read-only */
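+/*
+ * Rough usage sketch (not part of the driver): a transfer would
+ * typically be programmed by loading the address and length, then
+ * initialising and enabling the engine.  write_reg32() below is a
+ * hypothetical MMIO helper, named only for illustration:
+ *
+ *	write_reg32(BM_ADDRESS_LOW,  addr);
+ *	write_reg32(BM_ADDRESS_HIGH, addr >> 32);
+ *	write_reg32(BM_LENGTH,       len);
+ *	write_reg32(BM_CONTROL,      BM_CONTROL_INIT | BM_CONTROL_ENABLE);
+ *
+ * Progress could then be polled via the read-only BM_COUNT register.
+ */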
+/*
+ * Graphics Co-processor
+ */
#define CO_CMD_L_PATTERN_FGCOL 0x8000
#define CO_CMD_L_INC_LEFT 0x0004
#define CO_CMD_L_INC_UP 0x0002
dep_tristate 'Apple Macintosh filesystem support (EXPERIMENTAL)' CONFIG_HFS_FS $CONFIG_EXPERIMENTAL
-dep_tristate 'BFS filesystem (read only) support (EXPERIMENTAL)' CONFIG_BFS_FS $CONFIG_EXPERIMENTAL
-dep_bool ' BFS filesystem write support (DANGEROUS)' CONFIG_BFS_FS_WRITE $CONFIG_BFS_FS
+dep_tristate 'BFS filesystem support (EXPERIMENTAL)' CONFIG_BFS_FS $CONFIG_EXPERIMENTAL
# msdos filesystems
tristate 'DOS FAT fs support' CONFIG_FAT_FS
# Note 2! The CFLAGS definitions are now in the main makefile.
O_TARGET := adfs.o
-O_OBJS := dir.o file.o inode.o map.o namei.o super.o
+O_OBJS := dir.o dir_f.o dir_fplus.o file.o inode.o map.o super.o
M_OBJS := $(O_TARGET)
include $(TOPDIR)/Rules.make
--- /dev/null
+/* Internal data structures for ADFS */
+
+#define ADFS_FREE_FRAG 0
+#define ADFS_BAD_FRAG 1
+#define ADFS_ROOT_FRAG 2
+
+#define ADFS_NDA_OWNER_READ (1 << 0)
+#define ADFS_NDA_OWNER_WRITE (1 << 1)
+#define ADFS_NDA_LOCKED (1 << 2)
+#define ADFS_NDA_DIRECTORY (1 << 3)
+#define ADFS_NDA_EXECUTE (1 << 4)
+#define ADFS_NDA_PUBLIC_READ (1 << 5)
+#define ADFS_NDA_PUBLIC_WRITE (1 << 6)
+
+#include "dir_f.h"
+
+/*
+ * Directory handling
+ */
+struct adfs_dir {
+ struct super_block *sb;
+
+ int nr_buffers;
+ struct buffer_head *bh[4];
+ unsigned int pos;
+ unsigned int parent_id;
+
+ struct adfs_dirheader dirhead;
+ union adfs_dirtail dirtail;
+};
+
+/*
+ * This is the overall maximum name length
+ */
+#define ADFS_MAX_NAME_LEN 256
+struct object_info {
+ __u32 parent_id; /* parent object id */
+ __u32 file_id; /* object id */
+ __u32 loadaddr; /* load address */
+ __u32 execaddr; /* execution address */
+ __u32 size; /* size */
+ __u8 attr; /* RISC OS attributes */
+ unsigned char name_len; /* name length */
+ char name[ADFS_MAX_NAME_LEN];/* file name */
+};
+
+struct adfs_dir_ops {
+ int (*read)(struct super_block *sb, unsigned int id, unsigned int sz, struct adfs_dir *dir);
+ int (*setpos)(struct adfs_dir *dir, unsigned int fpos);
+ int (*getnext)(struct adfs_dir *dir, struct object_info *obj);
+ int (*update)(struct adfs_dir *dir, struct object_info *obj);
+ int (*create)(struct adfs_dir *dir, struct object_info *obj);
+ int (*remove)(struct adfs_dir *dir, struct object_info *obj);
+ void (*free)(struct adfs_dir *dir);
+};
+
+struct adfs_discmap {
+ struct buffer_head *dm_bh;
+ __u32 dm_startblk;
+ unsigned int dm_startbit;
+ unsigned int dm_endbit;
+};
+
+/* dir stuff */
+
+
+/* Inode stuff */
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,0)
+int adfs_get_block(struct inode *inode, long block,
+ struct buffer_head *bh, int create);
+#else
+int adfs_bmap(struct inode *inode, int block);
+#endif
+struct inode *adfs_iget(struct super_block *sb, struct object_info *obj);
+void adfs_read_inode(struct inode *inode);
+void adfs_write_inode(struct inode *inode);
+int adfs_notify_change(struct dentry *dentry, struct iattr *attr);
+
+/* map.c */
+extern int adfs_map_lookup(struct super_block *sb, int frag_id, int offset);
+extern unsigned int adfs_map_free(struct super_block *sb);
+
+/* Misc */
+void __adfs_error(struct super_block *sb, const char *function,
+ const char *fmt, ...);
+#define adfs_error(sb, fmt...) __adfs_error(sb, __FUNCTION__, fmt)
+
+/* namei.c */
+extern struct dentry *adfs_lookup(struct inode *dir, struct dentry *dentry);
+
+/* super.c */
+
+/*
+ * Inodes and file operations
+ */
+
+/* dir_*.c */
+extern struct inode_operations adfs_dir_inode_operations;
+extern struct adfs_dir_ops adfs_f_dir_ops;
+extern struct adfs_dir_ops adfs_fplus_dir_ops;
+
+extern int adfs_dir_update(struct super_block *sb, struct object_info *obj);
+
+/* file.c */
+extern struct inode_operations adfs_file_inode_operations;
+
+extern inline __u32 signed_asl(__u32 val, signed int shift)
+{
+ if (shift >= 0)
+ val <<= shift;
+ else
+ val >>= -shift;
+ return val;
+}
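+/* e.g. signed_asl(v, 3) == v << 3 and signed_asl(v, -3) == v >> 3 */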
+
+/*
+ * Calculate the address of a block in an object given the block offset
+ * and the object identity.
+ *
+ * The root directory ID should always be looked up in the map [3.4]
+ */
+extern inline int
+__adfs_block_map(struct super_block *sb, unsigned int object_id,
+ unsigned int block)
+{
+ if (object_id & 255) {
+ unsigned int off;
+
+ off = (object_id & 255) - 1;
+ block += off << sb->u.adfs_sb.s_log2sharesize;
+ }
+
+ return adfs_map_lookup(sb, object_id >> 8, block);
+}
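+
+/*
+ * Worked example (derived from the code above): the low 8 bits of an
+ * object ID hold a 1-based share offset (0 for an unshared object) and
+ * the remaining bits hold the fragment ID.  Assuming s_log2sharesize
+ * is 4, mapping block 2 of object 0x1703 gives off = 0x03 - 1 = 2,
+ * block = 2 + (2 << 4) = 34, and the result is
+ * adfs_map_lookup(sb, 0x17, 34).
+ */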
/*
- * linux/fs/adfs/dir.c
+ * linux/fs/adfs/dir.c
*
- * Copyright (C) 1997 Russell King
+ * Copyright (C) 1999 Russell King
+ *
+ * Common directory handling for ADFS
*/
-
+#include <linux/version.h>
#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/adfs_fs.h>
#include <linux/sched.h>
#include <linux/stat.h>
-static ssize_t adfs_dirread (struct file *filp, char *buf,
- size_t siz, loff_t *ppos)
-{
- return -EISDIR;
-}
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,0)
+#include <linux/spinlock.h>
+#else
+#include <asm/spinlock.h>
+#endif
-static int adfs_readdir (struct file *, void *, filldir_t);
-
-static struct file_operations adfs_dir_operations = {
- NULL, /* lseek - default */
- adfs_dirread, /* read */
- NULL, /* write - bad */
- adfs_readdir, /* readdir */
- NULL, /* select - default */
- NULL, /* ioctl */
- NULL, /* mmap */
- NULL, /* no special open code */
- NULL, /* flush */
- NULL, /* no special release code */
- file_fsync, /* fsync */
- NULL, /* fasync */
-};
+#include "adfs.h"
/*
- * directories can handle most operations...
+ * For the future. This should probably be per-directory.
*/
-struct inode_operations adfs_dir_inode_operations = {
- &adfs_dir_operations, /* default directory file-ops */
- NULL, /* create */
- adfs_lookup, /* lookup */
- NULL, /* link */
- NULL, /* unlink */
- NULL, /* symlink */
- NULL, /* mkdir */
- NULL, /* rmdir */
- NULL, /* mknod */
- NULL, /* rename */
- NULL, /* read link */
- NULL, /* follow link */
- NULL, /* get_block */
- NULL, /* read page */
- NULL, /* write page */
- NULL, /* truncate */
- NULL, /* permission */
- NULL /* revalidate */
-};
+static rwlock_t adfs_dir_lock;
-unsigned int adfs_val (unsigned char *p, int len)
+static int
+adfs_readdir(struct file *filp, void *dirent, filldir_t filldir)
{
- unsigned int val = 0;
-
- switch (len) {
- case 4:
- val |= p[3] << 24;
- case 3:
- val |= p[2] << 16;
- case 2:
- val |= p[1] << 8;
+ struct inode *inode = filp->f_dentry->d_inode;
+ struct super_block *sb = filp->f_dentry->d_sb;
+ struct adfs_dir_ops *ops = sb->u.adfs_sb.s_dir;
+ struct object_info obj;
+ struct adfs_dir dir;
+ int ret;
+
+ ret = ops->read(sb, inode->i_ino, inode->i_size, &dir);
+ if (ret)
+ goto out;
+
+ switch (filp->f_pos) {
+ case 0:
+ if (filldir(dirent, ".", 1, 0, inode->i_ino) < 0)
+ goto free_out;
+ filp->f_pos += 1;
+
+ case 1:
+ if (filldir(dirent, "..", 2, 1, dir.parent_id) < 0)
+ goto free_out;
+ filp->f_pos += 1;
+
default:
- val |= p[0];
+ break;
}
- return val;
-}
-
-static unsigned int adfs_filetype (unsigned int load)
-{
- if ((load & 0xfff00000) != 0xfff00000)
- return (unsigned int) -1;
- return (load >> 8) & 0xfff;
-}
-static unsigned int adfs_time (unsigned int load, unsigned int exec)
-{
- unsigned int high, low;
-
- /* Check for unstamped files. */
- if ((load & 0xfff00000) != 0xfff00000)
- return 0;
+ read_lock(&adfs_dir_lock);
- high = ((load << 24) | (exec >> 8));
- low = exec & 255;
-
- /* Files dated pre 1970. */
- if (high < 0x336e996a)
- return 0;
+ ret = ops->setpos(&dir, filp->f_pos - 2);
+ if (ret)
+ goto unlock_out;
+ while (ops->getnext(&dir, &obj) == 0) {
+ if (filldir(dirent, obj.name, obj.name_len,
+ filp->f_pos, obj.file_id) < 0)
+ goto unlock_out;
+ filp->f_pos += 1;
+ }
- high -= 0x336e996a;
+unlock_out:
+ read_unlock(&adfs_dir_lock);
- /* Files dated post 2038 ish. */
- if (high > 0x31ffffff)
- return 0x7fffffff;
+free_out:
+ ops->free(&dir);
- /* 65537 = h256,l1
- * (h256 % 100) = 56 h256 / 100 = 2
- * 56 << 8 = 14336 2 * 256 = 512
- * + l1 = 14337
- * / 100 = 143
- * + 512 = 655
- */
- return (((high % 100) << 8) + low) / 100 + (high / 100 << 8);
+out:
+ return ret;
}
-int adfs_readname (char *buf, char *ptr, int maxlen)
+int
+adfs_dir_update(struct super_block *sb, struct object_info *obj)
{
- int size = 0;
- while (*ptr >= ' ' && maxlen--) {
- switch (*ptr) {
- case '/':
- *buf++ = '.';
- break;
- default:
- *buf++ = *ptr;
- break;
- }
- ptr++;
- size ++;
+ struct adfs_dir_ops *ops = sb->u.adfs_sb.s_dir;
+ struct adfs_dir dir;
+ int ret = -EINVAL;
+
+ printk(KERN_INFO "adfs_dir_update: object %06X in dir %06X\n",
+ obj->file_id, obj->parent_id);
+#if 0
+ if (!ops->update) {
+ ret = -EINVAL;
+ goto out;
}
- *buf = '\0';
- return size;
-}
-int adfs_dir_read_parent (struct inode *inode, struct buffer_head **bhp)
-{
- struct super_block *sb;
- int i, size;
-
- sb = inode->i_sb;
-
- size = 2048 >> sb->s_blocksize_bits;
-
- for (i = 0; i < size; i++) {
- int block;
-
- block = adfs_parent_bmap (inode, i);
- if (block)
- bhp[i] = bread (sb->s_dev, block, sb->s_blocksize);
- else
- adfs_error (sb, "adfs_dir_read_parent",
- "directory %lu with a hole at offset %d", inode->i_ino, i);
- if (!block || !bhp[i]) {
- int j;
- for (j = i - 1; j >= 0; j--)
- brelse (bhp[j]);
- return 0;
- }
- }
- return i;
+ ret = ops->read(sb, obj->parent_id, 0, &dir);
+ if (ret)
+ goto out;
+
+ write_lock(&adfs_dir_lock);
+ ret = ops->update(&dir, obj);
+ write_unlock(&adfs_dir_lock);
+
+ ops->free(&dir);
+out:
+#endif
+ return ret;
}
-int adfs_dir_read (struct inode *inode, struct buffer_head **bhp)
+static int
+adfs_match(struct qstr *name, struct object_info *obj)
{
- struct super_block *sb;
- int i, size;
+ int i;
- if (!inode || !S_ISDIR(inode->i_mode))
+ if (name->len != obj->name_len)
return 0;
- sb = inode->i_sb;
+ for (i = 0; i < name->len; i++) {
+ char c1, c2;
- size = inode->i_size >> sb->s_blocksize_bits;
+ c1 = name->name[i];
+ c2 = obj->name[i];
- for (i = 0; i < size; i++) {
- int block;
+ if (c1 >= 'A' && c1 <= 'Z')
+ c1 += 'a' - 'A';
+ if (c2 >= 'A' && c2 <= 'Z')
+ c2 += 'a' - 'A';
- block = adfs_bmap (inode, i);
- if (block)
- bhp[i] = bread (sb->s_dev, block, sb->s_blocksize);
- else
- adfs_error (sb, "adfs_dir_read",
- "directory %lX,%lX with a hole at offset %d",
- inode->i_ino, inode->u.adfs_i.file_id, i);
- if (!block || !bhp[i]) {
- int j;
- for (j = i - 1; j >= 0; j--)
- brelse (bhp[j]);
+ if (c1 != c2)
return 0;
- }
}
- return i;
+ return 1;
}
-int adfs_dir_check (struct inode *inode, struct buffer_head **bhp, int buffers, union adfs_dirtail *dtp)
+static int
+adfs_dir_lookup_byname(struct inode *inode, struct qstr *name, struct object_info *obj)
{
- struct adfs_dirheader dh;
- union adfs_dirtail dt;
-
- memcpy (&dh, bhp[0]->b_data, sizeof (dh));
- memcpy (&dt, bhp[3]->b_data + 471, sizeof(dt));
-
- if (memcmp (&dh.startmasseq, &dt.new.endmasseq, 5) ||
- (memcmp (&dh.startname, "Nick", 4) &&
- memcmp (&dh.startname, "Hugo", 4))) {
- adfs_error (inode->i_sb, "adfs_check_dir",
- "corrupted directory inode %lX,%lX",
- inode->i_ino, inode->u.adfs_i.file_id);
- return 1;
+ struct super_block *sb = inode->i_sb;
+ struct adfs_dir_ops *ops = sb->u.adfs_sb.s_dir;
+ struct adfs_dir dir;
+ int ret;
+
+ ret = ops->read(sb, inode->i_ino, inode->i_size, &dir);
+ if (ret)
+ goto out;
+
+ if (inode->u.adfs_i.parent_id != dir.parent_id) {
+ adfs_error(sb, "parent directory changed under me! (%lx but got %lx)\n",
+ inode->u.adfs_i.parent_id, dir.parent_id);
+ ret = -EIO;
+ goto free_out;
}
- if (dtp)
- *dtp = dt;
- return 0;
-}
-
-void adfs_dir_free (struct buffer_head **bhp, int buffers)
-{
- int i;
-
- for (i = buffers - 1; i >= 0; i--)
- brelse (bhp[i]);
-}
-
-/* convert a disk-based directory entry to a Linux ADFS directory entry */
-static inline void
-adfs_dirent_to_idirent(struct adfs_idir_entry *ide, struct adfs_direntry *de)
-{
- ide->name_len = adfs_readname(ide->name, de->dirobname, ADFS_NAME_LEN);
- ide->file_id = adfs_val(de->dirinddiscadd, 3);
- ide->size = adfs_val(de->dirlen, 4);
- ide->mode = de->newdiratts;
- ide->mtime = adfs_time(adfs_val(de->dirload, 4), adfs_val(de->direxec, 4));
- ide->filetype = adfs_filetype(adfs_val(de->dirload, 4));
-}
-int adfs_dir_get (struct super_block *sb, struct buffer_head **bhp,
- int buffers, int pos, unsigned long parent_object_id,
- struct adfs_idir_entry *ide)
-{
- struct adfs_direntry de;
- int thissize, buffer, offset;
+ obj->parent_id = inode->i_ino;
- offset = pos & (sb->s_blocksize - 1);
- buffer = pos >> sb->s_blocksize_bits;
+ /*
+ * '.' is handled by reserved_lookup() in fs/namei.c
+ */
+ if (name->len == 2 && name->name[0] == '.' && name->name[1] == '.') {
+ /*
+ * Currently unable to fill in the rest of 'obj',
+ * but this is better than nothing. We need to
+ * ascend one level to find its parent.
+ */
+ obj->name_len = 0;
+ obj->file_id = obj->parent_id;
+ goto free_out;
+ }
- if (buffer > buffers)
- return 0;
+ read_lock(&adfs_dir_lock);
- thissize = sb->s_blocksize - offset;
- if (thissize > 26)
- thissize = 26;
+ ret = ops->setpos(&dir, 0);
+ if (ret)
+ goto unlock_out;
- memcpy (&de, bhp[buffer]->b_data + offset, thissize);
- if (thissize != 26)
- memcpy (((char *)&de) + thissize, bhp[buffer + 1]->b_data, 26 - thissize);
+ ret = -ENOENT;
+ while (ops->getnext(&dir, obj) == 0) {
+ if (adfs_match(name, obj)) {
+ ret = 0;
+ break;
+ }
+ }
- if (!de.dirobname[0])
- return 0;
+unlock_out:
+ read_unlock(&adfs_dir_lock);
- ide->inode_no = adfs_inode_generate (parent_object_id, pos);
- adfs_dirent_to_idirent(ide, &de);
- return 1;
+free_out:
+ ops->free(&dir);
+out:
+ return ret;
}
-int adfs_dir_find_entry (struct super_block *sb, struct buffer_head **bhp,
- int buffers, unsigned int pos,
- struct adfs_idir_entry *ide)
+static ssize_t
+adfs_dir_no_read(struct file *filp, char *buf, size_t siz, loff_t *ppos)
{
- struct adfs_direntry de;
- int offset, buffer, thissize;
+ return -EISDIR;
+}
- offset = pos & (sb->s_blocksize - 1);
- buffer = pos >> sb->s_blocksize_bits;
+static struct file_operations adfs_dir_operations = {
+ NULL, /* lseek - default */
+ adfs_dir_no_read, /* read */
+ NULL, /* write - bad */
+ adfs_readdir, /* readdir */
+ NULL, /* poll - default */
+ NULL, /* ioctl */
+ NULL, /* mmap */
+ NULL, /* no special open code */
+ NULL, /* flush */
+ NULL, /* no special release code */
+ file_fsync, /* fsync */
+ NULL, /* fasync */
+};
+
+static int
+adfs_hash(struct dentry *parent, struct qstr *qstr)
+{
+ const unsigned int name_len = parent->d_sb->u.adfs_sb.s_namelen;
+ const unsigned char *name;
+ unsigned long hash;
+ int i;
- if (buffer > buffers)
+ if (qstr->len < name_len)
return 0;
- thissize = sb->s_blocksize - offset;
- if (thissize > 26)
- thissize = 26;
+ /*
+ * Truncate the name in place; this avoids
+ * having to define a compare function.
+ */
+ qstr->len = i = name_len;
+ name = qstr->name;
+ hash = init_name_hash();
+ while (i--) {
+ char c;
- memcpy (&de, bhp[buffer]->b_data + offset, thissize);
- if (thissize != 26)
- memcpy (((char *)&de) + thissize, bhp[buffer + 1]->b_data, 26 - thissize);
+ c = *name++;
+ if (c >= 'A' && c <= 'Z')
+ c += 'a' - 'A';
- if (!de.dirobname[0])
- return 0;
+ hash = partial_name_hash(c, hash);
+ }
+ qstr->hash = end_name_hash(hash);
- adfs_dirent_to_idirent(ide, &de);
- return 1;
-}
+ return 0;
+}
-static int adfs_readdir (struct file *filp, void *dirent, filldir_t filldir)
+/*
+ * Compare two names, taking note of the name length
+ * requirements of the underlying filesystem.
+ */
+static int
+adfs_compare(struct dentry *parent, struct qstr *entry, struct qstr *name)
{
- struct inode *inode = filp->f_dentry->d_inode;
- struct super_block *sb;
- struct buffer_head *bh[4];
- union adfs_dirtail dt;
- unsigned long parent_object_id, dir_object_id;
- int buffers, pos;
-
- sb = inode->i_sb;
+ int i;
- if (filp->f_pos > ADFS_NUM_DIR_ENTRIES + 2)
- return -ENOENT;
+ if (entry->len != name->len)
+ return 1;
- if (!(buffers = adfs_dir_read (inode, bh))) {
- adfs_error (sb, "adfs_readdir", "unable to read directory");
- return -EINVAL;
- }
+ for (i = 0; i < name->len; i++) {
+ char a, b;
- if (adfs_dir_check (inode, bh, buffers, &dt)) {
- adfs_dir_free (bh, buffers);
- return -ENOENT;
- }
+ a = entry->name[i];
+ b = name->name[i];
- parent_object_id = adfs_val (dt.new.dirparent, 3);
- dir_object_id = adfs_inode_objid (inode);
+ if (a >= 'A' && a <= 'Z')
+ a += 'a' - 'A';
+ if (b >= 'A' && b <= 'Z')
+ b += 'a' - 'A';
- if (filp->f_pos < 2) {
- if (filp->f_pos < 1) {
- if (filldir (dirent, ".", 1, 0, inode->i_ino) < 0)
- return 0;
- filp->f_pos ++;
- }
- if (filldir (dirent, "..", 2, 1,
- adfs_inode_generate (parent_object_id, 0)) < 0)
- return 0;
- filp->f_pos ++;
+ if (a != b)
+ return 1;
}
+ return 0;
+}
- pos = 5 + (filp->f_pos - 2) * 26;
- while (filp->f_pos < 79) {
- struct adfs_idir_entry ide;
-
- if (!adfs_dir_get (sb, bh, buffers, pos, dir_object_id, &ide))
- break;
+struct dentry_operations adfs_dentry_operations = {
+ NULL, /* revalidate */
+ adfs_hash,
+ adfs_compare,
+ NULL, /* delete = called by dput */
+ NULL, /* release - called by d_free */
+ NULL /* iput - called by dentry_iput */
+};
- if (filldir (dirent, ide.name, ide.name_len, filp->f_pos, ide.inode_no) < 0)
- return 0;
- filp->f_pos ++;
- pos += 26;
+struct dentry *adfs_lookup(struct inode *dir, struct dentry *dentry)
+{
+ struct inode *inode = NULL;
+ struct object_info obj;
+ int error;
+
+ dentry->d_op = &adfs_dentry_operations;
+ error = adfs_dir_lookup_byname(dir, &dentry->d_name, &obj);
+ if (error == 0) {
+ error = -EACCES;
+ /*
+ * This only returns NULL if get_empty_inode
+ * fails.
+ */
+ inode = adfs_iget(dir->i_sb, &obj);
+ if (inode)
+ error = 0;
}
- adfs_dir_free (bh, buffers);
- return 0;
+ d_add(dentry, inode);
+ return ERR_PTR(error);
}
+
+/*
+ * directories can handle most operations...
+ */
+struct inode_operations adfs_dir_inode_operations = {
+ &adfs_dir_operations, /* default directory file-ops */
+ NULL, /* create */
+ adfs_lookup, /* lookup */
+ NULL, /* link */
+ NULL, /* unlink */
+ NULL, /* symlink */
+ NULL, /* mkdir */
+ NULL, /* rmdir */
+ NULL, /* mknod */
+ NULL, /* rename */
+ NULL, /* read link */
+ NULL, /* follow link */
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,0)
+ NULL, /* bmap */
+ NULL, /* read page */
+ NULL, /* write page */
+#else
+ NULL, /* read page */
+ NULL, /* write page */
+ NULL, /* bmap */
+#endif
+ NULL, /* truncate */
+ NULL, /* permission */
+ NULL /* revalidate */
+};
--- /dev/null
+/*
+ * linux/fs/adfs/dir_f.c
+ *
+ * Copyright (C) 1997-1999 Russell King
+ *
+ * E and F format directory handling
+ */
+#include <linux/version.h>
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/adfs_fs.h>
+#include <linux/sched.h>
+#include <linux/stat.h>
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,0)
+#include <linux/spinlock.h>
+#else
+#include <asm/spinlock.h>
+#endif
+
+#include "adfs.h"
+#include "dir_f.h"
+
+static void adfs_f_free(struct adfs_dir *dir);
+
+/*
+ * Read an (unaligned) value of length 1..4 bytes
+ */
+static inline unsigned int adfs_readval(unsigned char *p, int len)
+{
+ unsigned int val = 0;
+
+ switch (len) {
+ case 4: val |= p[3] << 24;
+ case 3: val |= p[2] << 16;
+ case 2: val |= p[1] << 8;
+ default: val |= p[0];
+ }
+ return val;
+}
+
+static inline void adfs_writeval(unsigned char *p, int len, unsigned int val)
+{
+ switch (len) {
+ case 4: p[3] = val >> 24;
+ case 3: p[2] = val >> 16;
+ case 2: p[1] = val >> 8;
+ default: p[0] = val;
+ }
+}
+
+static inline int adfs_readname(char *buf, char *ptr, int maxlen)
+{
+ char *old_buf = buf;
+
+ while (*ptr >= ' ' && maxlen--) {
+ if (*ptr == '/')
+ *buf++ = '.';
+ else
+ *buf++ = *ptr;
+ ptr++;
+ }
+ *buf = '\0';
+
+ return buf - old_buf;
+}
+
+static inline void adfs_writename(char *to, char *from, int maxlen)
+{
+ int i;
+
+ for (i = 0; i < maxlen; i++) {
+ if (from[i] == '\0')
+ break;
+ if (from[i] == '.')
+ to[i] = '/';
+ else
+ to[i] = from[i];
+ }
+
+ for (; i < maxlen; i++)
+ to[i] = '\0';
+}
+
+#define ror13(v) ((v >> 13) | (v << 19))
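+/*
+ * ror13() is a 32-bit rotate right by 13, e.g. ror13(1) == 0x00080000;
+ * presumably a direct transliteration of the ARM ROR the RISC OS
+ * checksum was designed around.
+ */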
+
+#define dir_u8(idx) \
+ ({ int _buf = idx >> blocksize_bits; \
+ int _off = idx - (_buf << blocksize_bits);\
+ *(u8 *)(bh[_buf]->b_data + _off); \
+ })
+
+#define dir_u32(idx) \
+ ({ int _buf = idx >> blocksize_bits; \
+ int _off = idx - (_buf << blocksize_bits);\
+ *(u32 *)(bh[_buf]->b_data + _off); \
+ })
+
+#define bufoff(_bh,_idx) \
+ ({ int _buf = _idx >> blocksize_bits; \
+ int _off = _idx - (_buf << blocksize_bits);\
+ (u8 *)(_bh[_buf]->b_data + _off); \
+ })
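+
+/*
+ * These helpers turn a byte index within the directory into a
+ * (buffer, offset) pair.  For example, with 512-byte blocks
+ * (blocksize_bits = 9), index 1300 lands in buffer 2 (1300 >> 9)
+ * at offset 276 (1300 - 1024).
+ */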
+
+/*
+ * There are some algorithms that are nice in
+ * assembler, but a bitch in C... This is one
+ * of them.
+ */
+static u8
+adfs_dir_checkbyte(const struct adfs_dir *dir)
+{
+ struct buffer_head * const *bh = dir->bh;
+ const int blocksize_bits = dir->sb->s_blocksize_bits;
+ union { u32 *ptr32; u8 *ptr8; } ptr, end;
+ u32 dircheck = 0;
+ int last = 5 - 26;
+ int i = 0;
+
+ /*
+ * Accumulate each word up to the last whole
+ * word of the last directory entry. This
+ * can spread across several buffer heads.
+ */
+ do {
+ last += 26;
+ do {
+ dircheck = cpu_to_le32(dir_u32(i)) ^ ror13(dircheck);
+
+ i += sizeof(u32);
+ } while (i < (last & ~3));
+ } while (dir_u8(last) != 0);
+
+ /*
+ * Accumulate the last few bytes. These
+ * bytes will be within the same bh.
+ */
+ if (i != last) {
+ ptr.ptr8 = bufoff(bh, i);
+ end.ptr8 = ptr.ptr8 + last - i;
+
+ do
+ dircheck = *ptr.ptr8++ ^ ror13(dircheck);
+ while (ptr.ptr8 < end.ptr8);
+ }
+
+ /*
+ * The directory tail is in the final bh.
+ * Note that contrary to the RISC OS PRMs,
+ * the first few bytes are NOT included
+ * in the check. All bytes are in the
+ * same bh.
+ */
+ ptr.ptr8 = bufoff(bh, 2008);
+ end.ptr8 = ptr.ptr8 + 36;
+
+ do {
+ unsigned int v = *ptr.ptr32++;
+ dircheck = cpu_to_le32(v) ^ ror13(dircheck);
+ } while (ptr.ptr32 < end.ptr32);
+
+ return (dircheck ^ (dircheck >> 8) ^ (dircheck >> 16) ^ (dircheck >> 24)) & 0xff;
+}
+
+/*
+ * Read and check that a directory is valid
+ */
+int
+adfs_dir_read(struct super_block *sb, unsigned long object_id,
+ unsigned int size, struct adfs_dir *dir)
+{
+ const unsigned int blocksize_bits = sb->s_blocksize_bits;
+ int blk = 0;
+
+ /*
+ * Directories whose size is not a multiple of 2048 bytes
+ * are considered bad, v2 [3.6]
+ */
+ if (size & 2047)
+ goto bad_dir;
+
+ size >>= blocksize_bits;
+
+ dir->nr_buffers = 0;
+ dir->sb = sb;
+
+ for (blk = 0; blk < size; blk++) {
+ int phys;
+
+ phys = __adfs_block_map(sb, object_id, blk);
+ if (!phys) {
+ adfs_error(sb, "dir object %lX has a hole at offset %d",
+ object_id, blk);
+ goto release_buffers;
+ }
+
+ dir->bh[blk] = bread(sb->s_dev, phys, sb->s_blocksize);
+ if (!dir->bh[blk])
+ goto release_buffers;
+ }
+
+ memcpy(&dir->dirhead, bufoff(dir->bh, 0), sizeof(dir->dirhead));
+ memcpy(&dir->dirtail, bufoff(dir->bh, 2007), sizeof(dir->dirtail));
+
+ if (dir->dirhead.startmasseq != dir->dirtail.new.endmasseq ||
+ memcmp(&dir->dirhead.startname, &dir->dirtail.new.endname, 4))
+ goto bad_dir;
+
+ if (memcmp(&dir->dirhead.startname, "Nick", 4) &&
+ memcmp(&dir->dirhead.startname, "Hugo", 4))
+ goto bad_dir;
+
+ if (adfs_dir_checkbyte(dir) != dir->dirtail.new.dircheckbyte)
+ goto bad_dir;
+
+ dir->nr_buffers = blk;
+
+ return 0;
+
+bad_dir:
+ adfs_error(sb, "corrupted directory fragment %lX",
+ object_id);
+release_buffers:
+ for (blk -= 1; blk >= 0; blk -= 1)
+ brelse(dir->bh[blk]);
+
+ dir->sb = NULL;
+
+ return -EIO;
+}
+
+/*
+ * convert a disk-based directory entry to a Linux ADFS directory entry
+ */
+static inline void
+adfs_dir2obj(struct object_info *obj, struct adfs_direntry *de)
+{
+ obj->name_len = adfs_readname(obj->name, de->dirobname, ADFS_F_NAME_LEN);
+ obj->file_id = adfs_readval(de->dirinddiscadd, 3);
+ obj->loadaddr = adfs_readval(de->dirload, 4);
+ obj->execaddr = adfs_readval(de->direxec, 4);
+ obj->size = adfs_readval(de->dirlen, 4);
+ obj->attr = de->newdiratts;
+}
+
+/*
+ * convert a Linux ADFS directory entry to a disk-based directory entry
+ */
+static inline void
+adfs_obj2dir(struct adfs_direntry *de, struct object_info *obj)
+{
+ adfs_writeval(de->dirinddiscadd, 3, obj->file_id);
+ adfs_writeval(de->dirload, 4, obj->loadaddr);
+ adfs_writeval(de->direxec, 4, obj->execaddr);
+ adfs_writeval(de->dirlen, 4, obj->size);
+ de->newdiratts = obj->attr;
+}
+
+/*
+ * get a directory entry. Note that the caller is responsible
+ * for holding the relevant locks.
+ */
+int
+__adfs_dir_get(struct adfs_dir *dir, int pos, struct object_info *obj)
+{
+ struct super_block *sb = dir->sb;
+ struct adfs_direntry de;
+ int thissize, buffer, offset;
+
+ buffer = pos >> sb->s_blocksize_bits;
+
+ if (buffer > dir->nr_buffers)
+ return -EINVAL;
+
+ offset = pos & (sb->s_blocksize - 1);
+ thissize = sb->s_blocksize - offset;
+ if (thissize > 26)
+ thissize = 26;
+
+ memcpy(&de, dir->bh[buffer]->b_data + offset, thissize);
+ if (thissize != 26)
+ memcpy(((char *)&de) + thissize, dir->bh[buffer + 1]->b_data,
+ 26 - thissize);
+
+ if (!de.dirobname[0])
+ return -ENOENT;
+
+ adfs_dir2obj(obj, &de);
+
+ return 0;
+}
+
+int
+__adfs_dir_put(struct adfs_dir *dir, int pos, struct object_info *obj)
+{
+ struct super_block *sb = dir->sb;
+ struct adfs_direntry de;
+ int thissize, buffer, offset;
+
+ buffer = pos >> sb->s_blocksize_bits;
+
+ if (buffer > dir->nr_buffers)
+ return -EINVAL;
+
+ offset = pos & (sb->s_blocksize - 1);
+ thissize = sb->s_blocksize - offset;
+ if (thissize > 26)
+ thissize = 26;
+
+ /*
+ * Get the entry in total
+ */
+ memcpy(&de, dir->bh[buffer]->b_data + offset, thissize);
+ if (thissize != 26)
+ memcpy(((char *)&de) + thissize, dir->bh[buffer + 1]->b_data,
+ 26 - thissize);
+
+ /*
+ * update it
+ */
+ adfs_obj2dir(&de, obj);
+
+ /*
+ * Put the new entry back
+ */
+ memcpy(dir->bh[buffer]->b_data + offset, &de, thissize);
+ if (thissize != 26)
+ memcpy(dir->bh[buffer + 1]->b_data, ((char *)&de) + thissize,
+ 26 - thissize);
+
+ return 0;
+}
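+
+/*
+ * Note on the copies above: 26-byte entries are not aligned to the
+ * block size, so an entry may straddle two buffers.  With 512-byte
+ * blocks, the entry at pos 502 has offset 502 and thissize
+ * 512 - 502 = 10; the remaining 16 bytes live at the start of the
+ * next buffer.
+ */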
+
+/*
+ * the caller is responsible for holding the necessary
+ * locks.
+ */
+static int
+adfs_dir_find_entry(struct adfs_dir *dir, unsigned long object_id)
+{
+ int pos, ret;
+
+ ret = -ENOENT;
+
+ for (pos = 5; pos < ADFS_NUM_DIR_ENTRIES * 26 + 5; pos += 26) {
+ struct object_info obj;
+
+ if (__adfs_dir_get(dir, pos, &obj))
+ break;
+
+ if (obj.file_id == object_id) {
+ ret = pos;
+ break;
+ }
+ }
+
+ return ret;
+}
+
+static int
+adfs_f_read(struct super_block *sb, unsigned int id, unsigned int sz, struct adfs_dir *dir)
+{
+ int ret;
+
+ if (sz != ADFS_NEWDIR_SIZE)
+ return -EIO;
+
+ ret = adfs_dir_read(sb, id, sz, dir);
+ if (ret)
+ adfs_error(sb, "unable to read directory");
+ else
+ dir->parent_id = adfs_readval(dir->dirtail.new.dirparent, 3);
+
+ return ret;
+}
+
+static int
+adfs_f_setpos(struct adfs_dir *dir, unsigned int fpos)
+{
+ if (fpos >= ADFS_NUM_DIR_ENTRIES)
+ return -ENOENT;
+
+ dir->pos = 5 + fpos * 26;
+ return 0;
+}
+
+static int
+adfs_f_getnext(struct adfs_dir *dir, struct object_info *obj)
+{
+ unsigned int ret;
+
+ ret = __adfs_dir_get(dir, dir->pos, obj);
+ if (ret == 0)
+ dir->pos += 26;
+
+ return ret;
+}
+
+static int
+adfs_f_update(struct adfs_dir *dir, struct object_info *obj)
+{
+ struct super_block *sb = dir->sb;
+ int ret, i;
+
+ ret = adfs_dir_find_entry(dir, obj->file_id);
+ if (ret < 0) {
+ adfs_error(dir->sb, "unable to locate entry to update");
+ goto out;
+ }
+
+ __adfs_dir_put(dir, ret, obj);
+
+ /*
+ * Increment directory sequence number
+ */
+ dir->bh[0]->b_data[0] += 1;
+ dir->bh[dir->nr_buffers - 1]->b_data[sb->s_blocksize - 6] += 1;
+
+ ret = adfs_dir_checkbyte(dir);
+ /*
+ * Update directory check byte
+ */
+ dir->bh[dir->nr_buffers - 1]->b_data[sb->s_blocksize - 1] = ret;
+
+#if 1
+ {
+ const unsigned int blocksize_bits = sb->s_blocksize_bits;
+
+ memcpy(&dir->dirhead, bufoff(dir->bh, 0), sizeof(dir->dirhead));
+ memcpy(&dir->dirtail, bufoff(dir->bh, 2007), sizeof(dir->dirtail));
+
+ if (dir->dirhead.startmasseq != dir->dirtail.new.endmasseq ||
+ memcmp(&dir->dirhead.startname, &dir->dirtail.new.endname, 4))
+ goto bad_dir;
+
+ if (memcmp(&dir->dirhead.startname, "Nick", 4) &&
+ memcmp(&dir->dirhead.startname, "Hugo", 4))
+ goto bad_dir;
+
+ if (adfs_dir_checkbyte(dir) != dir->dirtail.new.dircheckbyte)
+ goto bad_dir;
+ }
+#endif
+ for (i = dir->nr_buffers - 1; i >= 0; i--)
+ mark_buffer_dirty(dir->bh[i], 1);
+
+ ret = 0;
+out:
+ return ret;
+#if 1
+bad_dir:
+ adfs_error(dir->sb, "whoops! I broke a directory!");
+ return -EIO;
+#endif
+}
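+
+/*
+ * Layout note for the update above: startmasseq is byte 0 of the
+ * directory, endmasseq sits 6 bytes from the end, and the check byte
+ * is the very last byte - hence the b_data[0], [s_blocksize - 6] and
+ * [s_blocksize - 1] accesses into the first and last buffers.
+ */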
+
+static void
+adfs_f_free(struct adfs_dir *dir)
+{
+ int i;
+
+ for (i = dir->nr_buffers - 1; i >= 0; i--) {
+ brelse(dir->bh[i]);
+ dir->bh[i] = NULL;
+ }
+
+ dir->nr_buffers = 0;
+ dir->sb = NULL;
+}
+
+struct adfs_dir_ops adfs_f_dir_ops = {
+ adfs_f_read,
+ adfs_f_setpos,
+ adfs_f_getnext,
+ adfs_f_update,
+ NULL,
+ NULL,
+ adfs_f_free
+};
--- /dev/null
+/*
+ * linux/fs/adfs/dir_f.h
+ *
+ * Copyright (C) 1999 Russell King
+ *
+ * Structures of directories on the F format disk
+ */
+#ifndef ADFS_DIR_F_H
+#define ADFS_DIR_F_H
+
+/*
+ * Directory header
+ */
+struct adfs_dirheader {
+ unsigned char startmasseq;
+ unsigned char startname[4];
+};
+
+#define ADFS_NEWDIR_SIZE 2048
+#define ADFS_NUM_DIR_ENTRIES 77
+
+/*
+ * Directory entries
+ */
+struct adfs_direntry {
+#define ADFS_F_NAME_LEN 10
+ char dirobname[ADFS_F_NAME_LEN];
+ __u8 dirload[4];
+ __u8 direxec[4];
+ __u8 dirlen[4];
+ __u8 dirinddiscadd[3];
+ __u8 newdiratts;
+};
+
+/*
+ * Directory tail
+ */
+union adfs_dirtail {
+ struct {
+ unsigned char dirlastmask;
+ char dirname[10];
+ unsigned char dirparent[3];
+ char dirtitle[19];
+ unsigned char reserved[14];
+ unsigned char endmasseq;
+ unsigned char endname[4];
+ unsigned char dircheckbyte;
+ } old;
+ struct {
+ unsigned char dirlastmask;
+ unsigned char reserved[2];
+ unsigned char dirparent[3];
+ char dirtitle[19];
+ char dirname[10];
+ unsigned char endmasseq;
+ unsigned char endname[4];
+ unsigned char dircheckbyte;
+ } new;
+};
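+
+/*
+ * Size check: each entry is 10 + 4 + 4 + 4 + 3 + 1 = 26 bytes, so
+ * 5 bytes of header, 77 * 26 = 2002 bytes of entries and a 41-byte
+ * tail exactly fill the 2048-byte new directory.
+ */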
+
+#endif
--- /dev/null
+/*
+ * linux/fs/adfs/dir_fplus.c
+ *
+ * Copyright (C) 1997-1999 Russell King
+ */
+#include <linux/version.h>
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/adfs_fs.h>
+#include <linux/sched.h>
+#include <linux/stat.h>
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,0)
+#include <linux/spinlock.h>
+#else
+#include <asm/spinlock.h>
+#endif
+
+#include "adfs.h"
+#include "dir_fplus.h"
+
+static int
+adfs_fplus_read(struct super_block *sb, unsigned int id, unsigned int sz, struct adfs_dir *dir)
+{
+ struct adfs_bigdirheader *h;
+ struct adfs_bigdirtail *t;
+ unsigned long block;
+ unsigned int blk, size;
+ int i, ret = -EIO;
+
+ dir->nr_buffers = 0;
+
+ block = __adfs_block_map(sb, id, 0);
+ if (!block) {
+ adfs_error(sb, "dir object %X has a hole at offset 0", id);
+ goto out;
+ }
+
+ dir->bh[0] = bread(sb->s_dev, block, sb->s_blocksize);
+ if (!dir->bh[0])
+ goto out;
+ dir->nr_buffers += 1;
+
+ h = (struct adfs_bigdirheader *)dir->bh[0]->b_data;
+ size = le32_to_cpu(h->bigdirsize);
+ if (size != sz) {
+ printk(KERN_WARNING "adfs: adfs_fplus_read: directory header size"
+ " does not match directory size\n");
+ }
+
+ if (h->bigdirversion[0] != 0 || h->bigdirversion[1] != 0 ||
+ h->bigdirversion[2] != 0 || size & 2047 ||
+ h->bigdirstartname != cpu_to_le32(BIGDIRSTARTNAME))
+ goto out;
+
+ size >>= sb->s_blocksize_bits;
+ for (blk = 1; blk < size; blk++) {
+ block = __adfs_block_map(sb, id, blk);
+ if (!block) {
+ adfs_error(sb, "dir object %X has a hole at offset %d", id, blk);
+ goto out;
+ }
+
+ dir->bh[blk] = bread(sb->s_dev, block, sb->s_blocksize);
+ if (!dir->bh[blk])
+ goto out;
+ dir->nr_buffers = blk;
+ }
+
+ t = (struct adfs_bigdirtail *)(dir->bh[size - 1]->b_data + (sb->s_blocksize - 8));
+
+ if (t->bigdirendname != cpu_to_le32(BIGDIRENDNAME) ||
+ t->bigdirendmasseq != h->startmasseq ||
+ t->reserved[0] != 0 || t->reserved[1] != 0)
+ goto out;
+
+ dir->parent_id = le32_to_cpu(h->bigdirparent);
+ dir->sb = sb;
+ return 0;
+out:
+ for (i = 0; i < dir->nr_buffers; i++)
+ brelse(dir->bh[i]);
+ dir->sb = NULL;
+ return ret;
+}
+
+static int
+adfs_fplus_setpos(struct adfs_dir *dir, unsigned int fpos)
+{
+ struct adfs_bigdirheader *h = (struct adfs_bigdirheader *)dir->bh[0]->b_data;
+ int ret = -ENOENT;
+
+ if (fpos <= le32_to_cpu(h->bigdirentries)) {
+ dir->pos = fpos;
+ ret = 0;
+ }
+
+ return ret;
+}
+
+static void
+dir_memcpy(struct adfs_dir *dir, unsigned int offset, void *to, int len)
+{
+ struct super_block *sb = dir->sb;
+ unsigned int buffer, partial, remainder;
+
+ buffer = offset >> sb->s_blocksize_bits;
+ offset &= sb->s_blocksize - 1;
+
+ partial = sb->s_blocksize - offset;
+
+ if (partial >= len)
+ memcpy(to, dir->bh[buffer]->b_data + offset, len);
+ else {
+ char *c = (char *)to;
+
+ remainder = len - partial;
+
+ memcpy(c, dir->bh[buffer]->b_data + offset, partial);
+ memcpy(c + partial, dir->bh[buffer + 1]->b_data, remainder);
+ }
+}
+
+static int
+adfs_fplus_getnext(struct adfs_dir *dir, struct object_info *obj)
+{
+ struct adfs_bigdirheader *h = (struct adfs_bigdirheader *)dir->bh[0]->b_data;
+ struct adfs_bigdirentry bde;
+ unsigned int offset;
+ int i, ret = -ENOENT;
+
+ if (dir->pos >= le32_to_cpu(h->bigdirentries))
+ goto out;
+
+ offset = offsetof(struct adfs_bigdirheader, bigdirname);
+ offset += ((le32_to_cpu(h->bigdirnamelen) + 4) & ~3);
+ offset += dir->pos * sizeof(struct adfs_bigdirentry);
+
+ dir_memcpy(dir, offset, &bde, sizeof(struct adfs_bigdirentry));
+
+ obj->loadaddr = le32_to_cpu(bde.bigdirload);
+ obj->execaddr = le32_to_cpu(bde.bigdirexec);
+ obj->size = le32_to_cpu(bde.bigdirlen);
+ obj->file_id = le32_to_cpu(bde.bigdirindaddr);
+ obj->attr = le32_to_cpu(bde.bigdirattr);
+ obj->name_len = le32_to_cpu(bde.bigdirobnamelen);
+
+ offset = offsetof(struct adfs_bigdirheader, bigdirname);
+ offset += ((le32_to_cpu(h->bigdirnamelen) + 4) & ~3);
+ offset += le32_to_cpu(h->bigdirentries) * sizeof(struct adfs_bigdirentry);
+ offset += le32_to_cpu(bde.bigdirobnameptr);
+
+ dir_memcpy(dir, offset, obj->name, obj->name_len);
+ for (i = 0; i < obj->name_len; i++)
+ if (obj->name[i] == '/')
+ obj->name[i] = '.';
+
+ dir->pos += 1;
+ ret = 0;
+out:
+ return ret;
+}
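+
+/*
+ * The offset arithmetic above reflects the big directory layout:
+ * header, directory name (with terminator, padded to a word
+ * boundary - the "(namelen + 4) & ~3" term), the array of
+ * struct adfs_bigdirentry, and finally the name heap into which
+ * bigdirobnameptr points.
+ */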
+
+static void
+adfs_fplus_free(struct adfs_dir *dir)
+{
+ int i;
+
+ for (i = 0; i < dir->nr_buffers; i++)
+ brelse(dir->bh[i]);
+ dir->sb = NULL;
+}
+
+struct adfs_dir_ops adfs_fplus_dir_ops = {
+ adfs_fplus_read,
+ adfs_fplus_setpos,
+ adfs_fplus_getnext,
+ NULL,
+ NULL,
+ NULL,
+ adfs_fplus_free
+};
--- /dev/null
+/*
+ * linux/fs/adfs/dir_fplus.h
+ *
+ * Copyright (C) 1999 Russell King
+ *
+ * Structures of directories on the F+ format disk
+ */
+
+#define ADFS_FPLUS_NAME_LEN 255
+
+#define BIGDIRSTARTNAME ('S' | 'B' << 8 | 'P' << 16 | 'r' << 24)
+#define BIGDIRENDNAME ('o' | 'v' << 8 | 'e' << 16 | 'n' << 24)
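+
+/*
+ * Together the two markers spell "SBProven", the on-disc magic
+ * delimiting a big directory.
+ */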
+
+struct adfs_bigdirheader {
+ __u8 startmasseq;
+ __u8 bigdirversion[3];
+ __u32 bigdirstartname;
+ __u32 bigdirnamelen;
+ __u32 bigdirsize;
+ __u32 bigdirentries;
+ __u32 bigdirnamesize;
+ __u32 bigdirparent;
+ char bigdirname[1];
+};
+
+struct adfs_bigdirentry {
+ __u32 bigdirload;
+ __u32 bigdirexec;
+ __u32 bigdirlen;
+ __u32 bigdirindaddr;
+ __u32 bigdirattr;
+ __u32 bigdirobnamelen;
+ __u32 bigdirobnameptr;
+};
+
+struct adfs_bigdirtail {
+ __u32 bigdirendname;
+ __u8 bigdirendmasseq;
+ __u8 reserved[2];
+ __u8 bigdircheckbyte;
+};
/*
* linux/fs/adfs/file.c
*
- * Copyright (C) 1997 Russell King
+ * Copyright (C) 1997-1999 Russell King
* from:
*
* linux/fs/ext2/file.c
*
* adfs regular file handling primitives
*/
-
+#include <linux/version.h>
#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/ext2_fs.h>
#include <linux/sched.h>
#include <linux/stat.h>
+#include "adfs.h"
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,0)
+/*
+ * Write to a file (through the page cache).
+ */
+static ssize_t
+adfs_file_write(struct file *file, const char *buf, size_t count, loff_t *ppos)
+{
+ ssize_t retval;
+
+ retval = generic_file_write(file, buf, count, ppos,
+ block_write_partial_page);
+
+ if (retval > 0) {
+ struct inode *inode = file->f_dentry->d_inode;
+ inode->i_ctime = inode->i_mtime = CURRENT_TIME;
+ mark_inode_dirty(inode);
+ }
+
+ return retval;
+}
+#endif
+
/*
* We have mostly NULLs here: the current defaults are OK for
* the adfs filesystem.
*/
static struct file_operations adfs_file_operations = {
- NULL, /* lseek - default */
+ NULL, /* lseek */
generic_file_read, /* read */
- NULL, /* write */
- NULL, /* readdir - bad */
- NULL, /* select - default */
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,0)
+ adfs_file_write, /* write */
+#else
+ NULL,
+#endif
+ NULL, /* readdir */
+ NULL, /* poll */
NULL, /* ioctl */
generic_file_mmap, /* mmap */
- NULL, /* open - not special */
+ NULL, /* open */
NULL, /* flush */
NULL, /* release */
file_fsync, /* fsync */
};
struct inode_operations adfs_file_inode_operations = {
- &adfs_file_operations, /* default file operations */
- NULL, /* create */
- NULL, /* lookup */
- NULL, /* link */
- NULL, /* unlink */
- NULL, /* symlink */
- NULL, /* mkdir */
- NULL, /* rmdir */
- NULL, /* mknod */
- NULL, /* rename */
- NULL, /* readlink */
- NULL, /* follow_link */
- adfs_bmap, /* get_block */
- block_read_full_page, /* readpage */
- NULL, /* writepage */
- NULL, /* truncate */
- NULL, /* permission */
- NULL /* revalidate */
+ &adfs_file_operations, /* default file operations */
+ NULL, /* create */
+ NULL, /* lookup */
+ NULL, /* link */
+ NULL, /* unlink */
+ NULL, /* symlink */
+ NULL, /* mkdir */
+ NULL, /* rmdir */
+ NULL, /* mknod */
+ NULL, /* rename */
+ NULL, /* readlink */
+ NULL, /* follow_link */
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,0)
+ adfs_get_block, /* bmap */
+ block_read_full_page, /* readpage */
+ block_write_full_page, /* writepage */
+#else
+ generic_readpage, /* readpage */
+ NULL, /* writepage */
+ adfs_bmap, /* bmap */
+#endif
+ NULL, /* truncate */
+ NULL, /* permission */
+ NULL, /* revalidate */
};
/*
* linux/fs/adfs/inode.c
*
- * Copyright (C) 1997 Russell King
+ * Copyright (C) 1997-1999 Russell King
*/
-
+#include <linux/version.h>
#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/adfs_fs.h>
#include <linux/locks.h>
#include <linux/mm.h>
+#include "adfs.h"
+
/*
- * Old Inode numbers:
- * bit 30 - 16 FragID of parent object
- * bit 15 0 1
- * bit 14 - 0 FragID of object Offset into parent FragID
- *
- * New Inode numbers:
- * Inode = Frag ID of parent (14) + Frag Offset (8) + (index into directory + 1)(8)
+ * Lookup/Create a block at offset 'block' into 'inode'. We currently do
+ * not support creation of new blocks, so we return -EIO for this case.
*/
-#define inode_frag(ino) ((ino) >> 8)
-#define inode_idx(ino) ((ino) & 0xff)
-#define inode_dirindex(idx) (((idx) & 0xff) * 26 - 21)
-
-#define frag_id(x) (((x) >> 8) & 0x7fff)
-#define off(x) (((x) & 0xff) ? (((x) & 0xff) - 1) << sb->u.adfs_sb.s_dr->log2sharesize : 0)
-
-static inline int adfs_inode_validate_no (struct super_block *sb, unsigned int inode_no)
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,0)
+int
+adfs_get_block(struct inode *inode, long block, struct buffer_head *bh, int create)
{
- unsigned long max_frag_id;
+ if (block < 0)
+ goto abort_negative;
+
+ if (!create) {
+ if (block >= inode->i_blocks)
+ goto abort_toobig;
+
+ block = __adfs_block_map(inode->i_sb, inode->i_ino, block);
+ if (block) {
+ bh->b_dev = inode->i_dev;
+ bh->b_blocknr = block;
+ bh->b_state |= (1UL << BH_Mapped);
+ }
+ return 0;
+ }
+ /* don't support allocation of blocks yet */
+ return -EIO;
- max_frag_id = sb->u.adfs_sb.s_map_size * sb->u.adfs_sb.s_ids_per_zone;
+abort_negative:
+ adfs_error(inode->i_sb, "block %d < 0", block);
+ return -EIO;
- return (inode_no & 0x800000ff) ||
- (frag_id (inode_frag (inode_no)) > max_frag_id) ||
- (frag_id (inode_frag (inode_no)) < 2);
+abort_toobig:
+ return 0;
}
-
-int adfs_inode_validate (struct inode *inode)
+#else
+int adfs_bmap(struct inode *inode, int block)
{
- struct super_block *sb = inode->i_sb;
+ if (block >= inode->i_blocks)
+ return 0;
- return adfs_inode_validate_no (sb, inode->i_ino & 0xffffff00) ||
- adfs_inode_validate_no (sb, inode->u.adfs_i.file_id << 8);
+ return __adfs_block_map(inode->i_sb, inode->i_ino, block);
}
+#endif
-unsigned long adfs_inode_generate (unsigned long parent_id, int diridx)
+static inline unsigned int
+adfs_filetype(struct inode *inode)
{
- if (!parent_id)
- return -1;
+ unsigned int type;
- if (diridx)
- diridx = (diridx + 21) / 26;
+ if (inode->u.adfs_i.stamped)
+ type = (inode->u.adfs_i.loadaddr >> 8) & 0xfff;
+ else
+ type = (unsigned int) -1;
- return (parent_id << 8) | diridx;
+ return type;
}
-unsigned long adfs_inode_objid (struct inode *inode)
+/*
+ * Convert ADFS attributes and filetype to Linux permission.
+ */
+static umode_t
+adfs_atts2mode(struct super_block *sb, struct inode *inode)
{
- if (adfs_inode_validate (inode)) {
- adfs_error (inode->i_sb, "adfs_inode_objid",
- "bad inode number: %lu (%X,%X)",
- inode->i_ino, inode->i_ino, inode->u.adfs_i.file_id);
- return 0;
+ unsigned int filetype, attr = inode->u.adfs_i.attr;
+ umode_t mode, rmask;
+
+ if (attr & ADFS_NDA_DIRECTORY) {
+ mode = S_IRUGO & sb->u.adfs_sb.s_owner_mask;
+ return S_IFDIR | S_IXUGO | mode;
}
- return inode->u.adfs_i.file_id;
-}
+ filetype = adfs_filetype(inode);
-int adfs_bmap (struct inode *inode, int block)
-{
- struct super_block *sb = inode->i_sb;
- unsigned int blk;
+ switch (filetype) {
+ case 0xfc0: /* LinkFS */
+ return S_IFLNK|S_IRWXUGO;
- if (adfs_inode_validate (inode)) {
- adfs_error (sb, "adfs_bmap",
- "bad inode number: %lu (%X,%X)",
- inode->i_ino, inode->i_ino, inode->u.adfs_i.file_id);
- return 0;
- }
+ case 0xfe6: /* UnixExec */
+ rmask = S_IRUGO | S_IXUGO;
+ break;
- if (block < 0) {
- adfs_error(sb, "adfs_bmap", "block(%d) < 0", block);
- return 0;
+ default:
+ rmask = S_IRUGO;
}
- if (block > inode->i_blocks)
- return 0;
+ mode = S_IFREG;
- block += off(inode->u.adfs_i.file_id);
+ if (attr & ADFS_NDA_OWNER_READ)
+ mode |= rmask & sb->u.adfs_sb.s_owner_mask;
- if (frag_id(inode->u.adfs_i.file_id) == ADFS_ROOT_FRAG)
- blk = sb->u.adfs_sb.s_map_block + block;
- else
- blk = adfs_map_lookup (sb, frag_id(inode->u.adfs_i.file_id), block);
- return blk;
+ if (attr & ADFS_NDA_OWNER_WRITE)
+ mode |= S_IWUGO & sb->u.adfs_sb.s_owner_mask;
+
+ if (attr & ADFS_NDA_PUBLIC_READ)
+ mode |= rmask & sb->u.adfs_sb.s_other_mask;
+
+ if (attr & ADFS_NDA_PUBLIC_WRITE)
+ mode |= S_IWUGO & sb->u.adfs_sb.s_other_mask;
+ return mode;
}
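+
+/*
+ * Worked example (assuming the common defaults of s_owner_mask = 0700
+ * and s_other_mask = 0077, both mount-time options): a regular file
+ * with attr = ADFS_NDA_OWNER_READ | ADFS_NDA_PUBLIC_READ and an
+ * ordinary filetype gets S_IFREG | (S_IRUGO & 0700) | (S_IRUGO & 0077),
+ * i.e. mode 0444.
+ */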
-unsigned int adfs_parent_bmap (struct inode *inode, int block)
+/*
+ * Convert Linux permission to ADFS attribute. We try to do the reverse
+ * of atts2mode, but there is not a 1:1 translation.
+ */
+static int
+adfs_mode2atts(struct super_block *sb, struct inode *inode)
{
- struct super_block *sb = inode->i_sb;
- unsigned int blk, fragment;
+ umode_t mode;
+ int attr;
- if (adfs_inode_validate_no (sb, inode->i_ino & 0xffffff00)) {
- adfs_error (sb, "adfs_parent_bmap",
- "bad inode number: %lu (%X,%X)",
- inode->i_ino, inode->i_ino, inode->u.adfs_i.file_id);
- return 0;
- }
+ /* FIXME: should we be able to alter a link? */
+ if (S_ISLNK(inode->i_mode))
+ return inode->u.adfs_i.attr;
- fragment = inode_frag (inode->i_ino);
- if (frag_id (fragment) == ADFS_ROOT_FRAG)
- blk = sb->u.adfs_sb.s_map_block + off(fragment) + block;
+ if (S_ISDIR(inode->i_mode))
+ attr = ADFS_NDA_DIRECTORY;
else
- blk = adfs_map_lookup (sb, frag_id (fragment), off(fragment) + block);
- return blk;
+ attr = 0;
+
+ mode = inode->i_mode & sb->u.adfs_sb.s_owner_mask;
+ if (mode & S_IRUGO)
+ attr |= ADFS_NDA_OWNER_READ;
+ if (mode & S_IWUGO)
+ attr |= ADFS_NDA_OWNER_WRITE;
+
+ mode = inode->i_mode & sb->u.adfs_sb.s_other_mask;
+ mode &= ~sb->u.adfs_sb.s_owner_mask;
+ if (mode & S_IRUGO)
+ attr |= ADFS_NDA_PUBLIC_READ;
+ if (mode & S_IWUGO)
+ attr |= ADFS_NDA_PUBLIC_WRITE;
+
+ return attr;
}
-static int adfs_atts2mode(struct super_block *sb, unsigned char mode, unsigned int filetype)
+/*
+ * Convert an ADFS time to Unix time. ADFS has a 40-bit centi-second time
+ * referenced to 1 Jan 1900 (until 2248)
+ */
+static unsigned int
+adfs_adfs2unix_time(struct inode *inode)
{
- int omode = 0;
-
- if (filetype == 0xfc0 /* LinkFS */) {
- omode = S_IFLNK|S_IRUSR|S_IWUSR|S_IXUSR|
- S_IRGRP|S_IWGRP|S_IXGRP|
- S_IROTH|S_IWOTH|S_IXOTH;
- } else {
- if (mode & ADFS_NDA_DIRECTORY) {
- omode |= S_IRUGO & sb->u.adfs_sb.s_owner_mask;
- omode |= S_IFDIR|S_IXUSR|S_IXGRP|S_IXOTH;
- } else
- omode |= S_IFREG;
-
- if (mode & ADFS_NDA_OWNER_READ) {
- omode |= S_IRUGO & sb->u.adfs_sb.s_owner_mask;
- if (filetype == 0xfe6 /* UnixExec */)
- omode |= S_IXUGO & sb->u.adfs_sb.s_owner_mask;
- }
+ unsigned int high, low;
- if (mode & ADFS_NDA_OWNER_WRITE)
- omode |= S_IWUGO & sb->u.adfs_sb.s_owner_mask;
+ if (inode->u.adfs_i.stamped == 0)
+ return CURRENT_TIME;
- if (mode & ADFS_NDA_PUBLIC_READ) {
- omode |= S_IRUGO & sb->u.adfs_sb.s_other_mask;
- if (filetype == 0xfe6 /* UnixExec */)
- omode |= S_IXUGO & sb->u.adfs_sb.s_other_mask;
- }
+ high = inode->u.adfs_i.loadaddr << 24;
+ low = inode->u.adfs_i.execaddr;
- if (mode & ADFS_NDA_PUBLIC_WRITE)
- omode |= S_IWUGO & sb->u.adfs_sb.s_other_mask;
- }
- return omode;
+ high |= low >> 8;
+ low &= 255;
+
+ /* Files dated pre 01 Jan 1970 00:00:00. */
+ if (high < 0x336e996a)
+ return 0;
+
+ /* Files dated post 18 Jan 2038 03:14:05. */
+ if (high >= 0x656e9969)
+ return 0x7ffffffd;
+
+ /* discard 2208988800 (0x336e996a00) seconds of time */
+ high -= 0x336e996a;
+
+ /* convert 40-bit centi-seconds to 32-bit seconds */
+ return (((high % 100) << 8) + low) / 100 + (high / 100 << 8);
}
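+
+/*
+ * Worked example of the final conversion (adapted from the comment in
+ * the old code): for 65537 centi-seconds, high = 256 and low = 1;
+ * (256 % 100) << 8 = 14336, + 1 = 14337, / 100 = 143, and
+ * (256 / 100) << 8 = 512, giving 143 + 512 = 655 = 65537 / 100 seconds.
+ */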
-void adfs_read_inode (struct inode *inode)
+/*
+ * Convert a Unix time to ADFS time. We only do this if the entry has a
+ * time/date stamp already.
+ */
+static void
+adfs_unix2adfs_time(struct inode *inode, unsigned int secs)
{
- struct super_block *sb;
- struct buffer_head *bh[4];
- struct adfs_idir_entry ide;
- int buffers;
-
- sb = inode->i_sb;
- inode->i_uid = sb->u.adfs_sb.s_uid;
- inode->i_gid = sb->u.adfs_sb.s_gid;
- inode->i_version = ++event;
+ unsigned int high, low;
- if (adfs_inode_validate_no (sb, inode->i_ino & 0xffffff00)) {
- adfs_error (sb, "adfs_read_inode",
- "bad inode number: %lu", inode->i_ino);
- goto bad;
+ if (inode->u.adfs_i.stamped) {
+ /* convert 32-bit seconds to 40-bit centi-seconds */
+ low = (secs & 255) * 100;
+ high = (secs / 256) * 100 + (low >> 8) + 0x336e996a;
+
+ inode->u.adfs_i.loadaddr = (high >> 24) |
+ (inode->u.adfs_i.loadaddr & ~0xff);
+ inode->u.adfs_i.execaddr = (low & 255) | (high << 8);
}
+}
- if (frag_id(inode_frag (inode->i_ino)) == ADFS_ROOT_FRAG &&
- inode_idx (inode->i_ino) == 0) {
- /* root dir */
- inode->i_mode = S_IRWXUGO | S_IFDIR;
- inode->i_nlink = 2;
- inode->i_size = ADFS_NEWDIR_SIZE;
- inode->i_blksize = PAGE_SIZE;
- inode->i_blocks = inode->i_size >> sb->s_blocksize_bits;
- inode->i_mtime =
- inode->i_atime =
- inode->i_ctime = 0;
- inode->u.adfs_i.file_id = inode_frag (inode->i_ino);
- } else {
- if (!(buffers = adfs_dir_read_parent (inode, bh)))
- goto bad;
-
- if (adfs_dir_check (inode, bh, buffers, NULL)) {
- adfs_dir_free (bh, buffers);
- goto bad;
- }
+/*
+ * Fill in the inode information from the object information.
+ *
+ * Note that this is an inode-less filesystem, so we can't use the inode
+ * number to reference the metadata on the media. Instead, we use the
+ * inode number to hold the object ID, which in turn will tell us where
+ * the data is held. We also save the parent object ID, and with these
+ * two, we can locate the metadata.
+ *
+ * This does mean that we rely on an object's parent remaining the same at
+ * all times - we cannot cope with a cross-directory rename (yet).
+ */
+struct inode *
+adfs_iget(struct super_block *sb, struct object_info *obj)
+{
+ struct inode *inode;
- if (!adfs_dir_find_entry (sb, bh, buffers, inode_dirindex (inode->i_ino), &ide)) {
- adfs_dir_free (bh, buffers);
- goto bad;
- }
- adfs_dir_free (bh, buffers);
- inode->i_mode = adfs_atts2mode(sb, ide.mode, ide.filetype);
- inode->i_nlink = 2;
- inode->i_size = ide.size;
- inode->i_blksize = PAGE_SIZE;
- inode->i_blocks = (inode->i_size + sb->s_blocksize - 1) >> sb->s_blocksize_bits;
- inode->i_mtime =
- inode->i_atime =
- inode->i_ctime = ide.mtime;
- inode->u.adfs_i.file_id = ide.file_id;
- }
+ inode = get_empty_inode();
+ if (!inode)
+ goto out;
+
+ inode->i_version = ++event;
+ inode->i_sb = sb;
+ inode->i_dev = sb->s_dev;
+ inode->i_uid = sb->u.adfs_sb.s_uid;
+ inode->i_gid = sb->u.adfs_sb.s_gid;
+ inode->i_ino = obj->file_id;
+ inode->i_size = obj->size;
+ inode->i_nlink = 2;
+ inode->i_blksize = PAGE_SIZE;
+ inode->i_blocks = (inode->i_size + sb->s_blocksize - 1) >>
+ sb->s_blocksize_bits;
+
+ /*
+ * we need to save the parent directory ID so that
+ * write_inode can update the directory information
+ * for this file. This will need special handling
+ * for cross-directory renames.
+ */
+ inode->u.adfs_i.parent_id = obj->parent_id;
+ inode->u.adfs_i.loadaddr = obj->loadaddr;
+ inode->u.adfs_i.execaddr = obj->execaddr;
+ inode->u.adfs_i.attr = obj->attr;
+ inode->u.adfs_i.stamped = ((obj->loadaddr & 0xfff00000) == 0xfff00000);
+
+ inode->i_mode = adfs_atts2mode(sb, inode);
+ inode->i_mtime =
+ inode->i_atime =
+ inode->i_ctime = adfs_adfs2unix_time(inode);
if (S_ISDIR(inode->i_mode))
- inode->i_op = &adfs_dir_inode_operations;
+ inode->i_op = &adfs_dir_inode_operations;
else if (S_ISREG(inode->i_mode))
- inode->i_op = &adfs_file_inode_operations;
- return;
+ inode->i_op = &adfs_file_inode_operations;
+
+ insert_inode_hash(inode);
+
+out:
+ return inode;
+}
-bad:
+/*
+ * This is no longer a valid way to obtain the metadata associated with the
+ * inode number on this filesystem. This means that this filesystem cannot
+ * be shared via NFS.
+ */
+void adfs_read_inode(struct inode *inode)
+{
+ adfs_error(inode->i_sb, "unsupported method of reading inode");
make_bad_inode(inode);
}
+
+/*
+ * Validate and convert changed access modes/times to their ADFS equivalents.
+ * adfs_write_inode will actually write the information back to the directory
+ * later.
+ */
+int
+adfs_notify_change(struct dentry *dentry, struct iattr *attr)
+{
+ struct inode *inode = dentry->d_inode;
+ struct super_block *sb = inode->i_sb;
+ unsigned int ia_valid = attr->ia_valid;
+ int error;
+
+ error = inode_change_ok(inode, attr);
+
+ /*
+ * we can't change the UID or GID of any file -
+ * we have a global UID/GID in the superblock
+ */
+ if ((ia_valid & ATTR_UID && attr->ia_uid != sb->u.adfs_sb.s_uid) ||
+ (ia_valid & ATTR_GID && attr->ia_gid != sb->u.adfs_sb.s_gid))
+ error = -EPERM;
+
+ if (error)
+ goto out;
+
+ if (ia_valid & ATTR_SIZE)
+ inode->i_size = attr->ia_size;
+ if (ia_valid & ATTR_MTIME) {
+ inode->i_mtime = attr->ia_mtime;
+ adfs_unix2adfs_time(inode, attr->ia_mtime);
+ }
+ /*
+ * FIXME: should we make these == to i_mtime since we don't
+ * have the ability to represent them in our filesystem?
+ */
+ if (ia_valid & ATTR_ATIME)
+ inode->i_atime = attr->ia_atime;
+ if (ia_valid & ATTR_CTIME)
+ inode->i_ctime = attr->ia_ctime;
+ if (ia_valid & ATTR_MODE) {
+ inode->u.adfs_i.attr = adfs_mode2atts(sb, inode);
+ inode->i_mode = adfs_atts2mode(sb, inode);
+ }
+
+ /*
+ * FIXME: should we be marking this inode dirty even if
+ * we don't have any metadata to write back?
+ */
+ if (ia_valid & (ATTR_SIZE | ATTR_MTIME | ATTR_MODE))
+ mark_inode_dirty(inode);
+out:
+ return error;
+}
+
+/*
+ * write an existing inode back to the directory, and therefore the disk.
+ * The adfs-specific inode data has already been updated by
+ * adfs_notify_change()
+ */
+void adfs_write_inode(struct inode *inode)
+{
+ struct super_block *sb = inode->i_sb;
+ struct object_info obj;
+
+ obj.file_id = inode->i_ino;
+ obj.name_len = 0;
+ obj.parent_id = inode->u.adfs_i.parent_id;
+ obj.loadaddr = inode->u.adfs_i.loadaddr;
+ obj.execaddr = inode->u.adfs_i.execaddr;
+ obj.attr = inode->u.adfs_i.attr;
+ obj.size = inode->i_size;
+
+ adfs_dir_update(sb, &obj);
+}
/*
* linux/fs/adfs/map.c
*
- * Copyright (C) 1997 Russell King
+ * Copyright (C) 1997-1999 Russell King
*/
-
+#include <linux/version.h>
#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/adfs_fs.h>
-static inline unsigned int
-adfs_convert_map_to_sector (const struct super_block *sb, unsigned int mapoff)
-{
- if (sb->u.adfs_sb.s_map2blk >= 0)
- mapoff <<= sb->u.adfs_sb.s_map2blk;
- else
- mapoff >>= -sb->u.adfs_sb.s_map2blk;
- return mapoff;
-}
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,0)
+#include <linux/spinlock.h>
+#else
+#include <asm/spinlock.h>
+#endif
+
+#include "adfs.h"
-static inline unsigned int
-adfs_convert_sector_to_map (const struct super_block *sb, unsigned int secoff)
+/*
+ * For the future...
+ */
+static rwlock_t adfs_map_lock;
+
+/*
+ * return the map bit offset of the fragment frag_id in
+ * the zone dm.
+ * Note that the loop is optimised for best asm code -
+ * look at the output of:
+ * gcc -D__KERNEL__ -O2 -I../../include -o - -S map.c
+ */
+static int
+lookup_zone(const struct adfs_discmap *dm, const unsigned int idlen,
+ const unsigned int frag_id, unsigned int *offset)
{
- if (sb->u.adfs_sb.s_map2blk >= 0)
- secoff >>= sb->u.adfs_sb.s_map2blk;
- else
- secoff <<= -sb->u.adfs_sb.s_map2blk;
- return secoff;
+ const unsigned int mapsize = dm->dm_endbit;
+ const unsigned int idmask = (1 << idlen) - 1;
+ unsigned long *map = ((unsigned long *)dm->dm_bh->b_data) + 1;
+ unsigned int start = dm->dm_startbit;
+ unsigned int mapptr;
+
+ do {
+ unsigned long frag;
+
+ /*
+ * get fragment id
+ */
+ asm("@ get fragment id start");
+ {
+ unsigned long v2;
+ unsigned int tmp;
+
+ tmp = start >> 5;
+
+ frag = le32_to_cpu(map[tmp]);
+ v2 = le32_to_cpu(map[tmp + 1]);
+
+ tmp = start & 31;
+
+ frag = (frag >> tmp) | (v2 << (32 - tmp));
+
+ frag &= idmask;
+ }
+ asm("@ get fragment id end");
+
+ mapptr = start + idlen;
+
+ /*
+ * find end of fragment
+ */
+ asm("@ find end of fragment start");
+ {
+ unsigned long v2;
+
+ while ((v2 = map[mapptr >> 5] >> (mapptr & 31)) == 0) {
+ mapptr = (mapptr & ~31) + 32;
+ if (mapptr >= mapsize)
+ goto error;
+ }
+
+ mapptr += 1 + ffz(~v2);
+ }
+ asm("@ find end of fragment end");
+
+ if (frag == frag_id)
+ goto found;
+again:
+ start = mapptr;
+ } while (mapptr < mapsize);
+
+error:
+ return -1;
+
+found:
+ {
+ int length = mapptr - start;
+ if (*offset >= length) {
+ *offset -= length;
+ goto again;
+ }
+ }
+ return start + *offset;
}
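The two asm-bracketed blocks above are simply an idlen-bit little-endian field fetch at an arbitrary bit offset, open-coded so the compiler emits tight ARM code. The userspace sketch below shows the same extraction, ignoring the le32_to_cpu conversion (a no-op on ARM); the names get_bits, map and the main() driver are illustrative only.

/*
 * Sketch: read an idlen-bit field starting at an arbitrary bit offset in
 * an array of 32-bit words, as the "get fragment id" block above does.
 * Assumes host byte order and idlen < 32.
 */
#include <stdint.h>
#include <stdio.h>

static uint32_t get_bits(const uint32_t *map, unsigned int start, unsigned int n)
{
	unsigned int word = start >> 5;		/* which 32-bit word */
	unsigned int bit  = start & 31;		/* bit offset within it */
	uint64_t v = (uint64_t)map[word] | ((uint64_t)map[word + 1] << 32);

	return (uint32_t)((v >> bit) & ((1u << n) - 1));
}

int main(void)
{
	uint32_t map[2] = { 0, 0 };
	uint32_t id = 0x2aa5;			/* a 15-bit fragment id */
	unsigned int start = 25, idlen = 15;

	/* place the id at bit 25 so it straddles words 0 and 1 */
	map[0] |= id << 25;
	map[1] |= id >> 7;			/* 32 - 25 = 7 bits spill over */

	printf("read back %#x (expected %#x)\n", get_bits(map, start, idlen), id);
	return 0;
}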
-static int lookup_zone (struct super_block *sb, int zone, int frag_id, int *offset)
+/*
+ * Scan the free space map, for this zone, calculating the total
+ * number of map bits in each free space fragment.
+ *
+ * Note: idmask is limited to 15 bits [3.2]
+ */
+static unsigned int
+scan_free_map(struct adfs_sb_info *asb, struct adfs_discmap *dm)
{
- unsigned int mapptr, idlen, mapsize;
- unsigned long *map;
+ const unsigned int mapsize = dm->dm_endbit + 32;
+ const unsigned int idlen = asb->s_idlen;
+ const unsigned int frag_idlen = idlen <= 15 ? idlen : 15;
+ const unsigned int idmask = (1 << frag_idlen) - 1;
+ unsigned long *map = (unsigned long *)dm->dm_bh->b_data;
+ unsigned int start = 8, mapptr;
+ unsigned long frag;
+ unsigned long total = 0;
+
+ /*
+ * get fragment id
+ */
+ asm("@ get fragment id start");
+ {
+ unsigned long v2;
+ unsigned int tmp;
+
+ tmp = start >> 5;
+
+ frag = le32_to_cpu(map[tmp]);
+ v2 = le32_to_cpu(map[tmp + 1]);
+
+ tmp = start & 31;
+
+ frag = (frag >> tmp) | (v2 << (32 - tmp));
+
+ frag &= idmask;
+ }
+ asm("@ get fragment id end");
- map = ((unsigned long *)sb->u.adfs_sb.s_map[zone]->b_data) + 1;
- zone =
- mapptr = zone == 0 ? (ADFS_DR_SIZE << 3) : 0;
- idlen = sb->u.adfs_sb.s_idlen;
- mapsize = sb->u.adfs_sb.s_zonesize;
+ /*
+ * If the freelink is null, then no free fragments
+ * exist in this zone.
+ */
+ if (frag == 0)
+ return 0;
do {
- unsigned long v1, v2;
- unsigned int start;
+ start += frag;
- v1 = map[mapptr>>5];
- v2 = map[(mapptr>>5)+1];
+ /*
+ * get fragment id
+ */
+ asm("@ get fragment id start");
+ {
+ unsigned long v2;
+ unsigned int tmp;
- v1 = (v1 >> (mapptr & 31)) | (v2 << (32 - (mapptr & 31)));
- start = mapptr;
- mapptr += idlen;
+ tmp = start >> 5;
+
+ frag = le32_to_cpu(map[tmp]);
+ v2 = le32_to_cpu(map[tmp + 1]);
+
+ tmp = start & 31;
+
+ frag = (frag >> tmp) | (v2 << (32 - tmp));
- v2 = map[mapptr >> 5] >> (mapptr & 31);
- if (!v2) {
- mapptr = (mapptr + 32) & ~31;
- for (; (v2 = map[mapptr >> 5]) == 0 && mapptr < mapsize; mapptr += 32);
+ frag &= idmask;
}
- for (; (v2 & 255) == 0; v2 >>= 8, mapptr += 8);
- for (; (v2 & 1) == 0; v2 >>= 1, mapptr += 1);
- mapptr += 1;
-
- if ((v1 & ((1 << idlen) - 1)) == frag_id) {
- int length = mapptr - start;
- if (*offset >= length)
- *offset -= length;
- else
- return start + *offset - zone;
+ asm("@ get fragment id end");
+
+ mapptr = start + idlen;
+
+ /*
+ * find end of fragment
+ */
+ asm("@ find end of fragment start");
+ {
+ unsigned long v2;
+
+ while ((v2 = map[mapptr >> 5] >> (mapptr & 31)) == 0) {
+ mapptr = (mapptr & ~31) + 32;
+ if (mapptr >= mapsize)
+ goto error;
+ }
+
+ mapptr += 1 + ffz(~v2);
}
- } while (mapptr < mapsize);
+ asm("@ find end of fragment end");
+
+ total += mapptr - start;
+ } while (frag >= idlen + 1);
+
+ if (frag != 0)
+ printk(KERN_ERR "adfs: undersized free fragment\n");
+
+ return total;
+error:
+ printk(KERN_ERR "adfs: oversized free fragment\n");
+ return 0;
+}
+
+static int
+scan_map(struct adfs_sb_info *asb, unsigned int zone,
+ const unsigned int frag_id, unsigned int mapoff)
+{
+ const unsigned int idlen = asb->s_idlen;
+ struct adfs_discmap *dm, *dm_end;
+ int result;
+
+ dm = asb->s_map + zone;
+ zone = asb->s_map_size;
+ dm_end = asb->s_map + zone;
+
+ do {
+ result = lookup_zone(dm, idlen, frag_id, &mapoff);
+
+ if (result != -1)
+ goto found;
+
+ dm ++;
+ if (dm == dm_end)
+ dm = asb->s_map;
+ } while (--zone > 0);
+
return -1;
+found:
+ result -= dm->dm_startbit;
+ result += dm->dm_startblk;
+
+ return result;
+}
+
+/*
+ * Calculate the number of free blocks in the map.
+ *
+ *               nzones
+ * total_free  =  SUM   free_in_zone_n
+ *                n=1
+ */
+unsigned int
+adfs_map_free(struct super_block *sb)
+{
+ struct adfs_sb_info *asb = &sb->u.adfs_sb;
+ struct adfs_discmap *dm;
+ unsigned int total = 0;
+ unsigned int zone;
+
+ dm = asb->s_map;
+ zone = asb->s_map_size;
+
+ do {
+ total += scan_free_map(asb, dm++);
+ } while (--zone > 0);
+
+ return signed_asl(total, asb->s_map2blk);
}
int adfs_map_lookup (struct super_block *sb, int frag_id, int offset)
{
- unsigned int start_zone, zone, max_zone, mapoff, secoff;
+ struct adfs_sb_info *asb = &sb->u.adfs_sb;
+ unsigned int zone, mapoff;
+ int result;
- zone = start_zone = frag_id / sb->u.adfs_sb.s_ids_per_zone;
- max_zone = sb->u.adfs_sb.s_map_size;
+ /*
+ * The map & root fragment is special - it starts in the center of the
+ * disk. The other fragments start at zone (frag / ids_per_zone).
+ */
+ if (frag_id == ADFS_ROOT_FRAG)
+ zone = asb->s_map_size >> 1;
+ else
+ zone = frag_id / asb->s_ids_per_zone;
- if (start_zone >= max_zone) {
- adfs_error (sb, "adfs_map_lookup", "fragment %X is invalid (zone = %d, max = %d)",
- frag_id, start_zone, max_zone);
- return 0;
- }
+ if (zone >= asb->s_map_size)
+ goto bad_fragment;
/* Convert sector offset to map offset */
- mapoff = adfs_convert_sector_to_map (sb, offset);
- /* Calculate sector offset into map block */
- secoff = offset - adfs_convert_map_to_sector (sb, mapoff);
+ mapoff = signed_asl(offset, -asb->s_map2blk);
- do {
- int result = lookup_zone (sb, zone, frag_id, &mapoff);
+ read_lock(&adfs_map_lock);
+ result = scan_map(asb, zone, frag_id, mapoff);
+ read_unlock(&adfs_map_lock);
- if (result != -1) {
- result += zone ? (zone * sb->u.adfs_sb.s_zonesize) - (ADFS_DR_SIZE << 3): 0;
- return adfs_convert_map_to_sector (sb, result) + secoff;
- }
+ if (result > 0) {
+ unsigned int secoff;
- zone ++;
- if (zone >= max_zone)
- zone = 0;
+ /* Calculate sector offset into map block */
+ secoff = offset - signed_asl(mapoff, asb->s_map2blk);
+ return secoff + signed_asl(result, asb->s_map2blk);
+ }
- } while (zone != start_zone);
+ adfs_error(sb, "fragment %04X at offset %d not found in map",
+ frag_id, offset);
+ return 0;
- adfs_error (sb, "adfs_map_lookup", "fragment %X at offset %d not found in map (start zone %d)",
- frag_id, offset, start_zone);
+bad_fragment:
+ adfs_error(sb, "fragment %X is invalid (zone = %d, max = %d)",
+ frag_id, zone, asb->s_map_size);
return 0;
}
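Both conversions above lean on signed_asl(), declared in the new fs/adfs/adfs.h which is not part of this hunk. Presumably it behaves like the adfs_convert_map_to_sector()/adfs_convert_sector_to_map() pair it replaces: shift left for a positive count, right for a negative one. A sketch of that assumption:

/*
 * Sketch of the signed arithmetic-shift helper the map code relies on.
 * The real definition lives in the new fs/adfs/adfs.h; this is only a
 * reconstruction of its assumed behaviour.
 */
static inline int signed_asl(int val, int shift)
{
	if (shift >= 0)
		val <<= shift;
	else
		val >>= -shift;
	return val;
}

With that, signed_asl(offset, -s_map2blk) turns a sector offset into map bits and signed_asl(result, s_map2blk) turns a map bit address back into sectors, whichever of the two units happens to be the larger.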
+++ /dev/null
-/*
- * linux/fs/adfs/namei.c
- *
- * Copyright (C) 1997 Russell King
- */
-
-#include <linux/errno.h>
-#include <linux/fs.h>
-#include <linux/adfs_fs.h>
-#include <linux/fcntl.h>
-#include <linux/sched.h>
-#include <linux/stat.h>
-#include <linux/string.h>
-#include <linux/locks.h>
-
-/*
- * NOTE! unlike strncmp, ext2_match returns 1 for success, 0 for failure
- */
-static int adfs_match (int len, const char * const name, struct adfs_idir_entry *de)
-{
- int i;
-
- if (!de || len > ADFS_NAME_LEN)
- return 0;
- /*
- * "" means "." ---> so paths like "/usr/lib//libc.a" work
- */
- if (!len && de->name_len == 1 && de->name[0] == '.' &&
- de->name[1] == '\0')
- return 1;
- if (len != de->name_len)
- return 0;
-
- for (i = 0; i < len; i++)
- if ((de->name[i] ^ name[i]) & 0x5f)
- return 0;
- return 1;
-}
-
-static int adfs_find_entry (struct inode *dir, const char * const name, int namelen,
- struct adfs_idir_entry *ide)
-{
- struct super_block *sb;
- struct buffer_head *bh[4];
- union adfs_dirtail dt;
- unsigned long parent_object_id, dir_object_id;
- int buffers, pos;
-
- sb = dir->i_sb;
-
- if (adfs_inode_validate (dir)) {
- adfs_error (sb, "adfs_find_entry",
- "invalid inode number: %lu", dir->i_ino);
- return 0;
- }
-
- if (!(buffers = adfs_dir_read (dir, bh))) {
- adfs_error (sb, "adfs_find_entry", "unable to read directory");
- return 0;
- }
-
- if (adfs_dir_check (dir, bh, buffers, &dt)) {
- adfs_dir_free (bh, buffers);
- return 0;
- }
-
- parent_object_id = adfs_val (dt.new.dirparent, 3);
- dir_object_id = adfs_inode_objid (dir);
-
- if (namelen == 2 && name[0] == '.' && name[1] == '.') {
- ide->name_len = 2;
- ide->name[0] = ide->name[1] = '.';
- ide->name[2] = '\0';
- ide->inode_no = adfs_inode_generate (parent_object_id, 0);
- adfs_dir_free (bh, buffers);
- return 1;
- }
-
- pos = 5;
-
- do {
- if (!adfs_dir_get (sb, bh, buffers, pos, dir_object_id, ide))
- break;
-
- if (adfs_match (namelen, name, ide)) {
- adfs_dir_free (bh, buffers);
- return pos;
- }
- pos += 26;
- } while (1);
- adfs_dir_free (bh, buffers);
- return 0;
-}
-
-struct dentry *adfs_lookup (struct inode *dir, struct dentry *dentry)
-{
- struct inode *inode = NULL;
- struct adfs_idir_entry de;
- unsigned long ino;
-
- if (dentry->d_name.len > ADFS_NAME_LEN)
- return ERR_PTR(-ENAMETOOLONG);
-
- if (adfs_find_entry (dir, dentry->d_name.name, dentry->d_name.len, &de)) {
- ino = de.inode_no;
- inode = iget (dir->i_sb, ino);
-
- if (!inode)
- return ERR_PTR(-EACCES);
- }
- d_add(dentry, inode);
- return NULL;
-}
/*
* linux/fs/adfs/super.c
*
- * Copyright (C) 1997 Russell King
+ * Copyright (C) 1997-1999 Russell King
*/
-
+#include <linux/version.h>
#include <linux/module.h>
#include <linux/errno.h>
#include <linux/fs.h>
#include <stdarg.h>
-static void adfs_put_super(struct super_block *sb);
-static int adfs_remount(struct super_block *sb, int *flags, char *data);
-static int adfs_statfs(struct super_block *sb, struct statfs *buf, int bufsiz);
-void adfs_read_inode(struct inode *inode);
+#include "adfs.h"
+#include "dir_f.h"
+#include "dir_fplus.h"
-void adfs_error(struct super_block *sb, const char *function, const char *fmt, ...)
+void __adfs_error(struct super_block *sb, const char *function, const char *fmt, ...)
{
char error_buf[128];
va_list args;
function ? function : "", error_buf);
}
-static unsigned char adfs_calczonecheck(struct super_block *sb, char *map)
+static int adfs_checkdiscrecord(struct adfs_discrecord *dr)
+{
+ int i;
+
+ /* sector size must be 256, 512 or 1024 bytes */
+ if (dr->log2secsize != 8 &&
+ dr->log2secsize != 9 &&
+ dr->log2secsize != 10)
+ return 1;
+
+ /* idlen must be at least log2secsize + 3 */
+ if (dr->idlen < dr->log2secsize + 3)
+ return 1;
+
+ /* we cannot have such a large disc that we
+ * are unable to represent sector offsets in
+ * 32 bits. This works out at 2.0 TB.
+ */
+ if (dr->disc_size_high >> dr->log2secsize)
+ return 1;
+
+ /*
+ * The following checks are not required for F+
+ * stage 1.
+ */
+#if 0
+ /* idlen must be no greater than 15 */
+ if (dr->idlen > 15)
+ return 1;
+
+ /* nzones must be less than 128 for the root
+ * directory to be addressable
+ */
+ if (dr->nzones >= 128 && dr->nzones_high == 0)
+ return 1;
+
+ /* root must be of the form 0x2.. */
+ if ((le32_to_cpu(dr->root) & 0xffffff00) != 0x00000200)
+ return 1;
+#else
+ /*
+ * Stage 2 F+ does not require the following check
+ */
+#if 0
+ /* idlen must be no greater than 16 v2 [1.0] */
+ if (dr->idlen > 16)
+ return 1;
+
+ /* we can't handle F+ discs yet */
+ if (dr->format_version || dr->root_size)
+ return 1;
+
+#else
+ /* idlen must be no greater than 19 v2 [1.0] */
+ if (dr->idlen > 19)
+ return 1;
+#endif
+#endif
+
+ /* reserved bytes should be zero */
+ for (i = 0; i < sizeof(dr->unused52); i++)
+ if (dr->unused52[i] != 0)
+ return 1;
+
+ return 0;
+}
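To make the "2.0 TB" figure above concrete: disc_size_high holds the byte count above 2^32, so requiring disc_size_high >> log2secsize to be zero keeps the total sector count within 32 bits. The limit is therefore roughly 2^(32 + log2secsize) bytes, i.e. 2^41 bytes = 2 TiB with 512-byte sectors (and proportionally more with 1024-byte sectors).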
+
+static unsigned char adfs_calczonecheck(struct super_block *sb, unsigned char *map)
{
unsigned int v0, v1, v2, v3;
int i;
return v0 ^ v1 ^ v2 ^ v3;
}
-static int adfs_checkmap(struct super_block *sb)
+static int adfs_checkmap(struct super_block *sb, struct adfs_discmap *dm)
{
unsigned char crosscheck = 0, zonecheck = 1;
int i;
for (i = 0; i < sb->u.adfs_sb.s_map_size; i++) {
- char *map;
+ unsigned char *map;
+
+ map = dm[i].dm_bh->b_data;
- map = sb->u.adfs_sb.s_map[i]->b_data;
if (adfs_calczonecheck(sb, map) != map[0]) {
- adfs_error(sb, "adfs_checkmap", "zone %d fails zonecheck", i);
+ adfs_error(sb, "zone %d fails zonecheck", i);
zonecheck = 0;
}
crosscheck ^= map[3];
}
if (crosscheck != 0xff)
- adfs_error(sb, "adfs_checkmap", "crosscheck != 0xff");
+ adfs_error(sb, "crosscheck != 0xff");
return crosscheck == 0xff && zonecheck;
}
-static struct super_operations adfs_sops = {
- adfs_read_inode,
- NULL,
- NULL,
- NULL,
- NULL,
- adfs_put_super,
- NULL,
- adfs_statfs,
- adfs_remount
-};
-
static void adfs_put_super(struct super_block *sb)
{
int i;
for (i = 0; i < sb->u.adfs_sb.s_map_size; i++)
- brelse(sb->u.adfs_sb.s_map[i]);
+ brelse(sb->u.adfs_sb.s_map[i].dm_bh);
kfree(sb->u.adfs_sb.s_map);
- brelse(sb->u.adfs_sb.s_sbh);
MOD_DEC_USE_COUNT;
}
return parse_options(sb, data);
}
+static int adfs_statfs(struct super_block *sb, struct statfs *buf, int bufsiz)
+{
+ struct adfs_sb_info *asb = &sb->u.adfs_sb;
+ struct statfs tmp;
+
+ tmp.f_type = ADFS_SUPER_MAGIC;
+ tmp.f_namelen = asb->s_namelen;
+ tmp.f_bsize = sb->s_blocksize;
+ tmp.f_blocks = asb->s_size;
+ tmp.f_files = asb->s_ids_per_zone * asb->s_map_size;
+ tmp.f_bavail =
+ tmp.f_bfree = adfs_map_free(sb);
+ tmp.f_ffree = tmp.f_bfree * tmp.f_files / tmp.f_blocks;
+
+ return copy_to_user(buf, &tmp, bufsiz) ? -EFAULT : 0;
+}
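As a worked example of the f_ffree estimate above (purely illustrative numbers): ADFS has no inode table, so f_files is the maximum number of fragment ids (ids_per_zone x map_size) and free "files" are scaled in proportion to free space. With f_blocks = 100000, f_bfree = 25000 and f_files = 2048, the reported f_ffree would be 2048 x 25000 / 100000 = 512.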
+
+static struct super_operations adfs_sops = {
+ adfs_read_inode, /* read_inode */
+ adfs_write_inode, /* write_inode */
+ NULL, /* put_inode */
+ NULL, /* delete_inode */
+ adfs_notify_change, /* notify_change */
+ adfs_put_super, /* put_super */
+ NULL, /* write_super */
+ adfs_statfs, /* statfs */
+ adfs_remount, /* remount_fs */
+ NULL, /* clear_inode */
+ NULL /* umount_begin */
+};
+
+static struct adfs_discmap *adfs_read_map(struct super_block *sb, struct adfs_discrecord *dr)
+{
+ struct adfs_discmap *dm;
+ unsigned int map_addr, zone_size, nzones;
+ int i, zone;
+
+ nzones = sb->u.adfs_sb.s_map_size;
+ zone_size = (8 << dr->log2secsize) - le16_to_cpu(dr->zone_spare);
+ map_addr = (nzones >> 1) * zone_size -
+ ((nzones > 1) ? ADFS_DR_SIZE_BITS : 0);
+ map_addr = signed_asl(map_addr, sb->u.adfs_sb.s_map2blk);
+
+ sb->u.adfs_sb.s_ids_per_zone = zone_size / (sb->u.adfs_sb.s_idlen + 1);
+
+ dm = kmalloc(nzones * sizeof(*dm), GFP_KERNEL);
+ if (dm == NULL) {
+ adfs_error(sb, "not enough memory");
+ return NULL;
+ }
+
+ for (zone = 0; zone < nzones; zone++, map_addr++) {
+ dm[zone].dm_startbit = 0;
+ dm[zone].dm_endbit = zone_size;
+ dm[zone].dm_startblk = zone * zone_size - ADFS_DR_SIZE_BITS;
+ dm[zone].dm_bh = bread(sb->s_dev, map_addr, sb->s_blocksize);
+
+ if (!dm[zone].dm_bh) {
+ adfs_error(sb, "unable to read map");
+ goto error_free;
+ }
+ }
+
+ /* adjust the limits for the first and last map zones */
+ i = zone - 1;
+ dm[0].dm_startblk = 0;
+ dm[0].dm_startbit = ADFS_DR_SIZE_BITS;
+ dm[i].dm_endbit = (dr->disc_size_high << (32 - dr->log2bpmb)) +
+ (dr->disc_size >> dr->log2bpmb) +
+ (ADFS_DR_SIZE_BITS - i * zone_size);
+
+ if (adfs_checkmap(sb, dm))
+ return dm;
+
+ adfs_error(sb, "map corrupted");
+
+error_free:
+ while (--zone >= 0)
+ brelse(dm[zone].dm_bh);
+
+ kfree(dm);
+ return NULL;
+}
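A worked pass through the address arithmetic above, with made-up but plausible disc record values (log2secsize = 9, zone_spare = 32, nzones = 4, log2bpmb = 12, hence s_map2blk = 3, and taking ADFS_DR_SIZE_BITS as 60 bytes x 8 = 480 bits): zone_size = (8 << 9) - 32 = 4064 map bits per zone; map_addr = (4 >> 1) * 4064 - 480 = 7648 map bits, and signed_asl(7648, 3) = 61184, so the map starts at sector 61184. Each map bit covers 2^12 = 4096 bytes, so the disc holds 4 x 4064 x 4096 bytes (about 63.5 MiB) and sector 61184 (about 31.3 MiB in) is indeed near the centre, offset back by the 480-bit disc record, as the code intends.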
+
+static inline unsigned long adfs_discsize(struct adfs_discrecord *dr, int block_bits)
+{
+ unsigned long discsize;
+
+ discsize = le32_to_cpu(dr->disc_size_high) << (32 - block_bits);
+ discsize |= le32_to_cpu(dr->disc_size) >> block_bits;
+
+ return discsize;
+}
+
struct super_block *adfs_read_super(struct super_block *sb, void *data, int silent)
{
struct adfs_discrecord *dr;
struct buffer_head *bh;
+ struct object_info root_obj;
unsigned char *b_data;
kdev_t dev = sb->s_dev;
- int i, j;
/* set default options */
sb->u.adfs_sb.s_uid = 0;
lock_super(sb);
set_blocksize(dev, BLOCK_SIZE);
if (!(bh = bread(dev, ADFS_DISCRECORD / BLOCK_SIZE, BLOCK_SIZE))) {
- adfs_error(sb, NULL, "unable to read superblock");
+ adfs_error(sb, "unable to read superblock");
goto error_unlock;
}
"%s.\n", kdevname(dev));
goto error_free_bh;
}
+
dr = (struct adfs_discrecord *)(b_data + ADFS_DR_OFFSET);
+ /*
+ * Do some sanity checks on the ADFS disc record
+ */
+ if (adfs_checkdiscrecord(dr)) {
+ if (!silent)
+ printk("VPS: Can't find an adfs filesystem on dev "
+ "%s.\n", kdevname(dev));
+ goto error_free_bh;
+ }
+
sb->s_blocksize_bits = dr->log2secsize;
sb->s_blocksize = 1 << sb->s_blocksize_bits;
if (sb->s_blocksize != BLOCK_SIZE &&
set_blocksize(dev, sb->s_blocksize);
bh = bread(dev, ADFS_DISCRECORD / sb->s_blocksize, sb->s_blocksize);
if (!bh) {
- adfs_error(sb, NULL, "couldn't read superblock on "
+ adfs_error(sb, "couldn't read superblock on "
"2nd try.");
goto error_unlock;
}
b_data = bh->b_data + (ADFS_DISCRECORD % sb->s_blocksize);
if (adfs_checkbblk(b_data)) {
- adfs_error(sb, NULL, "disc record mismatch, very weird!");
+ adfs_error(sb, "disc record mismatch, very weird!");
goto error_free_bh;
}
dr = (struct adfs_discrecord *)(b_data + ADFS_DR_OFFSET);
"%s.\n", kdevname(dev));
goto error_free_bh;
}
- /* blocksize on this device should now be set to the adfs log2secsize */
- sb->u.adfs_sb.s_sbh = bh;
- sb->u.adfs_sb.s_dr = dr;
-
- /* s_zone_size = size of 1 zone (1 sector) * bits_in_byte - zone_spare =>
- * number of map bits in a zone
- */
- sb->u.adfs_sb.s_zone_size = (8 << dr->log2secsize) - dr->zone_spare;
-
- /* s_ids_per_zone = bit size of 1 zone / min. length of fragment block =>
- * number of ids in one zone
+ /*
+ * blocksize on this device should now be set to the ADFS log2secsize
*/
- sb->u.adfs_sb.s_ids_per_zone = sb->u.adfs_sb.s_zone_size / (dr->idlen + 1);
-
- /* s_idlen = length of 1 id */
- sb->u.adfs_sb.s_idlen = dr->idlen;
-
- /* map size (in sectors) = number of zones */
- sb->u.adfs_sb.s_map_size = dr->nzones;
-
- /* zonesize = size of sector - zonespare */
- sb->u.adfs_sb.s_zonesize = (sb->s_blocksize << 3) - dr->zone_spare;
- /* map start (in sectors) = start of zone (number of zones) / 2 */
- sb->u.adfs_sb.s_map_block = (dr->nzones >> 1) * sb->u.adfs_sb.s_zone_size -
- ((dr->nzones > 1) ? 8 * ADFS_DR_SIZE : 0);
+ sb->s_magic = ADFS_SUPER_MAGIC;
+ sb->u.adfs_sb.s_idlen = dr->idlen;
+ sb->u.adfs_sb.s_map_size = dr->nzones | (dr->nzones_high << 8);
+ sb->u.adfs_sb.s_map2blk = dr->log2bpmb - dr->log2secsize;
+ sb->u.adfs_sb.s_size = adfs_discsize(dr, sb->s_blocksize_bits);
+ sb->u.adfs_sb.s_version = dr->format_version;
+ sb->u.adfs_sb.s_log2sharesize = dr->log2sharesize;
- /* (signed) number of bits to shift left a map address to a sector address */
- sb->u.adfs_sb.s_map2blk = dr->log2bpmb - dr->log2secsize;
-
- if (sb->u.adfs_sb.s_map2blk >= 0)
- sb->u.adfs_sb.s_map_block <<= sb->u.adfs_sb.s_map2blk;
- else
- sb->u.adfs_sb.s_map_block >>= -sb->u.adfs_sb.s_map2blk;
-
- printk(KERN_DEBUG "ADFS: zone size %d, IDs per zone %d, map address %X size %d sectors\n",
- sb->u.adfs_sb.s_zone_size, sb->u.adfs_sb.s_ids_per_zone,
- sb->u.adfs_sb.s_map_block, sb->u.adfs_sb.s_map_size);
- printk(KERN_DEBUG "ADFS: sector size %d, map bit size %d, share size %d\n",
- 1 << dr->log2secsize, 1 << dr->log2bpmb,
- 1 << (dr->log2secsize + dr->log2sharesize));
-
- sb->s_magic = ADFS_SUPER_MAGIC;
-
- sb->u.adfs_sb.s_map = kmalloc(sb->u.adfs_sb.s_map_size *
- sizeof(struct buffer_head *), GFP_KERNEL);
- if (sb->u.adfs_sb.s_map == NULL) {
- adfs_error(sb, NULL, "not enough memory");
+ sb->u.adfs_sb.s_map = adfs_read_map(sb, dr);
+ if (!sb->u.adfs_sb.s_map)
goto error_free_bh;
- }
- for (i = 0; i < sb->u.adfs_sb.s_map_size; i++) {
- sb->u.adfs_sb.s_map[i] = bread(dev,
- sb->u.adfs_sb.s_map_block + i,
- sb->s_blocksize);
- if (!sb->u.adfs_sb.s_map[i]) {
- for (j = 0; j < i; j++)
- brelse(sb->u.adfs_sb.s_map[j]);
- kfree(sb->u.adfs_sb.s_map);
- adfs_error(sb, NULL, "unable to read map");
- goto error_free_bh;
- }
- }
- if (!adfs_checkmap(sb)) {
- for (i = 0; i < sb->u.adfs_sb.s_map_size; i++)
- brelse(sb->u.adfs_sb.s_map[i]);
- adfs_error(sb, NULL, "map corrupted");
- goto error_free_bh;
- }
+ brelse(bh);
- dr = (struct adfs_discrecord *)(sb->u.adfs_sb.s_map[0]->b_data + 4);
+ /*
+ * set up enough so that we can read an inode
+ */
+ sb->s_op = &adfs_sops;
unlock_super(sb);
+ dr = (struct adfs_discrecord *)(sb->u.adfs_sb.s_map[0].dm_bh->b_data + 4);
+
+ root_obj.parent_id = root_obj.file_id = le32_to_cpu(dr->root);
+ root_obj.name_len = 0;
+ root_obj.loadaddr = 0;
+ root_obj.execaddr = 0;
+ root_obj.size = ADFS_NEWDIR_SIZE;
+ root_obj.attr = ADFS_NDA_DIRECTORY | ADFS_NDA_OWNER_READ |
+ ADFS_NDA_OWNER_WRITE | ADFS_NDA_PUBLIC_READ;
+
/*
- * set up enough so that it can read an inode
+ * If this is an F+ disk with variable length directories,
+ * get the root_size from the disc record.
*/
- sb->s_op = &adfs_sops;
- sb->u.adfs_sb.s_root = adfs_inode_generate(dr->root, 0);
- sb->s_root = d_alloc_root(iget(sb, sb->u.adfs_sb.s_root));
+ if (sb->u.adfs_sb.s_version) {
+ root_obj.size = dr->root_size;
+ sb->u.adfs_sb.s_dir = &adfs_fplus_dir_ops;
+ sb->u.adfs_sb.s_namelen = ADFS_FPLUS_NAME_LEN;
+ } else {
+ sb->u.adfs_sb.s_dir = &adfs_f_dir_ops;
+ sb->u.adfs_sb.s_namelen = ADFS_F_NAME_LEN;
+ }
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,0)
+ sb->s_root = d_alloc_root(adfs_iget(sb, &root_obj));
+#else
+ sb->s_root = d_alloc_root(adfs_iget(sb, &root_obj), NULL);
+#endif
if (!sb->s_root) {
+ int i;
+
for (i = 0; i < sb->u.adfs_sb.s_map_size; i++)
- brelse(sb->u.adfs_sb.s_map[i]);
- brelse(bh);
- adfs_error(sb, NULL, "get root inode failed\n");
+ brelse(sb->u.adfs_sb.s_map[i].dm_bh);
+ kfree(sb->u.adfs_sb.s_map);
+ adfs_error(sb, "get root inode failed\n");
goto error_dec_use;
}
return sb;
return NULL;
}
-static int adfs_statfs(struct super_block *sb, struct statfs *buf, int bufsiz)
-{
- struct statfs tmp;
- const unsigned int nidlen = sb->u.adfs_sb.s_idlen + 1;
-
- tmp.f_type = ADFS_SUPER_MAGIC;
- tmp.f_bsize = sb->s_blocksize;
- tmp.f_blocks = sb->u.adfs_sb.s_dr->disc_size_high << (32 - sb->s_blocksize_bits) |
- sb->u.adfs_sb.s_dr->disc_size >> sb->s_blocksize_bits;
- tmp.f_files = tmp.f_blocks >> nidlen;
- {
- unsigned int i, j = 0;
- const unsigned mask = (1 << (nidlen - 1)) - 1;
- for (i = 0; i < sb->u.adfs_sb.s_map_size; i++) {
- const char *map = sb->u.adfs_sb.s_map[i]->b_data;
- unsigned freelink, mapindex = 24;
- j -= nidlen;
- do {
- unsigned char k, l, m;
- unsigned off = (mapindex - nidlen) >> 3;
- unsigned rem;
- const unsigned boff = mapindex & 7;
-
- /* get next freelink */
-
- k = map[off++];
- l = map[off++];
- m = map[off++];
- freelink = (m << 16) | (l << 8) | k;
- rem = freelink >> (boff + nidlen - 1);
- freelink = (freelink >> boff) & mask;
- mapindex += freelink;
-
- /* find its length and add it to running total */
-
- while (rem == 0) {
- j += 8;
- rem = map[off++];
- }
- if ((rem & 0xff) == 0) j+=8, rem>>=8;
- if ((rem & 0xf) == 0) j+=4, rem>>=4;
- if ((rem & 0x3) == 0) j+=2, rem>>=2;
- if ((rem & 0x1) == 0) j+=1;
- j += nidlen - boff;
- if (freelink <= nidlen) break;
- } while (mapindex < 8 * sb->s_blocksize);
- if (mapindex > 8 * sb->s_blocksize)
- adfs_error(sb, NULL, "oversized free fragment\n");
- else if (freelink)
- adfs_error(sb, NULL, "undersized free fragment\n");
- }
- tmp.f_bfree = tmp.f_bavail = j <<
- (sb->u.adfs_sb.s_dr->log2bpmb - sb->s_blocksize_bits);
- }
- tmp.f_ffree = tmp.f_bfree >> nidlen;
- tmp.f_namelen = ADFS_NAME_LEN;
- return copy_to_user(buf, &tmp, bufsiz) ? -EFAULT : 0;
-}
-
static struct file_system_type adfs_fs_type = {
"adfs", FS_REQUIRES_DEV, adfs_read_super, NULL
};
/* Internal header file for autofs */
-#include <linux/auto_fs.h>
+#include <linux/auto_fs4.h>
#include <linux/list.h>
/* This is the range of ioctl() numbers we claim as ours */
static void autofs4_notify_daemon(struct autofs_sb_info *sbi,
struct autofs_wait_queue *wq,
- enum autofs_packet_type type)
+ int type)
{
union autofs_packet_union pkt;
size_t pktsz;
#define su_lf_ioff u.bfs_sb.si_lf_ioff
#define su_lf_sblk u.bfs_sb.si_lf_sblk
#define su_lf_eblk u.bfs_sb.si_lf_eblk
-#define su_bmap u.bfs_sb.si_bmap
#define su_imap u.bfs_sb.si_imap
#define su_sbh u.bfs_sb.si_sbh
#define su_bfs_sb u.bfs_sb.si_bfs_sb
#define iu_dsk_ino u.bfs_i.i_dsk_ino
#define iu_sblock u.bfs_i.i_sblock
#define iu_eblock u.bfs_i.i_eblock
+
+#define printf(format, args...) \
+ printk(KERN_ERR "BFS-fs: " __FUNCTION__ "(): " format, ## args)
#undef DEBUG
#ifdef DEBUG
-#define DBG(x...) printk(x)
+#define dprintf(x...) printf(x)
#else
-#define DBG(x...)
+#define dprintf(x...)
#endif
static int bfs_add_entry(struct inode * dir, const char * name, int namelen, int ino);
int block;
if (!dir || !dir->i_sb || !S_ISDIR(dir->i_mode)) {
- printk(KERN_ERR "BFS-fs: %s(): Bad inode or not a directory %s:%08lx\n",
- __FUNCTION__, bdevname(dev), dir->i_ino);
+ printf("Bad inode or not a directory %s:%08lx\n", bdevname(dev), dir->i_ino);
return -EBADF;
}
if (f->f_pos & (BFS_DIRENT_SIZE-1)) {
- printk(KERN_ERR "BFS-fs: %s(): Bad f_pos=%08lx for %s:%08lx\n",
- __FUNCTION__, (unsigned long)f->f_pos, bdevname(dev), dir->i_ino);
+ printf("Bad f_pos=%08lx for %s:%08lx\n", (unsigned long)f->f_pos,
+ bdevname(dev), dir->i_ino);
return -EBADF;
}
goto out_brelse;
if (!inode->i_nlink) {
- printk(KERN_WARNING
- "BFS-fs: %s(): unlinking non-existent file %s:%lu (nlink=%d)\n",
- __FUNCTION__, bdevname(inode->i_dev), inode->i_ino, inode->i_nlink);
+ printf("unlinking non-existent file %s:%lu (nlink=%d)\n", bdevname(inode->i_dev),
+ inode->i_ino, inode->i_nlink);
inode->i_nlink = 1;
}
de->ino = 0;
kdev_t dev;
int i;
- DBG(KERN_ERR "BFS-fs: %s(%s,%d)\n", __FUNCTION__, name, namelen);
+ dprintf("name=%s, namelen=%d\n", name, namelen);
if (!namelen)
return -ENOENT;
*/
#include <linux/fs.h>
+#include <linux/locks.h>
#include <linux/bfs_fs.h>
+#include <linux/smp_lock.h>
#include "bfs_defs.h"
#undef DEBUG
#ifdef DEBUG
-#define DBG(x...) printk(x)
+#define dprintf(x...) printf(x)
#else
-#define DBG(x...)
+#define dprintf(x...)
#endif
static ssize_t bfs_file_write(struct file * f, const char * buf, size_t count, loff_t *ppos)
fasync: NULL,
};
+static int bfs_move_block(unsigned long from, unsigned long to, kdev_t dev)
+{
+ struct buffer_head *bh, *new = NULL;
+
+ bh = bread(dev, from, BFS_BSIZE);
+ if (!bh)
+ return -EIO;
+ new = getblk(dev, to, BFS_BSIZE);
+ if (!buffer_uptodate(new))
+ wait_on_buffer(new);
+ memcpy(new->b_data, bh->b_data, bh->b_size);
+ mark_buffer_dirty(new, 1);
+ bforget(bh);
+ brelse(new);
+ return 0;
+}
+
+static int bfs_move_blocks(kdev_t dev, unsigned long start, unsigned long end,
+ unsigned long where)
+{
+ unsigned long i;
+
+ dprintf("%08lx-%08lx->%08lx\n", start, end, where);
+ for (i = start; i <= end; i++)
+ if(i && bfs_move_block(i, where + i, dev)) {
+ dprintf("failed to move block %08lx -> %08lx\n", i, where + i);
+ return -EIO;
+ }
+ return 0;
+}
+
static int bfs_get_block(struct inode * inode, long block,
struct buffer_head * bh_result, int create)
{
- long phys = inode->iu_sblock + block;
- if (!create || phys <= inode->iu_eblock) {
+ long phys, next_free_block;
+ int err;
+ struct super_block *s = inode->i_sb;
+
+ if (block < 0 || block > s->su_blocks)
+ return -EIO;
+
+ phys = inode->iu_sblock + block;
+ if (!create) {
+ if (phys <= inode->iu_eblock) {
+ dprintf("c=%d, b=%08lx, phys=%08lx (granted)\n", create, block, phys);
+ bh_result->b_dev = inode->i_dev;
+ bh_result->b_blocknr = phys;
+ bh_result->b_state |= (1UL << BH_Mapped);
+ }
+ return 0;
+ }
+
+ /* if the file is not empty and the requested block is within the range
+ of blocks allocated for this file, we can grant it */
+ if (inode->i_size && phys <= inode->iu_eblock) {
+ dprintf("c=%d, b=%08lx, phys=%08lx (interim block granted)\n", create, block, phys);
bh_result->b_dev = inode->i_dev;
bh_result->b_blocknr = phys;
bh_result->b_state |= (1UL << BH_Mapped);
return 0;
- }
- /* no support for file migration, working on it */
- return -EIO;
+ }
+
+ /* the rest has to be protected against itself */
+ lock_kernel();
+
+ /* if the last data block for this file is the last allocated block, we can
+ extend the file trivially, without moving it anywhere */
+ if (inode->iu_eblock == s->su_lf_eblk) {
+ dprintf("c=%d, b=%08lx, phys=%08lx (simple extension)\n", create, block, phys);
+ bh_result->b_dev = inode->i_dev;
+ bh_result->b_blocknr = phys;
+ bh_result->b_state |= (1UL << BH_Mapped);
+ s->su_lf_eblk = inode->iu_eblock = inode->iu_sblock + block;
+ mark_inode_dirty(inode);
+ mark_buffer_dirty(s->su_sbh, 1);
+ err = 0;
+ goto out;
+ }
+
+ /* Ok, we have to move this entire file to the next free block */
+ next_free_block = s->su_lf_eblk + 1;
+ err = bfs_move_blocks(inode->i_dev, inode->iu_sblock, inode->iu_eblock, next_free_block);
+ if (err) {
+ dprintf("failed to move ino=%08lx -> possible fs corruption\n", inode->i_ino);
+ goto out;
+ }
+
+ inode->iu_sblock = next_free_block;
+ s->su_lf_eblk = inode->iu_eblock = next_free_block + block;
+ mark_inode_dirty(inode);
+ mark_buffer_dirty(s->su_sbh, 1);
+ bh_result->b_dev = inode->i_dev;
+ bh_result->b_blocknr = inode->iu_sblock + block;
+ bh_result->b_state |= (1UL << BH_Mapped);
+out:
+ unlock_kernel();
+ return err;
}
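The three cases bfs_get_block() handles above (block already inside the file, file is the last one on the disc so it can grow in place, otherwise relocate the whole file) follow from BFS keeping every file in a single contiguous run of blocks. Below is a userspace sketch of that policy; the structures and names (bfs_sim, file_sim, get_block, the main() driver) are illustrative only, and the real code of course also copies the data via bfs_move_blocks() and updates the on-disc superblock.

/*
 * Userspace sketch of the BFS "contiguous files" allocation policy:
 * extend in place when the file is the last one on the disc, otherwise
 * relocate the whole file past the last allocated block.
 */
#include <stdio.h>

struct bfs_sim {
	unsigned long last_alloc;	/* s->su_lf_eblk: last allocated block */
};

struct file_sim {
	unsigned long sblock, eblock;	/* first and last block of the file */
};

/* return the physical block backing logical 'block', growing or moving
 * the file as needed */
static unsigned long get_block(struct bfs_sim *s, struct file_sim *f,
			       unsigned long block)
{
	unsigned long phys = f->sblock + block;

	if (phys <= f->eblock)			/* already inside the file */
		return phys;

	if (f->eblock == s->last_alloc) {	/* last file: extend in place */
		f->eblock = s->last_alloc = phys;
		return phys;
	}

	/* otherwise move the whole file to just past the last allocated
	 * block (the kernel copies the data with bfs_move_blocks()) */
	f->sblock = s->last_alloc + 1;
	f->eblock = s->last_alloc = f->sblock + block;
	return f->sblock + block;
}

int main(void)
{
	struct bfs_sim s = { .last_alloc = 200 };
	struct file_sim f = { .sblock = 100, .eblock = 109 };	/* 10-block file */
	unsigned long phys = get_block(&s, &f, 12);

	printf("block 12 -> %lu (file now %lu-%lu, last_alloc %lu)\n",
	       phys, f.sblock, f.eblock, s.last_alloc);
	return 0;
}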
struct inode_operations bfs_file_inops = {
* From fs/minix, Copyright (C) 1991, 1992 Linus Torvalds.
*/
-#include <linux/config.h>
#include <linux/module.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include "bfs_defs.h"
MODULE_AUTHOR("Tigran A. Aivazian");
-MODULE_DESCRIPTION("UnixWare BFS filesystem for Linux");
+MODULE_DESCRIPTION("SCO UnixWare BFS filesystem for Linux");
EXPORT_NO_SYMBOLS;
#undef DEBUG
#ifdef DEBUG
-#define DBG(x...) printk(x)
+#define dprintf(x...) printf(x)
#else
-#define DBG(x...)
+#define dprintf(x...)
#endif
void dump_imap(const char *prefix, struct super_block * s);
int block, off;
if (ino < BFS_ROOT_INO || ino > inode->i_sb->su_lasti) {
- printk(KERN_ERR "BFS-fs: %s(): Bad inode number %s:%08lx\n",
- __FUNCTION__, bdevname(dev), ino);
+ printf("Bad inode number %s:%08lx\n", bdevname(dev), ino);
make_bad_inode(inode);
return;
}
block = (ino - BFS_ROOT_INO)/BFS_INODES_PER_BLOCK + 1;
bh = bread(dev, block, BFS_BSIZE);
if (!bh) {
- printk(KERN_ERR "BFS-fs: %s(): Unable to read inode %s:%08lx\n",
- __FUNCTION__, bdevname(dev), ino);
+ printf("Unable to read inode %s:%08lx\n", bdevname(dev), ino);
make_bad_inode(inode);
return;
}
int block, off;
if (ino < BFS_ROOT_INO || ino > inode->i_sb->su_lasti) {
- printk(KERN_ERR "BFS-fs: %s(): Bad inode number %s:%08lx\n",
- __FUNCTION__, bdevname(dev), ino);
+ printf("Bad inode number %s:%08lx\n", bdevname(dev), ino);
return;
}
block = (ino - BFS_ROOT_INO)/BFS_INODES_PER_BLOCK + 1;
bh = bread(dev, block, BFS_BSIZE);
if (!bh) {
- printk(KERN_ERR "BFS-fs: %s(): Unable to read inode %s:%08lx\n",
- __FUNCTION__, bdevname(dev), ino);
+ printf("Unable to read inode %s:%08lx\n", bdevname(dev), ino);
return;
}
int block, off;
struct super_block * s = inode->i_sb;
- DBG(KERN_ERR "%s(ino=%08lx)\n", __FUNCTION__, inode->i_ino);
+ dprintf("ino=%08lx\n", inode->i_ino);
- if (!inode)
+ if (!inode || !inode->i_dev || inode->i_count > 1 || inode->i_nlink || !s)
return;
- if (!inode->i_dev) {
- printk(KERN_ERR "BFS-fs: free_inode(%08lx) !dev\n", inode->i_ino);
- return;
- }
- if (inode->i_count > 1) {
- printk(KERN_ERR "BFS-fs: free_inode(%08lx) count=%d\n",
- inode->i_ino, inode->i_count);
- return;
- }
- if (inode->i_nlink) {
- printk(KERN_ERR "BFS-fs: free_inode(%08lx) nlink=%d\n",
- inode->i_ino, inode->i_nlink);
- return;
- }
- if (!inode->i_sb) {
- printk(KERN_ERR "BFS-fs: free_inode(%08lx) !sb\n", inode->i_ino);
- return;
- }
if (inode->i_ino < BFS_ROOT_INO || inode->i_ino > inode->i_sb->su_lasti) {
- printk(KERN_ERR "BFS-fs: free_inode(%08lx) invalid ino\n", inode->i_ino);
+ printf("invalid ino=%08lx\n", inode->i_ino);
return;
}
block = (ino - BFS_ROOT_INO)/BFS_INODES_PER_BLOCK + 1;
bh = bread(dev, block, BFS_BSIZE);
if (!bh) {
- printk(KERN_ERR "BFS-fs: %s(): Unable to read inode %s:%08lx\n",
- __FUNCTION__, bdevname(dev), ino);
+ printf("Unable to read inode %s:%08lx\n", bdevname(dev), ino);
return;
}
off = (ino - BFS_ROOT_INO)%BFS_INODES_PER_BLOCK;
di->i_sblock = 0;
mark_buffer_dirty(bh, 1);
brelse(bh);
+
+ /* if this was the last file on the disc, pull the "last file's
+    last block" marker back to the block before this file started,
+    even though no real file may end there - this saves us one gap */
+ if (s->su_lf_eblk == inode->iu_eblock) {
+ s->su_lf_eblk = inode->iu_sblock - 1;
+ mark_buffer_dirty(s->su_sbh, 1);
+ }
clear_inode(inode);
}
{
brelse(s->su_sbh);
kfree(s->su_imap);
- kfree(s->su_bmap);
MOD_DEC_USE_COUNT;
}
else
strcat(tmpbuf, "0");
}
- printk(KERN_ERR "BFS-fs: %s: lasti=%d <%s>\n", prefix, s->su_lasti, tmpbuf);
+ printk(KERN_ERR "BFS-fs: %s: lasti=%08lx <%s>\n", prefix, s->su_lasti, tmpbuf);
free_page((unsigned long)tmpbuf);
#endif
}
struct buffer_head * bh;
struct bfs_super_block * bfs_sb;
struct inode * inode;
- int i, imap_len, bmap_len;
+ int i, imap_len;
MOD_INC_USE_COUNT;
lock_super(s);
s->s_blocksize = BFS_BSIZE;
s->s_blocksize_bits = BFS_BSIZE_BITS;
- /* read ahead 8K to get inodes as we'll need them in a tick */
- bh = breada(dev, 0, BFS_BSIZE, 0, 8192);
+ bh = bread(dev, 0, BFS_BSIZE);
if(!bh)
goto out;
bfs_sb = (struct bfs_super_block *)bh->b_data;
if (bfs_sb->s_magic != BFS_MAGIC) {
if (!silent)
- printk(KERN_ERR "BFS-fs: No BFS filesystem on %s (magic=%08x)\n",
- bdevname(dev), bfs_sb->s_magic);
+ printf("No BFS filesystem on %s (magic=%08x)\n",
+ bdevname(dev), bfs_sb->s_magic);
goto out;
}
if (BFS_UNCLEAN(bfs_sb, s) && !silent)
- printk(KERN_WARNING "BFS-fs: %s is unclean\n", bdevname(dev));
+ printf("%s is unclean, continuing\n", bdevname(dev));
-#ifndef CONFIG_BFS_FS_WRITE
- s->s_flags |= MS_RDONLY;
-#endif
s->s_magic = BFS_MAGIC;
s->su_bfs_sb = bfs_sb;
s->su_sbh = bh;
s->su_lasti = (bfs_sb->s_start - BFS_BSIZE)/sizeof(struct bfs_inode)
+ BFS_ROOT_INO - 1;
- bmap_len = sizeof(struct bfs_bmap) * s->su_lasti;
- s->su_bmap = kmalloc(bmap_len, GFP_KERNEL);
- if (!s->su_bmap)
- goto out;
- memset(s->su_bmap, 0, bmap_len);
imap_len = s->su_lasti/8 + 1;
s->su_imap = kmalloc(imap_len, GFP_KERNEL);
- if (!s->su_imap) {
- kfree(s->su_bmap);
+ if (!s->su_imap)
goto out;
- }
memset(s->su_imap, 0, imap_len);
- for (i=0; i<BFS_ROOT_INO; i++) {
- s->su_bmap[i].start = s->su_bmap[i].end = 0;
+ for (i=0; i<BFS_ROOT_INO; i++)
set_bit(i, s->su_imap);
- }
s->s_op = &bfs_sops;
inode = iget(s, BFS_ROOT_INO);
if (!inode) {
kfree(s->su_imap);
- kfree(s->su_bmap);
goto out;
}
s->s_root = d_alloc_root(inode);
if (!s->s_root) {
iput(inode);
kfree(s->su_imap);
- kfree(s->su_bmap);
goto out;
}
s->su_lf_ioff = 0;
for (i=BFS_ROOT_INO; i<=s->su_lasti; i++) {
inode = iget(s,i);
- if (inode->iu_dsk_ino == 0) {
+ if (inode->iu_dsk_ino == 0)
s->su_freei++;
- s->su_bmap[i].start = s->su_bmap[i].end = 0;
- } else {
+ else {
set_bit(i, s->su_imap);
s->su_freeb -= inode->i_blocks;
if (inode->iu_eblock > s->su_lf_eblk) {
s->su_lf_sblk = inode->iu_sblock;
s->su_lf_ioff = BFS_INO2OFF(i);
}
- s->su_bmap[i].start = inode->iu_sblock;
- s->su_bmap[i].end = inode->iu_eblock;
}
iput(inode);
}
O_TARGET := proc.o
O_OBJS := inode.o root.o base.o generic.o array.o \
kmsg.o proc_tty.o proc_misc.o kcore.o
-ifdef CONFIG_OMIRR
-O_OBJS := $(O_OBJS) omirr.o
-endif
OX_OBJS := procfs_syms.o
M_OBJS :=
+++ /dev/null
-/*
- * fs/proc/omirr.c - online mirror support
- *
- * (C) 1997 Thomas Schoebel-Theuer
- */
-
-#include <linux/string.h>
-#include <linux/mm.h>
-#include <linux/fs.h>
-#include <linux/omirr.h>
-#include <asm/uaccess.h>
-
-static int nr_omirr_open = 0;
-static int cleared_flag = 0;
-
-static char * buffer = NULL;
-static int read_pos, write_pos;
-static int clip_pos, max_pos;
-static DECLARE_WAIT_QUEUE_HEAD(read_wait);
-static DECLARE_WAIT_QUEUE_HEAD(write_wait);
-
-static /*inline*/ int reserve_write_space(int len)
-{
- int rest = max_pos - write_pos;
-
- if(rest < len) {
- clip_pos = write_pos;
- write_pos = 0;
- rest = max_pos;
- }
- while(read_pos > write_pos && read_pos <= write_pos+len) {
- if(!nr_omirr_open)
- return 0;
- interruptible_sleep_on(&write_wait);
- }
- return 1;
-}
-
-static /*inline*/ void write_space(int len)
-{
- write_pos += len;
- wake_up_interruptible(&read_wait);
-}
-
-static /*inline*/ int reserve_read_space(int len)
-{
- int rest = clip_pos - read_pos;
-
- if(!rest) {
- read_pos = 0;
- rest = clip_pos;
- clip_pos = max_pos;
- }
- if(len > rest)
- len = rest;
- while(read_pos == write_pos) {
- interruptible_sleep_on(&read_wait);
- }
- rest = write_pos - read_pos;
- if(rest > 0 && rest < len)
- len = rest;
- return len;
-}
-
-static /*inline*/ void read_space(int len)
-{
- read_pos += len;
- if(read_pos >= clip_pos) {
- read_pos = 0;
- clip_pos = max_pos;
- }
- wake_up_interruptible(&write_wait);
-}
-
-static /*inline*/ void init_buffer(char * initxt)
-{
- int len = initxt ? strlen(initxt) : 0;
-
- if(!buffer) {
- buffer = (char*)__get_free_page(GFP_USER);
- max_pos = clip_pos = PAGE_SIZE;
- }
- read_pos = write_pos = 0;
- memcpy(buffer, initxt, len);
- write_space(len);
-}
-
-static int omirr_open(struct inode * inode, struct file * file)
-{
- if(nr_omirr_open)
- return -EAGAIN;
- nr_omirr_open++;
- if(!buffer)
- init_buffer(NULL);
- return 0;
-}
-
-static int omirr_release(struct inode * inode, struct file * file)
-{
- nr_omirr_open--;
- read_space(0);
- return 0;
-}
-
-static long omirr_read(struct inode * inode, struct file * file,
- char * buf, unsigned long count)
-{
- char * tmp;
- int len;
- int error = 0;
-
- if(!count)
- goto done;
- error = -EINVAL;
- if(!buf || count < 0)
- goto done;
-
- error = verify_area(VERIFY_WRITE, buf, count);
- if(error)
- goto done;
-
- error = -EAGAIN;
- if((file->f_flags & O_NONBLOCK) && read_pos == write_pos)
- goto done;
-
- error = len = reserve_read_space(count);
- tmp = buffer + read_pos;
- while(len) {
- put_user(*tmp++, buf++);
- len--;
- }
- read_space(error);
-done:
- return error;
-}
-
-int compute_name(struct dentry * entry, char * buf)
-{
- int len;
-
- if(IS_ROOT(entry)) {
- *buf = '/';
- return 1;
- }
- len = compute_name(entry->d_parent, buf);
- if(len > 1) {
- buf[len++] = '/';
- }
- memcpy(buf+len, entry->d_name, entry->d_len);
- return len + entry->d_len;
-}
-
-int _omirr_print(struct dentry * ent1, struct dentry * ent2,
- struct qstr * suffix, const char * fmt,
- va_list args1, va_list args2)
-{
- int count = strlen(fmt) + 10; /* estimate */
- const char * tmp = fmt;
- char lenbuf[8];
- int res;
-
- if(!buffer)
- init_buffer(NULL);
- while(*tmp) {
- while(*tmp && *tmp++ != '%') ;
- if(*tmp) {
- if(*tmp == 's') {
- char * str = va_arg(args1, char*);
- count += strlen(str);
- } else {
- (void)va_arg(args1, int);
- count += 8; /* estimate */
- }
- }
- }
- if(ent1) {
- struct dentry * dent = ent1;
- while(dent && !IS_ROOT(dent)) {
- count += dent->d_len + 1;
- dent = dent->d_parent;
- }
- count++;
- if(ent2) {
- dent = ent2;
- while(dent && !IS_ROOT(dent)) {
- count += dent->d_len + 1;
- dent = dent->d_parent;
- }
- count++;
- }
- if(suffix)
- count += suffix->len + 1;
- }
-
- if((nr_omirr_open | cleared_flag) && reserve_write_space(count)) {
- cleared_flag = 0;
- res = vsprintf(buffer+write_pos+4, fmt, args2) + 4;
- if(res > count)
- printk("omirr: format estimate was wrong\n");
- if(ent1) {
- res += compute_name(ent1, buffer+write_pos+res);
- if(ent2) {
- buffer[write_pos+res++] = '\0';
- res += compute_name(ent2, buffer+write_pos+res);
- }
- if(suffix) {
- buffer[write_pos+res++] = '/';
- memcpy(buffer+write_pos+res,
- suffix->name, suffix->len);
- res += suffix->len;
- }
- buffer[write_pos+res++] = '\0';
- buffer[write_pos+res++] = '\n';
- }
- sprintf(lenbuf, "%04d", res);
- memcpy(buffer+write_pos, lenbuf, 4);
- } else {
- if(!cleared_flag) {
- cleared_flag = 1;
- init_buffer("0007 Z\n");
- }
- res = 0;
- }
- write_space(res);
- return res;
-}
-
-int omirr_print(struct dentry * ent1, struct dentry * ent2,
- struct qstr * suffix, const char * fmt, ...)
-{
- va_list args1, args2;
- int res;
-
- /* I don't know whether I could make a simple copy of the va_list,
- * so for the safe way...
- */
- va_start(args1, fmt);
- va_start(args2, fmt);
- res = _omirr_print(ent1, ent2, suffix, fmt, args1, args2);
- va_end(args2);
- va_end(args1);
- return res;
-}
-
-int omirr_printall(struct inode * inode, const char * fmt, ...)
-{
- int res = 0;
- struct dentry * tmp = inode->i_dentry;
-
- if(tmp) do {
- va_list args1, args2;
- va_start(args1, fmt);
- va_start(args2, fmt);
- res += _omirr_print(tmp, NULL, NULL, fmt, args1, args2);
- va_end(args2);
- va_end(args1);
- tmp = tmp->d_next;
- } while(tmp != inode->i_dentry);
- return res;
-}
-
-static struct file_operations omirr_operations = {
- NULL, /* omirr_lseek */
- omirr_read,
- NULL, /* omirr_write */
- NULL, /* omirr_readdir */
- NULL, /* omirr_select */
- NULL, /* omirr_ioctl */
- NULL, /* mmap */
- omirr_open,
- NULL, /* flush */
- omirr_release,
- NULL, /* fsync */
- NULL, /* fasync */
-};
-
-struct inode_operations proc_omirr_inode_operations = {
- &omirr_operations,
-};
/*
* linux/include/asm-arm/arch-arc/time.h
*
- * Copyright (c) 1996 Russell King.
+ * Copyright (c) 1996-2000 Russell King.
*
* Changelog:
* 24-Sep-1996 RMK Created
* 10-Oct-1996 RMK Brought up to date with arch-sa110eval
* 04-Dec-1997 RMK Updated for new arch/arm/time.c
*/
-#include <asm/ioc.h>
-
-static long last_rtc_update = 0; /* last time the cmos clock got updated */
-
-extern __inline__ unsigned long gettimeoffset (void)
-{
- unsigned int count1, count2, status1, status2;
- unsigned long offset = 0;
-
- status1 = inb(IOC_IRQREQA);
- barrier ();
- outb (0, IOC_T0LATCH);
- barrier ();
- count1 = inb(IOC_T0CNTL) | (inb(IOC_T0CNTH) << 8);
- barrier ();
- status2 = inb(IOC_IRQREQA);
- barrier ();
- outb (0, IOC_T0LATCH);
- barrier ();
- count2 = inb(IOC_T0CNTL) | (inb(IOC_T0CNTH) << 8);
-
- if (count2 < count1) {
- /*
- * This means that we haven't just had an interrupt
- * while reading into status2.
- */
- if (status2 & (1 << 5))
- offset = tick;
- count1 = count2;
- } else if (count2 > count1) {
- /*
- * We have just had another interrupt while reading
- * status2.
- */
- offset += tick;
- count1 = count2;
- }
-
- count1 = LATCH - count1;
- /*
- * count1 = number of clock ticks since last interrupt
- */
- offset += count1 * tick / LATCH;
- return offset;
-}
-
-extern int iic_control (unsigned char, int, char *, int);
-
-static int set_rtc_time(unsigned long nowtime)
-{
- char buf[5], ctrl;
-
- if (iic_control(0xa1, 0, &ctrl, 1) != 0)
- printk("RTC: failed to read control reg\n");
-
- /*
- * Reset divider
- */
- ctrl |= 0x80;
-
- if (iic_control(0xa0, 0, &ctrl, 1) != 0)
- printk("RTC: failed to stop the clock\n");
-
- /*
- * We only set the time - we don't set the date.
- * This means that there is the possibility once
- * a day for the correction to disrupt the date.
- * We really ought to write the time and date, or
- * nothing at all.
- */
- buf[0] = 0;
- buf[1] = nowtime % 60; nowtime /= 60;
- buf[2] = nowtime % 60; nowtime /= 60;
- buf[3] = nowtime % 24;
-
- BIN_TO_BCD(buf[1]);
- BIN_TO_BCD(buf[2]);
- BIN_TO_BCD(buf[3]);
-
- if (iic_control(0xa0, 1, buf, 4) != 0)
- printk("RTC: Failed to set the time\n");
-
- /*
- * Re-enable divider
- */
- ctrl &= ~0x80;
-
- if (iic_control(0xa0, 0, &ctrl, 1) != 0)
- printk("RTC: failed to start the clock\n");
-
- return 0;
-}
-
-extern __inline__ unsigned long get_rtc_time(void)
-{
- unsigned int year, i;
- char buf[8];
-
- /*
- * The year is not part of the RTC counter
- * registers, and is stored in RAM. This
- * means that it will not be automatically
- * updated.
- */
- if (iic_control(0xa1, 0xc0, buf, 1) != 0)
- printk("RTC: failed to read the year\n");
-
- /*
- * If the year is before 1970, then the year
- * is actually 100 in advance. This gives us
- * a year 2070 bug...
- */
- year = 1900 + buf[0];
- if (year < 1970)
- year += 100;
-
- /*
- * Read the time and date in one go - this
- * will ensure that we don't get any effects
- * due to carry (the RTC latches the counters
- * during a read).
- */
- if (iic_control(0xa1, 2, buf, 5) != 0) {
- printk("RTC: failed to read the time and date\n");
- memset(buf, 0, sizeof(buf));
- }
-
- /*
- * The RTC combines years with date and weekday
- * with month. We need to mask off this extra
- * information before converting the date to
- * binary.
- */
- buf[4] &= 0x1f;
- buf[3] &= 0x3f;
-
- for (i = 0; i < 5; i++)
- BCD_TO_BIN(buf[i]);
-
- return mktime(year, buf[4], buf[3], buf[2], buf[1], buf[0]);
-}
+extern void ioctime_init(void);
static void timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
do_timer(regs);
-
- /* If we have an externally synchronized linux clock, then update
- * CMOS clock accordingly every ~11 minutes. Set_rtc_mmss() has to be
- * called as close as possible to 500 ms before the new second starts.
- */
- if ((time_status & STA_UNSYNC) == 0 &&
- xtime.tv_sec > last_rtc_update + 660 &&
- xtime.tv_usec >= 50000 - (tick >> 1) &&
- xtime.tv_usec < 50000 + (tick >> 1)) {
- if (set_rtc_time(xtime.tv_sec) == 0)
- last_rtc_update = xtime.tv_sec;
- else
- last_rtc_update = xtime.tv_sec - 600; /* do it again in 60 s */
- }
-
- if (!user_mode(regs))
- do_profile(instruction_pointer(regs));
+ do_set_rtc();
+ do_profile(regs);
}
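The externally-synchronised clock write-back deleted above does not disappear; it is consolidated behind do_set_rtc(), provided by the common ARM time code rather than by each machine header. The sketch below is a reconstruction of what that helper amounts to, pieced together from the removed lines; it is not standalone (xtime, tick, time_status and the set_rtc hook are the usual kernel symbols) and the exact upstream implementation may differ.

/*
 * Reconstruction, for illustration only, of the common do_set_rtc()
 * helper that replaces the per-machine RTC update loops removed above.
 */
static int (*set_rtc)(void);		/* installed by setup_timer() */
static long last_rtc_update;		/* last time the RTC was written */

static inline void do_set_rtc(void)
{
	/*
	 * If we have an externally synchronized linux clock, update the
	 * RTC roughly every 11 minutes, close to the half-second mark,
	 * exactly as the deleted per-machine copies did.
	 */
	if ((time_status & STA_UNSYNC) || set_rtc == NULL)
		return;

	if (xtime.tv_sec > last_rtc_update + 660 &&
	    xtime.tv_usec >= 50000 - (tick >> 1) &&
	    xtime.tv_usec < 50000 + (tick >> 1)) {
		if (set_rtc() == 0)
			last_rtc_update = xtime.tv_sec;
		else
			last_rtc_update = xtime.tv_sec - 600;	/* retry in 60s */
	}
}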
-static struct irqaction timerirq = {
- timer_interrupt,
- 0,
- 0,
- "timer",
- NULL,
- NULL
-};
-
/*
- * Set up timer interrupt, and return the current time in seconds.
+ * Set up timer interrupt.
*/
extern __inline__ void setup_timer(void)
{
- outb(LATCH & 255, IOC_T0LTCHL);
- outb(LATCH >> 8, IOC_T0LTCHH);
- outb(0, IOC_T0GO);
+ ioctime_init();
- xtime.tv_sec = get_rtc_time();
+ timer_irq.handler = timer_interrupt;
- setup_arm_irq(IRQ_TIMER, &timerirq);
+ setup_arm_irq(IRQ_TIMER, &timer_irq);
}
/*
* linux/include/asm-arm/arch-cl7500/time.h
*
- * Copyright (c) 1996 Russell King.
- * Copyright (C) 1999 Nexus Electronics Ltd.
+ * Copyright (c) 1996-2000 Russell King.
*
* Changelog:
* 24-Sep-1996 RMK Created
* 10-Oct-1996 RMK Brought up to date with arch-sa110eval
* 04-Dec-1997 RMK Updated for new arch/arm/time.c
- * 10-Aug-1999 PJB Converted for CL7500
*/
-#include <asm/iomd.h>
-
-static long last_rtc_update = 0; /* last time the cmos clock got updated */
-
-extern __inline__ unsigned long gettimeoffset (void)
-{
- unsigned long offset = 0;
- unsigned int count1, count2, status1, status2;
-
- status1 = IOMD_IRQREQA;
- barrier ();
- outb(0, IOMD_T0LATCH);
- barrier ();
- count1 = inb(IOMD_T0CNTL) | (inb(IOMD_T0CNTH) << 8);
- barrier ();
- status2 = inb(IOMD_IRQREQA);
- barrier ();
- outb(0, IOMD_T0LATCH);
- barrier ();
- count2 = inb(IOMD_T0CNTL) | (inb(IOMD_T0CNTH) << 8);
-
- if (count2 < count1) {
- /*
- * This means that we haven't just had an interrupt
- * while reading into status2.
- */
- if (status2 & (1 << 5))
- offset = tick;
- count1 = count2;
- } else if (count2 > count1) {
- /*
- * We have just had another interrupt while reading
- * status2.
- */
- offset += tick;
- count1 = count2;
- }
-
- count1 = LATCH - count1;
- /*
- * count1 = number of clock ticks since last interrupt
- */
- offset += count1 * tick / LATCH;
- return offset;
-}
-
-extern __inline__ unsigned long get_rtc_time(void)
-{
- return mktime(1976, 06, 24, 0, 0, 0);
-}
-
-static int set_rtc_time(unsigned long nowtime)
-{
- return 0;
-}
+extern void ioctime_init(void);
static void timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
do_timer(regs);
-
- /* If we have an externally synchronized linux clock, then update
- * CMOS clock accordingly every ~11 minutes. Set_rtc_mmss() has to be
- * called as close as possible to 500 ms before the new second starts.
- */
- if ((time_status & STA_UNSYNC) == 0 &&
- xtime.tv_sec > last_rtc_update + 660 &&
- xtime.tv_usec >= 50000 - (tick >> 1) &&
- xtime.tv_usec < 50000 + (tick >> 1)) {
- if (set_rtc_time(xtime.tv_sec) == 0)
- last_rtc_update = xtime.tv_sec;
- else
- last_rtc_update = xtime.tv_sec - 600; /* do it again in 60 s */
- }
+ do_set_rtc();
{
/* Twinkle the lights. */
}
}
- if (!user_mode(regs))
- do_profile(instruction_pointer(regs));
+ do_profile(regs);
}
-static struct irqaction timerirq = {
- timer_interrupt,
- 0,
- 0,
- "timer",
- NULL,
- NULL
-};
-
/*
- * Set up timer interrupt, and return the current time in seconds.
+ * Set up timer interrupt.
*/
extern __inline__ void setup_timer(void)
{
- outb(LATCH & 255, IOMD_T0LTCHL);
- outb(LATCH >> 8, IOMD_T0LTCHH);
- outb(0, IOMD_T0GO);
+ ioctime_init();
- xtime.tv_sec = get_rtc_time();
+ timer_irq.handler = timer_interrupt;
- setup_arm_irq(IRQ_TIMER, &timerirq);
+ setup_arm_irq(IRQ_TIMER, &timer_irq);
}
#include <linux/config.h>
#include <asm/leds.h>
-#define IRQ_TIMER IRQ_EBSA110_TIMER0
-
#define MCLK_47_8
#if defined(MCLK_42_3)
#define PIT1_COUNT 0x85A1
#define DIVISOR 2
#endif
-
-extern __inline__ unsigned long gettimeoffset (void)
-{
- return 0;
-}
static void timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
*PIT_T1 = (PIT1_COUNT) & 0xff;
*PIT_T1 = (PIT1_COUNT) >> 8;
-#ifdef CONFIG_LEDS
- {
- static int count = 50;
- if (--count == 0) {
- count = 50;
- leds_event(led_timer);
- }
- }
-#endif
-
- {
#ifdef DIVISOR
+ {
static unsigned int divisor;
- if (divisor-- == 0) {
- divisor = DIVISOR - 1;
-#else
- {
-#endif
- do_timer(regs);
- }
+ if (divisor--)
+ return;
+ divisor = DIVISOR - 1;
}
+#endif
+ do_leds();
+ do_timer(regs);
}
-static struct irqaction timerirq = {
- timer_interrupt,
- 0,
- 0,
- "timer",
- NULL,
- NULL
-};
-
/*
- * Set up timer interrupt, and return the current time in seconds.
+ * Set up timer interrupt.
*/
extern __inline__ void setup_timer(void)
{
*PIT_T1 = (PIT1_COUNT) & 0xff;
*PIT_T1 = (PIT1_COUNT) >> 8;
- /*
- * Default the date to 1 Jan 1970 0:0:0
- * You will have to run a time daemon to set the
- * clock correctly at bootup
- */
- xtime.tv_sec = mktime(1970, 1, 1, 0, 0, 0);
+ timer_irq.handler = timer_interrupt;
- setup_arm_irq(IRQ_TIMER, &timerirq);
+ setup_arm_irq(IRQ_EBSA110_TIMER0, &timer_irq);
}
+
+
#include <asm/system.h>
static int rtc_base;
-static unsigned long (*gettimeoffset)(void);
-static int (*set_rtc_mmss)(unsigned long nowtime);
-static long last_rtc_update = 0; /* last time the cmos clock got updated */
-
-#ifdef CONFIG_LEDS
-static void do_leds(void)
-{
- static unsigned int count = 50;
- static int last_pid;
-
- if (current->pid != last_pid) {
- last_pid = current->pid;
- if (last_pid)
- leds_event(led_idle_end);
- else
- leds_event(led_idle_start);
- }
-
- if (--count == 0) {
- count = 50;
- leds_event(led_timer);
- }
-}
-#else
-#define do_leds()
-#endif
#define mSEC_10_from_14 ((14318180 + 100) / 200)
do_leds();
do_timer(regs);
-
- /* If we have an externally synchronized linux clock, then update
- * CMOS clock accordingly every ~11 minutes. Set_rtc_mmss() has to be
- * called as close as possible to 500 ms before the new second starts.
- */
- if ((time_status & STA_UNSYNC) == 0 &&
- xtime.tv_sec > last_rtc_update + 660 &&
- xtime.tv_usec > 50000 - (tick >> 1) &&
- xtime.tv_usec < 50000 + (tick >> 1)) {
- if (set_rtc_mmss(xtime.tv_sec) == 0)
- last_rtc_update = xtime.tv_sec;
- else
- last_rtc_update = xtime.tv_sec - 600; /* do it again in 60 s */
- }
-
- if (!user_mode(regs))
- do_profile(instruction_pointer(regs));
+ do_set_rtc();
+ do_profile(regs);
}
-static struct irqaction isa_timer_irq = {
- isa_timer_interrupt,
- 0,
- 0,
- "timer",
- NULL,
- NULL
-};
-
static unsigned long __init get_isa_cmos_time(void)
{
unsigned int year, mon, day, hour, min, sec;
}
static int
-set_isa_cmos_time(unsigned long nowtime)
+set_isa_cmos_time(void)
{
int retval = 0;
int real_seconds, real_minutes, cmos_minutes;
unsigned char save_control, save_freq_select;
+ unsigned long nowtime = xtime.tv_sec;
save_control = CMOS_READ(RTC_CONTROL); /* tell the clock it's being set */
CMOS_WRITE((save_control|RTC_SET), RTC_CONTROL);
-static unsigned long __ebsa285_text timer1_gettimeoffset (void)
+static unsigned long timer1_gettimeoffset (void)
{
unsigned long value = LATCH - *CSR_TIMER1_VALUE;
return (tick * value) / LATCH;
}
-static void __ebsa285_text timer1_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+static void timer1_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
*CSR_TIMER1_CLR = 0;
/* Do the LEDs things */
do_leds();
-
do_timer(regs);
-
- /* If we have an externally synchronized linux clock, then update
- * CMOS clock accordingly every ~11 minutes. Set_rtc_mmss() has to be
- * called as close as possible to 500 ms before the new second starts.
- */
- if ((time_status & STA_UNSYNC) == 0 &&
- xtime.tv_sec > last_rtc_update + 660 &&
- xtime.tv_usec > 50000 - (tick >> 1) &&
- xtime.tv_usec < 50000 + (tick >> 1)) {
- if (set_rtc_mmss(xtime.tv_sec) == 0)
- last_rtc_update = xtime.tv_sec;
- else
- last_rtc_update = xtime.tv_sec - 600; /* do it again in 60 s */
- }
-
- if (!user_mode(regs))
- do_profile(instruction_pointer(regs));
-}
-
-static struct irqaction __ebsa285_data timer1_irq = {
- timer1_interrupt,
- 0,
- 0,
- "timer",
- NULL,
- NULL
-};
-
-static int
-set_dummy_time(unsigned long secs)
-{
- return 1;
+ do_set_rtc();
+ do_profile(regs);
}
/*
- * Set up timer interrupt, and return the current time in seconds.
+ * Set up timer interrupt.
*/
extern __inline__ void setup_timer(void)
{
+ int irq;
+
if (machine_is_co285())
/*
* Add-in 21285s shouldn't access the RTC
printk(KERN_WARNING "RTC: *** warning: CMOS battery bad\n");
xtime.tv_sec = get_isa_cmos_time();
- set_rtc_mmss = set_isa_cmos_time;
+ set_rtc = set_isa_cmos_time;
} else
rtc_base = 0;
}
- if (!rtc_base) {
- /*
- * Default the date to 1 Jan 1970 0:0:0
- */
- xtime.tv_sec = mktime(1970, 1, 1, 0, 0, 0);
- set_rtc_mmss = set_dummy_time;
- }
if (machine_is_ebsa285() || machine_is_co285()) {
gettimeoffset = timer1_gettimeoffset;
*CSR_TIMER1_LOAD = LATCH;
*CSR_TIMER1_CNTL = TIMER_CNTL_ENABLE | TIMER_CNTL_AUTORELOAD | TIMER_CNTL_DIV16;
- setup_arm_irq(IRQ_TIMER1, &timer1_irq);
+ timer_irq.handler = timer1_interrupt;
+ irq = IRQ_TIMER1;
} else {
/* enable PIT timer */
/* set for periodic (4) and LSB/MSB write (0x30) */
outb((mSEC_10_from_14/6) >> 8, 0x40);
gettimeoffset = isa_gettimeoffset;
-
- setup_arm_irq(IRQ_ISA_TIMER, &isa_timer_irq);
+ timer_irq.handler = isa_timer_interrupt;
+ irq = IRQ_ISA_TIMER;
}
+	setup_arm_irq(irq, &timer_irq);
}
#define UART_BASE 0xfff00000
#define INTCONT 0xffe00000
-#define update_rtc()
-
-extern __inline__ unsigned long gettimeoffset (void)
-{
- return 0;
-}
-
static void timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
static int count = 50;
do_timer(regs);
}
-static struct irqaction timerirq = {
- timer_interrupt,
- 0,
- 0,
- "timer",
- NULL,
- NULL
-};
-
extern __inline__ void setup_timer(void)
{
int tick = 3686400 / 16 / 2 / 100;
writeb(0x80, UART_BASE + 8);
writeb(0x10, UART_BASE + 0x14);
- /*
- * Default the date to 1 Jan 1970 0:0:0
- * You will have to run a time daemon to set the
- * clock correctly at bootup
- */
- xtime.tv_sec = mktime(1970, 1, 1, 0, 0, 0);
+ timer_irq.handler = timer_interrupt;
- setup_arm_irq(IRQ_TIMER, &timerirq);
+ setup_arm_irq(IRQ_TIMER, &timer_irq);
}
/*
* linux/include/asm-arm/arch-rpc/time.h
*
- * Copyright (c) 1996 Russell King.
+ * Copyright (c) 1996-2000 Russell King.
*
* Changelog:
* 24-Sep-1996 RMK Created
* 10-Oct-1996 RMK Brought up to date with arch-sa110eval
* 04-Dec-1997 RMK Updated for new arch/arm/time.c
*/
-#include <asm/iomd.h>
-
-static long last_rtc_update = 0; /* last time the cmos clock got updated */
-
-extern __inline__ unsigned long gettimeoffset (void)
-{
- unsigned long offset = 0;
- unsigned int count1, count2, status1, status2;
-
- status1 = IOMD_IRQREQA;
- barrier ();
- outb(0, IOMD_T0LATCH);
- barrier ();
- count1 = inb(IOMD_T0CNTL) | (inb(IOMD_T0CNTH) << 8);
- barrier ();
- status2 = inb(IOMD_IRQREQA);
- barrier ();
- outb(0, IOMD_T0LATCH);
- barrier ();
- count2 = inb(IOMD_T0CNTL) | (inb(IOMD_T0CNTH) << 8);
-
- if (count2 < count1) {
- /*
- * This means that we haven't just had an interrupt
- * while reading into status2.
- */
- if (status2 & (1 << 5))
- offset = tick;
- count1 = count2;
- } else if (count2 > count1) {
- /*
- * We have just had another interrupt while reading
- * status2.
- */
- offset += tick;
- count1 = count2;
- }
-
- count1 = LATCH - count1;
- /*
- * count1 = number of clock ticks since last interrupt
- */
- offset += count1 * tick / LATCH;
- return offset;
-}
-
-extern int iic_control(unsigned char, int, char *, int);
-
-static int set_rtc_time(unsigned long nowtime)
-{
- char buf[5], ctrl;
-
- if (iic_control(0xa1, 0, &ctrl, 1) != 0)
- printk("RTC: failed to read control reg\n");
-
- /*
- * Reset divider
- */
- ctrl |= 0x80;
-
- if (iic_control(0xa0, 0, &ctrl, 1) != 0)
- printk("RTC: failed to stop the clock\n");
-
- /*
- * We only set the time - we don't set the date.
- * This means that there is the possibility once
- * a day for the correction to disrupt the date.
- * We really ought to write the time and date, or
- * nothing at all.
- */
- buf[0] = 0;
- buf[1] = nowtime % 60; nowtime /= 60;
- buf[2] = nowtime % 60; nowtime /= 60;
- buf[3] = nowtime % 24;
-
- BIN_TO_BCD(buf[1]);
- BIN_TO_BCD(buf[2]);
- BIN_TO_BCD(buf[3]);
-
- if (iic_control(0xa0, 1, buf, 4) != 0)
- printk("RTC: Failed to set the time\n");
-
- /*
- * Re-enable divider
- */
- ctrl &= ~0x80;
-
- if (iic_control(0xa0, 0, &ctrl, 1) != 0)
- printk("RTC: failed to start the clock\n");
-
- return 0;
-}
-
-extern __inline__ unsigned long get_rtc_time(void)
-{
- unsigned int year, i;
- char buf[8];
-
- /*
- * The year is not part of the RTC counter
- * registers, and is stored in RAM. This
- * means that it will not be automatically
- * updated.
- */
- if (iic_control(0xa1, 0xc0, buf, 1) != 0)
- printk("RTC: failed to read the year\n");
-
- /*
- * If the year is before 1970, then the year
- * is actually 100 in advance. This gives us
- * a year 2070 bug...
- */
- year = 1900 + buf[0];
- if (year < 1970)
- year += 100;
-
- /*
- * Read the time and date in one go - this
- * will ensure that we don't get any effects
- * due to carry (the RTC latches the counters
- * during a read).
- */
- if (iic_control(0xa1, 2, buf, 5) != 0) {
- printk("RTC: failed to read the time and date\n");
- memset(buf, 0, sizeof(buf));
- }
-
- /*FIXME:
- * This doesn't seem to work. Does RISC OS
- * actually use the RTC year? It doesn't
- * seem to. In that case, how does it update
- * the CMOS year?
- */
- /*year += (buf[3] >> 6) & 3;*/
-
- /*
- * The RTC combines years with date and weekday
- * with month. We need to mask off this extra
- * information before converting the date to
- * binary.
- */
- buf[4] &= 0x1f;
- buf[3] &= 0x3f;
-
- for (i = 0; i < 5; i++)
- BCD_TO_BIN(buf[i]);
-
- return mktime(year, buf[4], buf[3], buf[2], buf[1], buf[0]);
-}
+extern void ioctime_init(void);
static void timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
do_timer(regs);
-
- /* If we have an externally synchronized linux clock, then update
- * CMOS clock accordingly every ~11 minutes. Set_rtc_mmss() has to be
- * called as close as possible to 500 ms before the new second starts.
- */
- if ((time_status & STA_UNSYNC) == 0 &&
- xtime.tv_sec > last_rtc_update + 660 &&
- xtime.tv_usec >= 50000 - (tick >> 1) &&
- xtime.tv_usec < 50000 + (tick >> 1)) {
- if (set_rtc_time(xtime.tv_sec) == 0)
- last_rtc_update = xtime.tv_sec;
- else
- last_rtc_update = xtime.tv_sec - 600; /* do it again in 60 s */
- }
-
- if (!user_mode(regs))
- do_profile(instruction_pointer(regs));
+ do_set_rtc();
+ do_profile(regs);
}
-static struct irqaction timerirq = {
- timer_interrupt,
- 0,
- 0,
- "timer",
- NULL,
- NULL
-};
-
/*
- * Set up timer interrupt, and return the current time in seconds.
+ * Set up timer interrupt.
*/
extern __inline__ void setup_timer(void)
{
- outb(LATCH & 255, IOMD_T0LTCHL);
- outb(LATCH >> 8, IOMD_T0LTCHH);
- outb(0, IOMD_T0GO);
+ ioctime_init();
- xtime.tv_sec = get_rtc_time();
+ timer_irq.handler = timer_interrupt;
- setup_arm_irq(IRQ_TIMER, &timerirq);
+ setup_arm_irq(IRQ_TIMER, &timer_irq);
}
*
*/
-#include <linux/config.h>
-
#ifdef CONFIG_BLK_DEV_IDE
#include <asm/irq.h>
" b 1f @ Seems we must align the next \n" \
" .align 5 @ instruction on a cache line \n" \
"1: mcr p15, 0, %0, c15, c8, 2 @ Wait for interrupts \n" \
+" mov r0, r0 @ insert NOP to ensure SA1100 re-awakes\n" \
" mcr p15, 0, %0, c15, c1, 2 @ Reenable clock switching \n" \
: : "r" (&ICIP) : "cc" ); \
} while (0)
* (C) 1999 Nicolas Pitre <nico@cam.org>
*/
-#include <linux/config.h>
#if defined(CONFIG_SA1100_EMPEG) || \
defined(CONFIG_SA1100_VICTOR) || \
unsigned int sum, tmp1;
__asm__ __volatile__(
- "sub %2, %2, #5 @ ip_fast_csum
- ldr %0, [%1], #4
+ "ldr %0, [%1], #4 @ ip_fast_csum
ldr %3, [%1], #4
+ sub %2, %2, #5
adds %0, %0, %3
ldr %3, [%1], #4
adcs %0, %0, %3
switch (size) {
case 1: __asm__ __volatile__ ("swpb %0, %1, [%2]" : "=r" (x) : "r" (x), "r" (ptr) : "memory");
break;
- case 2: abort ();
case 4: __asm__ __volatile__ ("swp %0, %1, [%2]" : "=r" (x) : "r" (x), "r" (ptr) : "memory");
break;
default: arm_invalidptr(xchg_str, size);
}
#define set_cr(x) \
- do { \
__asm__ __volatile__( \
"mcr p15, 0, %0, c1, c0 @ set CR" \
- : : "r" (x)); \
- } while (0)
+ : : "r" (x))
extern unsigned long cr_no_alignment; /* defined in entry-armv.S */
extern unsigned long cr_alignment; /* defined in entry-armv.S */
* Save the current interrupt enable state & disable IRQs
*/
#define __save_flags_cli(x) \
- do { \
- unsigned long temp; \
- __asm__ __volatile__( \
- "mrs %1, cpsr @ save_flags_cli\n" \
-" and %0, %1, #192\n" \
-" orr %1, %1, #128\n" \
-" msr cpsr, %1" \
- : "=r" (x), "=r" (temp) \
- : \
- : "memory"); \
- } while (0)
+ ({ \
+ unsigned long temp; \
+ __asm__ __volatile__( \
+ "mrs %0, cpsr @ save_flags_cli\n" \
+" orr %1, %0, #128\n" \
+" msr cpsr_c, %1" \
+ : "=r" (x), "=r" (temp) \
+ : \
+ : "memory"); \
+ })
/*
* Enable IRQs
*/
#define __sti() \
- do { \
- unsigned long temp; \
- __asm__ __volatile__( \
+ ({ \
+ unsigned long temp; \
+ __asm__ __volatile__( \
"mrs %0, cpsr @ sti\n" \
" bic %0, %0, #128\n" \
-" msr cpsr, %0" \
- : "=r" (temp) \
- : \
- : "memory"); \
- } while(0)
+" msr cpsr_c, %0" \
+ : "=r" (temp) \
+ : \
+ : "memory"); \
+ })
/*
* Disable IRQs
*/
#define __cli() \
- do { \
- unsigned long temp; \
- __asm__ __volatile__( \
+ ({ \
+ unsigned long temp; \
+ __asm__ __volatile__( \
"mrs %0, cpsr @ cli\n" \
" orr %0, %0, #128\n" \
-" msr cpsr, %0" \
- : "=r" (temp) \
- : \
- : "memory"); \
- } while(0)
+" msr cpsr_c, %0" \
+ : "=r" (temp) \
+ : \
+ : "memory"); \
+ })
/*
* save current IRQ & FIQ state
*/
#define __save_flags(x) \
- do { \
- __asm__ __volatile__( \
+ __asm__ __volatile__( \
"mrs %0, cpsr @ save_flags\n" \
-" and %0, %0, #192" \
: "=r" (x) \
: \
- : "memory"); \
- } while (0)
+ : "memory")
/*
* restore saved IRQ & FIQ state
*/
#define __restore_flags(x) \
- do { \
- unsigned long temp; \
- __asm__ __volatile__( \
- "mrs %0, cpsr @ restore_flags\n" \
-" bic %0, %0, #192\n" \
-" orr %0, %0, %1\n" \
-" msr cpsr, %0" \
- : "=&r" (temp) \
- : "r" (x) \
- : "memory"); \
- } while (0)
+ __asm__ __volatile__( \
+ "msr cpsr_c, %0 @ restore_flags\n" \
+ : \
+ : "r" (x) \
+ : "memory")
/* For spinlocks etc */
#define local_irq_save(x) __save_flags_cli(x)
#define BITS_PER_LONG 32
+/* DMA addresses are 32 bits wide. */
+
+typedef u32 dma_addr_t;
+
#endif /* __KERNEL__ */
#endif
--- /dev/null
+#ifndef _ASM_IA64_A_OUT_H
+#define _ASM_IA64_A_OUT_H
+
+/*
+ * No a.out format has been (or should be) defined so this file is
+ * just a dummy that allows us to get binfmt_elf compiled. It
+ * probably would be better to clean up binfmt_elf.c so it does not
+ * necessarily depend on there being a.out support.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <linux/types.h>
+
+struct exec
+{
+ unsigned long a_info;
+ unsigned long a_text;
+ unsigned long a_data;
+ unsigned long a_bss;
+ unsigned long a_entry;
+};
+
+#define N_TXTADDR(x) 0
+#define N_DATADDR(x) 0
+#define N_BSSADDR(x) 0
+#define N_DRSIZE(x) 0
+#define N_TRSIZE(x) 0
+#define N_SYMSIZE(x) 0
+#define N_TXTOFF(x) 0
+
+#ifdef __KERNEL__
+# define STACK_TOP 0xa000000000000000UL
+# define IA64_RBS_BOT (STACK_TOP - 0x80000000L) /* bottom of register backing store */
+#endif
+
+#endif /* _ASM_IA64_A_OUT_H */
--- /dev/null
+#ifndef _ASM_IA64_ACPI_EXT_H
+#define _ASM_IA64_ACPI_EXT_H
+
+/*
+ * Advanced Configuration and Power Interface
+ * Based on 'ACPI Specification 1.0b' February 2, 1999
+ * and 'IA-64 Extensions to the ACPI Specification' Rev 0.6
+ *
+ * Copyright (C) 1999 VA Linux Systems
+ * Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
+ */
+
+#include <linux/config.h>
+
+#include <linux/types.h>
+
+#define ACPI_RSDP_SIG "RSD PTR " /* Trailing space required */
+#define ACPI_RSDP_SIG_LEN 8
+typedef struct {
+ char signature[8];
+ u8 checksum;
+ char oem_id[6];
+ char reserved; /* Must be 0 */
+ struct acpi_rsdt *rsdt;
+} acpi_rsdp_t;
+
+typedef struct {
+ char signature[4];
+ u32 length;
+ u8 revision;
+ u8 checksum;
+ char oem_id[6];
+ char oem_table_id[8];
+ u32 oem_revision;
+ u32 creator_id;
+ u32 creator_revision;
+ char reserved[4];
+} acpi_desc_table_hdr_t;
+
+#define ACPI_RSDT_SIG "RSDT"
+#define ACPI_RSDT_SIG_LEN 4
+typedef struct acpi_rsdt {
+ acpi_desc_table_hdr_t header;
+ unsigned long entry_ptrs[1]; /* Not really . . . */
+} acpi_rsdt_t;
+
+#define ACPI_SAPIC_SIG "SPIC"
+#define ACPI_SAPIC_SIG_LEN 4
+typedef struct {
+ acpi_desc_table_hdr_t header;
+ unsigned long interrupt_block;
+} acpi_sapic_t;
+
+/* SAPIC structure types */
+#define ACPI_ENTRY_LOCAL_SAPIC 0
+#define ACPI_ENTRY_IO_SAPIC 1
+#define ACPI_ENTRY_INT_SRC_OVERRIDE 2
+#define ACPI_ENTRY_PLATFORM_INT_SOURCE 3 /* Unimplemented */
+
+/* Local SAPIC flags */
+#define LSAPIC_ENABLED (1<<0)
+#define LSAPIC_PERFORMANCE_RESTRICTED (1<<1)
+#define LSAPIC_PRESENT (1<<2)
+
+typedef struct {
+ u8 type;
+ u8 length;
+ u16 acpi_processor_id;
+ u16 flags;
+ u8 id;
+ u8 eid;
+} acpi_entry_lsapic_t;
+
+typedef struct {
+ u8 type;
+ u8 length;
+ u16 reserved;
+	u32 irq_base; /* start of IRQs this IOSAPIC is responsible for. */
+ unsigned long address; /* Address of this IOSAPIC */
+} acpi_entry_iosapic_t;
+
+/* Defines legacy IRQ->pin mapping */
+typedef struct {
+ u8 type;
+ u8 length;
+ u8 bus; /* Constant 0 == ISA */
+ u8 isa_irq; /* ISA IRQ # */
+ u8 pin; /* called vector in spec; really IOSAPIC pin number */
+ u32 flags; /* Edge/Level trigger & High/Low active */
+ u8 reserved[6];
+} acpi_entry_int_override_t;
+#define INT_OVERRIDE_ACTIVE_LOW 0x03
+#define INT_OVERRIDE_LEVEL_TRIGGER 0x0d
+
+typedef struct {
+ u8 type;
+ u8 length;
+ u32 flags;
+ u8 int_type;
+ u8 id;
+ u8 eid;
+ u8 iosapic_vector;
+ unsigned long reserved;
+ unsigned long global_vector;
+} acpi_entry_platform_src_t;
+
+extern int acpi_parse(acpi_rsdp_t *);
+extern const char *acpi_get_sysname (void);
+
+extern void (*acpi_idle) (void); /* power-management idle function, if any */
+
+#endif /* _ASM_IA64_ACPI_EXT_H */
--- /dev/null
+#ifndef _ASM_IA64_ATOMIC_H
+#define _ASM_IA64_ATOMIC_H
+
+/*
+ * Atomic operations that C can't guarantee us. Useful for
+ * resource counting etc..
+ *
+ * NOTE: don't mess with the types below! The "unsigned long" and
+ * "int" types were carefully placed so as to ensure proper operation
+ * of the macros.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+#include <linux/config.h>
+#include <linux/types.h>
+
+#include <asm/system.h>
+
+/*
+ * Make sure gcc doesn't try to be clever and move things around
+ * on us. We need to use _exactly_ the address the user gave us,
+ * not some alias that contains the same information.
+ */
+#define __atomic_fool_gcc(x) (*(volatile struct { int a[100]; } *)x)
+
+/*
+ * On IA-64, counter must always be volatile to ensure that the
+ * memory accesses are ordered.
+ */
+typedef struct { volatile __s32 counter; } atomic_t;
+
+#define ATOMIC_INIT(i) ((atomic_t) { (i) })
+
+#define atomic_read(v) ((v)->counter)
+#define atomic_set(v,i) (((v)->counter) = (i))
+
+static __inline__ int
+ia64_atomic_add (int i, atomic_t *v)
+{
+ __s32 old, new;
+ CMPXCHG_BUGCHECK_DECL
+
+ do {
+ CMPXCHG_BUGCHECK(v);
+ old = atomic_read(v);
+ new = old + i;
+	} while (ia64_cmpxchg(v, old, new, sizeof(atomic_t)) != old);
+ return new;
+}
+
+static __inline__ int
+ia64_atomic_sub (int i, atomic_t *v)
+{
+ __s32 old, new;
+ CMPXCHG_BUGCHECK_DECL
+
+ do {
+ CMPXCHG_BUGCHECK(v);
+ old = atomic_read(v);
+ new = old - i;
+ } while (ia64_cmpxchg(v, old, new, sizeof(atomic_t)) != old);
+ return new;
+}
+
+/*
+ * Atomically add I to V and return TRUE if the resulting value is
+ * negative.
+ */
+static __inline__ int
+atomic_add_negative (int i, atomic_t *v)
+{
+ return ia64_atomic_add(i, v) < 0;
+}
+
+#define atomic_add_return(i,v) \
+ ((__builtin_constant_p(i) && \
+ ( (i == 1) || (i == 4) || (i == 8) || (i == 16) \
+ || (i == -1) || (i == -4) || (i == -8) || (i == -16))) \
+ ? ia64_fetch_and_add(i, v) \
+ : ia64_atomic_add(i, v))
+
+#define atomic_sub_return(i,v) \
+ ((__builtin_constant_p(i) && \
+ ( (i == 1) || (i == 4) || (i == 8) || (i == 16) \
+ || (i == -1) || (i == -4) || (i == -8) || (i == -16))) \
+ ? ia64_fetch_and_add(-i, v) \
+ : ia64_atomic_sub(i, v))
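+/*
+ * Note: the constants singled out above (+/-1, 4, 8, 16) are exactly the
+ * increments that the IA-64 fetchadd instruction can encode as an
+ * immediate; any other increment falls back to the cmpxchg loop.
+ * Illustrative only:
+ *
+ *	atomic_add_return(4, &v);	can become a single fetchadd
+ *	atomic_add_return(3, &v);	goes through ia64_atomic_add()
+ */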
+
+#define atomic_dec_return(v) atomic_sub_return(1, (v))
+#define atomic_inc_return(v) atomic_add_return(1, (v))
+
+#define atomic_sub_and_test(i,v) (atomic_sub_return((i), (v)) == 0)
+#define atomic_dec_and_test(v) (atomic_sub_return(1, (v)) == 0)
+
+#define atomic_add(i,v) atomic_add_return((i), (v))
+#define atomic_sub(i,v) atomic_sub_return((i), (v))
+#define atomic_inc(v) atomic_add(1, (v))
+#define atomic_dec(v) atomic_sub(1, (v))
+
+#endif /* _ASM_IA64_ATOMIC_H */
--- /dev/null
+#ifndef _ASM_IA64_BITOPS_H
+#define _ASM_IA64_BITOPS_H
+
+/*
+ * Copyright (C) 1998-2000 Hewlett-Packard Co
+ * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * 02/04/00 D. Mosberger Require 64-bit alignment for bitops, per suggestion from davem
+ */
+
+#include <asm/system.h>
+
+/*
+ * These operations need to be atomic. The address must be "long"
+ * aligned.
+ *
+ * bit 0 is the LSB of addr; bit 32 is the LSB of (addr+1).
+ */
+
+extern __inline__ void
+set_bit (int nr, volatile void *addr)
+{
+ __u64 bit, old, new;
+ volatile __u64 *m;
+ CMPXCHG_BUGCHECK_DECL
+
+ m = (volatile __u64 *) addr + (nr >> 6);
+ bit = 1UL << (nr & 63);
+ do {
+ CMPXCHG_BUGCHECK(m);
+ old = *m;
+ new = old | bit;
+ } while (cmpxchg(m, old, new) != old);
+}
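+/*
+ * Worked example of the indexing above: set_bit(65, p) operates on the
+ * 64-bit word p[65 >> 6] == p[1] and ORs in 1UL << (65 & 63) == 1UL << 1,
+ * i.e. bit 1 of the second quadword.
+ */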
+
+extern __inline__ void
+clear_bit (int nr, volatile void *addr)
+{
+ __u64 mask, old, new;
+ volatile __u64 *m;
+ CMPXCHG_BUGCHECK_DECL
+
+ m = (volatile __u64 *) addr + (nr >> 6);
+ mask = ~(1UL << (nr & 63));
+ do {
+ CMPXCHG_BUGCHECK(m);
+ old = *m;
+ new = old & mask;
+ } while (cmpxchg(m, old, new) != old);
+}
+
+extern __inline__ void
+change_bit (int nr, volatile void *addr)
+{
+ __u64 bit, old, new;
+ volatile __u64 *m;
+ CMPXCHG_BUGCHECK_DECL
+
+ m = (volatile __u64 *) addr + (nr >> 6);
+ bit = (1UL << (nr & 63));
+ do {
+ CMPXCHG_BUGCHECK(m);
+ old = *m;
+ new = old ^ bit;
+ } while (cmpxchg(m, old, new) != old);
+}
+
+extern __inline__ int
+test_and_set_bit (int nr, volatile void *addr)
+{
+ __u64 bit, old, new;
+ volatile __u64 *m;
+ CMPXCHG_BUGCHECK_DECL
+
+ m = (volatile __u64 *) addr + (nr >> 6);
+ bit = 1UL << (nr & 63);
+ do {
+ CMPXCHG_BUGCHECK(m);
+ old = *m;
+ new = old | bit;
+ } while (cmpxchg(m, old, new) != old);
+ return (old & bit) != 0;
+}
+
+extern __inline__ int
+test_and_clear_bit (int nr, volatile void *addr)
+{
+ __u64 mask, old, new;
+ volatile __u64 *m;
+ CMPXCHG_BUGCHECK_DECL
+
+ m = (volatile __u64 *) addr + (nr >> 6);
+ mask = ~(1UL << (nr & 63));
+ do {
+ CMPXCHG_BUGCHECK(m);
+ old = *m;
+ new = old & mask;
+ } while (cmpxchg(m, old, new) != old);
+ return (old & ~mask) != 0;
+}
+
+extern __inline__ int
+test_and_change_bit (int nr, volatile void *addr)
+{
+ __u64 bit, old, new;
+ volatile __u64 *m;
+ CMPXCHG_BUGCHECK_DECL
+
+ m = (volatile __u64 *) addr + (nr >> 6);
+ bit = (1UL << (nr & 63));
+ do {
+ CMPXCHG_BUGCHECK(m);
+ old = *m;
+ new = old ^ bit;
+ } while (cmpxchg(m, old, new) != old);
+ return (old & bit) != 0;
+}
+
+extern __inline__ int
+test_bit (int nr, volatile void *addr)
+{
+ return 1UL & (((const volatile __u64 *) addr)[nr >> 6] >> (nr & 63));
+}
+
+/*
+ * ffz = Find First Zero in word. Undefined if no zero exists,
+ * so code should check against ~0UL first..
+ */
+extern inline unsigned long
+ffz (unsigned long x)
+{
+ unsigned long result;
+
+ __asm__ ("popcnt %0=%1" : "=r" (result) : "r" (x & (~x - 1)));
+ return result;
+}
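+/*
+ * Worked example of the expression above: for x == 0xb (binary ...1011)
+ * the lowest zero is bit 2.  x & (~x - 1) isolates exactly the run of
+ * one-bits below that zero (0x3 here), so popcnt returns 2.
+ */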
+
+#ifdef __KERNEL__
+
+/*
+ * Find the most significant bit that is set (undefined if no bit is
+ * set).
+ */
+static inline unsigned long
+ia64_fls (unsigned long x)
+{
+ double d = x;
+ long exp;
+
+ __asm__ ("getf.exp %0=%1" : "=r"(exp) : "f"(d));
+ return exp - 0xffff;
+}
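+/*
+ * Worked example: for x == 4 the conversion yields d == 4.0, whose biased
+ * exponent as extracted by getf.exp is 0xffff + 2, so ia64_fls(4) returns
+ * 2, the zero-based index of the most significant set bit.
+ */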
+/*
+ * ffs: find first bit set. This is defined the same way as
+ * the libc and compiler builtin ffs routines, therefore
+ * differs in spirit from the above ffz (man ffs).
+ */
+#define ffs(x) __builtin_ffs(x)
+
+/*
+ * hweightN: returns the hamming weight (i.e. the number
+ * of bits set) of a N-bit word
+ */
+extern __inline__ unsigned long
+hweight64 (unsigned long x)
+{
+ unsigned long result;
+ __asm__ ("popcnt %0=%1" : "=r" (result) : "r" (x));
+ return result;
+}
+
+#define hweight32(x) hweight64 ((x) & 0xfffffffful)
+#define hweight16(x) hweight64 ((x) & 0xfffful)
+#define hweight8(x) hweight64 ((x) & 0xfful)
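+/* For example, hweight8(0xa5) == 4: binary 10100101 has four bits set. */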
+
+#endif /* __KERNEL__ */
+
+/*
+ * Find next zero bit in a bitmap reasonably efficiently..
+ */
+extern inline int
+find_next_zero_bit (void *addr, unsigned long size, unsigned long offset)
+{
+ unsigned long *p = ((unsigned long *) addr) + (offset >> 6);
+ unsigned long result = offset & ~63UL;
+ unsigned long tmp;
+
+ if (offset >= size)
+ return size;
+ size -= result;
+ offset &= 63UL;
+ if (offset) {
+ tmp = *(p++);
+ tmp |= ~0UL >> (64-offset);
+ if (size < 64)
+ goto found_first;
+ if (~tmp)
+ goto found_middle;
+ size -= 64;
+ result += 64;
+ }
+ while (size & ~63UL) {
+ if (~(tmp = *(p++)))
+ goto found_middle;
+ result += 64;
+ size -= 64;
+ }
+ if (!size)
+ return result;
+ tmp = *p;
+found_first:
+ tmp |= ~0UL << size;
+found_middle:
+ return result + ffz(tmp);
+}
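+/*
+ * Illustrative use (made-up bitmap, not kernel data): with map[0] all
+ * ones and map[1] == 0xf,
+ *
+ *	unsigned long map[4] = { ~0UL, 0xfUL, 0, 0 };
+ *	find_next_zero_bit(map, 256, 0);
+ *
+ * returns 68, the first clear bit after the 68 set bits.
+ */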
+
+/*
+ * The optimizer actually generates good code for this case.
+ */
+#define find_first_zero_bit(addr, size) find_next_zero_bit((addr), (size), 0)
+
+#ifdef __KERNEL__
+
+#define ext2_set_bit test_and_set_bit
+#define ext2_clear_bit test_and_clear_bit
+#define ext2_test_bit test_bit
+#define ext2_find_first_zero_bit find_first_zero_bit
+#define ext2_find_next_zero_bit find_next_zero_bit
+
+/* Bitmap functions for the minix filesystem. */
+#define minix_set_bit(nr,addr) test_and_set_bit(nr,addr)
+#define minix_clear_bit(nr,addr) test_and_clear_bit(nr,addr)
+#define minix_test_bit(nr,addr) test_bit(nr,addr)
+#define minix_find_first_zero_bit(addr,size) find_first_zero_bit(addr,size)
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_IA64_BITOPS_H */
--- /dev/null
+#ifndef _ASM_IA64_BREAK_H
+#define _ASM_IA64_BREAK_H
+
+/*
+ * IA-64 Linux break numbers.
+ *
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+/*
+ * OS-specific debug break numbers:
+ */
+#define __IA64_BREAK_KDB 0x80100
+
+/*
+ * OS-specific break numbers:
+ */
+#define __IA64_BREAK_SYSCALL 0x100000
+
+#endif /* _ASM_IA64_BREAK_H */
--- /dev/null
+/*
+ * This is included by init/main.c to check for architecture-dependent bugs.
+ *
+ * Needs:
+ * void check_bugs(void);
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <asm/processor.h>
+
+/*
+ * I don't know of any ia-64 bugs yet..
+ */
+static void
+check_bugs (void)
+{
+}
--- /dev/null
+#ifndef _ASM_IA64_BYTEORDER_H
+#define _ASM_IA64_BYTEORDER_H
+
+/*
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <asm/types.h>
+
+static __inline__ __const__ __u64
+__ia64_swab64 (__u64 x)
+{
+ __u64 result;
+
+ __asm__ ("mux1 %0=%1,@rev" : "=r" (result) : "r" (x));
+ return result;
+}
+
+static __inline__ __const__ __u32
+__ia64_swab32 (__u32 x)
+{
+ return __ia64_swab64 (x) >> 32;
+}
+
+static __inline__ __const__ __u16
+__ia64_swab16(__u16 x)
+{
+ return __ia64_swab64 (x) >> 48;
+}
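+/*
+ * Worked example: __ia64_swab32(0x12345678) byte-reverses the value as a
+ * zero-extended 64-bit quantity (giving 0x7856341200000000) and shifts
+ * the interesting half back down, yielding 0x78563412; __ia64_swab16()
+ * does the same with a 48-bit shift.
+ */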
+
+#define __arch__swab64(x) __ia64_swab64 (x)
+#define __arch__swab32(x) __ia64_swab32 (x)
+#define __arch__swab16(x) __ia64_swab16 (x)
+
+#define __BYTEORDER_HAS_U64__
+
+#include <linux/byteorder/little_endian.h>
+
+#endif /* _ASM_IA64_BYTEORDER_H */
--- /dev/null
+#ifndef _ASM_IA64_CACHE_H
+#define _ASM_IA64_CACHE_H
+
+/*
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+/* Bytes per L1 (data) cache line. */
+#define L1_CACHE_BYTES 64
+
+#endif /* _ASM_IA64_CACHE_H */
--- /dev/null
+#ifndef _ASM_IA64_CHECKSUM_H
+#define _ASM_IA64_CHECKSUM_H
+
+/*
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+/*
+ * This is a version of ip_compute_csum() optimized for IP headers,
+ * which always checksum on 4 octet boundaries.
+ */
+extern unsigned short ip_fast_csum (unsigned char * iph, unsigned int ihl);
+
+/*
+ * Computes the checksum of the TCP/UDP pseudo-header and returns a 16-bit
+ * checksum, already complemented.
+ */
+extern unsigned short int csum_tcpudp_magic (unsigned long saddr,
+ unsigned long daddr,
+ unsigned short len,
+ unsigned short proto,
+ unsigned int sum);
+
+extern unsigned int csum_tcpudp_nofold (unsigned long saddr,
+ unsigned long daddr,
+ unsigned short len,
+ unsigned short proto,
+ unsigned int sum);
+
+/*
+ * Computes the checksum of a memory block at buff, length len,
+ * and adds in "sum" (32-bit)
+ *
+ * returns a 32-bit number suitable for feeding into itself
+ * or csum_tcpudp_magic
+ *
+ * this function must be called with even lengths, except
+ * for the last fragment, which may be odd
+ *
+ * it's best to have buff aligned on a 32-bit boundary
+ */
+extern unsigned int csum_partial (const unsigned char * buff, int len,
+ unsigned int sum);
+
+/*
+ * Same as csum_partial, but copies from src while it checksums.
+ *
+ * Here it is even more important to align src and dst on a 32-bit (or
+ * even better 64-bit) boundary.
+ */
+extern unsigned int csum_partial_copy (const char *src, char *dst, int len,
+ unsigned int sum);
+
+/*
+ * The same as csum_partial, but copies from user space (but on the
+ * ia-64 we have just one address space, so this is identical to the
+ * above).
+ *
+ * This is obsolete and will go away.
+ */
+#define csum_partial_copy_fromuser csum_partial_copy
+
+/*
+ * This is a new version of the above that records errors it finds in
+ * *errp, but continues and zeros the rest of the buffer.
+ */
+extern unsigned int csum_partial_copy_from_user (const char *src, char *dst,
+ int len, unsigned int sum,
+ int *errp);
+
+extern unsigned int csum_partial_copy_nocheck (const char *src, char *dst,
+ int len, unsigned int sum);
+
+/*
+ * This routine is used for miscellaneous IP-like checksums, mainly in
+ * icmp.c
+ */
+extern unsigned short ip_compute_csum (unsigned char *buff, int len);
+
+/*
+ * Fold a partial checksum without adding pseudo headers.
+ */
+static inline unsigned short
+csum_fold (unsigned int sum)
+{
+ sum = (sum & 0xffff) + (sum >> 16);
+ sum = (sum & 0xffff) + (sum >> 16);
+ return ~sum;
+}
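+/*
+ * Worked example: csum_fold(0x12345678) adds the two halves
+ * (0x5678 + 0x1234 == 0x68ac), has no carry left to fold on the second
+ * pass, and returns the one's complement, 0x9753.
+ */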
+
+#define _HAVE_ARCH_IPV6_CSUM
+extern unsigned short int csum_ipv6_magic (struct in6_addr *saddr,
+ struct in6_addr *daddr,
+ __u16 len,
+ unsigned short proto,
+ unsigned int sum);
+
+#endif /* _ASM_IA64_CHECKSUM_H */
--- /dev/null
+#ifndef _ASM_IA64_CURRENT_H
+#define _ASM_IA64_CURRENT_H
+
+/*
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+/* In kernel mode, thread pointer (r13) is used to point to the
+ current task structure. */
+register struct task_struct *current asm ("r13");
+
+#endif /* _ASM_IA64_CURRENT_H */
--- /dev/null
+#ifndef _ASM_IA64_DELAY_H
+#define _ASM_IA64_DELAY_H
+
+/*
+ * Delay routines using a pre-computed "cycles/usec" value.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999 VA Linux Systems
+ * Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
+ * Copyright (C) 1999 Asit Mallick <asit.k.mallick@intel.com>
+ * Copyright (C) 1999 Don Dugger <don.dugger@intel.com>
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+
+#include <asm/processor.h>
+
+extern __inline__ void
+ia64_set_itm (unsigned long val)
+{
+ __asm__ __volatile__("mov cr.itm=%0;; srlz.d;;" :: "r"(val) : "memory");
+}
+
+extern __inline__ unsigned long
+ia64_get_itm (void)
+{
+ unsigned long result;
+
+ __asm__ __volatile__("mov %0=cr.itm;; srlz.d;;" : "=r"(result) :: "memory");
+ return result;
+}
+
+extern __inline__ void
+ia64_set_itv (unsigned char vector, unsigned char masked)
+{
+ if (masked > 1)
+ masked = 1;
+
+ __asm__ __volatile__("mov cr.itv=%0;; srlz.d;;"
+ :: "r"((masked << 16) | vector) : "memory");
+}
+
+extern __inline__ void
+ia64_set_itc (unsigned long val)
+{
+ __asm__ __volatile__("mov ar.itc=%0;; srlz.d;;" :: "r"(val) : "memory");
+}
+
+extern __inline__ unsigned long
+ia64_get_itc (void)
+{
+ unsigned long result;
+
+ __asm__ __volatile__("mov %0=ar.itc" : "=r"(result) :: "memory");
+ return result;
+}
+
+extern __inline__ void
+__delay (unsigned long loops)
+{
+ unsigned long saved_ar_lc;
+
+ if (loops < 1)
+ return;
+
+ __asm__ __volatile__("mov %0=ar.lc;;" : "=r"(saved_ar_lc));
+ __asm__ __volatile__("mov ar.lc=%0;;" :: "r"(loops - 1));
+ __asm__ __volatile__("1:\tbr.cloop.sptk.few 1b;;");
+ __asm__ __volatile__("mov ar.lc=%0" :: "r"(saved_ar_lc));
+}
+
+extern __inline__ void
+udelay (unsigned long usecs)
+{
+#ifdef CONFIG_IA64_SOFTSDV_HACKS
+ while (usecs--)
+ ;
+#else
+ unsigned long start = ia64_get_itc();
+ unsigned long cycles = usecs*my_cpu_data.cyc_per_usec;
+
+ while (ia64_get_itc() - start < cycles)
+ /* skip */;
+#endif /* CONFIG_IA64_SOFTSDV_HACKS */
+}
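+/*
+ * Usage sketch: udelay(50) simply spins until the interval time counter
+ * (ar.itc) has advanced by about 50 * cyc_per_usec cycles (or counts the
+ * argument down when the SoftSDV hack is enabled).
+ */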
+
+#endif /* _ASM_IA64_DELAY_H */
--- /dev/null
+#ifndef _ASM_IA64_DIV64_H
+#define _ASM_IA64_DIV64_H
+
+/*
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * vsprintf uses this to divide a 64-bit integer N by a small integer BASE.
+ * This is incredibly hard on IA-64...
+ */
+
+#define do_div(n,base) \
+({ \
+ int _res; \
+ _res = ((unsigned long) (n)) % (unsigned) (base); \
+ (n) = ((unsigned long) (n)) / (unsigned) (base); \
+ _res; \
+})
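+/*
+ * do_div() divides N in place and evaluates to the remainder.  Minimal
+ * sketch (not taken from vsprintf) of peeling off a decimal digit:
+ *
+ *	unsigned long n = 1024;
+ *	int digit = do_div(n, 10);
+ *
+ * leaves digit == 4 and n == 102.
+ */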
+
+#endif /* _ASM_IA64_DIV64_H */
--- /dev/null
+#ifndef _ASM_IA64_DMA_H
+#define _ASM_IA64_DMA_H
+
+/*
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <asm/io.h> /* need byte IO */
+#include <linux/spinlock.h> /* And spinlocks */
+#include <linux/delay.h>
+
+
+#ifdef HAVE_REALLY_SLOW_DMA_CONTROLLER
+#define dma_outb outb_p
+#else
+#define dma_outb outb
+#endif
+
+#define dma_inb inb
+
+#define MAX_DMA_CHANNELS 8
+#define MAX_DMA_ADDRESS (~0UL) /* no limits on DMAing, for now */
+
+extern spinlock_t dma_spin_lock;
+
+/* From PCI */
+
+#ifdef CONFIG_PCI
+extern int isa_dma_bridge_buggy;
+#else
+#define isa_dma_bridge_buggy (0)
+#endif
+
+#endif /* _ASM_IA64_DMA_H */
--- /dev/null
+#ifndef _ASM_IA64_EFI_H
+#define _ASM_IA64_EFI_H
+
+/*
+ * Extensible Firmware Interface
+ * Based on 'Extensible Firmware Interface Specification' version 0.9, April 30, 1999
+ *
+ * Copyright (C) 1999 VA Linux Systems
+ * Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
+ * Copyright (C) 1999 Hewlett-Packard Co.
+ * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
+ */
+#include <linux/init.h>
+#include <linux/string.h>
+#include <linux/time.h>
+#include <linux/types.h>
+
+#include <asm/page.h>
+#include <asm/system.h>
+
+#define EFI_SUCCESS 0
+#define EFI_INVALID_PARAMETER 2
+#define EFI_UNSUPPORTED 3
+#define EFI_BUFFER_TOO_SMALL 4
+
+typedef unsigned long efi_status_t;
+typedef u8 efi_bool_t;
+typedef u16 efi_char16_t; /* UNICODE character */
+
+typedef struct {
+ u32 data1;
+ u16 data2;
+ u16 data3;
+ u8 data4[8];
+} efi_guid_t;
+
+/*
+ * Generic EFI table header
+ */
+typedef struct {
+ u64 signature;
+ u32 revision;
+ u32 headersize;
+ u32 crc32;
+ u32 reserved;
+} efi_table_hdr_t;
+
+/*
+ * Memory map descriptor:
+ */
+
+/* Memory types: */
+#define EFI_RESERVED_TYPE 0
+#define EFI_LOADER_CODE 1
+#define EFI_LOADER_DATA 2
+#define EFI_BOOT_SERVICES_CODE 3
+#define EFI_BOOT_SERVICES_DATA 4
+#define EFI_RUNTIME_SERVICES_CODE 5
+#define EFI_RUNTIME_SERVICES_DATA 6
+#define EFI_CONVENTIONAL_MEMORY 7
+#define EFI_UNUSABLE_MEMORY 8
+#define EFI_ACPI_RECLAIM_MEMORY 9
+#define EFI_ACPI_MEMORY_NVS 10
+#define EFI_MEMORY_MAPPED_IO 11
+#define EFI_MEMORY_MAPPED_IO_PORT_SPACE 12
+#define EFI_PAL_CODE 13
+#define EFI_MAX_MEMORY_TYPE 14
+
+/* Attribute values: */
+#define EFI_MEMORY_UC 0x0000000000000001 /* uncached */
+#define EFI_MEMORY_WC 0x0000000000000002 /* write-coalescing */
+#define EFI_MEMORY_WT 0x0000000000000004 /* write-through */
+#define EFI_MEMORY_WB 0x0000000000000008 /* write-back */
+#define EFI_MEMORY_WP 0x0000000000001000 /* write-protect */
+#define EFI_MEMORY_RP 0x0000000000002000 /* read-protect */
+#define EFI_MEMORY_XP 0x0000000000004000 /* execute-protect */
+#define EFI_MEMORY_RUNTIME 0x8000000000000000 /* range requires runtime mapping */
+#define EFI_MEMORY_DESCRIPTOR_VERSION 1
+
+typedef struct {
+ u32 type;
+ u32 pad;
+ u64 phys_addr;
+ u64 virt_addr;
+ u64 num_pages;
+ u64 attribute;
+} efi_memory_desc_t;
+
+typedef int efi_freemem_callback_t (u64 start, u64 end, void *arg);
+
+/*
+ * Types and defines for Time Services
+ */
+#define EFI_TIME_ADJUST_DAYLIGHT 0x1
+#define EFI_TIME_IN_DAYLIGHT 0x2
+#define EFI_UNSPECIFIED_TIMEZONE 0x07ff
+
+typedef struct {
+ u16 year;
+ u8 month;
+ u8 day;
+ u8 hour;
+ u8 minute;
+ u8 second;
+ u8 pad1;
+ u32 nanosecond;
+ s16 timezone;
+ u8 daylight;
+ u8 pad2;
+} efi_time_t;
+
+typedef struct {
+ u32 resolution;
+ u32 accuracy;
+ u8 sets_to_zero;
+} efi_time_cap_t;
+
+/*
+ * Types and defines for EFI ResetSystem
+ */
+#define EFI_RESET_COLD 0
+#define EFI_RESET_WARM 1
+
+/*
+ * EFI Runtime Services table
+ */
+#define EFI_RUNTIME_SERVICES_SIGNATURE 0x56524553544e5552
+#define EFI_RUNTIME_SERVICES_REVISION 0x00010000
+
+typedef struct {
+ efi_table_hdr_t hdr;
+ u64 get_time;
+ u64 set_time;
+ u64 get_wakeup_time;
+ u64 set_wakeup_time;
+ u64 set_virtual_address_map;
+ u64 convert_pointer;
+ u64 get_variable;
+ u64 get_next_variable;
+ u64 set_variable;
+ u64 get_next_high_mono_count;
+ u64 reset_system;
+} efi_runtime_services_t;
+
+typedef efi_status_t efi_get_time_t (efi_time_t *tm, efi_time_cap_t *tc);
+typedef efi_status_t efi_set_time_t (efi_time_t *tm);
+typedef efi_status_t efi_get_wakeup_time_t (efi_bool_t *enabled, efi_bool_t *pending,
+ efi_time_t *tm);
+typedef efi_status_t efi_set_wakeup_time_t (efi_bool_t enabled, efi_time_t *tm);
+typedef efi_status_t efi_get_variable_t (efi_char16_t *name, efi_guid_t *vendor, u32 *attr,
+ unsigned long *data_size, void *data);
+typedef efi_status_t efi_get_next_variable_t (unsigned long *name_size, efi_char16_t *name,
+ efi_guid_t *vendor);
+typedef efi_status_t efi_set_variable_t (efi_char16_t *name, efi_guid_t *vendor, u32 attr,
+ unsigned long data_size, void *data);
+typedef efi_status_t efi_get_next_high_mono_count_t (u64 *count);
+typedef void efi_reset_system_t (int reset_type, efi_status_t status,
+ unsigned long data_size, efi_char16_t *data);
+
+/*
+ * EFI Configuration Table and GUID definitions
+ */
+
+#define MPS_TABLE_GUID \
+ ((efi_guid_t) { 0xeb9d2d2f, 0x2d88, 0x11d3, { 0x9a, 0x16, 0x0, 0x90, 0x27, 0x3f, 0xc1, 0x4d }})
+
+#define ACPI_TABLE_GUID \
+ ((efi_guid_t) { 0xeb9d2d30, 0x2d88, 0x11d3, { 0x9a, 0x16, 0x0, 0x90, 0x27, 0x3f, 0xc1, 0x4d }})
+
+#define SMBIOS_TABLE_GUID \
+ ((efi_guid_t) { 0xeb9d2d31, 0x2d88, 0x11d3, { 0x9a, 0x16, 0x0, 0x90, 0x27, 0x3f, 0xc1, 0x4d }})
+
+#define SAL_SYSTEM_TABLE_GUID \
+ ((efi_guid_t) { 0xeb9d2d32, 0x2d88, 0x11d3, { 0x9a, 0x16, 0x0, 0x90, 0x27, 0x3f, 0xc1, 0x4d }})
+
+typedef struct {
+ efi_guid_t guid;
+ u64 table;
+} efi_config_table_t;
+
+#define EFI_SYSTEM_TABLE_SIGNATURE 0x5453595320494249
+#define EFI_SYSTEM_TABLE_REVISION ((0 << 16) | (91))
+
+typedef struct {
+ efi_table_hdr_t hdr;
+ u64 fw_vendor; /* physical addr of CHAR16 vendor string */
+ u32 fw_revision;
+ u64 con_in_handle;
+ u64 con_in;
+ u64 con_out_handle;
+ u64 con_out;
+ u64 stderr_handle;
+ u64 stderr;
+ u64 runtime;
+ u64 boottime;
+ u64 nr_tables;
+ u64 tables;
+} efi_system_table_t;
+
+/*
+ * All runtime access to EFI goes through this structure:
+ */
+extern struct efi {
+ efi_system_table_t *systab; /* EFI system table */
+ void *mps; /* MPS table */
+ void *acpi; /* ACPI table */
+ void *smbios; /* SM BIOS table */
+ void *sal_systab; /* SAL system table */
+ void *boot_info; /* boot info table */
+ efi_get_time_t *get_time;
+ efi_set_time_t *set_time;
+ efi_get_wakeup_time_t *get_wakeup_time;
+ efi_set_wakeup_time_t *set_wakeup_time;
+ efi_get_variable_t *get_variable;
+ efi_get_next_variable_t *get_next_variable;
+ efi_set_variable_t *set_variable;
+ efi_get_next_high_mono_count_t *get_next_high_mono_count;
+ efi_reset_system_t *reset_system;
+} efi;
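+/*
+ * Illustrative use only (hypothetical caller, assuming the firmware
+ * accepts a NULL capabilities pointer): once efi_init() has filled in
+ * the table, the wallclock can be read through the function pointers:
+ *
+ *	efi_time_t tm;
+ *	efi_status_t status = (*efi.get_time)(&tm, NULL);
+ */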
+
+extern inline int
+efi_guidcmp (efi_guid_t left, efi_guid_t right)
+{
+ return memcmp(&left, &right, sizeof (efi_guid_t));
+}
+
+extern void efi_init (void);
+extern void efi_memmap_walk (efi_freemem_callback_t callback, void *arg);
+extern void efi_gettimeofday (struct timeval *tv);
+extern void efi_enter_virtual_mode (void); /* switch EFI to virtual mode, if possible */
+
+#endif /* _ASM_IA64_EFI_H */
--- /dev/null
+#ifndef _ASM_IA64_ELF_H
+#define _ASM_IA64_ELF_H
+
+/*
+ * ELF architecture-specific definitions.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <asm/fpu.h>
+#include <asm/page.h>
+
+/*
+ * This is used to ensure we don't load something for the wrong architecture.
+ */
+#define elf_check_arch(x) ((x) == EM_IA_64)
+
+/*
+ * These are used to set parameters in the core dumps.
+ */
+#define ELF_CLASS ELFCLASS64
+#define ELF_DATA ELFDATA2LSB
+#define ELF_ARCH EM_IA_64
+
+#define USE_ELF_CORE_DUMP
+
+/* always align to 64KB to allow for future page sizes of up to 64KB: */
+#define ELF_EXEC_PAGESIZE PAGE_SIZE
+
+/*
+ * This is the location that an ET_DYN program is loaded if exec'ed.
+ * Typical use of this is to invoke "./ld.so someprog" to test out a
+ * new version of the loader. We need to make sure that it is out of
+ * the way of the program that it will "exec", and that there is
+ * sufficient room for the brk.
+ */
+#define ELF_ET_DYN_BASE (TASK_UNMAPPED_BASE + 0x1000000)
+
+
+/*
+ * We use (abuse?) this macro to insert the (empty) vm_area that is
+ * used to map the register backing store. I don't see any better
+ * place to do this, but we should discuss this with Linus once we can
+ * talk to him...
+ */
+extern void ia64_init_addr_space (void);
+#define ELF_PLAT_INIT(_r) ia64_init_addr_space()
+
+/* ELF register definitions. This is needed for core dump support. */
+
+/*
+ * elf_gregset_t contains the application-level state in the following order:
+ * r0-r31
+ * NaT bits (for r0-r31; bit N == 1 iff rN is a NaT)
+ * predicate registers (p0-p63)
+ * b0-b7
+ * ip cfm psr
+ * ar.rsc ar.bsp ar.bspstore ar.rnat
+ * ar.ccv ar.unat ar.fpsr ar.pfs ar.lc ar.ec
+ */
+#define ELF_NGREG 128 /* we really need just 72 but let's leave some headroom... */
+#define ELF_NFPREG 128 /* f0 and f1 could be omitted, but so what... */
+
+typedef unsigned long elf_greg_t;
+typedef elf_greg_t elf_gregset_t[ELF_NGREG];
+
+typedef struct ia64_fpreg elf_fpreg_t;
+typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG];
+
+struct pt_regs; /* forward declaration... */
+extern void ia64_elf_core_copy_regs (struct pt_regs *src, elf_gregset_t dst);
+#define ELF_CORE_COPY_REGS(_dest,_regs) ia64_elf_core_copy_regs(_regs, _dest);
+
+/* This macro yields a bitmask that programs can use to figure out
+ what instruction set this CPU supports. */
+#define ELF_HWCAP 0
+
+/* This macro yields a string that ld.so will use to load
+ implementation specific libraries for optimization. Not terribly
+ relevant until we have real hardware to play with... */
+#define ELF_PLATFORM 0
+
+#ifdef __KERNEL__
+# define SET_PERSONALITY(EX,IBCS2) \
+ (current->personality = (IBCS2) ? PER_SVR4 : PER_LINUX)
+#endif
+
+#endif /* _ASM_IA64_ELF_H */
--- /dev/null
+#ifndef _ASM_IA64_ERRNO_H
+#define _ASM_IA64_ERRNO_H
+
+/*
+ * This is derived from the Linux/x86 version.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#define EPERM 1 /* Operation not permitted */
+#define ENOENT 2 /* No such file or directory */
+#define ESRCH 3 /* No such process */
+#define EINTR 4 /* Interrupted system call */
+#define EIO 5 /* I/O error */
+#define ENXIO 6 /* No such device or address */
+#define E2BIG 7 /* Arg list too long */
+#define ENOEXEC 8 /* Exec format error */
+#define EBADF 9 /* Bad file number */
+#define ECHILD 10 /* No child processes */
+#define EAGAIN 11 /* Try again */
+#define ENOMEM 12 /* Out of memory */
+#define EACCES 13 /* Permission denied */
+#define EFAULT 14 /* Bad address */
+#define ENOTBLK 15 /* Block device required */
+#define EBUSY 16 /* Device or resource busy */
+#define EEXIST 17 /* File exists */
+#define EXDEV 18 /* Cross-device link */
+#define ENODEV 19 /* No such device */
+#define ENOTDIR 20 /* Not a directory */
+#define EISDIR 21 /* Is a directory */
+#define EINVAL 22 /* Invalid argument */
+#define ENFILE 23 /* File table overflow */
+#define EMFILE 24 /* Too many open files */
+#define ENOTTY 25 /* Not a typewriter */
+#define ETXTBSY 26 /* Text file busy */
+#define EFBIG 27 /* File too large */
+#define ENOSPC 28 /* No space left on device */
+#define ESPIPE 29 /* Illegal seek */
+#define EROFS 30 /* Read-only file system */
+#define EMLINK 31 /* Too many links */
+#define EPIPE 32 /* Broken pipe */
+#define EDOM 33 /* Math argument out of domain of func */
+#define ERANGE 34 /* Math result not representable */
+#define EDEADLK 35 /* Resource deadlock would occur */
+#define ENAMETOOLONG 36 /* File name too long */
+#define ENOLCK 37 /* No record locks available */
+#define ENOSYS 38 /* Function not implemented */
+#define ENOTEMPTY 39 /* Directory not empty */
+#define ELOOP 40 /* Too many symbolic links encountered */
+#define EWOULDBLOCK EAGAIN /* Operation would block */
+#define ENOMSG 42 /* No message of desired type */
+#define EIDRM 43 /* Identifier removed */
+#define ECHRNG 44 /* Channel number out of range */
+#define EL2NSYNC 45 /* Level 2 not synchronized */
+#define EL3HLT 46 /* Level 3 halted */
+#define EL3RST 47 /* Level 3 reset */
+#define ELNRNG 48 /* Link number out of range */
+#define EUNATCH 49 /* Protocol driver not attached */
+#define ENOCSI 50 /* No CSI structure available */
+#define EL2HLT 51 /* Level 2 halted */
+#define EBADE 52 /* Invalid exchange */
+#define EBADR 53 /* Invalid request descriptor */
+#define EXFULL 54 /* Exchange full */
+#define ENOANO 55 /* No anode */
+#define EBADRQC 56 /* Invalid request code */
+#define EBADSLT 57 /* Invalid slot */
+
+#define EDEADLOCK EDEADLK
+
+#define EBFONT 59 /* Bad font file format */
+#define ENOSTR 60 /* Device not a stream */
+#define ENODATA 61 /* No data available */
+#define ETIME 62 /* Timer expired */
+#define ENOSR 63 /* Out of streams resources */
+#define ENONET 64 /* Machine is not on the network */
+#define ENOPKG 65 /* Package not installed */
+#define EREMOTE 66 /* Object is remote */
+#define ENOLINK 67 /* Link has been severed */
+#define EADV 68 /* Advertise error */
+#define ESRMNT 69 /* Srmount error */
+#define ECOMM 70 /* Communication error on send */
+#define EPROTO 71 /* Protocol error */
+#define EMULTIHOP 72 /* Multihop attempted */
+#define EDOTDOT 73 /* RFS specific error */
+#define EBADMSG 74 /* Not a data message */
+#define EOVERFLOW 75 /* Value too large for defined data type */
+#define ENOTUNIQ 76 /* Name not unique on network */
+#define EBADFD 77 /* File descriptor in bad state */
+#define EREMCHG 78 /* Remote address changed */
+#define ELIBACC 79 /* Can not access a needed shared library */
+#define ELIBBAD 80 /* Accessing a corrupted shared library */
+#define ELIBSCN 81 /* .lib section in a.out corrupted */
+#define ELIBMAX 82 /* Attempting to link in too many shared libraries */
+#define ELIBEXEC 83 /* Cannot exec a shared library directly */
+#define EILSEQ 84 /* Illegal byte sequence */
+#define ERESTART 85 /* Interrupted system call should be restarted */
+#define ESTRPIPE 86 /* Streams pipe error */
+#define EUSERS 87 /* Too many users */
+#define ENOTSOCK 88 /* Socket operation on non-socket */
+#define EDESTADDRREQ 89 /* Destination address required */
+#define EMSGSIZE 90 /* Message too long */
+#define EPROTOTYPE 91 /* Protocol wrong type for socket */
+#define ENOPROTOOPT 92 /* Protocol not available */
+#define EPROTONOSUPPORT 93 /* Protocol not supported */
+#define ESOCKTNOSUPPORT 94 /* Socket type not supported */
+#define EOPNOTSUPP 95 /* Operation not supported on transport endpoint */
+#define EPFNOSUPPORT 96 /* Protocol family not supported */
+#define EAFNOSUPPORT 97 /* Address family not supported by protocol */
+#define EADDRINUSE 98 /* Address already in use */
+#define EADDRNOTAVAIL 99 /* Cannot assign requested address */
+#define ENETDOWN 100 /* Network is down */
+#define ENETUNREACH 101 /* Network is unreachable */
+#define ENETRESET 102 /* Network dropped connection because of reset */
+#define ECONNABORTED 103 /* Software caused connection abort */
+#define ECONNRESET 104 /* Connection reset by peer */
+#define ENOBUFS 105 /* No buffer space available */
+#define EISCONN 106 /* Transport endpoint is already connected */
+#define ENOTCONN 107 /* Transport endpoint is not connected */
+#define ESHUTDOWN 108 /* Cannot send after transport endpoint shutdown */
+#define ETOOMANYREFS 109 /* Too many references: cannot splice */
+#define ETIMEDOUT 110 /* Connection timed out */
+#define ECONNREFUSED 111 /* Connection refused */
+#define EHOSTDOWN 112 /* Host is down */
+#define EHOSTUNREACH 113 /* No route to host */
+#define EALREADY 114 /* Operation already in progress */
+#define EINPROGRESS 115 /* Operation now in progress */
+#define ESTALE 116 /* Stale NFS file handle */
+#define EUCLEAN 117 /* Structure needs cleaning */
+#define ENOTNAM 118 /* Not a XENIX named type file */
+#define ENAVAIL 119 /* No XENIX semaphores available */
+#define EISNAM 120 /* Is a named type file */
+#define EREMOTEIO 121 /* Remote I/O error */
+#define EDQUOT 122 /* Quota exceeded */
+
+#define ENOMEDIUM 123 /* No medium found */
+#define EMEDIUMTYPE 124 /* Wrong medium type */
+
+#endif /* _ASM_IA64_ERRNO_H */
--- /dev/null
+#ifndef _ASM_IA64_FCNTL_H
+#define _ASM_IA64_FCNTL_H
+/*
+ * This is mostly compatible with Linux/x86.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+/*
+ * open/fcntl - O_SYNC is only implemented on block devices and on
+ * files located on an ext2 file system
+ */
+#define O_ACCMODE 0003
+#define O_RDONLY 00
+#define O_WRONLY 01
+#define O_RDWR 02
+#define O_CREAT 0100 /* not fcntl */
+#define O_EXCL 0200 /* not fcntl */
+#define O_NOCTTY 0400 /* not fcntl */
+#define O_TRUNC 01000 /* not fcntl */
+#define O_APPEND 02000
+#define O_NONBLOCK 04000
+#define O_NDELAY O_NONBLOCK
+#define O_SYNC 010000
+#define FASYNC 020000 /* fcntl, for BSD compatibility */
+#define O_DIRECT 040000 /* direct disk access hint - currently ignored */
+#define O_LARGEFILE 0100000
+#define O_DIRECTORY 0200000 /* must be a directory */
+#define O_NOFOLLOW 0400000 /* don't follow links */
+
+#define F_DUPFD 0 /* dup */
+#define F_GETFD 1 /* get f_flags */
+#define F_SETFD 2 /* set f_flags */
+#define F_GETFL 3 /* more flags (cloexec) */
+#define F_SETFL 4
+#define F_GETLK 5
+#define F_SETLK 6
+#define F_SETLKW 7
+
+#define F_SETOWN 8 /* for sockets. */
+#define F_GETOWN 9 /* for sockets. */
+#define F_SETSIG 10 /* for sockets. */
+#define F_GETSIG 11 /* for sockets. */
+
+/* for F_[GET|SET]FL */
+#define FD_CLOEXEC 1 /* actually anything with low bit set goes */
+
+/* for posix fcntl() and lockf() */
+#define F_RDLCK 0
+#define F_WRLCK 1
+#define F_UNLCK 2
+
+/* for old implementation of bsd flock () */
+#define F_EXLCK 4 /* or 3 */
+#define F_SHLCK 8 /* or 4 */
+
+/* operations for bsd flock(), also used by the kernel implementation */
+#define LOCK_SH 1 /* shared lock */
+#define LOCK_EX 2 /* exclusive lock */
+#define LOCK_NB 4 /* or'd with one of the above to prevent
+ blocking */
+#define LOCK_UN 8 /* remove lock */
+
+struct flock {
+ short l_type;
+ short l_whence;
+ off_t l_start;
+ off_t l_len;
+ pid_t l_pid;
+};
+
+#endif /* _ASM_IA64_FCNTL_H */
--- /dev/null
+#ifndef _ASM_IA64_FPSWA_H_
+#define _ASM_IA64_FPSWA_H_
+
+/*
+ * Floating-point Software Assist
+ *
+ * Copyright (C) 1999 Intel Corporation.
+ * Copyright (C) 1999 Asit Mallick <asit.k.mallick@intel.com>
+ * Copyright (C) 1999 Goutham Rao <goutham.rao@intel.com>
+ */
+
+#define FPSWA_BUG
+
+typedef struct {
+ /* 4 * 128 bits */
+ unsigned long fp_lp[4*2];
+} fp_state_low_preserved_t;
+
+typedef struct {
+ /* 10 * 128 bits */
+ unsigned long fp_lv[10 * 2];
+} fp_state_low_volatile_t;
+
+typedef struct {
+ /* 16 * 128 bits */
+ unsigned long fp_hp[16 * 2];
+} fp_state_high_preserved_t;
+
+typedef struct {
+ /* 96 * 128 bits */
+ unsigned long fp_hv[96 * 2];
+} fp_state_high_volatile_t;
+
+/**
+ * floating point state to be passed to the FP emulation library by
+ * the trap/fault handler
+ */
+typedef struct {
+ unsigned long bitmask_low64;
+ unsigned long bitmask_high64;
+ fp_state_low_preserved_t *fp_state_low_preserved;
+ fp_state_low_volatile_t *fp_state_low_volatile;
+ fp_state_high_preserved_t *fp_state_high_preserved;
+ fp_state_high_volatile_t *fp_state_high_volatile;
+} fp_state_t;
+
+typedef struct {
+ unsigned long status;
+ unsigned long err0;
+ unsigned long err1;
+ unsigned long err2;
+} fpswa_ret_t;
+
+/**
+ * function header for the Floating Point software assist
+ * library. This function is invoked by the Floating point software
+ * assist trap/fault handler.
+ */
+typedef fpswa_ret_t (*efi_fpswa_t) (unsigned long trap_type, void *bundle, unsigned long *ipsr,
+ unsigned long *fsr, unsigned long *isr, unsigned long *preds,
+ unsigned long *ifs, fp_state_t *fp_state);
+
+/**
+ * This is the FPSWA library interface as defined by EFI. We need to pass a
+ * pointer to the interface itself on a call to the assist library
+ */
+typedef struct {
+ unsigned int revision;
+ unsigned int reserved;
+ efi_fpswa_t fpswa;
+} fpswa_interface_t;
+
+#endif /* _ASM_IA64_FPSWA_H_ */
--- /dev/null
+#ifndef _ASM_IA64_FPU_H
+#define _ASM_IA64_FPU_H
+
+/*
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <asm/types.h>
+
+/* floating point status register: */
+#define FPSR_TRAP_VD (1 << 0) /* invalid op trap disabled */
+#define FPSR_TRAP_DD (1 << 1) /* denormal trap disabled */
+#define FPSR_TRAP_ZD (1 << 2) /* zero-divide trap disabled */
+#define FPSR_TRAP_OD (1 << 3) /* overflow trap disabled */
+#define FPSR_TRAP_UD (1 << 4) /* underflow trap disabled */
+#define FPSR_TRAP_ID (1 << 5) /* inexact trap disabled */
+#define FPSR_S0(x) ((x) << 6)
+#define FPSR_S1(x) ((x) << 19)
+#define FPSR_S2(x) (__IA64_UL(x) << 32)
+#define FPSR_S3(x) (__IA64_UL(x) << 45)
+
+/* floating-point status field controls: */
+#define FPSF_FTZ (1 << 0) /* flush-to-zero */
+#define FPSF_WRE (1 << 1) /* widest-range exponent */
+#define FPSF_PC(x) (((x) & 0x3) << 2) /* precision control */
+#define FPSF_RC(x) (((x) & 0x3) << 4) /* rounding control */
+#define FPSF_TD (1 << 6) /* trap disabled */
+
+/* floating-point status field flags: */
+#define FPSF_V (1 << 7) /* invalid operation flag */
+#define FPSF_D (1 << 8) /* denormal/unnormal operand flag */
+#define FPSF_Z (1 << 9) /* zero divide (IEEE) flag */
+#define FPSF_O (1 << 10) /* overflow (IEEE) flag */
+#define FPSF_U (1 << 11) /* underflow (IEEE) flag */
+#define FPSF_I (1 << 12) /* inexact (IEEE) flag */
+
+/* floating-point rounding control: */
+#define FPRC_NEAREST 0x0
+#define FPRC_NEGINF 0x1
+#define FPRC_POSINF 0x2
+#define FPRC_TRUNC 0x3
+
+#define FPSF_DEFAULT (FPSF_PC (0x3) | FPSF_RC (FPRC_NEAREST))
+
+/* This default value is the same as HP-UX uses. Don't change it
+ without a very good reason. */
+#define FPSR_DEFAULT (FPSR_TRAP_VD | FPSR_TRAP_DD | FPSR_TRAP_ZD \
+ | FPSR_TRAP_OD | FPSR_TRAP_UD | FPSR_TRAP_ID \
+ | FPSR_S0 (FPSF_DEFAULT) \
+ | FPSR_S1 (FPSF_DEFAULT | FPSF_TD | FPSF_WRE) \
+ | FPSR_S2 (FPSF_DEFAULT | FPSF_TD) \
+ | FPSR_S3 (FPSF_DEFAULT | FPSF_TD))
+
+# ifndef __ASSEMBLY__
+
+struct ia64_fpreg {
+ union {
+ unsigned long bits[2];
+ } u;
+} __attribute__ ((aligned (16)));
+
+# endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_IA64_FPU_H */
--- /dev/null
+#ifndef _ASM_IA64_HARDIRQ_H
+#define _ASM_IA64_HARDIRQ_H
+
+/*
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <linux/threads.h>
+
+extern unsigned int local_irq_count[NR_CPUS];
+extern unsigned long hardirq_no[NR_CPUS];
+
+/*
+ * Are we in an interrupt context? Either doing bottom half
+ * or hardware interrupt processing?
+ */
+
+#define in_interrupt() \
+({ \
+ int __cpu = smp_processor_id(); \
+ (local_irq_count[__cpu] + local_bh_count[__cpu]) != 0; \
+})
+
+#ifndef CONFIG_SMP
+# define hardirq_trylock(cpu) (local_irq_count[cpu] == 0)
+# define hardirq_endlock(cpu) ((void) 0)
+
+# define hardirq_enter(cpu, irq) (local_irq_count[cpu]++)
+# define hardirq_exit(cpu, irq) (local_irq_count[cpu]--)
+
+# define synchronize_irq() barrier()
+#else
+
+#include <linux/spinlock.h>
+
+#include <asm/atomic.h>
+#include <asm/smp.h>
+
+extern int global_irq_holder;
+extern spinlock_t global_irq_lock;
+extern atomic_t global_irq_count;
+
+static inline void release_irqlock(int cpu)
+{
+ /* if we didn't own the irq lock, just ignore.. */
+ if (global_irq_holder == cpu) {
+ global_irq_holder = NO_PROC_ID;
+ spin_unlock(&global_irq_lock);
+ }
+}
+
+static inline void hardirq_enter(int cpu, int irq)
+{
+ ++local_irq_count[cpu];
+ atomic_inc(&global_irq_count);
+}
+
+static inline void hardirq_exit(int cpu, int irq)
+{
+ atomic_dec(&global_irq_count);
+ --local_irq_count[cpu];
+}
+
+static inline int hardirq_trylock(int cpu)
+{
+ return !local_irq_count[cpu] && !test_bit(0,&global_irq_lock);
+}
+
+#define hardirq_endlock(cpu) ((void)0)
+
+extern void synchronize_irq(void);
+
+#endif /* CONFIG_SMP */
+#endif /* _ASM_IA64_HARDIRQ_H */
--- /dev/null
+/*
+ * linux/include/asm-ia64/hdreg.h
+ *
+ * Copyright (C) 1994-1996 Linus Torvalds & authors
+ */
+
+#ifndef __ASM_IA64_HDREG_H
+#define __ASM_IA64_HDREG_H
+
+typedef unsigned short ide_ioreg_t;
+
+#endif /* __ASM_IA64_HDREG_H */
--- /dev/null
+#ifndef _ASM_IA64_IA32_H
+#define _ASM_IA64_IA32_H
+
+#include <linux/config.h>
+
+#ifdef CONFIG_IA32_SUPPORT
+
+/*
+ * 32 bit structures for IA32 support.
+ */
+
+/* 32bit compatibility types */
+typedef unsigned int __kernel_size_t32;
+typedef int __kernel_ssize_t32;
+typedef int __kernel_ptrdiff_t32;
+typedef int __kernel_time_t32;
+typedef int __kernel_clock_t32;
+typedef int __kernel_pid_t32;
+typedef unsigned short __kernel_ipc_pid_t32;
+typedef unsigned short __kernel_uid_t32;
+typedef unsigned short __kernel_gid_t32;
+typedef unsigned short __kernel_dev_t32;
+typedef unsigned int __kernel_ino_t32;
+typedef unsigned short __kernel_mode_t32;
+typedef unsigned short __kernel_umode_t32;
+typedef short __kernel_nlink_t32;
+typedef int __kernel_daddr_t32;
+typedef int __kernel_off_t32;
+typedef unsigned int __kernel_caddr_t32;
+typedef long __kernel_loff_t32;
+typedef __kernel_fsid_t __kernel_fsid_t32;
+
+#define IA32_PAGE_SHIFT 12 /* 4KB pages */
+#define IA32_PAGE_SIZE (1ULL << IA32_PAGE_SHIFT)
+
+/* fcntl.h */
+struct flock32 {
+ short l_type;
+ short l_whence;
+ __kernel_off_t32 l_start;
+ __kernel_off_t32 l_len;
+ __kernel_pid_t32 l_pid;
+ short __unused;
+};
+
+
+/* sigcontext.h */
+/*
+ * As documented in the iBCS2 standard..
+ *
+ * The first part of "struct _fpstate" is just the
+ * normal i387 hardware setup, the extra "status"
+ * word is used to save the coprocessor status word
+ * before entering the handler.
+ */
+struct _fpreg_ia32 {
+ unsigned short significand[4];
+ unsigned short exponent;
+};
+
+struct _fpstate_ia32 {
+ unsigned int cw,
+ sw,
+ tag,
+ ipoff,
+ cssel,
+ dataoff,
+ datasel;
+ struct _fpreg_ia32 _st[8];
+ unsigned int status;
+};
+
+struct sigcontext_ia32 {
+ unsigned short gs, __gsh;
+ unsigned short fs, __fsh;
+ unsigned short es, __esh;
+ unsigned short ds, __dsh;
+ unsigned int edi;
+ unsigned int esi;
+ unsigned int ebp;
+ unsigned int esp;
+ unsigned int ebx;
+ unsigned int edx;
+ unsigned int ecx;
+ unsigned int eax;
+ unsigned int trapno;
+ unsigned int err;
+ unsigned int eip;
+ unsigned short cs, __csh;
+ unsigned int eflags;
+ unsigned int esp_at_signal;
+ unsigned short ss, __ssh;
+ struct _fpstate_ia32 * fpstate;
+ unsigned int oldmask;
+ unsigned int cr2;
+};
+
+/* signal.h */
+#define _IA32_NSIG 64
+#define _IA32_NSIG_BPW 32
+#define _IA32_NSIG_WORDS (_IA32_NSIG / _IA32_NSIG_BPW)
+
+typedef struct {
+ unsigned int sig[_IA32_NSIG_WORDS];
+} sigset32_t;
+
+struct sigaction32 {
+ unsigned int sa_handler; /* Really a pointer, but need to deal
+ with 32 bits */
+ unsigned int sa_flags;
+ unsigned int sa_restorer; /* Another 32 bit pointer */
+ sigset32_t sa_mask; /* A 32 bit mask */
+};
+
+struct ucontext_ia32 {
+ unsigned long uc_flags;
+ struct ucontext_ia32 *uc_link;
+ stack_t uc_stack;
+ struct sigcontext_ia32 uc_mcontext;
+ sigset_t uc_sigmask; /* mask last for extensibility */
+};
+
+struct stat32 {
+ unsigned short st_dev;
+ unsigned short __pad1;
+ unsigned int st_ino;
+ unsigned short st_mode;
+ unsigned short st_nlink;
+ unsigned short st_uid;
+ unsigned short st_gid;
+ unsigned short st_rdev;
+ unsigned short __pad2;
+ unsigned int st_size;
+ unsigned int st_blksize;
+ unsigned int st_blocks;
+ unsigned int st_atime;
+ unsigned int __unused1;
+ unsigned int st_mtime;
+ unsigned int __unused2;
+ unsigned int st_ctime;
+ unsigned int __unused3;
+ unsigned int __unused4;
+ unsigned int __unused5;
+};
+
+struct statfs32 {
+ int f_type;
+ int f_bsize;
+ int f_blocks;
+ int f_bfree;
+ int f_bavail;
+ int f_files;
+ int f_ffree;
+ __kernel_fsid_t32 f_fsid;
+ int f_namelen; /* SunOS ignores this field. */
+ int f_spare[6];
+};
+
+/*
+ * IA-32 ELF specific definitions for IA-64.
+ */
+
+#define _ASM_IA64_ELF_H /* Don't include elf.h */
+
+#include <linux/sched.h>
+#include <asm/processor.h>
+
+/*
+ * This is used to ensure we don't load something for the wrong architecture.
+ */
+#define elf_check_arch(x) ((x) == EM_386)
+
+/*
+ * These are used to set parameters in the core dumps.
+ */
+#define ELF_CLASS ELFCLASS32
+#define ELF_DATA ELFDATA2LSB
+#define ELF_ARCH EM_386
+
+#define IA32_PAGE_OFFSET 0xc0000000
+
+#define USE_ELF_CORE_DUMP
+#define ELF_EXEC_PAGESIZE PAGE_SIZE
+
+/*
+ * This is the location that an ET_DYN program is loaded if exec'ed.
+ * Typical use of this is to invoke "./ld.so someprog" to test out a
+ * new version of the loader. We need to make sure that it is out of
+ * the way of the program that it will "exec", and that there is
+ * sufficient room for the brk.
+ */
+#define ELF_ET_DYN_BASE (IA32_PAGE_OFFSET/3 + 0x1000000)
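+/*
+ * For the IA32_PAGE_OFFSET of 0xc0000000 defined above, this works out
+ * to 0x40000000 + 0x1000000 = 0x41000000.
+ */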
+
+void ia64_elf32_init(struct pt_regs *regs);
+#define ELF_PLAT_INIT(_r) ia64_elf32_init(_r)
+
+#define elf_addr_t u32
+#define elf_caddr_t u32
+
+/* ELF register definitions. This is needed for core dump support. */
+
+#define ELF_NGREG 128 /* XXX fix me */
+#define ELF_NFPREG 128 /* XXX fix me */
+
+typedef unsigned long elf_greg_t;
+typedef elf_greg_t elf_gregset_t[ELF_NGREG];
+
+typedef struct {
+ unsigned long w0;
+ unsigned long w1;
+} elf_fpreg_t;
+typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG];
+
+/* This macro yields a bitmask that programs can use to figure out
+ what instruction set this CPU supports. */
+#define ELF_HWCAP 0
+
+/* This macro yields a string that ld.so will use to load
+ implementation specific libraries for optimization. Not terribly
+ relevant until we have real hardware to play with... */
+#define ELF_PLATFORM 0
+
+#ifdef __KERNEL__
+# define SET_PERSONALITY(EX,IBCS2) \
+ (current->personality = (IBCS2) ? PER_SVR4 : PER_LINUX)
+#endif
+
+#define IA32_EFLAG 0x200
+
+/*
+ * IA-32 ELF specific definitions for IA-64.
+ */
+
+#define __USER_CS 0x23
+#define __USER_DS 0x2B
+
+#define SEG_LIM 32
+#define SEG_TYPE 52
+#define SEG_SYS 56
+#define SEG_DPL 57
+#define SEG_P 59
+#define SEG_DB 62
+#define SEG_G 63
+
+#define FIRST_TSS_ENTRY 6
+#define FIRST_LDT_ENTRY (FIRST_TSS_ENTRY+1)
+#define _TSS(n) ((((unsigned long) n)<<4)+(FIRST_TSS_ENTRY<<3))
+#define _LDT(n) ((((unsigned long) n)<<4)+(FIRST_LDT_ENTRY<<3))
+
+#define IA64_SEG_DESCRIPTOR(base, limit, segtype, nonsysseg, dpl, segpresent, segdb, granularity) \
+ ((base) | \
+ (limit << SEG_LIM) | \
+ (segtype << SEG_TYPE) | \
+ (nonsysseg << SEG_SYS) | \
+ (dpl << SEG_DPL) | \
+ (segpresent << SEG_P) | \
+ (segdb << SEG_DB) | \
+ (granularity << SEG_G))
+
+#define IA32_SEG_BASE 16
+#define IA32_SEG_TYPE 40
+#define IA32_SEG_SYS 44
+#define IA32_SEG_DPL 45
+#define IA32_SEG_P 47
+#define IA32_SEG_HIGH_LIMIT 48
+#define IA32_SEG_AVL 52
+#define IA32_SEG_DB 54
+#define IA32_SEG_G 55
+#define IA32_SEG_HIGH_BASE 56
+
+#define IA32_SEG_DESCRIPTOR(base, limit, segtype, nonsysseg, dpl, segpresent, avl, segdb, granularity) \
+ ((limit & 0xFFFF) | \
+ ((base & 0xFFFFFF) << IA32_SEG_BASE) | \
+ (segtype << IA32_SEG_TYPE) | \
+ (nonsysseg << IA32_SEG_SYS) | \
+ (dpl << IA32_SEG_DPL) | \
+ (segpresent << IA32_SEG_P) | \
+ (((limit >> 16) & 0xF) << IA32_SEG_HIGH_LIMIT) | \
+ (avl << IA32_SEG_AVL) | \
+ (segdb << IA32_SEG_DB) | \
+ (granularity << IA32_SEG_G) | \
+ (((base >> 24) & 0xFF) << IA32_SEG_HIGH_BASE))
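+
+/*
+ * For example, a flat 4GB user code segment (base 0, limit 0xFFFFF with
+ * 4KB granularity, type 0xb = execute/read/accessed, S=1, DPL=3, P=1,
+ * AVL=0, D/B=1, G=1) would be built as
+ *
+ *	IA32_SEG_DESCRIPTOR(0, 0xFFFFF, 0xb, 1, 3, 1, 0, 1, 1)
+ *
+ * which evaluates to 0x00cffb000000ffff, the usual ia32 descriptor value.
+ */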
+
+#define IA32_CR0 0x80000001 /* Enable PG and PE bits */
+#define IA32_CR4 0 /* No architectural extensions */
+
+/*
+ * IA32 floating point control registers starting values
+ */
+
+#define IA32_FSR_DEFAULT 0x555500000 /* set all tag bits */
+#define IA32_FCR_DEFAULT 0x33f /* single precision, all masks */
+
+#define ia32_start_thread(regs,new_ip,new_sp) do { \
+ set_fs(USER_DS); \
+ ia64_psr(regs)->cpl = 3; /* set user mode */ \
+ ia64_psr(regs)->ri = 0; /* clear return slot number */ \
+ ia64_psr(regs)->is = 1; /* IA-32 instruction set */ \
+ regs->cr_iip = new_ip; \
+ regs->r12 = new_sp; \
+ regs->ar_rnat = 0; \
+ regs->loadrs = 0; \
+} while (0)
+
+extern void ia32_gdt_init (void);
+extern long ia32_setup_frame1 (int sig, struct k_sigaction *ka, siginfo_t *info,
+ sigset_t *set, struct pt_regs *regs);
+extern void ia32_init_addr_space (struct pt_regs *regs);
+extern int ia32_setup_arg_pages (struct linux_binprm *bprm);
+
+#endif /* CONFIG_IA32_SUPPORT */
+
+#endif /* _ASM_IA64_IA32_H */
--- /dev/null
+/*
+ * linux/include/asm-ia64/ide.h
+ *
+ * Copyright (C) 1994-1996 Linus Torvalds & authors
+ */
+
+/*
+ * This file contains the ia64 architecture specific IDE code.
+ */
+
+#ifndef __ASM_IA64_IDE_H
+#define __ASM_IA64_IDE_H
+
+#ifdef __KERNEL__
+
+#include <linux/config.h>
+
+#ifndef MAX_HWIFS
+#define MAX_HWIFS 10
+#endif
+
+#define ide__sti() __sti()
+
+static __inline__ int
+ide_default_irq (ide_ioreg_t base)
+{
+ switch (base) {
+ case 0x1f0: return 14;
+ case 0x170: return 15;
+ case 0x1e8: return 11;
+ case 0x168: return 10;
+ case 0x1e0: return 8;
+ case 0x160: return 12;
+ default:
+ return 0;
+ }
+}
+
+static __inline__ ide_ioreg_t
+ide_default_io_base (int index)
+{
+ switch (index) {
+ case 0: return 0x1f0;
+ case 1: return 0x170;
+ case 2: return 0x1e8;
+ case 3: return 0x168;
+ case 4: return 0x1e0;
+ case 5: return 0x160;
+ default:
+ return 0;
+ }
+}
+
+static __inline__ void
+ide_init_hwif_ports (hw_regs_t *hw, ide_ioreg_t data_port, ide_ioreg_t ctrl_port, int *irq)
+{
+ ide_ioreg_t reg = data_port;
+ int i;
+
+ for (i = IDE_DATA_OFFSET; i <= IDE_STATUS_OFFSET; i++) {
+ hw->io_ports[i] = reg;
+ reg += 1;
+ }
+ if (ctrl_port) {
+ hw->io_ports[IDE_CONTROL_OFFSET] = ctrl_port;
+ } else {
+ hw->io_ports[IDE_CONTROL_OFFSET] = hw->io_ports[IDE_DATA_OFFSET] + 0x206;
+ }
+ if (irq != NULL)
+ *irq = 0;
+}
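+
+/*
+ * Note that the 0x206 offset above reproduces the legacy ISA layout: for
+ * the primary interface at data port 0x1f0, the control port works out
+ * to 0x1f0 + 0x206 = 0x3f6.
+ */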
+
+static __inline__ void
+ide_init_default_hwifs (void)
+{
+#ifndef CONFIG_BLK_DEV_IDEPCI
+ hw_regs_t hw;
+ int index;
+
+ for(index = 0; index < MAX_HWIFS; index++) {
+ ide_init_hwif_ports(&hw, ide_default_io_base(index), 0, NULL);
+ hw.irq = ide_default_irq(ide_default_io_base(index));
+ ide_register_hw(&hw, NULL);
+ }
+#endif /* CONFIG_BLK_DEV_IDEPCI */
+}
+
+typedef union {
+ unsigned all : 8; /* all of the bits together */
+ struct {
+ unsigned head : 4; /* always zeros here */
+ unsigned unit : 1; /* drive select number, 0 or 1 */
+ unsigned bit5 : 1; /* always 1 */
+ unsigned lba : 1; /* using LBA instead of CHS */
+ unsigned bit7 : 1; /* always 1 */
+ } b;
+} select_t;
+
+#define ide_request_irq(irq,hand,flg,dev,id) request_irq((irq),(hand),(flg),(dev),(id))
+#define ide_free_irq(irq,dev_id) free_irq((irq), (dev_id))
+#define ide_check_region(from,extent) check_region((from), (extent))
+#define ide_request_region(from,extent,name) request_region((from), (extent), (name))
+#define ide_release_region(from,extent) release_region((from), (extent))
+
+/*
+ * The following hooks are needed only by the m68k ports; they are
+ * no-ops everywhere else.
+ */
+#define ide_ack_intr(hwif) (1)
+#define ide_fix_driveid(id) do {} while (0)
+#define ide_release_lock(lock) do {} while (0)
+#define ide_get_lock(lock, hdlr, data) do {} while (0)
+
+#endif /* __KERNEL__ */
+
+#endif /* __ASM_IA64_IDE_H */
--- /dev/null
+#ifndef _ASM_IA64_IO_H
+#define _ASM_IA64_IO_H
+
+/*
+ * This file contains the definitions for the emulated IO instructions
+ * inb/inw/inl/outb/outw/outl and the "string versions" of the same
+ * (insb/insw/insl/outsb/outsw/outsl). You can also use "pausing"
+ * versions of the single-IO instructions (inb_p/inw_p/..).
+ *
+ * This file is not meant to be obfuscating: it's just complicated to
+ * (a) handle it all in a way that makes gcc able to optimize it as
+ * well as possible and (b) trying to avoid writing the same thing
+ * over and over again with slight variations and possibly making a
+ * mistake somewhere.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999 Asit Mallick <asit.k.mallick@intel.com>
+ * Copyright (C) 1999 Don Dugger <don.dugger@intel.com>
+ */
+
+/* We don't use IO slowdowns on the ia64, but.. */
+#define __SLOW_DOWN_IO do { } while (0)
+#define SLOW_DOWN_IO do { } while (0)
+
+#define __IA64_UNCACHED_OFFSET 0xc000000000000000 /* region 6 */
+
+#define IO_SPACE_LIMIT 0xffff
+
+# ifdef __KERNEL__
+
+#include <asm/page.h>
+#include <asm/system.h>
+
+/*
+ * Change virtual addresses to physical addresses and vv.
+ */
+static inline unsigned long
+virt_to_phys (volatile void *address)
+{
+ return (unsigned long) address - PAGE_OFFSET;
+}
+
+static inline void*
+phys_to_virt(unsigned long address)
+{
+ return (void *) (address + PAGE_OFFSET);
+}
+
+#define bus_to_virt phys_to_virt
+#define virt_to_bus virt_to_phys
+
+# endif /* __KERNEL__ */
+
+/*
+ * Memory fence w/accept. This should never be used in code that is
+ * not IA-64 specific.
+ */
+#define __ia64_mf_a() __asm__ __volatile__ ("mf.a" ::: "memory")
+
+extern inline unsigned long
+__ia64_get_io_port_base (void)
+{
+ unsigned long addr;
+
+ __asm__ ("mov %0=ar.k0;;" : "=r"(addr));
+ return __IA64_UNCACHED_OFFSET | addr;
+}
+
+extern inline void*
+__ia64_mk_io_addr (unsigned long port)
+{
+ const unsigned long io_base = __ia64_get_io_port_base();
+ unsigned long addr;
+
+ addr = io_base | ((port >> 2) << 12) | (port & 0xfff);
+ return (void *) addr;
+}
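+
+/*
+ * Note that this encoding spreads the 64KB port space across 4KB pages,
+ * one group of four ports per page.  For example, port 0x1f0 maps to
+ * io_base | ((0x1f0 >> 2) << 12) | 0x1f0 == io_base | 0x7c1f0, so ports
+ * 0x1f0-0x1f3 share a page while port 0x1f4 starts the next one.  This
+ * presumably allows port ranges to be protected with page granularity.
+ */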
+
+/*
+ * For the in/out instructions, we need to do:
+ *
+ * o "mf" _before_ doing the I/O access to ensure that all prior
+ * accesses to memory occur before the I/O access
+ * o "mf.a" _after_ doing the I/O access to ensure that the access
+ * has completed before we're doing any other I/O accesses
+ *
+ * The former is necessary because we might be doing normal (cached) memory
+ * accesses, e.g., to set up a DMA descriptor table and then do an "outX()"
+ * to tell the DMA controller to start the DMA operation. The "mf" ahead
+ * of the I/O operation ensures that the DMA table is correct when the I/O
+ * access occurs.
+ *
+ * The mf.a is necessary to ensure that all I/O accesses occur in program
+ * order. --davidm 99/12/07
+ */
+
+extern inline unsigned int
+__inb (unsigned long port)
+{
+ volatile unsigned char *addr = __ia64_mk_io_addr(port);
+ unsigned char ret;
+
+ ret = *addr;
+ __ia64_mf_a();
+ return ret;
+}
+
+extern inline unsigned int
+__inw (unsigned long port)
+{
+ volatile unsigned short *addr = __ia64_mk_io_addr(port);
+ unsigned short ret;
+
+ ret = *addr;
+ __ia64_mf_a();
+ return ret;
+}
+
+extern inline unsigned int
+__inl (unsigned long port)
+{
+ volatile unsigned int *addr = __ia64_mk_io_addr(port);
+ unsigned int ret;
+
+ ret = *addr;
+ __ia64_mf_a();
+ return ret;
+}
+
+extern inline void
+__insb (unsigned long port, void *dst, unsigned long count)
+{
+ volatile unsigned char *addr = __ia64_mk_io_addr(port);
+ unsigned char *dp = dst;
+
+ __ia64_mf_a();
+ while (count--) {
+ *dp++ = *addr;
+ }
+ __ia64_mf_a();
+ return;
+}
+
+extern inline void
+__insw (unsigned long port, void *dst, unsigned long count)
+{
+ volatile unsigned short *addr = __ia64_mk_io_addr(port);
+ unsigned short *dp = dst;
+
+ __ia64_mf_a();
+ while (count--) {
+ *dp++ = *addr;
+ }
+ __ia64_mf_a();
+ return;
+}
+
+extern inline void
+__insl (unsigned long port, void *dst, unsigned long count)
+{
+ volatile unsigned int *addr = __ia64_mk_io_addr(port);
+ unsigned int *dp = dst;
+
+ __ia64_mf_a();
+ while (count--) {
+ *dp++ = *addr;
+ }
+ __ia64_mf_a();
+ return;
+}
+
+extern inline void
+__outb (unsigned char val, unsigned long port)
+{
+ volatile unsigned char *addr = __ia64_mk_io_addr(port);
+
+ *addr = val;
+ __ia64_mf_a();
+}
+
+extern inline void
+__outw (unsigned short val, unsigned long port)
+{
+ volatile unsigned short *addr = __ia64_mk_io_addr(port);
+
+ *addr = val;
+ __ia64_mf_a();
+}
+
+extern inline void
+__outl (unsigned int val, unsigned long port)
+{
+ volatile unsigned int *addr = __ia64_mk_io_addr(port);
+
+ *addr = val;
+ __ia64_mf_a();
+}
+
+extern inline void
+__outsb (unsigned long port, const void *src, unsigned long count)
+{
+ volatile unsigned char *addr = __ia64_mk_io_addr(port);
+ const unsigned char *sp = src;
+
+ while (count--) {
+ *addr = *sp++;
+ }
+ __ia64_mf_a();
+ return;
+}
+
+extern inline void
+__outsw (unsigned long port, const void *src, unsigned long count)
+{
+ volatile unsigned short *addr = __ia64_mk_io_addr(port);
+ const unsigned short *sp = src;
+
+ while (count--) {
+ *addr = *sp++;
+ }
+ __ia64_mf_a();
+ return;
+}
+
+extern inline void
+__outsl (unsigned long port, const void *src, unsigned long count)
+{
+ volatile unsigned int *addr = __ia64_mk_io_addr(port);
+ const unsigned int *sp = src;
+
+ while (count--) {
+ *addr = *sp++;
+ }
+ __ia64_mf_a();
+ return;
+}
+
+#define inb __inb
+#define inw __inw
+#define inl __inl
+#define insb __insb
+#define insw __insw
+#define insl __insl
+#define outb __outb
+#define outw __outw
+#define outl __outl
+#define outsb __outsb
+#define outsw __outsw
+#define outsl __outsl
+
+/*
+ * The address passed to these functions are ioremap()ped already.
+ */
+extern inline unsigned long
+__readb (unsigned long addr)
+{
+ return *(volatile unsigned char *)addr;
+}
+
+extern inline unsigned long
+__readw (unsigned long addr)
+{
+ return *(volatile unsigned short *)addr;
+}
+
+extern inline unsigned long
+__readl (unsigned long addr)
+{
+ return *(volatile unsigned int *) addr;
+}
+
+extern inline unsigned long
+__readq (unsigned long addr)
+{
+ return *(volatile unsigned long *) addr;
+}
+
+extern inline void
+__writeb (unsigned char val, unsigned long addr)
+{
+ *(volatile unsigned char *) addr = val;
+}
+
+extern inline void
+__writew (unsigned short val, unsigned long addr)
+{
+ *(volatile unsigned short *) addr = val;
+}
+
+extern inline void
+__writel (unsigned int val, unsigned long addr)
+{
+ *(volatile unsigned int *) addr = val;
+}
+
+extern inline void
+__writeq (unsigned long val, unsigned long addr)
+{
+ *(volatile unsigned long *) addr = val;
+}
+
+#define readb __readb
+#define readw __readw
+#define readl __readl
+#define readq __readq
+#define __raw_readb readb
+#define __raw_readw readw
+#define __raw_readl readl
+#define __raw_readq readq
+#define writeb __writeb
+#define writew __writew
+#define writel __writel
+#define writeq __writeq
+#define __raw_writeb writeb
+#define __raw_writew writew
+#define __raw_writel writel
+#define __raw_writeq writeq
+
+#ifndef inb_p
+# define inb_p inb
+#endif
+#ifndef inw_p
+# define inw_p inw
+#endif
+#ifndef inl_p
+# define inl_p inl
+#endif
+
+#ifndef outb_p
+# define outb_p outb
+#endif
+#ifndef outw_p
+# define outw_p outw
+#endif
+#ifndef outl_p
+# define outl_p outl
+#endif
+
+/*
+ * An "address" in IO memory space is not clearly either an integer
+ * or a pointer. We will accept both, thus the casts.
+ *
+ * On IA-64, we access the physical I/O memory space through the
+ * uncached kernel region.
+ */
+static inline void *
+ioremap (unsigned long offset, unsigned long size)
+{
+ return (void *) (__IA64_UNCACHED_OFFSET | (offset));
+}
+
+static inline void
+iounmap (void *addr)
+{
+}
+
+#define ioremap_nocache(o,s) ioremap(o,s)
+
+# ifdef __KERNEL__
+
+/*
+ * String version of IO memory access ops:
+ */
+extern void __ia64_memcpy_fromio (void *, unsigned long, long);
+extern void __ia64_memcpy_toio (unsigned long, void *, long);
+extern void __ia64_memset_c_io (unsigned long, unsigned long, long);
+
+#define memcpy_fromio(to,from,len) \
+ __ia64_memcpy_fromio((to),(unsigned long)(from),(len))
+#define memcpy_toio(to,from,len) \
+ __ia64_memcpy_toio((unsigned long)(to),(from),(len))
+#define memset_io(addr,c,len) \
+ __ia64_memset_c_io((unsigned long)(addr),0x0101010101010101UL*(u8)(c),(len))
+
+#define __HAVE_ARCH_MEMSETW_IO
+#define memsetw_io(addr,c,len) \
+ __ia64_memset_c_io((unsigned long)(addr),0x0001000100010001UL*(u16)(c),(len))
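+
+/*
+ * The multiplications above replicate the byte/16-bit pattern across all
+ * lanes of a 64-bit word (e.g., 0x0101010101010101UL * 0xab yields
+ * 0xabababababababab), so the fill can proceed eight bytes at a time.
+ */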
+
+/*
+ * XXX - We don't have csum_partial_copy_fromio() yet, so we cheat here and
+ * just copy it. The net code will then do the checksum later. Presently
+ * only used by some shared memory 8390 Ethernet cards anyway.
+ */
+
+#define eth_io_copy_and_sum(skb,src,len,unused) memcpy_fromio((skb)->data,(src),(len))
+
+#if 0
+
+/*
+ * XXX this is the kind of legacy stuff we want to get rid of with IA-64... --davidm 99/12/02
+ */
+
+/*
+ * This is used for checking BIOS signatures. It's not clear at all
+ * why this is here. This implementation seems to be the same on
+ * all architectures. Strange.
+ */
+static inline int
+check_signature (unsigned long io_addr, const unsigned char *signature, int length)
+{
+ int retval = 0;
+ do {
+ if (readb(io_addr) != *signature)
+ goto out;
+ io_addr++;
+ signature++;
+ length--;
+ } while (length);
+ retval = 1;
+out:
+ return retval;
+}
+
+#define RTC_PORT(x) (0x70 + (x))
+#define RTC_ALWAYS_BCD 0
+
+#endif
+
+/*
+ * The caches on some architectures aren't DMA-coherent and have need
+ * to handle this in software. There are two types of operations that
+ * can be applied to dma buffers.
+ *
+ * - dma_cache_inv(start, size) invalidates the affected parts of the
+ * caches. Dirty lines of the caches may be written back or simply
+ * be discarded. This operation is necessary before dma operations
+ * to the memory.
+ *
+ * - dma_cache_wback(start, size) makes caches and memory coherent
+ * by writing the content of the caches back to memory, if necessary
+ * (cache flush).
+ *
+ * - dma_cache_wback_inv(start, size) Like dma_cache_wback() but the
+ * function also invalidates the affected part of the caches as
+ * necessary before DMA transfers from outside to memory.
+ *
+ * Fortunately, the IA-64 architecture mandates cache-coherent DMA, so
+ * these functions can be implemented as no-ops.
+ */
+#define dma_cache_inv(_start,_size) do { } while (0)
+#define dma_cache_wback(_start,_size) do { } while (0)
+#define dma_cache_wback_inv(_start,_size) do { } while (0)
+
+# endif /* __KERNEL__ */
+#endif /* _ASM_IA64_IO_H */
--- /dev/null
+#ifndef _ASM_IA64_IOCTL_H
+#define _ASM_IA64_IOCTL_H
+
+/*
+ * This is mostly derived from the Linux/x86 version.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+/* ioctl command encoding: 32 bits total, command in lower 16 bits,
+ * size of the parameter structure in the lower 14 bits of the
+ * upper 16 bits.
+ * Encoding the size of the parameter structure in the ioctl request
+ * is useful for catching programs compiled with old versions
+ * and to avoid overwriting user space outside the user buffer area.
+ * The highest 2 bits are reserved for indicating the ``access mode''.
+ * NOTE: This limits the maximum parameter size to 16KB-1!
+ */
+
+/*
+ * The following is for compatibility across the various Linux
+ * platforms. The ia64 ioctl numbering scheme doesn't really enforce
+ * a type field. De facto, however, the top 8 bits of the lower 16
+ * bits are indeed used as a type field, so we might just as well make
+ * this explicit here. Please be sure to use the decoding macros
+ * below from now on.
+ */
+#define _IOC_NRBITS 8
+#define _IOC_TYPEBITS 8
+#define _IOC_SIZEBITS 14
+#define _IOC_DIRBITS 2
+
+#define _IOC_NRMASK ((1 << _IOC_NRBITS)-1)
+#define _IOC_TYPEMASK ((1 << _IOC_TYPEBITS)-1)
+#define _IOC_SIZEMASK ((1 << _IOC_SIZEBITS)-1)
+#define _IOC_DIRMASK ((1 << _IOC_DIRBITS)-1)
+
+#define _IOC_NRSHIFT 0
+#define _IOC_TYPESHIFT (_IOC_NRSHIFT+_IOC_NRBITS)
+#define _IOC_SIZESHIFT (_IOC_TYPESHIFT+_IOC_TYPEBITS)
+#define _IOC_DIRSHIFT (_IOC_SIZESHIFT+_IOC_SIZEBITS)
+
+/*
+ * Direction bits.
+ */
+#define _IOC_NONE 0U
+#define _IOC_WRITE 1U
+#define _IOC_READ 2U
+
+#define _IOC(dir,type,nr,size) \
+ (((dir) << _IOC_DIRSHIFT) | \
+ ((type) << _IOC_TYPESHIFT) | \
+ ((nr) << _IOC_NRSHIFT) | \
+ ((size) << _IOC_SIZESHIFT))
+
+/* used to create numbers */
+#define _IO(type,nr) _IOC(_IOC_NONE,(type),(nr),0)
+#define _IOR(type,nr,size) _IOC(_IOC_READ,(type),(nr),sizeof(size))
+#define _IOW(type,nr,size) _IOC(_IOC_WRITE,(type),(nr),sizeof(size))
+#define _IOWR(type,nr,size) _IOC(_IOC_READ|_IOC_WRITE,(type),(nr),sizeof(size))
+
+/* used to decode ioctl numbers.. */
+#define _IOC_DIR(nr) (((nr) >> _IOC_DIRSHIFT) & _IOC_DIRMASK)
+#define _IOC_TYPE(nr) (((nr) >> _IOC_TYPESHIFT) & _IOC_TYPEMASK)
+#define _IOC_NR(nr) (((nr) >> _IOC_NRSHIFT) & _IOC_NRMASK)
+#define _IOC_SIZE(nr) (((nr) >> _IOC_SIZESHIFT) & _IOC_SIZEMASK)
+
+/* ...and for the drivers/sound files... */
+
+#define IOC_IN (_IOC_WRITE << _IOC_DIRSHIFT)
+#define IOC_OUT (_IOC_READ << _IOC_DIRSHIFT)
+#define IOC_INOUT ((_IOC_WRITE|_IOC_READ) << _IOC_DIRSHIFT)
+#define IOCSIZE_MASK (_IOC_SIZEMASK << _IOC_SIZESHIFT)
+#define IOCSIZE_SHIFT (_IOC_SIZESHIFT)
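+
+/*
+ * Worked example: _IOR('T', 0x30, unsigned int) (see TIOCGPTN in
+ * <asm/ioctls.h>) encodes as
+ *
+ *	(_IOC_READ << 30) | ('T' << 8) | 0x30 | (sizeof(unsigned int) << 16)
+ *	= 0x80000000 | 0x5400 | 0x30 | 0x40000
+ *	= 0x80045430
+ *
+ * and _IOC_DIR/_IOC_TYPE/_IOC_NR/_IOC_SIZE recover 2, 0x54 ('T'), 0x30
+ * and 4, respectively.
+ */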
+
+#endif /* _ASM_IA64_IOCTL_H */
--- /dev/null
+#ifndef _ASM_IA64_IOCTLS_H
+#define _ASM_IA64_IOCTLS_H
+
+/*
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <asm/ioctl.h>
+
+/* 0x54 is just a magic number to make these relatively unique ('T') */
+
+#define TCGETS 0x5401
+#define TCSETS 0x5402
+#define TCSETSW 0x5403
+#define TCSETSF 0x5404
+#define TCGETA 0x5405
+#define TCSETA 0x5406
+#define TCSETAW 0x5407
+#define TCSETAF 0x5408
+#define TCSBRK 0x5409
+#define TCXONC 0x540A
+#define TCFLSH 0x540B
+#define TIOCEXCL 0x540C
+#define TIOCNXCL 0x540D
+#define TIOCSCTTY 0x540E
+#define TIOCGPGRP 0x540F
+#define TIOCSPGRP 0x5410
+#define TIOCOUTQ 0x5411
+#define TIOCSTI 0x5412
+#define TIOCGWINSZ 0x5413
+#define TIOCSWINSZ 0x5414
+#define TIOCMGET 0x5415
+#define TIOCMBIS 0x5416
+#define TIOCMBIC 0x5417
+#define TIOCMSET 0x5418
+#define TIOCGSOFTCAR 0x5419
+#define TIOCSSOFTCAR 0x541A
+#define FIONREAD 0x541B
+#define TIOCINQ FIONREAD
+#define TIOCLINUX 0x541C
+#define TIOCCONS 0x541D
+#define TIOCGSERIAL 0x541E
+#define TIOCSSERIAL 0x541F
+#define TIOCPKT 0x5420
+#define FIONBIO 0x5421
+#define TIOCNOTTY 0x5422
+#define TIOCSETD 0x5423
+#define TIOCGETD 0x5424
+#define TCSBRKP 0x5425 /* Needed for POSIX tcsendbreak() */
+#define TIOCTTYGSTRUCT 0x5426 /* For debugging only */
+#define TIOCSBRK 0x5427 /* BSD compatibility */
+#define TIOCCBRK 0x5428 /* BSD compatibility */
+#define TIOCGSID 0x5429 /* Return the session ID of FD */
+#define TIOCGPTN _IOR('T',0x30, unsigned int) /* Get Pty Number (of pty-mux device) */
+#define TIOCSPTLCK _IOW('T',0x31, int) /* Lock/unlock Pty */
+
+#define FIONCLEX 0x5450 /* these numbers need to be adjusted. */
+#define FIOCLEX 0x5451
+#define FIOASYNC 0x5452
+#define TIOCSERCONFIG 0x5453
+#define TIOCSERGWILD 0x5454
+#define TIOCSERSWILD 0x5455
+#define TIOCGLCKTRMIOS 0x5456
+#define TIOCSLCKTRMIOS 0x5457
+#define TIOCSERGSTRUCT 0x5458 /* For debugging only */
+#define TIOCSERGETLSR 0x5459 /* Get line status register */
+#define TIOCSERGETMULTI 0x545A /* Get multiport config */
+#define TIOCSERSETMULTI 0x545B /* Set multiport config */
+
+#define TIOCMIWAIT 0x545C /* wait for a change on serial input line(s) */
+#define TIOCGICOUNT 0x545D /* read serial port inline interrupt counts */
+#define TIOCGHAYESESP 0x545E /* Get Hayes ESP configuration */
+#define TIOCSHAYESESP 0x545F /* Set Hayes ESP configuration */
+
+/* Used for packet mode */
+#define TIOCPKT_DATA 0
+#define TIOCPKT_FLUSHREAD 1
+#define TIOCPKT_FLUSHWRITE 2
+#define TIOCPKT_STOP 4
+#define TIOCPKT_START 8
+#define TIOCPKT_NOSTOP 16
+#define TIOCPKT_DOSTOP 32
+
+#define TIOCSER_TEMT 0x01 /* Transmitter physically empty */
+
+#endif /* _ASM_IA64_IOCTLS_H */
--- /dev/null
+#ifndef __ASM_IA64_IOSAPIC_H
+#define __ASM_IA64_IOSAPIC_H
+
+#define IO_SAPIC_DEFAULT_ADDR 0xFEC00000
+
+#define IO_SAPIC_REG_SELECT 0x0
+#define IO_SAPIC_WINDOW 0x10
+#define IO_SAPIC_EOI 0x40
+
+#define IO_SAPIC_VERSION 0x1
+
+/*
+ * Redirection table entry
+ */
+
+#define IO_SAPIC_RTE_LOW(i) (0x10 + (i)*2)
+#define IO_SAPIC_RTE_HIGH(i) (0x11 + (i)*2)
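+
+/*
+ * RTEs are accessed indirectly: the register index is written to the
+ * select register and the data is then read or written through the
+ * window.  A sketch, assuming readl()/writel() and an ioremap()ed base:
+ *
+ *	writel(IO_SAPIC_RTE_LOW(pin), base + IO_SAPIC_REG_SELECT);
+ *	low32 = readl(base + IO_SAPIC_WINDOW);
+ */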
+
+
+#define IO_SAPIC_DEST_SHIFT 16
+
+/*
+ * Delivery mode
+ */
+
+#define IO_SAPIC_DELIVERY_SHIFT 8
+#define IO_SAPIC_FIXED 0x0
+#define IO_SAPIC_LOWEST_PRIORITY 0x1
+#define IO_SAPIC_PMI 0x2
+#define IO_SAPIC_NMI 0x4
+#define IO_SAPIC_INIT 0x5
+#define IO_SAPIC_EXTINT 0x7
+
+/*
+ * Interrupt polarity
+ */
+
+#define IO_SAPIC_POLARITY_SHIFT 13
+#define IO_SAPIC_POL_HIGH 0
+#define IO_SAPIC_POL_LOW 1
+
+/*
+ * Trigger mode
+ */
+
+#define IO_SAPIC_TRIGGER_SHIFT 15
+#define IO_SAPIC_EDGE 0
+#define IO_SAPIC_LEVEL 1
+
+/*
+ * Mask bit
+ */
+
+#define IO_SAPIC_MASK_SHIFT 16
+#define IO_SAPIC_UNMASK 0
+#define IO_SAPIC_MASK 1
+
+/*
+ * Bus types
+ */
+#define BUS_ISA 0 /* ISA Bus */
+#define BUS_PCI 1 /* PCI Bus */
+
+#ifndef CONFIG_IA64_PCI_FIRMWARE_IRQ
+struct intr_routing_entry {
+ unsigned char srcbus;
+ unsigned char srcbusno;
+ unsigned char srcbusirq;
+ unsigned char iosapic_pin;
+ unsigned char dstiosapic;
+ unsigned char mode;
+ unsigned char trigger;
+ unsigned char polarity;
+};
+
+extern struct intr_routing_entry intr_routing[];
+#endif
+
+#ifndef __ASSEMBLY__
+
+#include <asm/irq.h>
+
+/*
+ * The IOSAPIC version register returns a 32-bit structure like:
+ * {
+ * unsigned int version : 8;
+ * unsigned int reserved1 : 8;
+ * unsigned int pins : 8;
+ * unsigned int reserved2 : 8;
+ * }
+ */
+extern unsigned int iosapic_version(unsigned long);
+extern void iosapic_init(unsigned long);
+
+struct iosapic_vector {
+ unsigned long iosapic_base; /* IOSAPIC Base address */
+ char pin; /* IOSAPIC pin (-1 == No data) */
+ unsigned char bus; /* Bus number */
+ unsigned char baseirq; /* Base IRQ handled by this IOSAPIC */
+ unsigned char bustype; /* Bus type (ISA, PCI, etc) */
+ unsigned int busdata; /* Bus specific ID */
+ /* These bitfields use the values defined above */
+ unsigned char dmode : 3;
+ unsigned char polarity : 1;
+ unsigned char trigger : 1;
+ unsigned char UNUSED : 3;
+};
+extern struct iosapic_vector iosapic_vector[NR_IRQS];
+
+#define iosapic_addr(v) iosapic_vector[v].iosapic_base
+#define iosapic_pin(v) iosapic_vector[v].pin
+#define iosapic_bus(v) iosapic_vector[v].bus
+#define iosapic_baseirq(v) iosapic_vector[v].baseirq
+#define iosapic_bustype(v) iosapic_vector[v].bustype
+#define iosapic_busdata(v) iosapic_vector[v].busdata
+#define iosapic_dmode(v) iosapic_vector[v].dmode
+#define iosapic_trigger(v) iosapic_vector[v].trigger
+#define iosapic_polarity(v) iosapic_vector[v].polarity
+
+# endif /* !__ASSEMBLY__ */
+#endif /* __ASM_IA64_IOSAPIC_H */
--- /dev/null
+#ifndef _ASM_IA64_IPC_H
+#define _ASM_IA64_IPC_H
+
+/*
+ * These are used to wrap system calls on x86.
+ *
+ * See arch/i386/kernel/sys_i386.c for ugly details..
+ */
+struct ipc_kludge {
+ struct msgbuf *msgp;
+ long msgtyp;
+};
+
+#define SEMOP 1
+#define SEMGET 2
+#define SEMCTL 3
+#define MSGSND 11
+#define MSGRCV 12
+#define MSGGET 13
+#define MSGCTL 14
+#define SHMAT 21
+#define SHMDT 22
+#define SHMGET 23
+#define SHMCTL 24
+
+/* Used by the DIPC package; try to avoid reusing it */
+#define DIPC 25
+
+#define IPCCALL(version,op) ((version)<<16 | (op))
+
+#endif /* _ASM_IA64_IPC_H */
--- /dev/null
+#ifndef _ASM_IA64_IPCBUF_H
+#define _ASM_IA64_IPCBUF_H
+
+/*
+ * The ipc64_perm structure for IA-64 architecture.
+ * Note extra padding because this structure is passed back and forth
+ * between kernel and user space.
+ *
+ * Pad space is left for:
+ * - 32-bit seq
+ * - 2 miscellaneous 64-bit values
+ */
+
+struct ipc64_perm
+{
+ __kernel_key_t key;
+ __kernel_uid_t uid;
+ __kernel_gid_t gid;
+ __kernel_uid_t cuid;
+ __kernel_gid_t cgid;
+ __kernel_mode_t mode;
+ unsigned short seq;
+ unsigned short __pad1;
+ unsigned long __unused1;
+ unsigned long __unused2;
+};
+
+#endif /* _ASM_IA64_IPCBUF_H */
--- /dev/null
+#ifndef _ASM_IA64_IRQ_H
+#define _ASM_IA64_IRQ_H
+
+/*
+ * Copyright (C) 1999-2000 Hewlett-Packard Co
+ * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998 Stephane Eranian <eranian@hpl.hp.com>
+ *
+ * 11/24/98 S.Eranian updated TIMER_IRQ and irq_cannonicalize
+ * 01/20/99 S.Eranian added keyboard interrupt
+ */
+
+#include <linux/config.h>
+#include <linux/spinlock.h>
+
+#include <asm/ptrace.h>
+
+#define NR_IRQS 256
+#define NR_ISA_IRQS 16
+
+/*
+ * 0 special
+ *
+ * 1,3-14 reserved by firmware
+ *
+ * 15 spurious interrupt (see IVR)
+ *
+ * 16-255 vectored external interrupts, available for use
+ * (16 is the lowest priority, 255 the highest)
+ *
+ * 15 classes of 16 interrupts each.
+ */
+#define IA64_MIN_VECTORED_IRQ 16
+#define IA64_MAX_VECTORED_IRQ 255
+
+#define IA64_SPURIOUS_INT 0x0f
+#define TIMER_IRQ 0xef /* use highest-prio group 15 interrupt for timer */
+#define IPI_IRQ 0xfe /* inter-processor interrupt vector */
+#define PERFMON_IRQ 0x28 /* performance monitor interrupt vector */
+
+extern __u8 irq_to_vector_map[IA64_MIN_VECTORED_IRQ];
+#define map_legacy_irq(x) (((x) < IA64_MIN_VECTORED_IRQ) ? irq_to_vector_map[(x)] : (x))
+
+#define IRQ_INPROGRESS (1 << 0) /* irq handler active */
+#define IRQ_ENABLED (1 << 1) /* irq enabled */
+#define IRQ_PENDING (1 << 2) /* irq pending */
+#define IRQ_REPLAY (1 << 3) /* irq has been replayed but not acked yet */
+#define IRQ_AUTODETECT (1 << 4) /* irq is being autodetected */
+#define IRQ_WAITING (1 << 5) /* used for autodetection: irq not yet seen */
+
+struct hw_interrupt_type {
+ const char *typename;
+ void (*init) (unsigned long addr);
+ void (*startup) (unsigned int irq);
+ void (*shutdown) (unsigned int irq);
+ int (*handle) (unsigned int irq, struct pt_regs *regs);
+ void (*enable) (unsigned int irq);
+ void (*disable) (unsigned int irq);
+};
+
+extern struct hw_interrupt_type irq_type_default; /* dummy interrupt controller */
+extern struct hw_interrupt_type irq_type_ia64_internal; /* CPU-internal interrupt controller */
+
+struct irq_desc {
+ unsigned int type; /* type of interrupt (level vs. edge triggered) */
+ unsigned int status; /* see above */
+ unsigned int depth; /* disable depth for nested irq disables */
+ struct hw_interrupt_type *handler;
+ struct irqaction *action; /* irq action list */
+};
+
+extern struct irq_desc irq_desc[NR_IRQS];
+
+extern spinlock_t irq_controller_lock;
+
+/* IA64 inter-cpu interrupt related definitions */
+
+/* Delivery modes for inter-cpu interrupts */
+enum {
+ IA64_IPI_DM_INT = 0x0, /* pend an external interrupt */
+ IA64_IPI_DM_PMI = 0x2, /* pend a PMI */
+ IA64_IPI_DM_NMI = 0x4, /* pend an NMI (vector 2) */
+ IA64_IPI_DM_INIT = 0x5, /* pend an INIT interrupt */
+ IA64_IPI_DM_EXTINT = 0x7, /* pend an 8259-compatible interrupt. */
+};
+
+#define IA64_BUS_ID(cpu) (cpu >> 8)
+#define IA64_LOCAL_ID(cpu) (cpu & 0xff)
+
+static __inline__ int
+irq_cannonicalize (int irq)
+{
+ /*
+ * We do the legacy thing here of pretending that irqs < 16
+ * are 8259 irqs.
+ */
+ return ((irq == 2) ? 9 : irq);
+}
+
+extern int invoke_irq_handlers (unsigned int irq, struct pt_regs *regs, struct irqaction *action);
+extern void disable_irq (unsigned int);
+extern void enable_irq (unsigned int);
+extern void ipi_send (int cpu, int vector, int delivery_mode);
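+
+/*
+ * For example, a plain inter-processor interrupt would be sent with
+ * ipi_send(cpu, IPI_IRQ, IA64_IPI_DM_INT), using one of the IA64_IPI_DM_*
+ * delivery modes defined above.
+ */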
+
+#ifdef CONFIG_SMP
+ extern void irq_enter(int cpu, int irq);
+ extern void irq_exit(int cpu, int irq);
+ extern void handle_IPI(int irq, void *dev_id, struct pt_regs *regs);
+#else
+# define irq_enter(cpu, irq) (++local_irq_count[cpu])
+# define irq_exit(cpu, irq) (--local_irq_count[cpu])
+#endif
+
+#endif /* _ASM_IA64_IRQ_H */
--- /dev/null
+#ifndef _ASM_IA64_KDBSUPPORT_H
+#define _ASM_IA64_KDBSUPPORT_H
+
+/*
+ * Kernel Debugger Breakpoint Handler
+ *
+ * Copyright 1999, Silicon Graphics, Inc.
+ *
+ * Written March 1999 by Scott Lurndal at Silicon Graphics, Inc.
+ */
+
+#include <asm/ptrace.h>
+
+ /*
+ * This file provides definitions for functions that
+ * are dependent upon the product into which kdb is
+ * linked.
+ *
+ * This version is for Linux.
+ */
+typedef void (*handler_t)(struct pt_regs *);
+typedef unsigned long k_machreg_t;
+
+unsigned long show_cur_stack_frame(struct pt_regs *, int, unsigned long *);
+
+extern char* kbd_getstr(char *, size_t, char *);
+extern int kdbinstalltrap(int, handler_t, handler_t*);
+extern int kdbinstalldbreg(kdb_bp_t *);
+extern void kdbremovedbreg(kdb_bp_t *);
+extern void kdb_initbptab(void);
+extern int kdbgetregcontents(const char *, struct pt_regs *, unsigned long *);
+extern int kdbsetregcontents(const char *, struct pt_regs *, unsigned long);
+extern int kdbdumpregs(struct pt_regs *, const char *, const char *);
+
+typedef int kdbintstate_t;
+
+extern void kdb_disableint(kdbintstate_t *);
+extern void kdb_restoreint(kdbintstate_t *);
+
+extern k_machreg_t kdb_getpc(struct pt_regs *);
+extern int kdb_setpc(struct pt_regs *, k_machreg_t);
+
+extern int kdb_putword(unsigned long, unsigned long);
+extern int kdb_getcurrentframe(struct pt_regs *);
+
+/*
+ * kdb_db_trap is a processor dependent routine invoked
+ * from kdb() via the #db trap handler. It handles breakpoints involving
+ * the processor debug registers and handles single step traps
+ * using the processor trace flag.
+ */
+
+#define KDB_DB_BPT 0 /* Straight breakpoint */
+#define KDB_DB_SS 1 /* Single Step trap */
+#define KDB_DB_SSB 2 /* Single Step, caller should continue */
+
+extern int kdb_db_trap(struct pt_regs *, int);
+
+extern int kdb_allocdbreg(kdb_bp_t *);
+extern void kdb_freedbreg(kdb_bp_t *);
+extern void kdb_initdbregs(void);
+
+extern void kdb_setsinglestep(struct pt_regs *);
+
+ /*
+ * Support for ia32 architecture debug registers.
+ */
+#define KDB_DBREGS 4
+extern k_machreg_t dbregs[];
+
+#define DR6_BT 0x00008000
+#define DR6_BS 0x00004000
+#define DR6_BD 0x00002000
+
+#define DR6_B3 0x00000008
+#define DR6_B2 0x00000004
+#define DR6_B1 0x00000002
+#define DR6_B0 0x00000001
+
+#define DR7_RW_VAL(dr, drnum) \
+ (((dr) >> (16 + (4 * (drnum)))) & 0x3)
+
+#define DR7_RW_SET(dr, drnum, rw) \
+ do { \
+ (dr) &= ~(0x3 << (16 + (4 * (drnum)))); \
+ (dr) |= (((rw) & 0x3) << (16 + (4 * (drnum)))); \
+ } while (0)
+
+#define DR7_RW0(dr) DR7_RW_VAL(dr, 0)
+#define DR7_RW0SET(dr,rw) DR7_RW_SET(dr, 0, rw)
+#define DR7_RW1(dr) DR7_RW_VAL(dr, 1)
+#define DR7_RW1SET(dr,rw) DR7_RW_SET(dr, 1, rw)
+#define DR7_RW2(dr) DR7_RW_VAL(dr, 2)
+#define DR7_RW2SET(dr,rw) DR7_RW_SET(dr, 2, rw)
+#define DR7_RW3(dr) DR7_RW_VAL(dr, 3)
+#define DR7_RW3SET(dr,rw) DR7_RW_SET(dr, 3, rw)
+
+
+#define DR7_LEN_VAL(dr, drnum) \
+ (((dr) >> (18 + (4 * (drnum)))) & 0x3)
+
+#define DR7_LEN_SET(dr, drnum, rw) \
+ do { \
+ (dr) &= ~(0x3 << (18 + (4 * (drnum)))); \
+ (dr) |= (((rw) & 0x3) << (18 + (4 * (drnum)))); \
+ } while (0)
+
+#define DR7_LEN0(dr) DR7_LEN_VAL(dr, 0)
+#define DR7_LEN0SET(dr,len) DR7_LEN_SET(dr, 0, len)
+#define DR7_LEN1(dr) DR7_LEN_VAL(dr, 1)
+#define DR7_LEN1SET(dr,len) DR7_LEN_SET(dr, 1, len)
+#define DR7_LEN2(dr) DR7_LEN_VAL(dr, 2)
+#define DR7_LEN2SET(dr,len) DR7_LEN_SET(dr, 2, len)
+#define DR7_LEN3(dr) DR7_LEN_VAL(dr, 3)
+#define DR7_LEN3SET(dr,len) DR7_LEN_SET(dr, 3, len)
+
+#define DR7_G0(dr) (((dr)>>1)&0x1)
+#define DR7_G0SET(dr) ((dr) |= 0x2)
+#define DR7_G0CLR(dr) ((dr) &= ~0x2)
+#define DR7_G1(dr) (((dr)>>3)&0x1)
+#define DR7_G1SET(dr) ((dr) |= 0x8)
+#define DR7_G1CLR(dr) ((dr) &= ~0x8)
+#define DR7_G2(dr) (((dr)>>5)&0x1)
+#define DR7_G2SET(dr) ((dr) |= 0x20)
+#define DR7_G2CLR(dr) ((dr) &= ~0x20)
+#define DR7_G3(dr) (((dr)>>7)&0x1)
+#define DR7_G3SET(dr) ((dr) |= 0x80)
+#define DR7_G3CLR(dr) ((dr) &= ~0x80)
+
+#define DR7_L0(dr) (((dr))&0x1)
+#define DR7_L0SET(dr) ((dr) |= 0x1)
+#define DR7_L0CLR(dr) ((dr) &= ~0x1)
+#define DR7_L1(dr) (((dr)>>2)&0x1)
+#define DR7_L1SET(dr) ((dr) |= 0x4)
+#define DR7_L1CLR(dr) ((dr) &= ~0x4)
+#define DR7_L2(dr) (((dr)>>4)&0x1)
+#define DR7_L2SET(dr) ((dr) |= 0x10)
+#define DR7_L2CLR(dr) ((dr) &= ~0x10)
+#define DR7_L3(dr) (((dr)>>6)&0x1)
+#define DR7_L3SET(dr) ((dr) |= 0x40)
+#define DR7_L3CLR(dr) ((dr) &= ~0x40)
+
+#define DR7_GD 0x00002000 /* General Detect Enable */
+#define DR7_GE 0x00000200 /* Global exact */
+#define DR7_LE 0x00000100 /* Local exact */
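+
+/*
+ * For example, to arm debug register 0 as a 4-byte write watchpoint
+ * (R/W code 1 == break on data writes, LEN code 3 == 4 bytes in the
+ * ia32 debug architecture), one would do something like:
+ *
+ *	k_machreg_t dr7 = kdb_getdr7();
+ *
+ *	DR7_RW0SET(dr7, 1);
+ *	DR7_LEN0SET(dr7, 3);
+ *	DR7_G0SET(dr7);
+ *	kdb_putdr7(dr7);
+ */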
+
+extern k_machreg_t kdb_getdr6(void);
+extern void kdb_putdr6(k_machreg_t);
+
+extern k_machreg_t kdb_getdr7(void);
+extern void kdb_putdr7(k_machreg_t);
+
+extern k_machreg_t kdb_getdr(int);
+extern void kdb_putdr(int, k_machreg_t);
+
+extern k_machreg_t kdb_getcr(int);
+
+extern void kdb_bp_install(void);
+extern void kdb_bp_remove(void);
+
+/*
+ * Support for setjmp/longjmp
+ */
+#define JB_BX 0
+#define JB_SI 1
+#define JB_DI 2
+#define JB_BP 3
+#define JB_SP 4
+#define JB_PC 5
+
+typedef struct __kdb_jmp_buf {
+ unsigned long regs[6];
+} kdb_jmp_buf;
+
+extern int kdb_setjmp(kdb_jmp_buf *);
+extern void kdb_longjmp(kdb_jmp_buf *, int);
+
+extern kdb_jmp_buf kdbjmpbuf;
+
+#define getprsregs(regs) ((struct switch_stack *)regs -1)
+
+#define BITMASK(bp,value) (value << bp)
+
+/* bkpt support using break inst instead of IBP reg */
+
+/*
+ * Define certain specific instructions
+ */
+#define BREAK_INSTR (0x00000080100L << 11)
+#define INST_SLOT0_MASK (0x1ffffffffffL << 5)
+
+#if 0
+#define MAX_BREAKPOINTS 40
+#define PSR_SS 40
+#endif
+
+/**
+ * IA-64 instruction format structures
+ */
+typedef union bundle {
+ struct {
+ long low8;
+ long high8;
+ } lform;
+ struct {
+ int low_low4;
+ int low_high4;
+ long high8;
+ } iform;
+} bundle_t;
+
+#define BKPTMODE_DATAR 3
+#define BKPTMODE_IO 2
+#define BKPTMODE_DATAW 1
+#define BKPTMODE_INST 0
+
+/* Some of the fault registers needed by kdb but not passed with
+ * regs or switch stack.
+ */
+typedef struct fault_regs {
+ unsigned long isr;
+ unsigned long ifa;
+ unsigned long iim;
+ unsigned long itir;
+} fault_regs_t;
+
+/*
+ * State of kdb
+ */
+
+typedef struct kdb_state {
+ int cmd_given;
+ int reason_for_entry;
+ int bkpt_handling_state;
+ int kdb_action;
+} kdb_state_t;
+
+#define BKPTSTATE_NOT_HANDLED 0
+#define BKPTSTATE_HANDLED 1
+
+#define CMDGIVEN_UNKNOWN 0
+#define CMDGIVEN_SSTEP 1
+#define CMDGIVEN_GO 2
+
+#define ENTRYREASON_GO 0
+#define ENTRYREASON_SSTEP 1
+
+#define ACTION_UNKNOWN 0
+#define ACTION_NOBPINSTALL 1
+#define ACTION_NOPROMPT 2
+
+#endif /* _ASM_IA64_KDBSUPPORT_H */
--- /dev/null
+#ifndef _ASM_IA64_KEYBOARD_H
+#define _ASM_IA64_KEYBOARD_H
+
+/*
+ * This file contains the ia-64 architecture specific keyboard
+ * definitions.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+# ifdef __KERNEL__
+
+#define KEYBOARD_IRQ 1
+#define DISABLE_KBD_DURING_INTERRUPTS 0
+
+extern int pckbd_setkeycode(unsigned int scancode, unsigned int keycode);
+extern int pckbd_getkeycode(unsigned int scancode);
+extern int pckbd_pretranslate(unsigned char scancode, char raw_mode);
+extern int pckbd_translate(unsigned char scancode, unsigned char *keycode,
+ char raw_mode);
+extern char pckbd_unexpected_up(unsigned char keycode);
+extern void pckbd_leds(unsigned char leds);
+extern void pckbd_init_hw(void);
+extern unsigned char pckbd_sysrq_xlate[128];
+
+#define kbd_setkeycode pckbd_setkeycode
+#define kbd_getkeycode pckbd_getkeycode
+#define kbd_pretranslate pckbd_pretranslate
+#define kbd_translate pckbd_translate
+#define kbd_unexpected_up pckbd_unexpected_up
+#define kbd_leds pckbd_leds
+#define kbd_init_hw pckbd_init_hw
+#define kbd_sysrq_xlate pckbd_sysrq_xlate
+
+#define INIT_KBD
+
+#define SYSRQ_KEY 0x54
+#if defined(CONFIG_KDB)
+#define E1_PAUSE 119 /* PAUSE key */
+#endif
+
+/* resource allocation */
+#define kbd_request_region()
+#define kbd_request_irq(handler) request_irq(KEYBOARD_IRQ, handler, 0, "keyboard", NULL)
+
+/* How to access the keyboard macros on this platform. */
+#define kbd_read_input() inb(KBD_DATA_REG)
+#define kbd_read_status() inb(KBD_STATUS_REG)
+#define kbd_write_output(val) outb(val, KBD_DATA_REG)
+#define kbd_write_command(val) outb(val, KBD_CNTL_REG)
+
+/* Some stoneage hardware needs delays after some operations. */
+#define kbd_pause() do { } while(0)
+
+/*
+ * Machine specific bits for the PS/2 driver
+ */
+
+#define AUX_IRQ 12
+
+#define aux_request_irq(hand, dev_id) \
+ request_irq(AUX_IRQ, hand, SA_SHIRQ, "PS/2 Mouse", dev_id)
+
+#define aux_free_irq(dev_id) free_irq(AUX_IRQ, dev_id)
+
+# endif /* __KERNEL__ */
+#endif /* _ASM_IA64_KEYBOARD_H */
--- /dev/null
+/* $Id: linux_logo.h,v 1.6 1998/07/30 16:30:20 jj Exp $
+ * include/asm-ia64/linux_logo.h: This is a linux logo
+ * to be displayed on boot.
+ *
+ * Copyright (C) 1996 Larry Ewing (lewing@isc.tamu.edu)
+ * Copyright (C) 1996 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
+ * Copyright (C) 1998 David Mosberger (davidm@hpl.hp.com)
+ *
+ * You can put anything here, but:
+ * LINUX_LOGO_COLORS has to be less than 224
+ * image size has to be 80x80
+ * values have to start from 0x20
+ * (i.e. RGB(linux_logo_red[0],
+ * linux_logo_green[0],
+ * linux_logo_blue[0]) is color 0x20)
+ * BW image has to be 80x80 as well, with MS bit
+ * on the left
+ * Serial_console ascii image can be any size,
+ * but should contain %s to display the version
+ */
+
+#include <linux/init.h>
+#include <linux/version.h>
+
+#define linux_logo_banner "Linux/ia64 version " UTS_RELEASE
+
+#define LINUX_LOGO_COLORS 214
+
+#ifdef INCLUDE_LINUX_LOGO_DATA
+
+#define INCLUDE_LINUX_LOGOBW
+#define INCLUDE_LINUX_LOGO16
+
+#include <linux/linux_logo.h>
+
+#else
+
+/* prototypes only */
+extern unsigned char linux_logo_red[];
+extern unsigned char linux_logo_green[];
+extern unsigned char linux_logo_blue[];
+extern unsigned char linux_logo[];
+extern unsigned char linux_logo_bw[];
+extern unsigned char linux_logo16_red[];
+extern unsigned char linux_logo16_green[];
+extern unsigned char linux_logo16_blue[];
+extern unsigned char linux_logo16[];
+
+#endif
--- /dev/null
+/*
+ * Machine vector for IA-64.
+ *
+ * Copyright (C) 1999 Silicon Graphics, Inc.
+ * Copyright (C) Srinivasa Thirumalachar <sprasad@engr.sgi.com>
+ * Copyright (C) Vijay Chander <vijay@engr.sgi.com>
+ * Copyright (C) 1999 Hewlett-Packard Co.
+ * Copyright (C) David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+#ifndef _ASM_IA64_MACHVEC_H
+#define _ASM_IA64_MACHVEC_H
+
+#include <linux/config.h>
+#include <linux/types.h>
+
+/* forward declarations: */
+struct hw_interrupt_type;
+struct irq_desc;
+struct mm_struct;
+struct pt_regs;
+struct task_struct;
+struct timeval;
+struct vm_area_struct;
+
+typedef void ia64_mv_setup_t (char **);
+typedef void ia64_mv_irq_init_t (struct irq_desc *);
+typedef void ia64_mv_pci_fixup_t (void);
+typedef unsigned long ia64_mv_map_nr_t (unsigned long);
+typedef void ia64_mv_mca_init_t (void);
+typedef void ia64_mv_mca_handler_t (void);
+typedef void ia64_mv_cmci_handler_t (int, void *, struct pt_regs *);
+typedef void ia64_mv_log_print_t (void);
+
+# if defined (CONFIG_IA64_HP_SIM)
+# include <asm/machvec_hpsim.h>
+# elif defined (CONFIG_IA64_DIG)
+# include <asm/machvec_dig.h>
+# elif defined (CONFIG_IA64_SGI_SN1_SIM)
+# include <asm/machvec_sgi_sn1_SIM.h>
+# elif defined (CONFIG_IA64_GENERIC)
+
+struct ia64_machine_vector {
+ const char *name;
+ ia64_mv_setup_t *setup;
+ ia64_mv_irq_init_t *irq_init;
+ ia64_mv_pci_fixup_t *pci_fixup;
+ ia64_mv_map_nr_t *map_nr;
+ ia64_mv_mca_init_t *mca_init;
+ ia64_mv_mca_handler_t *mca_handler;
+ ia64_mv_cmci_handler_t *cmci_handler;
+ ia64_mv_log_print_t *log_print;
+};
+
+#define MACHVEC_INIT(name) \
+{ \
+ #name, \
+ platform_setup, \
+ platform_irq_init, \
+ platform_pci_fixup, \
+ platform_map_nr, \
+ platform_mca_init, \
+ platform_mca_handler, \
+ platform_cmci_handler, \
+ platform_log_print \
+}
+
+# ifndef MACHVEC_INHIBIT_RENAMING
+# define platform_name ia64_mv.name
+# define platform_setup ia64_mv.setup
+# define platform_irq_init ia64_mv.irq_init
+# define platform_pci_fixup ia64_mv.pci_fixup
+# define platform_map_nr ia64_mv.map_nr
+# define platform_mca_init ia64_mv.mca_init
+# define platform_mca_handler ia64_mv.mca_handler
+# define platform_cmci_handler ia64_mv.cmci_handler
+# define platform_log_print ia64_mv.log_print
+# endif
+
+extern struct ia64_machine_vector ia64_mv;
+extern void machvec_noop (void);
+
+# else
+# error Unknown configuration. Update asm-ia64/machvec.h.
+# endif /* CONFIG_IA64_GENERIC */
+
+/*
+ * Define default versions so we can extend machvec for new platforms without having
+ * to update the machvec files for all existing platforms.
+ */
+#ifndef platform_setup
+# define platform_setup ((ia64_mv_setup_t *) machvec_noop)
+#endif
+#ifndef platform_irq_init
+# define platform_irq_init ((ia64_mv_irq_init_t *) machvec_noop)
+#endif
+#ifndef platform_pci_fixup
+# define platform_pci_fixup ((ia64_mv_pci_fixup_t *) machvec_noop)
+#endif
+#ifndef platform_mca_init
+# define platform_mca_init ((ia64_mv_mca_init_t *) machvec_noop)
+#endif
+#ifndef platform_mca_handler
+# define platform_mca_handler ((ia64_mv_mca_handler_t *) machvec_noop)
+#endif
+#ifndef platform_cmci_handler
+# define platform_cmci_handler ((ia64_mv_cmci_handler_t *) machvec_noop)
+#endif
+#ifndef platform_log_print
+# define platform_log_print ((ia64_mv_log_print_t *) machvec_noop)
+#endif
+
+#endif /* _ASM_IA64_MACHVEC_H */
--- /dev/null
+#ifndef _ASM_IA64_MACHVEC_DIG_h
+#define _ASM_IA64_MACHVEC_DIG_h
+
+extern ia64_mv_setup_t dig_setup;
+extern ia64_mv_irq_init_t dig_irq_init;
+extern ia64_mv_pci_fixup_t dig_pci_fixup;
+extern ia64_mv_map_nr_t map_nr_dense;
+
+/*
+ * This stuff has dual use!
+ *
+ * For a generic kernel, the macros are used to initialize the
+ * platform's machvec structure. When compiling a non-generic kernel,
+ * the macros are used directly.
+ */
+#define platform_name "dig"
+#define platform_setup dig_setup
+#define platform_irq_init dig_irq_init
+#define platform_pci_fixup dig_pci_fixup
+#define platform_map_nr map_nr_dense
+
+#endif /* _ASM_IA64_MACHVEC_DIG_h */
--- /dev/null
+#ifndef _ASM_IA64_MACHVEC_HPSIM_h
+#define _ASM_IA64_MACHVEC_HPSIM_h
+
+extern ia64_mv_setup_t hpsim_setup;
+extern ia64_mv_irq_init_t hpsim_irq_init;
+extern ia64_mv_map_nr_t map_nr_dense;
+
+/*
+ * This stuff has dual use!
+ *
+ * For a generic kernel, the macros are used to initialize the
+ * platform's machvec structure. When compiling a non-generic kernel,
+ * the macros are used directly.
+ */
+#define platform_name "hpsim"
+#define platform_setup hpsim_setup
+#define platform_irq_init hpsim_irq_init
+#define platform_map_nr map_nr_dense
+
+#endif /* _ASM_IA64_MACHVEC_HPSIM_h */
--- /dev/null
+#define MACHVEC_INHIBIT_RENAMING
+
+#include <asm/machvec.h>
+
+#define MACHVEC_HELPER(name) \
+ struct ia64_machine_vector machvec_##name __attribute__ ((unused, __section__ (".machvec"))) \
+ = MACHVEC_INIT(name);
+
+#define MACHVEC_DEFINE(name) MACHVEC_HELPER(name)
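+
+/*
+ * A platform's objects can then instantiate their vector with, e.g.,
+ * MACHVEC_DEFINE(dig), which expands to a fully initialized
+ * "struct ia64_machine_vector machvec_dig" placed in the .machvec
+ * section, picking up the platform_* macros from the machvec header
+ * included before this file.
+ */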
--- /dev/null
+#ifndef _ASM_IA64_MACHVEC_SN1_h
+#define _ASM_IA64_MACHVEC_SN1_h
+
+extern ia64_mv_setup_t sn1_setup;
+extern ia64_mv_irq_init_t sn1_irq_init;
+extern ia64_mv_map_nr_t sn1_map_nr;
+
+/*
+ * This stuff has dual use!
+ *
+ * For a generic kernel, the macros are used to initialize the
+ * platform's machvec structure. When compiling a non-generic kernel,
+ * the macros are used directly.
+ */
+#define platform_name "sn1"
+#define platform_setup sn1_setup
+#define platform_irq_init sn1_irq_init
+#define platform_map_nr sn1_map_nr
+
+#endif /* _ASM_IA64_MACHVEC_SN1_h */
--- /dev/null
+/*
+ * File: mca.h
+ * Purpose: Machine check handling specific defines
+ *
+ * Copyright (C) 1999 Silicon Graphics, Inc.
+ * Copyright (C) Vijay Chander (vijay@engr.sgi.com)
+ * Copyright (C) Srinivasa Thirumalachar (sprasad@engr.sgi.com)
+ */
+#ifndef _ASM_IA64_MCA_H
+#define _ASM_IA64_MCA_H
+
+#include <linux/types.h>
+#include <asm/param.h>
+#include <asm/sal.h>
+#include <asm/processor.h>
+
+/* These are the return codes from all the IA64_MCA specific interfaces */
+typedef int ia64_mca_return_code_t;
+
+enum {
+ IA64_MCA_SUCCESS = 0,
+ IA64_MCA_FAILURE = 1
+};
+
+#define IA64_MCA_RENDEZ_TIMEOUT (100 * HZ) /* 100 seconds worth of jiffies */
+
+/* Interrupt vectors reserved for MC handling. */
+#define IA64_MCA_RENDEZ_INT_VECTOR 0xF3 /* Rendez interrupt */
+#define IA64_MCA_WAKEUP_INT_VECTOR 0x12 /* Wakeup interrupt */
+#define IA64_MCA_CMC_INT_VECTOR 0xF2 /* Correctable machine check interrupt */
+
+#define IA64_CMC_INT_DISABLE 0
+#define IA64_CMC_INT_ENABLE 1
+
+
+typedef u32 int_vector_t;
+typedef u64 millisec_t;
+
+typedef union cmcv_reg_u {
+ u64 cmcv_regval;
+ struct {
+ u64 cmcr_vector : 8;
+ u64 cmcr_ignored1 : 47;
+ u64 cmcr_mask : 1;
+ u64 cmcr_reserved1 : 3;
+ u64 cmcr_ignored2 : 1;
+ u64 cmcr_reserved2 : 4;
+ } cmcv_reg_s;
+
+} cmcv_reg_t;
+
+#define cmcv_mask cmcv_reg_s.cmcr_mask
+#define cmcv_vector cmcv_reg_s.cmcr_vector
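+
+/*
+ * A sketch of programming the CMC vector, assuming an ia64_set_cmcv()
+ * style accessor for the cr.cmcv control register (not declared here):
+ *
+ *	cmcv_reg_t cmcv;
+ *
+ *	cmcv.cmcv_regval = 0;
+ *	cmcv.cmcv_vector = IA64_MCA_CMC_INT_VECTOR;
+ *	cmcv.cmcv_mask = 0;		(0 == interrupt unmasked)
+ *	ia64_set_cmcv(cmcv.cmcv_regval);
+ */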
+
+
+#define IA64_MCA_UCMC_HANDLER_SIZE 0x10
+#define IA64_INIT_HANDLER_SIZE 0x10
+
+enum {
+ IA64_MCA_RENDEZ_CHECKIN_NOTDONE = 0x0,
+ IA64_MCA_RENDEZ_CHECKIN_DONE = 0x1
+};
+
+#define IA64_MAXCPUS 64 /* Need to do something about this */
+
+/* Information maintained by the MC infrastructure */
+typedef struct ia64_mc_info_s {
+ u64 imi_mca_handler;
+ size_t imi_mca_handler_size;
+ u64 imi_monarch_init_handler;
+ size_t imi_monarch_init_handler_size;
+ u64 imi_slave_init_handler;
+ size_t imi_slave_init_handler_size;
+ u8 imi_rendez_checkin[IA64_MAXCPUS];
+
+} ia64_mc_info_t;
+
+/* Possible rendez states passed from SAL to OS during MCA
+ * handoff
+ */
+enum {
+ IA64_MCA_RENDEZ_NOT_RQD = 0x0,
+ IA64_MCA_RENDEZ_DONE_WITHOUT_INIT = 0x1,
+ IA64_MCA_RENDEZ_DONE_WITH_INIT = 0x2,
+ IA64_MCA_RENDEZ_FAILURE = -1
+};
+
+typedef struct ia64_mca_sal_to_os_state_s {
+ u64 imsto_os_gp; /* GP of the os registered with the SAL */
+ u64 imsto_pal_proc; /* PAL_PROC entry point - physical addr */
+ u64 imsto_sal_proc; /* SAL_PROC entry point - physical addr */
+ u64 imsto_sal_gp; /* GP of the SAL - physical */
+ u64 imsto_rendez_state; /* Rendez state information */
+ u64 imsto_sal_check_ra; /* Return address in SAL_CHECK while going
+ * back to SAL from OS after MCA handling.
+ */
+} ia64_mca_sal_to_os_state_t;
+
+enum {
+ IA64_MCA_CORRECTED = 0x0, /* Error has been corrected by OS_MCA */
+ IA64_MCA_WARM_BOOT = -1, /* Warm boot of the system needed from SAL */
+ IA64_MCA_COLD_BOOT = -2, /* Cold boot of the system needed from SAL */
+ IA64_MCA_HALT = -3 /* System to be halted by SAL */
+};
+
+typedef struct ia64_mca_os_to_sal_state_s {
+ u64 imots_os_status; /* OS status to SAL as to what happened
+ * with the MCA handling.
+ */
+ u64 imots_sal_gp; /* GP of the SAL - physical */
+ u64 imots_new_min_state; /* Pointer to structure containing
+ * new values of registers in the min state
+ * save area.
+ */
+ u64 imots_sal_check_ra; /* Return address in SAL_CHECK while going
+ * back to SAL from OS after MCA handling.
+ */
+} ia64_mca_os_to_sal_state_t;
+
+typedef int (*prfunc_t)(const char * fmt, ...);
+
+extern void mca_init(void);
+extern void ia64_os_mca_dispatch(void);
+extern void ia64_os_mca_dispatch_end(void);
+extern void ia64_mca_ucmc_handler(void);
+extern void ia64_monarch_init_handler(void);
+extern void ia64_slave_init_handler(void);
+extern void ia64_mca_rendez_int_handler(int,void *,struct pt_regs *);
+extern void ia64_mca_wakeup_int_handler(int,void *,struct pt_regs *);
+extern void ia64_mca_cmc_int_handler(int,void *,struct pt_regs *);
+extern void ia64_log_print(int,int,prfunc_t);
+
+#define PLATFORM_CALL(fn, args) printk("Platform call TBD\n")
+
+#undef MCA_TEST
+
+#if defined(MCA_TEST)
+# define MCA_DEBUG printk
+#else
+# define MCA_DEBUG
+#endif
+
+#endif /* _ASM_IA64_MCA_H */
--- /dev/null
+/*
+ * File: mca_asm.h
+ *
+ * Copyright (C) 1999 Silicon Graphics, Inc.
+ * Copyright (C) Vijay Chander (vijay@engr.sgi.com)
+ * Copyright (C) Srinivasa Thirumalachar (sprasad@engr.sgi.com)
+ */
+#ifndef _ASM_IA64_MCA_ASM_H
+#define _ASM_IA64_MCA_ASM_H
+
+#define PSR_IC 13
+#define PSR_I 14
+#define PSR_DT 17
+#define PSR_RT 27
+#define PSR_IT 36
+#define PSR_BN 44
+
+/*
+ * This macro converts an instruction virtual address to a physical address
+ * Right now for simulation purposes the virtual addresses are
+ * direct mapped to physical addresses.
+ * 1. Lop off bits 61 thru 63 in the virtual address
+ */
+#define INST_VA_TO_PA(addr) \
+ dep addr = 0, addr, 61, 3;
+/*
+ * This macro converts a data virtual address to a physical address
+ * Right now for simulation purposes the virtual addresses are
+ * direct mapped to physical addresses.
+ * 1. Lop off bits 61 thru 63 in the virtual address
+ */
+#define DATA_VA_TO_PA(addr) \
+ dep addr = 0, addr, 61, 3;
+/*
+ * This macro converts a data physical address to a virtual address
+ * Right now for simulation purposes the virtual addresses are
+ * direct mapped to physical addresses.
+ * 1. Put 0x7 in bits 61 thru 63.
+ */
+#define DATA_PA_TO_VA(addr,temp) \
+ mov temp = 0x7 ; \
+ dep addr = temp, addr, 61, 3;
+
+/*
+ * This macro jumps to the instruction at the given virtual address
+ * and starts execution in physical mode with all the address
+ * translations turned off.
+ * 1. Save the current psr
+ * 2. Make sure that all the upper 32 bits are off
+ *
+ * 3. Clear the interrupt enable and interrupt state collection bits
+ * in the psr before updating the ipsr and iip.
+ *
+ * 4. Turn off the instruction, data and rse translation bits of the psr
+ * and store the new value into ipsr
+ * Also make sure that the interrupts are disabled.
+ * Ensure that we are in little endian mode.
+ * [psr.{rt, it, dt, i, be} = 0]
+ *
+ * 5. Get the physical address corresponding to the virtual address
+ * of the next instruction bundle and put it in iip.
+ * (Using magic numbers 24 and 40 in the deposit instruction since
+ * the IA64_SDK code directly maps the lower 24 bits of a virtual
+ * address to the physical address).
+ *
+ * 6. Do an rfi to move the values from ipsr to psr and iip to ip.
+ */
+#define PHYSICAL_MODE_ENTER(temp1, temp2, start_addr, old_psr) \
+ mov old_psr = psr; \
+ ;; \
+ dep old_psr = 0, old_psr, 32, 32; \
+ \
+ mov ar##.##rsc = r0 ; \
+ ;; \
+ mov temp2 = ar##.##bspstore; \
+ ;; \
+ DATA_VA_TO_PA(temp2); \
+ ;; \
+ mov temp1 = ar##.##rnat; \
+ ;; \
+ mov ar##.##bspstore = temp2; \
+ ;; \
+ mov ar##.##rnat = temp1; \
+ mov temp1 = psr; \
+ mov temp2 = psr; \
+ ;; \
+ \
+ dep temp2 = 0, temp2, PSR_IC, 2; \
+ ;; \
+ mov psr##.##l = temp2; \
+ \
+ dep temp1 = 0, temp1, 32, 32; \
+ ;; \
+ dep temp1 = 0, temp1, PSR_IT, 1; \
+ ;; \
+ dep temp1 = 0, temp1, PSR_DT, 1; \
+ ;; \
+ dep temp1 = 0, temp1, PSR_RT, 1; \
+ ;; \
+ dep temp1 = 0, temp1, PSR_I, 1; \
+ ;; \
+ movl temp2 = start_addr; \
+ mov cr##.##ipsr = temp1; \
+ ;; \
+ INST_VA_TO_PA(temp2); \
+ mov cr##.##iip = temp2; \
+ mov cr##.##ifs = r0; \
+ DATA_VA_TO_PA(sp) \
+ DATA_VA_TO_PA(gp) \
+ ;; \
+ srlz##.##i; \
+ ;; \
+ nop 1; \
+ nop 2; \
+ nop 1; \
+ nop 2; \
+ rfi; \
+ ;;
+
+/*
+ * This macro jumps to the instruction at the given virtual address
+ * and starts execution in virtual mode with all the address
+ * translations turned on.
+ * 1. Get the old saved psr
+ *
+ * 2. Clear the interrupt enable and interrupt state collection bits
+ * in the current psr.
+ *
+ * 3. Set the instruction translation bit back in the old psr
+ * Note we have to do this since we are right now saving only the
+ * lower 32-bits of old psr.(Also the old psr has the data and
+ * rse translation bits on)
+ *
+ * 4. Set ipsr to this old_psr with "it" bit set and "bn" = 1.
+ *
+ * 5. Set iip to the virtual address of the next instruction bundle.
+ *
+ * 6. Do an rfi to move ipsr to psr and iip to ip.
+ */
+
+#define VIRTUAL_MODE_ENTER(temp1, temp2, start_addr, old_psr) \
+ mov temp2 = psr; \
+ ;; \
+ dep temp2 = 0, temp2, PSR_IC, 2; \
+ ;; \
+ mov psr##.##l = temp2; \
+ mov ar##.##rsc = r0 ; \
+ ;; \
+ mov temp2 = ar##.##bspstore; \
+ ;; \
+ DATA_PA_TO_VA(temp2,temp1); \
+ ;; \
+ mov temp1 = ar##.##rnat; \
+ ;; \
+ mov ar##.##bspstore = temp2; \
+ ;; \
+ mov ar##.##rnat = temp1; \
+ ;; \
+ mov temp1 = old_psr; \
+ ;; \
+ mov temp2 = 1 ; \
+ dep temp1 = temp2, temp1, PSR_I, 1; \
+ ;; \
+ dep temp1 = temp2, temp1, PSR_IC, 1; \
+ ;; \
+ dep temp1 = temp2, temp1, PSR_IT, 1; \
+ ;; \
+ dep temp1 = temp2, temp1, PSR_DT, 1; \
+ ;; \
+ dep temp1 = temp2, temp1, PSR_RT, 1; \
+ ;; \
+ dep temp1 = temp2, temp1, PSR_BN, 1; \
+ ;; \
+ \
+ mov cr##.##ipsr = temp1; \
+ movl temp2 = start_addr; \
+ ;; \
+ mov cr##.##iip = temp2; \
+ DATA_PA_TO_VA(sp, temp1); \
+ DATA_PA_TO_VA(gp, temp1); \
+ ;; \
+ nop 1; \
+ nop 2; \
+ nop 1; \
+ rfi; \
+ ;;
+
+/*
+ * The following offsets capture the order in which the
+ * RSE related registers from the old context are
+ * saved onto the new stack frame.
+ *
+ * +-----------------------+
+ * |NDIRTY [BSP - BSPSTORE]|
+ * +-----------------------+
+ * | RNAT |
+ * +-----------------------+
+ * | BSPSTORE |
+ * +-----------------------+
+ * | IFS |
+ * +-----------------------+
+ * | PFS |
+ * +-----------------------+
+ * | RSC |
+ * +-----------------------+ <-------- Bottom of new stack frame
+ */
+#define rse_rsc_offset 0
+#define rse_pfs_offset (rse_rsc_offset+0x08)
+#define rse_ifs_offset (rse_pfs_offset+0x08)
+#define rse_bspstore_offset (rse_ifs_offset+0x08)
+#define rse_rnat_offset (rse_bspstore_offset+0x08)
+#define rse_ndirty_offset (rse_rnat_offset+0x08)
+
+/*
+ * rse_switch_context
+ *
+ * 1. Save old RSC onto the new stack frame
+ * 2. Save PFS onto new stack frame
+ * 3. Cover the old frame and start a new frame.
+ * 4. Save IFS onto new stack frame
+ * 5. Save the old BSPSTORE on the new stack frame
+ * 6. Save the old RNAT on the new stack frame
+ * 7. Write BSPSTORE with the new backing store pointer
+ * 8. Read and save the new BSP to calculate the #dirty registers
+ * NOTE: Look at pages 11-10, 11-11 in PRM Vol 2
+ */
+#define rse_switch_context(temp,p_stackframe,p_bspstore) \
+ ;; \
+ mov temp=ar##.##rsc;; \
+ st8 [p_stackframe]=temp,8;; \
+ mov temp=ar##.##pfs;; \
+ st8 [p_stackframe]=temp,8; \
+ cover ;; \
+ mov temp=cr##.##ifs;; \
+ st8 [p_stackframe]=temp,8;; \
+ mov temp=ar##.##bspstore;; \
+ st8 [p_stackframe]=temp,8;; \
+ mov temp=ar##.##rnat;; \
+ st8 [p_stackframe]=temp,8; \
+ mov ar##.##bspstore=p_bspstore;; \
+ mov temp=ar##.##bsp;; \
+ sub temp=temp,p_bspstore;; \
+ st8 [p_stackframe]=temp,8
+
+/*
+ * rse_return_context
+ * 1. Allocate a zero-sized frame
+ * 2. Store the number of dirty registers in the RSC.loadrs field
+ * 3. Issue a loadrs to ensure that any registers from the interrupted
+ * context which were saved on the new stack frame have been loaded
+ * back into the stacked registers
+ * 4. Restore BSPSTORE
+ * 5. Restore RNAT
+ * 6. Restore PFS
+ * 7. Restore IFS
+ * 8. Restore RSC
+ * 9. Issue an RFI
+ */
+#define rse_return_context(psr_mask_reg,temp,p_stackframe) \
+ ;; \
+ alloc temp=ar.pfs,0,0,0,0; \
+ add p_stackframe=rse_ndirty_offset,p_stackframe;; \
+ ld8 temp=[p_stackframe];; \
+ shl temp=temp,16;; \
+ mov ar##.##rsc=temp;; \
+ loadrs;; \
+ add p_stackframe=-rse_ndirty_offset+rse_bspstore_offset,p_stackframe;;\
+ ld8 temp=[p_stackframe];; \
+ mov ar##.##bspstore=temp;; \
+ add p_stackframe=-rse_bspstore_offset+rse_rnat_offset,p_stackframe;;\
+ ld8 temp=[p_stackframe];; \
+ mov ar##.##rnat=temp;; \
+ add p_stackframe=-rse_rnat_offset+rse_pfs_offset,p_stackframe;; \
+ ld8 temp=[p_stackframe];; \
+ mov ar##.##pfs=temp; \
+ add p_stackframe=-rse_pfs_offset+rse_ifs_offset,p_stackframe;; \
+ ld8 temp=[p_stackframe];; \
+ mov cr##.##ifs=temp; \
+ add p_stackframe=-rse_ifs_offset+rse_rsc_offset,p_stackframe;; \
+ ld8 temp=[p_stackframe];; \
+ mov ar##.##rsc=temp ; \
+ add p_stackframe=-rse_rsc_offset,p_stackframe; \
+ mov temp=cr.ipsr;; \
+ st8 [p_stackframe]=temp,8; \
+ mov temp=cr.iip;; \
+ st8 [p_stackframe]=temp,-8; \
+ mov temp=psr;; \
+ or temp=temp,psr_mask_reg;; \
+ mov cr.ipsr=temp;; \
+ mov temp=ip;; \
+ add temp=0x30,temp;; \
+ mov cr.iip=temp;; \
+ rfi;; \
+ ld8 temp=[p_stackframe],8;; \
+ mov cr.ipsr=temp;; \
+ ld8 temp=[p_stackframe];; \
+ mov cr.iip=temp
+
+#endif /* _ASM_IA64_MCA_ASM_H */
--- /dev/null
+#ifndef _ASM_IA64_MMAN_H
+#define _ASM_IA64_MMAN_H
+
+/*
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#define PROT_READ 0x1 /* page can be read */
+#define PROT_WRITE 0x2 /* page can be written */
+#define PROT_EXEC 0x4 /* page can be executed */
+#define PROT_NONE 0x0 /* page can not be accessed */
+
+#define MAP_SHARED 0x01 /* Share changes */
+#define MAP_PRIVATE 0x02 /* Changes are private */
+#define MAP_TYPE 0x0f /* Mask for type of mapping */
+#define MAP_FIXED 0x10 /* Interpret addr exactly */
+#define MAP_ANONYMOUS 0x20 /* don't use a file */
+
+#define MAP_GROWSDOWN 0x0100 /* stack-like segment */
+#define MAP_GROWSUP 0x0200 /* register stack-like segment */
+#define MAP_DENYWRITE 0x0800 /* ETXTBSY */
+#define MAP_EXECUTABLE 0x1000 /* mark it as an executable */
+#define MAP_LOCKED 0x2000 /* pages are locked */
+#define MAP_NORESERVE 0x4000 /* don't check for reservations */
+
+#define MS_ASYNC 1 /* sync memory asynchronously */
+#define MS_INVALIDATE 2 /* invalidate the caches */
+#define MS_SYNC 4 /* synchronous memory sync */
+
+#define MCL_CURRENT 1 /* lock all current mappings */
+#define MCL_FUTURE 2 /* lock all future mappings */
+
+/* compatibility flags */
+#define MAP_ANON MAP_ANONYMOUS
+#define MAP_FILE 0
+
+#endif /* _ASM_IA64_MMAN_H */
--- /dev/null
+#ifndef _ASM_IA64_MMU_CONTEXT_H
+#define _ASM_IA64_MMU_CONTEXT_H
+
+/*
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <linux/config.h>
+#include <linux/sched.h>
+
+#include <asm/processor.h>
+
+/*
+ * Routines to manage the allocation of task context numbers. Task
+ * context numbers are used to reduce or eliminate the need to perform
+ * TLB flushes due to context switches. Context numbers are
+ * implemented using ia-64 region ids. Since ia-64 TLBs do not
+ * guarantee that the region number is checked when performing a TLB
+ * lookup, we need to assign a unique region id to each region in a
+ * process. We use the least significant three bits in a region id
+ * for this purpose. On processors where the region number is checked
+ * in TLB lookups, we can get back those three bits by defining
+ * CONFIG_IA64_TLB_CHECKS_REGION_NUMBER. The macro
+ * IA64_REGION_ID_BITS gives the number of bits in a region id. The
+ * architecture manual guarantees this number to be in the range
+ * 18-24.
+ *
+ * A context number has the following format:
+ *
+ * +--------------------+---------------------+
+ * | generation number | region id |
+ * +--------------------+---------------------+
+ *
+ * A context number of 0 is considered "invalid".
+ *
+ * The generation number is incremented whenever we end up having used
+ * up all available region ids. At that point we flush the entire
+ * TLB and reuse the first region id. The new generation number
+ * ensures that when we context switch back to an old process, we do
+ * not inadvertently end up using its possibly reused region id.
+ * Instead, we simply allocate a new region id for that process.
+ *
+ * Copyright (C) 1998 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#define IA64_REGION_ID_KERNEL 0 /* the kernel's region id (tlb.c depends on this being 0) */
+
+#define IA64_REGION_ID_BITS 18
+
+#ifdef CONFIG_IA64_TLB_CHECKS_REGION_NUMBER
+# define IA64_HW_CONTEXT_BITS IA64_REGION_ID_BITS
+#else
+# define IA64_HW_CONTEXT_BITS (IA64_REGION_ID_BITS - 3)
+#endif
+
+#define IA64_HW_CONTEXT_MASK ((1UL << IA64_HW_CONTEXT_BITS) - 1)
+
+extern unsigned long ia64_next_context;
+
+extern void get_new_mmu_context (struct mm_struct *mm);
+
+extern inline unsigned long
+ia64_rid (unsigned long context, unsigned long region_addr)
+{
+# ifdef CONFIG_IA64_TLB_CHECKS_REGION_NUMBER
+ return context;
+# else
+ return context << 3 | (region_addr >> 61);
+# endif
+}
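+
+/*
+ * Illustrative sketch (helper name invented, not used elsewhere): compute
+ * the region id that would be used for a user address in region 2
+ * (addresses starting at 0x4000000000000000) for a given mm.  Without
+ * CONFIG_IA64_TLB_CHECKS_REGION_NUMBER the region number ends up in the
+ * low three bits of the rid.
+ */
+static inline unsigned long
+ia64_example_rid_for_region2 (struct mm_struct *mm)
+{
+	return ia64_rid(mm->context & IA64_HW_CONTEXT_MASK, 0x4000000000000000UL);
+}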
+
+extern inline void
+get_mmu_context (struct mm_struct *mm)
+{
+ /* check if our ASN is of an older generation and thus invalid: */
+ if (((mm->context ^ ia64_next_context) & ~IA64_HW_CONTEXT_MASK) != 0) {
+ get_new_mmu_context(mm);
+ }
+}
+
+extern inline void
+init_new_context (struct task_struct *p, struct mm_struct *mm)
+{
+ mm->context = 0;
+}
+
+extern inline void
+destroy_context (struct mm_struct *mm)
+{
+ /* Nothing to do. */
+}
+
+extern inline void
+reload_context (struct mm_struct *mm)
+{
+ unsigned long rid;
+ unsigned long rid_incr = 0;
+ unsigned long rr0, rr1, rr2, rr3, rr4;
+
+ rid = (mm->context & IA64_HW_CONTEXT_MASK);
+
+#ifndef CONFIG_IA64_TLB_CHECKS_REGION_NUMBER
+ rid <<= 3; /* make space for encoding the region number */
+ rid_incr = 1 << 8;
+#endif
+
+ /* encode the region id, preferred page size, and VHPT enable bit: */
+ rr0 = (rid << 8) | (PAGE_SHIFT << 2) | 1;
+ rr1 = rr0 + 1*rid_incr;
+ rr2 = rr0 + 2*rid_incr;
+ rr3 = rr0 + 3*rid_incr;
+ rr4 = rr0 + 4*rid_incr;
+ ia64_set_rr(0x0000000000000000, rr0);
+ ia64_set_rr(0x2000000000000000, rr1);
+ ia64_set_rr(0x4000000000000000, rr2);
+ ia64_set_rr(0x6000000000000000, rr3);
+ ia64_set_rr(0x8000000000000000, rr4);
+ ia64_insn_group_barrier();
+ ia64_srlz_i(); /* srlz.i implies srlz.d */
+ ia64_insn_group_barrier();
+}
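+
+/*
+ * Illustrative only (helper name invented): the value written into each
+ * region register above packs the region id, the preferred page size and
+ * the VHPT enable bit.  This shows the same encoding for a single rid.
+ */
+static inline unsigned long
+ia64_example_rr_value (unsigned long rid)
+{
+	/* bits 63..8: rid, bits 7..2: log2 of the page size, bit 0: VHPT enable */
+	return (rid << 8) | (PAGE_SHIFT << 2) | 1;
+}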
+
+/*
+ * Switch from address space PREV to address space NEXT. Note that
+ * TSK may be NULL.
+ */
+static inline void
+switch_mm (struct mm_struct *prev, struct mm_struct *next, struct task_struct *tsk, unsigned cpu)
+{
+ /*
+ * We may get interrupts here, but that's OK because interrupt
+ * handlers cannot touch user-space.
+ */
+ __asm__ __volatile__ ("mov ar.k7=%0" :: "r"(__pa(next->pgd)));
+ get_mmu_context(next);
+ reload_context(next);
+}
+
+#define activate_mm(prev,next) \
+ switch_mm((prev), (next), NULL, smp_processor_id())
+
+#endif /* _ASM_IA64_MMU_CONTEXT_H */
--- /dev/null
+#ifndef _ASM_IA64_MSGBUF_H
+#define _ASM_IA64_MSGBUF_H
+
+/*
+ * The msqid64_ds structure for IA-64 architecture.
+ * Note extra padding because this structure is passed back and forth
+ * between kernel and user space.
+ *
+ * Pad space is left for:
+ * - 2 miscellaneous 64-bit values
+ */
+
+struct msqid64_ds {
+ struct ipc64_perm msg_perm;
+ __kernel_time_t msg_stime; /* last msgsnd time */
+ __kernel_time_t msg_rtime; /* last msgrcv time */
+ __kernel_time_t msg_ctime; /* last change time */
+ unsigned long msg_cbytes; /* current number of bytes on queue */
+ unsigned long msg_qnum; /* number of messages in queue */
+ unsigned long msg_qbytes; /* max number of bytes on queue */
+ __kernel_pid_t msg_lspid; /* pid of last msgsnd */
+ __kernel_pid_t msg_lrpid; /* last receive pid */
+ unsigned long __unused1;
+ unsigned long __unused2;
+};
+
+#endif /* _ASM_IA64_MSGBUF_H */
--- /dev/null
+#ifndef _ASM_IA64_NAMEI_H
+#define _ASM_IA64_NAMEI_H
+
+/*
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+/*
+ * This dummy routine may be changed to something useful
+ * for /usr/gnemul/ emulation stuff.
+ * Look at asm-sparc/namei.h for details.
+ */
+#define __prefix_lookup_dentry(name, lookup_flags) \
+ do {} while (0)
+
+#endif /* _ASM_IA64_NAMEI_H */
--- /dev/null
+#ifndef _ASM_IA64_OFFSETS_H
+#define _ASM_IA64_OFFSETS_H
+
+/*
+ * DO NOT MODIFY
+ *
+ * This file was generated by arch/ia64/tools/print_offsets.
+ *
+ */
+
+#define PF_PTRACED_BIT 4
+
+#define IA64_TASK_SIZE 2752 /* 0xac0 */
+#define IA64_PT_REGS_SIZE 400 /* 0x190 */
+#define IA64_SWITCH_STACK_SIZE 560 /* 0x230 */
+#define IA64_SIGINFO_SIZE 136 /* 0x88 */
+
+#define IA64_TASK_FLAGS_OFFSET 8 /* 0x8 */
+#define IA64_TASK_SIGPENDING_OFFSET 16 /* 0x10 */
+#define IA64_TASK_NEED_RESCHED_OFFSET 40 /* 0x28 */
+#define IA64_TASK_THREAD_OFFSET 912 /* 0x390 */
+#define IA64_TASK_THREAD_KSP_OFFSET 912 /* 0x390 */
+#define IA64_TASK_PID_OFFSET 188 /* 0xbc */
+#define IA64_TASK_MM_OFFSET 88 /* 0x58 */
+#define IA64_PT_REGS_CR_IPSR_OFFSET 0 /* 0x0 */
+#define IA64_PT_REGS_R12_OFFSET 112 /* 0x70 */
+#define IA64_PT_REGS_R8_OFFSET 144 /* 0x90 */
+#define IA64_PT_REGS_R16_OFFSET 176 /* 0xb0 */
+#define IA64_SWITCH_STACK_B0_OFFSET 464 /* 0x1d0 */
+#define IA64_SWITCH_STACK_CALLER_UNAT_OFFSET 0 /* 0x0 */
+#define IA64_SIGCONTEXT_AR_BSP_OFFSET 72 /* 0x48 */
+#define IA64_SIGCONTEXT_AR_RNAT_OFFSET 80 /* 0x50 */
+#define IA64_SIGCONTEXT_FLAGS_OFFSET 0 /* 0x0 */
+#define IA64_SIGCONTEXT_CFM_OFFSET 48 /* 0x30 */
+#define IA64_SIGCONTEXT_FR6_OFFSET 560 /* 0x230 */
+
+#endif /* _ASM_IA64_OFFSETS_H */
--- /dev/null
+#ifndef _ASM_IA64_PAGE_H
+#define _ASM_IA64_PAGE_H
+/*
+ * Pagetable related stuff.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <linux/config.h>
+
+#include <asm/types.h>
+
+/*
+ * PAGE_SHIFT determines the actual kernel page size.
+ */
+#if defined(CONFIG_IA64_PAGE_SIZE_4KB)
+# define PAGE_SHIFT 12
+#elif defined(CONFIG_IA64_PAGE_SIZE_8KB)
+# define PAGE_SHIFT 13
+#elif defined(CONFIG_IA64_PAGE_SIZE_16KB)
+# define PAGE_SHIFT 14
+#elif defined(CONFIG_IA64_PAGE_SIZE_64KB)
+# define PAGE_SHIFT 16
+#else
+# error Unsupported page size!
+#endif
+
+#define PAGE_SIZE (__IA64_UL_CONST(1) << PAGE_SHIFT)
+#define PAGE_MASK (~(PAGE_SIZE - 1))
+#define PAGE_ALIGN(addr) (((addr) + PAGE_SIZE - 1) & PAGE_MASK)
+
+#ifdef __ASSEMBLY__
+# define __pa(x) ((x) - PAGE_OFFSET)
+# define __va(x) ((x) + PAGE_OFFSET)
+#else /* !__ASSEMBLY__ */
+# ifdef __KERNEL__
+# define STRICT_MM_TYPECHECKS
+
+extern void clear_page (void *page);
+extern void copy_page (void *to, void *from);
+
+# ifdef STRICT_MM_TYPECHECKS
+/*
+ * These are used to make use of C type-checking..
+ */
+typedef struct { unsigned long pte; } pte_t;
+typedef struct { unsigned long pmd; } pmd_t;
+typedef struct { unsigned long pgd; } pgd_t;
+typedef struct { unsigned long pgprot; } pgprot_t;
+
+#define pte_val(x) ((x).pte)
+#define pmd_val(x) ((x).pmd)
+#define pgd_val(x) ((x).pgd)
+#define pgprot_val(x) ((x).pgprot)
+
+#define __pte(x) ((pte_t) { (x) } )
+#define __pgd(x) ((pgd_t) { (x) } )
+#define __pgprot(x) ((pgprot_t) { (x) } )
+
+# else /* !STRICT_MM_TYPECHECKS */
+/*
+ * .. while these make it easier on the compiler
+ */
+typedef unsigned long pte_t;
+typedef unsigned long pmd_t;
+typedef unsigned long pgd_t;
+typedef unsigned long pgprot_t;
+
+#define pte_val(x) (x)
+#define pmd_val(x) (x)
+#define pgd_val(x) (x)
+#define pgprot_val(x) (x)
+
+#define __pte(x) (x)
+#define __pgd(x) (x)
+#define __pgprot(x) (x)
+
+# endif /* !STRICT_MM_TYPECHECKS */
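+
+/*
+ * Minimal sketch (function name invented): with STRICT_MM_TYPECHECKS the
+ * __pte()/pte_val() accessors are the only way to convert between a bare
+ * value and a pte_t, so accidental mixing of the two becomes a compile
+ * error.  The round trip below compiles with either variant.
+ */
+static inline int
+ia64_example_pte_roundtrip (unsigned long val)
+{
+	pte_t pte = __pte(val);
+
+	return pte_val(pte) == val;
+}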
+
+/*
+ * Note: the MAP_NR() macro can't use __pa() because MAP_NR(X) MUST
+ * map to something >= max_mapnr if X is outside the identity mapped
+ * kernel space.
+ */
+
+/*
+ * The dense variant can be used as long as the size of memory holes isn't
+ * very big.
+ */
+#define MAP_NR_DENSE(addr) (((unsigned long) (addr) - PAGE_OFFSET) >> PAGE_SHIFT)
+
+/*
+ * This variant works well for the SGI SN1 architecture (which does have huge
+ * holes in the memory address space).
+ */
+#define MAP_NR_SN1(addr) (((unsigned long) (addr) - PAGE_OFFSET) >> PAGE_SHIFT)
+
+#ifdef CONFIG_IA64_GENERIC
+# define MAP_NR(addr) platform_map_nr(addr)
+#elif defined (CONFIG_IA64_SN_SN1_SIM)
+# define MAP_NR(addr) MAP_NR_SN1(addr)
+#else
+# define MAP_NR(addr) MAP_NR_DENSE(addr)
+#endif
+
+# endif /* __KERNEL__ */
+
+typedef union ia64_va {
+ struct {
+ unsigned long off : 61; /* intra-region offset */
+ unsigned long reg : 3; /* region number */
+ } f;
+ unsigned long l;
+ void *p;
+} ia64_va;
+
+/*
+ * Note: These macros depend on the fact that PAGE_OFFSET has all
+ * region bits set to 1 and all other bits set to zero. They are
+ * expressed in this way to ensure they result in a single "dep"
+ * instruction.
+ */
+#define __pa(x) ({ia64_va _v; _v.l = (long) (x); _v.f.reg = 0; _v.l;})
+#define __va(x) ({ia64_va _v; _v.l = (long) (x); _v.f.reg = -1; _v.p;})
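+
+/*
+ * Worked example (helper name invented): __pa() simply clears the region
+ * bits and __va() sets them again, so with PAGE_OFFSET = 0xe000000000000000
+ * the kernel address 0xe000000000001000 maps to physical 0x1000 and back.
+ */
+static inline int
+ia64_example_pa_va_roundtrip (void *kaddr)
+{
+	return __va(__pa(kaddr)) == kaddr;
+}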
+
+#define BUG() do { printk("kernel BUG at %s:%d!\n", __FILE__, __LINE__); *(int *)0=0; } while (0)
+#define PAGE_BUG(page) do { BUG(); } while (0)
+
+#endif /* !__ASSEMBLY__ */
+
+#define PAGE_OFFSET 0xe000000000000000
+
+#endif /* _ASM_IA64_PAGE_H */
--- /dev/null
+#ifndef _ASM_IA64_PAL_H
+#define _ASM_IA64_PAL_H
+
+/*
+ * Processor Abstraction Layer definitions.
+ *
+ * This is based on version 2.4 of the manual "Enhanced Mode Processor
+ * Abstraction Layer".
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999 VA Linux Systems
+ * Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
+ * Copyright (C) 1999 Srinivasa Prasad Thirumalachar <sprasad@sprasad.engr.sgi.com>
+ *
+ * 99/10/01 davidm Make sure we pass zero for reserved parameters.
+ */
+
+/*
+ * Note that some of these calls use a static-register only calling
+ * convention which has nothing to do with the regular calling
+ * convention.
+ */
+#define PAL_CACHE_FLUSH 1 /* flush i/d cache */
+#define PAL_CACHE_INFO 2 /* get detailed i/d cache info */
+#define PAL_CACHE_INIT 3 /* initialize i/d cache */
+#define PAL_CACHE_SUMMARY 4 /* get summary of cache hierarchy */
+#define PAL_MEM_ATTRIB 5 /* list supported memory attributes */
+#define PAL_PTCE_INFO 6 /* purge TLB info */
+#define PAL_VM_INFO 7 /* return supported virtual memory features */
+#define PAL_VM_SUMMARY 8 /* return summary on supported vm features */
+#define PAL_BUS_GET_FEATURES 9 /* return processor bus interface features settings */
+#define PAL_BUS_SET_FEATURES 10 /* set processor bus features */
+#define PAL_DEBUG_INFO 11 /* get number of debug registers */
+#define PAL_FIXED_ADDR 12 /* get fixed component of processor's directed address */
+#define PAL_FREQ_BASE 13 /* base frequency of the platform */
+#define PAL_FREQ_RATIOS 14 /* ratio of processor, bus and ITC frequency */
+#define PAL_PERF_MON_INFO 15 /* return performance monitor info */
+#define PAL_PLATFORM_ADDR 16 /* set processor interrupt block and IO port space addr */
+#define PAL_PROC_GET_FEATURES 17 /* get configurable processor features & settings */
+#define PAL_PROC_SET_FEATURES 18 /* enable/disable configurable processor features */
+#define PAL_RSE_INFO 19 /* return rse information */
+#define PAL_VERSION 20 /* return version of PAL code */
+#define PAL_MC_CLEAR_LOG 21 /* clear all processor log info */
+#define PAL_MC_DRAIN 22 /* drain operations which could result in an MCA */
+#define PAL_MC_EXPECTED 23 /* set/reset expected MCA indicator */
+#define PAL_MC_DYNAMIC_STATE 24 /* get processor dynamic state */
+#define PAL_MC_ERROR_INFO 25 /* get processor MCA info and static state */
+#define PAL_MC_RESUME 26 /* Return to interrupted process */
+#define PAL_MC_REGISTER_MEM 27 /* Register memory for PAL to use during MCAs and inits */
+#define PAL_HALT 28 /* enter the low power HALT state */
+#define PAL_HALT_LIGHT 29 /* enter the low power light halt state*/
+#define PAL_COPY_INFO 30 /* returns info needed to relocate PAL */
+#define PAL_CACHE_LINE_INIT 31 /* init tags & data of cache line */
+#define PAL_PMI_ENTRYPOINT 32 /* register PMI memory entry points with the processor */
+#define PAL_ENTER_IA_32_ENV 33 /* enter IA-32 system environment */
+#define PAL_VM_PAGE_SIZE 34 /* return vm TC and page walker page sizes */
+
+#define PAL_MEM_FOR_TEST 37 /* get amount of memory needed for late processor test */
+#define PAL_CACHE_PROT_INFO 38 /* get i/d cache protection info */
+#define PAL_REGISTER_INFO 39 /* return AR and CR register information*/
+#define PAL_SHUTDOWN 40 /* enter processor shutdown state */
+
+#define PAL_COPY_PAL 256 /* relocate PAL procedures and PAL PMI */
+#define PAL_HALT_INFO 257 /* return the low power capabilities of processor */
+#define PAL_TEST_PROC 258 /* perform late processor self-test */
+#define PAL_CACHE_READ 259 /* read tag & data of cacheline for diagnostic testing */
+#define PAL_CACHE_WRITE 260 /* write tag & data of cacheline for diagnostic testing */
+#define PAL_VM_TR_READ 261 /* read contents of translation register */
+
+#ifndef __ASSEMBLY__
+
+#include <linux/types.h>
+
+/*
+ * Data types needed to pass information into PAL procedures and
+ * interpret information returned by them.
+ */
+
+/* Return status from the PAL procedure */
+typedef s64 pal_status_t;
+
+#define PAL_STATUS_SUCCESS 0 /* No error */
+#define PAL_STATUS_UNIMPLEMENTED -1 /* Unimplemented procedure */
+#define PAL_STATUS_EINVAL -2 /* Invalid argument */
+#define PAL_STATUS_ERROR -3 /* Error */
+#define PAL_STATUS_CACHE_INIT_FAIL -4 /* Could not initialize the
+ * specified level and type of
+ * cache without side effects
+ * and "restrict" was 1
+ */
+
+/* Processor cache level in the hierarchy */
+typedef u64 pal_cache_level_t;
+#define PAL_CACHE_LEVEL_L0 0 /* L0 */
+#define PAL_CACHE_LEVEL_L1 1 /* L1 */
+#define PAL_CACHE_LEVEL_L2 2 /* L2 */
+
+
+/* Processor cache type at a particular level in the hierarchy */
+
+typedef u64 pal_cache_type_t;
+#define PAL_CACHE_TYPE_INSTRUCTION 1 /* Instruction cache */
+#define PAL_CACHE_TYPE_DATA 2 /* Data or unified cache */
+#define PAL_CACHE_TYPE_INSTRUCTION_DATA 3 /* Both Data & Instruction */
+
+
+#define PAL_CACHE_FLUSH_NO_INVALIDATE 0 /* Don't invalidate clean lines */
+#define PAL_CACHE_FLUSH_INVALIDATE 1 /* Invalidate clean lines */
+
+/* Processor cache line size in bytes */
+typedef int pal_cache_line_size_t;
+
+/* Processor cache line state */
+typedef u64 pal_cache_line_state_t;
+#define PAL_CACHE_LINE_STATE_INVALID 0 /* Invalid */
+#define PAL_CACHE_LINE_STATE_SHARED 1 /* Shared */
+#define PAL_CACHE_LINE_STATE_EXCLUSIVE 2 /* Exclusive */
+#define PAL_CACHE_LINE_STATE_MODIFIED 3 /* Modified */
+
+typedef struct pal_freq_ratio {
+ u64 den : 32, num : 32; /* numerator & denominator */
+} itc_ratio, proc_ratio;
+
+typedef union pal_cache_config_info_1_s {
+ struct {
+ u64 u : 1, /* 0 Unified cache ? */
+ at : 2, /* 2-1 Cache mem attr*/
+ reserved : 5, /* 7-3 Reserved */
+ associativity : 8, /* 16-8 Associativity*/
+ line_size : 8, /* 23-17 Line size */
+ stride : 8, /* 31-24 Stride */
+ store_latency : 8, /*39-32 Store latency*/
+ load_latency : 8, /* 47-40 Load latency*/
+ store_hints : 8, /* 55-48 Store hints*/
+ load_hints : 8; /* 63-56 Load hints */
+ } pcci1_bits;
+ u64 pcci1_data;
+} pal_cache_config_info_1_t;
+
+typedef union pal_cache_config_info_2_s {
+ struct {
+ u64 cache_size : 32, /*cache size in bytes*/
+
+
+ alias_boundary : 8, /* 39-32 aliased addr
+ * separation for max
+ * performance.
+ */
+ tag_ls_bit : 8, /* 47-40 LSb of addr*/
+ tag_ms_bit : 8, /* 55-48 MSb of addr*/
+ reserved : 8; /* 63-56 Reserved */
+ } pcci2_bits;
+ u64 pcci2_data;
+} pal_cache_config_info_2_t;
+
+
+typedef struct pal_cache_config_info_s {
+ pal_status_t pcci_status;
+ pal_cache_config_info_1_t pcci_info_1;
+ pal_cache_config_info_2_t pcci_info_2;
+ u64 pcci_reserved;
+} pal_cache_config_info_t;
+
+#define pcci_ld_hint pcci_info_1.pcci1_bits.load_hints
+#define pcci_st_hint pcci_info_1.pcci1_bits.store_hints
+#define pcci_ld_latency pcci_info_1.pcci1_bits.load_latency
+#define pcci_st_latency pcci_info_1.pcci1_bits.store_latency
+#define pcci_stride pcci_info_1.pcci1_bits.stride
+#define pcci_line_size pcci_info_1.pcci1_bits.line_size
+#define pcci_assoc pcci_info_1.pcci1_bits.associativity
+#define pcci_cache_attr pcci_info_1.pcci1_bits.at
+#define pcci_unified pcci_info_1.pcci1_bits.u
+#define pcci_tag_msb pcci_info_2.pcci2_bits.tag_ms_bit
+#define pcci_tag_lsb pcci_info_2.pcci2_bits.tag_ls_bit
+#define pcci_alias_boundary pcci_info_2.pcci2_bits.alias_boundary
+#define pcci_cache_size pcci_info_2.pcci2_bits.cache_size
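+
+/*
+ * Illustrative use of the accessor macros above (helper name invented):
+ * they are member paths, so they are applied directly to a
+ * pal_cache_config_info_t object.
+ */
+static inline int
+ia64_example_cache_is_unified (pal_cache_config_info_t *cci)
+{
+	return cci->pcci_unified != 0;
+}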
+
+
+
+/* Possible values for cache attributes */
+
+#define PAL_CACHE_ATTR_WT 0 /* Write through cache */
+#define PAL_CACHE_ATTR_WB 1 /* Write back cache */
+#define PAL_CACHE_ATTR_WT_OR_WB 2 /* Either write thru or write
+ * back depending on TLB
+ * memory attributes
+ */
+
+
+/* Possible values for cache hints */
+
+#define PAL_CACHE_HINT_TEMP_1 0 /* Temporal level 1 */
+#define PAL_CACHE_HINT_NTEMP_1 1 /* Non-temporal level 1 */
+#define PAL_CACHE_HINT_NTEMP_ALL 3 /* Non-temporal all levels */
+
+/* Processor cache protection information */
+typedef union pal_cache_protection_element_u {
+ u32 pcpi_data;
+ struct {
+ u32 data_bits : 8, /* # data bits covered by
+ * each unit of protection
+ */
+
+ tagprot_lsb : 6, /* Least -do- */
+ tagprot_msb : 6, /* Most Sig. tag address
+ * bit that this
+ * protection covers.
+ */
+ prot_bits : 6, /* # of protection bits */
+ method : 4, /* Protection method */
+ t_d : 2; /* Indicates which part
+ * of the cache this
+ * protection encoding
+ * applies to.
+ */
+ } pcp_info;
+} pal_cache_protection_element_t;
+
+#define pcpi_cache_prot_part pcp_info.t_d
+#define pcpi_prot_method pcp_info.method
+#define pcpi_prot_bits pcp_info.prot_bits
+#define pcpi_tagprot_msb pcp_info.tagprot_msb
+#define pcpi_tagprot_lsb pcp_info.tagprot_lsb
+#define pcpi_data_bits pcp_info.data_bits
+
+/* Processor cache part encodings */
+#define PAL_CACHE_PROT_PART_DATA 0 /* Data protection */
+#define PAL_CACHE_PROT_PART_TAG 1 /* Tag protection */
+#define PAL_CACHE_PROT_PART_TAG_DATA 2 /* Tag+data protection (tag is
+ * more significant )
+ */
+#define PAL_CACHE_PROT_PART_DATA_TAG 3 /* Data+tag protection (data is
+ * more significant )
+ */
+#define PAL_CACHE_PROT_PART_MAX 6
+
+
+typedef struct pal_cache_protection_info_s {
+ pal_status_t pcpi_status;
+ pal_cache_protection_element_t pcp_info[PAL_CACHE_PROT_PART_MAX];
+} pal_cache_protection_info_t;
+
+
+/* Processor cache protection method encodings */
+#define PAL_CACHE_PROT_METHOD_NONE 0 /* No protection */
+#define PAL_CACHE_PROT_METHOD_ODD_PARITY 1 /* Odd parity */
+#define PAL_CACHE_PROT_METHOD_EVEN_PARITY 2 /* Even parity */
+#define PAL_CACHE_PROT_METHOD_ECC 3 /* ECC protection */
+
+
+/* Processor cache line identification in the hierarchy */
+typedef union pal_cache_line_id_u {
+ u64 pclid_data;
+ struct {
+ u64 cache_type : 8, /* 7-0 cache type */
+ level : 8, /* 15-8 level of the
+ * cache in the
+ * hierarchy.
+ */
+ way : 8, /* 23-16 way in the set
+ */
+ part : 8, /* 31-24 part of the
+ * cache
+ */
+ reserved : 32; /* 63-32 is reserved*/
+ } pclid_info_read;
+ struct {
+ u64 cache_type : 8, /* 7-0 cache type */
+ level : 8, /* 15-8 level of the
+ * cache in the
+ * hierarchy.
+ */
+ way : 8, /* 23-16 way in the set
+ */
+ part : 8, /* 31-24 part of the
+ * cache
+ */
+ mesi : 8, /* 39-32 cache line
+ * state
+ */
+ start : 8, /* 47-40 lsb of data to
+ * invert
+ */
+ length : 8, /* 55-48 #bits to
+ * invert
+ */
+ trigger : 8; /* 63-56 Trigger error
+ * by doing a load
+ * after the write
+ */
+
+ } pclid_info_write;
+} pal_cache_line_id_u_t;
+
+#define pclid_read_part pclid_info_read.part
+#define pclid_read_way pclid_info_read.way
+#define pclid_read_level pclid_info_read.level
+#define pclid_read_cache_type pclid_info_read.cache_type
+
+#define pclid_write_trigger pclid_info_write.trigger
+#define pclid_write_length pclid_info_write.length
+#define pclid_write_start pclid_info_write.start
+#define pclid_write_mesi pclid_info_write.mesi
+#define pclid_write_part pclid_info_write.part
+#define pclid_write_way pclid_info_write.way
+#define pclid_write_level pclid_info_write.level
+#define pclid_write_cache_type pclid_info_write.cache_type
+
+/* Processor cache line part encodings */
+#define PAL_CACHE_LINE_ID_PART_DATA 0 /* Data */
+#define PAL_CACHE_LINE_ID_PART_TAG 1 /* Tag */
+#define PAL_CACHE_LINE_ID_PART_DATA_PROT 2 /* Data protection */
+#define PAL_CACHE_LINE_ID_PART_TAG_PROT 3 /* Tag protection */
+#define PAL_CACHE_LINE_ID_PART_DATA_TAG_PROT 4 /* Data+tag
+ * protection
+ */
+typedef struct pal_cache_line_info_s {
+ pal_status_t pcli_status; /* Return status of the read cache line
+ * info call.
+ */
+ u64 pcli_data; /* 64-bit data, tag, protection bits .. */
+ u64 pcli_data_len; /* data length in bits */
+ pal_cache_line_state_t pcli_cache_line_state; /* mesi state */
+
+} pal_cache_line_info_t;
+
+
+/* Machine Check related crap */
+
+/* Pending event status bits */
+typedef u64 pal_mc_pending_events_t;
+
+#define PAL_MC_PENDING_MCA (1 << 0)
+#define PAL_MC_PENDING_INIT (1 << 1)
+
+/* Error information type */
+typedef u64 pal_mc_info_index_t;
+
+#define PAL_MC_INFO_PROCESSOR 0 /* Processor */
+#define PAL_MC_INFO_CACHE_CHECK 1 /* Cache check */
+#define PAL_MC_INFO_TLB_CHECK 2 /* Tlb check */
+#define PAL_MC_INFO_BUS_CHECK 3 /* Bus check */
+#define PAL_MC_INFO_REQ_ADDR 4 /* Requestor address */
+#define PAL_MC_INFO_RESP_ADDR 5 /* Responder address */
+#define PAL_MC_INFO_TARGET_ADDR 6 /* Target address */
+#define PAL_MC_INFO_IMPL_DEP 7 /* Implementation
+ * dependent
+ */
+
+
+typedef struct pal_process_state_info_s {
+ u64 reserved1 : 2,
+ rz : 1, /* PAL_CHECK processor
+ * rendezvous
+ * successful.
+ */
+
+ ra : 1, /* PAL_CHECK attempted
+ * a rendezvous.
+ */
+ me : 1, /* Distinct multiple
+ * errors occurred
+ */
+
+ mn : 1, /* Min. state save
+ * area has been
+ * registered with PAL
+ */
+
+ sy : 1, /* Storage integrity
+ * synched
+ */
+
+
+ co : 1, /* Continuable */
+ ci : 1, /* MC isolated */
+ us : 1, /* Uncontained storage
+ * damage.
+ */
+
+
+ hd : 1, /* Non-essential hw
+ * lost (no loss of
+ * functionality)
+ * causing the
+ * processor to run in
+ * degraded mode.
+ */
+
+ tl : 1, /* 1 => MC occurred
+ * after an instr was
+ * executed but before
+ * the trap that
+ * resulted from instr
+ * execution was
+ * generated.
+ * (Trap Lost )
+ */
+ op : 3, /* Operation that
+ * caused the machine
+ * check
+ */
+
+ dy : 1, /* Processor dynamic
+ * state valid
+ */
+
+
+ in : 1, /* 0 = MC, 1 = INIT */
+ rs : 1, /* RSE valid */
+ cm : 1, /* MC corrected */
+ ex : 1, /* MC is expected */
+ cr : 1, /* Control regs valid*/
+ pc : 1, /* Perf cntrs valid */
+ dr : 1, /* Debug regs valid */
+ tr : 1, /* Translation regs
+ * valid
+ */
+ rr : 1, /* Region regs valid */
+ ar : 1, /* App regs valid */
+ br : 1, /* Branch regs valid */
+ pr : 1, /* Predicate registers
+ * valid
+ */
+
+ fp : 1, /* fp registers valid*/
+ b1 : 1, /* Preserved bank one
+ * general registers
+ * are valid
+ */
+ b0 : 1, /* Preserved bank zero
+ * general registers
+ * are valid
+ */
+ gr : 1, /* General registers
+ * are valid
+ * (excl. banked regs)
+ */
+ dsize : 16, /* size of dynamic
+ * state returned
+ * by the processor
+ */
+
+ reserved2 : 12,
+ cc : 1, /* Cache check */
+ tc : 1, /* TLB check */
+ bc : 1, /* Bus check */
+ uc : 1; /* Unknown check */
+
+} pal_processor_state_info_t;
+
+typedef struct pal_cache_check_info_s {
+ u64 reserved1 : 16,
+ way : 5, /* Way in which the
+ * error occurred
+ */
+ reserved2 : 1,
+ mc : 1, /* Machine check corrected */
+ tv : 1, /* Target address
+ * structure is valid
+ */
+
+ wv : 1, /* Way field valid */
+ op : 3, /* Type of cache
+ * operation that
+ * caused the machine
+ * check.
+ */
+
+ dl : 1, /* Failure in data part
+ * of cache line
+ */
+ tl : 1, /* Failure in tag part
+ * of cache line
+ */
+ dc : 1, /* Failure in dcache */
+ ic : 1, /* Failure in icache */
+ index : 24, /* Cache line index */
+ mv : 1, /* mesi valid */
+ mesi : 3, /* Cache line state */
+ level : 4; /* Cache level */
+
+} pal_cache_check_info_t;
+
+typedef struct pal_tlb_check_info_s {
+
+ u64 tr_slot : 8, /* Slot# of TR where
+ * error occurred
+ */
+ reserved2 : 8,
+ dtr : 1, /* Fail in data TR */
+ itr : 1, /* Fail in inst TR */
+ dtc : 1, /* Fail in data TC */
+ itc : 1, /* Fail in inst. TC */
+ mc : 1, /* Machine check corrected */
+ reserved1 : 43;
+
+} pal_tlb_check_info_t;
+
+typedef struct pal_bus_check_info_s {
+ u64 size : 5, /* Xaction size*/
+ ib : 1, /* Internal bus error */
+ eb : 1, /* External bus error */
+ cc : 1, /* Error occurred
+ * during cache-cache
+ * transfer.
+ */
+ type : 8, /* Bus xaction type*/
+ sev : 5, /* Bus error severity*/
+ tv : 1, /* Targ addr valid */
+ rp : 1, /* Resp addr valid */
+ rq : 1, /* Req addr valid */
+ bsi : 8, /* Bus error status
+ * info
+ */
+ mc : 1, /* Machine check corrected */
+ reserved1 : 31;
+} pal_bus_check_info_t;
+
+typedef union pal_mc_error_info_u {
+ u64 pmei_data;
+ pal_processor_state_info_t pme_processor;
+ pal_cache_check_info_t pme_cache;
+ pal_tlb_check_info_t pme_tlb;
+ pal_bus_check_info_t pme_bus;
+} pal_mc_error_info_t;
+
+#define pmci_proc_unknown_check pme_processor.uc
+#define pmci_proc_bus_check pme_processor.bc
+#define pmci_proc_tlb_check pme_processor.tc
+#define pmci_proc_cache_check pme_processor.cc
+#define pmci_proc_dynamic_state_size pme_processor.dsize
+#define pmci_proc_gpr_valid pme_processor.gr
+#define pmci_proc_preserved_bank0_gpr_valid pme_processor.b0
+#define pmci_proc_preserved_bank1_gpr_valid pme_processor.b1
+#define pmci_proc_fp_valid pme_processor.fp
+#define pmci_proc_predicate_regs_valid pme_processor.pr
+#define pmci_proc_branch_regs_valid pme_processor.br
+#define pmci_proc_app_regs_valid pme_processor.ar
+#define pmci_proc_region_regs_valid pme_processor.rr
+#define pmci_proc_translation_regs_valid pme_processor.tr
+#define pmci_proc_debug_regs_valid pme_processor.dr
+#define pmci_proc_perf_counters_valid pme_processor.pc
+#define pmci_proc_control_regs_valid pme_processor.cr
+#define pmci_proc_machine_check_expected pme_processor.ex
+#define pmci_proc_machine_check_corrected pme_processor.cm
+#define pmci_proc_rse_valid pme_processor.rs
+#define pmci_proc_machine_check_or_init pme_processor.in
+#define pmci_proc_dynamic_state_valid pme_processor.dy
+#define pmci_proc_operation pme_processor.op
+#define pmci_proc_trap_lost pme_processor.tl
+#define pmci_proc_hardware_damage pme_processor.hd
+#define pmci_proc_uncontained_storage_damage pme_processor.us
+#define pmci_proc_machine_check_isolated pme_processor.ci
+#define pmci_proc_continuable pme_processor.co
+#define pmci_proc_storage_intergrity_synced pme_processor.sy
+#define pmci_proc_min_state_save_area_regd pme_processor.mn
+#define pmci_proc_distinct_multiple_errors pme_processor.me
+#define pmci_proc_pal_attempted_rendezvous pme_processor.ra
+#define pmci_proc_pal_rendezvous_complete pme_processor.rz
+
+
+#define pmci_cache_level pme_cache.level
+#define pmci_cache_line_state pme_cache.mesi
+#define pmci_cache_line_state_valid pme_cache.mv
+#define pmci_cache_line_index pme_cache.index
+#define pmci_cache_instr_cache_fail pme_cache.ic
+#define pmci_cache_data_cache_fail pme_cache.dc
+#define pmci_cache_line_tag_fail pme_cache.tl
+#define pmci_cache_line_data_fail pme_cache.dl
+#define pmci_cache_operation pme_cache.op
+#define pmci_cache_way_valid pme_cache.wv
+#define pmci_cache_target_address_valid pme_cache.tv
+#define pmci_cache_way pme_cache.way
+#define pmci_cache_mc pme_cache.mc
+
+#define pmci_tlb_instr_translation_cache_fail pme_tlb.itc
+#define pmci_tlb_data_translation_cache_fail pme_tlb.dtc
+#define pmci_tlb_instr_translation_reg_fail pme_tlb.itr
+#define pmci_tlb_data_translation_reg_fail pme_tlb.dtr
+#define pmci_tlb_translation_reg_slot pme_tlb.tr_slot
+#define pmci_tlb_mc pme_tlb.mc
+
+#define pmci_bus_status_info pme_bus.bsi
+#define pmci_bus_req_address_valid pme_bus.rq
+#define pmci_bus_resp_address_valid pme_bus.rp
+#define pmci_bus_target_address_valid pme_bus.tv
+#define pmci_bus_error_severity pme_bus.sev
+#define pmci_bus_transaction_type pme_bus.type
+#define pmci_bus_cache_cache_transfer pme_bus.cc
+#define pmci_bus_transaction_size pme_bus.size
+#define pmci_bus_internal_error pme_bus.ib
+#define pmci_bus_external_error pme_bus.eb
+#define pmci_bus_mc pme_bus.mc
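+
+/*
+ * Sketch (helper name invented): the pmci_* macros above are member paths
+ * into pal_mc_error_info_t.  Given a record fetched with
+ * PAL_MC_INFO_CACHE_CHECK, this pulls out the cache level the error was
+ * reported at.
+ */
+static inline u64
+ia64_example_mc_cache_level (pal_mc_error_info_t info)
+{
+	return info.pmci_cache_level;
+}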
+
+
+typedef struct pal_min_state_area_s {
+ u64 pmsa_reserved[26];
+ u64 pmsa_xfs;
+ u64 pmsa_xpsr;
+ u64 pmsa_xip;
+ u64 pmsa_rsc;
+ u64 pmsa_br0;
+ u64 pmsa_pr;
+ u64 pmsa_bank0_gr[16];
+ u64 pmsa_gr[16];
+ u64 pmsa_nat_bits;
+} pal_min_state_area_t;
+
+
+struct ia64_pal_retval {
+ /*
+ * A zero status value indicates call completed without error.
+ * A negative status value indicates reason of call failure.
+ * A positive status value indicates success but an
+ * informational value should be printed (e.g., "reboot for
+ * change to take effect").
+ */
+ s64 status;
+ u64 v0;
+ u64 v1;
+ u64 v2;
+};
+
+/*
+ * Note: Currently unused PAL arguments are generally labeled
+ * "reserved" so the value specified in the PAL documentation
+ * (generally 0) MUST be passed. Reserved parameters are not optional
+ * parameters.
+ */
+#ifdef __GCC_MULTIREG_RETVALS__
+ extern struct ia64_pal_retval ia64_pal_call_static (u64, u64, u64, u64);
+ /*
+ * If multi-register return values are returned according to the
+ * ia-64 calling convention, we can call ia64_pal_call_static
+ * directly.
+ */
+# define PAL_CALL(iprv,a0,a1,a2,a3) iprv = ia64_pal_call_static(a0,a1, a2, a3)
+#else
+ extern void ia64_pal_call_static (struct ia64_pal_retval *, u64, u64, u64, u64);
+ /*
+ * If multi-register return values are returned through an aggregate
+ * allocated in the caller, we need to use the stub implemented in
+ * sal-stub.S.
+ */
+# define PAL_CALL(iprv,a0,a1,a2,a3) ia64_pal_call_static(&iprv, a0, a1, a2, a3)
+#endif
+
+typedef int (*ia64_pal_handler) (u64, ...);
+extern ia64_pal_handler ia64_pal;
+extern void ia64_pal_handler_init (void *);
+
+extern pal_cache_config_info_t l0d_cache_config_info;
+extern pal_cache_config_info_t l0i_cache_config_info;
+extern pal_cache_config_info_t l1_cache_config_info;
+extern pal_cache_config_info_t l2_cache_config_info;
+
+extern pal_cache_protection_info_t l0d_cache_protection_info;
+extern pal_cache_protection_info_t l0i_cache_protection_info;
+extern pal_cache_protection_info_t l1_cache_protection_info;
+extern pal_cache_protection_info_t l2_cache_protection_info;
+
+extern pal_cache_config_info_t pal_cache_config_info_get(pal_cache_level_t,
+ pal_cache_type_t);
+
+extern pal_cache_protection_info_t pal_cache_protection_info_get(pal_cache_level_t,
+ pal_cache_type_t);
+
+
+extern void pal_error(int);
+
+
+/* Useful wrappers for the current list of pal procedures */
+
+typedef union pal_bus_features_u {
+ u64 pal_bus_features_val;
+ struct {
+ u64 pbf_reserved1 : 29;
+ u64 pbf_req_bus_parking : 1;
+ u64 pbf_bus_lock_mask : 1;
+ u64 pbf_enable_half_xfer_rate : 1;
+ u64 pbf_reserved2 : 22;
+ u64 pbf_disable_xaction_queueing : 1;
+ u64 pbf_disable_resp_err_check : 1;
+ u64 pbf_disable_berr_check : 1;
+ u64 pbf_disable_bus_req_internal_err_signal : 1;
+ u64 pbf_disable_bus_req_berr_signal : 1;
+ u64 pbf_disable_bus_init_event_check : 1;
+ u64 pbf_disable_bus_init_event_signal : 1;
+ u64 pbf_disable_bus_addr_err_check : 1;
+ u64 pbf_disable_bus_addr_err_signal : 1;
+ u64 pbf_disable_bus_data_err_check : 1;
+ } pal_bus_features_s;
+} pal_bus_features_u_t;
+
+extern void pal_bus_features_print (u64);
+
+/* Provide information about configurable processor bus features */
+extern inline s64
+ia64_pal_bus_get_features (pal_bus_features_u_t *features_avail,
+ pal_bus_features_u_t *features_status,
+ pal_bus_features_u_t *features_control)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_BUS_GET_FEATURES, 0, 0, 0);
+ if (features_avail)
+ features_avail->pal_bus_features_val = iprv.v0;
+ if (features_status)
+ features_status->pal_bus_features_val = iprv.v1;
+ if (features_control)
+ features_control->pal_bus_features_val = iprv.v2;
+ return iprv.status;
+}
+/* Enables/disables specific processor bus features */
+extern inline s64
+ia64_pal_bus_set_features (pal_bus_features_u_t feature_select)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_BUS_SET_FEATURES, feature_select.pal_bus_features_val, 0, 0);
+ return iprv.status;
+}
+
+/* Flush the processor instruction or data caches */
+extern inline s64
+ia64_pal_cache_flush (u64 cache_type, u64 invalidate, u64 plat_ack)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_CACHE_FLUSH, cache_type, invalidate, plat_ack);
+ return iprv.status;
+}
+
+
+/* Initialize the processor controlled caches */
+extern inline s64
+ia64_pal_cache_init (u64 level, u64 cache_type, u64 restrict)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_CACHE_INIT, level, cache_type, restrict);
+ return iprv.status;
+}
+
+/* Initialize the tags and data of a data or unified cache line of
+ * processor controlled cache to known values without the availability
+ * of backing memory.
+ */
+extern inline s64
+ia64_pal_cache_line_init (u64 physical_addr, u64 data_value)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_CACHE_LINE_INIT, physical_addr, data_value, 0);
+ return iprv.status;
+}
+
+
+/* Read the data and tag of a processor controlled cache line for diags */
+extern inline s64
+ia64_pal_cache_read (pal_cache_line_id_u_t line_id, u64 physical_addr)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_CACHE_READ, line_id.pclid_data, physical_addr, 0);
+ return iprv.status;
+}
+
+/* Return summary information about the hierarchy of caches controlled by the processor */
+extern inline s64
+ia64_pal_cache_summary (u64 *cache_levels, u64 *unique_caches)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_CACHE_SUMMARY, 0, 0, 0);
+ if (cache_levels)
+ *cache_levels = iprv.v0;
+ if (unique_caches)
+ *unique_caches = iprv.v1;
+ return iprv.status;
+}
+
+/* Write the data and tag of a processor-controlled cache line for diags */
+extern inline s64
+ia64_pal_cache_write (pal_cache_line_id_u_t line_id, u64 physical_addr, u64 data)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_CACHE_WRITE, line_id.pclid_data, physical_addr, data);
+ return iprv.status;
+}
+
+
+/* Return the parameters needed to copy relocatable PAL procedures from ROM to memory */
+extern inline s64
+ia64_pal_copy_info (u64 copy_type, u64 num_procs, u64 num_iopics,
+ u64 *buffer_size, u64 *buffer_align)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_COPY_INFO, copy_type, num_procs, num_iopics);
+ if (buffer_size)
+ *buffer_size = iprv.v0;
+ if (buffer_align)
+ *buffer_align = iprv.v1;
+ return iprv.status;
+}
+
+/* Copy relocatable PAL procedures from ROM to memory */
+extern inline s64
+ia64_pal_copy_pal (u64 target_addr, u64 alloc_size, u64 processor, u64 *pal_proc_offset)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_COPY_PAL, target_addr, alloc_size, processor);
+ if (pal_proc_offset)
+ *pal_proc_offset = iprv.v0;
+ return iprv.status;
+}
+
+/* Return the number of instruction and data debug register pairs */
+extern inline s64
+ia64_pal_debug_info (u64 *inst_regs, u64 *data_regs)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_DEBUG_INFO, 0, 0, 0);
+ if (inst_regs)
+ *inst_regs = iprv.v0;
+ if (data_regs)
+ *data_regs = iprv.v1;
+
+ return iprv.status;
+}
+
+#ifdef TBD
+/* Switch from IA64-system environment to IA-32 system environment */
+extern inline s64
+ia64_pal_enter_ia32_env (ia32_env1, ia32_env2, ia32_env3)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_ENTER_IA_32_ENV, ia32_env1, ia32_env2, ia32_env3);
+ return iprv.status;
+}
+#endif
+
+/* Get unique geographical address of this processor on its bus */
+extern inline s64
+ia64_pal_fixed_addr (u64 *global_unique_addr)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_FIXED_ADDR, 0, 0, 0);
+ if (global_unique_addr)
+ *global_unique_addr = iprv.v0;
+ return iprv.status;
+}
+
+/* Get base frequency of the platform if generated by the processor */
+extern inline s64
+ia64_pal_freq_base (u64 *platform_base_freq)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_FREQ_BASE, 0, 0, 0);
+ if (platform_base_freq)
+ *platform_base_freq = iprv.v0;
+ return iprv.status;
+}
+
+/*
+ * Get the ratios of the processor frequency, bus frequency and interval timer
+ * to the base frequency of the platform
+ */
+extern inline s64
+ia64_pal_freq_ratios (struct pal_freq_ratio *proc_ratio, struct pal_freq_ratio *bus_ratio,
+ struct pal_freq_ratio *itc_ratio)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_FREQ_RATIOS, 0, 0, 0);
+ if (proc_ratio)
+ *(u64 *)proc_ratio = iprv.v0;
+ if (bus_ratio)
+ *(u64 *)bus_ratio = iprv.v1;
+ if (itc_ratio)
+ *(u64 *)itc_ratio = iprv.v2;
+ return iprv.status;
+}
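+
+/*
+ * Illustrative sketch (helper name invented): a frequency ratio is reported
+ * as num/den, so the processor frequency can be derived from the platform
+ * base frequency as base * num / den.  Returns 0 if either PAL call fails.
+ */
+static inline u64
+ia64_example_proc_freq (void)
+{
+	struct pal_freq_ratio proc_ratio;
+	u64 base_freq;
+
+	if (ia64_pal_freq_base(&base_freq) != PAL_STATUS_SUCCESS)
+		return 0;
+	if (ia64_pal_freq_ratios(&proc_ratio, 0, 0) != PAL_STATUS_SUCCESS)
+		return 0;
+	return base_freq * proc_ratio.num / proc_ratio.den;
+}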
+
+/* Make the processor enter HALT or one of the implementation dependent low
+ * power states where prefetching and execution are suspended and cache and
+ * TLB coherency is not maintained.
+ */
+extern inline s64
+ia64_pal_halt (u64 halt_state)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_HALT, halt_state, 0, 0);
+ return iprv.status;
+}
+typedef union pal_power_mgmt_info_u {
+ u64 ppmi_data;
+ struct {
+ u64 exit_latency : 16,
+ entry_latency : 16,
+ power_consumption : 32;
+ } pal_power_mgmt_info_s;
+} pal_power_mgmt_info_u_t;
+
+/* Return information about processor's optional power management capabilities. */
+extern inline s64
+ia64_pal_halt_info (pal_power_mgmt_info_u_t *power_buf)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_HALT_INFO, (unsigned long) power_buf, 0, 0);
+ return iprv.status;
+}
+
+/* Cause the processor to enter LIGHT HALT state, where prefetching and execution are
+ * suspended, but cache and TLB coherency is maintained.
+ */
+extern inline s64
+ia64_pal_halt_light (void)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_HALT_LIGHT, 0, 0, 0);
+ return iprv.status;
+}
+
+/* Clear all the processor error logging registers and reset the indicator that allows
+ * the error logging registers to be written. This procedure also checks the pending
+ * machine check bit and pending INIT bit and reports their states.
+ */
+extern inline s64
+ia64_pal_mc_clear_log (u64 *pending_vector)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_MC_CLEAR_LOG, 0, 0, 0);
+ if (pending_vector)
+ *pending_vector = iprv.v0;
+ return iprv.status;
+}
+
+/* Ensure that all outstanding transactions in a processor are completed or that any
+ * MCA due to these outstanding transactions is taken.
+ */
+extern inline s64
+ia64_pal_mc_drain (void)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_MC_DRAIN, 0, 0, 0);
+ return iprv.status;
+}
+
+/* Return the machine check dynamic processor state */
+extern inline s64
+ia64_pal_mc_dynamic_state (u64 offset, u64 *size, u64 *pds)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_MC_DYNAMIC_STATE, offset, 0, 0);
+ if (size)
+ *size = iprv.v0;
+ if (pds)
+ *pds = iprv.v1;
+ return iprv.status;
+}
+
+/* Return processor machine check information */
+extern inline s64
+ia64_pal_mc_error_info (u64 info_index, u64 type_index, u64 *size, u64 *error_info)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_MC_ERROR_INFO, info_index, type_index, 0);
+ if (size)
+ *size = iprv.v0;
+ if (error_info)
+ *error_info = iprv.v1;
+ return iprv.status;
+}
+
+/* Inform PALE_CHECK whether a machine check is expected so that PALE_CHECK will not
+ * attempt to correct any expected machine checks.
+ */
+extern inline s64
+ia64_pal_mc_expected (u64 expected, u64 *previous)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_MC_EXPECTED, expected, 0, 0);
+ if (previous)
+ *previous = iprv.v0;
+ return iprv.status;
+}
+
+/* Register a platform dependent location with PAL to which it can save
+ * minimal processor state in the event of a machine check or initialization
+ * event.
+ */
+extern inline s64
+ia64_pal_mc_register_mem (u64 physical_addr)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_MC_REGISTER_MEM, physical_addr, 0, 0);
+ return iprv.status;
+}
+
+/* Restore minimal architectural processor state, set CMC interrupt if necessary
+ * and resume execution
+ */
+extern inline s64
+ia64_pal_mc_resume (u64 set_cmci, u64 save_ptr)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_MC_RESUME, set_cmci, save_ptr, 0);
+ return iprv.status;
+}
+
+/* Return the memory attributes implemented by the processor */
+extern inline s64
+ia64_pal_mem_attrib (u64 *mem_attrib)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_MEM_ATTRIB, 0, 0, 0);
+ if (mem_attrib)
+ *mem_attrib = iprv.v0;
+ return iprv.status;
+}
+
+/* Return the amount of memory needed for second phase of processor
+ * self-test and the required alignment of memory.
+ */
+extern inline s64
+ia64_pal_mem_for_test (u64 *bytes_needed, u64 *alignment)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_MEM_FOR_TEST, 0, 0, 0);
+ if (bytes_needed)
+ *bytes_needed = iprv.v0;
+ if (alignment)
+ *alignment = iprv.v1;
+ return iprv.status;
+}
+
+typedef union pal_perf_mon_info_u {
+ u64 ppmi_data;
+ struct {
+ u64 generic : 8,
+ width : 8,
+ cycles : 8,
+ retired : 8,
+ reserved : 32;
+ } pal_perf_mon_info_s;
+} pal_perf_mon_info_u_t;
+
+/* Return the performance monitor information about what can be counted
+ * and how to configure the monitors to count the desired events.
+ */
+extern inline s64
+ia64_pal_perf_mon_info (u64 *pm_buffer, pal_perf_mon_info_u_t *pm_info)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_PERF_MON_INFO, (unsigned long) pm_buffer, 0, 0);
+ if (pm_info)
+ pm_info->ppmi_data = iprv.v0;
+ return iprv.status;
+}
+
+/* Specifies the physical address of the processor interrupt block
+ * and I/O port space.
+ */
+extern inline s64
+ia64_pal_platform_addr (u64 type, u64 physical_addr)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_PLATFORM_ADDR, type, physical_addr, 0);
+ return iprv.status;
+}
+
+/* Set the SAL PMI entrypoint in memory */
+extern inline s64
+ia64_pal_pmi_entrypoint (u64 sal_pmi_entry_addr)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_PMI_ENTRYPOINT, sal_pmi_entry_addr, 0, 0);
+ return iprv.status;
+}
+
+#ifdef TBD
+struct pal_features_s;
+/* Provide information about configurable processor features */
+extern inline s64
+ia64_pal_proc_get_features (struct pal_features_s *features_avail,
+ struct pal_features_s *features_status,
+ struct pal_features_s *features_control)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_PROC_GET_FEATURES, 0, 0, 0);
+ return iprv.status;
+}
+/* Enable/disable processor dependent features */
+extern inline s64
+ia64_pal_proc_set_features (feature_select)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_PROC_SET_FEATURES, feature_select, 0, 0);
+ return iprv.status;
+}
+
+#endif
+/*
+ * Put everything in a struct so we avoid the global offset table whenever
+ * possible.
+ */
+typedef struct ia64_ptce_info_s {
+ u64 base;
+ u32 count[2];
+ u32 stride[2];
+} ia64_ptce_info_t;
+
+/* Return the information required for the architected loop used to purge
+ * (initialize) the entire TC
+ */
+extern inline s64
+ia64_get_ptce (ia64_ptce_info_t *ptce)
+{
+ struct ia64_pal_retval iprv;
+
+ if (!ptce)
+ return -1;
+
+ PAL_CALL(iprv, PAL_PTCE_INFO, 0, 0, 0);
+ if (iprv.status == 0) {
+ ptce->base = iprv.v0;
+ ptce->count[0] = iprv.v1 >> 32;
+ ptce->count[1] = iprv.v1 & 0xffffffff;
+ ptce->stride[0] = iprv.v2 >> 32;
+ ptce->stride[1] = iprv.v2 & 0xffffffff;
+ }
+ return iprv.status;
+}
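+
+/*
+ * Illustrative sketch of the architected purge loop referred to above
+ * (function name invented; the real loop lives in the TLB flush code):
+ * walk the two-level grid described by count[]/stride[] and issue ptc.e
+ * on every address in it.
+ */
+static inline void
+ia64_example_ptce_loop (ia64_ptce_info_t *ptce)
+{
+	unsigned long addr = ptce->base;
+	u32 i, j;
+
+	for (i = 0; i < ptce->count[0]; ++i) {
+		for (j = 0; j < ptce->count[1]; ++j) {
+			asm volatile ("ptc.e %0" :: "r" (addr) : "memory");
+			addr += ptce->stride[1];
+		}
+		addr += ptce->stride[0];
+	}
+}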
+
+/* Return info about implemented application and control registers. */
+extern inline s64
+ia64_pal_register_info (u64 info_request, u64 *reg_info_1, u64 *reg_info_2)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_REGISTER_INFO, info_request, 0, 0);
+ if (reg_info_1)
+ *reg_info_1 = iprv.v0;
+ if (reg_info_2)
+ *reg_info_2 = iprv.v1;
+ return iprv.status;
+}
+
+typedef union pal_hints_u {
+ u64 ph_data;
+ struct {
+ u64 si : 1,
+ li : 1,
+ reserved : 62;
+ } pal_hints_s;
+} pal_hints_u_t;
+
+/* Return information about the register stack and RSE for this processor
+ * implementation.
+ */
+extern inline s64
+ia64_pal_rse_info (u64 *num_phys_stacked, pal_hints_u_t *hints)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_RSE_INFO, 0, 0, 0);
+ if (num_phys_stacked)
+ *num_phys_stacked = iprv.v0;
+ if (hints)
+ hints->ph_data = iprv.v1;
+ return iprv.status;
+}
+
+/* Cause the processor to enter SHUTDOWN state, where prefetching and execution are
+ * suspended, but cache and TLB coherency is maintained.
+ * This is usually called in IA-32 mode.
+ */
+extern inline s64
+ia64_pal_shutdown (void)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_SHUTDOWN, 0, 0, 0);
+ return iprv.status;
+}
+
+/* Perform the second phase of processor self-test. */
+extern inline s64
+ia64_pal_test_proc (u64 test_addr, u64 test_size, u64 attributes, u64 *self_test_state)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_TEST_PROC, test_addr, test_size, attributes);
+ if (self_test_state)
+ *self_test_state = iprv.v0;
+ return iprv.status;
+}
+
+typedef union pal_version_u {
+ u64 pal_version_val;
+ struct {
+ u64 pv_pal_b_rev : 8;
+ u64 pv_pal_b_model : 8;
+ u64 pv_reserved1 : 8;
+ u64 pv_pal_vendor : 8;
+ u64 pv_pal_a_rev : 8;
+ u64 pv_pal_a_model : 8;
+ u64 pv_reserved2 : 16;
+ } pal_version_s;
+} pal_version_u_t;
+
+
+/* Return PAL version information */
+extern inline s64
+ia64_pal_version (pal_version_u_t *pal_version)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_VERSION, 0, 0, 0);
+ if (pal_version)
+ pal_version->pal_version_val = iprv.v0;
+ return iprv.status;
+}
+
+typedef union pal_tc_info_u {
+ u64 pti_val;
+ struct {
+ u64 num_sets : 8,
+ associativity : 8,
+ num_entries : 16,
+ pf : 1,
+ unified : 1,
+ reduce_tr : 1,
+ reserved : 29;
+ } pal_tc_info_s;
+} pal_tc_info_u_t;
+
+
+/* Return information about the virtual memory characteristics of the processor
+ * implementation.
+ */
+extern inline s64
+ia64_pal_vm_info (u64 tc_level, u64 tc_type, pal_tc_info_u_t *tc_info, u64 *tc_pages)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_VM_INFO, tc_level, tc_type, 0);
+ if (tc_info)
+ tc_info->pti_val = iprv.v0;
+ if (tc_pages)
+ *tc_pages = iprv.v1;
+ return iprv.status;
+}
+
+/* Get page size information about the virtual memory characteristics of the processor
+ * implementation.
+ */
+extern inline s64
+ia64_pal_vm_page_size (u64 *tr_pages, u64 *vw_pages)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_VM_PAGE_SIZE, 0, 0, 0);
+ if (tr_pages)
+ *tr_pages = iprv.v0;
+ if (vw_pages)
+ *vw_pages = iprv.v1;
+ return iprv.status;
+}
+
+typedef union pal_vm_info_1_u {
+ u64 pvi1_val;
+ struct {
+ u64 vw : 1,
+ phys_add_size : 7,
+ key_size : 16,
+ max_pkr : 8,
+ hash_tag_id : 8,
+ max_dtr_entry : 8,
+ max_itr_entry : 8,
+ max_unique_tcs : 8,
+ num_tc_levels : 8;
+ } pal_vm_info_1_s;
+} pal_vm_info_1_u_t;
+
+typedef union pal_vm_info_2_u {
+ u64 pvi2_val;
+ struct {
+ u64 impl_va_msb : 8,
+ rid_size : 8,
+ reserved : 48;
+ } pal_vm_info_2_s;
+} pal_vm_info_2_u_t;
+
+/* Get summary information about the virtual memory characteristics of the processor
+ * implementation.
+ */
+extern inline s64
+ia64_pal_vm_summary (pal_vm_info_1_u_t *vm_info_1, pal_vm_info_2_u_t *vm_info_2)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_VM_SUMMARY, 0, 0, 0);
+ if (vm_info_1)
+ vm_info_1->pvi1_val = iprv.v0;
+ if (vm_info_2)
+ vm_info_2->pvi2_val = iprv.v1;
+ return iprv.status;
+}
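+
+/*
+ * Caller-side sketch (helper name invented): fetch the number of TC levels
+ * implemented by this processor, returning 0 if the PAL call fails.  Shows
+ * the usual "status 0 means success" convention of the wrappers above.
+ */
+static inline u64
+ia64_example_num_tc_levels (void)
+{
+	pal_vm_info_1_u_t vm_info_1;
+	pal_vm_info_2_u_t vm_info_2;
+
+	if (ia64_pal_vm_summary(&vm_info_1, &vm_info_2) != PAL_STATUS_SUCCESS)
+		return 0;
+	return vm_info_1.pal_vm_info_1_s.num_tc_levels;
+}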
+
+typedef union pal_itr_valid_u {
+ u64 piv_val;
+ struct {
+ u64 access_rights_valid : 1,
+ priv_level_valid : 1,
+ dirty_bit_valid : 1,
+ mem_attr_valid : 1,
+ reserved : 60;
+ } pal_itr_valid_s;
+} pal_itr_valid_u_t;
+
+/* Read a translation register */
+extern inline s64
+ia64_pal_vm_tr_read (u64 reg_num, u64 tr_type, u64 tr_buffer, pal_itr_valid_u_t *itr_valid)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_VM_TR_READ, reg_num, tr_type, tr_buffer);
+ if (itr_valid)
+ itr_valid->piv_val = iprv.v0;
+ return iprv.status;
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_IA64_PAL_H */
--- /dev/null
+#ifndef _ASM_IA64_PARAM_H
+#define _ASM_IA64_PARAM_H
+
+/*
+ * Fundamental kernel parameters.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <linux/config.h>
+
+#ifdef CONFIG_IA64_HP_SIM
+/*
+ * Yeah, simulating stuff is slow, so let us catch some breath between
+ * timer interrupts...
+ */
+# define HZ 20
+#endif
+
+#ifdef CONFIG_IA64_DIG
+# ifdef CONFIG_IA64_SOFTSDV_HACKS
+# define HZ 20
+# else
+# define HZ 100
+# endif
+#endif
+
+#ifndef HZ
+# define HZ 1024
+#endif
+
+#define EXEC_PAGESIZE 65536
+
+#ifndef NGROUPS
+# define NGROUPS 32
+#endif
+
+#ifndef NOGROUP
+# define NOGROUP (-1)
+#endif
+
+#define MAXHOSTNAMELEN 64 /* max length of hostname */
+
+#endif /* _ASM_IA64_PARAM_H */
--- /dev/null
+#ifndef _ASM_IA64_PCI_H
+#define _ASM_IA64_PCI_H
+
+/*
+ * Can be used to override the logic in pci_scan_bus for skipping
+ * already-configured bus numbers - to be used for buggy BIOSes or
+ * architectures with incomplete PCI setup by the loader.
+ */
+#define pcibios_assign_all_busses() 0
+
+#define PCIBIOS_MIN_IO 0x1000
+#define PCIBIOS_MIN_MEM 0x10000000
+
+/*
+ * Dynamic DMA mapping API.
+ * IA-64 has everything mapped statically.
+ */
+
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/types.h>
+
+#include <asm/io.h>
+#include <asm/scatterlist.h>
+
+struct pci_dev;
+
+/*
+ * Allocate and map kernel buffer using consistent mode DMA for a device.
+ * hwdev should be a valid struct pci_dev pointer for PCI devices,
+ * NULL for PCI-like buses (ISA, EISA).
+ * Returns a non-NULL cpu-view pointer to the buffer if successful and
+ * sets *dma_handle to the PCI-side DMA address as well; otherwise
+ * *dma_handle is undefined.
+ */
+extern void *pci_alloc_consistent (struct pci_dev *hwdev, size_t size, dma_addr_t *dma_handle);
+
+/*
+ * Free and unmap a consistent DMA buffer.
+ * cpu_addr is what was returned from pci_alloc_consistent,
+ * size must be the same as what was passed into pci_alloc_consistent,
+ * and likewise dma_handle must be the same as what *dma_handle was set to.
+ *
+ * References to the memory and mappings associated with cpu_addr/dma_addr
+ * past this call are illegal.
+ */
+extern void pci_free_consistent (struct pci_dev *hwdev, size_t size,
+ void *vaddr, dma_addr_t dma_handle);
+
+/*
+ * Map a single buffer of the indicated size for DMA in streaming mode.
+ * The 32-bit bus address to use is returned.
+ *
+ * Once the device is given the dma address, the device owns this memory
+ * until either pci_unmap_single or pci_dma_sync_single is performed.
+ */
+extern inline dma_addr_t
+pci_map_single (struct pci_dev *hwdev, void *ptr, size_t size)
+{
+ return virt_to_bus(ptr);
+}
+
+/*
+ * Unmap a single streaming mode DMA translation. The dma_addr and size
+ * must match what was provided in a previous pci_map_single call. All
+ * other usages are undefined.
+ *
+ * After this call, reads by the cpu to the buffer are guaranteed to see
+ * whatever the device wrote there.
+ */
+extern inline void
+pci_unmap_single (struct pci_dev *hwdev, dma_addr_t dma_addr, size_t size)
+{
+ /* Nothing to do */
+}
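+
+/*
+ * Illustrative use of the streaming API (a sketch; "dev", "buf" and
+ * "len" are hypothetical):
+ *
+ *	dma_addr_t dma = pci_map_single(dev, buf, len);
+ *	... hand "dma" to the device and run the transfer ...
+ *	pci_unmap_single(dev, dma, len);
+ */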
+
+/*
+ * Map a set of buffers described by scatterlist in streaming
+ * mode for DMA. This is the scatter-gather version of the
+ * above pci_map_single interface. Here the scatter gather list
+ * elements are each tagged with the appropriate dma address
+ * and length. They are obtained via sg_dma_{address,length}(SG).
+ *
+ * NOTE: An implementation may be able to use a smaller number of
+ * DMA address/length pairs than there are SG table elements.
+ * (for example via virtual mapping capabilities)
+ * The routine returns the number of addr/length pairs actually
+ * used, at most nents.
+ *
+ * Device ownership issues as mentioned above for pci_map_single are
+ * the same here.
+ */
+extern inline int
+pci_map_sg (struct pci_dev *hwdev, struct scatterlist *sg, int nents)
+{
+ return nents;
+}
+
+/*
+ * Unmap a set of streaming mode DMA translations.
+ * Again, cpu read rules concerning calls here are the same as for
+ * pci_unmap_single() above.
+ */
+extern inline void
+pci_unmap_sg (struct pci_dev *hwdev, struct scatterlist *sg, int nents)
+{
+ /* Nothing to do */
+}
+
+/*
+ * Make physical memory consistent for a single
+ * streaming mode DMA translation after a transfer.
+ *
+ * If you perform a pci_map_single() but wish to interrogate the
+ * buffer using the cpu, yet do not wish to tear down the PCI dma
+ * mapping, you must call this function before doing so. At the
+ * next point you give the PCI dma address back to the card, the
+ * device again owns the buffer.
+ */
+extern inline void
+pci_dma_sync_single (struct pci_dev *hwdev, dma_addr_t dma_handle, size_t size)
+{
+ /* Nothing to do */
+}
+
+/*
+ * Make physical memory consistent for a set of streaming mode DMA
+ * translations after a transfer.
+ *
+ * The same as pci_dma_sync_single but for a scatter-gather list,
+ * same rules and usage.
+ */
+extern inline void
+pci_dma_sync_sg (struct pci_dev *hwdev, struct scatterlist *sg, int nelems)
+{
+ /* Nothing to do */
+}
+
+/* These macros should be used after a pci_map_sg call has been done
+ * to get bus addresses of each of the SG entries and their lengths.
+ * You should only work with the number of sg entries pci_map_sg
+ * returns, or alternatively stop on the first sg_dma_len(sg) which
+ * is 0.
+ */
+#define sg_dma_address(sg) (virt_to_bus((sg)->address))
+#define sg_dma_len(sg) ((sg)->length)
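+
+/*
+ * For example, a driver walking a mapped list might do (a sketch;
+ * "dev", "sg" and program_device() are hypothetical):
+ *
+ *	int i, n = pci_map_sg(dev, sg, nents);
+ *
+ *	for (i = 0; i < n && sg_dma_len(&sg[i]) != 0; i++)
+ *		program_device(sg_dma_address(&sg[i]), sg_dma_len(&sg[i]));
+ */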
+
+#endif /* _ASM_IA64_PCI_H */
--- /dev/null
+#ifndef _ASM_IA64_PGALLOC_H
+#define _ASM_IA64_PGALLOC_H
+
+/*
+ * This file contains the functions and defines necessary to allocate
+ * page tables.
+ *
+ * This hopefully works with any (fixed) ia-64 page-size, as defined
+ * in <asm/page.h> (currently 8192).
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 2000, Goutham Rao <goutham.rao@intel.com>
+ */
+
+#include <linux/config.h>
+
+#include <linux/threads.h>
+
+#include <asm/mmu_context.h>
+#include <asm/processor.h>
+
+/*
+ * Very stupidly, we used to get new pgd's and pmd's, init their contents
+ * to point to the NULL versions of the next level page table, later on
+ * completely re-init them the same way, then free them up. This wasted
+ * a lot of work and caused unnecessary memory traffic. How broken...
+ * We fix this by caching them.
+ */
+#define pgd_quicklist (my_cpu_data.pgd_quick)
+#define pmd_quicklist (my_cpu_data.pmd_quick)
+#define pte_quicklist (my_cpu_data.pte_quick)
+#define pgtable_cache_size (my_cpu_data.pgtable_cache_sz)
+
+extern __inline__ pgd_t*
+get_pgd_slow (void)
+{
+ pgd_t *ret = (pgd_t *)__get_free_page(GFP_KERNEL);
+ if (ret)
+ clear_page(ret);
+ return ret;
+}
+
+extern __inline__ pgd_t*
+get_pgd_fast (void)
+{
+ unsigned long *ret = pgd_quicklist;
+
+ if (ret != NULL) {
+ pgd_quicklist = (unsigned long *)(*ret);
+ ret[0] = 0;
+ --pgtable_cache_size;
+ }
+ return (pgd_t *)ret;
+}
+
+extern __inline__ pgd_t*
+pgd_alloc (void)
+{
+ pgd_t *pgd;
+
+ pgd = get_pgd_fast();
+ if (!pgd)
+ pgd = get_pgd_slow();
+ return pgd;
+}
+
+extern __inline__ void
+free_pgd_fast (pgd_t *pgd)
+{
+ *(unsigned long *)pgd = (unsigned long) pgd_quicklist;
+ pgd_quicklist = (unsigned long *) pgd;
+ ++pgtable_cache_size;
+}
+
+extern __inline__ pmd_t *
+get_pmd_slow (void)
+{
+ pmd_t *pmd = (pmd_t *) __get_free_page(GFP_KERNEL);
+
+ if (pmd)
+ clear_page(pmd);
+ return pmd;
+}
+
+extern __inline__ pmd_t *
+get_pmd_fast (void)
+{
+ unsigned long *ret = (unsigned long *)pmd_quicklist;
+
+ if (ret != NULL) {
+ pmd_quicklist = (unsigned long *)(*ret);
+ ret[0] = 0;
+ --pgtable_cache_size;
+ }
+ return (pmd_t *)ret;
+}
+
+extern __inline__ void
+free_pmd_fast (pmd_t *pmd)
+{
+ *(unsigned long *)pmd = (unsigned long) pmd_quicklist;
+ pmd_quicklist = (unsigned long *) pmd;
+ ++pgtable_cache_size;
+}
+
+extern __inline__ void
+free_pmd_slow (pmd_t *pmd)
+{
+ free_page((unsigned long)pmd);
+}
+
+extern pte_t *get_pte_slow (pmd_t *pmd, unsigned long address_preadjusted);
+
+extern __inline__ pte_t *
+get_pte_fast (void)
+{
+ unsigned long *ret = (unsigned long *)pte_quicklist;
+
+ if (ret != NULL) {
+ pte_quicklist = (unsigned long *)(*ret);
+ ret[0] = 0;
+ --pgtable_cache_size;
+ }
+ return (pte_t *)ret;
+}
+
+extern __inline__ void
+free_pte_fast (pte_t *pte)
+{
+ *(unsigned long *)pte = (unsigned long) pte_quicklist;
+ pte_quicklist = (unsigned long *) pte;
+ ++pgtable_cache_size;
+}
+
+#define pte_free_kernel(pte) free_pte_fast(pte)
+#define pte_free(pte) free_pte_fast(pte)
+#define pmd_free_kernel(pmd) free_pmd_fast(pmd)
+#define pmd_free(pmd) free_pmd_fast(pmd)
+#define pgd_free(pgd) free_pgd_fast(pgd)
+
+extern __inline__ pte_t*
+pte_alloc (pmd_t *pmd, unsigned long vmaddr)
+{
+ unsigned long offset;
+
+ offset = (vmaddr >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
+ if (pmd_none(*pmd)) {
+ pte_t *pte_page = get_pte_fast();
+
+ if (!pte_page)
+ return get_pte_slow(pmd, offset);
+ pmd_set(pmd, pte_page);
+ return pte_page + offset;
+ }
+ if (pmd_bad(*pmd)) {
+ __handle_bad_pmd(pmd);
+ return NULL;
+ }
+ return (pte_t *) pmd_page(*pmd) + offset;
+}
+
+extern __inline__ pmd_t*
+pmd_alloc (pgd_t *pgd, unsigned long vmaddr)
+{
+ unsigned long offset;
+
+ offset = (vmaddr >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
+ if (pgd_none(*pgd)) {
+ pmd_t *pmd_page = get_pmd_fast();
+
+ if (!pmd_page)
+ pmd_page = get_pmd_slow();
+ if (pmd_page) {
+ if (pgd_none(*pgd)) {
+ pgd_set(pgd, pmd_page);
+ return pmd_page + offset;
+ } else
+ free_pmd_fast(pmd_page);
+ } else
+ return NULL;
+ }
+ if (pgd_bad(*pgd)) {
+ __handle_bad_pgd(pgd);
+ return NULL;
+ }
+ return (pmd_t *) pgd_page(*pgd) + offset;
+}
+
+#define pte_alloc_kernel(pmd, addr) pte_alloc(pmd, addr)
+#define pmd_alloc_kernel(pgd, addr) pmd_alloc(pgd, addr)
+
+extern int do_check_pgt_cache (int, int);
+
+/*
+ * This establishes kernel virtual mappings (e.g., as a result of a
+ * vmalloc call). Since ia-64 uses a separate kernel page table,
+ * there is nothing to do here... :)
+ */
+#define set_pgdir(vmaddr, entry) do { } while(0)
+
+/*
+ * Now for some TLB flushing routines. This is the kind of stuff that
+ * can be very expensive, so try to avoid them whenever possible.
+ */
+
+/*
+ * Flush everything (kernel mapping may also have changed due to
+ * vmalloc/vfree).
+ */
+extern void __flush_tlb_all (void);
+
+#ifdef CONFIG_SMP
+ extern void smp_flush_tlb_all (void);
+# define flush_tlb_all() smp_flush_tlb_all()
+#else
+# define flush_tlb_all() __flush_tlb_all()
+#endif
+
+/*
+ * Serialize usage of ptc.g:
+ */
+extern spinlock_t ptcg_lock;
+
+/*
+ * Flush a specified user mapping
+ */
+extern __inline__ void
+flush_tlb_mm (struct mm_struct *mm)
+{
+ if (mm) {
+ mm->context = 0;
+ if (mm == current->active_mm) {
+ /* This is called, e.g., as a result of exec(). */
+ get_new_mmu_context(mm);
+ reload_context(mm);
+ }
+ }
+}
+
+extern void flush_tlb_range (struct mm_struct *mm, unsigned long start, unsigned long end);
+
+/*
+ * Page-granular tlb flush.
+ *
+ * Normally we flush just the data translation; for an executable
+ * mapping we must flush the instruction translation as well. We
+ * want to avoid the itlb flush, because that potentially also does
+ * an icache flush.
+ */
+static __inline__ void
+flush_tlb_page (struct vm_area_struct *vma, unsigned long addr)
+{
+ flush_tlb_range(vma->vm_mm, addr, addr + PAGE_SIZE);
+}
+
+#endif /* _ASM_IA64_PGALLOC_H */
--- /dev/null
+#ifndef _ASM_IA64_PGTABLE_H
+#define _ASM_IA64_PGTABLE_H
+
+/*
+ * This file contains the functions and defines necessary to modify and use
+ * the ia-64 page table tree.
+ *
+ * This hopefully works with any (fixed) ia-64 page-size, as defined
+ * in <asm/page.h> (currently 8192).
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <asm/mman.h>
+#include <asm/page.h>
+#include <asm/types.h>
+
+/* Size of physical address space: */
+#define IA64_PHYS_BITS 50 /* EAS2.5 defines 50 bits of ppn */
+#define IA64_PHYS_SIZE (__IA64_UL(1) << IA64_PHYS_BITS)
+
+/* Is ADDR a valid kernel address? */
+#define kern_addr_valid(addr) ((addr) >= TASK_SIZE)
+
+/* Is ADDR a valid physical address? */
+#define phys_addr_valid(addr) ((addr) < IA64_PHYS_SIZE)
+
+/*
+ * First, define the various bits in a PTE. Note that the PTE format
+ * matches the VHPT short format, the first doubleword of the VHPT long
+ * format, and the first doubleword of the TLB insertion format.
+ */
+#define _PAGE_P (1 << 0) /* page present bit */
+#define _PAGE_MA_WB (0x0 << 2) /* write back memory attribute */
+#define _PAGE_MA_UC (0x4 << 2) /* uncacheable memory attribute */
+#define _PAGE_MA_UCE (0x5 << 2) /* UC exported attribute */
+#define _PAGE_MA_WC (0x6 << 2) /* write coalescing memory attribute */
+#define _PAGE_MA_NAT (0x7 << 2) /* not-a-thing attribute */
+#define _PAGE_MA_MASK (0x7 << 2)
+#define _PAGE_PL_0 (0 << 7) /* privilege level 0 (kernel) */
+#define _PAGE_PL_1 (1 << 7) /* privilege level 1 (unused) */
+#define _PAGE_PL_2 (2 << 7) /* privilege level 2 (unused) */
+#define _PAGE_PL_3 (3 << 7) /* privilege level 3 (user) */
+#define _PAGE_PL_MASK (3 << 7)
+#define _PAGE_AR_R (0 << 9) /* read only */
+#define _PAGE_AR_RX (1 << 9) /* read & execute */
+#define _PAGE_AR_RW (2 << 9) /* read & write */
+#define _PAGE_AR_RWX (3 << 9) /* read, write & execute */
+#define _PAGE_AR_R_RW (4 << 9) /* read / read & write */
+#define _PAGE_AR_RX_RWX (5 << 9) /* read & exec / read, write & exec */
+#define _PAGE_AR_RWX_RW (6 << 9) /* read, write & exec / read & write */
+#define _PAGE_AR_X_RX (7 << 9) /* exec & promote / read & exec */
+#define _PAGE_AR_MASK (7 << 9)
+#define _PAGE_AR_SHIFT 9
+#define _PAGE_A (1 << 5) /* page accessed bit */
+#define _PAGE_D (1 << 6) /* page dirty bit */
+#define _PAGE_PPN_MASK ((IA64_PHYS_SIZE - 1) & ~0xfffUL)
+#define _PAGE_ED (__IA64_UL(1) << 52) /* exception deferral */
+#define _PAGE_PROTNONE (__IA64_UL(1) << 63)
+
+#define _PFN_MASK _PAGE_PPN_MASK
+#define _PAGE_CHG_MASK (_PFN_MASK | _PAGE_A | _PAGE_D)
+
+#define _PAGE_SIZE_4K 12
+#define _PAGE_SIZE_8K 13
+#define _PAGE_SIZE_16K 14
+#define _PAGE_SIZE_64K 16
+#define _PAGE_SIZE_256K 18
+#define _PAGE_SIZE_1M 20
+#define _PAGE_SIZE_4M 22
+#define _PAGE_SIZE_16M 24
+#define _PAGE_SIZE_64M 26
+#define _PAGE_SIZE_256M 28
+
+#define __ACCESS_BITS _PAGE_ED | _PAGE_A | _PAGE_P | _PAGE_MA_WB
+#define __DIRTY_BITS_NO_ED _PAGE_A | _PAGE_P | _PAGE_D | _PAGE_MA_WB
+#define __DIRTY_BITS _PAGE_ED | __DIRTY_BITS_NO_ED
+
+/*
+ * Definitions for first level:
+ *
+ * PGDIR_SHIFT determines what a first-level page table entry can map.
+ */
+#define PGDIR_SHIFT (PAGE_SHIFT + 2*(PAGE_SHIFT-3))
+#define PGDIR_SIZE (__IA64_UL(1) << PGDIR_SHIFT)
+#define PGDIR_MASK (~(PGDIR_SIZE-1))
+#define PTRS_PER_PGD (__IA64_UL(1) << (PAGE_SHIFT-3))
+#define USER_PTRS_PER_PGD PTRS_PER_PGD
+
+/*
+ * Definitions for second level:
+ *
+ * PMD_SHIFT determines the size of the area a second-level page table
+ * can map.
+ */
+#define PMD_SHIFT (PAGE_SHIFT + (PAGE_SHIFT-3))
+#define PMD_SIZE (__IA64_UL(1) << PMD_SHIFT)
+#define PMD_MASK (~(PMD_SIZE-1))
+#define PTRS_PER_PMD (__IA64_UL(1) << (PAGE_SHIFT-3))
+
+/*
+ * Definitions for third level:
+ */
+#define PTRS_PER_PTE (__IA64_UL(1) << (PAGE_SHIFT-3))
+
+/* Number of pointers that fit on a page: this will go away. */
+#define PTRS_PER_PAGE (__IA64_UL(1) << (PAGE_SHIFT-3))
+
+# ifndef __ASSEMBLY__
+
+#include <asm/bitops.h>
+#include <asm/mmu_context.h>
+#include <asm/processor.h>
+#include <asm/system.h>
+
+/*
+ * All the normal masks have the "page accessed" bits on, as any time
+ * they are used, the page is accessed. They are cleared only by the
+ * page-out routines.
+ */
+#define PAGE_NONE __pgprot(_PAGE_PROTNONE | _PAGE_A)
+#define PAGE_SHARED __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RW)
+#define PAGE_READONLY __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_R)
+#define PAGE_COPY __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RX)
+#define PAGE_GATE __pgprot(__ACCESS_BITS | _PAGE_PL_0 | _PAGE_AR_X_RX)
+#define PAGE_KERNEL __pgprot(__DIRTY_BITS | _PAGE_PL_0 | _PAGE_AR_RW)
+
+/*
+ * Next come the mappings that determine how mmap() protection bits
+ * (PROT_EXEC, PROT_READ, PROT_WRITE, PROT_NONE) get implemented. The
+ * _P version gets used for a private shared memory segment, the _S
+ * version gets used for a shared memory segment with MAP_SHARED on.
+ * In a private shared memory segment, we do a copy-on-write if a task
+ * attempts to write to the page.
+ */
+ /* xwr */
+#define __P000 PAGE_NONE
+#define __P001 PAGE_READONLY
+#define __P010 PAGE_READONLY /* write to priv pg -> copy & make writable */
+#define __P011 PAGE_READONLY /* ditto */
+#define __P100 __pgprot(_PAGE_ED | _PAGE_A | _PAGE_P | _PAGE_PL_3 | _PAGE_AR_X_RX)
+#define __P101 __pgprot(_PAGE_ED | _PAGE_A | _PAGE_P | _PAGE_PL_3 | _PAGE_AR_RX)
+#define __P110 __pgprot(_PAGE_ED | _PAGE_A | _PAGE_P | _PAGE_PL_3 | _PAGE_AR_RX)
+#define __P111 __pgprot(_PAGE_ED | _PAGE_A | _PAGE_P | _PAGE_PL_3 | _PAGE_AR_RX)
+
+#define __S000 PAGE_NONE
+#define __S001 PAGE_READONLY
+#define __S010 PAGE_SHARED /* we don't have (and don't need) write-only */
+#define __S011 PAGE_SHARED
+#define __S100 __pgprot(_PAGE_ED | _PAGE_A | _PAGE_P | _PAGE_PL_3 | _PAGE_AR_X_RX)
+#define __S101 __pgprot(_PAGE_ED | _PAGE_A | _PAGE_P | _PAGE_PL_3 | _PAGE_AR_RX)
+#define __S110 __pgprot(_PAGE_ED | _PAGE_A | _PAGE_P | _PAGE_PL_3 | _PAGE_AR_RWX)
+#define __S111 __pgprot(_PAGE_ED | _PAGE_A | _PAGE_P | _PAGE_PL_3 | _PAGE_AR_RWX)
+
+#define pgd_ERROR(e) printk("%s:%d: bad pgd %016lx.\n", __FILE__, __LINE__, pgd_val(e))
+#define pmd_ERROR(e) printk("%s:%d: bad pmd %016lx.\n", __FILE__, __LINE__, pmd_val(e))
+#define pte_ERROR(e) printk("%s:%d: bad pte %016lx.\n", __FILE__, __LINE__, pte_val(e))
+
+
+/*
+ * Some definitions to translate between mem_map, PTEs, and page
+ * addresses:
+ */
+
+/*
+ * Given a pointer to an mem_map[] entry, return the kernel virtual
+ * address corresponding to that page.
+ */
+#define page_address(page) (PAGE_OFFSET + (((page) - mem_map) << PAGE_SHIFT))
+
+/*
+ * Given a PTE, return the index of the mem_map[] entry corresponding
+ * to the page frame the PTE refers to.
+ */
+#define pte_pagenr(x) ((unsigned long) ((pte_val(x) & _PFN_MASK) >> PAGE_SHIFT))
+
+/*
+ * Now for some cache flushing routines. This is the kind of stuff
+ * that can be very expensive, so try to avoid them whenever possible.
+ */
+
+/* Caches aren't brain-dead on the ia-64. */
+#define flush_cache_all() do { } while (0)
+#define flush_cache_mm(mm) do { } while (0)
+#define flush_cache_range(mm, start, end) do { } while (0)
+#define flush_cache_page(vma, vmaddr) do { } while (0)
+#define flush_page_to_ram(page) do { } while (0)
+#define flush_icache_range(start, end) do { } while (0)
+extern void ia64_flush_icache_page (unsigned long addr);
+
+#define flush_icache_page(pg) ia64_flush_icache_page(page_address(pg))
+
+/*
+ * Now come the defines and routines to manage and access the three-level
+ * page table.
+ */
+
+/*
+ * On some architectures, special things need to be done when setting
+ * the PTE in a page table. Nothing special needs to be done on ia-64.
+ */
+#define set_pte(ptep, pteval) (*(ptep) = (pteval))
+
+#define VMALLOC_START (0xa000000000000000+2*PAGE_SIZE)
+#define VMALLOC_VMADDR(x) ((unsigned long)(x))
+#define VMALLOC_END 0xbfffffffffffffff
+
+/*
+ * BAD_PAGETABLE is used when we need a bogus page-table, while
+ * BAD_PAGE is used for a bogus page.
+ *
+ * ZERO_PAGE is a global shared page that is always zero: used
+ * for zero-mapped memory areas etc..
+ */
+extern pte_t ia64_bad_page (void);
+extern pmd_t *ia64_bad_pagetable (void);
+
+#define BAD_PAGETABLE ia64_bad_pagetable()
+#define BAD_PAGE ia64_bad_page()
+
+/*
+ * Conversion functions: convert a page and protection to a page entry,
+ * and a page entry and page directory to the page they refer to.
+ */
+#define mk_pte(page,pgprot) \
+({ \
+ pte_t __pte; \
+ \
+ pte_val(__pte) = ((page - mem_map) << PAGE_SHIFT) | pgprot_val(pgprot); \
+ __pte; \
+})
+
+/* This takes a physical page address that is used by the remapping functions */
+#define mk_pte_phys(physpage, pgprot) \
+({ pte_t __pte; pte_val(__pte) = physpage + pgprot_val(pgprot); __pte; })
+
+#define pte_modify(_pte, newprot) \
+ (__pte((pte_val(_pte) & _PAGE_CHG_MASK) | pgprot_val(newprot)))
+
+#define page_pte_prot(page,prot) mk_pte(page, prot)
+#define page_pte(page) page_pte_prot(page, __pgprot(0))
+
+#define pte_none(pte) (!pte_val(pte))
+#define pte_present(pte) (pte_val(pte) & (_PAGE_P | _PAGE_PROTNONE))
+#define pte_clear(pte) (pte_val(*(pte)) = 0UL)
+/* pte_page() returns the "struct page *" corresponding to the PTE: */
+#define pte_page(pte) (mem_map + pte_pagenr(pte))
+
+#define pmd_set(pmdp, ptep) (pmd_val(*(pmdp)) = __pa(ptep))
+#define pmd_none(pmd) (!pmd_val(pmd))
+#define pmd_bad(pmd) (!phys_addr_valid(pmd_val(pmd)))
+#define pmd_present(pmd) (pmd_val(pmd) != 0UL)
+#define pmd_clear(pmdp) (pmd_val(*(pmdp)) = 0UL)
+#define pmd_page(pmd) ((unsigned long) __va(pmd_val(pmd) & _PFN_MASK))
+
+#define pgd_set(pgdp, pmdp) (pgd_val(*(pgdp)) = __pa(pmdp))
+#define pgd_none(pgd) (!pgd_val(pgd))
+#define pgd_bad(pgd) (!phys_addr_valid(pgd_val(pgd)))
+#define pgd_present(pgd) (pgd_val(pgd) != 0UL)
+#define pgd_clear(pgdp) (pgd_val(*(pgdp)) = 0UL)
+#define pgd_page(pgd) ((unsigned long) __va(pgd_val(pgd) & _PFN_MASK))
+
+/*
+ * The following have defined behavior only if pte_present() is true.
+ */
+#define pte_read(pte) (((pte_val(pte) & _PAGE_AR_MASK) >> _PAGE_AR_SHIFT) < 6)
+#define pte_write(pte) ((unsigned) (((pte_val(pte) & _PAGE_AR_MASK) >> _PAGE_AR_SHIFT) - 2) < 4)
+#define pte_dirty(pte) (pte_val(pte) & _PAGE_D)
+#define pte_young(pte) (pte_val(pte) & _PAGE_A)
+/*
+ * Note: we convert AR_RWX to AR_RX and AR_RW to AR_R by clearing the
+ * 2nd bit in the access rights:
+ */
+#define pte_wrprotect(pte) (__pte(pte_val(pte) & ~_PAGE_AR_RW))
+#define pte_mkwrite(pte) (__pte(pte_val(pte) | _PAGE_AR_RW))
+
+#define pte_mkold(pte) (__pte(pte_val(pte) & ~_PAGE_A))
+#define pte_mkyoung(pte) (__pte(pte_val(pte) | _PAGE_A))
+
+#define pte_mkclean(pte) (__pte(pte_val(pte) & ~_PAGE_D))
+#define pte_mkdirty(pte) (__pte(pte_val(pte) | _PAGE_D))
+
+/*
+ * Macro to mark a page protection value as "uncacheable". Note
+ * that "protection" is really a misnomer here as the protection value
+ * contains the memory attribute bits, dirty bits, and various other
+ * bits as well.
+ */
+#define pgprot_noncached(prot) __pgprot((pgprot_val(prot) & ~_PAGE_MA_MASK) | _PAGE_MA_UC)
+
+/* The offset in the 1-level directory is given by the 3 region bits
+ (61..63) and the seven level-1 bits (33-39). */
+extern __inline__ pgd_t*
+pgd_offset (struct mm_struct *mm, unsigned long address)
+{
+ unsigned long region = address >> 61;
+ unsigned long l1index = (address >> PGDIR_SHIFT) & ((PTRS_PER_PGD >> 3) - 1);
+
+ return mm->pgd + ((region << (PAGE_SHIFT - 6)) | l1index);
+}
+
+/* In the kernel's mapped region we have a full 43-bit space available and completely
+   ignore the region number (since we know it's in region number 5). */
+#define pgd_offset_k(addr) \
+ (init_mm.pgd + (((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1)))
+
+/* Find an entry in the second-level page table.. */
+#define pmd_offset(dir,addr) \
+ ((pmd_t *) pgd_page(*(dir)) + (((addr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1)))
+
+/* Find an entry in the third-level page table.. */
+#define pte_offset(dir,addr) \
+ ((pte_t *) pmd_page(*(dir)) + (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)))
+
+
+extern void __handle_bad_pgd (pgd_t *pgd);
+extern void __handle_bad_pmd (pmd_t *pmd);
+
+
+extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
+
+/*
+ * IA-64 doesn't have any external MMU info: the page tables contain
+ * all the necessary information. However, we can use this macro
+ * to pre-install (override) a PTE that we know is needed anyhow.
+ *
+ * Asit says that on Itanium, it is generally faster to let the VHPT
+ * walker pick up a newly installed PTE (and VHPT misses should be
+ * extremely rare compared to normal misses). Also, since
+ * pre-installing the PTE has the problem that we may evict another
+ * TLB entry needlessly because we don't know for sure whether we need
+ * to update the iTLB or dTLB, I tend to prefer this solution, too.
+ * Also, this avoids nasty issues with forward progress (what if the
+ * newly installed PTE gets replaced before we return to the previous
+ * execution context?).
+ *
+ */
+#if 0
+# define update_mmu_cache(vma,address,pte)
+#else
+# define update_mmu_cache(vma,address,pte) \
+do { \
+ /* \
+ * XXX fix me!! \
+ * \
+ * It's not clear this is a win. We may end up polluting the \
+ * dtlb with itlb entries and vice versa (e.g., consider stack \
+ * pages that are normally marked executable). It would be \
+ * better to insert the TLB entry for the TLB cache that we \
+ * know needs the new entry. However, the update_mmu_cache() \
+ * arguments don't tell us whether we got here through a data \
+ * access or through an instruction fetch. Talk to Linus to \
+ * fix this. \
+ * \
+ * If you re-enable this code, you must disable the ptc code in \
+ * Entry 20 of the ivt. \
+ */ \
+ unsigned long flags; \
+ \
+ ia64_clear_ic(flags); \
+ ia64_itc((vma->vm_flags & PROT_EXEC) ? 0x3 : 0x2, address, pte_val(pte), PAGE_SHIFT); \
+ __restore_flags(flags); \
+} while (0)
+#endif
+
+#define SWP_TYPE(entry) (((entry).val >> 1) & 0xff)
+#define SWP_OFFSET(entry) ((entry).val >> 9)
+#define SWP_ENTRY(type,offset) ((swp_entry_t) { ((type) << 1) | ((offset) << 9) })
+#define pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
+#define swp_entry_to_pte(x) ((pte_t) { (x).val })
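+
+/*
+ * For instance (a sketch), encode/decode round-trips as
+ *
+ *	swp_entry_t e = SWP_ENTRY(type, offset);
+ *	... SWP_TYPE(e) == type, SWP_OFFSET(e) == offset ...
+ *
+ * provided type fits in 8 bits and offset in the remaining 55 bits.
+ */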
+
+#define module_map vmalloc
+#define module_unmap vfree
+
+/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
+#define PageSkip(page) (0)
+
+#define io_remap_page_range remap_page_range /* XXX is this right? */
+
+/*
+ * ZERO_PAGE is a global shared page that is always zero: used
+ * for zero-mapped memory areas etc..
+ */
+extern unsigned long empty_zero_page[1024];
+#define ZERO_PAGE(vaddr) (mem_map + MAP_NR(empty_zero_page))
+
+# endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_IA64_PGTABLE_H */
--- /dev/null
+#ifndef _ASM_IA64_POLL_H
+#define _ASM_IA64_POLL_H
+
+/*
+ * poll(2) bit definitions. Chosen to be compatible with Linux/x86.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#define POLLIN 0x0001
+#define POLLPRI 0x0002
+#define POLLOUT 0x0004
+#define POLLERR 0x0008
+#define POLLHUP 0x0010
+#define POLLNVAL 0x0020
+
+#define POLLRDNORM 0x0040
+#define POLLRDBAND 0x0080
+#define POLLWRNORM 0x0100
+#define POLLWRBAND 0x0200
+#define POLLMSG 0x0400
+
+struct pollfd {
+ int fd;
+ short events;
+ short revents;
+};
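+
+/*
+ * Illustrative user-level check of poll(2) results (a sketch; "pfd" is
+ * a hypothetical struct pollfd filled in by the kernel):
+ *
+ *	if (pfd.revents & (POLLIN | POLLPRI))
+ *		... normal or urgent data is readable on pfd.fd ...
+ */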
+
+#endif /* _ASM_IA64_POLL_H */
--- /dev/null
+#ifndef _ASM_IA64_POSIX_TYPES_H
+#define _ASM_IA64_POSIX_TYPES_H
+
+/*
+ * This file is generally used by user-level software, so you need to
+ * be a little careful about namespace pollution etc. Also, we cannot
+ * assume GCC is being used.
+ *
+ * Copyright (C) 1998-2000 Hewlett-Packard Co
+ * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+typedef unsigned int __kernel_dev_t;
+typedef unsigned int __kernel_ino_t;
+typedef unsigned int __kernel_mode_t;
+typedef unsigned int __kernel_nlink_t;
+typedef long __kernel_off_t;
+typedef long long __kernel_loff_t;
+typedef int __kernel_pid_t;
+typedef int __kernel_ipc_pid_t;
+typedef unsigned int __kernel_uid_t;
+typedef unsigned int __kernel_gid_t;
+typedef unsigned long __kernel_size_t;
+typedef long __kernel_ssize_t;
+typedef long __kernel_ptrdiff_t;
+typedef long __kernel_time_t;
+typedef long __kernel_suseconds_t;
+typedef long __kernel_clock_t;
+typedef int __kernel_daddr_t;
+typedef char * __kernel_caddr_t;
+typedef unsigned long __kernel_sigset_t; /* at least 32 bits */
+typedef unsigned short __kernel_uid16_t;
+typedef unsigned short __kernel_gid16_t;
+
+typedef struct {
+ int val[2];
+} __kernel_fsid_t;
+
+typedef __kernel_uid_t __kernel_old_uid_t;
+typedef __kernel_gid_t __kernel_old_gid_t;
+typedef __kernel_uid_t __kernel_uid32_t;
+typedef __kernel_gid_t __kernel_gid32_t;
+
+# ifdef __KERNEL__
+
+# ifndef __GNUC__
+
+#define __FD_SET(d, set) ((set)->fds_bits[__FDELT(d)] |= __FDMASK(d))
+#define __FD_CLR(d, set) ((set)->fds_bits[__FDELT(d)] &= ~__FDMASK(d))
+#define __FD_ISSET(d, set) (((set)->fds_bits[__FDELT(d)] & __FDMASK(d)) != 0)
+#define __FD_ZERO(set) \
+ ((void) memset ((__ptr_t) (set), 0, sizeof (__kernel_fd_set)))
+
+# else /* !__GNUC__ */
+
+/* With GNU C, use inline functions instead so args are evaluated only once: */
+
+#undef __FD_SET
+static __inline__ void __FD_SET(unsigned long fd, __kernel_fd_set *fdsetp)
+{
+ unsigned long _tmp = fd / __NFDBITS;
+ unsigned long _rem = fd % __NFDBITS;
+ fdsetp->fds_bits[_tmp] |= (1UL<<_rem);
+}
+
+#undef __FD_CLR
+static __inline__ void __FD_CLR(unsigned long fd, __kernel_fd_set *fdsetp)
+{
+ unsigned long _tmp = fd / __NFDBITS;
+ unsigned long _rem = fd % __NFDBITS;
+ fdsetp->fds_bits[_tmp] &= ~(1UL<<_rem);
+}
+
+#undef __FD_ISSET
+static __inline__ int __FD_ISSET(unsigned long fd, const __kernel_fd_set *p)
+{
+ unsigned long _tmp = fd / __NFDBITS;
+ unsigned long _rem = fd % __NFDBITS;
+ return (p->fds_bits[_tmp] & (1UL<<_rem)) != 0;
+}
+
+/*
+ * This will unroll the loop for the normal constant case (8 ints,
+ * for a 256-bit fd_set)
+ */
+#undef __FD_ZERO
+static __inline__ void __FD_ZERO(__kernel_fd_set *p)
+{
+ unsigned long *tmp = p->fds_bits;
+ int i;
+
+ if (__builtin_constant_p(__FDSET_LONGS)) {
+ switch (__FDSET_LONGS) {
+ case 16:
+ tmp[ 0] = 0; tmp[ 1] = 0; tmp[ 2] = 0; tmp[ 3] = 0;
+ tmp[ 4] = 0; tmp[ 5] = 0; tmp[ 6] = 0; tmp[ 7] = 0;
+ tmp[ 8] = 0; tmp[ 9] = 0; tmp[10] = 0; tmp[11] = 0;
+ tmp[12] = 0; tmp[13] = 0; tmp[14] = 0; tmp[15] = 0;
+ return;
+
+ case 8:
+ tmp[ 0] = 0; tmp[ 1] = 0; tmp[ 2] = 0; tmp[ 3] = 0;
+ tmp[ 4] = 0; tmp[ 5] = 0; tmp[ 6] = 0; tmp[ 7] = 0;
+ return;
+
+ case 4:
+ tmp[ 0] = 0; tmp[ 1] = 0; tmp[ 2] = 0; tmp[ 3] = 0;
+ return;
+ }
+ }
+ i = __FDSET_LONGS;
+ while (i) {
+ i--;
+ *tmp = 0;
+ tmp++;
+ }
+}
+
+# endif /* !__GNUC__ */
+# endif /* __KERNEL__ */
+#endif /* _ASM_IA64_POSIX_TYPES_H */
--- /dev/null
+#ifndef _ASM_IA64_PROCESSOR_H
+#define _ASM_IA64_PROCESSOR_H
+
+/*
+ * Copyright (C) 1998-2000 Hewlett-Packard Co
+ * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998, 1999 Stephane Eranian <eranian@hpl.hp.com>
+ * Copyright (C) 1999 Asit Mallick <asit.k.mallick@intel.com>
+ * Copyright (C) 1999 Don Dugger <don.dugger@intel.com>
+ *
+ * 11/24/98 S.Eranian added ia64_set_iva()
+ * 12/03/99 D. Mosberger implement thread_saved_pc() via kernel unwind API
+ */
+
+#include <linux/config.h>
+
+#include <asm/ptrace.h>
+#include <asm/types.h>
+
+#define IA64_NUM_DBG_REGS 8
+
+/*
+ * TASK_SIZE really is misnamed. It is actually the maximum user
+ * space address (plus one). On ia-64, there are five regions of 2TB
+ * each (assuming 8KB page size), for a total of 8TB of user virtual
+ * address space.
+ */
+#define TASK_SIZE 0xa000000000000000
+
+#ifdef CONFIG_IA32_SUPPORT
+# define TASK_UNMAPPED_BASE 0x40000000 /* XXX fix me! */
+#else
+/*
+ * This decides where the kernel will search for a free chunk of vm
+ * space during mmap's.
+ */
+#define TASK_UNMAPPED_BASE 0x2000000000000000
+#endif
+
+/*
+ * Bus types
+ */
+#define EISA_bus 0
+#define EISA_bus__is_a_macro /* for versions in ksyms.c */
+#define MCA_bus 0
+#define MCA_bus__is_a_macro /* for versions in ksyms.c */
+
+/* Processor status register bits: */
+#define IA64_PSR_BE_BIT 1
+#define IA64_PSR_UP_BIT 2
+#define IA64_PSR_AC_BIT 3
+#define IA64_PSR_MFL_BIT 4
+#define IA64_PSR_MFH_BIT 5
+#define IA64_PSR_IC_BIT 13
+#define IA64_PSR_I_BIT 14
+#define IA64_PSR_PK_BIT 15
+#define IA64_PSR_DT_BIT 17
+#define IA64_PSR_DFL_BIT 18
+#define IA64_PSR_DFH_BIT 19
+#define IA64_PSR_SP_BIT 20
+#define IA64_PSR_PP_BIT 21
+#define IA64_PSR_DI_BIT 22
+#define IA64_PSR_SI_BIT 23
+#define IA64_PSR_DB_BIT 24
+#define IA64_PSR_LP_BIT 25
+#define IA64_PSR_TB_BIT 26
+#define IA64_PSR_RT_BIT 27
+/* The following are not affected by save_flags()/restore_flags(): */
+#define IA64_PSR_IS_BIT 34
+#define IA64_PSR_MC_BIT 35
+#define IA64_PSR_IT_BIT 36
+#define IA64_PSR_ID_BIT 37
+#define IA64_PSR_DA_BIT 38
+#define IA64_PSR_DD_BIT 39
+#define IA64_PSR_SS_BIT 40
+#define IA64_PSR_RI_BIT 41
+#define IA64_PSR_ED_BIT 43
+#define IA64_PSR_BN_BIT 44
+
+#define IA64_PSR_BE (__IA64_UL(1) << IA64_PSR_BE_BIT)
+#define IA64_PSR_UP (__IA64_UL(1) << IA64_PSR_UP_BIT)
+#define IA64_PSR_AC (__IA64_UL(1) << IA64_PSR_AC_BIT)
+#define IA64_PSR_MFL (__IA64_UL(1) << IA64_PSR_MFL_BIT)
+#define IA64_PSR_MFH (__IA64_UL(1) << IA64_PSR_MFH_BIT)
+#define IA64_PSR_IC (__IA64_UL(1) << IA64_PSR_IC_BIT)
+#define IA64_PSR_I (__IA64_UL(1) << IA64_PSR_I_BIT)
+#define IA64_PSR_PK (__IA64_UL(1) << IA64_PSR_PK_BIT)
+#define IA64_PSR_DT (__IA64_UL(1) << IA64_PSR_DT_BIT)
+#define IA64_PSR_DFL (__IA64_UL(1) << IA64_PSR_DFL_BIT)
+#define IA64_PSR_DFH (__IA64_UL(1) << IA64_PSR_DFH_BIT)
+#define IA64_PSR_SP (__IA64_UL(1) << IA64_PSR_SP_BIT)
+#define IA64_PSR_PP (__IA64_UL(1) << IA64_PSR_PP_BIT)
+#define IA64_PSR_DI (__IA64_UL(1) << IA64_PSR_DI_BIT)
+#define IA64_PSR_SI (__IA64_UL(1) << IA64_PSR_SI_BIT)
+#define IA64_PSR_DB (__IA64_UL(1) << IA64_PSR_DB_BIT)
+#define IA64_PSR_LP (__IA64_UL(1) << IA64_PSR_LP_BIT)
+#define IA64_PSR_TB (__IA64_UL(1) << IA64_PSR_TB_BIT)
+#define IA64_PSR_RT (__IA64_UL(1) << IA64_PSR_RT_BIT)
+/* The following are not affected by save_flags()/restore_flags(): */
+#define IA64_PSR_IS (__IA64_UL(1) << IA64_PSR_IS_BIT)
+#define IA64_PSR_MC (__IA64_UL(1) << IA64_PSR_MC_BIT)
+#define IA64_PSR_IT (__IA64_UL(1) << IA64_PSR_IT_BIT)
+#define IA64_PSR_ID (__IA64_UL(1) << IA64_PSR_ID_BIT)
+#define IA64_PSR_DA (__IA64_UL(1) << IA64_PSR_DA_BIT)
+#define IA64_PSR_DD (__IA64_UL(1) << IA64_PSR_DD_BIT)
+#define IA64_PSR_SS (__IA64_UL(1) << IA64_PSR_SS_BIT)
+#define IA64_PSR_RI (__IA64_UL(3) << IA64_PSR_RI_BIT)
+#define IA64_PSR_ED (__IA64_UL(1) << IA64_PSR_ED_BIT)
+#define IA64_PSR_BN (__IA64_UL(1) << IA64_PSR_BN_BIT)
+
+/* User mask bits: */
+#define IA64_PSR_UM (IA64_PSR_BE | IA64_PSR_UP | IA64_PSR_AC | IA64_PSR_MFL | IA64_PSR_MFH)
+
+/* Default Control Register */
+#define IA64_DCR_PP_BIT 0 /* privileged performance monitor default */
+#define IA64_DCR_BE_BIT 1 /* big-endian default */
+#define IA64_DCR_LC_BIT 2 /* ia32 lock-check enable */
+#define IA64_DCR_DM_BIT 8 /* defer TLB miss faults */
+#define IA64_DCR_DP_BIT 9 /* defer page-not-present faults */
+#define IA64_DCR_DK_BIT 10 /* defer key miss faults */
+#define IA64_DCR_DX_BIT 11 /* defer key permission faults */
+#define IA64_DCR_DR_BIT 12 /* defer access right faults */
+#define IA64_DCR_DA_BIT 13 /* defer access bit faults */
+#define IA64_DCR_DD_BIT 14 /* defer debug faults */
+
+#define IA64_DCR_PP (__IA64_UL(1) << IA64_DCR_PP_BIT)
+#define IA64_DCR_BE (__IA64_UL(1) << IA64_DCR_BE_BIT)
+#define IA64_DCR_LC (__IA64_UL(1) << IA64_DCR_LC_BIT)
+#define IA64_DCR_DM (__IA64_UL(1) << IA64_DCR_DM_BIT)
+#define IA64_DCR_DP (__IA64_UL(1) << IA64_DCR_DP_BIT)
+#define IA64_DCR_DK (__IA64_UL(1) << IA64_DCR_DK_BIT)
+#define IA64_DCR_DX (__IA64_UL(1) << IA64_DCR_DX_BIT)
+#define IA64_DCR_DR (__IA64_UL(1) << IA64_DCR_DR_BIT)
+#define IA64_DCR_DA (__IA64_UL(1) << IA64_DCR_DA_BIT)
+#define IA64_DCR_DD (__IA64_UL(1) << IA64_DCR_DD_BIT)
+
+/* Interrupt Status Register */
+#define IA64_ISR_X_BIT 32 /* execute access */
+#define IA64_ISR_W_BIT 33 /* write access */
+#define IA64_ISR_R_BIT 34 /* read access */
+#define IA64_ISR_NA_BIT 35 /* non-access */
+#define IA64_ISR_SP_BIT 36 /* speculative load exception */
+#define IA64_ISR_RS_BIT 37 /* mandatory register-stack exception */
+#define IA64_ISR_IR_BIT 38 /* invalid register frame exception */
+
+#define IA64_ISR_X (__IA64_UL(1) << IA64_ISR_X_BIT)
+#define IA64_ISR_W (__IA64_UL(1) << IA64_ISR_W_BIT)
+#define IA64_ISR_R (__IA64_UL(1) << IA64_ISR_R_BIT)
+#define IA64_ISR_NA (__IA64_UL(1) << IA64_ISR_NA_BIT)
+#define IA64_ISR_SP (__IA64_UL(1) << IA64_ISR_SP_BIT)
+#define IA64_ISR_RS (__IA64_UL(1) << IA64_ISR_RS_BIT)
+#define IA64_ISR_IR (__IA64_UL(1) << IA64_ISR_IR_BIT)
+
+#define IA64_THREAD_FPH_VALID (__IA64_UL(1) << 0) /* floating-point high state valid? */
+#define IA64_THREAD_DBG_VALID (__IA64_UL(1) << 1) /* debug registers valid? */
+#define IA64_KERNEL_DEATH (__IA64_UL(1) << 63) /* used for die_if_kernel() recursion detection */
+
+#ifndef __ASSEMBLY__
+
+#include <linux/smp.h>
+#include <linux/threads.h>
+
+#include <asm/fpu.h>
+#include <asm/offsets.h>
+#include <asm/page.h>
+#include <asm/rse.h>
+#include <asm/unwind.h>
+
+/* like above but expressed as bitfields for more efficient access: */
+struct ia64_psr {
+ __u64 reserved0 : 1;
+ __u64 be : 1;
+ __u64 up : 1;
+ __u64 ac : 1;
+ __u64 mfl : 1;
+ __u64 mfh : 1;
+ __u64 reserved1 : 7;
+ __u64 ic : 1;
+ __u64 i : 1;
+ __u64 pk : 1;
+ __u64 reserved2 : 1;
+ __u64 dt : 1;
+ __u64 dfl : 1;
+ __u64 dfh : 1;
+ __u64 sp : 1;
+ __u64 pp : 1;
+ __u64 di : 1;
+ __u64 si : 1;
+ __u64 db : 1;
+ __u64 lp : 1;
+ __u64 tb : 1;
+ __u64 rt : 1;
+ __u64 reserved3 : 4;
+ __u64 cpl : 2;
+ __u64 is : 1;
+ __u64 mc : 1;
+ __u64 it : 1;
+ __u64 id : 1;
+ __u64 da : 1;
+ __u64 dd : 1;
+ __u64 ss : 1;
+ __u64 ri : 2;
+ __u64 ed : 1;
+ __u64 bn : 1;
+ __u64 reserved4 : 19;
+};
+
+/*
+ * This shift should be large enough to be able to represent
+ * 1000000/itc_freq with good accuracy while being small enough to fit
+ * 1000000<<IA64_USEC_PER_CYC_SHIFT in 64 bits.
+ */
+#define IA64_USEC_PER_CYC_SHIFT 41
+
+/*
+ * CPU type, hardware bug flags, and per-CPU state.
+ */
+struct cpuinfo_ia64 {
+ __u64 *pgd_quick;
+ __u64 *pmd_quick;
+ __u64 *pte_quick;
+ __u64 pgtable_cache_sz;
+ /* CPUID-derived information: */
+ __u64 ppn;
+ __u64 features;
+ __u8 number;
+ __u8 revision;
+ __u8 model;
+ __u8 family;
+ __u8 archrev;
+ char vendor[16];
+ __u64 itc_freq; /* frequency of ITC counter */
+ __u64 proc_freq; /* frequency of processor */
+ __u64 cyc_per_usec; /* itc_freq/1000000 */
+ __u64 usec_per_cyc; /* 2^IA64_USEC_PER_CYC_SHIFT*1000000/itc_freq */
+#ifdef CONFIG_SMP
+ __u64 loops_per_sec;
+ __u64 ipi_count;
+ __u64 prof_counter;
+ __u64 prof_multiplier;
+#endif
+};
+
+#define my_cpu_data cpu_data[smp_processor_id()]
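+
+/*
+ * Given these fields, converting an ITC cycle count to microseconds is
+ * one multiply and one shift (a sketch, "cycles" being hypothetical):
+ *
+ *	usec = (cycles * my_cpu_data.usec_per_cyc) >> IA64_USEC_PER_CYC_SHIFT;
+ */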
+
+#ifdef CONFIG_SMP
+# define loops_per_sec() my_cpu_data.loops_per_sec
+#else
+# define loops_per_sec() loops_per_sec
+#endif
+
+extern struct cpuinfo_ia64 cpu_data[NR_CPUS];
+
+extern void identify_cpu (struct cpuinfo_ia64 *);
+extern void print_cpu_info (struct cpuinfo_ia64 *);
+
+typedef struct {
+ unsigned long seg;
+} mm_segment_t;
+
+struct thread_struct {
+ __u64 ksp; /* kernel stack pointer */
+ unsigned long flags; /* various flags */
+ struct ia64_fpreg fph[96]; /* saved/loaded on demand */
+ __u64 dbr[IA64_NUM_DBG_REGS];
+ __u64 ibr[IA64_NUM_DBG_REGS];
+#ifdef CONFIG_IA32_SUPPORT
+ __u64 fsr; /* IA32 floating pt status reg */
+ __u64 fcr; /* IA32 floating pt control reg */
+ __u64 fir; /* IA32 fp except. instr. reg */
+ __u64 fdr; /* IA32 fp except. data reg */
+# define INIT_THREAD_IA32 , 0, 0, 0, 0
+#else
+# define INIT_THREAD_IA32
+#endif /* CONFIG_IA32_SUPPORT */
+};
+
+#define INIT_MMAP { \
+ &init_mm, PAGE_OFFSET, PAGE_OFFSET + 0x10000000, NULL, PAGE_SHARED, \
+ VM_READ | VM_WRITE | VM_EXEC, 1, NULL, NULL \
+}
+
+#define INIT_THREAD { \
+ 0, /* ksp */ \
+ 0, /* flags */ \
+ {{{{0}}}, }, /* fph */ \
+ {0, }, /* dbr */ \
+ {0, } /* ibr */ \
+ INIT_THREAD_IA32 \
+}
+
+#define start_thread(regs,new_ip,new_sp) do { \
+ set_fs(USER_DS); \
+ ia64_psr(regs)->cpl = 3; /* set user mode */ \
+ ia64_psr(regs)->ri = 0; /* clear return slot number */ \
+ regs->cr_iip = new_ip; \
+ regs->ar_rsc = 0xf; /* eager mode, privilege level 3 */ \
+ regs->r12 = new_sp - 16; /* allocate 16 byte scratch area */ \
+ regs->ar_bspstore = IA64_RBS_BOT; \
+ regs->ar_rnat = 0; \
+ regs->loadrs = 0; \
+} while (0)
+
+/* Forward declarations, a strange C thing... */
+struct mm_struct;
+struct task_struct;
+
+/* Free all resources held by a thread. */
+extern void release_thread (struct task_struct *);
+
+/*
+ * This is the mechanism for creating a new kernel thread.
+ *
+ * NOTE 1: Only a kernel-only process (i.e. the swapper or direct
+ * descendants who haven't done an "execve()") should use this: it
+ * will work within a system call from a "real" process, but the
+ * process memory space will not be free'd until both the parent and
+ * the child have exited.
+ *
+ * NOTE 2: This MUST NOT be an inlined function. Otherwise, we get
+ * into trouble in init/main.c when the child thread returns to
+ * do_basic_setup() and the timing is such that free_initmem() has
+ * been called already.
+ */
+extern int kernel_thread (int (*fn)(void *), void *arg, unsigned long flags);
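+
+/*
+ * For example (a sketch; my_daemon is a hypothetical int (*)(void *)
+ * worker function):
+ *
+ *	kernel_thread(my_daemon, NULL, CLONE_FS | CLONE_FILES | CLONE_SIGHAND);
+ */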
+
+/* Copy and release all segment info associated with a VM */
+#define copy_segments(tsk, mm) do { } while (0)
+#define release_segments(mm) do { } while (0)
+#define forget_segments() do { } while (0)
+
+/* Get wait channel for task P. */
+extern unsigned long get_wchan (struct task_struct *p);
+
+/* Return instruction pointer of blocked task TSK. */
+#define KSTK_EIP(tsk) \
+ ({ \
+ struct pt_regs *_regs = ia64_task_regs(tsk); \
+ _regs->cr_iip + ia64_psr(_regs)->ri; \
+ })
+
+/* Return stack pointer of blocked task TSK. */
+#define KSTK_ESP(tsk) ((tsk)->thread.ksp)
+
+static inline struct task_struct *
+ia64_get_fpu_owner (void)
+{
+ struct task_struct *t;
+ __asm__ ("mov %0=ar.k5" : "=r"(t));
+ return t;
+}
+
+static inline void
+ia64_set_fpu_owner (struct task_struct *t)
+{
+ __asm__ __volatile__ ("mov ar.k5=%0" :: "r"(t));
+}
+
+extern void __ia64_init_fpu (void);
+extern void __ia64_save_fpu (struct ia64_fpreg *fph);
+extern void __ia64_load_fpu (struct ia64_fpreg *fph);
+
+#define ia64_fph_enable() __asm__ __volatile__ (";; rsm psr.dfh;; srlz.d;;" ::: "memory");
+#define ia64_fph_disable() __asm__ __volatile__ (";; ssm psr.dfh;; srlz.d;;" ::: "memory");
+
+/* load fp 0.0 into fph */
+static inline void
+ia64_init_fpu (void) {
+ ia64_fph_enable();
+ __ia64_init_fpu();
+ ia64_fph_disable();
+}
+
+/* save f32-f127 at FPH */
+static inline void
+ia64_save_fpu (struct ia64_fpreg *fph) {
+ ia64_fph_enable();
+ __ia64_save_fpu(fph);
+ ia64_fph_disable();
+}
+
+/* load f32-f127 from FPH */
+static inline void
+ia64_load_fpu (struct ia64_fpreg *fph) {
+ ia64_fph_enable();
+ __ia64_load_fpu(fph);
+ ia64_fph_disable();
+}
+
+extern inline void
+ia64_fc (void *addr)
+{
+ __asm__ __volatile__ ("fc %0" :: "r"(addr) : "memory");
+}
+
+extern inline void
+ia64_sync_i (void)
+{
+ __asm__ __volatile__ (";; sync.i" ::: "memory");
+}
+
+extern inline void
+ia64_srlz_i (void)
+{
+ __asm__ __volatile__ (";; srlz.i ;;" ::: "memory");
+}
+
+extern inline void
+ia64_srlz_d (void)
+{
+ __asm__ __volatile__ (";; srlz.d" ::: "memory");
+}
+
+extern inline void
+ia64_set_rr (__u64 reg_bits, __u64 rr_val)
+{
+ __asm__ __volatile__ ("mov rr[%0]=%1" :: "r"(reg_bits), "r"(rr_val) : "memory");
+}
+
+extern inline __u64
+ia64_get_dcr (void)
+{
+ __u64 r;
+ __asm__ ("mov %0=cr.dcr" : "=r"(r));
+ return r;
+}
+
+extern inline void
+ia64_set_dcr (__u64 val)
+{
+ __asm__ __volatile__ ("mov cr.dcr=%0;;" :: "r"(val) : "memory");
+ ia64_srlz_d();
+}
+
+extern inline __u64
+ia64_get_lid (void)
+{
+ __u64 r;
+ __asm__ ("mov %0=cr.lid" : "=r"(r));
+ return r;
+}
+
+extern inline void
+ia64_invala (void)
+{
+ __asm__ __volatile__ ("invala" ::: "memory");
+}
+
+/*
+ * Save the processor status flags in FLAGS and then clear the
+ * interrupt collection and interrupt enable bits.
+ */
+#define ia64_clear_ic(flags) \
+ __asm__ __volatile__ ("mov %0=psr;; rsm psr.i | psr.ic;; srlz.i;;" \
+ : "=r"(flags) :: "memory");
+
+/*
+ * Insert a translation into an instruction and/or data translation
+ * register.
+ */
+extern inline void
+ia64_itr (__u64 target_mask, __u64 tr_num,
+ __u64 vmaddr, __u64 pte,
+ __u64 log_page_size)
+{
+ __asm__ __volatile__ ("mov cr.itir=%0" :: "r"(log_page_size << 2) : "memory");
+ __asm__ __volatile__ ("mov cr.ifa=%0;;" :: "r"(vmaddr) : "memory");
+ if (target_mask & 0x1)
+ __asm__ __volatile__ ("itr.i itr[%0]=%1"
+ :: "r"(tr_num), "r"(pte) : "memory");
+ if (target_mask & 0x2)
+ __asm__ __volatile__ (";;itr.d dtr[%0]=%1"
+ :: "r"(tr_num), "r"(pte) : "memory");
+}
+
+/*
+ * Insert a translation into the instruction and/or data translation
+ * cache.
+ */
+extern inline void
+ia64_itc (__u64 target_mask, __u64 vmaddr, __u64 pte,
+ __u64 log_page_size)
+{
+ __asm__ __volatile__ ("mov cr.itir=%0" :: "r"(log_page_size << 2) : "memory");
+ __asm__ __volatile__ ("mov cr.ifa=%0;;" :: "r"(vmaddr) : "memory");
+ /* as per EAS2.6, itc must be the last instruction in an instruction group */
+ if (target_mask & 0x1)
+ __asm__ __volatile__ ("itc.i %0;;" :: "r"(pte) : "memory");
+ if (target_mask & 0x2)
+ __asm__ __volatile__ (";;itc.d %0;;" :: "r"(pte) : "memory");
+}
+
+/*
+ * Purge a range of addresses from instruction and/or data translation
+ * register(s).
+ */
+extern inline void
+ia64_ptr (__u64 target_mask, __u64 vmaddr, __u64 log_size)
+{
+ if (target_mask & 0x1)
+ __asm__ __volatile__ ("ptr.i %0,%1" :: "r"(vmaddr), "r"(log_size << 2));
+ if (target_mask & 0x2)
+ __asm__ __volatile__ ("ptr.d %0,%1" :: "r"(vmaddr), "r"(log_size << 2));
+}
+
+/* Set the interrupt vector address. The address must be suitably aligned (32KB). */
+extern inline void
+ia64_set_iva (void *ivt_addr)
+{
+ __asm__ __volatile__ ("mov cr.iva=%0;; srlz.i;;" :: "r"(ivt_addr) : "memory");
+}
+
+/* Set the page table address and control bits. */
+extern inline void
+ia64_set_pta (__u64 pta)
+{
+ /* Note: srlz.i implies srlz.d */
+ __asm__ __volatile__ ("mov cr.pta=%0;; srlz.i;;" :: "r"(pta) : "memory");
+}
+
+extern inline __u64
+ia64_get_cpuid (__u64 regnum)
+{
+ __u64 r;
+
+ __asm__ ("mov %0=cpuid[%r1]" : "=r"(r) : "rO"(regnum));
+ return r;
+}
+
+extern inline void
+ia64_eoi (void)
+{
+ __asm__ ("mov cr.eoi=r0;; srlz.d;;" ::: "memory");
+}
+
+extern __inline__ void
+ia64_set_lrr0 (__u8 vector, __u8 masked)
+{
+ if (masked > 1)
+ masked = 1;
+
+ __asm__ __volatile__ ("mov cr.lrr0=%0;; srlz.d"
+ :: "r"((masked << 16) | vector) : "memory");
+}
+
+
+extern __inline__ void
+ia64_set_lrr1 (__u8 vector, __u8 masked)
+{
+ if (masked > 1)
+ masked = 1;
+
+ __asm__ __volatile__ ("mov cr.lrr1=%0;; srlz.d"
+ :: "r"((masked << 16) | vector) : "memory");
+}
+
+extern __inline__ void
+ia64_set_pmv (__u64 val)
+{
+ __asm__ __volatile__ ("mov cr.pmv=%0" :: "r"(val) : "memory");
+}
+
+extern __inline__ __u64
+ia64_get_pmc (__u64 regnum)
+{
+ __u64 retval;
+
+ __asm__ __volatile__ ("mov %0=pmc[%1]" : "=r"(retval) : "r"(regnum));
+ return retval;
+}
+
+extern __inline__ void
+ia64_set_pmc (__u64 regnum, __u64 value)
+{
+ __asm__ __volatile__ ("mov pmc[%0]=%1" :: "r"(regnum), "r"(value));
+}
+
+extern __inline__ __u64
+ia64_get_pmd (__u64 regnum)
+{
+ __u64 retval;
+
+ __asm__ __volatile__ ("mov %0=pmd[%1]" : "=r"(retval) : "r"(regnum));
+ return retval;
+}
+
+extern __inline__ void
+ia64_set_pmd (__u64 regnum, __u64 value)
+{
+ __asm__ __volatile__ ("mov pmd[%0]=%1" :: "r"(regnum), "r"(value));
+}
+
+/*
+ * Given the address to which a spill occurred, return the unat bit
+ * number that corresponds to this address.
+ */
+extern inline __u64
+ia64_unat_pos (void *spill_addr)
+{
+ return ((__u64) spill_addr >> 3) & 0x3f;
+}
+
+/*
+ * Set the NaT bit of an integer register which was spilled at address
+ * SPILL_ADDR. UNAT is the mask to be updated.
+ */
+extern inline void
+ia64_set_unat (__u64 *unat, void *spill_addr, unsigned long nat)
+{
+ __u64 bit = ia64_unat_pos(spill_addr);
+ __u64 mask = 1UL << bit;
+
+ *unat = (*unat & ~mask) | (nat << bit);
+}
+
+/*
+ * Return saved PC of a blocked thread.
+ * Note that the only way T can block is through a call to schedule() -> switch_to().
+ */
+extern inline unsigned long
+thread_saved_pc (struct thread_struct *t)
+{
+ struct ia64_frame_info info;
+ /* XXX ouch: Linus, please pass the task pointer to thread_saved_pc() instead! */
+ struct task_struct *p = (void *) ((unsigned long) t - IA64_TASK_THREAD_OFFSET);
+
+ ia64_unwind_init_from_blocked_task(&info, p);
+ if (ia64_unwind_to_previous_frame(&info) < 0)
+ return 0;
+ return ia64_unwind_get_ip(&info);
+}
+
+/*
+ * Get the current instruction/program counter value.
+ */
+#define current_text_addr() \
+ ({ void *_pc; __asm__ ("mov %0=ip" : "=r" (_pc)); _pc; })
+
+#define THREAD_SIZE IA64_STK_OFFSET
+/* NOTE: The task struct and the stacks are allocated together. */
+#define alloc_task_struct() \
+ ((struct task_struct *) __get_free_pages(GFP_KERNEL, IA64_TASK_STRUCT_LOG_NUM_PAGES))
+#define free_task_struct(p) free_pages((unsigned long)(p), IA64_TASK_STRUCT_LOG_NUM_PAGES)
+#define get_task_struct(tsk) atomic_inc(&mem_map[MAP_NR(tsk)].count)
+
+#define init_task (init_task_union.task)
+#define init_stack (init_task_union.stack)
+
+/*
+ * Set the correctable machine check vector register
+ */
+extern __inline__ void
+ia64_set_cmcv (__u64 val)
+{
+ __asm__ __volatile__ ("mov cr.cmcv=%0" :: "r"(val) : "memory");
+}
+
+/*
+ * Read the correctable machine check vector register
+ */
+extern __inline__ __u64
+ia64_get_cmcv (void)
+{
+ __u64 val;
+
+ __asm__ ("mov %0=cr.cmcv" : "=r"(val) :: "memory");
+ return val;
+}
+
+extern inline __u64
+ia64_get_ivr (void)
+{
+ __u64 r;
+ __asm__ __volatile__ ("srlz.d;; mov %0=cr.ivr;; srlz.d;;" : "=r"(r));
+ return r;
+}
+
+extern inline void
+ia64_set_tpr (__u64 val)
+{
+ __asm__ __volatile__ ("mov cr.tpr=%0" :: "r"(val));
+}
+
+extern inline __u64
+ia64_get_tpr (void)
+{
+ __u64 r;
+ __asm__ ("mov %0=cr.tpr" : "=r"(r));
+ return r;
+}
+
+extern __inline__ void
+ia64_set_irr0 (__u64 val)
+{
+ __asm__ __volatile__("mov cr.irr0=%0;;" :: "r"(val) : "memory");
+ ia64_srlz_d();
+}
+
+extern __inline__ __u64
+ia64_get_irr0 (void)
+{
+ __u64 val;
+
+ __asm__ ("mov %0=cr.irr0" : "=r"(val));
+ return val;
+}
+
+extern __inline__ void
+ia64_set_irr1 (__u64 val)
+{
+ __asm__ __volatile__("mov cr.irr1=%0;;" :: "r"(val) : "memory");
+ ia64_srlz_d();
+}
+
+extern __inline__ __u64
+ia64_get_irr1 (void)
+{
+ __u64 val;
+
+ __asm__ ("mov %0=cr.irr1" : "=r"(val));
+ return val;
+}
+
+extern __inline__ void
+ia64_set_irr2 (__u64 val)
+{
+ __asm__ __volatile__("mov cr.irr2=%0;;" :: "r"(val) : "memory");
+ ia64_srlz_d();
+}
+
+extern __inline__ __u64
+ia64_get_irr2 (void)
+{
+ __u64 val;
+
+ __asm__ ("mov %0=cr.irr2" : "=r"(val));
+ return val;
+}
+
+extern __inline__ void
+ia64_set_irr3 (__u64 val)
+{
+ __asm__ __volatile__("mov cr.irr3=%0;;" :: "r"(val) : "memory");
+ ia64_srlz_d();
+}
+
+extern __inline__ __u64
+ia64_get_irr3 (void)
+{
+ __u64 val;
+
+ __asm__ ("mov %0=cr.irr3" : "=r"(val));
+ return val;
+}
+
+extern __inline__ __u64
+ia64_get_gp(void)
+{
+ __u64 val;
+
+ __asm__ ("mov %0=gp" : "=r"(val));
+ return val;
+}
+
+/* XXX remove the handcoded version once we have a sufficiently clever compiler... */
+#ifdef SMART_COMPILER
+# define ia64_rotr(w,n) \
+ ({ \
+ __u64 _w = (w), _n = (n); \
+ \
+ (_w >> _n) | (_w << (64 - _n)); \
+ })
+#else
+# define ia64_rotr(w,n) \
+ ({ \
+ __u64 result; \
+ asm ("shrp %0=%1,%1,%2" : "=r"(result) : "r"(w), "i"(n)); \
+ result; \
+ })
+#endif
+
+#define ia64_rotl(w,n) ia64_rotr((w),(64)-(n))
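+
+/*
+ * E.g., ia64_rotr(0x0123456789abcdefUL, 8) yields 0xef0123456789abcdUL;
+ * note the shrp form requires a compile-time constant rotate count.
+ */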
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_IA64_PROCESSOR_H */
--- /dev/null
+#ifndef _ASM_IA64_PTRACE_H
+#define _ASM_IA64_PTRACE_H
+
+/*
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998, 1999 Stephane Eranian <eranian@hpl.hp.com>
+ *
+ * 12/07/98 S. Eranian added pt_regs & switch_stack
+ * 12/21/98 D. Mosberger updated to match latest code
+ * 6/17/99 D. Mosberger added second unat member to "struct switch_stack"
+ *
+ */
+/*
+ * When a user process is blocked, its state looks as follows:
+ *
+ * +----------------------+ ------- IA64_STK_OFFSET
+ * | | ^
+ * | struct pt_regs | |
+ * | | |
+ * +----------------------+ |
+ * | | |
+ * | memory stack | |
+ * | (growing downwards) | |
+ * //.....................// |
+ * |
+ * //.....................// |
+ * | | |
+ * +----------------------+ |
+ * | struct switch_stack | |
+ * | | |
+ * +----------------------+ |
+ * | | |
+ * //.....................// |
+ * |
+ * //.....................// |
+ * | | |
+ * | register stack | |
+ * | (growing upwards) | |
+ * | | |
+ * +----------------------+ | --- IA64_RBS_OFFSET
+ * | | | ^
+ * | struct task_struct | | |
+ * current -> | | | |
+ * +----------------------+ -------
+ *
+ * Note that ar.ec is not saved explicitly in pt_regs or switch_stack.
+ * This is because ar.ec is saved as part of ar.pfs.
+ */
+
+#include <linux/config.h>
+
+#include <asm/fpu.h>
+#include <asm/offsets.h>
+
+/*
+ * Base-2 logarithm of number of pages to allocate per task structure
+ * (including register backing store and memory stack):
+ */
+#if defined(CONFIG_IA64_PAGE_SIZE_4KB)
+# define IA64_TASK_STRUCT_LOG_NUM_PAGES 3
+#elif defined(CONFIG_IA64_PAGE_SIZE_8KB)
+# define IA64_TASK_STRUCT_LOG_NUM_PAGES 2
+#elif defined(CONFIG_IA64_PAGE_SIZE_16KB)
+# define IA64_TASK_STRUCT_LOG_NUM_PAGES 1
+#else
+# define IA64_TASK_STRUCT_LOG_NUM_PAGES 0
+#endif
+
+#define IA64_RBS_OFFSET ((IA64_TASK_SIZE + 15) & ~15)
+#define IA64_STK_OFFSET ((1 << IA64_TASK_STRUCT_LOG_NUM_PAGES)*PAGE_SIZE)
+
+#define INIT_TASK_SIZE IA64_STK_OFFSET
+
+#ifndef __ASSEMBLY__
+
+/*
+ * This struct defines the way the registers are saved on system
+ * calls.
+ *
+ * We don't save all floating point registers because the kernel
+ * is compiled to use only a very small subset, so the others are
+ * untouched.
+ *
+ * THIS STRUCTURE MUST BE A MULTIPLE OF 16 BYTES IN SIZE
+ * (because the memory stack pointer MUST ALWAYS be aligned this way)
+ *
+ */
+struct pt_regs {
+ /* The following registers are saved by SAVE_MIN: */
+
+ unsigned long cr_ipsr; /* interrupted task's psr */
+ unsigned long cr_iip; /* interrupted task's instruction pointer */
+ unsigned long cr_ifs; /* interrupted task's function state */
+
+ unsigned long ar_unat; /* interrupted task's NaT register (preserved) */
+ unsigned long ar_pfs; /* prev function state */
+ unsigned long ar_rsc; /* RSE configuration */
+ /* The following two are valid only if cr_ipsr.cpl > 0: */
+ unsigned long ar_rnat; /* RSE NaT */
+ unsigned long ar_bspstore; /* RSE bspstore */
+
+ unsigned long pr; /* 64 predicate registers (1 bit each) */
+ unsigned long b6; /* scratch */
+ unsigned long loadrs; /* size of dirty partition << 16 */
+
+ unsigned long r1; /* the gp pointer */
+ unsigned long r2; /* scratch */
+ unsigned long r3; /* scratch */
+ unsigned long r12; /* interrupted task's memory stack pointer */
+ unsigned long r13; /* thread pointer */
+ unsigned long r14; /* scratch */
+ unsigned long r15; /* scratch */
+
+ unsigned long r8; /* scratch (return value register 0) */
+ unsigned long r9; /* scratch (return value register 1) */
+ unsigned long r10; /* scratch (return value register 2) */
+ unsigned long r11; /* scratch (return value register 3) */
+
+ /* The following registers are saved by SAVE_REST: */
+
+ unsigned long r16; /* scratch */
+ unsigned long r17; /* scratch */
+ unsigned long r18; /* scratch */
+ unsigned long r19; /* scratch */
+ unsigned long r20; /* scratch */
+ unsigned long r21; /* scratch */
+ unsigned long r22; /* scratch */
+ unsigned long r23; /* scratch */
+ unsigned long r24; /* scratch */
+ unsigned long r25; /* scratch */
+ unsigned long r26; /* scratch */
+ unsigned long r27; /* scratch */
+ unsigned long r28; /* scratch */
+ unsigned long r29; /* scratch */
+ unsigned long r30; /* scratch */
+ unsigned long r31; /* scratch */
+
+ unsigned long ar_ccv; /* compare/exchange value */
+ unsigned long ar_fpsr; /* floating point status*/
+
+ unsigned long b0; /* return pointer (bp) */
+ unsigned long b7; /* scratch */
+ /*
+ * Floating point registers that the kernel considers
+ * scratch:
+ */
+ struct ia64_fpreg f6; /* scratch*/
+ struct ia64_fpreg f7; /* scratch*/
+ struct ia64_fpreg f8; /* scratch*/
+ struct ia64_fpreg f9; /* scratch*/
+};
+
+/*
+ * This structure contains the additional registers that need to be
+ * preserved across a context switch. This generally consists of
+ * "preserved" registers.
+ */
+struct switch_stack {
+ unsigned long caller_unat; /* user NaT collection register (preserved) */
+ unsigned long ar_fpsr; /* floating-point status register */
+
+ struct ia64_fpreg f2; /* preserved */
+ struct ia64_fpreg f3; /* preserved */
+ struct ia64_fpreg f4; /* preserved */
+ struct ia64_fpreg f5; /* preserved */
+
+ struct ia64_fpreg f10; /* scratch, but untouched by kernel */
+ struct ia64_fpreg f11; /* scratch, but untouched by kernel */
+ struct ia64_fpreg f12; /* scratch, but untouched by kernel */
+ struct ia64_fpreg f13; /* scratch, but untouched by kernel */
+ struct ia64_fpreg f14; /* scratch, but untouched by kernel */
+ struct ia64_fpreg f15; /* scratch, but untouched by kernel */
+ struct ia64_fpreg f16; /* preserved */
+ struct ia64_fpreg f17; /* preserved */
+ struct ia64_fpreg f18; /* preserved */
+ struct ia64_fpreg f19; /* preserved */
+ struct ia64_fpreg f20; /* preserved */
+ struct ia64_fpreg f21; /* preserved */
+ struct ia64_fpreg f22; /* preserved */
+ struct ia64_fpreg f23; /* preserved */
+ struct ia64_fpreg f24; /* preserved */
+ struct ia64_fpreg f25; /* preserved */
+ struct ia64_fpreg f26; /* preserved */
+ struct ia64_fpreg f27; /* preserved */
+ struct ia64_fpreg f28; /* preserved */
+ struct ia64_fpreg f29; /* preserved */
+ struct ia64_fpreg f30; /* preserved */
+ struct ia64_fpreg f31; /* preserved */
+
+ unsigned long r4; /* preserved */
+ unsigned long r5; /* preserved */
+ unsigned long r6; /* preserved */
+ unsigned long r7; /* preserved */
+
+ unsigned long b0; /* so we can force a direct return in copy_thread */
+ unsigned long b1;
+ unsigned long b2;
+ unsigned long b3;
+ unsigned long b4;
+ unsigned long b5;
+
+ unsigned long ar_pfs; /* previous function state */
+ unsigned long ar_lc; /* loop counter (preserved) */
+ unsigned long ar_unat; /* NaT bits for r4-r7 */
+ unsigned long ar_rnat; /* RSE NaT collection register */
+ unsigned long ar_bspstore; /* RSE dirty base (preserved) */
+ unsigned long pr; /* 64 predicate registers (1 bit each) */
+};
+
+#ifdef __KERNEL__
+ /* given a pointer to a task_struct, return the user's pt_regs */
+# define ia64_task_regs(t) (((struct pt_regs *) ((char *) (t) + IA64_STK_OFFSET)) - 1)
+# define ia64_psr(regs) ((struct ia64_psr *) &(regs)->cr_ipsr)
+# define user_mode(regs) (((struct ia64_psr *) &(regs)->cr_ipsr)->cpl != 0)
+
+ struct task_struct; /* forward decl */
+
+ extern void show_regs (struct pt_regs *);
+ extern long ia64_peek (struct pt_regs *, struct task_struct *, unsigned long addr, long *val);
+ extern long ia64_poke (struct pt_regs *, struct task_struct *, unsigned long addr, long val);
+
+ /* get nat bits for r1-r31 such that bit N==1 iff rN is a NaT */
+ extern long ia64_get_nat_bits (struct pt_regs *pt, struct switch_stack *sw);
+ /* put nat bits for r1-r31 such that rN is a NaT iff bit N==1 */
+ extern void ia64_put_nat_bits (struct pt_regs *pt, struct switch_stack *sw, unsigned long nat);
+
+ extern void ia64_increment_ip (struct pt_regs *pt);
+ extern void ia64_decrement_ip (struct pt_regs *pt);
+#endif
+
+#endif /* !__ASSEMBLY__ */
+
+/*
+ * The number chosen here is somewhat arbitrary but absolutely MUST
+ * not overlap with any of the numbers assigned in <linux/ptrace.h>.
+ */
+#define PTRACE_SINGLEBLOCK 12 /* resume execution until next branch */
+
+#endif /* _ASM_IA64_PTRACE_H */
--- /dev/null
+#ifndef _ASM_IA64_PTRACE_OFFSETS_H
+#define _ASM_IA64_PTRACE_OFFSETS_H
+
+/*
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+/*
+ * The "uarea" that can be accessed via PEEKUSER and POKEUSER is a
+ * virtual structure that would have the following definition:
+ *
+ * struct uarea {
+ * struct ia64_fpreg fph[96]; // f32-f127
+ * struct switch_stack sw;
+ * struct pt_regs pt;
+ * unsigned long rsvd1[358];
+ * unsigned long dbr[8];
+ * unsigned long rsvd2[252];
+ * unsigned long ibr[8];
+ * }
+ */
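+
+/*
+ * Each PT_ constant below is simply the byte offset of the
+ * corresponding save location within this virtual uarea.  Since an
+ * ia64_fpreg is 16 bytes, PT_F33 == PT_F32 + 0x10, and so on.  A
+ * debugger could fetch, say, preserved register r4 with (illustrative
+ * only):
+ *
+ *	val = ptrace(PTRACE_PEEKUSER, pid, PT_R4, 0);
+ */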
+
+/* fph: */
+#define PT_F32 0x0000
+#define PT_F33 0x0010
+#define PT_F34 0x0020
+#define PT_F35 0x0030
+#define PT_F36 0x0040
+#define PT_F37 0x0050
+#define PT_F38 0x0060
+#define PT_F39 0x0070
+#define PT_F40 0x0080
+#define PT_F41 0x0090
+#define PT_F42 0x00a0
+#define PT_F43 0x00b0
+#define PT_F44 0x00c0
+#define PT_F45 0x00d0
+#define PT_F46 0x00e0
+#define PT_F47 0x00f0
+#define PT_F48 0x0100
+#define PT_F49 0x0110
+#define PT_F50 0x0120
+#define PT_F51 0x0130
+#define PT_F52 0x0140
+#define PT_F53 0x0150
+#define PT_F54 0x0160
+#define PT_F55 0x0170
+#define PT_F56 0x0180
+#define PT_F57 0x0190
+#define PT_F58 0x01a0
+#define PT_F59 0x01b0
+#define PT_F60 0x01c0
+#define PT_F61 0x01d0
+#define PT_F62 0x01e0
+#define PT_F63 0x01f0
+#define PT_F64 0x0200
+#define PT_F65 0x0210
+#define PT_F66 0x0220
+#define PT_F67 0x0230
+#define PT_F68 0x0240
+#define PT_F69 0x0250
+#define PT_F70 0x0260
+#define PT_F71 0x0270
+#define PT_F72 0x0280
+#define PT_F73 0x0290
+#define PT_F74 0x02a0
+#define PT_F75 0x02b0
+#define PT_F76 0x02c0
+#define PT_F77 0x02d0
+#define PT_F78 0x02e0
+#define PT_F79 0x02f0
+#define PT_F80 0x0300
+#define PT_F81 0x0310
+#define PT_F82 0x0320
+#define PT_F83 0x0330
+#define PT_F84 0x0340
+#define PT_F85 0x0350
+#define PT_F86 0x0360
+#define PT_F87 0x0370
+#define PT_F88 0x0380
+#define PT_F89 0x0390
+#define PT_F90 0x03a0
+#define PT_F91 0x03b0
+#define PT_F92 0x03c0
+#define PT_F93 0x03d0
+#define PT_F94 0x03e0
+#define PT_F95 0x03f0
+#define PT_F96 0x0400
+#define PT_F97 0x0410
+#define PT_F98 0x0420
+#define PT_F99 0x0430
+#define PT_F100 0x0440
+#define PT_F101 0x0450
+#define PT_F102 0x0460
+#define PT_F103 0x0470
+#define PT_F104 0x0480
+#define PT_F105 0x0490
+#define PT_F106 0x04a0
+#define PT_F107 0x04b0
+#define PT_F108 0x04c0
+#define PT_F109 0x04d0
+#define PT_F110 0x04e0
+#define PT_F111 0x04f0
+#define PT_F112 0x0500
+#define PT_F113 0x0510
+#define PT_F114 0x0520
+#define PT_F115 0x0530
+#define PT_F116 0x0540
+#define PT_F117 0x0550
+#define PT_F118 0x0560
+#define PT_F119 0x0570
+#define PT_F120 0x0580
+#define PT_F121 0x0590
+#define PT_F122 0x05a0
+#define PT_F123 0x05b0
+#define PT_F124 0x05c0
+#define PT_F125 0x05d0
+#define PT_F126 0x05e0
+#define PT_F127 0x05f0
+/* switch stack: */
+#define PT_CALLER_UNAT 0x0600
+#define PT_KERNEL_FPSR 0x0608
+#define PT_F2 0x0610
+#define PT_F3 0x0620
+#define PT_F4 0x0630
+#define PT_F5 0x0640
+#define PT_F10 0x0650
+#define PT_F11 0x0660
+#define PT_F12 0x0670
+#define PT_F13 0x0680
+#define PT_F14 0x0690
+#define PT_F15 0x06a0
+#define PT_F16 0x06b0
+#define PT_F17 0x06c0
+#define PT_F18 0x06d0
+#define PT_F19 0x06e0
+#define PT_F20 0x06f0
+#define PT_F21 0x0700
+#define PT_F22 0x0710
+#define PT_F23 0x0720
+#define PT_F24 0x0730
+#define PT_F25 0x0740
+#define PT_F26 0x0750
+#define PT_F27 0x0760
+#define PT_F28 0x0770
+#define PT_F29 0x0780
+#define PT_F30 0x0790
+#define PT_F31 0x07a0
+#define PT_R4 0x07b0
+#define PT_R5 0x07b8
+#define PT_R6 0x07c0
+#define PT_R7 0x07c8
+#define PT_K_B0 0x07d0
+#define PT_B1 0x07d8
+#define PT_B2 0x07e0
+#define PT_B3 0x07e8
+#define PT_B4 0x07f0
+#define PT_B5 0x07f8
+#define PT_K_AR_PFS 0x0800
+#define PT_AR_LC 0x0808
+#define PT_K_AR_UNAT 0x0810
+#define PT_K_AR_RNAT 0x0818
+#define PT_K_AR_BSPSTORE 0x0820
+#define PT_K_PR 0x0828
+/* pt_regs */
+#define PT_CR_IPSR 0x0830
+#define PT_CR_IIP 0x0838
+#define PT_CR_IFS 0x0840
+#define PT_AR_UNAT 0x0848
+#define PT_AR_PFS 0x0850
+#define PT_AR_RSC 0x0858
+#define PT_AR_RNAT 0x0860
+#define PT_AR_BSPSTORE 0x0868
+#define PT_PR 0x0870
+#define PT_B6 0x0878
+#define PT_AR_BSP 0x0880
+#define PT_R1 0x0888
+#define PT_R2 0x0890
+#define PT_R3 0x0898
+#define PT_R12 0x08a0
+#define PT_R13 0x08a8
+#define PT_R14 0x08b0
+#define PT_R15 0x08b8
+#define PT_R8 0x08c0
+#define PT_R9 0x08c8
+#define PT_R10 0x08d0
+#define PT_R11 0x08d8
+#define PT_R16 0x08e0
+#define PT_R17 0x08e8
+#define PT_R18 0x08f0
+#define PT_R19 0x08f8
+#define PT_R20 0x0900
+#define PT_R21 0x0908
+#define PT_R22 0x0910
+#define PT_R23 0x0918
+#define PT_R24 0x0920
+#define PT_R25 0x0928
+#define PT_R26 0x0930
+#define PT_R27 0x0938
+#define PT_R28 0x0940
+#define PT_R29 0x0948
+#define PT_R30 0x0950
+#define PT_R31 0x0958
+#define PT_AR_CCV 0x0960
+#define PT_AR_FPSR 0x0968
+#define PT_B0 0x0970
+#define PT_B7 0x0978
+#define PT_F6 0x0980
+#define PT_F7 0x0990
+#define PT_F8 0x09a0
+#define PT_F9 0x09b0
+
+#define PT_DBR 0x2000 /* data breakpoint registers */
+#define PT_IBR 0x3000 /* instruction breakpoint registers */
+
+#endif /* _ASM_IA64_PTRACE_OFFSETS_H */
--- /dev/null
+#ifndef _ASM_IA64_RESOURCE_H
+#define _ASM_IA64_RESOURCE_H
+
+/*
+ * Resource limits
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#define RLIMIT_CPU 0 /* CPU time in ms */
+#define RLIMIT_FSIZE 1 /* Maximum filesize */
+#define RLIMIT_DATA 2 /* max data size */
+#define RLIMIT_STACK 3 /* max stack size */
+#define RLIMIT_CORE 4 /* max core file size */
+#define RLIMIT_RSS 5 /* max resident set size */
+#define RLIMIT_NPROC 6 /* max number of processes */
+#define RLIMIT_NOFILE 7 /* max number of open files */
+#define RLIMIT_MEMLOCK 8 /* max locked-in-memory address space */
+#define RLIMIT_AS 9 /* address space limit */
+
+#define RLIM_NLIMITS 10
+
+/*
+ * SuS says limits have to be unsigned.
+ * Which makes a ton more sense anyway.
+ */
+#define RLIM_INFINITY (~0UL)
+
+# ifdef __KERNEL__
+
+#define INIT_RLIMITS \
+{ \
+ { RLIM_INFINITY, RLIM_INFINITY }, \
+ { RLIM_INFINITY, RLIM_INFINITY }, \
+ { RLIM_INFINITY, RLIM_INFINITY }, \
+ { _STK_LIM, RLIM_INFINITY }, \
+ { 0, RLIM_INFINITY }, \
+ { RLIM_INFINITY, RLIM_INFINITY }, \
+ { 0, 0 }, \
+ { INR_OPEN, INR_OPEN }, \
+ { RLIM_INFINITY, RLIM_INFINITY }, \
+ { RLIM_INFINITY, RLIM_INFINITY }, \
+}
+
+# endif /* __KERNEL__ */
+
+#endif /* _ASM_IA64_RESOURCE_H */
--- /dev/null
+#ifndef _ASM_IA64_RSE_H
+#define _ASM_IA64_RSE_H
+
+/*
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * Register stack engine related helper functions. This file may be
+ * used in applications, so be careful about the name-space and give
+ * some consideration to non-GNU C compilers (though __inline__ is
+ * fine).
+ */
+
+static __inline__ unsigned long
+ia64_rse_slot_num (unsigned long *addr)
+{
+ return (((unsigned long) addr) >> 3) & 0x3f;
+}
+
+/*
+ * Return TRUE if ADDR is the address of an RNAT slot.
+ */
+static __inline__ unsigned long
+ia64_rse_is_rnat_slot (unsigned long *addr)
+{
+ return ia64_rse_slot_num(addr) == 0x3f;
+}
+
+/*
+ * Returns the address of the RNAT slot that covers the slot at
+ * address SLOT_ADDR.
+ */
+static __inline__ unsigned long *
+ia64_rse_rnat_addr (unsigned long *slot_addr)
+{
+ return (unsigned long *) ((unsigned long) slot_addr | (0x3f << 3));
+}
+
+/*
+ * Calculate the number of registers in the dirty partition between
+ * BSPSTORE and BSP.  This isn't simply (BSP-BSPSTORE)/8 because
+ * every 64th slot is used to store ar.rnat.
+ */
+static __inline__ unsigned long
+ia64_rse_num_regs (unsigned long *bspstore, unsigned long *bsp)
+{
+ unsigned long slots = (bsp - bspstore);
+
+ return slots - (ia64_rse_slot_num(bspstore) + slots)/0x40;
+}
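+
+/*
+ * Worked example: with BSPSTORE at slot 0 and BSP 130 slots higher,
+ * slots 63 and 127 hold RNAT collections, so ia64_rse_num_regs()
+ * yields 130 - (0 + 130)/64 == 128 registers.
+ */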
+
+/*
+ * The inverse of the above: given bspstore and the number of
+ * registers, calculate ar.bsp.
+ */
+static __inline__ unsigned long *
+ia64_rse_skip_regs (unsigned long *addr, long num_regs)
+{
+ long delta = ia64_rse_slot_num(addr) + num_regs;
+
+ if (num_regs < 0)
+ delta -= 0x3e;
+ return addr + num_regs + delta/0x3f;
+}
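+
+/*
+ * E.g., skipping 63 registers from an address at slot 0 must hop over
+ * the RNAT collection at slot 63: delta == 0 + 63, 63/0x3f == 1, so
+ * the result is addr + 63 + 1, i.e., slot 64.
+ */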
+
+#endif /* _ASM_IA64_RSE_H */
--- /dev/null
+#ifndef _ASM_IA64_SAL_H
+#define _ASM_IA64_SAL_H
+
+/*
+ * System Abstraction Layer definitions.
+ *
+ * This is based on version 2.5 of the manual "IA-64 System
+ * Abstraction Layer".
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999 Srinivasa Prasad Thirumalachar <sprasad@sprasad.engr.sgi.com>
+ *
+ * 99/09/29 davidm Updated for SAL 2.6.
+ */
+
+#include <linux/config.h>
+
+#include <asm/pal.h>
+#include <asm/system.h>
+
+extern spinlock_t sal_lock;
+
+#ifdef __GCC_MULTIREG_RETVALS__
+ /* If multi-register return values are returned according to the
+ ia-64 calling convention, we can call ia64_sal directly. */
+# define __SAL_CALL(result,args...) result = (*ia64_sal)(args)
+#else
+ /* If multi-register return values are returned through an aggregate
+ allocated in the caller, we need to use the stub implemented in
+ sal-stub.S. */
+ extern struct ia64_sal_retval ia64_sal_stub (u64 index, ...);
+# define __SAL_CALL(result,args...) result = ia64_sal_stub(args)
+#endif
+
+#ifdef CONFIG_SMP
+# define SAL_CALL(result,args...) do { \
+ spin_lock(&sal_lock); \
+ __SAL_CALL(result,args); \
+ spin_unlock(&sal_lock); \
+} while (0)
+#else
+# define SAL_CALL(result,args...) __SAL_CALL(result,args)
+#endif
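+
+/*
+ * Typical usage (cf. the inline wrappers near the end of this file):
+ *
+ *	struct ia64_sal_retval isrv;
+ *
+ *	SAL_CALL(isrv, SAL_FREQ_BASE, SAL_FREQ_BASE_PLATFORM);
+ *	if (isrv.status == 0)
+ *		... isrv.v0 now holds the base frequency ...
+ */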
+
+#define SAL_SET_VECTORS 0x01000000
+#define SAL_GET_STATE_INFO 0x01000001
+#define SAL_GET_STATE_INFO_SIZE 0x01000002
+#define SAL_CLEAR_STATE_INFO 0x01000003
+#define SAL_MC_RENDEZ 0x01000004
+#define SAL_MC_SET_PARAMS 0x01000005
+#define SAL_REGISTER_PHYSICAL_ADDR 0x01000006
+
+#define SAL_CACHE_FLUSH 0x01000008
+#define SAL_CACHE_INIT 0x01000009
+#define SAL_PCI_CONFIG_READ 0x01000010
+#define SAL_PCI_CONFIG_WRITE 0x01000011
+#define SAL_FREQ_BASE 0x01000012
+
+#define SAL_UPDATE_PAL 0x01000020
+
+struct ia64_sal_retval {
+ /*
+ * A zero status value indicates call completed without error.
+ * A negative status value indicates reason of call failure.
+ * A positive status value indicates success but an
+ * informational value should be printed (e.g., "reboot for
+ * change to take effect").
+ */
+ s64 status;
+ u64 v0;
+ u64 v1;
+ u64 v2;
+};
+
+typedef struct ia64_sal_retval (*ia64_sal_handler) (u64, ...);
+
+enum {
+ SAL_FREQ_BASE_PLATFORM = 0,
+ SAL_FREQ_BASE_INTERVAL_TIMER = 1,
+ SAL_FREQ_BASE_REALTIME_CLOCK = 2
+};
+
+/*
+ * The SAL system table is followed by a variable number of variable
+ * length descriptors. The structure of these descriptors follows
+ * below.
+ */
+struct ia64_sal_systab {
+ char signature[4]; /* should be "SST_" */
+ int size; /* size of this table in bytes */
+ unsigned char sal_rev_minor;
+ unsigned char sal_rev_major;
+ unsigned short entry_count; /* # of entries in variable portion */
+ unsigned char checksum;
+ char ia32_bios_present;
+ unsigned short reserved1;
+ char oem_id[32]; /* ASCII NUL terminated OEM id
+ (terminating NUL is missing if
+ string is exactly 32 bytes long). */
+ char product_id[32]; /* ASCII product id */
+ char reserved2[16];
+};
+
+enum SAL_Systab_Entry_Type {
+ SAL_DESC_ENTRY_POINT = 0,
+ SAL_DESC_MEMORY = 1,
+ SAL_DESC_PLATFORM_FEATURE = 2,
+ SAL_DESC_TR = 3,
+ SAL_DESC_PTC = 4,
+ SAL_DESC_AP_WAKEUP = 5
+};
+
+/*
+ * Entry type: Size:
+ * 0 48
+ * 1 32
+ * 2 16
+ * 3 32
+ * 4 16
+ * 5 16
+ */
+#define SAL_DESC_SIZE(type) "\060\040\020\040\020\020"[(unsigned) type]
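+
+/*
+ * The string above encodes the sizes in octal: e.g.,
+ * SAL_DESC_SIZE(SAL_DESC_MEMORY) indexes character '\040' == 32,
+ * matching entry type 1 in the table.
+ */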
+
+struct ia64_sal_desc_entry_point {
+ char type;
+ char reserved1[7];
+ s64 pal_proc;
+ s64 sal_proc;
+ s64 gp;
+ char reserved2[16];
+};
+
+struct ia64_sal_desc_memory {
+ char type;
+ char used_by_sal; /* needs to be mapped for SAL? */
+ char mem_attr; /* current memory attribute setting */
+ char access_rights; /* access rights set up by SAL */
+ char mem_attr_mask; /* mask of supported memory attributes */
+ char reserved1;
+ char mem_type; /* memory type */
+ char mem_usage; /* memory usage */
+ s64 addr; /* physical address of memory */
+ unsigned int length; /* length (multiple of 4KB pages) */
+ unsigned int reserved2;
+ char oem_reserved[8];
+};
+
+#define IA64_SAL_PLATFORM_FEATURE_BUS_LOCK (1 << 0)
+#define IA64_SAL_PLATFORM_FEATURE_IRQ_REDIR_HINT (1 << 1)
+#define IA64_SAL_PLATFORM_FEATURE_IPI_REDIR_HINT (1 << 2)
+
+struct ia64_sal_desc_platform_feature {
+ char type;
+ unsigned char feature_mask;
+ char reserved1[14];
+};
+
+struct ia64_sal_desc_tr {
+ char type;
+ char tr_type; /* 0 == instruction, 1 == data */
+ char regnum; /* translation register number */
+ char reserved1[5];
+ s64 addr; /* virtual address of area covered */
+ s64 page_size; /* encoded page size */
+ char reserved2[8];
+};
+
+struct ia64_sal_desc_ptc {
+ char type;
+ char reserved1[3];
+ unsigned int num_domains; /* # of coherence domains */
+ long domain_info; /* physical address of domain info table */
+};
+
+#define IA64_SAL_AP_EXTERNAL_INT 0
+
+struct ia64_sal_desc_ap_wakeup {
+ char type;
+ char mechanism; /* 0 == external interrupt */
+ char reserved1[6];
+ long vector; /* interrupt vector in range 0x10-0xff */
+};
+
+extern ia64_sal_handler ia64_sal;
+
+extern const char *ia64_sal_strerror (long status);
+extern void ia64_sal_init (struct ia64_sal_systab *sal_systab);
+
+/* SAL information type encodings */
+enum {
+ SAL_INFO_TYPE_MCA = 0, /* Machine check abort information */
+ SAL_INFO_TYPE_INIT = 1, /* Init information */
+ SAL_INFO_TYPE_CMC = 2 /* Corrected machine check information */
+};
+
+/* Sub information type encodings */
+enum {
+ SAL_SUB_INFO_TYPE_PROCESSOR = 0, /* Processor information */
+ SAL_SUB_INFO_TYPE_PLATFORM = 1 /* Platform information */
+};
+
+/* Encodings for machine check parameter types */
+enum {
+ SAL_MC_PARAM_RENDEZ_INT = 1, /* Rendezvous interrupt */
+ SAL_MC_PARAM_RENDEZ_WAKEUP = 2 /* Wakeup */
+};
+
+/* Encodings for rendezvous mechanisms */
+enum {
+ SAL_MC_PARAM_MECHANISM_INT = 1, /* Use interrupt */
+ SAL_MC_PARAM_MECHANISM_MEM = 2 /* Use memory synchronization variable*/
+};
+
+/* Encodings for vectors which can be registered by the OS with SAL */
+enum {
+ SAL_VECTOR_OS_MCA = 0,
+ SAL_VECTOR_OS_INIT = 1,
+ SAL_VECTOR_OS_BOOT_RENDEZ = 2
+};
+
+/* Definition of the SAL Error Log from the SAL spec */
+
+/* Definition of timestamp according to SAL spec for logging purposes */
+
+typedef struct sal_log_timestamp_s {
+ u8 slh_century; /* Century (19, 20, 21, ...) */
+ u8 slh_year; /* Year (00..99) */
+ u8 slh_month; /* Month (1..12) */
+ u8 slh_day; /* Day (1..31) */
+ u8 slh_reserved;
+ u8 slh_hour; /* Hour (0..23) */
+ u8 slh_minute; /* Minute (0..59) */
+ u8 slh_second; /* Second (0..59) */
+} sal_log_timestamp_t;
+
+
+#define MAX_CACHE_ERRORS 6
+#define MAX_TLB_ERRORS 6
+#define MAX_BUS_ERRORS 1
+
+typedef struct sal_log_processor_info_s {
+ struct {
+ u64 slpi_psi : 1,
+ slpi_cache_check: MAX_CACHE_ERRORS,
+ slpi_tlb_check : MAX_TLB_ERRORS,
+ slpi_bus_check : MAX_BUS_ERRORS,
+ slpi_reserved2 : (31 - (MAX_TLB_ERRORS + MAX_CACHE_ERRORS
+ + MAX_BUS_ERRORS)),
+ slpi_minstate : 1,
+ slpi_bank1_gr : 1,
+ slpi_br : 1,
+ slpi_cr : 1,
+ slpi_ar : 1,
+ slpi_rr : 1,
+ slpi_fr : 1,
+ slpi_reserved1 : 25;
+ } slpi_valid;
+
+ pal_processor_state_info_t slpi_processor_state_info;
+
+ struct {
+ pal_cache_check_info_t slpi_cache_check;
+ u64 slpi_target_address;
+ } slpi_cache_check_info[MAX_CACHE_ERRORS];
+
+ pal_tlb_check_info_t slpi_tlb_check_info[MAX_TLB_ERRORS];
+
+ struct {
+ pal_bus_check_info_t slpi_bus_check;
+ u64 slpi_requestor_addr;
+ u64 slpi_responder_addr;
+ u64 slpi_target_addr;
+ } slpi_bus_check_info[MAX_BUS_ERRORS];
+
+ pal_min_state_area_t slpi_min_state_area;
+ u64 slpi_bank1_gr[16];
+ u64 slpi_bank1_nat_bits;
+ u64 slpi_br[8];
+ u64 slpi_cr[128];
+ u64 slpi_ar[128];
+ u64 slpi_rr[8];
+ u64 slpi_fr[128];
+} sal_log_processor_info_t;
+
+#define sal_log_processor_info_psi_valid slpi_valid.slpi_psi
+#define sal_log_processor_info_cache_check_valid slpi_valid.slpi_cache_check
+#define sal_log_processor_info_tlb_check_valid slpi_valid.slpi_tlb_check
+#define sal_log_processor_info_bus_check_valid slpi_valid.slpi_bus_check
+#define sal_log_processor_info_minstate_valid slpi_valid.slpi_minstate
+#define sal_log_processor_info_bank1_gr_valid slpi_valid.slpi_bank1_gr
+#define sal_log_processor_info_br_valid slpi_valid.slpi_br
+#define sal_log_processor_info_cr_valid slpi_valid.slpi_cr
+#define sal_log_processor_info_ar_valid slpi_valid.slpi_ar
+#define sal_log_processor_info_rr_valid slpi_valid.slpi_rr
+#define sal_log_processor_info_fr_valid slpi_valid.slpi_fr
+
+typedef struct sal_log_header_s {
+ u64 slh_next_log; /* Offset of the next log from the
+ * beginning of this structure.
+ */
+ uint slh_log_len; /* Length of this error log in bytes */
+ ushort slh_log_type; /* Type of log (0 - cpu ,1 - platform) */
+ ushort slh_log_sub_type; /* SGI specific sub type */
+ sal_log_timestamp_t slh_log_timestamp; /* Timestamp */
+ u64 slh_log_dev_spec_info; /* For processor log this field will
+ * contain an area architected for all
+ * IA-64 processors. For platform log
+ * this field will contain information
+ * specific to the hardware
+ * implementation.
+ */
+} sal_log_header_t;
+
+
+/*
+ * Now define a couple of inline functions for improved type checking
+ * and convenience.
+ */
+extern inline long
+ia64_sal_freq_base (unsigned long which, unsigned long *ticks_per_second,
+ unsigned long *drift_info)
+{
+ struct ia64_sal_retval isrv;
+
+ SAL_CALL(isrv, SAL_FREQ_BASE, which);
+ *ticks_per_second = isrv.v0;
+ *drift_info = isrv.v1;
+ return isrv.status;
+}
+
+/* Flush all the processor and platform level instruction and/or data caches */
+extern inline s64
+ia64_sal_cache_flush (u64 cache_type)
+{
+ struct ia64_sal_retval isrv;
+ SAL_CALL(isrv, SAL_CACHE_FLUSH, cache_type);
+ return isrv.status;
+}
+
+/* Initialize all the processor and platform level instruction and data caches */
+extern inline s64
+ia64_sal_cache_init (void)
+{
+ struct ia64_sal_retval isrv;
+ SAL_CALL(isrv, SAL_CACHE_INIT);
+ return isrv.status;
+}
+
+/* Clear the processor and platform information logged by SAL with respect to the
+ * machine state at the time of MCAs, INITs or CMCs.
+ */
+extern inline s64
+ia64_sal_clear_state_info (u64 sal_info_type, u64 sal_info_sub_type)
+{
+ struct ia64_sal_retval isrv;
+ SAL_CALL(isrv, SAL_CLEAR_STATE_INFO, sal_info_type, sal_info_sub_type);
+ return isrv.status;
+}
+
+
+/* Get the processor and platform information logged by SAL with respect to the machine
+ * state at the time of the MCAs, INITs or CMCs.
+ */
+extern inline u64
+ia64_sal_get_state_info (u64 sal_info_type, u64 sal_info_sub_type, u64 *sal_info)
+{
+ struct ia64_sal_retval isrv;
+ SAL_CALL(isrv, SAL_GET_STATE_INFO, sal_info_type, sal_info_sub_type, sal_info);
+ if (isrv.status)
+ return 0;
+ return isrv.v0;
+}
+
+/* Get the maximum size of the information logged by SAL with respect to the machine
+ * state at the time of MCAs, INITs or CMCs
+ */
+extern inline u64
+ia64_sal_get_state_info_size (u64 sal_info_type, u64 sal_info_sub_type)
+{
+ struct ia64_sal_retval isrv;
+ SAL_CALL(isrv, SAL_GET_STATE_INFO_SIZE, sal_info_type, sal_info_sub_type);
+ if (isrv.status)
+ return 0;
+ return isrv.v0;
+}
+
+/* Causes the processor to go into a spin loop within SAL where SAL awaits a wakeup
+ * from the monarch processor.
+ */
+extern inline s64
+ia64_sal_mc_rendez (void)
+{
+ struct ia64_sal_retval isrv;
+ SAL_CALL(isrv, SAL_MC_RENDEZ);
+ return isrv.status;
+}
+
+/* Allow the OS to specify the interrupt number to be used by SAL to interrupt OS during
+ * the machine check rendezvous sequence as well as the mechanism to wake up the
+ * non-monarch processor at the end of machine check processing.
+ */
+extern inline s64
+ia64_sal_mc_set_params (u64 param_type, u64 i_or_m, u64 i_or_m_val, u64 timeout)
+{
+ struct ia64_sal_retval isrv;
+ SAL_CALL(isrv, SAL_MC_SET_PARAMS, param_type, i_or_m, i_or_m_val, timeout);
+ return isrv.status;
+}
+
+/* Read from PCI configuration space */
+extern inline s64
+ia64_sal_pci_config_read (u64 pci_config_addr, u64 size, u64 *value)
+{
+ struct ia64_sal_retval isrv;
+ SAL_CALL(isrv, SAL_PCI_CONFIG_READ, pci_config_addr, size);
+ if (value)
+ *value = isrv.v0;
+ return isrv.status;
+}
+
+/* Write to PCI configuration space */
+extern inline s64
+ia64_sal_pci_config_write (u64 pci_config_addr, u64 size, u64 value)
+{
+ struct ia64_sal_retval isrv;
+#if defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) && !defined(SAPIC_FIXED)
+ extern spinlock_t ivr_read_lock;
+ unsigned long flags;
+
+ /*
+ * Avoid PCI configuration read/write overwrite -- A0 Interrupt loss workaround
+ */
+ spin_lock_irqsave(&ivr_read_lock, flags);
+#endif
+ SAL_CALL(isrv, SAL_PCI_CONFIG_WRITE, pci_config_addr, size, value);
+#if defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) && !defined(SAPIC_FIXED)
+ spin_unlock_irqrestore(&ivr_read_lock, flags);
+#endif
+ return isrv.status;
+}
+
+/*
+ * Register physical addresses of locations needed by SAL when SAL
+ * procedures are invoked in virtual mode.
+ */
+extern inline s64
+ia64_sal_register_physical_addr (u64 phys_entry, u64 phys_addr)
+{
+ struct ia64_sal_retval isrv;
+ SAL_CALL(isrv, SAL_REGISTER_PHYSICAL_ADDR, phys_entry, phys_addr);
+ return isrv.status;
+}
+
+/* Register software dependent code locations within SAL. These locations are handlers
+ * or entry points where SAL will pass control for the specified event. These event
+ * handlers are for the boot rendezvous, MCAs and INIT scenarios.
+ */
+extern inline s64
+ia64_sal_set_vectors (u64 vector_type,
+ u64 handler_addr1, u64 gp1, u64 handler_len1,
+ u64 handler_addr2, u64 gp2, u64 handler_len2)
+{
+ struct ia64_sal_retval isrv;
+ SAL_CALL(isrv, SAL_SET_VECTORS, vector_type,
+ handler_addr1, gp1, handler_len1,
+ handler_addr2, gp2, handler_len2);
+
+ return isrv.status;
+}
+
+/* Update the contents of the PAL block in the non-volatile storage device */
+extern inline s64
+ia64_sal_update_pal (u64 param_buf, u64 scratch_buf, u64 scratch_buf_size,
+ u64 *error_code, u64 *scratch_buf_size_needed)
+{
+ struct ia64_sal_retval isrv;
+ SAL_CALL(isrv, SAL_UPDATE_PAL, param_buf, scratch_buf, scratch_buf_size);
+ if (error_code)
+ *error_code = isrv.v0;
+ if (scratch_buf_size_needed)
+ *scratch_buf_size_needed = isrv.v1;
+ return isrv.status;
+}
+
+#endif /* _ASM_IA64_SAL_H */
--- /dev/null
+#ifndef _ASM_IA64_SCATTERLIST_H
+#define _ASM_IA64_SCATTERLIST_H
+
+/*
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+struct scatterlist {
+ char *address; /* location data is to be transferred to */
+ /*
+ * Location of actual buffer if ADDRESS points to a DMA
+ * indirection buffer, NULL otherwise:
+ */
+ char *alt_address;
+ unsigned int length; /* buffer length */
+};
+
+#define ISA_DMA_THRESHOLD (~0UL)
+
+#endif /* _ASM_IA64_SCATTERLIST_H */
--- /dev/null
+#ifndef _ASM_IA64_SEGMENT_H
+#define _ASM_IA64_SEGMENT_H
+
+/* Only here because we have some old header files that expect it.. */
+
+#endif /* _ASM_IA64_SEGMENT_H */
--- /dev/null
+#ifndef _ASM_IA64_SEMAPHORE_H
+#define _ASM_IA64_SEMAPHORE_H
+
+/*
+ * Copyright (C) 1998-2000 Hewlett-Packard Co
+ * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <linux/wait.h>
+
+#include <asm/atomic.h>
+
+struct semaphore {
+ atomic_t count;
+ int sleepers;
+ wait_queue_head_t wait;
+#if WAITQUEUE_DEBUG
+ long __magic; /* initialized by __SEM_DEBUG_INIT() */
+#endif
+};
+
+#if WAITQUEUE_DEBUG
+# define __SEM_DEBUG_INIT(name) , (long) &(name).__magic
+#else
+# define __SEM_DEBUG_INIT(name)
+#endif
+
+#define __SEMAPHORE_INITIALIZER(name,count) \
+{ \
+ ATOMIC_INIT(count), 0, __WAIT_QUEUE_HEAD_INITIALIZER((name).wait) \
+ __SEM_DEBUG_INIT(name) \
+}
+
+#define __MUTEX_INITIALIZER(name) __SEMAPHORE_INITIALIZER(name,1)
+
+#define __DECLARE_SEMAPHORE_GENERIC(name,count) \
+ struct semaphore name = __SEMAPHORE_INITIALIZER(name, count)
+
+#define DECLARE_MUTEX(name) __DECLARE_SEMAPHORE_GENERIC(name, 1)
+#define DECLARE_MUTEX_LOCKED(name) __DECLARE_SEMAPHORE_GENERIC(name, 0)
+
+extern inline void
+sema_init (struct semaphore *sem, int val)
+{
+ *sem = (struct semaphore) __SEMAPHORE_INITIALIZER(*sem, val);
+}
+
+static inline void
+init_MUTEX (struct semaphore *sem)
+{
+ sema_init(sem, 1);
+}
+
+static inline void
+init_MUTEX_LOCKED (struct semaphore *sem)
+{
+ sema_init(sem, 0);
+}
+
+extern void __down (struct semaphore * sem);
+extern int __down_interruptible (struct semaphore * sem);
+extern int __down_trylock (struct semaphore * sem);
+extern void __up (struct semaphore * sem);
+
+extern spinlock_t semaphore_wake_lock;
+
+/*
+ * Atomically decrement the semaphore's count. If it goes negative,
+ * block the calling thread in the TASK_UNINTERRUPTIBLE state.
+ */
+extern inline void
+down (struct semaphore *sem)
+{
+#if WAITQUEUE_DEBUG
+ CHECK_MAGIC(sem->__magic);
+#endif
+ if (atomic_dec_return(&sem->count) < 0)
+ __down(sem);
+}
+
+/*
+ * Atomically decrement the semaphore's count. If it goes negative,
+ * block the calling thread in the TASK_INTERRUPTIBLE state.
+ */
+extern inline int
+down_interruptible (struct semaphore * sem)
+{
+ int ret = 0;
+
+#if WAITQUEUE_DEBUG
+ CHECK_MAGIC(sem->__magic);
+#endif
+ if (atomic_dec_return(&sem->count) < 0)
+ ret = __down_interruptible(sem);
+ return ret;
+}
+
+extern inline int
+down_trylock (struct semaphore *sem)
+{
+ int ret = 0;
+
+#if WAITQUEUE_DEBUG
+ CHECK_MAGIC(sem->__magic);
+#endif
+ if (atomic_dec_return(&sem->count) < 0)
+ ret = __down_trylock(sem);
+ return ret;
+}
+
+extern inline void
+up (struct semaphore * sem)
+{
+#if WAITQUEUE_DEBUG
+ CHECK_MAGIC(sem->__magic);
+#endif
+ if (atomic_inc_return(&sem->count) <= 0)
+ __up(sem);
+}
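+
+/*
+ * Minimal usage sketch (hypothetical code, not part of this header):
+ *
+ *	static DECLARE_MUTEX(my_mutex);	... count initialized to 1 ...
+ *
+ *	down(&my_mutex);		... may sleep uninterruptibly ...
+ *	... critical section ...
+ *	up(&my_mutex);
+ */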
+
+/*
+ * rw mutexes (should that be mutices? =) -- throw rw spinlocks and
+ * semaphores together, and this is what we end up with...
+ *
+ * The lock is initialized to BIAS. This way, a writer subtracts BIAS
+ * and gets 0 for the case of an uncontended lock. Readers decrement
+ * by 1 and see a positive value when uncontended, negative if there
+ * are writers waiting (in which case it goes to sleep). BIAS must be
+ * chosen such that subtracting BIAS once per CPU will result either
+ * in zero (uncontended case) or in a negative value (contention
+ * case). On the other hand, BIAS must be at least as big as the
+ * number of processes in the system.
+ *
+ * On IA-64, we use a BIAS value of 0x100000000, which supports up to
+ * 2 billion (2^31) processors and 4 billion processes.
+ *
+ * In terms of fairness, when there is heavy use of the lock, we want
+ * to see the lock being passed back and forth between readers and
+ * writers (like in a producer/consumer style of communication).
+ *
+
+ For
+ * liveness, it would be necessary to process the blocked readers and
+ * writers in FIFO order. However, we don't do this (yet). I suppose
+ * if you have a lock that is _that_ heavily contested, you're in big
+ * trouble anyhow.
+ *
+ * -ben (with clarifications & IA-64 comments by davidm)
+ */
+#define RW_LOCK_BIAS 0x100000000ul
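+
+/*
+ * Worked example: starting from count == RW_LOCK_BIAS, a writer
+ * subtracting RW_LOCK_BIAS sees 0 (uncontended write lock), while a
+ * single reader subtracting 1 leaves 0xffffffff, which is still
+ * positive, so the read lock is granted without blocking.
+ */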
+
+struct rw_semaphore {
+ volatile long count;
+ volatile __u8 write_bias_granted;
+ volatile __u8 read_bias_granted;
+ __u16 pad1;
+ __u32 pad2;
+ wait_queue_head_t wait;
+ wait_queue_head_t write_bias_wait;
+#if WAITQUEUE_DEBUG
+ long __magic;
+ atomic_t readers;
+ atomic_t writers;
+#endif
+};
+
+#if WAITQUEUE_DEBUG
+# define __RWSEM_DEBUG_INIT , ATOMIC_INIT(0), ATOMIC_INIT(0)
+#else
+# define __RWSEM_DEBUG_INIT
+#endif
+
+#define __RWSEM_INITIALIZER(name,count) \
+{ \
+ (count), 0, 0, 0, 0, __WAIT_QUEUE_HEAD_INITIALIZER((name).wait), \
+ __WAIT_QUEUE_HEAD_INITIALIZER((name).write_bias_wait) \
+ __SEM_DEBUG_INIT(name) __RWSEM_DEBUG_INIT \
+}
+
+#define __DECLARE_RWSEM_GENERIC(name,count) \
+ struct rw_semaphore name = __RWSEM_INITIALIZER(name,count)
+
+#define DECLARE_RWSEM(name) __DECLARE_RWSEM_GENERIC(name, RW_LOCK_BIAS)
+#define DECLARE_RWSEM_READ_LOCKED(name) __DECLARE_RWSEM_GENERIC(name, RW_LOCK_BIAS - 1)
+#define DECLARE_RWSEM_WRITE_LOCKED(name) __DECLARE_RWSEM_GENERIC(name, 0)
+
+extern void __down_read_failed (struct rw_semaphore *sem, long count);
+extern void __down_write_failed (struct rw_semaphore *sem, long count);
+extern void __rwsem_wake (struct rw_semaphore *sem, long count);
+
+extern inline void
+init_rwsem (struct rw_semaphore *sem)
+{
+ sem->count = RW_LOCK_BIAS;
+ sem->read_bias_granted = 0;
+ sem->write_bias_granted = 0;
+ init_waitqueue_head(&sem->wait);
+ init_waitqueue_head(&sem->write_bias_wait);
+#if WAITQUEUE_DEBUG
+ sem->__magic = (long)&sem->__magic;
+ atomic_set(&sem->readers, 0);
+ atomic_set(&sem->writers, 0);
+#endif
+}
+
+extern inline void
+down_read (struct rw_semaphore *sem)
+{
+ long count;
+
+#if WAITQUEUE_DEBUG
+ CHECK_MAGIC(sem->__magic);
+#endif
+
+ count = ia64_fetch_and_add(-1, &sem->count);
+ if (count < 0)
+ __down_read_failed(sem, count);
+
+#if WAITQUEUE_DEBUG
+ if (sem->write_bias_granted)
+ BUG();
+ if (atomic_read(&sem->writers))
+ BUG();
+ atomic_inc(&sem->readers);
+#endif
+}
+
+extern inline void
+down_write (struct rw_semaphore *sem)
+{
+ long old_count, new_count;
+
+#if WAITQUEUE_DEBUG
+ CHECK_MAGIC(sem->__magic);
+#endif
+
+ do {
+ old_count = sem->count;
+ new_count = old_count - RW_LOCK_BIAS;
+ } while (cmpxchg(&sem->count, old_count, new_count) != old_count);
+
+ if (new_count != 0)
+ __down_write_failed(sem, new_count);
+#if WAITQUEUE_DEBUG
+ if (atomic_read(&sem->writers))
+ BUG();
+ if (atomic_read(&sem->readers))
+ BUG();
+ if (sem->read_bias_granted)
+ BUG();
+ if (sem->write_bias_granted)
+ BUG();
+ atomic_inc(&sem->writers);
+#endif
+}
+
+/*
+ * When a reader does a release, the only significant
+ * case is when there was a writer waiting, and we've
+ * bumped the count to 0: we must wake the writer up.
+ */
+extern inline void
+__up_read (struct rw_semaphore *sem)
+{
+ long count;
+
+ count = ia64_fetch_and_add(1, &sem->count);
+ if (count == 0)
+ /*
+ * Other processes are blocked already; resolve
+ * contention by letting either a writer or a reader
+ * proceed...
+ */
+ __rwsem_wake(sem, count);
+}
+
+/*
+ * Releasing the writer is easy -- just release it and
+ * wake up any sleepers.
+ */
+extern inline void
+__up_write (struct rw_semaphore *sem)
+{
+ long old_count, new_count;
+
+ do {
+ old_count = sem->count;
+ new_count = old_count + RW_LOCK_BIAS;
+ } while (cmpxchg(&sem->count, old_count, new_count) != old_count);
+
+ /*
+ * Note: new_count <u RW_LOCK_BIAS <=> old_count < 0 && new_count >= 0.
+ * (where <u is "unsigned less-than").
+ */
+ if ((unsigned long) new_count < RW_LOCK_BIAS)
+ /* someone is blocked already, resolve contention... */
+ __rwsem_wake(sem, new_count);
+}
+
+extern inline void
+up_read (struct rw_semaphore *sem)
+{
+#if WAITQUEUE_DEBUG
+ if (sem->write_bias_granted)
+ BUG();
+ if (atomic_read(&sem->writers))
+ BUG();
+ atomic_dec(&sem->readers);
+#endif
+ __up_read(sem);
+}
+
+extern inline void
+up_write (struct rw_semaphore *sem)
+{
+#if WAITQUEUE_DEBUG
+ if (sem->read_bias_granted)
+ BUG();
+ if (sem->write_bias_granted)
+ BUG();
+ if (atomic_read(&sem->readers))
+ BUG();
+ if (atomic_read(&sem->writers) != 1)
+ BUG();
+ atomic_dec(&sem->writers);
+#endif
+ __up_write(sem);
+}
+
+#endif /* _ASM_IA64_SEMAPHORE_H */
--- /dev/null
+#ifndef _ASM_IA64_SEMBUF_H
+#define _ASM_IA64_SEMBUF_H
+
+/*
+ * The semid64_ds structure for IA-64 architecture.
+ * Note extra padding because this structure is passed back and forth
+ * between kernel and user space.
+ *
+ * Pad space is left for:
+ * - 2 miscellaneous 64-bit values
+ */
+
+struct semid64_ds {
+ struct ipc64_perm sem_perm; /* permissions .. see ipc.h */
+ __kernel_time_t sem_otime; /* last semop time */
+ __kernel_time_t sem_ctime; /* last change time */
+ unsigned long sem_nsems; /* no. of semaphores in array */
+ unsigned long __unused1;
+ unsigned long __unused2;
+};
+
+#endif /* _ASM_IA64_SEMBUF_H */
--- /dev/null
+/*
+ * include/asm-ia64/serial.h
+ *
+ * Derived from the i386 version.
+ */
+
+#include <linux/config.h>
+
+/*
+ * This assumes you have a 1.8432 MHz clock for your UART.
+ *
+ * It'd be nice if someone built a serial card with a 24.576 MHz
+ * clock, since the 16550A is capable of handling a top speed of 1.5
+ * megabits/second; but this requires the faster clock.
+ */
+#define BASE_BAUD ( 1843200 / 16 )
+
+#define CONFIG_SERIAL_DETECT_IRQ /* on IA-64, we always want to autodetect irqs */
+
+/* Standard COM flags (except for COM4, because of the 8514 problem) */
+#ifdef CONFIG_SERIAL_DETECT_IRQ
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST | ASYNC_AUTO_IRQ)
+#define STD_COM4_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_AUTO_IRQ)
+#else
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST)
+#define STD_COM4_FLAGS ASYNC_BOOT_AUTOCONF
+#endif
+
+#ifdef CONFIG_SERIAL_MANY_PORTS
+#define FOURPORT_FLAGS ASYNC_FOURPORT
+#define ACCENT_FLAGS 0
+#define BOCA_FLAGS 0
+#define HUB6_FLAGS 0
+#define RS_TABLE_SIZE 64
+#else
+#define RS_TABLE_SIZE
+#endif
+
+/*
+ * The following define the access methods for the HUB6 card. All
+ * access is through two ports for all 24 possible chips. The card is
+ * selected through the high 2 bits, the port on that card with the
+ * "middle" 3 bits, and the register on that port with the bottom
+ * 3 bits.
+ *
+ * While the access port and interrupt is configurable, the default
+ * port locations are 0x302 for the port control register, and 0x303
+ * for the data read/write register. Normally, the interrupt is at irq3
+ * but can be anything from 3 to 7 inclusive. Note that using 3 will
+ * require disabling com2.
+ */
+
+#define C_P(card,port) (((card)<<6|(port)<<3) + 1)
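+
+/*
+ * E.g., C_P(1,2) == ((1<<6)|(2<<3)) + 1 == 0x51, which selects port 2
+ * on the second HUB6 card.
+ */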
+
+#define STD_SERIAL_PORT_DEFNS \
+ /* UART CLK PORT IRQ FLAGS */ \
+ { 0, BASE_BAUD, 0x3F8, 4, STD_COM_FLAGS }, /* ttyS0 */ \
+ { 0, BASE_BAUD, 0x2F8, 3, STD_COM_FLAGS }, /* ttyS1 */ \
+ { 0, BASE_BAUD, 0x3E8, 4, STD_COM_FLAGS }, /* ttyS2 */ \
+ { 0, BASE_BAUD, 0x2E8, 3, STD_COM4_FLAGS }, /* ttyS3 */
+
+
+#ifdef CONFIG_SERIAL_MANY_PORTS
+#define EXTRA_SERIAL_PORT_DEFNS \
+ { 0, BASE_BAUD, 0x1A0, 9, FOURPORT_FLAGS }, /* ttyS4 */ \
+ { 0, BASE_BAUD, 0x1A8, 9, FOURPORT_FLAGS }, /* ttyS5 */ \
+ { 0, BASE_BAUD, 0x1B0, 9, FOURPORT_FLAGS }, /* ttyS6 */ \
+ { 0, BASE_BAUD, 0x1B8, 9, FOURPORT_FLAGS }, /* ttyS7 */ \
+ { 0, BASE_BAUD, 0x2A0, 5, FOURPORT_FLAGS }, /* ttyS8 */ \
+ { 0, BASE_BAUD, 0x2A8, 5, FOURPORT_FLAGS }, /* ttyS9 */ \
+ { 0, BASE_BAUD, 0x2B0, 5, FOURPORT_FLAGS }, /* ttyS10 */ \
+ { 0, BASE_BAUD, 0x2B8, 5, FOURPORT_FLAGS }, /* ttyS11 */ \
+ { 0, BASE_BAUD, 0x330, 4, ACCENT_FLAGS }, /* ttyS12 */ \
+ { 0, BASE_BAUD, 0x338, 4, ACCENT_FLAGS }, /* ttyS13 */ \
+ { 0, BASE_BAUD, 0x000, 0, 0 }, /* ttyS14 (spare) */ \
+ { 0, BASE_BAUD, 0x000, 0, 0 }, /* ttyS15 (spare) */ \
+ { 0, BASE_BAUD, 0x100, 12, BOCA_FLAGS }, /* ttyS16 */ \
+ { 0, BASE_BAUD, 0x108, 12, BOCA_FLAGS }, /* ttyS17 */ \
+ { 0, BASE_BAUD, 0x110, 12, BOCA_FLAGS }, /* ttyS18 */ \
+ { 0, BASE_BAUD, 0x118, 12, BOCA_FLAGS }, /* ttyS19 */ \
+ { 0, BASE_BAUD, 0x120, 12, BOCA_FLAGS }, /* ttyS20 */ \
+ { 0, BASE_BAUD, 0x128, 12, BOCA_FLAGS }, /* ttyS21 */ \
+ { 0, BASE_BAUD, 0x130, 12, BOCA_FLAGS }, /* ttyS22 */ \
+ { 0, BASE_BAUD, 0x138, 12, BOCA_FLAGS }, /* ttyS23 */ \
+ { 0, BASE_BAUD, 0x140, 12, BOCA_FLAGS }, /* ttyS24 */ \
+ { 0, BASE_BAUD, 0x148, 12, BOCA_FLAGS }, /* ttyS25 */ \
+ { 0, BASE_BAUD, 0x150, 12, BOCA_FLAGS }, /* ttyS26 */ \
+ { 0, BASE_BAUD, 0x158, 12, BOCA_FLAGS }, /* ttyS27 */ \
+ { 0, BASE_BAUD, 0x160, 12, BOCA_FLAGS }, /* ttyS28 */ \
+ { 0, BASE_BAUD, 0x168, 12, BOCA_FLAGS }, /* ttyS29 */ \
+ { 0, BASE_BAUD, 0x170, 12, BOCA_FLAGS }, /* ttyS30 */ \
+ { 0, BASE_BAUD, 0x178, 12, BOCA_FLAGS }, /* ttyS31 */
+#else
+#define EXTRA_SERIAL_PORT_DEFNS
+#endif
+
+/* You can have up to four HUB6's in the system, but I've only
+ * included two cards here for a total of twelve ports.
+ */
+#if (defined(CONFIG_HUB6) && defined(CONFIG_SERIAL_MANY_PORTS))
+#define HUB6_SERIAL_PORT_DFNS \
+ { 0, BASE_BAUD, 0x302, 3, HUB6_FLAGS, C_P(0,0) }, /* ttyS32 */ \
+ { 0, BASE_BAUD, 0x302, 3, HUB6_FLAGS, C_P(0,1) }, /* ttyS33 */ \
+ { 0, BASE_BAUD, 0x302, 3, HUB6_FLAGS, C_P(0,2) }, /* ttyS34 */ \
+ { 0, BASE_BAUD, 0x302, 3, HUB6_FLAGS, C_P(0,3) }, /* ttyS35 */ \
+ { 0, BASE_BAUD, 0x302, 3, HUB6_FLAGS, C_P(0,4) }, /* ttyS36 */ \
+ { 0, BASE_BAUD, 0x302, 3, HUB6_FLAGS, C_P(0,5) }, /* ttyS37 */ \
+ { 0, BASE_BAUD, 0x302, 3, HUB6_FLAGS, C_P(1,0) }, /* ttyS38 */ \
+ { 0, BASE_BAUD, 0x302, 3, HUB6_FLAGS, C_P(1,1) }, /* ttyS39 */ \
+ { 0, BASE_BAUD, 0x302, 3, HUB6_FLAGS, C_P(1,2) }, /* ttyS40 */ \
+ { 0, BASE_BAUD, 0x302, 3, HUB6_FLAGS, C_P(1,3) }, /* ttyS41 */ \
+ { 0, BASE_BAUD, 0x302, 3, HUB6_FLAGS, C_P(1,4) }, /* ttyS42 */ \
+ { 0, BASE_BAUD, 0x302, 3, HUB6_FLAGS, C_P(1,5) }, /* ttyS43 */
+#else
+#define HUB6_SERIAL_PORT_DFNS
+#endif
+
+#ifdef CONFIG_MCA
+#define MCA_SERIAL_PORT_DFNS \
+ { 0, BASE_BAUD, 0x3220, 3, STD_COM_FLAGS }, \
+ { 0, BASE_BAUD, 0x3228, 3, STD_COM_FLAGS }, \
+ { 0, BASE_BAUD, 0x4220, 3, STD_COM_FLAGS }, \
+ { 0, BASE_BAUD, 0x4228, 3, STD_COM_FLAGS }, \
+ { 0, BASE_BAUD, 0x5220, 3, STD_COM_FLAGS }, \
+ { 0, BASE_BAUD, 0x5228, 3, STD_COM_FLAGS },
+#else
+#define MCA_SERIAL_PORT_DFNS
+#endif
+
+#define SERIAL_PORT_DFNS \
+ STD_SERIAL_PORT_DEFNS \
+ EXTRA_SERIAL_PORT_DEFNS \
+ HUB6_SERIAL_PORT_DFNS \
+ MCA_SERIAL_PORT_DFNS
+
--- /dev/null
+#ifndef _ASM_IA64_SHMBUF_H
+#define _ASM_IA64_SHMBUF_H
+
+/*
+ * The shmid64_ds structure for IA-64 architecture.
+ * Note extra padding because this structure is passed back and forth
+ * between kernel and user space.
+ *
+ * Pad space is left for:
+ * - 2 miscellaneous 64-bit values
+ */
+
+struct shmid64_ds {
+ struct ipc64_perm shm_perm; /* operation perms */
+ size_t shm_segsz; /* size of segment (bytes) */
+ __kernel_time_t shm_atime; /* last attach time */
+ __kernel_time_t shm_dtime; /* last detach time */
+ __kernel_time_t shm_ctime; /* last change time */
+ __kernel_pid_t shm_cpid; /* pid of creator */
+ __kernel_pid_t shm_lpid; /* pid of last operator */
+ unsigned long shm_nattch; /* no. of current attaches */
+ unsigned long __unused1;
+ unsigned long __unused2;
+};
+
+struct shminfo64 {
+ unsigned long shmmax;
+ unsigned long shmmin;
+ unsigned long shmmni;
+ unsigned long shmseg;
+ unsigned long shmall;
+ unsigned long __unused1;
+ unsigned long __unused2;
+ unsigned long __unused3;
+ unsigned long __unused4;
+};
+
+#endif /* _ASM_IA64_SHMBUF_H */
--- /dev/null
+#ifndef _ASM_IA64_SHMPARAM_H
+#define _ASM_IA64_SHMPARAM_H
+
+#define SHMLBA PAGE_SIZE /* attach addr a multiple of this */
+
+#endif /* _ASM_IA64_SHMPARAM_H */
--- /dev/null
+#ifndef _ASM_IA64_SIGCONTEXT_H
+#define _ASM_IA64_SIGCONTEXT_H
+
+/*
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <asm/fpu.h>
+
+#define IA64_SC_FLAG_ONSTACK_BIT 0 /* is handler running on signal stack? */
+#define IA64_SC_FLAG_IN_SYSCALL_BIT 1 /* did signal interrupt a syscall? */
+#define IA64_SC_FLAG_FPH_VALID_BIT 2 /* is state in f[32]-f[127] valid? */
+
+#define IA64_SC_FLAG_ONSTACK (1 << IA64_SC_FLAG_ONSTACK_BIT)
+#define IA64_SC_FLAG_IN_SYSCALL (1 << IA64_SC_FLAG_IN_SYSCALL_BIT)
+#define IA64_SC_FLAG_FPH_VALID (1 << IA64_SC_FLAG_FPH_VALID_BIT)
+
+# ifndef __ASSEMBLY__
+
+struct sigcontext {
+ unsigned long sc_flags; /* see manifest constants above */
+ unsigned long sc_nat; /* bit i == 1 iff scratch reg gr[i] is a NaT */
+ stack_t sc_stack; /* previously active stack */
+
+ unsigned long sc_ip; /* instruction pointer */
+ unsigned long sc_cfm; /* current frame marker */
+ unsigned long sc_um; /* user mask bits */
+ unsigned long sc_ar_rsc; /* register stack configuration register */
+ unsigned long sc_ar_bsp; /* backing store pointer */
+ unsigned long sc_ar_rnat; /* RSE NaT collection register */
+ unsigned long sc_ar_ccv; /* compare and exchange compare value register */
+ unsigned long sc_ar_unat; /* ar.unat of interrupted context */
+ unsigned long sc_ar_fpsr; /* floating-point status register */
+ unsigned long sc_ar_pfs; /* previous function state */
+ unsigned long sc_ar_lc; /* loop count register */
+ unsigned long sc_pr; /* predicate registers */
+ unsigned long sc_br[8]; /* branch registers */
+ unsigned long sc_gr[32]; /* general registers (static partition) */
+ struct ia64_fpreg sc_fr[128]; /* floating-point registers */
+
+ /*
+ * The mask must come last so we can increase _NSIG_WORDS
+ * without breaking binary compatibility.
+ */
+ sigset_t sc_mask; /* signal mask to restore after handler returns */
+};
+
+# endif /* __ASSEMBLY__ */
+#endif /* _ASM_IA64_SIGCONTEXT_H */
--- /dev/null
+#ifndef _ASM_IA64_SIGINFO_H
+#define _ASM_IA64_SIGINFO_H
+
+/*
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <linux/types.h>
+
+typedef union sigval {
+ int sival_int;
+ void *sival_ptr;
+} sigval_t;
+
+#define SI_MAX_SIZE 128
+#define SI_PAD_SIZE ((SI_MAX_SIZE/sizeof(int)) - 3)
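+
+/*
+ * With SI_MAX_SIZE == 128 and 4-byte ints, SI_PAD_SIZE evaluates to
+ * 128/4 - 3 == 29, so the three leading int fields plus the padded
+ * union keep sizeof(siginfo_t) at exactly SI_MAX_SIZE bytes.
+ */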
+
+typedef struct siginfo {
+ int si_signo;
+ int si_errno;
+ int si_code;
+
+ union {
+ int _pad[SI_PAD_SIZE];
+
+ /* kill() */
+ struct {
+ pid_t _pid; /* sender's pid */
+ uid_t _uid; /* sender's uid */
+ } _kill;
+
+ /* POSIX.1b timers */
+ struct {
+ unsigned int _timer1;
+ unsigned int _timer2;
+ } _timer;
+
+ /* POSIX.1b signals */
+ struct {
+ pid_t _pid; /* sender's pid */
+ uid_t _uid; /* sender's uid */
+ sigval_t _sigval;
+ } _rt;
+
+ /* SIGCHLD */
+ struct {
+ pid_t _pid; /* which child */
+ uid_t _uid; /* sender's uid */
+ int _status; /* exit code */
+ clock_t _utime;
+ clock_t _stime;
+ } _sigchld;
+
+ /* SIGILL, SIGFPE, SIGSEGV, SIGBUS */
+ struct {
+ void *_addr; /* faulting insn/memory ref. */
+ } _sigfault;
+
+ /* SIGPOLL */
+ struct {
+ int _band; /* POLL_IN, POLL_OUT, POLL_MSG */
+ int _fd;
+ } _sigpoll;
+ } _sifields;
+} siginfo_t;
+
+/*
+ * How these fields are to be accessed.
+ */
+#define si_pid _sifields._kill._pid
+#define si_uid _sifields._kill._uid
+#define si_status _sifields._sigchld._status
+#define si_utime _sifields._sigchld._utime
+#define si_stime _sifields._sigchld._stime
+#define si_value _sifields._rt._sigval
+#define si_int _sifields._rt._sigval.sival_int
+#define si_ptr _sifields._rt._sigval.sival_ptr
+#define si_addr _sifields._sigfault._addr
+#define si_band _sifields._sigpoll._band
+#define si_fd _sifields._sigpoll._fd
+
+/*
+ * si_code values
+ * Positive values for kernel-generated signals.
+ */
+#define SI_USER 0 /* sent by kill, sigsend, raise */
+#define SI_KERNEL 0x80 /* sent by the kernel from somewhere */
+#define SI_QUEUE -1 /* sent by sigqueue */
+#define SI_TIMER -2 /* sent by timer expiration */
+#define SI_MESGQ -3 /* sent by real time mesq state change */
+#define SI_ASYNCIO -4 /* sent by AIO completion */
+#define SI_SIGIO -5 /* sent by queued SIGIO */
+
+#define SI_FROMUSER(siptr) ((siptr)->si_code <= 0)
+#define SI_FROMKERNEL(siptr) ((siptr)->si_code > 0)
+
+/*
+ * SIGILL si_codes
+ */
+#define ILL_ILLOPC 1 /* illegal opcode */
+#define ILL_ILLOPN 2 /* illegal operand */
+#define ILL_ILLADR 3 /* illegal addressing mode */
+#define ILL_ILLTRP 4 /* illegal trap */
+#define ILL_PRVOPC 5 /* privileged opcode */
+#define ILL_PRVREG 6 /* privileged register */
+#define ILL_COPROC 7 /* coprocessor error */
+#define ILL_BADSTK 8 /* internal stack error */
+#define ILL_BADIADDR 9 /* Unimplemented instruction address */
+#define NSIGILL 9
+
+/*
+ * SIGFPE si_codes
+ */
+#define FPE_INTDIV 1 /* integer divide by zero */
+#define FPE_INTOVF 2 /* integer overflow */
+#define FPE_FLTDIV 3 /* floating point divide by zero */
+#define FPE_FLTOVF 4 /* floating point overflow */
+#define FPE_FLTUND 5 /* floating point underflow */
+#define FPE_FLTRES 6 /* floating point inexact result */
+#define FPE_FLTINV 7 /* floating point invalid operation */
+#define FPE_FLTSUB 8 /* subscript out of range */
+#define NSIGFPE 8
+
+/*
+ * SIGSEGV si_codes
+ */
+#define SEGV_MAPERR 1 /* address not mapped to object */
+#define SEGV_ACCERR 2 /* invalid permissions for mapped object */
+#define NSIGSEGV 2
+
+/*
+ * SIGBUS si_codes
+ */
+#define BUS_ADRALN 1 /* invalid address alignment */
+#define BUS_ADRERR 2 /* non-existent physical address */
+#define BUS_OBJERR 3 /* object specific hardware error */
+#define NSIGBUS 3
+
+/*
+ * SIGTRAP si_codes
+ */
+#define TRAP_BRKPT 1 /* process breakpoint */
+#define TRAP_TRACE 2 /* process trace trap */
+#define TRAP_BRANCH 3 /* process taken branch trap */
+#define NSIGTRAP 3
+
+/*
+ * SIGCHLD si_codes
+ */
+#define CLD_EXITED 1 /* child has exited */
+#define CLD_KILLED 2 /* child was killed */
+#define CLD_DUMPED 3 /* child terminated abnormally */
+#define CLD_TRAPPED 4 /* traced child has trapped */
+#define CLD_STOPPED 5 /* child has stopped */
+#define CLD_CONTINUED 6 /* stopped child has continued */
+#define NSIGCHLD 6
+
+/*
+ * SIGPOLL si_codes
+ */
+#define POLL_IN 1 /* data input available */
+#define POLL_OUT 2 /* output buffers available */
+#define POLL_MSG 3 /* input message available */
+#define POLL_ERR 4 /* i/o error */
+#define POLL_PRI 5 /* high priority input available */
+#define POLL_HUP 6 /* device disconnected */
+#define NSIGPOLL 6
+
+/*
+ * sigevent definitions
+ *
+ * It seems likely that SIGEV_THREAD will have to be handled from
+ * userspace, libpthread transmuting it to SIGEV_SIGNAL, which the
+ * thread manager then catches and does the appropriate nonsense.
+ * However, everything is written out here so as to not get lost.
+ */
+#define SIGEV_SIGNAL 0 /* notify via signal */
+#define SIGEV_NONE 1 /* other notification: meaningless */
+#define SIGEV_THREAD 2 /* deliver via thread creation */
+
+#define SIGEV_MAX_SIZE 64
+#define SIGEV_PAD_SIZE ((SIGEV_MAX_SIZE/sizeof(int)) - 3)
+
+typedef struct sigevent {
+ sigval_t sigev_value;
+ int sigev_signo;
+ int sigev_notify;
+ union {
+ int _pad[SIGEV_PAD_SIZE];
+
+ struct {
+ void (*_function)(sigval_t);
+ void *_attribute; /* really pthread_attr_t */
+ } _sigev_thread;
+ } _sigev_un;
+} sigevent_t;
+
+#define sigev_notify_function _sigev_un._sigev_thread._function
+#define sigev_notify_attributes _sigev_un._sigev_thread._attribute
+
+#endif /* _ASM_IA64_SIGINFO_H */
--- /dev/null
+#ifndef _ASM_IA64_SIGNAL_H
+#define _ASM_IA64_SIGNAL_H
+
+/*
+ * Copyright (C) 1998-2000 Hewlett-Packard Co
+ * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#define SIGHUP 1
+#define SIGINT 2
+#define SIGQUIT 3
+#define SIGILL 4
+#define SIGTRAP 5
+#define SIGABRT 6
+#define SIGIOT 6
+#define SIGBUS 7
+#define SIGFPE 8
+#define SIGKILL 9
+#define SIGUSR1 10
+#define SIGSEGV 11
+#define SIGUSR2 12
+#define SIGPIPE 13
+#define SIGALRM 14
+#define SIGTERM 15
+#define SIGSTKFLT 16
+#define SIGCHLD 17
+#define SIGCONT 18
+#define SIGSTOP 19
+#define SIGTSTP 20
+#define SIGTTIN 21
+#define SIGTTOU 22
+#define SIGURG 23
+#define SIGXCPU 24
+#define SIGXFSZ 25
+#define SIGVTALRM 26
+#define SIGPROF 27
+#define SIGWINCH 28
+#define SIGIO 29
+#define SIGPOLL SIGIO
+/*
+#define SIGLOST 29
+*/
+#define SIGPWR 30
+#define SIGSYS 31
+/* signal 31 is no longer "unused", but the SIGUNUSED macro remains for backwards compatibility */
+#define SIGUNUSED 31
+
+/* These should not be considered constants from userland. */
+#define SIGRTMIN 32
+#define SIGRTMAX (_NSIG-1)
+
+/*
+ * SA_FLAGS values:
+ *
+ * SA_ONSTACK indicates that a registered stack_t will be used.
+ * SA_INTERRUPT is a no-op, but is kept for historical reasons.  Use the
+ * SA_RESTART flag to get restarting signals (which were the default long ago).
+ * SA_NOCLDSTOP turns off SIGCHLD when children stop.
+ * SA_RESETHAND clears the handler when the signal is delivered.
+ * SA_NOCLDWAIT on SIGCHLD inhibits zombies.
+ * SA_NODEFER prevents the current signal from being masked in the handler.
+ *
+ * SA_ONESHOT and SA_NOMASK are the historical Linux names for the Single
+ * Unix names RESETHAND and NODEFER respectively.
+ */
+#define SA_NOCLDSTOP 0x00000001
+#define SA_NOCLDWAIT 0x00000002 /* not supported yet */
+#define SA_SIGINFO 0x00000004
+#define SA_ONSTACK 0x08000000
+#define SA_RESTART 0x10000000
+#define SA_NODEFER 0x40000000
+#define SA_RESETHAND 0x80000000
+
+#define SA_NOMASK SA_NODEFER
+#define SA_ONESHOT SA_RESETHAND
+#define SA_INTERRUPT 0x20000000 /* dummy -- ignored */
+
+#define SA_RESTORER 0x04000000
+
+/*
+ * sigaltstack controls
+ */
+#define SS_ONSTACK 1
+#define SS_DISABLE 2
+
+#define MINSIGSTKSZ 2048
+#define SIGSTKSZ 8192
+
+#define _NSIG 64
+#define _NSIG_BPW 64
+#define _NSIG_WORDS (_NSIG / _NSIG_BPW)
+
+/*
+ * These values of sa_flags are used only by the kernel as part of the
+ * irq handling routines.
+ *
+ * SA_INTERRUPT is also used by the irq handling routines.
+ * SA_SHIRQ is for shared interrupt support on PCI and EISA.
+ */
+#define SA_PROBE SA_ONESHOT
+#define SA_SAMPLE_RANDOM SA_RESTART
+#define SA_SHIRQ 0x04000000
+#define SA_LEGACY 0x02000000 /* installed via a legacy irq? */
+
+#define SIG_BLOCK 0 /* for blocking signals */
+#define SIG_UNBLOCK 1 /* for unblocking signals */
+#define SIG_SETMASK 2 /* for setting the signal mask */
+
+#define SIG_DFL ((__sighandler_t)0) /* default signal handling */
+#define SIG_IGN ((__sighandler_t)1) /* ignore signal */
+#define SIG_ERR ((__sighandler_t)-1) /* error return from signal */
+
+# ifndef __ASSEMBLY__
+
+# include <linux/types.h>
+
+/* Avoid too many header ordering problems. */
+struct siginfo;
+
+/* Most things should be clean enough to redefine this at will, if care
+ is taken to make libc match. */
+
+typedef unsigned long old_sigset_t;
+
+typedef struct {
+ unsigned long sig[_NSIG_WORDS];
+} sigset_t;
+
+/* Type of a signal handler. */
+typedef void (*__sighandler_t)(int);
+
+struct sigaction {
+ __sighandler_t sa_handler;
+ unsigned long sa_flags;
+ sigset_t sa_mask; /* mask last for extensibility */
+};
+
+struct k_sigaction {
+ struct sigaction sa;
+};
+
+typedef struct sigaltstack {
+ void *ss_sp;
+ int ss_flags;
+ size_t ss_size;
+} stack_t;
+
+ /* sigcontext.h needs stack_t... */
+# include <asm/sigcontext.h>
+
+# endif /* !__ASSEMBLY__ */
+#endif /* _ASM_IA64_SIGNAL_H */
--- /dev/null
+/*
+ * SMP Support
+ *
+ * Copyright (C) 1999 VA Linux Systems
+ * Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
+ */
+#ifndef _ASM_IA64_SMP_H
+#define _ASM_IA64_SMP_H
+
+#include <linux/init.h>
+#include <linux/threads.h>
+#include <linux/kernel.h>
+
+#include <asm/ptrace.h>
+#include <asm/spinlock.h>
+#include <asm/io.h>
+
+#define IPI_DEFAULT_BASE_ADDR 0xfee00000
+#define XTP_OFFSET 0x1e0008
+
+#define smp_processor_id() (current->processor)
+
+extern unsigned long cpu_present_map;
+extern unsigned long cpu_online_map;
+extern unsigned long ipi_base_addr;
+extern int bootstrap_processor;
+extern volatile int cpu_number_map[NR_CPUS];
+extern volatile int __cpu_logical_map[NR_CPUS];
+
+#define cpu_logical_map(i) __cpu_logical_map[i]
+
+#if defined(CONFIG_KDB)
+extern volatile unsigned long smp_kdb_wait;
+#endif /* CONFIG_KDB */
+
+extern unsigned long ap_wakeup_vector;
+
+/*
+ * XTP control functions:
+ * min_xtp : route all interrupts to this CPU
+ * normal_xtp: nominal XTP value
+ * raise_xtp : route all interrupts away from this CPU
+ * max_xtp : never deliver interrupts to this CPU
+ */
+
+/*
+ * This turns off XTP based interrupt routing. There is a bug in the handling of
+ * IRQ_INPROGRESS when the same vector appears on more than one CPU.
+ */
+extern int use_xtp;
+
+extern __inline void
+min_xtp(void)
+{
+ if (use_xtp)
+ writeb(0x80, ipi_base_addr | XTP_OFFSET); /* XTP to min */
+}
+
+extern __inline void
+normal_xtp(void)
+{
+ if (use_xtp)
+ writeb(0x8e, ipi_base_addr | XTP_OFFSET); /* XTP normal */
+}
+
+extern __inline void
+max_xtp(void)
+{
+ if (use_xtp)
+ writeb(0x8f, ipi_base_addr | XTP_OFFSET); /* Set XTP to max... */
+}
+
+extern __inline unsigned int
+hard_smp_processor_id(void)
+{
+ struct {
+ unsigned long reserved : 16;
+ unsigned long eid : 8;
+ unsigned long id : 8;
+ unsigned long ignored : 32;
+ } lid;
+
+ __asm__ __volatile__ ("mov %0=cr.lid" : "=r" (lid));
+
+ /*
+ * Damn. IA64 CPU IDs are 16 bits long, but Linux expects the hard id to
+ * be in the range 0..31. So, return the low-order bits of the bus-local
+ * ID only and hope the result is less than 32. This needs to be fixed...
+ */
+ return (lid.id & 0x0f);
+}
+
+#define NO_PROC_ID 0xffffffff
+#define PROC_CHANGE_PENALTY 20
+
+extern void __init init_smp_config (void);
+extern void smp_do_timer (struct pt_regs *regs);
+
+#endif /* _ASM_IA64_SMP_H */
--- /dev/null
+/*
+ * <asm/smplock.h>
+ *
+ * Default SMP lock implementation
+ */
+#include <linux/sched.h>
+#include <linux/interrupt.h>
+
+#include <asm/spinlock.h>
+
+extern spinlock_t kernel_flag;
+
+/*
+ * Release global kernel lock and global interrupt lock
+ */
+static __inline__ void
+release_kernel_lock(struct task_struct *task, int cpu)
+{
+ if (task->lock_depth >= 0)
+ spin_unlock(&kernel_flag);
+ release_irqlock(cpu);
+ __sti();
+}
+
+/*
+ * Re-acquire the kernel lock
+ */
+static __inline__ void
+reacquire_kernel_lock(struct task_struct *task)
+{
+ if (task->lock_depth >= 0)
+ spin_lock(&kernel_flag);
+}
+
+/*
+ * Getting the big kernel lock.
+ *
+ * This cannot happen asynchronously,
+ * so we only need to worry about other
+ * CPU's.
+ */
+static __inline__ void
+lock_kernel(void)
+{
+ if (!++current->lock_depth)
+ spin_lock(&kernel_flag);
+}
+
+static __inline__ void
+unlock_kernel(void)
+{
+ if (--current->lock_depth < 0)
+ spin_unlock(&kernel_flag);
+}
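+
+/*
+ * Note that lock_depth starts out at -1 ("lock not held"), so the
+ * kernel lock is recursive: a second lock_kernel() on the same CPU
+ * merely bumps lock_depth from 0 to 1, and only the final matching
+ * unlock_kernel() (depth back to -1) drops the spinlock.
+ */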
--- /dev/null
+#ifndef _ASM_IA64_SOCKET_H
+#define _ASM_IA64_SOCKET_H
+
+/*
+ * Socket related defines. This mostly mirrors the Linux/x86 version.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <asm/sockios.h>
+
+/* For setsockoptions(2) */
+#define SOL_SOCKET 1
+
+#define SO_DEBUG 1
+#define SO_REUSEADDR 2
+#define SO_TYPE 3
+#define SO_ERROR 4
+#define SO_DONTROUTE 5
+#define SO_BROADCAST 6
+#define SO_SNDBUF 7
+#define SO_RCVBUF 8
+#define SO_KEEPALIVE 9
+#define SO_OOBINLINE 10
+#define SO_NO_CHECK 11
+#define SO_PRIORITY 12
+#define SO_LINGER 13
+#define SO_BSDCOMPAT 14
+/* To add :#define SO_REUSEPORT 15 */
+#define SO_PASSCRED 16
+#define SO_PEERCRED 17
+#define SO_RCVLOWAT 18
+#define SO_SNDLOWAT 19
+#define SO_RCVTIMEO 20
+#define SO_SNDTIMEO 21
+
+/* Security levels - as per NRL IPv6 - don't actually do anything */
+#define SO_SECURITY_AUTHENTICATION 22
+#define SO_SECURITY_ENCRYPTION_TRANSPORT 23
+#define SO_SECURITY_ENCRYPTION_NETWORK 24
+
+#define SO_BINDTODEVICE 25
+
+/* Socket filtering */
+#define SO_ATTACH_FILTER 26
+#define SO_DETACH_FILTER 27
+
+#endif /* _ASM_IA64_SOCKET_H */
--- /dev/null
+#ifndef _ASM_IA64_SOCKIOS_H
+#define _ASM_IA64_SOCKIOS_H
+
+/*
+ * Socket-level I/O control calls. This mostly mirrors the Linux/x86
+ * version.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+#define FIOSETOWN 0x8901
+#define SIOCSPGRP 0x8902
+#define FIOGETOWN 0x8903
+#define SIOCGPGRP 0x8904
+#define SIOCATMARK 0x8905
+#define SIOCGSTAMP 0x8906 /* Get stamp */
+
+#endif /* _ASM_IA64_SOCKIOS_H */
--- /dev/null
+#ifndef _ASM_IA64_SOFTIRQ_H
+#define _ASM_IA64_SOFTIRQ_H
+
+/*
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+#include <linux/config.h>
+#include <linux/stddef.h>
+
+#include <asm/system.h>
+#include <asm/hardirq.h>
+
+extern unsigned int local_bh_count[NR_CPUS];
+
+#define cpu_bh_disable(cpu) do { local_bh_count[(cpu)]++; barrier(); } while (0)
+#define cpu_bh_enable(cpu) do { barrier(); local_bh_count[(cpu)]--; } while (0)
+
+#define cpu_bh_trylock(cpu) (local_bh_count[(cpu)] ? 0 : (local_bh_count[(cpu)] = 1))
+#define cpu_bh_endlock(cpu) (local_bh_count[(cpu)] = 0)
+
+#define local_bh_disable() cpu_bh_disable(smp_processor_id())
+#define local_bh_enable() cpu_bh_enable(smp_processor_id())
+
+#define get_active_bhs() (bh_mask & bh_active)
+
+static inline void
+clear_active_bhs (unsigned long x)
+{
+ unsigned long old, new;
+ volatile unsigned long *bh_activep = (void *) &bh_active;
+ CMPXCHG_BUGCHECK_DECL
+
+ do {
+ CMPXCHG_BUGCHECK(bh_activep);
+ old = *bh_activep;
+ new = old & ~x;
+ } while (ia64_cmpxchg(bh_activep, old, new, 8) != old);
+}
+
+extern inline void
+init_bh (int nr, void (*routine)(void))
+{
+ bh_base[nr] = routine;
+ atomic_set(&bh_mask_count[nr], 0);
+ bh_mask |= 1 << nr;
+}
+
+extern inline void
+remove_bh (int nr)
+{
+ bh_mask &= ~(1 << nr);
+ mb();
+ bh_base[nr] = NULL;
+}
+
+extern inline void
+mark_bh (int nr)
+{
+ set_bit(nr, &bh_active);
+}
+
+#ifdef CONFIG_SMP
+
+/*
+ * The locking mechanism for base handlers, to prevent re-entrancy,
+ * is entirely private to an implementation, it should not be
+ * referenced at all outside of this file.
+ */
+extern atomic_t global_bh_lock;
+extern atomic_t global_bh_count;
+
+extern void synchronize_bh(void);
+
+static inline void
+start_bh_atomic (void)
+{
+ atomic_inc(&global_bh_lock);
+ synchronize_bh();
+}
+
+static inline void
+end_bh_atomic (void)
+{
+ atomic_dec(&global_bh_lock);
+}
+
+/* These are for the irq's testing the lock */
+static inline int
+softirq_trylock (int cpu)
+{
+ if (cpu_bh_trylock(cpu)) {
+ if (!test_and_set_bit(0, &global_bh_count)) {
+ if (atomic_read(&global_bh_lock) == 0)
+ return 1;
+ clear_bit(0,&global_bh_count);
+ }
+ cpu_bh_endlock(cpu);
+ }
+ return 0;
+}
+
+static inline void
+softirq_endlock (int cpu)
+{
+ cpu_bh_enable(cpu);
+ clear_bit(0,&global_bh_count);
+}
+
+#else /* !CONFIG_SMP */
+
+extern inline void
+start_bh_atomic (void)
+{
+ local_bh_disable();
+ barrier();
+}
+
+extern inline void
+end_bh_atomic (void)
+{
+ barrier();
+ local_bh_enable();
+}
+
+/* These are for the irq's testing the lock */
+#define softirq_trylock(cpu) (cpu_bh_trylock(cpu))
+#define softirq_endlock(cpu) (cpu_bh_endlock(cpu))
+#define synchronize_bh() barrier()
+
+#endif /* !CONFIG_SMP */
+
+/*
+ * These use a mask count to correctly handle
+ * nested disable/enable calls
+ */
+extern inline void
+disable_bh (int nr)
+{
+ bh_mask &= ~(1 << nr);
+ atomic_inc(&bh_mask_count[nr]);
+ synchronize_bh();
+}
+
+extern inline void
+enable_bh (int nr)
+{
+ if (atomic_dec_and_test(&bh_mask_count[nr]))
+ bh_mask |= 1 << nr;
+}
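+
+/*
+ * Usage sketch (illustrative only, not compiled): disable_bh() and
+ * enable_bh() nest via the mask count, so a bottom half stays blocked
+ * until the last enable_bh().  TIMER_BH is used here purely as an
+ * example bottom-half number.
+ */
+#if 0
+ disable_bh(TIMER_BH); /* mask count 0 -> 1: TIMER_BH blocked */
+ disable_bh(TIMER_BH); /* mask count 1 -> 2 */
+ enable_bh(TIMER_BH); /* mask count 2 -> 1: still blocked */
+ enable_bh(TIMER_BH); /* mask count 1 -> 0: TIMER_BH may run again */
+#endif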
+
+#endif /* _ASM_IA64_SOFTIRQ_H */
--- /dev/null
+#ifndef _ASM_IA64_SPINLOCK_H
+#define _ASM_IA64_SPINLOCK_H
+
+/*
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
+ *
+ * This file is used for SMP configurations only.
+ */
+
+#include <asm/system.h>
+#include <asm/bitops.h>
+#include <asm/atomic.h>
+
+typedef struct {
+ volatile unsigned int lock;
+} spinlock_t;
+#define SPIN_LOCK_UNLOCKED (spinlock_t) { 0 }
+#define spin_lock_init(x) ((x)->lock = 0)
+
+/* Streamlined test_and_set_bit(0, (x)) */
+#define spin_lock(x) __asm__ __volatile__ ( \
+ "mov ar.ccv = r0\n" \
+ "mov r29 = 1\n" \
+ ";;\n" \
+ "1:\n" \
+ "ld4 r2 = [%0]\n" \
+ ";;\n" \
+ "cmp4.eq p0,p7 = r0,r2\n" \
+ "(p7) br.cond.dptk.few 1b \n" \
+ "cmpxchg4.acq r2 = [%0], r29, ar.ccv\n" \
+ ";;\n" \
+ "cmp4.eq p0,p7 = r0, r2\n" \
+ "(p7) br.cond.dptk.few 1b\n" \
+ ";;\n" \
+ :: "m" __atomic_fool_gcc((x)) : "r2", "r29")
+
+#define spin_unlock(x) __asm__ __volatile__ ("st4.rel [%0] = r0;;" : "=m" (__atomic_fool_gcc((x))))
+
+#define spin_trylock(x) (!test_and_set_bit(0, (x)))
+
+#define spin_unlock_wait(x) \
+ ({ do { barrier(); } while(((volatile spinlock_t *)x)->lock); })
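+
+/*
+ * Usage sketch (illustrative only, not compiled): a spinlock
+ * protecting a shared counter.  Names are illustrative.
+ */
+#if 0
+static spinlock_t demo_lock = SPIN_LOCK_UNLOCKED;
+static int demo_count;
+
+static void
+demo_inc (void)
+{
+ spin_lock(&demo_lock);
+ demo_count++;
+ spin_unlock(&demo_lock);
+}
+#endif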
+
+typedef struct {
+ volatile int read_counter:31;
+ volatile int write_lock:1;
+} rwlock_t;
+#define RW_LOCK_UNLOCKED (rwlock_t) { 0, 0 }
+
+#define read_lock(rw) \
+do { \
+ int tmp = 0; \
+ __asm__ __volatile__ ("1:\tfetchadd4.acq %0 = %1, 1\n" \
+ ";;\n" \
+ "tbit.nz p6,p0 = %0, 31\n" \
+ "(p6) br.cond.sptk.few 2f\n" \
+ ".section .text.lock,\"ax\"\n" \
+ "2:\tfetchadd4.rel %0 = %1, -1\n" \
+ ";;\n" \
+ "3:\tld4.acq %0 = %1\n" \
+ ";;\n" \
+ "tbit.nz p6,p0 = %0, 31\n" \
+ "(p6) br.cond.sptk.few 3b\n" \
+ "br.cond.sptk.few 1b\n" \
+ ";;\n" \
+ ".previous\n": "=r" (tmp), "=m" (__atomic_fool_gcc(rw))); \
+} while(0)
+
+#define read_unlock(rw) \
+do { \
+ int tmp = 0; \
+ __asm__ __volatile__ ("fetchadd4.rel %0 = %1, -1\n" \
+ : "=r" (tmp) : "m" (__atomic_fool_gcc(rw))); \
+} while(0)
+
+/*
+ * These may need to be rewhacked in asm().
+ * XXX FIXME SDV - This may have a race on real hardware but is sufficient for SoftSDV
+ */
+#define write_lock(rw) \
+while(1) {\
+ do { \
+ } while (test_and_set_bit(31, (rw))); \
+ if ((rw)->read_counter) { \
+ clear_bit(31, (rw)); \
+ while ((rw)->read_counter) \
+ ; \
+ } else { \
+ break; \
+ } \
+}
+
+#define write_unlock(x) (clear_bit(31, (x)))
+
+#endif /* _ASM_IA64_SPINLOCK_H */
--- /dev/null
+#ifndef _ASM_IA64_STAT_H
+#define _ASM_IA64_STAT_H
+
+/*
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+struct stat {
+ unsigned int st_dev;
+ unsigned int st_ino;
+ unsigned int st_mode;
+ unsigned int st_nlink;
+ unsigned int st_uid;
+ unsigned int st_gid;
+ unsigned int st_rdev;
+ unsigned int __pad1;
+ unsigned long st_size;
+ unsigned long st_atime;
+ unsigned long st_mtime;
+ unsigned long st_ctime;
+ unsigned int st_blksize;
+ int st_blocks;
+ unsigned int __unused1;
+ unsigned int __unused2;
+};
+
+#endif /* _ASM_IA64_STAT_H */
--- /dev/null
+#ifndef _ASM_IA64_STATFS_H
+#define _ASM_IA64_STATFS_H
+
+/*
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+# ifndef __KERNEL_STRICT_NAMES
+# include <linux/types.h>
+ typedef __kernel_fsid_t fsid_t;
+# endif
+
+struct statfs {
+ long f_type;
+ long f_bsize;
+ long f_blocks;
+ long f_bfree;
+ long f_bavail;
+ long f_files;
+ long f_ffree;
+ __kernel_fsid_t f_fsid;
+ long f_namelen;
+ long f_spare[6];
+};
+
+#endif /* _ASM_IA64_STATFS_H */
--- /dev/null
+#ifndef _ASM_IA64_STRING_H
+#define _ASM_IA64_STRING_H
+
+/*
+ * Here is where we want to put optimized versions of the string
+ * routines.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#define __HAVE_ARCH_STRLEN 1 /* see arch/ia64/lib/strlen.S */
+#define __HAVE_ARCH_MEMSET 1 /* see arch/ia64/lib/memset.S */
+
+#endif /* _ASM_IA64_STRING_H */
--- /dev/null
+#ifndef _ASM_IA64_SYSTEM_H
+#define _ASM_IA64_SYSTEM_H
+
+/*
+ * System defines. Note that this is included both from .c and .S
+ * files, so it does only defines, not any C code. This is based
+ * on information published in the Processor Abstraction Layer
+ * and the System Abstraction Layer manual.
+ *
+ * Copyright (C) 1998-2000 Hewlett-Packard Co
+ * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999 Asit Mallick <asit.k.mallick@intel.com>
+ * Copyright (C) 1999 Don Dugger <don.dugger@intel.com>
+ */
+#include <linux/config.h>
+
+#include <asm/page.h>
+
+#define KERNEL_START (PAGE_OFFSET + 0x500000)
+
+/*
+ * The following #defines must match with vmlinux.lds.S:
+ */
+#define IVT_END_ADDR (KERNEL_START + 0x8000)
+#define ZERO_PAGE_ADDR (IVT_END_ADDR + 0*PAGE_SIZE)
+#define SWAPPER_PGD_ADDR (IVT_END_ADDR + 1*PAGE_SIZE)
+
+#define GATE_ADDR (0xa000000000000000 + PAGE_SIZE)
+
+#ifndef __ASSEMBLY__
+
+#include <linux/types.h>
+
+struct pci_vector_struct {
+ __u16 bus; /* PCI Bus number */
+ __u32 pci_id; /* ACPI split 16 bits device, 16 bits function (see section 6.1.1) */
+ __u8 pin; /* PCI PIN (0 = A, 1 = B, 2 = C, 3 = D) */
+ __u8 irq; /* IRQ assigned */
+};
+
+extern struct ia64_boot_param {
+ __u64 command_line; /* physical address of command line arguments */
+ __u64 efi_systab; /* physical address of EFI system table */
+ __u64 efi_memmap; /* physical address of EFI memory map */
+ __u64 efi_memmap_size; /* size of EFI memory map */
+ __u64 efi_memdesc_size; /* size of an EFI memory map descriptor */
+ __u32 efi_memdesc_version; /* memory descriptor version */
+ struct {
+ __u16 num_cols; /* number of columns on console output device */
+ __u16 num_rows; /* number of rows on console output device */
+ __u16 orig_x; /* cursor's x position */
+ __u16 orig_y; /* cursor's y position */
+ } console_info;
+ __u16 num_pci_vectors; /* number of ACPI derived PCI IRQ's*/
+ __u64 pci_vectors; /* physical address of PCI data (pci_vector_struct)*/
+ __u64 fpswa; /* physical address of the fpswa interface */
+} ia64_boot_param;
+
+extern inline void
+ia64_insn_group_barrier (void)
+{
+ __asm__ __volatile__ (";;" ::: "memory");
+}
+
+/*
+ * Macros to force memory ordering. In these descriptions, "previous"
+ * and "subsequent" refer to program order; "visible" means that all
+ * architecturally visible effects of a memory access have occurred
+ * (at a minimum, this means the memory has been read or written).
+ *
+ * wmb(): Guarantees that all preceding stores to memory-
+ * like regions are visible before any subsequent
+ * stores and that all following stores will be
+ * visible only after all previous stores.
+ * rmb(): Like wmb(), but for reads.
+ * mb(): wmb()/rmb() combo, i.e., all previous memory
+ * accesses are visible before all subsequent
+ * accesses and vice versa. This is also known as
+ * a "fence."
+ *
+ * Note: "mb()" and its variants cannot be used as a fence to order
+ * accesses to memory mapped I/O registers. For that, mf.a needs to
+ * be used. However, we don't want to always use mf.a because (a)
+ * it's (presumably) much slower than mf and (b) mf.a is supported for
+ * sequential memory pages only.
+ */
+#define mb() __asm__ __volatile__ ("mf" ::: "memory")
+#define rmb() mb()
+#define wmb() mb()
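+
+/*
+ * Usage sketch (illustrative only, not compiled): the classic
+ * producer/consumer pattern these barriers exist for.  The producer
+ * publishes the data before the flag; the consumer reads the flag
+ * before the data.  Variable names are illustrative.
+ */
+#if 0
+ /* producer: */
+ data = value;
+ wmb(); /* make data visible before flag */
+ flag = 1;
+
+ /* consumer: */
+ while (!flag)
+ ;
+ rmb(); /* read flag before reading data */
+ use(data);
+#endif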
+
+/*
+ * XXX check on these---I suspect what Linus really wants here is
+ * acquire vs release semantics but we can't discuss this stuff with
+ * Linus just yet. Grrr...
+ */
+#define set_mb(var, value) do { (var) = (value); mb(); } while (0)
+#define set_rmb(var, value) do { (var) = (value); mb(); } while (0)
+#define set_wmb(var, value) do { (var) = (value); mb(); } while (0)
+
+/*
+ * The group barrier in front of the rsm & ssm are necessary to ensure
+ * that none of the previous instructions in the same group are
+ * affected by the rsm/ssm.
+ */
+/* For spinlocks etc */
+
+#ifdef CONFIG_IA64_DEBUG_IRQ
+
+ extern unsigned long last_cli_ip;
+
+# define local_irq_save(x) \
+do { \
+ unsigned long ip, psr; \
+ \
+ __asm__ __volatile__ ("mov %0=psr;; rsm psr.i;;" : "=r" (psr) :: "memory"); \
+ if (psr & (1UL << 14)) { \
+ __asm__ ("mov %0=ip" : "=r"(ip)); \
+ last_cli_ip = ip; \
+ } \
+ (x) = psr; \
+} while (0)
+
+# define local_irq_disable() \
+do { \
+ unsigned long ip, psr; \
+ \
+ __asm__ __volatile__ ("mov %0=psr;; rsm psr.i;;" : "=r" (psr) :: "memory"); \
+ if (psr & (1UL << 14)) { \
+ __asm__ ("mov %0=ip" : "=r"(ip)); \
+ last_cli_ip = ip; \
+ } \
+} while (0)
+
+# define local_irq_restore(x) \
+do { \
+ unsigned long ip, old_psr, psr = (x); \
+ \
+ __asm__ __volatile__ ("mov %0=psr; mov psr.l=%1;; srlz.d" \
+ : "=&r" (old_psr) : "r" (psr) : "memory"); \
+ if ((old_psr & (1UL << 14)) && !(psr & (1UL << 14))) { \
+ __asm__ ("mov %0=ip" : "=r"(ip)); \
+ last_cli_ip = ip; \
+ } \
+} while (0)
+
+#else /* !CONFIG_IA64_DEBUG_IRQ */
+ /* clearing of psr.i is implicitly serialized (visible by next insn) */
+# define local_irq_save(x) __asm__ __volatile__ ("mov %0=psr;; rsm psr.i;;" \
+ : "=r" (x) :: "memory")
+# define local_irq_disable() __asm__ __volatile__ (";; rsm psr.i;;" ::: "memory")
+/* (potentially) setting psr.i requires data serialization: */
+# define local_irq_restore(x) __asm__ __volatile__ ("mov psr.l=%0;; srlz.d" \
+ :: "r" (x) : "memory")
+#endif /* !CONFIG_IA64_DEBUG_IRQ */
+
+#define local_irq_enable() __asm__ __volatile__ (";; ssm psr.i;; srlz.d" ::: "memory")
+
+#define __cli() local_irq_disable ()
+#define __save_flags(flags) __asm__ __volatile__ ("mov %0=psr" : "=r" (flags) :: "memory")
+#define __save_and_cli(flags) local_irq_save(flags)
+#define save_and_cli(flags) __save_and_cli(flags)
+
+
+#ifdef CONFIG_IA64_SOFTSDV_HACKS
+/*
+ * Yech. SoftSDV has a slight problem with psr.i and itc/itm. If
+ * PSR.i = 0 and ITC == ITM, you don't get the timer tick posted. So,
+ * I'll check if ITC is larger than ITM here and reset if necessary.
+ * I may miss a tick or two.
+ *
+ * Don't include asm/delay.h; it causes include loops that are
+ * mind-numbingly hard to follow.
+ */
+
+#define get_itc(x) __asm__ __volatile__("mov %0=ar.itc" : "=r"((x)) :: "memory")
+#define get_itm(x) __asm__ __volatile__("mov %0=cr.itm" : "=r"((x)) :: "memory")
+#define set_itm(x) __asm__ __volatile__("mov cr.itm=%0" :: "r"((x)) : "memory")
+
+#define __restore_flags(x) \
+do { \
+ unsigned long itc, itm; \
+ local_irq_restore(x); \
+ get_itc(itc); \
+ get_itm(itm); \
+ if (itc > itm) \
+ set_itm(itc + 10); \
+} while (0)
+
+#define __sti() \
+do { \
+ unsigned long itc, itm; \
+ local_irq_enable(); \
+ get_itc(itc); \
+ get_itm(itm); \
+ if (itc > itm) \
+ set_itm(itc + 10); \
+} while (0)
+
+#else /* !CONFIG_IA64_SOFTSDV_HACKS */
+
+#define __sti() local_irq_enable ()
+#define __restore_flags(flags) local_irq_restore(flags)
+
+#endif /* !CONFIG_IA64_SOFTSDV_HACKS */
+
+#ifdef CONFIG_SMP
+ extern void __global_cli (void);
+ extern void __global_sti (void);
+ extern unsigned long __global_save_flags (void);
+ extern void __global_restore_flags (unsigned long);
+# define cli() __global_cli()
+# define sti() __global_sti()
+# define save_flags(flags) ((flags) = __global_save_flags())
+# define restore_flags(flags) __global_restore_flags(flags)
+#else /* !CONFIG_SMP */
+# define cli() __cli()
+# define sti() __sti()
+# define save_flags(flags) __save_flags(flags)
+# define restore_flags(flags) __restore_flags(flags)
+#endif /* !CONFIG_SMP */
+
+/*
+ * Force an unresolved reference if someone tries to use
+ * ia64_fetch_and_add() with a bad value.
+ */
+extern unsigned long __bad_size_for_ia64_fetch_and_add (void);
+extern unsigned long __bad_increment_for_ia64_fetch_and_add (void);
+
+#define IA64_FETCHADD(tmp,v,n,sz) \
+({ \
+ switch (sz) { \
+ case 4: \
+ __asm__ __volatile__ ("fetchadd4.rel %0=%1,%3" \
+ : "=r"(tmp), "=m"(__atomic_fool_gcc(v)) \
+ : "m" (__atomic_fool_gcc(v)), "i"(n)); \
+ break; \
+ \
+ case 8: \
+ __asm__ __volatile__ ("fetchadd8.rel %0=%1,%3" \
+ : "=r"(tmp), "=m"(__atomic_fool_gcc(v)) \
+ : "m" (__atomic_fool_gcc(v)), "i"(n)); \
+ break; \
+ \
+ default: \
+ __bad_size_for_ia64_fetch_and_add(); \
+ } \
+})
+
+#define ia64_fetch_and_add(i,v) \
+({ \
+ __u64 _tmp; \
+ volatile __typeof__(*(v)) *_v = (v); \
+ switch (i) { \
+ case -16: IA64_FETCHADD(_tmp, _v, -16, sizeof(*(v))); break; \
+ case -8: IA64_FETCHADD(_tmp, _v, -8, sizeof(*(v))); break; \
+ case -4: IA64_FETCHADD(_tmp, _v, -4, sizeof(*(v))); break; \
+ case -1: IA64_FETCHADD(_tmp, _v, -1, sizeof(*(v))); break; \
+ case 1: IA64_FETCHADD(_tmp, _v, 1, sizeof(*(v))); break; \
+ case 4: IA64_FETCHADD(_tmp, _v, 4, sizeof(*(v))); break; \
+ case 8: IA64_FETCHADD(_tmp, _v, 8, sizeof(*(v))); break; \
+ case 16: IA64_FETCHADD(_tmp, _v, 16, sizeof(*(v))); break; \
+ default: \
+ _tmp = __bad_increment_for_ia64_fetch_and_add(); \
+ break; \
+ } \
+ if (sizeof(*(v)) == 4) \
+ _tmp = (int) _tmp; \
+ _tmp + (i); /* return new value */ \
+})
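+
+/*
+ * Usage sketch (illustrative only, not compiled):
+ * ia64_fetch_and_add() returns the new value, so an atomic event
+ * counter reads like this (names are illustrative):
+ */
+#if 0
+static volatile __u32 demo_events;
+
+static __u32
+demo_count_event (void)
+{
+ return ia64_fetch_and_add(1, &demo_events); /* atomic, returns new value */
+}
+#endif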
+
+/*
+ * This function doesn't exist, so you'll get a linker error if
+ * something tries to do an invalid xchg().
+ */
+extern void __xchg_called_with_bad_pointer (void);
+
+static __inline__ unsigned long
+__xchg (unsigned long x, volatile void *ptr, int size)
+{
+ unsigned long result;
+
+ switch (size) {
+ case 1:
+ __asm__ __volatile ("xchg1 %0=%1,%2" : "=r" (result)
+ : "m" (*(char *) ptr), "r" (x) : "memory");
+ return result;
+
+ case 2:
+ __asm__ __volatile ("xchg2 %0=%1,%2" : "=r" (result)
+ : "m" (*(short *) ptr), "r" (x) : "memory");
+ return result;
+
+ case 4:
+ __asm__ __volatile ("xchg4 %0=%1,%2" : "=r" (result)
+ : "m" (*(int *) ptr), "r" (x) : "memory");
+ return result;
+
+ case 8:
+ __asm__ __volatile ("xchg8 %0=%1,%2" : "=r" (result)
+ : "m" (*(long *) ptr), "r" (x) : "memory");
+ return result;
+ }
+ __xchg_called_with_bad_pointer();
+ return x;
+}
+
+#define xchg(ptr,x) \
+ ((__typeof__(*(ptr))) __xchg ((unsigned long) (x), (ptr), sizeof(*(ptr))))
+#define tas(ptr) (xchg ((ptr), 1))
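+
+/*
+ * Usage sketch (illustrative only, not compiled): tas() is the
+ * classic test-and-set primitive, so a trivial busy-wait lock can be
+ * built on xchg() (names are illustrative):
+ */
+#if 0
+static volatile int demo_busy;
+
+static void
+demo_acquire (void)
+{
+ while (tas(&demo_busy)) /* spin until the old value was 0 */
+ ;
+}
+
+static void
+demo_release (void)
+{
+ demo_busy = 0;
+}
+#endif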
+
+/*
+ * Atomic compare and exchange. Compare OLD with MEM, if identical,
+ * store NEW in MEM. Return the initial value in MEM. Success is
+ * indicated by comparing RETURN with OLD.
+ */
+
+#define __HAVE_ARCH_CMPXCHG 1
+
+/*
+ * This function doesn't exist, so you'll get a linker error
+ * if something tries to do an invalid cmpxchg().
+ */
+extern long __cmpxchg_called_with_bad_pointer(void);
+
+struct __xchg_dummy { unsigned long a[100]; };
+#define __xg(x) (*(struct __xchg_dummy *)(x))
+
+#define ia64_cmpxchg(ptr,old,new,size) \
+({ \
+ __typeof__(ptr) _p_ = (ptr); \
+ __typeof__(new) _n_ = (new); \
+ __u64 _o_, _r_; \
+ \
+ switch (size) { \
+ case 1: _o_ = (__u8 ) (old); break; \
+ case 2: _o_ = (__u16) (old); break; \
+ case 4: _o_ = (__u32) (old); break; \
+ case 8: _o_ = (__u64) (old); break; \
+ default: break; \
+ } \
+ __asm__ __volatile__ ("mov ar.ccv=%0;;" :: "r"(_o_)); \
+ switch (size) { \
+ case 1: \
+ __asm__ __volatile__ ("cmpxchg1.rel %0=%2,%3,ar.ccv" \
+ : "=r"(_r_), "=m"(__xg(_p_)) \
+ : "m"(__xg(_p_)), "r"(_n_)); \
+ break; \
+ \
+ case 2: \
+ __asm__ __volatile__ ("cmpxchg2.rel %0=%2,%3,ar.ccv" \
+ : "=r"(_r_), "=m"(__xg(_p_)) \
+ : "m"(__xg(_p_)), "r"(_n_)); \
+ break; \
+ \
+ case 4: \
+ __asm__ __volatile__ ("cmpxchg4.rel %0=%2,%3,ar.ccv" \
+ : "=r"(_r_), "=m"(__xg(_p_)) \
+ : "m"(__xg(_p_)), "r"(_n_)); \
+ break; \
+ \
+ case 8: \
+ __asm__ __volatile__ ("cmpxchg8.rel %0=%2,%3,ar.ccv" \
+ : "=r"(_r_), "=m"(__xg(_p_)) \
+ : "m"(__xg(_p_)), "r"(_n_)); \
+ break; \
+ \
+ default: \
+ _r_ = __cmpxchg_called_with_bad_pointer(); \
+ break; \
+ } \
+ (__typeof__(old)) _r_; \
+})
+
+#define cmpxchg(ptr,o,n) ia64_cmpxchg((ptr), (o), (n), sizeof(*(ptr)))
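+
+/*
+ * Usage sketch (illustrative only, not compiled): the canonical
+ * cmpxchg() retry loop.  Success is detected by comparing the
+ * returned value against the value we expected to find:
+ */
+#if 0
+static void
+demo_atomic_or (volatile __u64 *word, __u64 mask)
+{
+ __u64 old;
+
+ do {
+ old = *word;
+ } while (cmpxchg(word, old, old | mask) != old);
+}
+#endif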
+
+#ifdef CONFIG_IA64_DEBUG_CMPXCHG
+# define CMPXCHG_BUGCHECK_DECL int _cmpxchg_bugcheck_count = 128;
+# define CMPXCHG_BUGCHECK(v) \
+ do { \
+ if (_cmpxchg_bugcheck_count-- <= 0) { \
+ void *ip; \
+ extern int printk(const char *fmt, ...); \
+ asm ("mov %0=ip" : "=r"(ip)); \
+ printk("CMPXCHG_BUGCHECK: stuck at %p on word %p\n", ip, (v)); \
+ break; \
+ } \
+ } while (0)
+#else /* !CONFIG_IA64_DEBUG_CMPXCHG */
+# define CMPXCHG_BUGCHECK_DECL
+# define CMPXCHG_BUGCHECK(v)
+#endif /* !CONFIG_IA64_DEBUG_CMPXCHG */
+
+#ifdef __KERNEL__
+
+extern void ia64_save_debug_regs (unsigned long *save_area);
+extern void ia64_load_debug_regs (unsigned long *save_area);
+
+#define prepare_to_switch() do { } while(0)
+
+#ifdef CONFIG_IA32_SUPPORT
+# define TASK_TO_PTREGS(t) \
+ ((struct pt_regs *)(((unsigned long)(t) + IA64_STK_OFFSET - IA64_PT_REGS_SIZE)))
+# define IS_IA32_PROCESS(regs) (ia64_psr(regs)->is != 0)
+# define IA32_FP_STATE(prev,next) \
+ if (IS_IA32_PROCESS(TASK_TO_PTREGS(prev))) { \
+ __asm__ __volatile__("mov %0=ar.fsr":"=r"((prev)->thread.fsr)); \
+ __asm__ __volatile__("mov %0=ar.fcr":"=r"((prev)->thread.fcr)); \
+ __asm__ __volatile__("mov %0=ar.fir":"=r"((prev)->thread.fir)); \
+ __asm__ __volatile__("mov %0=ar.fdr":"=r"((prev)->thread.fdr)); \
+ } \
+ if (IS_IA32_PROCESS(TASK_TO_PTREGS(next))) { \
+ __asm__ __volatile__("mov ar.fsr=%0"::"r"((next)->thread.fsr)); \
+ __asm__ __volatile__("mov ar.fcr=%0"::"r"((next)->thread.fcr)); \
+ __asm__ __volatile__("mov ar.fir=%0"::"r"((next)->thread.fir)); \
+ __asm__ __volatile__("mov ar.fdr=%0"::"r"((next)->thread.fdr)); \
+ }
+#else /* !CONFIG_IA32_SUPPORT */
+# define IA32_FP_STATE(prev,next)
+# define IS_IA32_PROCESS(regs) 0
+#endif /* CONFIG_IA32_SUPPORT */
+
+/*
+ * Context switch from one thread to another. If the two threads have
+ * different address spaces, schedule() has already taken care of
+ * switching to the new address space by calling switch_mm().
+ *
+ * Disabling access to the fph partition and the debug-register
+ * context switch MUST be done before calling ia64_switch_to() since a
+ * newly created thread returns directly to
+ * ia64_ret_from_syscall_clear_r8.
+ */
+extern struct task_struct *ia64_switch_to (void *next_task);
+#define __switch_to(prev,next,last) do { \
+ ia64_psr(ia64_task_regs(next))->dfh = (ia64_get_fpu_owner() != (next)); \
+ if ((prev)->thread.flags & IA64_THREAD_DBG_VALID) { \
+ ia64_save_debug_regs(&(prev)->thread.dbr[0]); \
+ } \
+ if ((next)->thread.flags & IA64_THREAD_DBG_VALID) { \
+ ia64_load_debug_regs(&(next)->thread.dbr[0]); \
+ } \
+ IA32_FP_STATE(prev,next); \
+ (last) = ia64_switch_to((next)); \
+} while (0)
+
+#ifdef CONFIG_SMP
+ /*
+ * In the SMP case, we save the fph state when context-switching
+ * away from a thread that owned and modified fph. This way, when
+ * the thread gets scheduled on another CPU, the CPU can pick up the
+ * state from task->thread.fph, avoiding the complication of having
+ * to fetch the latest fph state from another CPU. If the thread
+ * happens to be rescheduled on the same CPU later on and nobody
+ * else has touched the FPU in the meantime, the thread will fault
+ * upon the first access to fph but since the state in fph is still
+ * valid, no other overheads are incurred. In other words, CPU
+ * affinity is a Good Thing.
+ */
+# define switch_to(prev,next,last) do { \
+ if (ia64_get_fpu_owner() == (prev) && ia64_psr(ia64_task_regs(prev))->mfh) { \
+ (prev)->thread.flags |= IA64_THREAD_FPH_VALID; \
+ __ia64_save_fpu((prev)->thread.fph); \
+ } \
+ __switch_to(prev,next,last); \
+ } while (0)
+#else
+# define switch_to(prev,next,last) __switch_to(prev,next,last)
+#endif
+
+#endif /* __KERNEL__ */
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_IA64_SYSTEM_H */
--- /dev/null
+#ifndef _ASM_IA64_TERMBITS_H
+#define _ASM_IA64_TERMBITS_H
+
+/*
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * 99/01/28 Added new baudrates
+ */
+
+#include <linux/posix_types.h>
+
+typedef unsigned char cc_t;
+typedef unsigned int speed_t;
+typedef unsigned int tcflag_t;
+
+#define NCCS 19
+struct termios {
+ tcflag_t c_iflag; /* input mode flags */
+ tcflag_t c_oflag; /* output mode flags */
+ tcflag_t c_cflag; /* control mode flags */
+ tcflag_t c_lflag; /* local mode flags */
+ cc_t c_line; /* line discipline */
+ cc_t c_cc[NCCS]; /* control characters */
+};
+
+/* c_cc characters */
+#define VINTR 0
+#define VQUIT 1
+#define VERASE 2
+#define VKILL 3
+#define VEOF 4
+#define VTIME 5
+#define VMIN 6
+#define VSWTC 7
+#define VSTART 8
+#define VSTOP 9
+#define VSUSP 10
+#define VEOL 11
+#define VREPRINT 12
+#define VDISCARD 13
+#define VWERASE 14
+#define VLNEXT 15
+#define VEOL2 16
+
+/* c_iflag bits */
+#define IGNBRK 0000001
+#define BRKINT 0000002
+#define IGNPAR 0000004
+#define PARMRK 0000010
+#define INPCK 0000020
+#define ISTRIP 0000040
+#define INLCR 0000100
+#define IGNCR 0000200
+#define ICRNL 0000400
+#define IUCLC 0001000
+#define IXON 0002000
+#define IXANY 0004000
+#define IXOFF 0010000
+#define IMAXBEL 0020000
+
+/* c_oflag bits */
+#define OPOST 0000001
+#define OLCUC 0000002
+#define ONLCR 0000004
+#define OCRNL 0000010
+#define ONOCR 0000020
+#define ONLRET 0000040
+#define OFILL 0000100
+#define OFDEL 0000200
+#define NLDLY 0000400
+#define NL0 0000000
+#define NL1 0000400
+#define CRDLY 0003000
+#define CR0 0000000
+#define CR1 0001000
+#define CR2 0002000
+#define CR3 0003000
+#define TABDLY 0014000
+#define TAB0 0000000
+#define TAB1 0004000
+#define TAB2 0010000
+#define TAB3 0014000
+#define XTABS 0014000
+#define BSDLY 0020000
+#define BS0 0000000
+#define BS1 0020000
+#define VTDLY 0040000
+#define VT0 0000000
+#define VT1 0040000
+#define FFDLY 0100000
+#define FF0 0000000
+#define FF1 0100000
+
+/* c_cflag bit meaning */
+#define CBAUD 0010017
+#define B0 0000000 /* hang up */
+#define B50 0000001
+#define B75 0000002
+#define B110 0000003
+#define B134 0000004
+#define B150 0000005
+#define B200 0000006
+#define B300 0000007
+#define B600 0000010
+#define B1200 0000011
+#define B1800 0000012
+#define B2400 0000013
+#define B4800 0000014
+#define B9600 0000015
+#define B19200 0000016
+#define B38400 0000017
+#define EXTA B19200
+#define EXTB B38400
+#define CSIZE 0000060
+#define CS5 0000000
+#define CS6 0000020
+#define CS7 0000040
+#define CS8 0000060
+#define CSTOPB 0000100
+#define CREAD 0000200
+#define PARENB 0000400
+#define PARODD 0001000
+#define HUPCL 0002000
+#define CLOCAL 0004000
+#define CBAUDEX 0010000
+#define B57600 0010001
+#define B115200 0010002
+#define B230400 0010003
+#define B460800 0010004
+#define B500000 0010005
+#define B576000 0010006
+#define B921600 0010007
+#define B1000000 0010010
+#define B1152000 0010011
+#define B1500000 0010012
+#define B2000000 0010013
+#define B2500000 0010014
+#define B3000000 0010015
+#define B3500000 0010016
+#define B4000000 0010017
+#define CIBAUD 002003600000 /* input baud rate (not used) */
+#define CMSPAR 010000000000 /* mark or space (stick) parity */
+#define CRTSCTS 020000000000 /* flow control */
+
+/* c_lflag bits */
+#define ISIG 0000001
+#define ICANON 0000002
+#define XCASE 0000004
+#define ECHO 0000010
+#define ECHOE 0000020
+#define ECHOK 0000040
+#define ECHONL 0000100
+#define NOFLSH 0000200
+#define TOSTOP 0000400
+#define ECHOCTL 0001000
+#define ECHOPRT 0002000
+#define ECHOKE 0004000
+#define FLUSHO 0010000
+#define PENDIN 0040000
+#define IEXTEN 0100000
+
+/* tcflow() and TCXONC use these */
+#define TCOOFF 0
+#define TCOON 1
+#define TCIOFF 2
+#define TCION 3
+
+/* tcflush() and TCFLSH use these */
+#define TCIFLUSH 0
+#define TCOFLUSH 1
+#define TCIOFLUSH 2
+
+/* tcsetattr uses these */
+#define TCSANOW 0
+#define TCSADRAIN 1
+#define TCSAFLUSH 2
+
+#endif /* _ASM_IA64_TERMBITS_H */
--- /dev/null
+#ifndef _ASM_IA64_TERMIOS_H
+#define _ASM_IA64_TERMIOS_H
+
+/*
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * 99/01/28 Added N_IRDA and N_SMSBLOCK
+ */
+
+#include <asm/termbits.h>
+#include <asm/ioctls.h>
+
+struct winsize {
+ unsigned short ws_row;
+ unsigned short ws_col;
+ unsigned short ws_xpixel;
+ unsigned short ws_ypixel;
+};
+
+#define NCC 8
+struct termio {
+ unsigned short c_iflag; /* input mode flags */
+ unsigned short c_oflag; /* output mode flags */
+ unsigned short c_cflag; /* control mode flags */
+ unsigned short c_lflag; /* local mode flags */
+ unsigned char c_line; /* line discipline */
+ unsigned char c_cc[NCC]; /* control characters */
+};
+
+/* modem lines */
+#define TIOCM_LE 0x001
+#define TIOCM_DTR 0x002
+#define TIOCM_RTS 0x004
+#define TIOCM_ST 0x008
+#define TIOCM_SR 0x010
+#define TIOCM_CTS 0x020
+#define TIOCM_CAR 0x040
+#define TIOCM_RNG 0x080
+#define TIOCM_DSR 0x100
+#define TIOCM_CD TIOCM_CAR
+#define TIOCM_RI TIOCM_RNG
+#define TIOCM_OUT1 0x2000
+#define TIOCM_OUT2 0x4000
+#define TIOCM_LOOP 0x8000
+
+/* ioctl (fd, TIOCSERGETLSR, &result) where result may be as below */
+#define TIOCSER_TEMT 0x01 /* Transmitter physically empty */
+
+/* line disciplines */
+#define N_TTY 0
+#define N_SLIP 1
+#define N_MOUSE 2
+#define N_PPP 3
+#define N_STRIP 4
+#define N_AX25 5
+#define N_X25 6 /* X.25 async */
+#define N_6PACK 7
+#define N_MASC 8 /* Reserved for Mobitex module <kaz@cafe.net> */
+#define N_R3964 9 /* Reserved for Simatic R3964 module */
+#define N_PROFIBUS_FDL 10 /* Reserved for Profibus <Dave@mvhi.com> */
+#define N_IRDA 11 /* Linux IR - http://www.cs.uit.no/~dagb/irda/irda.html */
+#define N_SMSBLOCK 12 /* SMS block mode - for talking to GSM data cards about SMS msgs */
+#define N_HDLC 13 /* synchronous HDLC */
+#define N_SYNC_PPP 14 /* synchronous PPP */
+
+# ifdef __KERNEL__
+
+/* intr=^C quit=^\ erase=del kill=^U
+ eof=^D vtime=\0 vmin=\1 sxtc=\0
+ start=^Q stop=^S susp=^Z eol=\0
+ reprint=^R discard=^U werase=^W lnext=^V
+ eol2=\0
+*/
+#define INIT_C_CC "\003\034\177\025\004\0\1\0\021\023\032\0\022\017\027\026\0"
+
+/*
+ * Translate a "termio" structure into a "termios". Ugh.
+ */
+#define SET_LOW_TERMIOS_BITS(termios, termio, x) { \
+ unsigned short __tmp; \
+ get_user(__tmp,&(termio)->x); \
+ *(unsigned short *) &(termios)->x = __tmp; \
+}
+
+#define user_termio_to_kernel_termios(termios, termio) \
+({ \
+ SET_LOW_TERMIOS_BITS(termios, termio, c_iflag); \
+ SET_LOW_TERMIOS_BITS(termios, termio, c_oflag); \
+ SET_LOW_TERMIOS_BITS(termios, termio, c_cflag); \
+ SET_LOW_TERMIOS_BITS(termios, termio, c_lflag); \
+ copy_from_user((termios)->c_cc, (termio)->c_cc, NCC); \
+})
+
+/*
+ * Translate a "termios" structure into a "termio". Ugh.
+ */
+#define kernel_termios_to_user_termio(termio, termios) \
+({ \
+ put_user((termios)->c_iflag, &(termio)->c_iflag); \
+ put_user((termios)->c_oflag, &(termio)->c_oflag); \
+ put_user((termios)->c_cflag, &(termio)->c_cflag); \
+ put_user((termios)->c_lflag, &(termio)->c_lflag); \
+ put_user((termios)->c_line, &(termio)->c_line); \
+ copy_to_user((termio)->c_cc, (termios)->c_cc, NCC); \
+})
+
+#define user_termios_to_kernel_termios(k, u) copy_from_user(k, u, sizeof(struct termios))
+#define kernel_termios_to_user_termios(u, k) copy_to_user(u, k, sizeof(struct termios))
+
+# endif /* __KERNEL__ */
+
+#endif /* _ASM_IA64_TERMIOS_H */
--- /dev/null
+#ifndef _ASM_IA64_TIMEX_H
+#define _ASM_IA64_TIMEX_H
+
+/*
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#define CLOCK_TICK_RATE 1193180 /* Underlying HZ XXX fix me! */
+
+typedef unsigned long cycles_t;
+extern cycles_t cacheflush_time;
+
+static inline cycles_t
+get_cycles (void)
+{
+ cycles_t ret;
+
+ __asm__ __volatile__ ("mov %0=ar.itc" : "=r"(ret));
+ return ret;
+}
+
+#endif /* _ASM_IA64_TIMEX_H */
--- /dev/null
+#ifndef _ASM_IA64_TYPES_H
+#define _ASM_IA64_TYPES_H
+
+/*
+ * This file is never included by application software unless
+ * explicitly requested (e.g., via linux/types.h) in which case the
+ * application is Linux specific so (user-) name space pollution is
+ * not a major issue. However, for interoperability, libraries still
+ * need to be careful to avoid name clashes.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#ifdef __ASSEMBLY__
+# define __IA64_UL(x) x
+# define __IA64_UL_CONST(x) x
+#else
+# define __IA64_UL(x) ((unsigned long)x)
+# define __IA64_UL_CONST(x) x##UL
+#endif
+
+#ifndef __ASSEMBLY__
+
+typedef unsigned int umode_t;
+
+/*
+ * __xx is ok: it doesn't pollute the POSIX namespace. Use these in the
+ * header files exported to user space
+ */
+
+typedef __signed__ char __s8;
+typedef unsigned char __u8;
+
+typedef __signed__ short __s16;
+typedef unsigned short __u16;
+
+typedef __signed__ int __s32;
+typedef unsigned int __u32;
+
+/*
+ * There are 32-bit compilers for the ia-64 out there..
+ */
+# if ((~0UL) == 0xffffffff)
+# if defined(__GNUC__) && !defined(__STRICT_ANSI__)
+typedef __signed__ long long __s64;
+typedef unsigned long long __u64;
+# endif
+# else
+typedef __signed__ long __s64;
+typedef unsigned long __u64;
+# endif
+
+/*
+ * These aren't exported outside the kernel to avoid name space clashes
+ */
+# ifdef __KERNEL__
+
+typedef signed char s8;
+typedef unsigned char u8;
+
+typedef signed short s16;
+typedef unsigned short u16;
+
+typedef signed int s32;
+typedef unsigned int u32;
+
+/*
+ * There are 32-bit compilers for the ia-64 out there... (don't rely
+ * on cpp because that may cause problems in a 32->64 bit
+ * cross-compilation environment).
+ */
+# ifdef __LP64__
+
+typedef signed long s64;
+typedef unsigned long u64;
+#define BITS_PER_LONG 64
+
+# else
+
+typedef signed long long s64;
+typedef unsigned long long u64;
+#define BITS_PER_LONG 32
+
+# endif
+
+/* DMA addresses are 64-bits wide, in general. */
+
+typedef u64 dma_addr_t;
+
+# endif /* __KERNEL__ */
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_IA64_TYPES_H */
--- /dev/null
+#ifndef _ASM_IA64_UACCESS_H
+#define _ASM_IA64_UACCESS_H
+
+/*
+ * This file defines various macros to transfer memory areas across
+ * the user/kernel boundary. This needs to be done carefully because
+ * this code is executed in kernel mode and uses user-specified
+ * addresses. Thus, we need to be careful not to let the user
+ * trick us into accessing kernel memory that would normally be
+ * inaccessible. This code is also fairly performance sensitive,
+ * so we want to spend as little time doing safety checks as
+ * possible.
+ *
+ * To make matters a bit more interesting, these macros are sometimes
+ * also called from within the kernel itself, in which case the address
+ * validity check must be skipped. The get_fs() macro tells us what
+ * to do: if get_fs()==USER_DS, checking is performed, if
+ * get_fs()==KERNEL_DS, checking is bypassed.
+ *
+ * Note that even if the memory area specified by the user is in a
+ * valid address range, it is still possible that we'll get a page
+ * fault while accessing it. This is handled by filling out an
+ * exception handler fixup entry for each instruction that has the
+ * potential to fault. When such a fault occurs, the page fault
+ * handler checks to see whether the faulting instruction has a fixup
+ * associated and, if so, sets r8 to -EFAULT and clears r9 to 0 and
+ * then resumes execution at the continuation point.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <linux/errno.h>
+#include <linux/sched.h>
+
+/*
+ * For historical reasons, the following macros are grossly misnamed:
+ */
+#define KERNEL_DS ((mm_segment_t) { ~0UL }) /* cf. access_ok() */
+#define USER_DS ((mm_segment_t) { TASK_SIZE-1 }) /* cf. access_ok() */
+
+#define VERIFY_READ 0
+#define VERIFY_WRITE 1
+
+#define get_ds() (KERNEL_DS)
+#define get_fs() (current->addr_limit)
+#define set_fs(x) (current->addr_limit = (x))
+
+#define segment_eq(a,b) ((a).seg == (b).seg)
+
+/*
+ * When accessing user memory, we need to make sure the entire area
+ * really is in user-level space. In order to do this efficiently, we
+ * make sure that the page at address TASK_SIZE is never valid (we do
+ * this by selecting VMALLOC_START as TASK_SIZE+PAGE_SIZE). This way,
+ * we can simply check whether the starting address is < TASK_SIZE
+ * and, if so, start accessing the memory. If the user specified bad
+ * length, we will fault on the NaT page and then return the
+ * appropriate error.
+ */
+#define __access_ok(addr,size,segment) (((unsigned long) (addr)) <= (segment).seg)
+#define access_ok(type,addr,size) __access_ok((addr),(size),get_fs())
+
+extern inline int
+verify_area (int type, const void *addr, unsigned long size)
+{
+ return access_ok(type,addr,size) ? 0 : -EFAULT;
+}
+
+/*
+ * These are the main single-value transfer routines. They automatically
+ * use the right size if we just have the right pointer type.
+ *
+ * As IA-64 uses the same address space for kernel and user
+ * data, we can just do these as direct assignments. (Of course, the
+ * exception handling means that it's no longer "just"...)
+ *
+ * Careful to not
+ * (a) re-use the arguments for side effects (sizeof/typeof is ok)
+ * (b) require any knowledge of processes at this stage
+ */
+#define put_user(x,ptr) __put_user_check((__typeof__(*(ptr)))(x),(ptr),sizeof(*(ptr)),get_fs())
+#define get_user(x,ptr) __get_user_check((x),(ptr),sizeof(*(ptr)),get_fs())
+
+/*
+ * The "__xxx" versions do not do address space checking, useful when
+ * doing multiple accesses to the same area (the programmer has to do the
+ * checks by hand with "access_ok()")
+ */
+#define __put_user(x,ptr) __put_user_nocheck((__typeof__(*(ptr)))(x),(ptr),sizeof(*(ptr)))
+#define __get_user(x,ptr) __get_user_nocheck((x),(ptr),sizeof(*(ptr)))
+
+/*
+ * The "xxx_ret" versions return constant specified in third argument, if
+ * something bad happens. These macros can be optimized for the
+ * case of just returning from the function xxx_ret is used.
+ */
+#define put_user_ret(x,ptr,ret) ({ if (put_user(x,ptr)) return ret; })
+#define get_user_ret(x,ptr,ret) ({ if (get_user(x,ptr)) return ret; })
+#define __put_user_ret(x,ptr,ret) ({ if (__put_user(x,ptr)) return ret; })
+#define __get_user_ret(x,ptr,ret) ({ if (__get_user(x,ptr)) return ret; })
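+
+/*
+ * Usage sketch (illustrative only, not compiled): a typical
+ * single-value exchange with user space.  get_user()/put_user()
+ * return 0 on success and -EFAULT on a bad address (names are
+ * illustrative):
+ */
+#if 0
+static long
+demo_swap_value (int *uaddr, int kernel_value)
+{
+ int tmp;
+
+ if (get_user(tmp, uaddr))
+ return -EFAULT;
+ if (put_user(kernel_value, uaddr))
+ return -EFAULT;
+ return tmp;
+}
+#endif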
+
+extern void __get_user_unknown (void);
+
+#define __get_user_nocheck(x,ptr,size) \
+({ \
+ register long __gu_err __asm__ ("r8") = 0; \
+ register long __gu_val __asm__ ("r9") = 0; \
+ switch (size) { \
+ case 1: __get_user_8(ptr); break; \
+ case 2: __get_user_16(ptr); break; \
+ case 4: __get_user_32(ptr); break; \
+ case 8: __get_user_64(ptr); break; \
+ default: __get_user_unknown(); break; \
+ } \
+ (x) = (__typeof__(*(ptr))) __gu_val; \
+ __gu_err; \
+})
+
+#define __get_user_check(x,ptr,size,segment) \
+({ \
+ register long __gu_err __asm__ ("r8") = -EFAULT; \
+ register long __gu_val __asm__ ("r9") = 0; \
+ const __typeof__(*(ptr)) *__gu_addr = (ptr); \
+ if (__access_ok((long)__gu_addr,size,segment)) { \
+ __gu_err = 0; \
+ switch (size) { \
+ case 1: __get_user_8(__gu_addr); break; \
+ case 2: __get_user_16(__gu_addr); break; \
+ case 4: __get_user_32(__gu_addr); break; \
+ case 8: __get_user_64(__gu_addr); break; \
+ default: __get_user_unknown(); break; \
+ } \
+ } \
+ (x) = (__typeof__(*(ptr))) __gu_val; \
+ __gu_err; \
+})
+
+struct __large_struct { unsigned long buf[100]; };
+#define __m(x) (*(struct __large_struct *)(x))
+
+#define __get_user_64(addr) \
+ __asm__ ("\n1:\tld8 %0=%2\t// %0 and %1 get overwritten by exception handler\n" \
+ "2:\n" \
+ "\t.section __ex_table,\"a\"\n" \
+ "\t\tdata4 @gprel(1b)\n" \
+ "\t\tdata4 (2b-1b)|1\n" \
+ "\t.previous" \
+ : "=r"(__gu_val), "=r"(__gu_err) \
+ : "m"(__m(addr)), "1"(__gu_err));
+
+#define __get_user_32(addr) \
+ __asm__ ("\n1:\tld4 %0=%2\t// %0 and %1 get overwritten by exception handler\n" \
+ "2:\n" \
+ "\t.section __ex_table,\"a\"\n" \
+ "\t\tdata4 @gprel(1b)\n" \
+ "\t\tdata4 (2b-1b)|1\n" \
+ "\t.previous" \
+ : "=r"(__gu_val), "=r"(__gu_err) \
+ : "m"(__m(addr)), "1"(__gu_err));
+
+#define __get_user_16(addr) \
+ __asm__ ("\n1:\tld2 %0=%2\t// %0 and %1 get overwritten by exception handler\n" \
+ "2:\n" \
+ "\t.section __ex_table,\"a\"\n" \
+ "\t\tdata4 @gprel(1b)\n" \
+ "\t\tdata4 (2b-1b)|1\n" \
+ "\t.previous" \
+ : "=r"(__gu_val), "=r"(__gu_err) \
+ : "m"(__m(addr)), "1"(__gu_err));
+
+#define __get_user_8(addr) \
+ __asm__ ("\n1:\tld1 %0=%2\t// %0 and %1 get overwritten by exception handler\n" \
+ "2:\n" \
+ "\t.section __ex_table,\"a\"\n" \
+ "\t\tdata4 @gprel(1b)\n" \
+ "\t\tdata4 (2b-1b)|1\n" \
+ "\t.previous" \
+ : "=r"(__gu_val), "=r"(__gu_err) \
+ : "m"(__m(addr)), "1"(__gu_err));
+
+
+extern void __put_user_unknown (void);
+
+#define __put_user_nocheck(x,ptr,size) \
+({ \
+ register long __pu_err __asm__ ("r8") = 0; \
+ switch (size) { \
+ case 1: __put_user_8(x,ptr); break; \
+ case 2: __put_user_16(x,ptr); break; \
+ case 4: __put_user_32(x,ptr); break; \
+ case 8: __put_user_64(x,ptr); break; \
+ default: __put_user_unknown(); break; \
+ } \
+ __pu_err; \
+})
+
+#define __put_user_check(x,ptr,size,segment) \
+({ \
+ register long __pu_err __asm__ ("r8") = -EFAULT; \
+ __typeof__(*(ptr)) *__pu_addr = (ptr); \
+ if (__access_ok((long)__pu_addr,size,segment)) { \
+ __pu_err = 0; \
+ switch (size) { \
+ case 1: __put_user_8(x,__pu_addr); break; \
+ case 2: __put_user_16(x,__pu_addr); break; \
+ case 4: __put_user_32(x,__pu_addr); break; \
+ case 8: __put_user_64(x,__pu_addr); break; \
+ default: __put_user_unknown(); break; \
+ } \
+ } \
+ __pu_err; \
+})
+
+/*
+ * The "__put_user_xx()" macros tell gcc they read from memory
+ * instead of writing: this is because they do not write to
+ * any memory gcc knows about, so there are no aliasing issues
+ */
+#define __put_user_64(x,addr) \
+ __asm__ __volatile__ ( \
+ "\n1:\tst8 %1=%r2\t// %0 gets overwritten by exception handler\n" \
+ "2:\n" \
+ "\t.section __ex_table,\"a\"\n" \
+ "\t\tdata4 @gprel(1b)\n" \
+ "\t\tdata4 2b-1b\n" \
+ "\t.previous" \
+ : "=r"(__pu_err) \
+ : "m"(__m(addr)), "rO"(x), "0"(__pu_err))
+
+#define __put_user_32(x,addr) \
+ __asm__ __volatile__ ( \
+ "\n1:\tst4 %1=%r2\t// %0 gets overwritten by exception handler\n" \
+ "2:\n" \
+ "\t.section __ex_table,\"a\"\n" \
+ "\t\tdata4 @gprel(1b)\n" \
+ "\t\tdata4 2b-1b\n" \
+ "\t.previous" \
+ : "=r"(__pu_err) \
+ : "m"(__m(addr)), "rO"(x), "0"(__pu_err))
+
+#define __put_user_16(x,addr) \
+ __asm__ __volatile__ ( \
+ "\n1:\tst2 %1=%r2\t// %0 gets overwritten by exception handler\n" \
+ "2:\n" \
+ "\t.section __ex_table,\"a\"\n" \
+ "\t\tdata4 @gprel(1b)\n" \
+ "\t\tdata4 2b-1b\n" \
+ "\t.previous" \
+ : "=r"(__pu_err) \
+ : "m"(__m(addr)), "rO"(x), "0"(__pu_err))
+
+#define __put_user_8(x,addr) \
+ __asm__ __volatile__ ( \
+ "\n1:\tst1 %1=%r2\t// %0 gets overwritten by exception handler\n" \
+ "2:\n" \
+ "\t.section __ex_table,\"a\"\n" \
+ "\t\tdata4 @gprel(1b)\n" \
+ "\t\tdata4 2b-1b\n" \
+ "\t.previous" \
+ : "=r"(__pu_err) \
+ : "m"(__m(addr)), "rO"(x), "0"(__pu_err))
+
+/*
+ * Complex access routines
+ */
+extern unsigned long __copy_user (void *to, const void *from, unsigned long count);
+
+#define __copy_to_user(to,from,n) __copy_user((to), (from), (n))
+#define __copy_from_user(to,from,n) __copy_user((to), (from), (n))
+
+#define copy_to_user(to,from,n) __copy_tofrom_user((to), (from), (n), 1)
+#define copy_from_user(to,from,n) __copy_tofrom_user((to), (from), (n), 0)
+
+#define __copy_tofrom_user(to,from,n,check_to) \
+({ \
+ void *__cu_to = (to); \
+ const void *__cu_from = (from); \
+ long __cu_len = (n); \
+ \
+ if (__access_ok((long) ((check_to) ? __cu_to : __cu_from), __cu_len, get_fs())) { \
+ __cu_len = __copy_user(__cu_to, __cu_from, __cu_len); \
+ } \
+ __cu_len; \
+})
+
+#define copy_to_user_ret(to,from,n,retval) \
+({ \
+ if (copy_to_user(to,from,n)) \
+ return retval; \
+})
+
+#define copy_from_user_ret(to,from,n,retval) \
+({ \
+ if (copy_from_user(to,from,n)) \
+ return retval; \
+})
+
+extern unsigned long __do_clear_user (void *, unsigned long);
+
+#define __clear_user(to,n) \
+({ \
+ __do_clear_user(to,n); \
+})
+
+#define clear_user(to,n) \
+({ \
+ unsigned long __cu_len = (n); \
+ if (__access_ok((long) to, __cu_len, get_fs())) { \
+ __cu_len = __do_clear_user(to, __cu_len); \
+ } \
+ __cu_len; \
+})
+
+
+/* Returns: -EFAULT if exception before terminator, N if the entire
+ buffer filled, else strlen. */
+
+extern long __strncpy_from_user (char *to, const char *from, long to_len);
+
+#define strncpy_from_user(to,from,n) \
+({ \
+ const char * __sfu_from = (from); \
+ long __sfu_ret = -EFAULT; \
+ if (__access_ok((long) __sfu_from, 0, get_fs())) \
+ __sfu_ret = __strncpy_from_user((to), __sfu_from, (n)); \
+ __sfu_ret; \
+})
+
+/* Returns: 0 if bad, string length+1 (memory size) of string if ok */
+extern unsigned long __strlen_user (const char *);
+
+#define strlen_user(str) \
+({ \
+ const char *__su_str = (str); \
+ unsigned long __su_ret = 0; \
+ if (__access_ok((long) __su_str, 0, get_fs())) \
+ __su_ret = __strlen_user(__su_str); \
+ __su_ret; \
+})
+
+/*
+ * Returns: 0 if exception before NUL or reaching the supplied limit
+ * (N), a value greater than N if the limit would be exceeded, else
+ * strlen.
+ */
+extern unsigned long __strnlen_user (const char *, long);
+
+#define strnlen_user(str, len) \
+({ \
+ const char *__su_str = (str); \
+ unsigned long __su_ret = 0; \
+ if (__access_ok((long) __su_str, 0, get_fs())) \
+ __su_ret = __strnlen_user(__su_str, len); \
+ __su_ret; \
+})
+
+struct exception_table_entry {
+ int addr; /* gp-relative address of insn this fixup is for */
+ int skip; /* number of bytes to skip to get to the continuation point.
+ Bit 0 tells us if r9 should be cleared to 0*/
+};
+
+extern const struct exception_table_entry *search_exception_table (unsigned long addr);
+
+#endif /* _ASM_IA64_UACCESS_H */
--- /dev/null
+#ifndef _ASM_IA64_UNALIGNED_H
+#define _ASM_IA64_UNALIGNED_H
+
+/*
+ * The main single-value unaligned transfer routines. Derived from
+ * the Linux/Alpha version.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+#define get_unaligned(ptr) \
+ ((__typeof__(*(ptr)))ia64_get_unaligned((ptr), sizeof(*(ptr))))
+
+#define put_unaligned(x,ptr) \
+ ia64_put_unaligned((unsigned long)(x), (ptr), sizeof(*(ptr)))
+
+/*
+ * EGCS 1.1 knows about arbitrary unaligned loads. Define some
+ * packed structures to talk about such things with.
+ */
+struct __una_u64 { __u64 x __attribute__((packed)); };
+struct __una_u32 { __u32 x __attribute__((packed)); };
+struct __una_u16 { __u16 x __attribute__((packed)); };
+
+extern inline unsigned long
+__uldq (const unsigned long * r11)
+{
+ const struct __una_u64 *ptr = (const struct __una_u64 *) r11;
+ return ptr->x;
+}
+
+extern inline unsigned long
+__uldl (const unsigned int * r11)
+{
+ const struct __una_u32 *ptr = (const struct __una_u32 *) r11;
+ return ptr->x;
+}
+
+extern inline unsigned long
+__uldw (const unsigned short * r11)
+{
+ const struct __una_u16 *ptr = (const struct __una_u16 *) r11;
+ return ptr->x;
+}
+
+extern inline void
+__ustq (unsigned long r5, unsigned long * r11)
+{
+ struct __una_u64 *ptr = (struct __una_u64 *) r11;
+ ptr->x = r5;
+}
+
+extern inline void
+__ustl (unsigned long r5, unsigned int * r11)
+{
+ struct __una_u32 *ptr = (struct __una_u32 *) r11;
+ ptr->x = r5;
+}
+
+extern inline void
+__ustw (unsigned long r5, unsigned short * r11)
+{
+ struct __una_u16 *ptr = (struct __una_u16 *) r11;
+ ptr->x = r5;
+}
+
+
+/*
+ * This function doesn't actually exist. The idea is that when
+ * someone uses the macros below with an unsupported size (datatype),
+ * the linker will alert us to the problem via an unresolved reference
+ * error.
+ */
+extern unsigned long ia64_bad_unaligned_access_length (void);
+
+#define ia64_get_unaligned(_ptr,size) \
+({ \
+ const void *ptr = (_ptr); \
+ unsigned long val; \
+ \
+ switch (size) { \
+ case 1: \
+ val = *(const unsigned char *) ptr; \
+ break; \
+ case 2: \
+ val = __uldw((const unsigned short *)ptr); \
+ break; \
+ case 4: \
+ val = __uldl((const unsigned int *)ptr); \
+ break; \
+ case 8: \
+ val = __uldq((const unsigned long *)ptr); \
+ break; \
+ default: \
+ val = ia64_bad_unaligned_access_length(); \
+ } \
+ val; \
+})
+
+#define ia64_put_unaligned(_val,_ptr,size) \
+do { \
+ const void *ptr = (_ptr); \
+ unsigned long val = (_val); \
+ \
+ switch (size) { \
+ case 1: \
+ *(unsigned char *)ptr = (val); \
+ break; \
+ case 2: \
+ __ustw(val, (unsigned short *)ptr); \
+ break; \
+ case 4: \
+ __ustl(val, (unsigned int *)ptr); \
+ break; \
+ case 8: \
+ __ustq(val, (unsigned long *)ptr); \
+ break; \
+ default: \
+ ia64_bad_unaligned_access_length(); \
+ } \
+} while (0)
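+
+/*
+ * Usage sketch (illustrative only, not compiled): reading a 32-bit
+ * field from a packet buffer that may not be 4-byte aligned (names
+ * are illustrative):
+ */
+#if 0
+static __u32
+demo_read_u32 (const unsigned char *pkt, int off)
+{
+ return get_unaligned((const __u32 *) (pkt + off));
+}
+#endif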
+
+#endif /* _ASM_IA64_UNALIGNED_H */
--- /dev/null
+#ifndef _ASM_IA64_UNISTD_H
+#define _ASM_IA64_UNISTD_H
+
+/*
+ * IA-64 Linux syscall numbers and inline-functions.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <asm/break.h>
+
+#define __BREAK_SYSCALL __IA64_BREAK_SYSCALL
+
+#define __NR_ni_syscall 1024
+#define __NR_exit 1025
+#define __NR_read 1026
+#define __NR_write 1027
+#define __NR_open 1028
+#define __NR_close 1029
+#define __NR_creat 1030
+#define __NR_link 1031
+#define __NR_unlink 1032
+#define __NR_execve 1033
+#define __NR_chdir 1034
+#define __NR_fchdir 1035
+#define __NR_utimes 1036
+#define __NR_mknod 1037
+#define __NR_chmod 1038
+#define __NR_chown 1039
+#define __NR_lseek 1040
+#define __NR_getpid 1041
+#define __NR_getppid 1042
+#define __NR_mount 1043
+#define __NR_umount 1044
+#define __NR_setuid 1045
+#define __NR_getuid 1046
+#define __NR_geteuid 1047
+#define __NR_ptrace 1048
+#define __NR_access 1049
+#define __NR_sync 1050
+#define __NR_fsync 1051
+#define __NR_fdatasync 1052
+#define __NR_kill 1053
+#define __NR_rename 1054
+#define __NR_mkdir 1055
+#define __NR_rmdir 1056
+#define __NR_dup 1057
+#define __NR_pipe 1058
+#define __NR_times 1059
+#define __NR_brk 1060
+#define __NR_setgid 1061
+#define __NR_getgid 1062
+#define __NR_getegid 1063
+#define __NR_acct 1064
+#define __NR_ioctl 1065
+#define __NR_fcntl 1066
+#define __NR_umask 1067
+#define __NR_chroot 1068
+#define __NR_ustat 1069
+#define __NR_dup2 1070
+#define __NR_setreuid 1071
+#define __NR_setregid 1072
+#define __NR_getresuid 1073
+#define __NR_setresuid 1074
+#define __NR_getresgid 1075
+#define __NR_setresgid 1076
+#define __NR_getgroups 1077
+#define __NR_setgroups 1078
+#define __NR_getpgid 1079
+#define __NR_setpgid 1080
+#define __NR_setsid 1081
+#define __NR_getsid 1082
+#define __NR_sethostname 1083
+#define __NR_setrlimit 1084
+#define __NR_getrlimit 1085
+#define __NR_getrusage 1086
+#define __NR_gettimeofday 1087
+#define __NR_settimeofday 1088
+#define __NR_select 1089
+#define __NR_poll 1090
+#define __NR_symlink 1091
+#define __NR_readlink 1092
+#define __NR_uselib 1093
+#define __NR_swapon 1094
+#define __NR_swapoff 1095
+#define __NR_reboot 1096
+#define __NR_truncate 1097
+#define __NR_ftruncate 1098
+#define __NR_fchmod 1099
+#define __NR_fchown 1100
+#define __NR_getpriority 1101
+#define __NR_setpriority 1102
+#define __NR_statfs 1103
+#define __NR_fstatfs 1104
+#define __NR_ioperm 1105
+#define __NR_semget 1106
+#define __NR_semop 1107
+#define __NR_semctl 1108
+#define __NR_msgget 1109
+#define __NR_msgsnd 1110
+#define __NR_msgrcv 1111
+#define __NR_msgctl 1112
+#define __NR_shmget 1113
+#define __NR_shmat 1114
+#define __NR_shmdt 1115
+#define __NR_shmctl 1116
+/* also known as klogctl() in GNU libc: */
+#define __NR_syslog 1117
+#define __NR_setitimer 1118
+#define __NR_getitimer 1119
+#define __NR_stat 1120
+#define __NR_lstat 1121
+#define __NR_fstat 1122
+#define __NR_vhangup 1123
+#define __NR_lchown 1124
+#define __NR_vm86 1125
+#define __NR_wait4 1126
+#define __NR_sysinfo 1127
+#define __NR_clone 1128
+#define __NR_setdomainname 1129
+#define __NR_uname 1130
+#define __NR_adjtimex 1131
+#define __NR_create_module 1132
+#define __NR_init_module 1133
+#define __NR_delete_module 1134
+#define __NR_get_kernel_syms 1135
+#define __NR_query_module 1136
+#define __NR_quotactl 1137
+#define __NR_bdflush 1138
+#define __NR_sysfs 1139
+#define __NR_personality 1140
+#define __NR_afs_syscall 1141
+#define __NR_setfsuid 1142
+#define __NR_setfsgid 1143
+#define __NR_getdents 1144
+#define __NR_flock 1145
+#define __NR_readv 1146
+#define __NR_writev 1147
+#define __NR_pread 1148
+#define __NR_pwrite 1149
+#define __NR__sysctl 1150
+#define __NR_mmap 1151
+#define __NR_munmap 1152
+#define __NR_mlock 1153
+#define __NR_mlockall 1154
+#define __NR_mprotect 1155
+#define __NR_mremap 1156
+#define __NR_msync 1157
+#define __NR_munlock 1158
+#define __NR_munlockall 1159
+#define __NR_sched_getparam 1160
+#define __NR_sched_setparam 1161
+#define __NR_sched_getscheduler 1162
+#define __NR_sched_setscheduler 1163
+#define __NR_sched_yield 1164
+#define __NR_sched_get_priority_max 1165
+#define __NR_sched_get_priority_min 1166
+#define __NR_sched_rr_get_interval 1167
+#define __NR_nanosleep 1168
+#define __NR_nfsservctl 1169
+#define __NR_prctl 1170
+#define __NR_getpagesize 1171
+#define __NR_mmap2 1172
+#define __NR_pciconfig_read 1173
+#define __NR_pciconfig_write 1174
+#define __NR_perfmonctl 1175
+#define __NR_sigaltstack 1176
+#define __NR_rt_sigaction 1177
+#define __NR_rt_sigpending 1178
+#define __NR_rt_sigprocmask 1179
+#define __NR_rt_sigqueueinfo 1180
+#define __NR_rt_sigreturn 1181
+#define __NR_rt_sigsuspend 1182
+#define __NR_rt_sigtimedwait 1183
+#define __NR_getcwd 1184
+#define __NR_capget 1185
+#define __NR_capset 1186
+#define __NR_sendfile 1187
+#define __NR_getpmsg 1188
+#define __NR_putpmsg 1189
+#define __NR_socket 1190
+#define __NR_bind 1191
+#define __NR_connect 1192
+#define __NR_listen 1193
+#define __NR_accept 1194
+#define __NR_getsockname 1195
+#define __NR_getpeername 1196
+#define __NR_socketpair 1197
+#define __NR_send 1198
+#define __NR_sendto 1199
+#define __NR_recv 1200
+#define __NR_recvfrom 1201
+#define __NR_shutdown 1202
+#define __NR_setsockopt 1203
+#define __NR_getsockopt 1204
+#define __NR_sendmsg 1205
+#define __NR_recvmsg 1206
+#define __NR_sys_pivot_root 1207
+
+#if !defined(__ASSEMBLY__) && !defined(ASSEMBLER)
+
+extern long __ia64_syscall (long a0, long a1, long a2, long a3, long a4, long nr);
+
+#define _syscall0(type,name) \
+type \
+name (void) \
+{ \
+ register long dummy1 __asm__ ("out0"); \
+ register long dummy2 __asm__ ("out1"); \
+ register long dummy3 __asm__ ("out2"); \
+ register long dummy4 __asm__ ("out3"); \
+ register long dummy5 __asm__ ("out4"); \
+ \
+ return __ia64_syscall(dummy1, dummy2, dummy3, dummy4, dummy5, \
+ __NR_##name); \
+}
+
+#define _syscall1(type,name,type1,arg1) \
+type \
+name (type1 arg1) \
+{ \
+ register long dummy2 __asm__ ("out1"); \
+ register long dummy3 __asm__ ("out2"); \
+ register long dummy4 __asm__ ("out3"); \
+ register long dummy5 __asm__ ("out4"); \
+ \
+ return __ia64_syscall((long) arg1, dummy2, dummy3, dummy4, \
+ dummy5, __NR_##name); \
+}
+
+#define _syscall2(type,name,type1,arg1,type2,arg2) \
+type \
+name (type1 arg1, type2 arg2) \
+{ \
+ register long dummy3 __asm__ ("out2"); \
+ register long dummy4 __asm__ ("out3"); \
+ register long dummy5 __asm__ ("out4"); \
+ \
+ return __ia64_syscall((long) arg1, (long) arg2, dummy3, dummy4, \
+ dummy5, __NR_##name); \
+}
+
+#define _syscall3(type,name,type1,arg1,type2,arg2,type3,arg3) \
+type \
+name (type1 arg1, type2 arg2, type3 arg3) \
+{ \
+ register long dummy4 __asm__ ("out3"); \
+ register long dummy5 __asm__ ("out4"); \
+ \
+ return __ia64_syscall((long) arg1, (long) arg2, (long) arg3, \
+ dummy4, dummy5, __NR_##name); \
+}
+
+#define _syscall4(type,name,type1,arg1,type2,arg2,type3,arg3,type4,arg4) \
+type \
+name (type1 arg1, type2 arg2, type3 arg3, type4 arg4) \
+{ \
+ register long dummy5 __asm__ ("out4"); \
+ \
+ return __ia64_syscall((long) arg1, (long) arg2, (long) arg3, \
+ (long) arg4, dummy5, __NR_##name); \
+}
+
+#define _syscall5(type,name,type1,arg1,type2,arg2,type3,arg3,type4,arg4,type5,arg5) \
+type \
+name (type1 arg1, type2 arg2, type3 arg3, type4 arg4, type5 arg5) \
+{ \
+ return __ia64_syscall((long) arg1, (long) arg2, (long) arg3, \
+ (long) arg4, (long) arg5, __NR_##name); \
+}
+
+#ifdef __KERNEL_SYSCALLS__
+
+static inline _syscall0(int,sync)
+static inline _syscall0(pid_t,setsid)
+static inline _syscall3(int,write,int,fd,const char *,buf,off_t,count)
+static inline _syscall3(int,read,int,fd,char *,buf,off_t,count)
+static inline _syscall3(off_t,lseek,int,fd,off_t,offset,int,count)
+static inline _syscall1(int,dup,int,fd)
+static inline _syscall3(int,execve,const char *,file,char **,argv,char **,envp)
+static inline _syscall3(int,open,const char *,file,int,flag,int,mode)
+static inline _syscall1(int,close,int,fd)
+static inline _syscall4(pid_t,wait4,pid_t,pid,int *,wait_stat,int,options,struct rusage*, rusage)
+static inline _syscall1(int,delete_module,const char *,name)
+static inline _syscall2(pid_t,clone,unsigned long,flags,void*,sp);
+
+#define __NR__exit __NR_exit
+static inline _syscall1(int,_exit,int,exitcode)
+
+static inline pid_t
+waitpid (int pid, int *wait_stat, int flags)
+{
+ return wait4(pid, wait_stat, flags, NULL);
+}
+
+static inline pid_t
+wait (int * wait_stat)
+{
+ return wait4(-1, wait_stat, 0, 0);
+}
+
+#endif /* __KERNEL_SYSCALLS__ */
+#endif /* !__ASSEMBLY__ */
+#endif /* _ASM_IA64_UNISTD_H */
--- /dev/null
+#ifndef _ASM_IA64_UNWIND_H
+#define _ASM_IA64_UNWIND_H
+
+/*
+ * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * A simple API for unwinding kernel stacks. This is used for
+ * debugging and error reporting purposes. The kernel doesn't need
+ * full-blown stack unwinding with all the bells and whistles, so there
+ * is not much point in implementing the full IA-64 unwind API (though
+ * it would of course be possible to implement the kernel API on top
+ * of it).
+ */
+
+struct task_struct; /* forward declaration */
+struct switch_stack; /* forward declaration */
+
+/*
+ * The following declarations are private to the unwind
+ * implementation:
+ */
+
+struct ia64_stack {
+ unsigned long *limit;
+ unsigned long *top;
+};
+
+/*
+ * No user of this module should ever access this structure directly
+ * as it is subject to change. It is declared here solely so we can
+ * use automatic variables.
+ */
+struct ia64_frame_info {
+ struct ia64_stack regstk;
+ unsigned long *bsp;
+ unsigned long top_rnat; /* RSE NaT collection at top of backing store */
+ unsigned long cfm;
+ unsigned long ip; /* instruction pointer */
+};
+
+/*
+ * The official API follows below:
+ */
+
+/*
+ * Prepare to unwind blocked task t.
+ */
+extern void ia64_unwind_init_from_blocked_task (struct ia64_frame_info *info,
+ struct task_struct *t);
+
+/*
+ * Prepare to unwind the current task. For this to work, the kernel
+ * stack identified by REGS must look like this:
+ *
+ *     //                     //
+ *     |                     |
+ *     |    kernel stack     |
+ *     |                     |
+ *     +=====================+
+ *     |   struct pt_regs    |
+ *     +---------------------+ <--- REGS
+ *     | struct switch_stack |
+ *     +---------------------+
+ */
+extern void ia64_unwind_init_from_current (struct ia64_frame_info *info, struct pt_regs *regs);
+
+/*
+ * Unwind to the previous frame. Returns 0 if successful, a negative
+ * number in case of an error.
+ */
+extern int ia64_unwind_to_previous_frame (struct ia64_frame_info *info);
+
+#define ia64_unwind_get_ip(info) ((info)->ip)
+#define ia64_unwind_get_bsp(info) ((unsigned long) (info)->bsp)
+
+#endif /* _ASM_IA64_UNWIND_H */
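A minimal sketch of how this API is meant to be used (the helper name and
printk format are illustrative, not part of the patch): initialize a frame-info
structure for a blocked task and walk towards older frames until
ia64_unwind_to_previous_frame() reports an error:

/* Illustrative only: print the return addresses of a blocked task. */
static void
show_backtrace (struct task_struct *task)
{
	struct ia64_frame_info info;

	ia64_unwind_init_from_blocked_task(&info, task);
	do {
		printk("ip=0x%lx bsp=0x%lx\n",
		       ia64_unwind_get_ip(&info),
		       ia64_unwind_get_bsp(&info));
	} while (ia64_unwind_to_previous_frame(&info) >= 0);
}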
--- /dev/null
+#ifndef _ASM_IA64_USER_H
+#define _ASM_IA64_USER_H
+
+/*
+ * Core file format: The core file is written in such a way that gdb
+ * can understand it and provide useful information to the user (under
+ * linux we use the `trad-core' bfd). The file contents are as
+ * follows:
+ *
+ * upage: 1 page consisting of a user struct that tells gdb
+ * what is present in the file. Directly after this is a
+ * copy of the task_struct, which is currently not used by gdb,
+ * but it may come in handy at some point. All of the registers
+ * are stored as part of the upage. The upage should always be
+ * only one page long.
+ * data: The data segment follows next. We use current->end_text to
+ * current->brk to pick up all of the user variables, plus any memory
+ * that may have been sbrk'ed. No attempt is made to determine if a
+ * page is demand-zero or if a page is totally unused; we just cover
+ * the entire range. All of the addresses are rounded in such a way
+ * that an integral number of pages is written.
+ * stack: We need the stack information in order to get a meaningful
+ * backtrace. We need to write the data from usp to
+ * current->start_stack, so we round each of these in order to be able
+ * to write an integer number of pages.
+ *
+ * Copyright (C) 1998, 1999 Hewlett-Packard Co
+ * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#include <linux/ptrace.h>
+
+#include <asm/page.h>
+
+#define EF_SIZE 3072 /* XXX fix me */
+
+struct user {
+ unsigned long regs[EF_SIZE/8+32]; /* integer and fp regs */
+ size_t u_tsize; /* text size (pages) */
+ size_t u_dsize; /* data size (pages) */
+ size_t u_ssize; /* stack size (pages) */
+ unsigned long start_code; /* text starting address */
+ unsigned long start_data; /* data starting address */
+ unsigned long start_stack; /* stack starting address */
+ long int signal; /* signal causing core dump */
+ struct regs * u_ar0; /* help gdb find registers */
+ unsigned long magic; /* identifies a core file */
+ char u_comm[32]; /* user command name */
+};
+
+#define NBPG PAGE_SIZE
+#define UPAGES 1
+#define HOST_TEXT_START_ADDR (u.start_code)
+#define HOST_DATA_START_ADDR (u.start_data)
+#define HOST_STACK_END_ADDR (u.start_stack + u.u_ssize * NBPG)
+
+#endif /* _ASM_IA64_USER_H */
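To make the layout comment above concrete, a core-dump writer would fill in the
page counts roughly as follows. This is a sketch only, not part of the patch:
the mm_struct field names are the generic kernel ones and "usp" stands for the
user stack pointer at dump time.

/* Sketch: round the text, data and stack ranges to whole pages. */
static void
fill_user_sizes (struct user *dump, struct mm_struct *mm, unsigned long usp)
{
	dump->start_code  = mm->start_code;
	dump->start_data  = mm->start_data;
	dump->start_stack = mm->start_stack;
	dump->u_tsize = (mm->end_code - mm->start_code) >> PAGE_SHIFT;
	dump->u_dsize = (mm->brk - mm->start_data) >> PAGE_SHIFT;
	dump->u_ssize = (mm->start_stack - usp + PAGE_SIZE - 1) >> PAGE_SHIFT;
}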
--- /dev/null
+/*
+ * Access to VGA videoram
+ *
+ * (c) 1998 Martin Mares <mj@ucw.cz>
+ * (c) 1999 Asit Mallick <asit.k.mallick@intel.com>
+ * (c) 1999 Don Dugger <don.dugger@intel.com>
+ */
+
+#ifndef __ASM_IA64_VGA_H_
+#define __ASM_IA64_VGA_H_
+
+/*
+ * On the PC, we can just recalculate addresses and then access the
+ * videoram directly without any black magic.
+ */
+
+#define VGA_MAP_MEM(x) ((unsigned long) ioremap((x), 0))
+
+#define vga_readb(x) (*(x))
+#define vga_writeb(x,y) (*(y) = (x))
+
+#endif /* __ASM_IA64_VGA_H_ */
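For illustration only (the address and values are arbitrary, not part of the
patch), the intended usage pattern is to map the legacy frame buffer once and
then go through the accessor macros:

/* Sketch: write 'A' with a grey-on-black attribute to the top-left cell. */
char *vram = (char *) VGA_MAP_MEM(0xb8000);

vga_writeb('A', vram);		/* character byte */
vga_writeb(0x07, vram + 1);	/* attribute byte */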
#define _ADFS_FS_H
#include <linux/types.h>
-/*
- * Structures of data on the disk
- */
/*
* Disc Record at disc address 0xc00
*/
struct adfs_discrecord {
- unsigned char log2secsize;
- unsigned char secspertrack;
- unsigned char heads;
- unsigned char density;
- unsigned char idlen;
- unsigned char log2bpmb;
- unsigned char skew;
- unsigned char bootoption;
- unsigned char lowsector;
- unsigned char nzones;
- unsigned short zone_spare;
- unsigned long root;
- unsigned long disc_size;
- unsigned short disc_id;
- unsigned char disc_name[10];
- unsigned long disc_type;
- unsigned long disc_size_high;
- unsigned char log2sharesize:4;
- unsigned char unused:4;
- unsigned char big_flag:1;
+ __u8 log2secsize;
+ __u8 secspertrack;
+ __u8 heads;
+ __u8 density;
+ __u8 idlen;
+ __u8 log2bpmb;
+ __u8 skew;
+ __u8 bootoption;
+ __u8 lowsector;
+ __u8 nzones;
+ __u16 zone_spare;
+ __u32 root;
+ __u32 disc_size;
+ __u16 disc_id;
+ __u8 disc_name[10];
+ __u32 disc_type;
+ __u32 disc_size_high;
+ __u8 log2sharesize:4;
+ __u8 unused40:4;
+ __u8 big_flag:1;
+ __u8 unused41:1;
+ __u8 nzones_high;
+ __u32 format_version;
+ __u32 root_size;
+ __u8 unused52[60 - 52];
};
#define ADFS_DISCRECORD (0xc00)
#define ADFS_DR_OFFSET (0x1c0)
#define ADFS_DR_SIZE 60
+#define ADFS_DR_SIZE_BITS (ADFS_DR_SIZE << 3)
#define ADFS_SUPER_MAGIC 0xadf5
-#define ADFS_FREE_FRAG 0
-#define ADFS_BAD_FRAG 1
-#define ADFS_ROOT_FRAG 2
-
-/*
- * Directory header
- */
-struct adfs_dirheader {
- unsigned char startmasseq;
- unsigned char startname[4];
-};
-
-#define ADFS_NEWDIR_SIZE 2048
-#define ADFS_OLDDIR_SIZE 1024
-#define ADFS_NUM_DIR_ENTRIES 77
-
-/*
- * Directory entries
- */
-struct adfs_direntry {
- char dirobname[10];
-#define ADFS_NAME_LEN 10
- __u8 dirload[4];
- __u8 direxec[4];
- __u8 dirlen[4];
- __u8 dirinddiscadd[3];
- __u8 newdiratts;
-#define ADFS_NDA_OWNER_READ (1 << 0)
-#define ADFS_NDA_OWNER_WRITE (1 << 1)
-#define ADFS_NDA_LOCKED (1 << 2)
-#define ADFS_NDA_DIRECTORY (1 << 3)
-#define ADFS_NDA_EXECUTE (1 << 4)
-#define ADFS_NDA_PUBLIC_READ (1 << 5)
-#define ADFS_NDA_PUBLIC_WRITE (1 << 6)
-};
-
-#define ADFS_MAX_NAME_LEN 255
-struct adfs_idir_entry {
- __u32 inode_no; /* Address */
- __u32 file_id; /* file id */
- __u32 name_len; /* name length */
- __u32 size; /* size */
- __u32 mtime; /* modification time */
- __u32 filetype; /* RiscOS file type */
- __u8 mode; /* internal mode */
- char name[ADFS_MAX_NAME_LEN]; /* file name */
-};
-
-/*
- * Directory tail
- */
-union adfs_dirtail {
- struct {
- unsigned char dirlastmask;
- char dirname[10];
- unsigned char dirparent[3];
- char dirtitle[19];
- unsigned char reserved[14];
- unsigned char endmasseq;
- unsigned char endname[4];
- unsigned char dircheckbyte;
- } old;
- struct {
- unsigned char dirlastmask;
- unsigned char reserved[2];
- unsigned char dirparent[3];
- char dirtitle[19];
- char dirname[10];
- unsigned char endmasseq;
- unsigned char endname[4];
- unsigned char dircheckbyte;
- } new;
-};
#ifdef __KERNEL__
/*
return (result & 0xff) != ptr[511];
}
-/* dir.c */
-extern unsigned int adfs_val (unsigned char *p, int len);
-extern int adfs_dir_read_parent (struct inode *inode, struct buffer_head **bhp);
-extern int adfs_dir_read (struct inode *inode, struct buffer_head **bhp);
-extern int adfs_dir_check (struct inode *inode, struct buffer_head **bhp,
- int buffers, union adfs_dirtail *dtp);
-extern void adfs_dir_free (struct buffer_head **bhp, int buffers);
-extern int adfs_dir_get (struct super_block *sb, struct buffer_head **bhp,
- int buffers, int pos, unsigned long parent_object_id,
- struct adfs_idir_entry *ide);
-extern int adfs_dir_find_entry (struct super_block *sb, struct buffer_head **bhp,
- int buffers, unsigned int index,
- struct adfs_idir_entry *ide);
-
-/* inode.c */
-extern int adfs_inode_validate (struct inode *inode);
-extern unsigned long adfs_inode_generate (unsigned long parent_id, int diridx);
-extern unsigned long adfs_inode_objid (struct inode *inode);
-extern unsigned int adfs_parent_bmap (struct inode *inode, int block);
-extern int adfs_bmap (struct inode *inode, int block);
-extern void adfs_read_inode (struct inode *inode);
-
-/* map.c */
-extern int adfs_map_lookup (struct super_block *sb, int frag_id, int offset);
-
-/* namei.c */
-extern struct dentry *adfs_lookup (struct inode *dir, struct dentry *dentry);
+#endif
-/* super.c */
extern int init_adfs_fs (void);
-extern void adfs_error (struct super_block *, const char *, const char *, ...);
-
-/*
- * Inodes and file operations
- */
-
-/* dir.c */
-extern struct inode_operations adfs_dir_inode_operations;
-
-/* file.c */
-extern struct inode_operations adfs_file_inode_operations;
-#endif
#endif
* adfs file system inode data in memory
*/
struct adfs_inode_info {
- unsigned long file_id; /* id of fragments containing actual data */
+ unsigned long parent_id; /* object id of parent */
+ __u32 loadaddr; /* RISC OS load address */
+ __u32 execaddr; /* RISC OS exec address */
+ unsigned int filetype; /* RISC OS file type */
+ unsigned int attr; /* RISC OS permissions */
+ int stamped:1; /* RISC OS file has date/time */
};
#endif
/*
* linux/include/linux/adfs_fs_sb.h
*
- * Copyright (C) 1997 Russell King
+ * Copyright (C) 1997-1999 Russell King
*/
#ifndef _ADFS_FS_SB
#define _ADFS_FS_SB
-#include <linux/adfs_fs.h>
+/*
+ * Forward-declare this
+ */
+struct adfs_discmap;
+struct adfs_dir_ops;
/*
- * adfs file system superblock data in memory
+ * ADFS file system superblock data in memory
*/
struct adfs_sb_info {
- struct buffer_head *s_sbh; /* buffer head containing disc record */
- struct adfs_discrecord *s_dr; /* pointer to disc record in s_sbh */
- uid_t s_uid; /* owner uid */
- gid_t s_gid; /* owner gid */
- int s_owner_mask; /* ADFS Owner perm -> unix perm */
- int s_other_mask; /* ADFS Other perm -> unix perm */
- __u16 s_zone_size; /* size of a map zone in bits */
- __u16 s_ids_per_zone; /* max. no ids in one zone */
- __u32 s_idlen; /* length of ID in map */
- __u32 s_map_size; /* size of a map */
- __u32 s_zonesize; /* zone size (in map bits) */
- __u32 s_map_block; /* block address of map */
- struct buffer_head **s_map; /* bh list containing map */
- __u32 s_root; /* root disc address */
- __s8 s_map2blk; /* shift left by this for map->sector */
+ struct adfs_discmap *s_map; /* bh list containing map */
+ struct adfs_dir_ops *s_dir; /* directory operations */
+
+ uid_t s_uid; /* owner uid */
+ gid_t s_gid; /* owner gid */
+ umode_t s_owner_mask; /* ADFS owner perm -> unix perm */
+ umode_t s_other_mask; /* ADFS other perm -> unix perm */
+
+ __u32 s_ids_per_zone; /* max. no ids in one zone */
+ __u32 s_idlen; /* length of ID in map */
+ __u32 s_map_size; /* sector size of a map */
+ unsigned long s_size; /* total size (in blocks) of this fs */
+ signed int s_map2blk; /* shift left by this for map->sector */
+ unsigned int s_log2sharesize;/* log2 share size */
+ unsigned int s_version; /* disc format version */
+ unsigned int s_namelen; /* maximum number of characters in name */
};
#endif
#ifndef _LINUX_AUTO_FS_H
#define _LINUX_AUTO_FS_H
+#ifdef __KERNEL__
#include <linux/version.h>
#include <linux/fs.h>
#include <linux/limits.h>
-#include <linux/ioctl.h>
#include <asm/types.h>
+#endif /* __KERNEL__ */
+
+#include <linux/ioctl.h>
-/* This header file describes a range of autofs interface versions;
- the new implementation ("autofs4") supports them all, but the old
- implementation only supports v3. */
-#define AUTOFS_MIN_PROTO_VERSION 3 /* Min version we support */
-#define AUTOFS_MAX_PROTO_VERSION 4 /* Max (current) version */
+/* This file describes autofs v3 */
+#define AUTOFS_PROTO_VERSION 3
-/* Backwards compat for autofs v3; it just implements a version */
-#define AUTOFS_PROTO_VERSION 3 /* v3 version */
+/* Range of protocol versions defined */
+#define AUTOFS_MAX_PROTO_VERSION AUTOFS_PROTO_VERSION
+#define AUTOFS_MIN_PROTO_VERSION AUTOFS_PROTO_VERSION
/*
* Architectures where both 32- and 64-bit binaries can be executed
typedef unsigned long autofs_wqt_t;
#endif
-enum autofs_packet_type {
- autofs_ptype_missing, /* Missing entry (mount request) */
- autofs_ptype_expire, /* Expire entry (umount request) */
- autofs_ptype_expire_multi, /* Expire entry (umount request) */
-};
+/* Packet types */
+#define autofs_ptype_missing 0 /* Missing entry (mount request) */
+#define autofs_ptype_expire 1 /* Expire entry (umount request) */
struct autofs_packet_hdr {
- int proto_version; /* Protocol version */
- enum autofs_packet_type type; /* Type of packet */
+ int proto_version; /* Protocol version */
+ int type; /* Type of packet */
};
struct autofs_packet_missing {
char name[NAME_MAX+1];
};
-/* v4 multi expire (via pipe) */
-struct autofs_packet_expire_multi {
- struct autofs_packet_hdr hdr;
- autofs_wqt_t wait_queue_token;
- int len;
- char name[NAME_MAX+1];
-};
-
-union autofs_packet_union {
- struct autofs_packet_hdr hdr;
- struct autofs_packet_missing missing;
- struct autofs_packet_expire expire;
- struct autofs_packet_expire_multi expire_multi;
-};
-
#define AUTOFS_IOC_READY _IO(0x93,0x60)
#define AUTOFS_IOC_FAIL _IO(0x93,0x61)
#define AUTOFS_IOC_CATATONIC _IO(0x93,0x62)
#define AUTOFS_IOC_PROTOVER _IOR(0x93,0x63,int)
#define AUTOFS_IOC_SETTIMEOUT _IOWR(0x93,0x64,unsigned long)
#define AUTOFS_IOC_EXPIRE _IOR(0x93,0x65,struct autofs_packet_expire)
-#define AUTOFS_IOC_EXPIRE_MULTI _IOW(0x93,0x66,int)
#ifdef __KERNEL__
--- /dev/null
+/* -*- c-mode -*-
+ * linux/include/linux/auto_fs4.h
+ *
+ * Copyright 1999-2000 Jeremy Fitzhardinge <jeremy@goop.org>
+ *
+ * This file is part of the Linux kernel and is made available under
+ * the terms of the GNU General Public License, version 2, or at your
+ * option, any later version, incorporated herein by reference.
+ */
+
+#ifndef _LINUX_AUTO_FS4_H
+#define _LINUX_AUTO_FS4_H
+
+/* Include common v3 definitions */
+#include <linux/auto_fs.h>
+
+/* autofs v4 definitions */
+#undef AUTOFS_PROTO_VERSION
+#define AUTOFS_PROTO_VERSION 4
+
+#undef AUTOFS_MAX_PROTO_VERSION
+#define AUTOFS_MAX_PROTO_VERSION AUTOFS_PROTO_VERSION
+
+/* New message type */
+#define autofs_ptype_expire_multi 2 /* Expire entry (umount request) */
+
+/* v4 multi expire (via pipe) */
+struct autofs_packet_expire_multi {
+ struct autofs_packet_hdr hdr;
+ autofs_wqt_t wait_queue_token;
+ int len;
+ char name[NAME_MAX+1];
+};
+
+union autofs_packet_union {
+ struct autofs_packet_hdr hdr;
+ struct autofs_packet_missing missing;
+ struct autofs_packet_expire expire;
+ struct autofs_packet_expire_multi expire_multi;
+};
+
+#define AUTOFS_IOC_EXPIRE_MULTI _IOW(0x93,0x66,int)
+
+
+#endif /* _LINUX_AUTO_FS4_H */
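As a sketch of how user space consumes these definitions (the descriptor names
and helper are illustrative, not part of the patch): the automount daemon reads
packets from the kernel's pipe, dispatches on hdr.type, and acknowledges mount
requests through the ioctls defined above.

/* Illustrative only: a fragment of an automount daemon's event loop.
 * pipe_fd is the read end of the kernel's pipe; ioctl_fd is an open
 * fd on the autofs mount point.  Needs <unistd.h> and <sys/ioctl.h>. */
static void
handle_autofs_packet (int pipe_fd, int ioctl_fd)
{
	union autofs_packet_union pkt;

	if (read(pipe_fd, &pkt, sizeof(pkt)) <= 0)
		return;

	switch (pkt.hdr.type) {
	case autofs_ptype_missing:
		/* mount pkt.missing.name here, then wake the waiter */
		ioctl(ioctl_fd, AUTOFS_IOC_READY, pkt.missing.wait_queue_token);
		break;
	case autofs_ptype_expire_multi:
		/* try to umount pkt.expire_multi.name, then report back */
		break;
	}
}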
#ifndef _LINUX_BFS_FS_SB
#define _LINUX_BFS_FS_SB
-/*
- * BFS block map entry, an array of these is kept in bfs_sb_info.
- */
- struct bfs_bmap {
- unsigned long start, end;
- };
-
/*
* BFS file system in-core superblock info
*/
unsigned long si_lf_sblk;
unsigned long si_lf_eblk;
unsigned long si_lasti;
- struct bfs_bmap * si_bmap;
char * si_imap;
struct buffer_head * si_sbh; /* buffer header w/superblock */
struct bfs_super_block * si_bfs_sb; /* superblock in si_sbh->b_data */
#define PCI_BRIDGE_CTL_FAST_BACK 0x80 /* Fast Back2Back enabled on secondary interface */
/* Header type 2 (CardBus bridges) */
-/* 0x14-0x15 reserved */
+#define PCI_CB_CAPABILITY_LIST 0x14
+/* 0x15 reserved */
#define PCI_CB_SEC_STATUS 0x16 /* Secondary status */
#define PCI_CB_PRIMARY_BUS 0x18 /* PCI bus number */
#define PCI_CB_CARD_BUS 0x19 /* CardBus bus number */
extern struct inode_operations proc_sys_inode_operations;
extern struct inode_operations proc_kcore_inode_operations;
extern struct inode_operations proc_kmsg_inode_operations;
-extern struct inode_operations proc_omirr_inode_operations;
extern struct inode_operations proc_ppc_htab_inode_operations;
/*
if (page_count(page) != 2)
goto cache_unlock_continue;
+ /*
+ * We did the page aging part.
+ */
+ if (nr_lru_pages < freepages.min * priority)
+ goto cache_unlock_continue;
+
/*
* Is it a page swap page? If so, we want to
* drop it if it is no longer used, even if it
struct swap_info_struct * p = NULL;
struct dentry * dentry;
int i, type, prev;
- int err = -EPERM;
+ int err;
- lock_kernel();
if (!capable(CAP_SYS_ADMIN))
- goto out;
+ return -EPERM;
+ lock_kernel();
dentry = namei(specialfile);
err = PTR_ERR(dentry);
if (IS_ERR(dentry))
struct dentry * swap_dentry;
unsigned int type;
int i, j, prev;
- int error = -EPERM;
+ int error;
static int least_priority = 0;
union swap_header *swap_header = 0;
int swap_header_version;
int swapfilesize;
struct block_device *bdev = NULL;
- lock_kernel();
if (!capable(CAP_SYS_ADMIN))
- goto out;
+ return -EPERM;
+ lock_kernel();
p = swap_info;
for (type = 0 ; type < nr_swapfiles ; type++,p++)
if (!(p->flags & SWP_USED))
break;
- if (type >= MAX_SWAPFILES)
+ if (type >= MAX_SWAPFILES) {
+ error = -EPERM;
goto out;
+ }
if (type >= nr_swapfiles)
nr_swapfiles = type+1;
p->flags = SWP_USED;