Merge tag 'staging-3.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging

Pull staging drivers patches from Greg KH:
 "Here's the big staging driver tree update for 3.20-rc1.

  Lots of little things in here, adding up to lots of overall cleanups.
  The IIO driver updates are also in here as they cross the staging tree
  boundary a lot.  I2O has moved into staging as well, with a plan to
  eventually drop it from the tree, as that's a dead subsystem.

  All of this has been in linux-next with no reported issues for a
  while"

* tag 'staging-3.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging: (740 commits)
  staging: lustre: lustre: libcfs: define symbols as static
  staging: rtl8712: Do coding style cleanup
  staging: lustre: make obd_updatemax_lock static
  staging: rtl8188eu: core: switch with redundant cases
  staging: rtl8188eu: odm: conditional setting with no effect
  staging: rtl8188eu: odm: condition with no effect
  staging: ft1000: fix braces warning
  staging: sm7xxfb: fix remaining CamelCase
  staging: sm7xxfb: fix CamelCase
  staging: rtl8723au: multiple condition with no effect - if identical to else
  staging: sm7xxfb: make smtc_scr_info static
  staging/lustre/mdc: Initialize req in mdc_enqueue for !it case
  staging/lustre/clio: Do not allow group locks with gid 0
  staging/lustre/llite: don't add to page cache upon failure
  staging/lustre/llite: Add exception entry check after radix_tree
  staging/lustre/libcfs: protect kkuc_groups from write access
  staging/lustre/fld: refer to MDT0 for fld lookup in some cases
  staging/lustre/llite: Solve a race to access lli_has_smd in read case
  staging/lustre/ptlrpc: hold rq_lock when modify rq_flags
  staging/lustre/lnet: portal spreading rotor should be unsigned
  ...

Directory contents (entry, last commit, date):

include/linux  staging/lustre/lnet: portal spreading rotor should be unsigned  2015-02-07 17:31:10 +08:00
lnet           staging/lustre/lnet: portal spreading rotor should be unsigned  2015-02-07 17:31:10 +08:00
lustre         Staging drivers patches for 3.20-rc1                            2015-02-15 11:30:39 -08:00
Kconfig
Makefile       staging: lustre: remove top level ccflags variable              2014-07-11 20:51:16 -07:00
README.txt     lustre: Add some basic documentation                            2014-08-30 11:53:54 -07:00
TODO           staging: lustre: remove hpdd-discuss list from TODO file        2014-07-12 18:01:57 -07:00

Lustre Parallel Filesystem Client
=================================

The Lustre file system is an open-source, parallel file system
that supports many requirements of leadership-class HPC simulation
environments.
Born from a research project at Carnegie Mellon University,
the Lustre file system is a widely-used option in HPC.
The Lustre file system provides a POSIX-compliant file system interface
and can scale to thousands of clients, petabytes of storage, and
hundreds of gigabytes per second of I/O bandwidth.

Unlike shared-disk cluster filesystems (e.g. OCFS2, GFS, GPFS),
Lustre has independent metadata and data servers that clients can access
in parallel to maximize performance.

In order to use the Lustre client you will need to download the Lustre
client tools from
https://downloads.hpdd.intel.com/public/lustre/latest-feature-release/
(the package name is lustre-client).
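
For example, on an RPM-based distribution the downloaded client package
might be installed with something like the following (exact package file
names vary by release):

rpm -ivh lustre-client-*.rpm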

You will need to install and configure your Lustre servers separately.

Mount Syntax
============
After you have installed the lustre-client tools, including the
mount.lustre binary, you can mount your Lustre filesystem with:

mount -t lustre mgs:/fsname mnt

where mgs is the hostname or IP address of your Lustre MGS (management
service), fsname is the name of the filesystem you would like to mount,
and mnt is an existing directory to use as the mount point.
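
For example, assuming a hypothetical MGS host named mgs01 that serves a
filesystem called testfs, the client could mount it at /mnt/testfs with:

mkdir -p /mnt/testfs
mount -t lustre mgs01:/testfs /mnt/testfs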


Mount Options
=============

  noflock
	Disable POSIX file locking (applications trying to use
	this functionality will get ENOSYS)

  localflock
	Enable local flock support, using only client-local flock
	(faster, for applications that require flock but do not run
	 on multiple nodes).

  flock
	Enable cluster-global POSIX file locking coherent across all
	client nodes.

  user_xattr, nouser_xattr
	Support "user." extended attributes (or not)

  user_fid2path, nouser_fid2path
	Enable FID to path translation by regular users (or not)

  checksum, nochecksum
	Verify data consistency on the wire and in memory as it passes
	between the layers (or not).

  lruresize, nolruresize
	Allow the lock LRU to be controlled by memory pressure on the server
	(or limit it to 100 locks per CPU per server on this client;
	 100 is the default, controlled by the lru_size proc parameter).

  lazystatfs, nolazystatfs
	Do not block in statfs() if some of the servers are down.

  32bitapi
	Shrink inode numbers to fit into 32 bits. This is necessary
	if you plan to re-export the Lustre filesystem from this client via
	NFSv4.

  verbose, noverbose
	Enable mount/umount console messages (or not)
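
Options are passed to mount with -o and can be combined. As an
illustration (the host name, filesystem name and mount point below are
only placeholders), a client that needs coherent POSIX locking and user
extended attributes could use:

mount -t lustre -o flock,user_xattr mgs01:/testfs /mnt/testfs

The corresponding /etc/fstab entry could look like (with _netdev added so
the mount waits for the network to come up at boot):

mgs01:/testfs  /mnt/testfs  lustre  flock,user_xattr,_netdev  0 0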

More Information
================
You can get more information at:
OpenSFS website: http://lustre.opensfs.org/about/
Intel HPDD wiki: https://wiki.hpdd.intel.com

Out-of-tree Lustre client and server code is available at:
http://git.whamcloud.com/fs/lustre-release.git

Latest binary packages:
http://lustre.opensfs.org/download-lustre/