PNFS Block Server Setup Instructions


A how-to guide for setting up the pNFS Block Layout server based on sPNFS

This page describes compiling and setting up the pNFS Block Layout server. It is based on Rick McNeal's how-to guide. Please note that Fedora 11 was used to set up the server, so some of the content may be specific to Fedora (e.g. yum).


Building the code



1) Building the kernel source

Obtain the code from the Linux pNFS git. The pNFS Block Layout server is currently part of the pNFS git tree.

    git clone git://linux-nfs.org/~bhalevy/linux-pnfs.git

CONFIG_SPNFS_BLOCK must be enabled before the kernel is compiled.

Kernel compilation itself is not discussed here.
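
That said, here is a minimal sketch of how the option can be enabled with the usual kernel configuration tools before building (the standard kernel build steps are assumed):

    cd linux-pnfs
    make menuconfig                              # enable CONFIG_SPNFS_BLOCK
    make && make modules_install && make install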

2) Building the userspace daemon

There is a userspace daemon, and it must be started before clients access the block network. The source code can be obtained from the following git repository.

    git clone git://git.linux-nfs.org/projects/rmcneal/ctl.git

To compile this code, the parted, parted-devel, libevent, and libevent-devel packages must be installed on the machine.

yum install parted

yum install parted-devel

yum install libevent

yum install libevent-devel
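
Once those packages are in place, the daemon is built with make (a minimal sketch; the top-level Makefile of the ctl tree is assumed):

    cd ctl
    make          # builds the block layout control daemon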

You might see a couple of compilation errors. Please note that the fixes mentioned below are only temporary workarounds.

    In file included from ctl.c:12:
    /usr/include/parted/device.h:140: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token
    In file included from /usr/include/asm/types.h:4,
                     from efi.h:26,
                     from ctl.c:16:
    <ROOT_OF_SRC>/include/asm-generic/int-ll64.h:11:29: error: asm/bitsperlong.h: No such file or directory
    make: *** [ctl.o] Error 1


1) To get past the first error, the workaround I used was to update the "/usr/include/parted/device.h" file, adding an ifdef like the following.

Line 140 in device.h.

    #ifdef notdef
    extern PedConstraint* ped_device_get_constraint (PedDevice* dev);
    #endif

2) To get past the second error, I created a symbolic link to <ROOT_OF_SRC>/include/asm-generic/bitsperlong.h in <ROOT_OF_SRC>/include/asm-generic.



3) Building nfs-utils

Obtain the "nfs-utils" source code.

    git clone  git://linux-nfs.org/~bhalevy/pnfs-nfs-utils

Run autogen.sh to generate the "configure" file. If you are building the code for the first time, several packages are required. I have either installed or updated the following packages; a typical build sequence is sketched after the package list.

yum update libtirpc

yum install libtirpc-devel

yum install tcp_wrappers-devel

yum install libevent

yum install libevent-devel

yum install libnfsidmap

yum install libnfsidmap-devel

yum install nfs-utils-lib

yum install nfs-utils-lib-devel

yum install libgssglue

yum install libgssglue-devel
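
With the packages installed, a typical build sequence looks like the following (a minimal sketch; configure options may need adjusting for your setup):

    cd pnfs-nfs-utils
    ./autogen.sh          # generates the configure script
    ./configure
    make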


Exporting the filesystem

- For block access to work properly, the disks must have a signature.
- Partition the disks using "parted". Disks partitioned with "fdisk" don't have the signatures.
- I have followed the steps mentioned below.

    parted /dev/sdb

    (parted) mklabel gpt
    (parted) mkpart 1 <provide start and end of the partition>
    (parted) print

    Model: VMware Virtual disk (scsi)
    Disk /dev/sdb: 53.7GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt

    Number  Start   End     Size    File system  Name  Flags
     1      17.4kB  53.7GB  53.7GB  ext3         1     msftres

- I have tested with ext4; create the ext4 filesystem with a 4K block size.

    mkfs.ext4 -b 4096 /dev/sdb1
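
The export options below export /mnt, so the new filesystem needs to be mounted there first (an assumption based on the export line; adjust the mount point to match your exports):

    mount /dev/sdb1 /mnt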


Setting up the BLOCK storage / SAN

- I have used iSCSI to set up the block storage; "scsi-target-utils" is required to set up the iSCSI target.
- One key thing: when adding a LUN to the target, don't add the disk partition (/dev/sdb1); instead, add the entire disk (/dev/sdb).
  The disk signatures are not visible if you add the disk partition to the target.
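
A rough sketch of setting up the target with scsi-target-utils is shown below; the target name (iqn...) is only an example, and the tgtd daemon must already be running:

    # create an iSCSI target and add the whole disk (not the partition) as a LUN
    tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2009-10.org.example:pnfs.sdb
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/sdb
    tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL      # allow all initiators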

Export Options

/mnt *(rw,sync,fsid=0,insecure,no_subtree_check,no_root_squash,pnfs)

How to Start the server

- I have used a script to start the server. The script is attached; a rough sketch of what it might contain is shown below.
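
Since the script itself is not reproduced on this page, the following is only a sketch of what such a start-up sequence might look like; the daemon name and the exact ordering are assumptions based on the components built above:

    #!/bin/sh
    # export the filesystem and start the NFS server
    service nfs start
    exportfs -ra
    # start the userspace block layout control daemon built from the ctl tree
    # (binary name assumed)
    ./ctl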

Mount from the client

 - mount -t nfs4 -o minorversion=1 SN:/ /mnt/ob    (where SN is the server's hostname or IP address)

How to verify

 - tcpdump/wireshark is the best way to see what is happening.
 - The other way: after mounting the export, check /proc/self/mountstats on the client, as in the example below.
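
For example, pNFS layout operations should show up in the per-mount statistics once I/O goes through the block layout (counter names can vary with kernel version):

    grep -i layout /proc/self/mountstats      # look for LAYOUTGET entries with non-zero counts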