
Accessing the Nexfs Storage Pool

The Nexfs storage pool is accessible using standard file and block storage methods. Some access methods can be configured and managed directly by Nexfs, while others can be configured independently to access the storage pool.

The table below lists some of the more popular access methods:

Access Method        Managed By Nexfs   Via Fuse Mountpoint   Direct Access
File System Access   Yes                Yes                   No
Integrated iSCSI     Yes                No                    Yes
S3 API               Yes                No                    Yes
NFSD                 Yes                Yes                   No
Samba                No                 Yes                   No
Other iSCSI          No                 Yes                   No
Ganesha NFS*         No                 No                    No
*Ganesha NFS is not currently compatible with Nexfs


Methods with direct access (currently the integrated iSCSI and S3 servers) do not use the fuse mount point. Direct access can improve IO performance compared to accessing the storage pool through the fuse mount point, especially for workloads with smaller write IO block sizes.

The Integrated S3 API (Nexfs Content Server)

Nexfs includes an integrated S3 API server with direct access to the storage pool. The integrated S3 server can be configured to allow access to all files stored in Nexfs, no matter which protocol was used to create the files or which protocol is used to access them. There are no protocol restrictions on file access with Nexfs.
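
As an illustration only, any S3-compatible client can be pointed at the integrated server; the sketch below uses the AWS CLI with a placeholder endpoint address, port, bucket and credentials, all of which depend on how the Nexfs Content Server has been configured:

    # List buckets over the integrated S3 API (endpoint, port and credentials are examples only)
    aws s3 ls --endpoint-url http://nexfs-server.example.com:9000
    # Copy a file into the storage pool over S3 (bucket name is an example)
    aws s3 cp ./report.pdf s3://example-bucket/report.pdf --endpoint-url http://nexfs-server.example.com:9000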

Information on the integrated Nexfs Content and S3 Server is documented here

The Fuse Mountpoint

By default, the storage pool can be accessed as a standard Linux filesystem. The mount point location can be configured using the "MOUNTPOINT" Nexfs setting. To locate the mount point on a running server, run 'nexfscli server status' and look for the value returned by 'Nexfs Mountpoint:'.
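
For example, to confirm the mount point on a running server (the output shown is illustrative and the path will vary by configuration):

    nexfscli server status | grep 'Nexfs Mountpoint:'
    # Example output:
    # Nexfs Mountpoint: /mnt/nexfs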

The fuse mount point performs best when using larger write block sizes, with 4K or larger block sizes recommended.
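
As a rough, illustrative comparison of write block sizes through the fuse mount point (the path is an example; adjust sizes and location before running against a real system):

    # Write 256 MiB using 4K blocks
    dd if=/dev/zero of=/mnt/nexfs/ddtest bs=4k count=65536 conv=fsync
    # Repeat with 512-byte blocks; throughput is typically noticeably lower
    dd if=/dev/zero of=/mnt/nexfs/ddtest bs=512 count=524288 conv=fsync
    rm /mnt/nexfs/ddtest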

In general, users and applications can access the storage pool through the fuse mount point, from which it can also be exported or shared using storage protocols such as SMB (Samba).

Note: Userspace file protocol applications such as Ganesha NFS may not work when configured to access the storage pool over the fuse mount point.

Integrated iSCSI

Nexfs includes an integrated iSCSI server that uses direct access to the storage pool. The integrated iSCSI server can outperform an external iSCSI server configured to use the fuse mount point. 
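
As a client-side sketch, a Linux host using the standard open-iscsi tools can discover and log in to the integrated target roughly as follows; the server address and target IQN are placeholders that depend on how the integrated iSCSI server has been configured:

    # Discover targets presented by the Nexfs server (address is an example)
    iscsiadm -m discovery -t sendtargets -p 192.0.2.10
    # Log in to a discovered target (IQN is an example)
    iscsiadm -m node -T iqn.2003-01.com.example:nexfs-lun1 -p 192.0.2.10 --login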

 

Information on managing and configuring the Integrated iSCSI server is documented here

NFS

While Nexfs supports both NFSv3 and NFSv4, NFSv3 is currently recommended for active workloads such as storing running VMware virtual machines. If NFSv4 is used for such workloads, ENABLEWRITEBUFFERING should be disabled to avoid possible conflicts between NFSv4 client caching and fuse kernel caching.
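
For example, a Linux NFS client can be forced to use NFSv3 with a standard mount option; the server name and paths below are placeholders:

    # Mount an export of the Nexfs storage pool using NFSv3
    mount -t nfs -o vers=3 nexfs-server.example.com:/mnt/nexfs /mnt/vmstore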

Nexfs can directly configure and manage the Linux kernel NFS server, although this is not a requirement. It may be preferable to configure and manage an NFS server separately, for example when the same NFS server exports shares from outside of Nexfs as well as Nexfs shares.
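
A conventionally managed export of the fuse mount point might look like the sketch below; the path, network and options are illustrative and should be tuned for your environment:

    # /etc/exports entry exporting the Nexfs fuse mount point (example path, network and options)
    /mnt/nexfs  192.0.2.0/24(rw,sync,no_subtree_check)
    # Reload the export table after editing /etc/exports
    exportfs -ra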

Also, see NFS Management with Nexfs

SMB/CIFS/Samba/Windows

When Samba does not offer the best solution, it is recommended that the storage pool be exported via iSCSI to a Windows server, formatted as NTFS, and then shared and managed as a standard Windows filesystem.

For those confident with managing Samba, SMB shares can be created from the Nexfs fuse mount point.
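
A minimal smb.conf share definition pointing at the fuse mount point might look like the sketch below; the share name and path are examples, and normal Samba permissions and hardening still apply:

    [nexfs]
        # Example path to the Nexfs fuse mount point
        path = /mnt/nexfs
        read only = no
        browseable = yes
    # Reload Samba after editing smb.conf (service name varies by distribution)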

VMware vSphere/ESX storage

It is recommended to export the Nexfs storage pool to vSphere and ESX over iSCSI or NFSv3
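
As an illustration, an NFSv3 datastore backed by a Nexfs export can typically be added from the ESXi command line as below; the host name, export path and datastore name are placeholders:

    # Add an NFSv3 datastore on an ESXi host (values are examples)
    esxcli storage nfs add --host=nexfs-server.example.com --share=/mnt/nexfs --volume-name=nexfs-datastore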

Ganesha NFS

Ganesha NFS is not currently compatible with Nexfs

Other Protocols/Applications

Nexfs can be accessed like any standard Linux file system; most other storage protocols and applications can be configured to access the storage pool directly over the Nexfs fuse mount point.

Tip: With the exception of the fuse mount point, there is no requirement to manage any access method through Nexfs; access methods can be individually disabled in Nexfs and then configured separately to access the Nexfs fuse-mounted file system.
