mr007

ISCSI Support

102 posts in this topic


A use case would add more weight to your 9-word request.


A use case would add more weight to your 9-word request.

 

Network booting OpenELEC.  I'm doing this now via NFS but would like to move to iSCSI to reduce the overhead of NFS.
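For context, the iPXE side of an iSCSI boot like that could look something like the sketch below. The server address and target name are made-up placeholders, not anything from an actual setup:

```
#!ipxe
# Hypothetical iPXE script: boot the HTPC from an iSCSI LUN instead of NFS.
# The SAN URI format is iscsi:<server>:<protocol>:<port>:<LUN>:<target-iqn>;
# empty fields take defaults. The address and IQN below are placeholders.
sanboot iscsi:192.168.1.10::::iqn.2015-10.local.tower:openelec
```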

 

John


I am in two minds... even ESX (the big driver for iSCSI over the last few years) is migrating to NFS.

 

That said, the use cases that remain come from the kind of people (like yourself) we want to actively attract to the community.

 

Advanced showcase use cases also help sell the product via the "cool factor", reviews, and personal recommendations.

 

I would use it, so if it isn't a huge effort it gets my vote. :)


I came up with another use case for unRAID as an iSCSI target...

 

I plan on building a dedicated Win10 VM for HyperSpin.  One of my wheels will be for PC games.  Since I have limited space on my cache pool (VM storage), I would love the ability to install games to an iSCSI target on my array.

 

John



Using unRAID as a Hyper-V storage container over iSCSI instead of NFS/SMB is something I am interested in. Right now I have to invest in Microsoft's solution or a crappy NAS, neither of which gives me the flexibility and cost savings that an iSCSI solution does.

 

 



iSCSI support would be great.

I'm also not yet an unRAID user, but I'm doing my research and making sure my use cases can be met with minimal customization (which could otherwise break things later).

 

So, here's my take, in more than nine words:

 

1) I want to clear up that VMware/ESXi is not necessarily deprecating or "moving away" from iSCSI, let alone to NFSv3 or v4, but I know that's not what was explicitly claimed.

If anything, you want to keep an eye on the future of VMFS, and in particular VMware's efforts on object-based storage (such as in VSAN). You can imagine it should start to look a bit like the late-development roadmap for BTRFS generally, so it's exciting stuff.

Anyway, I personally use NFS instead of iSCSI because it keeps the raw files a bit more readily accessible.

 

2) Desktop PCs can definitely benefit from iSCSI over SMB/CIFS.

A little background first: I have what is perhaps a bit of a niche use case, but it doesn't have to be.

I currently go with a centralized storage target approach for my systems at home (I haven't gone as far as PXE booting everything, though).

 

Basically my systems have an SSD to boot from, then they connect to iSCSI targets served by an Ubuntu system for bulk storage.

I host the iSCSI targets as fileio targets on a number of BTRFS volumes:

- 8x NL-SAS HDDs for standard tier: BTRFS with a raid-10 allocator policy for metadata and data

- 6x SATA SSDs for upper-tier: BTRFS with a raid-10 allocator policy for metadata and data

- 5x SATA HDDs for low-tier stuff: BTRFS with a raid-6 allocator policy for metadata and data
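As a rough sketch, one of those tiers could be built like this. The device paths and mount point are placeholders, and the privileged steps are guarded so the sketch is safe to run as-is:

```shell
#!/bin/sh
# Sketch: build a BTRFS volume with a raid10 allocator policy for both
# data and metadata, as in the SSD upper tier above.
# Device paths and the mount point are placeholders.
POOL_DEVICES="/dev/disk/by-id/ata-SSD0 /dev/disk/by-id/ata-SSD1 /dev/disk/by-id/ata-SSD2 /dev/disk/by-id/ata-SSD3"
MOUNTPOINT="/srv/iscsi/upper-tier"

# Only attempt the privileged steps as root with the tools present.
if [ "$(id -u)" -eq 0 ] && command -v mkfs.btrfs >/dev/null 2>&1; then
    mkfs.btrfs -d raid10 -m raid10 $POOL_DEVICES
    mkdir -p "$MOUNTPOINT"
    # Any member device mounts the whole BTRFS pool.
    mount /dev/disk/by-id/ata-SSD0 "$MOUNTPOINT"
fi
```

The fileio backing files for the iSCSI targets would then live under that mount point.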

 

The idea is that I can pool together all my disks for a balance of capacity, availability, and performance. You can't have all three at once, necessarily (https://en.wikipedia.org/wiki/CAP_theorem), but so far it's proven to be really reliable and performant, and I don't have to maintain numerous [also slower] arrays across each system. It felt native with 1Gbit networking, and it's just blazing fast with 10Gbit (stating the obvious, I know).

 

With centralized storage solutions like unRAID making themselves more and more approachable and powerful, I think iSCSI would be a wonderful addition to the feature set. Seems natural to me for others to adopt a similar approach, as long as it's made more approachable thanks to unRAID, etc.

 

3) You probably already have the needed iSCSI target bits in current kernels anyway. LIO has been included in mainline kernels for some time now, so you just need targetcli to be installed.

Granted, you'd want to avoid or prevent users from exporting unRAID block devices directly, mapping only to unallocated storage, and/or allow creation of fileio targets on hosted/shared storage. Better yet, maybe do so without using /dev/sd<x> names and go with something more reliable and consistent like /dev/disk/by-id or by-uuid.

 

4) It would require quite a bit of work in the UI to expose all the usually important bits to a user:

- IQN creation/definition

- Binding to, or listening on, relevant IPs

- Backing creation (block device), with options "wce" for write caching, and logical name and number.

- Backing creation (fileio), with the same options above, but also a size, and whether it's sparse-allocated or otherwise.

- LUN assignment

- ACLs

- Authentication/CHAP

- Other advanced target or protocol specific options

 

Generally every one of these things can be managed via targetcli, or maybe more realistically in the case of unRAID, by manipulating or writing LIO text configuration files directly.
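For reference, each item in that list maps onto a targetcli call. A minimal sketch follows; every name, path, credential, and address in it is a made-up placeholder, and the commands are guarded since they need root and the LIO tools:

```shell
#!/bin/sh
# Sketch: the targetcli calls behind the UI items listed above.
# The IQNs, backing file path, portal IP, and CHAP credentials
# are all placeholders.
TARGET_IQN="iqn.2015-10.local.tower:games"
INITIATOR_IQN="iqn.1991-05.com.microsoft:win10-htpc"
BACKING_FILE="/mnt/user/iscsi/games.img"

if [ "$(id -u)" -eq 0 ] && command -v targetcli >/dev/null 2>&1; then
    # Backing creation (fileio): sparse-allocated, write caching enabled
    targetcli /backstores/fileio create name=games \
        file_or_dev="$BACKING_FILE" size=100G sparse=true write_back=true
    # IQN creation/definition
    targetcli /iscsi create "$TARGET_IQN"
    # Listen on a specific IP
    targetcli "/iscsi/$TARGET_IQN/tpg1/portals" create 192.168.1.10 3260
    # LUN assignment
    targetcli "/iscsi/$TARGET_IQN/tpg1/luns" create /backstores/fileio/games
    # ACL for one initiator, plus CHAP credentials
    targetcli "/iscsi/$TARGET_IQN/tpg1/acls" create "$INITIATOR_IQN"
    targetcli "/iscsi/$TARGET_IQN/tpg1/acls/$INITIATOR_IQN" set auth \
        userid=someuser password=somesecret
    targetcli saveconfig
fi
```

A UI would really just be filling in the parameters for calls like these, or writing the equivalent LIO configuration directly.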


+1 to the iSCSI LUN support.

 

There are so many use cases for iSCSI LUN support, I could go on about them for hours.

The biggest problem with network shares is caching: it's not done properly, or at least not as well as what LUNs are capable of.

Then you run into issues of compatibility across platforms.

With an iSCSI LUN there are no compatibility issues, and LUNs look like a native drive to the guest OS.

 

+10000000 to iSCSI support.

 


A use case would add more weight to your 9-word request.

 

Not OP, but my use case:

I am new to Unraid and (like any geek) have a few NAS devices*, and I would like to run them in service of my Unraid server.

So I wonder whether I should go with iSCSI or the (discontinued?) SNAP plugin, and which would be simpler to set up and recover?

 

*) QNAP TS-410 holding 6TB RAID5. They read at only 35MB/s, but that's fast enough for quite a few video streams.


For what it is worth, I too would use iSCSI now.


Well, an iSCSI LUN might be a perfect fit with Unraid..  ::)

 

On terminology:

NAS = a host sharing folders via SMB/FTP/whatever, with any device.

SAN = a device sharing logical volumes via iSCSI/Fibre Channel, with a server.

 

I had to look up iSCSI LUN, but yes, it should allow each SAN to (simply?) appear as an Unraid drive. This is how it's usually used on servers.
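On the initiator side, that attachment is roughly the sketch below. The portal address and target name are placeholders, and the commands are guarded since they need root and the open-iscsi tools:

```shell
#!/bin/sh
# Sketch: attach a volume from a SAN (here an imagined QNAP) so it
# appears as a local block device. Portal and IQN are placeholders.
PORTAL="192.168.1.20:3260"
TARGET="iqn.2004-04.com.qnap:ts-410:iscsi.media"

if [ "$(id -u)" -eq 0 ] && command -v iscsiadm >/dev/null 2>&1; then
    # Ask the portal which targets it offers
    iscsiadm -m discovery -t sendtargets -p "$PORTAL"
    # Log in; the LUN then shows up as a /dev/sd* block device
    iscsiadm -m node -T "$TARGET" -p "$PORTAL" --login
    # Prefer the stable by-path names over a bare /dev/sdX
    ls -l /dev/disk/by-path/ 2>/dev/null | grep -i iscsi
fi
```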

 

I see no use for the Unraid parity drive here, however; that disk would have to be as big as your biggest SAN, which is also not very realistic.

 

In my case, I would very much like all the SAN volumes to be presented as one big archive, so I'd gladly use the user shares for that.



Until we've got iSCSI, I wonder how to allow my Plex docker to browse my NAS devices?

Is there even an alternative? (using Unraid 6)


Until we've got iSCSI, I wonder how to allow my Plex docker to browse my NAS devices?

Is there even an alternative? (using Unraid 6)

Use Unassigned Devices and mount the SMB shares from the NAS.
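Under the hood that amounts to a CIFS mount under /mnt/disks, which the Plex container can then map in as a volume. A sketch, with placeholder host, share, and mount point names, guarded since mounting needs root:

```shell
#!/bin/sh
# Sketch of what the Unassigned Devices plugin does for an SMB source:
# mount the NAS share under /mnt/disks so a Docker container can map it.
# The share and mount point names are placeholders.
SHARE="//qnap/media"
MOUNTPOINT="/mnt/disks/qnap_media"

if [ "$(id -u)" -eq 0 ] && command -v mount.cifs >/dev/null 2>&1; then
    mkdir -p "$MOUNTPOINT"
    # Read-only guest mount; use -o credentials=<file> for authenticated shares
    mount -t cifs "$SHARE" "$MOUNTPOINT" -o guest,ro
fi
```

The Plex container would then get /mnt/disks/qnap_media passed through as a volume mapping.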


Use Unassigned Devices and mount the SMB shares from the NAS.

Thanks! Your answer pointed me to these posts and it all makes sense now.

 

Now I should just make sure that it gets mounted/unmounted at boot/shutdown. And it made me wonder: would that be something I could automate by writing a plugin? Assuming it doesn't exist yet. Perhaps that's something for another topic, but insights welcome.


+1 on iSCSI support

 

It would be really useful to have a Win 8.1 / Win 10 / Server 2012 VM on the unRAID server, and use iSCSI to connect that VM to a user share on the same unRAID server.

 

Then share this iSCSI drive to the rest of the network.

 

Result: multichannel SMB3 support 

 

(source: "you can connect Server #1 to a High-Rely drive via iSCSI, then simply 'share' that drive from Server #1 to the rest of the network.")


Use Unassigned Devices and mount the SMB shares from the NAS.

Thanks! Your answer pointed me to these posts and it all makes sense now.

 

Now I should just make sure that it gets mounted/unmounted at boot/shutdown. And it made me wonder: would that be something I could automate by writing a plugin? Assuming it doesn't exist yet. Perhaps that's something for another topic, but insights welcome.

His answer should have led you to the Unassigned Devices plugin. See search tips in my sig.


His answer should have led you to the Unassigned Devices plugin. See search tips in my sig.

Thanks for this insight!

 

So this basically replaces the SNAP plugin functionality? In that case, what I don't understand is why I didn't find out about this in one of the many (now deprecated) SNAP plugin threads. :-\


His answer should have led you to the Unassigned Devices plugin. See search tips in my sig.

Thanks for this insight!

 

So this basically replaces the SNAP plugin functionality? In that case, what I don't understand is why I didn't find out about this in one of the many (now deprecated) SNAP plugin threads. :-\

From the OP of the SNAP thread

 

There is a new plugin that has the functionality of SNAP that integrates better with unRAID V6.  The plugin is here: http://lime-technology.com/forum/index.php?topic=38635.0  I recommend you convert to this plugin.  I will maintain SNAP as long as I can with my limited time to work on it, but I won't make any enhancements.

 

That link takes you to the OP for Unassigned Devices.

 

As an aside, this is why the forums were also reorganized to separate v6 (& v5) plugins from v6.1, and why CA will not allow you to install/reinstall a plugin that is not compatible with v6.1.x.


+1, an iSCSI feature for the cache pool would be so nice  :)


+1 for iSCSI for mounting HyperSpin content to Windows clients... it's the only way to host content remotely for HyperSpin.

