iSCSI support would be great.
I'm also not yet an unRAID user, but I'm doing my research to make sure my use cases can be met with minimal customization (i.e., customization that could break things later).
So, here's my take, in more than nine words:
1) I want to clear up that VMware/ESXi is not necessarily deprecating or moving away from iSCSI, let alone toward NFSv3 or v4, though I know that's not what was explicitly claimed.
If anything, keep an eye on the future of VMFS and in particular VMware's efforts on object-based storage (such as in vSAN). You can imagine it should start to look a bit like the late-development roadmap for BTRFS generally, so it's exciting stuff.
Anyway, I personally use NFS instead of iSCSI because the underlying data stays directly accessible as plain files on the server.
2) Desktop PCs can definitely benefit from iSCSI over SMB/CIFS.
A little background first - I have what is perhaps a bit niche of a use case, but it doesn't have to be.
I currently go with a centralized storage target approach for my systems at home (I haven't gone as far as PXE booting everything, though).
Basically my systems have an SSD to boot from, then they connect to iSCSI targets served by an Ubuntu system for bulk storage.
I host the iSCSI targets as fileio targets on a number of BTRFS volumes:
- 8x NL-SAS HDDs for standard tier: BTRFS with a raid-10 allocator policy for metadata and data
- 6x SATA SSDs for upper-tier: BTRFS with a raid-10 allocator policy for metadata and data
- 5x SATA HDDs for low-tier stuff: BTRFS with a raid-6 allocator policy for metadata and data
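For anyone curious what serving a fileio target off a BTRFS volume looks like in practice, here's a minimal targetcli sketch. The paths, sizes, and IQN below are hypothetical examples, not my actual setup:

```shell
# Create a sparse 500G backing file on a BTRFS volume
# (path, name, and size are illustrative):
targetcli /backstores/fileio create name=ws1-scratch \
    file_or_dev=/mnt/ssd-tier/iscsi/ws1-scratch.img size=500G sparse=true

# Define a target (IQN is an example) and expose the backstore as a LUN:
targetcli /iscsi create iqn.2024-01.lan.storage:ws1-scratch
targetcli /iscsi/iqn.2024-01.lan.storage:ws1-scratch/tpg1/luns \
    create /backstores/fileio/ws1-scratch

# Persist the running configuration:
targetcli saveconfig
```

Sparse allocation means the backing file only consumes space as the initiator writes, which plays nicely with pooled storage.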
The idea is that I can pool together all my disks for a balance of capacity, availability, and performance. You rarely get to maximize all three at once, but so far it's proven to be really reliable and performant, and I don't have to maintain numerous [also slower] arrays across each system. It felt native with 1Gbit networking, and it's just blazing fast with 10Gbit (stating the obvious, I know).
With centralized storage solutions like unRAID becoming more and more approachable and powerful, I think iSCSI would be a wonderful addition to the feature set. It seems natural to me that others would adopt a similar approach once unRAID and the like make it more approachable.
3) You probably already have the needed iSCSI target bits in current kernels anyway. LIO has been in the mainline kernel for some time now (since 2.6.38), so you just need targetcli to be installed.
Granted, you'd want to prevent users from mapping unRAID array block devices directly, restricting block backstores to unallocated disks and/or only allowing creation of fileio targets on hosted/shared storage. Better yet, avoid /dev/sd<x> names and go with something more stable like /dev/disk/by-id or /dev/disk/by-uuid.
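As a sketch of the safer naming approach, a block backstore can be created against a persistent by-id path rather than a /dev/sd<x> name (the device ID below is a made-up example; list your own with `ls -l /dev/disk/by-id/`):

```shell
# Map a whole, unallocated disk as a block backstore using its
# persistent by-id name (example ID, substitute your own):
targetcli /backstores/block create name=scratch-disk \
    dev=/dev/disk/by-id/ata-WDC_WD40EFRX-EXAMPLE-SERIAL
```

The by-id path survives reboots and controller reordering, whereas /dev/sdX can shift and silently point a target at the wrong disk.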
4) It would require quite a bit of work in the UI to expose all the usually important bits to a user:
- IQN creation/definition
- Binding to, or listening on, relevant IPs
- Backing creation (block device), with options "wce" for write caching, and logical name and number.
- Backing creation (fileio), with the same options above, but also a size, and whether it's sparse-allocated or otherwise.
- LUN assignment
- ACLs
- Authentication/CHAP
- Other advanced target or protocol specific options
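To give a feel for what the UI would be wrapping, most of the items above map to short targetcli operations. Everything here (IQN, IPs, initiator name, credentials) is a hypothetical example, and it assumes the target already exists:

```shell
# Example IQN for an existing target:
IQN=iqn.2024-01.lan.storage:example

# Bind to a specific IP instead of the default wildcard portal:
targetcli /iscsi/$IQN/tpg1/portals delete 0.0.0.0 3260
targetcli /iscsi/$IQN/tpg1/portals create 192.168.1.10 3260

# Restrict access to one initiator via an ACL:
targetcli /iscsi/$IQN/tpg1/acls create iqn.1993-08.org.debian:01:client1

# Set CHAP credentials for that initiator (example values):
targetcli /iscsi/$IQN/tpg1/acls/iqn.1993-08.org.debian:01:client1 \
    set auth userid=exampleuser password=examplepass
```

A UI really just needs to collect these few values per target and replay them in order.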
Generally every one of these things can be managed via targetcli, or maybe more realistically in unRAID's case, by driving the kernel's configfs interface (/sys/kernel/config/target) or generating targetcli's saved configuration file directly.