kachunkachunk

Members · 4 posts

  1. Forgive me, as I'm about to reveal that I'm a terrible imposter: I don't run unRAID, just a regular Docker image. The linuxserver.io forums don't have any notable posts on this, so I'm checking with a much more active community here. So no, I don't really have a check-for-updates feature. I can restart the Docker container, but that doesn't seem to pull in new software, at least not yet.
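For what it's worth, a plain restart reuses the image the container was created from; picking up new software generally means pulling the newer image and recreating the container. A minimal sketch, where the image name, container name, ports, and config path are all assumptions rather than anything from my actual setup:

```shell
# Sketch only: image/container names and paths are assumed, not prescriptive.
IMAGE="linuxserver/unifi"   # assumed image
NAME="unifi"                # assumed container name

if command -v docker >/dev/null 2>&1; then
    docker pull "$IMAGE"                      # fetch the latest published image
    docker stop "$NAME" && docker rm "$NAME"  # drop the old container
    # Recreate with the same volume so the controller's config survives:
    docker run -d --name "$NAME" \
        -v /opt/unifi:/config \
        -p 8443:8443 \
        "$IMAGE"
else
    echo "docker not found; commands shown for illustration"
fi
```

Since the persistent data lives in the mounted volume, recreating the container this way shouldn't lose the controller's configuration.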
  2. Hey, can you guys confirm what version of the UniFi Controller you're running? I'm still seeing 5.2.7 on mine and wonder if I need to do anything special to get up to the 5.4.x releases that are currently out. Edit: Or maybe the repository is only presenting 5.2.7 as the latest stable release? I'll have to look into this a bit more, but maybe someone can kindly point me in the right direction too. Edit 2: Okay, 5.4.9 was found via the unifi5 Debian repo, so I'm not sure what's up.
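If anyone wants to check which version their configured repo is actually offering versus what's installed, something like this should show it (the repo line below is an assumption based on the "unifi5" channel mentioned above; verify it against Ubiquiti's current instructions before using it):

```shell
# Assumed repo line for the unifi5 channel; confirm before adding to
# /etc/apt/sources.list.d/unifi.list on a real system.
REPO="deb http://www.ubnt.com/downloads/unifi/debian unifi5 ubiquiti"
echo "$REPO"

if command -v apt-cache >/dev/null 2>&1; then
    # "Candidate" is what the repo would install; "Installed" is what you have.
    apt-cache policy unifi || true
else
    echo "apt-cache not found; commands shown for illustration"
fi
```

If the Candidate line still shows 5.2.7, the controller package is coming from a different (older) channel than unifi5.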
  3. *pokes head inside* I'm waiting on some hardware to arrive - upgrading to a dual-socket board + processors and a buttload of memory. The idea is to replace and consolidate my storage and virtualization boxes into one, saving space/power/whatever. So I have an opportunity to change things up a bunch! Still, it's a bit sad to see no iSCSI support (or, seemingly, much dev interest), but I understand it's not an overnight affair anyway. I'll check again in another year, heh. Even if I could learn to write a plugin, there's no way I would have time to maintain and responsibly test/debug it for public consumption. I'd hate to be responsible for downtime or heartache. But indeed, if anyone is interested, LIO (probably the better iSCSI target to adopt, IMO) pretty much involves wrapping around targetcli functions and /sys outputs. If you had that much working, you'd be a step away from enabling FCoE and the other goodies that LIO offers.
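To give a feel for how thin that wrapper could be, here's a sketch of the targetcli calls a plugin would be driving (the IQN, backstore name, and file path are made up for illustration). Each call manipulates a node that LIO exposes under configfs:

```shell
# Illustrative names only - nothing here is from a real deployment.
IQN="iqn.2017-01.net.example:store1"   # assumed IQN

if command -v targetcli >/dev/null 2>&1; then
    # A file-backed backstore, then a target, then a LUN wired to it:
    targetcli /backstores/fileio create name=store1 \
        file_or_dev=/srv/iscsi/store1.img size=10G
    targetcli /iscsi create "$IQN"                 # creates target + default TPG
    targetcli "/iscsi/$IQN/tpg1/luns" create /backstores/fileio/store1
    targetcli saveconfig                           # persist the configuration
else
    echo "targetcli not found; commands shown for illustration"
fi
```

A plugin would mostly be generating sequences like this (or editing the saved config directly) and reading state back out of /sys/kernel/config/target.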
  4. iSCSI support would be great. I'm also not yet an unRAID user, but I'm doing my research and making sure my use cases can be met with minimal customization (i.e., less to potentially break later). So, here's my take, in more than nine words:
     1) I want to clear up that VMware/ESXi is not necessarily deprecating or "moving away" from iSCSI, let alone toward NFSv3 or v4 - though I know that's not what was explicitly claimed. If anything, keep your eyes peeled on the future of VMFS, and in particular VMware's efforts on object-based storage (such as in VSAN). It should start to look a bit like the late-development roadmap for BTRFS, so it's exciting stuff. Anyway, I personally use NFS instead of iSCSI because the raw files are a bit more readily accessible.
     2) Desktop PCs can definitely benefit from iSCSI over SMB/CIFS. A little background first: my use case is perhaps a bit niche, but it doesn't have to be. I currently take a centralized-storage approach for my systems at home (though I haven't gone as far as PXE-booting everything). Basically, each system boots from a local SSD, then connects to iSCSI targets served by an Ubuntu box for bulk storage. I host the targets as fileio backstores on a number of BTRFS volumes:
        - 8x NL-SAS HDDs, standard tier: BTRFS with a raid10 allocator policy for metadata and data
        - 6x SATA SSDs, upper tier: BTRFS with a raid10 allocator policy for metadata and data
        - 5x SATA HDDs, low tier: BTRFS with a raid6 allocator policy for metadata and data
     The idea is to pool all my disks for a balance of capacity, availability, and performance. You can't necessarily have all three at once (loosely analogous to the CAP theorem: https://en.wikipedia.org/wiki/CAP_theorem), but so far it's proven really reliable and performant, and I don't have to maintain numerous [also slower] arrays on each system. It felt native with 1Gbit networking, and it's just blazing fast with 10Gbit (stating the obvious, I know). With centralized storage solutions like unRAID becoming more and more approachable and powerful, I think iSCSI would be a wonderful addition to the feature set. It seems natural for others to adopt a similar approach, as long as unRAID and friends keep making it approachable.
     3) You probably already have the needed iSCSI target bits in current kernels anyway. LIO has been in mainline kernels for some time now, so you just need targetcli installed. Granted, you'd want to prevent users from mapping unRAID block devices directly - allow only unallocated disks, and/or the creation of fileio targets on hosted/shared storage. Better yet, skip the /dev/sd<x> names and use something more reliable and consistent like /dev/disk/by-id or by-uuid.
     4) It would require quite a bit of UI work to expose all the usually important bits to a user:
        - IQN creation/definition
        - Binding to, or listening on, relevant IPs
        - Backing creation (block device), with options such as "wce" for write caching, plus a logical name and number
        - Backing creation (fileio), with the same options as above, plus a size and whether it's sparse-allocated
        - LUN assignment
        - ACLs
        - Authentication/CHAP
        - Other advanced target- or protocol-specific options
     Generally, every one of these can be managed via targetcli or, perhaps more realistically for unRAID, by manipulating or writing LIO configuration files directly.
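To make the checklist concrete, here's roughly how each of those UI items maps onto targetcli. Everything here - the IQN, initiator name, IP, path, size, and credentials - is invented for illustration, not from my actual setup:

```shell
# All names/IPs/paths below are placeholders.
IQN="iqn.2017-01.net.example:bulk"
INITIATOR="iqn.1993-08.org.debian:01:client"

if command -v targetcli >/dev/null 2>&1; then
    # Backing creation (fileio): size, sparse allocation, write caching
    targetcli /backstores/fileio create name=bulk \
        file_or_dev=/srv/iscsi/bulk.img size=100G sparse=true write_back=true
    # IQN creation, then bind the portal to one IP instead of the default 0.0.0.0
    targetcli /iscsi create "$IQN"
    targetcli "/iscsi/$IQN/tpg1/portals" delete 0.0.0.0 3260
    targetcli "/iscsi/$IQN/tpg1/portals" create 192.168.1.10 3260
    # LUN assignment and a per-initiator ACL
    targetcli "/iscsi/$IQN/tpg1/luns" create /backstores/fileio/bulk
    targetcli "/iscsi/$IQN/tpg1/acls" create "$INITIATOR"
    # CHAP authentication for that initiator
    targetcli "/iscsi/$IQN/tpg1/acls/$INITIATOR" set auth \
        userid=someuser password=somepass
    targetcli saveconfig
else
    echo "targetcli not found; commands shown for illustration"
fi
```

A UI would essentially be a form over each of these steps, with the block-device variant using /backstores/block and a /dev/disk/by-id path instead of the fileio line.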