0wn996

  1. Little follow-up on my unexpected lockups: they seem to be unrelated to iSCSI. When upgrading, for some reason a new IQN got created, and Windows, still looking for the old one, gave me those errors in my logs. Fixing that did not end my unexpected shutdowns though. I'm more and more sure the lockups were related to me running DDR4 at 3600 MHz, while the memory controller in my Ryzen 3900X apparently supports up to 3200 MHz. It worked without any issues for a year, but for some reason it seems to have become a problem now. Lowered the RAM speed and no lockups so far.
  2. Nope, the config page just contains the current IQN. I noticed just now that the old ones were still present on the Windows machine I use to access the iSCSI target. Those messages were the Windows machine trying to reach a non-existent iSCSI target; removing the old IQN on the Windows side did the trick. Now let's see what that does for my unexpected shutdowns. Fingers crossed.
  3. I'm experiencing some issues after upgrading to 6.11.1 from 6.9. First of all, my iSCSI drive didn't show up, so as suggested here I removed and re-installed the iSCSI target plugin. That made my mapped drive show up again, but I started having unexpected shutdowns on Unraid, mostly unclean shutdowns completely out of the blue. So I've set up logging to see what may be going on, only to find my syslog littered with these, roughly one every 5 seconds:

     Oct 12 16:47:19 Server996 kernel: Unable to locate Target IQN: iqn.2003-01.org.linux-iscsi.server996.x8664:sn.a5ccc3708536 in Storage Node
     Oct 12 16:47:19 Server996 kernel: iSCSI Login negotiation failed.

     And when looking at that IQN, I saw that it's not the current one. So for some reason there is a reference to an old IQN that Unraid is trying to connect to. Any idea where that reference may be, and most importantly, how I get rid of it? (A log-parsing sketch after this list pulls the offending IQNs out of the syslog.) PS: I'm not sure these issues are the actual cause of my unclean shutdowns, but they're about 95% of my syslog file, so they need to be addressed either way.
  4. Loud and clear. Looks like I'll just be passing through block devices in that case and staying away from fileIO images. I don't feel very confident about software-RAIDing multiple iSCSI devices on Windows, so I'll just stick with individual drives. Anyway, when speed is really of the essence I'll stick to local NVMe storage on the gaming machine. The extra iSCSI storage is so I can keep more of my Steam library permanently installed, and iSCSI is certainly a lifesaver here because both Origin and NVIDIA GameStream don't play nice with games running from regular network shares; with iSCSI, not a problem. Now it's time to go break my btrfs pool and pass through some block storage. Cheers!
  5. Thanks for the swift reply, but I don't quite get that, since the speed of multiple striped drives far exceeds that of a single drive, so my conclusion would be the other way around: on 1 Gbit (or even 2.5 Gbit) a single SSD will easily saturate the line, while on 10 Gbit you should see an advantage from the added speed of the striped array (rough numbers in the sketch after this list). My situation is currently 1 Gbit, but I'll soon upgrade to 10 Gbit. Any idea, by the way, if and how the fileIO image can be modified after initial creation?
  6. Hi there. First of all, thanks for the awesome job on the plugin, much appreciated! I've just successfully set up an iSCSI target on my Unraid server, but I'm left with a few questions. First some background info: I'm running an SSD cache pool (4 drives) in Unraid that I want to use as the iSCSI target. This way I can get striping and mirroring out of btrfs, and use a fileIO image to connect to it over iSCSI. I understand connecting directly to a device introduces less overhead, but that way I would have to set up an individual fileIO for each drive (right?). Now I'm just wondering: if I end up growing the cache pool, can I also increase the size of the fileIO image (the last sketch after this list only inspects the image's current size and allocation)? And can I possibly change other properties such as write-back true/false? Thanks in advance for the feedback!
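
For anyone chasing the same stale-IQN noise as in post 3, here is a minimal log-parsing sketch in Python. The syslog path and the "current" IQN below are placeholders, not values from this setup; the regex simply matches the "Unable to locate Target IQN:" kernel message quoted above and counts how often each IQN shows up.

    # Scan a syslog file for failed iSCSI login attempts and report which
    # IQNs the initiator keeps asking for. Path and IQN are placeholders.
    import re
    from collections import Counter

    SYSLOG_PATH = "/var/log/syslog"                       # adjust to where your syslog lands
    CURRENT_IQN = "iqn.2003-01.org.linux-iscsi.example"   # placeholder for the real target IQN

    pattern = re.compile(r"Unable to locate Target IQN:\s*(\S+)")

    counts = Counter()
    with open(SYSLOG_PATH, errors="replace") as log:
        for line in log:
            match = pattern.search(line)
            if match:
                counts[match.group(1)] += 1

    for iqn, hits in counts.most_common():
        label = "current" if iqn == CURRENT_IQN else "stale?"
        print(f"{hits:6d}  {label:8s}  {iqn}")

Any IQN flagged as "stale?" with a large hit count is a good candidate for the leftover reference; as post 2 notes, in this case it turned out to live in the Windows iSCSI initiator rather than in Unraid.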
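The bandwidth argument in post 5 is easy to sanity-check with back-of-the-envelope numbers; the drive figures below are rough ballparks, not measurements from this pool.

    # Rough link-vs-drive throughput comparison (all figures are ballparks).
    GBIT = 1000**3 / 8  # one gigabit per second, in bytes per second

    links = {"1 GbE": 1 * GBIT, "2.5 GbE": 2.5 * GBIT, "10 GbE": 10 * GBIT}
    single_ssd = 550e6             # ~550 MB/s sequential for a typical SATA SSD
    striped_pool = 4 * single_ssd  # ideal 4-drive stripe, ignoring overhead

    for name, cap in links.items():
        print(f"{name:7s} cap {cap/1e6:5.0f} MB/s | "
              f"single SSD {min(cap, single_ssd)/1e6:5.0f} MB/s | "
              f"4-way stripe {min(cap, striped_pool)/1e6:5.0f} MB/s")

On 1 GbE the link (~125 MB/s) is the bottleneck either way, so striping buys nothing; on 10 GbE (~1250 MB/s) a single SATA SSD tops out around 550 MB/s while the stripe can fill the line, which is the point post 5 is making.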
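On the fileIO sizing question from post 6, whether the plugin lets you grow an image after creation is exactly what is being asked, so the snippet below only inspects the current state: apparent size versus space actually allocated on disk. The image path is a made-up example, not from this setup.

    # Report a fileIO backing image's apparent size vs. allocated space.
    # IMAGE_PATH is a hypothetical example location.
    import os

    IMAGE_PATH = "/mnt/cache/iscsi/games.img"

    st = os.stat(IMAGE_PATH)
    apparent = st.st_size            # the size the initiator sees
    allocated = st.st_blocks * 512   # st_blocks counts 512-byte units on Linux

    print(f"apparent size : {apparent / 1e9:8.1f} GB")
    print(f"allocated     : {allocated / 1e9:8.1f} GB")
    print(f"sparse image  : {'yes' if allocated < apparent else 'no'}")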