
Posts posted by 0wn996

  1. Little follow-up on my unexpected lockups.
    They seem to be unrelated to iSCSI. During the upgrade a new IQN was created for some reason, and Windows, still looking for the old one, produced those errors in my logs. Fixing that did not end my unexpected shutdowns, though.

    The lockups, I'm more and more sure, were related to running my DDR4 at 3600 MHz, while the memory controller in my Ryzen 3900X apparently supports up to 3200 MHz. It worked without any issues for a year, but for some reason it seems to have become a problem now. I lowered the RAM speed and have had no lockups so far.
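    For reference, a quick way to check what the DIMMs are actually running at. This is a hypothetical sketch: the sample line mimics `dmidecode -t memory` output (exact field names vary by BIOS), and 3200 is the MT/s cap AMD lists for the 3900X's memory controller.

```shell
#!/bin/sh
# Flag a configured DIMM speed above the memory controller's rated limit.
# On a real box, replace the printf with: dmidecode -t memory
cap=3200
printf '%s\n' "Configured Memory Speed: 3600 MT/s" |
awk -v cap="$cap" '/Configured Memory Speed/ && $4+0 > cap {
    print "over controller limit: " $4 " MT/s"
}'
```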

  2. 15 minutes ago, SimonF said:

    Does the IQN show on the config page?

    Nope, the config page only contains the current IQN. I noticed just now that these old ones were still present on the Windows machine I use to access the iSCSI target. Those messages were that machine trying to reach a non-existent iSCSI target. Removing the old IQN on Windows did the trick.

     

    Now let's see what that does for my unexpected shutdowns. Fingers crossed. 
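    For anyone hitting the same thing, roughly the Windows-side cleanup, sketched with cmdlets from Windows' built-in iSCSI module (the IQN and portal address below are placeholders for my real ones):

```powershell
# List discovered targets; the stale IQN shows up here next to the current one.
Get-IscsiTarget

# Drop any session still pointed at the stale target.
Disconnect-IscsiTarget -NodeAddress "iqn.2003-01.org.linux-iscsi.server996.x8664:sn.OLD" -Confirm:$false

# Removing and re-adding the target portal clears the cached discovery entries.
Remove-IscsiTargetPortal -TargetPortalAddress "192.168.1.10" -Confirm:$false
New-IscsiTargetPortal -TargetPortalAddress "192.168.1.10"
```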

  3. I'm experiencing some issues after upgrading from 6.9 to 6.11.1.
    First of all, my iSCSI drive didn't show up, so, as suggested here, I removed and re-installed the iSCSI target plugin.
    That made my mapped drive show up again, but then I started having unexpected shutdowns on Unraid: unclean shutdowns, completely out of the blue.
    So I've set up logging to see what may be going on, only to find my syslog littered with these:

    Oct 12 16:47:19 Server996 kernel: Unable to locate Target IQN: iqn.2003-01.org.linux-iscsi.server996.x8664:sn.a5ccc3708536 in Storage Node
    Oct 12 16:47:19 Server996 kernel: iSCSI Login negotiation failed.

    One of those roughly every 5 seconds. And looking at the IQN, I saw that it's not the current one.
    So for some reason, there is a reference to an old IQN that something keeps trying to connect to.
    Any idea where that reference may be and, most importantly, how I get rid of it?

    PS: I'm not sure these messages are the actual cause of my unclean shutdowns, but they make up about 95% of my syslog file, so they need to be addressed either way.
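    To see how dominant the spam is, and to confirm the requested IQN really is stale, a quick grep works. The snippet below runs against a sample standing in for the syslog file, so the log path is just illustrative.

```shell
#!/bin/sh
# Sample lines standing in for the real syslog; point the greps at the
# actual log file instead.
log=$(mktemp)
cat > "$log" <<'EOF'
Oct 12 16:47:19 Server996 kernel: Unable to locate Target IQN: iqn.2003-01.org.linux-iscsi.server996.x8664:sn.a5ccc3708536 in Storage Node
Oct 12 16:47:19 Server996 kernel: iSCSI Login negotiation failed.
Oct 12 16:47:24 Server996 kernel: Unable to locate Target IQN: iqn.2003-01.org.linux-iscsi.server996.x8664:sn.a5ccc3708536 in Storage Node
EOF
grep -c "Unable to locate Target IQN" "$log"   # how many failed lookups
grep -o "iqn\.[^ ]*" "$log" | sort | uniq -c   # which IQN is being requested
rm -f "$log"
```

    Comparing the listed IQN against the one on the plugin's config page shows immediately whether an initiator is asking for a target that no longer exists.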

  4. 16 hours ago, ich777 said:

    I think what he means is that even if you have a striped pool with 30 Gbit/s+, you can't get those speeds from a FileIO image because of the overhead.

    Theoretically what you've written is true, but in practice a FileIO image is usually slower once you reach certain speeds.

    You can test this right now: simply create a small FileIO image with the write cache disabled, and you will see that on certain hardware configurations the speed sometimes doesn't even reach 1 Gbit/s.

    So a single SSD or NVMe over iSCSI will always be faster than a FileIO image.

    Another idea would be to build a software RAID, for example in Windows/Linux, with the drives from the target. I've never tried this, but I think it could be possible.

    Hope that clears things up.

    Loud and clear. Looks like I'll just pass through block devices in that case and stay away from FileIO images. I don't feel very confident about software-RAIDing multiple iSCSI devices on Windows, so I'll just stick with individual drives.
    Anyway, when speed is really of the essence I'll stick to local NVMe storage on the gaming machine. The extra iSCSI storage is so I can keep more of my Steam library permanently installed.
    And iSCSI is certainly a lifesaver here, because both Origin and NVIDIA GameStream don't play nice with games running from regular network shares. With iSCSI, not a problem.

    Now it's time to go break my btrfs pool and pass through some block storage :) Cheers!

  5. 1 minute ago, maxstevens2 said:

    I think it depends on the usage. If you are going to do a lot of writing, you can mount the drive directly; however, you can also use a FileIO image on the btrfs RAID setup. On a 1-gigabit connection I would recommend sticking with FileIO; at 2.5 Gbit/s and above, use the disk, which I think cannot be used in a RAID config.

    Hope this helps!

    Thanks for the swift reply, but I don't quite get that, since the speed of multiple striped drives far exceeds that of a single drive.

    So my conclusion would be the other way around: on 1 Gbit (or even 2.5) a single SSD will easily saturate the line, while at 10 Gbit you should see an advantage from the added speed of the striped array.
    My situation is currently 1 Gbit, but I'll soon upgrade to 10 Gbit.
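    The back-of-the-envelope numbers behind that claim (550 MB/s is a typical SATA III SSD figure, assumed here just for illustration):

```shell
#!/bin/sh
# Convert line rate to MB/s and compare against a single SSD's throughput.
# 1 Gbit/s = 1000 Mbit/s / 8 bits per byte = 125 MB/s.
awk 'BEGIN {
    ssd = 550                        # typical SATA III SSD, MB/s (assumed)
    split("1 2.5 10", rates, " ")
    for (i = 1; i <= 3; i++) {
        mbs = rates[i] * 1000 / 8    # Gbit/s -> MB/s
        printf "%4s Gbit/s = %6.1f MB/s: single SSD %s\n",
               rates[i], mbs, (ssd >= mbs ? "saturates the link" : "falls short")
    }
}'
```

    So only the 10 Gbit line leaves headroom where a striped array's extra speed can actually show.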

     

    Any idea, by the way, if and how the FileIO image can be modified after initial creation?

  6. Hi there. First of all, thanks for the awesome job on the plugin. Much appreciated!

    I've just successfully set up an iSCSI target on my Unraid server, but I'm left with a few questions.
    First some background info: I'm running an SSD cache pool (4 drives) in Unraid that I want to use as the iSCSI target. This way I get striping and mirroring from btrfs, and I can use a FileIO image to connect to it over iSCSI.
    I understand connecting directly to a device introduces less overhead, but then I would have to set up an individual FileIO for each drive (right?).

    Now I'm just wondering: if I end up growing the cache pool, can I also increase the size of the FileIO image? And possibly change other properties such as write-back true/false?
    Thanks in advance for the feedback!
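    On the growing question, the backing-file side at least looks straightforward. A sketch, under the assumption that the target re-reads the file size once the backstore is set up again (I haven't verified whether the plugin picks this up automatically); the snippet uses a temp file as a stand-in for the real image path.

```shell
#!/bin/sh
# Grow a FileIO backing file sparsely; truncate extends the file without
# touching existing data. The temp file stands in for the real image,
# e.g. something like /mnt/cache/iscsi/games.img (hypothetical path).
img=$(mktemp)
truncate -s 1G "$img"    # initial size
truncate -s +1G "$img"   # grow by another 1 GiB in place
ls -lh "$img"
rm -f "$img"
```

    The initiator side then still has to extend the partition and filesystem inside the image before the new space becomes usable.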
