[Plugins] iSCSI GUI and iSCSI Target



I'm having an issue with this:

I have 2 network cards: one is the main 1GbE connection (the home network and others (VLANs) are all connected to this), the other is a 10GbE connection (peer-to-peer, PC to server).

 

The issue I am having is that when I try to connect via the 10GbE, it connects but doesn't seem to load the disks, and the whole service just locks up (I actually need to restart my PC to get it to work).

 

Does anyone have any advice on what to do? (Is there maybe a way to lock iSCSI to an IP or interface?)

 

By the way: my firewall is also a VM which has access to both networks.

 

See the differences in the pictures (verbonden means connected)

[screenshots]

Edited by maxstevens2
1 hour ago, maxstevens2 said:

the other is a 10GbE connection (peer-to-peer, PC to server)

Are these different IP address ranges?

 

1 hour ago, maxstevens2 said:

The issue I am having is that when I try to connect via the 10GbE, it connects but doesn't seem to load the disks,

This is very strange. If you connect through the 1Gbit connection, does it work?

 

1 hour ago, maxstevens2 said:

Does anyone have any advice on what to do? (Is there maybe a way to lock iSCSI to an IP or interface?)

I also had a setup similar to this but never experienced an issue like yours.

Yes and no; it listens on all interfaces, so it should make no difference.
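
That said, LIO itself can bind the portal to one IP instead of listening on 0.0.0.0. A minimal sketch with targetcli (assuming targetcli is available from the Unraid shell, with a made-up IQN and a placeholder IP; the plugin normally manages the portal for you, so treat this purely as an illustration):

```
# Illustration only: use the IQN shown in the plugin and the IP of the 10GbE NIC.
targetcli /iscsi/iqn.2021-12.local.tower:disk1/tpg1/portals delete 0.0.0.0 3260
targetcli /iscsi/iqn.2021-12.local.tower:disk1/tpg1/portals create <ip-of-the-10gbe-nic> 3260
targetcli saveconfig
```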

 

How did you connect exactly? Did you enter the IP from the 10Gbit NIC in your server?

 

Please also post a picture of the overview of the iSCSI plugin where the configuration is visible.

12 minutes ago, ich777 said:

This is very strange. If you connect through the 1Gbit connection, does it work?

12 minutes ago, ich777 said:

Are these different IP address ranges?

Yes, 192.168.179.XXX is the main network (connected via 1 Gigabit); this works flawlessly and reaches up to 1 Gigabit without a problem (read and write).

192.168.181.XXX is the 10Gig LAN network, which is on a different network adapter than the other. More here:
[screenshot]

 

 

Later today I can take a picture to show a bit how the network is set up (the 10-gig side is pretty simple due to it being peer-to-peer).

 

12 minutes ago, ich777 said:

How did you connect exactly? Did you enter the IP from the 10Gbit NIC in your server?

Yes, I did. It connects, but then the whole iSCSI service just freezes in Windows when trying to disconnect or look for the disks. (I connected to 192.168.181.253 in my case.)

 

12 minutes ago, ich777 said:

Please also post a picture of the overview of the iSCSI plugin where the configuration is visible.

This is a picture with it now connected to 1GbE:
[screenshot]

Edited by maxstevens2
49 minutes ago, maxstevens2 said:

Later today I can take a picture to show a bit how the network is set up (the 10-gig side is pretty simple due to it being peer-to-peer).

Do you need bridging on the 10gbit adapter?

If not, try to turn it off, but to be honest I don't think it will make a difference.

 

42 minutes ago, maxstevens2 said:

Btw. The MTU of the 10 gigabit adapter is set to 9000. Is this a bad thing maybe?

I don't recommend setting it to 9000, even for 10gbit adapters, because manufacturers have tuned their drivers pretty well; even on my Mellanox cards I reach full 10gbit speeds with the default MTU of 1500.

 

Have you also set the MTU to 9000 on the Windows machine?
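
If you want to rule the MTU out quickly, you can also check and temporarily reset it on the Unraid side from a shell. A small sketch (the interface name eth1 is an assumption; on Unraid it may be br1 if bridging is enabled):

```
# Show the current MTU of the 10Gbit NIC (adjust the interface name to your system)
ip link show eth1 | grep -o 'mtu [0-9]*'

# Temporarily set it back to the default of 1500 for testing
ip link set dev eth1 mtu 1500
```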

1 minute ago, ich777 said:

Have you also set the MTU to 9000 on the Windows machine?

Yes, I have.

 

2 minutes ago, ich777 said:

Do you need bridging on the 10gbit adapter?

Well, I guess, but also not. If I disable the bridge, that means VMs can't connect to it, right? Which would mean I can't have internet over this link, since my firewall is a VM. Honestly, I don't think this would be that big of a deal though, as I don't mind using the 10 gig link only for Unraid -> PC usage.

 

3 minutes ago, ich777 said:

If not, try to turn it off, but to be honest I don't think it will make a difference.

I will do this later, because then I have to stop all the VMs and such, which I can't do right now.


Little update:
I just hooked my notebook up to the 10GbE adapter with an MTU of 1500 (but my notebook only supports 1GbE, so it negotiated down to 1GbE), which worked flawlessly.

Will try to see if changing the MTU on my PC makes it work!

 

Edit:

Yep, it was the MTU. I set it to 1500 and it instantly worked!

Performance is perfect!

[screenshot]
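
In case anyone else runs into this: before turning jumbo frames back on, a quick way to check whether a full-size unfragmented frame actually makes it end to end is a do-not-fragment ping from the Unraid shell (a sketch; replace the placeholder with your client's IP):

```
# 8972 = 9000 byte MTU minus 20 (IP header) minus 8 (ICMP header)
# -M do forbids fragmentation, so this only succeeds if every device in the path accepts 9000-byte frames
ping -M do -s 8972 -c 4 <ip-of-the-pc>
```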

Edited by maxstevens2
25 minutes ago, maxstevens2 said:

Any idea what this is?

Is the Initiator (so to speak, the client side) still working, or does it freeze?

 

Does this happen all the time or only on heavy writes?

 

Usually this means that the Initiator sends too much data to the Target, which can't handle it in time.

On 12/1/2021 at 1:25 PM, ich777 said:

Does this happen all the time or only on heavy writes?

Thanks for your reply,

 

At that point I was installing a game, so I guess on heavy writes yeah.

 

On 12/1/2021 at 1:25 PM, ich777 said:

Usually this means that the Initiator sends too much data to the Target, which can't handle it in time.

Is this a very bad thing? And if so, where is this issue located? Disk, Network, CPU?

 

Edit:
In addition to that I also noticed this happening while copying ~8GB files:

Notice the 'hills' created, swinging from 15MB/s up to 375MB/s, while the disk can do around 160-200MB/s.

 

 

 

It seems to be a bit 'stuttery' pushing all the data through:

 

Edited by maxstevens2
8 minutes ago, maxstevens2 said:

Is this a very bad thing? And if so, where is this issue located? Disk, Network, CPU?

I think you are using a FileIO volume or am I wrong?

This is pretty normal for a FileIO volume when WriteBack is enabled (but please don't disable it because this will give you much worse performance).

 

I usually recommend using a whole block device (hard disk), because such errors very rarely appear on real disks. But I also understand that using a whole disk is not always possible.

 

No, it's not bad. Usually this is mainly caused by the write buffer running full while the disk can't keep up writing the data out; also keep in mind that using an image always causes some overhead.
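
For comparison, this is roughly what the two backstore types look like at the targetcli level (illustration only, with made-up names and paths; the plugin creates these for you through the GUI):

```
# FileIO backstore: a 2TB image file on the array (some overhead, write-back cached)
targetcli /backstores/fileio create name=gamedisk file_or_dev=/mnt/user/iscsi/gamedisk.img size=2T

# Block backstore: hands a whole physical disk to the Initiator, no image overhead
targetcli /backstores/block create name=gamedisk dev=/dev/sdX
```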

1 minute ago, maxstevens2 said:

It seems to write data onto the image, then onto the actual disk, then gets hung up because the upload speed is too high; then the disk is ready (goes to 0MB/s) and the upload goes up to 400MB/s again.

See the last paragraph of my previous answer.

1 minute ago, ich777 said:

I think you are using a FileIO volume or am I wrong?

 

Yes, a 2TB FileIO image.

 

1 minute ago, ich777 said:

This is pretty normal for a FileIO volume when WriteBack is enabled (but please don't disable it because this will give you much worse performance).

I've noticed... I had 15MB/s write speeds while testing.
 

 

2 minutes ago, ich777 said:

I usually recommend using a whole block device (hard disk), because such errors very rarely appear on real disks. But I also understand that using a whole disk is not always possible.

I need to think about how I am going to do this. The main focus of the project is to get rid of my PC's HDD, so this might be the best option then.

I guess performance will increase when using the whole disk (block) too. Will this be close to 1:1 performance (although I am already pretty happy with the performance)?
 

This is not the same for READ performance, right? Because I don't think I would mind it then. It's mainly meant for gaming (and other things like game recordings).

 

4 minutes ago, ich777 said:

No, it's not bad usually this is mainly caused because the write buffer runns full and the disk can't keep up writing the data to the disk, also keep in mind that using a image always causes some overhead.

Okay, good; I was already scared I would lose data. Are there maybe any tweaks? I could limit my network card to 2.5G or 5G (but 2.5G means ~300MB/s, which is still above the 200MB/s limit of the disk).

 

2 minutes ago, maxstevens2 said:

I've noticed... I had 15MB/s write speeds while testing.

This is basically the same as if you turn off write caching in Linux so that everything is written instantaneously to the disk, which is always a bad thing.

On the other hand, there is also a risk of data loss if you have a power outage, but I've actually never heard of serious issues with that...

 

4 minutes ago, maxstevens2 said:

I guess performance will increase when using the whole disk (block) too. Will this be close to 1:1 performance (although I am already pretty happy with the performance)?

Yes, it should be really close to the performance you get when the disk is connected directly to a SATA controller.

 

5 minutes ago, maxstevens2 said:

Okay, good; I was already scared I would lose data. Are there maybe any tweaks? I could limit my network card to 2.5G or 5G (but 2.5G means ~300MB/s, which is still above the 200MB/s limit of the disk).

No, I would leave it as it is, because the disk does what it's able to do. The overall write speed only seems subjectively slower because it sometimes drops down to 0MB/s, but overall it's basically the same and even a little faster.

The buffer is located in RAM and it can happen that the speed is way higher than what the disk is actually capable of.

 

If you copy, for example, at 400MB/s over iSCSI to a spinning disk that is usually capable of writing at 150MB/s, the buffer slowly fills up, and shortly after you've started copying the kernel begins writing the data from RAM to the disk. The spinning disk can't keep up with those speeds, so when the RAM buffer is full you see these messages in the log and the speed drops down to 0MB/s. When this happens, the write cache is flushed out to the disk down to a certain threshold, and when that threshold is reached the copy continues on the Initiator (keep in mind the disk on the Target is always running at full tilt because it can't keep up with these kinds of speeds).

 

You can actually tune some values, but it can get worse very quickly if you put in the wrong numbers; as it is, it is really well tuned.
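
For context, the write buffer described above is the normal Linux page cache, and the values in question are kernel dirty-page writeback settings along these lines (a sketch; whether these are exactly the knobs meant here is an assumption, and the defaults are usually fine):

```
# How full (as a percentage of RAM) the dirty-page buffer may get before background
# writeback starts, and before writers are blocked until data has been flushed to disk
sysctl vm.dirty_background_ratio
sysctl vm.dirty_ratio

# Changing them looks like this (shown only to illustrate the mechanism, not a recommendation)
sysctl -w vm.dirty_background_ratio=5
sysctl -w vm.dirty_ratio=10
```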

In my case I can read/write at full speed to the Target using a block device (SATA SSD), about 540MB/s.

 

Hope that makes sense to you.

7 minutes ago, ich777 said:

(keep in mind the disk on the Target is always running at full tilt because it can't keep up with these kinds of speeds)

Well, the weird thing is (I saw this with iotop on Unraid), see the screenshot after this quote:

 

47 minutes ago, maxstevens2 said:

It seems to be a bit 'stuttery' pushing all the data through:

 

"Actual Disk Write 0 B/s", while on the picture before the quote it was running at 165MB/s (while probably exhausting buffer).

 

Thanks so much for the replies and help!

 

9 minutes ago, ich777 said:

In my case I can read/write at full speed to the Target using a block device (SATA SSD), about 540MB/s.

Does this not use RAM as cache/buffer?

1 minute ago, maxstevens2 said:

Does this not use RAM as cache/buffer?

Yes and no; you basically write directly to it, as if you had it physically connected to the machine.

 

8 minutes ago, maxstevens2 said:

"Actual Disk Write 0 B/s", while on the picture before the quote it was running at 165MB/s (while probably exhausting buffer).

Yes, it can fluctuate a bit depending on what files (size, how many of them,...) you are transferring, how much the buffer is filled and so on... there are many, many variables.

 

2 minutes ago, maxstevens2 said:

Just tested this myself, it does not. Maximum read performance coming from the disk!

Exactly, there is no read cache, so this is the read speed from the disk; the disk can only deliver the speed that it's capable of. ;)

 

 

Hope this explains most of your questions.

2 minutes ago, ich777 said:

Yes, it can fluctuate a bit depending on what files (size, how many of them,...) you are transferring, how much the buffer is filled and so on... there are many, many variables.

And based on these variables, I have to make my decision 😜. But since I am pretty new to iSCSI and the disk is also pretty new (the server disk is a 4TB WD RED PRO), I have to see what I am going to do about storing data and such (I am not new to Unraid, by the way; I already have 4TB+ of data). I will try it and see how it all ends up.

 

4 minutes ago, ich777 said:

Hope this explains most of your questions.

It does. Thanks so much for all the help!


Hi @ich777, I'm wondering if this plugin is what I'm looking for to connect to a Windows 10 machine? Ideally I'd like to use my Unraid share volumes as the main storage point for an archive of family photos and videos... I apologize if this is a stupid question; I was about to install the plugin and then got worried about the data on the shares...

 

Thanks in advance!

8 hours ago, Stanui said:

Ideally I'd like to use my Unraid share volumes as the main storage point for an archive of family photos and videos...

iSCSI is only good for one connected computer; do not connect multiple computers, as this can cause file and even filesystem corruption!

 

I think the best way to do what you want is to create a share and map a network drive on the local computers.

If you want only yourself to have write access, you can create multiple accounts: one with read/write access and the others read-only.
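
On Unraid you would normally configure all of this in the GUI (the share's SMB security settings), but under the hood it boils down to Samba's write list / read list. A rough sketch of the idea, with made-up share and user names (the smb-extra.conf path is Unraid's SMB extras file, as far as I know):

```
# Sketch only: share "archive", user "max" may write, user "family" is read-only.
cat <<'EOF' >> /boot/config/smb-extra.conf
[archive]
    path = /mnt/user/archive
    write list = max
    read list = family
EOF
```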

6 hours ago, ich777 said:

iSCSI is only good for one connected computer; do not connect multiple computers, as this can cause file and even filesystem corruption!

 

I think the best way to do what you want is to create a share and map a network drive on the local computers.

If you want only yourself to have write access, you can create multiple accounts: one with read/write access and the others read-only.

Sorry, I should have specified. I only have one computer that would be connecting to the iSCSI target itself. Any other connections to the Unraid shares would be through VMs on the system itself. Would that still cause issues?

 

Thanks!

17 minutes ago, Stanui said:

I only have one computer that would be connecting to the iSCSI target itself. Any other connections to the Unraid shares would be through VMs on the system itself. Would that still cause issues?

Yes, this would also cause issues. You can theoretically connect multiple Initiators (clients) to one Target (server), but if you do, it is very likely that you will cause data corruption and even filesystem corruption.

 

Usually you would do that with network shares instead.

  • 2 weeks later...

Question: using this on a vanilla kernel (Unraid 6.9.2, Linux NAS01 5.10.28-Unraid #1 SMP Wed Apr 7 08:23:18 PDT 2021 x86_64 Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz GenuineIntel GNU/Linux) works fine; however, Persistent Reservations (SCSI-3) aren't working correctly, and I'm trying to do some fancy failover cluster stuff.

 

I'm getting "Failed", and "The device is not ready", with an error of 0x80070015. After doing some searching of Windows errorness, some people mention its because Persistent Reservations wasn't working with LIO (3 years ago, but might be a clue?).

 

Is it because I'm missing the custom kernel?

 

Edited by Miguel Rodriguez
Add versions
