[Plugins] iSCSI GUI and iSCSI Target



6 hours ago, ich777 said:

I wonder where you have that information from

Decades of working with storage arrays at various levels of the enterprise, along with building lots of home-grown solutions based on anything from Unraid to True(Free)NAS and their contemporaries.

In a properly tuned setup, the difference is not a deciding factor at all.


 

Quote

 

Sure, but that always depends on how it's set up.

 

That's true of anything though, no?

The devil's in the details; in this case, BTRFS is the likely culprit, with FUSE a close second.

My money's on BTRFS... shudder... how it's considered stable in this industry is beyond me.

 

 

Quote

Jumbo Frames shouldn't matter much when the right adapter is used; even on my Mellanox/Nvidia ConnectX-3 card I can saturate the full 10Gbit/s with the default MTU of 1500.

True, but you are missing one of the major advantages of jumbo frames: dramatically less overhead.

If a given implementation does not support NIC offloading, then it's the CPU that deals with it - even with modern CPUs, it doesn't take much for jumbo frames to become a factor.

Equally, retries due to misconfiguration at the network level add up too.
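For anyone who wants to experiment, a rough sketch of enabling and verifying jumbo frames on a Linux host is below - the interface name and target IP are just examples, and every NIC and switch in the path has to be set to the larger MTU as well:

# set MTU 9000 on the interface that carries the iSCSI traffic (eth0 is an example)
ip link set dev eth0 mtu 9000

# verify the path actually passes jumbo frames end to end:
# 8972 = 9000 minus 20 bytes IPv4 header and 8 bytes ICMP header; -M do forbids fragmentation
ping -M do -s 8972 -c 4 192.168.1.10

If that ping fails while a normal ping works, something in the path is still on MTU 1500, and you get exactly the fragmentation/retry overhead mentioned above.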

 

 

Quote

Exactly, but it is always my recommendation to use a single block device for iSCSI.

I see your point for diagnostic purposes, but for anything close to day-to-day, or 'production', a single block device is not a great idea.

One suggestion here would be to look into zvols to serve as the block device, giving the advantages of both data redundancy and pseudo-block-level access - assuming ZFS is in use, of course.
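As a rough sketch of what that could look like with the targetcli/LIO backend the plugin uses (pool and volume names below are just examples):

# create a 500G zvol on an existing ZFS pool (add -s for thin provisioning if preferred)
zfs create -V 500G tank/iscsi-lun0

# point a block backstore at the zvol instead of an image file
targetcli /backstores/block create name=lun0 dev=/dev/zvol/tank/iscsi-lun0

That way the LUN behaves like a single block device towards the initiator, while the pool underneath still provides the redundancy.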

 

Edited by boomam
zvol note.
On 6/6/2023 at 10:36 AM, MrOz said:

Ok, I figured out what I needed to do to get that command to run. I was too tired to see it correctly. It does show "not supported"

 

What settings can I change to fix this?

 

Mine is the same, but my cluster volumes work fine - I was just showing you how to find out. I found in some instances of Windows that if the disk is initialized or online on one Hyper-V host, the cluster volume manager would not detect the disk so that you could convert it to a cluster volume. It's been a while since I've set one up.

  • 2 weeks later...
2 hours ago, BeliefanX said:

could you please help me figure out where the problem lies?

Please share a bit more information about how you've set it up.

A screenshot of the configuration would be nice. Also, are you using an image, and do you have write-through activated?

 

The Diagnostics are missing from your post.
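If a screenshot is awkward, the output of targetcli from the Unraid console would also work, for example:

# dump the whole target configuration tree (backstores, targets, ACLs, portals)
targetcli ls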

9 hours ago, ich777 said:

What client, or rather which initiator, are you using? Are you sure that you aren't limited by the client?

Have you tried another machine yet?

 

I have the same issue under Windows, and I've tried all other settings, but it's still not working. However, under the same network conditions, the SMB protocol can reach 900MB/s.

 


 

 

2 hours ago, BeliefanX said:

I have the same issue under Windows, and I've tried all other settings, but it's still not working. However, under the same network conditions, the SMB protocol can reach 900MB/s.

Have you tried it with write-through deactivated yet?

 

I would also recommend checking what's going on here - your syslog is full of these messages:

Jun 22 08:16:40 Unraid sshd[28650]: Starting session: command for root from 10.10.10.253 port 58186 id 3
Jun 22 08:16:40 Unraid sshd[28650]: Close session: user root from 10.10.10.253 port 58186 id 3
Jun 22 08:16:40 Unraid sshd[28650]: Starting session: command for root from 10.10.10.253 port 58186 id 3
Jun 22 08:16:40 Unraid sshd[28650]: Close session: user root from 10.10.10.253 port 58186 id 3
[... the same Starting/Close session pair repeated continuously ...]

 


Could you update the tutorial to be more accurate and clear?

 

There are times where you don't use the same terms and it can be confusing.

 

"Define a LUN" instead of "Create a LUN" there is no define button on that page.

 

Also, when I get to the "add an initiator" step, you may as well be a doctor - I can't make sense of half of what's being said.

 

Are there any better instructions on how to set up an iSCSI target?

1 hour ago, JarpJupe said:

Could you update the tutorial to be more accurate and clear?

 

There are times where you don't use the same terms and it can be confusing.

 

"Define a LUN" instead of "Create a LUN" there is no define button on that page.

 

Also, when I get to the "add an initiator" step, you may as well be a doctor - I can't make sense of half of what's being said.

 

Are there any better instructions on how to set up an iSCSI target?

iSCSI is a complex setup. Maybe you could explain what you are looking to achieve?

  • 1 month later...

So I tried a FileIO disk on my ZFS mirror pool, just for fun, to check the speed versus just using the pool over SMB:
FileIO: Read 240 MB/s, Write 121 MB/s
SMB: Read 229 MB/s, Write 293 MB/s

Is the iSCSI FileIO overhead that big, or is it something else?
I also have a single 8TB HDD set up as block storage, and there I get the expected speed of 135 MB/s read/write.

All of this is over a 2.5Gbit direct connection.

44 minutes ago, isvein said:

Makes sense. So NTFS on a FileIO backstore gives more overhead than NTFS on a block device, it seems, if the speed test is to be trusted.

Yes, because if you are using a block device, the host is not really interested in which filesystem you are using on it - it just writes its 0s and 1s, or rather, it does not have to deal with the filesystem on the host side.
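If you want to rule out the pool itself, a quick local fio test against a scratch file on the same share could look something like this (paths and sizes are just examples, and fio has to be available on the system):

# sequential write test on a throwaway file next to the FileIO image (do NOT point this at the image itself)
fio --name=seqwrite --filename=/mnt/user/iscsi/fio-test.tmp --size=4G --rw=write --bs=1M --direct=1 --ioengine=libaio --numjobs=1

# sequential read test of the same scratch file
fio --name=seqread --filename=/mnt/user/iscsi/fio-test.tmp --size=4G --rw=read --bs=1M --direct=1 --ioengine=libaio --numjobs=1

If the FUSE path (/mnt/user) refuses --direct=1, drop that flag or test against the pool mount point directly. Comparing those numbers with the iSCSI results tells you how much of the gap is really FileIO overhead.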


Hi,

 

I've managed to successfully set up an iSCSI target and connect my PC, except sometimes my network goes down (dodgy router, working on that). When the network comes back up, Windows fails to reconnect.

 

When I look at Unraid I see:

 

Active Sessions

alias: sid: 2 type: Normal session-state: LOGGED_IN

name: iqn... (NOT AUTHENTICATED)

mapped-lun: 0 backstore: fileio/Drive mode: rw

address: 192....2 (TCP) cid: 1 connection-state: CLEANUP_WAIT

 

From what I understand, Windows won't reconnect because there is already a session locking the drive.

Is there a way to close the session, knowing full well it won't reconnect?

I'm aware that I could just restart my server, but that feels heavy-handed, and it would mean interrupting the other work I offload onto it.

54 minutes ago, kain0_0 said:

Is there a way to close the session, knowing full well it won't reconnect.

I think this has to be done on the Windows side, since you could technically connect two machines to one Target (though for obvious reasons that is not recommended).

Have you tried restarting your Windows machine yet?

9 hours ago, ich777 said:

Have you tried restarting your Windows machine yet?

 

Yes, without restarting the server, so I'm pretty sure it's on the server side. In the past, after restarting the server, it would just reconnect on the Windows side.

 

The configuration is (in case I missed something):

o- / ......................................................................................................................... [...]

o- backstores .............................................................................................................. [...]

| o- block .................................................................................................. [Storage Objects: 0]

| o- fileio ................................................................................................. [Storage Objects: 1]

| | o- GamesDrive ................................................. [/mnt/user/iScsi/GamesDrive.img (2.0TiB) write-thru activated]

| | o- alua ................................................................................................... [ALUA Groups: 1]

| | o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]

| o- pscsi .................................................................................................. [Storage Objects: 0]

| o- ramdisk ................................................................................................ [Storage Objects: 0]

o- iscsi ............................................................................................................ [Targets: 1]

| o- iqn.--snip--linux--snip--........................................................... [TPGs: 1]

| o- tpg1 ............................................................................................... [no-gen-acls, no-auth]

| o- acls .......................................................................................................... [ACLs: 1]

| | o- iqn.--snip--microsoft--snip--............................................................ [Mapped LUNs: 1]

| | o- mapped_lun0 ........................................................................... [lun0 fileio/GamesDrive (rw)]

| o- luns .......................................................................................................... [LUNs: 1]

| | o- lun0 .......................................... [fileio/GamesDrive (/mnt/user/iScsi/GamesDrive.img) (default_tg_pt_gp)]

| o- portals .................................................................................................... [Portals: 1]

| o- 0.0.0.0:3260 ..................................................................................................... [OK]

o- loopback ......................................................................................................... [Targets: 0]

o- vhost ............................................................................................................ [Targets: 0]

o- xen-pvscsi ....................................................................................................... [Targets: 0]

6 hours ago, kain0_0 said:

Yes, without restarting the server, so I'm pretty sure it's on the server side. In the past, after restarting the server, it would just reconnect on the Windows side.

I think this is maybe because the Target assumes that the Initiator with that IQN is still connected and doesn't allow a new connection with the same IQN.

 

May I ask if it wouldn't be better to sort out the root cause of the issue, the unstable network?
I'm not too sure about that, but I don't think there is an easy solution to your specific issue, because iSCSI usually requires a stable network connection.
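If the stale session itself is the blocker, one thing that could be tried on the Unraid console (untested for this exact case - the target IQN below is just an example, use the one from your own config) is to briefly disable and re-enable the target portal group, which should drop any hung sessions:

# toggle the TPG to drop stale sessions; replace the IQN with your own target's IQN
targetcli /iscsi/iqn.2003-01.org.linux-iscsi.unraid:target0/tpg1 disable
targetcli /iscsi/iqn.2003-01.org.linux-iscsi.unraid:target0/tpg1 enable

Any healthy initiators will be dropped too and have to log in again, so only do this when nothing important is connected.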

  • 3 weeks later...
7 hours ago, Xxharry said:

I am using a 1TB NVMe with the plugin, but that is now full. Can I somehow use 3 disks in RAID 5 with this plugin?

You can do that however you want, but keep in mind that you have to do it on the host, that is, on Unraid.

You can, for example, create a ZFS pool or a BTRFS volume, or you can even create a Windows Storage Pool if you want to do it directly in Windows.
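For the ZFS route, a rough sketch could look like this (pool, volume, and device names are just examples - double-check the device names, because creating the pool wipes them):

# create a raidz1 (RAID-5-like) pool from three disks
zpool create tank raidz1 /dev/sdx /dev/sdy /dev/sdz

# carve out a zvol to hand to the plugin as a block backstore
# (or keep using an image file placed on the pool instead)
zfs create -V 2T tank/iscsi-lun0

The zvol then shows up as /dev/zvol/tank/iscsi-lun0 and can be selected as the block device for the Target.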

  • 2 months later...

I'm having a problem with this message repeated over and over.

Nov 16 17:44:49 unRAID kernel: sd 16:0:0:0: [sdw] tag#59 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=DRIVER_OK cmd_age=0s
Nov 16 17:44:49 unRAID kernel: sd 16:0:0:0: [sdw] tag#59 Sense Key : 0x3 [current]
Nov 16 17:44:49 unRAID kernel: sd 16:0:0:0: [sdw] tag#59 ASC=0x11 ASCQ=0x1
Nov 16 17:44:49 unRAID kernel: sd 16:0:0:0: [sdw] tag#59 CDB: opcode=0x88 88 00 00 00 00 05 44 8a 93 08 00 00 08 18 00 00
Nov 16 17:44:49 unRAID kernel: critical medium error, dev sdw, sector 22624768776 op 0x0:(READ) flags 0x84700 phys_seg 259 prio class 2

 

I'm connecting to an iSCSI Drobo over a local gigabit connection. The connection is slow, but when I'm copying large amounts of data (from the iSCSI device), it eventually falls back to being read-only. I'm not able to resolve the issue without rebooting unRAID. It'll run fine copying data for 1-3 days and then fail again.

 

Do you have any suggestions?  Thanks!

7 hours ago, ich777 said:

It seems that sdw has some sort of issue, are you sure that the disk is okay?

It's possible that it does have an issue.  I don't know if it's a filesystem issue or a hardware issue with the Drobo itself.  I have two of them that I inherited from work during the COVID fire sale.  Out of all the mounted volumes, I think this is the only one that's giving me a problem.

I unmounted the volume from unRAID, mounted it from my Linux workstation, and I'm seeing the same errors. Once I get the data copied back to unRAID, I'll do some testing. I guess now that I'm seeing this error on the workstation as well, you can ignore my report. Thanks for the reply!
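If it helps anyone else who lands here: SMART data normally isn't passed through an iSCSI LUN, so a read-only pass over the device is one way to confirm whether those sectors are really unreadable (the device name and sector number below are taken from the log above):

# non-destructive read-only surface scan of the iSCSI-attached device (slow on large volumes)
badblocks -sv /dev/sdw

# or just re-read the region around the failing sector from the kernel log
dd if=/dev/sdw of=/dev/null bs=512 skip=22624768776 count=4096 iflag=direct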

  • 1 month later...

Hello,

 

I have just found and installed the "iSCSI Target" plugin; it works with Win10 according to the instructions without any problems.

 

Now I wanted to use a target for recording with a Bosch FLEXIDOME IP turret 3000i IR. 

If I use the camera's initiator instead of the Windows initiator, I always get the following error message:


-------
Dec 19 13:31:18 S2023 kernel: iSCSI Login negotiation failed.
Dec 19 13:31:23 S2023 kernel: Unable to locate key "X-com.bosch.BVIPInitiatorType".
Dec 19 13:31:23 S2023 kernel: Unable to locate key "X-com.bosch.BVIPInitiatorType".
Dec 19 13:31:23 S2023 kernel: ISCSI_FLAG_CMD_READ or ISCSI_FLAG_CMD_WRITE set when Expected Data Transfer Length is 0 for CDB: 0x00, Fixing up flags
Dec 19 13:31:23 S2023 kernel: iSCSI Initiator Node: iqn.2005-12.com.bosch:unit00075fb6cfdd is not authorized to access iSCSI target portal group: 1.
Dec 19 13:31:23 S2023 kernel: iSCSI Login negotiation failed.
------


I leave the password field in the camera's input mask empty.

Can anyone help me? Thanks.

 


1 hour ago, gpetr said:

Hello,

I have just found and installed the "iSCSI Target" plugin; it works with Win10 according to the instructions without any problems.

Now I wanted to use a target for recording with a Bosch FLEXIDOME IP turret 3000i IR. 

If I use the camera's initiator instead of the Windows initiator, I always get the following error message:
-------
Dec 19 13:31:18 S2023 kernel: iSCSI Login negotiation failed.
Dec 19 13:31:23 S2023 kernel: Unable to locate key "X-com.bosch.BVIPInitiatorType".
Dec 19 13:31:23 S2023 kernel: Unable to locate key "X-com.bosch.BVIPInitiatorType".
Dec 19 13:31:23 S2023 kernel: ISCSI_FLAG_CMD_READ or ISCSI_FLAG_CMD_WRITE set when Expected Data Transfer Length is 0 for CDB: 0x00, Fixing up flags
Dec 19 13:31:23 S2023 kernel: iSCSI Initiator Node: iqn.2005-12.com.bosch:unit00075fb6cfdd is not authorized to access iSCSI target portal group: 1.
Dec 19 13:31:23 S2023 kernel: iSCSI Login negotiation failed.
------

I leave the password field in the camera's input mask empty.

Can anyone help me? Thanks.

 

Can you post the config from the first page?
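One thing that stands out from the log in the meantime: the camera's IQN is reported as not authorized for portal group 1, so the target may simply be missing an ACL entry for it. From the console that would look roughly like this (the target IQN below is just an example, use your own; the initiator IQN is the one from your log):

# allow the camera's initiator on the target's portal group
targetcli /iscsi/iqn.2003-01.org.linux-iscsi.unraid:target0/tpg1/acls create iqn.2005-12.com.bosch:unit00075fb6cfdd

The plugin's "add initiator" step should do the same thing through the GUI.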

