Joedy

Members
  • Posts: 68
  • Joined
  • Last visited

Everything posted by Joedy

  1. Can you add a new diagnostics file? That way it is not confusing with all the VMDK errors, because the drive was not mounted.
  2. Are you going to release an Axigen X5 Docker image, or how would I upgrade my current one to this, please?
  3. Hi, I have a bit of a predicament and need some advice. First, let me say: don't run your Exchange databases in a storage pool and then use the iSCSI plugin to access them on your Windows Exchange server. They don't like this; it floods your logs with the below, then locks you out of the connection, and you have to reboot Unraid to fix it. I have a 10Gb fibre card and jumbo frames, but it just has issues. So, to try to get the data off the RAID 0 pool, I was thinking I could remove the drives from the pool and then mount them in Unassigned Devices to get the data off? Or remove them from Unraid, put them in an enclosure, and get the data off on a Windows machine? I'm just unsure how to even attempt this, and any help would be very welcome.
  4. Thanks for the reply. I was hoping for a faster way to do it, as they are Exchange databases I am moving and they are not small. I added the File Manager plugin and will see if that will do it for me.
  5. Mine is the same, but my cluster volumes work fine; I was just showing you how to find out. I found in some instances of Windows that if the disk is initialized or online on one Hyper-V host, then Cluster Volume Manager would not detect the disk, so you could not convert it to a cluster volume. It's been a while since I set one up.
  6. Hi, I have a few storage pools that I use with the iSCSI plugin to connect to my Windows servers. I recently purchased a few 8TB SSD drives and set up another pool, and I want to migrate files from an existing pool to the new one. Is there an app for this, or how would I do it through the CLI? I have looked through the forums but can't find anything specific to storage pools, only cache pools, and that's not the same.
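     For reference, in case anyone else lands here: a minimal CLI sketch for copying one pool's contents to another with rsync, assuming the pools are mounted at /mnt/oldpool and /mnt/newpool (placeholder names — substitute your actual pool mount points):

     ```shell
     # Placeholder mount points -- substitute your actual pool names under /mnt
     SRC="/mnt/oldpool/"   # trailing slash: copy the contents, not the directory itself
     DST="/mnt/newpool/"

     # Dry run first, to review what would be transferred
     rsync -avh --dry-run "$SRC" "$DST"

     # Real copy: -a preserves permissions/ownership/timestamps, -h prints
     # human-readable sizes, --progress is useful on large Exchange-sized files
     rsync -avh --progress "$SRC" "$DST"
     ```

     Rerun the same command after the first pass; rsync will only transfer what changed, which helps if the source is still being written to.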
  7. You can attempt to read the persistent-reservations capabilities of a disk/LUN with the "sg_persist -d /dev/ -c" command. If the disk/LUN won't support SCSI-3 persistent reservations, the command will respond with a "command not supported" error message:

     sg_persist -d /dev/sda -c
       ATA       VBOX HARDDISK   1.0
       Peripheral device type: disk
     PR in (Report capabilities): command not supported

     If the disk/LUN claims to support SCSI-3 persistent reservations, you'll get a longer response:

     sg_persist -d /dev/sde -c
       IBM       2145            0000
       Peripheral device type: disk
     Report capabilities response:
       Compatible Reservation Handling(CRH): 1
       Specify Initiator Ports Capable(SIP_C): 0
       All Target Ports Capable(ATP_C): 0
       Persist Through Power Loss Capable(PTPL_C): 1
       Type Mask Valid(TMV): 1
       Allow Commands: 0
       Persist Through Power Loss Active(PTPL_A): 0
         Support indicated in Type mask:
           Write Exclusive, all registrants: 1
           Exclusive Access, registrants only: 1
           Write Exclusive, registrants only: 1
           Exclusive Access: 1
           Write Exclusive: 1
           Exclusive Access, all registrants: 1

     The above commands are sufficient to verify whether the disk/LUN claims to support SCSI-3 persistent reservations or not.
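     A small follow-up sketch: a helper that classifies sg_persist output per device (the function name classify_pr is just illustrative, and it assumes the sg3_utils package that provides sg_persist):

     ```shell
     # Classify "sg_persist -d <dev> -c" output: a real capabilities block
     # means the disk/LUN claims SCSI-3 persistent-reservation support.
     classify_pr() {
         if printf '%s' "$1" | grep -q "Report capabilities response"; then
             echo supported
         else
             echo unsupported
         fi
     }

     # Usage against a real disk (requires sg3_utils installed):
     #   classify_pr "$(sg_persist -d /dev/sda -c 2>&1)"
     ```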
  8. MrOz, we use it in a Hyper-V failover cluster without any issues. Have you enabled the disk on both physical boxes and then created the cluster volume?
  9. Currently running Version: 6.10.2-rc3 (due to the SAN certificates that we use). Sorry about the title, but I'm unsure what exactly to put this under. I have a Veritas backup server that connects to my Unraid share to use as a disk storage device for backups. It all works great, then overnight or after a couple of days I start getting the entries below through the logs, and the server can't write to the directory anymore. Permissions seem fine. The logs have /boot as the directory, so I thought it could be the flash drive; I tried to access it through UNC but it is access denied, and changing the share to export public still does not allow access.

     Jun 1 09:45:33 Backup smbd[23983]: [2022/06/01 09:45:33.774155, 0] ../../source3/smbd/service.c:168(chdir_current_service)
     Jun 1 09:45:33 Backup smbd[23983]: chdir_current_service: vfs_ChDir(/boot) failed: Permission denied. Current token: uid=1001, gid=1000, 14 groups: 1000 0 1 2 3 4 6 10 17 281 1004 1005 1006 1002

     (the same chdir_current_service / vfs_ChDir(/boot) "Permission denied" pair repeats, with an identical token, at 09:45:35.250772, 09:45:35.264995, 09:45:36.342492, 09:45:36.343609, 09:45:36.364459, 09:45:36.365425, 09:45:37.880785 and 09:45:37.885701)

     The following is from changing the flash share; unsure if it is relevant:

     Jun 1 09:44:22 Backup emhttpd: Starting services...
     Jun 1 09:44:22 Backup emhttpd: shcmd (82167): /etc/rc.d/rc.samba restart
     Jun 1 09:44:22 Backup wsdd2[21989]: 'Terminated' signal received.
     Jun 1 09:44:22 Backup nmbd[21975]: [2022/06/01 09:44:22.145183, 0] ../../source3/nmbd/nmbd.c:59(terminate)
     Jun 1 09:44:22 Backup nmbd[21975]: Got SIGTERM: going down...
     Jun 1 09:44:22 Backup winbindd[21993]: [2022/06/01 09:44:22.145226, 0] ../../source3/winbindd/winbindd.c:247(winbindd_sig_term_handler)
     Jun 1 09:44:22 Backup winbindd[21995]: [2022/06/01 09:44:22.145234, 0] ../../source3/winbindd/winbindd.c:247(winbindd_sig_term_handler)
     Jun 1 09:44:22 Backup winbindd[21993]: Got sig[15] terminate (is_parent=1)
     Jun 1 09:44:22 Backup winbindd[21995]: Got sig[15] terminate (is_parent=0)
     Jun 1 09:44:22 Backup winbindd[21998]: [2022/06/01 09:44:22.145256, 0] ../../source3/winbindd/winbindd.c:247(winbindd_sig_term_handler)
     Jun 1 09:44:22 Backup winbindd[21998]: Got sig[15] terminate (is_parent=0)
     Jun 1 09:44:22 Backup wsdd2[21989]: terminating.
     Jun 1 09:44:22 Backup winbindd[22253]: [2022/06/01 09:44:22.145364, 0] ../../source3/winbindd/winbindd.c:247(winbindd_sig_term_handler)
     Jun 1 09:44:22 Backup winbindd[22254]: [2022/06/01 09:44:22.145390, 0] ../../source3/winbindd/winbindd.c:247(winbindd_sig_term_handler)
     Jun 1 09:44:22 Backup winbindd[22253]: Got sig[15] terminate (is_parent=0)
     Jun 1 09:44:22 Backup winbindd[22254]: Got sig[15] terminate (is_parent=0)
     Jun 1 09:44:25 Backup root: Starting Samba: /usr/sbin/smbd -D
     Jun 1 09:44:25 Backup smbd[22685]: [2022/06/01 09:44:25.788805, 0] ../../source3/smbd/server.c:1734(main)
     Jun 1 09:44:25 Backup smbd[22685]: smbd version 4.15.7 started.
     Jun 1 09:44:25 Backup smbd[22685]: Copyright Andrew Tridgell and the Samba Team 1992-2021
     Jun 1 09:44:25 Backup root: /usr/sbin/nmbd -D
     Jun 1 09:44:25 Backup nmbd[22687]: [2022/06/01 09:44:25.821883, 0] ../../source3/nmbd/nmbd.c:901(main)
     Jun 1 09:44:25 Backup nmbd[22687]: nmbd version 4.15.7 started.
     Jun 1 09:44:25 Backup nmbd[22687]: Copyright Andrew Tridgell and the Samba Team 1992-2021
     Jun 1 09:44:25 Backup root: /usr/sbin/wsdd2 -d
     Jun 1 09:44:25 Backup root: /usr/sbin/winbindd -D
     Jun 1 09:44:25 Backup wsdd2[22701]: starting.
     Jun 1 09:44:25 Backup winbindd[22702]: [2022/06/01 09:44:25.967113, 0] ../../source3/winbindd/winbindd.c:1722(main)
     Jun 1 09:44:25 Backup winbindd[22702]: winbindd version 4.15.7 started.
     Jun 1 09:44:25 Backup winbindd[22702]: Copyright Andrew Tridgell and the Samba Team 1992-2021
     Jun 1 09:44:25 Backup winbindd[22704]: [2022/06/01 09:44:25.972932, 0] ../../source3/winbindd/winbindd_cache.c:3085(initialize_winbindd_cache)
     Jun 1 09:44:25 Backup winbindd[22704]: initialize_winbindd_cache: clearing cache and re-creating with version number 2
     Jun 1 09:44:26 Backup emhttpd: shcmd (82173): /etc/rc.d/rc.avahidaemon restart
     Jun 1 09:44:26 Backup root: Stopping Avahi mDNS/DNS-SD Daemon: stopped
     Jun 1 09:44:26 Backup avahi-daemon[22014]: Got SIGTERM, quitting.
     Jun 1 09:44:26 Backup avahi-dnsconfd[22023]: read(): EOF
     Jun 1 09:44:26 Backup avahi-daemon[22014]: Leaving mDNS multicast group on interface vnet0.IPv6 with address fe80::fc54:ff:fec5:4695.
     Jun 1 09:44:26 Backup avahi-daemon[22014]: Leaving mDNS multicast group on interface vethd75e605.IPv6 with address fe80::8817:44ff:fe33:3f91.
     Jun 1 09:44:26 Backup avahi-daemon[22014]: Leaving mDNS multicast group on interface docker0.IPv6 with address fe80::42:f1ff:fe69:9ed9.
     Jun 1 09:44:26 Backup avahi-daemon[22014]: Leaving mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
     Jun 1 09:44:26 Backup avahi-daemon[22014]: Leaving mDNS multicast group on interface br0.IPv4 with address 10.1.0.10.
     Jun 1 09:44:26 Backup avahi-daemon[22014]: Leaving mDNS multicast group on interface lo.IPv6 with address ::1.
     Jun 1 09:44:26 Backup avahi-daemon[22014]: Leaving mDNS multicast group on interface lo.IPv4 with address 127.0.0.1.
     Jun 1 09:44:26 Backup avahi-daemon[22014]: avahi-daemon 0.8 exiting.
     Jun 1 09:44:26 Backup root: Starting Avahi mDNS/DNS-SD Daemon: /usr/sbin/avahi-daemon -D
     Jun 1 09:44:26 Backup avahi-daemon[22726]: Found user 'avahi' (UID 61) and group 'avahi' (GID 214).
     Jun 1 09:44:26 Backup avahi-daemon[22726]: Successfully dropped root privileges.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: avahi-daemon 0.8 starting up.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: Successfully called chroot().
     Jun 1 09:44:26 Backup avahi-daemon[22726]: Successfully dropped remaining capabilities.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: Loading service file /services/sftp-ssh.service.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: Loading service file /services/smb.service.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: Loading service file /services/ssh.service.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: Joining mDNS multicast group on interface vnet0.IPv6 with address fe80::fc54:ff:fec5:4695.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: New relevant interface vnet0.IPv6 for mDNS.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: Joining mDNS multicast group on interface vethd75e605.IPv6 with address fe80::8817:44ff:fe33:3f91.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: New relevant interface vethd75e605.IPv6 for mDNS.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: Joining mDNS multicast group on interface docker0.IPv6 with address fe80::42:f1ff:fe69:9ed9.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: New relevant interface docker0.IPv6 for mDNS.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: New relevant interface docker0.IPv4 for mDNS.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: Joining mDNS multicast group on interface br0.IPv4 with address 10.1.0.10.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: New relevant interface br0.IPv4 for mDNS.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: Joining mDNS multicast group on interface lo.IPv6 with address ::1.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: New relevant interface lo.IPv6 for mDNS.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: Joining mDNS multicast group on interface lo.IPv4 with address 127.0.0.1.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: New relevant interface lo.IPv4 for mDNS.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: Network interface enumeration completed.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: Registering new address record for fe80::fc54:ff:fec5:4695 on vnet0.*.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: Registering new address record for fe80::8817:44ff:fe33:3f91 on vethd75e605.*.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: Registering new address record for fe80::42:f1ff:fe69:9ed9 on docker0.*.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: Registering new address record for 172.17.0.1 on docker0.IPv4.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: Registering new address record for 10.1.0.10 on br0.IPv4.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: Registering new address record for ::1 on lo.*.
     Jun 1 09:44:26 Backup avahi-daemon[22726]: Registering new address record for 127.0.0.1 on lo.IPv4.
     Jun 1 09:44:26 Backup emhttpd: shcmd (82174): /etc/rc.d/rc.avahidnsconfd restart
     Jun 1 09:44:26 Backup root: Stopping Avahi mDNS/DNS-SD DNS Server Configuration Daemon: stopped
     Jun 1 09:44:26 Backup root: Starting Avahi mDNS/DNS-SD DNS Server Configuration Daemon: /usr/sbin/avahi-dnsconfd -D
     Jun 1 09:44:26 Backup avahi-dnsconfd[22735]: Successfully connected to Avahi daemon.
     Jun 1 09:44:26 Backup emhttpd: shcmd (82179): smbcontrol smbd close-share 'flash'
     Jun 1 09:44:27 Backup avahi-daemon[22726]: Server startup complete. Host name is Backup.local. Local service cookie is 1788355674.
     Jun 1 09:44:28 Backup avahi-daemon[22726]: Service "Backup" (/services/ssh.service) successfully established.
     Jun 1 09:44:28 Backup avahi-daemon[22726]: Service "Backup" (/services/smb.service) successfully established.
     Jun 1 09:44:28 Backup avahi-daemon[22726]: Service "Backup" (/services/sftp-ssh.service) successfully established.
     Jun 1 09:44:48 Backup nmbd[22691]: [2022/06/01 09:44:48.727507, 0] ../../source3/nmbd/nmbd_become_lmb.c:398(become_local_master_stage2)
     Jun 1 09:44:48 Backup nmbd[22691]: *****
     Jun 1 09:44:48 Backup nmbd[22691]:
     Jun 1 09:44:48 Backup nmbd[22691]: Samba name server BACKUP is now a local master browser for workgroup JHE on subnet 172.17.0.1
     Jun 1 09:44:48 Backup nmbd[22691]:
     Jun 1 09:44:48 Backup nmbd[22691]: *****
     Jun 1 09:44:48 Backup nmbd[22691]: [2022/06/01 09:44:48.727688, 0] ../../source3/nmbd/nmbd_become_lmb.c:398(become_local_master_stage2)
     Jun 1 09:44:48 Backup nmbd[22691]: *****
     Jun 1 09:44:48 Backup nmbd[22691]:
     Jun 1 09:44:48 Backup nmbd[22691]: Samba name server BACKUP is now a local master browser for workgroup JHE on subnet 10.1.0.10
     Jun 1 09:44:48 Backup nmbd[22691]:
     Jun 1 09:44:48 Backup nmbd[22691]: *****
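     A quick way to line the failing token up against the flash mount's actual permissions (a sketch; the helper name failing_token is mine, and the syslog path is an assumption):

     ```shell
     # Extract the uid/gid from a smbd "Current token:" log line like the ones above
     failing_token() {
         sed -n 's/.*Current token: uid=\([0-9]*\), gid=\([0-9]*\).*/uid=\1 gid=\2/p' | head -n1
     }

     # Usage (on the Unraid box):
     #   grep vfs_ChDir /var/log/syslog | failing_token   # who Samba thinks you are
     #   ls -ld /boot                                     # who may actually enter /boot
     ```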
  10. How many USB devices do you have plugged in?
  11. This was sorted by upgrading to 6.10-rc2. A few steps were involved, but it does recognise our multi-domain wildcard cert (SAN). Thanks to ljm42 for the help.
  12. I'll PM you my cert so you can see the issues; it does not want to use it, and keeps using the Unraid cert instead.
  13. I can UNC into the server, so I just changed the ident.cfg file. It is having issues letting go of the Unraid cert, so I'm just trying to settle it and then I can try again.
  14. Did all this, but HTTPS gives a 404 error now. I can't log in to the server through the GUI, as it just keeps going back to the login screen under HTTP. I can log in to the CLI without issue. I'll have a dig around to see what's happening.
  15. Thanks, trying this now. Will keep you posted.
  16. Yes, the attachment worked, but if I delete it directly off the post it puts it back. Ctrl+V is all I am doing to paste into a post, using Windows 11.
  17. A multi-domain wildcard cert is just as secure as any cert, if not more secure, and most enterprises run off SAN certs these days so they don't have to have 20 different certificates. I would not count it as a mistake that it worked in 6.9.2, as it was one of the reasons we moved to Unraid: we could secure it for an enterprise environment, and our other systems use the certificate to secure the connections between them. Is there maybe a workaround for this, or is rolling back my only option? Thanks heaps for your help.
  18. Ctrl+V to paste from Snagit. I even deleted one of the screenshots, but it puts it back after I save.
  19. Well, that's just not real good, as it is a multi-domain wildcard cert and that is not an option for it, since the subject name is horts.com.au. We use this certificate throughout our enterprise setup. A bit disappointed this was changed and is now useless; the drives in the array do not show up if the certificate does not match, and other areas have issues now because of this. We will just roll back and use the older version, as the new one provides no benefit to us. Thanks heaps for your response.
  20. This is the cert it should be using, and was using before the upgrade: a multi-domain wildcard certificate.
  21. The URL to access the server is https://backup.horts.com.au; the cert in the screenshots is just the Unraid cert, not my actual certificate.
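     For anyone debugging the same thing: a sketch of how to check which certificate the box is actually serving and which SANs it carries (hostname taken from the post; needs OpenSSL 1.1.1+ for the -ext option):

     ```shell
     HOST="backup.horts.com.au"   # hostname from the post -- substitute your own

     # Show the subject and SAN list of whatever cert the server presents.
     # If this prints the Unraid-generated cert, the custom SAN cert isn't loaded.
     echo | openssl s_client -connect "$HOST:443" -servername "$HOST" 2>/dev/null \
       | openssl x509 -noout -subject -ext subjectAltName
     ```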
  22. "Sorry about the double up of screen shots but when I paste into the forum it puts double in for some reason." I tried to do it bother ways with the wildcard cert I have but it wont pick up the local Certificate and if I dont provision the unraid cert I cant use the My Servers
  23. OK, thanks, will give it a try. So it is not the TLD anymore, it is the machine's full domain name.
  24. I had this issue. A reinstall from Apps > Previous Apps > Actions fixed it for my Plex; it did some scans and such, but it came back up after this.