cholzer

Everything posted by cholzer

  1. I was looking for the same info, thank you! I just migrated a Debian VM from Proxmox to Unraid and I am amazed how easy it was! One thing I'd like to add is that Proxmox itself is capable of extracting a vma file. So if you still have access to your Proxmox system, just open the shell and run
     cd /path/where/you/store/your/vma
     vma extract [name].vma -v /target/folder/to/extract/to
     to extract the raw image. https://pve.proxmox.com/wiki/VMA
  2. I think I figured it out.
     Goal (short version): Use SSH to have Unassigned Devices mount/unmount a specific disk and create an SMB/CIFS share for that disk.
     Long version - my use case:
     (pre-backup script) a server on my network connects to Unraid via SSH and instructs UD to mount a specific disk, which also creates the share
     the server then runs a backup job which has this UD share as its target
     (post-backup script) once the backup is done, the server connects to Unraid via SSH and instructs UD to unmount this specific disk, which also removes the share
     Step-by-step guide (I assume that your disk already has a single partition):
     1. Connect the disk to your Unraid system.
     2. Go to the settings of the disk and enable the 'share' setting.
     3. Mount the disk and make sure that the share is created and can be accessed.
     4. Note the disk id (sd*, e.g. sdg) next to the drive serial in the UD GUI.
     5. SSH into the Unraid server or open the terminal in the web GUI.
     6. Run "ls -ahlp /dev/disk/by-uuid" to list all drives by their UUID (look for the sd* to find your disk's UUID).
     7. Now you can use these two commands to mount/unmount this specific disk via SSH or a script on your Unraid machine:
     /usr/local/sbin/rc.unassigned mount '/dev/disk/by-uuid/THEUUIDOFYOURDISK'
     /usr/local/sbin/rc.unassigned umount '/dev/disk/by-uuid/THEUUIDOFYOURDISK'
     Hope this helps someone who finds themselves in the same situation I was in.
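     The pre/post-backup hook from this guide could be sketched roughly as below. This is a minimal sketch, not the exact script I run: the host name and the assumption of key-based SSH login as root are placeholders of mine; the rc.unassigned calls are the ones from the steps above, and the actual ssh invocations are commented out because they need a reachable Unraid box.

     ```shell
     #!/bin/sh
     # Sketch of the pre/post-backup hook described above.
     # Assumption: key-based SSH login as root is configured on the Unraid box.

     UNRAID_HOST="root@unraid.local"   # hypothetical host name - substitute your own

     # Build the rc.unassigned command for a given phase (pre|post) and disk UUID
     # (find the UUID with: ls -ahlp /dev/disk/by-uuid).
     build_ud_cmd() {
         disk="/dev/disk/by-uuid/$2"
         case "$1" in
             pre)  echo "/usr/local/sbin/rc.unassigned mount '$disk'" ;;
             post) echo "/usr/local/sbin/rc.unassigned umount '$disk'" ;;
         esac
     }

     # Usage (commented out; needs the real host and UUID):
     # ssh "$UNRAID_HOST" "$(build_ud_cmd pre  THEUUIDOFYOURDISK)"
     # ... run the backup job against the SMB share ...
     # ssh "$UNRAID_HOST" "$(build_ud_cmd post THEUUIDOFYOURDISK)"
     ```

     Building the command string in one place keeps the pre and post hooks symmetric, so the backup software only has to pass "pre" or "post".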
  3. Is it possible to SSH into Unraid and then remotely call UD to mount or unmount a disk? I did take a look at the scripts in the first post but these don't seem to hold the answer. 😅 The reason for my question is that, as it turns out, I run into several issues with the SMB share creation/destruction when I remotely SSH into Unraid and just mount/unmount the disk myself. These problems do not exist when I use the "mount" button in the UD GUI, so I guess the solution would be to call UD remotely via SSH and let it do the mount/umount. But how? Thanks in advance!
  4. I noticed that it can sometimes take 60 seconds or more for the SMB share to become accessible after the disk has been mounted. Is that.... normal? 😅
  5. Thx! I found a way to have Veeam Backup & Replication remotely mount the disk via SSH before the backup starts and then unmount it once the backup is done.
  6. thx! Ideally I would have Veeam Backup & Replication SSH into Unraid, ensure that the disk is mounted before the backup, and unmount it when it's done. But I need to see if I can execute that as a pre/post backup job. (ESXi is on a different machine - Unraid is 'only' used to store onsite backups and as 'provider' for the offsite backup media.) As a quick solution I have now created a userscript to unmount the disk, scheduled to run ~2 hours after the backup task has always finished. This worked nicely in my test! I assume there is no way to have a cron job run sometime at night that makes Unraid/Linux forget that a disk was safely removed (in case that happens at some point), so that a reboot isn't required.
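     The unmount userscript mentioned above could look something like the sketch below. The UUID is a placeholder, and the presence check before calling umount is my own addition (it makes the script safe to schedule even on days when the drive is already unplugged); rc.unassigned is the Unassigned Devices helper from the step-by-step post. The command path is passed as a parameter so the logic can be dry-run.

     ```shell
     #!/bin/sh
     # Sketch of a scheduled userscript that unmounts the offsite backup disk
     # a couple of hours after the backup normally finishes.

     unmount_offsite_disk() {
         ud_cmd="$1"                        # e.g. /usr/local/sbin/rc.unassigned
         disk="/dev/disk/by-uuid/$2"        # disk UUID (placeholder below)
         # Only try to unmount if the disk is actually connected.
         if [ -e "$disk" ]; then
             "$ud_cmd" umount "$disk"
         else
             echo "disk $disk not present, nothing to unmount"
         fi
     }

     # Real invocation (needs the disk attached):
     # unmount_offsite_disk /usr/local/sbin/rc.unassigned THEUUIDOFYOURDISK
     ```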
  7. Thank you! I will reboot and see what happens. Is there a way to have a cron job unmount the disk? I'd like to make this offsite backup as "end-user proof" as possible, which means I do not want to have to unmount it manually every Monday evening before it gets unplugged. 😅
  8. I'm building a new NAS and I have a reeeeeeally funky issue. Unraid 6.9.2.
     The array consists of 3x 4TB Seagate IronWolfs and a 120GB Plextor SSD cache, but that has nothing to do with the funky stuff. I have 2x 2TB Seagate Compute drives, which are used for off-site backups. The use case is this:
     Monday morning, 2TB Seagate Compute "A" is plugged in, auto mounted and SMB shared by UD
     later that day a backup is executed on a different system which stores the backup on that disk/share
     Monday night, 2TB Seagate Compute "A" is disconnected and stored off site
     next week on Monday morning, 2TB Seagate Compute "B" is plugged in, auto mounted and SMB shared by UD
     later that day a backup is executed on a different system which stores the backup on that disk/share
     Monday night, 2TB Seagate Compute "B" is disconnected and stored off site
     etc.....
     Now here is the funky part:
     I connected 2TB Seagate Compute "A", configured auto mount and auto share in UD, then disconnected it
     I connected 2TB Seagate Compute "B", configured auto mount and auto share in UD, then disconnected it
     Whenever I now connect either "A" or "B", I get this greyed out "ARRAY" button in UD and the disk is not shared 😬🙃🤪
     I connected the drives to a different machine and deleted the partition. Yet they still show up with that greyed out "ARRAY" button in UD. Below is the log taken when I connected one of the drives while it had no partition anymore. It complains about a serial number mismatch, as if it does not like that different disks get connected to that SATA port.
     Mar 31 13:56:32 NAS kernel: ata6: SATA link down (SStatus 0 SControl 300)
     Mar 31 13:56:42 NAS kernel: ata6: link is slow to respond, please be patient (ready=0)
     Mar 31 13:56:44 NAS kernel: ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
     Mar 31 13:56:44 NAS kernel: ata6.00: serial number mismatch ' WFL3RRLV' != ' WFL2P4XZ'
     Mar 31 13:56:44 NAS kernel: ata6.00: revalidation failed (errno=-19)
     Mar 31 13:56:44 NAS kernel: ata6: limiting SATA link speed to 1.5 Gbps
     Mar 31 13:56:49 NAS kernel: ata6: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
     Mar 31 13:56:49 NAS kernel: ata6.00: serial number mismatch ' WFL3RRLV' != ' WFL2P4XZ'
     Mar 31 13:56:49 NAS kernel: ata6.00: revalidation failed (errno=-19)
     Mar 31 13:56:49 NAS kernel: ata6.00: disabled
     Mar 31 13:56:55 NAS kernel: ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
     Mar 31 13:56:55 NAS kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PCI0.SAT0.SPT5._GTF.DSSP], AE_NOT_FOUND (20200925/psargs-330)
     Mar 31 13:56:55 NAS kernel: ACPI Error: Aborting method \_SB.PCI0.SAT0.SPT5._GTF due to previous error (AE_NOT_FOUND) (20200925/psparse-529)
     Mar 31 13:56:55 NAS kernel: ata6.00: ATA-10: ST2000DM008-2FR102, WFL2P4XZ, 0001, max UDMA/133
     Mar 31 13:56:55 NAS kernel: ata6.00: 3907029168 sectors, multi 16: LBA48 NCQ (depth 32), AA
     Mar 31 13:56:55 NAS kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PCI0.SAT0.SPT5._GTF.DSSP], AE_NOT_FOUND (20200925/psargs-330)
     Mar 31 13:56:55 NAS kernel: ACPI Error: Aborting method \_SB.PCI0.SAT0.SPT5._GTF due to previous error (AE_NOT_FOUND) (20200925/psparse-529)
     Mar 31 13:56:55 NAS kernel: ata6.00: configured for UDMA/133
     Mar 31 13:56:55 NAS kernel: ata6.00: detaching (SCSI 7:0:0:0)
     Mar 31 13:56:55 NAS kernel: sd 7:0:0:0: [sdl] Synchronizing SCSI cache
     Mar 31 13:56:55 NAS kernel: sd 7:0:0:0: [sdl] Stopping disk
     Mar 31 13:56:55 NAS unassigned.devices: Reload: A udev 'remove disk' initiated a reload of udev info.
     Mar 31 13:56:55 NAS unassigned.devices: Updating udev information...
     Mar 31 13:56:55 NAS unassigned.devices: Udev: Update udev info for /dev/disk/by-id/ata-ST2000DM008-2FR102_WFL3RRLV.
     Mar 31 13:56:55 NAS unassigned.devices: Udev: Update udev info for /dev/disk/by-id/wwn-0x5000c500cf867c17.
     Mar 31 13:56:55 NAS kernel: scsi 7:0:0:0: Direct-Access ATA ST2000DM008-2FR1 0001 PQ: 0 ANSI: 5
     Mar 31 13:56:55 NAS kernel: sd 7:0:0:0: Attached scsi generic sg6 type 0
     Mar 31 13:56:55 NAS kernel: sd 7:0:0:0: [sdl] 3907029168 512-byte logical blocks: (2.00 TB/1.82 TiB)
     Mar 31 13:56:55 NAS kernel: sd 7:0:0:0: [sdl] 4096-byte physical blocks
     Mar 31 13:56:55 NAS kernel: sd 7:0:0:0: [sdl] Write Protect is off
     Mar 31 13:56:55 NAS kernel: sd 7:0:0:0: [sdl] Mode Sense: 00 3a 00 00
     Mar 31 13:56:55 NAS kernel: sd 7:0:0:0: [sdl] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
     Mar 31 13:56:56 NAS kernel: sdl:
     Mar 31 13:56:56 NAS kernel: sd 7:0:0:0: [sdl] Attached SCSI disk
     Mar 31 13:56:56 NAS unassigned.devices: Hotplug: A udev 'add disk' initiated a Hotplug event.
     Mar 31 13:56:56 NAS unassigned.devices: Updating udev information...
     Mar 31 13:56:56 NAS unassigned.devices: Udev: Update udev info for /dev/disk/by-id/ata-ST2000DM008-2FR102_WFL2P4XZ.
     Mar 31 13:56:56 NAS unassigned.devices: Udev: Update udev info for /dev/disk/by-id/wwn-0x5000c500cf6a8b34.
     Mar 31 13:56:59 NAS unassigned.devices: Processing Hotplug event...
     But even when I connect one of the drives to a different SATA port, it still shows up with that greyed out "ARRAY" button and shows the "mismatch" error in the log:
     Mar 31 14:01:33 NAS kernel: ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
     Mar 31 14:01:33 NAS kernel: ata3.00: serial number mismatch ' WFL2P4XZ' != ' WFL3RRLV'
     Mar 31 14:01:33 NAS kernel: ata3.00: revalidation failed (errno=-19)
     EDIT! Now it gets even funkier!!!
     🤣 If I do the exact same thing with these 2 disks: HGST_HDS724040ALE640 and HGST_HDS724040ALE640, then it works just fine! Only with the 2x ST2000DM008-2FR102 does it not work, and I run into this greyed out "ARRAY" button.
  9. I just stumbled over this thread. Please expose the ChatID in the Telegram settings area! That would make setting this up so much easier!
  10. The Telegram agent is extremely useful! However, if you'd like to have the agent send messages to a Telegram group, then the agent needs to know the ChatID of that group. Currently you have to use the terminal for that:
      cd /boot/config/plugins/dynamix/telegram
      nano chatid
      Please expose a 'chatid' field in the configuration of the Telegram agent to make this process more user friendly. @limetech
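      In script form, the terminal workaround above could look like this sketch. The config directory is passed as a parameter purely so the snippet can be tried outside Unraid, and the example id value is made up (Telegram group chat ids are negative numbers):

      ```shell
      #!/bin/sh
      # Sketch of the terminal workaround: write the group's ChatID into the
      # chatid file that the Telegram agent reads.

      write_chatid() {
          cfg_dir="$1"     # on Unraid: /boot/config/plugins/dynamix/telegram
          printf '%s\n' "$2" > "$cfg_dir/chatid"
      }

      # Made-up example id; look up your group's real ChatID via your bot:
      # write_chatid /boot/config/plugins/dynamix/telegram "-1001234567890"
      ```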
  11. Hey everyone! I just got the Telegram agent to work. However, we are 2 admins and we'd both like to get notifications from Unraid via Telegram. Is it possible to have the Telegram agent send messages to a Telegram group? I could not figure it out. 😅 Thanks in advance!
  12. How can this still be an issue? I am about to set up an Unraid machine for a friend and I ran into the exact same problem: this garbage USB creator does not detect ANY of the 10 different drives I tried. The manual method created a working USB drive that this Unraid machine now runs on (so my SanDisk USB drive is okay), but c'mon @limetech!!! Fix that USB creator!!!! This is a HORRIBLE first impression for anyone who considers trying Unraid!!!
  13. Guys, you should really open a new report in the correct subforum because mine is closed. Hardly anyone from the dev team will read closed reports.
  14. Hi! One of my Seagate IronWolf 8TB drives shows 14 read errors. But how concerned should I be about that? The drive is 13 months old, I did pre-clear it. Should I replace it and try RMA? ST8000VN004-2M2101_WKD17TMQ-20210301-0717.txt
  15. Have you tried unplugging the network cable overnight, then reconnecting in the morning and checking the log for spin-ups? That way you can rule out devices on your network causing the drives to wake up. Just for the record: since I replaced my failing HBA, my drives have stopped spinning up randomly.
  16. Yeah mine was broken, I even got random disconnects of entire disks later on. RMA in progress. In the meantime the SAS2008 is working fine and no more issues.
  17. My HBA is 7 months old, and it started to fail shortly after I upgraded to RC2. So just because your hardware is new does not mean that it is okay. The only way to make sure that RC2 is to blame is by downgrading.
  18. Increased CPU load every 10 seconds!? Intel Xeon E5-2620 v3; my Unraid is bored 90% of the day. There are no Dockers, no VMs, and no one on the LAN accesses the shares. With 6.8.3, prior to upgrading to RC2, the CPU load was mostly at 0-1%, with an occasional spike to 5% on a core. Now with RC2 I see multiple CPU cores spike to 15% every 10 seconds..... (all array disks spun down, no one using it for anything). nas-diagnostics-20210122-0730.zip
  19. If the issue goes away when you downgrade to the latest stable, then yeah - it would appear that RC2 is the problem. But just because you don't use the same HBA as I do does not mean that it is software related. In my case the HBA began to die; that is what caused my issue. Your onboard SATA controller can malfunction just as my HBA did. So unless downgrading Unraid to the latest stable fixes your issue, you cannot rule out a hardware fault. In my case the issue started after the upgrade, which is why I also first thought that RC2 was to blame, while it was actually the HBA.
  20. In my case it was caused by the LSI 3008 controller, which began to act up in other ways as well. This issue was, in my case, just one more symptom of the HBA failing. If you use plugins, VMs or Dockers, then you should try disabling all of them and see if the disks stay spun down.
  21. Changed Status to Closed. I am no longer able to reproduce the issue since I replaced my potentially failing LSI 3008 with an LSI 2008.
  22. I had to replace my LSI 3008 HBA with an LSI 2008 HBA because I suspected that the 3008 was failing. Since I did that, I can no longer reproduce this issue. Disks remain spun down now!
  23. I have not tried that. I only have my array disks, cache SSD, and one disk I share through Unassigned Devices for backups. That entry in the log just keeps showing up throughout the day.
  24. Does anyone else get this error in their log?
  25. Already changed the HBA and all cables. I still get this error in the log several times throughout the day. I do not recall getting it prior to the upgrade to 6.9.2 RC2. What does it even mean? What is this task it tries to abort?
      *edit* Recently replaced the LSI 3008 with an LSI 2008 (including the cables); this error message still shows up.
      Jan 18 07:23:17 NAS kernel: sd 3:0:4:0: attempting task abort!scmd(0x000000006d823564), outstanding for 15380 ms & timeout 15000 ms
      Jan 18 07:23:17 NAS kernel: sd 3:0:4:0: [sdf] tag#789 CDB: opcode=0x85 85 06 20 00 00 00 00 00 00 00 00 00 00 40 e5 00
      Jan 18 07:23:17 NAS kernel: scsi target3:0:4: handle(0x000d), sas_address(0x4433221106000000), phy(6)
      Jan 18 07:23:17 NAS kernel: scsi target3:0:4: enclosure logical id(0x590b11c05321f300), slot(5)
      Jan 18 07:23:20 NAS kernel: sd 3:0:4:0: task abort: SUCCESS scmd(0x000000006d823564)
      Jan 18 07:23:20 NAS kernel: sd 3:0:4:0: Power-on or device reset occurred
      Jan 18 07:23:20 NAS emhttpd: read SMART /dev/sde
      Jan 18 07:23:20 NAS emhttpd: read SMART /dev/sdf
      Jan 18 07:23:37 NAS kernel: sd 3:0:5:0: attempting task abort!scmd(0x00000000191fe81f), outstanding for 15232 ms & timeout 15000 ms
      Jan 18 07:23:37 NAS kernel: sd 3:0:5:0: [sdg] tag#787 CDB: opcode=0x85 85 06 20 00 00 00 00 00 00 00 00 00 00 40 e5 00
      Jan 18 07:23:37 NAS kernel: scsi target3:0:5: handle(0x000e), sas_address(0x4433221105000000), phy(5)
      Jan 18 07:23:37 NAS kernel: scsi target3:0:5: enclosure logical id(0x590b11c05321f300), slot(6)
      Jan 18 07:23:39 NAS kernel: sd 3:0:5:0: task abort: SUCCESS scmd(0x00000000191fe81f)
      Jan 18 07:23:39 NAS emhttpd: read SMART /dev/sdg
      Jan 18 07:23:39 NAS emhttpd: read SMART /dev/sdb
      Jan 18 07:36:08 NAS kernel: sd 3:0:4:0: Power-on or device reset occurred
      nas-diagnostics-20210118-0830.zip