Everything posted by doron

  1. A couple of thoughts:
     1. If by any chance your array was encrypted, then you can safely assume the drives are cryptographically wiped if you get rid of the master key, even more so if you erase the per-drive LUKS key store on each drive (a momentary operation). Let me know if this is the case and I can help with doing the latter.
     2. Since you're talking about completely decommissioning your system, we can take some liberties with it. Assuming the array is unmounted and stopped(!), collect all the X's in /dev/sdX of the drives you want to erase. Assuming for example that those are /dev/sdc, /dev/sdd and /dev/sdf, you can then do:
        for X in {c,d,f} ; do shred -n 1 /dev/sd$X & done ; wait ; echo -e "\n\nThat's all, folks!"
        This has the advantage of running all the shredding tasks in parallel, which may cut total time considerably (depending on h/w setup). It will take time, and will also get all of the drives quite hot in the process, so mind your cooling 🙂 Make sure you get the letters above right (or you may end up shredding your USB drive, unassigned drives and whatnot). Once this is done, your drives will look unformatted (i.e. no partition table). The above is hacked up, but I see no reason why it shouldn't work.
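     As for erasing the per-drive LUKS key store mentioned in point 1, a minimal sketch could look like this - assuming the array is stopped and the LUKS containers live on partition 1 of each drive (an assumption; check your layout first):
        # Destroy all LUKS keyslots on each drive's encrypted partition.
        # cryptsetup asks for a confirmation per device; without a header backup,
        # the data is unrecoverable afterwards.
        for X in {c,d,f} ; do cryptsetup luksErase /dev/sd${X}1 ; done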
  2. SATA 4TB drives. Connected to SAS/SATA ports of an X10SL7-F.
  3. For a while now I've had most of the SMART Self-Test buttons for each disk drive greyed out. "Download" is available (and works), but the four buttons under it are not. The lower button has the message about the disk needing to be spun up. The problem is that even if the disk is indeed spun up, I get the same behavior. There's an older thread about this here, but it's in a deprecated section so I thought I'd raise it here. The problem might be related to hdparm providing inaccurate status for spun-up drives, like so:
     > smartctl -x /dev/sdh | grep "^Device State:" | sed "s/^.*: *// ; s/ *(.*//"
     Active
     whereas
     > hdparm -C /dev/sdh
     /dev/sdh:
      drive state is:  standby
     Any help would be appreciated. Unraid 6.7.2. Drives support SMART with no issue.
  4. Two simple ways to boot your VM:
     1. Use PlopKexec: boot from its ISO image and have your flash drive available to the VM. The boot code will detect the USB drive and continue booting from it. This adds a few extra seconds to boot time but might be worth it for the simplicity. This is the method I'm using now.
     2. Create a small boot drive on your Unraid VM, format it as FAT, copy the contents of the USB drive to it and then run the "make bootable" bat script from it to make that drive bootable. You will still need to have the flash drive available to the VM. This method shaves a few seconds off boot time but needs a bit of fiddling, and, most importantly, each time Unraid is updated you will need to manually copy the updated files to the boot virtual drive or your system won't boot the next time around (a rough sketch of that copy step follows below).
     Neither of these methods involves a downloaded, pre-cooked VMDK. You just create a VM and provision it with these elements. You will also need to provision it with your array drives, but this is discussed extensively in other posts. Hope this helps.
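     For the post-upgrade copy in method 2, something along these lines should do - the device name (/dev/sdb1) and mount point are assumptions, so adjust them to however the boot virtual drive shows up inside your VM:
        # Mount the FAT boot virtual drive and refresh the boot files from the flash (mounted at /boot)
        mkdir -p /mnt/bootdrv
        mount /dev/sdb1 /mnt/bootdrv
        cp /boot/bz* /mnt/bootdrv/
        umount /mnt/bootdrv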
  5. I'm running Unraid 6.7.x. Is there a reason you are installing an older version?
  6. When looking into this I can see:
     > smartctl -x /dev/sdh | grep "^Device State:" | sed "s/^.*: *// ; s/ *(.*//"
     Active
     whereas
     > hdparm -C /dev/sdh
     /dev/sdh:
      drive state is:  standby
     @bonienl - this may shed light on what's going on. hdparm seems not to report the correct state.
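     To see which drives the two tools disagree on, a quick cross-check loop like this can help (the device letters are just an example - substitute your array's):
        # Print both tools' idea of the power state for each drive
        for X in b c d e f g h ; do
          echo "=== /dev/sd$X ==="
          hdparm -C /dev/sd$X | grep "drive state"
          smartctl -x /dev/sd$X | grep "^Device State:"
        done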
  7. Was this ever resolved? I'm seeing the same phenomenon, on all my array drives (WD SE 4TBs). Clicking "spin up" does spin the disk up - it gets the green ball - but it still reads as "standby" a-la hdparm, and the SMART buttons remain greyed out with "Unavailable - disk must be spun up". The one button that's functional is the Download button, and it works - it downloads a perfectly valid SMART report. Any help would be appreciated.
  8. Happy Unraid day! I've been a camper for like 8 years or so, mostly a very happy one. Came here from the FreeNAS varieties, visited OMV, but stayed here. Cool product, great bunch of people on the forum.
  9. I was typing away a response when your second post came in... So, scratch that one, obsolete 🙂 The errors you see seem to be coming from the preclear plugin, and it seems to have caused a reset on the (v-) USB controller, leading to a virtual eject. In general Unraid does indeed run mostly from RAM, but it depends on the flash drive being present - e.g. it is mounted as /boot for permanent storage, etc.
  10. Very possible, in a variety of ways, depending on what exactly you want to do and on your personal taste. To automate, you can edit the ESXi root crontab, which lives in /var/spool/cron/crontabs/root in ESXi (access it via the ESXi CLI). There, you can add a crontab entry to do the shutdown at your designated time. Essentially, /sbin/shutdown.sh and /sbin/poweroff will do the trick. Now that's not exactly what you are trying to do; you want Unraid to do an orderly shutdown, before ESXi goes down. This can be achieved via several different approaches:
      1. If you have the VMware tools plugin installed in Unraid, and Unraid is part of the autostart VMs set, then /sbin/shutdown.sh will trigger an orderly shutdown as part of the host shutdown.
      2. You can have Unraid auto-shutdown at (say) 1 am, and have ESXi auto-shutdown at (say) 1:10 am. Make sure clocks are synced...
      3. You can have Unraid initiate its own shutdown and the ESXi shutdown (via SSH CLI), simultaneously.
      4. You can use PowerCLI (as @StevenD just wrote while I was typing this) from a Windows host where PowerCLI is installed. You could probably trigger an Unraid shutdown, wait some time for it to complete, then do the stop-vmhost.
      These are just examples, obviously. Hope this helps.
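      As a minimal example of the crontab entry mentioned above (the 01:30 time is arbitrary; note also that on many ESXi versions edits to this file need to be re-applied after a host reboot):
         # in /var/spool/cron/crontabs/root on the ESXi host:
         # shut down the guests, then power the host off, every day at 01:30
         30 1 * * * /sbin/shutdown.sh && /sbin/poweroff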
  11. Which would be your answer right there. The per-process lines in "top" show %CPU in single-core units, so 400% would make sense. The summary line at the top sums it all and expresses it as a percentage of all available logical processors. So the ~80% you see (your process is "niced", so it appears under "ni") is 80% of your total processor capacity. Specifically, the ~50% you see in the "ni" part is in fact your 400% from the per-process line (4 cores out of 8).
  12. This sounds a bit odd - can you post a screenshot of top at the time of the stress?
  13. ... and when resuming, it says "Elapsed time: less than a minute", i.e. it only counts the time elapsed since resumption.
  14. At least in my case it appears to be very real. All clients actively accessing the server's shares freeze for the duration of the "storm", which ranges from 5-12 seconds. The server itself remains accessible over CLI and web GUI, while showing 100% CPU (e.g. using "top"), with most of the usage in "I/O wait". Once the graphic display cools down from red (both CPUs at 100%) to normal (1%-5% on each), they all return to normal operation.
  15. It might; however, I don't even have mover running (and no cache drive configured). Well, you do say that you have high I/O wait. That is also CPU usage - it comes on top of user and system times. I, too, see this happen on all available cores.
  16. This also looks similar to this issue, which I opened a while ago. It was redirected into the "virtualizing" sub-forum because my Unraid runs under ESXi, but my sense was, and still is, that it is unrelated to the virtualization. In my case, Unraid has one docker container (and obviously no VMs). Can you check whether, during the few seconds this is happening, "top" reports high CPU in I/O wait (third line from the top, "wa" percentage)? This might further indicate similarity. I feel we keep seeing reports about recent versions of Unraid getting into a few seconds (5-15) of 100% CPU usage where everything locks up, and then it goes back to normal.
  17. This looks to be a decent write speed (quite good, actually). Note that this test only tests writing to disk. The source of the copy is /dev/zero, which is a kernel device generating a stream of binary zeroes, which are then written to the disk file you specified. To test the read part you could either manually mount something from a different OS on /mnt/test and then copy from it to /tmp/blah, or use SCP to copy directly from a remote file to /tmp/blah. It will not be an extremely accurate test, but it will give you a pretty good idea as to the performance limits.
  18. I'd change the command to use bs=1G count=10. In parallel with that, I'd do a read test from the remote (networked) file into /tmp/blah (i.e. into the RAM disk). Use a reasonable file size (this filesystem is in RAM and you don't want to exhaust it); a few hundred MB up to 1 GB will give you a rough estimate of the incoming stream speed. With these two, you should quickly be able to tell which part of the copy process is the culprit, and zoom in on it.
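      Roughly, the two tests could look like this (the /mnt/user/test path and the source file name are placeholders - substitute your own share and your mounted remote source):
         # Write test: stream 10 x 1 GiB of zeroes from RAM to the array and let dd report throughput
         dd if=/dev/zero of=/mnt/user/test/ddtest.bin bs=1G count=10
         # Read test: pull a large-ish file from the mounted remote source into the RAM-backed /tmp
         dd if=/mnt/test/somefile of=/tmp/blah bs=1M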
  19. Oh, agreed 100%. And I can: once I set the BIOS to boot from the Unraid flash, it will, and will then see all the array drives (that are currently assigned raw via RDM) natively. I will not have all the other VMs currently under this hypervisor, but Unraid will work bare metal. Thanks!
  20. Basically because the controller is shared with the hypervisor. I could move drives around to get around that, but from my vantage point, RDMs have been working for me flawlessly for years and years now - if it ain't broke, you know...
  21. Thank you. Indeed, new config worked as expected and without a hitch.
  22. Happy New Year, folks!
      So I have just upgraded my ESXi host from 5.5 to 6.5d. Array HDDs are physically attached via RDM. It seems like ESXi 6 has a slightly different naming convention for the drives, so what used to be called WDC_WD4000F9YZ-09N20L1_WD-WMCxxxxxxxx is now WDC_WD4000F9YZ-0_WD-WMCxxxxxxxx (basically, the f/w version has been removed from the disk ID - actually sensible). So now unRAID believes the config is stale and all its drives are gone... Oh well. Trying to just "seat" them into their right places causes the UI to say "wrong" on each, and consequently the array won't start.
      I think the solution is to go "New Config", assign the right disks to their right (old) slots as they should have been, without any "preserve current assignment", and restart. But I need some confirmation / assurance that this will do what I expect it to: find the drives in their correct slots and start the array successfully (btw, the drives are encrypted, if that changes anything). Specifically, I don't want to lose any data 🙂
      So: is the above the correct sequence I should go through? Anything else I should be mindful of? Thanks!!
  23. Hmm I don't seem to be able to download this particular version (aka 6.5.0d). Only lets me download an earlier or later one. Oh well.
  24. Yep - one of the reasons I kept it at 5.5 until now 🙂 Plus that it wasn't broke. It's gonna be an interesting challenge to move the whole setup to the new ESXi (several guests in there). Thanks for the suggestion.