
doron

Members

  • Content Count: 177
  • Joined
  • Last visited
  • Days Won: 1

doron last won the day on June 4 2018

doron had the most liked content!

Community Reputation: 8 Neutral

1 Follower

About doron

  • Rank: Advanced Member
  • Gender: Undisclosed


  1. A couple of thoughts:
     1. If by any chance your array was encrypted, then you can safely assume the drives are cryptographically wiped once you get rid of the master key, even more so if you also erase the per-drive LUKS key store on each drive (a momentary operation). Let me know if this is the case and I can help with doing the latter (see the LUKS sketch after this list).
     2. Since you're talking about completely decommissioning your system, we can take some liberties with it. Assuming the array is unmounted and stopped(!), collect all the X's in /dev/sdX of the drives you want to erase. Assuming, for example, that those are /dev/sdc, /dev/sdd and /dev/sdf, you can then do:
        for X in {c,d,f} ; do shred -n 1 /dev/sd$X & done ; wait ; echo -e "\n\nThat's all, folks!"
        This has the advantage of running all the shredding tasks in parallel, which may cut total time considerably (depending on the h/w setup). It will take time, and will also get all of the drives quite hot in the process, so mind your cooling 🙂 Make sure you get the letters above right (or you may end up shredding your USB drive, unassigned drives and whatnot). Once this is done, your drives will look unformatted (i.e. no partition table). The above is hacked up, but I see no reason why it shouldn't work.
  2. SATA 4TB drives. Connected to SAS/SATA ports of an X10SL7-F.
  3. For a while now I've had most of the SMART Self-Test buttons for each disk drive greyed out. "Download" is available (and works), but the four buttons under it are not. The lower button shows the message about the disk needing to be spun up. The problem is that even if the disk is indeed spun up, I get the same behavior. There's an older thread about this here, but it's in a deprecated section so I thought I'd raise it here. The problem might be related to hdparm providing an inaccurate status for spun-up drives, like so:
     > smartctl -x /dev/sdh | grep "^Device State:" | sed "s/^.*: *// ; s/ *(.*//"
     Active
     whereas
     > hdparm -C /dev/sdh
     /dev/sdh:
      drive state is: standby
     (see the comparison sketch after this list). Any help would be appreciated. Unraid 6.7.2. The drives support SMART with no issue.
  4. Two simple ways to boot your VM:
     1. Use PlopKexec: boot from its ISO image and have your flash drive available to the VM. The boot code will detect the USB drive and continue the boot from it. This adds a few extra seconds to the boot time but might be worth it for the simplicity. This is the method I'm using now.
     2. Create a small boot drive on your Unraid VM, format it as FAT, copy the contents of the USB flash drive to it, and then run the "make bootable" bat script from it to make that drive bootable. You will still need to have the flash drive available to the VM. This method shaves a few seconds off the boot time but needs a bit of fiddling, and, most importantly, each time Unraid is updated you will need to manually copy the updated files to the virtual boot drive (see the copy sketch after this list) or your system won't boot the next time around.
     Neither of these methods involves a downloaded, pre-cooked VMDK. You just create a VM and provision it with these elements. You will also need to provision it with your array drives, but this is discussed extensively in other posts. Hope this helps.
  5. I'm running Unraid 6.7.x. Is there a reason you are installing an older version?
  6. When looking into this I can see:
     > smartctl -x /dev/sdh | grep "^Device State:" | sed "s/^.*: *// ; s/ *(.*//"
     Active
     whereas
     > hdparm -C /dev/sdh
     /dev/sdh:
      drive state is: standby
     @bonienl - this may shed light on what's going on. hdparm seems not to report the correct state.
  7. Was this ever resolved? I'm seeing the same phenomenon on all my array drives (WD SE 4TBs). Clicking "spin up" does spin the disk up (the drive shows a green ball, while hdparm still reports "standby"), but the SMART buttons are still greyed out with "Unavailable - disk must be spun up". The one button that is functional is the Download button, and it works: it downloads a perfectly valid SMART report. Any help would be appreciated.
  8. Happy Unraid day! I've been a camper for like 8 years or so, mostly a very happy one. Came here from the FreeNAS varieties, visited OMV, but stayed here. Cool product, great bunch of people on the forum.
  9. I was typing away a response when your second post came in... So, scratch that one, obsolete 🙂 The errors you see seem to be coming from the preclear plugin and it seems to have caused a reset on the (v-) USB controller, leading to a virtual eject. In general Unraid indeed runs mostly from RAM but it is dependent on the flash drive to be there - e.g. it is mounted as /boot for permanent storage, etc.
  10. Very possible, in a variety of ways, depending on what exactly you want to do and on your personal taste. To automate, you can edit the ESXi root crontab, which lives in /var/spool/cron/crontabs/root in ESXi (access it via the ESXi CLI). There, you can add a crontab entry to do the shutdown at your designated time; essentially, /sbin/shutdown.sh and /sbin/poweroff will do the trick (see the crontab sketch after this list). Now, that's not exactly what you are trying to do; you want Unraid to do an orderly shutdown before ESXi goes down. This can be achieved via several different approaches:
      1. If you have the VMware Tools plugin installed in Unraid, and Unraid is part of the autostart VMs set, then /sbin/shutdown.sh will trigger an orderly shutdown as part of the host shutdown.
      2. You can have Unraid auto-shutdown at (say) 1 am, and have ESXi auto-shutdown at (say) 1:10 am. Make sure the clocks are synced...
      3. You can have Unraid initiate its own shutdown and the ESXi shutdown (via the SSH CLI) simultaneously.
      4. You can use PowerCLI (as @StevenD just wrote while I was typing this) from a Windows host where PowerCLI is installed. You could probably trigger an Unraid shutdown, wait some time for it to complete, then do the Stop-VMHost.
      These are just examples, obviously. Hope this helps.
  11. Which would be your answer right there. The per-process lines in "top" show %CPU in single-core units, so 400% makes sense. The summary line at the top adds it all up and normalizes it against all available logical processors. So the ~80% you see there is 80% of your total processor capacity (your process is "niced", so it appears under "ni"). Specifically, the ~50% you see in the "ni" field is in fact the 400% from the per-process line: 400% of one core spread over 8 logical processors is 400/8 = 50%.
  12. This sounds a bit odd - can you post a screenshot of top at the time of the stress?
  13. ... and when resuming, it says "Elapsed time: less than a minute", i.e. it counts the elapsed time only from the point of resumption.
  14. At least in my case it appears to be very real. All clients actively accessing the server's shares freeze for the duration of the "storm", which lasts anywhere from 5 to 12 seconds. The server itself remains accessible over the CLI and web GUI, while showing 100% CPU (e.g. in "top"), with most of the usage in "I/O wait" (see the iowait sketch after this list). Once the graphic display cools down from red (both CPUs at 100%) to normal (1%-5% on each), the clients all return to normal operation.
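
A note on the LUKS erase mentioned in item 1 - a minimal sketch, assuming the array drives are LUKS-encrypted and that their encrypted data partitions are /dev/sdc1, /dev/sdd1 and /dev/sdf1 (placeholders; substitute your own). cryptsetup luksErase destroys every keyslot in the LUKS header, so without a header backup the data becomes unrecoverable in seconds:

    # Array must be stopped; the partition names below are examples only.
    # cryptsetup asks for confirmation before wiping each header.
    for X in c d f ; do
      cryptsetup luksErase /dev/sd${X}1
    done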
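
For the spin-state discrepancy in items 3, 6 and 7, a small comparison sketch that prints both tools' view of each drive side by side; the drive letters are examples, and the two commands are the same ones quoted above:

    # Compare smartctl's and hdparm's idea of each drive's power state.
    for X in b c d e ; do
      echo "=== /dev/sd$X ==="
      smartctl -x /dev/sd$X | grep "^Device State:"
      hdparm -C /dev/sd$X
    done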
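
For the manual copy step in item 4 (second method), a rough sketch, assuming the VM's FAT boot drive is mounted at /mnt/vmboot (a hypothetical mount point) and that, as usual, the live flash is mounted at /boot; an Unraid upgrade mainly replaces the bz* images:

    # After upgrading Unraid, refresh the virtual boot drive from the flash.
    # /mnt/vmboot is a placeholder for wherever the FAT boot drive is mounted.
    cp /boot/bz* /mnt/vmboot/
    sync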
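
A sketch of the crontab entry from item 10, following the busybox cron format used by ESXi and the 1:10 am example; the time is illustrative, while /sbin/shutdown.sh and /sbin/poweroff are the paths named above:

    # Append to /var/spool/cron/crontabs/root on the ESXi host:
    # min hour day month weekday  command
    10   1    *   *     *         /sbin/shutdown.sh && /sbin/poweroff

Keep in mind that on many ESXi versions crontab edits do not persist across host reboots, so you may need to re-apply the entry (e.g. from a startup script) after a reboot or an upgrade.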
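
To put a number on the "storm" in item 14, a small sketch that logs the CPU summary line (including the "wa", i.e. I/O wait, figure) once a second; it assumes only the stock top command:

    # Print a timestamped CPU summary line every second; Ctrl-C to stop.
    while true ; do
      echo -n "$(date '+%H:%M:%S')  "
      top -b -n 1 | grep "Cpu(s)"
      sleep 1
    done

Running this while a freeze hits should show the "wa" percentage spiking together with the red CPU display.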