Everything posted by doron

  1. I'm running Unraid 6.7.x. Is there a reason you're installing an older version?
  2. When looking into this I can see:

         > smartctl -x /dev/sdh | grep "^Device State:" | sed "s/^.*: *// ; s/ *(.*//"
         Active

     Whereas:

         > hdparm -C /dev/sdh
         /dev/sdh:
          drive state is:  standby

     @bonienl - this may shed light on what's going on. hdparm doesn't seem to report the correct state.
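     In case it helps to check this across the board, a quick loop along these lines (untested sketch; adjust the /dev/sd[b-h] range to your drives) prints both readings side by side:

         #!/bin/bash
         # Compare the power state reported by smartctl vs. hdparm for each drive.
         # Caveat: smartctl -x itself may spin up a sleeping drive.
         for dev in /dev/sd[b-h]; do
             smart_state=$(smartctl -x "$dev" | grep "^Device State:" | sed "s/^.*: *// ; s/ *(.*//")
             hdparm_state=$(hdparm -C "$dev" | grep "drive state is" | sed "s/^.*: *//")
             echo "$dev  smartctl: ${smart_state:-n/a}  hdparm: ${hdparm_state:-n/a}"
         done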
  3. Was this ever resolved? I'm seeing the same phenomenon, on all my array drives (WD SE 4TBs). Clicking "spin up" does spin the disk up - it moves to "standby" (à la hdparm) and a green ball - but the SMART buttons are still greyed out with "Unavailable - disk must be spun up". The one button that is functional is the Download button, and it works - it downloads a perfectly valid SMART report. Any help would be appreciated.
  4. Happy Unraid day! I've been a camper for like 8 years or so, mostly a very happy one. Came here from the FreeNAS varieties, visited OMV, but stayed here. Cool product, great bunch of people on the forum.
  5. I was typing away a response when your second post came in... so scratch that one, obsolete 🙂 The errors you see seem to be coming from the preclear plugin, which appears to have caused a reset on the (v-)USB controller, leading to a virtual eject. In general, Unraid indeed runs mostly from RAM, but it does depend on the flash drive being present - e.g. it is mounted as /boot for permanent storage, etc.
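     A quick sanity check that the flash is still mounted and readable (stock mount point assumed):

         df -h /boot              # the flash device should show as mounted on /boot
         ls /boot/config | head   # config files should still be listable

     If those fail, the stick has effectively dropped off and will need to be re-mounted (or the server rebooted).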
  6. Very possible, in a variety of ways, depending on what exactly you want to do and on your personal taste.

     To automate, you can edit the ESXi root crontab, which lives in /var/spool/cron/crontabs/root (access it via the ESXi CLI). There you can add a crontab entry to do the shutdown at your designated time; essentially, /sbin/shutdown.sh and /sbin/poweroff will do the trick.

     Now, that's not exactly what you are trying to do; you want Unraid to do an orderly shutdown before ESXi goes down. This can be achieved via several different approaches:

        1. If you have the VMware Tools plugin installed in Unraid, and Unraid is part of the autostart VM set, then /sbin/shutdown.sh will trigger an orderly shutdown as part of the host shutdown.
        2. You can have Unraid auto-shutdown at (say) 1 am, and have ESXi auto-shutdown at (say) 1:10 am. Make sure the clocks are synced...
        3. You can have Unraid initiate both its own shutdown and the ESXi shutdown (via the SSH CLI).
        4. You can use PowerCLI (as @StevenD just wrote while I was typing this) from a Windows host where PowerCLI is installed. You could trigger an Unraid shutdown, wait some time for it to complete, then do the Stop-VMHost.

     These are just examples, obviously. Hope this helps.
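     To make the crontab part concrete, the entry would look something like this (sketch only; adjust the time to your window):

         # /var/spool/cron/crontabs/root on the ESXi host:
         # at 01:10 every night, stop the autostart VMs in order, then power the host off
         #min hour day month weekday command
         10   1    *   *     *       /sbin/shutdown.sh && /sbin/poweroff

     Keep in mind that, as far as I recall, edits to this file don't survive an ESXi reboot unless they are re-applied from /etc/rc.local.d/local.sh.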
  7. Which would be your answer right there. The per-process lines in "top" show %CPU in single-core units, so 400% would make sense. The top line sums it all and expresses it as a share of all available logical processors. So the ~80% you see (your process is "niced", so it appears under ni) is 80% of your total processing capacity. Specifically, the ~50% you see in the "ni" part is in fact your 400% in the per-process line (4 out of 8).
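     To put numbers on it (assuming 8 logical CPUs, as in your case):

         400% (per-process, single-core units) / (8 x 100%) = 50% of total capacity

     which is exactly the ~50% that shows up under "ni" in the summary line.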
  8. This sounds a bit odd - can you post a screenshot of top at the time of the stress?
  9. ... and when resuming, it says "Elapsed time: less than a minute", i.e. it counts the elapsed time only since resumption.
  10. At least in my case it appears to be very real. All clients actively accessing the server's shares freeze for the duration of the "storm", which ranges from 5-12 seconds. The server itself remains accessible over CLI and web GUI, while showing 100% CPU (e.g. using "top"), with most of the usage in "I/O wait". Once the graphic display cools down from red (both CPUs at 100%) to normal (1%-5% on each), they all return to normal operation.
  11. It might; however, I don't even have the mover running (and no cache drive configured). Well, you do say that you have high I/O wait. That is also counted as CPU usage - it comes on top of the user and system time. I, too, see this happen on all available cores.
  12. This also looks similar to this issue, which I opened a while ago. It was redirected into the "virtualizing" sub-forum because my Unraid runs under ESXi, but my sense was, and still is, that it is unrelated to the virtualization. In my case, Unraid has one docker container (and obviously no VMs). Can you check whether, during the few seconds this is happening, "top" reports high CPU in I/O wait (third line from the top, the "wa" percentage)? This might further indicate similarity. I feel we keep seeing reports of recent versions of Unraid getting into a few seconds (5-15) of 100% CPU usage where everything locks up, and then it goes back to normal.
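      If it helps to compare notes, something like this (rough sketch; batch mode so it can run unattended) logs top's CPU summary line once a second, so the "wa" value can be caught during one of these episodes:

          # Append a timestamped copy of top's %Cpu(s) line every second
          while true; do
              echo "$(date '+%H:%M:%S')  $(top -bn1 | grep '%Cpu')"
              sleep 1
          done >> /tmp/cpu_wa.log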
  13. This looks to be a decent write speed (quite good, actually). In fact, that test measures writes to the disk only: the source of the copy is /dev/zero, which is a kernel device generating a stream of binary zeroes that are then written to the disk file you specified. To test the read part, you could either manually mount something from a different OS on /mnt/test and then copy from it to /tmp/blah, or use SCP to copy directly from a remote file to /tmp/blah. It will not be an extremely accurate test, but it will give you a pretty good idea as to the performance limits.
  14. I'd change the command to use bs=1G count=10. In parallel, I'd do a read test from the remote (networked) file into /tmp/blah (i.e. into the RAM disk). Use a reasonable file size (this filesystem lives in RAM and you don't want to exhaust it); a few hundred MB up to 1 GB will give you a rough estimate of the incoming stream speed. With these two, you should quickly be able to tell which part of the copy process is the culprit, and zoom in on it.
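      Concretely, the two tests would look something like this (sketch only; the disk path, remote host and file names are placeholders):

          # Write test: 10 GB of zeroes straight to a file on the array disk
          dd if=/dev/zero of=/mnt/disk1/testfile bs=1G count=10

          # Read/ingest test: pull ~1 GB from the remote machine into RAM-backed /tmp
          # (scp prints the transfer rate when it finishes)
          scp user@remotehost:/path/to/largefile /tmp/blah
          rm /tmp/blah   # free the RAM afterwards

      Whichever of the two comes in well below the other is where to dig further.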
  15. Oh, agreed 100%. And I can: once I set the BIOS to boot from the Unraid flash, it will, and will then see all the array drives (that are currently assigned raw via RDM) natively. I will not have all the other VMs currently under this hypervisor, but Unraid will work bare metal. Thanks!
  16. Basically because the controller is shared with the hypervisor. I could move drives around to get around that, but from my vantage point, RDMs have been working for me flawlessly for years and years now - if it ain't broke, you know...
  17. Thank you. Indeed, new config worked as expected and without a hitch.
  18. Happy New Year, folks! So I have just upgraded my ESXi host from 5.5 to 6.5d. Array HDDs are physically attached via RDM. It seems ESXi 6 has a slightly different naming convention for the drives, so what used to be called WDC_WD4000F9YZ-09N20L1_WD-WMCxxxxxxxx is now WDC_WD4000F9YZ-0_WD-WMCxxxxxxxx (basically, the f/w version has been removed from the disk ID - actually sensible). So now unRAID believes the config is stale and all its drives are gone... Oh well.

      Trying to just "seat" them into their right places causes the UI to say "wrong" on each, and consequently the array won't start. I think the solution is to go "New Config", assign the right disks to their right (old) slots as they should have been, without any "preserve current assignment", and restart. But I need some confirmation / assurance that this will do what I expect it to: find the drives in their correct slots and start the array successfully (btw, the drives are encrypted, if that changes anything). Specifically, I don't want to lose any data 🙂

      So: is the above the correct sequence I should go through? Anything else I should be mindful of? Thanks!!
  19. Hmm, I don't seem to be able to download this particular version (aka 6.5.0d). It only lets me download an earlier or later one. Oh well.
  20. Yep - one of the reasons I kept it at 5.5 until now 🙂 Plus, it wasn't broke. It's gonna be an interesting challenge to move the whole setup to the new ESXi (several guests in there). Thanks for the suggestion.
  21. ESXi 5.5. Drives are SATA, connected to an onboard SAS controller on the X10SL7. They're passed through to the VM (done a while ago, 'twas unRAID 5...) by creating an RDM VMDK for each physical drive and attaching it to the VM.
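      For reference, each RDM mapping file was created on the ESXi host with something along these lines (paths and device name are placeholders; -z creates a physical-compatibility RDM, -r would create a virtual one):

          # ESXi shell: create an RDM pointer VMDK for one physical disk
          vmkfstools -z /vmfs/devices/disks/t10.ATA_____WDC_WD4000F9YZ... \
                     /vmfs/volumes/datastore1/unraid/disk1-rdm.vmdk

      The resulting .vmdk is then added to the unRAID VM as an existing disk.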
  22. I've been here long enough to remember being able to ask the community for assistance and insights, even for configurations that are not directly supported by Limetech. I should know, I've been there, on both ends. I guess things have changed. Which is fine, your forum - your rules. Thanks for clarifying.
  23. @trurl, I'm not sure Virtualizing Unraid is the "proper forum". The fact that my server runs as a VM may or may not be related to the problem at hand. Analogously, I'm sure that the fact that someone's drives are WD wouldn't make you move her posts to a WD-specific forum...
  24. Folks, recently I've been seeing random "freezes" in my Unraid server's performance. They look like momentary "choking": streaming from the server halts for a few seconds, then it resumes and the problem goes away, until next time. It finally became annoying enough to look into.

      It seems as if from time to time the server goes to 100% CPU for no apparent reason (no real load, no VMs, a couple of dockers that do backup work in the wee hours, no RAM shortage), for several seconds, and then it goes back to normal (which is like 1%-6%). I seem to be mapping these spikes to spin-up of HDDs. It appears as if the system waits for the drive to spin up in a busy loop. I mean, this does not make a lot of sense, but it looks that way.

      Now: it does not happen every time a drive needs to spin up. In fact, when the disks spin up for an SMB or NFS request - no problem (and CPU stays at single digits). It might be happening when a docker spins up a drive.

      My environment: Unraid 6.6.5 running as a VM under ESXi. HDDs passed through. Two parity drives, three data drives, all 4TB. Array is LUKS encrypted.

      Thoughts, anyone? Thanks!
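      In case anyone wants to try to correlate this on their own box, here's a rough logging loop I intend to run (untested sketch; adjust the /dev/sd[b-f] range to your drives):

          #!/bin/bash
          # Every 2 seconds, log top's CPU summary line plus each drive's hdparm power
          # state, so the CPU spikes can later be lined up against drive spin-ups.
          while true; do
              {
                  echo "=== $(date '+%F %T')"
                  top -bn1 | grep '%Cpu'
                  for dev in /dev/sd[b-f]; do
                      echo "$dev: $(hdparm -C "$dev" 2>/dev/null | grep 'drive state')"
                  done
              } >> /tmp/spinup_cpu.log
              sleep 2
          done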