SimonHampel

Everything posted by SimonHampel

  1. Thanks - I can confirm everything worked as expected.
  2. I have two Unraid servers. I want to move 6 drives (including the data on them) from one server to the other. All drives on both machines use XFS as the filesystem, and both servers use dual parity.

     My understanding is that if I create a new config on the destination server, assign the drives to empty slots, and then start the array, it will rebuild parity and the data will just appear - the system won't try to reformat those drives in the new system? I would then do a new config on the origin server, unassign the drives, and start the array to do a parity rebuild without those drives present.

     I have an existing share on the new server that is different to the share that the 6 drives use - if I just rename the top-level folder on each of the drives to match the new share name, will that be sufficient to have the data on those drives appear under the new share? (A sketch of that rename step is included after the last post in this list.)

     Summary of steps I think I'll need:

     - take screenshots of all existing array assignments, just in case
     - shut down both servers and physically move all 6 drives from the origin server to the destination server
     - start the destination server (auto start is disabled)
     - create a new config on the destination server preserving current assignments, and assign the new drives to empty slots
     - start the array on the destination server, which will trigger a parity rebuild
     - rename the top-level folders on all new drives to match the share name that will be used on the destination server
     - create a new config on the origin server preserving current assignments, and unassign the old drives
     - start the array on the origin server, which will trigger a parity rebuild

     Anything I've missed? Thanks!
  3. I have a Fractal Design Define 7 XL with 16 drives (including 2 parity) and 3 cache SSDs. Temps are typically in the mid-30s for the drives at the top (heat rises!) - I'm in Australia and the ambient is usually quite high. The following screenshot was taken around 1% into a parity check, with my office at around 24C.

     I'm running a pretty low-powered machine - an AMD Ryzen 3 3200G with onboard graphics (no discrete GPU). I have three 14cm fans mounted at the front of the case and a 12cm fan mounted at the rear. My goal for the build was to minimise noise - this machine sits in my office next to my desk, so I didn't want something that would add to the noise already in the room. I'm very happy with the build.

     I also have an old Antec 1200 case running a 20-drive array in 4 rows of 5-in-3 hot swap drive bays, cooled only by the 92mm fans mounted on the back of each drive bay and the 120mm fan on the top. Unsurprisingly, given how densely the drives are packed in, the drive temperatures on this machine are much higher than in the Define 7 XL (frequently reaching 46-47C under load).

     If this older server starts to have hardware issues, I think I'll rebuild it in another Define 7 XL. It will be annoying having to drop down from 20 drives to 16, and it's a lot more work swapping drives in the Define 7 XL than it is with the hot swap drive bays - but the improved cooling and, more importantly, the quietness of the Define 7 XL will improve things overall, I feel.
  4. Okay - the plot thickens. I'm running an rsync from this server to my file server, backing up some media files, running at around 40MB/s or so (turbo write turned on for the destination).

     While this is running, I'm getting exactly the same behaviour - the drives in the SageTV VM are timing out when I try to access them. The D drive just sits there with an ever-growing disk queue length, while the E drive errors out with "The request could not be performed because of an I/O device error".

     So this is not specifically related to preclear like I first thought. Any suggestions?
  5. Yes, I think it definitely is a bottleneck - I think perhaps the preclear is saturating the bus or something - but I'm not sure what to do about it. Is it the VirtIO driver? Is it the controller? Is it the USB-C hard drive?
  6. Preclear has finished and I can confirm that the SageTV VM is now performing as expected. I was able to record 4 HD shows at once while watching another via my extender - no issues at all. So why is the preclear causing the drives used by the VM to become unresponsive?
  7. I was testing a few more things and can now make the following observations:

     1. Trying to access D:\ in Windows Explorer (recording drive #1) results in 100% disk usage on that drive, with a growing queue length showing in Resource Monitor.
     2. Trying to access E:\ in Windows Explorer (recording drive #2) results in 100% disk usage for a short period before erroring out with the message: "E:\ is not accessible - The request could not be performed because of an I/O device error"

     I'm not sure why the result would be different between these drives. Actual read/write speed seems to be pretty much zero, despite 100% activity.
  8. Long story - sorry, lots of context required.

     I've been using Unraid for many years now with a 20-drive 100TB array working as my main file server for backups. The CPU in that machine has no virtualisation support and no capacity for cache drives, so I have not used VMs (or Dockers) on that server. I've recently built a second Unraid server (glad I bought that dual licence back in the day!) to replace my ageing media server that was running SageTV / Plex / Sonarr / SABnzbd.

     I've successfully migrated Plex, Sonarr and SABnzbd to run in Dockers - that's working well. However, due to a lack of driver support for my old Hauppauge TV tuners, I haven't gone the Docker path for SageTV but have instead converted my old server to a VM. This also allowed me to continue running my old SageTV v7 setup without any changes. I have the tuners passed through to the VM so they can use the Windows drivers.

     My SageTV vdisk is contained on a dedicated SSD (not part of the cache pool) mounted via Unassigned Devices and uses the VirtIO bus. I have two HDDs for recorded TV which aren't in the array - both are passed through to the VM, formatted as NTFS, and also use the VirtIO bus. I'm not sure if this is the best mechanism for passing dedicated drives through, but from what I've read, unless you have a dedicated controller to pass through, VirtIO is the best choice for dedicated drives?

     Anyway - the server has been working well, I've migrated all of my media across, and I have three SSDs in a cache pool. However, in the last few days I've been having difficulty accessing SageTV via any of my clients (HD200 extender / Placeshifter / etc). I can start the SageTV UI and navigate the menus. I can even watch live TV without issues. But as soon as I try to watch a recorded show, it just sits there waiting (spinning circle in SageTV).

     When accessing the VM via TeamViewer, I can see there is nothing really using any CPU and the machine is generally responsive, but as soon as I try to access either of the recording drives via Windows Explorer (just to get a directory listing!), the machine grinds to a halt, with Resource Monitor showing 100% disk utilisation on the recording drive and a growing disk queue length. The vdisk boot drive is fine, so the OS is still running - it's the recording drives which cause the issue.

     Now, after a bit of head scratching (because it has been working fine up until the last couple of days), I've remembered that I'm also running a preclear on a new 8TB drive which is housed in a portable enclosure and connected via USB-C. The preclear is running at around 180MB/s or more (wow - so fast compared to my other Unraid box!) but is still going to take days to complete.

     I'm wondering whether there is some kind of bus saturation thing happening here while the preclear is running? (A rough way to check per-disk saturation is sketched after the last post in this list.) I'm disinclined to stop the preclear right now to check whether that solves the issue, so I'll have to wait a few days to confirm. But either way, this is a problem if running a preclear stops my VM from recording any shows!

     I'm wondering whether it is the VirtIO bus that I'm using for the passed-through recording drives which is the issue? Is there a better setup I could use? Let me know what info you need to help me diagnose the problem here - thanks!
  9. Glad to hear I wasn't being silly by delaying the optional update to 20H2! Right now things are working the way I need them to, so I'm not going to push to the latest versions until I actually need to.
  10. My new laptop has 1909 - I haven't updated to 20H2 yet. I did a rebuild of my old laptop for use by my son and that didn't have any problems accessing Unraid - it's running 2004. Should I take this to mean that the issue has been fixed in 2004? Or is this just another Windows 10 quirk where some machines have issues while others don't?
  11. Just got a brand new laptop running Windows 10 Pro and I can confirm that this registry change still fixes the issue.
  12. Ahh - I just found another post which used notify, but with the full path. I checked and it works! Turns out notify didn't disappear - it was just no longer in the default PATH.

      /usr/local/emhttp/webGui/scripts/notify -i warning -s "rsync web1 failed" -d "/mnt/user/Backup/Logs/rsync/rsync.web1.20190410"
  13. I use cron tasks to execute scripts which use rsync to pull files from remote servers to my Unraid box for backup purposes. This is running natively on Unraid, not via a VM. I would like the system to send me notifications or emails if something goes wrong in my scripts. In previous versions of Unraid I piped messages into mail, and when that was no longer available I used the notify command - but that no longer works either? What's the best way to send notifications or emails from the CLI now? (A sketch of the approach that ended up working is included after the last post in this list.)
  14. Just for future reference, I fixed this by removing my ssh config from /boot/config/ssh and then rebooting the server, at which point unRAID recreated the ssh config needed for me to connect directly using my external client. I then copied my original ssh client config back into a separate directory (I used /boot/config/ssh_client), and from there I copy the necessary files (keys and the known_hosts file) into /root/.ssh at server startup (a sketch of that startup copy is included after the last post in this list). I can now ssh/sftp/rsync into remote machines from my unRAID server and everything seems to be working again.
  15. My issue was that when I first set up unRAID a few years back, I had manually set up ssh to connect to my remote servers for rsync purposes, and so my ssh configuration was stopping my ssh client from connecting to the unRAID server. Will have to do some more research into how to get these to co-exist.
  16. Hmm ... seems I was actually using Telnet to connect previously (how old school!). Will look into actually configuring SSH instead!
  17. I upgraded from 6.3.x to 6.4.1 and now cannot connect to my unRAID server using my SSH client. I can connect using the Terminal window, but prefer to use my SSH client. I am receiving the error: "The remote system refused the connection." Is there a config change for 6.4.x which prevents external clients from connecting?
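
Referenced from post 2 above - a minimal sketch of the rename step on the destination server, run once the array has been started. The share names and disk slot numbers are placeholders; substitute the actual old and new share names and whichever slots the moved drives were assigned to.

    #!/bin/bash
    # Rename the top-level folder on each moved drive so its data shows up
    # under the destination server's existing share.
    # "OldShare"/"NewShare" and the disk numbers are examples only.
    for disk in disk17 disk18 disk19 disk20 disk21 disk22; do
        if [ -d "/mnt/$disk/OldShare" ]; then
            mv "/mnt/$disk/OldShare" "/mnt/$disk/NewShare"
        fi
    done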
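
Referenced from post 8 above - a rough, dependency-free way to see whether any particular disk (for example the USB preclear target or the passed-through recording drives) is saturated while the preclear runs. This is only a generic Linux diagnostic sketch based on /proc/diskstats, not anything specific to Unraid or preclear.

    #!/bin/bash
    # Column 13 of /proc/diskstats is the time (ms) each device has spent doing I/O.
    # Sampling it twice, one second apart, gives "ms busy per second" per disk:
    # a delta near 1000 means that device is effectively saturated.
    grep -E ' sd[a-z]+ ' /proc/diskstats | awk '{print $3, $13}' > /tmp/diskstats.1
    sleep 1
    grep -E ' sd[a-z]+ ' /proc/diskstats | awk '{print $3, $13}' > /tmp/diskstats.2
    paste /tmp/diskstats.1 /tmp/diskstats.2 | awk '{printf "%-6s %4d ms busy/s\n", $1, $4 - $2}'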
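
Referenced from posts 12 and 13 above - a minimal sketch of a pull-backup script that raises an Unraid notification when rsync fails, using the full notify path shown in post 12. The host name, source/destination paths and log location are placeholder values for illustration.

    #!/bin/bash
    # Pull files from a remote server and notify on failure.
    # "web1" and all paths below are example values - adjust to suit.
    LOG="/mnt/user/Backup/Logs/rsync/rsync.web1.$(date +%Y%m%d)"

    rsync -a --delete web1:/var/www/ /mnt/user/Backup/web1/ > "$LOG" 2>&1
    if [ $? -ne 0 ]; then
        /usr/local/emhttp/webGui/scripts/notify \
            -i warning -s "rsync web1 failed" -d "$LOG"
    fi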
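
Referenced from post 14 above - a sketch of the startup copy step, assuming it is run from the Unraid go file (/boot/config/go). The key filenames are placeholders; copy whichever keys and known_hosts file are actually kept in /boot/config/ssh_client.

    # Added near the end of /boot/config/go so it runs on every boot.
    # Restores the root user's ssh client files from the flash drive.
    mkdir -p /root/.ssh
    cp /boot/config/ssh_client/known_hosts /root/.ssh/
    cp /boot/config/ssh_client/id_rsa /boot/config/ssh_client/id_rsa.pub /root/.ssh/  # example key names
    chmod 700 /root/.ssh
    chmod 600 /root/.ssh/id_rsa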