Posts posted by SimonHampel

  1. I have two Unraid servers. I want to move 6 drives (including the data on them) from one server to the other. All drives on both machines use XFS as the filesystem. Both servers use dual parity.

     

    My understanding is that if I create a new config on the destination server and assign the drives to empty slots, then start the array - it will rebuild parity and the data will just appear - the system won't try to reformat those drives in the new system?

     

    I would then do a new config on the origin server and unassign the drives, then start the array to do a parity rebuild without those drives present.

     

    I have an existing share on the new server that is different to the share that the 6 drives use - if I just rename the top level folder on each of the drives to match the new share name, will that be sufficient to have the data on those drives appear under the new share?
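
    Something like this is what I have in mind, run from the console on the destination server once the array is up - the disk slots and share names below are placeholders for my actual ones:

        # rename the top-level folder on each moved disk so its data
        # appears under the destination server's share
        for d in /mnt/disk7 /mnt/disk8 /mnt/disk9 /mnt/disk10 /mnt/disk11 /mnt/disk12; do
            mv "$d/oldshare" "$d/newshare"
        done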

     

    Summary of steps I think I'll need:

     

    1. take screenshots of all existing array assignments just in case
    2. shut down both servers and physically move all 6 drives from origin to destination servers
    3. start destination server (auto start is disabled)
    4. create new config on destination server preserving current assignments, assign new drives to empty slots
    5. start array on destination server, which will trigger a parity rebuild
    6. rename top level folders on all new drives to match share name that will be used on destination server
    7. create new config on the origin server preserving current assignments, unassign old drives
    8. start array on origin server, which will trigger a parity rebuild

     

    Anything I've missed?

     

    Thanks!

  2. I have a Fractal Design Define 7 XL with 16 drives (including 2 parity) and 3 cache SSDs.

     

    Temps are typically in the mid-30s (°C) for the drives at the top (heat rises!) - I'm in Australia and ambient temperatures are usually quite high.

     

    The following screenshot was taken around 1% into a parity check and it's around 24C inside my office.

     

    [screenshot: drive temperatures during the parity check]

     

    I'm running a pretty low powered machine - AMD Ryzen 3 3200G with onboard graphics (no discrete GPU). I have three 14cm fans mounted at the front of the case and a 12cm fan mounted at the rear.

     

    My goal for the build was to minimise noise - this machine sits in my office next to my desk, so I didn't want something that would add to the noise already in the room. I'm very happy with the build.

     

    I also have an old Antec 1200 case running a 20 drive array in 4 rows of 5-in-3 hot swap drive bays - cooled only by the 92mm fans mounted on the back of each drive bay and the 120mm fan on the top. Unsurprisingly, given how densely the drives are packed in, the drive temperatures on this machine are much higher than in the Define 7 XL (frequently reaching 46-47C under load).

     

    If this older server starts to have hardware issues, I think I'll rebuild it in another Define 7 XL. It will be annoying having to drop down from 20 drives to 16, and it's a lot more work swapping drives in the Define 7 XL than it is with the hot swap drive bays - but the improved cooling and, more importantly, the quietness of the Define 7 XL will improve things overall, I feel.

    • Like 2
  3. Okay - the plot thickens.

     

    I'm running an rsync from this server to my file server, backing up some media files - it's running at around 40MB/s (turbo write is turned on for the destination).
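
    For reference, the command is along these lines (the paths here are examples rather than my exact ones):

        # back up media files from this server to the file server over SSH
        rsync -av --progress /mnt/user/Media/ root@fileserver:/mnt/user/Backups/Media/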

     

    While this is running - I'm getting exactly the same behaviour - the drives in the SageTV VM are timing out when I try to access them.

     

    The D drive just sits there with an ever-growing disk queue length, while the E drive errors out with "The request could not be performed because of an I/O device error".

     

    So this is not specifically related to the preclear like I first thought.

     

    Any suggestions?

  4. I was testing a few more things and can now make the following observations:

     

    1. trying to access D:\ in windows explorer (recording drive #1) results in 100% disk usage on that drive with a growing queue length showing in resource monitor

    2. trying to access E:\ in windows explorer (recording drive #2) results in 100% disk usage for a short period before erroring out with the message: "E:\ is not accessible - The request could not be performed because of an I/O device error"

     

    I'm not sure why the result would be different between these drives.

     

    Actual read/write speed seems to be pretty much zero, despite 100% activity.

  5. Long story - sorry, lots of context required. I've been using Unraid for many years now with a 20 drive 100TB array working as my main file server for backups. The CPU in that machine has no virtualisation support and no capacity for cache drives, so I have not used VMs (or Dockers) on that server.

     

    I've recently built a second Unraid server (glad I bought that dual license back in the day!!) to replace my ageing media server that was running SageTV / Plex / Sonarr / SABnzbd.

     

    I've successfully migrated Plex, Sonarr and SABnzbd to run in Docker containers - that's working well.

     

    However, due to a lack of driver support for my old Hauppauge TV tuners, I haven't gone the docker path for SageTV but instead have converted my old server to a VM. This also allowed me to continue running my old SageTV v7 setup without any changes. I have the tuners passed through to the VM so they can use the Windows drivers.

     

    So my SageTV vdisk is contained on a dedicated SSD (not part of the cache pool) mounted via Unassigned Devices and uses the VirtIO bus.

     

    I have two HDDs for recorded TV which aren't in the array - both of which are passed through to the VM and are formatted using NTFS. They both use the VirtIO bus.

     

    I'm not sure if this is the best mechanism for passing dedicated drives through - but from what I've read, unless you have a dedicated controller to pass through, then VirtIO is the best choice for dedicated drives?
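
    For context, I believe each recording drive is defined in the VM's XML along these lines (the by-id path here is an example - the real entries use each drive's actual serial):

        <disk type='block' device='disk'>
          <driver name='qemu' type='raw' cache='none'/>
          <source dev='/dev/disk/by-id/ata-EXAMPLE_SERIAL'/>
          <target dev='vdb' bus='virtio'/>
        </disk>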

     

    Anyway - the server has been working well, I've migrated all of my media across and I have three SSDs in a cache pool.

     

    However, in the last few days I've been having difficulty accessing SageTV via any of my clients (HD200 extender / Placeshifter / etc). 

     

    I can start the SageTV UI and navigate the menus. I can even watch live TV without issues. But as soon as I try to watch a recorded show, it just sits there waiting (spinning circle in SageTV).

     

    When accessing the VM via Teamviewer, I can see there is nothing really using any CPU and the machine is generally responsive, but as soon as I try to access either of the recording drives via Windows Explorer (just to get a directory listing!), the machine grinds to a halt with Resource Monitor showing 100% disk utilisation on the recording drive and a growing disk queue length.

     

    The vdisk boot drive is fine, so the OS is still running - it's the recording drives which cause the issue.

     

    Now after a bit of head scratching (because it has been working fine up until the last couple of days), I've remembered that I'm also running a preclear on a new 8TB drive which is housed in a portable enclosure and connected via USB-C.

     

    The preclear is running at around 180MB/s or more (wow - so fast compared to my other Unraid box!!) but is still going to take days to complete.

     

    I'm wondering whether there is some kind of bus saturation thing happening here while the preclear is running?
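
    In case it helps with diagnosis, I can watch per-device utilisation from the Unraid console while the preclear runs - something like this (the sdX names are examples; I'd need to check which letters the recording drives and the USB enclosure were assigned):

        # report extended per-device I/O stats every 2 seconds
        iostat -x sdb sdc sdd 2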

     

    I'm disinclined to stop the preclear right now to check whether that solves the issue - so I'll have to wait a few days to confirm. But either way, this is a problem if running a preclear stops my VM from recording any shows!

     

    I'm wondering whether it is the VirtIO bus that I'm using for the pass through recording drives which is the issue? Is there a better setup I could use?

     

    Let me know what info you need to help me diagnose the problem here - thanks!

  6. 5 minutes ago, Frank1940 said:

    Right now, some folks are having a real issue with 20H2 and SMB.  I don't think it is everybody but it is a fair percentage.  That was the reason for the question.  It would be nice to hear from some folks who are using 20H2 and not having an issue with SMB.  I have locked myself onto 2004 for the present...

     

    Glad to hear I wasn't being silly by delaying the optional update to 20H2!

     

    Right now things are working the way I need them, so I'm not going to push to the latest versions until I actually need to.

  7. On 1/27/2021 at 12:01 AM, Frank1940 said:

     

    Type   winver   and tell us which version of Windows 10 Pro you have ---  2004 or 20H2

     

    My new laptop has 1909 - I haven't updated to 20H2 yet.

     

    I did a rebuild of my old laptop for use by my son and that didn't have any problems accessing Unraid - it's running 2004

     

    Should I take this to mean that the issue has been fixed in 2004? Or is this just another Windows 10 quirk where some machines have issues while others don't?

  8. On 9/25/2018 at 3:18 PM, jkBuckethead said:

    I've been down some crazy rabbit holes with windows before, but this one really takes the cake.  A little googling, and you quickly see that tons and tons of people have experienced this particular error.   There are dozens upon dozens of potential solutions, ranging from simple to extremely complicated and everything in between.  Reading posts of people's results couldn't be more random.  For every person that is helped by a particular solution, there are twenty people for whom it didn't work.  I myself had tried about a dozen of the best sure-fire fixes without any success.

     

    I really didn't have much hope, but I took a look at the post linked above.  The thread started in August of 2015.  One common thread in error 0x80070035 posts is the 1803 Windows 10 update, so I decided to jump ahead to the end of the thread.  Lo and behold, on page 5, the first post I read struck a chord for some reason.  Even though I was quite tired of trying random things without success, I decided to give this registry edit a try.  As soon as I added the key below I was able to access the Unraid server.  I didn't even have to reboot.  HALLELUJAH!!!!

     

    Try: (Solution)

    https://www.schkerke.com/wps/2015/06/windows-10-unable-to-connect-to-samba-shares/

    Basically the solution follows, but you'll need to use regedit:

    add the new value HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters\AllowInsecureGuestAuth, set to 1

    It's interesting that one of my other computers that works doesn't have this key, but it does have "AuditSmb1Access" set to 0, which this computer doesn't have.

     

    I checked one of my Windows 10 Home machines, and like the post above it does not have the AllowInsecureGuestAuth key, but does have the AuditSmb1Access key set to 0.  My Windows 10 Pro machine, the one that could not access my Unraid server, had AllowInsecureGuestAuth set to 0.  Setting this to 1 appears to have fixed my problem.

     

    I'm not certain, but I suspect the different keys could be linked to one being Home and the other Pro.  Again I'm just guessing, but the name suggests that access was blocked because the share lacked a password.  I guess it's a security thing, but it's kind of an unexpected default setting.  I wonder what GUI setting this is associated with.  I don't recall ever seeing a windows setting to block access to open servers.  I don't even want to test and see how much frustration I could have saved myself if I had simply secured the share and set a password from the start.

     

     

    Just got a brand new laptop running Windows 10 Pro and I can confirm that this registry change still fixes the issue.
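
    For anyone finding this later: the equivalent change from an elevated command prompt (rather than clicking through regedit) should be the following, which creates the DWORD value described above and sets it to 1:

        reg add HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters /v AllowInsecureGuestAuth /t REG_DWORD /d 1 /f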
