KptnKMan

Members
  • Posts

    279
  • Joined

  • Last visited

Converted

  • Gender
    Male

KptnKMan's Achievements

Contributor (5/14)

43

Reputation

  1. Thanks for the responses guys, I've looked into things and did another full review... but I still think SOMETHING was up. The reason I say this is that I know my array is capable of writes above 70MB/s, and the very low write speeds are what alarmed me. It's visible on the graphs that there is a lot of reading at 200MB/s and very little writing at a low ~10-15MB/s. I also came to the same educated guess that it's caching, but the numbers still didn't sit right. I left it running all night, and I think the same thing happened: only a relatively small amount was transferred.

    Today, I saw my ISP router had an issue overnight (coincidence, no idea), and decided I would do a full restart of the entire network and both Unraid servers (I have not done that in years at this point). So I cold started both Unraid servers, my ISP router, the pfSense router, all wifi APs and all switches. After that I waited for everything to settle and started the transfer again. I also turned off Docker and VMs on both servers to make sure I'm seeing relatively real numbers. This is now what I'm seeing on UNRAID2:

    I think this is more in line with what I was expecting to see; the write speed of the disks is much higher at 200MB/s, in fairly consistent bursts. Maybe the spinning disk caches will fill up and the writes will drop down again, but it seems pretty consistent so far. I would still expect something above 15MB/s, but maybe it's a psychological thing like you said. Am I wrong in this estimation?

    I also did a test transferring 55GB with the cache turned on, to compare again, and it's again what I would expect, writing to the cache at 2GB/s, bursting to the NVMe drive: And the 55GB finished looking like what I expect; it's consistent even though it's in bursts (where before it seemed to stall for a VERY long time): And with that done, I then waited for the cache to empty to disk and ran the mover: The mover is apparently great at consistent writes above 300MB/s though.

    If the mover is writing at 300MB/s to disk, and the remote transfer comes in from spinning disks anyway, is it better to just leave the cache on? In the end, would you advise migrating large data with the NVMe cache on or off, I mean overall? I still have another ~6.5TB to migrate.

    I made sure that turbo writes/reconstruct write was turned on before this migration; it's actually been on for a week now in preparation for this. Thanks for the reminder. I've also been wondering, is it advisable to have this turned on long term, as I have read that it is harder on the array disks?
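    As a side note, one way I could sanity-check the raw array write speed on UNRAID2, independent of the network, is a direct write test on the destination share. This is just a rough sketch (the path and size are arbitrary examples; it writes and then deletes a 4GiB test file):

    dd if=/dev/zero of=/mnt/user/isos/ddtest.bin bs=1M count=4096 conv=fsync status=progress
    rm /mnt/user/isos/ddtest.bin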
  2. Hi, so I'm trying to move/migrate about 8TB of data between two Unraid servers. Both systems are equipped with 10Gbit networking and have been working fine otherwise. I have mounted the UNRAID2 SMB shares on UNRAID1 as remote mount points, and I have been trying a "push" migration, running commands on UNRAID1 to "push" files over to UNRAID2. I have checked through all networking settings and everything seems fine, nothing has been changed, and everything else is working fine on the network.

    I started doing the migration using `mv` but that cannot be resumed, so I switched to rsync using the following command: `rsync -avzh --remove-source-files --progress /mnt/user/isos/ /mnt/remotes/UNRAID2_isos/` After the rsync migration completes, I intend to run this to find and remove the empty dirs left behind (I never got this far): `find /mnt/user/isos/ -type d -empty -delete`

    I noticed that it would fill up the 2TB cache drive, everything would slow right down while the mover on UNRAID2 started emptying the cache, and the whole process would stall while the mover frantically emptied the cache as more data was pushed in. So, I figured I would simply wait for the mover to move everything from the cache drive to the array, change the `isos` share options on UNRAID2 to write directly to the array without the cache involved, and resume the move/migration again... Before: After: I figured it would be slower writing to the array disks at ~250MB/s rather than at NVMe speed far above that, but I could leave it and let it run until it's done, and the mover would not be required. Since the source is the array disks on UNRAID1 anyway, in the end it's not much different in overall speed.

    Now that I've started doing the migration without the cache/mover involved, I've noticed that the transfer will start, about 3 or 4 files will be moved over, and then it just stalls. System Stats from UNRAID2: You can see here, the transfer would begin, it would copy some files, with some breaks, then stall and wait. After this happened a few times, I rebooted both systems multiple times, cold booted both systems multiple times, and rechecked all settings. Nothing seems odd.

    After this, I tried reconfiguring the `isos` share on UNRAID2 to use the cache again, and that seemed to start working, but slower. So that seemed to work, but then the same problem would eventually occur where the cache would fill and speeds would slow down while the mover does its job again. So I stopped the transfer, ran the mover, and waited for all files to be moved. After this I reconfigured the `isos` share back to not use the cache, and the same problem appeared (UNRAID2):

    I'm really quite at a loss as to what is happening here. Can anyone help?
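    For reference, the full sequence I'm planning to run once the stalling is sorted looks roughly like this. The --partial flag is an addition on my part so an interrupted run can pick up where it left off, and dropping -z is just my guess that compression doesn't help on a 10Gbit LAN:

    # push the data across, keeping partially-transferred files so the run can resume
    rsync -avh --partial --remove-source-files --progress /mnt/user/isos/ /mnt/remotes/UNRAID2_isos/
    # then clean up the empty directories left behind on the source
    find /mnt/user/isos/ -type d -empty -delete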
  3. I'm sure I'm not the only person who wants a Minisforum MS-01 icon. Thanks to anyone who can do this.
  4. vm1-win10-data0.xml Here is the XML for the VM in question. I did try migrating the "/etc/libvirt/qemu/nvram/b154af59-fe03-1020-2c56-1a5f76d10671_VARS-pure-efi.fd" file referenced on line 31, but still no joy. It's like this VM just doesn't want to be moved for some reason. I know it "works" because Windows Recovery runs after a few reboots, but the actual OS refuses to boot, just a black screen forever. Oh, and I didn't use the TPM BIOS, just the normal OVMF.
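    For reference, this is roughly how I copied the nvram file across beforehand (the destination hostname is a placeholder for the new server; the point is that the file has to end up at the same path the XML references on line 31):

    rsync -av /etc/libvirt/qemu/nvram/b154af59-fe03-1020-2c56-1a5f76d10671_VARS-pure-efi.fd \
      root@UNRAID2:/etc/libvirt/qemu/nvram/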
  5. I looked into the error and this worked:

    root@unraid:~# virsh undefine --nvram "VM1-WIN10-Q35-OVMF"
    Domain 'VM1-WIN10-Q35-OVMF' has been undefined

    Still very confused about trying to get this particular Win10 VM running...
  6. If you mean the domain dir on the cache disk, no, there is nothing there. I have even deleted the vdisk and the VM refuses to go. That also doesn't seem to work, I get a message:

    root@unraid2:~# virsh undefine VM1-WIN10-Q35-OVMF
    error: Failed to undefine domain 'VM1-WIN10-Q35-OVMF'
    error: Requested operation is not valid: cannot undefine domain with nvram
  7. This continues to be a confusing issue, especially as I have now migrated other Windows VMs between the same systems without issues. This particular VM just seems to refuse to start on the new system. Also, it seems that I cannot delete THIS particular VM now, but I can create/delete other VMs. When I try to delete it, I just see the spinning arrows to infinity. I've rebooted enough times, cleared browser cookies and cache, and tried a different browser. If anyone has even an idea, I'd really appreciate anything...
  8. I still haven't got this VM to work, and I've been trying everything I can find. Is it possibly something to do with TPM, and is that why it won't start? Does anyone have advice on how to copy the TPM from another Unraid? Would really appreciate any ideas.
  9. Hi, I have a pretty generic Win10 VM that I have migrated over to a different Unraid server, and it refuses to start under the same generic settings. The VM works fine on the old system (AMD Ryzen) but on the new system (Intel 13900H) there is no dice at all. After a few tries the Windows Startup Repair runs though, so it seems the VM is "booting" but actual Windows doesn't start. Interestingly, ANOTHER VM does boot, a similar generic Windows 10 VM (migrated from the same AMD Ryzen system to the same Intel system) with a virtual GPU etc, and that seems to be fine. I'm at a loss investigating what might be the problem... Does anyone have an idea?
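    One thing I still want to rule out is the CPU section of the VM's XML, since the VM moved between CPU vendors. This is just the stock libvirt element I mean (the values here are illustrative, not my actual config), which I plan to compare between the working and non-working VM:

    <cpu mode='host-passthrough' check='none' migratable='on'>
      <topology sockets='1' dies='1' cores='4' threads='2'/>
    </cpu>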
  10. Hi, this is a post to notify that I'm VERY soon (in the next few weeks) selling major parts from my homelab and my main 2 Unraid systems (in my signature). Mods, I ask that I can list here for visibility and post more topics later; please let me know if I'm doing this wrong, and please don't delete my topic. Selling is due to a family emergency requiring me to relocate internationally to help family. So... I'm looking for a good home for these items. Everything is functioning in what I would consider great condition, no issues; once I've taken everything apart I'll post pics. I can try to ship if it is worth it, but pickup is preferred, from the area of The Hague/Delft/Rotterdam in the Netherlands. Buyer would pay shipping. Items soon to be for sale:

    [4Sale] 3x APC 1000XL UPS, each with APC AP9631 Network Interface Card and original APC RBC7 Battery (installed 2022-09-20). 200€ each
    [4Sale] Cooler Master Stacker 810 case, original from 2006, unmodified in excellent condition, all fans, with all accessories and original box. 80€
    [4Sale] Rackmountable 4U black case, space for 6x 5.25" bays & 4x 2.5" bays, all fans, not sure of the exact model but will find it and provide pics, no box. 50€
    [4Sale] CoolerMaster MasterBox Lite 5 black case with window, unused, no box. 40€
    [4Sale] 2x IcyBox RAID caddy, 5.25" to 5x 3.5" (looks like this), with SATA cables and all parts. 40€ each
    [4Sale] 2x SATA SAS HDD cage, 5.25" to 5x 3.5" (looks like this), with SATA cables, SATA power cables and all parts like drive fitters. 20€ each
    [4Sale] AMD Ryzen 3600 CPU, excellent condition, with cooler. 60€

    I will possibly have more for sale, and will add relevant items if so.
  11. Turbo write

    Also, does switching between parity write methods require rebooting, or can I test this by just adjusting the setting?
  12. Turbo write

    I've been considering turning this on for a while. Just asking, is there a timeline or idea for when the auto option will be implemented?
  13. Thanks, I totally forgot about the Docker interactive console. I'm trying to resolve a problem where I think the webserver files have the wrong permissions. In particular I think the following dirs:

    - /var/www/html/
    - /var/www/html/config
    - /var/www/html/data

    Do you know the correct permissions and owners for these dirs?
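    For what it's worth, if the container is the official nextcloud image, I'd expect the fix from the host to look something like this (the container name is a guess on my part, adjust to whatever docker ps shows, and www-data being the correct owner is my assumption):

    docker exec -it nextcloud chown -R www-data:www-data /var/www/html/config /var/www/html/data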
  14. Thanks, that helps with docker exec. However, I still cannot su within the container console, and it's annoying because my container has stopped working and I have no su/sudo access. How can I get the root password?
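    From what I've read, a root shell can be opened without any password by asking Docker for it directly from the host (the container name here is a placeholder):

    docker exec -u 0 -it nextcloud /bin/bash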
  15. Hi, I've been trying to get this to work for some time and I'm unable to. When I run the command, I get:

    Console has to be executed with the user that owns the file config/config.php
    Current user id: 33
    Owner id of config.php: 99

    I'm trying to get cron running using the mentioned 2nd method. When I check the /etc/passwd file, there is no user 99. Any ideas? Also, whenever I try to su within the container, I get a password prompt, but what is the password? Additionally, when I try to sudo, I get an error that sudo is not found. Any help would be appreciated.
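    Based on that error, I assume the command has to be run as uid 99 rather than 33, so something along these lines (the container name and occ path are assumptions based on the standard image layout):

    docker exec -u 99 -it nextcloud php /var/www/html/occ status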