testdasi

Members
  • Content Count: 1082
  • Joined
  • Last visited
  • Days Won: 2

testdasi last won the day on June 27

testdasi had the most liked content!

Community Reputation: 79 (Good)

1 Follower

About testdasi
  • Rank: Advanced Member
  • Gender: Undisclosed

Recent Profile Visitors: 1051 profile views
  1. Do you happen to use ACS Override? I had severe lags in my VM moving from 6.5.3 to 6.6 that went away after I switched off ACS Override.
  2. Yes, that should work. A few things to keep an eye on:
     - Make sure your 2-bay dock mounts each disk as an individual device and not as some weird JBOD or RAID arrangement. It shouldn't happen, but check the manual just to be sure.
     - I use cp simply because I have always used cp and I know it works every time (see the sketch below).
     - Don't use Krusader (or mc) for a large migration. I have found these more advanced tools cause issues (e.g. Krusader grinding to a halt after a few hundred GBs, mc causing fragmentation, etc.).
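     A minimal sketch of the cp step, assuming the old disk is mounted by Unassigned Devices at /mnt/disks/old4TB_A (a hypothetical name) and its data is going to array disk1:

       # -a preserves permissions, timestamps and symlinks;
       # the trailing /. copies the contents rather than the folder itself.
       cp -a /mnt/disks/old4TB_A/. /mnt/disk1/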
  3. The fastest way is to install both the new and old disks, mount the new disks in Unassigned Devices, then cp disk-to-disk (e.g. 4TB A -> 10TB A, 4TB B -> 10TB B, etc.). Since the number of disks matches (presumably your 5x4TB = 4 data + 1 parity), you can even run the cp jobs in parallel, e.g. using the CA User Scripts run-in-background functionality (see the sketch below). Then you remove the old disks, create a new config and build the 12TB parity.
     A variation of the above is to add all the new disks to the array but remove parity, then again cp disk-to-disk. It's slightly less safe (no parity protection during the migration) but there's zero format risk. Not that there's any substantial format risk with UD either: UD should format xfs in such a way that the disks can be added to the array directly without being reformatted.
     Your rebuild method is not wrong, but it's a lot slower since you can only do it one disk at a time. That instruction was intended for single-disk replacement, while in your case it's a wholesale migration.
     Q1: no need to rerun the parity check. Q2: no need to turn off Docker and VMs, especially if you are sure there are no writes; at worst it may slow things down a little. On 6.7.0+, though, you may get terrible performance for any array access during the rebuild due to the read/write priority bug that was raised recently.
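     A minimal sketch of the parallel version as a CA User Scripts script, with hypothetical mount points (old array disks at /mnt/disk1-4, new 10TB disks mounted by Unassigned Devices):

       #!/bin/bash
       # Copy each old array disk to its new disk in parallel.
       cp -a /mnt/disk1/. /mnt/disks/new10TB_A/ &
       cp -a /mnt/disk2/. /mnt/disks/new10TB_B/ &
       cp -a /mnt/disk3/. /mnt/disks/new10TB_C/ &
       cp -a /mnt/disk4/. /mnt/disks/new10TB_D/ &
       wait  # block until all four copies have finished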
  4. Try booting Unraid in legacy mode and turning off Hyper-V.
  5. I thought trurl's response was very clear. You don't need to plug a disk into the exact same port it was originally plugged into. Unraid disk assignment is based on the serial number of the disk, not on the port the disk is plugged into (see the example below).
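     For example, you can see the serial-based names Linux itself uses for the disks; these stay the same no matter which port a disk is plugged into:

       # Partition entries are filtered out to keep the list short.
       ls -l /dev/disk/by-id/ | grep -v part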
  6. Read the FAQ here on how to enable TRIM. Also, the OS drive (C:) showing "optimisation not available" has been an issue for quite some time; I can't remember which update broke it. You can still run TRIM from the command line and it will still work, just not from the Windows optimisation app.
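     On the Unraid side, a manual TRIM is a one-liner; a minimal sketch, assuming your SSD is the cache pool mounted at /mnt/cache:

       fstrim -v /mnt/cache  # -v reports how much space was trimmed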
  7. The OOM happened on Aug 04 at 14:56:39 and python was killed to free up memory. Given your system has 4GB of RAM and a single HDD, I think you are mounting an rclone cloud remote for Plex + other dockers, yes? A few things you can do (sketched below):
     - Set a memory limit for each docker; then if there's a "runaway" docker, the culprit is more likely to be obvious.
     - Avoid putting the transcode temp in RAM (but given the low amount of RAM, I don't think anyone would do that anyway).
     - Change your rclone parameters to use a bit less RAM.
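     A minimal sketch of the first and last points; the container name, image, remote name and sizes are all hypothetical:

       # Cap the Plex container at 2 GB of RAM. On Unraid, the
       # --memory flag goes in the template's "Extra Parameters" field.
       docker run -d --name plex --memory=2g plexinc/pms-docker

       # Shrink rclone's per-file read buffer to reduce memory use.
       rclone mount gdrive:media /mnt/disks/gdrive --buffer-size 16M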
  8. Yes, Lightroom has that feature to let you edit locally while the RAW files stay on the network. I have never used that functionality in any serious manner, though. Performance is better than editing the RAW directly once the initial ingress wait is over; that's expected, since you are essentially editing a simplified version of your RAW files. If I were you, I would set up a VM on the 1700 and edit via RDP; it would be a waste of the 1700 otherwise. Of course, if you use the server for other purposes, then you can use the Lightroom feature you mentioned. My workflow is generally to keep files on a "local" NVMe SSD while I still need to edit them; once done, I move everything to the NAS. But then my NAS and workstation are the same machine, so it's a different situation.
  9. From the Unraid website: "Trial keys require an internet connection upon server boot in order to validate. Basic/Plus/Pro keys do not require an internet connection." https://unraid.net/pricing
  10. Firstly, if a speed test shows 60-80 Mbps upload, then presumably it's a problem with your ISP. Secondly, what do you mean by "playback from my Plex server"?
     - What's the client?
     - What's the bitrate and codec of the content?
     - Any transcoding?
     - Any disk write activity happening in the background, e.g. parity sync, mover, etc.?
     - What's your CPU core load / usage level while the media is being played?
  11. I use Google's built-in backup to Google Drive, then rclone it down to my server (example below).
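     The rclone step is a one-liner once a remote (here hypothetically named gdrive:) has been set up with rclone config:

       # -P shows live transfer progress; paths are illustrative.
       rclone copy gdrive:Backup /mnt/user/backups/gdrive -P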
  12. You don't need to reboot Unraid; you just need to run the script. The "at first array start" schedule is for the next time you boot/reboot Unraid (i.e. so you don't have to rerun the script manually).
  13. You need to install the CA User Scripts plugin to create the script and schedule it to run at first array start. You can do it the command-line way; the plugin just makes it more user friendly. The CA User Scripts GUI is rather self-explanatory, but if you need more help, you can ask in the dedicated support topic (just search the forum for Unraid CA User Scripts support).
     Once you have the script, schedule it and run it. Then, when you edit host path 2 of your Plex docker template, you should see PlexRamScratch in a drop-down when you click on /tmp. Just select it, which should update the path in the box, and save the template. Your within-Plex setting looks correct.
     PS: you do NOT need the script to configure Plex to use RAM for the transcode temp (i.e. to map /transcode to /tmp). It's just that Plex would then have access to all your RAM and, depending on load, may cause you to run out of memory. What the script does is create a limit on how much RAM Plex can use (see the sketch below).
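     For reference, a minimal sketch of what such a script does; the 4g cap is an assumption, so size it to your RAM:

       #!/bin/bash
       # Create a RAM-backed scratch folder for Plex transcoding,
       # capped at 4 GB so Plex cannot exhaust system memory.
       mkdir -p /tmp/PlexRamScratch
       mount -t tmpfs -o size=4g tmpfs /tmp/PlexRamScratch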
  14. Code 43 is the Nvidia driver detecting that you are running it in a VM. Nvidia consumer cards (i.e. non-Quadro) are not supposed to run in a VM, so if the driver detects a VM environment, it refuses to load. This happens a lot with primary GPU pass-through. Good to know that booting Unraid in legacy mode fixes your problem. There is no setback or pitfall to booting Unraid in legacy mode that I know of. I guess theoretically there might be some specific hardware that requires UEFI boot, but that looks irrelevant in your case.
  15. In your primary GPU Windows screenshot, there is an exclamation mark on the 970 device, suggesting a Windows error. What is the error? There's no particular reason why you should boot the Windows VM on the primary GPU; rather, it's better to boot Mac OS on the secondary. The primary slot is always more troublesome to deal with, and a Mac VM is also more troublesome to deal with (remember, Apple never intended for you to run Mac OS in a VM; even Hackintosh is a "your mileage may vary" situation). The more variables and roadblocks you can remove from an issue, the easier it is to troubleshoot.