Everything posted by testdasi

  1. Update on the storage pool: I reverted back to non-RAID (i.e. 2 independent drives) after much tinkering with storage pools and ReFS. The main reason I wanted to try a storage pool was ReFS's promised data integrity feature, i.e. it checksums data, similar to btrfs / ZFS. After many tries, I gave up because scrubbing doesn't work. What is the effing point of having hashing without the ability to scrub? Sure, I know I can't recover data since I run RAID-0, but every other FS lets me scrub regardless. I can always restore from a backup!
     While trying to make scrubbing work (to no avail), my frustration was only heightened by the discovery that ReFS can silently delete files on which it finds integrity breaks. That is an atrocious design for a file system. Sure, the file may be corrupt, but at least partial recovery is better than no recovery. And deleting silently (regardless of whether it actually will) is a big no-no for any FS. I now suspect Microsoft's decision to remove ReFS creation from Windows Pro wasn't a marketing decision to push Windows Workstation. They just realised ReFS is shit and want to reduce the chance of class-action lawsuits.
     Btw, Windows Enterprise will allow formatting any drive to ReFS without the need for a storage pool, so there's really no need to add to the risk by running RAID-0 anyway. I am now back to square one on a very much first-world problem: trying to hash my data on Windows (and hopefully, in the process, not having to deal with 2 independent drives).
  2. You can always run multiple instances of the same docker. Just use a different name + a different appdata folder. I would prefer docker over a VM in the first instance. A VM carries KVM/qemu overhead, which makes it less efficient, so I would only use a VM for things that I can't do with docker.
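     For illustration, a second instance might look like the sketch below (image name, paths and ports are placeholders for whatever you actually run):

        # First instance (already running), e.g.:
        #   docker run -d --name=myapp -v /mnt/user/appdata/myapp:/config some/image
        # Second instance: different --name AND different appdata path,
        # plus different host ports if the app publishes any
        docker run -d \
          --name=myapp2 \
          -v /mnt/user/appdata/myapp2:/config \
          -p 8081:8080 \
          some/image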
  3. Something seems odd with that 6TB preclear. Below is my slowest drive, a Seagate 5TB preclear. It's SMR + 5400rpm + 2.5" so I don't expect anything to be slower.

        ############################################################
        # unRAID Server Preclear of disk WCJ0A3LV
        # Cycle 1 of 1, partition start on sector 64.
        #
        # Step 1 of 5 - Pre-read verification:                 [13:07:25 @ 105 MB/s] SUCCESS
        # Step 2 of 5 - Zeroing the disk:                      [16:44:07 @ 83 MB/s]  SUCCESS
        # Step 3 of 5 - Writing unRAID's Preclear signature:                         SUCCESS
        # Step 4 of 5 - Verifying unRAID's Preclear signature:                       SUCCESS
        # Step 5 of 5 - Post-Read verification:                [14:26:44 @ 96 MB/s]  SUCCESS
        ############################################################
        # Cycle elapsed time: 44:18:31 | Total elapsed time: 44:18:32
        ############################################################
        #
        # S.M.A.R.T. Status default
        #
        # ATTRIBUTE                     INITIAL   CYCLE 1   STATUS
        # 5-Reallocated_Sector_Ct       0         0         -
        # 9-Power_On_Hours              0         44        Up 44
        # 183-Runtime_Bad_Block         0         0         -
        # 184-End-to-End_Error          0         0         -
        # 187-Reported_Uncorrect        0         0         -
        # 190-Airflow_Temperature_Cel   27        32        Up 5
        # 197-Current_Pending_Sector    0         0         -
        # 198-Offline_Uncorrectable     0         0         -
        # 199-UDMA_CRC_Error_Count      0         0         -
        ############################################################
        # SMART overall-health self-assessment test result: PASSED
        ############################################################
        --> ATTENTION: Please take a look into the SMART report above for drive health issues.
        --> RESULT: Preclear Finished Successfully!
  4. My Win10 VMs + Ubuntu VMs would, without fail, shut down if I stop them from the GUI. If it doesn't work that way for yours, you might want to check the power and sleep button settings in your Power Options as the first suspect. Btw, if your VM freezes then only a Force Stop will work.
  5. I agree that something is probably wrong with your BIOS. Your USB is being detected as 2.0 EHCI.
  6. Final updates on the VMware adventure: I reverted back to running the Windows VM under Unraid 🤣 The limitations were too much to justify slightly better performance. It only took 3 hours to roll back (having backups and available storage are pretty big factors behind the rapid recovery). Created a storage pool for my 2x PM983 running in RAID-0 equivalent just so I don't have to manage 2 different drives. Now it makes me itchy to buy another PM983 so I can run RAID-5 equivalent.
     Also doing hardware transcoding in Windows now with the GPU (instead of in docker with the CPU). I am in awe of the number of streams my GTX 1070 is capable of (with the help of Dr.G 😉). What used to take an hour now takes 5 minutes.
     Have to resist moving Plex to my workstation instead of docker. The most important reason is I don't have a Plex Pass 😅. Having to convert a Linux-based Plex db to a Windows-based one is a pain in the backside. And my docker, with about 1/4 the CPU power, is fully capable of doing what I need it to do for media consumption.
  7. Please make 2FA an optional feature. My server is not exposed to the Internet so there's really no need for extra security. It would be a massive pain in the backside having to grab my phone just to check if a docker has crashed.
  8. The 860 QVO uses an "adaptive" SLC cache. That effectively means the cache capacity shrinks with the amount of free space available; apparently it ranges from 6GB (full drive) to 78GB (empty drive). While the SLC cache is in play (it's a functionality of the firmware; the drive doesn't really have true SLC cells), it performs at about the same level as the 860 EVO. When the cache runs out, write speed drops to 160MB/s. The headline sequential write numbers sort of undermine the QVO a little bit, e.g. once you start comparing it to a slow HDD at 130MB/s (that's slow even for an HDD; my 7200rpm is still faster than that towards the very end). Random IO is still faster than HDD, cache or no cache, and read speed is still consistently a magnitude faster than HDD. Nevertheless, as I said, QLC is still too expensive to be viable home mass storage. At half its current price I would have no problem switching to all-SSD.
  9. The limitation, I think, is cost. The QVO's price is still too high to justify an all-SSD array. Maybe at half its current price/GB it could be a viable option for home use.
  10. Quite easy to do with the User Scripts plugin.
  11. Deciding which GPU is used to boot is controlled by the motherboard BIOS. There's nothing you can do with your planned setup except get a new motherboard. That's the reason why I usually recommend Gigabyte motherboards to new users (to no avail, people just prefer Asus / Asrock): their BIOS allows you to pick which slot to boot with. Anyway, in your case, you really have no choice but to go through all the hoops to get the primary GPU passed through. Watch the SpaceInvaderOne videos very carefully. Some of your subsequent questions suggest you have only skimmed through some of the vids.
  12. I have an 850 Evo 2TB and an 860 Evo 4TB. They both trim fine. Perhaps it's a peculiarity of your motherboard.
  13. Are you looking at overall usage, i.e. including RAM cache? Linux actively uses RAM to cache writes and reports it as not-free, but once it's needed, that RAM can easily be freed up automatically. In fact, 62% usage means 38% wasted RAM; you want as close to 100% as possible.
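     The distinction shows up in free (a sketch; the numbers are made up for illustration):

        free -h
        #               total   used   free   shared  buff/cache   available
        # Mem:           64Gi   10Gi    2Gi    0.5Gi        52Gi        53Gi
        #
        # "free" looks tiny, but "buff/cache" is reclaimable page cache;
        # "available" is the realistic measure of what new workloads can claim.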
  14. So after running it for a while, below are the Pros and Cons (with workaround suggestions) of running Unraid in VMware Workstation. I am now debating moving back to the way it was LOL.

      Premise: this assumes no n-gamer-1-PC scenario - Unraid wins, end of, in that scenario.

      Pros:
      • No need to mess about with PCIe pass-through. Have a single GPU and trouble passing it through to the VM? Have a Navi GPU that just refuses to work in a VM? Have strange audio issues that can't seem to be fixed?
      • No VM limitation for the machine that matters. USB devices disconnecting? VM overhead affecting performance? Disappointed with your VM fps variance / stuttering?

      Cons:
      • No physical drive pass-through - have to use vmdk vdisks. That means no protection by isolation, e.g. a Windows cryptovirus can encrypt the entire vmdk. Also, vdisk size is limited to 8TB - you can break large drives into multiple smaller vdisks, but parity sync / check will be terrible.
      • Network access is limited to gigabit (i.e. capped at 125MB/s) regardless of interface. This is a VMware limitation so no workaround.
      • No SMART monitoring - need to use 3rd-party software, e.g. HDSentinel.
      • Cannot run Hyper-V based software at the same time. Docker for Windows is also affected as it requires Hyper-V. Another VMware limitation so no workaround. In fact, VMware requires this command in PowerShell to work at all: bcdedit /set hypervisorlaunchtype off
      • No CPU core pinning on the host - can use Process Lasso to set core affinity for the vmware-vmx.exe process.
      • Additional cost of VMware Workstation. I think it would work on VMware Player but can't confirm, as VMware does not allow running both on the same machine for me to test.
      • Need plop / rEFInd iso files to boot Unraid from the USB stick. VMware does not support booting from a USB device (but can connect a USB device to the VM). Note: I can't set it to connect automatically, so I have to manually connect it at boot. While it's possible to boot a Hyper-V VM from a USB device, that is done via the disk controller, i.e. the GUID isn't sent to Unraid.
      • A restart (e.g. due to Windows Update) will require rebooting the server. Need to run Task Scheduler as the system user (e.g. using PsTools) to disable the 2 reboot tasks.
  15. I had a close look at the new scripts (mainly for the --drive-stop-on-upload-limit option) on Github and I have to say they make the old ones look like Neanderthal stone tools. 😅 Am I right to say these are the new features compared to the old scripts?
      1. CreateBindMount - so the rclone download / upload goes through a different IP address from the server IP?
      2. mergerfs instead of unionfs - for hardlinks + no COW.
      3. Service Accounts - automatic switching of upload accounts.
      4. Backup / Upload switch - rclone sync vs rclone move.
      5. Variable-izing various things instead of hard-coding them.
      6. BW Limit by time - this looks like rclone functionality with no logic in the script, unlike (1) -> (4)?

      Just want to double-check before I selectively adapt some of it to my current (old) scripts. A small proposal: why don't you also variable-ize the script location (/mnt/user/appdata/other/rclone/remotes)? It seems a bit out of place for all the other paths to be variables while this one is hard-coded - see the sketch below.
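     Something along these lines (the variable name is my own invention, not taken from the actual scripts):

        #!/bin/bash
        # Hoist the hard-coded remotes location to the top of the script,
        # alongside the other path variables
        RcloneRemotesLocation="/mnt/user/appdata/other/rclone/remotes"

        # ...and reference the variable wherever the literal path appeared, e.g.
        mkdir -p "$RcloneRemotesLocation"   # hypothetical usage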
  16. Have you checked whether your external USB drive controller is overheating? An overheating controller will throttle itself down, causing high IO wait, which manifests as lower speed / choppiness. A dying controller will do the same thing. If your externals are not branded (i.e. a 3rd-party enclosure + an internal 3.5" HDD) then it would be best to just take the drive out and plug it into the server via SATA.
  17. No, your post is fine. It makes more sense than some of the posts on here from non-dyslexic folks.

      You shouldn't be following Linus in terms of home usage. His work server is an all-NVMe enterprise-grade storage server on a 40Gbps network that is capable of 5 GIGAbytes/s throughput. Even his home storage server is on a 10Gb network + I believe he only uses that for Plex.

      Personally, I have My Documents and Downloads on the server, albeit on (ssd) cache and not the array. I used to have (some) Photos and Videos also on the server, albeit on an (ssd) unassigned device and not the array. My array only has my backups and Plex library. You can see the pattern: only stuff that doesn't need speed should be on the array, mainly due to the slow write speed. Everything else should be on cache / unassigned devices.

      In order to move stuff onto the server, though, you have to make sure the connection between your desktop and the server is consistent. Your Downloads folder moving back to the local computer wasn't a bug; you probably lost the connection at one point.

      Last but not least, SSD endurance is very much misunderstood. Regular trimming will be more beneficial to your SSD's lifetime than moving Documents and Photos onto the server (that is assuming you are not thrashing the SSD with GBs of writes daily).

      PS1: moving Desktop to the server is a big no-no. You are asking for trouble doing that.
      PS2: strictly speaking, you shouldn't be moving Documents / Music / Photos / Videos to the NAS. You should add the NAS folders to those libraries (using the Libraries functionality of Windows).
  18. Multiple drives don't usually drop dead at once (completely dead, not just a write error). If that happens, the first culprit to suspect is power. Your PSU could be fine but it could be other factors, e.g. a bad connection arcing, etc. Recently there was another user on here who lost 5 drives because of a bad 5-bay rack. And then your PSU could also be the problem. Surge protection only protects against surges into the PSU; a broken PSU can lose control of the voltage / current coming out of it, and sensitive electronics need less than a 1V surge to die. If I were you, I would double-check the failed drives on a 2nd computer to see if they show up. If they do, then use smartmontools to check SMART to see if there's any clue. It would be even better if the other computer can read xfs, so you can have a look at the data.
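     For example, from any Linux machine with smartmontools installed (/dev/sdX is a placeholder; find the real device with lsblk):

        # Full SMART report for the suspect drive
        smartctl -a /dev/sdX
        # Attributes worth a close look: 5 Reallocated_Sector_Ct,
        # 197 Current_Pending_Sector, 198 Offline_Uncorrectable and
        # 199 UDMA_CRC_Error_Count (CRC errors usually point at cabling / connection)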
  19. In your combined server, you will run 2 GPUs. The default behaviour of the BIOS is to pick the 1st GPU to boot with, which would be the one in the 1st PCIe slot (i.e. not your GT 710 in the x1 slot). Gigabyte would allow you to pick any x16 slot as initial display, saving you from wasting the 1st PCIe slot; however, it also doesn't allow you to pick the x1 slot as initial display. The reason the GT 710 is doing fine in your current server is probably that it's the only GPU. Ironically, with Threadripper, there can only be so much you can throw at it before latency comes into play and makes things worse. For gaming, Intel (a single-die type of CPU) is still the best.
  20. Let's start with "reliability" and tinkering. Usually, once set up (and you've gone through all the hoops to get things set up), you EITHER are happy with it and really don't have to do any more tinkering OR are constantly tinkering because there's something that keeps on bugging you.

      The most common complaint about gaming in a VM on Threadripper is high fps variance. You can get close to bare-metal average fps, but the minimum can be quite a bit lower, depending on games and config. Personally I have never found it to be a big deal, but I have seen so many complaints about it. If gaming is very important to you, I highly, highly recommend you NOT consider doing any VM stuff. A VM just cannot beat bare-metal performance.

      Now with regards to your intended PCIe use. There's no need for a USB card; Ryzen / TR motherboards have at least one onboard USB controller that can be passed through to a VM (see the sketch after this post for how to find it). Most TR motherboards come with 3 M.2 slots; sTR4 boards can come with 4 M.2 slots. Failing that, you can even get something like the Asus Hyper M.2 to break out a x16 slot into 4 x4 M.2 slots for many more NVMe drives. Your GT 710 will by itself occupy a x16 slot (even if it's running at x8 or even PCIe 2.0 x4 speed). That assumes you buy a Gigabyte motherboard, which allows you to pick which physical x16 slot is the initial display, i.e. what Unraid boots with. Other motherboards are likely to require you to waste the 1st PCIe slot on the GT 710 should you want Unraid to boot with it.
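     To find the onboard USB controller(s) and check their isolation, the standard IOMMU-group listing works from the Unraid console (a sketch using generic Linux tooling, nothing Unraid-specific):

        #!/bin/bash
        # List USB controllers with their PCI addresses and vendor:device IDs
        lspci -nn | grep -i usb

        # List every PCI device by IOMMU group; a controller passes through
        # cleanly only if its group contains nothing else the host needs
        for g in /sys/kernel/iommu_groups/*; do
          echo "IOMMU Group ${g##*/}:"
          for d in "$g"/devices/*; do
            echo -e "\t$(lspci -nns "${d##*/}")"
          done
        done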
  21. It looks obvious to me that the problem is within that win10 vdisk. It could be something like: you are on a VPN and your ISP has decided to throttle your speed through the VPN.
  22. Why do you think you need ECC? You are not running FreeNAS (ZFS, to be exact) nor an enterprise server. For home use of Unraid, ECC is just a waste of money. Are you looking to do PCIe pass-through? Are your clients 1080p, 4k, or a mix of both?
  23. Neither. To get performance at 90% of bare metal, you need to pass it through as a PCIe device (i.e. stub it and then select it under Other PCI Devices in the VM template GUI) - see the sketch below. Everything else won't get close to 90%, and most certainly not SATA.
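     One way to stub a device (a sketch of the classic syslinux.cfg method; newer Unraid versions offer other routes, and the ID below is a placeholder - get yours from lspci):

        # On the Unraid console: find the device's vendor:device ID,
        # e.g. for an NVMe drive
        lspci -nn | grep -i nvme

        # Then add vfio-pci.ids=<vendor>:<device> to the append line in
        # /boot/syslinux/syslinux.cfg (Main > Flash in the GUI) and reboot, e.g.:
        #   append vfio-pci.ids=144d:a808 initrd=/bzroot
        # The stubbed device then shows up under Other PCI Devices
        # in the VM template.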