Everything posted by testdasi

  1. That is because the files are not in /mnt/cache. See my reply just above your last post.
  2. You set "Use cache" = Yes. As the page states: "Mover transfers files from cache to array". So the files are on the array, which is why they disappear from /mnt/cache. Set it to Prefer, run the mover (to move the files back to cache), then set it to Only. "Only" is the setting that keeps files on the cache (assuming your cache pool isn't full).
  3. You could email Limetech directly asking about it.
  4. Do you actually have an isos share? What are its cache settings? Can you find the file under /mnt/user/isos?
  5. Your test is heavily flawed. /mnt/disk4/DST/test.dat and /mnt/cache/DST/test.dat are duplicate files under /mnt/user/DST/test.dat, and you then did an additional write test to /mnt/user/DST/test.dat, which introduced an unrealistic variable. You also made no attempt to remove the effect of the RAM cache: there is no SATA SSD / HDD capable of 1.3GB/s and 1.4GB/s read speeds. A block size of 1k (1024 bytes) is equivalent to random IO, and writing 100k of 1k files to a parity-protected array at 13.2MB/s is not at all unreasonable. Let's simplify the test: take 3 large (multi-GB) files, copy them using the cp command, and watch the speed reported on the Unraid Main page of the GUI.
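If you do want a dd-style number, a minimal sketch of a fairer sequential test is below. The temp directory is a stand-in so the sketch runs anywhere; on an Unraid server you would point DST at a share path such as /mnt/user/DST instead.

```shell
# Stand-in destination; on Unraid use a share path, e.g. DST=/mnt/user/DST
DST=$(mktemp -d)

# conv=fdatasync makes dd flush the data to the device before reporting
# a speed, so the number reflects the storage rather than the RAM cache.
dd if=/dev/zero of="$DST/test.dat" bs=1M count=64 conv=fdatasync

rm -rf "$DST"
```

A large block size (1M rather than 1k) keeps the test sequential; with bs=1k you are effectively benchmarking random IO, which is why figures like 13.2MB/s are normal on a parity-protected array.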
  6. Shucking will certainly void the warranty. A way to somewhat mitigate that is to run Preclear on the external as-is via USB first to weed out early failures. Of course, a drive can fail later too, but once the early failures are eliminated, the rest fall back to a probabilistic failure pattern, i.e. one that can be mitigated by parity. I personally haven't shucked an HDD for years now. Last I checked, the lower cost didn't justify the (statistical) expected cost of replacement without a warranty (even assuming the higher reliability of 8TB+ HDDs). If the price drops enough, I would have no problem doing it again.
  7. Hello To All

    Welcome! Your config is overkill just as a NAS, so I'm sure you will be itching to do more stuff with it soon. Also, don't overclock your RAM (i.e. don't run it at 3200MHz). Running it at stock speed will save you a lot of unexpected grief.
  8. X570 (actually every Ryzen motherboard that I know of) has 1 onboard USB 3.0 controller that can be passed through, i.e. no need for another PCIe card. Follow Spaceinvader One's vid on PCIe vfio stubbing to stub it; it will then show up as an Other PCIe Device in your VM template, and from there it's just tick, save, start. Check this post for the onboard controller of the X570 chipset that can be passed through (you will have to pass all of .0, .1 and .3 to the same VM or it will not work).
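For reference, stubbing boils down to binding the controller to vfio-pci at boot, before Unraid's own drivers claim it. A sketch of the syslinux.cfg fragment involved; the vendor:device ID 1022:149c is a placeholder, look up your own with lspci -nn:

```shell
# /boot/syslinux/syslinux.cfg (fragment) - placeholder ID, not yours.
# Find the onboard USB controller's [vendor:device] ID first with:
#   lspci -nn | grep -i usb
append vfio-pci.ids=1022:149c initrd=/bzroot
```

After a reboot, the controller shows up as an unclaimed PCIe device that the VM template can pass through.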
  9. There's no need to get "NAS" drives (or "Enterprise" or the like), especially for Unraid. These labels were created to segment the market once manufacturers realised they could charge more for slightly tweaked products. Don't get me wrong, there ARE cases in which these drives have real advantages; Unraid (for home users) just isn't one of them.

    In terms of brands, there are really only 3 manufacturers of HDDs right now: Seagate, Toshiba and WD (which also owns the Hitachi / HGST brand). All are considered reputable and there's little, if any, clear distinction among the three. So what I do is simply find the cheapest available HDD at the capacity I want and go for it; whatever brand / model it is, it is what it is. WD Red is rather popular on here, but I think that's mostly because back when 4TB was what 8TB is at the moment, WD Red was the most affordable. Some on here also shuck HDDs out of external enclosures, and the most affordable ones tend to have WD Red / White label drives inside, adding to the popularity. At the moment in the UK, Toshiba is usually the most affordable at 8TB and Seagate IronWolf the most affordable at 12TB and up.

    And I forgot to say: don't buy multiple HDDs at the same time (and if you have to, try to avoid buying them from the same seller) in case there's a bad batch. So Froberg's strategy of gradual upgrades is a very good one.

    Signature is in Account Settings. Limetech has never explained why it isn't part of Profile.
  10. "Small" = 6TB and under. In terms of price per GB, 8TB should be similar to 4TB, so there is really no reason to get 4TB at all. 10TB / 12TB are just a bit more expensive per GB. Above 12TB probably needs another 6 months to drop to a reasonable level (again, price per GB!). The 960 Evo is 3D TLC, which is what you want (the "3D" part is important). There isn't an approved list because kernel changes and new models can make one outdated pretty quickly. If you hang around the forum and check out people's signatures, you should be able to put together a nice short list of NVMe drives, and then you can just message each person directly to ask about it; most people are happy to answer. I know for sure the Intel i750, Samsung 970 Evo and Samsung PM983 can be passed through on Unraid 6.8.2 because I'm running them right now.
  11. Updates: Bought another 3.84TB Samsung PM983 NVMe, so all of my VM data is now on PCIe-passed-through storage (i.e. no need to save data on a UD share of a SATA SSD). I had been backing up my data offline to a 2.5" USB external (holding the Seagate BarraCuda 5TB, i.e. SMR). It is filled to 95%, making it very slow (even slower than it has always been!) and creating the need to split my backup among multiple external drives. So I thought, since I need to split my offline backup anyway, I may as well solve the speed issue as well. Now all my large-capacity SATA SSDs (Samsung 860 Evo 4TB, Samsung 850 Evo 2TB and Crucial MX300 2TB) are in external enclosures serving as offline backup. One of the i750s is in cache, the other mounted as UD; I use both for write-heavy activities. One is at 94% reserve and one at 98%, so they still have some way to go. I mounted the Seagate BarraCuda 5TB + 2 of my old Toshiba laptop HDDs (320GB and 80GB - yep, they are still alive and kicking) as UD. The 2 laptop HDDs will be used as online backup for appdata, flash, vdisks etc. The 5TB will be used for infrequently-accessed static data (to keep it off my write-heavy SSDs). I'm trying to resist the temptation to build another server to house my SATA SSDs as offline backup LOL. Updated Unraid to 6.8.2. No problems to report.
  12. A few things that I haven't seen mentioned. Because HDDs fail in a probabilistic manner, it's always better (statistically) to have fewer larger-capacity drives than more smaller-capacity drives. Furthermore, Backblaze stats also suggest newer large-capacity drives seem to have better reliability in general than older low-capacity drives. And of course the number of SATA ports is a limited resource. Adding those 3 things up, you should definitely aim for larger-capacity drives. Avoid QLC NVMe. QLC for SSDs is like SMR for HDDs; a QLC NVMe SSD is like making a Ferrari run on bicycle tyres. Make sure you google the controller of any (NVMe) SSD before buying. Some (e.g. SM2263) require special workarounds to be passed through; some (e.g. Intel 660p) just downright refuse to be passed through.
  13. Are you able to restart your VM without needing to restart the server?
  14. In your commands, you mkdir tempdir but never cd tempdir, so all your files were created in /mnt/user. That is likely to cause a mess because /mnt/user is where all the magic happens. Try creating a share called tempdir first (and set its allocation method to Most Free and its split level to split all directories), then cd /mnt/user/tempdir.
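The mkdir-without-cd mistake is easy to reproduce anywhere; a minimal sketch, with a generic temp directory standing in for /mnt/user:

```shell
base=$(mktemp -d)    # stand-in for /mnt/user
cd "$base"
mkdir tempdir        # creates the directory...
touch file1.bin      # ...but this writes to $base, not $base/tempdir
ls tempdir           # empty: the file landed one level up
cd tempdir           # the missing step - files now go where intended
```

On a real server, "one level up" means loose files scattered directly under /mnt/user, which Unraid treats as the root of all shares.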
  15. Try adding this above </devices> and see if you can pass through the 1st drive:

    <controller type='scsi' index='1' model='virtio-scsi'/>
    <hostdev mode='subsystem' type='scsi'>
      <source>
        <adapter name='scsi_host2'/>
        <address type='scsi' bus='0' target='0' unit='0'/>
      </source>
      <readonly/>
      <address type='drive' controller='1' bus='0' target='0' unit='0'/>
    </hostdev>
  16. These fanless boxes do run hot (hot enough to warm your tea - not make it hot, but warm-ish), but if you actually measure the temps, they are still within spec, so I would not be too concerned (other than avoiding putting sensitive things like plants near it). I would also recommend running critical infrastructure, e.g. a router, separately instead of in a VM. Recently, users have been reporting problems with pfSense not being happy with passed-through NICs.
  17. That sounds like there were other IOs happening at the same time as your last transfer. Because RAID stripes data, it helps with speed during simultaneous IO. Unraid will behave the same way as simultaneous IO on a single HDD, i.e. slow.
  18. Did someone help you build the server originally? I don't think a non-techy could build a 30-disk 108-TB array that works for years without ever experiencing issues. Given you have a rather massive server, I would recommend you get help so you don't end up making avoidable newbie mistakes. Btw, you have 2 parity drives with 2 failed drives, so a rebuild is possible. The problem is, as johnnie said, your controller might be the cause, which could exacerbate your current situation - for example, more drives may drop offline while you are rebuilding, and that's not good.
  19. My understanding is you should only enable it if you actually have a problem, and even then it may or may not work. The only time I could have needed it was when the Windows 1803 update failed (I didn't realise it was related until later), but I managed to work around it using a different method. So I don't think it's a "work correctly" situation but rather a work-vs-crash one. Sources:
  20. How did you "read the same file back"? Was there any other IO happening at the same time? With your current config, reads should run at the speed of your data drive, and writes at the speed of the slower of parity and data. When you add more data drives, your write speed should slow down, but read speed should still match whichever drive the data is on. If RAID is something that suits your needs, why would you move away from it? Can't have your cake and eat it.
  21. M.2 SSDs (NVMe, SATA and even PCIe AHCI) all work fine. Now let me clarify the point about a 2nd GPU. Unraid does NOT need a GPU to boot (as long as the motherboard allows it to boot "headless", which seems to be the case nowadays for all brands). The GPU that Unraid boots with generally CAN be passed through. It's just that there are certain hoops that I observe new users tend to have trouble with (most frequently the reset issue for AMD cards and error code 43 for Nvidia cards). These hoops make it rather frustrating for both the users and anyone trying to help. The exception is the iGPU: I have not seen any success story of passing through an iGPU for the current gen (both Intel and AMD).

    Of course, with the appropriate skill level (and a cooperating GPU), it's entirely possible to do it with a single GPU. It's just that new users don't tend to have the skill and familiarity, and having a dedicated GPU for Unraid to boot with makes it easier to work around the aforementioned hoops. For example: dumping a vbios is pretty dang easy with 2 GPUs, doing it with a single GPU is virtually impossible, and downloading a vbios from Techpowerup is prone to user error (i.e. downloading the wrong vbios).

    Some AMD cards have the reset issue, so if such a card is already initialised at boot, it just can't be passed through to the VM, end of. Note that these cards, even if successfully passed through, will require the entire server to reboot whenever the VM reboots for the GPU to work again. This is a long-standing AMD problem that is unlikely to be fixed any time soon, so don't keep your hopes up.

    All recent Nvidia GTX / RTX GPUs will error out with code 43 if initialised at boot and then passed through to a VM without the right settings, e.g. vbios, Hyper-V etc. This is because the driver detects that it is being used in a virtualised environment, which Nvidia doesn't want you using cheap consumer-level GPUs for, i.e. it wants to force you to buy more expensive Quadro cards. The problem is that error code 43 is a generic error that just says the card doesn't work, i.e. it overlaps with issues such as an actually bad GPU, an incomplete passthrough, corrupted drivers etc. That leads to frustration trying to diagnose and work around it.

    So in other words, having a GPU for Unraid to boot with (and not used by a VM) is a quality-of-life item, not a hardware requirement. If you have 2 GPUs for 2 VMs then things are slightly different: it's actually worth your while to test things out first without a dedicated GPU for Unraid. Basically, try to pass through the GPU Unraid doesn't boot with first to build up familiarity and skill. Once successful with one, you can start working on the other (and hopefully by then it's less frustrating to deal with issues). If it still doesn't work, you can then get a dedicated low-end GPU for Unraid (assuming you get an ATX motherboard). This is where the Gigabyte motherboard's flexibility comes in handy, because you can put this low-end GPU in the slowest PCIe slot, reserving the 2 fast ones for the 2 GPUs to be passed through to the 2 VMs.
  22. "That's not good" is very much an understatement. I have seen cables melt from the current of overloaded HDDs on a single power lead, so really, don't do that.