-Daedalus

Members · 426 posts
Everything posted by -Daedalus

  1. Thanks a million for looking through the logs! That's irritating. I'll give it a go in another machine to see what happens. Though I think it might be a board issue. Tried a flashed H310 as well, and had similar issues. I don't suppose you've any idea what settings I should be changing? Not familiar with more technical settings on AMD boards.
  2. Hi all, Moved my build to an X399 system. Drives connected to the board show up fine, but drives connected to the 9211-8i (P19 IT) aren't being detected in the WebUI. The card was previously working fine, and I've tried different slots with the same result. Anyone care to have a look through the diags to see if any light can be shed on this? server-diagnostics-20171118-1402.zip
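      As a footnote for anyone reading later: a rough, hypothetical sketch of the kernel-side checks worth running from the console before digging into the diags (assuming the 9211-8i is the usual SAS2008-based card binding to the stock mpt2sas/mpt3sas driver; the exact lspci strings vary by firmware):

         # Hypothetical check (not from the original post): does the kernel
         # even enumerate the HBA, independent of the WebUI?
         import subprocess

         def grep_cmd(cmd, needles):
             out = subprocess.run(cmd, capture_output=True, text=True).stdout
             return [line for line in out.splitlines()
                     if any(n.lower() in line.lower() for n in needles)]

         # Is the card visible on the PCIe bus at all?
         print(grep_cmd(["lspci", "-nn"], ["LSI", "SAS2008"]))

         # Did the driver load and claim it?
         print(grep_cmd(["dmesg"], ["mpt2sas", "mpt3sas"]))
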
  3. I'm almost sure this has been brought up before, so sorry if I'm duplicating; I Googled and searched but couldn't find it. After a recent upgrade, and some problems, I'm in a situation where, because of a failed HBA, I can't bring enough drives online to start my array. There are plenty of SATA connections left for my cache pool and any unassigned devices, but I can't start the bulk storage. Even though all my VMs and most of my Docker containers can start, I can't actually do anything with them, so my server, which hosts lots of game servers via VMs, and Plex and associated applications through Docker, can't function. TL;DR - Allow starting of the cache pool and unassigned devices outside of the main array?
  4. Hi all, First off, this is a hardware issue, not an unRAID one, but I figured it would get a lot more visibility here, and I've been pulling my hair out for a couple of hours already on this. Moving stuff to a new build - from a C2750 setup to an X399 one. I have a mix of drives, including three 8TB WD Easystores that were shucked. Everything is being detected except the 8TB Easystores. They're not showing up in the BIOS either. I've swapped with known working cables (power and data), same result. They were working perfectly in my C2750 system. Anyone have any ideas on what else I can try here? Thanks in advance
  5. Quote: "Big bottleneck somewhere, my checks with dual parity take as long as they did with single parity." Could well be. Using a C2750 SoC, but usage is never more than around 25-40% during a check. Maybe it's lacking instruction sets that dual parity finds beneficial or something. 1950X build incoming though, so we'll see soon enough.
  6. Recently switched to dual parity. Checks now take almost two days (~55MB/s vs. ~110MB/s with single parity). Would love something like this. The array is sluggish as all hell during a dual-parity check. Maybe I'm CPU limited or there's something else going on, but Plex was unusable during this time, as were most other things. Would love to have this run only at night for 3-4 days instead.
  7. Because then you can't run your VM on redundant storage, requiring downtime if you want to create a backup image. That, or you have to pass through a hardware RAID1 config, which seems a bit silly within an OS like unRAID.
  8. Apologies for the necro, but having seen that single-drive vdev expansion is coming to ZFS at some point in the future, I figured I'd nudge this again for visibility. For myself, I'd be happy just having ZFS for cache, not the main array. Myself and some other users have been having issues possibly related to BTRFS cache pooling (see below; the issues seem to go away when a single XFS cache device is used), and I feel like having something that's been around longer and has been put through its paces a little more might be a nice option. I understand that, for all the reasons Limetech listed above, it might still not be viable, but I'm putting it out there nonetheless. https://forums.lime-technology.com/topic/58381-large-copywrite-on-btrfs-cache-pool-locking-up-server-temporarily/
  9. Just to chime in here: I'm experiencing the same issue, also using 2x 850 Evos. Not a fan. I haven't tried any of the troubleshooting steps mentioned in this thread yet, but I wanted to note that it's affecting me as well.
  10. Yup. 2x RAID1 arrays is my use-case as well. One for VMs/User-facing Docker stuff, the other for cache and "backend" containers.
  11. Simple one; it probably doesn't need its own thread, but I couldn't think of a better place. As the title says, I think "Main" is a little ambiguous, especially with "Dashboard" right beside it. I'm not sure on the stats - I'm sure LT has tonnes of data on this - but I'd guess most people are doing things with unRAID other than just storage. unRAID isn't "just" a storage platform anymore, and I don't think it has been for a while now. As such, the Dashboard seems like the more appropriate overview-type page, while the "Main" page really only handles storage-related things. So I think a clearer name would be something that reflects that - Storage, Array, Disks, etc. Just a thought.
  12. That's a very handy calculator. Thanks for that. Might end up having to go for a 1TB SSD anyway then. I didn't realise BTRFS's RAID1 works this way. That actually makes things much simpler. Cheers!
  13. How does RAID1 with 3 disks work exactly? Edit: Never mind, this explains it pretty well: https://superuser.com/questions/870847/btrfs-raid-1-on-3-devices So I can pick up one 500GB SSD, just add it to the pool, still have "RAID1" (in the sense that there's only ever two copies of any data), and gain a bunch of extra space?
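      To make the space math concrete, here's a rough sketch of the usual btrfs RAID1 estimate (roughly what the calculator above is doing; the 500GB figures are just example sizes): every chunk lives on exactly two devices, so usable space is about half the pool total, capped by what the other devices can mirror.

         # Rough btrfs RAID1 usable-space estimate (example sizes, in GB).
         # Each chunk is stored on exactly two devices, so usable space is
         # roughly total/2, unless one device is too big to be mirrored.
         def raid1_usable(sizes_gb):
             total = sum(sizes_gb)
             largest = max(sizes_gb)
             # Any capacity on the largest device beyond the sum of the
             # others can't be paired with a second copy, so it goes unused.
             return min(total / 2, total - largest)

         print(raid1_usable([500, 500]))       # 500.0 -> current two-SSD pool
         print(raid1_usable([500, 500, 500]))  # 750.0 -> after adding a third 500GB SSD
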
  14. Thanks for all the replies, guys. It would be nice to just add a third SSD for a three-drive pool, but I've heard BTRFS's RAID 5/6 equivalents are a bit unstable; I haven't looked into it in a while. For the moment I've stopped the downloads and media shares from using the cache, and have taken the hit on performance. We'll see if this improves VM performance anyway from not having as much stuff hitting the same SSDs. Mounting a spinner to a directory on the cache sounds interesting. Does that still work correctly with the mover? I assume it would.
  15. I understand your reasoning for looking at my use of the cache drive, but there are reasons I'm doing it this way. 1) Power consumption. Turbo write would help, yes, but would mean my power consumption is closer to 80W 24/7, rather than its usual ~35W. Not a huge deal, but electricity isn't cheap where I live. 2) CPU resources. My CPU isn't very powerful, and a bunch of stuff getting written to the array can use 40% of it. I'd like to keep this down during the day when people are using other CPU-intensive things. So, as to my original question: I'll be looking at my usage of the cache drives overall, but can you - or anyone else - suggest anything I might be able to do to keep VMs and Docker humming along nicely, while also keeping most writes going to the cache?
  16. Well, that's originally why it was created. The cache was implemented (as far as I know) before Docker was. Saying "don't use it", while an option, seems like I'd be missing out on a pretty nice feature of unRAID, especially since I'll be picking up some 10G NICs shortly as well.
  17. Hi all, I'll be doing a hardware upgrade on my server soon enough, and with it I'm planning some updates to my VM storage. Right now I have everything on 2x 500GB SSDs: VMs, Docker, and most shares using them. The problem is that I'm running out of space. Now, I could:
      • Buy bigger SSDs - Certainly possible, but I'd like to avoid this if I can, given how expensive SSDs have got over the last while.
      • Move VMs to an unassigned SSD - But I'd like to keep all of BTRFS's error checking and the RAID1 redundancy.
      • Make some of my shares not use cache - Possible, but for the following reason it's a bit tricky: when something finishes downloading, it gets moved from mnt/user/downloads to mnt/user/movies or mnt/user/tv as the case may be. So even if downloads is set to not use cache (which is what's currently set), stuff still gets copied to the SSDs when it's moved to its final home. Can I use user0 here, or will that cause more problems?
      Ideally, I'd like to have a second cache pool, only for use with VMs. I could use downloads with an unassigned device, but I don't know how well that's supported with the mover, if at all. Does anyone have any options or ideas about what could be done here? Thanks!
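      For what it's worth, a hypothetical sketch of the user0 idea (on unRAID, /mnt/user0/<share> is the same user share viewed without the cache, so a move targeting it lands straight on the array; the paths below are examples, not my actual share names):

         # Hypothetical post-processing move that skips the cache by writing to
         # the /mnt/user0 (array-only) view of the destination share instead of
         # the normal /mnt/user view. Example paths only.
         import shutil
         from pathlib import Path

         finished = Path("/mnt/user/downloads/Some.Movie.2017.mkv")  # example file
         dest_dir = Path("/mnt/user0/movies")                        # array-only view of 'movies'

         dest_dir.mkdir(parents=True, exist_ok=True)
         shutil.move(str(finished), str(dest_dir / finished.name))
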
  18. +1. I feel like this is probably coming, given what's been talked about with 6.4, and with the most recent RC, but worth mentioning anyway. Would love to be able to use stronger passwords with the webUI.
  19. I don't want the shares only on the cache. They're just downloaded junk, so I want it ultimately on the array, but I'd rather that happen at night, hence the standard cache=yes behaviour is what I want. This happens daily, so manual intervention isn't feasible. Are the diagnostics showing anyone anything useful?
  20. I'm aware of the mover, however this didn't happen during a time when the mover was active, and invoking the mover manually doesn't cause this issue. Also, the issue lasts for several minutes to an hour or so, depending on the size of the files in question.
  21. Hi all, Not really sure where to put this, as I haven't been able to narrow down the root cause yet. I've noticed that when a download completes (Deluge, Transmission docker instance, etc.), if the file is big enough (usually a couple of gigs or more will do it), the whole server hangs. The webUI mostly works but slows to a crawl, Plex streams break, locally streamed video becomes a buffering mess, and other VMs and docker instances have their performance impacted. While this is happening, the stats screenshot makes it look very much like files are being written to the array somewhere along the line, despite all shares being set to "Cache: Yes". The shares:
      • Downloads go to: user/downloads/incomplete
      • Once completed, Sonarr/CP move them to: user/downloads/tv or user/downloads/movies
      • Then these are moved to: user/tv or user/movies, for Plex to pick up.
      It's hard to tell where in the process the bottleneck is happening, but as stated, all of these shares use the cache, so AFAIK this shouldn't happen. Anyone have any ideas? It's irritating when users report problems on a Minecraft server just because a download is finishing up. Diagnostics attached. Thanks in advance! server-diagnostics-20170820-1114.zip
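      For anyone hitting the same thing, a rough sketch of how one might confirm whether the array drives are actually taking writes during one of these stalls (device names are examples; field 10 of /proc/diskstats is sectors written, and sectors are 512 bytes):

         # Rough poll of /proc/diskstats to see which devices are taking writes
         # while the server is bogged down. Device names are examples only.
         import time

         WATCH = ["sdb", "sdc", "sdd", "sde"]  # e.g. parity + array data drives

         def sectors_written():
             stats = {}
             with open("/proc/diskstats") as f:
                 for line in f:
                     fields = line.split()
                     if fields[2] in WATCH:
                         stats[fields[2]] = int(fields[9])  # sectors written
             return stats

         prev = sectors_written()
         while True:
             time.sleep(5)
             cur = sectors_written()
             mb_per_s = {d: (cur[d] - prev[d]) * 512 / 5 / 1e6 for d in cur}
             print({d: round(v, 1) for d, v in mb_per_s.items()})
             prev = cur
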
  22. No, actually I meant just what I said. An extension of the current approach, "Easily add and replace disks", would be "Easily add, replace, and remove disks". These are the main operations a user would expect to be able to do with something designed around easy data management, and it stands to reason all of them should be (roughly) equally easy.
  23. It could also be argued that it's somewhat expected. I'll grant you that removing a disk is far less common than adding one, but unRAID's big callout is how easily disks can be added, and that you don't have to faff about with matched drives, pools, etc. like you do with something like ZFS. From that, you'd think an obvious extension would be easily adding, removing, and replacing disks.