Congles

Members
  • Posts: 21
Congles's Achievements
  • Noob (1/14)
  • Reputation: 7

  1. Much appreciated insight. If I delete my Domains share, recreate it with the correct settings, and copy the files back, will that be sufficient to fix this issue going forward?
  2. Thanks @JorgeB. A short time ago I realised the data on the pool was lost; thankfully I did have recent backups, and I've just finished restoring everything, minus one of the NVMe drives, which I think is dead. It's my "Domains" share that is set to NOCOW, because the Unraid GUI help says: "We recommend this setting for shares used to store vdisk images, including the Docker loopback image file. This setting has no effect on non-btrfs file systems." As my Domains share is used solely for vdisks, I presumed this was the right approach. Should I set it back to Auto? And if so, would I need to empty the share, recreate it, and copy the files back so they inherit the COW attribute? (See the attribute-check sketch after this list.)
  3. Hey all, I have two 2TB NVMe drives in a btrfs RAID1 pool and am getting errors. I've tried running a scrub, but it aborts immediately. Since it's RAID1, can I simply remove the problematic drive and replace it later this week when the new one arrives? (See the scrub/replace sketch after this list.) Diagnostics are attached. diagnostics-20230506-1013.zip
  4. We just moved to TrueNAS Core (virtualised on Unraid) in September to support our bandwidth needs... Looking like it won't be long before we move back (Core sucks for the unfamiliar). As a side note, having support for 30+ drives would be nice for us. Our ZFS pool is 24 drives and we have a JBOD case to add another 36 drives over the next 12 months. We can manage otherwise though.
  5. Same here on 6.9.2: the NVMe temperature setting won't save, though normal disks will. Tried Firefox and Edge.
  6. +1. I have email alerts turned on and don't see a way to stop the alerts arriving in my email other than having the temperature thresholds actually stick. I've tried deleting the cfg file and setting the values again, but the GUI resets to default and I still get notifications (the cfg file has the correct settings). Curiously, it only happens on my NVMe cache drives (btrfs RAID0), not my spinning disks. (A possible smartctl stopgap is sketched after this list.)
  7. Thanks @ChatNoir and @JorgeB for your help. Your discovery fixed my problem; the log events have now gone and everything seems to be stable again. Thanks!
  8. Ahh thanks, so it seems to be an AMD/architecture thing and not an Unraid OS/server thing. I'll have a further read. Hopefully this fixes the recent instability I've been having. Thanks guys.
  9. @ChatNoir thanks for pointing that out. Do you know, or have a link to, why we can't run the RAM at even spec speeds? I'm about to set it to 2667, but I'm kinda bummed I didn't know about this before I purchased the RAM, because I could have saved myself a bit of $$ going for slower RAM (especially 128GB of the stuff). Thanks again!
  10. Oh wow... thank you. I didn't realise the RAM was overclocked; it shouldn't be. I must have got confused one day (as I have multiple machines) and set the RAM to 3600 instead of 3200. I'll be fixing that ASAP and will report back if I continue to get the error. (A quick way to verify the configured speed is sketched after this list.)
  11. I have two Sabrent 2TB NVMe drives in a btrfs RAID0 cache pool called "Rocket". Does anyone know what the below errors mean? I started getting them last week; I wiped the cache and recreated it, and they've come back. I've attached the Unraid diagnostics too (see the diagnostics sketch after this list). Thanks in advance! tower-diagnostics-20210614-1412.zip
  12. Hey @primeval_god, thanks again for all the help on this. I've reached a resolution, albeit not what I set out to do, but a better result: we're now using Syncthing instead of Resilio, which has the bonus of being free, and some recent improvements (since we last tested it about two years ago) have made it easily twice as fast as Resilio, probably three times faster. As for the container mappings, I ended up moving everything to a single share and am now going to look into how to keep a single folder within that share available on the cache for my own working files. I suspect this will require another local-sync program to create and maintain a mirror of that folder on the cache drive (a minimal rsync sketch follows this list). Any thoughts to steer me in the right direction would be appreciated, but I'll do my research and open another thread if I need to. Thank you a tonne again!
  13. Oh I see what you're saying. I'll give that a try at some point in the next week or so. For now it'll have to wait, as I have about 10 sync jobs I'd need to redirect before removing the /sync binding.
  14. Oh yep, I did try that, but unfortunately it's baked into the container; I also noticed someone discussing it on the linuxserver Resilio thread. Basically you can bind /sync to anywhere you like, but all the folders for sharing need to be children of that /sync location (see the bind-mount sketch after this list).
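
For posts 1-2: on btrfs, new files inherit the NOCOW ('C') flag from their parent directory at creation time, so the recreate-and-copy-back plan can be verified directly. A minimal sketch, assuming the share lives at /mnt/user/domains (the path and image names are hypothetical):

```bash
# Check whether the share's directories still carry the NOCOW ('C') flag.
lsattr -d /mnt/user/domains            # a 'C' in the flags column means NOCOW
lsattr -d /mnt/user/domains/*          # per-VM subdirectories

# btrfs can't toggle COW on a file that already has data, so clear the flag
# on the directory, then copy (not move) each image so it is recreated as a
# normal COW file. Copying within the same directory avoids staging the
# large image elsewhere:
chattr -C /mnt/user/domains
cp --reflink=never /mnt/user/domains/vm1/vdisk1.img /mnt/user/domains/vm1/vdisk1.img.new
mv /mnt/user/domains/vm1/vdisk1.img.new /mnt/user/domains/vm1/vdisk1.img
```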
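For post 3: a sketch of the usual sequence for checking and replacing a device in a btrfs RAID1 pool, assuming the pool is mounted at /mnt/cache and the failing device is /dev/nvme1n1 (both assumptions):

```bash
btrfs device stats /mnt/cache        # per-device read/write/corruption counters
btrfs scrub start -B /mnt/cache      # -B stays in the foreground and prints a summary
btrfs scrub status /mnt/cache        # shows whether the scrub aborted

# RAID1 keeps a second copy, so the pool can run degraded until the new
# drive arrives (if the bad device is already gone, mount with -o degraded).
# In-place replacement once the new drive is installed:
btrfs replace start /dev/nvme1n1 /dev/nvme2n1 /mnt/cache
btrfs replace status /mnt/cache
```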
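For posts 5-6: while the GUI thresholds won't stick, one stopgap is a small script (schedulable via cron or the User Scripts plugin) that reads the NVMe temperature with smartctl and only alerts above your own limit. The device path and threshold here are assumptions:

```bash
#!/bin/bash
# Hypothetical threshold in Celsius; adjust per drive.
THRESHOLD=60

# smartctl's NVMe output includes a "Temperature: NN Celsius" line.
TEMP=$(smartctl -A /dev/nvme0 | awk '/^Temperature:/ {print $2; exit}')

if [ -n "$TEMP" ] && [ "$TEMP" -gt "$THRESHOLD" ]; then
    echo "nvme0 at ${TEMP}C (limit ${THRESHOLD}C)" | logger -t nvme-temp
fi
```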
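For posts 9-10: a quick way to confirm what the DIMMs are rated for versus what the BIOS is actually clocking them at, run as root from the Unraid console:

```bash
# "Speed" is the DIMM's rated speed; "Configured Memory Speed" (called
# "Configured Clock Speed" on older dmidecode versions) is the running speed.
dmidecode -t memory | grep -E 'Speed|Part Number'
```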
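For post 11: the kernel log usually names the failing device and error type, and btrfs keeps persistent per-device error counters. A sketch, assuming Unraid mounts the "Rocket" pool at /mnt/rocket:

```bash
dmesg | grep -iE 'btrfs|nvme' | tail -n 50   # recent kernel-side errors
btrfs device stats -c /mnt/rocket            # -c: non-zero exit if any counter is non-zero
```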
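For post 12: a one-way mirror of a single working folder onto the cache can be as simple as a scheduled rsync job. The paths are assumptions; the trailing slashes make rsync copy the directory contents rather than the directory itself:

```bash
# --delete makes the mirror track removals too; drop it for an additive copy.
rsync -a --delete /mnt/user/storage/working/ /mnt/cache/working/
```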
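For posts 13-14: since the linuxserver container fixes /sync as the parent of all shared folders, the workaround is to bind /sync to one parent share and keep every sync job underneath it. A sketch with hypothetical host paths:

```bash
docker run -d --name=resilio-sync \
  -v /mnt/user/appdata/resilio-sync:/config \
  -v /mnt/user/syncdata:/sync \
  lscr.io/linuxserver/resilio-sync:latest
# Each sync job is then added inside the container as /sync/<job-name>.
```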