tiwing

Members
  • Content Count

    83
  • Joined

  • Last visited

Community Reputation

2 Neutral

About tiwing

  • Rank
    Newbie

Converted

  • Gender
    Undisclosed


  1. correct. After leaving system and appdata on the "cache" pool and moving all other shares to a different pool, I had no connection loss to my dockers and no GUI interruption. I copied several TB in various tests last night, including krusader disk-to-pool directly, krusader share-to-share, a Windows share-to-share copy, and having Tdarr do a bunch of work on a test folder. I'm at work now, but I'll watch "top" tonight and see what happens there. I also have the netdata docker installed, so I'll watch that too, see if anything looks bizarre, and report back here.
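Since I'll be watching "top" anyway, here's a rough way to tell real compute load apart from IO wait, reading the aggregate counters straight from /proc/stat (a sketch only; field order is per the Linux procfs layout, and the numbers are ticks accumulated since boot, so you'd compare two snapshots during a copy):

```shell
# Read the aggregate "cpu" line from /proc/stat:
# cpu user nice system idle iowait irq softirq ...
read -r _cpu user nice system idle iowait _rest < /proc/stat
total=$((user + nice + system + idle + iowait))
# A high iowait share while copies run means the spikes are IO wait,
# not actual processing.
echo "iowait: $iowait of $total ticks since boot"
```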
  2. that's interesting, looks like I fall into the few-users camp! I just tested my old machine (diagnostics attached), which actually shows the same behaviour in processor usage when writing to the BTRFS pool - in this case the "pool" is a single SSD, as I don't care about redundancy in that box. I never noticed it when it was my primary machine, but I think I know why. That box alternates between full speed and almost-zero write speeds. Always has. Still does. It seems that when the processors get maxed the write speed drops to "slow", then things calm down and the processors return to normal
  3. update: I created a new pool, put the 250GB EVOs back into the box, and moved docker and appdata back to them. Once I started writing large files to cache, the same issue recurred: processors maxed out and it looks like disk IO caused connection issues, including dropping Plex, I assume because Plex couldn't get IO to access the docker image or appdata. Or maybe the networking within dockers... I have no idea. So I've left the twin 480s in a separate pool and switched file writes to that pool while leaving system and appdata on the smaller pool. It works perfectly, and al
  4. also, thinking about the best way to switch back to the 250 EVOs: would it be to create a new cache pool (pool2) of 2x250GB SSDs, move/copy over the appdata and system folders, repoint the shares to "pool2", then remove the 480s? Far better, I think, than moving everything to the array and back again (large Plex library).
  5. damn.. well that's why I'm seeing it now then - my 250s are EVOs and I never noticed a problem with them. Do you think that's largely the cause of the high processor usage? Maybe I'll dump the EVOs back in there - the return window is still open for one of the 480s. edit: follow-up Q - would you recommend separate cache pools for data writes versus appdata/system? It makes a difference to the sizes I'm buying.
  6. kscs-fvm2-diagnostics-20210415-1243.zip Hi all, I'm having an issue where any large write to the cache pool causes what seem to be connectivity issues across the entire unraid server, including dropped connections to dockers and interruption of the GUI from a browser. I've isolated two scenarios where this happens, and how I can prevent it: 1) copying large files to a cache:yes share using krusader; 2) processing files with tdarr from a cache:yes share. NOTE: the tdarr cache space is on an SSD in Unassigned Devices, not on the cache pool itself. setting shares
  7. Thanks!! Sounds like a great midway solution.
  8. Thanks for that. The mirror I was suggesting would be in proxmox on a separate PC, probably with proxmox itself booting off a spinner, and probably ZFS RAID 1 for the VMs. I did my original install of pfsense on bare metal with RAID 1 across 2 SSDs. A waste of space, since pfsense runs mostly from memory, but I had them lying around. Nothing broke... Are you saying to rip the guts out of the supermicro and put an Intel iGPU system in the server chassis, or go with a separate Intel unit? I hear your comments about 24/7. I've thought a lot about it both ways... I'm not challenging you. I do find it so
  9. Within the last 3 months I purchased a new-to-me (and not returnable) supermicro server to act as my primary unraid server, with some pretty great specs. Or so I thought. It's a 36-bay supermicro X9DRi-LN4+ with SAS2 backplanes and dual Xeon E5-2670 processors. I thought I needed it. Then about 2 weeks later I learned about Intel Quick Sync. I have two primary uses for the server: Plex and data storage. Related to Plex are things like tdarr, which handles all my re-encodes for better streaming, plus radarr, sonarr, and bazarr. I share media with family and close frien
  10. I've been playing with pfsense for well over a year now, and from all my research and personal experience so far, I would NEVER NEVER NEVER set up a firewall as a VM (on unraid) if you rely on it as your ONLY firewall. The simple reason is that if something happens and you need to take unraid down, you also lose your network. I've done it, but my VM on unraid acts as a secondary node that is used if I take down the primary. IF you think that one day you MIGHT want to play with primary and secondary boxes in a high-availability setup, you'll need THREE network ports - one for WAN,
  11. ok, thanks. It DID work before and had been working fine for a couple of weeks on 6.9, but manually stopping seems to have triggered the error - so others may also experience this on a manual stop/start of the service. Hopefully this helps someone else. cheers.
  12. Hi, I stopped my VMs and dockers and disabled the VM and docker services in order to configure my 4 onboard NICs (Intel i350 on board). After I configured them, I attempted to restart the services, but they both failed and turned the file location red. It seems to be related to non-standard naming of the docker and libvirt image files - my libvirt file was libvirt2.img and docker was docker4.img. Changing the names back to libvirt.img and docker.img let them start OK. Is this a "feature" or a "bug" in 6.9?
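For anyone hitting the same thing, the workaround is just renaming the images back to the defaults the services expect (a sketch that simulates the directory under /tmp so it's runnable anywhere; on the real server the images would live under the system share, which is an assumption about your layout, and the services should be disabled before touching them):

```shell
# Simulate the system share; on the server this would be the
# docker/ and libvirt/ folders of the system share (assumption).
DIR="$(mktemp -d)"
touch "$DIR/docker4.img" "$DIR/libvirt2.img"   # the non-standard names

# Rename back to the default names before re-enabling the services.
mv "$DIR/docker4.img"  "$DIR/docker.img"
mv "$DIR/libvirt2.img" "$DIR/libvirt.img"
```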
  13. Hi all, hoping for a little help here. I have a supermicro server with an onboard Intel i350, and a second PCI i350 quad for the pfsense VM. I had set up the VM previously by passing through the PORTS of the PCI card on an older unraid server. When I transferred to the new box I lost connectivity to unraid until I disabled all 4 onboard ports and installed an old Realtek card I had lying around. All of which would be OK, except the Realtek seems unstable and I want to get back to actually using the onboard ports. Options I can think of: 1) buy a new PCI card and forget about the o
  14. swapped power supplies to the other (lower-powered) UPS, ensured the connection in unraid, and pulled the plug. Totally normal controlled shutdown and no parity check after restart. So clearly something is wrong with the other UPS, even though it worked fine a month ago. Must just be coincidence...! (I don't like coincidences). In all your opinions, is a smart UPS worth the extra cost? The UPS will only ever have one server attached to it. In my case, I start a controlled shutdown after 60 seconds of no power and have timeouts set to 5 minutes. It doesn't have to keep running forever, it jus
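For reference, the shutdown behaviour I described maps to a couple of lines in apcupsd.conf (unraid's built-in UPS support uses apcupsd; the cable/type values below are assumptions for a USB-connected unit, so adjust to your hardware):

```
UPSCABLE usb
UPSTYPE usb
TIMEOUT 60        # start a controlled shutdown after 60 s on battery
# Alternatively, BATTERYLEVEL or MINUTES trigger the shutdown on
# remaining charge percentage or estimated runtime instead of elapsed time.
```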
  15. That's always my first go-to. I'm not an expert in any of this stuff. 2) I double-checked that before pulling the plug this morning... I HAVE done that before and spent an hour scratching my head going WTF...!! But not this time. The UPS does have both, but I'm plugged into the "backup" section for both power supplies. 1) will test tomorrow with the other UPS... it's old too, so it won't necessarily be conclusive. But it will be an indication!