ShadowLeague

Members • 41 posts

  1. I have a Dell T630 currently running unRAID, and I'd like to move unRAID into a VM managed by Proxmox. The PERC card is in JBOD mode. If I set up a VM in Proxmox for unRAID and pass the PERC card through to the VM, will all the current data still be there and accessible to the unRAID VM, or should I migrate the data to another system first and transfer it back afterward?
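     For anyone attempting the same move, PCI passthrough of an HBA in Proxmox amounts to finding the card's PCI address and attaching it to the VM; this assumes IOMMU is already enabled in the BIOS and kernel, and the VM ID (100) below is hypothetical. A minimal sketch:

     ```
     # Identify the PERC controller's PCI address (e.g. 0000:03:00.0)
     lspci -nn | grep -i raid

     # Attach the whole controller to the unRAID VM (VM ID 100 is hypothetical)
     qm set 100 --hostpci0 0000:03:00.0

     # Verify the device is in the VM config
     qm config 100 | grep hostpci
     ```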
  2. What would you recommend if a user wanted to monitor logs from unRAID, Docker containers, and VMs? It looks like a single Cribl Edge instance could potentially do this.
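     Whichever collector ends up being used, Docker can forward container logs to a syslog endpoint natively, which covers the container side; the collector address below is a placeholder. A minimal sketch:

     ```
     # Send one container's logs to a syslog collector
     # (udp://192.168.1.50:514 is a hypothetical collector address)
     docker run -d \
       --log-driver=syslog \
       --log-opt syslog-address=udp://192.168.1.50:514 \
       --log-opt tag="{{.Name}}" \
       nginx
     ```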
  3. I just shared my logs: 455bd7c3-90a9-4842-a521-de24a1645f7d. The log shows the backup acknowledging that /mnt is excluded, but a few lines later it's calculating /mnt for backup anyway. I ended up cancelling the scheduled backup since it was taking several hours and producing a backup hundreds of GB in size.
  4. Update 2: After adding directories to the excluded folders/files list, the backup completed much faster. I had overlooked the 'per container' settings option when I configured the app. But unless I'm misunderstanding the external volume options, I shouldn't have to do this if `save external volumes` is set to `no`.

     Update: I just found the "per container" settings, and Plex did have /mnt/user/ listed, but external volumes are NOT selected for backup. I'm going to add /mnt/user/ to the exclude list and start the backup process. I'll update this post and share my findings.

     Hello, I've noticed that my backups are now exponentially bigger. For example, the folder for Plex is roughly 350GB and its backup is over 3TB. I didn't have compression enabled at the time... I have since enabled it, and the current backup is still bigger than the folder itself (and still growing after 2+ hours of running). I'm on unRAID 6.12.3; the Docker container location is /mnt/user/appdata.
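     A quick way to compare the source folder against the finished backup from the command line (the backup destination below is illustrative, not necessarily the plugin's actual path):

     ```
     # Size of the live Plex appdata folder
     du -sh /mnt/user/appdata/plex

     # Size of the resulting backup (destination path is hypothetical)
     du -sh /mnt/user/backups/appdata/*
     ```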
  5. TLDR: I didn't review the settings closely; appdata backup was backing up my media folders (arrrgh).

     Update: I believe this is related to the new backup plugin. Backups with the new plugin are 10x their original size, and their last-modified times appear to line up with what I observed prior to my restart.

     I'm trying to determine what is writing files to my disk and what those files are. It could be my paranoia, but better safe than sorry. Someone suggested running iotop to see what's going on with my server; here's the output. I'm green to the command... but seeing ?unavailable? with root running it is concerning (til someone says otherwise). In the interim, I'm trying to stop the array, but I'm getting the "try unmounting user shares" message.

     ```
     10:38:52 16601 be/4 root 4278.03 K/s 1432.38 K/s ?unavailable? shfs /mnt/user -disks 4095 -o default_permissions,allow_other,noatime -o remember=0
     10:38:52 20940 be/4 root 1222.29 K/s 2043.52 K/s ?unavailable? shfs /mnt/user -disks 4095 -o default_permissions,allow_other,noatime -o remember=0
     10:38:52 10885 be/4 root 3055.74 K/s 1623.36 K/s ?unavailable? shfs /mnt/user -disks 4095 -o default_permissions,allow_other,noatime -o remember=0
     10:38:52 10886 be/4 root 1833.44 K/s 1661.56 K/s ?unavailable? shfs /mnt/user -disks 4095 -o default_permissions,allow_other,noatime -o remember=0
     10:38:52 10923 be/4 root 3666.88 K/s 1585.16 K/s ?unavailable? shfs /mnt/user -disks 4095 -o default_permissions,allow_other,noatime -o remember=0
     10:38:52 27495 be/4 root 1222.29 K/s 1890.74 K/s ?unavailable? shfs /mnt/user -disks 4095 -o default_permissions,allow_other,noatime -o remember=0
     10:38:52 25824 be/4 root 611.15 K/s 2024.43 K/s ?unavailable? shfs /mnt/user -disks 4095 -o default_permissions,allow_other,noatime -o remember=0
     ```
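     For anyone reproducing this, timestamped per-process output in that shape comes from iotop's batch mode; the exact invocation below is an assumption based on the output format, with flags as documented in the iotop man page:

     ```
     # -b batch (non-interactive), -o only show processes actually doing I/O,
     # -t prefix each line with a timestamp, -P show processes rather than threads
     iotop -botP
     ```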
  6. I converted my cache pool to btrfs with RAID0, and Plex and the other app UIs running from this pool are back to being responsive. Someone on Reddit mentioned that specifying the docker image (and not the folder location) helped with their issue. At this point I'm happy with the performance, and I don't think it's worth further testing. I did keep my second cache pool as ZFS and don't have any issues with it.
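     unRAID drives this conversion from the GUI, but for reference, the underlying btrfs operation is a convert balance; /mnt/cache is unRAID's usual mount point for the first pool, and the metadata profile below is an assumption:

     ```
     # Convert data to RAID0, keeping metadata mirrored (raid1)
     btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache

     # Confirm the resulting profiles
     btrfs filesystem df /mnt/cache
     ```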
  7. Optimizing the libraries had no effect. I'm going to rebuild the cache pool with something other than ZFS to see if that helps.
  8. Are you looking for help setting them up, or are you experiencing the same issue?
  9. For Plex, it's back to taking a while to view recommended items. It might take 10-15 seconds before anything is displayed; after that, it works as expected. Other times it'll time out. I'm optimizing the database to see if that helps. I'm not seeing a delay in SAB, but I don't feel like I've tested it enough to confirm it's fixed.
  10. Might have made progress:

      • Deleting docker apps and re-adding them didn't help
      • Deleted the docker.img file and downloaded a fresh copy
      • Reinstalled docker apps via the Community Apps plugin (thanks to the author(s) for this)
      • So far, the lag experienced in Plex is gone, but more testing to follow

      Maybe the old btrfs settings were causing an issue? I also just specified an image folder and not a docker image... not sure if that made a difference.
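      For anyone recreating docker.img by hand, the path below is unRAID's usual default; confirm yours under Settings -> Docker and stop the Docker service there before deleting anything:

      ```
      # With the Docker service stopped in Settings -> Docker:
      rm /mnt/user/system/docker/docker.img
      # Re-enable the Docker service so unRAID recreates the image,
      # then reinstall containers via Apps -> Previous Apps
      ```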
  11. Greetings, I made the jump from 6.11.5 to 6.12.3. After the upgrade, I converted my two cache pools to ZFS. The cache pool running my docker containers uses two NVMe drives in a RAID0 (yep - I know the risk and make backups). Since converting this to a ZFS cache pool, my containers seem a bit sluggish. For example, Plex spins when I try to navigate from my TV library to recommended TV items, or just times out altogether. I've observed WebUI timeouts with SABnzbd.

      Troubleshooting steps:
      • Restarted unRAID
      • Stopped/started Docker (ver 20.10.24)
      • In Docker, switched from macvlan to ipvlan (FYI, Docker info still shows the filesystem as btrfs)
      • In Plex, refreshed metadata
      • Ensured the appdata folder sits on the ZFS cache pool, and that no secondary storage is defined

      While my system isn't new, it isn't starving for resources (I think). I have two E5-2600 v3 CPUs with 64GB of ECC RAM. On the unRAID dashboard, ZFS is taking about 8GB of RAM, and Docker is steady at 41% utilization. Is there anything else I can try? Is there a step I have missed?
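      A couple of stock ZFS commands can help show whether the pool itself is the bottleneck; `cache` is an assumed pool name, and the arcstats path applies to OpenZFS on Linux:

      ```
      # Live per-device throughput/latency for the pool, refreshed every 5s
      zpool iostat -v cache 5

      # Current ARC size as reported by the kernel module
      awk '/^size/ {print $1, $3/1024/1024 " MiB"}' /proc/spl/kstat/zfs/arcstats
      ```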
  12. I think I solved my own problem. I don't believe I had unassigned the pool devices when I deleted the pool. I did that after a reboot, and my newly created ZFS pool now shows used space and free space. I'm moving files to the new cache pool, and the used and free space are updating.
  13. I just converted my cache pool to ZFS, and when I used the file manager to move my appdata folder to the new ZFS cache pool, the UI says there are 0 bytes used and 0 bytes free. If I use the file manager to calculate the size, it's just under 500GB.
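      When the dashboard numbers look wrong, the pool can be checked directly from the shell; `cache` is an assumed pool name:

      ```
      # Used/available space as ZFS itself sees it
      zfs list -o name,used,avail cache

      # Pool-level view, including capacity and health
      zpool list cache
      ```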