dRuEFFECT

Everything posted by dRuEFFECT

  1. This was very helpful and works for my use case, thanks!
  2. Sounds like this can make Emby play the same content across multiple devices, is that right? I'd like to try it; I installed it with unraid, but there are no instructions on how to use it once it's set up. Any pointers?
  3. Never mind, looks like the raid1 btrfs filesystem got corrupted and is read-only YET AGAIN. I hate this shit; this is like the 5th time now. What's the point of a raid1 mirror when the whole filesystem gets corrupted this frequently?
  4. Since updating to 6.11 I have been unable to update most docker containers. Some containers are orphaned, and when I try to remove them manually I get a vague "Server Error". When I try to update a single container or update all containers, the output shows that removal of the existing image fails, and the new image can't be created because an image with that name already exists. I'm stuck with some permanently orphaned images and some dockers that fail to start with "Server Error". I have docker directory enabled and tried scrubbing the filesystem, but it just shows status: aborted. Screenshots and diagnostics attached. unraid-diagnostics-20221006-1626.zip
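     A sketch of what I'm planning to try next from the terminal, in case that helps narrow it down (abc123def456 is a placeholder for whatever image ID docker actually reports):

        # list all images, including dangling ones, to find the stuck ID
        docker images -a
        # force-remove the conflicting image by ID so the update can re-pull it
        docker rmi -f abc123def456
        # clean out leftover dangling images and stopped containers
        docker system prune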
  5. This is the first I'm hearing of memtest86. My motherboard doesn't support ECC RAM, so I'm limited to non-ECC; could that be a factor here? To me it just seems the raid1 cache drives were not balanced and threw a ton of errors without giving me a way to rebalance. I wound up swapping the older SSD with the other older SSD and let unraid rebalance, and I've been fine ever since, so I guess this wasn't filesystem corruption after all. I'm still not 100% sure what happened.
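     For anyone landing here later, what I mean by "rebalance", roughly, assuming the pool is mounted at /mnt/cache:

        # per-device error counters for the pool
        btrfs device stats /mnt/cache
        # rebalance data and metadata back into raid1 across both devices
        btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
        # check progress while it runs
        btrfs balance status /mnt/cache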
  6. Whoops, ok. Here it is. unraid-diagnostics-20220716-1447.zip
  7. This is now the third time I have had to back up, reformat, and restore my cache pool due to BTRFS filesystem issues. Logs attached. Any idea what's causing this? I thought it was bad cables, as I would see the ECC error count on the SSDs move from 0 to 1 and back regularly, so I replaced my SATA cables. Then one drive in particular still kept throwing that error a lot, so I bought a new SSD to replace it in the pool, but it's now my understanding that the error count is just some kind of bug with MX500 SSDs, and I'm still having this issue. I really like the idea of having SSD fault tolerance with a pool, so I really don't want to go to a single-drive XFS cache. Plus my OCD kicks in when I see that exclamation point on the shares tab telling me my cache shares aren't protected. syslog.2
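     For anyone comparing notes, this is roughly what I run to keep an eye on those errors (sdX is a placeholder for the SSD, and I'm assuming the pool is mounted at /mnt/cache):

        # SMART attributes for the SSD, including that ECC error counter
        smartctl -a /dev/sdX
        # btrfs' own per-device error counters; -z resets them after printing,
        # handy for checking whether errors return after a cable swap
        btrfs device stats -z /mnt/cache
        # scrub verifies checksums across the whole mirror
        btrfs scrub start /mnt/cache
        btrfs scrub status /mnt/cache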
  8. I posted here the other day thinking I had it working with nested folder mappings, but like can0n says, that doesn't work. I deleted my last post since I was completely wrong. But I'm happy to say that I believe I actually got it working with symbolic links, specifically RELATIVE symbolic links, AND I needed to map the path of the symbolic link's destination into the docker as well. So like lotetreemedia said: copy/move the directories, create RELATIVE symbolic links (ln -rs, not ln -s), and add a new mapped path to the destination. When Plex, inside the docker, sees the symbolic link, it tries to follow that path, so a standard symbolic link won't work: the /mnt/user/newshare directory doesn't exist within the context of the docker, so you need to also map that share to a location within the docker. Maybe you could do a standard symlink too and just map the full host path to the same container path, but whatever; I used relative symlinks and just mapped the share into the container's root. I got it to work on all 3 folders: Cache, Media, and Metadata. Stop the docker, move the folders, and create the links:

        cd /mnt/user/appdata/plex/config/Library/'Application Support'/'Plex Media Server'
        mv Cache /mnt/user/metadata/Plex-Cache
        mv Media /mnt/user/metadata/Plex-Media
        mv Metadata /mnt/user/metadata/Plex-Metadata
        ln -rs /mnt/user/metadata/Plex-Cache Cache
        ln -rs /mnt/user/metadata/Plex-Media Media
        ln -rs /mnt/user/metadata/Plex-Metadata Metadata

     Now your folders are linked. Then add a mapped path to the new metadata share and BOOM, Bob's your uncle.
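     To sanity-check from inside the container (assuming the container is named plex, appdata is mapped to /config, and the new share is mapped to /metadata; adjust for your own mappings):

        # the links should show relative targets that resolve under /metadata
        docker exec plex ls -l '/config/Library/Application Support/Plex Media Server'
        # and the destination folders should be visible in the container
        docker exec plex ls /metadata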
  9. Not sure how I got stuck with my email address as my username here on the forum, but this isn't ideal and I can't seem to change it. Can an admin update my username to dRuEFFECT? Please and thank you.
  10. edit: I re-ran it and yeah, it appears to skip existing files pretty quickly.
      Original post: btrfs restore ran for a while but hit a "not enough memory" error. Not sure what to do next. Would re-running skip over what's already been restored?
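     For context, the command in question, roughly (sdX1 and the destination are placeholders for the failed pool member and a folder on the array):

        # pull files off the unmountable device into a folder on the array;
        # existing files at the destination are skipped by default (no -o),
        # which is why a re-run picks up roughly where it left off
        btrfs restore -v /dev/sdX1 /mnt/disk1/cache-restore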
  11. For my new cache pool, would it make sense to run a single-drive primary cache and schedule an rsync of the entire drive to a secondary SSD pool (rough sketch below)? This way both drives are XFS, both drives have independent filesystems, and I could stop the array and hot-swap the backup cache in case of failure. Is this plausible, or am I missing something?
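     Something like this on a schedule, assuming the backup pool is named cache_backup (the trailing slashes matter, so the contents sync rather than nesting a folder):

        # mirror the primary cache onto the backup pool; -a preserves perms/times,
        # -H keeps hardlinks intact, and --delete makes it an exact mirror
        # (it removes files from the backup that no longer exist on the source)
        rsync -aH --delete /mnt/cache/ /mnt/cache_backup/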
  12. I have two 500GB Crucial SSDs mirrored in a raid1 cache pool. I keep everything updated and am on the latest 6.9.2. Everything was swimming along nicely until this morning, when I updated Fix Common Problems and then went to CA, which showed me that docker was disabled. I checked things out and saw that my cache drive was in read-only mode, with one of the two SSDs showing the BTRFS errors that triggered this. I was confused why I would have any issues; if one drive had a problem, wouldn't the second be able to keep things afloat? So I tried stopping the array so I could add a spare SSD to the pool devices and copy off the cache drive contents, then possibly drop BTRFS for XFS and run a single cache drive with scheduled backups. Stopping the array got hung up on unmounting the cache drive. I tried manually unmounting, but no joy. So I rebooted into safe mode, and now both SSDs are showing as unmountable with no filesystem. For the love of god, please tell me there's a way to recover the cache drive contents. I've been running a mirrored cache pool solely for the peace of mind of having redundancy; how could this be happening? What can I do?
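     For anyone else who ends up in this spot: my understanding is the first things to try are read-only recovery mounts from the terminal with the array stopped, roughly like this (sdX1 is a placeholder for one of the pool members):

        mkdir -p /x
        # raid1 pool with one bad member
        mount -o ro,degraded /dev/sdX1 /x
        # or fall back to an older copy of the filesystem tree
        mount -o ro,usebackuproot /dev/sdX1 /x
        # if either mounts, copy everything off before changing anything else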
  13. Just got this sorted out myself. To properly pass the GPU through to the VM, you first need to set "PCIe ACS override" to "Downstream" in the VM Manager settings, then reboot. This is the first VM I ever needed to build, and Spaceinvader's VM walkthrough for unraid was pretty helpful in piecing this together.
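     To confirm the override actually worked, this loop (run from the unraid terminal) lists the IOMMU groups; after the reboot, the GPU and its HDMI audio device should sit in a group of their own:

        # print each IOMMU group and the PCI devices inside it
        for g in /sys/kernel/iommu_groups/*; do
          echo "IOMMU group ${g##*/}:"
          for d in "$g"/devices/*; do
            lspci -nns "${d##*/}"
          done
        done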
  14. Diagnostics attached. andrews-unraid-diagnostics-20210103-1406.zip
  15. I'm fairly new to unraid, having moved over from FreeNAS about 6 weeks ago. I have 2 parity 12TB, 4 array 12TB, 6 array 6TB, and 2 cache 500GB. I was playing around with different share settings and had deluge actively downloading to /mnt/cache/downloads/, then moving completed downloads to a different folder under /mnt/user0/downloads-seeding/. I did this so download writes go to the SSDs until the files are complete, and the completed files are then stored on the array for sonarr/radarr to move/copy/hardlink into /mnt/user/media. (I found that when downloading to /mnt/user/downloads and moving to /mnt/user/downloads-seeding/, the data stayed on the cache pool, and subsequently the copies to /mnt/user/media/ were also on the cache and not the array.) This was working fine for a week or so until last night, when my cache drive filled up to 100% while I was asleep, but it seemingly continued to function properly for a while. I have a 2TB SSD on the way to mount as an unassigned drive for downloads, so the cache can be dedicated to important things like appdata. I may have corrupted my docker containers when the cache filled up, but that issue is now secondary, and I have disabled docker until I can resolve the issue below.
      When I woke up, I found FCP reporting that disk1 was filled up or could not be written to. I had recently changed my media share to "Fill-up" allocation with a 50GB free-space minimum, but the main tab showed disk1 at only 10.1TB used of 12TB total, so it isn't full yet. I tried rebooting and restarting the array; now the drive shows as unmountable, with an option to format and create a filesystem on the unmountable disk. There's data on the disk, so obviously I don't want to do that.
      I was looking at this thread about a similar issue with an unmountable drive, where the OP said he was able to emulate the drive and run a command to resolve his issue, but I can't find any info on what it means to emulate a disk or how to reproduce exactly what he did. I'm running a diagnostics export if that's necessary, but it's taking a long while. In the meantime, can someone explain how I could emulate the disk in order to appropriately run the above command? andrews-unraid-diagnostics-20210103-1406.zip
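     For later readers: as I understand it now, "emulating" means unassigning the disk so parity reconstructs its contents on the fly, and the repair then runs against the emulated device. Assuming the command in that thread was xfs_repair (the disk is XFS), the procedure is roughly:

        # stop the array, unassign disk1, then start the array in Maintenance mode;
        # disk1 is now emulated from parity as /dev/md1 (match the number to the slot)
        # dry run first: -n reports problems without writing anything
        xfs_repair -n /dev/md1
        # actual repair
        xfs_repair -v /dev/md1
        # if it insists on -L (zeroing the log), note that it can lose recent writes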