jonfive

Members
  • Content Count: 27
  • Joined

  • Last visited

Community Reputation

0 Neutral

About jonfive

  • Rank: Member

  1. Oh boy. Jeez, this will be the 4th Seagate drive failure. I guess I'll move the rest over to WD and get to blocking pins. Thanks for the heads up!
  2. Got an alert that there are errors on my parity drive. I ran the SMART extended self-test and it came up with no errors, though the two attributes highlighted in the attached image have me a bit concerned. I'm not sure if it was just a fluke - can I 'clear' the error to see if it pops up again, or is that ill-advised?
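     For reference, this is roughly how to re-run the checks from the console rather than the GUI - a minimal sketch, with /dev/sdb standing in for whatever the parity drive actually is:
        # Full SMART attribute table plus the drive's logged errors
        smartctl -a /dev/sdb
        # Queue another extended self-test, then review the result when it finishes
        smartctl -t long /dev/sdb
        smartctl -l selftest /dev/sdb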
  3. Ahhh, gotcha! That's an excellent explanation. I think I was just treating it like a 'watch folder' rather than considering the SAB/Sonarr communication. It works now - I just backed them both down to /mnt/user/downloads. Thank you very much! You're a real asset to the community.
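     For anyone hitting the same thing, this is roughly what "backing them both down" looks like in the container templates - a sketch only; the /data container path comes from the existing run commands, the rest is illustrative:
        # SABnzbd container: the downloads share mounted at /data
        -v '/mnt/user/downloads/':'/data':'rw'
        # Sonarr container: the same share mounted at the same /data path,
        # so the completed-download paths SABnzbd reports resolve inside Sonarr too
        -v '/mnt/user/downloads/':'/data':'rw'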
  4. I have SABnzbd using another 'complete' folder inside its own, with categories inside that complete folder - hence the complete/complete/sonarr. So just make Sonarr match the output path of SABnzbd plus the sonarr category folder? Sorry, I'm just a little confused - the files do end up in /complete/complete/sonarr when they're finished downloading.
  5. Thanks for taking the time, I really appreciate it.
     Sonarr:
        root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-sonarr' --net='bridge' \
          --log-opt max-size='10m' --log-opt max-file='1' --privileged=true \
          -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' \
          -p '8989:8989/tcp' -p '9897:9897/tcp' \
          -v '/mnt/user/Downloads/Complete/Complete/Sonarr/':'/data':'rw' \
          -v '/mnt/user/Media/':'/media':'rw' \
          -v '/mnt/user/appdata/binhex-sonarr':'/config':'rw' \
          'binhex/arch-sonarr'
        4a9a4e45951b5c7b0c6b82f1211504c6c86a738a7d2d9a2c13ec4a93c8ea3bdf
     Sabnzbd:
        root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-sabnzbd' --net='bridge' \
          --log-opt max-size='10m' --log-opt max-file='1' --privileged=true \
          -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' \
          -p '8080:8080/tcp' -p '8090:8090/tcp' \
          -v '/mnt/user/Downloads/Complete/':'/data':'rw' \
          -v '/mnt/user/appdata/binhex-sabnzbd':'/config':'rw' \
          'binhex/arch-sabnzbd'
        eacb2791776f258d3ba207c2176d4f525b0a45db625f9b9f8e029dd8fc9ca1c2
  6. /mnt/user/downloads/complete/complete/ with the sonarr category, making the completed files go into /mnt/user/downloads/complete/complete/sonarr/. The files are indeed there.
     Wanted page manual import:
     Activity page manual import:
     I've had to recover some data with appdata restore; it's worked just fine for about a year since I had to redo everything. Possibly something got mixed up in the config during the restore? If need be, is there a way I can wipe out the config and 'start fresh' without losing the show database, or reinstall and preserve the DB?
  7. Automatic import issue: the mappings are correct, unless I'm missing something stupid or misunderstanding the data folder. SABnzbd drops the completed Sonarr downloads into /mnt/user/downloads/complete/complete/sonarr/, and Sonarr's data folder is mapped to /mnt/user/downloads/complete/complete/sonarr/. I can manually import successfully from Wanted > Manual Import using the /data/ folder, but when I use the manual import button on the Activity page, it says 'No video files were found in the selected folder.' Am I missing something terribly stupid?
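     One way to sanity-check the mapping from the host side - a sketch, assuming the binhex container names from the templates - is to list /data from inside each container and compare what they actually see:
        docker exec binhex-sabnzbd ls -l /data
        docker exec binhex-sonarr ls -l /data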
  8. The card is an 88SE9215, with one HDD and one optical drive connected. Supposedly it's "bootable, working out of the box" on the compatibility chart. It looks like the drives are found and identified (lines 946 & 948), then it seems to time out, fail to identify, drop to SATA 2, and move on to the next one (starting at line 1005). Any ideas? I have amd_iommu=pt appended; same thing without the amd_ prefix. tower-diagnostics-20190830-2137.zip
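     If it helps narrow things down, a couple of read-only checks from the console - the grep patterns here are approximate, not exact matches for this system's log:
        # Confirm the Marvell controller is bound to the ahci driver
        lspci -nnk | grep -iA3 9215
        # Watch the identify timeouts and SATA link-speed downshifts in the kernel log
        dmesg | grep -iE 'ata[0-9]+'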
  9. Thanks a lot! You're a real asset to the community. The sde drive totally died - nothing in the BIOS either. Lesson learned about not having parity the entire time; just happy I don't keep anything important on there.
  10. Sorry to post on top, but I figured it might be useful - I think this whole situation is coming from that drive (sde), which can't find its filesystem now. I got some dockers to work directly on the cache, but at this point Docker is the least of my problems. I ran the filesystem check and repair; neither gets anywhere - both give roughly the same response, expecting one block but getting another, then stopping.
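      For completeness, this is roughly what the check/repair boils down to on the command line - a sketch only, assuming the disk is XFS and the partition is /dev/sde1:
        # Dry run first: report problems without writing anything
        xfs_repair -n /dev/sde1
        # Actual repair, only once the dry-run output looks sane
        xfs_repair /dev/sde1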
  11. My disk 1 seems to have died since (of course, the same day I get a parity drive :P). Thank you. tower-diagnostics-20190827-0643.zip
  12. I had an issue with Docker giving the loop2 error, sort of like this thread (BTRFS: error (device loop2) in btrfs_sync_log:3168: errno=-5 IO failure), so I went ahead and turned off Docker and deleted the image. I have appdata backed up (in a working, non-error-throwing state). When I turn the image back on to recreate it, it makes whatever drive I put it on read-only: /mnt/user, /mnt/cache, etc. Not sure how to proceed - is there maybe something still cached somewhere that I need to remove manually so Docker can't 'fall back' on it?
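      Before recreating the image, it may be worth checking whether the cache filesystem itself is the thing throwing the errors - a sketch, using the default Unraid cache mount point:
        # Per-device error counters for the cache pool
        btrfs device stats /mnt/cache
        # Verify checksums across the pool; -B waits for the scrub to finish
        btrfs scrub start -B /mnt/cache
        # Kernel-side view of the IO failures
        dmesg | grep -i btrfs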
  13. Just wondering if I misconfigured something along the way. When logged in, CPU usage stays normal - a few percent on each core. As soon as the user is logged out, it pins a core at 100%; it goes away once a user is logged in again. Any ideas? Previous experiences?
      LOGGED IN:
      LOGGED OUT:
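      A quick way to catch the culprit is to sample processes over SSH while the web session is logged out - a sketch, assuming the procps versions of top/ps:
        # One batch sample of the top CPU consumers
        top -b -n 1 -o %CPU | head -20
        # Or the same thing via ps
        ps -eo pid,pcpu,comm --sort=-pcpu | head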
  14. lol, well, I think I messed it up by doing that while not having enough free space - it's completely full or read-only; I'm assuming completely full on that 120 GB drive. At least I made an appdata backup, I guess.
  15. Alright, so I want to remove some of the old SSDs from my cache pool. Currently it's in a sort of JBOD mode, but as one volume. From reading around, I tried to balance with -dconvert=raid1 -mconvert=raid1, but it just refreshes the browser window and doesn't do anything. I narrowed down the disk usage by getting some VMs off the cache and onto the array; I got it down to about 150 GB. My smallest drive (the one I want to remove) is only 120 GB. I assume that trying to make a redundant RAID setup for rebuilding the cache after removal wouldn't work unless the used space is less than the smallest drive, right? If it's helpful:
      Data, RAID1: total=44.00GiB, used=23.89GiB
      Data, single: total=184.00GiB, used=92.25GiB
      System, RAID1: total=32.00MiB, used=48.00KiB
      Metadata, RAID1: total=2.00GiB, used=247.83MiB
      GlobalReserve, single: total=79.28MiB, used=0.00B
      No balance found on '/mnt/cache'
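      For what it's worth, the command-line equivalent of what the GUI should be doing looks roughly like this - a sketch; the device name in the remove step is a placeholder, and the raid1 convert only succeeds if the used data fits twice across the pool:
        # Convert data and metadata to raid1, then watch progress
        btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
        btrfs balance status /mnt/cache
        # Space accounting per profile after the convert
        btrfs filesystem usage /mnt/cache
        # Remove the small SSD once the remaining devices can hold both copies
        btrfs device remove /dev/sdX1 /mnt/cache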