Cpt. Chaz

Everything posted by Cpt. Chaz

  1. When in safe mode, what's it showing for the appdata folder path?
  2. Man, it looks like you saved me from having to return this drive - thank you so much. Currently 60% through the reformat running the command you mentioned. Just out of curiosity, I saw another command that some folks have apparently used to fix these Type 2 protected drives, and I'm wondering if it basically does the same thing, or if one is better than the other. Here it is: sg_format -v --format --size=512 /dev/sXX Thoughts?
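     (For anyone landing here later, a rough sketch of the two variants people mention - the device path is a placeholder, and both commands wipe the disk:)

       # check current sector size and whether protection info (prot_en / p_type) is set
       sg_readcap --long /dev/sgX

       # low-level format back to 512-byte sectors (the command quoted above)
       sg_format -v --format --size=512 /dev/sgX

       # variant that also disables protection info during the format
       sg_format -v --format --fmtpinfo=0 /dev/sgX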
  3. So after doing a little digging, I found, buried down in a forum, that the Dynamix Buttons plugin can cause problems with preclear. I uninstalled Buttons, and it seems to have resolved my problems with my unassigned SSD. Then I tried to run the preclear on my 10TB SAS drive again. It seemed to start fine, then speeds dropped down to about 5 MB/s, and my syslog started filling up with errors again. I've attached the preclear log and my syslog; I would really appreciate any thoughts here. The disk seems to pass SMART tests, but I keep having problems with it. It's a used SAS disk from eBay, with low hours. Wondering if it's the disk, maybe my controller or cables, I dunno... kal-el-diagnostics-20200708-1413.zip preclear_disk_2YHTEGWD_30050.txt
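     (For reference, what I mean by "passes SMART" - the fuller SAS read-out beyond the basic self-test can be pulled like this; the device path is a placeholder:)

       # full SMART/SCSI dump: self-test results, error counter logs, grown defect list
       smartctl -x /dev/sdX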
  4. Just to throw another $0.02 in the bucket, I've been doing something similar but via the Mac route. Since SpaceInvader One released the macinabox container last year, I spun up a Mac VM and threw my QuickBooks Server for Mac on it, and it's been running flawlessly for months. My clients are still Macs on the local network, just running the server portion on the (dedicated) Mac VM, and I have no complaints. But mark me down as a +1 if anyone ever manages to containerize a QB instance.
  5. Today I ran Docker Safe New Permissions, and it's hanging on the /mnt/disk4/Backups share. /mnt/disk1-3/Backups ran fine. Here's the syslog output:

       Jul 2 11:26:24 Kal-El emhttpd: cmd: /usr/local/emhttp/plugins/fix.common.problems/scripts/newperms.sh
       Jul 2 11:41:19 Kal-El emhttpd: error: send_file, 151: Broken pipe (32): sendfile: /usr/local/emhttp/update.htm
       Jul 2 11:41:19 Kal-El emhttpd: error: send_file, 151: Broken pipe (32): sendfile: /usr/local/emhttp/update.htm

     Has anybody ever seen this "Broken pipe" update error before, or know what it means? If it makes any difference, I had no problems running newperms in the terminal on my "Backups" share (roughly as sketched below).
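     (A sketch of the manual run - the path is an example, and note it recursively resets ownership and permissions:)

       # Unraid's newperms script against a single disk share
       /usr/local/sbin/newperms /mnt/disk4/Backups

       # roughly what the script does under the hood
       chown -R nobody:users /mnt/disk4/Backups
       chmod -R u-x,go-rwx,go+u,ugo+X /mnt/disk4/Backups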
  6. Well, it looks like I won't be alone in posting a question about what my error logs mean here, haha! On a Dell R720, I added an unassigned SSD a while back that was showing this error: [error screenshot] I posted about it in the UAD forum, and somebody advised that it looked like maybe a bad cable, since the disk SMART report came back good. However, yesterday I went to preclear a 10TB SAS drive - low hours, clean SMART report - and started getting errors trying to clear it. It would run for about 3 minutes, then stop and actually report that it completed successfully, but it definitely hadn't. I've attached the preclear log txt for that disk. I left the disk in the server, but not mounted or anything - and with nothing in process on the disk, my syslog is completely full today. Disk log looks like this: [disk log screenshot] The preclear log was showing I/O errors on the new SAS disk too, which has me wondering if this is in fact a problem with my backplane, my SAS cables, or both. These two disks are on the same column of the backplane, using the same SAS cable connection, so it seems likely to me, but I sure would love to have a 2nd or 3rd opinion before I start replacing more hardware. (Syslog diagnostics also posted, just in case.) preclear_disk_2YHTEGWD_47254.txt kal-el-diagnostics-20200701-1337.zip
  7. Found the issue - even simpler than I thought. I forgot to put the container onto the letsencrypt docker network instead of bridge mode. Only took me 5 days to realize it.
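     (In case it saves someone else 5 days - the command-line version of the check/fix; the network and container names here are placeholders for whatever yours are called:)

       # see which networks exist and which one letsencrypt is attached to
       docker network ls
       docker network inspect my_proxy_net

       # move the app container off bridge and onto the proxy network
       docker network disconnect bridge heimdall
       docker network connect my_proxy_net heimdall

     (In the Unraid GUI, this is just the container's Network Type dropdown.)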
  8. Crap, that drive is plugged straight into the backplane. Thanks for looking, I’ll see what I can do.
  9. Hi all. I'm getting a weird disk error on a fairly new (couple months old) unassigned SSD. I haven't noticed any performance problems other than TRIM failing. Here's the disk log output:

       Jun 21 01:00:56 Kal-El kernel: sd 1:0:3:0: [sde] tag#1704 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
       Jun 21 01:00:56 Kal-El kernel: sd 1:0:3:0: [sde] tag#1704 Sense Key : 0x5 [current]
       Jun 21 01:00:56 Kal-El kernel: sd 1:0:3:0: [sde] tag#1704 ASC=0x21 ASCQ=0x0
       Jun 21 01:00:56 Kal-El kernel: sd 1:0:3:0: [sde] tag#1704 CDB: opcode=0x42 42 00 00 00 00 00 00 00 18 00
       Jun 21 01:00:56 Kal-El kernel: print_req_error: critical target error, dev sde, sector 1950394355
       Jun 21 01:00:56 Kal-El kernel: BTRFS warning (device sde1): failed to trim 1 device(s), last error -121
       Jun 21 03:47:43 Kal-El kernel: sd 1:0:3:0: [sde] tag#711 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x00
       Jun 21 03:47:43 Kal-El kernel: sd 1:0:3:0: [sde] tag#711 CDB: opcode=0x28 28 00 00 37 11 b0 00 00 20 00
       Jun 21 03:47:43 Kal-El kernel: print_req_error: I/O error, dev sde, sector 3609008
       Jun 21 04:54:20 Kal-El kernel: sd 1:0:3:0: [sde] tag#2505 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x00
       Jun 21 04:54:20 Kal-El kernel: sd 1:0:3:0: [sde] tag#2505 CDB: opcode=0x28 28 00 01 6d 40 88 00 03 60 00
       Jun 21 04:54:20 Kal-El kernel: print_req_error: I/O error, dev sde, sector 23937160
       Jun 21 11:24:34 Kal-El kernel: sd 1:0:3:0: [sde] tag#2521 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x00
       Jun 21 11:24:34 Kal-El kernel: sd 1:0:3:0: [sde] tag#2521 CDB: opcode=0x28 28 00 04 46 5e 50 00 00 48 00
       Jun 21 11:24:34 Kal-El kernel: print_req_error: I/O error, dev sde, sector 71720528
       Jun 21 11:52:55 Kal-El kernel: sd 1:0:3:0: [sde] tag#1491 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x00
       Jun 21 11:52:55 Kal-El kernel: sd 1:0:3:0: [sde] tag#1491 CDB: opcode=0x28 28 00 00 e8 2a b8 00 01 40 00
       Jun 21 11:52:55 Kal-El kernel: print_req_error: I/O error, dev sde, sector 15215288
       Jun 21 11:52:55 Kal-El kernel: sd 1:0:3:0: [sde] tag#1492 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
       Jun 21 11:52:55 Kal-El kernel: sd 1:0:3:0: [sde] tag#1492 Sense Key : 0x2 [current]
       Jun 21 11:52:55 Kal-El kernel: sd 1:0:3:0: [sde] tag#1492 ASC=0x4 ASCQ=0x2
       Jun 21 11:52:55 Kal-El kernel: sd 1:0:3:0: [sde] tag#1492 CDB: opcode=0x28 28 00 04 4b b6 50 00 01 a0 00
       Jun 21 11:52:55 Kal-El kernel: print_req_error: I/O error, dev sde, sector 72070736
       DONE

     "Critical target error" and "I/O error" certainly sound ominous. Running SMART tests on the disk doesn't show any problems. Any ideas?
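     (Follow-up for anyone trying to reproduce this: the trim failure can be triggered manually, outside the scheduled job - a sketch, mount path is a placeholder:)

       # verbose manual TRIM of the mounted filesystem
       fstrim -v /mnt/disks/my_ssd

     (FWIW, the sense data above - ASC=0x21 ASCQ=0x0 - decodes as "logical block address out of range".)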
  10. That example seems to be for setting up Heimdall as the landing page for mysite.com, instead of on its own subdomain heimdall.mysite.com, which is all I'm going for. It should be less complicated; I feel like I'm missing something simple here, just don't know what.
  11. Hey guys, I read through the thread here and saw folks dealing with a few reverse proxy problems, but didn't see a solution for me. I've kept my setup fairly simple with a custom domain, "mysite.com". I've got no issues getting this to work for other CNAME instances, but I can't seem to get it working for Heimdall. I'm using linuxserver's letsencrypt default subdomain config:

       # make sure that your dns has a cname set for heimdall

       server {
           listen 443 ssl;
           listen [::]:443 ssl;

           server_name heimdall.*;

           include /config/nginx/ssl.conf;

           client_max_body_size 0;

           # enable for ldap auth, fill in ldap details in ldap.conf
           #include /config/nginx/ldap.conf;

           # enable for Authelia
           #include /config/nginx/authelia-server.conf;

           location / {
               # enable the next two lines for http auth
               #auth_basic "Restricted";
               #auth_basic_user_file /config/nginx/.htpasswd;

               # enable the next two lines for ldap auth
               #auth_request /auth;
               #error_page 401 =200 /ldaplogin;

               # enable for Authelia
               #include /config/nginx/authelia-location.conf;

               include /config/nginx/proxy.conf;
               resolver 127.0.0.11 valid=30s;
               set $upstream_app heimdall;
               set $upstream_port 443;
               set $upstream_proto https;
               proxy_pass $upstream_proto://$upstream_app:$upstream_port;
           }
       }

     Here's a screenshot of my container config: [container config screenshot] I've got Heimdall set up and working just fine on LAN; I just can't get to it on the reverse proxy side. During troubleshooting, I tried changing the upstream to match the container's 2443 port (set $upstream_port 2443;), but it didn't make any difference, so I reverted back to the default 443 until I get some guidance. I've triple-checked my CNAME in Cloudflare at heimdall.mysite.com. Any help is much appreciated, thanks!
  12. I tried installing two new instances of linuxserver's jackett, with different appdata paths and ports, exactly like your screenshots (including the webui port). In instance 1, I loaded indexer ABC. Then, when I turned on and opened instance 2, it showed indexer ABC already loaded.
  13. Yep, that's exactly what I did too, using binhex's container. However, when both containers are on, only one works; the other fails on all indexers. What's curious is that when I try to open the log for each container (jackett1 and jackett2), the log window for jackett2 reverts to the same window as jackett1. The log also shows Content root path: /usr/lib/jackett/Content - I wonder if that has something to do with my problem? Maybe I should just switch containers 🤨 EDIT: Nope, that's not it.
  14. Can confirm. However, the provided instructions didn't work for me exactly: I never could get the modprobe change to the go file to persist after a reboot. But once I re-apply the change after each boot, it kicks off without a hitch.
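     (For context, the edit in question looks roughly like this - the module name is a placeholder, use whichever module the instructions call for:)

       # /boot/config/go - Unraid runs this script at every boot
       #!/bin/bash
       # load the kernel module before the management utility starts
       modprobe i915
       # Start the Management Utility
       /usr/local/sbin/emhttp &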
  15. You're using the same linuxserver docker, just 3 different instances - I wonder if that has anything to do with your success? I tried different ports and appdata... At your convenience, would you mind sharing a screenshot of your container configs with ports?
  16. Was looking at running two separate instances of the jackett docker, one binhex and one linuxserver, in hopes of speeding up query time. But even with different ports and appdata folders, I can't seem to get both of them working side by side (roughly the setup sketched below). Has anybody ever tried this?
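     (A sketch of what I mean by separate ports and appdata - image names are the real ones, host paths and ports are examples:)

       # instance 1: binhex image on the default webui port
       docker run -d --name=jackett1 -p 9117:9117 \
         -v /mnt/user/appdata/jackett1:/config binhex/arch-jackett

       # instance 2: linuxserver image on a different host port
       docker run -d --name=jackett2 -p 9118:9117 \
         -v /mnt/user/appdata/jackett2:/config linuxserver/jackett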
  17. I've got a decent amount, yeah. Enough that I think it would be worth trying. If it doesn't work, I'll revert.
  18. I saw somewhere that folks were downloading to RAM instead of disk. I tried adding a container path to /tmp and setting it as the temporary download location, and it threw up all kinds of errors in the GUI. Has anybody here had success with this (a mapping along the lines sketched below)?
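     (The mapping I was attempting, roughly - the container here is just an example downloader, and anything in /tmp is lost on reboot since stock Unraid runs from RAM:)

       # host /tmp (RAM-backed) for incomplete files, array share for finished ones
       docker run -d --name=sabnzbd \
         -v /tmp/incomplete:/incomplete-downloads \
         -v /mnt/user/downloads:/downloads \
         linuxserver/sabnzbd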
  19. Sorry for the delay, I didn't have my notifications on... Just to make sure I'm following correctly: you're going to keep the working Plex appdata (PAD) on your unassigned device drive, then rsync changes only to a duplicate, non-working copy in /mnt/cache/appdata so it gets backed up through CA Backup/Restore - is that right? If so, that means double the storage... Just thinking out loud, but for example, my PAD is around 300GB. Not sure what yours is, but it wouldn't seem ideal to keep that on an unassigned device and in the cache pool. I don't see why it wouldn't work, but it does seem overly redundant unless you have gobs of storage to fill or a really small PAD. Again, I think that's where something like Syncthing would come in handy, to file-sync straight from the unassigned device directly to your backup destination (this would also reduce the downtime of containers being offline during backup). It's just a simple matter of mapping the paths in the Syncthing container... but there's definitely more than one way to skin a cat. If/when you do move the PAD to an unassigned device, remember to change the access mode to RW/Slave for the container path (example mapping below).
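     (What the RW/Slave access mode amounts to in plain docker terms - a sketch; paths are examples:)

       # unassigned-device paths need slave mount propagation so the container
       # handles the mount coming and going correctly
       docker run -d --name=plex \
         -v /mnt/disks/plex_ssd/appdata/plex:/config:rw,slave \
         plexinc/pms-docker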
  20. @Josh.5 I'm seeing weird processing times for larger 1080p files (movies). A 1.2GB 1080p x264 .mp4 --> 1080p x265 .mp4, with no audio conversion, took over 8 hours to process and came out 35% smaller. This just started a couple months ago, out of the blue. But a 14.8GB 2160p x265 .mkv --> 2160p x265 .mp4, also with no audio conversion, took only 45 minutes and shrank over 36%. This isn't specific to these two files, though; it's a general trend across all my files. Some of these 1080p's are even taking as long as 13 hours to process. For the 2160p's, it doesn't seem to matter whether they're x264 or x265 - they're fast either way. I've attached a pastee link for the ffmpeg log of the above 1080p file here. (This log was so big I don't even think the unmanic log capture grabbed it all from the beginning.) Here is the link to the ffmpeg log for the 2160p file. Any thoughts?
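     (For reference, the 1080p conversion in question boils down to something like this - a sketch, not unmanic's exact invocation:)

       # software x264 -> x265 re-encode, audio stream copied untouched
       ffmpeg -i input_1080p.mp4 -c:v libx265 -crf 28 -c:a copy output_1080p.mp4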
  21. 2020-04-17 14:06:08,571 DEBG 'start-script' stdout output:
       [warn] PIA endpoint 'au-perth.privateinternetaccess.com' is not in the list of endpoints that support port forwarding, DL/UL speeds maybe slow
       [info] Please consider switching to one of the endpoints shown below
       2020-04-17 14:06:08,571 DEBG 'start-script' stdout output:
       [info] List of PIA endpoints that support port forwarding:-
       [info] ca-montreal.privateinternetaccess.com
       [info] ca-vancouver.privateinternetaccess.com
       [info] de-berlin.privateinternetaccess.com
       [info] de-frankfurt.privateinternetaccess.com
       [info] sweden.privateinternetaccess.com
       [info] swiss.privateinternetaccess.com
       [info] france.privateinternetaccess.com
       2020-04-17 14:06:08,572 DEBG 'start-script' stdout output:
       [info] czech.privateinternetaccess.com
       [info] spain.privateinternetaccess.com
       [info] ro.privateinternetaccess.com
       [info] israel.privateinternetaccess.com

     @pnbrooks - although I think the Canadian endpoints have stopped working.
  22. Try manually pinning the CPU cores for the docker container (e.g., the sketch below).
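     (Core numbers are examples - pick cores nothing else is pinned to:)

       # added to the container's Extra Parameters field in the Unraid template
       --cpuset-cpus=2,3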
  23. 1. You're right about the Plex performance increase on the unassigned device, though YMMV depending on how much your cache is used and whether it impedes your current Plex metadata on a regular basis. I.e., if the cache doesn't get used much, you may not notice a big performance increase moving it to UD; but if the cache is often taxed, you may see a noticeable difference making that move. The other advantage may be capacity: depending on the size of your Plex library, all that metadata adds up - especially if you have video thumbnails turned on. That may be a problem if you don't have a large cache pool.

     2. I'm not overly familiar with rsync, but in your scenario, why not just cut out the middleman (the cache) and sync directly to the array (something like the sketch below)? While I don't back up my unassigned disks, I've had great success with Syncthing (a docker with a nice GUI) backing up (file-syncing) files and folders in real time between remote unraid servers. You can set it to scan and sync once a day. Might be worth looking into.
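     (For #2, the direct-to-array version would be something like this - a sketch; paths are examples:)

       # mirror the unassigned-device copy straight to an array share; archive mode,
       # deleting destination files that no longer exist at the source
       rsync -avh --delete /mnt/disks/plex_ssd/appdata/ /mnt/user/backups/appdata/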