tomsliwowski

Members
  • Posts: 14

  1. I noticed the following errors pop up in syslog when my weekly file integrity scan is running:
     [Sun Jan 28 05:11:37 2024] traps: lsof[12844] general protection fault ip:1465a4835c6e sp:f5ee69bd59bc7327 error:0 in libc-2.37.so[1465a481d000+169000]
     [Sun Jan 28 05:16:04 2024] traps: lsof[23856] general protection fault ip:14951f35ec6e sp:7a304e6690af2c39 error:0 in libc-2.37.so[14951f346000+169000]
     [Sun Jan 28 05:36:42 2024] traps: lsof[2946] general protection fault ip:154ec51d6c6e sp:cde1ba7f4aafda50 error:0 in libc-2.37.so[154ec51be000+169000]
     [Sun Jan 28 05:49:24 2024] traps: lsof[30238] general protection fault ip:153a093d6c6e sp:646c5675301701d5 error:0 in libc-2.37.so[153a093be000+169000]
     [Sun Jan 28 05:53:34 2024] traps: lsof[7087] general protection fault ip:1509280cec6e sp:8cddcf0a92fac65d error:0 in libc-2.37.so[1509280b6000+169000]
     [Sun Jan 28 06:22:23 2024] traps: lsof[7195] general protection fault ip:147f247cdc6e sp:2971fccf21adb751 error:0 in libc-2.37.so[147f247b5000+169000]
     [Sun Jan 28 06:24:20 2024] traps: lsof[11280] general protection fault ip:14741fcf9c6e sp:d4b780a1f1aff6cd error:0 in libc-2.37.so[14741fce1000+169000]
     [Sun Jan 28 06:39:20 2024] traps: lsof[19103] general protection fault ip:14835087bc6e sp:cdefaf69ea40b09d error:0 in libc-2.37.so[148350863000+169000]
     [Sun Jan 28 06:41:07 2024] traps: lsof[23347] general protection fault ip:1520b1a38c6e sp:5bf634991cab0feb error:0 in libc-2.37.so[1520b1a20000+169000]
     [Sun Jan 28 06:49:14 2024] traps: lsof[8090] general protection fault ip:1472aa810c6e sp:e544104636008a41 error:0 in libc-2.37.so[1472aa7f8000+169000]
     [Sun Jan 28 06:53:31 2024] traps: lsof[18301] general protection fault ip:149f05cbbc6e sp:9732caf1b814058 error:0 in libc-2.37.so[149f05ca3000+169000]
     [Sun Jan 28 07:26:24 2024] traps: lsof[27204] general protection fault ip:1460fb8a1c6e sp:450a16ff49819677 error:0 in libc-2.37.so[1460fb889000+169000]
     [Sun Jan 28 07:30:35 2024] traps: lsof[3487] general protection fault ip:152321d18c6e sp:d15eaa34f003094a error:0 in libc-2.37.so[152321d00000+169000]
     [Sun Jan 28 07:32:26 2024] traps: lsof[7730] general protection fault ip:14fc2600fc6e sp:1ef4da6085dbf58d error:0 in libc-2.37.so[14fc25ff7000+169000]
     [Sun Jan 28 07:34:26 2024] traps: lsof[11795] general protection fault ip:14fd4594cc6e sp:2860af92e62d9df4 error:0 in libc-2.37.so[14fd45934000+169000]
     [Sun Jan 28 07:44:52 2024] traps: lsof[4278] general protection fault ip:149635e06c6e sp:43a5be79ccf901dc error:0 in libc-2.37.so[149635dee000+169000]
     [Sun Jan 28 07:46:53 2024] traps: lsof[8418] general protection fault ip:1548fefddc6e sp:9fc14438cbaf77e3 error:0 in libc-2.37.so[1548fefc5000+169000]
     [Sun Jan 28 07:53:20 2024] traps: lsof[24154] general protection fault ip:14e76bfdec6e sp:e017171ece2a524d error:0 in libc-2.37.so[14e76bfc6000+169000]
     [Sun Jan 28 08:13:59 2024] traps: lsof[4257] general protection fault ip:148861f70c6e sp:2c70d28c847ed92d error:0 in libc-2.37.so[148861f58000+169000]
     [Sun Jan 28 08:24:25 2024] traps: lsof[26478] general protection fault ip:14f8e64a2c6e sp:b4beae672c0c34a2 error:0 in libc-2.37.so[14f8e648a000+169000]
     [Sun Jan 28 08:28:54 2024] traps: lsof[3310] general protection fault ip:14b2b84ecc6e sp:bb134c9a2fa2378e error:0 in libc-2.37.so[14b2b84d4000+169000]
     [Sun Jan 28 08:30:41 2024] traps: lsof[7436] general protection fault ip:1530ffa75c6e sp:88423d5fcfbd1ec error:0 in libc-2.37.so[1530ffa5d000+169000]
     [Sun Jan 28 08:34:54 2024] traps: lsof[17848] general protection fault ip:14e0ed3fec6e sp:e158183adfa98957 error:0 in libc-2.37.so[14e0ed3e6000+169000]
     [Sun Jan 28 08:37:02 2024] traps: lsof[21755] general protection fault ip:14fd9884ac6e sp:c3f591bbe80de51f error:0 in libc-2.37.so[14fd98832000+169000]
     [Sun Jan 28 08:49:26 2024] traps: lsof[18698] general protection fault ip:1528bacccc6e sp:c88d9fd7b8df777a error:0 in libc-2.37.so[1528bacb4000+169000]
     [Sun Jan 28 09:04:06 2024] traps: lsof[19887] general protection fault ip:15525e2dfc6e sp:f78dd12d628fa57b error:0 in libc-2.37.so[15525e2c7000+169000]
     [Sun Jan 28 09:10:26 2024] traps: lsof[1614] general protection fault ip:146cda238c6e sp:5d6166528ff03c0b error:0 in libc-2.37.so[146cda220000+169000]
     [Sun Jan 28 09:37:23 2024] traps: lsof[31455] general protection fault ip:14f4dc217c6e sp:c82e19e798d75a10 error:0 in libc-2.37.so[14f4dc1ff000+169000]
     [Sun Jan 28 09:45:37 2024] traps: lsof[17432] general protection fault ip:14c08438dc6e sp:e9b2be199605832b error:0 in libc-2.37.so[14c084375000+169000]
     [Sun Jan 28 09:53:59 2024] traps: lsof[1723] general protection fault ip:14cabdc0dc6e sp:4f114c801403f1aa error:0 in libc-2.37.so[14cabdbf5000+169000]
     [Sun Jan 28 10:00:00 2024] traps: lsof[13963] general protection fault ip:14bb2f105c6e sp:c3d0af6abac85d0f error:0 in libc-2.37.so[14bb2f0ed000+169000]
     [Sun Jan 28 10:04:25 2024] traps: lsof[24752] general protection fault ip:1521468e2c6e sp:d13db6cc5554a80b error:0 in libc-2.37.so[1521468ca000+169000]
     [Sun Jan 28 10:14:52 2024] traps: lsof[13547] general protection fault ip:14fbfd66dc6e sp:1df2368861c1b4e9 error:0 in libc-2.37.so[14fbfd655000+169000]
     [Sun Jan 28 10:27:17 2024] traps: lsof[7636] general protection fault ip:14af354c3c6e sp:4a46ff3bc3996236 error:0 in libc-2.37.so[14af354ab000+169000]
     [Sun Jan 28 10:29:13 2024] traps: lsof[11666] general protection fault ip:146083169c6e sp:956dc074242059b0 error:0 in libc-2.37.so[146083151000+169000]
     [Sun Jan 28 10:31:25 2024] traps: lsof[17617] general protection fault ip:14bcfa418c6e sp:91354f1ce0bd501 error:0 in libc-2.37.so[14bcfa400000+169000]
     [Sun Jan 28 10:39:40 2024] traps: lsof[1697] general protection fault ip:1518a167cc6e sp:cf9a57fe6c5b716b error:0 in libc-2.37.so[1518a1664000+169000]
     [Sun Jan 28 11:13:04 2024] traps: lsof[9545] general protection fault ip:148c9fde3c6e sp:f798703c5845de75 error:0 in libc-2.37.so[148c9fdcb000+169000]
     [Sun Jan 28 11:19:13 2024] traps: lsof[23258] general protection fault ip:146115a2fc6e sp:9903487127be5a4 error:0 in libc-2.37.so[146115a17000+169000]
     [Sun Jan 28 11:23:18 2024] traps: lsof[31852] general protection fault ip:14a558e28c6e sp:a684e5f86b172630 error:0 in libc-2.37.so[14a558e10000+169000]
     Does this indicate an issue with my SAS controller, SAS expander, or maybe a bad cable? I tried looking for similar errors on the forums, but all I found was a couple of posts where the suggested solution was to reimage the USB drive, which doesn't feel like the right answer here. Is there further troubleshooting I can do to isolate the issue? (See the lsof reproduction sketch after this list.)
  2. Is the build for today borked for anyone else? Seeing this over and over in the logs and can't reach the admin page:
  3. Looks like that's been fixed in the latest update. It was driving me up a wall thinking I messed up the configuration or something...
  4. Firefox seems not to like the login page, so my workaround was to log in using Chrome and add a bypass for my home LAN so it doesn't require auth... not a good solution, I know, but it lets me use the container.
  5. So I've been using this docker image for a few weeks and it's been great. Yesterday I replaced my parity drive which, of course, involves stopping the array. I restarted all my docker containers, but for some reason sickchill isn't coming back up right. It binds to port 8081 but refuses all incoming connections, and I see a stuck python process taking up 1 CPU core:
     nobody 1858 1547 87 21:12 ? 00:00:06 python /app/sickchill/SickBeard.py --datadir /config
     I also see this in the logs over and over, but I'm almost positive the same git errors popped up before:
     Traceback (most recent call last):
       File "/app/sickchill/SickBeard.py", line 520, in <module>
         SickChill().start()
       File "/app/sickchill/SickBeard.py", line 245, in start
         sickbeard.initialize(consoleLogging=self.console_logging)
       File "/app/sickchill/sickbeard/__init__.py", line 1596, in initialize
         silent=False
       File "/app/sickchill/sickbeard/scheduler.py", line 43, in __init__
         self.lastRun = datetime.datetime.now() + self.run_delay - cycleTime
     OverflowError: date value out of range
     21:14:12 WARNING::MAIN :: Unable to setup GitHub properly, You are currently being throttled by rate limiting for too many requests - Try adding an access token. Error: 403 {"documentation_url": "https://developer.github.com/v3/#rate-limiting", "message": "API rate limit exceeded for 108.29.8.54. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)"}
     Any idea what it could be? I could just remove and re-install the container, but I think that would lose all my configs, which I'd rather keep if possible. (See the OverflowError sketch after this list.)
  6. Thanks for the explanation. Yeah, I'm shipping the drive out this afternoon but thinking of picking up a 10 or 12 TB Easystore so I can have parity in the interim (and an additional 10TB when the RMA is complete). Do you know if there is a bug in the unraid GUI when displaying this raw read error rate? I ask because the SMART report in the diagnostics has a crazy high number but on the GUI it's 0. Is this some weird 16-bit number overflow that makes it read as 0 as soon as 65536 is passed? (See the bit-masking sketch after this list.)
  7. I should also probably ask: what's the best way to remove the parity drive without risking data loss while I'm awaiting a replacement?
  8. Hmm, the extended test will take approximately 1149 minutes... Instead of that, I created an RMA and will ship the drive 2-day air, and hopefully get a replacement by the end of next week. Thank you so much for all your help. Would you mind sharing which logs you looked at to tell the issue was most likely the drive itself?
  9. Those were apparently corrections done during the last Parity check which I've set to run once every quarter. Would the sync errors and this read error be connected with a drive going bad?
  10. Running the test now. What sync errors are you referring to?
  11. So I woke up to this warning:
      [10588352.295242] print_req_error: critical medium error, dev sdc, sector 11241728864
      Googling that error turns up just about every possibility: bad cable, bad drive, bad controller, and "everything is OK, just some spindown error." Ideally I'd like the last one to be true, but I can't quite figure out how to tell from the diagnostics. Can someone who's more knowledgeable take a look? The drive in question is the parity drive. It's technically still under WD warranty until December, so if the drive is bad I can create an RMA for it and live without parity for a week while WD sends me a new drive (or I can use it as an excuse to pick up a 12TB drive). (See the sector-probe sketch after this list.)
      arthur-diagnostics-20200731-0723.zip
  12. OK, so the reboot seemed to fix the 502 issue, but as soon as I started my array, the parity check operation kicked in. Not sure why, as I didn't start it manually and the box is configured to do it once a quarter (first Sunday of Jan, Apr, Jul, Oct). Is there a log file somewhere that would tell me why it decided to do the check? (See the syslog-grep sketch after this list.)
  13. Seems like it's something to do with emhttpd crashing. I see this in dmesg when I try to restart emhttpd (see the fault-decoding sketch after this list):
      [1728640.363470] emhttpd[9824]: segfault at 478 ip 0000000000412801 sp 00007fff235f4570 error 4 in emhttpd[403000+1b000]
      [1728640.363478] Code: 8b 45 f0 48 89 c7 e8 5e ba ff ff 48 89 45 e8 48 8d 55 d0 48 8b 45 f0 48 89 d6 48 89 c7 e8 74 fa ff ff 48 8b 55 d0 48 8b 45 e8 <48> 8b 80 78 04 00 00 48 29 c2 8b 45 cc 89 c6 bf ff 1c 42 00 b8 00
      [1733417.480102] emhttpd[13602]: segfault at 478 ip 0000000000412801 sp 00007ffc6b087020 error 4 in emhttpd[403000+1b000]
      [1733417.480108] Code: 8b 45 f0 48 89 c7 e8 5e ba ff ff 48 89 45 e8 48 8d 55 d0 48 8b 45 f0 48 89 d6 48 89 c7 e8 74 fa ff ff 48 8b 55 d0 48 8b 45 e8 <48> 8b 80 78 04 00 00 48 29 c2 8b 45 cc 89 c6 bf ff 1c 42 00 b8 00
      [1733445.486722] emhttpd[13642]: segfault at 478 ip 0000000000412801 sp 00007fff5cbb20b0 error 4 in emhttpd[403000+1b000]
      [1733445.486728] Code: 8b 45 f0 48 89 c7 e8 5e ba ff ff 48 89 45 e8 48 8d 55 d0 48 8b 45 f0 48 89 d6 48 89 c7 e8 74 fa ff ff 48 8b 55 d0 48 8b 45 e8 <48> 8b 80 78 04 00 00 48 29 c2 8b 45 cc 89 c6 bf ff 1c 42 00 b8 00
  14. I'm running 6.8.3 with a fairly vanilla setup. I went to see if there were any updates and got a popup with nginx error 502. Thought that was odd, so I tried to open a terminal window and got the same error. Tried to view the log using the web viewer; same error. I SSH'ed into the server and restarted the nginx process, which seemed to fix the web-based terminal, but the other two options I tried still give me a 502. I see this in /var/log/nginx/error.log:
      2020/03/30 09:43:01 [error] 6991#6991: *17 connect() to unix:/var/run/emhttpd.socket failed (11: Resource temporarily unavailable) while connecting to upstream, client: 192.168.254.100, server: , request: "GET /logging.htm?cmd=/plugins/dynamix.plugin.manager/scripts/plugin&arg1=checkos&csrf_token=82B2D868106A28B7 HTTP/1.1", upstream: "http://unix:/var/run/emhttpd.socket:/logging.htm?cmd=/plugins/dynamix.plugin.manager/scripts/plugin&arg1=checkos&csrf_token=82B2D868106A28B7", host: "arthur.lan", referrer: "http://arthur.lan/Tools/Update"
      2020/03/30 10:03:13 [error] 6991#6991: *472 connect() to unix:/var/run/emhttpd.socket failed (11: Resource temporarily unavailable) while connecting to upstream, client: 192.168.254.100, server: , request: "GET /logging.htm?cmd=/plugins/dynamix.plugin.manager/scripts/plugin&arg1=checkos&csrf_token=82B2D868106A28B7 HTTP/1.1", upstream: "http://unix:/var/run/emhttpd.socket:/logging.htm?cmd=/plugins/dynamix.plugin.manager/scripts/plugin&arg1=checkos&csrf_token=82B2D868106A28B7", host: "arthur.lan", referrer: "http://arthur.lan/Tools/Update"
      2020/03/30 10:13:05 [error] 6991#6991: *774 connect() to unix:/var/run/emhttpd.socket failed (11: Resource temporarily unavailable) while connecting to upstream, client: 192.168.254.100, server: , request: "POST /logging.htm HTTP/1.1", upstream: "http://unix:/var/run/emhttpd.socket:/logging.htm", host: "arthur.lan", referrer: "http://arthur.lan/Dashboard"
      Any idea how to fix this? (See the socket-probe sketch after this list.)
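
lsof reproduction sketch (post 1): the faults land inside libc while lsof runs, not during disk I/O, so before blaming the SAS controller, expander, or cabling, it's worth checking whether lsof crashes on an otherwise idle box. A minimal sketch, assuming stock Python is available on the server; a general protection fault is delivered as SIGSEGV, which subprocess reports as a negative return code:

    import subprocess

    # Run lsof repeatedly outside the integrity scan window; any run killed
    # by a signal (e.g. SIGSEGV from a GPF) shows up as returncode < 0.
    for i in range(500):
        rc = subprocess.run(["lsof"], stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL).returncode
        if rc < 0:
            print(f"run {i}: lsof killed by signal {-rc}")

If it crashes here too, flaky RAM or a corrupted libc in the boot image is a more likely suspect than the storage chain; a Memtest86+ pass from the boot menu would be a sensible next step.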
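
OverflowError sketch (post 5): the traceback's arithmetic is easy to reproduce in plain Python. datetime raises this exact error whenever a result falls outside datetime.min/datetime.max, which is what happens if a scheduler's stored interval or last-run value gets corrupted to a huge number. The names below just mirror the traceback; they are not SickChill's actual values:

    import datetime

    run_delay = datetime.timedelta(minutes=5)
    cycle_time = datetime.timedelta(days=999_999_999)  # a corrupted, huge interval

    # Same shape as scheduler.py line 43 in the traceback:
    last_run = datetime.datetime.now() + run_delay - cycle_time
    # OverflowError: date value out of range

That points at a bad stored timestamp or interval in the app's own data rather than the container image, which suggests reinstalling the container would not help by itself.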
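
Bit-masking sketch (post 6): SMART raw values are 48-bit fields, and on some drives the raw read error rate packs counters into sub-fields, so a display that decodes only the low 16 bits can show a small number, or 0, for a huge raw value. Illustrative arithmetic only; this is not a claim about how the unraid GUI actually parses the attribute:

    raw = 0x7000000          # hypothetical huge raw value from a SMART report
    low16 = raw & 0xFFFF     # what a 16-bit-only decode would display
    print(raw, "->", low16)  # 117440512 -> 0 (any multiple of 65536 shows as 0)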
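
Sector-probe sketch (post 11): a genuine media defect is persistent and pinned to the sector in the kernel message, while a spindown hiccup is not, so re-reading that exact sector and re-checking the pending/reallocated counters can separate the two. A sketch assuming hdparm and smartmontools are installed; the device and sector are taken from the warning:

    import subprocess

    DEV = "/dev/sdc"
    SECTOR = "11241728864"  # from the print_req_error line

    # Re-read the suspect sector directly; a real medium error fails again.
    subprocess.run(["hdparm", "--read-sector", SECTOR, DEV])

    # See whether SMART now counts pending or reallocated sectors.
    out = subprocess.run(["smartctl", "-A", DEV],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Pending" in line or "Reallocated" in line:
            print(line)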
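
Syslog-grep sketch (post 12): unraid writes array events to /var/log/syslog, so filtering for parity/mdcmd entries around the array start should show whether the check was scheduled, user-initiated, or triggered by an unclean shutdown. A hypothetical quick filter:

    # Print every syslog line that mentions the parity check or an md command.
    with open("/var/log/syslog", errors="replace") as log:
        for line in log:
            if "parity" in line.lower() or "mdcmd" in line.lower():
                print(line.rstrip())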
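
Fault-decoding sketch (post 13): the kernel's "error N" in a segfault line is a page-fault bitmask and "segfault at 478" is the faulting address; decoding them is generic x86 behavior, nothing unraid-specific. Here error 4 means a user-mode read of an unmapped page, and an address of 0x478 looks like a read through a NULL pointer plus a struct offset, i.e. an emhttpd bug or corrupted state rather than hardware:

    err = 0x4                              # "error 4" from the dmesg line
    print("page present:", bool(err & 1))  # False -> page not mapped
    print("write access:", bool(err & 2))  # False -> it was a read
    print("user mode:   ", bool(err & 4))  # True  -> fault from userspace
    print("fault addr:  ", hex(0x478))     # NULL + 0x478 -> struct member deref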
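
Socket-probe sketch (post 14): the error log says nginx cannot reach its upstream at unix:/var/run/emhttpd.socket, so nginx itself is probably fine. Probing that socket directly (path taken from the log) shows whether emhttpd is accepting connections at all:

    import socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(5)
    try:
        # Minimal HTTP request against the same endpoint nginx proxies to.
        s.connect("/var/run/emhttpd.socket")
        s.sendall(b"GET /logging.htm HTTP/1.0\r\n\r\n")
        print(s.recv(200) or b"(no response)")
    except OSError as exc:
        print("emhttpd socket not answering:", exc)
    finally:
        s.close()

If the connect itself fails or times out, that is consistent with the emhttpd segfaults in post 13: the listener is wedged, and restarting nginx cannot fix it.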