Everything posted by zoggy

  1. A new version was pushed out: 2020.02.14 - Fix: Write/Erase ops hang if display_pid is reused by the system
  2. Currently 4GB (2x2GB). When I get around to a new build I'll probably do 32 or 64 gigs.
  3. Personally I view it like test driving a car. Normally it won't change the confidence level you have in it... but in the rare event it does, it pays for itself. The time and energy/stress of having your parity drive or something out of commission while you try to get another drive to replace it is just not worth it. (Which is why you should always keep a pre-cleared drive as a spare, to reduce that scenario.)
  4. Digging through the "CA Docker Autostart Manager", it looks like it's done by the order of the GUI on startup. Anyone know if it stops them in reverse order?
  5. Running Unraid 6.8.2 and starting to work with dockers some more; I was curious about their shutdown/startup order. Does Unraid start dockers in order of the docker name? By the order they appear in on the GUI (which would then respect it if you changed the order there)? If Unraid does not do things in any specific order, how would one make sure, for example, that my mariadb docker is started before my <insert other dockers that rely on it>?
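One workaround, if the built-in autostart order isn't enough, is a small script run after the array comes up (e.g. via the User Scripts plugin) that starts containers in an explicit order. This is only a sketch; the container names ("mariadb", "nextcloud") are placeholders for your own:

```shell
#!/bin/bash
# Sketch: start containers in an explicit dependency order.
# "mariadb" and "nextcloud" are example names; substitute your own.
order=(mariadb nextcloud)

for name in "${order[@]}"; do
  echo "starting ${name}"
  docker start "$name" 2>/dev/null || true
  # crude settle time; a real script would poll the service's port instead
  sleep 1
done
```

A more robust version would wait until mariadb actually accepts connections before starting anything that depends on it, rather than sleeping a fixed interval.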
  6. Right now I find it hard to visually tell if a docker is running or not, as the docker images take focus away from the state icons. One example fix would be to add some CSS to lessen the opacity of the docker image if the state is stopped. Here is an example:

     img.stopped {
       opacity: .35;
       filter: alpha(opacity=35);
       -webkit-transition: opacity .5s ease-in-out;
       -moz-transition: opacity .5s ease-in-out;
       -ms-transition: opacity .5s ease-in-out;
       -o-transition: opacity .5s ease-in-out;
       transition: opacity .5s ease-in-out;
     }

     preview:
  7. Updated from 6.7.2 to 6.8.2, unable to start array. The Array tab just shows reboot or shutdown. Looking at the logs:

     Jan 28 18:24:02 husky emhttpd: error: mdcmd, 2721: Device or resource busy (16): write
     Jan 28 18:24:02 husky kernel: mdcmd (47): stop
     Jan 28 18:24:02 husky kernel: md: 1 devices still in use.

     husky-diagnostics-20200128-1832.zip

     Thinking it may be the recent security change, I tried removing this line from my go file and rebooting: cp /boot/custom/docker-shell /usr/local/bin
     Looks like that was the culprit. I went and just deleted the script since it's not really needed (I was checking it out from a SpaceInvader One video). I know the 6.8 notes mentioned that go entries no longer have the +x bit and you'd have to copy the files off... but why did this keep the array from being able to start? Or was it just some fluke? (I do not have array auto-start enabled.)
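For anyone hitting the same release note: since /boot is FAT, files there can't carry the executable bit, so the 6.8-era pattern is to copy the script off the flash drive and chmod the copy before anything invokes it. A sketch, using the same paths as this post:

```shell
# go file excerpt (sketch): copy the script off the FAT flash drive,
# then mark the copy executable before anything invokes it
cp /boot/custom/docker-shell /usr/local/bin/docker-shell
chmod +x /usr/local/bin/docker-shell
```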
  8. I just ran into this myself; after some googling I found this thread, and "umount /var/lib/docker" was my fix as well. Which is odd, since I stopped all the dockers before I went to stop my array... I'm guessing a docker really didn't stop or something?
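If it happens again, one way to confirm the docker loop image is the thing still mounted (before reaching for umount) is to scan /proc/mounts. A sketch, assuming a Linux-style /proc/mounts where field 2 is the mount point:

```shell
#!/bin/bash
# Sketch: check whether anything is still mounted at a given path
# by scanning /proc/mounts (field 2 is the mount point on Linux).
mount_present() {
  grep -q " $1 " /proc/mounts
}

if mount_present /var/lib/docker; then
  echo "/var/lib/docker is still mounted"
  # fuser -vm /var/lib/docker   # optional: list PIDs still holding it open
  # umount /var/lib/docker      # the fix from this post, once dockers are stopped
fi
```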
  9. I can note that I pre-cleared a 10TB WD Red Pro drive with preclear.disk-2020.01.11a (version 1.0.6) on 6.7.2 without any problems 2 weeks ago.

     Jan 11 17:14:46 husky preclear_disk_1EH4Z5MN[8086]: Command: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdn
     Jan 11 17:14:46 husky preclear_disk_1EH4Z5MN[8086]: Preclear Disk Version: 1.0.6
     ...
     Jan 12 21:32:34 husky preclear_disk_1EH4Z5MN[8086]: Zeroing: progress - 100% zeroed
     ...
     Jan 13 11:44:11 husky preclear_disk_1EH4Z5MN[8086]: Post-Read: progress - 100% verified
     Jan 13 11:44:12 husky preclear_disk_1EH4Z5MN[8086]: Post-Read: dd - read 10000831348736 of 10000831348736.
     Jan 13 11:44:12 husky preclear_disk_1EH4Z5MN[8086]: Post-Read: dd exit code - 0
     Jan 13 11:44:14 husky preclear_disk_1EH4Z5MN[8086]: S.M.A.R.T.: 5 Reallocated_Sector_Ct 0
     Jan 13 11:44:14 husky preclear_disk_1EH4Z5MN[8086]: S.M.A.R.T.: 9 Power_On_Hours 42
     Jan 13 11:44:14 husky preclear_disk_1EH4Z5MN[8086]: S.M.A.R.T.: 194 Temperature_Celsius 35
     Jan 13 11:44:14 husky preclear_disk_1EH4Z5MN[8086]: S.M.A.R.T.: 196 Reallocated_Event_Count 0
     Jan 13 11:44:14 husky preclear_disk_1EH4Z5MN[8086]: S.M.A.R.T.: 197 Current_Pending_Sector 0
     Jan 13 11:44:14 husky preclear_disk_1EH4Z5MN[8086]: S.M.A.R.T.: 198 Offline_Uncorrectable 0
     Jan 13 11:44:14 husky preclear_disk_1EH4Z5MN[8086]: S.M.A.R.T.: 199 UDMA_CRC_Error_Count 0
  10. lol, signatures have been disabled for me since the new site... Looking at mine, yeah, that's pretty outdated. Thanks; I've enabled sigs once again and nuked my stale info.
  11. You can see your load is 1.x, which aligns with one of the CPUs being pegged. You're not showing all the processes in top. Try: top -b -n1
  12. Found this docker last night: https://hub.docker.com/r/celedhrim/kodi-server
      He used to be one of the guys that did the kodi-headless patch. He complained about how it wasn't fun to maintain the patch and that the increasing build times were exceeding the allowed time on Docker Hub, which was causing its own problems. He switched to running a fake Xorg (with Xpra) and PulseAudio (Kodi complains if there is no sound card/daemon): no more patch to maintain, and it uses the Kodi PPA Ubuntu image, which is much faster to build. But it does use more CPU/memory (as it has to waste cycles rendering the full interface) and does rely on the PPA images (which only matters if Kodi removed old images and you wanted to build an older version). So far it's working just fine for me; Sonarr talks to it and I see it updates everything like a normal Linux box. To set it up I just installed the docker via CA (search for: kodi-server). Since I wanted to get the latest Leia: celedhrim/kodi-server:leia
      - added path 'data' and set it to my old kodi-headless dir: /mnt/cache/appdata/.kodi
      - added port 8089 (he exposes this one instead of the default 8080, so update the advancedsettings webserver port to 8089, or change the mapping)
      - added port 9090 (so the websocket to the docker works)
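For anyone not using the CA template, a docker run equivalent of the setup above would look roughly like this. This is a sketch: it assumes the container mounts its config at /data (matching the template's 'data' path) and uses the host paths and ports from this post, so adjust to your own layout:

```shell
docker run -d --name kodi-server \
  -v /mnt/cache/appdata/.kodi:/data \
  -p 8089:8089 \
  -p 9090:9090 \
  celedhrim/kodi-server:leia
```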
  13. Trying this docker out again. I had stopped because Leia support didn't come in a timely manner and I just used a Fire TV on the side to get by. Updated my docker from :Krypton to :Leia and noticed that it was only showing 18.3. Dug through GitHub and found out that only the 'master' branch got updated to 18.5 (also, it looks like no one has been maintaining this docker?). Updated the docker to just use the master branch and now I see it's showing 18.5. However, it's not able to scrape for new content; it fails to even curl to get the login token:

      2020-01-21 20:31:13.916 T:22602176238592 NOTICE: -----------------------------------------------------------------------
      2020-01-21 20:31:13.916 T:22602176238592 NOTICE: Starting Kodi (18.5 Git:20200109-nogitfound). Platform: Linux x86 64-bit
      2020-01-21 20:31:13.916 T:22602176238592 NOTICE: Using Release Kodi x64 build
      2020-01-21 20:31:13.916 T:22602176238592 NOTICE: Kodi compiled 2020-01-09 by GCC 7.4.0 for Linux x86 64-bit version 4.15.18 (266002)
      2020-01-21 20:31:13.916 T:22602176238592 NOTICE: Running on Ubuntu 18.04.3 LTS, kernel: Linux x86 64-bit version 4.19.56-Unraid
      ...
      2020-01-22 02:31:13.937 T:22602176238592 NOTICE: ADDON: metadata.tvdb.com v3.2.3 installed
      ...
      2020-01-22 04:05:42.202 T:22601646622464 DEBUG: GetEpisodeList: Searching 'https://api.thetvdb.com/login?{"apikey":"439DFEBA9D3059C6","id":281511}|Content-Type=application/json' using The TVDB scraper (file: '/config/.kodi/addons/metadata.tvdb.com', content: 'tvshows', version: '3.2.3')
      2020-01-22 04:05:42.207 T:22601646622464 DEBUG: CurlFile::ParseAndCorrectUrl() adding custom header option 'Content-Type: application/json'
      2020-01-22 04:05:42.207 T:22601646622464 DEBUG: CurlFile::Open(0x148e3818ca60) https://api.thetvdb.com/login
      2020-01-22 04:05:42.486 T:22601731766016 WARNING: CActiveAE::StateMachine - signal: 22 from port: timer not handled for state: 1
      2020-01-22 04:06:07.523 T:22601646622464 WARNING: Previous line repeats 25 times.
      2020-01-22 04:06:07.523 T:22601646622464 ERROR: CCurlFile::FillBuffer - Failed: Timeout was reached(28)
      2020-01-22 04:06:07.523 T:22601646622464 ERROR: CCurlFile::Open failed with code 0 for https://api.thetvdb.com/login:
      2020-01-22 04:06:07.523 T:22601646622464 ERROR: Run: Unable to parse web site
      2020-01-22 04:06:07.523 T:22601646622464 WARNING: No information found for item 'smb://husky/TV/TV/Black-ish/Season 06/', it won't be added to the library.
  14. To bring this back from the dead: why can't you switch between the two (if you stop everything first)? For example, I have a mishmash of both methods across my dockers and want to normalize them to one way.
  15. Do you use SMB to access your content on the NAS from Plex?
  16. Using the latest version on 6.7.2: inserted a new disk, going to preclear it, but first I wanted to pull the SMART details to record. On /Tools/Preclear/New?name=sdn, clicking on "Download SMART report:" results in nothing / this invalid URL: /Tools/Preclear/%3Cbr%20/%3E%3Cb%3EWarning%3C/b%3E:%20%20copy():%20Filename%20cannot%20be%20empty%20in%20%3Cb%3E/usr/local/emhttp/plugins/dynamix/include/Download.php%3C/b%3E%20on%20line%20%3Cb%3E24%3C/b%3E%3Cbr%20/%3E/husky-smart-20200111-1706.zip
      (which decodes to: Warning: copy(): Filename cannot be empty in /usr/local/emhttp/plugins/dynamix/include/Download.php on line 24, followed by /husky-smart-20200111-1706.zip)
  17. I held off on the 6.8 upgrade due to all the SMB slowdown reports. Anyone that had those: can you confirm whether 6.8.1-rc1 resolved them?
  18. Probably your password manager clearing cookies when the session expired?
  19. This past time I did grab diags (see post #2).
  20. I'm not debating that. It was an unclean shutdown, and on boot it said so, which was correct. Then when it finished the parity check and brought the array online, it cleared the flag. So far, all good. Then a day later the monthly parity check started; I aborted it since the parity had just been checked a day prior. The UI at this point was showing that it was an unclean shutdown again -- this, in my eyes, is wrong. It was making me think that if I were to try and start the array again it would force me to do a parity check. However, I found that if I rebooted the box it 'fixed' the UI thinking it was unclean, as I could start the array just fine again.
  21. The first time, yes, but the unclean status should have been wiped once the check completed. (Which it did; it just doesn't look that way if you abort out of another parity check.)
  22. Went ahead and just powered down. Booted up and started the array. No warning or anything about an unclean shutdown or requiring me to do a parity check. So it looks like it was just a bug that it still said unclean shutdown, due to aborting the parity check after the unclean one.
  23. Attaching diags: husky-diagnostics-20190829-1958.zip