Everything posted by Maticks

  1. I think the docker got an update last night with some issues. After deleting the log file and restarting the docker, it still filled up again and created endless log entries with the same wrong-PID error. So I forced an update of the docker, which pulled two updates, and that fixed it; the log output is back to normal. Maybe someone pushed a bad update by mistake. I think I'll set my docker auto-update to weekends from now on.
  2. Well, found the 6.1GB log file, and it's full of the same entry repeated over and over. Bit weird though (a sketch for hunting these logs down and capping them is below):
     {"log":"Found pihole-FTL process with PID 534 (my PID 625) - killing it ...\n","stream":"stdout","time":"2018-04-10T23:06:50.445811136Z"}
     {"log":"Found pihole-FTL process with PID 534 (my PID 625) - killing it ...\n","stream":"stdout","time":"2018-04-10T23:06:50.445815478Z"}
     {"log":"Found pihole-FTL process with PID 534 (my PID 625) - killing it ...\n","stream":"stdout","time":"2018-04-10T23:06:50.44582013Z"}
     ...and so on; the same "Found pihole-FTL process with PID 534 (my PID 625) - killing it" line repeats for the rest of the file, with only the timestamp changing.
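     A way to track down which container log is ballooning and to stop it filling the docker image again. This is only a sketch: the path is Docker's standard location for json-file logs, the container ID is a placeholder, and the log options assume the container uses the default json-file log driver.
     # find the largest container logs inside the docker image
     du -h /var/lib/docker/containers/*/*-json.log | sort -h | tail -n 5
     # empty the runaway log without stopping the container
     truncate -s 0 /var/lib/docker/containers/<container-id>/<container-id>-json.log
     # to cap future growth, add log options to the container's Extra Parameters, e.g.
     #   --log-opt max-size=50m --log-opt max-file=1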
  3. Just increased my docker image size to 30G so I could start it. Looks to be working...
  4. I am assuming it's this, but I have heaps of space...
     time="2018-04-11T07:12:08+10:00" level=error msg="garbage collection failed" error="write /var/lib/docker/containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: no space left on device" module="containerd/io.containerd.gc.v1.scheduler"
     panic: runtime error: invalid memory address or nil pointer dereference
     [signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x8ae367]
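     The "no space left on device" refers to free space inside the docker image itself, not the array or cache, which is why the host can show plenty of space. A minimal check, assuming the usual Unraid setup where docker.img is loopback-mounted at /var/lib/docker:
     # how full is the docker image?
     df -h /var/lib/docker
     # what is using the space inside it (images, containers, volumes)?
     docker system df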
  5. Diagnostics attached: vault-diagnostics-20180411-0841.zip
  6. Probably best to describe why I rebooted first. I woke up this morning to my Medusa docker not responding on its web UI. My other dockers were working, but Medusa would not restart. A quick check of the dashboard showed the docker log utilisation at 100%; I have never seen it more than 20% full. My cache drive was around 30% full, and none of my hard drives are full either. I decided to reboot the box. My VMs took a while but eventually came back up, but the dockers never started. When I click on the Docker tab it says the docker service failed to start. What do I do? I haven't had this issue before. Can anyone point me in the right direction?
  7. I did a Plex update on my docker and the update is causing some issues: my TV stops the video stream after exactly 30 seconds of playing and just throws me off the server. So how do I roll back the docker update? Is there an easy command to roll this thing back?
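     There is no built-in undo for a container update, but you can roll back by pinning the repository to an older image tag instead of latest. A sketch only: linuxserver/plex is just an example repository, the version tag is a placeholder, and it assumes the publisher keeps tagged releases.
     # in the container's Unraid template, change the Repository field from
     #   linuxserver/plex:latest
     # to a specific older tag, then apply so the container is recreated:
     #   linuxserver/plex:<previous-version-tag>
     # or pull the older tag manually first to confirm it exists:
     docker pull linuxserver/plex:<previous-version-tag>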
  8. Medusa is broken for the TorrentLeech search provider; it was fixed in the Medusa GitHub repo a few days ago. Can we get an update on the docker please?
     2018-04-07 15:30:49 INFO SEARCHQUEUE-FORCED-75692 :: [TorrentLeech] :: [d6eb72d] Unknown exception in url https://classic.torrentleech.org/torrents/browse. Error: Unable to parse Cloudflare anti-bots page: Error parsing Cloudflare IUAM Javascript challenge. Cloudflare may have changed their technique, or there may be a bug in the script. Please read https://github.com/Anorov/cloudflare-scrape#updates, then file a bug report at https://github.com/Anorov/cloudflare-scrape/issues.
  9. Anyone know how to fix this error in the script? delete_dangling_images is broken (a sketch of a fix is below):
     Script location: /tmp/user.scripts/tmpScripts/delete_dangling_images/script
     Note that closing this window will abort the execution of this script
     "docker rmi" requires at least 1 argument.
     See 'docker rmi --help'.
     Usage: docker rmi [OPTIONS] IMAGE [IMAGE...] [flags]
     Remove one or more images
     Finished
     if an error shows above, no dangling images were found to delete
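     The error happens because the script passes an empty list to docker rmi when there are no dangling images. Not the original script, just a sketch of one way to guard against that:
     #!/bin/bash
     # collect dangling image IDs, and only call docker rmi if there are any
     dangling=$(docker images -q -f dangling=true)
     if [ -n "$dangling" ]; then
         docker rmi $dangling
     else
         echo "no dangling images found to delete"
     fi
     # docker image prune -f does the same thing in one step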
  10. The binhex/medusa docker feels like SickRage but years ahead. Maybe some devs went over there and forked SickRage.
  11. Seems to still be very broken for me; at least the TorrentLeech search is broken, and the public providers are all out of date.
  12. I don't think SickRage development is being maintained anymore. I moved to Medusa. The features... it's SickRage but three years in the future.
  13. I used to run this command when I had cache filesystem full issues; it was down to how btrfs allocates chunks. It's not so much an Unraid thing, it's more btrfs itself:
     btrfs balance start -dusage=75 /mnt/cache
     If that fixes it, run it from cron every few days (a sketch of that is below).
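     A minimal sketch of scheduling that. The mount point, the 3am-every-three-days schedule and the -dusage=75 threshold are just examples; adjust the btrfs path if it lives somewhere other than /sbin on your system.
     # check how much space is allocated to chunks versus actually used
     btrfs filesystem usage /mnt/cache
     # crontab entry: filtered balance that only rewrites data chunks less than 75% full
     0 3 */3 * * /sbin/btrfs balance start -dusage=75 /mnt/cache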
  14. I would go into your disk settings and untick "enable auto start" so the array doesn't mount on boot. Note down the disk serial numbers that are detected, power down the system, move the disks around between the two cages and see if the disks are detected in the other cage. Maybe the disks aren't faulty; it could be the Norco cage itself, or even the power cables that plug into it. Hopefully it's just the Norco cage backplane or one of the power leads. The two Molex connectors provide power to the drives, and it could be one of those that is dead. At least this will tell you the drives are fine. Unraid is fine with drives coming up on different SATA connections, so moving disks around is safe. If that works, then it's a PSU problem or a Molex connection problem; it's just a matter of working out which one it is first.
  15. I haven't had the SSD allocation issue happen since the upgrade. I think it's resolved; at least I haven't seen transfer speeds drop below 112MB/s, so I haven't needed to manually run a rebalance.
  16. I had this happen a day after turning on ACS override in my VM settings, with 3 disks on my LSI controller. I haven't made any config changes recently and everything looked fine. With 2 disks on the LSI and one on the onboard controller, I also thought SATA cables couldn't be the issue.
  17. Also, I don't believe CrossFire is officially supported; that might be causing some of your issues.
  18. I would try not to enable ACS override if you can; it can break other things. If it does cause problems, try moving the graphics card from slot 1 to any other slot instead; that usually fixes the IOMMU group issue. Depending on the motherboard configuration in your manual you can usually use slot 2 and slot 3 or 4; you will get two x8 slots, and that's more than enough for those graphics cards. Other suggestions for VM performance: always leave one core free for Unraid and don't pass all cores to the VM. If you are passing only 2 or 3 cores, make sure you also pass through their hyperthread cores; in 6.4 these are grouped together, shown as something like 0/4, and you tick them in pairs. As seen attached (the same pairing from the CLI is sketched below).
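     For reference, the same core/hyperthread pairing can be checked and applied from the command line. A sketch only: it assumes a 4-core/8-thread CPU whose sibling pairs are 0/4, 1/5, 2/6 and 3/7, and a VM named "Windows10"; the Unraid VM form does this pinning for you.
     # show which logical CPUs share a physical core (compare the CORE column)
     lscpu --extended
     # pin the VM's four vCPUs to the sibling pairs 1/5 and 2/6, leaving 0/4 free for Unraid
     virsh vcpupin Windows10 0 1
     virsh vcpupin Windows10 1 5
     virsh vcpupin Windows10 2 2
     virsh vcpupin Windows10 3 6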
  19. Last time I checked, you couldn't run the IT-mode flashing program from the Unraid CLI.
  20. That sounds like a good idea. The LSI cards are good, but you need to either buy them already flashed to IT mode or have a second machine to do the flashing. They are a decent price. Full list here: https://lime-technology.com/wiki/Hardware_Compatibility#PCI_SATA_Controllers (a quick way to check what firmware a card is running is sketched after the list)
     LSI SAS 9201-8i HBA: 8 ports, PCIe x8, SATA III, SAS2008; plain HBA, works ootb
     LSI SAS 9211-8i: 8 ports, PCIe x8, SATA III, SAS2008; flashed to IT mode [49]
     LSI SAS 9240-8i: 8 ports, PCIe x8, SATA III, SAS2008; not yet tested, but expected to work in 9211-8i IT mode
     LSI SAS 9207-8i: 8 ports, PCIe 3.0 x8, SATA III, SAS2308; successor of the well-known 9211-8i, now with a PCIe 3.0 interface; in IT mode by default [50]; not yet tested but should work ootb
     LSI SAS 9217-8i: 8 ports, PCIe 3.0 x8, SATA III, SAS2308; same card as the 9207-8i but in IR mode; only available through OEMs; firmware for IT mode available [51]; flash to IT mode not yet confirmed!
     LSI SAS 9300-8i HBA: 8 ports, PCIe 3.0 x8, SATA III, SAS3008; plain HBA, works ootb [52]
     LSI SAS 9310-8i: 8 ports, PCIe 3.0 x8, SATA III, SAS3008; not yet tested but should work when flashed to IT mode (vertical connectors)
     LSI SAS 9311-8i: 8 ports, PCIe 3.0 x8, SATA III, SAS3008; not yet tested but should work when flashed to IT mode (horizontal connectors)
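     A quick way to see what a card you already have is running, assuming you have copied Broadcom/LSI's Linux sas2flash utility onto the box (it is not part of stock Unraid, and flashing itself is still best done from a separate machine as noted above):
     # confirm the HBA is detected
     lspci | grep -i lsi
     # list the controllers sas2flash can see, including firmware version and whether it runs the IT or IR image
     sas2flash -listall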
  21. There is a fair bit on the forum about this card.
  22. This might have to do with the firmware being set up in RAID (IR) mode instead of IT mode, where the SATA commands are passed straight through to Linux. RAID cards take on those functions themselves; you want to flash the firmware so the card acts as a plain pass-through HBA.
  23. Maybe the SATA cable was loose... that is the only explanation I can think of.
  24. New things to check: power down the server and check that the SATA data cable is connected. If you have a spare SATA data cable, try replacing it. If you are using a Marvell SATA controller on your motherboard, try one of the other SATA ports. If you have ACS override enabled under Settings > VM Manager, try turning it off; IOMMU group issues can cause data problems. I have had ACS override cause data errors across several drives.
  25. Do you have "Use cache" turned on for the share and a cache drive installed? That might be causing the issue; try turning cache off for the share and see if that makes a difference. I've never had a share report no space in Windows, but cache drives can cause weird space issues sometimes.