dvd.collector
Members · Content Count: 252

Everything posted by dvd.collector

  1. Has anyone had issues recently with Sonarr not automatically starting when the array starts? All was fine until a couple of days ago, but now Sonarr will not auto-start. It starts fine if I go and manually choose Start from the menu. I'm unsure exactly which parts of the log are current, as it doesn't seem to create a new log when it starts, but these are the last entries from when it fails to start up:
     [cont-finish.d] executing container finish scripts...
     [cont-finish.d] done.
     [s6-finish] syncing disks.
     [s6-finish] sending all processes the TERM signal.
     [s6-finish] sending all processes the KILL signal and exiting.
  2. Conversion to raid0 appears to have worked. Many thanks for your help!
  3. OK, so when I click on Cache, I can change that to btrfs. When I click on cache2, it says "unknown" under partition format. I went ahead and changed Cache to btrfs anyway, then clicked Format. It looks like this worked (see screenshot). Now I just need to change to raid 0, I guess.
  4. Attached diagnostics: tower-diagnostics-20181108-1929.zip. I can see this in the logs; is it trying to format one drive as XFS?
     Nov 8 19:26:34 Tower emhttpd: shcmd (1000): /sbin/wipefs -a /dev/sde1
     Nov 8 19:26:34 Tower emhttpd: shcmd (1001): mkfs.xfs -m crc=1,finobt=1 -f /dev/sde1
  5. OK, so I stopped the array, ran the blkdiscard command on both disks, and restarted the array, but I see the same error message. Should I have removed them as cache drives first? I'll try that now anyway.
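For anyone following along, the wipe step above amounts to something like this sketch. The device name is a hypothetical example, and DRY_RUN defaults to 1 so the commands are only printed, never run; only ever do this for real with the array stopped and the devices unassigned from the cache pool.

```shell
# Sketch of wiping an SSD before re-adding it to the cache pool.
# /dev/sdX is a hypothetical example; with DRY_RUN=1 (the default)
# the commands are printed rather than executed.
wipe_ssd() {
    dev=$1
    run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }
    run wipefs -a "$dev"     # remove old filesystem/RAID signatures
    run blkdiscard "$dev"    # TRIM (discard) the entire device
}

wipe_ssd /dev/sdX
```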
  6. OK, I removed the RAID 0 from the BIOS and added both SSDs as cache drives. Now I see the attached error message. Clicking Format does nothing.
  7. Thanks for the quick reply. I don't suppose I have much choice, but is a btrfs raid0 pool as fast as I would expect (e.g. nearly double the speed of a single SSD)? Lee.
  8. I want to use two SSDs in a RAID 0 configuration as a cache drive. I created a RAID 0 array in the BIOS of my unRAID server; however, unRAID does not seem to see that, as it still sees the individual disks. Is this expected?
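For readers landing here: as the rest of the thread shows, unRAID ignores motherboard/BIOS RAID and builds the cache pool itself with btrfs, so the striping happens at the filesystem level. A rough sketch of what a two-device raid0 pool amounts to under the hood (device names are hypothetical, and the dry-run guard means the command is only printed, since actually running it would destroy data):

```shell
# Sketch only: unRAID issues the equivalent itself when both SSDs are
# assigned to the cache pool with the raid0 profile selected.
# Device names are hypothetical; DRY_RUN=1 (default) prints instead of runs.
make_cache_pool() {
    run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }
    # Stripe data across both devices; keep metadata mirrored for safety.
    run mkfs.btrfs -f -d raid0 -m raid1 /dev/sdX1 /dev/sdY1
}

make_cache_pool
```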
  9. Hmm, I've updated the container to the latest version and still can't get to ruTorrent; it just seems to loop trying to start. Privoxy works fine.
  10. From my observations, the GPU does power down when not in use; however, it isn't completely off. When I start a VM that uses the GPU, the power consumption (measured at the wall) jumps 100-200 W, even with the VM sitting idle doing nothing. My assumption is that this is the GPU drawing power, as the consumption before starting a VM is around 70 W, which seems too low to include GPU draw. The fans on the GPU are on, though, so it must actually be powered up. This is an AMD RX480, so it's entirely possible that newer GPUs have a way of "powering down" when not in use which older ones do not.
  11. Ah, how did I miss that...! Thanks; changed to France and it's working now. I think I was confused because I had a different issue related to PIA as well: I could not access their website due to an SSL_PROTOCOL_ERROR. After googling, it seems this is related to my ISP's WEBSAFE filter, which appears to have started causing issues with PIA from yesterday. After disabling the WEBSAFE filter in my ISP account settings, I can now access the PIA website again. It seems too coincidental that both issues happened at once, but maybe I'm just unlucky!
  12. Is anyone having issues with PIA today? Suddenly this docker will not work, and I cannot access the PIA website (from another PC), which throws the error ERR_SSL_PROTOCOL_ERROR. The docker log seems to show this error over and over again:
      2018-04-03 11:03:00,913 DEBG 'start-script' stdout output:
      [info] Attempting to curl http://209.222.18.222:2000/?client_id=3663b5f1cc355a121364e709954f045c7b1bc67d6aabf721ee1eb382e30aa751...
      2018-04-03 11:03:00,946 DEBG 'start-script' stdout output:
      [warn] Response code 000 from curl != 2xx
      [warn] Exit code 7 from curl != 0
      [info] 12 retries left
      [info] Retrying in 10 secs...
      2018-04-03 11:03:00,980 DEBG 'start-script' stdout output:
      [info] Successfully retrieved external IP address 212.92.117.185
      2018-04-03 11:03:10,975 DEBG 'start-script' stdout output:
      [warn] Response code 000 from curl != 2xx
      [warn] Exit code 7 from curl != 0
      [info] 11 retries left
      [info] Retrying in 10 secs...
      2018-04-03 11:03:21,013 DEBG 'start-script' stdout output:
      [warn] Response code 000 from curl != 2xx
      [warn] Exit code 7 from curl != 0
      [info] 10 retries left
      [info] Retrying in 10 secs...
      2018-04-03 11:03:31,040 DEBG 'start-script' stdout output:
      [warn] Response code 000 from curl != 2xx
      [warn] Exit code 7 from curl != 0
      [info] 9 retries left
      [info] Retrying in 10 secs...
      2018-04-03 11:03:41,072 DEBG 'start-script' stdout output:
      [warn] Response code 000 from curl != 2xx
      [warn] Exit code 7 from curl != 0
      [info] 8 retries left
      [info] Retrying in 10 secs...
      2018-04-03 11:03:51,099 DEBG 'start-script' stdout output:
      [warn] Response code 000 from curl != 2xx
      [warn] Exit code 7 from curl != 0
      [info] 7 retries left
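The retry behaviour in that log can be sketched as a generic shell loop. This is an illustration of the pattern, not the container's actual start script:

```shell
# Generic retry helper in the spirit of the log above: run a command
# up to $1 times, sleeping $2 seconds between attempts.
# Illustration only, not the real start script.
retry() {
    attempts=$1; delay=$2; shift 2
    i=1
    while ! "$@"; do
        if [ "$i" -ge "$attempts" ]; then
            echo "[warn] giving up after $attempts attempts" >&2
            return 1
        fi
        echo "[info] $((attempts - i)) retries left, retrying in ${delay}s" >&2
        sleep "$delay"
        i=$((i + 1))
    done
}
```

Used as, e.g., `retry 12 10 curl -fsS http://209.222.18.222:2000/`. Curl exit code 7, as seen in the log, means it could not connect to the host at all, which fits the VPN endpoint being unreachable.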
  13. I've no idea if it's the same thing I'm seeing with my RX480, but maybe try the suggestions in my thread:
  14. OK, thanks, I see them now. I don't suppose you know any way to script the eject, do you? Google shows me how to script ejecting a drive letter, but not a device.
  15. I don't have this icon unless a USB drive is plugged in? Even when I plug in a USB drive, it only shows me the USB drive. I'd never heard of "ejecting" a graphics card, hence my asking.
  16. OK, maybe a daft question, but how do I "safely remove" the card in Windows 10? I can't see any option to do that.
  17. I've created a new Windows 10 VM and am passing through my AMD RX480 graphics card. It works fine the first time, but after shutting down the VM, it will not start a second time without rebooting the whole tower. From reading around, I can see this might be a "reset bug", though that seems more applicable to Nvidia than AMD cards. Regardless, does anyone have a solution?
  18. Has the way the mover writes to the syslog changed in this release? I know the notes say the mover has been improved, but I'm seeing odd timings in the syslog as the mover runs, e.g.:
      Jan 24 08:13:55 Tower root: mover: started
      Jan 24 08:41:28 Tower root: move: file /mnt/cache/Backups/Backup 2018-01-22 [Mon] 10-59 (Full).7z
      Jan 24 08:41:28 Tower root: move: file /mnt/cache/Backups/Backup 2018-01-24 [Wed] 08-00 (Full).7z
      Above you can see the mover started at 08:13, then wrote nothing to the log until 08:41. Now, if the mover only wrote AFTER it completed a file, I could understand that, as the backup file mentioned is 60 GB in size. However, the second backup file is also 60 GB, yet it writes to the log at exactly the same time. Does it just write to the log in batches rather than as it is actually working?
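One way to see those gaps objectively is a tiny awk helper. This is hypothetical, not part of unRAID; it assumes the standard "Mon DD HH:MM:SS host ..." syslog prefix, with all lines falling within the same day, and reads log lines on stdin:

```shell
# Print the gap, in seconds, between consecutive syslog lines, to see
# whether the mover logs entries live or in batches. Hypothetical
# helper; assumes the usual "Mon DD HH:MM:SS host ..." prefix and
# lines within the same day.
log_gaps() {
    awk '{
        split($3, t, ":")
        secs = t[1] * 3600 + t[2] * 60 + t[3]
        if (NR > 1) print secs - prev
        prev = secs
    }'
}
```

Something like `grep "root: move" /var/log/syslog | log_gaps` should then show long runs of zeros if the entries really are flushed in batches.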
  19. I had no idea "LibvirtdWOL" and "Virtual Machine Wake On Lan" were the same thing. Sounds like you know more than I, so I'll leave you to it.
  20. I use the addon "Virtual Machine Wake On Lan" and just use my phone to send a wake on lan request.
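For anyone wanting to do the same without a phone app: a Wake-on-LAN magic packet is just 6 bytes of 0xFF followed by the target MAC address repeated 16 times. A small sketch that builds one as a hex string (the MAC below is a made-up example; actually sending it is left as a comment because it relies on bash's /dev/udp redirection and a hex-to-binary tool such as xxd):

```shell
# Build a Wake-on-LAN magic packet as a hex string: 6 x "ff" followed
# by the target MAC 16 times. The MAC used below is a made-up example.
wol_packet() {
    mac=$(printf '%s' "$1" | tr -d ':')
    out="ffffffffffff"
    i=0
    while [ "$i" -lt 16 ]; do out="$out$mac"; i=$((i + 1)); done
    printf '%s\n' "$out"
}

# To actually send it (assumes bash's /dev/udp and the xxd tool):
#   wol_packet aa:bb:cc:dd:ee:ff | xxd -r -p > /dev/udp/192.168.1.255/9
```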
  21. I can also confirm this update has fixed the issue on startup too.
  22. I've added your fix as per the FAQ and rebooted unRAID. I still see the same error in the log and cannot access Deluge or Privoxy. If I restart the container, it then works. I'm sure this has only started happening since updating the docker recently.
      2017-03-07 11:42:17,422 DEBG 'start-script' stdout output:
      [crit] 'tun' module not available, you will not be able to connect to Deluge or Privoxy outside of your LAN
      [info] Synology users: Please attempt to load the module by executing the following on your host: 'insmod /lib/modules/tun.ko'
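A quick way to check whether the host actually has the tun module that the [crit] line complains about (generic Linux commands, nothing specific to this container):

```shell
# Report whether the tun module is present, loadable, or missing.
# Generic Linux check; 'modprobe -n' is a dry run that only tests
# whether the module could be loaded.
tun_status() {
    if [ -c /dev/net/tun ]; then
        echo "tun available"
    elif modprobe -n tun 2>/dev/null; then
        echo "tun loadable"    # 'modprobe tun' would actually load it
    else
        echo "tun missing"
    fi
}
```

If it prints "tun loadable", running `modprobe tun` on the host (or, on Synology, the `insmod` line from the log) should clear the error.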