dvd.collector

Everything posted by dvd.collector

  1. Hmm, I've updated the container to the latest version and still can't get to ruTorrent; it just seems to loop trying to start. Privoxy works fine.
  2. From my observations, the GPU does power down when not in use, but it isn't completely off. When I start a VM that uses the GPU, the power consumption (measured at the wall) jumps 100-200 watts, even with the VM idle doing nothing. My assumption is that this is the GPU drawing power, as the consumption before starting a VM is around 70 watts, which seems too low to include GPU draw. The fans on the GPU are spinning though, so it must actually be powered up. This is an AMD RX480, so it's entirely possible that newer GPUs have a way of "powering down" when not in use which older ones do not.
  3. Ah, how did I miss that...! Thanks, changed to France and it's working now. I think I was confused because I had a different issue related to PIA as well: I could not access their website due to an SSL_PROTOCOL_ERROR. After googling, it seems this is related to my ISP's WEBSAFE filter, which started causing issues with PIA yesterday. After disabling the WEBSAFE filter in my ISP account settings I can access the PIA website again. It seems too coincidental that both issues happened at once, but maybe I'm just unlucky!
  4. Is anyone having issues with PIA today? Suddenly this docker will not work, and I cannot access the PIA website (from another PC), which throws the error ERR_SSL_PROTOCOL_ERROR. The docker log shows this error over and over again:

        2018-04-03 11:03:00,913 DEBG 'start-script' stdout output:
        [info] Attempting to curl http://209.222.18.222:2000/?client_id=3663b5f1cc355a121364e709954f045c7b1bc67d6aabf721ee1eb382e30aa751...
        2018-04-03 11:03:00,946 DEBG 'start-script' stdout output:
        [warn] Response code 000 from curl != 2xx
        [warn] Exit code 7 from curl != 0
        [info] 12 retries left
        [info] Retrying in 10 secs...
        2018-04-03 11:03:00,980 DEBG 'start-script' stdout output:
        [info] Successfully retrieved external IP address 212.92.117.185
        2018-04-03 11:03:10,975 DEBG 'start-script' stdout output:
        [warn] Response code 000 from curl != 2xx
        [warn] Exit code 7 from curl != 0
        [info] 11 retries left
        [info] Retrying in 10 secs...

     The same warn/retry block then repeats every 10 seconds with 10, 9, 8 and 7 retries left.
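     For anyone trying to narrow this down, the check the container performs can be reproduced by hand from any LAN machine. A minimal sketch, assuming curl is installed on that machine (the URL, including the client_id, is taken verbatim from the log above; curl exit code 7 means it could not connect at all):

        # Reproduce the container's port-forward assignment check manually.
        curl -v --max-time 10 \
          "http://209.222.18.222:2000/?client_id=3663b5f1cc355a121364e709954f045c7b1bc67d6aabf721ee1eb382e30aa751"
        echo "curl exit code: $?"   # 7 = failed to connect, matching the log

     If this fails from a machine outside the container too, the problem is between you and PIA rather than in the docker itself.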
  5. I've no idea if it's the same thing I'm seeing with my RX480, but maybe try the suggestions in my thread:
  6. OK thanks, I see them now. I don't suppose you know of any way to script the eject, do you? Google shows me how to script ejecting a drive letter, but not a device.
  7. I don't have this icon unless a USB drive is plugged in?? Even when I plug in a USB drive, it only shows me the USB drive. I'd never heard of "ejecting" a graphics card, hence me asking.
  8. OK, maybe a daft question but how do I "safely remove" the card in Windows 10? I can't see any option to do that.
  9. I've created a new Windows 10 VM and am passing through my AMD RX480 graphics card. It works fine the first time, but after shutting down the VM, it will not start a second time without rebooting the whole tower. From reading around, I can see this might be a "reset bug", though that seems more applicable to Nvidia than AMD cards. Regardless, does anyone have a solution?
  10. Has the way the mover writes to the syslog changed in this release? I know the notes say mover has been improved, but I am seeing odd timings in the syslog as the mover runs, i.e.:

        Jan 24 08:13:55 Tower root: mover: started
        Jan 24 08:41:28 Tower root: move: file /mnt/cache/Backups/Backup 2018-01-22 [Mon] 10-59 (Full).7z
        Jan 24 08:41:28 Tower root: move: file /mnt/cache/Backups/Backup 2018-01-24 [Wed] 08-00 (Full).7z

     Above you can see mover started at 08:13, then wrote nothing to the log until 08:41. If mover logged each file AFTER completing it I could understand that, as the first backup file is 60GB in size. However, the second backup file is also 60GB, yet it is logged at exactly the same time. Does mover just write to the log in batches rather than as it is actually working?
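     One way to see whether the entries really land in batches is to watch the syslog live while mover runs. A minimal sketch, assuming the standard unRAID syslog location:

        # Follow mover-related syslog entries as they are written.
        tail -f /var/log/syslog | grep --line-buffered 'move'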
  11. I had no idea "LibvirtdWOL" and "Virtual Machine Wake On Lan" were the same thing. Sounds like you know more than I, so I'll leave you to it.
  12. I use the add-on "Virtual Machine Wake On Lan" and just use my phone to send a wake-on-LAN request.
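     For reference, the same magic packet can be sent from a shell instead of a phone app. A sketch assuming the wakeonlan utility is installed, with a made-up MAC address (52:54:00 is the usual KVM/QEMU prefix; substitute your VM NIC's actual MAC):

        # Send a WOL magic packet to the VM's virtual NIC (hypothetical address).
        wakeonlan 52:54:00:aa:bb:cc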
  13. I can confirm this update has fixed the issue on startup too.
  14. I've added your fix as per the FAQ and rebooted unRAID. I still see the same error in the log and cannot access Deluge or Privoxy. If I restart the container, it then works. I'm sure this has only started happening since updating the docker recently.

        2017-03-07 11:42:17,422 DEBG 'start-script' stdout output:
        [crit] 'tun' module not available, you will not be able to connect to Deluge or Privoxy outside of your LAN
        [info] Synology users: Please attempt to load the module by executing the following on your host: 'insmod /lib/modules/tun.ko'
  15. I've been running unRAID 6.3.0 with this docker for ages with no problems, though. I'll try what is listed in the FAQ.
  16. Recently I have been having an issue with this docker where, on startup of unRAID, the docker won't work until I manually restart it. Any ideas what could cause this? The log shows the following:

        2017-03-07 06:05:36,742 DEBG 'start-script' stdout output:
        [crit] 'tun' module not available, you will not be able to connect to Deluge or Privoxy outside of your LAN
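     The FAQ fix referred to in post 14 above boils down to loading the tun module before Docker starts. A minimal sketch, assuming the standard unRAID /boot/config/go startup script:

        #!/bin/bash
        # /boot/config/go runs at every unRAID boot, before Docker starts.
        # Load the tun module so the VPN container can create its tunnel device.
        modprobe tun
        # Start the Management Utility (stock line from the default go file).
        /usr/local/sbin/emhttp &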
  17. Quoting the exchange:

        my guess is that you need to include the full path to the Powerdown command.

     Thanks. Do you know what the full path is?

        Note that /usr/local/sbin/powerdown is a script that just invokes either /sbin/reboot or /sbin/poweroff and has been deprecated. 'powerdown' is a script formerly used by webGui/emhttp to initiate a graceful power off or reboot. The problem is that, in its original form, 'powerdown' relied on the emhttp process to sequence the operation, but there were cases where this could not happen or proceeded very slowly. Ultimately the system commands 'poweroff' or 'reboot' were invoked to complete the operation. Anyway, the whole shutdown/poweroff/reboot operation was re-coded a couple of releases ago, so the "stock" Linux reboot and poweroff commands now execute a "clean" reboot or poweroff properly, or at least time out in a reasonable amount of time (hopefully before the UPS battery dies). The point is you should use /sbin/poweroff instead of 'powerdown' or '/usr/local/sbin/powerdown' in your 'at' job.

     Thank you, this worked.
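     Putting that advice into practice, the scheduled job from post 19 below becomes:

        # Schedule a clean shutdown for 22:30 using the full path, per the advice above.
        echo "/sbin/poweroff" | at 22:30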
  18. "my guess is that you need to include the full path to the Powerdown command." Thanks. Do you know what the full path is?
  19. Has something changed with "powerdown" in this release? I have some at commands like this:

        echo "powerdown" | at 22:30

     which are not working any more. The error is as follows:

        sh: line 21: powerdown: command not found

     However, if I log into the server via PuTTY and simply type powerdown, it works fine??
  20. Try downgrading the machine version to 2.3.
  21. I don't think it's the same, but I found my VM would hang randomly on startup and work the second time too. However, mine showed as paused on the VM page. I worked out that it was the i440fx machine type causing my hangs: when I selected 2.5 I got the hangs; when I downgraded to 2.3 the hangs went away. No idea what the difference between the two is, though.
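     For anyone who prefers the command line to the unRAID GUI, the machine-type change can be made by editing the domain XML. A sketch assuming a hypothetical VM named "Windows10" and that virsh is available on the host:

        # Open the VM's libvirt XML in an editor (the VM name is hypothetical).
        virsh edit Windows10
        # In the <os> section, change the machine attribute, e.g. from
        #   <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>
        # to
        #   <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>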
  22. I've had this for two different reasons. First, the Win10 installation ran out of disk space; nothing I did in the repair screen worked until I guessed it might be space-related and increased the disk space in the unRAID GUI. Second, when Windows has done a major upgrade, the VM fails to boot with more than one CPU assigned. I had to temporarily reduce the CPUs to 1 and then, once the updates were finished, assign more back.
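     The temporary CPU reduction can also be sketched with virsh rather than the GUI (VM name hypothetical; --config applies the change at the next boot, and the restore count must not exceed the VM's maximum vCPUs):

        # Drop to a single vCPU so the upgraded Windows VM will boot.
        virsh setvcpus Windows10 1 --config
        # After Windows finishes its updates, shut down and restore the count.
        virsh setvcpus Windows10 4 --config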
  23. Quoting the advice I received:

        Yep, no way to change file systems and rebuild to a new drive at the same time; you have to copy the data. At this point, it's probably going to be quicker to go ahead and add the new 2TB to the array, format it as XFS, and copy the data back over again from the 1TB Reiser disk. If you recreate the partition the way UD set it up, the original data will probably show up intact, but you will still have to repartition and format it to add it to the array. I can't think of a way to get around that and still add the drive to the parity-protected array.

     OK, thanks for your help.
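     For the copy step itself, a minimal sketch assuming the old ReiserFS disk is mounted as /mnt/disk1 and the new XFS disk as /mnt/disk2 (adjust both paths to your own layout):

        # Copy everything across, preserving permissions, times and extended attributes.
        rsync -avPX /mnt/disk1/ /mnt/disk2/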