
itimpi

Moderators
  • Posts: 20,775
  • Joined
  • Last visited
  • Days Won: 57

Everything posted by itimpi

  1. The value I have set for WebUI is http://[IP]:[PORT:8888]
  2. If using the docker version, make sure that the network option is set to ‘bridged’ rather than ‘host’ (which I think is the default). I know I had to make this change to get mine working again.
  3. I’ve got a strange case where I have 1 item where each worker thread seems to want to re-encode a specific TV episode separately. I thought that if an episode has been re-encoded this would be recognised and it would then be ignored? Any idea what might cause this behaviour?
  4. If you use the docker-based versions then they are not affected by this upgrade, as they are isolated from host OS changes. Thought I should mention it in case this plugin never gets updated.
  5. I think you need a minimum of 4GB RAM to successfully upgrade via the Web GUI.
  6. OK - that makes sense. My system does not have hyperthreading which will be why I am seeing this.
  7. I notice on the new dashboard that 100% CPU usage on the overall state is a longer bar than 100% on individual CPUs. Is this by design?
  8. Probably not - I am seeing it in Microsoft Edge on Windows 10 Pro. Maybe I am only noticing it now as I am more likely to stay on the Dashboard with the new design. In the past I have tended to stay on the Main tab.
  9. I have been noticing that when I have just navigated to the new dashboard and then click on a running docker container I get the drop down menu of available actions almost immediately. However if I have left it on the dashboard for some time then it can take 30 seconds or more to react to such a click. It is as if there is some background script running that is slowing things down. I thought it was worth reporting to see if others see similar symptoms or if it is just something local to my system.
  10. 6.7.0-rc1 is available if you are prepared to run an RC version? If not you will just have to ignore this message until 6.7.0 stable is released.
  11. Unraid is not intended to be secured against those who have physical access to the server. You can (if you want) encrypt the data stored on your drives for additional security but I do not believe the majority of Unraid users bother with this.
  12. Have you actually tried this? I thought that in such a scenario a simple rename was used which is close to instantaneous.
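The rename behaviour in point 12 is easy to demonstrate; a minimal Python sketch (file name and size are my own, for illustration) showing that within a single filesystem a rename is a metadata-only operation, so even a large file "moves" near-instantly:

```python
# Sketch: within one filesystem, os.rename is a metadata-only operation,
# so "moving" even a large file is close to instantaneous; across
# filesystems it fails and tools fall back to copy + delete instead.
import os
import tempfile
import time

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "big.bin")
    dst = os.path.join(d, "renamed.bin")
    with open(src, "wb") as f:
        f.write(b"\0" * (64 * 1024 * 1024))  # a 64 MB file

    t0 = time.perf_counter()
    os.rename(src, dst)  # same filesystem: a rename, not a copy
    elapsed = time.perf_counter() - t0
    moved = os.path.exists(dst) and not os.path.exists(src)
    print(f"renamed 64 MB in {elapsed:.4f}s, success={moved}")
```

Higher-level tools such as shutil.move behave the same way: a plain rename when source and destination are on one filesystem, and a much slower copy-then-delete when they are not.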
  13. I get all 4 cores used to their max when converting a single file. I believe the ffmpeg libraries are highly optimised to exploit multi-core systems.
  14. It is worth pointing out that a rebuild never fixes an ‘unmountable disk’ status. A rebuild is intended to rebuild a failed drive onto a new drive in the same state as it was at the point of failure. If a disk is flagged as unmountable before starting the rebuild it will always be unmountable afterwards. Since rebuild works at the physical sector level it has no idea whether the file system is good, or even whether the disk has been formatted. The recommended procedure is to try to fix the file system on the emulated drive before doing the rebuild, as if it cannot be fixed at that point you will have the same corruption present after the rebuild.
  15. No - they are completely different. The parity disk is about protecting the array disks, and you are correct in it needing to be at least as large as the largest data disk. A cache disk is about providing higher-performance storage for running apps/VMs and for the initial write of new files. If you DO want a cache disk then it is typical (although not required) to use an SSD as this gives the highest performance. A cache disk has no minimum or maximum size requirement - you determine what meets your basic usage pattern. If the cache disk is a single drive then it has no redundancy (many are happy with this). A cache can also be run as a 'pool' of multiple drives, which gives redundancy if that is important for the files that you place on there. However, be aware that running a pool with more than one drive requires the cache to be formatted as BTRFS, and that has proved more fragile and prone to corruption than XFS. Not sure what you have put on the cache disk so far. If you want to repurpose the current cache disk for use as a parity disk, then: make sure that the VM and docker services are stopped; set any shares that might be using the cache disk to Use Cache=Yes (if this sounds counter-intuitive, turn on the GUI help to see what the actual effect of this setting is); and then run mover to get any files currently on the cache disk moved back to the array. After doing that you can stop the array; unassign the cache disk; re-assign the drive as parity1; and start the array to start building parity (it will probably take most of a day with an 8TB drive). At this point you can re-enable the docker and/or VM services if you use them. You could then later add a smaller drive as cache (assuming you want one).
  16. The docker image normally only holds the binaries for the containers and not their variable data. Assuming that is how you are using docker, the easiest thing to do is: stop the docker service under Docker->Settings; if you are using the ‘appdata’ share to hold the docker containers’ variable data, make sure the share is set to Use Cache=Prefer; run mover to get ‘appdata’ files that are on the array moved to the cache disk; delete the existing docker image file; in the docker settings specify the new location to be used for the docker image; restart the docker service (this will create a new empty docker image file); then use the Previous Apps option in Community Applications to re-instate your dockers (with their existing settings). At this point you can adjust (if necessary) any settings you want to change. As you add each container back, the latest binaries will be downloaded and added to the docker image.
  17. Normally the docker images only contain the binaries - all variable data is external to the docker. The normal place to configure for this variable data is under the appdata share, although it is up to the user where such data is placed.
  18. The current release is 6.6.6 and has been for some time. Currently no RC outstanding.
  19. Not quite sure what you are asking. You seem to say that you have both python2 and python3 installed, and then you say you do not have python installed! If you are simply saying that the 'python' command line option is not present, then this can be set as a link to either the 'python2' or 'python3' commands (depending on which version of python you want to default to).
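The fix described in point 19 is just a symlink on PATH; here is a hedged sketch of what it does, modelled with os.symlink in a temporary directory (on a real system the link would live in a PATH directory such as /usr/bin, which is an assumption about your install):

```python
# Sketch: make a `python` command that resolves to python3 via a symlink.
# Done in a temp dir for safety; on a real system you would create the
# link in a directory on PATH (e.g. /usr/bin - an assumption; adjust to
# your install, and point at python2 instead if you want that default).
import os
import shutil
import subprocess
import tempfile

with tempfile.TemporaryDirectory() as d:
    target = shutil.which("python3")   # the interpreter to default to
    link = os.path.join(d, "python")
    os.symlink(target, link)           # `python` now resolves to python3
    out = subprocess.run([link, "--version"],
                         capture_output=True, text=True)
    version = (out.stdout or out.stderr).strip()
    print(version)
```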
  20. Diagnostics are a ZIP file obtained via Tools->Diagnostics (or by using the ‘diagnostics’ command from a terminal session).
  21. It looks as if VMs might also be enabled since libvirt is being mounted. If so, disabling that might also be a good idea.
  22. You need to upgrade to the latest Unraid release. The release you are using has broken task scheduling.
  23. Has there been any thought to automatically renaming the converted files to reflect the fact that they have been re-encoded? For instance I was thinking of replacing h264 in the filename with either h265 or HEVC.
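As a sketch of what the renaming suggested in point 23 could look like (the function name and the h264-to-hevc tag convention are my own for illustration, not something the plugin provides):

```python
# Sketch: swap an h264/x264 codec tag in a filename for hevc,
# case-insensitively, also tolerating the dotted "h.264" form.
import re

def tag_as_hevc(filename: str) -> str:
    """Replace an h264/x264 codec tag in a filename with hevc."""
    return re.sub(r'[hx]\.?264', 'hevc', filename, flags=re.IGNORECASE)

print(tag_as_hevc("Show.S01E01.1080p.h264.mkv"))
# -> Show.S01E01.1080p.hevc.mkv
```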
  24. Have you mapped the container location /etc/influxdb to an external location on the host (typically /mnt/user/appdata/influxdb) so that configuration information is held externally to the container?
  25. I have mine set as in the attached image (based on the example on GitHub). Points to note are: Advanced view is enabled so that I can set the WebUI entry; I have pointed the /library container location to a temporary location on my server as I do not (yet) want my main media library to be scanned; I have pointed the transcoding location to a location that is in RAM on the host (if you do not want this then point to a suitable location on the cache disk); and I have selected 'host' style networking, which means I have also set up a port mapping entry in case I want to use a different port to the default of 8888.