Everything posted by itimpi

  1. Every time I’ve seen this in my log I notice that the time gets adjusted even if only by a few seconds. Are you sure that is not happening?
  2. I created some for my own use under /mnt/disks as that seemed the least likely to cause problems. However, you will need to recreate the mount points and remount any such devices using either the go file or User Scripts, as such mounts do not survive reboots.
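     As a minimal sketch (the device name /dev/sdX1 and the mount point name "mydisk" are only placeholders - adjust them for your own hardware), the lines added to the go file, or to a User Scripts script set to run at array start, would look something like:
        mkdir -p /mnt/disks/mydisk          # recreate the mount point (lost on every reboot)
        mount /dev/sdX1 /mnt/disks/mydisk   # remount the device on that mount point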
  3. UnRAID does not support ZFS as a standard feature. Perhaps you were getting confused with XFS, which is what I would expect most people are using for array drives.
  4. I would have thought you could achieve this now using the User Scripts plugin.
  5. When you realise that parity has no concept of ‘data’, but merely of sectors with no understanding of what they contain, you realise why the disk HAS to be zero when adding it if parity is to be maintained. Humans tend to think of a disk with no files on it as being ‘clear’, but that is not truly the case when one thinks at the physical sector level.
  6. Ok that makes sense. Now that I know what the messages relate to I was able to track down what device had the running browser session.
  7. If you are doing anything in parallel with a parity check/build then you will get a very significant decrease in the parity check speed due to the disks continually having to move the read/write heads back and forth.
  8. If you have run pre-clear (which many do just to stress test new disks) the Clear phase gets skipped, making the disk immediately available on starting the array. Also, if you do not have parity then Clear is not required.
  9. This has been standard behavior on adding a disk to a parity protected array ever since v5. The difference in v5 was that the array was offline while clear took place, while in v6 the array is usable (although the disk itself is not until the Clear finishes).
  10. I agree it can change as I have a different value displayed when I run Unmanic under docker. However, I have made a few installs under Linux on SBC systems and they all seem to show that address. I guess I was wondering what it means if it is not a system that is actually on my local LAN.
  11. I am seeing messages displayed of the form:
      304 GET /?ajax=workersInfo (192.168.0.12) 7.13ms
      What is strange is that I do not see where that IP address has come from. It is NOT the address of the system running Unmanic. It is almost as if that address is hard-coded somewhere?
  12. Interesting! It was working for me previously so I assume this is a bug introduced with the reworking of the logging capabilities?
  13. Have you set the option to scan on starting the container? If not I have found I get a similar delay.
  14. When you get a crc error it can sometimes take a few seconds for the system to recover from it. In terms of disk access speeds this is an eternity, so even a relatively small number can adversely affect performance if they are occurring regularly. @PoppaJohn: it could be worth reversing the change that you think ‘fixed’ your problem to see if it reappears. If not, then the fact that the error stopped with this change is probably just a coincidence.
  15. Yes. UnRAID runs purely from RAM and this includes the log files. Unless special action is taken to write them to persistent storage they are lost on a reboot.
  16. One way of thinking is to consider the container path to be the location that the app thinks is used for a particular purpose. Whether that is adjustable is up to what configuration options are available with the container’s app. Many containers have internal paths that have been set by the container producer and are not amenable to change.
      The host path is the real location that the data is stored at in Unraid. Setting up such a mapping means a re-direct to this location happens without the container being aware of it. If there is no mapping between a container path and an Unraid host path then the path is only visible within the container and any data stored on that path is internal to the container.
      In the case you mention of getting data out of the container, you need to set up a mapping between where the container thinks it is writing to and where on the Unraid server you want that data to appear. You also need to check that the container has write permissions set in the path mapping.
      One consequence of this is that if you have multiple containers that need to access the same data then you set up each container path that is to be shared to point to the same location at the Unraid level. It does not matter if the containers’ internal paths are different (and this is often the case) as they are transparently re-directed to the Unraid location.
      Does the above explanation help in any way?
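      Purely as an illustration (the image names and internal paths below are made up, not taken from your setup), two containers sharing the same Unraid folder via different internal paths would look something like this if written out as docker run commands rather than template path mappings:
         docker run -d -v /mnt/user/Media:/media      some/mediaserver   # this container sees the share as /media
         docker run -d -v /mnt/user/Media:/downloads  some/downloader    # this one sees the same share as /downloads
      Both containers read and write the same files under /mnt/user/Media even though their internal paths differ.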
  17. One thing to be aware of with vdisks is that normally they are set up as ‘sparse’ files. This means that disk space is not actually given to the file on the physical drive for parts that are not used, so the space used on disk can be less than the notional size of the vdisk file. As the VM runs it can write to unused parts of the vdisk, which will cause extra space on the physical drive to be used. If the physical disk runs out of space during this process the VM can stop running correctly.
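      You can see the effect for yourself (the path here is just an example - use whatever your vdisk is actually called):
         ls -lh /mnt/user/domains/MyVM/vdisk1.img   # notional (sparse) size of the vdisk
         du -h  /mnt/user/domains/MyVM/vdisk1.img   # space actually allocated on the drive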
  18. No. Parity can only rebuild a disk with exactly the same contents (and therefore the same format). Parity has no concept of file system as it works at the physical sector level with no understanding of what the contents of the sectors mean.
  19. I am not a user of NZBGet but I would have thought that a good starting point would be to change the MainDir setting in NZBGet to something simple like /download and then set up a path mapping in the container settings to map /download inside the container to somewhere in Unraid storage (e.g. /mnt/cache/appdata/nzbget/download). Since the other entries are sub-folders of the MainDir setting, that would result in all of them being external to the container. Others who actually use NZBGet may chime in with the settings they actually use. I am surprised that the default templates do not already have such a mapping.
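      In the container template that mapping would just be an extra path entry (these values are only the example ones from above, not something I have tested with NZBGet):
         Container Path: /download
         Host Path:      /mnt/cache/appdata/nzbget/download
      which is the same thing a plain docker run command would express as -v /mnt/cache/appdata/nzbget/download:/download.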
  20. It definitely sounds like that! Only paths you have explicitly mapped are outside the container. I would think that there is a NZBGet setting that controls where the files should be placed and this is not currently pointing to the /media location (which would get files into Media/TV Shows at the Unraid Level).
  21. Yes! That is how you abstract the container’s internal configuration from the actual physical configuration on the server.
  22. When you say there is nothing in /mnt are you talking about inside the docker container or at the Unraid host level? With the settings you show what you see at the Unraid host level at ‘/mnt/user/Media/TV Shows’ is what you should see inside the container at /media. Check you have the case right in the paths as Linux is case-sensitive.
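      A quick way to confirm the mapping is working (replace <container-name> with the actual name shown on the Docker tab):
         ls "/mnt/user/Media/TV Shows"              # on the Unraid host
         docker exec <container-name> ls /media     # inside the container
      Both listings should show the same files if the mapping and the case of the path are correct.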
  23. I was not aware that they even made 64-bit capable systems without some sort of hardware virtualisation! I would have thought that on UnRAID, if the user does not have hardware that can support any sort of virtualisation at the hardware level, one probably does not want to even attempt it as it would be such an outlier case. That is very different to the current behavior of supporting virtualisation as long as there is basic support at the motherboard/CPU level. There is also support for the increasingly common case of the next level, which adds hardware passthrough capabilities.
  24. It works for me! Did you remember to click the Apply button after changing that list, as otherwise the changes do not get saved? I have been caught out by that in the past.
  25. Just click on “HERE” and it takes you to the relevant Settings page.