Everything posted by itimpi

  1. Any share that has Use Cache=Only will only exist on the cache drive. If Use Cache=Prefer then it will normally be on the cache drive, but is allowed to overflow to the array if the cache drive runs out of space.
  2. The fact it is locking up during parity check suggests something power related as that is the time the system will be under maximum load from a power perspective.
  3. I think that USB3 support during the boot process can be flaky with some BIOSes. I would expect this to progressively improve over time.
  4. I very much doubt that this restriction will be lifted (but I could be wrong). Since you could just use a small flash drive to satisfy the requirement of having at least one array drive, it is not much of a hardship. What we DO know is coming in the next Unraid release is multiple cache pools, which will allow those running primarily Docker/VMs to tune their use of caches to better suit their workload.
  5. This is probably because there is no free space on the cache drive? The vdisk for a VM starts off using ‘sparse’ allocation so that the physical space used is less than the logical size, but over time as the VM runs the physical space requirement can grow towards the logical one, and if there is insufficient space to grow the vdisk then symptoms like you describe can happen. It also looks like the docker.img file is probably corrupt as well, so it would need to be deleted and recreated via Apps >> Previous Apps. Your disk1 is also completely full. This is probably because what looks like your media share has a Split Level of 1, and when Unraid selects a disk for a file on the array the Split Level setting overrides the allocation method, which can force files to a particular disk even if it is full. This can result in mover not being able to move files from cache to array to free up cache space. You will need to either manually move files from disk1 to free up space or relax the Split Level setting.
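The sparse-allocation behaviour described in the post above can be sketched in a few lines. This is a minimal illustration, assuming a Linux filesystem with sparse-file support (as Unraid's are); the file name and sizes are made up for the demo, not taken from any real vdisk:

```python
import os, tempfile

def sparse_demo(logical_mib=100, write_mib=10):
    """Create a sparse file; return (logical, physical_before,
    physical_after) sizes in bytes."""
    with tempfile.TemporaryDirectory() as d:
        vdisk = os.path.join(d, "vdisk1.img")  # hypothetical vdisk name
        with open(vdisk, "wb") as f:
            f.truncate(logical_mib * 1024 * 1024)  # sets logical size only; no data written
        physical_before = os.stat(vdisk).st_blocks * 512  # blocks actually allocated
        with open(vdisk, "r+b") as f:
            f.write(b"\xff" * (write_mib * 1024 * 1024))  # real data forces allocation
            f.flush()
            os.fsync(f.fileno())
        physical_after = os.stat(vdisk).st_blocks * 512
        return logical_mib * 1024 * 1024, physical_before, physical_after

logical, before, after = sparse_demo()
print(f"logical={logical} bytes, allocated before write={before}, after={after}")
```

The physical footprint starts near zero and grows towards the logical size as data is written, which is exactly why a nearly full cache drive can break a VM whose vdisk still appears to have room.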
  6. SSDs in the main array are not a supported configuration, although some users have tried it with apparent success. The main restriction is that array drives do not support the ‘trim’ operation, so SSD performance can degrade over time as a result.
  7. The one thing that would cause a problem is any VM that uses hardware pass-through, as with new hardware the IDs of the passed-through hardware will almost certainly change.
  8. Note that when adding the apps back you need to do it via the Previous Apps section of the Apps tab if you want to retain your settings. Adding them back as if they are new apps will NOT retain your settings.
  9. I believe that earlier releases of Unraid were more tolerant of certain types of error in the network.cfg file. That is why the advice to delete the existing one to revert to default settings is often given.
  10. It sounds as if you do not understand how parity works. The parity drive has no concept of data; it just holds the information needed to reconstruct any particular bit on a failed drive. You might want to read this section from the online manual to get a better understanding of parity.
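The bit-level idea above can be sketched with XOR, which is how single parity is computed: parity is the bitwise XOR of the corresponding bits on every data drive, so any one failed drive can be rebuilt from parity plus the survivors. The drive contents below are made-up byte strings purely for illustration:

```python
from functools import reduce

def xor_blocks(blocks):
    """Bitwise-XOR equal-length byte blocks together."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three pretend data drives, each holding a small block of bytes.
data_drives = [b"\x10\x22\x33", b"\x04\x05\x06", b"\xff\x00\x7f"]
parity = xor_blocks(data_drives)  # parity knows nothing about files

# Simulate losing drive 1, then rebuild it from parity + the survivors.
survivors = [d for i, d in enumerate(data_drives) if i != 1]
rebuilt = xor_blocks(survivors + [parity])
print(rebuilt == data_drives[1])  # the lost drive is recovered exactly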
  11. When you format a disk then space is used to create the file system control structures that will be used to manage files that you later add. I think this overhead works out to something like 1-2% of the total size of the drive.
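As a rough sanity check on that figure: the 1–2% range comes from the post above, and the drive size below is just an illustrative assumption, not a measurement of any particular filesystem.

```python
drive_tb = 8                     # an example 8 TB drive
drive_bytes = drive_tb * 10**12
for pct in (0.01, 0.02):         # the 1-2% overhead estimate from above
    print(f"{pct:.0%} of {drive_tb} TB ~= {drive_bytes * pct / 10**9:.0f} GB")
```

So on a large drive the filesystem control structures can plausibly account for tens of gigabytes of "missing" space immediately after a format.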
  12. Looks like the Unraid flash drive is getting read failures. This means it has either failed or dropped offline. You cannot get the GUI back with the flash drive not available as that is where all config information is held.
  13. Doing so would mean that parity would no longer be valid and parity would then need to be rebuilt. Also if a drive failed while parity was disabled you are not protected so you would lose the contents of that drive.
  14. Have you checked to see if there are already copies of the files on the array as well as on the cache? If so mover will not move such files - you would need to manually delete one of them.
  15. Yes - in principle it will all ‘just work’. The one exception is when you are running VMs with hardware pass-through, as in such a case the hardware IDs of the passed-through hardware will probably change and need adjusting.
  16. Did you read the link I gave earlier in the thread? Once you start getting complete disk rotations as part of a write operation, that is very much a limiting factor.
  17. The moment you add a parity disk to the array then the perceived performance will drop off due to the overheads of the way parity is managed. You might find this link to be of interest.
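A simplified model of the penalty described above: with the default read/modify/write scheme, a single array write becomes four disk operations (read old data, read old parity, write new data, write new parity), and each write lands roughly one full platter rotation after its read. The numbers below are illustrative assumptions (a 7200 rpm drive), not Unraid measurements:

```python
def rotations_added_per_write(parity_scheme):
    """Extra full rotations of latency a single array write costs."""
    # "none": no parity drive, a write is just a write.
    # "rmw": read/modify/write -- the data and parity drives each do a
    # read followed by a write one rotation later, running in parallel,
    # so roughly one rotation of latency is added per write.
    return {"none": 0, "rmw": 1}[parity_scheme]

rpm = 7200
rotation_ms = 60_000 / rpm  # one platter rotation in milliseconds
added = rotations_added_per_write("rmw") * rotation_ms
print(f"~{added:.2f} ms of rotational latency added per write at {rpm} rpm")
```

Several extra milliseconds per write is substantial against a spinning drive's normal access time, which is why the perceived drop-off is so noticeable.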
  18. Are you using many VMs or Docker containers? If so it might be worth keeping the 250GB SSD as a cache drive and using the 1TB drive via the Unassigned Devices plugin for storing VM images to get maximum performance. Of course the converse is also possible, and you should select the option that best supports your work pattern. If you go such a route then you will not have redundancy on the SSDs, but you could look at the CA Backup and VM Backup plugins as a way of automatically backing their contents up to the array at intervals of your choosing.
  19. I was surprised that WireGuard was not mentioned as one of the significant improvements since the previous podcast.
  20. Not quite. You do not Unassign the drive or use the UD plugin. After stopping the array click on the drive to select the encryption you want. Now start the array and you are given the option to format the drive with encryption.
  21. Provide the diagnostics zip file (Tools >> Diagnostics) so we can get a better idea of the current state of your system.
  22. Just tried it here and it seems to be working great. No errors now being logged and files seem to be syncing fine. Thanks for your efforts in fixing this.
  23. Unfortunately not. In a Pi cluster each node is running its own OS and only software specially written to run in cluster mode can take advantage of all the nodes.
  24. This is documented here in the online documentation.