itimpi (Moderators)

Posts: 20,703 | Days Won: 56

Everything posted by itimpi

  1. You can go to the 6.8.2 release by downloading the zip file from the Limetech site and extracting all the bz* files to the root of the flash drive. It is a good idea to back up the flash drive first.
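
     In case it helps, this is a minimal Python sketch of that manual step. It assumes the release zip has already been downloaded (the filename shown is only an example) and that the flash drive is mounted at /boot as it is on a running Unraid server; remember to back up the flash drive before copying anything.

        # Copy the bz* files from the downloaded release zip to the flash root.
        # The zip filename and the /boot mount point are assumptions; adjust to suit.
        import fnmatch
        import zipfile

        ZIP_PATH = "unRAIDServer-6.8.2-x86_64.zip"   # example name for the downloaded zip
        FLASH_ROOT = "/boot"                         # flash drive mount point on a live Unraid system

        with zipfile.ZipFile(ZIP_PATH) as zf:
            for name in zf.namelist():
                if fnmatch.fnmatch(name, "bz*"):     # bzimage, bzroot, etc.
                    zf.extract(name, FLASH_ROOT)
                    print(f"extracted {name} to {FLASH_ROOT}")
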
  2. When you delete a file and put it back, whether the vdisk grows depends on whether the VM re-uses the same internal sectors. If it does then the vdisk will not use any additional space. However, if the VM decides to use different internal sectors then additional space will be used by the vdisk. The point is that the host is only aware of ‘sectors’ within the vdisk file and not how the VM is using them. The moment the VM writes a sector with non-zero values the vdisk has to have the space to store that sector. This behaviour is independent of where the vdisk is located and is inherent in how vdisks operate. It is possible to initially fully allocate the vdisk so that the physical and logical space are the same. In such a case the vdisk will no longer grow, as you are already using the maximum space the VM will internally expect to be available.
  3. VM vdisks are initially allocated as ‘sparse’ files, which means only sectors written inside the VM actually use space at the host (Unraid) level. However, it is typical for the host not to know when a VM deletes a file internally, so the space that file occupied remains allocated within the vdisk. You should always assume that the vdisk can grow towards the logical size you allocated to the VM and avoid over-committing the physical space.
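
     As a quick illustration of the sparse behaviour described in the two posts above, here is a small Python sketch (not Unraid-specific, and dependent on the underlying filesystem supporting sparse files): the logical size and the space actually allocated are two different numbers until data is written.

        # Create a sparse file, then compare its logical size with the space it occupies.
        import os

        path = "demo-sparse.img"   # throwaway test file, not a real vdisk

        with open(path, "wb") as f:
            f.truncate(10 * 1024**3)            # 10 GiB logical size, nothing written yet

        st = os.stat(path)
        print("logical size:", st.st_size)          # the full 10 GiB
        print("allocated   :", st.st_blocks * 512)  # close to zero so far

        # Writing non-zero data forces real allocation, just as a VM writing sectors does.
        with open(path, "r+b") as f:
            f.seek(5 * 1024**3)
            f.write(b"\xff" * 4096)

        st = os.stat(path)
        print("allocated now:", st.st_blocks * 512)

        os.remove(path)
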
  4. If you want to continue using the tdarr docker container then make sure you change the path to match how your Unraid system is actually set up.
  5. You should be OK to run as you have dual parity. What I would suggest, though, is:
       • Stop the array.
       • Unassign disk2. It is not necessary to physically remove it, although you can if you want. You may not want to disturb any of the server's innards until you have to when plugging in the replacement disk.
       • Start the array. Unraid will now be emulating disk2 and show that slot as having no drive assigned.
       • At this point, if everything is good, you should still be able to see the contents of disk2 via Unraid’s emulation of it. If not, tell us what you are seeing. The reason for this is that it is the contents of the ‘emulated’ disk that will be written during the rebuild process.
       • Keep disk2 intact in its current state until you have finished the replacement process, as it provides a fall-back for data recovery if anything should go wrong during that process.
     Points to note:
       • It is recommended that the scheduled parity check is set to be non-correcting, as you do not want a drive that is potentially returning bad reads to end up corrupting parity.
       • Unraid never fails a drive based on its SMART report, but it does send you notifications if any of the values it monitors are changing, so you do not want to ignore notifications about them.
       • You can get an “array is healthy” status report despite SMART values indicating a drive should probably be replaced, as the status report only indicates that at the moment Unraid does not think any disk is in an error state.
  6. There have been plenty of forum posts saying that multiple cache pools will be part of the 6.9 release (although we do not have an ETA for that) and that the feature is looking good in Limetech’s internal testing. You might find this podcast to be of interest.
  7. The steps to rebuild a drive are covered here in the online documentation. It has a specific mention of how to rebuild a disk onto itself if it has been disabled but you decide the drive is actually OK.
  8. I thought the requirement was to keep all documents for the owner together, in which case 1 was correct.
  9. Since the SMART test is internal to the drive, this suggests something hardware-related, such as an insufficient power supply, rather than a software issue?
  10. I tend to forget how long ago it was that Windows XP was the ‘latest’ thing.
  11. Have you tried specifying the disk type as SATA rather than virtio so that no additional drivers need loading?
  12. That link says it is supported on Red Hat versions of Linux, but Unraid is Slackware-based.
  13. That disk definitely needs replacing ASAP. I would also expect there to be no problem with an RMA with that many reallocated sectors, particularly as the number keeps increasing. It feels as if the disk is probably on its last legs.
  14. I am afraid that is how vdisks work: the host does not know that the guest has released space. Over time a vdisk can be expected to grow to the maximum logical size you assigned to it.
  15. If you REALLY want to shrink the array, the process is documented here in the online documentation. However, unless you are absolutely certain that is what needs doing, first stop and ask for advice, as data loss is never pleasant.
  16. Under Linux, file/folder names are case-sensitive, so ‘media’ is not the same as ‘Media’. If both exist then Samba can pick up either one (and the other will be hidden). You will have to correct this by going in at the Linux level, I am afraid. You can use something like ‘mc’ (Midnight Commander) from the command line, or a docker container such as Krusader, to achieve this, or raw Linux commands. You probably also need to work out what caused you to end up with different capitalisation, to stop it occurring again in the future.
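
     If it helps, here is a small Python sketch for spotting the clashes before you fix them: it walks a tree and reports any entries in the same directory whose names differ only by case. The starting path is an assumption; point it at whichever share is affected.

        # Report names in the same directory that differ only by case (e.g. ‘media’ vs ‘Media’).
        import os
        from collections import defaultdict

        SHARE_ROOT = "/mnt/user"   # assumed starting point; narrow it to a single share if you prefer

        for dirpath, dirnames, filenames in os.walk(SHARE_ROOT):
            by_lower = defaultdict(list)
            for name in dirnames + filenames:
                by_lower[name.lower()].append(name)
            for variants in by_lower.values():
                if len(variants) > 1:
                    print(f"{dirpath}: {variants}")

     Once you know which folders clash you can merge them with mc, Krusader, or plain mv/rsync at the command line.
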
  17. In theory that should not help, as the GUID is built into the hardware. It can happen after an upgrade if the device appears to have a valid GUID when Unraid is first installed, but it later turns out that this is not the case when more than one user tries to register the same GUID. In such a case it is only when the upgrade happens that the new blacklist kicks in and you get the message.
  18. There is not much you can do other than use a device that is not blacklisted. Limetech do not have any procedure for un-blacklisting a GUID (I know, as I asked when I accidentally ran the licence transfer process against the wrong flash drive, thus blacklisting a perfectly good one). It is a requirement that Unraid is run from a device with a unique GUID. This means you cannot use devices where the manufacturer has not given each of them a unique GUID, which looks like what is happening in this case.
  19. Removing drives is covered here in the online documentation.
  20. If you have done NO writes to the array since removing parity, then you can:
        1. Use Tools >> New Config with the option to retain current assignments.
        2. Assign the parity drive.
        3. On the Main tab click the Parity is Valid checkbox and then start the array. This will start the array without writing anything to parity.
        4. Now stop the array and unassign the disk you are trying to recover.
        5. Restart the array. If parity is valid you will now see the disk being emulated and its contents visible.
        6. If that is the case then you can stop the array, assign the disk to be rebuilt, and start the array to rebuild the contents of the emulated disk to the physical drive.
      If the emulated drive after doing step 5 does not show your data, that may mean parity is not valid. If the difference is minor then you may find the drive shows as unmountable and you may be able to run a file system repair on the emulated drive before attempting to rebuild it. It is worth emphasising that the rebuild only puts back what is shown on the emulated drive, so there is no point in attempting a rebuild if no data is showing at that point. In case it is not obvious, the parity drive contains no actual data and has no understanding of data. What it does contain is the information to restore the bit pattern on a missing drive using the combination of a particular sector on ALL the other data drives plus the parity drive.
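
     To make that last point concrete, here is a minimal Python sketch of the XOR principle that single parity relies on (an illustration only, not Unraid’s actual implementation): the parity drive holds no files, just enough information to reconstruct a missing drive’s bit pattern from all the remaining drives.

        # XOR a set of equal-sized blocks together.
        def xor_blocks(blocks):
            result = bytearray(len(blocks[0]))
            for block in blocks:
                for i, byte in enumerate(block):
                    result[i] ^= byte
            return bytes(result)

        # Pretend each value is the same sector read from three data drives.
        disk1 = b"\x10\x20\x30\x40"
        disk2 = b"\xaa\xbb\xcc\xdd"
        disk3 = b"\x01\x02\x03\x04"

        parity = xor_blocks([disk1, disk2, disk3])

        # If disk2 goes missing, XOR-ing the surviving sectors with parity recreates it.
        rebuilt_disk2 = xor_blocks([disk1, disk3, parity])
        assert rebuilt_disk2 == disk2
        print("rebuilt sector matches the original")
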
  21. I am very confused by the exact sequence of steps you mention, and thus the exact state of your system. However, if you formatted the drive while you still had parity, then, as the pop-up dialog would have warned you, you erased all data from the drive AND updated parity to reflect this. In such a case there is no way to rebuild the disk that has just been formatted.
  22. Have you tried booting in both GUI and non-GUI mode? Some people have reported that one mode works for them while the other does not. I do not think the reason has ever been identified.
  23. If the server has not crashed (as appears likely) then a short press on the power button will trigger a shutdown.
  24. While reallocated sectors are not inherently bad, if the reallocated sector count is continuing to increase then that is a good indication that disk failure could be imminent.
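
     If you want to keep an eye on that count yourself, here is a hedged Python sketch that reads it via smartctl (from smartmontools). The device name is a placeholder, and the parsing assumes smartctl’s usual attribute table layout.

        # Print the raw Reallocated_Sector_Ct value for one drive.
        import subprocess

        DEVICE = "/dev/sdX"   # placeholder; substitute the actual drive

        output = subprocess.run(
            ["smartctl", "-A", DEVICE], capture_output=True, text=True, check=False
        ).stdout

        for line in output.splitlines():
            fields = line.split()
            if len(fields) >= 10 and fields[1] == "Reallocated_Sector_Ct":
                print("Reallocated sectors (raw value):", fields[9])

     It is the trend that matters: a stable non-zero count is far less worrying than one that keeps climbing.
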
  25. Q1) Using Only is fine as long as you do not already have files in that share on the array. If you DO have files on the array then:
        • set it to Prefer;
        • disable the VM and Docker services under Settings;
        • run mover to get the files moved to the cache;
        • re-enable the Docker and VM services;
        • (optional) change the Use Cache setting to Only.
      Q2) There is no way to explicitly set the division between VM and file caching. What you can set is the Minimum Free Space setting under Settings >> Global Share settings; when free space on the cache falls below this, files get written directly to the array. The recommended value is larger than the size of the largest file you are likely to write. Having said that, since you do not have a parity drive (which is what limits write speed to array drives), I wonder if you even need to bother caching file writes in the first place.
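
     As a starting point for picking a Minimum Free Space value, here is a minimal Python sketch that finds the largest file you currently hold. The share path is an assumption, and remember the setting should really reflect the largest file you are likely to write in future, not just what is already there.

        # Find the largest existing file under the user shares.
        import os

        SHARE_ROOT = "/mnt/user"   # assumed location of the user shares

        largest_size = 0
        largest_path = None
        for dirpath, _dirnames, filenames in os.walk(SHARE_ROOT):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    size = os.path.getsize(path)
                except OSError:
                    continue   # file vanished or is unreadable; skip it
                if size > largest_size:
                    largest_size, largest_path = size, path

        print(f"largest file: {largest_path} ({largest_size / 1024**3:.1f} GiB)")
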