itimpi

Moderators
  • Posts: 19,784
  • Days Won: 54

Everything posted by itimpi

  1. Do you know how you got into this situation? It looks as though a copy or move done manually at some point may have gone wrong.
  2. It all depends on what you mean by ‘root’ in an earlier post. The Share name counts as one level, so if you meant root to refer to the Share name then you would need a value of 4 to allow folder3 to be on multiple drives. If you meant folder1 to be the share name then you would need a value of 3.
  3. With a Split Level of 2, anything from the folder2 level downwards will be constrained to the disk on which that folder2 folder was first created. In other words, only share/folder1 can exist on multiple drives.
  4. You have a share anonymised as S—f with a Split Level setting of 2. It is possible that this is constraining which disks can be used by this share, depending on the full path of the files you are trying to put into it. It is always worth remembering that Split Level always wins any contention between share settings over which drives to use, so it is always the first thing to check. If it is not that, perhaps you can give the exact path of files that are not being distributed across drives as expected.
  5. Probably due to the settings on your shares. You are likely to get better informed feedback if you post your system’s diagnostics zip file.
  6. Once a disk is disabled (which it would be if removed with Unraid running) it has to be rebuilt to get it back into use. Not quite sure what you then did - did you follow the procedure documented here?
  7. You could use Tools->New Config and simply assign the nvme to a pool; if it is set to the same file system as it is currently using then it will be picked up with data intact. Note that the default for array drives is xfs while for pools it is btrfs (and multi-disk pools MUST be btrfs). Most people use a pool for dockers and VMs so that is definitely not an issue. Typically the array is used for storing large files (e.g. video) or backups which do not fit onto a pool.
  8. Handling of unmountable disks is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI. Unraid does not care about the sdX type designations, which can change unpredictably between boots - instead it identifies disks by their serial numbers (see the example after this list).
  9. The diagnostics are a single zip file that contains all the files you posted. Please post that instead of the individual files as it is MUCH easier to work with - so much so that most people will not even look at them when posted as individual files.
  10. Most people prefer to use SSDs in pool/cache scenarios for their increased performance. This is particularly the case when running VMs or Docker containers. As to mover, without knowing more about what problems it caused you it is difficult to give a constructive answer.
  11. Macvlan panics are a known issue with docker containers that use custom IPs. You could try upgrading to 6.10.0 rc2 and then changing any such containers to use ipvlan instead.
  12. Download the ZIP for the release and then extract all the bz* type files, overwriting the ones on the flash drive (see the sketch after this list).
  13. You probably need a later Unraid release (e.g. 6.10.0 rc2) to have a driver for the NIC in that board.
  14. One option that some people use is to use a small USB stick as the only array drive (with no data being stored on it) and then set up the SSD as a pool device. This meets the requirement that Unraid must have at least 1 drive assigned to the main array before it can start up properly, and pool devices CAN be trimmed.
  15. You do NOT have the files mirrored - they are simply different views of the same files. User Shares provide an aggregated view of the files within a top-level folder on all array and pool drives (see the example after this list). You might want to read the online documentation accessible via the ‘Manual’ link at the bottom of the GUI to find out more.
  16. The sentiment is correct but that is not one of my plugins.
  17. They do not count as long as they are not plugged in at the point where the array starts. If they are plugged in at that point they DO count. Once the array is started then removable drives can be plugged in without problems (at least until the next array start). In practice many people like to have a licence level that means they can leave such drives plugged in, as it is more convenient not to have to unplug them to be able to start the array, but it is up to the user to make a decision on this.
  18. From the command line it would be /boot/config
  19. If you are talking about the sdX type designations then Unraid has no control over those. They are assigned dynamically at the Linux level during the boot process as Linux recognises the drives, and are subject to change on any boot due to slight variations in timing. In practice they tend to remain constant between boots but you should never rely on that.
  20. According to your syslog it looks like a couple of disks had connection problems, and then disk11 dropped offline and later reconnected with a different device ID, so it is now showing up under Unassigned Devices (Unraid is not hot-plug aware for array devices and cannot handle this, so it thinks the disk has failed). As was mentioned previously you need to carefully check the cabling (both power and SATA) to make sure all cables are properly seated. Also, are you sure your power supply is up to handling all the drives you now have connected?
  21. It could be worth hitting the Stop option and timing how long it takes for the array to be successfully stopped. The timeout needs to be at least as long as that plus a safety margin. At that point try a reboot. If you still get an unclean shutdown detected, then this might indicate an issue with updating the flash drive to say the array was successfully stopped before the reboot.
  22. It must have been online during the boot process, but it looks as if it may have now dropped offline since none of the expected contents are showing.
  23. It might be worth using the ‘df’ command to check whether the flash drive is mounted at /boot (see the example after this list). The messages you give suggest that it is either not online or has problems.
  24. When using a custom IP for the container I do not think you can change the port used; it would be up to the person who provides the container to provide a mechanism for changing the port.
  25. All the configuration information was saved in the config folder on the flash drive, and if you have a backup of that you can restore your settings (see the sketch after this list).
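
A quick way to see the serial-number based identifiers mentioned in item 8, as opposed to the transient sdX names, is to list the by-id symlinks from the console (standard Linux; the entry shown in the comment is an invented example):

    ls -l /dev/disk/by-id/
    # each link embeds the drive model and serial number and points at whatever
    # sdX name the drive happened to get on this boot, e.g.
    # ata-WDC_WD40EFRX-68N32N0_WD-WCC7K0000000 -> ../../sdb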
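A minimal command-line sketch of the update described in item 12, assuming the release zip has been downloaded to /tmp (the file name is illustrative) and the flash drive is mounted at /boot:

    cd /tmp
    unzip unRAIDServer-6.10.0-rc2-x86_64.zip 'bz*'   # extract only the bz* type files
    cp bz* /boot/                                    # overwrite the ones on the flash drive

The new bz* files only take effect on the next reboot.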
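To illustrate item 15: a User Share is simply the union of the identically named top-level folders on the array disks and pools, so the same files show up under both the individual disk paths and the share path. From the console (the share name Media is just an example):

    ls /mnt/disk1/Media /mnt/disk2/Media   # the copies physically stored on each array disk
    ls /mnt/user/Media                     # the aggregated User Share view of those same files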
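For the check suggested in item 23, df should report the flash partition (a /dev device) mounted at /boot; if the flash has dropped offline, /boot falls through to the in-RAM root filesystem instead. The device name and sizes below are illustrative:

    df -h /boot
    # Filesystem      Size  Used Avail Use% Mounted on
    # /dev/sda1        15G  1.1G   14G   8% /boot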
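A minimal sketch of the restore mentioned in item 25, assuming the backup of the flash drive's config folder is available at /mnt/user/backups/flash-config (a made-up path) and the flash is mounted at /boot:

    cp -r /mnt/user/backups/flash-config/. /boot/config/   # put the saved settings back on the flash drive

A reboot is then normally needed for the restored settings to be picked up.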