itimpi

Moderators
  • Posts: 19625
  • Joined
  • Last visited
  • Days Won: 54

Community Answers

  1. itimpi's post in Understanding license disk limits was marked as the answer   
    All drives attached to the machine (except the boot flash drive) count towards the licence limit at the point you start the array, regardless of whether Unraid is using them.  Your example shows 7 drives attached.
  2. itimpi's post in Noob Q: Change Docker Config Paths was marked as the answer   
    It is probably best (and safer) to leave it set to /mnt/user and use the Exclusive Share option that is now part of the 6.12.x releases.
     
    That gets the same performance advantages as would be given by /mnt/cache and is less likely to be something you might overlook in the future when making changes.
     
  3. itimpi's post in Rename device in Cache pool? was marked as the answer   
    This has nothing to do with the pool you have called ‘Cache’.   Somehow you have also ended up with a User Share called ‘cache’; if you look you will probably find a folder called ‘cache’ at the top level of one of your drives.  You definitely want to fix this rather than ignore it.
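Since the stray share comes from a top-level folder on one of the drives, a quick way to locate it is to glob the disk mounts. A minimal sketch, assuming the standard Unraid `/mnt/diskN` mount points; the layout below is simulated in a temp directory so it can run anywhere:

```shell
# Sketch: locate a stray top-level 'cache' folder on array disks.
# On a real Unraid server the disks are mounted at /mnt/disk1, /mnt/disk2, ...
# Here we simulate that layout in a temp directory for demonstration.
MNT=$(mktemp -d)
mkdir -p "$MNT/disk1/media" "$MNT/disk2/cache" "$MNT/disk3"

# Any hit here is a top-level folder that Unraid will surface as a
# user share named 'cache'
found=$(ls -d "$MNT"/disk*/cache 2>/dev/null)
echo "stray folders: $found"
```

On a real server you would run `ls -d /mnt/disk*/cache` and then rename or remove whatever it finds.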
     
  4. itimpi's post in Parity check seems stuck was marked as the answer   
    Yes.    You need to define the start time carefully so it can only occur once during the lifetime of a check.
     
    If you use the Parity Check Tuning plugin then you also get control over other types of long-duration array operations.   It will still, however, expect you to get the start conditions right for scheduled parity checks.
  5. itimpi's post in Getting error on each drive when doing parity sync was marked as the answer   
    If using SATA->SATA power splitters, do not split a SATA connection more than 2 ways, as the host end is limited by the current-carrying capacity of its SATA connector.   If using Molex->SATA splitters, you can get away with a 4-way split.
  6. itimpi's post in No parity disk and RAID SSDs to store settings and container/VMs. Will it work? was marked as the answer   
    It is a standard Linux file system so will be readable on any system that can handle a BTRFS file system.
     
    The container binaries and working data are stored wherever you tell Unraid to store them (the 'system' and 'appdata' shares being the default locations).   It is normally recommended that these are stored on a pool for performance reasons.   The USB drive also stores templates for docker containers you install via the Unraid Apps tab in case you need to edit their settings or reinstall the containers with their previous settings intact.
     
     
    Definitely.   The USB stick stores all your Unraid-specific settings in its 'config' folder and this would be required if you ever need to transfer to a new USB stick.  You want a backup made any time you make a significant configuration change. There are currently 3 standard ways available:
      • By clicking on the Boot device on the Main tab and selecting the option to make a backup (which is then downloaded as a ZIP file).
      • By using Unraid Connect to have automated backups made in the cloud on the Limetech servers.
      • By using the "Backup/Restore Appdata" plugin, which has an option to make a backup to a location of your choice when it runs to back up docker containers' working data.
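The GUI produces the ZIP for you, but for scripted backups a manual equivalent is just archiving the flash drive's `config` folder. A hedged sketch, assuming the flash is mounted at `/boot` (standard on Unraid) and using `tar` rather than `zip`; the layout is simulated in a temp directory so the sketch runs anywhere:

```shell
# Sketch of a manual flash-config backup. On a real server the settings
# live in /boot/config; we simulate that layout here for demonstration.
BOOT=$(mktemp -d)
mkdir -p "$BOOT/config"
echo "demo" > "$BOOT/config/ident.cfg"      # stand-in for real settings files

# Archive the whole config folder, dated so old backups are kept distinct
BACKUP="$(mktemp -d)/flash-backup-$(date +%Y%m%d).tar.gz"
tar -czf "$BACKUP" -C "$BOOT" config
tar -tzf "$BACKUP"                          # list contents to verify the backup
```

On a real server the `tar` line would be `tar -czf /path/to/backup.tar.gz -C /boot config`, written to somewhere other than the flash drive itself.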
  7. itimpi's post in Removing drive from share and deleting contents of that drive? was marked as the answer   
    As mentioned, Dynamix File Manager would be the recommended tool for this.
  8. itimpi's post in Cache / Parity Suggestions was marked as the answer   
    Not sure.    In principle you want the fastest drive as parity.   I suspect this would mean one of the SAS drives, but you would need to check their specs to be sure.
  9. itimpi's post in Parity on Unraid with uneven disks was marked as the answer   
    No.  
     
    Unraid uses the whole drive when in the array, so only the 4TB drive could be the parity drive and no data drive can be larger than the smallest parity drive.
  10. itimpi's post in Move appdata from zfs formatted array disk to zfs pool? was marked as the answer   
    Have you got the array set as secondary storage?   You need that to be allowed to use Mover.
  11. itimpi's post in 6.12.9 Lost Drives was marked as the answer   
    This is a known issue if you are using an HBA with a built-in SATA port multiplier to get extra SATA ports, as drives connected to the multiplier are not seen.   It is apparently a Linux kernel bug that should be fixed in the next Unraid release; in the meantime you need to stay on 6.12.8.
  12. itimpi's post in Sync Errors Corrected after unclean shutdown during file transfer was marked as the answer   
    Might as well let the check continue.  It will only be updating parity to make it conform to the current data disks.
  13. itimpi's post in Replace Behavior was marked as the answer   
    Unraid has always worked this way - overwriting existing files in place.    It is only NEW files that can be cached.
  14. itimpi's post in Basic license - external USB drive was marked as the answer   
    To clarify: the drive counts if it is plugged in at any point when the array is started.   If it is only plugged in once the array is already started then it does not count.
     
    You have to weigh the inconvenience of making sure it is not plugged in every time the array is started against the convenience of not having to worry about this by having a licence that allows for more drives.
  15. itimpi's post in mover is running - no data is beeing moved was marked as the answer   
    Mover never overwrites existing files.    If a file exists in more than one location it is up to you to manually decide which copy to keep and delete the other one.   The Dynamix File Manager is probably a good way to do this.
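Finding which files exist in both locations is a matter of comparing the two file listings. A minimal sketch: the `/mnt/cache/...` and `/mnt/diskN/...` paths a real server would use are simulated here with temp directories so the sketch is runnable anywhere:

```shell
# Sketch: list files present in more than one location (the ones mover
# skips). POOL and ARRAY stand in for e.g. /mnt/cache and /mnt/disk1.
POOL=$(mktemp -d); ARRAY=$(mktemp -d)
mkdir -p "$POOL/share" "$ARRAY/share"
echo a > "$POOL/share/dup.txt";  echo b > "$ARRAY/share/dup.txt"
echo c > "$POOL/share/only-on-pool.txt"

# Build sorted relative-path listings for each location
list1=$(mktemp); list2=$(mktemp)
(cd "$POOL"  && find . -type f | sort) > "$list1"
(cd "$ARRAY" && find . -type f | sort) > "$list2"

# comm -12 prints only the lines common to both listings: the duplicates
dups=$(comm -12 "$list1" "$list2")
echo "duplicates: $dups"
```

Once you know the duplicate paths, inspect both copies and delete the one you do not want before running mover again.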
  16. itimpi's post in Parity Rebuild and "Mover" was marked as the answer   
    As far as I know that is expected behaviour.
     
    The Parity Check Tuning plugin gives you the option to automatically pause the array operation while mover is running, so that mover runs at maximum speed, and then resume the array operation when mover completes.
  17. itimpi's post in Consistent Writes to Cache Drive - 6.12.8 was marked as the answer   
    It is perfectly normal for the drive(s) holding the 'system' share in particular to see regular activity, as that is where the docker sub-system constantly has files open all the time it is running (which is what would have kept disk1+parity constantly spun up when that share was on disk1).
     
    In addition the same consideration applies to the 'appdata' share any time you have any container running which is configured to use it.
     
    In summary from your description you are just seeing the normal activity generated by running docker containers.   The whole idea is to keep such writes from going to the main array both for performance reasons and to avoid keeping array disks spun up.
  18. itimpi's post in Unraid freezes and is no longer accessible was marked as the answer   
    The syslog in the diagnostics is the RAM version that starts afresh every time the system is booted.  You should enable the syslog server (probably with the option to Mirror to Flash set) to get a syslog that survives a reboot so we can see what leads up to a crash.  The mirror to flash option is the easiest to set up (and if used the file is then automatically included in any diagnostics), but if you are worried about excessive wear on the flash drive you can put your server's address into the remote server field. 
     
    However the syslog in the diagnostics you did post suggests you may have had macvlan crashes - these will eventually bring down the server.   The recommendation is to switch docker networking to use ipvlan which is more stable.  If you want to continue to use macvlan then make sure that bridging is disabled on eth0.
     
    You also are loading a LOT of extras so any of them may be causing an issue - does the problem still occur if you boot in Safe Mode to stop plugins and extras from being loaded?
     
  19. itimpi's post in Remove Duplicate Empty Appdata was marked as the answer   
    The \r indicates a carriage return that you have somehow appended to the name.   You need to get rid of that folder to tidy things up.   Not sure if Dynamix File Manager will be able to delete that folder or not. 
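If Dynamix File Manager cannot handle the name, the folder can be removed from the command line by passing the literal carriage return in the path. A sketch using a temp directory as a stand-in for the real appdata location; the folder name `appdata` here is only illustrative:

```shell
# Sketch: delete a folder whose name ends in a carriage return (\r).
# Simulated in a temp dir; on a real server the path would be under /mnt.
CR=$(printf '\r')                 # a literal carriage-return character
BASE=$(mktemp -d)
mkdir "$BASE/appdata$CR"          # recreate the offending name for the demo

ls "$BASE" | cat -A               # cat -A shows the \r as ^M so you can see it
rmdir "$BASE/appdata$CR"          # pass the same literal CR to remove it
```

The key point is that the `\r` is part of the name on disk, so any removal command has to include it; tab-completion in a shell is often the easiest way to get it right on a live system.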
  20. itimpi's post in Good or bad idea? RAID 0 ZFS pool in unRAID array with unRAID parity [SOLVED] was marked as the answer   
    Not possible.  Each drive in the main array is a free-standing file system.
  21. itimpi's post in Starter one year of software updates with purchase was marked as the answer   
    Limetech have stated that all users who have one of the current Basic/Plus/Pro licences will be entitled to all future upgrades at no additional cost.  Covered in some detail in their blog on the subject.
  22. itimpi's post in Help Needed: Registration Key / USB Flash GUID Mismatch After USB Failure was marked as the answer   
    With trial keys you have to reassign all the drives if you want to get the config onto another flash drive.  This means you should not copy the super.dat file in the config folder from the old flash drive to the new one.
  23. itimpi's post in Manually triggered parity check shouldn't be paused was marked as the answer   
    If you want more control then use the Parity Check Tuning plugin to handle pauses and resumes.   You can tell that plugin whether Manual checks should be paused or not.
  24. itimpi's post in Unraid reports no space left on the array despite there being 1.20tb free was marked as the answer   
    It looks like the Minimum Free Space setting for your share is less than the available space which is why you get told there is no space available.
  25. itimpi's post in Fix Common Problems Error: Invalid folder users contained within /mnt was marked as the answer   
    You have something creating the ‘users’ folder under /mnt.    You need to identify what is doing this and correct it.
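One way to identify the culprit is to watch the directory and note when the folder reappears after you delete it. On a real server `inotifywait -m /mnt` (from inotify-tools, if installed) logs creations live; below is a plain diff-based fallback, sketched against a temp directory standing in for `/mnt`:

```shell
# Sketch: detect a newly created entry in a directory by diffing listings
# taken before and after. MNT stands in for /mnt on a real server.
MNT=$(mktemp -d)
before=$(mktemp); after=$(mktemp)
ls "$MNT" | sort > "$before"

mkdir "$MNT/users"                # simulate the misbehaving app creating it

ls "$MNT" | sort > "$after"
# comm -13 prints lines that appear only in the second listing
new=$(comm -13 "$before" "$after")
echo "new entries: $new"
```

Correlating the time the folder reappears with container start times or scheduled scripts usually points at what is creating it.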