Everything posted by itimpi

  1. I have seen a similar problem on Windows. In that case it is because Windows does not allow two simultaneous connections to the same server with different users from a particular client. It just fails the second connection, reporting a failure to authenticate. I wonder if macOS has a similar limitation.
  2. That is because each 'write' operation is not simply a write. It involves:
     • Reading the appropriate sector from both the target drive and the parity drive(s) in parallel.
     • Calculating the new contents of the parity sector(s) based on the changes to the target drive data and the current parity drive(s) data.
     • Waiting for the parity drive(s) and target drive to complete a disk revolution (as this is slower than the previous step).
     • Writing the updated sector to the parity drive(s) and the target array drive.
     In this mode there is always at least one revolution of both the parity drive(s) and the target drive (whichever is slowest) before the 'write' operation completes, and this is what puts an upper limit on the speed that is achievable. The Turbo write mode tries to speed things up by eliminating the initial read of the parity drive(s) and target drive by:
     • Reading the target sector from ALL data drives except the target drive (in parallel). The parity drive(s) are not read at this stage.
     • Calculating the new contents of the parity sector(s) based on the new contents of the target drive and the equivalent sectors on all the other data drives (this is the same calculation as that done when initially building parity).
     • Writing the updated sector(s) to the parity drive(s) and the target drive.
     Whether this actually speeds things up is going to vary between systems, as it depends on the rotational state of many drives, but it tends to, by eliminating the need to wait for a full disk revolution on both the parity drive(s) and the target drive, which tends to be the slowest step. In both cases the effective speed will be lower than raw disk performance might suggest. The potential attraction of SSD-only arrays (that some users have been discussing) is that delays due to disk rotation are eliminated, thus speeding up the above processes.
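     The parity arithmetic behind the two modes can be sketched in a few lines of Python. This is a simplified single-parity illustration using one byte per drive (real parity is computed sector by sector across whole drives); the drive values are made-up example data:

     ```python
     from functools import reduce
     from operator import xor

     # Hypothetical example: one byte per data drive at the same sector offset.
     data_drives = [0b10110010, 0b01101100, 0b11110000]
     parity = reduce(xor, data_drives)  # parity as built by an initial parity sync

     target = 0             # index of the drive being written
     new_value = 0b00001111  # the new data being written to that drive

     # Read/modify/write (default mode): read the OLD data and OLD parity,
     # XOR the old data out of parity and XOR the new data in.
     parity_rmw = parity ^ data_drives[target] ^ new_value

     # Reconstruct-write ("turbo" mode): read every OTHER data drive and
     # recompute parity from scratch with the new value, never reading parity.
     others = [d for i, d in enumerate(data_drives) if i != target]
     parity_turbo = reduce(xor, others + [new_value])

     # Both modes must land on the same parity value.
     assert parity_rmw == parity_turbo
     print(bin(parity_rmw))
     ```

     The trade-off is visible in which drives each mode touches: read/modify/write needs only the target and parity drives (the rest can stay spun down), while reconstruct-write needs every drive spinning but avoids the extra revolution between the read and the write.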
  3. That sounds like a good speed for writing to the parity-protected array. You might get better speeds if you have Turbo write enabled, but that is at the expense of needing all drives to be spinning, which is not the case for the default mode.
  4. Another possibility that might be reasonable to implement is to have a field in the form view where you can enter a custom XML fragment that is to always be present in the generated XML.
  5. And probably as multiple requests, as one giant one is unlikely to gain much traction. Instead it needs to be a number of smaller requests that can be individually prioritised (if accepted) and gradually picked off.
  6. There has not been for several years now. Some people still like to do it as an initial stress test, as recovering from adding a 'bad' drive to the array can be more trouble than doing the preclear.
  7. In which case I cannot help you as that is not something I have ever wanted to do so have never even tried. You will have to wait to see if anyone else can help you.
  8. As far as I know there is no scheduling function available in unBalance. It is not a plugin I have used much so it is possible there is a way to do what you want but if so I do not know how.
  9. It sounds as if the User Scripts plugin might be more appropriate to your needs? It has built-in scheduler/cron capability for any script it runs.
  10. I am reasonably certain that Unraid will never try to write to a non-existent disk regardless of what you set in the share settings.
  11. There have been quite a few similar reports. Hopefully LimeTech are looking into the cause so it gets fixed for a future release.
  12. Just to state that it is also working for me, so although it does sound like a problem on the OP's system it is not one that everyone is experiencing. Posting diagnostics taken just after the system has failed to recognise a hot-swapped drive might help pin down the cause. There is also the fact that, as far as I know, Unraid has never formally supported hot swap, although it seems to work fine when using the UD plugin the vast majority of the time.
  13. I was wondering if it is worth changing the text on the FIND button to read DUPLICATES instead? Although I have had the plugin installed for ages, it was only recently that I realized this button was about detecting duplicate files on the server. Since it works off the hash files generated by the plugin, it is also very fast once you have the hash files generated. Knowing this capability exists might be a reason that encourages more users to make use of the plugin.
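      The underlying idea is easy to sketch in Python. This is a hypothetical stand-in for the plugin's logic, not its actual code: the plugin works from hash files it has already generated, which is why the search is fast, and the paths and file contents below are made up for illustration:

      ```python
      import hashlib
      from collections import defaultdict

      def sha256_of(data: bytes) -> str:
          """Hash file contents; pre-computing these is the expensive step."""
          return hashlib.sha256(data).hexdigest()

      def find_duplicates(hashes: dict) -> list:
          """Group file paths by content hash; any group of 2+ paths is a duplicate set."""
          by_digest = defaultdict(list)
          for path, digest in hashes.items():
              by_digest[digest].append(path)
          return [sorted(paths) for paths in by_digest.values() if len(paths) > 1]

      # Hypothetical pre-generated hashes: two copies of the same photo on
      # different disks, plus one unrelated file.
      hashes = {
          "/mnt/disk1/photos/img1.jpg": sha256_of(b"same bytes"),
          "/mnt/disk2/backup/img1.jpg": sha256_of(b"same bytes"),
          "/mnt/disk1/docs/notes.txt":  sha256_of(b"different bytes"),
      }
      print(find_duplicates(hashes))
      ```

      Because only the (already computed) digests are compared, no file contents need to be read at search time.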
  14. I've been with Unraid since early v5 beta days (not sure how long ago that is now). I remember that was when 3TB drives were just starting to become readily available, which is why I went with v5 rather than 4.7, which was the stable release at the time. It is time I replaced the Limetech badge currently on the Unraid server with an Unraid one.
  15. I think this normally only really becomes a big issue when there is a total failure of the Unraid system (e.g. hardware failure), and for this going the Linux route is a tried and tested answer. In special cases of recovering individual damaged disks there are solutions such as XFS Explorer that have been used on Windows. In typical day-to-day use, a drive gets plugged in via the Unassigned Devices plugin for reading/writing files on a 'transfer' disk.
  16. In the case where I was trying to use the Paragon software in Windows I was using XFS. I have not tried the Paragon software with an Unraid BTRFS disk - this is something I should check against the Paragon drivers. I have tried both BTRFS and XFS formatted Unraid drives against Linux running on a Raspberry Pi and they both worked. I have not tried encrypted drives but forum posts suggest they will work as well.
  17. When the Paragon software did not work for me, I plugged the drive into a Raspberry Pi running Linux and that allowed me to get files off the Unraid drive with no problems. I did that as part of proving that it was the Paragon software that was at fault when it could not successfully read the same drive plugged into the same USB-SATA dock. I am still hopeful that at some point Paragon will fix their software so I can read such drives from Windows, as it would be convenient, but I did at least prove that the Linux solution worked. An alternative I have not tried that would probably work as well is running a Linux VM on the Windows system and passing the Unraid drive to that. It is what I would probably try if I regularly had to read Unraid drives on a Windows system.
  18. I do not think that will work! At least it did not for me, and I raised a bug report with Paragon, but so far they have been unable to resolve it. Although the Paragon drivers could see that it was an XFS format drive and showed me the top-level folder corresponding to my User Share, it was not able to go any further and show me the actual files. They offered me a refund, so it sounds as if there is no great confidence they will resolve the problem. A great shame, as the Paragon software has worked well for Linux Ext3/4 format disks.
  19. I will be interested to see how well this works. My tests were a little artificial as my system does not suffer from heat problems. If you have any problems then please enable the debugging log option in the plugin settings and then let me have a copy of the syslog so that I can see what is going on.
  20. I am not sure why you lose connectivity to the local LAN? I have a similar setup and am also using NordVPN, but I do not lose access to shares on the LAN.
  21. Yes is almost certainly the wrong setting, as that means new files are created on the cache and then are moved from cache to array when mover runs! What you probably want to do is:
      • Stop the docker service.
      • Set the Use Cache setting for 'appdata' to Prefer.
      • Manually run mover to get any appdata files that are on the array moved to the cache.
      • Change the setting for the 'appdata' share to Only, to make sure new appdata files are created on the cache and mover will not attempt to move them to the array.
      • Re-enable the docker service.
  22. What is the Use Cache setting for the ‘appdata’ share? Normally it would be set to one that would mean mover ignores it.
  23. The quickest way to check is to boot Unraid on the board and then call up the system information to see if it says IOMMU is enabled.
  24. It should be easy, but just as a precaution I would suggest taking a screenshot of the current Main tab before you move anything.
  25. I had forgotten that, but the basic logic still applies: anything not under /mnt or /boot is only in RAM and will not survive a reboot. For a transient work file I would suggest just creating it anywhere convenient under /tmp, the traditional home on Linux for temporary files.
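      A minimal Python illustration of the same point, assuming a standard Linux system where /tmp is the temp directory:

      ```python
      import tempfile

      # NamedTemporaryFile creates the file under the given directory
      # (/tmp here) and deletes it automatically when closed, so a transient
      # work file never needs to live under /mnt or /boot.
      with tempfile.NamedTemporaryFile(mode="w+", dir="/tmp", suffix=".work") as scratch:
          scratch.write("transient data")
          scratch.flush()
          scratch.seek(0)
          contents = scratch.read()

      print(contents)
      ```

      Leaving out `dir="/tmp"` also works, since Python defaults to the platform's temp directory, which is /tmp on a typical Linux install such as Unraid.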