itimpi (Moderator) - 20,776 posts - 57 days won
Everything posted by itimpi

  1. There is no option to ‘install’ Unraid in the conventional sense. The OS does not run from the USB stick - it runs from RAM. It is just loaded during booting from the archives on the USB stick so does not really stress it in any way. No idea why you have that failure rate, many people go for years without issue. It is advantageous if possible to plug into a USB 2 socket as they tend to be more reliable and run cooler. Even if they do not have external USB 2 sockets most motherboards have internal USB 2 headers that can be used.
  2. You will have to correct any current corruption after the RAM is fixed, but then hopefully it will stop happening.
  3. The problem is that there is still a lot of old documentation floating around. If you want to make sure you are using the current version of the online documentation it is accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.
  4. That picture is incomplete as it does not include the IP addresses when going via the router which will be different to those on the direct connection. You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread so we can see how you have things configured. FYI: For security reasons you cannot configure the ‘root’ user to have network access to the shares.
  5. You might want to check that the shares in question do not have Export=No in their SMB settings as that stops them being visible on the network. For security reasons this is now the default and the user has to explicitly share them.
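If you want to check this from the command line rather than the GUI, the sketch below scans the per-share config files. It assumes (this is an assumption, not something confirmed above) that Unraid keeps share settings in /boot/config/shares/*.cfg and that Export=No is recorded as shareExport="-"; verify both on your own system.

```shell
# Hypothetical sketch: list shares whose SMB Export setting appears to be No.
# ASSUMPTIONS: share settings live in /boot/config/shares/<share>.cfg and
# Export=No is stored as shareExport="-" (check your own config files).
not_exported() {
  local dir="${1:-/boot/config/shares}"
  # grep -l prints files containing the pattern; strip path and .cfg suffix
  grep -l 'shareExport="-"' "$dir"/*.cfg 2>/dev/null \
    | xargs -rn1 basename | sed 's/\.cfg$//'
}
```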
  6. I would suggest you run a filesystem check on disk3 because I saw this in the syslog: May 4 16:56:52 Tower kernel: REISERFS warning (device md3): jdm-20002 reiserfs_xattr_get: Invalid hash for xattr Whether that will fix your problem I do not know, but that share is partly on disk3 so it just might. It would also do no harm to do this on the other drives, just to play safe. Were the diagnostics taken after you encountered the problem? If not, it might be worth recreating the issue and then posting new diagnostics to see if something shows up. Not directly relevant to your current issue, but you might want to consider migrating your array disks off the reiserfs format. Reiserfs is now deprecated, support for it is planned to be removed from the Linux kernel in the not too distant future, and at that point new Unraid releases will probably stop supporting it as well.
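For reference, a minimal sketch of how such a check might be run from the command line (the usual route is the GUI's Check Filesystem button with the array started in Maintenance mode). The /dev/mdN naming for array data disks applies to Unraid releases of this era; treat the exact device path as an assumption to verify on your own system.

```shell
# Hypothetical sketch: read-only ReiserFS check on an Unraid array data disk.
# Run with the array started in Maintenance mode so nothing is using the disk.
md_device() {
  # Array data disk N is exposed as /dev/mdN, so checking via the md device
  # keeps parity in sync (an assumption for your release - verify first)
  echo "/dev/md$1"
}
# Read-only pass first; only consider --fix-fixable after reviewing its output:
#   reiserfsck --check "$(md_device 3)"
```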
  7. Highly likely. Btrfs file systems are very likely to get corruption if you have RAM issues.
  8. Not sure why you are having a problem, but I notice a couple of anomalies in your syslog. May 3 13:31:36 Weyland root: ERROR: ld.so: object '/boot/config/key.key' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored. This keeps occurring and is not something I have seen before. Any idea what the file mentioned actually is? You also have a share called ‘cache’ (that has files on disk3) as well as a pool called ‘cache’. Not sure how you got into this state, as I thought the GUI prevented you from creating a share with the same name as a pool; it might confuse things. Your ‘system’ share has files on cache and disk3 and is also set to Use Cache=Yes, which means mover will try to move it off the cache onto the array. You normally want this share completely on the cache if possible to maximise performance. Your ‘appdata’ share has files on both cache and disk4, and again you normally want this all on the cache to maximise performance. It is set to Use Cache=Prefer, so at least mover will try to move it to the cache. I can also see attempts by Unraid to start docker that are failing - not sure why. It might be worth booting your system in Safe Mode (which stops any plugins loading) to see if that helps, in case one of them is causing issues.
  9. You should attach your system's diagnostics zip file to your next post in this thread so we can see more about your system and its current status. It might also be worth mentioning which share it is in case that turns out to be relevant.
  10. Do you have something configured to write to /mnt/user/cctv ? From your comment above I would have expected Zoneminder to be configured to use /mnt/disks/cctv which is the UD drive - is this the case?
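To see what is actually writing there, here is a quick illustrative sketch that scans /proc for processes holding files open under a given path (if lsof is installed, `lsof +D /mnt/user/cctv` does the same job more directly; the path and function name here are examples, not anything from your system):

```shell
# Hypothetical sketch: list PIDs that have files open under a directory,
# e.g. writers_of /mnt/user/cctv. Works by reading /proc/<pid>/fd symlinks.
writers_of() {
  local path="$1" pid
  for pid in /proc/[0-9]*; do
    # ls -l shows each fd's symlink target; match our path as a fixed string
    if ls -l "$pid/fd" 2>/dev/null | grep -qF -- "$path"; then
      echo "${pid#/proc/}"
    fi
  done
}
```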
  11. This is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.
  12. @unTECH Not an answer to why you get your current symptoms, but it is worth pointing out that you can always manually upgrade as described here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.
  13. I suggest you just post them as you find them - I do not think there is one official place to report them. You could create a new thread just for this purpose if you expect to make multiple posts on this topic, to keep them together.
  14. This is unlikely to happen I think. What might be more realistic is to ask for a docker container (or pre-configured VM) that contains this.
  15. Earlier today while testing I accidentally did something that meant I ended up with about 15 drives under historical devices that I do not want to remain there. I could successfully remove them one at a time by clicking on the 'x' against each drive, but I was wondering whether it would be practical to provide a mechanism for selecting multiple drives and then a Remove button that removes all selected drives at once, rather than clicking the 'x' against each drive? Certainly not critical, but I thought it was worth asking.
  16. That is turned off automatically if files exist on both the primary and secondary storage locations. However you probably do NOT want it turned off as the whole idea of exclusive mode is to give better performance for shares which are only on a given pool.
  17. If you now add the second drive to the pool the file system will automatically be set to btrfs, but it will not initially be formatted. You then need to format it from within Unraid (which formats both drives). Note that with the soon to be released 6.12 release you will also have the option of using zfs in multi-drive pools as an alternative to btrfs, so if you are interested in going that route you can install 6.12-rc5 (or wait until it goes stable, which is expected to be within days) and then create the multi-drive pool selecting zfs as the file system to be used. It depends where you copied the data to. Copying it back manually will work and is probably fastest, but if it was copied to a share of the correct name on the main array then you can set the relevant shares to Use Cache=Prefer and run mover (with the docker and VM services disabled) to copy the files back.
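For the manual-copy route, a minimal sketch (the paths and function name are purely illustrative; cp -a preserves ownership, permissions and timestamps):

```shell
# Hypothetical sketch: copy a share's files from an array disk back onto the
# cache pool, preserving attributes. Source/destination paths are examples.
copy_back() {
  local src="$1" dst="$2"
  mkdir -p "$dst"
  # the trailing /. copies the directory's contents, including hidden files
  cp -a "$src/." "$dst/"
}
# e.g. copy_back /mnt/disk1/appdata /mnt/cache/appdata
```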
  18. As long as the power supplies are good enough to handle the drives then I doubt that adding extra ones is going to help (you did not mention the PSU ratings), so you may not want to spend the money on the additional PSUs. The NetApp is normally aimed at production environments where uptime is crucial and the cost of extra PSUs is marginal.
  19. I think that there is a lot to be said for making this far more prominent as down in the footer it is easy to miss. Even knowing it is there I find I have to look carefully to see it. It could either be added to the icons at the top right of the GUI or (even more extreme) given a tab of its own.
  20. Well, at least that means you do not have the sort of power splitter problem I was thinking of. Just out of interest, how is power supplied to that many drives - are they in some sort of enclosure where the power is applied via the backplane?
  21. Not sure where you found a reference to multiple PSUs. The term power splitter cable typically refers to when you add it to a single cable from the PSU so you can attach more drives to that cable.
  22. It might be a good idea to post a copy of your system’s diagnostics zip file so we can check how things are currently set up. If you have the Dynamix File Manager plugin installed you would be able to delete the Krusader files directly. If files exist in multiple locations then unfortunately you have to decide which copy to keep. In normal operation this scenario should not occur and files would only be at one location. You also mention appdata being set to disk1 and Use Cache=Prefer. This combination does not normally make sense, as the Prefer setting means you want files for the share (on ANY array drive) to be moved to cache if space permits. The disk1 setting would only be relevant for NEW files that did not fit on the cache.
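To see which files actually exist in more than one location, a small sketch that compares the per-disk copies of a share (the share name and disk paths in the example are placeholders):

```shell
# Hypothetical sketch: list relative paths that appear under more than one
# location's copy of a share, e.g. dup_files appdata /mnt/disk1 /mnt/cache
dup_files() {
  local share="$1"; shift
  local d
  for d in "$@"; do
    # list this location's copy of the share, if it has one
    [ -d "$d/$share" ] && (cd "$d/$share" && find . -type f)
  done | sort | uniq -d   # paths printed more than once exist in >1 place
}
```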
  23. You mention you think the power is fine, but do you use power splitter cables, as you have a lot of drives? They can sometimes be problematic, especially if you are trying to hang too many drives off a single cable from the PSU.
  24. From the xfs_repair output it appears that when you had it set as a single drive in the pool it was explicitly set as xfs (the default for pools is btrfs), but when you added an additional drive the format was then configured to be btrfs, as xfs does not support multi-drive pools. If you set it back to being a single-drive pool and set the format to xfs I think you will find the drive mounts fine. If you still want to move to a multi-drive pool you first need to copy the data off that drive so that it can be reformatted as btrfs, allowing the pool to be multi-drive.
  25. You have a User Share named ‘cctv’ that has files on disk4 and this is what the error message is referencing. This is NOT the same as the external drive mounted at /mnt/disks/cctv. Is this setup intentional?