TheJJJ42

Members
  • Content Count: 18
  • Joined
  • Last visited

Community Reputation
  0 Neutral

About TheJJJ42
  • Rank: Newbie
  • Birthday: August 29
  1. It was "Device is disabled, contents emulated", so I followed this thread with Squid's guide to... ... so I expected a rebuild. Should the shares (on that particular data drive) be shown while the rebuild is running? Or is this an indication that they are lost? If yes: why? What may have gone wrong?
  2. Argh! That's bad. Storing the system log is not that easy when all the drives have 'gone missing' or the system hangs, and one does not actually remember to do it while worrying errors are occurring. I expected that these quite important (and not that big) logs would be stored somewhere by default. It seems there isn't a dedicated addon to do this? I would like to store them on the Unraid USB drive as well. Can someone recommend an addon/tool that is good for this job? A cron'd rsync?
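The cron'd-rsync idea above can be sketched in a few lines of Python: copy the live syslog to a timestamped file on the flash drive so it survives a crash or reboot. The paths (`/var/log/syslog` as source, `/boot/logs` as the USB flash mount) are assumptions; adjust them for your system.

```python
import pathlib
import shutil
import time

SYSLOG = pathlib.Path("/var/log/syslog")   # assumed syslog location
DEST_DIR = pathlib.Path("/boot/logs")      # assumed USB flash mount point

def snapshot_syslog(src=SYSLOG, dest_dir=DEST_DIR):
    """Copy the current syslog to a timestamped file on the flash drive."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = dest_dir / f"syslog-{stamp}.txt"
    shutil.copy2(src, dest)                # preserves timestamps as well
    return dest
```

Run it from cron every few minutes; old snapshots can be pruned by age so the flash drive does not fill up.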
  3. I had a data disk of the array disabled. It just went 'red cross'. Checked it with another live Linux: no troubles, all data there, no SMART issues on this quite new disk. Now Unraid is rebuilding the data drive (see this thread) using the two parity drives and all other data disks (quite slowly: 50 MB/s, takes 3 days). -> Is it normal that the shares residing on the 'lost' data disk are not shown / gone while the data disk is being rebuilt? I expected the shares to at least be shown on the shares page (and maybe even be readable) while the rebuild is run…
  4. Hi. My Unraid had troubles in the last days: all NVMe caches and pool drives 'gone', all disks greyed out, a hang, a disabled data disk. To check the disk, I rebooted several times (also into other Linux live systems with only the troubled disk connected; it seems to be fine). Now the array is rebuilding from the parity drives. I want to check what went wrong, but I can't find the system logs of the last days / week.
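For readers wondering how a rebuild from parity works in principle, here is a toy sketch of single-parity reconstruction: the missing disk's bytes are the XOR of the parity block with every surviving data disk. This is an illustration of the idea only, with made-up byte values, not Unraid's actual on-disk format.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR same-length byte blocks column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three toy "data disks" of two bytes each.
data = [b"\x01\x02", b"\x10\x20", b"\xaa\x55"]
parity = xor_blocks(data)

lost = data.pop(1)                      # simulate one failed disk
rebuilt = xor_blocks(data + [parity])   # XOR survivors with parity
assert rebuilt == lost                  # the lost contents are recovered
```

This is also why a rebuild reads every other disk end to end: each reconstructed byte needs the corresponding byte from all surviving disks plus parity.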
  5. (Just for readers who have this issue and find this thread, like I did:) Same issue here. Two Linux Mint (Ubuntu-based) machines can't copy large files to Unraid; transfers stop after around 10 GB. (May be a distribution / Linux / Samba issue.) Now I use Syncthing (Docker), which does the job. Krusader and Double Commander failed for me, too. File transfers between Unraid shares work fine with the unBALANCE addon.
  6. Nowadays I consult YouTube for stuff like that. https://www.youtube.com/results?search_query=syncthing+android Find the QR code and take a picture.
  7. Hi. Two of my devices are not capable of finding each other on the same LAN (Unraid server and a Windows box). It was especially weird because all other devices had no issues (no firewalls, open ports). In the end I found out that this container is missing the announce UDP port 21067 from its configuration, and 22000 stays closed. Using nmap from Windows and another Linux machine, 22000 is detected as closed. When I add 21067, it does not show up in the 'docker allocations' (neither do other ports I tested with, on this and other containers), and nmap obviousl…
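Alongside nmap, a quick TCP probe from any machine with Python can confirm whether a port such as 22000 is reachable. A minimal standard-library sketch (host and port values are just examples):

```python
import socket

def port_state(host, port, timeout=1.0):
    """Return 'open' if a TCP connection succeeds, else 'closed'."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except OSError:          # refused, timed out, unreachable, ...
        return "closed"
    finally:
        s.close()

# Example: probe the Syncthing sync port on the server.
# print(port_state("192.168.1.10", 22000))
```

Note this only covers TCP; the UDP announce port can't be checked this way, since UDP has no handshake to confirm.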
  8. Thank you for that hint. Btw: I did not claim that it is not documented, just that I could not find it where I looked for this information. For everybody else looking for it: >You can continue to use the array while a parity check is running, but the performance of any file operations will be degraded due to drive contention between the check and the file operation. The parity check will also be slowed while any such file operations are active.
  9. I can't find any information on whether it is OK to keep using Unraid while a parity check is running, or which use cases do or don't affect a running parity check. (This information could be provided in the Unraid GUI while a parity check is running.) Until now, I do not, which means more than 2 days of 'downtime'. Sadly, I experience the issue that shares can't be unmounted when shutting down. (I read the threads about manually shutting down Docker and apps, which does not help.) So every shutdown/reboot results in 2+ days of parity-check downtime. Or does it?
  10. Trying to install the latest update: Something really wrong went on during getCategoriesPresent Post the ENTIRE contents of this message in the Community Applications Support Thread Warning: Invalid argument supplied for foreach() in /usr/local/emhttp/plugins/community.applications/include/exec.php on line 1714 [] "Unknown error happened. Please post a screenshot in the support thread of the Statistics screen" (see attached)
  11. Hi. Thank you for your answer. I think your explanation matches how I understand parity (checks). So, when lots of drives are arranged in parallel, all the drives have to be read to calculate parity, for example for their first sectors:

      |--------------------------------14tb drive------------------------------|
      |-----------5tb drive-------|
      |-----------5tb drive-------|
      |----3tb drive---|

      Now all drives need to spin to check/validate the first 3tb of parity. To check between tb3 and tb5, the 3 larger drives have work to do; the small one ca…
  12. Hi. My server uses several drives with different sizes, and I think parity and parity checks could be done in segments if the drives were structured differently in the array. Currently, parity checks take two days. I think this can be better. Example:

      |-------------------------------------------------------------Parity drive 14tb------------------------------------------------|
      |-----------------Drive1 5tb --------------||---------------Drive2 5tb ---------------||-------Drive3 3tb ----------|

      If drives are arranged this way, parity checks can be done drive by drive.
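The effect of unequal drive sizes on parity, as discussed in the two posts above, can be sketched as follows: bytes past a shorter drive's end count as zero, so only drives that actually extend past a given offset contribute to (and must spin for) the parity at that offset. Toy byte values, not real drive data.

```python
def parity_byte(drives, offset):
    """XOR parity at one offset; shorter drives contribute nothing past their end."""
    p = 0
    for d in drives:
        if offset < len(d):   # drive covers this offset
            p ^= d[offset]
    return p

# Three toy drives of "sizes" 5, 5 and 3.
drives = [bytes([1] * 5), bytes([2] * 5), bytes([3] * 3)]
parity = bytes(parity_byte(drives, i) for i in range(max(map(len, drives))))

assert parity[0] == 1 ^ 2 ^ 3   # all three drives cover offset 0
assert parity[4] == 1 ^ 2       # only the two larger drives reach offset 4
```

This matches post 11's observation: checking the parity region beyond the 3tb drive's end only involves the drives long enough to reach it, so the small drive could stay spun down for that segment.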
  13. +1 for multiple arrays. Ralf and some others had good points. It adds flexibility and new options many never thought of. An encrypted and an unencrypted array would be great. Also, I have lots of disks and don't want to stress the parity drives for everything: spin array 1 down while only array 2 is working. Alternatively: load balancing for two simultaneous workloads. My disks are quite different sizes. It just seems wrong to have giants and dwarfs on the same team; in my case, their use cases are different sports. I already see the situation when…
  14. +2. I copy & paste the text into a text document to read it. :-(