TheJJJ42

Everything posted by TheJJJ42

  1. It was "Device is disabled, contents emulated", so I followed this thread with Squid's guide to... ...so I expected a rebuild. Should the shares (on that particular data drive) be shown while the rebuild is running? Or is this an indication that they are lost? If yes: why? What may have gone wrong?
  2. Argh! That's bad. Storing the system log is not that easy when all the drives have 'gone missing' or the system hangs, and one does not actually remember to do it when worrying errors occur. I expected that these quite important (and not that big) logs are stored somewhere by default. It seems there isn't a dedicated addon to do this? I would like to store them on the Unraid USB drive as well. Can someone recommend an addon/tool that is good for this job? A cron'd rsync, like the sketch below?
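
     A minimal sketch of what I mean, assuming the flash drive is mounted at /boot as usual; the target path, filename and schedule are my own choices:

        #!/bin/bash
        # copy the current syslog to the flash drive with a timestamp
        mkdir -p /boot/logs
        cp /var/log/syslog "/boot/logs/syslog-$(date +%F_%H%M).txt"
        # example crontab entry, assuming the script is saved as
        # /boot/custom/save-syslog.sh:
        # */15 * * * * bash /boot/custom/save-syslog.sh
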
  3. I had a data disk of the array disabled. It just went 'red cross'. Checked it with another live Linux: no troubles, all data there, no SMART issues on this quite new disk. Now Unraid is rebuilding the data drive (see this thread) using the two parity drives and all other data disks (quite slowly: 50 MB/s, it takes 3 days). -> Is it normal that the shares residing on the 'lost' data disk are not shown / gone while the data disk is being rebuilt? I expected that the shares would at least be shown on the shares page (and maybe even be readable) while the rebuild is running. Now I am a little worried that some newer data (since the last backup) may be lost with the shares. This does not look like what I was expecting from using Unraid with 2 parity drives.
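
     A quick check I would run from the console, assuming the disabled disk is disk3 (adjust the number): as far as I understand it, a disabled disk is normally emulated from parity, so its contents should still be visible even during the rebuild.

        ls /mnt/disk3       # contents of the emulated / rebuilding disk
        df -h /mnt/disk3    # does the used space look plausible?
        ls /mnt/user        # do the shares reappear at the user-share level?
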
  4. Hi. My Unraid had troubles in the last days: all NVMe caches and pool drives 'gone', all disks greyed out, a hang, a disabled data disk. To check the disk, I rebooted several times (also into other Linux live systems with only the troubled disk connected; it seems to be fine). Now I'm rebuilding the array from the parity drives. Now I want to check what went wrong, but I can't find the system logs of the last days / week.
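
     As far as I understand it, /var/log on Unraid lives in RAM, so everything there is wiped on every reboot; only logs explicitly written to the flash drive survive. The places I would look (stock paths, to my knowledge):

        ls /var/log/    # current boot only; this is a RAM filesystem
        ls /boot/logs/  # on the flash drive, survives reboots
        diagnostics     # saves a diagnostics zip (incl. logs) to /boot/logs
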
  5. (Just for readers who hit this issue and find this thread, like I did:) Same issue here. Two Linux Mint (Ubuntu-based) machines can't copy large files to Unraid; transfers stop after around 10 GB. (May be a distribution / Linux / Samba issue.) Now I use Syncthing (Docker), which does the job. Krusader and Double Commander failed for me, too. File transfers between Unraid shares work fine with the unBALANCE add-on. (A way to reproduce the stall is sketched below.)
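
     How I would reproduce it from a Linux client, outside any file manager (server, share and user names are examples):

        sudo mount -t cifs //tower/share /mnt/test -o username=me
        dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=20480 status=progress
        # if this also stalls around 10 GB, the problem is in the SMB path
        # itself, not in Krusader / Double Commander
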
  6. Nowadays I consult YouTube for stuff like that: https://www.youtube.com/results?search_query=syncthing+android Find the QR code and take a picture.
  7. Hi. Two of my devices are not capable of finding each other on the same LAN (Unraid server and Windows box). It was especially weird because all the other devices had no issues (no firewalls, open ports). In the end I found out that this container is missing the announce UDP port 21027 from its configuration, and 22000 stays closed. Using nmap from Windows and from another Linux machine, 22000 is detected as closed. When I add 21027, it does not show up in the 'docker allocations' (neither do other ports I tested with, on this and other containers), and nmap accordingly shows it as closed. Switching between host and bridge network does not help.

        root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d \
          --name='binhex-syncthing' --net='host' \
          -e TZ="Europe/Berlin" -e HOST_OS="Unraid" \
          -e 'TCP_PORT_8384'='8384' -e 'TCP_PORT_22000'='22000' \
          -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' \
          -e 'UDP_PORT_21027'='21027' -e 'UDP_PORT_75'='75' -e 'TCP_PORT_76'='76' \
          -v '/mnt/user':'/media':'rw' \
          -v '/mnt/user/appdata/binhex-syncthing':'/config':'rw' \
          'binhex/arch-syncthing'
        ...
        The command finished successfully!

     nmap scanning the Windows box shows every interesting port closed, even if the firewall has rules for Syncthing or is disabled completely. Am I doing something wrong? Is this a configuration issue on my machine? A Docker bug?
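
     For reference, the port checks I ran from another machine (host name is an example; Syncthing uses TCP 8384 for the GUI, TCP 22000 for sync transfers and UDP 21027 for local discovery announcements):

        nmap -p 8384,22000 tower      # TCP ports
        sudo nmap -sU -p 21027 tower  # UDP discovery port (UDP scans need root)
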
  8. Thank you for that hint. BTW: I did not claim that it is not documented, just that I could not find it where I looked for this information. For everybody else looking for it: >You can continue to use the array while a parity check is running, but the performance of any file operations will be degraded due to drive contention between the check and the file operation. The parity check will also be slowed while any such file operations are active.
  9. I can't find any information on whether it is OK to keep using Unraid while a parity check is running, or which use cases do or don't affect a running parity check. (This information could be provided in the Unraid GUI while a parity check is running.) Until now I don't use it during checks, which means more than 2 days of 'downtime'. Sadly, I experience the issue that shares can't be unmounted when shutting down. (I read the threads about manually shutting down Docker and apps; it does not help.) So every shutdown/reboot results in 2+ days of parity-check downtime. Or does it?
  10. Trying to install the latest update: Something really wrong went on during getCategoriesPresent. Post the ENTIRE contents of this message in the Community Applications Support Thread. Warning: Invalid argument supplied for foreach() in /usr/local/emhttp/plugins/community.applications/include/exec.php on line 1714 [] "Unknown error happened. Please post a screenshot in the support thread of the Statistics screen" (see attached)
  11. Hi. Thank you for your answer. I think your explanation matches how I understand parity (checks). So, when lots of drives are structured in parallel, all the drives have to be read to calculate parity, for example for their first sectors:

        |--------------------------------14tb drive------------------------------|
        |-----------5tb drive-------|
        |-----------5tb drive-------|
        |----3tb drive---|

     Now all drives need to spin to check/validate the first 3tb of parity. To check between tb3 and tb5, the 3 larger drives have work to do; the small one can call it a day. (I hope this is correct?) When only the 3tb drive has been used in the last two weeks (since the last parity check), there is no need to validate the parity for the 5tb drives. So, in my use case, I would prefer the drive structure of my original post. Most often just the appdata and domains shares show alerts that the data is unprotected / needs a parity validation. Is there a way to validate just this data? I think it would be possible if this data is not allocated at the beginning of the drives, because there it takes all drives to validate the parity. In the 'serial' allocation of the drives (first post), just the drive containing these shares and the parity disk have work to do. It adds another perspective if we take into account the way shares are handled: in my example, a share that spans the two 5tb drives and uses the fill-up allocation method means these drives are factually structured 'behind' each other (like JBOD), as in my first post. If the first isn't full, the second should not contain any data. Why include it in a parity check at all? Do you get my thoughts? Are these assumptions about how parity works even correct?
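
     To make the arithmetic concrete, a toy sketch of single-parity math (my own illustration, not Unraid code): the parity byte at each offset is the XOR of every data disk's byte at that offset, and disks shorter than the parity disk contribute zeros past their end.

        d1=0xA5; d2=0x3C; d3=0x0F            # bytes at one offset on three data disks
        p=$(( d1 ^ d2 ^ d3 ))                # parity byte stored on the parity disk
        printf 'parity: 0x%02X\n' "$p"       # -> 0x96
        r2=$(( p ^ d1 ^ d3 ))                # rebuilding disk 2 from parity + the rest
        printf 'rebuilt d2: 0x%02X\n' "$r2"  # -> 0x3C
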
  12. Hi. My server uses several drives with different sizes, and I think parity and parity checks could be done in segments if the drives were structured differently in the array. Currently, parity checks take two days. I think this can be better. Example:

        |-------------------------------------------------------------Parity drive 14tb------------------------------------------------|
        |-----------------Drive1 5tb --------------||---------------Drive2 5tb ---------------||-------Drive3 3tb ----------|

     If drives are arranged this way, parity checks can be done drive by drive:
     - Check one drive per night.
     - Check just the drives with a specific amount of changes (every 100gb of change).
     (- Just do parity on sectors that contain data; this may not be possible.)
     Is this a plausible idea?
  13. +1 for multiple arrays. Ralf and some others had good points. It adds flexibility and new options many never thought of. An encrypted and an unencrypted array would be great. Also, I've got lots of disks and don't want to stress the parity drives for everything: spin array 1 down when just array 2 is working, or alternatively, load-balance two simultaneous workloads. My disks are quite different sizes; it just seems wrong to have some giants and dwarfs on the same team, and in my case their use cases are different sports. I already foresee the situation where I need to move my data to bigger disks and use that for a cleanup; copying between two arrays will be much easier. As mentioned already: my backups won't need parity. Also, I farm Chia and don't want parity for that either. Some might also fancy a btrfs RAID 5 or 6 as an additional array.
  14. +2. I copy & paste the text into a text document to read it. :-(
  15. +1 Would have helped me, especially in the initial setup (with some used disks).
  16. Unraid 6.9.2, creator on a Windows notebook (Intel C216 mobile chipset, Core 3xxx generation). Tried several SanDisk 32gb USB flash drives; none of them showed up in the creator. A cheap old 16gb Intenso USB stick worked fine, but it is not a quality device. Bought 2 different SanDisk 16gb drives today: the Ultra USB 3.0 (black, plastic) does not show up in the creator; the Ultra Flair USB 3.0 (small, black/silver aluminium) does show up. Tried several times: 1. The creator does the download and starts writing (syncing file system), but never stops. 2. Downloaded the zip and used this local file to prevent repeated downloads; it never stops 'syncing'. 3. The USB drive contains stuff like this: (or nothing). What went wrong? Update: manual creation of the USB drive worked (roughly the steps sketched below). I could not find out what caused the initial troubles.
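
     For completeness, the manual route that worked for me, done from a Linux machine using the make_bootable_linux script shipped in the zip; device name and zip filename are examples, so double-check the device with lsblk before formatting anything:

        lsblk                                         # identify the stick, e.g. /dev/sdX
        sudo mkfs.vfat -F 32 -n UNRAID /dev/sdX1      # FAT32; the volume label must be UNRAID
        sudo mount /dev/sdX1 /mnt/usb
        sudo unzip unRAIDServer-6.9.2-x86_64.zip -d /mnt/usb
        cd /mnt/usb && sudo bash make_bootable_linux  # installs the bootloader
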