All Activity

  1. Past hour
  2. I have now moved everything off my old Synology server and am trying to use it as a backup target, but have run into several problems. 1. I tried Syncthing, but it is painfully slow over the local network. 2. I tried rsync and finally got it to work, but I have to enter my password. 3. I tried sshpass, but it isn't available on either the Unraid or the Synology server. 4. How do I create user scripts to back up my files to secondary network storage on some sort of schedule? Is there a better way to do this? I just want to make network backups for some/all of my shares as needed and have them back up on a schedule without user intervention.
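One common approach is SSH key authentication (so rsync never prompts for a password) plus a small script run on a schedule by the User Scripts plugin. The sketch below writes such a script for review and syntax-checks it; the hostname, user, share, and destination paths are placeholders, not taken from the poster's setup.

```shell
# One-time key setup (run interactively once, so rsync over SSH needs no password):
#   ssh-keygen -t ed25519 -N "" -f /root/.ssh/id_ed25519
#   ssh-copy-id -i /root/.ssh/id_ed25519.pub backupuser@synology.local

cat > /tmp/backup_share.sh <<'EOF'
#!/bin/bash
SRC="/mnt/user/photos/"   # trailing slash: copy the contents, not the folder itself
DEST="backupuser@synology.local:/volume1/backups/photos/"
# -a preserves permissions/times, --delete mirrors deletions, -e pins the key
rsync -a --delete -e "ssh -i /root/.ssh/id_ed25519" "$SRC" "$DEST"
EOF

bash -n /tmp/backup_share.sh && echo "syntax OK"
```

Installed as a User Scripts job with a custom cron schedule (e.g. `0 3 * * *` for 3 AM daily), this runs unattended once the key is in place.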
  3. Outage on Git should be resolved
  4. Weird, that column is in the stock image but not on my actual router. I did try the remote port settings but I must have done it wrong initially because I tried it just now and it worked. Hopefully I can manage from here. Thanks for the help, I know I wasn't making things easy but I just really needed another set of eyes because I haven't been able to keep things straight lately.
  5. I recently purchased twenty-four 4TB NAS-rated drives, so I had been using the Preclear plugin to preclear between two and four drives concurrently. I didn't run into any issues. I have a fourth unRAID Pro license, so I just use that in a PC with a four-drive cage and do the preclear process there.
  6. Don't crimp. Use the jacks and terminate the cable on them, then use patch cables to connect to the equipment. Using crimp-on plugs is just asking for trouble down the road. I can't count the number of clients we've had at work who had issues with those crimp-on plugs. Our company replaces them with jacks, and after that they have no issues.
  7. Today
  8. Thank you everyone for your input. I've taken jonathanm's advice on assigning all of my disks as they were previously including the correct drives for parity 1 and parity 2. At first glance everything is looking good and I'm running a parity check. Looking into UPSs I've realized I don't know enough about the nuances. Are there any recommendations on a UPS that can be rack mounted?
  9. Yes, thank you! I'll leave it in current settings then and not mess with it.
  10. Honestly I'm with you. I'm sure if I get my own modem and get to pfSense it will work. But it's not my gateway, so there's not much I can do. T. T
  11. I don't see anything obvious. What browser are you using? Have you tried clearing your browser's cache or reloading the page with Ctrl+F5?
  12. Hello, I have been using Unraid for a number of years now and I am still delighted with how well it works for me, so I suggested to a friend that he use Unraid too. He bought hardware and began setting it up, but ended up with a ton of problems, until I decided to take his hardware to my home and start testing it myself. At first everything was good: I set up the system and began testing; disks, Docker, VMs, all good. But then I added an encrypted disk. Immediately after everything was set up and I began to copy files onto the encrypted disk, the system stopped responding. The copy process stopped, Explorer on my Windows machine stopped working, and I was unable to stop the array. I tried XFS encrypted as well as Btrfs encrypted. At the moment the system is clearing a disk again for the next try. I tried to find clues about what I could possibly have done wrong, but I didn't find anything that would explain the behaviour. What am I missing? What do I have to search for? Version is 6.7.2. As I said, the system ran just fine until I tried to use encrypted disks. Thanks in advance for your help.
  13. In the container console, go to the /config/nginx/security folder: the auth file contains the authentication info. Delete it, and then re-generate it using this command: htpasswd -c auth yourusername You will then be asked to enter a password (twice). If you wish to add more users, you can use this command: htpasswd auth yourusername
  14. @Taddeusz Thanks! My mistake!! Attached is the latest excerpt from the correct log. Thanks again! Thomas
  15. I get the following error when trying to add the container back in: Pulling image: binhex/arch-deluge:latest TOTAL DATA PULLED: 0 B Command:root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-deluge' --net='bridge' --log-opt max-size='50m' --log-opt max-file='1' -e TZ="Europe/London" -e HOST_OS="Unraid" -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -p '8112:8112/tcp' -p '58846:58846/tcp' -p '58946:58946/tcp' -p '58946:58946/udp' -v '/mnt/cache/appdata/data':'/data':'rw' -v '/mnt/user/appdata/binhex-deluge':'/config':'rw' 'binhex/arch-deluge' Unable to find image 'binhex/arch-deluge:latest' locally /usr/bin/docker: Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 209.244.0.4:53: read udp 192.168.11.208:39687->209.244.0.4:53: read: no route to host. See '/usr/bin/docker run --help'. The command failed. Any suggestions?
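The failure above is DNS rather than Docker itself: the daemon could not reach the resolver at 209.244.0.4 on port 53 ("no route to host"). A few hedged checks, run from the Unraid console, can confirm that; the first resolver IP comes from the error message, and 8.8.8.8 is just Google's public DNS used as a known-good comparison. The sketch writes the checks to a file and syntax-checks it so nothing network-dependent runs here.

```shell
cat > /tmp/dns_checks.sh <<'EOF'
#!/bin/bash
ping -c 3 209.244.0.4                       # is the configured resolver reachable at all?
nslookup registry-1.docker.io 209.244.0.4   # can it resolve the Docker registry?
nslookup registry-1.docker.io 8.8.8.8       # does a known-good public resolver work?
EOF

bash -n /tmp/dns_checks.sh && echo "syntax OK"
```

If only the first two fail, pointing the server's DNS at a working resolver (Settings → Network Settings) should let the image pull succeed.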
  16. Where did you see this, on our forum? This was an issue several years back where some USB flash devices had a config bit set to "this is a removable device" and others had the bit set to "this is a hard disk". I think the HP tool was able to flip that bit and solve the problem, which was due to HP bios not allowing boot from a USB flash with the bit set wrong as a security measure (can't remember which state it wants the bit to be).
  17. I understand from this that you're using a SAS1 expander with dual link. If so, you'll need to account for protocol overhead and 8b/10b encoding: of the 2400MB/s theoretical max you'll get around 2200MB/s usable, so with 24 drives that's around 92MB/s per drive.
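The rough arithmetic behind those numbers, assuming a dual-linked 4-lane SAS1 (3 Gb/s per lane) expander: 8b/10b encoding spends 10 bits per byte, so each lane carries about 300 MB/s of payload, and the ~8% protocol-overhead factor is an estimate chosen to match the ~2200 MB/s figure above.

```shell
lanes=8       # dual link = 2 x 4 lanes
per_lane=300  # MB/s per 3 Gb/s lane after 8b/10b encoding (10 bits per byte)

raw=$((lanes * per_lane))                                        # theoretical max
usable=$(awk -v r="$raw" 'BEGIN { printf "%.0f", r * 0.92 }')    # ~8% protocol overhead
per_drive=$(awk -v u="$usable" 'BEGIN { printf "%.0f", u / 24 }')

echo "raw: ${raw} MB/s, usable: ${usable} MB/s, per drive: ${per_drive} MB/s"
```

With 24 drives sharing the link, each drive gets roughly 92 MB/s during a parity check, which caps the whole operation at the speed of that shared budget.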
  18. Thanks denishay & saarg, you both helped a lot in getting to the right solution. Ended up with: docker exec -it nextcloud bash cd /config/www/nextcloud/ sudo -u abc php7 occ files:scan --all It's running the file scan now. Thanks, both of you! I owe you many hours of saved time on this.
  19. Almost 36 hours is indeed too long. My mixture of 5400 RPM 3TB & 4TB drives, and 7200 RPM 8TB drives finishes in 18.5 hours. A pure 7200 RPM 8TB setup should complete in under 16.5 hours. Even slower 5400 RPM 8TB drives should finish in under 22 hours. I'm not sure that your math accounts for typical communication protocol overhead, nor the inefficiency inherent in port expanders. I'm happy you did find a 10MB/s improvement. This is exactly the reason I avoided port expanders. 60-70 MB/s sounds close to what I would expect. This is absolutely correct. Though I think many users struggle to understand this without a visual: This chart is from a HDD review, showing throughput speed (MB/s) over the course of the disk. The lime green line starts off high around 200MB/s at the beginning (outside edge of the disk platter), then tapers off to around 95 MB/s at the end of the disk (inside edge of the disk platter). This is just a random sample, and doesn't necessarily represent your drives, so this is just to show the concept of what is going on. The average speed of the drive (and the resulting Parity Check) would be around 155 MB/s, not the 200 MB/s peak. The Unraid Tunables Tester only tests the very beginning of the drive (i.e. the first 5-10%, where speeds are the highest). This is way above the average speed of an entire drive, beginning to end. On this chart I drew three dashed lines. The green line at the top represents Unraid Tunables set to a value that doesn't limit performance at all. This is what we are trying to achieve with the Unraid Tunables Tester. The yellow line in the middle represents how a typical system (one without any real performance issue) might perform with stock Unraid Tunables. Notice that while peak performance is reduced from 200 MB/s to perhaps around 190 MB/s, this slight reduction is only for the first 17% of the drive, beyond which the performance is no longer limited. 
A 5% speed reduction for 17% of the drive only reduces average throughput (for the entire drive) by less than 1%, so fixing this issue might only increase average throughput for the entire drive by 1-2 MB/s. Sure, it's an improvement, but a very small one. The red line at the bottom represents how some controllers have major performance issues when using the stock Unraid Tunables - like my controller. In this case, the throughput is so constrained, over 90% of the drive performs slower than it is capable of performing. Fixing the Tunables on my system unleashes huge performance gains. Hopefully that helps show why most systems see extremely little improvement from adjusting the Unraid Tunables - these systems are already performing so close to optimum that any speed increase will hardly make a dent in parity check times. It's only the systems that are misbehaving that truly benefit. Paul
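The "5% slower for 17% of the drive costs under 1% overall" claim can be checked with a weighted average, using only the numbers from the post (a 5% throughput cap over the first 17% of the drive, full speed for the rest):

```shell
# Weighted average of the two regions, as a fraction of full speed:
#   first 17% of the drive runs at 95% speed, the remaining 83% at 100%.
loss=$(awk 'BEGIN {
    limited   = 0.17 * 0.95
    unlimited = 0.83 * 1.00
    printf "%.2f", (1 - (limited + unlimited)) * 100
}')
echo "average throughput loss: ${loss}%"
```

The weighted average works out to a loss of about 0.85%, which is why fixing a mild tunables limit barely moves the parity check time, while the "red line" case (most of the drive constrained) benefits enormously.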
  20. Google found this: https://www.vpi.us/cat5e-plugs/super-flat-cat5e-rj45-731 and https://www.youtube.com/watch?v=CEPMQ4wybgI I would imagine that using a regular connector would be difficult to feed the wires in, and you would have no strain relief for the wire terminations.
  21. For the last week I've been having a problem where, if my server is on for any length of time, eventually the web interface becomes largely inaccessible. I can still access the WebUI page, but some of the headings will be missing, and any page I click on will just be white and blank (see attached image). I can often still access my running Docker containers if I type in their actual IP:Port address, but they often load very slowly. At this point, the only way for me to get back to a normal interface is to log into my server via SSH and manually reboot, which then triggers a parity check. Any ideas what might be causing this? I've attached my system diagnostics. Thanks! vulftower-diagnostics-20190722-1718.zip
  22. I'm having issues with my UD no longer mounting my 2-disk BTRFS RAID1 pool ("ssd" = nvme0n1p1 set to automount + "ssd2" = nvme1n1p1). The 2nd disk [mounted to "ssd2"] of the pool (not set to automount) doesn't get luksOpen'ed and therefore is "missing," so the automount of the 1st disk fails. Jul 22 12:33:02 Tower unassigned.devices: Mounting 'Auto Mount' Devices... Jul 22 12:33:02 Tower unassigned.devices: Adding disk '/dev/mapper/ssd'... Jul 22 12:33:03 Tower kernel: BTRFS: device fsid 4939602d-ea6f-4dd0-a535-252580b60aac devid 1 transid 11909816 /dev/dm-13 Jul 22 12:33:03 Tower unassigned.devices: Mount drive command: /sbin/mount '/dev/mapper/ssd' '/mnt/disks/ssd' Jul 22 12:33:03 Tower kernel: BTRFS info (device dm-13): disk space caching is enabled Jul 22 12:33:03 Tower kernel: BTRFS info (device dm-13): has skinny extents Jul 22 12:33:03 Tower kernel: BTRFS error (device dm-13): devid 2 uuid f66ef13b-8a93-45da-b73e-0d6a478729b2 is missing Jul 22 12:33:03 Tower kernel: BTRFS error (device dm-13): failed to read chunk tree: -2 Jul 22 12:33:03 Tower kernel: BTRFS error (device dm-13): open_ctree failed Jul 22 12:33:03 Tower emhttpd: Warning: Use of undefined constant luks - assumed 'luks' (this will throw an Error in a future version of PHP) in /usr/local/emhttp/plugins/unassigned.devices/include/lib.php on line 634 Jul 22 12:33:03 Tower unassigned.devices: Mount of '/dev/mapper/ssd' failed. Error message: mount: /mnt/disks/ssd: wrong fs type, bad option, bad superblock on /dev/mapper/ssd, missing codepage or helper program, or other error. Jul 22 12:33:03 Tower unassigned.devices: Partition 'Samsung_SSD_970_PRO_1TB_S462NF0M300357X' could not be mounted... Jul 22 12:33:03 Tower unassigned.devices: Disk with serial 'Samsung_SSD_970_PRO_1TB_S462NF0M311892X', mountpoint 'ssd2' is not set to auto mount and will not be mounted... After every array start, I have to execute luksOpen on "ssd2" and mount "ssd" manually (see below). 
Then after this, unmount/mount works again in the GUI. root@Tower:~# /usr/sbin/cryptsetup luksOpen /dev/ssd2 ssd2 --allow-discards --key-file /root/keyfile root@Tower:~# mkdir /mnt/disks/ssd root@Tower:~# /sbin/mount '/dev/mapper/ssd' '/mnt/disks/ssd' I think this happened after I renamed the mounts and physically changed slots, though it seemed to work after one restart, but then broke after the next restart. Have restarted a few times since and it never works anymore, even if I change the names. If I switch the nvme1n1 to automount instead of nvme0n1, it still results in the same error. tower-diagnostics-20190722-1649.zip
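Until the underlying UD issue is fixed, the manual workaround above can be automated with a User Scripts job set to run "At Startup of Array". This sketch writes such a script for review and syntax-checks it; the `/dev/ssd2` device path, mapper names, mountpoint, and keyfile location are taken verbatim from the post and are specific to that system.

```shell
cat > /tmp/start_ssd_pool.sh <<'EOF'
#!/bin/bash
# Unlock the pool member UD skips (not set to auto mount), if not already open:
[ -e /dev/mapper/ssd2 ] || /usr/sbin/cryptsetup luksOpen /dev/ssd2 ssd2 \
    --allow-discards --key-file /root/keyfile
# Then mount the BTRFS pool by its first member once both devices are present:
mkdir -p /mnt/disks/ssd
mountpoint -q /mnt/disks/ssd || /sbin/mount /dev/mapper/ssd /mnt/disks/ssd
EOF

bash -n /tmp/start_ssd_pool.sh && echo "syntax OK"
```

The `-e /dev/mapper/ssd2` and `mountpoint -q` guards make the script safe to re-run if UD later mounts the pool correctly on its own.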
  23. Is that the catalina.out file? There should be a folder within the logs folder with this file. There is also a tomcat.log but that just monitors the status of the tomcat service. Not useful for this problem.