ken-ji

Members
  • Posts: 1245
  • Joined
  • Last visited
  • Days Won: 4
Everything posted by ken-ji

  1. A. When a disk goes bad, the parity protection of the array - like RAID 5 (or RAID 6 if you have two parity disks) - allows the array to continue functioning in degraded mode. Even with hotswap bays, replacing the disk(s) requires stopping the array, which will disable anything using it, including the Docker engine and the VM engine. After replacing the disk(s), the array can be started in the same degraded mode, and Unraid will rebuild the replacement disk or recompute parity, depending on which disk(s) went bad. Your total downtime is just the time to stop the array, replace the disk, and start it up again.
     2nd A. Storage in Unraid is basically in three tiers: Array (usually parity protected, supporting shares that span the array), Cache (independent pools of drives if using btrfs, used either for caching writes to the array or for keeping data on faster disks), and Unassigned (storage outside the previous tiers, mounted wherever you want; a plugin simplifies this). There is unfortunately only one Array per server, so most users back up either to unassigned disks that they periodically mount, back up to, and unmount, or to another remote server using whatever backup tool they prefer (e.g. rsync, Google Drive, Backblaze B2, etc.). And as I mentioned, Docker containers and VMs are tied to the array running, so stopping the array stops everything, but with hotswap bays you can be up and running again in the few minutes it takes to stop the array and replace the drives. A minimal rsync example is sketched below.
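     As an illustration only (the share name and the unassigned mount point below are placeholders, not from the original post), a backup to an Unassigned Devices mount can be as simple as:
     # placeholder paths: "important" array share, "backup_disk" unassigned mount
     rsync -avh --delete /mnt/user/important/ /mnt/disks/backup_disk/important/
     Pointing the destination at user@host:/path instead would target a remote server over SSH.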
  2. The array needs to go offline to replace disks. While the array is offline, all VMs and Docker containers will also be shut down. Hotswap is your friend at this point, as it lets you replace failed drives without needing to mess with the actual connections - many users in the past have managed to disconnect additional drives while the case was open. Hotswap will not require a shutdown, but just about everything else built in will be stopped, and resumed only once the array is back online. Once the array is offline, any drive of the array can be replaced (including a parity drive), with Unraid prompting to start a rebuild of the drive based on the state of the array once the array has been restarted.
  3. I'd like to add my 2 bits here: the issue seems to be present when doing parity checks, so it might not be a problem with the DiskSpeed docker, but with the mpt3sas driver interacting with the LSI SAS2 controllers. When I don't spin up my disks first, my parity check gets capped to about 77MB/s average vs. the usual 140MB/s. I'm still looking into tweaks to work around this, as I don't see anything anywhere about this spin-up issue.
  4. There seems to be a really odd performance issue with LSI 2308-based controllers. When all the drives on the controller are spun down (I only have eight) and you attempt to access them - for a parity check, or when benchmarking a single drive - the speeds are capped. We get the following speed results from parity checks (date / duration / average speed / status / errors):
     2021-04-27, 03:06:09   1 day, 4 hr, 36 min, 29 sec   77.7 MB/s   OK   0
     2021-03-15, 14:44:59   15 hr, 14 min, 58 sec         145.8 MB/s  OK   0
     2021-02-15, 15:16:28   15 hr, 46 min, 27 sec         140.9 MB/s  OK   0
     2021-01-11, 15:39:25   16 hr, 9 min, 24 sec          137.6 MB/s  OK   0
     2020-12-14, 15:40:07   16 hr, 10 min, 5 sec          137.5 MB/s  OK   0
     2020-11-09, 14:42:48   15 hr, 12 min, 47 sec         146.1 MB/s  OK   0
     2020-10-12, 14:54:37   14 hr, 57 min, 53 sec         148.5 MB/s  OK   0
     As you can see, my typical speed for parity checks should be about 140MB/s on average. Drive benchmarks: the benchmarks are also weird in that I got the second set of results by simply spinning down and spinning up all the drives prior to blindly benchmarking all my drives. Testing this condition with the parity check produces the same result - starting the parity check while the drives are spun down results in a parity check that runs at 77MB/s max, whereas spinning up all the drives first lets them hit the max speed of about 200MB/s. Also, it seems only my Toshiba N300 drives are affected; none of my Seagate Archive drives or the drives on the onboard SATA controller seem to be affected (or I haven't noticed). Any suggestions or ideas? mediastore-diagnostics-20210424-1011.zip
  5. Just invoke it like this:
     bash /boot/config/plugins/user.scripts/scripts/rcloneSync/script
     where rcloneSync is the name given to the user script. However, this directory name might differ from the script's current name, because renaming a user script doesn't change the directory the script is in - in which case you need to hover over the gear icon in the User Scripts settings until the settings menu opens. A quick way to find the right directory from the CLI is sketched below.
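     As a rough sketch (the search pattern is just a guess based on the script name in this example), you can grep the script bodies to find which directory holds a renamed script:
     # list the script directories whose contents mention rclone
     grep -l 'rclone' /boot/config/plugins/user.scripts/scripts/*/script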
  6. @ionred maybe posting your diagnostics will shed some light on this issue... but it really is weird that something is recreating the authorized_keys file with that old timestamp.
  7. Odd. Can you post a screenshot (or text copy) of what you are seeing now? Particularly when running:
     root@MediaStore:~# ls -al /root/.ssh /boot/config/ssh /boot/config/ssh/root
     lrwxrwxrwx  1 root root   21 Apr  8 02:48 /root/.ssh -> /boot/config/ssh/root/

     /boot/config/ssh:
     total 120
     drwx------  3 root root 8192 Mar 22 18:44 ./
     drwx------ 11 root root 8192 Apr 27 21:07 ../
     drwx------  2 root root 8192 Apr 19 22:47 root/
     -rw-------  1 root root  668 May  3  2019 ssh_host_dsa_key
     -rw-------  1 root root  605 May  3  2019 ssh_host_dsa_key.pub
     -rw-------  1 root root  227 May  3  2019 ssh_host_ecdsa_key
     -rw-------  1 root root  177 May  3  2019 ssh_host_ecdsa_key.pub
     -rw-------  1 root root  411 May  3  2019 ssh_host_ed25519_key
     -rw-------  1 root root   97 May  3  2019 ssh_host_ed25519_key.pub
     -rw-------  1 root root  980 May  3  2019 ssh_host_key
     -rw-------  1 root root  645 May  3  2019 ssh_host_key.pub
     -rw-------  1 root root 1679 May  3  2019 ssh_host_rsa_key
     -rw-------  1 root root  397 May  3  2019 ssh_host_rsa_key.pub
     -rw-------  1 root root 1773 Aug 12  2020 ssh_known_hosts
     -rw-------  1 root root 3312 Mar 22 18:47 sshd_config

     /boot/config/ssh/root:
     total 48
     drwx------  2 root root 8192 Apr 19 22:47 ./
     drwx------  3 root root 8192 Mar 22 18:44 ../
     -rw-------  1 root root  418 Mar  9 08:19 authorized_keys
     -rw-------  1 root root  883 Mar  9 08:19 id_rsa
     -rw-------  1 root root  209 Apr 19 22:47 id_rsa.pub
     -rw-------  1 root root 3473 Apr 19 22:45 known_hosts
  8. Because of a weird speed issue where I now need to spin up my array before running parity checks etc., and my personal opinions about PHP, I had to take up the bash challenge (this might even be sh compatible). I only did the spin-them-all-up one, though.
     #!/bin/bash
     . /usr/local/emhttp/state/var.ini
     curl --unix-socket /var/run/emhttpd.socket \
       --data-urlencode cmdSpinupAll=apply \
       --data-urlencode startState=$mdState \
       --data-urlencode csrf_token=$csrf_token \
       http://127.0.0.1/update
  9. Holy --- ! You're correct about the spin-up thing... hmm, I think I'll look into this angle, and probably create a cron script to wake up the drives prior to major activities (a possible cron line is sketched below).
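     As an illustration only (the schedule and the script path are assumptions, not from the original post), a crontab entry that spins the drives up a few minutes before a parity check scheduled for 03:00 on the 1st of the month could look like:
     # 02:55 on day 1 of each month; path assumes the spin-up script above was saved as a user script named spinupAll
     55 2 1 * * bash /boot/config/plugins/user.scripts/scripts/spinupAll/script
     The User Scripts plugin's custom schedule field accepts the same cron syntax, which avoids editing crontabs by hand.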
  10. Are you absolutely sure you do not have anything in your system - a User Scripts plugin entry? cron jobs? And if you delete/rename the /boot/config/ssh/root folder and reboot, does it come back?
  11. It might be, but I've only heard from @Zonediver, so I have no other points of reference and am not having any luck seeing anything like this in the wild (or my ability to google has failed me). In any case, I'll try some hardware tweaks when I'm able and report back.
  12. So there really is something wrong with 6.9.2 and some LSI controllers. I had a parity check start and was wondering why it said it was running at 90MB/s at the start, so I stopped it, ran some benchmarks, and saw this: Disk 4 actually read 60MB/s at one point, but restarting the benchmark painted the usual 244MB/s value. I don't see any other issues in my logs. I'm currently allowing a parity check to complete before I try anything else. I'll probably try things like switching ports on the controller, or switching controllers to an LSI 9200-8e (I think), when I get back home to my server. mediastore-diagnostics-20210424-1011.zip
  13. OK, now that I have time to try upgrading again: I did, and the problem has gone away. The one thing I did beforehand, though, was to finally flash my controller to P40, so maybe that was it.
  14. The reverse proxy can be configured to require a login before the docker container URLs can be accessed. However, this will not be anything fancy like the usual login pages. I'm not using that specific docker container, but nginx is fairly universal, so here is a sample location I'm using that is password protected:
     location /transmission {
         satisfy any;
         auth_basic "Transmission Remote Web Client";
         auth_basic_user_file /etc/nginx/transmission.passwd;
         proxy_pass http://192.168.95.20:9091;
     }
     which makes the browser pop up a basic-auth prompt during access. You can create the password file with the following command line (it can be run on the Unraid CLI or inside the NGINX container):
     echo USER:$(echo PASSWORD | openssl passwd -apr1) >> site.passwd
     The command assumes the login USER with the password PASSWORD (replace as needed), and if you repeat the command for a different user and password, both users can be used to log in. A quick way to verify the auth is enforced is sketched below. cf https://www.digitalocean.com/community/tutorials/how-to-set-up-password-authentication-with-nginx-on-ubuntu-14-04
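     As a quick sanity check (the hostname below is a placeholder for wherever your reverse proxy is reachable), you can confirm the location actually requires credentials:
     # expect 401 Unauthorized without credentials
     curl -I http://your.proxy.host/transmission
     # expect a 2xx/3xx response once valid credentials are supplied
     curl -I -u USER:PASSWORD http://your.proxy.host/transmission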
  15. I'm not sure about this, but I think bridging must be enabled, so br0 has to exist. Enabling it means you need to stop the array and mess with the network settings. Depending on your actual config, there might be some other changes that you'll need to implement.
  16. @nerbonne The VM should be connected to the br0/eth0 network interface instead of virbr0. This will then allow you to enable the Remote Desktop function (which presumes you are using Windows 10 Pro). You should also assign the VM a fixed IP address, either via a DHCP reservation on the remote router or a statically assigned setting. Then you connect to it via that IP address from your local Windows machine and set RDP to map your printer over.
  17. Well, I had a parity check run then and the speeds got capped to the slow 60MB/s, so it's not the DiskSpeed docker's fault, I think.
  18. I'm wondering if anybody with an LSI card using the mpt3sas driver has noticed any speed issues? I'm using an LSI SAS9206-16E HBA, dual-linked to an expander/enclosure (ARC-4036). I've had this setup for a long while now and have always gotten the maximum speed out of all my HDDs from it. But when I upgraded to 6.9.2 this weekend, half of my drives on the HBA started running at only 60MB/s compared to the usual ~200MB/s. When I reverted back to 6.9.1, the drives all ran at their max speed. When I have time, I'll try to upgrade again and see if the problem occurs again. I don't have the all-drives benchmark in 6.9.2, which gives really weird results. ^ This is Disk 5 (black is 6.9.2)
  19. You can try making a request to Limetech, but you'll need to know exactly what you need (I'm not sure either).
  20. I think most users here limit VMs and dockers at the router level, which offers better control of the network than Unraid itself. I could do that with my Mikrotik router, but I've never needed to.
  21. The error "Specified qdisc not found" indicates that the qdisc modules are not installed/available. So this would probably be unsupported unless the modules are compiled and loaded in (either by Limetech or some plugin). A quick way to check whether a given qdisc module shipped with the running kernel is sketched below.
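     As a rough check (sch_cake is just an example module name; substitute the qdisc you actually need), you can see whether the kernel build includes the module:
     # dry-run modprobe: prints what would be loaded, or fails if the module is absent
     modprobe -n -v sch_cake
     # list whatever scheduler modules shipped with the running kernel, if any
     ls /lib/modules/$(uname -r)/kernel/net/sched/ 2>/dev/null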
  22. The first time the docker engine starts up (or when you've blown away the local-kv.db file), it picks a /16 network for the default docker0 bridge: it starts from 172.17.0.0/16 and keeps going (172.18.0.0/16, and so on) until it gives up. The criterion for picking a subnet is that it is not in use by the local machine, which is odd - so probably your USB ethernet adapter dropped off or wasn't connected when the docker engine last started up. Your screenshots indicate a bad config:
     * br2 is 172.17.0.0/24 (which is a subset of 172.17.0.0/16)
     * you have 3 default gateways (which will invariably confuse the OS as to which interface to use to talk to the world)
     I suggest you try renumbering br2 to some other sensible IP range - 172.20.0.0/24, or a 192.168.x.0/24 if that's a possibility - and make sure the default routes are really what you want (most of the time they aren't); that can be resolved by specifying no gateway on the other interfaces. Also, as a possible security measure, it is not necessary to have an IP on the other interfaces (save for eth0/br0). Docker can still function: in my setup Unraid is only accessible via br0, but I can run docker containers and VMs on all my subnets (VLANs). A couple of commands to verify the current routes and the docker subnet are sketched below.
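     As an illustration (these commands only read state; "bridge" is Docker's default network name), this is one way to see which default routes exist and which subnet docker0 actually picked:
     # show every default route the OS currently knows about (ideally just one)
     ip route show default
     # print the subnet assigned to Docker's default bridge network
     docker network inspect bridge -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}'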
  23. Unraid also has SFTP; you just need to enable root access via SSH by either:
     * assigning a root password, or
     * adding an SSH public key for root to log in with (a sketch of this route is below).
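     As an illustration only (the key file name is a placeholder, and this assumes the /root/.ssh -> /boot/config/ssh/root symlink shown earlier in this thread), the public-key route boils down to appending your client's key to root's authorized_keys on the flash drive so it persists across reboots:
     # placeholder key path; append it to root's persistent authorized_keys
     mkdir -p /boot/config/ssh/root
     cat /path/to/your_key.pub >> /boot/config/ssh/root/authorized_keys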
  24. For data points: I'm using 4x 8TB N300 drives (about 1 year 10 months of power-on time). I haven't ever seen any weird SMART attributes, so it's probably just the firmware on them.