Leaderboard

Popular Content

Showing content with the highest reputation on 08/17/19 in all areas

  1. Here's a guide that shows how to repair XFS filesystem corruption, if you are ever unlucky enough to have this happen on your server. Hope it's useful if you ever need to fix it!
    4 points
  2. Unless they are going to create their own mounts, the simplest solution is to share your Plex server.
    2 points
  3. Hi, I tried to search the forum but did not find a topic for this. So, is there a way to set Unraid to raise a notification (alert / warning) if a fan fails (i.e., the speed gets lower than a configurable limit)? Thanks! >>> Made a script (using the User Scripts plugin) that gets the fan speeds with the 'sensors' command and parses the output. If a fan speed is lower than the limit set for that fan, a notification is initiated. Works for my needs. Marking the issue solved.
    1 point
  4. I had eight WD 6TB Red Pros on a SAS PCIe 2 controller, and I had to take two of them off before I could read all drives at the same time at the same speed as each drive on its own. My UTT scores changed significantly afterwards.
    1 point
  5. 1. cloud -> server -> family 2. See 1, but you can "link" unlimited Plex servers to the same drive (which would be pretty dumb if you ask me). 3. Transcode or not depends on the client, like a normal Plex installation. Just remember the whole thing appears as a normal directory to your server/software; that's the whole deal.
    1 point
  6. I truly hope not. It would remove a very key feature of unRaid vs snapraid: unRaid can seamlessly emulate missing/dead drives, which snapraid cannot do at all.
    1 point
  7. And probably as multiple requests, as one giant one is unlikely to gain much traction. Instead it needs to be a number of smaller requests that can be individually prioritised (if accepted) and gradually picked off.
    1 point
  8. The script works perfectly fine. What you were doing, though, was copying and pasting the script via the forum, which occasionally adds some hidden characters, resulting in a broken script. I'm changing the plugin to remove the unprintable characters when pasting into the edit box.
    1 point
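For illustration, hidden characters like the ones described can be stripped with `tr`. This is a sketch of the general technique, not the plugin's actual code:

```shell
# Keep only printable ASCII plus tab, newline, and carriage return;
# everything else (e.g. non-breaking spaces pasted from a browser) is dropped.
clean_script() {
    tr -cd '\11\12\15\40-\176' <<< "$1"
}
```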
  9. The speeds seem artificially low. My 3TB 5400 RPM constrained array can hit 140 MB/s, and your 4TB drives should be marginally faster. While 130 MB/s is close, I think you have a bottleneck somewhere.

With 7 drives on your SAS 2008 controller, let's check and see if that could be the culprit. 7 * 130 * 1.36 (this is an easier version of the formula I detailed above) = 1237 MB/s going through your controller. PCIe 1.0 x8 and PCIe 2.0 x4 both support 2000 MB/s, and PCIe 1.0 x4 supports 1000 MB/s. None of that lines up with 1237 MB/s, so it doesn't seem like this is a PCIe bus related constraint. That doesn't rule out the SAS 2008 controller, though - maybe it is just slow...

Perhaps you have something about your build that doesn't show up in the report. Expanders? Maybe when using all of your SATA ports on your motherboard (sdb, sdc, sdd, sde) you are hitting some kind of bus limit? 4 * 130 * 1.36 = 707 MB/s, which again doesn't really seem like a common bus limit. I think you should try @jbartlett's DiskSpeed testing tool.

Other thoughts: You have one of those servers that doesn't seem to react to changing the Unraid disk tunables. Except in extreme edge cases, you get basically the same speed no matter what. On the repeated tests, most seem to be within +/- 0.9 MB/s, and for that reason your fastest measured speed of 129.7 is essentially the same as anything else hitting 127+ MB/s. Also, on at least one repeated test (Pass 1_Low Test 2 @ Thresh 120 = 127.8, and Pass 2 Test 1 = 116.6), the speed variation was 11.2 MB/s, which is huge. Perhaps you had some process/PC accessing the array during one of those, bringing down the score. For that reason, I'd say pretty much every test result was identical, and you probably won't notice much of any difference between any values. There's certainly no harm in using the Fastest values, as the memory utilization is so low there's no reason for you to chase more efficiency.

Keep in mind that if you use jbartlett's DiskSpeed test, find the bottleneck, and make changes to fix it, you would want to rerun UTT to see if the Fastest settings change.
    1 point
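The controller arithmetic above can be checked with a one-liner, using the poster's simplified drives × per-drive speed × 1.36 overhead formula:

```shell
# Estimated MB/s through the controller: drive count * per-drive MB/s * 1.36
controller_load() {
    awk -v n="$1" -v s="$2" 'BEGIN { print int(n * s * 1.36) }'
}
```

`controller_load 7 130` gives 1237 MB/s and `controller_load 4 130` gives 707 MB/s, matching the figures in the post.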
  10. Yes. Too much trouble and too easy to make mistakes. Also, unless you have enough ports to allow you to have all the disks in at the same time, you would have to New Config and rebuild parity more than once. And having so many disks installed at once actually increases the risk of one of them giving problems.

The standard way to do what you want to do is to replace each disk one at a time and let it rebuild from the parity calculation. Since you don't have any actual failed disks (as far as we know), rebuilding is even less risky, since the original disks can serve as a backup in case there are any problems.

So, what you quoted (from me) is what I just repeated. Why do you question it? It would make sense to do something else if the original disks are ReiserFS and you want the new disks to have another filesystem, such as XFS. But you don't mention anything like that. It would make sense to do something else if you intended to decrease the number of disks because the smaller disks would fit on fewer disks. But you specifically said that is not what you are doing. If you want more specific consideration of your actual situation, post Diagnostics and give us more details.
    1 point
  11. Dedicated IPs can talk to each other. It's the custom bridge and host that can't talk to each other.
    1 point
  12. You have specified the port in the host variable. Remove the port and it should work.
    1 point
  13. I might be wrong, but if you have both bookstack and mariadb on their own dedicated IPs, there is a Docker "security" feature that blocks them from talking to each other.
    1 point
  14. Post your docker run command. At this point it's just guessing.
    1 point
  15. Script to convert text files from DOS to Unix format.

    dos2unix:

    #!/bin/bash
    # Convert text files from DOS to Unix format.
    if [ $# -eq 0 ] || [ "$1" == "--help" ]
    then
        printf "Usage: dos2unix <files>...\n"
        exit 0
    fi
    for file in "${@}"
    do
        user=$(stat -c '%U' "$file")
        group=$(stat -c '%G' "$file")
        perms=$(stat -c "%a" "$file")
        tmp="$file.$(date +%N)"
        cat "$file" | fromdos > "$tmp"
        mv "$tmp" "$file"
        chown $user:$group "$file"
        chmod $perms "$file"
    done

    Script to convert text files from Unix to DOS format.

    unix2dos:

    #!/bin/bash
    # Convert text files from Unix to DOS format.
    if [ $# -eq 0 ] || [ "$1" == "--help" ]
    then
        printf "Usage: unix2dos <files>...\n"
        exit 0
    fi
    for file in "${@}"
    do
        user=$(stat -c '%U' "$file")
        group=$(stat -c '%G' "$file")
        perms=$(stat -c "%a" "$file")
        tmp="$file.$(date +%N)"
        cat "$file" | todos > "$tmp"
        mv "$tmp" "$file"
        chown $user:$group "$file"
        chmod $perms "$file"
    done
    1 point
  16. Very weird, each disk is being detected twice, with different letters, and this is confusing Unraid. Note parity is sdg, disk 2 is sdf:

    Aug 15 05:10:39 Tower kernel: mdcmd (1): import 0 sdg 64 2930266532 0 HUS723030ALS640_YHK3JJ6G_35000cca01aaf8698
    Aug 15 05:10:39 Tower kernel: md: import disk0: (sdg) HUS723030ALS640_YHK3JJ6G_35000cca01aaf8698 size: 2930266532
    Aug 15 05:10:39 Tower kernel: mdcmd (2): import 1
    Aug 15 05:10:39 Tower kernel: md: import_slot: 1 empty
    Aug 15 05:10:39 Tower kernel: mdcmd (3): import 2 sdf 64 2930266532 0 HUS723030ALS640_YHK3LB3G_35000cca01aafa228
    Aug 15 05:10:39 Tower kernel: md: import disk2: (sdf) HUS723030ALS640_YHK3LB3G_35000cca01aafa228 size: 2930266532
    Aug 15 05:10:39 Tower kernel: mdcmd (4): import 3 sde 64 2930266532 0 HUS724030ALS640_P8JPYX7W_35000cca02798ac54
    Aug 15 05:10:39 Tower kernel: md: import disk3: (sde) HUS724030ALS640_P8JPYX7W_35000cca02798ac54 size: 2930266532

    3 seconds later, parity is now sdd, disk 2 is sdc:

    Aug 15 05:10:42 Tower kernel: mdcmd (1): import 0 sdd 64 2930266532 0 HUS723030ALS640_YHK3JJ6G_35000cca01aaf8698
    Aug 15 05:10:42 Tower kernel: md: import disk0: (sdd) HUS723030ALS640_YHK3JJ6G_35000cca01aaf8698 size: 2930266532
    Aug 15 05:10:42 Tower kernel: mdcmd (2): import 1
    Aug 15 05:10:42 Tower kernel: md: import_slot: 1 empty
    Aug 15 05:10:42 Tower kernel: mdcmd (3): import 2 sdc 64 2930266532 0 HUS723030ALS640_YHK3LB3G_35000cca01aafa228
    Aug 15 05:10:42 Tower kernel: md: import disk2: (sdc) HUS723030ALS640_YHK3LB3G_35000cca01aafa228 size: 2930266532
    Aug 15 05:10:42 Tower kernel: mdcmd (4): import 3 sde 64 2930266532 0 HUS724030ALS640_P8JPYX7W_35000cca02798ac54
    Aug 15 05:10:42 Tower kernel: md: import disk3: (sde) HUS724030ALS640_P8JPYX7W_35000cca02798ac54 size: 2930266532

    A couple of seconds later, parity is still sdd, but disk 2 is sdf again:

    Aug 15 05:10:44 Tower kernel: mdcmd (1): import 0 sdd 64 2930266532 0 HUS723030ALS640_YHK3JJ6G_35000cca01aaf8698
    Aug 15 05:10:44 Tower kernel: md: import disk0: (sdd) HUS723030ALS640_YHK3JJ6G_35000cca01aaf8698 size: 2930266532
    Aug 15 05:10:44 Tower kernel: mdcmd (2): import 1
    Aug 15 05:10:44 Tower kernel: md: import_slot: 1 empty
    Aug 15 05:10:44 Tower kernel: mdcmd (3): import 2 sdf 64 2930266532 0 HUS723030ALS640_YHK3LB3G_35000cca01aafa228
    Aug 15 05:10:44 Tower kernel: md: import disk2: (sdf) HUS723030ALS640_YHK3LB3G_35000cca01aafa228 size: 2930266532
    Aug 15 05:10:44 Tower kernel: mdcmd (4): import 3 sde 64 2930266532 0 HUS724030ALS640_P8JPYX7W_35000cca02798ac54
    Aug 15 05:10:44 Tower kernel: md: import disk3: (sde) HUS724030ALS640_P8JPYX7W_35000cca02798ac54 size: 2930266532

    I've never seen this problem before, but until it's resolved Unraid won't work correctly. Maybe you're trying to use SAS multipath? Unraid doesn't support that; make sure there's only one cable from the HBA to the enclosure.
    1 point
  17. unRAID (as when I started with it) helped me get away from the off the shelf NAS products into a much more robust environment. Originally starting with an 8-disk rack server, I quickly added additional DAS devices for more storage...swapped hardware for more VM performance...and now host a media streaming server for 20ish daily users. The server also runs my home LAN dockers, and my daily driver gaming VM....could have done it a million other ways, but unRAID made everything cohesive & simple!
    1 point
  18. Well f off and fork your own. Been running my install since we released without a problem. I got no time for comments like this and am going to treat it with the contempt it deserves. All the code is open source feel free to make a pull request, however, I've yet to see a single person make a comment of this nature actually do so, and I'm not expecting you to be the first. Sent from my Mi A1 using Tapatalk
    1 point
  19. Great that your backup worked! There have been some issues for various people with SQLite database corruption since Unraid v6.7, so I would keep an eye on it over the next days/weeks to make sure it stays stable.
    1 point
  20. We've made it easier to get Nvidia & iGPU hardware transcoding working with this container. This post details what you need to do to use either of these in your container.

    Nvidia
    1. Install the Unraid Nvidia plugin, download the version of Unraid required containing the Nvidia drivers, and reboot.
    2. Add the Jellyfin container, add --runtime=nvidia to Extra Parameters (switch on advanced template view), and copy your GPU UUID to the pre-existing NVIDIA_VISIBLE_DEVICES parameter. There is no need to utilise the NVIDIA_DRIVER_CAPABILITIES parameter any more; this is handled by our container.

    Intel iGPU
    1. Edit your go file and add modprobe i915 to it, save, and reboot.
    2. Add the Jellyfin container and add --device=/dev/dri to Extra Parameters (switch on advanced template view). There is no need to chmod/chown /dev/dri; this is handled by the container.
    1 point
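As a sketch, the container settings from the steps above end up looking roughly like this. The GPU UUID shown is a placeholder, not a real value:

```shell
# Nvidia (Extra Parameters field, advanced template view):
#   --runtime=nvidia
# Template variable (placeholder UUID; use your own from nvidia-smi):
#   NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
#
# Intel iGPU: append to the go file, then reboot:
#   modprobe i915
# Extra Parameters:
#   --device=/dev/dri
```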
  21. Same issue... I average around 55 MB/s; my Win 10 PC can do 92 MB/s. In Firefox on Unraid, speedtest.net gives 940 Mbps down / 940 Mbps up, and I have a 1.5 Gb fiber connection. I was getting around 70 MB/s on Unraid, but since 6.7 and the previous betas it's been slow.
    1 point
  22. To clarify: in the case of a single "disk1" and either one or two parity devices, the md/unraid driver will write the same content to disk1 and to either or both parity devices, without invoking XOR to generate P and without using matrix arithmetic to calculate Q. Hence in terms of writes, single disk1 with one parity device functions identically to a 2-way mirror, and with a second parity device, as a 3-way mirror. The difference comes in reads. In a typical N-way mirror, the s/w keeps track of the HDD head position of each device and when a read comes along, chooses the device which might result in the least seek time. This particular optimization has not been coded into md/unraid - all reads will directly read disk1. Also, N-way mirrors might spread reads to all the devices, md/unraid doesn't have this optimization either. Note things are different if, instead of a single disk1, you have single disk2 (or disk3, etc), and also a parity2 device (Q). In this case parity2 will be calculated, so not a true N-way mirror in this case.
    1 point
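The mirror behaviour described above follows from how P is computed: the parity byte is the bytewise XOR of the data disks' bytes, and XOR over a single term is the term itself. A small sketch:

```shell
# P parity is the XOR of all data disks' bytes for a given position.
parity_of() {
    local p=0 d
    for d in "$@"; do
        p=$(( p ^ d ))
    done
    echo "$p"
}
```

With a single data value, parity equals it (`parity_of 170` prints 170), so the P drive holds an exact copy of disk1; with two or more data disks it is a true XOR (`parity_of 170 85` prints 255).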
  23. Try changing the host path for /config from /mnt/user/appdata/binhex-sonarr to /mnt/disk1/appdata/binhex-sonarr, then restart the container. There are ongoing issues with using FUSE and Docker; I'm convinced there are issues around it, although LT have not found any evidence to date.
    1 point