Leaderboard

Popular Content

Showing content with the highest reputation on 08/19/19 in all areas

  1. For as long as I can remember, Unraid has never been great at simultaneous array disk performance, but it was pretty acceptable. Since v6.7 there have been various users complaining of, for example, very poor performance when running the mover and trying to stream a movie. I noticed this myself yesterday when I couldn't even start watching an SD video using Kodi just because there were writes going on to a different array disk, and this server doesn't even have a parity drive. So I did a quick test on my test server: the problem is easily reproducible and started with the first v6.7 release candidate, rc1. How to reproduce:
     - The server just needs 2 assigned array data devices (no parity needed, but the same happens with parity) and one cache device, no encryption, all devices btrfs formatted.
     - Used cp to copy a few video files from cache to disk2.
     - While the cp was going on, tried to stream a movie from disk1: it took a long time to start and would keep stalling/buffering.
     Tried to copy one file from disk1 (still while the cp to disk2 was running) with v6.6.7 and then with v6.7rc1: on rc1, a few times the transfer will go higher for a couple of seconds, but most times it's at a few KB/s or completely stalled. Also tried with all unencrypted xfs-formatted devices and it was the same. The server where the problem was detected and the test server have no hardware in common (one is based on an X11 Supermicro board, the test server is X9 series; the server uses HDDs, the test server SSDs), so it's very unlikely to be hardware related.
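     A rough command-line version of the same test, for anyone who wants to reproduce it (the share names and file names below are just placeholders):
       # Sustained write from cache to disk2 (stands in for the mover or a large copy)
       cp /mnt/cache/Videos/*.mkv /mnt/disk2/Videos/ &
       # Meanwhile, watch how fast a read from disk1 goes (stands in for streaming playback)
       dd if=/mnt/disk1/Videos/movie.mkv of=/dev/null bs=1M status=progress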
    2 points
  2. 2 points
  3. I am partially colorblind. I have difficulty with similar shades (such as more-green blues next to greens, or more-blue greens next to blues). The default Web Console color scheme makes reading the output of ls almost impossible for me: I either have to increase the font size to a point that productivity is hindered drastically, or strain my eyes to make out the letters against that background. A high contrast option would be great. Or, even better, the option to select common themes like "Solarized" et al. Perhaps even the ability to add shell color profiles for the web console. For now I use KiTTY when I can, and I've added a color profile to ~/.bash_profile via my previously suggested "persistent root" modification. Also worth mentioning here: https://github.com/Mayccoll/Gogh - Gogh has a very extensive set of friendly, aesthetically pleasing, and well-contrasting color profiles ready to go. Edit: Also worth noting that currently the web terminal doesn't source ~/.bashrc or ~/.bash_profile, which results in the colors being "hardcoded" (source ~/.bashrc to the rescue). Edit 2: Additionally, the font is hardcoded. If we are fixing the web terminal to be a capable, customizable platform, this would also be high on the list of things to just do.
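     As a stop-gap until the web terminal sources the usual profile files, something along these lines in ~/.bash_profile does the job (the prompt and colors are only an example; dircolors ships with coreutils):
       # ~/.bash_profile -- run "source ~/.bash_profile" in the web terminal until it is sourced automatically
       eval "$(dircolors -b ~/.dircolors 2>/dev/null || dircolors -b)"   # sets LS_COLORS; a custom ~/.dircolors can hold a higher-contrast palette
       alias ls='ls --color=auto'
       export PS1='\u@\h:\w\$ '   # plain, high-contrast prompt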
    1 point
  4. If you set an appdata path of /mnt/cache and then move appdata to the array, your appdata won't be at /mnt/cache anymore and the dockers won't be able to connect to it.
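     For example, a mapping like this (the image name and paths are purely illustrative) points straight at the cache device, so the container loses its config the moment appdata is moved to the array:
       # /mnt/cache/appdata/plex only exists while appdata lives on the cache drive
       docker run -d --name plex -v /mnt/cache/appdata/plex:/config plexinc/pms-docker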
    1 point
  5. Did you copy/paste for the forum? That occasionally adds in extra characters. With User Scripts, you can make sure you're running yesterday's update, then edit the script and save it right away.
    1 point
  6. https://docs.broadcom.com/docs/9300_16i_Package_P16_IT_FW_BIOS_for_MSDOS_Windows.zip
    1 point
  7. I have been running a UniFi USG for the last couple of months and am still happy with my decision. The controller runs as a Docker container on my main Unraid box.
    1 point
  8. Why? Do you want to ensure that they can't run simultaneously? I don't see a good reason to go through the extra hassle vs. just setting up 2 VMs.
    1 point
  9. I bought an ASRock board for my Unraid server a while back. Due to the sensors chip not being supported, I ended up using it as my desktop. In my case it used a Nuvoton NCT6683D eSIO chip, which only ASRock seems to use and which wasn't supported very well. When I contacted ASRock they were very firm and clear that they don't support Linux, so they wouldn't help.
    1 point
  10. This right here saved me a good amount of time. Thank you!
    1 point
  11. Hey Nick. Glad that you haven't lost your data. Yeah, you're right: the rebuild has started to reconstruct the data onto the 6TB and so has built the indexes to the files but nothing else. So yeah, just reformat the drive and then copy the data across. If you didn't figure out how to reformat the 6TB, then use the Unassigned Devices plugin to delete the partition.
     1. Go to the settings of the Unassigned Devices plugin and set it to destructive mode.
     2. Stop the array.
     3. Set the slot with the 6TB drive to unassigned.
     4. The 6TB drive will show up as an unassigned device.
     5. Delete the partition.
     6. Now you can either (i) format the drive using Unassigned Devices to whatever filesystem you want (I would use xfs, as that is now the Unraid default for array drives), or (ii) only delete the partition; then, once the drive is back in the array, Unraid will format it with the current default set in Settings/Disk Settings.
     7. Go back to where you set the 6TB drive slot in the array to be unassigned and add the 6TB disk back to that slot.
     8. Go to Tools/New Config and this time set preserve current assignments to all, and apply.
     9. Start the array. Again, cancel the parity sync until the data is safely copied across from the 2TB, then do a parity sync.
    1 point
  12. That is because each 'write' operation is not simply a write. It involves:
     - Reading the appropriate sector from both the target drive and the parity drive(s) in parallel.
     - Calculating the new contents of the parity sector(s) based on the changes to the target drive data and the current parity drive(s) data.
     - Waiting for the parity drive(s) and target drive to complete a disk revolution (as this is slower than the previous step).
     - Writing the updated sector to the parity drive(s) and the target array drive.
     In this mode there is always at least one revolution of both the parity drive(s) and the target drive (whichever is slowest) before the 'write' operation completes, and this is what puts an upper limit on the achievable speed. Turbo write mode tries to speed things up by eliminating the initial read of the parity drive(s) and the target drive:
     - Reading the target sector from ALL data drives except the target drive (in parallel). The parity drive(s) are not read at this stage.
     - Calculating the new contents of the parity sector(s) based on the contents of the target drive and the equivalent sectors on all the data drives (this is the same calculation as the one done when initially building parity).
     - Writing the updated sector(s) to the parity drive.
     Whether this actually speeds things up will vary between systems, as it depends on the rotational state of many drives, but it tends to, because it removes the need to wait for a full disk revolution on both the parity drive(s) and the target drive, which tends to be the slowest step. In both cases the effective speed will be lower than raw disk performance might suggest. The potential attraction of SSD-only arrays (which some users have been discussing) is that delays due to disk rotation are eliminated, speeding up the above processes.
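     To make the parity arithmetic concrete, here is a toy sketch of both modes using single-byte "sectors" in shell arithmetic (the values are made up; real parity does the same thing bit-for-bit across whole sectors):
       #!/bin/bash
       # Data drives hold 0x5A (target), 0x11 and 0x2D (the others), so parity starts at 0x5A^0x11^0x2D = 0x66
       old_data=0x5A; new_data=0x3C; old_parity=0x66
       other_drive1=0x11; other_drive2=0x2D
       # Read/modify/write: read old data + old parity, then new parity = old parity XOR old data XOR new data
       rmw_parity=$(( old_parity ^ old_data ^ new_data ))
       # Turbo write: read every other data drive, then new parity = new data XOR all other data drives
       turbo_parity=$(( new_data ^ other_drive1 ^ other_drive2 ))
       printf 'RMW parity: 0x%02X  Turbo parity: 0x%02X\n' "$rmw_parity" "$turbo_parity"   # both print 0x00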
    1 point
  13. <FacePalm> You totally baited me. I feel like I got rick rolled.
    1 point
  14. Running a Mikrotik hEX router: https://mikrotik.com/product/RB750Gr3 It's quite a learning curve for people coming from "point-n-click" routers, but should be fairly straightforward for most technical users. What I really like about it is the QoS capability (quite a challenge) and the support for VPN options (though still missing OpenVPN in UDP mode). There are still some rough spots, like the built-in DNS server only supporting A/AAAA records (but it has regex matching). It also has built-in AP management (the APs need to be Mikrotik, though), so new APs just need to be plugged into the network and told to look for the head unit. The main feature I loved about it, until my ISP started placing users on CGNAT, is how easy it is to create a site-to-site VPN between routers: just plug in the public IP on both ends and you are done.
    1 point
  15. Cause it's not FreeNAS. After 2 years with FreeNAS on my 45drives AV15, I finally said f-it and made the switch. Boy oh boy, why didn't I do this 2 years ago! FreeNAS made me sad every day. It made my shares sad. It made my laptops and desktops sad. If VMs on it had ever worked, they would have been sad too. Unraid allowed me to do everything that I wanted to do two years ago, and it allowed me to do it all in 2 days. 6 dockers running, 3 VMs, 50+TB of storage online, a 10G switch for all of the clients, automated backups running, and even integration with my DevOps Slack channels. Unraid really licks the llama's ass!
    1 point
  16. @argonaut @ice pube Hey, I released a separate tag for you with some dirty hacks, but it looks like it's working. You can use the tag spikhalskiy/zerotier:1.4.2 and it will give you the latest ZeroTier version. Give it a try if you are in the mood for some experiments. It's an experimental tag and the docker image for this build contains hacks that are not in the ZeroTier upstream, so I don't recommend switching to it unless you understand that it might not work for you. I made a ticket for the ZeroTier team: https://github.com/zerotier/ZeroTierOne/issues/1013. When it's resolved upstream in a reasonable manner, I will update the main docker with ZeroTier 1.4.2 or newer for everybody.
    1 point
  17. Of course! So not that then. The speed test came out OK. Also @johnnie.black I'd suggest that an impact to performance that brings a system to its knees in the main area it is designed for should not be categorised as minor. Perhaps we should increase the ticket rating, which may also get more visibility?
    1 point
  18. This was 4 years ago. People didn't know any better back then.
    1 point
  19. The usual recommendation is to assign all disks as data disks in the array, and none as parity. The disk that was parity will show up as unmountable since it doesn't have a filesystem. Then you can New Config and reassign that one as parity and make any other changes to the order of the remaining disks. You might even check the box that says parity is already valid before starting the array, but you should probably do a parity check anyway. If for some reason more than one disk is unmountable, come back for further advice.
    1 point
  20. Hi Nick, Unraid has dropped that disk from the array. Although the disk is present again when the server is powered back on, and you add it back as the same drive (disk 3), Unraid now considers it a new disk, even though it is the original drive, because it was dropped from the array, and it wants to rebuild it from parity. However, as this disk came out of the array while a rebuild was happening (replacing the 2TB), it can't rebuild disk 3: there are now 2 missing drives, and 1 parity disk can only rebuild 1 failed drive, not 2. This is why you see the error 'Too many wrong and/or missing disks!'. Now, there may be a way to recover from this. I guess that you still have the 2TB disk that you are replacing, yes? (If not, there still may be a way, but for now I assume that you have it.) So if you still have it, the data is still on that drive... so that's good. Disk 3 should also still have its data on it, so you haven't lost anything there (unless it was lost beforehand). So what I would try is the following:
     - Note the disks and their locations (take a screenshot of how they are now).
     - Go to Tools, New Config. On the drop-down menu you will see preserve current assignments. Choose to preserve parity slots and cache slots and create the new config.
     - Now add back the data drives as in your photo. (Be careful not to add any other drives that were not part of the array, if you were using other unassigned drives on the server.) But don't add the 2TB drive to the array; you want that to stay an unassigned drive so you can later copy the data from it to the array. So you should now have in the array all of your old data disks (except the 2TB), including the 6TB that you started to rebuild onto.
     - Now start the array. Unraid will want to do a parity sync, as this is a new config. Cancel that; you will do it later. It will also say that the 6TB drive has no file system (or it should, as the rebuild failed); this is fine. Allow Unraid to format the 6TB drive.
     - Now the array should be started. Parity will not be valid, but don't do a parity sync yet. You need to copy the data off the 2TB drive first. Do this using the Unassigned Devices plugin and the Krusader docker container, and manually transfer the data off the 2TB to the 6TB drive.
     - After this has finished (the data is now on the 6TB) you can do a parity sync and your array should be fine. If disk 3 is okay and doesn't keep giving read errors, the parity sync should complete OK.
     I hope that this makes sense.
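     If you'd rather do that copy from the command line instead of Krusader, an rsync along these lines also works (the mount point and disk number below are placeholders; use whatever Unassigned Devices mounts the 2TB as, and the slot the 6TB now occupies):
       # Dry run first to see what would be copied
       rsync -avn /mnt/disks/OLD_2TB/ /mnt/diskX/
       # Then the real copy, with progress
       rsync -avP /mnt/disks/OLD_2TB/ /mnt/diskX/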
    1 point
  21. There are more HP tips/tricks in my sig.
    1 point
  22. Just in case you still don't understand: you don't need to figure out which port each disk was connected to. Unraid will look at each hard drive, determine its serial number, and assign the disks just as they were. If for some reason this doesn't work out for you, please come back for further advice.
    1 point
  23. First and foremost, don't panic. 15% over 6 months means you still have about 3 years before reaching 100%. Even at 100%, your SSD is unlikely to catastrophically die. As far as I know, Intel is the only company that locks their SSDs into read-only mode when all reserve is used up. SSDs tend to fail gracefully, so you will essentially only lose capacity as more cells die. Note that only writes are relevant; reads don't wear out your SSD. In terms of usage, what do you mean by "Unraid storage"? If you are using your 250GB SSD as write cache then you really need to think very carefully about which shares need Cache = Yes. Most of the time, you can bypass the cache. Depending on how much RAM you have, you can move transcoding to RAM. To prevent Plex transcoding from spamming your RAM and causing OOM issues, create this script to run at first array start:
     #!/bin/bash
     mkdir /tmp/PlexRamScratch
     mount -t tmpfs -o size=4g tmpfs /tmp/PlexRamScratch
     Then create a new mapping in your Plex docker to map /transcode to /tmp/PlexRamScratch, and change the Plex settings to use /transcode for transcode files. (Hint: change the "size=4g" in the script to more or less depending on how many streams you need. I have found 4g to be sufficient for me, up to 5 1080p streams.) For Handbrake, you only need the output set to not use the cache to reduce writes (see above: reads do not wear out your SSD). Finally, I suggest you have a quick look through this topic for a bit more info and tips to make your SSD last longer (e.g. reducing write amplification).
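     For the /transcode mapping mentioned above, the docker-command equivalent looks something like this (the image, container name and host paths are only examples; on Unraid you would add the same path pair in the container template instead):
       # Map the tmpfs scratch dir into the container; Plex's transcoder temp directory then points at /transcode
       docker run -d --name plex \
         -v /mnt/user/appdata/plex:/config \
         -v /tmp/PlexRamScratch:/transcode \
         plexinc/pms-docker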
    1 point
  24. By default it will join as a client. This image contains ZeroTierOne: https://github.com/zerotier/ZeroTierOne ZeroTier controllers (the same thing as my.zerotier.com) need a lot more configuration. You will also need additional firewall ports opened for the controller to work. See https://github.com/zerotier/ZeroTierOne/tree/master/controller for more information. You can view the template for this image here: https://raw.githubusercontent.com/Spikhalskiy/docker-templates/master/zerotier.xml
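     For the default client case, joining a network from the running container looks like this (the container name and the 16-digit network ID are placeholders; zerotier-cli is ZeroTier's standard CLI):
       # Join a network created on my.zerotier.com (or on your own controller)
       docker exec zerotier zerotier-cli join 0123456789abcdef
       # Check node status and the networks the node has joined
       docker exec zerotier zerotier-cli status
       docker exec zerotier zerotier-cli listnetworks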
    1 point
  25. Disabling the policy resolved my issue. Thank you much! Anyone still looking for a fix for this, see below --- Windows 10 1903 introduces a new display driver for remote desktop sessions (WDDM based Indirect Display Driver - IDD). As soon as a rdp session gets disconnected (i.e. the user is still logged in, but the rdp session is disconnected) the new driver causes the Desktop Window Manager (dwm.exe) to max out one cpu core. - Switch off the new driver by disabling the following Group Policy (gpedit.msc) : "Use WDDM graphics display driver for Remote Desktop Connections" in "Computer Configuration\Policies\Windows Settings\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Remote Session Environment\" --Rags
    1 point
  26. I suddenly had the realisation that this bug is probably what's been causing me so many headaches with my Crashplan backup. I mean, I nearly cancelled the service because it was so slow and it kept crashing. So I'd had enough and downgraded. Yes, Crashplan (docker based) is now suddenly faster and so far working much better. Other things I noticed: it booted a lot quicker and didn't sort of pause before the login screen, Plex is more responsive, the disks seem 'quieter' (before, there were sort of random reads and writes happening which I couldn't track down, but they now seem to have disappeared), the Unraid GUI is much faster, and I'd even say my SSD is running cooler. (Call me paranoid, but I've had two SSDs unexpectedly die and this brand new one already has unrecoverable sectors after only a month.) Perhaps some of this is in my mind, but the primary function of a NAS is, well, to serve files to multiple people concurrently in a performant way. Right now that doesn't happen on 6.7. I'd bet many people have this bug and haven't realised it yet.
    1 point
  27. The Plex database corrupted twice on my server, and Sonarr's at least twice, resulting in many hours spent resolving the problems. Both were stored in /mnt/user/appdata/. I have moved everything to /mnt/cache/appdata/ by toggling the 'use cache only' setting, and so far no database corruptions. I appreciate Limetech taking this matter seriously, and want to say thanks for an otherwise fantastic 6.7.0 release.
    1 point
  28. +100, indeed unRAID OS needs a file manager. Have you ever tried a Synology or QNAP NAS? Both have an excellent file manager.
    1 point
  29. 1 point