Leaderboard

Popular Content

Showing content with the highest reputation on 08/01/20 in all areas

  1. Thanks for the feedback. I will look at 2 again. I may be letting the resume happen as soon as no disks are classed as ‘hot’, rather than checking that they have cooled by the amount specified in the plugin settings. It looks like making the Testing log option available in the GUI was a good decision in helping to get to the bottom of such issues. Testing the temperature-related settings has always been a bit difficult, as I have no problems with my disks overheating and so have to force such failures artificially.
    2 points
  2. I have just pushed what I hope is the ‘fixed’ version of the plugin to GitHub. Let me know if you notice any further anomalies/bugs.
    2 points
  3. Having any trouble setting up an Unraid Capture, Encoding, and Streaming Server? Are you a twitch streamer who uses Unraid? Let us know here! https://unraid.net/blog/unraid-capture-encoding-and-streaming-server
    1 point
  4. THANK YOU! Copied over every bz* file in the 6.8.3 zip and booted again, Web UI is back up on the old stick! Thanks again for your help!
    1 point
  5. Contact the eBay seller. It will also give you an idea of his customer service attitude! Remember, on eBay you vet the seller more than the product...
    1 point
  6. This applies to kitchen and bathroom fans too. After 5 years of dust buildup on the plastic fins of a bath fan, the fins snapped... I had to buy a new fan just because of that.
    1 point
  7. I only have one server, and for many years I had pfSense running on it (on ESXi, then on Unraid). The downside is that every time I turn off my server, everything in my house loses its internet connection. I like to tinker with my server (change hardware, etc.), and now my wife is pushing me to turn it off as much as possible to save money on our electricity bills 😅 On top of that, my USB drive crashed... so I put a backup on a new one. But when I booted from the backup, the array didn't want to start until the licence was transferred to the new USB stick. For that it needed an internet connection, but in order to have internet I needed to start the array... not very convenient 🤣 After all of this, I decided to get a small machine that runs everything needed for internet while my server is OFF. That was the main purpose of it.
    1 point
  8. Yeah, @rachid596 got me set up with a custom build too. Super helpful, and then I learned to make my own thanks to @ich777 's helper container. FYI, the issue is fixed in kernel 5.7.5 and later for AMD audio and USB passthrough, so Unraid 6.9beta24 and later will have this fix.
    1 point
  9. ... this 👆 I simply don't play games that block VM.
    1 point
  10. I forget what the command is, but if you read down in the comments of the video, someone tells you what you need to do to create a keyfile for backup. It's really as simple as creating a file called keyfile and writing your passphrase into it... but use the command in the comments.
    1 point
  11. You can't hide a device from UD. The best you can do is to mark it as passed through so UD will ignore it.
    1 point
  12. Worked a treat! And it only took about 5 minutes. I'm going to run an extended SMART test just to be safe. Thanks again
    1 point
  13. And another thing I always say: Each additional drive is an additional point of failure.
    1 point
  14. The higher data speeds of the larger drives can reduce the parity check times by some margin, but yes, the parity check overall time is largely determined by the size of the parity drive. There is a plugin to pause and restart the parity check on a schedule, so you can run the check over multiple low usage time periods. My main server typically completes a check over a period of 3 days, starting after midnight and pausing at 6am.
    1 point
  15. Faster, lower overall power consumption, less physical space needed, and typically cheaper once you consider the cost of the extra slots for lower-capacity drives, as more SATA ports can get expensive quickly if you need another HBA and/or case. Just some of the reasons to go with a smaller number of higher-capacity drives.
    1 point
  16. First picture: names of shares are not translated and are displayed as is. This is correct. Second picture: this is a bug on the Dashboard page (wrong reference). This is fixed in the next version of Unraid. Thanks
    1 point
  17. Thanks. @bonienl- Can you take a look and see if these sections are supposed to be this way? If not, I will search for the appropriate section and make sure these get translated.
    1 point
  18. Does anyone know if NFS v4 is/will be supported in 6.9?
    1 point
  19. I certainly would not try to persuade you to go in a different direction with some of your hardware choices; everything looks good... albeit somewhat overkill. There is no practical reason to use a $400+ consumer motherboard when you won't actually be utilizing 90% of the features it offers by using it with Unraid. Just quickly, off the "oohh ahhh" specs, you won't be using any of these:
    - Intel® WiFi 6 802.11ax 2T2R & BT 5
    - Rear 125dB SNR AMP-UP Audio with ALC1220-VB & ESS SABRE 9118 DAC with WIMA Audio Capacitors
    - USB TurboCharger for Mobile Device Fast Charge Support
    - RGB FUSION 2.0 with Multi-Zone Addressable LED Light Show Design, Supports Addressable LED & RGB LED Strips
    - Front & Rear USB 3.2 Gen2 Type-C™ Headers
    I realize that in relation to the rest of the hardware the cost of that motherboard is a drop in the bucket, but it's just something to consider. I think you are going to enjoy Unraid with this build though; it's very powerful. And go with the ECC RAM, just for peace of mind with memory error correction over the long term.
    1 point
  20. More updates after a fortnight with ZFS: I decided not to use it and switched to btrfs. 😂 The biggest reason is that ZFS does not respect isolcpus (there's even an official bug report for it, but with no fix mentioned). This is only made worse by what I can only conclude, after 2 weeks of use, is a ZFS preference for using isolated cores. Normally it's fine, but under heavy IO it's painful, as it lags even web browsing. In contrast, btrfs in 6.9.0 is much better. Only balance doesn't respect isolcpus now, which I can live with as I can schedule that in the wee hours. Scrub and normal IO are both fine. I also found out that the kswapd and unraidd processes don't respect isolcpus. The latter looks like an Unraid-spawned process, but at least they don't lag as badly as ZFS under heavy IO.
    A small annoyance is that I have to use the CLI to check on pool health and free space. No big deal, but it does play a part in the final decision. The btrfs write hole issue is mostly fixed, except for scenarios that would also affect other non-ZFS RAID solutions. At least with btrfs I can have metadata and system running in RAID1 (or RAID10 / RAID1C3), so the write hole is likely to only corrupt the particular data being written without entirely killing the file system. Frequent scrubbing also helps.
    The final nail was my epiphany moment realising that btrfs can do snapshots just like ZFS. ZFS does have znapzend, which makes doing snapshots trivial. However, I was able to create a set of 4 scripts to do automated snapshots with cleanup in a way very similar to znapzend (e.g. the equivalent of znapzend 1week=>1hour,1month=>1day,1year=>1week,10year=>1month). It's not as elegant, but it works well enough that I have no reservations about moving to btrfs snapshots. Oh, and btrfs also does compression. Not as well as ZFS, but good enough for backups.
    So now I have taken full advantage of the Unraid 6.9.0-beta25 multi-pool feature to have 1-2-3-4 (didn't intend for it to be that way):
    - Array: 1x Samsung 970 Evo 2TB. No trim in the array, but I intend to use it as temp space so it will have plenty of free space to mitigate write speed degradation (e.g. it's sitting right now with 99.9% free space). Hopefully I won't have to resort to periodic blkdiscard to refresh it.
    - Pool1: 2x Intel 750 1.2TB in RAID0. The daily network driver. Most stuff is done on here.
    - Pool2: 3x Samsung 860 Evo 4TB in RAID-5 (metadata + system in RAID1C3). I know it's overkill doing RAID5 data chunks + RAID1C3 metadata/system chunks, but then I'd like something irrational in my life. This is my main game / Steam storage. Performance is surprisingly good even over the network (e.g. ARK only loads about 30% slower than on a passed-through NVMe, and that's the worst-case scenario due to ARK's liberal use of tiny files).
    - Pool3: 4x Kingston DC500R 7.68TB in RAID-5 (metadata + system in RAID10). Running metadata + system chunks in RAID10 provides theoretically better performance (i.e. not perceivable in practice) with the same single-failure protection as RAID-5. This is for online backup of my workstation data (with compression) and miscellaneous storage, i.e. it's what the array used to be for me.
    And a spare SATA port for my Blu-ray drive 🤣 It's kinda ironic that I originally used Unraid because it's NOT RAID but have evolved into running 3 RAID pools. 😅 It speaks volumes to how important the non-core features (namely VMs with PCIe pass-through and Docker with readily available apps) have become over the years. Now if only Limetech would remove the requirement to have at least one device in the array, but maybe that's too much to ask. 😆
    1 point
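The znapzend-style rotation described above can be sketched as a single cron-style script. This is a minimal illustration, not the poster's actual 4 scripts: the subvolume paths, snapshot naming, and 30-day retention are all assumed example values, and it defaults to a dry run for safety.

```shell
#!/bin/bash
# Hedged sketch of a btrfs snapshot-with-cleanup rotation (daily tier only).
# SUBVOL/SNAPDIR are hypothetical paths; adjust for your pool layout.
RUN="echo"                      # dry-run by default; set RUN="" on a real btrfs host
SUBVOL=/mnt/pool3/backup        # subvolume to snapshot (example path)
SNAPDIR=/mnt/pool3/.snapshots   # where read-only snapshots are kept (example path)
KEEP_DAYS=30                    # retention: roughly the "1month=>1day" tier

# Take a read-only, timestamped snapshot.
STAMP=$(date +%Y-%m-%d_%H%M)
$RUN btrfs subvolume snapshot -r "$SUBVOL" "$SNAPDIR/backup@$STAMP"

# Prune: delete snapshots whose date portion is older than the cutoff.
CUTOFF=$(date -d "-$KEEP_DAYS days" +%Y-%m-%d)
for snap in "$SNAPDIR"/backup@*; do
  [ -e "$snap" ] || continue          # skip if the glob matched nothing
  day=${snap##*@}; day=${day%_*}      # extract YYYY-MM-DD from the name
  if [[ "$day" < "$CUTOFF" ]]; then
    $RUN btrfs subvolume delete "$snap"
  fi
done
```

Running one copy of this per retention tier (hourly, daily, weekly) with different `KEEP_DAYS` values approximates the znapzend plan the poster mentions.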
  21. I do not see anything wrong with your hardware choices. I am not sure you will gain a benefit from the ECC RAM, but I do not believe it will cause any issues either. My only concern would be the case: it has poor thermal performance, but I like everything else about it.
    1 point
  22. 1 point
  23. One of the most beneficial things about Unraid is the ability to move the USB key to another (new) system with completely different hardware and start it like nothing has ever changed.
    1 point
  24. @oko2708 I might have done it wrong, but I got it working by exposing the Docker port on my Unraid box, then configuring a cloud tcp://192.168.XX.XX:XXXX, and then I just create a cloud agent template. This was also a bit annoying, as the Docker image then needed to apply the JNLP agent stuff, i.e. `FROM jenkins/jnlp-slave`. If you figure out how to "just use docker images", let me know, as it feels kinda iffy. @binhex I updated my image earlier today and then Jenkins just died, prompting something along the lines of `libfreetype.so.6: cannot open shared object file:` in JDK 8. Is this just a Jenkins issue? I assume you're auto-building based on tags in Jenkins land or something. I just rolled back to 2.239 for now.
    1 point
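The post above says the Docker port was exposed but not how. One common way to do this on Unraid, sketched below under that assumption (the socat proxy container and port 2375 are my example, not necessarily what the poster used), is to forward a TCP port to the local Docker socket so the Jenkins "Docker cloud" can reach it. Dry run by default.

```shell
# Hedged sketch: expose the Docker API over TCP for a Jenkins Docker cloud.
# WARNING: an unauthenticated TCP Docker socket gives root-equivalent access;
# only do this on a trusted LAN.
RUN="echo"   # dry-run by default; set RUN="" to actually start the proxy
$RUN docker run -d --name docker-tcp-proxy \
  -p 2375:2375 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  alpine/socat tcp-listen:2375,fork,reuseaddr unix-connect:/var/run/docker.sock
# In Jenkins: Manage Jenkins -> Clouds -> Docker -> Docker Host URI:
#   tcp://<unraid-ip>:2375
# Agent images then need the JNLP agent, e.g. a Dockerfile starting with:
#   FROM jenkins/jnlp-slave
```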
  25. If you change the VNC port in the XML to a specific port instead of auto (which is the behaviour that you want: you want to connect to the same VM on the same port) and then subsequently change any setting of the VM using the GUI, the port reverts back to the auto port, losing your nicely configured static port setting. We need to be able to show and set the port in the GUI so that it can easily be configured to a static port and, more importantly, so that the mapping isn't lost when you make a GUI change. I'm forever having to go back and edit the XML to restore the VNC port.
    1 point
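The XML edit the poster keeps redoing looks roughly like this. A minimal sketch, assuming libvirt's `virsh` tooling; the VM name "Windows10" and port 5901 are example values, and it defaults to a dry run.

```shell
# Hedged sketch: pin a VM's VNC port in the libvirt domain XML.
# As the post notes, any Unraid GUI edit of the VM will revert this.
RUN="echo"   # dry-run by default; set RUN="" on an actual libvirt host
$RUN virsh dumpxml Windows10
# Then, in `virsh edit Windows10`, change the graphics element from:
#   <graphics type='vnc' port='-1' autoport='yes' .../>
# to a fixed port:
#   <graphics type='vnc' port='5901' autoport='no' .../>
```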
  26. Probably already said it, but set the VNC port in the GUI so it doesn't get lost when changing the configuration XML. Drives me absolutely nuts.
    1 point
  27. If the passphrase is "grass is green", then you can create a file like this: echo -n "grass is green" >/root/keyfile
    1 point
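The one-liner above, wrapped up with a permissions step. The chmod is my addition, not from the post, but seems prudent since the file holds the passphrase in plain text.

```shell
# Write the passphrase to the keyfile; -n omits the trailing newline,
# which matters because a newline would otherwise become part of the key.
echo -n "grass is green" >/root/keyfile
# Added precaution (not in the original post): restrict access to root.
chmod 600 /root/keyfile
```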
  28. I just used "Disk Management" in "Computer Management" to first delete all volumes and then create a FAT32 volume with a size of 1024 MB. It works and I'm happy
    1 point
  29. Ok so I just added a path in the Krusader container settings (attached), restarted the container, and that seemed to solve the issue.
    1 point