Leaderboard

Popular Content

Showing content with the highest reputation on 08/25/18 in all areas

  1. There have been loads of commits to the webgui GitHub repo. New functionality I saw there is CPU pinning for Docker containers (a rough command-line sketch of what that amounts to follows this entry).
    4 points
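     A minimal sketch of what the same pinning looks like from the command line, assuming Docker's standard --cpuset-cpus flag; the container name, image, and core numbers are placeholders, and the webgui feature presumably generates something equivalent:
        # Pin a new container to host cores 4-7:
        docker run -d --name=example --cpuset-cpus=4-7 some/image
        # Or change the pinning of an existing container without recreating it:
        docker update --cpuset-cpus=4-7 example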
  2. Hey guys, sorry for the lack of updates in a (long) while. Real life has been taking up a lot of time and my own install of rclone has been sufficient for my needs. However, both the stable branch and the beta branch should now be able to survive a reboot even if no internet connection is available. Please test it out and see if it's working as intended. I also fixed the missing icon on the settings page. Cheers
    2 points
  3. Hi everyone, For those of you using a Threadripper build, or any recent AMD CPU, you may have concerns about the Infinity Fabric induced latency between CCXs, and further latency between dies. I dug through multiple posts to try and find exactly which cores were associated with which CCX/die, and after finding insufficient info, decided to ask AMD and my motherboard supplier, Gigabyte. For those of you who had not seen the informative post on AMD's latency tests, it has been found that base Ryzen uses sequential numbering (no interleave) for its core assignments. Lodging a ticket with AMD support confirmed this for Threadripper as well. Gigabyte also returned the same info for physical cores, but gave no info on logical ones. Ultimately that means cores 0-1 are logically related, cores 0-7 are on the same CCX, and (if on Threadripper) cores 0-15 are on the same die. You can also confirm the layout on your own system; a sketch follows this entry. This will hopefully help someone else come across the information more quickly and reduce their struggle by some small amount. I hate to cop out, but I'm at work now, so if anyone has an article they feel will supplement this info, please comment it below.
    1 point
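     If you would rather verify the layout on your own hardware than rely on vendor replies, the kernel already exposes the topology; this is plain Linux, nothing Unraid-specific:
        # CPUs sharing a CORE value are SMT siblings of the same physical core;
        # NODE shows the NUMA node (one per die when the board runs in NUMA mode)
        lscpu --extended=CPU,CORE,SOCKET,NODE
        # Or ask sysfs directly, e.g. for the SMT sibling(s) of cpu0:
        cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list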
  4. NO. Let me reiterate: NO, DO NOT FORMAT!!! Just rebuild. A format is no different than on, say, Windows, a C64, or a TRS-80: you will erase the files on the disk, in this case the emulated disks. Due to the massive amount of failed writes, you do have file system corruption. You will need to Check Disk File System on each of the drives (a command-line sketch follows this entry). You can do that either before or after the rebuild.
    1 point
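     In case it helps, a sketch of the command-line equivalent of that filesystem check, assuming XFS array disks and the array started in maintenance mode; /dev/md1 corresponds to disk 1, so repeat per disk:
        # Read-only check first; drop -n only when you are ready to let it repair
        xfs_repair -n /dev/md1
        # For a ReiserFS disk the read-only check would instead be:
        reiserfsck --check /dev/md1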
  5. Not discounting your project, but just be aware that if you are using one of @binhex's VPN-enabled dockers, which I'm pretty sure the qBittorrent docker you forked was originally based on, you already HAVE a proxy (Privoxy) baked into the VPN-enabled docker. You just have to enable it (a rough sketch follows this entry).
    1 point
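     For reference, a rough and incomplete sketch of turning that on, assuming the binhex-style images toggle Privoxy with an ENABLE_PRIVOXY variable and expose it on port 8118; the variable name, port, and image name here are from memory, so check the image's documentation:
        docker run -d --name=qbittorrentvpn \
          -e ENABLE_PRIVOXY=yes \
          -p 8118:8118 \
          binhex/arch-qbittorrentvpn
          # ...plus your usual VPN credentials, volumes, and other ports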
  6. It looks like your SAS cable isn't plugged in all the way or you've got power issues. Disks 2 & 5 are disabled (they dropped offline, but then reconnected), and Disk 11 keeps having read errors (it also had write errors which the system reconstructed) and finally dropped off the face of the earth. Reseat your HBA, cables, etc. (a quick SMART check you can run is sketched below).
    1 point
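     One extra check, not in the reply above, that often helps separate cable/backplane trouble from a dying drive: look at the SMART interface CRC counter, since a rising value usually points at the connection rather than the disk. /dev/sdX is a placeholder for the device in question:
        smartctl -a /dev/sdX | grep -iE 'udma_crc|reallocated|pending'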
  7. Hi everyone, first of all, let me say hello to everyone. This is my first post after nearly a year of using Unraid. So far I have had no big issues, and when there were some smaller ones it was easy to find a fix for them in the forums. I fiddled around with the topic of core assignments when I started with the TR4 platform at the end of last year. After doing some tests back then, I thought I had figured out which die corresponds to which core shown in the Unraid GUI.
     First of all, my specs:
     CPU: 1950X locked at 3.9GHz @ 1.15V
     Mobo: ASRock Fatal1ty X399 Professional Gaming
     RAM: 4x16GB TridentZ 3200MHz
     GPU: EVGA 1050ti, Asus Strix 1080ti
     Storage: Samsung 960 EVO 500GB NVMe (cache drive), Samsung 850 EVO 1TB SSD (Steam library), Samsung 960 Pro 512GB NVMe (passthrough Win10 gaming VM), 3x WD Red 3TB (storage)
     After reading your post @thenonsense I was kind of confused, so I decided to do some more testing. Here are my results, which basically confirm your findings. I ran Cinebench three times in a row inside a Win10 VM I have used since the end of last year for gaming and video editing. I also ran some cache and memory benchmarks with Aida64.
     Specs of the Win10 VM: 8 cores + 8 threads, 16GB RAM, Asus Strix 1080ti, 960 Pro 512GB NVMe passthrough.
     TEST 1 (initial cores assigned) Cinebench scores: run 1: 1564, run 2: 1567, run 3: 1567
     Next I did the exact same tests with the core assignments you suggested, @thenonsense.
     TEST 2 Cinebench scores: run 1: 2226, run 2: 2224, run 3: 2216
     Both the CPU score and the memory score improved. The memory performance almost doubled!! Clearly a sign that in the second test only one die was used and the performance wasn't limited by the communication between the dies over the Infinity Fabric, as it was with my old setting.
     After that I decided to do some more testing, this time with a Windows 7 VM with only 4 cores and 4GB of RAM, to check which are the physical cores and which are the corresponding SMT threads.
     First test: assigned cores 4 5 6 7 (physical cores only). Cinebench scores: run 1: 558, run 2: 558, run 3: 557
     Second test: assigned cores 12 13 14 15 (SMT cores only). Cinebench scores: run 1: 540, run 2: 542, run 3: 541
     Third test: assigned cores 4 5 12 13 (physical + corresponding SMT cores). Cinebench scores: run 1: 561, run 2: 563, run 3: 560
     And again, a clear sign your statement is correct, @thenonsense. Cores 0-7 are the physical cores and cores 8-15 are the SMT cores. The second test only uses the SMT cores and clearly shows that the performance is worse than using physical cores as in the first test.
     I was really sure, based on my first tests last year, that I had configured my Win10 VM to use only the cores from one die and all other VMs to use the correct corresponding core pairs. Clearly not. Did Unraid change something in how the cores are presented in the webgui in one of the recent versions? I never checked whether anything changed. All my VMs run smoothly without any hiccups or freezes, but as the tests showed, the performance wasn't optimal (you can verify the pairing and pin the VM from the command line; a sketch follows this entry).
     @limetech It would be nice if you guys could find a way to recognize whether the CPU is a Ryzen/Threadripper based system and present the user the correct core pairing in the webui. Overall, I have had no bigger issues in the time I have used your product. Let me say thank you for providing us Unraid. Greetings from Germany, and sorry for my bad English.
    1 point
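     Once the sibling pairs are known (see the sysfs check sketched under entry 3), the pinning can also be inspected and changed from the command line with libvirt's virsh; "Windows 10" and the core numbers here are placeholders:
        # Show the current vCPU-to-host-CPU pinning of the VM
        virsh vcpupin "Windows 10"
        # Pin vCPU 0 of the VM to host CPU 4 (repeat per vCPU, or set it in the
        # VM's XML / the Unraid VM settings page instead)
        virsh vcpupin "Windows 10" 0 4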
  8. Depends on the battery capacity and the true percentage of charge that is on it. I look at most of the numbers (concerning the state of the battery) as SWAGs (Stupid Wild A$$ Guesses). Plus, in many cases they really mean it when they say to charge the battery for 24 hours. While the battery may have been charged to 100% at some time in the past, we have no way of knowing when that past really was. As most of these batteries are made in China, it was probably (at least) six months ago, as nobody is going to airship these batteries at the price they are asking for them. The best way to test is to allow it to charge for 24 hours. Shut your server down. Now plug in a number of light bulbs to approximate the power load that the UPS unit will have to supply (spin up all of the drives and look at the UPS load for a reasonable estimate), and then pull the power plug. Take data points of time vs. Runtime Left, then plot them (a sketch of a simple logging loop follows this entry). As I have always stressed to anyone who will listen, you should set the Time on Battery before Shutdown to some value like 30 seconds. In the developed countries of the world, if the power is out longer than this, it will be out far, far longer than the batteries in these inexpensive UPSs can ever supply! Plus, you want to make sure that you will always have enough battery left to actually power the server when the battery is three years old. A second factor is that the charging time of the battery is at least ten times longer than the discharge time. Say you have a power outage, run the battery down to 40% of full charge, and then shut the server down. Let's assume that the power is out for two hours. The power comes back on. An hour later, someone decides to restart the server. Fifteen minutes later the power goes out again. Now the UPS will not have enough charge to finish the shutdown sequence before the battery is exhausted and you will have an unclean shutdown!
    1 point
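     A simple way to collect those data points during the light-bulb test, assuming apcupsd is managing the UPS (Unraid's built-in UPS support uses it, and apcaccess ships with it); the interval and log path are arbitrary:
        while true; do
            printf '%s %s\n' "$(date '+%H:%M:%S')" \
                "$(apcaccess status | grep -E 'TIMELEFT|BCHARGE|LOADPCT' | tr '\n' ' ')" \
                >> /boot/ups-test.log
            sleep 30
        done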
  9. Looks like it would make for a good unRAID server. The 8G of RAM may hold you back slightly if you want to run VMs. You need to dedicate RAM to each VM, and I suggest leaving at least 4G for unRAID. That would mean 4G for your VM. If you need more, you can likely upgrade your RAM. You'll get different answers on this one. I personally recently upgraded from an ECC to a non-ECC capable build and just tested the heck out of the memory. Never had a problem. It would not be a deal breaker for me. Yes - you should be able to. I would recommend the Plex Docker, and no passthrough is needed. The GPU can do hardware transcoding (a sketch of passing the device to the container follows this entry). I would suggest an SSD shared for cache, Plex, and VM storage. You don't need a dedicated disk for each.
    1 point
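     A sketch of what that looks like, assuming the GPU in question is an Intel iGPU (Quick Sync); on Unraid you would normally just add --device=/dev/dri as an extra parameter in the Docker template, and the image name and paths below are only examples. Note that hardware transcoding in Plex is a Plex Pass feature.
        docker run -d --name=plex \
          --device=/dev/dri:/dev/dri \
          -v /mnt/user/appdata/plex:/config \
          -v /mnt/user/Media:/data \
          linuxserver/plex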
  10. People with the title of administrator are employees of Limetech; everybody else is a volunteer giving their free time to support the product.
    1 point
  11. You might have a drive with the 3.3V reset issue. These drives will work fine if connected via USB or with a Molex-to-SATA adapter, but connected directly to SATA power they don't power up because they are held in reset. The solution is to put Kapton tape over SATA power pin 3 on the drive. You can overlap pins 1 and 2 if you can't cut a piece small enough for just #3. Search for "SATA 3.3V reset" and you will get lots of info. It varies by drive; some work, some don't without the tape. UPDATE: I added a shucked 8TB Easystore to my array last week (all my 8TB drives are shucked from Easystore enclosures). It was a white-label EMAZ drive and I assumed it would have the issue and was prepared to do the tape thing. Fortunately, the backplane on my hot-swap cage negated the 3.3V reset since it is powered by Molex connectors, a fact I had forgotten. Many SATA drives manufactured from late 2017 on will have the 3.3V reset issue, especially WD white-label 8TB EMAZ drives shucked from BB Easystore enclosures. 3.3V reset is a new standard, but a lot of older power supplies don't support it, hence the tape or other solutions like Molex-to-SATA adapters, snipping a wire on the SATA connector, etc. I prefer the tape as it is reversible and easy to do.
    1 point
  12. I keep getting this error when I enable the PIA VPN; I am using the France server. I never had this issue until I updated to 6.5.3. I've done a clean install of the app and am still unable to access it if the VPN is on. Any ideas?
      2018-08-20 16:26:22,395 DEBG 'start-script' stdout output: [info] Starting OpenVPN...
      2018-08-20 16:26:22,426 DEBG 'start-script' stdout output: Options error: Unrecognized option or missing or extra parameter(s) in [CMD-LINE]:1: auth-user-pass (2.4.6)
      Use --help for more information.
      2018-08-20 16:26:22,427 DEBG 'start-script' stdout output: [info] OpenVPN started
    1 point
  13. The case specs say you have room for up to six 3.5" HDDs and one SSD, so you have room to grow that way. I have personally never had an unRAID system built around an Atom CPU; however, there are several users in these forums who have/had such systems. Perhaps one of them can give you a better idea of exactly what it can handle and when the line is crossed that requires additional CPU power. Frankly, it is my sense of things that in the old days of unRAID (before version 6), the Atom was a very capable processor for a NAS-only build. With the introduction of dockers and VMs, the Atom quickly became underpowered. Both of my unRAID systems (see my signature - you have to enable signatures in your account settings if you can't see them) are Mini-ITX builds. One is Xeon-based and the other uses a Haswell i5 processor. Both can handle multiple dockers just fine. My original build had an i3 and 4GB RAM. You don't say how much RAM you have, but just for dockers (no VMs) I suggest you should have at least 8GB RAM. My two systems have 32GB (I run a couple of lightly-used VMs on this one) and 16GB. That much is not always needed, but I like having the headroom of more RAM. Some people in these forums have dual-Xeon beasts with 20+ processing cores and 64-128 GB RAM. These systems run multiple high-powered VMs. If you want to run a handful of dockers (Sonarr/Radarr/Plex or Emby/OpenVPN, etc.), I would look for a Mini-ITX board that supports an Intel i5 and DDR3 RAM. DDR4 is crazy expensive right now. You will probably need new RAM since your old board uses DDR2 SO-DIMMs. You don't have to break the bank for a decent build (unless you want the latest and greatest tech) that will still give you some expandability options. If you get a decent motherboard and outgrow the i5, you can upgrade to an i7 and run a VM or two if you wish. Take a look at my backup build in my signature for one example. Another option is to build around AMD Ryzen; for example, the Ryzen 2600 would be a great option, but that means DDR4 RAM. unRAID 6 is very stable. If you aren't concerned about VMs and hardware passthrough, which adds some complexity and usually means you need more PCIe slots than you will find on a Mini-ITX board, you can build a very capable server for NAS and dockers, and I think you will find it will be very stable, yet also provide for some future expandability and flexibility. The "downside" for most people with commercial NAS systems like Synology and QNAP is the underpowered hardware for the price and the lack of flexibility in expanding the storage array. However, they are just the thing for many people. Only you can decide what is most important to you.
    1 point
  14. On a different tangent from VPN, you can also grant remote access through another machine on the network with something like TeamViewer. A small headless VM with TeamViewer in host mode will allow access to anything you want, and can be configured securely enough. All certificate work is done using TeamViewer's infrastructure, so you can download and use the client on pretty much anything you control on the spur of the moment with only your TeamViewer account. However... under NO circumstances should you be accessing your home network in any way from untrusted public machines. I don't know if that's what you were after; if so, DON'T.
    1 point