Leaderboard

Popular Content

Showing content with the highest reputation on 12/06/19 in Posts

  1. Hello, would it be possible to bake Nvidia and AMD GPU drivers directly into unRaid? Or maybe provide an easy way of activating the drivers? The folks over at LinuxServer are doing an amazing job with their plugin (which I do use), but it would be really nice if you could provide this out of the box, or an easy driver store to install from. The unRaid Nvidia plugin is pretty popular, so I am sure I am not the only one wishing for this. Of course, I have no clue what kind of licensing this would require.
    1 point
  2. I thought this was big enough to post here to create awareness. I don't expect anyone here can do anything about it though as I assume it needs to be resolved at the network / kernel level. https://www.zdnet.com/article/new-vulnerability-lets-attackers-sniff-or-hijack-vpn-connections/?ftag=TRE-03-10aaa6b&bhid=10041925
    1 point
  3. Yes is definitely wrong, since Yes means move from cache to array. Prefer or Only is OK. Prefer is the only setting that can help you get it moved to cache. Prefer means move from array to cache, but you can't move open files, so more would need to be done to make that work. Specifically, while the Docker service and VM service are enabled you can't move the docker and libvirt images. Simplest is probably to just disable and delete those images, set the system share to cache-only, and recreate them so they will go on cache.
    1 point
  4. I would test them, then put them back on the shelf until you need the capacity. Fewer drives = fewer points of failure, and plenty of spares means if you do have a failure it's painless to replace. For a media server, it's much better to mostly fill the drives and add as needed vs. putting them all in and filling all of them gradually. Because of the way most people obtain and consume media, and the fact that unraid lets drives stay spun down if not being read, you are better off keeping older stuff together and adding newer stuff to a single drive at a time.
    1 point
  5. Start from scratch. Download the Unraid USB Creator. For version select Next -> 6.8.0 RC5. Click Customize and check Allow UEFI boot. Insert your new Unraid USB into the white port on your motherboard labeled BIOS. Boot your computer into the BIOS and reset all settings to factory optimized/default settings. Save and reboot back into the BIOS. Make ONLY these changes:
     Tweaker -> Advanced CPU Settings -> SVM Mode -> Enable
     Settings -> Miscellaneous -> IOMMU -> Enable
     Settings -> AMD CBS -> ACS Enable -> Enable
     Settings -> AMD CBS -> Enable AER Cap -> Enable
     Save and reboot. Assuming you don't have any other boot media attached, it will boot Unraid. From the web GUI Main menu, click the name of your boot device (flash). Under Syslinux Config -> Unraid OS, add "video=efifb:off" after "append initrd=/bzroot". The line should now read "append initrd=/bzroot video=efifb:off". Reboot Unraid. Create a new VM using the Win10 template. Use the latest Win10 install ISO and VirtIO driver ISO. Use the Q35 chipset, and the rest default/usual. Get Windows installed/working with VNC graphics. Download the appropriate vbios from https://www.techpowerup.com/vgabios/ and pass it along with your GPU video/sound. If you haven't seen it already, Spaceinvader One has a video specific to the 5700 XT. The long and short is you need to pass a vbios, and you need to use Q35 to get the drivers to install properly. I would try doing everything fresh and in the order I suggested.
    1 point
  6. *Should* be fine. As long as the parity1 slot is used, when the sync is done do a new config and check "parity is already valid" before starting the array. Alternatively, clone the disk with dd, then do a new config and also trust parity.
    1 point
  7. Ok wow, I need to read more of the documentation. Don't know why I thought the cache wasn't counted. I have been messing around with FreeNAS for a couple of months, and it's a great project and all, but for my needs, I think Unraid probably works better. I just need some simple NAS storage and the ability to run Plex and possibly a few other things via the NAS. I'll download the trial and give it a whirl once I move my hardware to my new case. Regardless, thanks for the answer!
    1 point
  8. No. All attached storage devices (except the flash drive used for booting Unraid) count (even if not being used by Unraid).
    1 point
  9. If you go into the settings for the share you can add disk2 to the list of excluded drives. This will stop new files for that share being written to that disk (although any existing ones will still show up for read purposes).
    1 point
  10. Only for metadata, i.e., you'll lose data if one of the devices fails.
    1 point
  11. Not related, but I noticed some FCP warnings in your syslog. You shouldn't ignore these unless you know exactly why they don't apply to your specific use (and these do apply):
     Dec 5 21:10:01 Tower root: Fix Common Problems Version 2019.11.22
     Dec 5 21:10:01 Tower root: Fix Common Problems: Warning: Share Disk2 test is set for both included (disk2) and excluded (disk1,disk3,disk4) disks
     Dec 5 21:10:01 Tower root: Fix Common Problems: Warning: Share Films is set for both included (disk2) and excluded (disk1) disks
     ....
     Dec 5 21:10:06 Tower root: Fix Common Problems: Warning: Dynamix SSD Trim Plugin Not installed
     Dec 5 21:10:06 Tower root: Fix Common Problems: Warning: Syslog mirrored to flash
     You shouldn't set both include and exclude, and there is never any good reason to do so. Include means ONLY and Exclude means EXCEPT, so using one or the other, not both, covers all possibilities. Remove one or the other. In fact, your setting for Films isn't even consistent. And I don't know why you would even have a share named Disk2. It doesn't appear in your user shares, and include/exclude settings don't make any sense for a disk share. Maybe you removed that share after it was logged. Hope you can clarify this one for me. Also, you don't want to mirror syslog to flash permanently, since you will wear out your flash drive. That should only be done temporarily as a troubleshooting measure. Better yet, set the syslog server to write to one of your user shares. Finally, your system share has files on the array instead of all on cache where they belong, so your dockers won't perform as well due to parity, and they will keep array disks spinning.
    1 point
  12. From your screen capture of the Main tab, nothing is being written to any device on the array. (I am assuming that it was taken when this activity was going on.) Now, 7.2Mbps is not very fast (for a data transfer). I would suspect that it is GUI activity. Did you have more than one GUI screen open? Were you 'watching' the preclear progress on those three drives?
    1 point
  13. @RyanBorh If you already have some dockers or VMs installed, disable Docker and VMs first in the settings, set both shares to Prefer, and run the mover so all existing data is moved to the cache. Check that the files were moved over to the cache and that the shares don't exist on any array drive; simply click the folder icon on the far right behind the drive in Unraid's Main tab. After that, change both shares to Only and start up Docker and the VMs again in Unraid's settings. Every new docker or VM you install will now always be stored on the cache drive and will never be moved to the array by the mover. For daily backups of the dockers you can use the "CA Backup / Restore Appdata" plugin from Community Applications. For backing up your VMs there are a couple of ways to do it. You can, for example, use the following script and configure a daily schedule for it with the "User Scripts" plugin. Or, in case you want to use the BTRFS snapshot feature, you can use a script to do this. That only works if the source and target drive are both using BTRFS.
    1 point
  14. You could either:
     - Remove WebUI\Password_SHA1 from your config temporarily so you can log in without a password, then set it again
     - Get another clean qBittorrent (desktop, or a new container), generate a password with it, and copy WebUI\Password_PBKDF2 from its config. Not sure if you need to remove WebUI\Password_SHA1.
     - Hash your password like qBittorrent does and insert it into your config
    1 point
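The third option above can be sketched in a few lines of Python. This is a minimal sketch, not qBittorrent's actual code: the parameters (HMAC-SHA512, 100,000 iterations, 16-byte salt, the "@ByteArray(salt:hash)" storage format) are my reading of how recent qBittorrent versions store WebUI\Password_PBKDF2 — verify them against your qBittorrent version's source before editing your config.

```python
import base64
import hashlib
import os

def qbittorrent_pbkdf2(password, salt=None):
    """Derive a PBKDF2 hash in the (assumed) qBittorrent config format."""
    if salt is None:
        salt = os.urandom(16)  # qBittorrent appears to use a 16-byte random salt
    # Assumed parameters: HMAC-SHA512, 100,000 iterations, 64-byte key
    derived = hashlib.pbkdf2_hmac("sha512", password.encode("utf-8"), salt, 100_000)
    salt_b64 = base64.b64encode(salt).decode("ascii")
    hash_b64 = base64.b64encode(derived).decode("ascii")
    # Stored in qBittorrent.conf as WebUI\Password_PBKDF2="@ByteArray(...)"
    return f"@ByteArray({salt_b64}:{hash_b64})"

print(qbittorrent_pbkdf2("adminadmin"))
```

If the parameters match your version, pasting the printed value into the WebUI\Password_PBKDF2 key (with the container stopped) should let you log in with the chosen password.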
  15. In short, you can't. In long, it is kind of possible to simulate it by having two separate sets of dockers and a set of bash scripts, but it's not true isolation.
    1 point
  16. That's good enough for me. I don't really mind when users don't say thank you, but I do appreciate when people at least respond to say whether it helped or worked.
    1 point
  17. From the official changelog: I'm not sure any future versions will include a "fix" for this.
    1 point
  18. Not sure how I obtained this as I thought I got it from community apps. However....got binhex and all is good....thx for the help....(I'll get this yet!) lol
    1 point
  19. Hey @binhex. Trying to install the AutoRemovePlus plugin for Deluge, but it looks like it won't enable in the new version. There is already a bug report on the plugin's git: https://github.com/springjools/deluge-autoremoveplus/issues/3 Any chance we can get this module included in the docker container? Thanks
    1 point
  20. Yes, until there is less than 2TB free; then it will start filling the other disks.
    1 point
  21. Cache. You can use this.
    1 point
  22. RC8 has reverted the kernel back to 4.19, so that might be the issue.
    1 point
  23. unRAID is hardware agnostic. The only hardware that matters for licensing purposes is the flash drive used to boot unRAID. Change hardware all you want. You can also extend the trial twice, for 15 days each.
    1 point
  24. Have you read and followed the application setup guide on the github or docker hub link in the first post of this thread?
    1 point
  25. With High Water allocation and with the largest disk being 4TB I would not expect the 2TB drive to start being used until all other drives are down to 2TB (or less) free.
    1 point
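The high-water behavior described in the two answers above can be sketched as a small function. This is my reading of the allocation method (the mark starts at half the largest data disk's size, writes go to the first disk whose free space is above the mark, and the mark halves when no disk qualifies), not Unraid's actual allocator code:

```python
def pick_disk(free_bytes, largest_disk_bytes):
    """Return (disk index, current high-water mark) for the next write,
    given per-disk free space in disk order. Sketch of high-water only."""
    mark = largest_disk_bytes // 2  # initial mark: half the largest disk
    while mark > 0:
        for i, free in enumerate(free_bytes):
            if free > mark:  # first disk above the mark wins
                return i, mark
        mark //= 2  # nobody qualifies: halve the mark and retry
    return None, 0  # array effectively full

TB = 10**12
# Example matching the post: a 4TB largest disk means a 2TB mark,
# so a 2TB drive (index 3 here) is skipped while other disks
# still have more than 2TB free.
print(pick_disk([3 * TB, 4 * TB, 3 * TB, 2 * TB], 4 * TB))
```

This also explains answer 20: the smaller drive only starts receiving files once every other disk has dropped to the current mark or below.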
  26. Cache file system is corrupt, best way forward is to backup, re-format, restore data.
    1 point
  27. That's normal for high-water. https://wiki.unraid.net/Un-Official_UnRAID_Manual#High_Water
    1 point
  28. So as I understand from reading your post, these are your requirements: IPMI, lots of PCIe slots, cores for VMs, and somewhat power-consumption conscious... correct? If that's the case, then you really have two paths that aren't going to cost an arm and a leg (so no Xeon Scalable and AMD Epyc) and give you the PCIe slots you want (so no Xeon E and AMD Ryzen). The two paths you are left with are LGA2011-3 Xeon (older, but has USB 3.0 and DDR4 ECC, and is becoming cheaper very quickly) and AMD Threadripper. Here are prospective choices for those paths.
     LGA2011-3 Xeon
     - CPU: If you want VMs or lots of Docker containers running, then you need to go with the E5-26XX v3 series and probably not the E5-16XX v3 series (they have fewer cores but higher frequency and single-thread rating). I'd stick with a single CPU since you're wanting to go lower in power consumption, and it will save you money in both motherboard and CPU cost. The E5-2640 v3 is a really nice lower-power CPU (it will idle much lower than its 90W TDP) for you. It has 8 cores/16 threads, 13,864 Passmark, and can be found used on eBay for around $75. You could also go up to the E5-2680 v3 (18,426 Passmark, 120W TDP, 12 cores/24 threads, $150) and the E5-2690 v3 (19,240 Passmark, 135W TDP, 12 cores/24 threads, $200) if you want more power. Not to mention that any motherboard that can take a v3 processor can also take a v4 one. The E5-26XX v4 is still really expensive, but these will decrease over time and give you a decent upgrade path. **Side note: Don't buy QS/ES samples or ones from China. There are plenty of good, regular, used ones around, so don't be drawn in by slightly lower costs.**
     - Motherboard: The X10SRL-F is pretty much the standard. It has a ton of PCIe lanes, USB 3.0, IPMI, fantastic support, and plenty of used ones on eBay. This seller: https://www.ebay.com/itm/SuperMicro-X10SRL-F-ATX-Single-Socket-LGA2011-v3-DDR4-Motherboard-X10/123818059106?hash=item1cd421a562:g:n3sAAOSwwrlckRA- regularly takes around $60 lower for one than what is listed. The I/O plate can be found on Supermicro's website for only a couple of dollars. If you want to buy it new, Provantage has them for around $260, which also comes with a warranty, which is nice. Just beware that if you pick this board, you'll need to find a heatsink that fits the LGA2011-3 Narrow socket. Noctua has a 3U one that would be perfect for your Supermicro chassis (I actually have the exact same chassis as you!).
     - RAM: DDR4 ECC RDIMMs are dropping considerably in price. If you look on the used market (eBay, ServeTheHome, or r/homelabsales), you can get some great deals. Hard to say what you'll pay because you didn't specify how much RAM you were planning on using. If you're going to do VMs and Docker containers, more is better. Also, take into account that this is a quad-channel CPU, so you'll want to use at least 4 sticks to really get your money's worth.
     AMD Threadripper
     - CPU: The 1900X is actually not a bad choice if you're planning on getting started with this setup but don't want to spend a bunch of money. It has 8 cores/16 threads, 16,108 Passmark, and can be found for about $150. You could also go with a 1920X ($250) if you want more cores. All Threadripper CPUs come with higher idle power consumption though (starting at 180W TDP with an idle around 100W). You do, however, get higher single-thread ratings and a bigger upgrade path.
     - Motherboard: If you want IPMI, this leaves you with exactly one choice: the ASRock Rack X399D8A-2T. This one is actually pretty hard to find and is damn expensive. It costs about $550 (I know! Crazy, right?!) and can be found on eBay and a couple of other sites. Newegg is out right now, but it'll probably come back relatively soon. Although it costs a ton, this motherboard has everything: PCIe lanes, IPMI, 10Gig, you name it.
     - RAM: DDR4 ECC UDIMMs are starting to go down in price, and Provantage has some nice 2666MHz Kingston RAM sticks for a decent price. You can also do non-ECC DIMMs and get better speeds if ECC isn't important to you. The only issue with Threadripper and RAM is that it really benefits from higher (overclocked) speeds to get the most out of the CPU. Since this is a NAS we're talking about, I don't condone overclocking RAM, but some people on here think it's worth it for the performance boost.
     Conclusion: You can build a damn nice LGA2011-3 build for a lot lower cost than a Threadripper build while only suffering a little in high-end performance. **Personally, I really want to do an LGA2011-3 build, but I just don't have the need for all those PCIe lanes, so I'm probably going to go AMD Ryzen instead.**
    1 point
  29. Master Version: Not sure what is going on, just updated, and now it crashes without much in the logs. Debug mask set to true or false comes up with the same logs. I will install and test dev, as I no longer see a master version, only 2 dev versions available.
     [binhex ASCII-art banner]
     https://hub.docker.com/u/binhex/
     2019-12-04 17:03:37.527209 [info] System information Linux 642928f489f4 5.3.6-Unraid #2 SMP Wed Oct 16 14:28:06 PDT 2019 x86_64 GNU/Linux
     2019-12-04 17:03:37.565416 [info] PUID defined as '99'
     2019-12-04 17:03:37.760181 [info] PGID defined as '100'
     2019-12-04 17:03:38.047748 [info] UMASK defined as '000'
     2019-12-04 17:03:38.082860 [info] Permissions already set for volume mappings
     2019-12-04 17:03:38.125948 [info] DELUGE_DAEMON_LOG_LEVEL not defined,(via -e DELUGE_DAEMON_LOG_LEVEL), defaulting to 'info'
     2019-12-04 17:03:38.160930 [info] DELUGE_WEB_LOG_LEVEL not defined,(via -e DELUGE_WEB_LOG_LEVEL), defaulting to 'info'
     2019-12-04 17:03:38.197591 [info] VPN_ENABLED defined as 'yes'
     2019-12-04 17:03:38.245688 [info] OpenVPN config file (ovpn extension) is located at /config/openvpn/**
    1 point
  30. First time the Adata SSD dropped offline:
     Dec 3 12:06:09 TOWER kernel: ata4.00: qc timeout (cmd 0xec)
     Dec 3 12:06:09 TOWER kernel: ata4.00: failed to IDENTIFY (I/O error, err_mask=0x4)
     Dec 3 12:06:09 TOWER kernel: ata4.00: revalidation failed (errno=-5)
     Dec 3 12:06:09 TOWER kernel: ata4.00: disabled
     Second time it dropped again:
     Dec 4 09:03:48 TOWER kernel: ata4.00: link online but device misclassified
     Dec 4 09:04:18 TOWER kernel: ata4.00: qc timeout (cmd 0xec)
     Dec 4 09:04:18 TOWER kernel: ata4.00: failed to IDENTIFY (I/O error, err_mask=0x4)
     Dec 4 09:04:18 TOWER kernel: ata4.00: revalidation failed (errno=-5)
     Dec 4 09:04:18 TOWER kernel: ata4.00: disabled
     Adata devices don't have a very good reputation, but it can also be a connection issue. I suggest you try again with the SSD in a different slot/cable/port; if it happens again, replace it with a new device, preferably from a different brand.
    1 point
  31. You should let limetech know about your issues and concerns. After all, it's really limetech who should be providing you with a solution, not a third party like us. We do what we can (chbmb and bassrock put a lot of work into it) but there is only so much an outsider can do when they only have partial info and have to reverse engineer everything. As an example, qnap worked directly with plex employees to make sure their OS included the necessary drivers and packages to make sure transcoding worked with plex on their devices. We are neither the OS provider (limetech) or the client (plex/emby). We're just folks trying to give back to the community.
    1 point
  32. Rather than 'backed up', think 'moved' from the cache drive to the array. So yes, you lose all data that is still on the cache drive. However, there is a utility that will create a backup of certain system information about Docker apps and VMs for this type of scenario. If you set up the mover to run once a day (it defaults to 3:00AM), you would only lose that day's files. Yes. Yes, but ask (or find the instructions for doing it); also see the answer to your next question. (I started with a 320GB HD as the cache drive in both of my servers and switched them both out for SSDs. It was simple enough that I don't even remember exactly what steps were involved.) Yes, see the following link. Be sure to read the 'NOTE:' under Cache Pool Operations about the required disk format if you start with a single drive! https://wiki.unraid.net/UnRAID_6/Storage_Management#Cache_pool_operations EDIT: Just took a quick look at the mover settings; you can also set the mover to run hourly should you feel that necessary.
    1 point
  33. Again as per the readme, it states `need to change both sides` aka the container port and host port to match and you haven't done this.
    1 point
  34. Has anyone revisited this since a while back? Refurb models of these have come down quite a bit in cost
    1 point
  35. FYI, this is a bit older, but make sure to go to the Config in Sab, go to "Special", and scroll down to the bottom. There's a section for a whitelist; make sure it reads "tower" or "Tower" depending on how you are trying to access it (or whatever hostname you use for your Sab box). The case sensitivity is key -- it matters.
    1 point
  36. How do I replace/upgrade my single cache device? (unRAID v6.2 and above only) This procedure assumes that there are at least some Docker- and/or VM-related files on the cache disk; some of these steps are unnecessary if there aren't.
     - Stop all running dockers/VMs
     - Settings -> VM Manager: disable VMs and click Apply
     - Settings -> Docker: disable Docker and click Apply
     - For v6.11.5 or older: click on Shares and change to "Yes" all cache shares with "Use cache disk:" set to "Only" or "Prefer"
     - For v6.12.0 or newer: click on all shares that are using the pool you want to empty and change them to have the pool as primary storage, the array as secondary storage, and the mover action set to move from pool to array
     - Check that there's enough free space on the array and invoke the mover by clicking "Move Now" on the Main page
     - When the mover finishes, check that your cache is empty (any files on the cache root will not be moved as they are not part of any share)
     - Stop the array, replace the cache device, assign it, start the array and format the new cache device (if needed), and check that it's using the filesystem you want
     - For v6.11.5 or older: click on Shares and change to "Prefer" all shares that you want moved back to cache
     - For v6.12.0 or newer: click on Shares and change the mover action to move from array to pool for all shares that you want moved back to cache
     - On the Main page click "Move Now"
     - When the mover finishes, re-enable Docker and VMs
    1 point