Leaderboard

Popular Content

Showing content with the highest reputation on 02/20/20 in all areas

  1. The main roadblock to adding Nvidia and AMD GPU drivers has been that Linux will grab those devices at boot - which is what you want if they're to be used by Docker containers, but it makes it a real PITA for those wanting to pass the cards through to VMs instead. Traditionally you had to find the vendor ID and stub the drivers via the syslinux kernel command line. To help with this we added the vfio-pci.cfg method to select devices by PCI ID, but there's still no slick user interface for easily selecting the devices to stub. Lately I've seen a plugin called "VFIO-PCI Config" - maybe the author would help us integrate this natively into Unraid OS 😎 This would open the door for us to add GPU drivers without adding a huge burden to VM users.....
    5 points
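For reference, the two stubbing methods mentioned above look roughly like this; the PCI IDs and addresses are placeholders (substitute your own from Tools > System Devices), and the vfio-pci.cfg format is assumed from the 6.8-era implementation:

```text
# Method 1: syslinux.cfg "append" line, binding by vendor:device ID.
# This affects ALL devices with that ID:
append vfio-pci.ids=10de:1b81,10de:10f0 initrd=/bzroot

# Method 2: config/vfio-pci.cfg on the flash drive, binding by PCI
# address, so identical cards can be split between host and VM:
BIND=0000:03:00.0 0000:03:00.1
```

Binding by address rather than ID is what makes the cfg method useful when you have two identical GPUs and only want to pass one through.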
  2. is hugepages related to that hugebitch...... I guess I missed that Linus video.
    2 points
  3. There are certainly more VM users than hardware-transcoding users, so I would strongly urge LT to make loading the Nvidia driver an optional setting (especially if it requires existing VM users to make config changes for a feature they didn't ask for, or worse, a feature that may even break things). Needless to say, expect a lot of upset users to inundate the forum with "6.x.0 broke my VM" posts. 😅 Last but not least, I can only wish you good luck with all the future "Nvidia's new driver has already been out for one whole freaking day, why doesn't Unraid have it yet?" demands. 🤣
    2 points
  4. It's an rclone option e.g. --log-file=/mnt/cache/mount_rclone/rclone/logs/upload_sa_06.log
    1 point
  5. Thank you for wanting to help. I tried it again from scratch and now it's working. My final fault was a stupid config error on my Fritz!Box... so now it's finally running.. yay ^^
    1 point
  6. Regarding <TemplateURL>: if having this entry populated with the URL causes problems for you, then going forward (as of Feb 21) maintainers can manually edit the appropriate XML in their repositories and either change the existing entry or add in <TemplateURL>false</TemplateURL>. If the appfeed sees false in that entry, it will not automatically populate it with the URL of your template. The result is that if a user deletes a variable, then on the next container update dockerMan will not re-add that variable to their template. This is a manual-only edit and cannot be done via dockerMan's GUI. If anyone starts bugging you about FCP complaining about a missing template URL, tell your users to simply ignore that error. Note that any of your users will also have to either reinstall the app from scratch or manually edit their user template to get rid of the existing entry. @Roxedus
    1 point
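For maintainers wondering where that entry sits, a sketch of a template XML; the surrounding element names follow the usual Unraid template layout, and only the <TemplateURL> line is what this post prescribes:

```xml
<?xml version="1.0"?>
<Container version="2">
  <Name>my-app</Name>
  <!-- opt out of the appfeed auto-populating this field -->
  <TemplateURL>false</TemplateURL>
  <!-- rest of the template unchanged -->
</Container>
```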
  7. Read the description for high water. Notice the free space remaining on all the drives. It's working exactly as described.
    1 point
  8. The Programming Board would be the applicable place, but I'll go ahead and let the feed ignore populating that entry, and will post somewhere in there with an entry to add to the template to do this.
    1 point
  9. Hugepages support is built into Unraid natively, but to enable it you must do two things: 1) Navigate to Main > Flash Device Settings (click the flash drive on that page). After "append", on the same line, add the following: hugepagesz=2M hugepages=X. Change X to the number of hugepages you want to allocate; each page is 2 MB, per the first parameter. So, for example, if you wanted about 16GB you'd use: hugepagesz=2M hugepages=8064. You will obviously need to increase this for the number of VMs you wish to use this with. 2) Edit your VM and switch to XML edit mode (the toggle switch in the top right of the Edit VM screen). Add <hugepages/> to the <memoryBacking> section, i.e. <memoryBacking><hugepages/></memoryBacking>. Save your VM, reboot your server and fire it up.
    1 point
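A quick sanity check on the arithmetic above: with 2 MB pages, the page count is simply the desired RAM divided by 2 MB. (The function name here is mine, not from the post; note the post's 8064 pages actually works out to 16128 MB, about 15.75 GiB, whereas an even 16 GiB needs 8192 pages.)

```python
PAGE_MB = 2  # page size fixed by hugepagesz=2M

def hugepages_for(gib: int) -> int:
    """Number of 2 MB hugepages needed to back `gib` GiB of guest RAM."""
    return gib * 1024 // PAGE_MB

print(hugepages_for(16))  # 8192 -> append hugepagesz=2M hugepages=8192
print(hugepages_for(8))   # 4096
```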
  10. No problem at all - just download the plopKExec ISO (https://www.plop.at/en/plopkexec/download.html) and add it to the ESXi VM, and you are done - the ESXi VM boots from the plopKExec ISO, which then automatically selects the Unraid USB stick and continues to boot.
    1 point
  11. Welcome! I had very similar problems. It was solved by building a duct with an additional fan (3D printed) over the controller. Fans: 35-36%, controller temperature 80-83°C. I haven't verified it yet, but it's possible that our problems are caused by bad thermal contact between the heat sink and the chip. https://www.thingiverse.com/thing:4070343 Good luck :)
    1 point
  12. Post the complete syslog when you lose it again, those call traces look network related.
    1 point
  13. Really like the new upload script - enough to completely switch to using Service Accounts now. Very easy to customize as well, e.g. I have dedicated log files for each of my 11 team drives, plus 100 SAs shared among them, so now my big backup job can complete overnight. Also found an unexpected extra use for mergerfs - pooling multiple external USB SSDs into a single pool for my offline backup. The "least used space" policy is serviceable for distributing data as evenly as possible across a mixed-size pool (I'd prefer most-free-percentage but I don't think that's an option).
    1 point
  14. I understand why Main is the default page after login, since after a fresh boot that's where you start the array. However, if the server is already booted, the Dashboard page is usually a lot more useful. Even right after boot, if the array is set to autostart, most people will have to click through to the Dashboard anyway, e.g. to start docker, VMs etc. So it would be more logical to have the GUI pick the Dashboard if the array has already started.
    1 point
  15. Ignore this - it's only a problem if you have multiple people all accessing the same container, who thus could possibly see your credentials.conf file and your username and password. This means the cipher defined in the client ovpn file does not match the cipher on the VPN provider's server; it's a warning and not a fatal error, so again it can be ignored. If it bothers you a lot, then get your VPN provider to fix their ovpn config file so that the cipher matches. See Q2 for how to confirm the VPN tunnel is working: https://github.com/binhex/documentation/blob/master/docker/faq/delugevpn.md Correct.
    1 point
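If you'd rather silence that cipher warning yourself than wait on the provider, the relevant line lives in the client .ovpn file; the value below is purely illustrative - it must match whatever cipher the provider's server is actually configured with:

```text
# client .ovpn - make this match the server side
cipher AES-256-CBC
```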
  16. Looks OK to me too. Yep, I would agree with this - it looks like a semi-permanent ban, so you need to speak to an IRC op for that channel. Looks to me like it's running now; just get that ban removed and you should be good. Nice work @Cat_Seeder
    1 point
  17. Use your existing 2x250GB to separate write-heavy and read-heavy data. Something like this: Set all shares with Cache = Yes to Cache = No. You don't have parity, so there's no need for a write cache (even if you did, I would turn on Turbo Write rather than waste SSD write cycles on a write cache). Set the 2x250GB up as a RAID1 cache pool. Mount the 500GB as UD and use the console to move data over (except any temp data). Then use the 500GB as a temp drive for write-heavy activities. I doubt you will see a difference in performance, but if you have a VM that needs a really fast vdisk, just put the vdisk on the 500GB. Alternatively, if let's say you want the fastest possible storage performance for your VM, you can set 1x250GB as cache and 1x250GB as a UD temp drive for write-heavy activities, then pass the 500GB through to your VM as a PCIe device, i.e. stub it and select it in the Other PCIe Devices section of the VM GUI.
    1 point
  18. Sorry to be harsh, but if you need that kind of hand-holding just to pick the hardware, you will be heavily troubled by the tech hoops you have to pass through (pun intended) to get your project done. You will also need to be a lot more specific about your use cases than "gaming/etc" - the main reason being that in a 3-gamer-1-PC build, one of the three gamers will have to compromise. Last but not least, unless there's someone on here who happens to have built a 3-gamer-1-PC, you will never get "exactly" in any answer. PCIe passthrough is notoriously difficult to predict and requires the on-site person (i.e. you) to have some tech skill to troubleshoot. And nobody wants to be blamed for recommending exact hardware that should work but doesn't because, for example, the user uses the wrong vbios.
    1 point
  19. IIRC there are known performance issues with that 3ware controller.
    1 point
  20. It is, in fact, stealing (return fraud - price arbitrage) and is prosecuted, for example, as petty theft/shoplifting in CA (https://www.shouselaw.com/is-return-fraud-a-crime-in-california), so please don't do it.
    1 point
  21. That's what I've heard as well, but I honestly don't notice a difference. It's entirely possible that CPU encoding results in higher video quality at a smaller file size, but I have hundreds of DVDs and blu-rays to rip, so I'd rather have faster encoding speeds if it means a negligible quality difference. I still need to do a side-by-side comparison on a standard DVD, but with the blu-rays I really can't see a difference. Someday, when I get my Threadripper 3990X processor after winning the lottery, I'll re-encode everything with CPU only.
    1 point
  22. Yep - for example, disk1 was ST12000NM0007-2A1101_ZJV1HE5B and now is ST12000NM0007-2A_ZJV1HE5B_35000c500b1c28799. You could do a new config, but flashing the LSI to the current firmware (it's using a very old one) should fix it.
    1 point
  23. I have just figured out an issue on the new version of Firefly III (5.0.5). If you are getting an error in your log that looks like this: local.ERROR: SQLSTATE[08006] [7] received invalid response to SSL negotiation: t (SQL: select "id", "name", "data" from "configuration" where "name" = is_demo_site and "configuration"."deleted_at" is null limit 1) {"exception":"[object] (Illuminate\\Database\\QueryException(code: 7): SQLSTATE[08006] [7] received invalid response to SSL negotiation: t (SQL: select \"id\", \"name\", \"data\" from \"configuration\" where \"name\" = is_demo_site and \"configuration\".\"deleted_at\" is null limit 1) at /var/www/firefly-iii/vendor/laravel/framework/src/Illuminate/Database/Connection.php:669) and the web UI shows an error, then you need to add a new variable to your container: 1. Name it whatever you want and set the key to DB_CONNECTION. 2. For the value, put mysql if you are using MariaDB or MySQL (or anything else that is based on / is another form of MySQL), OR put pgsql if you are using Postgres (or anything that is based on / is another form of Postgres). I hope this helps someone and gets put into the docker to avoid problems.
    1 point
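A note on the PostgreSQL value the poster was unsure about: Firefly III is a Laravel application, and Laravel's built-in database connection names are mysql, pgsql, sqlite and sqlsrv, so the Postgres value should be pgsql (not "postgre"). As container environment variables:

```text
# pick ONE, matching your database container:
DB_CONNECTION=mysql   # MariaDB / MySQL / MySQL-compatible
DB_CONNECTION=pgsql   # PostgreSQL / Postgres-compatible
```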
  24. Please make 2FA an optional feature. My server is not exposed to the Internet so there's really no need for extra security. It would be a massive pain in the backside having to grab my phone just to check if a docker has crashed.
    1 point
  25. FYI you can run both the web server and the consumer in a single docker container by using a bash script:
      #!/bin/bash
      # start the document consumer in the background
      /sbin/docker-entrypoint.sh document_consumer &
      # start the web server in the background
      /sbin/docker-entrypoint.sh runserver 0.0.0.0:8000 --insecure --noreload &
      # keep the container alive until both background jobs exit
      wait
      Save this file into a volume that's mounted in the container - I just put it in the appdata directory. Then turn on advanced view and override the entry point, e.g. --entrypoint /usr/src/paperless/data/entry.sh Clear out the 'post arguments', since you're doing that in the bash script now.
    1 point
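Expressed as a plain docker run command, the setup described above is roughly the following sketch; the image name and host path are placeholders for whatever your template actually uses:

```shell
docker run -d \
  -p 8000:8000 \
  -v /mnt/user/appdata/paperless:/usr/src/paperless/data \
  --entrypoint /usr/src/paperless/data/entry.sh \
  paperless-image-name
```

The key parts are the volume mount (so the script is visible inside the container) and the --entrypoint override pointing at it.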
  26. SMART for those disabled disks looks OK. Since you rebooted I can't tell from syslog what happened. Do you have Syslog Server setup? https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=781601 You have more disks than I care to examine. Do any of your disks have SMART warnings on the Dashboard?
    1 point
  27. How do I use the Syslog Server? Beginning with release 6.7.0, a syslog server functionality has been added to Unraid. This can be a very powerful diagnostic tool when you are confronted with a situation where the regular tools cannot or do not capture information about a problem because the server has become non-responsive, has rebooted, or has spontaneously powered down. However, getting it set up has been confusing to many, so let's see if we can clarify setting it up for use. Begin by going to Settings >>> Syslog Server. This is the basic Syslog Server page: You can click on the 'Help' icon on the toolbar and get more information for all three of these options.
The first one to consider is Mirror syslog to flash. This is the simplest to set up: select 'Yes' from the dropdown box, click on the 'Apply' button, and the syslog will be mirrored to the logs folder of the flash drive. There is one principal disadvantage to this method: if the condition you are trying to troubleshoot takes days to weeks to occur, it can do a lot of writes to the flash drive, and some folks are hesitant to use the flash drive in this manner as it may shorten its life. This is how the setup screen looks when the Syslog Server is set up to mirror to the flash drive.
The second option is to use an external syslog server. This can be another Unraid server, or virtually any other computer. You can find the necessary software by googling for syslog server <operating system>. After you have set up the computer/server, fill in its name or IP address. (I prefer the IP address, as there is never any confusion about what it is.) Then click on the 'Apply' button and your syslog will be mirrored to the other computer. The principal disadvantage of this setup is that the other computer has to be left on continuously until the problem occurs.
The third option uses a bit of trickery, in that we use the Unraid server with the problem as the Local syslog server. Let's begin by setting up the Local syslog server. After changing the Local syslog server dropdown to 'Enabled', the screen will look like this. Note that we have a new menu option: Local syslog folder. This will be a share on your server, but choose it with care: ideally it will be a 'cache only' or 'cache preferred' share, which will minimize the spinning up of disks due to the continuous writing of new lines to the syslog. A cache SSD would be the ideal choice here. (The folder that you see above is a 'cache preferred' share. The syslog will be in the root of that folder/share.) If you click the 'Apply' button at this point, you will have this server set up as a remote syslog server; it can now capture syslogs from several computers if the need should arise. Now we add the IP address of this server as the Remote syslog server (remember the mention of trickery - basically, you send data out of the server and it comes right back in). This is what it looks like now: as soon as you click Apply, your syslog will start logging to a file named (in this case) syslog-192.168.1.242.log in the root of the selected folder (in this case, Folder_Tree). One very neat feature is that new entries are appended onto this file every time a line is added to the syslog. This means that if your server reboots after a week of collecting the syslog, you will have everything from before and after the reboot in one file! Thanks @bonienl for both writing this utility and the guidance in putting this together.
    1 point
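Once the remote/local syslog server above is configured, a quick way to confirm entries are arriving is util-linux logger; the IP and share name below are the ones from this example setup, so substitute your own:

```shell
# send a test line over UDP (-d) to the syslog server on port 514
logger -n 192.168.1.242 -P 514 -d "syslog server test from $(hostname)"
# then watch it land in the log file on the receiving server:
tail -f /mnt/user/Folder_Tree/syslog-192.168.1.242.log
```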
  28. I think I've sorted this - I'd not added my WHS SMB shares as a "remote SMB"; I'd simply added the WHS SMB share in Krusader using "New net connection". Having added the remote share using "Add Remote SMB/NFS Share" and mounting it, I'm seeing 100MB/s+ transfer speeds. Thanks to all for your support, and apologies for my noobishness :)
    1 point
  29. On the flash drive, edit config/domain.cfg. Change SERVICE="enable" to SERVICE="disable" and reboot the server.
    1 point
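If you prefer the console to a text editor, the edit above can also be done with sed. On Unraid the flash drive is mounted at /boot, so the real file is /boot/config/domain.cfg; the sketch below runs against a stand-in copy so you can see the effect safely first:

```shell
cfg=domain.cfg                       # stand-in; the real file is /boot/config/domain.cfg
printf 'SERVICE="enable"\n' > "$cfg" # simulate the current setting
sed -i 's/^SERVICE="enable"/SERVICE="disable"/' "$cfg"
cat "$cfg"                           # SERVICE="disable"
```

Point cfg at /boot/config/domain.cfg (and drop the printf line) to make the real change, then reboot.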