Leaderboard

Popular Content

Showing content with the highest reputation on 10/23/20 in all areas

  1. Unraid is a cut-down version of Slackware, specifically stripped of everything that's not needed, because it loads into and runs entirely in RAM. We don't have the luxury of just slapping every single driver and support package into it; you would end up with a minimum 16GB or 32GB RAM spec. Before VM and docker container support was added, you could have all the NAS functionality with 1GB of RAM. Now, 4GB is the practical bare minimum for NAS, even if you don't use VMs and containers, and 8GB is still cramped. Adding support for a single adapter that works well in Slackware shouldn't be an issue, provided the manufacturer keeps up with Linux kernel development. That way we can tell people: if you want wifi, here is a list of cards using that driver that are supported. It's the blanket statement of "let's support wifi" that doesn't work. BTW, even if we do get that golden-ticket wifi chip support from the manufacturer and Unraid supports it perfectly, the forums will still be bombarded with performance issues because either their router sucks, or their machine isn't in the zone of decent coverage, or their neighbours cause interference at certain times of day, etc. Bottom line, wifi on a server just isn't ready for primetime yet. Desktop daily drivers, fine. 24/7/365 servers with constant activity from friends and family, no. It's much easier support-wise to require wired. If the application truly has to have wireless, there are plenty of ways to bridge a wireless signal and convert it to wired. A pair of commercial wifi access points with a dedicated backhaul channel works fine; that's what I use in a couple of locations.
    2 points
  2. 3/1/20 UPDATE: TO MIGRATE FROM UNIONFS TO MERGERFS READ THIS POST. New users continue reading.
13/3/20 Update: For a clean version of the 'How To' please use the github site https://github.com/BinsonBuzz/unraid_rclone_mount
17/11/21 Update: Poll to see how much people are storing. I've added a Paypal.me upon request if anyone wants to buy me a beer.

There's been a number of scattered discussions around the forum on how to use rclone to mount cloud media and play it locally via Plex, Emby etc. After discussions with @Kaizac @slimshizn and a few others, we thought it'd be useful to start a thread where we can all share and improve our setups.

Why do this? Well, if set up correctly, Plex can play cloud files regardless of size - e.g. I play 4K media with no issues, with start times of under 5 seconds, i.e. comparable to spinning up a local disk. With access to unlimited cloud space available for the cost of a domain name and around $5-10/pm, this becomes a very interesting proposition as it reduces local storage requirements, noise etc. At the moment I have about 80% of my library in the cloud and I struggle to tell if a file is local or in the cloud when playback starts. To kick the thread off, I'll share my current setup using gdrive. I'll try and keep this initial thread updated. Update: I've moved my scripts to github to make it easier to keep them updated https://github.com/BinsonBuzz/unraid_rclone_mount

Changelog
6/11/18 - Initial setup (updated to include rclone rc refresh)
7/11/18 - updated mount script to fix rc issues
10/11/18 - added creation of extra user directories (/mnt/user/appdata/other/rclone & /mnt/user/rclone_upload/google_vfs) to mount script. Also fixed typo for filepath
11/11/18 - latest scripts added to https://github.com/BinsonBuzz/unraid_rclone_mount for easier editing
3/1/20 - switched from unionfs to mergerfs
4/2/20 - updated the scripts to make them easier to use and control. Thanks to @senpaibox for the inspiration

My Setup
Plugins needed:
Rclone - installs rclone and allows the creation of remotes and mounts. New scripts require V1.5.1+
User Scripts - controls how mounts get created

How It Works
Rclone is used to access files on your google drive and to mount them in a folder on your server, e.g. mount a gdrive remote called gdrive_vfs: at /mnt/user/mount_rclone/gdrive_vfs
Mergerfs is used to merge files from your rclone mount (/mnt/user/mount_rclone/gdrive_vfs) with local files that exist on your server and haven't been uploaded yet (e.g. /mnt/user/local/gdrive_vfs) in a new mount /mnt/user/mount_unionfs/gdrive_vfs
This mergerfs mount allows files to be played by dockers such as Plex, or added to by dockers like radarr etc, without the dockers even being aware that some files are local and some are remote. It just doesn't matter
The use of an rclone vfs remote allows fast playback, with files streaming within seconds
New files added to the mergerfs share are actually written to the local share, where they will stay until the upload script processes them
An upload script is used to upload files in the background from the local folder to the remote. This activity is masked by mergerfs, i.e. to plex, radarr etc the files haven't 'moved'

Getting Started
Install the rclone plugin and via command line run rclone config and create 2 remotes:
gdrive: - a drive remote that connects to your gdrive account.
Recommend creating your own client_id.
gdrive_media_vfs: - a crypt remote that is mounted locally and decrypts the encrypted files uploaded to gdrive:
It is advisable to create your own client_id to avoid API bans.

Mount Script - see https://github.com/BinsonBuzz/unraid_rclone_mount for latest script
Create a new script using the user scripts plugin and paste in the rclone_mount script
Edit the config lines at the start of the script to choose your remote name, paths etc
Choose a suitable cron job. I run this script on a 10 min (*/10 * * * *) schedule so that it automatically remounts if there's a problem.
The script:
Checks if an instance is already running, and remounts automatically (if a cron job is set) if the mount drops
Mounts your rclone gdrive remote
Installs mergerfs and creates a mergerfs mount
Starts dockers that need the mergerfs mount e.g. plex, radarr

Upload Script - see https://github.com/BinsonBuzz/unraid_rclone_mount for latest script
Create a new script using the user scripts plugin and paste in the rclone_upload script
Edit the config lines at the start of the script to choose your remote name, paths etc - USE THE SAME PATHS
Choose a suitable cron job, e.g. hourly
Features:
Checks if rclone is installed correctly
Sets bwlimits. There is a cap on uploads by google of 750GB/day. I have added bandwidth scheduling to the script so you can e.g. set an overnight job to upload the daily quota at 30MB/s, have it trickle up over the day at a constant 10MB/s, or set variable speeds over the day
The script now stops once the 750GB/day limit is hit (rclone 1.5.1+ required), so there is more flexibility over upload strategies
I've also added --min-age 10mins to stop any premature uploads, and exclusions to stop partial files etc getting uploaded

Cleanup Script - see https://github.com/BinsonBuzz/unraid_rclone_mount for latest script
Create a new script using the user scripts plugin and set it to run at array start (recommended) or array stop

In the next post I'll explain my rclone mount command in a bit more detail, to hopefully get the discussion going!
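In the meantime, for orientation, here is a minimal sketch of the kind of vfs mount the mount script wraps. The remote name and mount path are the ones assumed in this guide's Getting Started section, and the flags are standard rclone options - treat the github script as the authoritative version, not this:

    # hypothetical minimal mount, using the remote/paths from this guide
    rclone mount \
      --allow-other \
      --dir-cache-time 720h \
      --poll-interval 15s \
      --vfs-cache-mode writes \
      gdrive_media_vfs: /mnt/user/mount_rclone/gdrive_vfs &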
    1 point
  3. Hi guys, I got inspired by this post from @BRiT and created a bash script that allows you to set media to read-only, to prevent ransomware attacks and accidental or malicious deletion of files. The script can be executed once to make all existing files read-only, or can be run using cron to catch all newly created files as well. The script has an in-built help system with example commands; any questions, let me know below. Download by issuing the following command from the unRAID 'Terminal':
curl -o '/tmp/no_ransom.sh' -L 'https://raw.githubusercontent.com/binhex/scripts/master/shell/unraid/system/no_ransom/no_ransom.sh' && chmod +x '/tmp/no_ransom.sh'
Then to view the help simply issue:
/tmp/no_ransom.sh
Disclaimer: Whilst I have done extensive tests and runs on my own system with no ill effects, I do NOT recommend you run this script across all of your media until you are fully satisfied that it is working as intended (try a small test share). I am in no way responsible for any data loss due to the use of this script.
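If you do want to schedule it, e.g. via the User Scripts plugin's custom cron option, a crontab line along these lines would work. This is only a sketch: the <options> placeholder stands for whatever arguments the script's built-in help tells you to use, and since /tmp is wiped at reboot on unRAID you may want to keep a copy of the script somewhere persistent like /boot first:

    # hypothetical crontab entry: nightly run at 03:00, logging output
    # replace <options> with the arguments shown by the script's help
    0 3 * * * /boot/no_ransom.sh <options> >> /var/log/no_ransom.log 2>&1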
    1 point
  4. I've been using Unraid for a while now and have collected some experience on how to boost SMB transfer speeds: Donate? 🤗

1.) Choose the right CPU
The most important part is to understand that SMB is single-threaded. This means SMB uses only one CPU core to transfer a file. This is valid for the server and the client. Usually this is not a problem, as SMB does not fully utilize a CPU core (except on really low-powered CPUs). But because of its ability to split shares across multiple disks, Unraid adds an additional process called SHFS, and its load rises proportionally to the transfer speed, which could overload your CPU core. So the most important part is to choose the right CPU. At the moment I'm using an i3-8100, which has 4 cores and 2257 single-thread passmark points. And since I have this single-thread power I'm able to use the full bandwidth of my 10G network adapter, which was not possible with my previous Intel Atom C3758 (857 points), although both have comparable total performance. I was not even able to reach 1G speeds while a parallel Windows Backup was running (see the next section to bypass this limitation). Now I'm able to transfer thousands of small files and in parallel transfer a huge file at 250 MB/s. With this experience I suggest a CPU that has around 1400 single-thread passmark points to fully utilize a 1G ethernet port. As an example: the smallest CPU I would suggest for Unraid is an Intel Pentium Silver J5040. P.S. Passmark has a list sorted by single-thread performance for desktop CPUs and server CPUs.

2.) Bypass the single-thread limitation
The single-thread limitation of SMB and SHFS can be bypassed by opening multiple connections to your server. This means connecting to "different" servers. The easiest way to accomplish that is to use the IP address of your server as a "second" server while using the same user login:
\\tower\sharename -> best option for user access through file explorer as it is automatically displayed
\\10.0.0.2\sharename -> best option for backup software; you could map it as a network drive (see the drive-mapping sketch just before section 4)
If you need more connections, you can add multiple entries to your windows hosts file (Win+R and execute "notepad c:\windows\system32\drivers\etc\hosts"):
10.0.0.2 tower2
10.0.0.2 tower3
Results: If you now download a file from your Unraid server through \\10.0.0.2 while a backup is running on \\tower, it will reach the maximum speed, while a download from \\tower is massively throttled.

3.) Bypass Unraid's SHFS process
If you enable access directly to the cache disk and upload a file to //tower/cache, this will bypass the SHFS process. Beware: Do not move/copy files between the cache disk and shares, as this could cause data loss! The eligible user account will be able to see all cached files, even those from other users.
Temporary solution, or "For Admins only": as admin, or for a short test, you could enable "disk shares" under Settings -> Global Share Settings. By that, all users can access all array and cache disks as SMB shares. As you don't want that, your first step is to click on each disk in the WebGUI > Shares and forbid user access, except for the cache disk, which gets read/write access only for your "admin" account. Beware: Do not create folders in the root of the cache disk, as this will create new SMB shares.
Safer permanent solution: use this explanation.
Results: In this thread you can see the huge difference between copying to a cached share and copying directly to the cache disk.
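As promised in section 2, here is how you could make those "extra servers" convenient by mapping them as network drives from a Windows command prompt. A sketch only - the drive letters, hostnames and share name are just the examples used above:

    rem map the same share twice via two host aliases (example names from section 2)
    net use Y: \\tower\sharename /persistent:yes
    net use Z: \\tower2\sharename /persistent:yes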
4.) Enable SMB Multichannel + RSS
SMB Multichannel is a feature of SMB3 (available since Windows 8) that allows splitting file transfers across multiple NICs (Multichannel) and creating multiple TCP connections depending on the number of CPU cores (RSS). This will raise your throughput depending on your number of NICs, NIC bandwidth, CPU and the settings used.

This feature is experimental: SMB Multichannel has been considered experimental since its release with Samba 4.4. The main bug behind this status is resolved in Samba 4.13, and the Samba developers plan to resolve all bugs with 4.14. Unraid 6.8.3 contains Samba 4.11. This means you use Multichannel at your own risk!

Multichannel for multiple NICs
Let's say your mainboard has four 1G NICs and your client has a 2.5G NIC. Without Multichannel the transfer speed is limited to 1G (117.5 MByte/s). But if you enable Multichannel, it will split the file transfer across the four 1G NICs, boosting your transfer speed to 2.5G (294 MByte/s). Additionally it uses multiple CPU cores, which is useful to avoid overloading smaller CPUs. To enable Multichannel you need to open the Unraid Webterminal and enter the following (the file is usually empty, so don't be surprised):
nano /boot/config/smb-extra.conf
And add the following to it:
server multi channel support = yes
Press "Ctrl+X", confirm with "Y" and "Enter" to save the file. Then restart the Samba service with this command:
samba restart
You may need to reboot your Windows client as well, but then it's enabled and should work.

Multichannel + RSS for single and multiple NICs
But what happens if your server has only one NIC? Then Multichannel has nothing to split across, but it has a sub-feature called RSS which is able to split file transfers across multiple TCP connections on a single NIC. Of course this feature works with multiple NICs, too. But it requires RSS capability on both sides. You need to check your server's NIC by opening the Unraid Webterminal and entering this command (could become obsolete with Samba 4.13, as they built in RSS autodetection):
egrep 'CPU|eth*' /proc/interrupts
It must return multiple lines (one per CPU core) like this:
egrep 'CPU|eth0' /proc/interrupts
CPU0 CPU1 CPU2 CPU3
129: 29144060 0 0 0 IR-PCI-MSI 524288-edge eth0
131: 0 25511547 0 0 IR-PCI-MSI 524289-edge eth0
132: 0 0 40776464 0 IR-PCI-MSI 524290-edge eth0
134: 0 0 0 17121614 IR-PCI-MSI 524291-edge eth0
Now you can check your Windows 8 / Windows 10 client by opening Powershell as Admin and entering this command:
Get-SmbClientNetworkInterface
It must return "True" for "RSS Capable":
Interface Index RSS Capable RDMA Capable Speed IpAddresses Friendly Name
--------------- ----------- ------------ ----- ----------- -------------
11 True False 10 Gbps {10.0.0.10} Ethernet 3
Now, after you are sure that RSS is supported on your server, you can enable Multichannel + RSS by opening the Unraid Webterminal and entering the following (the file is usually empty, so don't be surprised):
nano /boot/config/smb-extra.conf
Add the following, changing 10.10.10.10 to your Unraid server's IP, and speed to "10000000000" for a 10G adapter or "1000000000" for a 1G adapter:
server multi channel support = yes
interfaces = "10.10.10.10;capability=RSS,speed=10000000000"
If you are using multiple NICs the syntax looks like this (add RSS capability only for supporting NICs!):
interfaces = "10.10.10.10;capability=RSS,speed=10000000000" "10.10.10.11;capability=RSS,speed=10000000000"
Press "Ctrl+X" and confirm with "Y" and "Enter" to save the file.
Now restart the SMB service:
samba restart

Does it work? After rebooting your Windows client (this seems to be a must), download a file from your server (so a connection is established), and then you can check whether Multichannel + RSS works by opening Windows Powershell as Admin and entering this command:
Get-SmbMultichannelConnection -IncludeNotSelected
It must return a line similar to this (a returned line = Multichannel works), and if you want to benefit from RSS then "Client RSS Capable" must be "True":
Server Name Selected Client IP Server IP Client Interface Index Server Interface Index Client RSS Capable Client RDMA Capable
----------- -------- --------- --------- ---------------------- ---------------------- ------------------ -------------------
tower True 10.10.10.100 10.10.10.10 11 13 True False
In Linux you can verify RSS through this command, which returns one open TCP connection per CPU core (in this case we see 4 connections, as my client has only 4 CPU cores, although my server has 6):
netstat -tnp | grep smb
tcp 0 0 192.168.178.8:445 192.168.178.88:55975 ESTABLISHED 3195/smbd
tcp 0 0 192.168.178.8:445 192.168.178.88:55977 ESTABLISHED 3195/smbd
tcp 0 0 192.168.178.8:445 192.168.178.88:55976 ESTABLISHED 3195/smbd
tcp 0 0 192.168.178.8:445 192.168.178.88:55974 ESTABLISHED 3195/smbd
Note: Sadly Samba does not create multiple smbd processes, which means we still need a CPU with high single-thread performance to benefit from RSS. This is even mentioned in the presentation. If you are interested in test results, look here.

5.) smb.conf settings tuning
I did massive testing with a huge number of smb.conf settings provided by the following websites, and really NOTHING resulted in a noticeable speed gain:
https://wiki.samba.org/index.php/Performance_Tuning
https://wiki.samba.org/index.php/Linux_Performance
https://wiki.samba.org/index.php/Server-Side_Copy
https://www.samba.org/~ab/output/htmldocs/Samba3-HOWTO/speed.html
https://www.samba.org/samba/docs/current/man-html/smb.conf.5.html
https://lists.samba.org/archive/samba-technical/attachments/20140519/642160aa/attachment.pdf
https://www.samba.org/samba/docs/Samba-HOWTO-Collection.pdf
https://www.samba.org/samba/docs/current/man-html/ (search for "vfs")
https://lists.samba.org/archive/samba/2016-September/202697.html
https://codeinsecurity.wordpress.com/2020/05/18/setting-up-smb-multi-channel-between-freenas-or-any-bsd-linux-and-windows-for-20gbps-transfers/
https://www.snia.org/sites/default/files/SDC/2019/presentations/SMB/Metzmacher_Stefan_Samba_Async_VFS_Future.pdf
https://www.heise.de/newsticker/meldung/Samba-4-12-beschleunigt-Verschluesselung-und-Datentransfer-4677717.html
I would say the recent Samba versions are already optimized by default.

6.) Choose a proper SSD for your cache
You could use Unraid without an SSD, but if you want fast SMB transfers an SSD is absolutely required. Otherwise you are limited by slow parity writes and/or your slow HDD. But many SSDs on the market are not suitable for use as an Unraid SSD cache.
DRAM: Many cheap models do not have a DRAM cache. This small buffer is used to collect very small files or random writes before they are finally written to the SSD, and/or is used as a high-speed area for the file mapping table. In short: you need a DRAM cache in your SSD. No exception.
SLC Cache: While DRAM is only absent in cheap SSDs, an SLC cache can be missing across different price ranges. Some cheap models use a small SLC cache to "fake" their technical data.
Some mid-range models use a big SLC cache to raise durability and speed when installed in a client PC. And some high-end models do not have an SLC cache at all, as their flash cells are fast enough without it. Ultimately you are not interested in the SLC cache; you are only interested in continuous write speeds (see "Verify continuous writing speed").

Determine the required writing speed
Before you are able to select the right SSD model, you need to determine your minimum required transfer speed. This should be simple: how many ethernet ports do you want to use, or do you plan to install a faster network adapter? Let's say you have two 5G ports. With SMB Multichannel it's possible to use them in sum, and as you plan to install a 10G card in your client, you could use 10G in total. Now we can calculate: 10 x 117.5 MByte/s (real throughput per 1G of ethernet) = 1175 MByte/s, and by that we have two options:
buy one M.2 NVMe (assuming your motherboard has such a slot) with a minimum writing speed of 1175 MByte/s
buy two or more SATA SSDs and use them in a RAID0, each with a minimum writing speed of 550 MByte/s

Verify continuous writing speed of the SSD
As an existing SLC cache hides the real transfer speed, you need to invest some time to check whether your desired SSD model has an SLC cache and how much the SSD throttles after it's full. A solution could be to search for "review slc cache" in combination with the model name. Using image search can be helpful as well (maybe you'll see a graph with a falling line). If you do not find anything, use Youtube. Many people out there test their new SSD by simply copying a huge amount of files onto it. Note: CrystalDiskMark, AS SSD, etc. benchmarks are useless, as they only test a really small amount of data (which fits into the fast cache).

Durability
You could look at the "TBW" value of the SSD, but in practice you won't be able to kill the SSD inside the warranty period, as long as the very first filling of your Unraid server is done without the SSD cache. As an example, a 1TB Samsung 970 EVO has a TBW of 600, and if your server has a total size of 100TB you would waste 100 TBW on your first fill for nothing. If you plan to use Plex, think about using RAM as your transcoding storage, which saves a huge amount of writes to your SSD. Conclusion: optimize your writes instead of buying an expensive SSD.

NAS SSD
Do not buy "special" NAS SSDs. They do not offer any benefits compared to the high-end consumer models, but cost more.

7.) More RAM
More RAM means more caching, and as RAM is even faster than the fastest SSDs, this adds an additional boost to your SMB transfers. I recommend installing two identical RAM modules (or more, depending on the number of slots) to benefit from "Dual Channel" speeds. RAM frequency is not as important as RAM size.
Read cache for downloads: If you download a file twice, the second download does not read the file from your disk; instead it uses your RAM only. The same happens if you're loading the covers of your MP3s or movies, or if Windows is generating thumbnails of your photo collection. More RAM means more files in your cache. The read cache uses by default 100% of your free RAM.
Write cache for uploads: Linux uses by default 20% of your free RAM to cache writes before they are written to the disk.
You can use the Tips and Tweaks plugin to change this value, or add this to your Go file (with the Config Editor plugin):
sysctl vm.dirty_ratio=20
But before changing this value, you need to be sure you understand the consequences:
Never use your NAS without a UPS if you use write caching, as this could cause huge data loss!
The bigger the write cache, the smaller the read cache (so using 100% of your RAM as write cache is not a good idea!)
If you upload files to your server, they are written to your disk 30 seconds later (vm.dirty_expire_centisecs)
Without SSD cache: if your upload size is generally bigger than your write cache, it starts to clean up the cache and in parallel write the transfer to your HDD(s), which could result in slow SMB transfers. Either raise your cache size so it never fills up, or consider disabling the write cache entirely.
With SSD cache: SSDs love parallel transfers (read #6 of this guide), so a huge write cache or even a full cache is not a problem.

But which dirty_ratio value should you set? This is something you need to determine yourself, as it's completely individual. At first you need to think about the highest RAM usage that is possible - active VMs, ramdisks, Docker containers, etc. By that you get the smallest amount of free RAM on your server:
Total RAM size - Reserved RAM through VMs - Used RAM through Docker containers - Ramdisks = Free RAM
Now the harder part: determine how much RAM is needed for your read cache. Do not forget that VMs, Docker containers, processes etc. load files from disks, and these are all cached as well. I thought about this and came up with this command that counts hot files:
find /mnt/cache -type f -amin -1440 ! -size +1G -exec du -bc {} + | grep total$ | cut -f1 | awk '{ total += $1 }; END { print total }' | numfmt --to=iec-i --suffix=B
It counts the size of all files on your SSD cache that were accessed in the last 24 hours (-amin takes minutes, so 1440)
The maximum file size is 1GiB, to exclude VM images, docker containers, etc.
This works only if you (hopefully) use your cache for your hot shares like appdata, system, etc.
Of course you could repeat this command on several days to check how it fluctuates.
This command must be executed after the mover has finished its work.
This command isn't perfect, as it does not count hot files inside a VM image.
Now we can calculate (see the sketch at the end of this post for a worked example):
100 / Total RAM x (Free RAM - Command Result) = vm.dirty_ratio
If your calculated vm.dirty_ratio is lower than 5% (or even negative), you should lower it to 5 and buy more RAM.
Between 5% and 20%: set it accordingly, but you should consider buying more RAM.
Between 20% and 90%: set it accordingly.
If your calculated vm.dirty_ratio is higher than 90%, you are probably not using your SSD cache for hot shares (as you should), or your RAM is huge as hell (congratulations ^^). I suggest not setting a value higher than 90.
Of course you need to recalculate this value if you add more VMs or Docker containers.

8.) Disable haveged
Unraid does not trust the randomness of Linux and uses haveged instead. By that, all encryption processes on the server use haveged, which produces extra load. If you don't need it, disable it through your Go file (CA Config Editor) as follows:
# -------------------------------------------------
# disable haveged as we trust /dev/random
# https://forums.unraid.net/topic/79616-haveged-daemon/?tab=comments#comment-903452
# -------------------------------------------------
/etc/rc.d/rc.haveged stop
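And the promised worked example of the dirty_ratio formula - a minimal sketch with assumed example values, not measurements from any real server:

    # assumptions: 32 GiB installed, 24 GiB free after VMs/containers/ramdisks,
    # and the find command above reported 4 GiB of hot files
    TOTAL_RAM=32
    FREE_RAM=24
    HOT_FILES=4
    echo "suggested vm.dirty_ratio: $(( 100 * (FREE_RAM - HOT_FILES) / TOTAL_RAM ))"
    # prints: suggested vm.dirty_ratio: 62 -> inside the 20-90% band, so usable as-is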
    1 point
  5. In the French version there is a translation error on the shares page. When there is no "protection" because a disk rebuild is in progress, an orange triangle is shown. When you hover the mouse over it, a message is displayed: in English, "some or all files unprotected"; the current French translation is "certains ou tous les fichiers sont cryptés" ("some or all files are encrypted"). For a moment it gave me a real scare 😁
    1 point
  6. The Mybook uses drive encryption, so you can't put any old drive in it. I have multiple WD Elements and WD Mybook boards; the Elements can use any disks, but the Mybook boards only seem compatible with the drives they arrived with. It's a hardware limitation. You can try this
    1 point
  7. In the lower right-hand corner of the GUI you will see a link named "Manual". Clicking on it and following the obvious choices will get you to this: https://wiki.unraid.net/UnRAID_6/Storage_Management#Replacing_disks
    1 point
  8. It seems so, yes. Maybe OP needs to update his language files to the latest?
    1 point
  9. It was OK in Shares.txt, or it may be in there multiple times (?). It's just not showing up in the GUI.
    1 point
  10. Nice to hear. The steam container itself isn't updated very often, because the container has only the necessary dependencies and the start script in it, nothing more. If you restart the container, steamcmd will check whether a newer version of the game is available and update it if necessary (I also think I put this in the description). All the files from the game and all settings are in your appdata directory, so nothing will be lost. I have a different approach to updates than other developers: almost every container checks on each start/restart if a newer version is available and downloads it from the source itself, so the containers don't have to be updated that often, only if the developer of the game/application changes something. Hope that makes sense to you.
    1 point
  11. @SpencerJ Hi Spencer, do you know which file on the GitHub we need to check for that message? In Shares.txt this text is already translated. Some issue with the code?
    1 point
  12. My apologies, it does. Thank you very much, both of you.
    1 point
  13. You can try the invalid slot command, follow the instructions below carefully and ask if there's any doubt.
- Tools -> New Config -> Retain current configuration: All -> Apply
- Check all assignments and assign any missing disk(s) if needed. Don't assign parity2. If you have a spare to rebuild disk2 (same size or larger), use it, since it will leave you with more options if this doesn't work.
- Important - After checking the assignments, leave the browser on that page, the "Main" page.
- Open an SSH session/use the console and type (don't copy/paste directly from the forum, as sometimes it can insert extra characters):
mdcmd set invalidslot 2 29
- Back on the GUI and without refreshing the page, just start the array. Do not check the "parity is already valid" box (the GUI will still show that data on the parity disk(s) will be overwritten; this is normal, as it doesn't account for the invalid slot command, but it won't be, as long as the procedure was correctly done). Disk2 will start rebuilding. The disk should mount immediately, but if it's unmountable don't format; wait for the rebuild to finish (or cancel it) and then run a filesystem check.
    1 point
  14. You can run it as a container on unraid or on another server and still use swag. You just exchange "plex" with the IP of the other server, and the port if you changed it.
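For example, in swag's plex proxy conf the change is typically just the upstream lines. This is only a sketch following the usual linuxserver proxy-conf layout - the IP and port below are placeholders:

    # /config/nginx/proxy-confs/plex.subdomain.conf (hypothetical values)
    set $upstream_app 192.168.1.50;   # was "plex" (the container name)
    set $upstream_port 32400;         # change this if you moved the port
    set $upstream_proto http;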
    1 point
  15. The shutdown option is now available in the version of the Parity Check Tuning plugin I released today. I would be interested in any feedback on how I have implemented it, or any issues found trying to use the shutdown feature.
    1 point
  16. And better than my Vu+: the thing would always be connected to the satellite and would always have current EPG data, even without being switched on weekly. And it doesn't conflict with my home network principle of no internet access either.
    1 point
  17. Ok, I just have to figure out how you export a panel to JSON. Edit: you have a PM with the JSONs
    1 point
  18. I am assuming that the one disk had some ZFS config on it that was interfering with the rest. The thing I don't understand is why it was playing havoc with the whole system. I'm relatively new to ZFS, so I guess what do I know, but logically, if the system is on a USB stick in a path that has nothing to do with the ZFS volume, this kind of interference should not happen. Until I know for sure, I'll consider this my first ZFS drama. The awesome thing is that throughout, no ZFS data was lost; all pools were active and running. Tested multiple times - even did a scrub. Anyway, I am using the kernel helper again, as it's easier to keep control of ZFS versions, and I did notice the nice ZFS export notice at the bottom of the screen - nice job!
    1 point
  19. k cool, that makes perfect sense. I installed 0.8.5 and my zfs pools are there. I also had an issue with ZnapZend not working, but it just needed perl-5.32.0-x86_64-1.txz from nerdpack.
    1 point
  20. Okay, got passthrough working on the second GPU... I found faulty PCIe power cables. Got everything working now - thanks for your help. I am running the beta though.
    1 point
  21. No problem, I'm here to help. Which console did you open: the console from Unraid itself, or the console from the container (right-click on the icon and Console)? Can you send me the exact command? Btw, thank you for reporting that - the description was wrong. Here is the right command to connect to the console (note you have to open a console from Unraid and not the container): 'docker exec -u steam -ti NAMEOFYOURCONTAINER screen -xS Barotrauma' (without quotes). Then you should be in the game console; when you're done, simply close the console window
    1 point
  22. I'm right there with you. My unRaid server is sporting an underutilized i7 and has been flawless in handling Roon, including multi-channel. The only issue is being able to revert to a previously saved update. I think the permissions issue looks like a hot trail.
    1 point
  23. It looks like your vfio-pci.cfg was created by hand. Since you are on 6.9, use Tools -> System Devices to create the vfio-pci.cfg and it will be more resilient to hardware changes. Additionally, you can use the "View VFIO-PCI Log" button on that page to see details of each device that was (un)successfully bound during boot. For more info:
    1 point
  24. @xthursdayx Well, I'm sorry to hear you ran into the same issue we've got - but also glad I've got someone else with an interest in getting this resolved! I really don't want to have another machine running 24/7 if I can help it. The unRaid box and dockers need to cover the needs! (with the exception of piHole). I'm about ready to fail another upgrade here, so I'll document the whole process with screenshots and logs:
- detailed comment on your github issue #8
- split unRaid forum post of the update problems
- screenshots along the process
- pastebin 1
- pastebin 2
Nuked, re-installed: runs. Restore from backup completes; upon restart, no dice. pastebin log. Re-nuked, re-installed, rebuilt: same-same
    1 point
  25. Did you trigger the restart manually? Did it work, or did you have to help it along (e.g. by doing a hard reset)? If it rebooted normally, I would perhaps put a new stick in the drawer as a spare. I've also had a Kingston DataTraveler SE9 die after just a few weeks. The Transcend JetFlash 600, on the other hand, is currently holding up in three different servers - in one of them even on a USB3 port, which the community likes to demonize. ^^
    1 point
  26. Thanks everyone! I'm back up and running! I don't know if I have to post this or not, but we can mark this one as solved!
    1 point
  27. Thanks! This is very interesting. I'll do the same "blocking out". Sadly I'm kinda stuck with these 2 Starship USB controllers that won't pass through. I will go ahead and try the RC version of unRAID though, as that sounds interesting for the FLR issues. I realize a USB card might work, but I really want to save my PCI-E slots for graphics. Well, that was easy: I went to the beta of unRAID 6.9 and the USB controller passed through without any issues. Wow, and thanks again! Now time to add another graphics card for VM #2
    1 point
  28. Hi ich777, before I start, thanks for all the work you've done on the gameservers! I'm pretty new to unraid, but have had some great fun with it so far! Hoping you might be able to help with issues I'm having with the Barotrauma server. I've got it up and running - and can run the sandbox and mission gametypes in multiplayer - but I can't get the campaign option to work (it's greyed out - can't be selected or voted for). I asked for some help on the Barotrauma Discord server and they suggested that I need to give myself admin permissions in the game to be able to edit the settings and select the campaign. Don't know if you think this would work? I tried a couple of different things - using the in-game console, the console you can select on the docker itself, and the terminal built into unraid - but with all of them I couldn't work out what I was supposed to do. I did see your note in the container install notes about connecting to the console - but couldn't seem to make it work. As I said, I'm pretty new to the unraid ecosystem - so not fully sure what everything means! I'm pretty happy following guides/wikis - so if there's anything you could point me in the direction of, that'd be amazing! Happy to provide more info on the things I tried if that helps.
    1 point
  29. This was correct. I just took your predefined line (+maxplayers 8 +map c2m1_highway) out and added '+server.cfg' in its place. And no worries about the "lateness" of this reply. I kept poking around at things, learning the container, and configuring things as I moved along. Thank you for your help!
    1 point
  30. I'm running 6.8.3. It seems perverse, to me, that on the GUI dashboard the menu over a docker container offers an 'Update' option when there is no notified update. As soon as an update is notified for a particular container, the 'Update' option disappears. In other words, you can update a container as long as there is no update! Is there a good reason for this?
    1 point
  31. And already fixed in the 6.9 series
    1 point
  32. So, I recently went through and moved everything off cache to reformat it for the large write fix. I've since moved everything back, and now my CPU usage is very high. I looked at processes and have this guy running high:
root 6342 200 0.0 1777300 42356 ? Ssl 20:31 6:12 /usr/local/sbin/shfs /mnt/user -disks 2047 -o noatime,allow_other -o remember=330
Attached are the diags; not sure what is going on. unraid-diagnostics-20201008-2036.zip
EDIT: I started stopping dockers one by one and it appears to be related to the syncthing docker. Going to look into what is going on with it.
    1 point
  33. LTS is way back at version 5.6. There is nothing wrong with that, but it is lagging far behind in features. Version 5.13 has been running very stably for me and others for several months. I use a version-specific repository tag to stay on that version (rather than :latest) while the issues with 5.14/6.0 get sorted. Here is my repository: linuxserver/unifi-controller:5.13.32-ls71
    1 point