Popular Content

Showing content with the highest reputation on 10/24/20 in all areas

  1. 2 points
    No, because by default Linux does not switch off unused PCIe, USB, SATA, iGPU, etc. devices. This script only sets all devices from (permanently) ON to AUTO, which reduces energy consumption by switching a device off when it is not in use. This is different from Windows or macOS, where energy saving is the default.

    Source: https://www.kernel.org/doc/html/v4.14/driver-api/pm/devices.html#sys-devices-power-control-files

    "The setting can be adjusted by user space by writing either “on” or “auto” to the device’s power/control sysfs file. Writing “auto” calls pm_runtime_allow(), setting the flag and allowing the device to be runtime power-managed by its driver. Writing “on” calls pm_runtime_forbid(), clearing the flag, returning the device to full power if it was in a low-power state, and preventing the device from being runtime power-managed. User space can check the current value of the runtime_auto flag by reading that file."

    This has nothing to do with underclocking devices that are in use. It only reduces the power of a device while it is not used (which is controlled by its driver). This is the same principle as CPU C-states.
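    As a minimal sketch of what this looks like in practice (the device path is just an example, adjust to your hardware):

    # Read the current setting for a device (example: first USB root hub):
    cat /sys/bus/usb/devices/usb1/power/control          # prints "on" or "auto"
    # Allow runtime power management for it:
    echo 'auto' > /sys/bus/usb/devices/usb1/power/control
    # Check what the device is actually doing right now:
    cat /sys/bus/usb/devices/usb1/power/runtime_status   # e.g. "active" or "suspended"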
  2. 1 point
    Powertop

    Powertop is an Intel tool (yes, it works for AMD builds, too) to check the power consumption states of SATA, PCIe, USB, etc. devices. You can install powertop through the NerdPack plugin and execute it as follows:

    powertop

    Now press "TAB" until you reach "Tunables". You get a list of devices that could be switched off.

    If you like, you can generate a report containing commands that can be added to the Go File through the Config File Editor plugin (change the path if you do not have a cache disk):

    powertop --html=/mnt/cache/powertop.html

    How much energy can be saved? After a reboot my server's power consumption was reduced by 4W (of 24W, roughly 17%)! You can save even more energy with Intel undervolting.

    Powertop Auto Tune

    If powertop is permanently installed on your server, you can add this to your Go File:

    # -------------------------------------------------
    # Reduce power consumption
    # -------------------------------------------------
    powertop --auto-tune

    Execute this command through your web terminal and you don't need to restart your server.

    Script

    If you don't want to use powertop and you're feeling brave (I'm using this on three different Unraid servers without flaws), you can add the following to your Go File (credits go to @mika91). But note:

    - In the first section, disabling "haveged" is not active. If you want to activate it, remove the "#" in front of the line. Read this thread to find out if this is useful for you, too.
    - In the last section, WoL is disabled for all ethernet ports. Remove this if you need WoL.

    # -------------------------------------------------
    # disable haveged as we trust /dev/random
    # https://forums.unraid.net/topic/79616-haveged-daemon/?tab=comments#comment-903452
    # -------------------------------------------------
    #/etc/rc.d/rc.haveged stop

    # -------------------------------------------------
    # powertop tweaks
    # -------------------------------------------------
    # Enable SATA link power management
    for i in /sys/class/scsi_host/host*/link_power_management_policy; do
        echo 'med_power_with_dipm' > $i
    done

    # Runtime PM for I2C adapter (i915 gmbus dpb)
    for i in /sys/bus/i2c/devices/i2c-*/device/power/control; do
        echo 'auto' > $i
    done

    # Autosuspend for USB devices
    for i in /sys/bus/usb/devices/*/power/control; do
        echo 'auto' > $i
    done

    # Runtime PM for disks
    for i in /sys/block/sd*/device/power/control; do
        echo 'auto' > $i
    done

    # Runtime PM for PCI devices
    for i in /sys/bus/pci/devices/????:??:??.?/power/control; do
        echo 'auto' > $i
    done

    # Runtime PM for ATA ports of PCI devices
    for i in /sys/bus/pci/devices/????:??:??.?/ata*/power/control; do
        echo 'auto' > $i
    done

    # Disable Wake-on-LAN for all ethernet ports
    for i in /sys/class/net/eth?; do
        ethtool -s $(basename $i) wol d
    done
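    After a reboot you can spot-check that the tweaks took effect; a minimal sketch (host0 and sda are examples, adjust to your hardware):

    cat /sys/class/scsi_host/host0/link_power_management_policy   # expect: med_power_with_dipm
    cat /sys/block/sda/device/power/control                       # expect: auto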
  3. 1 point
    Your Unraid server shouldn't be exposed to the internet anyway, so why bother?
  4. 1 point
    The rebuild does not check the existing contents. It just works out what should be there by reading all the other disks, and then overwrites whatever is on the disk being rebuilt.
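    For context, single parity is a plain XOR across the data disks, which is why the missing disk's contents can be recomputed from all the others. A toy sketch with single bytes standing in for disks:

    # parity = XOR of all data bytes
    d1=$(( 0xA5 )); d2=$(( 0x3C ))
    parity=$(( d1 ^ d2 ))
    # if the disk holding d2 dies, its byte is rebuilt from the survivors:
    printf 'rebuilt d2 = 0x%X\n' $(( d1 ^ parity ))   # prints 0x3C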
  5. 1 point
    Checking would just slow it down, and it assumes a new disk was used, so there's no point in checking.
  6. 1 point
    Without going through the massively complicated directions: since you're using 6.9, instead of utilizing an unassigned device for this, you're better off (and it's going to just plain work better) creating a new cache pool instead. As for the execution error: edit the container, make a change (any change), revert it, then hit Apply. The exact reason why it's failing will appear after the docker run command.
  7. 1 point
  8. 1 point
    Do not use option 2 (new config) mentioned in that post from several years ago. That was incorrect.
  9. 1 point
    You didn't have time to run an extended SMART test, but the SMART attributes look OK. You can rebuild to a new disk or to the same disk, as already explained in this thread. Since this happened after you were making hardware changes, you should double-check all connections on all disks, SATA and power, including splitters.
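    When there is time for it, the extended test can also be started from a terminal; a sketch (sdX is a placeholder for the disk in question):

    smartctl -t long /dev/sdX   # start the extended self-test (runs on the drive in the background)
    smartctl -a /dev/sdX        # later: check "Self-test execution status" and the test log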
  10. 1 point
    Cool, I ignored that. 😀 I used to have tape drives, but the last one was an LTO-3.
  11. 1 point
    I don't think SAS vs. SATA will make much of a difference with Unraid; more important is the density of the disk platters, i.e., more recent disks with a higher areal density will likely be generally faster.
  12. 1 point
    @bigmac5753 Not 100% sure, but 3000 series Ryzen should be supported with Unraid 6.9, currently in beta.
  13. 1 point
    My container automatically updated to 4.3.0, but it isn't whitelisted on the trackers yet, so I'd rather downgrade for now. What is the best way to do so? On one forum, someone mentioned getting a version number from the "More Info" tab of the container image (but I can't find it) and adding that to the repository on the container's Edit page. Can anyone write a quick step-by-step for me, please? I believe I've turned off automatic updates for the future. Thanks.
  14. 1 point
    dirty_writeback is the only risky setting, but it's not raised by powertop, it's reduced. The default is 30 seconds; powertop sets it to 15 seconds. This means Linux starts emptying the RAM write cache to disk 15 seconds earlier, which can save energy because all devices can reach their sleep state 15 seconds earlier. Note: a UPS is absolutely recommended, or all data of the last 15 seconds will be lost on a power outage.

    Unraid's default is also 30 seconds. Example: you move important data to your server (so it's "deleted" from your client). If a power outage now occurs within the 30 seconds since the first file was uploaded, everything is lost. I think this point needs more attention from Limetech. If no UPS is connected, vm.dirty_ratio should be reduced to 1% and vm.dirty_writeback_centisecs should be reduced to 0.5 seconds (disabling it entirely has a huge negative impact on performance).

    I raised my dirty_ratio to 50% and left the 30 seconds as-is, because I'm using a UPS and it allows uploading a huge amount of files even to HDDs at up to 10G speeds (the data is written to RAM first and the RAM is emptied to disk later).
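    A minimal sketch of inspecting and changing these values (the numbers are the ones from this post, not universal recommendations):

    # show the current writeback settings
    sysctl vm.dirty_ratio vm.dirty_writeback_centisecs vm.dirty_expire_centisecs
    # without a UPS: small write cache, flush every 0.5 seconds (= 50 centisecs)
    sysctl -w vm.dirty_ratio=1
    sysctl -w vm.dirty_writeback_centisecs=50
    # with a UPS: allow a large RAM write cache
    sysctl -w vm.dirty_ratio=50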
  15. 1 point
    Thanks, that was it. But I can't remember when I changed the vfio-pci config.
  16. 1 point
    Possibly reduced stability. It's almost the inverse of overclocking: circuits are designed and tested to run at spec; deviate and you risk bit errors. Depending on the quality of your specific silicon you might be fine, but you don't know for sure. How valuable is your data integrity? See @mgutt's reply; I was referencing undervolting, wrong thread.
  17. 1 point
    Read this thread for more about why that's possibly a bad idea.
  18. 1 point
    v0.9 released:
    - Preloads only subtitle files that belong to preloaded video files

    Now the preloading should be much faster if your collection contains many SRT files. Here is an example from my logs:
  19. 1 point
  20. 1 point
    Yes and no. If you set it to "only" and your cache becomes full again, Emby will crash (because Emby runs out of storage). That's the reason why I set a minimum free space on my SSD before changing the path to direct access, as this minimum free space is only honored for shared access. That way I have up to 100GB of free storage exclusively for Plex, no matter how much data is written to the SSD by other processes.

    This is strange. One moment, I will test whether this feature shows files that exist twice... Hmm, no, it shows disk8 and cache if the same file is located on both. So what happened with your Emby installation 🤔 Emby itself usually has no clue about the local path. But maybe the Docker container itself resets after changing the appdata path? Hopefully not, but otherwise I'm out of ideas. Which Emby container are you using? I want to test this scenario.

    EDIT: I have an additional idea. Please execute this command:

    sysctl -a | grep dirty

    Is "vm.dirty_expire_centisecs" set to 3000? If yes, the following could have happened:
    - Emby wrote new data to its database
    - because of "vm.dirty_expire_centisecs", the new database was still located in RAM for up to 30 seconds
    - you then changed the path; Emby restarted and did not find the database file anymore, as it had not yet been written to the SSD cache

    If this is the reason, I have to change the manual: we have to disable the container and wait more than 30 seconds before changing the path (see the sketch below). What it means for you: the file is lost (sorry for that).
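    If that is the cause, a hedged workaround for next time could look like this ("emby" is a placeholder for your actual container name):

    docker stop emby   # stop the container first
    sync               # force dirty pages still sitting in RAM out to disk
    sleep 35           # or simply wait longer than the 30-second expiry
    # ...now change the appdata path and start the container again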
  21. 1 point
    I am one very happy bunny!! That worked a treat!
  22. 1 point
    Apologies if this has been addressed; I saw notes around page 8 of others having issues using custom ports, but I couldn't find a solution. It's unfortunate that so much of this thread is standard UniFi controller support rather than container support. Anyway, I have been unsuccessful using custom ports for this container as well. I have configured the Docker container properly, but the defaults are still used. If I go into the container and look at the system.properties file, the port options are all commented out, so I'm not sure how the UniFi controller is supposed to know to use the custom ports. Thanks, James
  23. 1 point
    I wouldn't recommend that controller. It is natively a 2-port SATA PCIe x1 controller with port multipliers. A PCIe x1 card only has enough bandwidth for about 2 modern HDDs, so it will quickly become a bottleneck and you are likely to get drive errors, drives ejected from the array, etc. The 3Gb/s SATA link itself is not the issue; that is way faster than any conventional HDD.
  24. 1 point
    Maybe, or another network issue, but I've always used SMB for Kodi and never had any issues.
  25. 1 point
    Update: I've now switched to RouterOS and I'm staying with it. The 10G modules suddenly stopped working with SWOS somehow, don't ask me why. They work in RouterOS, so I'm sticking with RouterOS.

    EDIT: The network cards haven't arrived yet. For now I'm using a few 10G modules with the QNap network adapter. That already gets my MacBook its 10G.
  26. 1 point
    I don't see any errors in the log. Are you using NFS? If yes, try SMB instead if possible.
  27. 1 point
    Q16: https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
  28. 1 point
    I looked at it, and it was by design. But, you convinced me, so I did a coding change, and we'll see if the powers that be agree.
  29. 1 point
    That's what it will show unless you're running a balance, it's the normal status.
  30. 1 point
    Important, to all of you who downloaded OVMF files from my posts: I'm very sorry for this, but those OVMF files have unexpected behaviors if compiled with Xcode (the culprit seems to be clang 7.0.0), at least on my system. They somehow work, and you probably didn't notice any issue (me neither), until I detached my second HDMI monitor and wasn't able to boot anymore (only attaching HDMI made the VM boot again). So, if you downloaded OVMF files from me AND you have no TianoCore logo at boot, those OVMF files are somehow corrupted. Read more info here, in the bug I opened today: https://bugzilla.tianocore.org/show_bug.cgi?id=3006

    So switch back to the version provided with Unraid, or download the attached version (v. 202008 stable, compiled from sources): these are compiled by me on Kali Linux, no corrupted files, and you will notice that the TianoCore logo is back (update: the logo being shown or not is not so important; it's due to Xcode not supporting Hii-Binary-Package.UEFI_HII). Moreover, I think all the issues related to OVMF that I posted here could be caused by this. With the attached v. 202008 I have no problems in Catalina and Big Sur. All previous attachments in the forum relating to OVMF have been removed, pointing to this post. Sorry again.

    edk2-OVMF-202008.zip
  31. 1 point
    Multiple mounts, one upload and one tidy-up script. @watchmeexplode5 did some testing and performance gets worse as you get closer to the 400k mark, so you'll need to do something like below soon:

    1. My folder structure looks something like this:

    mount_mergerfs/tdrive_vfs/movies
    mount_mergerfs/tdrive_vfs/music
    mount_mergerfs/tdrive_vfs/uhd
    mount_mergerfs/tdrive_vfs/tv_adults
    mount_mergerfs/tdrive_vfs/tv_kids

    2. I created separate tdrives / rclone mounts for some of the bigger folders, e.g.

    mount_rclone/tdrive_vfs/movies
    mount_rclone/tdrive_vfs/music
    mount_rclone/tdrive_vfs/uhd
    mount_rclone/tdrive_vfs/adults_tv

    For each of those I created a mount script instance where I do NOT create a mergerfs mount.

    3. I mount each in turn, and for the final main mount I add the extra tdrive rclone mounts as extra mergerfs folders:

    ###############################################################
    ###################### mount tdrive ##########################
    ###############################################################

    # REQUIRED SETTINGS
    RcloneRemoteName="tdrive_vfs"
    RcloneMountShare="/mnt/user/mount_rclone"
    LocalFilesShare="/mnt/user/local"
    MergerfsMountShare="/mnt/user/mount_mergerfs"

    # OPTIONAL SETTINGS
    # Add extra paths to mergerfs mount in addition to LocalFilesShare
    LocalFilesShare2="/mnt/user/mount_rclone/music"
    LocalFilesShare3="/mnt/user/mount_rclone/uhd"
    LocalFilesShare4="/mnt/user/mount_rclone/adults_tv"

    4. I run the single upload script - everything initially gets moved from /mnt/user/local/tdrive_vfs to the tdrive_vfs teamdrive.

    5. Overnight I run another script to move files from the folders that are in tdrive_vfs to the correct teamdrive. You have to work out the encrypted folder names for this to work (see the sketch after this script). Because rclone is moving the files, the mergerfs mount gets updated, i.e. to Plex etc. it looks like the files haven't moved:

    #!/bin/bash

    rclone move tdrive:crypt/music_tdrive_encrypted_folder_name gdrive:crypt/music_tdrive_encrypted_folder_name \
        --user-agent="transfer" \
        -vv \
        --buffer-size 512M \
        --drive-chunk-size 512M \
        --tpslimit 8 \
        --checkers 8 \
        --transfers 4 \
        --order-by modtime,ascending \
        --exclude "*fuse_hidden*" \
        --exclude "*_HIDDEN" \
        --exclude ".recycle**" \
        --exclude ".Recycle.Bin/**" \
        --exclude "*.backup~*" \
        --exclude "*.partial~*" \
        --drive-stop-on-upload-limit \
        --delete-empty-src-dirs

    rclone move tdrive:crypt/tv_tdrive_encrypted_folder_name tdrive_t_adults:crypt/tv_tdrive_encrypted_folder_name \
        --user-agent="transfer" \
        -vv \
        --buffer-size 512M \
        --drive-chunk-size 512M \
        --tpslimit 8 \
        --checkers 8 \
        --transfers 4 \
        --order-by modtime,ascending \
        --exclude "*fuse_hidden*" \
        --exclude "*_HIDDEN" \
        --exclude ".recycle**" \
        --exclude ".Recycle.Bin/**" \
        --exclude "*.backup~*" \
        --exclude "*.partial~*" \
        --drive-stop-on-upload-limit \
        --delete-empty-src-dirs

    rclone move tdrive:crypt/uhd_tdrive_encrypted_folder_name tdrive_uhd:crypt/uhd_tdrive_encrypted_folder_name \
        --user-agent="transfer" \
        -vv \
        --buffer-size 512M \
        --drive-chunk-size 512M \
        --tpslimit 8 \
        --checkers 8 \
        --transfers 4 \
        --order-by modtime,ascending \
        --exclude "*fuse_hidden*" \
        --exclude "*_HIDDEN" \
        --exclude ".recycle**" \
        --exclude ".Recycle.Bin/**" \
        --exclude "*.backup~*" \
        --exclude "*.partial~*" \
        --drive-stop-on-upload-limit \
        --delete-empty-src-dirs

    exit
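    To work out the encrypted folder names used in step 5, one approach (a sketch; the remote names are the ones from this post):

    # list the raw (encrypted) directory names on the remote:
    rclone lsd tdrive:crypt
    # or translate a plain name into its encrypted form via the crypt remote:
    rclone cryptdecode --reverse tdrive_vfs: movies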
  32. 1 point
    So, just to let everyone know... I have received an OCF cable (only one was in stock, the other 3 will be sent later). AND THAT WORKED! Now on to the next challenges. (Just fought through the whole "disk only reads 746.5GB and not 3/4/8/10TB" problem.) THANK YOU ALL!
  33. 1 point
    Go to the Docker tab. Find Binhex-Krusader in the Docker list. Click on it (see picture below) and select "Edit". The Binhex-Krusader Docker edit screen pops up. Go down the list until you see /mnt/disks/ as the container path for unassigned devices. Click the "Edit" button to the right of it. The details pop up for the unassigned devices path. Go down the list to "Access Mode", change it to "RW/Slave", and then save.
  34. 1 point
    Feature request: instead of one giant tarball, could this app use separate tarballs for each folder in appdata? That way it would be much easier to restore a specific app's data (manually) or even pull out a specific file, since most of them could be opened with untar GUIs. Plex is the major culprit with its gargantuan folder. See the sketch below for what I mean.
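    Something like this loop is what I have in mind (the paths are assumptions, sketch only):

    # one tarball per appdata folder instead of a single giant archive
    for d in /mnt/user/appdata/*/; do
        name=$(basename "$d")
        tar -czf "/mnt/user/backups/${name}.tar.gz" -C /mnt/user/appdata "$name"
    done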
  35. 1 point
    I recently set up a mail server on a VPS for my company, and I would not attempt to create a Docker container for it. Way too many components and too much tinkering while setting it up. If you really want to run it from home, I would recommend doing it in a Linux VM.

    For reference, I set up a Postfix server with Dovecot for SASL, and also set up OpenDKIM (signatures) and PostSRSd (for email forwarding). Fail2ban provides the firewall. Roundcube is the webmail interface, but it is rarely used, as most employees either set it up on their phones with IMAP, or on their desktops with Outlook, or forward everything to Gmail.