Everything posted by mgutt

  1. I have to correct myself. It seems a recent version has solved this issue, as my logs are now working without my solution.
  2. Which version does it use? If I understand it right, it should be available since Handbrake 1.3:
  3. Hmm what? ^^ Using a GPU for the VM is hardware acceleration?! Just to be clear:
     - a VM that is rarely used as a client and runs no video encoding/decoding software does not need a physical GPU; the virtual GPU is enough
     - a VM that is used as a client could get a physical GPU to make it more responsive
     - a VM that is used for gaming or video encoding/decoding must have a physical GPU
     Ok, maybe you could change the curve to something quieter? But of course it depends on your current temps.
  4. 1.) Why is the server in your room? 2.) Sounds like a cheap cooler / fan / case. If Plex/Unraid uses the (i)GPU, it's not available for the VM. So you need a second GPU for a VM (which could be the iGPU if you use a dedicated GPU as your Unraid/Plex GPU). But do you need hardware acceleration in a VM? Enabling Intel QSV in Handbrake is the same step as enabling it for Plex. If you use QSV, the load is much lower, which means less heat, which means less noise. Here you can find interesting info about Handbrake's hardware acceleration: https://handbrake.fr/docs/en/latest/technical/performance.html It looks like it's possible in a recent version. Asrock? My Supermicro used Java and I hated it ^^
  5. A smart socket would allow the same:
     - set BIOS to auto power on
     - set smart socket to power off if under ~3 watts
     - set smart socket to power on by schedule
     - set cronjob accordingly or a script to execute on boot
     - let a script power down the backup server when it is idle (see the sketch below)
     For my remote backup server I will use a self-made IPMI. At the moment I'm simply using a Lubuntu VM with RealVNC Viewer. @Mor9oth They are all good. Asus, QNAP and LR-Link (AliExpress) are the cheapest. P.S. If you want to save even more energy, you could directly connect the server to the client through 10G and in addition connect the client and the server to the 1G ports of your router (this means you need two cables for your client). By that you don't need a 10G switch and you additionally save the 3 watts of the permanently active connection between server and 10G switch. If you have multiple clients that should be connected through 10G, install a 2-port or 4-port 10G card. Still cheaper than a 10G switch.
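     For the last point, here is a minimal sketch of such an idle-powerdown script. It assumes the backup arrives via rsync, that a plain poweroff gives the backup server a clean shutdown, and that cron runs the script every 15 minutes; the counter file /tmp/idle_count is just an example:
     #!/bin/bash
     # shut the backup server down after ~1 hour without a running rsync
     if pgrep -x rsync > /dev/null; then
         echo 0 > /tmp/idle_count                      # backup still running, reset the idle counter
     else
         count=$(( $(cat /tmp/idle_count 2>/dev/null || echo 0) + 1 ))
         echo "$count" > /tmp/idle_count
         [ "$count" -ge 4 ] && poweroff                # 4 x 15 minutes idle -> power down
     fi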
  6. Do you need IPMI or not? If not, take a look at the Gigabyte C246M-WU4 or Asus WS C246M Pro. Sadly there is no C246 board with onboard 10G. A 10G card adds ~6 watts: 3 watts for the card and 3 watts for the port. I know that 3 watts per port are valid for onboard 10G ports as well, but I don't know how much the controller/chipset adds. Let's say it's 1 watt... the total difference would be only 2 watts.
  7. My system load is high as my two Unraid servers are syncing, a backup to the cloud is running and a parity check is running on all disks. So look only at Plex's load. Nice, isn't it? The only reason to buy a Xeon is that you need a high-power CPU with ECC RAM support. The Core i3-9350K is the fastest consumer CPU with ECC support and has "only" 4 cores (which is still enough for most Unraid uses, I think). P.S. Intel removed ECC support from all i3 and Pentium Gold CPUs with the 10th generation. It seems they want to boost Xeon sales. And if you want to save energy you should not buy a server board with IPMI. Instead buy a workstation board. I suggest the Gigabyte C246-WU4 or the Asus WS C246 Pro. CEC 2019 is an energy standard from California which forces the mainboard producers to use energy-efficient components, deep sleep states etc. It is not needed for server boards and as far as I know not supported by Asrock boards. Gigabyte and Asus seem to support CEC 2019 on all recent consumer / workstation boards. Note: It must be enabled in the BIOS and it's independent of the OS, so yes, it is "supported" by Unraid. Example:
     - Asrock C246 WSI: 11.87 watts (default BIOS: 12.58 watts)
     - Gigabyte C246N-WU2: 7.36 watts (default BIOS = CEC 2019 disabled: 10.29 watts)
     - Supermicro A2SDi-8C+-HLN4F with IPMI: 22 watts (I didn't note the default value, but it was even higher)
     Of course all measured with the same components installed (Windows 10, SATA SSD, 16GB RAM, active 1G LAN, HDMI connected, analog audio disabled, all C-states at maximum).
  8. I removed both /boot/config/network* files from the flash drive and booted with the new board without problems. Maybe VMs need re-configuration if they use fixed hardware resources that aren't present anymore, but other things (Docker, plugins, settings, etc.) should work without any interaction, as Unraid loads drivers on boot without making changes to the flash drive itself. Maybe you could even leave the network files on the flash drive. I didn't test it. But I was surprised that it was so easy ^^ The license is bound to the flash drive, so no changes are needed here either. P.S. you don't even need to connect the HDDs in the same order, as Unraid assigns them through their serial numbers.
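     For reference, a minimal sketch of the flash drive step, assuming the two network files are network.cfg and network-rules.cfg as on a typical Unraid flash drive (run it from the webterminal before shutting down, or delete the files on another computer with the stick plugged in there):
     # remove the old NIC configuration so Unraid creates a fresh one on the new board
     rm -f /boot/config/network.cfg /boot/config/network-rules.cfg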
  9. Sorry, I thought you meant the usual KVM-over-Internet hardware which costs >500 dollars. You should post links to a project like PiKVM. I subscribed to the pre-order.
  10. No chance with Asrock (no CEC 2019) and a server board (IPMI, no CEC 2019).
  11. TinyPilot is better, because it's cheaper, can be switched off, can be moved to the next server, works without VPN, works without port forwarding, etc. But I think I'll start with idea B and use it at first only for the Unraid WebGUI.
  12. Here is my little adventure of repairing a defective hard disk in my backup NAS:
  13. Don't think so. The UHD 630 (should be the same as your UHD P630) is able to transcode ~4x 4K or ~20x 1080p. So they both have the same performance. No joke. EDIT: Here is the proof (Intel P630 iGPU 20x 1080p vs Nvidia GTX 1080 24x 1080p): https://forums.serverbuilds.net/t/guide-hardware-transcoding-the-jdm-way-quicksync-and-nvenc/1408/243 I had the same problem. My last board was an expensive Supermicro A2SDi-8C+-HLN4F without iGPU. I thought it would have enough power to transcode 1080p, but finally I found out that Blu-ray rips with the VC-1 codec could only be transcoded with one CPU core. So I sold my Supermicro board (with a high loss) and now I'm using the Gigabyte C246N-WU2 with the i3-8100 and the difference is like night and day. Funnily enough it was cheaper, and the new server with a 10G card, 8 HDDs, one NVMe and two 32GB ECC RAM modules consumes in total less (19.59 watts) than my old board (21 watts) with only one active 1G connection, one SATA SSD and one 16GB ECC RAM module.
  14. No. Does not work. echo "bla" is not logged. No verbose output is logged. Only fatal errors are logged. This is the reason why so many people complain and ask for the "real download"
  15. No it does not. I'm already using Real VNC Viewer to connect to the VM behind the router without port forwarding.
  16. My server is located in a remote location, it is behind a router without port forwarding, I do not have access to any clients and the board has no IPMI (which wouldn't help anyway as I have neither VPN nor client access). At the moment I access my server remotely through a VM, but if the Unraid array stops I'm not able to access the server anymore. I thought about using WireGuard and trying a server-to-server connection in the hope that the remote server does not need any open ports, but finally I didn't test it because the external location uses the same IP range as my local network (which must be different to use WireGuard). So I came up with this idea:
      - Raspberry Pi with the default Raspbian and RealVNC Connect (which is preinstalled)
      - Use a PCIe slot bracket adapter to install the RPi into the case
      - Connect the RPi to the server's power supply through ATX plug pin 9 (+5VSB)
      - Connect both the mainboard and the RPi to the switch/router
      - Connect GPIO to the F-Panel Power +/- pins (to be able to power on / hard power off the server; see the sketch below)
      - RPi stays powered on or is time-controlled through a Witty Pi if low energy consumption is required
      Upgrade:
      - Install an HDMI capture adapter and Pi-KVM (or TinyPilot?) to control even the BIOS
      What do you think about the idea?
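      Here is a minimal sketch of how the RPi could pulse the power pins, assuming GPIO 17 drives the F-Panel Power +/- through an optocoupler or relay (the pin number is an arbitrary choice and the sysfs GPIO interface is just one way to do it; newer Raspbian versions also ship the gpiod tools):
      #!/bin/bash
      # simulate a short press of the power button from the RPi
      echo 17 > /sys/class/gpio/export 2>/dev/null     # make the pin available (ignore the error if already exported)
      echo out > /sys/class/gpio/gpio17/direction
      echo 1 > /sys/class/gpio/gpio17/value            # "press" the button
      sleep 0.5
      echo 0 > /sys/class/gpio/gpio17/value            # "release" it
      # holding the pin high for ~5 seconds instead would be the hard power off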
  17. There aren't any. The logs are buggy so I used this at the beginning of all my scripts:
      # logging
      exec > >(tee --append --ignore-interrupts $(dirname ${0})/log.txt) 2>&1
      By that all output appears twice in the logs, but this was the only solution that worked. And only to inform you: there is another bug. It's not possible to stop scripts although the dashboard claims it is. It seems @Squid did not find the time to correct it. I posted two ways to solve it.
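      As a usage example, such a script header could look like this (a minimal sketch; the echo line is only a placeholder, and log.txt ends up in the same directory as the script):
      #!/bin/bash
      # logging: duplicate all stdout/stderr of this script into log.txt next to the script
      exec > >(tee --append --ignore-interrupts $(dirname ${0})/log.txt) 2>&1
      echo "script started at $(date)"
      # ... rest of the script ...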
  18. Not possible. It's not my router. Could the external NAS connect to my server as a client so it becomes part of my local network or are open ports on both sides a must?
  19. This needs only open ports on my location, right?
  20. This is my external backup NAS. It's located in another city. I'm able to control it remotely through a Lubuntu VM + VNC Viewer (RealVNC). If I stopped the array, the VM would be killed and I would lose the ability to access it.
  21. This is not possible. I tried it multiple times. It does not shut down. My friend at the external location needed to hold the power button to reboot the server, and after the restart I ended up in the same situation (not seeing the disk through the terminal). Does anyone have an idea how to access the WebGUI remotely without starting the array (I'm not able to open ports on the remote side)? It's strange how the disk is listed (8.1 GB total size ^^):
  22. Diagnostics Logs attached. black-diagnostics-20201002-2331.zip
  23. My backup NAS was victim of a power outage and since it restarted, one disk shows errors and sometimes "Unmountable: No File System". As I'm only able to access it remotely through a VM I'm not able to stop the array. At first I tried to repair it:
      xfs_repair -v /dev/md6
      xfs_repair: cannot open /dev/md6: Device or resource busy
      Then I unmounted it:
      umount /dev/md6
      And repaired it again:
      xfs_repair -v /dev/md6
      Phase 1 - find and verify superblock...
      superblock read failed, offset 0, size 524288, ag 0, rval -1
      fatal error -- Input/output error
      Tried remounting:
      mount -t xfs /dev/sdg1 /mnt/disk6
      mount: /mnt/disk6: special device /dev/sdg1 does not exist
      Checked all existing disks and it seems to be dead as sdg / sdg1 is missing completely:
      lsblk
      NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
      loop0     7:0    0   9.1M  1 loop /lib/modules
      loop1     7:1    0   7.1M  1 loop /lib/firmware
      loop2     7:2    0    20G  0 loop /var/lib/docker
      loop3     7:3    0     1G  0 loop /etc/libvirt
      sda       8:0    1  14.2G  0 disk
      └─sda1    8:1    1  14.2G  0 part /boot
      sdb       8:16   0  12.8T  0 disk
      └─sdb1    8:17   0  12.8T  0 part
      sdc       8:32   0  12.8T  0 disk
      └─sdc1    8:33   0  12.8T  0 part
      sdd       8:48   0  12.8T  0 disk
      └─sdd1    8:49   0  12.8T  0 part
      sde       8:64   0 223.6G  0 disk
      └─sde1    8:65   0 223.6G  0 part /mnt/cache
      sdf       8:80   0  12.8T  0 disk
      └─sdf1    8:81   0  12.8T  0 part
      sdh       8:112  0  12.8T  0 disk
      └─sdh1    8:113  0  12.8T  0 part
      md1       9:1    0  12.8T  0 md   /mnt/disk1
      md2       9:2    0  12.8T  0 md   /mnt/disk2
      md3       9:3    0  12.8T  0 md   /mnt/disk3
      md4       9:4    0  12.8T  0 md   /mnt/disk4
      md5       9:5    0  12.8T  0 md   /mnt/disk5
      md6       9:6    0  12.8T  0 md
      Could I check something else? If not I would pick up the NAS in a few days, replace the disk and restore from backup.
  24. Even after reducing vm.dirty_bytes to only 100MB:
      sysctl vm.dirty_bytes=100000000
      it does not influence transfer speeds. This means I do not need to repeat all the benchmarks with simulated low RAM conditions. Even the tiniest RAM setup works. Ok, this is not valid for the read cache as it would contain the full file, but a) I do not know how to disable the read cache and b) reading has no real potential to become faster through changing Samba settings. Now it's time to analyze all benchmarks.
      SMB.conf Tuning
      I made many tests by adding different Samba settings to smb-extra.conf (3-4 runs for each setting). This is the default /etc/samba/smb.conf of Unraid 6.8.3 (only the performance part):
      use sendfile = Yes
      aio read size = 0
      aio write size = 4096
      allocation roundup size = 4096
      In addition I enabled RSS as this allows the usage of all CPU cores. Now the results:
      CrystalDiskMark        | Speed      | Best smb.conf setting                                                              | Gain of fastest
      SEQ1M Q8T1 Read        | >1170 MB/s | default or aio write size or aio read size or write cache size                    | no gain
      SEQ1M Q1T1 Read        | >980 MB/s  | default or strict allocate or write cache size or aio write size (fastest)        | +10 MB/s
      RND4K Q32T16 Read      | >300 MB/s  | default or strict allocate or write cache size (fastest)                          | +30 MB/s
      RND4K Q1T1 Read        | >39 MB/s   | default (fastest) or strict allocate or write cache size                          | no gain
      SEQ1M Q8T1 Write       | >1170 MB/s | default or strict allocate or write cache size or aio read size or aio write size | no gain
      SEQ1M Q1T1 Write       | >840 MB/s  | write cache size (fastest) or aio read size + aio write size                      | +40 MB/s
      RND4K Q32T16 Write     | >270 MB/s  | write cache size                                                                   | +20 MB/s
      RND4K Q1T1 Write       | >40 MB/s   | default or write cache size or aio write size                                     | no gain
      NAS Performance Tester | Speed      | Best smb.conf setting                                                              | Gain of fastest
      Sequential Write Avg   | >1165 MB/s | default or strict allocate or write cache size or aio write size                  | no gain
      Sequential Read Avg    | >1100 MB/s | default or strict allocate or write cache size (fastest)                          | no gain
      As we can see, "write cache size" seems to be the only setting that is sometimes faster than the default. So I retested it with the following sizes:
      write cache size = 131072
      write cache size = 262144
      write cache size = 2097152
      write cache size = 20971520
      write cache size = 209715200
      write cache size = 2097152000
      Result: No (stable) difference compared to the default settings. So it is obviously random, depending on the minimal load on server and/or client while the benchmarks are running. Next I will try to override the default Unraid settings. Maybe this has an influence.
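      For reference, a single test iteration could look roughly like this (a minimal sketch; the value of "write cache size" is the part that was varied, /boot/config/smb-extra.conf is where Unraid picks up custom Samba settings, and restarting Samba via rc.samba is an assumption you should only act on while no transfers are running):
      # append one candidate setting to the extra Samba config (it is included in the [global] section)
      echo "write cache size = 2097152" >> /boot/config/smb-extra.conf
      /etc/rc.d/rc.samba restart
      # then run CrystalDiskMark / NAS Performance Tester from the client and note the result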
  25. 1.) Open WebGUI -> Webterminal and enter this command:
      du -hs /mnt/disk6/Moviesharename/*
      What sizes does it return? Or use this for the real sizes in bytes:
      du -s --block-size=1 --apparent-size /mnt/disk6/Moviesharename/*
      2.) When was Plex's last scan, or when did you last watch a movie before yesterday? The problem could have happened much earlier.
      3.) Does a user have write access to your movie collection? If yes: don't do that. The Plex docker container should only have read access to your media. And your users, too. I enabled Disk Share access, enabled only the cache disk for my user and rip my movies to /mnt/cache/Movies. After the mover has done its work, they are outside of all user permissions. By that my movie collection is protected against accidental deletion and ransomware as long as I'm not using the root login through my browser.
      4.) If 3.) is yes, did you check your clients for ransomware?
      5.) Is your server accessible through the Internet (open ports on your router)?