Leaderboard

Popular Content

Showing content with the highest reputation on 11/12/19 in all areas

  1. 2 points
  2. Yes, you are right. The import parser went wrong on the comment statement(s). I have made an update with the fix. Thanks.
    2 points
  3. CPU is a Threadripper 2990WX with a VM running Windows 10 fully patched. RAM is G.SKILL Ripjaws 4 Series 64GB (8 x 8GB) DDR4 2133 (PC4 17000). Motherboard is an ASUS ROG Zenith Extreme Alpha X399. MB & RAM are at stock settings, CPU governor set to Performance. The VM is pinned to NUMA nodes 0 & 2, which have the PCIe & RAM attached, utilizing all CPUs, and the emulator pin is on NUMA node 1, CPU 16. Total OS memory assigned is 12GB.

               No NUMA                           ||             NUMA
    Benchmark  CB R20  PT CPU  PT RAM | CB R20  PT CPU  PT RAM || CB R20  PT CPU  PT RAM | CB R20  PT CPU  PT RAM
    CPU Topo   1/32/1  1/32/1  1/32/1 | 1/16/2  1/16/2  1/16/2 || 1/32/1  1/32/1  1/32/1 | 1/16/2  1/16/2  1/16/2
    Average    6572    20944   1261   | 6515    20831   1257   || 6408    20617   1389   | 6539    20958   1300
    Highest    6620    21085   1263   | 6443    20873   1258   || 6525    20728   1391   | 6589    21144   1306
    Lowest     6537    20810   1255   | 6484    20805   1254   || 6438    20455   1385   | 6511    20746   1297
    Variance   83      275     8      | 59      68      4      || 87      273     6      | 78      398     9

The left set has no NUMA node configuration. The 1/16/2 pairing shows a roughly 0.7% drop in CPU performance and a negligible difference in RAM performance. However, the variance (the difference between the high & low scores) was much lower, which indicates increased stability in processing speeds at a slight performance loss. With a NUMA configuration, things were different. CineBench showed roughly the same performance, but PerformanceTest 9.0 showed a larger variance in scores, with several much higher scores that were dropped in order to bring the variance down below a thousand. Where the NUMA configuration clearly helps is RAM test scores, if you create the NUMA node in the guest OS to match the host.
    <cpu mode='host-passthrough' check='none'>
      <topology sockets='1' cores='32' threads='1'/>
      <numa>
        <cell id='0' cpus='0-15' memory='6291456' unit='KiB'/>
        <cell id='1' cpus='16-31' memory='6291456' unit='KiB'/>
      </numa>
    </cpu>

In short, since AMD CPUs do not expose hyperthreaded CPUs to the guest OS, setting cores=16/threads=2 shows mixed results depending on whether you specify a NUMA node or not. It's probably recommended to always have threads=1 so it matches what the guest OS sees.
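The memory values in the <cell> entries above are just the guest's 12GB split evenly between the two cells, expressed in KiB. A quick sanity check of that arithmetic:

```shell
# 12 GiB of guest RAM split evenly across two NUMA cells, expressed in KiB
echo $((12 * 1024 * 1024 / 2))
# prints 6291456, matching the memory= value in each <cell>
```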
    1 point
  4. Well, you certainly don't want to override it with unifi.yourdomain.com. The Controller Hostname/IP should contain the IP address of your unRAID server if you are running the container in bridge mode on the host server (some find they need to run it in host networking mode initially to even do adoption). If you have the container assigned its own IP address on a VLAN or custom network, you should enter that IP address instead. Mine is in bridge mode on the host unRAID server, so I have the unRAID IP address as the Controller host.
    1 point
  5. My testing shows that setting up a NUMA configuration in your guest benefits memory speed but not really CPU performance on an AMD system. Best to leave threads=1 for slightly improved CPU performance over threads=2. Edit: ARGH! Somehow, the CPU Mode ended up set to Emulated instead of passthrough. I didn't make that change, but let me re-do these tests yet again.
    1 point
  6. This guest blog by @spx404 of SPX Labs is all about setting up a remote VPN backup server with Unraid. Check it out and let us know what you think of this setup or let him know if you have any questions here in the comments! If you enjoy this content, be sure to subscribe to @spx404's youtube channel! Read the full blog here: https://unraid.net/blog/remote-vpn-unraid-backup-server
    1 point
  7. Yes, you should.... Nope... it won't work out of the box. You need the 1.02b bootloader with VirtIO drivers, else it won't be any different.
    1 point
  8. Eating my own dog food here, got a WD My Book for a good price (looks to be HGST Helium drive) and got it connected via usb 3 and running preclear on it, so far so good! :-).
    1 point
  9. @limetech - I've been experiencing network drops on my 10G network card and, from my investigation, it's due to a memory leak in the driver which was patched in the 1.6.13 version. It looks like Unraid is loading 1.5.44. Ref: https://bugzilla.redhat.com/show_bug.cgi?id=1499321 - The meat of the discussion is just a little past halfway. Affects onboard NICs & PCIe cards.

    Nov 12 01:42:57 VM1 kernel: atlantic: link change old 10000 new 0
    Nov 12 01:42:57 VM1 kernel: br0: port 1(eth0) entered disabled state
    Nov 12 01:43:11 VM1 kernel: atlantic: link change old 0 new 10000
    Nov 12 01:43:11 VM1 kernel: br0: port 1(eth0) entered blocking state
    Nov 12 01:43:11 VM1 kernel: br0: port 1(eth0) entered forwarding state

    root@VM1:~# ethtool -i eth0
    driver: atlantic
    version: 5.3.8-Unraid-kern
    firmware-version: 1.5.44
    expansion-rom-version:
    bus-info: 0000:0a:00.0
    supports-statistics: yes
    supports-test: no
    supports-eeprom-access: no
    supports-register-dump: yes
    supports-priv-flags: no

    vm1-diagnostics-20191112-1903.zip
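Not Unraid-specific, but a quick way to sanity-check whether a reported version string predates the one said to carry the fix is a version-aware sort. This is just a sketch using the two versions mentioned above:

```shell
current="1.5.44"   # firmware-version reported by ethtool -i eth0
fixed="1.6.13"     # version said to contain the memory-leak fix

# sort -V orders version strings numerically; if "current" sorts first, it is older
oldest=$(printf '%s\n%s\n' "$current" "$fixed" | sort -V | head -n1)
if [ "$oldest" = "$current" ] && [ "$current" != "$fixed" ]; then
  echo "firmware predates the fix"
fi
```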
    1 point
  10. I think so, see https://wiki.unraid.net/Boot_Codes
    1 point
  11. You could go with a Rosewill or Norco case but if you want to do it right, buy a used Supermicro chassis. They are well worth the cost. A CSE-826 (2u, 12 x 3.5” drives) or CSE-743 (4u, 8 x 3.5” drives), would be some good options that meet what you want. CSE-836 and 846 are also great and give you a lot more room to expand. Do it right and buy something that’ll be dependable. Let me know if you’re interested and I can explain what some of the model differences are and ways to keep them cool and quiet.
    1 point
  12. I also found this (https://hub.docker.com/r/eafxx/rebuild-dndc) which seems to do about what I am looking for, I guess it would be nice to have this kind of functionality built into the OS though.
    1 point
  13. Have you tried booting in safe mode? Please also confirm the BIOS is up to date and check the BIOS power limit settings. And is the CPU power supply wired into the 4-pin / 8-pin connector?
    1 point
  14. Yes, then start array to run a parity sync. As you prefer, that's optional.
    1 point
  15. The current version of Unraid can be used with a Trial license which is valid for 30 days, is fully functional, and is intended to allow you to validate that Unraid runs OK on your hardware and meets your needs. The trial license can be extended once for a further 30 days. The version of Unraid that ran for free with a limited number of drives was v5. It has been superseded for many years by v6, so no new user would be expected to use it. It was limited to 32 bits and had far less functionality than current Unraid.
    1 point
  16. No one will pay attention to the package installation comments, and I doubt many users even understand what nmap is or does. A package won't be installed by UD unless it is specified in the plugin; it is downloaded using a secure link, and the MD5 checksum is verified. All UD packages are hosted on my secure GitHub. They are not downloaded randomly from the Internet. I think you are overreacting a bit here. I have added a parameter to the "UD Settings" that will allow you to remove the nmap package if it is that much of a problem for you. It is removed when the array is started, so you have to reboot or stop and start the array for it to take effect. nmap is installed by default when the plugin is installed.
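For illustration, the kind of checksum verification described (download, then check the MD5 before trusting the package) looks roughly like this. The file and hash here are stand-ins, not the actual UD package:

```shell
# Stand-in values: this is the MD5 of empty input, used purely as an example,
# not the checksum of any real UD package
expected="d41d8cd98f00b204e9800998ecf8427e"
actual=$(printf '' | md5sum | awk '{print $1}')

if [ "$actual" = "$expected" ]; then
  echo "checksum OK"
else
  echo "checksum mismatch - discard the package"
fi
```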
    1 point
  17. I see many people commenting that they always change their VM XML from something like the following to improve system performance. I've done the same, but I came across Microsoft's "coreinfo" utility, which revealed that the VM wasn't actually seeing any hyper-threaded CPUs. So I decided to benchmark it using the following topologies on an existing Win10 VM.

    Iteration 1: <topology sockets='1' cores='32' threads='1'/>
    Iteration 2: <topology sockets='1' cores='16' threads='2'/>

Based on my testing, I do not see any improvements. I used Cinebench R20 & PerformanceTest 9.0, running each ten times and taking the max & average score, discarding any test that scored too far off the high & low end of the variance (difference between the high & low scores) until the variance fell into an acceptable value based on my observations of running these benchmarks dozens of times. I picked these because I needed a couple and they were quick & easy to set up (I lost count of how many Win10 VMs I've set up in the past month). For the initial test, I ran with cores=32 threads=1 and then tried to get a close or better variance on the cores=16 threads=2 test.

In my scenario, I'm running a Threadripper 2 2990WX with CPUs 0-15 & 32-47 pinned (NUMA nodes 0 & 2, which have the PCIe & RAM attached) and the emulator pin on CPU 16 (NUMA node 1). This particular config is my intended use case for video broadcasting using Livestream Studio, outputting a 1080p@60fps stream to YouTube plus at least two NDI streams of a 1080p@60fps video feed to be consumed by other VMs/PCs. OS is Windows 10 fully updated, GPU is a Quadro P2000.

    Benchmark  CB R20  PT CPU  PT RAM | CB R20  PT CPU  PT RAM
    CPU Topo   1/32/1  1/32/1  1/32/1 | 1/16/2  1/16/2  1/16/2
    Average    6572    20944   1261   | 6515    20831   1257
    Highest    6620    21085   1263   | 6443    20873   1258
    Lowest     6537    20810   1255   | 6484    20805   1254
    Variance   83      275     8      | 59      68      4

The average scores are lower in the 1/16/2 config, but they're also tighter.
I'm currently running this test again passing in the numa configuration to match the host. Past test runs have shown marked improvements.
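The "variance" figures quoted above are simply the highest score minus the lowest. For the 1/32/1 column, for instance:

```shell
# Variance as used above = highest score minus lowest score (1/32/1 column)
echo $((6620 - 6537))   # CB R20  -> 83
echo $((21085 - 20810)) # PT CPU  -> 275
echo $((1263 - 1255))   # PT RAM  -> 8
```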
    1 point
  18. The nmap package was updated by me. The repository was not compromised. This has been discussed ad nauseam. I removed the package, and then I agreed to leave nmap in as a convenience to users. I won't remove the package from the plugin. Conditional plugin installation is not easy. I'm not sure what could be done except your manual removal.
    1 point
  19. If there was a version update, sometimes they will replace the config with a vanilla one. I always just make a backup before restarting the Docker and restore over the vanilla. I do know that some options change on occasion when versions do, so you may have to recreate a new config based on the vanilla. Definitely not a problem with the Docker. We played our 28-day horde last night with my crew of 6, and they took down our entire base! So much fun.
    1 point
  20. I would set up a VPN server on unRAID and then a client on your laptop or remote computer to establish a secure VPN connection between the remote computer and the unRAID server. Then you can do whatever you want across the VPN tunnel.
    1 point
  21. You were exactly right. Started looking into some other issues I had, and there was something wrong with the new disk I inserted. Not sure if my failed disk wrote corrupted data to parity or what exactly happened, but at the end of the day I had to re-format my new disk. Some data was certainly lost, but I will see if I can get anything off the old disk or restore from backups. The good news is my stuff is working again. Thanks anyhow!
    1 point
  22. The metadata location (as in /config) is shown in the Unraid web UI: on the Docker tab, left-click Plex, select Edit, click 'Advanced View' (top right), then click 'Show more settings', and you should then see the host path defined for container path /config.
    1 point
  23. For my primary storage/docker server 'Sector001'
    1 point
  24. After you get things square again, be sure to set up Notifications to alert you immediately by email or other agent as soon as Unraid detects a problem. Probably Unraid would have been nagging you the whole time you were letting this first problem go. It might also be a good idea to install the Fix Common Problems plugin. I know it would have been nagging you every single day about that. And if you ever have any trouble again, please ask on the forum instead of trying random things, which only makes things worse.
    1 point
  25. Try ddrescue, it should recover most data.
    1 point
  26. Great interview! Fantastic videos, they got me into unRAID and I watch every one you release @SpaceInvaderOne. A live stream would be great, you should do a regular slot! Neat setup with the servers, it gave me some ideas for my three servers. Keep up the excellent work.
    1 point
  27. Awesome interview... really looking forward to future videos. BTW - pfSense in an Atari 800xl... priceless...
    1 point
  28. Great interview, I look forward to all the future has to bring from Spaceinvader One & unRAID!
    1 point
  29. Same here. Would not have found and built my Unraid environments without Spaceinvader and Linus. Both contributed in different ways: ideas, methods, etc.
    1 point
  30. Thanks Unraid for the interview, and especially thanks @SpaceInvaderOne for all you've contributed to the community! Without you I doubt I would have ever been able to get my own home server running let alone built or even considered in the first place. I've watched so many of your videos and they are all truly helpful. Please don't ever stop making them! 😊
    1 point
  31. iommu=pt also worked for me!! Holy f_ck, there was some heavy breathing going on when ALL my drives were gone after the upgrade, I don't mind telling you, haha. Awesome to have it solved now, thanks everyone.
    1 point
  32. I don't use Unraid yet, but I can confirm I have the IPMI VGA remote console working alongside working iGFX Quick Sync or other 3D offloading tasks. This works with either a physical Windows machine or in ESXi, where you set up the Intel iGFX for passthrough so your VM can use Quick Sync (tested with a Windows VM). I was not able to achieve a simultaneous picture from the IPMI VGA remote console and an iGFX-connected digital display, though. Either IPMI VGA has an active display or iGFX, but not both at the same time. I guess this is really a BIOS limitation, but at least you should be able to use Quick Sync with the IPMI VGA remote console.

    X11SCA-F BIOS settings:
    Primary Display: PCI
    Internal Graphics: Enable

After a reboot, the IGFX GOP Version is populated in the BIOS. Other observations with this board someone could use: passthrough of the motherboard AHCI controller is possible with ESXi, but only when you boot your VM with BIOS; with EFI, the VM gets powered off. You can use HDD ATA Security with compatible drives (User HDD Password).
    1 point
  33. You mean something like this? Copy the file to your flash drive (named something like background.jpg), then add the following to your go file (in the config folder on the flash drive) before the emhttp line:

    cp /boot/background.jpg /usr/share/slim/themes/default/background.jpg
    1 point
  34. Update: in case somebody else has to do this, the following steps worked for me, and the VMs are running stable so far. Installing the drivers before converting/moving the disk is not necessary.
    - Create a backup of the servers
    - Shut down the servers and convert the vdisks to raw format (I used qemu-img with the following command):
      .\qemu-img convert -O raw "D:\location\disk.vhdx" 'G:\location\vdisk1.img'
    - Create a new VM in unRAID for Server 2016 and disable automatic start
    - Replace vdisk1 with the converted disk. Set the disk type to SATA
    - Add a second disk with 1M size and raw format
    - Add the newest VirtIO drivers ISO to the VM (https://fedoraproject.org/wiki/Windows_Virtio_Drivers)
    - Start the VM and install the VirtIO drivers from the device manager (same way as shown in the guide below)
    - Replace the GPU drivers and install the guest driver (as shown in the guide below)
    - Stop the VM, remove disk 2, and change disk 1 to raw format
    Done
    1 point
  35. Fix is easy: Install Fix Common Problems Plugin, then hit Tools - Docker Safe New Permissions. But, going forward, how were the file(s) created in the first place?
    1 point