NathanR

Everything posted by NathanR

  1. Plex server crashed again. This is why I'm hesitant to move Plex/Unifi/Syncthing/etc. over. I set up syslog with Kiwi; hopefully I can catch the error now. https://documentation.solarwinds.com/archive/pdf/kss/kss_getting_started_guide.pdf
  2. VM is now running d-tools server on Win10 x64 Pro. The server has been up for 8 days since the upgrade to 6.11. Hoping for lots of stability now that I have something production-esque running on the server. Next up: the Plex server, video-card passthrough, and adding HDDs.
  3. Expanded my VM's HDD (vdisk1.img) from 30GB to 60GB, then deleted the recovery partition so I could extend the OS partition from 30GB to 40GB. https://www.partitionwizard.com/partitionmagic/delete-recovery-partition.html
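For anyone following along, growing the vdisk itself can be done from the Unraid shell with qemu-img. This is a sketch, not the exact commands used in this post: the path is a hypothetical domains-share location, the VM must be shut off first, and the partition still has to be extended inside the guest afterwards.

```shell
# Sketch: grow a raw Unraid vdisk from 30G to 60G (hypothetical path; VM must be off).
VDISK="/mnt/user/domains/Windows 10/vdisk1.img"   # assumed location on the domains share
qemu-img info -f raw "$VDISK"                     # confirm the current virtual size first
qemu-img resize -f raw "$VDISK" 60G               # grow in place; shrinking requires --shrink
# Then extend the partition inside the guest (e.g. Disk Management on Windows).
```

Growing is safe in place; it is shrinking a raw image that risks data loss, which is why qemu-img makes you pass `--shrink` explicitly.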
  4. Brainstorming/thoughts post...
Converting my .xva (XCP-ng) VMs into Unraid's format (qcow2? raw?). Maybe this? https://forums.unraid.net/topic/69244-convert-vmware-vm-to-kvm/
Converting my physical disk into an Unraid VM disk: https://kmwoley.com/blog/convert-a-windows-installation-into-a-unraid-kvm-virtual-machine/ https://kmwoley.com/blog/reduce-shrink-raw-image-img-size-of-a-windows-virtual-machine/ A little worried I can't shrink my 500GB NVMe drive down to a manageable 50GB, lol.
Creating a new W10 VM. I was looking at https://wiki.unraid.net/Manual/VM_Management#Basic_VM_Creation but that is wrong; these are the correct steps to install the HDD driver: https://wiki.unraid.net/index.php/UnRAID_Manual_6#Installing_a_Windows_VM Latest virtio drivers: https://github.com/virtio-win/virtio-win-pkg-scripts/blob/master/README.md
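A note on the .xva idea above: an .xva is a tar-based XCP-ng export, so the disk first has to be reassembled into a normal image (third-party tools such as xva-img can do this). Once you have a qcow2 (or vmdk), the generic conversion to Unraid's raw vdisk format looks roughly like this sketch; the filenames and destination path here are placeholders, not from this post.

```shell
# Sketch: convert an intermediate qcow2 to the raw format Unraid's VM manager expects.
SRC="myvm.qcow2"                           # placeholder: image recovered from the .xva export
DST="/mnt/user/domains/myvm/vdisk1.img"    # placeholder destination on the domains share
qemu-img convert -p -f qcow2 -O raw "$SRC" "$DST"   # -p shows progress; use -f vmdk for VMware images
qemu-img info "$DST"                       # sanity-check the result before booting it
```

The resulting vdisk1.img can then be pointed at from a new VM template in the Unraid GUI.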
  5. I have an NH-D9L on the CPU now and an NF-A4x10 PWM on the X570 chipset, which gets super hot. (ASRock really should have put a fan and a larger cooler on the X570 chipset.)

PWM    CPU RPM    X570 RPM
100%   1900       5000
75%    1500       4000
50%    1100       2700
25%    600        1600
  6. Update: now running Version 6.11.0 2022-09-23. Perl & iperf3 are included with the kernel, so...yay. My 10G NICs are confirmed working; woot! I was having issues confirming speeds from within the W10 VM. Also, I have my 1T 980s in what I think is a RAID 1 pool, but I've read conflicting posts about whether it is operating properly or not; pressing "rebalance as RAID 1" does nothing. I also raised the temp warning/error thresholds to 85C so I don't get the annoying Samsung 980 bug. I really wish they would do a firmware release. BTW...how do I do an NVMe firmware update? Oh! And I'm super happy that the temp sensors work out of the box now (well, there is a tiny config todo). Perl, or the Linux kernel with AMD tunings, whatever; don't care, I'm happy. I confirmed the readings by looking at IPMI & running P95 in a VM: CPU @ 70C and X570 @ ~50C, lol.
  7. Thanks! I haven't had time to look into why it crashed the second time. Any pointers on where to look in the logs? I read that the logs don't show pre-boot logs, IIRC. Idk, I'll investigate in the future. Re: RC - I completely agree; I usually don't run RC stuff, but I saw all the significant improvements (in general & for X570/kernel) and wanted to try it out. My server right now is just a test bench running things that aren't critical (yet), just so I can get used to Unraid in a relaxed environment. That said, I am in the process of moving computer cases, installing 10G, and swapping drives, so I'll be ready soon to move the server toward production. Edit: Just read your story. I know now that I am in good company.
  8. Hi Spencer, good copy, thank you. I am now the proud owner of Pro. I am excited for what the future holds!
  9. TL;DR: Purchase license now, activate later (when ready?) Downside to upgrading Basic > Pro? Are there any downsides to purchasing a Basic license and then upgrading to Pro? It seems I can save $14.70 by going from Basic to Pro. https://unraid.net/upgrade-sale I have no intention of purchasing any license other than Pro; I just want to take advantage of the sale before it ends at midnight. Still, it seems silly not to offer the discount on a straight purchase when upgrading (maybe, hence my question) does the same thing. I presume the only downside is two license keys and two purchase receipt emails? --- I have 8 days left on my trial, and the server is running fairly well. I have plenty still to do, but I am very happy with the setup thus far. I really like the Unraid community, the plugins/apps/dockers/support, adding drives slowly, etc. I also love the cost model: one and done. My favorite. I'd do anything to avoid SaaS; I freaking hate that stuff. I know subscriptions are better for a company's cash flow, but I am so thankful for this model. Thank you Lime Tech. I wasn't intending to license it yet because I wanted to have it set up the way I wanted and then transfer to the new USB drive. [I believe the license only works on one flash drive, and I don't want to license this flash drive; I'd rather transfer it. But I also want to test out the new flash drive, etc. I don't know what to do.] Can I purchase the licenses now and 'activate' them later?
  10. Had a crash even after fixing the idle current; I'll probably disable global C-states now. Upgraded to 6.11-RC2 to see if the system is more stable for X570 builds. Tried to get temps/sensors working and found out the NerdTools plugin for Perl is broken on 6.11; hopefully we see an update soon. https://www.reddit.com/r/unRAID/comments/w9647x/unraid_6110rc1_now_available_notes_in_comments/

Version 6.11.0-rc2 2022-07-27

Improvements: With this release there have been many base package updates, including several CVE mitigations. The Linux kernel update includes mitigation for Processor MMIO stale-data vulnerabilities. The plugin system has been refactored so that 'plugin install' can proceed in the background; this alleviates the issue where a user thinks installation has crashed and closes the window when it actually has not. Many other webGUI improvements.

Bug fixes: Fixed issue in VM manager where the VM log cannot open when the VM name has an embedded '#' character. Fixed issue where parity check pause/resume on schedule was broken.

Change Log vs. Unraid OS 6.10

Base distro: aaa_base 15.1, aaa_glibc-solibs 2.35, aaa_libraries 15.1, adwaita-icon-theme 42.0, appres 1.0.6, at-spi2-core 2.44.1, atk 2.38.0, bind 9.18.5, btrfs-progs 5.18.1, ca-certificates 20220622, cifs-utils 6.15, coreutils 9.1, curl 7.84.0, dbus 1.14.0, dmidecode 3.4, docker 20.10.17 (CVE-2022-29526, CVE-2022-30634, CVE-2022-30629, CVE-2022-30580, CVE-2022-29804, CVE-2022-29162, CVE-2022-31030), editres 1.0.8, etc 15.1, ethtool 5.18, file 5.42, findutils 4.9.0, freeglut 3.2.2, freetype 2.12.1, fribidi 1.0.12, fuse3 3.11.0, gdbm 1.23, gdk-pixbuf2 2.42.8, git 2.37.1, glib2 2.72.3, glibc 2.35, gnutls 3.7.6, gptfdisk 1.0.9, harfbuzz 5.0.1, hdparm 9.64, htop 3.2.1, icu4c 71.1, inotify-tools 3.22.6.0, iproute2 5.18.0, iptables 1.8.8, json-c 0.16_20220414, kernel-firmware 20220725_150864a, kmod 30, libX11 1.8.1, libXcursor 1.2.1, libaio 0.3.113, libcap-ng 0.8.3, libdrm 2.4.110, libepoxy 1.5.10, libevdev 1.12.1, libgcrypt 1.10.1, libgpg-error 1.45, libidn 1.41, libjpeg-turbo 2.1.3, libmnl 1.0.5, libnetfilter_conntrack 1.0.9, libnfnetlink 1.0.2, libnftnl 1.2.2, libnl3 3.6.0, libtiff 4.4.0, liburcu 0.13.1, libusb 1.0.26, libxcb 1.15, libxkbcommon 1.4.1, libzip 1.9.2, listres 1.0.5, logrotate 3.20.1, lsof 4.95.0, lzip 1.23, mc 4.8.28, mcelog 184, mkfontscale 1.2.2, nano 6.3, nettle 3.8, nfs-utils 2.6.1, nghttp2 1.48.0, ntfs-3g 2022.5.17, oniguruma 6.9.8, openssh 9.0p1, openssl 1.1.1q (CVE-2022-1292, CVE-2022-2097, CVE-2022-2274), openssl-solibs 1.1.1q (CVE-2022-1292), pango 1.50.8, pciutils 3.8.0, pcre2 10.40, php 7.4.30 (CVE-2022-31625, CVE-2022-31626), rsync 3.2.4, samba 4.16.4 (CVE-2022-2031, CVE-2022-32744, CVE-2022-32745, CVE-2022-32746, CVE-2022-32742), setxkbmap 1.3.3, shared-mime-info 2.2, sqlite 3.39.2, sudo 1.9.11p3, sysfsutils 2.1.1, tdb 1.4.7, tevent 0.12.1, tree 2.0.2, util-linux 2.38, wget 1.21.3, xauth 1.1.2, xclock 1.1.1, xdpyinfo 1.3.3, xfsprogs 5.18.0, xkeyboard-config 2.36, xload 1.1.4, xmodmap 1.0.11, xsm 1.0.5, xterm 372, xwud 1.0.6

Linux kernel: version 5.18.14 (CVE-2022-21123, CVE-2022-21125, CVE-2022-21166); oot: md/unraid: version 2.9.23; CONFIG_IOMMU_DEFAULT_PASSTHROUGH (Passthrough), CONFIG_VIRTIO_IOMMU (Virtio IOMMU driver), CONFIG_X86_AMD_PSTATE (AMD Processor P-State driver), CONFIG_FIREWIRE (FireWire driver stack), CONFIG_FIREWIRE_OHCI (OHCI-1394 controllers), CONFIG_FIREWIRE_SBP2 (Storage devices, SBP-2 protocol), CONFIG_FIREWIRE_NET (IP networking over 1394), CONFIG_INPUT_UINPUT (User level driver support), CONFIG_INPUT_JOYDEV (Joystick interface), CONFIG_INPUT_JOYSTICK (Joysticks/Gamepads), CONFIG_JOYSTICK_XPAD (X-Box gamepad support), CONFIG_JOYSTICK_XPAD_FF (X-Box gamepad rumble support), CONFIG_JOYSTICK_XPAD_LEDS (LED support for Xbox360 controller 'BigX')

Management: rc.nginx: enable OCSP stapling on certs which include an OCSP responder URL; rc.wireguard: add better troubleshooting for WireGuard autostart; rc.S: support early load of plugin driver modules; upc: version v1.3.0

webgui: Plugin system update (detach frontend and backend operation; use nchan as communication channel; allow window to be closed while backend continues; use SWAL as window manager; added multi-remove ability on Plugins page; added update-all-plugins with details)
webgui: docker: use docker label as primary source for WebUI (makes the 'net.unraid.docker.webui' docker label the primary source when parsing the web UI address; if the label is missing, the template value is used instead)
webgui: Update Credits.page
webgui: VM manager: fix VM log cannot open when VM name has an embedded '#'
webgui: Management Access page: add details for self-signed certs
webgui: Parity check: fix regression error
webgui: Remove session creation in scripts
webgui: Update ssh key regex (add support for ed25519/sk-ed25519; remove support for ecdsa (insecure); use proper regex to check for valid key types)
webgui: misc. style updates
webgui: Management access: HTTP port setting should always be enabled
webgui: Fix: preserve VNC port settings
webgui: Fix regression error in plugin system
webgui: Fix issue installing registration keys
webgui: Highlight case selection when custom image is selected
webgui: fix(upc): v1.4.2 apiVersion check regression
  11. Ran into the dreaded Ryzen Linux power-states issue. I was curious whether I was going to have a problem with it or not. I had the BIOS set for typical idle current, but when I replaced the IPMI/BMC chip I think it got reset somehow. I haven't needed to disable global C-states yet, but I am curious whether it will be necessary once I have more going on. My Threadripper TrueNAS build definitely needed the fix and would reboot constantly before I fixed those issues. --- Solved via the FAQ for v6; it is a very good post that quickly explains the various issues running Ryzen and the acceptable parameters.
  12. brave://flags/#brave-debounce fixed it. https://community.brave.com/t/random-crash-after-sorting-a-page/416551/8?u=lightingman117
  13. Been running Unraid in Chrome (.114) while using Brave (.134) otherwise. Finally got a crash in Brave on B&H's website, so the latest Chromium must be broken. Sorry to bother.
  14. The preclear signature was lost on the 20T, so now it will have to run again. I wonder if this has to do with Unassigned Devices or Dynamix File Manager being installed? --- Finished my VM vs. HW testing. Interesting results... Overall, not a huge difference; fairly impressive, IMO. The drive speed differences are rather strange. (Strange because image 1 is the passthrough and image 2 is hardware, lol.) Who cares, tho.
  15. First things first: I love the GUI. It is fast, simple, and logical. Thank you for making an awesome product! --- My browser just started crashing today. I powered down my server to replace the BMC chip; when I powered it back up, the browser started doing this. It happens when I am clicking around my Unraid server GUI (the webpage at the server's IP address). It happens randomly, pinned tab or not. Cleared cookies. 2022-07-20 - Brave - Version 1.41.99, Chromium: 103.0.5060.134 (Official Build) (64-bit) --- Where should I look in the logs to tell whether this is a browser (Chromium update) issue or an Unraid issue? --- Testing: it does not seem to crash on Chromium 103.0.5060.114. pfSense webpage: no crash. MikroTik webpage: no crash. ASRock Rack IPMI webpage: no crash.
  16. In other news... I debated on preclearing and eventually decided to do it. WD's new 20T Gold drives are amazing: 295 peak read.
  17. No matter what I did, I could not get the qcow2 file to launch after the EFI shell came up. I googled for hours, finally finding that as the fix (after seeing stuff about CSM and Secure Boot, all for older versions 4.x/5.x, ~2019).
  18. Plugins:
Community Applications - https://forums.unraid.net/topic/38582-plug-in-community-applications/
Unassigned Devices (and Plus) - https://forums.unraid.net/topic/92462-unassigned-devices-managing-disk-drives-and-remote-shares-outside-of-the-unraid-array/
Unassigned Devices Preclear - https://forums.unraid.net/topic/120567-unassigned-devices-preclear-a-utility-to-preclear-disks-before-adding-them-to-the-array/
Dynamix File Manager - https://forums.unraid.net/topic/120982-dynamix-file-manager/
Dockers:
DiskSpeed - https://forums.unraid.net/topic/70636-diskspeed-hard-drive-benchmarking-unraid-6-version-292/
ESPHome (until I get the VM running) - https://forums.unraid.net/topic/72033-support-digiblurs-docker-template-repository/
Home-Assistant-Core - https://registry.hub.docker.com/r/homeassistant/home-assistant/
netdata - https://forums.unraid.net/topic/47828-support-data-monkey-netdata/
---
Home-Assistant-Core doesn't have the add-ons module (dockers) because it is itself a docker. There's also the linuxserver-developed Core (docker) version? Idk; too many limitations, so I decided a VM was the best route. Unfortunately, HA is now set up as Core and I need to move the files to the VM...how? Idk. But I know Dynamix is helping me amazingly with all of that. Unfortunately, installing HA as a VM was rather troublesome...
qemu-img convert -p -f qcow2 -O raw -o preallocation=off "/mnt/user/domains/Home Assistant/haos_ova-8.2.qcow2" "/mnt/user/domains/Home Assistant/vdisk1.img"
Solved here though:
---
I am slowly condensing my data onto the old HDDs [1x 16T, 2x 10T, 2x 2T] while running pre-checks on the new drive(s). Ordering 10G components (the switch and server already have 10G, but I need a card for my desktop (5950X/3070/64G/NVMe)). The 1T 980 NVMe cache drives arrived [need to look up the temp issue and apply the fix again].
Todo:
-5900X
-Setup cache
-Move HA-Core config
-Condense
-10G
-Move data
-Preclear
-DNS docker
-Test failure modes
  19. @Arbadacarba was instrumental in solving my VM issues. Thank you. \/ \/ \/ \/ See fixes here \/ \/ \/ \/ Having BMC issues with my X570D4U-2L2T; somehow it lost the 1.2 image. --- On a happier note, I ordered on Prime Day: -5900X -2x 980 1T. I plan on using the 5900X for H265 encoding and the 980s to store VMs, do write caching, etc. Yes, I plan on having 10G NICs to utilize the throughput. --- Here are all the apps I have installed so far: I have long wanted to run Home-Assistant!!! I CAN FINALLY DO IT YEASSSSSSS PLZZZZZZ THANK YOU Unraid! Plex is next, I guess?
  20. Holy crap, that worked! Thank you!!! I figured out that I needed a few things first, though. Found this guide: https://exitcode0.net/how-to-pass-through-a-drive-to-a-unraid-vm/ Which led me to this plugin/app: https://forums.unraid.net/topic/92462-unassigned-devices-managing-disk-drives-and-remote-shares-outside-of-the-unraid-array/ Which led me to trying to figure out what my drive ID was, until I finally found out that I could just use /dev/nvme0n1. This also fixed the extra "vDisk Size" & "Type" lines that normally show up.
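For anyone landing here later, the resulting libvirt disk entry looks roughly like this. This is a sketch, not the exact XML from this post: /dev/nvme0n1 is the device mentioned above, while the target dev name and virtio bus are assumptions.

```xml
<!-- Sketch: hand the whole NVMe block device to the VM as a raw virtio disk -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/nvme0n1'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

Passing the block device this way avoids PCIe passthrough entirely, which is why the vDisk Size/Type fields stop appearing in the template.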
  21. Figured I could chronicle my journey a bit and help anyone in the future, or at least have a place for reference and input, as I have no idea what I'm doing.

Build:
Model: Custom
M/B: ASRock Rack X570D4U-2L2T
BIOS: American Megatrends International, LLC. Version P1.40, dated 05/19/2021
BMC: 1.2
CPU: AMD Ryzen 9 3900X 12-Core @ 3800 MHz
HVM: Enabled
IOMMU: Enabled
Cache: 768 KiB, 6 MB, 64 MB
Memory: 64 GiB DDR4 Multi-bit ECC (max installable capacity 128 GiB)
Network: bond0: fault-tolerance (active-backup), mtu 1500; eth0: interface down; eth1: 1000 Mbps, full duplex, mtu 1500; eth2: interface down; eth3: interface down
Kernel: Linux 5.15.46-Unraid x86_64
OpenSSL: 1.1.1o
Uptime: ~3 hrs
Previous (noob getting started) thread:
---
Me trying to figure out how to pass through an NVMe drive in 6.10.3. I might just give up and run XCP-ng with Unraid nested? Or I guess I don't really need NVMe passthrough; I could just install a fresh Win10...but I have a working image with all my files, boo. Idk, will revisit slowly...we'll see what Ghost82 says.

Then I went down the rabbit hole of which USB boot drive to use, 64GB drives, and which port to plug it into... Ugh, please just make this bootable from a standard HDD so the USB nightmare can go away. I read that a 64GB drive formatted FAT32 with something like guiformat.exe will still work. I read that the Samsung BAR Plus is the best drive (as of 2022-07-12) for this. I have some consternation given the reviews of the BAR Plus's longevity and the lack of USB 2.0 drives. I'd rather have a list of tested compatible drives, or better yet a USB drive sold by Lime Technology...but it's all good.
---
Current plan is to trade for/buy a 5900X and run that, using my old 3900X until then. I want some cache drives...but I am seeing now that there are issues even with that.

Me complaining: Ugh...WHY is everything so complicated?!?! I read that Unraid was simple and 'just worked'. FFFFFFF's in chat!!!

Seems this is the fix: nvme_core.default_ps_max_latency_us=1500
---
Next is figuring out drive structure, folder structure, cache drives, etc.
---
I'd like some input on the most common first things y'all do on an install... I've heard of zeroing or wiping or clearing drives...what's that? What dockers, apps, and VMs do you typically run? What's your folder/drive structure and why do you like it? Etc.
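Applying that fix on Unraid means appending the parameter to the kernel `append` line in the boot config. A sketch, assuming a stock install where syslinux.cfg lives on the flash at /boot/syslinux/syslinux.cfg (you can also edit the same line from Main > Flash in the GUI); back the file up first.

```shell
# Sketch: add the Ryzen NVMe power-state workaround to the kernel boot line.
CFG="/boot/syslinux/syslinux.cfg"   # stock Unraid location on the flash drive
cp "$CFG" "$CFG.bak"                # keep a backup next to it
# Append the parameter to each 'append' line that doesn't already contain it.
sed -i '/^[[:space:]]*append /{/nvme_core\.default_ps_max_latency_us/!s/$/ nvme_core.default_ps_max_latency_us=1500/}' "$CFG"
grep 'default_ps_max_latency_us' "$CFG"   # verify, then reboot for it to take effect
```

The negated inner address makes the edit idempotent, so re-running the script won't stack duplicate parameters.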
  22. Hmm, I like that approach. I need to do some thinking about my data, I think. I presume you're running Plex and that works well? Each drive has separate content, but is it all one share or multiple different shares? I'll admit, having the location agnostic to the share is new territory for me.
  23. Joy... On a good note, Ghost82 seems to be a wizard, even while using a translator!? I am trying to pass an Intel 512 NVMe into a VM. It already has W10 installed, so I wanted to do some basic bare-metal vs. VM testing for funsies (new to Unraid, just trying things out to learn...). Reading through all the threads and editing XML made my head hurt. I thought I had it figured out (the BAR error goes away), except the alias for the NVMe disappears upon save/update. Edit: Ahh, got to the bottom of the thread you sent about WiFi dongles... I see Ghost82's statement from June 20th saying this is borked. Sad... I really wanted VMs to work/be easy in Unraid. I might go back to my idea of using XCP-ng and passing through to Unraid. But...that also has its own annoyances. I'm sure normal VMs work just fine?
  24. I don't know if necro is allowed, but this thread is closest to what I need help with. Maybe I should start a new thread? I figured out the qemu stuff, but now the VM reports the alias is not found. Well...every time I put my alias in the hostdev block, it gets eaten by whatever script is cleaning up the XML. How do I make it stop eating my alias line?

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x23' slot='0x00' function='0x0'/>
  </source>
  <alias name='intel512'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</hostdev>
<memballoon model='none'/>
</devices>
<qemu:commandline>
  <qemu:arg value='-set'/>
  <qemu:arg value='device.intel512.x-msix-relocation=bar2'/>
</qemu:commandline>
<qemu:capabilities>
  <qemu:del capability='device.json'/>
</qemu:capabilities>
</domain>

[8086:f1a6] 23:00.0 Non-Volatile memory controller: Intel Corporation SSD Pro 7600p/760p/E 6100p Series (rev 03)
This controller is bound to vfio; connected drives are not visible.
Version 6.10.3 2022-06-14
Libvirt version: 8.2.0
  25. Not to be argumentative, but isn't Unraid most similar to RAID 4? I was mostly talking out the answer to myself; thanks for listening. That is indeed an incredibly low capacity loss to overhead once you get the number of disks higher!
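To put a number on that: with single parity, one disk out of N total is given up, so the overhead is 100/N percent and shrinks as the array grows. A quick sketch (disk counts here are arbitrary examples, not from this thread):

```shell
# Single-parity overhead: 1 parity disk out of N total, i.e. 100/N percent of raw capacity.
for n in 4 8 12 24; do
  awk -v n="$n" 'BEGIN { printf "%2d disks: %4.1f%% lost to parity\n", n, 100 / n }'
done
```

So a 4-disk array gives up 25% of raw capacity, while a 24-disk array gives up only about 4%.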