1812

Everything posted by 1812

  1. side note: you can get around the halted boot process for missing devices by disabling them in the bios.
  2. Why does it keep installing the newest Nvidia driver after every update? I use a gt for Plex and it always rolls to the latest instead of keeping me on 470.129.06, which is the one that works for this card.
  3. FWIW I have 2 HP ML30 Gen 9 servers updated to 6.10.2, both with dual:

     02:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
     02:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe

     I did the "un-blacklist" procedure and have experienced no errors. I will also add that the Broadcom controllers are not eth0 in either system, and as far as I know there have been no reports of issues with this model of HP server.
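For anyone checking whether their box has the same controller, a quick way is to count BCM5720 ports in lspci output. The sample text below is the listing from the post above; on a live box you would pipe real `lspci` output through the same grep.

```shell
# Count BCM5720 ports. Sample data is the listing quoted in the post;
# on a running server, use: lspci | grep -c BCM5720
lspci_sample='02:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
02:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe'
printf '%s\n' "$lspci_sample" | grep -c 'BCM5720'   # prints 2
```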
  4. Spoke too soon, having a problem with my gpu, which went from functional to not. Log shows:

     May 19 13:46:23 Tower kernel: nvidia-nvlink: Unregistered the Nvlink Core, major device number 245
     May 19 13:46:23 Tower kernel: nvidia-nvlink: Nvlink Core is being initialized, major device number 245
     May 19 13:46:23 Tower kernel: NVRM: The NVIDIA GeForce GT 730 GPU installed in this system is
     May 19 13:46:23 Tower kernel: NVRM: supported through the NVIDIA 470.xx Legacy drivers. Please
     May 19 13:46:23 Tower kernel: NVRM: visit http://www.nvidia.com/object/unix.html for more
     May 19 13:46:23 Tower kernel: NVRM: information. The 510.73.05 NVIDIA driver will ignore
     May 19 13:46:23 Tower kernel: NVRM: this GPU. Continuing probe...
     May 19 13:46:23 Tower kernel: NVRM: No NVIDIA GPU found.

     It appears that my GPU driver was automatically updated and now I have to roll back to the 470.xx option in the Nvidia plugin. Ok, I guess... a bit of an annoyance to have to fix something like this that was working just fine.
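A minimal sketch of how a script could catch this before Plex breaks: grep the syslog for the NVRM "Legacy drivers" warning that appears when the pinned 470.xx driver has been replaced. The sample line is taken from the log above.

```shell
# Detect the NVRM legacy-driver warning in a syslog line. The line below is
# copied from the log in the post; in practice you would grep /var/log/syslog.
logline='May 19 13:46:23 Tower kernel: NVRM: supported through the NVIDIA 470.xx Legacy drivers. Please'
if printf '%s\n' "$logline" | grep -q '470\.xx Legacy'; then
  echo 'legacy driver required'   # fires when the warning is present
fi
```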
  5. updated 2 machines. first one no problems. Second one had all disks in the 2 pools "missing" as the device names had changed (those using an H240). I noted the disk positions/assignments, used new config and preserved the array disks, went back to the main tab and re-assigned the pool disks, marked parity as correct, and started the array. Normal operation as expected with no loss of data (also expected).
  6. First post updated with instructions for 6.10.
  7. I already listed my hardware specs. So moving on: first, don't write to the array; if you have to, make sure you're on reconstruct write. For cache, use ssd/nvme cache disks if you want the fastest performance. Second, see here: Try that and see if your experience changes. *Note* I don't have this setting on anything but a fast network share. The speeds I posted yesterday are just from a basic share that uses a cache drive.
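For reference, "reconstruct write" is the md_write_method tunable under Settings > Disk Settings. Assuming the stock Unraid `mdcmd` helper, a hedged sketch of toggling it from the console looks like this (verify the values against your release before relying on it):

```shell
# Assumption: stock Unraid mdcmd helper; "1" selects reconstruct write
# (a.k.a. turbo write), "0" returns to the default read/modify/write.
#   /usr/local/sbin/mdcmd set md_write_method 1
# Revert when done with the bulk writes:
#   /usr/local/sbin/mdcmd set md_write_method 0
```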
  8. M1 MacBook Pro > OWC Thunderbolt Pro Dock > 10gbe ethernet cable > mikrotik 4 port 10gbe switch > direct attach copper cable > mellanox connectx-2 card > hp ml30 gen9 unraid server. File copies from the server come off a cache pool of 6 data ssd's in raid10; writing is about 100MB/s slower. Reading from my spinning array gets me about 170MB/s give or take using exos drives; writing is a little slower. So, it works for me and my Mac. YMMV depending on server/client hardware specifics and tuning.
  9. The very first post says "Installation (procedure used for unRaid 6.9.0 RC 2 and up to current stable versions, previous versions not supported). This thread will be updated/deprecated when 6.10 goes stable to reflect the changes with that version." The patch is part of the OS in the current RC versions, so technically it should work if you follow steps 4 and 6 in the first post, rebooting after step 4. I sold my last proliant a year ago, and don't always run RC versions, so I don't have the ability to test at the moment, hence why RC is not supported (plus things can unexpectedly change from version to version). If you do try steps 4 and 6, kindly report back the results.
  10. Current does not equate to RC/beta versions. Current pertains to stable versions. But I have updated that text to reflect that since there seems to be confusion.
  11. The second will be listed as deprecated once the stable version comes out, thanks to you!
  12. She's selling a service to set it up for you.
  13. you will have to ask in this thread, as I don't maintain the script.
  14. try setting up a syslog server and it might capture any errors that occur right before it power cycles.
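Unraid has a built-in receiver for this under Settings > Syslog Server. For anyone capturing the logs on a separate Linux box instead, a sketch of the equivalent plain rsyslog setup follows; the IP address is a placeholder.

```shell
# Assumption: generic rsyslog remote logging; 192.168.1.50 is a placeholder.
# On the box that will RECEIVE the logs, enable UDP input in /etc/rsyslog.conf:
#   module(load="imudp")
#   input(type="imudp" port="514")
# On the server that keeps power cycling, forward everything to it:
#   *.* @192.168.1.50:514
# Restart rsyslog on both machines afterwards (e.g. systemctl restart rsyslog).
```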
  15. I had an 8th gen ML350p with 2 rx580's, 2 10gbe cards, 2 hba's, and 1 gt 710 for Plex. (and maybe something else, I don't remember) The 2 rx580's went to a Macos vm plus a 10gbe solar flare card. Passthrough was fine once the RMRR issue was solved. Switching to legacy doesn't really affect anything in your setup/use. As I mentioned before I remember the onboard quad being a bit finicky. I think I disabled mine altogether since the server was on 10gbe. That doesn't help you much unless you get an aftermarket card.
  16. We'll try some troubleshooting/digging around:

     - Try switching to legacy mode and boot with the 4 nics. Make sure to disable UEFI in bios as well, as an option. See if it works.
     - Change acs override to both in the vm manager and reboot. This shouldn't matter, but then try to start the vm with just 1 of the nics.
     - The newer bios won't fix the rmrr issue. HP doesn't care about older hardware; the newer bios were essentially for spectre, etc... You may just have to get a quad intel nic and not use the onboard if it doesn't want to play nice.
  17. for some reason I'm unable to view your syslog... I can't unzip it. not sure why. Are you using the onboard raid controller? A long time ago I ran into problems with it even being enabled causing issues when trying to also use onboard networking. are you booting in legacy mode?
  18. Check for this. If that's not it, then post your full diagnostic files.
  19. If you're using the internal display output, I can't really advise that. But you should try to bind whatever graphics device you want to use under tools>system devices to start. If you bork the vm and get the "guest not initializing display", make a new vm with the same settings/disk (don't delete your old disk when removing the old vm afterwards.)
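Binding a device under Tools > System Devices amounts to handing its IDs to vfio-pci at boot. A hedged sketch of what that looks like; the device ID below is illustrative, not necessarily yours:

```shell
# Illustrative only: 10de:1287 is one GT 730 variant. Find your own IDs with:
#   lspci -nn
# Older Unraid releases took this as a syslinux.cfg append line:
#   append vfio-pci.ids=10de:1287 initrd=/bzroot
# Newer releases write the selection to /boot/config/vfio-pci.cfg instead,
# which is what the Tools > System Devices checkboxes manage for you.
```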
  20. It's generated by HP's server management system iLO.
  21. Off the top of my head, no. You would have to ask @ich777 if there is a way to support this (unless I overlooked something in the configuration page of the plugin.) There is a section for beta builds, so it may be possible.
  22. do they show up in the raid controller menu/configuration? Also post your complete diagnostics zip.