1812

Members

  • Posts: 2619
  • Days Won: 12

1812 last won the day on August 27 2019

1812 had the most liked content!

7 Followers

  • Member Title: ¯\_(ツ)_/¯
  • Gender: Undisclosed

31059 profile views

1812's Achievements

Proficient (10/14)

382 Reputation

Community Answers

  1. Updated 2 machines from 6.11.x to this. No issues.
  2. Upgraded 2 boxes: one from 6.11.3, the other from 6.10.3. No issues currently observed. SPIFFY!
  3. 1. You can assign as many cores as you have to a single VM, or spread them across a number of VMs. 2. Any level of unRaid does this; the difference between the levels is the number of disks you can attach. You can assign however many cores you like to any VM, and even stack VMs on the same cores (though you'll take an understandable performance penalty). You can also isolate cores away from unRaid running as the host and reserve them for only the VMs you choose. 3. Open a question in the appropriate place on the forum and someone will probably be able to help you with that.
  4. And I broke the Open Files plugin by running it out of memory.
  5. Then I'm not sure why this currently works? I mean, I know what value I entered on purpose for max open files (just to see what would happen), but it seems like I'm chugging along beyond it. Maybe it'll all come to a screeching halt soon? ¯\_(ツ)_/¯
  6. Just a heads up: a Mac photo library of any substantial size (175GB, for example) blows past the 40964 open-file limit when transferring to the server, and the limit has to be increased way beyond that. [Learned from experience and several failures today before finding this thread and increasing it to a ludicrous number to get the library moved over.] Hopefully it will be more easily user-adjustable in a future release.
  7. Over 4 years later and now I'm changing my +1 to a +10
  8. Side note: you can get around the halted boot process for missing devices by disabling them in the BIOS.
  9. Why does it keep installing the newest Nvidia driver after every update? I use a GT card for Plex, and it always rolls to the latest instead of keeping me on 470.129.06, which is the one that works for this card.
  10. FWIW, I have 2 HP ML30 Gen9 servers updated to 6.10.2, both with dual Broadcom NICs:
      02:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
      02:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
      I did the "un-blacklist" procedure and have experienced no errors. I will also add that the Broadcom controllers are not eth0 in either system, and as far as I know there were no reports of issues with this model of HP server.
  11. Spoke too soon; having a problem with my GPU, which went from functional to not. Log shows:
      May 19 13:46:23 Tower kernel: nvidia-nvlink: Unregistered the Nvlink Core, major device number 245
      May 19 13:46:23 Tower kernel: nvidia-nvlink: Nvlink Core is being initialized, major device number 245
      May 19 13:46:23 Tower kernel: NVRM: The NVIDIA GeForce GT 730 GPU installed in this system is
      May 19 13:46:23 Tower kernel: NVRM: supported through the NVIDIA 470.xx Legacy drivers. Please
      May 19 13:46:23 Tower kernel: NVRM: visit http://www.nvidia.com/object/unix.html for more
      May 19 13:46:23 Tower kernel: NVRM: information. The 510.73.05 NVIDIA driver will ignore
      May 19 13:46:23 Tower kernel: NVRM: this GPU. Continuing probe...
      May 19 13:46:23 Tower kernel: NVRM: No NVIDIA GPU found.
      It appears that my GPU driver was automatically updated, and now I have to roll back to the 470.xx option in the Nvidia plugin. OK, I guess... a bit of an annoyance to have to fix something that was working just fine.
  12. Updated 2 machines. First one, no problems. The second had all disks in the 2 pools "missing" because the device names had changed (that box uses an H240). I noted the disk positions/assignments, used New Config and preserved the array disks, went back to the Main tab and re-assigned the pool disks, marked parity as correct, and started the array. Normal operation as expected, with no loss of data (also expected).
  13. First post updated with instructions for 6.10.
  14. I already listed my hardware specs, so moving on: First, don't write to the array; if you have to, make sure you're on reconstruct write. For cache, use SSD/NVMe cache disks if you want the fastest performance. Second, see here: Try that and see if your experience changes. *Note* I don't have this setting on anything but a fast network share. The speeds I posted yesterday are from a basic share that uses a cache drive.
  15. M1 MacBook Pro > OWC Thunderbolt Pro Dock > 10GbE Ethernet cable > MikroTik 4-port 10GbE switch > direct attach copper cable > Mellanox ConnectX-2 card > HP ML30 Gen9 unRaid server. File copies from the server come off a cache pool of 6 data SSDs in RAID 10; writing is about 100MB/s slower. Reading from my spinning array gets me about 170MB/s, give or take, using Exos drives; writing is a little slower. So, it works for me and my Mac. YMMV depending on server/client hardware specifics and tuning.
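The core-pinning idea in answer 3 can be illustrated outside unRaid's VM manager with `taskset` from util-linux, which uses the same CPU-affinity mechanism that libvirt applies when you pin VM cores; the core number below is just an example:

```shell
# Restrict a command to core 0 only, then print the CPU mask the
# kernel actually applied to it. unRaid's VM manager achieves the
# same effect per-vCPU through libvirt CPU pinning.
taskset -c 0 sh -c 'grep Cpus_allowed_list /proc/self/status'
```

Stacking two processes (or VMs) on the same core works the same way, at the cost of the performance penalty answer 3 mentions.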
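The open-file limit discussed in answers 4-6 can be inspected and raised for the current shell session with `ulimit`; this is a generic Linux sketch, and how to make the change persistent on unRaid depends on the release:

```shell
# Current soft (effective) and hard (ceiling) open-file limits.
ulimit -Sn
ulimit -Hn
# Raise the soft limit up to the hard limit for this session only;
# a transfer started from this shell inherits the new limit.
ulimit -Sn "$(ulimit -Hn)"
ulimit -Sn
```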
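To sanity-check transfer figures like those in answer 15, a crude sequential-write test with `dd` takes the network out of the picture; the path and size here are placeholders:

```shell
# Write 256 MiB and force it to disk before dd reports its rate, so
# the figure reflects the storage rather than the page cache.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=256 conv=fdatasync
rm -f /tmp/ddtest.bin
```

For the network leg alone, a tool like iperf3 between client and server gives a cleaner number than a file copy.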