
devros

Members
  • Content Count: 61
  • Joined
  • Last visited

Community Reputation: 8 Neutral

About devros

  • Rank: Advanced Member

Converted

  • Gender: Undisclosed


  1. Two things I always do when repurposing a drive (not that it applies to this situation) are: 1) remove all partitions, and 2) run wipefs -a /dev/sdX (see the sketch after this list). I wish I'd known about that last one much earlier. It would have saved me a lot of grief removing MD superblocks, ZFS metadata, etc...
  2. See my previous post about the UNMS issue
  3. Looks like the Plex server is going to survive the night without me having to pause. I was originally just going to remove a drive, but then realized I had a larger one already pre-cleared lying around, so I figured I would try to kill two birds with one reboot.
  4. Two questions here. 1) I'm running "dd bs=1M if=/dev/zero of=/dev/md1 status=progress" in a screen session so I can zero the drive and then remove it from the array without losing parity. Is there any reason I couldn't just suspend the process this evening, when the server is going to be under heavy use, and then resume it much later tonight (see the pause/resume sketch after this list)? Based on what it's doing, I think that should be OK. 2) If I have a new, bigger drive that I have already pre-cleared, could I just change the drive assignment and preserve parity?
  5. I'm running the clear-array-drive script in a screen session right now. In the evening my server is under a pretty heavy load. Is there any issue with just doing a Ctrl-Z to pause the process and then resuming it later? Second question: I have cleared a drive, and rather than remove it, I want to replace it with a bigger drive that has already been pre-cleared. Can I do that all in one step while preserving parity?
  6. Now I'm a little curious to see how AFP performs
  7. For anyone still having issues with the UNMS container, see the issue I created: https://github.com/Nico640/docker-unms/issues/22#issuecomment-578910768 Basically, you have to point the config directory at the cache explicitly, even if you have the appdata share set to always use the cache (see the sketch after this list).
  8. I'll try in the next day or two when I have the time.
  9. Yup. Not only do I run my Plex server off this motherboard, but I have about 40-50 HBAs in production at work, all on SM motherboards, sometimes with as many as 4 packed right together. In 12 years I've never seen this happen, either with the server motherboards at work or with this motherboard and the previous SM one on my last server.
  10. I've had a 92xx and a 93xx HBA in there with no issues.
  11. I should have been more specific in my last post. I'm talking about the UNMS docker
  12. Is this still working OK for everyone? I tried logging in for the first time in a while and there were some Postgres errors in the logs. I backed up the config folder and did a fresh install, but now it's not creating the Postgres conf file, which is preventing Postgres from starting...
  13. Looks like new BIOS/IPMI firmware came out very recently: BIOS 1.1, IPMI 01.23.04. No real release notes, unfortunately. Currently I'm running unRAID 6.8.0 and have a VGA display, IPMI/LOM console, and Quick Sync with my Plex docker all working great. My BIOS settings are the same as above, except there no longer seems to be a "Primary PCIE" option since I did the BIOS upgrade. I have "i915.disable_display=1" in syslinux.cfg and the following in my "go" file: modprobe i915, chown -R nobody:users /dev/dri, chmod -R 777 /dev/dri (laid out after this list). Happy to have been the guinea pig here. Aside from the "Primary PCIE" option disappearing, I'd be curious to know if anyone else notices any other differences with the upgrades.
  14. I built out a new Unraid server several years ago to replace a CentOS 7 server running docker compose and several ZFS RAIDZ2s, which are the ZFS equivalent of RAID6. Since I was using new drives for Unraid, the ZFS plugin was key to me making that choice, so I could easily hook up those enclosures, mount those filesystems, and just copy over all my content. As was stated above, ZFS on Unraid is the same ZFS as on any other Linux distro. As long as you are comfortable with the CLI it should be all good. I run several ZFS production systems at work. Some are multiple HDD RAIDZ2s pooled together for almost half a PB of storage; that's been running stable for 3-4 years. We have more important DB servers running mirrored HDD pools with SSD caching that we use for the snapshotting. Those have also been running 3-5 years, many of them on two bonded 10G NICs. Many of these are just on the stock CentOS 7 kernel, which is still 3.10.x. We recently upgraded the kernels on some of those to the latest stable 5.3.x kernel so we could do some testing with some massive mirrors (24 x 2 on 12G backplanes) with NVMe caching (we needed the improved NVMe support in 5.3.x), and the performance has been incredible. In 4 years we had one issue where performance went to shit, and we needed to try a reboot quickly to get the system back online, so we weren't able to determine if it was a ZFS or NFS issue, but all was good after a reboot. Probably more info than you needed, but I wanted to answer your 10G question and put something in this thread for people to read later about what I did personally and what our company has done with great results with ZFS on Linux (ZoL) (see the pool-layout sketch after this list). Cheers, -dev
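
A minimal sketch for item 1 above: clearing old metadata from a drive before repurposing it. The device name /dev/sdX is a placeholder, and sgdisk (from gptfdisk) is just one way to remove the partition table; the original post doesn't name a specific partitioning tool. Both commands are destructive, so double-check the device first.

    # List existing filesystem/RAID/ZFS signatures without changing anything
    wipefs /dev/sdX

    # 1) Remove all partitions (one option; any partitioning tool works)
    sgdisk --zap-all /dev/sdX

    # 2) Erase remaining signatures (MD superblocks, ZFS labels, filesystems)
    wipefs -a /dev/sdX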
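
A sketch for items 4 and 5: pausing a long-running zeroing job during peak hours and resuming it later. Inside the screen window, Ctrl-Z suspends the foreground job and fg resumes it; from another shell you can send SIGSTOP/SIGCONT to the dd process instead. The pgrep pattern below is illustrative, not from the original posts.

    # From another shell, find the dd process and pause it
    pid=$(pgrep -f 'dd bs=1M if=/dev/zero of=/dev/md1')
    kill -STOP "$pid"   # suspend during the heavy evening load

    # Later, let it continue from wherever it left off
    kill -CONT "$pid"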
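
A sketch of the workaround referenced in item 7, for the UNMS container: bind the config directory to the cache disk directly instead of going through the user share. The /mnt/cache/appdata/unms path, image reference, and port mapping are assumptions about a typical Unraid setup, not details taken from the linked issue.

    # Instead of the user-share path, which can land on the array:
    #   -v /mnt/user/appdata/unms:/config
    # bind the config directory explicitly to the cache drive:
    docker run -d --name unms \
      -v /mnt/cache/appdata/unms:/config \
      -p 443:443 \
      nico640/docker-unms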
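
For item 13, here are the kernel option and "go" file lines laid out as they would sit in the files. Only the i915.disable_display=1 option and the three go-file commands come from the post; the surrounding append line is a typical unRAID boot stanza and may differ on your system.

    # /boot/syslinux/syslinux.cfg -- on the append line:
    #   append i915.disable_display=1 initrd=/bzroot

    # /boot/config/go -- expose Quick Sync (/dev/dri) to the Plex docker:
    modprobe i915
    chown -R nobody:users /dev/dri
    chmod -R 777 /dev/dri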
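
A hedged sketch for item 14 of the two pool layouts described there: a RAIDZ2 vdev (the ZFS counterpart to RAID6) and a mirrored pool with a fast cache device. Pool names and device paths are placeholders, not the actual production configurations.

    # RAIDZ2 across six drives: any two can fail without data loss
    zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

    # Mirrored vdevs with an NVMe device as L2ARC read cache
    zpool create dbpool mirror /dev/sdg /dev/sdh mirror /dev/sdi /dev/sdj \
      cache /dev/nvme0n1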