devros

Everything posted by devros

  1. It's been a bad year for boot drives for me. My 2nd one this year now seems to be on the way out. Unfortunately my SM motherboard doesn't have any USB 2 ports. For now I can shut the server down, pull the USB drive, repair it on my Mac, and my Unraid server will boot again, but it's getting to the point where it will only last a few days before Linux kicks it offline. dmesg is filled with:

     [349517.230113] sd 0:0:0:0: [sda] tag#0 access beyond end of device
     [349517.230115] blk_update_request: I/O error, dev sda, sector 625365 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
     [349517.230123] sd 0:0:0:0: [sda] tag#0 access beyond end of device
     [349517.230124] blk_update_request: I/O error, dev sda, sector 625366 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
     [349517.230132] sd 0:0:0:0: [sda] tag#0 access beyond end of device
     [349517.230133] blk_update_request: I/O error, dev sda, sector 625367 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
     [349517.230141] sd 0:0:0:0: [sda] tag#0 access beyond end of device
     [349517.230143] blk_update_request: I/O error, dev sda, sector 625368 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
     [349517.230151] sd 0:0:0:0: [sda] tag#0 access beyond end of device
     [349517.230152] blk_update_request: I/O error, dev sda, sector 625369 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
     [349517.230160] sd 0:0:0:0: [sda] tag#0 access beyond end of device
     [349517.230167] sd 0:0:0:0: [sda] tag#0 access beyond end of device

     The same thing happened to me in June, so I can't automatically do a license transfer to a new thumb drive. I know the main bit of advice here is to use a USB 2 port, but since I have none, does using a small USB 2 hub plugged into a USB 3 port make any sense? Also, based on the dmesg errors, I think I'm running out of runway before this totally dies. Is it possible to have the system allow me to do another USB change within a year? I have the full Pro license. Thanks, dev
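     A minimal sketch of the kind of Mac-side repair mentioned above, assuming the flash is the usual FAT32 Unraid boot stick (the disk identifiers below are placeholders; check what diskutil actually reports before running anything):

     # Identify which disk is the flash drive (disk numbers are placeholders).
     diskutil list

     # Unmount it before checking the filesystem.
     diskutil unmountDisk /dev/disk2

     # Check and repair the FAT32 partition the Unraid flash boots from.
     sudo fsck_msdos -y /dev/disk2s1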
  2. Just installed 1.6a. Same issues. Using SG for primary display does not help
  3. There is a 1.6 BIOS out now. Anyone tried it?
  4. I recently did the upgrade to 1.5 and can report basically the same things @dumurluk had with 1.5. Lots of CMOS resets that afternoon :( Over the past few years I've ordered millions of dollars of SM server gear for our company's colos. I think I'm just going to forward this thread to our rep and tell them to escalate till it's fixed.
  5. OK, phew. Forgot I had auto backups on. Reverted to 6.0.45, moved the old folder in appdata, fired it up, uploaded the right backup, and all is well now. Think I'll stick with 6.0.45 for now.
  6. Reverted to a backup from a few days ago and same issue. Reverted the docker image back to one from a month ago and got:

     UniFi Controller startup failed
     We do not support upgrading from 6.1.71.

     Hoping I don't have to go to my appdata folder backup, as I've made some changes since that was taken last Sunday.
  7. I just updated the docker to the latest version, running 6.1.71. If I log in locally, there are no adopted devices. If I go in through the cloud, it shows the site online with the right number of devices, clients, etc., but when I click launch, it connects and there is nothing there. If I relaunch the controller, I'll get alerts that all of my devices have reconnected, but still nothing in the GUI. Anything obvious I might be missing? Thanks, -dev
  8. Despite doing all the things in the post referenced above, my iKVM still cuts out when the OS takes over. It worked great for me on the previous builds. Debating whether to try 1.3.
  9. Two things I always do when repurposing a drive (not that it applies to this situation) are:

     1) Remove all partitions
     2) wipefs -a /dev/sdX

     I wish I'd known about that last one much earlier. It would have saved me a lot of grief removing MD superblocks, ZFS metadata, etc.
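     A minimal sketch of that cleanup (the device name is a placeholder; double-check it before running anything destructive):

     # Zap the partition table (GPT and MBR) on the drive being repurposed.
     sgdisk --zap-all /dev/sdX

     # Remove any remaining filesystem, RAID, or ZFS signatures.
     wipefs -a /dev/sdX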
  10. Looking like the Plex server is going to survive the night without me having to pause. I was originally just going to remove a drive, but then realized I had a larger one around that was already pre-cleared, so I figured I would just try to kill two birds with one reboot.
  11. Two questions here.

      1) If I'm running the command "dd bs=1M if=/dev/zero of=/dev/md1 status=progress" in screen so I can zero the drive and then remove it from the array without losing parity, is there any reason I couldn't just suspend the process this evening, when the server is going to be under high use, and then resume it much later tonight? Based on what it's doing, I think that should be OK.
      2) If I have a new, bigger drive that I have already pre-cleared, could I just change the drive assignment and preserve parity?
  12. I'm running the clear array drive script in a screen right now. In the evening, my server is under a pretty heavy load. Is there any issue with just doing a ctrl-z to pause the process and then resuming it later? Second question: I have cleared a drive and, rather than remove it, I want to replace it with a bigger drive that has already been pre-cleared. Can I do that all in one step while preserving parity?
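      A minimal sketch of pausing and resuming that zeroing job without killing it (the pgrep pattern and <pid> placeholder are illustrative; match them to the actual command line):

      # Find the dd process that is zeroing the array drive.
      pgrep -fa 'dd bs=1M if=/dev/zero of=/dev/md1'

      # Pause it during the evening peak (SIGSTOP just freezes the process).
      kill -STOP <pid>

      # Resume it later; dd carries on from where it left off.
      kill -CONT <pid>

      Ctrl-Z in the screen session sends SIGTSTP, which has the same effect on a foreground dd, and fg resumes it.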
  13. Now I'm a little curious to see how AFP performs
  14. For anyone still having issues with the UNMS container, see the issue I created: https://github.com/Nico640/docker-unms/issues/22#issuecomment-578910768 Basically, you have to point the config directory at the cache explicitly, even if you have the appdata share set to always use the cache.
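      An illustrative version of that workaround (the image name comes from the linked repo; the host path, container name, and /config mount point are assumptions about a typical Unraid setup):

      # Map the config directory straight to the cache pool instead of the user share.
      docker run -d --name unms \
        -v /mnt/cache/appdata/unms:/config \
        nico640/docker-unms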
  15. I'll try in the next day or two when I have the time.
  16. Yup. Not only do I run my Plex server off this motherboard, but I have about 40-50 HBAs in production at work, all on SM motherboards, sometimes with as many as 4 packed right together. In 12 years I've never seen this happen, either with the server motherboards at work or with this motherboard and the previous SM one in my last server.
  17. I've had a 92xx and a 93xx HBA in there with no issues.
  18. I should have been more specific in my last post. I'm talking about the UNMS docker
  19. Is this still working OK for everyone? I tried logging in for the first time in a while and there were some postgres errors in the logs. I backed up the config folder and did a fresh install, but now it's not creating the postgres conf file, which is preventing postgres from starting...
  20. Looks like there is a new BIOS/IPMI firmware out very recently:

      BIOS: 1.1
      IPMI: 01.23.04

      No real release notes, unfortunately. Currently I'm running unRAID 6.8.0 and have a VGA display, IPMI/LOM console, and Quick Sync with my Plex docker all working great. My BIOS settings are the same as above, except there no longer seems to be a "Primary PCIE" option since I did the BIOS upgrade. I have "i915.disable_display=1" in syslinux.cfg and the following in my "go" file:

      modprobe i915
      chown -R nobody:users /dev/dri
      chmod -R 777 /dev/dri

      Happy to have been the guinea pig here. Aside from the "Primary PCIE" option disappearing, I'd be curious to know if anyone else notices any other differences with the upgrades.
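      For reference, a sketch of where that kernel parameter sits, assuming a stock Unraid syslinux.cfg boot entry (adjust to your own label block):

      label Unraid OS
        menu default
        kernel /bzimage
        append i915.disable_display=1 initrd=/bzroot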
  21. I built out a new Unraid server several years ago to replace a CentOS 7 server running docker compose and several ZFS RAIDZ2 pools (RAIDZ2 being the ZFS equivalent of RAID6). Since I was using new drives for Unraid, the ZFS plugin was key to me making that choice, so I could easily hook up those enclosures, mount those filesystems, and just copy over all my content. As was stated above, ZFS on Unraid is the same ZFS as on any other Linux distro. As long as you are comfortable with the CLI, it should be all good.

      I run several ZFS production systems at work. Some are multiple HDD RAIDZ2 vdevs pooled together for almost half a PB of storage; that's been running stable for 3-4 years. We have more important DB servers running mirrored HDD pools with SSD caching that we use for the snapshotting. Those have also been running 3-5 years, many of them on two bonded 10G NICs. Many of these are just on the stock CentOS 7 kernel, which is still 3.10.x. We recently upgraded the kernels on some of those to the latest stable 5.3.x kernel so we could do some testing with some massive mirrors (24 x 2 on 12G backplanes) with NVMe caching (we needed the improved NVMe support in 5.3.x), and the performance has been incredible. In 4 years we had one issue come up where performance went to shit, and we needed to try a reboot quickly to get the system back online, so we weren't able to determine if it was a ZFS or NFS issue, but all was good after a reboot.

      Probably more info than you needed, but I wanted to answer your 10G question and put something in this thread for people to read later about what I did personally and what our company has done, with great results, with ZFS. Cheers, -dev
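      A minimal sketch of the kind of pool described above (pool name and device paths are made up for illustration):

      # Create a pool from two RAIDZ2 vdevs of six disks each.
      zpool create tank \
        raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
        raidz2 /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl

      # Verify layout and health.
      zpool status tank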
  22. Just got a High Sierra VM working great. What's the trick to be able to sign into the App Store? TIA