dirtysanchez

Members
  • Content Count

    945
  • Joined

  • Last visited

Community Reputation

7 Neutral

About dirtysanchez

  • Rank
    Advanced Member

Converted

  • Gender
    Male
  • Location
    Riverside, CA

  1. When I got the Xeon it was used and did not come with the stock cooler. I used a stock i5 cooler I had lying around for a bit before I upgraded to the Noctua cooler. Temps definitely went up compared to the i3, but nothing too drastic; I think it was idling around 42C or thereabouts. The server typically idles around that now as well, even with the Noctua cooler, but that's just what happens when you have a 77W CPU in such a small case. If you will be doing more transcodes, temps will definitely go up, so I'd put the biggest cooler on it you can fit. I'm using the small form factor Noctua because I'm pressed for space; I can't fit a much larger cooler in there. When I've seen 3 or 4 transcodes running at the same time the CPU definitely spikes into the 60s (a rough command-line way to watch the temps is sketched after this list).
  2. Congrats on the build, glad it has been working well for you. Yes, I just dropped in the E3-1245v2 and called it a day, no issues. Nothing special needed that I recall, as long as the BIOS version supported the CPU, which you stated you already checked. Yes, the mobo supports VT-d; my build shows both HVM and IOMMU as enabled. The reason it shows as disabled for you is that the i3-3240 does not support VT-d. Once you drop in the Xeon it should show enabled, assuming you have it enabled in the BIOS (a quick command-line check for this is sketched after this list). I did add a SATA expansion card, but totally forgot to add that to the OP. I used the IOCrest SY-PEX40039, which uses an ASM1061 chipset. https://www.amazon.com/gp/product/B005B0A6ZS/ref=oh_aui_search_asin_title?ie=UTF8&psc=1 Just FYI, before the IOCrest I ordered a StarTech 2 port PEXSAT32 card that uses the Marvell 88SE9230 chipset. It did not work. I don't recall if unRAID didn't detect the card, or if it detected the card but could not detect the attached drives.
  3. Still going strong. It's been running 24x7x365 for over 6 years now. There have been many changes to the server over the years, most of which have been detailed in this thread. The only changes since my last update were upgrading from 8GB RAM to 16GB RAM and finally getting around to converting all drives to xfs. My sig below reflects the current state of the server. I'll update the OP shortly with all the changes not already mentioned there.
  4. Everything working as it should. Marking this as solved.
  5. Nuked the docker.img and recreated containers from my templates. So far so good. I am able to stop containers successfully. I'll mark this solved in another 24 hours if all is still working. Thanks again for the help.
  6. Thanks for taking a look Squid. Yes, plex crashed a few days ago (first time in years if I recall correctly) and that is what led to the discovery of the issue, as I was unable to get it running again without a hard reset that caused an unclean shutdown. The following day I discovered it went beyond the plex container when I was attempting to stop some containers to prepare for migrating all my drives to xfs. As for the containers refusing to stop (not the docker system per se), even turning Docker off doesn't kill the running containers: you can change Settings > Docker > Enable Docker to No and the containers don't die, nor does Docker itself stop; docker ps still shows them running. Once the migrations are complete (tomorrow morning) I'll nuke the docker.img and reinstall all dockers per your suggestion and report back. Thank you for the assistance.
  7. Hello all, recently started having an issue; unsure when exactly it started as I don't often have a reason to stop a docker container. The issue exists on 6.6.6 as well as 6.6.5. Unknown if it existed in prior versions or if it is even unRAID OS version related. I run the following containers, all the linuxserver.io version with the exception of UniFi Video, which is pducharme: Plex, UniFi, UniFi Video, Radarr, Sonarr, Sabnzbd, Tautulli, Transmission.
     The problem is as follows. If you attempt to stop a running container, it does not stop. The GUI shows the spinning arrows forever, and once you finally refresh it still shows it running. The container is in fact dead at that point and the container WebGUI does not respond, but it does not finish exiting correctly. Issuing a docker stop containername or a docker kill containername from the command line does nothing and hangs until you ctrl-c out. I have not found a way to kill and/or restart the container successfully. Some of the container logs appear as if the container exited successfully, while for others the last line in the logs is "[s6-finish] syncing disks". At this point the only way to get the container running again is to restart unRAID. Problem is, unRAID is now unable to stop the docker service, and therefore unable to stop the array. The only way to restart the server is a hard power cycle, and hence an unclean shutdown. (A rough sketch of the state checks worth capturing before that hard power cycle is included after this list.)
     In limited testing I have found that if only Plex and UniFi Video are running, you can stop the array: the containers successfully stop and the array successfully stops. I have yet to start the containers one by one and find which ones are causing the issue. I am currently in the process of migrating all drives to xfs and so have not yet had the time to test further. All that said, when the containers are automatically stopped/restarted weekly to update via CA Auto Update, they do appear to stop and start correctly.
     I have searched the forums and have not found a similar issue with a resolution. Attached are diags from when the containers were hung. Any assistance would be appreciated. landfill-diagnostics-20190102-1851.zip
  8. Hello all, Updated to 6.6.6 from 6.6.5. I am now unable to stop dockers. Most containers are linuxserver.io. Have also tried to stop from cmd line and it does not stop. Also rolled back to 6.6.5 and no change. It is possible this started before the upgrade and I just haven't had to manually stop a docker in a month or two. Did some searching and didn't find much. Not sure this is related to the unRAID version or some other issue. Diags attached. Many thanks for any assistance. landfill-diagnostics-20190102-1851.zip
  9. Also having this issue with 6.6.6, also with linuxserver containers. Reverted to 6.6.5, no change.
  10. If you've installed the LSIO Unifi container as default, it is version 5.6.37 (the LTS version). If you need the 5.7.x branch, you'll need to change the repository to linuxserver/unifi:unstable in the Docker config. Once that is done it should be on 5.7.23 and you should be able to import your existing backup. As for forgetting devices, there are 2 ways you can go about it. If you plan to give the docker the same IP as the previous Windows controller, then you just import the backup and the devices will show up. If you're migrating to another IP address, you'll either need to forget the devices and re-adopt them in the new controller, or SSH into the APs and re-run the set-inform command to point them to the new controller (a rough example of both steps is sketched after this list). There are other ways to do it as well, but they're a bit more convoluted (via the override setting in the controller).
  11. Good to know! Thought they were gone. I'll do it now.
  12. So a few additional upgrades that were never mentioned. Swapped the case for the Silverstone DS380 a few months ago as I needed additional drive space. I have to say, as long as you do the "cardboard" mod to seal off the airflow to force it to go through the drive bays before entering the rest of the case, this case actually keeps my drives cooler than the Q25 ever did. Nothing against the Q25, it's a great case, I just needed more space. Also, upgraded the CPU to the Xeon E3-1245v2 a couple years ago. Much more horsepower for transcodes, VMs, etc. You can't kill this thing, it just keeps on running.
  13. And therein lies the issue. 5.7 is the latest stable release and is listed as such by Ubiquiti. I second the suggestion to have LTS and stable tags, but do understand your statement regarding renaming.
  14. Haven't been around in a while, but just wanted to say this build is still running strong after almost 5 years. Other than the 1 Seagate drive that died and was replaced, have had no issues whatsoever.
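
Re the temps discussion above: one rough way to keep an eye on CPU temperature while transcodes are running is via lm-sensors, assuming the sensors binary (or the Dynamix System Temp plugin that bundles it) is available on the box; this is just a sketch, not part of the original build notes.

    # detect available sensor chips once, accepting the defaults
    sensors-detect --auto
    # refresh the temperature readout every 5 seconds while transcodes are active
    watch -n 5 sensors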
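
On the VT-d point, here is a rough way to confirm from the unRAID command line that the Xeon plus the BIOS setting is actually exposing an IOMMU. The exact dmesg wording varies by kernel version, so treat this as a sketch rather than the exact output you'll see.

    # kernel messages mentioning the Intel IOMMU (DMAR) -- present when VT-d is active
    dmesg | grep -i -e DMAR -e IOMMU
    # non-empty when an IOMMU has been registered with the kernel
    ls /sys/class/iommu
    # a count greater than 0 means the CPU advertises VT-x (HVM support)
    grep -c vmx /proc/cpuinfo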
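
For the hung-container issue described above, this is the kind of state capture worth doing before resorting to a hard power cycle. The container name plex is just an example and the 60-second timeout is arbitrary; adjust both to taste.

    # list every container and the status Docker believes it has
    docker ps -a --format '{{.Names}}\t{{.Status}}'
    # bound the stop attempt so a hang doesn't tie up the shell forever
    timeout 60 docker stop plex
    # if the stop timed out, see what state and PID Docker still reports
    docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' plex
    # keep this output alongside the diagnostics zip when posting to the thread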
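
And for the UniFi migration above, the repository change and the re-adoption look roughly like this. The controller IP is a placeholder, and on unRAID the repository is normally changed through the container template rather than a manual pull; this is only a sketch of the equivalent commands.

    # equivalent of pointing the container template at the 5.7.x branch
    docker pull linuxserver/unifi:unstable
    # on each access point, over SSH (older firmware may require entering mca-cli first)
    set-inform http://192.168.1.50:8080/inform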