happyagnostic

Members
  • Content Count

    26
  • Joined

  • Last visited

Community Reputation

1 Neutral

About happyagnostic

  • Rank
    Member

  1. @sgt_spike @Gobs To fix the reverse proxy issue for Plex if you followed Spaceinvader One's tutorial:
     1. Log into pfSense (or whatever firewall you use). Create another port forwarding rule as the tutorial showed (or duplicate an existing one), but set the ports to 32400. Click Save / Apply.
     2. In Unraid > Docker > plex > Edit, switch from Basic View to Advanced View in the upper right corner. Find the Extra Parameters: field and paste the following: -p 1900:1900/udp -p 32400:32400/tcp -p 32400:32400/udp -p 32460:32469/tcp -p 32460:32469/udp -p 55353:5353/udp Then click Apply.
     3. Log into your Plex server > Settings > Remote Access. Check the checkbox for "Manually specify public port", set it to 32400, and click Apply.
     *I had to change the mDNS mapping from -p 5353:5353/udp to -p 55353:5353/udp because a conflict on the mDNS port wouldn't let my Docker container start properly... there is probably a bug in the container.
  2. @FlorinB To fix the reverse proxy issue for Plex if you followed Spaceinvader One's tutorial, the steps are the same as in my post above: forward port 32400 on the firewall, paste the Extra Parameters into the plex container in Advanced View, and manually specify public port 32400 under Remote Access. @roppy84 I had to change the mDNS mapping from -p 5353:5353/udp to -p 55353:5353/udp because a conflict on the mDNS port wouldn't let my Docker container start properly... there is probably a bug in the container. You could try step 2 in the post above and see if that resolves the issue for now. (A sketch of the equivalent docker run flags follows below.)
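     In case it helps anyone running the container by hand rather than through the Unraid template: the Extra Parameters above are ordinary docker run port mappings. A minimal sketch of the equivalent command (the image name and config path are assumptions; substitute whatever your setup actually uses):

         # Equivalent docker run with the port flags from step 2
         # (-p is host:container; 32460:32469 copied from the post as written):
         docker run -d --name plex \
           -p 1900:1900/udp \
           -p 32400:32400/tcp -p 32400:32400/udp \
           -p 32460:32469/tcp -p 32460:32469/udp \
           -p 55353:5353/udp \
           -v /mnt/user/appdata/plex:/config \
           plexinc/pms-docker
         # 32400 is the main Plex port; 1900/udp is DLNA/SSDP discovery;
         # 5353/udp is mDNS, remapped to host port 55353 to dodge the conflict.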
  3. It looks like an oversight. Other Unraid Docker containers have their ports listed in both NetworkSettings and ExposedPorts.
  4. Why limit the template to a host-only network? Or rather, may I submit a request to have the Ports populated in the NetworkSettings?
  5. Thank you for posting this. I am having the identical error and was going to post screenshots, but yours show exactly the same thing. I noticed that in the Docker image the ExposedPorts are defined properly, but NetworkSettings: Ports: {} is empty. I believe this is the cause of the issue. (A docker inspect sketch to check both fields follows below.)
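     If anyone wants to verify this on their own system, docker inspect exposes both fields. A quick sketch, assuming the container is named plex (adjust to whatever yours is called):

         # Ports baked into the image (these looked correct for me):
         docker inspect --format '{{json .Config.ExposedPorts}}' plex

         # Runtime port mappings (this is what came back empty, {}):
         docker inspect --format '{{json .NetworkSettings.Ports}}' plex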
  6. Alright, I took the risk and followed these instructions to the letter. SUCCESS! Attached is how it looks now. All data is still there; it removed the missing disk and is rebuilding parity, and all services are running. For reference, the procedure:
     • Make sure the drive or drives you are removing have been removed from any inclusions or exclusions for all shares, including in the global share settings. Shares should be changed from the default of "All" to "Include", and the include list should contain only the drives that will be retained.
     • Make sure you have a copy of your array assignments, especially the parity drive. You may need this list if the "Retain current configuration" option doesn't work correctly (a command-line sketch for this follows after this post).
     • Stop the array (if it is started).
     • Go to Tools, then New Config.
     • Click the "Retain current configuration" box (it says None at first), click the box for All, then click Close.
     • Click the box for "Yes I want to do this", then click Apply, then Done.
     • Return to the Main page and check all assignments. If any are missing, correct them.
     • Unassign the drive(s) you are removing.
     • Double-check all of the assignments, especially the parity drive(s)!
     • Do NOT click the checkbox for "Parity is already valid"; make sure it is unchecked. Parity is not valid now and won't be until the parity build completes.
     • Start the array. The system is usable now, but it will take a long time rebuilding parity.
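     For the "copy of your array assignments" step, a rough command-line sketch that I believe works on Unraid (the mdcmd path and output fields may vary by version, so double-check before relying on it):

         # Snapshot which device is assigned to each array slot:
         /usr/local/sbin/mdcmd status | grep -E 'rdevName|rdevStatus' \
           > /boot/array_assignments.txt

         # Keep a copy of the on-disk array configuration too:
         cp /boot/config/super.dat /boot/config/super.dat.bak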
  7. So, will the data on the cache get wiped, or will it just remain there? I really wish there was a tutorial on this.
  8. Please confirm this: it won't touch the data that is on Disk 1, Disk 2, Cache, and Cache 2. Then it will erase the Parity Disk and rebuild it from the data on Disk 1, Disk 2, Cache, and Cache 2. Is that correct?
  9. Attached is a screenshot of my array. I don't want to lose the data; I want to shrink the array and remove the missing disk. The wiki information is confusing because it reads like I'm going to lose the data. Does New Config keep my data or remove it? I had nothing on the disk, and parity has been rebuilt daily, but it won't work until I remove that disk. (A toy sketch of how parity relates to the data follows below.)
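     As I understand it (please correct me if I'm wrong): parity is not a copy of the data, it is computed from the data, so rebuilding parity only ever writes to the parity disk. A toy sketch of single-parity math with made-up byte values, purely illustrative of the idea:

         # Parity is the XOR of the data disks, byte by byte:
         disk1=0xA5; disk2=0x3C
         parity=$(( disk1 ^ disk2 ))
         printf 'parity byte: 0x%02X\n' "$parity"                  # 0x99

         # A lost data disk is recovered from parity XOR the remaining disks:
         printf 'recovered disk2: 0x%02X\n' $(( parity ^ disk1 ))  # 0x3C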
  10. Here you go. https://www.backuppods.com/collections/backblaze-storage-pod-6-0
  11. I may have discovered why my fresh Windows 10 Pro install on 6.2.0-beta21 was crashing: Hyper-V is turned on by default in the Windows 10 template, and I think that may be the source of the problem. I noticed that when I downgraded to 6.1.9 and did a fresh install, Hyper-V was turned off by default in the Windows 8 template and everything worked fine; when I turned it on, the VM crashed while booting. I'm passing a GTX 970 through to the guest VM. Any thoughts on Hyper-V causing these VM crashes, which also crashed the whole array? (A sketch of a commonly suggested XML workaround follows below.)
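     If it is the NVIDIA driver objecting to the Hyper-V enlightenments (a known issue with GPU passthrough, though I can't confirm that's what is happening here), the commonly suggested libvirt workaround is to hide the hypervisor from the guest. A sketch of the relevant XML, applied via virsh edit on the VM (exact element placement can differ between libvirt versions):

         <features>
           <hyperv>
             <!-- spoof the vendor id so the NVIDIA driver does not detect Hyper-V -->
             <vendor_id state='on' value='whatever123'/>
           </hyperv>
           <kvm>
             <!-- hide the KVM signature from the guest -->
             <hidden state='on'/>
           </kvm>
         </features>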
  12. How do you downgrade this to 6.1.9? I tried installing from the plugins menu, but it reports "plugin: not installing older version". Will there be a 6.2.0-beta22 soon? A fresh install of a Windows 10 VM being broken and locking up the system is really bumming me out.
  13. iStarUSA cages come with backplanes: http://www.amazon.com/s/ref=bnav_search_go?url=search-alias%3Daps&field-keywords=BPN-DE230SS This seems to be in your price range. I have a 3-to-5 model and it works well.
  14. I run a build similar to this. I have the GD09B with a 1230v3 in it. The GD08/GD09B need every fan slot filled to keep the CPU/GPU/HDDs cool, plus plenty of space around the case. I wouldn't recommend it as a case for a home server unless there is at least 75mm of clearance on the left and right and the front isn't enclosed. I moved my parts to a Fractal Node 804: silent, cool, and it can hold 10 HDDs and 4 SSDs. I would highly recommend it, but it takes microATX boards at most.