happyagnostic
Posts posted by happyagnostic
-
-
To fix the reverse proxy issue for Plex if you followed Spaceinvader One’s tutorial:
1. Log into pfSense (or whatever firewall you use)
Create another port forwarding rule as the tutorial showed (or duplicate one), but set the ports to 32400
Click Save / Apply
2. In Unraid > Docker > plex > Edit
In the upper right corner, change from Basic View to Advanced View
Find the field Extra Parameters: and paste the following:
-p 1900:1900/udp -p 32400:32400/tcp -p 32400:32400/udp -p 32460:32469/tcp -p 32460:32469/udp -p 55353:5353/udp
Click Apply
3. Log into your Plex Server > Settings > Remote Access
Be sure to check the checkbox for Manually specify public port and set 32400
Click Apply
*I had to change the mDNS port mapping from -p 5353:5353/udp to -p 55353:5353/udp because a conflict on mDNS port 5353 wouldn't let my docker container start properly... there is probably a bug in the container.
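To confirm that the forwarded port from step 3 actually answers connections, a quick check can be scripted. This is a minimal sketch, assuming you substitute your server's LAN or WAN address for the placeholder HOST:

```python
# Minimal sketch: verify that Plex's forwarded TCP port answers connections.
# HOST below is a placeholder -- substitute your Unraid server's LAN or WAN IP.
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    HOST = "192.168.1.100"  # placeholder IP
    print("Plex port 32400 reachable:", port_open(HOST, 32400))
```

Run it once against your LAN IP and once against your public address to tell apart a firewall rule problem from a container port problem.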
-
You could try step 2. above and see if that resolves the issue for now.
-
23 minutes ago, H2O_King89 said:
They probably set it up this way so it works right out of the box.
It looks like an oversight. Other Unraid dockers have their ports listed in NetworkSettings and ExposedPorts.
-
4 hours ago, H2O_King89 said:
It’s because the template is set up for host networking, so no port mapping is needed. If it gets changed to a different network, then the ports need to be mapped so traffic is passed.
Why limit the template to a host-only network?
Or rather, may I submit a request to have the Ports populated in the NetworkSettings?
-
3 hours ago, FlorinB said:
After following the setup described here,
the linuxserver/plex docker IP:Port is not translated/mapped to the Unraid IP:Port.
Plex is reachable over the public web address, but not from my internal LAN. Here are some screenshots:
Docker tab: No address/port mapping for Plex.
Docker allocations: the revproxy network IP is displayed; however, there is no port mapping.
SSH Shell: docker network list
So far I had tried the following, without success:
- Changed the network of the docker to Bridge, as it was initially.
- Uninstalled Plex and reinstalled it from the custom templates.
- Rebooted Unraid server.
Note that for the other docker containers in the custom docker network revproxy there is no issue.
Diag archive attached: node804-diagnostics-20181004-0146.zip
Thank you for posting this.
I am having the identical error and was going to do the screenshots, but yours is exactly it.
I noticed that in the Docker image the ExposedPorts are defined properly, but
NetworkSettings: Ports: {} is empty.
I believe this is the cause of the issue.
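For anyone who wants to reproduce this check, the empty mapping can be seen by parsing the JSON that `docker inspect <container>` prints. A small sketch (the sample JSON is fabricated to resemble the broken case):

```python
# Sketch: extract the NetworkSettings.Ports mapping from `docker inspect`
# output, e.g. saved with: docker inspect plex > plex.json
import json

def published_ports(inspect_json: str) -> dict:
    """Return NetworkSettings.Ports, or {} if no ports are published."""
    container = json.loads(inspect_json)[0]  # inspect output is a JSON array
    return container["NetworkSettings"].get("Ports") or {}

# Fabricated example resembling the broken case described above:
sample = '[{"NetworkSettings": {"Ports": {}}}]'
print(published_ports(sample))  # prints {}
```

A healthy bridge-mode container would instead show entries like `"32400/tcp": [{"HostIp": "0.0.0.0", "HostPort": "32400"}]`.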
-
Alright, I took the risk. Followed these instructions to the letter. And SUCCESS! Attached is how it looks now.
All data is still there. It removed the missing disk and is rebuilding parity. All services are running.
- Make sure that the drive or drives you are removing have been removed from any inclusions or exclusions for all shares, including in the global share settings. Shares should be changed from the default of "All" to "Include". This include list should contain only the drives that will be retained.
- Make sure you have a copy of your array assignments, especially the parity drive. You may need this list if the "Retain current configuration" option doesn't work correctly.
- Stop the array (if it is started)
- Go to Tools then New Config
- Click on the Retain current configuration box (says None at first), click on the box for All, then click on close
- Click on the box for Yes I want to do this, then click Apply then Done
- Return to the Main page, and check all assignments. If any are missing, correct them. Unassign the drive(s) you are removing. Double check all of the assignments, especially the parity drive(s)!
- Do not click the check box for Parity is already valid; make sure it is NOT checked; parity is not valid now and won't be until the parity build completes
- Start the array; system is usable now, but it will take a long time rebuilding parity
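For context on why the array stays usable while parity rebuilds: single parity is conceptually a bitwise XOR across the data drives, so any one missing drive can be recomputed from parity plus the survivors. A toy sketch of the idea (not Unraid's actual implementation):

```python
# Toy model of single-parity protection: parity is the bytewise XOR of all
# data drives, so one missing drive is recoverable from the rest.
# This illustrates the concept only; it is not Unraid's on-disk format.
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equal-length byte strings."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

disk1 = bytes([0x01, 0x02, 0x03])
disk2 = bytes([0xFF, 0x00, 0x10])
parity = xor_blocks([disk1, disk2])

# If disk2 goes missing, XOR of parity with the surviving drive restores it.
recovered = xor_blocks([parity, disk1])
print(recovered == disk2)  # True
```

This is also why parity is invalid the moment a drive is unassigned: the XOR no longer matches the remaining set until the rebuild completes.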
-
So will the data on the cache get wiped, or is it going to just remain there?
I really wish there was a tutorial on this.
-
Please confirm this.
It won't touch the data that is on Disk 1, Disk 2, Cache, Cache 2.
Then it will erase the Parity Disk, and use the data from Disk 1, Disk 2, Cache, and Cache 2 to rebuild the Parity Disk.
Is that correct?
-
Same issue.
-
Attached is a screen shot of my array.
I don't want to lose the data. I want to shrink and remove the missing disk.
The wiki information is confusing because it reads like I'm going to lose the data. Does New Config keep my data or remove it? I had nothing on the missing disk, and parity has been rebuilt daily, but it won't work until I remove that disk.
-
I'm contemplating one of those 30-drive capacity cases from 45drives, but I'm not sure if they sell them bare, and I don't like losing the ability to quickly replace data drives as I upgrade them.
Here you go.
https://www.backuppods.com/collections/backblaze-storage-pod-6-0
-
I may have discovered why my Windows 10 Pro fresh install on 6.2.0-beta21 was crashing.
Hyper-V is turned on by default in the Windows 10 template. I think that may be the source of the problem.
I noticed that when I downgraded to 6.1.9 and did a fresh install, Hyper-V was turned off by default in the Windows 8 template and everything worked fine; when I turned it on, it crashed while booting.
I'm passing a GTX 970 through to the guest VM. Any thoughts on Hyper-V causing these VM crashes, which also crashed the whole array?
-
How do you downgrade this to 6.1.9?
I tried installing from the plugins menu, but it reports "plugin: not installing older version".
Will there be a 6.2.0-beta22 soon? A fresh install of a Windows 10 VM being broken and locking up the system is really bumming me out.
-
Hi, I'm currently using an Icy Dock MB153SP-B, but it died last month and Amazon stopped selling it last year. The price was around 60-65 USD (I had free international shipping). I'm wondering if there are similar models (with a backplane, so that if a drive fails I won't have to open the computer case) that Amazon has in stock? I looked briefly at a couple of products, but most are around 100 and above, or Amazon doesn't stock them (or there is no backplane). Any recommendations will be appreciated. In the meantime I shall search for more posts.
iStarUSA enclosures come with backplanes.
http://www.amazon.com/s/ref=bnav_search_go?url=search-alias%3Daps&field-keywords=BPN-DE230SS
This seems to be in your price range. I have a 3-to-5 model, and it works well.
-
After a ton of reading I've decided on the following; please let me know if you have any input or if I should change anything up. For the most part this will be a media server, with future options to expand to a couple of VMs. I will be running Plex, SickBeard, SABnzbd, XBMC, and a few others as I learn more about unRAID.
CPU: Intel Xeon E3-1230 v5
Mobo: Supermicro ATX DDR4 LGA 1151 X11SAE-F-O
PSU: EVGA 600W 80+ Bronze
Case: Silverstone Tek GD08B
Mem: Crucial 16GBx2 ddr4-2133 ECC RDIMM
SSD: Samsung 850 EVO 500gb
HDD: WD Red 6TB x 4, plus the odd ones lying around the house.
In the future I will find an appropriate GPU when I am ready to do VMs, unless anyone can recommend a passively cooled GPU that supports HDMI 2.0 for 4K/60Hz, preferably Nvidia.
Thanks
I run a build similar to this.
I have a GD09B and put a 1230 v3 in it. The GD08/09B needs every fan slot filled to keep the CPU/GPU/HDDs cool, and plenty of space around it to stay cool. I wouldn't recommend it as a case for a home server unless there is at least 75mm of clearance on its left and right and the front isn't enclosed.
I moved my parts to a Fractal Node 804: silent, cool, and it can hold 10 HDDs and 4 SSDs. I would highly recommend it, but it takes microATX at most.
-
yes, this is exactly the case I've been looking at for the mATX build. I have to check the foot print and make sure it fits into the cabinet where my wife doesn't see it. She doesn't want to see the box but requires new content each month, go figure...
Here's a quote you can use.
"I like it, it's cute" - my wife
-
How compact are you looking?
I have a NODE 804 http://www.fractal-design.com/home/product/cases/node-series/node-804 for my home unRAID.
It's tiny, cubic, super quiet, and supports mATX.
8x 3.5 Drives
2x 2.5 Drives
and a 290mm long GPU.
-
It's all about how you assign cores and what type of games you're playing.
His rig had 2 CPUs with 28 cores total, spread among 7 gamers. You are looking at 4 cores for 2 gamers.
You could expect a little less than dual-core performance, which isn't that big of a deal with older games that don't utilize all cores. Newer games that list an i7 as recommended will be a bit harder on your system (obviously). (Also, I'd recommend 6-8GB of memory per VM.)
Linus also dedicated a GPU to each user. Your CPU could be a bottleneck for whatever graphics cards you throw in there.
If you're not going to be running 24x7, you could go with "green" drives. I'd recommend getting an SSD for your cache.
-
hey folks,
So I have an E5-2620 v2 available that I'd love to use to upgrade my current APU setup. The thing is, the MB has to be ITX.
Is anyone aware of an ITX board that would work with the E5-2620 v2? I wasn't able to find one.
If that's not a go, I'm considering an i3-6100 build for a future-proof setup. In general, does the Skylake architecture provide significant benefits over older hardware for an unRAID application? (NAS, a few dockers, and the odd VM for office work, perhaps gaming later on?)
There are no ITX boards for LGA 2011; mATX will be your best option.
Plex gives an idea of what kind of CPU you need to transcode 1080p based on a CPUBenchmark.net score.
For VM purposes: Skylake supports up to 64GB of memory, but I haven't found any ITX Skylake board that supports more than 32GB. Haswell and older ITX boards support 16GB max.
For gaming, there are no major improvements from Haswell to Skylake.
Scroll to the "So, should I upgrade?" section:
http://www.pcgamer.com/intel-skylake-i7-6700k-tested-a-smart-upgrade-despite-small-gaming-gains/
tl;dr
Since ITX is your limitation, Skylake would be a good call for VMs because it allows more memory on its ITX boards than Haswell or older. If VMs aren't that big of a deal, find older, cheaper hardware and save some cash.
-
Will the onboard dual 10GBase-T have any issues with unRAID?
It's an Intel X540 controller, but I'm not sure if I will be able to take full advantage of it in KVM.
-
Apple is a member of BSA. BSA "audit" consequences are not typically public knowledge, as the businesses involved would rather settle quietly.
Interesting.
I'll take that into consideration in regards to what I may or may not be doing and you can consider anything I'm asking here to be hypothetical.
-
Not an ethics debate, a clear consequence of getting caught statement.
I appreciate the concern.
If you could cite some sources for the judgments against businesses running VMs of Mac OS X, I would appreciate it. I'd like to understand the full scope of what I may or may not be getting myself into.
-
It sounds nice, but OS X licensing prevents you from legally running the OS on non-Apple hardware.
I appreciate your sentiment, and I'm not looking to get into an ethics debate.
My original question, is there anything I may have missed in regards to the hardware aspect of my build?
I've ordered the parts, just don't want any oversight or I'll take recommendations on improvements.
-
Failed Cache Pool - Failed VM - vdisks open with null backing
in VM Engine (KVM)
Posted
tl;dr: The qcow2 and a very large data .img file do not open, and backups were happening but have been incomplete. Who can I contact to potentially recover/read data from the .img?
I'm in a bad situation as this is a production server.
I hadn't realized our NextCloud VMs had moved to the cache; it's 1.1TB.
Over the weekend one of my cache drives from a pool took a dive. It looked like a faulty cable. I moved the cache drives to new cables to see if that would resolve the issue, but the third disk kept throwing errors. I believe the drive had failed. I shut down the Unraid server and swapped the failed drive for a new one.
One of the other drives was also throwing an error. I tried a reboot. The cache pool reported that it knew the file system was BTRFS, but it was unmountable.
I tried using the BTRFS restore techniques, with varying degrees of success, but the VMs that I copied weren't working or opening. The logs were showing that it needed to use mirrors to pull the data. The VMs still won't open.
I tried the BTRFS check repair... last resort. Still nothing.
I rebooted. The pool mounted, but it is not writable, which I suppose is a good thing.
The mover doesn't work.
I've been able to rsync items from the pool, but the VM qcow2 and .img files don't work.
qemu-img info for the boot drive:
/mnt/disk12/restore07152020_2/mnt/user/domains/NextCloudUbuntu1604/vdisk40G.qcow2
file format: qcow2
virtual size: 40 GiB (42949672960 bytes)
disk size: 38.3 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
fdisk -l for the drive:
/mnt/disk12/restore07152020_2/mnt/user/domains/NextCloudUbuntu1604/vdisk40G.qcow2: 38.3 GiB, 41119842304 bytes, 80312192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
No partitions, no anything.
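Before handing the image to a recovery service, one cheap sanity check is the qcow2 header itself: a valid file starts with the magic bytes QFI\xfb followed by a big-endian version number. A minimal sketch (point it at the vdisk path from the qemu-img output above):

```python
# Sketch: read the first 8 bytes of a suspected qcow2 file and check the
# header. A valid qcow2 begins with magic b"QFI\xfb" followed by a
# big-endian 32-bit version number (2 or 3 in practice).
import struct

QCOW2_MAGIC = b"QFI\xfb"

def qcow2_header(path: str):
    """Return (is_qcow2, version) read from the file's first 8 bytes."""
    with open(path, "rb") as f:
        magic, version = struct.unpack(">4sI", f.read(8))
    return magic == QCOW2_MAGIC, version
```

If the magic is intact (which the `corrupt: false` line above suggests), the damage is more likely in the guest filesystem inside the image than in the qcow2 container, which changes which recovery tools are worth trying.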
I've been at this for days and I am unable to go any further. Please help or direct me to someone who can.
tower-diagnostics-20200715-1026.zip