happyagnostic

Members
  • Posts: 27
Everything posted by happyagnostic

  1. tl;dr: the qcow2 and a very large data .img file will not open, and the backups were running but were incomplete. Who can I contact to potentially recover/read data from the .img? I'm in a bad situation, as this is a production server.

     I hadn't realized our NextCloud VMs had moved to the cache; it's 1.1TB. Over the weekend one of my cache drives from a pool took a dive. It looks like it was a faulty cable. I moved the cache drives to new cables to see if that would resolve the issue, and the third disk kept throwing errors. I believe the drive had failed. I shut down the Unraid server and switched out the failed drive for a new one. One of the other drives was also throwing an error. I tried a reboot. The cache pool reported the file system as BTRFS, but unmountable. I tried using the BTRFS restore techniques, with varying degrees of success, but the VMs that I copied weren't working or opening. The logs showed that it needed to use mirrors to pull the data. The VMs still won't open. I tried the BTRFS check repair... last resort. Still nothing. I rebooted. The pool mounted, but it is not writable, which I suppose is a good thing. The mover doesn't work. I've been able to rsync items from the pool, but the VM qcow2 and img files don't work.

     qemu-img info for the boot drive:
     /mnt/disk12/restore07152020_2/mnt/user/domains/NextCloudUbuntu1604/vdisk40G.qcow2
     file format: qcow2
     virtual size: 40 GiB (42949672960 bytes)
     disk size: 38.3 GiB
     cluster_size: 65536
     Format specific information:
         compat: 1.1
         lazy refcounts: false
         refcount bits: 16
         corrupt: false

     fdisk -l for the drive:
     /mnt/disk12/restore07152020_2/mnt/user/domains/NextCloudUbuntu1604/vdisk40G.qcow2: 38.3 GiB, 41119842304 bytes, 80312192 sectors
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 512 bytes
     I/O size (minimum/optimal): 512 bytes / 512 bytes

     No partitions, no anything. I've been at this for days and I am unable to go any further. Please help or direct me to someone who can.
tower-diagnostics-20200715-1026.zip
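Not part of the original post, but before reaching for heavier recovery tools, one cheap sanity check (a sketch using only coreutils; the function name is mine) is whether a restored copy even begins with the qcow2 magic bytes "QFI\xfb". If the magic is gone, btrfs restore most likely brought back corrupted clusters, and qemu-img will have little to work with.

```shell
# Hypothetical helper: verify a restored file still carries the qcow2
# magic bytes (hex 51 46 49 fb, i.e. "QFI\xfb") before deeper recovery.
check_qcow2_magic() {
    # Read the first 4 bytes and render them as bare lowercase hex.
    magic=$(head -c 4 "$1" | od -An -tx1 | tr -d ' \n')
    if [ "$magic" = "514649fb" ]; then
        echo "qcow2 magic present: $1"
    else
        echo "qcow2 magic missing or damaged: $1"
    fi
}
```

If the magic is intact but the disk still won't boot, running `qemu-img check` against a copy (never the only surviving original) would be the next least-destructive step.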
  2. @sgt_spike @Gobs To fix the reverse proxy issue for Plex if you followed Spaceinvader One's tutorial:

     1. Log into pfSense (or whatever firewall you use). Create another port forwarding rule as the tutorial showed (or duplicate one), but set the ports to 32400. Click Save / Apply.
     2. In Unraid > Docker > plex > Edit, change from Basic View to Advanced View in the upper right corner. Find the field Extra Parameters: and paste the following: -p 1900:1900/udp -p 32400:32400/tcp -p 32400:32400/udp -p 32460:32469/tcp -p 32460:32469/udp -p 55353:5353/udp. Click Apply.
     3. Log into your Plex server > Settings > Remote Access. Be sure to check the checkbox for Manually specify public port, set it to 32400, and click Apply.

     *I had to change the mDNS ports from -p 5353:5353/udp to -p 5353:55353 because there was a conflict with mDNS that wouldn't let my Docker container start properly... there is probably a bug in the container.
  3. @FlorinB To fix the reverse proxy issue for Plex if you followed Spaceinvader One's tutorial:

     1. Log into pfSense (or whatever firewall you use). Create another port forwarding rule as the tutorial showed (or duplicate one), but set the ports to 32400. Click Save / Apply.
     2. In Unraid > Docker > plex > Edit, change from Basic View to Advanced View in the upper right corner. Find the field Extra Parameters: and paste the following: -p 1900:1900/udp -p 32400:32400/tcp -p 32400:32400/udp -p 32460:32469/tcp -p 32460:32469/udp -p 55353:5353/udp. Click Apply.
     3. Log into your Plex server > Settings > Remote Access. Be sure to check the checkbox for Manually specify public port, set it to 32400, and click Apply.

     @roppy84 I had to change the mDNS ports from -p 5353:5353/udp to -p 5353:55353 because there was a conflict with mDNS that wouldn't let my Docker container start properly... there is probably a bug in the container. You could try step 2 above and see if that resolves the issue for now.
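Not from the original posts: for anyone running Plex outside the Unraid template, the Extra Parameters above correspond roughly to the following plain docker run port flags. This is a sketch only; the image and container names are placeholders, and the port values are copied verbatim from the post.

```shell
# Assumed docker-run equivalent of the Extra Parameters quoted above.
# Built up in a variable so the full mapping list stays readable.
PLEX_PORTS="-p 1900:1900/udp -p 32400:32400/tcp -p 32400:32400/udp"
PLEX_PORTS="$PLEX_PORTS -p 32460:32469/tcp -p 32460:32469/udp -p 55353:5353/udp"

# Sketch only -- substitute your actual Plex image and config mounts:
# docker run -d --name plex $PLEX_PORTS <your-plex-image>
echo "$PLEX_PORTS"
```

Note the left side of each `-p host:container` pair is the host port, which is why remapping the host side of mDNS (5353 to 55353) avoids a conflict without touching the container.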
  4. It looks like an oversight. Other Unraid Docker containers have the ports listed in both NetworkSettings and ExposedPorts.
  5. Why limit the template to a host-only network? Or rather, may I submit a request to have the Ports populated in the NetworkSettings?
  6. Thank you for posting this. I am having the identical error and was going to take screenshots, but yours show exactly it. I noticed that in the Docker image the Exposed Ports are defined properly, but NetworkSettings: Ports: {} is empty. I believe this is the cause of the issue.
  7. Alright, I took the risk and followed these instructions to the letter. SUCCESS! Attached is how it looks now. All data is still there. It removed the missing disk and is rebuilding parity. All services are running.

     • Make sure that the drive or drives you are removing have been removed from any inclusions or exclusions for all shares, including in the global share settings. Shares should be changed from the default of "All" to "Include"; the include list should contain only the drives that will be retained.
     • Make sure you have a copy of your array assignments, especially the parity drive. You may need this list if the "Retain current configuration" option doesn't work correctly.
     • Stop the array (if it is started).
     • Go to Tools, then New Config.
     • Click on the Retain current configuration box (it says None at first), click on the box for All, then click Close.
     • Click on the box for Yes I want to do this, then click Apply, then Done.
     • Return to the Main page and check all assignments. If any are missing, correct them.
     • Unassign the drive(s) you are removing.
     • Double check all of the assignments, especially the parity drive(s)!
     • Do not click the checkbox for Parity is already valid; make sure it is NOT checked. Parity is not valid now and won't be until the parity build completes.
     • Start the array. The system is usable now, but it will take a long time rebuilding parity.
  8. So will the data on the cache get wiped, or will it just remain there? I really wish there were a tutorial on this.
  9. Please confirm this. It won't touch the data that is on Disk 1, Disk 2, Cache, Cache 2. Then it will erase the Parity Disk, and take the data from Disk 1, Disk 2, Cache, Cache 2 and put them back on the Parity Disk. Is that correct?
  10. Attached is a screen shot of my array. I don't want to lose the data. I want to shrink and remove the missing disk. The wiki information is confusing because it reads like I'm going to lose the data. Does the New Config keep my data or remove it? I had nothing on the disk and parity has been rebuilt daily, but won't work until I remove that disk.
  11. Here you go. https://www.backuppods.com/collections/backblaze-storage-pod-6-0
  12. I may have discovered why my fresh Windows 10 Pro install on 6.2.0-beta21 was crashing: Hyper-V is turned on by default in the Windows 10 template. I think that may be the source of the problem. I noticed that when I downgraded to 6.1.9 and did a fresh install, Hyper-V was turned off by default in the Windows 8 template and everything worked fine; when I turned it on, it crashed while booting. I'm passing a GTX 970 through to the guest VM. Any thoughts on Hyper-V causing these VM crashes, which also crashed the whole array?
  13. How do you downgrade this to 6.1.9? I tried installing from the plugins menu, but it posts "plugin: not installing older version". Will there be a 6.2.0-beta22 soon? A fresh install of a Windows 10 VM being broken and locking up the system is really bumming me out.
  14. The iStarUSA cages come with backplanes. http://www.amazon.com/s/ref=bnav_search_go?url=search-alias%3Daps&field-keywords=BPN-DE230SS This seems to be in your price range. I have a 3-to-5; it works well.
  15. I run a build similar to this. I have a GD09B and put a 1230v3 in it. The GD08/09B need every fan slot filled to keep the CPU/GPU/HDDs cool, plus plenty of space around the case. I wouldn't recommend it as a case for a home server unless there is at least 75mm of clearance on its left and right and the front isn't enclosed. I moved my parts to a Fractal Node 804: silent, cool, and it can hold 10 HDDs and 4 SSDs. I would highly recommend it, but it takes microATX at most.
  16. Here's a quote you can use. "I like it, it's cute" - my wife
  17. How compact are you looking for? I have a Node 804 http://www.fractal-design.com/home/product/cases/node-series/node-804 for my home unRAID. It's tiny, cubic, super quiet, and supports mATX: 8x 3.5" drives, 2x 2.5" drives, and a 290mm-long GPU.
  18. It's all about how you assign cores and what type of games you're playing. His rig had 2 CPUs with 28 cores total, spread amongst 7 gamers. You are looking at 4 cores for 2 gamers. You could expect a little less than dual-core performance, which isn't that big of a deal with older games that don't utilize all cores. Newer games that list an i7 as recommended will be a bit harder on your system (obviously). (Also, I'd recommend 6-8GB of memory per VM.) Linus also dedicated a GPU to each user. Your CPU could be a bottleneck for whatever graphics cards you throw in there. If you're not going to be running 24x7, you could go with "green" drives. I'd recommend getting an SSD for your cache.
  19. There are no ITX boards for LGA 2011; mATX will be your best option. Plex gives an idea of what kind of CPU you need to transcode 1080p based on a CPUBenchmark.net score: https://support.plex.tv/hc/en-us/articles/201774043-What-kind-of-CPU-do-I-need-for-my-Server-computer- For VM purposes: Skylake supports up to 64GB of memory, but I haven't found any ITX Skylake board that supports more than 32GB. Haswell and older ITX boards support 16GB max. For gaming, there are no major improvements from Haswell to Skylake; scroll to "So, should I upgrade?" here: http://www.pcgamer.com/intel-skylake-i7-6700k-tested-a-smart-upgrade-despite-small-gaming-gains/ tl;dr: Since ITX is your limitation, Skylake would be a good call for VMs, because it allows more memory on its ITX boards than Haswell or older. If VMs aren't that big of a deal, find older, cheaper hardware and save some cash.
  20. Will the onboard dual 10GBase-T have any issues with unRAID? It's an Intel X540 controller, but I'm not sure if I will be able to take full advantage of it in KVM.
  21. Interesting. I'll take that into consideration in regards to what I may or may not be doing and you can consider anything I'm asking here to be hypothetical.
  22. I appreciate the concern. If you could cite some sources for the judgments against businesses running VMs of Mac OS X, I would appreciate it. I'd like to understand the full scope of what I may or may not be getting myself into.
  23. I appreciate your sentiment, and I'm not looking to get into an ethics debate. My original question: is there anything I may have missed in regards to the hardware aspect of my build? I've ordered the parts; I just don't want any oversights, and I'll take recommendations on improvements.
  24. Yes. 10.11.3 with Server installed. Upgrading from some 10.6/10.9 Xserves.