Trebron74

Members
  • Posts: 29
  • Joined
  • Last visited

Converted

  • Gender: Male
  • Location: Germany

Trebron74's Achievements

Noob (1/14)

Reputation: 1

  1. The more-than-one-server scenario is not explained clearly enough, and the Pub.sh picture is no longer available. I would like to run all 3 mods of OpenRA and cannot figure out the router port-forwarding and server port / listen port / external port combinations for this.
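For anyone hitting the same question, here is a rough sketch of running one dedicated server per mod on its own port. It assumes OpenRA's launch-dedicated.sh honors the Mod and ListenPort environment variables (check the script shipped with your version); on the router, each external port would then be forwarded to the same port on the server's LAN IP.

```shell
# Hypothetical sketch: one OpenRA dedicated server per mod, each on
# its own listen port. Mod names and ports are examples.
ports="ra:1234 cnc:1235 d2k:1236"
for entry in $ports; do
  mod="${entry%%:*}"
  port="${entry##*:}"
  echo "Mod=$mod ListenPort=$port ./launch-dedicated.sh &"
  # Uncomment on the actual server to start them for real:
  # Mod="$mod" ListenPort="$port" ./launch-dedicated.sh &
done
```

The router then needs one forwarding rule per server: external 1234 -> internal 1234, external 1235 -> internal 1235, and so on.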
  2. And that solved it! Thanks a lot for the fast help.
  3. This is a setting related to a plugin called TurboMode. Thanks for the hint. I set it long ago due to some GUI lockups and forgot to disable it again.
  4. After updating to v6.5.0 I'm no longer able to update outdated plugins or remove plugins. The popup states that the plugin is not installed, but it is listed under the Plugins tab and is running and available (Settings tab) on my current unRAID system. fatmax-diagnostics-20180322-1105.zip
  5. Hello community - I'm seeing my parity disk become disabled without any action and no issues (at least from my point of view). I've changed the controller and the cabling, removed and re-added the drive, but it goes to the disabled state shortly before the parity rebuild completes. SMART does not report errors besides 2 UDMA CRC errors (that's why I changed the connections)... Any idea what could be wrong? Attached are the diagnostics files. fatmax-diagnostics-20180126-1951.zip
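A quick way to watch the relevant SMART counters from SSH is to grep the attribute table directly (the device name below is an example; pick the right one from the Main tab or lsblk):

```shell
# Show only the attributes that usually matter for cabling/controller
# issues: CRC errors, pending and reallocated sectors.
smartctl -a /dev/sdb | grep -Ei 'crc|pending|reallocat'
```

If only UDMA_CRC_Error_Count moves, it usually points at the link (cable, backplane, controller port) rather than the platters; that counter also never resets, so what matters is whether it keeps climbing.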
  6. I struggle a lot with a sporadically unresponsive GUI while SSH still works, and sometimes both the GUI and the SSH terminal are unresponsive. Before I start a new topic with debugging information, I would like to prepare a fresh build using my licensed USB key. Debugging information is being captured, but I need to wait for the next unresponsive session... Is there a procedure to save the important files/settings/license items to be prepared for a fresh build? What I assume needs to be saved:
     • Disk layout (informational, to rebuild using the GUI)
     • License file (copied from /boot/config as a file) --> the pro.key file is not visible on the USB key when mounted as /flash... Am I missing something?
     • Docker config (informational, to rebuild using the GUI)
     • KVM XML with manual edits (as a text file for copy and paste)
     I would start as small and slimmed down as possible to avoid conflicts, and perform the debugging analysis while enabling apps/plugins, dockers and VMs step by step. The initial start would be unRAID + the Plex docker only. Searching the forum did not answer my question about the backup steps... Thanks a lot.
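The list above can be captured in one small script run over SSH; this is only a sketch (the destination path is an example, and it assumes the standard unRAID layout where the whole configuration, including the key file and super.dat disk assignments, lives under /boot/config):

```shell
#!/bin/sh
# Hypothetical pre-rebuild backup sketch for unRAID; verify paths on
# your own system before relying on it.
DEST="/mnt/user/backup/flash-$(date +%Y%m%d)"
mkdir -p "$DEST"

# Whole flash config: key file, disk assignments (super.dat),
# share settings, docker templates, plugin settings.
cp -r /boot/config "$DEST/config"

# Each defined VM's XML as an editable text file (requires libvirt).
for vm in $(virsh list --all --name); do
  [ -n "$vm" ] && virsh dumpxml "$vm" > "$DEST/$vm.xml"
done
```

The docker templates saved this way also let the GUI re-create containers via "Add Container" with a previously used template.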
  7. Any idea about my error? It comes from the unRAID KVM Manager when I power on the VM. Or would a separate thread be better?
  8. I have similar issues since I added my original Mac Pro (late 2009) GeForce GT 120 graphics card to my unRAID box and tried to pass it through to my macOS KVM running 10.12.4.
     Error message:
     internal error: process exited while connecting to monitor: 2017-05-09T13:49:59.860781Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,id=hostdev0,bus=pci.2,addr=0x4: vfio: error, group 1 is not viable, please ensure all devices within the iommu_group are bound to their vfio bus driver. 2017-05-09T13:49:59.860801Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,id=hostdev0,bus=pci.2,addr=0x4: vfio: failed to get group 1 2017-05-09T13:49:59.860810Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,id=hostdev0,bus=pci.2,addr=0x4: Device initialization failed
     I replaced the VNC portion with
     <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </source> </hostdev>
     as stated in the guide from gridrunner. unRAID seems to have the problem that qemu does not "know" my GT 120. What am I missing?
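The "group 1 is not viable" error means other devices share IOMMU group 1 with the GT 120 and are not bound to vfio-pci. A standard way to see what else sits in that group (plain sysfs, nothing unRAID-specific):

```shell
# List every IOMMU group and the PCI devices it contains.
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    # lspci -nns prints the device at a given slot with vendor IDs
    echo "  $(lspci -nns "${d##*/}")"
  done
done
```

Every device listed in the GPU's group must either be passed through together or separated (different slot, ACS override, or stubbing), otherwise qemu refuses to start exactly as in the error above.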
  9. Thanks a lot for the hint about the 2nd video to get it running with OS X Sierra 10.12.4. As this version is already used when creating the image in VMware, I skipped the UPDATE portion (I was already on .4, so no need to watch it). But now, with the machine up and running, a basic question came up that I could not find an answer for: how to increase the disk size of the VM. My first (and IMHO logical) step was to define a bigger disk in unRAID, then use the OS X disk tools to extend the boot partition (I entered the size directly) and let it execute. Reboot --> dead sign ø. Is there a different step list?
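One common approach (a sketch, with an example vdisk path; adjust to your VM's location): grow the image file on the unRAID host first, with the VM shut down, and only afterwards extend the partition from inside the guest with Disk Utility.

```shell
# Check current virtual size, then grow the vdisk by 32 GiB.
# Works for raw and qcow2 images; VM must be powered off.
qemu-img info   /mnt/user/domains/macOS/vdisk1.img
qemu-img resize /mnt/user/domains/macOS/vdisk1.img +32G

# Alternative for a plain raw image: truncate does the same thing.
# truncate -s +32G /mnt/user/domains/macOS/vdisk1.img
```

Resizing the partition before the underlying vdisk has actually grown is a plausible cause of the dead-sign boot described above.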
  10. Same for me. For an unknown reason (nothing changed besides the update to 6.3.2 some days back) the server becomes unresponsive (web GUI and shares). I can still use the SSH connection, though. Is there any command to restart the web server or get diagnostics data? shutdown -rF does not do the reboot (I waited 1 hour and hard-rebooted, ending up in a parity scan).
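On unRAID 6.x both things can usually be done from SSH; the rc.d script names below are how they appear on recent releases, so treat them as assumptions and check /etc/rc.d on your box:

```shell
# Restart just the web UI stack without rebooting the server.
/etc/rc.d/rc.nginx restart
/etc/rc.d/rc.php-fpm restart   # if present on your release

# Write a full diagnostics zip to /boot/logs without using the GUI.
diagnostics
```

The diagnostics zip from the unresponsive state is exactly what a support thread would want attached.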
  11. Hello all - I'm experiencing a reboot loop after the update from 6.2.4 to 6.3.0 this morning. The last line after choosing regular mode or safe mode is the bzroot entry; the server reboots immediately after that line is printed to the screen. Oddly though, GUI mode works, and everything runs fine on the server when started in GUI mode. I've tried to give it a cleanup-and-retry chance, but as soon as I use regular or safe mode I'm back in the reboot loop... Any idea where I can look for details to fix this? Is anything logged when it reboots directly after bzroot? fatmax-diagnostics-20170204-1652.zip
  12. I'm unsure where to look... Searching was not successful and the log files do not state any errors. Any VM I've created (using standard values) does not boot beyond the BIOS screen with the assigned memory. Any ideas where I can look for the root cause? Example log file (none showed an error line):
     2017-01-01 18:09:31.709+0000: starting up libvirt version: 1.3.1, qemu version: 2.5.1, hostname: FatMax LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none /usr/local/sbin/qemu -name Ubuntu -S -machine pc-q35-2.5,accel=kvm,usb=off,mem-merge=off -drive file=/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd,if=pflash,format=raw,unit=0,readonly=on -drive file=/etc/libvirt/qemu/nvram/ddce4f00-e106-6864-3579-589b9ff61b8c_VARS-pure-efi.fd,if=pflash,format=raw,unit=1 -m 4096 -realtime mlock=off -smp 4,sockets=1,cores=4,threads=1 -uuid ddce4f00-e106-6864-3579-589b9ff61b8c -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-Ubuntu/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device i82801b11-bridge,id=pci.1,bus=pcie.0,addr=0x1e -device pci-bridge,chassis_nr=2,id=pci.2,bus=pci.1,addr=0x1 -device ich9-usb-ehci1,id=usb,bus=pcie.0,addr=0x7.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,busv=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0 -vnc 0.0.0.0:0,websocket=5700 -k en-us -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vgamem_mb=16,bus=pcie.0,addr=0x1 -device usb-host,hostbus=1,hostaddr=3,id=hostdev0 -device virtio-balloon-pci,id=balloon0,bus=pci.2,addr=0x4 -msg timestamp=on
     Domain id=3 is tainted: high-privileges
     char device redirected to /dev/pts/0 (label charserial0)
     2017-01-01T18:10:31.884723Z qemu-system-x86_64: terminating on signal 15 from pid 17485
     2017-01-01 18:10:32.084+0000: shutting down
  13. From an unassigned drive to an unRAID array drive --> mc via SSH (this could also be done with a script and the User Scripts plugin). From one array drive to another drive in the array --> the unBALANCE plugin.
  14. Hi Chadskie - I'm running a similar setup and it works quite well.
     MB: Have a look at the Supermicro X11SSL-CF. It comes with 6 SATA and 2 SAS ports (breakout to 8 SATA), resulting in a total of 14 ports. The other specs are very similar to your selection.
     CPU: I've picked the Xeon E3-1260L v5 @ 2.90GHz as it runs at a very low wattage (45W) with an impressive Passmark score of 10k. The whole system consumes approx. 40W with all drives spun down but fully responsive, thanks to the SSD cache and the folder caching plugin. Max power consumption was 105W with Plex enumerating the whole library during a parity scan...
     Cache: Running 2x Samsung EVO 750 500GB in RAID1 mode. I decided not to use it for the Plex media library share and only use it for the dockers & VMs. Plex will delay playback by 3-5 seconds if the drive needs to spin up. Acceptable for me...
     RAM: I ran with 16GB and it was enough; I upgraded to 32GB only for VM usage. Plex runs fine off the SSD cache and 16GB RAM. But definitely go with 2 modules from the beginning...
     My system summary:
     • Case: Antec 1900 (offers 14 HDD/SSD slots plus 4 additional via the 5.25" bays)
     • PSU: be quiet! Pro 10 650W - nice SATA power connectors available...
     • MB: Supermicro X11SSL-CF (comes with 14 SATA ports --> matching the 14 bays of the case)
     • CPU: Intel Xeon E3-1260L v5 @ 2.90GHz - 10k Passmarks to run 5 parallel transcoding streams in Plex at 45W!
     • CPU heatsink: be quiet! Pure Rock, as even the smallest model already more than covers the CPU wattage...
     • RAM: 4x 8GB CL15 PC4-17000 DDR4 UDIMM modules from Crucial
     The drives are mostly from my old setup, extended by matching drives (1x SSD, 1x parity, 2x WD Red):
     • HDD: 1x WD Black 4TB for parity, 2x WD Green 3TB, 4x WD Red 4TB
     • SSD: 2x Samsung EVO 750 500GB
     A drive outside of the array is possible (via the Unassigned Devices plugin), but it will not be covered by parity! Alternatively, you can define a new share and only allow a certain drive to be utilized. This would save all data to that dedicated drive and keep it safe with parity.
  15. "CA calculates the appdata size by looking at the paths mapped to the docker app. If there are multiple paths present that could potentially be appdata, it'll flip a coin. Sent from my LG-D852 using Tapatalk" - That would be a nice feature request: a CA App Store option to select/define the path that is considered for the appdata size reporting of an installed docker.