abhi.ko

Members • 344 posts

Everything posted by abhi.ko

  1. @Hoopster Thanks for all the help. I am planning to get the P2200, so that should work well, right? Also, for my VM passthrough (an OSX Catalina VM and a Windows 10 VM mainly; I have others, but those would not be using the GPU), what is a good card? I have read that AMD cards are not the best for Unraid VM passthrough (the reset bug, etc.), but Catalina needs AMD, as I understand from what I have read. Is that true? If so, any recommendations on what to buy would be very much appreciated. Thanks.
  2. Thank you! Yes, I am planning to use the Unraid Nvidia plugin and basically follow the guide by spaceinvaderone. Leaning towards the P2000 myself; it is not cheap though.
  3. It is all good now; aggregation is set up and it works. Thanks @Vr2Io for the help.
  4. Hello - My current CPU is an Intel® Xeon® E3-1231 v3 @ 3.40GHz on an ASRock C226 WS M/B; this CPU does not have Intel Quick Sync, and I do not have a powerful GPU. I am running Plex (Plex Pass) in a docker using pms-docker:beta and have 'Use Hardware Transcoding when available' turned on in Plex, but with my current setup it is not used. My questions: Can I add an Nvidia GPU to the build and pass it through to the docker to get Plex hardware transcoding on the GPU? (Solved - adding a P2200.) And what GPU should I use for Catalina OSX and Windows 10 VM passthrough? My understanding is that Catalina requires an AMD GPU, but from reading the forum it seems AMD does not work well with Unraid GPU passthrough, so I am wondering how to reconcile that. Any suggestions? Thank you!
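     For anyone landing here later, this is roughly what the GPU-transcoding side ends up looking like once the Unraid Nvidia plugin is installed. This is only a sketch: on Unraid these settings live in the container template (Extra Parameters plus two added variables), the GPU UUID below is a placeholder taken from the output of nvidia-smi -L, and the paths are illustrative.

        # Sketch of the equivalent plain docker run; on Unraid these map to
        # the template's Extra Parameters (--runtime=nvidia) and two added
        # environment variables. GPU-xxxxxxxx is a placeholder for the UUID
        # reported by `nvidia-smi -L`.
        docker run -d --name=plex --net=host \
          --runtime=nvidia \
          -e NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx \
          -e NVIDIA_DRIVER_CAPABILITIES=all \
          -e VERSION=beta \
          -v /mnt/user/appdata/plex:/config \
          plexinc/pms-docker:beta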
  5. @Vr2Io Thank you for the quick response, that article was helpful. Working on this today. Just trying to reconfirm that the steps are right:
     - Enable port aggregation on the router (screenshot of mine is below).
     - Stop Docker and the VMs in Unraid.
     - Change the network settings for eth0 and enable bonding with eth1 as a member.
     - Select bonding mode balance-alb, leave all other settings as they are, click apply, and done.
     Questions: Do I change the MTU setting to anything higher than 1500 (9000 maybe)? My ethernet controller says jumbo frames are supported, hence the question. Anything else I am missing? All guidance is appreciated. Thanks.
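     Once the bond is applied, a quick sanity check from the Unraid terminal (a sketch; it assumes the default bond0 with eth0/eth1 as its members):

        # Should report the mode as "adaptive load balancing" (balance-alb)
        # and list both NICs as slave interfaces
        cat /proc/net/bonding/bond0

        # Shows the MTU currently set on the bond (1500 by default,
        # 9000 only if jumbo frames were enabled end to end)
        ip link show bond0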
  6. Hi - I'm trying to set up link aggregation on the dual LAN ports of my ASRock C226 motherboard, connected to a Netgear Nighthawk modem. My question is which bonding mode to choose under eth0, since both ports are on the board and I am not using a switch or any PCI NICs. Thanks.
  7. Hi, first off, I am a novice at proxy/network setup, with very little experience and little understanding of what I am doing or what I am doing incorrectly, so please be patient (and if possible gentle) with your replies and comments. I followed the guides by @SpaceInvaderOne, namely the three below merged together, and set everything up using my own domain: Video #1, Video #2 & Video #3 (related to this docker). Testing with Sonarr for starters. I got "Server ready" in the swag logs (pasted below):

        [linuxserver.io ASCII banner]
        Brought to you by linuxserver.io
        To support the app dev(s) visit:
        Certbot: https://supporters.eff.org/donate/support-work-on-certbot
        To support LSIO projects visit: https://www.linuxserver.io/donate/
        GID/UID
        User uid: 99
        User gid: 100
        [cont-init.d] 10-adduser: exited 0.
        [cont-init.d] 20-config: executing...
        [cont-init.d] 20-config: exited 0.
        [cont-init.d] 30-keygen: executing...
        using keys found in /config/keys
        [cont-init.d] 30-keygen: exited 0.
        [cont-init.d] 50-config: executing...
        Variables set:
        PUID=99
        PGID=100
        TZ=America/Chicago
        URL=mydomain.com
        SUBDOMAINS=wildcard
        EXTRA_DOMAINS=
        ONLY_SUBDOMAINS=true
        VALIDATION=dns
        DNSPLUGIN=cloudflare
        EMAIL=myemailaddress
        STAGING=false
        SUBDOMAINS entered, processing
        Wildcard cert for only the subdomains of mydomain.com will be requested
        E-mail address entered: myemailaddress here
        dns validation via cloudflare plugin is selected
        Certificate exists; parameters unchanged; starting nginx
        Starting 2019/12/30, GeoIP2 databases require personal license key to download. Please retrieve a free license key from MaxMind, and add a new env variable "MAXMINDDB_LICENSE_KEY", set to your license key.
        [cont-init.d] 50-config: exited 0.
        [cont-init.d] 60-renew: executing...
        The cert does not expire within the next day. Letting the cron script handle the renewal attempts overnight (2:08am).
        [cont-init.d] 60-renew: exited 0.
        [cont-init.d] 99-custom-files: executing...
        [custom-init] no custom files found exiting...
        [cont-init.d] 99-custom-files: exited 0.
        [cont-init.d] done.
        [services.d] starting services
        [services.d] done.
        nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)
        Server ready

     Setup so far:
     - duckdns subdomain created; duckdns container installed, running, and updating every 5 minutes
     - ports are forwarded (not sure if 80 needs to be forwarded for DNS validation, but it is)
     - CNAMEs are created on Cloudflare, pointed to the duckdns subdomain created above, and proxied
     - both dockers are on proxynet (a docker network created for this)
     - config files are edited, the server is up and running, and Sonarr can be accessed locally

     But when I try to access sonarr.mydomain.com in Chrome, I get error 521 - web server is down. Can someone help tell me what I did wrong and how to correct it, please? Thanks, Abhi
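     A note on error 521, purely as a troubleshooting sketch: Cloudflare returns 521 when it resolves the name and reaches the WAN IP but gets no answer from the origin, so the usual suspects are the port forward and the Cloudflare SSL mode. The checks below assume SWAG's HTTPS side is published on host port 443; substitute whatever port the container template actually maps.

        # From a machine on the LAN: does SWAG answer locally?
        # (-k skips certificate-name checks, since the cert is for the
        #  domain rather than the LAN IP)
        curl -kI https://SERVER_LAN_IP:443

        # From outside the LAN (e.g. a phone on cellular data): does the
        # router's port forward actually reach SWAG?
        curl -kI https://YOUR_WAN_IP:443

     If the first check works and the second doesn't, the forward (or the ISP) is the likely problem; if both work, the Cloudflare SSL/TLS mode (generally "Full" rather than "Flexible" for this setup) and the Sonarr proxy-conf are worth a second look.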
  8. Thank you! I have asked some pretty stupid questions here, and that was probably at the top of the list. Set it to the mnt/user/isos share and all is good now.
  9. I am trying to install a VM for running Hass.io, but it is not starting up after being enabled. HVM and IOMMU are enabled in the BIOS. Tried rebooting, but no luck. Anything I am missing? System Profiler: VM Manager Settings:
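     A few generic things worth checking from the Unraid terminal before digging into the VM template (a sketch, not specific to Hass.io):

        # Non-zero output confirms VT-x/AMD-V is actually exposed to the OS
        grep -cE 'vmx|svm' /proc/cpuinfo

        # If IOMMU is working, this directory is populated with group numbers
        ls /sys/kernel/iommu_groups/

        # Recent libvirt messages often show why the VM service failed to start
        tail -n 50 /var/log/libvirt/libvirtd.log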
  10. Thanks! That worked, but I only have metrics for sdb-sdg. What about the rest of the drives? How do I get those?
  11. I have it installed, but all the disks show up with no data. What should I do to get the SMART data populated?
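     While the plugin gets sorted out, SMART data can also be read manually from the Unraid terminal; a sketch, with the device names as examples only:

        # One-line health verdict for every sd* device the kernel sees
        for d in /dev/sd[a-z]; do
          echo "== $d =="
          smartctl -H "$d"
        done

        # Full SMART attribute table for a single drive
        smartctl -a /dev/sdb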
  12. Great - thank you both. Will mark this as solved now. Appreciate all the help.
  13. Yes, all the disk assignments came up empty after boot and I had to reassign the disks manually; the Disk Assignments.txt in the config directory, saved a day earlier, is what I used for reassigning, and they are right. Not sure why they had to be manually reassigned - after I copied over the contents of the flash from the 9/21 backup and made the drive bootable, I thought it would retain the disk assignments. I did not check that box; I should have, since the parity was valid. I remember seeing the box and thinking that if I didn't check it, it would just do a parity check. I did not realize it would trigger a rebuild. It is definitely rebuilding parity, but all the shares, dockers, plugins, and data seem to be fine. Is there a negative if the parity is rebuilt, other than the array being unprotected for the rebuild duration?
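     For reference, the flash-restore procedure described above is roughly the following (a sketch only, assuming the backup is a plain copy of the flash contents and the stick is prepared from a Mac; the backup path is a placeholder, and on Windows the equivalent step is make_bootable.bat):

        # 1. Format the replacement stick FAT32 with the volume label UNRAID
        # 2. Copy the entire backup onto it
        cp -r /path/to/flash-backup/* /Volumes/UNRAID/

        # 3. Run the make-bootable script that ships on the flash
        cd /Volumes/UNRAID && sudo bash make_bootable_mac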
  14. Thanks Squid. I did grab it from the USB backup share as per the plan you suggested (that was a backup as of Monday morning) and now everything is good; the parity rebuild is in progress. Not sure why parity had to be rebuilt, though.
  15. Thanks @Squid, appreciate the help and patience. I found a backup from June (06/24) saved on my desktop - I think I replaced the USB drive then and this was saved for that. So can I use anything from this? The config folder and disk assignments should be pretty accurate.
  16. Yes, the USB backup is on the array, under a share. So please bear with me:
     - What do you mean by "Assign ALL drives as DATA drives"?
     - I have never checked the USB backup location before this; I just trusted the Appdata Backup plugin to do its work. Is there any way to check the contents of that backup? The ls command doesn't work. And if the backup is not there, then what?
     - I haven't powered the system down yet, because I am afraid I will lose access to the shares, the dockers, and their configs and DBs. Is there anything I need to do before shutting the system down - download or move stuff?
  17. So I accidentally executed rm -rf \* and then stopped it pretty quickly (<2s), but the command generated a bunch of "cannot remove" messages. Now the Web UI doesn't work, though the shares and docker containers all seem to be live and working. I do have my appdata and USB backup folders, but the permissions on the USB backup don't seem to be correct, and I can't run chmod or almost any other command from the terminal - it says "command not found". How do I fix this without having to reconfigure everything again?
  18. No worries. I was struggling to find a solution there, hence I thought I would try here. It seems the answer for anyone trying this is here: just add :plexpass to the repository name.
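     In container terms, the fix described above is to point the template's Repository field at the Plex Pass tag of the image rather than relying on the VERSION variable alone; a sketch of what that looks like (the tag is the one named in the post, the rest is illustrative):

        # Repository field in the Unraid container template:
        #   plexinc/pms-docker:plexpass
        # or, as a plain docker pull:
        docker pull plexinc/pms-docker:plexpass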
  19. Hi - I'm new to docker, so I wanted to double-check whether I am doing something incorrectly. I am currently running PMS Version 1.19.5.3112, hosting the server on unRAID using plexinc/pms-docker. I have set VERSION=beta in the docker config and I'm a Plex Pass customer. However, after restarting the container multiple times, PMS is not getting updated automatically, and I get the "server update available" message every time I log into the PMS web UI. I had initially created the container with VERSION=latest and have since edited it to all the other values (beta, plexpass and public), but none of those seem to update PMS. Any help is appreciated - what is the right way to automatically update PMS to the latest available beta or public version?