
danioj

Members · Content Count: 1363
Everything posted by danioj

  1. Hi Guys, Has anyone any experience with a PCI-E x8 to PCI-E x16 riser? I know many of them are dodgy and should not be used, which is why I am seeking some recommendations. As I've stated in quite a few posts, my board is a SuperMicro X10SL7-F: https://www.supermicro.com/products/motherboard/xeon/c220/x10sl7-f.cfm It has 2 PCIe slots: 1 x PCI-E 3.0 x8 (in an x16 slot) and 1 x PCI-E 2.0 x4 (in an x8 slot). I have a Gigabyte GeForce GTX 1050 Ti OC LP 4GB passed through to a LibreELEC client VM in the PCI-E 3.0 x16 (x8) slot. This card physically blocks my use of the PCI-E 2.0 x8 (x4) slot due to its size. I would like to be able to drop a GPU in that slot too ... hence the need for a riser. Thanks in advance. D
  2. Server is still stable as all hell since rollback.
  3. I got very lucky and realised there was a connection issue resulting from a physical move, so I didn't have to upgrade. Yay!
  4. EDIT: After a hard reset I rolled back to 6.6.6 with no issue. Will report back on how stable the system runs for the next 6 hours while we catch a few movies. I'm guessing (unless there just happens to be a coincidental HW failure at the same time as I upgraded) all will be fine, as it has been. EDIT2: Can a Mod please move this post to this thread? I read it start to finish and it seemed a very similar issue. EDIT3: It's late so maybe I'm operating a bit slowly, but reading the console output again, could this be a CPU sleep state issue? Even with an E3-1241v3 CPU, and never having had an issue on any previous unRAID version over the last couple of years? EDIT4: After a successful rollback, things are as stable as ever. Hi All. I've just upgraded to 6.7 RC2. I was in a trailblazing mood this afternoon. Unfortunately, after upgrading and everything seeming fine, the server locked up. I couldn't ping it, access the GUI, or telnet in (obviously, given ping etc.); Dockers were all inaccessible or unresponsive, and VMs froze (while still outputting to the display but unresponsive to any input device). I had to do a hard reset. Cancelled the Parity Check. Started playing around and exploring. All was going well again. With everything. Then, an hour or so later, frozen. I tried to find some logs that would make an actual bug report worthwhile, but given I had to reset, nothing. Everything that was running on 6.6.6 should be fine on 6.7 RC2, as I didn't see anything in the release notes that required preparation, so I did a straight switch to unstable and upgraded. As an aside, the server is rock solid and has been over the last 2 years, throughout all the upgrades. Monthly maintenance has shown nothing, and I did a disk audit just recently (1 week ago) and all was well. I upgraded from the latest stable version, 6.6.6. I did log into the IPMI console and that was unresponsive too. However, I did manage to capture the attached from the console output.
My build is MainServer, documented here: https://forums.unraid.net/topic/35892-completed-daniels-home-setup-updated-14012016/ Not sure what is going on, but I am going to hard reset again and roll back. Ta D
  5. Nah. I did this and got really sick. My priorities have changed and, tbh, I have abandoned this. It was a good idea when I had time, but even I back up manually now.
  6. Hello Community, Happy New Year 2019!! I have started 2019 with a motherboard I need to replace. It is a Supermicro X10SL7-F. It's out of warranty, and I've tried to get a replacement, but it's looking more expensive here in AUS to buy that board now than it is to buy a new one. So, I'm thinking upgrade. Happy to spend some money if the upgrade is worth it. The board was excellent as it had just the right number of SATA ports as well as IPMI. I also run 3 VMs (1 with a low-powered Nvidia graphics card) and a handful of dockers which only test the CPU every now and then. Last year I considered 2 x gaming / 4K VMs (requiring 2 x16 graphics cards), but the board only had an x8 and an x4 PCIe slot, which was the only limitation I ever found (speed and number of slots), so I just left things as they were. 2 x GbE - plus 1 for IPMI - has been fine (I don't, and can't, see a need for 10GbE). On the board there is an Intel® Xeon® Processor E3-1241v3 (8M Cache, 3.50 GHz) and 2 x Crucial 16GB Kits (8GBx2) DDR3/DDR3L-1600MT/s (PC3-12800) DR x8 ECC UDIMM Server Memory, and it is powered by a Corsair HX850i 80 Plus Platinum 850W Power Supply. I'd like to reuse parts if possible. Has anyone upgraded the same board recently, or would anyone like to chime in with some suggestions or thoughts?! Thanks in advance.
  7. For anyone having an issue setting up this container, here are some tips from me:
- Don't worry that there appear to be no port or config mappings. Everything is fine; you don't need to add these.
- If you set your own IP (which you will probs have to do, given it requires port 80), then ALL other containers it interacts with need their own IP too.
- tvhProxy may need a restart before it will be recognised by TVH.
- Don't mess with ports - use the <ip>:<port> of tvhProxy in the Plex server setup to find the server.
Just my experience - HNY 2019. D
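The "set your own IP" tip above can be sketched with the plain Docker CLI (a sketch under assumptions: the macvlan parent interface `br0`, the subnet/gateway, the address `192.168.1.50` and the container/image names are all illustrative - on unRAID you would normally just pick "Custom: br0" and a fixed IP in the container template instead):

```shell
# Create a macvlan network bound to the LAN interface; unRAID's "Custom: br0"
# network is roughly equivalent. Subnet, gateway and parent are assumptions.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=br0 lan

# Run the container with its own LAN IP so it can bind port 80 without
# clashing with the unRAID GUI; name, IP and image here are illustrative.
docker run -d --name tvhproxy --network lan --ip 192.168.1.50 <tvhproxy-image>
```

With a dedicated IP like this the container is addressed directly on the LAN, which is why the other containers it talks to also need their own IPs.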
  8. For anyone who is installing Ombi for the first time and either cannot log in via OAuth OR Ombi doesn't recognise Plex Media Server (despite pulling the details from your Plex account), the following solutions worked for me:
- Don't try to log in using OAuth on initial setup. Skip the wizard and set yourself a very strong password. Then, once logged in, navigate to Settings>Configuration>Authentication and tick the checkbox which says "Enable Plex OAuth". Then disable any popup blocker you have for your Ombi URL. Log out, then log back in (this time using the newly appearing option to use OAuth) and you're good. For some reason, this method requires a popup login, which the wizard doesn't seem to use (it opens a new page instead). Meaning, you're taken to a new page (where you are successfully logged in - and asked to close the window) but the wizard never continues.
- If you have grabbed the details of your Plex server (via the Media Server settings option) from your Plex account, clicked either "Load Libraries" OR "Test Connectivity", and received an error saying it can't connect to your server - just go to the Docker page, restart the server and try again. It should then work.
- Run a Settings>Media Server>Plex>Manually Run Full Sync right at the end.
Just my experience! HNY 2019! D
  9. Hey Neil. Hope life is treating you well. I appreciate the time you have taken to update this. Thank you.
  10. Thanks. I tried your 2 steps with @piotrasd's LibreELEC binaries. The system found the adapter but not the drivers. Like you, I rolled back.
  11. Yeah, that's what @piotrasd did, and he posted them earlier. I quoted them in my post via the links above. I'm just not sure of the install process. I asked what the process was for installing these files (e.g. where does each of the files within the archive go, does upgraded stock need to be installed first, and can this method run alongside the plugin - i.e. can I keep the plugin installed awaiting @CHBMB's return, or do I have to uninstall it when using the manual method). Also, I'd like to know the process so I can compile them myself from now on without having to rely on others.
  12. Would you mind documenting the steps you took to create these? I am not averse to doing some compiling and self-installation (as opposed to the plugin), but after reading the entire thread 4 times and the wiki I cannot get things to work. EDIT: In the meantime, I assume you just throw the output of each of the archives above onto the flash drive and reboot? Is this right? Also, what is the unraid-media file? I haven't seen it before.
  13. Hi @bonienl , I followed your instructions to the letter, but I still hit issues. All my docker containers are working fine (as you would expect on br1), but my openvpn docker (which is configured on the host) will not communicate with the containers which have their own IP set on my network. It can (once again, as you would expect) communicate with the host. Do you have any suggestions?
  14. Hmmm, I do have 2 ethernet interfaces on the server. They are currently bonded. I'm not sure I get much real life benefit from that bonding setup. I might remove the bond and try that solution.
  15. Thanks for this. However, after reading through the posts I wasn't too taken with the solutions. So, for others' benefit, what I decided to do was:
- Use my existing Ubuntu VM, which is always running (18.04 LTS installed in a minimal config)
- Install docker.io via apt-get
- Install the Portainer management UI docker container
- Give the VM a static IP address
- Deploy the linuxserver.io openvpn-as container into the Ubuntu VM's Docker instance
- Set up openvpn-as as normal
- Port forward 1194 to the Ubuntu VM
- Log in via phone and test.
Now all docker containers with their own IP address can be accessed when I VPN in. There are plenty of other solutions to this (e.g. deploy openvpn-as directly into the VM, use router VPN functionality), but for various reasons (ongoing admin, the power of my router hardware) I didn't want to use them. Happy now. EDIT: Some people might want to know why I want each of my dockers to have its own LAN IP. It is so I can use my router to route certain dockers' internet connections (via their IP) through an external VPN service.
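The steps above can be sketched as shell commands inside the VM (a sketch, assuming the linuxserver.io image and its usual ports; the volume paths and the Portainer image tag are illustrative):

```shell
# Inside the Ubuntu 18.04 VM: install Docker from the distro repos.
sudo apt-get update && sudo apt-get install -y docker.io

# Portainer management UI (port 9000 is its default web UI port).
sudo docker run -d --name portainer -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data portainer/portainer-ce

# linuxserver.io OpenVPN-AS: 943 = admin/client UI, 1194/udp = VPN tunnel.
sudo docker run -d --name openvpn-as --cap-add=NET_ADMIN \
  -p 943:943 -p 9443:9443 -p 1194:1194/udp \
  -v /opt/openvpn-as:/config linuxserver/openvpn-as
```

The static IP for the VM itself and the router's 1194 port forward are done outside Docker (netplan and the router UI respectively).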
  16. Thanks @jonathanm. My search kung fu must be terrible; in a dozen searches I didn't see any discussion. Can you give me a link to the most authoritative thread so I can read up?
  17. EDIT: I didn't post this in the OpenVPN-AS container thread, as I think this is an unRAID network issue. Mods, please move this if you / future replies indicate I am wrong. Hi All, I have an interesting network issue. I establish a VPN connection to my unRAID machine via the linuxserver.io docker OpenVPN-AS. All has always worked well. Recently (as in a few days ago) I decided to change things and give each of my containers its own LAN IP on the same range as all other machines on my LAN (192.168.1.x). I went further and allocated (via the -h docker switch AND DNS in the router) each its own hostname. Now, when I VPN in, I cannot access any docker container UI. I can access other machines on the network fine, and also the unRAID UI. I have tried the IP address as well as the local DNS name (I half expected the local DNS name not to work), but to no avail. When I revert to using a bridge or host port, I can access the container UIs just fine via VPN. There is absolutely no change to local access on the LAN - where I can access each container perfectly fine using either the hostname or the local IP. I imagine this must have something to do with a container accessing a container, but I am not savvy enough here to figure out what is going on and fix it. Any help would be appreciated. Ta, Daniel
  18. This makes perfect sense, thank you. I understand now. Probably unrealistically, I had expected unused settings to disappear when I selected an option which made them irrelevant and there was no guidance in the help text. Issue resolved. Thanks again.
  19. Hi All, During routine maintenance I have noticed (what I consider to be) something weird about container mappings on my main unRAID setup. Note: posting here, as I feel this has nothing to do with any particular container or the docker engine itself. Recently, I decided to map an IP to each of my containers. Played around with the router and added hostnames too. Easily done. e.g. Custom br0 => 192.168.1.203 => nginx.danioj.lan As I had set the container to run (when it was originally set up in bridge / host mode) on port 81 (due to the conflict with the unRAID GUI on port 80), I had anticipated having to go to: http://nginx.danioj.lan:81 Due to my ever-growing laziness, I accidentally left the port assignment off the URL and (just as I hit enter and expected to see a URL-not-found error or similar) I was shocked to see that it resolved. What the? It resolved on port 80? I didn't think this was possible, given the unRAID GUI runs on port 80. I was also "sure" that even though I had allocated the container its own LAN IP address, it still couldn't run on port 80 - there is only one port 80 on the host, after all. EDIT: This is despite me setting the Host Port in the Container Settings to 81. The Docker summary page still shows it mapping to port 80:

192.168.1.203:443/TCP => 192.168.1.203:443
192.168.1.203:80/TCP => 192.168.1.203:80

Checked and double-checked the container settings page. Host port is definitely 81. However, evidence is evidence, so I thought: I'll change the application port within other containers I have (e.g. emby) to port 80 too, meaning I can access those applications using the hostname and no port. It did not work. Despite the application allowing the port to be changed (which I did, then restarted), it wouldn't bind to port 80. When it came back up, the port was 8096 (the default).
What was also weird: I glanced at the port mappings on the docker page in the emby entry and (despite the container's settings only having one port mapping, for 8096) they actually showed 4 mappings:

192.168.1.200:1900/UDP => 192.168.1.200:1900
192.168.1.200:7359/UDP => 192.168.1.200:7359
192.168.1.200:8096/TCP => 192.168.1.200:8096
192.168.1.200:8920/TCP => 192.168.1.200:8920

Again ... what the??? Something screwy is going on here. So, in summary, I have the following: 2 containers' settings indicating 1 port mapping (the default port of the application), but the docker summary page showing multiple mappings each; and I can access nginx on port 80 when a separate IP is allocated to that container, but not when I try to do the same with another container. I am scratching my head here ...
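One way to cross-check what Docker has actually recorded (rather than what the template page shows) is to query the daemon directly; the container name `nginx` here is illustrative:

```shell
# List the published port mappings Docker holds for the container.
docker port nginx

# Dump the raw port section of the container's metadata.
docker inspect --format '{{json .NetworkSettings.Ports}}' nginx
```

Worth noting: when a container sits on a custom br0 (macvlan) network with its own LAN IP, Docker does not NAT its ports at all - every port the application listens on inside the container (e.g. 80 for nginx, or 1900/7359/8096/8920 for emby) is reachable directly on that IP, regardless of any host-port setting in the template. That would account for nginx answering on port 80 despite the "81" mapping, and for the summary page listing every exposed port.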
  20. I think, if we were to talk features, my unRAID life would be complete if we had:
- the ability to run a VM independently of Array status (to facilitate pfSense use and/or a primary desktop)
- formal support for virtualising unRAID as a guest
  21. There is no reason why you cannot have all the space available to you if you buy a 1TB SSD. If you format the SSDs as BTRFS you can run them in RAID0, and unRAID will treat them as 1 big drive. I run 3 x 250GB SSDs and unRAID sees them as 1 big 750GB cache disk. I don't run anything outside the array (utilising the unassigned devices plugin) anymore, as I prefer to use unRAID as it was intended (VMs, Docker etc.) from the Cache device. As for the type of SSD, I don't think you can go past the Samsung EVO range. I find them to be excellent value for money.
  22. Please grab and post your diagnostics file. From the GUI: Tools>Diagnostics>Download. From the CLI: Type diagnostics then go and get the generated file from the flash drive.
  23. Interesting feedback for LimeTech. I am interested to know what the driver behind the post was. Has a recent feature (or promise of a feature) given you cause for concern? I have always felt (like you, it seems) that unRAID should maintain its position as a storage-centric product first, with all the other things it is (and can be) second. So much so that I was concerned myself when they started integrating Docker and KVM. I remember feeling at the time that their efforts were best spent concentrating on more "storage related" features such as Dual Parity. On reflection, I feel that Limetech made a great move. If they had listened to me, it would have had them losing ground (and custom) to other competing products. Dual Parity came eventually (and they did a great job ensuring it was implemented correctly), but not before they made great strides to keep the product relevant and current in meeting what many new customers want from a NAS appliance (e.g. application hosting). It's worth noting that we now refer to Docker as a means of ensuring that the core product remains as it is, BUT in fact Docker itself was, just a short time ago, one of those features integrated into unRAID which really had nothing to do with the original product. jm2c.
  24. Me too. However ... these days a disk Clear (to get that flag on the disk) doesn't take the Array down - or make it or the GUI unresponsive (a more accurate description). As the disk I was using was from another unRAID server (and had already been through many rigorous tests), I knew it was fine - so I had no need to clear it outside the array (especially as, per above, this no longer results in downtime), hence why I just added it. What I was unclear on (no pun intended - happy accident) was what was recorded in that history log, which made my post look nuts. I have cleared up (again - no pun intended - another happy accident) the original post. All makes sense now. Sigh. Sorry folks.
  25. Something required a clear .... I remember it! EDIT: Oh crap, I have seriously got brain fog. I did add another 8TB disk too. Sigh. So it goes something like this: Entry 1: Parity Check Entry 2: Parity Sync as a result of adding another 8TB Parity disk Entry 3: Clear as a result of additional 8TB disk Entry 4: Monthly parity check Entry 5: 3TB => 8TB disk rebuild Entry 6: 3TB => 8TB disk rebuild God, I feel like I have just spammed this beloved thread! I will make the edits to the original post. What fluff.