Everything posted by danioj

  1. Hey Neil. Hope life is treating you well. I appreciate the time you have taken to update this. Thank you.
  2. Thanks. I tried your two steps with @piotrasd's LibreELEC binaries. The system found the adapter but not the drivers. Like you, I rolled back.
  3. Yeah, that's what @piotrasd did, and he posted them earlier; I quoted them in my post in the links above. I'm just not sure of the install process. I asked what the process was for installing these files (e.g. where does each file within the archive go, does the upgraded stock build need to be installed first, and can this method run alongside the plugin - i.e. can I keep the plugin installed while awaiting @CHBMB's return, or do I have to uninstall it when using the manual method). Also, I'd like to know the process so I can compile them myself from now on without having to rely on others.
  4. Would you mind documenting the steps you took to create these? I am not averse to doing some compiling and self-installation (as opposed to using the plugin), but after reading the entire thread 4 times, plus the wiki, I cannot get things to work. EDIT: In the meantime, I assume you just throw the output of each of the archives above onto the flash drive and reboot? Is this right? Also, what is the unraid-media file? I haven't seen it before.
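     EDIT 2: for anyone guessing along with me, this is roughly what I plan to try - a sketch only, and it assumes the archives extract to replacement boot files (bzimage / bzroot), which nobody has confirmed yet:

        # back up the stock boot files on the flash drive first
        cp /boot/bzimage /boot/bzimage.stock
        cp /boot/bzroot /boot/bzroot.stock
        # copy the extracted files over the top, then reboot
        # (file names assumed - adjust to whatever the archive actually contains)
        cp bzimage bzroot /boot/
        reboot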
  5. Hi @bonienl, I followed your instructions to the letter, but I still hit issues. All my docker containers are working fine (as you would expect on br1), but my openvpn docker (which is configured on the host) will not communicate with the containers which have their own IP set on my network. It can (once again, as you would expect) communicate with the host. Do you have any suggestions?
  6. Hmmm, I do have 2 ethernet interfaces on the server. They are currently bonded. I'm not sure I get much real-life benefit from that bonding setup. I might remove the bond and try that solution.
  7. Thanks for this. However, after reading through the posts I wasn't too taken with the solutions. So, for others' benefit, what I decided to do was:
     - Use my existing Ubuntu VM, which is always running (installed 18.04 LTS in a minimal config)
     - Install docker.io via apt-get
     - Install the Portainer management UI container
     - Give the VM a static IP address
     - Deploy the linuxserver.io openvpn-as container into the Ubuntu VM's Docker instance
     - Set up openvpn-as as normal and port forward 1194 to the Ubuntu VM
     - Log in via phone and test (rough commands sketched below)
     Now all docker containers with their own IP address can be accessed when I VPN in. There are plenty of other solutions to this (e.g. deploy openvpn-as directly into a VM, use the router's VPN functionality) but for various reasons (ongoing admin, the power of the router hardware) I didn't want to do that. Happy now. EDIT: some people might want to know why I want each of my dockers to have its own LAN IP. It is so I can use my router to route certain dockers' internet connections (via their IP) through an external VPN service.
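     For anyone following the same route, the container commands were roughly these - a sketch, with the standard published image names and ports as I understand them; check the linuxserver.io and Portainer docs before copying:

        # on the Ubuntu VM, after: apt-get install docker.io
        # Portainer management UI
        docker run -d --name portainer -p 9000:9000 \
          -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer
        # linuxserver.io OpenVPN-AS (943 admin UI, 9443 TLS, 1194/udp tunnel)
        docker run -d --name openvpn-as --cap-add=NET_ADMIN \
          -p 943:943 -p 9443:9443 -p 1194:1194/udp \
          -e PUID=1000 -e PGID=1000 -e TZ=Australia/Melbourne \
          linuxserver/openvpn-as
        # then forward UDP 1194 on the router to the VM's static IP

     The TZ / PUID / PGID values are just mine - set your own.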
  8. Thanks @jonathanm. My search kung fu must be terrible; in a dozen searches I didn't see any discussion. Can you give me a link to the most authoritative thread so I can read up?
  9. EDIT: I didn't post this in the OpenVPN-AS container thread as I am thinking this is an unRAID network issue. Mods, please move this if you / future replies indicate I am wrong.
     Hi All, I have an interesting network issue. I establish a VPN connection to my unRAID machine via the linuxserver.io OpenVPN-AS docker. All has always worked well. Recently (as in a few days ago) I decided to change things and give each of my containers its own LAN IP on the same range as all other machines on my LAN (192.168.1.x). I went further and allocated each its own hostname (via the -h docker switch AND DNS in the router).
     Now, when I VPN in, I cannot access any docker container UI. I can access other machines on the network fine, and also the unRAID UI. I have tried the IP address as well as the local DNS name (I half expected the local DNS name not to work) but to no avail. When I revert back to using a bridge or host port, I can access the containers' UIs just fine via VPN. There is absolutely no change to local access on the LAN, where I can access each container perfectly fine using either the hostname or the local IP.
     I imagine this must have something to do with a container accessing a container, but I am not savvy enough here to figure out what is going on to try and fix it. Any help would be appreciated. Ta, Daniel
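     EDIT 2: from later reading, this looks like the host/container isolation Docker applies when containers get their own LAN IP (macvlan): the host - and anything tunnelled through it, like my VPN - cannot reach macvlan containers directly. One workaround people suggest is a macvlan "shim" interface on the host; a sketch only, with addresses from my network and br0 assumed as the parent bridge:

        # create a macvlan interface on the host, parented to the LAN bridge
        ip link add shim link br0 type macvlan mode bridge
        # give it a spare LAN address and bring it up
        ip addr add 192.168.1.250/32 dev shim
        ip link set shim up
        # route the container's address via the shim (repeat per container)
        ip route add 192.168.1.203/32 dev shim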
  10. This makes perfect sense, thank you. I understand now. Probably unrealistically, I had expected unused settings to disappear when I selected an option which made them irrelevant, and there was no guidance in the help text. Issue resolved. Thanks again.
  11. Hi All, during routine maintenance I have noticed (what I consider to be) something weird about container mappings on my main unRAID setup. Note: posting here, as I feel this has nothing to do with any particular container or the docker engine itself.
     I decided to map IPs to each of my containers. Played around with the router and added hostnames too. Easily done. e.g. Custom br0 => 192.168.1.203 => nginx.danioj.lan
     As I had set the container to run (when it was originally set up in bridge / host mode) on port 81 (due to the conflict with the unRAID GUI on port 80), I had anticipated having to go to http://nginx.danioj.lan:81
     Due to my ever-growing laziness, I accidentally left the port off the URL and (just as I hit enter and expected a URL-not-found error or similar) I was shocked to see that it resolved. What the? It resolved on port 80? I didn't think this was possible, given the unRAID GUI runs on port 80. I was also "sure" that even though I have allocated the container its own LAN IP address, it still couldn't run on port 80 - there is only one port 80 on the host, after all. EDIT: This is despite me setting the Host Port in the Container Settings to 81. The Docker summary page still shows it mapping to port 80:
     192.168.1.203:443/TCP -> 192.168.1.203:443
     192.168.1.203:80/TCP -> 192.168.1.203:80
     Checked and double checked the container settings page. Host port is definitely 81. However, evidence is evidence, so I thought: I'll change the application port within other containers I have (e.g. emby) to port 80 too, meaning I can just access the application using the host name and no port. It did not work. Despite the application allowing the port to be changed (which I did, and then restarted), it wouldn't bind to port 80. When it came back up, the port was 8096 (the default). What was also weird was that I glanced at the port mappings on the docker page in the emby entry and (despite only having one port mapping, for 8096, in the settings of the container) it actually showed four mappings:
     192.168.1.200:1900/UDP -> 192.168.1.200:1900
     192.168.1.200:7359/UDP -> 192.168.1.200:7359
     192.168.1.200:8096/TCP -> 192.168.1.200:8096
     192.168.1.200:8920/TCP -> 192.168.1.200:8920
     Again... what the??? Something screwy is going on here. So, in summary, I have the following: two containers' settings indicating one port mapping each (for the default port of the application), but the docker summary page shows multiple mappings for each. I can access nginx on port 80 when a separate IP is allocated to that container, but not when I try to do the same with another container. I am scratching my head here...
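     EDIT 2: for anyone wanting to see what Docker itself thinks is published (rather than what the template shows), this is the kind of thing I ran from the unRAID console - standard docker CLI as far as I know, but adjust the container names to yours:

        # list the ports a container actually publishes
        docker port nginx
        # show the exposed ports baked into the image
        # (these can appear on the summary page even if you never mapped them)
        docker inspect --format '{{json .Config.ExposedPorts}}' nginx
        # show the container's own network address
        docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' nginx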
  12. I think if we were to talk features, my unRAID life would be complete if we had:
     - the ability to run a VM independently of Array status (to facilitate pfSense use and/or a primary desktop)
     - formal support for virtualising unRAID as a guest
  13. There is no reason why you cannot have all the space available to you if you buy a 1TB SSD. If you format the SSDs as BTRFS you can run them in RAID0 and unRAID will treat them as one big drive. I run 3 x 250GB SSDs and unRAID sees them as one big 750GB cache disk. I don't run anything outside the array (utilising the unassigned devices plugin) anymore, as I prefer to use unRAID as it was intended (VMs, Docker etc) from the cache device. As for the type of SSD, I don't think you can go past the Samsung EVO range. I find them to be excellent value for money.
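     If you want to confirm the pool really is striping rather than mirroring, the allocation profile is visible from the console - a quick sketch, assuming the usual /mnt/cache mount point:

        # list the devices in the cache pool
        btrfs filesystem show /mnt/cache
        # the Data line should report RAID0 for a striped pool
        btrfs filesystem df /mnt/cache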
  14. Please grab and post your diagnostics file. From the GUI: Tools > Diagnostics > Download. From the CLI: type diagnostics, then go and get the generated file from the flash drive.
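     From the console that looks something like this (the zip lands in the logs folder on the flash, if memory serves - check the command's output for the exact location):

        diagnostics
        # then collect the file it reports, e.g.
        ls /boot/logs/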
  15. Interesting feedback for LimeTech. I am interested to know what the driver behind the post was. Has a recent feature (or promise of a feature) given you cause for concern? I have always felt (like you, it seems) that unRAID should maintain its position as a storage-centric product first and all the other things it is (and can be) second. So much so that I was concerned myself when they started integrating Docker and KVM. I remember feeling at the time that their efforts were best spent concentrating on more "storage related" features such as Dual Parity. On reflection, I feel that LimeTech made a great move. If they had listened to me, it would have had them losing ground (and custom) to other competing products. Dual Parity came eventually (and they did a great job ensuring that it was implemented correctly), but not before they made great strides to keep the product relevant and current in meeting what many new customers want from a NAS appliance (e.g. application hosting). It's worth noting that we now refer to Docker as a means of ensuring that the core product remains as it is, BUT in fact Docker itself was, just a short time ago, one of those very features that was integrated into unRAID and really had nothing to do with the original product. jm2c.
  16. Me too. However... these days a disk Clear (to get that flag on the disk) doesn't take the Array down - or, more accurately, doesn't make it or the GUI unresponsive. As the disk I was using was from another unRAID server (and had already been through many rigorous tests) I knew it was fine, so I had no need to clear it outside the array (especially as, per the above, this no longer results in downtime), hence why I just added it. What I was unclear on (no pun intended - happy accident) was what was recorded in that history log, which made my post look nuts. I have cleared up (again, no pun intended - another happy accident) the original post. All makes sense now. Sigh. Sorry folks.
  17. Something required a clear... I remember it! EDIT: Oh crap, I have seriously got brain fog. I did add another 8TB disk too. Sigh. So it goes something like this:
     Entry 1: Parity Check
     Entry 2: Parity Sync as a result of adding another 8TB parity disk
     Entry 3: Clear as a result of the additional 8TB disk
     Entry 4: Monthly parity check
     Entry 5: 3TB => 8TB disk rebuild
     Entry 6: 3TB => 8TB disk rebuild
     God, I feel like I have just spammed this beloved thread! I will make the edits to the original post. What fluff.
  18. OK - sorry all, I completely screwed that post up. Please let me assure you, it was all coming from a good place. I didn't realise that clears were recorded in the parity check log as well. NB: I think I will be asking LT for a change to the logging to indicate what it was that was actually run. But I'll get to that later...
     So, based on my log and the timeline of events (before I edit the post again and get it wrong), my log is this:
     2017-04-03, 16:22:17 | 19 hr, 48 min, 40 sec | 112.2 MB/s | OK | 0
     2017-04-02, 19:13:52 | 22 hr, 54 min, 43 sec | 97.0 MB/s | OK | 0
     2017-04-01, 21:00:28 | 20 hr, 30 min, 27 sec | 108.4 MB/s | OK | 0
     2017-03-30, 09:57:57 | 14 hr, 59 min, 16 sec | 148.3 MB/s | OK | 0
     2017-03-29, 18:35:46 | 21 hr, 3 min, 48 sec | 105.5 MB/s | OK | 0
     2017-03-01, 20:02:50 | 19 hr, 32 min, 49 sec | 113.7 MB/s | OK | 0
     I am confused: if Clears are logged as well, why are there not more entries? I did the following (working from the bottom up):
     - Added new 8TB parity disk (no clear was required, just a sync) - I assume that was entry 2
     - Replaced 3TB data disk with another 8TB disk (required a clear)
     - Disk rebuild, 3TB disk => 8TB disk
     - Parity check ran in that time - I assume that was entry 4
     - Replaced 3TB disk with another 8TB disk (required another clear)
     - Disk rebuild, 3TB disk => 8TB disk
     Given that list of events, I am finding it difficult to see what that 148.3 MB/s entry is. If it was a clear, then surely I would have a similar one, as I added a second 8TB disk that required a clear. I am confused.
  19. Yorkshire Aussie @ that! Double trouble! LOL!
  20. Let me start this off by saying I have NOT read the entire thread for this issue. I searched for .DS to see if anything was posted and there was nothing. That might give some clue as to why I am posting. I have been doing some general updating to the server (along with installing some excellent new plugins, of which I think this is one) and I found a conflict between this plugin and User Scripts. I ran the included "This script will delete all .DS_Store files on your array created by Apple's Finder" script in the User Scripts plugin and the ransomware plugin kicked in. Doh! When I say "conflict", I believe the ransomware plugin is behaving as intended, BUT the files in question are inconsequential. Are we able to set an exclusion list? Or a safe plugin list? Just thinking out loud!? EDIT: also, is it possible for everything to work as intended and still have the SMB folders hidden? It bugs me that when I open my share list up I have this huge list of "dummy" shares and I have to scan to find what I want!?
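     For context, the script in question presumably boils down to something like this (my guess at its contents, not a copy of it) - exactly the kind of mass deletion the bait files are designed to catch:

        # delete Finder metadata files across all user shares
        find /mnt/user -name '.DS_Store' -type f -delete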
  21. You're right. That was the Parity Sync. I didn't realise there was a parity check that occurred in there (obviously, as it was the 1st of the month). I am correcting the post now.
  22. I am posting this to continue my contribution to this thread. As you all know, I am a big supporter of these drives for the typical unRAID use case. My bi-directional transfer speeds are rock solid and the server supports several clients running concurrently 24 hours per day, as well as early morning progressive backups.
     For those who don't want to read up, as of 29th March my Array configuration consisted of:
     Parity: 1 x Seagate 8TB Shingle
     Data: 5 x WD 3TB Red, 2 x WD 3TB Green and 1 x Seagate 8TB Shingle
     My monthly parity checks are:
     2017-03-01, 20:02:50 | 19 hr, 32 min, 49 sec | 113.7 MB/s | OK
     2017-02-01, 19:54:16 | 19 hr, 24 min, 15 sec | 114.5 MB/s | OK
     2017-01-01, 19:47:18 | 19 hr, 17 min, 17 sec | 115.2 MB/s | OK
     2016-12-01, 19:47:09 | 19 hr, 17 min, 8 sec | 115.2 MB/s | OK
     2016-11-06, 06:49:45 | 19 hr, 42 min, 11 sec | 112.8 MB/s | OK
     On the 29th March 2017 I added a second Seagate 8TB Shingle as a second Parity. The subsequent Sync record was:
     On the 30th March 2017 I added a further Seagate 8TB Shingle as a data disk. The subsequent Clear record was:
     On the 1st April 2017 my monthly parity check ran. The record was:
     Following the parity check (as I rebooted to update to 6.3.3), one of the WD 3TB Greens failed, so I replaced it with a new Seagate 8TB Shingle. The subsequent rebuild record was:
     I then went on to replace the second WD 3TB Green in the system (as it was from the same batch as the one that had just failed) with another Seagate 8TB Shingle. The subsequent rebuild record was:
     For those not keeping score, my new configuration is:
     Parity: 2 x Seagate 8TB Shingle
     Data: 5 x WD 3TB Red and 4 x Seagate 8TB Shingle
     I am satisfied with these figures. I am currently running a parity check on the system to see if my ~19 hour average remains. I will post when it completes.
  23. Updated from 6.3.2 to 6.3.3 without issue, as usual. I am interested to see an answer to @johnnie.black's post above, though, regarding the "additional" diagnostic information captured (if indeed it wasn't captured before). I can't see a reference to this in the changelog. The only thing referenced relating to diagnostics is: