Helmonder

Members
  • Content Count

    2656
  • Joined

  • Last visited

Community Reputation

13 Good

About Helmonder

  • Rank
    Advanced Member
  • Birthday 03/22/1971

Converted

  • Gender
    Male
  • Location
    The Netherlands, Eindhoven


  1. Helmonder

    VPN Provider?

    I used to have PureVPN and ExpressVPN... Both gave me 5-8 MB/s.. I switched to Mullvad and I am getting 55 MB/s... If you want, you can pay them in Bitcoin... No login.. no email...
  2. Get the feeling.. I was basically using it to make it easier on the firewall side to distinguish traffic; I found other ways to do that, so this works fine also.. Glad to be of help!
  3. Unraid runs entirely in memory, so you would need to put a startup script somewhere in /boot.. Wondering, however... what is your use case for running Unraid on a laptop?
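    A minimal sketch of that, using the stock /boot/config/go startup script (the script being launched is just a placeholder):

        # /boot is the USB flash drive and survives reboots, unlike the
        # in-memory root filesystem, so persistent startup commands go here:
        echo 'bash /boot/config/my-startup-task.sh &' >> /boot/config/go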
  4. PureVPN is continuing to give trouble.... I just registered an OpenVPN account with Mullvad. Worked out of the box.... It also does WireGuard, by the way... Would be nice if we could get the option to use that within the container also? (EDIT) BTW... Mullvad is extremely fast... I am getting 60 MB/s where PureVPN would give me 5-6 MB/s for the same content..
  5. What about forgetting about mover and just disabling the VM and Docker services completely (through Settings), then moving the files manually, and then starting Docker and the VMs again..?
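    Roughly like this, as a sketch (the share name and target disk are illustrative; this assumes the Docker and VM services are already stopped):

        # Copy a share from the cache pool to an array disk, keeping
        # permissions and timestamps, then remove the source once verified.
        rsync -avh /mnt/cache/appdata/ /mnt/disk1/appdata/
        # After verifying the copy:
        # rm -r /mnt/cache/appdata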
  6. Safe mode disables the plugins and does not auto-start the array... Do you have Docker containers that have their own IP address? Your symptoms look exactly like what I had... For me the system became rock-solid after removing the separate IP addresses for the containers..
  7. I was just about to get a Backblaze subscription when I saw this... Great tool, and thanks for the Unraid integration!
  8. All folders in your Unraid array are mapped as follows: /mnt/user/<sharename>. So say you have an Unraid share called "media"; then there will be a path in Unraid that is /mnt/user/media.

    Now when you start a Docker container (say Plex), Plex will not have access to your files; it will only see its own files. You need to "map" drives from Unraid into Plex. You can see it as a mount command (not the same, but it is like it). In the container configuration you set what you want this share to be called inside the container; let's say you call it "movies", then you can just say it is called "/movies". In the next configuration field you select the actual Unraid folder.

    In Linux it is best practice to mount external file systems under a folder called "/mnt"; there is, however, no technical need to do so, and mounting directly in the root works just fine. There is also no need to give it the same name: you can map /Poopiesnort to /mnt/user/movies without a problem. Plex will then see the contents of your /mnt/user/movies folder in a folder called /Poopiesnort. Got it?
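    On the command line the same mapping would look roughly like this (container name and paths are illustrative; Unraid's template UI generates the equivalent for you):

        # Expose the Unraid share /mnt/user/movies inside the container as /movies.
        docker run -d --name plex \
          -v /mnt/user/movies:/movies \
          plexinc/pms-docker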
  9. Just noticed that Dynamix Cache Dirs is listed as being not known to Community Applications.. Seems like an error? Probably one of the plugins almost everyone is using?

    Issue Found: dynamix.cache.dirs.plg is not known to Community Applications. Compatibility for this plugin CANNOT be determined and it may cause you issues.
  10. My log shows a large number of the following error lines from yesterday:

      Apr 19 06:46:23 Tower nginx: 2020/04/19 06:46:23 [error] 16603#16603: nchan: Out of shared memory while allocating message of size 9468. Increase nchan_max_reserved_memory.
      Apr 19 06:46:23 Tower nginx: 2020/04/19 06:46:23 [error] 16603#16603: *5303164 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
      Apr 19 06:46:23 Tower nginx: 2020/04/19 06:46:23 [error] 16603#16603: MEMSTORE:01: can't create shared message for channel /disks
      Apr 19 06:46:24 Tower nginx: 2020/04/19 06:46:24 [crit] 16603#16603: ngx_slab_alloc() failed: no memory
      Apr 19 06:46:24 Tower nginx: 2020/04/19 06:46:24 [error] 16603#16603: shpool alloc failed
      Apr 19 06:46:24 Tower nginx: 2020/04/19 06:46:24 [error] 16603#16603: nchan: Out of shared memory while allocating message of size 9468. Increase nchan_max_reserved_memory.
      Apr 19 06:46:24 Tower nginx: 2020/04/19 06:46:24 [error] 16603#16603: *5303167 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
      Apr 19 06:46:24 Tower nginx: 2020/04/19 06:46:24 [error] 16603#16603: MEMSTORE:01: can't create shared message for channel /disks

    The errors were not there before the 19th and are not there after the 19th; the system has not been rebooted. The errors appeared yesterday while a parity rebuild was running (due to me removing a couple of old drives from the array). The system appears to be functioning fine at the moment. Diagnostics are attached. tower-diagnostics-20200420-1349.zip
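    If one wanted to experiment, the message itself suggests raising the nchan limit; a minimal sketch, assuming the webGui's config lives at /etc/nginx/nginx.conf (note that edits under /etc do not survive an Unraid reboot):

        # Add a larger nchan shared-memory reservation inside the http block,
        # then ask nginx to reload its configuration.
        sed -i '/http {/a \    nchan_max_reserved_memory 64M;' /etc/nginx/nginx.conf
        nginx -s reload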
  11. Though not as extreme as this, over the last couple of years I have also experienced btrfs failure three times.. corruption of some kind that could not be fixed.. Had to reformat the whole system.. Last time I just formatted in XFS.. Works fine..
  12. But even if your flash fails, that is not a hopeless situation.. And even IF you could not recover it (which you should be able to), all individual disks are readable and carry their own complete files. So even if all other hardware evaporates, you still have your data as long as your drives are there.
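    As an illustration (the device name and mount point are placeholders; this assumes an XFS-formatted data disk):

        # Any single Unraid data disk can be read on its own on any Linux box,
        # since each disk carries a complete, ordinary filesystem.
        mkdir -p /mnt/recovery
        mount -t xfs -o ro /dev/sdX1 /mnt/recovery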
  13. Thanks. I will keep you posted. The VPN provider's helpdesk does not understand the issue at all. Keeps asking for my settings page 🙂
  14. No... still not working... But if you think it's on the side of the VPN provider, I guess I'll just leave it alone for a couple of days and see if it comes back... Thanks very much for your help!
  15. Herewith! I read it gives an AUTH FAILED in there.. The password has not been changed in over a year though... Just changed it to be sure; it did not make a difference. The new password also ended up in the credentials file in the openvpn folder.. So that part works.. supervisord.log
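    A quick way to double-check that part, as a sketch (the appdata path and file name are assumptions based on a typical binhex-style VPN container layout):

        # OpenVPN's auth-user-pass file should hold exactly two lines:
        # the username on the first line, the password on the second.
        cat /mnt/user/appdata/binhex-delugevpn/openvpn/credentials.conf
        # Then look for the auth failure in the container's supervisor log:
        grep -i "auth" /mnt/user/appdata/binhex-delugevpn/supervisord.log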