Helmonder

Everything posted by Helmonder

  1. I run a daily backup with the Docker backup plugin.. actually works fine.. For the rest I use the cache as -cache-, which means I do not care a lot if I lose something.. I think the cache pools are most valuable for situations where they are not really used as cache, but as "fast storage"..
  2. Does this only pertain to BTRFS? My M.2 cache drive (XFS) has written 233TB and read 500TB in 1.7 years... That also seems like a lot for a 1TB drive... If I calculate correctly, the whole 1TB drive is completely rewritten roughly every 3 days (rough math below)... And it is almost always more than half full..
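
    A quick back-of-the-envelope check of that claim, using only the figures quoted above (233TB written, 1.7 years, 1TB drive); everything else is plain arithmetic:

    ```python
    # Sanity check of the write figures quoted above.
    tb_written = 233                  # total TB written reported by SMART
    years = 1.7                       # drive age
    drive_tb = 1                      # 1TB cache drive

    days = years * 365
    tb_per_day = tb_written / days
    days_per_full_rewrite = drive_tb / tb_per_day

    print(f"{tb_per_day:.2f} TB written per day")
    print(f"one full-drive rewrite every {days_per_full_rewrite:.1f} days")
    # ~0.38 TB/day -> one full rewrite roughly every 2.7 days
    ```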
  3. VPN Provider?

    I used to have PureVPN and ExpressVPN... Both gave me 5-8 MB/s.. I switched to Mullvad and I am getting 55 MB/s... If you want you can pay them in bitcoin... No login.. no email...
  4. Get the feeling.. I was basically using it to make it easier on the firewall side to distinguish traffic; I found other ways to do that, so this works fine as well.. Glad to be of help!
  5. Unraid runs totally in memory, so you would need to put a startup script somewhere on /boot (see the sketch below).. Wondering however... What is your use case for running Unraid on a laptop?
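
    A minimal sketch of such a startup hook, assuming the usual Unraid layout where /boot is the persistent flash drive and /boot/config/go is executed at boot; the script path /boot/custom/my_startup.sh is hypothetical:

    ```python
    # Append a custom startup command to /boot/config/go so it survives reboots
    # (the root filesystem lives in RAM; only the flash drive at /boot persists).
    from pathlib import Path

    go_file = Path("/boot/config/go")
    startup_cmd = "/boot/custom/my_startup.sh &"   # hypothetical script of your own

    contents = go_file.read_text()
    if startup_cmd not in contents:
        with go_file.open("a") as f:
            f.write("\n" + startup_cmd + "\n")
        print("Added startup command to", go_file)
    else:
        print("Startup command already present")
    ```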
  6. PureVPN is continuing to give trouble.... I just registered an OpenVPN account with Mullvad. Worked out of the box.... It also does WireGuard by the way... Would be nice if we could get the option to use that within the container as well? (EDIT) BTW... Mullvad is -extremely- fast... I am getting 60 MB/s where PureVPN would give me 5-6 MB/s for the same content..
  7. What about forgetting about the mover and just disabling the VM and Docker services completely (through Settings), then moving the files manually and starting Docker and the VMs again..?
  8. Safe mode disables the plugins and does not auto-start the array... Do you have Docker containers that have their own IP address? Your symptoms look exactly like what I had... For me the system became rock solid after removing the dedicated IP addresses for the containers..
  9. I was just about to get a Backblaze subscription when I saw this... Great tool, and thanks for the Unraid integration!
  10. All folders in your Unraid array are available under /mnt/user/<sharename>. So say you have an Unraid share called "media"; then there will be a path in Unraid that is /mnt/user/media.

    Now when you start a Docker container (say Plex), Plex will not have access to your files; Plex only sees its own files. You need to "map" paths from Unraid into Plex. You can think of it as a mount command (not the same, but it is like it). In the Docker container's settings you configure what you want this share to be called inside the container; let's say you call it "movies", then you can just say it is called "/movies". In the field after that you select the actual Unraid folder.

    In Linux it is best practice to mount external file systems under a folder called /mnt, but there is no technical need to do so, and mounting directly in the root works just fine. There is also no need to give the mapping the same name: you can map /Poopiesnort to /mnt/user/movies without a problem. Plex will then see the contents of your /mnt/user/movies folder in a folder called /Poopiesnort. Got it?
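
    To make the host-path to container-path mapping concrete, here is the same idea expressed with the Docker SDK for Python (pip install docker); this is not what the Unraid template screen does under the hood, just an illustration using the example paths from above:

    ```python
    # The host folder /mnt/user/movies appears inside the container as /Poopiesnort.
    # Requires the "docker" Python package and a running Docker daemon.
    import docker

    client = docker.from_env()

    output = client.containers.run(
        "alpine",                       # any small image is fine for this demo
        command="ls /Poopiesnort",      # list the mapped folder from inside the container
        volumes={"/mnt/user/movies": {"bind": "/Poopiesnort", "mode": "ro"}},
        remove=True,
    )
    print(output.decode())              # same contents as /mnt/user/movies on the host
    ```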
  11. Just noticed that Dynamix Cache Dirs is listed as being not known to Community Applications.. Seems like an error? It is probably one of the plugins almost everyone is using?

    Issue Found: dynamix.cache.dirs.plg is not known to Community Applications. Compatibility for this plugin CANNOT be determined and it may cause you issues.
  12. My log shows a large number of the following error lines yesterday:

    Apr 19 06:46:23 Tower nginx: 2020/04/19 06:46:23 [error] 16603#16603: nchan: Out of shared memory while allocating message of size 9468. Increase nchan_max_reserved_memory.
    Apr 19 06:46:23 Tower nginx: 2020/04/19 06:46:23 [error] 16603#16603: *5303164 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
    Apr 19 06:46:23 Tower nginx: 2020/04/19 06:46:23 [error] 16603#16603: MEMSTORE:01: can't create shared message for channel /disks
    Apr 19 06:46:24 Tower nginx: 2020/04/19 06:46:24 [crit] 16603#16603: ngx_slab_alloc() failed: no memory
    Apr 19 06:46:24 Tower nginx: 2020/04/19 06:46:24 [error] 16603#16603: shpool alloc failed
    Apr 19 06:46:24 Tower nginx: 2020/04/19 06:46:24 [error] 16603#16603: nchan: Out of shared memory while allocating message of size 9468. Increase nchan_max_reserved_memory.
    Apr 19 06:46:24 Tower nginx: 2020/04/19 06:46:24 [error] 16603#16603: *5303167 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
    Apr 19 06:46:24 Tower nginx: 2020/04/19 06:46:24 [error] 16603#16603: MEMSTORE:01: can't create shared message for channel /disks

    The errors were not there before the 19th and are not there after the 19th, and the system has not been rebooted. The errors appeared yesterday during the time a parity rebuild was running (due to me removing a couple of old drives from the array). The system appears to be functioning fine at the moment. Diagnostics are attached. tower-diagnostics-20200420-1349.zip
  13. Though not as extreme as this, over a couple of years I have also experienced BTRFS failures three times.. corruption of some kind that could not be fixed.. Had to reformat the whole thing.. The last time I just formatted it as XFS.. Works fine..
  14. But even if your flash drive fails, that is not a hopeless situation.. And even IF you could not recover it (which you should be able to), all the individual disks are readable and have their own complete files on them. So even if all other hardware evaporates, you still have your data as long as your drives are there.
  15. Thanks. I will keep you posted. The VPN provider's helpdesk does not understand the issue at all; it keeps asking for my settings page 🙂
  16. No... still not working... But if you think it's on the side of the VPN provider, I guess I'll just leave it alone for a couple of days and see if it comes back... Thanks very much for your help!
  17. Herewith! I read that it gives an AUTH FAILED in there.. The password has not been changed in over a year though... I just changed it to be sure; it did not make a difference, and the new password also ended up in the credentials file in the openvpn folder.. So that part works.. supervisord.log
  18. Thanks! It's starting up now. I will wait for 5 minutes and get back to you!
  19. Using PureVPN and trying to use the Dutch endpoint.
  20. For some reason I cannot reach the web page of the GUI anymore... Setting VPN to "no" makes it work again, but this has been working for a very long time... I think it stopped working yesterday evening.. I have Pi-hole running, but it is not active for my Docker containers (they use Cloudflare DNS), so it should not be the issue... No other firewall is in between.. What could this be? With some pointers in some direction I will be able to fix it, but I have no idea where to look now..
  21. Make sure you set your inotify watch limit high (quick check below).. Nothing will go wrong if you don't; Syncthing will ask for an increase when needed.
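
    A small sketch for checking the current limit on Linux; the 204800 target below is just an example value, not a number from Syncthing:

    ```python
    # Read the current inotify watch limit; Syncthing warns when it is too low.
    # Raising it is done via sysctl (fs.inotify.max_user_watches).
    from pathlib import Path

    limit = int(Path("/proc/sys/fs/inotify/max_user_watches").read_text())
    print(f"fs.inotify.max_user_watches = {limit}")

    if limit < 204800:  # example threshold; pick what fits the number of folders you sync
        print("Consider: sysctl -w fs.inotify.max_user_watches=204800")
    ```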
  22. There is no real reason this should happen though.. Do you have Plex set up with its own IP address for the Docker container, or is it using bridge mode and the main IP address? If Plex itself remains reachable and only remote access stops working, then it looks like something in your router... basically there is nothing special that Plex itself does to make that work...
  23. https://techwiser.com/use-calibre-guide-for-kindle/
  24. That is a very nice design.... If not for the manual work I would be willing to do the same! I wonder though.. I read that we might be getting multiple cache pools.. If that is true, maybe I could do something similar to your process:

    - The "regular" array is read-only;
    - Incoming files get written to the SSD cache (1TB cache);
    - On a periodic basis this gets copied automatically to cache-2 (let's say a 10TB drive that holds all shares);
    - Periodically I copy the contents of cache-2 manually to the regular array.

    That would minimise the manual work to a couple of times per year.. Would fit my use case..