
Diggewuff


Posts posted by Diggewuff

  1.  

    3 hours ago, noties said:

    Take a look a few posts back and you'll see me trying to do the same thing.  I have a similar setup (but using br0.30 as interface)

    My experience was that Telegraf would not work at all unless in host mode. You'll need to make sure "Host access to custom networks" is turned ON in order for Telegraf (in host mode) to talk to your InfluxDB container.

     

    The only thing I still don't have working is the running VMs/Dockers icons. I haven't had a chance to troubleshoot these.

     

    My setup below:

    HTH

     

    [two screenshot attachments of the setup; images not loading]

    I think I've got it configured the same way you did; it's really sad that the pictures aren't loading. I'd be glad for any advice if you've gotten any further.

  2. 2 hours ago, noties said:

    Heya... Take a look at post #1 and this post: 

    They have pretty much everything you need.

     

    Can I run Telegraf in a custom br0.10 network? I run all the related services there, i.e. InfluxDB, Grafana, and my reverse proxy, and I cannot reach InfluxDB on the br0.10 network from Telegraf's standard host network.
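Based on noties' description in the earlier post, a hypothetical sketch of the working pattern (the IP, paths, and names are placeholders; verify against your own setup): Telegraf stays on the host network, "Host access to custom networks" is enabled under Settings > Docker, and Telegraf's output is pointed at the InfluxDB container's static IP on the custom bridge.

```shell
# Hypothetical sketch -- adjust IP, paths, and names to your setup.
# 1. Enable Settings > Docker > "Host access to custom networks" in the Unraid GUI.
# 2. Run Telegraf on the host network:
docker run -d --name telegraf \
  --network host \
  -v /mnt/user/appdata/telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:ro \
  telegraf

# 3. In telegraf.conf, point the output at InfluxDB's br0.10 address, e.g.:
#    [[outputs.influxdb]]
#      urls = ["http://192.168.10.5:8086"]

# 4. Sanity-check reachability from the Unraid host itself:
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.10.5:8086/ping
```

Without the host-access setting, traffic from a host-networked container to a custom bridge like br0.10 is blocked by design, which is why Telegraf can't reach InfluxDB otherwise.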

  3. 5 hours ago, jonathanm said:

    I'm not talking about the previous apps portion. That would be handy, but like you say is not going to necessarily be what is intended. I'm talking about the point in time when CA Backup does its thing, and backs up your appdata. That way, if you lose your cache pool and are restoring from ground zero it gets restored with the backed up appdata set, which theoretically would match the containers you want to reinstall and get running, at least to a large extent.

     

    Also, it would be handy if installing a container wouldn't immediately start it, but instead offer an option like VM's do, to set it up but not start. When I had to recreate my cache pool for real, the multi install put things back and started them in an order which created some errors and chaos, because of the way I have things set up with dependencies and scripted starts. Having everything restored but not started would be ideal.

     

    I script some containers startup with logic that can't be handled with the built in binary auto start with delay. I need conditionals, some of which were handled by your deprecated auto start plugin, some of which I coded myself. Since auto start and delay are now part of the base, I just handle all the special case containers with scripts.

    Can you give us some examples of your scripted startup procedure? I'm really interested in that. 
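jonathanm's actual scripts aren't posted here, but a minimal sketch of the kind of conditional start he describes might look like this (the container names, IP, and port are hypothetical placeholders): wait until a dependency answers on its TCP port, then start the dependent container.

```shell
#!/bin/bash
# Hypothetical sketch of a conditional container start.
# Names, IP, and port are placeholders -- adapt to your own stack.

# Return 0 once host:port accepts a TCP connection, 1 after max tries.
wait_for_port() {
  local host=$1 port=$2 tries=${3:-30}
  local i
  for ((i = 0; i < tries; i++)); do
    if timeout 1 bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Start the database first, then start the app only once the DB is reachable.
docker start influxdb
if wait_for_port 192.168.10.5 8086 60; then
  docker start grafana
else
  echo "influxdb did not come up; leaving grafana stopped" >&2
fi
```

On Unraid this kind of script is typically run at array start via the User Scripts plugin; the built-in auto-start delay can't express the "only if the dependency is actually up" condition.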

  4. I'm using the External storage support 1.7.0 app to mount local Unraid shares into Nextcloud 16.0.7 via Docker mappings. I'm not using SMB to map them because there is more than one share, and SMB turned out to be much slower when navigating them from Nextcloud.

    When syncing data to these mapped shares via WebDAV, or when creating a folder via the web GUI,

    • permissions of created folders are not 777 (drwxrwxrwx) but 755 (drwxr-xr-x);
    • permissions of a txt file created via the web GUI are 644 (-rw-r--r--), not 666 (-rw-rw-rw-) as expected for Unraid.

     

    Is there any option to set the proper UMASK to make Nextcloud respect Unraid's permission scheme?
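The 755/644 versus 777/666 difference is exactly what a umask of 022 versus 000 produces, so if your Nextcloud image honors a UMASK environment variable (the linuxserver.io images document one; check the docs for the image you run), setting UMASK=000 should give Unraid-style permissions. The effect of the two umasks, demonstrated in plain shell:

```shell
# Show what umask 022 (a typical default) vs umask 000 produces.
tmp=$(mktemp -d)
(umask 022; mkdir "$tmp/dir022"; touch "$tmp/file022")
(umask 000; mkdir "$tmp/dir000"; touch "$tmp/file000")
stat -c '%a %n' "$tmp"/*   # dir022 -> 755, file022 -> 644, dir000 -> 777, file000 -> 666
```

In other words, the container isn't doing anything exotic: it is simply creating files under the default 022 mask instead of the 000 mask Unraid expects.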

  5. On 12/20/2019 at 6:49 AM, kmwoley said:

    I dislike this solution for a few reasons. The primary of which is that passing in the entire BT bus into a Docker container only works if it's running as privileged and on the host network IIRC. In my configuration, I run my containers in VLANs for network isolation, making it such that they can't access the Bluetooth devices. If the BT support were removed from the host OS, I'd then have to create/maintain a Docker container in a higher-permission config just to run a simple Bluetooth script. Feels like overkill to go in that direction to me. 

     

    It's nice that there's multiple ways to do it so that if someone runs into a driver issue they need to work around, they can use Docker. But I don't think the host OS support necessitates the frequent/trivial OS updates that seem to be the desired reason to remove the support.  

     

    TL;DR - I appreciate and use the native host OS support for bluetooth and would like to keep it.

    Can you tell me how you are passing Bluetooth into a container without having to run it on the host network? I haven't accomplished that so far.

  6. On 12/28/2019 at 4:59 PM, Diggewuff said:

    I'm using the External storage support 1.7.0 app to mount local Unraid shares into Nextcloud 16.0.7 via Docker mappings. I'm not using SMB to map them because there is more than one share, and SMB turned out to be much slower when navigating them from Nextcloud.

    When syncing data to these mapped shares via WebDAV, or when creating a folder via the web GUI,

    • permissions of created folders are not 777 (drwxrwxrwx) but 755 (drwxr-xr-x);
    • permissions of a txt file created via the web GUI are 644 (-rw-r--r--), not 666 (-rw-rw-rw-) as expected for Unraid.

    Is there any option to set the proper UMASK to make Nextcloud respect Unraid's permission scheme?

    Any advice on this?

  7. I'm using the External storage support 1.7.0 app to mount local Unraid shares into Nextcloud 16.0.7 via Docker mappings. I'm not using SMB to map them because there is more than one share, and SMB turned out to be much slower when navigating them from Nextcloud.

    When syncing data to these mapped shares via WebDAV, or when creating a folder via the web GUI,

    • permissions of created folders are not 777 (drwxrwxrwx) but 755 (drwxr-xr-x);
    • permissions of a txt file created via the web GUI are 644 (-rw-r--r--), not 666 (-rw-rw-rw-) as expected for Unraid.

    Is there any option to set the proper UMASK to make Nextcloud respect Unraid's permission scheme?

  8. On 10/12/2019 at 1:38 AM, Diggewuff said:

    When creating a new folder in a Local external storage directory, Nextcloud gives it the permissions drwxr-xr-x, which isn't in accordance with the Unraid permission scheme and prevents me from changing it via SMB afterwards. How can I tell Unraid to always create folders with drwxrwxrwx (777) permissions?

     

    51 minutes ago, Ustrombase said:

    Is it better to use SMB or to simply mount the volume via docker?

    The external storage is actually mounted via Docker, but every folder created by Nextcloud via the web UI or the app gets the wrong permissions.

  9. When creating a new folder in a Local external storage directory, Nextcloud gives it the permissions drwxr-xr-x, which isn't in accordance with the Unraid permission scheme and prevents me from changing it via SMB afterwards. How can I tell Unraid to always create folders with drwxrwxrwx (777) permissions?
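Unraid's own remedy for this is the Tools > New Permissions utility (or the Docker Safe New Perms variant from Community Applications); the same fix can be scripted for a single share. A sketch, where the share path in the usage example is a placeholder and the chown requires root:

```shell
# Restore Unraid-style permissions on one share (sketch; path is a placeholder).
fix_unraid_perms() {
  local share=$1
  chown -R nobody:users "$share" 2>/dev/null || true   # needs root on Unraid
  find "$share" -type d -exec chmod 777 {} +           # dirs  -> drwxrwxrwx
  find "$share" -type f -exec chmod 666 {} +           # files -> -rw-rw-rw-
}

# Example: fix_unraid_perms /mnt/user/nextcloud_data
```

This only repairs permissions after the fact; folders Nextcloud creates later will still come out 755 unless the container's umask is changed as discussed above.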

  10. 5 hours ago, testdasi said:

    Assuming you have a realistic expectation of claiming warranty with Samsung for an enterprise SSD (that assumption cannot be taken for granted!), it comes down to how much you expect to write.

    • The PM983 is rated for 1.3 DWPD over only 3 years = 0.96 TB * 1.3 * 365 * 3 ≈ 1366 TB TBW.
    • The 970 Evo has a 5-year warranty but only for 600 TB TBW (= 600 / 5 / 365 ≈ 0.329 DWPD).

    In other words, do you expect to write 600 TB over 5 years or ~1366 TB over 3 years?

    Let's take 5 years as an example. From my calculation, TBW will be at about 1000, maybe a bit less because of write amplification (WA) and the larger drive size.

    The 970 will be way beyond its specified TBW but still under warranty. The SMART check will again be failing because the percentage-used value is over 100%, so I'll again have no idea whether the drive is about to fail, even if it still works.

    The PM983 will still be within its TBW spec but out of warranty because of the elapsed time. Percentage used and the SMART check will be OK, but there's no warranty.

     

    Usually I claim warranty when there is any, but if the claim process is costly and the prospects of success are low, I just let it go. That has been my experience with Samsung consumer drives; no idea whether it will be different with Samsung enterprise drives. Advance RMA, maybe? WD offers that.

     

    As far as form factor goes: my motherboard accepts 22110.

     

    As you have both drives installed:

    Did you encounter any differences in read or write speeds?

    How does each drive's percentage-used value scale with your TBW values? Are they indeed scaling differently, as I assumed above?

    Could you maybe post those values for each drive as a reference?

     

    • Trim has always run daily on my SSD.
    • Can I improve anything about wear leveling (WL)?
    • Would you rather suggest buying a bigger consumer SSD instead of a better-rated, smaller enterprise SSD, just to leave space free and hope the controller makes good use of it, thereby reducing the WA?
    • About 180-200€ for a Samsung PM983 960GB M.2 seems reasonable to me. Would you consider that enterprise grade, or would you rather buy a Samsung 970 Evo M.2 1TB for about the same price?

    Thank you very much for sharing your expertise on that.
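The DWPD/TBW conversions behind the comparison above can be double-checked quickly (TBW = capacity in TB × DWPD × 365 × warranty years):

```shell
awk 'BEGIN {
  # PM983 960GB: 1.3 DWPD over a 3-year warranty
  pm983_tbw = 0.96 * 1.3 * 365 * 3
  # 970 Evo 1TB: 600 TBW over a 5-year warranty, expressed as DWPD
  evo_dwpd = 600 / (1.0 * 365 * 5)
  printf "PM983 TBW: %.1f TB\n", pm983_tbw
  printf "970 Evo DWPD: %.3f\n", evo_dwpd
}'
```

Note that 0.96 × 365 × 3 alone gives only about 1051; including the 1.3 DWPD factor puts the PM983's rated total at roughly 1366 TB, about four times the per-day endurance of the 970 Evo.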

  11.  

    16 minutes ago, testdasi said:

    So a better approach would be to rethink your config. 512GB is not a lot of space and when you add static data (e.g. your VM vdisk, docker img, docker appdata etc.), you don't have much left to over-provision for write activities.

    Thanks for your detailed explanations. Downloads and Plex transcodes cannot be the main reason for the nearly 400 TBW.

    18 hours ago, johnnie.black said:

    Is the SSD used for Docker/VMs? If yes, they are known to write a lot to the SSD, much more than expected.

    That seems more reasonable to me. Or am I misunderstanding something here?

    If not, over-provisioning seems like the most reasonable approach to me.

    21 minutes ago, testdasi said:

    you don't have much left to over-provision for write activities.

    Is it possible to over-provision any SSD myself?

    I don't have any VM vdisks, and my Docker image is about 100 GB, so 500 GB of usable space is plenty for my use. Switching to 1 TB, or say an over-provisioned 840 GB, would already be a huge upgrade, but I don't want to lose the speed of NVMe.

    I cannot find any fast NVMe drives that are rated for significantly higher TBW.
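To the over-provisioning question above: on most SSDs you can over-provision manually simply by leaving part of the drive unpartitioned, ideally after discarding all blocks first so the controller knows those LBAs are free. A destructive sketch, where the device name is a placeholder:

```shell
# DESTRUCTIVE sketch -- wipes the drive. Device name is a placeholder.
dev=/dev/nvme0n1

# Discard all LBAs so the controller treats every block as free...
blkdiscard "$dev"

# ...then partition only ~80% of the capacity and leave the rest unallocated.
parted --script "$dev" mklabel gpt mkpart primary 0% 80%
```

The untouched 20% gives the controller extra spare area for wear leveling and garbage collection, which tends to reduce write amplification under sustained writes.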

  12. 3 minutes ago, johnnie.black said:

    Is the SSD used for docker

    yes 

     

    11 minutes ago, Diggewuff said:

    I'm not moving huge amounts of data regularly. But I have quite a few Docker containers (35) running, logging, writing to databases, and being updated regularly.

    I specifically decided on an NVMe cache because of the containers.

    Maybe that was a pretty costly decision, in light of this experience. 😖

     

    I think I'll use the drive until it fails. But from now on I won't have any warning before the point of failure, because SMART has already flagged the drive as failed.

    Would adding a second cache drive in RAID1 be a good idea?

    My mainboard (Supermicro X11SSi-LN4F) only has one NVMe slot. Will RAID1 work with an NVMe drive on a PCIe adapter card?
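On the monitoring point: even after the overall SMART health flag trips, the NVMe wear counters keep updating and can still be watched manually. A quick check might look like this (the device path is a placeholder):

```shell
# Print the NVMe wear-related attributes (device path is a placeholder).
smartctl -A /dev/nvme0 | grep -Ei 'percentage used|data units written|available spare'
```

The NVMe "Percentage Used" value is allowed to exceed 100%, so it can still serve as a relative wear indicator on a drive that has already passed its rated endurance.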

  13. The TBW value also looks pretty high to me: so much written, and only a quarter of that read. My whole array holds only 1/13 of that amount of data. Does that sound plausible to you?

    Additional info: I'm not moving huge amounts of data regularly. But I have quite a few Docker containers (35) running, logging, writing to databases, and being updated regularly.
