
Posts posted by SolidFyre

  1. Is it possible to have the LXC path set to something other than the cache or the array?

    I am trying to put it on a separate NVMe drive that I use for all my docker containers, but it doesn't accept the path.

     

    /mnt/disks/Samsung_SSD_970_EVO_Plus_1TB_S4EWNF0M532569N/lxc

     


  2. 8 minutes ago, Kilrah said:

    Yes, paths are always taken from the container settings; if a path is within one of the appdata sources, it's considered internal.

    Ah, alright. Guess I have to bite the bullet and migrate all my containers to the "correct" path then. uhh.... Thanks :)

  3. 13 minutes ago, Kilrah said:

    "Appdata source(s)" at the top, if something's within one of those set there it's internal.

     

    Tried to set it several times, but it reverts to the absolute path and then regards it as an "external path"; as a result it skips it completely unless I explicitly set it to back up external paths further down.

    Right now it's only backing up the template settings.

    Is it fetching these paths from the container settings even though I explicitly tell it to target these folders?

     

    Screenshot 2024-04-06 122320.png

  4. @KluthR Whenever I try to change my source directories to my actual "internal appdata path", it changes back automatically to the absolute path, which the plugin regards as "external" (/mnt/disks/XXXXX/docker/appdata).

    Is it possible to somehow change what's regarded as an "internal" path?

     

    All my appdata is regarded as external paths, since I set up my server at a time when I had a weird bug with the appdata folder access rights, so I am using another path (/mnt/user/appdata/appdata). Stupid, yes, but it was the only way I managed to solve it at the time.

    I could use "external paths", but then it backs up stuff I don't want, like the external transcode directory for Plex etc.

    Otherwise I would need to migrate all 38 of my containers back to the default appdata path 😅

  5. @Kilrah @Rusty6285 Not sure if I am late to the party, but I had issues deploying Prometheus, managed to fix the broken template, and got it running. I saw that you had issues in September, and maybe you have fixed it by now. It appears the template still hasn't been updated, though.

     

    I identified two issues with it.

    1. The mapping for the yml file is indeed wrong. The correct Docker path for the config should be "/data".


     

     

    2. For some reason (at least for me) the appdata folder for Prometheus installs with owner root:root, which is incorrect and prevents it from starting. Viewing the log shows permission denied errors etc.


     

    It should be changed to the "users" group and your user uid.

    Example: "chown -R 65534:users prometheus/"

     


     

    And it's up!

    There might be more to fix to get it completely up, for example chmod'ing the appdata folder with the correct attributes, but at least it's running; see the sketch below.
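
    For anyone who wants to script both fixes, a minimal shell sketch follows. The appdata location is an assumption based on a default setup, and 65534 is simply the owner uid that worked for me, so adjust both to your system:

# Assumed appdata location; adjust to where your Prometheus appdata lives.
cd /mnt/user/appdata

# Fix 1 is done in the template: map the config volume to /data inside the container.
# Fix 2: hand the folder to the uid the container runs as, with the users group.
chown -R 65534:users prometheus/

# Make sure owner and group can read/write; capital X keeps directories traversable.
chmod -R u+rwX,g+rwX prometheus/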

     

    Cheers 🍻

  6. 5 minutes ago, JorgeB said:

    Disk looks mostly OK and should be fine to use; use a different power connector and SATA cable in case that was the problem.

     

    The SAS/SATA cables are completely new, so they should be fine. I have also ordered new SATA power cables and will try to reintroduce the drive when they arrive.

    Thanks for the help guys :)

  7. So I got a new drive, popped it in and rebuilt the array just fine.

    I took the "broken drive" and hooked it up using USB and went through a complete Pre-clear of the drive without a single issue (drive: sdn).

     

    Here's a new log collection; the 2459 CRC errors reported by SMART are the ones I got before the disk was rejected by the array and I took it out.

    What to make of this? Can I reintroduce the drive to the array again? 🤔

     

     

     

    tytan-diagnostics-20220930-2013.zip

  8. Hello,

     

    I am having problems with one of my drives and am looking for some moral support...

     

    I got about 50 read errors and thought it was a good idea to run a SMART self-test.

     

    The first short self-test went fine with no errors; then I ran an extended test, because this drive has thrown temper tantrums before.

    After about 5 minutes it hit 1074 read errors and the drive was automatically disabled.
    Now it's frozen.

    In the Self Test tab, "latest smart test" is just spinning/loading. When I try to download the logs I get a blank file, so I can't really provide you guys with logs.

    History and Smart Error Log buttons show nothing either. The other tabs show no information.

    Nothing happens when I try to Spin Down the drive.

     

    Not sure what to do now, what would be my next step here? Should I consider the drive done?

    I am currently looking online to find a new drive.
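
    In case it helps anyone, SMART can also be queried directly from the console when the GUI hangs; a sketch, with sdX as a placeholder for the actual device:

# Full SMART report, including the self-test log the GUI cannot show:
smartctl -a /dev/sdX

# Just the self-test history:
smartctl -l selftest /dev/sdX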

    Screenshot 2022-09-26 231412.jpg

    Screenshot 2022-09-26 231437.jpg

    Screenshot 2022-09-26 231452.jpg

    Screenshot 2022-09-26 231528.jpg

    Screenshot 2022-09-26 231555.jpg

  9. The config instructions for the Unraid docker image 'cloudflare-ddns' should probably be updated.

    It's not possible to use both the 'Email' and 'API Key' variables at the same time; the container won't start. It will simply say "Invalid Cloudflare Credentials" in the log and then kill all processes.

    In order to use the API Key variable, the Email variable needs to be deleted, it seems; a sketch follows.
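
    For reference, a minimal sketch of a working start. I am assuming the common oznu/cloudflare-ddns image and its API_KEY and ZONE variable names here, so verify against your actual template:

# Start with only the API token set; no EMAIL variable at all.
docker run -d --name cloudflare-ddns \
  -e API_KEY=your_cloudflare_api_token \
  -e ZONE=example.com \
  oznu/cloudflare-ddns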

  10. 12 minutes ago, ich777 said:

    First of all I've edited your post to not spam the thread and packed it in a file.

     

     

    Have you set the password to a minimum length of at least 5 characters?

    This looks like an instant crash right after it tries to start; this usually happens when the password is too short.

     

    Can you attach a screenshot from your template file?

    Also please be sure to read the recommended thread at the top for Valheim.

     

    EDIT: Tried it now from scratch and it works flawlessly. Valheim.log

     

    Thanks, I tried to put it in a code field to minimize bloat; apparently that didn't work.

     

    Yeah, I tried 5 characters for the password; it didn't work.

    It appears the minimum is 6 characters, no digits. Now it starts flawlessly every time.

     

    Thanks for pointing me in the right direction; I've been banging my head against the wall for hours :P

    Maybe update that instruction to say 6 instead of 5.

  11. I solved it. I ran the JSON through an online JSON position finder and it whined about my umask setting: 000, which has worked fine for about a year, now turned up as "undefined".

    So I changed "umask": 000, to "umask": 0,

     

    I guess they changed the expected input for this variable.

    😒
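
    The likely reason: JSON numbers may not have leading zeros, so "umask": 000 was never valid JSON, and a stricter parser now rejects it. A quick way to see the difference with any strict validator, here python3's bundled json.tool:

echo '{"umask": 000}' | python3 -m json.tool   # fails: leading zeros are invalid JSON
echo '{"umask": 0}'   | python3 -m json.tool   # parses fine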

  12. Haven't touched my container for at least a year, and suddenly the last update broke it. Anyone else?

    It doesn't seem to like my JSON anymore, and I'm not sure why. I tried changing every entry that starts with "upload-", as that is the only clue in the log, but still nothing :(

    Did they change some variables or something?

    -------------------------------------
    Transmission will run as
    -------------------------------------
    User name: abc
    User uid: 99
    User gid: 100
    -------------------------------------

    STARTING TRANSMISSION
    CONFIGURING PORT FORWARDING
    Transmission startup script complete.
    Wait for tunnel to be fully initialized and PIA is ready to give us a port
    Thu May 28 18:48:03 2020 Initialization Sequence Completed
    [2020-05-28 16:48:03.899] JSON parse failed in /data/transmission-home/settings.json at pos 2184: INVALID_NUMBER -- remaining text "00,
    "upload-"
    Generating new client id for PIA
    Got new port 36284 from PIA
    transmission auth not required
    waiting for transmission to become responsive
    [2020-05-28 16:48:19.652] transmission-remote: (http://localhost:9091/transmission/rpc/) Couldn't connect to server
    [2020-05-28 16:48:29.661] transmission-remote: (http://localhost:9091/transmission/rpc/) Couldn't connect to server
    [2020-05-28 16:48:39.670] transmission-remote: (http://localhost:9091/transmission/rpc/) Couldn't connect to server
    [2020-05-28 16:48:49.679] transmission-remote: (http://localhost:9091/transmission/rpc/) Couldn't connect to server
    [2020-05-28 16:48:59.688] transmission-remote: (http://localhost:9091/transmission/rpc/) Couldn't connect to server
    [2020-05-28 16:49:09.697] transmission-remote: (http://localhost:9091/transmission/rpc/) Couldn't connect to server
    [2020-05-28 16:49:19.705] transmission-remote: (http://localhost:9091/transmission/rpc/) Couldn't connect to server
    [2020-05-28 16:49:29.714] transmission-remote: (http://localhost:9091/transmission/rpc/) Couldn't connect to server
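
    If anyone else hits this, the parser reports a byte position (2184 above), so you can print the JSON around that offset to find the offending value. A hedged one-liner, with the path and position taken from the log, run from the container's console (assuming python3 is available there):

# Print roughly 40 characters on each side of the reported offset:
python3 -c "t = open('/data/transmission-home/settings.json').read(); print(t[2144:2224])"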

  13. Hi,

     

    I'm in a bit of a pickle and would really appreciate some assistance.

     

    History:

    During a startup I noticed that only about half of the drives were accessible in Unraid; the rest were gone.

    The ones still accessible were suddenly spitting out loads of read errors. Scared sh*tless, I rebooted to see if the drives would recover; instead, it turned out the controller card had given up completely due to MPT BIOS corruption.

    Card: LSI 9201-16i (Firmware P20 - 20.00.07.00)
     

    Server has been offline since.

    Now I have received a new card, which I popped in, and almost everything seems to be in order. This time I'm running older firmware, though: P19 (19.00.00.00). I am afraid to upgrade the firmware this time and am hoping this version is perhaps more stable; at least it seems so judging by some forums.

    Problem:

    All drives except one, disk 3, turn up green.
    It has a red X saying it's disabled and that its contents are emulated.

    I have run several SMART tests and they say it's all good. I'm running an extended one right now just in case; it seems to take a while, though.

    I thought I would do a read check on all drives next?
     

    What steps should I take to add the drive back into the array safely, and should I even do so? The drive seems to be OK.
    I have 10x10TB drives, where one is parity, and parity is OK without errors.

    I have added diagnostics.

    Thanks in advance.

    tytan-diagnostics-20200414-2035.zip

  14. 19 hours ago, clowrym said:

    I've never used this particular setting, but I believe your key should be set to TRANSMISSION_UMASK, not just UMASK.

     

    Basically any variable you see here or here can be added to your docker template with TRANSMISSION_whichever_setting_you_want_to_be_Static_after_reboot_of_the_container set as the key as you indicated in your post.

     

    For instance, I have my queue size set this way in my template.

     


     

     

    Yeah, those lists.

    The problem solves itself when I use the "Safe docker permissions" tool, but as soon as I download something new the rights get broken again.

    Hmm, well, it seems I was correct to use TRANSMISSION_UMASK as I did at the start; I have now switched back and will try some small downloads.

    From what I have read, 000 is the same as 777, so I am going back to that to try again; maybe I missed something.
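
    One nuance worth noting: the umask is masked out of the base permissions, and files start from 666 rather than 777 (they are not created executable), so umask 000 yields 777 on directories but only 666 on files. A quick shell demo:

umask 000
mkdir demo_dir && touch demo_file
ls -ld demo_dir demo_file   # drwxrwxrwx for the dir, -rw-rw-rw- for the file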

     

    Thanks, will report back my findings.

     

  15. Hi!

    I'm having an issue with the Transmission_VPN docker image regarding the UMASK settings that is driving me nuts. Hopefully someone could nudge me in the right direction.

     

    I am unsure of the correct variable name. According to a variable chart I found, all variables are "TRANSMISSION_WHATEVER" (which worked for all the other variables I have set); however, that did not seem to work for umask, so I switched to just "UMASK" as the variable, but I still can't get it to work.

     

    For some reason I get this in the download folder:

    Topfolder = drwxrwxr-x 1 nobody users - Can't edit

    Subfolder = drwxrwxr-x 1 nobody users - Can edit

    Files = -rw-rw-r-- 1 nobody users - Can edit (and execute...?)

     

    How come the top folder and the subfolder behave differently when browsing over Samba, even though they have the same rights?

    The download folder is accessed using an Unraid user with R/W access (private Samba share); are these settings interfering?

     

    What's the correct variable and setting to use to have it set 777 (or at least 775) on all folders and files?

     

    Right now I have the settings like this:
     

     

    Capture.JPG

  16. 2 hours ago, itimpi said:

    Assuming you have a folder called 'isos' on the nvme drive, then from the Unraid command line use a command of the form:

    
    ln -s /mnt/disks/nvmename/isos /mnt/user/isos/nvme_isos

    (where nvmename is what the nvme drive is called in the UD plugin) should result in what looks like a folder called 'nvme_isos' inside the 'isos' share that contains the contents of the 'isos' folder on the nvme drive, and that you can get to via the GUI. I have not actually tried this, but it shows what I am thinking.

    Thanks, I will give it a try when I get the chance :)
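
    For anyone trying the same thing, a quick way to confirm the link afterwards, using the same hypothetical 'nvmename':

ln -s /mnt/disks/nvmename/isos /mnt/user/isos/nvme_isos
ls -l /mnt/user/isos/nvme_isos   # should show: nvme_isos -> /mnt/disks/nvmename/isos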

     

     

  17. 2 minutes ago, itimpi said:

    Since any ISOs are read-only after initial creation, I am surprised that the performance of an array drive is not sufficient. Normally when creating a VM the ISO speed is not the limiting factor.

     

    If not, then a workaround might be to set up a link inside the ISOs share that points to the nvme. That should make it show up at the GUI level, I would think.

    Sorry, but what do you mean by setting up a link? 😕

  18. 24 minutes ago, Squid said:

    AFAIK you would have to use unassigned devices to mount the drive, and then manually edit smb-extra.cfg on the flash drive to share the appropriate folder on it.

    Yes, I thought of this too, but thought maybe there was a way of doing it without bringing the whole array offline.
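
    As a sketch of Squid's suggestion: after mounting the drive with Unassigned Devices, a share block could be appended to the SMB extras file on the flash drive. On my system that file is /boot/config/smb-extra.conf; the share name, path, and options below are assumptions to adapt:

# Append a share for the UD-mounted folder (adjust name and path to your setup):
cat >> /boot/config/smb-extra.conf <<'EOF'
[nvme_isos]
    path = /mnt/disks/nvmename/isos
    read only = yes
    guest ok = yes
EOF

# Ask Samba to reload its config so the share appears without stopping the array:
smbcontrol all reload-config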

     

    On the main page for Unassigned Devices there is a button called "Add ISO File Share". This one requests an ISO file according to the pre-filled text and, for some reason, only shows shares from the array. Is there any way to have folders from the nvme show up here? That would actually solve the problem.

    The other button, "Add Remote SMB/NFS Share", only allows remote shares to be mounted, not local shares.

    Maybe a feature request?

     
