Posts posted by IamSpartacus

  1. 2 hours ago, itimpi said:

    The video seems to show that Unraid is failing to find and mount the flash drive during the second phase of the boot process. Have you checked that it has a label of “UNRAID”? You can check if this is the case by logging in on the console and using the ‘df’ command to see if it shows the flash drive mounted at /boot.

     

    If I connect it to Windows, yes, it's labeled Unraid. After all, every test I've done has used the USB Creator, and the final product is a drive labeled Unraid before I eject it.

     

    This is what the df command returns after logging in as root.

     

    [Screenshot: df output]
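
    To double-check the label from the Unraid console itself rather than from Windows, this should work (a minimal sketch; blkid is standard, and the device name will vary per system):

    # list block devices with their filesystem labels; the flash drive
    # should show LABEL="UNRAID"
    blkid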

  2. PREFACE

     

    I have this server running perfectly fine using an identical USB flash drive. That configuration has been running for 4+ months. I'm moving that USB flash drive to a new/migrated build. If I pop that USB into the same USB slot, my server boots with no issue.

     

    STEPS TAKEN

    • Ran chkdsk on the flash drive (clean)
    • Replaced the flash drive with a brand-new identical model (same as the working drive above)
    • Tried both 6.8.2 and 6.8.3 with the USB Creator
    • Tried with Allow EFI checked and with it unchecked
    • Tried booting in BIOS and UEFI mode
    • Tried using DHCP and setting static IP settings

     

    ISSUE

     

    My server is unable to get an IP address, or to honor the static IP set using the USB Creator, on a brand-new install. It seems like it is not respecting (or even detecting, as seen from the last command in the video) the network.cfg file (see below for what network.cfg shows when the flash drive is read on a Windows machine).

     

    # Generated network settings
    USE_DHCP="yes"
    IPADDR=
    NETMASK=
    GATEWAY=
    BONDING="yes"
    BRIDGING="yes"
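
    For comparison, a static-IP version of that file would presumably look like this, using the same keys the USB Creator generated above (the addresses are purely illustrative):

    # Generated network settings (static; example values)
    USE_DHCP="no"
    IPADDR=192.168.1.50
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1
    BONDING="yes"
    BRIDGING="yes"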

     

  3. 7 minutes ago, Squid said:

    Enter it by itself at the command prompt and you'll see all the options. You'll either want the type to be normal or warning

    And is there a variable one can use to call the name of the script?

  4. 9 minutes ago, Squid said:

    Use the notify command built in

     

    /usr/local/emhttp/plugins/dynamix/scripts/notify

     

    By default that shows a red notification as if there's an error.  Is there a way to manipulate the color displayed?

     

    Never mind, I was using the old WebGUI notification. Thanks.
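
    For anyone finding this later, a sketch of a notify call that shows as normal (green) rather than as an error; the flag meanings are from the script's usage output, so run it with no arguments to verify, as Squid suggests:

    # -s subject, -d description, -i importance (normal|warning|alert)
    /usr/local/emhttp/plugins/dynamix/scripts/notify -i normal -s "My script" -d "Script finished OK"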

  5. I feel like the following things would greatly enhance the Docker tab in the WebUI.

     

    • Container Groups (for commands):  I know personally that it would be hugely beneficial to be able to select a group of containers and run a command (push the button) on just that group. For anyone who has more than 10 containers, the Start/Stop/Restart/Pause/Resume/Update buttons are pretty much useless, because pressing one is essentially the same thing as disabling Docker altogether. The same goes for selecting/unselecting Autostart; right now we don't even have the option to change that for more than one container at a time. Having to click on, say, 30 of 50 different containers individually to manage them, so that certain other containers get left alone, is a huge burden.

     

    • Container Groups (for organization):  To piggyback on the previous request: for those of us who have a lot of containers, the tab begins to look like a wall of text after a certain point, which makes it hard to manage what is where. Being able to visually group containers together with separators would be great, and even better would be the ability to select those groups (with a checkbox) and then run commands on them.
  6. 36 minutes ago, Squid said:

    If the memory is required, then another process can use it.

     

    I.e., on a 16GB system you will see that rootfs (where /tmp is) is sized at ~7.8G. But you can quite easily run 3 VMs at 4GB apiece without running out of memory. The memory used (i.e., on the dashboard) will reflect this.

     

    I'll have to test it again, as I could have sworn /tmp was acting the same as my ramdisk. I switched to using a ramdisk because I like having easy access to it via an SMB share for insight, as I use it for a bunch of services (Plex and Emby transcoding, downloads, etc.).

  7. 7 hours ago, Squid said:

    /tmp is mounted to use at most 50% of the available memory.

     

    Cached memory should always be as much as possible. Cache is always returned to the system when a process needs it, and unused RAM is wasted RAM. https://www.linuxatemyram.com/

     

    So if I start writing more than 63GB to my ramdisk, more RAM should be allocated on the fly to accommodate it?

     

    I ask because I'm not seeing that happen. I use my ramdisk for incomplete Usenet downloads, and when that 63GB is filled I can't write any more. Meanwhile, I have another 40+GB of free RAM apparently doing nothing.
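
    From what I can tell, the 63GB cap is just the default tmpfs size (50% of RAM), not a hard limit; a minimal sketch of creating a ramdisk with a larger explicit cap (the path and size are illustrative):

    # tmpfs only consumes RAM as files are written, so a large cap is safe
    mkdir -p /mnt/ramdisk
    mount -t tmpfs -o size=100g tmpfs /mnt/ramdisk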

  8. I have 128GB of RAM in my system (AMD EPYC), and Unraid clearly reads all 128GB, yet it only shows half of that available in /tmp or in a ramdisk I've created. I was aware of /dev/shm only allocating half of RAM, but I was under the impression that /tmp would be able to see and use all available RAM, and the same for a user-created ramdisk. Am I missing something?

     

    [screenshot]

     

     

    EDIT:  It looks like Unraid is caching more than half my RAM but not making that RAM available?

     

    [screenshot]
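
    For what it's worth, the caps can be inspected and adjusted with standard mount tooling (the size here is illustrative):

    # show the current size caps on the two tmpfs mounts
    df -h /tmp /dev/shm
    # grow an existing tmpfs in place without unmounting it
    mount -o remount,size=100g /tmp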

  9. 8 hours ago, Squid said:

    Quite a bit (or, more truthfully, a metric tonne)

     

    OTOH, it can be done with some playing around with settings.

     

    Set up the options for backup #1.  Make a backup of the settings from the flash drive

    Set up the options for backup #2.  Make a backup of the settings from the flash drive

    Disable the backup from running on a schedule.

     

    Via User Scripts, set up 2 scripts with whatever schedule you want; each script restores the appropriate settings file onto the flash drive and then executes the backup script.

     

    Where there's a will, there's a way (and time to spare at work letting me think about it)

     

    Thanks for the suggestion, that's actually a super workable solution.
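
    For anyone else wanting to do this, a sketch of one of the two user scripts; every path here is hypothetical, since the thread doesn't name where the backup plugin keeps its settings file or its backup script:

    #!/bin/bash
    # restore the saved settings for backup job #1 onto the flash drive
    # (both paths are assumptions, not documented plugin locations)
    cp /boot/config/custom/backup-job1.cfg /boot/config/plugins/backup-plugin/settings.cfg
    # then kick off the plugin's backup script (path is also an assumption)
    /usr/local/emhttp/plugins/backup-plugin/scripts/backup.sh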

  10. Has the LSIO team approached Limetech about the possibility of them starting to include Nvidia drivers in their official releases? I GREATLY appreciate what LSIO has done to get this all working and to continue supporting it, but I feel like it's an unnecessary burden on them. I don't see why Limetech can't just include the latest Nvidia drivers available at the time whenever they put out a new release, just like they update other packages such as Samba.

  11. 17 minutes ago, jenga201 said:

    Didn't have an issue with unassigned drives;

    telegraf.conf
      devices = [ "/dev/nvme0n1", "/dev/nvme1n1" ]


    [screenshot]

    [screenshot]

     

    Thank you for testing that. The only other variable is that my two unassigned NVMe devices (Intel Optane 900p drives) are part of a btrfs pool. My queries are identical to yours. My cache drive reads fine, but if I choose either of the two drives in the btrfs pool I get no data.

     

    [screenshot]
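
    Next I'll try running smartctl by hand inside the container against one of the pool members, to see whether the problem is on the telegraf side or the smartmontools side (the container name here is a guess; substitute your own):

    # query SMART data directly for one of the btrfs pool NVMes
    docker exec telegraf smartctl -a /dev/nvme1n1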

  12. 15 minutes ago, jenga201 said:

     

     

    No, you should still be able to scan the normal disks. Refer to my original post on how to stack them.

     

    When I get time, I will be looking into how Unraid uses docker-compose or Docker files.

    For now, I'll just be doing it manually... or not updating the container.

     

    Oh, I see what you did there. OK, yeah, that works (stacking the inputs). Have you had any luck getting data off unassigned NVMes, or do you not use any in UD?
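
    For anyone following along, the stacked config looks like this in telegraf.conf: one bare [[inputs.smart]] block that scans the normal disks, plus a second block pinning the NVMe devices (the device names are whatever your system uses):

    [[inputs.smart]]
      attributes = true
    [[inputs.smart]]
      attributes = true
      devices = ["/dev/nvme0n1", "/dev/nvme1n1"]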

  13. 14 hours ago, jenga201 said:

    try

    [[inputs.smart]]
      attributes = true
      devices = ["/dev/nvme0n1","/dev/nvme1n1","/dev/nvme2n1"]

     

    How you specified it would be an array of a single device, not an array of 3 devices.

     

    Yup, that did it. So I have to do that for all my spinner disks as well, huh? I also noticed that the only NVMe device I'm able to pull temp data from is my cache drive. I have 3 other unassigned NVMes, but I can't see the temp data on those. Odd.

     

    Also, are you just manually installing smartmontools after each container update or are you doing it through a script?
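
    If it comes to scripting it, something like this run after each container update should do it; the container name is an assumption, though apt is confirmed present in this image:

    # reinstall smartmontools inside the rebuilt telegraf container
    docker exec telegraf apt-get update
    docker exec telegraf apt-get install -y smartmontools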

  14. 35 minutes ago, jenga201 said:

    Nothing special related to telegraf or grafana except the [[inputs.smart]] block.

     

    This is my grafana config.

    [screenshot]

     

    Hmmmm. I can't seem to get the SMART data to show up in my database. In my telegraf.conf, if I have 3 NVMe devices, do I need anything more than this?

    [[inputs.smart]]
      attributes = true
      devices = ["/dev/nvme0n1,/dev/nvme1n1,/dev/nvme2n1"]

     

     

  15. 5 hours ago, jenga201 said:

     

    I'm using the same image you are for Nvidia support.

    It has apt, so you can just run;

    apt-get update

    apt-get install smartmontools

     

    You can allow the device by adding the nvme device(s) under [[inputs.smart]]

     

    My config (instead of using hddtemp);

    [[inputs.smart]]
      attributes = true
    [[inputs.smart]]
      attributes = true
      devices = [ "/dev/nvme0n1p1" ]

     

    I haven't found a way to scan all NVMe devices instead of specifying them.

     

    Thanks for reminding me; Nvidia support was the reason I switched to this image as well. And thanks for the tip on getting smartmontools working. But I'm not seeing any SMART data available to choose from in my Grafana queries. Was there anything else you had to do?

  16. Yeah, this image isn't going to work with my current telegraf.conf. It doesn't appear to support the new inputs.apcupsd input that was recently added to Telegraf, and it gives me errors with all the fields I have in inputs.docker.

     

    It's not worth getting all that to play nice if this workaround won't even persist across container updates, so I'll need to find a different solution.
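
    For reference, the block in question is just this in my telegraf.conf (the address is apcupsd's default NIS endpoint; adjust as needed):

    [[inputs.apcupsd]]
      # address of the apcupsd NIS server
      servers = ["tcp://127.0.0.1:3551"]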
