Posts posted by IamSpartacus
-
PREFACE
I have this server running perfectly fine using an identical USB flash drive. That configuration has been running for 4+ months. I'm moving that USB flash drive to a new/migrated build. If I pop that USB into the same USB slot, my server boots with no issue.
STEPS TAKEN
- Ran chkdsk on the flash drive (clean)
- Replaced the flash drive with a brand new identical model (same as the working drive above)
- Tried both 6.8.2 and 6.8.3 with the USB Creator
- Tried with "Allow EFI" both checked and unchecked
- Tried booting in both BIOS and UEFI mode
- Tried both DHCP and static IP settings
ISSUE
My server is unable to get an IP address, and it does not respect the static IP set via the USB Creator on a brand new install. It seems it is not reading (or even detecting, as seen from the last command of the video) the network.cfg file (see below for what network.cfg contains when the flash drive is read on a Windows machine).
# Generated network settings
USE_DHCP="yes"
IPADDR=
NETMASK=
GATEWAY=
BONDING="yes"
BRIDGING="yes"
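For comparison, when a static address is chosen in the USB Creator, the same file would be expected to carry actual values instead of leaving them blank, roughly like the sketch below (all addresses purely illustrative):

```
# Generated network settings
USE_DHCP="no"
IPADDR="192.168.1.50"
NETMASK="255.255.255.0"
GATEWAY="192.168.1.1"
BONDING="yes"
BRIDGING="yes"
```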
-
7 minutes ago, Squid said:
Enter it in by itself at the command prompt and you'll see all the options. You'll either want the type to be normal or warning
and is there a variable one can use to call the name of the script?
-
9 minutes ago, Squid said:
Use the notify command built in
/usr/local/emhttp/plugins/dynamix/scripts/notify
By default that shows a red notification as if there's an error. Is there a way to manipulate the color displayed?
Never mind, I was using the old WebGUI notification. Thanks.
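Putting the pieces of this exchange together, a start/finish wrapper built on that notify script might look like the sketch below. The flags (-e event, -s subject, -d description, -i importance, with "normal" or "warning" as the types mentioned above) are as reported by running `notify` with no arguments; the script name is illustrative.

```shell
#!/bin/bash
# Sketch: wrap a user script's work with start/finish notifications
# using Unraid's stock notify script.
NOTIFY=/usr/local/emhttp/plugins/dynamix/scripts/notify
SCRIPT_NAME="nightly_backup"   # illustrative name

send() {
  # No-op on systems where the notify script isn't present.
  if [ -x "$NOTIFY" ]; then
    "$NOTIFY" -e "User Scripts" -s "$1" -d "$SCRIPT_NAME: $1" -i "normal"
  fi
}

send "Script started"
# ... the user script's actual work goes here ...
send "Script finished"
```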
-
Is there any easy way to add a start/stop notification for user scripts?
-
I feel like the following things would greatly enhance the docker tab in the WebUI.
- Container Groups (for commands): I know personally that it would be hugely beneficial to be able to select a group of containers and run a command (push the button) on just that group. For anyone with more than 10 containers, the Start/Stop/Restart/Pause/Resume/Update buttons are pretty much useless, because acting on everything is essentially the same thing as disabling Docker altogether. The same goes for selecting/unselecting Autostart; right now we can't even change that for more than one container at a time. Having to click on, say, 30 of 50 different containers individually so that certain other containers get left alone is a huge burden.
- Container Groups (for organization): To piggyback off the previous request: for those of us who have a lot of containers, the tab starts to look like a wall of text after a certain point, which makes it hard to track what is where. Being able to visually group containers together with separators would be great, and even better would be the ability to select those groups (with a checkbox) and then run commands on them.
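Until something like this lands in the WebUI, a rough CLI workaround is to tag containers with a Docker label (e.g. added under a template's Extra Parameters) and drive the whole group via `docker ps` filters. The label key and group name below are made up for illustration:

```shell
#!/bin/bash
# Sketch: act on a labeled "group" of containers at once.
# Assumes containers were created with e.g. --label group=media.
group_ids() {
  docker ps -aq --filter "label=group=$1"
}

restart_group() {
  # xargs -r (GNU): do nothing when the group is empty
  group_ids "$1" | xargs -r docker restart
}

# Guarded example: restart everything labeled group=media
if command -v docker >/dev/null 2>&1; then
  restart_group media
fi
```

The same filter works with `docker stop`, `docker start`, etc., so one small script per group covers most of the bulk actions described above.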
-
8 minutes ago, binhex said:
You can do that although this approach isn't a percentage of memory, see here:-
Oh nice. Yea this could work. Thank you.
-
Just now, Squid said:
It won't ever use more than 50% of your RAM. It will however only use as much as necessary up to 50%
Then yes, it's of no use to me. It's disappointing that there is no RAM directory that can use more than 50% of my RAM.
-
8 minutes ago, Squid said:
Just add this to your smb-extra.conf file on the flash drive
[TMP]
path = /tmp
valid users = andrew
write list = andrew
Yea that's what I have for my ramdisk. I will test writing to /tmp again and if it does dynamically assign more ram I will add those SMB settings.
-
36 minutes ago, Squid said:
If the memory is required, then another process can use it.
i.e.: On a 16GB system, you will see that rootfs (where /tmp is) is sized at ~7.8G. But you can quite easily run 3 VMs at 4GB apiece without running out of memory. The memory used (i.e. on the dashboard) will reflect this.
I'll have to test it again as I could have sworn /tmp was acting the same as my ramdisk. And I switched to using a ramdisk because I like having easy access to it via an SMB share for insight as I use it for a bunch of services (plex and Emby transcoding, downloads, etc.).
-
2 minutes ago, trurl said:
No. Your ramdisk is not resized on the fly. Memory is used as I/O buffer and released to other processes as needed, but this has nothing to do with your ramdisk allocation.
And /tmp is the same scenario?
-
7 hours ago, Squid said:
/tmp is mounted to use 50% of the memory available maximum.
Cached memory should always be as much as possible. Cache is always returned to the system when a process needs it, and unused RAM is wasted RAM. https://www.linuxatemyram.com/
So if I start writing more than 63GB to my ramdisk, more RAM should be allocated on the fly to accommodate it?
I ask because I'm not seeing that happen. I use my ramdisk for incomplete Usenet downloads, and when that 63GB is filled I can't write any more. Meanwhile I have another 40+GB of free RAM apparently doing nothing.
-
I have 128GB of RAM in my system (AMD EPYC) and Unraid clearly reads all 128GB, yet it only shows half of that available in /tmp or in a ramdisk I've created. I was aware of /dev/shm only allocating half of RAM, but I was under the impression that /tmp would be able to see and use all available RAM, and the same for a user-created ramdisk. Am I missing something?
EDIT: It looks like Unraid is caching more than half my RAM but not making that RAM available?
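On the mechanics being discussed in this thread: the 50% figure is just tmpfs's default size cap, and a larger cap can be requested explicitly at mount time (memory is only consumed as files are actually written). A sketch, with illustrative paths and sizes, guarded behind an APPLY flag since the commands need root:

```shell
#!/bin/bash
# Sketch: tmpfs size is a mount-time cap (50% of RAM by default).
# Paths and sizes are illustrative; set APPLY=1 and run as root to
# actually perform the mounts.
APPLY=${APPLY:-0}

if [ "$APPLY" = "1" ] && [ "$(id -u)" -eq 0 ]; then
  # A ramdisk with an explicit 100G cap:
  mkdir -p /mnt/ramdisk
  mount -t tmpfs -o size=100G tmpfs /mnt/ramdisk

  # /tmp (also tmpfs) can be grown in place the same way:
  mount -o remount,size=100G /tmp
fi
```

Note the caveat from the replies above: a bigger cap doesn't reserve memory, and other processes still compete for the same RAM.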
-
8 hours ago, Squid said:
Quite a bit (or more truthfully a metric tonne)
OTOH though it can be done via some playing around with settings.
Set up the options for backup #1. Make a backup of the settings from the flash drive
Set up the options for backup #2. Make a backup of the settings from the flash drive
Disable the backup from running on a schedule.
Via User Scripts, set up 2 scripts with whatever schedules you want; each script restores the appropriate settings file onto the flash drive and then executes the backup script.
Where there's a will, there's a way (and time to spare at work letting me think about it)
Thanks for the suggestion, that's actually a super workable solution.
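Squid's recipe could be scripted roughly as follows. Both plugin paths below are assumptions for illustration only; the real locations depend on the CA Backup version installed:

```shell
#!/bin/bash
# Sketch of the two-schedule workaround: two User Scripts, each on
# its own cron schedule, swap a saved settings file into place and
# then kick off a backup. BOTH plugin paths are assumed, not verified.
SETTINGS=/boot/config/plugins/ca.backup2/settings.cfg                  # assumed path
BACKUP_SCRIPT=/usr/local/emhttp/plugins/ca.backup2/scripts/backup.php  # assumed path

restore_and_run() {
  # $1 = saved settings profile (e.g. the daily "production" copy)
  if [ -f "$1" ] && [ -f "$BACKUP_SCRIPT" ]; then
    cp "$1" "$SETTINGS"
    php "$BACKUP_SCRIPT"
  fi
}

# The daily script would call something like:
restore_and_run /boot/custom/backup-production.cfg   # assumed path
```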
-
@Squid How hard would it be to implement multiple different schedules in this plugin? So that, for example, one could back up all containers once a week but the "production" containers every day.
-
On 7/19/2019 at 12:07 AM, ljm42 said:
Here is a tip from:
Create a file called startup.nsh in the root of the flash drive which contains the single line below:
\EFI\boot\bootx64.efi
This doesn't appear to work anymore; I assume that's because, starting with 6.8, one can't run scripts directly from the flash drive? Can this be adapted to run as a User Script at array startup?
-
16 minutes ago, BRiT said:
DejaVu. We've already discussed that in this thread.
Sorry, thread is 68 pages long. Can't exactly parse through every page to check if each topic has been raised.
-
Has the LSIO team approached Limetech about the possibility of including Nvidia drivers in the official releases? I GREATLY appreciate what LSIO has done to get this all working and to continue supporting it, but I feel like it's an unnecessary burden on them. I don't see why Limetech can't just include the latest Nvidia drivers available whenever they put out a new release, just like how they update other packages such as Samba.
-
I love how user friendly the interface is yet still allows advanced users to do lots of customization.
I would love to see official support for multiple btrfs SSD pools.
-
17 minutes ago, jenga201 said:
Thank you for testing that. The only other variable is that my two unassigned NVMe devices (Intel Optane 900p drives) are part of a btrfs pool. My queries are identical to yours. My cache drive reads fine, but if I choose either of the two drives in the btrfs pool I get no data.
-
15 minutes ago, jenga201 said:
No, you should be able to still scan the normal disks. Refer to my original post on how to stack them.
When I get time, I will be looking into how unraid uses docker-compose or docker files.
For now, I'll just be doing it manually.. or not updating the container.
Oh, I see what you did there. OK, yea, that works (stacking the inputs). Have you had any luck getting data off unassigned NVMe's, or do you not use any in UD?
-
14 hours ago, jenga201 said:
try
[[inputs.smart]]
attributes = true
devices = ["/dev/nvme0n1","/dev/nvme1n1","/dev/nvme2n1"]
The way you specified it is an array containing a single string, not an array of 3 devices.
Yup, that did it. So I have to do that for all my spinner disks as well, huh? I also noticed that the only NVMe device I can pull temp data from is my cache drive. I have 3 other unassigned NVMe's, but I can't see the temp data on those. Odd.
Also, are you just manually installing smartmontools after each container update, or are you doing it through a script?
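One way to keep the smartmontools tweak alive across updates is a small script run after the container is recreated (e.g. from User Scripts). The container name "telegraf" below is an assumption; the apt commands are the ones quoted earlier in the thread:

```shell
#!/bin/bash
# Sketch: reinstall smartmontools inside the telegraf container after
# each container update. Container name "telegraf" is an assumption.
if command -v docker >/dev/null 2>&1; then
  docker exec telegraf \
    sh -c 'apt-get update && apt-get install -y smartmontools'
fi
```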
-
35 minutes ago, jenga201 said:
Hmmmm. I can't seem to get the smart data to show up in my database. In my telegraf.conf, if I have 3 nvme devices do I need anything more than this?
[[inputs.smart]]
attributes = true
devices = ["/dev/nvme0n1,/dev/nvme1n1,/dev/nvme2n1"]
-
5 hours ago, jenga201 said:
I'm using the same image you are for Nvidia support.
It has apt, so you can just run;
apt-get update
apt-get install smartmontools
You can allow the device by adding the nvme device(s) under [[inputs.smart]]
My config (instead of using hddtemp);
[[inputs.smart]]
attributes = true
devices = [ "/dev/nvme0n1p1" ]
I haven't found a way to scan all nvme instead of specifying them.
Thanks for reminding me, Nvidia support was the reason I switched to this image as well. And thanks for the tip on getting smartmontools working. But I'm not seeing any smart data available to choose from in my Grafana queries. Was there anything else you had to do?
-
Yea, this image isn't going to work with my current telegraf.conf. It doesn't appear to support the new inputs.apcupsd input that telegraf recently added, and it gives me errors with all the fields I have in inputs.docker.
It's not worth getting all that to play nice if this workaround won't even persist across container updates, so I'll need to find a different solution.
No IP/Network.cfg on first boot up
in General Support
Posted · Edited by IamSpartacus
If I connect it to Windows, yes, it's labeled Unraid. After all, every test I've done has been using the USB Creator, at which point the final product is the drive labeled Unraid before I eject it.
This is what the df command returns after login at root.