hernandito · Posted January 6, 2020

Hi, since custom scripts stored on the flash drive no longer execute, what is the suggested method for using them? I used to be able to run my scripts from any directory. How can I configure things so that the script location is on the PATH? Thank you. H.
itimpi · Posted January 6, 2020

> 18 minutes ago, hernandito said: Since custom scripts stored on the flash drive no longer execute, what is the suggested method for using them?

Easiest thing would be to copy them to /usr/local/bin as part of the boot process and then set the execute permission. You can do this either via the config/go file on the flash or via the User Scripts plugin.
itimpi · Posted January 6, 2020

> 1 minute ago, JoeUnraidUser said: The User Scripts plugin does not have the ability to run scripts at boot. You would have to use config/go.

It depends on when they are needed. If they are only needed after the array starts, then User Scripts could be used.
hernandito · Posted January 7, 2020

> 13 hours ago, itimpi said: Easiest thing would be to copy them to /usr/local/bin as part of the boot process and then set the execute permission. You can do this either via the config/go file on the flash or via the User Scripts plugin.

Thank you, guys. Following itimpi's advice, I added the following to my go file:

cp /boot/menu /usr/local/bin
cp /boot/dock /usr/local/bin
chmod u+x /usr/local/bin/menu
chmod u+x /usr/local/bin/dock

My two script files are called "menu" and "dock". I use these scripts for several commands that I can easily execute as needed from the command line. The "dock" script lets me open the command line of each installed Docker container. This solution worked perfectly. Thank you again. H.
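For anyone who wants something similar, here is a hypothetical sketch of what a "dock"-style helper could look like. hernandito's actual script isn't posted in the thread, so everything below (the use of `docker ps` and `docker exec`, and the behavior with and without an argument) is an assumption, not the original script:

```shell
#!/bin/bash
# Hypothetical "dock" helper (NOT hernandito's actual script):
# with no argument, list the running containers; with a container
# name, open an interactive shell inside that container.
dock() {
    if [ -z "$1" ]; then
        # No name given: show what is running so you can pick one.
        docker ps --format '{{.Names}}'
    else
        # Prefer bash inside the container but fall back to sh,
        # since not every image ships bash.
        docker exec -it "$1" sh -c \
            'if command -v bash >/dev/null 2>&1; then exec bash; else exec sh; fi'
    fi
}
```

To use it as a standalone script in /usr/local/bin, append a final line `dock "$@"`.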
trurl · Posted January 7, 2020

> 29 minutes ago, hernandito said: The "dock" script lets me open the command line of each installed Docker container.

Maybe you already know, but you can open the command line of any Docker container by clicking on its icon in the webUI and selecting >_ Console.
hernandito · Posted January 7, 2020

I had never noticed this! I think I am too used to my PuTTY... or a terminal app on my iPad. Thanks! H.
CarlosCo · Posted January 7, 2020

> On 1/6/2020 at 1:17 AM, itimpi said: What browser are you using (and what version)? noVNC has always seemed a bit susceptible to not working correctly with all browsers. I personally have often had problems with noVNC in the past, so I always keep a free-standing VNC client as a fallback.

You are right: it fails on Chrome but works on Firefox, so I'll be using Firefox from now on, thanks.
FlynDice · Posted January 7, 2020

> 8 hours ago, CarlosCo said: You are right: it fails on Chrome but works on Firefox, so I'll be using Firefox from now on, thanks.

Try clearing your browser cache in Chrome. That fixed the noVNC problem for me.
CodeMonkeyX · Posted January 10, 2020

I just wanted to say that this update did break NFS in my situation. I went back after a few days and read the changelog to try to figure out whether something had changed that broke it, and saw this:

> Fixed bug with hard link support. Previously a 'stat' on two directory entries referring to the same file would return different i-node numbers, thus making it look like two independent files. This has been fixed; however, there is a config setting on Settings/Global Share Settings called "Tunable (support hard links)". The default is Yes, but with certain very old media and DVD players which access shares via NFS, you may need to set this to No.

I only use NFS to store jobs archived on an old production printer (a Xerox DocuTech 6135 from the '90s), and unRAID worked great until that update. Thankfully the "Tunable (support hard links)" setting fixed the problem for me. I just wanted to say: please don't ever remove that option! I know people who want it are an edge case, but it really would ruin my year if I had to implement and maintain a VM or Docker container or something just to run an NFS server for those old machines.
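The changelog's wording is easy to verify from any Linux shell: two directory entries that are hard links to the same file share one inode, so `stat` should report the same inode number for both. This is generic Linux behavior, not an Unraid-specific command:

```shell
# Create a file plus a hard link in a scratch directory and compare
# inode numbers: both names refer to the same inode, so the two
# numbers printed are identical.
dir=$(mktemp -d)
echo data > "$dir/a"
ln "$dir/a" "$dir/b"            # hard link: a second name for the same inode
stat -c %i "$dir/a" "$dir/b"    # prints the same inode number twice
rm -r "$dir"
```

The pre-fix bug described in the changelog was `stat` on a share returning different numbers for the two entries, making the hard links look like independent files.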
twg · Posted January 11, 2020

There's a problem with 6.8.0. I'm running 6.7.2 with no issues, and every time I update to 6.8.0 the server can't get an IP address. When I revert to 6.7.2 the problem goes away. I've tried twice already. Same behaviour.
BRiT · Posted January 11, 2020

> 14 minutes ago, twg said: Every time I update to 6.8.0 the server can't get an IP address. When I revert to 6.7.2 the problem goes away.

Have you tried reverting changes to your network.cfg file on the flash drive? I think there have been a few posts in here detailing that. If you want community help, you're probably going to have to attach the full diagnostics zip file to your next post.
JorgeB · Posted January 11, 2020

> 3 hours ago, twg said: Every time I update to 6.8.0 the server can't get an IP address. When I revert to 6.7.2 the problem goes away.

There have been some cases where eth0 becomes, for example, eth1, but as mentioned, if you need help we need the diags; please start a thread in the general forum instead.
sjaak · Posted January 13, 2020

I'm not sure if this is the right spot for this, but something went wrong on my server running 6.8 (there are NO diagnostics!). This morning I had a "half" frozen system. I tried to log in to my VM (it's always on); after entering the password and pressing Enter it stopped working. Only the mouse worked; no keyboard command was accepted. I tried to gain access through the GUI boot, but Firefox froze while loading the webGUI. Then I tried to open a terminal, but I couldn't type anything there. So I booted my notebook to get access through SSH, logged in successfully, and tried to force a reboot; I got the message saying the system was going to reboot. I waited... the reboot never started; it did nothing. I opened a new SSH terminal to grab diagnostics, waited about 30 minutes, and got no diagnostics... so there was nothing left but a dirty reboot (I really don't like that). It rebooted and is now doing a parity sync; nothing strange going on.

Since I didn't get any diagnostics, I set up a remote syslog server. But when I configured the syslog service on Unraid and saved it, I had to restart rsyslog myself; Unraid didn't restart it after I changed the setting in the GUI. Why doesn't Unraid restart it after a GUI change?

sudo /etc/rc.d/rc.rsyslogd restart

did the trick. Now I'm going to monitor the system logs remotely; hopefully this was a one-time thing.

Edit: Thanks to Domoticz (it runs on unRAID) I found out that around 0:15 AM there was no activity recorded from my energy meter (a YouLess LS120); around 9:55 AM it started working again (the moment I did the dirty reboot). Now I know why Plex didn't run its scheduled programs... so Docker was "dead", and the VM manager was almost dead.
trurl · Posted January 13, 2020

> 1 hour ago, sjaak said: something went wrong on my server running 6.8

Did it ever work after upgrading?
sjaak · Posted January 13, 2020

> 1 minute ago, trurl said: Did it ever work after upgrading?

It's the first time this "half freeze" problem has happened (besides the freezes I had with Ryzen). I've been running 6.8 for about two weeks now; I don't have a high uptime because of the annoying AMD reset bug (Vega 64).
Dmtalon · Posted January 13, 2020

This may or may not be related, but since installing 6.8.0 (12/27) I have been getting the following email from unRAID:

fstrim: /mnt/cache: the discard operation is not supported

I have had the Dynamix SSD TRIM plugin for some time without issues. Could anything related to 6.8.0 cause this? The only other thing I've done to my server recently is install this expansion card to add two more HDDs, but I did not move my SSD off the motherboard controller.

https://www.ebay.com/itm/162958581156?ViewItem=&item=162958581156
JorgeB · Posted January 13, 2020

> 4 minutes ago, Dmtalon said: The only other thing I've done to my server recently is install this expansion card to add two more HDDs, but I did not move my SSD off the motherboard controller.

Are you sure? That error suggests the SSD is on a controller without TRIM support, like that LSI.
kavo · Posted January 13, 2020

> On 1/11/2020 at 12:36 AM, twg said: Every time I update to 6.8.0 the server can't get an IP address. When I revert to 6.7.2 the problem goes away.

This same issue happens on my system. Unfortunately I'm currently overseas for work and only had enough time to revert back to 6.7.2 before leaving.
Dmtalon · Posted January 13, 2020

> 51 minutes ago, johnnie.black said: Are you sure? That error suggests the SSD is on a controller without TRIM support, like that LSI.

My SSD is definitely not plugged into the expansion card; I just verified. It's plugged into one of the two SATA ports labeled "ASMedia PCIe SATA controller". I'm 99.85% sure I didn't touch any existing drives when installing the card, just the two new 4TB Reds I added. I have one spot open on the other set, labeled "AMD SB950 controller: 6 x SATA 6Gb/s ports, brown", that I could swap to.
JorgeB · Posted January 13, 2020

> 4 minutes ago, Dmtalon said: "ASMedia PCIe SATA controller"

TRIM doesn't work correctly on some older ASMedia controllers, and that might have changed with the newer kernel; try the board's chipset ports instead.
Dmtalon · Posted January 13, 2020

> 5 minutes ago, johnnie.black said: TRIM doesn't work correctly on some older ASMedia controllers, and that might have changed with the newer kernel; try the board's chipset ports instead.

I'd already started that process just to see. It appears it did not change anything:

~# fstrim -v /mnt/cache
fstrim: /mnt/cache: the discard operation is not supported
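One generic way to check whether the kernel will accept discard requests for a device at all, independent of which filesystem is mounted, is lsblk's discard view. This is a standard util-linux command, not an Unraid-specific tool:

```shell
# Show the discard (TRIM) capabilities of every block device.
# A device whose DISC-GRAN and DISC-MAX columns read 0B cannot
# receive discard requests, which is exactly the condition that
# makes fstrim report "the discard operation is not supported".
lsblk --discard
```

If the SSD shows 0B behind one controller but non-zero values behind another, the controller (or its kernel driver) is what is dropping discard support.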
JorgeB · Posted January 13, 2020

> 3 minutes ago, Dmtalon said: I'd already started that process just to see. It appears it did not change anything.

Please start a new thread in the general support forum and post the diagnostics.
trurl · Posted January 13, 2020

This is a yes or no question:

> 9 hours ago, trurl said: Did it ever work after upgrading?

This seems to be a very ambiguous way to answer it:

> 8 hours ago, sjaak said: It's the first time this "half freeze" problem has happened.
sjaak · Posted January 14, 2020

> 9 hours ago, trurl said: This is a yes or no question.

Yeah. At the time of writing this message, the server has been "stable" for 23 hours without suspicious notifications in the logs; however, almost 18 of those hours were the parity sync (it's slower on 6.8). Dockers and VMs are on.
boof · Posted January 15, 2020

> On 1/10/2020 at 5:36 AM, CodeMonkeyX said: I just wanted to say that this update did break NFS in my situation. Thankfully the "Tunable (support hard links)" setting fixed the problem for me.

This isn't just a legacy issue. It affected my Ubuntu 18.04.3 LTS clients as well: stale mounts quite quickly after the initial mount. There are other threads linking it to cached data and the mover, but it is very much a general NFS issue.