Delarius

Everything posted by Delarius

  1. It's definitely somewhere on that /boot drive, just a matter of figuring out how to search for it. Not sure using the old servername is the right approach. The cronjob will no doubt look something like this:

        5 4 * * 0 /root/backup.sh

     So in this instance I'd try using "backup.sh" as my keyword.
  2. It's coming from your /boot volume (the flash drive) - probably easiest to determine a keyword in your cronjob, something somewhat unique if possible. Then you can do something like this to figure out where that root cron is coming from:

        $ grep -r "mykeyword" /boot/

     That should give you a list of the possible matches, and hopefully it's easy to determine where the root cron is coming from.
  3. This won't help you now, but having been a Linux admin for many years: if you ever think the command you are typing might be destructive (or could be if you accidentally hit Enter at the wrong time), type echo before that command, like this:

        $ echo rm -rf /mnt/blah/blah

     Then if you make a mistake, it'll just print your command back on the command line and won't do anything. When you are happy that you haven't made a terrible error, you can always press up arrow, then Control-a, and remove the echo part. I find it particularly useful when doing batch jobs where a mistake might really ruin your day. A fake and not great example is:

        $ for i in $(ls myfolder); do rm -rf $i; done

     If you put an echo in there, it'll just display what it's going to do before you actually do it, which can be a lifesaver:

        $ for i in $(ls myfolder); do echo rm -rf $i; done

     Just a tip, because humans make mistakes all too often and just adding echo can really help prevent serious oopsies.
  4. Also, it looks like you aren't getting an IP address - that's the most likely cause of this issue. I would try plugging an ethernet cable into each of the ports on your server and see what it finds. Currently it looks like it's not selecting the expected ethernet port. The HPE servers can definitely be a little finicky with regard to their network ports, so test all of them - you should be getting a DHCP address from your router, not the self-assigned 169.x.x.x address.
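     A quick way to check per-interface, assuming a reasonably recent unRaid (interface names here are just examples) - a 169.254.x.x entry means that interface never got a DHCP lease:

        # one line per interface, with link state and addresses
        ip -br addr

        # hypothetical output:
        # eth0    DOWN
        # eth1    UP    169.254.23.41/16

     Whichever port comes UP with a 192.168.x.x address from your router is the one you want.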
  5. Fair enough, I don't think it would be that difficult. An example script would be something like this, and you can just switch the mounts/mount points (even using a loop over the share names if you like - see the sketch below):

        mkdir -p /mnt/mytransferdir
        mount.cifs -o username=mysynologyuser,password=mysynologypasswd,file_mode=0660,dir_mode=0770,vers=2.1 //SynologynameorIP/sharename /mnt/mytransferdir
        rsync -avz /mnt/mytransferdir/ /mnt/user/Pictures  # or whatever you need
        umount /mnt/mytransferdir
        # Repeat this with each share you want synced

     This could easily work with an NFS share if that's how you roll and Synology will let you mount on the command line. The thing is, if you can just get one good rsync, then subsequent rsyncs won't need to copy all the data - so even if you can only get this to work once, your initial plan might be slow but adequate.
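     A rough sketch of that loop - the share names and credentials are placeholders you'd replace with your own:

        #!/bin/bash
        # sync several Synology shares to the array in one pass (hypothetical share names)
        for share in Pictures Documents Music; do
            mkdir -p /mnt/mytransferdir
            mount.cifs -o username=mysynologyuser,password=mysynologypasswd,vers=2.1 "//SynologynameorIP/$share" /mnt/mytransferdir
            rsync -avz /mnt/mytransferdir/ "/mnt/user/$share/"
            umount /mnt/mytransferdir
        done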
  6. I don't own a Synology, but is there a reason why you couldn't mount the Synology share in unRaid and then just use a local cron job to rsync directly? I would presume you should be able to mount the Synology share, do an rsync and then umount the share.
  7. Greetings, I don't have a lot of time right now, but this is definitely fixable. First, please elaborate on this statement if you could - it sounds a bit unexpected. I don't know how you renamed this 'share', but something was missed. Somehow Plex or the OS is remembering a disk or share or something regarding "Plex Media Server" - but why? Let's look at the configs. Open the terminal and look for that string in the most likely places:

        grep -R "Plex Media Server" /mnt/user/appdata /etc

     This will look for any occurrences of that string in both of the above locations and show which files they are in. I expect this will give you a clue where the problem lies. Mostly, I find it very strange that somehow a 'share' was created, and I'm not sure I understand how you 'renamed' this share. You might be able to find some immediate relief by just symlinking the missing dir to the one you think you want (a sketch follows), but something seems a little odd. I will look at your diagnostics when time permits. Hope this helps,
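     In case the symlink idea appeals, a minimal sketch - both paths here are placeholders, since I don't know your actual share names:

        # point the old, missing location at the share that actually holds the data
        ln -s "/mnt/user/media" "/mnt/user/Plex Media Server"

     Del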
  8. Hi! I think this is the issue. Normally when I do this I don't put a passphrase on the key. It's not the password of the server - it's a special passphrase to make your key more secure. If you want automation, redo your key generation and just press Enter when you get prompted for a passphrase. You can also test with the verbose flags (-v through -vvv) in your ssh command to give you a little more feedback about what is failing. Try something like:

        # ssh -vvv [email protected]

     That should sort this out.
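     If it helps, the key regeneration would look roughly like this - the key path and remote host are placeholders:

        # generate a new key with no passphrase (press Enter at the prompts, or pass -N "")
        ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519

        # copy the public key to the remote machine so key login works
        ssh-copy-id -i ~/.ssh/id_ed25519.pub user@remotehost

     Del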
  9. Hi, this might not work for your use case, but I have a few virtual machines that I would prefer to have a backup of, though it doesn't have to be the very latest. This is the lazy man's way of doing it, provided an overnight backup is good enough. Create a file like this one at /boot/config/mybackup that rsyncs whatever you want from cache:

        #!/bin/bash
        rsync -a /mnt/cache/domains/ /mnt/disk1/Backup/domains/
        ## note disk-to-disk here - you can go through the /mnt/user directory instead, but don't mix the two styles of path

     Then in your /boot/config/go file, add two lines like this somewhere:

        # add these two lines after this line: /usr/local/sbin/emhttp &
        cp /boot/config/mybackup /etc/cron.daily/mybackup &
        chmod 755 /etc/cron.daily/mybackup &

     Obviously you could also set this up to do the backup hourly, weekly or monthly, and to be honest it's not a great idea for a potentially running virtual machine, as it could definitely cause some weirdness - but it's about the easiest way I could think of to achieve this once a day overnight. Two quick edits and done. Just another possible way to approach this if you aren't very anxious about your data being a day old in case of ssd failure. Forgot to add - you can check when the cron.* scripts run by typing:

        crontab -l

     Del
  10. Try turning bonding off in your existing config file and reboot:

        BONDING="no"

     Then what I'd do is start a ping test from another computer, unplug all the network cables, and plug one in - test all the ports. Then try the other cable and do the same thing if needed. And wow, I totally forgot about that ethtool functionality, Hoopster - yes, identify the port with ethtool.
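     For anyone finding this later, the port-identify trick is roughly this (the interface name is just an example):

        # blink the LED on the port for 10 seconds so you can see which physical jack it is
        ethtool -p eth0 10

     Del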
  11. Hi tiwing, Something is definitely happening on your machine - in your initial screenshot, note the network traffic. You may be able to gather more information about what is running on your machine by simply going into the terminal and typing:

        top

     (Press q to exit.) It should show you what is using your cpu. Diagnostics are always helpful if this doesn't reveal the problem. Hope this helps, Del
  12. Thanks, that's good information. To me, it looks like the bridging/bonding part isn't quite right - note how a few pings went through, but in ifconfig we see that eth0 appears down. I suggest, just for troubleshooting, testing out a variation on my network.cfg. So, make a backup of network.cfg (we can just move it for now):

        # mv /boot/config/network.cfg /boot/config/network.cfg.Mar42021.bak

     Then make a new file at /boot/config/network.cfg with your favorite editor (vi, nano, pico, whatever) and add these contents - I'll try to make it as short as possible (adapted from my working network.cfg):

        # Generated settings:
        IFNAME[0]="br0"
        DHCP_KEEPRESOLV="yes"
        DNS_SERVER1="8.8.8.8"
        DHCP6_KEEPRESOLV="no"
        BRNAME[0]="br0"
        BRNICS[0]="eth1"
        BRSTP[0]="no"
        BRFD[0]="0"
        PROTOCOL[0]="ipv4"
        USE_DHCP[0]="no"
        IPADDR[0]="192.168.1.80"
        NETMASK[0]="255.255.255.0"
        GATEWAY[0]="192.168.1.1"
        USE_DHCP6[0]="yes"
        MTU[0]="1500"
        IFNAME[1]="eth0"
        PROTOCOL[1]="ipv4"
        SYSNICS="2"

     It might be easiest to shut down, copy the file onto the usb stick using another computer, and then try restarting with this config file. I have adapted it to use the interface which seems to be UP on your machine (eth1) - eth0 seems to be having issues. For now I'd unplug eth0 and see if this network.cfg works any differently. Another thing you might try that is very easy: see what turning off bonding in your config file does, then see if you get network on either of your network ports by pinging and unplugging the cable. I find it strange that your config is vastly simpler than mine, but mine is definitely working. Del
  13. Hi Drackeo, Something is definitely amiss with your network. If possible, it would be useful to try the following things. 1. From the unraid cli, can you ping the gateway? Hopefully you get something back:

        # ping 192.168.1.1

     2. Can you send the output of two commands? The first shows all the interfaces, and the second will show your routing:

        # ip a
        # ip r

     Post that information and we might be able to figure out what's going on. Hope this helps, Del
  14. Greetings, Long time since I've posted here. I hope this is still a valid way of doing things, but it seems to be working for me in 6.8.3. You should have a file at /boot/config/rsyslog.conf - that's the base config file for rsyslog, and what I often do is just drop messages entirely. Again, forgive me if I haven't stayed current, but what I do is the following:

     1. Open up /boot/config/rsyslog.conf with an editor.
     2. Find the line that reads: # limetech - everything goes to syslog.
     3. Underneath that line, add a log exclusion:

        :msg,contains,"connect from 192.168.254.228 to getport" ~

     One example I use is:

        :msg,contains,"spindown" ~

     4. If you want to do this without a reboot, you can probably add the desired entry to the active rsyslog.conf file at /etc/rsyslog.conf as well.
     5. Then you'll need to restart rsyslog via:

        /etc/rc.d/rc.rsyslogd restart

     May or may not work as advertised, but I think it should? Hope this helps, and thank you for getting me engaged in the community again. Del
  15. Just a small tip, having seen this a few times where users (and sometimes sysadmins) are a bit puzzled by immutable: if you do change attributes to immutable, be sure to note which files you changed and how to undo it. It never fails - if you don't, in a few months you'll be trying to remove those files and will find it surprisingly challenging.
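     For reference, the round trip looks like this (the path is just an example):

        chattr +i /path/2/file      # set immutable - even root can't modify or delete it now
        lsattr /path/2/file         # the 'i' in the listing confirms it
        chattr -i /path/2/file      # undo it when you're done

     Del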
  16. I thought there was now a reset password button on the main login, but I have no idea how it works, so here's an alternative that should work to at least restore command line access to the root account. If you have access to a Linux command line (a live DVD would probably work): first, rename the files on the USB in /config back to their original names. Then run this command, replacing MyUnRaidPassword with your desired password:

        python -c "import random,string,crypt; randomsalt = ''.join(random.sample(string.ascii_letters,8)); print(crypt.crypt('MyUnRaidPassword', '\$5\$%s\$' % randomsalt))"

     It'll output a value that looks something like this:

        $5$ZRtMxFol$XKxC./tdG2d5U5wCzZEO43YYsqmFR.9hOnSARfwIXFB

     Open the /config/shadow file in a text editor (make sure you don't mangle the line endings) and exchange the existing value for root with the value you just created. You need to make sure you only adjust the value between root: and the next colon. So if we used the example from above, your root line should be something very similar to this:

        root:$5$ZRtMxFol$XKxC./tdG2d5U5wCzZEO43YYsqmFR.9hOnSARfwIXFB:17367:0:99999:7:::

     Save the file, reboot into unRaid, and that should likely fix your login issues. This is definitely for the terminal - I'm not entirely sure how the web authentication works, but I think it may fix that too. Hope it helps,
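     If the machine you're on only has Python 3, this equivalent one-liner should do the same thing (crypt.mksalt picks a proper random salt for the $5$ SHA-256 scheme):

        python3 -c "import crypt; print(crypt.crypt('MyUnRaidPassword', crypt.mksalt(crypt.METHOD_SHA256)))"

     Del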
  17. Glad I could help, I'm fairly sure libvirt has its own way of figuring out routing, so that's why your VMs and quite possibly dockers were still working. Del
  18. Greetings, I had a weird quirk like this a few weeks ago. This may or may not be your problem; you can check in two ways. Either go to Settings -> Network and make sure your default route is set to the correct gateway, or in the terminal type either of:

        ip r
        route

     (route can take a moment to finish.) Check that the default route is set to the proper gateway. You should also check that you don't have any odd routing - you'll likely have 4-6 routes for internal routing, which is fine; they should generally only point to internal (non-routable) networks. You can quite possibly sort this out using the Network Settings page: delete the wrong route and add a new default route. Verify the correct settings are seen in /boot/config/network.cfg, and edit if needed. In the terminal you could do it this way: using the information you gathered earlier, note down the correct interface (probably br0) and the correct gateway IP address, plus the incorrect ones, then:

        # route del default gw <incorrect gateway ip> <incorrect current interface>

     and then

        # route add default gw <correct gateway ip> <correct interface>

     You might need to finish with:

        # ip route flush cache

     Hopefully this helps, Del
  19. I'd guess the files/directory might have the 'immutable' attribute set. You can check with:

        lsattr /path/2/problem/fileordirectory

     If you see an 'i' in the listing, it's immutable - fix it with (you might want to use -R to do it recursively):

        chattr -i /path/2/problem/fileordirectory

     That should allow you to rm the file if needed. Del
  20. I think you might be misunderstanding how the cache drive works. The cache drive is generally intended to provide faster, unprotected storage for things that need snappy storage - in many cases things like virtual disks, or possibly appdata. If you want items protected, they need to be on the array; if you want them fast, on the cache. You can certainly use a tool to make backups of your cache to the array, which is what most people likely do. Del
  21. I agree it is somewhat misleading but the dashboard cpu meter in my view is intended to show if there's a problem. If you do have very high i/o wait then that's almost certainly an issue, and thus it's displayed - to warn you. It might be nice to have a toggle so you could see the actual cpu load, but this can also be done through the system stats plugin. Or just using top. Just my two cents, Del
  22. Greetings, I think there are possibly two things going on. It appears you're running with 2G of RAM. While this is possible, because the root file system lives in that RAM, it's complaining that it's getting full. With that little RAM, I'd think that warning might be somewhat expected. However, it is easy enough to check whether usage is high but stable, or high and getting worse. The first thing I'd do is open the terminal and familiarize yourself with two commands - df and du. I shall explain what they do and how to use them. To find out the filesystem sizes, issue df with -h (human readable), like this:

        df -h

     Or, if you just want to see the root filesystem that it's complaining about:

        df -h /

     You'll probably see this (from your logs):

        Filesystem      Size  Used Avail Use% Mounted on
        rootfs          812M  670M  142M  83% /

     Now, if we want to figure out where the space is being used on /, we'll use du. This command essentially 'sums up' the disk usage in directories. I usually run it with the -h and often -s flags (human readable and summarize, respectively). From the df command above, we already see that you're using 670M in your / (root) filesystem, so we could try to break that down a bit. Because some of the filesystems aren't relevant (you saw them with the straight df earlier), we'll exclude them via brace expansion, as it's way easier and cleaner. You'll still need to ponder the output of this, and possibly drill down into directories to see if there's indeed an actual problem. So, to find out how the space is divided (note there is a space after the }):

        du -hs --exclude=/{proc,sys,dev,mnt,boot,run} /*

     This should output a nice list with sizes for the relevant root directories. If /tmp indeed seems like the issue, you can check the size of things in it in much the same way, but the excludes won't be needed since we're no longer checking the whole root file system:

        du -hs /tmp/*

     If you want to just see a specific directory's total size, say for /tmp, do it without the star:

        du -hs /tmp

     With these two commands I believe you should be able to get a handle on this situation and determine whether usage is stable but reported as high because you don't have much memory, or high and getting worse. Hope this helps,
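     One more convenience, assuming GNU coreutils (which unRaid has): pipe du through sort so the biggest directories end up at the bottom and the culprit jumps out:

        du -hs --exclude=/{proc,sys,dev,mnt,boot,run} /* | sort -h

     Del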
  23. Greetings, I think one clue here is this line when you run top (or from your diagnostics):

        %Cpu(s): 0.0 us, 1.6 sy, 0.0 ni, 0.0 id, 98.4 wa, 0.0 hi, 0.0 si, 0.0 st

     Note the 98.4% listing for wa - this shows that it's not so much the CPU that's generating load; the CPU is waiting for IO. That points to a disk error, and sure enough, looking through your syslog also shows some errors. Note that the drive on ATA11.0 (TOSHIBA MD04ACA4) is showing frequent resets as the board attempts to adjust the communication speed. I haven't had time to carefully comb through the logs, but first glance indicates this is more of a drive issue than an actual cpu load issue (although it'll show as 'load' - note the load indication in top, top line at the right). If this were me, I'd run a full SMART test on the Toshiba disk, which appears to be disk2 in your array. You might find a bit of relief by moving critical stuff OFF disk2 for now if you can - the disk definitely seems a bit dicey. Check cables and monitor your syslog - you really want those 'reset' messages to stop. There's another user here who can probably give you even better advice, but that's my take on this issue. Hope it helps,
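     For reference, the SMART test would go roughly like this - the device name is a placeholder, so check which /dev node disk2 actually is first:

        smartctl -t long /dev/sdX    # kick off the extended self-test (takes hours, runs in the background)
        smartctl -a /dev/sdX         # check the results and the error/attribute tables afterwards

     Del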
  24. I'm not entirely familiar with this specific way of getting a docker terminal, but if it's not working you can always just do this directly via:

        docker exec -it <container> /bin/bash

     So an example might be something like:

        docker exec -it ApacheGuacamole /bin/bash

     At least you can get logged in to a docker in this manner - as to why the gui isn't letting you, not sure.
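     One caveat worth knowing: some minimal images don't ship bash at all, in which case the same command with sh should still get you a shell:

        docker exec -it ApacheGuacamole /bin/sh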
  25. I think in this instance, depending on how your QNAP device works, the way I'd approach it is to see if you can't get your QNAP to share (via NFS/cifs/however) to your unRAID server. Then you can use rsync to periodically 'synchronize' your directories. I don't do this, but I believe you can add a 'network share' in the unRaid GUI. You could definitely do this a few different ways - including rsync via ssh - but it's often easier if the storage is presented locally. The rsync command would be something like:

        rsync -avz /mnt/user/Movies /mnt/wherever/you/mount/the/network/share/Movies

     You can test it first via --dry-run, as in:

        rsync -avz --dry-run /mnt/user/Movies /mnt/wherever/you/mount/the/network/share/Movies
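     And to make it periodic, a cron entry along these lines would do it - the schedule and paths are just examples (this one runs at 03:30 every night; how you persist cron entries on unRaid varies, so check your setup):

        30 3 * * * rsync -avz /mnt/user/Movies /mnt/wherever/you/mount/the/network/share/Movies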