Everything posted by BLKMGK

  1. Anyone ever find a solution to this? I've got this message filling my logs, and it makes reading them nearly useless. SuperMicro mobo, IPMI plugin up and running. I get temps and fan speeds fine, as near as I can tell. I've seen mention that IPMI may have something to do with this, but I was wondering if anyone had actually solved it. Mobo comes up as: Supermicro - X9DRi-LN4+/X9DR3-LN4+
  2. Receiving the following error - did something happen? Multiple updates had failed, and I began poking around to find this 😮 After the update failure I tried installing the older code - no go so far.

     plugin: installing: /boot/config/plugins-old-versions/community.applications.plg/2018.09.28/community.applications.plg
     Cleaning Up Old Versions
     plugin: downloading: https://raw.github.com/Squidly271/community.applications/master/archive/community.applications-2018.09.28.txz ... failed (Invalid URL / Server error response)
     plugin: wget: https://raw.github.com/Squidly271/community.applications/master/archive/community.applications-2018.09.28.txz download failure (Invalid URL / Server error response)

     Downloaded directly from https://github.com/Squidly271/community.applications/blob/master/plugins/community.applications.plg, copied it to the flash, attempted to install, and got this next error:

     plugin: installing: /boot/bak-plug/community.applications.plg
     plugin: file doesn't exist or xml parse error

     I'm stumped - hoping it's just some network weirdness, but I wanted to give a heads up.

     Edit: MAY have found the issue? The system log is full. I have a sneaking suspicion that's causing my problems. Once I figure out how to flush it I'll try again; rebooting isn't an option right now.

     Edit 2: Was able to do a clean install from the link at the start of this thread after clearing a little space for syslog. Ugh! To my surprise it was atop logging that had filled my log directory, not syslog. Two large files multiple days old were responsible - not sure WTF happened, but I deleted them. (Rough cleanup sketch below.)
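     In case anyone else hits this, here's roughly how to spot a full log directory (standard paths, but the atop filename pattern is a guess - match whatever du actually shows on your box):

         # on unRAID /var/log lives on a small ramdisk; check how full it is
         df -h /var/log

         # list the biggest files so the culprit (atop, in my case) stands out
         du -ah /var/log | sort -rh | head -n 10

         # then delete the offenders to free space, e.g.:
         rm /var/log/atop/atop_*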
  3. Hmm, some weirdness. I removed the plg manually and the directory as well - no joy. Put the directory back, and it refuses to install, saying this is the same version. Replaced the PLG with a copy from the "old versions" directory; it attempts to install but again claims the file is corrupt. I can open the file via 7zip and extract the tar, and I can open that file with 7zip too. Both the PLG and the TXZ have current file dates, with versions that match the latest. Permissions issue or some weirdness, maybe? Something funky going on; I think I'll reboot when I'm able and, with crossed fingers, see if things return to normal. Okay, bounced it and the Apps tab is back. It also acted as if this was the first time installing the plugin, which seems right. Dunno', maybe something screwed up tar? Doesn't look like an issue on your end, so I'll shrug and keep on trucking. (What I mean by "removed manually" is sketched below.)
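     For reference, the manual removal was roughly this (paths match where this plugin lives on my flash - double-check on yours before deleting anything):

         # remove the plugin file and its support directory from the flash drive
         rm /boot/config/plugins/community.applications.plg
         rm -r /boot/config/plugins/community.applications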
  4. Ah, I see what's going on! Looks like a file is corrupt.

     +==============================================================================
     | Installing new package /boot/config/plugins/community.applications/community.applications-2018.10.28a-x86_64-1.txz
     +==============================================================================

     Verifying package community.applications-2018.10.28a-x86_64-1.txz.
     Unable to install /boot/config/plugins/community.applications/community.applications-2018.10.28a-x86_64-1.txz: tar archive is corrupt (tar returned error code 2)
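     If you want to check the archive by hand instead of trusting 7zip (a quick sketch - the .txz is just an xz-compressed tar, so listing its contents will also error out on a corrupt download):

         tar -tJf /boot/config/plugins/community.applications/community.applications-2018.10.28a-x86_64-1.txz > /dev/null && echo OK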
  5. Sumthin' odd happened. Tried to install a new container and got dumped to the screen for building my own template; hit back and got dumped to my server's homepage. Noticed there was an update to the Community Applications plugin available and installed it successfully according to the dialog, though the interface seemed a little slow. Went back to install the container, and my Apps tab was missing. I've seen this before, so I tried to remove the CA plugin; the dialog told me it wasn't there, and it got removed from the interface. Used the URL here to reinstall - the tab hasn't come back, but I appear to be on the 10-28a release successfully. I can see from above that GitHub may be having issues; the status page linked says they're still "red" as of this morning. Not to pile on, but just wanted to let you know things might still be dorked up. Might explain why the initial container attempt failed too. <shrug> No biggie, I'll play with other things. Really appreciate the effort and the software for sure!
  6. Updated to 6.5.3 and ran into this same exact issue. Running sensors at the commandline throws multiple errors. Even better, my Telegraf container refused to start and was throwing errors about lm_sensors too. Pulled a backup config I'd saved on my flash, pasted it into nano, and got my server reading temps again. Reloaded Telegraf, bashed into it, loaded sensors, and now it works too. Yeesh, glad I posted something here or I'd have had to research it all over again! Here's the working config:

     # sensors
     chip "coretemp-isa-0001"
         label "temp1" "CPU Temp"
     chip "coretemp-isa-0000"
         label "temp1" "MB Temp"
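     If you've stashed a known-good copy on the flash like I did, restoring it is just this (the backup path is my guess - point it at wherever yours lives):

         # put the saved config back where lm_sensors reads it, then verify
         cp /boot/config/sensors.conf.bak /etc/sensors.d/sensors.conf
         sensors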
  7. Okay, I think I've got something! If you run sensors from the commandline, in my case it had issues parsing multiple lines of the config file:

     root@BLKMGK:~# sensors
     Error: File /etc/sensors.d/sensors.conf, line 2: Undeclared bus id referenced
     Error: File /etc/sensors.d/sensors.conf, line 4: Undeclared bus id referenced
     Error: File /etc/sensors.d/sensors.conf, line 6: Undeclared bus id referenced
     Error: File /etc/sensors.d/sensors.conf, line 12: Undeclared bus id referenced
     sensors_init: Can't parse bus name

     I went into that file and removed the offending lines from the config. Rerunning sensors, I now get the following:

     root@BLKMGK:/etc/sensors.d# sensors
     coretemp-isa-0001
     Adapter: ISA adapter
     CPU Temp:  +49.0°C  (high = +80.0°C, crit = +90.0°C)
     Core 0:    +45.0°C  (high = +80.0°C, crit = +90.0°C)
     Core 1:    +50.0°C  (high = +80.0°C, crit = +90.0°C)
     Core 2:    +46.0°C  (high = +80.0°C, crit = +90.0°C)
     Core 3:    +47.0°C  (high = +80.0°C, crit = +90.0°C)
     Core 4:    +44.0°C  (high = +80.0°C, crit = +90.0°C)
     Core 5:    +49.0°C  (high = +80.0°C, crit = +90.0°C)
     Core 6:    +42.0°C  (high = +80.0°C, crit = +90.0°C)
     Core 7:    +46.0°C  (high = +80.0°C, crit = +90.0°C)

     i350bb-pci-0600
     Adapter: PCI adapter
     loc1:      +68.0°C  (high = +120.0°C, crit = +110.0°C)

     nct7904-i2c-0-2d
     Adapter: SMBus I801 adapter at 1180
     in1:  +1.06 V
     in2:  +1.10 V
     in3:  +1.49 V
     in4:  +1.27 V
     in5:  +1.27 V
     in6:  +0.63 V
     in7:  +1.81 V
     in8:  +0.81 V
     in9:  +1.03 V
     in10: +0.81 V
     in11: +1.34 V
     in12: +1.35 V
     in13: +1.35 V
     in14: +1.35 V
     in15: +3.37 V
     in16: +3.19 V
     in20: +3.37 V
     fan1:    0 RPM
     fan2: 1668 RPM
     fan3: 1721 RPM
     fan4: 1776 RPM
     fan5: 1776 RPM
     fan6: 1755 RPM
     fan7: 1741 RPM
     fan8: 1773 RPM
     temp1: +48.0°C
     temp2:  +0.0°C
     temp3:  +0.0°C

     coretemp-isa-0000
     Adapter: ISA adapter
     MB Temp:   +47.0°C  (high = +80.0°C, crit = +90.0°C)
     Core 0:    +43.0°C  (high = +80.0°C, crit = +90.0°C)
     Core 1:    +42.0°C  (high = +80.0°C, crit = +90.0°C)
     Core 2:    +44.0°C  (high = +80.0°C, crit = +90.0°C)
     Core 3:    +43.0°C  (high = +80.0°C, crit = +90.0°C)
     Core 4:    +44.0°C  (high = +80.0°C, crit = +90.0°C)
     Core 5:    +47.0°C  (high = +80.0°C, crit = +90.0°C)
     Core 6:    +47.0°C  (high = +80.0°C, crit = +90.0°C)
     Core 7:    +42.0°C  (high = +80.0°C, crit = +90.0°C)

     I have now modified the sensors.conf file located on the flash drive at config/plugins/dynamix.system.temp in hopes that it will load properly on reboot. Using the "Rescan" button on the System Temp page now shows me my temps!!!! I am also seeing FAN SPEEDS - a first! Any chance we could make that page display more entries? I have a bunch of fans in this box lol. Aww crap, as soon as I hit "Apply" it all goes away - the config file gets rewritten when I hit the Apply button. Is there a way to solve this? Ha - manually edited it again, hit Rescan, selected what I wanted, and instead of hitting Apply hit Done, and was able to keep my temps - woohoo!
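     If anyone wants to do the same cleanup from the shell, a rough sketch based on the line numbers sensors complained about (back the file up first; your line numbers will differ):

         # save a copy, then delete the lines referencing an undeclared bus id
         cp /etc/sensors.d/sensors.conf /root/sensors.conf.bak
         sed -i '2d;4d;6d;12d' /etc/sensors.d/sensors.conf
         sensors   # re-run to confirm the parse errors are gone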
  8. SAME exact issue! I had it working for a while, and then one day I went into the dropdowns and noticed additional sensors. Configured them, and as soon as I hit Save it ALL stopped working. No more system temps for me - driving me crazy! coretemp and nct7904 are what mine detects. <sigh> Typing sensors in the shell gives me "Error: File /etc/sensors.d/sensors.conf, line 2: Undeclared bus id referenced" Supermicro - X9DRi-LN4+/X9DR3-LN4+
  9. Will you consider adding Calibre to this? You guys seem to be keeping things pretty up to date, and Calibre added to this would be wonderful! I've been trying to manually configure LL and Calibre in a VM and have just about thrown in the towel lol. Seeing a pretty weird error from the latest LL pull too: apparently their database format changed and caused an error. I think it was an update today that did it, as it worked earlier in the day and then died after a new pull from Git - so beware.
  10. I was moving from ReiserFS, so yeah, it was time! I've been using unRAID a very, very long time, and there was finally a good reason to move to a new filesystem: crypto! I was traveling part of the time, which is why using a portable device was so important. It's nice to be sitting in a bar somewhere on wifi and remote in to move things around.
  11. One thing mentioned that I'd echo is that a cancel button would be helpful. I've found that if I have SSH access I can kill the rsync process and bring it down, but it's not graceful and not something I'd expect a novice to be comfortable with. That said, I was able to use this tool to successfully move over 35TB of data around to encrypt 16 devices. I still haven't done my cache, but it's on the list - I think that can be done? Anyway, unBALANCE, the Krusader container, and Midnight Commander were all very helpful, and I appreciate the effort that's gone into this tool - thank you!!
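      For anyone else stuck mid-move without a cancel button, this is roughly what I do over SSH (not graceful - expect partial files on the target that need cleanup afterward):

          # find the rsync process unBALANCE spawned, then stop it
          ps aux | grep '[r]sync'
          kill <PID>    # substitute the PID from the ps output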
  12. On my Chrome-equipped desktop I also see where expanding a large number of directories expands into the disks listed below, if that's what you're describing above. I generally try to move at the top level to avoid this. I have seen the size issue as well and increased my minimum space. It doesn't seem consistent, though, and appears better with this last release? I've been careful to move drive to drive and not give it much space to have to calculate on its own.
  13. Okay, with Chrome on iPad, no change. Likewise on my iPhone with an up-to-date iOS. Logs show:

      I: 2018/03/09 21:26:07 core.go:170: Sending config
      I: 2018/03/09 21:26:07 core.go:175: Sending state
      I: 2018/03/09 21:26:07 core.go:185: Sending storage
      I: 2018/03/09 22:32:53 core.go:170: Sending config
      I: 2018/03/09 22:32:53 core.go:175: Sending state
      I: 2018/03/09 22:32:53 core.go:185: Sending storage

      That's when my phone attempts to make a plan to Scatter and goes off into the weeds. I don't think it's crashing; iOS just isn't handling it well. Would love to be able to handle this remotely, but I could use Chrome Remote Desktop in a pinch, so if no one else is interested I'll understand - or I'll help you solve it if you want.
  14. I can test on Chrome, but it'll use the same engine, I believe. I don't know how to access the debug console on a tablet browser. Honestly, it doesn't feel like a crash, just an inability to display the dialog. I'll be home tonight and can check and troubleshoot it further. My tablet is an iOS update behind; my phone is current, so I can check both for you. Once I kick it off from my desktop I can access it from anywhere - it's starting it that's the issue. When I refresh I do see the following message: {"message":"Not Found"} But removing the /transfer from the URL brings the page up fine? I'll post an update tonight with what I see. I have a big move from this morning running for another hour or so as I clean drives for crypto. Your tool is far superior to trying to do this with Krusader, as that container tends to freeze and move data slowly - rsync rocks! I look forward to using Gather soon when this is done too...
  15. Not much to send, honestly. I select a drive, it expands; I select a folder (expands fine); select a target drive; Plan is lit. I press Plan and it goes dark while the colored bars in the top right begin "playing". I never see a dark box displaying planning progress as before, the Plan button stays dim, and the page is unresponsive until I refresh, which starts it all over again. Does that help? Certainly willing to test if you need the help! I'll check it on my desktop when I get home too.
  16. Did something change with this release regarding how the Plan pops up? I used to be able to use this from my iPad but after the last release hitting the Plan button starts the churning in the upper right but I never see the plan and cannot advance to begin the move. I am sad! Lol
  17. Thanks for updating the plugin to the latest Rclone! Still trying to exclude a dir - argh!
  18. Been beating my head against the wall for a while on a config issue. I've been using this plugin and rclone for months now with great success, first on ACD and now on GSuite. My commandline is as follows:

      rm /mnt/rclone-bak.log   # remove the old backup
      mv /mnt/rclone.log /mnt/rclone-bak.log   # create the new backup
      rclone sync --log-file /mnt/rclone.log --max-size=40G --copy-links --transfers=4 --exclude '/FOO/*.*' --bwlimit=10M --verbose /mnt/user/ google_crypt:/ --checkers=10 -v

      This removes the logfile from the run two prior, moves the current logfile over as the new backup, then kicks off rclone in verbose mode with output piped to a file. I can tail these files to monitor status or just open them (hint: do not place them in a directory being backed up). The --exclude portion, however, is a real problem... I have recently moved data to a new share we'll call FOO. FOO is about 60 gig of 50KB files; it's not something I require to be backed up, and the tiny file sizes push rclone into spasms, so my backup never completes! I'm in a 403 tarpit... How the bloody hell do I skip the FOO directory? I have tried each of the following to no result:

      --exclude '/FOO/*.*'
      --exclude "/FOO/*.*"
      --exclude='/FOO/*.*'
      --exclude="/FOO/*.*"
      --exclude "FOO/*.*"
      --exclude="FOO/*.*"
      --exclude "/mnt/user/FOO/*.*"

      I have probably tried a few other combinations too! I have been told the path should be relative to the one given to rclone for the backup, so I think I've got this right in not giving it the full path. I have tried with and without a leading slash, and I have tried double and single quotes - I'm pretty sure single is correct. For bonus points, there's another share I'd ALSO like to skip, but it's much less critical as it's simply a working directory.
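      For anyone who lands here with the same problem: my reading of rclone's filter rules is that the pattern, not the quoting, is the issue - '*.*' only matches names containing a dot, and you need '**' to match a directory's contents recursively, with a leading / anchoring the rule to the root of the source path. A sketch of what ought to work (same FOO share as above):

          rclone sync /mnt/user/ google_crypt:/ \
              --exclude "/FOO/**" \
              --log-file /mnt/rclone.log --max-size=40G --copy-links \
              --transfers=4 --bwlimit=10M --checkers=10 -v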
  19. Currently working very well for me nightly with User Scripts. I managed to clear the long filenames - it was a PITA, but it's done now. This runs maybe an hour or three a night and updates Amazon fine. The first time it ran, it deleted a good TB and a half off of Amazon from all of my previous attempts and files moving around. It's now run multiple nights in a row without issue; it starts up at 4:40am, and I'm not sure how to change that, but it's working lol. For the logfile, I simply keep it at a higher level than I'm backing up - at the /mnt level. I did modify the calling script to remove rclone-bak.log and then move rclone.log to rclone-bak.log, so I have two nights' worth of logs I can look at if something weird happens. Currently sitting at about 27.1TB backed up and about 520K files, give or take a few K.
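      On changing the 4:40am start: if your version of User Scripts has a "Custom" schedule option, it takes a standard cron expression (that's my assumption - check the plugin's help). For example, to run nightly at 2:00am instead:

          # minute hour day-of-month month day-of-week
          0 2 * * *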
  20. Alright, after having to severely slash some filenames and move the logfile out of the range being backed up, I'm finally able to have target files deleted that are no longer needed. I'd really appreciate some help from anyone who has exclusions working. Next up is scheduling this to run regularly; hopefully User Scripts handles it - fingers crossed!
  21. Yup, Amazon! I've just done a bit of hack and slash with a bulk renamer, so we'll see what fails this round. IF all of the lengths check out, it will still fail, however, since it will be unable to avoid trying to back up the changing log file. Has anyone got excludes working properly? I've got a work directory or two I'd like to skip, and that log file for sure! Deleting the extra crap would sure be nice, and that won't occur without a full run free of I/O errors, it says... Edit: 4 more failures and counting! Some of these I've edited two or three times now, and they have names shorter than others that pass. The crypto must occasionally get a wild card and screw them up!
  22. Here's an example of one file that's just failed. I'm still getting over 100 failures despite having modified every file that was claimed to fail on the last run!

      BlackHat/Defcon/DEF CON 9/DEF CON 9 audio/DEF CON 9 Hacking Conference Presentation By Daniel Burroughs - Applying Information Warfare Theory to Generate a Higher Level of Knowledge from Current IDS - Audio.m4b

      Another:

      BlackHat/Defcon/DEF CON 19/DEF CON 19 slides/DEF CON 19 Hacking Conference Presentation By - Kenneth Geers - Strategic Cyber Security An Evaluation of Nation-State Cyber Attack Mitigation Strategies - Slides.m4v

      These don't show up with a find as short as 225 chars, so I'm a bit frustrated and am wondering if there's something in the name of the file itself that's tripping these up, except I see little in common. The error is:

      u0027 failed to satisfy constraint: Member must have length less than or equal to 280
  23. Finally have my entire backup nearly done! However, while the backup has been running I've been adding files, moving files, and doing what gets done with a NAS like unRAID. I've also run into issues with files having pathnames that are too long. For folks who may run into this, the following command will find files that are too long so you can rename them. Run it from a terminal session or SSH, in the /mnt/user directory, and it'll return the names of files whose paths exceed 230 characters:

      find -regextype posix-extended -regex '.{230,}'

      Note that the max allowed length is 280, but for some reason quite a few extra characters seem to get tacked on, and I've had to search for lengths longer than 230. I had over 130 instances of this - no fun to clean up! I have one last issue: exclusions! I run this using User Scripts with the following custom script:

      rclone --log-file /mnt/user/work/rclone.log --max-size=40G --transfers=2 --bwlimit=8.8M sync /mnt/user/ crypt: --exclude="work/HB" --exclude="work/rclone.log" --exclude="work" --checkers=10

      "work" is a share under /mnt/user, and I'd like to exclude it. When I'm compressing video or doing random downloads, this is where it winds up - it's also where the running rclone log gets written. Because the log is constantly being updated, the file changes while rclone tries to upload it and I get an error; and if rclone hits an I/O error like that, it refuses to do deletions on the target. Since I've had some pretty hefty files get uploaded from my work directory that I don't want hanging around, I need that full sync to occur. So, how best to exclude "work"? I've tried "/mnt/work/" and I've tried what's above. I'm going to try "/mnt/work/" again, as this seems like it ought to work, and rclone has revved a few times since I last tried it. I've got it running now, with just one error found so far - the darned log file! All of my name changes seem to have taken - whew! So close to having a good baseline, but so far... I'd appreciate a pointer from anyone who's gotten exclusions to work.

      Edit: Ugh, even 230 wasn't short enough - I've still got at least 20 files with issues!
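      A possible explanation for those extra characters (my understanding of the crypt backend - treat this as a sketch, not gospel): rclone crypt encrypts each path segment separately and base32-encodes the result, which expands a name by roughly 8/5. That puts the ceiling for any single plaintext file or directory name at about 280 x 5/8 ≈ 175 characters, less a bit for padding - and it would explain why checking total path length misses some failures. This variant flags any individual name of 140+ characters, which should stay safely under the limit:

          # match any single path segment (file or directory name) of 140+ chars
          find . -regextype posix-extended -regex '.*/[^/]{140,}'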
  24. Yeah, I can see where that could be an issue I hadn't thought of. I currently transfer 9 files concurrently and have 15 threads checking files, as I understand it. I do have lots of files now, but once the initial upload is done I think it'll be mostly large files from media and backups. It'll be 20 or 30 days before I get there, but I'll try to report any bizarre behavior when I do!
  25. Your stop script looks better than what I'm doing. User Scripts terminates the top-level script but not the rest of the rclone processes, which I have to kill by hand. I expect I'll be using yours soon lol. I'm still doing my initial upload and am at 10TB, but when complete I'd like to schedule it too. I like what you've done!