icepic0

  1. I've been using your great IPMI plugin for years on a Supermicro SuperStorage server (5018A-AR12L with a Super A1SA7-2750F motherboard) with an Aspeed AST 2400 BMC (gen X7). Yesterday, after a full power down to move the server, the plugin is no longer reporting any sensors and does not seem to be able to access IPMI. I'm running Unraid 6.12.6 and did not update just before this happened. The fans do run based on the simple configuration in the IPMI settings. The IPMI web interface still works and shows sensors and values. I have reset the IPMI unit from the web interface and also done a factory reset of it. I have cleared the CMOS while replacing the battery.

     It seems that the ipmi tools cannot access the IPMI system. Trying the tools I get:

         ipmi-detect: Cannot connect to server

         ipmi-sensors
         Caching SDR repository information: /root/.freeipmi/sdr-cache/sdr-cache-unraid1.localhost
         ipmi_sdr_cache_create: internal IPMI error

     All the commands other than ipmi-detect seem to time out.

     I've noticed two other odd issues that seem to have started at the same time, but I do not know if they are related. The only red reading is VBAT showing "Lower Non-recoverable", even after replacing the CMOS battery. And the iKVM input (keyboard and mouse) is not working, even using the virtual keyboard.

     Any other things I should check or do? Has anyone seen this behavior before? Thank you very much.

     My system does seem to be recognized by the plugin, as it says Motherboard and Model: Supermicro A1SA7 Gen:1. Here are some screenshots of my IPMI details and what I'm seeing in the plugin.
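     In case it helps with troubleshooting, here is a rough in-band sanity check to run from the Unraid console (only a sketch; it assumes the stock ipmi_si/ipmi_devintf kernel drivers and the KCS system interface, which I believe is what this board uses in-band):

         #!/bin/sh
         # Sketch: verify the kernel-side IPMI path the freeipmi tools use in-band.

         # 1) Is the BMC character device present?
         ls -l /dev/ipmi0 /dev/ipmi/0 /dev/ipmidev/0 2>/dev/null

         # 2) Are the IPMI kernel modules loaded? Try loading them if not.
         lsmod | grep ipmi || modprobe ipmi_si ipmi_devintf

         # 3) Ask the BMC for basic info, forcing the in-band KCS driver.
         bmc-info --driver-type=KCS

         # 4) Re-read sensors with a fresh SDR cache in case the old one is stale.
         ipmi-sensors --flush-cache
         ipmi-sensors --driver-type=KCS

     If the device node never appears or bmc-info times out even with the driver forced, the kernel cannot reach the BMC at all, which would point at the BMC/board side rather than the plugin.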
  2. I'm using similar hardware (an HP Gen7 MicroServer with 10 GB of memory) with Unraid 6.3.5. For the last few months I have been seeing out-of-memory errors like yours ("Jun 21 04:31:33 UNRAID kernel: Out of memory: Kill process 9943 (python2) score 50 or sacrifice child") with various processes. These are the two major killers for me:

         Jun 24 04:50:59 unraid1 kernel: Out of memory: Kill process 1452 (smbd) score 1 or sacrifice child
         Jun 12 04:34:38 unraid1 kernel: Out of memory: Kill process 2500 (shfs) score 3 or sacrifice child

     What I haven't been able to find out is what is taking up the memory, as Stats -> System Stats shows that there is plenty of memory available. I'm guessing a plugin may be at fault, so I deleted most of my plugins. I still have:

       - Server Layout
       - Community Applications
       - Dynamix Active Streams
       - Dynamix System Information
       - Dynamix System Statistics

     All up to date. Is anyone else seeing these problems?
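     For tracking down where the memory goes before the next OOM kill, something like this might help (a rough sketch only; the log path is just an example and the interval is arbitrary):

         #!/bin/sh
         # Sketch: log overall memory, the kernel caches, and the top memory users
         # every 5 minutes so there is history to look at after the next OOM kill.
         LOG=/boot/logs/memwatch.log   # example location; pick your own
         while true
         do
             date >> "$LOG"
             free -m >> "$LOG"
             grep -E 'Slab|SReclaimable|SUnreclaim|Cached|Dirty' /proc/meminfo >> "$LOG"
             ps -eo pid,rss,vsz,comm --sort=-rss | head -15 >> "$LOG"
             echo "----" >> "$LOG"
             sleep 300
         done

     Comparing the last snapshot before a kill with an earlier one should at least show whether a process, the page cache, or kernel slab is growing.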
  3. Thank you Ken-ji. I had read about sec=sys in the release notes of one of the unRAID 6 betas, but I thought that had been fixed within the unRAID configuration.

     So, the better way to fix the NFS read-only problem is to add "sec=sys" within the parentheses of each of your NFS export rules. This seems to apply the settings within your rule correctly. If you have more than one statement in your rule, for example:

         server1(rw, root_squash) server2(rw, no_root_squash)

     you need to add the sec=sys inside each section:

         server1(sec=sys,rw, root_squash) server2(sec=sys,rw, no_root_squash)

     So thank you Ken-ji. I hope this can go into the unRAID FAQ.
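     To check that the rule actually took effect, I would spot-check the live export options (just a sketch; /mnt/user/Media is only an example share path, use your own):

         # Re-apply the export table and show the effective options per client.
         exportfs -ra
         exportfs -v | grep -A1 '/mnt/user/Media'
         # The line for your client should now list rw and sec=sys instead of ro.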
  4. Here is a summary of what is needed after CrashPlan in the Docker upgrades to 4.3. The upgrade seems to work fine; you may need to start/restart the Docker instance, as it may have exited when CrashPlan restarted for the upgrade.

     In 4.3 there is a new check that uses a key in the .ui_info file. You need to copy it from your server. It is in the "id" folder and is "hidden" from normal browsing because of the leading dot, so I suggest using scp to copy it, or use the ssh command line to make a copy of it without the dot. Then you need to copy it to the system that you use for running the CrashPlan client.

     I've seen a few notes on where to put it on a PC, but as I use Macs I didn't see the info in this thread, so here it is.

     On Windows, copy it to:

         C:\ProgramData\CrashPlan\.ui_info

     On a Mac, copy it to /Library/Application Support/CrashPlan/.ui_info or ~/Library/Application Support/CrashPlan/.ui_info (these are where they go, depending on whether you installed CrashPlan for "all users" or "just myself").

     On a Mac you can open a Terminal window, cd to the location, and then use scp to copy the file:

         cd "/Library/Application Support/CrashPlan/"
         scp root@UNRAID_IP:/CRASHPLAN_CONFIG/id/.ui_info .

         cd "$HOME/Library/Application Support/CrashPlan/"
         scp root@UNRAID_IP:/CRASHPLAN_CONFIG/id/.ui_info .

     Then use the same method you were using prior to the 4.3 upgrade: ssh port forwarding, or going direct with the IP and port of your unRAID server in the ui.properties file. Good luck.
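     If the client still refuses to connect, it is worth checking that the local copy actually matches what is on the server (a quick sketch for the Mac side; UNRAID_IP and CRASHPLAN_CONFIG are the same placeholders as above):

         # Compare the server's .ui_info with the one(s) the client is reading.
         ssh root@UNRAID_IP cat /CRASHPLAN_CONFIG/id/.ui_info
         cat "/Library/Application Support/CrashPlan/.ui_info"
         cat "$HOME/Library/Application Support/CrashPlan/.ui_info" 2>/dev/null
         # The contents should be identical; re-copy with scp if they are not.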
  5. I am having the same problem. Somehow the auto export, "exportfs -a", is getting a set of default options that includes ro, and the per-host config is not overriding it.

     I did find a fix (hack) that will force all mounts to rw. I wrote a script to edit /etc/exports to force rw (and, for me, "no_root_squash"). It should be safe, but if the format of the exports file changes at all it will stop working and need to be updated, as it looks for what is currently in the options of the exports file and adds "rw,no_root_squash" only if it finds the same beginning of the options line as now exists.

         #!/bin/sh
         OPTIONS_TO_ADD="rw,no_root_squash"
         touch /etc/exports2
         while true
         do
             diff /etc/exports /etc/exports2
             if [ $? -ne 0 ]; then
                 echo "Updating /etc/exports with $OPTIONS_TO_ADD"
                 cp /etc/exports /etc/exports.bak
                 sed "s/ -async/ -$OPTIONS_TO_ADD,async/" /etc/exports.bak > /etc/exports2
                 cp /etc/exports2 /etc/exports
                 exportfs -a
             fi
             sleep 60
         done

     In /boot/config/go I call the script:

         # reexport NFS RW under Private
         /boot/config/make_nfs_rw.sh &

     NOTES:
       - The ampersand is important, as the script does not exit.
       - Once this bug is fixed, remove the script from /boot/config/go.
       - You cannot override what my script does with a "ro" in a host, so if you need mixed ro and rw exports the script needs to be updated.
       - Also, it only runs at boot, so if you change a config from the web interface it will blow away the changes. You can re-run the script though.

     This seems like it might be a bug in nfsd itself, not allowing a per-host configuration to override the default configuration.

     For anyone trying to troubleshoot the root cause: I have this set in the web configuration:

         192.168.1.0/24(rw,no_root_squash)

     I have these options for my exports in /etc/exports:

         -async,no_subtree_check,fsid=102 192.168.1.0/24(rw,no_root_squash)

     I see that it is actually exported with these options, from exportfs -v:

         192.168.1.0/24(ro,async,wdelay,root_squash,no_subtree_check,fsid=108,sec=sys,ro,root_squash,no_all_squash)

     If I do a manual export of a filesystem:

         exportfs -o async,no_subtree_check,fsid=12,rw,no_root_squash 192.168.1.0/24:/mnt/disk3

     then that filesystem IS exported read-write; exportfs -v gives these options:

         192.168.1.0/24(rw,async,wdelay,no_root_squash,no_subtree_check,fsid=12,sec=sys,rw,no_root_squash,no_all_squash)

     So for some reason exportfs's default ro option is not being overridden by per-host configuration. Or the format changed and we need to change our configurations.
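     If you use the hack, it is worth spot-checking that the rewrite actually landed (a quick sketch; the subnet matches my example above, substitute your own):

         # Confirm the exports file picked up the extra options and the kernel sees rw.
         grep -- '-rw,no_root_squash' /etc/exports
         exportfs -v | grep -B1 '192.168.1.0/24'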
  6. I have got it running against my unRAID system from a Linux system. Getting it running on unRAID should work; Google turned up at least one binary package for rsnapshot and plenty for Perl.

     One thing I found out: the user filesystems do not support hard links, so you need to make your destination on a physical disk (which does support hard links) or you will fill your disks up. Using NFS will work, as long as the destination is not an unRAID user filesystem.

     I turned these up in my search, though I don't know if they work:

         http://pkgs.org/slackware-13.37/slacky-i486/rsnapshot-1.3.1-i486-5sl.txz.html
         http://slackware.cs.utah.edu/pub/slackware/slackware-14.0/slackware/d/

     (BTW, I am using CrashPlan, it works nicely on unRAID.)
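     An easy way to check whether a destination is safe for rsnapshot is to test hard-link support directly (a sketch; the path is just an example, point it at your intended backup destination):

         #!/bin/sh
         # If the link count shown by ls jumps to 2, hard links work here and
         # rsnapshot's rotation will not duplicate data on this filesystem.
         DEST=/mnt/disk2/backups       # example destination; change to yours
         touch "$DEST/hardlink_test"
         ln "$DEST/hardlink_test" "$DEST/hardlink_test.link" && ls -l "$DEST/hardlink_test"
         rm -f "$DEST/hardlink_test" "$DEST/hardlink_test.link"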