fnwc


Posts posted by fnwc

  1. for i in /etc/rc.d/rc.avahi* ;do :>$i ;done

    I don't have any Macs on my network, so I guess I don't need avahi.

    However, this didn't seem to do anything (avahi still spammed the syslog).

    Did you put the above line in your "go" script, and then reboot?

    If so, then the avahi daemon will not be started at boot, so how can it still spam your syslog?

     

    I don't know what to tell you, but it didn't work. I edited "/boot/config/go" and added the line above and rebooted. After reboot, I'm still seeing the messages.
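    (For what it's worth, a quick way to check whether the go-script loop actually emptied the rc.avahi* scripts and whether avahi is still running after a reboot -- these are standard commands, not part of the original exchange:)

    ls -l /etc/rc.d/rc.avahi*    # should show zero-length files once the go script has run
    ps aux | grep '[a]vahi'      # if avahi-daemon still appears, something else is starting it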

     

    The other fix worked, though. Thanks for that.

  2. That can't happen.  syslogd is NOT writing to the flash drive. (unless you've modified its config to do so on purpose).  In v.5, the syslog is in a size-limited tmpfs in RAM.  So nothing bad will happen as a result of its growing.  It will just stop growing once it reaches a certain size (128MB?  grep /var/log /proc/mounts).  That may take a very long time though.

    I'm a little confused. I see a file "/var/log/syslog" that grows... am I not looking in the right place?

    You are looking in the right place.  What are you confused about?

     

    Well, since "/var/log/syslog" resides on the flash drive, I was confused that you were saying that syslogd doesn't write to the flash drive. Maybe I'm mistaken about where "/var/log/syslog" actually sits -- I don't really know much about linux.

     

    The syslog priority levels, in ascending order of severity, are: debug, info, notice, warning, err, crit, alert, emerg.

    Stock unRAID uses the most verbose level -- debug -- which logs everything.

     

    So you could lower the log level with something like this:

    mv /etc/syslog.conf /etc/syslog.conf.bak
    echo '*.info	-/var/log/syslog' >/etc/syslog.conf
    /etc/rc.d/rc.syslog restart
    

    Is the syslog still getting spammed? If yes, try lowering some more:

    echo '*.notice	-/var/log/syslog' >/etc/syslog.conf
    /etc/rc.d/rc.syslog restart
    

    ...etc.
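    If the offending messages all come from one syslog facility, classic syslog.conf syntax can also drop that facility entirely. This is only a sketch -- it assumes avahi logs to the "daemon" facility, which may not be true on your system:

    echo '*.info;daemon.none	-/var/log/syslog' >/etc/syslog.conf
    /etc/rc.d/rc.syslog restart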

     

    While the above may help you, it is still better to deal with the offending program's settings (avahi-daemon?), rather than messing with the syslogd conf settings.

     

    Ok, thanks, I'll look into that. I'm basically using stock unRAID with the latest unmenu (which doesn't have anything to do with this avahi spam, AFAIK), so the avahi-daemon messages are coming from the stock unRAID package, which is what I'm trying to sort out.

     

    Thanks for your help. I'll see if I can reduce the logging.

  3. I have this same issue, and I also have "System wakeup disabled by ACPI" even though I have S3 sleep support.

     

    Sleeping the unRAID server works fine, but waking up results in an unresponsive server: no SMB shares are mounted, and even with a keyboard and monitor plugged directly into the box there is no video signal. I basically have to hard reset the server at this point.

     

    I'll try rolling back to RC 14... are there any issues I need to be aware of in downgrading?

    2) the way you do that through crond, it will start two separate instances of sed at the same time, and you'll have a race condition, as each of the two stream editors will write to its own copy of the syslog and then try to rename it back to syslog.  Bad idea.  Instead of going through crond, you may as well have a single script with one infinite loop and a 5 min sleep inside it.  But still a bad idea.

     

    I've modified the script to only use 1 sed command.
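    For reference, combining the two deletions into a single sed invocation looks something like this (a sketch of the "1 sed command" version, not necessarily the exact script used):

    sed -i -e '/Received response from host .* with invalid source port .* on interface/d' \
           -e '/Invalid legacy unicast query packet/d' /var/log/syslog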

     

    In general, meddling with the syslog is a BAD idea!  Instead, look into the config files of whatever is writing those lines (avahi?) and adjust the log level there.

     

    I agree, although I have no idea how to adjust the log level for syslogd. Can you point me in the right direction?

     

    I found some filtering capabilities, although they seem to come from different variants of syslog, such as rsyslog or syslog-ng. I couldn't find anything about filtering messages in the "vanilla" syslogd.

     

    That can't happen.  syslogd is NOT writing to the flash drive. (unless you've modified its config to do so on purpose).  In v.5, the syslog is in a size-limited tmpfs in RAM.  So nothing bad will happen as a result of its growing.  It will just stop growing once it reaches a certain size (128MB?  grep /var/log /proc/mounts).  That may take a very long time though.

     

    I'm a little confused. I see a file "/var/log/syslog" that grows... am I not looking in the right place?

    As a follow-up, I ran "lsof /var/log" to figure out if there was an open file handle (after the script was run) and I got this:

     

    COMMAND  PID USER    FD     TYPE DEVICE SIZE/OFF NODE NAME
    syslogd 1195 root    1w     REG    0,13   151090 4605 /var/log/syslog (deleted)
    

     

    I'm guessing that "sed -i" must delete the file and then copy back... and somehow this causes the syslog daemon to lose the handle on the syslog file?

  6. Bear with me, I'm a Windows user.

     

    Ok, I tried to write a script and have it run every 5 minutes.

     

    I created a file: "/boot/custom/parsesylog":

     

    */5 * * * * sed -i '/Received response from host .* with invalid source port .* on interface/d' /var/log/syslog
    */5 * * * * sed -i '/Invalid legacy unicast query packet/d' /var/log/syslog
    

     

    In "/boot/config/go":

     

    cat /boot/custom/parsesyslog >> /var/spool/cron/crontabs/root
    

     

    The idea is that, every 5 minutes, this script removes the two offending lines that are spamming my syslog.
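    A quick sanity check (not part of the original post) is to confirm the entries actually made it into root's crontab after boot:

    crontab -l    # should list both */5 sed entries appended by the go script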

     

    When I run the commands normally, i.e.,

     

    sed -i '/Received response from host .* with invalid source port .* on interface/d' /var/log/syslog
    sed -i '/Invalid legacy unicast query packet/d' /var/log/syslog
    

     

    The syslog is pruned successfully.

     

    However, once the script runs via cron, something weird happens and it seems only the first removal works -- all the "Invalid legacy unicast query packet" lines remain.

     

    I've attached both syslogs, one which is a minute after boot, BEFORE cron runs the "parsesyslog" script, and one 5 minutes later, AFTER cron runs the "parsesyslog" script.

     

    Furthermore, it seems that the syslog is then "locked"... nothing else seems to write to the syslog properly. You'll see in the file "syslog-2013-10-06-BEFORE.txt", on line 1431, that I telnet in from my machine. Once the script runs, the syslog no longer seems to capture the telnet session (I tried repeatedly, and the syslog never appends anything else). Note: running the /sbin/powerdown script normally captures a ton of stuff in the syslog, but looking at the archived log file in /boot/logs, it doesn't show anything. So something is getting messed up.

     

    Does Linux have file handles like Windows? Perhaps sed is causing the file to be locked?

     

    Does anyone know what I might be doing wrong here?

    syslog-2013-10-06.zip

  7. I just upgraded from 4.7 today, using Tom's recommended guidelines on his announcement post. I had no issues with any of the drives, but I was looking at the syslog and noticed this:

     

    So, 192.168.1.123 is a Netgear powerline adapter.

     

    It's not causing any issues, but I don't want the syslog to get gigantic from the constant warning messages, which might eventually fill the flash drive (and that *will* cause problems).

     

    I'm not really sure what to do about this.

     

    Any ideas?

     

    Edit: It seems like I might be able to modify "/etc/syslog.conf" to filter these messages (so they never appear in the syslog), but I'm unsure of the syntax.

    syslog-2013-10-05_2.zip

  8. Sickbeard is not kaput

     

    It works just fine even without the external indexers because it has its own indexer

     

    just select sickbeard indexer in the settings and you're good to go. It will work for all new material.

     

    For backlog, you need an external indexer, of which there are a bunch out there; none I found as good as nzbmatrix, but they will get better

    Ah thanks, didn't realize that you could use an internal indexer. Sweet.

  9. Hi,

     

    I recently tried to upgrade disk1 from 1.5 TB to 2.0 TB. I had just performed a parity check (no errors), so I stopped the array, unassigned the drive, and powered down.

     

    Screenshot *before* the upgrade:

     

    20111106diskstatuscopy.th.png

     

    I installed the new drive in the same bay as the old one (disk 1), and started the array (blue ball), rebuilding the drive based on the previous parity.

     

    Screenshot *after* the upgrade:

     

    20111107diskstatuscopy.th.png

     

    However, at *some* point during the rebuilding process, the server froze. I wasn't able to access it remotely, and it didn't even have an IP assigned on the router. I went to the console and logged in, but I was immediately logged out.

     

    I hard reset the server, and it came back up, although the parity was rebuilding. Since I wasn't 100% sure that disk1 had finished rebuilding, I didn't trust the parity (it had already found a bunch of errors). I stopped the rebuild of the parity, unassigned the drive, and powered the server down.

     

    I reinstalled the *old* 1.5 TB drive in disk1 and started the server, and reassigned it to disk1.

     

    Now I have this:

     

    1182011121812am.th.png

     

    This is what I think I have:

     

    • Incorrect parity
    • Good data for all of my disks (including disk1)

     

    How do I restore disk1 with the existing data on it? I basically want to start the array and rebuild parity, trusting that all of the data on the disks is correct.

     

    Thanks for the help!

    syslog-2011-11-07.txt

  10. Hi,

     

    I have the following directory setup on my unRaid 4.7 server:

     

    disk1/Movies/A/Aliens (1986)

    disk2/Movies/A/Almost Famous (2000)

     

    and I wanted to keep all the movie files together on the same disk (everything at the movie name folder level and below). I have set my split level to "(" and that seemed to work most of the time. However, I've been noticing that sometimes -- usually after I've already moved the files over to the server and later add another file (such as a trailer) -- the files end up fragmented across disks.

     

    In the above example, I copied a trailer for the Aliens movie and it ended up on disk2 for some reason, when I expected it to be stored on disk1 (there was plenty of room on disk1).

     

    Should I just change my split level to 2?

  11. I've noticed some issues in the smart view of myMain:

     

    http://img820.imageshack.us/i/316201131117am.png/

     

    • /dev/sda: I've looked up "multi_zone_error_rate" but haven't had much success in determining what exactly it is. Should I be worried? SMART report: http://pastebin.com/embed_js.php?i=3Q3H8ZDK
    • /dev/sdo looks farked. This drive was being precleared using the preclear script. Are these errors possibly the result of a bad cable/connection, or are they definitely a drive issue?  SMART report: http://pastebin.com/embed_js.php?i=VwQi85jW
    • /dev/sdl and /dev/sdm are Seagate ST32000542AS. Any reason why they show up as "zzz"?

     

    Thanks in advance for anyone who can clarify these issues for me.

     

    That's exactly what you need to do. If possible, grab a screenshot of your disk layout in the control panel and, honestly, stick a sticky note or whatever you can on each drive to make sure it goes back in that assigned position. As you know, it's not which cables go into which slots, it's making sure each drive is assigned to the correct slot in the web interface.

    Cool, thanks.

     

    I actually use a label maker to put the serials on the back of the drive so I can see them when I have the case off. This makes it a lot easier when I'm trying to find a particular physical drive and match it to my array.

    Hi, I'm planning to move a bunch of my array drives' cables around between my onboard SATA ports and my 2 SATA cards (for better cable management).  I assume that if I note the exact position of each drive in my array (in particular the parity drive) and restore them all to the correct positions, this will not be an issue?

     

    I thought this was in the wiki somewhere but I've been unable to find it.

     

    Thanks in advance.