Posts posted by JonathanM

  1. Hi, a while ago I tested all versions of the Unraid 6 betas. I think one plugin filled the LOG partition, and now every time I install a plugin (or update it) I always get lots of errors because it is full.

     

    Attached is the picture of the dashboard showing the LOG at 100%...

     

    I would appreciate if someone can tell me how to clear it.

    Unless you have changed some fundamental settings in unraid, the log partition is created in RAM during the boot process and is not kept anywhere on powerdown. You probably have something misconfigured that writes to the log partition. You can change the partition size at will; many people expand the stock 128MB to something more reasonable as well.
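    To see what's actually filling the RAM-backed log partition before resizing or clearing it, something like this from the console should show it (these are standard Linux paths, but verify on your system):

```shell
# show usage of the RAM-backed log filesystem
df -h /var/log
# list the largest files so you can spot the runaway log
du -sh /var/log/* 2>/dev/null | sort -h | tail -5
```

    Truncating the offending file (e.g. `: > /var/log/syslog`) frees the space immediately, and since the partition lives in RAM, a reboot recreates it fresh anyway.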
  2. 3. You always have to forward a port for VPN (unless your gateway is running it), and that's considered safe as long as that is the only port open.

    It's only as safe as the VPN server that answers on that port. If a flaw in the VPN package is found and you don't update to patch it, then it's no longer safe.

     

    However, it's by far safer than opening things up in general, and you only have one application to audit for security flaws and updates.

  3. Is this good or bad?

     

    3712 sectors had been re-allocated before the start of the preclear.

    4288 sectors are re-allocated at the end of the preclear,

        a change of 576 in the number of sectors re-allocated.

    That part is bad, but not necessarily fatal. Run another preclear cycle, and if you get more reallocated sectors, preclear again until either the number stays stable for _at least_ one additional preclear cycle, or the drive fails.

     

    I'm betting the drive will run out of re-allocatable sectors and fail. I don't like to see re-allocated sectors in the double digits, let alone over 4,000. Looks like a failing drive to me.

  4. The sleep that unraid is waking from, occasionally rebooting and running a parity check, is an automatic sleep via the built-in function. Is there a way to prevent the automatic parity check after a forced reboot?

    Sleep is not built into unraid; it's a third-party add-on that doesn't always work with all motherboards or cards. The automatic check is necessary to detect corruption that is likely to be induced when the array is forcefully stopped without the necessary checks to ensure everything is synced properly. If you skip the check, you are likely to end up in a situation where a failed drive will not be rebuilt correctly.
  5. ... My parity check certainly went faster, was blazing fast at only 26 hours @ 42.0 MB/sec when it used to be 30+ hours...

    Parity check is not something that would be improved by changing filesystems, since parity operations deal only with the bits and not the files.

    I'm going to play devil's advocate here, because of things I've seen recently. What if the code that handles file access is working more efficiently because of the XFS file system change? If there are more disk I/O interruptions for the same tasks using RFS than there are using XFS, I could argue that unless you run the parity check in maintenance mode, so no files can be read from the shares, the file system change may affect the average speed for a server that is in use during a parity check.

     

    Or, it could be some other factor that was changed.

  6. I think you could always set up a Trial USB key, install preclear and run it on another machine, even if you were not going to run that machine permanently as another unraid array. Kinda like building a preclear "stick" on a spare USB. You could preclear 3 drives at once on that setup.

    The device limit ONLY APPLIES TO STARTING THE ARRAY, not booting unraid. You can have as many drives hooked up as you want, as long as you don't need to actually start the array. Preclear does NOT need the array to be started, and works fine with any number of drives installed. You don't even need to apply for a trial key for preclear use.
  7. So if the drive's not redballed now it should be there on a restart, right?
    Not correlated. Red ball happens the moment a write to the drive by unraid fails. Could happen at any time data is written, restart or not.

    Now does the long smart test have to be done with the array offline? And how can I just *read* everything on the drive to see if I get the errors again? something like dd to /dev/null.

    You can leave the array online, just make sure unraid doesn't try to spindown the drive during the test. A long smart test will read the entire drive, and after the test is done you will be able to see the results when you obtain a new smart report.
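    A sketch of the console commands involved, assuming the drive is /dev/sdX (a placeholder; substitute the real device):

```shell
# start the long self-test; it runs on the drive itself in the background
smartctl -t long /dev/sdX
# after the estimated duration, pull the full report including test results
smartctl -a /dev/sdX
# alternatively, force a read of every sector without writing anything
dd if=/dev/sdX of=/dev/null bs=1M status=progress
```

    Both approaches read the whole surface; the SMART test has the advantage of logging its result in the drive's own self-test history.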
  8. disk got a bunch of read errors!  Will this disk now be red balled?
    Theoretically the drive accepted the writes to replace the data that could not be read. When a drive has read errors, unraid reads all the other drives and calculates what should have been in the location that had read errors, and issues a write command to put the data that couldn't be read back onto the drive. If the write fails, the drive is red balled (X'ed?) otherwise if the write returns success, the drive stays in service and the error count for that drive is incremented.
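    A toy illustration of the single-parity math (XOR), not unraid's actual code; the byte values here are made up:

```shell
# parity is the XOR of the corresponding bytes on the data drives
d1=$(( 0xA5 )); d2=$(( 0x3C ))
parity=$(( d1 ^ d2 ))
# if d1's sector can't be read, XORing parity with the surviving data recovers it
rebuilt=$(( parity ^ d2 ))
printf 'rebuilt=0x%X\n' "$rebuilt"   # prints rebuilt=0xA5, the original d1 byte
```

    That recovered value is what gets written back to the failing drive; the write either succeeds (error count incremented) or fails (drive red balled).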

     

    I'd disable spindown for that drive and run a long smart test, then get a smart report and post it.

     

    If you want help deciding what drive needs to be replaced next, get smart reports on all your drives and post them.

  9. Am I correct that this plugin is now fully 6.0 compatible?

     

    If so, can you please edit that on the first page where the download link is.

     

    Thanks,

     

    Russell

    As the subject of the thread says, it works on any version from 6beta14 to what is current as of this post, 6.1.2. It's in the 6.1 verified subforum. What other notation are you wanting? Either it's already been changed, or I don't see what you are talking about.
  10. UPDATE:

     

    ###2015.09.17

    - Add: Settings Page;

    - Add: SMB security;

    - Add: NFS export;

    - Add: common script that will be executed prior to disk script;

    - Add: format empty disk;

    - Add: log view and upload when Help is on;

    - Fix: "Add SMB mount" not working with hidden shares;

    - Fix: webGui hanging while mounting/unmounting.

    Well, that escalated quickly!  :)

     

    Hidden share mount confirmed fixed, haven't tested hanging issue, 'cause I don't want to risk a hang right now.  ;D

  11. Is there a way to deal with the GUI hang that occurs if there is a SMB mount defined for a computer that is currently offline? It seems to tie up the main page for a LONG time before it times out, every time an action is attempted that involves the main page. I spent the better part of an hour trying to get a clean shutdown because an SMB mount was waiting to complete or time out. I finally gave up and did a hard power off.

     

    Second, is there a way to connect to a hidden share? I tried filling out the field manually with the correct info, but it would not accept anything that wasn't available with the "load shares" button.

  12. For some reason, if I reboot the unraid server and reload the docker, all of my Unraid cameras say disconnected... when I log in to them they all say their camera server is something on the 172.17.0.x network, which isn't even one of my subnets... I can manually overwrite this and save it, and the camera comes back. Works until I reboot again... Any words of wisdom?

    Well, the docker container internal network is 172.17.0.x, so apparently it's broadcasting the container's IP to the cameras. Which, if things were normal, would be fine, but inside the docker NAT, not so fine. Don't know how to fix it though.
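    A quick way to confirm what address the container sees, assuming a container named "myapp" (a placeholder) on docker's default bridge:

```shell
# containers on docker's default bridge get addresses in 172.17.0.0/16
docker inspect -f '{{.NetworkSettings.IPAddress}}' myapp
```

    Running the container with `--net=host` makes it use the host's IP instead of a bridge address, which is a common workaround for this kind of broadcast problem, though whether it helps with this particular app is untested.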
  13. I did a few quick tests on my test server and all I came up with was a hung webgui.

     

    So how would one use this? Is there a way to send the command lines NOT on the webui... something that sends the command to the console some way? Or create a console script that lists running processes and allows you to kill them.

    The intended use was to check for issues using this plugin BEFORE you try to shut down the array, and then once the plugin gives the all clear, do the shutdown using the webui. It was suggested that the plugin should intercept the shutdown attempt if it detected issues and allow you to intervene before passing through the shutdown command, but that was deemed too intrusive for a non-limetech supported plugin.
  14. I do not share my disk drives and have not tried, but the same recycle bin operation should occur if you share the disk drives and delete files at the drive level. The drive share will have a .Recycle.Bin for the drive.
    Haven't tried it, but in theory if the folder is created at the root of the drive, it will automatically become a user share.
  15. One of the things I have seen is that the .Recycle.Bin folders on the cache drive shares get moved off the cache drive to the array.

    What? I thought that was fixed when they changed the default mover implementation to only move cache "yes" shares instead of ignoring cache "only" shares.

     

    Does the .Recycle.Bin share have a normal share property page in the gui?

  16. Nope, it is not. This has to do with where your configuration files are being kept. If I had to guess, the config volume in your settings is pointing to the default location of /opt, which is non-persistent in Unraid because /opt is a folder in RAM and not on the array. Adjust your settings so the config volume points to a folder on the array or the cache device.

     

    Why does it default to that? It seems wrong.

    Not everyone has a cache drive, or the same number and size of disks, so the best location of the config folder is difficult to determine automatically. The best solution is clear instructions with the download, but not everyone reads documentation, many just download and start playing.
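    A sketch of the kind of volume mapping that keeps the config persistent (the image name and paths are assumptions; adjust to your setup):

```shell
# map the container's /opt config dir to a folder that survives reboots
docker run -d --name myapp \
  -v /mnt/cache/appdata/myapp:/opt/myapp \
  myimage
```

    With the host side on the cache drive or array, the container can keep writing to /opt internally while the data actually lands on persistent storage.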