Posts posted by itimpi

  1. 5 minutes ago, bmartino1 said:

    Kinda. I would wait for the rebuild and then run the commands against the disk.

    Only to fix/maintain Unraid parity and data, and only if the drive is in its forever home and not going to be hit again.
     

    Running the Test using the command line

    XFS and ReiserFS

    You can run the file system check from the command line for XFS and ReiserFS as shown below if the array is started in Maintenance mode, using a command of the form:

    xfs_repair -v /dev/mdX

    or

    reiserfsck -v /dev/mdX

    where X corresponds to the diskX number shown in the Unraid GUI. Using the /dev/mdX type device will maintain parity. If the file system to be repaired is an encrypted XFS one then the command needs to be modified to use the /dev/mapper/mdX device.

    Note that this is no longer quite accurate, as the device name for array drives can vary according to the Unraid release and whether encryption is being used.   It is much better to do it from the GUI, as then you do not need to worry about the device name.
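    As an illustration only (a sketch, not taken from the documentation): on recent 6.12.x releases the array device typically appears as /dev/mdXp1, so a read-only check of disk 1 might look like

    xfs_repair -nv /dev/md1p1

    The exact device name can differ between releases, which is another reason the GUI Check/Repair option is the safer route.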

  2. If you put your Unraid server's address into the Remote server field then it will write to itself in the location you have set.    Ideally this should be a share that resides on a pool to avoid spinning up array drives.
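    As a rough illustration (the share name and IP address below are hypothetical, and the resulting file name reflects what recent releases typically produce):

    # with a 'syslogs' share on a pool and a server address of 192.168.1.10
    ls /mnt/user/syslogs/
    # syslog-192.168.1.10.log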

  3. 14 hours ago, da_banhammer said:

    Well dang. I was thinking it was only a couple years old but it's actually been in service for 4 years now so I guess I shouldn't be too surprised. Thanks for your help!

    Those errors do not always indicate a drive problem.   They can also be caused by power/cabling issues to the drive (in particular power as you mentioned it making a noise).   Running the SMART extended test is a good indication as to whether a drive is healthy or not.  The easier step of getting the SMART information after a reboot to get the drive back online might also give an indication.
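    If you prefer the command line to the GUI, a minimal sketch (the device name sdb is only an example - substitute the correct one for the drive in question):

    smartctl -t long /dev/sdb    # start the SMART extended (long) self-test
    smartctl -a /dev/sdb         # once it completes, review attributes and self-test results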

  4. 8 minutes ago, chris_netsmart said:

    @JorgeB many thanks for having a look, I am guessing that the Diags had no useful information. I have done as advised and posted what I have done.  All I can do now is wait and see for when it next crashes.

    Capture.JPG


    That is not sufficient - you have so far only set the server into ‘listening’ mode.   To get something actually being logged you either need to put your server's address into the Remote Server field or set the mirror to flash option.
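    Once either option is set, a quick sketch of where to look (the flash path below is the usual location for the mirror option, but treat it as an assumption for your release):

    ls /boot/logs/    # the 'mirror to flash' copy of the syslog normally ends up here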

  5. 1 minute ago, RadRom said:

    As per documentation, stopped array and ran it in GUI terminal.

     

    Where in the current documentation did you see this?   When I looked it said:

    If you ever need to run a check on a drive that is not part of the array or if the array is not started then you need to run the appropriate command from a console/terminal session. As an example for an XFS disk you would use a command of the form:
    
    xfs_repair -v /dev/sdX1

     

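    For completeness, a sketch of a read-only check first (the device name sdc is only an example):

    xfs_repair -nv /dev/sdc1

    The -n flag reports problems without changing anything on the disk; drop it when you want repairs actually made.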
  6. The syslog suggests you are getting macvlan-related crashes, and it also contains:

    Apr 26 13:14:13 Nexus root: Fix Common Problems: Error: Macvlan and Bridging found

    This combination is known to cause instability on many systems.

     

    As mentioned some time ago in the 6.12.4 release notes, you need to either switch Docker networking to use ipvlan or, if you want to continue using macvlan, disable bridging on eth0.
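    As a rough pointer (the exact menu wording is assumed from recent releases):

    # Settings -> Docker (with the service stopped) -> Docker custom network type: ipvlan
    # or, to keep macvlan: Settings -> Network Settings -> eth0 -> Enable bridging: No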

  7. 13 hours ago, chowpay said:

    Ok I changed System to be cache, stopped all the containers, enabled the mover. Once the mover was complete I enabled the dockers again. But I see it's still utilizing disk1. Is there something I should do to ensure that docker.img is in cache and not on disk?

    It should definitely have worked if you did the following steps.

    • Disabled Docker and VM services under Settings
    • Set the 'system' share with cache as primary storage; array as secondary storage; and mover direction as array->Cache
    • Run mover manually.  When this completes the 'system' share should now only exist on the cache.  You can easily check this using something like Dynamix File Manager.
    • Change 'system' share to have cache as primary storage and nothing set as secondary storage.
    • (optional) enable Use Exclusive mode under Settings->Global Share settings.  This improves performance on shares that are entirely on a pool.
    • Re-enable the Docker and VM Services

    You could also use Dynamix File Manager to manually move the 'system' share from disk1 to cache if you prefer this to mover.
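    If you prefer the command line to Dynamix File Manager, a quick sketch of how to confirm where the 'system' share currently lives (this assumes your pool is named 'cache'):

    ls /mnt/disk1/system 2>/dev/null    # should end up empty or absent once the move is done
    ls /mnt/cache/system 2>/dev/null    # docker.img (and libvirt) should end up here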

     

    You should also make sure the cache has a Minimum Free Space value set that is bigger than the largest file you expect to cache (ideally something like twice that size).   This is to stop the cache filling up too far, which can cause problems.

  8. The syslog is full of entries of the form

    May 21 17:19:54 NAS kernel: sd 1:0:4:0: attempting task abort!scmd(0x00000000118655b3), outstanding for 30364 ms & timeout 30000 ms
    May 21 17:19:54 NAS kernel: sd 1:0:4:0: [sdf] tag#3018 CDB: opcode=0x88 88 00 00 00 00 00 00 53 06 d8 00 00 04 00 00 00

    which refers to the parity drive.   It has also apparently dropped offline so there is no SMART information for it in the diagnostics.

  9. 56 minutes ago, rtgurley said:

    Somebody on reddit shared this document with me.  So far I am about 74% (18 hours) into copying data from old parity to new parity.  This seems faster than having to redownload data and rebuild parity.

    Not sure that version is still accurate as it is in the 'legacy' part of the documentation.   The current version of that procedure is in the online documentation, accessible via the Manual link at the bottom of the Unraid GUI.  In addition, every forum page has a DOCS link at the top and a Documentation link at the bottom.

  10. Looks like the docker.img file (in the system share) is on disk1.   That means that any time the docker service is running those two drives will be spun up.  Ideally you want all of the 'system' share to be on the cache both for best performance, and also to let the array drives spin down.

  11. 1 hour ago, loady said:

    Next I will swap sticks over. Could it also be the RAM slot and not the actual RAM?

    It could be.   It could also be simply that with the extra RAM module installed each one checks out fine individually, but you get failures when both are installed.
