Posts posted by itimpi

  1. According to the diagnostics you still have a fixed address of 192.168.50.203 set in the network.cfg file, which is strange as you say you deleted that file and the default would then be DHCP.   You should either go to Settings->Network and make sure the server is set to use DHCP, or alternatively change the IP address there to one that is compatible with the router in your new home.

     

    You may also find that the gateway addresses you have set are no longer valid.  In particular, the first one currently seems to point back at the Unraid server rather than at the router, and it is not clear what the other ones are pointing at.
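 
    If you look at the network.cfg file on the flash drive you can see whether static settings are still present.  As a rough sketch (the exact key names vary a little between Unraid releases, so treat this as illustrative rather than authoritative), a static configuration looks something like the lines below, whereas a DHCP configuration has USE_DHCP set to "yes" and no fixed IPADDR/GATEWAY entries:

        # /boot/config/network.cfg - illustrative static settings (key names are approximate)
        USE_DHCP[0]="no"
        IPADDR[0]="192.168.50.203"
        NETMASK[0]="255.255.255.0"
        GATEWAY[0]="192.168.50.1"
        DNS_SERVER1="192.168.50.1"

    Switching to automatic/DHCP in Settings->Network rewrites this file for you.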

  2. 6 minutes ago, madmax969 said:

    will I lose everything on my drive which is disabled when I do a Parity drive swap

    No.   The whole purpose of the parity swap procedure is to replace a parity drive with a larger one and then rebuild the contents of the emulated drive onto the old parity drive that is replacing the disabled drive.    
     

    It works in two phases, both of which have to run to completion without interruption if you do not want to have to restart from the beginning.

    1. The first phase copies the contents of the old parity drive onto the new (larger) parity drive.   During this phase the array is not available.
    2. When that completes, the standard rebuild process runs to rebuild the emulated drive onto the old parity drive.   If you are doing this in Normal mode the array is available during the rebuild, although with reduced performance (as is standard when rebuilding).   If running in Maintenance mode the array only becomes available when the rebuild completes and you restart the array in Normal mode.
  3. The syslog in the diagnostics is the RAM version that starts afresh every time the system is booted.  You should enable the syslog server (probably with the option to Mirror to Flash set) to get a syslog that survives a reboot so we can see what leads up to a crash.  The mirror to flash option is the easiest to set up (and if used the file is then automatically included in any diagnostics), but if you are worried about excessive wear on the flash drive you can put your server's address into the remote server field. 
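 
    Once Mirror to Flash is enabled, the mirrored copy of the syslog lives on the flash drive and so survives a reboot.  As a quick sketch (the /boot/logs location is an assumption about the default, so check the Syslog Server settings for the actual path), you can look at the end of it from the command line with:

        # Show the last events captured before the crash/reboot.
        # The path is an assumption - the mirrored syslog is written to the flash drive (normally under /boot/logs).
        tail -n 100 /boot/logs/syslog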

  4. 7 hours ago, levster said:

    I did a hard reboot and the GUI came up. Parity check kicked in, but not sure why I lost GUI. Any way for me to check? Log would have probably been cleared.

    If you want to be able to see logs of what happened before the last reboot then you have to enable the syslog server to get logs that can survive a reboot.

  5. 2 hours ago, asrock73 said:

    When the USB tries to load and then reboots, is there a chance it generates a error file or something on the USB stick i can read?

    From your description it will not have gotten far enough to be able to write a file.

     

    It might be worth rewriting all the bz* type files, as described here in the online documentation, to see if that helps in case one of those files is failing to be read properly on the problem machine.
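 
    If you do that copy by hand, a minimal sketch (assuming the flash drive is mounted at /boot, as it is on a running Unraid server, and that the zip for the same Unraid release has been extracted to ~/unraid-release, a name used only for this example) would be:

        # Copy fresh copies of the boot images over the possibly corrupt ones on the flash drive.
        # ~/unraid-release is an assumed extraction path for the matching release zip.
        cp ~/unraid-release/bz* /boot/
        sync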

  6. 5 hours ago, jsamdal said:

    I was able to invoke the mover for files that go on plex, so this is definitely an issue with the shares having "j" and "J" 

     

    The problem is under shares I only have one. So how can I get rid of the "j" share if it doesn't show up under shares. Is there a way to do so in command line?

    You should be able to do this using the Dynamix File Manager.   You can also do it from the command line but if you are not familiar with it you could cause further problems.
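 
    For reference, a user share is just a top-level folder on the array disks and pools, so from the command line the stray lowercase share can be found and removed roughly like this (a sketch that assumes the unwanted folder really is called "j" and that your pool is named cache; double-check its contents before deleting anything):

        # Find which disk(s) or pool(s) hold a top-level folder called "j".
        ls -d /mnt/disk*/j /mnt/cache/j 2>/dev/null
        # Once you are sure it holds nothing you want to keep, remove it (this is destructive).
        rm -r /mnt/disk1/j     # repeat for each location reported above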

  7. 1 hour ago, grana said:

    The manual says that documentation for ZFS should be added, but it is not :(

    Sorry - missed the fact that it was ZFS.

     

    I am not sure of the steps to handle this on ZFS I am afraid; hopefully someone else can provide some guidance.   In the worst case I would expect something like UFS Explorer on Windows to be able to recover most (if not all) of the data.

  8. Handling of drives that unexpectedly go unmountable is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page. 

  9. You can use the New Config tool to reset the array as described here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.   I recommend that you start by using the option to keep all current assignments to reduce the chance of error, and then return to the Main tab to remove the drive before starting the array.   Since you have removed a drive, parity will need to be built again.

  10. 1 hour ago, Mikke said:

    when/how often are new releases usually slated?


    In the past this has been nothing like a fixed interval; I seem to remember it varying between about 3 months and 12 months.   Whether it will become more predictable (and, if it does, what the intervals are likely to be) I have no idea.

     

    Limetech have improved on their initially suggested offering (as you noticed) in that you are still entitled to any patch releases that are produced for your current release even if the support period on your licence has expired.

  11. 16 minutes ago, cprn said:

    If the drives are connected through USB will I able to add them to the existing array where I already have 12 drives? Basically go from 12 to 20 drives in the same array with 8 hdd connected via usb in a jbod.

     

    From what I was able to find online, it seems that you are forced to create a separate pool with the jbod drives, is that true?

    As long as the JBOD enclosure exposes each drive to Unraid individually with its unique serial number, you can treat them just like any SATA connected drive and use them for any purpose supported by Unraid (a quick way to check this is sketched at the end of this post).
     

    As was said earlier the potential issues would be:

    • Problems if the drives have a tendency to disconnect and then reconnect as a different device ID (as USB connected drives often do), since Unraid arrays are not hot-plug aware.
    • Performance bottlenecks due to putting multiple drives through a single connection to the host.
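 
    A quick way to check whether the enclosure passes through unique serial numbers is to plug it in and run the following from the command line (a sketch; lsblk ships with Unraid as it does with most Linux systems):

        # Each drive should report its own model and serial number rather than the enclosure's.
        lsblk -o NAME,MODEL,SERIAL,SIZE
        # Alternatively, look for one stable entry per drive here:
        ls /dev/disk/by-id/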