Posts posted by tgggd86

  1. 8 hours ago, JonathanM said:

    That will result in an empty fresh filesystem, that you will need to repopulate from your external backups. Is that what you intend to do?

     

    That is not what I intend to do, but I do have a backup of my most critical data.

     

    My understanding of dual parity is that you can still recover data after two drives fail. Why is this situation different?

  2. 46 minutes ago, JorgeB said:

    If it's not a RAM issue could be corrupt LUKS headers, do you have a backup of those?

     

    In any case, two simultaneously disabled disks is usually not a disk problem. SMART looks OK for both, and they are on a SASLP, which is not recommended for Unraid and known to drop disks sometimes, so you might want to check if the actual disks mount with UD and, if so, re-sync parity instead.

     

    Also, I wouldn't recommend encryption unless you absolutely need it, since it's just another layer of complexity, but if you keep using it make sure you back up the LUKS headers.

     

    I do require encryption, but unfortunately did not back up my LUKS headers. I've looked up how to do that and will back them up once my array has been rebuilt.
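
    For anyone else landing here, a sketch of what that backup looks like: `cryptsetup` can dump each device's LUKS header to a file. The device name and destination path below are placeholders, not Unraid-specific guidance — check your own devices with lsblk first.

    ```shell
    # Back up the LUKS header of an encrypted array device (placeholder paths;
    # on Unraid the array devices are typically /dev/md1, /dev/md2, ...).
    cryptsetup luksHeaderBackup /dev/md1 \
        --header-backup-file /boot/config/luks-header-disk1.img

    # Sanity-check that the backup file is a readable LUKS header.
    cryptsetup luksDump /boot/config/luks-header-disk1.img

    # To restore a damaged header later:
    # cryptsetup luksHeaderRestore /dev/md1 --header-backup-file /boot/config/luks-header-disk1.img
    ```

    Keep a copy off the server, and note a header backup also preserves the old key slots, so treat the file as sensitive.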

     

    If I can't mount my two drives due to encryption issues, I'm assuming I'll need to format them and then rebuild them using parity. I don't see how rebuilding parity is the best course of action. What am I missing? 

    I left on vacation for 10 days, and when I returned I noticed 2 of the disks in my array were disabled. I rebooted, and when starting the array, 2 of my disks could not be mounted, with the "unmountable wrong encryption key" error (these were the same disks that were disabled). I googled the error and found the thread below, which hints at a memory error/issue.

     

    https://forums.unraid.net/topic/114191-solved-multiple-unmountable-disks-with-wrong-encryption-key-but-its-correct/

     

    I've been running Memtest86+ on my server for 36 hours (7 passes) with 0 errors, so I assume memory is not the issue here.

     

    I have dual parity, so I'm assuming that I can just rebuild those 2 drives without any issues. But I want to make sure I prevent this issue from happening again, and that I won't end up with data corruption if there's another underlying issue that hasn't revealed itself yet. So before I begin rebuilding my array, I'm looking for any advice/recommendations before I commit. Below are a couple of things that I've noticed or done that may be contributing factors.

     

    - Upgraded my cache drive approx. 3 weeks ago. Did not notice any issues beforehand, and followed the "replace a cache drive" procedure on the Unraid wiki.

    - Plex seemed to have "lost" many of my files located in various folders. Example: I had a playlist with 20 movies in it and it only remembered 2.

          - I forced Plex to rescan my folders, and it began adding items that were already on the array as if they were "new".

    - The log file, when I first looked at it (before my initial restart), indicated a bunch of read errors on disks that were not disabled/emulated.

         - Those disk errors haven't popped up since restarting.

     

    Thanks in advance! Log files keep failing to upload; I'll try in a follow-on post.

  4. On 12/13/2020 at 2:47 PM, dbinott said:

    Imo, this version was not ready for primetime. But too late.

    @tgggd86 I bet if you delete all your blacklist it will work. The migration is failing on it. But you would have to start v2 to do that, or edit the sqlite db directly if you know how.

    I don't know how, but I'm willing to learn. I was able to open radarr.db and find the blacklist table; I just don't know which portion I need to edit.
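
    A hedged sketch of what that edit might look like from the console, assuming the table the v3 migration chokes on is named `Blacklist` (an assumption — confirm with `.tables` first, and always work on a backup with Radarr stopped):

    ```shell
    # Stop the Radarr container first, then back up the database.
    cp /config/radarr.db /config/radarr.db.bak

    # Confirm the table name -- "Blacklist" is an assumption here.
    sqlite3 /config/radarr.db ".tables"

    # Delete the blacklist rows the migration fails on, then compact the file.
    sqlite3 /config/radarr.db "DELETE FROM Blacklist;"
    sqlite3 /config/radarr.db "VACUUM;"
    ```

    If the migration still fails afterwards, restore radarr.db.bak and wait for a fixed build.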

    I have been having a similar issue to the above users, although I have an InvalidCastException error which I'm not sure is the cause. I can't seem to figure out a solution.

     

    [Info] Bootstrap: Starting Radarr - /app/radarr/bin/Radarr.dll - Version 3.0.1.4259
    [Info] AppFolderInfo: Data directory is being overridden to [/config]
    [Info] Router: Application mode: Interactive
    [Info] MigrationController: *** Migrating data source=/config/radarr.db;cache size=-20000;datetimekind=Utc;journal mode=Wal;pooling=True;version=3 ***
    [Info] MigrationLoggerProvider: *** 154: add_language_to_files_history_blacklist migrating ***
    [Info] add_language_to_files_history_blacklist: Starting migration to 154
    [Error] MigrationLoggerProvider: System.InvalidCastException: Specified cast is not valid.
    at System.Data.SQLite.SQLiteDataReader.VerifyType(Int32 i, DbType typ)
    at System.Data.SQLite.SQLiteDataReader.GetInt32(Int32 i)
    at NzbDrone.Core.Datastore.Migration.add_language_to_files_history_blacklist.UpdateLanguage(IDbConnection conn, IDbTransaction tran) in D:\a\1\s\src\NzbDrone.Core\Datastore\Migration\154_add_language_to_file_history_blacklist.cs:line 143
    at FluentMigrator.Runner.Processors.SQLite.SQLiteProcessor.Process(PerformDBOperationExpression expression)
    at FluentMigrator.Expressions.PerformDBOperationExpression.ExecuteWith(IMigrationProcessor processor)
    at FluentMigrator.Runner.MigrationRunner.<>c__DisplayClass70_0.<ExecuteExpressions>b__1()
    at FluentMigrator.Runner.StopWatch.Time(Action action)
    at FluentMigrator.Runner.MigrationRunner.ExecuteExpressions(ICollection`1 expressions)

    [v3.0.1.4259] System.InvalidCastException: Specified cast is not valid.
    at System.Data.SQLite.SQLiteDataReader.VerifyType(Int32 i, DbType typ)
    at System.Data.SQLite.SQLiteDataReader.GetInt32(Int32 i)
    at NzbDrone.Core.Datastore.Migration.add_language_to_files_history_blacklist.UpdateLanguage(IDbConnection conn, IDbTransaction tran) in D:\a\1\s\src\NzbDrone.Core\Datastore\Migration\154_add_language_to_file_history_blacklist.cs:line 143
    at FluentMigrator.Runner.Processors.SQLite.SQLiteProcessor.Process(PerformDBOperationExpression expression)
    at FluentMigrator.Expressions.PerformDBOperationExpression.ExecuteWith(IMigrationProcessor processor)
    at FluentMigrator.Runner.MigrationRunner.<>c__DisplayClass70_0.<ExecuteExpressions>b__1()
    at FluentMigrator.Runner.StopWatch.Time(Action action)
    at FluentMigrator.Runner.MigrationRunner.ExecuteExpressions(ICollection`1 expressions)




    [Fatal] ConsoleApp: EPIC FAIL!

    [v3.0.1.4259] NzbDrone.Common.Exceptions.RadarrStartupException: Radarr failed to start: Error creating main database
    ---> System.InvalidCastException: Specified cast is not valid.
    at System.Data.SQLite.SQLiteDataReader.VerifyType(Int32 i, DbType typ)
    at System.Data.SQLite.SQLiteDataReader.GetInt32(Int32 i)
    at NzbDrone.Core.Datastore.Migration.add_language_to_files_history_blacklist.UpdateLanguage(IDbConnection conn, IDbTransaction tran) in D:\a\1\s\src\NzbDrone.Core\Datastore\Migration\154_add_language_to_file_history_blacklist.cs:line 143
    at FluentMigrator.Runner.Processors.SQLite.SQLiteProcessor.Process(PerformDBOperationExpression expression)
    at FluentMigrator.Expressions.PerformDBOperationExpression.ExecuteWith(IMigrationProcessor processor)
    at FluentMigrator.Runner.MigrationRunner.<>c__DisplayClass70_0.<ExecuteExpressions>b__1()
    at FluentMigrator.Runner.StopWatch.Time(Action action)
    at FluentMigrator.Runner.MigrationRunner.ExecuteExpressions(ICollection`1 expressions)
    at FluentMigrator.Runner.MigrationRunner.ExecuteMigration(IMigration migration, Action`2 getExpressions)
    at FluentMigrator.Runner.MigrationRunner.ApplyMigrationUp(IMigrationInfo migrationInfo, Boolean useTransaction)
    at FluentMigrator.Runner.MigrationRunner.MigrateUp(Int64 targetVersion, Boolean useAutomaticTransactionManagement)
    at FluentMigrator.Runner.MigrationRunner.MigrateUp(Boolean useAutomaticTransactionManagement)
    at FluentMigrator.Runner.MigrationRunner.MigrateUp()
    at NzbDrone.Core.Datastore.Migration.Framework.MigrationController.Migrate(String connectionString, MigrationContext migrationContext) in D:\a\1\s\src\NzbDrone.Core\Datastore\Migration\Framework\MigrationController.cs:line 67
    at NzbDrone.Core.Datastore.DbFactory.CreateMain(String connectionString, MigrationContext migrationContext) in D:\a\1\s\src\NzbDrone.Core\Datastore\DbFactory.cs:line 115
    --- End of inner exception stack trace ---
    at NzbDrone.Core.Datastore.DbFactory.CreateMain(String connectionString, MigrationContext migrationContext) in D:\a\1\s\src\NzbDrone.Core\Datastore\DbFactory.cs:line 130
    at NzbDrone.Core.Datastore.DbFactory.Create(MigrationContext migrationContext) in D:\a\1\s\src\NzbDrone.Core\Datastore\DbFactory.cs:line 79
    at NzbDrone.Core.Datastore.DbFactory.Create(MigrationType migrationType) in D:\a\1\s\src\NzbDrone.Core\Datastore\DbFactory.cs:line 67
    at NzbDrone.Core.Datastore.DbFactory.RegisterDatabase(IContainer container) in D:\a\1\s\src\NzbDrone.Core\Datastore\DbFactory.cs:line 45
    at Radarr.Host.NzbDroneConsoleFactory.Start() in D:\a\1\s\src\NzbDrone.Host\ApplicationServer.cs:line 95
    at Radarr.Host.Router.Route(ApplicationModes applicationModes) in D:\a\1\s\src\NzbDrone.Host\Router.cs:line 56
    at Radarr.Host.Bootstrap.Start(ApplicationModes applicationModes, StartupContext startupContext) in D:\a\1\s\src\NzbDrone.Host\Bootstrap.cs:line 77
    at Radarr.Host.Bootstrap.Start(StartupContext startupContext, IUserAlert userAlert, Action`1 startCallback) in D:\a\1\s\src\NzbDrone.Host\Bootstrap.cs:line 40
    at NzbDrone.Console.ConsoleApp.Main(String[] args) in D:\a\1\s\src\NzbDrone.Console\ConsoleApp.cs:line 41


    Press enter to exit...
    Non-recoverable failure, waiting for user intervention...

    I noticed Windows 10 users have problems with the HP tool.

    The FreeDOS boot media built with Rufus needs himemx.exe from the full FreeDOS distribution, and a corresponding entry in config.sys.

     

    Some days ago I also noticed the "Exit Code: 0x01" error while flashing an H200 controller.

    I believe this is related to newer firmwares on the controller.

    They won't respond when queried by the MegaCli tool.

    We need to use sas2flsh instead.

    You should have noticed the error already when executing the first step (1.bat).

    Not sure about the MegaRec steps (2.bat and 3.bat) but they should also fail (if I remember correctly).

     

    Therefore I started setting up a new version of the toolset with some modifications.

    Rufus and additional files are included. I've added it to the original post.

    Note, I was going to test the tools next week on some controllers that are inbound.

    See what happens, and PM me if you have problems. Grab some screenshots if possible.

     

    Try starting over with step 1e.bat which uses sas2flsh instead of 1.bat.

    If 3.bat fails, there is a 3e.bat also.

     

    The __READMEFIRST.txt contains an SAS address you can use if you didn't manage to dump the original one.

    Maybe you can also input a random number? I can't tell if there is a check or not.

     

    Thanks for the quick fix and reply, Fireball3! Running 1e.bat came back with "bad command or filename", since the .bat looks for sas2flsh.exe in the same folder as the .bat, so I just copied sas2flsh.exe into that folder. It should only take modding the path to point at the right subfolder to make it dummy-proof.

     

    Otherwise everything went as planned. Really, I don't think there is a need for the SAS address unless you have multiple cards in your system. I used the SAS address already in the .bat, and after flashing, the card booted up and recognized all my drives just fine. Hopefully my issues were due to the Supermicro controller and I'll be worry-free from here on out.
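
    For anyone hitting the same "bad command or filename", the commands 1e.bat wraps boil down to something like the following. This is a sketch from common SAS2008 crossflash guides, not the exact .bat contents; the firmware filename and SAS address are placeholders that vary by package.

    ```
    sas2flsh -listall                      :: list detected SAS2008 controllers
    sas2flsh -o -f 6GBPSAS.fw              :: flash the firmware image (filename varies)
    sas2flsh -o -sasadd 500605bXXXXXXXXX   :: program the SAS address (placeholder)
    ```

    The key point from this thread: newer H200/H310 firmwares don't answer MegaCli queries, so sas2flsh is the tool that actually talks to the card.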

  7.  

    LSI MegaRAID with SAS2008 chipsets

     

    3) DELL Perc H310 as well as H200 Flashed successfully to LSI9211-8i IT (P20)

     

     

    Ok, got my Dell Perc H310 card. I started following the attached instructions, and on both my Win10 computers I could not get the tool to format two different flash drives. So I went to Rufus, which apparently worked the same (hint: it didn't), and followed all the steps up to Step 4, where I could not find my ADAPTERS.txt file. Back during Step 1, I got an error that not enough memory was available, and no ADAPTERS.txt file was created in my root folder. At the time I assumed it wasn't a big deal... obviously it is.

     

    Apparently Rufus uses a version of FreeDOS which does not have HIMEM.SYS, which makes using FreeDOS on modern systems very problematic. So I forced Rufus to use FreeDOS 1.2 and didn't run into that error. The only problem is that, since I wiped my old firmware out, my ADAPTERS.txt file only says "Exit Code: 0x01". I'm assuming that's because the card is no longer being seen by the system.

     

    So should I go back and reflash the adapter with the file created in Step 2, or should I keep going and then jump back to Step 1 to get my Hardware ID so I can complete Step 6 (if that's even possible)?  I also do not have a sticker on the backside of my card showing the SAS address.

     

    Thanks in advance!

  8. Seems like there are a lot of people having issues with Marvell controllers and unRAID.  I just ran into an issue I can't seem to resolve and I think it's time to find a new SATA controller card.  I currently have a Supermicro AOC-SASLP-MV8 which is throwing errors no matter what I do (change drives, change cables, change power source, update firmware) and am hoping to find something similar to replace it.  Any recommendations?  Hoping for something that will be a little future proof but I only need 8 ports.

     

    Also, if you're running unRAID v6.1.9 or higher and using this same controller without issues, please chime in.

     

    Thread about my server issues: https://lime-technology.com/forum/index.php?topic=54768.0

  9. Thanks CHBMB!  That was definitely the problem!

     

    You didn't do anything wrong; check that the share you keep the appdata on hasn't been moved to the array.

     

    I had a very similar problem.

     

    I keep all my data on /mnt/cache/appdata.

     

    Mover had moved it all to /mnt/user/appdata.

     

    None of my containers could then find the appdata.

     

    But here's the kicker: it doesn't happen straight away. I noticed it after a server reboot.

     

    Got me confused as hell to start with.

     

    All I had to do was move the appdata folder back to the cache share and job done.

     

    I found the data in /mnt/disk8/appdata, so I used mc to copy /mnt/disk8/appdata to /mnt/cache/appdata.

     

    Restarted docker service and all was well.
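
    For anyone following along, the same copy can be done from the console with rsync instead of mc. The paths below are the ones from this post; verify the copy before deleting anything.

    ```shell
    # Copy the stray appdata back to the cache drive, preserving
    # permissions, ownership, and timestamps.
    rsync -avh /mnt/disk8/appdata/ /mnt/cache/appdata/

    # Only after verifying the copy, remove the duplicate from the array:
    # rm -r /mnt/disk8/appdata
    ```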

    So I shut down my server last night, and upon reboot SABnzbd and Sonarr now start up as if I am running them for the first time. Did I do something wrong? Are we required to manually stop Docker apps prior to shutting down our servers?

     

    SABnzbd has data in the logs from when I last ran it, but the config.xml file has only its own ports and API keys listed in it.

    Preclearing two 2TB SATA drives right now. I just had this happen about six minutes into the first one. Thoughts?

     

    =
    =
    =
    Elapsed Time:  0:06:40
    ================================================================== 1.14
    =                unRAID server Pre-Clear disk /dev/sdi
    =               cycle 1 of 3, partition start on sector 64
    = Disk Pre-Clear-Read completed                                 DONE
    = Step 1 of 10 - Copying zeros to first 2048k bytes             DONE
    = Step 2 of 10 - Copying zeros to remainder of disk to clear it
    =  **** This will take a while... you can follow progress below:
    =
    =
    =
    =
    =
    =
    =
    =
    Elapsed Time:  0:06:45
    0+0 records in
    0+0 records out
    0 bytes (0 B) copied, 0.00134184 s, 0.0 kB/s
    ./preclear_disk.sh: line 656: let: percent_wrote=(0 / ): syntax error: operand expected (error token is ")")
    Wrote  0  bytes out of    bytes (% Done)
    ./preclear_disk.sh: line 1867: / (1405475789 - 1405475763) / 1000000 : syntax error: operand expected (error token is "/ (1405475789 - 1405475763) / 1000000 ")
    ========================================================================1.14
    == WDCWD20EADS-00R6B0   WD-WCAVY6713758
    == Disk /dev/sdi has been successfully precleared
    == with a starting sector of 64
    ============================================================================
    ./preclear_disk.sh: line 831: [: : integer expression expected
    ./preclear_disk.sh: line 831: [: : integer expression expected
    ./preclear_disk.sh: line 831: [: : integer expression expected
    ./preclear_disk.sh: line 831: [: : integer expression expected
    ./preclear_disk.sh: line 831: [: : integer expression expected
    ./preclear_disk.sh: line 831: [: : integer expression expected
    ./preclear_disk.sh: line 831: [: : integer expression expected
    ./preclear_disk.sh: line 831: [: : integer expression expected
    ./preclear_disk.sh: line 831: [: : integer expression expected
    ./preclear_disk.sh: line 831: [: : integer expression expected
    ./preclear_disk.sh: line 831: [: : integer expression expected
    ./preclear_disk.sh: line 831: [: : integer expression expected
    ./preclear_disk.sh: line 831: [: : integer expression expected
    ./preclear_disk.sh: line 831: [: : integer expression expected
    ./preclear_disk.sh: line 831: [: : integer expression expected
    ./preclear_disk.sh: line 831: [: : integer expression expected
    ./preclear_disk.sh: line 831: [: : integer expression expected
    ** Changed attributes in files: /tmp/smart_start_sdi  /tmp/smart_finish_sdi
                    ATTRIBUTE   NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
          Raw_Read_Error_Rate =   200      ok
                 Spin_Up_Time =   182      ok
             Start_Stop_Count =    99      ok
        Reallocated_Sector_Ct =   200      ok
              Seek_Error_Rate =   200      ok
               Power_On_Hours =    83      ok
             Spin_Retry_Count =   100      ok
      Calibration_Retry_Count =   100      ok
            Power_Cycle_Count =   100      ok
      Power-Off_Retract_Count =   200      ok
             Load_Cycle_Count =   198      ok
          Temperature_Celsius =   122      ok
      Reallocated_Event_Count =   200      ok
       Current_Pending_Sector =   198      ok
        Offline_Uncorrectable =   199      ok
         UDMA_CRC_Error_Count =   200      ok
        Multi_Zone_Error_Rate =   191      ok
    No SMART attributes are FAILING_NOW
    
    919 sectors were pending re-allocation before the start of the preclear.
    
        a change of -919 in the number of sectors pending re-allocation.
    0 sectors had been re-allocated before the start of the preclear.
        a change of 0 in the number of sectors re-allocated.
    SMART overall-health status =
    root@Tower:/boot# 
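
    The `let: percent_wrote=(0 / )` line shows the failure mode: the script divides by a size variable that is still empty at that point (the dd run above it copied 0 bytes). A minimal bash sketch of the bug class and a defensive fix — the variable name here is illustrative, not the script's actual one:

    ```shell
    # Reproduce the failure class: an empty divisor makes "let x=(0 / )"
    # a syntax error, exactly as in preclear_disk.sh line 656.
    total_bytes=""

    # Defensive form: ${var:-1} substitutes 1 when the variable is empty,
    # so the arithmetic always parses (0 / 1 == 0).
    percent_wrote=$(( 0 / ${total_bytes:-1} ))
    echo "percent_wrote=$percent_wrote"
    ```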
