citizen_y

Posts posted by citizen_y

  1. Just a quick update: I really struggled to solve this, but I read some other posts from people having issues with a slow-performing zfs drive in the array, so I decided to try migrating my one zfs drive to xfs. That turned out to be a struggle in itself; even copying the content from the zfs drive to another xfs-formatted drive, the best performance I could get was 2-3 MB/s. I finally buckled down, copied only the irreplaceable content, and blew away the rest when I erased that drive and reformatted it under xfs. After doing so, I ran another speed check and, voila, the drive was now showing speeds in line with the other drives (4-5x what it had been under zfs).
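
    For anyone attempting the same migration, a disk-to-disk copy like this can be done with an rsync along these lines (a sketch only - the disk numbers are illustrative, with disk4 standing in for the zfs drive and disk5 for a spare xfs drive):

    # copy everything from the zfs disk to the xfs disk, preserving permissions and timestamps
    rsync -avh --progress /mnt/disk4/ /mnt/disk5/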

     

    After that, I started another parity build. It's not done yet, but it has been maintaining speeds of 170-260 MB/s (only really slowing as it gets to the later portions of any given drive in the array). I'll report back when it's complete to confirm this was the fix - but right now it's looking like another case of a zfs drive in an array causing exceedingly slow performance.

     

     

  2. 10 minutes ago, JorgeB said:

    There may be something else reading from disk4, you can also run the diskspeed docker to confirm all disks are performing well, though it will only test read speed.

     

    Thanks much - I tried exploring that, but the File Activity plugin wasn't showing any disk activity. Also, when I look at the read/write speed per disk, it looks like the parity writes are equivalent to the reads on all of the other disks, except for disk 4, which is showing a slightly higher read speed.
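
    (For the per-disk numbers I can also watch raw throughput from the console; a minimal sketch, assuming iostat from sysstat is available on the box:)

    # extended per-device stats in MB, refreshed every 5 seconds while the rebuild runs
    iostat -x -m 5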

     

    I ran the diskspeed test docker prior to starting the parity build and did find disk 4 operating at much lower performance than the other drives, but nothing even close to this slow. Do you think it's prudent to run the diskspeed test again while the parity rebuild is still running?

     

    One other item of note: the CPU load on the system seems quite high given that I don't have Docker or VM services running - anything I should look at there that may be related to the slow rebuild speed?
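
    (To see what is actually using the CPU while the rebuild runs, I've been taking snapshots from the console; a minimal sketch using stock tools:)

    # one batch-mode snapshot of the busiest processes (top sorts by CPU by default)
    top -b -n 1 | head -n 20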

     

    (screenshots attached)

  3. Hello,

     

    I initiated a parity rebuild due to the replacement of a failing drive. The rebuild started relatively quickly but eroded to ~30-40 MB/s after the first day. Unfortunately, for the last two days the rebuild has been averaging around 2-3 MB/s. I'm not seeing any driver-related errors in the logs. I have shut down all VM and Docker services and have disabled all shares, but I am still seeing this exceptionally slow speed.

     

    I previously ran a drive speed check and did see that disk4 (the zfs disk I have in the array) had an average speed of <20% of the speed of the rest of the drives. I assume this is part of the issue, and I want to test out migrating this disk away from zfs and potentially removing it from the server entirely, but I need to finish this parity rebuild first.
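
    (If it helps to reproduce that gap outside the DiskSpeed docker, a raw sequential read test can be run from the console; a minimal sketch - sdX is a placeholder for the actual device, and this only measures raw reads, not filesystem performance:)

    # buffered and cached read timings for a single drive
    hdparm -tT /dev/sdX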

     

    What else should I be considering to try to get this parity rebuild back to a reasonable (or at least viable) speed?

    citizenur-diagnostics-20240516-2334.zip

     

  4. 13 hours ago, JorgeB said:

    You run it on the emulated disk.

     

    Thanks for this - when I went to run it without the "-n" option, I got the following error. Recommendations on how to proceed?

     

    Quote

    Phase 1 - find and verify superblock...
    Phase 2 - using internal log
            - zero log...
    ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.
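
    Writing out my reading of that message as commands, in case it helps (a sketch only, not something I have run yet - md1 would be the emulated disk 1 device in maintenance mode, with a p1 suffix on newer releases):

    # mount the emulated disk so the log gets replayed, then unmount it
    mkdir -p /tmp/disk1
    mount /dev/md1p1 /tmp/disk1
    umount /tmp/disk1
    # re-run the repair without -n
    xfs_repair -v /dev/md1p1
    # only if the mount itself fails, fall back to destroying the log:
    # xfs_repair -L /dev/md1p1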

     

  5.  

    3 hours ago, JorgeB said:

    Check filesystem on disk1, run without -n.

     

    Hey, thanks for replying. Right now I don't have any disk assigned as disk 1 (that was the disk that was failing). I have a disk currently in preclear that I intend to assign as disk 1 and rebuild from parity once the preclear is completed. The array is currently stopped. Given the current situation, is it viable for me to run a filesystem check on disk 1? If so, is the right step to bring the array online in maintenance mode with the disk 1 slot empty and run the check against the emulated disk 1? I just want to make sure I'm clear on the steps so I can avoid losing the data if possible.

     

  6. Hello,

     

    I had a drive that looked like it was failing, so I shut down my server and replaced it. When I booted back up, I started a preclear on the new disk, but I needed access to the array for a short time, so I started the array with the replaced disk missing and emulated. While the array was starting I began to see errors like the following in my system log: "kernel: ata5: COMRESET failed (errno=-16) unraid". When the array started, none of the shares were listed, so I stopped the array and rebooted. When the system came back up, the array started automatically with the missing disk still missing, but it was not being emulated. I did not see any errors in the log after the reboot, but I immediately shut down the array since things did not look right. Have I now lost everything that was on the original disk, or will it be recovered from parity when I add the new precleared disk to the array? Are there any steps I should take to make the data from the failing disk recoverable from parity?

    citizenur-diagnostics-20240509-2117.zip

  7. On 2/28/2024 at 8:16 PM, razorline said:

    Getting an odd issue. Just installed it and set up the file permissions. This is what I get.

    Can't launch the WebUI. 

     

    Starting web ui
    [2024-02-28 17:08:16,889] DEBUG ARM: __init__.<module> Debugging pin: 12345
    [2024-02-28 17:08:16,973] DEBUG ARM: utils.arm_alembic_get Alembic Head is: 469d88477c13
    [2024-02-28 17:08:16,973] DEBUG ARM: utils.arm_db_check Database file is not present: /home/arm/db/arm.db
    [2024-02-28 17:08:16,973] DEBUG ARM: utils.arm_db_cfg No armui cfg setup
    [2024-02-28 17:08:16,974] INFO ARM: utils.check_db_version No database found.  Initializing arm.db...
    Traceback (most recent call last):
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/base.py", line 145, in __init__
        self._dbapi_connection = engine.raw_connection()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/base.py", line 3288, in raw_connection
        return self.pool.connect()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 452, in connect
        return _ConnectionFairy._checkout(self)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 1267, in _checkout
        fairy = _ConnectionRecord.checkout(pool)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 716, in checkout
        rec = pool._do_get()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/impl.py", line 284, in _do_get
        return self._create_connection()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 393, in _create_connection
        return _ConnectionRecord(self)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 678, in __init__
        self.__connect()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 903, in __connect
        pool.logger.debug("Error on connect(): %s", e)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/util/langhelpers.py", line 147, in __exit__
        raise exc_value.with_traceback(exc_tb)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 898, in __connect
        self.dbapi_connection = connection = pool._invoke_creator(self)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/create.py", line 637, in connect
        return dialect.connect(*cargs, **cparams)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/default.py", line 615, in connect
        return self.loaded_dbapi.connect(*cargs, **cparams)
    sqlite3.OperationalError: unable to open database file
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "/opt/arm/arm/runui.py", line 8, in <module>
        import arm.config.config as cfg  # noqa E402
      File "/opt/arm/arm/../arm/__init__.py", line 2, in <module>
        import arm.ripper
      File "/opt/arm/arm/../arm/ripper/__init__.py", line 3, in <module>
        from arm.ripper import logger, utils, makemkv, handbrake, identify, ARMInfo # noqa F401
      File "/opt/arm/arm/../arm/ripper/utils.py", line 24, in <module>
        from arm.ui import db  # needs to be imported before models
      File "/opt/arm/arm/../arm/ui/__init__.py", line 66, in <module>
        from arm.ui.database.database import route_database  # noqa: E402,F811
      File "/opt/arm/arm/../arm/ui/database/database.py", line 28, in <module>
        armui_cfg = ui_utils.arm_db_cfg()
      File "/opt/arm/arm/../arm/ui/utils.py", line 206, in arm_db_cfg
        check_db_version(cfg.arm_config['INSTALLPATH'], cfg.arm_config['DBFILE'])
      File "/opt/arm/arm/../arm/ui/utils.py", line 88, in check_db_version
        flask_migrate.upgrade(mig_dir)
      File "/usr/local/lib/python3.8/dist-packages/flask_migrate/__init__.py", line 111, in wrapped
        f(*args, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/flask_migrate/__init__.py", line 200, in upgrade
        command.upgrade(config, revision, sql=sql, tag=tag)
      File "/usr/local/lib/python3.8/dist-packages/alembic/command.py", line 399, in upgrade
        script.run_env()
      File "/usr/local/lib/python3.8/dist-packages/alembic/script/base.py", line 578, in run_env
        util.load_python_file(self.dir, "env.py")
      File "/usr/local/lib/python3.8/dist-packages/alembic/util/pyfiles.py", line 93, in load_python_file
        module = load_module_py(module_id, path)
      File "/usr/local/lib/python3.8/dist-packages/alembic/util/pyfiles.py", line 109, in load_module_py
        spec.loader.exec_module(module)  # type: ignore
      File "/opt/arm/arm/migrations/env.py", line 89, in <module>
        run_migrations_online()
      File "/opt/arm/arm/migrations/env.py", line 73, in run_migrations_online
        connection = engine.connect()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/base.py", line 3264, in connect
        return self._connection_cls(self)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/base.py", line 147, in __init__
        Connection._handle_dbapi_exception_noconnection(
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/base.py", line 2426, in _handle_dbapi_exception_noconnection
        raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/base.py", line 145, in __init__
        self._dbapi_connection = engine.raw_connection()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/base.py", line 3288, in raw_connection
        return self.pool.connect()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 452, in connect
        return _ConnectionFairy._checkout(self)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 1267, in _checkout
        fairy = _ConnectionRecord.checkout(pool)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 716, in checkout
        rec = pool._do_get()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/impl.py", line 284, in _do_get
        return self._create_connection()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 393, in _create_connection
        return _ConnectionRecord(self)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 678, in __init__
        self.__connect()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 903, in __connect
        pool.logger.debug("Error on connect(): %s", e)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/util/langhelpers.py", line 147, in __exit__
        raise exc_value.with_traceback(exc_tb)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 898, in __connect
        self.dbapi_connection = connection = pool._invoke_creator(self)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/create.py", line 637, in connect
        return dialect.connect(*cargs, **cparams)
    ... (4 lines left)

     

    Did you ever figure this out? I'm having the exact same issue.
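
    (In case it helps anyone narrow it down: the traceback dies while creating /home/arm/db/arm.db, so my working assumption is a permissions or path-mapping problem on whatever host folder is mapped to /home/arm/db. A minimal check, assuming the container is named automatic-ripping-machine - substitute your own container name:)

    # confirm the mapped db directory exists inside the container and is writable
    docker exec automatic-ripping-machine ls -ld /home/arm/db
    docker exec automatic-ripping-machine touch /home/arm/db/.write_test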

  8. Hello,

     

    I am currently using Unraid Version 6.12.9.

     

    Whenever my Unraid server boots, I'm getting repeated errors from nginx failing to bind to ports 80 and 443 due to error "98: Address already in use". I have pasted the logs below (though I have removed my internal network IPs and hostname).

     

    Is this expected behavior? If not, I'm fairly confident the fix is something straightforward and obvious that I am just missing - but I would appreciate any help.

     

    Thanks

     

    Apr  1 16:53:29 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to 127.0.0.1:80 failed (98: Address already in use)
    Apr  1 16:53:29 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to [MY_INTERNAL_IP_ADDRESS_1]:80 failed (98: Address already in use)
    Apr  1 16:53:29 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to [MY_INTERNAL_IP_ADDRESS_2]:80 failed (98: Address already in use)
    Apr  1 16:53:29 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to 127.0.0.1:443 failed (98: Address already in use)
    Apr  1 16:53:29 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to [MY_INTERNAL_IP_ADDRESS_1]:443 failed (98: Address already in use)
    Apr  1 16:53:29 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to [MY_INTERNAL_IP_ADDRESS_2]:443 failed (98: Address already in use)
    Apr  1 16:53:30 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to 127.0.0.1:80 failed (98: Address already in use)
    Apr  1 16:53:30 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to [MY_INTERNAL_IP_ADDRESS_1]:80 failed (98: Address already in use)
    Apr  1 16:53:30 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to [MY_INTERNAL_IP_ADDRESS_2]:80 failed (98: Address already in use)
    Apr  1 16:53:30 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to 127.0.0.1:443 failed (98: Address already in use)
    Apr  1 16:53:30 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to [MY_INTERNAL_IP_ADDRESS_1]:443 failed (98: Address already in use)
    Apr  1 16:53:30 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to [MY_INTERNAL_IP_ADDRESS_2]:443 failed (98: Address already in use)
    Apr  1 16:53:30 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to 127.0.0.1:80 failed (98: Address already in use)
    Apr  1 16:53:30 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to [MY_INTERNAL_IP_ADDRESS_1]:80 failed (98: Address already in use)
    Apr  1 16:53:30 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to [MY_INTERNAL_IP_ADDRESS_2]:80 failed (98: Address already in use)
    Apr  1 16:53:30 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to 127.0.0.1:443 failed (98: Address already in use)
    Apr  1 16:53:30 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to [MY_INTERNAL_IP_ADDRESS_1]:443 failed (98: Address already in use)
    Apr  1 16:53:30 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to [MY_INTERNAL_IP_ADDRESS_2]:443 failed (98: Address already in use)
    Apr  1 16:53:31 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to 127.0.0.1:80 failed (98: Address already in use)
    Apr  1 16:53:31 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to [MY_INTERNAL_IP_ADDRESS_1]:80 failed (98: Address already in use)
    Apr  1 16:53:31 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to [MY_INTERNAL_IP_ADDRESS_2]:80 failed (98: Address already in use)
    Apr  1 16:53:31 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to 127.0.0.1:443 failed (98: Address already in use)
    Apr  1 16:53:31 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to [MY_INTERNAL_IP_ADDRESS_1]:443 failed (98: Address already in use)
    Apr  1 16:53:31 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to [MY_INTERNAL_IP_ADDRESS_2]:443 failed (98: Address already in use)
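
    (For what it's worth, the check I plan to run right after boot to see what is already bound to those ports - a sketch, assuming ss from iproute2 is present; netstat or lsof would work as well if installed:)

    # list listening sockets on 80/443 along with the owning process
    ss -ltnp | grep -E ':(80|443) '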

     

     
