Posts posted by Zimeic

  1. I have observed some weird behaviour since roughly a week ago:

    A few minutes after midnight, download and upload speeds decrease dramatically to no more than ~60 kbps (regular speed is ~10 Mbps symmetrical). The exact point in time can vary by a few minutes, but it's always between 00:00 and 00:15. A container restart temporarily fixes this until the next midnight.

     

    The supervisord log shows nothing interesting as far as I can tell, but I discovered that, according to the logs, I had the same public IP address before and after restarting the container. However, executing curl https://ifconfig.io in the container's console shows a different IP address. Aren't these addresses supposed to be the same, and could this indicate an IP leak of some kind? Neither address is my home IP, but still...
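
    For reference, here's roughly how I compared the two addresses from the host side (the container name "qbittorrent" is just a placeholder for whatever yours is called):

      # public IP as seen from inside the container's network namespace
      docker exec qbittorrent curl -s https://ifconfig.io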

    docker.txt supervisord.txt

  2. Hi folks,

    I'm not sure if this is an issue with qBit or Proton, but I have to start somewhere, so here it is:

    When I start the container, everything runs smoothly for hours until, at some point, port forwarding stops working correctly. qBit recognizes that the port is closed and marks it for reconfiguration, but then just keeps going and doesn't actually resolve the issue.

     

    Restarting the container temporarily fixes this, but that's obviously not a long-term solution.

     

    As per your GitHub instructions, qbit_docker contains the docker command used to run the container and qbit_debug is the supervisord log. I'm afraid the debug log is very long; since the issue only appears after hours, I didn't want to cut anything out in case it's important. The first occurrence of the port closing appears to happen around 20:10 (line 44530).
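
    In case it helps, this is the rough check I run when the port drops (it assumes the WebUI is on localhost:8080 with local auth bypass enabled, and that Proton's NAT-PMP gateway is the usual 10.2.0.1; treat both as assumptions about my setup):

      # qBittorrent's currently configured listening port, via the Web API
      curl -s http://localhost:8080/api/v2/app/preferences | grep -o '"listen_port":[0-9]*'

      # query/renew the forwarded port from the VPN gateway via NAT-PMP
      natpmpc -a 1 0 tcp 60 -g 10.2.0.1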

    qbit_docker.txt qbit_debug.log

  3. Alright, I think I figured out how to implement my wonky setup 😉

    Quote

    Drives in the classic Unraid array can't be trimmed

    Just out of curiosity: I saw a TRIM section in the scheduler options, is that something different?
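
    For context, the only manual equivalent I know is fstrim on a mounted filesystem, so I'd guess the scheduler entry automates something like this (the mount point is just an example):

      # discard unused blocks on a mounted SSD filesystem, verbosely
      fstrim -v /mnt/cache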

  4. Quote

    I'm unclear what you are asking. If a pool is a single disk it can be formatted XFS, multi drive pools can use either BTRFS or ZFS.

    I was asking if I'm forced to run a multi-drive pool in a RAID of some sort like ZFS instead of XFS.

     

    I think I found an answer to my question after more research, though: up until now I was under the impression that I had to group drives into some sort of group/pool/array/whatever before pointing a share at that group in order to utilize all the drives in it. But as it turns out, I can specify the individual drives each share should use, so I'll just chuck all the HDDs/SSDs into the array and create the 'groups' that way.

  5. Quote

    Pools are single or groups of disks that can utilize BTRFS or ZFS RAID levels.

    Can as in optional? So I could have two pools (with the aforementioned 3x HDD and 3x SSD), each utilizing the classic individual Unraid strategy? That would probably work for what I need. I could just set up a Docker or cron job that syncs from the SSD pool to the HDD pool, and for the 'classic' pool uses I'd have a third cache pool.
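
    Something like this is what I have in mind for the sync job, scheduled via cron or the User Scripts plugin (pool names and paths are made up):

      # one-way nightly sync from the SSD pool to the HDD pool
      rsync -a --delete /mnt/ssdpool/share/ /mnt/hddpool/share/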

  6. OK, the next option, I guess, would be to have the cached shares on a separate SSD-only array and manually sync changes made there to an HDD array share/folder. Just to be clear on the terminology: if I have 3x HDDs and 3x SSDs that should each be in a 'group', would that be two arrays with three devices each, or one array with two pools containing three devices each?

  7. Hi there!
    From what I understand, the Unraid mover can move data from the primary storage (SSD cache) to the secondary storage (HDD array) and in reverse.

     

    Is it possible to keep predefined files/folders/shares on the cache as well as on the array? My goal is to have the data on the SSD cache for reads and writes, so that I don't have to spin up any disks when they happen. At some scheduled point, all changes made to the data on the cache would be synced to the array in one fell swoop.

     

    Ideally, this functionality would exist in parallel to the regular mover/cache functionality (a write cache for the array that gets moved off the cache regularly, and living space for VMs and their data).
