-Daedalus

Comments posted by -Daedalus

  1. Sounds like a good move.

     

    Maybe a partner topic - something like "6.11 series feedback" - not pointing out any specific bug, but a way for people to give feedback or change requests for new features as they're implemented: "Maybe this button should be orange", "What about moving that thing below this thing".

     

    Maybe useful, maybe more effort than it's worth, just food for thought.

  2. 3 hours ago, Squid said:

    Is the backup share a "new" share (ie: did you make it very recently?)  Try stopping and starting the array to see if that makes a difference...

     

    It was one of the first I created years ago, but the disk changes are recent. I'm pretty sure the array has been restarted since doing that, but I'll give it a go once there's not much happening.

  3. Yup. I double-checked this. At first I thought that was what was causing the issue, because originally disks 1 and 2 were designated for backup, but I recently upgraded some disks, so I only need disk2 now. I removed the backup folder on disk1, and the next time my VM backup script ran, it just created a dir on disk1.

     

    I'm wondering if this has something to do with pools as well, because something similar - but not the same - happened when I was moving some other files around.

     

    I have an 'nvme' pool, and an 'ssd' pool.

    I was moving files on an nvme-only share to the backup share.

    The backup share uses the 'ssd' pool as its cache.

    When moving from /mnt/nvme to /mnt/user/backup, a 'backup' directory was created on 'nvme', rather than the files going to the existing 'backup' directory on 'ssd'.

     

    I'm guessing this has something to do with pool/cache logic. If I remember right, it's set up in such a way that a given share can only have one cache pool associated with it, and there isn't really any logic for moving stuff between cache pools.

    It's something like: if the share is cache-enabled, and the files are already on *any* cache pool, then that must be the pool assigned to that share, so I'd better create the dir there if it doesn't exist (rough sketch of what I mean at the end of this post).

     

     

    I'm not sure if these two things are connected in any way, but I figured I'd mention it.
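
    To illustrate the guess, here's a rough sketch in Python of the behaviour I think I'm seeing - not Unraid's actual code, and the 'nvme'/'ssd' pool names are just from my setup:

        # Hypothetical sketch of the guessed behaviour, not Unraid's real logic:
        # whatever pool the source files already sit on gets treated as the
        # share's cache, and the share dir is created there if it's missing,
        # even though the share's configured pool ('ssd' here) already has one.
        import os
        import shutil

        def move_into_share(src_path: str, share: str) -> None:
            src_pool = src_path.split("/")[2]          # '/mnt/nvme/...' -> 'nvme'
            dest_dir = f"/mnt/{src_pool}/{share}"      # ends up on 'nvme', not 'ssd'
            os.makedirs(dest_dir, exist_ok=True)
            shutil.move(src_path, dest_dir)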

     

  4. I'm not running 6.10, but the post above should illustrate why iowait shouldn't be included in the system load on the dashboard. Everyone equates those bars with CPU load, because that's how they're portrayed. 

     

    If it has to be there, maybe break it out into "Disk I/O" or something instead. 

  5. Just to be clear here: Pinned != isolated. They're different things.

     

    Pinned just means a CPU core that a VM can use, but anything else can also use that core if it's free. This is done by pinning the CPU core in the GUI.

    Isolating a CPU core means unRAID doesn't touch it. This is done by appending isolcpus=x,y,z to your syslinux config under Main > Flash.

     

    If you want to fully isolate the cores so that only the VM uses them, then you'll need to change your syslinux config from this:

    [screenshot: default syslinux config]

     

    To this (for example):

    [screenshot: syslinux config with isolcpus added to the append line]
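
    In case the screenshots don't come through, a minimal sketch of what that change looks like, assuming cores 4-7 are the ones pinned to the VM and your boot label matches the stock one; the only addition is the isolcpus entry on the append line:

        label Unraid OS
          menu default
          kernel /bzimage
          append isolcpus=4,5,6,7 initrd=/bzroot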

     

  6. 3 hours ago, trurl said:

    Unraid doesn't install to the flash. The flash is just the archives of the install. It installs into RAM.

    This is why I don't comment much; I manage to completely miss the obvious most of the time.

    The reasoning for not including drivers off the bat makes complete sense now; carry on.

  7. 7 hours ago, limetech said:

    The problem is, this driver alone along with support tools adds 110MB to the download and expands close to 400MB of root file system usage.  Only a fraction of Unraid users require this, why make everyone bear this cost?  Same situation for all the other drivers required for the "DVB" builds.

    Without going near any of the other stuff, I'm curious to hear your thoughts on this one:

     

    Why the concern over install size? Most people are installing unRAID on 16-32GB sticks these days. Does it matter much if the install size is 400 vs 100MB? 

     

    I can absolutely understand the efficiency standpoint; it's a lot of space for something very niche. I'm just not sure what the downside is. The only one I can really think of is longer backup times for the flash drive, but that seems very minor. Is there something I'm missing here?

  8. I don't have an answer to this, but try doing the same things with top/htop open. If you still see high CPU usage there, then there might be something up. If you don't, it's because the dashboard CPU takes iowait into account, which can spike if the CPU is waiting on disks.

     

    Obligatory feature request for iowait to be removed from dashboard CPU usage. This isn't the first time it's caused confusion. It really shouldn't be there. If it is going to be there, don't call it "CPU".
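
    For anyone who wants to check for themselves: top already breaks this out as the "wa" figure in its CPU line. A rough Python sketch of pulling the same number straight from /proc/stat (aggregate counters, iowait is the fifth value after the 'cpu' label):

        # Sample /proc/stat twice and report what share of the interval was iowait.
        import time

        def cpu_counters():
            with open("/proc/stat") as f:
                return [int(x) for x in f.readline().split()[1:]]

        first = cpu_counters()
        time.sleep(1)
        second = cpu_counters()
        delta = [b - a for a, b in zip(first, second)]
        print(f"iowait: {100 * delta[4] / sum(delta):.1f}% of the last second")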

  9. Some controllers and firmware don't support trim properly.

    One of my 9207-8is doesn't, for example, because it's on P20 firmware. Apparently anything after P16 broke it. There was a thread on here a month or so ago about it.

    Also my board has an ASMedia controller for 2 of its ports, and they don't do trim either.

  10. 3 hours ago, Squid said:

    If you use the "folder" method as described in the OP, then yes

    Before this release, the docker.img was always BTRFS, regardless if the drive it sat on was BTRFS, XFS, or ReiserFS. [ snip ] The main reason for these changes however is to lessen the excess writes to the cache drive.  The new way of mounting the image should give lesser amount of writes.  The absolute least amount of writes however will come via the folder method.  But, the GUI doesn't natively support it yet without the change itemized in the OP.

    Thank you! I figured it was mostly to address the excess writes (I moved back to a single XFS drive from a pool because of this); I just wasn't sure if there were any other effects as well.

  11. Can someone ELI5 the Docker changes in this version?

     

    How does the docker image get formatted with its own filesystem? Wouldn't it inherit whatever filesystem the drive it's living on uses?

    What sort of differences/impact might we expect from the bind mount vs loopback?

     

    Just curious.