limetech

Administrators

  • Content Count: 9504
  • Joined
  • Last visited
  • Days Won: 141

limetech last won the day on July 14

limetech had the most liked content!

Community Reputation: 1745 (Hero)

About limetech

  • Rank: Advanced Member

Converted

  • Gender: Undisclosed

  1. We're looking at this, thank you for the report.
  2. Thanks for the report, fixed in next release.
  3. I think rsync will create a destination file with the same name as the source file but with a '.' prepended to make it "hidden". Then when the transfer completes it performs a 'rename', deleting the previous copy at the destination and replacing it with the new one. I think if you use the '--inplace' option it doesn't do this; instead it opens the destination for writing, which replaces the old file's contents, and the new data is written directly to it. (A short sketch follows below.)
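     For illustration, a minimal sketch of both behaviors; the paths here are hypothetical:

        # Default: rsync writes to a hidden temp file (e.g. '.file.XXXXXX') in the
        # destination directory, then renames it over the previous copy.
        rsync -a /mnt/user/src/file /boot/backup/

        # With --inplace, rsync writes directly into the existing destination file
        # instead of creating a temp file and renaming it.
        rsync -a --inplace /mnt/user/src/file /boot/backup/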
  4. You could also create backup folders on the usb flash device and 'mv' files there; then new files will write to new blocks, though eventually the flash will fill up. For example, say you're going to update the OS. The update process already 'mv's (moves) the current bz* files to the 'prev' folder, deleting the old files there and making their blocks available for re-use. You could first rename 'prev' to, say, 'prev-20200727'. Now Update OS will create a new 'prev' folder and 'mv' the bz* files there - noting that 'mv' does not actually move the data, it just changes a few pointers to place those files in the target directory. This way over time you end up writing further and further into the device (see the sketch below). You could do a similar thing with the 'config' directory. But in normal use the usb flash is barely being used. We have usb flash devices in test servers that have been hammered for several years now and still work fine.
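     A minimal sketch of that rotation, assuming the flash is mounted at /boot and using an example date suffix:

        cd /boot
        # Keep the previous OS files instead of letting them be overwritten later,
        # so their blocks stay allocated and new writes land on fresh blocks.
        mv prev prev-20200727
        # Update OS then recreates 'prev' and moves the current bz* files into it;
        # 'mv' only updates directory entries, it does not rewrite the data blocks.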
  5. Here's the problem. As soon as we publish a release with Nvidia/AMD GPU drivers installed, any existing VM which uses GPU pass-through of an Nvidia or AMD GPU may stop working. Users must use the new functionality of the Tools/System Devices page to select GPU devices to "hide" from the Linux kernel upon boot - this prevents the kernel from installing the driver(s) and initializing the card. Since there are far more people passing through GPUs than using a GPU for transcoding in a Docker container, we thought it would be polite to give those people an opportunity to prepare first in the 6.9 release, and then we would add the GPU drivers to the 6.10 release. We can make 6.10 a "mini release" which has just the GPU drivers. Anyway, this is our current plan. Look about 10 posts up.
  6. Thank you for the report, fixed in next release.
  7. AFP has been dropped. How can it possibly work better than SMB? Are you using really old macOS?
  8. The next iteration of 'multiple pools' is to generalize the unRAID array so that you can have multiple "unRAID array pools". Along with this, introduce the concept of a primary pool and a cache pool for a share. Then you could make different combinations, e.g., a btrfs primary pool with an xfs single-device cache. To have 'mover' move stuff around you would reconfigure the primary/cache settings for a share. This work will not get done for the 6.9 release, however.
  9. Better yet, from Main click on Flash and then Flash Backup to download the entire contents of the usb flash boot device.
  10. Your key was sent to your hotmail account several hours ago.
  11. We have a solution for this in the works ...
  12. That is the exact config I was trying to get right, because in my development workstation I have a 3-device btrfs raid-1. What you get from statvfs() is:
        blocks - total blocks
        free - unused blocks
        avail - blocks available to be assigned
     Normally free == avail, but in the case of btrfs, avail takes the raid organization into account. We only care about size and free, where size is the number of blocks available to hold user data and free is how much of that total is still available. For the next release I'm doing this:
        size = total - free + avail
        free = avail
     Using this I think everything's correct except for an odd number of devices in a raid-1, which is unfortunate. (A sketch of the calculation follows below.)
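     A minimal sketch of that calculation, assuming GNU coreutils 'stat' and a hypothetical btrfs mount at /mnt/cache:

        # statvfs-style filesystem values: %b = total blocks, %f = free blocks,
        # %a = available blocks, %S = fundamental block size
        read -r total bfree bavail bsize <<< "$(stat -f -c '%b %f %a %S' /mnt/cache)"

        # Reported size and free, per the formulas above
        size=$(( (total - bfree + bavail) * bsize ))
        free=$(( bavail * bsize ))
        echo "size=$size bytes, free=$free bytes"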
  13. But that's negligible given the absolute amount of data written. A loopback is always going to incur more overhead because there is the overhead of the file system within the loopback and then the overhead of the file system hosting the loopback. In most cases the benefit of the loopback far outweighs the extra overhead.