
Everything posted by JonathanM

  1. Yeah, it could be just coincidence, but it feels like the forum handled theming better prior to the last couple weeks. Here's another example.
  2. In the past few days I've seen more and more instances of posts that are unreadable in the light theme. My best guess is that something changed in the default font or quote handling. Here is the latest example. Screenshot in light theme. Dark.
  3. @KluthR, I have a suggestion based on the previous few posts. Could you put a copy of the corresponding template xml from config/plugins/dockerMan/templates-user into the root of the archive? That way backups could be used for full disaster recovery without digging through the appropriate flash backup.
  4. Make sure the container configuration folders are pointed at an actual physical storage location. For instance, if you save the config to /mnt/cache but you don't actually have a storage pool named "cache", then the configuration files would only exist in RAM and be lost on server reboot.
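One way to sanity-check for this situation: Unraid's root filesystem lives in RAM, so if a path turns out to be backed by rootfs/tmpfs rather than a real mount, it won't survive a reboot. A minimal sketch (the config path below is a hypothetical example, not from the original post):

```python
# Check whether a path is backed by real storage or by the RAM-backed
# root filesystem, by finding its most specific mount in /proc/mounts.
import os

def fs_type(path):
    """Return the filesystem type of the mount backing `path` (per /proc/mounts)."""
    path = os.path.realpath(path)
    best_mnt, best_type = "", ""
    with open("/proc/mounts") as f:
        for line in f:
            _dev, mnt, fstype, *_ = line.split()
            if path == mnt or path.startswith(mnt.rstrip("/") + "/"):
                if len(mnt) > len(best_mnt):   # keep the most specific mount point
                    best_mnt, best_type = mnt, fstype
    return best_type

cfg = "/mnt/cache/appdata"   # hypothetical container config location
if fs_type(cfg) in ("rootfs", "tmpfs"):
    print(f"WARNING: {cfg} lives in RAM and will be lost on reboot")
```

If the path resolves to a tmpfs/rootfs mount, nothing is actually on disk, which matches the failure mode described above.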
  5. Short answer is yes, you can click on the container in the GUI and select console, and that will open a command prompt inside the container environment. However... when the container is updated, it will revert your changes. The correct way to accomplish what you are asking is to build a different container with the parts you need. Depending on the complexity of what you are trying to do, it may be reasonable to script the changes and apply the script when needed.
  6. Probably because that's not what I was talking about. I think it may be showing up as a hard drive.
  7. mc works anywhere in the filesystem, so a simple copy operation can break Unraid or cause irretrievable data loss if you mix disk and fuse (user share) locations improperly. The Dynamix File Manager has data safety as its first priority. If you know what you are doing and know how not to shoot yourself in the foot, mc is definitely faster.
  8. Technically true, but it's convenient for me to first paste it into a password manager or other secure storage for use on other devices. Yes, you can generate a new one whenever you need it, and revoke old ones just as easily, but I find it convenient to reuse 1 password across multiple older devices and do the revoke and reissue dance only when absolutely necessary.
  9. Since we are discussing massive changes in array handling I'd like to submit a harebrained idea. Use the same address space that the preclear signature occupies, or something similar, to put a couple of kilobytes of ID data that would allow Unraid to recognize and ID drives that should participate in the classic unRAID parity array. If there is enough space, it could contain ID hashes of the rest of the drives in that set, so Unraid could easily determine which drives should be in which slots for a pool to have valid parity. That way a fresh Unraid install could prepopulate any detected unRAID pools. Maybe even be able to do other pool types this way too. It would be really nice to be able to download a fresh Unraid install and have it instantly recognize all the drives.
  10. Just a quick note to finish out the differences: parity1 is mathematically simpler, and also doesn't require the disk slot numbers to be the same to stay valid. For example, with valid parity1, you can rearrange any data drives you wish to different slot numbers, like swapping disk1 and disk4, and as long as all the disks are present and none added, parity1 remains valid. Parity2, on the other hand, is a much nastier set of equations, and one of the factors of the parity2 math is the slot number of the drive. So all disks must remain in the same slots for it to remain valid, and it requires more CPU power to calculate. That can also cause a difference in write speed, depending on available CPU power. On recent CPUs you really can't even tell the difference, but on older CPUs with limited math power it can be quite noticeable.
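The slot-number point can be shown with a toy model. This is only an illustration, not Unraid's actual P/Q math (real parity2 uses Galois-field arithmetic), but it is likewise slot-dependent:

```python
# Toy model: plain XOR parity (parity1) survives slot reshuffling,
# while a slot-weighted parity2-style checksum does not.
from functools import reduce

disks = [0b1010, 0b0110, 0b1111]              # toy "data drives" in slots 1..3

def parity1(d):
    return reduce(lambda a, b: a ^ b, d)      # order-independent XOR

def parity2_like(d):
    # the slot number enters the equation, so order matters
    return reduce(lambda a, b: a ^ b, (v * (i + 1) for i, v in enumerate(d)))

swapped = [disks[2], disks[1], disks[0]]      # swap the disks in slot 1 and slot 3

assert parity1(disks) == parity1(swapped)             # parity1 still valid
assert parity2_like(disks) != parity2_like(swapped)   # parity2-style checksum invalidated
print("parity1 unchanged; slot-weighted checksum changed")
```

Because XOR is commutative, parity1 only cares about which disks are present, not where they sit; any per-slot coefficient breaks that symmetry.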
  11. Poke around the BIOS for other boot type options, sometimes there are multiple places that reference boot, some to specify permitted devices, others to specify which one of the permitted devices to use. Look for the list of WHICH disk to boot, not just which type of devices to boot (network, cd, removable device, hard drive). Many times a USB stick will show up in the list of hard drives instead of removable devices.
  12. Try reformatting the new USB sticks with Rufus, then follow the normal install instructions.
  13. If you unassign the parity drive(s) before adding the data disk it will skip the clearing as it's not needed without parity.
  14. Bad memory would be my first guess.
  15. Please refrain from posting things that can be used to circumvent copyright. If you post XML, be sure to redact the osk before attaching it. Thanks.
  16. Please post in the proper area, this is not an Unraid bug.
  17. The correct answer to this is to set up a VM environment with all the appropriate tools and such. Unraid is NOT designed to be used as a general multipurpose Linux box; it's an appliance with limited command line tools. Only root is allowed access to the command line. I know you can force it to do things it's not designed to do, but you will be fighting an uphill battle, with each update possibly breaking your workarounds. Much better to let the Unraid OS be an appliance and host your containers, VMs and storage. Set up a VM as your daily driver.
  18. If you move all your storage drives and boot USB you won't need to change anything, provided neither machine uses hardware RAID. VMs with hardware passthrough may also need to be changed because the passed-through hardware is different.
  19. Depends what you are asking. Each individual unRAID array disk can be a different size, and a different format if desired. Those would each be single-volume ZFS. A pool would need identical size disks in it to fully take advantage of the ZFS-specific RAID functions. I'm not familiar enough with ZFS to comment on whether it's a good idea to mix sizes in a ZFS multi-disk volume, but I suspect not. ZFS is a file system, much like BTRFS is a file system. How the file system itself deals with multiple member disks is unique to the file system, not Unraid. Currently the unRAID parity array only supports a single disk per slot, with whichever file system you want on it. Pools can have multiple disks, handled by their specific file system. Clear as mud?
  20. Any traction to the concept of a share having a "new files written here" pool, an "overflow when the new file destination is full" pool (optional), and a "mover enabled yes/no" setting (with optional rules of when to invoke, and an optional third pool as the destination), instead of the cache yes/no/only/preferred setting? This would accomplish a couple of things: first, it would clarify the historically muddy yes/no/only/preferred setting; second, it would more easily support pool-to-pool moves instead of being limited to the primary/cache structure.
  21. Parity doesn't know anything about files, it's just a bucket of bits that completes an equation with all the array disks. If ANY array disk was trimmed without the affected bits being accounted for in parity it would invalidate parity, causing potential corruption if a disk was rebuilt, and parity checks subsequent to the trim event would have errors.
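The "bucket of bits" idea above can be shown with a toy two-disk model: zeroing a block on one disk (which is effectively what TRIM does) without updating parity breaks the equation at that offset. Purely illustrative, not Unraid's real on-disk layout:

```python
# Toy parity model: parity is the XOR of the same offset on every array
# disk. "Trimming" a block behind parity's back leaves an unbalanced offset.
disks = [[3, 7, 0], [5, 0, 2]]                 # two toy data disks, three offsets each
parity = [a ^ b for a, b in zip(*disks)]       # the parity "bucket of bits"

disks[0][1] = 0                                # TRIM zeroes a block without telling parity

errors = [i for i, (a, b) in enumerate(zip(*disks)) if (a ^ b) != parity[i]]
print("parity check errors at offsets:", errors)   # → parity check errors at offsets: [1]
```

A rebuild using that stale parity would reconstruct the old (pre-trim) contents at offset 1, which is exactly the corruption risk described above.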
  22. Care to share for the rest of the class? Your font choice is not readable on the light theme.