steini84

Community Developer
  • Posts: 430
  • Joined
  • Last visited
  • Days Won: 1

steini84 last won the day on March 16

steini84 had the most liked content!

3 Followers




Reputation: 125

  1. Forgot to update the reference to the new mbuffer package, so the install failed on new installs (updates were fine). Fixed now. Sent from my iPhone using Tapatalk
  2. Updated to 2.2 and upgraded the Perl and mbuffer packages. Sent from my iPhone using Tapatalk
  3. It's probably easy to add the option to use the current GitHub build, but I'm tracking the releases. I will take a look at it. Sent from my iPhone using Tapatalk
  4. You are correct, don't know what I was doing earlier. Thanks!
  5. It seemed to only create a folder, not a dataset.
  6. Congratulations on integrating ZFS! *** MY BAD *** Creating a single-disk share actually makes a dataset ***** I would like to create ZFS-only shares on the array and have a separate dataset for each share to enable replication and snapshots. While going through the release notes, I came across the following: "Top-level user shares in a ZFS pool are created as datasets instead of ordinary directories." However, I haven't found an official way to achieve this for a single ZFS drive within the array through the settings. Manually it is easy to accomplish, but I wanted to think aloud and see if anyone has insights into potential issues with my strategy in unRAID, or if there are alternative approaches that align better with the "out of the box" unRAID way. In my understanding this approach should work, but I am unsure whether I might unintentionally disrupt any underlying mechanisms in unRAID by creating the datasets manually. Here's what I have done so far: (1) converted disk 15 to ZFS, (2) manually created disk15/Nextcloud, (3) set the share's included disk(s) to disk 15 only, and (4) migrated my existing data to disk 15 (a command-line sketch of these steps follows after this list). So far so good, but please let me know if you have any suggestions on a better strategy and/or any potential concerns with my current setup. PS: I guess Exclusive access is not relevant here since the folder is on the array, not a ZFS pool.
  7. I just bit the bullet and made the changes everywhere. It took 45 minutes, but since ZFS is now native in unRAID I just want to use the defaults and don't want to break future updates. But since I don't have a 360-degree overview, I cannot see why the pool names have to be lowercase.
  8. I wrote some points on why and how I use ZFS ->
  9. This is great news and really exciting. This opens up a completely new dimension for Unraid and I cannot wait to play more with this.
  10. This plugin is now deprecated, but for all the right reasons: https://unraid.net/blog/6-12-0-rc1 Congrats to the team on this big milestone and I'm excited to play with native ZFS on my Unraid setup!
  11. You are absolutely correct. He took my manual build process and automated it so well that I have not had to think about it at all anymore! He really took this plugin to another level, and now we just wait for the next Unraid release so we can deprecate it.
  12. Just to put it out there: I have personally moved to sanoid/syncoid (a minimal config sketch follows after this list). Sent from my iPhone using Tapatalk
  13. You can get away with a one-liner in the go file / User Scripts: wget -P /usr/local/sbin/ "https://raw.githubusercontent.com/jimsalterjrs/ioztat/main/ioztat" && chmod +x /usr/local/sbin/ioztat But packaging this up as a plugin should be a fun project.
  14. I want to keep the zfs package as vanilla as possible. It would be a great fit for a plugin. Sent from my iPhone using Tapatalk
  15. This is just an unfortunate coincidence; I would guess it's a cabling issue or a dying drive. Have you run a SMART test? (Example commands follow after this list.)
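
For item 6 above, a minimal command-line sketch of the manual steps, assuming Unraid exposes the ZFS-formatted array disk as a pool named disk15 mounted at /mnt/disk15 and using the Nextcloud dataset name from that post; adjust both to your setup:

  zpool list disk15                # confirm the single-disk pool exists
  zfs create disk15/Nextcloud      # create a top-level dataset to back the share
  zfs list -r disk15               # verify it shows up as a dataset, not a plain folder
  ls -ld /mnt/disk15/Nextcloud     # the share root should now be the dataset's mountpoint

Setting the share's included disk(s) and migrating the existing data are done through the normal Unraid share settings and your preferred copy tool, so they are not shown here.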
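
For item 12, a minimal sanoid/syncoid sketch. The dataset names, backup host, and retention numbers are placeholders for illustration, not values taken from the post:

  # /etc/sanoid/sanoid.conf
  [tank/appdata]
          use_template = production
          recursive = yes

  [template_production]
          hourly = 24
          daily = 30
          monthly = 3
          autosnap = yes
          autoprune = yes

  # replicate the snapshots to another machine
  syncoid -r tank/appdata root@backupbox:backup/appdata

sanoid itself is run periodically (for example from cron or a User Scripts schedule) to take and prune snapshots according to the template, while syncoid handles the send/receive replication.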
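
For item 15, the kind of SMART test being suggested, with /dev/sdX standing in for the drive that is showing errors:

  smartctl -t long /dev/sdX    # start an extended self-test (runs inside the drive)
  smartctl -a /dev/sdX         # afterwards, review the attributes and self-test log

On Unraid the same attributes and self-tests are also available from the drive's page in the web UI.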