ezhik

Members
  • Content Count: 411
  • Joined
  • Last visited
  • Days Won: 1

ezhik last won the day on November 20 2020

ezhik had the most liked content!

Community Reputation: 109 (Very Good)

1 Follower

About ezhik

  • Rank
    Member



  1. Can confirm, I've run into the same issue. Drives that are not assigned to an array - for example, drives used as passthrough drives for a VM - send out notifications for surpassing the default threshold of 45C, regardless of the values you set.
  2. When it comes to software development, ideally the version structure would be: 6.9.1 -> 6 = major, 9 = minor, 1 = patch. With unRAID we have seen that minor releases are treated as major and contain a lot of new features.
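As an illustrative aside (not part of the original post), the major.minor.patch split described above can be sketched with plain POSIX parameter expansion; the version string and variable names here are made up for the example:

```shell
# Split a semantic version string into its three components.
version="6.9.1"
major=${version%%.*}   # strip everything after the first dot -> "6"
rest=${version#*.}     # strip up to the first dot -> "9.1"
minor=${rest%%.*}      # -> "9"
patch=${rest#*.}       # -> "1"
echo "major=$major minor=$minor patch=$patch"
# prints: major=6 minor=9 patch=1
```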
  3. Right, the OP did. Apologies there, got mixed up.
  4. ezhik

    Soon™️

    The good old base64.
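On "the good old base64" - a minimal encode/decode round-trip with the coreutils base64 tool, using a made-up example string (this is an illustration, not the content of the original post):

```shell
# Encode a string, then decode it back.
encoded=$(printf 'Hello, world!' | base64)
echo "$encoded"                      # prints: SGVsbG8sIHdvcmxkIQ==
printf '%s' "$encoded" | base64 -d   # prints: Hello, world!
```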
  5. Betas are not releases though; they are considered pre-release and are mainly used for testing purposes. When you say release, you mean the stable version, and that takes on average 12+ months. Mind you, major releases require major resources.
  6. 6.10 would be considered a major release; I wouldn't count on it landing until year end at the earliest.
  7. Also noticed that when you reset drive assignments, your settings get reset as well - I think that's wrong. The following settings get reset:
     Global disk settings [Disk Settings]
     - Enable auto start
     - Default file system
     - Shutdown time-out
     - Tunable (nr_requests)
     - Tunable (md_write_methods)
     [Global SMART Settings]
     - Default SMART notification value
  8. If the partition table was improperly set up to begin with, this could very well be an issue though - am I wrong?
  9. Interesting, I just re-did it as well and it worked as intended... I did notice a difference in the space required for the filesystem: it went down from 72GB to 69GB on a 10TB drive. Wonder if this was the issue...
  10. I could reproduce this at the cost of losing data on the array; I'll do a repro and post diagnostics.
  11. Hey Team, recently had to go through an exercise of downsizing the cache array on an encrypted array (and encrypted cache array). The only way to do so was:
      1. Tools -> New Config -> Preserve current assignments [Array slots]
      2. wipefs -a /dev/sd[y-z]  # clear filesystem from the cache drives
      3. Assign cache drives
      4. Mark 'parity is already valid'
      5. Start array
      6. Encounter unmountable drives
      The array was hosed at this point - can someone confirm this for me? I have rebooted the server and re-created the array since.
  12. No stress man! Your work is greatly appreciated! We are back in business.
  13. Not to be the one to complain, but we need to turn from reactive to proactive. I genuinely appreciate the support and the dev work put in here, but couldn't this have been anticipated and communicated to the developer ahead of time? If we are trying to bridge the gap between core product devs and community devs, this could be avoided. In either case, no harm no foul. The system is running and we can wait for the fix.
  14. Updated without any major issues, except for the nvidia plugin: I only see version "v" available. Any way to bring back the 'STABLE' drivers over 'BETA'? Although I do respect having an option for 'bleeding edge', I appreciate stability over features.