Unraid Feature Request Wishlist


SpencerJ

Unraid Feature Wish List  

3041 members have voted


Recommended Posts

9 minutes ago, cbr600ds2 said:

Maybe I'm just being simplistic, but I would rather be able to go over the 30-drive limit (unlimited, hahaha). I love Unraid, the ease of use, and the fact that it lets you use random hard drives, but the 30-drive limit is a bit of a bummer.

Just curious: Where are you finding a case to hold all of those drives?  I can't seem to find a case that can hold more than about 11.

That's my biggest hold-up with expansion: not being able to mount more drives.

Link to comment
21 minutes ago, Ellis34771 said:

Just curious: Where are you finding a case to hold all of those drives?  I can't seem to find a case that can hold more than about 11.

That's my biggest hold-up with expansion: not being able to mount more drives.

eBay - I actually have a Norcotek 24-bay, but I've been looking at other ones. The Chenbro NR40700 is older and some pop up there. Search for JBOD enclosures. :D Some take a bit of modding.

Edited by cbr600ds2
Link to comment
On 2/28/2020 at 2:57 PM, Jason Noble said:

Can we make requests here, or just vote? Please look into adding VDO, which provides inline block-level deduplication, compression, and thin provisioning for primary storage. https://github.com/dm-vdo/kvdo

This looks nice! Maybe with an optional integration of dm-cache to provide a transparent hot-block cache? That would make the speed of the SSDs transparently usable for everything, without the risk of data loss and without wasting cache space, by caching only the hot spots that are really frequently accessed (e.g. the index of a database, some blocks of a VM, parts of a Docker container, certain tools, thumbnails, etc.).

Link to comment
On 2/28/2020 at 5:57 AM, Jason Noble said:

Can we make requests here, or just vote? Please look into adding VDO, which provides inline block-level deduplication, compression, and thin provisioning for primary storage. https://github.com/dm-vdo/kvdo

I don't think this was intended to be a place to make feature requests. I would advise making your request in the Feature Requests section of the forums. Be sure to check for existing duplicate request threads, and like or continue the discussion in those.

  • Like 1
Link to comment

I wouldn't mind direct Grafana integration replacing the Dashboard. Why spend dev time on stuff that can be handled by a dedicated project? Docker management is another area where I wonder if it would be better to just use Portainer or something else with broader support.

 

Also, better documentation on some topics: I've asked a bunch of VLAN-related questions and searched through the documentation without any luck getting answers.

  • Like 1
Link to comment
On 2/26/2020 at 8:10 PM, Fizzyade said:

VM enhancements... mainly that the "basic" editor should only change the fields/attributes that have actually been edited. It's incredibly annoying to change something only to find that the VNC port has reverted to auto *yet again*.

 

Heck, I'd even settle for just that one thing no longer happening.

I voted for snapshots, as they would be very cool, but I'm trying to get my macOS VM working, and having to edit the VM settings twice for every change made is really a pain in the a...

If I could, I would change my vote.

Link to comment
On 3/16/2020 at 5:04 PM, eagle470 said:

I would like to see a reduced cost license that allows you to use the cache pool for Dockers and VMs, but you cannot use the array function.

I bought a full license just to run a single Docker container and 2 VMs on a single M.2 drive, but the purchase wasn't really about the functionality; it was more to contribute to Unraid's development.

I thought a reduced price with reduced functionality would be nice, but in the end this is a one-time payment for lifetime support, AFAIK.

  • Like 3
Link to comment

1. Audio feedback from the motherboard's buzzer.

  • For example, when a hard drive fails or a parity check fails.
  • Or when someone logs into the system, etc.

 

2. Automatic server shutdown.

  • For example, if the system has had no VMs or Docker containers running for longer than 15 minutes, the server shuts itself off (a rough sketch of such an idle watchdog is shown after this list).

 

3. Consider trying to implement the “Looking Glass” VM technology.

  • If I have understood correctly, Looking Glass makes it possible to retrieve the framebuffer from a VM's dedicated GPU and display it on the host machine with almost no overhead or delay.
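
For illustration, here is a rough sketch of how such an idle watchdog (item 2 above) could work. This is not an existing Unraid feature: it assumes the docker and virsh command-line tools are available on the host, that running it as root is acceptable, and the 15-minute threshold and /sbin/poweroff path are only examples.

/* Sketch of an idle-shutdown watchdog (not an Unraid feature).
 * Polls for running containers and VMs and powers the box off after
 * a sustained idle period. Error handling is deliberately minimal. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Count non-empty output lines of a shell command. */
static int count_nonempty_lines(const char *cmd)
{
    FILE *p = popen(cmd, "r");
    char buf[512];
    int n = 0;

    if (!p)
        return -1;                      /* treat failure as "unknown" */
    while (fgets(buf, sizeof buf, p))
        if (buf[0] != '\n' && buf[0] != '\0')
            n++;
    pclose(p);
    return n;
}

int main(void)
{
    const int idle_limit = 15 * 60;     /* 15 minutes, as in the example above */
    const int poll = 60;                /* check once per minute */
    int idle = 0;

    for (;;) {
        int containers = count_nonempty_lines("docker ps -q 2>/dev/null");
        int vms = count_nonempty_lines("virsh list --name --state-running 2>/dev/null");

        if (containers == 0 && vms == 0)
            idle += poll;               /* nothing running: accumulate idle time */
        else
            idle = 0;                   /* activity (or an error): reset the timer */

        if (idle >= idle_limit) {
            system("/sbin/poweroff");   /* path may differ; on Unraid a clean
                                           array stop would be needed first */
            return 0;
        }
        sleep(poll);
    }
}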
  • Like 1
Link to comment

What I would find really great would be the possibility to pause a parity sync across multiple reboots! I have two reasons for this:

First: I live in a flat and normally keep all doors open at night for better airflow, because I have no radiator in the bedroom and get my heat from the other rooms. Unfortunately, a parity sync takes longer than 24 hours, and this is the only time in the month I have to close the doors because of the noise, so it gets cold, too. Second: normally I only sync backups from my other systems several times a week (up to once a day), or occasionally need some older data stored on the array that is not on my live systems. That means my array is shut down most of the day and only online for a few hours, maybe half a day, because of noise and energy costs. I could use that spare, but ready, time to continue the parity sync. This would save a lot of energy, because the only time in the month the server runs for more than 24 hours, even during the night when I don't access it, is for a parity sync.

 

So what I would appreciate is that the parity sync operation writes a small state file to the flash drive when it is paused explicitly and/or on a clean system shutdown, and that it offers the possibility to continue where it left off (or to restart from the beginning, for example with an additional checkbox). That way, a parity sync could be paused and resumed whenever the system has to be shut down, for whatever reason.

 

I don't think this is difficult, because the parity sync operation always knows where it is. I know there is the Parity Check Tuning plugin that lets a parity check run in intervals, but that does not survive a reboot, which is critical for my use case. I hope this is possible! Thank you!

 

BTW, this was already requested, but maybe its importance was not conveyed as strongly as in my text:

 

Edited by Addy90
Link to comment
20 hours ago, Addy90 said:

I don't think this is difficult, because the parity sync operation always knows where it is. I know there is the Parity Check Tuning plugin that lets a parity check run in intervals, but that does not survive a reboot, which is critical for my use case.

This IS difficult :( Until Limetech provide the underlying support, there is not much that can be done.

Link to comment
55 minutes ago, itimpi said:

This IS difficult :( Until Limetech provide the underlying support, there is not much that can be done.

Ah, yes, of course it is nearly impossible to do from a plugin! I know you have it on the wish list for your great Parity Check Tuning plugin!

But it is exactly as you describe it there (copied from your Parity Check Tuning plugin page):

Quote

Resume parity checks on array start. The current Limetech implementation of pause/resume does not allow a parity check to be started from any point except the beginning.   If the ability to start at a defined offset is ever provided then this could be implemented.

The offset ability has to be implemented by Limetech! This is why I am asking in the Unraid Feature Request thread and not in your plugin thread: I know it is not possible for you, and that you need that feature.

 

But I don't think it would be very difficult for Limetech, for the following reasons:

There is a function md_do_sync() with a variable mddev->curr_resync in the md driver that holds the position of the current parity check run. This variable is incremented (mddev->curr_resync += sectors) within a while loop until the maximum sector (mddev->recovery_running) is reached, which marks the end of the parity check operation.

A parity check can already be resumed by passing options to the check_array() function, which revives the mddev->recovery_thread; it then continues from the current mddev->curr_resync position held in memory.

I think it should be easy to read out the content of this variable (and others if needed, perhaps with a checksum, just to be sure no modification happened). The status_md() function already prints out this sync-position variable, for example: seq_printf(seq, "mdResyncPos=%llu\n", mddev->curr_resync/2);. Maybe it is already possible to get the contents of the needed variables through this function.

So once you can get the content of this variable, the check_array() function only needs an additional parameter for resuming the operation at a specific position. It would then set mddev->curr_resync, wake up the sync thread, and the operation would continue after a reboot where it left off!
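
To make that concrete, here is a small, self-contained user-space C sketch of the idea. It is not Unraid's md driver code: the names sync_state, start_check and do_sync are invented for illustration, and only the pattern of setting the position before restarting the loop mirrors the mddev->curr_resync description above.

/* Toy user-space simulation of "resume a sync at a saved offset".
 * All names are made up; the real driver-side change would live in
 * the md code that owns mddev->curr_resync. */
#include <stdio.h>

struct sync_state {
    unsigned long long curr_resync;   /* plays the role of mddev->curr_resync */
    unsigned long long max_sectors;   /* end of the array */
};

/* Run (or continue) the check from whatever position is currently set. */
static void do_sync(struct sync_state *s, unsigned long long stop_at)
{
    const unsigned long long sectors = 2048;   /* chunk handled per iteration */
    while (s->curr_resync < s->max_sectors && s->curr_resync < stop_at)
        s->curr_resync += sectors;             /* "check" one chunk */
}

/* The proposed extra parameter: start at a given offset instead of 0. */
static void start_check(struct sync_state *s, unsigned long long resume_from)
{
    s->curr_resync = resume_from;   /* 0 means a fresh, full check */
    /* in the real driver, this is where the sync thread would be woken */
}

int main(void)
{
    struct sync_state s = { 0, 1ULL << 22 };   /* small pretend array */

    start_check(&s, 0);
    do_sync(&s, 1ULL << 21);                   /* pause / shut down half way */
    printf("paused at sector %llu (save this to flash)\n", s.curr_resync);

    /* ...reboot happens here; the position was written to a file... */

    start_check(&s, s.curr_resync);            /* resume at the saved offset */
    do_sync(&s, s.max_sectors);
    printf("finished at sector %llu\n", s.curr_resync);
    return 0;
}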

 

This parameter could be read from a file, validated by the checksum, and the mdcmd tool could be extended to accept this value for a parity check on the command line.

Everything should be possible via mdcmd calls from the UI: the functions permit setting the current position, and the UI could save the current value to a file (with a checksum, just to be sure), read it back after a reboot, delete the position file while the operation is running, and write it when the check is paused or the array is shut down cleanly. That way, after an unclean shutdown or a successfully finished check, the file is not there; on the next start, the UI checks whether the file exists and, if not, starts from the beginning.
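
A sketch of what that save/restore side could look like (purely illustrative: the file name, format, and trivial checksum are assumptions, not anything Unraid actually does; on a real system the file would live on the flash drive):

/* Illustrative only: persist a resync position together with a simple
 * checksum, as described above. */
#include <stdio.h>

#define POS_FILE "parity-resume.dat"   /* on Unraid this would sit on the flash drive */

static unsigned long long checksum(unsigned long long pos)
{
    return pos ^ 0xA5A5A5A5A5A5A5A5ULL;   /* trivial tamper check, not cryptography */
}

/* Write the paused position; would be called on explicit pause or clean shutdown. */
static int save_position(unsigned long long pos)
{
    FILE *f = fopen(POS_FILE, "w");
    if (!f)
        return -1;
    fprintf(f, "%llu %llu\n", pos, checksum(pos));
    return fclose(f);
}

/* Read it back after a reboot. Returns 0 and fills *pos only if the file
 * exists and the checksum matches; otherwise the check starts from 0. */
static int load_position(unsigned long long *pos)
{
    FILE *f = fopen(POS_FILE, "r");
    unsigned long long p = 0, c = 0;
    int ok;

    if (!f)
        return -1;
    ok = (fscanf(f, "%llu %llu", &p, &c) == 2) && (c == checksum(p));
    fclose(f);
    if (!ok)
        return -1;
    *pos = p;
    remove(POS_FILE);   /* consume it, so a stale value cannot be reused
                           after an unclean shutdown */
    return 0;
}

int main(void)
{
    unsigned long long pos;

    save_position(123456789ULL);                 /* e.g. on clean array stop */
    if (load_position(&pos) == 0)                /* e.g. on next array start */
        printf("resume parity check at sector %llu\n", pos);
    else
        printf("no valid resume point, start from the beginning\n");
    return 0;
}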

 

I think, for Limetech, it would not be very difficult to permit this operation. I would LOVE to see it :) It would make my day brighter, and some other people's too, including @itimpi's :)

 

PS: Pausing and resuming a parity sync over multiple reboots would not be a burden for the user. It is as easy as shutting down the array (the file with the position gets saved) and, after a reboot, the parity check UI reporting that the last check was not finished and can be resumed (or offering a checkbox to start from the beginning, which calls mdcmd without the position argument). Not difficult for the user, but very handy for this use case!

 

PPS: As far as I understand, the parity check code is also used for rebuilding disks, so it should also be possible to resume a rebuild after a reboot. In that case I would certainly warn the user about it, or gray out the option to shut down, but technically it should be possible. In case of an unclean shutdown, we have to restart from the beginning either way.

Edited by Addy90
Link to comment

There is a VERY large list of check conditions that have to be satisfied before one can be sure that it is safe to continue a parity check from the position which had previously been reached. Since Limetech are a little paranoid about changes that carry any risk of data loss, I think it is this that is holding things up. As always, it tends to be the edge cases that cause the problems rather than the simple mainline case.

  • Thanks 1
Link to comment
10 minutes ago, itimpi said:

There is a VERY large list of check conditions that have to be satisfied before one can be sure that it is safe to continue a parity check from the position which had previously been reached. Since Limetech are a little paranoid about changes that carry any risk of data loss, I think it is this that is holding things up. As always, it tends to be the edge cases that cause the problems rather than the simple mainline case.

Yes, definitely. I assume Limetech has loads of automated test patterns for the edge cases they know about, to be sure a change cannot cause data loss. Of course this is nothing one can implement quickly, but since it is already possible to pause and continue a parity check during operation, I think it should also be safe after a reboot, especially because many conditions are already checked after a reboot: whether the array and all disks are available, and whether there were any changes to the array. But it is not for me to assume too much here. This is just a feature request thread, and I wanted to ask for something I would love to see and that I think is not impossible to do (much easier than dual parity or multi-streaming and other features we have seen recently). It is a fair feature request in my opinion, plus some suggestions about possible implementations, because you got me writing about it. The rest is hope and patience, and I have both. Like most of us here, I prefer a solid, tested feature over any other feature; that is one reason we use Unraid: because it works. Don't you agree? :)

Edited by Addy90
Link to comment
  • SpencerJ changed the title to Unraid Feature Request Wishlist
