Pauven


Posts posted by Pauven

  1. 1 minute ago, JorgeB said:

    Connect the old disks in two known good slots, it can be the ones where you have the new disks 5 and 6, just to see if the old disks are detected, unlikely that they are both dead.

     

    I would highly recommend at least starting with the slots where the replacement drives are currently being detected: remove the replacements and install the original drives there.  For now, don't touch any of the other "good" drives, as that could compound the problem, especially if you start to lose track of which drives are which.  Keep it simple.

     

  2. Okay, slow down.  Up to this point, most of the advice has been either about doing tests or hypothesizing options.  Anytime you think you're ready to take action, please post here your planned steps for review and approval.  Anytime you take an action, you're one step closer to losing data if it is the wrong action. 

     

    I believe your data is still intact, so don't give up hope.  But slow down and work with the guys here, don't do anything that's not reviewed and approved.

     

    22 hours ago, JorgeB said:

    That suggests something in your /config is causing the issue, you can backup the current flash drive first and then redo it and just restore the bare minimum, like the key, super.dat and the pools folder for the assignments, also copy the docker user templates folder, if all works you can then reconfigure the server or try restoring a few config files at a time from the backup to see if you can find the culprit.

     

    Did you follow this guidance to backup the current flash drive first, before restoring from backup?  From a planning perspective, we need to know what options remain.

     

  3. Hey guys, I'm just chiming in here as jkwaterman is a friend of mine, and we've already been chatting via email - I sent him here for expert advice.  I'm super happy to see JorgeB, trurl and Frank1940 are helping out - you guys are sharp so I know he's in good hands.

     

    I read through everything, and I do have a few thoughts.  Everything you guys are suggesting is pretty much a match for what I've advised via email as well, so we're all already on the same page.

     

    Restoring the super.dat from his Apr 2023 backup is a great idea, but I think that only applies if he didn't change any drives between the backup and before the drives failed out.  If we send him down this path, I think he first needs to confirm he didn't upgrade/swap/add any drives post backup, and also he should have a new backup of his current (bad) config, in case this goes sideways and he wants to get back to the current state.  I wanted to point this out since I didn't see anyone ask this particular question.

     

    I also strongly agree with trying to use the original failed drives, and that he should perform SMART tests to validate the drives are okay before re-using them.

     

    One thing I'm not sure about is whether, if he uses the old drives, he should use the Trust Parity feature (I assume that's still a feature; it's been a decade since I last did this).  I'm imagining that he's got two paths forward with the old drives.  He could recreate the array config using all the original drives, do a Trust Parity so parity won't be rebuilt, and then immediately swap out the two suspect drives and rebuild onto the replacements.  Basically, with this approach he's using the GUI to recreate the pre-failure drive config, and then manually failing/upgrading the drives.  Otherwise, he could again recreate the array config using all the original drives, but skip Trust Parity and instead rebuild new parity from the data on the suspect drives.  This second approach sounds slightly riskier, as we're trusting the suspect drives to survive the parity rebuild, and unfortunately we don't know the nature of the errors that started this whole fiasco.

     

    I know for a fact that he has started the array numerous times in disk emulation mode, so data could have been written to the array.  Additionally, we are both users of the My Movies software, which has a habit of updating local movie data from online contributions that other users continually submit, and this metadata in turn gets written to the array.  It's probably safe to assume that My Movies was running at some point during disk emulation mode, so the current parity data no longer matches the data on the failed drives.  I just wanted to point this out so that we all know we can trust either the parity data or the suspect drive data, but should expect the two data sources to be slightly out of sync with each other.  Note that the updates from My Movies are trivial and will automatically be reapplied if he reverts to the old drive data, so no risk of data loss there.

     

    One question I had myself is:  Is it possible to manually fix the drive config, via text editing, so that the parity drives are re-added to the array in a trusted state, but the 2 failed drives are still shown as missing/wrong/replaced?  I was thinking there was a way to accomplish this via text file edits, but I really don't know.

     

     

    1 hour ago, Frank1940 said:

    An off-the-wall question.  You have 19+ disks in your server.  Does it have a single-rail power supply that can provide 45-50 amps of +12volts for those drives? 

     

    6 minutes ago, jkwaterman said:

    i have a Seasonic FOCUS SSR-750FM 750W 80+ Gold power supply.  I've been using it for 3 - 4 years.

     

    I helped with his server build.  This power supply has 62A on +12V if I'm not mistaken.

     

     

    Thanks for helping jkwaterman out, guys, I know we both really appreciate it!!!

  4. Thanks Rysz.  Actually, it's my signature that's really outdated, hah!  But I was still on 6.9.2, and I had to upgrade to 6.10+ even to use the URL method.

     

    I'm on the latest 6.12 now, and I was able to install from URL.  I assume it's the same MergerFS release as the CA version.

     

    I like MergerFS, and it's working as I hoped.  But it's not perfect.  The "Create" policy uses the same minimum free space setting for files and directories, and I was finding that it would create a directory, write some files to it, drop below the minimum free space, and then create the next directory on a different branch.  Considering that I'm backing up uncompressed blu-rays, typically around 45 GB in size, I need the min free space for creating a directory to be at least 45 GB higher than the min free space for creating files.

     

    To solve this, I customized the mirror.sh script someone else wrote (it creates each directory right before files are written to it, rather than creating all empty directories first and then copying files).  I changed it to create directories only on branches with at least 100 GB free, and to evaluate my MergerFS branches in a particular sequence.  I was then able to configure MergerFS with a much lower 4 GB min free space, which only applies to files since my script creates the directories.
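
    If anyone's curious, the core of my change boils down to something like this simplified sketch (illustrative branch paths, not the actual mirror.sh):

    BRANCHES="/mnt/disks/backup1 /mnt/disks/backup2 /mnt/disks/backup3"
    MIN_DIR_FREE_KB=$((100 * 1024 * 1024))   # 100 GB, in 1K blocks

    # Create the destination directory on the first branch (in my preferred
    # order) that has at least 100 GB free, so MergerFS's existing-path
    # create policy will send the files to that branch.
    create_dir_on_branch() {
        local reldir="$1"
        for b in $BRANCHES; do
            free_kb=$(df -k --output=avail "$b" | tail -n1 | tr -d ' ')
            if [ "$free_kb" -ge "$MIN_DIR_FREE_KB" ]; then
                mkdir -p "$b/$reldir"
                return 0
            fi
        done
        return 1   # no branch has 100 GB free
    }

    create_dir_on_branch "Movies/Example Movie (2023)"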

     

    When used with MergerFS's "ep" Existing Path option, I now have MergerFS writing the backup files to where my backup script creates the directories.  This allows me to keep my blu-ray disc directories whole on a single drive, and all my MergerFS branches fill up one-by-one.  I'm in backup nirvana!!!
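
    For anyone wanting to replicate this, the mount ends up looking roughly like the lines below.  The branch paths are illustrative, and "epmfs" is shown just as one member of the ep (existing path) create policy family, not necessarily the exact policy you'd pick:

    # Pool three UD-mounted USB drives; new files only land on branches where
    # the target directory already exists, with a 4 GB free space floor:
    mergerfs -o allow_other,category.create=epmfs,minfreespace=4G \
        /mnt/disks/backup1:/mnt/disks/backup2:/mnt/disks/backup3 \
        /mnt/addons/backup_pool

    # Unmount when the backup run is finished:
    umount /mnt/addons/backup_pool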

  5. A year ago I created an easy, affordable backup solution for my Unraid server.  Essentially just a stack of external USB drives that I mounted with Unassigned Devices and joined together in a BTRFS JBOD style pool.  With 5x 16TB drives, this gave me a single 80TB storage volume.  At the time, this solution seemed perfect.  I had a backup script that used RSYNC to copy my files to the single mount point, and I thought that BTRFS filled up each drive one-by-one.  Since my Unraid data is basically already a backup of my physical data, having this portable backup volume that could be stored offsite was more than I needed, even without any built-in redundancy.

     

    This week, while adding a new 20TB drive to expand this pool up to 100TB, I learned I made several mistakes in my backup solution.

     

    First, when adding the new drive I made a few missteps and ended up corrupting the BTRFS pool.  And since my pool had no redundancy, BTRFS would not let me mount it read-write to repair it, so the only option was to start over, recreate the entire pool, and re-backup the original 80TB of data.

     

    That was painful enough.  But in redoing all this, I discovered that BTRFS automatically balances writes, placing each file on the drive with the most free space.  With the nature of the data I'm storing, losing a single drive would now make the entire backup worthless, as I need each directory to remain whole on a single drive and can't afford to lose any files inside a directory.  While my BTRFS backup pool is better than nothing, this is way too fragile for me to continue using it.

     

    While researching solutions, I came across MergerFS and eventually this thread.  This sounds like the right type of solution.

     

    My core requirements are to plug in my USB drives, mount them as a single filesystem, and run a backup script to copy any new/altered data to my backup pool, with data filling up each drive, one-by-one, before moving on to the next drive.  That way, if I lose a drive, I only lose the data backed up to that one drive, plus any directories that happened to be spanning the transition between drives.

     

    Sorry for the long lead-in.  Now to my questions:

     

    Is the plugin on CA yet?  I searched and can't find it, so I'm assuming I have to install it via URL.

     

    Can someone help me with the configuration?  I read through the MergerFS github page, and there are tons of options and the examples don't seem to apply to my use case.  I'm a bit overwhelmed.  I need commands for configuring, mounting, unmounting, and expanding the pool.

     

    Thanks!

    -Paul

     

     

  6. I did see that, but then you appended with your edit and I thought you were changing your answer, hence my confusion.

     

    I currently have 78.6 TB of data backed up in this pool, as-is.  If I follow those steps, is there any risk I could lose that data and have to repopulate the back-up?  It was over a week of copying, I don't want to have to do that again.

     

    If I'm understanding you correctly, I can remove the 5 disks and delete the history, then insert the disks one at a time and rename each to the same pool name, delete my history again just to make sure, and then the next time I bring in all 5 drives at the same time, they will appear as a single pool.  Does that sound right?

  7. 1 hour ago, dlandon said:

    Try blanking the mount point and it should pick up the default pool label.  If that doesn't work, remove all the pool devices and delete each one in Historical devices.  Then re-install them.

     

    Edit: The mount point has to be the disk label on the pool devices.

     

    So does that mean this isn't possible?  Sorry, I got confused. 

     

    Since the mount point has to be the disk label, and it won't let me rename to an existing value, that makes it impossible to do the solution you offered, right?

  8. I just tried changing the Mount Point to all be the same, and it won't let me.  It reports "Fail".  I think it is because it's changing the disk label and the mount point at the same time.

     

    Is there a trick to doing this?

     

    Errors in the log:

     

    Apr 10 17:24:51 Tower unassigned.devices: Error: Device '/dev/sdx1' mount point 'Frankenstore' - name is reserved, used in the array or by an unassigned device.

     

  9. Hey dlandon.  Thanks so much for this awesome tool.  I've been using it for years, and it's been a big help for certain tasks.

     

    One of the things I occasionally use it for is a removable btrfs JBOD drive pool of 5 USB HDDs.  It's so easy to plug it in, mount it, run my rsync backup job, then put it back in offline storage when I'm done.  I love the fact that I don't have to stop/start the array to use it, that I don't get warnings when I unplug it, and that I don't get any Fix Common Problems warnings for duplicated data on a cache disk.

     

    I was recently sharing my solution with some fellow users, and I discovered that the tutorial for how to create the btrfs drive pool for UD was removed.  I reached out to JorgeB and he restored that post so now I have those instructions again.

     

    While working with these other users on how to do this hot-pluggable backup pool, and comparing with how it works using stock Unraid pools, a few things cropped up that I wanted to ask you about.  After all, UD is the best tool for creating hot-pluggable drive pools that are normally stored offline, but there are a couple things Unraid pools do a bit better.

     

    First, when mounting the 1st pool device, the buttons to mount the other devices remain enabled.  One of my fellow users got confused, and clicked mount on all devices, and then saw the pool was mounted multiple times.  Would it be possible to both make it more obvious that all the drives in the pool are now mounted, and to disable/hide the mount button on the other drives?  Currently the only indication is the partition size on the mounted drive.  Perhaps even the other drives that got mounted in the pool can be inset to the right, beneath the parent, to better indicate what is going on.

     

    Second, would it be possible to add a feature in the GUI to add a partition to an existing pool?  I believe that Unraid pools let you do this, but in UD you have to go out to the command line and run the btrfs dev add... command to add the partition to a mount point.  I know it's a pretty easy command, but some users are very uncomfortable with the command line and prefer the GUI approach.
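
    For reference, the command-line version is basically the two lines below (device and mount point are just examples):

    # Add another partition to the already-mounted btrfs pool:
    btrfs device add /dev/sdg1 /mnt/disks/Frankenstore

    # Confirm the pool now lists the extra device:
    btrfs filesystem show /mnt/disks/Frankenstore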

     

    I know most people seem to think that Unraid pools are the only game in town now; even your own documentation says to use them.  But for hot-pluggable, removable drive pools, UD is so much better, and I hope you continue to support and enhance this capability.

     

    Thanks!!!

    Paul

     

  10. Awesome, thank you Johnnie (should I still call you Johnnie, or Jorge, or something else?), that's exactly what I needed.  I was surprisingly close in my recreation of the steps based upon my research, but was full of doubt. You're extremely helpful as always.  😊

     

    Another user did a test and discovered he was able to mount a pool created in Unraid using UD, no big surprise I guess since these are just standard btrfs pools.  So for some users it might be easier to create the pool using Unraid, remove it, delete the definition, and then use UD from then on for hot-plugging.

     

    I would definitely use the Unraid pools feature if it more gracefully handled hot-pluggable backup pools, and didn't require the stop/start.  I'm not complaining, though, since UD does this extremely well.

  11. Hey Johnnie/@JorgeB, I could use some help on this.

     

    Side note, your new username and logo had me all confused, I couldn't figure out how you seemed to have been here for years/decades, yet I didn't recognize the name.  I finally figured out your provenance, though I'm still baffled by the user name change.

     

    Anyway, to my issue.  I created a portable backup drive pool, as described above, with Unassigned Devices back when I was running 6.8, using the directions you linked to above.  I plug it in 2-4 times a year and do a backup, and it's fantastic.  Those instructions (which I think you wrote) have since been deleted, since the preferred way is now to use the multiple drive pools feature in 6.9.

     

    But the functionality in 6.9 is not the same.  If you create a drive pool and then unplug it, Unraid is unhappy about the missing drives.  You can make the warnings go away if you delete the pool, but then you have to make sure you recreate the pool with all the drives back in the correct order before doing your next incremental.  You also have to stop/start the array to make any changes to the pool.

     

    Unassigned Devices did this particular task so much better.  No warnings, don't have to delete the config, just plug it in and mount it, don't have to stop the array.  While I can understand that the preferred method is to use Unraid for multiple permanent drive pools, I don't understand why the documentation for doing it with UD was deleted, as that still serves a niche.

     

    I'm trying to help some other users get up and going with the solution I'm using, and since I can't find the documentation I can't fully help them.  I think there were some command lines I used when setting up the btrfs pool as JBOD, possibly related to formatting, but I don't recall.  I also need to expand my UD backup drive pool soon, as I almost ran out of space on my last backup and need a 6th drive, and I'm worried I won't be able to do this correctly without the instructions.
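
    My best recollection of the formatting step is something along the lines of the command below (drive letters are just examples), but I'd like to confirm it against the original write-up before trusting my memory:

    # Single (JBOD-style) data profile across all members, so the drives fill
    # up rather than stripe or mirror:
    mkfs.btrfs -f -L Frankenstore -d single /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1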

     

    Even the UD support thread points to the now deleted instructions, and the internet archive doesn't have any successful copies of the FAQ.

     

    Is this something you can help with, or point me to someone who can?

     

    Thanks!

    Paul

  12. Big thanks to @Cessquill for the write-up, and for all the other contributors to this thread!  I just successfully fixed this on my server.

     

    I finally took the plunge after waiting half a year, as I started having major compatibility issues with Unassigned Devices on 6.8.3, and couldn't put off upgrading anymore.

     

    I followed all the steps that Cessquill outlined, and disabled EPC on my four Seagate ST8000NM0055 drives.  I used the latest SeaChest Utility files from Seagate's website, downloaded yesterday.  They appear to have changed again, and this time I used the files from path:

    • \Linux\Non-RAID\centos-7-x86_64\
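
    For anyone following along, the disable step itself was a one-liner per drive, roughly as shown below.  The downloaded binaries have longer versioned names, and the flags should be double-checked against Cessquill's first post and each tool's --help:

    # List the attached drives and their /dev/sgX handles (I believe --scan does this):
    ./SeaChest_Info --scan

    # Disable EPC on one drive at a time (example handle):
    ./SeaChest_PowerControl -d /dev/sg6 --EPCfeature disable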

     

    Oddly, unlike @optiman's experience with his ST8000NM0055's, mine all had Low Current Spinup disabled, so I didn't mess with that.

     

    I'm also running on a Marvell-based controller, which adds a data point that this issue doesn't just affect LSI controllers.

     

    Last time I upgraded to 6.9.x, I had major issues and could not get beyond 66GB of my parity rebuild, which is why I rolled back to 6.8.3.  After applying the EPC fix and upgrading to 6.9.2, I've done multiple drive spin-downs/spin-ups with no issues, and a full parity check which completed in record time.  It's perhaps too early to celebrate, but it does seem like the issue is resolved on my setup.

     

    I also have two pre-cleared ST8000VN0022's that are not in my array.  They had both EPC and Low Current Spinup (Ultra Low) enabled.  I decided to leave Low Current Spinup alone, but went ahead and disabled EPC for both of these drives.  These will migrate into my array in the coming months, so I don't know yet how they'll behave.  I don't even know if I would have had issues with them, but since other users here mentioned them I decided to play it safe.

     

    I also used SeaChest_Info to examine my other non-Seagate drives (surprisingly it works), and found that EPC exists and is enabled on my HGST_HUH728080ALE drives, but those don't cause any problems.

     

    I kinda hate that these Seagate Exos 8TB drives are such a good value, as they've become my chosen upgrade path, so now I'll have to remember to disable EPC on all new drives going forward.  While I do like the HGST drives better, the price premium is just too much for a server this large.

     

    Thanks again!!!

    -Paul

    • Like 1
  13. 42 minutes ago, optiman said:

    It's my understanding that the issue isn't with Unraid at all, so the answer would be never. 

     

    If this is true, then why does the problem only appear after upgrading to Unraid 6.9.x?  

     

    I'd been running on 6.8.3 for a long time without issues.  Last spring I upgraded to 6.9.2 and bam! the issues hit immediately.  I never did the EPC fix.  The problem was incessant on 6.9.x, and I didn't want to risk losing data playing around with drive settings while I already had 2 drives out and was risking data loss, so I rolled back to 6.8.3, and the problem went away.  Half a year later and it's been smooth sailing on 6.8.3.  I stayed on 6.8.3 because it works and there isn't anything in the 6.9.x branch I'm needing.

     

    Even the very first post here mentions that the problems started with 6.9.0, which 100% matches my experience.

     

    Perhaps what you are saying is that the problem lies in the Linux kernel or one of the various drivers that were upgraded in the 6.9.x releases, and the issue is not in any of LimeTech's Unraid code.  That may be true, though I'm not sure I've seen it clearly detailed in this thread exactly where the problem lies, so I would appreciate pointers to any additional information I may have missed.

     

    It certainly seems reasonable to me that since a change in 6.9.x broke this, another change in 6.10.x could fix it, so I'm not inclined to give up hope entirely.  And there have been many times LimeTech has chased down bugs in other components on behalf of their users - and this issue has been reported to them in more than one ticket so they should be aware of it, though disappointingly I've never seen them weigh in on the topic.

  14. On 9/23/2021 at 4:10 AM, Cessquill said:

    I don't think you'll have problems.  This only affected 1 or 2 of all the Ironwolf models, none of which you have.

     

    I can confirm that ST8000NM0055 drives are most definitely affected by this issue.  This bit me hard when I upgraded to v6.9.2 back in April.  I had to roll back to 6.8.3 to recover from a dual-drive "failure" and inability to rebuild on 6.9.2, and never attempted any of the fixes posted here.  I felt extremely lucky to escape without losing data, and I'm still running 6.8.3.

     

     

    On 9/23/2021 at 3:59 PM, optiman said:

    I ran a parity check and it just finished without any errors.  syslog looks good.  I would guess that I would already see errors if I was going to have the issue.

     

    Thanks!

     

    optiman, glad to read this worked for you.  Since it has been a couple weeks, is your system still okay?  I'm starting to feel a little trapped on 6.8.3, so I'll probably have to apply this fix.  Since we both have ST8000NM0055 drives, your results matter most to me.

     

    I was hopeful that this was a bug in 6.9.x that would be fixed in 6.10, and that I wouldn't need to do the drive fix.  Came here to see if anyone had tested this on 6.10 without applying these fixes, but no dice.

     

    Paul

    Cross-posting here for greater user awareness since this was a major issue - on 6.9.2 I was unable to perform a dual-drive data rebuild, and had to roll back to 6.8.3.

     

    I know a dual-drive rebuild is pretty rare, and don't know if it gets sufficiently tested in pre-release stages.  Wanted to make sure that users know that, at least on my hardware config, this is borked on 6.9.2.

     

    Also, it seems the infamous Seagate Ironwolf drive disablement issue may have affected my server, as both of my 8TB Ironwolf drives were disabled by Unraid 6.9.2.

     

    I got incredibly lucky that I only had two Ironwolfs, so data rebuild was an option.  If I had 3 of those, recent data loss would likely have resulted.

     

    Paul

    • Like 1
    • Thanks 1
  16. Everything these fine gents wrote is correct.  I stopped development of UTT after Unraid v6.8 came out. 

     

    There was some chatter that even v6.8 had some tunables that affected performance, and that what LimeTech was doing didn't work perfectly on all hardware, though as you can see it has been quiet here for well over a year, so I'm guessing the issues weren't enough for users to chase solutions.  And perhaps LT did resolve some of those earlier v6.8 performance issues a few users experienced.

     

    Ultimately, my perspective is that beginning with v6.8, LT was actively working on internalizing performance tuning, and the need for UTT is no more.

     

    Additionally, the original major performance issue that I experienced on my hardware, that led me to create this tool, is gone since v6.8.  So even if there were performance issues affecting some hardware configs, I'm lacking the motivation or time to troubleshoot them by revamping this code.  I willingly pass the mantle on to anyone else that has a need to refine the code for newer Unraid versions.  My shift has ended.

    • Like 1
    • Thanks 2
    Thanks Johnnie, this is exactly the info I needed.

     

    I have created a "Frankenstore" backup solution (pic below), using 5 USB 16 TB drives.  These are cheap drives, at ~$310 each, and even with 3D printing an exoskeleton for portability, wiring up a single power supply, and using a 7-port USB 3.0 hub with toggle switches, my total cost for an 80 TB backup solution is under $1700.  The final solution is extremely portable, making it easy to take offsite for security.  The 10A 12V power supply could easily support 6 drives, and possibly even 7, so I have a bit of room to grow to 96 or even 112 TB of backup capacity in the future, though for the next year 80 TB is plenty.

     

    The toggle switches on the USB hub are really cool, as it allows me to control the power-up order and get the same disk ID each time, though I'm not sure if that matters with the BTRFS pool.

     

    Of course, at such a low cost, I am expecting drive failures.  Since this is primarily just an offline backup for my main array, I'm cool with taking that risk.  When I read through your linked instructions, you talk about replacing a drive, but not specifically replacing a failed drive.  Is the process the same, or will it be different?  I'm assuming with a JBOD, I only lose the data on the failed drive, plus any files that might have been split across two drives onto the failed drive - I don't suppose there is a way to prevent splitting files across drives in a pool, is there?

     

    Also, with Unraid v6.9 in the wings, is using UD still the right way to go?  I'm running 6.8.3, and do not run beta or even RC on my production server.  Do you know if my UD BTRFS JBOD pool will migrate to v6.9's new multi-pool functionality, or would I have to recreate it from scratch and re-do my backup?

     

    Thanks!

    Paul

     

    [attached photo of the Frankenstore USB drive stack]

    I don't know that DPI really matters for monitors like it does for printing.  Most monitors are 120 DPI or below.

     

    What is probably more important is simply having a physical size large enough to cover a high-resolution 4K monitor.  So if every banner image was sized to 3840 x 200, that would be high enough resolution to cover 4K widths, and easily scale down to lower resolutions, i.e. 1920 x 100 for a standard Full HD monitor.

     

    I don't know if there is an official banner height, but when I investigated it a while back I was coming up with a size of 91 pixels high, which seems a little odd.  Perhaps it is correct, I don't know.  If 91 pixels high, then that could mean we want to target 3840 x 182 as a banner size, and scale down from there. 

     

    But then again, that might cause problems for even lower resolutions, as narrower windows would zoom in further, and keeping the aspect ratio locked would cause the image to run out of pixels height-wise.  Perhaps we have to plan for a minimum width, i.e. 960 x 91, which would scale up to 1920 x 182 and 3840 x 364.

     

    If every banner was 3840 x 364, that is a still reasonable 1.4 megapixel image.

     

    EDITED to correct some crazy typos.  I'm actually decent at math... no really.

     

    On 2/4/2020 at 10:43 AM, wgstarks said:

    Perhaps this would be better submitted as a feature request in that forum. Get eyes on the problem that can make the changes.

     

    I'm sure you're right, but I was thinking it might be nice to have some artist community commentary on the requirements before making a feature request.  Perhaps other banner creators have some unique needs that I haven't thought about.

    As much as I enjoy being able to set custom banners, the scaling issue is a real challenge.  I'm often moving my browser windows around and setting them to different sizes.  Sometimes I have my Unraid window full screen (on a 4K monitor), sometimes half-width, sometimes quarter screen, and on rare occasions a truly custom size where I've grabbed the browser edge and widened/compressed the window width to fit what I'm working on.  I also will look at it on other resolution screens or mobile devices.

     

    Because the banner image stretches to fit the full width, it becomes impossible to have a single-resolution image that works perfectly on all screen ratios and resolutions.  The current implementation really feels like a 1990's solution.

     

    This is extra frustrating with banners that have circular elements, like the 2001 Space Odyssey / HAL and Iron Man / Jarvis banners I created for another user several weeks back.  If your browser width is set to the exact same width as the banner, it looks perfect.  Anything else and you get ovals, and text in the banner is stretched/compressed in an ugly fashion.  On that Jarvis banner, I even positioned a couple "folder" graphical elements to sit behind the Unraid version & server info text, but again this only works if you have the right browser width, otherwise the text doesn't center on these elements.

     

    Because I work with web design tools like WordPress, I know that better solutions are possible.  Instead of stretching, the image could be scaled keeping the aspect ratio fixed.  Possibly the Unraid text elements could scale with the image, maintaining positions on top of graphical elements.  If cropping is necessary, I think cropping top/bottom is preferable to cropping the sides, though I'm sure there could be some banners where cropping width is a better solution - perhaps that could be a setting we can toggle.

     

    It even crossed my mind that it might be possible to have a multi-segment banner, where you have separate "Left", "Center", and "Right" images that get closer/further away from each other as the browser width changes.  This could allow you to set a static background for the Unraid text elements on the left and right, and a floating center image that ties it all together.  Perhaps you would even need 5 elements to make this work correctly: "Left", "Left Gap Filler", "Center", "Right Gap Filler", "Right".  The Gap Fillers could stretch between the Center and Left/Right, connecting them seamlessly.  That way we as banner designers can achieve the near impossible: correct aspect ratio and positioning behind the text elements with a modern responsive behavior to browser width changes.  It would be very easy for a banner designer to chop a banner into 5 segments.

     

    Unfortunately, I don't have the programming skills to contribute to enhancing Unraid.  All I can do is sit here and share ideas, hoping to get some discussion going on this challenge.  Perhaps one of the really smart guys can even make it happen...  :)

     

    • Like 1
  20. 24 minutes ago, jumperalex said:

    That said, I disagree that LT has made it self tuning. Default settings for even the one main tunable, md_num_stripes, makes a difference for my 7 disk array. Setting it to 5120 improved my parity by 10% from default. Not sure if this is the best speed:memory option, but i also guess I don't need a full script to test out a few options. But for sure default is slower than tweaked. FWIW, I'm getting same speed in 6.8 tweaked as 6.7 tweaked which was the same speed as 6.6 tweaked

     

    Interesting, I did not know this.  I knew some users had performance issues with 6.8 and that Lime-Tech was still refining their logic, but I hadn't heard that some of the tunables can still help.  I guess once I get off my legacy version I can revisit this again.  Most likely I'm waiting for 6.9.1, fingers crossed.

     

    Though to be honest, I do hope Lime-Tech can figure out the logic to truly make UTT unnecessary.

     

    27 minutes ago, jumperalex said:

    In any case, thank you for all your time and effort. Even if it ends up being the stuff of legend, your contribution to the community really is greatly appreciated.

    Thanks, it feels good to help and even better to be appreciated!

  21. On 9/5/2019 at 8:39 AM, jonathanm said:

    Sure. 2,000 blu-rays backed up at 50GB / disk. When you have well over $20,000 worth of blu ray disks, surely you would want a backup of your data, right?

     

    🤣

     

    Cool, you've got the right idea.  Just wondering if you are fully leveraging it with a front-end so you don't have to insert discs.  Essentially, your discs are your backup, and your array is your media server.

     

    I've got 1800+ movies stored away in boxes in the basement (my backup), and watch everything directly from my array.  Using my own GUI front-end, of course... 😉

  22. 14 hours ago, sosdk said:

    I would like a cache setting that allows files to be on both cache pool and array.

    Yes.  Yeeeeesssssss.  This!  

     

    I realize that is asking a lot, as typically files are only allowed to exist in one or the other, not both, and this probably throws off some internal checks.  But it is definitely a feature I want (plus SSD/NVMe array support, which got a lot of votes but no mentions).

     

    My use case is that I have certain data (music/mp3's, software code, etc.) that I want immediate, fast access to all the time, without spinning up any drives, so it makes sense to put them on my NVMe cache drive.  But I want that data backed up too.  Sure, I can buy another $600 2TB NVMe drive just to create a mirrored cache pool, but ouch that's a lot of $$$.  I'd rather give Lime-Tech $120 for another license that I don't need (let's call it a donation, baby), so that these files can be stored both in my protected array and my unprotected cache.  All reads would come from cache, and all writes would go to both (or cache first, sync later).

     

    I had a script a while back that was syncing a few directories from my cache drive to my array, but it stopped working and I haven't bothered to try and fix it.  Plus, I think it was causing those duplicate file error messages, as Unraid was detecting I had the same file in cache and the array, so I've been hesitant to try doing this again.  I looked for a plugin that would handle this and found nothing.
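
    The old script was basically just a couple of rsync lines like the one below (paths are examples, not my actual shares); in hindsight, writing to a different share name on the array probably would have avoided the duplicate-file warnings:

    # Mirror a cache-only folder onto a specific array disk, under a different
    # share name so the same share doesn't exist on both cache and array:
    rsync -av --delete /mnt/cache/music/ /mnt/disk3/backups/music/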

     

    Native Unraid functionality for a "Use Cache Disk: Both" option would be awesome.

     

     

    • Like 1
  23. UTT is not compatible with Unraid v6.8 or later.

     

    I developed the latest version using Unraid 6.6.6 (which is what I'm still running).  I've avoided the 6.7.x series due to some known performance issues, and 6.8 for even bigger issues, so I don't have more recent versions available for testing and development.  6.6.6 works perfectly for me, and I have zero reason to chase version upgrades just to be on a newer number, so I might be here for a while.

     

    Which is all really pointless anyway, since Lime-Tech took away the tunables that UTT tunes in v6.8.  In theory, UTT is dead and no longer even needed with v6.8, since Lime-Tech took back control of these tunables and has its own internal logic for tuning them.

     

    So long story short, UTT is dead for Unraid 6.8 or later.  Though it still works for 6.7.3 and earlier.

    • Thanks 1