Everything posted by kolepard

  1. itimpi, thank you for considering it. I appreciate you being willing to look into it and understand it may not be possible. Kevin
  2. I have a question/feature request I'd like to propose. My backup server is in a case with less than ideal cooling. I love that Parity Check Tuning will pause the parity check when the drives heat up; however, even if there is no other activity on the server, the drives never seem to spin down and cool off, so the parity check never restarts. Is there a way to get the drives to spin down during a parity check when they hit the temperature threshold? If not, is it practicable for this to be added in a future revision? Thank you. Kevin
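The spin-down idea above could be approximated with a small watchdog script outside the plugin. This is only a hedged sketch: `smartctl -A` and `hdparm -y` are real tools, but the temperature threshold, device names, and any integration with Unraid's own spin-down logic are assumptions. (Note that polling SMART data can itself wake a drive; `smartctl`'s `-n standby` option avoids that.)

```python
import subprocess

TEMP_LIMIT_C = 45  # assumed threshold; adjust to taste


def parse_smart_temp(smartctl_output: str):
    """Extract the drive temperature from `smartctl -A` attribute output.

    Looks for the common Temperature_Celsius / Airflow_Temperature
    attribute rows; returns None if no temperature attribute is found.
    """
    for line in smartctl_output.splitlines():
        if "Temperature_Celsius" in line or "Airflow_Temperature" in line:
            fields = line.split()
            # The raw value is the 10th column of smartctl's attribute table
            if len(fields) >= 10 and fields[9].isdigit():
                return int(fields[9])
    return None


def spin_down_if_hot(device: str) -> bool:
    """Spin down `device` (e.g. /dev/sdb) if it exceeds TEMP_LIMIT_C."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    temp = parse_smart_temp(out)
    if temp is not None and temp > TEMP_LIMIT_C:
        subprocess.run(["hdparm", "-y", device])  # put drive into standby now
        return True
    return False
```

Run periodically (e.g. from cron) against each array device while a paused parity check waits for temperatures to fall.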
  3. Anyone know if this problem persists in the 6.10.x series? Kevin
  4. No. If you follow the thread further down from that post you'll see that when I last checked prices the Q30 case alone is about $2,000 (though that includes a PSU when I checked, which the Rosewill does not); the Rosewill is about $250. (But yes, you can spend $8k or more on a Q30 that is fully built.) You are correct that the Rosewill case has a set of three 120mm fans in front of and behind the drives. This dual "fan wall" design is also used in the Q30. My experience running both cases is that drives are significantly cooler in the Q30. I think it is fair to compare features and performance of cases at different price points, particularly as this is "The Enclosure Thread," which doesn't suggest to me some cases and comparisons should be excluded. I guess you don't. You are, of course, free to hold that opinion. You do get a much better case with the Q30. The ease with which you can work in the case, the build quality, airflow, and rackmount rails you can get with it are all, IMO, much superior to the Rosewill. For a price difference of $1750 these days, you'd certainly expect so. Each person will have to decide individually if that is worth the money to him/her or not. (And at $2k it's a _lot_, no argument there.) Best wishes in your case choice. Kevin
  5. Similar situation here. I have a Q30 as my main server and a Rosewill RSV-4500 configured for 15 HDDs as my backup. The Rosewill case is ok, but is a bit of a pain to work in, and makes me really appreciate the Q30. The airflow in the Rosewill is better than what I used to have with my 5-in-3 drive cages, but still not as good as in the Q30. The backup server still runs fine, but was my first build about 10 years ago, and fairly recently transplanted into the Rosewill case. It uses an older Supermicro board (C2SEA) with a quad-core Core processor but I'd like to have my backup server use ECC RAM like the main one does. I probably won't be able to answer your question about the GPU. I haven't moved into using VMs or GPU-based transcoding at this point. Good luck. Kevin
  6. On a lark, I contacted them to see if they were selling enclosures only again, and they are. Cases are fully assembled, with steel chassis, modular drive cage, drive cabling, case fans, direct wired backplanes, sliding rails and a power supply. They take ATX motherboards (not eATX). They said they aren't designed to power or fit a GPU, but looking inside the Q30 I have now, it looks like there'd be plenty of space, but I guess you'd want to measure to be sure. They are not cheap (prices I was emailed today, 3/13/2022): AV15 (650W non-redundant PSU) - $1,683.80 (USD) Q30 (850W non-redundant PSU) - $2,151.13 (USD) S45 (1200W redundant PSU) - $2,922.29 (USD) XL60 (1200W redundant PSU) - $3,353.88 (USD) Hope this helps someone. Kevin
  7. Years ago they (45drives.com) used to sell just the case, and I bought one in 2015. It was very expensive, about $1200 with shipping, as well as the special internal cabling. Last time I asked, they no longer sold just the case, though I'm sure they'd respond if you ask them about it again. Their support and customer service is excellent. The mounting solution--which works great and is super convenient--uses a combined SATA+power connector that is attached to a bracket inside. I needed to get cables that I could use that would route back to a SFF-8643 connector instead of the original SFF-8087 ones, and I had no problem ordering them from the company. That said, nothing else I've used, most recently a Rosewill case, is even close to as good to work in. I just had to replace a drive in the Rosewill case, and it's a pain with the way the cabling and drive cages work. Everything from working with the motherboard, PSU, drives, fans, cards, it's all just easy in the Storinator case I have. Kevin P.S. If you do contact them and they will sell you just a case, please post here or PM me because despite the cost, if I could order a second one, I'd do it again today.
  8. Optiman, I am running 6.9.2 with 4 Ironwolf drives in my array. I followed the instructions in the first post, and have had no trouble, but I did disable both. Sorry I cannot be more helpful. Kevin
  9. I have 3 of these drives in my array, and since applying the fix, I have had no problems, and my array has been on 24/7 (currently over 44 days) with multiple spin ups and spin downs of all the drives. I am not close to the level of expertise of others here, but my guess is that this is a separate, unrelated issue. Kevin
  10. Not in my experience. I have done this at least a half dozen times without any issues.
  11. I'd like to suggest notifications be moved to a separate window or pane in the UI. When I have a large number of them for whatever reason, they cover other parts of the interface that I'd like to see, and it is difficult to review all of them. Clicking to clear them all gets them out of the way, but then you obviously cannot review them. Thank you for considering this as a possible enhancement. Kevin
  12. The problem with that is that the 8087 end is male, and I need an end that is SFF-8087 female to connect with the cables I have. Thanks for looking for me, though. I appreciate the help. Kevin
  13. What are those? Can you suggest a source or search terms? Thank you! Kevin
  14. I'm using 8 drives currently, and if I need to add more, I need an HBA that supports more than 8 drives, as the motherboard only has 1 PCI slot. Kevin
  15. All, I have a question about finding a cable that I am having some trouble with. I'd like to replace the IBM M1015 in my system with an LSI 9305-16i. My drives are all in a StoragePod 4 with cables that end with male SFF-8087 connectors. Because of the case design, it is not possible to replace these cables. I am trying to find a cable that will go from the female connector, an SFF-8643, on the 9305-16i to the male connector of the SFF-8087 cable to the drives. Thus the cable needs to be male SFF-8643 on one end, to plug in to the 9305-16i, and female SFF-8087 on the other end, to plug in to the cable running to the drives. I have not yet been successful in this search. I know there are differences between forward and reverse cables that I do not fully understand, so I may not be using the correct search terms. The cables I've been able to find on Newegg, etc., all appear to be female SFF-8643 on one end to male SFF-8087 on the other end. Any help that folks can offer is much appreciated. Thank you in advance. Kevin [Edited for clarity.]
  16. The statement in the release notes, "No need to use the master key now," has me hopeful my issue will be resolved with this update. 🙂 Thanks for making this available, Djoss. Kevin
  17. I'm having trouble getting my Backblaze B2 bucket selected. I can enter a Display name, and then paste in the Account ID and Application Key (to avoid typing errors), but then I cannot select a bucket. Are there limitations on the bucket or display names of some kind? (Other than the Backblaze guidelines limiting you to A-Z, a-z, and -.) Kevin
  18. Does it represent a problem if you have empty slots? For example, I just replaced my disks 1 through 4 with a single drive (four 2 TB drives into one 8 TB drive). I put the 8 TB drive in slot 1, but slots 2, 3, and 4 are now empty. Everything seems to be working fine. Is there any reason the empty and non-contiguous slots would be a problem (aside from being somewhat aesthetically displeasing)? Kevin
  19. That makes sense. It's been 20 years since I studied how using parity to reconstruct data worked, but it makes sense that zeroing the drive would be a step that would preserve parity. I was thinking that once the data was gone, parity would be intact without the drive, and I could just yank it, but I can at least conceive of how that would not be the case and you'd need to zero the drive before you could pull it and have parity stay intact. Patience it is then. 🙂 Thank you for your time and reply. Kevin
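The parity reasoning in the two posts above can be illustrated with a toy single-parity example (a sketch of XOR parity in general, not Unraid's actual implementation): parity is the byte-wise XOR of all data drives, so an all-zero drive contributes nothing to it (x ^ 0 == x) and can be removed without invalidating parity, but only if the zeros were written through the array so parity was updated along the way.

```python
from functools import reduce
from operator import xor


def parity_of(drives):
    """Single parity is the element-wise XOR of all data drives."""
    return [reduce(xor, column) for column in zip(*drives)]


# Three data drives, each four bytes long
d1, d2, d3 = [1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]
parity = parity_of([d1, d2, d3])

# Reconstructing a missing drive: XOR of parity with the surviving drives
rebuilt_d2 = [p ^ a ^ c for p, a, c in zip(parity, d1, d3)]
assert rebuilt_d2 == d2

# Zero d3 *through* the array: each write of a zero updates parity too
parity = [p ^ old ^ 0 for p, old in zip(parity, d3)]
d3 = [0, 0, 0, 0]

# The all-zero drive now contributes nothing to parity (x ^ 0 == x),
# so it can be pulled and parity stays valid for the remaining drives:
assert parity == parity_of([d1, d2])
```

This is why yanking the drive before zeroing would break parity: the old, non-zero contents are still "baked into" the parity values until they are XORed back out.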
  20. I have started replacing the 8 year old 2 TB drives in my backup array. I had previously updated both parity drives to 8 TB, and am in the process of shrinking the array. I am following the procedure outlined here: https://wiki.unraid.net/index.php/Shrink_array Specifically the clear drive then remove section. I am using the unbalance plugin to move everything off the old drive on to the new drive, then the clear_array_drive script to clear the old drive. Reading the description of what the clear_array_drive script does, I am wondering if that step is really needed. The wiki says the script "will completely and safely zero out the drive." Once removed, these drives will be destroyed, so I am not worried about data being retained on them. Does the script do something else necessary for the procedure? I can be patient if needed, but it would be nice to avoid another lengthy step if possible. Thank you for your help. Kevin
  21. Thank you. I updated the name and it starts the server and I can access the web interface, but it still doesn't start the GUI, though I can (as before) log in to the terminal. In the past I've just updated after looking at the release notes. I see this time there is a thread about the other preparatory steps that needed to be taken, which, of course, mentioned the change in legal characters. With future updates I'll look for similar threads before updating. Anyway, I'll run through the thread on updating to 6.5.0, and I have Fix Common Problems running now. In the worst case I suppose I can always start with a fresh installation. Thank you again for your help. Kevin
  22. Hello all, I recently updated from 6.4.0 to 6.5.0. When I restarted, it boots to the login prompt, but never starts the UI. Suggestions? Diagnostics attached. Thank you very much for your help. Kevin tower_bkup-diagnostics-20180403-1352.zip
  23. Maybe it's just voodoo now, but I've pre-cleared all my drives and haven't had any premature failures, on drives some of which are now more than 8 years old. The drives that failed pre-clear I've returned. Although my sample size is only around 20 drives, I'm a believer in stress testing a drive before putting it into use. If I understand correctly, the plugin functioned properly under 6.3.5. Is that the consensus here? I'm tempted to just set up a system with my unused license (I knew I'd have a use for that extra license someday!) running 6.3.5 just to preclear disks, as I've got the hardware around anyway. For those of you who also believe in doing a pre-deployment stress test, what alternatives do others suggest for stress testing a disk instead of preclear? I'm certainly open to suggestions. Thanks. Kevin
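One common alternative to a preclear-style stress test is a full write-then-verify pass (what `badblocks -w` does). The core idea can be sketched in a few lines; this is a hedged illustration, not a preclear replacement, and pointing it at a raw device (e.g. /dev/sdX) would destroy all data on it. Here it works on any ordinary file so the logic can be exercised safely.

```python
import os

PATTERN = b"\xaa" * 4096  # 4 KiB test block; the 0xAA pattern is arbitrary


def stress_fill_and_verify(path: str, blocks: int) -> bool:
    """Write a fixed pattern across `path`, then read it back and compare.

    A real stress test would target the raw device (destructive!) and
    size `blocks` to cover the whole disk, typically in several passes
    with different patterns, followed by a check of the SMART counters.
    """
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(PATTERN)
        f.flush()
        os.fsync(f.fileno())  # make sure data actually hits the media
    with open(path, "rb") as f:
        for _ in range(blocks):
            if f.read(len(PATTERN)) != PATTERN:
                return False  # read-back mismatch: the block failed
    return True
```

In practice `badblocks -wsv` plus a SMART long self-test covers the same ground with better coverage (four patterns, progress reporting), which is why it is often suggested when the preclear plugin is unavailable.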
  24. Thank you very much. That will simplify the process considerably. Kevin