FreakyUnraid

Members
  • Posts

    21
  • Joined


  1. Are you sure about that? These 2.5" Seagates only do about 135 MB/s max.
     The HBA has 8x 6 Gbps; with one link to the expander that's 4x 6 Gbps = 24 Gbps. With a maximum of 20 drives that comes to 24/20 = 1.2 Gbps per drive = 1.2/8 = 150 MB/s.
     PCIe 2.0 does 500 MB/s per lane. The HBA is x8 with 2 ports, which comes to x4 per port? So 500 MB/s x 4 = 2000 MB/s / 20 drives = 100 MB/s max theoretical. In practice this will be lower, is my guess. So PCIe 2.0 is the bottleneck here.
     Swapping the PCIe 2.0 card for a PCIe 3.0 card results in: 985 MB/s per lane x 4 = 3940 MB/s / 20 drives = 197 MB/s. I read that in practice this is more like 3200 MB/s / 20 drives = 160 MB/s. Still well over the max these drives can do, around 20% headroom. (See the throughput sketch below the post list.)
     If, in the future, I switch to 3.5" drives, which can do much higher speeds, I will also need fewer drives. So I think I'm good when I switch to something like a 9207-8i, right?
     The 2.5" drives need more HBAs/expanders, so the power consumption difference with a 3.5" setup is indeed negligible at this point. I just really like these little drives for being so quiet. Even with all drives running at 100% I can barely hear them. I remember my old Synology with 4x 3.5" drives, which sat in the same spot the current server is in, and the noise was really unbearable.
     The power consumption 'demand' was for the new hardware. Power is getting expensive, so I don't want an HBA or expander that consumes a ton of power when there are (far) better and more efficient ones out there.
  2. Hi, I'm currently running a Fujitsu D3643-H (4x SATA) with 2x IBM M1015 (2x 8 SATA). All ports are populated with 2.5" drives. With 20 drives I'm at capacity and I need more storage. No ports left, so I'm looking for a good way to expand the number of ports; less money and less power consumption is much better.
     Options I considered but deemed not a good fit:
     - Swapping HDDs for larger ones. I'm already running 5TB drives, so I can't go bigger in 2.5". Going 3.5" would mean changing out at least 3 drives (I have dual parity) to gain anything, and with at least 16TB being the best option that would set me back at least $800. Too expensive, and 3.5" drives also make too much noise for my taste. The server sits in a room I often work in, so that's a big deal; the 2.5" drives are so quiet I can't even hear them, and I have a couple of spares laying around.
     - Swapping 1x M1015 for a 16-port HBA. Again, this would cost a lot. You can get one for around $170 on eBay, but I would like to get 2 so I have a spare on hand just in case; eBay shipping can take weeks if not months and I don't want my server to be down that long. At $340 this is too costly in my opinion.
     The only real option I found is to swap 1x M1015 for an expander. Looking around, the Intel RES2SV240 seems to be the best choice? Around $60, not a bad deal? The Lenovo one is really cheap at just $30, but I don't think I can get enough ports with that option?
     After reading the performance topic on throughput, I'm a bit worried that the PCIe 2.0 M1015 is going to bottleneck my drives quite a bit. They start a parity check at around 135 MB/s. With (in theory) 20 drives on the expander they would be bottlenecked to around 113 MB/s, if my math is correct? (See the throughput sketch below the post list.) Could this be damaging in some other way than parity just taking a bit longer? Perhaps I should swap the M1015 for a PCIe 3.0 card like the 9207-8i and sell my M1015s?
     So in short:
     Now: 2x M1015
     Option 1: 1x M1015 (PCIe 2.0 bottleneck with 20 drives on the expander?) + 1x Intel RES2SV240
     Option 2: 1x 9207-8i PCIe 3.0 (Dell or a different one?) + 1x Intel RES2SV240
     Option 3: ?
     Suggestions are welcome. Would love to hear your thoughts on this. Thanks.
  3. @JorgeB I found the following post that claims to 'fix' this issue: https://forum.proxmox.com/threads/smart-error-health-detected-on-host.109580/#post-475308
     Where and how do I use this command in Unraid?
  4. For people experiencing the same issue: DelugeVPN is running but the Web UI is not available. Edit the container and check that LAN_NETWORK is set to your local LAN subnet (192.168.1.0/24, for example - see the snippet below the post list). Mine was set to 'localhost', which meant DelugeVPN and every container routed through it became unreachable; none of their Web UIs would load. I'm not sure how this happened, because until yesterday everything was working just fine. I followed Space Invader One's video when setting things up, and re-watching it he also puts in the LAN subnet. So I have no idea where 'localhost' came from... Could an update of the container have caused this?
  5. Hi, as the title says: can I force new docker containers to be added to the bottom of the docker container list? Right now they get added to the top, which forces me to move them, otherwise the startup order and IPs get messed up. I tried searching, but this seems to be too generic a search term.
  6. Yes, SWAG with Cloudflare. I had already found that solution online and tried it. No luck. I also played around with timeout settings and set the chunk size to 50MB - nothing. Only WebDAV was giving me issues...
     WebDAV upload with an Android app: I contacted the developer and he looked into it. He built an option to force the app to use a 50MB chunk size, and that worked! So apparently a plain WebDAV connection doesn't do chunking on its own. At least it doesn't listen to the server, and that's why it fails behind a reverse proxy with Cloudflare.
     Nextcloud only mentions this: https://docs.nextcloud.com/server/latest/developer_manual/client_apis/WebDAV/chunking.html
     Yeah, how am I, a simple user, supposed to use that with WebDAV? (A rough sketch of that chunking API is included below the post list.) I tried both addresses in Windows and they work, but keep hitting that 100MB upload limit. So no chunking, it seems... Perhaps similar to this: https://github.com/nextcloud/server/issues/4109 and a missing feature? Or I'm missing something.
  7. Did you find a way to get around the 100MB upload limit when proxying Nextcloud through Cloudflare? When I proxy Nextcloud through Cloudflare, uploads only become a problem over a WebDAV connection; uploads through the desktop client and the web interface work just fine. It's like WebDAV doesn't use chunked uploads? So I found this: https://docs.nextcloud.com/server/latest/developer_manual/client_apis/WebDAV/chunking.html
     WebDAV address mentioned there: https://server/remote.php/dav/uploads/<userid> - fails with a "403 Forbidden" message.
     "Normal" WebDAV address stated in Nextcloud: https://server/remote.php/dav/files/<userid> - fails because the connection is closed by Cloudflare due to the 100MB limit.
  8. Okay, but why do they get disabled? The two other disks had read errors too, yet they came back online after the reboot, so I don't understand what made disks 2 and 3 different. And why does Unraid (seemingly always?) disable disks in such a scenario? Is it something preemptive, and what is it preventing by disabling those disks?
  9. Great, back to normal operation it is. Really, really appreciate the help! Thank you! I'm not sure if I understand this correctly: are you saying Unraid will never disable more than 2 disks (with dual parity)? How does that work? (If there is a wiki page about this, a link will suffice, of course.)
  10. @JorgeB Sorry, somehow I accidentally posted before I even started typing haha. I thought Tab would select the username after typing @, but it selected "Submit reply" and I pressed Enter. My bad.
  11. @JorgeB @trurl (sorry, somehow pressing Enter posted right away...) Success! What a relief.
      Disk 2 returned to normal operation
      Disk 3 returned to normal operation
      Parity sync / Data rebuild finished - 0 errors found
      Duration: 13 hours, 44 minutes, 39 seconds. Average speed: 101.1 MB/sec (see the quick check below the post list - that lines up nicely with the drive size)
      I don't think another parity check is necessary, right? Lessons learned: never let the server go to sleep again when using an LSI card, that's for sure haha. But I still wonder, how does Unraid handle a failing LSI card? I was really lucky this time to have dual parity, but my other LSI card has 8 drives connected to it... I hate to think what would have happened if that one had failed, because what are the odds of "just" 2 drives getting disabled in such a case? RIP array? Or how does Unraid handle this? I know from the past that having 'start array at boot' enabled AND a faulty cable is a combination guaranteed to end in a disabled disk and thus a rebuild. For that reason alone I disabled array auto-start a while back.
  12. Of course. Whenever something like this happens I just disable all services. Too bad for my Plex users, but better safe than sorry. I'm skipping the pre-clears, because both drives were already pre-cleared about a month ago. Thankfully I bought some extras on Black Friday, so no need to pre-clear them again.
      Thanks, I just shucked 2 drives and will be replacing them tonight so the server can rebuild overnight and during the rest of the day. So to sum things up (I don't want to screw this up): I can follow the "replacing failed/disabled disk(s)" section from https://wiki.unraid.net/Manual/Storage_Management#Replacing_disks
      To translate that to my situation, and just to be 100% sure that what I'm going to do is the right way:
      - Stop the array.
      - Power down the unit.
      - Replace disks 2 and 3 with the spares.
      - Double-check that all cables are connected properly.
      - Power up the unit.
      - Assign the spares to the disk 2 and 3 slots.
      - Tick the "Yes I want to do this" checkbox.
      - Tick the "Maintenance mode" checkbox.
      - Click Start.
      - Click Sync to trigger the rebuild.
      - Fingers crossed, and report back with any problems or success.
      Maintenance mode seems like the safest option to me. Can you confirm that these are the right steps and that I'm not missing anything?
      EDIT: Successfully replaced disks 2 and 3 and the array is now being rebuilt. See you in ~14 hours, hopefully with some good news.
  13. Okay, and because the disks are mounting there is no need to check the filesystem, correct? I have done a rebuild in the past (probably also caused by this sleep issue), but never 2 drives at the same time. Is there more risk involved when rebuilding 2 drives at the same time? I mean, both drives are still connected to the same cable and LSI card. Are you sure this was caused by sleep mode temporarily upsetting the LSI card? Would it be wise to swap disks 2 and 3 for new (pre-cleared) drives and start the rebuild onto those? In case something does go wrong with the rebuild, I would still have disks 2 and 3 laying around so I could recover the data by simply copying it to the new disks. Or am I overthinking this and should I do as you say and rebuild onto the existing drives?
  14. See attached diagnostics. Don't mind the first two failed array starts; the keyfile was missing. After the reboot:
      Server-UR: Notice [SERVER-UR] - array turned good - Array has 0 disks with read errors
      Disks 2 and 3 are still disabled and emulated.
      server-ur-diagnostics-20220119-1411.zip
  15. Is that a 'thing', that LSI cards don't like sleep? I didn't realize that, but after reading your comment I googled a bit and there are quite a few topics where people say things like "server parts are not meant for sleep mode" about LSI cards. Hadn't even thought about this for a second... dumbdumbdumb. So to sum things up and see if I understand you correctly:
      - reboot
      - start the array - in maintenance mode, I presume?
      - check everything
      - download diagnostics
      Correct?
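A quick sanity check of the throughput math in posts 1 and 2, as a small Python sketch. The link speeds are the round figures quoted in those posts (plus the ~20% 8b/10b encoding overhead on SAS 6 Gbps, which those posts leave out); treat these as back-of-the-envelope numbers, not a benchmark.

```python
# Back-of-the-envelope check of the per-drive figures quoted in posts 1 and 2.
# All link speeds are the round numbers from those posts, not measured values;
# real-world throughput also depends on controller firmware and protocol overhead.

drives = 20
drive_max = 135  # MB/s - roughly what these 2.5" Seagates sustain at the start of a parity check

scenarios = {
    "SAS x4 link to the expander, raw (4 x 6 Gbps)": 24000 / 8,  # 3000 MB/s
    "SAS x4 link after 8b/10b encoding":             4 * 600,    # ~2400 MB/s usable
    "PCIe 2.0, x4 worth of lanes (500 MB/s x 4)":    500 * 4,    # 2000 MB/s
    "PCIe 3.0 x4, theoretical (985 MB/s x 4)":       985 * 4,    # 3940 MB/s
    "PCIe 3.0, practical figure from post 1":        3200,
}

for label, total_mbps in scenarios.items():
    per_drive = total_mbps / drives
    verdict = "headroom" if per_drive >= drive_max else "bottleneck"
    print(f"{label}: {per_drive:.0f} MB/s per drive -> {verdict} vs {drive_max} MB/s drives")
```

Only the PCIe 3.0 scenarios leave the 135 MB/s drives with headroom, which matches the conclusion in post 1 that a 9207-8i (or a similar PCIe 3.0 HBA) removes the bottleneck.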
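On the DelugeVPN fix in post 4: the container expects LAN_NETWORK to be a subnet in CIDR notation, not a hostname. A minimal sketch (the two test values are just examples) of how to sanity-check a candidate value with Python's ipaddress module before pasting it into the template:

```python
import ipaddress

# '192.168.1.0/24' is the kind of value the template expects; 'localhost' is what
# broke the setup described in post 4.
for value in ("192.168.1.0/24", "localhost"):
    try:
        net = ipaddress.ip_network(value, strict=True)
        print(f"{value!r} is a usable LAN_NETWORK value: {net}")
    except ValueError as err:
        print(f"{value!r} is not a subnet, so the container cannot derive LAN rules from it ({err})")
```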
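For the WebDAV upload limit discussed in posts 6 and 7, here is a minimal sketch of the chunked-upload flow described on the Nextcloud doc page linked there, assuming Basic auth with an app password and 50 MB chunks to stay under Cloudflare's 100 MB request limit. The server URL, credentials, file names and chunk naming are placeholder assumptions; verify the details against that doc page before relying on this.

```python
import uuid
import requests

SERVER = "https://server"                       # Nextcloud base URL (placeholder)
USER, APP_PASSWORD = "userid", "app-password"   # placeholder credentials
AUTH = (USER, APP_PASSWORD)
CHUNK = 50 * 1024 * 1024                        # 50 MB per request, under Cloudflare's 100 MB limit

def chunked_upload(local_path: str, remote_path: str) -> None:
    # 1. create a temporary upload collection
    upload_dir = f"{SERVER}/remote.php/dav/uploads/{USER}/{uuid.uuid4().hex}"
    requests.request("MKCOL", upload_dir, auth=AUTH).raise_for_status()

    # 2. upload the file in chunks; names must sort in upload order, hence the zero padding
    with open(local_path, "rb") as fh:
        index = 0
        while chunk := fh.read(CHUNK):
            index += 1
            requests.put(f"{upload_dir}/{index:06d}", data=chunk, auth=AUTH).raise_for_status()

    # 3. ask the server to assemble the chunks at the final location
    destination = f"{SERVER}/remote.php/dav/files/{USER}/{remote_path}"
    requests.request("MOVE", f"{upload_dir}/.file", auth=AUTH,
                     headers={"Destination": destination}).raise_for_status()

chunked_upload("big-backup.zip", "Backups/big-backup.zip")
```

Because every individual request stays below 100 MB, this pattern should pass through the Cloudflare proxy even when a single plain PUT of the whole file gets rejected.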
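And a quick check that the rebuild duration reported in post 11 is consistent with the average speed, assuming the 5 TB drives mentioned in post 2:

```python
# 5 TB rebuilt at 101.1 MB/s should take roughly the 13 h 44 min reported in post 11.
size_bytes = 5e12         # 5 TB (decimal), assumption based on post 2
avg_speed = 101.1e6       # 101.1 MB/s as reported by Unraid
seconds = size_bytes / avg_speed
hours, rest = divmod(seconds, 3600)
print(f"expected rebuild time: ~{int(hours)} h {rest / 60:.0f} min")  # ~13 h 44 min
```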