mfwade

Members
  • Content Count: 32
  • Community Reputation: 5 (Neutral)
  • Rank: Advanced Member
  1. Well, new information. Not sure if it pertains to the finally-running parity check, though. I am seeing the following in the logs:

        Jun 1 10:55:12 unRAID-1 kernel: CPU: 24 PID: 5156 Comm: unraidd Tainted: G O 4.19.41-Unraid #1
        Jun 1 10:55:12 unRAID-1 kernel: Call Trace:

     Could this be a new issue, or a symptom of the now-running parity check? Latest diagnostics attached. -MW unraid-1-diagnostics-20190601-1459.zip
  2. Thanks Johnnie.Black, I did try those settings, with the same result: the server web interface hung and the network interfaces dropped, though the parity check appears to keep running. It seems the only setting that has made any difference is the Tunable (md_write_method) => Reconstruct Write. Nonetheless, it is running now, albeit at slower speeds than I would expect from SSDs; it should still complete in under 24 hours. In the storage world, this same operation (background verify tasks) could take days or weeks to complete, so I guess I should be grateful for what I have. If @limetech is reading these, I would be more than happy to perform any type of testing you may need on this or another array. I can stand up another one relatively quickly with all SAS drives and a few scattered SATA SSDs and HDDs if you need those as well. Thanks again! -MW
  3. Had some time this morning to work on the UNRAID server. I made one change in the Disk Settings section: Tunable (md_write_method) => Reconstruct Write. I wouldn't expect this to make a difference unless there is an issue with the current parity hash or checksums. So far, so good. The network connectivity to the array has not dropped; however, the web interface is still very, very sluggish. I am pleased that the parity check is currently running, even if it's a staggeringly slow 44-47 MB/s per drive across 30 drives. I would expect significantly faster speeds from each of the SSDs. I know there are a few of you ( @johnnie.black is one? ) with all-flash arrays. I suspect that most are all SATA, however (mine is all SAS); if you wouldn't mind posting your Disk Settings (or anything else relevant) so that I can try to incorporate them and speed up my array, I would be sincerely grateful. Thanks again, -MW
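     For anyone who wants to flip the same tunable without going through the web UI, here is a minimal sketch, assuming the stock Unraid mdcmd helper and that a value of 1 corresponds to Reconstruct Write on this release (worth verifying against the help text on your own Disk Settings page):

        # Switch md_write_method to reconstruct ("turbo") write on a running array;
        # the Disk Settings page should reflect the new value afterwards.
        /usr/local/sbin/mdcmd set md_write_method 1

     The GUI setting and the command above touch the same driver tunable; the value saved in Disk Settings is what gets re-applied at the next array start.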
  4. I am considering starting over, per se, or rather formatting my USB stick with a fresh UNRAID image. Thoughts? This is really only an exercise to rule out any 'weirdness' that may be going on with the currently installed image. Ultimately, my real concern is a disk failure and, as it currently stands, the inability to run a parity check or to verify whether parity is in fact correct. There are 'green' health indicators; however, erring on the side of caution, I need to ensure the data is intact and can be rebuilt. If I do this, I would really like to preserve the following (no plugins will be installed initially):
     • Docker container configuration
     • Virtual machine configuration and layout
     • Disk layout
     • Shares
     • User names / passwords
     • Cache pool (4-disk BTRFS pool) - appdata and domains reside on cache, and other shares reside there as well
     I don't mind reconfiguring the hostname, network, etc.; it's simple enough to redo, as I have 4 connections set up in failover mode. Is there a document that details the config files I need to preserve and move over to the newly formatted USB? Anything else I am missing? Should I even go down this path? Other than the inability to run or verify parity, UNRAID appears to be running phenomenally! And, it goes without saying... the new web GUI looks great! -MW
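     In case it helps anyone planning the same rebuild, a minimal sketch of the kind of flash backup that covers the list above, assuming stock Unraid 6.7 paths (double-check the file names against your own /boot/config before wiping the stick):

        # Copy the whole config tree first - it is small and contains everything listed below
        cp -r /boot/config /mnt/user/backup/flash-config-$(date +%Y%m%d)

        # Files/directories of particular interest for this rebuild (paths assumed, verify locally):
        #   config/super.dat                            disk and cache assignments (array layout)
        #   config/shares/                              per-share settings
        #   config/passwd, shadow, smbpasswd            user names and passwords
        #   config/docker.cfg, domain.cfg               Docker and VM service settings
        #   config/plugins/dockerMan/templates-user/    Docker container templates
        #   config/*.key                                licence key
        # VM XML definitions live inside the libvirt image on the array/cache
        # (default /mnt/user/system/libvirt/libvirt.img), not on the flash drive itself.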
  5. I would tend to agree with you on some server-type gear and the staggered spin-up of drives; however, in the storage world (Pure, NetApp, EMC, etc.), all drives are on all of the time. In this case they are all flash drives, so the overall power draw is significantly lower than that of a tray full of spinners. Power draw: all DAE trays contain either dual 850-watt or dual 1150-watt power supplies, and the server contains dual 750-watt power supplies currently configured for high-efficiency / balanced mode. Total combined usage:
     • Server (0 drives): in use - 268 watts
     • DAE Tray 1 (20 drives): idle - 32 watts, in use - 161 watts
     • DAE Tray 2 (14 drives): idle - 32 watts, in use - 118 watts
     • DAE Tray 3 (0 drives): idle - 32 watts
     I don't believe this to be a power supply issue. As stated earlier, the parity check does kick off and it does appear to run; however, while it is running I lose all connectivity to the UNRAID server, and the web interface becomes unresponsive when logged in at the console. -MW
  6. To clarify a bit further... The server contains no drives at this time.
     • H200E connection 1 => SAS DAE tray 1: 14 data drives, 2 parity drives, 4 cache drives (all 2.5")
     • H200E connection 2 => SAS DAE tray 2: 14 data drives (all 2.5")
     • DAE tray 2 SAS expansion => SAS DAE tray 3: 0 drives - will be used for Unassigned Devices (at some point)
     The server and all SAS DAEs are connected to the same rackmount APC Smart-UPS 1500, which is running at around 65% load. -MW
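     As a rough sanity check on that 65% figure: the in-use numbers from the previous post sum to 268 + 161 + 118 + 32 = 579 watts, and a 1500 VA Smart-UPS is typically rated for roughly 900-1000 watts of real power (the exact wattage depends on the model, which is an assumption here), so a load reading in the 60-65% range is about what you would expect.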
  7. Server power supplies are rated at 750 watts x 2. Power draw at startup is 392 watts; at idle, 242 watts. The SAS trays each have dual 1150-watt power supplies; at idle with no drives, they run at 40 watts total. These are enterprise-rated disk trays, rated for 25 HDDs, and the SSDs I am using consume much less than a spinner.
  8. Good afternoon, I am having an issue when trying to manually run or schedule a parity check. Every time I try to kick one off, the entire system hangs. More specifically, when I am on the console and kick it off, I see all the drive lights light up as if it's doing a parity check; however, the web instance becomes unresponsive. Furthermore, when monitoring from a remote computer, I see that the network (ping test) drops altogether. I have verified this behavior both on my older Supermicro server and on the newly provisioned (this past weekend) HP DL365 G7. Please note, I have tried safe mode, GUI mode without plugins, etc., all with the same results. Current setup:
     • UNRAID Version 6.7.0 2019-05-08
     • HP DL365 G7, 192 GB RAM, AMD Opteron 6282 SE @ 2600 MHz
     • Qty. (2) H200 flashed to IT mode - not connected to any drives
     • Qty. (1) H200E flashed to IT mode - connected to external EMC SAS shelves
     • Qty. (30) 4TB enterprise SAS flash drives (2 parity, 28 data)
     • Qty. (4) 4TB enterprise SAS flash drives (BTRFS cache pool)
     • All plugins removed
     • Only modification: /boot/config/go => rmmod acpi_power_meter
     I am attaching the diagnostics file for your review. All that said, I suspect that if I just let it run it would complete in a day or so; however, none of the VMs and containers are available, nor can UNRAID be managed. This has been going on for quite some time, and the only way to recover is to perform a hard shutdown and power-on. When it comes up, I have to be fairly quick to cancel the parity check. Any help is appreciated. Thanks again! -MW unraid-1-diagnostics-20190528-1430.zip
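     For reference, the go-file modification mentioned above amounts to something like the following; a minimal sketch assuming the stock /boot/config/go, whose only job by default is to start the management interface:

        #!/bin/bash
        # /boot/config/go - runs once at boot
        # Start the Management Utility (stock Unraid line)
        /usr/local/sbin/emhttp &
        # Unload the ACPI power meter module, which floods the syslog on some HP ProLiant boards
        rmmod acpi_power_meter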
  9. Good morning, I am in the process of swapping out my existing Supermicro server for another server, an HP ProLiant DL385 G7. I know it's dated; however, it does have more horsepower than the current Supermicro. Specs are as follows:
     • Qty. (2) AMD Opteron 6282 SE @ 2.6 GHz
     • 128 GB RAM
     • BIOS A18
     • Will connect to existing external 2.5" and 3.5" SAS (EMC) enclosures
     • Drive counts: 30 x 4TB SAS SSD, plus numerous Unassigned Devices HDDs
     • Running upwards of 15 Docker containers (Plex, Radarr, Sonarr, Minio, NGINX, etc.)
     Questions:
     • Should I continue to use the existing LSI 9211-8i SAS controllers, or should I look at something different? Currently the parity check runs at 55 MB/s, so a tad on the slow side for SSDs. I suspect the aging architecture, the cards and/or bus, and the dual parity may play a part in the slower speed.
     • Right now I am using an HP 6Gb/s SAS expander to connect all of the internal SAS drives on the Supermicro, along with using the single external connector on the expander to connect the external shelves. I would like to move away from the expander and use a dedicated card. Are there any recommendations for either a PCIe 2.0 or backwards-compatible PCIe 3.0 card? I am looking at an LSI 9207-8e; however, I am not opposed to using something different.
     • Any opposition to bumping up the memory to 256 GB? I have it, so why not use it...
     • Does anyone out there have access to (and be willing to share) HP BIOS and firmware? My IT purchasing has centered around IBM, Dell, Cisco, etc., but never HP. Any assistance is appreciated; I would like to update the server if I can.
     • Any recommendations on settings specific to HP ProLiant gear and UNRAID?
     • Any settings other than what I am currently using (SSD TRIM plugin) to account for the all-SSD array?
     I am sure I left some things out. Feel free to ask any other questions pertinent to making recommendations. Thanks in advance for your support! -MW
  10. Only the two that are showing, sdv and sdw (the ones in the picture above). -MW
  11. I guess I am the only one having this issue. Sigh... Maybe Limetech will see this and a) say yep, there is an issue, b) tell me to deal with it, take my meds, or have a beer, or c) say hey, that's a new one... I guess I should have added: Unraid 6.7.0. At any rate, I do believe it's simply cosmetic. -MW
  12. Definitely a typo. It is set to 6000000, which comes out to roughly 6.29 TB-ish...
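     (For anyone wondering where the 6.29 comes from: read as binary megabytes, 6,000,000 x 1,048,576 bytes ≈ 6.29 x 10^12 bytes, i.e. roughly 6.29 TB; read as decimal megabytes it would be exactly 6 TB. Which interpretation the volume-size field uses is an assumption here.)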
  13. Limetech, I tried that as well. It doesn't work with any of the permission variables. Maybe it just wasn't meant to be, and I will have to go out and buy an older (albeit used) Time Capsule and replace the drive. The main reason I would like to get this to work is that my existing TC is starting to fail. Again, thanks to all for the assist! -MW
  14. Good morning. Moderators, please let me know if this needs to be moved; I posted it here because I am not having an issue with the plugin itself - rather, this seems to be nothing more than a GUI issue. Please advise. I am observing a weird issue with the Unraid web interface. On the main Dashboard page, at the bottom under Unassigned Devices, I see several disks, in this case 4; however, on the Main tab I actually only have 2 mounted. See the screenshots below: the first one is from the Dashboard page, the second is from the Main page. This behavior isn't affecting the use of the mounted shares; it just messes with my OCD. Attaching diagnostics for your review. All in all, I am really liking the new interface. Great job Limetech!! -MW unraid-1-diagnostics-20190514-1050.zip
  15. So, not sure if we should continue with this thread or open a new one. Moderators, please advise. I tried once again this morning to get Time Machine (via SMB) working with 2 Mac computers, one running Mojave and the other running High Sierra, both with the same results. I created a share called 'Time Machine' or 'test-1', etc., and assigned the following SMB attributes (export: yes/time machine, volume size: 60000 for 6TB, and security: private). When looking in either Mac's Time Machine preferences and attempting to add a new disk, the new share is not visible. However, if I mount the SMB share via Finder's Go > Connect to Server, then I am able to see and use the disk in Time Machine.
     I have also tested using AFP. When creating the same type of share via AFP, the following settings were used (export: yes/time machine, volume size: 60000 for 6TB, volume dbpath: empty, and security: private). When I go to browse for the share in Time Machine, I am able to see it. The issue comes when I try to mount it in the Time Machine settings: it takes upwards of 20 seconds to 'connect' to the drive, and when it does finally connect, I am prompted for my credentials and then it errors out. Unfortunately, I did not capture any screenshots of the errors this morning; I will take care of that when I get home this evening and post them.
     So, in summary, if I am not following the proper procedure for using an SMB share with Time Machine, please let me know. I am under the assumption that it will simply appear, much like an AFP share. If others have been able to get this to work, please share your settings for not only creating the share but also how it is exported. I am attaching my diagnostics file; maybe it will be of use to someone in the hopes of figuring this out. Thank you to everyone in advance for reading and providing commentary. The support you provide is sincerely appreciated. -MW unraid-1-diagnostics-20190514-1050.zip
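     For comparison, the Samba options that make an SMB share advertise itself as a Time Machine target are the vfs_fruit ones below. On stock Unraid these are generated by the GUI when the share is exported as "Yes (Time Machine)", so this is only a hand-rolled sketch; the share name, path, user, and size limit are examples:

        [timemachine]
            path = /mnt/user/timemachine
            browseable = yes
            writeable = yes
            valid users = mfwade
            # vfs_fruit is what makes macOS treat the share as a Time Machine destination
            vfs objects = catia fruit streams_xattr
            fruit:time machine = yes
            # cap how much space Time Machine is allowed to consume
            fruit:time machine max size = 6T

     Automatic discovery in the Time Machine picker also depends on the share being advertised over Bonjour/mDNS (Avahi), which may be why it only shows up here after mounting it manually in Finder.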