jkBuckethead

Members
  • Content Count: 28
  • Joined
  • Last visited

Community Reputation: 5 (Neutral)

About jkBuckethead
  • Rank: Member
  1. Thanks, you've confirmed my thoughts. I knew I could turn on the PSU with a jumper, I just didn't know if there was a reason I shouldn't.
  2. Seems like your response is the closest to what I want to achieve. If nothing is connected to the MB, then you are controlling the PSU directly. It's not clear, though, whether your remote switch is connected to the incoming power or to the PSU output, so please clarify how you are turning the PSU on and keeping it on. Did you install a permanent jumper on the 24-pin connector, and now you switch the power from the wall? Or is your electronic switch connected to the 24-pin connector? Either way would confirm my thought that a jumper to activate the PSU, whether solid or switched, is all I need to turn it on.
  3. Thanks for the input, both of you. I'm not keen on the idea of splicing into my PSU cables. I'd need a pair of 24-pin extensions so I could splice without permanently damaging my PSU cables, plus some sort of connectors at the back of the machine for easy disconnection when I need to move them. Cheaper than the Supermicro widget, but still probably $15-20 in parts. Both of these solutions would also require additional cabling between the two enclosures. I'm not aware of any off-the-shelf cables that would work for either option, so I would have to rig something up from adapters and old cable parts. I might even have to solder, which I suck at. I'm not concerned about keeping the PSUs in sync; except for upgrades and/or repairs, this server runs 24/7. I think I'll stick with a solution that doesn't involve extra connections between the two machines.
  4. Looking for a little room to grow. I'm planning to use one of the currently available mini-ITX enclosures with 8 hot-swap bays to house the drives, connected to an external SAS HBA in the main system through a pass-thru in the back of the case. I know that on a dollar-per-bay basis I would do better with a used server chassis, but I don't need that much expansion and I have no place to mount a server chassis. My question is how to power the external chassis, since it won't have a motherboard. Do I really need something like the SuperMicro JBPWR2 power board, or can I simply turn on the PSU with a jumper (see the pinout sketch below)? If all I need is a jumper for the PSU, I'm looking at this switch to make powering off and on easier. If I need something more, I also have an old ASUS AT5IONT-I board with an integrated Atom CPU lying around collecting dust. I'm thinking I could use it to control the PSU, and it would just sit in a constant state of failed boot without a boot drive. This would waste a bit of power, but with a 13W CPU, not too much.
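     For reference, powering on an ATX PSU without a motherboard only requires pulling the PS_ON# signal to ground, which is all a jumper, a switch, or a power board does. A sketch of the relevant pins on the 24-pin connector (standard ATX pinout; verify the wire colors against your PSU's documentation):

       Pin 16 (PS_ON#, green) --+
                                +-- jumper or latching switch
       Pin 17 (COM,    black) --+

       Pin 9 (+5VSB, purple) stays live whenever the PSU is plugged in.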
  5. I have had no issues using the Aquantia AQtion 10G Pro NIC in my unraid machine. The card is multi-gig so it supports 1, 2.5, 5, and 10G depending on the connection at the other end and the length and quality of the cable. In my case it sits right next to my main switch with a CAT 7 patch cord connection, but is limited to 5G because it is connected to a 5G port on my switch. Still, with spinning hard drives this is more than enough speed.
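     For anyone wanting to verify the negotiated speed, it's a quick check from the terminal. A minimal sketch, assuming the card shows up as eth0:

       # Report the link speed the NIC negotiated with the switch
       ethtool eth0 | grep -E 'Speed|Duplex'
       # On a 5G switch port this should report: Speed: 5000Mb/s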
  6. A couple of weeks ago, completely out of the blue, I saw I had errors on two storage drives, plus one of my two parity drives was offline. The first sign something was weird was that both storage drives had the exact same number of errors, which would be a huge coincidence if these were physical drive failures. It turned out that all three drives were connected to the same breakout cable (the 4th connector was unused) on my LSI 9207-8i HBA. Thinking it might be a bad cable, I swapped out the cable and rebooted. I rebuilt the 2nd parity drive and everything was fine for the past two weeks. Tonight, I updated to version 6.8.1. Right after rebooting I saw a strange warning that one of my cache drives was unavailable; oddly, when I checked the drive on the MAIN page it said the drive was operating normally. A few minutes later, the same parity and two storage drives started having problems similar to before. While the cache drive is on a different breakout cable, it is connected to the same HBA as the other malfunctioning drives. I shut down and swapped the HBA for a spare I just bought for another machine. It seems like the HBA may be sketchy. I prefer not to put it back into service without confirming it is healthy, but I also don't want to buy another if it isn't necessary. Does anyone know of any software tools or other methods for testing an HBA? unbucket-diagnostics-20200113-2308.zip
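     As far as I know there's no dedicated diagnostic for these cards, but a few terminal checks can build confidence in a suspect LSI HBA. A sketch, assuming a SAS2308-based card like the 9207-8i and that LSI's sas2flash utility is on hand:

       # Confirm the card still enumerates on the PCIe bus
       lspci | grep -i lsi

       # Query controller, firmware, and BIOS versions
       sas2flash -listall

       # Watch for link resets or transport errors while the array is busy
       dmesg | grep -iE 'mpt2sas|fault'

       # Rule out the drives themselves with SMART, e.g. for /dev/sdb
       smartctl -a /dev/sdb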
  7. Best Buy is currently offering $90 off the 10TB Easystore, making it $159.99, which is $40 less than the 8TB at $199.99. They also have $100 off the 14TB model, making it $209.99; if you can go big, $15/TB is not too shabby. No deal on the 12TB, so it stays at $249.99.
  8. Thanks to all. The file system status check with the -L switch seems to have done the trick.
  9. Unfortunately I tried both, but with no positive results. First I tried xfs_repair from the terminal window, which seemed to return an error and stop. I can't figure out how to copy text from the terminal window, so I've done my best to reproduce the results below.

     xfs_repair result

     Phase 1 - find and verify superblock...
             - block cache size set to 323016 entries
     Phase 2 - using internal log
             - zero log...
     zero_log: head block 116006 tail block 116002
     ERROR: The filesystem has valuable metadata changes in the log which
     needs to be replayed. Mount the filesystem to replay the log, and
     unmount it before re-running xfs_repair. If you are unable to mount
     the filesystem, then use the -L option to destroy the log and attempt
     a repair. Note that destroying the log may cause corruption --
     please attempt a mount of the filesystem before doing this.

     I checked out the -L option under xfs_repair, and it had this to say:

     -L  Force Log Zeroing. Forces xfs_repair to zero the log even if it
         is dirty (contains metadata changes). When using this option the
         filesystem will likely appear to be corrupt, and can cause the
         loss of user files and/or data.

     Since this didn't sound promising, I moved on to the webGUI option. It first returned an ALERT similar to the error above, which still essentially cautioned me to first mount my (unmountable) filesystem. It did, however, complete the scan; the results are below. The wiki indicates that the file system check should clearly state what steps to take if a repair is required. Maybe I'm missing something, but I don't see any suggestions below. Since nothing was suggested, I restarted the array normally and the disk is still unmountable. With no obvious error, does this mean my disk is toast? My replacement should arrive tomorrow. Is rebuilding from parity the best option? After that I can pull the disk and test it more thoroughly.

     webGUI file system check results

     Phase 1 - find and verify superblock...
             - block cache size set to 323016 entries
     Phase 2 - using internal log
             - zero log...
     zero_log: head block 116006 tail block 116002
     ALERT: The filesystem has valuable metadata changes in a log which is
     being ignored because the -n option was used. Expect spurious
     inconsistencies which may be resolved by first mounting the filesystem
     to replay the log.
             - scan filesystem freespace and inode maps...
             - found root inode chunk
     Phase 3 - for each AG...
             - scan (but don't clear) agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - agno = 4
             - agno = 5
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - agno = 4
             - agno = 5
     No modify flag set, skipping phase 5
     Phase 6 - check inode connectivity...
             - traversing filesystem ...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - agno = 4
             - agno = 5
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify link counts...
     Maximum metadata LSN (1:120049) is ahead of log (1:116006).
     Would format log to cycle 4.
     No modify flag set, skipping filesystem flush and exiting.

     XFS_REPAIR Summary    Thu May 16 18:40:32 2019

     Phase           Start           End             Duration
     Phase 1:        05/16 18:40:31  05/16 18:40:31
     Phase 2:        05/16 18:40:31  05/16 18:40:31
     Phase 3:        05/16 18:40:31  05/16 18:40:32  1 second
     Phase 4:        05/16 18:40:32  05/16 18:40:32
     Phase 5:        Skipped
     Phase 6:        05/16 18:40:32  05/16 18:40:32
     Phase 7:        05/16 18:40:32  05/16 18:40:32

     Total run time: 1 second
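     For reference, the -L repair that item 8 above says eventually did the trick looks like this from the terminal. A sketch, assuming the array is started in maintenance mode so that disk 4 appears as /dev/md4 (running against the md device rather than the raw disk keeps parity in sync):

       # Dry run first: -n reports problems without changing anything
       xfs_repair -n /dev/md4

       # Only after a mount attempt fails: zero the dirty log and repair.
       # The unreplayed metadata changes are lost, so check lost+found
       # afterwards.
       xfs_repair -L /dev/md4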
  10. DITTO, the same thing happened to me. The Common Problems plugin notified me of an issue; the error was "Unable to communicate with GitHub.com", but I have also determined that my Plex server (running as a docker) is not available outside of my network. Fortunately I have no problem seeing the server or my unraid shares on my local network. I checked the network settings and discovered that the motherboard ethernet port, which had been disabled for many months in favor of a Mellanox 10Gb network card, was no longer disabled. I set the interface back to PORT DOWN and rebooted. Unfortunately, after rebooting the server still cannot connect outside of my network. EDIT: I rolled back the OS to version 6.6.7 and the network connectivity issues are gone. No settings changes, just the OS version. Definitely looks like a network issue with OS 6.7.0. unbucket-diagnostics-20190516-0320.zip
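     A few quick terminal checks can help pin down where connectivity breaks after an upgrade like this. A minimal sketch, assuming the Mellanox card shows up as eth0:

       # Confirm which interfaces are up and which owns the default route
       ip -br link
       ip route show default

       # Test raw internet reachability first, then DNS separately
       ping -c 3 8.8.8.8
       ping -c 3 github.com

     If the IP ping works but the hostname does not, the problem is DNS rather than routing.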
  11. Got home from work today and happened to see that a new unraid version was available, so I decided to upgrade. I backed up the flash drive, then ran the upgrade assistant tool, which advised me to update a couple of plugins, which I did. With the plugins sorted, I started the upgrade, which completed just fine and told me to reboot. Upon rebooting, Disk 4 is now showing as "Unmountable: No file system". Prior to the upgrade I didn't notice any issues or warnings. About 3 weeks ago the server did shut down abnormally when a power outage exceeded my UPS standby time, but it had been running fine since being restarted. I have only tried the most basic troubleshooting. I tried restarting, with no change. I also tried a new SATA cable and swapped SATA ports on the motherboard, but the error stays with the original disk 4, not the cable or port. I have attached my diagnostics zip file. I don't have a spare disk at the moment because I used it in another system, but I already have one on order. In the meantime I have shut down the server, since it houses less-than-critical data that I can live without for a few days. While I wait, I would like to know what happened, in case it's something I did or something otherwise preventable in the future. I also need to read up on how to replace a drive and rebuild the array, since this is my first failure since starting to use unraid. EDIT: Rolled back the OS to version 6.6.7, but the drive is still unmountable. Seems like it may be a drive issue that just didn't show up until the system was rebooted. ghost-diagnostics-20190516-0154.zip
  12. Thanks, that seems to have fixed the problem. The process is now running at 80-90 MB/s and should be done in a day and a half; 5 minutes in, it had already done more than in 5 hours yesterday. I replaced the two cables and, of course, gave all the rest a push to make sure they were seated. Like I said, I am still learning. I'm guessing that ATA# refers to the actual interface, but 3 and 6 didn't correspond to the port numbers those drives were connected to on the motherboard, even if I start counting at 0; for example, the drives were connected to SATA_1 and SATA_4. Could you please tell me how you determined which interface was connected to which drive? Finally, thanks for the tip about the diagnostics tool. I posted the system log because it was the first thing I found that was full of errors; I now see that the diagnostics tool includes the log plus other useful info.
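     For anyone else trying to match ATA numbers to drives: the kernel exposes the mapping in sysfs, so there is no need to guess from the motherboard port labels. A sketch, assuming a standard Linux /sys layout (drives behind a SAS HBA won't show an ataN entry):

       # Each disk's sysfs path includes the ataN port it is attached to
       for d in /sys/block/sd?; do
         echo "$d -> $(readlink -f "$d" | grep -o 'ata[0-9]*' | head -1)"
       done

       # Or search the kernel log for what attached to a given port
       dmesg | grep -i 'ata6'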
  13. I've just gotten started with unraid in the past few months. My first server is working great, but the case has no more room for drives. Instead of getting a server case I have no place to mount, I decided to create a second server using an old computer I had on hand. I started by installing 10 hard drives salvaged from my old Amahi server, which I stopped using. Drive sizes range from 5 to 8 TB. I didn't have a parity drive to start, but I recently purchased a 10TB drive to use for parity. I installed it, assigned it to parity, and the parity sync began. Unfortunately, the process is proceeding at a snail's pace. Right now the anticipated completion time is 55 days, but I have seen it well over 100 days. Progress is slow on two fronts. First, I haven't seen the read speed exceed 25 MB/s, and most of the time it is between 1 and 2 MB/s. As if that weren't bad enough, it only reads for a few seconds before it stops and all drives read 0 MB/s for a few seconds; this on-and-off really hurts the average speed. This is not a powerful machine, but this is the first time I have had any speed issues. All drives are mechanical (no SSDs), and some are connected via SATA II ports, either on the motherboard or on PCIe cards. The cards are PCIe 2.0 x1, but one has only one port and the other only two, so they shouldn't be bottlenecking this badly. The CPU is only an Athlon X2 270, but according to the dashboard it is barely topping 25%, so it doesn't seem like a bottleneck either. I've attached the system log. There is an error that keeps repeating on either ATA3.00 or ATA6.00; I'm guessing this is what's causing my problem. Can someone please help me identify what the error is? Also, how do I correlate the ATA device with actual drives, so I know which drive or cable to check? For now I am going to shut down and remove the parity drive. I'm leaving town tomorrow for the holidays, so I won't have time to fool with it till I get back in a few days. ghost-syslog-20181223-0143.zip
  14. I'm not completely certain that I know how to answer that. When I check permissions for a file using Krusader, I first see a screen with separate pulldown menus for Owner, Group and Others; each menu is set to "Can View & Modify". At the bottom is a box labeled Ownership, which says User: nobody and Group: users. This seems to match what you are saying it should be. I tried running the tool on part of one share. Unfortunately there was no change in my ability to play videos using VLC. It did, however, change the file permissions from rwx to rw-.
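     For context, the standard unraid permission scheme the tool restores is essentially the following. A sketch, assuming a share at /mnt/user/Videos (the exact flags the tool applies may differ):

       # Standard unraid ownership for share contents
       chown -R nobody:users /mnt/user/Videos

       # Directories need the execute bit to be traversed; plain files don't
       find /mnt/user/Videos -type d -exec chmod 777 {} +
       find /mnt/user/Videos -type f -exec chmod 666 {} +

     The rwx to rw- change on files is expected and harmless for video playback, so the VLC problem likely lies elsewhere.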