Everything posted by Spyderturbo007

  1. I'm looking at a server for a friend of mine and wondering whether this controller / backplane combination is compatible with unRAID. The seller's listing includes both, along with "Compatible with hard drives up to 16TB per drive".

     Controller: LSI 9211-8i HBA (JBOD / FREENAS / UNRAID)
     Backplane: BPN-SAS-825TQ 8-port 2U TQ (w/ AMI 9072)

     It's an 8-bay server and has 2 x Xeon E5-2630 V1 hex-core (2.3GHz) processors. Thanks!
  2. I guess I'm a little confused by how it's displaying things. From what I see, it looks like there is a single chip with 8 physical cores and hyperthreading. "CPU 0 - HT 1" would make me read it as: Physical Core 0 - HyperThreading (Logical Core) 1. So the way it is displayed, I'd think that I have a single CPU with 8 physical cores + HT (16 logical cores).
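     One way to untangle the pairing from the console (lscpu is part of util-linux, so it should already be available in the unRAID terminal):

        # Each row is a logical CPU; rows sharing a CORE value are HT siblings,
        # and the SOCKET column shows which physical package each core lives on
        lscpu -e=CPU,CORE,SOCKET,ONLINE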
  3. So I'm wondering how I can tell whether unRAID is "seeing" my second processor. I'm running 2 x AMD Opteron 6212. On the Dashboard, under Processor, it just shows one. The Hardware Profile file appears to show two independent CPUs, provided I'm reading it correctly. Hardware Profile.xml
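     A quick way to count physical packages from the terminal (this just reads the kernel's view, nothing unRAID-specific):

        # Counts distinct physical CPU packages; "2" means both sockets are detected
        grep 'physical id' /proc/cpuinfo | sort -u | wc -l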
  4. I found a bunch of threads and even the "Parity Swap Procedure" wiki article. However, none of them seem to cover a straight replacement of the parity drive with a larger one. My current drive is 6TB and I want to upgrade the parity drive to a larger 10TB drive. What's the best way to handle the swap and keep the array protected in the meantime? Should the new parity drive be precleared? I read that parity drive 2 is different from parity 1, so I'm assuming I don't want to do it that way. Can I just stop the array, shut down the server, remove the current parity drive, assign the new drive as the parity drive, and let it rebuild? Since I'd still have the original parity drive set aside untouched, the array should still be recoverable if something goes wrong during the rebuild.
  5. Parity check finished (0 errors). Duration: 16h 9m, average speed: 103.2 MB/s. Should I just chalk this up as an unusual glitch and move on? Thanks!
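     The duration at least lines up with the drive size; a back-of-the-envelope check for a full sequential pass over the 6TB parity drive:

        # 6 TB read at ~103.2 MB/s works out to about 16.1 hours,
        # which matches the reported 16h 9m
        awk 'BEGIN { printf "%.1f hours\n", 6e12 / 103.2e6 / 3600 }'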
  6. I received a message when I logged in this morning saying the parity was valid. I'm attaching log files from after the rebuild completed. Can someone take a look at them for me and let me know what I should do next? Thanks! tower-diagnostics-20201121-1330.zip
  7. Thanks for the help. The parity rebuild is in progress and is estimated to take 1 day 11 hours. I'll report back when it's finished. I assume I should refrain from using the array while the rebuild is in progress? I don't want to lose any data if another disk fails. I'm also thinking a second parity drive would be a good idea for a situation like this in the future, but I thought I'd ask for your opinions.
  8. How would I go about doing the rebuild on the disk? I'm terrified of losing anything.
  9. Morning all. It says "Completed without error". I'm attaching the SMART report and new diagnostics. Thoughts on what to do next? One weird thing is that if I click on Show, next to SMART self-test history, it says "No self-tests have been logged. (To run self-tests, use: smartctl -t)". WDC_WD60EFRX-68MYMN1_WD-WX51D6422029-20201119-1515.txt tower-diagnostics-20201120-0805.zip
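     For reference, kicking one off and reading the results from the terminal looks like this (replace /dev/sdX with whatever the drive maps to):

        # Start a long (extended) self-test; it runs inside the drive's firmware
        smartctl -t long /dev/sdX

        # Once it finishes, this prints the self-test log the GUI reads from
        smartctl -l selftest /dev/sdX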
  10. Thanks. I wasn't sure what to expect. It's 6TB, so I'll leave the array offline and check back later tonight. I really appreciate the help, Constructor.
  11. The extended SMART test is in progress. It's been at 10% for about 45 minutes, and I'm not sure whether that's normal. Does the array need to be started for it to run the test? I was actually just reading the wiki article on it. I read this part and thought I might just do it as my drives come up for replacement. Some are quite old. "At this point, there is NO general recommendation as to converting existing Reiser drives, UNLESS you are having a known Reiser-related issue. Some feel it is a good idea to begin converting existing drives to XFS, but others do not think it is necessary, and may be an over-reaction to the previous now-fixed issues. At any rate, it does seem wise to consider a slow migration strategy, as drives are added."
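     (In case anyone else wonders, the progress can also be read from the terminal, /dev/sdX being the drive under test:)

        # "Self-test execution status" reports the percentage of the test remaining
        smartctl -a /dev/sdX | grep -A1 'Self-test execution'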
  12. No changes other than normal updates. It's a SuperMicro rack mount chassis with a backplane, so I can't see how any connection issue would affect just a single drive. Don't these point to a drive issue, though? You know more than me, but I thought I'd ask so I can understand how it works.

        Nov 19 02:15:24 Tower kernel: md: disk0 read error, sector=4971446528
        Nov 19 02:15:24 Tower kernel: md: disk0 read error, sector=4971446536
        Nov 19 02:15:24 Tower kernel: md: disk0 read error, sector=4971446544
        Nov 19 02:15:24 Tower kernel: md: disk0 read error, sector=4971446552
        Nov 19 02:15:24 Tower kernel: md: disk0 read error, sector=4971446560
        Nov 19 02:16:51 Tower kernel: md: disk0 write error, sector=4971446528
        Nov 19 02:16:51 Tower kernel: md: disk0 write error, sector=4971446536
        Nov 19 02:16:51 Tower kernel: md: disk0 write error, sector=4971446544
        Nov 19 02:16:51 Tower kernel: md: disk0 write error, sector=4971446552
        Nov 19 02:16:51 Tower kernel: md: disk0 write error, sector=4971446560

     I'm not sure what to do about the ReiserFS.
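     For what it's worth, these are the SMART attributes I'd check first on the flagged drive (sdX being whatever device it maps to):

        # Non-zero reallocated or pending sectors are the classic signs of a dying drive
        smartctl -A /dev/sdX | grep -Ei 'reallocat|pending|uncorrect'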
  13. I was able to get the diagnostics as requested. The only thing I did this morning was stop the array after receiving the message. I've never had this happen before so it's a little odd. Thanks so much for taking the time to help me with this problem. tower-diagnostics-20201119-1323.zip
  14. Thanks itimpi. I'll work on getting that later today. I got paranoid and stopped the array, and since I run a Pihole docker, my Internet at the house is down. I didn't have time to edit DNS before I had to run out of the house for work.
  15. I woke up to this email from my server for my 6TB parity drive.

     Event: Unraid Parity Disk Error
     Subject: Alert [TOWER] - Parity disk in error state (disk dsbl)

     The GUI shows the disk as having:

     22,756,061,247,961 reads
     18,446,744,073,704,421,376 writes
     808 errors

     My assumption is that the drive is toast, so I'm going to order another drive, but I have a few questions.

     1. Is the safest thing to stop the array until the new drive gets delivered and installed? I only have one parity drive, so another drive failing would mean data loss.
     2. Does the new drive need to go through preclear?

     Thanks!
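     (Side note on that write count: it's just arithmetic, but the number sits suspiciously close to 2^64, which looks like a wrapped counter rather than real I/O:)

        # How far below 2^64 the reported write count sits
        echo '2^64 - 18446744073704421376' | bc
        # -> 5130240, i.e. a counter that wrapped below zero, not 18 quintillion real writes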
  16. Wow, thanks! I'll wait for 6.9.0 to purchase the drives. Thanks again for all your help testdasi!
  17. Thanks for the detailed response. I currently have a 250GB SSD, which sometimes isn't enough, so I've been considering upgrading to 500GB or 1TB. Would it be better to go with 2 x 500GB? If I'm understanding the pool feature correctly, it provides redundancy in the event of a cache drive failure, so I'm assuming either 2 x 250GB or 2 x 500GB would be best, depending on how much storage I think I'll need. My server motherboard doesn't have an M.2 slot.
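     If I'm reading the docs right, a two-drive pool defaults to btrfs raid1, so the usable space is one drive's worth (2 x 500GB mirrors down to ~500GB usable). On a running pool the breakdown shows up with:

        # Shows allocation per profile; with raid1 every block lives on both devices
        btrfs filesystem usage /mnt/cache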
  18. Right now I have a single cache drive where I store my AppData folder. I'm wondering if that's the best option, since pretty much all written data passes through the cache drive. Is that dangerous, since the cache drive, and therefore the System folder, doesn't have parity? If the cache drive is lost, so is the System folder. I do have the Community Applications backup service running along with Crashplan, but I've always been funny about having a bunch of redundancy. Perhaps I'm just being over-cautious? Edit -> I just looked and mine is pretty old. According to SMART data it's been on for 5 years 9 months.
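     (For anyone checking their own: that figure comes straight from the drive's SMART counter, with /dev/sdX standing in for the cache device:)

        # ~50,000+ power-on hours is roughly 5 years 9 months of 24/7 uptime
        smartctl -A /dev/sdX | grep Power_On_Hours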
  19. I would think this would be a resource issue, but when I watch the resource monitor while it's unpacking, it's not like all the cores go to 100% or I run out of memory. I can navigate to the unRAID GUI, but all the dockers seem unresponsive. The GUI is really slow, but it does eventually load.

     v6.8.3
     2 x AMD Opteron 6212
     64GB DDR3 Memory
     All docker / system files are on a 256GB SSD

     Diagnostic logs are attached. Thanks! tower-diagnostics-20200720-1525.zip
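     Since the cores and memory look fine, I'm starting to wonder whether the SSD itself saturates during the unpack. Assuming iostat is available (it's part of the sysstat package; sdX = the SSD), that would show it:

        # %util pinned near 100 on the SSD while unpacking points at I/O saturation
        iostat -x 2 sdX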
  20. I did some searching and can't seem to find what I need. I'm looking to make some of my shares read-only for all users except me. When I do that, how are docker apps affected, or are they not? I don't want to break a bunch of apps because they can no longer write to the array or something.
  21. I bought the same server about a week or so ago and had a different issue. Mine wouldn't boot to the flash drive, and I had to go into the LSI BIOS and change the settings for the card. There is a section about control of the card with 3 options (if I remember correctly). It was set to allow both the BIOS and the card to control it. I had to change that to card only for it to recognize my flash drive with more than 12 drives attached. I'm not sure if it will help with your problem, but as johnnie.black said, that would be the first place to look. On a side note, the fans plugged into the front expander backplane are not speed regulated by the setting in the BIOS, so they run much faster, and louder, than they would if they were plugged into the motherboard. I plugged them all into the motherboard and now they run a little slower. @johnnie.black it has 2 expander backplanes: one for the front 24 drives and another jammed under the motherboard for the rear drives.
  22. I'm not sure what happened, to be honest. I was watching a TV show and went to look something up on the Internet. I'm running Pihole as a docker, and DNS was down for some reason. The docker was started, but the Pihole interface was painfully slow to load. The unRAID interface was really slow, and some of its pages wouldn't load. I stopped the array and then started everything again, and it seems fine now. Can someone take a look at the logs and point me in the right direction? Thanks for your time! tower-diagnostics-20200612-2054.zip
  23. I'm getting that warning from Fix Common Problems. I want to make sure I do this right so I don't mess up all my docker containers and lose my settings. It looks like I can do the following:

     1. Stop all containers and set them to not start automatically
     2. Navigate to \\tower\cache and copy everything to my computer
     3. Stop the array
     4. Click on the cache drive and change the file system to BTRFS
     5. Start the array
     6. Copy everything back to \\tower\cache
     7. Start all containers and set them back to auto-start

     Is that the correct way to convert the file system on the cache drive? (A sketch of the copy steps from the console is below.)
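     For steps 2 and 6, copying from the unRAID console instead of over SMB would keep permissions intact. A rough sketch, assuming an array disk with enough free space (the paths are placeholders):

        # Step 2: back up the cache contents to an array disk (keeps perms and xattrs)
        rsync -avX /mnt/cache/ /mnt/disk1/cache_backup/

        # ...steps 3-5: reformat the cache drive to BTRFS via the GUI...

        # Step 6: copy everything back onto the fresh BTRFS cache
        rsync -avX /mnt/disk1/cache_backup/ /mnt/cache/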