About captain_video


  1. I'm using an Asus A88X-PRO motherboard with a quad-core AMD A10 CPU (I forget the exact model). The board supports up to 64 GB of memory (4 x 16 GB). I just upgraded it to 32 GB and it's running fine. The UEFI BIOS sees all four 8 GB DIMMs, and the unRAID dashboard (Pro version 6.6.7) shows 32 GB installed but only 14.977 GB usable, with a maximum installable capacity of 64 GB. Is there a limit on the amount of RAM that unRAID can use?
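unRAID 6 is a 64-bit OS, so the OS itself shouldn't cap memory that low; a gap between installed and usable RAM is often memory reserved by the firmware or the APU's integrated graphics, though the exact cause here is anyone's guess. One way to compare what the BIOS reports with what the kernel can actually use, from the unRAID console or an SSH session (standard Linux commands, nothing unRAID-specific):

```shell
# Compare what the firmware reports against what the kernel can use.
grep MemTotal /proc/meminfo          # RAM the kernel can actually use (kB)
free -h                              # same figure, human-readable
dmesg | grep -i 'memory:'            # kernel boot log: usable vs reserved
dmidecode -t memory | grep -i size   # DIMMs the BIOS reports (needs root)
```

If `dmidecode` shows all four DIMMs but `MemTotal` is far lower, the difference is being reserved before the kernel ever sees it.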
  2. OK, thanks. The data rebuild for the new drive won't be completed until late tonight, and the preclear on the second drive will probably finish late tomorrow night. Both drives are 8 TB, so I won't get a chance to try this until Tuesday at the earliest.
Update: I did a filesystem check from the web GUI after stopping the array and putting it into maintenance mode. After running the repair with various options I was finally able to repair the drive with minimal data loss, and nothing that couldn't be replaced. I wish I had known about this feature when the problem first occurred; I lost data from at least one other drive because of this issue. I'm curious whether my hardware setup had anything to do with it. All of the affected drives were in a Supermicro 5-bay internal enclosure connected to the motherboard SATA ports through eSATA adapter brackets and 6-ft eSATA-to-SATA data cables. I was always getting a lot of UDMA errors on the same drives, so I wonder if that contributed to the corrupted filesystems. I have since reconnected the drives directly to the motherboard without all of the adapters and long cables in the signal path, and so far I haven't seen any UDMA errors.
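For reference, the maintenance-mode repair described above can also be run from the command line. A minimal sketch, assuming disk 1 is the affected slot: on unRAID 6 the parity-protected device for disk N is /dev/mdN, and running the repair against /dev/mdN keeps parity in sync (repairing /dev/sdX directly would invalidate parity):

```shell
# Start the array in Maintenance mode first so the filesystem is unmounted.
xfs_repair -n /dev/md1   # dry run: list problems without changing anything
xfs_repair /dev/md1      # perform the actual repair
# Last resort only: -L zeroes the metadata log and can lose the most
# recent transactions, but lets the repair proceed on a dirty log.
# xfs_repair -L /dev/md1
```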
  3. So what's the best approach to fixing a drive that says "Unmountable: No file system"? Does this mean that xfs_repair won't work until the bug is fixed in the next release of unRAID? I've had multiple drives fail with this issue over the past few weeks, and it's only happened with the latest releases of unRAID. I've been running unRAID for over 10 years and never had anything like this before.
The weird thing is that it only affected drives installed in an enclosure connected to the main server (a Supermicro 5-bay internal enclosure sitting on top of the server chassis). I have since connected the enclosure directly to the motherboard SATA ports to eliminate any potential cabling problems, as I was getting a lot of UDMA errors with several drives in the enclosure.
I am currently running a parity rebuild onto a replacement drive for the one with the unmountable filesystem, but I'm getting the same error message with the new drive. After reading about the issue I see that it's just rebuilding the corrupted filesystem from parity, so I will attempt to run xfs_repair after the data gets rebuilt. I still have the original drive in case I need to run it on that one. I've already lost a ton of data and may never be able to recover a lot of it; I'm still taking inventory of what I might have lost and have only scratched the surface. I'm also running a preclear on another drive in the background, so I need to wait until that finishes before attempting anything that might require a reboot.
I see that upgrading to xfsprogs 4.19 may fix the problem with xfs_repair. I downloaded the xfsprogs-4.19.0.tar.xz file, but I have no idea what to do with it. I just saw that there's a newer version, so I downloaded xfsprogs-4.19.0-2-x86_64.pkg.tar.xz from https://archlinux.pkgs.org/rolling/archlinux-core-x86_64/xfsprogs-4.19.0-2-x86_64.pkg.tar.xz.html. I'm not well versed in Linux or FreeBSD command-line tools, so I'm probably better off waiting for the next release of unRAID with the fix. Any idea when it will be released?
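In case it helps: an Arch .pkg.tar.xz is just a compressed tar archive, so no package manager is needed to get at the newer xfs_repair binary. A rough sketch (the usr/bin path reflects Arch's package layout, so treat it as an assumption, and whether the binary's shared-library dependencies are all satisfied on unRAID is a separate question):

```shell
# Unpack the package into a scratch directory and run the binary directly.
mkdir -p /tmp/xfsprogs
tar -xJf xfsprogs-4.19.0-2-x86_64.pkg.tar.xz -C /tmp/xfsprogs
/tmp/xfsprogs/usr/bin/xfs_repair -V   # print the version to confirm it runs
```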
  4. Here's the diagnostics file you requested: tower-diagnostics-20181229-1036.zip. I'm going through the various shares to see what data was lost, and it's quite considerable; I may never get it all sorted out. I have a share for music, and the share folder is showing as completely empty. I had well over 1,000 CDs ripped to my server and spent the past several months getting them all named and tagged properly. I'm going through the individual disks one at a time to take inventory of what remains so I can figure out what's missing. This totally sucks.
  5. Running unRAID Pro 6.6.6: 28 data drives (130 TB) with dual parity and a 1 TB SSD cache drive. Twenty-three of the data drives are mounted in a 24-bay Supermicro 846 chassis, and five are in a Supermicro 5-bay enclosure connected via shielded SATA cables directly to the server motherboard. Both parity drives are inside the 24-bay chassis: one in a standard drive bay, the other on the bracket attached to the side of the power supply bay.
Last night one of the drives in the 5-bay enclosure showed a red X in the web GUI, and the columns that normally show the drive's capacity and used space read "Unmountable: No file system." I shut down the array and swapped the failing 4 TB drive for a new 8 TB drive that I had already precleared (I try to keep several precleared spares on hand for just such an emergency). I restarted the array, assigned the new drive, and started a data rebuild. While it was rebuilding, another drive in the enclosure started throwing all kinds of errors along with an enormous number of writes, and the new drive was no longer being written to, so the rebuild had halted. The replacement drive was still showing "Unmountable: No file system." I shut down the array again, replaced the second drive (the one with the write errors) with another precleared 8 TB drive, and also swapped out the replacement I had installed for the original failure. I then started the array and assigned the new drives to the two affected slots.
After a while I noticed that a third drive in the enclosure was reporting a large number of write errors, both new drives were showing "Unmountable: No file system," and the data rebuild had again stopped for both. I canceled the rebuild, shut down the array, swapped the 5-bay enclosure for a spare I had on hand, and powered up again. The rebuild started, and the third drive is now behaving normally, but I'm still getting "Unmountable: No file system" for the two new drives. The rebuild looks normal, but I have no idea what data it's putting on those two drives; I suspect whatever is being written is simply corrupted and the data is lost. I would expect the GUI to show each drive's capacity and the amount of data to be restored rather than the filesystem error message. The display attached to the server is reporting metadata CRC errors along with a message to unmount and run xfs_repair. I'm at a total loss right now; I've been running unRAID for over 10 years and have never seen anything like this.
I've attached the system log and a couple of screenshots showing the error messages. The two parity drives are not shown because including them would have cut off the drives at the bottom of the screen. You'll notice I'm also running a preclear on another drive in the background. The total capacity shows as 122 TB instead of the previous 130 TB because of the two missing 4 TB data drives (disks 26 and 27). tower-syslog-20181228-1214.zip
  6. I've never been able to perform a backup and restore to a network drive using unRAID. I believe the rescue disk will only look for backup images on drives physically attached to the computer. I just use an external USB docking bay with a spare drive to back up and restore Windows images. You could probably use any external USB drive, or any additional drive installed in your PC, to store the backup image.
  7. Unfortunately, that ship has sailed. This happened several months ago and I've been gradually replacing the missing data as I discover what got deleted. It was a one-time thing so I'm not all that concerned about it. It was mostly annoying and a bit of a surprise when I discovered what had happened.
  8. External drives have really poor cooling: it's strictly passive, in a plastic enclosure that doesn't conduct heat, and they run off a wall-wart AC adapter instead of a well-regulated power supply. I use external drives in my unRAID array almost exclusively, but I pull them from the enclosures first. The drives inside external enclosures are the same as the desktop drives you typically pay more for and that carry a longer warranty. The warranty on externals is shorter because they're not expected to last as long, for the very reasons I just mentioned: poorly regulated power and poor ventilation. If you pull them from the enclosures and mount them in a server or desktop case, they will more than likely last as long as their desktop counterparts. Of course, pulling them from the enclosures voids the warranty, but doing so almost guarantees they'll outlast the warranty period anyway.
  9. I've upgraded the BIOS on every motherboard I've ever owned and never bricked a single one of them, and I've owned more motherboards than I can count. I once contracted a virus that infected my BIOS, and reflashing got rid of it, so sometimes it's a necessity. I recently had an email account hacked and inadvertently opened an email from the hacker telling me he had installed ransomware on my PC and wanted payment in bitcoin. I immediately disconnected the PC from the internet and shut it down. I pulled all of the data drives and copied them to my server using another PC and a USB docking station. Once the data was copied I wiped the drives and trashed them. I installed a new OS drive, reflashed the BIOS just to be on the safe side, and did a fresh OS install from scratch. I then installed new data drives, formatted and partitioned them as before, and copied the data back from my server. Of course, I changed my email password from a different PC, since the hacker claimed to have a program that would alert him if I tried to change it. What he didn't know was that I never open my email on my PC; I always view it remotely on the provider's server. I've since received several more emails from the hacker that I deleted without opening. This happened about a month ago and my PC is working perfectly. I had to laugh because he also claimed to have gotten hold of my contact list. I looked at it, and there were only a couple of addresses from people I actually know; the rest were probably put there by spammers. I contacted the few people I knew, told them not to open any emails from that address, and gave them my current one. The irony is that the hacked account is one I don't use anymore.
What's funny is that I'm hoping he sends emails to the other addresses in my contact list and spams the spammers. Now that would be true justice. The point of all this is that flashing your BIOS is a simple task and not one you should be afraid to perform. Just follow the instructions posted alongside the latest BIOS download and you should have no problems. I should also mention that I have since switched to using a password manager. The simple password that got hacked was far too easy to crack, but I was just too lazy to change it; I can never remember long passwords, especially complicated ones.
  10. I didn't format the drive at all. The only thing I do to a new drive is perform a preclear running in the background so it's ready to install when needed. The drive was installed in the array as a blank precleared drive and then the data was rebuilt from parity. The drive gets formatted on the fly as part of the data rebuild. All of my drives were migrated from ReiserFS to XFS quite some time ago. Anytime I start to see a drive look like it's about to fail I always shut down the array and check my connections. I pull the drive and reseat it in the backplane and then check the cable connection between the backplane and the controller card. I also pull the card and reseat it in the PCI-e slot.
  11. I had a 4 TB drive fail a while back, so I replaced it with a new 8 TB drive. The data rebuild seemed to go fine and everything was back to normal. It wasn't until several days later that I noticed the new drive contained only a few hundred GB of data when it had originally held about 3 TB. I don't know what happened, but apparently whatever data was written to the drive simply disappeared. It was all replaceable media files, so it wasn't a disaster, just a strange occurrence I had never seen before. I've been running unRAID for about 10 years and try to keep it up to date; this happened 4 or 5 months ago, so I was on an older version of unRAID 6. I'm currently running unRAID Pro 6.6.5 with dual parity, 28 data drives, a 1 TB SSD cache drive, and a current capacity of 130 TB with 45.4 TB free.
On another note, it seems like unRAID reports drive failures more frequently than before. I've replaced at least a couple of "failed" drives over the past several months. I always buy extra drives when I see them on sale and preclear them so they're ready to go in case of a failure. The thing is, I've run complete diagnostics on the "failed" drives using either WD Data Lifeguard or SeaTools, and they've all passed with flying colors. I then do a full erase and another preclear and keep them as replacements. I don't know if it's a glitch or if unRAID is overly sensitive in reporting failures, but so far every drive the array has recently flagged as failed has tested fine. None of them ever reported S.M.A.R.T. failures. If I see a S.M.A.R.T. failure I know the drive is toast and just trash it.
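On checking "failed" drives: besides WD Data Lifeguard and SeaTools, the same checks can be run from the unRAID console with smartctl. A sketch, with /dev/sdX standing in for whichever drive is in question:

```shell
smartctl -a /dev/sdX            # full SMART report: watch Reallocated_Sector_Ct,
                                # Current_Pending_Sector, Offline_Uncorrectable
smartctl -t long /dev/sdX       # start an extended self-test (runs inside the drive)
smartctl -l selftest /dev/sdX   # read the result once the test finishes
```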
  12. I reinstalled the linuxserver Plex docker and finally got the volumes mapped correctly. The Plex web GUI couldn't find the shares in unRAID because I was entering the wrong paths in the entry fields. It's currently in the process of rebuilding the libraries. The flash drive is the same one that's been plugged into my server for going on ten years now. I don't know why it didn't show up under the previously installed apps, but there was nothing there, and Plex is the only docker I have installed. I try to keep unRAID up to date, so I'm wondering if those files get purged after a certain amount of time or get deleted when updating the OS. FWIW, I had uninstalled the limetech Plex docker, and that is now showing up in the previous apps as you would expect.
  13. It's been quite some time since I set it up so I've forgotten a lot of things since then. After looking at the FAQ it jogged my memory about some of the things I did before to get it working. Thanks for pushing me in the right direction.
  14. I checked the Previously Installed Apps and it says No Matching Content Found. How would I map volumes to my media? I haven't changed anything in my setup that I'm aware of since I first installed Plex. I don't recall having any problems setting Plex up before. I installed the latest linuxserver Plex and disabled the limetech PlexMediaServer so they're both installed, but only the linuxserver Plex is running. I have the same issues with both versions of Plex.
  15. I had the linuxserver Plex running fine on my unRAID server, and then I decided to swap out my cache drive for a larger one. I lost Plex in the process because the appdata folder was on the old cache drive. I tried reinstalling Plex, but mistakenly installed the limetech PlexMediaServer version. I tried adding libraries using the web GUI with no luck; when I browsed for folders it displayed what you see in the attached image. I can't see any of the media files on my server. How do I get to the shares on my server?
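The key to getting the shares visible inside Plex is the volume mappings in the container template: each one maps a host path under /mnt/user to a path the container sees. The docker run equivalent of the unRAID template looks roughly like this (share names like Movies and TVShows are placeholders for whatever your shares are actually called):

```shell
# PUID/PGID 99/100 are unRAID's default nobody/users IDs.
# Each -v maps a host path (left) to the path Plex sees inside the
# container (right); in Plex's library setup you then browse to
# /movies or /tv, not to /mnt/user.
docker run -d --name=plex \
  --net=host \
  -e PUID=99 -e PGID=100 \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/Movies:/movies \
  -v /mnt/user/TVShows:/tv \
  linuxserver/plex
```

In the unRAID web GUI the same thing is done by adding a path entry per share in the container's edit page rather than typing this command.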