Auggie

Everything posted by Auggie

  1. Experiencing a high incident rate of Windows 10 Pro hanging immediately upon boot, before it even POSTs (black screen with a non-blinking underscore cursor). It happens almost every time, but after a forced stop it subsequently starts up successfully. And this is with a brand new install with no third-party or other applications installed yet. I used the latest virtual drivers available through unRAID's VM management tools, and only 1 CPU is assigned out of a possible 2. And a first: while drafting this post, it hung at the blue Windows logo screen after restarting during its first Windows Update. I initiated Windows Update again, and on the usual restart to continue updating it went straight to the frozen underscore-cursor screen. Since it always hangs during startup, I can't get a proper Windows update installed: I have to force the VM to stop, and when I finally do get it to start up successfully it rolls back any updates to the previous version. This is really unacceptable for me as I require reliable unattended operation (I've set it to auto-logon). Curious if anyone else is experiencing this issue, and if so, what solutions may have worked to resolve it. So far, this does not bode well for my first VM foray with Win10, as I need a stable VM that will reliably boot and operate. I only intend to run custom software that I've created to manage my media library jukebox, so I'm willing to downgrade to Windows 7 if that offers the stability and reliability I need and this isn't an unRAID VM issue that would affect any guest OS...
  2. Thanks for that! Didn't know to check the CPU specs, since the BIOS allows me to enable VT-d; you'd think that if the feature weren't available, the BIOS wouldn't offer the option. Anyhow, now that I've just set up my very first VM (Windows 10), I was planning to change the CPU eventually to at least get more cores than the current dual-core setup.
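     In case it helps anyone reproduce or debug this, the same force-stop and a look at the boot messages can be done from the unRAID console with virsh (a rough sketch only; "Windows 10" here is just a placeholder for whatever your VM is actually named):
        virsh list --all                                     # confirm the VM's name and current state
        virsh destroy "Windows 10"                           # hard power-off of the hung VM (equivalent to Force Stop)
        virsh start "Windows 10"                             # try booting it again
        tail -n 50 "/var/log/libvirt/qemu/Windows 10.log"    # QEMU messages from the last boot attempt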
  3. I enabled both Virtualization Technology and VT-d in the BIOS (latest version as of Dec 2016) but unRAID still shows IOMMU as "Disabled." Presently I don't have any PCI devices that need to be passed through, but I'm curious whether the "Disabled" status is because of that, whether unRAID somehow does not recognize that VT-d is actually active in the BIOS, or whether this board is somehow incompatible with unRAID's IOMMU functionality. This is for future endeavors in case I expand the role of this server, which may require PCI passthrough.
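     For anyone checking the same thing, the kernel's own view of VT-d can be inspected from the unRAID console (a rough sketch; the exact output varies by board and BIOS):
        dmesg | grep -i -e dmar -e iommu     # look for lines like "DMAR: IOMMU enabled" or "DMAR-IR: Enabled"
        cat /proc/cmdline                    # check whether intel_iommu=on was passed to the kernel at boot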
  4. The flash worked successfully! Though with the original subject specifically targeting the X9SCM boards (which I have), I was simply following along with the steps of the original post, which went straight to flashing the M1015 from the DOS command line and immediately resulted in errors. Silly me for not initially reading the follow-up post that provided instructions for the UEFI shell, which is needed for boards with UEFI like the X9SCM. FYI, there are newer X9SCM BIOS and IPMI firmwares than the ones provided in this thread, R2.2 and R3.38 respectively, as of December 2016.
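     For anyone else doing this from the UEFI shell, the usual sas2flash.efi sequence looks roughly like this (a sketch only, not a substitute for the follow-up post's instructions; the .bin/.rom filenames are just typical examples, so use whatever the thread's firmware package actually provides, and substitute your own card's SAS address from the sticker on the card):
        sas2flash.efi -listall                          # confirm the card is detected and note its current firmware
        sas2flash.efi -o -e 6                           # erase the existing (IR/RAID) firmware
        sas2flash.efi -o -f 2118it.bin -b mptsas2.rom   # flash the IT-mode firmware plus the boot ROM
        sas2flash.efi -o -sasadd 500605bxxxxxxxxx       # restore the card's SAS address from its sticker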
  5. Although it seems a relatively simple process, I don't have any PCs to flash the HBAs with other than my headless unRAID servers, both of which have SuperMicro mobos that use Java-based IPMI, and the Java applet has been extremely unstable and crashes constantly on my OS X computers. So I would have to create a bootable flash drive and connect a monitor and keyboard in the cramped, low-ceilinged storage closet where my RPC-4224 machine resides (or temporarily haul it out onto a table).
  6. unRAID will do its magic based on software. The controller hardware simply has to present the drives to the OS. Hardware RAID is not necessary and will not work. Some hardware in this thread may be obsolete, but the fact that a controller card has to run in IT mode is still valid. Thanks. I think I understand now; basically, the HBA must allow "JBOD" (which is the term I've usually looked for when selecting an HBA card). I had purchased the M1015 last year on a recommendation from these forums in another, unrelated thread, but it wasn't mentioned that I might have to flash the card (it's brand new). I paid $140, but I just compared prices and an LSI SAS9211-8i is going for $112 new; that seems the better option as it's cheaper and is already a plain JBOD HBA with no RAID functions (which is essentially what flashing the M1015 with the OP's firmware turns it into).
  7. Thanks for all the info! I'm glad unRAID provides all the necessary functions to remotely access the VM desktop, while still having the option to use RDP if I so choose.
  8. I want to create a Windows (any flavor of 7, 8, or 10) VM on my headless unRAID v6.x host machine, and subsequently access the Windows VM desktop remotely. I will only be running custom software I wrote (which runs only on Windows and OS X) that manages and automates the media library jukebox that the unRAID machine stores. Having never implemented VMs through unRAID, does unRAID's web GUI have the tools to remotely access the Windows VM desktop? The box physically resides in a storage closet, so I want to avoid having to keep a monitor connected. Or would I have to set up VNC on the Windows VM and access it through a VNC client from the remote device?
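     For context on what I'm hoping for: I gather that if the VM is given a VNC graphics adapter in the template, the display can be located from the unRAID console and opened with any VNC client, roughly like this (a sketch only; I haven't built the VM yet, "Windows 10" is a placeholder name, and "tower.local" assumes the default unRAID hostname):
        virsh vncdisplay "Windows 10"        # e.g. ":0" means VNC is listening on port 5900 of the unRAID host
        # then from the Mac:
        open vnc://tower.local:5900          # opens the display in Screen Sharing (or use any other VNC client)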
  9. I, too, recently picked up an M1015 card but haven't installed it. Is it necessary to flash this card for unRAID v6.x? Or can it be plugged-and-played? And if it is plug-n-play, is flashing to IT mode simply for better throughput? In general, how relevant is the OP's information regarding the various LSI controllers for v6.x?
  10. I can now confirm that the AOC-SAS2LP-MV8 is indeed compatible with the RES2SV240. I now have 2 SAS2LP plus the RES2SV240 connecting my 24 SATA drives with no issues thus far, having completed a data-rebuild on one drive. Even though the mobo has 4 PCIe slots, only 2 are x8, and for some unknown reason, I would get rare and intermittent errors with any SAS2LP in the x4 slots as I started upgrading to greater than 2TB drives.
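     For anyone wanting to confirm the same thing on their own board, the link width each card actually negotiated can be read out of lspci (a rough sketch; the "02:00.0" bus address is just an example, use whatever the first command reports for your HBAs):
        lspci | grep -i sas                     # note the bus address of each SAS controller, e.g. 02:00.0
        lspci -vv -s 02:00.0 | grep -i lnksta   # "Width x8" vs "Width x4" shows the negotiated link for that slot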
  11. FYI, I went ahead and purchased an M1015. I only have a week to work on the box when I get home for Spring Break starting this coming Friday, which includes having to rebuild parity with an incoming 8TB drive (about a day), then replace a 4TB with the previous 6TB parity drive (just over a day), then run a parity check to make sure the expansion completed accurately (about 2 days). If I wait until then to find out the SM HBA doesn't support port multiplication, I'm screwed, as I won't have time to wait for a new board (short of paying a hefty price to get it overnighted). I will first see if the SM HBA does work with the expander card and report back my findings.
  12. Does anyone know if the SuperMicro AOC-SAS2LP-MV8 supports the Intel RAID expander card RES2SV240? I already have the AOC-SAS2LP-MV8's so would prefer not having to buy an IBM M1015 to replace an otherwise perfectly functional SAS/SATA card.
  13. I don't have any experience with the Q1900M, but personally I would prefer ECC for the added protection against memory corruption. It really depends on what data you intend to store on this server, and whether you have any other redundant storage plan. I also want to point out that if you plan on having more than 10 drives, the Q1900M will require at least one more HBA, so its low initial cost versus other motherboards with more on-board SATA ports is negated.
  14. It depends on how many drives (and hence how many SATA ports and/or PCI slots) you need. My first unRAID box used a SuperMicro X9SCM-HF, which I still use for my second box (I went with a different motherboard when I upgraded the case to a 24-drive Norco RPC-4224). The X9SCM has an integrated Atom D510 1.66GHz CPU and is one of the lowest power-consuming boards around. Having a built-in CPU also makes this a low-cost solution; you just need RAM, and perhaps an HBA card if its 6 SATA ports are not enough.
  15. I'm done with all the PCI problems of my SuperMicro X9SCM-F mobo; ever since I got it there have been annoying issues with the three SuperMicro AOC-SAS2LP-MV8 HBAs. I've had the mobo serviced by SM, two of the three MV8s serviced, and the SAS-SAS cables replaced, but the intermittent gremlins persist. After all this time, I think I've narrowed the problems down to the two non-x8 PCIe slots on the mobo, which I believe the x8 MV8s have problems with. Since there are only two x8 slots and I have three MV8s, it's time I look for a replacement mobo. So I'm seeking recommendations for a mobo with at least three x8 PCIe slots and IPMI (or equivalent) access that works out of the box with unRAID. I would prefer an LGA1155 socket to keep my replacement costs low, but am willing to go with a different CPU/memory since I can move the current setup to my other, less critical, unRAID system (it has a single MV8, so I would either need a SAS expander card, like the Intel RAID Expander Card, or a 16-port SAS-to-SATA card, like the LSI SAS 9201-16i, to stay within its two x8 PCIe slots). I came across the SuperMicro X9DRi-F, but there are no threads here on that mobo.
  16. I've always had problems with SM's Java-based IPMI and Mac OS X (haven't tried accessing it via a Windows box), with the Java client hanging on launch most of the time. I think the issues started when Java 6 or newer became required. I currently have two unRAID boxes, both with SM mobos. I'm currently having PCI problems with my X9SCM-F (constant "sas_eh_handle_sas_errors" involving HBAs installed in either slot 4 or 5, both x4), so now I'm contemplating getting a new mobo and looking at anything not SuperMicro. I need IPMI, so I hope other non-SM mobos with IPMI can be accessed much more reliably...
  17. I've got the latest version; since using the built-in update mechanism in unRAID 6 I no longer visit the download website, and the updater doesn't make it very obvious that the final version was actually released after all these years. As I said, these nagging issues have been persistent since at least v5 and OS X 10.9 and are still present in the latest 6.0.1, but I'm not sure whether it's OS X or unRAID. #3 in my original post is obviously an OS X bug as regards losing all network connectivity (both wired and wireless), and it's already been reported to Apple.
  18. I've been having persistent little nagging problems with unRAID shares and Mac OS X, at least as far back as Mac OS X 10.9 and unRAID 5: 1) All volumes from a given unRAID server unexpectedly unmount from the desktop, usually during prolonged access, such as during a large block transfer with hundreds if not thousands of smaller files, and/or when trying to access another directory or volume on the unRAID server either after a long period of inactivity or during the aforementioned block transfer. This happens with both SMB (my primary access) and AFP. 2) Long access times when opening folders, which may result in all volumes from the target unRAID server being unexpectedly unmounted. This happens under both SMB and AFP. 3) Time Machine backups always fail, with Mac OS X losing all network connectivity completely; I have to restart the network interface itself via the command line or restart the computer. This is under Yosemite and unRAID 6, but I never tested whether the bug is present in older versions of OS X. Now, I don't have a single disk in the unRAID box dedicated strictly to Time Machine, but I've read that it may be necessary to have the Time Machine user share assigned to only one disk. I have two different and dissimilar unRAID servers on my network, both on the current beta of 6.0, and both experience these issues with my main Mac OS X machine, on which I recently performed a clean OS X Yosemite install; the problems persist. I'm just trying to determine whether these are unRAID issues and/or Mac OS X issues in order to escalate the problems to the right party(ies).
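     For reference, the "restart the network interface via the command line" part of #3 is just this on the Mac (assuming en0 is the affected interface; check with ifconfig first):
        ifconfig                   # identify the affected interface (en0, en1, ...)
        sudo ifconfig en0 down     # take the interface down
        sudo ifconfig en0 up       # bring it back up, which restores connectivity until the next failure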
  19. I'm manually copying files over, entire drive by drive (via the cp -a command), as I convert from ReiserFS to XFS, and a read Input/Output error for just one file got spit out in my telnet session, but no errors were reported on the Dashboard screen of the GUI and all drives were "green balled." No SMART errors were detected either. So my question is: what errors do and do not get reported in the GUI? I would imagine any and all errors, whether hardware related or file-structure integrity, should be reported.
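     For context, each per-drive copy looks roughly like this (a sketch; the disk numbers are just examples), and the error only surfaced because stderr shows up in the telnet session rather than in the GUI:
        # copy the contents of the old ReiserFS disk onto the freshly formatted XFS disk,
        # preserving ownership/permissions/timestamps, and keep any errors in a log
        cp -a /mnt/disk1/. /mnt/disk2/ 2> /tmp/disk1_copy_errors.log
        wc -l /tmp/disk1_copy_errors.log    # quick check of how many error lines were logged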
  20. I'm not sure from reading your saga whether you meant this statement at face value or not, but I want to clarify just in case. Unraid has no way to rebuild your data, it can only recreate the drive in total. So, any corruption on the drive will exist on the rebuilt drive as well. If you were to have physically pulled the drive and worked on the replacement drive, you would have most likely had the same results, but you would have had another copy of the corrupted drive to work on to possibly try different recovery options. Yes, I understand the distinction: unRAID can only "rebuild the data" to whatever the parity/data drives calculate the data to be, and cannot "remove" any corruption that has been incorporated into the parity drive. The suspect drive has been completely replaced, and since unRAID v5 simply created no data on the replacement drive from its "data rebuild," there is no "corruption" on that new drive (which I subsequently reformatted to XFS). I did NOT perform any parity check/update on the parity drive whatsoever; in fact, since upgrading to v6 I decided to have the parity drive rebuilt from scratch, trusting the integrity of the existing data drives. You've brought up a great point which further strengthens the case for performing a reiserfsck: if the operation results in a recommendation for --rebuild-tree, doing so should also correct any "corruption" of the parity data that may have been incorporated from a corrupted data drive, whereas simply replacing the suspect drive outright and letting unRAID perform a data rebuild (by the way, I'm using the exact term "data rebuild" that all the unRAID documentation and references use) would allow any corrupted parity data to be propagated onto the replacement data drive. UPDATE: Getting back to the original thread topic, it looks like it will be much faster copying the data over, as it has taken two hours to copy 600+ GB so far. At this rate it should take just under 8 hours for each 4TB drive, meaning I could theoretically do a couple of drives or more per day and get the entire media server converted in under two weeks, if all goes well.
  21. I did a SMART check and there were no errors. There were no tell-tale signs in the syslog of an obvious hardware failure (just read errors on that drive during a copy operation between two other drives, in which the suspect drive had no direct involvement). I then ran a reiserfsck, which came back with a recommendation for --rebuild-tree. Since there appeared to be no hardware errors that I could detect, and the few times I'd hit this rare "corruption" over the years had always been repaired successfully by performing the repair options reiserfsck recommended, I went ahead and performed the procedure. During the rebuild, unRAID coughed up a syslog dump at the console and froze (http://lime-technology.com/forum/index.php?topic=37772.msg349454#msg349454). After forcing a restart, that's when I noticed unRAID marking the drive as unformatted. Any subsequent reiserfsck attempt came back with some sort of incomplete status (IIRC) that I could not recover from or bypass. When I installed a replacement drive, unRAID simply formatted it and data-rebuilt it as an empty drive. It was only afterwards that I discovered that while unRAID is in Maintenance mode performing a reiserfsck operation, the target drive is essentially marked as unformatted until the reiserfsck completes successfully. Since I could never get the reiserfsck to restart and complete successfully, its contents were doomed. The replacement drive is brand new, so it's a freshly minted XFS format with no data coming from the original drive whatsoever, and there is no "corruption" being propagated into the "new" array (its data is unrecoverable anyway). There were no errors noted by unRAID on any other drive, and there have been no errors, hardware or otherwise, since installing the new drive. So "whatever I did" or am doing should not cause any corruption of the new XFS system, as there is absolutely nothing to indicate any current issues with the system, which has been running fine for the past 11 days. Anyway, I'm not sure why you are implying that no reiserfsck should ever be performed when the unRAID wiki actually specifies doing so as part of the troubleshooting process; I followed all the recommended steps in the wiki and from the reiserfsck results. The one possible repercussion that was never mentioned anywhere was what could go wrong during a reiserfsck rebuild: had I known that a crash or other interruption preventing successful completion of that command could result in total loss of the data on that drive, perhaps I would have simply replaced the drive and had unRAID perform a data rebuild. Then again, nothing is foolproof: a crash or similar event could occur even during a data rebuild, or the parity drive could go bad during said rebuild. The end result is still data loss of the original drive. We all take our chances in whatever course of remedial action we take when array errors occur. At least the beauty of unRAID is that we only lose the data on the drive(s) that become unusable, for whatever reason.
  22. Yes - but at least although the elapsed time is long you can let the computer do most of the work. Because of the elapsed time it takes, many people seem to be doing it when a drive needs replacing/upgrading and otherwise leaving things as they are. True, the computer does all the work. But moving essentially 4TB worth of data at a time probably takes about a day of unattended operation, checking the completion status the next day (I'm only guesstimating, because a 4TB data rebuild takes about a day to complete and I haven't performed a cp/mv operation of that magnitude yet): 23 data drives would take close to a month. But I guess now's the best time for me, since I just lost 4TB of data (the system crashed while performing a reiserfsck --rebuild-tree, resulting in unRAID v5 marking the drive as unformatted with no option to rebuild the data from parity). I can repopulate the lost video media after I convert to XFS...
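     For anyone reading this later, the sequence in question was roughly the standard one from the wiki (a sketch; /dev/md1 is just an example device, the array must be started in Maintenance mode, and --rebuild-tree should only be run if --check explicitly recommends it):
        reiserfsck --check /dev/md1          # read-only check; note whatever repair option it recommends
        reiserfsck --rebuild-tree /dev/md1   # only if recommended; the drive reads as unformatted until this completes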
  23. Ugh. I just discovered that the only way to reformat drives is to basically clear off the contents of a drive, stop the array, change the file system type on the Device Settings page, restart the array and allow unRAID to format the drive. Rinse, repeat. For each drive. 94TB. This... will... take... some... time...
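     The only command-line piece of that cycle worth noting is verifying a drive really is empty before changing its file system type (a sketch; the disk number is just an example):
        du -sh /mnt/disk5                # should report essentially zero before reformatting
        find /mnt/disk5 -type f | head   # double-check that no files were missed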
  24. I'm assuming this is with v6 only; that is, there have been no common RFS/CPU stalls under v5, no? And were these stalls during read, write, or both types of operations? My media server is mostly reads, except when transferring media over to the server, whereas my backup unRAID (backup in the sense that it's used as a backup for my computers' hard drives) is mostly involved with write operations, and I don't recall experiencing stalls or stability issues in the year or more that I've been running the 6 betas. Were these problematic issues under mixed RFS/XFS/BTRFS formatting? EDIT: I just noticed that one of the drives in my backup unRAID, the last one added to expand the array, had been formatted in XFS. Hmm, never knew that, though I must admit I rarely look at change logs or keep up to date with announcements, as I just upgrade betas whenever I see a new version on LT's website. So I've been running a mixed RFS/XFS system for the past year with no issues.
  25. I've also discovered that the APC battery backup plugin has been updated for v6 (http://lime-technology.com/forum/index.php?topic=34994.0), so there are no longer any issues holding me back from upgrading to v6. I'm only concerned with data integrity and reliability, so any GUI glitches are of no concern to me. And none of the v6 betas I've tried have resulted in any data issues, losses, or other drive issues, so it looks like I will be upgrading today once a replacement 4TB arrives via UPS.