Auggie last won the day on October 4 2018

About Auggie


  1. You are completely ignoring that there are other applications of unRAID that require the largest arrays possible, such as media servers. Cache pools are of limited value in these setups. You may not understand it, but that doesn't mean there isn't a truly legitimate need for these types of arrays. I welcome any new features to unRAID, including increasing the number of cache pools available, as it expands unRAID's capabilities for those in the mass market who could use them. But the spirit of this particular thread is to encourage expanding the number of data drives unRAID can incorporate into a protected array, which at present is limited by how the super.dat file is formatted.
  2. When you say "cache pools," that doesn't mean the ability to have multiple arrays running simultaneously on a bare-metal server, does it?
  3. As I had to revisit recreating my Ubuntu 16 VM, and while I was at it decided to upgrade to Ubuntu 18, I noticed the missing cursor bug was still present with UnRAID 6.7.2, two years after I reported this error in this forum (I had been using Xorg and Microsoft RDP and hadn't touched UnRAID's VNC Remote since my first report). Thany mentioned this is a Javascript/browser issue, so I'm now confused. Since this issue still exists and affects other users, is this truly a simple Javascript/browser issue that can be corrected by a preference setting? Or is this really an UnRAID bug?
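For anyone hitting the same missing-cursor symptom: in stock KVM/libvirt setups this often comes down to the VM lacking an absolute-pointer input device, which VNC viewers need to draw and track the guest cursor. A hedged sketch of that fix (the device line is standard libvirt domain XML; whether it resolves this particular UnRAID/VNC Remote bug is an assumption worth testing):

```xml
<!-- Inside the <devices> section of the VM definition (e.g. via virsh edit).
     A USB tablet provides absolute pointer coordinates, which VNC clients
     need in order to render and track the guest cursor correctly. -->
<input type='tablet' bus='usb'/>
```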
  4. Well, for some reason the libvirt.img on my original cache drive got corrupted beyond accessibility and repair, so I bit the bullet to reinstall an Ubuntu VM from scratch. But lo and behold, after creating a new VM, downloading the latest Ubuntu, and firing it all up, my original Ubuntu VM started; it appears all my files and settings are still intact. Whew! Apparently, since I still had the original vDisk, which wasn't corrupted, UnRAID's VM module simply launched it without wiping and installing a clean system. All is well again...
  5. It was on the original cache drive, which I upgraded to a larger one after copying over all of its contents. I had even reselected the libvirt location on the "new" cache share, but the VMs still did not show. Unfortunately, UnRAID was also getting errors opening the libvirt.img file, which prevented the VM daemon/application from starting, so I tossed it, not knowing it contained the actual settings of the VMs themselves, and downloaded a fresh copy, which solved the errors and allowed the VM module to start. When I get home I'll check to see if the libvirt.img file is still available on the original cache drive, which I don't believe I touched or reformatted.
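For anyone else rebuilding after a corrupted libvirt.img: the VM definitions live inside that image, so copying the vDisks alone doesn't preserve them. A minimal precautionary sketch, assuming virsh is on the PATH (it is on an unRAID host with the VM service running); the backup path is just a placeholder:

```shell
# Export every libvirt domain definition to a flat XML file so a corrupted
# libvirt.img doesn't take the VM settings with it.
BACKUP_DIR="/tmp/vm-xml-backup"   # placeholder; use an array share in practice
mkdir -p "$BACKUP_DIR"

if command -v virsh >/dev/null 2>&1; then
    # dumpxml emits the full domain definition (disks, NICs, CPU layout)
    for vm in $(virsh list --all --name); do
        if [ -n "$vm" ]; then
            virsh dumpxml "$vm" > "$BACKUP_DIR/$vm.xml"
        fi
    done
fi
```

A saved definition can later be re-registered, even against a freshly created libvirt.img, with `virsh define "$BACKUP_DIR/<name>.xml"`.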
  6. I had an unassigned drive as the shared storage for my VMs; then I did some disk and file rearranging, copying all raw VM files to a new cache drive, and now my VMs have disappeared. Reinstalling the original drive did not restore any VMs. Am I FUBAR'd, or can I recover?
  7. For me, I'm concerned about running an UnRAID-based media server in a VM; it doesn't matter for my backup server. All my videos are 1:1 imaged full Blu-ray discs (which precludes running Plex or any other current media streamer), so anything that could impede performance and cause drop-outs, stuttering, or pauses in playback would be unacceptable.
  8. This would be a sufficient solution for my very narrow needs. If the multiple-array feature should eventually be incorporated, then I would definitely want to run multiple arrays (servers) on the same iron, since at the max data drive capacity my 48-bay Chenbro would never be fully utilized (I prefer to run native vs. virtual to reduce the potential for latency issues during media playback).
  9. I should add that I'm not seeing this issue with my other UnRAID NAS, which uses an older X9SCM motherboard with one SuperMicro AOC-SAS2LP HBA, one LSI 9211-8i (IT Mode) HBA, and one IBM M1015 (IT Mode) HBA. The X11 setup has a built-in SAS3 3008 chip (IT Mode) connected to the Chenbro's built-in SAS expanders, so the latest UnRAID version appears to have an issue with this setup.
  10. I noticed this some time ago, but I wasn't sure exactly what was happening. About a month ago I noticed 128 errors after a parity check was completed, but saw it only after a week or so had already passed. All SMART reports were good. I ran my monthly parity check several days ago and the exact same number of 128 errors occurred. All SMART reports were again good, with zero anomalies on all drives. The syslog showed a spin down an hour or so after the parity check was started (FYI, my disk settings are set to spin down after an hour), and that's when the read I/O errors occurred (specifically involving only three drives at that time). After performing an XFS repair on the three drives with zero issues, I again ran the parity check last night. This morning, I noted in the syslog that again, an hour after the parity check was initiated, UnRAID spun down ALL drives, and soon thereafter I/O errors on ALL drives were reported. Two hours later, UnRAID again spun down ALL drives and the I/O errors started. This appears to be an UnRAID 6.7 bug wherein it's incorrectly spinning down drives that may be physically I/O active. I don't believe it's hardware related, as I've had this particular motherboard/chassis combination (SuperMicro X11SPH/Chenbro 48-bay RM43348) running for over six months now and never experienced anything like this before. Attached is my syslog (FYI, the parity check is still in progress, and although there were numerous I/O errors, the parity check has reported ZERO errors thus far). syslog.txt
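A quick way to confirm the spin-down/I-O-error correlation from a saved syslog is to pull both message types out side by side and compare timestamps. A minimal sketch using hypothetical sample lines (the exact unRAID message wording varies by version, so treat the patterns as assumptions to adjust against your own log):

```shell
# Hypothetical sample lines in roughly unRAID's syslog shape; substitute
# your real syslog.txt and check the exact wording it uses.
cat > /tmp/sample_syslog.txt <<'EOF'
May 10 01:00:02 Tower kernel: mdcmd (60): spindown 3
May 10 01:00:05 Tower kernel: md: disk3 read error, sector=123456
May 10 02:00:02 Tower kernel: mdcmd (61): spindown 5
May 10 02:00:07 Tower kernel: md: disk5 read error, sector=654321
EOF

# Pull spin-down events and read errors together; errors landing seconds
# after a spindown line would support the mid-parity-check spin-down theory.
grep -E 'spindown|read error' /tmp/sample_syslog.txt
```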
  11. It appears the problem is due to the Dynamix Cache Dirs plugin, which I had installed recently. Since removing it, the issue has not surfaced.
  12. This has been an ongoing, irritating issue with my Media Server for some time now; I can't pinpoint when it started, whether in recent 6.x releases or earlier ones (I don't recall experiencing this issue under v5, and certainly not under v4). Upgrading to a completely brand-new system in every single aspect has not resolved it. As I watch a movie, every 20 minutes or so the video will freeze for 10-20 seconds, almost like clockwork. I have several media players from different companies, so it's not an issue specific to one player (though my Oppo 203 seems to experience it less often; however, it is used only to watch 4K). If I log onto the NAS and spin up all drives, this seems to alleviate the issue for the remainder of the movie. These movies are full copies, with no re-encoding, so they command a lot of NAS/network traffic when playing. Regarding the network, I recently moved, so the topology is completely different, with only one switch (16-port rackmount NetGear ProSafe) transferred to the new LAN. My drives are set to the default 1-hour spin-down delay. I don't believe the issue is with the specific drive the movie is on, but I haven't dug into the logs to see if UnRAID was attempting to spin that drive down, or any other drives. Perhaps it's when UnRAID is attempting to spin another drive down that it unexpectedly causes a problem with other drives. I have the Cache Directories, Auto Fan Control (I no longer need this and will delete it), Nerd Tools, VM Wake-On-LAN, and Unassigned Drives plugins. My VMs are currently not running (I haven't resolved the libvirt service error since the hardware migration), though they have been in the past. When it happens again, I will try to review the logs at that time. Until then, has anyone else experienced this issue?
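Since spinning up all drives by hand seems to clear the freezes, one stopgap while the root cause is hunted down is to keep the array disks busy during playback. A hypothetical sketch, assuming the standard unRAID mount points /mnt/disk1, /mnt/disk2, etc.; whether a small file read is enough to reset the spin-down timer is an assumption to verify against the Main page's spin indicators:

```shell
# Hypothetical keep-awake helper: read a little real file data from each
# array disk so the inactivity timer doesn't fire mid-movie. Cached
# metadata alone may not count as activity, hence the dd read.
keep_disks_awake() {
    for d in /mnt/disk*/; do
        f=$(find "$d" -type f 2>/dev/null | head -n 1)
        if [ -n "$f" ]; then
            dd if="$f" of=/dev/null bs=64k count=1 2>/dev/null
        fi
    done
    return 0
}

keep_disks_awake   # in practice: run every ~10 minutes (cron) while watching
```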
  13. Interesting. I will test on my 9211-8i to see if the LSIs are more immune to the PWDIS feature...
  14. Is there one molex connector per backplane, or two? The version with one is the newer design; that's the one in my system, which required the pins to be taped. If you have the single-connector version, what HBAs are you using?
  15. It didn't with mine, with 1 LSI 9211-8i, 1 SuperMicro SAS2LP, and 1 IBM M1015, connected to a Norco 4224. I'm not sure which of the HBAs I initially tried without taping, but it was with at least two drives that I experienced no power-up unless the pins were taped, after which I started taping them all immediately after shucking. FWIW, the Norco in this setup is the "newer" version that supports only a single PSU and thus has a differently designed "backplane". I've retired this case and am relocating the X9 hardware to my older, dual-PSU-capable 4224 for my backup server. The next PWDIS drive I get, I'll test it without tape in both my new Chenbro and the Norco system (all three HBAs)... FYI, rebuilding an 8TB drive on the X9/Norco setup netted an average of 110MB/s. Rebuilding a 6TB drive in the same array on the new X11/Chenbro averaged 198MB/s, and replacing an 8TB with a 10TB averaged 117MB/s; CPU utilization maybe pegged 50% now and then, but it was typically idling around 5%, unlike the X9/Pentium, which maxed out quite often during rebuilds. Certainly a big step up, especially for my VMs. Now I have to build a noise reduction cabinet to quiet down this maddeningly howling rig, and upgrade my home network to 10Gb...