Whaler_99

Everything posted by Whaler_99

  1. Are they all using XFS (or BTRFS)? I suspect it's localized to SAS2LP-MV8 cards + ReiserFS + write operations + (possibly) some additional factor in my setup. I will ask and find out...
  2. Something interesting with these cards. I have three friends all using them in new builds. New as in v6 builds. None of them had a v5 machine and all three systems are working fine.
  3. Plex recently had some issues in a release they put out. Try just playing the media directly with another device, or try another media streamer, to rule that out before we troubleshoot further. Looks like that is indeed the issue. I did check the Plexpass forum and didn't see anything offhand about the pausing issue. Playing the same video just through Windows Explorer using VLC seems OK. Thanks!
  4. Anyone seeing performance issues? I updated to 6.1.5 and then a day or so later to 6.1.6, so I am not sure which version affected it, but in 6.1.4 there were no issues. When streaming shows from my server via Plex, the video is skipping and jumping a bit. Audio seems fine, just the video. And no, it isn't a bad file; I have tried with a few different ones, and all show random skips and such since the upgrade to 6.1.5->6.1.6.
  5. Wondering if you have any 3x5 cages for sale?
  6. Is there any way to tag the admin team on this thread? There doesn't seem to have been any response to this as of yet, or any updated documentation. Although it's a great solution/option, I am frankly pretty nervous now about running dockers, and even more so virtual machines, on this cache array when it seems to be a bit less than ideal, has no real documentation for troubleshooting, and not many people on the site have a lot of experience with it. First - in a cache array, how do you add drives, and what can you expect? (let's not assume it is the same as the data array) Second - if you want to upgrade existing drives, how, and what can you expect? (again, we cannot assume here) Third - in the event of a failing drive, what do you do? (this clearly needs a lot of work, simply based on my experience and a few others') Fourth - in the event of a failed drive, what do you do? Hopefully we can see this all fleshed out over the coming months. I see 6.1.4 was released and work is ongoing on 6.2, great news... but how about some updates on the current solution and the issues that have been seen?
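     (For reference, a rough sketch of what the underlying btrfs commands look like for adding a drive to the pool and for dealing with a drive that has already dropped out - the device names and the /mnt/cache mount point are assumptions for illustration, not the documented unRAID procedure:)

       # Add a new device to the pool, then rebalance so existing data is mirrored onto it
       btrfs device add /dev/sdX1 /mnt/cache
       btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

       # If a device has failed outright, mount the pool degraded and drop the missing member
       mount -o degraded /dev/sdSURVIVOR1 /mnt/cache
       btrfs device remove missing /mnt/cache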
  7. I concur on this; just get that drive replaced ASAP with a new one or one you know is good. Odds are it won't fail any time soon, but better safe than sorry.
  8. If your box is "dying" within a few hours of running, I would say start looking at your hardware. Sounds like something is wrong: either overheating, a bad motherboard, a bad controller, something. There really is no reason why a new 6.1.3 system should otherwise freeze. You didn't upgrade to this from some other previous version with plugins installed, did you? That could do it as well. Finally, next time your system is up and running, install the powerdown plugin. This allows you to run the powerdown command from the console, and it will gracefully shut everything down for you so you don't need to do a parity check at reboot. You can also press the power button, which should initiate the same thing. If your system is freezing to the point where that doesn't even work, then I would say you do have hardware issues.
  9. Just following up on this... There still seems to be a complete lack of documentation on this, in regards to support in the event of a failed drive or upgrading existing drives to larger ones (or is that even possible?). When can we expect some updated information on what is now a VERY important and integral part of unRAID from the LimeTech team?
  10. Just to clarify, this is an expander card that still requires a host controller card, correct? It cannot operate on its own with unRAID to get you to 24 drives, for instance.
  11. So, yes, I had to rebuild the cache array from scratch... Some things I noticed - I unassigned all the drives in the pool, started the array and then stopped the array, and reassigned my 4 drives back to the pool, hoping that would "wipe" the cache array. It didn't. I ended up doing a new config, reassigning all my drives to the data array, and starting the array without running a parity check. Things working so far. I then assigned one drive to the cache. Funny enough, as it was previously a btrfs-formatted drive, it just spun up, but empty. Progress. Assigned the other three drives, now a 4-drive cache array, empty and no errors in the syslog. Yippee! To test, I went to start Docker with a new image file - nada. Won't run. Not sure what was going on at that point. I clicked around and noticed something VERY weird: at some point, a "cache" directory, with nothing in it, was created on my data drives, drives one and four. I have no clue how these got there. I deleted them, restarted the array, and bam, Docker starts. I have now copied back my data, started a fresh docker image, reloaded my dockers from the templates, and my VMs are up and running with no errors. Overall this has been the weirdest experience with unRAID I have ever had. It just seemed to be a bunch of errors and problems with no clear solutions, even for just removing a failing drive. Thanks for looking in jonp - at this point everything looks to be running OK. That said, how much testing was done in regards to what happens when drives fail, etc.? It seems I and a few others could only fix our cache arrays with complete rebuilds. As well, it's weird how everything went nuts after a rebalance. Rebalance - is this something that should be run regularly? The command on the web page looks a little weird - -dconvert=raid1 -mconvert=raid1. I am not a Linux guy at all, but it almost looks like two processes being called by the same function? A dconvert and a mconvert? Cheers
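     (On the balance question: those are two options to a single command, not two separate processes - -dconvert sets the RAID profile for data chunks and -mconvert for metadata chunks. A minimal sketch of the full command as run from the console, with the /mnt/cache mount point assumed:)

       btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
       btrfs balance status /mnt/cache     # watch progress
       btrfs filesystem df /mnt/cache      # afterwards, Data and Metadata should both show RAID1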
  12. Have to say I'm pretty upset with how this new cache array is working out. No clear documentation on removing and replacing a drive. Ran a balance, and that seems to have pooched the whole thing. One disk is spinning down now and won't spin up. Docker won't start, won't even make the img file. And a couple of days in, only Gary has replied here... Have to say, I have lost all confidence in the btrfs array setup at this point. If a drive actually failed, at least from what I am seeing, all hell breaks loose. I cannot even stop the array any more, it just hangs, which is a PAIN when trying to troubleshoot. I have a bunch of dockers and VMs and now nada... fun times...
  13. This message is displayed on the console: "mount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error. In some cases useful info is found in syslog - try dmesg | tail or so" Ya, cryptic.
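     (A minimal way to dig into that error from the console, assuming the /dev/loop0 in the message is the loop device backing the Docker image - the /mnt/cache/docker.img path is an assumption:)

       dmesg | tail -n 20            # kernel messages from around the failed mount
       losetup -a                    # confirm which file /dev/loop0 is actually backed by
       ls -lh /mnt/cache/docker.img  # check that the image file exists and has a sane size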
  14. Thanks! I have reached out to him. I think more documentation around the cache pool and dealing with failures is definitely needed. So... tried to stop the array again; the web interface went for a dump on spinning down drives and crashed. Went to the console and used powerdown to shut down the server. Booted back up and... the drive that wasn't doing anything is back in the array, but still generating errors. Going to try and stop the VMs and copy them off. EDIT - cache is offline. Although it looks to be up in the GUI, it is offline... cannot access anything stored on the cache... EDIT 2 - looks like I can access content on the cache now, just super slow. EDIT 3 - VMs showed up, but the Docker tab is missing from the GUI and none are running. Using MC to copy everything off, and I might just blow the whole thing away and start from scratch. If so, big issue with that and failures... tower-syslog-20150927-1317.zip
  15. Well that was a bad thing to do apparently... Now one of the drives, not the one that was having reallocated sector issues, no longer spins up. But it isn't being marked as failed, and a bunch of errors are showing up in the log:
      Sep 27 12:52:12 Tower kernel: sd 1:0:7:0: [sdr] tag#0 CDB: opcode=0x2a 2a 00 00 00 00 c0 00 00 08 00
      Sep 27 12:52:12 Tower kernel: BTRFS: lost page write due to I/O error on /dev/sdr1
      Sep 27 12:52:12 Tower kernel: sd 1:0:7:0: [sdr] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
      Sep 27 12:52:12 Tower kernel: sd 1:0:7:0: [sdr] tag#0 CDB: opcode=0x2a 2a 00 00 0b b1 e0 00 00 60 00
      Sep 27 12:52:12 Tower kernel: sd 1:0:7:0: [sdr] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
      Sep 27 12:52:12 Tower kernel: sd 1:0:7:0: [sdr] tag#0 CDB: opcode=0x2a 2a 00 00 00 00 c0 00 00 08 00
      Sep 27 12:52:12 Tower kernel: BTRFS: lost page write due to I/O error on /dev/sdr1
      Sep 27 12:52:13 Tower kernel: BTRFS: lost page write due to I/O error on /dev/sdr1
      Sep 27 12:52:13 Tower kernel: BTRFS: lost page write due to I/O error on /dev/sdr1
      Sep 27 12:52:13 Tower kernel: BTRFS: lost page write due to I/O error on /dev/sdr1
      Sep 27 12:52:14 Tower kernel: BTRFS: lost page write due to I/O error on /dev/sdr1
      Sep 27 12:52:14 Tower kernel: BTRFS: lost page write due to I/O error on /dev/sdr1
      Sep 27 12:52:14 Tower kernel: BTRFS: lost page write due to I/O error on /dev/sdr1
      If I stop the array, that drive is marked as an unassigned drive within that cache pool... But... it's there... assigned...
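     (A hedged way to see how btrfs itself views the pool and the erroring device from the console - the /mnt/cache mount point is assumed, and /dev/sdr is taken from the log above:)

       btrfs filesystem show /mnt/cache   # list pool members and flag any missing device
       btrfs device stats /mnt/cache      # per-device write/read/flush/corruption error counters
       smartctl -a /dev/sdr               # SMART health of the drive the kernel is complaining about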
  16. Just tried unassigning the drive from the cache pool, starting the array, and seeing what happens. The cache array was unmountable. Leaves me a little concerned about what happens if the drive actually fails... Hopefully someone who has done this can chime in.
  17. Sorry - it's not an individual cache drive. It's part of a cache array, the btrfs pool. As this is a different RAID type than the data array, I wanted to make sure the procedure was the same.
  18. I have one of my drives starting to fail. The reallocated sector count is going up each day. So, I just want to verify: is replacing a cache disk in a cache array the same as replacing a data disk?
      1 - Stop the array
      2 - Unassign the old drive, if it's still assigned
      3 - Power down
      4 - [ Optional ] Pull the old drive (you may want to leave it installed for preclearing or testing)
      5 - Install the new drive
      6 - Power on
      7 - Assign the new drive in the slot of the old drive
      8 - Go to the Main -> Array Operation section
      9 - Put a check in the "Yes, I'm sure" checkbox (next to the information indicating the drive will be rebuilt), and click the Start button
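     (For the btrfs cache pool specifically, the underlying operation is a device replace rather than a parity rebuild; a rough sketch of what that looks like from the console while the pool is mounted, with the device names and the /mnt/cache mount point assumed - not the verified unRAID GUI procedure:)

       btrfs replace start /dev/sdOLD1 /dev/sdNEW1 /mnt/cache   # copy the failing member onto the new disk
       btrfs replace status /mnt/cache                          # watch progress
       btrfs filesystem show /mnt/cache                         # confirm pool membership once it completes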
  19. Well, that worked. I pre-loaded all four of the drivers, shut down Windows, imaged it, and booted it without editing the XML to IDE, and the box booted fine. Now it's just getting some updates and device manager is sorting itself out, but it's up and running. Thanks for all your help and suggestions.
  20. This isn't easy - booted the drive back up in the original machine, logged into Windows, went into the balloon directory, and went to install the virtio drivers. And it fails: "Driver is not intended for this platform". Well now... Google, here I come... EDIT - sometimes I am too fast... I realized I was trying to install the 64-bit version on a 32-bit OS. Tried the 32-bit version and voila. Got the error about it not being able to start, but it is installed now. Now, to re-image and test.
  21. <disk type='file' device='cdrom'>
        <driver name='qemu' type='raw'/>
        <source file='/mnt/user/msiso/virtio-win-0.1.102.iso'/>
        <backingStore/>
        <target dev='hda' bus='ide'/>
        <readonly/>
        <boot order='2'/>
        <alias name='ide0-0-0'/>
        <address type='drive' controller='0' bus='0' target='0' unit='0'/>
      </disk>
      <disk type='file' device='disk'>
        <driver name='qemu' type='raw' cache='writeback'/>
        <source file='/mnt/cache/vm/Surf/vdisk1.img'/>
        <backingStore/>
        <target dev='hdb' bus='ide'/>
        <boot order='1'/>
        <alias name='ide0-0-1'/>
        <address type='drive' controller='0' bus='0' target='0' unit='1'/>
      </disk>
      <disk type='file' device='disk'>
        <driver name='qemu' type='raw' cache='writeback'/>
        <source file='/mnt/cache/vm/Surf/vdisk2.img'/>
        <backingStore/>
        <target dev='hdc' bus='virtio'/>
        <alias name='virtio-disk2'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
      </disk>
  22. Ya, that was the first thing I did, as per the wiki - edit that and change it to IDE.