shat

Members
  • Posts: 276
  • Joined
  • Last visited

Everything posted by shat

  1. Anyone have any suggestions? I just upgraded to Windows 10 and had hoped the issue would resolve itself with the upgrade; it did not.
  2. Additional files: Music.cfg syslog.txt vars.txt ps.txt
  3. At some point in my progress of upgrading to v6, I have seemingly lost the ability to access my user shares, or to see my unRAID v6.1.6 server on the network, from a Windows 7 workstation. I am able to access the shares via AFP and SMB on my MacBook. I have:
     - verified NFS, AFP, and SMB are enabled
     - verified no iptables rules are blocking 138, 139, or 445 (unRAID doesn't appear to use one of those 3 ports for SMB)
     - verified all the shares are exported in the ways I need to access them
     - ensured my workstation and unRAID are in the same workgroup, and that master is set to yes on unRAID
     - disabled any/all network firewall policies on my Windows 7 machine
     Additional steps I tried, without success in the end result of accessing the shares in any way:
     - installed UMich's NFSD4.1; it has its own issue of complaining about AD (I'm not using a directory environment on my home network)
     - modified the Windows 7 Professional-to-Enterprise registry settings so I could attempt to install the NFS client tools via Windows Components, as they are unavailable on Professional but available on Enterprise versions (didn't work)
     I can reach the server via browser at http://10.0.10.50 and I can ping the server without any issues. Running out of ideas. I am attaching a bunch of data, not sure if it is helpful, but here goes: disk.cfg network.cfg ident.cfg smb-extra.conf
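Since the server answers pings and HTTP but the shares are invisible, one quick next step is to confirm the SMB ports are actually reachable from the failing workstation. A minimal bash sketch, assuming the server IP from the post; `check_port` is a made-up helper that uses bash's built-in `/dev/tcp` pseudo-device in place of a real SMB client:

```shell
#!/bin/bash
# check_port HOST PORT -> prints "PORT open" or "PORT closed"
# Uses bash's built-in /dev/tcp pseudo-device, so no extra tools are needed.
check_port() {
  if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$2 open"
  else
    echo "$2 closed"
  fi
}

# SMB over TCP (445) and the NetBIOS session service (139),
# against the server IP mentioned in the post:
for port in 139 445; do
  check_port 10.0.10.50 "$port"
done
```

If 445 reports closed from the Windows box but open from the MacBook, the problem is likely on the Windows side (firewall or SMB client); if it reports closed from both, it's worth re-checking the server's export settings.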
  4. The drives are still green, and one of the ones showing an error doesn't have any errors in its own report, so I am a bit confused.
  5. I finally got around to upgrading to 6.1.6 this evening. I performed a parity check before the upgrade and proceeded. The upgrade seems to have completed successfully, as I have rebooted and run new perms without any issue. However, I noticed I have a few drives with some SMART issues/errors occurring. I've attached the SMART reports from the two drives; the parity drive is also showing errors (unrelated to SMART, apparently), but its report is attached as well. One of the drives has 1012 errors:
     - Parity 0: Reallocated sector count 5
     - Data 3: Reported uncorrect 1012
     - Data 10: Current pending sector 1, Offline uncorrectable 3
     Attachments are named per drive. Suggestions? I was looking to begin using unBALANCE, perhaps to begin converting to BTRFS (preferred) or XFS. I really want to mount up a few 1+ TB SSDs for cache. Been a while since I posted; hello to all, btw. data10_smart.txt parity_smart.txt data3_smart.txt
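For attributes like these, the raw values in the last column of a `smartctl -A` report are the numbers to watch. A small awk sketch that pulls out just the reliability-critical ones; the report text below is mocked up to match the counts in the post, and on the server you would feed in `smartctl -A /dev/sdX` instead:

```shell
#!/bin/bash
# Extract the reliability-critical SMART attributes from a smartctl -A style report.
# The report text is a mocked-up sample matching the counts in the post;
# normally you would pipe in: smartctl -A /dev/sdX
report='  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       5
187 Reported_Uncorrect      0x0032   001   001   000    Old_age   Always       -       1012
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       1
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       3'

# Field 2 is the attribute name, field 10 is the raw value.
echo "$report" | awk '$2 ~ /Reallocated_Sector_Ct|Reported_Uncorrect|Current_Pending_Sector|Offline_Uncorrectable/ {print $2 "=" $10}'
```

A Reported_Uncorrect count in the thousands is the one I'd watch most closely; a small Current_Pending_Sector value can clear on the next write to the sector, so it is worth re-checking after the next parity check.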
  6. Fellow unRAID(ers)! It's been a long, long time since I've posted on the forums, and it seems unRAID has made some seriously badass progress on development. I've been keeping an eye on it, but haven't done anything with my existing unRAID box, which is on 5.0.1-RC1. I rebooted it for the first time in 214 days today and it seems that emhttp refuses to start despite my best troubleshooting efforts. It has also become apparent that I should likely begin the process of upgrading to a new version. I have a few questions:
     1) Is upgrading from 5.0.1-RC1 to a more up-to-date 6.x version feasible and painless? I know in the past upgrading versions has led to the array becoming unstable or unusable for many people, so I am curious whether anyone else has performed the upgrade (surely someone has), how it went, and anything I should be aware of first.
     2) Anyone know if Influencer's plugins work on 6.x?
     3) It seems there are numerous virtualization options available for the 6.x branch of unRAID, which is great. My "tower" is a Xeon 1230v2 with 32GB of memory, so being able to use it for more than just unRAID would be nice, though it will likely still be unRAID related; for example, offloading PMS to its own VM using 4 cores instead of all of them, same for sab, etc.
     So, enlighten me if you will. Is the process of upgrading still as straightforward as before, or has it changed? I prefer KVM over Xen these days, but does anyone have opinions as to which runs better on 6.x?
  7. I have had this very same issue, and it turned out to be because I was transcoding audio. If your stereo, soundbar, or television doesn't support the audio codec being received by the Roku via Plex, it gets transcoded. For example, my 60" Vizio TV does not support DTS audio, so even though there is an optical cable from the TV to the sound bar, which does support it, Plex still transcodes. There is a log file in Plex you can tail and grep for the details of the transcoding taking place. A "1" means real-time direct stream, and anything slower is transcoded. I would dig on the Plex forums as well. If you're a Plex Pass member, the gurus there are always helpful. Just like us with unRAID. Sent from my iPhone using Tapatalk
  8. Anyone ordered and received? Sent from my iPhone using Tapatalk
  9. Nice man. Glad you got it tidied up. Loving my little Lian Li mini-itx rigs. Have 4 of them running now. No specific reason why other than to store stuff, heh.
  10. Garycase: what's the update on this project? Sent from my iPhone using Tapatalk
  11. Trying to help a friend with his unRAID box; he has the same issue with libssl.so.0. Everything worked fine for a day; he shut it down, took it to his house, turned it on, and nothing. You can telnet, ping, etc., but plugins are not getting installed.
  12. I've noticed my server running a little slower than normal these last few days, but the drives are all passing SMART tests, parity checks fine, and the overall hardware seems to be in good health. I noticed these errors in my log file this evening; not sure how long they have been showing up, as I just checked: tail -n 40 -f /var/log/syslog

      Dec 2 19:53:01 Hoard crond[1234]: exit status 1 from user root /usr/lib/sa/sa1 2 1 & >/dev/null
      Dec 2 19:53:01 Hoard kernel: crond[25978]: segfault at 4001e51c ip 4001e51c sp bfdc0b98 error 15 in ld-2.11.1.so[4001e000+1000]
      Dec 2 19:54:01 Hoard crond[1234]: exit status 1 from user root /usr/lib/sa/sa1 2 1 & >/dev/null
      Dec 2 19:54:01 Hoard kernel: crond[25992]: segfault at 4001e51c ip 4001e51c sp bfdc0b98 error 15 in ld-2.11.1.so[4001e000+1000]
      Dec 2 19:55:01 Hoard crond[1234]: exit status 1 from user root /usr/lib/sa/sa1 2 1 & >/dev/null
      Dec 2 19:55:01 Hoard kernel: crond[26006]: segfault at 4001e51c ip 4001e51c sp bfdc0b98 error 15 in ld-2.11.1.so[4001e000+1000]
      Dec 2 19:56:01 Hoard crond[1234]: exit status 1 from user root /usr/lib/sa/sa1 2 1 & >/dev/null
      Dec 2 19:56:01 Hoard kernel: crond[26025]: segfault at 4001e51c ip 4001e51c sp bfdc0b98 error 15 in ld-2.11.1.so[4001e000+1000]
      Dec 2 19:57:01 Hoard crond[1234]: exit status 1 from user root /usr/lib/sa/sa1 2 1 & >/dev/null
      Dec 2 19:57:01 Hoard kernel: crond[26038]: segfault at 4001e51c ip 4001e51c sp bfdc0b98 error 15 in ld-2.11.1.so[4001e000+1000]
      Dec 2 19:58:01 Hoard crond[1234]: exit status 1 from user root /usr/lib/sa/sa1 2 1 & >/dev/null
      Dec 2 19:58:01 Hoard kernel: crond[26054]: segfault at 4001e51c ip 4001e51c sp bfdc0b98 error 15 in ld-2.11.1.so[4001e000+1000]
      Dec 2 19:59:01 Hoard crond[1234]: exit status 1 from user root /usr/lib/sa/sa1 2 1 & >/dev/null
      Dec 2 19:59:01 Hoard kernel: crond[26067]: segfault at 4001e51c ip 4001e51c sp bfdc0b98 error 15 in ld-2.11.1.so[4001e000+1000]
      Dec 2 20:00:01 Hoard crond[1234]: exit status 1 from user root /usr/lib/sa/sa1 2 1 & >/dev/null
      Dec 2 20:00:01 Hoard kernel: crond[26083]: segfault at 4001e51c ip 4001e51c sp bfdc0b98 error 15 in ld-2.11.1.so[4001e000+1000]
      Dec 2 20:01:01 Hoard crond[1234]: exit status 1 from user root /usr/lib/sa/sa1 2 1 & >/dev/null
      Dec 2 20:01:01 Hoard kernel: crond[26099]: segfault at 4001e51c ip 4001e51c sp bfdc0b98 error 15 in ld-2.11.1.so[4001e000+1000]
      Dec 2 20:02:01 Hoard crond[1234]: exit status 1 from user root /usr/lib/sa/sa1 2 1 & >/dev/null
      Dec 2 20:02:01 Hoard kernel: crond[26112]: segfault at 4001e51c ip 4001e51c sp bfdc0b98 error 15 in ld-2.11.1.so[4001e000+1000]
      Dec 2 20:03:01 Hoard crond[1234]: exit status 1 from user root /usr/lib/sa/sa1 2 1 & >/dev/null
      Dec 2 20:03:01 Hoard kernel: crond[26124]: segfault at 4001e51c ip 4001e51c sp bfdc0b98 error 15 in ld-2.11.1.so[4001e000+1000]
      Dec 2 20:04:01 Hoard crond[1234]: exit status 1 from user root /usr/lib/sa/sa1 2 1 & >/dev/null
      Dec 2 20:04:01 Hoard kernel: crond[26136]: segfault at 4001e51c ip 4001e51c sp bfdc0b98 error 15 in ld-2.11.1.so[4001e000+1000]
      Dec 2 20:05:01 Hoard crond[1234]: exit status 1 from user root /usr/lib/sa/sa1 2 1 & >/dev/null
      Dec 2 20:05:01 Hoard kernel: crond[26150]: segfault at 4001e51c ip 4001e51c sp bfdc0b98 error 15 in ld-2.11.1.so[4001e000+1000]
      Dec 2 20:06:01 Hoard crond[1234]: exit status 1 from user root /usr/lib/sa/sa1 2 1 & >/dev/null
      Dec 2 20:06:01 Hoard kernel: crond[26166]: segfault at 4001e51c ip 4001e51c sp bfdc0b98 error 15 in ld-2.11.1.so[4001e000+1000]
      Dec 2 20:07:01 Hoard crond[1234]: exit status 1 from user root /usr/lib/sa/sa1 2 1 & >/dev/null
      Dec 2 20:07:01 Hoard kernel: crond[26182]: segfault at 4001e51c ip 4001e51c sp bfdc0b98 error 15 in ld-2.11.1.so[4001e000+1000]
      Dec 2 20:08:01 Hoard crond[1234]: exit status 1 from user root /usr/lib/sa/sa1 2 1 & >/dev/null
      Dec 2 20:08:01 Hoard kernel: crond[26194]: segfault at 4001e51c ip 4001e51c sp bfdc0b98 error 15 in ld-2.11.1.so[4001e000+1000]
      Dec 2 20:09:01 Hoard crond[1234]: exit status 1 from user root /usr/lib/sa/sa1 2 1 & >/dev/null
      Dec 2 20:09:01 Hoard kernel: crond[26229]: segfault at 4001e51c ip 4001e51c sp bfdc0b98 error 15 in ld-2.11.1.so[4001e000+1000]
      Dec 2 20:09:07 Hoard in.telnetd[26230]: connect from 10.0.10.100 (10.0.10.100)
      Dec 2 20:09:09 Hoard login[26231]: ROOT LOGIN on '/dev/pts/0' from '10.0.10.100'
      Dec 2 20:10:01 Hoard crond[1234]: exit status 1 from user root /usr/lib/sa/sa1 2 1 & >/dev/null
      Dec 2 20:10:01 Hoard kernel: crond[26740]: segfault at 4001e51c ip 4001e51c sp bfdc0b98 error 15 in ld-2.11.1.so[4001e000+1000]
      Dec 2 20:11:01 Hoard crond[1234]: exit status 1 from user root /usr/lib/sa/sa1 2 1 & >/dev/null
      Dec 2 20:11:01 Hoard kernel: crond[27635]: segfault at 4001e51c ip 4001e51c sp bfdc0b98 error 15 in ld-2.11.1.so[4001e000+1000]

      unRAID 5.0.1-rc1 on an Intel i5 with 16GB memory (only 8GB is usable, I think).
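To see how long this has actually been going on, you can count the segfault lines per hour straight out of the syslog instead of tailing it. A sketch against a three-line sample copied from the log above; on the box itself you would read `/var/log/syslog` rather than the inlined here-string:

```shell
#!/bin/bash
# Count crond segfault lines in a syslog excerpt, grouped by hour.
# Sample lines copied from the post; on the server, feed in /var/log/syslog instead.
log='Dec  2 19:53:01 Hoard kernel: crond[25978]: segfault at 4001e51c ip 4001e51c sp bfdc0b98 error 15 in ld-2.11.1.so[4001e000+1000]
Dec  2 20:00:01 Hoard kernel: crond[26083]: segfault at 4001e51c ip 4001e51c sp bfdc0b98 error 15 in ld-2.11.1.so[4001e000+1000]
Dec  2 20:01:01 Hoard kernel: crond[26099]: segfault at 4001e51c ip 4001e51c sp bfdc0b98 error 15 in ld-2.11.1.so[4001e000+1000]'

# $1=month, $2=day, $3=HH:MM:SS -> bucket by hour and count
echo "$log" | grep 'segfault' \
  | awk '{split($3, t, ":"); print $1, $2, t[1] ":00"}' \
  | sort | uniq -c
```

Since every segfault lines up with the `/usr/lib/sa/sa1` sysstat cron entry firing, it looks like the forked cron child is dying in the loader; commenting that entry out of root's crontab should at least quiet the log while the underlying cause is sorted out.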
  13. I can align with gary and tyrindor. I have cleared a total of 55 WD red drives, all 3TB, largest bulk was 20 drives. Of the batch of 20, one failed during clear. Got a new one in and no issues. Typically I don't like WD, especially because of the failure rates I have experienced with greens, but the reds are quite nice. Sent from my HTC One using Tapatalk 4
  14. I am currently away until sometime after 10/07. Once I return home, I will begin processing new requests. Thanks for your patience.
  15. Do it. Let me know how it builds. Send me some photos. I need to put a mid-range Quadro in her desktop; I like keeping her smiling. She has been happy that I've been feeling modestly better and getting out a bit these last two days. I ordered Ubiquiti EdgeMAX routers and Ubiquiti UniFi access points for five of my friends for being so supportive. Now I need to support setting them up.
  16. Tried editing, but it wouldn't let me. The sales comment was meant to be extended with: I don't do "sales"; I just recommend quality, working products, services, and software. Yours works, so I suggest it.
  17. I don't follow. When we get custom cases fabricated, they're cheaper in higher quantities. What is your "higher qty"? For example, Supermicro's MOQ a couple of years ago was 3,000 pcs; I haven't bothered asking them lately. Hey, I'm not bashing you or unRAID; it's just my flavor. Plus, honestly, I don't even remember typing those posts. The first one, slightly; the second one, about quantity, I have no clue I typed. But I am talking quantity: we ordered custom-fabricated cases for some industry-specific builds we did, and the price was decent at 500-1,000 units, but there was no major break until we hit 5,000 units. We ended up buying close to 17,400 in all. We negotiated 15,000 on terms, so we got them in batches of 100 at a time for the same price as 15k, but only because I knew we would go through them. If you knew you would sell, or had already sold, 15k, you'd be in a similar situation. I'll admit I am not on top of iStar; they seem to have well-designed products, but I've only ever used one model of their 5.25"-to-2.5" drive caddies. It was good. Apologies if I offended; that was not my intent. Keep doing what you do. I keep referring friends; I have 19 now that have switched from FreeNAS to unRAID keys. I'll keep them coming. Regardless of my personal opinions about your chassis, unRAID is still great. Just need people like me out there selling it for you.
  18. First, you're absolutely dead-on about the quality of Lian Li's PC-Q25B. It is an outstanding case; I am really impressed with it. I bought multiple more of that model, and another that is similar but with less drive space and a 5.25" slot for a CD-ROM. The latter model is to build out some small-footprint desktops for a friend. I picked up an extra to build my wife a proper PC so she isn't always using her massive Precision-series Dell laptop or the Wyse zero client at home. I'm going to continue using the Q25 because it rocks for small builds. I still have my 15-bay Helios with (3) 5-in-3 Supermicro cages and a Norco 12-bay 2U. All that to say I have plenty of storage space and mostly just enjoy tinkering with more as a hobby, or some form of twisted, expensive addiction.
     I intend on having so many SSDs because I currently have, literally (I can snap a photo later when I go into the back home office): ( ) 500GB Samsung 830s, (11) 256GB Samsung 840 Pros, (5) Samsung 120/128GB EVOs (new, just came in, forget the size), (10) 240GB Mushkin Chronos drives, and an assortment of lightly used/tested drives from 8GB to 500GB. The previous counts are just new-in-box. I have so many because I keep doing massive SSD DB cache builds, similar to Fusion-io but on my own. I end up having drives for spares, warranty, etc., and I always order if there is a good deal. So 8 SSDs gives me a good array of just SSDs for some massive disk IO tests. You have to remember I run the Usenet index and always test new builds at home and in the DC. I also have other apps I develop that require even higher IO. My biggest database is doing about 3M IOPS right now with a combo of SSD cache, an SSD array for archive, and 15k RPM drives.
     As far as fabrication goes, between some friends I can have just about anything fabricated, machined, etc. I just got some new stuff in that I had her design for my Bitcoin ASIC rigs, to better manage hot/cold air in a smaller footprint that is more efficient than doing it in our APC inflow cabinets.
     I'm a geek, always tinkering, but more extreme, I guess. With my health being as crappy as it has been for a few months now (headed to Mayo in Rochester this Saturday), I am working from home and get bored. I also volunteer with a local school's program, the Tulsa Engineering and Sciences Program for middle school through high school students, to help build robots, from FLL (Lego League) through larger, non-kit robots. They need some storage, and the schools can't or won't fork out, so I am building them a 7x2TB array with RAID 6 and a 120GB SSD cache. Again, Lian Li. As far as the PSU goes, I stopped using the CX430 after the first build and went SFX. Much better; you're right about that. Might do some fan swapping to some Scythe Gentle Typhoons, maybe. All that ranting... in short: why not? I get bored/distracted easily. Nice. Good to hear. Performing well?
  19. Parts came in for the second build. I have my wife drafting and modeling a very similar chassis that will hold seven 3.5" and eight 2.5" drives.
  20. I don't follow. When we get custom cases fabricated they're cheaper in higher qty.
  21. I'm using a CX430 as well. I dig the build so much that I ordered parts for four more.
  22. So, I got bored and built another server. This one is strictly for Linux and ZFS, but the same compulsive hardware-buying behavior applies, so I thought I'd share. It isn't a new design; several have done exact builds, but it's a mini-ITX, i5, 16GB memory, and six WD 3TB Reds. I also stacked in four Samsung 840 500GB SSDs and a single 256GB 840 Pro as pool cache. Running two RAID 5 pools, and right now toying with RAID 10 on the SSDs, though they will likely come out. Attached are some incomplete photos. Still need to put the PSU in, and the 1015 controller.
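For reference, the pool layout described above could be sketched roughly like this with `zpool`. This is a hypothetical sketch, not the actual build commands: the device names are invented, "RAID 5" maps to raidz1 in ZFS terms, and I'm assuming the 840 Pro serves as an L2ARC read cache:

```shell
#!/bin/bash
# Hypothetical sketch of the described layout; all device names are placeholders.

# Two raidz1 ("RAID 5") pools from the six 3TB Reds:
zpool create tank1 raidz /dev/sda /dev/sdb /dev/sdc
zpool create tank2 raidz /dev/sdd /dev/sde /dev/sdf

# The 256GB 840 Pro as an L2ARC read cache on the first pool:
zpool add tank1 cache /dev/sdg

# The four 500GB 840s as a striped-mirror ("RAID 10") pool:
zpool create ssdpool mirror /dev/sdh /dev/sdi mirror /dev/sdj /dev/sdk
```

Splitting six drives into two three-disk raidz1 vdevs trades capacity (two parity disks instead of one) for the ability to lose one disk in each pool independently; a single six-disk raidz2 pool would give the same capacity with any-two-disk redundancy.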