revco

Members · 37 posts

  1. Well, unfortunately, simply turning up libx86's 64-bit version wasn't enough to get S2RAM working. I was able to find "libx86-1.1-x86_64-1.txz" in the Slackware repository, but still no dice even with a 64-bit x86 library installed. I think it fundamentally comes down to s2ram being a 32-bit binary, as shown in my file output above. Per the Alien link I posted above about 32/64-bit libraries, I tried that path and got all the libraries converted over to multilib, but that still didn't allow me to execute s2ram. It also broke a few things in Unraid; I was getting undefined errors all over the web interface and who knows what else. (I didn't keep it around long enough to figure out what all broke.) It would have been nice, because I figured I could run all the commands to rebuild the libraries from "go" on boot, but alas, that's not a viable path. I did go so far as to try to recompile the suspend application (the parent project for s2ram) and ran into a host of troubles there. I haven't been able to determine whether it's truly impossible to compile a 64-bit version of s2ram, but the suspend project is no longer supported and clearly ended in the 32-bit era. A few things I've read indicate it might not be possible. I may give it another go at some point, but for now I think I just need to be happy with automatic shutdowns and thankful my machine will WOL from that state. Of course, if anyone has other ideas, I'd be really happy to hear about them.
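     For anyone who wants to run the same checks on their own box, here's a minimal sketch of what I ended up looking at; these are all standard tools, but the binary path is just an example, so adjust it to wherever your copy of s2ram lives:
        file /boot/s2ram                 # shows whether the binary is a 32-bit or 64-bit ELF
        uname -m                         # shows the kernel architecture (x86_64 here)
        ls -l /lib/ld-linux.so.2         # the 32-bit dynamic loader a 32-bit binary would need
        ldconfig -p | grep libx86        # whether the runtime linker can find libx86 at all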
  2. I've been doing a little more digging and I think this has to do with where Slackware 14.1 references the libraries that are installed. According to the link below, /lib64 and /usr/lib64 are the default places where Slackware looks for libraries. Installing the "old" 32-bit version of libx86 puts it into /usr/lib, so I don't believe the library is getting found properly. There's a fair bit of information on making Slackware multi-library capable, which I'll keep studying. In the meantime, if anyone has a fix or further helpful info, I'd appreciate it.
        root@gaia:/usr/lib# ls -l
        total 8
        lrwxrwxrwx 1 root root   11 Jan 10 12:51 libx86.so -> libx86.so.1*
        -rwxr-xr-x 1 root root 7416 Nov 30  2008 libx86.so.1*
     http://alien.slackbook.org/dokuwiki/doku.php?id=slackware:multilib
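     If it helps anyone else dig, a rough sketch of how to see where the runtime linker is actually looking (standard glibc/Slackware tools; the grep target is just the library in question here):
        ldconfig -p | grep libx86      # is libx86 in the linker cache at all?
        cat /etc/ld.so.conf            # directories the linker cache is built from
        # If /usr/lib isn't listed, it can be added and the cache rebuilt:
        # echo "/usr/lib" >> /etc/ld.so.conf && ldconfig
     Note that on a pure 64-bit system this still won't help a 32-bit binary that has no 32-bit loader to run under, which is where I ended up in the post above.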
  3. Hey all, So a couple months ago, I took the plunge and upgraded my 4.7 system all the way to 6.1.4, and now 6.1.6. All went well; I did a "clean" migration to avoid problems and most everything is working well. Going back simply isn't an option, as I now want 2TB+ drive support and the GUI improvements. My machine has had problems with traditional S3 sleep, where the system would respond to WOL but video wouldn't come back and the network would die shortly after WOL. I was able to get my 4.7 box to sleep and return properly with S2RAM, so this weekend I've been trying to get it working in 6.1. I've got all the script components working properly, which leverage the installed bwm-ng and libx86 packages, but I believe my problem is lower than that. I can't even manually call S2RAM, which the automated script I'm using also confirms. When I execute S2RAM I get:
        root@gaia:/boot# s2ram -h
        -bash: ./s2ram: cannot execute binary file
     I don't recall running into this issue on the 4.x install, but I also admit it's been quite a while. I've checked the usual suspects: the file is executable, libx86 v1.1 installs successfully (both on boot and manually), I've re-downloaded S2RAM twice from two sources, tried running it with ./ and many other things. My best guess at this point, after some rather extensive research, is that this may be a 32-bit vs. 64-bit issue. My S2RAM file was compiled on/for the 32-bit version...I didn't compile it, it's just what I found on the forums here. My understanding is that a 64-bit OS "should" be able to execute 32-bit applications, but maybe not in this case?
        root@gaia:/boot# uname -a
        Linux gaia 4.1.13-unRAID #1 SMP PREEMPT Fri Nov 13 20:26:59 PST 2015 x86_64 AMD Sempron 145 Processor AuthenticAMD GNU/Linux
        root@gaia:/boot# file s2ram
        s2ram: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), not stripped
        root@gaia:/boot# ldd -r s2ram
        not a dynamic executable
     I've also read there are some differences in sleep behavior introduced in 6.x, but I haven't been able to positively identify what they are, and I don't know why they would result in the above error. I feel like I've entirely exhausted my Google-fu and have re-read every single document I can find with no references to my problem. For giggles, I did try using the S3 plugin from Dynamix, but alas my problems above still present themselves. I'd rather avoid a major hardware swapout at this point; the system has been rock stable for many years now. I *can* get the Dynamix plugin to work using shutdown, but I prefer the faster sleep-mode revival. If I can't get this working, that's probably the route I'll go, as it's important for me to save the power. Has anyone gotten S2RAM working in 6.1.x? Am I missing something stupid? Appreciate any insight you might be able to provide.
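     Not from the original post, but a quick way to see what the 6.1 kernel itself claims to support for sleep, independent of the s2ram binary, is the standard sysfs interface (whether the hardware actually resumes cleanly is a separate question):
        cat /sys/power/state          # lists supported sleep states, e.g. "freeze mem disk"
        # If "mem" is listed, a kernel-native suspend can be tested without s2ram:
        # echo -n mem > /sys/power/state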
  4. Sorry to resurrect my old thread here, but I have some new developments and thought I'd provide a brief update on this server. Sadly, the hardware above is out of date...but (at least to me), these UCD threads are about the lifecycle of a server. I experienced my first HDD failure over Labor Day weekend. Not bad, considering I built the server close to two years ago. It failed during a parity check; glad it's doing these once a month! The email notifications I set up were well worth it. I knew I had a problem when I saw my "UnRAID" Outlook folder with an excessive number of messages for the day...I strongly recommend setting up that feature if you haven't already. The replacement went smooth as heck, textbook really. This is precisely the reason I went with an unRAID solution, and it's proven its worth to me. Fortunately, I had a drive ready to rock as a replacement, and I have another drive on order to replace my standby. It took about 7.5 hours for me to rebuild the array using 2.0TB drives. I'll probably be building my second server later this year; I will soon have some spare hardware when I finish my main PC's upgrade in a month or two. I'm planning a virtualized build based on ESXi, since my hardware is well above and beyond the requirements for unRAID alone. I'm strongly considering a 5.0 beta build (good God, 5.x is STILL in beta? Haha.) and beefing up the controllers to accept more drives than the server above. I also want to build it around the newer 4.0TB drives, since they've come down quite a bit in price. Anyway, I just wanted to say thanks again to the developers and community here. If you're on the fence about unRAID, take it from me...it's worth every penny when you see a drive red-ball and you know your data is still safe.
  5. RDP does have some built-in 128-bit encryption, so it's considered relatively safe to use over the public internet by itself. That said, encrypting the traffic within a VPN tunnel (either LAN-to-LAN or client-based) will provide that much more security. A VPN means you don't have to have port 3389 open to the outside internet. If a VPN won't work, I do STRONGLY suggest locking down the IP addresses that can access the RDP session via your firewall (e.g. deny all IPs except the ones you'll use to connect). If that's not practical (e.g. you connect from hot spots, DHCP sites, etc.), then it's paramount that you secure your box with strong passwords. Removing any common administrator usernames is also considered best practice. The above will thwart pretty much all of the commonly used RDP password crackers out there. You're still at risk for brute force, but that's such a slow process over the internet that it's not very common to see in use, except for high-value targets.
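     As an illustration of the "deny all except known IPs" idea, here's what the rule looks like in Linux iptables terms (something I'm adding for reference, not from the original post; most home routers express the same thing in their own firewall UI, and 203.0.113.5 is a placeholder address):
        # Allow RDP (TCP 3389) only from one trusted address, then drop everyone else.
        iptables -A INPUT -p tcp --dport 3389 -s 203.0.113.5 -j ACCEPT
        iptables -A INPUT -p tcp --dport 3389 -j DROP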
  6. I built my server back in early October. http://lime-technology.com/forum/index.php?topic=15756.0 My planned upgrades for this box are:
       • Extend my 12V 4-pin CPU power cable so I can route it better
       • Add 4 more drives when needed
       • Upgrade parity to a 7200RPM enterprise/RAID drive
     At this point, I don't think I'm going to tweak it too much. I've been very interested to see the VMware implementations of unRAID. Big picture, I intend to build a second server with every intention of having it be an ESXi-based, server-level build.
  7. With the info you presented, I would probably suggest scrapping the four 10/100 NICs for a single gigabit adapter. You're going to get comparable performance to the four NICs with a lot less complication. Also, Zuhkov is correct that ESXi would be required to do this, since you need access to vSphere to modify the network parameters and to create virtual switches and such. Depending on what you have on the network side, I've found it's a lot better to aggregate multiple NICs into a port channel (or LACP group) so that the VM instances can utilize the aggregated bandwidth of multiple NICs. You would require a managed switch that can do LACP trunks, however. That way, you can aggregate even more out of any given server, if need be.
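     For the curious, the switch side of an LACP group looks roughly like this on an IOS-style managed switch (my own illustration, not from the original post; interface numbers are placeholders and other vendors use different syntax):
        ! Bundle two physical ports into LACP group 1
        interface range GigabitEthernet1/0/1 - 2
          channel-group 1 mode active
        ! The logical aggregate then carries the VM traffic
        interface Port-channel1
          switchport mode trunk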
  8. Wait, am I reading this right? You had heat problems with 4-in-3 cages and you think running without cages will reduce your heat? Or you had other cages and you're switching to 4-in-3's? The latter makes sense, but I couldn't quite make out your second sentence. As for #2, I haven't done it myself but I've read several posts about starting anew. Might try searching around a bit and I'd bet you'd find good instructions.
  9. Yeah, I can't say for sure. Interestingly, I troubleshot a nearly identical problem the other night: http://lime-technology.com/forum/index.php?topic=15867.msg147494#msg147494 Same error, both shortly after adding a disk, and a similarly mysterious resolution. In his case, his drive did mount later...which I didn't see in your logs. Both of you are running late betas, you 5b12, him 5b10. I'm not normally one to scream "bug", but I'd be negligent not to point out that you are running betas, which means things aren't *fully* baked yet. I think this warrants one of the heavyweights around here taking a look at it. Maybe they'll see this and comment?
  10. I'm not an expert, but I'll try to take a stab at this. Disk 5 is showing some problems in the syslog:
        Oct 24 07:36:44 unraid kernel: REISERFS warning (device md5): sh-2021 reiserfs_fill_super: can not find reiserfs on md5
        Oct 24 07:36:44 unraid logger: mount: wrong fs type, bad option, bad superblock on /dev/md5,
        Oct 24 07:36:44 unraid logger: missing codepage or helper program, or other error
        Oct 24 07:36:44 unraid logger: In some cases useful info is found in syslog - try
        Oct 24 07:36:44 unraid logger: dmesg | tail or so
        Oct 24 07:36:44 unraid logger:
        Oct 24 07:36:44 unraid emhttp: _shcmd: shcmd (135): exit status: 32
        Oct 24 07:36:44 unraid emhttp: disk5 mount error: 32
      Is this drive added to the array and showing a good status? My initial guess is that you may have corruption on that disk, and that's what's causing this mounting issue. If it is corruption, I'd suggest checking your power availability and your connections, and possibly trying a different SATA port, to root out the initial cause. (There may be no hardware cause...it could be something like an improper shutdown, too.) Even if you repair the file system, if something is still creating corruption on the disk, any fix will be short-lived. Can you attach the SMART report, and if possible, the initial preclear report from that disk? You might also try running reiserfsck on that disk, because it may be able to clear the problem up.
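      A rough sketch of that reiserfsck step, in case it's useful (my addition, not from the original post; on unRAID the array generally needs to be started in maintenance mode, or the disk otherwise unmounted, before checking it, and md5 is just the disk from the log above):
        # Read-only check first; only move to repair options if it reports problems.
        reiserfsck --check /dev/md5
        # If the check recommends it, the least invasive repair is:
        # reiserfsck --fix-fixable /dev/md5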
  11. Here's something important to note...if I recall correctly, ReiserFS doesn't support background TRIM. (Reiser4 & ext4 do, BTW.) It may support offline TRIM, which is less dependent on the file system. Here's an article that I used to get TRIM working under Ubuntu. Essentially you have to edit your fstab file and enable "discard"...which equals TRIM in the Linux world. I have no advice on how to do this in unRAID....and whether this should be used with caution is up to you to figure out. Personally, I don't think the performance benefits are worth it...but that's me. http://www.howtogeek.com/62761/how-to-tweak-your-ssd-in-ubuntu-for-better-performance/
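      For reference, the fstab change the article describes looks something like this on an ext4 file system (an illustrative line I'm adding, not from the original post; the device and mount point are placeholders):
        # /etc/fstab — "discard" enables online TRIM for this ext4 mount
        /dev/sda1   /mnt/ssd   ext4   defaults,noatime,discard   0   2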
  12. This stood out to me in your syslog:
        Oct 22 05:10:25 Tower dhcpcd[1145]: broadcasting DHCP_REQUEST for 192.168.2.11
        Oct 22 05:10:25 Tower dhcpcd[1145]: dhcpIPaddrLeaseTime=86400 in DHCP server response.
        Oct 22 05:10:25 Tower dhcpcd[1145]: dhcpT1value is missing in DHCP server response. Assuming 43200 sec
        Oct 22 05:10:25 Tower dhcpcd[1145]: dhcpT2value is missing in DHCP server response. Assuming 75600 sec
        Oct 22 05:10:25 Tower dhcpcd[1145]: DHCP_NAK server response received
        Oct 22 05:10:25 Tower kernel: r8169: eth0: link up
        Oct 22 05:10:25 Tower dhcpcd[1145]: broadcasting DHCP_DISCOVER
      What's the lease time on your DHCP server? It looks like the server was doing a DHCP renewal and received a NAK from the DHCP server, meaning that IP had been issued elsewhere or, for whatever reason, the DHCP server didn't want to give your server the same IP it had previously. That could definitely interrupt communications. We could troubleshoot that, but honestly, I would suggest using a static IP address on your server. That way, you won't be changing IPs on your server and potentially disrupting communications. Hope that helps.
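      If I remember right, the static IP settings live in the network config file on the flash drive; a sketch of what that looks like is below, but the file path, variable names, and addresses are my best recollection plus example values, so double-check them against your own config before editing:
        # /boot/config/network.cfg (example values only)
        USE_DHCP="no"
        IPADDR="192.168.2.11"
        NETMASK="255.255.255.0"
        GATEWAY="192.168.2.1"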
  13. Lastly, think about your migration strategy. It sounds like you're planning to re-use the drives from your gaming rig. Note that you can't just take a drive that's formatted FAT32/NTFS/exFAT and drop it into unRAID; it will first need to be formatted for ReiserFS, which will kill all the data on the drive. (It's also suggested to preclear any disks, even if they've been used previously...this is important to ensure the integrity of the system!) If it were me, I would try to clear up at least one drive from your system (more would be better) and then add it to unRAID. Transfer data over from your other drives so you can clear them, and then add them to the system. Repeat until you've got all the drives you currently want in unRAID. Also, remember that you probably want parity...so one of the largest drives will need to be dedicated to that. You don't need to add parity right away, which lets you migrate your data a little more easily, but your data will technically be unprotected until you do. If you're at or near capacity on your current drives, it's going to be impossible to migrate without purchasing new drives. Unfortunately, with the floods in Thailand right now, drive prices are on the rise...and I've even seen some shortages.
  14. That PSU might be dicey, depending on how you expand and how quickly. It has two separate 12V rails, one at 16A and the other at 17A. Plan on at least 2A per green drive and 3A per 7200RPM/older drive. Then you need some additional amperage for your CPU, which I'd budget at least 6-8A for safety, plus any other 12V devices in your system. How you hook everything up will make the difference between success and failure with that PSU...you can't exceed either rail's rating (or, I'd recommend, even come within a couple of amps of it) or you'll have problems. Power problems are a pain to troubleshoot, as they can mask themselves as any number of issues. Unless you can precisely calculate the load on both rails, it could present problems for you.
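      To make the math concrete (illustrative numbers only, not a spec for any particular build):
        6 green drives x 2A      = 12A
        CPU budget               =  7A
        Load on one 12V rail     = 19A  (already over the 16A rail)
      A load like that has to be split across both rails (16A + 17A = 33A combined), and whether it actually splits that way depends on which connectors feed which rail.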
  15. Looks like a good build. Here are some of my comments... Fully populated with 5-in-3's, that case could handle up to 16 drives (15 in the 5-in-3's and one spare in the case). You're spot on for looking for MBs with 2x PCIe slots, but I would encourage you to definitely look for something with onboard video so you don't have to use a slot for video. That way, you can run 2x AOC-SASLP-MV8 and have enough ports to run all 16 drives. Otherwise, you'll be limited to 14 drives (with 6 SATA ports presumed on the MB) and you'll waste a slot or two. If you see wisdom in this, I'd definitely encourage you to specifically look for MBs that are known to support 2x AOC-SASLP-MV8's, because I've read reports of some MBs not playing nice with two SATA controller boards. I'm not sure if you plan to migrate the drives from your existing machine? If so, you'll be nearly at capacity (assuming you keep at least one local drive for your system drive) and your upgrade to 5-in-3's will be forced sooner. It may be wise just to look at 5-in-3's now...you'll like it better, trust me. Also, if you go with a board with an 8111E chipset, I'd just plan on an Intel PCI NIC. They're cheap, and you'll avoid problems. Supposedly support is getting better in the late betas, but it flat out won't work in 4.7. Also, I'd say that 4GB of RAM is the bare minimum. I know people here say 2GB is enough, but I just built a machine with 2GB and it's really, really close. 8GB may even be appropriate for you, considering you want this to be a somewhat multi-purpose machine. RAM's cheap these days, though.