mikejp Posted July 8, 2011 Share Posted July 8, 2011 Just went from beta7 to beta8. Works fine except... now the temps don't show for the drives hooked to the motherboard. The temps show fine on the 2 BR10i cards with both betas. Motherboard is a GIGABYTE GA-790XT-USB3. Syslogs from both versions attached. Pick one of your drives hooked to the motherboard and note the device identifier, say it's (sdb). Please type this command, capture output and post: smartctl -A /dev/sdb <--- instead of 'sdb' use whatever one corresponds to drive on motherboard port Tower2 login: root Linux 2.6.37.6-unRAID. root@Tower2:~# smartctl -A /dev/sdb smartctl 5.40 2010-10-16 r3189 [i486-slackware-linux-gnu] (local build) Copyright © 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net SMART Disabled. Use option -s with argument 'on' to enable it. root@Tower2:~# Done what it said.... showing fine now. Didn't hold through reboot Quote Link to comment
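As an aside on the mechanics here: the drive temperature shown in the GUI presumably comes out of the `smartctl -A` attribute table once SMART is enabled — on most drives that's attribute 194 (Temperature_Celsius). A minimal parsing sketch in Python; the sample table below is made up for illustration, not taken from the syslogs in this thread:

```python
# Sketch: pull a drive temperature out of `smartctl -A` output.
# The sample table is illustrative only, not from the posts above.
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
194 Temperature_Celsius     0x0022   116   097   000    Old_age   Always       -       34
"""

def drive_temp(smart_output):
    """Return the raw value of SMART attribute 194, or None if absent."""
    for line in smart_output.splitlines():
        fields = line.split()
        if fields and fields[0] == "194":
            return int(fields[-1])  # RAW_VALUE is the last column
    return None

print(drive_temp(SAMPLE))  # -> 34
```

When SMART is disabled (as in the output above), `smartctl -A` prints no attribute table at all, which would explain the missing temperatures.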
madburg Posted July 8, 2011 Share Posted July 8, 2011 Syslog once I clicked "Stop" array: ... There is a crash of the 'umount' command being recorded in your system log - this is not good. Looks like it's with your disk20. Please run a 'reiserfsck' on this disk. Hard rebooted the unRAID server. Array not set to auto start. Updated to 5.0Beta8b (from 8a), rebooted. Ran reiserfsck on all disks. The last disk #20 as you stated came back with:
root@PNTower:~# reiserfsck /dev/sdg1
reiserfsck 3.6.21 (2009 www.namesys.com)
*************************************************************
** If you are using the latest reiserfsprogs and it fails **
** please email bug reports to [email protected], **
** providing as much information as possible -- your **
** hardware, kernel, patches, settings, all reiserfsck **
** messages (including version), the reiserfsck logfile, **
** check the syslog file for any related information. **
** If you would like advice on using this program, support **
** is available for $25 at www.namesys.com/support.html. **
*************************************************************
Will read-only check consistency of the filesystem on /dev/sdg1
Will put log info to 'stdout'
Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
########### reiserfsck --check started at Fri Jul 8 13:40:42 2011 ###########
Replaying journal: Trans replayed: mountid 50, transid 3572, desc 6865, len 1, commit 6867, next trans offset 6850
Trans replayed: mountid 50, transid 3573, desc 6868, len 1, commit 6870, next trans offset 6853
Trans replayed: mountid 50, transid 3574, desc 6871, len 1, commit 6873, next trans offset 6856
Trans replayed: mountid 50, transid 3575, desc 6874, len 1, commit 6876, next trans offset 6859
Trans replayed: mountid 50, transid 3576, desc 6877, len 1, commit 6879, next trans offset 6862
Trans replayed: mountid 50, transid 3577, desc 6880, len 1, commit 6882, next trans offset 6865
Trans replayed: mountid 50, transid 3578, desc 6883, len 1, commit 6885, next trans offset 6868
Replaying journal: Done.
Reiserfs journal '/dev/sdg1' in blocks [18..8211]: 7 transactions replayed
Checking internal tree.. finished
Comparing bitmaps..finished
Checking Semantic tree: finished
No corruptions found
There are on the filesystem:
Leaves 22961
Internal nodes 140
Directories 18
Other files 14
Data block pointers 23233616 (0 of them are zero)
Safe links 0
########### reiserfsck finished at Fri Jul 8 13:41:13 2011 ###########
This is the same behavior I had with 5.0Beta7. The last drive was an old WD 80GB drive set as cache; I removed it altogether, but the problem kept coming back. Now it's doing this to the next-to-last disk, disk #20 (a Seagate 250GB drive). I will now start the array and keep testing to see if it happens again. 5.0Beta8b is now seeing the free space on the flash drive, thank you. Quote Link to comment
limetech Posted July 8, 2011 Author Share Posted July 8, 2011 Just went from beta7 to beta8. Works fine except... now the temps don't show for the drives hooked to the motherboard. The temps show fine on the 2 BR10i cards with both betas. Motherboard is a GIGABYTE GA-790XT-USB3. Syslogs from both versions attached. Pick one of your drives hooked to the motherboard and note the device identifier, say it's (sdb). Please type this command, capture output and post: smartctl -A /dev/sdb <--- instead of 'sdb' use whatever one corresponds to drive on motherboard port Tower2 login: root Linux 2.6.37.6-unRAID. root@Tower2:~# smartctl -A /dev/sdb smartctl 5.40 2010-10-16 r3189 [i486-slackware-linux-gnu] (local build) Copyright © 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net SMART Disabled. Use option -s with argument 'on' to enable it. root@Tower2:~# Done what it said.... showing fine now. Didn't hold through reboot Fixed this in -beta8c (not published yet). Quote Link to comment
limetech Posted July 8, 2011 Author Share Posted July 8, 2011 Syslog once I clicked "Stop" array: ... There is a crash of the 'umount' command being recorded in your system log - this is not good. Looks like it's with your disk20. Please run a 'reiserfsck' on this disk. Hard rebooted the unRAID server. Array not set to auto start. Updated to 5.0Beta8b (from 8a), rebooted. Ran reiserfsck on all disks. The last disk #20 as you stated came back with:
root@PNTower:~# reiserfsck /dev/sdg1
reiserfsck 3.6.21 (2009 www.namesys.com)
*************************************************************
** If you are using the latest reiserfsprogs and it fails **
** please email bug reports to [email protected], **
** providing as much information as possible -- your **
** hardware, kernel, patches, settings, all reiserfsck **
** messages (including version), the reiserfsck logfile, **
** check the syslog file for any related information. **
** If you would like advice on using this program, support **
** is available for $25 at www.namesys.com/support.html. **
*************************************************************
Will read-only check consistency of the filesystem on /dev/sdg1
Will put log info to 'stdout'
Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
########### reiserfsck --check started at Fri Jul 8 13:40:42 2011 ###########
Replaying journal: Trans replayed: mountid 50, transid 3572, desc 6865, len 1, commit 6867, next trans offset 6850
Trans replayed: mountid 50, transid 3573, desc 6868, len 1, commit 6870, next trans offset 6853
Trans replayed: mountid 50, transid 3574, desc 6871, len 1, commit 6873, next trans offset 6856
Trans replayed: mountid 50, transid 3575, desc 6874, len 1, commit 6876, next trans offset 6859
Trans replayed: mountid 50, transid 3576, desc 6877, len 1, commit 6879, next trans offset 6862
Trans replayed: mountid 50, transid 3577, desc 6880, len 1, commit 6882, next trans offset 6865
Trans replayed: mountid 50, transid 3578, desc 6883, len 1, commit 6885, next trans offset 6868
Replaying journal: Done.
Reiserfs journal '/dev/sdg1' in blocks [18..8211]: 7 transactions replayed
Checking internal tree.. finished
Comparing bitmaps..finished
Checking Semantic tree: finished
No corruptions found
There are on the filesystem:
Leaves 22961
Internal nodes 140
Directories 18
Other files 14
Data block pointers 23233616 (0 of them are zero)
Safe links 0
########### reiserfsck finished at Fri Jul 8 13:41:13 2011 ###########
This is the same behavior I had with 5.0Beta7. The last drive was an old WD 80GB drive set as cache; I removed it altogether, but the problem kept coming back. Now it's doing this to the next-to-last disk, disk #20 (a Seagate 250GB drive). I will now start the array and keep testing to see if it happens again. Seems to me something might be marginal in your hardware. You have a lot of drives, is your power adequate? Quote Link to comment
madburg Posted July 8, 2011 Share Posted July 8, 2011 It is looking like I have a unique situation, and I am trying to figure it out as well. I have a new CORSAIR Professional Series Gold AX750 (CMPSU-750AX) 750W PSU. I sized it for a 7200 RPM parity drive and the rest being green drives. But for testing I am using 4 (was 5, before I removed that WD 80GB drive) old 7200 RPM drives, all precleared 4 cycles and healthy, plus 7 new 2TB + 1 3TB green drives. My slated parity drive is sitting on the bench. What has been crossing my mind is: 1) maybe not enough testing is conducted without a parity drive, which I can understand; maybe some logic slows things down when you have a parity drive, so the spin-ups / mounts have enough time to complete, but when no parity drive exists things happen too fast and code fails. 2) I thought an old drive (the WD 80GB 10k) maybe was not compatible and/or not happy living on a SAS controller and was causing hiccups. I removed it from my unRAID tests; now the next old drive (Seagate 250GB 7200 RPM) is showing the same behavior. I could remove it as well, but the next 4 drives after that are old 160GB 7200 RPM drives and are older than the 250GB drive... all my hard drives run off LSI SAS controllers, (1) SAS2008, (1) SAS2116. What are the chances all the old drives are not happy? Running a Xeon with fully buffered ECC RAM (2x4 sticks, total 8GB), so not a memory issue. Cabling, maybe, but I have not stuck my hand in the case since Beta6x. I am open to suggestions. I will test one more and see if the same behavior occurs. Then I could move to removing all the old drives and test again. And/or test with a parity drive in the array... Copying the data I have on the Seagate over to one of the new 2TB drives via Windows Explorer on a W2K8R2 server is averaging 52MB/s, a step up from the 32MB/s I used to get. And I don't have jumbo frames on, as I am first testing plain vanilla unRAID. Once all is well I will move on to the add-ins I want. Quote Link to comment
limetech Posted July 8, 2011 Author Share Posted July 8, 2011 It is looking like I have a unique situation, and I am trying to figure it out as well. I have a new CORSAIR Professional Series Gold AX750 (CMPSU-750AX) 750W PSU. I sized it for a 7200 RPM parity drive and the rest being green drives. But for testing I am using 4 (was 5, before I removed that WD 80GB drive) old 7200 RPM drives, all precleared 4 cycles and healthy, plus 7 new 2TB + 1 3TB green drives. My slated parity drive is sitting on the bench. What has been crossing my mind is: 1) maybe not enough testing is conducted without a parity drive, which I can understand; maybe some logic slows things down when you have a parity drive, so the spin-ups / mounts have enough time to complete, but when no parity drive exists things happen too fast and code fails. 2) I thought an old drive (the WD 80GB 10k) maybe was not compatible and/or not happy living on a SAS controller and was causing hiccups. I removed it from my unRAID tests; now the next old drive (Seagate 250GB 7200 RPM) is showing the same behavior. I could remove it as well, but the next 4 drives after that are old 160GB 7200 RPM drives and are older than the 250GB drive... all my hard drives run off LSI SAS controllers, (1) SAS2008, (1) SAS2116. What are the chances all the old drives are not happy? Running a Xeon with fully buffered ECC RAM (2x4 sticks, total 8GB), so not a memory issue. Cabling, maybe, but I have not stuck my hand in the case since Beta6x. I am open to suggestions. I will test one more and see if the same behavior occurs. Then I could move to removing all the old drives and test again. And/or test with a parity drive in the array... Copying the data I have on the Seagate over to one of the new 2TB drives via Windows Explorer on a W2K8R2 server is averaging 52MB/s, a step up from the 32MB/s I used to get. And I don't have jumbo frames on, as I am first testing plain vanilla unRAID. Once all is well I will move on to the add-ins I want.
Pretty clear this is not necessarily an issue with -beta8, and I don't want the thread getting cluttered with this, so I think the best way to proceed is via email: [email protected] Quote Link to comment
SSD Posted July 8, 2011 Share Posted July 8, 2011 As a matter of interest, is the extra pair of braces necessary, or have you used them simply for clarity? I have worked in many different languages, and the order-of-precedence rules do vary. If the ">" had a higher precedence than the "+", then your "if" statement would first see if "0 > 0", and then add that result to mdResyncDt (interestingly, in this situation, I think that might work, but certainly not the intent of the statement). It only takes a second to add an extra pair of parentheses, and it makes things 100% clear, no matter what the order of precedence. Update: Ha Ha! Just saw PeterB making the same suggestion. Great minds .....! Quote Link to comment
dlmh Posted July 8, 2011 Share Posted July 8, 2011 All I did was replace the bzroot and bzimage files. If only I shoulda... I don't mind rebuilding parity, but this is a good reminder to not be so quick and stupid with upgrading. I'll try and downgrade first to b4, see if the config is valid again POST SYSLOG FIRST! Let me know if you need directions. There you go! If you've lost your disk configuration, safest way to proceed is as follows:
1. Go to Utils and click 'New config'. Check the 'Yes I want to do this' box, then click Apply.
2. Go to Main and start assigning your drives. Do not assign Parity. Whichever drive you think is Parity, just don't assign.
3. Now you have all your data disks and cache disk assigned, and they all have a blue dot - that's ok (and Parity is set to "unassigned"). Click Start and array will start and attempt to mount all the data drives.
4. If any disk did not mount (that is, appears 'unformatted'), well you have a problem: perhaps that is the actual parity disk?
5. You can spot check the files on the disks to assure yourself everything looks good.
6. Now Stop array and assign your Parity disk.
7. Click Start and you should see a parity sync start up.
Variations on the theme:
a) Suppose you don't know which physical disk is Parity? In this case assign all your hard drives to data disk slots (do NOT assign a Parity disk). Click Start and the one that comes up 'unformatted' is your parity disk (now you know which one it is). Repeat steps above except at step 3 now you know which is Parity so go ahead and assign it.
b) Suppose you lost the config, but you know that Parity is valid, so you want to skip the lengthy re-sync.
In this case, once you know which disk is Parity, and you have it and all other disks assigned, just prior to clicking the 'Start' button you can type this command in a telnet window: mdcmd set invalidslot 99 Now click Start (don't do a refresh between typing this command and clicking Start or else the command will have no effect). What this does is tell the driver that none of the array drives are invalid, and hence it won't start a sync (normally Parity is marked invalid when there's been a "New Config"). Make sense? Thanks, Tom! Although I tried the command to prevent the parity sync, it's currently doing just that. Too bad, but at least all my drives and shares now show up again. Strange thing, though, is that permissions were all messed up, so some folders showed up empty. Even files and folders I deleted last week suddenly re-appeared. Isn't that something.... Edit: even files deleted months ago! WOW! It's like frikkin' time machine! Quote Link to comment
Joe L. Posted July 8, 2011 Share Posted July 8, 2011 As a matter of interest, is the extra pair of braces necessary, or have you used them simply for clarity? I have worked in many different languages, and the order-of-precedence rules do vary. If the ">" had a higher precedence than the "+", then your "if" statement would first see if "0 > 0", and then add that result to mdResyncDt (interestingly, in this situation, I think that might work, but certainly not the intent of the statement). It only takes a second to add an extra pair of parentheses, and it makes things 100% clear, no matter what the order of precedence. Update: Ha Ha! Just saw PeterB making the same suggestion. Great minds .....! I 1000% agree. I never leave out the extra parens... not if you want maintainable code. But... let's take the unMENU discussion to the customizations forum. Let's not clutter up this thread where Tom is attempting to resolve core issues... Joe L. Quote Link to comment
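To make the precedence point concrete (Python shown here purely for illustration — the code under discussion is awk, but the rule is the same in C, awk, and Python: `+` binds tighter than `>`):

```python
# In Python (as in C and awk), '+' binds tighter than '>',
# so 'a + b > c' already parses as '(a + b) > c'.
a, b, c = 2, 3, 4
implicit = a + b > c          # parsed as (a + b) > c
explicit = (a + b) > c        # same result, but obvious at a glance

# If '>' bound tighter, the expression would instead mean a + (b > c):
hypothetical = a + (b > c)    # 2 + False == 2 -- a completely different value

print(implicit, explicit, hypothetical)  # -> True True 2
```

Which is exactly why the extra parentheses cost nothing and remove all doubt.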
Thornwood Posted July 8, 2011 Share Posted July 8, 2011 Hello I just found something weird: I just updated from 5b7 to 5b8a and my flash drive shows completely full with no empty space in either SMB or the main page. Because of this Unmenu would not boot, because of a divide-by-0 error. I removed the drive and checked it; it is formatted FAT32 and I did a check disk with no problems. If I return to 5b7 it shows the correct amount of free space. Please help. See log of 5b8a and 5b7 Right, that's not good. New release -beta8b corrects this problem. Thank you very much, I thought I was going nutz.... Quote Link to comment
SSD Posted July 8, 2011 Share Posted July 8, 2011 The patched version of unmenu is available for install. The basic issue was a line added by bjp999. I modified it very slightly to get past the error, but it will probably need a more correct fix. Odds are it was using a variable from /proc/mdcmd available in the older versions of unRAID, but no longer present.
#resync_speed = sprintf("%d", (resync_db+0) / (resync_dt+0)); #bjp999 3/7/11 Change for 5.0b6
resync_speed = sprintf("%d", (resync_db+0) / (resync_dt+0.1)); #Joe L. temp fix for divide by 0
Here are the vars having to do with "re-sync" (ie, parity check/reconstruct) available from the driver:
mdResync - 0 if no resync in process, else size of resync (in 1024-byte blocks)
mdResyncCorr - 0 if not correcting, 1 if correcting (applies only to parity check)
mdResyncPos - current resync block position
mdResyncDt - "delta time" in seconds
mdResyncDB - "delta blocks", ie, number of blocks re-sync'ed during last "delta time" seconds
Prior to -beta8, the bolded vars above were not output if there was no resync in process. Now they are always output, but set to 0 if resync is not in process. Must be this new behavior causing the unmenu crash? Love the mdResyncCorr! Will be able to tell the user if sync errors were corrected or not - accurately. Small thing - but it would be nice if you returned an unRAID version string. Right now the only way to know is to parse the syslog or go digging in the emhttp executable. Quote Link to comment
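For what it's worth, the crash is the classic divide-by-zero: now that `mdResyncDt` is always output and reads 0 when no resync is running, `resync_db / resync_dt` blows up. A guard is arguably cleaner than the `+0.1` fudge — sketched here in Python (unmenu itself is awk, so this is illustrative only):

```python
def resync_speed(delta_blocks, delta_seconds):
    """Blocks/sec over the last sample interval.

    Returns 0 when no resync is running (mdResyncDt is reported
    as 0 in that case as of -beta8), instead of dividing by zero.
    """
    if delta_seconds <= 0:
        return 0
    return delta_blocks // delta_seconds

print(resync_speed(102400, 2))  # -> 51200 blocks/sec
print(resync_speed(0, 0))       # -> 0, instead of a ZeroDivisionError
```

The `+0.1` workaround gives nearly the same numbers while a resync is running, but a 0-when-idle guard states the intent directly.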
limetech Posted July 8, 2011 Author Share Posted July 8, 2011 Small thing - but would be nice if you returned an unRAID version string. Right now only way to know is parse the syslog or go digging in the emhttp executable. cat /etc/unraid-version Quote Link to comment
limetech Posted July 8, 2011 Author Share Posted July 8, 2011 5.0-beta8c is now available; it fixes the issue with enabling SMART and the NFS issue. Quote Link to comment
Superorb Posted July 8, 2011 Share Posted July 8, 2011 Will there ever be a way to tell which exact file generates a sync error? This way we can just compare the file on the data disk with the source file. Sometimes it's hard to know if the data on a data disk is bad or if data on the parity disk is bad. Quote Link to comment
Joe L. Posted July 8, 2011 Share Posted July 8, 2011 Will there ever be a way to tell which exact file generates a sync error? This way we can just compare the file on the data disk with the source file. Sometimes it's hard to know if the data on a data disk is bad or if data on the parity disk is bad. Short answer: No. Longer answer: not likely... it could be any bit on any of your disks or any of your hardware that flipped a bit. It might be part of a file, or empty space, or part of the file-system structure. Parity is computed and checked (I think) in sets of blocks on a disk. It can encompass multiple files, or none. Quote Link to comment
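A toy illustration of why: parity is a per-block XOR across the data disks, so a sync error pinpoints a block offset, not a file — the parity math has no notion of files at all. (Illustrative Python; the disk contents and block counts are made up, and real parity operates on e.g. 4K blocks, not single integers.)

```python
from functools import reduce

# Three "disks", each a list of per-block integers (stand-ins for real blocks).
disks = [
    [0x11, 0x22, 0x33],
    [0x44, 0x55, 0x66],
    [0x77, 0x00, 0x0F],
]

# Parity block i = XOR of block i across every data disk.
parity = [reduce(lambda a, b: a ^ b, blocks) for blocks in zip(*disks)]

# Flip one bit somewhere -- a parity check can find *which block* disagrees...
disks[1][2] ^= 0x01
mismatches = [i for i, blocks in enumerate(zip(*disks))
              if reduce(lambda a, b: a ^ b, blocks) != parity[i]]
print(mismatches)  # -> [2]: block 2 is bad, but the XOR alone can't say
                   #    which disk, let alone which file, owns the flipped bit
```

Mapping a bad block offset back to a file would additionally require walking the filesystem metadata of every data disk at that offset, which is why "not likely" is the honest answer.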
speeding_ant Posted July 8, 2011 Share Posted July 8, 2011 Running Beta8c. Unsure if any changes were made to AFP and TimeMachine in this beta. My TimeMachine share was not showing up at all. I had to disable the share and enable it again for it to show up again. When I try to back up, it takes a very long time for unRAID to authenticate my user and work. Constant logins and logouts in syslog. Finally, unRAID disconnects shares already mounted, restarts AFP, and then mounts the TimeMachine sparseimage. However, it appears Avahi stops working after this, until you re-connect manually (e.g. it doesn't appear in the Finder server list). Lion still isn't seeing the TimeMachine share as a backup destination as of yet; presuming this is known to still be an issue. Attached is the syslog from the point of re-creating the share. Thanks Tom! Syslog.txt Quote Link to comment
mikejp Posted July 8, 2011 Share Posted July 8, 2011 Just went from beta7 to beta8. Works fine except... now the temps don't show for the drives hooked to the motherboard. The temps show fine on the 2 BR10i cards with both betas. Motherboard is a GIGABYTE GA-790XT-USB3. Syslogs from both versions attached. Pick one of your drives hooked to the motherboard and note the device identifier, say it's (sdb). Please type this command, capture output and post: smartctl -A /dev/sdb <--- instead of 'sdb' use whatever one corresponds to drive on motherboard port Tower2 login: root Linux 2.6.37.6-unRAID. root@Tower2:~# smartctl -A /dev/sdb smartctl 5.40 2010-10-16 r3189 [i486-slackware-linux-gnu] (local build) Copyright © 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net SMART Disabled. Use option -s with argument 'on' to enable it. root@Tower2:~# Done what it said.... showing fine now. Didn't hold through reboot Fixed this in -beta8c (not published yet). Thanks Tom!! -beta8c fixed the SMART issues I was having. Quote Link to comment
PeterB Posted July 9, 2011 Share Posted July 9, 2011 5.0-beta8c now available that fixes issue with enabling SMART and NFS issue. Thank you - NFS now starting up okay on boot. Quote Link to comment
liujason Posted July 9, 2011 Share Posted July 9, 2011 I'm trying to set up an NFS share in beta8c. After disabling the AFP share, enabling the NFS share, and restarting the server, I could no longer get into the web management GUI, and the shares aren't working (because I've disabled AFP). Attached is the syslog. First question: what should I do to get back into the web management console? Second question: I'm using Mac Disk Utility to set up the NFS share; what are the mount locations? Thanks, Jason syslog.txt Quote Link to comment
Nezil Posted July 9, 2011 Share Posted July 9, 2011 Now it's the weekend, and I've got some time, I thought I'd try and do the upgrade to 5.0-beta8c. I usually start by testing updates, and indeed any customisations on a virtual machine that I have set up with tiny 8GB dummy drives. Because I'm using a Mac, it's not possible to actually use a USB disk for this, and in the past I've used an extra SATA dummy disk as the flash, which has worked fine. Obviously I'm only able to simulate the free version of unRAID like this, but that's fine for testing. With 5.0-beta8c, I'm not able to do this because I'm getting the 'Segmentation fault' error that users of beta8a were having on the first page of this thread. I'm wondering if that's something that can be fixed early, as using a VM for development and testing is very valuable. I'm also thinking that this issue might exist if anyone is trying to use the free version with a real flash drive, though I'm not able to test this. Quote Link to comment
tyrindor Posted July 9, 2011 Share Posted July 9, 2011 There's a critical bug fix in this release having to do with data rebuild. There is a corner case that comes up where, during a data rebuild of a disabled disk (or a disk replaced with a larger one), if a write request occurs for another disk in the same stripe as the disk currently being rebuilt, it's possible the data for the disk being rebuilt is not actually written. Later, this will cause a Parity Check 'sync error'. This greatly worries me. I had my most crucial disk fail last week on b7. I rebuilt it with a bigger drive, and I did writes while doing so. What should I do? I haven't had any corrupt data that I've found, but I'd like to make sure. "Correct any Parity-Sync errors by writing the Parity disk with corrected parity." makes it seem like running parity would overwrite parity with disk data. Isn't this exactly what I wouldn't want to do? I believe the best solution in my case would be to run parity and have parity overwrite data on the data disks? If so, how can I do that? I will run a parity sync in the meantime with the box unchecked, just to see if I have any errors. Quote Link to comment
liujason Posted July 9, 2011 Share Posted July 9, 2011 I'm trying to set up an NFS share in beta8c. After disabling the AFP share, enabling the NFS share, and restarting the server, I could no longer get into the web management GUI, and the shares aren't working (because I've disabled AFP). Attached is the syslog. First question: what should I do to get back into the web management console? Second question: I'm using Mac Disk Utility to set up the NFS share; what are the mount locations? Thanks, Jason I found the reason why I wasn't able to connect: my bad. After disabling AFP, I could not connect using tower.local, but it was still accessible through the IP address. I did change the config manually to disable the NFS. Now I have AFP and NFS running at the same time; after mounting nfs://tower.local/mnt/disk1, I am only getting read access even though I've set disk1 to public (and tried the *(rw,insecure)), and tried the -i, -s, -w... trick; it is still read-only access from my Mac OS. I'm getting tons of this error in the console.
Jul 9 00:07:30 Tower rpc.statd[1016]: STAT_FAIL to Tower for SM_MON of 192.168.0.2
Jul 9 00:07:30 Tower kernel: lockd: cannot monitor Jason-Lius-iMac.local
Jul 9 00:07:30 Tower rpc.statd[1016]: No canonical hostname found for 192.168.0.2
Jul 9 00:07:30 Tower rpc.statd[1016]: STAT_FAIL to Tower for SM_MON of 192.168.0.2
Jul 9 00:07:30 Tower kernel: lockd: cannot monitor Jason-Lius-iMac.local
Jul 9 00:07:30 Tower rpc.statd[1016]: No canonical hostname found for 192.168.0.2
Jul 9 00:07:30 Tower rpc.statd[1016]: STAT_FAIL to Tower for SM_MON of 192.168.0.2
Jul 9 00:07:30 Tower kernel: lockd: cannot monitor Jason-Lius-iMac.local
Jul 9 00:07:30 Tower rpc.statd[1016]: No canonical hostname found for 192.168.0.2
Jul 9 00:07:30 Tower rpc.statd[1016]: STAT_FAIL to Tower for SM_MON of 192.168.0.2
Jul 9 00:07:30 Tower kernel: lockd: cannot monitor Jason-Lius-iMac.local
Jul 9 00:07:30 Tower rpc.statd[1016]: No canonical hostname found for 192.168.0.2
Jul 9 00:07:30 Tower rpc.statd[1016]: STAT_FAIL to Tower for SM_MON of 192.168.0.2
One thing I did notice: browsing a large file structure using NFS is a lot faster than using AFP. Any idea how I can get read-write access using NFS? Thanks, Jason Quote Link to comment
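For reference, NFS export options like the `*(rw,insecure)` tried above normally take this shape in an exports file — shown purely as a hand-written illustration (the path, client spec, and extra option are assumptions, not what unRAID actually generates):

```
# /etc/exports -- illustrative only; unRAID generates its own entries.
# 'rw' grants read-write, 'insecure' allows client source ports above 1023
# (OS X mounts from unprivileged ports, hence that option above).
/mnt/disk1  192.168.0.0/24(rw,insecure,no_subtree_check)
```

Note that even with `rw` in the export, filesystem permissions on the mounted directory still apply, which is one common reason a share comes up read-only.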
dertbv Posted July 9, 2011 Share Posted July 9, 2011 I've installed it... I can't get to the web interface. I logged on to the server, and ran the emhttp command by hand and get a segmentation fault. It is on the network, as I can telnet to it. I have exactly the same problem. Restarted, but emhttp never came up. Telnet in and start it manually, and get segfault. Attached syslog. Having the same issue with 8c syslog-2011-7-8.txt Quote Link to comment
prostuff1 Posted July 9, 2011 Share Posted July 9, 2011 I've installed it... I can't get to the web interface. I logged on to the server, and ran the emhttp command by hand and get a segmentation fault. It is on the network, as I can telnet to it. I have exactly the same problem. Restarted, but emhttp never came up. Telnet in and start it manually, and get segfault. Attached syslog. Having the same issue with 8c try this here Quote Link to comment
sdballer Posted July 9, 2011 Share Posted July 9, 2011 Unmenu not working for me now... Anyone else? edit: http://lime-technology.com/forum/index.php?topic=13866.msg131338#msg131338 Quote Link to comment