Everything posted by heffneil
-
So when everything can go wrong, it does! My "spare" unRAID server hasn't had to reboot in forever, and now, at least 22 hours into preclearing three new 3TB drives, I come home and the UPS was out. My question: is there any way to determine whether the preclear completed, and completed satisfactorily, if it did finish? Help with this would be appreciated and would save me another day! Thanks, Neil
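A minimal sketch of one way to check, assuming Joe L.'s preclear_disk.sh, which normally leaves a report file behind when a run completes. The report location and name pattern below are assumptions (adjust to wherever the script ran); the demo uses a temp directory standing in for the flash drive so it can run anywhere.

```shell
# If no report file exists for a drive, that preclear run did not finish.
check_reports() {            # print any preclear reports found in directory $1
  local d="$1" f n=0
  for f in "$d"/preclear_rpt_*; do
    [ -e "$f" ] && { echo "found: ${f##*/}"; n=$((n+1)); }
  done
  echo "$n report(s)"
}
demo=$(mktemp -d)
touch "$demo/preclear_rpt_WD-SERIAL123"   # simulate one finished preclear
check_reports "$demo"                     # on a real server, point this at /boot
```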
-
What type of plugins? I have Simple Features and unMENU, plus ssmtp and cache_dirs, but that's about it. No programs running, like media servers or such. Thanks
-
Forgive my ignorance; I am new to 5.0, and I noticed that the green ball isn't blinking next to my cache drive. Is this typical? Thanks, Neil
-
Hey, I think it was working, and then I upgraded unRAID from 4.7 to RC5, and I think I ran into a problem. In my syslog I see this, often:
Jul 9 11:45:10 Storage crond[1194]: ignoring /var/spool/cron/crontabs/root- (non-existent user)
Any ideas on what I am screwing up now? Thanks, Neil
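A hedged sketch of the usual cause of that message: a stray file whose name ends in a dash (typically a leftover temp copy of the root crontab) sitting in /var/spool/cron/crontabs/, which crond skips because there is no user named "root-". The demo below uses a temp directory standing in for the real spool dir so it can run anywhere.

```shell
# A temp dir stands in for /var/spool/cron/crontabs/; on the real server
# the file to remove would be /var/spool/cron/crontabs/root-
spool=$(mktemp -d)
touch "$spool/root" "$spool/root-"   # real crontab plus the stray leftover copy
find "$spool" -type f -name '*-'     # shows the file crond is complaining about
rm "$spool"/*-                       # removing it should stop the log spam
ls "$spool"
```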
-
OK, rebooted Windows and now it is accessible. Stupid me. Very strange, but Windows happens. Thanks!
-
Guys, please help; not sure what to do here. Two of six shares are not working, saying they are inaccessible. I've run the permission script two times now. Any thoughts?
-
Jul 9 04:40:01 Storage syslogd 1.4.1: restart.
Jul 9 04:47:01 Storage crond[1194]: ignoring /var/spool/cron/crontabs/root- (non-existent user)
Jul 9 05:47:01 Storage crond[1194]: ignoring /var/spool/cron/crontabs/root- (non-existent user)
Jul 9 06:47:01 Storage crond[1194]: ignoring /var/spool/cron/crontabs/root- (non-existent user)
Jul 9 07:47:01 Storage crond[1194]: ignoring /var/spool/cron/crontabs/root- (non-existent user)
Jul 9 08:47:01 Storage crond[1194]: ignoring /var/spool/cron/crontabs/root- (non-existent user)
Jul 9 09:47:01 Storage crond[1194]: ignoring /var/spool/cron/crontabs/root- (non-existent user)
Jul 9 10:47:01 Storage crond[1194]: ignoring /var/spool/cron/crontabs/root- (non-existent user)
Jul 9 11:19:09 Storage kernel: mdcmd (66): spindown 1
Jul 9 11:22:23 Storage shfs/user: duplicate object: /mnt/disk3/./.DS_Store
Jul 9 11:22:23 Storage shfs/user: duplicate object: /mnt/disk5/./.DS_Store
Jul 9 11:45:10 Storage crond[1194]: ignoring /var/spool/cron/crontabs/root- (non-existent user)
Jul 9 11:51:42 Storage kernel: mdcmd (67): spindown 4
Jul 9 11:52:13 Storage kernel: mdcmd (68): spindown 7
Jul 9 11:52:45 Storage kernel: mdcmd (69): spindown 6
Jul 9 12:15:24 Storage kernel: mdcmd (70): spindown 1
Jul 9 12:15:24 Storage kernel: mdcmd (71): spindown 2
Jul 9 12:25:26 Storage kernel: mdcmd (72): spindown 2
Jul 9 12:38:01 Storage crond[1194]: ignoring /var/spool/cron/crontabs/root- (non-existent user)
Jul 9 12:43:28 Storage kernel: mdcmd (73): spindown 3
Jul 9 12:47:29 Storage kernel: mdcmd (74): spindown 4
Jul 9 12:48:30 Storage kernel: mdcmd (75): spindown 5
Jul 9 12:49:31 Storage kernel: mdcmd (76): spindown 6
Jul 9 12:52:03 Storage kernel: mdcmd (77): spindown 8
Jul 9 12:52:34 Storage kernel: mdcmd (78): spindown 7
Jul 9 12:55:35 Storage kernel: mdcmd (79): spindown 1
Jul 9 12:56:05 Storage kernel: mdcmd (80): spindown 0
Jul 9 12:56:06 Storage kernel: mdcmd (81): spindown 9
Jul 9 13:38:01 Storage crond[1194]: ignoring /var/spool/cron/crontabs/root- (non-existent user)
Jul 9 14:38:01 Storage crond[1194]: ignoring /var/spool/cron/crontabs/root- (non-existent user)
Jul 9 15:38:01 Storage crond[1194]: ignoring /var/spool/cron/crontabs/root- (non-existent user)
Jul 9 16:38:01 Storage crond[1194]: ignoring /var/spool/cron/crontabs/root- (non-existent user)
Jul 9 17:33:09 Storage kernel: mdcmd (82): spindown 1
Jul 9 17:33:39 Storage kernel: mdcmd (83): spindown 2
Jul 9 17:33:40 Storage kernel: mdcmd (84): spindown 3
Jul 9 17:38:01 Storage crond[1194]: ignoring /var/spool/cron/crontabs/root- (non-existent user)
-
I can only access some of my shares. I have run the New Permissions script two times now after upgrading from 4.7 to RC5. I need some help badly. I can access each of the directories within the disk names. I put a password on the root user and added a new user called neil. All of the shares are public access. I am attempting to connect with SMB. Any help would be greatly appreciated. Thanks, Neil
-
Did you delete these files (in flash/config/): passwd, shadow, smbpasswd?
* I deleted all of these except shadow, since I couldn't see it!?
Did you run the New Permissions script?
* Yes, I ran this, but I put my cache drive in afterwards; does that matter?
Did you re-add your users?
* I only had root before, and I share everything publicly, so it shouldn't be a problem.
Are you trying to use root for access?
* I don't know; I just map over to that machine in Windows, see the shares, and click on them. I am never prompted for a user name and password, but they are publicly shared, as said above.
Have you restarted the clients?
* No, not yet. But I tried connecting from a different computer which I don't think had connected before, and it was still an issue.
I can drill down in the web interface, so I know my stuff is all there (thank goodness). Thanks! Neil
-
I upgraded to 5.0 RC5. All seemed to go well, but I can't access some of my shares via Windows SMB. Any ideas? Thanks, Neil
-
Upgraded from 4.7 can't access most shares
heffneil posted a topic in General Support (V5 and Older)
Hey guys, I just upgraded to RC5 from 4.7, and I can only access a few shares. I noticed that when I go to the shares page, one of them is red and the rest are green. Even those with the green ball aren't accessible. I added an additional user besides root. I will note that I updated permissions prior to adding this user, and I didn't have my cache drive assigned to the array when I ran the permissions update. These are publicly shared shares, if that makes a difference. Any thoughts would be greatly appreciated. Thanks, Neil
-
I started the array back up and replaced the files, and everything looks great! Boy, what an upgrade. A question you guys might be able to help me with: I ran the update-permissions script before adding the cache drive back in. Do I have to do anything special in regard to permissions for that drive? Thanks, Neil
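A rough sketch of the idea: the New Permissions script essentially normalizes ownership and permissions across the array disks, and the cache drive can be given the same treatment afterwards. This is an approximation (the script's exact chmod/chown flags may differ); the demo operates on a temp dir standing in for /mnt/cache, with the chown left as a comment since it needs root and unRAID's nobody:users accounts.

```shell
# Approximation of newperms-style normalization, applied to a demo dir
demo=$(mktemp -d)
touch "$demo/movie.mkv"
chmod 000 "$demo/movie.mkv"          # simulate broken pre-upgrade permissions
# chown -R nobody:users "$demo"      # on the real server: unRAID's expected owner
chmod -R u+rwX,g+rwX,o+rX "$demo"    # owner/group read-write, world read
stat -c '%a' "$demo/movie.mkv"       # file ends up 664 (rw-rw-r--)
```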
-
Guys, I am seeing this in my syslog when I attempt to stop unRAID:
Jul 8 19:39:22 Storage emhttp: _shcmd: shcmd (518): exit status: 1
Jul 8 19:39:22 Storage emhttp: Retry unmounting user share(s)...
Jul 8 19:39:27 Storage emhttp: shcmd (519): umount /mnt/user >/dev/null 2>&1
Jul 8 19:39:27 Storage emhttp: _shcmd: shcmd (519): exit status: 1
Jul 8 19:39:27 Storage emhttp: shcmd (520): rmdir /mnt/user >/dev/null 2>&1
Jul 8 19:39:27 Storage emhttp: _shcmd: shcmd (520): exit status: 1
Jul 8 19:39:27 Storage emhttp: Retry unmounting user share(s)...
Jul 8 19:39:32 Storage emhttp: shcmd (521): umount /mnt/user >/dev/null 2>&1
Jul 8 19:39:32 Storage emhttp: _shcmd: shcmd (521): exit status: 1
Jul 8 19:39:32 Storage emhttp: shcmd (522): rmdir /mnt/user >/dev/null 2>&1
Jul 8 19:39:32 Storage emhttp: _shcmd: shcmd (522): exit status: 1
Jul 8 19:39:32 Storage emhttp: Retry unmounting user share(s)...
Jul 8 19:39:37 Storage emhttp: shcmd (523): umount /mnt/user >/dev/null 2>&1
Jul 8 19:39:37 Storage emhttp: _shcmd: shcmd (523): exit status: 1
Jul 8 19:39:37 Storage emhttp: shcmd (524): rmdir /mnt/user >/dev/null 2>&1
Jul 8 19:39:37 Storage emhttp: _shcmd: shcmd (524): exit status: 1
Jul 8 19:39:37 Storage emhttp: Retry unmounting user share(s)...
Jul 8 19:39:42 Storage emhttp: shcmd (525): umount /mnt/user >/dev/null 2>&1
Jul 8 19:39:42 Storage emhttp: _shcmd: shcmd (525): exit status: 1
Jul 8 19:39:42 Storage emhttp: shcmd (526): rmdir /mnt/user >/dev/null 2>&1
Jul 8 19:39:42 Storage emhttp: _shcmd: shcmd (526): exit status: 1
Jul 8 19:39:42 Storage emhttp: Retry unmounting user share(s)...
Jul 8 19:39:47 Storage emhttp: shcmd (527): umount /mnt/user >/dev/null 2>&1
Jul 8 19:39:47 Storage emhttp: _shcmd: shcmd (527): exit status: 1
Jul 8 19:39:47 Storage emhttp: shcmd (528): rmdir /mnt/user >/dev/null 2>&1
Jul 8 19:39:47 Storage emhttp: _shcmd: shcmd (528): exit status: 1
Jul 8 19:39:47 Storage emhttp: Retry unmounting user share(s)...
Jul 8 19:39:52 Storage emhttp: shcmd (529): umount /mnt/user >/dev/null 2>&1
Jul 8 19:39:52 Storage emhttp: _shcmd: shcmd (529): exit status: 1
Jul 8 19:39:52 Storage emhttp: shcmd (530): rmdir /mnt/user >/dev/null 2>&1
Jul 8 19:39:52 Storage emhttp: _shcmd: shcmd (530): exit status: 1
Jul 8 19:39:52 Storage emhttp: Retry unmounting user share(s)...
Any ideas? Thanks, Neil
-
Is it unusual for that to deny the request? I don't quite understand. I can reboot this server, but now I am wondering if I should just go to 5.x now. Decisions, decisions. I'm just wary, and it is a big leap that I don't know if I am prepared for... I thought I could use the 3TB drives as 2TB in the short term. Neil
-
I ran the following and got the following error:
root@Storage2:~# hdparm --yes-i-know-what-i-am-doing -N p3907029168 /dev/sdl
/dev/sdl:
setting max visible sectors to 3907029168 (permanent)
SET_MAX_ADDRESS failed: Input/output error
max sectors = 5860533168/5284784(5860533168?), HPA setting seems invalid (buggy kernel device driver?)
-
Yes, I know I will have a lot of work and will be shuffling things around. I don't want to mess with parity, so I am going to make them 2TB, to be used later. What I will do is use these drives in my backup machine, which I am replacing second. In other words, when unRAID 5.0 is out and proven, I will buy a host of new drives for my original server and start the upgrade with new drives. Joe: I already precleared the drives as 3TB. Now I want to make them work as 2TB, so once I change everything with hdparm, I have to preclear again, correct? Thanks, Neil
-
Wow, that sucks! Was I right about the hdparm command?
-
I found this: "But to answer your question, the number of sectors in a 2T drive is 3907029168." But that wasn't from you, Joe, so I am not sure if I have the right place. Can I run hdparm after preclearing, or would I need to re-preclear after the hdparm change? Thanks, Neil
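That sector count is easy to sanity-check with arithmetic, assuming 512-byte logical sectors (standard for these drives): it should multiply out to roughly 2.0 TB, matching an ordinary 2TB drive.

```shell
# Sanity check, assuming 512-byte logical sectors
sectors=3907029168
bytes=$((sectors * 512))
echo "$bytes"   # 2000398934016, i.e. ~2.0 TB
```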
-
Can I set the 3TB drives to the same size as my other 2TB drives so I don't have to replace parity? I am re-reading that linked thread now.
-
I know using 3TB drives at their full capacity with unRAID 4.7 is impossible, but my unRAID 4.7 servers are 98% full and I need more space. I bought a few 3TB drives and precleared them at full capacity (I believe; it has been a while). So the question is: can I drop those guys into 4.7 and use them as-is for now? I only have a 2TB parity drive that I don't think I want to replace right now... Anyway, any ideas or suggestions would be greatly appreciated. I don't want to invest in 2TB drives, considering they are just too small given my limited number of drive bays... Thanks, Neil
-
I'll give it a shot. Thanks! Neil
-
I have searched all around the forums trying to get rsync scheduled on my boxes; I obviously don't know how to do it. Anyway, I know the two rsync commands I want to run, and I want to run them serially, once a day. Seems simple enough, but I want to make sure the schedule survives a reboot. Any help, advice, or pointers would be greatly appreciated. Thanks! Neil