pcjacobse

Members
  • Posts: 13
  • Joined
  • Last visited


  1. Could you add Joe's Own Editor to this? http://joe-editor.sourceforge.net/
  2. Thanks for the suggestion! I edited the cache_dirs file and added: as the 2nd line. Now it seems to work correctly. My SSH shell was /bin/bash; the boot script (and the simpleFeatures invoker), however, used /bin/sh as its shell.
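A hedged sketch of what such a fix accomplishes (my guess, not the actual edited line): if a bash-syntax script gets invoked via /bin/sh, a guard near the top can re-exec it under bash.

```shell
# Sketch: re-exec under bash when invoked by a different shell.
# BASH_VERSION is only set when bash itself is the interpreter.
if [ -z "${BASH_VERSION:-}" ]; then
    exec /bin/bash "$0" "$@"   # re-run this script under bash
fi
interpreter="bash"             # from here on, bash syntax is safe
```

This avoids depending on which shell the boot scripts happen to use.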
  3. I guess you could create a script that checks whether all the disks are spun down and, if so, calls the clean powerdown script.
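A minimal sketch of such a check, assuming `hdparm` is available; the device names and the `/sbin/powerdown` path are placeholders, not taken from any particular setup:

```shell
# Report success only when every given disk reports "standby" to
# `hdparm -C`. Device list and powerdown path are examples.
all_spun_down() {
    for dev in "$@"; do
        state=$(hdparm -C "$dev" 2>/dev/null | awk '/drive state is:/ {print $NF}')
        [ "$state" = "standby" ] || return 1
    done
    return 0
}

# Example (e.g. from cron every few minutes):
# all_spun_down /dev/sdb /dev/sdc /dev/sdd && /sbin/powerdown
```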
  4. I have a strange problem using this script (the latest version, 1.6.6). I run unRAID 5.0-rc8a on a full Slackware 14 distro. In my /boot/config/go I have: /usr/bin/cache_dirs -d 5 -m 3 -M 5 -w After the boot, the cache_dirs script takes up 99% CPU in the foreground, preventing my other boot scripts from running. When I log in over SSH and try to stop it using cache_dirs -q, I get the message that it's not running, and indeed there is no lock file at /var/lock/cache_dirs.LCK, but the process is running. I can kill the script, and then my boot scripts continue (starting SickBeard etc.). When I run it in the foreground I get the following output: I guess this happens because I rebooted, so there is still a line 'kernel: mdcmd (11): stop' in the syslog. The script does work when I use `/usr/bin/cache_dirs -d 5 -m 3 -M 5 -w -F -B`. Any suggestions on how to fix this?
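For what it's worth, a hedged variant of that `go` line (the backgrounding and log path are my additions, not from the script's documentation) that at least keeps a stuck foreground run from blocking the rest of the boot:

```shell
# /boot/config/go fragment (sketch): detach cache_dirs so later boot
# scripts still run even if it misbehaves in the foreground.
nohup /usr/bin/cache_dirs -d 5 -m 3 -M 5 -w >/var/log/cache_dirs.log 2>&1 &
```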
  5. My situation is as follows: I have a storage server with unRAID 5.0-rc4 (on a full Slackware distro) which runs 24x7; disk spin-down is set to 1 hour. I have an HTPC (Ubuntu with XBMC) which mounts a couple of user shares over NFS. However, when the disks are spun down and the HTPC boots, it doesn't spin up the disks in the unRAID array. Then, when I want to start a show (TV series) in XBMC, I get the message/question that the file doesn't exist anymore and whether I want to remove it from my library. No matter how many times I try to start it (waiting 10-30 seconds between attempts), it keeps giving the error. The problem goes away when I either click the 'Spin up' button in the unRAID menu or manually browse in the console (on the HTPC) to the folder AND do an `ls` in there. I partially fixed the problem by adding a post-network-up script on the HTPC that "clicks" the 'Spin up' button in the unRAID menu (using curl). However, when the HTPC is on for a longer period without any activity, the disks spin down again and the original problem starts over. Does anyone have this same problem? Or a better solution for it? PS: Disabling the spin-down is not really an option (energy consumption / disk lifetime), as you may know.
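A sketch of that curl workaround; the server name, endpoint, and form field are guesses at the unRAID 5 web UI, so the real request should be captured with the browser's dev tools first:

```shell
# "Press" the web UI's Spin Up button via curl. URL and POST data are
# assumptions, not verified against a real unRAID 5 server.
spin_up_array() {
    server=${1:-http://tower}
    curl -s -d 'cmdSpinupAll=Spin+Up' "$server/update.htm" >/dev/null
}

# Example: call from the HTPC's post-network-up hook, or from cron
# every 30 minutes to keep the disks awake while the HTPC is on:
# spin_up_array http://tower
```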
  6. It all took longer than expected; the hardware came in yesterday, but I had some trouble installing Slackware on it. I finally managed to get it running, including the XFS filesystem, so I can do a local file copy. The system is now copying the data.
  7. I was reading the wiki/tutorial on how to install unRAID 5 on a full Slackware distro (http://lime-technology.com/wiki/index.php/Installing_unRAID_5.0_on_a_full_Slackware_Distro). One of the things I'm still confused about is the following line in the notes/issues: Is it enough to compare the /unraid/etc/rc.d/rc.6 with the default from Slackware and use that as the rc.local_shutdown? Or did I overlook something in the bzroot file from unRAID? PS: I haven't installed the Slackware distro yet, so I'm not sure what is in the /etc/rc.d/rc.6 from Slackware.
  8. The main reason I wanted it out of the RAID 1 array is so I can use the other disk as a data disk. I can now mount the XFS partition on my HTPC (also an Ubuntu (XBMC) machine) and do a (relatively) quick transfer over the network (gigabit). I did have some content (400 GB) on there as well (as temporary storage, because the RAID array was full) to transfer. Thanks for all the suggestions; it would have taken me way longer to figure these things out without the help and suggestions! I'll start the preclearing of the remaining disk tomorrow.
  9. The disconnect went well; the eTrayz still has a working (degraded) RAID 1 filesystem. After some exciting minutes, I was able to remove the RAID info from the other HD, and I can mount the partitions in the Ubuntu live CD. However (and I didn't check this before), the filesystem is XFS, and the unRAID USB disk can't mount that by default. Are there packages I can install so I can mount the XFS filesystem?
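A sketch of what might work on the unRAID stick, assuming a matching `xfsprogs` Slackware package can be fetched and the running kernel has XFS support; the device and paths are examples:

```shell
# Make an XFS partition mountable: try loading the xfs module (a no-op
# if XFS is built in) and mount read-only to be safe while copying.
mount_xfs_ro() {
    dev=$1; mnt=$2
    modprobe xfs 2>/dev/null || true
    mkdir -p "$mnt"
    mount -t xfs -o ro "$dev" "$mnt"
}

# Example: installpkg xfsprogs-<version>.txz   # from a Slackware mirror
#          mount_xfs_ro /dev/sdb1 /mnt/old
```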
  10. I will just mark the HD as faulty, take it out, and leave the other HD of the RAID 1 intact while I try to remove the RAID info from the removed one. In the worst case, I `dd`-clear the HD, add it back to the degraded array, and let it rebuild. So it's not really risky in my eyes.
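The `dd` fallback could look like this sketch. It is destructive, the device name is a placeholder, and note that some mdadm metadata versions live at the end of the disk rather than the start:

```shell
# Zero the first MiB of a disk (partition table plus most start-of-disk
# RAID metadata). DESTRUCTIVE: triple-check the device before running.
wipe_disk_start() {
    dd if=/dev/zero of="$1" bs=1M count=1 conv=notrunc 2>/dev/null
}

# Example: wipe_disk_start /dev/sdX   # then re-add to the degraded array
```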
  11. Hmm... thanks... I guess I read it wrong. According to the internet, a `mdadm --stop` with an Ubuntu live CD might work:
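For reference, a sketch of that live-CD approach; the array and partition names are examples:

```shell
# Stop the auto-assembled md array so its member disks are released,
# then inspect the leftover superblock on a member partition.
release_array() {
    mdadm --stop "$1"
    mdadm --examine "$2"
}

# Example: release_array /dev/md0 /dev/sdb1
```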
  12. Yes, it is software RAID. I did some more searching online: I can delete the RAID metadata from the HD with a `dmraid -rE`. The HD should then be a normal HD again, and I can mount the partitions normally in Slackware (right?). Tonight (CEST) I will attempt this, first removing one HD from the array and checking to see if I can mount it. If that succeeds, then I will preclear the other HD. I want to copy the data to an unRAID user share, so the data (folders) will be nicely spread across the disks; at least that's my understanding of what unRAID does. The HD used to transfer the data will later become the parity disk.
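A sketch of that plan: `dmraid -rE` is from the post, while `mdadm --zero-superblock` is the equivalent I would expect for Linux software (mdadm) RAID; device names are examples.

```shell
# Strip RAID metadata from a removed member disk so it behaves like a
# plain disk again. Errors are ignored because only one of the two
# metadata formats will actually be present on a given disk.
strip_raid_metadata() {
    dev=$1
    dmraid -rE "$dev" 2>/dev/null || true                  # firmware/dmraid metadata
    mdadm --zero-superblock "${dev}1" 2>/dev/null || true  # mdadm metadata
}

# Example: strip_raid_metadata /dev/sdb && mount /dev/sdb1 /mnt/old
```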
  13. Hi all, I'm currently running an eTrayz NAS in RAID 1 mode. I don't have enough room left on the device, so I bought some new hardware to build an unRAID storage server. I currently have 2x2TB (2 TB data) HDs in RAID 1 and want to go to 4x2TB (6 TB data) in the new system. I'm using a separate OS disk to install a full Slackware on, using the tutorials in the wiki; however, this disk has not yet been delivered. Since I want to use the 2 disks from the old array in the new system, can I just plug one of the RAID 1 disks into the new system, mount it, and transfer the data (the eTrayz has REALLY slow transfer speeds) to the other 3 (precleared) disks, and later preclear that disk and add it as parity? I don't care that much if, by some unfortunate event, a disk crashes before the parity disk is added, since it's mostly just movies and series for my HTPC. I found some info on how to mount it, but that requires mdadm to be present. I did the preclearing in my HTPC running from an unRAID USB stick; however, the stick doesn't have mdadm on it. Can I install that, or would it be better to install it on the full Slackware disk when it arrives? Once all the parts of the new system have arrived, I would like to get it running ASAP and not wait another 26 hours of preclearing just so I can start the transfer. I hope I explained my situation clearly and you can help/advise me. Thanks! PS: A little background info on myself: I manage several Debian (web)servers, so I know my way around Linux (I'm not an expert, but I know how to install, configure, and maintain it from the console), and I've never run Slackware before.
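A hedged sketch of how a single RAID 1 member could be brought up once mdadm is available; the device names, mount point, and the assumption that the eTrayz uses mdadm metadata are all mine:

```shell
# Assemble one RAID 1 member as a degraded, read-only array and mount
# it so the data can be copied off. /dev/md0, the partition, and
# /mnt/old are placeholders.
mount_raid1_member() {
    mdadm --assemble --run --readonly /dev/md0 "$1" && \
        mount -o ro /dev/md0 /mnt/old
}

# Example: mount_raid1_member /dev/sdb1
```

Read-only assembly keeps the surviving mirror copy untouched while the transfer runs.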