jangjong

Everything posted by jangjong

  1. Ah.. I want to move toward these 3 x 2 hot-swap cages now.. but which one to choose, the BPN-DE230SS or the MB153SP-B... decisions, decisions.
  2. You shouldn't be installing them in the same folder. You should be trying to open http://tower:81/spotweb/install.php. Also, do you have a rewrite set up for your newznab? That's going to cause problems with spotweb's rewrite. Either delete newznab's rewrite and use only spotweb's (newznab won't be accessible), or put newznab and spotweb into separate folders under the web root, so it would be tower:81/newznab/ and tower:81/spotweb/ (see the sketch below).
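     For example, with the separate-folder layout, a minimal lighttpd.cfg sketch (the spotweb rule is the one from my write-up below; the comment line is only a placeholder for whatever rewrite rules your newznab install actually uses):
     url.rewrite-once = (
         "^/spotweb/api\?(.*)" => "/spotweb/index.php?page=newznabapi&$1"
         # scope your newznab rewrites under "^/newznab/..." the same way
     )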
  3. Here is the write-up:

     1. Requirements
     You need a web server and MySQL running on your server for SpotWeb to work.
     Get the SimpleFeatures web server here: http://lime-technology.com/forum/index.php?topic=12698.0
     Get the MySQL plugin for unRAID here: http://lime-technology.com/forum/index.php?topic=20603.0

     2. Database Setup
     SpotWeb does not create a database for you, so you need to create an empty one. Telnet to your unRAID server and run:
     mysql -u root -p
     Type in your MySQL password. You should see something like this if you're connected:
     mysql>
     Type this command to create the database:
     CREATE DATABASE spotweb;
     This creates the 'spotweb' database. Type exit to leave the mysql prompt.

     3. Download SpotWeb
     You can either go to https://github.com/spotweb/spotweb and click 'ZIP' to download the whole repository, or use git to download it. I'm going to assume you have the SickBeard plugin installed from here: http://lime-technology.com/forum/index.php?topic=21260.0 The reason is that I don't want to walk through the packages needed to install git, and influencer's SickBeard plugin already installs them. So if you have the SickBeard plugin installed, follow these instructions to get the files; otherwise it's probably easier to just download them from the GitHub site.
     From here on, I'm assuming your web root directory is '/mnt/cache/web/'. If it's somewhere else, adjust the commands accordingly. Telnet to your unRAID box if you haven't already, and go to the web root:
     cd /mnt/cache/web/
     Then download the files:
     git clone -b master https://github.com/spotweb/spotweb spotweb
     This creates a subdirectory (spotweb) in your web root and pulls the files down from GitHub.

     4. Install SpotWeb
     If you followed the steps above, you should be able to open install.php in your browser at http://tower:81/spotweb/install.php (change tower:81 to match your SimpleFeatures web server settings).
     Install.php: It's okay to have 'NOT OK' for DB::pgsql and 'Own settings file'; those don't matter. If anything else fails, try to fix it. Otherwise, hit 'Next'.
     Database Settings:
       Type: mysql
       Server: localhost (if localhost doesn't work, type in the name of your server)
       Database: spotweb
       Username: root
       Password: root (or whatever your MySQL password is)
     You can create a new user just for this (a sketch follows at the end of this post), but if you're just trying it out or using it internally, it shouldn't matter.
     Usenet server settings: Either choose a server from the drop-down menu and type in your info, or choose 'Custom' and enter your usenet server details. DO NOT put in the port number; it causes an error. You can set up SSL and the other options after the install succeeds. Then hit 'Verify Usenet Server'.
     Spotweb type / Administrative user: Self-explanatory. Choose whatever settings you want, create a username, then hit 'Create System'.
     Installation successful: When you see this message, you're done. Oddly, this page doesn't link to the SpotWeb page, so just go to http://tower:81/spotweb/

     5. Retrieve NZBs
     Do not press the 'Retrieve' button on the page just yet; the first run takes a long time. By default it retrieves everything back to 11/1/2009 (or 1000 days? I don't know). If you don't need to go back that far, you can pick the start date under Config -> Settings -> Retrieve.
     Then open a telnet connection to your unRAID machine and start a screen session:
     screen
     Navigate to your spotweb directory; in my case it's /mnt/cache/web/spotweb/:
     cd /mnt/cache/web/spotweb/
     Then run:
     php retrieve.php
     It will start gathering data, and you should see it show up in SpotWeb as it updates. This may take a while, which is why I wanted you to run it under screen, so just let it run. I'm sure you know the screen commands, but just in case you don't:
     Detach screen: Ctrl+A, then Ctrl+D
     Back to screen: screen -r

     6. Rewrite Setup
     If you want to use SpotWeb with CouchPotato or SickBeard, you need to set up this rewrite. Open the lighttpd.cfg file in '/boot/config/plugins/simpleFeatures/' on your flash drive (hopefully you know where to find it) and add this rewrite rule:
     url.rewrite-once = ( "^/spotweb/api\?(.*)" => "/spotweb/index.php?page=newznabapi&$1" )
     This assumes your SpotWeb address is http://tower:81/spotweb/; if not, adjust the rewrite rule accordingly, lol.
     Test your API by opening this link: http://server/spotweb/api?t=c

     7. Schedule Retrieve
     You can run these commands to add a cron job that runs retrieve every two hours. You can also add these lines to your go file (/boot/config/go) so it's set up automatically on boot:
     crontab -l > /tmp/crontab
     echo "# SpotWeb Retrieve" >> /tmp/crontab
     echo "0 */2 * * * /usr/bin/php /mnt/cache/web/spotweb/retrieve.php >> /var/log/spotweb" >> /tmp/crontab
     crontab /tmp/crontab
     MAKE SURE TO CHANGE '/mnt/cache/web/spotweb/' TO WHEREVER YOU INSTALLED SPOTWEB.
     Also, if you want it to run every hour, change */2 to */1. If you need it to run more often than that, read this page: http://en.wikipedia.org/wiki/Cron

     END. Is this good enough?
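     One more note on the Database Settings step above: if you'd rather not use root, here's a minimal sketch of creating a dedicated MySQL user from the mysql> prompt. The 'spotwebuser' name and 'changeme' password are just placeholders, so pick your own:
     CREATE USER 'spotwebuser'@'localhost' IDENTIFIED BY 'changeme';
     GRANT ALL PRIVILEGES ON spotweb.* TO 'spotwebuser'@'localhost';
     FLUSH PRIVILEGES;
     Then enter that username/password in the install.php Database Settings instead of root.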
  4. Yea.. I didn't think I would need anything older than a year, so I set the start date to 1/1/2012. Retrieve was done in maybe 30 minutes? And I hope you're running retrieve.php either on the command line or in screen. Just like newznab's backfilling, the first run takes forever because it goes back a few years by default. And batt01.. I will try to write something up.
  5. I'm actually liking this more than newznab for now. I haven't tested it with SickBeard or CouchPotato yet, though. For those of you who are interested: I found the rewrite for the API and a cron job to schedule retrieve.php on their wiki.
     lighttpd rewrite:
     url.rewrite-once = ( "^/api\?(.*)" => "/index.php?page=newznabapi&$1" )
     cron (every 4 hours):
     0 */4 * * * /usr/bin/php /var/www/spotweb/retrieve.php >> /var/log/spotweb
     It's getting a lot more data than I actually need.. but looking good.
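     A quick way to check that the rewrite took, assuming the root install from the rule above (adjust the host to yours):
     curl "http://server/api?t=c"
     If the rewrite is working, you get an API response back instead of a 404.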
  6. Well.. the 'Retrieve' button doesn't work too well. It's better to use screen and run php retrieve.php.
  7. Looks interesting.. trying it out now. So far it's pretty easy to install, as long as you have the SimpleFeatures web server and MySQL installed. This 'Retrieve' is a lot easier than newznab's update scripts; it seems you don't need to worry about all the regex. And it looks like it can be used with CouchPotato/SickBeard, according to https://github.com/spotweb/spotweb/wiki/Spotweb-als-Newznab-Provider It's in Dutch, but Chrome translates it fairly well.
  8. Huh.. this totally fixed my write speed issue. Gonna put it in the go file for now...
  9. Fixed my own problem: http://lime-technology.com/forum/index.php?topic=22675.msg213845#msg213845 Funny what these memory things do to you.
  10. I wasn't sure if this was normal or not, but I thought I would ask. I have 2 x WD20EARX drives in my machine, and also 1 x ST2000DM001. Read speed is fine; I get about 60 - 80 MB/s from both. Not a big deal. However, when I was transferring some files to the unRAID machine, I noticed that the write speed to the WD20EARX drives is much slower. The screenshot of the ST2000DM001 transfer shows about 20 MB/s; that's expected. The screenshot of the WD20EARX transfer never goes above 3 MB/s. This can't be normal... is it? My syslog is clean.
  11. Could you explain this further? I followed your instructions but newznab didn't automatically start on system reboot. It worked only when running as root, although I got an error. Please see my output below:
      ubuntu@hades:~$ ./newznab.sh start
      -bash: ./newznab.sh: No such file or directory
      ubuntu@hades:~$ sudo su
      root@hades:/home/ubuntu# cd /etc/init.d
      root@hades:/etc/init.d# ./newznab.sh start
      Starting Newznab binaries update...root@hades:/etc/init.d# [OK]
      E: File read error
      E: File read error
      E: File read error
      root@hades:/etc/init.d#
      First, where do you have newznab.sh? Is it in /etc/init.d? Second, did you edit your newznab.sh file and change the path variable to the correct path (the location of update_scripts)? This is required. Also, the update_scripts folder needs to be under the misc folder, and the misc folder has to be at the same level as the www folder. If you open newznab.sh, you will see that it tries to run update_binaries and update_releases, and those two PHP files actually access files in the www folder via "../../www". A rough sketch of the expected layout follows below.
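      To illustrate, roughly what that layout and edit look like (paths are examples, and the variable name inside newznab.sh may differ in your copy, so check the file itself):
      /var/www/newznab/
          www/
          misc/
              update_scripts/    <- update_binaries.php / update_releases.php live here
      # inside /etc/init.d/newznab.sh, point the path variable at that folder, e.g.:
      NEWZNAB_PATH="/var/www/newznab/misc/update_scripts"
      The "../../www" reference only resolves when misc/ sits next to www/ like this.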
  12. Also, if you're using an older version of the SimpleFeatures webGUI, it may not be able to do that. I had this issue before, but it seems the new SimpleFeatures fixed it.
  13. Even if you set a 50-day backfill, it won't backfill automatically; there is a separate script for backfill. I think it's backfill.php? It should be in the update_scripts folder. update_releases only gets new posts since your last update. 50 days for multiple groups may take a long time. If you're planning on running backfill.php, I recommend not running newznab.sh and just running backfill.php instead. According to their documentation, backfill.php is similar to update_binaries.php, so after backfill.php is done running, make sure to run update_releases.php as well. Also, please look at this link for disabling the MySQL binary log: http://systembash.com/content/mysql-binary-log-file-size-huge/ If you don't comment out the "log-bin=mysql-bin" line in my.cnf (see the snippet below), it will generate huge files in your MySQL data folder. For me, when I had a 30-day backfill for one group, these bin files took up 2 - 3 GB. Disabling this will help you save some space.
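      In other words, in your my.cnf (location varies by distro; often /etc/my.cnf), comment the line out and restart MySQL:
      # log-bin=mysql-bin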
  14. Here is the group screen after I ran update binaries/releases. It wasn't newznab.sh; I ran the PHP files directly. So I guess it's accurate to say it shows the last updated time only if the group actually has new binaries (sorry, 33 weeks ago was the last post, lol), because as you can see, some show a few seconds ago, but some still say 13 hours ago. So then my question is: do you have multiple groups activated? Which one do you expect to see the update in? -- Edit: Actually, I misunderstood your post a bit. If you think there's a problem with the update script, though, newznab_screen.sh is probably the better way to see what's going on.
  15. Thank you jangjong for the response. OK - understood re: cronning the job; I've taken it out of my cron queue. I'm actually using a different usenet server to avoid these problems... so not sure what is going on. Yes - I can run those files perfectly through the command line. I've switched over to using the newznab_screen.sh command. It seems to do the same thing, but displays the output verbosely. I don't mind tying up one of my screens with this command, so I guess I'll just live with it for the time being. Pity, as the other file was much cleaner. You can check the last update of each of the groups by going to the newznab homepage and clicking on "Groups". It lists each of the activated groups and shows the time of last update. [This is from memory, so please excuse any mistakes]
      newznab_screen.sh should work fine too. But honestly, I don't know if that 'last update' shows when the script updated the group. I have a few groups enabled, and some of them show 10 hours, some show 15 minutes, and another group shows 33 weeks, lol. So I'm pretty sure it means the last time the group itself got updated on the server. Nothing wrong with your script, in my opinion.
  16. christuf, newznab.sh under cron_scripts is actually not meant for cron jobs (not sure why they call it cron_scripts), so it won't work with influencer's cron script, as I mentioned earlier; you just have to run it after changing the path. newznab.sh may not delete the PID file, though... kill is supposed to delete that PID file, but it doesn't always work. One thing I want to ask you, though: was your SABnzbd active at all? For newznab to update, I found out that it needs a connection to the usenet server. So for example, if your usenet provider offers 30 connections, you have 30 connections set up in SABnzbd, and SABnzbd was actively downloading something, newznab won't be able to get the connection it needs. What I did: my usenet provider offers about 50 connections, so I lowered SABnzbd to 30 so there will always be connections available for the newznab script. If this is not the problem, I would check the basics: are you able to run update_binaries and update_releases fine? Another thing I want to ask is: how are you checking whether it updated or not?
  17. Well.. emhttp was just an example. What's killing the share is probably this:
      Dec 10 05:21:48 HomeServer kernel: Out of memory: kill process 1481 (shfs) score 356 or a child
      Dec 10 05:21:48 HomeServer kernel: Killed process 1481 (shfs)
      Dec 10 05:21:48 HomeServer kernel: shfs invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0
      Dec 10 05:21:48 HomeServer kernel: Pid: 2976, comm: shfs Not tainted 2.6.32.9-unRAID #8
      If you search your log file for 'invoked oom-killer', you will find out what triggered the oom-killer (a grep example follows below). It looks like shfs and rsync are invoking the oom-killer a lot in your log, so again, my guess may be right that your file share is getting too big to handle. Another possibility: take a look at this post for an example: http://lime-technology.com/forum/index.php?topic=15596.0 That user accidentally changed a setting in SABnzbd to unrar to a RAM disk, which will destroy your free memory. Just something to look at.
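      If you want to check your own log the same way, something like this works (assuming the standard unRAID syslog path):
      grep 'invoked oom-killer' /var/log/syslog
      Each matching line names the process that triggered the kill, like the shfs lines above.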
  18. Well.. the share going down is definitely a memory problem based on the log; out of memory is killing off emhttp:
      Dec 10 05:21:45 HomeServer kernel: Out of memory: kill process 1366 (emhttp) score 412 or a child
      Dec 10 05:21:45 HomeServer kernel: Killed process 1366 (emhttp)
      If you haven't had a problem for years, though, I would first run memtest to make sure your RAM is okay. It could also be that newer versions of those plugins and of unRAID require more RAM, and as you get more files on your server, I'm guessing it takes more RAM to handle that data (just my guess; someone can tell me if I'm wrong). I've heard somewhere that the ACPI errors can be ignored, but I don't know... I would definitely run memtest and look into getting more RAM.
  19. Add this package as well: http://slackware.cs.utah.edu/pub/slackware/slackware-14.0/slackware/l/libelf-0.8.13-i486-2.txz I didn't need that package, but I'm not sure what changed. -- Edit: Actually, it looks like the link for screen that I gave you is a newer version or newer build than what I have, so that explains it. Installing libelf will fix the issue.
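      If anyone else needs it, something like this should do it (installpkg is the standard Slackware tool that unRAID ships with):
      wget http://slackware.cs.utah.edu/pub/slackware/slackware-14.0/slackware/l/libelf-0.8.13-i486-2.txz
      installpkg libelf-0.8.13-i486-2.txz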
  20. Yea, that may be too much for 1GB. I feel like SABnzbd alone could take up to 1GB, depending on the size of the files it's downloading/verifying/extracting. I was about to recommend Helmonder's thread as well, but you need to put more RAM in your machine if you want to run those plugins properly.
  21. I think for Ubuntu it should be /var/lib/mysql/ Also, if it's at 50% of 32GB, I would look into this: http://systembash.com/content/mysql-binary-log-file-size-huge/ For me, MySQL created multiple binary log files of over 1GB each, which took up about 6 - 7GB total.
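      If you'd rather reclaim the space without disabling logging (and you're not using replication), MySQL can purge the old log files itself from the mysql> prompt:
      PURGE BINARY LOGS BEFORE NOW();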
  22. Are you running any plug-ins or extra installed packages? -- Edit: Also, I see many 'ACPI Error: No handler for Region [sACS] (f741d208) [PCI_Config] (20090903/evregion-319)' errors. Do you have AHCI mode enabled in the BIOS?