Everything posted by lovingHDTV

  1. I searched here on the forum to see why my cache drive goes read-only after a while. Most people ran a check on the drive to look for corruption, so I stopped the array and ran:

     reiserfsck --check /dev/sdb1
     Replaying journal: Done.
     Reiserfs journal '/dev/sdb1' in blocks [18..8211]: 662 transactions replayed
     Checking internal tree.. finished
     Comparing bitmaps..finished
     Checking Semantic tree: finished
     No corruptions found
     There are on the filesystem:
     Leaves 63511
     Internal nodes 417
     Directories 309746
     Other files 184359
     Data block pointers 31617135 (969150 of them are zero)
     Safe links 0
     ########### reiserfsck finished at Mon Aug 29 10:23:45 2016 ###########

     It reports no issues. Any other suggestions on why my cache drive started acting up all of a sudden? When I try to create a file I get:

     /mnt/cache# touch test
     touch: cannot touch 'test': Read-only file system

     And in the log file I get the following errors/warnings:

     Aug 29 10:32:22 tower kernel: blk_update_request: critical medium error, dev sdb, sector 1426375255
     Aug 29 10:32:22 tower kernel: Buffer I/O error on dev sdb1, logical block 178296899, async page read
     Aug 29 10:32:25 tower shfs/user: shfs_read: read: (5) Input/output error
     Aug 29 10:32:25 tower kernel: mpt2sas0: log_info(0x31080000): originator(PL), code(0x08), sub_code(0x0000)
     Aug 29 10:32:25 tower kernel: sd 3:0:0:0: [sdb] UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
     Aug 29 10:32:25 tower kernel: sd 3:0:0:0: [sdb] Sense Key : 0x3 [current]
     Aug 29 10:32:25 tower kernel: sd 3:0:0:0: [sdb] ASC=0x11 ASCQ=0x0
     Aug 29 10:32:25 tower kernel: sd 3:0:0:0: [sdb] CDB: opcode=0x28 28 00 55 04 c2 5f 00 00 08 00
     Aug 29 10:32:25 tower kernel: blk_update_request: critical medium error, dev sdb, sector 1426375263
     Aug 29 10:32:25 tower kernel: Buffer I/O error on dev sdb1, logical block 178296900, async page read
     Aug 29 10:32:28 tower kernel: mpt2sas0: log_info(0x31110630): originator(PL), code(0x11), sub_code(0x0630)
     Aug 29 10:32:28 tower kernel: mpt2sas0: log_info(0x31110630): originator(PL), code(0x11), sub_code(0x0630)
     Aug 29 10:32:28 tower kernel: mpt2sas0: log_info(0x31110630): originator(PL), code(0x11), sub_code(0x0630)
     Aug 29 10:32:28 tower kernel: mpt2sas0: log_info(0x31110630): originator(PL), code(0x11), sub_code(0x0630)
     Aug 29 10:32:28 tower kernel: mpt2sas0: log_info(0x31110630): originator(PL), code(0x11), sub_code(0x0630)
     Aug 29 10:32:32 tower kernel: sd 3:0:0:0: [sdb] UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
     Aug 29 10:32:32 tower kernel: sd 3:0:0:0: [sdb] Sense Key : 0x3 [current]
     Aug 29 10:32:32 tower kernel: sd 3:0:0:0: [sdb] ASC=0x11 ASCQ=0x0
     Aug 29 10:32:32 tower kernel: sd 3:0:0:0: [sdb] CDB: opcode=0x28 28 00 55 04 c3 c7 00 00 08 00
     Aug 29 10:32:32 tower kernel: blk_update_request: critical medium error, dev sdb, sector 1426375623
     Aug 29 10:32:32 tower kernel: Buffer I/O error on dev sdb1, logical block 178296945, async page read

     It seems to me as if the drive is failing? (A smartctl check for this one is sketched after this list.) thanks david
  2. Just wanted to drop by and say thanks! I was in Taiwan for business when I got an email saying that CrashPlan wasn't backing up. I was aware that this was going to happen, but couldn't do anything about it. A few days later the power went out and unRAID shut down automatically. After power was restored it started back up, CrashPlan automatically updated, and I got 4.7. I didn't know any of this had happened until I got another email saying the backups were working again; imagine my surprise. I don't think anything has ever worked this perfectly for me. Thanks for the great docker, david
  3. I just do a:

     docker ps

     cut-n-paste the ID into:

     docker exec -it <paste> bash

     then if I want to run top:

     export TERM=xterm

     I find it faster to cut-n-paste the ID than trying to remember/figure out the names (a one-line variant is sketched after this list). david
  4. Today I monitor my home internet connection. The monitor runs on a Windows box, but I turn that box off. I'd like to have it running on my unRAID server because it is always on. david
  5. Wondering if anyone would be interested in a Neubot docker. It is similar to SmokePing, but does more than ICMP checking: http://neubot.org/ It does a pretty nice job of measuring your network performance. At the moment I have it running on a PC, but I would like to move it to a docker. thanks, david
  6. Would it be possible to get a bandwidth probe installed as well, or is there one already? thanks, david
  7. I set up OpenVPN on my pfSense router and it worked just fine. However, it got annoying: the bank requires additional authentication when coming from a VPN, many store fronts don't work because they block known VPN IP ranges, craigslist, etc. It took a bit of work to get Plex working because PIA didn't support a valid port-forwarding model, so I went with delugeVPN to avoid the full-VPN hassle. Not an OpenVPN or pfSense issue, just the fallout when people do stupid things via VPN. david

     Quote: "+1 This type of solution is very much needed. Having the ability to choose which application uses the VPN provides a great amount of control and flexibility. As an example, VPNing docker applications like Sickbeard, Sickrage, Sonarr, Couchpotato, Sabnzbd, to name a few, but not VPNing Plex Server since it has an issue with losing its connection once a VPN connection is established. Hope to see this type of solution come to fruition in the future."

     Quote: "OR... You can use your external firewall / router. Load a DD-WRT or Tomato firmware on your router and, using the OpenVPN client and the active routing policies, you can 'route' any number of specific internal source IPs out through the VPN. Keeps the networking on the edge and it is actually quite easy to set up (If I can do it! ;-)) My 2c"

     Did it. WAAAYYYY too slow. Routers don't have enough CPU power for real-time encryption; I slowed my speeds by about 75%. The VPN really needs to run on a modern CPU, so inside a separate docker is the better solution. I'm looking at setting up a pfSense install for this very purpose, but an OpenVPN docker would probably work just as well. (A policy-routing sketch for the router approach is at the end of this list.)
  8. Thanks for the pointer. I don't do any transcoding inside the house, this is all for external streams. This helps a lot. david
  9. I'm hoping someone has a suggestion. My current unRAID box is an AMD Phenom II X3 710 with 8GB of memory. It does fine with a single Plex transcode, but I'm finding that my family wants 3-4 now and it just can't handle that many. I'm looking to upgrade my CPU, memory, and motherboard to something that can handle it. Is there any advantage for unRAID in running a Xeon? I know it has a better/bigger L3 cache, but I'm not sure that really helps with this issue. I've looked at an A10, or maybe an i5. I'm "hoping" to stay around the $200 mark. Suggestions? thanks david
  10. Even though SickRage was working, I decided to update the docker; I'd been ignoring the update indicator for a long time. Now that I've done the update it no longer works:

     Error! NameError: Undefined
     import re
     myDB = db.DBConnection()
     today = str(datetime.date.today().toordinal())
     layout = sickbeard.HOME_LAYOUT
     status_quality = '(' + ','.join([str(x) for x in Quality.SNATCHED + Quality.SNATCHED_PROPER]) + ')'
     status_download = '(' + ','.join([str(x) for x in Quality.DOWNLOADED + Quality.ARCHIVED]) + ')'
     /opt/sickrage/lib/mako/runtime.py, line 226: raise NameError("Undefined")
     /opt/sickrage/gui/slick/views/home.mako, line 12: today = str(datetime.date.today().toordinal())
     /opt/sickrage/gui/slick/views/layouts/main.mako, line 62: <meta data-var="sickbeard.TRIM_ZERO" data-content="${sickbeard.TRIM_ZERO}">
     /opt/sickrage/lib/mako/runtime.py, line 883: callable_(context, *args, **kwargs)

     Ideas? david

     UPDATE: removing the cache dir and cache.db fixed it (the commands are sketched after this list).
  11. Well, I finally got around to rebooting the server and now everything works. Seems I needed to take a Microsoft approach to my unRAID issue. david
  12. Wondering if this is working for anyone else? I just removed the docker and re-installed with the same results. thanks, david
  13. ownCloud quit working for me today. I saw there was an update, so I installed it; now it won't even start. If I look at the syslog I see:

     Nov 2 18:31:10 tower php: /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker 'start' 'ownCloud'
     Nov 2 18:31:10 tower kernel: device veth86c5d74 entered promiscuous mode
     Nov 2 18:31:10 tower kernel: docker0: port 2(veth86c5d74) entered forwarding state
     Nov 2 18:31:10 tower kernel: docker0: port 2(veth86c5d74) entered forwarding state
     Nov 2 18:31:10 tower avahi-daemon[2740]: Withdrawing workstation service for vethf3d9038.
     Nov 2 18:31:10 tower kernel: docker0: port 2(veth86c5d74) entered disabled state
     Nov 2 18:31:10 tower kernel: eth0: renamed from vethf3d9038
     Nov 2 18:31:10 tower kernel: docker0: port 2(veth86c5d74) entered forwarding state
     Nov 2 18:31:10 tower kernel: docker0: port 2(veth86c5d74) entered forwarding state
     Nov 2 18:31:11 tower kernel: vethf3d9038: renamed from eth0
     Nov 2 18:31:11 tower kernel: docker0: port 2(veth86c5d74) entered disabled state
     Nov 2 18:31:11 tower kernel: docker0: port 2(veth86c5d74) entered disabled state
     Nov 2 18:31:11 tower avahi-daemon[2740]: Withdrawing workstation service for vethf3d9038.
     Nov 2 18:31:11 tower avahi-daemon[2740]: Withdrawing workstation service for veth86c5d74.
     Nov 2 18:31:11 tower kernel: device veth86c5d74 left promiscuous mode
     Nov 2 18:31:11 tower kernel: docker0: port 2(veth86c5d74) entered disabled state

     Any ideas what happened? david
  14. I was reading up on how to set up nzbToMedia with NZBGet and it said they were already included. Apparently this is not correct, but I was able to get them installed just like you said. thanks, david

     Quote: "Hi, I'm not sure it is included; I had a quick Google search and can't find any documentation to back this up. If you want to manually install nzbToMedia then there are instructions here: https://github.com/clinton-hall/nzbToMedia/wiki/nzbget Make sure to download and unzip the scripts to the '/data/scripts' host volume mapping. You should then be able to see them as available in the NZBGet web UI. A little bit more on this: the option ScriptDir defines the location of extension scripts. To make a script available in NZBGet, put the script into this directory, then go to the settings tab in the web interface (if you were already on the settings tab, switch to the downloads tab and back again to reread the list of available scripts from disk). The menu at the left of the page should list all extension scripts found in ScriptDir. Select a script to review or change its options (if it has any)." (A fetch command for the scripts is sketched after this list.)
  15. Is it possible to use nzbtomedia with the nzbget docker? I couldn't see where it was installed if it is. I thought this was now included with nzbget. thanks david
  16. I'm trying to add nzbtomedia to my nzbget install. Can someone tell me how? I don't see an area in the config area where I can install the script. thanks
  17. I can see that the .ui_info in the docker now gets changed every time I restart the server; this didn't use to happen. I also see that it gets overwritten even though I set the permissions to 400 and the owner to root. It seems it just gets recreated each time. My server is running 4.4.0, but I can't find a Windows download for 4.4.0; I have 4.3.0 installed from July. Anyone able to get this up and running? I looked at /etc/my_init.d/config.sh and it is creating a .ui_info file, but they (Code42) must be overwriting it later. thanks david
  18. Now that I'm past the 2TB point (where all my other drives end) it is running very fast: 175MB/s, and it says it will complete in 5 hours. I guess I'm just surprised that the parity rebuild estimated 11 hours and was right on; it never sped up and never slowed down. david
  19. Here it is. Pretty cool, I didn't know that existed. david tower-diagnostics-20150921-1617.zip
  20. Great, now it has dropped to 13MB/s; I've never seen it this slow. I'm up to over 2 days for this rebuild. Ideas? david
  21. I take that back. Five hours into it I'm now up to 1 day 15 hours left. At this rate I will never complete the upgrade. It really slowed down after it passed 785GB (the size of the previous disk): instead of 60MB/s I'm down to 24MB/s. Who knew verifying zeros was so complicated. david
  22. Thanks all. Previously I had always expanded my array; now I'm just upgrading drives because I don't want more drives. I'll wait the 12 hours per drive. david
  23. I'm curious as to how this should work. I ran preclear on all 3 of my 5TB drives. I then swapped out my parity drive and waited the 12 hours to rebuild parity. Since the second-largest drive in the system is 2TB, I thought that once it got past that point it would go faster, since there really is no parity to check and the drive was precleared. But no, it took the full 12 hours. I then swapped my one 1TB drive for another 5TB precleared drive, and it has now rebuilt beyond where the old drive was used, yet it still reports 11 hours to go before the rebuild is done. It seems as if preclear didn't really help me get my drives in any quicker; I thought that last time I did this it actually helped. I ran preclear using the latest script from the thread with the nice plugin for it. They all reported complete with a good SMART report. david
  24. Took it all apart, then wrote down all the data and entered it into the serverlayout plugin. When I hooked it all back up I decided to leave a port on the motherboard open and use the last IBM 1015 port instead. Bad idea, as it would not boot. After dragging a monitor up to the server and fussing with it, I finally figured out that when there is an open port on the motherboard, the BIOS will not boot off the flash drive; it doesn't even list it as an option. ARGS!! Anyway, up and running, doing a parity check prior to upgrading the parity drive and two other drives. thanks for all the suggestions, david
  25. Over the years I have lost track of which physical drive in the server is which in the array. Any ideas on how I can map them without having to take them all out of the 4-in-3 cages and read the serial numbers (a command-line approach is sketched below)? I don't think the drives have LEDs to look for. Can you hear individual drives spinning up? I bought 3 new drives and want to swap out smaller ones instead of just adding more to the array. thanks, david
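
For item 1 above (the read-only cache drive), the critical medium errors in the syslog point at the disk itself rather than the filesystem, so a SMART check is the natural next step. A minimal sketch, assuming smartmontools is present on the unRAID host and the cache drive is still /dev/sdb:

     # look at the attributes that usually reveal a dying disk
     smartctl -a /dev/sdb | grep -iE 'reallocated|pending|uncorrect'
     # kick off a short self-test, then read the result a few minutes later
     smartctl -t short /dev/sdb
     smartctl -l selftest /dev/sdb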
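For item 3 above, the same attach-to-a-container flow can be collapsed into a single line, assuming the container name is known ('plex' here is only a placeholder that should match exactly one container):

     # attach by (partial) container name instead of pasting the ID
     docker exec -it $(docker ps -qf name=plex) bash
     # inside the container, set TERM so curses tools like top render correctly
     export TERM=xterm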
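For item 7 above, the route-only-some-hosts-through-the-VPN idea from that thread comes down to Linux policy routing. A rough sketch, assuming an OpenVPN tun0 interface with gateway 10.8.0.1 and a LAN client at 192.168.1.50 (both values are placeholders):

     # send traffic from one LAN host into a dedicated routing table
     ip rule add from 192.168.1.50 table 100
     # that table's default route goes out the VPN tunnel
     ip route add default via 10.8.0.1 dev tun0 table 100
     ip route flush cache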
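For item 10 above, the fix mentioned in the update (clearing the cache directory and cache.db) would look roughly like this, assuming SickRage's config volume is mapped to /config inside the container; stop the container first and adjust the paths to your own mapping:

     # paths are examples only -- check your container's /config volume mapping
     rm -rf /config/cache
     rm -f /config/cache.db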
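For item 14 above, one way to fetch the nzbToMedia scripts into the mapped scripts directory, assuming /data/scripts on the host corresponds to NZBGet's ScriptDir as described in the quote (the target path is a placeholder, and unzipping a release into the same directory works just as well):

     # clone nzbToMedia into the host path mapped to NZBGet's ScriptDir
     git clone https://github.com/clinton-hall/nzbToMedia.git /data/scripts/nzbToMedia
     # then revisit the Settings tab in the NZBGet web UI so the new scripts are picked up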
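For item 25 above, the serial numbers can usually be read without pulling any drives, which makes it possible to match array slots to physical bays one disk at a time. A sketch assuming standard unRAID console tools (sdX is a placeholder):

     # list every drive with its serial number embedded in the device link
     ls -l /dev/disk/by-id/ | grep -v part
     # or query a single drive directly
     smartctl -i /dev/sdX | grep -iE 'serial|model'
     # reading from one drive makes only that disk's activity light/sound noticeable
     dd if=/dev/sdX of=/dev/null bs=1M count=2048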