
JonathanM


Posts posted by JonathanM

  1. Each time I stop and start, I need to go in, reapply the download directory, my seeding preferences, if I want to allow remote connections, labels, etc.  Any thoughts on what I could be doing wrong? 

     

    This sounds like a classic case of the share you're using not being marked as "cache only". If it isn't, the internal unRAID process called mover comes along and moves all the files in that share to the array; when you restart the docker you have no config again, and the cycle repeats. The way to solve this is to mark the share as "cache only", which tells mover not to touch any files or folders within that share: go to the unRAID UI, open Shares, click the share, select "Only" from the "Use cache disk" dropdown, save, and you're done.

     

    FYI, in Couch I have tried to connect to "localIP:58846", "127.0.0.1:58846" and "localhost:58846". I also tried changing the port number in deluge and in couch but no luck. In the beginning of this thread, in the FAQ, Binhex says to "enable remote connections, then restart". In my case, every restart resets to unchecked and I have to check again - so now I'm thinking whatever is keeping my preferences from staying might be a contributing factor.

     

    OK, this will depend on where CouchPotato is installed and how it's installed. For now I'm going to assume CouchPotato is installed as another docker on the unRAID host; if that's the case, read FAQ Q1 in post #2 of this thread and follow it carefully.

    Just to add to this a little, once mover has messed with it, it seems like you need to remove the delugevpn config folder and let the container recreate it. Also, I'd set the mapping to /mnt/cache/appdata/delugevpn instead of /mnt/user/appdata/delugevpn if that's what you used.
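     

    The mapping suggested above, written out as a docker run sketch (container and share names are examples, not a guaranteed-correct invocation). Pointing /config straight at /mnt/cache keeps appdata off the user share layer, so mover never relocates it:

    ```shell
    # /config maps to the cache disk directly; /data stays on the user share.
    # Names below are illustrative examples only.
    docker run -d --name delugevpn \
      -v /mnt/cache/appdata/delugevpn:/config \
      -v /mnt/user/downloads:/data \
      binhex/arch-delugevpn
    ```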
  2. One thing to keep an eye on is mixing /mnt/cache download locations and /mnt/user post processing destinations. I accidentally lost a few movie downloads when I was re-jiggering my CP processing setup. For a minute or two I couldn't figure out why a perfectly completed torrent just disappeared as soon as CP tried to process it.

  3. Since we're sort of on the subject of the Dynamix webUI instead of any of these other plugins, is there anything that can be done about the "adblocker" problem that has spawned so many threads recently? This has never been a problem for me since I have always had my server whitelisted, but it seems to be a new problem for many so I don't think it has always been this way.

    It seems to be an issue that could be dealt with in one of two ways: first, change the web code that is triggering the adblocker (not likely, or it probably would have already been done), or second, display a "please disable your adblocker" message if one is detected.
  4. Given the excellent state of IPMI, SSH, and plugins like Shell-In-A-Box, why the heck does unraid need a dedicated GPU out?

     

    Not trolling on this one, truly curious.

    Very few consumer boards have IPMI, but mostly I think it's a KVM limitation, not something that unraid has direct control over. However, how would you troubleshoot a network connection failure with no local console? I suppose you could redirect the console to a serial port like many other network appliances, but then managing unraid turns into a game of trying to connect a second machine just to get an output screen.
  5. So how do you know when the files listing memory is getting freed up?
    No logging that I am aware of. It's kernel memory, so it just does its thing invisibly in the background. If a directory list event causes a drive access, then either the list is no longer in RAM, or it wasn't just a file list request, but content as well. Most gui file managers try to display some form of thumbnail unless you turn that feature off, so if a drive spins up when you browse it, make sure the program isn't doing anything but listing the file names.
  6. OK. I found Joe L. post in original thread. I had thought that if you run it on the drives the user shares already got the benefit. Did he mean the user shares already have them cached even if you don't run it at all?
    Yes and no, sort of.

    User shares are indeed built and run in memory, but that memory can be claimed by any and all other processes. When something is accessed, the disks are spun up to rebuild that portion of the user share filesystem. Cache dirs keeps the underlying disks' contents fresh in memory, so accesses are nearly instantaneous. So to answer the OP's question directly: you will see a benefit from running cache dirs on the DISKS that make up the user share you wish to cache, as long as you have enough RAM that the directory tree can actually stay cached without being overrun by other processes. Whether or not you use disk shares has no bearing on cache dirs being useful to user shares. Where people run into issues is trying to use cache dirs to keep too much of the directory tree in memory. That keeps the disks spun up: as soon as cache dirs finishes walking a disk, something else comes along and claims that RAM, knocking the directory list out, so cache dirs has to read it from disk into RAM again, in a loop.

    This doesn't use the same lists as Pi-Hole but it works the same way - just point your DNS addresses to your unRAID tower and it will forward the valid requests via Google's DNS servers and drop the rubbish ones....
    Can this be configured to do .lan internal dns as well? Also split dns would be nice, so you could use the same names internally and externally for published services on your lan.
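
    If the container's resolver happens to be dnsmasq (an assumption - the post doesn't say what's under the hood), the .lan idea would look roughly like this config snippet (hostnames and addresses invented for illustration):

    ```
    # Answer .lan names locally, never forward them upstream
    local=/lan/
    address=/tower.lan/192.168.1.10
    # Forward everything else to Google's DNS
    server=8.8.8.8
    server=8.8.4.4
    ```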
  8.  

    More of a general request for all of linuxserver.io Dockers, can updates not occur automatically on Docker start?  Perhaps a separate GUI button to toggle update on start?  Thanks.

    Wait, do you want updates or not?

    Yes or no, depending on a saved toggle setting for each docker. All that's needed is a user settable variable that skips the auto update portion and starts immediately if set.
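
    A minimal sketch of what that toggle could look like in a container entrypoint script. UPDATE_ON_START is an invented variable name, not an actual linuxserver.io setting, and the function bodies are placeholders:

    ```shell
    #!/bin/bash
    # Run the auto-update step only when the user-set toggle allows it;
    # otherwise skip straight to starting the app.
    run_update() { echo "updating app"; }
    start_app()  { echo "starting app"; }

    if [ "${UPDATE_ON_START:-yes}" = "yes" ]; then
      run_update
    fi
    start_app
    ```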
  9. cadvisor container will show container sizes and help narrow it down.

     

     

    cadvisor was useless in resolving this issue for me.  The sizes reported in cadvisor never changed while the docker image continued to fill up.

    Really?  That's interesting, might have to look into that....

     

    Like I say, I've not been affected by this issue..

    In theory that should be easy to test. docker exec into the container, run dd if=/dev/urandom of=bigfile.test bs=1M count=100, then check cadvisor for a 100MB bump in that container.
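
    Spelled out, that test would be something like the following (container name is an example):

    ```shell
    # Write 100MB of random data into the container's writable layer...
    docker exec delugevpn dd if=/dev/urandom of=/bigfile.test bs=1M count=100
    # ...then check cadvisor for a ~100MB bump in that container,
    # and clean up afterwards:
    docker exec delugevpn rm /bigfile.test
    ```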
  10. Suspicious at one point I may have been

    I do keep posting the same advice in this thread to be fair....

    It was a dumb joke that may have been only funny to myself.  Don't worry about it

    ;D ;D ;D:-X

     

    (Don't worry Squid, it was funny)

    (I don't think CHBMB has awakened yet)

    Someone please explain.... I get the Yoda thing but after that..... lost.

    Squid just thought it was funny that, consciously or not, you posted in Yoda voice during all the marketing buzz for The Force Awakens. Nothing deep or hidden that I can see, but maybe I missed it.
  12. Can I get a little clarification on this bug? I am looking to back up one of my user shares to an external USB drive I mount with the "unassigned devices" plugin via MC. Since the USB drive cannot be part of the array (and therefore not part of the share), will I be safe to copy a user share over to the mounted USB disk? It's 3TB of data so I'd prefer to use MC rather than the network to save time.

     

    Thanks

    Doing the entire process locally on unraid means you don't need another PC running while the copy is in progress, but keep in mind that it may actually go quicker over the network. I'm not sure if the USB speeds have improved on the latest builds, but in the past a locally attached USB drive was way slower than going over the network to a client PC with a good USB3 connection. I'll be interested to see if your experience is different.
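
    For the actual copy at the console, something like rsync is a reasonable alternative to MC, since it can be resumed if interrupted (paths are examples; the USB mount point depends on what unassigned devices named it):

    ```shell
    # -a preserves permissions and timestamps, --progress shows per-file status
    rsync -a --progress /mnt/user/Movies/ /mnt/disks/usb_backup/Movies/
    ```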
  13. My docker usage has been sitting constant for the past half a year. There is nothing that LT can do when users have badly behaving or misconfigured dockers.

     

    tl;dr: YOU have an issue with YOUR configuration of YOUR dockers that only YOU can fix.

     

    @Brit  I posted in here to see if anyone else has the same problem so we can collaborate to fix it. If you don't have the problem, that's great, but telling those of us that do have the problem that it's our problem and we have to fix it ourselves is not helpful. What would be helpful is if you could tell us which dockers you are using that have not given you any problems, so we can possibly eliminate them from the list of potentially misbehaving dockers.

    Running 24/7

     

    binhex/arch-couchpotato

    binhex/arch-delugevpn

    binhex/arch-sabnzbd

    binhex/arch-sickrage

    binhex/arch-sonarr

    emby/embyserver

    smdion/reverseproxy

     

    Running on demand or not fully configured and utilized.

     

    sparklyballs/krusader

    yujiod/minecraft-mineos

    lsiodev/minetest

    sparklyballs/tftp-server

     

    10GB docker at 64% utilization constant.

     

    I don't think it's a docker app that's misbehaving, I'm betting there is a setting or configuration in the docker app itself that should be pointed to the mapped appdata location but hasn't been changed and is still writing to the image. I'd go over EVERY setting and configuration page and examine each listed location to make sure it's pointed to the correct mapped spot.
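
    A couple of stock docker commands can help find the offender before going setting-by-setting (the -s column shows each container's writable-layer size; docker system df only exists on newer docker releases):

    ```shell
    docker ps -s         # per-container writable layer size
    docker system df -v  # images, containers and volumes broken down
    ```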

     

  14. 2- I have some backups but not all other stuff that its important for me too.
    Don't mess with the server until you have backups of everything you don't want to lose. Copying data from drive to drive and changing formats is risky; there is a chance of typing a command wrong or misunderstanding the directions and erasing stuff by accident. Add to that the fact that you want to eliminate the single drive failure protection by invalidating parity in order to move stuff, and you have a recipe for disaster unless everything works perfectly.
  15. My drives do make some "crunching" sound, like when the read/write heads are moving quite a lot, but always in the same relative movement, emitting the same rhythm/scrubbing noise. Is that normal?
    Drive models seem to have unique sound signatures, as long as like drives all sound alike it's probably ok. What I don't like to hear is one drive making lots more noise than others of the exact same model.
  16. Older WD 2TB popped a single pending sector. Replaced the drive, precleared it three times and ran a long self test, still have 1 pending sector.

     

    Thoughts?

    smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.1.7-unRAID] (local build)
    Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org
    
    === START OF INFORMATION SECTION ===
    Model Family:     Western Digital Caviar Green (AF)
    Device Model:     WDC WD20EARS-00J2GB0
    Serial Number:    WD-WCAYY0240773
    LU WWN Device Id: 5 0014ee 25a299d4d
    Firmware Version: 80.00A80
    User Capacity:    2,000,398,934,016 bytes [2.00 TB]
    Sector Size:      512 bytes logical/physical
    Device is:        In smartctl database [for details use: -P show]
    ATA Version is:   ATA8-ACS (minor revision not indicated)
    SATA Version is:  SATA 2.6, 3.0 Gb/s
    Local Time is:    Sun Nov  1 22:27:25 2015 EST
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled
    
    === START OF READ SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED
    
    General SMART Values:
    Offline data collection status:  (0x82)	Offline data collection activity
    				was completed without error.
    				Auto Offline Data Collection: Enabled.
    Self-test execution status:      (   0)	The previous self-test routine completed
    				without error or no self-test has ever 
    				been run.
    Total time to complete Offline 
    data collection: 		(40260) seconds.
    Offline data collection
    capabilities: 			 (0x7b) SMART execute Offline immediate.
    				Auto Offline data collection on/off support.
    				Suspend Offline collection upon new
    				command.
    				Offline surface scan supported.
    				Self-test supported.
    				Conveyance Self-test supported.
    				Selective Self-test supported.
    SMART capabilities:            (0x0003)	Saves SMART data before entering
    				power-saving mode.
    				Supports SMART auto save timer.
    Error logging capability:        (0x01)	Error logging supported.
    				General Purpose Logging supported.
    Short self-test routine 
    recommended polling time: 	 (   2) minutes.
    Extended self-test routine
    recommended polling time: 	 ( 459) minutes.
    Conveyance self-test routine
    recommended polling time: 	 (   5) minutes.
    SCT capabilities: 	       (0x3031)	SCT Status supported.
    				SCT Feature Control supported.
    				SCT Data Table supported.
    
    SMART Attributes Data Structure revision number: 16
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
      1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
      3 Spin_Up_Time            0x0027   167   162   021    Pre-fail  Always       -       8641
      4 Start_Stop_Count        0x0032   093   093   000    Old_age   Always       -       7534
      5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
      7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
      9 Power_On_Hours          0x0032   039   039   000    Old_age   Always       -       45254
    10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
    11 Calibration_Retry_Count 0x0032   100   100   000    Old_age   Always       -       0
    12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       129
    192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       60
    193 Load_Cycle_Count        0x0032   055   055   000    Old_age   Always       -       437832
    194 Temperature_Celsius     0x0022   120   110   000    Old_age   Always       -       32
    196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
    197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       1
    198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
    199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
    200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       0
    
    SMART Error Log Version: 1
    No Errors Logged
    
    SMART Self-test log structure revision number 1
    Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
    # 1  Short offline       Completed without error       00%     45254         -
    # 2  Extended offline    Completed without error       00%     45253         -
    # 3  Extended offline    Aborted by host               80%     45176         -
    
    SMART Selective self-test log data structure revision number 1
    SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
        1        0        0  Not_testing
        2        0        0  Not_testing
        3        0        0  Not_testing
        4        0        0  Not_testing
        5        0        0  Not_testing
    Selective self-test flags (0x0):
      After scanning selected spans, do NOT read-scan remainder of disk.
    If Selective self-test is pending on power-up, resume after 0 minute delay.
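
    To keep watching the pending count from attribute 197 above, the standard smartctl calls are (device name is an example - substitute your actual drive):

    ```shell
    smartctl -A /dev/sdb | grep -i pending   # current pending sector count
    smartctl -t long /dev/sdb                # queue another extended self-test
    ```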
    

  17. So I've upgraded to 6.1.3. My server is up and the array is online.

     

    I'm noticing a few omissions:

    • make - the makefile utility isn't installed
    • screen - also isn't installed

     

    I'd like to use screen to run preclear from the command line. Is there a reason that I don't need screen anymore? It's running in a live terminal session, but I recognize that it could fail at any time.

     

    I tried installing it as an extra, but I'm now getting a "screen: error while loading shared libraries: libutempter.so.0: cannot open shared object file: No such file or directory." I'm not sure how to install the .so. Can you help?

    http://lime-technology.com/forum/index.php?topic=37541.0
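
    For errors like the one above, ldd is the standard way to list exactly which shared objects a binary still can't resolve (nothing unRAID-specific about it):

    ```shell
    # Lines marked "not found" are the libraries that still need installing
    ldd $(which screen) | grep "not found"
    ```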
  18. My main motivation for wanting a redundant PSU is this notion I have that if a PSU fails while a hard drive is being used (e.g., spinning up for read/write operation) then that hard drive can be corrupted beyond repair. If this notion is true, then the thought that 24 drives can all become lost beyond repair in a single PSU failure is scary. Is my fear unreasonable or misinformed?
    c3 covered the cases of a gentle failure where the power supply stops supplying voltage like turning off a switch. That probably covers 99.99% of failures, the other very small probability is a catastrophic failure where a severe over voltage surge is sent through the whole machine, in which case, yes, you can fry everything at once. A UPS will put the probability of that happening even lower, but even then, the mechanicals of the drive are fine, and you can get replacement circuit boards for the drives for much less than a clean room recovery fee, typically less than $100 per drive recovered.

     

    Bottom line: get a good name brand single rail PSU with a healthy margin of capacity and a good UPS, and power supply issues should be rare to non-existent.

  19. Ok, I went this route as the firmware upgrade is my last remaining hope...

     

    Installing Unraid 5 was actually easy and it is running now.

     

    I also downloaded all the files required, but I fail at a very simple task: I cannot copy the files to the flash disk. I copied them manually (plugged the Unraid USB into another computer), but cannot find them after booting into Unraid. Also, WinSCP does not appear to work. This worked in Unraid 6, but somehow I'm missing something in Unraid 5. Please note that I did not create an array in Unraid 5, as I hope to be able to do this without one.

     

    Any thoughts?

    The root of the USB drive should show up on the network under \\tower\flash, or at the console or telnet terminal under /boot.