Posts posted by JonathanM
-
One thing to keep an eye on is mixing /mnt/cache download locations and /mnt/user post processing destinations. I accidentally lost a few movie downloads when I was re-jiggering my CP processing setup. For a minute or two I couldn't figure out why a perfectly completed torrent just disappeared as soon as CP tried to process it.
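The hazard above comes from mixing the two path namespaces unRAID exposes for the same data. As a minimal sketch (the `check_paths` helper and the example paths are invented for illustration, not part of any app's config), a pre-flight check can flag a download directory and a post-processing destination that live in different namespaces:

```shell
#!/bin/sh
# Sketch: warn when a download dir and a post-processing destination mix
# the /mnt/cache and /mnt/user namespaces, which can make a "move" silently
# destroy the source. Helper name and paths are hypothetical examples.
check_paths() {
    dl=$1; dest=$2
    case $dl in /mnt/cache/*) dl_ns=cache ;; /mnt/user/*) dl_ns=user ;; *) dl_ns=other ;; esac
    case $dest in /mnt/cache/*) dest_ns=cache ;; /mnt/user/*) dest_ns=user ;; *) dest_ns=other ;; esac
    if [ "$dl_ns" != "$dest_ns" ]; then
        echo "WARNING: $dl ($dl_ns) and $dest ($dest_ns) use different namespaces"
    else
        echo "OK: both paths under /mnt/$dl_ns"
    fi
}

check_paths /mnt/cache/downloads /mnt/user/movies
```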
-
Quote: Since we're sort of on the subject of the Dynamix webUI instead of any of these other plugins, is there anything that can be done about the "adblocker" problem that has spawned so many threads recently? This has never been a problem for me since I have always had my server whitelisted, but it seems to be a new problem for many, so I don't think it has always been this way.
It seems to be an issue that could be dealt with in one of two ways: first, change the web code that is triggering the adblocker (not likely, or it probably would have already been done), or second, display a "please disable your adblocker" message if one is detected.
-
Quote: Given the excellent state of IPMI, SSH, and plugins like Shell-In-A-Box, why the heck does unraid need a dedicated GPU out? Not trolling on this one, truly curious.
Very few consumer boards have IPMI, but mostly I think it's a KVM limitation, not something that unraid has direct control over. However, how would you troubleshoot a network connection failure with no local console? I suppose you could redirect the console to a serial port like many other network appliances, but then managing unraid turns into a game of trying to connect a second machine just to get an output screen.
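For reference, the serial-console redirect mentioned above would look roughly like the following syslinux fragment. This is a sketch only: unRAID boots via syslinux and its config lives on the flash drive, but the exact label names, port, and baud rate here are assumptions you would adapt to your board:

```text
# Hypothetical /boot/syslinux/syslinux.cfg fragment (adjust port/speed):
SERIAL 0 115200
LABEL unRAID OS
  KERNEL /bzimage
  APPEND initrd=/bzroot console=tty0 console=ttyS0,115200
```

With `SERIAL` declared, the boot menu and kernel messages go out the serial port, and you'd connect from a second machine with a null-modem cable and a terminal program.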
-
Quote: So how do you know when the file listing memory is getting freed up?
No logging that I am aware of. It's kernel memory, so it just does its thing invisibly in the background. If a directory list event causes a drive access, then either the list is no longer in RAM, or it wasn't just a file list request, but content as well. Most GUI file managers try to display some form of thumbnail unless you turn that feature off, so if a drive spins up when you browse it, make sure the program isn't doing anything but listing the file names.
-
Yes and no, sort of.
Quote: OK. I found Joe L.'s post in the original thread. I had thought that if you run it on the drives, the user shares already got the benefit. Did he mean the user shares already have them cached even if you don't run it at all?
User shares are indeed built and run in memory, but that memory can be claimed by any and all other processes. When something is accessed, the disks are spun up to rebuild that portion of the user share fs. Cache dirs keeps the underlying disks' contents fresh in memory, so accesses are nearly instantaneous. So to answer the OP's question directly: you will see a benefit from running cache dirs on the DISKS that make up the user share you wish to cache, as long as you have enough RAM that the directory tree can actually stay cached without being overrun by other processes. Whether or not you use disk shares has no bearing on cache dirs being useful to user shares. Where people run into issues is trying to keep too much of the directory tree in memory. That causes cache dirs to keep the disks spun up: as soon as it's done walking a disk, something else comes along and claims that RAM, knocking the directory list out, so cache dirs reads it from disk again, in a loop.
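What cache_dirs relies on can be observed directly: a walk of a directory tree populates the kernel's dentry/inode cache, and a repeated walk keeps those entries resident so later listings touch no disk. A minimal sketch (the `list_count` helper is invented; run it twice and the second walk is served from RAM):

```shell
#!/bin/sh
# Sketch: count files under a tree; a repeated walk like this is what
# keeps the directory entries resident in the kernel's dentry cache.
list_count() {
    find "${1:-/etc}" -type f 2>/dev/null | wc -l
}

echo "cached listing of $(list_count /etc) files under /etc"
# Root-only: evict the caches and the next walk hits the disks again:
#   sync; echo 3 > /proc/sys/vm/drop_caches
```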
-
Quote: And here I thought Apple products were chosen for their ease of use
...on Apple certified hardware. When you have a limited pool of possible hardware combinations to support, it's easy to get it right. Running a hackintosh, not so much.
-
Ran into an issue I've had before and thought was being fixed, but it's still happening with 6.1.6 and 2015.11.18 UD
1. Map SMB share in UD
2. PC that hosts that SMB share goes offline.
3. Webgui main page takes forever to load, sticking at "Please Wait".
4. The now-offline mapped SMB share is nearly impossible to unmount.
-
Quote: If the share is on cache only, does it actually matter?
Yes, the fuse filesystem that is used to aggregate folders for /mnt/user does not support hard links, which some dockers may depend on.
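Whether a given path supports hard links is easy to test empirically. A sketch (the `supports_hardlinks` helper is invented; on unRAID you would point it at /mnt/user vs. /mnt/cache, while the default below is just a temp dir so it runs anywhere):

```shell
#!/bin/sh
# Sketch: probe a directory for hard link support by attempting an ln.
# Helper name is hypothetical; default target is a temp dir.
supports_hardlinks() {
    dir=${1:-$(mktemp -d)}
    touch "$dir/.hl_a" 2>/dev/null || { echo no; return; }
    if ln "$dir/.hl_a" "$dir/.hl_b" 2>/dev/null; then
        echo yes
    else
        echo no
    fi
    rm -f "$dir/.hl_a" "$dir/.hl_b"
}

echo "hard link support: $(supports_hardlinks)"
```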
-
Quote: This doesn't use the same lists as Pi-Hole but it works the same way - just point your DNS addresses to your unRaid tower and it will forward the valid requests via Google's DNS servers and drop the rubbish ones....
Can this be configured to do .lan internal DNS as well? Also, split DNS would be nice, so you could use the same names internally and externally for published services on your LAN.
-
Quote: More of a general request for all of the linuxserver.io dockers: can updates not occur automatically on docker start? Perhaps a separate GUI button to toggle update on start? Thanks.
Quote: wait do you want updates or not?
Yes or no, depending on a saved toggle setting for each docker. All that's needed is a user-settable variable that skips the auto-update portion and starts immediately if set.
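The toggle described above could be sketched as an entrypoint fragment like the following. Note that `SKIP_UPDATE` and `start_mode` are invented names for illustration; they were not actual linuxserver.io options at the time of this thread:

```shell
#!/bin/sh
# Hypothetical entrypoint fragment: SKIP_UPDATE is an invented variable,
# not a real linuxserver.io setting; it shows the shape of the toggle.
start_mode() {
    if [ "${SKIP_UPDATE:-false}" = "true" ]; then
        echo "starting app immediately (updates skipped)"
    else
        echo "checking for updates before start"
        # real self-update commands would run here
    fi
}

SKIP_UPDATE=true
start_mode
```

The container template would expose `SKIP_UPDATE` as an environment variable, so the GUI toggle the poster asks for reduces to editing one field.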
-
Quote: cadvisor container will show container sizes and help narrow it down.
Quote: cadvisor was useless in resolving this issue for me. The sizes reported in cadvisor never changed while the docker image continued to fill up.
Quote: Really? That's interesting, might have to look into that.... Like I say, I've not been affected by this issue..
In theory that should be easy to test. docker exec into the container, run dd if=/dev/urandom of=bigfile.test bs=1M count=100, then check cadvisor for a 100MB bump in that container.
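The growth test above can be sketched as a self-contained script. Here it writes into a plain temp directory so it runs anywhere; on a live system you would run the dd inside the container (e.g. `docker exec <name> dd ...`) and compare the sizes cadvisor reports before and after. The `grow_test` helper name is invented:

```shell
#!/bin/sh
# Sketch: write N MB of random data and report the resulting file size,
# mirroring the dd test described above. Helper name is hypothetical.
grow_test() {
    count=${1:-100}
    work=$(mktemp -d)
    dd if=/dev/urandom of="$work/bigfile.test" bs=1M count="$count" 2>/dev/null
    # report apparent size in MB
    echo $(( $(wc -c < "$work/bigfile.test") / 1024 / 1024 ))
    rm -rf "$work"
}

echo "wrote $(grow_test 100) MB"
```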
-
Quote: ;D It was a dumb joke that may have been only funny to myself. Don't worry about it
Quote: Suspicious at one point I may have been
Quote: I do keep posting the same advice in this thread to be fair....
Quote: (Don't worry Squid, it was funny) (I don't think CHBMB has awakened yet)
Quote: Someone please explain.... I get the Yoda thing but after that..... lost.
Squid just thought it was funny that, consciously or not, you posted in Yoda voice during all the marketing buzz for The Force Awakens. Nothing deep or hidden that I can see, but maybe I missed it.
-
-
Quote: "Failing to fetch the mirror is something local to you."
Quote: But you are calling to a different place than I am. Is the call not coded somewhere as to where to go look? If it is, then I surely got a different file than you did, as it is looking in another place. So just to be sure... my container came from https://hub.docker.com/r/linuxserver/sabnzbd/
Try temporarily changing your DNS server to 8.8.8.8
-
Quote: My parity check is way faster though. I'm at ~15.5 hours to complete the initial parity of the 3 drive array.
Initial parity build != parity check. Totally different operation; you need to do a check after the build completes if you want full confidence in the array. Many people see much higher speeds on the build than they do on the check.
-
Quote: Can I get a little clarification on this bug? I am looking to back up one of my user shares to an external USB drive I mount with the "unassigned devices" plugin via MC. Since the USB drive cannot be part of the array (and therefore not part of the share), will I be safe to copy a user share over to the mounted USB disk? It's 3TB of data, so I'd prefer to use MC rather than the network to save time. Thanks
Doing the entire process locally on unraid means you don't need another PC running while the copy is in progress, but keep in mind that it may actually go quicker over the network. I'm not sure if the USB speeds have improved in the latest builds, but in the past a locally attached USB drive was way slower than going over the network to a client PC with a good USB3 connection. I'll be interested to see if your experience is different.
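Whichever route is faster, a copy of this size deserves a sanity check at the end. A minimal sketch (the `copy_with_check` helper is invented, and on unRAID the arguments would be something like the share path and the unassigned-devices mount point; the demo below uses temp dirs so it runs anywhere):

```shell
#!/bin/sh
# Sketch: copy a tree and confirm the destination is at least as large
# as the source. Helper name and example paths are hypothetical.
copy_with_check() {
    src=$1; dst=$2
    cp -a "$src/." "$dst/"
    src_kb=$(du -s "$src" | cut -f1)
    dst_kb=$(du -s "$dst" | cut -f1)
    if [ "$dst_kb" -ge "$src_kb" ]; then
        echo "copy size check passed"
    else
        echo "copy size check FAILED"
    fi
}

s=$(mktemp -d); d=$(mktemp -d)
echo "sample data" > "$s/file.txt"
copy_with_check "$s" "$d"
```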
-
Quote: I have been sitting constant at my docker usage for the past half a year. There is nothing that LT can do when users have badly behaving or misconfigured dockers. tl;dr: YOU have an issue with YOUR configuration of YOUR dockers that only YOU can fix.
Quote: @Brit I posted in here to see if anyone else has the same problem so we can collaborate to fix it. If you don't have the problem, that's great. But telling those of us who do have the problem that it's our problem and we have to fix it ourselves is not helpful. What would be helpful is if you could tell us which dockers you are using that have not given you any problems, so we can possibly eliminate them from the list of potentially misbehaving dockers.
Running 24/7
binhex/arch-couchpotato
binhex/arch-delugevpn
binhex/arch-sabnzbd
binhex/arch-sickrage
binhex/arch-sonarr
emby/embyserver
smdion/reverseproxy
Running on demand or not fully configured and utilized.
sparklyballs/krusader
yujiod/minecraft-mineos
lsiodev/minetest
sparklyballs/tftp-server
10GB docker at 64% utilization constant.
I don't think it's a docker app that's misbehaving, I'm betting there is a setting or configuration in the docker app itself that should be pointed to the mapped appdata location but hasn't been changed and is still writing to the image. I'd go over EVERY setting and configuration page and examine each listed location to make sure it's pointed to the correct mapped spot.
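Going over every settings page is the right fix; to narrow down which app is writing inside the image in the first place, a disk-usage walk helps. A sketch (the `biggest` helper is invented; on unRAID you would point it at the mounted docker image path, which varies by install, while `/var` here is only a stand-in so the sketch runs anywhere):

```shell
#!/bin/sh
# Sketch: list the five largest entries under a tree to spot what is
# filling a docker image. Helper name and default path are placeholders.
biggest() {
    du -x -m "${1:-/var}" 2>/dev/null | sort -rn | head -5
}

biggest /var
```

Run it against the image mount before and after a day of uptime; whichever directory grows belongs to the container whose config still points inside the image instead of at mapped appdata.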
-
Quote: 2- I have some backups, but not all of the other stuff that is important to me too.
Don't mess with the server until you have backups of everything you don't want to lose. Copying data from drive to drive and changing formats is risky; there is a chance of typing a command wrong or not understanding the directions and erasing stuff by accident. Add to that the fact that you want to eliminate the single-drive failure protection by invalidating parity in order to move stuff, and you have a recipe for disaster unless everything works perfectly.
-
Quote: My drives do make some "crunching" sound, like when the read/write heads are moving quite a lot, but always in the same relative movement, so emitting the same rhythm/scrubbing noise. Is that normal?
Drive models seem to have unique sound signatures; as long as like drives all sound alike, it's probably ok. What I don't like to hear is one drive making lots more noise than others of the exact same model.
-
Older WD 2TB popped a single pending sector. Replaced the drive, precleared it three times and ran a long self test, still have 1 pending sector.
Thoughts?
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.1.7-unRAID] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Caviar Green (AF)
Device Model:     WDC WD20EARS-00J2GB0
Serial Number:    WD-WCAYY0240773
LU WWN Device Id: 5 0014ee 25a299d4d
Firmware Version: 80.00A80
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Size:      512 bytes logical/physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS (minor revision not indicated)
SATA Version is:  SATA 2.6, 3.0 Gb/s
Local Time is:    Sun Nov 1 22:27:25 2015 EST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled.
Self-test execution status:      (0) The previous self-test routine completed without error or no self-test has ever been run.
Total time to complete Offline data collection: (40260) seconds.
Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported. General Purpose Logging supported.
Short self-test routine recommended polling time: (2) minutes.
Extended self-test routine recommended polling time: (459) minutes.
Conveyance self-test routine recommended polling time: (5) minutes.
SCT capabilities: (0x3031) SCT Status supported. SCT Feature Control supported. SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   167   162   021    Pre-fail  Always       -       8641
  4 Start_Stop_Count        0x0032   093   093   000    Old_age   Always       -       7534
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   039   039   000    Old_age   Always       -       45254
 10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   100   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       129
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       60
193 Load_Cycle_Count        0x0032   055   055   000    Old_age   Always       -       437832
194 Temperature_Celsius     0x0022   120   110   000    Old_age   Always       -       32
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       1
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%      45254        -
# 2  Extended offline    Completed without error       00%      45253        -
# 3  Extended offline    Aborted by host               80%      45176        -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
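For watching a value like Current_Pending_Sector over time, a small script can extract it from `smartctl -A` output. This is a sketch: the embedded heredoc sample stands in for real smartctl output so it is self-contained, and the helper names are invented:

```shell
#!/bin/sh
# Sketch: pull the pending-sector raw value out of smartctl -A output.
# sample_smart stands in for `smartctl -A /dev/sdX`; names are hypothetical.
sample_smart() {
cat <<'EOF'
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       1
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
EOF
}

pending_from() {
    # print the last field (RAW_VALUE) of the pending-sector row
    awk '$2 == "Current_Pending_Sector" {print $NF}'
}

echo "pending sectors: $(sample_smart | pending_from)"
```

Wired to the real `smartctl` in a cron job, a nonzero result could trigger a notification before the sector count grows.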
-
Quote: I set this up for my nephew but he couldn't connect to it. He uses an app called Minecraft PE. Is there a special version of the Minecraft client that works with this server?
This docker is for the full Minecraft for PCs; it won't work with PE, the programs are totally different.
-
Quote: So I've upgraded to 6.1.3. My server is up and the array is online. I'm noticing a few omissions:
- make - the makefile utility isn't installed
- screen - also isn't installed
I'd like to use screen to run preclear from the command line. Is there a reason that I don't need screen anymore? It's running in a live terminal session, but I recognize that it could fail at any time. I tried installing it as an extra, but I'm now getting "screen: error while loading shared libraries: libutempter.so.0: cannot open shared object file: No such file or directory." I'm not sure how to install the .so. Can you help?
http://lime-technology.com/forum/index.php?topic=37541.0
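Errors like the libutempter one above can be diagnosed by asking the loader which of a binary's shared libraries it cannot resolve. A sketch (the `check_libs` helper is invented, and `/bin/sh` merely stands in for the `screen` binary from the error message):

```shell
#!/bin/sh
# Sketch: count unresolved shared libraries for a binary using ldd.
# Helper name is hypothetical; /bin/sh is a stand-in target.
check_libs() {
    bin=${1:-/bin/sh}
    missing=$(ldd "$bin" 2>/dev/null | grep -c 'not found')
    if [ "$missing" -gt 0 ]; then
        echo "$missing unresolved libraries for $bin"
    else
        echo "no unresolved libraries reported for $bin"
    fi
}

check_libs /bin/sh
```

Any library ldd flags as "not found" needs to be installed alongside the program (for unRAID extras, typically as another package on the flash drive).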
-
Quote: My main motivation for wanting a redundant PSU is this notion I have that if a PSU fails while a hard drive is being used (e.g., spinning up for a read/write operation), then that hard drive can be corrupted beyond repair. If this notion is true, then the thought that 24 drives can all become lost beyond repair in a single PSU failure is scary. Is my fear unreasonable or misinformed?
c3 covered the cases of a gentle failure, where the power supply stops supplying voltage like turning off a switch. That probably covers 99.99% of failures. The other, very small, probability is a catastrophic failure where a severe over-voltage surge is sent through the whole machine, in which case, yes, you can fry everything at once. A UPS will put the probability of that happening even lower, but even then the mechanicals of the drive are fine, and you can get replacement circuit boards for the drives for much less than a clean-room recovery fee, typically less than $100 per drive recovered. Bottom line: get a good name-brand single-rail PSU with a healthy margin of capacity and a good UPS, and power supply issues should be rare to non-existent.
-
Quote: Ok, I went this route as the firmware upgrade is my last remaining hope... Installing Unraid 5 was actually easy and it is running now. I also downloaded all the files required, but fail at a very simple task: I cannot copy the files to the flash disk. I just copied them manually (plugged the unraid USB into another computer), but cannot find them after booting into unraid. Also, winscp does not appear to work. This worked in Unraid 6, but somehow I am missing something in Unraid 5. Please note that I did not create an array in Unraid 5, as I hope to be able to do this without one. Any thoughts?
The root of the USB drive should show up on the network under \\tower\flash, or at the console or telnet terminal under /boot.
[Support] binhex - DelugeVPN
in Docker Containers
Posted