Everend
Members
  • Posts: 131
  • Joined
  • Last visited

Everything posted by Everend

  1. Ammar, I'm probably not qualified to properly answer your questions, but since no one else has, I'll share my opinion. Increase your cache pool by adding the second 1TB SSD, let it rebalance, then remove the 240GB drive (or leave it). See the steps explained above for how to do that. I still haven't done the last step of reordering the drives; on my config the Cache line is empty and my two drives are in Cache 2 & Cache 3. As for backing up, I think you could set CrashPlan to back it up automatically. That's part of the reason for having the pool too: to have a mirror of it on the other disk. Yes, just copy the contents to a folder in the array.
  2. In another thread, Bungy talked about replacing a cache pool drive. He posted the following steps; since it's an old thread and no one responded when I posted to it, I'm starting a new one. Are these the right steps? I've completed step 5.
  3. Bungy, I'm about to go through this procedure as my drive is reallocating a sector or two each day (currently up to 7 reallocated sectors). Yesterday I had only one drive as cache; last night I added another drive, so now I have a 2-disk cache pool (and a backup of the data on the array). I'm using my cache drive for recording MythTV, so there's quite a bit to back up (>500GB). Step 5 below is "Wait for the updated pool to rebalance" - how do I know when this is done? Step 7, similar question: does it show delete progress in the terminal window after running the delete command? Did you replace your drive before complete drive failure? thanks Everend
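     One way to answer the rebalance question, assuming the cache pool is btrfs and mounted at /mnt/cache (the usual layout for an unRAID cache pool), is to ask btrfs directly from a terminal; when the balance is finished, the status command reports that no balance is running:
     btrfs balance status /mnt/cache     # shows progress, or "No balance found" when done
     btrfs filesystem show /mnt/cache    # shows how much data sits on each pool member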
  4. Did you figure out the answer to this question? Can you add drives to a cache pool without losing data on the drive? My current situation is a little different but I'm still curious about the answer to your question.
  5. Thanks! As for manually writing zeros and trusting parity, I understand the process but I'm reluctant to follow that procedure. My main reluctance is that it hasn't been added to unRAID as a feature. It seems to be a straightforward process that's been discussed here for years, so I figure there must be a problem with it and that's why it hasn't been implemented, at minimum, as a plugin. Have you removed a drive this way? I think one could follow that process and then run a parity check afterward to verify. It would still exercise all the drives about as much as a rebuild, but in this scenario one is never without protection, right? If one of the 3TB drives fails during or after the zero write, it would still be protected by parity. If a 3TB drive failed during the parity check after trusting parity, the manual procedure might not handle that correctly; but if this were written into a plugin, the plugin could monitor the parity check and determine whether a discrepancy it finds is the fault of a mis-trusted parity (then trust the 3TB drive and correct parity) or a 3TB disk malfunction (then trust parity). So overall, what would you do: put the new 2TB in the cache pool, or move the six-month-old 1TB to the cache pool? 2TB is double what I need in cache, but I'm only using 7.05TB of 10TB in the array, so I don't need the extra TB there either. If you think I should move the 1TB to the cache pool, I could 1) start now by removing it from the array and rebuilding parity, 2) write zeros to the drive, trust parity, then test it, or 3) wait until Tuesday and rebuild the 1TB onto the 2TB disk. Which would you do? thanks Everend
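     For anyone trying to picture the zeroing route, here is a minimal sketch of the idea, not an official unRAID procedure. It assumes the array is started and that the disk to be removed is disk 4, exposed as /dev/md4 so parity is updated while the zeros are written; check the device number on your own system before running anything like this.
     dd if=/dev/zero of=/dev/md4 bs=1M    # zero every sector through the parity-protected md device
     After it completes, the idea is to set a New Config without the zeroed disk, trust parity, and then run a non-correcting parity check to verify.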
  6. I brought up removing the drive since I don't have the 2TB drive yet; I could remove the 1TB drive, rebuild parity, and then add it to the cache pool before the new drive arrives on Tuesday. Is it the case that if another drive (3TB) fails while the 1TB is being rebuilt onto the 2TB, the 3TB drive is still protected by parity? If so, I'm sorry, I don't understand how that works. As I understand it, if another drive (3TB) fails while the 1TB is being replaced by the 2TB, the newly failed drive (3TB) is not protected. Could you point me to where someone has walked through this scenario, since I must be missing something? Maybe the part I'm missing is that the rebuild is really quick, or doesn't involve exercising the other drives in the array (minimizing the chance another drive will fail). I've already moved everything off the 1TB drive, so still having the original drive seems irrelevant. PS: I'm NOT speaking badly of unRAID; I'm a FIRM believer that this is the best product for my home server. I'm very thankful to Frank (my neighbor) for telling me about it, to everyone who works on developing and supporting it, and especially to everyone who contributes to the forums. I am just trying to understand the best option here and accommodate my phobia of losing data.
  7. I have an array with 3TB parity and four drives (3TB, 3TB, 3TB, 1TB), plus a 1.5TB cache drive. The cache drive is showing its age, so I want to add another drive to the cache (creating a pool) so there is redundancy for the drive that's getting old. I've ordered a new 2TB drive (arrives Tuesday). MythTV uses the cache drive for recording TV, currently using about 700GB of that drive. Should I... 1) replace the 1TB drive in the array with the new 2TB drive, then add the 1TB drive to the cache pool, or 2) add the 2TB drive to the cache pool and leave the 1TB drive where it is in the array? I'm concerned there doesn't seem to be a method for moving/replacing the 1TB drive without leaving the array vulnerable, since it is unprotected while rebuilding parity or while rebuilding the replaced drive. I saw a thread from 2008 where this was discussed; I'm surprised a "remove disk from array" feature hasn't been added since then.
  8. Maybe your install image is called something different? SSH into your machine and see what images you have installed:
     docker images
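     For example, to list the installed images and pick out the one you expect (the name here is just an example; substitute whatever you are looking for):
     docker images | grep -i crashplan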
  9. I stumbled upon this report tonight while looking for something else: https://www.backblaze.com/blog/hard-drive-reliability-q3-2015/ According to this chart, they use a lot of these Seagate drives despite a relatively higher failure rate. This data did give me the opportunity to pull some deeper statistics. The first and obvious stat to consider when buying a drive is $/TB. Then compare that to the failure rate and decide whether the savings in $/TB between various options is worth the increased failure rate. The chart also lists an average number of months in service, so I can also price TB over time and find that the HGST 2TB may be a better deal even if it costs more per TB, since it may live about 40% longer.
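     As a rough illustration of that math (the numbers below are made up for the example, not taken from the Backblaze report), cost per TB-month can be compared with a one-liner:
     awk 'BEGIN { printf "Drive A: $%.2f per TB-month\n", 90/(3*24); printf "Drive B: $%.2f per TB-month\n", 80/(2*34) }'
     Here Drive A is a hypothetical $90 3TB drive averaging 24 months in service and Drive B a hypothetical $80 2TB drive averaging 34 months; B costs more per TB up front but works out cheaper per TB-month over its life.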
  10. I've had good service from this drive for several years now. I've had 4 of them start to fail about 3-5 months before the factory warranty expired, and Seagate was quick to send me replacement drives. All the replacement drives are still running, two of them for over 2 years. I'm satisfied with getting 4+ years out of a $90 drive. As for drive reliability, that seems to be chasing the wind. Every person I've spoken with has their own opinion and favorite. There are so many variables that go into drive quality that unless you have the purchasing power (volume) to pick and choose production lots, it's a fool's errand for a non-pro like me to find the perfect drive. On the other hand, once I get a drive it's only prudent to test it before putting it into the array. So, does anyone have any idea what the 'return to normal' notices are about?
  11. Here is the SMART for the one I returned:

smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.1.13-unRAID] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Barracuda 7200.14 (AF)
Device Model:     ST3000DM001-1ER166
Serial Number:    Z501N37C
LU WWN Device Id: 5 000c50 086caf86e
Firmware Version: CC25
User Capacity:    3,000,592,982,016 bytes [3.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2, ACS-3 T13/2161-D revision 3b
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Sun Dec 27 19:30:18 2015 CST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed without error
                                        or no self-test has ever been run.
Total time to complete Offline data collection: (   80) seconds.
Offline data collection capabilities:    (0x73) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new command.
                                        No Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine recommended polling time:    (   1) minutes.
Extended self-test routine recommended polling time: ( 316) minutes.
Conveyance self-test routine recommended polling time: (   2) minutes.
SCT capabilities:              (0x1085) SCT Status supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   114   099   006    Pre-fail Always      -       152076578
  3 Spin_Up_Time            0x0003   100   100   000    Pre-fail Always      -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age  Always      -       1
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail Always      -       0
  7 Seek_Error_Rate         0x000f   100   253   030    Pre-fail Always      -       43841
  9 Power_On_Hours          0x0032   100   100   000    Old_age  Always      -       2
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail Always      -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age  Always      -       1
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age  Always      -       0
184 End-to-End_Error        0x0032   100   100   099    Old_age  Always      -       0
187 Reported_Uncorrect      0x0032   098   098   000    Old_age  Always      -       2
188 Command_Timeout         0x0032   100   100   000    Old_age  Always      -       0 0 0
189 High_Fly_Writes         0x003a   100   100   000    Old_age  Always      -       0
190 Airflow_Temperature_Cel 0x0022   067   067   045    Old_age  Always      -       33 (Min/Max 22/33)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age  Always      -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age  Always      -       1
193 Load_Cycle_Count        0x0032   100   100   000    Old_age  Always      -       6
194 Temperature_Celsius     0x0022   033   040   000    Old_age  Always      -       33 (0 22 0 0 0)
197 Current_Pending_Sector  0x0012   100   100   000    Old_age  Always      -       8
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age  Offline     -       8
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age  Always      -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age  Offline     -       0h+55m+16.762s
241 Total_LBAs_Written      0x0000   100   253   000    Old_age  Offline     -       0
242 Total_LBAs_Read         0x0000   100   253   000    Old_age  Offline     -       880049214

SMART Error Log Version: 1
ATA Error Count: 2
        CR = Command Register [HEX]
        FR = Features Register [HEX]
        SC = Sector Count Register [HEX]
        SN = Sector Number Register [HEX]
        CL = Cylinder Low Register [HEX]
        CH = Cylinder High Register [HEX]
        DH = Device/Head Register [HEX]
        DC = Device Command Register [HEX]
        ER = Error register [HEX]
        ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 2 occurred at disk power-on lifetime: 1 hours (0 days + 1 hours)
  When the command that caused the error occurred, the device was active or idle.
  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455
  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 00 ff ff ff 4f 00      01:57:55.984  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00      01:57:55.984  READ FPDMA QUEUED
  60 00 08 ff ff ff 4f 00      01:57:55.968  READ FPDMA QUEUED
  60 00 08 ff ff ff 4f 00      01:57:55.968  READ FPDMA QUEUED
  60 00 08 ff ff ff 4f 00      01:57:55.968  READ FPDMA QUEUED

Error 1 occurred at disk power-on lifetime: 1 hours (0 days + 1 hours)
  When the command that caused the error occurred, the device was active or idle.
  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455
  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 00 ff ff ff 4f 00      01:57:52.122  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00      01:57:52.117  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00      01:57:52.117  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00      01:57:52.104  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00      01:57:52.104  READ FPDMA QUEUED

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
  12. smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.1.13-unRAID] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Barracuda 7200.14 (AF)
Device Model:     ST3000DM001-1ER166
Serial Number:    Z501N3RT
LU WWN Device Id: 5 000c50 086cac477
Firmware Version: CC25
User Capacity:    3,000,592,982,016 bytes [3.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2, ACS-3 T13/2161-D revision 3b
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Tue Dec 29 18:46:17 2015 CST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed without error
                                        or no self-test has ever been run.
Total time to complete Offline data collection: (   80) seconds.
Offline data collection capabilities:    (0x73) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new command.
                                        No Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine recommended polling time:    (   1) minutes.
Extended self-test routine recommended polling time: ( 311) minutes.
Conveyance self-test routine recommended polling time: (   2) minutes.
SCT capabilities:              (0x1085) SCT Status supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   116   099   006    Pre-fail Always      -       105868552
  3 Spin_Up_Time            0x0003   100   100   000    Pre-fail Always      -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age  Always      -       1
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail Always      -       0
  7 Seek_Error_Rate         0x000f   100   253   030    Pre-fail Always      -       29196
  9 Power_On_Hours          0x0032   100   100   000    Old_age  Always      -       0
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail Always      -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age  Always      -       2
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age  Always      -       0
184 End-to-End_Error        0x0032   100   100   099    Old_age  Always      -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age  Always      -       0
188 Command_Timeout         0x0032   100   100   000    Old_age  Always      -       0 0 0
189 High_Fly_Writes         0x003a   100   100   000    Old_age  Always      -       0
190 Airflow_Temperature_Cel 0x0022   066   066   045    Old_age  Always      -       34 (Min/Max 28/34)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age  Always      -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age  Always      -       1
193 Load_Cycle_Count        0x0032   100   100   000    Old_age  Always      -       6
194 Temperature_Celsius     0x0022   034   040   000    Old_age  Always      -       34 (0 25 0 0 0)
197 Current_Pending_Sector  0x0012   100   100   000    Old_age  Always      -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age  Offline     -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age  Always      -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age  Offline     -       0h+38m+23.722s
241 Total_LBAs_Written      0x0000   100   253   000    Old_age  Offline     -       0
242 Total_LBAs_Read         0x0000   100   253   000    Old_age  Offline     -       591730646

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
  13. I'm trying to add a new drive to my machine. After installing the drive and restarting the array (without this drive), I initiated the pre-clear of the new drive using gfjardim's pre-clear plugin. After receiving several error messages regarding the new drive, I returned it for a replacement. This evening I installed the replacement drive and started the pre-clear again. This new drive is the same Seagate 3TB model, obviously with a new serial number. 9% into the pre-clear I got these four notices in quick succession. I don't know what to make of them, since I didn't receive any error messages or notices that anything was abnormal. I wouldn't expect them to be corrections of the error messages from the other (returned) drive.
  14. The port mapping for MythTV went away with the last update, in October or so. If I change it from "Host" to "Bridge" in the config, then various port mapping options become visible; see the attached image for what that looks like. In past troubleshooting of MythTV issues, someone made it clear that it needs to be Host, not Bridge.
  15. I have two dockers which use RDP to connect to their config pages: MythTV and CrashPlan. For about a month I have been unable to RDP into either of these containers. I haven't fretted about it, since both applications have been working correctly. This evening I'm doing maintenance on the machine (new disk and updating plugins to the latest version) and thought to tackle the RDP unresponsiveness, since I think CrashPlan is misbehaving. After updating to the latest version of unRAID (and plugins) and restarting a couple of times through the process, I prayed the RDP issue would resolve itself. Unfortunately I still can't RDP into either docker image. I don't know enough about RDP to tell whether it's a host, docker, or unRAID issue. Can someone point me in the right direction for diagnosing RDP unresponsiveness? thanks Everend
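     A few hedged starting points for narrowing it down (this assumes the containers are named as in the templates and that the GUIs listen on the usual RDP port 3389; adjust names and ports to match your setup):
     docker ps                                 # confirm both containers are actually running
     docker port CrashPlan-Desktop             # see which host port maps to the container's RDP port
     docker logs --tail 50 CrashPlan-Desktop   # look for xrdp or startup errors in the container log
     nc -zv TOWER 3389                         # from another machine, test whether the port answers at all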
  16. I moved mythlink.pl to my recordings folder. For some reason it fails to pick up all the files in the recordings folder when it runs as a user job (not sure why, but it picks a date and only creates links for recordings after that date). However, when I log into the docker and run it this way, it picks up everything:
     perl /home/mythtv/recordings/mythlink.pl --link /home/mythtv/recordings/pretty --format '%T/%T%-%S%-%y%m%d%H%i'
  17. CrashPlan support responded as follows when I asked how to install the GUI only. Based on this response, how do we disable the engine on the unRAID distribution?
  18. Thanks, welcome back! I've been looking into how to upgrade the GUI container (Crashplan-Desktop). I think I figured parts of this out: http://lime-technology.com/forum/index.php?topic=43271.0 The part I'm currently stuck on is how to modify the CrashPlan install script to install only the GUI, or, if we use the standard install script, how to disable the new backup engine that gets installed within the Crashplan-Desktop container. Once I figure that out, I still need to figure out how the paths CrashPlan asks for in its install script relate to the docker container configuration. thanks
  19. What about installing CrashPlan from the link above, then disabling the backup engine that's installed within the Crashplan-Desktop docker by removing the appropriate services from the rc*.d directories? The instance of the engine within the Crashplan-Desktop container would use a little disk space, but if it's not running then it shouldn't be using any memory or processor, right? As I said above, I'm very new to Linux, so please chime in if you have an idea or comment. thanks
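     A rough sketch of what that could look like inside the GUI container, assuming a Debian-style image with a SysV init script actually named "crashplan" (check the real names first; this is only the shape of the idea):
     docker exec -it CrashPlan-Desktop bash
     # then, inside the container:
     ls /etc/init.d/ /etc/rc*.d/ | grep -i crashplan   # confirm the init script and its rc*.d links exist
     service crashplan stop                            # stop the engine the installer started
     update-rc.d crashplan disable                     # keep it from starting the next time the container starts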
  20. It seems to me that we should be able to upgrade the GUI from within the docker container without waiting for gfjardim to update the docker container template. My Linux and unRAID experience is rather limited, so I'm fumbling around in the dark here. I did the following steps last night and it seemed to work: the GUI container updated, I could RDP into the container, and the GUI upgraded to 4.4.1 and connected to the engine. It all seemed to be working correctly until I figured out that I must have done something wrong, because I started getting warnings that docker.img was growing over 70% full. I think what happened was that I used the standard Linux CrashPlan install, which installed both the GUI and the backup engine. Because I didn't do anything to map the GUI container to the backup engine container, I think the GUI was actually connecting to the new backup engine installed within the GUI container. When I saw the warning about docker.img going over 70%, I ran the uninstall again and then attempted to install again, using different directories. I tried several sets of directories until I realized the install includes a new backup engine being installed inside the GUI container. My next strategy is to look at the install.sh script to see if I can modify it to install only the GUI, or even install the engine but not start it, and then use Leifgg's dropbox file to map the GUI container to the backup engine container. Any advice?
     --- THIS DOESN'T WORK yet, so don't try this ---
     1) Download the new version of Crashplan for Linux: https://www.code42.com/crashplan/download/
     2) Save the .tgz file to the unRAID share /appdata/crashplan/data/
     3) Putty into TOWER as root, navigate to that dir and unpack it. I typed 'tar zxvf CrashPlan_4.4.1_Linux.tgz' - I don't know what the zxvf are for, that is just what the google result said to use to unpack it.
     4) Get into the docker. I typed 'docker exec -it CrashPlan-Desktop bash'
     5) Go to where I unpacked the tgz file: 'cd /data/appdata/crashplan/data/crashplan-install/'
     6) Removed the old version: 'sudo ./uninstall.sh -i /usr/local/crashplan' (following Step 1 from the code42 page linked above)
     7) I did not follow the next step for "Complete Uninstall"
     8) Followed Step 2 to install the new version: "sudo ./install.sh"
     9) Following the install prompts...
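     For reference on step 3, the tar flags are the standard ones: z filters the archive through gzip, x extracts, v lists files as they are unpacked, and f names the archive file to read.
     tar zxvf CrashPlan_4.4.1_Linux.tgz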
  21. I see this thread is linked as the official Crashplan GUI docker support thread so I'm reposting the issue here. Several weeks ago several of us started getting the message "CrashPlan has been disconnected from the backup engine." Someone figured out that Crashplan pushed out an update of the backup engine to 4.4.1. The docker container for the backup engine did upgrade successfully but the push did not also upgrade the Crashplan GUI docker which is still at 4.3.0. So because they are different versions, the GUI doesn't connect to the backup engine. There is a thread specific to this problem but no real resolution posted. http://lime-technology.com/forum/index.php?topic=43271.msg414802#msg414802 Can someone post info on how to upgrade the Crashplan GUI container to 4.4.1? thanks Everend
  22. So setting it up as a system event is easier than I expected. RDP into the MythTV docker; look to the MythTV support thread for info on how to do that. One note: for a time RDP stopped working/connecting, and to fix it I followed bungee91 & trurl's advice: http://lime-technology.com/forum/index.php?topic=43322.msg413871#msg413871 Once you've RDP'd into the docker, start the backend setup. The last section is "8. System Events". Scroll down through the list of system events until you get to "Recording Started Writing" and paste the perl command from the original post. Restart the backend and that's it. I tested it by starting a recording right away and it updated the list of pretty file names. The mythlink.pl documentation recommends specifying a channel ID and start time in the mythlink.pl command so that it creates the link for only that new recording. I didn't do this because I like the idea of regenerating the whole folder of links each time, clearing out any old links. I suppose if I had hundreds of recordings it might take some time to do this each run, but with only a few dozen recordings it regenerates the whole list of symlinks before I can hit refresh in the Windows Explorer window.
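     If you ever do want the per-recording form the mythlink.pl documentation suggests, the "Recording Started Writing" event should be able to hand the recording's identifiers to the script. This assumes MythTV's %CHANID% and %STARTTIME% event substitutions and mythlink.pl's --chanid/--starttime options; verify both against your version before relying on it:
     perl /home/mythtv/recordings/mythlink.pl --link /home/mythtv/recordings/pretty --format '%T/%T%-%S%-%y%m%d%H%i' --chanid %CHANID% --starttime %STARTTIME%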
  23. I'm using Sparkleyballs' MythTV docker template to record TV onto my cache drive. Some of the recordings I would like to keep after watching, so after recording I use his HandBrake docker to re-encode them and transfer them to another disk with the rest of my TV shows, playable through Kodi. MythTV saves recordings with a date code for a name. mythlink.pl is a script that harvests the show and episode information from the MythTV database and creates a symbolic link for each recording. Now, instead of cross-referencing cryptic file names against the list of recordings in MythTV, I can queue up shows for re-encoding in HandBrake by their show-episode file name. I can also see them sorted nicely in an unRAID user share: \\TOWER\mythtv\recordings\pretty
     The first challenge was to get mythlink.pl into the docker. As far as I could tell, it was not included in the docker template. (For someone who actually knows any Linux, please suggest a better way of doing this.) I found the source code for the script on the mythtv.org site (google search), opened a PuTTY terminal into the unRAID server as root, then got into the docker container with this command:
     docker exec -it MythTv bash
     I navigated to the directory the script is supposed to be in, "/usr/share/doc/mythtv-backend/contrib/user_jobs/" (I suspect the script can be located somewhere else, but this is where my searching said it was supposed to be), and created the file there by pasting it into a vi window. There's got to be a better way of doing this.
     Once created, I had to modify the script, because MythTV is inside a docker and the script runs inside, where the file mappings are different. Ultimately I use this command to execute the script:
     perl /usr/share/doc/mythtv-backend/contrib/user_jobs/mythlink.pl --link /home/mythtv/recordings/pretty --format '%T/%T%-%S%-%y%m%d%H%i'
     If you run the script right away, before modifying it, the symbolic links point to 'var/lib/mythtv/recordings/', but outside of the docker that path doesn't work, so none of the links work. I made the following changes to the script, using the vi editor while still inside the docker container bash:
     LINE 29 - added a new variable '$newmap'
     LINE 413 - I commented this line out and made my own version:
         $newmap = $show->{'local_path'};
         $newmap =~ s/var\/lib/mnt\/user/;
     #   symlink $show->{'local_path'}, "$dest/$name"
     # With mythtv running inside docker the paths are not mapped right.
         symlink $newmap, "$dest/$name"
             or die "Can't create symlink $dest/$name: $!\n";
         vprint("$dest/$name");
     }
     Next step is to run this script as a System Event so new links are created as shows are recorded.
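     On the "there's got to be a better way" point: one simpler-sounding option is to copy the script into the container from the host with docker cp instead of pasting it into vi. This assumes a Docker version new enough to copy into a running container, and /boot/mythlink.pl is purely an example source path:
     docker cp /boot/mythlink.pl MythTv:/usr/share/doc/mythtv-backend/contrib/user_jobs/mythlink.pl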