About abnersnell

  1. Can someone point me in the right direction on setting up the PHP mail() function to work within SWAG? Is this something I should expect to work, or should I give up and use SMTP connectivity to Gmail, for example, to send email messages from a simple PHP script? Thanks in advance, Abner
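A possible answer to the post above, as a hedged sketch (the package name, config paths, and ini filename are assumptions about the Alpine-based linuxserver.io SWAG image): PHP's mail() hands the message to a local sendmail binary, which the SWAG container does not ship, so mail() fails silently out of the box. Installing a small SMTP relay client such as msmtp and pointing PHP at it is a common workaround, and it effectively gives you the Gmail/SMTP route anyway.

```shell
# Sketch, assuming the Alpine-based SWAG container:
#
#   apk add msmtp                # run inside the container
#   # then put your Gmail SMTP host/credentials in /etc/msmtprc
#
# Point PHP's mail() at msmtp via an ini override. The filename below is an
# assumption -- any file PHP loads from its conf.d directory will do:
cat <<'EOF' > php-mail.ini
sendmail_path = "/usr/bin/msmtp -t"
EOF
cat php-mail.ini   # the single directive mail() will now use
```

With that in place, a plain `mail($to, $subject, $body)` call in a PHP script is relayed through whatever SMTP server msmtp is configured for.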
  2. Same results in both Unstable and Stable. No channels mapped. The HDHomeRun Prime with CableCARD works fine with the MythTV docker and a Kodi client. I read somewhere along the way about a specially patched version of dvbhdhomerun for accessing copy-free encrypted channels. Is that already included in this docker, or do I need to swap that out? Thanks again!
  3. I have an HDHomeRun Prime CableCARD with Comcast. I have installed the 4.0.5 Stable version of this docker and created a custom mux file. 76 muxes are added and 536 services created. EPG is set up with WebGrab+Plus and Schedules Direct. I am trying to map services to channels, and I receive the following "No access" error repeatedly in the debug log:
     service_mapper: checking Comcast/243MHz/{PMT:56}
     service_mapper: waiting for input
     mpegts: 243MHz in Comcast - tuning on HDHomeRun ATSC Tuner #0 (
     tvhdhomerun: tuning to auto:243000000
     subscription: 00E8: "service_mapper" s
  4. I will attach the syslog as soon as the parity check is completed - 9 hours to go. Do you think I will have to go through a normal parity rebuild or will I be okay after this check is completed?
  5. I initiated the parity check after the data disk rebuild was completed. At this point the estimate is 30 hours for the final 0.80TB, while all data drives are spun down. I will continue to let it run. Thanks again!
  6. Thanks for the quick response. I forced the copy option by not assigning anything to the slot of the 1.5TB drive I removed and starting the array. I then stopped the array, assigned the old parity drive to the open slot, and was presented with the copy option. Everything looks great on the 2TB data disk (former parity). The new parity drive looks great, and the parity check was humming along until the 2TB mark.
  7. I performed the following steps:
     1. Performed parity check with zero errors.
     2. Removed 1.5TB data disk and added new 3TB disk.
     3. Assigned new 3TB disk as Parity.
     4. Assigned old parity 2TB in missing slot for 1.5TB drive removed in step 2.
     5. Copy option - copy parity to new 3TB parity drive.
     6. Data rebuild for 2TB drive.
     7. Parity check (correcting) slowed at 2TB mark and reported millions of sync errors. Zero errors before 2TB.
     SMART reports show no errors on any drive. Syslog attached. I did not pre-clear the new parity disk. Any thoughts as to why I now see
  8. I am running several docker apps - SABnzbd, SickBeard, CouchPotato, Plex, PlexWatch and Dropbox. I also have a cache pool using btrfs. When Dropbox is enabled, the cache drives remain active, along with any array drives I have media on, for syncing with Dropbox. The Unraid webGUI becomes very slow to respond, though it does eventually respond. After some time, the cache drives heat up and the case fans become louder than usual to compensate. As soon as I disable the Dropbox docker app, everything goes back to normal. Any advice or thoughts would be greatly appreciated.
  9. I used the default btrfs balance command for RAID1 support.
  10. I have two 500GB drives as a btrfs cache pool. The Main page reports a 1TB cache drive. Shouldn't this be 500GB?
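On the two btrfs posts above, a sketch of the usual RAID1 conversion and of why the reported figure is double the usable space (the /mnt/cache mount point is an assumption about Unraid's cache pool):

```shell
# The usual RAID1 conversion on a two-device pool is a convert balance,
# followed by a check that Data/Metadata now show the raid1 profile:
#
#   btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
#   btrfs filesystem df /mnt/cache
#
# Why 1TB is reported for two 500GB drives: btrfs exposes the raw capacity
# of the pool, but RAID1 keeps two copies of every block, so the usable
# space is half of the raw total.
raw_gb=$((2 * 500))            # two 500GB devices, raw total
usable_gb=$((raw_gb / 2))      # RAID1 stores each block twice
echo "raw=${raw_gb}GB usable=${usable_gb}GB"
```

So the 1TB figure on the Main page is the raw pool size, not a misdetected drive; the pool still only holds about 500GB of data.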
  11. Thanks ddeeds for the write-up and Influencer for your plugin work! Everything looks good on my end except for automating "update_binaries.php" & "update_releases.php". Previous posts mention cron, cron_scripts, screen, etc. My Newznab install does not include cron_scripts, and I don't see the screen command available. Any other suggestions for automating the indexing? Thanks again!
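Since cron_scripts and screen aren't available, plain cron can drive the two scripts directly. A sketch — the Newznab install path and log locations are assumptions for a typical setup; adjust them to yours:

```shell
# Write a crontab that runs the two Newznab update scripts on a schedule;
# install it with `crontab newznab.cron`. Paths below are assumptions.
cat <<'EOF' > newznab.cron
# every 30 minutes: pull new headers from the providers
*/30 * * * * cd /var/www/newznab/misc/update_scripts && php update_binaries.php >> /var/log/newznab_binaries.log 2>&1
# hourly, offset past the binaries run: process headers into releases
45 * * * * cd /var/www/newznab/misc/update_scripts && php update_releases.php >> /var/log/newznab_releases.log 2>&1
EOF
cat newznab.cron
```

Offsetting the releases run past the binaries run keeps the two scripts from stepping on each other, which is roughly what the screen-based loops in earlier posts were doing.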