
PCRx

Members · 162 posts

Posts posted by PCRx

  1. Version 6.8.3

     

Eleven years of running unRaid and I've had my first array drive failure. The only replacement I have is a 12tb, with current parity being 10tb. A parity swap was in order.

     

    Precleared the 12tb with no errors.

    Removed failed 8tb drive

    Assigned 12tb to parity slot

    Assigned 10tb old parity drive to be new data drive

     

    Started array and let it copy the 10tb parity drive to the new 12tb parity drive

After copying finished, the server was in a stopped state and the Start button said it would bring the array online and rebuild the data drive.

At this point I shut down and planned to deal with it later.

     

Upon rebooting, my old 10tb parity drive was back in the parity position. The 12tb was not assigned.

    I reassigned the 12tb to parity slot which then got marked as wrong

I reassigned the 10tb old parity back to the data slot; the wrong-disk message went away and both icons turned blue.

Now the Start button says it will begin copying the 10tb to the 12tb, like it's starting the parity swap again. It's already done that; I don't want to do it again.

    I just want to trust the new 12TB parity drive and rebuild the failed drive to the 10tb

     

    At this point I don't want to guess around with things or risk losing my failed drives data. If anyone has any ideas on how to safely proceed please let me know. Thanks

Picked up 3 yesterday at BB. They are White drives, WD100EMAZ, 256MB. As stated above, it's accepted that these are "updated" Red drives. Supposedly shuckers were reselling the Reds on eBay and undercutting WD's Red drive market, so WD started putting the White labels on the new external drives.

    I've gotten Whites and Reds from the 8TB externals in the past. No noticeable difference.

  3. 18 hours ago, t001z said:

    Check step 6 in my instructions above. I may not have described it very well but you need to boot your unraid server into the GUI mode.

You were correct, I did misinterpret that. However, I'm not sure doing that would even work. When the docker starts, nothing appears in the log file. I'd expect to see some progress listed in the log file during startup despite not being able to access it remotely. Unfortunately, for numerous reasons, it's not a simple procedure for me to start the server in GUI mode to find out. I'll have to give up on it for now and maybe revisit it later. Thanks.

  4. On 10/31/2018 at 1:52 AM, t001z said:

    It sounds like the area where you are getting stuck is on remote administration (step 6 below).  Here is what I did to get that opened from the beginning:

    I'm having the same issue as joeshmoe1.

    Installed Olbat's container. It started but there is no WebUI. The log is blank as well.

    Entering the Console shows the files are indeed there but it seems to have not fully started the server.

    The appdata/cups folder is also empty.

Tried with gfjardim's settings and a fresh, clean install using your settings. It just doesn't want to start up for some reason.

  5. Just shucked mine an hour ago. I watched the video for reference but followed the PDF more in practice.

     

    I used four plastic store gift cards. Didn't do any sliding back and forth of the cards as was suggested. I simply inserted the corner of the card where the internal clip should be located according to the PDF.  You can feel it when you're in the right spot.

     

I then pushed the card's corner in straight and with some force until there was a noticeable give, which released the clip. I repeated this with the other three cards. After doing the first one, the rest were done in about 5 secs. It was really easy.

     

Using a small plastic screwdriver I pried the top and bottom back to get the gap started, then slid it apart by hand.

     

    The cards and the enclosure were not damaged in any way.

  6. 5 hours ago, pinion said:

    Has anyone gotten the DVR part to work using a HDHR Connect? I can watch Live TV with the latest version but when I try to DVR something it plays fine for 15 seconds to a minute and then it's just all messed up. I figure since I can watch Live TV fine but can't DVR the same channel without it messing up that it's an issue somewhere with Plex or maybe how unraid is setup... I don't know. But I'd love to get DVR working!

     

    I've not had any issues with it.

     

  7. 2 hours ago, grandprix said:

    The PMS version which has Live TV enabled is Beta, or so is my understanding anyway.  How are you guys enabling Beta from within the linuxserver.io docker?  I looked at the Github FAQ and it doesn't list beta as a valid version choice.  Or is this just conversation regarding the new Live TV function in a general sense of it all?

     

I just updated from within unRaid like any normal update. It's set to get the latest. Now mine's at Version 1.7.2.3878.

The PMS settings menu for me says 'DVR (Beta)'. Their video online just shows it as 'DVR'. I did run the beta version last fall, so I figured it might be a leftover config setting causing that.

     

I went ahead and set up the HDHR Connect, which went smoothly. After setup there was no outward sign of anything to do with Live TV in Plex Web. When using my iPhone, it did play Live TV from the Program Guide menu.

     

I have a question regarding the multiple IPs that are unavoidable with Docker (the host IP, in the 10.0.1.0 range, and the docker0 interface, in the 172.16.1.0 range).

     

How can I force Plex to only work over a single interface? My TV's Plex app won't work otherwise. I'd like to specify the 10.0.1.0 range to be used. I disabled docker0 and it worked great, but then NONE of my other docker apps work.

     

Your Plex client app, once signed in, connects to Plex.tv to find out where your Plex Media Server is located. It then connects to that IP address, which is typically your router's WAN IP.

Make sure your router's port forwarding is allowing the Plex connections through. On my router I've set forwarding on port 32400 to the IP of my unRaid server, which is running the Plex Media Server docker.

  9. There shouldn't be any reason to mount the docker.img file. Let unRaid handle writing to that file, just set it and forget it.

     

Did you create a mount point on the hard drive to store the files? If not, then I'd guess that when you mapped the volume to mnt/appdata/minecraftos the system created a directory in RAM instead. When the power went out, you would then lose whatever was in RAM.
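One quick way to check whether a path is actually backed by a disk rather than RAM is to look at its filesystem type. A minimal sketch (the helper name is mine, not an unRaid tool; assumes GNU coreutils `df`):

```shell
#!/bin/sh
# Print the filesystem type backing a path. A result of "tmpfs" or
# "rootfs" means the directory lives in RAM and its contents vanish
# on a reboot or power loss.
backing_fstype() {
  df --output=fstype "$1" | tail -n 1
}

backing_fstype /   # prints e.g. ext4, xfs, btrfs, overlay...
```

If this prints tmpfs or rootfs for your mapping target, the files were never written to disk in the first place.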

     

  10. No, it will be in the edit for the Plex Docker container in the Docker tab

     

He's got it set as Host in the previous picture.

     

I'd completely remove Plex as well as its container. Then from a command line run:

     

    docker rmi $(docker images -q --filter "dangling=true")

     

    Then reinstall the container.

Wait a bit for the container to load. You can watch its progress in the log by clicking the piece-of-paper icon to the far right of the Plex container.

     

  11. It looks like your Host paths are not correct.

     

    SAB is putting the finished download in "/mnt/cache/downloads/complete/" but Sonarr is looking for it in "/mnt/cache/appdata/Download"

Change the Sonarr path to the same one you used in SAB so they both share the same mapped volume location.

     

I don't use SAB myself; however, I don't think you'll need that "/mnt" volume mapped for any reason.
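To illustrate the idea, here is a hedged sketch of what consistent mappings might look like. The host and container paths below are examples only, not the poster's exact setup:

```shell
# Example only: both containers map the SAME host folder to the SAME
# container-side path, so a completed-download path reported by SAB
# resolves identically inside Sonarr.
#
#   SABnzbd:  /mnt/cache/downloads  ->  /downloads   (saves to  /downloads/complete)
#   Sonarr:   /mnt/cache/downloads  ->  /downloads   (watches   /downloads/complete)
```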

Just restarted PMS to update to the latest version. Watching the log file as it started, I saw this AccessDenied error. Is it safe to ignore?

     

    Starting Plex Media Server.

    Starting dbus-daemon

    Starting Avahi daemon

    6 3000 /config/Library/Application Support

    unlimited

    Dec 24 08:53:16 unRAID syslog-ng[123]: syslog-ng starting up; version='3.5.3'

    Dec 24 08:53:16 unRAID dbus[125]: [system] org.freedesktop.DBus.Error.AccessDenied: Failed to set fd limit to 65536: Operation not permitted

    Starting Avahi daemon
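For context, that dbus line just means the daemon could not raise its open-file limit (RLIMIT_NOFILE) to 65536, likely because the container isn't allowed to change the hard limit. The limits a shell inside the container actually has can be checked directly; a minimal sketch:

```shell
#!/bin/sh
# Show the soft and hard open-file limits for the current shell.
# dbus was denied when trying to push the limit to 65536 from here.
ulimit -Sn   # soft limit (what processes get by default)
ulimit -Hn   # hard limit (ceiling; raising it needs privilege)
```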

If cp is referencing /mnt/user then it will spin up drives to see whether it should copy directly to the array or to the cache. Installing the cache_dirs plugin will help alleviate that.

     

    I grew tired of this and several other random issues so I nuked my entire install and rebuilt everything from scratch on a new flash drive.

Just finished doing a download test and it works as expected now. The configuration was exactly the same, so it must have been an issue with my old setup.

     

    Thanks for your efforts.

  14. Check that the cache drive is not in the same spin up group as the other drives

     

    cache is set to never spin down. All other drives spindown after 15 mins.

    Under Disk Settings > Enable spinup groups is set to No and was never enabled at any time.

     

    Using any post-processing scripts or anything in Sonarr or any other container?

     

Nothing. NZBGet downloads the file and unpacks it to the cache drive; Sonarr or CP will find it there, rename it, and copy it to its final location. The final location is also on the cache drive, just a different directory.

     

    I'm in the process of shutting down my entire array and rebooting. Hopefully that will clean out any ghosts in the machine. I'll let you know.

     

  15. I've just started using NZBGet and noticed that when a download completes my array spins up.

    Normally I have all processes with unRaid take place on my cache drive and only spin up the array to manually archive data.

     

    I've checked my volume mappings and there's nothing pointed at the array proper, just the cache drive and a SSD mounted outside the array.

     

    Array is 5 drives formatted with reiserfs and 1 drive formatted XFS.

    All 5 reiserfs drives get spun up after every download. The 6th (XFS) never does. unRaid reports some read action on the 5 drives, never any writing. NZBGet shouldn't even know the array exists let alone spin it up and read from it.

     

    Anyone else notice this behavior? I rarely spin up my array unless necessary so this is very annoying. I used to run SABnzbd on a VM and this never happened until switching to the NZBget docker.

     

    I've not noticed this behaviour.  Post a screenshot of your NZBGet settings, sometimes a second pair of eyes can help...

     

The way it's set up is that NZBGet downloads to an externally mounted SSD (/scratch). The files are then unpacked from the /scratch/incomplete folder to my cache drive, where Sonarr or CP will process them. None of these dockers are mapped to the actual array in any way, only to the /scratch and /cache drives. It's a mystery why my array spins up, and oddly, not the XFS drive.

     

     

Attached screenshots: unraid2.JPG, unraid3.JPG, unraid1.JPG (NZBGet settings)

  16. I've just started using NZBGet and noticed that when a download completes my array spins up.

    Normally I have all processes with unRaid take place on my cache drive and only spin up the array to manually archive data.

     

    I've checked my volume mappings and there's nothing pointed at the array proper, just the cache drive and a SSD mounted outside the array.

     

    Array is 5 drives formatted with reiserfs and 1 drive formatted XFS.

    All 5 reiserfs drives get spun up after every download. The 6th (XFS) never does. unRaid reports some read action on the 5 drives, never any writing. NZBGet shouldn't even know the array exists let alone spin it up and read from it.

     

    Anyone else notice this behavior? I rarely spin up my array unless necessary so this is very annoying. I used to run SABnzbd on a VM and this never happened until switching to the NZBget docker.

I set this up for my nephew but he couldn't connect to it. He uses an app called Minecraft PE.

    Is there a special version of minecraft client that works with this server?

This docker is for the full Minecraft for PCs; it won't work with PE. The programs are totally different.

     

    Thanks for the info!

In addition to my cache drive, which Plex uses to store its data, I have a second drive mounted outside the array that I'd like to store a few albums on so Plex can access them.

     

I've assigned a volume map in the GUI and it creates the proper folders as expected. However, Plex can't access the folders.

     

    Inside the docker I can see the folder and it appears empty. On the physical drive it shows the folder with the files in it.

     

It's possibly a permissions issue. When examining a volume-mapped folder on my cache drive (within the Plex docker), the folders are owned by abc:users. The folders mapped on my second drive are all root:root. This is after mapping the folders through the GUI; they didn't exist on the drive beforehand.

     

    On both the physical cache drive and secondary drive the owner is nobody:users.

     

    Any ideas on why it creates proper mappings on my cache drive but not a secondary drive?
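If it is the ownership mismatch described above, resetting the mapped folder to unRaid's usual nobody:users ownership with group read/write is a common fix. A hedged sketch (the helper function and the mount point are mine, not an unRaid tool):

```shell
#!/bin/sh
# Reset ownership and permissions on a share so the container's "abc"
# user (which is in group "users") can read and write it.
fix_share_perms() {
  # chown needs root; skip quietly if it fails so chmod still runs.
  chown -R nobody:users "$1" 2>/dev/null || true
  # u/g get read+write; X adds execute only on directories.
  chmod -R u+rwX,g+rwX "$1"
}

# Hypothetical mount point for the secondary drive:
# fix_share_perms /mnt/disks/albums
```

Run from the unRaid host (not inside the container) so the change applies to the real files the volume maps in.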
