jumperalex

Members
  • Posts

    2000
  • Joined

  • Last visited

Posts posted by jumperalex

  1. Yeah not sure what is going on. My container's config is set to:

     

    /config /mnt/cache/appdata/plexmediaserver

     

    and then inside plex's webgui is set to:

     

    Transcoder temporary directory

    /config/transcode

     

    I just transcoded to my phone and watched as the segments populated my transcode folder.

     

    Soooo that is how it works for me. Full disclosure I am using neeto's docker but once you have set the /config folder for the container, Plex should respect a transcode folder designated inside of it.
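    For anyone setting this up from the command line instead of the unRAID GUI, the mapping above would look roughly like this as a docker run flag (the image name here is just a placeholder for whichever Plex image you use; check its template for the real name and any extra ports):

```shell
# Host path on the left, container path on the right (host:container).
# The transcode dir set inside Plex's webgui (/config/transcode) then lands
# under /mnt/cache/appdata/plexmediaserver/transcode on the host.
docker run -d --name plex \
  --net=host \
  -v /mnt/cache/appdata/plexmediaserver:/config \
  your-plex-image
```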

  2. The A10-5700 should be able to transcode 1 PLEX stream and still serve all the NAS functionality.

    2 PLEX streams are possible but highly dependent on what content is being transcoded.

     

    I would contend that an A10-5700, with a 3120 futuremark score, should handle two full BD bit-rate 1080p transcodes at the same time. Tests show 1000-1500 "futuremarks" are needed per stream. Also, let's not forget that not all streams need transcoding (cough bittorrents cough), and for those the CPU load is almost nothing. And transcoding THOSE for mobile is less taxing than the aforementioned full BD rips.
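    As a back-of-the-envelope check (using the scores quoted above, which are rough community estimates, not official numbers):

```shell
# Integer estimate of simultaneous full-rate streams:
# total benchmark score / per-stream cost, using the conservative
# 1500 end of the 1000-1500 range quoted above.
echo $(( 3120 / 1500 ))
```

    which works out to two streams even at the conservative estimate.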

     

    So, a realistic assessment of your transcoding needs is important, but the above CPU should have no trouble with dual streams.

     

    That said, I'd go with an FX processor and an inexpensive PCI graphics card (not PCIe, to save the slot for SATA expansion) to handle basic server console/boot duties, unless you need more serious graphics passthru to a VM/docker, for which an APU is not considered a good choice. The FX will let you buy more "CPU" for the same price since you aren't paying for the onboard GPU.

     

    Just some other thoughts to consider from another AMD server builder.

  3. I'm still experiencing lockups when there is a significant amount of disk activity. It was somewhat resolved in beta 21 by reducing the md tunable and I put it back to stock with beta 22.

     

    Next time this happens, please type this command and then post your syslog:

     

    mdcmd dump

     

    Worth a mention in OP?

  4. In all RAID setups there is always the risk of losing yet one more disk while rebuilding; the issue being that "one more disk" pushes you beyond the limit of protection afforded by your array choice.

     

    The nice thing about unRaid is that losing one more disk means you only lose one more disk. In a striped array losing one more disk could mean losing the entire array.

     

    There is of course another way to increase your total protection, and that is to only grow your array to the size / # of disks you are comfortable with before building another server. Hardware costs are not trivial, but given the honestly low system requirements for just a straight storage-array system, it isn't prohibitive.

     

    Better yet, if you are going to do that, use that second server as a no-kidding backup server for the first, since backups are still 100% required for important data regardless of RAID choice.

     

    Which then of course brings up questions about where/how a backup should exist: isolated power sources, isolation for ransomware resistance (if you aren't thinking about it, you should be), a different physical location for natural-disaster protection (house, city, geographic land mass), etc., depending on how important your data is to you.

     

    The point being: you need to really, really, REALLY sit down and consider what your likely failure scenarios are, what steps you're taking to prevent them, whether those steps are truly going to be as effective as you think they are, and of course the cost/benefit analysis.

  5. gundamguy, has LT documented this anywhere in the wiki?

     

    Hint LT: it would be helpful if you were able to provide some basic steps / warnings about dealing with BTRFS and XFS even if that just means links to authoritative sources.

     

    Oh yeah and why isn't there a link to the wiki at the top of every page of this forum?

  6.  

    This would actually solve another goal that I have of getting a backup domain controller, but could the virtual network speed keep up with the tape drive?  I've always been stunned at how slow VMWare is when transferring files across virtual ethernet from the same physical box.

     

    Virtual network speed on a docker should be ... very fast. My memory and understanding is that while it is presented as a Gigabit connection, it operates at near bus speed. "Near" because there is nominal overhead.

  7. It should :) Especially since unRaid is not a backup strategy. So you should be backing up your really important data:

     

    - on another disk in the array

    - on another array in house

    - on another PC in the house

    - on an external drive (ideally stored in another location)

    - on another array in another location

    - on a crashplan-like setup or cloud drive

    - etc ... the options are limited only by your imagination and your tolerance for risk (where risk = magnitude of loss * likelihood of event) ... use multiple for even more peace of mind.

     

    And if you aren't keeping backups, remember that ransomware WILL hit your shares, so you are not just at risk from drive failures!!! A thread on the topic with ideas: https://lime-technology.com/forum/index.php?topic=47961.0

     

  8. If you don't want to be emailing your config file around (considering it has your private keys), you could transfer it using any number of wifi file explorer apps.

     

    I can't vouch for iOS (https://itunes.apple.com/us/app/wifi-explorer/id494803304?mt=12), but for Android I use https://play.google.com/store/apps/details?id=com.dooblou.WiFiFileExplorer&hl=en and am very happy with it.

     

    Once you've imported it into the client app, delete it from your phone of course; but at least there is no chance of it sitting on any server anywhere.

  9. FYI, while not the best security practice, you can add a username and password to your webgui shortcut so that you don't have to enter it every time. As you said, there's no one else in the house to protect from, and chances are if someone has snagged your shortcut they have already owned your computer and the game is already lost. This way you can use a very strong password and not worry so much about someone getting past your router to unRaid's SSH (which itself means you've already lost the game).

     

    There are of course any number of very good password managers that could help you too. I'm a huge fan of KeePass because it is open source and NOT a cloud solution; but it does offer a way to put your passwords on your mobile device or other computers if need be; keeping them in sync without the cloud is of course a problem left to the user.

  10. Some pointers to anyone installing this container (which I think should have been mentioned in the OP):

    [*] As for user passwords - the container is set up by default to use PAM (I have no idea why). Change it to Local (under "Authentication"->"General") and you'll be able to set up the passwords using the UI.

    Just my two (three) cents.

     

    Hey thanks. Sounds like that will fix the password problem when the container is recreated.

     

    Any thought on getting the latest version of OVPN-AS?

  11. So I managed to make the password change without a problem, but then I made a few changes to my container settings and it updated the container. The admin password was then reset to the default, and it didn't remember the new password I had set.

     

    Is there a way to fix this? I don't want to have to change the admin password every time the container receives an update.

     

    Also, @Nem ... did you ever figure this out? I had the same problem when I had just tried to edit/restart the container to grab the most recent openvpn-as version.

     

    anyone?

    My volume mapping is /config ==> /mnt/cache/appdata/vpn/ and literally everything else in the docker setup is unchanged from default:

    Network Type:  Bridge

    Privileged: checked

    Bind Time: checked
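    For reference, those settings translate to roughly the following docker run (the image name and port numbers here are my assumptions based on typical openvpn-as defaults; check the container's template for the real values):

```shell
# Bridge networking + privileged, with /config persisted to the cache drive,
# matching the volume mapping described above.
docker run -d --name openvpn-as \
  --net=bridge --privileged \
  -v /mnt/cache/appdata/vpn/:/config \
  -p 943:943 -p 9443:9443 -p 1194:1194/udp \
  your-openvpn-as-image
```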

  12. What about this:

     

    1) A share is set read only

    - This already exists

     

    2) ALL writes go to the cache, regardless of the file's current presence on the array.

    - There will be some logistics to deconflict the duplicate-named file (maybe just make it a hidden file?).

    - The hope here is that a change to the fuse and/or md driver would allow the system to refuse to even accept the duplicate-named file and push a write-permission error back over SMB to the source client. Worst case, silently reject it, but that could really suck, so it might be better handled in 3.

     

    3) The mover, which is just a script, is modified to only copy over new files (the assumption here being we couldn't reject it in 2 above); then the mover deletes the duplicate from the cache and is done.

    - The problem with this of course is the user has no idea at all that they just lost their legit modified file. So, again assuming we can't deal with it in 2 above:

          a) rename the original with "-COPY_[datetimestamp]" appended to the end and then copy the new file over, or

          b) copy the new file over with "-COPY_[datetimestamp]" appended to the end. This one will mean the user will have to search for a minute for their new file, but the file is there, and they should damn well know this behavior is what they asked for when they turned on the feature.

    - Basically a rudimentary COW protocol
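    Option (a) above could be sketched in the mover's own language (shell); the function name and paths are hypothetical illustrations, not actual mover internals:

```shell
#!/bin/sh
# Sketch of option (a): if the destination already exists on the array,
# rename it with a -COPY_[datetimestamp] suffix before moving the new file in.
move_with_cow() {
  src="$1"    # new file on the cache, e.g. /mnt/cache/share/file.txt
  dest="$2"   # same path on the array, e.g. /mnt/disk1/share/file.txt
  if [ -e "$dest" ]; then
    stamp=$(date +%Y%m%d-%H%M%S)
    mv "$dest" "${dest}-COPY_${stamp}"   # keep the original, renamed
  fi
  mv "$src" "$dest"                      # the new file takes the original name
}
```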

  13. Let's try to stay on topic here. We will address the phone home licensing stuff soon but for now, let's not derail this thread with that discussion. We are actively looking into some of the other issues that have been found here.

     

    Jonp, with all due respect, this IS on-topic. If LT is planning another announcement, then say so and most will hold discussion until that time (and it should be a SHORT amount of time!). But absent a response to the on-topic issue of a new "feature", it will continue to be discussed, appropriately.

    All I meant by my post is that I didn't want pages of phone-home discussion to bury feedback relating to bugs and other issues.  We will address the concerns about phone home licensing soon, but for now, we are trying to stay focused on bugs we need to solve.

     

    I thoroughly hear the concerns you guys are bringing up. We will address this.

     

    And that is fair ... and now you've said it :) We eagerly await.
