jumperalex

Everything posted by jumperalex

  1. Yeah, not sure what is going on. My container's config is set to /config ==> /mnt/cache/appdata/plexmediaserver, and then inside Plex's webGUI the transcoder temporary directory is set to /config/transcode. I just transcoded to my phone and watched as the segments populated my transcode folder, so that is how it works for me (rough sketch of the mapping below). Full disclosure: I am using neeto's docker, but once you have set the /config folder for the container, Plex should respect a transcode folder designated inside of it.
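     If it helps, the idea in command form looks roughly like this; the image and container names are just placeholders, so substitute whatever Plex container you actually run:

         # rough sketch only - swap in your own image/container names
         docker run -d --name plex \
           -v /mnt/cache/appdata/plexmediaserver:/config \
           --net=host \
           your-plex-image
         # then in Plex's webGUI set "Transcoder temporary directory" to /config/transcode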
  2. Even more obvious, just to show how ridiculous it is to be arguing over this ...
  3. [gulp], that is a ... well ... big feature add in the RC phase, isn't it? Enough time to test for corner cases, or is it seemingly straightforward? My ignorance makes me ask the easy questions, so please be gentle.
  4. I would contend that an A10-5700, with a 3120 Futuremark score, should handle two full BD bit-rate 1080p transcodes at the same time; tests show 1000-1500 "futuremarks" are needed per stream. Also, let's not forget that not all streams need transcoding (cough bittorrents cough), and for those the CPU load is almost nothing; transcoding THOSE for mobile is also less taxing than the aforementioned full BD rips. So a realistic assessment of your transcoding needs is important, but the above CPU should have no trouble with dual streams. That said, I'd go with an FX processor and an inexpensive PCI graphics card (not PCIe, to save that slot for SATA expansion) to handle basic server console/boot duties, unless you need more serious graphics passthru to a VM/docker, for which an APU is not considered a good choice. The FX will let you buy more "CPU" for the same price since you aren't paying for the onboard APU. Just some other thoughts to consider from another AMD server builder.
  5. I had the same problem. The fix, sadly, is to blow away both the container and your appdata for openvpn and start completely over, unless you are willing to manually redo the permissions. The first method surely takes less time (rough steps below).
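     Roughly what "blow it away and start over" looks like; the container name and appdata path here are just my setup, so adjust to yours:

         # assumes the container is named openvpn-as and appdata lives at /mnt/cache/appdata/openvpn-as
         docker stop openvpn-as && docker rm openvpn-as
         rm -rf /mnt/cache/appdata/openvpn-as    # this deletes your existing config and keys - be sure!
         # then re-add the container from your template and set it up from scratch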
  6. Wow so quiet. Is no one using this beta or has it finally become bug-free and stable enough that there is just so little to say? Should the rest of us start doing a happy dance for an -RC?
  7. Next time this happens, please type this command and then post your syslog: mdcmd dump. Worth a mention in the OP?
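     For anyone not familiar with it, the sequence is just something like this (copying to flash is only one convenient way to grab the log; do it however you like):

         mdcmd dump                             # dumps md driver state into the syslog
         cp /var/log/syslog /boot/syslog.txt    # grab a copy, then attach it to your post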
  8. In all RAID setups there is always the risk of losing yet one more disk while rebuilding; the issue being that "one more disk" pushes you beyond the limit of protection afforded by your array choice. The nice thing about unRaid is that losing one more disk means you only lose one more disk. In a striped array, losing one more disk could mean losing the entire array.

     There is of course another way to increase your total protection, and that is to only grow your array to the size / number of disks you are comfortable with before building another server. Hardware costs are not trivial, but given the honestly low system requirements for a straight storage array, it isn't prohibitive. Even better, if you are going to do that, is to use that second server as a no-kidding backup server to the first, since backups are 100% still required for important data regardless of RAID choice.

     Which then of course brings up questions about where/how a backup should exist: isolated power sources, isolation for ransomware resistance (if you aren't thinking about it, you should be), a different physical location for natural disaster protection (house, city, geographic land mass), depending on how important your data is to you, etc. The point being, you need to really really REALLY sit down and consider what your likely failure scenarios are, what steps you're taking to prevent them, whether those steps are truly going to be as effective as you think they are, and of course the cost/benefit analysis.
  9. Great googly moogly. Why did we leave reiserfs? I know, but still :(
  10. gundamguy, has LT documented this anywhere in the wiki? Hint LT: it would be helpful if you were able to provide some basic steps / warnings about dealing with BTRFS and XFS even if that just means links to authoritative sources. Oh yeah and why isn't there a link to the wiki at the top of every page of this forum?
  11. Virtual network speed on a docker should be ... very fast. My memory and understanding is that while it is presented as a Gigabit connection, it operates at near bus speed. "Near" because there is nominal overhead. Easy enough to measure if you want to check (see below).
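      If you want to verify it rather than trust my memory, something like this works, assuming you can get iperf3 onto the host and have any iperf3 container image handy; the image name and IP below are just examples:

          # quick-and-dirty throughput test between a container and the host
          iperf3 -s &                                             # server on the unRAID host
          docker run --rm networkstatic/iperf3 -c 192.168.1.100   # client in a container; use your host's IP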
  12. It's been mentioned but not explained how ... this is a nice article I just found on using rsync for snapshots: http://www.mikerubel.org/computers/rsync_snapshots/ (tiny sketch of the idea below). I do smell a great opportunity for a plug-in if anyone is so skilled and inclined.
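      The core trick from that article, boiled down; the paths and retention count here are purely illustrative, so point them at your own share and backup disk (errors about missing snapshot dirs on the very first run are harmless):

          # rotate hard-linked snapshots, then rsync changes into the newest one
          rm -rf /mnt/disk3/snaps/snapshot.3
          mv /mnt/disk3/snaps/snapshot.2 /mnt/disk3/snaps/snapshot.3
          mv /mnt/disk3/snaps/snapshot.1 /mnt/disk3/snaps/snapshot.2
          cp -al /mnt/disk3/snaps/snapshot.0 /mnt/disk3/snaps/snapshot.1   # hard links, so almost no extra space
          rsync -a --delete /mnt/user/Important/ /mnt/disk3/snaps/snapshot.0/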
  13. It should. Especially since unRaid is not a backup strategy, you should be backing up your really important data:
      - on another disk in the array
      - on another array in the house
      - on another PC in the house
      - on an external drive (ideally stored in another location)
      - on another array in another location
      - on a crashplan-like setup or cloud drive
      - etc ...
      The options are limited only by your imagination and your tolerance for risk (where risk = magnitude of loss * likelihood of event) ... use multiple for even more peace of mind. And if you aren't keeping backups, remember that ransomware WILL hit your shares, so you are not just at risk from drive failures!!! A thread on the topic with ideas: https://lime-technology.com/forum/index.php?topic=47961.0
  14. If you don't want to be emailing your config file around (considering it has your private keys), you could transfer it using any number of wifi file explorer apps. I can't vouch for iOS (https://itunes.apple.com/us/app/wifi-explorer/id494803304?mt=12), but for Android I use https://play.google.com/store/apps/details?id=com.dooblou.WiFiFileExplorer&hl=en and am very happy with it. Once you've imported it into the client app, delete it from your phone of course; but at least there is no chance of it sitting on any server anywhere.
  15. FYI, while not the best security practice, you can add a username and password to your webGUI shortcut so that you don't have to enter it every time (example below). As you said, there is no one else in the house to protect from, and chances are if someone has snagged your shortcut they have owned your computer and the game is already lost. This way you can use a very strong password and not worry so much about someone getting past your router to unRaid's SSH (which itself means you've already lost the game). There are of course any number of very good password managers that could help you too. I'm a huge fan of KeePass because it is open source and NOT a cloud solution; it does offer a way to put your passwords on your mobile device or other computers if need be, though keeping them in sync without the cloud is of course a problem left for the user.
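      What I mean is just the old credentials-in-the-URL trick; hostname, username, and password here are placeholders for your own:

          http://root:YourVeryStrongPassword@tower/Main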
  16. Also keep in mind that unlike most other RAID arrangements, you don't lose your entire array with a multi-drive failure in unraid. A 10 disk array using single parity isn't going to lose all 10 disks once you've exceeded unraid's single-failure tolerance. That still might be too much of a risk for you, but it does change the equation a bit.
  17. So OpenVPN-AS is now up to version 2.1.0 ... and I still can't get the docker to update from 2.0.24. Is anyone else able to make this happen? Do we need to bug the maintainers (and by bug I mean nicely ask them, since they are doing this all for free)?
  18. Also, @Nem ... did you ever figure this out? I had the same problem when I had just tried to edit/restart the container to grab the most recent openvpn-as version. Anyone? My volume mapping is /config ==> /mnt/cache/appdata/vpn/ and literally everything else in the docker setup is unchanged from default: Network Type: Bridge, Privileged: checked, Bind Time: checked.
  19. Anyone know how to get this docker to update the version of OpenVPN-AS? I'm on 2.0.24, which is the original I got when I first downloaded the docker. I've edited and restarted the container several times (like plex and rutorrent), but it isn't grabbing the latest (2.0.26).
  20. What about this:
      1) A share is set read only - this already exists.
      2) ALL writes go to the cache, regardless of the file's current presence on the array.
         - There will be some logistics to deconflict the duplicate-named file (maybe just make it a hidden file?).
         - The hope here is that a change to the fuse and/or md driver would allow the system to refuse to even accept the duplicate-named file and push a write-permission error back over SMB to the source client. Worst case, silently reject it, but that could really suck, so it might be better handled in 3.
      3) The mover, which is just a script, is modified to only copy over new files (the assumption here being we couldn't reject the write in 2 above); the mover then deletes the duplicate from the cache and is done.
         - The problem with this of course is the user has no idea at all they just lost their legit modified file. So, again assuming we can't deal with it in 2 above, either:
         a) rename the original with "-COPY_[datetimestamp]" appended to the end and then copy the new file over, or
         b) copy the new file over with "-COPY_[datetimestamp]" appended to the end. This one will mean the user will have to search for a minute for their new file, but the file is there and they should damn well know this behavior is what they asked for when they turned on the feature.
         - Basically a rudimentary COW protocol (rough sketch of option b below).
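      To make option b concrete, here is a very rough sketch of what the mover-side logic could look like; the share name, file name, and paths are invented for illustration, and none of this is how the real mover works today:

          # option b sketch: the original on the array stays put; the new write from the cache
          # gets moved over with a -COPY_[datetimestamp] suffix so nothing is silently lost
          src="/mnt/cache/SomeShare/report.doc"
          dest_dir="/mnt/user0/SomeShare"
          if [ -e "$dest_dir/report.doc" ]; then
              stamp=$(date +%Y%m%d-%H%M%S)
              mv "$src" "$dest_dir/report-COPY_${stamp}.doc"
          else
              mv "$src" "$dest_dir/report.doc"
          fi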
  21. http://www.newegg.com/Product/ComboDealDetails.aspx?ItemList=Combo.2857196&utm_medium=Email&utm_source=IGNEFL042616B&cm_mmc=EMC-IGNEFL042616B-_-EMC-042616-Index-_-Combo-_-Combo2857196-S0H Seems like a good deal especially for anyone looking to finally upgrade their Parity and add a shiny new 4TB to their array.
  22. Jonp, with all due respect, this IS on-topic. If LT is planning another announcement, then say so and most will hold discussion until that time (and it should be a SHORT amount of time!). But absent a response to the on-topic issue of a new "feature", it will continue to be discussed appropriately. All I meant by my post is that I didn't want pages of phone-home discussion to bury feedback relating to bugs and other issues.
      "We will address the concerns about phone home licensing soon, but for now, we are trying to stay focused on bugs we need to solve. I thoroughly hear the concerns you guys are bringing up. We will address this."
      And that is fair ... and now you've said it. We eagerly await.
  23. Jonp, with all due respect, this IS on-topic. If LT is planning another announcement, then say so and most will hold discussion until that time (and it should be a SHORT amount of time!). But absent a response to the on-topic issue of a new "feature", it will continue to be discussed appropriately.