Poprin

Everything posted by Poprin

  1. I can help here, I have an F4-223. I found this tidbit of information on Reddit: to enable Intel Quick Sync on TerraMaster, open a terminal and paste the following command: echo "options i915 enable_guc=3" > /boot/config/modprobe.d/i915.conf Exit the terminal and then reboot the server. Do this, then add the usual hardware transcoding config line to the Plex docker and it should work. It did for me.
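     A minimal sketch of the full sequence, assuming the standard /dev/dri device path and that you pass it to the Plex container via the template's Extra Parameters (both assumptions on my part):

        # Persist the i915 GuC option so it survives a reboot (path from the post above)
        echo "options i915 enable_guc=3" > /boot/config/modprobe.d/i915.conf
        # Reboot, then confirm the render device exists
        ls /dev/dri
        # Finally add the device mapping to the Plex docker template, e.g.
        #   Extra Parameters: --device=/dev/dri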
  2. I'm still testing this, but to answer the original question: yes, there is a simple way of doing this leveraging the Synology software suite. I googled this extensively and came across this post, and I'm really surprised there is no info about this on the internet, because I imagine there must be quite a few people out there with both an Unraid box and a Synology. The application you need on your Synology is Active Backup for Business. Install it; you do need to register with your Synology account, because although it is free I believe you have to activate it against your hardware for it to work. From within the software the backup option you want is 'file server' (not 'server', specifically 'file server', as this essentially backs up an SMB target). To set it up you link an SMB server (Unraid): either create a user account on Unraid or use an existing one, use your hostname or IP as the server name, and enter the credentials for that SMB account. This sets up Unraid as an SMB file server source. You can then go into tasks and create a backup task (mirror, incremental, or multi-version). I have personally used mirror, because my Synology supports BTRFS snapshots and I have configured it to take a snapshot of the backup location on a weekly basis, so I can roll back any issues through Snapshot Manager rather than Active Backup for Business. I've only been using this for about a week and have been experimenting, as I found no specific info online, but it appears to be working an absolute treat!
  3. I hope you have managed to get this resolved. I have recently migrated my Unraid install from an ageing AMD FX machine into a TerraMaster F4-423. I had some teething issues, but now that I'm getting used to new features I wasn't using before, and to some of the limitations of the hardware, it is running absolutely great with the Plex docker (plus a few other low-resource ones) and a Win10 VM on cores 3 & 4. Even Plex 4K encoding with Quick Sync (which I was unable to do before) works fantastically. Maybe you had a faulty PCIe riser? I believe all the drives in the hot-swap bays are connected to the PCIe connector on the board. The separate SATA controllers are present as hardware, but I didn't see the ports on the board (I didn't remove the board from the chassis, so they could be hidden).
  4. This should be correct; this is all my script does and it works. Once PlexTraktSync is configured correctly and has done its first run, every time the docker is triggered it will run a sync and then turn itself off. A minimal sketch of that kind of trigger script is below.
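     For illustration only, assuming the container is named PlexTraktSync and the User Scripts plugin is used to schedule it (both assumptions):

        #!/bin/bash
        # Start the PlexTraktSync container; it performs one sync and then exits on its own
        docker start PlexTraktSync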
  5. Thanks @Rex099, your comment here is the only thing that got me on the right path to sorting this after some extensive googling! Essentially, deleting the appdata folder got the docker working again. I then spent almost 2 hours getting it to work again because it's been ages since I did it the first time! With the new docker as of today, the command to run when the docker is working again is "plextraktsync sync". This will trigger the setup process (roughly as sketched below). Top tip... I was stuck on this for a solid half hour... use your email as the login username for Plex. It gives you an undecipherable error if you use your username! Then follow the instructions on the PlexTraktSync GitHub page. You will need to set up again from scratch. But I finally got this working again!
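     Roughly what that looks like from the Unraid terminal, assuming the container name (an assumption on my part) is PlexTraktSync:

        # Run the sync inside the container; on a fresh appdata this walks you through the Plex and Trakt login
        docker exec -it PlexTraktSync plextraktsync sync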
  6. I'm a long-time user of unRAID. Over the past 2 years, due to the efforts of both limetech and a host of community developers, my system runs with so little intervention from me that I visit these forums very infrequently. I do occasionally log on to see what's going on, though, and it's quite upsetting to see this thread and the one preceding it. It would be fair to say that I am likely not alone in that my use of unRAID is completely dependent on community-developed dockers that are pivotal to my use of the operating system. It would be a devastating blow if the relationships between the key community developers and limetech broke down and support for these community applications stopped. The increasing ease of use and quality of streaming services is already the death knell for home-cooked media systems for all but die-hard fans; a breakdown of the community would definitely be the final nail in the coffin.
  7. You don't necessarily need server hardware to increase your storage. When you say 7 users running 1080p streams, what exactly do you mean? Transcoding in Plex? Those Xeon E5-2620s are getting on a bit now. They're Sandy Bridge era; they can cut it if you need the cores for low-power processing, but if you need some raw power they will leave you wanting.
  8. I was on the brink of updating once, just couldn't do it. 5 minutes down time and 4 mouse clicks is just taking the piss.
  9. @trurl Sorry, not much detail in my swift answer. I should have said: if you add the second drive to the same share and the allocation method is set to 'high-water', then with one drive empty and the other only 50% full it will start filling the empty drive. I know unRAID doesn't specifically spread the load between drives, but it will fill them relatively evenly if you are using all your drives across a share and have it set to high-water.
  10. You can do it this way. There is an element of risk involved, but if you don't currently have the data backed up or parity protected then you are already at risk of data loss, so it is more than likely an acceptable level of risk at this point. I would suggest not completely filling your 8TB drive on Unraid; maybe move some data to your other drive first, so that when you add the other drive to the array unRAID will balance the load across the drives for you. The answer is also yes to all your other questions: SMB/NFS/AFP/FTP are all supported by unRAID, with the above-mentioned setup you will be protected from a single drive failure, and Plex can be run in a docker container on unRAID itself.
  11. @SiNtEnEl Yes, you are correct, in fairness it is mostly cached rather than utilised. I installed the cAdvisor plugin to get a closer look at what was going on. In conclusion, I think my utilisation of the server has increased, and I'm popping another 4GB in today to see if it gives me enough headroom. Failing that, I still have room for another 4GB... failing that, it's new server time!! It is getting a bit long in the tooth.
  12. @french_guy I use the linuxserver docker and it has been pretty reliable for me. I also have an LG 4K TV; if you are running LG webOS 3.5 or above, the Plex software is now well featured and I can direct play everything from my server, so you should not need to transcode on your Athlon, meaning you should be able to use Plex OK with your current hardware.
  13. @tucansam I have just had a little trawl of the support forum to see if anyone else was complaining of RAM usage. My lowly little server only has 8GB of ECC RAM, but this has always been ample until about the last month, and I now seem to be constantly running at 90%+ capacity. I've started limiting the amount of RAM my dockers use (a sketch of how below), but the two things I don't limit are unRAID itself (because you can't) and my Plex docker, as that is the primary use of the server. I am using the linuxserver.io docker template for Plex and I am noticing that if I restart the docker it looks like it is responsible for at least 25% of the RAM utilisation. So I'm not sure if it is a change within this docker or unRAID itself causing the additional RAM usage. Are you using the Plex docker?
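     For reference, limiting a container's RAM is just a flag added to the template's Extra Parameters; a hypothetical example (the 2g value is arbitrary):

        # Extra Parameters for the container in the Unraid docker template
        --memory=2g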
  14. @squirrellydw I too would like to edit my Chromecast profile, but it does not appear where other forum resources (not specific to the Unraid docker) say it should. It is quoted as being at this location > plexmediaserver/Library/Application Support/Plex Media Server/Profiles However, the Profiles folder does not exist in my cache/appdata location for Plex. Can anyone advise whether the docker stores these profiles elsewhere?
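     In case it helps anyone else searching, one way to hunt for the folder from inside the running container (the container name 'plex' here is an assumption):

        # Search the container's filesystem for a Profiles directory
        docker exec -it plex find / -type d -name Profiles 2>/dev/null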
  15. Thanks Squid, although I have just tried again. Could have been a random blip? Seems to be working now. Everything was working my end at the time as I updated 5 other plugins at the same time. Just tried again now and it worked straight away. Odd...
  16. @Squid I am having some issues if I try to update the plugin, also tried removing and installing from scratch but get the following error:
        plugin: downloading: https://raw.github.com/Squidly271/community.applications/master/archive/community.applications-2017.02.25.txz ... failed (Invalid URL / Server error response)
        plugin: wget: https://raw.github.com/Squidly271/community.applications/master/archive/community.applications-2017.02.25.txz download failure (Invalid URL / Server error response)
  17. Thanks gfjardim, as I say I don't think this plugin or anything on unRAID itself is to blame; I'm just doing a bit of due diligence. I'm not concerned about the email account, I'm more concerned about a 'key logger' or some other such bit of nastiness poaching my passwords for things that could cost me money!
  18. @gfjardim Could you explain what the Statistics plugin does, please, and does it have a separate support page? Is it part of the preclear plugin? I'm not suggesting this is the cause, but I have recently had a security breach with the email account I use to send reports out of unRAID, and I'm just looking into each item I have added or updated in the past week. This just happens to be one of them! Thanks in advance.
  19. Thanks Lionelhutz, does the BTRFS scrub work with a single drive / image though? I appreciate you don't have fault tolerance with one drive, but surely it can still repair errors in the file system?
  20. Hello everyone, and Merry Christmas! I'm having an issue that is infrequent but quite annoying. I have my docker image (10GB) on my cache drive, which is an Intel 530 120GB. I don't actually use my cache drive as a cache; it is really my application drive. Every now and then my docker image becomes corrupted. I know when it happens because either my dockers start acting up or they just won't start, with no apparent errors in the logs. If I stop docker and then run a BTRFS scrub on the image (roughly as below), I will find errors that it says it is unable to repair. Then I have to go through the process of deleting the image and re-installing the dockers. OK, this only takes about 5 minutes, but I'd like to get to the bottom of how this is occurring. Does the BTRFS scrub work if you only have one drive? Do you require two drives for fault tolerance?
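     For anyone finding this later, this is the sort of check I run, assuming the image is still loop-mounted at the stock /var/lib/docker location (treat it as a sketch rather than gospel):

        # Scrub the loop-mounted docker image; -B waits and prints a summary when done
        btrfs scrub start -B /var/lib/docker
        # Or kick it off in the background and poll the results
        btrfs scrub status /var/lib/docker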
  21. Thanks RobJ just digesting this information now! Time for a little bit of fettling and a new plugin to play with by the looks of things!
  22. RobJ, I feel like this is something that limetech thought about and then changed their mind on, because on the Disk Settings page you have the setting 'Tunable (md_write_method):', which has three options: Auto, read/modify/write (default), and reconstruct write (turbo). To my mind it would make sense if 'Auto' meant a standard write when the array was spun down and you just wrote a random file to a disk; to basically enable turbo write you could then click the already existing button to spin up all disks, and if all disks were already spinning unRAID would switch intelligently to turbo write. Turbo write is great, I'm playing with it now. It's doubling my write speed, which is great for my machine backups etc., but probably overkill if I'm copying an MP3 over! I actually thought that's what Auto meant on that setting until I clicked the help button!
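     For anyone who wants to flip it from the command line rather than the GUI, it's roughly this (the mdcmd values are my understanding, so double-check before relying on them):

        # Switch to reconstruct write (turbo)
        /usr/local/sbin/mdcmd set md_write_method 1
        # Switch back to the default read/modify/write
        /usr/local/sbin/mdcmd set md_write_method 0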
  23. Thanks johnnie.black, at least you are seeing the same results as me. I don't think this is suitable for my usage scenario then, unfortunately, which is a shame because it seemed like the answer! I've also read some threads about 'turbo write' using all the drives spun up to perform a write operation. Might give that a whirl! Thanks for the swift response.
  24. I think this is what you are looking for: https://lime-technology.com/wiki/index.php/The_parity_swap_procedure
  25. I appreciate this is an old thread, but I thought it would be worth replying to as I found it via a Google search when looking for ways to utilise my RAM more effectively. Does anyone have their unRAID box configured in this manner, with the adjusted parameter vm.dirty_ratio=xx? I have adjusted my go file to add this parameter with a value of 50, so in theory using approximately 4GB of my RAM (a sketch of the lines is below). At first I thought this was a godsend, because 50% of my data transfers are single files up to approximately 3GB, and when testing copying to the array via my VM it would copy the whole file into the RAM cache first; I thought I'd cracked it! However, I quickly realised something was amiss when my weekly machine backup on my desktop took much longer than before to complete. After testing by making a backup of my virtual disk to the array (30GB), I noticed that when it ran out of cache after about 4GB, instead of the usual copy speed of 40-42MB/s I was only achieving 24-26MB/s. Can anyone shed any light on what is happening here? Have I misread this old thread, or is this no longer a wise thing to adjust? Any help appreciated!
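     For context, the adjustment is just a couple of sysctl lines appended to /boot/config/go; mine look roughly like this (only dirty_ratio=50 is what I actually described above, the background value is an assumption):

        # Allow up to 50% of RAM to hold dirty (unwritten) pages before writers are forced to flush
        sysctl -w vm.dirty_ratio=50
        # Background flushing threshold (value here is illustrative only)
        sysctl -w vm.dirty_background_ratio=10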