Leaderboard

Popular Content

Showing content with the highest reputation on 12/01/18 in all areas

  1. Hi guys, I've been thinking about how to get Sonarr and Radarr to auto re-encode downloads automatically using HandBrake. There will be 2 parts to this and I will post the next under this one in a few days' time. This video, the first part, shows how to automatically re-encode video files downloaded by Sonarr or Radarr using an advanced Docker path mapping, sending the media files through HandBrake first before they are then processed by Sonarr/Radarr. I use H.265 as I want smaller video files, but any format can be chosen. This first part goes through the principles of how this works. The second video will go deeper using detailed examples. It is recommended to watch this video first. PART ONE PART 2
    2 points
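A minimal sketch of the path-mapping idea described above; the image names, host paths and watch-folder layout are my assumptions, not details taken from the videos:

```shell
# Hypothetical layout: the download client drops completed files into a
# watch folder, HandBrake re-encodes them into an output folder, and that
# output folder is what Sonarr/Radarr see as the completed-downloads path.

# HandBrake container watches /watch and writes re-encoded files to /output
docker run -d --name handbrake \
  -v /mnt/user/downloads/complete:/watch \
  -v /mnt/user/downloads/encoded:/output \
  jlesage/handbrake

# Sonarr's downloads path is mapped to the *encoded* folder, so it only
# ever imports files that have already passed through HandBrake
docker run -d --name sonarr \
  -v /mnt/user/downloads/encoded:/downloads \
  -v /mnt/user/media/tv:/tv \
  linuxserver/sonarr
```

The trick is purely in the volume mappings: the two containers disagree about what "/downloads" means, and that disagreement routes media through the encoder first.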
  2. I added a confirmation checkbox before starting the array when a disk is missing. Available in next version.
    2 points
  3. Using a network share as a Time Machine destination is problematic, even using Apple's own Time Capsule. An external hard disk plugged into the USB port of your Mac is much faster and much more reliable.

     The weakness is in the use of the sparse bundle disk image. A sparse bundle consists of a containing folder, a few database files, a plist that stores the current state of the image and nested subfolders containing many thousands of small (8 MiB) 'band' files. An image file has to be used in order to recreate the HFS+J file system that Time Machine requires to support hard-linked folders on network storage, while using many small files to create the image, rather than one huge monolithic file, allows updates to be made at a usable speed. But sparse bundles have shown themselves to be fragile, and often Time Machine will detect a problem, spend a long time checking the image for consistency and then give up, prompting the user that it needs to start over from scratch, losing all the previous backups.

     Because there's a disk image on a file server, a double mount is involved. First the Mac has to mount the network share, then it has to mount the sparse bundle disk image. Once that is done it treats the disk image exactly as it would a locally connected hard disk that you have dedicated to Time Machine use. Sparse bundles grow by writing more band files, so you have the opportunity to specify the maximum size of the image. If you don't, it will eventually grow until it fills up all the space available on the share.

     If you still want to do it, here's what I'd do. First create a user on your Unraid server that's only going to be used for Time Machine. Let's call that user 'tm'. Set a password. Now enable user shares, disable disk shares and create a new user share. Let's call it 'TMBackups'. Include just one disk and set Use cache disk to No. You can set Minimum Free Space quite low (e.g. 10 MB) since the largest files in the share will be the 8 MiB band files. Allocation method and split level are irrelevant if your user share is restricted to a single disk.

     Under SMB Security Settings for the share, turn Export off. Under NFS Security Settings, confirm that Export is off. Under AFP Security Settings, turn Export on. If you want to restrict the size occupied by your Time Machine backups, do so here. Even if you want it to be able to use all the disk, it's worth setting a limit so that it doesn't become totally full. I fill up my Unraid disks but I like to leave about 10 GB free. Set Security to Private and give your new user 'tm' exclusive read/write access.

     Consider moving the Volume dbpath away from its default location (the root of the share) to somewhere faster, where it's less likely to get damaged. The Volume database is the .AppleDB folder that can be found in the root of an AFP share. It too is fragile and benefits from fast storage. I moved mine onto my cache pool by entering the path /mnt/user/system/AppleDB (i.e. a subfolder of the pre-existing Unraid 'system' share, which by default is of type cache:prefer). This will improve both the speed and reliability of AFP shares, so do it to any other user shares you have that are exported via AFP. The system will automatically create a sub-folder named for the share, so in this example the .AppleDB folder gets moved from the root of the share and placed in /mnt/user/system/AppleDB/TMBackups.

     Now that the user share is set up, go to your Mac. Open a Finder window and in the left-hand pane, under Shared, find Tower-AFP and click it. In the right-hand pane make sure you connect as user 'tm', not as your regular macOS user and not as Guest. You'll need to enter the password associated with tm's account on your server. Check the box to store the credentials in your keychain. The TMBackups share should be displayed in the Finder window. Mount it.

     Now open Time Machine Preferences and click "Add or remove backup disk". Under Available Disks you will see "TMBackups on Tower-AFP.local". Choose it. If Time Machine ever asks you to provide credentials, enter those for the 'tm' user, not your regular macOS user. Enter a name for the disk image (say, "Unraid TM Backups"), do not choose the encryption option, and let Time Machine create the sparse bundle image, mount it and write out the initial full backup, which will take a long time. Once it has completed, Time Machine should unmount the Unraid TM Backups image but it will probably leave the TMBackups share mounted, since you mounted it manually in the first place. You can unmount it manually or leave it until you reboot your Mac.

     From then on, each time Time Machine runs it will automatically mount the user share (keeping it hidden), automatically mount the sparse bundle image, perform the backup and tidy up after itself, then unmount the image and then the share, all without your interaction. The use of a dedicated 'tm' account offers a degree of security, but if you want your backups to be encrypted then use unRAID's encrypted whole-disk file system, not the encryption option offered by Time Machine.
    1 point
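If you skipped setting a maximum size up front, the sparse bundle can still be capped afterwards from the Mac with hdiutil; a sketch, where the image name and size are examples and the image must be unmounted first:

```shell
# Cap the Time Machine sparse bundle at 500 GB so it can't fill the share.
# Run on the Mac with the TMBackups share mounted and the image unmounted.
hdiutil resize -size 500g "/Volumes/TMBackups/Unraid TM Backups.sparsebundle"
```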
  4. In host mode there is no masquerading (network address translation). Only in bridge mode.
    1 point
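A quick illustration of the difference (image and port are arbitrary examples):

```shell
# Bridge mode (the default): traffic to host port 8080 is masqueraded
# (NAT) to the container's private IP on the docker bridge
docker run -d -p 8080:80 nginx

# Host mode: the container shares the host's network stack directly;
# there is no NAT, and -p mappings are ignored
docker run -d --network host nginx
```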
  5. Software cannot overcome the hardware limitations. The CPU / chipset / BIOS has decided to group those things together. If none of the ACS override settings will split them apart, then there is nothing you can do short of passing through a PCIe card & audio
    1 point
  6. Yes we can fix that. This will be included in a larger change that also addresses this one:
    1 point
  7. Please don't do this. Which thread should we respond to? Should we all be expected to read both threads to see what others have said in response? In a more general case, you could have someone going to a lot of trouble to research a problem for you and write a long response in one thread, without being aware that someone else has already made a similar effort on the other thread. This is one reason why crossposting has been frowned upon on message boards since long before the World Wide Web. In future, if you really feel you must make the post in one thread and take responses there, then just post a link in the other thread.
    1 point
  8. No, the Marvell 9215, 9230 and 9235 all use the standard AHCI driver, and some work well. The 9215 and 9235 are likely more stable since there's no RAID support; the 9230 seems to be the worst offender, even when the RAID feature isn't being used, but it can vary from manufacturer to manufacturer. The ones used on that series of ASRock boards are usually very problematic; they can have issues every day of the week that ends in Y.
    1 point
  9. You need to set the PUID and PGID. That will most likely fix your issue. I'll make a template for it this weekend.
    1 point
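For context, PUID/PGID are the environment variables that linuxserver.io-style images use to choose the user the service runs as; 99/100 (nobody/users) are the usual Unraid defaults. A sketch with placeholder container and image names:

```shell
# Run the container as Unraid's default user/group so it can
# read and write files on the user shares without permission errors
docker run -d --name some-container \
  -e PUID=99 \
  -e PGID=100 \
  -v /mnt/user/appdata/some-container:/config \
  linuxserver/some-image
```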
  10. And if it's never been enabled, then libvirt.img doesn't exist, so no backup of the file will happen.
    1 point
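A minimal sketch of the guard being described; the path and function name are illustrative, not the actual backup script:

```shell
# Only back up libvirt.img if VM support has ever been enabled,
# i.e. if the image file actually exists on disk.
backup_libvirt() {
  local img="$1" dest="$2"
  if [ -f "$img" ]; then
    cp "$img" "$dest" && echo "backed up"
  else
    echo "libvirt.img not found, skipping"
  fi
}
```

Usage would be something like `backup_libvirt /mnt/user/system/libvirt/libvirt.img /mnt/user/backups/libvirt.img` (paths assumed).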
  11. Yes, most currently recommended LSI HBAs don't currently support trim with Unraid, but the user gets an error when running fstrim, so it's easy to know. What I believe bubbaQ is asking is whether the fstrim command can report that the pool was trimmed when it's not actually trimmed, because of some btrfs issue.
    1 point
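For reference, the manual trim being discussed is run against the pool's mount point (path assumed); the `-v` flag makes fstrim report how much it claims to have trimmed, which is exactly the report whose accuracy is being questioned:

```shell
# Trim the cache pool and print the number of bytes fstrim believes it trimmed
fstrim -v /mnt/cache
```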
  12. Made another update which adds a confirmation checkbox when a cache disk is missing.
    1 point
  13. Made an update whereby cache slots selection will become disabled when cache devices are re-arranged. Available in next release.
    1 point
  14. Does that include missing cache pool devices? It would be great if they are included in the warning.
    1 point
  15. Does disk5 mount after the repair? As for the cache pool, the docker image is corrupt and needs to be recreated; the cache fs seems fine, at least for now.
    1 point
  16. A word of caution... I believe that memory is one preclear requirement that some folks don't think about. I am not sure what the requirement is per HD, but it's not insignificant! It sounds like you are using an old system, so be aware that you could have an issue there. Also make sure you have adequate air flow over your HDs if you have the case side off. One or two drives should not be an issue, but when you start talking four or more, you need to think about heat problems and air flow.
    1 point
  17. They are now supported. Enjoy your 10GbE speeds!
    1 point
  18. This helped me, thanks. What I did was edit the plex.subdomain.conf file and replace proxy_pass https://$upstream_plex:32400 with proxy_pass http://192.168.178.23:32400 (your local IP for Plex). My Plex docker stayed on host. The second part was that you had to go into Plex > Settings > Network and, under Custom server access URLs, put the Plex domain in, i.e. https://plex.yourdomain.com:443. I went to the domain and it worked. It asked me to sign in again and bingo! Hope this helps someone.
    1 point
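The edit described above amounts to swapping one proxy_pass line inside plex.subdomain.conf; a sketch, where the LAN IP is the poster's example and you'd substitute your own Plex host's address:

```nginx
location / {
    # before: proxy_pass https://$upstream_plex:32400;
    # after: point straight at the Plex host on the LAN
    # (works because the Plex docker runs in host network mode)
    proxy_pass http://192.168.178.23:32400;
}
```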
  19. The easiest tuners to use will be the ones that do not require being plugged into the server; they function over Ethernet. This way you won't even need special editions of unRaid. For the US market this means SiliconDust HDHomeRuns. I'm not sure exactly which model for UK DVB usage. Try searching for network DVB.
    1 point
  20. MemTest86 version 7.5 found my faulty DIMM using the default settings. You may want to just change the setting for the number of times it cycles through the different tests. I think the default is four cycles of 13 tests. Another thing you could try, if MemTest86 returns another pass, is to run on just one DIMM for a while in, say, the channel A socket. Run a parity check and when it finishes swap with the other DIMM in the same socket and repeat. If that reveals no difference you might want to try each DIMM singly in the channel B socket. Label the DIMMs (or note their serial numbers) and make careful notes, and eventually you should be able to narrow the problem down to either one DIMM or one socket. It's all very time-consuming stuff, but it can run unattended and I'm sure you want to get to the bottom of this problem. It's annoying to have a potentially bad DIMM, but Corsair offer a lifetime warranty and I really can't fault their RMA process.
    1 point
  21. You probably need to set the networking mode to host in the tvheadend template.
    1 point