awediohead

Members · 127 posts
Everything posted by awediohead

  1. I'll watch this thread with great interest as I've been trying to do much the same things. In fairness, the problem IME arises mostly from Apple making things unnecessarily difficult. I've had reasonable results following SpaceInvaderOne's tutorials on YouTube to set up Time Machine backups, though much less so when actually trying to restore from those backups, and of course a backup you can't actually use to recover with can't reasonably be called a backup. I've also tried using Carbon Copy Cloner to back up to a sparse image, but I need to do more testing to see whether it really works as it should: it does back up incrementally on a schedule, but I dread to think what would happen if that boot volume failed. That's why I keep local per-machine backups to external drives as duplicates.
     For photos I can't suggest anything beyond archiving the various photo libraries to an unraid share, maybe using Syncthing? I plan to do this but haven't implemented it yet; at least I have some peace of mind that 20-odd years of photos are backed up to unraid AND to external drives per PC. The idea of a single photo library that pools our family photos, identifies duplicates and generally organises itself according to metadata is something I understand is doable with Android phones and Windows PCs, though I've not dug too deeply into that because it's irrelevant to my family situation.
     For me part of the problem is that, quite rightly, unraid and best practice have kept evolving over the years, yet some of the most reliable tutorials are several years old. Faced with putting hours of effort into getting something working that's no longer best practice, I procrastinate like hell; a bit of a Catch-22. The end result is that I've stuck with bomb-proof local backups and mostly resigned myself to my unraid setup being 90% Plex media server and the *arrs. It holds a couple of TB of backups as Time Machine and Samba shares, but it's the polar opposite of the elegant all-in-one solution I'm hoping for eventually.
     Given the age of the hardware my unraid setup runs on (4th-gen Intel) I've not even tried to use it for a gaming VM, and I haven't tried to upgrade the hardware because I'm not convinced the end result would be user-friendly enough for my family to cope with. I have fun tinkering in my spare time, but understandably they just want things to work without having to jump through any tech hoops.
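     A rough sketch of the photo-library archiving idea, for anyone who wants to try it before I do: rsync from the Mac to the unraid share mounted over SMB. The share name and mount point below are placeholders, not a tested setup.
         # rough sketch, assuming an unraid share called "photos" is mounted at /Volumes/photos on the Mac
         # -a preserves attributes, --delete mirrors deletions, trailing slashes sync folder contents
         rsync -a --delete --progress \
           "$HOME/Pictures/Photos Library.photoslibrary/" \
           "/Volumes/photos/Photos Library.photoslibrary/"
     Syncthing would do much the same continuously; the rsync version is just easier to reason about for a one-way archive.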
  2. At the moment I have unraid running on a 4th-gen Intel CPU with a 1050 graphics card and it's basically a Plex media server with the *arrs. It works fine and does very important work, because my partner is severely disabled and in chronic pain, and uses the Plex media to distract and occupy herself and help manage her pain and mental health, given she's largely housebound recovering from major surgery. We've got another 4th-gen Intel box with an RX 580 that does "gaming PC" duties in the lounge - very occasionally. My kid normally plays Skyrim or GTA (VERY uncompetitively!) . . . in other words no demanding modern games.
     My wacky idea is to try to replace both with another PC built around a Ryzen 3600 + X570 + RX 580. I was thinking of dedicating 2 cores / 4 threads to the Plex side of things (I could add in the 1050 for encoding) and running a Windows VM, or possibly a Linux VM (I'm really enjoying Nobara on my personal PC), with the other 4 cores / 8 threads plus the RX 580 passed through. 1080p gaming is all that's needed. The "gaming" aspect would get used for just a few hours a week on average - the question is whether such a setup could be configured to run reasonably reliably and in a way that's user-friendly for my non-techy family?
     And is it safe to assume that a much more modern CPU like the Ryzen 3600 would run more efficiently, especially in terms of powering down when idle, than the 4690K and Z97 currently doing unraid duties? In other words, if I were to pin the Plex duties to 2 cores and the remaining 4 cores were unused 90% of the time - is that efficient? Or is it more the case that modern CPUs work more efficiently if the compute demand is spread over the available cores? Any thoughts and general pointers gratefully accepted!
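     For the Plex side, I assume the pinning itself would just be something like this in the container's Extra Parameters - a rough sketch only, since the logical CPU numbers below are placeholders and the real core/hyperthread pairs for a given board are shown on unraid's Settings > CPU Pinning page:
         # rough sketch: restrict the Plex container to one hypothetical pair of cores (4 threads)
         # 0,1,2,3 are placeholders - use the sibling pairs shown under Settings > CPU Pinning
         --cpuset-cpus="0,1,2,3"
     The remaining cores would then be assigned to the VM from the same CPU Pinning page.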
  3. I think I may have figured this out - I noticed I had two folders for the artist, one called Artist and the other ARTIST, both having been automatically generated from metadata by Lidarr. After deleting the Artist folder in Krusader, the ARTIST folder, which had previously appeared over SMB to contain only one album, suddenly showed all 15 album folders (which had always been visible via Krusader). If anyone cares to explain why this happened, I'm always happy to learn. I'll wait a day before marking this solved just in case someone cares to explain? Thanks
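     In case anyone else hits this, a rough way to check whether other folders in the library collide on case alone (the path below is a placeholder for wherever the music actually lives):
         # rough sketch: print directory names that differ only by letter case
         # /mnt/user/data/media/music is a placeholder path
         ls /mnt/user/data/media/music | sort -f | uniq -di
     Anything printed has at least one case-duplicate sibling, which is exactly the Artist / ARTIST situation above.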
  4. So I have some music files that Lidarr recognises as present, which Plex can play and which I can see using Krusader, but a lot of these album folders or directories are invisible over my network. As an example of what's happening: a folder that I know contains 15 albums from a particular artist shows as containing only one album when viewing the share over SMB, the other 14 album folders being essentially invisible except in Krusader. So I can't play or copy them using a local music player pointed at my data/media/music folder on unraid; the player can of course play the audio files in the folders it CAN see.
     While I've identified this as a problem with one artist, I suspect there could be many more "invisible" directories where a given artist has some albums missing when viewing shares over the network, and there seems to be a weird permissions problem going on? Could someone please walk me through what I should do? I have checked the permissions of these "invisible over the network" album folders with Krusader and I can't see anything obviously wrong. My /data share over SMB is public.
     This is NOT a Lidarr problem, or a Plex problem, or a Krusader problem - they all work perfectly well, which is why I posted in the general support forum. It could be a weird SMB and macOS thing, I suppose? Thanks Julian
  5. Hi ljm42 - thanks for the help - changing the DNS seems to have done the trick. I watched an interesting video about Quad9 DNS security a few months back, and while I didn't change the unraid settings at the time, I did change them on my computers and then, sometime later, on my router. Anyway, I think it's all sorted now - all up to date. Thanks again
  6. Hi there - I've spent the last few months moving house and putting up shelves in the new one, so, as unraid has continued to function as a media server and I've been able to update plugins and dockers fine, I haven't paid much attention to the My Servers API problems that have been going on for quite a while. I understand that 6.10.3 is current, but I'm on 6.9.2, the status on the Update OS page is "unknown", and there doesn't seem to be a way to upgrade as things stand. Also, when I try to send this to support via the My Servers dropdown (trying to log out as it suggests) it 'fails to fetch' constantly - I can't log out as this also 'fails to fetch' - though when I log in to My Servers via the forums/unraid site it says I'm offline. I've tried to restart the API via the terminal, and the report -v output is:
     <-----UNRAID-API-REPORT----->
     SERVER_NAME: SINGTHESIS-NAS
     ENVIRONMENT: production
     UNRAID_VERSION: 6.9.2
     UNRAID_API_VERSION: 2.47.1 (running)
     NODE_VERSION: v14.15.3
     API_KEY: invalid
     MY_SERVERS: authenticated
     MY_SERVERS_USERNAME: awediohead
     RELAY: disconnected
     MOTHERSHIP: disconnected
     ONLINE_SERVERS:
     OFFLINE_SERVERS:
     ALLOWED_ORIGINS:
     HAS_CRASH_LOGS: no
     </----UNRAID-API-REPORT----->
     Presumably the problem is that the API key is invalid, but I have no idea how to fix that myself or what to do next? Cheers Julian
  7. So I know I'd ideally get an 8TB or larger drive to back up ALL the files on my server, but I can't afford one right now, and in any case by the time I can, I'll probably need a 10TB drive, and so on. Nor can I afford terabytes of cloud backup. Then it struck me that the truly important files I really need to back up are only a fraction of what's on the server: the vast majority of the 6-7TB currently on it is movies and TV shows I can always download again. That means I already have big enough hard drives to back up the important stuff - I just don't know how to do so selectively, and then how to schedule the backups. I'm used to GUI backup apps like Carbon Copy Cloner and haven't the faintest idea how to do this with unRAID. And if I could be properly selective about what's really important, I could probably also afford 50-odd GB of cloud storage for the ultra-important stuff.
     When I search Community Apps for 'Backups' there are 83 pages to wade through, so if anyone can help me whittle down the options or recommend a specific docker for what I want to do, that would be really helpful: backup of selected files/shares/subfolders to an external drive and/or unassigned device, on a schedule? Preferably with some sort of hand-holding tutorial on how to set it up. Thanks
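     For what it's worth, I gather the non-docker route would be a small rsync script run on a schedule by the User Scripts plugin - a rough sketch only, where the share names and the external drive's mount point are placeholders for whatever the real setup uses:
         #!/bin/bash
         # rough sketch: copy only the shares that matter to an Unassigned Devices drive
         # /mnt/disks/backup is a placeholder for wherever the external drive mounts
         rsync -a --delete /mnt/user/documents/ /mnt/disks/backup/documents/
         rsync -a --delete /mnt/user/photos/    /mnt/disks/backup/photos/
     The User Scripts plugin can then run the script on a cron-style schedule (e.g. weekly).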
  8. I'm probably doing something dumb, but about 10% of the TV shows are actually increasing in file size - which also means that 90% of the time there's a significant decrease in file size, so I'm not complaining. I'd have just assumed it would automatically skip a file (or revert it) if the end result was to make it bigger - but then of course I don't really understand the process. I'm just 90% very grateful and 10% puzzled. Any ideas?
     Ah, OK, I found the 'reject file if larger than original' plugin - which I hadn't added as I thought there was a five-plugin limit, which seems to have been changed since I first set things up?
  9. Midnight Commander to the rescue - marked as solved. No more errors. Thanks very much Squid
  10. Sorry to say I don't actually know how to delete the folder, as it's specific to that one cache SSD. I think it'll be something like rm /mnt/cache_standard/domains - but obviously I'm not going to experiment in the CLI!!! Can someone walk me through it please?
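      For anyone finding this later, the cautious version is roughly this, assuming the folder really is empty:
          # rough sketch: confirm the folder is empty, then remove it
          ls -la /mnt/cache_standard/domains   # should list nothing except . and ..
          rmdir /mnt/cache_standard/domains    # rmdir refuses to delete a non-empty directory, unlike rm -r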
  11. Hi Squid - thanks to your instructions there are no more errors for appdata and system, but the process doesn't seem to have done anything for the domains error. I tried switching the cache, running the mover, then switching back and running the mover again, but it still says "Share domains set to use pool cache_vms, but files / folders exist on the cache_standard pool". However, when I look at the contents of the /mnt/cache_standard/domains folder it says it contains 0 objects: 0 directories, 0 files (0 B total). The three folders containing vdisk1.img for Windows XP, 7 and 10 are all on cache_vms. What do you suggest please? Thanks
  12. Hi Squid - what I don't understand is that I haven't changed anything. When I look at the attached screenshots, User Shares shows this mysterious 'Cache' assigned to appdata and system, but when I select those shares they both show themselves as assigned to cache_standard, as shown below. That's how it's been for as long as I can remember - the cache_standard SSD has been there since I first installed unraid, and I added the cache_vms drive a year or more ago. It might be that when I added the cache_vms drive I renamed the original cache to cache_standard - I don't remember. Either way, nothing has changed for months as far as I recall, and until the power cut the server had been up for a couple of months with no errors from Fix Common Problems. Does a bad shutdown maybe cause the system to do a deeper level of self-check, i.e. one that might reveal something I messed up months ago?
     Anyway, by "disable all the services" do you mean turn Docker off? I'm not running any VMs . . . anything else to disable? And once that's off - given the share settings that have always been in place for appdata and system - I should just run the mover? Thanks very much for your help!
  13. Thanks Squid. Diagnostics attached - if you need them un-anonymised, let me know. Cheers singthesis-nas-diagnostics-20220130-1717.zip
  14. So yesterday, with the high winds, we had a power cut for about 15 minutes. When power was restored and I rebooted my server I got the notifications shown in the attachment. I call them weird because I haven't changed anything for months: I haven't renamed a pool as far as either the appdata or system shares are concerned. I have two cache SSDs called cache_standard and cache_vms, with the latter used for VM-related stuff - isos, domains etc. It's been that way for well over a year. Regarding the domains share, it's possible there are some files/folders left over from VM-related stuff I did over 18 months ago, but that raises the question of why this hasn't been flagged before now.
     Obviously I suspect the power outage has messed something up and I'm wondering what to do about it. Apart from these alerts in Fix Common Problems everything seems to be working as normal. I rarely boot up the sole Windows 10 VM I have installed, as this is only a 4-core CPU and it's got its work cut out keeping up with media server duties. I'm a novice with the CLI but perfectly willing to learn if someone can walk me through what to do. Thank you. PS Yes, I have a UPS arriving tomorrow!
  15. Just wondering why you're thinking of doing this on a server of any kind? How does using a server make the 'clean up' you mention easier or more efficient? Why not just add a 2TB drive to a regular PC running the OS of your choice, do your 'clean up', upload to the cloud and be done? Building a NAS with multiple drives seems like a complicated way to solve a simple problem, while spending a lot of money on drives designed for long-term reliability seems an odd choice for a short-term project. Just a bit puzzled - maybe you need to say more about the nature of the data you're 'cleaning up' for it to make sense?
  16. The only positive thing I can say about my Plex audiobook library is that it sorta works . . . kinda. I've stuck with it because, via the iOS Prologue app, the user experience for my immediate family is pretty solid, and we're mostly re-playing books we've had for years - i.e. we'd notice pretty quickly if the player app was dropping chapters or losing its place, which did happen with Plex and other apps I tried.
     When I first set it up, I remember I cherry-picked some bits from https://github.com/seanap/Plex-Audiobook-Guide but it's changed a lot since, and I didn't bother with much of the automation that guide describes because I use OSX on a hackintosh. The plugin mentioned in the current version is also unfamiliar to me, as I used Audiobooks.bundle - which the newer plugin, https://github.com/djdembeck/Audnexus.bundle, refers to as being possible to 'upgrade' from. But if it ain't broke . . . .
     Lastly, on iOS the Prologue app seems pretty good at dealing with the fact that Plex "thinks" it's serving music files, and it ignores anything not audiobook-specific. A lot of Android audiobook player apps (going by reviews) only seem to work well when accessing a local audiobook-specific folder on the phone, rather than Plex. I don't have an Android phone to test with, but I'd like to be able to recommend a solid, reliable app to older family members with Android phones who aren't very tech-savvy, if anyone can recommend one? HTH
  17. Also very excited to see this - I currently have no use for it for eBooks, only for audiobooks, of which I have a large collection in a Plex library. Now, Plex is pretty rubbish for audiobooks, but in the absence of anything audiobook-specific, and after various tweaks, it is workable. It would be great to have a 'connections' option for Plex in the pipeline if possible - similar to that on the other *arrs? Hopefully fairly simple to add? Thanks
  18. So I recently added another 4TB drive to my array and some weird stuff started happening with it not mounting - and various other errors. I won't detail them as they're now resolved. The point is that while all this was going on I searched here, and it seemed that my Marvell-chipset 4-port PCIe-to-SATA card was the likely problem, since the new 4TB drive was the only array drive connected to it - the other device on that card being an optical drive. So, assuming the whole parity check would end badly with the Marvell card, I ordered a new PCIe card (JMicron JMB582 chipset) - BUT to my surprise, once the whole process of formatting and parity checking was finished, the new drive lit up green for 'normal operation'.
     So the question is, can I simply swap out the Marvell PCIe card for the new JMicron card when it arrives tomorrow? Will that muck up the array at all? I can't see why it should - just double-checking. And should I, in an "if it ain't broke" sense? Right now there's nothing on the new drive, so I'm wondering if the "Marvell" related problems - the reason those cards are so actively recommended against here - might only start happening once the drive is regularly being written to and read from as part of the array?
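      If it helps anyone in the same position, here's a rough way to double-check which disks actually hang off which controller before and after the swap (unraid identifies array disks by serial number, not by port or controller, so moving them shouldn't upset the array):
          # rough sketch: the PCI address in each device path shows which controller the disk is attached to
          for d in /sys/block/sd?; do
              echo "${d##*/}: $(readlink -f "$d")"
          done
          # then identify a controller by its PCI address, e.g. (the address here is a placeholder):
          lspci -s 03:00.0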
  19. Just FYI, as of today I got third time lucky with Safari after both Firefox and Chrome were illegible in the terminal - Safari was crystal clear, which is both great (to get stuff done) and annoying, as I never use Safari! Anyway, maybe this helps someone else do the "browser dance" more efficiently!
  20. https://github.com/headkaze/EFI-Agent I've been using this menu bar app for ages on my bare-metal hackintoshes. Since the days of Clover, using ANY kind of configurator has been strongly advised against because they muck up the config file. That said, Clover was much more "relaxed" in this regard (and I did use Clover Configurator for some years) but OC is very strict - the most frequent 'help' advice I give on hackintosh forums is simply not to use configurators and to follow the vanilla Dortania guide to the letter. So I was a little surprised to see Ed incorporate a configurator into a tutorial - I mean in a "that's going to be a can of worms!" sense 😰
     FWIW my successes on bare-metal hackintosh installs, both Intel and Ryzen, have all come when I've started with a clean slate - new everything: kexts, OC version, config.sample, ProperTree etc. - and done everything manually, including researching the closest match between my hardware and the SMBIOS selected. That applies no matter how old the OS version is, btw. Conversely, I have wasted hours of my life trying to cut corners and "update" existing EFI folders' contents.
     I've no experience with running OSX in a VM (though I'm here reading this cos Macinabox didn't work as expected), but having read a dozen or so pages of this thread it seems a hell of a lot simpler to run MacOS bare metal. Anyway, EFI Agent is very stable, unobtrusive, tiny and does the job. HTH someone
  21. Probably a dumb question, but in my Plex Extra Parameters I have --runtime=nvidia. Does that mean the transcoding 'cache' is happening in the GPU's RAM, or that it's using the SSD? I changed my Plex docker and Plex settings according to the OP and it seems to be working fine, and the dashboard shows it's using the GPU - but I have no idea how, or even whether, I should be adding anything else to Extra Parameters. i.e. should I have: --runtime=nvidia --mount type=tmpfs,destination=/tmp,tmpfs-size=4000000000 ?
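     As I understand it, --runtime=nvidia only selects the NVIDIA container runtime so the GPU is visible to the container, while the tmpfs mount puts /tmp in system RAM (tmpfs is RAM-backed), so neither the GPU's VRAM nor the SSD. A rough way to check what actually backs the transcode directory (the container name below is a placeholder for whatever it's called on the Docker tab):
         # rough sketch: see what filesystem backs /tmp inside the Plex container
         # "plex" is a placeholder for the container's actual name
         docker exec plex df -h /tmp   # "tmpfs" in the output means RAM-backed, not the GPU or an SSD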
  22. Just about to move house and put some of our belongings into long term storage so very interested in reading the answers you receive!
  23. Welcome to the forums Bill. I second peterg23's recommendation that you check out SpaceInvaderOne's (aka Ed's) YouTube channel - and not just the most recent videos! You can easily find very helpful tutorials going back four or five years; unRAID's GUI might look a bit different in them (don't be put off by that), but the fundamentals haven't changed all that much. In fact, most of my most frustrating total wastes of time have come from trying to follow more apparently "modern" ways of doing things, only to have to revert everything to the way Ed suggested in the first place! The YouTube comment sections on his videos are littered with people saying, "If it weren't for your videos I wouldn't have an unRAID server / would have given up" - and I'm absolutely one of them.
  24. Ok, thanks for the clarification - I mistakenly thought I could pass through on a per-port basis, hence the numbering. Thanks for the other info and explanations - much appreciated!
  25. Thanks ghost82 for the link to that video - I somehow missed it. I thought I'd watched all of Ed's videos, but clearly not! As the R5 3600 doesn't have an iGPU, is there anything I need to consider about passing through the RX 580? I think it would only be an issue if I were trying to run two VMs at the same time, as they can't share a GPU simultaneously?
     Am I right in thinking that if I pass through particular SATA controllers to a specific VM, I can leave those drives formatted as each OS expects? For example: if on SATA 01 I have my Mac boot SSD (APFS) and on SATA 02 a 4TB HD (HFS+), then SATA 1 and 2 are my "MacOS VM" drives. Likewise, SATA 03 holds a Windows 10 boot SSD (NTFS) and SATA 04 a Windows Steam library SSD (NTFS) for the Win10 VM. If that works, I can be more experimental, as I can always revert to bare-metal multi-boot if I make a mess of passing through the SATA controllers.
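     From what I've gathered since, whole controllers get passed through rather than individual ports, and a controller generally needs to sit in its own IOMMU group (or everything else in that group has to go with it). A rough way to see which SATA controllers the board exposes and which IOMMU groups they're in:
         # rough sketch: list SATA controllers together with their IOMMU group numbers
         for dev in /sys/kernel/iommu_groups/*/devices/*; do
             group=$(basename "$(dirname "$(dirname "$dev")")")
             desc=$(lspci -nns "$(basename "$dev")")
             echo "$desc" | grep -qi sata && echo "IOMMU group $group: $desc"
         done
     If the controller holding the MacOS drives shows up in its own group, handing that whole controller to the VM should let the drives stay in their native formats.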