Everything posted by jademonkee

  1. Confirming that adding rmmod acpi_power_meter to the end of my flash/config/go file did indeed fix the problem of the log filling up. Although I also have this error in my log (probably there before, just swallowed by the other error):

     May 20 16:46:58 Percy nginx: 2020/05/20 16:46:58 [error] 5620#5620: *423 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 127.0.0.1, server: , request: "GET /admin/api.php?version HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "127.0.0.1"

     Any ideas on that one?

     EDIT: looks like it was related to the Netdata Docker container (it appeared in the log the line after its startup notification). I disabled auto start for it and rebooted, and the error didn't reappear in the log. So all good here! What a journey! I guess it makes sense to keep this thread here for posterity, even though it was only me, alone, shouting into the void.
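     For anyone else landing here: the change is literally just one line added to the end of the go file on the flash drive. A minimal sketch of what mine looks like (the first lines are the stock Unraid go file contents; only the rmmod line is new):

       #!/bin/bash
       # Start the Management Utility (stock Unraid go file contents)
       /usr/local/sbin/emhttp &
       # Unload the HPE power meter driver so it stops spamming ACPI errors into the syslog
       rmmod acpi_power_meter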
  2. Ok, I've done some googling, and apparently it's a common thing with HPE servers, but it's super strange that it's only just started appearing, right? Especially since it's not due to a plugin I installed or anything (as it remains in safe mode, too). If anyone knows anything more about it, I'd be grateful, so I can understand why it seems to have only recently appeared on my system. In the meantime, I'll see if the fix mentioned in this thread works:
  3. Hi all, Not sure how long it's been going on, so I can't really say what triggered it, but I received a warning from Fix Common Problems today that my log was filling up. Indeed, it is constantly being spammed with:

     May 20 16:18:09 Percy kernel: ACPI Error: SMBus/IPMI/GenericSerialBus write requires Buffer of length 66, found length 32 (20180810/exfield-390)
     May 20 16:18:09 Percy kernel: ACPI Error: Method parse/execution failed \_SB.PMI0._PMM, AE_AML_BUFFER_LIMIT (20180810/psparse-514)
     May 20 16:18:09 Percy kernel: ACPI Error: AE_AML_BUFFER_LIMIT, Evaluating _PMM (20180810/power_meter-338)

     Ad infinitum. I thought I'd reboot to see if it helped and to get a clean diagnostics (attached), but of course that now means we can't determine how long it's been going on. Sorry about that. I'll reboot and run it in safe mode now to see if I get the same error, and will report back. percy-diagnostics-20200520-1608.zip

     EDIT: can confirm that I get the same error in safe mode.
  4. As you can see from my sig, I currently have 4x 4 TB 5400 RPM Seagate NAS drives in my array, as well as some 5400 RPM 2.5" drives. A couple of months ago I had a disk fail and replaced it with another 4 TB 5400 RPM Seagate NAS drive. I realise now, as a couple of disks approach 90% full (and one has now surpassed it), that I probably should have bought a bigger disk instead. So I'm thinking of wiping and selling the new(ish) disk, and replacing it with an 8 TB NAS model so that I can have an upgrade path in the near future (I have no more room in my microserver for any more disks, and don't want to have to sell and upgrade the bulk of my hardware).

     Anyway, my question is this: I can only find 8 TB NAS drives at 7200 RPM rather than 5400 RPM. I'd previously been led to believe that 7200 RPM is overkill for light-usage network storage, so I've preferred the heat, noise, and power savings that the slower drives have brought me. But now that I can only find 7200 RPM NAS drives, I'm wondering if those assumptions are correct. Is there a benefit to having a 7200 RPM parity drive in an array of otherwise 5400 RPM drives? There's only one other user in the house, and the heaviest usage the server sees is streaming to Plex while listening to music elsewhere in the house. I am totally happy with the current performance of my array. Is there any benefit to adding a 7200 RPM drive (as parity) to my array, or will it just increase heat, noise, and energy consumption? I'm more interested in maintaining low noise levels than I am in improving performance (the server lives in the spare bedroom).

     Bonus question: I've previously stuck to NAS drives because of their warranty, 24/7 usage rating, and vibration tolerance, but are they overkill for my server? Should I just save myself the £50-£80 extra for an 8 TB NAS drive and shuck an external instead (again, this is for my parity drive)? Many thanks for your insight.
  5. Jitsi?

     +1 desire unit (I'm not the demanding type 😛).
  6. Weird. I've never experienced this on my machine.
  7. The 'Unassigned Devices' plugin allows you to access USB drives (or even disks connected internally that aren't part of the array - although your FreeNAS disks won't be able to be read by Unraid). You can then use Midnight Commander (old skool GUI/TUI that's accessible from the terminal) or Krusader (a full GUI file manager that runs via a Docker). Space Invader One has a video on copying files via Krusader: https://www.youtube.com/watch?v=I0XCFPAsWZE&list=PL6MCtOroZNDCXhQQWrVjbPWO-45qV7SOF&index=6 (the rest of the videos in that playlist/on his channel are really, really helpful for people starting out with Unraid, so go check them out if you haven't already). I prefer Midnight Commander coz I'm a sucker for terminals/command lines, so here's some info on that: https://www.linode.com/docs/tools-reference/tools/how-to-install-midnight-commander/ (note that you don't need to follow the installation part, as it's already installed, and you can only run as root on Unraid, so ignore that warning - well, don't ignore it, as you can definitely do some real damage if you edit the wrong file as root).

     I would recommend setting up your final shares before copying files over to the array; then you can navigate to /mnt/user/ to see your shares there (anything under /mnt/ is your array - don't touch anything outside of that directory). Your unassigned disks (the USB or internal drives that currently hold your data) will be mounted under /mnt/disks/

     NOTE: I don't recommend copying to /mnt/disk1/ (or disk2, 3, etc.) as your shares should be set up in a way that your data will go to the right disk anyhow. The different dirs in /mnt/ are different 'views' of your array - for your needs, stick to /mnt/user/ but it's worth learning what the other dirs represent.

     I also don't recommend using your main PC to copy via its file manager. It's often slower (at least it feels that way), and you'll have to leave your machine running while it copies. Doing it via Midnight Commander on a monitor hooked up to the server means it'll just choof along in the background and you can check on it when needed (accessing it via SSH means you'll need to leave your PC on while it copies, so is less good, but will still be faster (I think) than copying via SMB). There's a rough example of a command-line copy below.

     Finally, it usually makes sense (after preclearing all your disks as a stress test) to set up your array WITHOUT PARITY so that your initial copy can run faster. Once you've filled your array with all your existing data, you then add and build parity. This is another reason why it's highly recommended to preclear (I do 3 preclears per disk) new hard drives to weed out dodgy disks - a failure during this initial copy will (at the very least) ruin your day! The preclear process, especially on larger disks, will take days - so plan your time appropriately.

     Any other questions, lemme know. And enjoy Unraid!
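     For the terminal-inclined, here's a minimal sketch of that copy using rsync instead of Midnight Commander. The paths are just examples: 'mydisk' is whatever Unassigned Devices mounted your old drive as, and 'media' is whichever share you're copying into.

       # Copy everything from an Unassigned Devices mount into a user share,
       # preserving timestamps/permissions and showing progress as it goes.
       rsync -avh --progress /mnt/disks/mydisk/ /mnt/user/media/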
  8. Just my 2p, but unless you're running a couple of VMs or are a fiend for the downloadin', a 1TB cache is total overkill. I have 2x 250GB in a btrfs pool, but they sit at about 50GB usage, and only get higher than that when files get written to the array, which is seldom more than ~40GB in a day (I run the mover daily, so by morning it's back down to 50GB usage again). I think in the few years I've had my server I've only written close to capacity (200GB) in a day twice. And even then, I just paused writing to the server/cache, ran the mover, and started writing again once it had completed (which took a while, but was no real drama).

     So, IMO (YMMV), you'd be better off going for a 250GB (or smaller) cache, and putting the money saved towards either an 'EVO PRO' (better warranty, better components), a second cache disk, or a bigger parity. But, like I said, YMMV - if you're a heavy app or VM user, you may need the extra space on the cache - although for VM images, I hear that there are advantages to storing them on an SSD outside the array (via the Unassigned Devices plugin) anyway, so 2x 512GB, or 3x 250GB (2x cache, one unassigned) may be a better use of your ££ anyhow.
  9. Now that this is fixed in v2.6+, how do I delete the erroneous test results from my result history? Do I have to purge everything, or can I just delete the erroneous ones? Thanks.
  10. So, after recently getting back on the Spotify bandwagon, I installed the 'Spotty' Spotify plugin (after upgrading to v8, but that's unrelated), and even without the 'Online Music Library Integration' (which I actually later disabled, as my Spotify account is too messy to invite into my neat home library), Spotify support in LMS is actually really good! The last time I used the Spotify plugin, it didn't support appearing as a player in the Spotify app, which is a total drag - but now it does! Better yet, it also adds an 'On Spotify' album to every artist, which makes it a total breeze to play or enqueue a new album by an artist you already have in your collection. You can also browse your Spotify library through the plugin (without having to integrate it into your existing library/playlists), so it feels like a total solution. So much so that I now have no interest in the 'Online music library integration' that made me upgrade to v8 in the first place. I just love how LMS keeps on being amazing, even though it's been officially deprecated for a decade (or more?) now!
  11. Ok. Updated, installed the Online Libraries plugin, as well as the dev build of the Spotty plugin, and it's working! At least sometimes. Syncing seems to be a bit dodgy. And hoooo boy do I have to clean up my Spotify playlist library now. There are also a lot of duplicate albums for those I've added in Spotify even though I have a copy in my LMS library. Still: pretty neat!
  12. @dlandon any chance this can get an upgrade to 8.0.0? I'm keen to try out the online music service integration: http://htmlpreview.github.io/?https://github.com/Logitech/slimserver/blob/public/8.0/Changelog8.html I'm sure this feature will be of interest to others in this thread, too (particularly those that have come from Sonos). I do note, however, that v8 is still in development, so maybe it could even be a separate build until it's stable? Thanks so much! EDIT: a bit more info on v8: https://forums.slimdevices.com/showthread.php?111600-Version-8-0-ready-to-test
  13. Just re-ran the tests on v2.6 Much better results! I didn't receive the speed gap warning, either. Thanks for the help.
  14. Upgraded to v2.5 this morning and thought I'd re-run the benchmarks. Was receiving the 'speed gap' notification, so I aborted, disabled all Dockers and re-ran. Still got the 'speed gap' notification, so I aborted, and re-ran with the 'disable speed gap detection' option checked. The final results page looks vastly different to the last time I ran it; however, going back to the main page, the curves look just as before (almost as if the main page didn't update to the latest results - even after I Ctrl+R refreshed the page). Please see attached.

      So: is it something I should be worried about? I haven't received any notification from Unraid that anything is wrong with SMART, but should I do something to confirm that my disks (or controller?) are ok? Thanks for your help.

      EDIT: I just ran the controller benchmark, which showed erratic results, and as per the recommendation I restarted the Docker and ran it again: same erratic results (although I think the disks with low speeds were different this time). Attached is a screenshot.
  15. Odd. I'm backing up almost 3TB. Maybe you have more files than me, though (>2TB of my backup is FLAC audio from my CD rips). Still, with 128GB RAM, I guess it's no problem to have it higher than mine (only 16GB RAM).
  16. I think it's coz they're cheap. Serious question, though: what's a good alternative? I've been thinking of chucking in the towel, but haven't found anything even close in price.
  17. Don't know how, don't know why, but as of this morning, it's fixed itself. Maybe this has something to do with me raising a ticket with CrashPlan support yesterday? I don't know.
  18. I'd recommend a different service for that much data - they're not really unlimited, and may end up booting you off the service for a backup that large. https://www.reddit.com/r/Crashplan/comments/ezuztk/warning_unlimited_not_really_unlimited/
  19. My bad: I was thinking it was different to the option you'd changed, but it's actually the same thing. Sorry 'bout that.
  20. https://support.code42.com/CrashPlan/6/Troubleshooting/Adjust_Code42_app_settings_for_memory_usage_with_large_backups Have you tried this?
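      From memory, the gist of that doc: the Code42 app has a hidden command-line area (I think you open it by double-clicking the Code42 logo in the app - the doc has the exact steps), and from there you raise the Java max heap with something like the line below. The 4096 MB is just an example value - size it to your backup set and available RAM.

        java mx 4096, restart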
  21. Thanks for replying, tcharron. My dashboard says 2.4TB used. If I click on Devices > my server (the only device under 'Active'), it shows as 2.4TB stored. But if I click on the entry for my server, it says under 'Selected' that there's only 212GB. I don't know what 'selected' means, but it's clearly got something to do with this. If I click on the 'Backup' tab on that same page, it appears that all my folders are indeed present (at least the top level ones - I can't drill down in this UI). In Devices > 'Deactivated' there is one other device, but I'm pretty sure it's just my old backup from before they made everyone switch to a 'pro' account. And it's 0MB anyhow.
  22. I rebooted my network equipment and server this morning, and thought I'd just check that everything came back up happy, and upon entering CrashPlan, found that it thinks I only have 228GB of backups - but I actually have about 3TB. You can see what I mean in the attached screenshots: one showing the main screen with only 228.4GB backed up, and the other showing the Preferences > Destinations size of ~3TB. I fairly frequently open the CrashPlan UI to check on it, so this has only happened either in the last few days, or since my server reboot. I restarted the Docker to see if it fixed anything, but it remains the same. As you can see from the main screenshot, no maintenance is currently being performed. I went through the file list (under the Manage Files button), and all major folders appear in there as backed up. Does anybody know what's happening? Or is this something I should email CrashPlan about? Thanks for your help.
  23. Install the Unraid Tips & Tweaks plugin from Community Applications. Then go to Settings (in Unraid, not the CrashPlan Docker) > Tips and Tweaks. There will be an option in there to increase inotify. The right value depends on your system, but I have 16GB RAM and have it set to 1048576, which seems fine (although I have had a weird thing happen this morning - see my upcoming post below).
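      For what it's worth, as far as I know that plugin setting is just raising the kernel's inotify watch limit, so you can check or set it yourself from the terminal too. A quick sketch (1048576 is just the value I happen to use - pick one that suits your RAM):

        # check the current limit
        sysctl fs.inotify.max_user_watches
        # raise it for this boot (the plugin/go file is what makes it stick across reboots)
        sysctl -w fs.inotify.max_user_watches=1048576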
  24. Hi all, I'm trying to figure out how to rename the digital inputs on my Transporter. I found these instructions: https://forums.slimdevices.com/showthread.php?54831-Labelling-transporter-digital-inputs-possible which say to create a custom-strings.txt file in the same directory as strings.txt. So I thought I'd open a terminal window and try to find that strings.txt file, but I can't find it anywhere. The post suggested Slim/Plugin/DigitalInput/strings.txt but there is no Slim directory, and any logitechmediaserver directory I've found doesn't contain Plugin, or if it does, doesn't contain DigitalInput (or strings.txt). Any idea where it is? Or is this method no longer applicable? Thanks for your help!

      EDIT: I should note that the 'plugins' directory found in my appdata folder (\\PERCY\appdata\LogitechMediaServer\cache\InstalledPlugins\Plugins) contains my third-party plugins, which all have strings.txt files in their dirs, but there is no dir for the DigitalInput plugin (which is a Logitech plugin). I also checked the plugin directories mentioned in the Settings > Advanced page in the UI: /config/cache/InstalledPlugins/Plugins, /usr/sbin/Plugins, /usr/share/squeezeboxserver/Plugins
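      In case anyone else goes hunting for it, this is roughly how I'd search from the Unraid terminal - the search runs inside the LMS container, so swap in whatever your container is actually named:

        # look for any strings.txt belonging to the DigitalInput plugin inside the container
        docker exec LogitechMediaServer find / -path '*DigitalInput*' -name strings.txt 2>/dev/null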