About jademonkee


  1. Confirming that adding rmmod acpi_power_meter to the end of my flash/config/go file did indeed fix the problem of the log filling up. Although I also have this error in my log (probably there before, just swallowed by the other error):

May 20 16:46:58 Percy nginx: 2020/05/20 16:46:58 [error] 5620#5620: *423 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client:, server: , request: "GET /admin/api.php?version HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: ""

Any ideas on that one?

EDIT: looks like it was related to the Netdata Docker container (it appeared in the log on the line after its startup notification). I disabled auto-start for it and rebooted, and the error didn't reappear in the log. So all good here! What a journey! I guess it makes sense to keep this thread here for posterity, even though it was only me, alone, shouting into the void.
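For anyone landing here later, the fix amounts to one extra line at the end of the go file on the flash drive. A minimal sketch of what that file ends up looking like, assuming the stock Unraid go file (the emhttp line is what ships by default; yours may contain other customisations):

```shell
#!/bin/bash
# /boot/config/go -- runs once at boot, before the web UI comes up.

# Stock line: start the Unraid management interface.
/usr/local/sbin/emhttp &

# Added line: unload the ACPI power meter driver so the HPE
# _PMM / AE_AML_BUFFER_LIMIT errors stop flooding the syslog.
rmmod acpi_power_meter
```

Because the go file runs at every boot, the module stays unloaded across reboots without any further action.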
  2. Ok, doing some googling, apparently it's a common thing with HPE servers, but it's super strange that it's only just started appearing, right? Especially since it's not due to a plugin I installed or anything (it remains in safe mode, too). If anyone knows anything more about it, I'd be grateful, so I can understand why it seems to have only recently appeared on my system. In the meantime, I'll see if the fix mentioned in this thread works:
  3. Hi all, I'm not sure how long it's been going on, so I can't really say what triggered it, but I received a warning from Fix Common Problems today that my log was filling up. Indeed, it is constantly being spammed with:

May 20 16:18:09 Percy kernel: ACPI Error: SMBus/IPMI/GenericSerialBus write requires Buffer of length 66, found length 32 (20180810/exfield-390)
May 20 16:18:09 Percy kernel: ACPI Error: Method parse/execution failed \_SB.PMI0._PMM, AE_AML_BUFFER_LIMIT (20180810/psparse-514)
May 20 16:18:09 Percy kernel: ACPI Error: AE_AML_BUFFER_LIMIT, Evaluating _PMM (20180810/power_meter-338)

Ad infinitum. I thought I'd reboot to see if it helped and to get a clean diagnostics (attached), but of course that now means we can't determine how long it's been going on. Sorry about that. I'll reboot and run it in safe mode now to see if I get the same error, and will report back. percy-diagnostics-20200520-1608.zip

EDIT: I can confirm that I get the same error in safe mode.
  4. As you can see from my sig, I currently have 4x 4TB 5400 RPM Seagate NAS drives in my array, as well as some 5400 RPM 2.5" drives. A couple of months ago I had a disk fail and replaced it with another 4TB 5400 RPM Seagate NAS drive. I realise now, as a couple of disks approach 90% full (and one has now surpassed it), that I probably should have bought a bigger disk instead. So I'm thinking of wiping and selling the new(ish) disk and replacing it with an 8TB NAS model, so that I have an upgrade path in the near future (I have no more room in my microserver for any more disks, and don't want to have to sell and upgrade the bulk of my hardware).

Anyway, my question is this: I can only find 8TB NAS drives at 7200 RPM rather than 5400 RPM. I'd previously been led to believe that 7200 RPM is overkill for light-usage network storage, so I've preferred the heat, noise, and power savings that the slower drives have brought me. But now that I can only find 7200 RPM NAS drives, I'm wondering whether those assumptions are correct. Is there a benefit to having a 7200 RPM parity drive in an array of otherwise 5400 RPM drives? There's only one other user in the house, and the heaviest usage the server sees is streaming to Plex while listening to music elsewhere in the house. I am totally happy with the current performance of my array. Is there any benefit to adding a 7200 RPM drive (as parity) to my array, or will it just increase heat, noise, and energy consumption? I'm more interested in maintaining low noise levels than in improving performance (the server lives in the spare bedroom).

Bonus question: I've previously stuck to NAS drives because of their warranty, 24/7 usage rating, and vibration tolerance, but are they overkill for my server? Should I just save myself the £50-£80 extra for an 8TB NAS drive and shuck an external instead (again, this is for my parity drive)? Many thanks for your insight.
  5. +1 desire unit (I'm not the demanding type 😛).
  6. Weird. I've never experienced this on my machine.
  7. The 'Unassigned Devices' plugin allows you to access USB drives (or even disks connected internally that aren't part of the array - although your FreeNAS disks won't be readable by Unraid). You can then use Midnight Commander (an old skool TUI file manager that's accessible from the terminal) or Krusader (a full GUI file manager that runs via a Docker).

Space Invader One has a video on copying files via Krusader: https://www.youtube.com/watch?v=I0XCFPAsWZE&list=PL6MCtOroZNDCXhQQWrVjbPWO-45qV7SOF&index=6 (the rest of the videos in that playlist/on his channel are really, really helpful for people starting out with Unraid, so go check them out if you haven't already).

I prefer Midnight Commander coz I'm a sucker for terminals/command lines, so here's some info on that: https://www.linode.com/docs/tools-reference/tools/how-to-install-midnight-commander/ (note that you don't need to follow the installation part, as it's already installed, and you can only run as root on Unraid, so ignore that warning - well, don't ignore it entirely, as you can definitely do some real damage if you edit the wrong file as root).

I would recommend setting up your final shares before copying files over to the array; you can then navigate to /mnt/user/ to see your shares there (anything under /mnt/ is your array - don't touch anything outside of that directory). Your unassigned disks (the USB or internal drives that currently hold your data) will be mounted under /mnt/disks/.

NOTE: I don't recommend copying to /mnt/disk1/ (or disk2, disk3, etc.), as your shares should be set up in a way that puts your data on the right disk anyhow. The different dirs in /mnt/ are different 'views' of your array - for your needs, stick to /mnt/user/, but it's worth learning what the other dirs represent.

I also don't recommend using your main PC to copy via its file manager. It's often slower (at least it feels that way), and you'll have to leave your machine running while it copies. Doing it via Midnight Commander on a monitor hooked up to the server means it'll just choof along in the background and you can check on it when needed (accessing it via ssh means you'll need to leave your PC on while it copies, so it's less good, but will still be faster (I think) than copying via SMB).

Finally, it usually makes sense (after preclearing all your disks as a stress test) to set up your array WITHOUT PARITY so that your initial copy can run faster. Once you've filled your array with all your existing data, you then add and build parity. This is another reason why it's highly recommended to preclear new hard drives (I do 3 preclears per disk) to weed out dodgy disks - a failure during this initial copy will (at the very least) ruin your day! The preclear process, especially on larger disks, will take days, so plan your time appropriately.

Any other questions, lemme know. And enjoy Unraid!
  8. Just my 2p, but unless you're running a couple of VMs or are a fiend for the downloadin', a 1TB cache is total overkill. I have 2x 250GB in a btrfs pool, but they sit at about 50GB usage, and only get higher than that when files get written to the array, which is seldom more than ~40GB in a day (I run the mover daily, so by morning it's back down to 50GB usage again). I think in the few years I've had my server, I've only written close to capacity (200GB) in a day twice. And even then, I just paused writing to the server/cache, ran the mover, and started writing again once it had completed (which took a while, but was no real drama).

So, IMO (YMMV), you'd be better off going for a 250GB (or smaller) cache, and putting the money saved towards an 'EVO PRO' (better warranty, better components), a second cache disk, or a bigger parity drive. But, like I said, YMMV: if you're a heavy app or VM user, you may need the extra space on the cache - although for VM images, I hear there are advantages to storing them on an SSD outside the array (via the Unassigned Devices plugin) anyway, so 2x 512GB, or 3x 250GB (2x cache, one unassigned), may be a better use of your ££ anyhow.
  9. Now that this is fixed in v2.6+, how do I delete the erroneous test results from my result history? Do I have to purge everything, or can I just delete the erroneous ones? Thanks.
  10. So, after recently getting back on the Spotify bandwagon, I installed the 'Spotty' Spotify plugin (after upgrading to v8, but that's unrelated), and even without the 'Online Music Library Integration' (which I actually later disabled, as my Spotify account is too messy to invite into my neat home library), Spotify support in LMS is actually really good!

The last time I used the Spotify plugin, it didn't support appearing as a player in the Spotify app, which is a total drag - but now it does! Better yet, it also adds an 'On Spotify' album to every artist, which makes it a total breeze to play or enqueue a new album by an artist you already have in your collection. You can also browse your Spotify library through the plugin (without having to integrate it into your existing library/playlists), so it feels like a total solution. So much so that I now have no interest in the 'Online music library integration' that made me upgrade to v8 in the first place.

I just love how LMS keeps on being amazing, even though it's been officially deprecated for a decade (or more?) now!
  11. Ok. I updated, installed the Online Libraries plugin as well as the dev build of the Spotty plugin, and it's working! At least sometimes: syncing seems to be a bit dodgy. And hoooo boy, do I have to clean up my Spotify playlist library now. There are also a lot of duplicate albums - ones I added in Spotify even though I have a copy in my LMS library. Still: pretty neat!
  12. @dlandon any chance this can get an upgrade to 8.0.0? I'm keen to try out the online music service integration: http://htmlpreview.github.io/?https://github.com/Logitech/slimserver/blob/public/8.0/Changelog8.html I'm sure this feature will be of interest to others in this thread, too (particularly those who have come from Sonos). I do note, however, that v8 is still in development, so maybe offer it as a separate build until it's stable? Thanks so much!

EDIT: a bit more info on v8: https://forums.slimdevices.com/showthread.php?111600-Version-8-0-ready-to-test
  13. Just re-ran the tests on v2.6. Much better results! I didn't receive the speed gap warning, either. Thanks for the help.
  14. I upgraded to v2.5 this morning and thought I'd re-run the benchmarks. I was receiving the 'speed gap' notification, so I aborted, disabled all Dockers, and re-ran. I still got the 'speed gap' notification, so I aborted and re-ran with the 'disable speed gap detection' option checked. The final results page looks vastly different to the last time I ran it; however, going back to the main page, the curves look just as before (almost as if the main page didn't update to the latest results, even after I Ctrl+R refreshed the page). Please see attached.

So: is it something I should be worried about? I haven't received any notification from Unraid that anything is wrong with SMART, but should I do something to confirm that my disks (or controller?) are ok? Thanks for your help.

EDIT: I just ran the controller benchmark, which showed erratic results, and as per the recommendation I restarted the Docker and ran it again: same erratic results (although I think the disks with low speeds were different this time). Attached is a screenshot.
  15. Odd. I'm backing up almost 3TB. Maybe you have more files than me, though (>2TB of my backup is FLAC audio from my CD rips). Still, with 128GB RAM, I guess it's no problem to have it higher than mine (only 16GB RAM).