tj80

Members
  • Posts: 37
Everything posted by tj80

  1. I now get this happening regularly on a Lenovo Thinkstation E31 (Xeon E3-1225v2), never happened in 8 years continuous running on my old HP Microserver N36L. Have been running unraid 24x7 since 2011 (originally version 4.7, currently 6.8.3 on a Basic license) and never had the problem until moving to the new hardware about 4 months ago. Same installation - I just moved the USB drive and all the disks across to the Thinkstation and the problem started and has been recurring since then. Stopping then restarting the array fixes it for a few days or weeks, then it comes back again.
  2. Hi, Apologies for the simple question, but can anyone tell me how to backup a docker and then roll back to that version in case of a bad docker update? Specifically I'm running HomeAssistant, which often introduces changes which break things in updates. These are always fixable, but take effort and time to resolve. I'd like to be able to update the docker, but have the option of rolling that update back to the previous working version in case the update breaks too many things! I can then choose to upgrade at a more convenient time when I can fix the problems. Many thanks, Tim
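One common way to get the rollback described above, sketched here with a hypothetical version number: pin the container to a specific image tag instead of :latest in the container's Repository setting, e.g.:

```
homeassistant/home-assistant:0.105.5
```

An update then becomes a deliberate bump of the tag, and a bad release can be undone by changing the tag back to the previous version. (Container data in appdata is untouched by this, though Home Assistant database schema migrations may not always roll back cleanly.)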
  3. Unfortunately I have a whole load of MQTT devices set up manually in configurations.yaml already and haven't used discovery previously - maybe it's the mix and match it doesn't like. Do you happen to have an example of a standard sensor which should be available if it's working? Then I could just set that up manually as a test. Thanks, Tim
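A minimal manually configured sensor of the kind asked about above might look like this in configuration.yaml; the name and topic are made up for the test, and publishing a value to the topic with mosquitto_pub should make the sensor update if the broker path works:

```yaml
sensor:
  - platform: mqtt                     # legacy-style manual MQTT sensor
    name: "MQTT Test Sensor"           # hypothetical name
    state_topic: "test/sensor/value"   # hypothetical topic; publish to it to test
```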
  4. Yep, HomeAssistant and Mosquitto are both running as Dockers on my unraid server. Does this look about right? 192.168.0.206 is the IP of Mosquitto and it's running on the standard port 1883: Then in my HA configuration.yaml I have:
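An MQTT broker entry of the standard form, using the address mentioned above, would look something like this (shown as an illustrative sketch, not the actual attached configuration):

```yaml
mqtt:
  broker: 192.168.0.206   # IP of the Mosquitto container
  port: 1883              # default MQTT port
```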
  5. Thanks. MQTT is definitely OK - running Mosquitto with about 20 Tasmota devices talking to HA quite happily. I've installed Glances as you suggested, that's working fine in HA now as well - but still nothing from UnRAID-API! It's not showing up on the Integrations page even with discovery enabled, so I wonder if I have misconfigured something and UnRAID-API can't actually talk to the MQTT broker for some reason? Any idea how to check that? I can just about configure devices but delving further into MQTT is a bit beyond me I'm afraid. Cheers, Tim
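A quick first check on whether anything can reach the broker at all is to test the TCP port from another machine on the LAN. A minimal Python sketch (it only confirms the port is open, not MQTT authentication or topic traffic):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Broker address from the post above; run from any machine on the LAN
print(port_open("192.168.0.206", 1883, timeout=1.0))
```

If this prints False from elsewhere on the network, the UnRAID-API container most likely cannot reach the broker either.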
  6. I was really excited to find this as I've been looking for a way to pull UnRAID data into HA for a long time! Many thanks, @ElectroBrainUK! However - I can't get anything to appear in HA! I have installed UnRAID-API as a Docker and if I go into the web UI it correctly shows a whole load of data about my UnRAID server. If I click on the "MQTT devices" option in the UnRAID-API web UI I get this in the log - not sure what any of it means though:

     > unraidapi@0.5.0 start /app
     > cross-env NUXT_HOST=0.0.0.0 NUXT_PORT=80 NODE_ENV=production node --max-old-space-size=4096 server/index.js
     READY Server listening on http://0.0.0.0:80
     (node:26) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, open 'config/mqttDisabledDevices.json'
         at Object.openSync (fs.js:447:3)
         at Proxy.readFileSync (fs.js:349:35)
         at IncomingMessage.<anonymous> (/app/api/mqttDevices.js:19:19)
         at IncomingMessage.emit (events.js:205:15)
         at IncomingMessage.EventEmitter.emit (domain.js:471:20)
         at endReadableNT (_stream_readable.js:1154:12)
         at processTicksAndRejections (internal/process/task_queues.js:84:9)
     (node:26) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
     (node:26) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
     (node:26) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, open 'config/mqttDisabledDevices.json'

     Any help would be very much appreciated! Thanks, Tim
  7. Apologies for the delay replying to this thread - I got hold of a 5450, then the whole coronavirus situation arrived and I haven't had a chance to try it until now! Just to say "thanks", the card works fine passed through to a Windows 10 VM with no hassle (not even any drivers to install) - just what I was after. For some reason UnRAID detects it as a Radeon Mobile 5430 rather than a Radeon 5450 - but whatever, still works! Thanks, Tim
  8. Hi, I'm running 6.8.0 on a Lenovo Thinkstation E31 (Xeon E3-1225 v2), and I moved this installation across from an old HP N36L Microserver. Everything worked fine - including the CPU and mobo temperature and fan speed sensors. However, in my wisdom I tried reconfiguring something and it's now lost them! Settings doesn't autodetect the sensors on the system (it just comes up with "coretemp" as the driver and no sensors are listed). Can anyone advise how to find out what drivers I should be using with this system, and how to get UnRAID to use them? Disk temperatures are showing fine via SMART, it's just the system fan and temperature sensors which are the problem. Many thanks, Tim
  9. Many thanks, all. Found a 5450 on eBay for peanuts so will report back when it turns up! Cheers, Tim
  10. Many thanks, Squid, that's great. I see many of those seem to have HDMI outputs too so that makes connectivity to the TV nice and easy. Will pick one up and give it a shot. The USB DisplayLink option which is (was) kinda working seems to be incredibly flaky - plugged a USB keyboard into the hub, selected to pass it through in unRAID and it just threw an error - now the VM won't even start! Many thanks, Tim
  11. Any suggestions? What I don't want to do is end up buying an endless series of obsolete GPUs which don't work if, for example, there's a modern key feature they must support in order to work. There are lots of guides on using high-end cards, but I can't find much on the lower end. Since my original post I've tried passing through the integrated graphics without success. I have managed to get it working (kind of) by passing through an old USB docking station which supports DisplayLink, then using VNC to configure the VM to display only on the "second" (DisplayLink) monitor. Unfortunately for some reason it's very sluggish - far slower than when used on a bare metal machine natively, with mouse movement lagging and video playback very choppy even in a window. Audio doesn't seem to work through the dock either for some reason, even though the VM sees the USB audio device and claims it's playing through it! The dock works very well on a native Windows 10 laptop, so it's not a hardware issue. So tantalizingly close to working... Many thanks, Tim
  12. Hi, I'd like to set up GPU passthrough for a Windows 10 VM but would like to do this as cheaply (free...?) as possible! The VM doesn't need a decent GPU as it will only be used in my garage to read workshop manuals (PDFs), look up things on a web browser or play music through Youtube Music. It will be connected to an old 32" TV and I need to connect through HDMI (DVI, HDMI, etc all fine but analogue VGA looks terrible at 1920x1080 over a long cable). My server has an Intel Xeon E3-1225 v2 (Ivy Bridge) with integrated Intel P4000 graphics - there's no other GPU in the system, just the onboard one. I have only one PCIe x16 slot (and 2 original PCI, if they're any use) but ideally I'd like to keep the PCIe slot free in case I add an HBA later. Is it possible to pass through the integrated GPU without adding a discrete GPU? I have read the thread about this which says Ivy Bridge isn't supported but it's quite old and I don't know if there have been any updates. If not, can anyone recommend the cheapest GPU which will work - ideally without occupying the PCIe slot, but not the end of the world if it does. Second hand is absolutely fine! I can pick up the cheapest card from eBay, CEX or similar here in the UK such as this: https://uk.webuy.com/product-detail?id=sgrae5pohaaa&categoryName=graphics-cards-pci-e&superCatName=computing&title=ati-radeon-x300-se-128mb It's rubbish but perfectly adequate for my purposes, with DVI out, low power consumption, no fan and only costs the same as a pint. However, it will use up the PCIe slot and I have no idea if it will work either with Windows 10 or as a passthrough with unRAID! I don't want to spend a whole lot of time fiddling around, so a cheap GPU known to work would be ideal. Many thanks in advance, Tim
  13. Resurrecting this topic just to follow up in case it helps anyone in the future. I tracked down the problem to the built-in MQTT broker used by HomeAssistant. I disabled this, and instead set up Mosquitto in another Docker container and the ever increasing CPU utilisation has stopped. Hope that helps someone! Tim
  14. Aha! Thanks, I will keep an eye out on that thread for a solution. Best regards, Tim
  15. Hi, I have what seems to be a recent problem cropping up in that the Dynamix CacheDirs plugin seems to suddenly start consuming 100% of one CPU core. I've been running this for years without problem and the only recent change I've made (apart from keeping the plugins updated) is to update the version of my HomeAssistant docker to address some gradually increasing CPU consumption. That may have been masking this CacheDirs issue I guess. Attached is my CPU chart for the past 2 days - you can see utilisation suddenly jump at 20.45 exactly yesterday and it has sat there ever since. I won't pretend to understand the differences between User, Nice and System utilisation, but it seems strange that System utilisation dropped at the same time as this large increase in Nice utilisation. HTOP is attached and appears to point to CacheDirs. All my data disks have spun down, so they're definitely not being accessed. If I reboot the server CPU consumption sits at about 15% for anything from a day to a week, then this will suddenly happen. Can anyone suggest what might be happening here? UnRAID 6.4.0, an old faithful HP MicroServer N36L (running 24x7 since 2011), 4 data disks and 1 SSD cache drive which also holds the dockers (HomeAssistant, DuckDNS, Nginx and TasmoAdmin). Thanks, Tim
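For anyone wanting to watch the User/Nice/System split mentioned above directly, the counters come from the first line of /proc/stat. A small Python sketch (Linux only; the values are cumulative jiffies since boot, and "nice" counts user-mode time spent by low-priority processes, such as a cache_dirs process running under nice):

```python
def cpu_times(path: str = "/proc/stat") -> dict:
    """Read the aggregate CPU time counters from the first line of /proc/stat."""
    with open(path) as f:
        fields = f.readline().split()
    # fields[0] is the literal "cpu"; the rest are cumulative jiffies
    names = ["user", "nice", "system", "idle", "iowait", "irq", "softirq"]
    return dict(zip(names, map(int, fields[1:8])))

print(cpu_times())
```

Sampling this twice a few seconds apart and differencing the values shows which bucket is actually growing.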
  16. Thanks! I'll let the CPU utilisation build up for a while and review. Cheers, Tim
  17. Hi, I have a strange problem with CPU utilisation gradually increasing over a period of many weeks to the point that I need to restart the server. I'm running 6.1.4 on an old HP Microserver with 4 Dockers running: - Duck DNS - HomeAssistant - TasmoAdmin - Nginx (doing nothing but hosting a single static HTML page which I just use to store shortcuts to various devices on my internal network) After a clean boot CPU utilisation averages about 7-8% but gradually this ramps up to >80% and I notice my HomeAssistant automations become slower and slower to react - things which normally happen almost instantly take several seconds. After a reboot CPU drops back down and HomeAssistant performance is back to normal again. It seems to take about 6 weeks to get to this point, but I can see CPU utilisation gradually increasing all the time. I rebooted last night and after settling down CPU utilisation was: At 11.30pm last night: System 5.5%, Nice 1.4%, User 1.8% At 8am this morning: System 5.5%, Nice 1.4%, User 3.6% That may not sound like a lot of increase but this is just over the course of ~8 hours (mostly idle) and happens continuously until the server runs out of puff! How can I see what is causing this and is there a solution? Many thanks, Tim
  18. Thanks, both - reassuring! Think I'll leave it be (yes I have automatic email alerts for failures set up). Interesting point about the PSU, to be honest I hadn't thought about that. It's an HP Microserver which has been running 24x7 for 7 years, so the PSU should be decent quality and it's not too heavily bogged down (4 x 3.5" disks, 1 x 2.5" disk and an SSD - no expansion cards, etc). Will definitely keep it in mind though. Cheers, Tim
  19. Hi, My UnRAID box (6.4.0 currently) has been running nicely for many years now, including saving me from one failed WD Green 2TB data drive. However, some of the drives (the parity disk in particular) are getting pretty old and I wonder what the consensus is on whether to pre-emptively replace drives or just wait until they fail? None of my drives have any SMART warnings or other issues I know of (no funny noises, high temperatures, etc), they auto spin down so haven't been running continuously, and see very light use in a domestic environment - but in terms of power on hours some are getting pretty old. Yes I have backups, but still don't want to lose anything! I'm not so concerned about losing one drive obviously, more that the intensive workload recalculating parity after a single failure might kill off a second drive at the worst possible time. Power-On Time: Parity - 2TB Hitachi Deskstar, 7y 9m Data 1 - 2TB WD Green, 6y 7m Data 2 - 2TB Seagate Barracuda, 3y 7m Data 3 - 250GB HP Enterprise SATA, 5y 8m Data 4 - 500GB Seagate Momentus (2.5"), 1y 8m Cache - 256GB Crucial M4 SSD, 2y 6m Most are well past the "infant mortality" timeframe so are clearly good solid drives - so should I just keep them running until they fail or think about replacing early? One option I thought about was to replace the old 2TB parity disk with a new one, then use the old drive to replace the 250GB Data 3 - but I don't actually need any more space yet! What would you do? Many thanks, Tim
  20. Magic, many thanks - that solved it for me too. Cheers, Tim
  21. Hi, I recently added a new data disk to my 6.4 installation, giving a total of 4 data disks and 1 cache drive (plus a single parity disk). I just noticed that only the newly added disk will show any SMART attributes - all the other disks just report "Can not read capabilities". They definitely all used to report fine, and I remember being able to see the SMART attributes for all 6 drives (including the new one) recently as I checked the power-on time for the new drive after I installed it. Can anyone suggest why the 5 old drives have stopped reporting but the new one is still OK? Many thanks, Tim
  22. Oddly, I just tried this again on the off-chance and it's working fine. Server has been running for 2 weeks and nothing has changed in that time. Very peculiar! Cheers, Tim
  23. Hi, Anything look unusual in the diagnostics dump? Many thanks, Tim
  24. Makes sense! Thanks, have disabled mover logging, rebooted and attached the diagnostics zip following an attempt to check for plugin updates. Many thanks, Tim nas-diagnostics-20171002-1112.zip
  25. That's what I thought! The tick box was definitely checked, but here's an example from the log of a filename and path which isn't sensitive, but I have others which include the names of customers I work for etc: Sep 27 03:40:02 NAS root: >f+++++++++ Media/Documents/Paperport/Work/Pay Slips/2017/September.pdf