
Marshalleq

Members · 968 posts
Everything posted by Marshalleq

  1. Seems the logs are wiped at every reboot. Surely not, but that's what they're telling me.
  2. Diagnostics attached. The crash happened somewhere between 10pm last night and 7.15 this morning. obi-wan-diagnostics-20190209-0711-2.zip
  3. I also removed the 2nd NIC a few days ago. However, I woke up this morning and it had crashed. Maybe it's running on bitcoin lol. I'm really struggling to get good logs on this one. It lasted a few days longer this time, but as I'd seen before, this seems to happen more quickly during high I/O, which hasn't been happening lately. Hmmm, I wonder if I have one of those Marvell controllers.
  4. I've done some googling on this, and haven't yet found a solution. Cleared cookies, restarted browser etc and no dice. This is what I get, I'm sure someone knows the answer and I've just missed it somewhere. Thanks.
  5. So h265ize is supposed to be a set-and-forget tool - however, once it's processed whatever is in the input folder, the docker container stops. Is this by design? I'm wanting it to keep tabs on a directory for incoming files and it won't work in its current form - I might have to use a VM instead, which would be a shame. Thanks.
  6. I think the maintainer of the docker CAN update the version of h265ize that is currently available inside the docker if that's what you're asking and I think you are....
  7. Sadly, two more. It always happens when I'm doing high IO. Maybe it's network. Guess I'm going to have to properly figure out how to read kernel exceptions.
  8. Yes, I formatted XFS first (confirmed that), but then it reformatted to Btrfs as soon as it was added. I recall last time I could dig into the settings and change it - but I didn't seem to find it this time - not as simple as I would have thought. And I AM on the RC. Not sure if it's still in the latest release candidate.
  9. Yeah, even in single device, reformatted as XFS, it changed it to Btrfs. Even with the pool previously removed. I did see something about Btrfs caches being a problem before, and one of those errors took me to an XFS issue which sounded similar. After stopping some large writes I was doing, it is now not crashing. Therefore I have removed the cache and restarted the writes. Will see what happens. If it goes away, I'll try to recreate it again I guess. I'd rather it was the cache than something hardware related.
  10. Call me suspicious, but have disabled the BTRFS cache. Will see how that goes. I thought there was a way to use XFS in the cache, however it doesn't seem to be available in RC2 - not that I saw anyway.
  11. Updated to 6.7.0-rc2 and it has already crashed again. That's probably going to make it easier to track down - I'm suspicious about disk io now.... this can't be specific to Ryzen or it would have always been happening. I have added a Dell Perc H310 card though and probably now I'm actually beginning to write to those disks. Hmmm
  12. So since it didn't work anyway, and it appears to be unnecessary with my BIOS settings, I've removed rcu_nocbs=0-15 from my kernel boot params. https://utcc.utoronto.ca/~cks/space/blog/linux/KernelRcuNocbsMeaning This didn't happen when I was running Proxmox - and actually I didn't even have the BIOS settings set on that.... Think I'll try the RC for a few days.
  13. Maybe I should run the new Unraid RC - I think I read it has some AMD stuff in it and definitely newer kernels...
  14. Thanks, I have mine on Auto too - and it's been running fine since my last post. However, yesterday and today I have again had lockups. Haven't started googling yet, in case it's something new - though I don't see why it would be - any ideas? These Ryzens are seemingly quite a hassle. To recap, I've set the power supply idle state, set the C-states, and added rcu_nocbs=0-15 (15 in my case) to the kernel. Getting tired of having to do parity checks, which then find errors.
  15. So I added the rcu_nocbs=0-15 parameter slightly wrong I think, which was causing the crash. Fixed now. Will see how that goes.
  16. It just happened again overnight - where I suspect it was near idle. I've added the rcu_nocbs=0-15 because I read this was still required and rebooted and now it crashes before it even boots up. So will have to remove that. But the question remains, what's crashing the system and how do I fix it. Any other options you can think of?
  17. Given QNAP IS Linux, it can definitely be done. It's probably beyond my abilities but someone 'should' be able to help you out. I doubt it's completely proprietary. Some googling finds some linux drivers here. Also, if you sign up on their site maybe? Apparently it's the same as the 4xxx drivers which are definitely available in linux. I'm very new to Unraid so don't yet know how you might add them. Kernel module maybe?
  18. Nice idea - software for this might be available via the raspberry pi forums - then you'd just need to get some kind of hardware card I assume. Unless it could be connected via internal usb?
  19. This thread has lost the intent of the original post. We are talking about adding a setting to change the resolution of an EXISTING feature, not a new feature. It’s basically a couple of text files plus potentially the relevant display driver, which should already have been included given the existing feature is a GUI and these low resolutions went out with VGA CRT screens about 15 years ago. I don’t think we need to start worrying about memory and dev time and such; their product owner will decide the value of it based on community interest and difficulty to implement. It’s certainly not going to result in going back to 2-yearly release cycles. Sent from my iPhone using Tapatalk
  20. I came here as I'd like to have this feature too. I assume this will be just like a desktop, you need a driver for your card and a way of setting it up. Lime could package this all up like everyone else does. Shouldn't be too hard.
  21. I don't care if it's ZFS or not. What people like though is the self-repairing file system. That's the point of this thread. We can choose where we put it (cache or array), but an option would be great. I don't know of a self-repairing option for any other file-system, but it seems to me, a company like lime could put pressure on to get it in the roadmap for one of the file systems, even if it is a 5-10 year plan. Maybe it already is, I haven't actually looked. ECC is just if you're a purist or not. You get incremental improvements with various additions and implications when you leave bits out. Up to the end-user to figure out if they're important. ZFS does have a huge hardware penalty though, it's why I moved to Unraid - FreeNAS / TrueNAS and Proxmox performance was absolutely abysmal. And all for the idea that your data is somehow randomly falling out of your drives while you sleep. Absolutely not true. But peace of mind does have a lot of value doesn't it.
  22. 32GB. I don't think I'll need any more than that currently. For NAS storage, a few dockers and a few VM's. The worst VM which is now a docker is the Crashplan one, which eats RAM. Think I'll turn that into a schedule rather than having it watch the file system live - should help. I have a Ryzen 1800x which is plenty of grunt. I did it this way because the QNAP equivalent was about 6k vs this at about 2-3k $NZD
  23. LOL, now there's a scary thought! Bring on devops!
  24. @trurl @ken-ji @bonienl Thanks for your replies - all your points are good ones. This was a very, very long thread and I read probably too much of it. What jumped out at me were some poor people panicking and either negatively misinterpreting the results or expecting the plugin to be a miracle cure. Sometimes I think we forget that not everyone (possibly even most on here) has IT as a first job (like at least two of us seem to), and some can take things a little too literally or not have the background to interpret the results. e.g. 'I read that I must preclear three times for every disk, it's not working and what do I do? Is my raid safe? Must I return my drive?! Panic!' someone said. That poor soul is more likely to lose data by not having enough Unraid space available than by worrying about pre-clear status. So my point is there seems to be an unnecessary bias or unbalanced viewpoint provided (at least through the parts of this thread that I read). There should be some kind of disclaimer that balances things out for some of these people - might help prevent a few panicked comments on a very long thread too! Point of note - when I did run the preclear plugin I got a small error, which doesn't actually matter. When I added it to the array, I got that same error during its (much faster) preclear anyway. And in both cases it happened within the first 10 seconds. Probably the case for most people - if they see anything, otherwise the vendor would include it in the product, right? And pre-clear seems quite buggy.
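For the rcu_nocbs posts above: on Unraid the kernel boot parameters live on the `append` line of /boot/syslinux/syslinux.cfg (also editable via the Flash page in the GUI). A sketch of what the stanza looks like with the parameter added; the label and initrd name here are from a stock install and may differ on yours, and 0-15 assumes a 16-thread CPU such as the 1800X:

```
label Unraid OS
  menu default
  kernel /bzimage
  append rcu_nocbs=0-15 initrd=/bzroot
```

Note the parameter is `rcu_nocbs` (RCU no-callback CPUs), not `rcp_nocbs`; the kernel silently ignores a misspelled parameter, so a failure to boot is more likely caused by a mangled `append` line (e.g. a broken initrd reference) than by the parameter itself.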
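On the logs-wiped-at-reboot question above: Unraid keeps /var/log on a RAM-backed tmpfs, so the syslog really does vanish at every reboot (and with it the evidence of an overnight crash). A minimal sketch of one workaround, periodically mirroring the syslog to persistent storage; the paths are assumptions, and on Unraid you would point DEST_DIR at the flash drive, e.g. /boot/logs:

```shell
#!/bin/sh
# Sketch: copy the in-RAM syslog to a persistent location so crash evidence
# survives a reboot. SRC and DEST_DIR are placeholders for this example.
SRC="${SRC:-/var/log/syslog}"
DEST_DIR="${DEST_DIR:-./persistent-logs}"

mkdir -p "$DEST_DIR"
if [ -f "$SRC" ]; then
    # Timestamped copy so successive snapshots are not overwritten.
    cp "$SRC" "$DEST_DIR/syslog-$(date +%Y%m%d-%H%M%S).txt"
    echo "mirrored $SRC into $DEST_DIR"
else
    echo "no syslog found at $SRC"
fi
```

You could run this from cron every few minutes or call it from the flash drive's `go` script. Newer Unraid releases also offer a built-in syslog server / mirror-to-flash option under Settings, which, where available, is the cleaner fix.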
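On the h265ize container stopping once the input folder has been processed: if the container really is one-shot by design, a wrapper that polls the watch folder and hands each new file to the encoder exactly once is one way to get set-and-forget behaviour. A hedged sketch, not the container's actual code; WATCH_DIR, SEEN_FILE, and the echo standing in for the real h265ize invocation are all placeholders:

```shell
#!/bin/sh
# Sketch: act once on each new file that appears in a watch folder.
WATCH_DIR="${WATCH_DIR:-./incoming}"
SEEN_FILE="${SEEN_FILE:-./.seen-files}"   # records files already handled
mkdir -p "$WATCH_DIR"
touch "$SEEN_FILE"

scan_once() {
    for f in "$WATCH_DIR"/*; do
        [ -e "$f" ] || continue            # glob matched nothing
        if ! grep -qxF "$f" "$SEEN_FILE"; then
            echo "new file: $f"            # here you would invoke h265ize on "$f"
            echo "$f" >> "$SEEN_FILE"
        fi
    done
}

scan_once
# As a container entrypoint you would loop instead:
#   while true; do scan_once; sleep 60; done
```

Polling is deliberately dumb but portable; inside a minimal container it avoids depending on inotify tooling being installed.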