
Marshalleq

Members
  • Content Count: 639
  • Joined
  • Last visited

Community Reputation
  66 Good

About Marshalleq

  • Rank: Advanced Member
  • Birthday: October 17
  • Gender: Male
  • URL: https://www.tech-knowhow.com
  • Location: New Zealand
  • Personal Text: TT

Recent Profile Visitors: 1206 profile views
  1. Lots of questions in this thread, but not many answers. I'd answer a few, but I don't know the answers either.
  2. Trying a different approach - this is what I now get on the beta. Which of these would you choose for CPU and motherboard temperature? It all went west since the Unraid beta, and I'm not really sure what to choose anymore. I have a Threadripper 1950X, and I can assure you it can't be operating at 89 degrees.
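     For anyone else picking through the same list, here's a minimal sketch (mine, not from the thread; it assumes a Linux host exposing the standard sysfs hwmon entries) that prints every chip and temperature label the sensor picker could be reading. Worth knowing: on the 1950X the k10temp "Tctl" reading carries a +27°C offset over the true die temperature ("Tdie"), which is one way a perfectly healthy CPU can appear to sit at ~89 degrees.

        #!/usr/bin/env python3
        # List every hwmon temperature sensor so you can pick the right
        # one for CPU/MB. Assumes standard Linux sysfs hwmon layout.
        from pathlib import Path

        for hwmon in sorted(Path("/sys/class/hwmon").glob("hwmon*")):
            chip = (hwmon / "name").read_text().strip()
            for temp in sorted(hwmon.glob("temp*_input")):
                label_file = temp.with_name(temp.name.replace("_input", "_label"))
                label = label_file.read_text().strip() if label_file.exists() else temp.name
                millic = int(temp.read_text().strip())
                print(f"{chip:12s} {label:12s} {millic / 1000:.1f} C")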
  3. Hey, I realise some might see this as unhelpful, but I'm honestly trying to be the opposite. From my observation, this is just btrfs - I don't know why, but stuff happens on this filesystem. It might have been a new version in the upgrade or something. I've used a stack of filesystems over 30 years or so and all of them have been great except btrfs (plus one ReiserFS issue maybe 10-15 years ago, and that's it). I've also had an unrecoverable btrfs cache array on a previous version of Unraid.

     My solution after several btrfs cache-drive failures was to run a single XFS drive; anything that needs redundancy as soon as it's written bypasses the cache. (I am hoping an upcoming version gives us an alternative option for mirroring the cache drive.)

     I do apologise if this is considered a hijack, but I did want to help in the sense that while this may be considered rare, it is certainly not a one-off case, and to lend some 'moral' support from that perspective! I used to run btrfs on my array also, but ultimately changed it back - the big benefits of btrfs are mostly lost in the Unraid implementation. And yes, I realise there are plenty of people without issues; I'm not trying to turn this into a btrfs-versus-something-else discussion. Marshalleq
  4. I just noticed something a little unexpected while trying to track down which drive says it's overheating. I have notifications set up over Telegram. I receive an overheat message for one drive, with the serial number listed, but the Unraid screen lists it as /dev/sdi while the Telegram message lists it as /dev/sdj. See attached screenshots and debug. To my reading the serial numbers match, and even the disk - just not the device. Hopefully it's just the notification system that's wrong, as the alternative would be quite worrying. obi-wan-diagnostics-20200910-0907.zip
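     For anyone else chasing the same mismatch: device letters like sdi/sdj are assigned at detection time and can differ between views or boots, so the serial number is the identifier to trust. A minimal sketch (assuming a Linux host with the usual /dev/disk/by-id symlinks) that maps each serial-bearing ID back to its current /dev/sdX node:

        #!/usr/bin/env python3
        # Resolve which /dev/sdX node currently belongs to each drive ID.
        # The by-id names embed the model and serial number.
        from pathlib import Path

        for link in sorted(Path("/dev/disk/by-id").iterdir()):
            # skip partition links; keep whole-disk ATA/SCSI IDs
            if link.name.startswith(("ata-", "scsi-")) and "-part" not in link.name:
                print(f"{link.name:60s} -> {link.resolve()}")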
  5. Solved. Sort of. What do you know - you can make a USB flash drive into a dummy disk on the array. That makes Docker available, and then I can chuck it onto my ZFS drive. Mint.
  6. I could add a USB array. I do have a 5-bay USB QNAP box at a friend's house I could use, though it'd start to waste a lot of disks on parity. It will be awesome if they get ZFS into Unraid eventually - the problem would hopefully be solved, as long as it's not tethered to being in Unraid's array only or something.
  7. I think "never had a problem" might be pushing it a bit - there are tons of problems, but there are also problems with stable, lol. I'd just like to add that it's an easy rollback, and despite the disclaimer it's pretty safe for everything that matters - e.g. your data.
  8. Hi, so Unraid has been my experiment in trying to run everything on one box. It's worked very well really, but of course the one area that was always going to be a problem was me - I like to mess with stuff, and that becomes a problem with downtime. So I'm thinking I'll move back to having a separate machine for the 24x7 stuff. It's fairly basic: Wordpress sites, Mattermost chat, MariaDB / PostgreSQL, the Let's Encrypt docker (now called SWAG of course), Nextcloud, Taiga and Lancache. I think that's it really. Oh, and my mail server.

     So I figure I can get an extra basic Unraid licence and throw my little Dell SFF at it - it'll take two SSDs inside, and with Unraid it can boot off the USB. Problem number one, though, is the filesystem. To run the SSDs in a mirror I must use btrfs. I want to use ZFS, but since the box only has two SATA connections I'm stuck, because Unraid will only allow dockers to run if there is a started non-ZFS array. And I have had a lot of trouble with btrfs, so I'm not very happy about putting critical data on it. Nevertheless it appears to be my only option. Can anyone think of another? Is there anything I can do with the new beta that allows me to run Docker without the Unraid array running?

     I could just roll Ubuntu, but I'd have to make it boot from USB and it's more effort than I'd like. I thought about Proxmox / FreeNAS, but neither boots from USB, and my SSDs are two tiny 150 GB enterprise Intels - perfect for what I need, but probably not big enough if I install the OS on there as well. Any thoughts? Many thanks, Marshalleq
  9. I also discovered my disk spin-down timeout had been reset to none in the disk settings, applied to all disks. So while CrashPlan was activating the disks (as it should), the real issue was that they weren't set to spin down. I never looked there because I never go into those disk settings and had forgotten the setting even existed. I'd suggest double-checking that setting in case it really is being reset for some people by the upgrade. It seems unlikely, but who knows.
  10. Just to add for others: the disks that weren't needed seemingly were not spun up until needed, so that's good. However, they also seemingly didn't spin down. I have since noticed the default spin-down delay has been reset to never. I assume that's the cause. Still testing.
  11. @DarkMan83 Mine seems to have been the CrashPlan docker container.
  12. OK, I may have just solved this for myself by looking at the logs included. For some reason the CrashPlan backup container was at 100% CPU. Stopping this docker seems to have stopped the drives spinning up. I'm nearly positive this did not happen under previous Unraid versions, but I'm now doubting myself. I will post back here if they spin up again by end of day; otherwise this can be closed. Many thanks. Marshalleq.
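     In case it helps anyone confirm a similar culprit, here's a rough sketch (my own, assuming the standard /proc/diskstats format; the 60-second window is arbitrary) that samples per-disk read/write counters twice and reports which whole disks saw I/O in between. A container like CrashPlan touching the array shows up as deltas on disks that should be idle:

        #!/usr/bin/env python3
        # Report which whole disks (sdX) saw reads or writes during a
        # sample window, using /proc/diskstats counters.
        import time

        def snapshot():
            stats = {}
            with open("/proc/diskstats") as f:
                for line in f:
                    fields = line.split()
                    name = fields[2]
                    # whole disks only (sda), not partitions (sda1)
                    if name.startswith("sd") and not name[-1].isdigit():
                        # fields 3 and 7: reads and writes completed
                        stats[name] = (int(fields[3]), int(fields[7]))
            return stats

        before = snapshot()
        time.sleep(60)  # sample window
        after = snapshot()
        for disk, (r0, w0) in sorted(before.items()):
            r1, w1 = after.get(disk, (r0, w0))
            if (r1 - r0) or (w1 - w0):
                print(f"/dev/{disk}: {r1 - r0} reads, {w1 - w0} writes in 60s")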
  13. I'm aware there's a view that this is not happening and that it could be a plugin. Unfortunately I can't easily boot into safe mode, so I've taken a different approach.

     First, I resolved all the GSO errors by changing a Linux VM to machine type 5.0; this makes the logs readable without hunting through rubbish. Second, I spun down all the drives, visually confirmed they had spun down and stayed down, and refreshed the Main page to make sure they hadn't spun up in the background. Some activity came in and spun up a single drive - OK, all good so far. I refreshed the Main page again to confirm the rest hadn't spun up in the background, and the logs clearly show when the drives spun down. Still good.

     Within 5 minutes, all drives had spun up again. I checked the system log: there is no record of anything spinning up the drives. Perhaps it's in another log. I downloaded the diagnostics within 1 minute of the drives spinning up, so there is not much to delve through. Maybe this will help the few of us who still have this problem work out whether it's specific to our setups. obi-wan-diagnostics-20200830-1425.zip
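     To timestamp the mystery spin-ups even when the syslog shows nothing, something like the sketch below could run in the background (assuming hdparm is available, as it is on Unraid; the device list is a placeholder you'd adjust to your array). hdparm -C reports active/idle versus standby without itself waking a sleeping drive:

        #!/usr/bin/env python3
        # Log each drive's power state once a minute so a surprise
        # spin-up gets a timestamp even if nothing hits the syslog.
        import subprocess
        import time
        from datetime import datetime

        DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]  # placeholder: your drives

        while True:
            stamp = datetime.now().strftime("%H:%M:%S")
            for disk in DISKS:
                out = subprocess.run(["hdparm", "-C", disk],
                                     capture_output=True, text=True).stdout
                # last line looks like " drive state is:  standby"
                state = out.strip().splitlines()[-1].split(":")[-1].strip()
                print(f"{stamp} {disk} {state}")
            time.sleep(60)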
  14. Just wanted to add that the annoying GSO error actually happens when a Linux VM is running a Q35 machine type older than 5.0. I haven't noticed anyone else report that specifically for Linux here before.
  15. I can't really boot into safe mode without a lot of effort, since I run a ZFS plugin with all my dockers and VMs on it. However, I do still have all my disks spun up, caused by something. I could compile a custom kernel with ZFS in it, but then people would probably point at that. The only other option would be to format / move my ZFS volumes; probably easier to let someone else do it in this case. @DarkMan83 want to compare plugins or something to help rule them out?
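     If comparing plugins sounds useful, here's a tiny sketch (assuming installed plugins live as .plg files under /boot/config/plugins, which is where Unraid keeps them) that prints the installed set. Run it on both servers and diff the output to spot what one setup has that the other doesn't:

        #!/usr/bin/env python3
        # Print the installed plugin files for easy diffing between servers.
        from pathlib import Path

        for plg in sorted(Path("/boot/config/plugins").glob("*.plg")):
            print(plg.name)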