Report Comments posted by Marshalleq

  1. Oh boy.  I wonder if it's some new feature in a newer version of BTRFS?  Running btrfs --version for me gives btrfs-progs v5.6.1; you might like to compare that with what you've got and see if anything in the changelog mentions parts of that error.

     

    Ultimately I've decided that a single-drive XFS cache is more reliable than a btrfs mirror, so I've stuck with that until something better comes along.  That kind of thinking would be one workaround for you: stop your array, copy the data from the current good version to some other storage, upgrade Unraid, and format a new cache of your choosing (e.g. XFS or btrfs) so you have a known-good starting point.  Then copy your data back and start the array, or something along those lines; you get my drift, right?  Obviously kick off a mover run first.
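
    For what it's worth, a rough sketch of the copy-off / copy-back step, assuming the cache is mounted at /mnt/cache and you have room on another disk (the paths here are just examples, adjust to your own layout, and stop dockers/VMs first so nothing writes to the cache while you copy):

    # Copy the cache contents off to an array disk:
    rsync -avh --progress /mnt/cache/ /mnt/disk1/cache-backup/
    # ...upgrade Unraid and reformat the cache (XFS or btrfs) from the webGUI...
    # Then copy everything back:
    rsync -avh --progress /mnt/disk1/cache-backup/ /mnt/cache/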

     

    Also, it pays to check that the hardware isn't full of SMART errors or anything like that.  You could also make a copy now and run some btrfs repair tools before upgrading; someone else will probably have better advice on that.
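
    If it helps, this is roughly how I'd check the hardware and the filesystem before touching anything (sdX is a placeholder for your cache device; these checks are read-only so they shouldn't change anything):

    # SMART health and the usual suspect attributes:
    smartctl -a /dev/sdX | grep -iE 'overall-health|reallocated|pending|uncorrect'
    # Read-only btrfs check against the unmounted device:
    btrfs check --readonly /dev/sdX1
    # Or, while the pool is mounted, run a scrub and watch for errors:
    btrfs scrub start -B /mnt/cache
    btrfs scrub status /mnt/cache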

     

    Good luck!

     

     

  2. So, starting with memory, there appear to be no issues there.  However, I clearly haven't let the test run overnight, which technically I probably should do.

     

    However, the one thing I didn't do, and should have done, was a full power-off of the machine to reset the hardware.  I've now done that, and the GPU works.

     

    I don't know why this is happening now; it could be a symptom of my hardware, or it could be triggered by something new in Unraid.

     

    The logs are there if anyone wants to look at them.  I'll log another ticket if it comes back (or try reopening this one).  Until then, I guess we can close this.

  3. Hey, I realise some might see this as unhelpful, but I'm honestly trying to be the opposite.  From my observation, this is just btrfs.  I don't know why, but stuff happens on this filesystem; it might have been a new version in the upgrade or something.  I've used a stack of filesystems and all of them have been great except btrfs.  I did also have an issue with reiserfs at some point, maybe 10-15 years ago, and that's it over 30 years or so.  I've had an unrecoverable btrfs cache pool too, on a previous version of Unraid.

     

    My solution, after several failures with a btrfs cache drive, was to run a single XFS drive.  Anything that needs redundancy as soon as it's written bypasses the cache.  (I'm hoping an upcoming version gives us an alternative option for mirroring the cache drive.)

     

    I do apologise if this is considered a hijack, but I did want to help by pointing out that while this may be considered rare, it is certainly not a one-off case, and to lend some 'moral' support from that perspective! :D

     

    I used to run btrfs on my array too, but ultimately changed it back.  The big benefits of btrfs are mostly lost in the Unraid implementation.

     

    And yes, I realise there are plenty of people without issues.  I'm not trying to turn this into a btrfs vs something else discussion.

     

    Marshalleq

  4. I actually also discovered my disk spin-down timeout had been reset to never in Disk Settings, applied to all disks.  So while CrashPlan was accessing the disks (as it should), the real issue was that they weren't set to spin down.  I never looked there because I never go into those disk settings and had forgotten there was even a setting for it.  I'd suggest double-checking that setting in case the upgrade really is resetting it for some people.  It seems unlikely, but who knows.

  5. Just to add for others: the disks that weren't needed were seemingly not spun up until needed, so that's good.  However, they also seemingly didn't spin down.  I have since noticed the default spin-down delay has been reset to never.  I assume that's the cause.  Still testing.

  6. OK, I may have just solved this for myself by looking at the logs I included.  For some reason the CrashPlan backup docker was at 100% CPU.  Stopping that docker seems to have stopped the drives spinning up.  I'm nearly positive this didn't happen under the previous Unraid release, but I'm now doubting myself. :)
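
    In case it's useful to anyone hunting the same thing, this is roughly how I spotted the CPU hog (standard Docker CLI, nothing Unraid-specific):

    # One-shot snapshot of per-container CPU and memory usage:
    docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"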

     

    I will post back here if they spin up again by end of day, otherwise this can be closed.

     

    Many thanks.

     

    Marshalleq.

  7. I can't really boot into safe mode without a lot of effort, since I run a ZFS plugin with all my dockers and VMs on it.  However, something is still keeping all my disks spun up.  I could compile a custom kernel with ZFS built in, but then people would probably point at that.  The only other option would be to format or move my ZFS volumes; probably easier to let someone else test this one.  @DarkMan83, want to compare plugins or something to help rule them out?
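
    For anyone comparing notes, this is how I'm checking which drives are actually awake (hdparm -C queries the power state without spinning the drive up; adjust the device glob to your own system):

    # Print the power state (active/idle vs standby) of every SATA drive:
    for d in /dev/sd[a-z]; do
        printf '%s: ' "$d"
        hdparm -C "$d" | grep 'drive state'
    done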

     

  8. Hmmm, I thought the upgraded kernel took care of that now (whereas yes, I agree that previously you needed a plugin for the driver).  My assumption was that the current plugin now reads whatever the current kernel is sending.  And to that end, I do note that with the same plugin on the two different kernel versions, this kernel gives a lot more options to choose from, which does seem to indicate I'm on the right track there.

     

    So: plugin to display, kernel to send the temps, right?

     

    If so, I still say the kernel is sending the wrong temps, or the plugin needs to be updated for AMD's crazy +27-degree temperature offset, or whatever it is they do.
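
    For reference, this is how I'm comparing what the kernel itself reports against what the plugin shows.  Note the k10temp driver exposes Tctl, which on some of these Threadripper parts reportedly sits a fixed offset (that +27 degrees) above the real die temperature, Tdie:

    # Make sure the AMD temperature driver is loaded, then read it directly:
    modprobe k10temp 2>/dev/null
    sensors | grep -iE 'tctl|tdie'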

    • So I'm just registering that my disks are no longer spinning down.  I can spin them down manually, but at some point they spin up again and don't go back to sleep.  I've attached logs.
    • Also, I received an out-of-memory error (below) on my 96 GB memory system, which I assume relates to the Windows / syslog issue, but I don't know; I haven't checked yet.  It will also be in the logs, and it's not likely to be faulty RAM anyway.
    • My diagnostics seem to be arriving as a plain folder and I have to compress them manually.  Perhaps my zip tool is automatically unzipping them, but I thought I'd mention it in case anyone else sees the same.
    • Finally, the idle temperature of my Threadripper 1950X on an Asus X399 Prime-A board has been reported incorrectly since the beta was introduced: it reads about 90 degrees C at idle, which is clearly not correct.

    [Screenshot attached: Screen Shot 2020-08-28 at 8.49.38 AM]

    obi-wan-diagnostics-20200828-0850.zip

  9. While I'm not going to worry about when it's released (because this is a common response across open-source software development), a pillar of agile is openness and sharing of progress, so that anyone can see what is being attempted, by when, and what is being aimed for.  It doesn't say when it will be done or whether it will be accepted, just that it will be attempted within a sprint (e.g. a 1-4 week timeframe).

     

    Sadly, most don't share this information.

     

    Of course, perhaps Limetech is one of the few not using agile, instead using something like waterfall, which would mean they would definitely have a deadline to share.

    Or they could be using nothing, which would be quite enjoyable and would explain why there is nothing to share.  My bet is on this last one, because there's not really any commitment to provide anything specific, and I think that's fine in this environment.  The team is probably distributed, and they probably have other responsibilities that make things complicated.

     

    I do wish, though, that we could see into a Scrum board or something at a read-only level to satisfy curiosity.  Or they could pick random customers to participate in each sprint to help, or something.  That'd be cool.  (Putting my hand up if anyone from Limetech reads this.)

     

    But this note is really just to say that I read this 'there's no official timeline' thing a lot.  And while that's typically true of a software development process, it doesn't mean there's no process or aim that can be shared.

     

    Hope that doesn't offend anyone, especially at Limetech; I just like to help educate on agile sometimes (Certified Agile Scrum Master), among other things. :)

  10. On 11/28/2019 at 3:35 PM, dalben said:

    I've noticed my disks no longer spin down with this RC.  Is anyone else experiencing this?  If so I'll start a separate thread, but if no one else has seen it then it's got to be my local problem.

    Interesting, mine have just started doing this - or I've just noticed it.

  11. On 8/24/2020 at 3:10 AM, Dava2k7 said:

    Hey all I’m in same position only way I can get the VM to boot is via vnc and using a command line. I get no signal through hdmi when trying to boot Vm driving me crazy!!!!!

    This happened to mine the other day: I was using it one night, shut it down, woke up in the morning, and got a black screen.  I ended up just recreating the whole VM and installing Windows from scratch.  I really shouldn't have had to do that, but I had tried so many things that I actually thought my GPU had failed.

  12. Just to add some context: my Windows 10 VM DOES work on this beta.  I created a new template (by deleting the VM without deleting the disks), then created a new one pointed back at the same disks, etc.

     

    So that might be why it works, though personally I've always had to do this delete-the-VM-template dance in Unraid since at least 3-4 versions ago.  At least with Windows.

     

    Networking isn't great, though.  I even downloaded the latest VirtIO drivers, but no difference.  I've just passed through a physical NIC for now, as connections were dropping.

     

    Anyway, hopefully that works for you as an alternative option.

  13. Just chipping in: after installing the latest beta, my logs are also filling up (they've actually filled the tmpfs log drive to 100%), as per the below:

     

    Aug  6 16:11:44 OBI-WAN kernel: tun: unexpected GSO type: 0x0, gso_size 1402, hdr_len 1468

    Aug  6 16:11:44 OBI-WAN kernel: tun: unexpected GSO type: 0x0, gso_size 1402, hdr_len 1468

    Aug  6 16:11:44 OBI-WAN kernel: tun: 24 49 92 24 96 b5 95 ad aa aa aa aa b3 b3 49 92  $I.$..........I.

    Aug  6 16:11:44 OBI-WAN kernel: tun: 8f 5c 2c 5d 4e 99 30 e3 1d 15 ee 66 4a fc 79 09  .\,]N.0....fJ.y.

    Aug  6 16:11:44 OBI-WAN kernel: tun: e6 17 43 84 7a 39 48 1c e9 f4 c4 35 77 26 6e fa  ..C.z9H....5w&n.

    Aug  6 16:11:44 OBI-WAN kernel: tun: fe 53 61 0d 59 e5 6d 03 39 2b 47 51 0e f0 42 ab  .Sa.Y.m.9+GQ..B.

    Aug  6 16:11:44 OBI-WAN kernel: tun: 24 49 92 24 96 b5 95 ad aa aa aa aa b3 b3 49 92  $I.$..........I.

    Aug  6 16:11:44 OBI-WAN kernel: tun: 24 49 92 24 96 b5 95 ad aa aa aa aa a9 a9 49 92  $I.$..........I.

    Aug  6 16:11:44 OBI-WAN kernel: tun: 3a 1c 4f f1 93 59 f7 ec 24 5c 8a 63 f9 8d 34 a9  :.O..Y..$\.c..4.

    Aug  6 16:11:44 OBI-WAN kernel: tun: 24 49 92 24 00 00 00 00 aa aa aa aa a9 a9 49 92  $I.$..........I.

    Aug  6 16:11:44 OBI-WAN kernel: tun: unexpected GSO type: 0x0, gso_size 1402, hdr_len 1468

    Aug  6 16:11:44 OBI-WAN kernel: tun: 74 51 07 64 ba 4e 12 d9 33 53 1b ac c4 a3 af 38  tQ.d.N..3S.....8

    Aug  6 16:11:44 OBI-WAN kernel: tun: d1 a7 a6 a5 52 de 50 9b 9d 42 7d fc 2a 07 c8 c1  ....R.P..B}.*...

    Aug  6 16:11:44 OBI-WAN kernel: tun: unexpected GSO type: 0x0, gso_size 1402, hdr_len 1468

    Aug  6 16:11:44 OBI-WAN kernel: tun: 5a 65 d4 d1 f7 3a f0 9c 09 44 7e d6 2e b9 b4 df  Ze...:...D~.....

    Aug  6 16:11:44 OBI-WAN kernel: tun: 7a b8 67 bb 3c db 50 6a c0 24 12 5f 6e 8c 56 19  z.g.<.Pj.$._n.V.

    Aug  6 16:11:44 OBI-WAN kernel: tun: 94 63 43 2d 3d fb 29 af 83 32 95 21 f0 6f 87 16  .cC-=.)..2.!.o..

     

    Commonalities with the above: I also have a 10G Intel NIC, multiple internal networks, and some Windows VMs.  I'll go through the above steps (I haven't changed to Q35-5.0 yet); I just wanted to register a 'me too'.

     

    Edit: as it turns out, I did have Q35-5.0 already, due to having to recreate the template (something I constantly have to do with Unraid for some reason), and that defaulted to 5.0 and virtio-net.

     

    In this configuration, my logs are flooded with these messages.  It starts when I start a Windows 10 machine and doesn't stop after I've stopped the VM; I have to restart the whole server.  Perhaps I could restart the virtual machine manager instead; I haven't gotten that far yet.

     

    Edit 2: one more reboot and they're down to 500 or so an hour.  That's survivable compared to the roughly 1-2 per second before.  I'll continue trying to pin it down over the next few days.
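
    In case anyone else wants to measure the flood rate rather than eyeball it, something like this (assuming the syslog lands in /var/log/syslog as in my diagnostics) gives a per-hour count of the GSO messages:

    # Count 'unexpected GSO' messages per hour from the syslog timestamps:
    grep 'unexpected GSO' /var/log/syslog \
        | awk '{print $1, $2, substr($3, 1, 2) ":00"}' \
        | uniq -c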

     

    Logs now attached.

     

     

    obi-wan-diagnostics-20200807-1106.zip

  14. Regarding using Docker in a folder instead of an image, is this active in the GUI?  I tried just wiping out the existing /mnt/INTEL1TB/docker/dockerimage/docker.img and replacing the path with /mnt/INTEL1TB/docker/dockerimage/ and also /mnt/INTEL1TB/docker/dockerimage, but neither worked.  It still thinks it's an image, wants an image size to be set, etc.
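
    For what it's worth, this is how I've been checking whether Docker is still running off the loop-mounted image rather than a folder (standard commands, nothing beta-specific):

    # Where Docker thinks its data lives:
    docker info 2>/dev/null | grep -i 'docker root dir'
    # If docker.img is still in use, it shows up as a loop device:
    losetup -a | grep -i docker
    mount | grep -i docker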

     

    I also tried moving it to a /mnt/user path; that didn't work either.  I read that maybe you have to use a share, but I'm not sure how, since the GUI still wants me to put in an image size.

     

    I assume there's something obvious I'm missing?

     

    Everything else is working great. :)

     

    Thanks.

    [Screenshot attached: Screen Shot 2020-08-06 at 3.25.35 PM]

  15. 2 hours ago, JesterEE said:

    The #1 place I think ZFS still needs some more time the oven is, as @_rogue pointed out, vdev expansion.  All indicators point to that being a priority for the project devs, so maybe ZFS implementation for an Unraid 7.0 release target?  Soon™

    I wouldn't hold it up for that.  There are still a ton of use cases.  Cache drive mirrors are one, and the functionality that provides for backups, virtual machines and dockers is immense.  ZFS is also better at telling you when there is corrupted data, even in a single-drive implementation, and with two drives or more it will repair the data for you and let you know.  I'd love to be able to convert my docker.img file to ZFS and have an option other than btrfs for a mirrored cache drive.
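
    To illustrate what I mean by another mirror option: with the ZFS plugin today, a two-disk mirror for this kind of data is a one-liner (device and pool names here are placeholders; ashift=12 assumes 4K-sector drives):

    # Create a simple two-way mirror and check its health:
    zpool create -o ashift=12 cachepool mirror /dev/sdX /dev/sdY
    zpool status cachepool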

     

    2 hours ago, JesterEE said:

    One issue I see with incorporating ZFS as the "main Unraid array" is how it handles the parity in a ZFS RAIDZ1 implementation; it's just different from how Unraid does it today.  While a Unraid array stores parity information on the parity disk(s), a ZFS RAIDZ stores necessary parity throughout the array. 

    Well yes, but again, why hold it up because it doesn't fit into Unraid's main array?  I'm running a ZFS mirror for my critical data alongside a standard Unraid array and it's amazing.

    2 hours ago, JesterEE said:

    Also, the way ZFS caches reads and writes is different and can require a LOT of RAM for big arrays.

    This is no longer an issue; the whole 'a gigabyte of RAM per terabyte of disk' rule is a completely incorrect formula that seems to live on as legend.  It's been possible for a long time to run ZFS on very small amounts of memory.  The main thing that trips people up is the ZIL, which slows everything down and eats memory if you set it up wrong, and should be disabled for most use cases.  Which really is ZFS's main adoption problem: a high barrier to entry due to complex descriptions of what everything does.  I mean, they could have just called the ZIL a write cache and then explained why it's different and how it works compared to other caches.
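
    To put some numbers on 'very small amounts of memory': on ZFS on Linux the ARC defaults to roughly half of RAM but is just a tunable, and sync-write behaviour is set per dataset.  A minimal sketch (the pool and dataset names are examples only, and disabling sync drops the usual guarantees for synchronous writes, so only do it for data you can afford to lose):

    # Cap the ARC at 4 GiB instead of the default ~half of RAM:
    echo $((4 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
    # Relax synchronous writes on a scratch dataset only:
    zfs set sync=disabled pool/scratch
    zfs get sync pool/scratch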

     

    2 hours ago, JesterEE said:

    I'm obviously oversimplifying here, but that fact remains, the way it works is a fundamental shift from the current Unraid state.  Is this better ... or worse?  I think that's subjective.  However given the ZFS baked-in features such as snapshots, block checksums to protect from bitrot, and native copy on write ... I'll think I'd deal with the few downsides.

    Yeah true.  Each has primary advantages and a few disadvantages.

     

    Unraid's primary advantages are that it lets you use differing disk sizes and lets you power down inactive disks, because it doesn't write in stripes.  Its primary disadvantage is that it will only read from a single disk, which results in quite a lot of performance degradation compared to a standard RAID array.  But for the right use case it's extremely effective, e.g. media storage with a lot of streaming.

     

    ZFS's advantages are its self-healing, the ton of nice features built in for VMs, dockers and backups, and that it's relatively fast due to the way it reads and the differing RAID options you can create depending on your needs (like most RAID arrays).  Its disadvantages in this case are that it won't spin down individual drives, it doesn't really let you mix differing drive sizes, and adding disks to an existing vdev (as opposed to increasing disk size) can't easily be done.
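
    The self-healing part is easy to see in practice: a periodic scrub reads every block, verifies its checksum and, on a mirror or RAIDZ, rewrites any bad copy from a good one ('tank' below is just a placeholder pool name):

    zpool scrub tank
    zpool status -v tank   # shows scan progress and any repaired or unrecoverable errors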

     

    Whether Unraid allows a single ZFS disk in the main Unraid array is up to them, but I think the advantages for certain use cases in other areas are huge.

    This is why I have both: Unraid for storage of rarely accessed files, ZFS for critical data, VMs and dockers.

     

    Sorry for the long post, but I didn't want ZFS to be misunderstood in this thread!

  16. Seems like it's time to get better DVD software; I'm quite confident that is still possible.  Or dual-boot Windows, or run Linux, or just about anything.  Anyway, I'm not trying to argue with your decisions, so I can think of two solutions:

     

    1 - I assume you will be able to use NFS as an alternative to AFP if you don't wish to use SMB.

    2 - If you want to add drivers and things, this is the way to do it.

     

  17. 13 hours ago, tjb_altf4 said:

    Wording has always been a little misleading, it means there is no balance job running

    Thanks for pointing that out; now that you mention it, I've seen that!  Perhaps since we're in beta we can convince @limetech to consider naming it something slightly more specific, such as 'balance status', or somehow suppressing it when inactive.  I can see that might not be particularly easy, though.  I assume the 'no stats available' under scrub status is a similar issue.
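
    For anyone who hits the same confusion, you can see the same wording straight from the CLI; when nothing is running, btrfs reports it like this:

    btrfs balance status /mnt/cache
    # -> No balance found on '/mnt/cache'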
