Everything posted by Marshalleq

  1. So I'm quite frustrated with this new beta; they're usually a lot more stable by now (the Unraid one at least, and maybe the ZFS RC - though I'm not sure). I'm still getting this issue and other randomness. As part of that testing, I'd like to downgrade to Unraid stable and keep the latest ZFS. The instructions for that are mentioned above, except they didn't seem to work for me. I think what's happening is that when you reboot, it does its auto-update or whatever and puts the beta version back? My process was to downgrade the kernel, remove the older ZFS versions, copy the above files from Dropbox to where the old ones were, then reboot. Thoughts?
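     Edit: a rough sketch of the process I followed, in case it helps anyone spot the mistake. Paths and filenames here are illustrative assumptions, not the exact Dropbox files mentioned above:

       # Unraid boots from the bz* files on the flash drive - replace them
       # with the ones from the stable release:
       cp /path/to/stable/bzimage /path/to/stable/bzroot /path/to/stable/bzmodules /path/to/stable/bzfirmware /boot/

       # Remove the ZFS packages built for the beta kernel so the plugin
       # pulls the one matching the stable kernel on next boot (this
       # plugin path is an assumption):
       rm /boot/config/plugins/unRAID6-ZFS/packages/zfs-*.tgz

       reboot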
  2. I've just discovered that I can't spin up a new Ubuntu VM - not only on my Threadripper system, but also on my Xeon system. Anyone else seeing this? I'm finding the beta quite buggy: hosts needing to be rebooted, GPUs not passing through properly and VM editing causing issues. Docker seems OK. I thought it was an AMD thing before. skywalker-diagnostics-20201014-1438.zip
  3. There is a rule of thumb for this - ZFS needs roughly 5GB of RAM for every 1TB of deduped data, because the dedup table has to stay in memory to perform (I'm not sure of the exact figure, but you should be able to find it without too much searching). That's why it usually isn't worth it: you're trading disk space for RAM.
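     Edit: you can estimate the cost before enabling it. A hedged sketch - 'tank' is a placeholder pool name:

       # Simulate dedup on existing data and print a dedup-table histogram:
       zdb -S tank
       # Multiply the total number of allocated blocks in the output by
       # roughly 320 bytes per dedup-table entry to approximate the RAM
       # needed to keep the table resident.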
  4. I've just been bitten again by the GPU bug whereby I have to shut down (not restart) the whole host to get it to work. Logs have been submitted before. Basically the VM does start and run perfectly, but the screen stays in text mode; shutting the host down seems to get it back into gear. This is on the latest beta 30. There is of course a possibility of faulty hardware turning up at the same time as the beta upgrade - that's a hard one to test. Scratch that - the shutdown doesn't work either. On downgrading - FYI I can't downgrade to stable; even though it's an option, it just reverts back to beta 25 every time. OK, downgrading back to beta 25 got it working. The fix might in fact be down to the VM machine version being downgraded with it. Haven't checked.
  5. Yes, that sounds about right. I don't think you will have issues like that, though I'm actually not sure what the default mode is for file permissions in Unraid now that I think about it. There is a plugin that fixes permissions automatically which I forgot to mention - sorry, I'm rushing out to work again - it might be part of the Fix Common Problems plugin. Just don't run it on your dockers unless you're sure; those can sometimes be a little different, I'm told, though I think that's perfectly resolvable. Not sure why one is working and not the other when everything seems the same. Make sure they've both got user accounts created through the GUI, make sure they're both in the write list, and that you are connecting via SMB. Sorry, gotta run - a rough manual equivalent is below.
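     Edit: roughly what the permissions tool does, done by hand. A hedged sketch - 'myshare' is a placeholder, and Unraid's own Tools > New Permissions is the safer route:

       # Unraid expects share data to be owned by nobody:users:
       chown -R nobody:users /mnt/user/myshare
       # Give owner/group read-write; +X only adds execute where it
       # already makes sense (directories):
       chmod -R u+rwX,g+rwX,o+rX /mnt/user/myshare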
  6. Can I just add: VNC on Safari (from Unraid specifically) never worked for me until macOS Big Sur (currently in beta). I used to have to open a separate Firefox browser, copy the link over from Safari, and it would work. I tested it on multiple computers and installs and could never get it to work before. Big Sur is still in beta and its browser does seem to have a few minor issues here and there, but clearly there's a new engine or something behind it - I've never seen a new version of macOS change the browser quite so much before. But VNC works on Unraid now, so I'm pretty excited about that. Hope it helps someone.
  7. Make sure the folder has permissions of nobody.users. Then set the share up in SMB Extras like you have done, with the user permissions you want - at the moment you've left out who has access to the share. Sorry, I would post you an example, but I'm about to leave for my first day on a new job - a rough one added below.
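     Edit: back now. A hedged example of an SMB Extras share definition - the share name, path and user are placeholders, adjust to your setup (Settings > SMB > SMB Extras):

       [myshare]
         path = /mnt/user/myshare
         browseable = yes
         guest ok = no
         valid users = alice
         write list = alice
         create mask = 0664
         directory mask = 0775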
  8. This is where the target market of the two products you mention is slightly different. Unraid doesn't do ACLs; instead it uses share-based security. Granted, you could probably jerry-rig something, but it's going to be on you. By default everything is set to nobody.users for this reason, because the home user is the primary target market. Sent from my iPhone using Tapatalk
  9. I've had this kind of thing (where stuff stops working in a VM etc) for a good year or so, and it's worse in the current beta if that's what you're using. I find that rebooting the host usually fixes it - you may have to do a complete power off though, to reset the hardware. Failing that, try deleting the VM template (not the disks) and recreating it, as sketched below. For some reason I still need to do this on a regular basis, though Limetech said that's not normal - worth a try for you.
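     Edit: a hedged sketch of recreating the template without touching the disks - 'Windows10' is a placeholder VM name:

       # Keep a copy of the current definition so nothing is lost:
       virsh dumpxml Windows10 > /boot/Windows10-backup.xml
       # Remove only the domain definition (and its UEFI vars), not the vdisks:
       virsh undefine Windows10 --nvram
       # Then recreate the VM in the GUI pointing at the existing vdisk,
       # or restore the old definition: virsh define /boot/Windows10-backup.xml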
  10. I did some testing on one of my own problems last night, but it turns out it exists in beta 25 also. The problem, which I accidentally discovered via an unbootable install ISO, can be replicated on my machine over and over: if you force close a VM at the first install screen (or at the screen presenting the 'failed to boot' grub text, e.g. using VNC), you get something like the attached screenshot. This seems to result in two issues I've noticed: 1 - I can no longer access the Virtual Machines tab or its contents; 2 - I can't delete files from the virtual machines folder on the SSD I'm using, requiring a reboot of the host. I've run a full memory test overnight, including with SMP enabled, to rule out any memory-related issues, and it all came up clean. I'm posting this here in case it's some other combination of hardware I have. To that end, please note I'm on a Threadripper 1950x which has never given me a single issue on Unraid, and I am storing this VM on an SSD formatted with ZFS - if that becomes an issue I can store it on an official Unraid file system. I have performed the same test on my Xeon system and cannot get this to occur, so my suspicion is it's AMD-related: both systems run the beta and ZFS, but on the AMD system it appears to be 100% repeatable, and it seems to slow down the system too. I suspect if I force close the VM at any other point this will also happen. Logs attached. obi-wan-diagnostics-20201002-0852beta25.zip obi-wan-diagnostics-20201001-1724beta29.zip
  11. Hey @steini84, I just noticed that I couldn't delete something from a ZFS drive, and after multiple attempts decided to look at dmesg, which looked rather alarming. I don't know if this is the latest ZFS or the latest beta 29 kernel, but I'm posting in here first as we're not likely to get much in the way of ZFS support from the Unraid guys without a strong case and testing. I have just recently tested my memory - that would be my first thought, and it could still be it of course - but because of that I'm thinking it's the new kernel or the ZFS update that came to match it. I can repeatedly get these results by starting a RancherOS VM with the standard linux64 template. Logs attached; I'm hoping I'm not the only one, TBH. obi-wan-diagnostics-20201001-1724.zip
  12. Nice, thanks for the info. It's a bit of a catch-22: nobody wants to buy the hardware unless they know it's supported and will work, as it's still quite expensive. I've just purchased a second machine (which may or may not run Unraid), a ThinkStation P700 dual Xeon. It happens to have Thunderbolt built in, but being a Lenovo, it's some kind of add-on cable you have to buy. And it's Thunderbolt 1 - even that's better than USB though.
  13. It's not just speed that makes TB3 useful. I'd really like to be able to use a cheap SFF computer and attach a portable disk array, such as one of these, next to it. There are not many elegant case designs around with proper hot-swap capability and drive status indicators at the front - most seem to top out at about 4-6 bays, have plastic drive bays, no status lights and poor cooling, and on top of that are quite expensive. This would be a great addition for Unraid. I also like the idea of being able to run one cable from my Unraid box in the cupboard to my screen, keyboard, mouse and headphones. On top of that, it's a cheap 10G network card for my Mac - except it's 40G, not 10! Thunderbolt gives the advantage of not having to use the long-standing poor cousin of transports (USB), so you can support ALL the protocols required by the storage - anyone ever tried to get TRIM support running over USB? Also, did you see the transfer speeds you can get in that link above? Pretty awesome. When USB4 does come along, I suspect the Thunderbolt driver will still need to exist separately anyway. Now that Thunderbolt licensing is open and it's based on a standard port, I very much believe TB will become the standard in a few years.
  14. Oh boy. I wonder if it's some new feature in a newer version of BTRFS? Running btrfs --version for me gives btrfs-progs v5.6.1 - you might like to compare that with what you've got, and see if there's something in the changelog matching parts of that error. Ultimately I've decided that a single-drive XFS cache is more reliable than a btrfs mirror, so I've stuck with that until something else comes around. That kind of thinking suggests one workaround for you: stop your array, copy the data from the current good version to some other storage, upgrade Unraid and format a new cache of your choosing (e.g. XFS, or btrfs so that it's a known good starting point), then copy your data back and start the array - or something along those lines. You get my drift, right? Obviously kick off a mover run first. It also pays to check the hardware isn't full of SMART errors or something. You could also make a copy now and run some btrfs repair tools before upgrading - a hedged starting point is below - though someone else will probably have better advice on that. Good luck!
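     Edit: some read-only checks to try before anything destructive. A hedged sketch - /dev/sdX1 and the mount point are placeholders for your cache device:

       # Offline check, with the pool unmounted (array/cache stopped):
       btrfs check --readonly /dev/sdX1
       # Online scrub against checksums; -B waits in the foreground:
       btrfs scrub start -B /mnt/cache
       # And rule out the disk itself:
       smartctl -a /dev/sdX | grep -i -E 'reallocated|pending|uncorrect'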
  15. So, starting with memory: there don't appear to be any issues there, though admittedly I've not let the test run overnight, which technically I probably should. The one thing I hadn't done, and should have, was a full power off of the machine to reset the hardware - which I've now done, and now the GPU works. I don't know why this is happening; it could be a symptom of my hardware, or it could be triggered by something new in Unraid. The logs are there if anyone wants to look at them. I'll log another ticket if it comes back (or try reopening this one). Until then I guess we can close this.
  16. Just to update this: I tried a whole new VM this time and got the same result - VM starts, but black screen, with no errors in the VM log. I'm seeing some unusual errors in the system log around PCIe, so am going to try a memory test as a first step. Hopefully not another stick gone; this is expensive memory!
  17. I honestly don't know what's happening here, except that it's the second time this has happened. I tried quite a lot last time; this time I just tried re-creating the VM without recreating the disks, but that didn't work. The symptom is that, twice now, a perfectly running Windows 10 gaming VM stops displaying on the plugged-in monitor via the passed-through GPU after some time (e.g. a month or two). I do suspect it's still running in the background, but if I change the GPU to VNC in the properties of the VM, there is no option to view the VM via VNC. Last time I just created the whole VM, including the disk, again to get it working. I don't really want to do that again, to be honest, so will try harder this time! Anyway, diagnostics attached in case they help with anything. obi-wan-diagnostics-20200921-1738.zip
  18. I get it to work if I create a new VM, but at some point (maybe a month later) it goes black. I don't know why and can't figure out how to fix it; creating a new VM from scratch works again. Going to log a separate ticket since it sounds different, but thought I'd add my 2c.
  19. Lots of questions in this thread, but not many answers. I'd answer a few, but I don't know the answers either.
  20. Trying a different approach - this is what I now get on the beta. Which would you choose for CPU and MB? It all went west with the Unraid beta and I'm not really sure what to choose anymore. I have a Threadripper 1950x, and I can assure you it can't be operating at 89 degrees.
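     Edit: a hedged guess at the 89-degree reading - on first-generation Threadripper the Tctl sensor reports with a +27°C offset, so 89 may really be around 62. Something like this shows what the k10temp driver is exposing:

       # Load the AMD Zen temperature driver and list its sensors:
       modprobe k10temp
       sensors | grep -A 3 k10temp
       # Tctl includes the +27C offset on a 1950x; Tdie (where shown) is
       # the actual die temperature - pick that one in the plugin if offered.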
  21. Hey, I realise some might see this as unhelpful, but I'm honestly trying to be the opposite. From my observation, this is just btrfs - I don't know why, but stuff happens on this filesystem. It might have been a new version in the upgrade or something. I've used a stack of file systems over 30 years or so, and all of them have been great except btrfs (plus one issue with reiserfs maybe 10-15 years ago, and that's it). I've had an unrecoverable btrfs on my cache array too, on a previous version of Unraid. My solution after several btrfs cache-drive failures was to run a single XFS drive; anything that needs redundancy as soon as it's written bypasses the cache. (I'm hoping an upcoming version gives us an alternative option for mirroring the cache drive.) I do apologise if this is considered a hijack, but I wanted to help in the sense that, while this may be considered rare, it is certainly not a one-off case - and to lend some 'moral' support from that perspective! I used to run btrfs on my array also, but ultimately changed it back; the big benefits of btrfs are mostly lost in the Unraid implementation. And yes, I realise there are plenty of people without issues - I'm not trying to turn this into a btrfs vs something else discussion. Marshalleq
  22. I just noticed something a little unexpected when trying to track down which drive says it's overheating. I have notifications set up over Telegram. I receive an overheat message for one drive with its serial number listed, but on the Unraid screen it shows as /dev/sdi while in the Telegram message it shows as /dev/sdj. See attached screenshots and debug. To my reading the serial numbers match, and even the disk - just not the device. Hopefully it's just the notification system that's wrong, as the alternative would be quite worrying. A quick way to double-check is below. obi-wan-diagnostics-20200910-0907.zip
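     Edit: sdX letters can change between boots, but the by-id names are stable, so this is a hedged way to confirm which device a serial currently maps to (/dev/sdi is a placeholder):

       # List stable by-id names with their current sdX targets:
       ls -l /dev/disk/by-id/ | grep -v part
       # Or ask the device directly for its serial:
       smartctl -i /dev/sdi | grep -i serial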
  23. Solved - sort of. What do you know, you can make a USB flash drive into a dummy disk on the array. That makes Docker available, and then I can chuck it onto my ZFS drive. Mint.
  24. I could add a USB array - I do have a 5-bay USB QNAP box at a friend's house I could use - but it'd start to waste a lot of disks on parity. It will be awesome if they get ZFS into Unraid eventually; the problem would hopefully be solved then, as long as it's not tethered to being in Unraid's array only or something.