Marshalleq
Everything posted by Marshalleq

  1. I think "never had a problem" might be pushing it a bit - there are tons of problems, but there are problems with stable too, lol. I'd just like to add that rollback is easy and, despite the disclaimer, it's pretty safe for everything that matters - e.g. your data.
  2. Hi, so Unraid has been my experiment in trying to run everything on one box. It's worked very well really, but of course the one area that was always going to be a problem was me - I like to mess with stuff, and that causes downtime. So I'm thinking I'll move back to having a separate machine for the 24x7 stuff. This is some basic stuff: WordPress sites, Mattermost chat, MariaDB / PostgreSQL, the Let's Encrypt docker (now called SWAG of course), Nextcloud, Taiga and Lancache. I think that's it really. Oh, and my mail server.
     So I figure I can just get an extra basic Unraid licence and throw my little Dell SFF at it - it'll take 2x SSDs inside, and with Unraid it can boot off the USB. Problem number one, though, is the file system. To run it as a mirror, I must use btrfs. I want to use ZFS, but since the box only has two SATA connections I'm stuck, because Unraid will only allow dockers to run if there is a started non-ZFS array. Also, I have had a lot of trouble with btrfs, so I'm not very happy about putting critical data on it. Nevertheless it appears to be my only option.
     Can anyone else think of another? Is there anything I can do with the new beta that allows me to run docker without the Unraid array running? I could just roll Ubuntu, but I'd have to make it boot from USB and it's more effort than I'd like. I thought about Proxmox / FreeNAS, but neither boots from USB, and my SSDs are two tiny 150G enterprise Intels - perfect for what I need, but probably not going to fit everything if I install the OS on there as well. Any thoughts? Many thanks, Marshalleq
  3. I actually also discovered my disk spin-down timeout had been reset to "never" in the disk settings and applied to all disks. So while CrashPlan was accessing the disks (as it should), it was actually that they weren't set to spin down. I never looked there because I never go into those disk settings and had forgotten there was even a setting for it. I'd suggest double-checking that setting in case it really is being reset for some people as a result of the upgrade. It seems unlikely, but who knows.
  4. Just to add for others: the disks that weren't needed seemingly were not spun up until needed, so that's good. However, they also seemingly didn't spin down. I have since noticed the default spin-down delay has been reset to "never". I assume that's it. Still testing.
  5. @DarkMan83 Mine seems to have been the CrashPlan docker container.
  6. OK, I may have just solved this for myself by looking at the logs I included. For some reason the CrashPlan backup was at 100% CPU. Stopping this docker seems to have stopped the drives spinning up. I'm nearly positive this did not happen under the previous Unraid, however I'm now doubting myself. I will post back here if they spin up again by end of day; otherwise this can be closed. Many thanks, Marshalleq.
  7. I'm aware there's a view that this is not happening and that it could be a plugin. Unfortunately I can't easily boot into safe mode, so I've taken a different approach. First, I resolved all the GSO errors by changing a Linux VM to machine type Q35-5.0; this lets us actually read the logs without hunting through rubbish. Second, I spin down all the drives. I visually note that they have spun down and remain spun down, and I refresh the main page to ensure they haven't spun up in the background. Some activity comes in and spins up a single drive - OK, all good so far. I again refresh the main page to check that the other drives have not spun up in the background, then view the logs, which clearly show when the drives spun down. Still good.
     Within 5 minutes, all drives have spun up again. I check the system log: there is no record of anything spinning up the drives. Perhaps it's in another log. I downloaded the logs within 1 minute of the drives spinning up, so there is not much to delve through. Maybe this will help the few of us who still have this problem work out whether it's specific to our setups or not. obi-wan-diagnostics-20200830-1425.zip
  8. Just wanted to add that the annoying GSO error actually happens when a Linux VM is running a machine type older than Q35-5.0. I haven't noticed anyone else report that specifically for Linux here before.
  9. I can't really boot into safe mode without a lot of effort, since I run a ZFS plugin with all my dockers and VMs on it. However, I do still have all my disks spun up by something. I could compile a custom kernel with ZFS in it, but then people would probably point at that. The only other option would be to reformat / move my ZFS volumes - probably easier to let someone else do it in this case. @DarkMan83 want to compare plugins or something to help rule them out?
  10. Hi everyone, I'm running the latest beta of Unraid and noticed (like I see someone else has commented) that the temperatures on my AMD Threadripper 1950X / Asus X399-A Prime board are reporting much, much higher - e.g. idle temps are circa 90 degrees C. Obviously this is not correct. I posted in the beta forum and the only response I got was that Unraid doesn't handle temperatures, which I think is quite incorrect to be honest.
     As I understand it, the plugin reports the temps provided by either the built-in sensor drivers, or injected sensor drivers where they're not included with the kernel or packages. My understanding was that in the newer kernel the relevant drivers for my hardware are now included, and I can see that more sensor options are now available in the Dynamix plugin. Previously I had to input it87 into the Dynamix plugin to get it to work. I have rolled back to stable and can confirm the temperatures go back down to circa 60 degrees C at idle.
     Can anyone help me by a) advising / confirming whether this is likely a Dynamix plugin issue or a kernel issue, and b) giving me something to go back to the beta thread with? One thing I'm wondering is whether, now that the correct drivers are in the kernel, the plugin logic needs to be changed. I'm basing this on AMD's weird 27-degree offset: perhaps the plugin has something to compensate for that which is playing up, though it seems to be applying it backwards TBH. Logs attached. Many thanks, Marshalleq obi-wan-diagnostics-20200828-0850.zip
  11. Hmmm, I thought the upgraded kernel took care of that now (whereas yes, I agree you previously needed a plugin for the driver). My assumption is that the current plugin now reads whatever the current kernel is sending. And to that end, I do note that with the same plugin on the two different kernel versions, this kernel has a lot more sensors to choose from, which does seem to indicate I'm on the right track. So: plugin to display, kernel to send temps, right? If so, I still say the kernel is sending wrong temps, or the plugin needs to be updated for AMD's crazy +27 degree temp offset or whatever they do.
  12. So I'm just registering that my disks are no longer spinning down. I can spin them down manually, but at some point they spin up again and don't go back to sleep. I've attached logs. I also received an out-of-memory error (below) on my 96G memory system, which I assume is to do with the Windows / syslog issue, but I don't know - I haven't checked yet. It will also be in the logs; it's not likely to be RAM anyway.
     My logs seem to be going straight into a folder and I have to manually compress them. Perhaps my zip is automatically unzipping them, but I thought I'd mention it in case anyone else sees the same. Finally, the idle temperature of my Threadripper 1950X on the Asus X399 Prime-A board has reported incorrectly since the beta was introduced - it reports idle temps of about 90 degrees C. Clearly not correct. obi-wan-diagnostics-20200828-0850.zip
  13. I'm not going to worry about when it's released (because this is a common response across open source software development). But a pillar of agile is openness and sharing of progress, so that anyone can see what the team is attempting to complete, by when, and what is being aimed for. It doesn't say when it will be done or whether it will be accepted, just that it will be attempted within a sprint (e.g. a 1-4 week timeframe). Sadly, most don't share this information.
     Of course, perhaps Limetech is one of the few not using agile, instead using something like waterfall - which would mean they would definitely have a deadline to share. Or they could be using nothing, which would be quite enjoyable and would explain why there is nothing to share. My bet is this last one, because there's not really any commitment to provide anything specific, and I think that's fine in this environment. The team is probably distributed and they probably have other responsibilities which make things complicated.
     I do wish, though, that we could see a Scrum board or something at a read-only level to satisfy curiosity. Or they could pick random customers to participate in each sprint to help, or something. That'd be cool. (Putting my hand up if anyone from Limetech reads this.) But this note is really just to say: I read this "there's no official timeline" thing a lot, and while that's typically true of a software development process, it doesn't mean there's no process or aim that can be shared. Hope that doesn't offend anyone, especially at Limetech - I just like to help educate on agile sometimes (Certified Agile Scrum Master), among other things.
  14. Interesting - mine have just started doing this, or I've just noticed it.
  15. This happened to mine the other day: using it one night, shut it down, woke up in the morning to a black screen. I ended up recreating the whole VM and installing Windows from scratch. I really shouldn't have had to do that, but I had tried so many things that I actually thought my GPU had failed.
  16. Just to add some context - my VM DOES work in Windows 10 on this beta. I created a new template (by deleting the VM without deleting the disks), created a new one and pointed it back at the disks. That might be why it works, though personally I've always had to do this delete-VM-template dance in Unraid since at least 3-4 versions ago, at least with Windows. Networking isn't great though. I even downloaded the latest virtio drivers but saw no difference, so I've just passed through a physical NIC for now as connections were dropping. Anyway, hopefully that works for you as an alternative option.
  17. Out of interest, what kind of idle temps do you see? When I upgraded to the new beta, my temps showed insanely high numbers, like 90 degrees C at idle. But I have a Threadripper 1950X, which reported correctly before the beta.
  18. Just chipping in: after installing the latest beta I'm also getting my logs filling up (as in, they've actually 100% filled the tmpfs drive) as per the below:
     Aug 6 16:11:44 OBI-WAN kernel: tun: unexpected GSO type: 0x0, gso_size 1402, hdr_len 1468
     Aug 6 16:11:44 OBI-WAN kernel: tun: unexpected GSO type: 0x0, gso_size 1402, hdr_len 1468
     Aug 6 16:11:44 OBI-WAN kernel: tun: 24 49 92 24 96 b5 95 ad aa aa aa aa b3 b3 49 92 $I.$..........I.
     Aug 6 16:11:44 OBI-WAN kernel: tun: 8f 5c 2c 5d 4e 99 30 e3 1d 15 ee 66 4a fc 79 09 .\,]N.0....fJ.y.
     Aug 6 16:11:44 OBI-WAN kernel: tun: e6 17 43 84 7a 39 48 1c e9 f4 c4 35 77 26 6e fa ..C.z9H....5w&n.
     Aug 6 16:11:44 OBI-WAN kernel: tun: fe 53 61 0d 59 e5 6d 03 39 2b 47 51 0e f0 42 ab .Sa.Y.m.9+GQ..B.
     Aug 6 16:11:44 OBI-WAN kernel: tun: 24 49 92 24 96 b5 95 ad aa aa aa aa b3 b3 49 92 $I.$..........I.
     Aug 6 16:11:44 OBI-WAN kernel: tun: 24 49 92 24 96 b5 95 ad aa aa aa aa a9 a9 49 92 $I.$..........I.
     Aug 6 16:11:44 OBI-WAN kernel: tun: 3a 1c 4f f1 93 59 f7 ec 24 5c 8a 63 f9 8d 34 a9 :.O..Y..$\.c..4.
     Aug 6 16:11:44 OBI-WAN kernel: tun: 24 49 92 24 00 00 00 00 aa aa aa aa a9 a9 49 92 $I.$..........I.
     Aug 6 16:11:44 OBI-WAN kernel: tun: unexpected GSO type: 0x0, gso_size 1402, hdr_len 1468
     Aug 6 16:11:44 OBI-WAN kernel: tun: 74 51 07 64 ba 4e 12 d9 33 53 1b ac c4 a3 af 38 tQ.d.N..3S.....8
     Aug 6 16:11:44 OBI-WAN kernel: tun: d1 a7 a6 a5 52 de 50 9b 9d 42 7d fc 2a 07 c8 c1 ....R.P..B}.*...
     Aug 6 16:11:44 OBI-WAN kernel: tun: unexpected GSO type: 0x0, gso_size 1402, hdr_len 1468
     Aug 6 16:11:44 OBI-WAN kernel: tun: 5a 65 d4 d1 f7 3a f0 9c 09 44 7e d6 2e b9 b4 df Ze...:...D~.....
     Aug 6 16:11:44 OBI-WAN kernel: tun: 7a b8 67 bb 3c db 50 6a c0 24 12 5f 6e 8c 56 19 z.g.<.Pj.$._n.V.
     Aug 6 16:11:44 OBI-WAN kernel: tun: 94 63 43 2d 3d fb 29 af 83 32 95 21 f0 6f 87 16 .cC-=.)..2.!.o..
     Commonalities to the above: I also have a 10G Intel NIC, multiple internal networks, and some Windows VMs.
     I'll go through the above steps (haven't changed to Q35-5.0 yet) - just wanted to register a "me too". Edit: As it turns out, I did have Q35-5.0 already due to having to recreate the template (something I constantly have to do with Unraid for some reason), and that defaulted to 5.0 and virtio-net. In this configuration, my logs are flooded with these messages. It starts when I start a Windows 10 machine and doesn't stop after I've stopped the VM - I have to restart the whole server. Perhaps I can restart the virtual machine manager - haven't gotten that far yet. Edit 2: One more reboot and they're down to 500 or so an hour, which is survivable compared to before at about 1-2 per second. I'll continue to try to pin it down over the next few days. Logs now attached. obi-wan-diagnostics-20200807-1106.zip
  19. Regarding running docker in a folder instead of an image - is this active in the GUI? I tried wiping out the existing /mnt/INTEL1TB/docker/dockerimage/docker.img and replacing it with /mnt/INTEL1TB/docker/dockerimage/ and also /mnt/INTEL1TB/docker/dockerimage, but neither worked. It still thinks it's an image, wants an image size to be set, etc. I also tried moving it to /mnt/user something - that didn't work either. I read that maybe you have to use a share, but I'm not sure how, since the GUI still wants me to put in an image size. I assume there's something obvious I'm missing? Everything else is working great. Thanks.
  20. That LuaJIT / OpenResty version notice is normal - at least I've had it forever and it doesn't seem to affect anything, so you've probably just not noticed it before. At a wild guess - have you checked ports? Is Unraid still on 443 and 80? Obviously you can't have both letsencrypt and Unraid on the same ports.
  21. Actually yes, you can run quite well on a Raspberry Pi. However, if you have to transcode (and you don't always have to transcode) that's where the problems start. Have you seen the direct play option? For most people's setups, a modest processor is fine: most people would only transcode maybe one stream at a time and direct play the rest (that's my observation anyway, and it's true in many cases - yours may not be the same). If you're looking at buying a new CPU, I'd be very surprised if it had fewer than 8 threads - which is actually more than I have assigned to my Plex docker container, because it doesn't need a lot.
     Basically any AMD Ryzen would work I think, but for some headroom for other Unraid things I'd go Ryzen 7 or 9, or a Threadripper. The point is, you don't really need to be considering GPU encoding for Plex with most processors today, and if you end up requiring it you can add it later. If you want to do encoding and don't want to do GPU encoding, AMD is where it's at: more cores at a lower price point = more capability. Intel is not something people are really buying much of at the moment, and for your use case it sounds like AMD would be great. Your call though!
  22. There's no GPU on most AMD CPUs, so no built-in hardware acceleration - so yeah. There are some AMD CPUs (I don't know which) that do have one; those are relatively new. But Plex really doesn't use a lot of CPU. Unless you're doing 4K, you don't need much. I mostly have Plex using the scraps of whatever I've got left after 24x7 encoding, and it handles multiple simultaneous streams quite easily - 6, 8, whatever. Also remember, a lot of the time it doesn't actually need to use the CPU at all - it depends on your clients.
  23. A friend of mine uses an Nvidia 1660 card because it supports B-frames, and he swears by it - you can do your whole library in days. But I still like CPU encoding: it takes longer, but gives better quality at smaller sizes (not necessarily better quality at larger sizes, FYI). I use Tdarr, which has worked out great so far.
  24. If you want Quick Sync, I believe that requires additional 'config and stuff', since it's still hardware compression and needs to be made visible to a VM or a docker - someone correct me if I'm wrong. If you have more cores it doesn't require additional config, plus you can share them amongst everything. I have about 30 dockers running and a number of VMs, including 24x7 Handbrake encoding on 75% of the cores, and Plex never misses a beat. I'd go for cores and memory, hands down, every time.
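On the temperature posts (10 and 11) above: if the problem really is AMD's Tctl offset being applied in the wrong direction, the arithmetic is simple. A minimal sketch, assuming the commonly cited fixed 27-degree Tctl-to-Tdie offset on first-generation Threadripper parts like the 1950X (the function name and constant are illustrative, not taken from any plugin's actual code):

```python
# Sketch of the Tctl/Tdie offset mentioned in the temperature posts:
# on some first-generation Threadripper parts (e.g. the 1950X), the
# control temperature (Tctl) the sensor reports is the die temperature
# (Tdie) plus a fixed 27 C, so a monitoring plugin must SUBTRACT the
# offset; adding it instead would inflate readings the way described.

TCTL_OFFSET_C = 27.0  # assumed fixed offset, for illustration only

def tdie_from_tctl(tctl_c: float) -> float:
    """Convert a raw Tctl reading (degrees C) to an approximate Tdie."""
    return tctl_c - TCTL_OFFSET_C

if __name__ == "__main__":
    # A ~90 C idle Tctl reading corresponds to a ~63 C die temperature,
    # close to the ~60 C idle reported under the stable kernel.
    print(tdie_from_tctl(90.0))  # → 63.0
```

If the kernel's native driver already reports the offset-corrected value while the plugin still applies its own compensation, you'd see exactly this kind of doubled-up discrepancy between kernel versions.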
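Since the GSO flood in post 18 fills the syslog faster than it can be read, collapsing repeated kernel lines into counts makes the log reviewable before attaching diagnostics. This is a hypothetical helper, not part of Unraid; the prefix pattern and sample lines are assumptions based on the log excerpt above:

```python
# Hypothetical helper: collapse repeated "tun: unexpected GSO type"
# kernel lines into a count per unique message. Sketch only - the
# syslog prefix format is assumed from the excerpt in post 18.
from collections import Counter
import re

# Strip the leading "Mon DD HH:MM:SS host kernel: " prefix so identical
# messages logged at different times fall into the same bucket.
PREFIX = re.compile(r"^\w{3}\s+\d+ \d\d:\d\d:\d\d \S+ kernel: ")

def summarize(lines):
    """Return (message, count) pairs, most frequent first."""
    counts = Counter(PREFIX.sub("", line).strip() for line in lines)
    return counts.most_common()

sample = [
    "Aug 6 16:11:44 OBI-WAN kernel: tun: unexpected GSO type: 0x0, gso_size 1402, hdr_len 1468",
    "Aug 6 16:11:44 OBI-WAN kernel: tun: unexpected GSO type: 0x0, gso_size 1402, hdr_len 1468",
    "Aug 6 16:11:45 OBI-WAN kernel: tun: unexpected GSO type: 0x0, gso_size 1402, hdr_len 1468",
]

if __name__ == "__main__":
    for message, count in summarize(sample):
        print(f"{count:6d}  {message}")
```

Running something like this over /var/log/syslog would show at a glance whether the flood is one repeating message (as it appears to be here) or several distinct faults.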