Everything posted by -Daedalus

  1. Any chance we can get LAME added? I'm looking to have a FLAC folder auto-converted to MP3 for use with mobile devices. Showing my Linux noob status here, but it looks like there are five packages needed for what I want to do. I can add them manually, though it would be nice to have them included here in the UI:
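     For illustration, here's a rough sketch of the kind of batch conversion I'm after, assuming the flac and lame binaries end up on the PATH once those packages are installed (Python as a stand-in for what would probably just be a shell loop; the share paths are placeholders):

        #!/usr/bin/env python3
        # Rough sketch: mirror a FLAC library as VBR MP3s using flac + lame.
        # Assumes 'flac' and 'lame' are on PATH; SRC/DST are placeholder shares.
        import subprocess
        from pathlib import Path

        SRC = Path("/mnt/user/Music/FLAC")   # placeholder source share
        DST = Path("/mnt/user/Music/MP3")    # placeholder destination share

        for flac_file in SRC.rglob("*.flac"):
            mp3_file = (DST / flac_file.relative_to(SRC)).with_suffix(".mp3")
            mp3_file.parent.mkdir(parents=True, exist_ok=True)
            if mp3_file.exists():
                continue  # skip files already converted
            # Decode FLAC to stdout and pipe it into LAME for a V2 VBR MP3.
            decode = subprocess.Popen(
                ["flac", "--decode", "--stdout", "--silent", str(flac_file)],
                stdout=subprocess.PIPE,
            )
            subprocess.run(
                ["lame", "-V2", "--quiet", "-", str(mp3_file)],
                stdin=decode.stdout, check=True,
            )
            decode.stdout.close()
            decode.wait()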
  2. Hi all, I have a MineOS container that thinks it's CouchPotato. I've tried:
     - Removing and re-adding the container/image
     - Manually replacing the icon in \config\plugins\dockerMan\images (the file there has the CouchPotato icon)
     - Specifying a new icon through the GUI on the container settings page
     but it won't stick. Every time the container restarts, it goes back to the CP icon. Any idea where else I can look for this?
     Edit: I've since found this post, which mentions the paths the icons are kept at in RAM. I've moved the appropriate file over there, and that's fixed it for the moment, but I don't know whether it will persist through a reboot, as I'd imagine it'll just pull the image from the HDD again next time, which is still the CP one.
  3. Has this something to do with the system responsiveness issue during large writes to the cache, I wonder?
  4. Updated server without issue, with the exception of the following:
     Mar 14 12:36:54 server root: error: /webGui/include/ProcessStatus.php: wrong csrf_token
     Which doesn't seem to have affected anything.
  5. Why not something like:
     Always - Files are written to the cache. If there is insufficient space, the transfer will fail.
     Prefer cache - Files are written to the cache. If there is insufficient space, files will overflow to the array, and be moved back to the cache by the mover on a schedule once there is sufficient space.
     Prefer array - Files are written to the cache. They will be moved to the array by the mover on a schedule.
     Never - Files are written to the array.
     Existing files are not affected by these settings. If files on the cache should be on the array, or vice versa, they should be moved manually first.
     I think the wording and order are both important here. Ordering the options by how much they use the cache drive feels more intuitive, and naming them more consistently would help as well. I also tried to keep the word count down, as users then stand a better chance of reading it in the first place, and of understanding it. I'm not entirely happy with the warning text, though. Thoughts?
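     To make the behaviour I'm describing unambiguous, here's a rough sketch of the write-path logic I have in mind (illustrative only, not unRAID's actual implementation; the mode names are just the labels proposed above):

        # Sketch of the write-path decision described above; illustrative only,
        # not unRAID's actual implementation.
        def choose_target(mode: str, file_size: int, cache_free: int) -> str:
            """Return where a new file should land for a given cache mode."""
            if mode == "always":
                # Cache only; a full cache means the transfer fails.
                return "cache" if file_size <= cache_free else "fail"
            if mode == "prefer cache":
                # Overflow to the array; the mover brings files back to the
                # cache on its schedule once space frees up.
                return "cache" if file_size <= cache_free else "array"
            if mode == "prefer array":
                # Land on the cache first; the mover migrates files to the
                # array on its schedule.
                return "cache"
            if mode == "never":
                return "array"
            raise ValueError(f"unknown mode: {mode}")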
  6. Hi all, Having recently moved to a 1950X, and experimenting with clustered VMs and a few other things, I find myself needing to move core assignments around more than before. Previously, with only a VM or two, I'd use 'isolcpus' to leave a couple of cores free for VM use.
     I'm wondering, though: can I take this a bit further, isolate everything bar, say, one or two cores, and manually assign everything? I'd leave a core or two (whatever is needed) for unRAID itself and the lighter Docker containers - download clients, monitoring, etc. - and manually assign the heavier containers - Plex and MineOS, chiefly - as well as any VMs I'm running.
     This would mean the machine wouldn't need a reboot every time I wanted to assign more isolated cores to a VM, and only Plex itself would need a restart if I needed to give it more resources. Anyone else do this? Does it work well?
  7. Agreed. I've brought things like this up before, but the responses are usually:
     1) There's a plug-in to do this
     2) There's a (relatively involved) procedure for this, check the wiki
     3) We don't need this built in, see the first two points
     In the same vein, unRAID could do with more disk management features in general. Having a "Remove" button for a drive that does what's being suggested here would be nice. Having a "Replace" button that does this, but also moves the data to a new disk, would be nice as well.
  8. I never knew this was a thing! How many in Ireland?
  9. Yes, although (AFAIK) snapshots aren't implemented in the GUI yet, and the whole idea of this is to not have the VMs on the same storage as all the Docker images, which are constantly reading/writing things all over the place. I think I might have to look at other solutions, to be honest. unRAID does lots of things pretty well, but nothing amazingly well. ESXi has much better VM management, and ZFS has (arguably) much better storage. If Limetech were in the habit of giving even a rough roadmap of the direction they're thinking of going, that might help, but we don't really hear about features until they show up in pre-release builds, and for something like this - which is typically more of a longer-term investment - I don't think that serves the community well.
  10. There's a feature request for multiple cache pools. I'd really like something other than cache - that isn't as slow as the main array, but is protected - for VMs and the like.
  11. I know this has been raised before, around the time of the original launch, and I think Limetech said they would look into it at the time, but (unless I'm missing something) it hasn't been implemented. I recently invested in a new monitor, and I now find this forum's theme especially retina-searing. Any chance we could get something going on this?
  12. Cheers. BIOS 2.00 was released not too long ago for the Taichi. I'm going to try that this weekend with RC15e (released today) assuming you don't beat me to it. If it's still not working I'll flash 1.70 and see what happens.
  13. Similar to what John_M asked: if you try with one card and it's detected, do you see your logs filling with the same errors I do? You don't always see it by opening the live log, so I usually go to Tools -> System Log. Also, which slots have you tried the card in? I've tried it in all of them, with the same result. I have a 9207-8i on the way as well, and I'll give things a go with that at some point, but the exact same behavior was present with an IT-flashed H310, which makes me think it's not the card.
  14. Thanks, guys. Had to set the adapter to host mode for the container, and had to manually force an inform via SSH, but it's all up and running now. Man, does regular consumer stuff suck in comparison.
  15. Hi all, I'm running this container on 6.3.5. I get to the setup in the webUI, but it can't detect any devices. All settings are default, no ports changed, and the server is on a static 1.100 address. Installing the controller software on a Windows machine at 1.8 works as expected: the USG, AP, and switch are all seen in the configuration wizard. Any ideas?
  16. Thought I'd chime in here. Haven't read the thread (yet) as it's huge, but: 1950X / X399 Taichi build.
      On 6.3.5 at the moment, and have been for a day or two now. Originally I was having lots of problems with an HBA, so I ended up leaving SVM disabled. Ran for over a day with no problems. Turned SVM on today and everything's back up, with the only change being that I can't see my CPU usage on the dashboard (though it works in Stats). I should mention I've changed no settings at all, other than enabling SR-IOV, SVM, and IOMMU in the BIOS.
      Now, the problems with 6.4: if I boot into 6.4 with BIOS defaults, my HBA isn't detected. If I then enable IOMMU, my HBA is detected, but my logs are flooded with:
      I'm hesitant to start my array in this condition, so I've gone back to 6.3.5, which (weirdly) seems to be working completely fine so far. I'm not using passthrough for any of my VMs, so perhaps that's why. I also haven't disabled C-States.
      Does anyone have anything I can try? As I said, I haven't read through the thread yet, so there could well be something simple I'm missing.
      server-diagnostics-20171118-1501.zip server-diagnostics-20171119-1730.zip
  17. Because then you can't run your VM on redundant storage, requiring downtime if you want to create a backup image. That, or you have to pass through a hardware RAID1 config, which seems a bit silly within an OS like unRAID.
  18. Apologies for the necro, but having seen that single-drive vdev expansion is coming to ZFS at some point in the future, I figured I'd nudge this again for visibility. For myself, I'd be happy just having ZFS for the cache, not the main array. Some other users and I have been having issues possibly related to BTRFS cache pooling (see below; the issue seems to go away when a single XFS cache device is used), and I feel like having something that's been around longer and has been put through its paces a little more might be a nice option. I understand that, for all the reasons Limetech listed above, it might still not be viable, but I'm putting it out there nonetheless. https://forums.lime-technology.com/topic/58381-large-copywrite-on-btrfs-cache-pool-locking-up-server-temporarily/
  19. Just to chime in here: I'm experiencing the same issue, also using 2x 850 Evos. Not a fan. I haven't tried any of the troubleshooting steps mentioned in this thread yet, but just wanted to note that it's affecting me as well.
  20. Yup. 2x RAID1 arrays is my use-case as well. One for VMs/User-facing Docker stuff, the other for cache and "backend" containers.
  21. +1. I feel like this is probably coming, given what's been talked about with 6.4, and with the most recent RC, but worth mentioning anyway. Would love to be able to use stronger passwords with the webUI.
  22. Valid points. And, to be honest, once I can move to somewhere big enough that I can shove that stuff in a separate room some place and leave a dedicated KVM with it, then I won't care much about IPMI anyway. Either way, it'll be an interesting couple of months, seeing how Threadripper/EPYC (Big Ryzen?) turns out.
  23. Most (if not all) Threadripper boards won't have server features though, namely IPMI. Have you ever used an IPMI solution? If you haven't, try it out. You'll never go back. That alone, never mind the longer warranty (though that's pretty damn nice as well), is worth the extra cost for me.