s.Oliver

Members

  • Content Count: 289
  • Joined
  • Last visited

Community Reputation: 19 Good

About s.Oliver

  • Rank: Advanced Member
  • Gender: Male

  1. i've seen this behavior once myself, but several times with others. in any case it wasn't necessary to start over, but i remember it can be a hassle to get it recognized again. latest case: a friend couldn't use firefox to sign in locally to his server. it always used the plex.tv site to log in. after fiddling around we used the chrome browser and it worked immediately. you could also try this url once: http://[ip]:32400/manage maybe some of it helps
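     a quick way to check whether the server itself answers locally (just a sketch, replace [ip] with the server's LAN address; to my knowledge /identity is a status endpoint that answers without a login):

         # should return a short XML status if the Plex server is reachable locally
         curl -s http://[ip]:32400/identity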
  2. hey Maciej, i can report: it works! thanks a lot! 😃
  3. thx. mciekb! 🙂 i'll try it out whenever i find a few minutes for the downtime
  4. here runs unRAID 6.8.1 now as a VM under Proxmox. all is fine so far, but i admit i don't use VMs inside unRAID (as that can be very tricky with performance). using quite a few dockers and plug-ins, all runs fine. Proxmox as the hypervisor uses KVM/QEMU itself to run VMs. so it would be great to have QEMU agent support for unRAID when it's run as a VM (the QEMU agent would then report some information back to the hypervisor). there's one guy who did a VMware Tools build for his ESXi as hypervisor. but (and that's fine) he won't/can't do it for the QEMU agent. so if anybody, or LimeTech, could bring support for the standard QEMU agent – that would be fine. thx. a lot. PS: Proxmox is free (based on Debian) and has no limitations whatsoever. it can use all hardware supported by Debian, offers (built-in) ZFS as a filesystem and has a quite good GUI and also a respectable user base & forums. professional support is offered if wanted.
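     on the Proxmox side the agent channel only needs to be switched on per VM (a minimal sketch; the VM id 100 is just an example):

         # expose the guest-agent virtio channel to the VM
         qm set 100 --agent enabled=1
         # once an agent runs inside the guest, the host can talk to it, e.g.:
         qm agent 100 ping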
  5. hi Steven, i do run unRAID as a VM under Proxmox v6.x and it uses a qemu-agent to talk to the hypervisor. so my question would be: do you see a way of compiling the necessary QEMU agent for unRAID, or is this out of the question? a quick search for a slackware-based QEMU agent showed this for example: https://slackbuilds.org/repository/14.2/system/qemu-guest-agent/ that would be really great and very much appreciated – and it would offer one more integration of unRAID into another great hypervisor (which is free and has no artificial limitations). and btw. happy new year to ya! 🙂
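     building that SlackBuild would roughly look like this (a sketch only, i haven't built it on unRAID myself; the tarball names depend on the version offered on the page):

         # unpack the SlackBuild, drop the source tarball next to it, then:
         tar xvf qemu-guest-agent.tar.gz
         cd qemu-guest-agent
         ./qemu-guest-agent.SlackBuild
         # install the package it writes to /tmp
         installpkg /tmp/qemu-guest-agent-*.tgz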
  6. thx., but i did that from the beginning on. it never booted automatically from the USB stick – it entered the EFI shell. when intercepted while the BIOS initializes and selecting the proper boot device shown (unRAID USB stick) it boots and everything is fine. not sure why it doesn't choose it automatically. i'm running on Proxmox v6 and use UEFI/EFI boot because i pass through several PCI devices.
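     one thing worth trying (an assumption on my side, i haven't verified it on this exact setup): OVMF only persists its boot entries when the VM has an EFI vars disk, e.g.:

         # give the VM a small disk where OVMF can store its EFI variables
         # (storage 'local-lvm' and VM id 100 are just examples)
         qm set 100 --efidisk0 local-lvm:1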
  7. so today was the day for the PROXMOX installation and virtualizing unRAID again. most of the stuff works as expected, but some things aren't up to expectations: a) booting unRAID (directly from the USB stick) drops me into the EFI shell every time, even though i set all the right boot options in the PROXMOX BIOS. so with every reboot i need to enter the BIOS, set the USB stick as the boot option and then it boots unRAID. looks like PROXMOX doesn't save the changes into the EFI partition or something. b) even though i enabled nested virtualization (PROXMOX), my macOS VM (under unRAID) is drastically slower than before. well, i had expected a little loss, but not that much. played around a bit with different options in the XML, but can't see any improvements. does anyone have ideas how to improve this? well, i'll try to set up the macOS VM under PROXMOX directly, so no need for the nested virtualization then, but it can be a hassle
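     for b), this is how i'd verify that nesting really arrives inside the unRAID VM (a sketch, intel host assumed; on AMD it's kvm-amd and the svm flag):

         # on the Proxmox host: enable nested virtualization for intel
         echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
         # reload the module (all VMs powered off) and check
         modprobe -r kvm-intel && modprobe kvm-intel
         cat /sys/module/kvm_intel/parameters/nested   # should print Y or 1
         # inside the unRAID VM the vmx flag must show up
         # (the VM's CPU type needs to be 'host' for that)
         grep -o vmx /proc/cpuinfo | head -n 1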
  8. hey siwat, thanks for sharing this Adguard Home docker! i had PiHole running before, but this one has some nice features. two things popped up that i wanted to let you know about (the docker has a dedicated IP): 1) the template shows 2 entries for the Web GUI (port 6052 [it's under the Icon URL] and port 3000 [under the working directory setup]) – but neither of them works. ONLY port 80 does. 2) whenever i try to connect to it via a fqdn (like adguard.mydomain.whatever) it only shows a blank page (this works for all my other dockers, so i could imagine that some kind of verification in the background doesn't like being called by fqdn instead of IP). i'm on unRAID 6.6.7 right now. tell me if i can help with testing. thanks again for your work! 😃
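     in case it helps with reproducing 1): this is how i'd check what the container actually listens on (a sketch; the container name 'AdGuard-Home' is just a guess, and netstat/ss may not exist in every image):

         # show what the adguard process binds to inside the container
         docker exec AdGuard-Home netstat -tln 2>/dev/null || docker exec AdGuard-Home ss -tln
         # the startup log usually prints the web interface address, too
         docker logs AdGuard-Home 2>&1 | grep -i listen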
  9. i didn't want to go deep into the concept of unRAID's parity algorithm. so you're right, unRAID needs to be strict about writing the same sector to the data/parity drive(s) at (more or less) the same time (given how fast the different drives complete the request). so the slowest drive in the mix (within that write cycle, it doesn't matter if parity or data) determines how fast the write cycle completes. but unRAID is not immune to data loss from unfinished write operations (whatever the reason) and has no concept of a journal (to my knowledge). so a file whose write was abruptly ended is damaged/incomplete, and parity doesn't/can't change anything here and probably isn't in sync anyway. that's why unRAID usually forces a parity sync on the next start of the array (and it rebuilds the parity information completely, based only on the values of the data drive(s)). unRAID would need some concept of journaling to replay the writes and find the missing part; it has none (again, to my knowledge). ZFS is one filesystem which has an algorithm to prevent exactly this. my observation is that writes are pretty much synchronous (all drives which need to write data write the sectors in the same order/at the same time; otherwise i imagine i could hear much more 'noise' from my drives, especially during a rebuild). but i do confess: that is only my understanding of unRAID's way of writing data into the array.
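     just to illustrate the single-parity idea (a toy example, nothing unRAID-specific): parity is the XOR over the data drives' blocks, so an interrupted write on one drive leaves the stored parity stale:

         # toy single-parity math in bash: parity byte = XOR of the data bytes
         d1=0xA5; d2=0x3C; d3=0x0F
         parity=$(( d1 ^ d2 ^ d3 ))
         printf 'stored parity:  0x%02X\n' "$parity"
         # crash scenario: d2's new value hit the disk, the parity update didn't
         d2_new=0xFF
         printf 'parity needed:  0x%02X\n' $(( d1 ^ d2_new ^ d3 ))
         # the two differ -> parity is out of sync, hence the forced sync on restart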
  10. on normal SSDs (SATA) it is set to "32" (at least seen on one machine as a cache drive). but these are fast enough to handle it, and they are not part of that special "RAID" operation like the data/parity drives. because of the nature of unRAID's "RAID" mode, i guess the drives are "faster" if they work on small chunks of data in 'sequential' order.
  11. nearly perfect. i haven't checked my own QD settings on 6.7.x before i left (no one had brought up the QD as a possible reason), but i looked at a friend's unRAID system: a fresh setup (just a few weeks old), and there all spinners are also on QD=1.
  12. i was just reacting to @patchrules2000's post; he was setting all drives to QD=32 (even on 6.6.x).
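     for reference, that change goes through sysfs (a sketch; replace sdX, needs root, and it doesn't survive a reboot unless scripted):

         # read the current queue depth of a drive
         cat /sys/block/sdX/device/queue_depth
         # set it to 32 (only valid up to what the controller supports)
         echo 32 > /sys/block/sdX/device/queue_depth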
  13. my 2 cents here (i'm back on 6.6.7 for 12 days and all is as good as it ever was): Disk Settings: Tunable (md_write_method) is on Auto (have never touched it). cat /sys/block/sdX/device/queue_depth for all rotational HDDs prints "1". QD for the cache NVMe drive is unknown (it doesn't have the same path to print the value). wouldn't this contradict the opinion that the 6.6.x series performs better because of a higher QD value?
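     to collect that in one go (a small sketch; NVMe devices don't expose device/queue_depth, which is why the cache drive is missing):

         # print the queue depth of every rotational (spinning) sd* drive
         for d in /sys/block/sd*; do
             [ "$(cat "$d/queue/rotational")" = "1" ] || continue
             echo "$d: QD=$(cat "$d/device/queue_depth")"
         done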
  14. need to correct my last post: the PLEX docker (media scan background task) did crash once now. so it's possible that this isn't related to the kernel, or whatever.
  15. maybe you don't have to, if limetech can identify the problem and fix it.