s.Oliver

Members

  • Content Count: 284
  • Joined
  • Last visited

Community Reputation: 15 Good

About s.Oliver

  • Rank: Advanced Member

Converted

  • Gender: Male

  1. thx., but i did that from the beginning. it never booted automatically from the USB stick – it always dropped into the EFI shell. when i intercept the BIOS initialization and select the correct boot device (the unRAID USB stick), it boots and everything is fine. not sure why it doesn't choose it automatically. i'm running on Proxmox v6 and use UEFI/EFI boot because i'm passing through several PCI devices.
  2. so today was the day for the PROXMOX installation and virtualizing unRAID again. most of the stuff works as expected, but some things aren't up to expectations: a) booting unRAID (directly from the USB stick) drops me into the EFI shell every time, even though i set all the right boot options in the PROXMOX BIOS. so with every reboot i need to enter the BIOS and set the USB stick as the boot device, and only then does it boot unRAID. it looks like PROXMOX doesn't save the changes to the EFI partition or something. b) even though i enabled nested virtualization (PROXMOX), my macOS VM (under unRAID) is drastically slower than before. well, i had expected a small loss, but not that much. i played around a bit with different options in the XML, but can't see any improvement. does anyone have ideas on how to improve this? (a quick check for whether nested virtualization really reaches the guest is sketched after this list.) well, i'll try to set up the macOS VM under PROXMOX directly, so there would be no need for nested virtualization, but that can be a hassle.
  3. hey siwat, thanks for sharing this Adguard Home docker! i had PiHole running before, but this one has some nice features. two things popped up that i wanted to let you know about (the docker has a dedicated IP): 1) the template shows 2 entries for the Web GUI (port 6052 [it's under the Icon URL] and port 3000 [under the working directory setup]) – but neither of them works. ONLY port 80 does. 2) whenever i try to connect to it via a FQDN (like adguard.mydomain.whatever) it only shows a blank page (this works for all my other dockers, so i could imagine that some kind of verification in the background doesn't like being called by FQDN instead of by IP). i'm on unRAID 6.6.7 right now. tell me if i can help with testing (a small port-probe sketch is included after this list). thanks again for your work! 😃
  4. i didn't want to go deep into the concept of unRAID's parity algorithm. so you're right, unRAID needs to be strict about writing the same sector to the data and parity drive(s) at (more or less) the same time (given how fast the different drives complete the request). so the slowest drive involved in a write cycle (no matter whether it's a parity or a data drive) determines how long that write cycle takes. but unRAID is not immune to data loss from unfinished write operations (whatever the reason) and has no concept of a journal (to my knowledge). so a file whose write was aborted before finishing is damaged/incomplete, and parity doesn't/can't change anything here and probably isn't in sync anyway. that's why unRAID usually forces a parity sync on the next start of the array (and it rebuilds the parity information completely, based only on the values of the data drive(s)). unRAID would need some concept of journaling to replay the writes and recover the missing part; it has none (again, to my knowledge). ZFS is one file system which has a mechanism to prevent exactly this. my observation is that it is a pretty much synchronous write operation (all drives which need to write data do write the sectors in the same order/at the same time – otherwise i imagine i could hear much more 'noise' from my drives, especially during a rebuild). a small worked example of how XOR parity is computed and rebuilt is sketched after this list. but i do confess – that is only my understanding of unRAID's way of writing data into the array.
  5. on normal (SATA) SSDs it is set to "32" (at least seen on one machine as a cache drive). but these are fast enough to handle it, and they are not embedded in that special "RAID" operation like the data/parity drives are. because of the nature of unRAID's "RAID" mode, i guess the drives are "faster" if they work on small chunks of data in 'sequential' order.
  6. nearly perfect. i hadn't checked my own QD settings on 6.7.x before i left (no one had brought up QD as a possible reason), but i looked at a friend's unRAID system: a fresh setup (just a few weeks old), and there all spinners are also on QD=1.
  7. i was just reacting to @patchrules2000's post, he was setting all drives to QD=32 (even on 6.6.x).
  8. my 2 cents here (i've been back on 6.6.7 for 12 days and everything is as good as it ever was): Disk Settings: Tunable (md_write_method) is on Auto (i have never touched it); cat /sys/block/sdX/device/queue_depth for all rotational HDDs is "1"; the QD for the cache NVMe drive is unknown (it doesn't have the same path to print the value). wouldn't this contradict the opinion that the 6.6.x series performs better because it has a higher QD value? (a small script that lists the queue depth of all drives is sketched after this list.)
  9. need to correct my last post: the PLEX docker (media scan background task) did crash once now. so it's possible that this isn't related to the kernel, or whatever.
  10. maybe you don't have to, if limetech can identify the problem and fix it.
  11. funny thing, another problem has now disappeared (after going back to 6.6.7), one that caused some serious head-scratching: PLEX (docker) has some background tasks running (usually at night), one of them is the media scanning job. this one crashed regularly, and a lot of people had this problem too and tried to find a solution. now, after some days of uptime on 6.6.7, i haven't seen a single crash – YEAH! at night i have some big backup jobs running which write into the array. so i would guess that PLEX timed out on accessing data in the array (albeit it just reads files).
  12. well, i couldn't stand it anymore – so back to 6.6.7, and everything is back to normal, expected behavior. though i'm missing stuff from 6.7, so i hope they can identify/fix the problem really soon.
  13. i can add to this, and it's a major drawback for unRAID from 6.7 onward. before, i was reluctant to post about it because i had done too few tests to be 100% sure i hadn't changed some setting somewhere… but now i'm sure. today i upgraded one more unRAID server from 6.6.x to 6.7.2 and see the exact same behavior! so i have 2 machines here which haven't had a single change, except that they were upgraded to 6.7.x (meanwhile both on 6.7.2). in my book it doesn't matter how you access the data: coming from the network or locally on the server, using different machines to connect to the server… when one write into the array is ongoing, any reads – even from data or cache devices (SSDs/NVMes) which aren't being written to – are super slow. also, whenever a rebuild is happening, you'd better not want to read any file... the amount of RAM doesn't change anything, nor do the controllers used or the CPU (with mitigations enabled/disabled). and while i can't back it with data, it seems that rebuilds are slower too. this can have severe consequences in scenarios where some services write data continuously into the array (video surveillance, for example). hopefully we can find a fast fix for this, because going back to 6.6.x isn't a good option anymore. @limetech what can we do to help debugging this? (a small timing sketch for reproducing the read-during-write slowdown is included after this list.)
  14. alright man thx. i'll try this once i'm ready for my new PROXMOX installation.
  15. hmm… maybe i read the screenshots wrong... but i guess that, because no other bootable device is specified (boot device 1 = cd-rom with an empty 'tray', devices 2 & 3 are set to nothing or to non-bootable devices), PROXMOX looks at its usb devices (for this VM) and boots from one if it is bootable?
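
A minimal check for item 2 above, assuming a Linux environment and the standard kvm_intel/kvm_amd module names (everything else is generic sysfs/procfs): run it on the Proxmox host to see whether the kvm module has nesting enabled, and inside the unRAID guest to see whether the vmx/svm CPU flag actually made it through. If the flag never reaches the guest, the nested macOS VM would fall back to software emulation, which could explain the drastic slowdown.

```python
#!/usr/bin/env python3
"""Check whether nested virtualization is enabled on the host and
whether the virtualization CPU flag is visible on this machine."""
from pathlib import Path

def host_nested_enabled() -> str:
    # kvm_intel reports 'Y'/'N' (or '1'/'0'), kvm_amd reports '1'/'0'
    for module in ("kvm_intel", "kvm_amd"):
        p = Path(f"/sys/module/{module}/parameters/nested")
        if p.exists():
            return f"{module}: nested = {p.read_text().strip()}"
    return "kvm_intel/kvm_amd module not loaded"

def virt_flag_visible() -> bool:
    # a guest can only run fast nested VMs if its CPU advertises
    # vmx (Intel) or svm (AMD)
    for line in Path("/proc/cpuinfo").read_text().splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            return "vmx" in flags or "svm" in flags
    return False

if __name__ == "__main__":
    print(host_nested_enabled())
    print("vmx/svm flag visible on this machine:", virt_flag_visible())
```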
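
For item 3, a tiny probe that checks which of the ports from the template actually answers with a web page; the container IP below is a placeholder (not from the original post) and has to be replaced with the dedicated IP of the AdGuard Home docker.

```python
#!/usr/bin/env python3
"""Probe the candidate web GUI ports of an AdGuard Home container."""
import urllib.request
import urllib.error

ADGUARD_IP = "192.168.1.50"   # placeholder – use your container's dedicated IP

for port in (80, 3000, 6052):
    url = f"http://{ADGUARD_IP}:{port}/"
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            print(f"port {port}: HTTP {resp.status}, {len(resp.read())} bytes")
    except (urllib.error.URLError, OSError) as exc:
        print(f"port {port}: no answer ({exc})")
```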
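
For item 4, a toy worked example of single (XOR) parity, the scheme unRAID's first parity drive uses: the parity byte at each position is the XOR of the bytes at that position across all data drives, so a missing drive can be rebuilt from parity plus the surviving drives – but if an interrupted write only reached the data drive and the matching parity update never happened, a rebuild returns garbage until a parity sync runs. This is only an illustration of the math, not unRAID's actual driver code.

```python
#!/usr/bin/env python3
"""Toy illustration of XOR parity: computing it, rebuilding a lost
drive from it, and what a stale parity block does to a rebuild."""

def xor_parity(*drives: bytes) -> bytes:
    # byte-wise XOR across all inputs at the same offsets
    parity = bytearray(len(drives[0]))
    for drive in drives:
        for i, b in enumerate(drive):
            parity[i] ^= b
    return bytes(parity)

# three "data drives" holding one 4-byte sector each
d1, d2, d3 = b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"
parity = xor_parity(d1, d2, d3)

# rebuild: XOR-ing parity with the surviving drives gives back the lost one
assert xor_parity(parity, d1, d3) == d2

# interrupted write: new data reached d1, but parity was never updated,
# so rebuilding d2 from the stale parity no longer gives the right bytes
d1_new = b"\xff\x02\x03\x04"
print("rebuild with stale parity is wrong:", xor_parity(parity, d1_new, d3) != d2)
```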
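
For items 5–8, a small helper that prints the same value as cat /sys/block/sdX/device/queue_depth for every block device and also shows whether the device is rotational; NVMe drives don't expose that attribute, which matches the "unknown" cache-drive case above.

```python
#!/usr/bin/env python3
"""List the queue depth of every block device via sysfs."""
from pathlib import Path

for blockdev in sorted(Path("/sys/block").iterdir()):
    qd_file = blockdev / "device" / "queue_depth"
    rot_file = blockdev / "queue" / "rotational"
    kind = "spinner" if rot_file.exists() and rot_file.read_text().strip() == "1" else "ssd/nvme"
    if qd_file.exists():
        print(f"{blockdev.name:10} ({kind}): queue_depth = {qd_file.read_text().strip()}")
    else:
        print(f"{blockdev.name:10} ({kind}): no queue_depth attribute")
```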
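
For item 13, a rough reproduction sketch: it times a streaming read from the array once while idle and once while a background write into the array is running. Both file paths are placeholders, and the numbers are only indicative (posix_fadvise is used to evict the test file from the page cache so the second read hits the disks again), but it gives a before/after figure that could be attached to a bug report.

```python
#!/usr/bin/env python3
"""Compare read throughput with and without a concurrent array write."""
import os
import threading
import time

READ_FILE = "/mnt/user/media/some_big_file.mkv"   # placeholder – existing large file
WRITE_FILE = "/mnt/user/backup/io_test.bin"       # placeholder – test file on the array

def background_write(stop: threading.Event) -> None:
    # keep appending 1 MiB chunks to the array until told to stop
    chunk = os.urandom(1024 * 1024)
    with open(WRITE_FILE, "wb") as f:
        while not stop.is_set():
            f.write(chunk)
            f.flush()

def timed_read(path: str) -> float:
    # stream the file once, report MiB/s, then drop it from the page
    # cache so the next run reads from disk again
    start = time.monotonic()
    total = 0
    with open(path, "rb") as f:
        while block := f.read(1024 * 1024):
            total += len(block)
        os.posix_fadvise(f.fileno(), 0, 0, os.POSIX_FADV_DONTNEED)
    return total / (1024 * 1024) / (time.monotonic() - start)

if __name__ == "__main__":
    print(f"idle read: {timed_read(READ_FILE):6.1f} MiB/s")
    stop = threading.Event()
    writer = threading.Thread(target=background_write, args=(stop,))
    writer.start()
    time.sleep(2)                      # let the write ramp up
    print(f"busy read: {timed_read(READ_FILE):6.1f} MiB/s")
    stop.set()
    writer.join()
    os.remove(WRITE_FILE)
```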