
s.Oliver

Members
  • Content Count: 282
  • Joined
  • Last visited

Community Reputation: 15 Good

About s.Oliver

  • Rank: Advanced Member

  • Gender: Male


  1. hey siwat, thanks for sharing this AdGuard Home docker! i had Pi-hole running before, but this one has some nice features. two things popped up that i wanted to let you know about (the docker has a dedicated IP): 1) the template shows 2 entries for the Web GUI (port 6052 under the Icon URL and port 3000 under the working directory setup), but neither of them works – ONLY port 80 does (a quick port-probe sketch is appended after this post list). 2) whenever i try to connect to it via a fqdn (like adguard.mydomain.whatever) it only shows a blank page (this works for all my other dockers, so i could imagine that some kind of verification in the background doesn't like being called by fqdn instead of IP). i'm on unRAID 6.6.7 right now. tell me if i can help with testing. thanks again for your work! 😃
  2. i didn't want to go too deep into the concept of unRAID's parity algorithm. so you're right, unRAID needs to be strict about writing the same sector to the data and parity drive(s) at (more or less) the same time (given how fast the different drives complete the request). so the slowest drive involved in the write cycle (doesn't matter if it's a parity or data drive) determines how fast that write cycle completes. but unRAID is not immune to data loss from unfinished write operations (whatever the reason) and has no concept of a journal (to my knowledge). so a file whose write was abruptly ended is damaged/incomplete, and parity can't change anything here – it probably isn't in sync anyway. that's why unRAID usually forces a parity sync on the next start of the array (and it rebuilds the parity information completely, based only on the values of the data drives). unRAID would need some concept of journaling to replay the writes and find the missing part, and it doesn't have one (again, to my knowledge). ZFS is one file system which has an algorithm to prevent exactly this. my observation is that writes are pretty much synchronous (all drives which need to write data write the sectors in the same order at the same time – otherwise i imagine i would hear much more 'noise' from my drives, especially during a rebuild). but i do confess – that is only my understanding of unRAID's way of writing data into the array. (a tiny XOR-parity illustration is appended after this post list.)
  3. on normal (SATA) SSDs it is set to "32" (at least seen on one machine, used as a cache drive). but those are fast enough to handle it, and they are not part of that special "RAID" operation like the data/parity drives. because of the nature of unRAID's "RAID" mode i guess the drives are "faster" if they work on small chunks of data in 'sequential' order.
  4. nearly perfect. i hadn't checked my own QD settings on 6.7.x before i left (no one had brought up the QD as a possible reason), but i looked at a friend's unRAID system – a fresh setup (just a few weeks old) – and there all spinners are also on QD=1.
  5. i was just reacting to @patchrules2000's post, he was setting all drives to QD=32 (even on 6.6.x).
  6. my 2 cents here (i'm back on 6.6.7 for 12 days and everything is as good as it ever was): Disk Settings: Tunable (md_write_method): Auto (never touched it); cat /sys/block/sdX/device/queue_depth for all rotational HDDs returns "1"; QD for the cache NVMe drive is unknown (it doesn't have the same sysfs path to print the value). wouldn't this contradict the opinion that the 6.6.x series performs better because it has a higher QD value? (a small script for printing these per-drive values is appended after this post list.)
  7. need to correct my last post: the PLEX docker (media scan background task) did now crash once. so it's possible that this isn't related to the kernel, or whatever.
  8. maybe you don't have to, if limetech can identify the problem and fix it.
  9. funny thing, now another problem has disappeared (after going back to 6.6.7) which had caused some serious head-scratching: PLEX (docker) has some background tasks running (usually at night), one being the media scanning job. this one regularly crashed, and a lot of people had this problem too and tried to find a solution. now, after some days of uptime on 6.6.7, i haven't seen one crash – YEAH! at night i have some big backup jobs running which write into the array, so i would guess that PLEX was timing out on accessing data in the array (albeit it just reads files).
  10. well, i couldn't stand it anymore – so i'm back on 6.6.7 and everything is back to normal, expected behavior. though i'm missing stuff from 6.7, so i hope they can identify/fix the problem really soon.
  11. i can add to this, and it's a major drawback for unRAID going from 6.7 onward. before, i was reluctant to post about it because i had done too few tests to be 100% sure that i hadn't changed some setting somewhere… but now i'm sure. today i upgraded one more unRAID server from 6.6.x to 6.7.2 and see the exact same behavior! so i have 2 machines here which haven't had a single change except being upgraded to 6.7.x (meanwhile both on 6.7.2). in my book it doesn't matter how you access the data: coming from the network or locally on the server, using different machines to connect to the server… when one write into the array is ongoing, then any reads – even the ones coming from data or cache devices (SSDs/NVMe) which aren't being written to – are super slow. also, whenever a rebuild is happening, you better not want to read any file... RAM amount doesn't change anything, nor do the controllers or the cpu (with mitigations enabled or disabled). and while i can't back it with data, it seems that rebuilds are slower too. this can be severe in scenarios where some service is continuously writing data into the array (like video surveillance, for example). hopefully we can find a fast fix for this, because going back to 6.6.x isn't a good option anymore. @limetech what can we do to help debug this? (a simple read-while-write test sketch is appended after this post list.)
  12. alright man thx. i'll try this once i'm ready for my new PROXMOX installation.
  13. hmm… maybe i read the screenshots wrong... but i guess that because no other bootable device is specified (boot device 1 = cd-rom with an empty 'tray', devices 2 & 3 are set to nothing or have non-bootable devices), PROXMOX looks at the VM's usb devices and boots from one of them if it's bootable?
  14. you're using it this way? (no specially prepared bootable image, just the pure unRAID usb stick, and it boots directly from that?)
  15. well, i actually answered all of your questions. when you pass through your hardware to unRAID (HBA, SATA controllers, or whatever), unRAID has full control of its features – and yes, it spins down all HDDs which are connected to that passed-through hardware. this includes things like hot-swap (if supported by your hardware) and other features. if done the right way, your 2nd question can be answered with yes! unRAID sees the drives as it would on bare-metal hardware (because you pass through the hardware your drives are connected to). your 3rd question refers to my hint about using plopkexec to boot natively into your unRAID installation on a USB stick (this is needed because the hypervisor can't boot a VM from a usb stick; plopkexec is a super small boot ISO which then boots unRAID from the USB stick in stage 2). this method allows a native unRAID setup (with your USB key) without any changes or modifications whatsoever. and the bonus is that you never need to think about it in the future: all changes which need to be written to the usb stick (where unRAID is installed) are written to the stick (and not to the ISO, as in the other guy's setup). so no worries in the future – and you could still boot unRAID on bare metal (without modifications). probably soon i'll tweak my setup again to go this virtualization route – because now i need the features of Proxmox, but i can't lose my unRAID server – so i'll redo all this on my newer rig. (a rough Proxmox wiring sketch is appended after this post list.) hope this helps with your decision making.
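
Appended sketch for the AdGuard Home post above: a minimal port probe to see which of the ports from the template actually answers with the web UI. The IP 192.168.1.50 is a placeholder for the container's dedicated IP (not from the original post); adjust it to your setup.

    # probe the ports mentioned in the template plus port 80
    for port in 80 3000 6052; do
      printf 'port %s: ' "$port"
      curl -s -o /dev/null -w '%{http_code}\n' --connect-timeout 2 "http://192.168.1.50:${port}/"
    done

A 200 on port 80 and connection failures (000) on the other two would match the behaviour described in the post.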
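
Appended illustration for the parity post above: a toy single-parity calculation on one byte position, only to show why parity can be rebuilt purely from the data drives and why it can't repair a half-written file. The byte values are arbitrary examples, not anything from unRAID itself.

    # three "data drives" holding one byte each; parity is their XOR
    d1=0xA5; d2=0x3C; d3=0x0F
    parity=$(( d1 ^ d2 ^ d3 ))
    printf 'parity byte: 0x%02X\n' "$parity"
    # if d2 is lost, XOR of the survivors plus parity reconstructs it
    printf 'rebuilt d2:  0x%02X\n' $(( d1 ^ d3 ^ parity ))

If a write is interrupted so that a data byte and the parity byte no longer agree, the XOR still produces a value, just an inconsistent one, which is why a parity sync after an unclean shutdown rebuilds parity from the data drives rather than the other way around.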
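
Appended sketch for the queue-depth posts above: a small loop over the same sysfs values quoted in the posts, printing the rotational flag and queue depth for every sd* device. NVMe cache drives show up as nvme* rather than sd*, which matches the note that the same path doesn't exist for them; the fallback just guards against devices missing the file.

    for dev in /sys/block/sd*; do
      name=$(basename "$dev")
      rot=$(cat "$dev/queue/rotational")
      qd=$(cat "$dev/device/queue_depth" 2>/dev/null || echo "n/a")
      echo "$name rotational=$rot queue_depth=$qd"
    done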
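
Appended sketch for the slow-reads-during-writes post above: a rough way to make the symptom measurable by timing a read from a device that is not being written to while a sustained write into the array runs. The paths and the large source file are placeholders (not from the original post); point them at an array disk, and at a file that already exists on your cache.

    # sustained write into the array in the background
    dd if=/dev/zero of=/mnt/disk1/write_test.bin bs=1M count=8192 oflag=direct &
    write_pid=$!
    # time a read from the cache pool while the write is running
    dd if=/mnt/cache/some_large_file.bin of=/dev/null bs=1M iflag=direct
    wait "$write_pid"
    rm -f /mnt/disk1/write_test.bin

Comparing the read throughput dd reports with and without the background write, on 6.6.x versus 6.7.x, would put numbers on the regression described in the post.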
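
Appended sketch for the Proxmox/plopkexec post above: roughly how such a VM could be wired up with the qm CLI, under assumptions not in the original post. The VM id, USB vendor:product id and PCI address are placeholders (look yours up with lsusb and lspci), and the boot-order syntax differs between Proxmox releases, so treat this as an outline rather than a recipe.

    qm set 100 --ide2 local:iso/plopkexec.iso,media=cdrom   # stage 1: the tiny plopkexec boot ISO
    qm set 100 --usb0 host=0781:5583                        # stage 2: the actual unRAID USB stick
    qm set 100 --hostpci0 01:00.0                           # pass through the HBA/SATA controller
    qm set 100 --boot order=ide2                            # boot the ISO first (flag syntax varies by version)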