
s.Oliver

Members
  • Posts

    308
  • Joined

  • Last visited

Everything posted by s.Oliver

  1. @bonienl has always done a great job. give him enough good feedback (in a polite way) and all will be good eventually. can't judge the recent changes though (still on 6.6.1) because of the special DVB build which is needed here – so i'll wait and be excited to see what is coming. i'm on the black theme and like it very much. question to the unRAID (@administrator) developers: is there any way in (slackware) linux to have those DVB drivers loaded as a kind of additional/external module (or whatever the technically correct wording is) at boot time? if so, maybe that could offer a way for CHBMB or @piotrasd to build a solution which could dramatically cut down on the manual work needed with each unRAID version released.
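     to illustrate what i mean, a minimal sketch (untested, and assuming such modules could be compiled against the exact kernel unRAID ships) of loading pre-built .ko files from the flash drive at boot via the go file:

         # hypothetical: pre-built DVB modules stored on the flash drive
         mkdir -p /boot/extra/dvb-modules
         # added to /boot/config/go, before emhttp starts:
         for m in /boot/extra/dvb-modules/*.ko; do
             insmod "$m" || echo "failed to load $m" >> /var/log/dvb-modules.log
         done

     insmod doesn't resolve dependencies, so load order would matter, and the modules must match the running kernel exactly – which is probably why full custom builds are done today.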
  2. well, can't answer that question. i'm not qualified to judge the work which needs to go into building them. but maybe @piotrasd is onto something here.
  3. well, it didn't take that long for a new version a while ago, when @CHBMB had more time on his hands. sometimes it was nearly instantaneous – unRAID released, DVB build a few hours later. but real life comes first...
  4. i'm fairly sure that the previously available & passed-through usb device with identifier 093a:2510 isn't known/available to the qemu process at launch time of your VM. you can look it up via "Tools > System Devices". it shows you a live view of all known devices – you can simply use your browser's search/find command to look for your device "093a:2510" if you think it has to be there (because you didn't change anything else). or, do this on an older build of unRAID (you already reverted back, i read), identify the device there (so you're sure the device with this address is available) – make a screenshot of it – then upgrade again and re-identify it. if it's there, note its address, go to the VM tab, edit your VM and make sure to pass through the right device at the bottom. the other logged lines "...vfio-pci 0000:03:00.0..." only indicate that a pci device (most probably your GFX card) was allocated to the VM at start time.
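     a quick extra check from the unRAID console (assuming a stock shell) to see whether the host sees the usb device at all before the VM starts:

         # prints a line like "Bus 003 Device 004: ID 093a:2510 ..." if the device is visible to the host
         lsusb -d 093a:2510
         # no output means the host doesn't see it, so qemu can't pass it through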
  5. @CHBMB was (probably still is) quite busy with his day job. so we need to wait until he finds time to do the compiles.
  6. well, @CHBMB's busy schedule saved him one build for 6.6.2 already 😁 now the race is on. who is faster? limetech with a new build, or can CHBMB catch up and bring out 6.6.3 right before? but it's all good... 6.6.1 works well for me (except another NVMe nightmare, but that's another thing for another thread).
  7. hey spaceinvaderone, would you consider telling us the name/type of the mainboards you use for your threadripper cpus? well, i can recommend the Noctua CPU air coolers; i've used several of them here, all excellent!
  8. ok, first (short) test is done. i used the GitHub build for 6.6RC4. the drivers get reported as 0.9.36 vs. 1.2.2 (from the LibreELEC build on 6.5.3). it looks good so far. those (sometimes) massive errors reported at the beginning of a recording are gone (just on some recordings TVH shows one data error – but that has always been like this), so i'm quite happy again. i always had to check the logs to see if these errors were real (happened while the recording was ongoing) or if they just popped up at the beginning and were meaningless. i might add, i use cable here and run a Digital Devices card which uses Sony CXD2854 chips. with the next RC build i'll probably try the LibreELEC build again, to check if those (maybe updated) drivers are better than in the past.
  9. ok, i'll try to find time for testing and maybe i can do two installs to see the difference. last time the github drivers had better error handling (tvheadend did not show those errors at the beginning of a recording, which drive me kinda nuts, because i often have to check the log to rule them out as bad recordings). AND another negative aspect of the drivers in the LibreELEC build is that a lot of these warnings get logged at the beginning of nearly every recording (and even more often with every epg grabber run):
     [WARNING] linuxdvb: Unable to provide BER value. –> this value is available in TVH on the Status / Stream page (when the driver supports it, otherwise it's always "0")
     [WARNING] linuxdvb: Unhandled ERROR_BLOCK_COUNT scale: 0
     [WARNING] linuxdvb: Unable to provide UNC value.
     i'll post back once i've done my tests with these builds.
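     in case it helps anyone doing the same comparison, a quick way to count those warnings in the TVHeadend docker log ("tvheadend" is just a placeholder container name, adjust to your setup):

         # count the linuxdvb warnings in the container log
         docker logs tvheadend 2>&1 | grep -cE "linuxdvb: (Unable to provide (BER|UNC) value|Unhandled ERROR_BLOCK_COUNT)"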
  10. a big "thank you" goes out to all involved. respect for your work, which makes my unRAID (still find that one more fitting) usage more complete. @CHBMB you've done a new Digital Devices Github build this time. may i ask if that or the LibreELEC build offers the newer drivers for those cards right now?
  11. i know, i know – i just thought i'd missed out on that mysterious RC1 build 😁
  12. hey CHBMB, hey guys, in preparation for Unraid 6.6 i'm checking from time to time for new Unraid DVB builds with drivers for Digital Devices cards (=LibreELEC). 2 posts above @CHBMB noted that there are currently issues building, so i guess it'll take a while until they are resolved, right? now @olfolfolf said he stays on RC1 until... so am i wrong, or was there already a new build at some point that i never caught (still on 6.5.3 here, and no update for the DVB Edition plug-in has ever popped up since)? just curious how things are going (i'm hoping for a fix for my NVMe drive through a newer kernel, etc.).
  13. what's your use case? why the need for crazy read speeds (which depend heavily on how contiguously the blocks were written in the first place)? ZFS, arguably the world's safest file system, is software-only and needs direct hardware access to the drives (so no RAID controller in between). i've already run into problems with RAID controllers (mostly in RAID enclosures), so whenever possible i don't go for hardware RAID x anymore. and in the worst of all cases, where more drives die than your RAID config can deal with – in unRAID's case all other drives still have their data intact, because each of them has its own regular file system (xfs), which can be read by any linux system.
     unRAID has its disadvantages too, no doubt. it's not meant to be an ultra-fast NAS/RAID in terms of disk throughput. but it can be an ultra cool NAS, with a twist. you can tune different areas (like using cache drives), you can pass through independent drives (outside the array) to VMs, etc. – a lot of stuff which keeps this solution ultra flexible. adding drives of different sizes to the array, and more. and it can keep heat and energy usage low because of its don't-use, don't-spin approach (but you decide: don't want to wait on a drive which needs to spin up to access files? ok, don't let it spin down – it's just a setting, per disk if you want).
     now think of a RAID controller failure – you'd have a pretty hard time ahead (worst case, you can't get the same card and the new one can't read the RAID signature from your drives, oops). so think for a minute about what you want to do with your rig. you want to game on it – you can with unRAID (me and friends do it). you want to use it as a daily mainstream workstation – you can. having stuff like a plex server, tv recorder or both and more – you can. no need for that 8-drive hardware-based RAID 6 for that. buy one good SSD or NVMe as a cache drive and you're set.
     and whenever you need to move to new hardware, it's super easy with unRAID – take your USB stick, plug your disks into your SATA ports or HBA card and boot from USB. there's no such thing as "will it recognize my RAID 6 from the previous controller card" or whatever. i've done it several times, it's too cool how easy that is. and btw, the fewer drives spinning, the quieter your rig gets. so 8 drives spinning all the time vs. 0 drives in the best case (example: Windows VM with GPU passthrough in a gaming session). what do you want to do with it?
  14. ok, meanwhile i switched to unRAID as the native boot OS (so i have all virtualization features available). but if you don't need them inside unRAID – because you do them on Proxmox – then i'd probably go the same route i took back then. there are a few things to think about:
     a) one of the coolest features of unRAID is that it's not a RAID x with all disks spinning all the time – see the name, "un" & "RAID" – which means unRAID needs to be the only one handling the disks (and/or the controller they are connected to) that need to spin down. passing disks via PVE into any VM (also unRAID) doesn't let it spin them down, because PVE never spins them down on its own. so i passed through a whole host adapter with all disks connected to it, so they were only available in the unRAID universe – and all features related to these disks worked as expected.
     b) plug-ins / dockers: they worked inside unRAID as expected (well, at least everything i tested, though back then i didn't need any hardware access from dockers, like for example a TV card for TVHeadend). but i would expect that to work too, because if the hardware is passed through like the HBA, then why not. the only additional layer of (possible) problems could come from the bootloader (plopkexec) in front of unRAID's kernel – it worked perfectly with my HBA, YMMV.
     c) PVE: on the PVE side of things you can do whatever you like (because all disks related to unRAID are, and should be, passed through via hardware into the VM). so go ahead with ZFS (if you've got enough RAM) or any other kind of RAID which is supported.
     d) bootloader vs. boot image: Proxmox doesn't boot a VM from a USB stick directly (which is unRAID's way, and it needs the USB stick's UUID for verifying your license anyway). the bootloader approach (plopkexec) has the advantage that the unRAID VM is eventually booted from the USB stick (PVE boots the bootloader, and that in turn loads unRAID from the USB stick). because of this, unRAID sees no difference in its boot device and everything works just like when unRAID boots natively from the stick (all updates, changes, etc. are written directly to the USB stick and are available on the next boot). a VM disk image approach works too (i read about it back then), but has the disadvantage that you need to manually copy changes made to the VM disk image back to the USB stick before booting again.
     i would choose Proxmox again over any other hypervisor. alright, hope this helps with the decision making.
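     for reference, passing a whole HBA through to the unRAID VM on the Proxmox side is essentially one line (VM id 100 and PCI address 01:00.0 are placeholders – check yours with lspci, and IOMMU/VT-d must be enabled):

         # find the HBA's PCI address (example output: "01:00.0 Serial Attached SCSI controller: LSI ...")
         lspci | grep -iE "sas|sata|raid"
         # pass the whole controller through to the unRAID VM
         qm set 100 --hostpci0 01:00.0
         # this ends up as a "hostpci0: 01:00.0" line in /etc/pve/qemu-server/100.conf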
  15. not using this docker, but i see the same behavior with unRAID whenever (especially) write access into the array starts AND the destination drives are spun down (so they need to come online / spin up), and this delays other parts of unRAID – especially all tasks which are time sensitive (e.g. i had timeouts in other dockers because of this). i would guess this is related to the virtual file system unRAID uses to manage the whole array (all drives) as one big file system – probably a kind of locking mechanism. it often helps to manually spin up the destination drive(s) [incl. parity] before any write operations start.
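     a crude way to wake the destination drives ahead of a scheduled job from the command line (the device names are examples – the per-disk spin-up buttons in the webGUI do the same thing):

         # /dev/sdb and /dev/sdc are example device names, check yours under Tools > System Devices
         for d in /dev/sdb /dev/sdc; do
             hdparm -C "$d"                                                  # show current power state (active/idle vs. standby)
             dd if="$d" of=/dev/null bs=1M count=1 iflag=direct 2>/dev/null  # a direct read forces the drive to spin up
         done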
  16. hmm... maybe there's still a problem with a wrong authentication method being used by unRAID. before i recognized that, my setup looked like this: the email server was not using SSL/TLS and allowed a bunch of authentication methods, like CRAM-MD5, LOGIN, Kerberos… after i saw these errors, the setup had changed to this: the email server was set up to accept TLS, but also non-TLS connections, and i disabled the LOGIN authentication. nothing else changed (same account, same passwords). but, because of the now disabled LOGIN authentication method, the unRAID server yells "Server didn't like our AUTH LOGIN (504 Authentication method not enabled)" – yeah, right, because i configured you to use CRAM-MD5. so it was using the "LOGIN" authentication method all the time, despite me setting it to CRAM-MD5. i've used the "Custom" preset, because i run my own email server (postfix). i've run this server for years and it handles quite a lot of domains/users. everything works for everybody, only unRAID seems to misbehave here – it simply doesn't use the chosen authentication method. the email server log when unRAID tries to send email (or when i hit the "Test" button):
     Creating SSL connection to host
     SSL connection using DHE-RSA-AES256-SHA
     Server didn't like our AUTH LOGIN (504 Authentication method not enabled)
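     for anyone wanting to double-check which AUTH mechanisms their server actually advertises over STARTTLS (hostname and port are placeholders):

         # mail.example.com:587 is a placeholder – use your own server and submission port
         openssl s_client -starttls smtp -connect mail.example.com:587 -crlf -quiet
         # then type:  EHLO test
         # the server answers with a line like "250-AUTH CRAM-MD5 DIGEST-MD5" listing the enabled mechanisms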
  17. thx. didn't help either. just bought myself the breaking bad box and can't rip seasons four and five (now tested with 1.12.0 and the latest 1.12.2). then i tried v1.12.2 on the mac (with an ext. slim drive) and there it worked.
  18. @TheClaus how did you tag the docker to receive the v1.12.0 version?
  19. well, i see some similar stuff here too, but i haven't diagnosed whether it's related to an HBA or not. just now i had two occurrences only 1 hr 15 mins apart, and this happened: a plex client was streaming a movie and suddenly the playback froze for probably 10-15 secs, then resumed. at the same time i could verify that a remote backup to the unRAID server had to delete some files, which forced the two parity drives to spin up. also at the same time, all recordings within TVHeadend (docker) had errors logged. in the next 75 mins the unused drives spun down (after the 30 min delay i've set) and then the exact same scenario repeated: the streamed movie froze, all recordings TVH was doing logged errors (again caused by the spin-up of the parity drives for write/delete operations in a share). i'll watch this (hopefully) more closely soon.
  20. well, well, solved. the SSD was connected to an LSI controller. changed it to the internal SATA controller and it's getting its trim command fine.
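     in case others hit the same thing, the kernel will tell you whether discard/TRIM can be issued to a device at all (/dev/sdX is a placeholder):

         # non-zero DISC-GRAN/DISC-MAX values mean the kernel can issue discards to that device
         lsblk --discard /dev/sdX
         # the drive itself reports TRIM support here (only works if the controller passes the query through)
         hdparm -I /dev/sdX | grep -i trim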
  21. @bonienl ideas for improvement:
     1) option to run the trim command after the mover has finished: instead of running it at a static time via the cron schedule, run it automatically after the mover job is finished. in my case the biggest delete operations almost always occur when the mover deletes the copied data from the cache SSD/NVMe. or even better: add it as an optional checkbox in the 'Scheduler' preferences, which could read: "Automatically run after the 'Mover' has finished". then we'd have the best of both worlds – a cron-style fixed time (which could be disabled if no longer needed) and automatic runs after the mover operation.
     2) make it possible to choose which devices should get trimmed. in combination this would make a super flexible solution (for example: cache gets trimmed automatically after the mover finishes, but a UD mount only via the cron call). but i know, that asks for quite some programming work. thx a lot for your solutions.
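     as an interim workaround until something like this exists, a rough sketch of a user script that waits for the mover and trims the cache afterwards (the mover path and /mnt/cache are assumed to be the unRAID defaults – adjust if yours differ):

         #!/bin/bash
         # wait until the mover is done, then trim the cache pool
         while pgrep -f '/usr/local/sbin/mover' > /dev/null; do
             sleep 60                  # mover still running, check again in a minute
         done
         fstrim -v /mnt/cache          # -v prints how many bytes were trimmed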
  22. i know, that's exactly what i'll do at the next possible occasion. probably one of them is connected to an LSI controller.
  23. yes, i do see those others getting trimmed as well. but i've got an assumption... wasn't there talk a while ago (or maybe really long ago) about which controllers did/didn't support trim commands? these two identical SSDs are for sure not connected to the same controller. but i can't physically check right now which drive is connected to which controller; will do asap. is there any CLI command which lists devices according to the controller they are attached to?
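     partly answering my own question – the by-path aliases encode the PCI address of the controller each disk hangs off (the example entries below are just an illustration):

         ls -l /dev/disk/by-path/
         # example:  pci-0000:01:00.0-sas-...  -> ../../sdd   (disk on the LSI HBA)
         #           pci-0000:00:17.0-ata-2    -> ../../sdb   (disk on the onboard SATA controller)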
  24. @bonienl Dynamix SSD TRIM Plug-In Problem hi bonienl, i temporarily replaced my NVMe cache device (system stability problems) with a regular Samsung SSD 850 EVO and now your Dynamix SSD TRIM doesn't trim it. maybe your plug-in gets confused because i have the exact same model/type of SSD installed once more and used as a UD disk, which does get trimmed. i could imagine your plug-in trims the first disk it finds and then moves on to other devices (with a different name), no? my system has been up for 3 days, and in the logs i see the trim gets executed, but never for the cache SSD; the UD SSD gets trimmed every time.