KoNeko

Members
  • Posts: 149

Everything posted by KoNeko

  1. Currently 16 GB DDR3. The new setup, soon to be built, will have 64 GB DDR4 ECC. All parts are in except the CPU.
  2. Currently 38 TB; it will be more when I get my new system running. I'm going to replace the 3 TB drives with larger ones, and I can add more disks in the new build.
  3. I'm using Docker containers: ruTorrent and Medusa. Medusa is my RSS thingy and it works very nicely; I have almost everything automated. I use an RPi4 on my TV and stream directly to it without Plex etc. I don't like the overhead and I don't see the extra value it would bring.
  4. It might be that File Integrity is running on the same disk at this time, but in the File Integrity settings I have "When parity operation is running: don't start". I just can't see that reflected anywhere in the GUI.
  5. I have a very slow parity check going at the moment. I had this problem before, and when I reset Unraid it was all fine again (at least last time). I haven't reset it yet this time, but I have paused the check; it was going at 4-9 MB/sec.

     Unraid Status Notice [THANEKOS] - array health report [PASS] 1604272801
     Array has 8 disks (including parity & cache) - normal
     Parity - WDC_WD120EDAZ-11F3RA0_5PG67DBF (sdi) - active 38 C [OK]
     Disk 1 - WDC_WD120EDAZ-11F3RA0_5PG611EF (sdh) - active 38 C [OK]
     Disk 2 - WDC_WD120EDAZ-11F3RA0_8CKZESUF (sdf) - active 40 C [OK]
     Disk 3 - WDC_WD30EFRX-68AX9N0_WD-WMC1T0904397 (sdb) - active 38 C [OK]
     Disk 4 - WDC_WD30EFRX-68AX9N0_WD-WMC1T0969769 (sdc) - active 39 C [OK]
     Disk 5 - WDC_WD80EDAZ-11TA3A0_VDKUVMNK (sdg) - active 39 C [OK]
     Cache - Samsung_SSD_860_EVO_250GB_S4CJNZFN116988V (sdd) - active 34 C [OK]
     Cache 2 - Samsung_SSD_840_PRO_Series_S12RNEAD700080B (sde) - active 27 C [OK]
     Parity check in progress. Total size: 12 TB
     Elapsed time: 50 minutes
     Current position: 29.3 GB (0.2 %)
     Estimated speed: 4.7 MB/sec
     Estimated finish: 29 days, 6 hours, 3 minutes
     Sync errors corrected: 0

     Added my diagnostics. Edit: I let the integrity check finish and resumed the parity check, and now it runs OK again. Not top speed, but around 100-130 MB/s. thanekos-diagnostics-20201102-0320.zip
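     If it happens again, a quick way to rule out a single slow drive is to measure each disk's raw sequential read speed while the array is otherwise idle. A minimal sketch; the device names are the spinning disks from the health report above and can differ after a reboot:

       # rough sequential read benchmark per drive (run while the array is idle)
       for d in sdb sdc sdf sdg sdh sdi; do
         hdparm -t /dev/$d
       done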
  6. That sounds like a very big hassle? How do I do that? Or is there another way?
  7. Found 25 files with a BLAKE2 hash key mismatch warning: "BLAKE2 hash key mismatch (updated), blah/blah/aom-u-move/kowai/catacodec.data was modified" and "BLAKE2 hash key mismatch (updated), blah/blah/aom-u-move/kowai/local_files.data was modified". I had excluded the aom-u-move directory in the plugin, but it still flags files under that directory. These files change often, so I excluded the whole directory, but that doesn't seem to work.
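     In the meantime, a flagged file can be checked by hand against a known-good copy with b2sum from coreutils, which computes BLAKE2b hashes. A minimal sketch; the paths are the placeholder ones from the warning, and the backup location is an assumption:

       # hash the flagged file and a known-good copy; identical output means identical content
       b2sum "blah/blah/aom-u-move/kowai/catacodec.data"
       b2sum "/mnt/backup/kowai/catacodec.data"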
  8. It's nice that it is also being presented to the "normal" person this way. Very often it's the tech people in someone's life who recommend/set up something like this; they then set it up for mom and dad etc. I also don't entirely agree with everything in it. For the rest, it is an OK article.
  9. I'm Dutch and work in IT, but I keep everything in English: if you have problems, it's easier to search the interwebs for solutions. That said, I do think translating it to Dutch is a good idea.
  10. That it will take WAY too much time, for what I guess is very little performance increase.
  11. I only tested my 2x 12 TB ones; I don't want to defrag those. First 12 TB disk:

     xfs_db> frag
     actual 52445, ideal 50533, fragmentation factor 3.65%
     Note, this number is largely meaningless.
     Files on this filesystem average 1.04 extents per file
     xfs_db> frag -d
     actual 1629, ideal 1510, fragmentation factor 7.31%
     Note, this number is largely meaningless.
     Files on this filesystem average 1.08 extents per file
     xfs_db> frag -f
     actual 50816, ideal 49023, fragmentation factor 3.53%
     Note, this number is largely meaningless.
     Files on this filesystem average 1.04 extents per file

     Second 12 TB disk:

     xfs_db> frag
     actual 49875, ideal 42765, fragmentation factor 14.26%
     Note, this number is largely meaningless.
     Files on this filesystem average 1.17 extents per file
     xfs_db> frag -d
     actual 1514, ideal 1471, fragmentation factor 2.84%
     Note, this number is largely meaningless.
     Files on this filesystem average 1.03 extents per file
     xfs_db> frag -f
     actual 48359, ideal 41292, fragmentation factor 14.61%
     Note, this number is largely meaningless.
     Files on this filesystem average 1.17 extents per file
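     For reference, all three fragmentation reports can be pulled with one read-only xfs_db invocation instead of an interactive session. A sketch; /dev/md1 is an assumption, substitute the md device of the disk you want to inspect:

       # -r keeps xfs_db read-only; each -c runs one command
       xfs_db -r -c frag -c "frag -d" -c "frag -f" /dev/md1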
  12. Got a question: I had a few BLAKE2 corruptions, so I replaced those files and did a normal check, which says OK. So I did a check with the plugin again; it took 19 hours to do all 45k files. The check says finished and it has fewer corruptions now, but when I look at disk2.export.20201011.bad.log it still shows the old count, and the date on it is also not updated, while it finished checking the files this morning. Is there something I missed, some reason it didn't update this bad.log file? I also don't see any other one with a newer date.
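     One way to see whether a newer export landed somewhere else is to list every log the plugin has written, newest first. A sketch, assuming the exports live on the flash drive under /boot/config/plugins/dynamix.file.integrity/; adjust the path if yours differs:

       # list the plugin's log files, most recently modified first
       find /boot/config/plugins/dynamix.file.integrity -name '*.log' -printf '%T@ %p\n' | sort -rn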
  13. Check that one and others of his videos.
  14. Nice that you got it working. I also have a few QNAPs lying around doing nothing, and I'm looking for a backup solution; this might also do it for me.
  15. I'm looking into building a new server and I'm looking at https://www.asrockrack.com/general/productdetail.asp?Model=X470D4U with an AMD Ryzen 5 3600 and Samsung M391A4G43MB1-CTD DDR4, I think 64 GB; I got this from the QVL list on the ASRock site. The rest I could move over from my old setup. I was just wondering if anyone has the Ryzen 3600 running well on Unraid. I was reading a lot about it, and many don't have a problem with it.
  16. I also bought an 8 TB external and removed the casing, and it isn't helium-filled. BUT the 12 TB ones that I got are.
  17. The plugin "Fix Common Problems" said letsencrypt has an error because the name changed to SWAG, and asked if it could change a URL or something. So I clicked OK to change it, and it changed the logo, some text, etc. I tried a few times and it gave an error that a certificate could not be renewed, even though everything it reported looked correct: it did say it successfully added the DNS records etc. (using the DNS plugin) and removed them again, but it still failed and the docker didn't want to start. It also said "Plugin legacy name certbot-dns-transip:dns-transip may be removed in a future version. Please use dns-transip instead." I think I use the correct plugin. So I clicked Apply again for the Xth time to refresh/rebuild the docker. Finally, after the Xth time (lost count), all errors were gone, the certificate works, and the docker finally works. The only thing it still shows in the log is the following error, though everything seems to work again and it does not break the container just yet:

     nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)
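     To double-check which certbot DNS plugin the container actually has loaded, certbot can be asked directly inside the container. A sketch, assuming the container is named swag:

       # list the certbot plugins available inside the SWAG container
       docker exec swag certbot plugins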
  18. I do like the Recycle Bin plugin, but I have one case where a delete does not end up in the recycle bin. It might be a setting, but here it is: my workstation OS is Ubuntu, and if I remove a file there from a share, it gets nicely put in the bin and I can see it, retrieve it, etc. Now I have an Ubuntu VM on Unraid. In that VM I have configured an "Unraid Mount tag:" and added it inside the VM, so the VM sees that share as one of its own directories and has direct access to it: https://wiki.qemu.org/Documentation/9psetup Now if I remove a file in the VM on that share, it's just gone. Is there a setting I can change, or anything? (See the sketch below for a workaround.)
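     As far as I understand, the Recycle Bin plugin works through Samba's VFS recycle module, so only deletes that travel over SMB get intercepted; a 9p mount like the one from the QEMU page bypasses Samba entirely. Mounting the share over SMB inside the VM should restore the bin behaviour. A sketch with placeholder names (tower, myshare, koneko):

       # inside the Ubuntu VM: mount the share over SMB instead of 9p
       sudo apt install cifs-utils
       sudo mount -t cifs //tower/myshare /mnt/myshare -o username=koneko,uid=$(id -u)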
  19. Kind of a PRTG-ish something, or one of the other tools.
  20. Your parity disk needs to be the same size as or bigger than your biggest data disk, so the 6 TB is good. With one parity disk, one disk may fail and in theory you still keep all your data (a toy sketch of how that works is below). If you get more disks, you might think about a second parity disk; many people say one parity disk up to 20 disks works OK.
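     For intuition: Unraid's single parity is a bitwise XOR across all data disks, so any one missing disk can be recomputed from the parity disk plus the surviving disks. A toy sketch with made-up byte values:

       # three example data bytes, one per data disk
       d1=170; d2=204; d3=240
       parity=$(( d1 ^ d2 ^ d3 ))      # what the parity disk would store
       echo $(( parity ^ d1 ^ d3 ))    # rebuilds the lost d2: prints 204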
  21. I let the drive finish its check, and that is done. Now I'm preparing to test: I put the array in maintenance mode to check the disk, but I don't see the Check button. It was XFS before, like all the HDDs.
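     If the Check button won't appear, the same filesystem check can be run by hand from the console while the array is in maintenance mode. A sketch, assuming the disk in question is disk 1 (the md device number matches the disk slot; -n means check only, make no repairs):

       # read-only XFS check of disk 1 while the array is in maintenance mode
       xfs_repair -n /dev/md1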