Cull2ArcaHeresy

Everything posted by Cull2ArcaHeresy

  1. Spaceinvaderone's video was all USB 3 drives (or at least the top 3 were), but I know that was just extreme-case stress testing with temperature readings. I just had the one issue that was easy to fix (I'll check the boot flag first if it happens again instead of reimaging), but if the issue is recurring then I'll move my license to a new USB 2 drive. I was hoping by now that the anecdotal nature of it was historical rather than current, but I guess we're not there yet due to the amount of old hardware still in use and/or the tech in the USB 3 drives.
  2. After taking care of the failed drive, clearing off 4 more of the 4tb drives that are to be in the 8 drive archive_two pool, and dealing with a boot USB issue, I updated to 6.9.2 with no problems. After the upgrade, I assigned the 8 drives to the new pool archive_two, started the array, and then used the GUI to set it to raid6, and it worked. Since it worked I forgot to copy the command from the syslog, so I just did balances from raid6 -> raid0 -> raid6. If someone else has issues, maybe this could be of help.

       Apr 19 23:35:10 Raza ool www[29765]: /usr/local/emhttp/plugins/dynamix/scripts/btrfs_balance 'start' '/mnt/archive_two' '-dconvert=raid6,soft -mconvert=raid1c3,soft'
       Apr 19 23:35:10 Raza kernel: BTRFS info (device sdx1): balance: start -dconvert=raid6,soft -mconvert=raid1c3,soft -sconvert=raid1c3,soft
       Apr 19 23:35:10 Raza kernel: BTRFS info (device sdx1): balance: ended with status: 0
       Apr 19 23:37:11 Raza ool www[29765]: /usr/local/emhttp/plugins/dynamix/scripts/btrfs_balance 'start' '/mnt/archive_two' '-dconvert=raid0,soft -mconvert=raid1,soft'
       Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): balance: start -dconvert=raid0,soft -mconvert=raid1,soft -sconvert=raid1,soft
       Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): relocating block group 24894242816 flags metadata|raid1c3
       Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): relocating block group 24860688384 flags system|raid1c3
       Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): relocating block group 24827133952 flags system|raid1c3
       Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): relocating block group 18384683008 flags data|raid6
       Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): relocating block group 11942232064 flags data|raid6
       Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): relocating block group 11908677632 flags system|raid1c3
       Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): relocating block group 10834935808 flags metadata|raid1c3
       Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): found 3 extents, stage: move data extents
       Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): clearing incompat feature flag for RAID1C34 (0x800)
       Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): relocating block group 4392484864 flags data|raid6
       Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): clearing incompat feature flag for RAID56 (0x80)
       Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): balance: ended with status: 0
       Apr 19 23:37:19 Raza ool www[36771]: /usr/local/emhttp/plugins/dynamix/scripts/btrfs_balance 'start' '/mnt/archive_two' '-dconvert=raid6,soft -mconvert=raid1c3,soft'
       Apr 19 23:37:19 Raza kernel: BTRFS info (device sdx1): balance: start -dconvert=raid6,soft -mconvert=raid1c3,soft -sconvert=raid1c3,soft
       Apr 19 23:37:19 Raza kernel: BTRFS info (device sdx1): setting incompat feature flag for RAID56 (0x80)
       Apr 19 23:37:19 Raza kernel: BTRFS info (device sdx1): setting incompat feature flag for RAID1C34 (0x800)
       Apr 19 23:37:19 Raza kernel: BTRFS info (device sdx1): relocating block group 54019489792 flags data|raid0
       Apr 19 23:37:19 Raza kernel: BTRFS info (device sdx1): relocating block group 52945747968 flags metadata|raid1
       Apr 19 23:37:19 Raza kernel: BTRFS info (device sdx1): relocating block group 52912193536 flags system|raid1
       Apr 19 23:37:19 Raza kernel: BTRFS info (device sdx1): relocating block group 44322258944 flags data|raid0
       Apr 19 23:37:19 Raza kernel: BTRFS info (device sdx1): relocating block group 35732324352 flags data|raid0
       Apr 19 23:37:19 Raza kernel: BTRFS info (device sdx1): relocating block group 27142389760 flags data|raid0
       Apr 19 23:37:20 Raza kernel: BTRFS info (device sdx1): relocating block group 27108835328 flags system|raid1
       Apr 19 23:37:20 Raza kernel: BTRFS info (device sdx1): relocating block group 27075280896 flags system|raid1
       Apr 19 23:37:20 Raza kernel: BTRFS info (device sdx1): relocating block group 27041726464 flags system|raid1
       Apr 19 23:37:20 Raza kernel: BTRFS info (device sdx1): relocating block group 25967984640 flags metadata|raid1
       Apr 19 23:37:20 Raza kernel: BTRFS info (device sdx1): found 3 extents, stage: move data extents
       Apr 19 23:37:20 Raza kernel: BTRFS info (device sdx1): balance: ended with status: 0

     Speaking of USB, is using USB 3 still a bad thing? I migrated from a 10+ year old 2gig drive to a new 64gig USB 3.1 drive. After having the issue I got new USB 2 drives to use if need be, but haven't switched yet, since I didn't want to blacklist the 3.1 drive if possible. I got it working by making a copy of the contents, reflashing it from the My Servers backup, and then copying all the contents back (for custom scripts and ssh keys). My local backup wasn't up to date, so I had to go around that way. The drive works fine, but it was not bootable until I reflashed it. I'm assuming the boot flag got unset somehow, which would have been an easier fix. So is USB 3 still a thing to avoid, or is that old news? Setting this as solved since the balance issue seems to be fine. Should it still be filed as a 6.9.1 bug for documentation's sake, assuming that was the issue?
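     For reference, if the GUI button doesn't fire the conversion for you, the manual equivalent of what it ran should be roughly this (run from the console, with the mount point adjusted to your pool):

       # convert data chunks to raid6 and metadata/system chunks to raid1c3;
       # "soft" only rewrites chunks that aren't already in the target profile
       btrfs balance start -dconvert=raid6,soft -mconvert=raid1c3,soft /mnt/archive_two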
  3. Safe mode + GUI, local browser, same results. I did it that way so there is no chance it could be any kind of plugin/extension in the browser or an Unraid plugin. When I boot into GUI or safe mode + GUI, if I have a monitor plugged in and am using the iDRAC console, after the boot select menu and part of the boot text, both just show a black screen. Not sure if that is an Unraid thing or a hardware thing. Web access still works, but local does not. I did not try non-GUI mode. Integrated graphics... I was using the front VGA on the 720xd, not the rear one, if that makes any difference. I don't think I ever had both going at the same time before, so I can't say whether this is new. It's also rare to need local access anyway; I just have the default boot set to GUI so that if I do need it, it is there. A main array drive failed, and I don't have a replacement drive for it, so I am using unbalance to move the emulated contents off. At ~25 MB/s for the ~7 TB left, that means roughly a 75 hour ETA. In the meantime I am leaving optional things offline to reduce array stress (like binhexrtorrentvpn). Because I wanted to resolve this drive first, I have not installed 6.9.2 yet. I did go ahead and delete archive_two and add 2 of its drives to make archive_one into an 8 drive pool. Balance in the web UI had the same results, so I used the command to balance it since I had added 2 drives (even though it's still empty). In a few days, after resolving the failed drive and updating to 6.9.2, I'll try again to see if the web UI calls the right command. Thanks for the help so far.
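     For reference, the ETA math (assuming that ~25 is MB/s, not megabits):

       # ~7 TB left at ~25 MB/s
       # 7,000,000 MB / 25 MB/s = 280,000 s, or roughly 78 hours
       echo $(( 7000000 / 25 / 3600 ))   # prints 77 (hours, rounded down)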
  4. Command/CLI output:

       root@Raza:~# btrfs balance start -dconvert=raid6 -mconvert=raid1c3 /mnt/archive_one
       Done, had to relocate 3 out of 3 chunks

     Log:

       Apr 7 03:45:46 Raza kernel: BTRFS info (device sdak1): balance: start -dconvert=raid6 -mconvert=raid1c3 -sconvert=raid1c3
       Apr 7 03:45:46 Raza kernel: BTRFS info (device sdak1): setting incompat feature flag for RAID1C34 (0x800)
       Apr 7 03:45:46 Raza kernel: BTRFS info (device sdak1): relocating block group 14223933440 flags metadata|raid1
       Apr 7 03:45:47 Raza kernel: BTRFS info (device sdak1): found 3 extents, stage: move data extents
       Apr 7 03:45:47 Raza kernel: BTRFS info (device sdak1): relocating block group 14190379008 flags system|raid1
       Apr 7 03:45:47 Raza kernel: BTRFS info (device sdak1): setting incompat feature flag for RAID56 (0x80)
       Apr 7 03:45:47 Raza kernel: BTRFS info (device sdak1): relocating block group 13116637184 flags data|raid1
       Apr 7 03:45:47 Raza kernel: BTRFS info (device sdak1): balance: ended with status: 0

     And it shows up as raid6 in the pool settings, and the pool capacity is correct at 16TB now. I have archive_two also empty if there are other things you need tested for debugging. I'm not putting anything in either pool yet because I'm considering switching to 3 pools of 8 drives instead of 4 pools of 6.
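     If anyone wants to double-check from the CLI rather than the pool settings page, these should show the same thing (mount point adjusted to your pool):

       # data should read RAID6, metadata/system RAID1C3
       btrfs filesystem df /mnt/archive_one

       # per-device allocation plus estimated usable space
       btrfs filesystem usage /mnt/archive_one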
  5. I upgraded from 6.8.3 to 6.9.1 and while at it finally upgraded my flash drive from an OLD 2gig that was from sometime before 2010. VMs and dockers seem to be working fine now, and the main array and cache are normal. I created a new pool "archive_one" with 6 * 4tb SAS drives. Now I'm trying to rebalance it to raid6 instead of the default raid1. I started it last night and it ran for over 8 hours displaying this:

       Data, RAID1: total=1.00GiB, used=0.00B
       System, RAID1: total=32.00MiB, used=16.00KiB
       Metadata, RAID1: total=2.00GiB, used=128.00KiB
       GlobalReserve, single: total=3.25MiB, used=16.00KiB

       btrfs balance status: Balance on '/mnt/archive_one' is running
       2 out of about 3 chunks balanced (3 considered), 33% left

     Since then I have tried again a couple of times while also watching the log (below). Sometimes it would go right to the above display and then nothing; sometimes the UI showed something like the above but with 1 of 3 chunks balanced (2 considered), 66% left. Both UI displays had the same log output: a quick blurb and then nothing else printed, and on the Main page there are no reads/writes on the drives, which confirms that nothing is happening.

       Apr 6 19:40:13 Raza ool www[15440]: /usr/local/emhttp/plugins/dynamix/scripts/btrfs_balance 'start' '/mnt/archive_one' ''
       Apr 6 19:40:13 Raza kernel: BTRFS info (device sdak1): balance: start -d -m -s
       Apr 6 19:40:13 Raza kernel: BTRFS info (device sdak1): relocating block group 9861857280 flags metadata|raid1
       Apr 6 19:40:13 Raza kernel: BTRFS info (device sdak1): found 3 extents, stage: move data extents
       Apr 6 19:40:13 Raza kernel: BTRFS info (device sdak1): relocating block group 9828302848 flags system|raid1
       Apr 6 19:40:14 Raza kernel: BTRFS info (device sdak1): found 1 extents, stage: move data extents
       Apr 6 19:40:14 Raza kernel: BTRFS info (device sdak1): relocating block group 8754561024 flags data|raid1
       Apr 6 19:40:14 Raza kernel: BTRFS info (device sdak1): balance: ended with status: 0

     Posting here as a new thread since a reply on the original post said to, and said that the "balance start command is missing some arguments". raza-diagnostics-20210407-0234.zip
  6. I upgraded from 6.8.3 to 6.9.1 and while at it finally upgraded my flash drive from an OLD 2gig that was from sometime before 2010. VMs and dockers seem to be working fine now, and the main array and cache are normal. I created a new pool "archive_one" with 6 * 4tb SAS drives. Now I'm trying to rebalance it to raid6 instead of the default raid1. I started it last night and it ran for over 8 hours displaying this:

       Data, RAID1: total=1.00GiB, used=0.00B
       System, RAID1: total=32.00MiB, used=16.00KiB
       Metadata, RAID1: total=2.00GiB, used=128.00KiB
       GlobalReserve, single: total=3.25MiB, used=16.00KiB

       btrfs balance status: Balance on '/mnt/archive_one' is running
       2 out of about 3 chunks balanced (3 considered), 33% left

     Since then I have tried again a couple of times while also watching the log (below). Sometimes it would go right to the above display and then nothing; sometimes the UI showed something like the above but with 1 of 3 chunks balanced (2 considered), 66% left. Both UI displays had the same log output: a quick blurb and then nothing else printed, and on the Main page there are no reads/writes on the drives, which confirms that nothing is happening.

       Apr 6 19:40:13 Raza ool www[15440]: /usr/local/emhttp/plugins/dynamix/scripts/btrfs_balance 'start' '/mnt/archive_one' ''
       Apr 6 19:40:13 Raza kernel: BTRFS info (device sdak1): balance: start -d -m -s
       Apr 6 19:40:13 Raza kernel: BTRFS info (device sdak1): relocating block group 9861857280 flags metadata|raid1
       Apr 6 19:40:13 Raza kernel: BTRFS info (device sdak1): found 3 extents, stage: move data extents
       Apr 6 19:40:13 Raza kernel: BTRFS info (device sdak1): relocating block group 9828302848 flags system|raid1
       Apr 6 19:40:14 Raza kernel: BTRFS info (device sdak1): found 1 extents, stage: move data extents
       Apr 6 19:40:14 Raza kernel: BTRFS info (device sdak1): relocating block group 8754561024 flags data|raid1
       Apr 6 19:40:14 Raza kernel: BTRFS info (device sdak1): balance: ended with status: 0
  7. That seems to have been the issue (or at least fixed it), thanks.
  8. I thought it was part of the core functionality.

       [25.03.2021 19:02:07] WebUI started.
       [25.03.2021 19:02:30] _cloudflare: Plugin will not work. rTorrent user can't access external program (python).
       [25.03.2021 19:02:30] _task: Plugin will not work. rTorrent user can't access external program (php).
       [25.03.2021 19:02:30] autotools: Plugin will not work. rTorrent user can't access external program (php).
       [25.03.2021 19:02:30] create: Plugin will not work. rTorrent user can't access external program (php).
       [25.03.2021 19:02:30] datadir: Plugin will not work. rTorrent user can't access external program (php).
       [25.03.2021 19:02:30] history: Plugin will not work. rTorrent user can't access external program (php).
       [25.03.2021 19:02:30] loginmgr: Some functionality will be unavailable. rTorrent user can't access external program (php).
       [25.03.2021 19:02:30] ratio: Some functionality will be unavailable. rTorrent user can't access external program (php).
       [25.03.2021 19:02:30] retrackers: Plugin will not work. rTorrent user can't access external program (php).
       [25.03.2021 19:02:30] rss: Plugin will not work. rTorrent user can't access external program (php).
       [25.03.2021 19:02:30] rutracker_check: Plugin will not work. rTorrent user can't access external program (php).
       [25.03.2021 19:02:30] scheduler: Plugin will not work. rTorrent user can't access external program (php).
       [25.03.2021 19:02:30] trafic: Plugin will not work. rTorrent user can't access external program (php).
       [25.03.2021 19:02:30] unpack: Plugin will not work. rTorrent user can't access external program (php).
       [25.03.2021 19:02:30] xmpp: Plugin will not work. rTorrent user can't access external program (php).
       [25.03.2021 19:02:30] _task: Plugin will not work. rTorrent user can't access external program (pgrep).
       [25.03.2021 19:02:30] mediainfo: Plugin will not work. rTorrent user can't access external program (mediainfo).
       [25.03.2021 19:02:30] rss: Some functionality will be unavailable. rTorrent user can't access external program (curl).
       [25.03.2021 19:02:30] screenshots: Plugin will not work. rTorrent user can't access external program (ffmpeg).
       [25.03.2021 19:02:30] spectrogram: Plugin will not work. rTorrent user can't access external program (sox).

     Is this related to it?
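     In case it helps with debugging, this is roughly how I'd check whether the rTorrent user can actually see those programs inside the container (the container name and user here are assumptions, adjust to your setup):

       # run the lookup as the same unprivileged user rtorrent runs under
       docker exec -u nobody binhex-rtorrentvpn which php python pgrep mediainfo curl ffmpeg sox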
  9. Any indication of an issue before starting the array? I was thinking of putting a clean trial copy of Unraid on another USB drive to boot from to check before upgrading. Could this be a valid way to check hardware/drive compatibility before upgrading?
  10. Does the coloring function work relative to the disks in the array instead of to 100%? Your description here made me think of conditional formatting in Excel, like so (with round-up applied to all percentages to avoid the middle yellow color, for the purposes of this point).
  11. I thought it was saying that if you chose raid6 it would replace it with raid1c3, but the behind-the-scenes reason makes much more sense. Functionally speaking (for the parity calculation), isn't Unraid single/double parity the same as raid5/6, with the same vulnerability? I do have a UPS that estimates ~30 minutes of runtime, but I know any unclean shutdown/system crash could still cause problems. I know it is all a balancing act of capacity vs redundancy/risk. Considering all these 4tb SAS drives are used, and the seller told me to expect ~1 drive failure per year if they're on 24/7, I might end up going with raid1(c2) just for less stress on the drives... raid50 or raid60 would be a great option, but I'm not seeing those in the btrfs lists. The Unraid array is at capacity at 28 + 2 parity. Also only 3 open bays, but a second DS4246 has been ordered.
  12. NEW POOL CLARIFICATION

     If I'm following the new pool stuff right, it basically allows 30 disks * 34 pools * disk size of total storage (35 pools if no cache). A share will show files from all pools, but having one share span multiple pools requires manual movement. But the downsides:

     - mover only goes between the main array and cache (no cache/fast/slow 3-tier option or cache/other-pool option)
     - a pool can only be raid0 or raid1 (with 2, 3, or 4 copies of data). Raid6 is replaced with raid1c3, meaning instead of (#disks - 2 parity) / #disks you only get 1/3 of the storage max? Is "super reduced capacity (for more than a few drives), but less stress during rebuild + higher redundancy" the only option here?

     WHAT I AM TRYING TO DO, RELATED TO NEW POOLS BUT CAN MOVE TO ANOTHER THREAD IF NEED TO

     I have 14 x 4tb SAS drives in my main array that I would like to move into a pool along with some more 4tb drives that are not in use. I was looking at getting another server just for this pool of 4tb disks, probably running TrueNAS, but for now I do not need more server horsepower and the pool will be under 30 disks. To do this, my thought is to:

     1. make a pool with the ~10 drives that are not in use
     2. move the share that I want on this pool onto it from the main array (or at least as much as will fit)
     3. use unbalance to clear the rest of the contents of the 4tb drives that are in the main array
     4. remove them from the array, which will result in rebuilding double parity for 2 days
     5. add the drives to the pool, and I assume do some kind of rebalancing operation since the drive count would have just gone up 2.4x (rough sketch below)

     Depending on available space, I might need to do steps 2-5 in 2 or 3 batches instead of all at once. Either way this will be at least a few days out, as I only have a few open bays and need to order another DAS. After expanding the bay count, and upgrading from 6.8.3, is that the right way to do it? The 4tb drives in the main array are disk11-disk24, so after this whole process is over I'll have to reassign disk positions so the main array is contiguous (I know it doesn't have to be, but I'd rather). Unless it would be better to reorder the drives before moving to the new pool, but either way I have to do parity rebuilds. I've been wanting a reorder anyway, as it currently goes 12tb*2, 6tb*3, 12tb*5, 4tb*14, 6tb*1, 8tb*3 (the 6*3 came from another system; the 6*1 and 8*3 were shucked externals), but have been putting it off due to the parity rebuilds.
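     For step 5, I'm assuming Unraid's GUI handles this itself when extra drives are assigned to an existing pool, but the underlying btrfs operations would be roughly this (device path and mount point are placeholders):

       # add the new device(s) to the mounted pool
       btrfs device add /dev/sdX1 /mnt/archive_one

       # rewrite existing chunks so they get spread across the new, larger set of drives
       btrfs balance start --full-balance /mnt/archive_one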
  13. So I guess the support response from my seedbox a while back was not 100% accurate, and it was more of an issue with whatever hardware they use, missing optimizations, and/or maybe even whether they use btrfs. I'll dedicate a couple of drives for the p2p share to be on and disable cache on that share to remove possible problems. Would also love to know what tricks Cat_Seeder recommends.
  14. I was having issues with my seedbox over a year ago (probably closer to 2), and the tl;dr of the quoted support response from the provider is that too many active torrents is a problem (I currently tend to have around 75 active with no problems on there). The issues on the seedbox were "move when completed" not always working, torrents going into a pausing state, and rtorrent crashing. Over the past couple of months I remembered the seedbox issue and applied that to binhexrtorrentvpn (I was having issues similar to cliff), and the container has been working pretty well since. Getting uploads and downloads, but I run into issues if there are more than 100 to 150 non-stopped torrents, or if the active count is over about 30. I'm running dual E5-2695v2 (2 * 12 cores, hyperthreaded, 2.4GHz), 128GB RAM, with nothing pinned, PIA VPN. My plan has been to make a manager to coordinate between multiple containers, like an "rtorrent container swarm" thing (multiple containers networked through 1 running the VPN & manager)... but I have not gotten around to it yet, as it feels like an over-engineered solution. Maybe others can weigh in as to how many non-stopped and active torrents they have... or other knowledge of what our problems might be.
  15. anyone else having this issue with the German server (or others)?

       sh-5.0# ./testvpn.sh
       66.115.142.201
       CA
       Canada
       66.115.142.201
       sh-5.0#
       sh-5.0# ./testvpn.sh
       212.102.49.91
       ES
       Spain
       212.102.49.91
       sh-5.0#
       sh-5.0# ./testvpn.sh
       195.246.120.122
       SE
       Sweden
       195.246.120.122
       sh-5.0#
       sh-5.0# ./testvpn.sh
       154.13.1.102
       DE
       United States
       154.13.1.102
       sh-5.0#

     for reference:

       sh-5.0# cat testvpn.sh
       curl ifconfig.io && curl ifconfig.io/country_code && curl ifconfig.co/country && curl ifconfig.co
  16. Not having all country codes memorized, I was trying to alter my standard "get IP & location" command to print the country name. Currently *.co gets the IP right but reports my location, not the VPN's, whereas *.io gets both right. Is there a different site y'all know of that is better for getting the country name? Is this an error on *.co's end or some leak in the container?

       sh-5.0# curl ifconfig.co/country
       United States
       sh-5.0# curl ifconfig.co
       154.13.1.102
       sh-5.0# curl ifconfig.io && curl ifconfig.io/country_code
       154.13.1.102
       DE
       sh-5.0#

     -----------------
     I have since switched VPN endpoints, and the new server seems to not be doing that:

       sh-5.0# curl ifconfig.io && curl ifconfig.io/country_code
       212.102.37.21
       CH
       sh-5.0# curl ifconfig.co && curl ifconfig.co/country
       212.102.37.21
       Switzerland
       sh-5.0#
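     For anyone else comparing, this is the kind of one-shot check I've ended up using; the ipinfo.io lines are just an assumed third opinion, any similar service would do:

       #!/bin/sh
       # print what each service thinks the tunnel's IP and country are
       echo "ifconfig.io: $(curl -s ifconfig.io) $(curl -s ifconfig.io/country_code)"
       echo "ifconfig.co: $(curl -s ifconfig.co) $(curl -s ifconfig.co/country)"
       echo "ipinfo.io:   $(curl -s ipinfo.io/ip) $(curl -s ipinfo.io/country)"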
  17. What else other than user/pass needs to be removed? I've been meaning to make a sanitizer script for a while now (rough sketch below).
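     Something like this is what I have in mind: mask credentials and public IPs in a log/config before posting it. The file name is a placeholder and the patterns are guesses at what needs hiding, so treat it as a starting point only:

       #!/bin/sh
       # usage: ./sanitize.sh supervisord.log > supervisord.sanitized.log
       # masks user=/pass= style values and anything that looks like an IPv4 address (GNU sed)
       sed -E \
           -e 's/(user(name)?[ =:]+)[^ ]+/\1REDACTED/Ig' \
           -e 's/(pass(word)?[ =:]+)[^ ]+/\1REDACTED/Ig' \
           -e 's/[0-9]{1,3}(\.[0-9]{1,3}){3}/x.x.x.x/g' \
           "$1"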
  18. When I move a completed torrent from my seedbox to local (move the files, then add the .torrent), rtorrent checks it before seeding. Great for making sure nothing got corrupted during transfer and all... but it takes forever. Earlier it was checking a 3gb one when I looked (it was at like 20 or 40%); now, 6 hours later, it is at 75% on that same torrent. After it finishes checking, I move it to the correct done folder and it adds it back to the queue to be checked, but this second check is faster and seems to be on par with a force recheck (maybe slower, but nowhere near hours per gig). Is this a container thing, like I need to give it more resources (top below), or some rtorrent setting... though it seems most of the hash-checking options have been deprecated. From what I have found, there has never been an option for it to check multiple torrents at once, but the biggest (deprecated) flag was hash_check_interval (others too, but with smaller impacts).

       top - 02:40:28 up 33 days, 4:33, 0 users, load average: 15.73, 16.31, 16.44
       Tasks: 76 total, 1 running, 75 sleeping, 0 stopped, 0 zombie
       %Cpu(s): 26.3 us, 1.8 sy, 0.0 ni, 71.4 id, 0.5 wa, 0.0 hi, 0.0 si, 0.0 st
       MiB Mem : 128952.8 total, 4412.8 free, 46173.8 used, 78366.2 buff/cache
       MiB Swap: 0.0 total, 0.0 free, 0.0 used. 79801.2 avail Mem

         PID USER   PR NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
       37385 nobody 20  0 3472912   1.1g 140908 S  0.3  0.9 113:22.49 rtorrent main
           1 root   20  0    2340    284    204 S  0.0  0.0   0:59.15 tini
           6 root   20  0   30816  17412   1612 S  0.0  0.0   9:41.05 supervisord
         368 nobody 20  0    7436   2372   1740 S  0.0  0.0   0:55.70 logrotate.sh
         370 nobody 20  0    7436    672      4 S  0.0  0.0   0:00.35 rutorrent.sh
         371 root   20  0    7728   2720   1780 S  0.0  0.0   0:00.47 start.sh
         372 nobody 20  0    7664   2772   1968 S  0.0  0.0   6:35.67 watchdog.sh
        1039 nobody 20  0   80496  16500   5948 S  0.0  0.0   0:31.58 php-fpm
        1040 nobody 20  0   79060  17432   8852 S  0.0  0.0   0:31.74 php-fpm
        1041 nobody 20  0   77012  15364   8780 S  0.0  0.0   0:32.12 php-fpm
        4214 root   20  0    7728   2120   1280 S  0.0  0.0   0:37.05 start.sh
        5212 nobody 20  0   76068   7852   1556 S  0.0  0.0   1:37.87 php-fpm
        5216 nobody 20  0   10348   1592    260 S  0.0  0.0   0:00.05 nginx
        5217 nobody 20  0   11156   3400   1292 S  0.0  0.0   5:33.05 nginx
       ...bunch more nginx

     Update: 3 hours later that torrent finished checking ("posted 3 hours ago"... so it might be closer to 4). After reaching 100% checked, it took like 40 minutes to change to seeding.
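     The only still-current knobs I've turned up are memory related; this is roughly what I'm planning to try in rtorrent.rc (the values are guesses, not something I've tested yet):

       # give the hash checker more memory for piece data (default is much lower)
       pieces.memory.max.set = 2048M

       # skip the extra re-hash that normally runs when a download completes
       pieces.hash.on_completion.set = no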
  19. What next-gen servers are y'all having luck with?
  20. Yeah, that was just the best wrong-premises/logical-conclusion chain I could figure out (the edited post should be clearer).
  21. No luck finding where I saw it; my best guess is I was misreading/misremembering, which produced the following wrong premises/logical conclusion: 1) next-gen uses (only) WireGuard, 2) WireGuard is prone to hacks (instead of requiring hacks to work), thus 3) next-gen is prone to hacks.
  22. Wasn't there a major vulnerability in something that the next-gen servers were implementing?