Cull2ArcaHeresy

Everything posted by Cull2ArcaHeresy

  1. The same way you generally assume a gigabit switch is not a bottleneck (for normal network activity, not mass file transfers), I didn't even think about the LSI HBA cards being one. Based on online images of the R720xd riser card, each of its PCIe slots runs at x8. That was plenty when the riser just had the two 9207-8i cards that connect to the 14 R720xd bays. I would assume the riser gets 24 PCIe lanes for the three x8 slots, but even so, my two DS4246 shelves are connected to a 9202-16e in that third slot with only 8 lanes. The DS4246s have 36 drives in them now, but 24 of those are pool drives instead of array drives and the other 12 are part of the main array (I moved them to pools for multiple reasons). Since NetApp designed the shelf to use one cable, I'm assuming that link isn't the restriction, but now I'm thinking I should map it all out and calculate the bandwidth at each step to see if there are other accidental bottlenecks, like a x16 card in an x8 slot. When I reorganized the drive layout I made sure the SSDs and parity drives are in the R720xd bays, along with the 16 TB disks and as many of the 12 TB disks as would still fit... really glad I did that (especially if the bottleneck was big). Parity check takes 2 to 3.5 days now (or 5.625 when a bunch of automatic rsyncs happen during it, pulling hundreds of gigs from a remote server for hours). Next time I shut the server down, since the GPU x16 slot is open again, I'll try to remember to move the 9202-16e HBA over to it (well, if it came with a full-height bracket). Guess I've gone from near the limit of my server to pretty much outgrown it... but that upgrade has to come later down the line, and 2 to 4 more 16 TB drives are needed sooner than the upgrade. Thanks for pointing that out; it could help future people.
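      For the "map it all out and calculate the bandwidth at each step" part, a rough back-of-the-envelope check like the one below is what I have in mind. The numbers are assumptions, not measurements: ~985 MB/s per lane is the usual PCIe 3.0 figure (use ~500 MB/s for PCIe 2.0), and the per-drive number is just a placeholder for a spinning disk's sequential speed.

        #!/bin/bash
        # Rough bottleneck check: slot bandwidth vs. what the attached drives can push.
        lanes=8          # x8 electrical slot
        per_lane=985     # MB/s per lane, assuming PCIe 3.0 (~500 for PCIe 2.0)
        drives=36        # drives hanging off this HBA
        per_drive=200    # MB/s sequential per spinning disk (placeholder)

        slot=$(( lanes * per_lane ))
        need=$(( drives * per_drive ))
        echo "slot bandwidth : ${slot} MB/s"
        echo "drive aggregate: ${need} MB/s"
        if [ "$need" -gt "$slot" ]; then
            echo "the slot is the likely bottleneck"
        else
            echo "the slot is not the bottleneck"
        fi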
  2. Sort by type="Cache" and then by name (["poolname*"]) to group ["tv_pool6"] in with ["tv_pool"] and ["cache2"] with ["cache"]? Unless you can name two pools the same, in which case... edit: oh, I see the name="cache3" now too.
  3. The notification of your reply got lost, which is why this took so long. I used the tool and grabbed the sanitized disks.ini out of it; if you need the other stuff I can check it before attaching. Btw, my streamlabs share and tv share use the main array: I first filled the 2 cache pools as much as I could and then set the shares to the main array. That might make things weird for unBALANCE, but I assume I am not a normal case here. Also, the monthly parity check is still going for another estimated 15 hours. I assume it would not produce different data, but I'm pointing it out in case Unraid hides something during the check. disks.ini
  4. Your VPN login is saved in plain text there too.
  5. Yes. For example, the speed limits need to be set in the config file to be persistent. Unless that is an old issue that no longer exists, in which case this is outdated info.
  6. I have 3 pools along with cache and the main array if you need it. I would have to find disks.ini and see if it contains any data that would need to be redacted before sharing.
  7. Either that was a fast update (thanks) or I've never looked at the "TOTAL" line (oops, hidden in plain sight).
  8. With the new pool options, I'm trying to use different shares for different things. Currently I'm trying to have downloads live in a share that is on a pool (pool only), and done/seeding torrents on the main array (mover pool -> array). Initially I tried mounting the p2p share (for incoming and everything else) as /data, with the other shares mounted as /data/downloading and /data/DONE, but that caused conflicts where some stuff went to the proper share and some went to p2p/[downloading or DONE]. Now I don't have a base /data mount and instead mount each share individually, which seems to work OK, except that "save to" cannot escape whatever share a torrent is currently in, and autotools does not always succeed. Is there a "right" way to do this? If I can get this working, the idea is to start splitting different categories into their own pools (like all defcon/infocon torrents in a separate pool). I considered adding links within the p2p share to the other shares, but that just seemed like a really bad idea.
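      To make "mount each share individually" concrete, the sort of layout I mean looks like the sketch below. This is just an illustration, not the actual binhex template; the container name, image name, and host paths are placeholders for whatever your setup uses.

        # Hypothetical mappings: each share gets its own sibling path inside the container,
        # so no share is nested under another mount.
        docker run -d --name rtorrent-example \
          -v /mnt/user/p2p:/p2p \
          -v /mnt/user/downloading:/downloading \
          -v /mnt/user/DONE:/done \
          example/rtorrentvpn   # placeholder image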
  9. Unless there is a longer log elsewhere, the one in the UI doesn't go back far enough. But those were the last 3 lines to be transferred to disk 7, which currently has 49.2 KB free. The source disk still has 36.8 GB on it, belonging to those 3 lines, so I'm assuming that was the issue. Mover was disabled, but I guess something else was writing to that disk during the transfers. All lines before were green (disk 7), and the lines after those 3 (disk 12) were also green checks. One piece of feedback I've been meaning to give for a while: a select/deselect-all checkbox would be great. I'm currently clearing off 8 4 TB drives to move them into a pool instead of the main array, and each run requires selecting the source + shares, then unselecting over 20 destination disks. The auto-select-all is great when all/most of your disks are <70% full, but otherwise a quick unselect-all would be nice to have. Either way, thanks for the tool.
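      Related to the full disk 7: before the next batch I'll probably just eyeball per-disk free space so I don't pick a destination that's about to fill. Array disks mount at /mnt/diskN, so something like this should be enough:

        # Show free space on each array disk before choosing unBALANCE destinations
        df -h /mnt/disk*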
  10. Do the colors of the icons indicate a problem, or is that just a UI bug?
  11. Spaceinvaderone's video was all USB 3 drives (or at least the top 3 were), but I know that was just extreme-case stress testing with temperature readings. I just had the one issue, which was easy to fix (I'll check the boot flag first if it happens again instead of reimaging), but if the issue keeps recurring then I'll move my license to a new USB 2 drive. I was hoping that by now the anecdotal reports were historical rather than current, but I guess we're not there yet, given the amount of old hardware still in use and/or the tech in USB 3 drives.
  12. After taking care of the failed drive, clearing off 4 more of the 4 TB drives that are to be in the 8-drive archive_two pool, and dealing with a boot USB issue, I updated to 6.9.2 with no problems. After the upgrade I assigned the 8 drives to the new pool archive_two, started the array, and then used the GUI to set it to raid6, and it worked. Since it worked I forgot to copy the command from syslog, so I just did balances from raid6 -> raid0 -> raid6. If someone else has issues, maybe this could be of help.
        Apr 19 23:35:10 Raza ool www[29765]: /usr/local/emhttp/plugins/dynamix/scripts/btrfs_balance 'start' '/mnt/archive_two' '-dconvert=raid6,soft -mconvert=raid1c3,soft'
        Apr 19 23:35:10 Raza kernel: BTRFS info (device sdx1): balance: start -dconvert=raid6,soft -mconvert=raid1c3,soft -sconvert=raid1c3,soft
        Apr 19 23:35:10 Raza kernel: BTRFS info (device sdx1): balance: ended with status: 0
        Apr 19 23:37:11 Raza ool www[29765]: /usr/local/emhttp/plugins/dynamix/scripts/btrfs_balance 'start' '/mnt/archive_two' '-dconvert=raid0,soft -mconvert=raid1,soft'
        Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): balance: start -dconvert=raid0,soft -mconvert=raid1,soft -sconvert=raid1,soft
        Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): relocating block group 24894242816 flags metadata|raid1c3
        Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): relocating block group 24860688384 flags system|raid1c3
        Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): relocating block group 24827133952 flags system|raid1c3
        Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): relocating block group 18384683008 flags data|raid6
        Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): relocating block group 11942232064 flags data|raid6
        Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): relocating block group 11908677632 flags system|raid1c3
        Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): relocating block group 10834935808 flags metadata|raid1c3
        Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): found 3 extents, stage: move data extents
        Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): clearing incompat feature flag for RAID1C34 (0x800)
        Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): relocating block group 4392484864 flags data|raid6
        Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): clearing incompat feature flag for RAID56 (0x80)
        Apr 19 23:37:12 Raza kernel: BTRFS info (device sdx1): balance: ended with status: 0
        Apr 19 23:37:19 Raza ool www[36771]: /usr/local/emhttp/plugins/dynamix/scripts/btrfs_balance 'start' '/mnt/archive_two' '-dconvert=raid6,soft -mconvert=raid1c3,soft'
        Apr 19 23:37:19 Raza kernel: BTRFS info (device sdx1): balance: start -dconvert=raid6,soft -mconvert=raid1c3,soft -sconvert=raid1c3,soft
        Apr 19 23:37:19 Raza kernel: BTRFS info (device sdx1): setting incompat feature flag for RAID56 (0x80)
        Apr 19 23:37:19 Raza kernel: BTRFS info (device sdx1): setting incompat feature flag for RAID1C34 (0x800)
        Apr 19 23:37:19 Raza kernel: BTRFS info (device sdx1): relocating block group 54019489792 flags data|raid0
        Apr 19 23:37:19 Raza kernel: BTRFS info (device sdx1): relocating block group 52945747968 flags metadata|raid1
        Apr 19 23:37:19 Raza kernel: BTRFS info (device sdx1): relocating block group 52912193536 flags system|raid1
        Apr 19 23:37:19 Raza kernel: BTRFS info (device sdx1): relocating block group 44322258944 flags data|raid0
        Apr 19 23:37:19 Raza kernel: BTRFS info (device sdx1): relocating block group 35732324352 flags data|raid0
        Apr 19 23:37:19 Raza kernel: BTRFS info (device sdx1): relocating block group 27142389760 flags data|raid0
        Apr 19 23:37:20 Raza kernel: BTRFS info (device sdx1): relocating block group 27108835328 flags system|raid1
        Apr 19 23:37:20 Raza kernel: BTRFS info (device sdx1): relocating block group 27075280896 flags system|raid1
        Apr 19 23:37:20 Raza kernel: BTRFS info (device sdx1): relocating block group 27041726464 flags system|raid1
        Apr 19 23:37:20 Raza kernel: BTRFS info (device sdx1): relocating block group 25967984640 flags metadata|raid1
        Apr 19 23:37:20 Raza kernel: BTRFS info (device sdx1): found 3 extents, stage: move data extents
        Apr 19 23:37:20 Raza kernel: BTRFS info (device sdx1): balance: ended with status: 0
      Speaking of USB, is using USB 3 still a bad thing? I migrated from a 10+ year old 2 GB drive to a new 64 GB USB 3.1 drive. After having the issue I got new USB 2 drives to use if need be, but haven't switched yet since I didn't want to blacklist the 3.1 drive if possible. I got it working by making a copy of the contents, reflashing it from the My Servers backup, and then copying all the contents back (for custom scripts and SSH keys). My local backup wasn't up to date, so I had to go around that way. The drive works fine, but it was not bootable until I reflashed it; I'm assuming the boot flag got unmarked somehow, which would have been an easier fix. So is USB 3 still a thing to avoid, or is that old news? Setting this as solved since the balance issue seems to be fine. Should it still be filed as a 6.9.1 bug for documentation's sake, assuming that was the issue?
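      Back on the balance part: if anyone repeating this wants to double-check what profile a pool actually ended up on after one of these balances, the standard btrfs-progs commands should show it (pool path is whatever yours is):

        # Current data/metadata/system profiles on the pool
        btrfs filesystem df /mnt/archive_two
        # Per-device breakdown
        btrfs filesystem usage /mnt/archive_two
        # Whether a balance is still running
        btrfs balance status /mnt/archive_two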
  13. Safe mode + GUI, local browser, same results. I did it that way so there is no chance it's any kind of plugin/extension in the browser or an Unraid plugin. When I boot into GUI or safe mode + GUI, with a monitor plugged in and the iDRAC console open, both show just a black screen after the boot select menu and part of the boot text. Not sure if that is an Unraid thing or a hardware thing. Web access still works, but local does not. I did not try non-GUI mode. Integrated graphics; I was using the front VGA on the R720xd, not the rear one, if that makes any difference. I don't think I ever had both going at the same time before, so I can't say whether this is new or not. It's also rare for me to need local access anyway; I just have the default boot set to GUI so that if I do need it, it is there. A main array drive failed, and I don't have a replacement for it, so I'm using unBALANCE to move the emulated contents off. At ~25 MB/s for the ~7 TB left, that means a 75-hour ETA. In the meantime I am leaving optional things offline to reduce array stress (like binhexrtorrentvpn). Because I wanted to resolve this drive first, I have not installed 6.9.2 yet. I did go ahead and delete archive_two and add 2 of its drives to make archive_one an 8-drive pool. Balance in the web UI had the same results, so I used the command to balance it since I added 2 drives (even though it's still empty). In a few days, after resolving the failed drive and updating to 6.9.2, I'll try again to see if the web UI calls the right command. Thanks for the help so far.
  14. Command/CLI output:
        root@Raza:~# btrfs balance start -dconvert=raid6 -mconvert=raid1c3 /mnt/archive_one
        Done, had to relocate 3 out of 3 chunks
      Log:
        Apr 7 03:45:46 Raza kernel: BTRFS info (device sdak1): balance: start -dconvert=raid6 -mconvert=raid1c3 -sconvert=raid1c3
        Apr 7 03:45:46 Raza kernel: BTRFS info (device sdak1): setting incompat feature flag for RAID1C34 (0x800)
        Apr 7 03:45:46 Raza kernel: BTRFS info (device sdak1): relocating block group 14223933440 flags metadata|raid1
        Apr 7 03:45:47 Raza kernel: BTRFS info (device sdak1): found 3 extents, stage: move data extents
        Apr 7 03:45:47 Raza kernel: BTRFS info (device sdak1): relocating block group 14190379008 flags system|raid1
        Apr 7 03:45:47 Raza kernel: BTRFS info (device sdak1): setting incompat feature flag for RAID56 (0x80)
        Apr 7 03:45:47 Raza kernel: BTRFS info (device sdak1): relocating block group 13116637184 flags data|raid1
        Apr 7 03:45:47 Raza kernel: BTRFS info (device sdak1): balance: ended with status: 0
      It shows up as raid6 in the pool settings, and the pool capacity is correct at 16 now. I have archive_two also empty if there are other things you need tested for debugging. I'm not putting anything in either pool yet because I'm considering switching to 3 pools of 8 drives instead of 4 pools of 6.
  15. I upgraded from 6.8.3 to 6.9.1 and, while at it, finally upgraded my flash drive from an OLD 2 GB one from sometime before 2010. VMs and dockers seem to be working fine now, and the main array and cache are normal. I created a new pool "archive_one" with 6 * 4 TB SAS drives and am now trying to rebalance it to raid6 instead of the default raid1. I started it last night and it ran for over 8 hours displaying this:
        Data, RAID1: total=1.00GiB, used=0.00B
        System, RAID1: total=32.00MiB, used=16.00KiB
        Metadata, RAID1: total=2.00GiB, used=128.00KiB
        GlobalReserve, single: total=3.25MiB, used=16.00KiB
        btrfs balance status: Balance on '/mnt/archive_one' is running
        2 out of about 3 chunks balanced (3 considered), 33% left
      Since then I have tried again a couple of times while also watching the log (below). Sometimes it would go right to the above display and then nothing; sometimes the UI showed something like the above but with 1 of 3 chunks balanced, 2 considered, 66% left. Both UI displays had the same log output: a quick blurb and then nothing else printed, and on the Main page there are no reads/writes on the drives, which confirms that nothing is happening.
        Apr 6 19:40:13 Raza ool www[15440]: /usr/local/emhttp/plugins/dynamix/scripts/btrfs_balance 'start' '/mnt/archive_one' ''
        Apr 6 19:40:13 Raza kernel: BTRFS info (device sdak1): balance: start -d -m -s
        Apr 6 19:40:13 Raza kernel: BTRFS info (device sdak1): relocating block group 9861857280 flags metadata|raid1
        Apr 6 19:40:13 Raza kernel: BTRFS info (device sdak1): found 3 extents, stage: move data extents
        Apr 6 19:40:13 Raza kernel: BTRFS info (device sdak1): relocating block group 9828302848 flags system|raid1
        Apr 6 19:40:14 Raza kernel: BTRFS info (device sdak1): found 1 extents, stage: move data extents
        Apr 6 19:40:14 Raza kernel: BTRFS info (device sdak1): relocating block group 8754561024 flags data|raid1
        Apr 6 19:40:14 Raza kernel: BTRFS info (device sdak1): balance: ended with status: 0
      Posting here as a new thread since a reply on the original post said to, noting that the "balance start command is missing some arguments". raza-diagnostics-20210407-0234.zip
  17. That seems to have been the issue (or at least fixed it), thanks.
  18. I thought it was part of the core functionality.
        [25.03.2021 19:02:07] WebUI started.
        [25.03.2021 19:02:30] _cloudflare: Plugin will not work. rTorrent user can't access external program (python).
        [25.03.2021 19:02:30] _task: Plugin will not work. rTorrent user can't access external program (php).
        [25.03.2021 19:02:30] autotools: Plugin will not work. rTorrent user can't access external program (php).
        [25.03.2021 19:02:30] create: Plugin will not work. rTorrent user can't access external program (php).
        [25.03.2021 19:02:30] datadir: Plugin will not work. rTorrent user can't access external program (php).
        [25.03.2021 19:02:30] history: Plugin will not work. rTorrent user can't access external program (php).
        [25.03.2021 19:02:30] loginmgr: Some functionality will be unavailable. rTorrent user can't access external program (php).
        [25.03.2021 19:02:30] ratio: Some functionality will be unavailable. rTorrent user can't access external program (php).
        [25.03.2021 19:02:30] retrackers: Plugin will not work. rTorrent user can't access external program (php).
        [25.03.2021 19:02:30] rss: Plugin will not work. rTorrent user can't access external program (php).
        [25.03.2021 19:02:30] rutracker_check: Plugin will not work. rTorrent user can't access external program (php).
        [25.03.2021 19:02:30] scheduler: Plugin will not work. rTorrent user can't access external program (php).
        [25.03.2021 19:02:30] trafic: Plugin will not work. rTorrent user can't access external program (php).
        [25.03.2021 19:02:30] unpack: Plugin will not work. rTorrent user can't access external program (php).
        [25.03.2021 19:02:30] xmpp: Plugin will not work. rTorrent user can't access external program (php).
        [25.03.2021 19:02:30] _task: Plugin will not work. rTorrent user can't access external program (pgrep).
        [25.03.2021 19:02:30] mediainfo: Plugin will not work. rTorrent user can't access external program (mediainfo).
        [25.03.2021 19:02:30] rss: Some functionality will be unavailable. rTorrent user can't access external program (curl).
        [25.03.2021 19:02:30] screenshots: Plugin will not work. rTorrent user can't access external program (ffmpeg).
        [25.03.2021 19:02:30] spectrogram: Plugin will not work. rTorrent user can't access external program (sox).
      Is this related to it?
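      In case it helps narrow it down, a quick way to see whether those helper programs exist inside the container at all would be something like the line below (the container name is a placeholder; use whatever yours is called):

        # List which of the programs ruTorrent complains about are actually present in the container
        docker exec binhex-rtorrentvpn which php python pgrep mediainfo curl ffmpeg sox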
  19. Any indication of an issue before starting the array? I was thinking of putting a clean trial copy of Unraid on another USB drive and booting it to check before upgrading. Could this be a valid way to check hardware/drive compatibility before upgrading?
  20. Does the coloring function work relative to the disks in the array instead of to 100%? Your description here made me think of conditional formatting in Excel, like so (with rounding up applied to all percentages to avoid the middle yellow color, for the purposes of this point).
  21. I thought it was saying that if you chose raid6 it would be replaced with raid1c3, but the behind-the-scenes reason makes much more sense. Functionally speaking (for parity calculation), isn't Unraid single/double parity the same as raid5/6, with the same vulnerability? I do have a UPS that estimates ~30 minutes of runtime, but I know any unclean shutdown/system crash could still cause problems, and it is all a balancing act of capacity vs redundancy/risk. Considering all these 4 TB SAS drives are used, and the seller told me to expect ~1 drive failure per year if they run 24/7, I might end up going with raid1(c2) just for less stress on the drives. Raid50 or raid60 would be a great option, but I'm not seeing those in the btrfs lists. I'm at the Unraid capacity of 28 + 2 parity, with only 3 open bays, but a second DS4246 has been ordered.
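      For comparison, the rough usable-capacity math I'm weighing these profiles against (drive count and size below are just example numbers, and metadata overhead is ignored):

        #!/bin/bash
        # Rough usable capacity for a pool of equal-size drives (ignores metadata overhead)
        n=24     # number of drives (example)
        tb=4     # TB per drive
        echo "raid1   (2 copies): $(( n * tb / 2 )) TB"
        echo "raid1c3 (3 copies): $(( n * tb / 3 )) TB"
        echo "raid6   (2 parity): $(( (n - 2) * tb )) TB"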
  22. NEW POOL CLARIFICATION
      If I'm following the new pool stuff right, it basically allows 30 * 34 * disk size of total storage (35 if no cache). A share will show files from all pools, but putting one share across multiple pools requires manual movement. The downsides: mover only goes between the main array and cache (no cache/fast/slow 3-tier option or cache/other-pool option), and a pool can only be raid0 or raid1 (with 2, 3, or 4 copies of the data). Raid6 is replaced with raid1c3, meaning instead of (#disks - 2 parity) / #disks you only get 1/3 of the storage at most? Is greatly reduced capacity (for more than a few drives), but less stress during rebuild plus higher redundancy, the only option here?
      WHAT I AM TRYING TO DO (RELATED TO NEW POOLS, BUT CAN MOVE TO ANOTHER THREAD IF NEEDED)
      I have 14 x 4 TB SAS drives in my main array that I would like to move into a pool along with some more 4 TB drives that are not in use. I was looking at getting another server just for this pool of 4 TB disks, probably running TrueNAS, but for now I do not need more server horsepower and the pool will be under 30 disks. To do this, my thought is:
        1. Make a pool with the ~10 drives that are not in use.
        2. Move the share I want on this pool onto it from the main array (or at least as much as will fit).
        3. Use unBALANCE to clear the rest of the contents off the 4 TB drives that are in the main array.
        4. Remove them from the array, which will result in rebuilding double parity for 2 days.
        5. Add the drives to the pool and, I assume, do some kind of rebalancing operation since the drive count would have just gone up 2.4 times (rough sketch below).
      Depending on available space, I might need to do steps 2-5 in 2 or 3 batches instead of all at once. Either way this will be at least a few days out, as I only have a few open bays and need to order another DAS. After expanding the bay count and upgrading from 6.8.3, is that the right way to do it? The 4 TB drives in the main array are disk11-disk24, so after this whole process is over I'll have to reassign disk positions so the main array is contiguous (I know it doesn't have to be, but I would rather it was). Unless it would be better to reorder the drives before moving to the new pool, but either way I have to do parity rebuilds. I've been wanting a reorder anyway, as the layout goes 12tb*2, 6tb*3, 12tb*5, 4tb*14, 6tb*1, 8tb*3 (the 6*3 came from another system; the 6*1 and 8*3 were shucked externals), but I have been putting it off due to the parity rebuilds.
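      For step 5, I'm assuming the rebalance is just the same convert/balance the GUI fires off, run against the pool's mount point (the pool path here is a placeholder, and the profile flags are copied from the balance commands Unraid logs elsewhere in this thread; swap in whatever target profiles you settle on):

        # Re-spread data over all members and set the target profiles in one go
        btrfs balance start -dconvert=raid6,soft -mconvert=raid1c3,soft /mnt/archive_pool
        # Or, without changing profiles, a full balance also redistributes chunks onto the new drives
        btrfs balance start --full-balance /mnt/archive_pool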
  23. So I guess the support response from my seedbox a while back was not 100% accurate, and it was more of an issue with whatever hardware they use, missing optimizations, and/or maybe whether they use btrfs. I will dedicate a couple of drives for the p2p share to live on and disable cache on that share to remove possible problems. I would also love to know what tricks Cat_Seeder recommends.
  24. I was having issues with my seedbox over a year ago (closer to 2, probably), and the tl;dr of the quoted support response from the provider is that too many active torrents is a problem (I currently tend to have around 75 active with no problems there). The issues on the seedbox were "move when completed" not always working, torrents going into a pausing state, and rtorrent crashing. Over the past couple of months I remembered the seedbox issue and applied that to binhexrtorrentvpn (I was having issues similar to cliff), and the container has been working pretty well since. I'm getting uploads and downloads, but I run into issues if there are more than 100 to 150 non-stopped torrents, or if the active count is over about 30. I'm running dual E5-2695 v2 (2*12 cores, hyperthreaded, 2.4 GHz), 128 GB RAM, nothing pinned, PIA VPN. My plan has been to make a manager to coordinate between multiple containers, like an "rtorrent container swarm" (multiple containers networked through one running the VPN and the manager)... but I have not gotten around to it yet, as it feels like an overengineered solution. Maybe others can weigh in on how many non-stopped and active torrents they have, or share other ideas about what our problems might be.