myths

Everything posted by myths

  1. Ok, figured out these have 2 ports on them, so I made 2 separate networks and connected both cables to start 2 transfers; jumping between 400-450 MB/s total now.
  2. Thanks for the info. My network is on a 10.98.150.xx subnet; I set the new NICs to 20.98.150.xx. After initial testing I went from 107 MB/s to a stable 188 before I read your post. I just enabled 9000 MTU on both NICs and am going to see how much it improves. I'm not sure if there is a way to go any faster or what my current bottleneck is, as I'm transferring off a RAID to a single disk, but from my research this disk should be able to sustain write speeds around 240 MB/s. At 9000 MTU it seems to cap at 165, so I'll play with those settings and see how it affects speeds.
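A minimal sketch of the jumbo-frame change described in that post, assuming the direct-link interface is named eth2 on each machine (the interface name and the 20.98.150.2 peer address are placeholders):

```shell
# Set a 9000-byte MTU on the direct-link NIC (run on both machines).
ip link set dev eth2 mtu 9000

# Confirm the MTU took effect.
ip link show dev eth2 | grep -o 'mtu [0-9]*'

# Verify that full-size jumbo frames actually pass end to end:
# 8972 = 9000 bytes minus 20 (IP header) and 8 (ICMP header).
# -M do forbids fragmentation, so this fails if any hop can't carry 9000.
ping -M do -s 8972 -c 3 20.98.150.2
```

If the ping fails with "message too long", one side (or a switch in between) did not accept the larger MTU; a direct cable avoids the switch question entirely.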
  3. Thanks, I'll post up once I get them configured. From my understanding they should be able to hit 500 MB/s, but unless doing some kind of RAID-to-RAID transfer, the cap should be hard drive speed. 1 Gb throttles at 110 MB/s, but if I can get up to 300, which the drives should do, I think I'd be very happy.
  4. Hi, I'm in the process of moving a failed read-only ZFS pool over to another Unraid server via SMB. The problem is that while my main server is on a 10 Gb network, my other server is only on 1 Gb. As for guides, this is the only thing I've come across. I'll be connecting two X550-T2 10 Gb NICs, one in each server, to link them together directly, bypassing the home network. My question is: are there any guides or info on how to do this? My main goal is speeding up the transfer, as I'll have to transfer 3 times, and being capped at 1 Gb network speeds is going to be painfully slow.
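A minimal sketch of the direct NIC-to-NIC link asked about above, assuming the X550-T2 port shows up as eth2 on both servers (the interface name, addresses, share name, and mount point are all placeholders):

```shell
# On server A: give the direct-link port a static address on its own subnet,
# one that does not overlap the home LAN.
ip addr add 20.98.150.1/24 dev eth2
ip link set dev eth2 up

# On server B: same, with a different host address.
ip addr add 20.98.150.2/24 dev eth2
ip link set dev eth2 up

# From server A: confirm the link works, then mount the SMB share
# via the direct-link address so traffic bypasses the home network.
ping -c 3 20.98.150.2
mount -t cifs //20.98.150.2/share /mnt/transfer -o username=me
```

One caveat on the addressing: 20.x.x.x is technically public IP space, so a private RFC 1918 range such as 192.168.x.x is the safer convention, though on an isolated point-to-point cable it makes no practical difference.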
  5. Thanks, I've got some new drives ordered to back up what I can. I had thought that each vdev was independent of the other vdevs; looks like the pool stripes across all of them, so that makes sense. Thanks for all your help.
  6. It seems all the errors I'm getting are isolated to 1 vdev. Do you know if it's possible to just sacrifice that vdev and have the pool work with just the other 3?
  7. I didn't see anywhere to put them. The guides say to edit the ZFS boot files and add command lines to them. Unraid just now supports ZFS officially, so maybe it's hiding somewhere I've not looked yet. Did the scans with zero errors, so the last thing to do is try that. I'll try the -FX, but I think in the boot file I saw commands to bypass fail-safe checks and other checks before loading ZFS, as in not to check for any corruption. Not sure, I'm half asleep right now; 2 days of reading up on all this XD. Thanks for the help. It said X was an invalid option.
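The "bypass checks before loading ZFS" settings referenced above can be sketched as ZFS kernel module parameters rather than boot-file edits. This is a hedged recovery sketch, not Unraid-specific advice: zfs_recover=1 turns certain recoverable ZFS panics (including the range-tree one in this thread) into warnings, and a read-only import avoids writing to the damaged pool. The pool name tank is a placeholder, and these settings are for getting data off, not for continued use:

```shell
# Downgrade recoverable ZFS panics to console warnings before importing.
echo 1 > /sys/module/zfs/parameters/zfs_recover

# Optionally skip ZIL (intent log) replay, since replay can re-trigger
# the panic during import.
echo 1 > /sys/module/zfs/parameters/zil_replay_disable

# Import the pool read-only so nothing further is written to the
# damaged metadata; -f forces import of a pool marked in-use.
zpool import -o readonly=on -f tank
```

On a non-Unraid rescue system, the same two parameters can instead be passed at module load time (e.g. `modprobe zfs zfs_recover=1`).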
  8. I'm running a hardware diagnostic on the server to see if it finds anything as well. Also going to boot up another OS in a few days and try to connect the ZFS pool to it. I see some people with these panic errors are able to open the pool on another computer, or in TrueNAS (FreeNAS, whatever it's called now), change the ZFS commands to bypass checks and start the pool. I've not found a way to do that on here. Pretty much grabbing at straws to see what I can do before a rebuild.
  9. I'm wondering if the fault could be with my cache. I was reading over this post: https://forums.unraid.net/topic/129408-solved-read-only-file-system-after-crash-of-cache-pool/ — this is the very first error I got before the ZFS error. I only have a picture of it, so I'll sum it up: btrfs critical, device nvme unable to find logical device, then a page cache invalidation failure on direct I/O file /var/cache/netdata/dbengine/datafile-, then a second invalidation failure in the same folder but a different file. The first file was datafile-1-0000002124.ndf, PID 5903; the second was 2177, PID 59017. I had already tried the -f import before. I'm wondering if it could possibly be the NVMe?
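One way to check whether the NVMe cache device itself is at fault, as wondered above, is to read its SMART health data and the filesystem's per-device error counters. A minimal sketch, assuming the device node is /dev/nvme0 and the cache pool is mounted at /mnt/cache (both placeholders), with smartmontools installed:

```shell
# Overall SMART health verdict plus NVMe-specific wear, media-error,
# and error-log counters for the drive.
smartctl -a /dev/nvme0

# btrfs keeps cumulative per-device read/write/flush/corruption counters;
# nonzero values here implicate the device or its link rather than the
# filesystem logic.
btrfs device stats /mnt/cache
```

Rising media errors in smartctl output alongside btrfs corruption counts would point at the NVMe hardware; clean counters on both would suggest looking elsewhere.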
  10. How would I go about reverting some? I've got snapshots, but in read-only I'm not able to.
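Rollback needs a writable pool, but a read-only import still allows reads, so one hedged alternative is pulling the snapshot contents out instead of rolling back. Dataset, snapshot, and path names below are placeholders:

```shell
# List the snapshots available under the dataset.
zfs list -t snapshot -r tank/data

# A read-only pool can still act as a send source: stream a snapshot
# to a file, or pipe it into `zfs recv` on a healthy pool elsewhere.
zfs send tank/data@good-snap > /mnt/backup/data-good-snap.zfs

# Snapshots are also directly browsable without any rollback, via the
# hidden .zfs directory at the root of the mounted dataset.
ls /mnt/tank/data/.zfs/snapshot/good-snap/
```

Browsing .zfs/snapshot is often the quickest route here, since individual files can be copied out with plain cp over SMB or rsync.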
  11. Ahh, hmm. Know of any alternatives besides backup? Backing up 200 TB of data would require quite a lot in drives, and a new server.
  12. I tried to do a fresh install of the plugin; during install it would cause the panic and freeze. I also tried to update, but couldn't find much info on importing and rolled back.
  13. I just did this. If I remove the ZFS plugin, it boots. I also went a step further and unmounted all my drives and installed them one at a time to see if a disk was causing this. What I've found is I have a 4-disk backplane inside the server for the 4th vdev; if I plug one of those disks into it, I get the error -.-. When I run without the plugin, I checked connected disks in the console and it sees all my disks, but with the plugin installed I can't boot with a disk plugged in. Could this be a backplane failure, or possibly a vdev failure of those 4 disks? I didn't check diagnostics while doing all this, but I can reinstall the drives and grab them if it would help.
  14. I noticed 1 of the lights on a hard drive tray isn't lighting up, but the disk is spinning up. Bad light or something more, not sure. Tried booting without that disk in to see if I'd get a different error, but same thing. I've looked over all the connections and nothing seemed off.
  15. Went on a trip out of town recently, and we've had a lot of storms here. No power outages for more than a minute or two, and the server is on backup power, so I don't think it shut down. During bootup I get a panic and it doesn't go past there. It looks like the drives are all being found and tested, but that's as far as I'm able to get. I have all my storage disks in a ZFS pool; no ZFS pools are being found, but I'm not sure if that's related to the error. This is what I'm getting on a normal boot, and the trace I found in the logs; it freezes at the panic. If anyone could point me in the right direction, that would help a lot. This is not on the new Unraid with ZFS.

     Jun 2 00:38:03 Tower kernel: PANIC: zfs: removing nonexistent segment from range tree (offset=1f10c3514000 size=2000)
     Jun 2 00:38:03 Tower kernel: Showing stack for process 57252
     Jun 2 00:38:03 Tower kernel: CPU: 81 PID: 57252 Comm: z_wr_iss Tainted: P O 5.19.17-Unraid #2
     Jun 2 00:38:03 Tower kernel: Hardware name: Dell Inc. PowerEdge R730xd/072T6D, BIOS 2.13.0 05/14/2021
     Jun 2 00:38:03 Tower kernel: Call Trace:
     Jun 2 00:38:03 Tower kernel: <TASK>
     Jun 2 00:38:03 Tower kernel: dump_stack_lvl+0x44/0x5c
     Jun 2 00:38:03 Tower kernel: vcmn_err+0x86/0xc3 [spl]
     Jun 2 00:38:03 Tower kernel: ? pn_free+0x2a/0x2a [zfs]
     Jun 2 00:38:03 Tower kernel: ? bt_grow_leaf+0xc3/0xd6 [zfs]
     Jun 2 00:38:03 Tower kernel: ? zfs_btree_insert_leaf_impl+0x21/0x44 [zfs]
     Jun 2 00:38:03 Tower kernel: ? pn_free+0x2a/0x2a [zfs]
     Jun 2 00:38:03 Tower kernel: ? zfs_btree_find_in_buf+0x4b/0x97 [zfs]
     Jun 2 00:38:03 Tower kernel: zfs_panic_recover+0x6d/0x88 [zfs]
     Jun 2 00:38:03 Tower kernel: range_tree_remove_impl+0xd3/0x416 [zfs]
     Jun 2 00:38:03 Tower kernel: space_map_load_callback+0x70/0x79 [zfs]
     Jun 2 00:38:03 Tower kernel: space_map_iterate+0x2ec/0x341 [zfs]
     Jun 2 00:38:03 Tower kernel: ? spa_stats_destroy+0x16c/0x16c [zfs]
     Jun 2 00:38:03 Tower kernel: space_map_load_length+0x94/0xd0 [zfs]
     Jun 2 00:38:03 Tower kernel: metaslab_load+0x34d/0x6f5 [zfs]
     Jun 2 00:38:03 Tower kernel: ? spl_kmem_alloc_impl+0xc6/0xf7 [spl]
     Jun 2 00:38:03 Tower kernel: ? __kmalloc_node+0x1b4/0x1df
     Jun 2 00:38:03 Tower kernel: metaslab_activate+0x3b/0x1f4 [zfs]
     Jun 2 00:38:03 Tower kernel: metaslab_alloc_dva+0x7e2/0xf39 [zfs]
     Jun 2 00:38:03 Tower kernel: ? spl_kmem_cache_alloc+0x4a/0x608 [spl]
     Jun 2 00:38:03 Tower kernel: metaslab_alloc+0xfd/0x1f6 [zfs]
     Jun 2 00:38:03 Tower kernel: zio_dva_allocate+0xe8/0x738 [zfs]
     Jun 2 00:38:03 Tower kernel: ? spl_kmem_alloc_impl+0xc6/0xf7 [spl]
     Jun 2 00:38:03 Tower kernel: ? preempt_latency_start+0x2b/0x46
     Jun 2 00:38:03 Tower kernel: ? _raw_spin_lock+0x13/0x1c
     Jun 2 00:38:03 Tower kernel: ? _raw_spin_unlock+0x14/0x29
     Jun 2 00:38:03 Tower kernel: ? tsd_hash_search+0x74/0x81 [spl]
     Jun 2 00:38:03 Tower kernel: zio_execute+0xb2/0xdd [zfs]
     Jun 2 00:38:03 Tower kernel: taskq_thread+0x277/0x3a5 [spl]
     Jun 2 00:38:03 Tower kernel: ? wake_up_q+0x44/0x44
     Jun 2 00:38:03 Tower kernel: ? zio_taskq_member.constprop.0.isra.0+0x4f/0x4f [zfs]
     Jun 2 00:38:03 Tower kernel: ? taskq_dispatch_delay+0x115/0x115 [spl]
     Jun 2 00:38:03 Tower kernel: kthread+0xe7/0xef
     Jun 2 00:38:03 Tower kernel: ? kthread_complete_and_exit+0x1b/0x1b
     Jun 2 00:38:03 Tower kernel: ret_from_fork+0x22/0x30
     Jun 2 00:38:03 Tower kernel: </TASK>
     Jun 2 00:41:36 Tower kernel: md: sync done. time=729sec
     Jun 2 00:41:36 Tower kernel: md: recovery thread: exit status: 0
     Jun 2 00:53:25 Tower kernel: mdcmd (37): nocheck cancel
  16. Anyone know what might be going on? I have qBittorrent on Docker, set up with TorGuard using port forwarding through the VPN and the config file. Download speeds are good when it downloads, but 90% of torrents are stuck on metadata and not finding peers. Those same torrents, added to the client running on my desktop, which is set up through SOCKS5 with no firewall settings, find peers and download in an instant. Not sure if it's related to the VPN, but just giving my setups. I've also tried setting up the Docker one through SOCKS5, and it found a lot more, but not all.
  17. I upgraded to 6.10 and so far haven't crashed, fingers crossed. Moved Plex off host and set everything up for br0.
  18. How does this work? I just updated to 6.10 and installed the update listed, but it still doesn't work.
  19. Where do I find info about what versions this supports? I've looked through this thread a bit, but no real answers. Is it safe to upgrade to 6.10? Apparently I'm having fatal crashes that might be fixed by upgrading.
  20. It looks like that talks about static IPs. I think the only networking-related thing I'd done before it started crashing was changing 1 Docker container to host; the rest are all DHCP. But I'm not even sure if that was before or after the problems. I'll look into whether it's safe to upgrade Unraid with the ZFS tools yet. How do I enable logs to persist after a reboot? It looks like the logs only start when it rebooted from the crash, so I'm not sure if they're stored anywhere else where I can see what happened.
  21. I run ZFS on Unraid, so I wasn't sure if upgrading would affect it. But this is the 3rd crash like this this week. I'll check those links out.
  22. Would anyone be able to help me figure out what's going on? I thought it might be a memory problem, but I upgraded the memory and am still getting crashes. Not sure where I would go to pull other logs if more info is needed.
  23. Any info on what to do if the monitor doesn't display any card info? It's seen in Settings and the Nvidia drivers see it, but there are no display settings; it shows the add-on, just nothing else. Update: OK, I cycled the card to Intel then back to Nvidia, and it picked it up.
  24. I can't activate my purchased key. How can I get a new one without waiting days?
  25. I had a tech over redoing the network; everything got new IPs, and that voided the trial. I purchased a new key and tried to install it to use, but it's saying it's invalid. Can't get support anywhere. Should I charge back and try again, since Unraid won't reply?