GTvert90

Members
  • Posts: 26
Everything posted by GTvert90

  1. My cache has constant writes because of my UUD, but my zfs_pool is also seeing constant writes and I never could figure out why, though this was my first crack at ZFS. I was noticing it on 6.12.2 too; I don't know if I had this pool set up on 6.12 or 6.12.1. Looking at open files and file activity I can't find any reason for this. vulcan-diagnostics-20230719-1602.zip
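Not from the thread, but two ways to pin down where writes like this come from: watch the pool's per-vdev write counters, and watch which files are actually being touched. The pool name "zfs_pool" is from the post; the /mnt/zfs_pool mountpoint and the presence of inotify-tools are assumptions, so both commands are guarded.

```shell
# Sample per-vdev write activity in 5-second intervals, 3 samples, then exit.
# "zfs_pool" is the pool name from the post; skipped where zfs isn't present.
command -v zpool >/dev/null && zpool iostat -v zfs_pool 5 3 || true

# Watch which files are being written under the pool's assumed mountpoint
# (requires the inotify-tools package; capped at 5 seconds here).
command -v inotifywait >/dev/null \
    && timeout 5 inotifywait -mr -e modify,create /mnt/zfs_pool || true
```

If neither tool surfaces anything, comparing `zpool iostat` numbers with the GUI's write counter at least tells you whether the writes are real data or metadata churn.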
  2. I had this running flawlessly for quite a while. I installed SmokePing a few weeks ago and for some reason it refuses to start back up automatically after the backup process, but it always starts once I start it from the GUI. I've excluded it for now in hopes of getting a successful backup and cleaning out the old ones. Any ideas?
  3. I came here looking for help on this. First my password was too short; now it's having permission issues, and I'm not sure how to correct it.
     2023-03-04 13:30:39,433 ERROR: org.graylog2.shared.journal.LocalKafkaJournal - Cannot access offset file: Permission denied
     2023-03-04 13:30:39,462 ERROR: org.graylog2.shared.journal.LocalKafkaJournal - Cannot access offset file: Permission denied
     2023-03-04 13:30:39,671 INFO : org.graylog2.shared.buffers.InputBufferImpl - Message journal is enabled.
     2023-03-04 13:30:39,673 ERROR: org.graylog2.shared.journal.LocalKafkaJournal - Cannot access offset file: Permission denied
     2023-03-04 13:30:39,769 ERROR: org.graylog2.storage.versionprobe.VersionProbe - Unable to retrieve version from Elasticsearch node: Unknown host 'elasticsearch: Name or service not known'. - Unknown host 'elasticsearch: Name or service not known'.
     2023-03-04 13:30:44,773 ERROR: org.graylog2.storage.versionprobe.VersionProbe - Unable to retrieve version from Elasticsearch node: Unknown host 'elasticsearch'. - Unknown host 'elasticsearch'.
     @Maniek2as2 what is in your graylog-graylog-1 log file?
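For what it's worth, "Cannot access offset file: Permission denied" from the journal usually means the bind-mounted journal directory isn't writable by the UID the Graylog container runs as (the official image runs as UID 1100). A sketch of checking and fixing that; the demo path below is a stand-in, and the real appdata path in the comment is an assumption about your setup:

```shell
# Stand-in directory so the sketch is runnable anywhere; on Unraid you would
# point this at the container's journal bind mount instead.
JOURNAL="${JOURNAL:-/tmp/graylog-demo/journal}"
mkdir -p "$JOURNAL"

# Show who currently owns the journal directory (uid:gid path).
stat -c '%u:%g %n' "$JOURNAL"

# On the real path, hand ownership to the container's user -- needs root:
# chown -R 1100:1100 /mnt/user/appdata/graylog/journal
```

The "Unknown host 'elasticsearch'" errors are a separate problem: the Graylog container can't resolve the Elasticsearch container's name, which usually means they aren't on the same Docker network.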
  4. I didn't see iperf3 on the list of Slackware packages, and all the guides I found for installing it on Unraid used NerdPack. Can anyone point me in the right direction to get this installed? Thanks!
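One route that sidesteps Slackware packaging entirely is running iperf3 in a container. The image name below is a commonly used community image, not something confirmed in this thread, so treat it as an assumption; the sketch is guarded so it's a no-op without Docker.

```shell
# Start an iperf3 server in Docker on the default port 5201.
# "networkstatic/iperf3" is an assumed image name -- verify on Docker Hub.
if command -v docker >/dev/null; then
    docker run --rm -d --name iperf3-server -p 5201:5201 networkstatic/iperf3 -s
fi
# Then test from another machine:  iperf3 -c <unraid-ip>
```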
  5. That seems to have worked. Thank you sir.
  6. Over the last few weeks I've noticed a higher than normal CPU load (I think, anyway), the CPU seems to be running hotter, and the system is idling at higher power usage as well, according to the UPS plugin. I've tried turning off all my Dockers without a change in load. Changes I've made since I noticed it: I set up the My Servers plugin, and I switched out two case fans that were running off the power supply for new fans running off the motherboard's fan headers. I looked at my log and saw a lot of these errors:
     Feb 4 14:19:51 Vulcan kernel: pcieport 0000:00:1c.6: Enabling MPC IRBNCE
     Feb 4 14:19:51 Vulcan kernel: pcieport 0000:00:1c.6: Intel PCH root port ACS workaround enabled
     Feb 4 14:19:51 Vulcan kernel: mpt2sas_cm0: log_info(0x31120b10): originator(PL), code(0x12), sub_code(0x0b10)
     Feb 4 14:19:56 Vulcan kernel: sd 9:0:2:0: Power-on or device reset occurred
     Feb 4 14:19:56 Vulcan rc.diskinfo[11933]: SIGHUP received, forcing refresh of disks info.
     Feb 4 14:20:01 Vulcan ntfs-3g[2068]: ntfs_attr_pread_i: ntfs_pread failed: Input/output error
     Feb 4 14:20:01 Vulcan ntfs-3g[2068]: Failed to read vcn 0x2: Input/output error
     Feb 4 14:20:01 Vulcan kernel: Buffer I/O error on dev sdn1, logical block 3364, async page read
     Feb 4 14:21:01 Vulcan kernel: Buffer I/O error on dev sdn1, logical block 3364, async page read
     Feb 4 14:21:01 Vulcan ntfs-3g[2068]: ntfs_attr_pread_i: ntfs_pread failed: Input/output error
     Feb 4 14:21:01 Vulcan ntfs-3g[2068]: Failed to read vcn 0x2: Input/output error
     Feb 4 14:22:01 Vulcan ntfs-3g[2068]: ntfs_attr_pread_i: ntfs_pread failed: Input/output error
     Feb 4 14:22:01 Vulcan ntfs-3g[2068]: Failed to read vcn 0x2: Input/output error
     Feb 4 14:22:01 Vulcan kernel: Buffer I/O error on dev sdn1, logical block 3364, async page read
     Feb 4 14:23:01 Vulcan ntfs-3g[2068]: ntfs_attr_pread_i: ntfs_pread failed: Input/output error
     Feb 4 14:23:01 Vulcan ntfs-3g[2068]: Failed to read vcn 0x2: Input/output error
     Feb 4 14:23:01 Vulcan kernel: Buffer I/O error on dev sdn1, logical block 3364, async page read
     Feb 4 14:24:01 Vulcan ntfs-3g[2068]: ntfs_attr_pread_i: ntfs_pread failed: Input/output error
     Feb 4 14:24:01 Vulcan ntfs-3g[2068]: Failed to read vcn 0x2: Input/output error
     Feb 4 14:24:01 Vulcan kernel: Buffer I/O error on dev sdn1, logical block 3364, async page read
     Attached is my diagnostic. Is there a way to see a historical graph of system resource usage? Found it; it at least goes back to the last boot, I believe, and you can clearly see the change. Let me know what you guys think. Thanks! vulcan-diagnostics-20220204-1619.zip
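The repeating Buffer I/O errors all point at /dev/sdn1 (an NTFS disk handled by ntfs-3g) rather than the array itself, so pulling that disk's SMART data is a reasonable first step. A sketch, with the device name taken from the log and guards so it does nothing where smartctl or the device is absent:

```shell
DEV=/dev/sdn   # device name from the syslog above

# Pull the attributes that most directly indicate a failing disk.
if command -v smartctl >/dev/null && [ -b "$DEV" ]; then
    smartctl -a "$DEV" | grep -Ei 'overall-health|reallocated|pending|uncorrect'
fi
```

A retrying read of the same logical block every minute is also the kind of workload that keeps a disk spun up and warm, which could account for part of the idle power increase.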
  7. Ah, so if I make the proper changes to the share config and invoke the mover, it will move the files back? Good to know. Sent from my Pixel 6 Pro using Tapatalk
  8. So, the system file is 1 byte. unBALANCE is giving me a permissions warning... How else do I move this? Or do you think it would be OK to move it with unBALANCE?
  9. So I fixed that error in my Docker config. I shouldn't have to delete anything in that /Remotes folder since it's deleted on reboot, right? Fixed, thanks!
  10. I woke up to an email. Looking at the dashboard everything seems OK, but Fix Common Problems suggested making a post here with diagnostics. vulcan-diagnostics-20220119-0757.zip
  11. So I put it back the way it was (well, with the drive the parity was copied to in the parity slot) and the original disk 7 in. I selected "parity is valid" (it should be), but I'm still running another parity check. If it completes successfully, I'll swap out disk 7 and let it rebuild. Sent from my Pixel 6 Pro using Tapatalk
  12. Do I want to do this? Can I do this with a replacement disk for drive 7, or would I want to put the original drive 7 back in and reset it with this? Will all the shares and other config, like Dockers and whatnot, still be there? Parity copied successfully, but I made the mistake of formatting the old parity drive. In theory, if it's possible, I feel like I should be able to reset with the newly formatted drive in disk 7 and everything should populate and rebuild properly. vulcan-diagnostics-20220107-1416.zip
  13. So I think I messed up. Parity copied, but before I mounted and rebuilt data, I thought I'd cover my bases and format the old parity drive to avoid any future issues with the partition starting point. Now it's telling me the config is invalid: too many missing disks. This isn't a huge deal; I can pop disk 7 back in and rebuild parity. But is there an easier way? Sent from my Pixel 6 Pro using Tapatalk
  14. I committed to the parity swap. Let's see how it goes. Sent from my Pixel 6 Pro using Tapatalk
  15. Would this be the procedure for a parity swap on 6.9.2? I know it's not "tested," but I'm good with an educated "assumption" in this case. I still have disk 7 to hopefully get data off if needed. This should work for me, correct? https://wiki.unraid.net/The_parity_swap_procedure
  16. Thanks for the help guys. Sent from my Pixel 6 Pro using Tapatalk
  17. Yeah, I can back up disk 7 if needed. I actually already removed it when I tried to replace it. Can I perform a parity swap to resolve this (whatever that involves), or would I need to reinstall disk 7 for that? Whichever is the path of least resistance. It's weird that it's formatted like that; it's only ever been in an unRAID server, and I can't imagine I formatted it anywhere else. Sent from my Pixel 6 Pro using Tapatalk
  18. Disk /dev/sdb: 12.73 TiB, 14000519643136 bytes, 27344764928 sectors
      Disk model: ST14000NM001G-2K
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: gpt
      Disk identifier: F81D82F2-BC67-4F1C-BA61-03CDE1B1ED89
      Device     Start          End      Sectors   Size  Type
      /dev/sdb1   2048  27344764894  27344762847  12.7T  Linux filesystem

      Disk /dev/sdm: 12.73 TiB, 14000519643136 bytes, 27344764928 sectors
      Disk model: ST14000NM001G-2K
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: gpt
      Disk identifier: 9A21185A-CEEA-4740-B6D8-B8F79ADEB5BB
      Device     Start          End      Sectors   Size  Type
      /dev/sdm1     64  27344764894  27344764831  12.7T  Linux filesystem

      I ran it on the new disk for good measure too.
  19. No, both are X16 Exos 14 TB drives. vulcan-diagnostics-20220106-0738.zip I don't mind parity swapping if that is what's needed; I just thought this was strange. Thank you for all your help. I couldn't really find the right keywords to find any posts on this issue.
  20. So the GUI is telling me that the new drive (14 TB, replacing a 4 TB) is larger than the parity drive (also 14 TB). It's equal, not bigger. Do I have to do a parity swap, or am I missing something?
  21. True. It's not often I'm performing a full backup/restore, and a 50 MB/s speed hit when spinning up a 40 Mbps movie wouldn't even be noticed. I don't think it would impact my usage, but man, all that red and orange bothers me haha. I might move some stuff around; then again, I just don't want to cause more harm than good, so maybe I won't. Sent from my Pixel 6 Pro using Tapatalk
  22. Thank you, sir. Every thread I've read says not to worry about "balancing" drives after something like this. Using high-water, any new data is going to be written to this new drive for quite a while, so maybe it isn't as big of a deal, but doesn't drive performance degrade as the drive fills? Wouldn't all the drives being 70% full give better r/w speeds than them being 92% full?
  23. I'm also going to run a parity check before doing all of this. My last one is only a month or so old, but you can't be too safe.
  24. The parity drive is 14 TB; the new data drive will be 14 TB. The final result would be to remove disk 6 (4 TB) and disk 7 (4 TB) and have the new 14 TB drive as drive 6. Depending on how storage looks, maybe pull disk 5 as well. If you need any more info, please let me know! vulcan-diagnostics-20220105-0546.zip
  25. I think this is my first post here! The conversion to unRAID has been smooth; this is such a great community, with so much documentation. I did see threads on this, I just wanted to make sure my plan was solid. I currently have an 8-disk array with 7D+1P. I am going to pull out at least 2, maybe 3, smaller disks and replace them with 1 larger drive, so I'll have 5 or 6 data disks and 1 parity. All of my internal SATA ports are full. I was thinking I can pull out one of the data drives to be replaced, preclear the new drive, and let it rebuild from parity. Then shut down Dockers and the mover, and use unBALANCE to move the data off the other drive I will remove. Then remove that drive from the array and run a parity check/write/rebuild... Profit? I understand unBALANCE is pretty much just a GUI for rsync, but I've successfully used it in the past and I don't have to learn more command line lol Sent from my Pixel 6 Pro using Tapatalk