limawaken

Everything posted by limawaken

  1. ah, ok... that makes sense. i do have a camera that i have to occasionally restart. it seems to have a poor wifi signal and is sometimes dropping frames, even though other devices in that same location show good signal. i guess i'll have to get another camera and see how it goes then. thanks!
  2. hi @jordandrako, just wondering if you found any solutions to this? i'm using the frigate docker, and i also have this out-of-memory killing of an ffmpeg process. the Fix Common Problems plugin reported the out of memory error, but my system didn't crash, maybe because i don't have plex or anything else that does transcoding. strangely, i didn't actually notice any issues with frigate either, but i would like to find out how to fix this before it causes bigger problems.
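     (not a fix, but a sketch of how to put a ceiling on the container's memory so the kernel OOM killer behaves more predictably; the container name frigate and the 4g figure are assumptions, adjust to your own setup)

        # see current per-container memory usage
        docker stats --no-stream
        # cap an already-running container; on unraid the equivalent is
        # adding --memory=4g to the container's Extra Parameters
        docker update --memory 4g --memory-swap 4g frigate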
  3. i was afraid that would be the case... i guess that's one drawback of ZFS for me.
  4. hi guys, i recently upgraded my cache pool and decided to make it a 3-disk ZFS pool. before this i had been using a regular single-disk xfs cache pool. with the zfs cache pool, moving files from a /user/temporary folder to another /user/destination (both are cached shares) now seems to do an actual move or copy. i checked and confirmed the files are still moving within the cache pool, not to the array, so everything is actually happening within the cache pool. previously moving files like this happened instantly, since the move only happened within the cache pool, and only later would mover move the files to the array. is this the expected behavior for zfs cache pools? i'm using unraid 6.12.6 and moving files using midnight commander in a terminal window or putty.
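     (a plausible explanation, sketched below: on a ZFS pool unraid creates each share as its own dataset, and a rename across datasets falls back to a copy-plus-delete even within the same pool; the dataset names below are guesses based on the share names above)

        # list the datasets on the cache pool
        zfs list -o name,mountpoint
        # NAME                MOUNTPOINT
        # cache               /mnt/cache
        # cache/temporary     /mnt/cache/temporary
        # cache/destination   /mnt/cache/destination

        # within one dataset, mv is an instant metadata rename:
        mv /mnt/cache/temporary/a.bin /mnt/cache/temporary/sub/
        # across datasets, rename(2) fails with EXDEV and mv copies the data:
        mv /mnt/cache/temporary/a.bin /mnt/cache/destination/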
  5. hi @JorgeB, i used the --checksum flag, and rsync took around 7 hours to go over almost 3TB of data. i guess 7 hours isn't too bad. by the end of it, 6 files had been copied. at least i didn't have to swap the disks around and rebuild the data again. i'm starting to suspect that my supermicro drive cage has something to do with these errors i'm getting. these problems seem to occur after i move drives in or out of the drive cage. i always power down before i swap any drives out, just to make sure that the HBA card detects all the disks before unraid starts up. this time a drive couldn't be detected until i installed it into the caddy in a certain way, making it sit a little further in towards the backplane (if that makes any sense). anyway, thanks again JB, you saved my sanity.
  6. hi @JorgeB sorry, my sleep-deprived brain was slow to pick up. the data rebuild on disk1 completed without additional disk errors from disk4, and the array now looks healthy. instead of swapping back to the old disk and doing all that, i would like to try mounting the old disk1 using unassigned devices and copying the data over to the array. i believe rsync would be good for this, right, as it would only copy the parts of the files that differ from the source? so i tried this rsync command:

        rsync -avh /mnt/disks/X7TY08TAS /mnt/disk1/

     however all i get is:

        sending incremental file list
        sent 766.30K bytes  received 1.52K bytes  1.54M bytes/sec
        total size is 2.90T  speedup is 3,782,919.89

     there doesn't seem to be any data being compared or transferred. am i doing this wrong?
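     (for context, a sketch of why rsync can report nothing to transfer: by default it skips files whose size and modification time already match on the destination, and the --checksum flag from post 5 above forces a real content comparison instead. paths are from the post; note the trailing slash on the source copies its contents rather than the directory itself)

        # preview what a content-level comparison would copy (-n = dry run)
        rsync -avhn --checksum /mnt/disks/X7TY08TAS/ /mnt/disk1/
        # then run it for real
        rsync -avh --checksum /mnt/disks/X7TY08TAS/ /mnt/disk1/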
  7. i was thinking... would it be possible to switch back to the old disk1 (since all the data on it is intact)? then i could replace the failing disk4 first. does that make any sense? however, the new disk1 (whose contents are currently being emulated) is bigger than the old disk1 (4TB vs 3TB). will unraid accept my old disk1 back, maybe just re-validate the parity? or would the failing disk4 also introduce corruption into parity when rebuilding the replacement disk4?
  8. any changes should still be in the cache because i had stopped mover from transferring anything to the array. i think the parity should still be valid? i was worried that the data on the rebuilt drive would be corrupted because of the disk errors during the rebuild process.
  9. hi JB, here's the diagnostics. earlier i had paused the rebuild for a while... then i restarted it and it ran for a while without new errors, but about a minute ago i got another warning: the pending sector count increased to 4 and there is now 1 reallocated sector, so i have paused the rebuild again. silometalico-diagnostics-20240207-2347.zip
  10. Hi guys, just wanted some advice about this situation i'm having. Data was being rebuilt for one of my disks, which i had upgraded to a larger size. it was progressing nicely up until 59%, with 4 hours to go, when i got a pending sector warning on another disk. i have only 1 parity disk.

      Before upgrading the drive i did a parity check to be sure that parity and the other drives were all ok. there were no problems during the parity check, but just my luck that another disk starts to fail when i'm halfway through rebuilding data on a new disk...

      so guys, should i just let the data rebuild complete? will the rebuilt disk have corrupted data due to the errors from the other drive? i'm worried that if i stop the data rebuild now, i'll be faced with 2 failed disks and no way to rebuild the array. What should i do?
  11. wow, i had no idea this was happening. i'm using the Brave browser, but this does seem to be the case. it's quite interesting... like the OP, i changed my computer's dns to several different servers (1.1.1.1, 8.8.8.8, 208.67.222.222) but couldn't ping github.com. the edge browser was also able to load the page.
  12. it's not the linux bond, it's not proxmox or unraid. it's my WINDOWS laptop! i ran iperf between the unraid vm and another linux box and upload/download speeds were around 930 Mbits/s! so this is a windows 11 bug! some info here: https://www.windowscentral.com/software-apps/windows-11/windows-11-2022-update-slowing-file-transfers-by-up-to-40 it seems there is no fix yet
  13. you're not the only one. for me the issue just went away even though i still can't ping github.com. it still seems to resolve to an ip address that is down, but i can access github.com on the web. i don't understand it. i didn't change any dns settings.
  14. i have a feeling this has something to do with the network interface configured in proxmox. i configured an 802.3ad LAG for 2 network ports and assigned it to a bridge, which is also proxmox's main interface. anyone with virtualised unraid experience, or experience setting up a linux bond, please give me some advice?

      with the iperf client on the proxmox host and my windows laptop as the server, i get about 500+ Mbits/s
      with the iperf client on the laptop and the proxmox host as the server, i get 900+ Mbits/s

      i also ran it on the unraid vm and got similar results. i also tried balance-rr, and the results were also similar. is a linux bond supposed to reduce the server's upload speed like this?
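      (for reproducibility, a sketch of the two test directions described above, using iperf3 syntax; the laptop IP is a placeholder)

         # on the windows laptop (server):
         iperf3 -s
         # on the proxmox host (client), measuring host -> laptop throughput:
         iperf3 -c LAPTOP_IP
         # same client, reverse mode, measuring laptop -> host without swapping roles:
         iperf3 -c LAPTOP_IP -R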
  15. a couple of days ago all my plugin update statuses showed "not available", and after some googling it seems it's because unraid couldn't connect to github. true enough, i couldn't ping github. strangely though, when i ping github.com it says pinging 20.205.243.166, but when i did a dns lookup it seems that github.com is supposed to be 140.82.113.3. so for some reason the dns record isn't correctly propagated (i'm in Malaysia). i checked dns propagation and it seems a few other countries are still resolving github.com to 20.205.243.166, which is still down. can you ping github.com?
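      (a sketch of how to compare what different resolvers return, which is one way to check propagation; dig ships with most linux distros, and 1.1.1.1 / 8.8.8.8 are the public resolvers mentioned elsewhere in this thread)

         ping -c 1 github.com             # shows the address the system resolver picks
         dig +short github.com            # A record via the system resolver
         dig +short github.com @1.1.1.1   # same query against a specific public resolver
         dig +short github.com @8.8.8.8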
  16. i just noticed that my read speeds are now slower than bare-metal. i'm now getting about 55 - 60 MB/s when copying a large file from unraid; it used to be MUCH faster. but write speeds are 110 - 120 MB/s when copying a large file to unraid (cache share). i tried the "max protocol = SMB2_02" setting, but that makes none of the shares accessible, and the logs show these errors:

      Jan 3 22:32:52 SILOmetalico smbd[22262]: [2023/01/03 22:32:52.682566, 0] ../../source3/smbd/smb2_server.c:657(smb2_validate_sequence_number)
      Jan 3 22:32:52 SILOmetalico smbd[22262]: smb2_validate_sequence_number: smb2_validate_sequence_number: bad message_id 0 (sequence id 0) (granted = 1, low = 1, range = 1)
      Jan 3 22:32:53 SILOmetalico smbd[22263]: [2023/01/03 22:32:53.915491, 0] ../../source3/smbd/smb2_server.c:657(smb2_validate_sequence_number)
      Jan 3 22:32:53 SILOmetalico smbd[22263]: smb2_validate_sequence_number: smb2_validate_sequence_number: bad message_id 0 (sequence id 0) (granted = 1, low = 1, range = 1)
      Jan 3 22:33:03 SILOmetalico smbd[22500]: [2023/01/03 22:33:03.071618, 0] ../../source3/smbd/smb2_server.c:657(smb2_validate_sequence_number)
      Jan 3 22:33:03 SILOmetalico smbd[22500]: smb2_validate_sequence_number: smb2_validate_sequence_number: bad message_id 0 (sequence id 0) (granted = 1, low = 1, range = 1)

      any tweaks that could fix this? could it be some network adapter setting in proxmox that is causing this? my network device is virtio with default settings. when doing the same transfer in a windows VM the read/write speeds are between 400 - 500 MB/s (the read speed seems just slightly slower than the write speed)
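      (for reference, a sketch of where that setting lives and how to see what dialect clients actually negotiate; whether any of this helps the slow reads is untested)

         # the tweak above goes in the [global] section of smb.conf
         # (Settings > SMB > SMB Extras on unraid):
         #   server max protocol = SMB2_02
         # with a client connected, smbstatus lists sessions including
         # the negotiated SMB protocol version:
         smbstatus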
  17. the last couple of days i've been running unraid as a VM in proxmox. machine - q35, bios - seabios, cpu - host. i only had to pass through the LSI controllers and the USB flash drive. it could be my imagination, but i feel that the unraid vm starts up faster, including starting up all the dockers. it's very nice to be able to open a console to unraid; it's very much like having a pikvm or ipmi on the unraid machine. now i can make the pfsense vm start before unraid (it's just one of the small things that bugged me and made me want to try unraid as a vm).

      i had a few VMs in unraid, and importing them over to proxmox was quite simple and quite fast (using qm importdisk).

      one of the things i miss is being able to see the cpu and motherboard temperatures on the dashboard, and the fan speeds too. could there be a way to have these displayed on the dashboard like before? another thing that bugs me is that the processor appears as "pc-q35-7.1 @ 2000 MHz" on the dashboard instead of the actual intel processor and speed. server power usage is still displayed on the dashboard because i passed through the UPS, but i would eventually have to connect the ups to proxmox as the nut master and configure a nut slave in unraid, and i'm afraid the power usage will no longer be shown. or can anyone suggest a better way to monitor the PC health (temperatures, fans, power consumption, etc.)?
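      (for anyone trying the same migration, a minimal sketch of the import step mentioned above; the VM id 100, the image path, and the storage name local-lvm are placeholders for your own values)

         # copy an existing unraid vdisk into a proxmox storage as a new
         # disk for VM 100; it then appears as an unused disk to attach
         qm importdisk 100 /mnt/pve/backup/vdisk1.img local-lvm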
  18. sorry, i'm a proxmox noob... where do i set the bios?
  19. my unraid dashboard displays the CPU as "pc-q35-7.1 @ 2000 MHz". i'm also wondering if this can be fixed? i'm on 6.11.5
  20. you're awesome. firmware and bios updated, rebooted the machine and verified that the updated controller detected all the drives, then unraid started right up. after about 30 mins the disks spun down. back to 95W with the array disks idle. that was it! amazing... just wow...
  21. hi jb! here's the diagnostics. i'm currently rebuilding disk5, because earlier i had to remove a disk to replace its caddy, which had broken. i think it wasn't seated properly when i put it back... the array started and the disk was being emulated. strange, because i'm very sure it didn't have any errors when the array first started. thanks. silometalico-diagnostics-20221219-1818.zip
  22. over the weekend i swapped out my old LSI 8i HBA and connected all my disks to the 16i card. previously some were on the m/b sata controller and some were on the 8i. all my disks are SATA. even after a day all the disks are still active (green dot); previously they were all able to spin down. i have tried updating to the latest 6.11.5. there was a small difference after installing 6.11.5: on the previous 6.11.2, clicking the green dot would turn it into the spinning circle, and after a few seconds it would go back to the green dot. on 6.11.5, clicking the green dot turns it into the spinning circle forever; when i refresh the page it is back to the green dot.

      i don't have the turbo-write plugin installed, and spin-up groups under disk settings are not enabled. as far as i can tell there is no read/write activity happening on the array. this is what clicking on the disk log information shows:

      Dec 19 09:52:16 SILOmetalico kernel: sd 9:0:1:0: [sdc] 976773168 512-byte logical blocks: (500 GB/466 GiB)
      Dec 19 09:52:16 SILOmetalico kernel: sd 9:0:1:0: [sdc] Write Protect is off
      Dec 19 09:52:16 SILOmetalico kernel: sd 9:0:1:0: [sdc] Mode Sense: 9b 00 10 08
      Dec 19 09:52:16 SILOmetalico kernel: sd 9:0:1:0: [sdc] Write cache: enabled, read cache: enabled, supports DPO and FUA
      Dec 19 09:52:16 SILOmetalico kernel: sdc: sdc1
      Dec 19 09:52:16 SILOmetalico kernel: sd 9:0:1:0: [sdc] Attached SCSI disk
      Dec 19 09:52:36 SILOmetalico emhttpd: WDC_WD5000AAKX-001CA0_WD-WCAYUS199252 (sdc) 512 976773168
      Dec 19 09:52:36 SILOmetalico kernel: mdcmd (9): import 8 sdc 64 488386552 0 WDC_WD5000AAKX-001CA0_WD-WCAYUS199252
      Dec 19 09:52:36 SILOmetalico kernel: md: import disk8: (sdc) WDC_WD5000AAKX-001CA0_WD-WCAYUS199252 size: 488386552
      Dec 19 09:52:36 SILOmetalico emhttpd: read SMART /dev/sdc
      Dec 19 09:56:49 SILOmetalico emhttpd: spinning down /dev/sdc
      Dec 19 09:56:49 SILOmetalico emhttpd: sdspin /dev/sdc down: 1

      the disk log does show the disks spinning down, but as far as i can tell they actually are not. power usage is between 125W - 135W; previously it would be at 95W - 105W when the disks were idle. i looked through other similar posts, but those seem to be related to SAS drives or some other plugin or disk activity. in my case, the only change is connecting all the drives to the 16i card. unraid booted up without any fuss; other than redoing the IOMMU groups there was nothing else that had to be reconfigured. is there some setting i need to check on the LSI controller or in my bios? under my system devices the LSI card appears as:

      Serial Attached SCSI controller: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 (rev 02)

      any ideas what could be preventing all the disks from spinning down? thanks.
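      (a sketch for double-checking a drive's real power state from the console, independent of the green dot; hdparm and smartctl both ship with unraid, and /dev/sdc is just the example disk from the log above)

         # ask the drive for its power state (reports active/idle or standby)
         hdparm -C /dev/sdc
         # query SMART identity but skip the check if the drive is in
         # standby, so the command itself doesn't spin it up
         smartctl -n standby -i /dev/sdc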
  23. I'm also really interested in these r730 servers. but SAS drives are so expensive, i won't be able to afford them. will normal SATA drives work?
  24. any experienced docker gurus seen this problem before? do i need to map some paths or something for unraid to be able to get the logs?
  25. i managed to get the eclipse-mosquitto docker (the official version from dockerhub) running, and it seems to work. using mqtt explorer i was able to access it and see it working. whenever i start a docker i like to view the logs, which can be conveniently accessed via the GUI, but for this docker it doesn't work: i only see an empty log. i have installed other dockers from dockerhub before and their logs all show up as one would expect. is there some kind of config for getting the gui logs to work?
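      (a sketch for checking the same logs from the console, bypassing the GUI; the container name mosquitto is a guess, so list the real name first)

         # find the container's actual name
         docker ps --format '{{.Names}}'
         # follow its stdout/stderr, which is what the unraid GUI log
         # window displays
         docker logs -f mosquitto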