Triplerinse

Members
  • Posts: 22
  • Joined
  • Last visited

Triplerinse's Achievements

Noob (1/14)

Reputation: 1
Community Answers

  1. So I have been running Unraid for about 5 years now and it's been great. A few hiccups here and there, but it's been good. Just before 6.12 dropped (not knowing when it would actually drop) I rebuilt my server and got a new CPU and motherboard, an Intel i5-11600K, because it was on sale for a good price. Then 6.12 drops and all 11th gen Intel CPUs lock up Unraid because of the i915 driver. I know there is a temporary fix that involves opening a web terminal and typing some lines, but they said doing this might result in higher power use (yes, I know it's more of a Linux kernel issue than an Unraid issue). Fine, I'll wait until this issue is resolved and then upgrade. Then it comes out that Community Applications will no longer support anything prior to 6.12. First question: has the i915 issue been resolved, or has Unraid implemented a fix for it other than opening a web terminal and going that route? Second question: the only real dockers I am running are Krusader, Plex, and Tautulli; will these still get updates? For the most part I don't like messing with stuff that isn't broken; if it isn't broken and I try to do something like an upgrade, it seems to cause more issues than it fixes. This is the fix I was talking about, but it's still not clear what to do exactly if it won't boot: create a config file on the flash drive and put "options i915 enable_dc=0" in it. At the risk of sounding dumb, how exactly do I do this? Thanks for any advice.
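     For reference, the workaround quoted above is usually applied by dropping a small module-options file onto the flash drive so it survives reboots. The following is only a sketch of that approach; the /boot/config/modprobe.d path and the i915.conf filename are assumptions based on how Unraid commonly loads module options, so check the release notes for your version before relying on it.

        # Sketch only: create a module-options file on the flash drive so the
        # i915 driver loads with enable_dc=0 on every boot. The directory and
        # filename are assumptions; adjust if your setup differs.
        mkdir -p /boot/config/modprobe.d
        echo "options i915 enable_dc=0" > /boot/config/modprobe.d/i915.conf
        # Reboot so the option takes effect when the i915 module is loaded.

     If the server will not boot far enough to reach a terminal, the Unraid flash drive can be read on another computer, so the same file can be created there instead.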
  2. I tried looking in the support thread but nothing has been updated in a while. I don't know if it was just timing or something else, but 2 days ago I shut down my server because of a storm. No errors on boot-up, but come Sunday morning my CA appdata backup had run and did not complete due to errors. I looked at the network drive that I have mapped and it wasn't mounted, so I remounted it and figured that was it. Then I tried to run the backup manually and it started. Logs are as follows:
     Jul 16 21:18:20 --------- CA Backup/Restore: Stopping tautulli...
     Jul 16 21:18:21 --------- kernel: docker0: port 1(vethc3b9931) entered disabled state
     Jul 16 21:18:21 --------- kernel: vethb7e55ef: renamed from eth0
     Jul 16 21:18:21 --------- avahi-daemon[15014]: Interface vethc3b9931.IPv6 no longer relevant for mDNS.
     Jul 16 21:18:21 --------- avahi-daemon[15014]: Leaving mDNS multicast group on interface vethc3b9931.IPv6 with address fe80::a88e:d7ff:fe1c:6bcf.
     Jul 16 21:18:21 --------- kernel: docker0: port 1(vethc3b9931) entered disabled state
     Jul 16 21:18:21 --------- kernel: device vethc3b9931 left promiscuous mode
     Jul 16 21:18:21 --------- kernel: docker0: port 1(vethc3b9931) entered disabled state
     Jul 16 21:18:21 --------- avahi-daemon[15014]: Withdrawing address record for fe80::a88e:d7ff:fe1c:6bcf on vethc3b9931.
     Jul 16 21:18:21 --------- CA Backup/Restore: done! (took 1 seconds)
     Jul 16 21:18:21 --------- CA Backup/Restore: Backing Up appData from /mnt/appcache/appdata/ to /mnt/remotes/WDMYCLOUD_nfs/backupcpu/parzival/[email protected]
     Jul 16 21:18:21 --------- CA Backup/Restore: Separate archives disabled! Saving into one file.
     Jul 16 21:18:21 --------- CA Backup/Restore: Using command: cd '/mnt/appcache/appdata/' && /usr/bin/tar -caf '/mnt/remotes/WDMYCLOUD_nfs/backupcpu/parzival/[email protected]/CA_backup.tar' --exclude "binhex-plex" . >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/backupInProgress && wait $!
     Jul 16 21:18:21 --------- CA Backup/Restore: Backing Up
     It just sat there at "Backing Up", so I aborted it, and then the logs show:
     Jul 16 21:29:40 --------- CA Backup/Restore: CA Backup / Restore tar process running. Killing 20727
     Jul 16 21:30:10 --------- kernel: traps: lsof[19097] general protection fault ip:14901a8d84ee sp:9fae83dbb29b697f error:0 in libc-2.36.so[14901a8c0000+16b000]
     Jul 16 21:30:10 --------- CA Backup/Restore: User aborted backup/restore!
     Jul 16 21:30:10 --------- CA Backup/Restore: done
     Jul 16 21:30:10 --------- CA Backup/Restore: Starting binhex-plexpass... (try #1)
     Jul 16 21:30:10 --------- CA Backup/Restore: done!
     Jul 16 21:30:12 --------- CA Backup/Restore: Starting tautulli... (try #1)
     Jul 16 21:30:12 --------- kernel: docker0: port 1(vetha9b7d7a) entered blocking state
     Jul 16 21:30:12 --------- kernel: docker0: port 1(vetha9b7d7a) entered disabled state
     Jul 16 21:30:12 --------- kernel: device vetha9b7d7a entered promiscuous mode
     Jul 16 21:30:12 --------- kernel: docker0: port 1(vetha9b7d7a) entered blocking state
     Jul 16 21:30:12 --------- kernel: docker0: port 1(vetha9b7d7a) entered forwarding state
     Jul 16 21:30:12 --------- kernel: docker0: port 1(vetha9b7d7a) entered disabled state
     Jul 16 21:30:12 --------- kernel: eth0: renamed from veth06a4dc9
     Jul 16 21:30:12 --------- kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vetha9b7d7a: link becomes ready
     Jul 16 21:30:12 --------- kernel: docker0: port 1(vetha9b7d7a) entered blocking state
     Jul 16 21:30:12 --------- kernel: docker0: port 1(vetha9b7d7a) entered forwarding state
     Jul 16 21:30:12 --------- CA Backup/Restore: done!
     Jul 16 21:30:14 --------- avahi-daemon[15014]: Joining mDNS multicast group on interface vetha9b7d7a.IPv6 with address fe80::d84a:c3ff:feaa:f128.
     Jul 16 21:30:14 --------- avahi-daemon[15014]: New relevant interface vetha9b7d7a.IPv6 for mDNS.
     Jul 16 21:30:14 --------- avahi-daemon[15014]: Registering new address record for fe80::d84a:c3ff:feaa:f128 on vetha9b7d7a.*.
     Jul 16 21:30:14 --------- CA Backup/Restore: #######################
     Jul 16 21:30:14 --------- CA Backup/Restore: appData Backup complete
     Jul 16 21:30:14 --------- CA Backup/Restore: #######################
     Jul 16 21:30:14 --------- sSMTP[2853]: Creating SSL connection to host
     Jul 16 21:30:14 --------- sSMTP[2853]: SSL connection using TLS_AES_256_GCM_SHA384
     These are the only errors I see in my entire syslog. I think I recently updated the application, so I have disabled it for the time being. Any help would be appreciated.
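     Since the hang started right after the remote share dropped, a quick pre-flight check of that mount can save a stuck tar process. The following is a minimal sketch only, assuming the mount path from the log above; the 10-second timeout is an arbitrary choice.

        # Sketch: confirm the remote backup share is mounted and responding
        # before a backup kicks off. SHARE is taken from the log above.
        SHARE=/mnt/remotes/WDMYCLOUD_nfs
        if ! mountpoint -q "$SHARE"; then
            echo "$SHARE is not mounted" >&2
        elif ! timeout 10 ls "$SHARE" > /dev/null; then
            echo "$SHARE is mounted but not responding (possibly a stale NFS handle)" >&2
        fi

     A tar writing into a stale or unresponsive NFS mount can block for a very long time, which would match the backup sitting at "Backing Up" until it was aborted.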
  3. Can you better explain what this line does and how it will affect the server? I use the iGPU for Plex.
  4. It may be the 11th gen Intel CPU issue. I built my new server 2 months ago because the 11th gen was a great buy.
  5. I recently did a fresh install and moved over my old data from my old Unraid server. Everything seems to be fine, but occasionally drives will spin up when I access the webGUI page, or I'll access a file on disk 7 and disks 1 and 3 will spin up. Is there a different disk setting in 6.11.5 that would be causing this? Also, I noticed a warning in the logs when transferring files to the array:
     Apr 21 15:11:37 smbd[16202]: [2023/04/21 17:11:37.307830, 0] ../../source3/smbd/files.c:1199(synthetic_pathref)
     Apr 21 15:11:37 smbd[16202]: synthetic_pathref: opening [4k Movies/Babylon.mkv] failed
     Is this a known issue with 6.11, and do I not need to worry about it? My file seems to have transferred fast and moved off my cache; now when I access that file it plays off of my array. Any help would be appreciated.
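     One way to sanity-check those spin-ups is to see which array disks actually hold pieces of the folder being browsed, since opening a user-share directory touches every disk that contains part of it. A minimal sketch, assuming the default /mnt/diskN mount points and the folder name from the log above:

        # Sketch: list which array disks contain part of the "4k Movies" folder.
        # Browsing that folder through the user share reads the directory on each
        # of these disks, which can spin up disks other than the one the file is on.
        ls -d /mnt/disk*/"4k Movies" 2>/dev/null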
  6. Got everything up and running on 6.11.5; the shares popped up and everything was looking good. Did a parity build and then started to test file transfers. Files seem to transfer fine, but then I see this in the logs:
     Apr 21 15:11:37 smbd[16202]: [2023/04/21 17:11:37.307830, 0] ../../source3/smbd/files.c:1199(synthetic_pathref)
     Apr 21 15:11:37 smbd[16202]: synthetic_pathref: opening [4k Movies/Babylon.mkv] failed
     It seems like any time I transfer a file it has this issue. I also have diagnostics if that would help. Also, when I log on to the webGUI, sometimes random disks spin up even though no files are being accessed.
  7. Thanks for the info. This was my main plan and I figured this was how it should go, but I wanted to double-check before proceeding. Printing them out would be a good idea for the future in case something happened to the drive; I had the JPG stored on my main computer as a backup. Is there a special checkbox to say keep the data on the drives and not format them for the array?
  8. I know it's odd to do this! But I just want to start from a brand new install: new flash, new cache drive, new everything, but keep the data on the drives. I'm fine with redoing the dockers since I'm not going to install all of them again. Maybe it's not the best course of action, but I was on 6.9.2 before, and when I moved to 6.10 weird things started to happen: drives were renamed (outside of the array) and SMART data from a 1TB HDD was showing up on a 280GB SSD.
  9. I have built a new Unraid server and wanted to simplify things to just a few dockers and no VMs, lowering power consumption, and I downgraded the CPU. I just want to start fresh from the very beginning: no appdata and a new flash drive. I have a decent array, 68 TB usable with 30-some used, on a new install of Unraid. Can I just take the drives from the old machine, move them to the new machine, and do a New Config matching the configuration of the old server? Will this keep my data intact, or do I have to use Unassigned Devices and move the data one drive at a time?
  10. I'll try that. But preclear is actively reading or writing to them, so shouldn't that keep them from spinning down in the first place?
  11. I am starting a new server with a small number of drives as a backup. I got new drives and was preclearing them before starting the array with them. Is it normal to see this in the main logs? Temps seem fine and speed seems good. The green dot on the drives goes gray for about half a second while they are preclearing. I have done preclears before but just never looked at the logs.
  12. Thanks for the answer, I appreciate the response greatly. I have some new SATA cables coming in the mail this week. I'll swap the SATA and power cables and see what happens.