Everything posted by jsoonias
-
A few months ago, when I rebuilt my server, I tried to replace my 500GB cache drive with a 2TB one. That did not work for whatever reason, and since then I have had an error about an invalid appdata folder under /mnt. I did not have time to look into it, so it sat until now. I recently got the old drive replaced with the newer one, but now I notice constant reads and writes on both the cache and the array. The Fix Common Problems plugin is also telling me I have data on both the cache and the array for the appdata and system shares, even though they are set to cache-only. I believe I also messed up the cache settings on those shares when I changed the cache drive. I found the appdata folder under /mnt, and the only thing in it is a Plex transcode folder. I tried to remove it with Krusader while Docker was disabled, but that didn't work. Could someone please look at my logs and let me know what I need to change? Thanks in advance. jesse-diagnostics-20240424-2017.zip
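For anyone landing here with the same warning: before changing share settings, it helps to confirm exactly which mounts still hold copies of a share. A minimal sketch, assuming a stock Unraid layout (`/mnt/cache`, `/mnt/disk1`...); the `find_share_copies` helper is hypothetical, not an Unraid command:

```shell
# Hypothetical helper: list every copy of a share across the cache pool and
# the array disks. Assumes the stock Unraid layout <root>/cache/<share> and
# <root>/disk*/<share>; pass /mnt as the root on a live server.
find_share_copies() {
    root="$1"
    share="$2"
    # ls errors for non-existent paths are suppressed; only real copies print.
    ls -d "$root"/cache/"$share" "$root"/disk*/"$share" 2>/dev/null
}

# Example against a live server (hypothetical):
#   find_share_copies /mnt appdata
#   find_share_copies /mnt system
```

If a cache-only share prints paths under both `/mnt/cache` and `/mnt/disk*`, that matches the Fix Common Problems warning above.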
-
I am getting the following error. I believe this started happening after I tried (and failed) to replace my 500GB cache drive with a 2TB one. Any help would be greatly appreciated! jesse-diagnostics-20240413-1702.zip
-
Unraid Crashed, Need help to decipher diagnostics
jsoonias replied to jsoonias's topic in General Support
That's correct. I enabled the syslog server this morning, and it crashed soon after. -
Unraid Crashed, Need help to decipher diagnostics
jsoonias replied to jsoonias's topic in General Support
Yes, I did. Am I doing something wrong? -
Unraid Crashed, Need help to decipher diagnostics
jsoonias replied to jsoonias's topic in General Support
Those logs were from right after Unraid started following a crash, before the downgrade. -
Unraid Crashed, Need help to decipher diagnostics
jsoonias replied to jsoonias's topic in General Support
See attached. I have downgraded to 6.10.3 for the time being, as another user on Reddit said this solved a similar problem. jesse-diagnostics-20230103-0927.zip syslog-192.168.1.49.log -
Hello all, I posted this topic on Reddit and have downloaded the logs, attached here. Basically, Unraid has crashed/become unresponsive for the second time since upgrading the OS. jesse-diagnostics-20230102-1954.zip
-
Disk in error state. First time I've had a disk fail?
jsoonias replied to jsoonias's topic in General Support
I ended up rebuilding onto a new drive and preclearing the existing one. I got antsy. Any recommendations for a replacement controller? Thanks for all your help. -
Disk in error state. First time I've had a disk fail?
jsoonias replied to jsoonias's topic in General Support
I remounted the drive and it has shown up again. I have included the SMART report. Should I be starting the array with the drive in its original position? It says starting the array will trigger a parity check and/or data rebuild. jesse-smart-20210221-1129.zip -
Disk in error state. First time I've had a disk fail?
jsoonias replied to jsoonias's topic in General Support
At this point I think I should probably replace the drive and rebuild. Am I correct? -
Disk in error state. First time I've had a disk fail?
jsoonias replied to jsoonias's topic in General Support
The server booted with the disk missing; it says "not installed". Why is the SASLP not recommended? See the updated diagnostics. jesse-diagnostics-20210221-1019.zip -
I woke up to a warning that I have a disk with read errors, and an alert that one disk is in an error state. I do have a replacement drive ready to go if need be. Should I be replacing this ASAP? Attached is the diagnostics file. jesse-diagnostics-20210221-0817.zip
-
Looking for some assistance with this error. Attached are my diagnostics. Thanks. jesse-diagnostics-20210119-0708.zip
-
[Support] Linuxserver.io - Nextcloud
jsoonias replied to linuxserver.io's topic in Docker Containers
My Nextcloud has not worked for over a year. What is my best shot at getting it back up and running again? I still have existing files in my Nextcloud docker. Should I delete the Nextcloud and MariaDB dockers? Is there anything else I would need to do? Thanks so much for your input. -
Be aware the lists can contain multiple TBs of data.
-
Hello, I have an issue: I added a Disney list that contained about 100,000 movies, and now I want to purge my Radarr library and start from scratch. What is the best way to do this?
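One common way to start fresh is to move the app's config directory aside so the container rebuilds a clean one on its next start. A sketch under assumptions: the `reset_app_config` helper is hypothetical, the container name `radarr` and the `/mnt/user/appdata` path reflect a typical Unraid setup, and you lose all library state (back it up first if unsure):

```shell
# Hypothetical reset: move an app's config directory to a timestamped backup
# so the container creates a fresh one when it restarts.
reset_app_config() {
    dir="$1"
    [ -d "$dir" ] || return 1          # nothing to reset
    mv "$dir" "$dir.bak.$(date +%Y%m%d%H%M%S)"  # keep the old config as a backup
    mkdir -p "$dir"                    # leave an empty dir for the container
}

# On a live server the sequence would be roughly (not run here):
#   docker stop radarr
#   reset_app_config /mnt/user/appdata/radarr
#   docker start radarr
```

Keeping the timestamped backup means the old database can be restored by moving the directory back if the purge turns out to be a mistake.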
-
Unraid Forum 100K Giveaway
jsoonias replied to SpencerJ's topic in Unraid Blog and Uncast Show Discussion
I like the community built around Unraid. I bought my first server from rajahal 10? years ago without really knowing anything about computers. Multiple arrays would be nice. Thanks, guys! -
[Support] Linuxserver.io - Nextcloud
jsoonias replied to linuxserver.io's topic in Docker Containers
Hey guys, my Nextcloud stopped working a while back and I am just starting to look at it again. I cannot load the Nextcloud web UI at all. Where should I start? The browser gives me the error ERR_CONNECTION_REFUSED. Edit: found that my letsencrypt docker was orphaned and not running. Found the guide; let me confirm everything is OK first. Oops. Thanks. -
[SOLVED] Server will not start, OB8A: BMC SEL area full
jsoonias replied to jsoonias's topic in General Support
Thanks, John. I cleared the event log through IPMI; most of the alarms were due to low fan-speed thresholds, which I guess I need to look into further. Anyway, I am now able to boot. Thanks for the help. -
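The SEL cleanup described above can also be done from the command line. A sketch, assuming the `ipmitool` package and a reachable BMC (the `clear_bmc_sel` wrapper is hypothetical and degrades to a message when either is missing):

```shell
# Hypothetical wrapper around the standard ipmitool SEL commands.
# Assumes a local BMC interface; harmless on machines without one.
clear_bmc_sel() {
    if ! command -v ipmitool >/dev/null 2>&1; then
        echo "ipmitool not installed"
        return 0
    fi
    # Review the stored events first (low fan thresholds, in this case),
    # then clear the full log so the board can get past POST.
    ipmitool sel list 2>/dev/null || echo "no reachable BMC"
    ipmitool sel clear 2>/dev/null || true
}
```

For the underlying fan alarms, adjusting the BMC's lower fan-speed thresholds (e.g. via `ipmitool sensor thresh`) stops the SEL from filling up again.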
Hey guys, I need some help. My server gets stuck at this point and reboots. The second picture shows "no TPM or TPM has problem".
-
i801_smbus 0000:00:1f.3: SMBus is busy, can't use it!
jsoonias replied to slimshizn's topic in General Support
Got the same issue. Anyone figure it out? -
Thanks again, Squid, that worked. Any idea what all the activity in the log means, regarding the ports?
-
Attached are the diagnostics: jesse-diagnostics-20190101-1959.zip
-
Hey guys, I hit the "update all" button for my dockers and the process got stuck on shutting down duckdns. After an hour I exited out of the update window and now have a few orphan images where my dockers should be. Is there an easy way to get these back?

Jan 1 19:06:00 Jesse emhttpd: req (15): cmdStartMover=Move+now&csrf_token=****************
Jan 1 19:06:00 Jesse emhttpd: shcmd (7311): /usr/local/sbin/mover &> /dev/null &
Jan 1 19:10:11 Jesse nginx: 2019/01/01 19:10:11 [error] 10254#10254: *529594 upstream timed out (110: Connection timed out) while reading upstream, client: 192.168.1.29, server: , request: "GET /plugins/dynamix.docker.manager/include/CreateDocker.php?updateContainer=true&ct[]=duckdns&ct[]=JesseSoonias&ct[]=mariadb&ct[]=openvpn-as HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.1.49", referrer: "http://192.168.1.49/Docker"
Jan 1 19:13:21 Jesse kernel: veth8196083: renamed from eth0
Jan 1 19:13:21 Jesse kernel: br-d041a9a063a1: port 1(vethe668705) entered disabled state
Jan 1 19:13:23 Jesse avahi-daemon[10107]: Interface vethe668705.IPv6 no longer relevant for mDNS.
Jan 1 19:13:23 Jesse avahi-daemon[10107]: Leaving mDNS multicast group on interface vethe668705.IPv6 with address fe80::ec0c:adff:fe17:21c5.
Jan 1 19:13:23 Jesse kernel: br-d041a9a063a1: port 1(vethe668705) entered disabled state
Jan 1 19:13:23 Jesse kernel: device vethe668705 left promiscuous mode
Jan 1 19:13:23 Jesse kernel: br-d041a9a063a1: port 1(vethe668705) entered disabled state
Jan 1 19:13:23 Jesse avahi-daemon[10107]: Withdrawing address record for fe80::ec0c:adff:fe17:21c5 on vethe668705.
Jan 1 19:16:37 Jesse kernel: veth4f72b13: renamed from eth0
Jan 1 19:16:37 Jesse kernel: docker0: port 1(vethe1d2ce7) entered disabled state
Jan 1 19:16:39 Jesse avahi-daemon[10107]: Interface vethe1d2ce7.IPv6 no longer relevant for mDNS.
Jan 1 19:16:39 Jesse kernel: docker0: port 1(vethe1d2ce7) entered disabled state
Jan 1 19:16:39 Jesse avahi-daemon[10107]: Leaving mDNS multicast group on interface vethe1d2ce7.IPv6 with address fe80::784f:eaff:fede:9dcb.
Jan 1 19:16:39 Jesse kernel: device vethe1d2ce7 left promiscuous mode
Jan 1 19:16:39 Jesse kernel: docker0: port 1(vethe1d2ce7) entered disabled state
Jan 1 19:16:39 Jesse avahi-daemon[10107]: Withdrawing address record for fe80::784f:eaff:fede:9dcb on vethe1d2ce7.
Jan 1 19:17:11 Jesse kernel: docker0: port 1(veth9c2f2ae) entered blocking state
Jan 1 19:17:11 Jesse kernel: docker0: port 1(veth9c2f2ae) entered disabled state
Jan 1 19:17:11 Jesse kernel: device veth9c2f2ae entered promiscuous mode
Jan 1 19:17:11 Jesse kernel: IPv6: ADDRCONF(NETDEV_UP): veth9c2f2ae: link is not ready
Jan 1 19:17:11 Jesse kernel: docker0: port 1(veth9c2f2ae) entered blocking state
Jan 1 19:17:11 Jesse kernel: docker0: port 1(veth9c2f2ae) entered forwarding state
Jan 1 19:17:11 Jesse kernel: docker0: port 1(veth9c2f2ae) entered disabled state
Jan 1 19:17:18 Jesse kernel: eth0: renamed from vetha01a56f
Jan 1 19:17:18 Jesse kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth9c2f2ae: link becomes ready
Jan 1 19:17:18 Jesse kernel: docker0: port 1(veth9c2f2ae) entered blocking state
Jan 1 19:17:18 Jesse kernel: docker0: port 1(veth9c2f2ae) entered forwarding state
Jan 1 19:17:19 Jesse avahi-daemon[10107]: Joining mDNS multicast group on interface veth9c2f2ae.IPv6 with address fe80::90f3:3bff:feec:8692.
Jan 1 19:17:19 Jesse avahi-daemon[10107]: New relevant interface veth9c2f2ae.IPv6 for mDNS.
Jan 1 19:17:19 Jesse avahi-daemon[10107]: Registering new address record for fe80::90f3:3bff:feec:8692 on veth9c2f2ae.*.
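For reference, the usual recovery on Unraid is to re-add each container from its saved template (Docker tab, Add Container, then pick the saved template), after which the leftover orphan (dangling) images can be removed. A sketch of the inspection step; the `list_orphan_images` wrapper is hypothetical, while the `docker images` filter itself is a standard Docker CLI feature:

```shell
# Hypothetical wrapper: print the IDs of dangling (orphan) images.
# Guards for hosts where docker is absent or the daemon is not running.
list_orphan_images() {
    if ! command -v docker >/dev/null 2>&1; then
        echo "docker not installed"
        return 0
    fi
    docker images -f dangling=true -q 2>/dev/null || echo "docker daemon not reachable"
}

# Once the containers are re-created from templates, the orphans can be
# cleaned up with:  docker image prune -f
```

Dangling images are the untagged layers left behind when an update replaces a tag; pruning them only reclaims disk space and does not touch running containers or their templates.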
-
Thanks, Squid, that worked.