Rex099

Members · 22 posts

Everything posted by Rex099

  1. If I load binhex/arch-sabnzbd it shows "Par2cmdline-turbo: Not available", but if I load binhex/arch-sabnzbd:test it shows it working.
  2. Will this ever be rolled into the latest tag?
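
     In the meantime, a minimal way to check for yourself which tag bundles par2cmdline-turbo: the images are Arch-based, so pacman inside them can report the installed package (the --entrypoint override is an assumption about how the images are built):

         # Pull both tags and ask pacman which par2 package each one ships
         docker pull binhex/arch-sabnzbd:latest
         docker pull binhex/arch-sabnzbd:test
         docker run --rm --entrypoint pacman binhex/arch-sabnzbd:latest -Qs par2
         docker run --rm --entrypoint pacman binhex/arch-sabnzbd:test -Qs par2
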
  3. Wondering if you ever found a fix for this? Whenever I use the action center to update more than one docker, I seem to have the same problem. If I update one at a time it works fine, but if I do multiple, it says the updates happened; yet if I click the advanced toggle on the Docker tab there will be orphaned images. Then I have to click force update on the individual dockers to get them to straighten out.
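
     A hedged aside: those orphaned entries are just dangling Docker images, so besides force-updating each container they can also be cleared from a terminal:

         # List images that lost their tag during the batch update
         docker images --filter dangling=true
         # Remove all dangling images in one go (asks for confirmation)
         docker image prune
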
  4. Posting how I fixed this in case someone else runs into it. I still never figured out what exactly the issue was, but I ended up backing up binhex-plexpass using the old Backup/Restore Appdata plugin again. Then I reformatted plex_appdata as XFS, then back to ZFS (I tried ZFS first but it didn't fix the issue, so I had to do XFS first and then ZFS again). Once I did that, I restored from the Backup/Restore Appdata plugin and ran the SpaceinvaderOne script, and it did a proper full snapshot... what a headache, with no clear reason why.
  5. Sanity check: in theory, if I deleted all the snapshots from both datasets, binhex-plexpass and plex_appdata_binhex-plexpass, and then took a snapshot, shouldn't it send a full backup from binhex-plexpass -> plex_appdata_binhex-plexpass when the SpaceinvaderOne script runs?
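
     For reference, a minimal sketch of that reset done by hand; the pool paths and snapshot name here are made up. Without a common snapshot on both sides an incremental send is impossible, so the first send after the wipe has to be a full one:

         # DESTRUCTIVE: remove every snapshot on both datasets ('%' = all)
         zfs destroy plex_appdata/binhex-plexpass@%
         zfs destroy zfs-backup-plex/plex_appdata_binhex-plexpass@%
         # Take a fresh snapshot and send it in full to the backup dataset
         zfs snapshot plex_appdata/binhex-plexpass@full-resync
         zfs send plex_appdata/binhex-plexpass@full-resync | \
             zfs receive -F zfs-backup-plex/plex_appdata_binhex-plexpass
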
  6. I'm wondering if there is some way to reset the ZFS snapshots without deleting a dataset. I have 3 ZFS drives: 1 cache, 1 unassigned device, and both of those replicating to a ZFS drive in my array using 2 copies of SpaceinvaderOne's script. But I think my unassigned drive, which holds my Plex appdata, somehow never actually got a full snapshot, even though it says it did. It's really hard to explain what I mean, but I'll do my best. Here is my ZFS Master view.

     My appdata folder from cache seems to be working fine, and everything is backed up to zfs-backup on disk9 with no issues. But my binhex-plexpass does not seem to actually be backing up to zfs-backup-plex: the snapshots say they are getting incrementally backed up, but no data is being populated on plex_appdata_binhex-plexpass like it is on cache_appdata. For example, when I browse cache_appdata I see files, but if I browse plex_appdata_binhex-plexpass it's empty, like it never got the first full snapshot (see the checks sketched below).

     So I'm just not sure what I need to do to fix it, or even if there is a way to without blowing away the plex_appdata drive and rebuilding again, which I really don't want to do. It's something like 420GB of Plex appdata, which takes a long time to rebuild.
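
     A couple of read-only checks that would confirm whether the target ever received real data; the pool paths are placeholders:

         # Compare how much data each side actually references
         zfs list -o name,used,refer plex_appdata/binhex-plexpass
         zfs list -o name,used,refer zfs-backup-plex/plex_appdata_binhex-plexpass
         # Snapshots on the target with near-zero REFER mean the incrementals
         # were layered on an empty base, i.e. the full send never landed
         zfs list -t snapshot -r zfs-backup-plex/plex_appdata_binhex-plexpass
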
  7. Just seemed to encounter this for the first time today. I installed the Dynamix Share Floor plugin, which modifies the Minimum free space value. But just as noted in the other posts, if I modify any of the export settings (even if you just change it and change it back, then save), the message goes away.
  8. Not really sure what I'm looking for, or what this means, but line 1343 of the syslog reads:

         Feb 22 02:46:36 Tower kernel: mce: [Hardware Error]: Machine check events logged

     Should I be looking elsewhere?
  9. So I guess sometime over the past two days I got an MCE error from Fix Common Problems. Attaching the diagnostics log as requested by the plugin. Any help or info would be appreciated. tower-diagnostics-20230227-1441.zip
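
     For others hitting the same warning: the raw events can be fished out of the kernel log before attaching diagnostics. The grep patterns are generic; decoding the event itself needs the mcelog package, which is not installed on stock Unraid:

         # Show any machine-check lines in the kernel ring buffer
         dmesg | grep -i 'machine check\|mce:'
         # And in the persisted syslog
         grep -i 'machine check' /var/log/syslog
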
  10. Honestly I'm not sure, but I think I recall having a similar problem when I first set up my instance. I never figured out what was causing the problem, but I just stopped the docker, removed my folder appdata/plextraktsync, restarted the docker, went to the console, entered "python3 -m plextraktsync", and re-did the credential setup, and it just worked. I figure something in the original appdata folder got corrupted somehow, but I don't know what. To note, both my Plex and PlexTraktSync are running on host.
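
     The reset described above, spelled out as console commands; the container name and appdata path are assumptions based on a typical Unraid setup:

         # Stop the container and set its config aside (safer than deleting)
         docker stop plextraktsync
         mv /mnt/user/appdata/plextraktsync /mnt/user/appdata/plextraktsync.bak
         # Restart and walk through the credential setup again
         docker start plextraktsync
         docker exec -it plextraktsync python3 -m plextraktsync
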
  11. Sorry, I never actually figured out what the issue was, but after ~3 weeks everything just started working again on its own. I still think it was something the ISP was doing, but I don't have any proof.
  12. Wondering if someone might be willing to help point me in the right direction as to what is going on with my issue. As of 3am this morning everything was working fine with my swag setup; then I went to bed... When I awoke today my website (dlongo.net) is no longer accessible from inside my local network (the site just times out with ERR_CONNECTION_TIMED_OUT), but it seems to work fine if I turn on my VPN or access it from my mobile connection. Also, if I ping dlongo.net it resolves to the correct IP. Anyone have any ideas on what I can check? I'm just kind of lost at this point.
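
     Given the symptoms (works externally, times out only on the LAN, DNS resolves correctly), this smells like NAT loopback/hairpinning breaking on the router. A couple of hedged checks; the LAN IP shown is an example:

         # What does the LAN resolve the name to?
         nslookup dlongo.net
         # Hit the reverse proxy by its LAN IP directly; if this answers
         # while https://dlongo.net times out from inside, NAT reflection
         # (or a missing local DNS override) is the likely culprit
         curl -vk https://192.168.1.50
         curl -v https://dlongo.net
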
  13. Not sure if others are getting the same thing, but mine is still showing Mono version 6.4.0.
  14. Looks like it's been updated on Arch! Yay!
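
     For anyone wanting to confirm what their container actually ships (the container name is an example):

         # Print the Mono runtime version inside the running container
         docker exec binhex-sonarr mono --version
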
  15. I know you were waiting for an upgrade, but just wondering: can the docker use the commands from the wiki? I see this in the wiki: "I get a TLS handshake (or similar certificate-based) error": try mozroots --import --ask-remove, which should update Mono's certificates; mozroots is part of the mono package.
  16. Is there any way to update Mono manually? It seems the docker is running v6.4.0 and the current stable is v6.8.0. I'm getting some web socket errors and have been told that I need to update Mono to current, but I'm not sure if I can do that when it's inside a docker. Not sure if this was the right move, but I did find that the Arch package page (https://www.archlinux.org/packages/?name=mono) is showing version 6.4.0.198-2, so I wrote a post on the Arch forums requesting an update (https://bbs.archlinux.org/viewtopic.php?id=256541); not sure if that is going to do any good or not.
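
     If you want to try the wiki's mozroots fix quoted above inside the container, it would look roughly like this; note that anything changed outside /config is lost when the image is updated:

         # Run the certificate import from the Mono wiki inside the container
         docker exec -it binhex-sonarr mozroots --import --ask-remove
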
  17. Thanks for the info about using old builds, guys. I didn't know that was possible. Learn new stuff every day.
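
     The "old builds" trick, for anyone else who missed it: binhex images keep versioned tags on Docker Hub, so you can pin one instead of tracking latest. The tag shown is purely an example; check the image's tag list:

         # Point the container at a specific historical build instead of 'latest'
         docker pull binhex/arch-sonarr:2.0.0.5344-1-01   # example tag
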
  18. So, a little bit of an update. This whole time I didn't realize that I had an additional Molex connection off my power supply. I have a Supermicro chassis with a backplane, so up until now I've had all my drives, including my cache drives, plugged into the backplane, and it wasn't an issue until now. Today I got the bright idea to unplug everything in the server and plug it back together. Still no change... However, during this process I found an additional Molex pigtail tucked away off to the side. So I ran to the store and bought a Molex-to-SATA Y cable, moved my cache drives off the backplane and onto the onboard SATA ports, and like magic everything seems to be functioning much better and not hanging like before. Only about an hour of uptime right now, but I'm hoping my performance issues were related to this. (If something goes south I'll post another update.) Takeaway: do what you read and put your cache drives on the onboard SATA ports. It will save you headaches down the road.
  19. Here is what my Sonarr error log looks like (I pulled these diags about a minute after, so ~4:26):

         4:25pm  CommandExecutor  Error occurred while executing task CheckForFinishedDownload: database is locked database is locked
         4:24pm  EventAggregator  TaskManager failed while processing [CommandExecutedEvent]: database is locked database is locked
         4:24pm  TaskExtensions  Task Error: database is locked database is locked
         4:20pm  CommandExecutor  Error occurred while executing task CheckForFinishedDownload: database is locked database is locked
         4:19pm  CommandExecutor  Error occurred while executing task MessagingCleanup: database is locked database is locked
         4:19pm  EventAggregator  TaskManager failed while processing [CommandExecutedEvent]: database is locked database is locked
         4:18pm  EventAggregator  TaskManager failed while processing [CommandExecutedEvent]: database is locked database is locked
         4:17pm  TaskExtensions  Task Error: database is locked database is locked
         4:17pm  CommandExecutor  Error occurred while executing task CheckForFinishedDownload: database is locked database is locked
         4:17pm  EventAggregator  TaskManager failed while processing [CommandExecutedEvent]: database is locked database is locked
         4:12pm  TaskExtensions  Task Error: database is locked database is locked
         4:10pm  TaskExtensions  Task Error: database is locked database is locked
         4:07pm  TaskExtensions  Task Error: database is locked database is locked
         4:06pm  CommandExecutor  Error occurred while executing task CheckForFinishedDownload: database is locked database is locked
         4:06pm  EventAggregator  TaskManager failed while processing [CommandExecutedEvent]: database is locked database is locked
         4:06pm  TaskExtensions  Task Error: database is locked database is locked
         3:06pm  TaskExtensions  Task Error: database is locked database is locked

      tower-diagnostics-20190310-1325.zip
  20. Does anyone have any additional ideas for troubleshooting this? I'm still lost. Periodically the system is just unresponsive for ~5 minutes, and when it comes back around the only real errors I'm seeing are inside Sonarr and Radarr, saying that the database is locked. Not really sure how it could be locked; when I google this it says another process is using the DB, but how could that be, since they are in separate dockers?

         Task Error: database is locked
         database is locked
         System.Data.SQLite.SQLiteException (0x80004005): database is locked
         database is locked
           at System.Data.SQLite.SQLite3.Step (System.Data.SQLite.SQLiteStatement stmt) [0x00088] in <61a20cde294d4a3eb43b9d9f6284613b>:0
           at System.Data.SQLite.SQLiteDataReader.NextResult () [0x0016b] in <61a20cde294d4a3eb43b9d9f6284613b>:0
           at System.Data.SQLite.SQLiteDataReader..ctor (System.Data.SQLite.SQLiteCommand cmd, System.Data.CommandBehavior behave) [0x00090] in <61a20cde294d4a3eb43b9d9f6284613b>:0
           at (wrapper remoting-invoke-with-check) System.Data.SQLite.SQLiteDataReader..ctor(System.Data.SQLite.SQLiteCommand,System.Data.CommandBehavior)
           at System.Data.SQLite.SQLiteCommand.ExecuteReader (System.Data.CommandBehavior behavior) [0x0000c] in <61a20cde294d4a3eb43b9d9f6284613b>:0
           at System.Data.SQLite.SQLiteCommand.ExecuteScalar (System.Data.CommandBehavior behavior) [0x00006] in <61a20cde294d4a3eb43b9d9f6284613b>:0
           at System.Data.SQLite.SQLiteCommand.ExecuteScalar () [0x00006] in <61a20cde294d4a3eb43b9d9f6284613b>:0
           at Marr.Data.QGen.InsertQueryBuilder`1[T].Execute () [0x00046] in M:\BuildAgent\work\5d7581516c0ee5b3\src\Marr.Data\QGen\InsertQueryBuilder.cs:140
           at Marr.Data.DataMapper.Insert[T] (T entity) [0x0005d] in M:\BuildAgent\work\5d7581516c0ee5b3\src\Marr.Data\DataMapper.cs:728
           at NzbDrone.Core.Datastore.BasicRepository`1[TModel].Insert (TModel model) [0x0002d] in M:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\Datastore\BasicRepository.cs:111
           at NzbDrone.Core.Messaging.Commands.CommandQueueManager.Push[TCommand] (TCommand command, NzbDrone.Core.Messaging.Commands.CommandPriority priority, NzbDrone.Core.Messaging.Commands.CommandTrigger trigger) [0x0013d] in M:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\Messaging\Commands\CommandQueueManager.cs:82
           at NzbDrone.Core.Messaging.Commands.CommandQueueManager.Push (System.String commandName, System.Nullable`1[T] lastExecutionTime, NzbDrone.Core.Messaging.Commands.CommandPriority priority, NzbDrone.Core.Messaging.Commands.CommandTrigger trigger) [0x000b7] in M:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\Messaging\Commands\CommandQueueManager.cs:95
           at NzbDrone.Core.Jobs.Scheduler.ExecuteCommands () [0x00043] in M:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\Jobs\Scheduler.cs:42
           at System.Threading.Tasks.Task.InnerInvoke () [0x0000f] in /build/mono/src/mono/external/corert/src/System.Private.CoreLib/src/System/Threading/Tasks/Task.cs:2501
           at System.Threading.Tasks.Task.Execute () [0x00000] in /build/mono/src/mono/external/corert/src/System.Private.CoreLib/src/System/Threading/Tasks/Task.cs:2344
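
     A couple of hedged checks for the "database is locked" loop. SQLite locks are per-file, so the holder is almost always inside the same container (a hung query, or simply very slow storage) rather than another docker. The database path below is the usual Sonarr v2 location under appdata; adjust to your mapping:

         # Which processes have the Sonarr database open?
         fuser -v /mnt/user/appdata/binhex-sonarr/nzbdrone.db
         # Is the database file itself healthy?
         sqlite3 /mnt/user/appdata/binhex-sonarr/nzbdrone.db 'PRAGMA integrity_check;'
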
  21. Recently I upgraded from 6.6.6 to 6.6.7, and after the upgrade I've noticed that the system seems to have taken a performance hit. Loading my dockers takes 2-3 times longer, and web pages hosted via Let's Encrypt take much longer to load. Sometimes I'm unable to load a docker at all without restarting it (also posted on the binhex-sabnzbd page here). So I thought about rolling back to 6.6.6, but the option seems to be missing. I know I've seen it there in past versions, but this time I don't see it; it's almost like instead of upgrading, it overwrote the flash as a new install yet kept my settings. I'm just at a loss here and don't know what to troubleshoot next. Some notes:

      1. My dockers (Sonarr, Radarr, and SABnzbd) periodically report that their databases are locked and they can't write to them. I assume this is because something is pegging the I/O? (See the sketch after this list.)
      2. Ran a RAM test for almost 13 hours: no issues.
      3. Parity check returns 0 errors.
      4. Scrubs of my cache and dockers return 0 errors, as well as a file system check in maintenance mode.
      5. I thought maybe my LSI 9211-8i might be going bad; swapped it (had a spare already on hand), still no change.
      6. Attached diags.
      7. Also want to note that the issues are not just contained to the dockers: randomly my share will be unavailable from my Windows machine for 20-30 seconds while nothing else is running on the server, and the main GUI pages take much longer to load as well, if they load at all; 2-3 refreshes are needed sometimes.

      I would be down to reflash my flash drive, but I'm unsure if that will mess everything up, or can I just reconfigure it without losing the data on the array? I'm just at a loss on what the next step would be, as I'm not all that experienced with unRAID; I've only been a user since ~Aug 2018. tower-diagnostics-20190305-1509.zip
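
     With locked databases plus stalling shares, catching the disk latency during a hang would narrow things down (possibly the same issue later traced to the backplane power in post 18 above). Both tools may need to be installed separately on Unraid:

         # Per-device utilisation and wait times, refreshed every 2 s;
         # a drive pinned near 100 %util during a hang is the bottleneck
         iostat -x 2
         # Or watch which processes are actually generating the I/O
         iotop -o
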
  22. Looking for some help to see if someone can help me pinpoint what is going on. Since updating to 6.6.7 I've noticed that my downloads from SABnzbd have been acting very strangely. It will download something, then Sonarr or Radarr will pause (they give you the gold/yellow option on the item) and say that the download is empty. But when I manually check, sometimes there are still files in the download location, or they will be split up, some in incomplete and some in complete. So I went to check the log for SAB, and I just get a bunch of weird symbols. Here is what my current log shows (hundreds of lines of unreadable binary characters, trimmed here):

         [garbled binary output]
         2019-03-01 13:04:27,871 DEBG 'sabnzbd' stderr output: 2019-03-01 13:04:27,786::WARNING::[assembler:123] Aborted job "NCIS.S13E17.1080p.HDTV.X264-DIMENSION-xpost" because of encrypted RAR file (if supplied, all passwords were tried)

      Also attached my diagnostics log in case that can be of some help. tower-diagnostics-20190301-1350.zip
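
     One quick sanity check on a log that renders like that; the path is a guess at where the binhex container keeps it. If file reports gzip data, the viewer is just showing a rotated, compressed log rather than a corrupted one:

         # Ask what the "unreadable" log actually contains
         file /mnt/user/appdata/binhex-sabnzbd/sabnzbd/logs/sabnzbd.log
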