seestray

Everything posted by seestray

  1. Ran 4 passes of memtest 6.2 on a different pair of 2x16GB sticks in the same motherboard slots, and they passed. Rebooted the server and the parity check completed with no errors found. Anything else I should check/run to catch errors that the bad memory could have caused? (As an aside, I'm running Memtest86+ 7 with only the suspect DIMM in a different PC. I'll leave it going for a while longer, but it's done 4 passes without an error; I'll let it run until I need that PC tomorrow. Is it likely that it was just a connection issue that was solved by removing/reinstalling it in a different PC?)
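     As for catching filesystem-level damage the bad DIMM could have left behind, a read-only XFS check of each data disk is one option. A minimal sketch, assuming XFS-formatted array disks and the array started in Maintenance mode; the md device naming varies between Unraid versions, and the GUI's per-disk Check Filesystem Status button runs the same check:
     # No-modify check; repeat for each data device, adjusting the md number.
     xfs_repair -n /dev/md1
     xfs_repair -n /dev/md2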
  2. Thanks for the reply JorgeB, I've been running memtest this afternoon and it looks like one of the sticks is bad. Swapped in a different pair from the desktop and am running memtest on them to rule out the board or something else. The ipvlan change is on my to-do list; I didn't want to change too many things at once with the upgrade, since things looked to be stable before this.
  3. Updated to 6.12.8 about two weeks ago, came back from a week away and my shares have disappeared. (Most docker containers have either stopped due to errors or aren't running correctly because they have no disk.) Browsing from Main --> Disk shares, the files all appear to be present; under Shares, they are all missing. Hardware: i3-12100, Gigabyte Technology Co., Ltd. B660M DS3H AX, 16GB DDR4, LSI SAS2008. Found a pair of similar threads, but they are for earlier versions. Solving it with a reboot doesn't seem to be a permanent fix, though, and I should probably try to understand why it's happening first. Diagnostics attached; anything else to check/get before I reboot the box? tower-diagnostics-20240320-0842.zip
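     A quick way to tell whether the user-share mount itself has dropped (rather than the data) is to compare the disk mounts with /mnt/user from a terminal; a minimal sketch, assuming default Unraid mount points:
     # If the individual disks list files but /mnt/user is empty or gone,
     # the shfs user-share mount has dropped rather than the data itself.
     ls /mnt/disk1 /mnt/disk2
     ls /mnt/user
     df -h | grep -E 'disk|user'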
  4. Went from 6.11.5 --> 6.12.8 with a Realtek RTL8125, macvlan, and a unifi docker that's on 8.0.24. An hour in and all appears to be working at first glance - we'll see over the next couple of weeks whether it remains stable.
  5. Thought I would add a little documentation. I did the cache swap procedure to replace/add a second cache drive, and it caused an issue where paperless would not start after reinstalling all the dockers (it was the only one that had an issue); nothing changed with my config. (Don't know if it is fully related, but either way, this seems like the best spot to add it in case someone else has the issue.) From the startup log, this was the only thing that stood out:
     django.db.utils.IntegrityError: CHECK constraint failed: archive_serial_number
     Applying documents.1029_alter_document_archive_serial_number...
     Paperless-ngx docker container starting...
     I don't use the ASN feature, but since I didn't have anything else to go on: it turns out one of my documents had the ASN set to '-2', and this was enough to prevent it from starting. To fix:
     Open a terminal:
     cd /mnt/cache/appdata/paperless-ngx/data
     ls -l        (make note of the current permissions for the database)
     chmod 777 db.sqlite3        (change to allow all to write - we will change it back at the end)
     Open the documents_document table in DB Browser for SQLite, look for an archive_serial_number that is out of range (negative or huge), and set it to null or another valid value.
     Back in the terminal:
     chmod 644 db.sqlite3
     At this point I was able to start paperless-ngx again.
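     For anyone without DB Browser handy, the same fix can in principle be done with the sqlite3 command-line tool; a minimal sketch using the same path as above, with the container stopped (the WHERE range is an assumption about what counts as valid for your data, so inspect before updating):
     # Find documents whose ASN is out of range (negative in my case).
     sqlite3 /mnt/cache/appdata/paperless-ngx/data/db.sqlite3 \
       "SELECT id, archive_serial_number FROM documents_document WHERE archive_serial_number < 0;"
     # Null out the offending values so the CHECK constraint can pass.
     sqlite3 /mnt/cache/appdata/paperless-ngx/data/db.sqlite3 \
       "UPDATE documents_document SET archive_serial_number = NULL WHERE archive_serial_number < 0;"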
  6. So if they are already organized, I would scan each batch to the watched folder, go to the paperless inbox view and tag that batch, then scan the next batch, tag it, and repeat. Over a couple of evenings I think I scanned in ~500 docs, which got most of my semi-recent stuff off my desk.
  7. I have it set up so a 'new' tag is assigned to everything as it's imported (and a view that shows only the 'new' items by default). I then manually review and remove the new tag. You can also edit a batch of them all at the same time from the list view - no need to do them individually if you know they are all the same. I've found the automatic/AI tagging to be pretty decent after a few rounds; it seems to get the tags right more often than not, but you need to do import/tagging in batches - it only learns after you correct things, so items in the 'new' queue won't take advantage of the AI learnings from earlier in the batch.
  8. +1, saw the same thing when I attempted to add a second cache drive today with 6.11.5. Glad that we should see a fix soon.
  9. I'm running two instances of pi-hole, one in unraid and a second on a pi-zero w. The pi is generally reliable, but when a wyze cam goes on the fritz and starts firing a couple of million DNS requests at google, it will crash. Having both the pi and a docker version lets me update without having to worry about any of the clients having DNS issues (or having to rely on an outside source).
  10. FYI - mass downloading of old episodes is an issue, so update your v2 instance asap. Had this issue this past week on a couple of shows - over 500GB in unexpected downloads of un-monitored seasons. I've added a quota that resets daily in SABnzbd; hopefully that will reduce the chances of automation gone wild in the future - likely a good idea to add one if you don't already have one in place. Sonarr version: 2.0.0.5338 (issue occurs); v2 master: 2.0.0.5344 (issue corrected). More info on the issue: https://forums.sonarr.tv/t/important-zero-episodes-and-old-episodes-reappearing-as-monitored-and-potentially-downloaded-unintentionally/25023 https://old.reddit.com/r/sonarr/comments/fhj8z9/important_zero_episodes_and_old_episodes/ https://github.com/Sonarr/Sonarr/issues/3619
  11. But if you look at the prior releases in the -latest branch, it's the same comment/update for 5.12.35-ls49 (5 days ago), 5.12.35-ls48 (12 days ago), 5.12.35-ls47 (18 days ago), 5.12.35-ls46 (26 days ago), etc. So please correct me if I'm wrong, but it doesn't look like the updates are for something ubiquiti has changed.
  12. Interesting, thanks for the info on the Seagate attributes. If anything, reporting raw numbers completely differently than all other manufacturers makes me want to avoid the brand going forward.
  13. Hi unraid crew, Two/three days ago my 8TB Seagate (sdc) had 237 read errors; unraid disabled the drive (red X) and emulated the contents. Then yesterday I attempted to do some investigation and was unable to get the drive to spin up or download SMART data for it. Once the array was stopped, sdc was no longer an option in the list of devices. I downloaded the diagnostics, but the SMART data was predictably missing for the drive; the attached SMART details are from after the reboot. Powered down the box today, re-seated the drive cables (just in case, but I don't think that was the issue), then booted up; the drive was visible again in the device list, so I started the array and a data rebuild is in progress.
      Hardware: Dell T110; parity=sdf WDC_WD80EMAZ 8TB; disk1=sdc ST8000DM004 8TB; disk2=sdd TOSHIBA_DT01ACA300 3TB; disk3=sde WDC_WD20EARS 2TB; cache=sdb Crucial_CT120; 16GB RAM; unRaid 6.7.2.
      Looking at some other threads on the forum, I'm a little concerned about the SMART numbers for the drive - the read/seek/timeout/ECC numbers seem really high. Should I be replacing it asap, or, assuming it rebuilds without error, should I be ok for a while?
      === START OF INFORMATION SECTION ===
      Model Family:     Seagate Barracuda Compute
      Device Model:     ST8000DM004-2CX188
      === START OF READ SMART DATA SECTION ===
      SMART overall-health self-assessment test result: PASSED
      Vendor Specific SMART Attributes with Thresholds:
      ID# ATTRIBUTE_NAME          FLAGS   VALUE WORST THRESH FAIL RAW_VALUE
        1 Raw_Read_Error_Rate     POSR--  080   064   006    -    94092324
        3 Spin_Up_Time            PO----  092   091   000    -    0
        4 Start_Stop_Count        -O--CK  099   099   020    -    1539
        5 Reallocated_Sector_Ct   PO--CK  100   100   010    -    0
        7 Seek_Error_Rate         POSR--  083   060   045    -    221147486
        9 Power_On_Hours          -O--CK  076   076   000    -    21404 (210 195 0)
       10 Spin_Retry_Count        PO--C-  100   100   097    -    0
       12 Power_Cycle_Count       -O--CK  100   100   020    -    55
      183 Runtime_Bad_Block       -O--CK  100   100   000    -    0
      184 End-to-End_Error        -O--CK  100   100   099    -    0
      187 Reported_Uncorrect      -O--CK  100   100   000    -    0
      188 Command_Timeout         -O--CK  094   057   000    -    214751641653
      189 High_Fly_Writes         -O-RCK  100   100   000    -    0
      190 Airflow_Temperature_Cel -O---K  073   042   040    -    27 (Min/Max 25/27)
      191 G-Sense_Error_Rate      -O--CK  100   100   000    -    0
      192 Power-Off_Retract_Count -O--CK  100   100   000    -    576
      193 Load_Cycle_Count        -O--CK  099   099   000    -    3228
      194 Temperature_Celsius     -O---K  027   058   000    -    27 (0 16 0 0 0)
      195 Hardware_ECC_Recovered  -O-RC-  080   064   000    -    94092324
      197 Current_Pending_Sector  -O--C-  100   100   000    -    0
      198 Offline_Uncorrectable   ----C-  100   100   000    -    0
      199 UDMA_CRC_Error_Count    -OSRCK  200   200   000    -    0
      240 Head_Flying_Hours       ------  100   253   000    -    15806 (156 110 0)
      241 Total_LBAs_Written      ------  100   253   000    -    61074957028
      242 Total_LBAs_Read         ------  100   253   000    -    402051143430
                                  ||||||_ K auto-keep
                                  |||||__ C event count
                                  ||||___ R error rate
                                  |||____ S speed/performance
                                  ||_____ O updated online
                                  |______ P prefailure warning
      tower-diagnostics-20200108-0319(anon).zip tower-smart-20200108-2119.zip
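     On the scary-looking raw values: Seagate is widely reported to pack attributes 1 (Raw_Read_Error_Rate), 7 (Seek_Error_Rate) and 195 into a 48-bit field where the upper 16 bits are the error count and the lower 32 bits are the total operation count, so the huge numbers are mostly operation counters rather than errors. A minimal sketch of decoding the values above under that assumption:
     # Decode a Seagate-style packed raw value (assumed layout:
     # upper 16 bits = errors, lower 32 bits = total operations).
     raw=221147486   # Seek_Error_Rate raw value from the report above
     echo "errors:     $(( raw >> 32 ))"
     echo "operations: $(( raw & 0xFFFFFFFF ))"
     # 221147486 < 2^32, so this decodes to 0 seek errors over ~221M seeks;
     # Raw_Read_Error_Rate (94092324) decodes to 0 read errors the same way.
     Under that reading, the attributes usually watched (5, 187, 197, 198) are all zero here.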
  14. The extended SMART test came back clean, so in theory the drive is ok. I'm thinking these are the next steps: 1) swap the cables between a pair of drives, 2) rebuild the drive, 3) monitor for more errors - if they happen, then I should be able to narrow things down. Does that make sense?
  15. So I woke up this morning to a pair of notices:
      Array has 1 disk with read errors
      Alert [TOWER] - Disk 1 in error state (disk dsbl)
      I've got a few lines in the ST8000DM004 log with different sector numbers, and they are the red-highlighted ones:
      May 1 03:58:21 Tower kernel: print_req_error: I/O error, dev sde, sector 2256536312
      The Dashboard shows 144 errors for this disk - the others are all at 0. Checked the warranty, and it's still within a couple of months of the end date (so if it's the disk, I'd like it replaced before it runs out). Ran a short SMART test and it passed. Is there anything I should do before running the long SMART test? The polling time is 952 minutes, so if there's something I should do first, I'd like to do that before the 15+ hours needed to run it. tower-diagnostics-20190501-0951.zip
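     One thing worth doing beforehand is making sure the disk can't spin down mid-test, since a spin-down will abort an extended self-test. For reference, running and checking the extended test from a terminal looks roughly like this (a sketch assuming the disk is still /dev/sde; the GUI's self-test button does the same thing):
     # Start the extended (long) self-test; it runs inside the drive in the background.
     smartctl -t long /dev/sde
     # Check the estimated polling time and, afterwards, the self-test log.
     smartctl -c /dev/sde
     smartctl -l selftest /dev/sde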
  16. Thanks for the reply trurl. I read somewhere that adding the cache last would make moving things over a bit faster, but in hindsight your order really makes sense - so for anyone else reading this: move things over, then add the cache, then lastly the dockers/VMs. Went back through some of the shares and changed most of them to cache=no, leaving the incoming/processing ones as prefer (my thinking is that they are temp files and don't really need the parity protection). So I thought I'd dig in: shut down the two running dockers, went to Settings --> Docker --> Enable Docker, changed it to No, Apply/Done. Launched an ssh session through putty, opened up Midnight Commander (mc), then consolidated the system folders you mentioned above off the spinning drives and onto the cache. While I was in there, I also consolidated a couple of user shares (which were split due to the order that I added drives initially). Back to Settings --> Docker --> Enable Docker, changed it to Yes, Apply/Done. Reboot. Entered the encryption key, and all of the dockers came up automatically on boot (I was ready to use the previous apps, but didn't need to). Edit: this approach mucked up the ubuntu VM; reset the VM through the GUI - Settings --> VM Manager --> changed Enable VMs = No, deleted the libvirt file (via checkbox), re-enabled, and we are working again. Running Fix Common Problems from the CA, it found a couple of things that should have been changed too. I appreciate the help today, thanks!
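     For anyone following along, the manual consolidation step (done above with Midnight Commander) can also be sketched with rsync; the disk paths are examples, and Docker/VMs should be disabled first:
     # With Docker and VMs disabled, copy system/appdata off the array disk
     # onto the cache, verify the copies, then remove the array-side duplicates
     # so the shares no longer span both locations.
     rsync -avh /mnt/disk1/system/  /mnt/cache/system/
     rsync -avh /mnt/disk1/appdata/ /mnt/cache/appdata/
     rm -r /mnt/disk1/system /mnt/disk1/appdata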
  17. knew I was forgetting something, here's the diagnostics tower-diagnostics-20190417-0945.zip
  18. My search-fu isn't working this morning - this seems like more of a general config issue than a docker-specific one, but feel free to move it if it better belongs there. Posted over on r/unRaid, but figured I should ask the experts here as I didn't get any definitive answers. Powered down (via the UI) and physically moved the box yesterday. On power-up, the sab, CP, radarr dockers and the ubuntu VM are no longer in the dashboard/docker/VM menus (a couple - unifi, plex, emby - remained); the parity check ran and completed without error. Unraid 6.6.6, with all disks using encryption. What happened?
      Background/setup: recently started using unRaid after a trial. What I did was similar to the following - I can't remember the exact order of operations, but this is pretty close if my memory is accurate:
      1) moved data from the windows box via network across 3 data drives,
      2) next set up the unifi docker (which remained),
      3) set up the sab/CP/radarr dockers (this might have been after #4) - they all disappeared,
      4) added the parity drive, let it calculate,
      5) set up some of the remaining dockers (syncthing, dupeguru, and a couple others - all of these disappeared too), and an ubuntu vm (this might have been after #6),
      6) then added a 120g cache.
      I have a feeling it has to do with the order of setting them up and the cache drive - I can browse the cache drive and see the docker folders there, so why aren't they showing in the UI, and where do I need to look to fix it? I think this might be relevant (they've been set this way for at least a few days before the reboot, but might not have been that way initially):
      the \system share is set to high-water, auto split top level only, all disks included, no excluded disks, use cache=prefer
      the \appdata share is set to high-water, auto split top level only, all disks included, no excluded disks, use cache=prefer
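     Since appdata/system are set to cache=prefer but were created before the cache existed, the docker image and container appdata may be split between an array disk and the cache. A quick way to see where everything physically lives (a sketch assuming the default docker.img/libvirt.img names and mount points):
     # Show every copy of the docker and libvirt images across devices.
     find /mnt/disk* /mnt/cache -maxdepth 3 \( -name 'docker.img' -o -name 'libvirt.img' \) 2>/dev/null
     # Compare where each share's contents actually sit.
     ls /mnt/disk*/appdata /mnt/cache/appdata 2>/dev/null
     ls /mnt/disk*/system  /mnt/cache/system  2>/dev/null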