live4soccer7

Members · Posts: 410

Everything posted by live4soccer7

  1. What would cause it to go into read-only mode? If it is in read-only mode, can I still extract data off it? I do have a backup, but it is from February, so not terribly recent. Still, way better than no backup.
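
     If it will still mount at all, my plan would be something like this (a sketch; /dev/nvme0n1p1 stands in for the actual cache device):

     # Mount explicitly read-only; norecovery skips the journal replay that a
     # read-only device can no longer perform
     mkdir -p /mnt/rescue
     mount -o ro,norecovery /dev/nvme0n1p1 /mnt/rescue

     # Copy everything to a healthy disk, preserving permissions and timestamps
     rsync -a /mnt/rescue/ /mnt/disk1/cache-rescue/

     # Alternatively, image the whole partition and work from the copy
     dd if=/dev/nvme0n1p1 of=/mnt/disk1/cache.img bs=1M status=progress
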
  2. Could the "failed" status have anything to do with the unmountable drive/filesystem? I wouldn't think so, but figured I would ask. The frontend says it is healthy, so I'm not sure why that would be.
  3. smartctl 7.3 2022-02-28 r5338 [x86_64-linux-5.15.46-Unraid] (local build)
     Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

     === START OF INFORMATION SECTION ===
     Model Number:                       Samsung SSD 980 PRO 2TB
     Serial Number:                      S6B0NG0R405728R
     Firmware Version:                   2B2QGXA7
     PCI Vendor/Subsystem ID:            0x144d
     IEEE OUI Identifier:                0x002538
     Total NVM Capacity:                 2,000,398,934,016 [2.00 TB]
     Unallocated NVM Capacity:           0
     Controller ID:                      6
     NVMe Version:                       1.3
     Number of Namespaces:               1
     Namespace 1 Size/Capacity:          2,000,398,934,016 [2.00 TB]
     Namespace 1 Utilization:            1,496,877,862,912 [1.49 TB]
     Namespace 1 Formatted LBA Size:     512
     Namespace 1 IEEE EUI-64:            002538 b41150549f
     Local Time is:                      Thu Jun 30 09:39:16 2022 PDT
     Firmware Updates (0x16):            3 Slots, no Reset required
     Optional Admin Commands (0x0017):   Security Format Frmw_DL Self_Test
     Optional NVM Commands (0x0057):     Comp Wr_Unc DS_Mngmt Sav/Sel_Feat Timestmp
     Log Page Attributes (0x0f):         S/H_per_NS Cmd_Eff_Lg Ext_Get_Lg Telmtry_Lg
     Maximum Data Transfer Size:         128 Pages
     Warning  Comp. Temp. Threshold:     82 Celsius
     Critical Comp. Temp. Threshold:     85 Celsius

     Supported Power States
     St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
      0 +     8.49W       -        -    0  0  0  0        0       0
      1 +     4.48W       -        -    1  1  1  1        0     200
      2 +     3.18W       -        -    2  2  2  2        0    1000
      3 -   0.0400W       -        -    3  3  3  3     2000    1200
      4 -   0.0050W       -        -    4  4  4  4      500    9500

     Supported LBA Sizes (NSID 0x1)
     Id Fmt  Data  Metadt  Rel_Perf
      0 +     512       0         0

     === START OF SMART DATA SECTION ===
     SMART overall-health self-assessment test result: FAILED!
     - available spare has fallen below threshold
     - media has been placed in read only mode

     SMART/Health Information (NVMe Log 0x02)
     Critical Warning:                   0x09
     Temperature:                        36 Celsius
     Available Spare:                    0%
     Available Spare Threshold:          10%
     Percentage Used:                    56%
     Data Units Read:                    3,017,526,991 [1.54 PB]
     Data Units Written:                 2,839,436,501 [1.45 PB]
     Host Read Commands:                 5,464,158,312
     Host Write Commands:                4,063,944,349
     Controller Busy Time:               45,841
     Power Cycles:                       457
     Power On Hours:                     4,340
     Unsafe Shutdowns:                   28
     Media and Data Integrity Errors:    9,994
     Error Information Log Entries:      9,994
     Warning  Comp. Temperature Time:    0
     Critical Comp. Temperature Time:    0
     Temperature Sensor 1:               36 Celsius
     Temperature Sensor 2:               49 Celsius

     Error Information (NVMe Log 0x01, 16 of 64 entries)
     No Errors Logged
  4. Is there a way to check for previous notifications in Unraid? I may be able to see if something filled it to the "gills" last night. If not, there should be a lot of free space.
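
     In the meantime, a couple of quick checks from the console (assuming the pool still mounts; paths are the stock Unraid ones):

     # Current free space on the cache pool
     df -h /mnt/cache

     # Anything cache/NVMe related in the system log from last night
     grep -iE 'cache|nvme' /var/log/syslog | tail -n 50
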
  5. Phase 1 - find and verify superblock...
     Phase 2 - using internal log
             - zero log...
             - scan filesystem freespace and inode maps...
     clearing needsrepair flag and regenerating metadata
             - found root inode chunk
     Phase 3 - for each AG...
             - scan and clear agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 0
             - agno = 2
             - agno = 3
             - agno = 1
     Phase 5 - rebuild AG headers and trees...
             - reset superblock...
     Phase 6 - check inode connectivity...
             - resetting contents of realtime bitmap and summary inodes
             - traversing filesystem ...
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...
     xfs_repair: Flushing the data device failed, err=61!
     Cannot clear needsrepair due to flush failure, err=61.
     xfs_repair: Flushing the data device failed, err=61!
     fatal error -- File system metadata writeout failed, err=61. Re-run xfs_repair.

     I ran it without any flags a couple of times and got this. It's always possible a log or something filled it up and caused this problem.
  6. It is a 2TB NVMe drive. It should have about 1TB free. Should I run it with any flags?
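
     For reference, the usual xfs_repair progression as I understand it (a sketch; /dev/nvme0n1p1 stands in for the cache device):

     # Dry run: report problems without modifying anything
     xfs_repair -n /dev/nvme0n1p1

     # Actual repair with no flags; refuses to run if the log is dirty
     xfs_repair /dev/nvme0n1p1

     # Last resort: zero the dirty log, accepting possible loss of the most
     # recent metadata changes
     xfs_repair -L /dev/nvme0n1p1
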
  7. Phase 1 - find and verify superblock...
     Phase 2 - using internal log
             - zero log...
     ALERT: The filesystem has valuable metadata changes in a log which is being
     destroyed because the -L option was used.
             - scan filesystem freespace and inode maps...
     clearing needsrepair flag and regenerating metadata
     agi unlinked bucket 23 is 73394135 in ag 1 (inode=1147135959)
     sb_icount 1021248, counted 1021376
     sb_ifree 8167, counted 6971
     sb_fdblocks 164425577, counted 166053595
             - found root inode chunk
     Phase 3 - for each AG...
             - scan and clear agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 1
             - agno = 3
             - agno = 2
             - agno = 0
     clearing reflink flag on inodes when possible
     Phase 5 - rebuild AG headers and trees...
             - reset superblock...
     Phase 6 - check inode connectivity...
             - resetting contents of realtime bitmap and summary inodes
             - traversing filesystem ...
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     disconnected inode 1147135959, moving to lost+found
     Phase 7 - verify and correct link counts...
     Maximum metadata LSN (72:1879323) is ahead of log (1:2).
     Format log to cycle 75.
     xfs_repair: Flushing the data device failed, err=61!
     Cannot clear needsrepair due to flush failure, err=61.
     xfs_repair: Flushing the data device failed, err=61!
     fatal error -- File system metadata writeout failed, err=61. Re-run xfs_repair.
  8. I woke up this morning to Docker and the VM manager being down. The cache drive (XFS filesystem), which these files are stored on, is now "unmountable" (wrong or no file system). I put the array into maintenance mode and attempted a repair; you can see the results below. My backup of it is from a few months ago, so I can recover almost all of what was on it, but if possible I would definitely like to get this functional again. On the "Main" menu it reads out the temp, says active, and says "Healthy". I'm not sure if these are accurate or just the last readings of the disk. I was on 6.10.0. I did push the update today (the disk failed before this) to 6.10.3. Any help would be greatly appreciated. I use the VM for work and have lots of other things down that are somewhat important and time sensitive.

     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
             - zero log...
     ALERT: The filesystem has valuable metadata changes in a log which is being
     ignored because the -n option was used. Expect spurious inconsistencies
     which may be resolved by first mounting the filesystem to replay the log.
             - scan filesystem freespace and inode maps...
     agi unlinked bucket 23 is 73394135 in ag 1 (inode=1147135959)
     sb_icount 1021248, counted 1021376
     sb_ifree 8167, counted 6971
     sb_fdblocks 164425577, counted 166053595
             - found root inode chunk
     Phase 3 - for each AG...
             - scan (but don't clear) agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 1
             - agno = 0
             - agno = 2
             - agno = 3
     No modify flag set, skipping phase 5
     Phase 6 - check inode connectivity...
             - traversing filesystem ...
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     disconnected inode 1147135959, would move to lost+found
     Phase 7 - verify link counts...
     would have reset inode 1147135959 nlinks from 0 to 1
     No modify flag set, skipping filesystem flush and exiting.
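
     As I read it, the -n output's suggestion to "first mount the filesystem to replay the log" amounts to something like this (a sketch; the device name is a placeholder):

     # A successful mount replays the XFS journal; a clean unmount leaves the log empty
     mkdir -p /mnt/test
     mount /dev/nvme0n1p1 /mnt/test
     umount /mnt/test

     # Then re-run the dry-run check
     xfs_repair -n /dev/nvme0n1p1
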
  9. That did the trick. There was quite a bit more in the .conf file after reinstalling it. The config page for the docker also didn't show the PUID/GUID before, either. A few changes. I've had the same install since probably not long after you initially released the docker.
  10. @spants Yes, they are all there. I have done everything suggested in the forum/thread here. I'm quite stumped on this. I'm thinking of nuking the whole thing: deleting the image and the appdata folder for MQTT and installing again from scratch. I don't think there is much in the folder itself that would need to be saved. Without persistence, I think it would just be the mosquitto.conf and the passwords.mqtt. I could manually open those with nano and copy the contents.
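
     Or just copy them out first, something like this (the appdata path is from memory and may differ on other setups):

     # Save the only two files worth keeping before deleting the appdata folder
     mkdir -p /boot/mqtt-backup
     cp /mnt/user/appdata/mqtt/mosquitto.conf /mnt/user/appdata/mqtt/passwords.mqtt /boot/mqtt-backup/
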
  11. After updating Unraid to 6.10 today, I was unable to start MQTT.

     1653129413: mosquitto version 1.4.8 (build date 2020-01-27 00:25:20+0000) starting
     1653129413: Config loaded from /config/mosquitto.conf.
     1653129413: Error: Unable to open pwfile "/config/passwords.mqtt".
     1653129413: Error opening password file "/config/passwords.mqtt".
     1653129593: Error: Unable to open log file /config/log/mosquitto.log for writing.
     1653129593: mosquitto version 1.4.8 (build date 2020-01-27 00:25:20+0000) starting
     1653129593: Config loaded from /config/mosquitto.conf.
     1653129593: Error: Unable to open pwfile "/config/passwords.mqtt".
     1653129593: Error opening password file "/config/passwords.mqtt".

     I have probably 25 other dockers that didn't have an issue. I've had this MQTT docker installed for years and never ran into this. I checked the PUID/GUID in the settings and they're correct. I did attempt a chown -R, but that was unsuccessful. I am also unable to access the mqtt folder through the Windows network share from a remote machine on the network; I can access any other folders. I find this a bit strange. Any ideas? I have a lot of things that run through MQTT, so it is quite important to get it back up and running.
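
     For completeness, the ownership reset I attempted looked roughly like this, assuming the appdata share is at /mnt/user/appdata/mqtt and the container runs as the stock Unraid nobody:users (99:100):

     # Give the container user ownership of its config and password file
     chown -R 99:100 /mnt/user/appdata/mqtt
     chmod -R u+rwX,g+rX /mnt/user/appdata/mqtt

     # Sanity check from the host that the files are readable
     ls -l /mnt/user/appdata/mqtt/mosquitto.conf /mnt/user/appdata/mqtt/passwords.mqtt
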
  12. I have haproxy on pfSense and am trying to set up Authelia. It is a real bear to get working. I've been following this: https://github.com/authelia/authelia/issues/2696 I know I'm extremely close to having it work. I attempted this about a year or more ago and couldn't get it going. I came across the above post a few days ago and thought I would give it another "whack".
  13. Has anyone ever seen this before? It has stumped me for a couple of days now:

     time="2022-03-13T09:26:36-07:00" level=error msg="Scheme of target URL //synoscgi.sock/socket.io/?SynoToken=undefined&UserType=guest&EIO=3&transport=polling&t=N-46b_W must be secure since cookies are only transported over a secure connection for security reasons" method=HEAD path=/api/verify remote_ip=147.185.124.196 stack="github.com/authelia/authelia/v4/internal/handlers/handler_verify.go:459 VerifyGet.func1
     github.com/authelia/authelia/v4/internal/middlewares/authelia_context.go:52 AutheliaMiddleware.func1.1
     github.com/fasthttp/[email protected]/router.go:414 (*Router).Handler
     github.com/authelia/authelia/v4/internal/middlewares/log_request.go:14 LogRequestMiddleware.func1
     github.com/valyala/[email protected]/server.go:2341 (*Server).serveConn
     github.com/valyala/[email protected]/workerpool.go:224 (*workerPool).workerFunc
     github.com/valyala/[email protected]/workerpool.go:196 (*workerPool).getCh.func1
     runtime/asm_amd64.s:1581 goexit"

     I know it is a configuration issue and not the actual docker, but I was hoping for any kind of additional clues at all.
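
     From what I can tell, the error means Authelia doesn't believe the original request arrived over HTTPS. On the haproxy side that usually comes down to forwarding the scheme, roughly like this (a sketch; the frontend name is a placeholder, not my exact config):

     frontend https-in
         # Authelia rejects verify requests unless the forwarded scheme is https
         http-request set-header X-Forwarded-Proto https
         http-request set-header X-Forwarded-Host %[req.hdr(Host)]
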
  14. Has anyone successfully been able to run DSM 7.01? I have tried and tried without success. The build supposedly happens successfully, but I'm not convinced. Yes, I cannot get to the GUI via the IP, nor can I ping it, but I'm not convinced everything else went right up to that point. I think that perhaps some of the "extensions" needed are not available, and hence something needs to be changed in the XML. I have tried every single bus type, and tried USB, with this same result below. The other reason I think the image is not properly being added to the "media" is that the allocation only shows 4k. Any ideas?
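
     One way I'm sanity-checking whether the installer ever wrote to the vdisk (the path is a placeholder for the actual image):

     # Virtual size vs. actual allocation; near-zero allocation means nothing was installed
     qemu-img info /mnt/user/domains/dsm/vdisk1.img
     du -h /mnt/user/domains/dsm/vdisk1.img
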
  15. I just turned it off for now since I'm not using it at the moment. I do plan to use it soon, so I will check on this then. Thank you.
  16. Is it normal for this docker to create entries in the log every few seconds or so? I just worry about the log size and if there is a possible issue causing this.
  17. Yeah, there are many different ways you can configure them. I looked at the photos/configs when I initially got the unit and didn't see anything referencing connecting the two IOM modules on the same shelf together.
  18. That's what I had originally thought as well, but I had read a thread somewhere saying someone connected the two IOM6 modules. There were no details or additional information, so he/she may very well be wrong. I figured I would look into it and see if anyone knew, or if I could find supporting documentation. @JorgeB Thanks for always answering pretty much all my questions lately. I've just been doing a lot of restructuring and updating decade-old equipment from when I first got started with Unraid.
  19. With the NetApps, they have two IOM6 modules. I currently have an HBA that is connected to the top module's "square" marked QSFP+ port (I think that's the cable; my memory can't hold everything, lol). That's the only cable I have hooked to the whole thing. Should I then connect the top module's "circle" port to the bottom module's "square" port? If so, will this increase write speed to the array/disks (pooled devices, not in the Unraid array)? Or is this more of a redundancy thing? Or none of the above?
  20. Yes, I completely agree. I am mounting the shares in the client's fstab.
  21. I have a couple of Windows machines and the transfer speeds to the SMB shares are always 100MB/s+. I set up NFS and also SMB on Ubuntu, but the speeds are roughly half that of a Windows machine. From Windows, I'm basically saturating gigabit speeds. It's not a bad cable, as I have multiple Ubuntu installs that all have the same issue. Surely I need to add a parameter to the fstab file. Any ideas, or is anyone willing to share the fstab line for their share (NFS or CIFS) that gets gigabit transfer speeds?
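
     To be concrete, this is the shape of fstab entry I mean; the server IP, share names, and options are examples to tune, not a known-good fix:

     # NFS: large rsize/wsize often helps sequential throughput (set vers to match the server)
     192.168.1.10:/mnt/user/media  /mnt/media  nfs  rw,hard,vers=3,rsize=1048576,wsize=1048576  0  0

     # CIFS/SMB: force SMB3 and loosen caching for better sequential speed
     //192.168.1.10/media  /mnt/media-smb  cifs  credentials=/etc/smb-credentials,vers=3.0,cache=loose,uid=1000,gid=1000  0  0
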
  22. It is the SAS2-EL2. It was very difficult to see, and even then it was a blurry image through my phone camera.
  23. I just checked the global settings and, while you can set it to SAT, you cannot specify "12" in the global setting. My guess is that it wouldn't work properly. Luckily, this will get fixed in the next release.
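
     For context, the per-device equivalent on the command line, where "12" selects the 12-byte SCSI/ATA pass-through (the drive letter is a placeholder):

     # Read SMART data through a SAT layer using 12-byte pass-through commands
     smartctl -d sat,12 -a /dev/sdX
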