smakovits

Members
  • Posts

    872
  • Joined

  • Last visited

Everything posted by smakovits

  1. Yeah, so the question is: should it be possible as an unassigned device, or do I want to put it in another system? As an unassigned drive it only shows 802GB as its size vs 3TB, and it is not mounted or shown as mountable. The only immediate option in the UI is format, which I will not do. I assume there might be a way to access the data via CLI, or do we just consider it dead?
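     For the CLI route, a read-only mount is roughly what I had in mind (a minimal sketch; /dev/sdX1 is a placeholder I would first confirm with lsblk):

        mkdir -p /mnt/rescue
        # mount read-only so nothing is written to the failing disk
        mount -t reiserfs -o ro /dev/sdX1 /mnt/rescue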
  2. Mounted as USB, but I believe that limits the SMART capabilities. I can try other things if needed.

     smartctl -H /dev/sdr
     smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.10.28-Unraid] (local build)
     Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

     === START OF READ SMART DATA SECTION ===
     SMART overall-health self-assessment test result: FAILED!
     Drive failure expected in less than 24 hours. SAVE ALL DATA.
     Failed Attributes:
     ID# ATTRIBUTE_NAME        FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
       5 Reallocated_Sector_Ct 0x0033 001   001   005    Pre-fail Always  FAILING_NOW 2005
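     If the USB bridge is what is hiding attributes, forcing SAT pass-through might give the full report (a sketch; the device letter is just what mine currently enumerates as):

        # ask smartctl to tunnel ATA commands through the USB-SATA bridge
        smartctl -d sat -a /dev/sdr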
  3. This perhaps is a misunderstanding. The drive that is now XFS is the new 8TB drive I put in to replace the failing 3TB drive. The failed/failing drive is still reiserfs and has my data; it is simply sitting on the counter. The 8TB drive is XFS, new and empty, and never had anything on it. All of which makes me think maybe I try USB mounting and start copying folders. Question is, is this where I use reiserfsck, knowing unbalance was not cutting it prior to swapping the disks?
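     If reiserfsck is the move, I assume the first step is the read-only check, something like this (the device name is a placeholder, and it would only be run with the filesystem unmounted):

        # consistency check only; no repairs are attempted at this stage
        reiserfsck --check /dev/sdX1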
  4. As a side topic: I thought we want XFS these days. I know I started going XFS for that reason with new disks. But this now makes me question my approach.
  5. I'll try this to get my most critical data off first and see where it gets me.
  6. Are you suggesting there is corruption restored to the new disk, or that it is better off since I formatted it for the new disk? As for getting the original data back, can we do a bit-level recovery if I mount it as an external USB? They are CRC rewrites and not a clicking disk, so not sure what is possible. I know when I was trying to get it all off with unbalance, it started at about 30 hours and 192 errors, and when I stopped it, it was 50 hours in with 75 hours remaining and over 1200 errors, so not sure what is possible with it.
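     For the bit-level route I was picturing GNU ddrescue, roughly like this (a sketch; the device and the staging paths are placeholders for wherever I would image to):

        # pass 1: clone everything that reads cleanly, logging bad spots in the map file
        ddrescue -d /dev/sdX /mnt/disks/rescue/failing.img /mnt/disks/rescue/failing.map
        # pass 2: go back and retry the bad areas a few times
        ddrescue -d -r3 /dev/sdX /mnt/disks/rescue/failing.img /mnt/disks/rescue/failing.map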
  7. Is it enough to do it as an external USB mount, or do I want it as a SATA mount?
  8. Since the drive is empty and new, are you suggesting changing the new drive back? Or are you suggesting putting the failed drive back instead? Or are you suggesting something completely different? I am game to try; I just need to understand the steps to try what you described.
  9. I had a failing disk; all parity was good. Prior to replacing it, I thought I would be smart and actually move the data off the disk with unbalance. As the time kept going up, I realized the disk was gone, so I stopped the process and stopped the array. I replaced the failed disk with a new disk and re-built the drive. However, the disk is empty. Did I screw something up in my process?

     If I think about the steps I did, the only thing I did extra was switch from reiserfs to XFS. Did this recovery drop all the data? I know it formatted, but I did not think about it as anything different from any other time. As I type, I am thinking I jumped the gun on my conversion: I should have rebuilt as reiserfs, moved the data off, and then gone XFS. Is this correct? Is this where I went wrong?

     Assume there is no going back; some 2.5TB gone. Is there any way to possibly see what was there that is gone? I have backups of my music and pictures, which are most important. Other stuff is replaceable; I am just looking for a way to know where to start.
  10. Strange issue I am dealing with: when I browse my network for NFS shares, unraid no longer appears. I am able to see another server (freenas) with a share, but my unraid server is gone. At the same time, if I manually path to the NFS share it works, so NFS is working. I have it set to export shares, etc. It is really strange because this used to work without any sort of issue, but then one day it just stopped. I often browse to a share to quickly access a file, so setting the mount manually is quite annoying.
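     For what it's worth, this is the kind of manual check and mount that still works for me (the server name "tower" and share name are just stand-ins for my setup):

        # list what the server says it is exporting
        showmount -e tower
        # the manual mount that still works
        mount -t nfs tower:/mnt/user/media /mnt/media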
  11. After adding a new drive, I did something that caused my appdata to include it, and as a result, mover moved my appdata. After this, I had appdata files on both cache and disk4. I resolved all docker containers except plex. I copied back preferences.xml and things appear OK, but since there are so many files in the directory, I thought to ask what the best thing to do is. Should I accept that plex is back up and move on, or should I stop plex, copy everything from disk4 to cache, overwriting along the way, and then just start the container again?
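     If consolidating back to cache is the answer, I am guessing it would look something like this with the container stopped (the paths are my best guess at the layout):

        # copy disk4's stray appdata back over cache, preserving ownership and times
        rsync -av /mnt/disk4/appdata/ /mnt/cache/appdata/
        # after verifying, remove the leftover copy from the array
        rm -r /mnt/disk4/appdata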
  12. "You are STRONGLY suggested to install the Fix Common Problems plugin, then run the Docker Safe New Permissions command." I ran the command, but keep getting the error. Is there a better way to complete this or resolve it? Edit: if I check the log it mentions, there do not appear to be any errors.
  13. Do I need to do it with the old disk first, or just go straight to the new disk?
  14. Upgrading a 2TB to an 8TB drive, and in the process I figure I will take it to XFS. I have already taken the steps to exclude the drive from my shares, and I moved all the data off the drive. The only thing on the 2TB drive at this time is 2 empty share folders. Looking through the documentation, I am trying to understand the next steps. Is it as simple as stopping the array with the old drive in place and changing the file system to XFS, then starting the array? Or do I just place the 8TB drive, format it to XFS, re-build parity, and go?
  15. Gotcha, that's what I was thinking, but wanted to make sure. Thanks
  16. Thank you, yes, I believe that's what I called crazy, the raw values. In that case, if my Seagates are OK for now (knock on wood), I was going to target sdq and sdn, as they have the most hours, are 2TB, and have experienced high temps. As far as zeroing drives when replacing them, is that any different than the documented process of exclude from array/safe mode, copy off data, remove, reboot, start array, and re-build parity? I was thinking of removing a few drives, so I was reviewing the process a little. First I must confirm that the Supermicro backplane supports my shucked drive.
  17. Ran a diagnostic to collect a SMART report, etc. Looking at the reports and the power-on hours, I have a few drives over 50 and 60k hours; one even hits 70k. However, the thing that stuck out the most to me during review is the Seagate drives. Every one of my ST3000 drives shows crazy values across the board. My assumption is that it either has to be something with the drives and their firmware, or the drives are really going bad, which would not actually surprise me. I am actually looking at an ST3000 that never got used and never made it into the server before going bad: I pre-cleared it and shelved it, and now it is just bad. tower-diagnostics-20190312-1921.zip
  18. I am trying to upgrade some drives. I have some aging 2-3TB drives that I am targeting, as I am out of drive slots, and I want to replace the best candidate first. Replacing a 2TB with an 8TB is the smartest move, but only if one of the 3TB drives is not for some reason a better target. Not sure if there is a simple report for this or not.
  19. Oh goodness, I totally misunderstood what they were telling me. At least I know the parity drive I replaced did have some 1400 reallocated sectors, so it was definitely going bad. This one, as you note, looks good and healthy. I guess it is time to carry on; thank you for debunking my fears.
  20. Had a parity drive aging and reporting sector re-allocation; therefore, I purchased a new drive to replace it. Went with a Black Friday 8TB, ran a preclear on it, and everything checked out OK, so yesterday I installed it into the system and began the data re-build process. The re-build is going fine, but when I look at the SMART data of the drive, it is reporting pre-fail and old age. Is this normal, or should I replace the drive with another? Since the power-on hours are accurate, it does not seem to be reporting old data, so before getting too far into it, I can cancel the re-build, replace the drive with a second one I purchased, and see if it reports any better. Below is the SMART data I am looking at. Thanks

      #   Attribute Name           Flag   Value Worst Threshold Type     Updated Failed Raw Value
      1   Raw read error rate      0x000b 100   100   016       Pre-fail Always  Never  0
      2   Throughput performance   0x0005 134   134   054       Pre-fail Offline Never  104
      3   Spin up time             0x0007 100   100   024       Pre-fail Always  Never  0
      4   Start stop count         0x0012 100   100   000       Old age  Always  Never  5
      5   Reallocated sector count 0x0033 100   100   005       Pre-fail Always  Never  0
      7   Seek error rate          0x000b 100   100   067       Pre-fail Always  Never  0
      8   Seek time performance    0x0005 128   128   020       Pre-fail Offline Never  18
      9   Power on hours           0x0012 100   100   000       Old age  Always  Never  67 (2d, 19h)
      10  Spin retry count         0x0013 100   100   060       Pre-fail Always  Never  0
      12  Power cycle count        0x0032 100   100   000       Old age  Always  Never  5
      22  Unknown attribute        0x0023 100   100   025       Pre-fail Always  Never  100
      192 Power-off retract count  0x0032 100   100   000       Old age  Always  Never  33
      193 Load cycle count         0x0012 100   100   000       Old age  Always  Never  33
      194 Temperature celsius      0x0002 222   222   000       Old age  Always  Never  27 (min/max 22/42)
      196 Reallocated event count  0x0032 100   100   000       Old age  Always  Never  0
      197 Current pending sector   0x0022 100   100   000       Old age  Always  Never  0
      198 Offline uncorrectable    0x0008 100   100   000       Old age  Offline Never  0
      199 UDMA CRC error count     0x000a 200   200   000       Old age  Always  Never  0
  21. Is this in the syslinux.cfg of unraid or is this somewhere in the libreelec config?
  22. This has been sort of ongoing for several weeks, and I finally thought to try fixing it. When I try updating a container, it downloads and extracts, but when it goes to stop the container and update, it just reports "error killing container". If I refresh the page, the container is stopped. I can then issue the update command again and it updates and starts as expected. However, this is not the desired behavior, as it means stopping the container, then waiting, and then updating it.
  23. Trying to use Remmina VNC to connect to a linux VM, as the noVNC web method is painful. I can get to 2 other VMs, ubuntu server on 5900 and windows on 5901, but when I go to my ubuntu desktop on 5902, it immediately disconnects. I can connect at the start of booting, but once the desktop loads I lose it. The web client still works, but performance is brutal at best, with the duplicated keys.
  24. Permissions... I have come to realize there is something wrong with the permissions of all my files being pulled. It is the same with all my containers, so if I can fix one, I can fix them all. My folders are created as:

     drwxr-xr-x 1 nobody users

     and my files:

     -rw-r--r-- 1 nobody users

     If I run newperms then I can fix the issue, but I am certain that this should not be needed. When I add my LSIO containers, I have always used the default settings besides the ports: so I say add <container>, set my port, set my paths, and go. I have never tried or looked at the user/group settings/permissions, but I am guessing I have to do something there; the question is what. Thanks
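     For reference, the only user/group knobs I know of on the LSIO images are the PUID/PGID environment variables (unraid's nobody:users is 99:100), plus UMASK on newer images. A sketch of what I mean, using sabnzbd purely as an example container:

        # PUID/PGID map the in-container user to unraid's nobody:users (99:100);
        # UMASK=002 is an assumption (newer LSIO images honor it, yielding 664/775)
        docker run -d --name=sabnzbd \
          -e PUID=99 \
          -e PGID=100 \
          -e UMASK=002 \
          -p 8080:8080 \
          -v /mnt/user/appdata/sabnzbd:/config \
          linuxserver/sabnzbd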
  25. Is the 210 still the go-to cheap option? Microcenter has one for $15 after rebate.