smakovits

Members
  • Content Count

    863
  • Joined

  • Last visited

Community Reputation

0 Neutral

About smakovits

  • Rank
    Advanced Member

Converted

  • Gender
    Undisclosed


  1. Strange issue I am dealing with: when I browse my network for NFS shares, unraid no longer appears. I can see another server (freenas) with a share, but my unraid server is gone. At the same time, if I mount the NFS share manually by path it works, so NFS itself is working, and I have it set to export shares. It is really strange because this used to work without any issues, but then one day it just stopped. I often browse to a share to quickly grab a file, so having to set up the mount manually is quite annoying.
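The export check and the manual-mount workaround can be sketched as below; the server and share names are placeholders, not taken from the post, and the hardware-touching commands are left commented so you can substitute your own values first.

```shell
SERVER=tower            # placeholder: your unRAID server's name or IP
SHARE=media             # placeholder: the share you normally browse to
MOUNTPOINT=/mnt/nfs/$SHARE

# Ask the server which exports it advertises; an empty list here would
# explain why browsing shows nothing even though direct mounts work:
#   showmount -e "$SERVER"

# Manual-mount workaround while browsing is broken:
#   mkdir -p "$MOUNTPOINT"
#   mount -t nfs "$SERVER:/mnt/user/$SHARE" "$MOUNTPOINT"
```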
  2. After adding a new drive, I did something that caused my appdata share to include it, and as a result mover moved my appdata. After this, I had appdata files on both cache and disk4. I resolved all docker containers except plex. I copied preferences.xml back and things appear OK, but since there are so many files in the directory, I thought to ask what the best thing to do is. Should I accept that plex is back up and move on, or should I stop plex, copy everything from disk4 to cache, overwriting along the way, and then just start the container again?
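The stop-copy-restart option described above can be sketched as follows. The paths and container name match the post, but this is only a sketch: verify the copy before deleting anything, and stop the container first so no files are open mid-copy.

```shell
# Consolidate the stray disk4 copy of appdata back onto the cache.
SRC=/mnt/disk4/appdata/   # trailing slash: copy the *contents* of this dir
DST=/mnt/cache/appdata/

#   docker stop plex
#   rsync -avh "$SRC" "$DST"   # disk4's files overwrite the cache copies
#   docker start plex
# Only after confirming plex is healthy, remove the leftover copy:
#   rm -r /mnt/disk4/appdata
```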
  3. "You are STRONGLY suggested to install the Fix Common Problems plugin, then run the Docker Safe New Permissions command." I ran the command, but I keep getting the error. Is there a better way to complete or resolve this? Edit: if I check the log it refers to, there do not appear to be any errors.
  4. Do I need to do it with the old disk first, or just go straight to the new disk?
  5. Upgrading a 2TB to an 8TB drive, and in the process I figure I will take it to XFS. I have already taken the steps to exclude the drive from my shares, and I moved all the data off it; the only things on the 2TB drive at this time are two empty share folders. Looking through the documentation, I am trying to understand the next steps. Is it as simple as stopping the array with the old drive in place, changing the file system to XFS, and then starting the array? Or do I just install the 8TB drive, format it to XFS, re-build parity and go?
  6. Gotcha, that's what I was thinking, but wanted to make sure. Thanks
  7. Thank you, yes, I believe that's what I called crazy, the raw values. In that case, if my Seagates are OK for now (knock on wood), I was going to target sdq and sdn, as they have the most hours, are 2TB, and have experienced high temps. As far as zeroing drives when replacing them, is that any different from the documented process (exclude from array/safe mode, copy off data, remove, reboot, start array and re-build parity)? I was thinking of removing a few drives, so I was reviewing the process a little. First I must confirm that the Supermicro backplane supports my shucked drive.
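For reference, the zero-then-remove approach usually means writing zeros through the array's md device so parity stays in sync, after which the drive can be dropped via a New Config without rebuilding parity. A sketch, with the slot number as an assumption and the destructive command left commented:

```shell
DISK=/dev/md4   # hypothetical slot; use the md device, NOT the raw /dev/sdX

# Writing zeros through the md device updates parity as it goes, so once the
# drive reads as all zeros it can be removed (Tools -> New Config) with
# parity still valid. Destructive -- triple-check the device name first:
#   dd if=/dev/zero of="$DISK" bs=1M status=progress

# Harmless demonstration of the same dd invocation against a scratch file:
scratch=$(mktemp)
dd if=/dev/zero of="$scratch" bs=1M count=1 status=none
```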
  8. Ran a diagnostic to collect a SMART report. Looking at the reports and the power-on hours, I have a few drives over 50 and 60k hours, and one that even hits 70k. However, the thing that stuck out most during review is the Seagate drives: every one of my ST3000 drives shows crazy values across the board. My assumption is that it is either something with the drives and their firmware, or the drives really are going bad, which would not actually surprise me. I am actually looking at an ST3000 that never got used and never made it into the server before going bad; I pre-cleared it and shelved it, and now it is just bad. tower-diagnostics-20190312-1921.zip
  9. I am trying to upgrade some drives. I have some aging 2-3TB drives that I am targeting, as I am out of drive slots, and I want to replace the best candidate. Replacing a 2TB with an 8TB gains the most, but only if one of the 3TB drives is not for some reason a better target. Not sure if there is a simple report for this or not.
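One quick way to get a per-drive power-on-hours summary is smartctl from smartmontools (which unRAID ships). A sketch, assuming the drives appear as /dev/sd* and the command is run as root:

```shell
# Print power-on hours for every /dev/sd* block device.
for d in /dev/sd[a-z]; do
  [ -b "$d" ] || continue
  # Field 2 of the attribute table is the name, field 10 the raw value.
  hours=$(smartctl -A "$d" 2>/dev/null | awk '$2 == "Power_On_Hours" {print $10}')
  echo "$d: ${hours:-n/a} hours"
done
```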
  10. Oh goodness, I totally misunderstood what they were telling me. At least I know the parity drive I replaced did have some 1400 reallocated sectors, so it was definitely going bad. This one, as you note, looks good and healthy. I guess it is time to carry on; thank you for debunking my fears.
  11. Had a parity drive aging and reporting sector re-allocation, so I purchased a new drive to replace it. Went with a Black Friday 8TB; I ran a preclear on it and everything checked out OK, so yesterday I installed it into the system and began the re-build process. The re-build is going fine, but when I look at the SMART data of the drive, it is reporting "Pre-fail" and "Old age". Is this normal, or should I replace the drive with another? Since the power-on hours are accurate, it does not seem to be reporting stale data, so before getting too far into it I could cancel the re-build, replace the drive with a second one I purchased, and see if it reports any better. Below is the SMART data I am looking at. Thanks

      #    Attribute Name            Flag    Value  Worst  Thresh  Type      Updated  Failed  Raw Value
      1    Raw read error rate       0x000b  100    100    016     Pre-fail  Always   Never   0
      2    Throughput performance    0x0005  134    134    054     Pre-fail  Offline  Never   104
      3    Spin up time              0x0007  100    100    024     Pre-fail  Always   Never   0
      4    Start stop count          0x0012  100    100    000     Old age   Always   Never   5
      5    Reallocated sector count  0x0033  100    100    005     Pre-fail  Always   Never   0
      7    Seek error rate           0x000b  100    100    067     Pre-fail  Always   Never   0
      8    Seek time performance     0x0005  128    128    020     Pre-fail  Offline  Never   18
      9    Power on hours            0x0012  100    100    000     Old age   Always   Never   67 (2d, 19h)
      10   Spin retry count          0x0013  100    100    060     Pre-fail  Always   Never   0
      12   Power cycle count         0x0032  100    100    000     Old age   Always   Never   5
      22   Unknown attribute         0x0023  100    100    025     Pre-fail  Always   Never   100
      192  Power-off retract count   0x0032  100    100    000     Old age   Always   Never   33
      193  Load cycle count          0x0012  100    100    000     Old age   Always   Never   33
      194  Temperature celsius       0x0002  222    222    000     Old age   Always   Never   27 (min/max 22/42)
      196  Reallocated event count   0x0032  100    100    000     Old age   Always   Never   0
      197  Current pending sector    0x0022  100    100    000     Old age   Always   Never   0
      198  Offline uncorrectable     0x0008  100    100    000     Old age   Offline  Never   0
      199  UDMA CRC error count      0x000a  200    200    000     Old age   Always   Never   0
  12. Is this in the syslinux.cfg of unraid or is this somewhere in the libreelec config?
  13. This has been sort of ongoing for several weeks and I finally thought to try fixing it. When I try updating a container, it downloads and extracts, but when it goes to stop the container and update, it just reports "error killing container". If I refresh the page, the container is stopped; I can then issue the update command again and it updates and starts as expected. However, this is not the desired behavior, as it means stopping the container, waiting, and then updating it.
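One theory worth testing here: "error killing container" can simply mean the container did not exit within Docker's stop timeout and had to be force-killed. Stopping it manually with a longer timeout shows whether that is the case; the container name and timeout below are placeholders, not taken from the post.

```shell
NAME=mycontainer   # placeholder container name
TIMEOUT=60         # seconds to allow for a clean exit (Docker's default is 10)

# If this stops the container cleanly, the updater is most likely just
# timing out while waiting for the container to shut down:
#   docker stop -t "$TIMEOUT" "$NAME"
#   docker logs --tail 50 "$NAME"   # how the container reacted to SIGTERM
```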
  14. Trying to use Remmina VNC to connect to a Linux VM, as the noVNC web method is painful. I can get to 2 other VMs, ubuntu server on 5900 and windows on 5901, but when I go to my ubuntu desktop on 5902 it immediately disconnects. I can connect at the start of booting, but once the desktop loads I lose it. The web client still works, but performance is brutal at best, with the duplicated keys.
  15. Permissions... I have come to realize there is something wrong with the permissions of all the files my containers pull. It is the same with all my containers, so if I can fix one I can fix them all. My folders are created as drwxr-xr-x 1 nobody users, and my files as -rw-r--r-- 1 nobody users. If I run newperms I can fix the issue, but I am certain that should not be needed. When I add my LSIO containers, I have always used the default settings besides the ports: I add <container>, set my port, set my paths and go. I have never tried or looked at the user/group settings/permissions, but I am guessing I have to do something there; the question is what. Thanks
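For context, LinuxServer.io images pick the owner of created files from PUID/PGID environment variables, and on unRAID nobody:users maps to 99:100, which is exactly the ownership shown above; the file mode itself comes from the container's umask. A hypothetical manual run (the image name and paths are illustrative, not from the post):

```shell
PUID=99    # unRAID's "nobody" user
PGID=100   # unRAID's "users" group

# Hypothetical LSIO container run with explicit ownership and umask; in the
# unRAID template these appear as environment variables on the container:
#   docker run -d --name=sonarr \
#     -e PUID=$PUID -e PGID=$PGID \
#     -e UMASK=022 \
#     -v /mnt/user/appdata/sonarr:/config \
#     linuxserver/sonarr
# (older LSIO images used UMASK_SET instead of UMASK)
```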