Mokkisjeva

Members
  • Posts: 15
  • Gender: Male

  1. Since I can't find any topic regarding this, I'm quite sure there's something I don't understand, not necessarily something that's wrong. But, looking at the attached picture, how come I get 1 TB and not 1.5 TB, assuming 1 TB goes to redundancy? 1 TB + 1 TB + 0.5 TB = 1 TB? Edit: From what I can understand after some googling, ZFS does not work quite like how I thought it would in this regard, compared to how I used to have it with btrfs. So I need to replace the 500 GB with a 1 TB disk to get 2 TB, I guess?
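     For context, assuming the pool in the screenshot is a three-disk raidz1 (1 TB + 1 TB + 0.5 TB), the arithmetic works out exactly as observed, because ZFS treats every disk in a raidz vdev as if it were the size of the smallest member. A minimal sketch of that rule (it ignores metadata and padding overhead, so real usable space is a bit lower):

```python
def raidz_usable_tb(disks_tb, parity=1):
    """Rough usable capacity of a single raidz vdev.

    ZFS sizes every member of a raidz vdev to the smallest disk,
    so the extra space on larger disks is wasted.
    """
    n = len(disks_tb)
    return (n - parity) * min(disks_tb)

print(raidz_usable_tb([1.0, 1.0, 0.5]))  # 1.0 -- matches the observed 1 TB
print(raidz_usable_tb([1.0, 1.0, 1.0]))  # 2.0 after swapping in a 1 TB disk
```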
  2. I get the following output:

     -------------------------------------
     ---OpenTTD not found! Downloading,---
     ---compiling and installing v1.11.0---
     ---Please be patient, this can take--
     ---some time, waiting 15 seconds..---
     -------------------------------------
     ---Successfully downloaded OpenTTD v1.11.0---
     /opt/scripts/start-server.sh: line 60: /serverdata/serverfiles/compileopenttd/openttd-1.11.0/configure: No such file or directory
     make: *** No targets specified and no makefile found.  Stop.
     make: *** No rule to make target 'install'.  Stop.
     ---Something went wrong, couldn't install OpenTTD v1.11.0---

     Any help regarding this? Am I supposed to install anything manually first?
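     For context, if memory serves, OpenTTD dropped the autotools `./configure` script in favour of CMake around the 1.11 release, which would explain the "configure: No such file or directory" error when a 1.10-era build script runs against a 1.11 tree. A quick sketch of how to check which build system a source tree actually ships (the temp directory here is a stand-in for the real `/serverdata/...` path):

```shell
# Stand-in source tree; substitute the real openttd-1.11.0 directory.
src=$(mktemp -d)
touch "$src/CMakeLists.txt"

if [ -f "$src/configure" ]; then
    build=autotools        # old ./configure && make && make install flow
elif [ -f "$src/CMakeLists.txt" ]; then
    build=cmake            # needs a cmake-based build instead
else
    build=unknown
fi
echo "$build"
rm -rf "$src"
```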
  3. Can you elaborate? I'm quite... bad at this. Where should I run these commands?
  4. Update: After 4 days the reallocated sector value is now 60, so it just keeps going up at a steady pace. A new drive is on the way and I'm currently moving files off the disk. What I'm wondering is if there's a chance a cable or motherboard port could be the issue? It's just so weird that it happened after upgrading the HW; the first reallocated sector warning came the day after the new rig. If so, would a new cable/port and a preclear "reveal" all the bad sectors?
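     One way to judge whether reallocations are "stable" or still climbing is to log the raw count daily and look at the day-over-day deltas. A small sketch with illustrative numbers ending at the reported 60:

```python
def daily_growth(counts):
    """Day-over-day increase between consecutive Reallocated_Sector_Ct readings."""
    return [b - a for a, b in zip(counts, counts[1:])]

# Illustrative daily readings; only the final 60 is from the post.
readings = [19, 28, 39, 50, 60]
print(daily_growth(readings))                        # [9, 11, 11, 10]
print(all(d > 0 for d in daily_growth(readings)))    # True: still climbing
```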
  5. So what is a "few" in this case? Sure, if I see a rapid increase then obviously I'd have to do something soon, but you say they can be stable, which of course makes me want to believe the drive might still be good for some time. Am I right in assuming read errors are the point where I stop using the drive altogether?
  6. Hello (hope I'm in the correct section of the forum). I'm starting to get errors on my WD Red 8TB, and I'm wondering how critical it is to replace it, as in... what time frame are we talking about here?

     ID  Attribute                 Flag    Value  Worst  Thresh  Type      Updated  When failed  Raw value
     1   Raw read error rate       0x000b  100    100    016     Pre-fail  Always   Never        0
     2   Throughput performance    0x0005  131    131    054     Pre-fail  Offline  Never        116
     3   Spin up time              0x0007  148    148    024     Pre-fail  Always   Never        454 (average 432)
     4   Start stop count          0x0012  100    100    000     Old age   Always   Never        20
     5   Reallocated sector count  0x0033  100    100    005     Pre-fail  Always   Never        19
     7   Seek error rate           0x000b  100    100    067     Pre-fail  Always   Never        0
     8   Seek time performance     0x0005  128    128    020     Pre-fail  Offline  Never        18
     9   Power on hours            0x0012  096    096    000     Old age   Always   Never        29766 (3y, 4m, 23d, 6h)
     10  Spin retry count          0x0013  100    100    060     Pre-fail  Always   Never        0
     12  Power cycle count         0x0032  100    100    000     Old age   Always   Never        20
     22  Helium level              0x0023  100    100    025     Pre-fail  Always   Never        100
     192 Power-off retract count   0x0032  064    064    000     Old age   Always   Never        43353
     193 Load cycle count          0x0012  064    064    000     Old age   Always   Never        43353
     194 Temperature celsius       0x0002  162    162    000     Old age   Always   Never        37 (min/max 24/47)
     196 Reallocated event count   0x0032  100    100    000     Old age   Always   Never        19
     197 Current pending sector    0x0022  100    100    000     Old age   Always   Never        0
     198 Offline uncorrectable     0x0008  100    100    000     Old age   Offline  Never        0
     199 UDMA CRC error count      0x000a  200    200    000     Old age   Always   Never        0

     I'm also wondering how I can find out why this drive in particular is being used so much more than any of the others (4x 8TB WD Red, all installed on the same date). See attachment. (Disk two is the dying one.)
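     As an aside, the raw column above can be extracted mechanically from `smartctl -A` output. A minimal sketch, assuming the usual ten-column attribute layout with underscore attribute names (real dumps vary, so treat this as illustrative):

```python
def raw_value(smart_text, attr_id):
    """Return the raw value of a SMART attribute by ID from
    `smartctl -A`-style rows (sketch; assumes the usual 10-column layout)."""
    for line in smart_text.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[0] == str(attr_id):
            # fields[9] is the first token of RAW_VALUE, so values like
            # "29766 (3y, 4m, 23d, 6h)" still parse as the leading integer.
            return int(fields[9])
    return None

dump = """\
  5 Reallocated_Sector_Ct   0x0033  100  100  005  Pre-fail  Always  -  19
197 Current_Pending_Sector  0x0022  100  100  000  Old_age   Always  -  0"""
print(raw_value(dump, 5))    # 19
print(raw_value(dump, 197))  # 0
```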
  7. Hello, my Unraid server is currently in a phase where parts seem to struggle (an SSD just died a few days ago), and for my current use the HW just isn't that great any longer. It's a 7-year-old server I built just for storage; then I added Plex, put up a VM, and now I'm running a Minecraft server too. So I'm going down a path where I use it more than I ever imagined I would. Anyway, as I live in Norway it's very hard to come by server parts, so I've been doing my best at finding what I think will work. But before I decide to molest my wallet I definitely need to consult with someone who actually knows this stuff and can give me some heads-up: whether it's compatible with Unraid, whether there are any known issues, etc. These are the parts I'm considering:

     CPU: Ryzen 3900X
     MoBo: ASRock Rack X470D4U2-2T
     RAM: KSM26ED8/16ME x4 (Kingston DDR4 2666MHz 16GB ECC)
     M.2: Samsung 970 EVO Plus 500GB (already have from old gaming PC)
     HDD: WD Red 8TB x6 (from previous server)
     PSU: Corsair RM850 (already have from old gaming PC)

     For the server rack (here I'm lost, and it's damn near impossible to find parts that are a) in stock and b) ship to Norway):

     Floor cabinet: Toten G6818GM
     4U chassis: Inter-Tech 4U-4416 (extreme overkill, but I couldn't really find what I was looking for)
     UPS: PowerWalker VI 1200 RLE
     Disk controller: LSI SAS 9305-16e
     + switch, patch panel, other accessories...

     According to Inter-Tech they recommend the LSI SAS 9305-16e in order to use the backplane, so I didn't see other options here. And these cables: SATA to SAS SFF-8087.

     So, are there any parts that don't go well together? Compatibility issues with Unraid? Bad choice of components? Brain-dead build? Any help is greatly appreciated.

     Edit: Would it be OK to put my 2080 Ti in this for transcoding? (Will swap for a 3080 Ti in my gaming rig when it releases.)
  8. Well shit, I can't mount it via UD, so RIP. Is there any way I can prevent this from ever happening again? Like, are there backup solutions for Dockers?
  9. I just did that and got the following: "Unmountable: Unsupported partition layout". So if I need to format the drive, I assume I lose the /appdata/ location completely? Something tells me the data is already lost.
  10. Hello, I just updated to 8.3 from 8.2, and after the reboot I noticed that no Dockers were running. After some looking around I realized that the SSD cache disk was missing... I changed the port on the SSD to see if I could still find it, and it showed up. I then changed back to its original port and it still shows up. So I have no idea what happened there; anyway, now my cache drive is unassigned. How do I... get it back to how it was? I'm pretty sure all the Dockers were installed on that disk 👀 So I'm kinda hoping I'm able to just say "hey, this is the disk you used before, just continue like nothing happened" so I don't have to set up everything from scratch.
  11. Thank you so damn much, Johnnie!!!!! I made a 50 GB dummy file to transfer and got a flawless write speed without it dipping once!
  12. Any ideas, anyone? Transfer to Unraid = 120 MB/s for almost 8 GB, then it drops to 30-50 MB/s for the rest of the duration. Transfer from Unraid = 120 MB/s constantly.
  13. Oh shit, no wonder using the mover filled the cache rather than emptying it! But I still struggle with 50% transfer speed; how do I fix this?
  14. Hello, I have an issue when transferring files from my PC to my Unraid server. Under the parity check it says "Duration: 17 hours, 47 minutes, 48 seconds. Average speed: 124.9 MB/sec", so I'm assuming the transfer speed would be ~120 MB/sec at best. I made a 100 GB dummy file in Windows; for the first 6-8 GB the transfer speed is ~110 MB/sec, then it falls to ~50 MB/sec. Funnily enough, the reason I wanted to make this post is that when I transfer actual files it usually runs at ~50 MB/sec, drops to ~1 MB/sec for about 20 seconds, then speeds back up to ~50 MB/sec, and so it fluctuates back and forth. So how do I troubleshoot and figure out the cause of the slow transfer speed? It doesn't matter whether I move files from a Wi-Fi laptop, a cabled PC, or even from the VM running on the server. (This got fixed after doing an Unraid update.)

      Edit: I just sent a 25 GB file from the Unraid server to my PC and it transferred at 110-117 MB/sec. Then I sent it back to Unraid, and it's the same story as before: after about 8 GB it slows down to half the speed (50-60 MB/sec).

      Edit 2: Forgot to add the cache drive, but same story. After nearly 8 GB of transfer at full speed it drops to half for the rest of the duration.

      Edit 3: Changed "Use cache" from YES to PREFER. Fixed it: stable 110+ MB/sec transfer speed.
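      The pattern described here (full speed for the first ~8 GB, then roughly half) is consistent with a fast buffer filling up and writes then proceeding at the slower backing-store speed. A toy model with illustrative numbers matching the figures in the post:

```python
def transfer_secs(file_gb, buffer_gb, fast_mb_s, slow_mb_s):
    """Toy model: the first buffer_gb lands at NIC/buffer speed,
    the remainder at the slower array write speed. Illustrative only."""
    fast = min(file_gb, buffer_gb) * 1000 / fast_mb_s
    slow = max(file_gb - buffer_gb, 0) * 1000 / slow_mb_s
    return fast + slow

# 100 GB file, ~8 GB buffer, then the slow path dominates:
print(round(transfer_secs(100, 8, 110, 50)))     # 1913
# Same file when the whole transfer stays at full speed:
print(round(transfer_secs(100, 100, 110, 110)))  # 909
```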
  15. Hey! Hope I'm posting this in the correct section of the forum; otherwise feel free to move the thread. I have an issue with my W10 VM. I have 5 disks in RAID, and the average write speed between the disks is 110 MB/s. But when I copy a file from one location to another via the W10 VM, the speed fluctuates a lot. It usually starts off at 140 MB/s for the first 3-5 seconds, then declines fast to 0-5 MB/s, where it usually stays for up to 15 seconds. Then it shoots up to the average disk speed of about 110 MB/s for 5 seconds and declines back down again. So instead of a file taking 20 seconds to copy, it takes 3-5 minutes. While the write speed is down between 0-5 MB/s the VM itself is hardly usable; it's impossible to even browse folders. The system still works fine, but anything that requires info from the disks is slow / frozen. Anyone know wtf is going on?