
About B1scu1T


  1. Yeah I was actually using the VPN docker but didn't realise I was in the wrong thread.
  2. This must have happened during my server meltdown last night... Long story short, I didn't see the different issues I experienced as separate, coincidental problems and figured the whole world was coming down around me. As it turns out:
     - Nextcloud decided to start locking random files in my storage - I still don't know what caused this
     - appdata backups were backing up the download folder, filling up the cache drive
     - I had a possible drive failure, causing me to need to rebuild my array
     - my docker file corrupted, forcing me to rebuild it
     - Deluge stopped communicating with Sonarr and Radarr
     Long night... In the meantime, whilst we wait for Deluge 2 support, I have just set up a blackhole, as Deluge supports .magnet links (with AutoAdd). Edit - turns out AutoRemovePlus isn't supported... ffs... downgrading.
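The blackhole handoff mentioned above boils down to a shared watch folder: Sonarr/Radarr drop .magnet/.torrent files into it, and Deluge's AutoAdd plugin picks them up. A minimal sketch, using a temp dir so it is safe to run anywhere (in a real setup the folder would be a share such as /mnt/user/downloads/blackhole - that path and the file name are assumptions):

```shell
#!/bin/sh
# Sketch of the blackhole handoff. WATCH stands in for the shared folder
# configured as the Torrent Blackhole in Sonarr/Radarr and as the watch
# folder in Deluge's AutoAdd plugin (real paths are assumptions).
WATCH=$(mktemp -d)

# Sonarr/Radarr side: drop a .magnet file into the watch folder.
printf 'magnet:?xt=urn:btih:0000000000000000000000000000000000000000' > "$WATCH/example.magnet"

# Deluge's AutoAdd plugin polls the folder and imports new files;
# here we just list what it would see.
ls "$WATCH"
```

No daemon is needed on the Sonarr/Radarr side; AutoAdd does all the polling, which is why the approach survives client API breakage like the Deluge 2 transition.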
  3. Given the rocky road it appears to have ahead, do you see Unraid moving away from Slackware at any time in the future? If so, how do you see this affecting existing users on 6.xx? If not, are you expecting Limetech to have to take on the workload of maintaining an OS in any way?
  4. I understand the scientific reason for using it and how much to use as a best practice. I've read everything there is to read and tried everything there is to try; it's not my first rodeo. The point is that these are just hypotheses that everyone constantly repeats until someone actually tests and proves them, including the idea that trapped small air pockets will decrease performance. I don't dispute the hypothesis (it's solid), but the testing doesn't line up with it. The conclusion is generally along the lines of "don't worry about it so much", and there are countless videos and articles on it. My advice for anyone: use a small blob or line, don't be too precious about it, and don't worry if you have a bit too much. When mounting, don't worry if you have to reseat it. Once it's on and running, check the temps from time to time to make sure everything is still working OK, especially if it's a 24/7 box, and change the paste every ~3 years as a preventative measure.
  5. Temperatures are one of the easiest things to hunt down, thanks to the number of probes installed in modern platforms and fairly solid universal support in software, so I doubt this would turn into any kind of niggling long-term issue. I know where you're coming from and I don't even entirely disagree, but it's more from a "perfection" point of view than one based on actual test results. The air bubble thing actually being a problem is one of those builder myths that prevails. I remember when the standard practice was to use a card to spread the paste out (some pastes were included with one, plus paper instructions showing the process) and to spend a fortune on fancy paste to gain a 2°C improvement. In the systems I have built in the last 5 years (maybe 7, iirc) I have been a bit more relaxed, and so far none of them have had any issues. I probably come across like I'm trying to get your back up; I'm really not. I just don't think the application of thermal paste is quite as important as the computer community makes it out to be. As long as there is enough, and it's of acceptable quality, don't worry about it.
  6. Whilst you're right about the air bubbles, hasn't there been pretty extensive testing across various YouTubers that conclusively shows the performance difference to be minuscule at best? Obviously, if you're taking a heatsink off and the paste is dry, you should replace it, but if you're just reseating during the initial build, it's really not a big deal and won't impact temperatures significantly.
  7. OK, there seems to be I/O activity on the cache, so it looks like things are working. Is it safe to also run a preclear at the same time, or is that asking for trouble?
  8. Apologies for my ignorance, but is the command to be entered directly from the console, or is this something I can do via web access? I can't see a terminal app built in as standard.
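For context, Unraid versions of that era had no built-in web terminal; a shell was reached from the physical console or over SSH. A minimal sketch (the hostname "tower" is the Unraid default and an assumption here):

```shell
# From another machine on the LAN, SSH in as root
# ("tower" is the default Unraid hostname - substitute your own):
ssh root@tower

# Alternatively, plug a keyboard and monitor into the server itself,
# log in as root at the console, and run the command there.
```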
  9. Eek! That's really not the kind of thing I wanted to hear! New timeline:
     (~1502.zip) Shutdown. Unplugged the SanDisk SSD and booted up - files are missing, but the drive does mount
     (~1511.zip) Shutdown and switched round so the SanDisk is in and the Kingston is out, but I also unplugged my new Toshiba HDD - all files are back as expected
     (~1526.zip) Plugged in the Toshiba drive while running and rebooted the server - status the same as previous
     I have left it like this for now as everything seems to be running, but it would be good to get the Kingston back into the cache pool safely. Should I just follow the standard instructions, or is there anything I should check first? tower-diagnostics-20170329-1502.zip tower-diagnostics-20170329-1511.zip tower-diagnostics-20170329-1526.zip
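Re-adding a device to a btrfs pool can be sketched roughly as below; the device name /dev/sdY1 and mountpoint /mnt/cache are placeholders, not taken from the diagnostics, and on Unraid the GUI's own cache-pool procedure is the safer route:

```shell
# Placeholders: /dev/sdY1 is the returning Kingston SSD, /mnt/cache the pool.
btrfs filesystem show /mnt/cache   # confirm which devices the pool currently has
btrfs device add -f /dev/sdY1 /mnt/cache   # add the device back (wipes it first)
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache   # restore redundancy
btrfs filesystem show /mnt/cache   # verify both members are present again
```

The balance step matters: `device add` alone leaves existing data on the old member, so redundancy is only restored once the convert filters have run.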
  10. Hi all, hopefully someone can help me out here. Here is a summary of the timeline:
      - As far as I know, everything was fine this morning. I used some of my docker apps, so I have no reason to presume otherwise.
      - My new 4TB HDD arrived, so I plugged it in and then opened up the webGUI.
      - Saw that there were a load of plugin updates to do, so I ran them (inc. the OS update from 6.2.x (sorry, I don't remember exactly) to 6.3.2) and rebooted.
      - After the reboot, I set off the preclear on my new disk.
      - Switched back to the "Main" page and noticed my cache drive 1 is now unmountable.
      - Switched back to the preclear and stopped it, so that I could analyse the cache problem first.
      - Checked "appdata" via SMB share and all my docker files are missing.
      - Swapped the two cache drives in the GUI (1 becomes 2, and 2 becomes 1) and now the other physical cache drive is unmountable.
      - Tried starting the array with just one cache drive; each time the same result... they are unmountable.
      As far as I'm aware, they were always set up as btrfs, but to be honest it's not something I have paid huge attention to... I've never had to! Essentially, I didn't change any settings that should affect this part of the system, so I can't think of any reason this would happen. I have attached the diags. tower-diagnostics-20170329-1354.zip
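A few read-only btrfs commands are the usual first step when a cache device refuses to mount; the device name and rescue path below are placeholders, and `btrfs restore` only copies files out read-only, it does not repair anything:

```shell
# Placeholders: substitute the real device shown by "btrfs filesystem show".
btrfs filesystem show                      # list btrfs filesystems and members
mount -o degraded /dev/sdX1 /mnt/cache     # try mounting a pool missing a member
btrfs restore /dev/sdX1 /mnt/disk1/rescue  # read-only copy of recoverable files
```

Running these before any write-mode repair preserves the evidence in the diagnostics and keeps the failure state intact for anyone helping in the thread.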
  11. I have also started a hardware thread to discuss the issues I have been having with memory errors. Unfortunately no-one has replied to that one, so I'm going the very long, slow way of testing everything I can. With the help of some people from another forum, I'm actually making progress... but I'll report on that once done. Currently on 1 CPU with 2 sticks of RAM and no "CATERR" system events in IPMI, so it's looking relatively good at the moment... Anyway, I keep seeing this warning in the system log:
      Apr 29 22:48:10 Tower kernel: BTRFS warning (device sdc1): block group 155780644864 has wrong amount of free space
      Apr 29 22:48:10 Tower kernel: BTRFS warning (device sdc1): failed to load free space cache for block group 155780644864, rebuild it now
      Is there anything to be worried about there? Edit - spoke too soon about the CATERR messages. I forced a rescan of Emby's media folders and it collapsed on me again.
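That warning means the free-space cache for one block group is stale; btrfs discards and rebuilds it automatically, so on its own it is harmless. If it keeps recurring, the whole cache can be rebuilt by mounting once with clear_cache; the device matches the log line, but the mountpoint on an Unraid box is an assumption:

```shell
# One-off rebuild of the free space cache (mountpoint assumed):
umount /mnt/cache
mount -o clear_cache /dev/sdc1 /mnt/cache   # rebuilds the cache during this mount

# For peace of mind, a consistency check on the unmounted device:
btrfs check /dev/sdc1   # read-only by default; never use --repair casually
```

clear_cache only needs to be used for a single mount; subsequent mounts can go back to the normal options.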
  12. Hi guys, so I was having some issues with my Unraid setup and I got some help here in this thread: http://lime-technology.com/forum/index.php?topic=48551.msg465724#msg465724 Thanks to the advice given and some exploration, I came to the conclusion that there was a hardware issue in the system. Since then I have tried a number of different RAM setups, both ECC and non-ECC, to be sure I didn't have a faulty stick. The moment I took out both CPUs, moved the one from CPU-1 to CPU-0, and booted up with only one CPU and 1 stick of the original Samsung registered ECC RAM, I no longer got any issues with memtest. I then plugged all the drives back in and stuck another 2 sticks of RAM in, and everything was not just fine, it actually seems to be running faster than ever before. I will experiment with the hardware a little more so that I can be sure what the issue is, but seeing as this is my first time building a dual-CPU system and my first time with Supermicro, I was wondering if anyone with a bit more experience had any advice on things to check or look out for. Perhaps BIOS settings or jumpers that might be relevant. I really hope it is just something simple and daft.
  13. Well, I took out all the RAM except two sticks (the minimum for dual CPUs) and ran memtest. Immediately a load of errors popped up under CPU0. I stopped the test and swapped the RAM for some Crucial non-registered sticks out of my other server and again, errors immediately. Guess it's either a CPU issue or perhaps even the motherboard. I'm pretty stumped now, to be honest; will have to look at it again another day.