Everything posted by prostuff1

  1. Two things to check, since this is a Ryzen system: 1. Disable Global C-States in the BIOS. 2. If you have sleep enabled for the system, disable that.
  2. I'm not trying to be mean, but you fail to grasp the sheer number of hardware configurations out there that can cause different issues to present themselves. Not to mention that Limetech is not in control of the Linux kernel, nor of most of the drivers used in it. The VT-d and NIC combo issue seems to be specific to that configuration, and I would NOT expect Limetech to have every hardware config covered; heck, one hardware config might be perfectly fine from one manufacturer while another has issues, purely because of the BIOS. Software like Adobe CC should be much easier to control for, as it is only the software layer; they don't care so much about the OS (while Limetech has to, since they are building an appliance). As for Time Machine... in my experience it has always been finicky on anything other than Apple hardware or USB drives connected to a Mac. I stopped using it with my Mac laptops and desktop a long time ago for that reason. Most people are unwilling to test a beta/RC on their machines, but will willingly upgrade to a stable release as soon as it is out. The pool of people willing to test is therefore much, much smaller, and the number of different configs available for testing is smaller too. It is a catch-22 for both Limetech and the person with the server. I don't upgrade any of my machines to the newest release right away; my main box stayed on 6.4 until 6.9.2 was out. I doubt my machines will see 6.10 until Christmas, mostly because they are running beautifully right now and I don't want to touch them.
  3. Ha, as a software dev I can say with certainty that is not how it works. QA (Quality Assurance) and UAT (User Acceptance Testing) tend to happen with a very small subset of the actual people that will end up using the product/service. You will almost ALWAYS find bugs once "stable" is released to the masses.
  4. Ah, ok, that makes it much more clear. You are taking those 4TB drives that used to be the parity drives and adding them for storage. In that case you would end up with 8TB of space available.
  5. You gain no space if those drive(s) have to be used as parity.
  6. I have a handful of the 16GB cruzer fit drives that I bought quite a while back. Knock on wood, all of them have been fine and the customers I have used them for have not reported any problems to me.
  7. We do not currently offer anything with 10Gig Ethernet built into the motherboard, but if you have a specific build in mind feel free to drop us an email. If you have certain hardware in mind, put it in the email and I would be more than happy to discuss a custom build to meet your needs. The builds on the website are out of date; I am working on a new website, so the custom server email form is the best way to get the build you need and want.
  8. Built a new server for myself and am having an odd issue with the preclear plugin, specifically using the gfjardim script. When I kick off a preclear on this new machine I get the below message in the preview window. /usr/local/emhttp/plugins/preclear.disk/script/ line 514: 0 * 100 / 0 : division by 0 (error token is "0 ") The main Unraid GUI shows that the preclear is finished, but the machine is definitely still doing something, as there is a dd process that is using CPU. If I start the process using the Joe L. script, then the process starts as expected and works. I am running 6.9.2, the newest version of the plugin, and all that jazz. I do not particularly care which script I need to use when running preclear, but figured I would mention it here in this thread to see if anyone has seen similar. The new build is a Ryzen 7 5700G with 16GB of RAM on an MSI B550i Gaming Edge Max motherboard. I have a much older build that is my test system, running something like an Intel Q6600 with 12GB of RAM on a Supermicro motherboard, and the gfjardim script seems to work perfectly fine on that one.
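For context, that error message is what bash arithmetic produces when a divisor is still zero, typically a percent-progress calculation running before the total has been populated. A minimal sketch of the pattern and its guard (hypothetical names, not the plugin's actual code):

```shell
#!/bin/bash
# Hypothetical sketch of the failing pattern: computing percent progress as
# bytes_done * 100 / bytes_total triggers "division by 0" in bash arithmetic
# when bytes_total has not been populated yet.
progress_percent() {
  local bytes_done=$1 bytes_total=$2
  if [ "$bytes_total" -eq 0 ]; then
    echo 0   # guard: report 0% instead of erroring out
  else
    echo $(( bytes_done * 100 / bytes_total ))
  fi
}

progress_percent 0 0      # prints 0
progress_percent 512 2048 # prints 25
```

Without the zero check, `$(( 0 * 100 / 0 ))` aborts with exactly the "division by 0 (error token ...)" message shown above.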
  9. 2 are production servers, one is a test server I play around on. There will soon be another one added to that mix... so a total of four within the next 6 months
  10. You all are far nicer than I am. My parity check running full out takes almost a day (have a lot of different disk sizes and some disks are quite old). The people that have access to my server have been told about the parity check that happens at the beginning of the month and since they do not pay anything for the "service" I am providing they do not get to complain when it is slow or goes down for periods of time.
  11. I have had flash drives in use since the unRAID 4.7 days... more than 10 years ago. One is a USB 1.0 drive and the other is a USB 2.0 drive; both are running fine and have not given me any problems. If you are losing USB drives at one per year, that seems quite excessive for the amount of writes that should normally be going to a flash drive.
  12. Very unlikely to happen, as Apple itself has dropped AFP from the latest macOS, Big Sur. You should be able to set up Time Machine to point to an SMB share.
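Time Machine over SMB relies on Samba's vfs_fruit extensions. A minimal share definition as a starting point (the share name and path here are examples, not anything from the post):

```
[TimeMachine]
    path = /mnt/user/timemachine
    vfs objects = catia fruit streams_xattr
    fruit:time machine = yes
```

The `fruit:time machine = yes` option (Samba 4.8+) is what makes the share advertise itself as a Time Machine destination to macOS.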
  13. But it is, with the way the current license model works. I don't understand how/why it is so hard to purchase a small USB flash drive and boot from that? I'm not trying to be a pain in the butt, just honestly trying to understand why you want this SD module thing working so badly. The USB license model has been in place for a LONG time at this point and is pretty darn reliable.
  14. Drop me an email via the website and I will get back to you. Let me know what you are looking to do with the server and I can make some suggestions and recommendations on what would best fit your needs.
  15. I have never specifically bought/used NAS rated drives in any of my unRAID servers I have built for me, my friends, my family, or even the customers who have asked which drives I recommend.
  16. Got 4 of my old mining rigs up and running on Team unRAID. Limiting factor for my rigs is the availability of work units.
  17. Would love to throw my mining machines at the team, but need a "good/easy" way to get them up and running and have not had the time to research the best way to do that. If anyone is using their mining machine in such a fashion and can point me in the correct direction, that would be much appreciated. EDIT: Never mind, it appears the mining software I run has added support, so it is just a matter of me taking the time to get that up and running.
  18. What size are those 30 drives? If some of the drive sizes are smaller, it might be worth considering selling off some of the smaller ones and getting one or two larger ones.
  19. We do still build systems, though the builds have changed a bit, so it is best to drop us an email. I am working on a new website, though I do not know quite when it will go live.
  20. The latest update of the container breaking Nextcloud was a big pain in the butt for me. I ended up without my Nextcloud install for nearly a week before I got everything back up and running (I probably could have gotten it figured out and reinstalled faster, but there is just not enough time in the day). I had to downgrade the container with the "linuxserver/nextcloud:140" tag. From there I upgraded Nextcloud from within the docker using the command line. But I could not just jump from version 13, which I think was the one within the docker for the tag I installed. I had to upgrade to 14, which had its own set of issues, then to 15, which went fine, and then to 16, which also went fine. Once Nextcloud was all up to date within the docker, I had to manually run some of the indices updates for Nextcloud, and then remove the :140 tag from the docker container to get up to date on the latest docker. I did manage to get everything squared away and running again, but it was not an overly easy process, and for someone not comfortable with the command line it might be more than they are willing to tackle.
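For anyone facing the same stepped upgrade, the in-container sequence looks roughly like this. The container name "nextcloud" is an assumption, and the updater path can differ between images; `occ` is Nextcloud's admin CLI, and `db:add-missing-indices` is the manual indices step mentioned above:

```shell
# Repeat once per major version (13 -> 14 -> 15 -> 16); paths and the
# container name "nextcloud" are assumptions for this sketch.
docker exec -it nextcloud updater.phar                # step the updater forward one release
docker exec -it nextcloud occ upgrade                 # apply pending migrations
docker exec -it nextcloud occ db:add-missing-indices  # the manual indices update
```

Nextcloud only supports upgrading one major version at a time, which is why the 13-to-16 jump had to be done in stages.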
  21. I just ran into this issue as well. I have a very old system, built for a customer nearly 10 years ago now. It is only ever used as a file server, so there was no real reason to touch it. Well, parts finally started to fail (not the motherboard or processor, but the drive cages). The customer was not interested in upgrading anything except the unRAID OS version. I went to update it and, of course, nothing booted after 6.2.4. As soon as I put 6.3.5 on the USB drive and attempted to boot, I would get the above-mentioned error. I updated the BIOS on the board, but that solved nothing. So, some google-fu later, I found this thread and the suggestion to add "root=sda" to the syslinux.cfg. That seems to have worked on the OLD Biostar A760G M2+ board with a Phenom II X4 in it. Not sure what in the kernel between 6.2.4 and 6.3.5 made it stop booting, but the "root=sda" trick works fine on this old build.
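For reference, the workaround goes on the append line of syslinux.cfg on the flash drive; the surrounding stanza below is the stock Unraid layout and may differ slightly on your version:

```
label unRAID OS
  menu default
  kernel /bzimage
  append root=sda initrd=/bzroot
```

The added `root=sda` explicitly tells the kernel which device holds the root filesystem instead of relying on autodetection, which appears to be what changed between those kernel versions.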
  22. I am currently sitting on 296 days and counting on my main machine and my other machine is at 66 days.
  23. I have not had a chance to play with getting the docker up and running on my unRAID machine. I plan to play with it sometime this week and will be sure to do a writeup after I get it going.
  24. Duplicati docker backing up to an Amazon S3 account. That is likely what I will be doing for all my computers as an "offsite backup" replacement (I had Crashplan Home). My next step is a MinIO service running on my server, then backing up via Duplicati to that, to get full backups in two different locations.
  25. I just posted a request on the forums (you can find it here) about getting a MinIO docker created. I found a blog article discussing the use of MinIO and Duplicati to mimic Crashplan Home. I will likely be combining Duplicati with an Amazon S3 service to get the cloud backup portion taken care of; but that does not cover the Crashplan server I had running locally and was also backing up to. I believe I can get MinIO and Duplicati to work together so that it will function the same, or very close to, how my local Crashplan server did. Thoughts and comments welcome.
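As a starting point, MinIO's official image can be launched roughly like this (the host port and data path here are assumptions); Duplicati's S3-compatible backend is then pointed at the server on port 9000:

```shell
# Hypothetical: run a local S3-compatible MinIO endpoint for Duplicati
# to back up to; the volume path is an example, adjust to your array.
docker run -d --name minio \
  -p 9000:9000 \
  -v /mnt/user/backups/minio:/data \
  minio/minio server /data
```

That gives the local backup target, while the same Duplicati jobs can also point at real Amazon S3 for the offsite copy.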