prostuff1

Everything posted by prostuff1

  1. Built a new server for myself and am having an odd issue with the preclear plugin, specifically when using the gfjardim script. When I kick off a preclear on this new machine I get the below message in the preview window:

     /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 514: 0 * 100 / 0 : division by 0 (error token is "0 ")

     The main unRAID GUI shows that the preclear is finished, but the machine is definitely still doing something, as there is a dd process using CPU. If I start the process using the Joe L. script instead, it starts as expected and works. I am running 6.9.2, the newest version of the plugin, and all that jazz. I do not particularly care which script I need to use when running a preclear, but figured I would mention it here in this thread to see if anyone has seen similar. The new build is a Ryzen 7 5700G with 16GB of RAM on an MSI B550I Gaming Edge Max motherboard. I have a much older build as my test system, running something like an Intel Q6600 with 12GB of RAM on a Supermicro motherboard, and the gfjardim script seems to work perfectly fine on that one.
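     The error looks like the script is computing a progress percentage before it has the drive's total size, so the denominator is still zero. Purely as an illustration of that failure mode (this is my own sketch, not the actual preclear_disk.sh code, and /dev/sdX is a placeholder):

        #!/bin/bash
        # Hypothetical sketch, not the actual preclear_disk.sh code.
        # If total_bytes comes back empty or zero, bash arithmetic throws the same
        # "division by 0" error shown above, so guard the calculation.
        bytes_read=0
        total_bytes=$(blockdev --getsize64 /dev/sdX 2>/dev/null)   # /dev/sdX is a placeholder

        if [ "${total_bytes:-0}" -gt 0 ]; then
            percent=$(( bytes_read * 100 / total_bytes ))
        else
            percent=0   # report 0% instead of dividing by zero
        fi
        echo "progress: ${percent}%"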
  2. Two are production servers and one is a test server I play around on. There will soon be another one added to that mix... so a total of four within the next six months.
  3. You all are far nicer than I am. My parity check running full out takes almost a day (I have a lot of different disk sizes and some disks are quite old). The people that have access to my server have been told about the parity check that happens at the beginning of the month, and since they do not pay anything for the "service" I am providing, they do not get to complain when it is slow or goes down for periods of time.
  4. I have had flash drives in use since the unRAID 4.7 days... more than 10 years ago. One is a USB 1.0 drive and the other is a USB 2.0 drive. Both are running fine and have not given me any problems. If you are losing USB drives at a rate of one per year, that seems quite excessive for the amount of writes that should normally be going to a flash drive.
  5. Very unlikely to happen, as Apple itself has dropped AFP from the latest macOS, Big Sur. You should be able to set up Time Machine to point to an SMB share instead.
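     For anyone who wants the concrete step, Time Machine can be pointed at an SMB share from the terminal. A rough sketch, assuming the unRAID share is reachable from the Mac; the server, share, and user names are placeholders for your own:

        # Point Time Machine at an SMB share on the unRAID box.
        # "tower", "TimeMachine", and "backupuser" are placeholders.
        sudo tmutil setdestination -p "smb://backupuser@tower/TimeMachine"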
  6. But it is, with the way the current license model works. I don't understand how/why it is so hard to purchase a small USB flash drive and boot from that. I'm not trying to be a pain in the butt, just honestly trying to understand why you want this SD module thing working so badly. The USB license model has been in place for a LONG time at this point and is pretty darn reliable.
  7. Drop me an email via the website and I will get back to you. Let me know what you are looking to do with the server and I can make some suggestions and recommendations on what would best fit your needs.
  8. I have never specifically bought/used NAS-rated drives in any of the unRAID servers I have built for myself, my friends, my family, or even the customers who have asked which drives I recommend.
  9. Got 4 of my old mining rigs up and running on Team unRAID. Limiting factor for my rigs is the availability of work units.
  10. Would love to throw my mining machines at the team but need a "good/easy" way to get them up and running, and I have not had the time to research the best way to do that. If anyone is using their mining machine in such a fashion and can point me in the correct direction, that would be much appreciated. EDIT: Never mind, it appears the mining software I run has added support, so it is just a matter of me taking the time to get that up and running.
  11. What size are those 30 drives? If some of the drive sizes are smaller, it might be worth considering selling off some of the smaller ones and getting one or two larger ones.
  12. We do still build systems, though the builds have changed a bit, so it is best to drop us an email. I am working on a new website, though I do not know quite when it will go live.
  13. The latest update of the container breaking Nextcloud was a big pain in the butt for me. I ended up without my Nextcloud install for nearly a week before I got everything back up and running (I probably could have gotten it figured out and reinstalled faster, but there is just not enough time in the day). I had to downgrade the docker container with the "linuxserver/nextcloud:140" tag. From there I upgraded Nextcloud from within the docker using the command line. But I could not just jump from version 13, which I think was the one within the docker for the tag I installed. I had to upgrade to 14, which had its own set of issues, then to 15, which went fine, and then to 16, which also went fine. Once Nextcloud was all up to date within the docker I had to manually run some of the indices updates for Nextcloud, and then remove the :140 tag from the docker container to get up to date on the latest docker. I did manage to get everything squared away and running again, but it was not an overly easy process, and for someone not comfortable with the command line it might be more than they are willing to tackle.
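      Roughly, the steps looked like the sketch below. The container name, the "abc" user, and the /config paths are the linuxserver.io defaults from my setup and may differ on yours; the occ commands themselves are standard Nextcloud:

        # Pin the container to the older image so it matches the installed version
        # (repository field in the unRAID docker template): linuxserver/nextcloud:140

        # Step the install up one major version at a time from inside the container,
        # repeating for 13 -> 14 -> 15 -> 16:
        docker exec -it -u abc nextcloud php /config/www/nextcloud/updater/updater.phar
        docker exec -it -u abc nextcloud php /config/www/nextcloud/occ upgrade

        # Once on the current version, run the maintenance command Nextcloud asks for:
        docker exec -it -u abc nextcloud php /config/www/nextcloud/occ db:add-missing-indices

        # Finally, remove the :140 tag from the template so the container tracks the
        # latest image again.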
  14. I just ran into this issue as well. I have a very old system built for a customer nearly 10 years ago now. It is only ever used as a file server, so there was no real reason to touch it. Well, parts finally started to fail (not the motherboard or processor, but the drive cages). The customer was not interested in upgrading anything except for the unRAID OS version. I went to update it and of course nothing booted after 6.2.4. As soon as I put 6.3.5 on the USB drive and attempted to boot I would get the above-mentioned error. I updated the BIOS on the board, but that solved nothing. So some google-fu later I found this thread and the suggestion to add "root=sda" to the syslinux.cfg. That seems to have worked on the OLD Biostar A760G M2+ board with a Phenom II X4 in it. Not sure what changed in the kernel between 6.2.4 and 6.3.5 that made it stop booting, but the "root=sda" trick works fine on this old build.
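      For anyone else who hits this, the change goes in syslinux/syslinux.cfg on the flash drive. The stanza below is the stock unRAID boot entry with the extra parameter tacked on; your label/menu text may differ slightly depending on version:

        label Unraid OS
          menu default
          kernel /bzimage
          append initrd=/bzroot root=sda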
  15. I am currently sitting on 296 days and counting on my main machine and my other machine is at 66 days.
  16. I have not had a chance to play with getting the docker up and running on my unRAID machine. I plan to play with it sometime this week and will be sure to do a writeup after I get it going.
  17. Duplicati docker backing up to an Amazon S3 account. That is likely what I will be doing for all my computers as an "offsite backup" replacement (I had CrashPlan Home). My next step is a Minio service running on my server and then backing up via Duplicati to that, to get full backups in two different locations.
  18. I just posted a request on the linuxserver.io forums (you can find it here: https://forum.linuxserver.io/thread-585.html) about getting a Minio docker created. I found a blog article discussing the use of Minio and Duplicati to mimic CrashPlan Home. I will likely be combining Duplicati with an Amazon S3 service to get the cloud backup portion taken care of; but that does not cover the CrashPlan server I had running locally and was also backing up to. I believe I can get Minio and Duplicati to work together so that it will function the same as, or very close to, how my local CrashPlan server did. The blog article talking about using these two apps together is here: http://blog.quindorian.org/2017/08/diy-cloud-backup-replacing-crashplan-home-family-diy-style.html/ Thoughts and comments welcome.
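      In the meantime, a plain docker run along these lines should get Minio going on unRAID. This is just a sketch from my notes, not a vetted template; the host path, port, and credentials are placeholders:

        # Placeholder credentials and host path; adjust for your own setup.
        docker run -d --name=minio \
          -p 9000:9000 \
          -v /mnt/user/backups/minio:/data \
          -e MINIO_ACCESS_KEY=changeme \
          -e MINIO_SECRET_KEY=changeme-too \
          minio/minio server /data

        # Duplicati can then use it as an "S3 compatible" destination pointed at
        # http://<server-ip>:9000 with the access/secret keys above.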
  19. Install the "Command Line Tool" through Community Applications and try it out instead.
  20. Back in the day when I was running unRAID under ESXi I was using Plop to "load" the flash drive so ESXi could use it. Honestly, I have not done any ESXi stuff in a while, so I am not sure if that is still needed/used.
  21. I would also consider upgrading the CPU to at least an E3-1230 or better; the 1220 does NOT have hyperthreading. I would not bother with VM1, VM2, or VM3. Get a big SSD and set it as a cache drive in unRAID. Use that as the VM location and go about creating your editing VM there. All the apps you want/need to run can be done via docker and will take up less space than full-blown VMs.
  22. My dedicated CrashPlan backup server that runs unRAID is usually up for quite a while at a time, as I do not usually bother updating with every unRAID release. My main server was up for nearly a year and a half at one point. I skipped one whole release of unRAID on my main server, mostly because it was a case of "if it ain't broke, don't fix it."
  23. Yeah, I am going to have to start looking at alternatives now. I am currently looking into https://www.duplicati.com/ and https://www.arqbackup.com/. Kind of want to see what a Duplicati docker would be like and if someone could get it up and running. I could just pay for unlimited cloud storage at Amazon, Google, Microsoft, etc. on a monthly basis and then send all the computers that I was sending to CrashPlan there. If anyone out there has played with either of the above-mentioned options, please chime in with your opinions.
  24. The linked post above is from Limetech, so you should be fairly confident in the information provided there. There is no 100% guarantee that it will not change in the future, but for now everything works as you would like. Limetech has been pretty understanding about users' needs/wants/desires.