Bizquick

Everything posted by Bizquick

  1. Seems odd that this just started happening. I was planning on moving everything to a new case with more cooling options; I guess I'll just have to make that move a little quicker than I expected. I do have one other option: I've also been working on moving to new hardware again, so I might be able to see if the backup server is ready enough to take over.
  2. Server is rebooting on its own, I think about every 6 hours or so.
  3. Not sure what is going on. I need some help to figure out whether this is hardware or software, and if it's hardware, what is failing. Seems like this just started in the last few days. Log attached. neonas3-diagnostics-20240326-1201.zip
  4. Well, like I said above, a lot of times people will list and say anything, but you don't know for sure unless you check. Here is where I saw it in the log:

Mar 11 11:03:50 NAS kernel: mpt2sas_cm0: LSISAS2008: FWVersion(11.00.00.00), ChipRevision(0x03), BiosVersion(07.00.00.01)

I would think the firmware version would say something else, but I don't have a log file from one of these HBA cards to tell for sure. That's why I suggested you at least check. You don't have to flash it, but I would follow the steps as if you were going to flash it and check the firmware version, or, if the place you got it from added the BIOS to it, you could get into the controller before boot and check there. Just don't try to change anything. It might be fine, so I'm not sure. I wish I had an older log file from my older systems to see what mine said.
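If you want to check from the console instead of rebooting into the controller BIOS, a rough sketch (the grep works on any Unraid box; sas2flash is LSI's flash utility and is not on stock Unraid, so treat that part as illustrative):

    # pull the firmware version the mpt2sas driver reported at boot
    grep -i "FWVersion" /var/log/syslog

    # or ask the controller directly, if LSI's sas2flash utility is installed
    sas2flash -listall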
  5. From your log post it does look like your HBA controllers are on older firmware, but if you aren't having any issues you might not need to do anything. My guess is that your older setup had the power settings in the BIOS set low, which is why it could boot up and make all your drives usable. Newer setups generally set things a little higher, especially after you add 1 or 2 cards plugged into the motherboard. For me, when I'm building out systems with more than 5 drives, I usually use a 650 W PSU minimum, and I was seeing 7 spinning drives plus the small SSDs, plus a new Intel chip with an iGPU. 500 W was just a little under; 550 W might be just enough, but it was still a little on the low side to me.
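Rough back-of-the-napkin math for why 500 W felt low to me; every number here is an assumed ballpark, not a measurement:

    # all numbers assumed: 7 spinners at spin-up (~30 W each),
    # 2 small SSDs (~5 W each), CPU+iGPU boost (~150 W),
    # board/RAM/fans/HBA (~75 W)
    echo $(( 7*30 + 2*5 + 150 + 75 ))   # => 445 watts worst case

You don't want a PSU running near its limit at spin-up, so with that kind of worst case I'd want 550 W as a bare minimum and 650 W for comfort.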
  6. Still not sure what is causing these messages. Diag is above.
  7. A 500 watt PSU does seem a bit low with that many drives and HBA cards. Older 4TB-and-under spinning drives do take some power; when I ran 6 4TB WD Reds I made sure my PSU was 650 W. I'm kind of surprised your dual Xeon box didn't have power issues, unless the CPUs just constantly ran in low power mode, which can happen if your BIOS is not set to turn off C-states and a few other things, depending on what the BIOS lets you do. Glad to hear it sounds like you're up and running now. I would double check your HBA firmware to make sure it's P20. Sometimes I've gotten a few of those old 9201 cards and had to reflash them because whoever sold them didn't do it right or just claimed they did. But your log shows that's good. I don't know how to read everything in the logs yet, still learning.
  8. I'm currently replacing my parity drive here, but other than that I just got those weird errors for the i915 and the i2c's. neonas3-diagnostics-20240311-1245.zip
  9. I'm not really 100 percent familiar with the SyncThing docker, but you do need to set up your docker to see all the shares you're syncing; a rough example of the mapping is below. I think this should go in the SyncThing docker support thread, though; it's not really a general Unraid OS issue.
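As a sketch of what I mean by letting the docker see your shares; the share name here is a placeholder, and on Unraid you'd normally set these paths in the container template rather than on the command line:

    # GUI on 8384, sync protocol on 22000
    # map each Unraid share you want synced as its own path inside the container
    docker run -d --name=syncthing \
      -p 8384:8384 -p 22000:22000 \
      -v /mnt/user/appdata/syncthing:/config \
      -v /mnt/user/YourShare:/data/YourShare \
      lscr.io/linuxserver/syncthing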
  10. Yeah, I've removed and replaced drives a few times. If you have enough open SATA ports or drive connection ports, I would just add the new 20TB to the array and use either the unBALANCE plugin to move the files from the 8TB drive to the new 20TB, or add the Krusader docker and move the files manually (a console sketch is below). The balance plugin is most likely a little safer, but it might take as long as swapping one of the 8TB drives out and letting parity rebuild on the 20TB. Christopher's suggestion in his post above is most likely the easiest; the only reason I might not like that way is you end up doing 2 parity operations: one is the rebuild, then another after you redo the array and move the files manually. A 20TB parity takes you almost 1.2 days, right? My 12TB is 19 hours without my HBA; with my HBA I can get it done in 17 hours.
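If you go the manual route, the console sketch I mean is basically just a disk-to-disk copy; the disk numbers here are made up, point them at your real 8TB source and 20TB target:

    # copy disk to disk, preserving permissions and extended attributes
    rsync -avX --progress /mnt/disk3/ /mnt/disk5/
    # spot-check the copy, then delete the source files before shrinking the array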
  11. I guess I've had this error in my logs for a while too. Not sure how to get rid of it.

Mar 10 06:15:09 NEONAS3 kernel: i915 0000:00:02.0: [drm] *ERROR* Unexpected DP dual mode adaptor ID 7f
Mar 10 06:15:36 NEONAS3 kernel: i915 0000:00:02.0: [drm] *ERROR* Unexpected DP dual mode adaptor ID 7f
Mar 10 06:37:03 NEONAS3 kernel: i915 0000:00:02.0: [drm] *ERROR* Unexpected DP dual mode adaptor ID 7f
Mar 10 06:47:14 NEONAS3 kernel: i915 0000:00:02.0: [drm] *ERROR* Unexpected DP dual mode adaptor ID 3f
Mar 10 07:30:02 NEONAS3 kernel: i915 0000:00:02.0: [drm] *ERROR* Unexpected DP dual mode adaptor ID 7f
Mar 10 08:11:47 NEONAS3 kernel: i915 0000:00:02.0: [drm] *ERROR* Unexpected DP dual mode adaptor ID 7f

I don't have a video card installed, it's just the onboard iGPU, and I stopped using a dummy plug to see if that would stop generating the error. I suspect it's maybe a video driver error, because I have swapped motherboards and CPUs a few times, but I haven't reloaded, just swapped, and I've removed and re-added the video plugins a few times. Anyway, I'm almost ready to swap to another server. I just need to figure out a way to get all my dockers completely duplicated over on my new box. I tried a backup and restore, but it didn't like using the appdata backup to restore to a new box, something about wanting the same server name. I'll most likely have to figure out a different way to do it; my rough fallback is sketched below.
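The fallback is just pushing the folders over myself; the hostname and paths below are placeholders, the containers should be stopped first, and this is only a sketch, not a supported restore path:

    # on the old box, with all containers stopped
    rsync -aH --progress /mnt/user/appdata/ root@newbox:/mnt/user/appdata/

    # copy the dockerMan templates too, so the containers can be re-added
    # from their templates on the new box
    rsync -aH /boot/config/plugins/dockerMan/templates-user/ \
      root@newbox:/boot/config/plugins/dockerMan/templates-user/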
  12. Not sure what this means, but I'm seeing it in my logs every now and then. I don't see any issues on my server; I'm just trying to figure out if this is something I can clean up.

Mar 6 17:26:39 NEONAS3 kernel: i2c i2c-2: readbytes: ack/nak timeout
Mar 6 17:26:39 NEONAS3 kernel: i2c i2c-2: readbytes: ack/nak timeout
Mar 6 17:26:39 NEONAS3 kernel: i2c i2c-2: readbytes: ack/nak timeout
Mar 6 17:26:39 NEONAS3 kernel: i2c i2c-2: sendbytes: error -110
Mar 6 17:26:40 NEONAS3 kernel: i2c i2c-2: sendbytes: error -110
Mar 6 17:27:26 NEONAS3 kernel: i2c i2c-2: sendbytes: error -110
Mar 6 17:27:26 NEONAS3 kernel: i2c i2c-2: sendbytes: error -110

Is this NIC related or something else?
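One thing that might narrow it down is checking what device actually sits behind i2c-2; this is standard Linux sysfs, and the i2cdetect part assumes the i2c-tools package is present (it may not be on stock Unraid):

    # name of the adapter behind bus i2c-2 (often the iGPU's display DDC lines)
    cat /sys/bus/i2c/devices/i2c-2/name

    # list every i2c bus, if i2c-tools is installed
    i2cdetect -l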
  13. On one of my systems I need to upgrade the parity drive and I don't have an open SATA slot. Can I just shut down, pull the old parity drive, put in the new one, and assign it before the array starts? If not, I guess I can add one of my HBAs and do what I've seen suggested in other threads. I just got some larger drives, and the only way I think I can use them is to swap out my smaller parity.
  14. If you can't get that working without a display plugged in, I suggest just getting a dummy plug. I had to do that for one of my builds that used IPMI and an iGPU. Or you can press the motherboard maker for some sort of BIOS adjustment that works. That would be my suggestion, since you have done most of the work and found that it works with a display plugged in. A dummy plug shouldn't cost you much; I got a 2-pack on Amazon for about 9 dollars.
  15. Totally agree, I think it's a Catch-22 there. The hardware devs don't want to spend time on the open source side because they're not getting paid for it, and the community is limited in what it can see to help them. I think I'm just going to wait until I upgrade again, and then use this board to test the Marvell 9172 controller ports in a backup box. So it might be another year, I guess.
  16. Interesting board. I'm not familiar with this new OCuLink connector for the other SATA ports, but that doesn't mean it won't work out fine.

As for your CPU, I understand saving power is something you're trying to achieve, so I'm going to give you a little advice from my experience with the T CPUs. Since Unraid runs on a Linux OS, all your CPU benefits depend mostly on what the OS kernel has been developed for. Even when I ran my older box with an old i7-8700T, the base clock was 2.4 GHz with a boost to 3.4 or 3.8 GHz, but the system never really used the boosted clock range, so my 8th gen CPU just ran at 2.4 GHz on all cores. So I don't know if T CPUs are that good of a choice, at least if you're mostly going to run everything in dockers. It might be different if you ran things in a VM instead, say a Windows VM hosting Plex; maybe that could use the CPU's boost better, but I couldn't say for sure, because I didn't try it. Instead I focused on upgrading to parts with a higher base clock.

If you run with that 13600T, I noticed it has 2 different base clock speeds, 1.8 GHz and 1.3 GHz. Depending on whether the Linux kernel has support for these newer cores and clock rates, the OS might just take the lowest base clock and run all cores at that rate, and 14/20 cores at 1.3 GHz seems slow to me. Someone else might correct me, though. Also, on my T CPU I couldn't overclock or do anything to raise the base clock, and I'll guess you'd have the same experience with the 13600T. So yeah, saving power sounds good, but I would avoid it. It would be nice if someone could validate whether I'm right about the Linux kernel being able to use the newer core and clock features these Intel CPUs are adding. I think you'd be better off with the 12600K; the lowest base clock on that is 2.8 GHz with 10/16 cores. (If you want to see what your cores actually do instead of guessing like me, there's a quick check below.)

I also agree with Lolight: you don't have to build on server grade parts, there are lots of good quality consumer grade options, and ECC is not really as important as some might say. Put it this way: if you're going to set your box up with all your spinners in the main Unraid array, you don't want ZFS for the format, because you'd basically have 3, maybe 4 or 5 drives each running single-drive ZFS, and ZFS format in the main array actually runs like crap. That leaves the pools to run ZFS, where your ECC memory might actually have a little more use. If you're like me, I only use 2 SATA SSDs or NVMEs there, and those pool sizes are 2 or 4 TB tops. I don't see the benefit of ECC RAM for that small a pool, because the system at default is only going to put 8 gigs at most toward a 2TB pool; on my test build it only put 4 gigs, but my test build only has 48 gigs of RAM. Another thing: ECC RAM gets really expensive with consumer CPUs, mostly because they use UDIMM ECC, and UDIMM ECC costs quite a bit more than the RDIMM that all the server grade CPUs use. Also, I noticed ASRock hasn't posted even one ECC memory entry on the QVL for that board. That's a little odd, though it doesn't mean much, since they don't post much and there are always far more modules that work fine; still, no ECC listed at all is not a great sign, and I guess it means they didn't get any free samples to post. I love ASRock Rack boards because of the IPMI option, but if you don't need IPMI either, I would really take a look at more options.
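For the clock speed question, you don't have to guess; these standard Linux commands show what the cores are actually doing:

    # current clock of every core; re-run a few times under load
    grep MHz /proc/cpuinfo

    # min/max the kernel thinks the CPU can do
    lscpu | grep -i mhz

    # the active frequency governor (performance, powersave, etc.)
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor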
  17. Just to let you know, I tried something similar to this for about 5 days last week, and it actually worked just fine. I used a Qnap TS-1283XU-RP (the smallest Qnap work retired) with 8 6TB NAS drives in it, and let the Qnap OS make a RAID 5 array. (I didn't know the QNAP OS very well, so that took me a little bit to understand and figure out, and I still don't know much about it.) This NAS has 2 options for NIC connections, 1 gig or 10 gig; I used both 1 gig NICs for my first test.

Now for the Unraid frontend: like I said, my test hardware is limited, but I ran Unraid on a little Dell Optiplex 7060 Mini with an i7-8700T and 32 gigs of RAM, using a 2TB NVME and one 2TB SATA SSD. I used the Mini to run the dockers (Plex, Radarr, Sonarr, SABnzbd, etc.), and in Unraid I made SMB mounts to the Qnap data share. (The data share had all my folder shares set up the way I needed.)

On the Unraid box I tried 2 ways. The first time I just wanted to keep it very simple, so I put the NVME and the SATA SSD in the main array and didn't waste time with a cache pool. This worked for the most part, but I think long term it's expensive, and it felt strange to me. If I had 2 NVME slots I would have left it alone, but in my case the 2 interfaces were different, so I ended up changing it. Plus NVMEs cost more, and I didn't like the downloads doing all those writes and deletes on one. So I split it: a cache pool with the NVME and a data array with the SSD. I didn't have parity or a mirror on my cache or array, but I did use the Appdata Backup plugin to back up the dockers and apps running on the NVME to my SSD array, and then made a user script to copy that over to a Qnap share nightly and scheduled it. I figured this was good enough, because that's where all the data lived anyway.

So the real question some might ask: what was the SSD doing besides holding your appdata backups? The answer is I wanted the SATA SSD to be my temp download area. When SABnzbd downloads, it writes and unpacks on the SSD drive, and then when it moves the data, it moves into the big data array. I like this idea because SATA SSDs are much cheaper and easier to replace, the I/O over 1 gig NICs is perfect for this, and it's not hard to set up the folder paths in the ARRs to do it (a rough example of the mappings is below). The last step moves the files to the share I'm using for Plex.

In the end this actually worked really well. I just prefer to have all my stuff in 1 box, but if I already had a lower end Synology or Qnap box and just wanted an easier OS as a frontend for my apps, this worked pretty well, much better than trying to run those ARRs plus Plex on a brand name NAS that's also being the storage array. I even streamed one of my 4K movies just fine. Over the 5 days tested I did about 300 gigs in downloads and watched Plex for 70-something hours, and I even had 2 streams going when my coworker watched stuff from my box. (I mostly just watch 1080p, and remote internet connections only watch 720p.)

So if you're not giving up your Synology data array yet, I wouldn't spend a lot making a frontend. I did this with Intel gen 8 hardware and it ran pretty decent. But if you're looking to expand and do something like a game host server along with your media needs, then yeah, your original hardware ideas might be what you need for a beefy frontend. If all you're looking for is a better frontend, you don't need to spend that much; I'll bet you could even find a cheap premade Intel Mini that would do this better than my Optiplex 7060 Mini. Again, my stuff is gen 8, and 10th or 13th gen gear would run circles around it. I will say I had a fun time attempting and creating this; at least I used my old gear to do something interesting.
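For reference, the path side of that SABnzbd setup was basically just two extra mappings, something like this; the pool and share names are mine, and on Unraid you'd set these in the container template:

    docker run -d --name=sabnzbd \
      -p 8080:8080 \
      -v /mnt/user/appdata/sabnzbd:/config \
      -v /mnt/ssdpool/incomplete:/incomplete \
      -v /mnt/user/data:/data \
      lscr.io/linuxserver/sabnzbd
    # /incomplete -> the cheap SATA SSD takes all the download/unpack writes
    # /data       -> completed files land on the big data array share
    # then point the ARRs at the same /data path for the final move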
  18. Somewhat true. Linux is open source, so sometimes vendors leave it up to the community to help build support or improve on it.
  19. This is the chipset on the motherboard: Marvell SE9172, 4 x SATA3 6.0 Gb/s, supporting RAID 0, 1, 5, 10. And here's a link to my mainboard if that helps: https://www.asrockrack.com/general/productdetail.asp?Model=C246 WS#Specifications I don't want to start using it and find out about problems later. I'm planning on redoing my system to use a ZFS data array instead, and if I used that controller for it, I wouldn't want to risk drives falling offline and ending up with issues in my zpool. I just figured that since this has been a known issue for a long time, somebody must have figured out how to improve the driver support by now.
  20. Are Marvell controllers still bad to use in Unraid with array drives? I only ask because my ASRock WS246 board has 4 extra SATA ports on a Marvell 9172 controller. I was trying to trim down and stop using my HBA controller. I could stick with it, since I'm using the onboard GPU from the processor, but it would be nice not to burn one of my PCIe slots just to hook up all the drives. If it's still an issue, I'm kind of surprised; I would have thought something would have improved by now.
  21. Man, nice motherboard choice for an AMD proc. It just kind of sucks they didn't give you more SATA ports; I would have expected better from an ASRock Rack server board. You should be okay adding that kind of adapter card, I think others have used the same one. One question I have, if this is going to be a media server: are you adding an Nvidia GPU card for transcoding? I know they recently got Plex to work with ATI cards in Linux, but I'm sure it's nowhere near as good as Intel QuickSync. Personally, I've been building on much older hardware and using Intel CPUs with an iGPU (the usual docker passthrough for that is sketched below), but AMD can be really good too; it just depends on what you're building. From the sounds of things you're looking to run low power as well, so I think you might want to rethink the setup if you're not planning to use a GPU card. For me, I would look at an older Intel 10th to 13th gen with a similar core count, like an i5 something. I wouldn't really look into 14th gen, because then you get into those new hybrid cores, and I'm not sure anyone has developed the OS to take any benefit from them yet. The performance cores and such sound good in theory, but everyone mostly builds apps and dockers to the OS layer and depends on the OS to handle the new hardware changes. You might try your idea on a build, and then later see how much better ATI graphics transcoding gets; I would expect to wait almost a year before it improves. But again, if you're adding a GPU, then it doesn't matter.
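For reference, the Intel iGPU route I keep building on is just a device passthrough on the Plex docker, roughly like this (share names are placeholders, and hardware transcoding in Plex needs a Plex Pass):

    # pass the iGPU's render device into the container for QuickSync
    docker run -d --name=plex \
      --device=/dev/dri:/dev/dri \
      -v /mnt/user/appdata/plex:/config \
      -v /mnt/user/media:/media \
      lscr.io/linuxserver/plex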
  22. If I use this plugin to back up my appdata, can I use it to restore the appdata to another Unraid server? I'm looking for an easy way to have a backup server ready to go without rebuilding everything separately.
  23. Wow, that seems like a lot of power just to be the frontend server running some dockers and VMs, but I guess it depends on the VMs. From what you said, you have your Synology NAS as your storage array; if you're keeping that for your media storage, you should be okay with your setup.

There are many ways to connect your data to your dockers or VMs. The simple way is just creating SMB mounts with Unassigned Devices. Years ago, when I was getting my feet wet with Unraid, I needed to transition my setup from FreeNAS, and I just mounted all my Plex shares over SMB until I figured out how I wanted to set up my data array in Unraid and copy my data over. It wasn't the best way, and I wouldn't recommend it on 1 gig NICs if you're going to stream 4K video, but it worked fine for me in most cases (at the time I don't think 4K video was even a thing). So that kind of setup depends on your I/O and NIC speed.

For what you've spec'ed out, I would do what you're planning: put the 2 NVMEs in the array and make one parity. You can put your dockers and VMs all in the array, since it sounds like your data is going to live on the Synology NAS. You will want some sort of backup in case parity fails you at the same time the other NVME does. You could find a script to back up the array to an SMB-mounted share, or write one (a rough sketch is below). If you don't want to script anything, and you're not using too much memory for hosted VMs, you could also use one of the many backup dockers to back up your array to your Synology data box. That would be one way of setting things up, and it would work fine until you could build one box that does all of it.

If I were doing your idea, I would run like this for a while until I could build one nice box running Unraid, and when done, find some extra data drive to stick in the Synology box and set up a script to back up the Unraid arrays and power the Synology box off and on. There are many ways to do stuff like that, though. I think this is the only way I could see using a Mini with some NVMEs and Unraid.

On memory: that's a good amount unless your VMs need a lot. If this were me and I just needed a media frontend like you're describing, I could do it with 32 gigs easy; 16 gigs would be tight, but I might be able to pull it off without VMs. Oh, and I would not use ZFS format on any of the array drives; it will just slow things down. With parity you will be in good shape.

Good luck, though. I might try to build something similar with some of my older hardware to get an idea how it works out. I'd just have to sub in something else for the Synology from my old hardware bins; I've got some Qnaps the company retired this year, so I might use one of those in my test case. My mini would be much older, though, only a Dell 7060 with 32 gigs of RAM, but I can make something similar.
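The script idea is only a few lines in the User Scripts plugin. Share names and the mount point below are placeholders; on current Unraid versions, Unassigned Devices puts remote SMB mounts under /mnt/remotes:

    #!/bin/bash
    # nightly push of the backups share to the SMB-mounted Synology
    SRC=/mnt/user/backups/
    DST=/mnt/remotes/SYNOLOGY_backup/unraid/

    # bail out if the remote share isn't actually mounted
    mountpoint -q /mnt/remotes/SYNOLOGY_backup || exit 1

    rsync -a --delete "$SRC" "$DST"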
  24. Well, that must be what is going on then, because I can only see 8 drive bays and the 2 NVMEs on the PCIe card. I have no idea how I could change the backplane to use some sort of HBA connectors; I guess that would be my next attempt. I almost want to grab the 12 bay one, take it home, and try that one instead. The only small problem is I don't have a small form factor video card I can use to get into the BIOS; none of these models have an external display port, so I have to add a video card just to get into the BIOS and change the boot order. Also, on the 12 bay the processor is just a 4-core with hyper-threading, so I would have to move the processors around to make that work. But if I find it's worth it and all the bays work, I might consider it. Thanks for checking that over and letting me know. Now I need to learn where in the log to see what you're talking about, so I don't have to ask so many questions.
  25. Sorry, I was going to start a new thread, but I think that could be a bad idea. Here is the diag. neonas4-diagnostics-20231101-1630.zip