sgibbers17

Members
  • Posts: 468
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed


sgibbers17's Achievements

Enthusiast (6/14)

Reputation: 1

  1. I have a few 10-gigabit network cards and would like my unRAID server to act as a network switch. I have seen a few posts on setting up a 10-gig connection between a PC and the server, but those wanted to set up another subnet. I want to keep it all on one subnet and make sure my router can still assign IP addresses to devices connected to the server. Setting up a docker or VM to run something like pfSense or OPNsense could work, but I don't want a full router, just something stripped down that only acts as a switch, without router features I will not use.
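     One way to get switch-like behavior without pfSense at all is to add the extra NICs to the Linux bridge unRAID already uses, so the kernel forwards frames between the ports on the existing subnet. This is a sketch, assuming the default bridge is named br0 and the 10GbE ports show up as eth1/eth2 — check the real names with `ip link` first (the same thing can usually be done from Settings → Network Settings by adding bridge members):

     ```shell
     # Assumption: br0 already exists (unRAID's default bridge) and the
     # extra 10GbE ports enumerate as eth1/eth2 on this machine.
     ip link set eth1 up
     ip link set eth2 up

     # Enslave the ports to the bridge; the server now forwards frames
     # between them, devices stay on the one subnet, and DHCP from the
     # router passes straight through.
     ip link set eth1 master br0
     ip link set eth2 master br0

     # Confirm the ports joined the bridge
     bridge link show
     ```

     Note this is software switching, so traffic between the attached devices does cost some CPU, unlike a hardware switch.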
  2. I am finishing up the testing phase of my surveillance setup and would like to know whether a purple drive, aka a surveillance drive, would be appropriate as a recording target for a docker. These drives were meant to be placed in a standalone NVR or DVR. Will Shinobi record data in the same way, or close enough that it doesn't matter? My cameras have two lower-resolution substreams available to record from, and my plan was to use a purple drive for 24-hour 1080p recording, with 4K motion-detection clips saved to the array via a cache drive. Would a purple drive be beneficial over a NAS drive? They are about the same price. Sent from my LE2125 using Tapatalk
  3. I am weeding through all the disks, and so far I have found 2 drives that I would trust after 3 preclears and an extended SMART test. Most of the drives just show SMART data for UDMA CRC error count and Current pending sector count. So far the pending sectors have gone away without adding to the reallocated sector count. I have read that the UDMA CRC error count isn't indicative of drive failure but most likely a bad cable; I am also making sure that number doesn't increase.

     What rsync command would I use, and how would I set it up to run on a schedule? Is there an rsync plugin that would simplify this?

     He has a really bad backup system as of now. When a drive starts to get full, he buys a new drive, makes a backup folder on the new drive, copies the whole old drive into it, then puts the old drive in a drawer. This has been going on since before SATA was a thing; he also has a drawer full of IDE drives, but we wrote those off. So he has backups of backups of backups, some on drives that are going bad, some where the data has corrupted, and he doesn't even know what is on which drive. He now wants to go through all of his drives and consolidate his data. We have consolidated enough of it for me to have 9 drives to go through and test. Once his new server is up and running, I am going to either set up an off-site backup solution with a third party or rsync the important stuff to my server.
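     For anyone landing here with the same question, a minimal sketch of the rsync side, with the share names as placeholders — on unRAID the usual way to schedule something like this is the User Scripts plugin, which lets you attach a custom cron expression to a script:

     ```shell
     #!/bin/bash
     # Hypothetical daily mirror of one share to a backup share.
     # Substitute your real share names for Media/MediaBackup.
     #
     # -a        archive mode: recurse, keep permissions, times, symlinks
     # -v        verbose listing of what was transferred
     # --delete  remove files from the backup that no longer exist in the
     #           source, so the backup mirrors the share exactly
     rsync -av --delete /mnt/user/Media/ /mnt/user/MediaBackup/ \
       >> /var/log/media-backup.log 2>&1
     ```

     The trailing slashes matter: `/mnt/user/Media/` copies the *contents* of the share, not a nested `Media` folder. A cron schedule of `0 3 * * *` in User Scripts would run it nightly at 3 a.m.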
  4. Let me just start off by saying, I know all about how and where backups should be stored. This is not an ideal setup for backups, but it is far better than it was. I am setting up an unRAID server for my father, who is tech savvy enough to get around a computer; I would consider him a power user, just not to the extent of the people here on the forum. That being said, he knows absolutely nothing about Linux, and I need a simple solution that is a plugin and not a docker; I think a docker would be too complicated.

     What I want to do is set up, for each share, a second share that is hidden, with each main share and its hidden share using different drives on the server. What I need is a plugin to sync everything on the main share to the hidden share. This way, if more drives fail than parity can correct for, the data is still there.

     The reason I need two copies of everything is that my father is cheap: he won't use an off-site backup service, and his current system is just placing drives in a desk drawer, some of them old, bad, or going bad. I am currently weeding through the drives to find the ones that I think will work fine, but there are a couple so far that I do not trust, so I am not going to use those. The drives going into this server are old and could possibly fail at any time. Like I said, my father is cheap, so if a drive fails he will be forced to replace it, but for now this is what I have to work with.

     Is there a plugin that will do this? What I am finding in Community Apps are dockers or tools for off-site syncing, and I just need to sync one share with another (not in real time; once a day would be fine).
  5. Once the data is written to the array, I don't want it moved back to the cache drive. With Prefer, when data gets written to the array because the cache drive is full and the mover runs, the shares set to Yes move off the cache disk to the array, freeing up space on the cache disk, and then the share set to Prefer moves data from the array back onto the cache disk. I want data that gets written to the array to stay on the array instead of moving back and forth.
  6. I have a 500GB NVMe cache disk and currently want one of my shares to use only the cache disk, but once the cache disk is full I can no longer write to it. I see there is a Prefer option, but that will move the data off the array and back to the cache drive when there is free space. And the Yes option will move the data to the array when the mover runs. Neither option is ideal. I would like an option like Prefer, but once the cache drive is full, write to the array and leave the data there instead of moving it back and forth. I use this drive as a scratch disk for my video projects, and 500GB is normally enough, but I fill it completely often enough that it does become a problem. Once my project is complete, I move the files to the array by putting them in another share. Can I suggest it be called "Prefer w/Overflow"?
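     Until something like that is built in, the overflow half can be approximated with a scheduled script. This is only a sketch under stated assumptions — the share name, target disk, and threshold are all hypothetical, and it moves files to a flat directory (it does not preserve subfolder layout):

     ```shell
     #!/bin/bash
     # Sketch of a "Prefer w/Overflow" workaround, not a built-in feature.
     # When the cache disk passes a usage threshold, shift the oldest files
     # of the scratch share onto a specific array disk.
     SHARE="scratch"           # hypothetical share name
     DEST="/mnt/disk1/$SHARE"  # write to a disk path, not /mnt/user, so the
                               # move never routes back through the cache
     LIMIT=90                  # act once the cache is over 90% full

     USED=$(df --output=pcent /mnt/cache | tail -1 | tr -dc '0-9')
     if [ "$USED" -gt "$LIMIT" ]; then
       mkdir -p "$DEST"
       # move the 10 least recently modified files off the cache
       find "/mnt/cache/$SHARE" -type f -printf '%T@ %p\n' | sort -n \
         | head -10 | cut -d' ' -f2- | while read -r f; do
             mv "$f" "$DEST/"
           done
     fi
     ```

     Because the files land on `/mnt/disk1`, they still appear in the same user share, and the mover will leave them alone as long as the share stays cache-only.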
  7. I was able to do a repair install of Windows (reinstalling while keeping my apps and data), re-ran sfc /scannow, and it completed with no errors, but I am still getting the 0x8007003B error. Edit: I re-downloaded the virtio drivers ISO and did a driver update, and it now seems to be working. I'll see if my TV recordings start working and update here. Microsoft must have pushed an update that broke the old drivers I was using, because I had been on the same drivers for many years, ever since unRAID supported VMs, and never had an issue with them.
  8. Yep, that's one of the many posts I read before making my own post about the subject. I even listed the things I tried from that post that frank1940 recommended. To be clear, this is only happening on one of my Windows machines, which is a VM on my unRAID server; all my other machines work fine.
  9. Solution: updating the virtIO network driver fixed my 0x8007003B error.

     So I have been fighting this issue for about a month. I run a Windows 10 VM as my Plex server, which I use as a DVR, and for about a month I have been having issues recording my shows. At first I thought it was a tuner issue, because it would work once in a while, about 10% of the time, and only seemed to work with no more than 2 tuners added to Plex (these are networked HDHomeRuns), but it wasn't working consistently. I was finally able to swap out the tuners and was still having the issue. This morning I made Plex record to a local directory and it worked just fine, but when I tried copying the file to the server I got error 0x8007003B. I searched the forum here and on Google, and I believe I have tried everything, but still no success. This is what I have tried:
     1. Running my AV, including an offline scan during startup
     2. Disabling the AV and firewall
     3. Running sfc /scannow (it did find errors but was unable to correct them, and I am still trying to find out how to fix them other than reinstalling Windows and starting over)
     4. Turning the Windows Search service off
     I might have success with number 3 if I can get rid of the errors. Does anybody know of a way to fix the corruption without starting from scratch?
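     For the sfc errors in step 3: the usual way to repair a component store that sfc itself can't fix, without a full reinstall, is to run DISM first and then re-run sfc, from an elevated Command Prompt inside the VM:

     ```
     :: DISM repairs the Windows component store that sfc pulls its clean
     :: file copies from; a previously failing sfc run often passes afterward.
     DISM /Online /Cleanup-Image /RestoreHealth
     sfc /scannow
     ```

     `/RestoreHealth` downloads replacement files from Windows Update, so the VM needs internet access while it runs.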
  10. There are errors in the log file that I attached, but I do not know what they mean.
  11. I did a 24-hour memtest when I first got all the parts back in September, but I haven't done one since I finished building the server. I'll do one once I get a chance after the parity check.
  12. I just put together my new server a couple of weeks ago and started moving data over to it. But the last week it has been acting funny: sometimes it just restarts, other times it throws a disk error. This time I was unable to access it over the network, and when I went to do a clean reboot it froze for 15 minutes and never shut down cleanly, so now it is doing a parity check again. But at least I was able to get a log file this time, and I noticed that 2 of my CPU cores are pegged at 100%. This is the motherboard I am using: https://www.asrock.com/mb/AMD/Fatal1ty X399 Professional Gaming/index.asp with a 1900X. tower-syslog-20190208-1658.zip Edit: I am running unRAID 6.6.6
  13. I brought up using the other PC because you mentioned that rsync had a preallocation issue when using shares larger than the disk, and you were not sure if MC would have the same issue. I have two, maybe three, shares that are larger than 6TB, so I would have to manually manage what goes to each drive. I just had the idea that if I use the Windows VM to make the transfer, I should be able to move the files at 10GbE speeds.
  14. I mounted an NFS share using Unassigned Devices and then tried using unBALANCE to move data over, but unBALANCE does not detect the NFS share. Is there a way to move the data over without hitting the allocation issue mentioned above? I'd like to just tell the old server to push a share to the new server without using a third PC as a go-between, since that would slow down the transfer.
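     One server-to-server approach that also sidesteps the allocation problem: run rsync on the old server and push over SSH, targeting a specific disk path on the receiving side so you, not the share's allocation logic, decide where each chunk of an oversized share lands. Hostname, share, and disk below are placeholders:

     ```shell
     # Hypothetical example: push a share from the old server straight to
     # the new one over SSH (no third PC in the path). Writing into
     # /mnt/disk2/... on the destination pins the data to that disk, so
     # a >6TB share can be split across disks by running one rsync per
     # subfolder with a different target disk each time.
     rsync -av --progress /mnt/user/Media/ root@new-server:/mnt/disk2/Media/
     ```

     This assumes SSH is enabled on the new server. Because the destination folder name matches the share name, the files show up in the user share as usual once they land on the disk.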