dynamis_dk

Everything posted by dynamis_dk

  1. I do have another few 4TB drives which I could pre-clear and use to rebuild disk1, but those disks are due to become part of a 2nd server I'm building, with the plan of keeping it offsite and only bringing it round once a month. So if I use up a 4TB drive to rebuild disk1, I'd ideally want it back at some point for the new server, and I'm presuming I'd then just be in a position where I'd need to use the New Config tool to remove it down the line, and would be in the same unprotected position until the rebuild completed? As I've added 4TB of storage (I needed somewhere to copy my existing 4TB of RFS data to), I had planned by the end to be able to remove 2x 2TB drives to retire some of the oldest disks, leaving me with 1x 4TB parity, 2x 4TB data disks and 3x 2TB, giving 14TB total, which is the same as I started with. I have about 3.5TB free before starting this process, so that arrangement would work fine, and if I need more space in the future I can upgrade both the main server and backup server at the same time to keep the totals matched. Sorry, I never thought to mention the plans for the 2nd server and what impact they might have on my options. The 2nd server will have a 6TB parity, 1x 6TB data and 2x 4TB data. Sadly disk1 isn't detecting at BIOS level, so I think it's just packed in completely; it makes start-up sounds when powered but isn't detected, and I've tried different cables, ports etc. It's about 10 years old, so it's not done too badly. I think I'll just bid it farewell, as I'm not sure I'd trust it even if it did start detecting again.
  2. Hi, it's been a while, as I've had my server down during decorating for around 6 months. On powering everything back up I had OS upgrades to do, apps to upgrade etc., and as part of that I set up the Fix Common Problems app. On its advice I'm working on converting all my RFS disks to XFS. At the start my parity is 4TB, with one 4TB data disk and the rest (5) 2TB. I installed a new pre-cleared 4TB to let me copy from the existing 4TB, then I'd work down the 2TB drives as follows:
     Disk 7 (new disk, formatted as XFS)
     Disk 4 (RFS 4TB) -> Disk 7 (XFS 4TB)
     Disk 2 (RFS 2TB) -> Disk 4 (now XFS 4TB)
     Disk 1 (RFS 2TB) -> Disk 4 (now XFS 4TB)
     My plan was to replace Disk 1, as it's fairly old, so I moved the data from Disk 2 and Disk 1 onto the 4TB, thinking Disk 1 could be removed and Disk 2 formatted to XFS to carry on the cycle. Towards the end of copying, Disk 1 redballed on me, and as it was copying overnight I didn't notice until the morning, so the copy process completed onto Disk 4 using emulated data. I've checked the hashes of the data copied from Disk 1 to Disk 4 and everything looks OK, so I don't believe it's caused an issue, but I'm not really sure how best to get back to being protected without emulation and carry on the move to XFS. Can I remove Disk 1 completely from the setup, allow it to rebuild whatever parity it needs, then stop the array, change Disk 2 to XFS and carry on? If so, can someone outline the steps please? I've seen a few threads mention using the New Config tool, but I didn't want to start hitting buttons before getting advice on what I'm doing. I've been pretty lucky with Unraid over the years: I've needed to do very little to keep it going, and I've only had a couple of failures, which were straight replacements, but this ease of use has left me with little in the way of troubleshooting/fixing skills.
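The hash check described above can be done with a checksum manifest built on the source and verified against the destination. This is a hedged sketch: the paths and filename below are temp-dir stand-ins for the real /mnt/disk1 and /mnt/disk4 mounts, so the demo is safe to run anywhere.

```shell
# Sketch: checksum-verify a disk-to-disk copy. SRC/DST stand in for
# /mnt/disk1 and /mnt/disk4; the file is a placeholder for real media.
set -e
SRC=$(mktemp -d); DST=$(mktemp -d)
mkdir -p "$SRC/movies"
echo "example payload" > "$SRC/movies/film.mkv"

cp -a "$SRC/." "$DST/"    # the copy step (rsync -a also works)

# Build a checksum manifest from the source, then verify it on the destination.
(cd "$SRC" && find . -type f -print0 | xargs -0 sha256sum) > /tmp/manifest.sha256
(cd "$DST" && sha256sum -c /tmp/manifest.sha256)
```

`sha256sum -c` prints one `OK` line per file and exits non-zero on any mismatch, which is what makes it handy for checking an overnight copy after the fact.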
  3. Awesome, Cheers JorgeB, I'll leave well alone for now and just bookmark this thread just in case
  4. Hi, I'm in the process of giving my Unraid server a once-over, as I've had an old drive die and overall I've been fairly lucky over the years, so I'm figuring out the process etc. I stumbled upon this thread via another user's support thread, where someone noted via the syslog that their SAS controller was on an old firmware, so it made me wonder about mine. I've been able to download the files etc. as outlined in your guidelines, and I've been left with the following, so I know there is a newer update available for my card. Question is: should I? Would you recommend always updating to the latest firmware, or is there an element of 'if it ain't broke, don't fix it'? It's one of the IBM M1015 cards:
     Adapter Selected is a LSI SAS: SAS2008(B2)
     Controller Number              : 0
     Controller                     : SAS2008(B2)
     PCI Address                    : 00:02:00:00
     SAS Address                    : 500605b-0-0474-f290
     NVDATA Version (Default)       : 0f.00.00.05
     NVDATA Version (Persistent)    : 0f.00.00.05
     Firmware Product ID            : 0x2213 (IT)
     Firmware Version               : 15.00.00.00
     NVDATA Vendor                  : LSI
     NVDATA Product ID              : SAS9211-8i
     BIOS Version                   : N/A
     UEFI BSD Version               : N/A
     FCODE Version                  : N/A
     Board Name                     : SAS9211-8i
     Board Assembly                 : N/A
     Board Tracer Number            : N/A
     Finished Processing Commands Successfully.
     Also, does the provided sasXflash version in the Broadcom download depend on the hardware it's for? Your guide mentions sas3flash, but I've got sas2flash and it seems to be working. Many thanks
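For reference, the update itself generally follows the sequence below with sas2flash (and the sas2-vs-sas3 split is as suspected: sas2flash is for SAS2008-generation cards, sas3flash for the SAS3xxx generation). This is a hedged sketch rather than a definitive procedure: the firmware filename 2118it.bin is an example only, and the DRY_RUN guard makes the script print the commands instead of flashing, so nothing is written until you flip it.

```shell
# Hedged sketch of a typical sas2flash IT-firmware update on controller 0.
# DRY_RUN=1 prints the commands instead of executing them.
DRY_RUN=1
run() { if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

run sas2flash -list -c 0             # confirm the current firmware version
run sas2flash -o -f 2118it.bin -c 0  # write the new IT-mode firmware image
run sas2flash -list -c 0             # verify the new version took
```

On a headless array there is usually no need to flash a boot BIOS (the -b option), which also shortens POST.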
  5. So I'm in the UK and the included Virgin Media router you get is pants, so for my VM setup (one of which is a game server) I was thinking of using this as a learning exercise by building up a pfSense VM. Having the rest of the family here (and specifically, with everyone living off the internet during the COVID lockdown) I wanted to give myself some pfSense flexibility without affecting the rest of the home network. My goal is for Unraid dockers and VMs to only go out to the internet via the pfSense VM, so I can control ports etc. and get a few advanced features like OpenVPN config for routing docker traffic, and possibly an option to VPN back into my network for remote admin of the game server. I was wondering if it would work / be secure to use the home router's DMZ option to set pfSense's WAN as the DMZ address and just connect the WAN to my existing house network switch. In my setup I'm passing a 4-port Intel card through to the pfSense VM, but if I were to pass just 2 ports (for WAN and LAN), could I then connect LAN to another NIC which is visible to Unraid, to act as a bridge between pfSense and Unraid, so I can tell dockers/VMs to use that network interface, putting them on the pfSense-managed LAN? Any advice much appreciated here; as you can likely tell, networking isn't my area at the best of times, and throwing Unraid/VMs into the mix is just confusing me a little.
  6. Hi, thanks for the response. My logic was based on this: if I move the data from the faulty disk (currently being emulated from everything else), parity would be recalculated as I move data off the emulated disk, so I could then remove the disk which flagged as faulty and replace it (to get back to the same total array size) with a new one once it arrives. If that theory doesn't hold up (despite having run Unraid for many years, I'm still very much an Unraid noob with things like this) then I think I might just power the server down until a replacement drive arrives and I can get it pre-cleared. I wouldn't say I anticipated it as such, but it had racked up about 50 errors last night, so I was going to look at it further today, but then came to it and Unraid had marked it red and disabled it. I'm doing a bit of home wiring (replacing power sockets etc.) so I've powered everything down for now. Once I'm finished and I can grab the logs, I'll upload them for further guidance later tonight. Many thanks
  7. I've had a disk fail on me this afternoon, and sadly with the state of things at the moment COVID-19 wise, I'm struggling to find suitable next-day delivery for a replacement drive. The rest of the disks have enough free space for me to copy the data from the emulated failed disk onto them, so I'd be protected until I can get a drive delivered and pre-cleared. Is it just as simple as moving everything from the failed disk's /disk2/movies folder into /disk3/movies etc.? Does parity recalculate on the fly, as per a normal write to the shares, so there would be no other steps, just moving the data around?
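Assuming moving data at the disk level is confirmed as the right approach for this case, the mechanics of merging a folder from one disk into the same share folder on another look like the sketch below. The paths here are temp-dir stand-ins for /mnt/disk2 and /mnt/disk3, so the demo is safe to run; one caveat worth hedging on is to stay entirely under /mnt/diskX paths and never mix /mnt/user with /mnt/diskX in the same copy, which is a known way to lose data on Unraid.

```shell
# Sketch: copy, verify, then delete, so an interrupted move never loses data.
# DISK2/DISK3 stand in for /mnt/disk2 and /mnt/disk3.
set -e
DISK2=$(mktemp -d); DISK3=$(mktemp -d)
mkdir -p "$DISK2/movies" "$DISK3/movies"
echo "example payload" > "$DISK2/movies/film.mkv"

cp -a "$DISK2/movies/." "$DISK3/movies/"               # copy, preserving attributes
cmp "$DISK2/movies/film.mkv" "$DISK3/movies/film.mkv"  # verify before deleting
rm "$DISK2/movies/film.mkv"                            # only then remove the source
```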
  8. Thanks JB, I've managed to mount it and copy off some of the bits I've needed. My downloads folder has already been processed and my docker needed recreating anyway from an earlier beta bug so thankfully I'm now back up and running again.
  9. As I'm not looking to use a pool, does setting the number of cache disks to 1 delete the pool config in the background, setting the drive to a single disk? I'll give those recovery options a go to see if I can at least get the current data back. It's not hugely important, as I've got a backup from Feb; I was hoping there might be a repair option which just lets me mount the drive again and set the cache to a single disk with no pool lol
  10. With the whole lockdown situation happening in the UK I was doing a few home improvement jobs which required turning off the power to the house. My Unraid server had a graceful shutdown: I manually turned off all dockers and VMs, powered down via the 'Power Down' button on the Main tab, powered off the UPS, killed the house power and carried on for the afternoon. I've now come back to power up my Unraid again and I'm getting "Unmountable: No file system". Now, I did some testing with an NVMe drive a few months back and, with a lack of understanding on my part, I ended up with a cache pool. I managed to find enough info to get myself back to a working state, so until the reboot today I've been able to use my 500GB SSD as my cache drive, with the NVMe drive just mounted using the Unassigned Devices plugin so I could copy a few files back and forth to check speeds. I've done a bit of digging on the forum to see if I can find guidance on how to fix myself back up again, but I'm a little cautious to do anything without assistance, as I'm very much hoping my data isn't gone. It would seem I've still got a cache pool behind the scenes somewhere. I've seen this posted, so I hope this info starts to help:
     root@unraid:/dev# btrfs fi show
     Label: none  uuid: 49098d04-e56e-4515-81b0-dbca32aa2579
         Total devices 1 FS bytes used 392.00KiB
         devid 1 size 1.00GiB used 174.38MiB path /dev/loop2
     Label: none  uuid: 478f1048-7afe-4109-aa57-974abe73591a
         Total devices 2 FS bytes used 141.54GiB
         devid 1 size 465.76GiB used 147.01GiB path /dev/sdi1
         *** Some devices missing
     I've attached the log from the boot-up to check over: unraid-syslog-20200404-1622.zip
     The only thing I've tried is to stop the array, remove the cache drive, start it up and confirm no errors, stop the array, assign the drive back as cache, start the array and confirm it's still showing as unmountable. Any advice on getting back up and running again please?
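For anyone landing here with a similar "Some devices missing" pool, the usual btrfs recovery attempts look roughly like the sketch below. This is a hedged outline only (device name and mount points are from this thread and are placeholders for your setup), and the DRY_RUN guard prints the commands rather than running them, since mounting a damaged pool writable can make things worse.

```shell
# Hedged sketch: read-only recovery attempts on a btrfs pool with a
# missing member. DRY_RUN=1 prints the commands instead of running them.
DRY_RUN=1
run() { if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

run mkdir -p /mnt/recovery
# First try mounting the surviving member read-only in degraded mode:
run mount -o degraded,ro /dev/sdi1 /mnt/recovery
# If that fails, btrfs restore can pull files off the unmounted device:
run btrfs restore -v /dev/sdi1 /mnt/recovery-out
```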
  11. Windows 10 drag and drop, as it's my main workstation. Is there an alternative copy method which may be worth a look, if that might be my bottleneck? Robocopy, TeraCopy?
  12. Hi guys, I've not had much need to come back to the forums for a while, as everything has been purring away just nicely, but it was time to make a change so I'm seeking advice once again. I've got an ASRock Z97 Extreme4 motherboard, which gives me an M.2 PCIe Gen2 x2 slot to work with, so I've got a Samsung PM961 M.2 256GB drive to play with. I've installed the drive in the motherboard, Unraid has detected it fine, and I've installed Unassigned Devices so I could present it as a share for a few file copies to see how it performs. I'm happy with the initial write speed, as I believe I'm hitting what I should expect from Gen2 x2, but the write speed tapers off after about 8GB written. I've tested the disk with jbartlett777/diskspeed to get a ballpark of the performance, and it shows about 800MB/s from start to finish, so I'm trying to get an idea of why the write performance drops to 120MB/s after about 8GB of transfer. Given this is an unassigned drive, I wouldn't expect to take any write hit like you would writing to the array, so can someone pitch in and give me some thoughts on where to look? Eventually I'd like to move to a 512GB or 1TB SSD cache so I can take advantage of the 10Gb networking and also give a little spring to the step of my VMs. Is it the SSD that's the issue (age, memory type etc.) or something more fundamental behind the scenes? Cheers guys
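That taper pattern (full speed for a few GB, then a hard drop) is consistent with a drive-level write buffer filling up: many TLC-era NVMe drives write into a fast SLC-style cache and fall back to native speed once it is exhausted. A crude way to measure sustained rather than burst throughput is a synced dd run; this sketch writes to a temp file so it is safe to run, and for a real test you would point OUT at a path on the NVMe mount and use a count well past the taper point.

```shell
# Sketch: measure sustained write throughput with dd. conv=fdatasync forces
# the data to stable storage before dd reports, so the figure isn't just
# RAM caching. OUT is a temp file here; on the server use the NVMe mount.
OUT=$(mktemp)
dd if=/dev/zero of="$OUT" bs=1M count=32 conv=fdatasync 2>&1 | tail -n 1
rm -f "$OUT"
```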
  13. I've amended the folder permissions to include 'x', giving 777, and also tried the file and folder both set to 777, but I still can't delete the files. EDIT: I looked into whether it could be the share causing issues rather than file permissions. By comparing with another share which works exactly how I want, I managed to get it working by editing the .cfg file for the share I couldn't delete from.
  14. Has anyone got any ideas on how I can move forward with this? I removed everything on my cache drive, which had all my backups of the apps etc. I ran on 5.x, so starting from a fresh cache drive I reinstalled all my dockers and reconfigured, but I'm still getting the same issue where I can't delete files which have been downloaded. After a Sabnzbd download completes, a script is run against it depending on whether it's TV shows, movies etc., so I'm guessing the script is running as nobody, but I'm afraid I'm a relative Unix noob; all I know is what I've learnt over the last few years from running Unraid.
  15. File permissions within the folder (ls -l):
     -rw-rw-rw- 1 nobody users
     Folder permissions (ls -ld):
     drwxrwxrwx 1 nobody users
     From that I would guess the permissions are set OK on the folder too. The folder is created within the share using the current date (as part of a script run by Sabnzbd), so I'm guessing it's created as nobody due to Sabnzbd running as nobody.
  16. I've taken the plunge and upgraded to 6.1 this week. Overall I'm very happy with everything, and I've managed to get my head around the drive-mapping side of the docker system. Everything is downloading as I expect; however, I've got a slight permission issue when accessing the shares from a Win7 PC, where I can no longer delete the files from within Windows. I've tried the 'New Permissions' fix under Tools, which doesn't seem to have fixed anything. I've tried setting my Sabnzbd config to use 666 or 777 as file permissions, which didn't help. The permissions seem to be set right: from what I can tell reading other threads, the dockers run as nobody and the permissions should be nobody:users, but I'm confused how I get permission for my 'David' account on my PC.
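For what it's worth, the convention those threads describe is directories at 777, files at 666, owned by nobody:users, which is effectively what the New Permissions tool applies. Below is a hedged sketch of doing the same by hand; it runs against a temp tree so it is safe to try, and on a real server the target would be the share path under /mnt/user (the chown needs root, so it is left as a comment).

```shell
# Sketch: apply the nobody:users / 777-dirs / 666-files convention by hand.
# TARGET is a temp stand-in for a share path like /mnt/user/Downloads.
set -e
TARGET=$(mktemp -d)
mkdir -p "$TARGET/tv/2015-09-12"
touch "$TARGET/tv/2015-09-12/episode.mkv"

find "$TARGET" -type d -exec chmod 777 {} +   # directories: rwx for everyone
find "$TARGET" -type f -exec chmod 666 {} +   # files: rw for everyone
# On the real share, also (as root): chown -R nobody:users /mnt/user/Downloads
```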
  17. Sorted, thanks guys. In the end it was:
     telnet into the server
     list the drives using 'df -H' to find which was mounted as /mnt/cache (found out afterwards this is easy to get via the Unraid or unMENU web interface too)
     stop the array (didn't know if this was safe to do with the array up; it might not matter as the cache is unprotected)
     gdisk /dev/[yourdevice]
     at the gdisk prompt: 'p' to list the partitions, 'o' to wipe the partition table and create a new blank one, 'w' to save changes
     reboot the server
     in the Unraid web interface it now shows the drive as 'unformatted', so tick the box (make sure this is the only drive showing as unformatted!) and let it do its format
     Hey presto, full 320GB back. Thanks so much for the pointers guys, I'm very much a Linux noob so much appreciated
  18. Cheers itimpi. It was said above about using fdisk or cfdisk, so I wasn't sure which. So basically: stop the array, make the changes to the cache drive partition, start the array, format it.
  19. So, having kind of just dealt with this small issue for the past 18 months, I've finally thought about doing something about it lol. I've tried to look up a few bits of info on how to delete a partition, but on following a few commands in fdisk I get this output: this is where I fail, as it's not showing a partition in the list, so I'm not sure how I can delete it. Would someone be able to provide something a bit more step-by-step for a Linux noob lol. I've already copied the data over to another drive for backup. After deleting the partition, at what point does Unraid prompt me to let it format the disk?
  20. I've been moving my kit around, as the space under my desk was in silly need of a rearrange, so I've built a bit of a rackmount case to house the UPS and switch. As part of this I had to power down the Unraid box to move it out of the way, and on reconnecting / powering back up I've checked the log again and this time there is no instance of the error, so I think I'll just need to keep an eye out. Thanks Joe L. I think in this case I'll live with it then, as all my downloaders are working and I don't use the cache drive as an actual 'cache', since I do very little copying to the Unraid box. At some point early this year I'm going to look at replacing this cache with a 240/256GB SSD, so for now I'll live with it.
  21. Thanks for the info. It's looking like I'm pretty much sorted. I've got my new drive in and I've done a data rebuild, parity check, upgrade to the latest v5 stable, another parity check, and everything has come back error free. Also, all my apps in the cache drive /.custom area are working without issue (other than a few permissions issues, but easily fixed). I'm running another pass of the preclear on the 3TB drive which will become the new parity, so once this finishes (maybe tomorrow) I'll add the new parity and should be good to go for the foreseeable future. I do have the following highlighted red in unMENU's syslog:
     Dec 25 02:41:14 unraid kernel: ata17.00: exception Emask 0x32 SAct 0x0 SErr 0x0 action 0xe frozen (Errors)
     Dec 25 02:41:14 unraid kernel: ata17.00: irq_stat 0xffffffff, unknown FIS 00000000 00000000 00000000 00000000, host bus (Drive related)
     Dec 25 02:41:14 unraid kernel: ata17.00: failed command: READ DMA EXT (Minor Issues)
     Dec 25 02:41:14 unraid kernel: ata17.00: cmd 25/00:50:0f:ee:d0/00:00:53:00:00/e0 tag 0 dma 40960 in (Drive related)
     Dec 25 02:41:14 unraid kernel: res 50/00:00:5e:ee:d0/00:00:53:00:00/e0 Emask 0x32 (host bus error) (Errors)
     Dec 25 02:41:14 unraid kernel: ata17.00: status: { DRDY } (Drive related)
     Dec 25 02:41:14 unraid kernel: ata17: hard resetting link (Minor Issues)
     Dec 25 02:41:15 unraid kernel: ata17: SATA link up 3.0 Gbps (SStatus 123 SControl 300) (Drive related)
     Dec 25 02:41:15 unraid kernel: ata17.00: configured for UDMA/133 (Drive related)
     Dec 25 02:41:15 unraid kernel: ata17: EH complete (Drive related)
     Anything to worry about?
  22. Thanks for clearing that up Joe L. I've run another parity check overnight and everything has come back clear with no errors. I've also attached another syslog just in case. I think I know what the issue is with the cache drive, which is giving me:
     Dec 23 10:03:15 unraid kernel: sdb1: rw=0, want=625142384, limit=312592707 (Drive related)
     I'm pretty sure when I moved from a 160GB to a 320GB cache drive I just cloned the smaller drive to the larger one, but it never expanded the partition to the whole disk, and I had honestly forgotten about it. Is the easiest way to resolve the issue to back up all the addons running from the drive and format it again, or is there some command-line tool I can use to expand the partition out to use the full available size of the disk? syslog-2013-12-24.txt
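On the command-line route: since the clone kept the old 160GB partition table, the usual fix is to grow the partition to the end of the disk and then grow the filesystem into it. This is a hedged sketch only: the device and partition number are placeholders you must confirm first (fdisk -lu shows the current layout), it assumes a reasonably recent parted with resizepart and a ReiserFS cache (resize_reiserfs), and the DRY_RUN guard prints instead of executing because a mistake here destroys the data.

```shell
# Hedged sketch: grow a cloned partition to fill the disk, then grow the FS.
# /dev/sdb and partition 1 are placeholder assumptions - verify with
# 'fdisk -lu /dev/sdb' before doing this for real. DRY_RUN=1 only prints.
DRY_RUN=1
run() { if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

run parted /dev/sdb resizepart 1 100%   # extend partition 1 to end of disk
run resize_reiserfs /dev/sdb1           # grow the filesystem to fill it
```

Older parted versions lack resizepart; the classic alternative was deleting and recreating the partition in fdisk at the exact same start sector before resizing the filesystem.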
  23. With it being the cache drive I may just back up everything on it and format it again. I'm toying with the idea of putting a 120GB SSD in there instead, since the addons keep it spinning all the time and the SSD should speed up file operations for extracting and PAR checks etc. Now my data rebuild has finished, it says 'Parity - Last checked on 12/23/2013 8:01:57 PM, finding 0 errors'. Does this mean it automatically runs a parity check after the rebuild, or is this just the time the rebuild finished? I've attached the syslog from booting with the new HDD through to how things stand now... syslog-2013-12-23.txt
  24. Thanks BobPhoenix, you just beat me to it on the 3TB front. I have another small issue I've noticed in my syslog:
     Dec 23 10:03:15 unraid kernel: attempt to access beyond end of device
     Dec 23 10:03:15 unraid kernel: sdb1: rw=0, want=625142384, limit=312592707 (Drive related)
     This is my cache drive, which is just used for addons, and having read over a few threads on similar issues referring to gdisk, I've noticed someone mention the 2.2TB drive limit on 4.7. Thank you for the advice on parity checks during the process; I figured it would be sensible to do something similar but wasn't sure when was the right time to check parity. My media is all video and music; although I'd cry if I lost everything, it's not so important that I keep a full backup or hashes to compare with, so it's parity checks only for me.
  25. Oh yeah, cheers Joe L, I didn't even spot that. Lol, this is why I always post up when I'm having issues! Well, I've done 3x preclear passes and it's come back all good on my new 2TB Red disk, so it's installed, and I'm about 75% of the way through my disk data rebuild, so it should be done in a couple of hours. I'm looking to move to the latest version 5 stable build very soon, so is there anything I should be looking to do from a maintenance point of view before the move? I've also got a precleared, ready-to-go 3TB drive to use as my new parity disk, but I'm not sure if it's best to upgrade to the v5 software first or put the new parity in. My last complete parity check was in the first month and came back without errors. I presume when I upgrade the parity drive it will be rebuilt from fresh using the data on the drives, so is there anything special I should do to check the data disks? I've already checked SMART on them all and everything looks good with the other drives in my system, and I've not experienced any corruption or data loss (that I've noticed) as a result of the bad-sector reallocation business which sparked this all off.