Everything posted by Marshalleq

  1. This is covered all over the place, including in the Unraid forums and SpaceInvaderOne's videos. I think it's a power-saving mode present in today's power supplies that gets activated, and you just turn it off via the AMD BIOS settings. I also found that putting my RAM back to stock speed helped a specific issue I was having. BTW it was C-states, not P-states.
  2. No I didn't, and it's bloody awful isn't it? I'm really not sure why it's happening either. Did you happen to experiment with PCI passthrough both by stubbing the device and by adding it via the XML in the VM? I seem to remember it started when I did one of those but not the other, though removing either / both didn't fix the issue.
  3. Well that's very nice too! Thanks, I will give it a try.
  4. Well that's very nice. I'm thinking BTRFS would be better suited to my Lightroom catalogues and the like, which are lots of small files. Do you have any idea how I do the reformat? Logically I should be able to format it live (or perhaps with the array stopped) and have parity pick up those changes in real time, but I'm betting I'll have to do a new config with the newly formatted drive included? (It currently exists as XFS, but with no data on it.)
  5. I've run an 1800X as one with 16GB of RAM. It's essentially the same thing and it works a treat. You've just got to set typical current idle in the BIOS and turn off the P-states. Interestingly, when I upgraded to Threadripper I didn't have to do that. It's been a while and I may have gotten the terminology wrong (tired right now), but you get the idea; there are plenty of posts about it. Those two things make the system stable, which it is not without them. One day, hopefully, it will be fixed in a kernel upgrade. So honestly the answer is that yes, it's not only a good system, it's a great system and well worth it. Go for it!
  6. I've not seen the video and I'm not planning on losing however many minutes of my life I'd lose by watching it - sometimes I do wish instructions were written so I could skim-read - nevertheless videos do have their place! Anyway, enough about that - what you're attempting to set up is quite involved, so I'd start with the basics. The basics to me would be that you can ping unraidseven.duckdns.org from an externally connected device. Bear in mind you've got Cloudflare's proxying turned on in the screenshot, which hides your real IP. You might like to turn it off for testing purposes (click on the orange cloud once) and give it some time for the change to propagate. I'm not actually sure what svnprx.me is; it doesn't seem related to anything you've got in Cloudflare, so perhaps have a look at that. You've just got to trace the basics back first: make sure your DNS is working before you start playing around with certificates, reverse proxies and whatever 'DNS verification' is - I assume it's some form of secure DNS, but honestly don't know. So maybe post some findings from there. Good luck.
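  For the DNS basics, this is a rough sketch of the checks I mean, run from an externally connected device (the hostname is taken from your screenshot - swap in your own):
  # nslookup unraidseven.duckdns.org
  (Does the DuckDNS name resolve at all? With the orange cloud turned off it should return your real WAN IP)
  # ping -c 4 unraidseven.duckdns.org
  (Does the resolved address actually answer from outside your LAN? Note that a firewall may block ping even when everything else is fine)
  Only once that looks right would I move on to the certificate / reverse proxy layer.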
  7. Actually, I was surprised Unraid did not have this time-tested feature. I personally think a hot spare capability would still be of benefit in Unraid. Even though you only lose one disk's worth of data in Unraid, it is really about the risk of losing another disk: once one disk is gone you can rebuild it, but not a second. Likelihood of more than one disk dying close together? Well, more likely the more disks you have, and yes, it does happen. One advantage of a hot spare, particularly for smaller builds, is that you could do away with the performance impact of dual parity and still have cover against a second disk dying - a risk which is heightened once one disk has already died, e.g. due to the extra load and heat from constantly reconstructing the failed disk's data from parity. It's a really great feature, and Unraid is the first redundant system I've ever seen that doesn't have it.
  8. I'm assuming the parity works at the block level, so it doesn't care about what is actually on each disk? To that end, could I reformat one of my blank XFS disks to BTRFS? Thanks.
  9. APFS is a containerised file system - I think you mean SMB. Apple still supports AFP, however it's clear it is moving to SMB. AFP still works and I imagine it will for some time, but new features and development will likely go into SMB. And yes, that's the Windows networking protocol, now being used by Apple.
  10. I came here because I'm experiencing something similar. The Unbalance app seems to be much, much slower than it should be. I have come to the conclusion there is something wrong with it, or some ballooning error that happens with long transfers. When it eventually finishes its large file copy (shifting data between enterprise disks, currently scheduled to take 14 hours for the remaining 1.6TB), I will do it by command line. I've done all sorts of large copy operations on this system and only since using Unbalance has it been slow. Also, for @rclifton: I note that the speed the Unbalance plugin reports does not match the actual transfer speed of the drive as reported by Unraid - clearly the speed field in Unbalance is taking all sorts of things into account and averaging them out. But, this being my first time doing a whole-drive cleanup with the Unbalance plugin, I note it slowed way down after a couple of hours. Originally, based on its then-current transfer speed, it was going to take 5 hours for a 3TB copy; however it's now been running for 20 hours and it's only done 1.4TB. Some observations that seem odd to me: at times the target disk is reading and writing at the same speed, 75MB/s in both the read and write columns, while the drive it's copying from is only running at 10 or 20MB/s, sometimes less. Other behaviour that seems odd is that it cycles between reading from the source drive (and not writing to the target), then not reading from the source drive and writing to the target - so it's like it's copying to a buffer somewhere, which I'm sure is not normal for an ordinary move or copy operation. That all said, I accept I don't know lots about how Unraid operates and perhaps I don't understand something, but it in no way feels normal. I did have some custom disk tuning set up which gave me larger write speeds; I've reset that to defaults, but it hasn't helped. obi-wan-diagnostics-20190705-2317.zip
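  When I do switch to the command line, the plan is roughly this (a sketch only - the disk numbers are just examples, adjust for your own setup):
  # rsync -avh --progress /mnt/disk3/ /mnt/disk5/
  (Copies everything from disk3 to disk5 preserving attributes, with per-file progress; verify it completed cleanly before deleting anything from the source)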
  11. Well, one would think that Unraid's / linux's mounting of the exfat system would be enough to tell the OS to ignore file permissions. However when you're running a tool that is explicitly trying to change permissions on a target file system that simply does not support ANY permissions, it's hard to blame Unraid. I'm not sure, but it's possible there's an extra mount flag that isn't being run that would get around this issue - which would have to be a feature / bug request with unraid. Perhaps you could try that.
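  If you want to experiment with that yourself, this is roughly the kind of mount option I mean (assuming the exFAT driver in use accepts them; the device and mount point are made up for the example):
  # mount -t exfat -o uid=99,gid=100,umask=000 /dev/sdX1 /mnt/disks/exfat_test
  (exFAT can't store Unix permissions, so these options just set the apparent owner and mode for everything on the mount - uid 99 / gid 100 being nobody:users on Unraid)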
  12. Awesome, I have to admit, I do love how flexible rsync is - definitely check those errors though, there shouldn't really be any. Good luck!
  13. Well, exFAT as far as I know doesn't support permissions. My first guess (and some small memory of having done something similar years ago) is that this is part of why. Though in that case the files did copy and the error was informational. I can't tell with your rsync logs, but the MC error certainly doesn't seem to be merely informational. rsync does have options to skip permissions - I think it's --no-owner, --no-group and/or --no-perms. And if you're using archive mode (-a) with rsync, maybe don't, as that explicitly copies permissions. I can confirm it doesn't happen on my UD drives, though all of those are either XFS or BTRFS. Edit: if it really becomes a problem, you could just copy to the network share within Unraid. Internal Unraid network copies run at 10Gb so not too bad, and the share would get around the permissions problem I think.
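  Roughly what I mean, as an untested sketch (the source and target paths are just examples):
  # rsync -av --no-perms --no-owner --no-group --progress /mnt/user/someshare/ /mnt/disks/exfat_drive/
  (Archive mode, but with rsync told not to try to set permissions, owner or group on the exFAT target, since it can't store them anyway)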
  14. Does anyone know if there is a way to get disk I/O stats for attached unassigned devices? I'd really like to be able to see it in real time so I can gauge the amount of writing going to my attached SSDs. Thanks.
  15. Yeah, I'd combine the transparent proxy with an extra VLAN and route that whole VLAN through the proxy. Should be possible. Thinking it through, if you have a semi-decent firewall you could actually do this on the firewall. That would have the advantage of being available to the whole network without having to configure proxies on anything (you'd just, e.g., join the relevant network). Not as simple as the VPN docker container you've got, though.
  16. I can't remember what it's called now, but there is a way of displaying free memory that doesn't really show you the free memory - i.e. it shows up as used when actually it's cached or similar, but still available when needed. So it possibly depends on what you used to show the free / used memory - the Unraid GUI or something else? Also, depending on how you assigned the memory to the VM, it's either dedicated or ballooned. If you put 10G in both fields of your VM config, I believe that's dedicated; what it sounds like you want is shareable memory (like most other VM servers do by default), which means you need to put in a range. Also, you need to ensure the balloon driver is installed in the guest. Hope that helps.
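  For what I mean by "a range", this is roughly the relevant part of the VM's XML (the numbers are just example values, not anything from your config):
  <memory unit='KiB'>10485760</memory>
  (The maximum the VM can balloon up to - 10G in this example)
  <currentMemory unit='KiB'>4194304</currentMemory>
  (What it actually gets at start - 4G here; the balloon driver in the guest lets it grow toward the maximum)
  <memballoon model='virtio'/>
  (The balloon device itself, under <devices> - the matching virtio driver needs to be installed in the guest)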
  17. Hey, (warning this is not a fully formed thought, but..), you can force applications that don't have proxy capabilities to use a proxy by setting up a transparent proxy. I'd have a read up on that, it could be an option added to the deluge-vpn docker depending on what that uses. In the days of slow connections I set this up using squid a few times. Maybe it helps.
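  As a very rough sketch of the squid-era idea (untested here, and the interface / port are placeholders):
  # iptables -t nat -A PREROUTING -i br0 -p tcp --dport 80 -j REDIRECT --to-port 3128
  (Quietly redirects outbound HTTP from that interface to a local squid listening on 3128 in intercept mode - the application never knows a proxy is involved, which is the whole point of "transparent". HTTPS makes this much messier these days, which is partly why it's not a fully formed thought.)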
  18. Yeah, I'm not going to bother to post on this any more, this is a stupid conversation and I like to keep it friendly for the most part. But I will point out you're blaming me again, which is exactly why I wrote that. I knew this would happen. Can't win. But no more posts from me, have a nice day.
  19. If you read my above posts you'd see I explicitly stated I wasn't seeking help to work out those kinds of details. I accepted they were dead. I did this predicting the coming opinions about how I couldn't be helped, and sure enough, others felt it necessary to tell me off for not posting my diags. So sometimes I feel it necessary to reply and keep the game even. Not everyone is looking for detailed step-by-step support.
  20. Yeah, that's what I did when I first set this up - or at least what I think I did. I remember being a bit annoyed that it didn't look right, but never coming back to it until now, when I realised it really wasn't working. I don't think it would be too easy to get wrong via the GUI either, yet somehow it was. A good education for me, and I prefer the command line anyway. I'll edit my post to say: make sure you read Jonnie Black's comment after this post in case it works for you (quicker).
  21. I just noticed I got the title wrong too - supposed to be RAID1 lol. **Make sure you read Jonnie Black's comment after this post in case it works for you (quicker)**. So for the benefit of others, I did a quick google and sorted it out in the command line. Someone else who's familiar with btrfs can critique my console fu if they like:
  # btrfs filesystem show
  Label: none  uuid: 0dfcf92d-3b3b-4328-b264-0ed8641019f7
      Total devices 1 FS bytes used 133.64GiB
      devid 1 size 465.76GiB used 139.02GiB path /dev/sdd1
  Label: none  uuid: a3cc2ec9-de47-4142-92ac-a9c8905ad4d0
      Total devices 1 FS bytes used 3.75GiB
      devid 1 size 50.00GiB used 4.52GiB path /dev/loop2
  Label: none  uuid: 37f59ca5-8f97-4193-807c-82e4f9e81f31
      Total devices 1 FS bytes used 128.00KiB
      devid 1 size 465.76GiB used 20.00MiB path /dev/sdb1
  (So this shows there are three btrfs devices on my system, though one is a loopback which I think is something to do with metadata.)
  # btrfs filesystem show /mnt/cache
  Label: none  uuid: 0dfcf92d-3b3b-4328-b264-0ed8641019f7
      Total devices 1 FS bytes used 145.42GiB
      devid 1 size 465.76GiB used 153.02GiB path /dev/sdd1
  (This shows the cache pool really does only have one device assigned to it, despite what the Unraid GUI thought - bug?)
  # mount /dev/sdb1 /mnt/test
  # ls /mnt/test
  (There was no data)
  # umount /mnt/test
  # mount /dev/sdd1 /mnt/test
  # ls /mnt/test
  (There was the expected data - so this proved the second device that was meant to be in the pool had nothing on it)
  # mount
  /dev/sdd1 on /mnt/cache type btrfs (rw,noatime,nodiratime)
  (Yep, double confirming it's the btrfs device)
  # btrfs device add /dev/sdb /mnt/cache -f
  Performing full device TRIM /dev/sdb (465.76GiB) ...
  (-f forces the sdb device to be added to the pool despite it already having a file system on it - be very careful, as if you get this wrong you will lose your data; that's OK for me because I have already confirmed above that it's empty)
  # btrfs filesystem balance /mnt/cache
  WARNING: Full balance without filters requested. This operation is very intense and takes potentially very long. It is recommended to use the balance filters to narrow down the scope of balance. Use 'btrfs balance start --full-balance' option to skip this warning. The operation will start in 10 seconds. Use Ctrl-C to stop it. 10 9 8 7 6 5 4 3 2 1
  Starting balance without any filters.
  Done, had to relocate 177 out of 177 chunks
  (This is needed to turn it into a true mirror - according to the docs it will rebalance (stripe) all the extents across all the disks now in the pool. I'm not sure about the warning though, and I always worry about doing these write-intensive workloads across SSDs, especially low-quality ones like these Samsungs that aren't made for heavy writes. Someone else can probably chip in here. The whole process on these SSDs took about 5 minutes, so not too bad.)
  # btrfs filesystem show /mnt/cache
  Label: none  uuid: 0dfcf92d-3b3b-4328-b264-0ed8641019f7
      Total devices 2 FS bytes used 118.06GiB
      devid 1 size 465.76GiB used 54.03GiB path /dev/sdd1
      devid 2 size 465.76GiB used 66.00GiB path /dev/sdb
  (I then realised I had to convert to RAID1, using the command below, because btrfs defaults to RAID0, which is not what I want. That would explain the differing "used" sizes above, and you can see it in the first screenshot below - it shows a 1TB pool made out of two 500G drives rather than a 500G RAID1 mirror. So I needn't have run the first balance command. I will be impressed if it actually CAN convert a bigger stripe to a mirror live and on the fly.)
  # btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
  Done, had to relocate 121 out of 121 chunks
  Yep, it converted it - that's impressive. And a screenshot showing it's now 500G. Well, I'm right (write?) impressed! A whole lot easier than mdadm, though SSDs obviously make it a whole lot quicker, so that could be swaying me. If this is what we're in for in the future with the departure of spinning disk, I'm excited. And all done online, without taking down / unmounting the drives. Just wow. Thanks for reading!
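  One extra check worth running (I didn't capture it above, so this is just for completeness) to confirm the data and metadata profiles really did change:
  # btrfs filesystem df /mnt/cache
  (After the convert, the Data and Metadata lines should both report RAID1)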
  22. Yeah, so as per the title, I just realised this drive never spins up. I checked the power-on hours and it says it's only been powered on for 5 minutes, yet it has been in the cache pool for quite some time. You can see below that it's part of the cache pool, and while I'm writing this it's just spun down and greyed out. I'm not an expert with btrfs - do I need to do some kind of rebalance to get it to work?
  23. Yes, however I'm not really wanting to get into the whole proving-they-died thing; they died and I don't need help to fix them. I'm just posting my experience in case someone else ends up saying, "Hey, me too, that's weird" and it turns out to be some unusual virus - clearly all of that's extremely unlikely. Though I can add this: the Samsung SM863 is now not visible to SMART under Unraid / somewhat visible under Ubuntu in a USB caddy (go figure), says it is an unknown model, thinks it is 1GB instead of 1TB, doesn't have a partition table, won't take a new partition table in fdisk, and really probably shouldn't have died as it's nowhere near its write capacity as far as I know - no SMART errors, then just poof. The second, Intel, drive is an older one, so that's not entirely surprising; nevertheless, coincidences abound! The system is on a UPS. If the SM863 had a firmware tool it would be worth trying a refresh, but I've tried that in the past and never did manage to find firmware, and the typical Samsung SSD tools don't cover it either. Dead it is. No, dead they are. I'm scared to reboot now - my other 2 SSDs could pork it also!
  24. OK, so yesterday morning my main SSD, an enterprise Samsung, surprisingly died. That sucked, because it's expensive and was not in a RAID, only backed up, which has taken me several days to sort out. So I had a spare Intel SSD lying around (granted, not the newest), which I installed yesterday (different power cable, different data cable). It had been going fine, but after a reboot it's now dead too. Coincidence? Unlucky? I'm off to buy an expensive Intel DC SSD tomorrow, and now I'm worried it will die too lol. But I can't think of any reason why anything in Unraid would harm an SSD - the second one wasn't even doing any I/O. A *nix virus? I wouldn't think so. Has anyone else got any thoughts, besides the obvious one that I'm paranoid, unlucky and it's all a coincidence? These things ain't cheap! Edit - both mounted in Unassigned Devices.