John_M

Members
  • Content Count

    3966
  • Joined

  • Last visited

  • Days Won

    10

John_M last won the day on November 29 2018

John_M had the most liked content!

Community Reputation

264 Very Good

About John_M

  • Rank
    Away for much longer than I expected

Converted

  • Gender
    Male
  • Location
    London

Recent Profile Visitors

2098 profile views
  1. It seems you can: https://forums.unraid.net/bug-reports/prereleases/unraid-os-version-690-beta30-available-r1076/page/5/?tab=comments#comment-11073
  2. There's a typo in that command: a missing ">". Try this: cat /var/log/syslog > /boot/syslog.txt
  3. I've experienced it too with Unraid 6.8.3. I discovered I suddenly wasn't able to click on objects on the Windows VM's desktop. As far as I can tell, it's a noVNC issue because connecting via a Remote Desktop client works normally. I haven't experienced it with Unraid 6.9.0-beta30 though.
  4. I didn't receive my daily Array Status email today. The last one I received was yesterday, before I upgraded from beta25 to beta29. I did receive emails from other servers running 6.8.3, though. I haven't changed anything in Settings -> Notifications, but I see this in the syslog:

     Sep 30 00:20:01 Lapulapu crond[1567]: exit status 255 from user root /usr/local/emhttp/plugins/dynamix/scripts/statuscheck &> /dev/null

     Hopefully, exit status 255 gives a clue.

     lapulapu-diagnostics-20200930-0045.zip
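     Running the script by hand and checking its exit status might narrow it down (the path is taken straight from the syslog line above):

     /usr/local/emhttp/plugins/dynamix/scripts/statuscheck
     echo $?    # 255 here would suggest the failure is in the script itself rather than in cron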
  5. If there's any way to get FREE reported correctly, as it was in beta25, I'd be happy to forgo accuracy in the other two values, but if that isn't possible I'll move on and not mention it again.
  6. Looking at the output of df, it now gets the Used and Avail values and even the percentage correct, so couldn't the Size simply be calculated by adding the two? Or would that fail in the case of a mismatched RAID1? A rough sketch of what I mean is below.
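     A minimal sketch of that arithmetic, assuming the pool is mounted at /mnt/extra (substitute your own mount point); df --output selects just the used and available columns, in 1K blocks:

     # Derive "Size" as Used + Avail from df's own figures (illustration only, not the GUI's method)
     read used avail <<< $(df --output=used,avail /mnt/extra | tail -n 1)
     echo "derived size: $(( (used + avail) / 1024 / 1024 )) GiB"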
  7. I notice that the way SIZE/USED/FREE are reported for btrfs pools has changed since the last beta. I understand that Tom hates this subject because the inconsistency is caused by the btrfs development team's intransigence, and I apologise for raising it again, but I honestly think the situation was better with beta25. Here's what I see on the Main page for my four-disk pool in RAID5 [screenshot], and here's what I see from the command line:

     root@Lapulapu:~# df -h /dev/sdf1
     Filesystem      Size  Used Avail Use% Mounted on
     /dev/sdf1       7.3T  2.8T  2.7T  51% /mnt/extra
     root@Lapulapu:~# df -H /dev/sdf1
     Filesystem      Size  Used Avail Use% Mounted on
     /dev/sdf1       8.1T  3.1T  3.0T  51% /mnt/extra
     root@Lapulapu:~# btrfs fi df /mnt/extra
     Data, RAID5: total=2.77TiB, used=2.77TiB
     System, RAID1: total=32.00MiB, used=224.00KiB
     Metadata, RAID1: total=3.00GiB, used=2.88GiB
     GlobalReserve, single: total=512.00MiB, used=0.00B

     So I have (in round figures) four 2TB disks, which should give me 6TB of usable storage, with 2TB being used for parity. I currently have about 3TB of files and about 3TB free, as df shows, so it troubles me that the GUI suggests I have nearly 5TB free. I realise it's tricky to get all three values to display correctly, but I think the most important figure is the amount of FREE space. If I can only have one of the three values reported correctly, I would very much prefer it to be that one. The SIZE and USED values are really only of secondary importance because running out of space is much more of a problem. I don't mind if the SIZE is shown as 2TB more than it actually is. If an empty filesystem showed 2TB USED (to account for the parity, as though it were file system overhead) I wouldn't mind, as long as the FREE value is reasonably accurate. For that reason I prefer the way it was with beta25. Maybe it was changed to solve some other conflicting requirement (I know that RAID1 pools with unequally sized members can be problematic). I'd be interested to hear other people's opinions.
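     As a back-of-envelope cross-check (an illustration only, not how the GUI calculates anything), RAID5 usable space is one member's worth of parity short of the raw total:

     # Expected usable capacity for N equal members in RAID5
     disks=4; member_tb=2
     echo "expected usable: $(( (disks - 1) * member_tb )) TB"    # prints: expected usable: 6 TB

     For btrfs's own opinion, I believe btrfs filesystem usage /mnt/extra prints a "Free (estimated)" line that allows for the RAID profile.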
  8. In the "Ok how do I re-partition my SSD pools?" section, is there any particular reason why the destination disks should be btrfs-formatted? I've previously used the procedure with XFS-formatted destination disks and it worked well enough. I would think the majority of users choose the default XFS format for array disks, so unless there's a compelling reason, requiring btrfs is an unnecessary headache for them.
  9. Hopefully Asus will fix it with a BIOS update. You ought to close the bug report you made.
  10. I don't use FreeFileSync, so I can't comment on its use. Regarding rsync, the very simple rsync -a /path/to/source /path/to/destination should preserve permissions and ownerships when run as root. See the note below on trailing slashes.
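      One point worth knowing (the paths here are placeholders): a trailing slash on the source changes what rsync copies.

      rsync -a /path/to/source /path/to/destination    # copies the source directory itself into destination
      rsync -a /path/to/source/ /path/to/destination   # copies only the contents of source into destination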
  11. What are the scripts that you're running; how are you invoking rsync? Are they run by the root user? What are the ownerships and permissions on the original files being backed up? Do the first two screen grabs show the situation before and after running the scripts, or not? If not, then the difference could be explained by the fact that the /mnt/user0 path excludes the cache disk or pool from the union, while /mnt/user includes it. In which case, what does "ls -l /mnt/cache" look like?
  12. I replied to your general help thread.
  13. See here: https://superuser.com/questions/298102/how-to-restore-mac-address-in-linux#962806 I'd try the ethtool -P approach, something like the sketch below.
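      A rough sketch, assuming the interface is eth0 (substitute your own) and using a placeholder MAC:

      ethtool -P eth0                                   # prints the permanent (factory) MAC address
      ip link set dev eth0 down
      ip link set dev eth0 address 00:11:22:33:44:55    # use the address ethtool reported, not this placeholder
      ip link set dev eth0 up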
  14. It isn't a general bug, otherwise everyone would be affected, since bonding is enabled by default. You don't need bonding for VMs and dockers, just bridging. From the screenshot, your bond has only one member, eth0, and eth0 has the MAC address 88:88:88:88:87:88. This indicates that you, or the previous owner of your NIC, overrode the unique MAC address it was given at the factory. Some NICs allow you to do that, some don't. It isn't an Unraid feature or obfuscation.
  15. I believe it's a browser "feature". When it happens, the "Check for Updates" button only inverts and responds to clicks if you manage to catch the very bottom edge. I've seen it before and I'm seeing it now on the Plugins page of three different servers running two different versions of Unraid in the Chrome browser, but not in Firefox on the same PC or in Chrome on a different PC. Clearing Chrome's cache will probably fix it, but I want to play a little more before trying that, as it's so difficult to re-create.