
pwm

Everything posted by pwm

  1. I notice CA also uses upgradepkg - but with --install-new, so upgradepkg will just jump directly to installpkg - or exit if the new package file doesn't exist.

     <!-- The 'source' file. -->
     <FILE Name="/boot/config/plugins/&name;/&name;-&version;.txz" Run="upgradepkg --install-new">
       <URL>https://raw.github.com/&github;/master/archive/&name;-&version;.txz</URL>
       <MD5>&md5;</MD5>
     </FILE>

     It seems the download manager doesn't continue if the download fails - so it never processes the "Run" command, and neither installpkg nor upgradepkg runs. But the pre-install code in /boot/config/plugins/community.applications.plg performs file removal before the download of the new version. It is a bit confusing that the *.plg file runs and performs a cleanup before the result of the *.txz download is known. The plugin never gets a chance to perform any cleanup once it knows whether the new install will work. It would be better if the plugin performed the download without any Run command, and then, based on the download result, decided whether any cleanup is needed before upgradepkg --install-new is run.
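     The safer ordering can be sketched like this - a minimal Python sketch for illustration only, not the actual plugin code; the names safe_update, cleanup and install are invented:

     ```python
     import os
     import tempfile
     import urllib.request

     def safe_update(url, dest, cleanup, install):
         """Hypothetical update order: download first, clean up and
         install only after the download has succeeded. The old
         version stays untouched if the download fails."""
         fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest) or ".")
         os.close(fd)
         try:
             urllib.request.urlretrieve(url, tmp)  # raises on failure
         except OSError:
             os.unlink(tmp)        # nothing removed, nothing installed
             return False
         cleanup()                 # now safe: the new package is on disk
         os.replace(tmp, dest)     # atomic rename to the final name
         install(dest)
         return True
     ```

     The point is simply that the cleanup step only runs on the success path.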
  2. It sounds like you are saying that /sbin/upgradepkg is generally not used when updating plugins.
  3. That mitigates the problem but doesn't solve it. A user may have that page open for quite some time before deciding to update plugins, in which case the version displayed may still differ from the latest version available on the server. Somewhere down the line, whatever script is used needs to verify that the new version is available before the uninstall happens. That's the only correct way of doing an update. I haven't used Slackware regularly for many years, but correct usage is that the installpkg script is called with a package file as parameter - and that file is expected to exist. If installpkg is called without an existing package file, it will exit. When updating a plugin, installpkg is expected to be called from another script - upgradepkg - that is responsible for removing unused code. And upgradepkg itself verifies that it receives the package names it needs - the old and the new. It really is irrelevant who has written the code - but down the line, installpkg should only be called if the package exists. And the contents of the previous package version should not be removed unless the system has the files for the new package. Does unRAID have a document showing the exact steps performed when a plugin is updated?
  4. It may not be a bug in CA, but it is a bug for an update manager to uninstall a previous version before having successfully retrieved the new version. Your app should never have to bother about whether version x specifically exists on github. If it does, then the update should work. If it doesn't, then the update should stop before the uninstall step, in which case the system gets a chance to continue to monitor github for yet newer versions. Your mobile phone would never uninstall the previous version before it has retrieved and validated a new version.
  5. I haven't checked the specification of the UPS, but if the UPS has 650 in the name, then 650 is likely the capacity in VA - not in W. So it's likely that both plugins show correct load values: one showing the load as 253 VA, the other showing the load as 203 W. The VA value is almost always higher than the W value, because they can only be the same for a perfectly resistive load.
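     The two readings are consistent: their ratio is the power factor, and roughly 0.8 is a perfectly plausible value for a mixed load. A quick check:

     ```python
     # The same load reported in two units by the two plugins.
     va = 253.0       # apparent power (VA) from one plugin
     w = 203.0        # real power (W) from the other plugin
     pf = w / va      # power factor: 1.0 only for a purely resistive load
     print(f"power factor = {pf:.2f}")   # prints: power factor = 0.80
     ```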
  6. Yes, after I had made my error report, I picked up the git link from the first post of this thread and did a normal install. I just wanted to report that the app was able to uninstall before verifying that it could download the newer version. But from what you are saying, this is a bug in unRAID itself that should be fixed - the new version should be downloaded with a temporary file name, and no uninstall should be allowed to happen until the new file has been downloaded ok.
  7. Note that the parity drive doesn't have any file system, so no file system conversion is needed for that drive.
  8. I attempted an update on Jan 1st from version 2017.12.20 to 2017.12.31. I didn't notice exactly what it wrote on the first update attempt, but it didn't perform any update. What it did was uninstall the previous version - I have no directory /usr/local/emhttp/plugins/community.applications. The file /boot/config/plugins/community.applications.plg is for version 2017.12.20. Pressing the update button again fails with:

     plugin: updating: community.applications.plg
     Cleaning Up Old Versions
     plugin: downloading: https://raw.github.com/Squidly271/community.applications/master/archive/community.applications-2017.12.31.txz ... failed (Invalid URL / Server error response)
     plugin: wget: https://raw.github.com/Squidly271/community.applications/master/archive/community.applications-2017.12.31.txz download failure (Invalid URL / Server error response)

     I didn't immediately notice that I was missing the application - I thought it was just a temporary issue with the repository. Then I noticed I no longer had the extra tab for applications. The log from my update attempt doesn't show anything special - I did try twice, and I'm pretty sure the popup dialog showed extra text the first time.
     Jan 1 05:45:21 n54l-3 emhttp: cmd: /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin update community.applications.plg
     Jan 1 05:45:21 n54l-3 root: plugin: running: anonymous
     Jan 1 05:45:21 n54l-3 root: plugin: running: anonymous
     Jan 1 05:45:21 n54l-3 root: plugin: creating: /boot/config/plugins/community.applications/community.applications-2017.12.31.txz - downloading from URL https://raw.github.com/Squidly271/community.applications/master/archive/community.applications-2017.12.31.txz
     Jan 1 05:45:34 n54l-3 emhttp: cmd: /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin update community.applications.plg
     Jan 1 05:45:34 n54l-3 root: plugin: running: anonymous
     Jan 1 05:45:34 n54l-3 root: plugin: running: anonymous
     Jan 1 05:45:34 n54l-3 root: plugin: creating: /boot/config/plugins/community.applications/community.applications-2017.12.31.txz - downloading from URL https://raw.github.com/Squidly271/community.applications/master/archive/community.applications-2017.12.31.txz

     Edit: Forgot to mention that I run 6.3.5.
  9. Read slower or read twice. 1) Steam was an example. There are no games you are interested in that you haven't gotten interested in because of some kind of review. And there are hundreds of sites that offer forums where people write their comments about games. 2) I didn't write anything about games offering test versions. A whole world has managed quite well even without test versions, because of the very ample amount of information available through your favorite search engine. 3) How do you find the game? You find people with the same taste as you who write reviews. Then you follow their reviews. You can't steal a book and read it just to figure out if you are going to like it - you either take a chance, or you do your due diligence. The seller has the right to promote their products. The buyer has the right to vote with the wallet and not buy. But the buyer never has the right to steal. How do you think you buy paint for painting the house? By stealing one bucket each of 10 different brands and testing for a couple of years before deciding what to buy? In the physical world, you have to do your due diligence. Same in the digital world.
  10. Ever noticed review sites? Ever noticed that Steam et al. have a review function? No, you do not NEED to test before buying.
  11. If you click on Tools/Diagnostics you can get a zip file with the SMART dump from every disk. You still get one file per disk, but it's quite quick to view the individual files. Just note that it is seldom the oldest disk that is the best one to replace, since drives are individuals and age differently. So it's best to look at more than the number of power-on hours before making your decision.
  12. There are often a number of SMART attributes that have threshold 0, which means the attribute will never be able to go below the threshold value and so will never be in a failing state. So these attributes are only treated as indicators for an intelligent reader to ponder. Having a current pending sector is a warning - not a failure - for a disk. On one hand, it can be a physically damaged sector - which doesn't mean the disk will fail, but that the specific sector needs to be remapped with a spare sector. Or it can be a physically damaged head, meaning that the drive is permanently bad at reading and/or writing data. But it can also be a power issue etc. that has nothing to do with the disk itself. Another thing is that the drive doesn't keep track of write times for the individual sectors. You know that you have recently written to every sector, which means you know that 2560 offline uncorrectable is an extremely bad figure, because it means the recent writes failed because the head or surface is bad. The above is a reason why many SMART programs inform about changes - expecting the owner to react if some of these fields tick up and to try to make an intelligent decision about what to do next.
  13. Problems with newly booted machines lacking entropy for pseudo-random data, resulting in some part of the program hanging while waiting for /dev/random to produce enough data?
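     On Linux this theory is easy to check, since the kernel exposes its current entropy estimate through /proc. A small sketch (the path is the standard Linux one; the helper name is made up):

     ```python
     def entropy_avail(path="/proc/sys/kernel/random/entropy_avail"):
         """Return the kernel's current entropy pool estimate, or None
         if the proc file isn't available (non-Linux system, etc.)."""
         try:
             with open(path) as f:
                 return int(f.read().strip())
         except OSError:
             return None

     print(entropy_avail())
     ```

     A persistently low value right after boot would support the /dev/random theory.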
  14. The texts "Pre-fail" and "Old age" are just descriptions of what the different attributes are measuring. Edit - notice that you also have a column "Failed". All your attributes have the value "Never". If you had seen a "Now" there, then you would have known that the disk considers that specific attribute to be currently failing. There are a few of the Raw values that you want to stay at zero permanently. You don't want attribute 5 (Reallocated sector count), 196 (Reallocated event count), 197 (Current pending sector) or 198 (Offline uncorrectable) to tick up, because then you have problems with the drive. And if 199 (UDMA CRC error count) is ticking up, then you have issues with the SATA cable or possibly with the electronics in the controller card or the disk. The disk itself gives an indication of how it rates the different attributes in the column "Value". Compare this with the column "Threshold" - it's a failure for any attribute to have the "Value" drop all the way down to the "Threshold" value.
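     The Value/Threshold rule can be written out explicitly - a sketch of the rule as described above, not smartctl's actual code:

     ```python
     def is_failing(value, threshold):
         """SMART failure rule: an attribute fails only when its
         normalized Value has dropped all the way down to its
         Threshold. A threshold of 0 can never be reached, so such
         attributes are informational and never "fail"."""
         return threshold > 0 and value <= threshold

     assert not is_failing(100, 0)   # threshold 0: indicator only
     assert not is_failing(95, 10)   # healthy margin remains
     assert is_failing(10, 10)       # Value reached Threshold: failing
     ```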
  15. I have managed quite well storing DVD/BD ISO files on ReiserFS and filling to 99% on older 4 TB volumes. But the special case is that I have only accumulated new files - never erased any existing files. And the total number of files is very low, since the ISO images are on average 8 GB for the DVD rips and 44 GB for the BD rips. I also have a few, very small, meta-data files for each rip, but still end up with a multi-GB average file size. In the general case, 99% is much too high a filling level for ReiserFS. RFS starts to spend significant time rebalancing the internal tree structures much earlier, so 90% should be a better general figure - especially if there are important real-time requirements. But just to reiterate: all file systems get affected when nearly full. It's just a question of how early you start to see significant effects for a specific set of files. Some file systems suffer because the free-space management breaks down. Some because the FS tries to balance the internal structures. If it's balancing operations that slow things down, then it can work quite well to fill the FS to a high level, since the rest of the file system's life will be reads. But if it's the free-space management that breaks down, then even future reads are likely to become severely affected.
  16. Different file systems behave differently when nearly full. But in general, you should never fill a file system almost full unless it's an archival disk where you only write to it once and then leave it in that state with no more writes. An almost full disk has a very hard time figuring out what free space to use for new writes, which tends to result in very severe fragmentation. So you can get huge slowdowns accessing some of the files if you make use of all space in a partition. Note that some file systems allocate a bit of extra space, speculatively, for some files just to allow them to be extended without additional fragments. If you then fill the disk completely, the file system may have to go back and reclaim these speculative allocations and use a huge number of small blocks to store the last files written.
  17. It was unexpected that ps hung. Process hangs normally happen when accessing something protected by a lock that hasn't been released, but ps is normally quite careful about what data it goes hunting for. What arguments did you give to the ps command?
  18. I have had issues where I close notifications and they instantly come back. And if I close all, they instantly come back. If I look on the flash drive, all are closed - the "unread" directory is empty and the notifications that come back are in the "archive" directory. If I navigate to a different page in the unRAID web interface, the problem seems to go away. So it seems the web interface may sometimes cache incorrect data.
  19. How can this be the wrong board? 6.3.5 isn't pre-release anymore, and this isn't related to any specific plugin. More and more users will come running with this kind of problem, where the unRAID machine and the unRAID shares aren't found, since M$ is rolling out Windows 10 updates that have turned off support for SMBv1. So you basically need DNS support to locate other hosts on the local network, or you have to update the local hosts file with the IP/name of the missing unRAID machine.
  20. WiFi is quite often fast enough, which is why lots of media players can run over WiFi. And it allows a backup server to be placed in a room that doesn't have wired networking. The important thing is that different users have different use cases and different needs.
  21. Windows doesn't check if the USB is bootable - just if it (potentially after a file system repair) seems to be readable. So as @johnnie.black recommends - regenerate the flash drive or possibly replace it. But take care to make a copy of the configuration first.
  22. One more wanting iSCSI.
  23. For a drive you just fill once, you could go really, really full with most file systems. But note that a file system using copy-on-write (CoW), like Btrfs, needs additional disk space even for updating file access times. And you might want to store additional information later - possibly checksum information for the files. Or you might later want to switch to a different file system - different file systems require different amounts of hidden storage for bookkeeping. Anyway - keeping 0.5% free on every drive represents just 0.5% of the purchase cost of the drives and 0.5% of the electricity to run them. So there isn't any strong economic incentive to push the limit. Especially since fragmentation can become a really big issue way earlier than this, depending on usage pattern.
  24. First off - you should use 4+1 = 5 for directory access to allow read+execute - execute is used to traverse the directory. Without this, you shouldn't even be able to see the files in the directory. What are your settings for the share? If the Windows machine is user "win" belonging to group "users", and your share allows write access for "win", then the Windows machine should be able to change access rights for files owned by "win". It should be allowed to change any files that specifically allow write access, but not to change access rights for a file owned by root. The normal Samba access rights checking happens in two layers. The first layer is the Samba share settings. So the share settings may have "valid users <user1> <user2> ..." to specify which users may access the share, and "write list <user1> <user2> ..." to specify which users it will accept writes from. Alternatively, the share may allow read or write from all users. But below that, there are the unix file access rights that must also be fulfilled. Your Windows account belongs to the group "users", so read and write access given to "users" will allow the Windows machine to read and write. But I fail to see why you are allowed to change the access rights for files/directories owned by root. If I test in a directory, I get the following.
     Before Windows tries to change access rights:

     root@n54l-3:/mnt/disk2/isos/test# ls -l
     total 68
     -r--r--r-- 1 nobody nobody  18 Nov 19 15:16 nobody-nobody-444
     -rw-r--r-- 1 nobody nobody  18 Nov 19 15:16 nobody-nobody-644
     -rw-rw-r-- 1 nobody nobody  18 Nov 19 15:16 nobody-nobody-664
     -r--r--r-- 1 radium nobody  18 Nov 19 15:16 radium-nobody-444
     -rw-r--r-- 1 radium nobody  18 Nov 19 15:16 radium-nobody-644
     -rw-rw-r-- 1 radium nobody  18 Nov 19 15:16 radium-nobody-664
     -r--r--r-- 1 radium nogroup 19 Nov 19 15:16 radium-nogroup-444
     -rw-r--r-- 1 radium nogroup 19 Nov 19 15:16 radium-nogroup-644
     -rw-rw-r-- 1 radium nogroup 19 Nov 19 15:16 radium-nogroup-664
     -r--r--r-- 1 radium users   17 Nov 19 15:16 radium-users-444
     -rw-r--r-- 1 radium users   17 Nov 19 15:16 radium-users-644
     -rw-rw-r-- 1 radium users   17 Nov 19 15:16 radium-users-664
     -r--r--r-- 1 root   root    14 Nov 19 15:16 root-root-444
     -rw-r--r-- 1 root   root    14 Nov 19 15:16 root-root-644
     -r--r--r-- 1 root   users   15 Nov 19 15:16 root-users-444
     -rw-r--r-- 1 root   users   15 Nov 19 15:16 root-users-644
     -rw-rw-r-- 1 root   users   15 Nov 19 15:16 root-users-664

     After:

     root@n54l-3:/mnt/disk2/isos/test# ls -l
     total 68
     -r--r--r-- 1 nobody nobody  18 Nov 19 15:16 nobody-nobody-444
     -rw-r--r-- 1 nobody nobody  18 Nov 19 15:16 nobody-nobody-644
     -rw-rw-r-- 1 nobody nobody  18 Nov 19 15:16 nobody-nobody-664
     -rw-rw-rw- 1 radium nobody  18 Nov 19 15:16 radium-nobody-444
     -rw-r--r-- 1 radium nobody  18 Nov 19 15:16 radium-nobody-644
     -rw-rw-r-- 1 radium nobody  18 Nov 19 15:16 radium-nobody-664
     -rw-rw-rw- 1 radium nogroup 19 Nov 19 15:16 radium-nogroup-444
     -rw-r--r-- 1 radium nogroup 19 Nov 19 15:16 radium-nogroup-644
     -rw-rw-r-- 1 radium nogroup 19 Nov 19 15:16 radium-nogroup-664
     -rw-rw-rw- 1 radium users   17 Nov 19 15:16 radium-users-444
     -rw-r--r-- 1 radium users   17 Nov 19 15:16 radium-users-644
     -rw-rw-r-- 1 radium users   17 Nov 19 15:16 radium-users-664
     -r--r--r-- 1 root   root    14 Nov 19 15:16 root-root-444
     -rw-r--r-- 1 root   root    14 Nov 19 15:16 root-root-644
     -r--r--r-- 1 root   users   15 Nov 19 15:16 root-users-444
     -rw-r--r-- 1 root   users   15 Nov 19 15:16 root-users-644
     -rw-rw-r-- 1 root   users   15 Nov 19 15:16 root-users-664

     As you can see, the entries owned by radium (the Windows machine account) did change -r--r--r-- into -rw-rw-rw-. Entries owned by nobody or root could not have their file attributes changed.
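     The octal arithmetic mentioned at the top (4+1 = 5 for directories) can be illustrated with Python's stat module:

     ```python
     import stat

     # read = 4, write = 2, execute = 1; directory traversal needs
     # execute, so "read and traverse" for a directory is 4 + 1 = 5.
     READ, WRITE, EXECUTE = 4, 2, 1
     assert READ + EXECUTE == 5
     assert stat.S_IROTH | stat.S_IXOTH == 5   # same value via stat constants

     mode = 0o755   # rwxr-xr-x: owner full, group/others read+traverse
     print(stat.filemode(stat.S_IFDIR | mode))  # prints: drwxr-xr-x
     ```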
  25. I traditionally keep file server shares read-only, with the exception of accesses from a few trusted Linux machines. So to move files from a Windows machine, I have the file server mount the Windows volume and pull the files. Or I use an intermediary Linux machine that mounts both the Windows share and the file server share. But in the end, there are no open shares for a hijacked Windows machine to destroy. If editing a Word document, the Windows machine has to create a new revision of the document, and rsync may later come and fetch the new revision and store it on the read-only share.