On 21/12/18 5:21 am, Florian Pritz via aur-general wrote:
On Thu, Dec 20, 2018 at 11:02:31PM +0200, Jerome Leclanche <jerome@leclan.ch> wrote:
That should probably be fixed as well, but I agree with making the rate limit window 1 hour, at most. A 24 hour API restriction on the AUR API is really nasty imo.
So is running cower -u in conky every 5 seconds and not even knowing that this may send 50 requests per second on average because you have so many packages installed. The limit is there to show people that there is something wrong with their API usage, and if the window is too small they'll never notice.
We have resolved this particular issue for now via a temporary reset of the limit. For the future, I think the way to go is to implement configurable limits for certain subnets so that we can raise them to a suitable value. If someone wants to write a patch for this, that would be very welcome.
My initial idea is that a simple DB table with the limits for each subnet should be enough. We don't (yet) need a web UI for maintaining it. The tricky part is probably figuring out which limit should apply to which IP. Since the AUR supports MySQL and SQLite, any code has to work with both.
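To make that idea a bit more concrete, here is a minimal sketch of what such a table and lookup could look like. It is only an illustration: the table name ApiRateLimitOverrides, its columns, and the default limit are assumptions of mine, not actual aurweb code. The SQL is kept plain so it runs on both MySQL and SQLite, and the "which limit applies to which IP" question is answered in application code by picking the most specific (longest-prefix) subnet that contains the client address, since neither database can match CIDR prefixes portably.

    # Hypothetical per-subnet rate-limit override lookup (not aurweb code).
    import ipaddress
    import sqlite3

    DEFAULT_LIMIT = 4000  # assumed default requests per window

    def setup(conn: sqlite3.Connection) -> None:
        # Plain SQL that works on both SQLite and MySQL: store the subnet
        # as text in CIDR notation next to the raised request limit.
        conn.execute(
            """CREATE TABLE IF NOT EXISTS ApiRateLimitOverrides (
                   Subnet       VARCHAR(43) NOT NULL PRIMARY KEY,
                   RequestLimit INTEGER     NOT NULL
               )"""
        )

    def limit_for(conn: sqlite3.Connection, ip: str) -> int:
        # Fetch all overrides and pick the most specific subnet that
        # contains the client address; fall back to the default limit.
        addr = ipaddress.ip_address(ip)
        best = None
        for subnet, req_limit in conn.execute(
            "SELECT Subnet, RequestLimit FROM ApiRateLimitOverrides"
        ):
            net = ipaddress.ip_network(subnet)
            if addr.version == net.version and addr in net:
                if best is None or net.prefixlen > best[0]:
                    best = (net.prefixlen, req_limit)
        return best[1] if best else DEFAULT_LIMIT

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        setup(conn)
        conn.execute(
            "INSERT INTO ApiRateLimitOverrides VALUES ('10.0.0.0/16', 40000)"
        )
        print(limit_for(conn, "10.0.5.7"))   # 40000 (override applies)
        print(limit_for(conn, "192.0.2.1"))  # 4000  (default)

The linear scan is fine for the handful of overrides we'd realistically have; there is no need for anything fancier until the table grows large.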
If anyone wants to implement this, feel free to talk to me via mail or on IRC.
Florian
Is there anything wrong with creating a partner file to the packages file that contains the pkgname, provides, description and git address? Then encourage it to be downloaded, say at an interval of 12 hours, and used for searching, and encourage use of the git address rather than a direct download of the PKGBUILD. Helpers can then cache these PKGBUILD directories in a build tree. The AUR helper packages should be encouraged to use this method and thus take the load off the system.

Macca
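Purely to illustrate the idea, a rough sketch of how a helper could consume such a file. The metadata file name, URL and JSON layout here are invented for this example and do not exist today; the only real piece is that each package already has a clone URL of the form https://aur.archlinux.org/<pkgbase>.git.

    # Sketch of a helper using a bulk metadata file instead of the RPC API.
    # META_URL and the JSON fields are hypothetical, not an existing interface.
    import json
    import subprocess
    import time
    import urllib.request
    from pathlib import Path

    META_URL = "https://aur.archlinux.org/packages-meta.json"  # hypothetical
    CACHE = Path.home() / ".cache" / "aurhelper"
    META_FILE = CACHE / "packages-meta.json"
    MAX_AGE = 12 * 3600  # refresh at most every 12 hours, as suggested above

    def load_metadata() -> list[dict]:
        CACHE.mkdir(parents=True, exist_ok=True)
        stale = (not META_FILE.exists()
                 or time.time() - META_FILE.stat().st_mtime > MAX_AGE)
        if stale:
            # One bulk download instead of many per-package API calls.
            urllib.request.urlretrieve(META_URL, META_FILE)
        return json.loads(META_FILE.read_text())

    def search(term: str) -> list[dict]:
        # Search entirely against the local copy; no API requests involved.
        term = term.lower()
        return [p for p in load_metadata()
                if term in p["pkgname"].lower()
                or term in p["description"].lower()]

    def fetch_build_dir(pkg: dict) -> Path:
        # Clone (or update) the package's git repo into a local build tree,
        # so the PKGBUILD directory is cached instead of re-downloaded.
        dest = CACHE / "builds" / pkg["pkgname"]
        if dest.exists():
            subprocess.run(["git", "-C", str(dest), "pull", "--ff-only"],
                           check=True)
        else:
            subprocess.run(["git", "clone", pkg["git"], str(dest)], check=True)
        return dest

With something like this, the only per-package traffic a helper generates is the git fetch, and searches never hit the API at all.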