[aur-dev] various implementations of a json api query endpoint for the aur
Greetings. I have implemented stand-alone versions of the AUR rpc interface in 3 languages:

https://github.com/cactus/spew-ruby (ruby + sinatra)
https://github.com/cactus/spew-js (nodejs + coffee-script)
https://github.com/cactus/spew-python (python + tornado)

The ruby version is probably the most complete (and arguably my favorite of the 3 so far), but each is a usable implementation. The nodejs version may not be idiomatic node code, as I am still learning my way around that ecosystem. The python version doesn't have a good implementation of the '/search' livesearch example endpoint, but that is just due to laziness on my part (I could add it easily if anyone else was interested in it besides me). Currently all 3 versions utilize the mysql database format that the AUR itself uses.

Some nice aspects of having the rpc interface separated from the php aur code itself are:

- less api endpoint downtime when performing maintenance on the aur
- decoupling from the aur codebase, which could provide an arena for faster development of new features for api clients
- nicer languages to work with (I am not a fan of php, due to having used it extensively in the past ;)
- the ability to have a more restful interface (utilizing http response codes: 404 for packages not found, 204 for no content, etc)
- the ability to version the api. This would be a way to support legacy clients for a while after changes to the api. Fake example: /api/v3/maint?q=bob -> /api/v4/maintainer?q=bob

The api I have currently implemented (v2) supports _search_ and _info_ type queries. Supporting _maintainer_ queries would be very easy to add to any of the implementations. The v2 interface is slightly different than the current rpc interface (more restful, and it supports conditional get for clients that can cache and perform conditional get requests -- to reduce bandwidth, etc).
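To illustrate the conditional-get idea, here is a minimal stdlib-only ruby sketch of how such an endpoint could hash its response into an ETag and answer 304 when the client's If-None-Match still matches (the respond_with_cache helper and its return shape are hypothetical, not code from spew-ruby):

```ruby
require 'digest/md5'
require 'json'

# Hypothetical helper: returns [status, headers, body], rack-style.
# A nil payload models "package not found" (404); a matching ETag
# from the client yields 304 with an empty body to save bandwidth.
def respond_with_cache(payload, if_none_match = nil)
  return [404, {}, ''] if payload.nil?
  body = JSON.generate(payload)
  etag = %("#{Digest::MD5.hexdigest(body)}")
  if if_none_match == etag
    [304, { 'ETag' => etag }, '']   # client cache is still fresh
  else
    [200, { 'ETag' => etag, 'Content-Type' => 'application/json' }, body]
  end
end

status, headers, body = respond_with_cache({ 'name' => 'spew' })
# a second request echoes the ETag back via If-None-Match
status2, _, body2 = respond_with_cache({ 'name' => 'spew' }, headers['ETag'])
```

In sinatra the same logic is what the built-in etag helper does for you; the point here is just the request/response flow.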
If there is any interest in utilizing any of the versions listed for Arch, I would be glad to continue working on the codebase and clean up any rough edges and add the maintainer feature. Creating an AUR query interface is one of the projects I use when learning a new language, so even if no version ever gets used for anything, I do not consider it a waste of my time -- so don't be concerned that I will feel bad if the answer is 'no'. Thanks.
Very cool! Are any of these implementations hosted anywhere?

I have been making an AUR scraper/copier that saves package meta-information. There is also a web service that provides the data. I put both the database and the web service here: http://juster.us/aurlite

This scraper saves the AUR data into a new schema and uses SQLite, so it has a slightly different angle than your projects. Because it is essentially read-only, this works out well for my needs. I was not trying to mirror or re-implement the AUR. I was thinking of adding binary ALPM repos as well, to provide a single service for searching/browsing packages across many repos as well as the AUR. This would probably have a javascript frontend much like your livesearch!

The whole thing is written in perl. The web service uses Dancer, which is basically a perl copy of ruby's Sinatra. Maybe you would be interested in it because your web services seem so similar.

--
-Justin
On Sun, Mar 27, 2011 at 9:02 AM, Justin Davis <jrcd83@gmail.com> wrote:
Very cool! Are any of these implementations hosted anywhere?
I have development versions running, but only the ruby version is 'live' right now: http://test.awesometrousers.net/

There is also a live-search implementation that utilizes the ruby backend: http://test.awesometrousers.net/search

Since the api supports limit and offset query variables, I have thought of making the live-search do load-on-scroll instead of loading everything at once, but I haven't endeavored to do so yet.
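The load-on-scroll idea boils down to requesting one window of results per scroll event via those two query variables. A tiny illustrative sketch (the fetch_page helper and the sample data are made up; only the limit/offset parameter names come from the api):

```ruby
# Stand-in for what the backend would return from the database.
PACKAGES = %w[foo bar baz qux quux].freeze

# Hypothetical helper: slice one page out of the result set, the way
# a ?limit=N&offset=M query would. Out-of-range offsets yield [].
def fetch_page(limit:, offset:)
  PACKAGES[offset, limit] || []
end

fetch_page(limit: 2, offset: 0)  # => ["foo", "bar"]
fetch_page(limit: 2, offset: 2)  # => ["baz", "qux"]
fetch_page(limit: 2, offset: 4)  # => ["quux"]
```

The client just bumps offset by limit each time the user nears the bottom of the list, rather than fetching the whole result set up front.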
I have been making an AUR scraper/copier that saves package meta-information. There is also a web service that provides the data. I put both the database and webservice here: http://juster.us/aurlite
neat! I will take a look at it.
This scraper saves the AUR data into a new schema as well as using SQLite so it has a slightly different angle than your projects. Because it is essentially read-only this works out well for my needs. I was not trying to mirror or re-implement the AUR. I was thinking of adding binary ALPM repos as well to provide a single service for searching/browsing packages across many repos as well as the AUR. This would probably have a javascript frontent much like your livesearch!
Certainly feel free to re-use any of the livesearch code. It is liberally MIT licensed. :)
participants (2)
- elij
- Justin Davis