[aur-general] TU Application - Thomas Hatch

Kaiting Chen kaitocracy at gmail.com
Wed Jan 5 15:35:47 EST 2011


On Wed, Jan 5, 2011 at 2:50 PM, Thomas S Hatch <thatch45 at gmail.com> wrote:

> The difference is that Gluster is a nightmare!
>
> The problem with Gluster is that the replication is tiered and there is no
> metadata server. The client is effectively the master, which means that if
> you connect to Gluster with a misconfigured client you can cause large-scale
> data corruption.
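>
> To make that concrete: in classic Gluster the replication logic is just a
> translator in the client's volume file, roughly like the sketch below
> (hostnames and brick paths are made up). A client that loads a volfile
> without the replicate translator happily writes to a single brick:
>
>   volume remote1
>     type protocol/client
>     option transport-type tcp
>     option remote-host storage1      # illustrative hostname
>     option remote-subvolume brick
>   end-volume
>
>   volume remote2
>     type protocol/client
>     option transport-type tcp
>     option remote-host storage2      # illustrative hostname
>     option remote-subvolume brick
>   end-volume
>
>   volume mirror0
>     type cluster/replicate           # AFR runs on the client side
>     subvolumes remote1 remote2
>   end-volume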
>
> Next, since the replication of data is tiered you don't have true
> replication: only the Gluster server you connect to when saving the data has
> the current copy. If that server goes down, the replicas are stale and you
> have data corruption.
>
> The Gluster devs actually had to recall Gluster 3.1 because the data
> corruption was rampant.
>
> The difference between gluster and MooseFS is that MooseFS works!
>
> MooseFS also has a cool web frontend :)
>
> We were using Gluster and the business cost became catastrophic; picking up
> the pieces was a nightmare.
>
> MooseFS saves data to its replication nodes in parallel! MooseFS keeps the
> metadata on a master (backed by a metalogger), so the clients stay agnostic
> of the cluster layout. MooseFS replicates that metadata, so you can restore
> it if something happens to the master.
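>
> Day to day that looks roughly like this (paths and goal counts are just
> examples, and I'm going from memory, so check the MooseFS man pages):
>
>   mfsmount /mnt/mfs -H mfsmaster       # mount via the metadata master
>   mfssetgoal -r 2 /mnt/mfs/shared      # keep 2 copies of everything under the dir
>   mfsgetgoal /mnt/mfs/shared/disk.img  # confirm the replication goal
>   mfsmetarestore -a                    # rebuild metadata.mfs from the last backup plus changelogs
>   mfscgiserv                           # on the master: starts the web frontend I mentioned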
>
> I take it you don't like FUSE? EVERYBODY is doing it ;)
>
> I am looking forward to Ceph, which does not require FUSE, but I don't think
> it is going to be production-ready for at least a year, and MooseFS easily
> competes with Ceph IMHO.
>
> If there are GlusterFS devs in the room, please disregard the previous rant
> :)
>

Thanks for the very thorough answer. And yes, I hate the idea of a filesystem
in userspace. Everyone knows filesystems should live in kernel space! Mostly
it's that, in my opinion, bypassing the kernel's caching mechanism is entirely
impractical for a high-performance FS. Feel free to correct me if I'm wrong.
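
To be concrete, the caching behaviour I mean is decided per open() in libfuse
2.x; the handler below is a made-up sketch, not taken from any real FS:

    #define FUSE_USE_VERSION 26
    #include <fuse.h>

    /* Hypothetical open handler: this is where a FUSE filesystem chooses
     * whether file data goes through the kernel page cache or around it. */
    static int myfs_open(const char *path, struct fuse_file_info *fi)
    {
        (void) path;
        fi->direct_io  = 1;  /* 1 = bypass the page cache entirely */
        fi->keep_cache = 0;  /* 1 would let the kernel keep pages cached across opens */
        return 0;
    }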

Anyways, your application looks really good. Good luck! --Kaiting.

-- 
Kiwis and Limes: http://kaitocracy.blogspot.com/

