On March 6, 2018 7:37, Thore Bödecker via arch-devops wrote:
During my brief contact with OpenLDAP I was really annoyed by that LDIF stuff. The learning curve is rather steep, and if you've never worked with it before it can get really frustrating. This makes it harder to distribute the responsibilities/knowledge for our future LDAP setup. We most likely do not want only a single person capable of maintaining it.
Well, this is why we are using ansible for this: so we can set it up and maintain it in a way that anyone from our devops team can handle. And we have other people with LDAP knowledge as well.
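For anyone who hasn't seen it yet, this is roughly the shape of a change in LDIF as you'd feed it to ldapmodify (the dn and attribute here are made up for illustration):

    dn: uid=jdoe,ou=users,dc=archlinux,dc=org
    changetype: modify
    replace: mail
    mail: jdoe@archlinux.org

Saved as change.ldif, something like `ldapmodify -H ldapi:/// -D cn=admin,dc=archlinux,dc=org -W -f change.ldif` would apply it. Not rocket science, but the flat attribute/value syntax and the cn=config tree do take some getting used to, which is the frustration above.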
From what I've heard (never used it myself), 389-ds seems to be more straightforward to set up and use. Do you have any actual numbers on the difference in resource usage?
No, I don't. And OpenLDAP now uses mdb as its backend, so resource usage might turn out to be about the same. I'll run some tests.
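A rough first comparison could be as simple as checking resident memory of the two daemons on test machines, something like this (process names assumed to be the packaged defaults):

    ps -o comm,rss,vsz -C slapd      # OpenLDAP
    ps -o comm,rss,vsz -C ns-slapd   # 389-ds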
Regarding (G)UIs for the LDAP servers:
I would strongly advise against phpLDAPadmin: it is cumbersome and its last stable release was over 5 years ago.
I don't want to use it either, but we do have it packaged in our repos.
It would be great to have some kind of CLI/TUI locally on the server which could be used through SSH.
389-ds has a Java console that you can run locally and tunnel over SSH, or run remotely with SSH X forwarding.
In case our DS software choice does not come with that, there is always Apache Directory Studio (aur/apachedirectorystudio). It is also not really *great* but most of the time it just works. This could be used through an SSH port tunnel without the need to expose the LDAP(s) port publicly.
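Concretely, something along these lines would do (hostname and local port made up); Apache Directory Studio then connects to ldap://localhost:3389:

    ssh -N -L 3389:localhost:389 devops@ldap.archlinux.org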
I have not included other options, like OpenDJ and Apache's, because of their licenses.
I support the idea of each machine/server running a local (replicated) slave. This distributes the load more or less evenly across the servers, keeps latency low, and doesn't depend on a remote service being online/available.
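On the consumer side, and assuming we end up on OpenLDAP's cn=config, this would boil down to a syncrepl stanza plus an update referral pointing writes at the master, roughly like this (provider, binddn, credentials and searchbase are placeholders, wrapped for mail):

    dn: olcDatabase={1}mdb,cn=config
    changetype: modify
    add: olcSyncrepl
    olcSyncrepl: rid=001 provider=ldaps://master.example.org
      bindmethod=simple binddn="cn=replicator,dc=example,dc=org"
      credentials=secret searchbase="dc=example,dc=org"
      type=refreshAndPersist retry="30 10 300 +"
    -
    add: olcUpdateRef
    olcUpdateRef: ldaps://master.example.org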
There's also the option of using SSSD for caching, as was pointed out to me on IRC.
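A minimal sketch of that, assuming sssd on each client host talking to our LDAP (all names illustrative):

    [sssd]
    services = nss, pam
    domains = archlinux

    [domain/archlinux]
    id_provider = ldap
    auth_provider = ldap
    ldap_uri = ldaps://ldap.example.org
    ldap_search_base = dc=example,dc=org
    cache_credentials = True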
However:
When the slaves are used locally by the services (aur, wiki, archweb, ...), we do need to provide a way for users to change their password (among other account settings) that goes through the master, since the slaves won't be able to change any data.
This is expected, yes.
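For OpenLDAP that can be as simple as pointing ldappasswd at the master (URI and dn are placeholders); any web frontend would do the equivalent bind and modify:

    ldappasswd -H ldaps://master.example.org \
        -D uid=jdoe,ou=users,dc=example,dc=org -W -S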
Having only one master means that changes to the LDAP data are not possible while the master is down. I would strongly recommend to run at least two masters if that is possible without severe drawbacks or heavily increased complexity.
We can have a multi-master scenario. In fact, since we reboot the machines now and then, it's even advisable.
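For the record, N-way multi-master in OpenLDAP is mostly a matter of giving each server an ID and enabling mirror mode on top of a syncrepl config like the one sketched earlier, with each master also carrying an olcSyncrepl entry pointing at the other (URIs illustrative):

    dn: cn=config
    changetype: modify
    replace: olcServerID
    olcServerID: 1 ldaps://ldap1.example.org
    olcServerID: 2 ldaps://ldap2.example.org

    dn: olcDatabase={1}mdb,cn=config
    changetype: modify
    replace: olcMirrorMode
    olcMirrorMode: TRUE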
Regarding TLS: Using TLS for the replication traffic between the servers is an absolute must-have, I fully agree with this. We should also restrict access to the LDAP(s) port through iptables to our own servers, so that only replication traffic gets through. However, having seen first-hand how broken some TLS/x509 implementations are (e.g. MySQL), I would opt for a VPN between our servers, using tinc for example. The VPN network across the servers can still be firewalled, and it forces all traffic between the servers to be encrypted.
I don't like the idea of leaving LDAP open to the internet either, even with anonymous binds disabled. But we do have other services in this situation; they use TLS and the service's internal auth, and that's that. Going the VPN route would add complexity. Let's see how this goes.
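If we keep plain LDAPS between the hosts for now, the firewalling part is straightforward: one accept rule per replication peer (the address below is a placeholder), with everything else on the ldaps port dropped:

    iptables -A INPUT -p tcp --dport 636 -s 198.51.100.10 -j ACCEPT
    iptables -A INPUT -p tcp --dport 636 -j DROP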
That's all I've got so far.
Thanks.

Regards,
Giancarlo Razzolini