On Mon, Mar 06, 2017 at 09:44:12AM +0100, Tinu Weber wrote:
> Because anonymisation: even if one dataset in isolation may look harmless from a privacy point of view, combined with other datasets it may suddenly reveal information that was never intended to be public.
>
> I admit that a simple one-column list of user nicknames can probably not be joined with other datasets or tables in any useful way, but it is still not always obvious how data can be (ab)used (see also [1]).
>
> I would not give out the user list. Even if everybody can, with enough effort, obtain the data by other means, that is not the same thing as handing it out conveniently prepared and formatted.
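To make the joining concern above concrete, here is a minimal sketch (entirely made-up service names and data, not anything from this project): two datasets that each look harmless on their own, where the join on nicknames is what leaks information.

    # Dataset 1: a bare list of nicknames exported from a hypothetical service A.
    service_a_users = {"alice42", "bob_k", "carol_c"}

    # Dataset 2: a public dump from an unrelated service B, keyed by the same nicknames.
    service_b_profiles = {
        "bob_k":   {"city": "Zurich", "email": "bob@example.org"},
        "carol_c": {"city": "Basel",  "email": "carol@example.org"},
        "dave_d":  {"city": "Bern",   "email": "dave@example.org"},
    }

    # The join: which members of service A can now be tied to an outside identity?
    linked = {nick: service_b_profiles[nick]
              for nick in service_a_users if nick in service_b_profiles}
    print(linked)
    # -> bob_k and carol_c are linked; their membership in A is no longer anonymous.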
See, the rule should be that private information is information that is explicitly marked as such. For example, a password or a secret key is private information which you never, ever disclose to anyone. But a username is by definition open. Therefore, if your privacy relies on a web service not disclosing usernames, you haven't considered the threat model carefully enough. What I'm saying is just another instance of avoiding security through obscurity: don't rely on a web service not advertising your usernames; if this is a concern, make each username a random string (which defeats the attack [1]).
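For illustration only (my sketch, not anything the service actually does), such a random username could be generated along these lines; the length is an arbitrary choice:

    # Generate an unguessable username using Python's standard secrets module.
    import secrets

    def random_username(nbytes: int = 9) -> str:
        # 9 random bytes -> a 12-character URL-safe string such as 'Qx3a9_fZkL1w'.
        return secrets.token_urlsafe(nbytes)

    print(random_username())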
[1] http://archive.wired.com/politics/security/commentary/securitymatters/2007/1...
Cheers,
--
Leonid Isaev