On 7/5/07, Xavier <shiningxc@gmail.com> wrote:
On Thu, Jul 05, 2007 at 02:06:09PM -0700, Jason Chu wrote:
I was the main person pushing for this, and it was mostly to guard against malicious downloads.
It's not the package downloading that I was worried about so much as the source tarballs. We use md5sums to make sure that the tarball we download when building the package is the same as the tarball that the developer used when they built it. If someone gains access to the upstream server, the md5sum is what lets us keep trusting those files over time.
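The check Jason describes amounts to hashing the downloaded tarball and comparing against the digest recorded at packaging time. A minimal sketch in Python (the function name `md5_of_file` is my own illustration, not anything from the packaging tools):

```python
import hashlib

def md5_of_file(path: str) -> str:
    """Return the hex MD5 digest of a file, read in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        # Stream in 64 KiB chunks so large tarballs need not fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()
```

The build would then refuse to proceed unless `md5_of_file(tarball)` equals the digest the packager recorded.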
Oh, I see. But what I'm really wondering is why combine two existing algorithms that have known flaws instead of using a single one for which no flaw has been found yet? Isn't that both less secure and more complicated?
<offtopic> Every possible hashing algorithm has flaws, but a flaw has to be exploitable to be of any use to an attacker. Just because a flaw hasn't been found doesn't mean there isn't one. And I think a very important point was missed in all these emails: creating a useful 'flaw' is not easy at all.

First take the example of a single hash function, such as MD5. In our scenario, the attacker starts with the original file's hash already fixed. The published attacks don't work that way: they only look for *any* two inputs that hash to the same value (a collision), not for an input matching a preexisting hash (a preimage). So we are already off to a hard challenge. Say you do manage to find some other junk data that hashes to the same value. Sorry, that's worthless; you need valid data.

Now add a second hash. You would need a *double* collision: one where *both* hashes match between the valid data and the malicious data. I dare say that's impossible. </offtopic>

One hash being more secure? Doubtful. Maybe about the same. One hash being less complicated? Do you like dealing with 80-character strings?

-Dan
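Dan's "double collision" argument can be sketched concretely: a forged tarball is only accepted if it matches *both* published digests, so a collision against just one algorithm buys the attacker nothing. A minimal illustration (the function name `verify_both` is my own, not part of any packaging tool):

```python
import hashlib

def verify_both(data: bytes, expected_md5: str, expected_sha1: str) -> bool:
    """Accept the data only if MD5 *and* SHA-1 both match.

    A forgery must collide under both hash functions simultaneously;
    fooling either one alone is not enough.
    """
    return (hashlib.md5(data).hexdigest() == expected_md5
            and hashlib.sha1(data).hexdigest() == expected_sha1)
```

Any tampered file that preserved the MD5 but not the SHA-1 (or vice versa) would still be rejected.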