Trust is a massive issue with software today.
You want to use some computing device to achieve a task, so you go to Google; after all, someone must have tried to solve this problem with software somewhere. You search away, deftly clicking through links, finally coming upon Acme Software, the answer to all your searching. There before you is an application that will solve your problem perfectly.
Presuming there’s nothing stopping you (e.g. cost, incomprehensible download instructions, platform availability), you go ahead and install, assuming that the developer has utterly benign intentions and there isn’t anything untoward going on.
In that single act of installation, you take your chance: is it legitimate, or is your machine now part of a botnet?
Which is why things like code signing and app stores exist. Whilst not perfect, they attempt to mitigate some of the risks.
There is no perfect system and there never will be; all we can do is aim for the situation that is less risky. Just think about these two constructs for a moment.
Firstly, code signing. When a developer signs code with a certificate from a Certificate Authority (CA), the CA is confirming that the developer is who they say they are. When your OS asks you to confirm that Verisign, for example, knows who Acme Software of 42 Wallaby Way, Sydney are, you are trusting that Verisign has checked them out: given them a phone call, checked articles of incorporation, and so on; in other words, carried out a little background checking. But who are Verisign? Do they have a relationship with you (of course not)? Are all CAs thorough? And given there are so many of them, who says you can trust them anyway?
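To make the trust chain concrete, here is a rough sketch of what the CA model boils down to, using plain openssl rather than any real platform’s signing tool; “ExampleCA” and the certificate subject fields are illustrative, not real entities:

```shell
# The CA holds a self-signed root certificate, which your OS ships with
# and implicitly trusts on your behalf.
openssl req -x509 -newkey rsa:2048 -keyout ca.key -out ca.crt \
  -days 365 -nodes -subj "/CN=ExampleCA"

# The developer sends the CA a certificate signing request (CSR);
# this is the point where the CA is supposed to vet their identity.
openssl req -newkey rsa:2048 -keyout dev.key -out dev.csr \
  -nodes -subj "/CN=Acme Software"

# The CA signs the request, producing the developer's certificate.
openssl x509 -req -in dev.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out dev.crt -days 365

# What the OS effectively does: check the developer's certificate
# against the CA root it already trusts.
openssl verify -CAfile ca.crt dev.crt
```

Note that the chain only proves the CA signed that certificate; everything else, including whether the CA actually rang Acme Software, is trust in the CA’s process.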
If you write code, you probably think you know the answers to these questions, but the general user will just click “OK” and think nothing of it. And that says nothing of self-signed code, where the developer generates their own certificate and signs the code with it.
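Self-signing is mechanically almost identical, which is exactly the problem. A minimal sketch, again using openssl as a stand-in for a real signing tool, with a dummy file in place of an application binary:

```shell
# The developer generates their own key pair and self-signed certificate.
# No CA is involved, so nothing vouches for the name in the certificate.
openssl req -x509 -newkey rsa:2048 -keyout dev.key -out dev.crt \
  -days 365 -nodes -subj "/CN=Acme Software"

# The developer signs the "application".
echo "pretend this is an application" > app.bin
openssl dgst -sha256 -sign dev.key -out app.sig app.bin

# A user can verify the signature against the certificate's public key...
openssl x509 -in dev.crt -pubkey -noout > dev.pub
openssl dgst -sha256 -verify dev.pub -signature app.sig app.bin
```

The verification succeeds, but all it proves is that whoever holds the key signed the file; it says nothing about who “Acme Software” actually is.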
And then the app store? Some of these (in the case of Apple) provide the certificates, and the gateway to users. Users go to them, hopefully able to trust that they have carried out the due diligence of a CA, perhaps even checking the code itself (which is not a requirement of a CA, so they are taking the duty one step further). Of course, this requires that the user has a level of trust in the app store’s proprietors, and believes they aren’t going to do anything evil. The same would apply to the CA, but general users will already have that trust in Google or Apple; it is unlikely they have it in Verisign. Strictly speaking, the OS manufacturer is actually the one asking you to trust the certificate from the CA, but the question is always posed the other way around: as a request to the user to trust the CA by clicking OK.
So what is wrong with the app store model?
The first objection everyone reaches for is that the app store proprietor is given too much control (ultimate censorship), but it has proven to be a bit of a moot point in most circumstances. When the Apple App Store started, there were many cases of software being refused approval, and the accompanying furore. But these are now much rarer occurrences (or perhaps no one cares). If we take the case of Opera, in my opinion it has such a terrible implementation on iOS that I wonder if there’s more of a back story than they would care to admit.
If there’s anything wrong with this model, it’s that it isn’t perfect; so when it goes wrong we think: hey, what the? I thought they made these things bulletproof. As most users will always click OK, right here and now the app store model seems like the best solution.
Which leads us to the disappointment with all our human endeavours: there never has been (nor will be) a perfect situation, just ones that are less bad. Failure leads to improvement, but as we are utterly unable to foresee all circumstances, there will always be unknown unknowns.