On Sat, Nov 29, 2008 at 3:20 AM, Ian G <***@iang.org> wrote:
>> The sad thing is: The users, in this case my project colleagues, sometimes do not know how to use the existing S/MIME infrastructure although they enrolled during a user registration process and they already have everything on their desktop. Although I'm not involved personally with the S/MIME infrastructure my attitude is to teach the people how to use it. And they feel better when using it because they know there's a need for e-mail protection. But they were simply not teached. That's a non-technical problem.
> IMO, the root cause is not training. Nor legal. To blame some other process is what we call "shifting the burden," a pattern that allows us to ignore the root causes.
> The root cause is that the S/MIME security model is inefficient; it doesn't deliver benefits in accordance with the costs imposed.
> Funnily enough, users are very savvy. They can spot a worthless system much more easily than engineers. What they can't do is explain why it is worthless; they simply bypass it. This is why smart product is always developed in association with lots of user feedback, and paper designs generally don't succeed.
> In this sense, Mozilla is on the right track with trying to put in place a user security model that doesn't require user intervention. (E.g., the UI hides the CA, from the "all CAs are equal" assumption.) However, this only works if the result is efficient. As Kyle comments, it isn't, for S/MIME, and the result is that the model experiences low usage rates.
First off: User training is arguably more technical than computer
infrastructure. You can't simply say "they were simply not teached
[sic]" and "that's a non-technical problem", because computers need to
be taught exactly one thing: how to perform a series of complex tasks.
Users need to be taught that too (perhaps not at the granularity of
the operations that computers need to be taught, but they do need
to know how to do a series of complex things), as well as something
perhaps more important: WHY to perform a series of complex tasks.
(Why should someone change the oil in their car? Because it helps the
car's engine last longer. Why should someone go through the
additional mess and morass of using S/MIME? To let themselves in for
more user-interface headache and annoyance?)
The root cause is not "training". There are actually two root causes
of the failure of cryptography to make sizeable inroads into everyday
non-commercial life. First is that the UI designers and programmers
have violated the contract and interface to which the users have been
trained. In other words, WE BROKE THE INTERFACE. Second is that the
threat model currently used for commerce and government is NOT
appropriate for non-commercial and non-governmental social
interaction. In other words, for the "general user", WE DIDN'T HAVE A
GOOD REASON TO BREAK THE INTERFACE.
As cryptographers, we can know -- and show, to a certainty far beyond
any other data-transformation discipline -- several aspects of the
metadata of properly-formatted messages. In our zeal to try to
explain what we can know, and how we can know it, and why we can know
it, we've overwhelmed the coders, the UI designers, the UI experts,
the people who are supposed to distill complex operations, notices,
and warnings to individually-understandable pieces.
I like the idea of putting in a user security model that doesn't
require user intervention -- but only to a point. I /don't/ like the
idea of trying to make the system make all of its security decisions
in a vacuum, especially in areas where the user has historically been
the master (for me, that includes anything which can be covered under
the Electronic Communication Privacy Act of 1986). The problem is
this: the system is not intelligent. Only the user is intelligent.
This means that data which would otherwise not be acceptable under the
system's rules might be acceptable under the user's rules.
This is why I've been in favor of unobtrusive pop-ups (rather like
Growl notifications on the Mac). There are only a couple of pieces of
information truly necessary for any security UI... who it's from, who
says it's from the person it's from, who (ultimately) has been deemed
acceptable to provide that kind of information, and whether it's been
modified in transit. i.e., certificate subject, certificate issuer,
issuer's root authority, and hash-match.
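Those four pieces can be sketched as a tiny data model plus the one-line, Growl-style notification they would drive. This is only an illustration of the idea; every name here is hypothetical, not any vendor's actual API:

```python
from dataclasses import dataclass

@dataclass
class SignatureSummary:
    subject: str         # who it's from
    issuer: str          # who says it's from the person it's from
    root_authority: str  # who ultimately vouched for the issuer
    hash_match: bool     # was the message modified in transit?

def notification_text(s: SignatureSummary) -> str:
    """Distill the four facts into one unobtrusive notification line."""
    if not s.hash_match:
        return "WARNING: message was modified in transit"
    return (f"Signed by {s.subject} "
            f"(certified by {s.issuer}, root: {s.root_authority})")

print(notification_text(SignatureSummary(
    "alice@example.org", "Example CA", "Example Root CA", True)))
```

The point of the sketch is that nothing beyond these four fields needs to reach the user in the common case; everything else belongs behind a "details" click.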
I've also been putting energy toward letting those interactions that
don't require positive legal identification use certificates bearing
the identities that others actually interact with. As I wrote
elsewhere, it's entirely possible for two people to
use the same login name or nickname -- but I've not seen any system
where it is possible for two people to use the same login name within
the same authentication/authorization boundary.
What X.509 needs to be viewed as is not "a means of identification".
It needs to be viewed as "a means of authentication which uses the
same identity policy as the issuing realm" -- in other words, it's a
means of agreeing which set of rules is being used to identify each
person. (Nelson mentioned using certificates across AOL Instant
Messenger. This is a perfect example -- normally, when you
communicate over AIM, you're relying on the AIM realm for
identification and authentication of identity, which it does via
screenname/password tuples. When you use certificates, the user can
essentially throw away the AIM identification [since the only reason
to have it at that point is to tell the AIM network where to route the
message], and instead start talking with that person as though they
were inside the realm whose certificate they used to authenticate.
This also means that when the communication is over, just
because someone's using that AIM screenname doesn't mean that they're
the same person who authenticated via certificate earlier.)
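The realm point above can be sketched in a few lines: identity is only meaningful as a (realm, name) pair, never as a bare name. Everything here (realm names, fingerprints, the session table) is hypothetical:

```python
# Key sessions by (realm, name), not by name alone: the screenname
# only routes the message, while the certificate's realm decides
# whose identity rules apply for this session.
sessions: dict[tuple[str, str], str] = {}

def authenticate(realm: str, name: str, cert_fingerprint: str) -> None:
    sessions[(realm, name)] = cert_fingerprint

# Two different people can share the login name "jsmith"
# as long as their authentication/authorization realms differ:
authenticate("aim.example", "jsmith", "aa:bb:cc:dd")
authenticate("work.example", "jsmith", "11:22:33:44")

assert sessions[("aim.example", "jsmith")] != sessions[("work.example", "jsmith")]
```

Dropping the session entry when the conversation ends captures the last point: the next person to sign in with that screenname starts with no certified identity at all.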
>> And any other signature/encryption/whatever standard will suffer from this.
> If by "standard" you mean "security model," that's simply not true. Skype delivers the goods and takes only a few minutes of training. There is practically no training required to get users to use Skype in its secure mode, because it nicely follows the idea of "there is only one mode, and it is secure." Although it is not likely that we can move email to the same model, it is entirely plausible to adopt 90% of the ease-of-use, without losing any of the CA certificate benefit.
> If, on the other hand, you mean more literally a standards-based security model, then yes, that's true. Correct me if I'm wrong, but I don't think any standards approach ever came up with a security model that works for users.
It's entirely possible to make a secure mode easy to use for the
users. It's not possible to do it with any of the current
standards.*
*n.b. I haven't looked at all of them, but the ones from the IETF and
the ones from the ITU that I've looked at seem designed to require
advanced degrees to figure out what they're trying to say -- and the
current implementations seem to require pushing that complexity to the
user.
>>>>> E.g., after changing laptops recently, I still cannot s/mime to half
>>>>> my counterparties because I don't have their certs. This happens
>>>>> regularly with everyone I know...
>>>> I've changed my notebook harddisk quite often. I never lost my Seamonkey
>>>> cert DB containing the key history of the last 10 years since it's part
>>>> of the Mozilla profile which I have backups of.
> It is a curious thing: I have been using Tbird for many years, and each time I've never managed to transport more than a portion of the stuff across. I just spent some time looking and couldn't find the magic command, so I always wonder... I know there is a thing called profiles, but where does one import & export them?
>>> Each time you want to use another computer.
>> Oh, come on! How often do you *really* do this? And how do you move around the rest of your workspace? There are many more things to consider when you want real roaming than just your keys and PKCs of others.
> Sure. It's a nightmare. I do it around once a year at least -- full migration. This year, three times. I hate it.
> But it is reality. Saying "you don't need to do that" is just ignoring the problem by arguing some technicality which is totally irrelevant to the way users have to live their lives.
I'm so glad that one of the cognoscenti can manage to transport his
profile around. Yay, it's possible.
I've suffered three hard disk crashes this year. Fortunately, none
have managed to destroy my most valuable data... but honestly. If we
(the users) are supposed to keep our secret keys under our physical
control, what are we supposed to do? Worse, what if we keep the keys
in a TPM on our board, but we have to change the board?
I have an iDisk. I could theoretically upload my key and certificate
databases up there for backup, but with a single PIN being used to
unlock all of the keys in my PKCS#11 store I can't put different
passwords on them, I can't partition the trust that I have been
granted (unless I put them in separate modules, which I haven't yet
been able to accomplish), I can't do my part to implement the policies
that I'm trusted with upholding... my personal email PIN is the same
as my business email PIN is the same as my business contract-signature
PIN. (And if you ask me why I have my business contract-signature key
at home... haven't you ever worked from home?)
This last could be described as, "do I really have to trust that my
backup provider isn't going to break into my personal keystores?
Can't I do something to make it less likely that they'd succeed?"
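One way to sketch the "different passwords on different keystores" wish, using nothing beyond the Python standard library (store names and PINs are made up): derive an independent wrapping key per store with PBKDF2 before uploading the encrypted backups, so that whoever learns one PIN -- including the backup provider -- gains nothing toward the others.

```python
import hashlib
import os

def wrapping_key(passphrase: str, salt: bytes) -> bytes:
    """Derive a per-store wrapping key from its own passphrase."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

# Three stores, three PINs: the trust granted to each role stays
# partitioned even when the encrypted backups share one iDisk.
stores = {
    "personal-email":   "pin-one",
    "business-email":   "pin-two",
    "contract-signing": "pin-three",
}
salts = {name: os.urandom(16) for name in stores}
keys = {name: wrapping_key(pin, salts[name]) for name, pin in stores.items()}

# Independent keys: compromising one reveals nothing about the rest.
assert len(set(keys.values())) == len(stores)
```

This doesn't fix PKCS#11's single-PIN token model, of course; it only shows that the partitioning the paragraph asks for is cheap at the backup layer.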
>>> Why do you think I claim that mobile crypto is a prerequisite?
>> Either your mobile also runs the apps or you have to integrate your mobile with the PC on which the whatever-you-call-your-standard-enabled app runs. The latter is the same problem space as using smartcards/readers or USB tokens as a key store.
> Right. Guess what: user-oriented applications like webmail, google tools, skype (cough!) and so forth solve this problem by integrating the entire database into some form of network store.
Webmail and Google tools solve this problem by not having a local
store, period. You interact with them via an http or https
connection. (If you want to use certificates with webmail, though,
you need to use imap or pop3, and run your PKI-enabled app locally.)
Since Skype happens to be this big bad confusing pile of steaming
crud, and since Eddy's using it to enable a red herring attack, I'm
going to ignore it. (Hint: Eddy, just because someone else chooses
not to do things that you've done doesn't mean they're automatically
useless or harmful. I have not seen evidence of Skype being misused,
so I will not raise my voice to decry them -- even though you aren't
seeing evidence of Skype having done an audit, and are raising the hue
and cry based on that. Also, there is nothing wrong with Skype
holding private keys, even if they're not an escrow service. All
Skype needs to do is ensure that they're only being used on behalf of
the account that they're assigned to. All the users need to do is
realize that it's no more secure than AIM's screenname/password
authentication, though it's unlikely that others signed into the
system -- even those whose machines are being routed through for NAT
traversal -- can piece together any of the conversation.)
>>>>> * it needs a few tweaks in UI to align it with the safe usage models,
>>>>> so, for example the "signing" icon has to go because it cannot be used
>>>>> for signing, because signing is needed for key distribution. It also
>>>>> cannot be used for signing unless reference is made to the conditions of
>>>>> signing, and no UI vendor has ever wanted to give time&space to a CPS.
>>>> Maybe it's me but frankly I don't understand what you say here.
>>>> Especially I don't see the need for a "UI vendor" to define a CPS (if
>>>> Certificate Practice Statement is meant here).
> Not quite, what I mean here is that somehow, the user has to figure out what is happening. The PKI view is that this is done by referring to the CPS. The secure browsing view is that it is done by the vendor, on behalf of the user, and the CPS is reviewed by the vendor for that purpose. (Yes, these two views are at odds, and the vendor has some questions to answer here...)
Not my most stellar work, and probably extremely substandard by the
views of those assembled. However, it's a CPS (from the POV of the
certifier, not the vendor) which describes a means to tweak the UI to
a point that I consider necessary.
> One could surmise that this situation/confusion is good enough for encryption between websites and users; given that there are lots of other protections in place, etc. Indeed, this is our informal preference here, in that we prefer to get more CAs in and more encryption happening, and this addresses the current threat scenario which breaches secure browsing by exploiting its rarity.
> However: one would be hard-pushed to suggest that this situation / confusion could be acceptable for users to interchange legally binding signatures, because there is an absence of other protections in place, or those protections that are in place are uncertain.
> Recall Nelson's view that he does not sign anything without reading. The wider principle here is that one should not enter into an agreement unless it is understood. Now, applied to S/MIME, if it implied some form of digital signing over emails, then it should not be used, because one cannot read the implied contract (CPS, or whatever), and nobody else is stepping up to say it's ok, sign away, we're watching your back. Full understanding is not possible, at any of many layers and levels.
This, right here, is why I have a serious problem accepting the
current CA model.
I've also come up with a means of potentially helping with this
situation, but it relies on OS vendors actually stepping up to the
plate.
> In order to satisfy users' needs for clarity, the governance UI should present a workable human signing view to the user. But, as we have seen in recent threads, that is fantasy. It's a non-starter.
> Ergo, S/MIME client UI implementations should be modified to drop any sense of signing, by default, and the digsigs should be used for integrity protection and key distribution.
S/MIME client UIs need to stop handling S/MIME differently from
non-S/MIME (except for the addition of a badge to the chrome).
I'm not yet ready to go into the entire set of traffic-analysis
attacks which can be applied against S/MIME. However, they do exist,
and their existence (and lack of mitigating factors) is worrying to
me.
>>> I believe Ian is referring to the problem which made me start this thread...
>>> That is, the need for end-users to become trust managers.
> Yes. Or, the absence of end-to-end trust management in the system, if we are using that language.
Erm... more importantly, there is no /central/ trust manager. And as
long as the people clamoring to become central trust managers (i.e.,
the root CAs) refuse to accept that I need information that they won't
certify, and that they certify information that is completely and
abjectly useless to me, I cannot accept them as trust managers.
Also: I hereby put forth that Startcom is not "free". It derives
monetary benefit from the personal information that it demands of
anyone before they're ever approved to become users of the system.
See http://www.turbulence.org/Works/swipe/calculator.html for
details.
>> Everybody is a trust manager. All day everybody is making trust decisions. But there's no ultimate trust.
> No user can make a trust decision without evaluation of the circumstances. Without info, it is called gambling. They are indeed good at evaluation, given the limited resources that they can apply at any time. However, as S/MIME does not provide any "circumstances" that suggest a reliable framework for agreements, it should drop the suggestion entirely.
> (Users as a mass have already rejected S/MIME as a signing framework, so this is more about protecting those users who might otherwise be mistaken or might otherwise be sold a product by their IT supplier.)
Sure there's ultimate trust. The problem is that there are as many
points of ultimate trust as there are people. If governments want to
get into the business of dictating arbitrary ultimate trust points,
that number goes down to 230 or however many countries there currently
are in the world.
If the UN decided, after that, to issue and run its own CA, that would
create one single ultimate point of trust... for legal interactions,
for fiscal interactions. But not for other interactions.
Today, there is no single point of ultimate trust (and thus no single
point of failure) for legal or fiscal interaction. And for those of
us in non-dictatorships, there would still be no single point of
ultimate trust for non-legal/non-fiscal (i.e., social) interaction.
In the US, at least, there's the right of free assembly.
Also, getting into the business of telling someone who to trust, or
what information to trust, puts one squarely into the role of
"fiduciary advisor". Insurance agents, bankers, brokers, and lawyers
are basically the kind of people who get into that group -- and there
are laws strictly limiting what those people can do with their
clients' information or with their clients' trust.
I'm rather sick of asking this question: "What can we do to get the
users to use the technologies that have been developed?"
I'd rather ask this question: "What do the users need that can have
partial or total solutions implemented using the technologies that
have been developed?"