“It is easy to be beautiful; it is difficult to appear so.”
― Frank O'Hara, Meditations in an Emergency
“Usable security does not mean getting people to do what we want. It means creating security that works, given (or despite) what people do.”
― Bruce Schneier
So much of the difficulty with privacy and security products has little to do with the privacy or security itself. There’s nothing about verifying signatures or X25519 that warrants a clunky interface. And yet, we’re left with green text on black backgrounds, pop-ups that exist only to be dismissed, and tomes that describe arcane login processes.
Security certainly adds new constraints. It can make good design harder. But too often, carelessness toward design precludes the usefulness of the security in the first place. What’s the use of PGP if your friend sends the wrong key by mistake anyway? (Even PGP's creator got fed up with its design.)
Bringing user-friendly design to security matters. Technical skills cannot be a prerequisite to digital security. The people whose lives and reputations depend on it are rarely cryptographers or engineers themselves. And in a world of instant checkouts and auto-magic links, even minor inconveniences can be a dealbreaker. We need to stop asking whether people care about privacy and start asking whether we, as designers and engineers, care enough about how people interact with privacy. If we’re truly to make the internet private for everyone, as we have set out to do at Skiff, usability must be front and center.
At Skiff, we’ve built a privacy-first platform for collaboration. Security is built into every feature and aspect of our product, from onboarding down to the emojis and the text editor. That means we've had to face usability challenges with almost everything we've built. We're still learning, but along the way we've come up with a framework to help guide us. For each feature, we ask ourselves, does it:
1. Work with what users will actually do, rather than what we instruct them to do?
2. Respect that users, not us, set the constraints and choose the path?
3. Make the safest common actions the easiest ones?
4. Fence off undesirable or dangerous actions?
5. Cost users no more time than a less private alternative?
6. Reflect the underlying security in the interface itself?
The principles behind these questions are hopefully straightforward: for 1 and 2, it's that users will do what's convenient regardless of any instructions; they set the constraints and decide the path, not us. For 3 and 4 – while users set the path, there's still a lot we can do to frame and organize it; it's our job to re-pave common actions and fence off undesirable ones (see image above). For 5, it's that any extra time cost should be put on us during development, not on our users; otherwise, it's impossible to be a true alternative. And 6 – for users to intuitively trust the product, the interface must reflect the underlying security.
Applying this framework to a real product is hard. Consider our account recovery process. When you forget your password on Skiff, there’s not much we can do, since we don’t store passwords in any form on our servers (we use zero-knowledge proofs for authentication). Users instead need to generate a random symmetric key that they store physically. This can reduce account safety to a crumpled-up piece of paper and lead to a bad experience for forgetful users.
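As a hedged sketch of that older flow: generating a random symmetric key and encoding it for physical storage might look like the following. The key length and grouped-hex encoding here are illustrative choices, not Skiff's actual format.

```python
import secrets

# Generate a 256-bit symmetric recovery key.
recovery_key = secrets.token_bytes(32)

# Encode it as grouped hex, which is easier to copy onto paper by hand.
hex_key = recovery_key.hex()  # 64 hex characters
paper_code = "-".join(hex_key[i:i + 8] for i in range(0, 64, 8))

print(paper_code)  # e.g. "3f9a1c0e-..." (eight groups of eight)
```

The point of the sketch is the failure mode the paragraph describes: once this code is printed and the key discarded, the paper is the only path back into the account.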
Skiff’s design and security teams worked closely together to make this process smoother and more secure. Our security team integrated Shamir secret sharing to split the recovery key into three pieces. Now, one key is stored on paper, another in the browser, and a third is encrypted and stored on the server. To recover an account, two out of the three shares must be combined. This ensures that at no point can the server decrypt user data, while still giving forgetful users a backup on their browser.
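A minimal sketch of 2-of-3 Shamir secret sharing makes the mechanics concrete. The prime field, share indices, and integer encoding below are illustrative assumptions, not Skiff's implementation; the key idea is that a degree-1 polynomial is fixed by any two points, so any two shares recover the secret while one share reveals nothing.

```python
import secrets

PRIME = 2**127 - 1  # a Mersenne prime large enough for a 16-byte secret

def split(secret: int, n: int = 3, k: int = 2) -> list[tuple[int, int]]:
    """Split `secret` into n shares, any k of which reconstruct it."""
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the constant term."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

recovery_key = secrets.randbelow(PRIME)
paper, browser, server = split(recovery_key)

assert combine([paper, server]) == recovery_key   # forgetful user, no browser
assert combine([browser, server]) == recovery_key  # lost paper, same browser
```

Note that the server's single share is useless on its own, which is what preserves the zero-knowledge property described above.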
Our design team, meanwhile, updated the onboarding UX to make it nearly impossible for a user to misconfigure their recovery keys. This all comes together in the most boring way possible (we hope). When you forget your password, you check your email inbox and set a new password. It just works. For all the extra implementation work, the user actually performs fewer actions.
Skiff’s public link feature follows a similar template. It implements a novel cryptographic protocol to ensure Skiff never has access to the link or document. But as far as users are concerned, it’s just a toggle – same as any site.
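One common pattern for links like this (a sketch under our own assumptions, not Skiff's actual protocol) is to carry the decryption key in the URL fragment, which browsers never transmit to the server:

```python
import secrets
from urllib.parse import urlsplit

# Hypothetical link format: the document key rides in the fragment.
doc_key = secrets.token_urlsafe(32)
link = f"https://example.com/doc/abc123#{doc_key}"

parts = urlsplit(link)
request_path = parts.path            # the only part the server sees
assert doc_key not in request_path   # the fragment stays client-side

# Client-side JavaScript would read the fragment and decrypt locally;
# here we just confirm the key is recoverable from the link.
assert parts.fragment == doc_key
```

Whatever the exact protocol, the user-facing result is the same: sharing is one toggle and one copied link, with the cryptography invisible.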
Make no mistake – secure design isn’t always about invisibility. Being conspicuous serves a critical purpose too. Responsive indicators, well-placed labels, even sounds and animations can create a sense of security. As long as the design is honest, these components can reinforce and validate the actual security.
Other, more subtle factors also play a role. When it comes to tone and voice, we’re careful to avoid exclamations, excessive emojis, and condescending descriptors like “my documents.” It's hard to trust a platform if it's too playful.
Down to the last pixel, we also strive for every element to be aligned, centered, and perfectly set. It's not out of unbridled perfectionism. A single visual artifact calls into question all the things you can’t see. And in aggregate, even subtle flaws are unsettling.
There’s a reason Skiff doesn’t default to dark mode. As tempting as monospace type and command-line interfaces can be, they often alienate more than they help. The initial coolness of feeling like a hacker might build some sense of security, but it's short-lived. Fun becomes gimmicky. And over time, these references can detach how the product looks from how the product actually works.
We're currently putting these principles to work at Skiff, but there's still much to improve. If you’d like to provide feedback on the product itself, join our beta and send suggestions either in-app or by email (firstname.lastname@example.org).
If you’re working on your own security projects, feel free to fork and contribute to our open source design system: Skiff UI.
Finally, for product engineers and designers excited by the intersection of security and usability – if you want to help us craft new design patterns for a privacy-first future, check out our careers page.
Skiff is making privacy accessible to everyone.