Publication details

“These results must be false”: A usability evaluation of constant-time analysis tools

Authors

Marcel Fourné, Daniel De Almeida Braga, Ján Jančár, Mohamed Sabt, Peter Schwabe, Gilles Barthe, Pierre-Alain Fouque, Yasemin Acar

Year of publication 2024
Type Article in Proceedings
Conference 33rd USENIX Security Symposium
MU Faculty or unit

Faculty of Informatics

Citation
Web https://www.usenix.org/conference/usenixsecurity24/presentation/fourne
Keywords constant-time; timing attacks; crypto library; survey; developer survey; user study; usable security; human factors; cryptography
Description

Cryptography secures our online interactions, transactions, and trust. To achieve this goal, not only do cryptographic primitives and protocols need to be secure in theory, they also need to be securely implemented by cryptographic library developers in practice. However, implementing cryptographic algorithms securely is challenging, even for skilled professionals, which can lead to vulnerable implementations, especially to side-channel attacks. For timing attacks, a severe class of side-channel attacks, there exists a multitude of tools intended to help cryptographic library developers assess whether their code is vulnerable to timing attacks. Previous work has established that, despite an interest in writing constant-time code, cryptographic library developers do not routinely use these tools due to their general lack of usability. However, the precise factors affecting the usability of these tools remain unexplored. While many of the tools are developed in an academic context, we believe that it is worth exploring the factors that contribute to or hinder their effective use by cryptographic library developers.

To assess what contributes to and detracts from the usability of tools that verify constant-timeness (CT), we conducted a two-part usability study with 24 (post)graduate student participants on 6 tools, across diverse tasks that approximate real-world use cases for cryptographic library developers. We find that all studied tools are affected by similar usability issues to varying degrees, with no tool excelling in usability, and with usability issues preventing their effective use. Based on our results, we recommend that effective tools for verifying CT need usable documentation, simple installation, easy-to-adapt examples, clear output corresponding to CT violations, and minimal, noninvasive code markup. We contribute first steps toward achieving these with limited academic resources, providing our documentation, examples, and installation scripts.