Scientists at the Daegu Gyeongbuk Institute of Science and Technology (DGIST) in Korea have developed a series of algorithms that more efficiently measure how difficult it would be for an attacker to find the secret keys of cryptographic systems.
Their approach has been published in the journal IEEE Transactions on Information Forensics and Security, and could significantly reduce the computational cost of validating encryption security.
Cryptography is used in cybersecurity to protect user information, and random number generation is essential when producing cryptographic material. It is precisely this randomness that is largely responsible for the security of different cryptographic systems.
Scientists use 'min-entropy', a metric for estimating and validating how good a given source is at generating the random numbers used for data encryption. Data with low min-entropy are easier to decipher, while data with high min-entropy are much more difficult to predict.
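To make the metric concrete, min-entropy is defined as the negative base-2 logarithm of the probability of the single most likely outcome, so it reflects an attacker's best one-shot guess. A minimal sketch of computing it from observed samples (an illustrative helper, not the authors' algorithm) might look like this:

```python
from collections import Counter
import math

def min_entropy(samples):
    """Min-entropy (bits per symbol) of an empirical distribution.

    H_min = -log2(p_max), where p_max is the probability of the
    most likely symbol. Lower values mean more predictable output.
    """
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -math.log2(p_max)

# A biased source vs. a uniform one:
biased = [0] * 90 + [1] * 10   # an attacker guessing '0' is right 90% of the time
uniform = list(range(16))       # 16 equally likely 4-bit symbols
print(min_entropy(biased))      # low: about 0.15 bits per symbol
print(min_entropy(uniform))     # high: 4.0 bits per symbol
```

The biased source yields roughly 0.15 bits per symbol while the uniform 4-bit source yields exactly 4.0, matching the intuition that low-entropy data are easier to guess.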
But, as experts point out, accurately estimating min-entropy is tricky, especially for certain types of sources, which can lead to misestimates.
To solve this, the scientists developed an algorithm that estimates min-entropy from a complete data set, along with an estimator that requires only a limited number of data samples.
The accuracy of the latter improves as the number of samples increases, and it does not need to store complete data sets, so it can be used in applications with strict memory, hardware, and storage restrictions, such as Internet of Things (IoT) devices.
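The idea of estimating without storing the full data set can be illustrated with a streaming estimator that keeps only per-symbol counts and updates its estimate as samples arrive. This is a generic sketch of the online-estimation idea (a most-common-value style estimate), not the DGIST team's published algorithm:

```python
from collections import Counter
import math
import random

class StreamingMinEntropyEstimator:
    """Online min-entropy estimate that keeps only per-symbol counts,
    never the raw sample stream (illustrative sketch only)."""

    def __init__(self):
        self.counts = Counter()
        self.n = 0

    def update(self, symbol):
        # Constant work per sample; memory grows with alphabet size only.
        self.counts[symbol] += 1
        self.n += 1

    def estimate(self):
        # -log2 of the highest observed relative frequency.
        if self.n == 0:
            return 0.0
        return -math.log2(max(self.counts.values()) / self.n)

# Feed it 4-bit symbols from a (pseudo)uniform source:
random.seed(1)
est = StreamingMinEntropyEstimator()
for _ in range(10000):
    est.update(random.getrandbits(4))
print(est.estimate())  # approaches 4.0 bits per symbol as samples accumulate
```

Because only the count table is retained, memory stays bounded by the alphabet size regardless of how many samples are processed, which is the property that matters for constrained IoT hardware.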
According to the specialists' evaluations, the algorithms could estimate min-entropy 500 times faster than the current standard algorithm while maintaining the accuracy of the estimate.
The scientists responsible for the study are now working to improve the accuracy of these and other entropy-estimation algorithms for cryptography, and to explore how they can improve privacy in different machine learning applications.