Timing attacks have a long and successful history against a wide variety of systems and technologies. This is because they can take so many forms, from race conditions and blind SQL injection vectors that exploit delays in execution, through to the timing of a UNIX login.
One of the classic timing attacks is based on measuring the difference in the time an application takes to complete two different but related tasks. If the code path followed by different inputs varies in length or complexity, the execution time for the two inputs can vary slightly but measurably. The most common example is the time taken by a login mechanism to process authentication attempts. When the supplied username is valid, the code path can often be longer than that taken for an invalid user, which can allow a timing attack to occur. This type of attack has been widely publicised and there are many examples which are known to work.
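The difference in code path length can be illustrated with a minimal sketch. The function names, user store and parameters below are purely hypothetical, not taken from any real product: the "leaky" version only performs the expensive password hash when the username exists, so a failed attempt against a valid account takes longer, while the second version keeps the work comparable on both paths.

```python
import hashlib
import hmac

# Hypothetical user store: username -> salted password hash (illustrative only).
USERS = {
    "alice": hashlib.pbkdf2_hmac("sha256", b"s3cret", b"salt", 100_000),
}

def login_leaky(username, password):
    """Returns True on success. Timing leak: the expensive key
    derivation runs only when the username exists, so failures for
    valid usernames take measurably longer than for invalid ones."""
    stored = USERS.get(username)
    if stored is None:
        return False  # fast path: no hash is computed
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), b"salt", 100_000)
    return hmac.compare_digest(candidate, stored)

def login_constant_path(username, password):
    """Mitigation sketch: always perform the key derivation so valid
    and invalid usernames follow approximately the same code path."""
    stored = USERS.get(username)
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), b"salt", 100_000)
    if stored is None:
        # Compare against a dummy digest so the work done is comparable,
        # then always fail for an unknown user.
        return hmac.compare_digest(candidate, b"\x00" * len(candidate)) and False
    return hmac.compare_digest(candidate, stored)
```

The mitigation here is deliberately simplified; real implementations also need to consider caching, database lookup times and other sources of variance.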
These types of attack are indirect, and it can be difficult to identify every instance in which they are viable. As a result they can occur even in established and widely deployed technologies. The following example was identified on a number of occasions during penetration testing, although the cause could never be isolated.
The issue relates to installations of Citrix Access Gateway where users can authenticate with Microsoft Active Directory (AD) credentials. In these scenarios it has been observed that authentication attempts which use valid AD usernames take a marginally longer time to return a failed login message to the user’s browser. This enables an attacker to identify whether a username is valid, which, in turn, assists password-guessing attacks.
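An attacker exploiting such behaviour would typically time repeated failed logins and compare medians across candidate usernames. The sketch below simulates that measurement locally; the `simulated_login` function, the delay values and the usernames are all invented stand-ins for a real HTTP authentication request, chosen only to mirror the behaviour described above.

```python
import statistics
import time

def simulated_login(username):
    """Hypothetical stand-in for an HTTP authentication request.
    An attempt against a valid account does slightly more work before
    failing, mirroring the observed behaviour (delays are invented)."""
    if username == "alice":   # assumed-valid account for this simulation
        time.sleep(0.005)     # extra processing on the valid-user path
    time.sleep(0.001)         # baseline response time
    return False              # login always fails (wrong password)

def median_response_time(username, trials=9):
    """Times several attempts and returns the median, which is more
    robust against network jitter than a single sample."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        simulated_login(username)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

valid = median_response_time("alice")
invalid = median_response_time("mallory")
# A consistently larger median for one username suggests that it exists.
print(f"alice: {valid * 1000:.1f} ms, mallory: {invalid * 1000:.1f} ms")
```

Against a real target the per-request difference is far smaller than the jitter of a single sample, so many more trials and statistical comparison of the distributions would be needed.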
It is worth noting that a number of attempts were made to replicate this issue under controlled conditions, and the vendor was actively involved in this process. However, it was not possible to reproduce the behaviour observed in production environments and, with customers naturally reluctant to disclose details of their internal environments, the cause could not be positively identified.
We are raising the issue now for two reasons: to provide security professionals with information about these observations, and to encourage further investigation that could identify the underlying cause or its dependencies. It is hoped that, with more open discussion and further testing, more can be learned about this issue and a resolution identified.