You have to know:
- the algorithm
- the shortest path through the algorithm
- the longest path through the algorithm (both paths are illustrated in the sketch after this list)
- how the path lengths or numbers of iterations vary with the number of data items processed
- what an "average" data set looks like.
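As a concrete illustration of the shortest and longest paths, consider linear search: the shortest path occurs when the target is the first element, the longest when the target is absent and every element must be examined. This is a minimal sketch; the function name and test data are illustrative, not taken from the text.

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if absent."""
    for i, value in enumerate(items):
        if value == target:
            return i      # shortest path: target found at index 0
    return -1             # longest path: every element examined

data = [7, 3, 9, 4, 1]
print(linear_search(data, 7))   # best case: 1 comparison
print(linear_search(data, 8))   # worst case: len(data) comparisons
```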
The time complexity of the best case can be calculated by considering what happens to the execution time, or to the number of iterations, when you increase the dataset by 1 item, by 2 items, by a factor of 2, or by a factor of 3. If the execution time increases linearly with the number of data items, we say the time complexity is O(N), that is, on the order of N, the number of data items. If the execution time varies as the square of the number of data items, we say the time complexity is O(N^2). The time complexity can be of other orders, such as exponential, for example O(2^N), or have some other relationship to the number of data items, for example O(N^(3/2)).
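One rough way to carry out this measurement in practice is to time the algorithm at several input sizes and compare the ratios. The sketch below times a function at N, 2N, and 4N items: for an O(N) algorithm the time should roughly double at each step, while for an O(N^2) algorithm it should roughly quadruple. The quadratic example function is assumed here purely for illustration.

```python
import time

def quadratic_work(items):
    """An O(N^2) example: count all ordered pairs the slow way."""
    count = 0
    for a in items:
        for b in items:
            if a <= b:
                count += 1
    return count

for n in (1000, 2000, 4000):          # N, 2N, 4N
    data = list(range(n))
    start = time.perf_counter()
    quadratic_work(data)
    elapsed = time.perf_counter() - start
    print(f"N={n:5d}  time={elapsed:.4f}s")
# If doubling N roughly quadruples the time, growth is O(N^2);
# if it roughly doubles the time, growth is O(N).
```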
The same assessment of time complexity can be done for the worst and average cases.
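Insertion sort is a standard example where the three cases differ: it runs in O(N) time on already-sorted input (best case), but O(N^2) on reverse-sorted input (worst case) and on random input (average case). A minimal sketch, assuming a textbook insertion sort:

```python
import random
import time

def insertion_sort(items):
    """Textbook insertion sort: O(N) best case, O(N^2) worst and average."""
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        while j >= 0 and items[j] > key:  # inner loop rarely runs on sorted input
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key

n = 5000
cases = {
    "best (sorted)":      list(range(n)),
    "worst (reversed)":   list(range(n, 0, -1)),
    "average (shuffled)": random.sample(range(n), n),
}
for name, data in cases.items():
    start = time.perf_counter()
    insertion_sort(data)
    print(f"{name:20s} {time.perf_counter() - start:.4f}s")
```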