Yes, because you can make any implementation of anything as slow as you'd like. You can make a quick sort as slow as a bubble sort, for example.

Looking at algorithmic complexity, the only source I could find is Wikipedia, which states:

> [A radix trie] supports the following main operations, all of which are O(k), where k is the maximum length of all strings in the set:

> Lookup: ...

> Insert: ...

> Delete: ...

> Find predecessor: ...

> Find successor: ...

You can think of O(k) as roughly O(log N) if the tree is balanced. Space complexity is at most O(N * k), because every key obviously needs to be stored once. However, I suspect the space used in practice hovers closer to O(k * log N), because of the prefix-sharing ("joining") effect the radix trie depends upon.
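To make the O(k) bound and the prefix-sharing concrete, here is a minimal radix-trie sketch (illustrative only; the `RadixTrie` class and its node layout are my own assumptions, not a reference implementation). Each edge carries a compressed string of characters, and a lookup walks at most k characters of the key regardless of N:

```python
class RadixTrie:
    """Minimal radix trie sketch. Each node maps the first character of an
    outgoing edge to [edge_label, child_node, is_terminal]; the compressed
    edge labels are the prefix-sharing effect discussed above."""

    def __init__(self):
        self.root = {}

    def insert(self, key):
        node = self.root
        while key:
            first = key[0]
            if first not in node:
                node[first] = [key, {}, True]  # hang the whole remainder on one edge
                return
            edge, child, terminal = node[first]
            # Find the common prefix of the key and this edge.
            i = 0
            while i < min(len(key), len(edge)) and key[i] == edge[i]:
                i += 1
            if i == len(edge):
                if i == len(key):
                    node[first][2] = True  # key ends exactly at this node
                    return
                key = key[i:]  # consume the edge, descend
                node = child
            else:
                # Split the edge where the key diverges from it.
                rest = edge[i:]
                new_child = {rest[0]: [rest, child, terminal]}
                node[first] = [edge[:i], new_child, i == len(key)]
                if i == len(key):
                    return
                key = key[i:]
                node = new_child

    def lookup(self, key):
        node = self.root
        while key:  # walks at most len(key) characters in total: O(k)
            first = key[0]
            if first not in node:
                return False
            edge, child, terminal = node[first]
            if not key.startswith(edge):
                return False
            if len(key) == len(edge):
                return terminal
            key = key[len(edge):]
            node = child
        return False  # the empty key is not stored in this sketch
```

Inserting "test", "team", and "toast" ends up sharing the edges "t" and "e" between keys, which is where the sub-O(N * k) space usage comes from.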

Hash table implementations vary too widely to pin down a single algorithmic complexity. In general, though, a lookup is O(1 + m), where *m* is the number of elements in the probed bucket (a different quantity from the key length *k* above). A hash table's space complexity is O(N * k) (or O(N) if the keys themselves aren't stored) plus some constant for storing the buckets.
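The O(1 + m) shape is easiest to see in a separate-chaining table: one hash computation, then a linear scan of a single bucket. This is a sketch under assumed names (`ChainedHashTable` is not any particular library's API):

```python
class ChainedHashTable:
    """Illustrative separate-chaining hash table. A lookup hashes once
    (the O(1) part) and then scans one bucket (the O(m) part, where m is
    that bucket's length)."""

    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def insert(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite an existing entry
                return
        bucket.append((key, value))  # note: the key is stored alongside the value

    def lookup(self, key):
        for k, v in self._bucket(key):  # linear scan of one bucket: O(m)
            if k == key:
                return v
        return None
```

Note that with string keys, both the initial `hash(key)` and each equality comparison in the bucket can themselves cost up to the key's length, so string length doesn't entirely disappear from hash-table costs either.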

If you have a balanced radix trie and an imbalanced hash table, it is likely the trie (O(log N) lookups) will perform better than the hash table (O(N) worst-case lookups when everything collides into one bucket). However, in the general case, with a good radix trie implementation and a good hash table implementation, the hash table will win.

A clear advantage of a radix trie is that it needs no hash function and does not duplicate the keys. This can matter for performance, especially under frequent string lookups, inserts, and iterations that inspect the keys.
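As a small illustration of that last point, here is a sketch using a plain (uncompressed) trie: a simple DFS yields all stored keys in sorted order without ever computing a hash, whereas a hash table would have to collect and sort its keys to do the same. The `"$"` end-of-key sentinel is my own assumption, and the sorted-order guarantee assumes keys are plain alphanumeric strings:

```python
def trie_insert(root, key):
    """Insert a key into a plain dict-of-dicts trie."""
    node = root
    for ch in key:
        node = node.setdefault(ch, {})
    node["$"] = True  # assumed end-of-key sentinel; "$" sorts before alphanumerics

def trie_keys(node, prefix=""):
    """Yield all stored keys in lexicographic order via DFS.
    No hash function is involved, and keys are rebuilt from the path
    rather than read from duplicated storage."""
    for ch in sorted(node):
        if ch == "$":
            yield prefix
        else:
            yield from trie_keys(node[ch], prefix + ch)
```

For example, inserting "toast", "team", "test", and "tea" and then iterating yields them in sorted order, with shared prefixes visited only once.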