The typing module allows assigning complex type signatures to aliases, which can then be used interchangeably with the actual type. The problem, as I see it, is that this makes type aliases easy to confuse with actual classes: in a definition like this
def foo(obj: MyComplexObject):
    ...
there is no way to tell whether MyComplexObject is implemented as a class somewhere or is just a type alias. That strikes me as an unfortunate source of confusion, especially when the alias is (re)used far from its original definition. Is this generally accepted, or is there a convention for distinguishing type aliases from actual classes?
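For concreteness, the kind of alias I have in mind would be defined somewhere far from its use, roughly like this (the aliased type here is made up):

from typing import Dict, List, Tuple

# A made-up alias: at the call site, foo(obj: MyComplexObject) reads
# exactly as if MyComplexObject were a class defined elsewhere.
MyComplexObject = Dict[str, List[Tuple[int, str]]]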
I believe type aliases and regular types are indistinguishable by design: type aliases are meant to be exact stand-ins for whatever type they're aliasing.
So, to use your example, it doesn't really matter whether MyComplexObject is an actual class or a type alias: both behave equivalently at runtime and at static type-check time.
And because the type checker gives you immediate feedback on whether you're using obj in a type-safe way, there are few opportunities to get confused: either you use the obj parameter correctly, or the checker flags the mistake right away. (Similarly, IDEs such as PyCharm that understand PEP 484 semantics will also understand type aliases and will correctly auto-complete and flag mistakes.)
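For instance, here's a rough sketch of what that feedback loop looks like (the alias and function are made up; mypy is just one example of a PEP 484 checker):

from typing import Dict, List

# A made-up alias and function, purely to illustrate the feedback loop.
UserScores = Dict[str, List[int]]

def foo(obj: UserScores) -> None:
    for name, scores in obj.items():   # the checker knows obj is a dict here
        print(name, sum(scores))

foo({"alice": [1, 2, 3]})   # fine
foo("not a dict")           # flagged by mypy/PyCharm: incompatible argument type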
It's also worth noting that in the case where you specifically alias other classes, the alias and the original class are literally indistinguishable at runtime: they're both variables that refer to the same underlying type object. (So arguably, there's nothing to actually be confused about: both the alias and the original type are the "same thing".)
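For example (names made up), aliasing a class is just an ordinary assignment, so both names end up pointing at the very same type object:

class MyComplexObject:
    pass

# "Aliasing" the class is nothing more than binding a second name to it...
Alias = MyComplexObject

# ...so the two names are literally the same object at runtime.
print(Alias is MyComplexObject)               # True
print(isinstance(MyComplexObject(), Alias))   # True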
Consequently, there's been very little incentive to find a naming convention for distinguishing between types and type aliases -- it doesn't really end up being an issue in practice.
That said, there are some implicit conventions for when to use type aliases. For example, people usually don't alias classes directly: type aliases, as Ethan said, are mainly meant to help simplify larger compound types that are repeated in multiple places.
This means type aliases mostly show up where you would otherwise have a bunch of really ugly type signatures. It then becomes a tradeoff: the reader does need to spend time looking up the definition of the alias and keeping it in their head, but once they do, reading the rest of the code ought to be much more pleasant.
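To make that tradeoff concrete, here's roughly the kind of thing I mean (the alias and signature are made up):

from typing import Dict, List, Tuple

# Without an alias, this compound type would be spelled out in every signature:
#   def merge(a: Dict[str, List[Tuple[int, str]]],
#             b: Dict[str, List[Tuple[int, str]]]) -> Dict[str, List[Tuple[int, str]]]: ...

# With an alias, the reader looks up Index once and the signatures stay readable:
Index = Dict[str, List[Tuple[int, str]]]

def merge(a: Index, b: Index) -> Index:
    return {key: a.get(key, []) + b.get(key, []) for key in {*a, *b}}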
So if you still find type aliases potentially confusing and prefer to avoid them unless they're necessary, that's perfectly fine: that's what pretty much everybody else does anyway. The only real difference is where each person draws that line between "unnecessary" and "necessary".