First, my reasons to do this - I know it's a bad idea but I am out of ideas.
I want to install a package which requires an ld version higher than the one in the repos of my CentOS 6.5. So I should either set everything up in Docker and run that in production, something I lack experience with and don't feel comfortable doing for a serious project, or upgrade ld manually by building it from an external source, which I have read could wreck my CentOS install. So the last option I am left with is to install the package on another machine and manually copy it into site-packages.
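For completeness, this is roughly how I compare the linker versions on the CentOS box; the exact version numbers depend on the repo, so treat it as a sketch:

```sh
# Version of the linker currently installed (ld is part of binutils)
ld --version

# Version of binutils the CentOS 6.5 repos offer, for comparison
yum info binutils
```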
I have successfully installed the package on my home laptop under Debian.
The advice I found everywhere is to copy the whole site-packages directory, which I don't want to do because the two machines have different packages installed and I want to avoid messing up anything else.
I copied the package's .so build and its .egg-info. On the target machine, pip freeze does show the transferred package. However, Python can't find it when I try to import and use it.
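If it helps, these are the kinds of checks I can run on the target machine (mypackage is just a placeholder for the real package name):

```sh
# Where does pip think the package's files live?
pip show -f mypackage

# Which directories does the interpreter actually search on import?
python -c "import sys; print('\n'.join(sys.path))"

# A .so built on Debian may also link against libraries that are missing
# or older on CentOS; ldd lists unresolved shared-library dependencies
ldd /path/to/site-packages/mypackage/*.so
```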
Am I missing something else?
Don't do any of that.
Don't mess with the system Python's site-packages directory; it belongs to the system Python environment only. You should only add or remove code in there through your OS package manager (yum on CentOS). This is especially true on Linux, where many OS services rely on the system Python.
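For anything that genuinely has to be installed system-wide, stick to yum; a purely illustrative example:

```sh
# Add or remove Python libraries system-wide only through the OS package manager
yum install python-lxml
yum remove python-lxml
```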
So what to do instead? Use a virtualenv and/or pipx to isolate the package you want to install, and its dependencies, from the system versions.
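A minimal sketch with virtualenv, assuming the package is installable from PyPI and is called mypackage (both assumptions; substitute your real package name):

```sh
# Get virtualenv itself through the OS package manager
# (on CentOS 6 the python-virtualenv package typically comes from EPEL)
yum install python-virtualenv

# Create an isolated environment that leaves the system site-packages untouched
virtualenv ~/envs/myproject

# Activate it and install the package plus its dependencies inside it
source ~/envs/myproject/bin/activate
pip install mypackage

# Imports now resolve against the env, not the system Python
python -c "import mypackage"
```

Everything the install pulls in stays inside ~/envs/myproject, so the system Python and its yum-managed packages are never touched.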