I am trying to make a high-speed shared buffer between two different Python interpreters. For this I created a file in /tmp and then used it to make the mmap object:
import mmap, os
Size = 4096  # example size in bytes
fd = os.open("file", os.O_CREAT | os.O_TRUNC | os.O_RDWR)
assert os.write(fd, b'\x00' * Size) == Size  # zero-fill so the file is Size bytes long
mem = mmap.mmap(fd, Size, mmap.MAP_SHARED, mmap.PROT_WRITE)
Later I ran a few tests and noticed that if I make the mmap anonymous, the shared buffer is much faster. So is there a way to share an anonymous mmap between two Python subprocesses?
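For reference, the anonymous variant would look roughly like this (same Size as above):
mem = mmap.mmap(-1, Size)  # fd -1 = anonymous mapping, MAP_SHARED by default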
Memory mapped with MAP_ANONYMOUS can only ever be shared with descendant processes, and even then not across execve.
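As a rough illustration of that descendant-only sharing (Unix only, the names here are just for the sketch), a child forked by multiprocessing inherits an anonymous shared mapping created before it was started:

import mmap
import multiprocessing as mp

def writer(buf):
    buf[0] = 99                    # write through the inherited anonymous mapping

if __name__ == "__main__":
    mem = mmap.mmap(-1, 4096)      # fd -1: anonymous, MAP_SHARED by default
    ctx = mp.get_context("fork")   # "fork" so the child inherits the mapping
    p = ctx.Process(target=writer, args=(mem,))
    p.start()
    p.join()
    print(mem[0])                  # the parent sees 99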
However, you don't need this: the reason for your speedup was just that the memory was no longer backed by a file on disk, which is achievable in other ways. The most portable solution is to use multiprocessing.shared_memory, which you can use even if you're not using multiprocessing to run your processes. Example:
from multiprocessing import shared_memory
shm = shared_memory.SharedMemory(create=True, size=4096)
shm.buf[0] = 42
print(shm.name)
print('Run the other program with the above name, then press Enter')
input()
print(shm.buf[0])
shm.close()
shm.unlink()
And the other program:
from multiprocessing import shared_memory
name = input("What was the name? ")
# track=False (Python 3.13+) keeps the resource tracker from destroying the
# segment when this process exits. If your version of Python is too old to
# support the track parameter, drop it, import resource_tracker from
# multiprocessing too, and add the following right below this line:
#     resource_tracker.unregister(shm._name, "shared_memory")
shm = shared_memory.SharedMemory(name, track=False)
print(shm.buf[0])
shm.buf[0] += 1
shm.close()
You can use shm.buf just like you use mem in your current code. Note that as an alternative to creating a random name and then sharing it, you can specify a hardcoded name parameter when creating the shared memory, but doing so increases the risk of a name collision.
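If you do go with a fixed name, it would look roughly like this; "my_shared_buffer" is just an illustrative name that both programs have to agree on:

from multiprocessing import shared_memory

# program A: create the segment under the agreed-upon name
shm = shared_memory.SharedMemory(name="my_shared_buffer", create=True, size=4096)

# program B: attach to the same name (no create, no size needed)
shm_b = shared_memory.SharedMemory(name="my_shared_buffer")

If a segment with that name already exists, the create=True call raises FileExistsError, which is exactly the collision being warned about.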