Suppose I'm creating 2 threads and I've also set up synchronization between them.
Thread 1: a program that adds 2 integers.
Thread 2: a program that subtracts 2 integers.
With multithreading, thread 1 and thread 2 will execute simultaneously.
Suppose thread 2 locks the memory access so that only thread 2 can access the shared memory.
But before thread 2 has finished operating on its 2 integers, the CPU context switches to execute thread 1. Thread 1's execution has completed and it needs to access the shared memory, but because thread 2 has locked the memory access, thread 1 can't store its result in the shared memory.
Thread 1 can only access the shared memory once thread 2 unlocks the shared memory access.
My question is: will the result from thread 1 be placed in a temporary buffer, and then, when thread 2 unlocks the shared memory, be moved from the temporary buffer into the shared memory buffer?
TL;DR: no, it doesn't work that way
In the first place, I don't know of any threading implementation that supports unilateral or preemptive locking. Locks are cooperative, generally in the sense that one thread holding a lock prevents other threads from obtaining it. But the other threads are unaffected until and unless they try to obtain the lock themselves. At that point they block until either they succeed in obtaining it or they give up.
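A minimal sketch of that, assuming C++ with std::thread and std::mutex (the question doesn't name a language or API, so the choice and the names are just illustrative):

```cpp
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;  // the lock both threads agree to use

int main() {
    std::thread t2([] {
        // Thread 2 acquires the lock and holds it for a while.
        std::lock_guard<std::mutex> guard(m);
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
    });

    std::thread t1([] {
        // Thread 1 is completely unaffected by thread 2 holding the lock
        // while it does its own private work.
        std::cout << "thread 1: doing private work\n";

        // Give thread 2 a head start so it (very likely) holds the lock by now;
        // real code would not rely on sleeps for ordering.
        std::this_thread::sleep_for(std::chrono::milliseconds(100));

        // Only HERE does thread 1 block, and only until thread 2 releases the lock.
        m.lock();
        std::cout << "thread 1: acquired the lock\n";
        m.unlock();
    });

    t1.join();
    t2.join();
}
```

Which thread reaches the lock first is of course up to the scheduler; the point is only that thread 2 holding the lock has no effect on thread 1 until thread 1 itself calls m.lock().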
Suppose thread 2 locks the memory access so that only thread 2 can access the shared memory.
In most execution environments, that's not possible in those terms. Thread 2 can lock an exclusive lock, and that will prevent thread 1 from locking the same lock until thread 2 unlocks it. That prevents thread 1 from writing to the shared memory if and only if thread 1 is implemented such that it only writes the shared memory while it holds the lock. Consistently using locks or similar synchronization objects in this way is typically a requirement for multithreaded programs to be correct / have well-defined behavior.
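Here is a sketch of what that discipline looks like, again assuming std::mutex (the names shared_result, store_locked, and store_unlocked are mine): the mutex protects the shared data only because every writer agrees to hold it first; nothing in the language or the hardware stops code that skips the lock.

```cpp
#include <mutex>

int shared_result = 0;   // the shared memory
std::mutex m;            // the lock every thread is supposed to use

// Correct: touches the shared memory only while holding the lock.
void store_locked(int value) {
    std::lock_guard<std::mutex> guard(m);
    shared_result = value;
}

// Incorrect, but it still compiles and runs: nothing stops this write,
// even while some other thread holds m. Called concurrently with
// store_locked(), this is a data race; that is why the locking
// discipline has to be followed by every thread that touches the data.
void store_unlocked(int value) {
    shared_result = value;
}

int main() {
    store_locked(5);      // fine
    store_unlocked(6);    // fine single-threaded, racy in the presence of other threads
}
```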
But before thread 2 has finished operating on its 2 integers, the CPU context switches to execute thread 1.
Context switching is one way that thread 1 could get a chance to run, but most computers these days have multiple execution units, allowing true concurrency in place of context switching.
Thread 1's execution has completed and it needs to access the shared memory
I guess you mean that thread 1 has completed its primary computation. If it still has any work to do, such as writing the result to shared memory, then its overall execution is not complete.
but because thread 2 has locked the memory access, thread 1 can't store its result in the shared memory.
Again, no. Thread 2 cannot lock shared memory access. It can lock a lock, which it and thread 1 use cooperatively to control who may access the shared memory at any given time. That prevents thread 1 from obtaining the lock until thread 2 releases it, and that is what (potentially) prevents thread 1 from writing to the shared memory. But it is then at the locking attempt that thread 1 is blocked, not at an attempt to write to shared memory.
My question is: will the result from thread 1 be placed in a temporary buffer, and then, when thread 2 unlocks the shared memory, be moved from the temporary buffer into the shared memory buffer?
Computers do have hardware that manages the transfer of data between the CPU and main memory, including handling multiple CPUs and/or CPUs with multiple execution units. CPU caches operate in this space, and they are something like the "temporary buffers" you suggest. But thread-level locking operates at a higher level, and although lock implementations interact with caching, they do not generally do so in the manner you describe. So, no.
Rather, what generally happens in a correct program is that thread 1 attempts to acquire the lock, and blocks until it does so or fails. Only on success does it attempt its write to shared memory, so its write attempt doesn't occur while thread 2 holds the lock. In the meantime, all the thread's private memory retains its contents, just as it does throughout the rest of the thread's lifetime. That's where any already-computed result thread 1 may have will reside.
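Putting the pieces together for your scenario, a sketch under the same assumptions (C++, std::mutex; the function and variable names are mine): the only "temporary buffer" involved is thread 1's own local variable, which is ordinary private memory, not something the lock implementation manages.

```cpp
#include <chrono>
#include <mutex>
#include <thread>

int shared_result = 0;  // the shared memory
std::mutex m;           // the lock both threads use to guard shared_result

void thread2_subtract(int a, int b) {
    std::lock_guard<std::mutex> guard(m);  // thread 2 holds the lock...
    std::this_thread::sleep_for(std::chrono::milliseconds(100));  // ...for a while
    shared_result = a - b;
}                                          // the lock is released here

void thread1_add(int a, int b) {
    int sum = a + b;  // the already-computed result lives here, in thread 1's
                      // own (private) stack frame, for as long as it has to wait

    std::lock_guard<std::mutex> guard(m);  // blocks until thread 2 releases the lock
    shared_result = sum;                   // only now is the result copied to shared memory
}

int main() {
    std::thread t2(thread2_subtract, 7, 4);
    std::thread t1(thread1_add, 2, 3);
    t1.join();
    t2.join();
}
```

If thread 2 happens to acquire the lock first, thread 1 simply waits at its lock_guard constructor with sum already computed; as soon as thread 2 releases the lock, thread 1 proceeds and performs the store itself.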
Moreover, again, no thread locks shared memory overall, nor are locks unilateral or preemptive. Threads' writes to memory are not prevented or delayed directly by other threads holding locks. It is only their own acquisition of locks that is delayed or perhaps prevented by other threads holding (the same) locks.