linux - Concurrent access to elements in the same cacheline in non-shared cache on x86-64


Assume that I have the following code:

  int x[200];

  void thread1() {
      for (int i = 0; i < 100; i++)
          x[i * 2] = 1;
  }

  void thread2() {
      for (int i = 0; i < 100; i++)
          x[i * 2 + 1] = 1;
  }

Is this code correct under the x86-64 memory model (I think it is)? And what is the effect of this kind of code on performance (I think none)?

PS: I assume Linux handles the page with the default write-back caching policy? I am interested in Sandy Bridge in particular.

Edit: as expected, I want to write interleaved cells from different threads. I expect that after the above code finishes and the threads are joined, x ends up as {1,1,1,...} rather than {0,1,0,1,...} or {1,0,1,0,...}.
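For concreteness, here is a minimal test harness (my own sketch, not part of the original question) that runs the two loops on separate pthreads, joins them, and checks that every element is 1. On x86-64, plain stores to distinct int elements are never lost and pthread_join establishes the needed ordering, so the check is expected to pass:

  /* Sketch: the question's two loops adapted to the pthread signature. */
  #include <pthread.h>
  #include <stdio.h>

  int x[200];

  static void *thread1(void *arg) {
      (void)arg;
      for (int i = 0; i < 100; i++)
          x[i * 2] = 1;              /* even indices */
      return NULL;
  }

  static void *thread2(void *arg) {
      (void)arg;
      for (int i = 0; i < 100; i++)
          x[i * 2 + 1] = 1;          /* odd indices */
      return NULL;
  }

  int main(void) {
      pthread_t t1, t2;
      pthread_create(&t1, NULL, thread1, NULL);
      pthread_create(&t2, NULL, thread2, NULL);
      pthread_join(t1, NULL);
      pthread_join(t2, NULL);

      int all_ones = 1;
      for (int i = 0; i < 200; i++)
          if (x[i] != 1)
              all_ones = 0;
      printf("all ones: %s\n", all_ones ? "yes" : "no");
      return 0;
  }

Compile with -pthread; only the performance, not the final contents of x, should depend on the cache-line layout.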

If I understand correctly, the writes will generate snoop requests. Sandy Bridge uses a fast interconnect between the cores, so the snoops do not have to go over the FSB; but since coherence is still tracked per cache line, writing to a line that the other core has just invalidated should be only 'fairly' fast, and I do not know what the cost of resolving the conflict is (though probably less than a write to L3). The figures I have seen are about 43 cycles for a clean hit in another core's cache and 60 cycles for a dirty hit (compared with normal access latencies of roughly 4 cycles for L1, 12 for L2 and 26-31 for L3).
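To see this cost in practice, the sketch below (my own illustration; the 64-byte line size, iteration count and layout are assumptions, not from the question) times two threads writing to adjacent ints in the same cache line versus ints padded onto separate lines. The gap between the two timings is the false-sharing penalty discussed above.

  #include <pthread.h>
  #include <stdio.h>
  #include <time.h>

  #define ITERS 100000000

  /* Shared case: two adjacent ints, forced onto the same 64-byte line. */
  _Alignas(64) volatile int shared_x[2];

  /* Padded case: each counter sits on its own 64-byte line. */
  struct { _Alignas(64) volatile int v; } padded_x[2];

  static void *writer_shared(void *arg) {
      int idx = *(int *)arg;
      for (long i = 0; i < ITERS; i++)
          shared_x[idx] = (int)i;     /* both threads hit one line */
      return NULL;
  }

  static void *writer_padded(void *arg) {
      int idx = *(int *)arg;
      for (long i = 0; i < ITERS; i++)
          padded_x[idx].v = (int)i;   /* each thread owns its line */
      return NULL;
  }

  static double run(void *(*fn)(void *)) {
      pthread_t t[2];
      int ids[2] = {0, 1};
      struct timespec a, b;
      clock_gettime(CLOCK_MONOTONIC, &a);
      for (int i = 0; i < 2; i++)
          pthread_create(&t[i], NULL, fn, &ids[i]);
      for (int i = 0; i < 2; i++)
          pthread_join(t[i], NULL);
      clock_gettime(CLOCK_MONOTONIC, &b);
      return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
  }

  int main(void) {
      printf("same line : %.2fs\n", run(writer_shared));
      printf("padded    : %.2fs\n", run(writer_padded));
      return 0;
  }

Compile with -std=c11 -O2 -pthread; the volatile stores keep the compiler from collapsing the loops. The shared-line case is usually noticeably slower, which is exactly the line-ping-ponging effect the cycle counts above describe.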
