Replacing the atomic64_add_and_return Instruction
Symptom
Arm and x86 use different assembly languages, so inline (embedded) assembly code segments written for x86 cannot be reused on Arm. Any code that uses inline assembly must be rewritten to adapt to Arm.
Solution
Rewrite the assembly code segments.
Example:
- Code on x86:
```c
static inline long atomic64_add_and_return(long i, atomic64_t *v)
{
    long __i = i;
    asm volatile("lock; xaddq %0, %1;"
                 : "=r" (i)
                 : "m" (v->counter), "0" (i));
    return i + __i;
}

static inline void prefetch(void *x)
{
    asm volatile("prefetcht0 %0" :: "m" (*(unsigned long *)x));
}
```
- On the Arm64 platform, use GCC built-in functions to implement the same operations:
```c
static __inline__ long atomic64_add_and_return(long i, atomic64_t *v)
{
    return __sync_add_and_fetch(&((v)->counter), i);
}

#define prefetch(_x) __builtin_prefetch(_x)
```
Take __sync_add_and_fetch as an example. The corresponding disassembly code is as follows:
```
<__sync_add_and_fetch>:
    ldxr  x2, [x0]
    add   x2, x2, x1
    stlxr w3, x2, [x0]
```