csky: fixup abiv2 mmap(... O_SYNC) failed.

mmap(..., O_SYNC) from glibc makes the page _PAGE_UNCACHE + _PAGE_SO,
and a strong-order page cannot support unaligned access. So remove
_PAGE_SO from _PAGE_UNCACHE, and also sync abiv1 with the _PAGE_SO
macro.

Signed-off-by: Guo Ren <ren_guo@c-sky.com>
Reported-by: Liu Renwei <Renwei.Liu@verisilicon.com>
Tested-by: Yuan Qiyun <qiyun_yuan@c-sky.com>
Author: Guo Ren
Date:   2018-12-30 21:47:28 +08:00
Commit: 2b070ccdf8
Parent: d770b25653
3 changed files with 3 additions and 2 deletions
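
As context for the fix, below is a minimal user-space sketch of the failing
scenario described in the commit message. The /dev/mem path and PHYS_ADDR
value are assumptions for illustration only, not taken from the report.
Opening the device with O_SYNC makes the kernel map the page uncached;
before this patch that mapping was also strong-order on abiv2, so the
unaligned store below would fail.

#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define PHYS_ADDR	0x10000000UL	/* hypothetical physical address */

int main(void)
{
	/* O_SYNC asks for an uncached mapping of the device memory. */
	int fd = open("/dev/mem", O_RDWR | O_SYNC);
	if (fd < 0)
		return 1;

	volatile uint8_t *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
				   MAP_SHARED, fd, PHYS_ADDR);
	if (p == MAP_FAILED)
		return 1;

	/* Unaligned 32-bit store: a strong-order page cannot handle this. */
	*(volatile uint32_t *)(p + 1) = 0x12345678;

	munmap((void *)p, 4096);
	close(fd);
	return 0;
}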

@@ -26,6 +26,7 @@
 #define _PAGE_CACHE		(3<<9)
 #define _PAGE_UNCACHE		(2<<9)
+#define _PAGE_SO		_PAGE_UNCACHE
 #define _CACHE_MASK		(7<<9)

@@ -32,6 +32,6 @@
 #define _CACHE_MASK		_PAGE_CACHE
 #define _CACHE_CACHED		(_PAGE_VALID | _PAGE_CACHE | _PAGE_BUF)
-#define _CACHE_UNCACHED		(_PAGE_VALID | _PAGE_SO)
+#define _CACHE_UNCACHED		(_PAGE_VALID)
 #endif /* __ASM_CSKY_PGTABLE_BITS_H */

@@ -30,7 +30,7 @@ void __iomem *ioremap(phys_addr_t addr, size_t size)
 	vaddr = (unsigned long)area->addr;

 	prot = __pgprot(_PAGE_PRESENT | __READABLE | __WRITEABLE |
-			_PAGE_GLOBAL | _CACHE_UNCACHED);
+			_PAGE_GLOBAL | _CACHE_UNCACHED | _PAGE_SO);
 	if (ioremap_page_range(vaddr, vaddr + size, addr, prot)) {
 		free_vm_area(area);