mirror of https://github.com/xmrig/xmrig.git synced 2026-01-23 14:52:52 -05:00

Compare commits


77 Commits

Author SHA1 Message Date
XMRig
db24bf5154 Revert "Merge branch 'pr3764' into dev"
This reverts commit 0d9a372e49, reversing
changes made to 1a04bf2904.
2026-01-21 21:32:51 +07:00
XMRig
0d9a372e49 Merge branch 'pr3764' into dev 2026-01-21 21:27:41 +07:00
XMRig
c1e3d386fe Merge branch 'master' of https://github.com/oxyzenQ/xmrig into pr3764 2026-01-21 21:27:11 +07:00
rezky_nightky
5ca4828255 feat: stability improvements, see details below
Key stability improvements made (deterministic + bounded)
1) Bounded memory usage in long-running stats
Fixed unbounded growth in NetworkState latency tracking:
Replaced std::vector<uint16_t> m_latency + push_back() with a fixed-size ring buffer (kLatencyWindow = 1024) and explicit counters.
Median latency computation now operates on at most 1024 samples, preventing memory growth and avoiding performance cliffs from ever-growing copies/sorts.
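A minimal sketch of the bounded-window idea (illustrative names, not the exact NetworkState members):

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdint>

class LatencyWindow {
public:
    static constexpr size_t kLatencyWindow = 1024;

    void add(uint16_t ms) {
        m_samples[m_pos] = ms;                 // overwrite oldest slot
        m_pos = (m_pos + 1) % kLatencyWindow;  // ring buffer wrap-around
        if (m_count < kLatencyWindow) {
            ++m_count;
        }
    }

    // Median over at most kLatencyWindow samples: the cost stays bounded
    // no matter how long the miner has been running.
    uint16_t median() const {
        if (m_count == 0) {
            return 0;
        }
        std::array<uint16_t, kLatencyWindow> copy{};
        std::copy_n(m_samples.begin(), m_count, copy.begin());
        std::nth_element(copy.begin(), copy.begin() + m_count / 2, copy.begin() + m_count);
        return copy[m_count / 2];
    }

private:
    std::array<uint16_t, kLatencyWindow> m_samples{};
    size_t m_pos   = 0;
    size_t m_count = 0;
};
```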
2) Prevent crash/UAF on shutdown + more predictable teardown
Controller shutdown ordering (Controller::stop()):
Now stops m_miner before destroying m_network.
This reduces chances of worker threads submitting results into a network listener that’s already destroyed.
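A sketch of the ordering, with the members reduced to placeholders:

```cpp
#include <memory>

struct Network { /* result listener */ };
struct Miner   { /* owns worker threads; its destructor joins them */ };

struct Controller {
    std::shared_ptr<Miner>   m_miner;
    std::shared_ptr<Network> m_network;

    void stop() {
        m_miner.reset();   // join workers first: no thread can reach the listener anymore
        m_network.reset(); // only now is it safe to destroy the network
    }
};
```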
Thread teardown hardening (backend/common/Thread.h):
Destructor now checks std::thread::joinable() before join().
Avoids std::terminate() if a thread object exists but never started due to early exit/error paths.
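The guard, in essence:

```cpp
#include <thread>

class Thread {
public:
    ~Thread() {
        // join() on a never-started thread calls std::terminate();
        // joinable() keeps early-exit and error paths safe.
        if (m_thread.joinable()) {
            m_thread.join();
        }
    }

private:
    std::thread m_thread; // default-constructed thread is not joinable
};
```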
3) Fixed real leaks (including executable memory)
Executable memory leak fixed (crypto/cn/CnCtx.cpp):
CnCtx::create() allocates executable memory for generated_code via VirtualMemory::allocateExecutableMemory(0x4000, ...).
Previously CnCtx::release() only _mm_free()’d the struct, leaking the executable mapping.
Now CnCtx::release() frees generated_code before freeing the ctx.
GPU verification leak fixed (net/JobResults.cpp):
In getResults() (GPU result verification), a cryptonight_ctx was created via CnCtx::create() but never released.
Added CnCtx::release(ctx, 1).
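A hedged sketch of the release order: the struct is reduced to the one relevant field, and munmap() stands in for whatever matching free call the executable-memory allocator requires (assuming a POSIX mmap-based allocation).

```cpp
#include <cstddef>
#include <mm_malloc.h>  // _mm_free
#include <sys/mman.h>   // munmap

// Reduced cryptonight_ctx: only the field relevant to the leak.
struct cryptonight_ctx {
    void *generated_code; // 0x4000-byte executable JIT buffer
};

void release(cryptonight_ctx **ctx, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        if (ctx[i] == nullptr) {
            continue;
        }
        if (ctx[i]->generated_code) {
            munmap(ctx[i]->generated_code, 0x4000); // the mapping that was previously leaked
        }
        _mm_free(ctx[i]); // then free the struct itself
    }
}
```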
4) JobResults: bounded queues + backpressure + safe shutdown semantics
The old JobResults could:

- enqueue unlimited std::list items (m_results, m_bundles) → unbounded RAM,
- call uv_queue_work per async batch → unbounded libuv threadpool backlog,
- delete the handler directly while worker threads might still submit → potential crash/UAF.
Changes made:

Hard queue limits:
kMaxQueuedResults = 4096
kMaxQueuedBundles = 256
Excess is dropped (bounded behavior under load).
Async coalescing:
Only one pending async notification at a time (m_pendingAsync), reducing eventfd/uv wake storms.
Bounded libuv work scheduling:
Only one uv_queue_work is scheduled at a time (m_workScheduled), preventing CPU starvation and unpredictable backlog.
Safe shutdown:
JobResults::stop() now detaches global handler first, then calls handler->stop().
Shutdown detaches m_listener, clears queues, and defers deletion until in-flight work is done.
Defensive bound on GPU result count:
Clamp count to 0xFF inside JobResults as well, not just in the caller, to guard against corrupted kernels/drivers.
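A simplified skeleton of the bounded submit path described above (names follow the description; libuv plumbing is reduced to comments):

```cpp
#include <cstddef>
#include <list>
#include <mutex>

class BoundedResults {
    static constexpr size_t kMaxQueuedResults = 4096;

    std::mutex m_mutex;
    std::list<int> m_results;     // stands in for the real JobResult type
    bool m_pendingAsync = false;  // at most one async wake in flight

public:
    void submit(int result) {
        bool notify = false;
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            if (m_results.size() >= kMaxQueuedResults) {
                return; // drop under overload: bounded RAM beats unbounded backlog
            }
            m_results.push_back(result);
            if (!m_pendingAsync) {
                m_pendingAsync = true; // coalesce: one wake covers many results
                notify = true;
            }
        }
        if (notify) {
            // uv_async_send(&m_async) in the real code; the handler clears
            // m_pendingAsync once it has drained the queue.
        }
    }
};
```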
5) Idempotent cleanup
VirtualMemory::destroy() now sets pool = nullptr after delete:
prevents accidental double-delete on repeated teardown paths.
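In essence (sketch only):

```cpp
struct Pool { };
static Pool *pool = nullptr;

void destroy()
{
    delete pool;    // delete on nullptr is already safe
    pool = nullptr; // so a repeated destroy() is now a harmless no-op
}
```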
Verification performed
codespell . --config ./.codespellrc: clean
CMake configure + build completed successfully (Release build)

Signed-off-by: rezky_nightky <with.rezky@gmail.com>
2026-01-21 21:22:43 +07:00
XMRig
1a04bf2904 Merge branch 'pr3762' into dev 2026-01-21 21:22:34 +07:00
XMRig
5feb764b27 Merge branch 'fix-keepalive-timer' of https://github.com/HashVault/vltrig into pr3762 2026-01-21 21:21:48 +07:00
rezky_nightky
cb7511507f fix: typos across the repo, details below:
What I did
Repository rules / exclusions
I didn’t find repo-specific spelling tooling already in place (no existing codespell config).
Given this is a C/C++ repo with vendored sources, I treated src/3rdparty/** as third-party and excluded it from typo fixing (and encoded that in the new .codespellrc).
Typos fixed (first-party only)
Docs
CHANGELOG.md: perfomance -> performance
doc/API.md: optionaly -> optionally, Offical -> Official
doc/BENCHMARK.md: parameteres -> parameters
doc/CPU.md: loosing -> losing, instuctions -> instructions
doc/CHANGELOG_OLD.md: multiple obvious text typos like Breaked -> Broken, singal -> signal, previos -> previous, secons -> seconds, automaticaly -> automatically, perfomance -> performance
Code comments / doc comments (safe text-only changes)
src/base/crypto/sha3.cpp: Inteface -> Interface (comment banner)
src/backend/opencl/cl/cn/cryptonight.cl: performe -> perform, crashs -> crashes (comments)
src/backend/opencl/cl/kawpow/kawpow.cl: regsters -> registers, intial -> initial (comments)
src/crypto/randomx/aes_hash.cpp: intial -> initial (comment)
src/crypto/randomx/intrin_portable.h: cant -> can't (comment)
src/crypto/randomx/randomx.h: intialization -> initialization (doc comment)
src/crypto/cn/c_jh.c: intital -> initial (comment)
src/crypto/cn/skein_port.h: varaiable -> variable (comment)
src/backend/opencl/cl/cn/wolf-skein.cl: Build-in -> Built-in (comment)
What I intentionally did NOT change
Anything under src/3rdparty/** (vendored).
A few remaining codespell hits are either:
Upstream/embedded sources we excluded (groestl256.cl, jh.cl contain Projet)
Potentially valid identifier/name (Carmel CPU codename)
Low-risk token in codegen comments (vor inside an instruction comment)
These are handled via ignore rules in .codespellrc instead of modifying code.

Added: .codespellrc
Created /.codespellrc with:

skip entries for vendored / embedded upstream areas:
./src/3rdparty
./src/crypto/ghostrider
./src/crypto/randomx/blake2
./src/crypto/cn/sse2neon.h
./src/backend/opencl/cl/cn/groestl256.cl
./src/backend/opencl/cl/cn/jh.cl
ignore-words-list for:
Carmel
vor
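A plausible reconstruction of the file from the entries above (the exact layout is not shown in this log):

```ini
[codespell]
skip = ./src/3rdparty,./src/crypto/ghostrider,./src/crypto/randomx/blake2,./src/crypto/cn/sse2neon.h,./src/backend/opencl/cl/cn/groestl256.cl,./src/backend/opencl/cl/cn/jh.cl
ignore-words-list = carmel,vor
```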
Verification
codespell . --config ./.codespellrc now exits clean (exit code 0).

Signed-off-by: rezky_nightky <with.rezky@gmail.com>
2026-01-21 20:14:59 +07:00
HashVault
6e6eab1763 Fix keepalive timer logic
- Reset timer on send instead of receive (pool needs to know we're alive)
- Remove timer disable after first ping to enable continuous keepalives
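A hedged libuv sketch of the corrected behavior (kKeepAliveTimeout and the callback body are illustrative, not the vltrig code):

```cpp
#include <uv.h>
#include <cstdint>

static constexpr uint64_t kKeepAliveTimeout = 60 * 1000; // illustrative value, ms

static uv_timer_t keepAliveTimer;

static void onKeepAlive(uv_timer_t *)
{
    // sendKeepAlivePing(); // the stratum keepalive call in the real client
}

static void initKeepAlive(uv_loop_t *loop)
{
    uv_timer_init(loop, &keepAliveTimer);
}

// Called after every outbound message: outbound traffic is what proves
// liveness to the pool, so the timer resets on send, not on receive,
// and is re-armed every time instead of being disabled after one ping.
static void onSend()
{
    uv_timer_stop(&keepAliveTimer);
    uv_timer_start(&keepAliveTimer, onKeepAlive, kKeepAliveTimeout, 0);
}
```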
2026-01-20 14:39:06 +03:00
xmrig
f35f9d7241 Merge pull request #3759 from SChernykh/dev
Optimized VAES code
2026-01-17 21:55:01 +07:00
SChernykh
45d0a15c98 Optimized VAES code
Use only 1 mask instead of 2
2026-01-16 20:43:35 +01:00
xmrig
f4845cbd68 Merge pull request #3758 from SChernykh/dev
RandomX: added VAES-512 support for Zen5
2026-01-16 19:07:09 +07:00
SChernykh
ed80a8a828 RandomX: added VAES-512 support for Zen5
+0.1-0.2% hashrate improvement.
2026-01-16 13:04:40 +01:00
xmrig
9e5492eecc Merge pull request #3757 from SChernykh/dev
Improved RISC-V code
2026-01-15 19:51:57 +07:00
SChernykh
e41b28ef78 Improved RISC-V code 2026-01-15 12:48:55 +01:00
xmrig
1bd59129c4 Merge pull request #3750 from SChernykh/dev
RISC-V: use vector hardware AES instead of scalar
2026-01-01 15:43:36 +07:00
SChernykh
8ccf7de304 RISC-V: use vector hardware AES instead of scalar 2025-12-31 23:37:55 +01:00
xmrig
30ffb9cb27 Merge pull request #3749 from SChernykh/dev
RISC-V: detect and use hardware AES
2025-12-30 14:13:44 +07:00
SChernykh
d3a84c4b52 RISC-V: detect and use hardware AES 2025-12-29 22:10:07 +01:00
xmrig
eb49237aaa Merge pull request #3748 from SChernykh/dev
RISC-V: auto-detect and use vector code for all RandomX AES functions
2025-12-28 13:12:50 +07:00
SChernykh
e1efd3dc7f RISC-V: auto-detect and use vector code for all RandomX AES functions 2025-12-27 21:30:14 +01:00
xmrig
e3d0135708 Merge pull request #3746 from SChernykh/dev
RISC-V: vectorized RandomX main loop
2025-12-27 18:40:47 +07:00
SChernykh
f661e1eb30 RISC-V: vectorized RandomX main loop 2025-12-26 22:11:39 +01:00
XMRig
99488751f1 v6.25.1-dev 2025-12-23 20:53:43 +07:00
XMRig
5fb0321c84 Merge branch 'master' into dev 2025-12-23 20:53:11 +07:00
XMRig
753859caea v6.25.0 2025-12-23 19:44:52 +07:00
XMRig
712a5a5e66 Merge branch 'dev' 2025-12-23 19:44:21 +07:00
XMRig
290a0de6e5 v6.25.0-dev 2025-12-23 19:37:24 +07:00
xmrig
e0564b5fdd Merge pull request #3743 from SChernykh/dev
Linux: added support for transparent huge pages
2025-12-12 01:20:03 +07:00
SChernykh
482a1f0b40 Linux: added support for transparent huge pages 2025-12-11 11:23:18 +01:00
xmrig
856813c1ae Merge pull request #3740 from SChernykh/dev
RISC-V: added vectorized soft AES
2025-12-06 19:39:47 +07:00
SChernykh
23da1a90f5 RISC-V: added vectorized soft AES 2025-12-05 21:09:22 +01:00
xmrig
7981e4a76a Merge pull request #3736 from SChernykh/dev
RISC-V: added vectorized dataset init
2025-12-01 10:46:03 +07:00
SChernykh
7ef5142a52 RISC-V: added vectorized dataset init (activated by setting init-avx2 to 1 in config.json) 2025-11-30 19:15:15 +01:00
xmrig
db5c6d9190 Merge pull request #3733 from void-512/master
Add detection for MSVC/2026
2025-11-13 15:52:43 +07:00
Tony Wang
e88009d575 add detection for MSVC/2026 2025-11-12 17:32:57 -05:00
XMRig
5115597e7f Improved compatibility for automatically enabling huge pages on Linux systems without NUMA support. 2025-11-07 01:55:00 +07:00
xmrig
4cdc35f966 Merge pull request #3731 from user0-07161/dev-haiku-os-support
feat: initial haiku os support
2025-11-05 18:47:22 +07:00
user0-07161
b02519b9f5 feat: initial support for haiku 2025-11-04 13:58:01 +00:00
XMRig
a44b21cef3 Cleanup 2025-10-27 19:18:52 +07:00
XMRig
ea832899f2 Fixed macOS build. 2025-10-23 11:17:59 +07:00
xmrig
3ecacf0ac2 Merge pull request #3725 from SChernykh/dev
RISC-V integration and JIT compiler
2025-10-23 11:02:21 +07:00
SChernykh
27c8e60919 Removed unused files 2025-10-22 23:31:02 +02:00
SChernykh
985fe06e8d RISC-V: test for instruction extensions 2025-10-22 19:21:26 +02:00
SChernykh
75b63ddde9 RISC-V JIT compiler 2025-10-22 19:00:20 +02:00
slayingripper
643b65f2c0 RISC-V Integration 2025-10-22 18:57:20 +02:00
xmrig
116ba1828f Merge pull request #3722 from SChernykh/dev
Added Zen4 (Hawk Point) CPUs detection
2025-10-15 13:23:36 +07:00
SChernykh
da5a5674b4 Added Zen4 (Hawk Point) CPUs detection 2025-10-15 08:07:58 +02:00
xmrig
6cc4819cec Merge pull request #3719 from SChernykh/dev
Fix: correct FCMP++ version number
2025-10-05 18:28:21 +07:00
SChernykh
a659397c41 Fix: correct FCMP++ version number 2025-10-05 13:24:55 +02:00
xmrig
20acfd0d79 Merge pull request #3718 from SChernykh/dev
Solo mining: added support for FCMP++ hardfork
2025-10-05 18:04:23 +07:00
SChernykh
da683d8c3e Solo mining: added support for FCMP++ hardfork 2025-10-05 13:00:21 +02:00
XMRig
255565b533 Merge branch 'xtophyr-master' into dev 2025-09-22 21:31:28 +07:00
XMRig
878e83bf59 Merge branch 'master' of https://github.com/xtophyr/xmrig into xtophyr-master 2025-09-22 21:31:14 +07:00
Christopher Wright
7abf17cb59 adjust instruction/register suffixes to compile with gcc-based assemblers. 2025-09-21 14:57:42 -04:00
Christopher Wright
eeec5ecd10 undo this change 2025-09-20 08:38:40 -04:00
Christopher Wright
93f5067999 minor Aarch64 JIT changes (better instruction selection, don't emit instructions that add 0, etc) 2025-09-20 08:32:32 -04:00
XMRig
dd6671bc59 Merge branch 'dev' of github.com:xmrig/xmrig into dev 2025-06-29 12:29:01 +07:00
XMRig
a1ee2fd9d2 Improved LibreSSL support. 2025-06-29 12:28:35 +07:00
xmrig
2619131176 Merge pull request #3680 from benthetechguy/armhf
Add armv8l to list of 32 bit ARM targets
2025-06-25 04:14:22 +07:00
Ben Westover
1161f230c5 Add armv8l to list of 32 bit ARM targets
armv8l is what CMAKE_SYSTEM_PROCESSOR is set to when an ARMv8 processor
is in 32-bit mode, so it should be added to the ARMv7 target list even
though it's v8 because it's 32 bits. Currently, it's not in any ARM
target list which means x86 is assumed and the build fails.
2025-06-24 15:28:01 -04:00
XMRig
d2363ba28b v6.24.1-dev 2025-06-23 08:37:15 +07:00
XMRig
1676da1fe9 Merge branch 'master' into dev 2025-06-23 08:36:52 +07:00
XMRig
6e4a5a6d94 v6.24.0 2025-06-23 07:44:53 +07:00
XMRig
273133aa63 Merge branch 'dev' 2025-06-23 07:44:05 +07:00
xmrig
c69e30c9a0 Update CHANGELOG.md 2025-06-23 05:39:26 +07:00
XMRig
6a690ba1e9 More DNS cleanup. 2025-06-20 23:45:53 +07:00
XMRig
545aef0937 v6.24.0-dev 2025-06-20 08:34:58 +07:00
xmrig
9fa66d3242 Merge pull request #3678 from xmrig/dns_ip_version
Improved IPv6 support.
2025-06-20 08:33:50 +07:00
XMRig
ec286c7fef Improved IPv6 support. 2025-06-20 07:39:52 +07:00
xmrig
e28d663d80 Merge pull request #3677 from SChernykh/dev
Tweaked autoconfig for AMD CPUs with < 2 MB L3 cache per thread, again (hopefully the last time)
2025-06-19 18:07:54 +07:00
SChernykh
aba1ad8cfc Tweaked autoconfig for AMD CPUs with < 2 MB L3 cache per thread, again (hopefully the last time) 2025-06-19 12:58:31 +02:00
xmrig
bf44ed52e9 Merge pull request #3674 from benthetechguy/armhf
cflags: Add lax-vector-conversions on ARMv7
2025-06-19 04:46:02 +07:00
Ben Westover
762c435fa8 cflags: Add lax-vector-conversions on ARMv7
lax-vector-conversions is enabled in the CXXFLAGS but not CFLAGS for ARMv7.
This commit adds it to CFLAGS which fixes the ARMv7 build (Fixes: #3673).
2025-06-18 16:38:05 -04:00
xmrig
48faf0a11b Merge pull request #3671 from SChernykh/dev
Hwloc: fixed detection of L2 cache size for some complex NUMA topologies
2025-06-17 18:52:43 +07:00
SChernykh
d125d22d27 Hwloc: fixed detection of L2 cache size for some complex NUMA topologies 2025-06-17 13:49:02 +02:00
XMRig
9f3591ae0d v6.23.1-dev 2025-06-16 21:29:17 +07:00
XMRig
6bbbcc71f1 Merge branch 'master' into dev 2025-06-16 21:28:48 +07:00
95 changed files with 8976 additions and 548 deletions

.gitignore

@@ -4,3 +4,5 @@ scripts/deps
/CMakeLists.txt.user
/.idea
/src/backend/opencl/cl/cn/cryptonight_gen.cl
.vscode
/.qtcreator


@@ -1,3 +1,23 @@
# v6.25.0
- [#3680](https://github.com/xmrig/xmrig/pull/3680) Added `armv8l` to the list of 32-bit ARM targets.
- [#3708](https://github.com/xmrig/xmrig/pull/3708) Minor Aarch64 JIT changes (better instruction selection, don't emit instructions that add 0, etc).
- [#3718](https://github.com/xmrig/xmrig/pull/3718) Solo mining: added support for FCMP++ hardfork.
- [#3722](https://github.com/xmrig/xmrig/pull/3722) Added Zen4 (Hawk Point) CPUs detection.
- [#3725](https://github.com/xmrig/xmrig/pull/3725) Added **RISC-V** support with JIT compiler.
- [#3731](https://github.com/xmrig/xmrig/pull/3731) Added initial Haiku OS support.
- [#3733](https://github.com/xmrig/xmrig/pull/3733) Added detection for MSVC/2026.
- [#3736](https://github.com/xmrig/xmrig/pull/3736) RISC-V: added vectorized dataset init.
- [#3740](https://github.com/xmrig/xmrig/pull/3740) RISC-V: added vectorized soft AES.
- [#3743](https://github.com/xmrig/xmrig/pull/3743) Linux: added support for transparent huge pages.
- Improved LibreSSL support.
- Improved compatibility for automatically enabling huge pages on Linux systems without NUMA support.
# v6.24.0
- [#3671](https://github.com/xmrig/xmrig/pull/3671) Fixed detection of L2 cache size for some complex NUMA topologies.
- [#3674](https://github.com/xmrig/xmrig/pull/3674) Fixed ARMv7 build.
- [#3677](https://github.com/xmrig/xmrig/pull/3677) Fixed auto-config for AMD CPUs with less than 2 MB L3 cache per thread.
- [#3678](https://github.com/xmrig/xmrig/pull/3678) Improved IPv6 support: the new default settings use IPv6 equally with IPv4.
# v6.23.0
- [#3668](https://github.com/xmrig/xmrig/issues/3668) Added support for Windows ARM64.
- [#3665](https://github.com/xmrig/xmrig/pull/3665) Tweaked auto-config for AMD CPUs with < 2 MB L3 cache per thread.


@@ -95,7 +95,7 @@ set(HEADERS_CRYPTO
src/crypto/common/VirtualMemory.h
)
if (XMRIG_ARM)
if (XMRIG_ARM OR XMRIG_RISCV)
set(HEADERS_CRYPTO "${HEADERS_CRYPTO}" src/crypto/cn/CryptoNight_arm.h)
else()
set(HEADERS_CRYPTO "${HEADERS_CRYPTO}" src/crypto/cn/CryptoNight_x86.h)


@@ -10,7 +10,7 @@
XMRig is a high performance, open source, cross platform RandomX, KawPow, CryptoNight and [GhostRider](https://github.com/xmrig/xmrig/tree/master/src/crypto/ghostrider#readme) unified CPU/GPU miner and [RandomX benchmark](https://xmrig.com/benchmark). Official binaries are available for Windows, Linux, macOS and FreeBSD.
## Mining backends
- **CPU** (x86/x64/ARMv7/ARMv8)
- **CPU** (x86/x64/ARMv7/ARMv8/RISC-V)
- **OpenCL** for AMD GPUs.
- **CUDA** for NVIDIA GPUs via external [CUDA plugin](https://github.com/xmrig/xmrig-cuda).


@@ -1,4 +1,4 @@
if (WITH_ASM AND NOT XMRIG_ARM AND CMAKE_SIZEOF_VOID_P EQUAL 8)
if (WITH_ASM AND NOT XMRIG_ARM AND NOT XMRIG_RISCV AND CMAKE_SIZEOF_VOID_P EQUAL 8)
set(XMRIG_ASM_LIBRARY "xmrig-asm")
if (CMAKE_C_COMPILER_ID MATCHES MSVC)


@@ -21,6 +21,19 @@ if (NOT VAES_SUPPORTED)
set(WITH_VAES OFF)
endif()
# Detect RISC-V architecture early (before it's used below)
if (CMAKE_SYSTEM_PROCESSOR MATCHES "^(riscv64|riscv|rv64)$")
set(RISCV_TARGET 64)
set(XMRIG_RISCV ON)
add_definitions(-DXMRIG_RISCV)
message(STATUS "Detected RISC-V 64-bit architecture (${CMAKE_SYSTEM_PROCESSOR})")
elseif (CMAKE_SYSTEM_PROCESSOR MATCHES "^(riscv32|rv32)$")
set(RISCV_TARGET 32)
set(XMRIG_RISCV ON)
add_definitions(-DXMRIG_RISCV)
message(STATUS "Detected RISC-V 32-bit architecture (${CMAKE_SYSTEM_PROCESSOR})")
endif()
if (XMRIG_64_BIT AND CMAKE_SYSTEM_PROCESSOR MATCHES "^(x86_64|AMD64)$")
add_definitions(-DRAPIDJSON_SSE2)
else()
@@ -29,6 +42,120 @@ else()
set(WITH_VAES OFF)
endif()
# Disable x86-specific features for RISC-V
if (XMRIG_RISCV)
set(WITH_SSE4_1 OFF)
set(WITH_AVX2 OFF)
set(WITH_VAES OFF)
# default build uses the RV64GC baseline
set(RVARCH "rv64gc")
enable_language(ASM)
try_run(RANDOMX_VECTOR_RUN_FAIL
RANDOMX_VECTOR_COMPILE_OK
${CMAKE_CURRENT_BINARY_DIR}/
${CMAKE_CURRENT_SOURCE_DIR}/src/crypto/randomx/tests/riscv64_vector.s
COMPILE_DEFINITIONS "-march=rv64gcv")
if (RANDOMX_VECTOR_COMPILE_OK AND NOT RANDOMX_VECTOR_RUN_FAIL)
set(RVARCH_V ON)
message(STATUS "RISC-V vector extension detected")
else()
set(RVARCH_V OFF)
endif()
try_run(RANDOMX_ZICBOP_RUN_FAIL
RANDOMX_ZICBOP_COMPILE_OK
${CMAKE_CURRENT_BINARY_DIR}/
${CMAKE_CURRENT_SOURCE_DIR}/src/crypto/randomx/tests/riscv64_zicbop.s
COMPILE_DEFINITIONS "-march=rv64gc_zicbop")
if (RANDOMX_ZICBOP_COMPILE_OK AND NOT RANDOMX_ZICBOP_RUN_FAIL)
set(RVARCH_ZICBOP ON)
message(STATUS "RISC-V zicbop extension detected")
else()
set(RVARCH_ZICBOP OFF)
endif()
try_run(RANDOMX_ZBA_RUN_FAIL
RANDOMX_ZBA_COMPILE_OK
${CMAKE_CURRENT_BINARY_DIR}/
${CMAKE_CURRENT_SOURCE_DIR}/src/crypto/randomx/tests/riscv64_zba.s
COMPILE_DEFINITIONS "-march=rv64gc_zba")
if (RANDOMX_ZBA_COMPILE_OK AND NOT RANDOMX_ZBA_RUN_FAIL)
set(RVARCH_ZBA ON)
message(STATUS "RISC-V zba extension detected")
else()
set(RVARCH_ZBA OFF)
endif()
try_run(RANDOMX_ZBB_RUN_FAIL
RANDOMX_ZBB_COMPILE_OK
${CMAKE_CURRENT_BINARY_DIR}/
${CMAKE_CURRENT_SOURCE_DIR}/src/crypto/randomx/tests/riscv64_zbb.s
COMPILE_DEFINITIONS "-march=rv64gc_zbb")
if (RANDOMX_ZBB_COMPILE_OK AND NOT RANDOMX_ZBB_RUN_FAIL)
set(RVARCH_ZBB ON)
message(STATUS "RISC-V zbb extension detected")
else()
set(RVARCH_ZBB OFF)
endif()
try_run(RANDOMX_ZVKB_RUN_FAIL
RANDOMX_ZVKB_COMPILE_OK
${CMAKE_CURRENT_BINARY_DIR}/
${CMAKE_CURRENT_SOURCE_DIR}/src/crypto/randomx/tests/riscv64_zvkb.s
COMPILE_DEFINITIONS "-march=rv64gcv_zvkb")
if (RANDOMX_ZVKB_COMPILE_OK AND NOT RANDOMX_ZVKB_RUN_FAIL)
set(RVARCH_ZVKB ON)
message(STATUS "RISC-V zvkb extension detected")
else()
set(RVARCH_ZVKB OFF)
endif()
try_run(RANDOMX_ZVKNED_RUN_FAIL
RANDOMX_ZVKNED_COMPILE_OK
${CMAKE_CURRENT_BINARY_DIR}/
${CMAKE_CURRENT_SOURCE_DIR}/src/crypto/randomx/tests/riscv64_zvkned.s
COMPILE_DEFINITIONS "-march=rv64gcv_zvkned")
if (RANDOMX_ZVKNED_COMPILE_OK AND NOT RANDOMX_ZVKNED_RUN_FAIL)
set(RVARCH_ZVKNED ON)
message(STATUS "RISC-V zvkned extension detected")
else()
set(RVARCH_ZVKNED OFF)
endif()
# for native builds, enable Zba and Zbb if supported by the CPU
if (ARCH STREQUAL "native")
if (RVARCH_V)
set(RVARCH "${RVARCH}v")
endif()
if (RVARCH_ZICBOP)
set(RVARCH "${RVARCH}_zicbop")
endif()
if (RVARCH_ZBA)
set(RVARCH "${RVARCH}_zba")
endif()
if (RVARCH_ZBB)
set(RVARCH "${RVARCH}_zbb")
endif()
if (RVARCH_ZVKB)
set(RVARCH "${RVARCH}_zvkb")
endif()
if (RVARCH_ZVKNED)
set(RVARCH "${RVARCH}_zvkned")
endif()
endif()
message(STATUS "Using -march=${RVARCH}")
endif()
add_definitions(-DRAPIDJSON_WRITE_DEFAULT_FLAGS=6) # rapidjson::kWriteNanAndInfFlag | rapidjson::kWriteNanAndInfNullFlag
if (ARM_V8)
@@ -40,7 +167,7 @@ endif()
if (NOT ARM_TARGET)
if (CMAKE_SYSTEM_PROCESSOR MATCHES "^(aarch64|arm64|ARM64|armv8-a)$")
set(ARM_TARGET 8)
elseif (CMAKE_SYSTEM_PROCESSOR MATCHES "^(armv7|armv7f|armv7s|armv7k|armv7-a|armv7l|armv7ve)$")
elseif (CMAKE_SYSTEM_PROCESSOR MATCHES "^(armv7|armv7f|armv7s|armv7k|armv7-a|armv7l|armv7ve|armv8l)$")
set(ARM_TARGET 7)
endif()
endif()


@@ -26,8 +26,13 @@ if (CMAKE_CXX_COMPILER_ID MATCHES GNU)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${ARM8_CXX_FLAGS}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${ARM8_CXX_FLAGS} -flax-vector-conversions")
elseif (ARM_TARGET EQUAL 7)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -march=armv7-a -mfpu=neon")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -march=armv7-a -mfpu=neon -flax-vector-conversions")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -march=armv7-a -mfpu=neon -flax-vector-conversions")
elseif (XMRIG_RISCV)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -march=${RVARCH}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -march=${RVARCH}")
add_definitions(-DHAVE_ROTR)
else()
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -maes")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -maes")
@@ -41,6 +46,8 @@ if (CMAKE_CXX_COMPILER_ID MATCHES GNU)
else()
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -static -Wl,--large-address-aware")
endif()
elseif(CMAKE_SYSTEM_NAME STREQUAL "Haiku")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -static-libgcc")
else()
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -static-libgcc -static-libstdc++")
endif()
@@ -74,6 +81,11 @@ elseif (CMAKE_CXX_COMPILER_ID MATCHES Clang)
elseif (ARM_TARGET EQUAL 7)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -mfpu=neon -march=${CMAKE_SYSTEM_PROCESSOR}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -mfpu=neon -march=${CMAKE_SYSTEM_PROCESSOR}")
elseif (XMRIG_RISCV)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -march=${RVARCH}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -march=${RVARCH}")
add_definitions(-DHAVE_ROTR)
else()
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -maes")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -maes")


@@ -17,6 +17,10 @@ else()
set(XMRIG_OS_LINUX ON)
elseif(CMAKE_SYSTEM_NAME STREQUAL FreeBSD OR CMAKE_SYSTEM_NAME STREQUAL DragonFly)
set(XMRIG_OS_FREEBSD ON)
elseif(CMAKE_SYSTEM_NAME STREQUAL OpenBSD)
set(XMRIG_OS_OPENBSD ON)
elseif(CMAKE_SYSTEM_NAME STREQUAL "Haiku")
set(XMRIG_OS_HAIKU ON)
endif()
endif()
@@ -43,6 +47,10 @@ elseif(XMRIG_OS_UNIX)
add_definitions(-DXMRIG_OS_LINUX)
elseif (XMRIG_OS_FREEBSD)
add_definitions(-DXMRIG_OS_FREEBSD)
elseif (XMRIG_OS_OPENBSD)
add_definitions(-DXMRIG_OS_OPENBSD)
elseif (XMRIG_OS_HAIKU)
add_definitions(-DXMRIG_OS_HAIKU)
endif()
endif()


@@ -62,7 +62,7 @@ if (WITH_RANDOMX)
src/crypto/randomx/jit_compiler_x86_static.asm
src/crypto/randomx/jit_compiler_x86.cpp
)
elseif (WITH_ASM AND NOT XMRIG_ARM AND CMAKE_SIZEOF_VOID_P EQUAL 8)
elseif (WITH_ASM AND NOT XMRIG_ARM AND NOT XMRIG_RISCV AND CMAKE_SIZEOF_VOID_P EQUAL 8)
list(APPEND SOURCES_CRYPTO
src/crypto/randomx/jit_compiler_x86_static.S
src/crypto/randomx/jit_compiler_x86.cpp
@@ -80,6 +80,39 @@ if (WITH_RANDOMX)
else()
set_property(SOURCE src/crypto/randomx/jit_compiler_a64_static.S PROPERTY LANGUAGE C)
endif()
elseif (XMRIG_RISCV AND CMAKE_SIZEOF_VOID_P EQUAL 8)
list(APPEND SOURCES_CRYPTO
src/crypto/randomx/jit_compiler_rv64_static.S
src/crypto/randomx/jit_compiler_rv64_vector_static.S
src/crypto/randomx/jit_compiler_rv64.cpp
src/crypto/randomx/jit_compiler_rv64_vector.cpp
src/crypto/randomx/aes_hash_rv64_vector.cpp
src/crypto/randomx/aes_hash_rv64_zvkned.cpp
)
# cheat because cmake and ccache hate each other
set_property(SOURCE src/crypto/randomx/jit_compiler_rv64_static.S PROPERTY LANGUAGE C)
set_property(SOURCE src/crypto/randomx/jit_compiler_rv64_vector_static.S PROPERTY LANGUAGE C)
set(RV64_VECTOR_FILE_ARCH "rv64gcv")
if (ARCH STREQUAL "native")
if (RVARCH_ZICBOP)
set(RV64_VECTOR_FILE_ARCH "${RV64_VECTOR_FILE_ARCH}_zicbop")
endif()
if (RVARCH_ZBA)
set(RV64_VECTOR_FILE_ARCH "${RV64_VECTOR_FILE_ARCH}_zba")
endif()
if (RVARCH_ZBB)
set(RV64_VECTOR_FILE_ARCH "${RV64_VECTOR_FILE_ARCH}_zbb")
endif()
if (RVARCH_ZVKB)
set(RV64_VECTOR_FILE_ARCH "${RV64_VECTOR_FILE_ARCH}_zvkb")
endif()
endif()
set_source_files_properties(src/crypto/randomx/jit_compiler_rv64_vector_static.S PROPERTIES COMPILE_FLAGS "-march=${RV64_VECTOR_FILE_ARCH}")
set_source_files_properties(src/crypto/randomx/aes_hash_rv64_vector.cpp PROPERTIES COMPILE_FLAGS "-O3 -march=${RV64_VECTOR_FILE_ARCH}")
set_source_files_properties(src/crypto/randomx/aes_hash_rv64_zvkned.cpp PROPERTIES COMPILE_FLAGS "-O3 -march=${RV64_VECTOR_FILE_ARCH}_zvkned")
else()
list(APPEND SOURCES_CRYPTO
src/crypto/randomx/jit_compiler_fallback.cpp
@@ -116,7 +149,7 @@ if (WITH_RANDOMX)
)
endif()
if (WITH_MSR AND NOT XMRIG_ARM AND CMAKE_SIZEOF_VOID_P EQUAL 8 AND (XMRIG_OS_WIN OR XMRIG_OS_LINUX))
if (WITH_MSR AND NOT XMRIG_ARM AND NOT XMRIG_RISCV AND CMAKE_SIZEOF_VOID_P EQUAL 8 AND (XMRIG_OS_WIN OR XMRIG_OS_LINUX))
add_definitions(/DXMRIG_FEATURE_MSR)
add_definitions(/DXMRIG_FIX_RYZEN)
message("-- WITH_MSR=ON")
@@ -157,6 +190,15 @@ if (WITH_RANDOMX)
list(APPEND HEADERS_CRYPTO src/crypto/rx/Profiler.h)
list(APPEND SOURCES_CRYPTO src/crypto/rx/Profiler.cpp)
endif()
if (WITH_VAES)
set(SOURCES_CRYPTO "${SOURCES_CRYPTO}" src/crypto/randomx/aes_hash_vaes512.cpp)
if (CMAKE_C_COMPILER_ID MATCHES MSVC)
set_source_files_properties(src/crypto/randomx/aes_hash_vaes512.cpp PROPERTIES COMPILE_FLAGS "/O2 /Ob2 /Oi /Ot /arch:AVX512")
elseif (CMAKE_C_COMPILER_ID MATCHES GNU OR CMAKE_C_COMPILER_ID MATCHES Clang)
set_source_files_properties(src/crypto/randomx/aes_hash_vaes512.cpp PROPERTIES COMPILE_FLAGS "-O3 -mavx512f -mvaes")
endif()
endif()
else()
remove_definitions(/DXMRIG_ALGO_RANDOMX)
endif()

doc/RISCV_PERF_TUNING.md (new file)

@@ -0,0 +1,365 @@
# RISC-V Performance Optimization Guide
This guide provides comprehensive instructions for optimizing XMRig on RISC-V architectures.
## Build Optimizations
### Compiler Flags Applied Automatically
The CMake build now applies aggressive RISC-V-specific optimizations:
```cmake
# RISC-V ISA with extensions
-march=rv64gcv_zba_zbb_zbc_zbs
# Aggressive compiler optimizations
-funroll-loops # Unroll loops for ILP (instruction-level parallelism)
-fomit-frame-pointer # Free up frame pointer register (RISC-V has limited registers)
-fno-common # Better code generation for global variables
-finline-functions # Inline more functions for better cache locality
-ffast-math # Relaxed FP semantics (safe for mining)
-flto # Link-time optimization for cross-module inlining
# Release build additions
-minline-atomics # Inline atomic operations for faster synchronization
```
### Optimal Build Command
```bash
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j$(nproc)
```
**Expected build time**: 5-15 minutes depending on CPU
## Runtime Optimizations
### 1. Memory Configuration (Most Important)
Enable huge pages to reduce TLB misses and fragmentation:
#### Enable 2MB Huge Pages
```bash
# Calculate required huge pages (1 page = 2MB)
# For 2 GB dataset: 1024 pages
# For cache + dataset: 1536 pages minimum
sudo sysctl -w vm.nr_hugepages=2048
```
Verify:
```bash
grep HugePages /proc/meminfo
# Expected: HugePages_Free should be close to nr_hugepages
```
#### Enable 1GB Huge Pages (Optional but Recommended)
```bash
# Run provided helper script
sudo ./scripts/enable_1gb_pages.sh
# Verify 1GB pages are available
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
# Should be: >= 1 (one 1GB page)
```
Update config.json:
```json
{
"cpu": {
"huge-pages": true
},
"randomx": {
"1gb-pages": true
}
}
```
### 2. RandomX Mode Selection
| Mode | Memory | Init Time | Throughput | Recommendation |
|------|--------|-----------|-----------|-----------------|
| **light** | 256 MB | 10 sec | Low | Testing, resource-constrained |
| **fast** | 2 GB | 2-5 min* | High | Production (with huge pages) |
| **auto** | 2 GB | Varies | High | Default (uses fast if possible) |
*With optimizations; can be 30+ minutes without huge pages
**For RISC-V, use fast mode with huge pages enabled.**
### 3. Dataset Initialization Threads
Optimal thread count = 60-75% of CPU cores (leaves headroom for OS/other tasks)
```json
{
"randomx": {
"init": 4
}
}
```
Or auto-detect (rewritten for RISC-V):
```json
{
"randomx": {
"init": -1
}
}
```
### 4. CPU Affinity (Optional)
Pin threads to specific cores for better cache locality:
```json
{
"cpu": {
"rx/0": [
{ "threads": 1, "affinity": 0 },
{ "threads": 1, "affinity": 1 },
{ "threads": 1, "affinity": 2 },
{ "threads": 1, "affinity": 3 }
]
}
}
```
### 5. CPU Governor (Linux)
Set to performance mode for maximum throughput:
```bash
# Check current governor
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# Set to performance (requires root)
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
# Verify
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# Should output: performance
```
## Configuration Examples
### Minimum (Testing)
```json
{
"randomx": {
"mode": "light"
},
"cpu": {
"huge-pages": false
}
}
```
### Recommended (Balanced)
```json
{
"randomx": {
"mode": "auto",
"init": 4,
"1gb-pages": true
},
"cpu": {
"huge-pages": true,
"priority": 2
}
}
```
### Maximum Performance (Production)
```json
{
"randomx": {
"mode": "fast",
"init": -1,
"1gb-pages": true,
"scratchpad_prefetch_mode": 1
},
"cpu": {
"huge-pages": true,
"priority": 3,
"yield": false
}
}
```
## CLI Equivalents
```bash
# Light mode
./xmrig --randomx-mode=light
# Fast mode with 4 init threads
./xmrig --randomx-mode=fast --randomx-init=4
# Benchmark
./xmrig --bench=1M --algo=rx/0
# Benchmark Wownero variant (1 MB scratchpad)
./xmrig --bench=1M --algo=rx/wow
# Mine to pool
./xmrig -o pool.example.com:3333 -u YOUR_WALLET -p x
```
## Performance Diagnostics
### Check if Vector Extensions are Detected
Look for `FEATURES:` line in output:
```
* CPU: ky,x60 (uarch ky,x1)
* FEATURES: rv64imafdcv zba zbb zbc zbs
```
- `v`: Vector extension (RVV) ✓
- `zba`, `zbb`, `zbc`, `zbs`: Bit manipulation ✓
- If missing, make sure build used `-march=rv64gcv_zba_zbb_zbc_zbs`
### Verify Huge Pages at Runtime
```bash
# Run xmrig with --bench=1M and check output
./xmrig --bench=1M
# Look for line like:
# HUGE PAGES 100% 1 / 1 (1024 MB)
```
- Should show 100% for dataset AND threads
- If less, increase `vm.nr_hugepages` and reboot
### Monitor Performance
```bash
# Run benchmark multiple times to find stable hashrate
./xmrig --bench=1M --algo=rx/0
./xmrig --bench=10M --algo=rx/0
./xmrig --bench=100M --algo=rx/0
# Check system load and memory during mining
while true; do free -h; grep HugePages /proc/meminfo; sleep 2; done
```
## Expected Performance
### Hardware: Orange Pi RV2 (Ky X1, 8 cores @ ~1.5 GHz)
| Config | Mode | Hashrate | Init Time |
|--------|------|----------|-----------|
| Scalar (baseline) | fast | 30 H/s | 10 min |
| Scalar + huge pages | fast | 33 H/s | 2 min |
| RVV (if enabled) | fast | 70-100 H/s | 3 min |
*Actual results depend on CPU frequency, memory speed, and load*
## Troubleshooting
### Long Initialization Times (30+ minutes)
**Cause**: Huge pages not enabled, system using swap
**Solution**:
1. Enable huge pages: `sudo sysctl -w vm.nr_hugepages=2048`
2. Reboot: `sudo reboot`
3. Reduce mining threads to free memory
4. Check available memory: `free -h`
### Low Hashrate (50% of expected)
**Cause**: CPU governor set to power-save, no huge pages, high contention
**Solution**:
1. Set governor to performance: `echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor`
2. Enable huge pages
3. Reduce number of mining threads
4. Check system load: `top` or `htop`
### Dataset Init Crashes or Hangs
**Cause**: Insufficient memory, corrupted huge pages
**Solution**:
1. Disable huge pages temporarily: set `huge-pages: false` in config
2. Reduce mining threads
3. Reboot and re-enable huge pages
4. Try light mode: `--randomx-mode=light`
### Out of Memory During Benchmark
**Cause**: Not enough RAM for dataset + cache + threads
**Solution**:
1. Use light mode: `--randomx-mode=light`
2. Reduce mining threads: `--threads=1`
3. Increase available memory (kill other processes)
4. Check: `free -h` before mining
## Advanced Tuning
### Vector Length (VLEN) Detection
The RISC-V vector extension's vector register length (VLEN) affects performance:
```bash
# Check VLEN on your CPU
cat /proc/cpuinfo | grep vlen
# Expected values:
# - 128 bits (16 bytes) = minimum
# - 256 bits (32 bytes) = common
# - 512 bits (64 bytes) = high performance
```
Larger VLEN generally means better performance for vectorized operations.
### Prefetch Optimization
The code automatically optimizes memory prefetching for RISC-V:
```
scratchpad_prefetch_mode: 0 = disabled (slowest)
scratchpad_prefetch_mode: 1 = prefetch.r (default, recommended)
scratchpad_prefetch_mode: 2 = prefetch.w (experimental)
```
### Memory Bandwidth Saturation
If experiencing memory bandwidth saturation (high latency):
1. Reduce mining threads
2. Increase L2/L3 cache by mining fewer threads per core
3. Enable cache QoS (AMD Ryzen): `cache_qos: true`
## Building with Custom Flags
To build with custom RISC-V flags:
```bash
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release \
-DCMAKE_C_FLAGS="-march=rv64gcv_zba_zbb_zbc_zbs -O3 -funroll-loops -fomit-frame-pointer" \
..
make -j$(nproc)
```
## Future Optimizations
- [ ] Zbk* (crypto) support detection and usage
- [ ] Optimal VLEN-aware algorithm selection
- [ ] Per-core memory affinity (NUMA support)
- [ ] Dynamic thread count adjustment based on thermals
- [ ] Cross-compile optimizations for various RISC-V cores
## References
- [RISC-V Vector Extension Spec](https://github.com/riscv/riscv-v-spec)
- [RISC-V Bit Manipulation Spec](https://github.com/riscv/riscv-bitmanip)
- [RISC-V Crypto Spec](https://github.com/riscv/riscv-crypto)
- [XMRig Documentation](https://xmrig.com/docs)
---
For further optimization, enable RVV intrinsics by replacing `sse2rvv.h` with `sse2rvv_optimized.h` in the build.


@@ -12,7 +12,7 @@ if grep -E 'AMD Ryzen|AMD EPYC|AuthenticAMD' /proc/cpuinfo > /dev/null;
then
if grep "cpu family[[:space:]]\{1,\}:[[:space:]]25" /proc/cpuinfo > /dev/null;
then
if grep "model[[:space:]]\{1,\}:[[:space:]]97" /proc/cpuinfo > /dev/null;
if grep "model[[:space:]]\{1,\}:[[:space:]]\(97\|117\)" /proc/cpuinfo > /dev/null;
then
echo "Detected Zen4 CPU"
wrmsr -a 0xc0011020 0x4400000000000


@@ -35,7 +35,7 @@ if (CMAKE_C_COMPILER_ID MATCHES MSVC)
add_feature_impl(xop "" HAVE_XOP)
add_feature_impl(avx2 "/arch:AVX2" HAVE_AVX2)
add_feature_impl(avx512f "/arch:AVX512F" HAVE_AVX512F)
elseif (NOT XMRIG_ARM AND CMAKE_SIZEOF_VOID_P EQUAL 8)
elseif (NOT XMRIG_ARM AND NOT XMRIG_RISCV AND CMAKE_SIZEOF_VOID_P EQUAL 8)
function(add_feature_impl FEATURE GCC_FLAG DEF)
add_library(argon2-${FEATURE} STATIC arch/x86_64/lib/argon2-${FEATURE}.c)
target_include_directories(argon2-${FEATURE} PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/../../)


@@ -31,7 +31,7 @@
#include <libkern/OSByteOrder.h>
#define ethash_swap_u32(input_) OSSwapInt32(input_)
#define ethash_swap_u64(input_) OSSwapInt64(input_)
#elif defined(__FreeBSD__) || defined(__DragonFly__) || defined(__NetBSD__)
#elif defined(__FreeBSD__) || defined(__DragonFly__) || defined(__NetBSD__) || defined(__HAIKU__)
#define ethash_swap_u32(input_) bswap32(input_)
#define ethash_swap_u64(input_) bswap64(input_)
#elif defined(__OpenBSD__)


@@ -89,11 +89,16 @@ static void print_cpu(const Config *)
{
const auto info = Cpu::info();
Log::print(GREEN_BOLD(" * ") WHITE_BOLD("%-13s%s (%zu)") " %s %sAES%s",
Log::print(GREEN_BOLD(" * ") WHITE_BOLD("%-13s%s (%zu)") " %s %s%sAES%s",
"CPU",
info->brand(),
info->packages(),
ICpuInfo::is64bit() ? GREEN_BOLD("64-bit") : RED_BOLD("32-bit"),
#ifdef XMRIG_RISCV
info->hasRISCV_Vector() ? GREEN_BOLD_S "RVV " : RED_BOLD_S "-RVV ",
#else
"",
#endif
info->hasAES() ? GREEN_BOLD_S : RED_BOLD_S "-",
info->isVM() ? RED_BOLD_S " VM" : ""
);


@@ -87,14 +87,14 @@ xmrig::CpuWorker<N>::CpuWorker(size_t id, const CpuLaunchData &data) :
if (!cn_heavyZen3Memory) {
// Round up number of threads to the multiple of 8
const size_t num_threads = ((m_threads + 7) / 8) * 8;
cn_heavyZen3Memory = new VirtualMemory(m_algorithm.l3() * num_threads, data.hugePages, false, false, node());
cn_heavyZen3Memory = new VirtualMemory(m_algorithm.l3() * num_threads, data.hugePages, false, false, node(), VirtualMemory::kDefaultHugePageSize);
}
m_memory = cn_heavyZen3Memory;
}
else
# endif
{
m_memory = new VirtualMemory(m_algorithm.l3() * N, data.hugePages, false, true, node());
m_memory = new VirtualMemory(m_algorithm.l3() * N, data.hugePages, false, true, node(), VirtualMemory::kDefaultHugePageSize);
}
# ifdef XMRIG_ALGO_GHOSTRIDER


@@ -46,7 +46,12 @@ else()
set(CPUID_LIB "")
endif()
if (XMRIG_ARM)
if (XMRIG_RISCV)
list(APPEND SOURCES_BACKEND_CPU
src/backend/cpu/platform/lscpu_riscv.cpp
src/backend/cpu/platform/BasicCpuInfo_riscv.cpp
)
elseif (XMRIG_ARM)
list(APPEND SOURCES_BACKEND_CPU src/backend/cpu/platform/BasicCpuInfo_arm.cpp)
if (XMRIG_OS_WIN)


@@ -85,13 +85,14 @@ public:
FLAG_POPCNT,
FLAG_CAT_L3,
FLAG_VM,
FLAG_RISCV_VECTOR,
FLAG_MAX
};
ICpuInfo() = default;
virtual ~ICpuInfo() = default;
# if defined(__x86_64__) || defined(_M_AMD64) || defined (__arm64__) || defined (__aarch64__)
# if defined(__x86_64__) || defined(_M_AMD64) || defined (__arm64__) || defined (__aarch64__) || defined(__riscv) && (__riscv_xlen == 64)
inline constexpr static bool is64bit() { return true; }
# else
inline constexpr static bool is64bit() { return false; }
@@ -109,6 +110,7 @@ public:
virtual bool hasOneGbPages() const = 0;
virtual bool hasXOP() const = 0;
virtual bool isVM() const = 0;
virtual bool hasRISCV_Vector() const = 0;
virtual bool jccErratum() const = 0;
virtual const char *backend() const = 0;
virtual const char *brand() const = 0;


@@ -58,8 +58,8 @@
namespace xmrig {
constexpr size_t kCpuFlagsSize = 15;
static const std::array<const char *, kCpuFlagsSize> flagNames = { "aes", "vaes", "avx", "avx2", "avx512f", "bmi2", "osxsave", "pdpe1gb", "sse2", "ssse3", "sse4.1", "xop", "popcnt", "cat_l3", "vm" };
constexpr size_t kCpuFlagsSize = 16;
static const std::array<const char *, kCpuFlagsSize> flagNames = { "aes", "vaes", "avx", "avx2", "avx512f", "bmi2", "osxsave", "pdpe1gb", "sse2", "ssse3", "sse4.1", "xop", "popcnt", "cat_l3", "vm", "rvv" };
static_assert(kCpuFlagsSize == ICpuInfo::FLAG_MAX, "kCpuFlagsSize and FLAG_MAX mismatch");
@@ -250,7 +250,7 @@ xmrig::BasicCpuInfo::BasicCpuInfo() :
break;
case 0x19:
if (m_model == 0x61) {
if ((m_model == 0x61) || (m_model == 0x75)) {
m_arch = ARCH_ZEN4;
m_msrMod = MSR_MOD_RYZEN_19H_ZEN4;
}


@@ -52,6 +52,7 @@ protected:
inline bool hasOneGbPages() const override { return has(FLAG_PDPE1GB); }
inline bool hasXOP() const override { return has(FLAG_XOP); }
inline bool isVM() const override { return has(FLAG_VM); }
inline bool hasRISCV_Vector() const override { return has(FLAG_RISCV_VECTOR); }
inline bool jccErratum() const override { return m_jccErratum; }
inline const char *brand() const override { return m_brand; }
inline const std::vector<int32_t> &units() const override { return m_units; }
@@ -65,7 +66,7 @@ protected:
inline Vendor vendor() const override { return m_vendor; }
inline uint32_t model() const override
{
# ifndef XMRIG_ARM
# if !defined(XMRIG_ARM) && !defined(XMRIG_RISCV)
return m_model;
# else
return 0;
@@ -80,7 +81,7 @@ protected:
Vendor m_vendor = VENDOR_UNKNOWN;
private:
# ifndef XMRIG_ARM
# if !defined(XMRIG_ARM) && !defined(XMRIG_RISCV)
uint32_t m_procInfo = 0;
uint32_t m_family = 0;
uint32_t m_model = 0;


@@ -0,0 +1,119 @@
/* XMRig
* Copyright (c) 2025 Slayingripper <https://github.com/Slayingripper>
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2017-2019 XMR-Stak <https://github.com/fireice-uk>, <https://github.com/psychocrypt>
* Copyright (c) 2016-2025 XMRig <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <array>
#include <cstring>
#include <fstream>
#include <thread>
#include "backend/cpu/platform/BasicCpuInfo.h"
#include "base/tools/String.h"
#include "3rdparty/rapidjson/document.h"
namespace xmrig {
extern String cpu_name_riscv();
extern bool has_riscv_vector();
extern bool has_riscv_aes();
} // namespace xmrig
xmrig::BasicCpuInfo::BasicCpuInfo() :
m_threads(std::thread::hardware_concurrency())
{
m_units.resize(m_threads);
for (int32_t i = 0; i < static_cast<int32_t>(m_threads); ++i) {
m_units[i] = i;
}
memcpy(m_brand, "RISC-V", 6);
auto name = cpu_name_riscv();
if (!name.isNull()) {
strncpy(m_brand, name.data(), sizeof(m_brand) - 1);
}
// Check for vector extensions
m_flags.set(FLAG_RISCV_VECTOR, has_riscv_vector());
// Check for AES extensions (Zknd/Zkne)
m_flags.set(FLAG_AES, has_riscv_aes());
// RISC-V typically supports 1GB huge pages
m_flags.set(FLAG_PDPE1GB, std::ifstream("/sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages").good());
}
const char *xmrig::BasicCpuInfo::backend() const
{
return "basic/1";
}
xmrig::CpuThreads xmrig::BasicCpuInfo::threads(const Algorithm &algorithm, uint32_t) const
{
# ifdef XMRIG_ALGO_GHOSTRIDER
if (algorithm.family() == Algorithm::GHOSTRIDER) {
return CpuThreads(threads(), 8);
}
# endif
return CpuThreads(threads());
}
rapidjson::Value xmrig::BasicCpuInfo::toJSON(rapidjson::Document &doc) const
{
using namespace rapidjson;
auto &allocator = doc.GetAllocator();
Value out(kObjectType);
out.AddMember("brand", StringRef(brand()), allocator);
out.AddMember("aes", hasAES(), allocator);
out.AddMember("avx2", false, allocator);
out.AddMember("x64", is64bit(), allocator); // DEPRECATED will be removed in the next major release.
out.AddMember("64_bit", is64bit(), allocator);
out.AddMember("l2", static_cast<uint64_t>(L2()), allocator);
out.AddMember("l3", static_cast<uint64_t>(L3()), allocator);
out.AddMember("cores", static_cast<uint64_t>(cores()), allocator);
out.AddMember("threads", static_cast<uint64_t>(threads()), allocator);
out.AddMember("packages", static_cast<uint64_t>(packages()), allocator);
out.AddMember("nodes", static_cast<uint64_t>(nodes()), allocator);
out.AddMember("backend", StringRef(backend()), allocator);
out.AddMember("msr", "none", allocator);
out.AddMember("assembly", "none", allocator);
out.AddMember("arch", "riscv64", allocator);
Value flags(kArrayType);
if (hasAES()) {
flags.PushBack("aes", allocator);
}
out.AddMember("flags", flags, allocator);
return out;
}


@@ -87,7 +87,7 @@ static inline size_t countByType(hwloc_topology_t topology, hwloc_obj_type_t typ
}
#ifndef XMRIG_ARM
#if !defined(XMRIG_ARM) && !defined(XMRIG_RISCV)
static inline std::vector<hwloc_obj_t> findByType(hwloc_obj_t obj, hwloc_obj_type_t type)
{
std::vector<hwloc_obj_t> out;
@@ -207,7 +207,7 @@ bool xmrig::HwlocCpuInfo::membind(hwloc_const_bitmap_t nodeset)
xmrig::CpuThreads xmrig::HwlocCpuInfo::threads(const Algorithm &algorithm, uint32_t limit) const
{
# ifndef XMRIG_ARM
# if !defined(XMRIG_ARM) && !defined(XMRIG_RISCV)
if (L2() == 0 && L3() == 0) {
return BasicCpuInfo::threads(algorithm, limit);
}
@@ -277,7 +277,7 @@ xmrig::CpuThreads xmrig::HwlocCpuInfo::allThreads(const Algorithm &algorithm, ui
void xmrig::HwlocCpuInfo::processTopLevelCache(hwloc_obj_t cache, const Algorithm &algorithm, CpuThreads &threads, size_t limit) const
{
# ifndef XMRIG_ARM
# if !defined(XMRIG_ARM) && !defined(XMRIG_RISCV)
constexpr size_t oneMiB = 1024U * 1024U;
size_t PUs = countByType(cache, HWLOC_OBJ_PU);
@@ -311,17 +311,17 @@ void xmrig::HwlocCpuInfo::processTopLevelCache(hwloc_obj_t cache, const Algorith
uint32_t intensity = algorithm.maxIntensity() == 1 ? 0 : 1;
if (cache->attr->cache.depth == 3) {
for (size_t i = 0; i < cache->arity; ++i) {
hwloc_obj_t l2 = cache->children[i];
auto process_L2 = [&L2, &L2_associativity, L3_exclusive, this, &extra, scratchpad](hwloc_obj_t l2) {
if (!hwloc_obj_type_is_cache(l2->type) || l2->attr == nullptr) {
continue;
return;
}
L2 += l2->attr->cache.size;
L2_associativity = l2->attr->cache.associativity;
if (L3_exclusive) {
if (vendor() == VENDOR_AMD) {
if ((vendor() == VENDOR_AMD) && ((arch() == ARCH_ZEN4) || (arch() == ARCH_ZEN5))) {
// Use extra L2 only on newer CPUs because older CPUs (Zen 3 and older) don't benefit from it.
// For some reason, AMD CPUs can use only half of the exclusive L2/L3 cache combo efficiently
extra += std::min<size_t>(l2->attr->cache.size / 2, scratchpad);
}
@@ -329,6 +329,18 @@ void xmrig::HwlocCpuInfo::processTopLevelCache(hwloc_obj_t cache, const Algorith
extra += scratchpad;
}
}
};
for (size_t i = 0; i < cache->arity; ++i) {
hwloc_obj_t ch = cache->children[i];
if (ch->type == HWLOC_OBJ_GROUP) {
for (size_t j = 0; j < ch->arity; ++j) {
process_L2(ch->children[j]);
}
}
else {
process_L2(ch);
}
}
}


@@ -0,0 +1,150 @@
/* XMRig
* Copyright (c) 2025 Slayingripper <https://github.com/Slayingripper>
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2025 XMRig <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include "base/tools/String.h"
#include "3rdparty/fmt/core.h"
#include <cstdio>
#include <cstring>
#include <string>
namespace xmrig {
struct riscv_cpu_desc
{
String model;
String isa;
String uarch;
bool has_vector = false;
bool has_aes = false;
inline bool isReady() const { return !isa.isNull(); }
};
static bool lookup_riscv(char *line, const char *pattern, String &value)
{
char *p = strstr(line, pattern);
if (!p) {
return false;
}
p += strlen(pattern);
while (isspace(*p)) {
++p;
}
if (*p == ':') {
++p;
}
while (isspace(*p)) {
++p;
}
// Remove trailing newline
size_t len = strlen(p);
if (len > 0 && p[len - 1] == '\n') {
p[len - 1] = '\0';
}
// Ensure we call the const char* assignment (which performs a copy)
// instead of the char* overload (which would take ownership of the pointer)
value = (const char*)p;
return true;
}
static bool read_riscv_cpuinfo(riscv_cpu_desc *desc)
{
auto fp = fopen("/proc/cpuinfo", "r");
if (!fp) {
return false;
}
char buf[2048]; // Larger buffer for long ISA strings
while (fgets(buf, sizeof(buf), fp) != nullptr) {
lookup_riscv(buf, "model name", desc->model);
if (lookup_riscv(buf, "isa", desc->isa)) {
desc->isa.toLower();
for (const String& s : desc->isa.split('_')) {
const char* p = s.data();
const size_t n = s.size();
if ((s.size() > 4) && (memcmp(p, "rv64", 4) == 0)) {
for (size_t i = 4; i < n; ++i) {
if (p[i] == 'v') {
desc->has_vector = true;
break;
}
}
}
else if (s == "zve64d") {
desc->has_vector = true;
}
else if ((s == "zvkn") || (s == "zvknc") || (s == "zvkned") || (s == "zvkng")){
desc->has_aes = true;
}
}
}
lookup_riscv(buf, "uarch", desc->uarch);
if (desc->isReady()) {
break;
}
}
fclose(fp);
return desc->isReady();
}
String cpu_name_riscv()
{
riscv_cpu_desc desc;
if (read_riscv_cpuinfo(&desc)) {
if (!desc.uarch.isNull()) {
return fmt::format("{} ({})", desc.model, desc.uarch).c_str();
}
return desc.model;
}
return "RISC-V";
}
bool has_riscv_vector()
{
riscv_cpu_desc desc;
if (read_riscv_cpuinfo(&desc)) {
return desc.has_vector;
}
return false;
}
bool has_riscv_aes()
{
riscv_cpu_desc desc;
if (read_riscv_cpuinfo(&desc)) {
return desc.has_aes;
}
return false;
}
} // namespace xmrig


@@ -1,6 +1,6 @@
/* XMRig
* Copyright (c) 2018-2021 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2021 XMRig <https://github.com/xmrig>, <support@xmrig.com>
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2025 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -71,11 +71,11 @@ char *xmrig::Platform::createUserAgent()
#ifndef XMRIG_FEATURE_HWLOC
#ifdef __DragonFly__
#if defined(__DragonFly__) || defined(XMRIG_OS_OPENBSD) || defined(XMRIG_OS_HAIKU)
bool xmrig::Platform::setThreadAffinity(uint64_t cpu_id)
{
return true;
return false;
}
#else


@@ -1,6 +1,6 @@
/* XMRig
* Copyright (c) 2018-2021 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2021 XMRig <https://github.com/xmrig>, <support@xmrig.com>
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2025 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -18,14 +18,12 @@
#include <cstdio>
#ifdef _MSC_VER
# include "getopt/getopt.h"
#else
# include <getopt.h>
#endif
#include "base/kernel/config/BaseTransform.h"
#include "base/io/json/JsonChain.h"
#include "base/io/log/Log.h"
@@ -37,7 +35,6 @@
#include "base/net/stratum/Pools.h"
#include "core/config/Config_platform.h"
#ifdef XMRIG_FEATURE_TLS
# include "base/net/tls/TlsConfig.h"
#endif
@@ -47,9 +44,9 @@ void xmrig::BaseTransform::load(JsonChain &chain, Process *process, IConfigTrans
{
using namespace rapidjson;
int key = 0;
int argc = process->arguments().argc();
char **argv = process->arguments().argv();
int key = 0;
const int argc = process->arguments().argc();
char **argv = process->arguments().argv();
Document doc(kObjectType);
@@ -262,7 +259,8 @@ void xmrig::BaseTransform::transform(rapidjson::Document &doc, int key, const ch
case IConfig::DaemonKey: /* --daemon */
case IConfig::SubmitToOriginKey: /* --submit-to-origin */
case IConfig::VerboseKey: /* --verbose */
case IConfig::DnsIPv6Key: /* --dns-ipv6 */
case IConfig::DnsIPv4Key: /* --ipv4 */
case IConfig::DnsIPv6Key: /* --ipv6 */
return transformBoolean(doc, key, true);
case IConfig::ColorKey: /* --no-color */
@@ -323,8 +321,11 @@ void xmrig::BaseTransform::transformBoolean(rapidjson::Document &doc, int key, b
case IConfig::NoTitleKey: /* --no-title */
return set(doc, BaseConfig::kTitle, enable);
case IConfig::DnsIPv6Key: /* --dns-ipv6 */
return set(doc, DnsConfig::kField, DnsConfig::kIPv6, enable);
case IConfig::DnsIPv4Key: /* --ipv4 */
return set(doc, DnsConfig::kField, DnsConfig::kIPv, 4);
case IConfig::DnsIPv6Key: /* --ipv6 */
return set(doc, DnsConfig::kField, DnsConfig::kIPv, 6);
default:
break;


@@ -1,6 +1,6 @@
/* XMRig
* Copyright (c) 2018-2021 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2021 XMRig <https://github.com/xmrig>, <support@xmrig.com>
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2025 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,9 +16,7 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef XMRIG_ICONFIG_H
#define XMRIG_ICONFIG_H
#pragma once
#include "3rdparty/rapidjson/fwd.h"
@@ -82,7 +80,8 @@ public:
HugePageSizeKey = 1050,
PauseOnActiveKey = 1051,
SubmitToOriginKey = 1052,
DnsIPv6Key = 1053,
DnsIPv4Key = '4',
DnsIPv6Key = '6',
DnsTtlKey = 1054,
SpendSecretKey = 1055,
DaemonZMQPortKey = 1056,
@@ -177,7 +176,4 @@ public:
};
} /* namespace xmrig */
#endif // XMRIG_ICONFIG_H
} // namespace xmrig


@@ -1,6 +1,6 @@
/* XMRig
* Copyright (c) 2018-2021 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2021 XMRig <https://github.com/xmrig>, <support@xmrig.com>
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2025 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,21 +16,16 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef XMRIG_IDNSBACKEND_H
#define XMRIG_IDNSBACKEND_H
#pragma once
#include "base/tools/Object.h"
#include <memory>
namespace xmrig {
class DnsConfig;
class DnsRecords;
class DnsRequest;
class IDnsListener;
class String;
@@ -43,12 +38,8 @@ public:
IDnsBackend() = default;
virtual ~IDnsBackend() = default;
virtual const DnsRecords &records() const = 0;
virtual std::shared_ptr<DnsRequest> resolve(const String &host, IDnsListener *listener, uint64_t ttl) = 0;
virtual void resolve(const String &host, const std::weak_ptr<IDnsListener> &listener, const DnsConfig &config) = 0;
};
} /* namespace xmrig */
#endif // XMRIG_IDNSBACKEND_H
} // namespace xmrig


@@ -1,6 +1,6 @@
/* XMRig
* Copyright (c) 2018-2021 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2021 XMRig <https://github.com/xmrig>, <support@xmrig.com>
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2025 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -18,6 +18,7 @@
#include "base/net/dns/Dns.h"
#include "base/net/dns/DnsRequest.h"
#include "base/net/dns/DnsUvBackend.h"
@@ -25,17 +26,21 @@ namespace xmrig {
DnsConfig Dns::m_config;
std::map<String, std::shared_ptr<IDnsBackend> > Dns::m_backends;
std::map<String, std::shared_ptr<IDnsBackend>> Dns::m_backends;
} // namespace xmrig
std::shared_ptr<xmrig::DnsRequest> xmrig::Dns::resolve(const String &host, IDnsListener *listener, uint64_t ttl)
std::shared_ptr<xmrig::DnsRequest> xmrig::Dns::resolve(const String &host, IDnsListener *listener)
{
auto req = std::make_shared<DnsRequest>(listener);
if (m_backends.find(host) == m_backends.end()) {
m_backends.insert({ host, std::make_shared<DnsUvBackend>() });
}
return m_backends.at(host)->resolve(host, listener, ttl == 0 ? m_config.ttl() : ttl);
m_backends.at(host)->resolve(host, req, m_config);
return req;
}


@@ -1,6 +1,6 @@
/* XMRig
* Copyright (c) 2018-2021 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2021 XMRig <https://github.com/xmrig>, <support@xmrig.com>
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2025 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -43,7 +43,7 @@ public:
inline static const DnsConfig &config() { return m_config; }
inline static void set(const DnsConfig &config) { m_config = config; }
static std::shared_ptr<DnsRequest> resolve(const String &host, IDnsListener *listener, uint64_t ttl = 0);
static std::shared_ptr<DnsRequest> resolve(const String &host, IDnsListener *listener);
private:
static DnsConfig m_config;


@@ -1,6 +1,6 @@
/* XMRig
* Copyright (c) 2018-2021 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2021 XMRig <https://github.com/xmrig>, <support@xmrig.com>
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2025 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -20,15 +20,15 @@
#include "3rdparty/rapidjson/document.h"
#include "base/io/json/Json.h"
#include <algorithm>
#include <uv.h>
namespace xmrig {
const char *DnsConfig::kField = "dns";
const char *DnsConfig::kIPv6 = "ipv6";
const char *DnsConfig::kIPv = "ip_version";
const char *DnsConfig::kTTL = "ttl";
@@ -37,8 +37,26 @@ const char *DnsConfig::kTTL = "ttl";
xmrig::DnsConfig::DnsConfig(const rapidjson::Value &value)
{
m_ipv6 = Json::getBool(value, kIPv6, m_ipv6);
m_ttl = std::max(Json::getUint(value, kTTL, m_ttl), 1U);
const uint32_t ipv = Json::getUint(value, kIPv, m_ipv);
if (ipv == 0 || ipv == 4 || ipv == 6) {
m_ipv = ipv;
}
m_ttl = std::max(Json::getUint(value, kTTL, m_ttl), 1U);
}
int xmrig::DnsConfig::ai_family() const
{
if (m_ipv == 4) {
return AF_INET;
}
if (m_ipv == 6) {
return AF_INET6;
}
return AF_UNSPEC;
}
@@ -49,8 +67,8 @@ rapidjson::Value xmrig::DnsConfig::toJSON(rapidjson::Document &doc) const
auto &allocator = doc.GetAllocator();
Value obj(kObjectType);
obj.AddMember(StringRef(kIPv6), m_ipv6, allocator);
obj.AddMember(StringRef(kTTL), m_ttl, allocator);
obj.AddMember(StringRef(kIPv), m_ipv, allocator);
obj.AddMember(StringRef(kTTL), m_ttl, allocator);
return obj;
}

View File

@@ -1,6 +1,6 @@
/* XMRig
* Copyright (c) 2018-2021 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2021 XMRig <https://github.com/xmrig>, <support@xmrig.com>
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2025 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,9 +16,7 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef XMRIG_DNSCONFIG_H
#define XMRIG_DNSCONFIG_H
#pragma once
#include "3rdparty/rapidjson/fwd.h"
@@ -30,25 +28,22 @@ class DnsConfig
{
public:
static const char *kField;
static const char *kIPv6;
static const char *kIPv;
static const char *kTTL;
DnsConfig() = default;
DnsConfig(const rapidjson::Value &value);
inline bool isIPv6() const { return m_ipv6; }
inline uint32_t ipv() const { return m_ipv; }
inline uint32_t ttl() const { return m_ttl * 1000U; }
int ai_family() const;
rapidjson::Value toJSON(rapidjson::Document &doc) const;
private:
bool m_ipv6 = false;
uint32_t m_ttl = 30U;
uint32_t m_ttl = 30U;
uint32_t m_ipv = 0U;
};
} /* namespace xmrig */
#endif /* XMRIG_DNSCONFIG_H */
} // namespace xmrig

View File

@@ -1,6 +1,6 @@
/* XMRig
* Copyright (c) 2018-2023 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2023 XMRig <https://github.com/xmrig>, <support@xmrig.com>
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2025 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,19 +16,16 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <uv.h>
#include "base/net/dns/DnsRecord.h"
xmrig::DnsRecord::DnsRecord(const addrinfo *addr) :
m_type(addr->ai_family == AF_INET6 ? AAAA : (addr->ai_family == AF_INET ? A : Unknown))
xmrig::DnsRecord::DnsRecord(const addrinfo *addr)
{
static_assert(sizeof(m_data) >= sizeof(sockaddr_in6), "Not enough storage for IPv6 address.");
memcpy(m_data, addr->ai_addr, m_type == AAAA ? sizeof(sockaddr_in6) : sizeof(sockaddr_in));
memcpy(m_data, addr->ai_addr, addr->ai_family == AF_INET6 ? sizeof(sockaddr_in6) : sizeof(sockaddr_in));
}
@@ -44,7 +41,7 @@ xmrig::String xmrig::DnsRecord::ip() const
{
char *buf = nullptr;
if (m_type == AAAA) {
if (reinterpret_cast<const sockaddr &>(m_data).sa_family == AF_INET6) {
buf = new char[45]();
uv_ip6_name(reinterpret_cast<const sockaddr_in6*>(m_data), buf, 45);
}

View File

@@ -1,6 +1,6 @@
/* XMRig
* Copyright (c) 2018-2021 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2021 XMRig <https://github.com/xmrig>, <support@xmrig.com>
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2025 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,14 +16,11 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef XMRIG_DNSRECORD_H
#define XMRIG_DNSRECORD_H
#pragma once
struct addrinfo;
struct sockaddr;
#include "base/tools/String.h"
@@ -33,28 +30,15 @@ namespace xmrig {
class DnsRecord
{
public:
enum Type : uint32_t {
Unknown,
A,
AAAA
};
DnsRecord() {}
DnsRecord(const addrinfo *addr);
const sockaddr *addr(uint16_t port = 0) const;
String ip() const;
inline bool isValid() const { return m_type != Unknown; }
inline Type type() const { return m_type; }
private:
mutable uint8_t m_data[28]{};
const Type m_type = Unknown;
};
} /* namespace xmrig */
#endif /* XMRIG_DNSRECORD_H */
} // namespace xmrig

View File

@@ -1,6 +1,6 @@
/* XMRig
* Copyright (c) 2018-2021 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2021 XMRig <https://github.com/xmrig>, <support@xmrig.com>
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2025 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -18,90 +18,96 @@
#include <uv.h>
#include "base/net/dns/DnsRecords.h"
#include "base/net/dns/Dns.h"
const xmrig::DnsRecord &xmrig::DnsRecords::get(DnsRecord::Type prefered) const
namespace {
static size_t dns_records_count(const addrinfo *res, int &ai_family)
{
size_t ipv4 = 0;
size_t ipv6 = 0;
while (res != nullptr) {
if (res->ai_family == AF_INET) {
++ipv4;
}
if (res->ai_family == AF_INET6) {
++ipv6;
}
res = res->ai_next;
}
if (ai_family == AF_INET6 && !ipv6) {
ai_family = AF_INET;
}
switch (ai_family) {
case AF_UNSPEC:
return ipv4 + ipv6;
case AF_INET:
return ipv4;
case AF_INET6:
return ipv6;
default:
break;
}
return 0;
}
} // namespace
xmrig::DnsRecords::DnsRecords(const addrinfo *res, int ai_family)
{
size_t size = dns_records_count(res, ai_family);
if (!size) {
return;
}
m_records.reserve(size);
if (ai_family == AF_UNSPEC) {
while (res != nullptr) {
if (res->ai_family == AF_INET || res->ai_family == AF_INET6) {
m_records.emplace_back(res);
}
res = res->ai_next;
};
} else {
while (res != nullptr) {
if (res->ai_family == ai_family) {
m_records.emplace_back(res);
}
res = res->ai_next;
};
}
size = m_records.size();
if (size > 1) {
m_index = static_cast<size_t>(rand()) % size; // NOLINT(concurrency-mt-unsafe, cert-msc30-c, cert-msc50-cpp)
}
}
const xmrig::DnsRecord &xmrig::DnsRecords::get() const
{
static const DnsRecord defaultRecord;
if (isEmpty()) {
return defaultRecord;
}
const size_t ipv4 = m_ipv4.size();
const size_t ipv6 = m_ipv6.size();
if (ipv6 && (prefered == DnsRecord::AAAA || Dns::config().isIPv6() || !ipv4)) {
return m_ipv6[ipv6 == 1 ? 0 : static_cast<size_t>(rand()) % ipv6]; // NOLINT(concurrency-mt-unsafe, cert-msc30-c, cert-msc50-cpp)
}
if (ipv4) {
return m_ipv4[ipv4 == 1 ? 0 : static_cast<size_t>(rand()) % ipv4]; // NOLINT(concurrency-mt-unsafe, cert-msc30-c, cert-msc50-cpp)
const size_t size = m_records.size();
if (size > 0) {
return m_records[m_index++ % size];
}
return defaultRecord;
}
size_t xmrig::DnsRecords::count(DnsRecord::Type type) const
{
if (type == DnsRecord::A) {
return m_ipv4.size();
}
if (type == DnsRecord::AAAA) {
return m_ipv6.size();
}
return m_ipv4.size() + m_ipv6.size();
}
void xmrig::DnsRecords::clear()
{
m_ipv4.clear();
m_ipv6.clear();
}
void xmrig::DnsRecords::parse(addrinfo *res)
{
clear();
addrinfo *ptr = res;
size_t ipv4 = 0;
size_t ipv6 = 0;
while (ptr != nullptr) {
if (ptr->ai_family == AF_INET) {
++ipv4;
}
else if (ptr->ai_family == AF_INET6) {
++ipv6;
}
ptr = ptr->ai_next;
}
if (ipv4 == 0 && ipv6 == 0) {
return;
}
m_ipv4.reserve(ipv4);
m_ipv6.reserve(ipv6);
ptr = res;
while (ptr != nullptr) {
if (ptr->ai_family == AF_INET) {
m_ipv4.emplace_back(ptr);
}
else if (ptr->ai_family == AF_INET6) {
m_ipv6.emplace_back(ptr);
}
ptr = ptr->ai_next;
}
}
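
The rewrite above collapses the per-family vectors into one flat list with a rotating cursor seeded at a random offset. A small sketch of the resulting selection behavior (illustrative values, not part of the diff):

// Suppose resolution yielded three usable records R0, R1, R2 and m_index
// was seeded to 1. Successive calls rotate through all addresses:
//   records.get() -> R1
//   records.get() -> R2
//   records.get() -> R0   (wraps via m_index++ % size)
// so reconnect attempts cycle through every address instead of repeatedly
// picking one at random.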

View File

@@ -1,6 +1,6 @@
/* XMRig
* Copyright (c) 2018-2021 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2021 XMRig <https://github.com/xmrig>, <support@xmrig.com>
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2025 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,9 +16,7 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef XMRIG_DNSRECORDS_H
#define XMRIG_DNSRECORDS_H
#pragma once
#include "base/net/dns/DnsRecord.h"
@@ -29,20 +27,19 @@ namespace xmrig {
class DnsRecords
{
public:
inline bool isEmpty() const { return m_ipv4.empty() && m_ipv6.empty(); }
DnsRecords() = default;
DnsRecords(const addrinfo *res, int ai_family);
const DnsRecord &get(DnsRecord::Type prefered = DnsRecord::Unknown) const;
size_t count(DnsRecord::Type type = DnsRecord::Unknown) const;
void clear();
void parse(addrinfo *res);
inline bool isEmpty() const { return m_records.empty(); }
inline const std::vector<DnsRecord> &records() const { return m_records; }
inline size_t size() const { return m_records.size(); }
const DnsRecord &get() const;
private:
std::vector<DnsRecord> m_ipv4;
std::vector<DnsRecord> m_ipv6;
mutable size_t m_index = 0;
std::vector<DnsRecord> m_records;
};
} /* namespace xmrig */
#endif /* XMRIG_DNSRECORDS_H */
} // namespace xmrig

View File

@@ -1,6 +1,6 @@
/* XMRig
* Copyright (c) 2018-2021 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2021 XMRig <https://github.com/xmrig>, <support@xmrig.com>
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2025 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,35 +16,30 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef XMRIG_DNSREQUEST_H
#define XMRIG_DNSREQUEST_H
#pragma once
#include "base/tools/Object.h"
#include <cstdint>
#include "base/kernel/interfaces/IDnsListener.h"
namespace xmrig {
class IDnsListener;
class DnsRequest
class DnsRequest : public IDnsListener
{
public:
XMRIG_DISABLE_COPY_MOVE_DEFAULT(DnsRequest)
DnsRequest(IDnsListener *listener) : listener(listener) {}
~DnsRequest() = default;
inline DnsRequest(IDnsListener *listener) : m_listener(listener) {}
~DnsRequest() override = default;
IDnsListener *listener;
protected:
inline void onResolved(const DnsRecords &records, int status, const char *error) override {
m_listener->onResolved(records, status, error);
}
private:
IDnsListener *m_listener;
};
} /* namespace xmrig */
#endif /* XMRIG_DNSREQUEST_H */
} // namespace xmrig

View File

@@ -1,6 +1,6 @@
/* XMRig
* Copyright (c) 2018-2023 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2023 XMRig <https://github.com/xmrig>, <support@xmrig.com>
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2025 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,13 +16,11 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <uv.h>
#include "base/net/dns/DnsUvBackend.h"
#include "base/kernel/interfaces/IDnsListener.h"
#include "base/net/dns/DnsRequest.h"
#include "base/net/dns/DnsConfig.h"
#include "base/tools/Chrono.h"
@@ -73,21 +71,23 @@ xmrig::DnsUvBackend::~DnsUvBackend()
}
std::shared_ptr<xmrig::DnsRequest> xmrig::DnsUvBackend::resolve(const String &host, IDnsListener *listener, uint64_t ttl)
void xmrig::DnsUvBackend::resolve(const String &host, const std::weak_ptr<IDnsListener> &listener, const DnsConfig &config)
{
auto req = std::make_shared<DnsRequest>(listener);
m_queue.emplace_back(listener);
if (Chrono::currentMSecsSinceEpoch() - m_ts <= ttl && !m_records.isEmpty()) {
req->listener->onResolved(m_records, 0, nullptr);
} else {
m_queue.emplace(req);
if (Chrono::currentMSecsSinceEpoch() - m_ts <= config.ttl()) {
return notify();
}
if (m_queue.size() == 1 && !resolve(host)) {
done();
if (m_req) {
return;
}
return req;
m_ai_family = config.ai_family();
if (!resolve(host)) {
notify();
}
}
@@ -102,44 +102,46 @@ bool xmrig::DnsUvBackend::resolve(const String &host)
}
void xmrig::DnsUvBackend::done()
void xmrig::DnsUvBackend::notify()
{
const char *error = m_status < 0 ? uv_strerror(m_status) : nullptr;
while (!m_queue.empty()) {
auto req = std::move(m_queue.front()).lock();
if (req) {
req->listener->onResolved(m_records, m_status, error);
for (const auto &l : m_queue) {
auto listener = l.lock();
if (listener) {
listener->onResolved(m_records, m_status, error);
}
m_queue.pop();
}
m_queue.clear();
m_req.reset();
}
void xmrig::DnsUvBackend::onResolved(int status, addrinfo *res)
{
m_ts = Chrono::currentMSecsSinceEpoch();
m_status = status;
m_ts = Chrono::currentMSecsSinceEpoch();
if ((m_status = status) < 0) {
return done();
if (m_status < 0) {
m_records = {};
return notify();
}
m_records.parse(res);
m_records = { res, m_ai_family };
if (m_records.isEmpty()) {
m_status = UV_EAI_NONAME;
}
done();
notify();
}
void xmrig::DnsUvBackend::onResolved(uv_getaddrinfo_t *req, int status, addrinfo *res)
{
auto backend = getStorage().get(req->data);
auto *backend = getStorage().get(req->data);
if (backend) {
backend->onResolved(status, res);
}

View File

@@ -1,6 +1,6 @@
/* XMRig
* Copyright (c) 2018-2021 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2021 XMRig <https://github.com/xmrig>, <support@xmrig.com>
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2025 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,16 +16,13 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef XMRIG_DNSUVBACKEND_H
#define XMRIG_DNSUVBACKEND_H
#pragma once
#include "base/kernel/interfaces/IDnsBackend.h"
#include "base/net/dns/DnsRecords.h"
#include "base/net/tools/Storage.h"
#include <queue>
#include <deque>
using uv_getaddrinfo_t = struct uv_getaddrinfo_s;
@@ -43,20 +40,19 @@ public:
~DnsUvBackend() override;
protected:
inline const DnsRecords &records() const override { return m_records; }
std::shared_ptr<DnsRequest> resolve(const String &host, IDnsListener *listener, uint64_t ttl) override;
void resolve(const String &host, const std::weak_ptr<IDnsListener> &listener, const DnsConfig &config) override;
private:
bool resolve(const String &host);
void done();
void notify();
void onResolved(int status, addrinfo *res);
static void onResolved(uv_getaddrinfo_t *req, int status, addrinfo *res);
DnsRecords m_records;
int m_ai_family = 0;
int m_status = 0;
std::queue<std::weak_ptr<DnsRequest> > m_queue;
std::deque<std::weak_ptr<IDnsListener>> m_queue;
std::shared_ptr<uv_getaddrinfo_t> m_req;
uint64_t m_ts = 0;
uintptr_t m_key;
@@ -66,7 +62,4 @@ private:
};
} /* namespace xmrig */
#endif /* XMRIG_DNSUVBACKEND_H */
} // namespace xmrig

View File

@@ -554,6 +554,7 @@ int64_t xmrig::Client::send(size_t size)
}
m_expire = Chrono::steadyMSecs() + kResponseTimeout;
startTimeout();
return m_sequence++;
}
@@ -661,8 +662,6 @@ void xmrig::Client::onClose()
void xmrig::Client::parse(char *line, size_t len)
{
startTimeout();
LOG_DEBUG("[%s] received (%d bytes): \"%.*s\"", url(), len, static_cast<int>(len), line);
if (len < 22 || line[0] != '{') {
@@ -857,8 +856,6 @@ void xmrig::Client::parseResponse(int64_t id, const rapidjson::Value &result, co
void xmrig::Client::ping()
{
send(snprintf(m_sendBuf.data(), m_sendBuf.size(), "{\"id\":%" PRId64 ",\"jsonrpc\":\"2.0\",\"method\":\"keepalived\",\"params\":{\"id\":\"%s\"}}\n", m_sequence, m_rpcId.data()));
m_keepAlive = 0;
}

View File

@@ -1,7 +1,7 @@
/* XMRig
* Copyright (c) 2018 Lee Clagett <https://github.com/vtnerd>
* Copyright (c) 2018-2023 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2023 XMRig <https://github.com/xmrig>, <support@xmrig.com>
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2025 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -45,7 +45,7 @@ namespace xmrig {
// https://wiki.openssl.org/index.php/Diffie-Hellman_parameters
#if OPENSSL_VERSION_NUMBER < 0x30000000L || defined(LIBRESSL_VERSION_NUMBER)
#if OPENSSL_VERSION_NUMBER < 0x30000000L || (defined(LIBRESSL_VERSION_NUMBER) && !defined(LIBRESSL_HAS_TLS1_3))
static DH *get_dh2048()
{
static unsigned char dhp_2048[] = {
@@ -152,7 +152,7 @@ bool xmrig::TlsContext::load(const TlsConfig &config)
SSL_CTX_set_options(m_ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);
SSL_CTX_set_options(m_ctx, SSL_OP_CIPHER_SERVER_PREFERENCE);
# if OPENSSL_VERSION_NUMBER >= 0x1010100fL && !defined(LIBRESSL_VERSION_NUMBER)
# if OPENSSL_VERSION_NUMBER >= 0x1010100fL || defined(LIBRESSL_HAS_TLS1_3)
SSL_CTX_set_max_early_data(m_ctx, 0);
# endif
@@ -180,7 +180,7 @@ bool xmrig::TlsContext::setCipherSuites(const char *ciphersuites)
return true;
}
# if OPENSSL_VERSION_NUMBER >= 0x1010100fL && !defined(LIBRESSL_VERSION_NUMBER)
# if OPENSSL_VERSION_NUMBER >= 0x1010100fL || defined(LIBRESSL_HAS_TLS1_3)
if (SSL_CTX_set_ciphersuites(m_ctx, ciphersuites) == 1) {
return true;
}
@@ -194,7 +194,7 @@ bool xmrig::TlsContext::setCipherSuites(const char *ciphersuites)
bool xmrig::TlsContext::setDH(const char *dhparam)
{
# if OPENSSL_VERSION_NUMBER < 0x30000000L || defined(LIBRESSL_VERSION_NUMBER)
# if OPENSSL_VERSION_NUMBER < 0x30000000L || (defined(LIBRESSL_VERSION_NUMBER) && !defined(LIBRESSL_HAS_TLS1_3))
DH *dh = nullptr;
if (dhparam != nullptr) {

View File

@@ -241,8 +241,13 @@ bool xmrig::BlockTemplate::parse(bool hashes)
ar(m_amount);
ar(m_outputType);
// output type must be txout_to_key (2) or txout_to_tagged_key (3)
if ((m_outputType != 2) && (m_outputType != 3)) {
const bool is_fcmp_pp = (m_coin == Coin::MONERO) && (m_version.first >= 17);
// output type must be txout_to_key (2) or txout_to_tagged_key (3) for versions < 17, and txout_to_carrot_v1 (0) for version FCMP++
if (is_fcmp_pp && (m_outputType == 0)) {
// all good
}
else if ((m_outputType != 2) && (m_outputType != 3)) {
return false;
}
@@ -250,6 +255,11 @@ bool xmrig::BlockTemplate::parse(bool hashes)
ar(m_ephPublicKey, kKeySize);
if (is_fcmp_pp) {
ar(m_carrotViewTag);
ar(m_janusAnchor);
}
if (m_coin == Coin::ZEPHYR) {
if (m_outputType != 2) {
return false;

View File

@@ -148,6 +148,8 @@ private:
Buffer m_hashes;
Buffer m_minerTxMerkleTreeBranch;
uint8_t m_rootHash[kHashSize]{};
uint8_t m_carrotViewTag[3]{};
uint8_t m_janusAnchor[16]{};
};

View File

@@ -93,7 +93,7 @@
"dhparam": null
},
"dns": {
"ipv6": false,
"ip_version": 0,
"ttl": 30
},
"user-agent": null,

View File

@@ -1,6 +1,6 @@
/* XMRig
* Copyright (c) 2018-2021 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2021 XMRig <https://github.com/xmrig>, <support@xmrig.com>
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2025 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -16,9 +16,7 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef XMRIG_CONFIG_PLATFORM_H
#define XMRIG_CONFIG_PLATFORM_H
#pragma once
#ifdef _MSC_VER
# include "getopt/getopt.h"
@@ -28,13 +26,12 @@
#include "base/kernel/interfaces/IConfig.h"
#include "version.h"
namespace xmrig {
static const char short_options[] = "a:c:kBp:Px:r:R:s:t:T:o:u:O:v:l:Sx:";
static const char short_options[] = "a:c:kBp:Px:r:R:s:t:T:o:u:O:v:l:Sx:46";
static const option options[] = {
@@ -99,7 +96,8 @@ static const option options[] = {
{ "no-title", 0, nullptr, IConfig::NoTitleKey },
{ "pause-on-battery", 0, nullptr, IConfig::PauseOnBatteryKey },
{ "pause-on-active", 1, nullptr, IConfig::PauseOnActiveKey },
{ "dns-ipv6", 0, nullptr, IConfig::DnsIPv6Key },
{ "ipv4", 0, nullptr, IConfig::DnsIPv4Key },
{ "ipv6", 0, nullptr, IConfig::DnsIPv6Key },
{ "dns-ttl", 1, nullptr, IConfig::DnsTtlKey },
{ "spend-secret-key", 1, nullptr, IConfig::SpendSecretKey },
# ifdef XMRIG_FEATURE_BENCHMARK
@@ -169,6 +167,3 @@ static const option options[] = {
} // namespace xmrig
#endif /* XMRIG_CONFIG_PLATFORM_H */

View File

@@ -4,8 +4,8 @@
* Copyright (c) 2014 Lucas Jones <https://github.com/lucasjones>
* Copyright (c) 2014-2016 Wolf9466 <https://github.com/OhGodAPet>
* Copyright (c) 2016 Jay D Dee <jayddee246@gmail.com>
* Copyright (c) 2018-2024 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2024 XMRig <https://github.com/xmrig>, <support@xmrig.com>
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2025 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -21,13 +21,10 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef XMRIG_USAGE_H
#define XMRIG_USAGE_H
#pragma once
#include "version.h"
#include <string>
@@ -59,7 +56,8 @@ static inline const std::string &usage()
u += " --tls-fingerprint=HEX pool TLS certificate fingerprint for strict certificate pinning\n";
# endif
u += " --dns-ipv6 prefer IPv6 records from DNS responses\n";
u += " -4, --ipv4 resolve names to IPv4 addresses\n";
u += " -6, --ipv6 resolve names to IPv6 addresses\n";
u += " --dns-ttl=N N seconds (default: 30) TTL for internal DNS cache\n";
# ifdef XMRIG_FEATURE_HTTP
@@ -205,6 +203,4 @@ static inline const std::string &usage()
}
} /* namespace xmrig */
#endif /* XMRIG_USAGE_H */
} // namespace xmrig

View File

@@ -23,7 +23,7 @@
#include "crypto/common/VirtualMemory.h"
#if defined(XMRIG_ARM)
#if defined(XMRIG_ARM) || defined(XMRIG_RISCV)
# include "crypto/cn/CryptoNight_arm.h"
#else
# include "crypto/cn/CryptoNight_x86.h"

View File

@@ -30,7 +30,7 @@
#include <stddef.h>
#include <stdint.h>
#if defined _MSC_VER || defined XMRIG_ARM
#if defined _MSC_VER || defined XMRIG_ARM || defined XMRIG_RISCV
# define ABI_ATTRIBUTE
#else
# define ABI_ATTRIBUTE __attribute__((ms_abi))

View File

@@ -27,6 +27,9 @@
#ifndef XMRIG_CRYPTONIGHT_ARM_H
#define XMRIG_CRYPTONIGHT_ARM_H
#ifdef XMRIG_RISCV
# include "crypto/cn/sse2rvv.h"
#endif
#include "base/crypto/keccak.h"
#include "crypto/cn/CnAlgo.h"

View File

@@ -30,7 +30,7 @@
#include <math.h>
// VARIANT ALTERATIONS
#ifndef XMRIG_ARM
#if !defined(XMRIG_ARM) && !defined(XMRIG_RISCV)
# define VARIANT1_INIT(part) \
uint64_t tweak1_2_##part = 0; \
if (BASE == Algorithm::CN_1) { \
@@ -60,7 +60,7 @@
}
#ifndef XMRIG_ARM
#if !defined(XMRIG_ARM) && !defined(XMRIG_RISCV)
# define VARIANT2_INIT(part) \
__m128i division_result_xmm_##part = _mm_cvtsi64_si128(static_cast<int64_t>(h##part[12])); \
__m128i sqrt_result_xmm_##part = _mm_cvtsi64_si128(static_cast<int64_t>(h##part[13]));

View File

@@ -29,6 +29,8 @@
#if defined(XMRIG_ARM)
# include "crypto/cn/sse2neon.h"
#elif defined(XMRIG_RISCV)
# include "crypto/cn/sse2rvv.h"
#elif defined(__GNUC__)
# include <x86intrin.h>
#else

src/crypto/cn/sse2rvv.h (new file, 748 lines)
View File

@@ -0,0 +1,748 @@
/* XMRig
* Copyright (c) 2025 Slayingripper <https://github.com/Slayingripper>
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2025 XMRig <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
/*
* SSE to RISC-V Vector (RVV) optimized compatibility header
* Provides both scalar fallback and vectorized implementations using RVV intrinsics
*
* Based on sse2neon.h concepts, adapted for RISC-V architecture with RVV extensions
* Original sse2neon.h: https://github.com/DLTcollab/sse2neon
*/
#ifndef XMRIG_SSE2RVV_OPTIMIZED_H
#define XMRIG_SSE2RVV_OPTIMIZED_H
#ifdef __cplusplus
extern "C" {
#endif
#include <stdint.h>
#include <string.h>
/* Check if RVV is available */
#if defined(__riscv_vector)
#include <riscv_vector.h>
#define USE_RVV_INTRINSICS 1
#else
#define USE_RVV_INTRINSICS 0
#endif
/* 128-bit vector type */
typedef union {
uint8_t u8[16];
uint16_t u16[8];
uint32_t u32[4];
uint64_t u64[2];
int8_t i8[16];
int16_t i16[8];
int32_t i32[4];
int64_t i64[2];
} __m128i_union;
typedef __m128i_union __m128i;
/* Set operations */
static inline __m128i _mm_set_epi32(int e3, int e2, int e1, int e0)
{
__m128i result;
result.i32[0] = e0;
result.i32[1] = e1;
result.i32[2] = e2;
result.i32[3] = e3;
return result;
}
static inline __m128i _mm_set_epi64x(int64_t e1, int64_t e0)
{
__m128i result;
result.i64[0] = e0;
result.i64[1] = e1;
return result;
}
static inline __m128i _mm_setzero_si128(void)
{
__m128i result;
memset(&result, 0, sizeof(result));
return result;
}
/* Extract/insert operations */
static inline int _mm_cvtsi128_si32(__m128i a)
{
return a.i32[0];
}
static inline int64_t _mm_cvtsi128_si64(__m128i a)
{
return a.i64[0];
}
static inline __m128i _mm_cvtsi32_si128(int a)
{
__m128i result = _mm_setzero_si128();
result.i32[0] = a;
return result;
}
static inline __m128i _mm_cvtsi64_si128(int64_t a)
{
__m128i result = _mm_setzero_si128();
result.i64[0] = a;
return result;
}
/* Shuffle operations */
static inline __m128i _mm_shuffle_epi32(__m128i a, int imm8)
{
__m128i result;
result.u32[0] = a.u32[(imm8 >> 0) & 0x3];
result.u32[1] = a.u32[(imm8 >> 2) & 0x3];
result.u32[2] = a.u32[(imm8 >> 4) & 0x3];
result.u32[3] = a.u32[(imm8 >> 6) & 0x3];
return result;
}
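/* Example: _mm_shuffle_epi32(a, 0x1B) reverses the four lanes, since imm8
 * packs four 2-bit source indices: 0x1B selects a.u32[3..0] for lanes 0..3. */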
/* Logical operations - optimized with RVV when available */
static inline __m128i _mm_xor_si128(__m128i a, __m128i b)
{
#if USE_RVV_INTRINSICS
__m128i result;
size_t vl = __riscv_vsetvl_e64m1(2);
vuint64m1_t va = __riscv_vle64_v_u64m1(a.u64, vl);
vuint64m1_t vb = __riscv_vle64_v_u64m1(b.u64, vl);
vuint64m1_t vr = __riscv_vxor_vv_u64m1(va, vb, vl);
__riscv_vse64_v_u64m1(result.u64, vr, vl);
return result;
#else
__m128i result;
result.u64[0] = a.u64[0] ^ b.u64[0];
result.u64[1] = a.u64[1] ^ b.u64[1];
return result;
#endif
}
static inline __m128i _mm_or_si128(__m128i a, __m128i b)
{
#if USE_RVV_INTRINSICS
__m128i result;
size_t vl = __riscv_vsetvl_e64m1(2);
vuint64m1_t va = __riscv_vle64_v_u64m1(a.u64, vl);
vuint64m1_t vb = __riscv_vle64_v_u64m1(b.u64, vl);
vuint64m1_t vr = __riscv_vor_vv_u64m1(va, vb, vl);
__riscv_vse64_v_u64m1(result.u64, vr, vl);
return result;
#else
__m128i result;
result.u64[0] = a.u64[0] | b.u64[0];
result.u64[1] = a.u64[1] | b.u64[1];
return result;
#endif
}
static inline __m128i _mm_and_si128(__m128i a, __m128i b)
{
#if USE_RVV_INTRINSICS
__m128i result;
size_t vl = __riscv_vsetvl_e64m1(2);
vuint64m1_t va = __riscv_vle64_v_u64m1(a.u64, vl);
vuint64m1_t vb = __riscv_vle64_v_u64m1(b.u64, vl);
vuint64m1_t vr = __riscv_vand_vv_u64m1(va, vb, vl);
__riscv_vse64_v_u64m1(result.u64, vr, vl);
return result;
#else
__m128i result;
result.u64[0] = a.u64[0] & b.u64[0];
result.u64[1] = a.u64[1] & b.u64[1];
return result;
#endif
}
static inline __m128i _mm_andnot_si128(__m128i a, __m128i b)
{
#if USE_RVV_INTRINSICS
__m128i result;
size_t vl = __riscv_vsetvl_e64m1(2);
vuint64m1_t va = __riscv_vle64_v_u64m1(a.u64, vl);
vuint64m1_t vb = __riscv_vle64_v_u64m1(b.u64, vl);
vuint64m1_t vnot_a = __riscv_vnot_v_u64m1(va, vl);
vuint64m1_t vr = __riscv_vand_vv_u64m1(vnot_a, vb, vl);
__riscv_vse64_v_u64m1(result.u64, vr, vl);
return result;
#else
__m128i result;
result.u64[0] = (~a.u64[0]) & b.u64[0];
result.u64[1] = (~a.u64[1]) & b.u64[1];
return result;
#endif
}
/* Shift operations */
static inline __m128i _mm_slli_si128(__m128i a, int imm8)
{
#if USE_RVV_INTRINSICS
__m128i result = _mm_setzero_si128();
int count = imm8 & 0xFF;
if (count > 15) return result;
size_t vl = __riscv_vsetvl_e8m1(16);
vuint8m1_t va = __riscv_vle8_v_u8m1(a.u8, vl);
vuint8m1_t vr = __riscv_vslideup_vx_u8m1(__riscv_vmv_v_x_u8m1(0, vl), va, count, vl);
__riscv_vse8_v_u8m1(result.u8, vr, vl);
return result;
#else
__m128i result = _mm_setzero_si128();
int count = imm8 & 0xFF;
if (count > 15) return result;
for (int i = 0; i < 16 - count; i++) {
result.u8[i + count] = a.u8[i];
}
return result;
#endif
}
static inline __m128i _mm_srli_si128(__m128i a, int imm8)
{
#if USE_RVV_INTRINSICS
__m128i result = _mm_setzero_si128();
int count = imm8 & 0xFF;
if (count > 15) return result;
size_t vl = __riscv_vsetvl_e8m1(16);
vuint8m1_t va = __riscv_vle8_v_u8m1(a.u8, vl);
vuint8m1_t vr = __riscv_vslidedown_vx_u8m1(va, count, vl);
__riscv_vse8_v_u8m1(result.u8, vr, vl);
/* When VLEN > 128, vslidedown can pull tail-agnostic elements into the top
   lanes; zero the upper `count` bytes explicitly as SSE semantics require. */
if (count > 0) {
    memset(result.u8 + (16 - count), 0, (size_t)count);
}
return result;
#else
__m128i result = _mm_setzero_si128();
int count = imm8 & 0xFF;
if (count > 15) return result;
for (int i = count; i < 16; i++) {
result.u8[i - count] = a.u8[i];
}
return result;
#endif
}
static inline __m128i _mm_slli_epi64(__m128i a, int imm8)
{
#if USE_RVV_INTRINSICS
__m128i result;
if (imm8 > 63) {
result.u64[0] = 0;
result.u64[1] = 0;
} else {
size_t vl = __riscv_vsetvl_e64m1(2);
vuint64m1_t va = __riscv_vle64_v_u64m1(a.u64, vl);
vuint64m1_t vr = __riscv_vsll_vx_u64m1(va, imm8, vl);
__riscv_vse64_v_u64m1(result.u64, vr, vl);
}
return result;
#else
__m128i result;
if (imm8 > 63) {
result.u64[0] = 0;
result.u64[1] = 0;
} else {
result.u64[0] = a.u64[0] << imm8;
result.u64[1] = a.u64[1] << imm8;
}
return result;
#endif
}
static inline __m128i _mm_srli_epi64(__m128i a, int imm8)
{
#if USE_RVV_INTRINSICS
__m128i result;
if (imm8 > 63) {
result.u64[0] = 0;
result.u64[1] = 0;
} else {
size_t vl = __riscv_vsetvl_e64m1(2);
vuint64m1_t va = __riscv_vle64_v_u64m1(a.u64, vl);
vuint64m1_t vr = __riscv_vsrl_vx_u64m1(va, imm8, vl);
__riscv_vse64_v_u64m1(result.u64, vr, vl);
}
return result;
#else
__m128i result;
if (imm8 > 63) {
result.u64[0] = 0;
result.u64[1] = 0;
} else {
result.u64[0] = a.u64[0] >> imm8;
result.u64[1] = a.u64[1] >> imm8;
}
return result;
#endif
}
/* Load/store operations - optimized with RVV */
static inline __m128i _mm_load_si128(const __m128i* p)
{
#if USE_RVV_INTRINSICS
__m128i result;
size_t vl = __riscv_vsetvl_e64m1(2);
vuint64m1_t v = __riscv_vle64_v_u64m1((const uint64_t*)p, vl);
__riscv_vse64_v_u64m1(result.u64, v, vl);
return result;
#else
__m128i result;
memcpy(&result, p, sizeof(__m128i));
return result;
#endif
}
static inline __m128i _mm_loadu_si128(const __m128i* p)
{
__m128i result;
memcpy(&result, p, sizeof(__m128i));
return result;
}
static inline void _mm_store_si128(__m128i* p, __m128i a)
{
#if USE_RVV_INTRINSICS
size_t vl = __riscv_vsetvl_e64m1(2);
vuint64m1_t v = __riscv_vle64_v_u64m1(a.u64, vl);
__riscv_vse64_v_u64m1((uint64_t*)p, v, vl);
#else
memcpy(p, &a, sizeof(__m128i));
#endif
}
static inline void _mm_storeu_si128(__m128i* p, __m128i a)
{
memcpy(p, &a, sizeof(__m128i));
}
/* Arithmetic operations - optimized with RVV */
static inline __m128i _mm_add_epi64(__m128i a, __m128i b)
{
#if USE_RVV_INTRINSICS
__m128i result;
size_t vl = __riscv_vsetvl_e64m1(2);
vuint64m1_t va = __riscv_vle64_v_u64m1(a.u64, vl);
vuint64m1_t vb = __riscv_vle64_v_u64m1(b.u64, vl);
vuint64m1_t vr = __riscv_vadd_vv_u64m1(va, vb, vl);
__riscv_vse64_v_u64m1(result.u64, vr, vl);
return result;
#else
__m128i result;
result.u64[0] = a.u64[0] + b.u64[0];
result.u64[1] = a.u64[1] + b.u64[1];
return result;
#endif
}
static inline __m128i _mm_add_epi32(__m128i a, __m128i b)
{
#if USE_RVV_INTRINSICS
__m128i result;
size_t vl = __riscv_vsetvl_e32m1(4);
vuint32m1_t va = __riscv_vle32_v_u32m1(a.u32, vl);
vuint32m1_t vb = __riscv_vle32_v_u32m1(b.u32, vl);
vuint32m1_t vr = __riscv_vadd_vv_u32m1(va, vb, vl);
__riscv_vse32_v_u32m1(result.u32, vr, vl);
return result;
#else
__m128i result;
for (int i = 0; i < 4; i++) {
result.i32[i] = a.i32[i] + b.i32[i];
}
return result;
#endif
}
static inline __m128i _mm_sub_epi64(__m128i a, __m128i b)
{
#if USE_RVV_INTRINSICS
__m128i result;
size_t vl = __riscv_vsetvl_e64m1(2);
vuint64m1_t va = __riscv_vle64_v_u64m1(a.u64, vl);
vuint64m1_t vb = __riscv_vle64_v_u64m1(b.u64, vl);
vuint64m1_t vr = __riscv_vsub_vv_u64m1(va, vb, vl);
__riscv_vse64_v_u64m1(result.u64, vr, vl);
return result;
#else
__m128i result;
result.u64[0] = a.u64[0] - b.u64[0];
result.u64[1] = a.u64[1] - b.u64[1];
return result;
#endif
}
static inline __m128i _mm_mul_epu32(__m128i a, __m128i b)
{
#if USE_RVV_INTRINSICS
__m128i result;
size_t vl = __riscv_vsetvl_e64m1(2);
/* SSE semantics take the even 32-bit lanes (u32[0] and u32[2]); a unit-stride
   load would wrongly pick u32[0] and u32[1], so use an 8-byte strided load. */
vuint64m1_t va_lo = __riscv_vzext_vf2_u64m1(__riscv_vlse32_v_u32mf2(&a.u32[0], 8, vl), vl);
vuint64m1_t vb_lo = __riscv_vzext_vf2_u64m1(__riscv_vlse32_v_u32mf2(&b.u32[0], 8, vl), vl);
vuint64m1_t vr = __riscv_vmul_vv_u64m1(va_lo, vb_lo, vl);
__riscv_vse64_v_u64m1(result.u64, vr, vl);
return result;
#else
__m128i result;
result.u64[0] = (uint64_t)a.u32[0] * (uint64_t)b.u32[0];
result.u64[1] = (uint64_t)a.u32[2] * (uint64_t)b.u32[2];
return result;
#endif
}
/* Unpack operations */
static inline __m128i _mm_unpacklo_epi64(__m128i a, __m128i b)
{
__m128i result;
result.u64[0] = a.u64[0];
result.u64[1] = b.u64[0];
return result;
}
static inline __m128i _mm_unpackhi_epi64(__m128i a, __m128i b)
{
__m128i result;
result.u64[0] = a.u64[1];
result.u64[1] = b.u64[1];
return result;
}
/* Pause instruction for spin-wait loops */
static inline void _mm_pause(void)
{
/* RISC-V pause hint if available (requires Zihintpause extension) */
#if defined(__riscv_zihintpause)
__asm__ __volatile__("pause");
#else
__asm__ __volatile__("nop");
#endif
}
/* Memory fence - optimized for RISC-V */
static inline void _mm_mfence(void)
{
__asm__ __volatile__("fence rw,rw" ::: "memory");
}
static inline void _mm_lfence(void)
{
__asm__ __volatile__("fence r,r" ::: "memory");
}
static inline void _mm_sfence(void)
{
__asm__ __volatile__("fence w,w" ::: "memory");
}
/* Comparison operations */
static inline __m128i _mm_cmpeq_epi32(__m128i a, __m128i b)
{
__m128i result;
for (int i = 0; i < 4; i++) {
result.u32[i] = (a.u32[i] == b.u32[i]) ? 0xFFFFFFFF : 0;
}
return result;
}
static inline __m128i _mm_cmpeq_epi64(__m128i a, __m128i b)
{
__m128i result;
for (int i = 0; i < 2; i++) {
result.u64[i] = (a.u64[i] == b.u64[i]) ? 0xFFFFFFFFFFFFFFFFULL : 0;
}
return result;
}
/* Additional shift operations */
static inline __m128i _mm_slli_epi32(__m128i a, int imm8)
{
#if USE_RVV_INTRINSICS
__m128i result;
if (imm8 > 31) {
memset(&result, 0, sizeof(result));
} else {
size_t vl = __riscv_vsetvl_e32m1(4);
vuint32m1_t va = __riscv_vle32_v_u32m1(a.u32, vl);
vuint32m1_t vr = __riscv_vsll_vx_u32m1(va, imm8, vl);
__riscv_vse32_v_u32m1(result.u32, vr, vl);
}
return result;
#else
__m128i result;
if (imm8 > 31) {
for (int i = 0; i < 4; i++) result.u32[i] = 0;
} else {
for (int i = 0; i < 4; i++) {
result.u32[i] = a.u32[i] << imm8;
}
}
return result;
#endif
}
static inline __m128i _mm_srli_epi32(__m128i a, int imm8)
{
#if USE_RVV_INTRINSICS
__m128i result;
if (imm8 > 31) {
memset(&result, 0, sizeof(result));
} else {
size_t vl = __riscv_vsetvl_e32m1(4);
vuint32m1_t va = __riscv_vle32_v_u32m1(a.u32, vl);
vuint32m1_t vr = __riscv_vsrl_vx_u32m1(va, imm8, vl);
__riscv_vse32_v_u32m1(result.u32, vr, vl);
}
return result;
#else
__m128i result;
if (imm8 > 31) {
for (int i = 0; i < 4; i++) result.u32[i] = 0;
} else {
for (int i = 0; i < 4; i++) {
result.u32[i] = a.u32[i] >> imm8;
}
}
return result;
#endif
}
/* 64-bit integer operations */
static inline __m128i _mm_set1_epi64x(int64_t a)
{
__m128i result;
result.i64[0] = a;
result.i64[1] = a;
return result;
}
/* Float type for compatibility */
typedef __m128i __m128;
/* Float operations - simplified scalar implementations */
static inline __m128 _mm_set1_ps(float a)
{
__m128 result;
uint32_t val;
memcpy(&val, &a, sizeof(float));
for (int i = 0; i < 4; i++) {
result.u32[i] = val;
}
return result;
}
static inline __m128 _mm_setzero_ps(void)
{
__m128 result;
memset(&result, 0, sizeof(result));
return result;
}
static inline __m128 _mm_add_ps(__m128 a, __m128 b)
{
__m128 result;
float fa[4], fb[4], fr[4];
memcpy(fa, &a, sizeof(__m128));
memcpy(fb, &b, sizeof(__m128));
for (int i = 0; i < 4; i++) {
fr[i] = fa[i] + fb[i];
}
memcpy(&result, fr, sizeof(__m128));
return result;
}
static inline __m128 _mm_mul_ps(__m128 a, __m128 b)
{
__m128 result;
float fa[4], fb[4], fr[4];
memcpy(fa, &a, sizeof(__m128));
memcpy(fb, &b, sizeof(__m128));
for (int i = 0; i < 4; i++) {
fr[i] = fa[i] * fb[i];
}
memcpy(&result, fr, sizeof(__m128));
return result;
}
static inline __m128 _mm_and_ps(__m128 a, __m128 b)
{
__m128 result;
result.u64[0] = a.u64[0] & b.u64[0];
result.u64[1] = a.u64[1] & b.u64[1];
return result;
}
static inline __m128 _mm_or_ps(__m128 a, __m128 b)
{
__m128 result;
result.u64[0] = a.u64[0] | b.u64[0];
result.u64[1] = a.u64[1] | b.u64[1];
return result;
}
static inline __m128 _mm_cvtepi32_ps(__m128i a)
{
__m128 result;
float fr[4];
for (int i = 0; i < 4; i++) {
fr[i] = (float)a.i32[i];
}
memcpy(&result, fr, sizeof(__m128));
return result;
}
static inline __m128i _mm_cvttps_epi32(__m128 a)
{
__m128i result;
float fa[4];
memcpy(fa, &a, sizeof(__m128));
for (int i = 0; i < 4; i++) {
result.i32[i] = (int32_t)fa[i];
}
return result;
}
/* Casting operations */
static inline __m128 _mm_castsi128_ps(__m128i a)
{
__m128 result;
memcpy(&result, &a, sizeof(__m128));
return result;
}
static inline __m128i _mm_castps_si128(__m128 a)
{
__m128i result;
memcpy(&result, &a, sizeof(__m128));
return result;
}
/* Additional set operations */
static inline __m128i _mm_set1_epi32(int a)
{
__m128i result;
for (int i = 0; i < 4; i++) {
result.i32[i] = a;
}
return result;
}
/* AES instructions - placeholders for soft_aes compatibility. These do NOT
   implement real AES rounds; they only satisfy compilation of hardware-AES
   code paths, which must not be selected on RISC-V (soft_aes is used). */
static inline __m128i _mm_aesenc_si128(__m128i a, __m128i roundkey)
{
return _mm_xor_si128(a, roundkey);
}
static inline __m128i _mm_aeskeygenassist_si128(__m128i a, const int rcon)
{
return a;
}
/* Rotate right operation for soft_aes.h */
static inline uint32_t _rotr(uint32_t value, unsigned int count)
{
const unsigned int mask = 31;
count &= mask;
return (value >> count) | (value << ((-count) & mask));
}
/* ARM NEON compatibility types and intrinsics for RISC-V */
typedef __m128i_union uint64x2_t;
typedef __m128i_union uint8x16_t;
typedef __m128i_union int64x2_t;
typedef __m128i_union int32x4_t;
static inline uint64x2_t vld1q_u64(const uint64_t *ptr)
{
uint64x2_t result;
result.u64[0] = ptr[0];
result.u64[1] = ptr[1];
return result;
}
static inline int64x2_t vld1q_s64(const int64_t *ptr)
{
int64x2_t result;
result.i64[0] = ptr[0];
result.i64[1] = ptr[1];
return result;
}
static inline void vst1q_u64(uint64_t *ptr, uint64x2_t val)
{
ptr[0] = val.u64[0];
ptr[1] = val.u64[1];
}
static inline uint64x2_t veorq_u64(uint64x2_t a, uint64x2_t b)
{
return _mm_xor_si128(a, b);
}
static inline uint64x2_t vaddq_u64(uint64x2_t a, uint64x2_t b)
{
return _mm_add_epi64(a, b);
}
static inline uint64x2_t vreinterpretq_u64_u8(uint8x16_t a)
{
uint64x2_t result;
memcpy(&result, &a, sizeof(uint64x2_t));
return result;
}
static inline uint64_t vgetq_lane_u64(uint64x2_t v, int lane)
{
return v.u64[lane];
}
static inline int64_t vgetq_lane_s64(int64x2_t v, int lane)
{
return v.i64[lane];
}
static inline int32_t vgetq_lane_s32(int32x4_t v, int lane)
{
return v.i32[lane];
}
typedef struct { uint64_t val[1]; } uint64x1_t;
static inline uint64x1_t vcreate_u64(uint64_t a)
{
uint64x1_t result;
result.val[0] = a;
return result;
}
static inline uint64x2_t vcombine_u64(uint64x1_t low, uint64x1_t high)
{
uint64x2_t result;
result.u64[0] = low.val[0];
result.u64[1] = high.val[0];
return result;
}
#ifdef __cplusplus
}
#endif
#endif /* XMRIG_SSE2RVV_OPTIMIZED_H */

View File

@@ -0,0 +1,748 @@
/* XMRig
* Copyright (c) 2025 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
/*
* SSE to RISC-V Vector (RVV) optimized compatibility header
* Provides both scalar fallback and vectorized implementations using RVV intrinsics
*/
#ifndef XMRIG_SSE2RVV_OPTIMIZED_H
#define XMRIG_SSE2RVV_OPTIMIZED_H
#ifdef __cplusplus
extern "C" {
#endif
#include <stdint.h>
#include <string.h>
/* Check if RVV is available */
#if defined(__riscv_vector)
#include <riscv_vector.h>
#define USE_RVV_INTRINSICS 1
#else
#define USE_RVV_INTRINSICS 0
#endif
/* 128-bit vector type */
typedef union {
uint8_t u8[16];
uint16_t u16[8];
uint32_t u32[4];
uint64_t u64[2];
int8_t i8[16];
int16_t i16[8];
int32_t i32[4];
int64_t i64[2];
#if USE_RVV_INTRINSICS
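    /* Note: sizeless RVV types are normally invalid as union members; this
       layout only compiles when a fixed vector length is enforced (e.g.
       -mrvv-vector-bits=zvl). These members are not referenced below. */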
vuint64m1_t rvv_u64;
vuint32m1_t rvv_u32;
vuint8m1_t rvv_u8;
#endif
} __m128i_union;
typedef __m128i_union __m128i;
/* Set operations */
static inline __m128i _mm_set_epi32(int e3, int e2, int e1, int e0)
{
__m128i result;
result.i32[0] = e0;
result.i32[1] = e1;
result.i32[2] = e2;
result.i32[3] = e3;
return result;
}
static inline __m128i _mm_set_epi64x(int64_t e1, int64_t e0)
{
__m128i result;
result.i64[0] = e0;
result.i64[1] = e1;
return result;
}
static inline __m128i _mm_setzero_si128(void)
{
__m128i result;
memset(&result, 0, sizeof(result));
return result;
}
/* Extract/insert operations */
static inline int _mm_cvtsi128_si32(__m128i a)
{
return a.i32[0];
}
static inline int64_t _mm_cvtsi128_si64(__m128i a)
{
return a.i64[0];
}
static inline __m128i _mm_cvtsi32_si128(int a)
{
__m128i result = _mm_setzero_si128();
result.i32[0] = a;
return result;
}
static inline __m128i _mm_cvtsi64_si128(int64_t a)
{
__m128i result = _mm_setzero_si128();
result.i64[0] = a;
return result;
}
/* Shuffle operations */
static inline __m128i _mm_shuffle_epi32(__m128i a, int imm8)
{
__m128i result;
result.u32[0] = a.u32[(imm8 >> 0) & 0x3];
result.u32[1] = a.u32[(imm8 >> 2) & 0x3];
result.u32[2] = a.u32[(imm8 >> 4) & 0x3];
result.u32[3] = a.u32[(imm8 >> 6) & 0x3];
return result;
}
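/* Example: _mm_shuffle_epi32(a, 0x1B) reverses the four lanes, since imm8
 * packs four 2-bit source indices: 0x1B selects a.u32[3..0] for lanes 0..3. */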
/* Logical operations - optimized with RVV when available */
static inline __m128i _mm_xor_si128(__m128i a, __m128i b)
{
#if USE_RVV_INTRINSICS
__m128i result;
size_t vl = __riscv_vsetvl_e64m1(2);
vuint64m1_t va = __riscv_vle64_v_u64m1(a.u64, vl);
vuint64m1_t vb = __riscv_vle64_v_u64m1(b.u64, vl);
vuint64m1_t vr = __riscv_vxor_vv_u64m1(va, vb, vl);
__riscv_vse64_v_u64m1(result.u64, vr, vl);
return result;
#else
__m128i result;
result.u64[0] = a.u64[0] ^ b.u64[0];
result.u64[1] = a.u64[1] ^ b.u64[1];
return result;
#endif
}
static inline __m128i _mm_or_si128(__m128i a, __m128i b)
{
#if USE_RVV_INTRINSICS
__m128i result;
size_t vl = __riscv_vsetvl_e64m1(2);
vuint64m1_t va = __riscv_vle64_v_u64m1(a.u64, vl);
vuint64m1_t vb = __riscv_vle64_v_u64m1(b.u64, vl);
vuint64m1_t vr = __riscv_vor_vv_u64m1(va, vb, vl);
__riscv_vse64_v_u64m1(result.u64, vr, vl);
return result;
#else
__m128i result;
result.u64[0] = a.u64[0] | b.u64[0];
result.u64[1] = a.u64[1] | b.u64[1];
return result;
#endif
}
static inline __m128i _mm_and_si128(__m128i a, __m128i b)
{
#if USE_RVV_INTRINSICS
__m128i result;
size_t vl = __riscv_vsetvl_e64m1(2);
vuint64m1_t va = __riscv_vle64_v_u64m1(a.u64, vl);
vuint64m1_t vb = __riscv_vle64_v_u64m1(b.u64, vl);
vuint64m1_t vr = __riscv_vand_vv_u64m1(va, vb, vl);
__riscv_vse64_v_u64m1(result.u64, vr, vl);
return result;
#else
__m128i result;
result.u64[0] = a.u64[0] & b.u64[0];
result.u64[1] = a.u64[1] & b.u64[1];
return result;
#endif
}
static inline __m128i _mm_andnot_si128(__m128i a, __m128i b)
{
#if USE_RVV_INTRINSICS
__m128i result;
size_t vl = __riscv_vsetvl_e64m1(2);
vuint64m1_t va = __riscv_vle64_v_u64m1(a.u64, vl);
vuint64m1_t vb = __riscv_vle64_v_u64m1(b.u64, vl);
vuint64m1_t vnot_a = __riscv_vnot_v_u64m1(va, vl);
vuint64m1_t vr = __riscv_vand_vv_u64m1(vnot_a, vb, vl);
__riscv_vse64_v_u64m1(result.u64, vr, vl);
return result;
#else
__m128i result;
result.u64[0] = (~a.u64[0]) & b.u64[0];
result.u64[1] = (~a.u64[1]) & b.u64[1];
return result;
#endif
}
/* Shift operations */
static inline __m128i _mm_slli_si128(__m128i a, int imm8)
{
#if USE_RVV_INTRINSICS
__m128i result = _mm_setzero_si128();
int count = imm8 & 0xFF;
if (count > 15) return result;
size_t vl = __riscv_vsetvl_e8m1(16);
vuint8m1_t va = __riscv_vle8_v_u8m1(a.u8, vl);
vuint8m1_t vr = __riscv_vslideup_vx_u8m1(__riscv_vmv_v_x_u8m1(0, vl), va, count, vl);
__riscv_vse8_v_u8m1(result.u8, vr, vl);
return result;
#else
__m128i result = _mm_setzero_si128();
int count = imm8 & 0xFF;
if (count > 15) return result;
for (int i = 0; i < 16 - count; i++) {
result.u8[i + count] = a.u8[i];
}
return result;
#endif
}
static inline __m128i _mm_srli_si128(__m128i a, int imm8)
{
#if USE_RVV_INTRINSICS
__m128i result = _mm_setzero_si128();
int count = imm8 & 0xFF;
if (count > 15) return result;
size_t vl = __riscv_vsetvl_e8m1(16);
vuint8m1_t va = __riscv_vle8_v_u8m1(a.u8, vl);
vuint8m1_t vr = __riscv_vslidedown_vx_u8m1(va, count, vl);
__riscv_vse8_v_u8m1(result.u8, vr, vl);
/* When VLEN > 128, vslidedown can pull tail-agnostic elements into the top
   lanes; zero the upper `count` bytes explicitly as SSE semantics require. */
if (count > 0) {
    memset(result.u8 + (16 - count), 0, (size_t)count);
}
return result;
#else
__m128i result = _mm_setzero_si128();
int count = imm8 & 0xFF;
if (count > 15) return result;
for (int i = count; i < 16; i++) {
result.u8[i - count] = a.u8[i];
}
return result;
#endif
}
static inline __m128i _mm_slli_epi64(__m128i a, int imm8)
{
#if USE_RVV_INTRINSICS
__m128i result;
if (imm8 > 63) {
result.u64[0] = 0;
result.u64[1] = 0;
} else {
size_t vl = __riscv_vsetvl_e64m1(2);
vuint64m1_t va = __riscv_vle64_v_u64m1(a.u64, vl);
vuint64m1_t vr = __riscv_vsll_vx_u64m1(va, imm8, vl);
__riscv_vse64_v_u64m1(result.u64, vr, vl);
}
return result;
#else
__m128i result;
if (imm8 > 63) {
result.u64[0] = 0;
result.u64[1] = 0;
} else {
result.u64[0] = a.u64[0] << imm8;
result.u64[1] = a.u64[1] << imm8;
}
return result;
#endif
}
static inline __m128i _mm_srli_epi64(__m128i a, int imm8)
{
#if USE_RVV_INTRINSICS
__m128i result;
if (imm8 > 63) {
result.u64[0] = 0;
result.u64[1] = 0;
} else {
size_t vl = __riscv_vsetvl_e64m1(2);
vuint64m1_t va = __riscv_vle64_v_u64m1(a.u64, vl);
vuint64m1_t vr = __riscv_vsrl_vx_u64m1(va, imm8, vl);
__riscv_vse64_v_u64m1(result.u64, vr, vl);
}
return result;
#else
__m128i result;
if (imm8 > 63) {
result.u64[0] = 0;
result.u64[1] = 0;
} else {
result.u64[0] = a.u64[0] >> imm8;
result.u64[1] = a.u64[1] >> imm8;
}
return result;
#endif
}
/* Load/store operations - optimized with RVV */
static inline __m128i _mm_load_si128(const __m128i* p)
{
#if USE_RVV_INTRINSICS
__m128i result;
size_t vl = __riscv_vsetvl_e64m1(2);
vuint64m1_t v = __riscv_vle64_v_u64m1((const uint64_t*)p, vl);
__riscv_vse64_v_u64m1(result.u64, v, vl);
return result;
#else
__m128i result;
memcpy(&result, p, sizeof(__m128i));
return result;
#endif
}
static inline __m128i _mm_loadu_si128(const __m128i* p)
{
__m128i result;
memcpy(&result, p, sizeof(__m128i));
return result;
}
static inline void _mm_store_si128(__m128i* p, __m128i a)
{
#if USE_RVV_INTRINSICS
size_t vl = __riscv_vsetvl_e64m1(2);
vuint64m1_t v = __riscv_vle64_v_u64m1(a.u64, vl);
__riscv_vse64_v_u64m1((uint64_t*)p, v, vl);
#else
memcpy(p, &a, sizeof(__m128i));
#endif
}
static inline void _mm_storeu_si128(__m128i* p, __m128i a)
{
memcpy(p, &a, sizeof(__m128i));
}
/* Arithmetic operations - optimized with RVV */
static inline __m128i _mm_add_epi64(__m128i a, __m128i b)
{
#if USE_RVV_INTRINSICS
__m128i result;
size_t vl = __riscv_vsetvl_e64m1(2);
vuint64m1_t va = __riscv_vle64_v_u64m1(a.u64, vl);
vuint64m1_t vb = __riscv_vle64_v_u64m1(b.u64, vl);
vuint64m1_t vr = __riscv_vadd_vv_u64m1(va, vb, vl);
__riscv_vse64_v_u64m1(result.u64, vr, vl);
return result;
#else
__m128i result;
result.u64[0] = a.u64[0] + b.u64[0];
result.u64[1] = a.u64[1] + b.u64[1];
return result;
#endif
}
static inline __m128i _mm_add_epi32(__m128i a, __m128i b)
{
#if USE_RVV_INTRINSICS
__m128i result;
size_t vl = __riscv_vsetvl_e32m1(4);
vuint32m1_t va = __riscv_vle32_v_u32m1(a.u32, vl);
vuint32m1_t vb = __riscv_vle32_v_u32m1(b.u32, vl);
vuint32m1_t vr = __riscv_vadd_vv_u32m1(va, vb, vl);
__riscv_vse32_v_u32m1(result.u32, vr, vl);
return result;
#else
__m128i result;
for (int i = 0; i < 4; i++) {
result.i32[i] = a.i32[i] + b.i32[i];
}
return result;
#endif
}
static inline __m128i _mm_sub_epi64(__m128i a, __m128i b)
{
#if USE_RVV_INTRINSICS
__m128i result;
size_t vl = __riscv_vsetvl_e64m1(2);
vuint64m1_t va = __riscv_vle64_v_u64m1(a.u64, vl);
vuint64m1_t vb = __riscv_vle64_v_u64m1(b.u64, vl);
vuint64m1_t vr = __riscv_vsub_vv_u64m1(va, vb, vl);
__riscv_vse64_v_u64m1(result.u64, vr, vl);
return result;
#else
__m128i result;
result.u64[0] = a.u64[0] - b.u64[0];
result.u64[1] = a.u64[1] - b.u64[1];
return result;
#endif
}
static inline __m128i _mm_mul_epu32(__m128i a, __m128i b)
{
#if USE_RVV_INTRINSICS
__m128i result;
size_t vl = __riscv_vsetvl_e64m1(2);
/* SSE semantics take the even 32-bit lanes (u32[0] and u32[2]); a unit-stride
   load would wrongly pick u32[0] and u32[1], so use an 8-byte strided load. */
vuint64m1_t va_lo = __riscv_vzext_vf2_u64m1(__riscv_vlse32_v_u32mf2(&a.u32[0], 8, vl), vl);
vuint64m1_t vb_lo = __riscv_vzext_vf2_u64m1(__riscv_vlse32_v_u32mf2(&b.u32[0], 8, vl), vl);
vuint64m1_t vr = __riscv_vmul_vv_u64m1(va_lo, vb_lo, vl);
__riscv_vse64_v_u64m1(result.u64, vr, vl);
return result;
#else
__m128i result;
result.u64[0] = (uint64_t)a.u32[0] * (uint64_t)b.u32[0];
result.u64[1] = (uint64_t)a.u32[2] * (uint64_t)b.u32[2];
return result;
#endif
}
/* Unpack operations */
static inline __m128i _mm_unpacklo_epi64(__m128i a, __m128i b)
{
__m128i result;
result.u64[0] = a.u64[0];
result.u64[1] = b.u64[0];
return result;
}
static inline __m128i _mm_unpackhi_epi64(__m128i a, __m128i b)
{
__m128i result;
result.u64[0] = a.u64[1];
result.u64[1] = b.u64[1];
return result;
}
/* Pause instruction for spin-wait loops */
static inline void _mm_pause(void)
{
/* RISC-V pause hint if available (requires Zihintpause extension) */
#if defined(__riscv_zihintpause)
__asm__ __volatile__("pause");
#else
__asm__ __volatile__("nop");
#endif
}
/* Memory fence - optimized for RISC-V */
static inline void _mm_mfence(void)
{
__asm__ __volatile__("fence rw,rw" ::: "memory");
}
static inline void _mm_lfence(void)
{
__asm__ __volatile__("fence r,r" ::: "memory");
}
static inline void _mm_sfence(void)
{
__asm__ __volatile__("fence w,w" ::: "memory");
}
/* Comparison operations */
static inline __m128i _mm_cmpeq_epi32(__m128i a, __m128i b)
{
__m128i result;
for (int i = 0; i < 4; i++) {
result.u32[i] = (a.u32[i] == b.u32[i]) ? 0xFFFFFFFF : 0;
}
return result;
}
static inline __m128i _mm_cmpeq_epi64(__m128i a, __m128i b)
{
__m128i result;
for (int i = 0; i < 2; i++) {
result.u64[i] = (a.u64[i] == b.u64[i]) ? 0xFFFFFFFFFFFFFFFFULL : 0;
}
return result;
}
/* Additional shift operations */
static inline __m128i _mm_slli_epi32(__m128i a, int imm8)
{
#if USE_RVV_INTRINSICS
__m128i result;
if (imm8 > 31) {
memset(&result, 0, sizeof(result));
} else {
size_t vl = __riscv_vsetvl_e32m1(4);
vuint32m1_t va = __riscv_vle32_v_u32m1(a.u32, vl);
vuint32m1_t vr = __riscv_vsll_vx_u32m1(va, imm8, vl);
__riscv_vse32_v_u32m1(result.u32, vr, vl);
}
return result;
#else
__m128i result;
if (imm8 > 31) {
for (int i = 0; i < 4; i++) result.u32[i] = 0;
} else {
for (int i = 0; i < 4; i++) {
result.u32[i] = a.u32[i] << imm8;
}
}
return result;
#endif
}
static inline __m128i _mm_srli_epi32(__m128i a, int imm8)
{
#if USE_RVV_INTRINSICS
__m128i result;
if (imm8 > 31) {
memset(&result, 0, sizeof(result));
} else {
size_t vl = __riscv_vsetvl_e32m1(4);
vuint32m1_t va = __riscv_vle32_v_u32m1(a.u32, vl);
vuint32m1_t vr = __riscv_vsrl_vx_u32m1(va, imm8, vl);
__riscv_vse32_v_u32m1(result.u32, vr, vl);
}
return result;
#else
__m128i result;
if (imm8 > 31) {
for (int i = 0; i < 4; i++) result.u32[i] = 0;
} else {
for (int i = 0; i < 4; i++) {
result.u32[i] = a.u32[i] >> imm8;
}
}
return result;
#endif
}
/* 64-bit integer operations */
static inline __m128i _mm_set1_epi64x(int64_t a)
{
__m128i result;
result.i64[0] = a;
result.i64[1] = a;
return result;
}
/* Float type for compatibility */
typedef __m128i __m128;
/* Float operations - simplified scalar implementations */
static inline __m128 _mm_set1_ps(float a)
{
__m128 result;
uint32_t val;
memcpy(&val, &a, sizeof(float));
for (int i = 0; i < 4; i++) {
result.u32[i] = val;
}
return result;
}
static inline __m128 _mm_setzero_ps(void)
{
__m128 result;
memset(&result, 0, sizeof(result));
return result;
}
static inline __m128 _mm_add_ps(__m128 a, __m128 b)
{
__m128 result;
float fa[4], fb[4], fr[4];
memcpy(fa, &a, sizeof(__m128));
memcpy(fb, &b, sizeof(__m128));
for (int i = 0; i < 4; i++) {
fr[i] = fa[i] + fb[i];
}
memcpy(&result, fr, sizeof(__m128));
return result;
}
static inline __m128 _mm_mul_ps(__m128 a, __m128 b)
{
__m128 result;
float fa[4], fb[4], fr[4];
memcpy(fa, &a, sizeof(__m128));
memcpy(fb, &b, sizeof(__m128));
for (int i = 0; i < 4; i++) {
fr[i] = fa[i] * fb[i];
}
memcpy(&result, fr, sizeof(__m128));
return result;
}
static inline __m128 _mm_and_ps(__m128 a, __m128 b)
{
__m128 result;
result.u64[0] = a.u64[0] & b.u64[0];
result.u64[1] = a.u64[1] & b.u64[1];
return result;
}
static inline __m128 _mm_or_ps(__m128 a, __m128 b)
{
__m128 result;
result.u64[0] = a.u64[0] | b.u64[0];
result.u64[1] = a.u64[1] | b.u64[1];
return result;
}
static inline __m128 _mm_cvtepi32_ps(__m128i a)
{
__m128 result;
float fr[4];
for (int i = 0; i < 4; i++) {
fr[i] = (float)a.i32[i];
}
memcpy(&result, fr, sizeof(__m128));
return result;
}
static inline __m128i _mm_cvttps_epi32(__m128 a)
{
__m128i result;
float fa[4];
memcpy(fa, &a, sizeof(__m128));
for (int i = 0; i < 4; i++) {
result.i32[i] = (int32_t)fa[i];
}
return result;
}
/* Casting operations */
static inline __m128 _mm_castsi128_ps(__m128i a)
{
__m128 result;
memcpy(&result, &a, sizeof(__m128));
return result;
}
static inline __m128i _mm_castps_si128(__m128 a)
{
__m128i result;
memcpy(&result, &a, sizeof(__m128));
return result;
}
/* Additional set operations */
static inline __m128i _mm_set1_epi32(int a)
{
__m128i result;
for (int i = 0; i < 4; i++) {
result.i32[i] = a;
}
return result;
}
/* AES instructions - placeholders for soft_aes compatibility. These do NOT
   implement real AES rounds; they only satisfy compilation of hardware-AES
   code paths, which must not be selected on RISC-V (soft_aes is used). */
static inline __m128i _mm_aesenc_si128(__m128i a, __m128i roundkey)
{
return _mm_xor_si128(a, roundkey);
}
static inline __m128i _mm_aeskeygenassist_si128(__m128i a, const int rcon)
{
return a;
}
/* Rotate right operation for soft_aes.h */
static inline uint32_t _rotr(uint32_t value, unsigned int count)
{
const unsigned int mask = 31;
count &= mask;
return (value >> count) | (value << ((-count) & mask));
}
/* ARM NEON compatibility types and intrinsics for RISC-V */
typedef __m128i_union uint64x2_t;
typedef __m128i_union uint8x16_t;
typedef __m128i_union int64x2_t;
typedef __m128i_union int32x4_t;
static inline uint64x2_t vld1q_u64(const uint64_t *ptr)
{
uint64x2_t result;
result.u64[0] = ptr[0];
result.u64[1] = ptr[1];
return result;
}
static inline int64x2_t vld1q_s64(const int64_t *ptr)
{
int64x2_t result;
result.i64[0] = ptr[0];
result.i64[1] = ptr[1];
return result;
}
static inline void vst1q_u64(uint64_t *ptr, uint64x2_t val)
{
ptr[0] = val.u64[0];
ptr[1] = val.u64[1];
}
static inline uint64x2_t veorq_u64(uint64x2_t a, uint64x2_t b)
{
return _mm_xor_si128(a, b);
}
static inline uint64x2_t vaddq_u64(uint64x2_t a, uint64x2_t b)
{
return _mm_add_epi64(a, b);
}
static inline uint64x2_t vreinterpretq_u64_u8(uint8x16_t a)
{
uint64x2_t result;
memcpy(&result, &a, sizeof(uint64x2_t));
return result;
}
static inline uint64_t vgetq_lane_u64(uint64x2_t v, int lane)
{
return v.u64[lane];
}
static inline int64_t vgetq_lane_s64(int64x2_t v, int lane)
{
return v.i64[lane];
}
static inline int32_t vgetq_lane_s32(int32x4_t v, int lane)
{
return v.i32[lane];
}
typedef struct { uint64_t val[1]; } uint64x1_t;
static inline uint64x1_t vcreate_u64(uint64_t a)
{
uint64x1_t result;
result.val[0] = a;
return result;
}
static inline uint64x2_t vcombine_u64(uint64x1_t low, uint64x1_t high)
{
uint64x2_t result;
result.u64[0] = low.val[0];
result.u64[1] = high.val[0];
return result;
}
#ifdef __cplusplus
}
#endif
#endif /* XMRIG_SSE2RVV_OPTIMIZED_H */

View File

@@ -0,0 +1,571 @@
/* XMRig
* Copyright (c) 2025 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
/*
* SSE to RISC-V compatibility header
* Provides scalar implementations of SSE intrinsics for RISC-V architecture
*/
#ifndef XMRIG_SSE2RVV_H
#define XMRIG_SSE2RVV_H
#ifdef __cplusplus
extern "C" {
#endif
#include <stdint.h>
#include <string.h>
/* 128-bit vector type */
typedef union {
uint8_t u8[16];
uint16_t u16[8];
uint32_t u32[4];
uint64_t u64[2];
int8_t i8[16];
int16_t i16[8];
int32_t i32[4];
int64_t i64[2];
} __m128i_union;
typedef __m128i_union __m128i;
/* Set operations */
static inline __m128i _mm_set_epi32(int e3, int e2, int e1, int e0)
{
__m128i result;
result.i32[0] = e0;
result.i32[1] = e1;
result.i32[2] = e2;
result.i32[3] = e3;
return result;
}
static inline __m128i _mm_set_epi64x(int64_t e1, int64_t e0)
{
__m128i result;
result.i64[0] = e0;
result.i64[1] = e1;
return result;
}
static inline __m128i _mm_setzero_si128(void)
{
__m128i result;
memset(&result, 0, sizeof(result));
return result;
}
/* Extract/insert operations */
static inline int _mm_cvtsi128_si32(__m128i a)
{
return a.i32[0];
}
static inline int64_t _mm_cvtsi128_si64(__m128i a)
{
return a.i64[0];
}
static inline __m128i _mm_cvtsi32_si128(int a)
{
__m128i result = _mm_setzero_si128();
result.i32[0] = a;
return result;
}
static inline __m128i _mm_cvtsi64_si128(int64_t a)
{
__m128i result = _mm_setzero_si128();
result.i64[0] = a;
return result;
}
/* Shuffle operations */
static inline __m128i _mm_shuffle_epi32(__m128i a, int imm8)
{
__m128i result;
result.u32[0] = a.u32[(imm8 >> 0) & 0x3];
result.u32[1] = a.u32[(imm8 >> 2) & 0x3];
result.u32[2] = a.u32[(imm8 >> 4) & 0x3];
result.u32[3] = a.u32[(imm8 >> 6) & 0x3];
return result;
}
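/* Illustrative usage (not part of the original intrinsic set): each two-bit
 * field of imm8 selects a source lane, so 0x1B (binary 00 01 10 11)
 * reverses the four 32-bit lanes:
 *
 *   __m128i v = _mm_set_epi32(3, 2, 1, 0);   // v.u32[] = {0, 1, 2, 3}
 *   __m128i r = _mm_shuffle_epi32(v, 0x1B);  // r.u32[] = {3, 2, 1, 0}
 */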
/* Logical operations */
static inline __m128i _mm_xor_si128(__m128i a, __m128i b)
{
__m128i result;
result.u64[0] = a.u64[0] ^ b.u64[0];
result.u64[1] = a.u64[1] ^ b.u64[1];
return result;
}
static inline __m128i _mm_or_si128(__m128i a, __m128i b)
{
__m128i result;
result.u64[0] = a.u64[0] | b.u64[0];
result.u64[1] = a.u64[1] | b.u64[1];
return result;
}
static inline __m128i _mm_and_si128(__m128i a, __m128i b)
{
__m128i result;
result.u64[0] = a.u64[0] & b.u64[0];
result.u64[1] = a.u64[1] & b.u64[1];
return result;
}
static inline __m128i _mm_andnot_si128(__m128i a, __m128i b)
{
__m128i result;
result.u64[0] = (~a.u64[0]) & b.u64[0];
result.u64[1] = (~a.u64[1]) & b.u64[1];
return result;
}
/* Shift operations */
static inline __m128i _mm_slli_si128(__m128i a, int imm8)
{
__m128i result = _mm_setzero_si128();
int count = imm8 & 0xFF;
if (count > 15) return result;
for (int i = 0; i < 16 - count; i++) {
result.u8[i + count] = a.u8[i];
}
return result;
}
static inline __m128i _mm_srli_si128(__m128i a, int imm8)
{
__m128i result = _mm_setzero_si128();
int count = imm8 & 0xFF;
if (count > 15) return result;
for (int i = count; i < 16; i++) {
result.u8[i - count] = a.u8[i];
}
return result;
}
static inline __m128i _mm_slli_epi64(__m128i a, int imm8)
{
__m128i result;
if (imm8 > 63) {
result.u64[0] = 0;
result.u64[1] = 0;
} else {
result.u64[0] = a.u64[0] << imm8;
result.u64[1] = a.u64[1] << imm8;
}
return result;
}
static inline __m128i _mm_srli_epi64(__m128i a, int imm8)
{
__m128i result;
if (imm8 > 63) {
result.u64[0] = 0;
result.u64[1] = 0;
} else {
result.u64[0] = a.u64[0] >> imm8;
result.u64[1] = a.u64[1] >> imm8;
}
return result;
}
/* Load/store operations */
static inline __m128i _mm_load_si128(const __m128i* p)
{
__m128i result;
memcpy(&result, p, sizeof(__m128i));
return result;
}
static inline __m128i _mm_loadu_si128(const __m128i* p)
{
__m128i result;
memcpy(&result, p, sizeof(__m128i));
return result;
}
static inline void _mm_store_si128(__m128i* p, __m128i a)
{
memcpy(p, &a, sizeof(__m128i));
}
static inline void _mm_storeu_si128(__m128i* p, __m128i a)
{
memcpy(p, &a, sizeof(__m128i));
}
/* Arithmetic operations */
static inline __m128i _mm_add_epi64(__m128i a, __m128i b)
{
__m128i result;
result.u64[0] = a.u64[0] + b.u64[0];
result.u64[1] = a.u64[1] + b.u64[1];
return result;
}
static inline __m128i _mm_add_epi32(__m128i a, __m128i b)
{
__m128i result;
for (int i = 0; i < 4; i++) {
result.i32[i] = a.i32[i] + b.i32[i];
}
return result;
}
static inline __m128i _mm_sub_epi64(__m128i a, __m128i b)
{
__m128i result;
result.u64[0] = a.u64[0] - b.u64[0];
result.u64[1] = a.u64[1] - b.u64[1];
return result;
}
static inline __m128i _mm_mul_epu32(__m128i a, __m128i b)
{
__m128i result;
result.u64[0] = (uint64_t)a.u32[0] * (uint64_t)b.u32[0];
result.u64[1] = (uint64_t)a.u32[2] * (uint64_t)b.u32[2];
return result;
}
/* Unpack operations */
static inline __m128i _mm_unpacklo_epi64(__m128i a, __m128i b)
{
__m128i result;
result.u64[0] = a.u64[0];
result.u64[1] = b.u64[0];
return result;
}
static inline __m128i _mm_unpackhi_epi64(__m128i a, __m128i b)
{
__m128i result;
result.u64[0] = a.u64[1];
result.u64[1] = b.u64[1];
return result;
}
/* Pause instruction for spin-wait loops */
static inline void _mm_pause(void)
{
/* Base RISC-V has no direct equivalent to the x86 PAUSE instruction
 * (the Zihintpause extension adds a pause hint, but it cannot be
 * assumed to be present), so use a simple NOP */
__asm__ __volatile__("nop");
}
/* Memory fence */
static inline void _mm_mfence(void)
{
__asm__ __volatile__("fence" ::: "memory");
}
static inline void _mm_lfence(void)
{
__asm__ __volatile__("fence r,r" ::: "memory");
}
static inline void _mm_sfence(void)
{
__asm__ __volatile__("fence w,w" ::: "memory");
}
/* Comparison operations */
static inline __m128i _mm_cmpeq_epi32(__m128i a, __m128i b)
{
__m128i result;
for (int i = 0; i < 4; i++) {
result.u32[i] = (a.u32[i] == b.u32[i]) ? 0xFFFFFFFF : 0;
}
return result;
}
static inline __m128i _mm_cmpeq_epi64(__m128i a, __m128i b)
{
__m128i result;
for (int i = 0; i < 2; i++) {
result.u64[i] = (a.u64[i] == b.u64[i]) ? 0xFFFFFFFFFFFFFFFFULL : 0;
}
return result;
}
/* Additional shift operations */
static inline __m128i _mm_slli_epi32(__m128i a, int imm8)
{
__m128i result;
if (imm8 > 31) {
for (int i = 0; i < 4; i++) result.u32[i] = 0;
} else {
for (int i = 0; i < 4; i++) {
result.u32[i] = a.u32[i] << imm8;
}
}
return result;
}
static inline __m128i _mm_srli_epi32(__m128i a, int imm8)
{
__m128i result;
if (imm8 > 31) {
for (int i = 0; i < 4; i++) result.u32[i] = 0;
} else {
for (int i = 0; i < 4; i++) {
result.u32[i] = a.u32[i] >> imm8;
}
}
return result;
}
/* 64-bit integer operations */
static inline __m128i _mm_set1_epi64x(int64_t a)
{
__m128i result;
result.i64[0] = a;
result.i64[1] = a;
return result;
}
/* Float type for compatibility - we'll treat it as int for simplicity */
typedef __m128i __m128;
/* Float operations - simplified scalar implementations */
static inline __m128 _mm_set1_ps(float a)
{
__m128 result;
uint32_t val;
memcpy(&val, &a, sizeof(float));
for (int i = 0; i < 4; i++) {
result.u32[i] = val;
}
return result;
}
static inline __m128 _mm_setzero_ps(void)
{
__m128 result;
memset(&result, 0, sizeof(result));
return result;
}
static inline __m128 _mm_add_ps(__m128 a, __m128 b)
{
__m128 result;
float fa[4], fb[4], fr[4];
memcpy(fa, &a, sizeof(__m128));
memcpy(fb, &b, sizeof(__m128));
for (int i = 0; i < 4; i++) {
fr[i] = fa[i] + fb[i];
}
memcpy(&result, fr, sizeof(__m128));
return result;
}
static inline __m128 _mm_mul_ps(__m128 a, __m128 b)
{
__m128 result;
float fa[4], fb[4], fr[4];
memcpy(fa, &a, sizeof(__m128));
memcpy(fb, &b, sizeof(__m128));
for (int i = 0; i < 4; i++) {
fr[i] = fa[i] * fb[i];
}
memcpy(&result, fr, sizeof(__m128));
return result;
}
static inline __m128 _mm_and_ps(__m128 a, __m128 b)
{
__m128 result;
result.u64[0] = a.u64[0] & b.u64[0];
result.u64[1] = a.u64[1] & b.u64[1];
return result;
}
static inline __m128 _mm_or_ps(__m128 a, __m128 b)
{
__m128 result;
result.u64[0] = a.u64[0] | b.u64[0];
result.u64[1] = a.u64[1] | b.u64[1];
return result;
}
static inline __m128 _mm_cvtepi32_ps(__m128i a)
{
__m128 result;
float fr[4];
for (int i = 0; i < 4; i++) {
fr[i] = (float)a.i32[i];
}
memcpy(&result, fr, sizeof(__m128));
return result;
}
static inline __m128i _mm_cvttps_epi32(__m128 a)
{
__m128i result;
float fa[4];
memcpy(fa, &a, sizeof(__m128));
for (int i = 0; i < 4; i++) {
result.i32[i] = (int32_t)fa[i];
}
return result;
}
/* Casting operations */
static inline __m128 _mm_castsi128_ps(__m128i a)
{
__m128 result;
memcpy(&result, &a, sizeof(__m128));
return result;
}
static inline __m128i _mm_castps_si128(__m128 a)
{
__m128i result;
memcpy(&result, &a, sizeof(__m128));
return result;
}
/* Additional set operations */
static inline __m128i _mm_set1_epi32(int a)
{
__m128i result;
for (int i = 0; i < 4; i++) {
result.i32[i] = a;
}
return result;
}
/* AES instructions - these are placeholders, actual AES is done via soft_aes.h */
/* On RISC-V without crypto extensions, these should never be called directly */
/* They are only here for compilation compatibility */
static inline __m128i _mm_aesenc_si128(__m128i a, __m128i roundkey)
{
/* This is a placeholder - actual implementation should use soft_aes */
/* If this function is called, it means SOFT_AES template parameter wasn't used */
/* We return a XOR as a minimal fallback, but proper code should use soft_aesenc */
return _mm_xor_si128(a, roundkey);
}
static inline __m128i _mm_aeskeygenassist_si128(__m128i a, const int rcon)
{
/* Placeholder for AES key generation - should use soft_aeskeygenassist */
return a;
}
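/* A correct software round applies SubBytes, ShiftRows and MixColumns;
 * soft_aes.h provides this via T-table lookups (soft_aesenc, as noted
 * above). The XOR fallback only keeps the data flow intact on build paths
 * that are never expected to execute. */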
/* Rotate right operation for soft_aes.h */
static inline uint32_t _rotr(uint32_t value, unsigned int count)
{
const unsigned int mask = 31;
count &= mask;
return (value >> count) | (value << ((-count) & mask));
}
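/* Example: _rotr(0x80000001u, 1) == 0xC0000000u. The ((-count) & mask)
 * term keeps the left-shift amount within [0, 31], so the expression stays
 * well-defined even when count is 0 or a multiple of 32. */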
/* ARM NEON compatibility types and intrinsics for RISC-V */
typedef __m128i_union uint64x2_t;
typedef __m128i_union uint8x16_t;
typedef __m128i_union int64x2_t;
typedef __m128i_union int32x4_t;
static inline uint64x2_t vld1q_u64(const uint64_t *ptr)
{
uint64x2_t result;
result.u64[0] = ptr[0];
result.u64[1] = ptr[1];
return result;
}
static inline int64x2_t vld1q_s64(const int64_t *ptr)
{
int64x2_t result;
result.i64[0] = ptr[0];
result.i64[1] = ptr[1];
return result;
}
static inline void vst1q_u64(uint64_t *ptr, uint64x2_t val)
{
ptr[0] = val.u64[0];
ptr[1] = val.u64[1];
}
static inline uint64x2_t veorq_u64(uint64x2_t a, uint64x2_t b)
{
uint64x2_t result;
result.u64[0] = a.u64[0] ^ b.u64[0];
result.u64[1] = a.u64[1] ^ b.u64[1];
return result;
}
static inline uint64x2_t vaddq_u64(uint64x2_t a, uint64x2_t b)
{
uint64x2_t result;
result.u64[0] = a.u64[0] + b.u64[0];
result.u64[1] = a.u64[1] + b.u64[1];
return result;
}
static inline uint64x2_t vreinterpretq_u64_u8(uint8x16_t a)
{
uint64x2_t result;
memcpy(&result, &a, sizeof(uint64x2_t));
return result;
}
static inline uint64_t vgetq_lane_u64(uint64x2_t v, int lane)
{
return v.u64[lane];
}
static inline int64_t vgetq_lane_s64(int64x2_t v, int lane)
{
return v.i64[lane];
}
static inline int32_t vgetq_lane_s32(int32x4_t v, int lane)
{
return v.i32[lane];
}
typedef struct { uint64_t val[1]; } uint64x1_t;
static inline uint64x1_t vcreate_u64(uint64_t a)
{
uint64x1_t result;
result.val[0] = a;
return result;
}
static inline uint64x2_t vcombine_u64(uint64x1_t low, uint64x1_t high)
{
uint64x2_t result;
result.u64[0] = low.val[0];
result.u64[1] = high.val[0];
return result;
}
#ifdef __cplusplus
}
#endif
#endif /* XMRIG_SSE2RVV_H */


@@ -1,6 +1,6 @@
/* XMRig
* Copyright (c) 2018-2021 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2021 XMRig <https://github.com/xmrig>, <support@xmrig.com>
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2025 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -35,15 +35,69 @@ constexpr size_t twoMiB = 2U * 1024U * 1024U;
constexpr size_t oneGiB = 1024U * 1024U * 1024U;
static inline std::string sysfs_path(uint32_t node, size_t hugePageSize, bool nr)
static bool sysfs_write(const std::string &path, uint64_t value)
{
std::ofstream file(path, std::ios::out | std::ios::binary | std::ios::trunc);
if (!file.is_open()) {
return false;
}
file << value;
file.flush();
return true;
}
static int64_t sysfs_read(const std::string &path)
{
std::ifstream file(path);
if (!file.is_open()) {
return -1;
}
uint64_t value = 0;
file >> value;
return value;
}
static std::string sysfs_path(uint32_t node, size_t hugePageSize, bool nr)
{
return fmt::format("/sys/devices/system/node/node{}/hugepages/hugepages-{}kB/{}_hugepages", node, hugePageSize / 1024, nr ? "nr" : "free");
}
static inline bool write_nr_hugepages(uint32_t node, size_t hugePageSize, uint64_t count) { return LinuxMemory::write(sysfs_path(node, hugePageSize, true).c_str(), count); }
static inline int64_t free_hugepages(uint32_t node, size_t hugePageSize) { return LinuxMemory::read(sysfs_path(node, hugePageSize, false).c_str()); }
static inline int64_t nr_hugepages(uint32_t node, size_t hugePageSize) { return LinuxMemory::read(sysfs_path(node, hugePageSize, true).c_str()); }
static std::string sysfs_path(size_t hugePageSize, bool nr)
{
return fmt::format("/sys/kernel/mm/hugepages/hugepages-{}kB/{}_hugepages", hugePageSize / 1024, nr ? "nr" : "free");
}
static bool write_nr_hugepages(uint32_t node, size_t hugePageSize, uint64_t count)
{
if (sysfs_write(sysfs_path(node, hugePageSize, true), count)) {
return true;
}
return sysfs_write(sysfs_path(hugePageSize, true), count);
}
static int64_t sysfs_read_hugepages(uint32_t node, size_t hugePageSize, bool nr)
{
const int64_t value = sysfs_read(sysfs_path(node, hugePageSize, nr));
if (value >= 0) {
return value;
}
return sysfs_read(sysfs_path(hugePageSize, nr));
}
static inline int64_t free_hugepages(uint32_t node, size_t hugePageSize) { return sysfs_read_hugepages(node, hugePageSize, false); }
static inline int64_t nr_hugepages(uint32_t node, size_t hugePageSize) { return sysfs_read_hugepages(node, hugePageSize, true); }
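// Example paths for node 0 with 2 MiB pages: the helpers first try
// /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
// and fall back to the global
// /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
// on kernels without per-node hugepage controls.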
} // namespace xmrig
@@ -62,31 +116,3 @@ bool xmrig::LinuxMemory::reserve(size_t size, uint32_t node, size_t hugePageSize
return write_nr_hugepages(node, hugePageSize, std::max<size_t>(nr_hugepages(node, hugePageSize), 0) + (required - available));
}
bool xmrig::LinuxMemory::write(const char *path, uint64_t value)
{
std::ofstream file(path, std::ios::out | std::ios::binary | std::ios::trunc);
if (!file.is_open()) {
return false;
}
file << value;
file.flush();
return true;
}
int64_t xmrig::LinuxMemory::read(const char *path)
{
std::ifstream file(path);
if (!file.is_open()) {
return -1;
}
uint64_t value = 0;
file >> value;
return value;
}


@@ -1,6 +1,6 @@
/* XMRig
* Copyright (c) 2018-2021 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2021 XMRig <https://github.com/xmrig>, <support@xmrig.com>
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2025 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -31,13 +31,10 @@ class LinuxMemory
{
public:
static bool reserve(size_t size, uint32_t node, size_t hugePageSize);
static bool write(const char *path, uint64_t value);
static int64_t read(const char *path);
};
} /* namespace xmrig */
} // namespace xmrig
#endif /* XMRIG_LINUXMEMORY_H */
#endif // XMRIG_LINUXMEMORY_H


@@ -49,7 +49,7 @@ xmrig::MemoryPool::MemoryPool(size_t size, bool hugePages, uint32_t node)
constexpr size_t alignment = 1 << 24;
m_memory = new VirtualMemory(size * pageSize + alignment, hugePages, false, false, node);
m_memory = new VirtualMemory(size * pageSize + alignment, hugePages, false, false, node, VirtualMemory::kDefaultHugePageSize);
m_alignOffset = (alignment - (((size_t)m_memory->scratchpad()) % alignment)) % alignment;
}


@@ -75,6 +75,16 @@ xmrig::VirtualMemory::VirtualMemory(size_t size, bool hugePages, bool oneGbPages
}
m_scratchpad = static_cast<uint8_t*>(_mm_malloc(m_size, alignSize));
// Huge pages failed to allocate, but try to enable transparent huge pages for the range
if (alignSize >= kDefaultHugePageSize) {
if (m_scratchpad) {
adviseLargePages(m_scratchpad, m_size);
}
else {
m_scratchpad = static_cast<uint8_t*>(_mm_malloc(m_size, 64));
}
}
}


@@ -65,6 +65,7 @@ public:
static void *allocateExecutableMemory(size_t size, bool hugePages);
static void *allocateLargePagesMemory(size_t size);
static void *allocateOneGbPagesMemory(size_t size);
static bool adviseLargePages(void *p, size_t size);
static void destroy();
static void flushInstructionCache(void *p, size_t size);
static void freeLargePagesMemory(void *p, size_t size);


@@ -86,7 +86,7 @@ bool xmrig::VirtualMemory::isHugepagesAvailable()
{
# ifdef XMRIG_OS_LINUX
return std::ifstream("/proc/sys/vm/nr_hugepages").good() || std::ifstream("/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages").good();
# elif defined(XMRIG_OS_MACOS) && defined(XMRIG_ARM)
# elif defined(XMRIG_OS_MACOS) && defined(XMRIG_ARM) || defined(XMRIG_OS_HAIKU)
return false;
# else
return true;
@@ -156,7 +156,8 @@ void *xmrig::VirtualMemory::allocateExecutableMemory(size_t size, bool hugePages
if (!mem) {
mem = mmap(0, size, PROT_READ | PROT_WRITE | SECURE_PROT_EXEC, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
}
# elif defined(XMRIG_OS_HAIKU)
void *mem = mmap(0, size, PROT_READ | PROT_WRITE | PROT_EXEC, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
# else
void *mem = nullptr;
@@ -181,6 +182,8 @@ void *xmrig::VirtualMemory::allocateLargePagesMemory(size_t size)
void *mem = mmap(0, size, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANON, VM_FLAGS_SUPERPAGE_SIZE_2MB, 0);
# elif defined(XMRIG_OS_FREEBSD)
void *mem = mmap(0, size, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_ALIGNED_SUPER | MAP_PREFAULT_READ, -1, 0);
# elif defined(XMRIG_OS_HAIKU)
void *mem = nullptr;
# else
void *mem = mmap(0, size, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_POPULATE | hugePagesFlag(hugePageSize()), 0, 0);
# endif
@@ -273,6 +276,16 @@ bool xmrig::VirtualMemory::allocateOneGbPagesMemory()
}
bool xmrig::VirtualMemory::adviseLargePages(void *p, size_t size)
{
# ifdef XMRIG_OS_LINUX
return (madvise(p, size, MADV_HUGEPAGE) == 0);
# else
return false;
# endif
}
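// MADV_HUGEPAGE asks the kernel to back the range with transparent huge
// pages on a best-effort basis, which is why the constructor above only
// uses it as a fallback after an explicit huge-page allocation failed.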
void xmrig::VirtualMemory::freeLargePagesMemory()
{
if (m_flags.test(FLAG_LOCK)) {


@@ -260,6 +260,12 @@ bool xmrig::VirtualMemory::allocateOneGbPagesMemory()
}
bool xmrig::VirtualMemory::adviseLargePages(void *p, size_t size)
{
return false;
}
void xmrig::VirtualMemory::freeLargePagesMemory()
{
freeLargePagesMemory(m_scratchpad, m_size);


@@ -26,7 +26,7 @@
#define XMRIG_MM_MALLOC_PORTABLE_H
#if defined(XMRIG_ARM) && !defined(__clang__)
#if (defined(XMRIG_ARM) || defined(XMRIG_RISCV)) && !defined(__clang__)
#include <stdlib.h>


@@ -57,6 +57,9 @@
#if defined(XMRIG_ARM)
# include "crypto/cn/sse2neon.h"
#elif defined(XMRIG_RISCV)
// RISC-V doesn't have SSE/NEON, so provide minimal compatibility
# define _mm_pause() __asm__ __volatile__("nop")
#elif defined(__GNUC__)
# include <x86intrin.h>
#else
@@ -286,7 +289,7 @@ struct HelperThread
void benchmark()
{
#ifndef XMRIG_ARM
#if !defined(XMRIG_ARM) && !defined(XMRIG_RISCV)
static std::atomic<int> done{ 0 };
if (done.exchange(1)) {
return;
@@ -478,7 +481,7 @@ static inline bool findByType(hwloc_obj_t obj, hwloc_obj_type_t type, func lambd
HelperThread* create_helper_thread(int64_t cpu_index, int priority, const std::vector<int64_t>& affinities)
{
#ifndef XMRIG_ARM
#if !defined(XMRIG_ARM) && !defined(XMRIG_RISCV)
hwloc_bitmap_t helper_cpu_set = hwloc_bitmap_alloc();
hwloc_bitmap_t main_threads_set = hwloc_bitmap_alloc();
@@ -807,7 +810,7 @@ void hash_octa(const uint8_t* data, size_t size, uint8_t* output, cryptonight_ct
uint32_t cn_indices[6];
select_indices(cn_indices, seed);
#ifdef XMRIG_ARM
#if defined(XMRIG_ARM) || defined(XMRIG_RISCV)
uint32_t step[6] = { 1, 1, 1, 1, 1, 1 };
#else
uint32_t step[6] = { 4, 4, 1, 2, 4, 4 };


@@ -38,6 +38,13 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#include "crypto/randomx/common.hpp"
#include "crypto/rx/Profiler.h"
#include "backend/cpu/Cpu.h"
#ifdef XMRIG_RISCV
#include "crypto/randomx/aes_hash_rv64_vector.hpp"
#include "crypto/randomx/aes_hash_rv64_zvkned.hpp"
#endif
#define AES_HASH_1R_STATE0 0xd7983aad, 0xcc82db47, 0x9fa856de, 0x92b52c0d
#define AES_HASH_1R_STATE1 0xace78057, 0xf59e125a, 0x15c7b798, 0x338d996e
#define AES_HASH_1R_STATE2 0xe8a07ce4, 0x5079506b, 0xae62c7d0, 0x6a770017
@@ -59,7 +66,20 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Hashing throughput: >20 GiB/s per CPU core with hardware AES
*/
template<int softAes>
void hashAes1Rx4(const void *input, size_t inputSize, void *hash) {
void hashAes1Rx4(const void *input, size_t inputSize, void *hash)
{
#ifdef XMRIG_RISCV
if (xmrig::Cpu::info()->hasAES()) {
hashAes1Rx4_zvkned(input, inputSize, hash);
return;
}
if (xmrig::Cpu::info()->hasRISCV_Vector()) {
hashAes1Rx4_RVV<softAes>(input, inputSize, hash);
return;
}
#endif
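/* Dispatch priority on RISC-V, as implemented above: hardware AES via
   Zvkned first, then the generic RVV soft-AES path, and finally the
   scalar implementation below. */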
const uint8_t* inptr = (uint8_t*)input;
const uint8_t* inputEnd = inptr + inputSize;
@@ -127,7 +147,20 @@ template void hashAes1Rx4<true>(const void *input, size_t inputSize, void *hash)
calls to this function.
*/
template<int softAes>
void fillAes1Rx4(void *state, size_t outputSize, void *buffer) {
void fillAes1Rx4(void *state, size_t outputSize, void *buffer)
{
#ifdef XMRIG_RISCV
if (xmrig::Cpu::info()->hasAES()) {
fillAes1Rx4_zvkned(state, outputSize, buffer);
return;
}
if (xmrig::Cpu::info()->hasRISCV_Vector()) {
fillAes1Rx4_RVV<softAes>(state, outputSize, buffer);
return;
}
#endif
const uint8_t* outptr = (uint8_t*)buffer;
const uint8_t* outputEnd = outptr + outputSize;
@@ -171,7 +204,20 @@ static constexpr randomx::Instruction inst{ 0xFF, 7, 7, 0xFF, 0xFFFFFFFFU };
alignas(16) static const randomx::Instruction inst_mask[2] = { inst, inst };
template<int softAes>
void fillAes4Rx4(void *state, size_t outputSize, void *buffer) {
void fillAes4Rx4(void *state, size_t outputSize, void *buffer)
{
#ifdef XMRIG_RISCV
if (xmrig::Cpu::info()->hasAES()) {
fillAes4Rx4_zvkned(state, outputSize, buffer);
return;
}
if (xmrig::Cpu::info()->hasRISCV_Vector()) {
fillAes4Rx4_RVV<softAes>(state, outputSize, buffer);
return;
}
#endif
const uint8_t* outptr = (uint8_t*)buffer;
const uint8_t* outputEnd = outptr + outputSize;
@@ -235,10 +281,34 @@ void fillAes4Rx4(void *state, size_t outputSize, void *buffer) {
template void fillAes4Rx4<true>(void *state, size_t outputSize, void *buffer);
template void fillAes4Rx4<false>(void *state, size_t outputSize, void *buffer);
#ifdef XMRIG_VAES
void hashAndFillAes1Rx4_VAES512(void *scratchpad, size_t scratchpadSize, void *hash, void* fill_state);
#endif
template<int softAes, int unroll>
void hashAndFillAes1Rx4(void *scratchpad, size_t scratchpadSize, void *hash, void* fill_state) {
void hashAndFillAes1Rx4(void *scratchpad, size_t scratchpadSize, void *hash, void* fill_state)
{
PROFILE_SCOPE(RandomX_AES);
#ifdef XMRIG_RISCV
if (xmrig::Cpu::info()->hasAES()) {
hashAndFillAes1Rx4_zvkned(scratchpad, scratchpadSize, hash, fill_state);
return;
}
if (xmrig::Cpu::info()->hasRISCV_Vector()) {
hashAndFillAes1Rx4_RVV<softAes, unroll>(scratchpad, scratchpadSize, hash, fill_state);
return;
}
#endif
#ifdef XMRIG_VAES
if (xmrig::Cpu::info()->arch() == xmrig::ICpuInfo::ARCH_ZEN5) {
hashAndFillAes1Rx4_VAES512(scratchpad, scratchpadSize, hash, fill_state);
return;
}
#endif
uint8_t* scratchpadPtr = (uint8_t*)scratchpad;
const uint8_t* scratchpadEnd = scratchpadPtr + scratchpadSize;
@@ -386,43 +456,54 @@ hashAndFillAes1Rx4_impl* softAESImpl = &hashAndFillAes1Rx4<1,1>;
void SelectSoftAESImpl(size_t threadsCount)
{
constexpr uint64_t test_length_ms = 100;
const std::array<hashAndFillAes1Rx4_impl *, 4> impl = {
&hashAndFillAes1Rx4<1,1>,
&hashAndFillAes1Rx4<2,1>,
&hashAndFillAes1Rx4<2,2>,
&hashAndFillAes1Rx4<2,4>,
};
size_t fast_idx = 0;
double fast_speed = 0.0;
for (size_t run = 0; run < 3; ++run) {
for (size_t i = 0; i < impl.size(); ++i) {
const double t1 = xmrig::Chrono::highResolutionMSecs();
std::vector<uint32_t> count(threadsCount, 0);
std::vector<std::thread> threads;
for (size_t t = 0; t < threadsCount; ++t) {
threads.emplace_back([&, t]() {
std::vector<uint8_t> scratchpad(10 * 1024);
alignas(16) uint8_t hash[64] = {};
alignas(16) uint8_t state[64] = {};
do {
(*impl[i])(scratchpad.data(), scratchpad.size(), hash, state);
++count[t];
} while (xmrig::Chrono::highResolutionMSecs() - t1 < test_length_ms);
});
}
uint32_t total = 0;
for (size_t t = 0; t < threadsCount; ++t) {
threads[t].join();
total += count[t];
}
const double t2 = xmrig::Chrono::highResolutionMSecs();
const double speed = total * 1e3 / (t2 - t1);
if (speed > fast_speed) {
fast_idx = i;
fast_speed = speed;
}
}
}
softAESImpl = impl[fast_idx];
}


@@ -0,0 +1,322 @@
/*
Copyright (c) 2025 SChernykh <https://github.com/SChernykh>
Copyright (c) 2025 XMRig <support@xmrig.com>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the
names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <riscv_vector.h>
#include "crypto/randomx/soft_aes.h"
#include "crypto/randomx/randomx.h"
static FORCE_INLINE vuint32m1_t softaes_vector_double(
vuint32m1_t in,
vuint32m1_t key,
vuint8m1_t i0, vuint8m1_t i1, vuint8m1_t i2, vuint8m1_t i3,
const uint32_t* lut0, const uint32_t* lut1, const uint32_t *lut2, const uint32_t* lut3)
{
const vuint8m1_t in8 = __riscv_vreinterpret_v_u32m1_u8m1(in);
const vuint32m1_t index0 = __riscv_vreinterpret_v_u8m1_u32m1(__riscv_vrgather_vv_u8m1(in8, i0, 32));
const vuint32m1_t index1 = __riscv_vreinterpret_v_u8m1_u32m1(__riscv_vrgather_vv_u8m1(in8, i1, 32));
const vuint32m1_t index2 = __riscv_vreinterpret_v_u8m1_u32m1(__riscv_vrgather_vv_u8m1(in8, i2, 32));
const vuint32m1_t index3 = __riscv_vreinterpret_v_u8m1_u32m1(__riscv_vrgather_vv_u8m1(in8, i3, 32));
vuint32m1_t s0 = __riscv_vluxei32_v_u32m1(lut0, __riscv_vsll_vx_u32m1(index0, 2, 8), 8);
vuint32m1_t s1 = __riscv_vluxei32_v_u32m1(lut1, __riscv_vsll_vx_u32m1(index1, 2, 8), 8);
vuint32m1_t s2 = __riscv_vluxei32_v_u32m1(lut2, __riscv_vsll_vx_u32m1(index2, 2, 8), 8);
vuint32m1_t s3 = __riscv_vluxei32_v_u32m1(lut3, __riscv_vsll_vx_u32m1(index3, 2, 8), 8);
s0 = __riscv_vxor_vv_u32m1(s0, s1, 8);
s2 = __riscv_vxor_vv_u32m1(s2, s3, 8);
s0 = __riscv_vxor_vv_u32m1(s0, s2, 8);
return __riscv_vxor_vv_u32m1(s0, key, 8);
}
static constexpr uint32_t AES_HASH_1R_STATE02[8] = { 0x92b52c0d, 0x9fa856de, 0xcc82db47, 0xd7983aad, 0x6a770017, 0xae62c7d0, 0x5079506b, 0xe8a07ce4 };
static constexpr uint32_t AES_HASH_1R_STATE13[8] = { 0x338d996e, 0x15c7b798, 0xf59e125a, 0xace78057, 0x630a240c, 0x07ad828d, 0x79a10005, 0x7e994948 };
static constexpr uint32_t AES_GEN_1R_KEY02[8] = { 0x6daca553, 0x62716609, 0xdbb5552b, 0xb4f44917, 0x3f1262f1, 0x9f947ec6, 0xf4c0794f, 0x3e20e345 };
static constexpr uint32_t AES_GEN_1R_KEY13[8] = { 0x6d7caf07, 0x846a710d, 0x1725d378, 0x0da1dc4e, 0x6aef8135, 0xb1ba317c, 0x16314c88, 0x49169154 };
static constexpr uint32_t AES_HASH_1R_XKEY00[8] = { 0xf6fa8389, 0x8b24949f, 0x90dc56bf, 0x06890201, 0xf6fa8389, 0x8b24949f, 0x90dc56bf, 0x06890201 };
static constexpr uint32_t AES_HASH_1R_XKEY11[8] = { 0x61b263d1, 0x51f4e03c, 0xee1043c6, 0xed18f99b, 0x61b263d1, 0x51f4e03c, 0xee1043c6, 0xed18f99b };
static constexpr uint32_t AES_HASH_STRIDE_X2[8] = { 0, 4, 8, 12, 32, 36, 40, 44 };
static constexpr uint32_t AES_HASH_STRIDE_X4[8] = { 0, 4, 8, 12, 64, 68, 72, 76 };
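// The stride vectors hold byte offsets for the indexed loads/stores:
// lanes 0-3 address one 16-byte AES state (offsets 0,4,8,12) and lanes 4-7
// the state 32 (X2) or 64 (X4) bytes later, so the state pairs 0&2 and 1&3
// are each processed as a single 8-lane vector.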
template<int softAes>
void hashAes1Rx4_RVV(const void *input, size_t inputSize, void *hash) {
const uint8_t* inptr = (const uint8_t*)input;
const uint8_t* inputEnd = inptr + inputSize;
//initial state
vuint32m1_t state02 = __riscv_vle32_v_u32m1(AES_HASH_1R_STATE02, 8);
vuint32m1_t state13 = __riscv_vle32_v_u32m1(AES_HASH_1R_STATE13, 8);
const vuint32m1_t stride = __riscv_vle32_v_u32m1(AES_HASH_STRIDE_X2, 8);
const vuint8m1_t lutenc_index0 = __riscv_vle8_v_u8m1(lutEncIndex[0], 32);
const vuint8m1_t lutenc_index1 = __riscv_vle8_v_u8m1(lutEncIndex[1], 32);
const vuint8m1_t lutenc_index2 = __riscv_vle8_v_u8m1(lutEncIndex[2], 32);
const vuint8m1_t lutenc_index3 = __riscv_vle8_v_u8m1(lutEncIndex[3], 32);
const vuint8m1_t& lutdec_index0 = lutenc_index0;
const vuint8m1_t lutdec_index1 = __riscv_vle8_v_u8m1(lutDecIndex[1], 32);
const vuint8m1_t& lutdec_index2 = lutenc_index2;
const vuint8m1_t lutdec_index3 = __riscv_vle8_v_u8m1(lutDecIndex[3], 32);
//process 64 bytes at a time in 4 lanes
while (inptr < inputEnd) {
state02 = softaes_vector_double(state02, __riscv_vluxei32_v_u32m1((uint32_t*)inptr + 0, stride, 8), lutenc_index0, lutenc_index1, lutenc_index2, lutenc_index3, lutEnc0, lutEnc1, lutEnc2, lutEnc3);
state13 = softaes_vector_double(state13, __riscv_vluxei32_v_u32m1((uint32_t*)inptr + 4, stride, 8), lutdec_index0, lutdec_index1, lutdec_index2, lutdec_index3, lutDec0, lutDec1, lutDec2, lutDec3);
inptr += 64;
}
//two extra rounds to achieve full diffusion
const vuint32m1_t xkey00 = __riscv_vle32_v_u32m1(AES_HASH_1R_XKEY00, 8);
const vuint32m1_t xkey11 = __riscv_vle32_v_u32m1(AES_HASH_1R_XKEY11, 8);
state02 = softaes_vector_double(state02, xkey00, lutenc_index0, lutenc_index1, lutenc_index2, lutenc_index3, lutEnc0, lutEnc1, lutEnc2, lutEnc3);
state13 = softaes_vector_double(state13, xkey00, lutdec_index0, lutdec_index1, lutdec_index2, lutdec_index3, lutDec0, lutDec1, lutDec2, lutDec3);
state02 = softaes_vector_double(state02, xkey11, lutenc_index0, lutenc_index1, lutenc_index2, lutenc_index3, lutEnc0, lutEnc1, lutEnc2, lutEnc3);
state13 = softaes_vector_double(state13, xkey11, lutdec_index0, lutdec_index1, lutdec_index2, lutdec_index3, lutDec0, lutDec1, lutDec2, lutDec3);
//output hash
__riscv_vsuxei32_v_u32m1((uint32_t*)hash + 0, stride, state02, 8);
__riscv_vsuxei32_v_u32m1((uint32_t*)hash + 4, stride, state13, 8);
}
template void hashAes1Rx4_RVV<false>(const void *input, size_t inputSize, void *hash);
template void hashAes1Rx4_RVV<true>(const void *input, size_t inputSize, void *hash);
template<int softAes>
void fillAes1Rx4_RVV(void *state, size_t outputSize, void *buffer) {
const uint8_t* outptr = (uint8_t*)buffer;
const uint8_t* outputEnd = outptr + outputSize;
const vuint32m1_t key02 = __riscv_vle32_v_u32m1(AES_GEN_1R_KEY02, 8);
const vuint32m1_t key13 = __riscv_vle32_v_u32m1(AES_GEN_1R_KEY13, 8);
const vuint32m1_t stride = __riscv_vle32_v_u32m1(AES_HASH_STRIDE_X2, 8);
vuint32m1_t state02 = __riscv_vluxei32_v_u32m1((uint32_t*)state + 0, stride, 8);
vuint32m1_t state13 = __riscv_vluxei32_v_u32m1((uint32_t*)state + 4, stride, 8);
const vuint8m1_t lutenc_index0 = __riscv_vle8_v_u8m1(lutEncIndex[0], 32);
const vuint8m1_t lutenc_index1 = __riscv_vle8_v_u8m1(lutEncIndex[1], 32);
const vuint8m1_t lutenc_index2 = __riscv_vle8_v_u8m1(lutEncIndex[2], 32);
const vuint8m1_t lutenc_index3 = __riscv_vle8_v_u8m1(lutEncIndex[3], 32);
const vuint8m1_t& lutdec_index0 = lutenc_index0;
const vuint8m1_t lutdec_index1 = __riscv_vle8_v_u8m1(lutDecIndex[1], 32);
const vuint8m1_t& lutdec_index2 = lutenc_index2;
const vuint8m1_t lutdec_index3 = __riscv_vle8_v_u8m1(lutDecIndex[3], 32);
while (outptr < outputEnd) {
state02 = softaes_vector_double(state02, key02, lutdec_index0, lutdec_index1, lutdec_index2, lutdec_index3, lutDec0, lutDec1, lutDec2, lutDec3);
state13 = softaes_vector_double(state13, key13, lutenc_index0, lutenc_index1, lutenc_index2, lutenc_index3, lutEnc0, lutEnc1, lutEnc2, lutEnc3);
__riscv_vsuxei32_v_u32m1((uint32_t*)outptr + 0, stride, state02, 8);
__riscv_vsuxei32_v_u32m1((uint32_t*)outptr + 4, stride, state13, 8);
outptr += 64;
}
__riscv_vsuxei32_v_u32m1((uint32_t*)state + 0, stride, state02, 8);
__riscv_vsuxei32_v_u32m1((uint32_t*)state + 4, stride, state13, 8);
}
template void fillAes1Rx4_RVV<false>(void *state, size_t outputSize, void *buffer);
template void fillAes1Rx4_RVV<true>(void *state, size_t outputSize, void *buffer);
template<int softAes>
void fillAes4Rx4_RVV(void *state, size_t outputSize, void *buffer) {
const uint8_t* outptr = (uint8_t*)buffer;
const uint8_t* outputEnd = outptr + outputSize;
const vuint32m1_t stride4 = __riscv_vle32_v_u32m1(AES_HASH_STRIDE_X4, 8);
const vuint32m1_t key04 = __riscv_vluxei32_v_u32m1((uint32_t*)(RandomX_CurrentConfig.fillAes4Rx4_Key + 0), stride4, 8);
const vuint32m1_t key15 = __riscv_vluxei32_v_u32m1((uint32_t*)(RandomX_CurrentConfig.fillAes4Rx4_Key + 1), stride4, 8);
const vuint32m1_t key26 = __riscv_vluxei32_v_u32m1((uint32_t*)(RandomX_CurrentConfig.fillAes4Rx4_Key + 2), stride4, 8);
const vuint32m1_t key37 = __riscv_vluxei32_v_u32m1((uint32_t*)(RandomX_CurrentConfig.fillAes4Rx4_Key + 3), stride4, 8);
const vuint32m1_t stride = __riscv_vle32_v_u32m1(AES_HASH_STRIDE_X2, 8);
vuint32m1_t state02 = __riscv_vluxei32_v_u32m1((uint32_t*)state + 0, stride, 8);
vuint32m1_t state13 = __riscv_vluxei32_v_u32m1((uint32_t*)state + 4, stride, 8);
const vuint8m1_t lutenc_index0 = __riscv_vle8_v_u8m1(lutEncIndex[0], 32);
const vuint8m1_t lutenc_index1 = __riscv_vle8_v_u8m1(lutEncIndex[1], 32);
const vuint8m1_t lutenc_index2 = __riscv_vle8_v_u8m1(lutEncIndex[2], 32);
const vuint8m1_t lutenc_index3 = __riscv_vle8_v_u8m1(lutEncIndex[3], 32);
const vuint8m1_t& lutdec_index0 = lutenc_index0;
const vuint8m1_t lutdec_index1 = __riscv_vle8_v_u8m1(lutDecIndex[1], 32);
const vuint8m1_t& lutdec_index2 = lutenc_index2;
const vuint8m1_t lutdec_index3 = __riscv_vle8_v_u8m1(lutDecIndex[3], 32);
while (outptr < outputEnd) {
state02 = softaes_vector_double(state02, key04, lutdec_index0, lutdec_index1, lutdec_index2, lutdec_index3, lutDec0, lutDec1, lutDec2, lutDec3);
state13 = softaes_vector_double(state13, key04, lutenc_index0, lutenc_index1, lutenc_index2, lutenc_index3, lutEnc0, lutEnc1, lutEnc2, lutEnc3);
state02 = softaes_vector_double(state02, key15, lutdec_index0, lutdec_index1, lutdec_index2, lutdec_index3, lutDec0, lutDec1, lutDec2, lutDec3);
state13 = softaes_vector_double(state13, key15, lutenc_index0, lutenc_index1, lutenc_index2, lutenc_index3, lutEnc0, lutEnc1, lutEnc2, lutEnc3);
state02 = softaes_vector_double(state02, key26, lutdec_index0, lutdec_index1, lutdec_index2, lutdec_index3, lutDec0, lutDec1, lutDec2, lutDec3);
state13 = softaes_vector_double(state13, key26, lutenc_index0, lutenc_index1, lutenc_index2, lutenc_index3, lutEnc0, lutEnc1, lutEnc2, lutEnc3);
state02 = softaes_vector_double(state02, key37, lutdec_index0, lutdec_index1, lutdec_index2, lutdec_index3, lutDec0, lutDec1, lutDec2, lutDec3);
state13 = softaes_vector_double(state13, key37, lutenc_index0, lutenc_index1, lutenc_index2, lutenc_index3, lutEnc0, lutEnc1, lutEnc2, lutEnc3);
__riscv_vsuxei32_v_u32m1((uint32_t*)outptr + 0, stride, state02, 8);
__riscv_vsuxei32_v_u32m1((uint32_t*)outptr + 4, stride, state13, 8);
outptr += 64;
}
}
template void fillAes4Rx4_RVV<false>(void *state, size_t outputSize, void *buffer);
template void fillAes4Rx4_RVV<true>(void *state, size_t outputSize, void *buffer);
template<int softAes, int unroll>
void hashAndFillAes1Rx4_RVV(void *scratchpad, size_t scratchpadSize, void *hash, void* fill_state) {
uint8_t* scratchpadPtr = (uint8_t*)scratchpad;
const uint8_t* scratchpadEnd = scratchpadPtr + scratchpadSize;
vuint32m1_t hash_state02 = __riscv_vle32_v_u32m1(AES_HASH_1R_STATE02, 8);
vuint32m1_t hash_state13 = __riscv_vle32_v_u32m1(AES_HASH_1R_STATE13, 8);
const vuint32m1_t key02 = __riscv_vle32_v_u32m1(AES_GEN_1R_KEY02, 8);
const vuint32m1_t key13 = __riscv_vle32_v_u32m1(AES_GEN_1R_KEY13, 8);
const vuint32m1_t stride = __riscv_vle32_v_u32m1(AES_HASH_STRIDE_X2, 8);
vuint32m1_t fill_state02 = __riscv_vluxei32_v_u32m1((uint32_t*)fill_state + 0, stride, 8);
vuint32m1_t fill_state13 = __riscv_vluxei32_v_u32m1((uint32_t*)fill_state + 4, stride, 8);
const vuint8m1_t lutenc_index0 = __riscv_vle8_v_u8m1(lutEncIndex[0], 32);
const vuint8m1_t lutenc_index1 = __riscv_vle8_v_u8m1(lutEncIndex[1], 32);
const vuint8m1_t lutenc_index2 = __riscv_vle8_v_u8m1(lutEncIndex[2], 32);
const vuint8m1_t lutenc_index3 = __riscv_vle8_v_u8m1(lutEncIndex[3], 32);
const vuint8m1_t& lutdec_index0 = lutenc_index0;
const vuint8m1_t lutdec_index1 = __riscv_vle8_v_u8m1(lutDecIndex[1], 32);
const vuint8m1_t& lutdec_index2 = lutenc_index2;
const vuint8m1_t lutdec_index3 = __riscv_vle8_v_u8m1(lutDecIndex[3], 32);
//process 64 bytes at a time in 4 lanes
while (scratchpadPtr < scratchpadEnd) {
#define HASH_STATE(k) \
hash_state02 = softaes_vector_double(hash_state02, __riscv_vluxei32_v_u32m1((uint32_t*)scratchpadPtr + k * 16 + 0, stride, 8), lutenc_index0, lutenc_index1, lutenc_index2, lutenc_index3, lutEnc0, lutEnc1, lutEnc2, lutEnc3); \
hash_state13 = softaes_vector_double(hash_state13, __riscv_vluxei32_v_u32m1((uint32_t*)scratchpadPtr + k * 16 + 4, stride, 8), lutdec_index0, lutdec_index1, lutdec_index2, lutdec_index3, lutDec0, lutDec1, lutDec2, lutDec3);
#define FILL_STATE(k) \
fill_state02 = softaes_vector_double(fill_state02, key02, lutdec_index0, lutdec_index1, lutdec_index2, lutdec_index3, lutDec0, lutDec1, lutDec2, lutDec3); \
fill_state13 = softaes_vector_double(fill_state13, key13, lutenc_index0, lutenc_index1, lutenc_index2, lutenc_index3, lutEnc0, lutEnc1, lutEnc2, lutEnc3); \
__riscv_vsuxei32_v_u32m1((uint32_t*)scratchpadPtr + k * 16 + 0, stride, fill_state02, 8); \
__riscv_vsuxei32_v_u32m1((uint32_t*)scratchpadPtr + k * 16 + 4, stride, fill_state13, 8);
switch (softAes) {
case 0:
HASH_STATE(0);
HASH_STATE(1);
FILL_STATE(0);
FILL_STATE(1);
scratchpadPtr += 128;
break;
default:
switch (unroll) {
case 4:
HASH_STATE(0);
FILL_STATE(0);
HASH_STATE(1);
FILL_STATE(1);
HASH_STATE(2);
FILL_STATE(2);
HASH_STATE(3);
FILL_STATE(3);
scratchpadPtr += 64 * 4;
break;
case 2:
HASH_STATE(0);
FILL_STATE(0);
HASH_STATE(1);
FILL_STATE(1);
scratchpadPtr += 64 * 2;
break;
default:
HASH_STATE(0);
FILL_STATE(0);
scratchpadPtr += 64;
break;
}
break;
}
}
#undef HASH_STATE
#undef FILL_STATE
__riscv_vsuxei32_v_u32m1((uint32_t*)fill_state + 0, stride, fill_state02, 8);
__riscv_vsuxei32_v_u32m1((uint32_t*)fill_state + 4, stride, fill_state13, 8);
//two extra rounds to achieve full diffusion
const vuint32m1_t xkey00 = __riscv_vle32_v_u32m1(AES_HASH_1R_XKEY00, 8);
const vuint32m1_t xkey11 = __riscv_vle32_v_u32m1(AES_HASH_1R_XKEY11, 8);
hash_state02 = softaes_vector_double(hash_state02, xkey00, lutenc_index0, lutenc_index1, lutenc_index2, lutenc_index3, lutEnc0, lutEnc1, lutEnc2, lutEnc3);
hash_state13 = softaes_vector_double(hash_state13, xkey00, lutdec_index0, lutdec_index1, lutdec_index2, lutdec_index3, lutDec0, lutDec1, lutDec2, lutDec3);
hash_state02 = softaes_vector_double(hash_state02, xkey11, lutenc_index0, lutenc_index1, lutenc_index2, lutenc_index3, lutEnc0, lutEnc1, lutEnc2, lutEnc3);
hash_state13 = softaes_vector_double(hash_state13, xkey11, lutdec_index0, lutdec_index1, lutdec_index2, lutdec_index3, lutDec0, lutDec1, lutDec2, lutDec3);
//output hash
__riscv_vsuxei32_v_u32m1((uint32_t*)hash + 0, stride, hash_state02, 8);
__riscv_vsuxei32_v_u32m1((uint32_t*)hash + 4, stride, hash_state13, 8);
}
template void hashAndFillAes1Rx4_RVV<0,2>(void* scratchpad, size_t scratchpadSize, void* hash, void* fill_state);
template void hashAndFillAes1Rx4_RVV<1,1>(void* scratchpad, size_t scratchpadSize, void* hash, void* fill_state);
template void hashAndFillAes1Rx4_RVV<2,1>(void* scratchpad, size_t scratchpadSize, void* hash, void* fill_state);
template void hashAndFillAes1Rx4_RVV<2,2>(void* scratchpad, size_t scratchpadSize, void* hash, void* fill_state);
template void hashAndFillAes1Rx4_RVV<2,4>(void* scratchpad, size_t scratchpadSize, void* hash, void* fill_state);


@@ -0,0 +1,42 @@
/*
Copyright (c) 2025 SChernykh <https://github.com/SChernykh>
Copyright (c) 2025 XMRig <support@xmrig.com>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the
names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#pragma once
template<int softAes>
void hashAes1Rx4_RVV(const void *input, size_t inputSize, void *hash);
template<int softAes>
void fillAes1Rx4_RVV(void *state, size_t outputSize, void *buffer);
template<int softAes>
void fillAes4Rx4_RVV(void *state, size_t outputSize, void *buffer);
template<int softAes, int unroll>
void hashAndFillAes1Rx4_RVV(void *scratchpad, size_t scratchpadSize, void *hash, void* fill_state);


@@ -0,0 +1,199 @@
/*
Copyright (c) 2025 SChernykh <https://github.com/SChernykh>
Copyright (c) 2025 XMRig <support@xmrig.com>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the
names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include "crypto/randomx/aes_hash.hpp"
#include "crypto/randomx/randomx.h"
#include "crypto/rx/Profiler.h"
#include <riscv_vector.h>
static FORCE_INLINE vuint32m1_t aesenc_zvkned(vuint32m1_t a, vuint32m1_t b) { return __riscv_vaesem_vv_u32m1(a, b, 8); }
static FORCE_INLINE vuint32m1_t aesdec_zvkned(vuint32m1_t a, vuint32m1_t b, vuint32m1_t zero) { return __riscv_vxor_vv_u32m1(__riscv_vaesdm_vv_u32m1(a, zero, 8), b, 8); }
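// vaesem performs a full AES encryption middle round (AddRoundKey included),
// matching _mm_aesenc_si128 directly. vaesdm appears to add the round key
// before InvMixColumns, so the decryption helper runs vaesdm with an
// all-zero key and XORs the real key afterwards to reproduce the
// _mm_aesdec_si128 ordering that RandomX expects.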
static constexpr uint32_t AES_HASH_1R_STATE02[8] = { 0x92b52c0d, 0x9fa856de, 0xcc82db47, 0xd7983aad, 0x6a770017, 0xae62c7d0, 0x5079506b, 0xe8a07ce4 };
static constexpr uint32_t AES_HASH_1R_STATE13[8] = { 0x338d996e, 0x15c7b798, 0xf59e125a, 0xace78057, 0x630a240c, 0x07ad828d, 0x79a10005, 0x7e994948 };
static constexpr uint32_t AES_GEN_1R_KEY02[8] = { 0x6daca553, 0x62716609, 0xdbb5552b, 0xb4f44917, 0x3f1262f1, 0x9f947ec6, 0xf4c0794f, 0x3e20e345 };
static constexpr uint32_t AES_GEN_1R_KEY13[8] = { 0x6d7caf07, 0x846a710d, 0x1725d378, 0x0da1dc4e, 0x6aef8135, 0xb1ba317c, 0x16314c88, 0x49169154 };
static constexpr uint32_t AES_HASH_1R_XKEY00[8] = { 0xf6fa8389, 0x8b24949f, 0x90dc56bf, 0x06890201, 0xf6fa8389, 0x8b24949f, 0x90dc56bf, 0x06890201 };
static constexpr uint32_t AES_HASH_1R_XKEY11[8] = { 0x61b263d1, 0x51f4e03c, 0xee1043c6, 0xed18f99b, 0x61b263d1, 0x51f4e03c, 0xee1043c6, 0xed18f99b };
static constexpr uint32_t AES_HASH_STRIDE_X2[8] = { 0, 4, 8, 12, 32, 36, 40, 44 };
static constexpr uint32_t AES_HASH_STRIDE_X4[8] = { 0, 4, 8, 12, 64, 68, 72, 76 };
void hashAes1Rx4_zvkned(const void *input, size_t inputSize, void *hash)
{
const uint8_t* inptr = (const uint8_t*)input;
const uint8_t* inputEnd = inptr + inputSize;
//initial state
vuint32m1_t state02 = __riscv_vle32_v_u32m1(AES_HASH_1R_STATE02, 8);
vuint32m1_t state13 = __riscv_vle32_v_u32m1(AES_HASH_1R_STATE13, 8);
const vuint32m1_t stride = __riscv_vle32_v_u32m1(AES_HASH_STRIDE_X2, 8);
const vuint32m1_t zero = {};
//process 64 bytes at a time in 4 lanes
while (inptr < inputEnd) {
state02 = aesenc_zvkned(state02, __riscv_vluxei32_v_u32m1((uint32_t*)inptr + 0, stride, 8));
state13 = aesdec_zvkned(state13, __riscv_vluxei32_v_u32m1((uint32_t*)inptr + 4, stride, 8), zero);
inptr += 64;
}
//two extra rounds to achieve full diffusion
const vuint32m1_t xkey00 = __riscv_vle32_v_u32m1(AES_HASH_1R_XKEY00, 8);
const vuint32m1_t xkey11 = __riscv_vle32_v_u32m1(AES_HASH_1R_XKEY11, 8);
state02 = aesenc_zvkned(state02, xkey00);
state13 = aesdec_zvkned(state13, xkey00, zero);
state02 = aesenc_zvkned(state02, xkey11);
state13 = aesdec_zvkned(state13, xkey11, zero);
//output hash
__riscv_vsuxei32_v_u32m1((uint32_t*)hash + 0, stride, state02, 8);
__riscv_vsuxei32_v_u32m1((uint32_t*)hash + 4, stride, state13, 8);
}
void fillAes1Rx4_zvkned(void *state, size_t outputSize, void *buffer)
{
const uint8_t* outptr = (uint8_t*)buffer;
const uint8_t* outputEnd = outptr + outputSize;
const vuint32m1_t key02 = __riscv_vle32_v_u32m1(AES_GEN_1R_KEY02, 8);
const vuint32m1_t key13 = __riscv_vle32_v_u32m1(AES_GEN_1R_KEY13, 8);
const vuint32m1_t stride = __riscv_vle32_v_u32m1(AES_HASH_STRIDE_X2, 8);
const vuint32m1_t zero = {};
vuint32m1_t state02 = __riscv_vluxei32_v_u32m1((uint32_t*)state + 0, stride, 8);
vuint32m1_t state13 = __riscv_vluxei32_v_u32m1((uint32_t*)state + 4, stride, 8);
while (outptr < outputEnd) {
state02 = aesdec_zvkned(state02, key02, zero);
state13 = aesenc_zvkned(state13, key13);
__riscv_vsuxei32_v_u32m1((uint32_t*)outptr + 0, stride, state02, 8);
__riscv_vsuxei32_v_u32m1((uint32_t*)outptr + 4, stride, state13, 8);
outptr += 64;
}
__riscv_vsuxei32_v_u32m1((uint32_t*)state + 0, stride, state02, 8);
__riscv_vsuxei32_v_u32m1((uint32_t*)state + 4, stride, state13, 8);
}
void fillAes4Rx4_zvkned(void *state, size_t outputSize, void *buffer)
{
const uint8_t* outptr = (uint8_t*)buffer;
const uint8_t* outputEnd = outptr + outputSize;
const vuint32m1_t stride4 = __riscv_vle32_v_u32m1(AES_HASH_STRIDE_X4, 8);
const vuint32m1_t key04 = __riscv_vluxei32_v_u32m1((uint32_t*)(RandomX_CurrentConfig.fillAes4Rx4_Key + 0), stride4, 8);
const vuint32m1_t key15 = __riscv_vluxei32_v_u32m1((uint32_t*)(RandomX_CurrentConfig.fillAes4Rx4_Key + 1), stride4, 8);
const vuint32m1_t key26 = __riscv_vluxei32_v_u32m1((uint32_t*)(RandomX_CurrentConfig.fillAes4Rx4_Key + 2), stride4, 8);
const vuint32m1_t key37 = __riscv_vluxei32_v_u32m1((uint32_t*)(RandomX_CurrentConfig.fillAes4Rx4_Key + 3), stride4, 8);
const vuint32m1_t stride = __riscv_vle32_v_u32m1(AES_HASH_STRIDE_X2, 8);
const vuint32m1_t zero = {};
vuint32m1_t state02 = __riscv_vluxei32_v_u32m1((uint32_t*)state + 0, stride, 8);
vuint32m1_t state13 = __riscv_vluxei32_v_u32m1((uint32_t*)state + 4, stride, 8);
while (outptr < outputEnd) {
state02 = aesdec_zvkned(state02, key04, zero);
state13 = aesenc_zvkned(state13, key04);
state02 = aesdec_zvkned(state02, key15, zero);
state13 = aesenc_zvkned(state13, key15);
state02 = aesdec_zvkned(state02, key26, zero);
state13 = aesenc_zvkned(state13, key26);
state02 = aesdec_zvkned(state02, key37, zero);
state13 = aesenc_zvkned(state13, key37);
__riscv_vsuxei32_v_u32m1((uint32_t*)outptr + 0, stride, state02, 8);
__riscv_vsuxei32_v_u32m1((uint32_t*)outptr + 4, stride, state13, 8);
outptr += 64;
}
}
void hashAndFillAes1Rx4_zvkned(void *scratchpad, size_t scratchpadSize, void *hash, void* fill_state)
{
uint8_t* scratchpadPtr = (uint8_t*)scratchpad;
const uint8_t* scratchpadEnd = scratchpadPtr + scratchpadSize;
vuint32m1_t hash_state02 = __riscv_vle32_v_u32m1(AES_HASH_1R_STATE02, 8);
vuint32m1_t hash_state13 = __riscv_vle32_v_u32m1(AES_HASH_1R_STATE13, 8);
const vuint32m1_t key02 = __riscv_vle32_v_u32m1(AES_GEN_1R_KEY02, 8);
const vuint32m1_t key13 = __riscv_vle32_v_u32m1(AES_GEN_1R_KEY13, 8);
const vuint32m1_t stride = __riscv_vle32_v_u32m1(AES_HASH_STRIDE_X2, 8);
const vuint32m1_t zero = {};
vuint32m1_t fill_state02 = __riscv_vluxei32_v_u32m1((uint32_t*)fill_state + 0, stride, 8);
vuint32m1_t fill_state13 = __riscv_vluxei32_v_u32m1((uint32_t*)fill_state + 4, stride, 8);
//process 64 bytes at a time in 4 lanes
while (scratchpadPtr < scratchpadEnd) {
hash_state02 = aesenc_zvkned(hash_state02, __riscv_vluxei32_v_u32m1((uint32_t*)scratchpadPtr + 0, stride, 8));
hash_state13 = aesdec_zvkned(hash_state13, __riscv_vluxei32_v_u32m1((uint32_t*)scratchpadPtr + 4, stride, 8), zero);
fill_state02 = aesdec_zvkned(fill_state02, key02, zero);
fill_state13 = aesenc_zvkned(fill_state13, key13);
__riscv_vsuxei32_v_u32m1((uint32_t*)scratchpadPtr + 0, stride, fill_state02, 8);
__riscv_vsuxei32_v_u32m1((uint32_t*)scratchpadPtr + 4, stride, fill_state13, 8);
scratchpadPtr += 64;
}
__riscv_vsuxei32_v_u32m1((uint32_t*)fill_state + 0, stride, fill_state02, 8);
__riscv_vsuxei32_v_u32m1((uint32_t*)fill_state + 4, stride, fill_state13, 8);
//two extra rounds to achieve full diffusion
const vuint32m1_t xkey00 = __riscv_vle32_v_u32m1(AES_HASH_1R_XKEY00, 8);
const vuint32m1_t xkey11 = __riscv_vle32_v_u32m1(AES_HASH_1R_XKEY11, 8);
hash_state02 = aesenc_zvkned(hash_state02, xkey00);
hash_state13 = aesdec_zvkned(hash_state13, xkey00, zero);
hash_state02 = aesenc_zvkned(hash_state02, xkey11);
hash_state13 = aesdec_zvkned(hash_state13, xkey11, zero);
//output hash
__riscv_vsuxei32_v_u32m1((uint32_t*)hash + 0, stride, hash_state02, 8);
__riscv_vsuxei32_v_u32m1((uint32_t*)hash + 4, stride, hash_state13, 8);
}


@@ -0,0 +1,35 @@
/*
Copyright (c) 2025 SChernykh <https://github.com/SChernykh>
Copyright (c) 2025 XMRig <support@xmrig.com>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the
names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#pragma once
void hashAes1Rx4_zvkned(const void *input, size_t inputSize, void *hash);
void fillAes1Rx4_zvkned(void *state, size_t outputSize, void *buffer);
void fillAes4Rx4_zvkned(void *state, size_t outputSize, void *buffer);
void hashAndFillAes1Rx4_zvkned(void *scratchpad, size_t scratchpadSize, void *hash, void* fill_state);


@@ -0,0 +1,148 @@
/*
Copyright (c) 2018-2019, tevador <tevador@gmail.com>
Copyright (c) 2026 XMRig <support@xmrig.com>
Copyright (c) 2026 SChernykh <https://github.com/SChernykh>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the
names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <cstddef>
#include <cstdint>
#include <immintrin.h>
#define REVERSE_4(A, B, C, D) D, C, B, A
alignas(64) static const uint32_t AES_HASH_1R_STATE[] = {
REVERSE_4(0xd7983aad, 0xcc82db47, 0x9fa856de, 0x92b52c0d),
REVERSE_4(0xace78057, 0xf59e125a, 0x15c7b798, 0x338d996e),
REVERSE_4(0xe8a07ce4, 0x5079506b, 0xae62c7d0, 0x6a770017),
REVERSE_4(0x7e994948, 0x79a10005, 0x07ad828d, 0x630a240c)
};
alignas(64) static const uint32_t AES_GEN_1R_KEY[] = {
REVERSE_4(0xb4f44917, 0xdbb5552b, 0x62716609, 0x6daca553),
REVERSE_4(0x0da1dc4e, 0x1725d378, 0x846a710d, 0x6d7caf07),
REVERSE_4(0x3e20e345, 0xf4c0794f, 0x9f947ec6, 0x3f1262f1),
REVERSE_4(0x49169154, 0x16314c88, 0xb1ba317c, 0x6aef8135)
};
alignas(64) static const uint32_t AES_HASH_1R_XKEY0[] = {
REVERSE_4(0x06890201, 0x90dc56bf, 0x8b24949f, 0xf6fa8389),
REVERSE_4(0x06890201, 0x90dc56bf, 0x8b24949f, 0xf6fa8389),
REVERSE_4(0x06890201, 0x90dc56bf, 0x8b24949f, 0xf6fa8389),
REVERSE_4(0x06890201, 0x90dc56bf, 0x8b24949f, 0xf6fa8389)
};
alignas(64) static const uint32_t AES_HASH_1R_XKEY1[] = {
REVERSE_4(0xed18f99b, 0xee1043c6, 0x51f4e03c, 0x61b263d1),
REVERSE_4(0xed18f99b, 0xee1043c6, 0x51f4e03c, 0x61b263d1),
REVERSE_4(0xed18f99b, 0xee1043c6, 0x51f4e03c, 0x61b263d1),
REVERSE_4(0xed18f99b, 0xee1043c6, 0x51f4e03c, 0x61b263d1)
};
void hashAndFillAes1Rx4_VAES512(void *scratchpad, size_t scratchpadSize, void *hash, void* fill_state)
{
uint8_t* scratchpadPtr = (uint8_t*)scratchpad;
const uint8_t* scratchpadEnd = scratchpadPtr + scratchpadSize;
const __m512i fill_key = _mm512_load_si512(AES_GEN_1R_KEY);
const __m512i initial_hash_state = _mm512_load_si512(AES_HASH_1R_STATE);
const __m512i initial_fill_state = _mm512_load_si512(fill_state);
constexpr uint8_t mask = 0b11001100;
// enc_data[0] = hash_state[0]
// enc_data[1] = fill_state[1]
// enc_data[2] = hash_state[2]
// enc_data[3] = fill_state[3]
__m512i enc_data = _mm512_mask_blend_epi64(mask, initial_hash_state, initial_fill_state);
// dec_data[0] = fill_state[0]
// dec_data[1] = hash_state[1]
// dec_data[2] = fill_state[2]
// dec_data[3] = hash_state[3]
__m512i dec_data = _mm512_mask_blend_epi64(mask, initial_fill_state, initial_hash_state);
constexpr int PREFETCH_DISTANCE = 7168;
const uint8_t* prefetchPtr = scratchpadPtr + PREFETCH_DISTANCE;
scratchpadEnd -= PREFETCH_DISTANCE;
for (const uint8_t* p = scratchpadPtr; p < prefetchPtr; p += 256) {
_mm_prefetch((const char*)(p + 0), _MM_HINT_T0);
_mm_prefetch((const char*)(p + 64), _MM_HINT_T0);
_mm_prefetch((const char*)(p + 128), _MM_HINT_T0);
_mm_prefetch((const char*)(p + 192), _MM_HINT_T0);
}
for (int i = 0; i < 2; ++i) {
while (scratchpadPtr < scratchpadEnd) {
const __m512i scratchpad_data = _mm512_load_si512(scratchpadPtr);
// enc_key[0] = scratchpad_data[0]
// enc_key[1] = fill_key[1]
// enc_key[2] = scratchpad_data[2]
// enc_key[3] = fill_key[3]
enc_data = _mm512_aesenc_epi128(enc_data, _mm512_mask_blend_epi64(mask, scratchpad_data, fill_key));
// dec_key[0] = fill_key[0]
// dec_key[1] = scratchpad_data[1]
// dec_key[2] = fill_key[2]
// dec_key[3] = scratchpad_data[3]
dec_data = _mm512_aesdec_epi128(dec_data, _mm512_mask_blend_epi64(mask, fill_key, scratchpad_data));
// fill_state[0] = dec_data[0]
// fill_state[1] = enc_data[1]
// fill_state[2] = dec_data[2]
// fill_state[3] = enc_data[3]
_mm512_store_si512(scratchpadPtr, _mm512_mask_blend_epi64(mask, dec_data, enc_data));
_mm_prefetch((const char*)prefetchPtr, _MM_HINT_T0);
scratchpadPtr += 64;
prefetchPtr += 64;
}
prefetchPtr = (const uint8_t*) scratchpad;
scratchpadEnd += PREFETCH_DISTANCE;
}
_mm512_store_si512(fill_state, _mm512_mask_blend_epi64(mask, dec_data, enc_data));
// Two extra rounds to achieve full diffusion
const __m512i xkey0 = _mm512_load_si512(AES_HASH_1R_XKEY0);
const __m512i xkey1 = _mm512_load_si512(AES_HASH_1R_XKEY1);
enc_data = _mm512_aesenc_epi128(enc_data, xkey0);
dec_data = _mm512_aesdec_epi128(dec_data, xkey0);
enc_data = _mm512_aesenc_epi128(enc_data, xkey1);
dec_data = _mm512_aesdec_epi128(dec_data, xkey1);
// Output hash
_mm512_store_si512(hash, _mm512_mask_blend_epi64(mask, enc_data, dec_data));
// Just in case
_mm256_zeroupper();
}
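
The 0b11001100 qword mask is what lets a single 512-bit register carry two interleaved 128-bit states. As a minimal scalar sketch (an illustration, not part of the patch), _mm512_mask_blend_epi64 behaves as below: bit i of the mask selects qword i from the second operand, so mask 0b11001100 takes qwords 2-3 and 6-7, i.e. 128-bit lanes 1 and 3, exactly as the enc_data/dec_data comments describe.

#include <cstdint>

// Scalar model of _mm512_mask_blend_epi64(mask, a, b): qword i of the result
// comes from b when bit i of mask is set, otherwise from a.
static void blend_epi64_model(uint8_t mask, const uint64_t a[8], const uint64_t b[8], uint64_t out[8])
{
    for (int i = 0; i < 8; ++i) {
        out[i] = (mask & (1u << i)) ? b[i] : a[i];
    }
}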

View File

@@ -111,6 +111,10 @@ namespace randomx {
#define RANDOMX_HAVE_COMPILER 1
class JitCompilerA64;
using JitCompiler = JitCompilerA64;
#elif defined(__riscv) && defined(__riscv_xlen) && (__riscv_xlen == 64)
#define RANDOMX_HAVE_COMPILER 1
class JitCompilerRV64;
using JitCompiler = JitCompilerRV64;
#else
#define RANDOMX_HAVE_COMPILER 0
class JitCompilerFallback;

View File

@@ -32,6 +32,8 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#include "crypto/randomx/jit_compiler_x86.hpp"
#elif defined(__aarch64__)
#include "crypto/randomx/jit_compiler_a64.hpp"
#elif defined(__riscv) && defined(__riscv_xlen) && (__riscv_xlen == 64)
#include "crypto/randomx/jit_compiler_rv64.hpp"
#else
#include "crypto/randomx/jit_compiler_fallback.hpp"
#endif

View File

@@ -67,7 +67,6 @@ constexpr uint32_t LDR_LITERAL = 0x58000000;
constexpr uint32_t ROR = 0x9AC02C00;
constexpr uint32_t ROR_IMM = 0x93C00000;
constexpr uint32_t MOV_REG = 0xAA0003E0;
constexpr uint32_t MOV_VREG_EL = 0x6E080400;
constexpr uint32_t FADD = 0x4E60D400;
constexpr uint32_t FSUB = 0x4EE0D400;
constexpr uint32_t FEOR = 0x6E201C00;
@@ -102,7 +101,7 @@ static size_t CalcDatasetItemSize()
((uint8_t*)randomx_calc_dataset_item_aarch64_end - (uint8_t*)randomx_calc_dataset_item_aarch64_store_result);
}
constexpr uint32_t IntRegMap[8] = { 4, 5, 6, 7, 12, 13, 14, 15 };
constexpr uint8_t IntRegMap[8] = { 4, 5, 6, 7, 12, 13, 14, 15 };
JitCompilerA64::JitCompilerA64(bool hugePagesEnable, bool) :
hugePages(hugePagesJIT && hugePagesEnable),
@@ -128,11 +127,12 @@ void JitCompilerA64::generateProgram(Program& program, ProgramConfiguration& con
uint32_t codePos = MainLoopBegin + 4;
uint32_t mask = ((RandomX_CurrentConfig.Log2_ScratchpadL3 - 7) << 10);
// and w16, w10, ScratchpadL3Mask64
emit32(0x121A0000 | 16 | (10 << 5) | ((RandomX_CurrentConfig.Log2_ScratchpadL3 - 7) << 10), code, codePos);
emit32(0x121A0000 | 16 | (10 << 5) | mask, code, codePos);
// and w17, w20, ScratchpadL3Mask64
emit32(0x121A0000 | 17 | (20 << 5) | ((RandomX_CurrentConfig.Log2_ScratchpadL3 - 7) << 10), code, codePos);
emit32(0x121A0000 | 17 | (20 << 5) | mask, code, codePos);
codePos = PrologueSize;
literalPos = ImulRcpLiteralsEnd;
@@ -155,13 +155,14 @@ void JitCompilerA64::generateProgram(Program& program, ProgramConfiguration& con
const uint32_t offset = (((uint8_t*)randomx_program_aarch64_vm_instructions_end) - ((uint8_t*)randomx_program_aarch64)) - codePos;
emit32(ARMV8A::B | (offset / 4), code, codePos);
// and w20, w20, CacheLineAlignMask
mask = ((RandomX_CurrentConfig.Log2_DatasetBaseSize - 7) << 10);
// and w20, w9, CacheLineAlignMask
codePos = (((uint8_t*)randomx_program_aarch64_cacheline_align_mask1) - ((uint8_t*)randomx_program_aarch64));
emit32(0x121A0000 | 20 | (20 << 5) | ((RandomX_CurrentConfig.Log2_DatasetBaseSize - 7) << 10), code, codePos);
emit32(0x121A0000 | 20 | (9 << 5) | mask, code, codePos);
// and w10, w10, CacheLineAlignMask
codePos = (((uint8_t*)randomx_program_aarch64_cacheline_align_mask2) - ((uint8_t*)randomx_program_aarch64));
emit32(0x121A0000 | 10 | (10 << 5) | ((RandomX_CurrentConfig.Log2_DatasetBaseSize - 7) << 10), code, codePos);
emit32(0x121A0000 | 10 | (10 << 5) | mask, code, codePos);
// Update spMix1
// eor x10, config.readReg0, config.readReg1
@@ -497,9 +498,12 @@ void JitCompilerA64::emitMemLoad(uint32_t dst, uint32_t src, Instruction& instr,
if (src != dst)
{
imm &= instr.getModMem() ? (RandomX_CurrentConfig.ScratchpadL1_Size - 1) : (RandomX_CurrentConfig.ScratchpadL2_Size - 1);
emitAddImmediate(tmp_reg, src, imm, code, k);
uint32_t t = 0x927d0000 | tmp_reg | (tmp_reg << 5);
if (imm)
emitAddImmediate(tmp_reg, src, imm, code, k);
else
t = 0x927d0000 | tmp_reg | (src << 5);
constexpr uint32_t t = 0x927d0000 | tmp_reg | (tmp_reg << 5);
const uint32_t andInstrL1 = t | ((RandomX_CurrentConfig.Log2_ScratchpadL1 - 4) << 10);
const uint32_t andInstrL2 = t | ((RandomX_CurrentConfig.Log2_ScratchpadL2 - 4) << 10);
@@ -511,10 +515,18 @@ void JitCompilerA64::emitMemLoad(uint32_t dst, uint32_t src, Instruction& instr,
else
{
imm = (imm & ScratchpadL3Mask) >> 3;
emitMovImmediate(tmp_reg, imm, code, k);
if (imm)
{
emitMovImmediate(tmp_reg, imm, code, k);
// ldr tmp_reg, [x2, tmp_reg, lsl 3]
emit32(0xf8607840 | tmp_reg | (tmp_reg << 16), code, k);
// ldr tmp_reg, [x2, tmp_reg, lsl 3]
emit32(0xf8607840 | tmp_reg | (tmp_reg << 16), code, k);
}
else
{
// ldr tmp_reg, [x2]
emit32(0xf9400040 | tmp_reg, code, k);
}
}
codePos = k;
@@ -529,25 +541,22 @@ void JitCompilerA64::emitMemLoadFP(uint32_t src, Instruction& instr, uint8_t* co
constexpr uint32_t tmp_reg = 19;
imm &= instr.getModMem() ? (RandomX_CurrentConfig.ScratchpadL1_Size - 1) : (RandomX_CurrentConfig.ScratchpadL2_Size - 1);
emitAddImmediate(tmp_reg, src, imm, code, k);
uint32_t t = 0x927d0000 | tmp_reg | (tmp_reg << 5);
if (imm)
emitAddImmediate(tmp_reg, src, imm, code, k);
else
t = 0x927d0000 | tmp_reg | (src << 5);
constexpr uint32_t t = 0x927d0000 | tmp_reg | (tmp_reg << 5);
const uint32_t andInstrL1 = t | ((RandomX_CurrentConfig.Log2_ScratchpadL1 - 4) << 10);
const uint32_t andInstrL2 = t | ((RandomX_CurrentConfig.Log2_ScratchpadL2 - 4) << 10);
emit32(instr.getModMem() ? andInstrL1 : andInstrL2, code, k);
// add tmp_reg, x2, tmp_reg
emit32(ARMV8A::ADD | tmp_reg | (2 << 5) | (tmp_reg << 16), code, k);
// ldr tmp_reg_fp, [x2, tmp_reg]
emit32(0x3ce06800 | tmp_reg_fp | (2 << 5) | (tmp_reg << 16), code, k);
// ldpsw tmp_reg, tmp_reg + 1, [tmp_reg]
emit32(0x69400000 | tmp_reg | (tmp_reg << 5) | ((tmp_reg + 1) << 10), code, k);
// ins tmp_reg_fp.d[0], tmp_reg
emit32(0x4E081C00 | tmp_reg_fp | (tmp_reg << 5), code, k);
// ins tmp_reg_fp.d[1], tmp_reg + 1
emit32(0x4E181C00 | tmp_reg_fp | ((tmp_reg + 1) << 5), code, k);
// sxtl.2d tmp_reg_fp, tmp_reg_fp
emit32(0x0f20a400 | tmp_reg_fp | (tmp_reg_fp << 5), code, k);
// scvtf tmp_reg_fp.2d, tmp_reg_fp.2d
emit32(0x4E61D800 | tmp_reg_fp | (tmp_reg_fp << 5), code, k);
@@ -835,7 +844,8 @@ void JitCompilerA64::h_IROR_R(Instruction& instr, uint32_t& codePos)
else
{
// ror dst, dst, imm
emit32(ARMV8A::ROR_IMM | dst | (dst << 5) | ((instr.getImm32() & 63) << 10) | (dst << 16), code, codePos);
if ((instr.getImm32() & 63))
emit32(ARMV8A::ROR_IMM | dst | (dst << 5) | ((instr.getImm32() & 63) << 10) | (dst << 16), code, codePos);
}
reg_changed_offset[instr.dst] = codePos;
@@ -861,7 +871,8 @@ void JitCompilerA64::h_IROL_R(Instruction& instr, uint32_t& codePos)
else
{
// ror dst, dst, imm
emit32(ARMV8A::ROR_IMM | dst | (dst << 5) | ((-instr.getImm32() & 63) << 10) | (dst << 16), code, k);
if ((instr.getImm32() & 63))
emit32(ARMV8A::ROR_IMM | dst | (dst << 5) | ((-instr.getImm32() & 63) << 10) | (dst << 16), code, k);
}
reg_changed_offset[instr.dst] = k;
@@ -894,13 +905,8 @@ void JitCompilerA64::h_FSWAP_R(Instruction& instr, uint32_t& codePos)
const uint32_t dst = instr.dst + 16;
constexpr uint32_t tmp_reg_fp = 28;
constexpr uint32_t src_index1 = 1 << 14;
constexpr uint32_t dst_index1 = 1 << 20;
emit32(ARMV8A::MOV_VREG_EL | tmp_reg_fp | (dst << 5) | src_index1, code, k);
emit32(ARMV8A::MOV_VREG_EL | dst | (dst << 5) | dst_index1, code, k);
emit32(ARMV8A::MOV_VREG_EL | dst | (tmp_reg_fp << 5), code, k);
// ext dst.16b, dst.16b, dst.16b, #0x8
emit32(0x6e004000 | dst | (dst << 5) | (dst << 16), code, k);
codePos = k;
}
@@ -1029,11 +1035,19 @@ void JitCompilerA64::h_CFROUND(Instruction& instr, uint32_t& codePos)
constexpr uint32_t tmp_reg = 20;
constexpr uint32_t fpcr_tmp_reg = 8;
// ror tmp_reg, src, imm
emit32(ARMV8A::ROR_IMM | tmp_reg | (src << 5) | ((instr.getImm32() & 63) << 10) | (src << 16), code, k);
if (instr.getImm32() & 63)
{
// ror tmp_reg, src, imm
emit32(ARMV8A::ROR_IMM | tmp_reg | (src << 5) | ((instr.getImm32() & 63) << 10) | (src << 16), code, k);
// bfi fpcr_tmp_reg, tmp_reg, 40, 2
emit32(0xB3580400 | fpcr_tmp_reg | (tmp_reg << 5), code, k);
// bfi fpcr_tmp_reg, tmp_reg, 40, 2
emit32(0xB3580400 | fpcr_tmp_reg | (tmp_reg << 5), code, k);
}
else // no rotation
{
// bfi fpcr_tmp_reg, src, 40, 2
emit32(0xB3580400 | fpcr_tmp_reg | (src << 5), code, k);
}
// rbit tmp_reg, fpcr_tmp_reg
emit32(0xDAC00000 | tmp_reg | (fpcr_tmp_reg << 5), code, k);
@@ -1059,9 +1073,12 @@ void JitCompilerA64::h_ISTORE(Instruction& instr, uint32_t& codePos)
else
imm &= RandomX_CurrentConfig.ScratchpadL3_Size - 1;
emitAddImmediate(tmp_reg, dst, imm, code, k);
uint32_t t = 0x927d0000 | tmp_reg | (tmp_reg << 5);
if (imm)
emitAddImmediate(tmp_reg, dst, imm, code, k);
else
t = 0x927d0000 | tmp_reg | (dst << 5);
constexpr uint32_t t = 0x927d0000 | tmp_reg | (tmp_reg << 5);
const uint32_t andInstrL1 = t | ((RandomX_CurrentConfig.Log2_ScratchpadL1 - 4) << 10);
const uint32_t andInstrL2 = t | ((RandomX_CurrentConfig.Log2_ScratchpadL2 - 4) << 10);
const uint32_t andInstrL3 = t | ((RandomX_CurrentConfig.Log2_ScratchpadL3 - 4) << 10);

View File

@@ -100,9 +100,9 @@
# v26 -> "a2"
# v27 -> "a3"
# v28 -> temporary
# v29 -> E 'and' mask = 0x00ffffffffffffff00ffffffffffffff
# v30 -> E 'or' mask = 0x3*00000000******3*00000000******
# v31 -> scale mask = 0x81f000000000000081f0000000000000
# v29 -> E 'and' mask = 0x00ffffffffffffff'00ffffffffffffff
# v30 -> E 'or' mask = 0x3*00000000******'3*00000000******
# v31 -> scale mask = 0x80f0000000000000'80f0000000000000
.balign 4
DECL(randomx_program_aarch64):
@@ -142,17 +142,14 @@ DECL(randomx_program_aarch64):
ldp q26, q27, [x0, 224]
# Load E 'and' mask
mov x16, 0x00FFFFFFFFFFFFFF
ins v29.d[0], x16
ins v29.d[1], x16
movi v29.2d, #0x00FFFFFFFFFFFFFF
# Load E 'or' mask (stored in reg.f[0])
ldr q30, [x0, 64]
# Load scale mask
mov x16, 0x80f0000000000000
ins v31.d[0], x16
ins v31.d[1], x16
dup v31.2d, x16
# Read fpcr
mrs x8, fpcr
@@ -162,35 +159,22 @@ DECL(randomx_program_aarch64):
str x0, [sp, -16]!
# Read literals
ldr x0, literal_x0
ldr x11, literal_x11
ldr x21, literal_x21
ldr x22, literal_x22
ldr x23, literal_x23
ldr x24, literal_x24
ldr x25, literal_x25
ldr x26, literal_x26
ldr x27, literal_x27
ldr x28, literal_x28
ldr x29, literal_x29
ldr x30, literal_x30
adr x30, literal_v0
ldp q0, q1, [x30]
ldp q2, q3, [x30, 32]
ldp q4, q5, [x30, 64]
ldp q6, q7, [x30, 96]
ldp q8, q9, [x30, 128]
ldp q10, q11, [x30, 160]
ldp q12, q13, [x30, 192]
ldp q14, q15, [x30, 224]
ldr q0, literal_v0
ldr q1, literal_v1
ldr q2, literal_v2
ldr q3, literal_v3
ldr q4, literal_v4
ldr q5, literal_v5
ldr q6, literal_v6
ldr q7, literal_v7
ldr q8, literal_v8
ldr q9, literal_v9
ldr q10, literal_v10
ldr q11, literal_v11
ldr q12, literal_v12
ldr q13, literal_v13
ldr q14, literal_v14
ldr q15, literal_v15
ldp x0, x11, [x30, -96] // literal_x0
ldp x21, x22, [x30, -80] // literal_x21
ldp x23, x24, [x30, -64] // literal_x23
ldp x25, x26, [x30, -48] // literal_x25
ldp x27, x28, [x30, -32] // literal_x27
ldp x29, x30, [x30, -16] // literal_x29
DECL(randomx_program_aarch64_main_loop):
# spAddr0 = spMix1 & ScratchpadL3Mask64;
@@ -221,40 +205,31 @@ DECL(randomx_program_aarch64_main_loop):
eor x15, x15, x19
# Load group F registers (spAddr1)
ldpsw x20, x19, [x17]
ins v16.d[0], x20
ins v16.d[1], x19
ldpsw x20, x19, [x17, 8]
ins v17.d[0], x20
ins v17.d[1], x19
ldpsw x20, x19, [x17, 16]
ins v18.d[0], x20
ins v18.d[1], x19
ldpsw x20, x19, [x17, 24]
ins v19.d[0], x20
ins v19.d[1], x19
ldr q17, [x17]
sxtl v16.2d, v17.2s
scvtf v16.2d, v16.2d
sxtl2 v17.2d, v17.4s
scvtf v17.2d, v17.2d
ldr q19, [x17, 16]
sxtl v18.2d, v19.2s
scvtf v18.2d, v18.2d
sxtl2 v19.2d, v19.4s
scvtf v19.2d, v19.2d
# Load group E registers (spAddr1)
ldpsw x20, x19, [x17, 32]
ins v20.d[0], x20
ins v20.d[1], x19
ldpsw x20, x19, [x17, 40]
ins v21.d[0], x20
ins v21.d[1], x19
ldpsw x20, x19, [x17, 48]
ins v22.d[0], x20
ins v22.d[1], x19
ldpsw x20, x19, [x17, 56]
ins v23.d[0], x20
ins v23.d[1], x19
ldr q21, [x17, 32]
sxtl v20.2d, v21.2s
scvtf v20.2d, v20.2d
sxtl2 v21.2d, v21.4s
scvtf v21.2d, v21.2d
ldr q23, [x17, 48]
sxtl v22.2d, v23.2s
scvtf v22.2d, v22.2d
sxtl2 v23.2d, v23.4s
scvtf v23.2d, v23.2d
and v20.16b, v20.16b, v29.16b
and v21.16b, v21.16b, v29.16b
and v22.16b, v22.16b, v29.16b
@@ -310,10 +285,9 @@ DECL(randomx_program_aarch64_vm_instructions_end):
eor x9, x9, x20
# Calculate dataset pointer for dataset prefetch
mov w20, w9
DECL(randomx_program_aarch64_cacheline_align_mask1):
# Actual mask will be inserted by JIT compiler
and x20, x20, 1
and x20, x9, 1
add x20, x20, x1
# Prefetch dataset data
@@ -491,42 +465,39 @@ DECL(randomx_calc_dataset_item_aarch64):
stp x10, x11, [sp, 80]
stp x12, x13, [sp, 96]
ldr x12, superscalarMul0
adr x7, superscalarMul0
# superscalarMul0, superscalarAdd1
ldp x12, x13, [x7]
mov x8, x0
mov x9, x1
ldp x8, x9, [sp]
mov x10, x2
# rl[0] = (itemNumber + 1) * superscalarMul0;
madd x0, x2, x12, x12
# rl[1] = rl[0] ^ superscalarAdd1;
ldr x12, superscalarAdd1
eor x1, x0, x12
eor x1, x0, x13
# rl[2] = rl[0] ^ superscalarAdd2;
ldr x12, superscalarAdd2
ldp x12, x13, [x7, 16]
eor x2, x0, x12
# rl[3] = rl[0] ^ superscalarAdd3;
ldr x12, superscalarAdd3
eor x3, x0, x12
eor x3, x0, x13
# rl[4] = rl[0] ^ superscalarAdd4;
ldr x12, superscalarAdd4
ldp x12, x13, [x7, 32]
eor x4, x0, x12
# rl[5] = rl[0] ^ superscalarAdd5;
ldr x12, superscalarAdd5
eor x5, x0, x12
eor x5, x0, x13
# rl[6] = rl[0] ^ superscalarAdd6;
ldr x12, superscalarAdd6
ldp x12, x13, [x7, 48]
eor x6, x0, x12
# rl[7] = rl[0] ^ superscalarAdd7;
ldr x12, superscalarAdd7
eor x7, x0, x12
eor x7, x0, x13
b DECL(randomx_calc_dataset_item_aarch64_prefetch)

File diff suppressed because it is too large

View File

@@ -0,0 +1,151 @@
/*
Copyright (c) 2023 tevador <tevador@gmail.com>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the
names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#pragma once
#include <cstdint>
#include <cstring>
#include <vector>
#include "crypto/randomx/common.hpp"
#include "crypto/randomx/jit_compiler_rv64_static.hpp"
namespace randomx {
struct CodeBuffer {
uint8_t* code;
int32_t codePos;
int32_t rcpCount;
void emit(const uint8_t* src, int32_t len) {
memcpy(&code[codePos], src, len);
codePos += len;
}
template<typename T>
void emit(T src) {
memcpy(&code[codePos], &src, sizeof(src));
codePos += sizeof(src);
}
void emitAt(int32_t codePos, const uint8_t* src, int32_t len) {
memcpy(&code[codePos], src, len);
}
template<typename T>
void emitAt(int32_t codePos, T src) {
memcpy(&code[codePos], &src, sizeof(src));
}
};
struct CompilerState : public CodeBuffer {
int32_t instructionOffsets[RANDOMX_PROGRAM_MAX_SIZE];
int registerUsage[RegistersCount];
};
class Program;
struct ProgramConfiguration;
class SuperscalarProgram;
class Instruction;
#define HANDLER_ARGS randomx::CompilerState& state, randomx::Instruction isn, int i
typedef void(*InstructionGeneratorRV64)(HANDLER_ARGS);
class JitCompilerRV64 {
public:
JitCompilerRV64(bool hugePagesEnable, bool optimizedInitDatasetEnable);
~JitCompilerRV64();
void prepare() {}
void generateProgram(Program&, ProgramConfiguration&, uint32_t);
void generateProgramLight(Program&, ProgramConfiguration&, uint32_t);
template<size_t N>
void generateSuperscalarHash(SuperscalarProgram(&programs)[N]);
void generateDatasetInitCode() {}
ProgramFunc* getProgramFunc() {
return (ProgramFunc*)(vectorCode ? entryProgramVector : entryProgram);
}
DatasetInitFunc* getDatasetInitFunc() {
return (DatasetInitFunc*)(vectorCode ? entryDataInitVector : entryDataInit);
}
uint8_t* getCode() {
return state.code;
}
size_t getCodeSize();
void enableWriting() const;
void enableExecution() const;
static InstructionGeneratorRV64 engine[256];
static uint8_t inst_map[256];
private:
CompilerState state;
uint8_t* vectorCode = nullptr;
size_t vectorCodeSize = 0;
void* entryDataInit = nullptr;
void* entryDataInitVector = nullptr;
void* entryProgram = nullptr;
void* entryProgramVector = nullptr;
public:
static void v1_IADD_RS(HANDLER_ARGS);
static void v1_IADD_M(HANDLER_ARGS);
static void v1_ISUB_R(HANDLER_ARGS);
static void v1_ISUB_M(HANDLER_ARGS);
static void v1_IMUL_R(HANDLER_ARGS);
static void v1_IMUL_M(HANDLER_ARGS);
static void v1_IMULH_R(HANDLER_ARGS);
static void v1_IMULH_M(HANDLER_ARGS);
static void v1_ISMULH_R(HANDLER_ARGS);
static void v1_ISMULH_M(HANDLER_ARGS);
static void v1_IMUL_RCP(HANDLER_ARGS);
static void v1_INEG_R(HANDLER_ARGS);
static void v1_IXOR_R(HANDLER_ARGS);
static void v1_IXOR_M(HANDLER_ARGS);
static void v1_IROR_R(HANDLER_ARGS);
static void v1_IROL_R(HANDLER_ARGS);
static void v1_ISWAP_R(HANDLER_ARGS);
static void v1_FSWAP_R(HANDLER_ARGS);
static void v1_FADD_R(HANDLER_ARGS);
static void v1_FADD_M(HANDLER_ARGS);
static void v1_FSUB_R(HANDLER_ARGS);
static void v1_FSUB_M(HANDLER_ARGS);
static void v1_FSCAL_R(HANDLER_ARGS);
static void v1_FMUL_R(HANDLER_ARGS);
static void v1_FDIV_M(HANDLER_ARGS);
static void v1_FSQRT_R(HANDLER_ARGS);
static void v1_CBRANCH(HANDLER_ARGS);
static void v1_CFROUND(HANDLER_ARGS);
static void v1_ISTORE(HANDLER_ARGS);
static void v1_NOP(HANDLER_ARGS);
};
}
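
As a hypothetical sketch (the real generator loop presumably lives in the .cpp, whose diff is suppressed just below), a 256-entry handler table like engine is driven by indexing it with each opcode byte; every handler appends the machine code for its instruction to the shared CompilerState. The name compileAll is invented for this note.

#include "crypto/randomx/jit_compiler_rv64.hpp"
#include "crypto/randomx/program.hpp"

// Illustrative driver loop, not code from this header.
void compileAll(randomx::CompilerState& state, randomx::Program& prog)
{
    for (uint32_t i = 0, n = prog.getSize(); i < n; ++i) {
        randomx::Instruction isn = prog(i);
        randomx::JitCompilerRV64::engine[isn.opcode](state, isn, static_cast<int>(i));
    }
}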

File diff suppressed because it is too large

View File

@@ -0,0 +1,53 @@
/*
Copyright (c) 2023 tevador <tevador@gmail.com>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the
names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#pragma once
extern "C" {
void randomx_riscv64_literals();
void randomx_riscv64_literals_end();
void randomx_riscv64_data_init();
void randomx_riscv64_fix_data_call();
void randomx_riscv64_prologue();
void randomx_riscv64_loop_begin();
void randomx_riscv64_data_read();
void randomx_riscv64_data_read_light();
void randomx_riscv64_fix_loop_call();
void randomx_riscv64_spad_store();
void randomx_riscv64_spad_store_hardaes();
void randomx_riscv64_spad_store_softaes();
void randomx_riscv64_loop_end();
void randomx_riscv64_fix_continue_loop();
void randomx_riscv64_epilogue();
void randomx_riscv64_softaes();
void randomx_riscv64_program_end();
void randomx_riscv64_ssh_init();
void randomx_riscv64_ssh_load();
void randomx_riscv64_ssh_prefetch();
void randomx_riscv64_ssh_end();
}

View File

@@ -0,0 +1,845 @@
/*
Copyright (c) 2018-2020, tevador <tevador@gmail.com>
Copyright (c) 2019-2021, XMRig <https://github.com/xmrig>, <support@xmrig.com>
Copyright (c) 2025, SChernykh <https://github.com/SChernykh>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the
names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include "crypto/randomx/configuration.h"
#include "crypto/randomx/jit_compiler_rv64_vector.h"
#include "crypto/randomx/jit_compiler_rv64_vector_static.h"
#include "crypto/randomx/reciprocal.h"
#include "crypto/randomx/superscalar.hpp"
#include "crypto/randomx/program.hpp"
namespace randomx {
#define ADDR(x) ((uint8_t*) &(x))
#define DIST(x, y) (ADDR(y) - ADDR(x))
void* generateDatasetInitVectorRV64(uint8_t* buf, SuperscalarProgram* programs, size_t num_programs)
{
uint8_t* p = buf + DIST(randomx_riscv64_vector_code_begin, randomx_riscv64_vector_sshash_generated_instructions);
uint8_t* literals = buf + DIST(randomx_riscv64_vector_code_begin, randomx_riscv64_vector_sshash_imul_rcp_literals);
uint8_t* cur_literal = literals;
for (size_t i = 0; i < num_programs; ++i) {
// Step 4
size_t k = DIST(randomx_riscv64_vector_sshash_cache_prefetch, randomx_riscv64_vector_sshash_xor);
memcpy(p, reinterpret_cast<void*>(randomx_riscv64_vector_sshash_cache_prefetch), k);
p += k;
// Step 5
for (uint32_t j = 0; j < programs[i].size; ++j) {
const uint32_t dst = programs[i].programBuffer[j].dst & 7;
const uint32_t src = programs[i].programBuffer[j].src & 7;
const uint32_t modShift = (programs[i].programBuffer[j].mod >> 2) & 3;
const uint32_t imm32 = programs[i].programBuffer[j].imm32;
uint32_t inst;
#define EMIT(data) inst = (data); memcpy(p, &inst, 4); p += 4
switch (static_cast<SuperscalarInstructionType>(programs[i].programBuffer[j].opcode)) {
case SuperscalarInstructionType::ISUB_R:
// 57 00 00 0A vsub.vv v0, v0, v0
EMIT(0x0A000057 | (dst << 7) | (src << 15) | (dst << 20));
break;
case SuperscalarInstructionType::IXOR_R:
// 57 00 00 2E vxor.vv v0, v0, v0
EMIT(0x2E000057 | (dst << 7) | (src << 15) | (dst << 20));
break;
case SuperscalarInstructionType::IADD_RS:
if (modShift == 0) {
// 57 00 00 02 vadd.vv v0, v0, v0
EMIT(0x02000057 | (dst << 7) | (src << 15) | (dst << 20));
}
else {
// 57 39 00 96 vsll.vi v18, v0, 0
// 57 00 09 02 vadd.vv v0, v0, v18
EMIT(0x96003957 | (modShift << 15) | (src << 20));
EMIT(0x02090057 | (dst << 7) | (dst << 20));
}
break;
case SuperscalarInstructionType::IMUL_R:
// 57 20 00 96 vmul.vv v0, v0, v0
EMIT(0x96002057 | (dst << 7) | (src << 15) | (dst << 20));
break;
case SuperscalarInstructionType::IROR_C:
{
#ifdef __riscv_zvkb
// 57 30 00 52 vror.vi v0, v0, 0
EMIT(0x52003057 | (dst << 7) | (dst << 20) | ((imm32 & 31) << 15) | ((imm32 & 32) << 21));
#else // __riscv_zvkb
const uint32_t shift_right = imm32 & 63;
const uint32_t shift_left = 64 - shift_right;
if (shift_right < 32) {
// 57 39 00 A2 vsrl.vi v18, v0, 0
EMIT(0xA2003957 | (shift_right << 15) | (dst << 20));
}
else {
// 93 02 00 00 li x5, 0
// 57 C9 02 A2 vsrl.vx v18, v0, x5
EMIT(0x00000293 | (shift_right << 20));
EMIT(0xA202C957 | (dst << 20));
}
if (shift_left < 32) {
// 57 30 00 96 vsll.vi v0, v0, 0
EMIT(0x96003057 | (dst << 7) | (shift_left << 15) | (dst << 20));
}
else {
// 93 02 00 00 li x5, 0
// 57 C0 02 96 vsll.vx v0, v0, x5
EMIT(0x00000293 | (shift_left << 20));
EMIT(0x9602C057 | (dst << 7) | (dst << 20));
}
// 57 00 20 2B vor.vv v0, v18, v0
EMIT(0x2B200057 | (dst << 7) | (dst << 15));
#endif // __riscv_zvkb
}
break;
case SuperscalarInstructionType::IADD_C7:
case SuperscalarInstructionType::IADD_C8:
case SuperscalarInstructionType::IADD_C9:
// B7 02 00 00 lui x5, 0
// 9B 82 02 00 addiw x5, x5, 0
// 57 C0 02 02 vadd.vx v0, v0, x5
EMIT(0x000002B7 | ((imm32 + ((imm32 & 0x800) << 1)) & 0xFFFFF000));
EMIT(0x0002829B | ((imm32 & 0x00000FFF) << 20));
EMIT(0x0202C057 | (dst << 7) | (dst << 20));
break;
case SuperscalarInstructionType::IXOR_C7:
case SuperscalarInstructionType::IXOR_C8:
case SuperscalarInstructionType::IXOR_C9:
// B7 02 00 00 lui x5, 0
// 9B 82 02 00 addiw x5, x5, 0
// 57 C0 02 2E vxor.vx v0, v0, x5
EMIT(0x000002B7 | ((imm32 + ((imm32 & 0x800) << 1)) & 0xFFFFF000));
EMIT(0x0002829B | ((imm32 & 0x00000FFF) << 20));
EMIT(0x2E02C057 | (dst << 7) | (dst << 20));
break;
case SuperscalarInstructionType::IMULH_R:
// 57 20 00 92 vmulhu.vv v0, v0, v0
EMIT(0x92002057 | (dst << 7) | (src << 15) | (dst << 20));
break;
case SuperscalarInstructionType::ISMULH_R:
// 57 20 00 9E vmulh.vv v0, v0, v0
EMIT(0x9E002057 | (dst << 7) | (src << 15) | (dst << 20));
break;
case SuperscalarInstructionType::IMUL_RCP:
{
uint32_t offset = cur_literal - literals;
if (offset == 2040) {
literals += 2040;
offset = 0;
// 93 87 87 7F addi x15, x15, 2040
EMIT(0x7F878793);
}
const uint64_t r = randomx_reciprocal_fast(imm32);
memcpy(cur_literal, &r, 8);
cur_literal += 8;
// 83 B2 07 00 ld x5, (x15)
// 57 E0 02 96 vmul.vx v0, v0, x5
EMIT(0x0007B283 | (offset << 20));
EMIT(0x9602E057 | (dst << 7) | (dst << 20));
}
break;
default:
UNREACHABLE;
}
}
// Step 6
k = DIST(randomx_riscv64_vector_sshash_xor, randomx_riscv64_vector_sshash_end);
memcpy(p, reinterpret_cast<void*>(randomx_riscv64_vector_sshash_xor), k);
p += k;
// Step 7. Set cacheIndex to the value of the register that has the longest dependency chain in the SuperscalarHash function executed in step 5.
if (i + 1 < num_programs) {
// vmv.v.v v9, v0 + programs[i].getAddressRegister()
const uint32_t t = 0x5E0004D7 + (static_cast<uint32_t>(programs[i].getAddressRegister()) << 15);
memcpy(p, &t, 4);
p += 4;
}
}
// Emit "J randomx_riscv64_vector_sshash_generated_instructions_end" instruction
const uint8_t* e = buf + DIST(randomx_riscv64_vector_code_begin, randomx_riscv64_vector_sshash_generated_instructions_end);
const uint32_t k = e - p;
const uint32_t j = 0x6F | ((k & 0x7FE) << 20) | ((k & 0x800) << 9) | (k & 0xFF000);
memcpy(p, &j, 4);
char* result = (char*)(buf + DIST(randomx_riscv64_vector_code_begin, randomx_riscv64_vector_sshash_dataset_init));
#ifdef __GNUC__
__builtin___clear_cache(result, (char*)(buf + DIST(randomx_riscv64_vector_sshash_begin, randomx_riscv64_vector_sshash_end)));
#endif
return result;
}
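
The final `j` is assembled by scattering the byte offset into the RISC-V J-type immediate fields. For reference, a helper written for this note (not code from the patch) makes the full bit layout explicit, including the sign bit:

// J-type immediate layout: imm[20|10:1|11|19:12] occupies instruction bits 31:12.
static uint32_t encode_jal(uint32_t rd, int32_t offset)
{
    const uint32_t k = static_cast<uint32_t>(offset);
    return 0x6F | (rd << 7)
        | ((k & 0x0007FE) << 20)  // imm[10:1]  -> bits 30:21
        | ((k & 0x000800) << 9)   // imm[11]    -> bit 20
        |  (k & 0x0FF000)         // imm[19:12] -> bits 19:12
        | ((k & 0x100000) << 11); // imm[20]    -> bit 31
}

The emission above omits the imm[20] term because the target is a short forward branch, so the sign bit is known to be zero; the CBRANCH emitter later in this file sets it explicitly (0x8000006F) for its backward jump.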
#define emit16(value) { const uint16_t t = value; memcpy(p, &t, 2); p += 2; }
#define emit32(value) { const uint32_t t = value; memcpy(p, &t, 4); p += 4; }
#define emit64(value) { const uint64_t t = value; memcpy(p, &t, 8); p += 8; }
#define emit_data(arr) { memcpy(p, arr, sizeof(arr)); p += sizeof(arr); }
static void imm_to_x5(uint32_t imm, uint8_t*& p)
{
const uint32_t imm_hi = (imm + ((imm & 0x800) << 1)) & 0xFFFFF000U;
const uint32_t imm_lo = imm & 0x00000FFFU;
if (imm_hi == 0) {
// li x5, imm_lo
emit32(0x00000293 + (imm_lo << 20));
return;
}
if (imm_lo == 0) {
// lui x5, imm_hi
emit32(0x000002B7 + imm_hi);
return;
}
if (imm_hi < (32 << 12)) {
// c.lui x5, imm_hi
emit16(0x6281 + (imm_hi >> 10));
}
else {
// lui x5, imm_hi
emit32(0x000002B7 + imm_hi);
}
// addiw x5, x5, imm_lo
emit32(0x0002829B | (imm_lo << 20));
}
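
The `(imm + ((imm & 0x800) << 1)) & 0xFFFFF000` split compensates for addiw sign-extending its 12-bit immediate: when bit 11 of imm is set, the low half contributes a negative value, so the lui payload has to be bumped by 0x1000. A self-contained round-trip check (illustrative only):

#include <cassert>
#include <cstdint>

// Reproduce the lui + addiw split used by imm_to_x5 above.
static uint32_t materialize(uint32_t imm)
{
    const uint32_t hi = (imm + ((imm & 0x800) << 1)) & 0xFFFFF000U;  // lui payload
    const int32_t  lo = static_cast<int32_t>(imm << 20) >> 20;       // addiw sign-extends 12 bits
    return hi + static_cast<uint32_t>(lo);
}

int main()
{
    assert(materialize(0x00000FFF) == 0x00000FFF); // bit 11 set: hi becomes 0x1000, lo is -1
    assert(materialize(0x80000800) == 0x80000800);
    for (uint64_t v = 0; v <= 0xFFFFFFFFULL; v += 0x10001) {
        assert(materialize(static_cast<uint32_t>(v)) == static_cast<uint32_t>(v));
    }
    return 0;
}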
static void loadFromScratchpad(uint32_t src, uint32_t dst, uint32_t mod, uint32_t imm, uint8_t*& p)
{
if (src == dst) {
imm &= RandomX_CurrentConfig.ScratchpadL3Mask_Calculated;
if (imm <= 2047) {
// ld x5, imm(x12)
emit32(0x00063283 | (imm << 20));
}
else if (imm <= 2047 * 2) {
// addi x5, x12, 2047
emit32(0x7FF60293);
// ld x5, (imm - 2047)(x5)
emit32(0x0002B283 | ((imm - 2047) << 20));
}
else {
// lui x5, imm & 0xFFFFF000U
emit32(0x000002B7 | ((imm + ((imm & 0x800) << 1)) & 0xFFFFF000U));
// c.add x5, x12
emit16(0x92B2);
// ld x5, (imm & 0xFFF)(x5)
emit32(0x0002B283 | ((imm & 0xFFF) << 20));
}
return;
}
uint32_t shift = 32;
uint32_t mask_reg;
if ((mod & 3) == 0) {
shift -= RandomX_CurrentConfig.Log2_ScratchpadL2;
mask_reg = 17;
}
else {
shift -= RandomX_CurrentConfig.Log2_ScratchpadL1;
mask_reg = 16;
}
imm = static_cast<uint32_t>(static_cast<int32_t>(imm << shift) >> shift);
// 0-0x7FF and 0xFFFFF800-0xFFFFFFFF fit into 12 bits (a single addi instruction)
if (imm - 0xFFFFF800U < 0x1000U) {
// addi x5, x20 + src, imm
emit32(0x000A0293 + (src << 15) + (imm << 20));
}
else {
imm_to_x5(imm, p);
// c.add x5, x20 + src
emit16(0x92D2 + (src << 2));
}
// and x5, x5, mask_reg
emit32(0x0002F2B3 + (mask_reg << 20));
// c.add x5, x12
emit16(0x92B2);
// ld x5, 0(x5)
emit32(0x0002B283);
}
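
For the src != dst path this emits, in effect, (r[src] + signed imm) & mask, where x16 and x17 hold the 8-byte-aligned L1/L2 window masks (ScratchpadL1_Size - 8 and ScratchpadL2_Size - 8, per the params table in the assembly file further down). A scalar model of the address math, with a hypothetical helper name:

// Illustration only: mirrors the immediate truncation and masking emitted above.
static uint64_t scratchpad_addr(uint64_t r_src, uint32_t imm, uint32_t log2Size, uint64_t mask /* Size - 8 */)
{
    const uint32_t shift = 32 - log2Size;
    const int32_t  simm  = static_cast<int32_t>(imm << shift) >> shift; // sign-extend the usable bits
    return (r_src + static_cast<uint64_t>(static_cast<int64_t>(simm))) & mask;
}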
void* generateProgramVectorRV64(uint8_t* buf, Program& prog, ProgramConfiguration& pcfg, const uint8_t (&inst_map)[256], void* entryDataInitScalar, uint32_t datasetOffset)
{
uint64_t* params = (uint64_t*)(buf + DIST(randomx_riscv64_vector_code_begin, randomx_riscv64_vector_program_params));
params[0] = RandomX_CurrentConfig.ScratchpadL1_Size - 8;
params[1] = RandomX_CurrentConfig.ScratchpadL2_Size - 8;
params[2] = RandomX_CurrentConfig.ScratchpadL3_Size - 8;
params[3] = RandomX_CurrentConfig.DatasetBaseSize - 64;
params[4] = (1 << RandomX_ConfigurationBase::JumpBits) - 1;
uint64_t* imul_rcp_literals = (uint64_t*)(buf + DIST(randomx_riscv64_vector_code_begin, randomx_riscv64_vector_program_imul_rcp_literals));
uint64_t* cur_literal = imul_rcp_literals;
uint32_t* spaddr_xor = (uint32_t*)(buf + DIST(randomx_riscv64_vector_code_begin, randomx_riscv64_vector_program_main_loop_spaddr_xor));
uint32_t* spaddr_xor2 = (uint32_t*)(buf + DIST(randomx_riscv64_vector_code_begin, randomx_riscv64_vector_program_scratchpad_prefetch));
uint32_t* mx_xor = (uint32_t*)(buf + DIST(randomx_riscv64_vector_code_begin, randomx_riscv64_vector_program_main_loop_mx_xor));
uint32_t* mx_xor_light = (uint32_t*)(buf + DIST(randomx_riscv64_vector_code_begin, randomx_riscv64_vector_program_main_loop_mx_xor_light_mode));
*spaddr_xor = 0x014A47B3 + (pcfg.readReg0 << 15) + (pcfg.readReg1 << 20); // xor x15, readReg0, readReg1
*spaddr_xor2 = 0x014A42B3 + (pcfg.readReg0 << 15) + (pcfg.readReg1 << 20); // xor x5, readReg0, readReg1
const uint32_t mx_xor_value = 0x014A42B3 + (pcfg.readReg2 << 15) + (pcfg.readReg3 << 20); // xor x5, readReg2, readReg3
*mx_xor = mx_xor_value;
*mx_xor_light = mx_xor_value;
if (entryDataInitScalar) {
void* light_mode_data = buf + DIST(randomx_riscv64_vector_code_begin, randomx_riscv64_vector_program_main_loop_light_mode_data);
const uint64_t data[2] = { reinterpret_cast<uint64_t>(entryDataInitScalar), datasetOffset };
memcpy(light_mode_data, &data, sizeof(data));
}
uint8_t* p = (uint8_t*)(buf + DIST(randomx_riscv64_vector_code_begin, randomx_riscv64_vector_program_main_loop_instructions));
// 57C8025E vmv.v.x v16, x5
// 57A9034B vsext.vf2 v18, v16
// 5798214B vfcvt.f.x.v v16, v18
static constexpr uint8_t group_f_convert[] = {
0x57, 0xC8, 0x02, 0x5E, 0x57, 0xA9, 0x03, 0x4B, 0x57, 0x98, 0x21, 0x4B
};
// 57080627 vand.vv v16, v16, v12
// 5788062B vor.vv v16, v16, v13
static constexpr uint8_t group_e_post_process[] = { 0x57, 0x08, 0x06, 0x27, 0x57, 0x88, 0x06, 0x2B };
uint8_t* last_modified[RegistersCount] = { p, p, p, p, p, p, p, p };
for (uint32_t i = 0, n = prog.getSize(); i < n; ++i) {
Instruction instr = prog(i);
uint32_t src = instr.src % RegistersCount;
uint32_t dst = instr.dst % RegistersCount;
const uint32_t shift = instr.getModShift();
uint32_t imm = instr.getImm32();
const uint32_t mod = instr.mod;
switch (static_cast<InstructionType>(inst_map[instr.opcode])) {
case InstructionType::IADD_RS:
if (shift == 0) {
// c.add x20 + dst, x20 + src
emit16(0x9A52 + (src << 2) + (dst << 7));
}
else {
#ifdef __riscv_zba
// sh{shift}add x20 + dst, x20 + src, x20 + dst
emit32(0x214A0A33 + (shift << 13) + (dst << 7) + (src << 15) + (dst << 20));
#else // __riscv_zba
// slli x5, x20 + src, shift
emit32(0x000A1293 + (src << 15) + (shift << 20));
// c.add x20 + dst, x5
emit16(0x9A16 + (dst << 7));
#endif // __riscv_zba
}
if (dst == RegisterNeedsDisplacement) {
imm_to_x5(imm, p);
// c.add x20 + dst, x5
emit16(0x9A16 + (dst << 7));
}
last_modified[dst] = p;
break;
case InstructionType::IADD_M:
loadFromScratchpad(src, dst, mod, imm, p);
// c.add x20 + dst, x5
emit16(0x9A16 + (dst << 7));
last_modified[dst] = p;
break;
case InstructionType::ISUB_R:
if (src != dst) {
// sub x20 + dst, x20 + dst, x20 + src
emit32(0x414A0A33 + (dst << 7) + (dst << 15) + (src << 20));
}
else {
imm_to_x5(-imm, p);
// c.add x20 + dst, x5
emit16(0x9A16 + (dst << 7));
}
last_modified[dst] = p;
break;
case InstructionType::ISUB_M:
loadFromScratchpad(src, dst, mod, imm, p);
// sub x20 + dst, x20 + dst, x5
emit32(0x405A0A33 + (dst << 7) + (dst << 15));
last_modified[dst] = p;
break;
case InstructionType::IMUL_R:
if (src != dst) {
// mul x20 + dst, x20 + dst, x20 + src
emit32(0x034A0A33 + (dst << 7) + (dst << 15) + (src << 20));
}
else {
imm_to_x5(imm, p);
// mul x20 + dst, x20 + dst, x5
emit32(0x025A0A33 + (dst << 7) + (dst << 15));
}
last_modified[dst] = p;
break;
case InstructionType::IMUL_M:
loadFromScratchpad(src, dst, mod, imm, p);
// mul x20 + dst, x20 + dst, x5
emit32(0x025A0A33 + (dst << 7) + (dst << 15));
last_modified[dst] = p;
break;
case InstructionType::IMULH_R:
// mulhu x20 + dst, x20 + dst, x20 + src
emit32(0x034A3A33 + (dst << 7) + (dst << 15) + (src << 20));
last_modified[dst] = p;
break;
case InstructionType::IMULH_M:
loadFromScratchpad(src, dst, mod, imm, p);
// mulhu x20 + dst, x20 + dst, x5
emit32(0x025A3A33 + (dst << 7) + (dst << 15));
last_modified[dst] = p;
break;
case InstructionType::ISMULH_R:
// mulh x20 + dst, x20 + dst, x20 + src
emit32(0x034A1A33 + (dst << 7) + (dst << 15) + (src << 20));
last_modified[dst] = p;
break;
case InstructionType::ISMULH_M:
loadFromScratchpad(src, dst, mod, imm, p);
// mulh x20 + dst, x20 + dst, x5
emit32(0x025A1A33 + (dst << 7) + (dst << 15));
last_modified[dst] = p;
break;
case InstructionType::IMUL_RCP:
if (!isZeroOrPowerOf2(imm)) {
const uint64_t offset = (cur_literal - imul_rcp_literals) * 8;
*(cur_literal++) = randomx_reciprocal_fast(imm);
static constexpr uint32_t rcp_regs[26] = {
/* Integer */ 8, 10, 28, 29, 30, 31,
/* Float */ 0, 1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14, 15, 16, 17, 28, 29, 30, 31
};
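// The first 6 reciprocals live in spare integer registers, the next 20 are
// parked in spare FP registers (read back via fmv.x.d), and anything beyond
// 26 spills to the literal table addressed by x18.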
if (offset < 6 * 8) {
// mul x20 + dst, x20 + dst, rcp_reg
emit32(0x020A0A33 + (dst << 7) + (dst << 15) + (rcp_regs[offset / 8] << 20));
}
else if (offset < 26 * 8) {
// fmv.x.d x5, rcp_reg
emit32(0xE20002D3 + (rcp_regs[offset / 8] << 15));
// mul x20 + dst, x20 + dst, x5
emit32(0x025A0A33 + (dst << 7) + (dst << 15));
}
else {
// ld x5, offset(x18)
emit32(0x00093283 + (offset << 20));
// mul x20 + dst, x20 + dst, x5
emit32(0x025A0A33 + (dst << 7) + (dst << 15));
}
last_modified[dst] = p;
}
break;
case InstructionType::INEG_R:
// sub x20 + dst, x0, x20 + dst
emit32(0x41400A33 + (dst << 7) + (dst << 20));
last_modified[dst] = p;
break;
case InstructionType::IXOR_R:
if (src != dst) {
// xor x20 + dst, x20 + dst, x20 + src
emit32(0x014A4A33 + (dst << 7) + (dst << 15) + (src << 20));
}
else {
imm_to_x5(imm, p);
// xor x20 + dst, x20 + dst, x5
emit32(0x005A4A33 + (dst << 7) + (dst << 15));
}
last_modified[dst] = p;
break;
case InstructionType::IXOR_M:
loadFromScratchpad(src, dst, mod, imm, p);
// xor x20 + dst, x20 + dst, x5
emit32(0x005A4A33 + (dst << 7) + (dst << 15));
last_modified[dst] = p;
break;
#ifdef __riscv_zbb
case InstructionType::IROR_R:
if (src != dst) {
// ror x20 + dst, x20 + dst, x20 + src
emit32(0x614A5A33 + (dst << 7) + (dst << 15) + (src << 20));
}
else {
// rori x20 + dst, x20 + dst, imm
emit32(0x600A5A13 + (dst << 7) + (dst << 15) + ((imm & 63) << 20));
}
last_modified[dst] = p;
break;
case InstructionType::IROL_R:
if (src != dst) {
// rol x20 + dst, x20 + dst, x20 + src
emit32(0x614A1A33 + (dst << 7) + (dst << 15) + (src << 20));
}
else {
// rori x20 + dst, x20 + dst, -imm
emit32(0x600A5A13 + (dst << 7) + (dst << 15) + ((-imm & 63) << 20));
}
last_modified[dst] = p;
break;
#else // __riscv_zbb
case InstructionType::IROR_R:
if (src != dst) {
// sub x5, x0, x20 + src
emit32(0x414002B3 + (src << 20));
// srl x6, x20 + dst, x20 + src
emit32(0x014A5333 + (dst << 15) + (src << 20));
// sll x20 + dst, x20 + dst, x5
emit32(0x005A1A33 + (dst << 7) + (dst << 15));
// or x20 + dst, x20 + dst, x6
emit32(0x006A6A33 + (dst << 7) + (dst << 15));
}
else {
// srli x5, x20 + dst, imm
emit32(0x000A5293 + (dst << 15) + ((imm & 63) << 20));
// slli x6, x20 + dst, -imm
emit32(0x000A1313 + (dst << 15) + ((-imm & 63) << 20));
// or x20 + dst, x5, x6
emit32(0x0062EA33 + (dst << 7));
}
last_modified[dst] = p;
break;
case InstructionType::IROL_R:
if (src != dst) {
// sub x5, x0, x20 + src
emit32(0x414002B3 + (src << 20));
// sll x6, x20 + dst, x20 + src
emit32(0x014A1333 + (dst << 15) + (src << 20));
// srl x20 + dst, x20 + dst, x5
emit32(0x005A5A33 + (dst << 7) + (dst << 15));
// or x20 + dst, x20 + dst, x6
emit32(0x006A6A33 + (dst << 7) + (dst << 15));
}
else {
// srli x5, x20 + dst, -imm
emit32(0x000A5293 + (dst << 15) + ((-imm & 63) << 20));
// slli x6, x20 + dst, imm
emit32(0x000A1313 + (dst << 15) + ((imm & 63) << 20));
// or x20 + dst, x5, x6
emit32(0x0062EA33 + (dst << 7));
}
last_modified[dst] = p;
break;
#endif // __riscv_zbb
case InstructionType::ISWAP_R:
if (src != dst) {
// c.mv x5, x20 + dst
emit16(0x82D2 + (dst << 2));
// c.mv x20 + dst, x20 + src
emit16(0x8A52 + (src << 2) + (dst << 7));
// c.mv x20 + src, x5
emit16(0x8A16 + (src << 7));
last_modified[src] = p;
last_modified[dst] = p;
}
break;
case InstructionType::FSWAP_R:
// vmv.x.s x5, v0 + dst
emit32(0x420022D7 + (dst << 20));
// vslide1down.vx v0 + dst, v0 + dst, x5
emit32(0x3E02E057 + (dst << 7) + (dst << 20));
break;
case InstructionType::FADD_R:
src %= RegisterCountFlt;
dst %= RegisterCountFlt;
// vfadd.vv v0 + dst, v0 + dst, v8 + src
emit32(0x02041057 + (dst << 7) + (src << 15) + (dst << 20));
break;
case InstructionType::FADD_M:
dst %= RegisterCountFlt;
loadFromScratchpad(src, RegistersCount, mod, imm, p);
emit_data(group_f_convert);
// vfadd.vv v0 + dst, v0 + dst, v16
emit32(0x02081057 + (dst << 7) + (dst << 20));
break;
case InstructionType::FSUB_R:
src %= RegisterCountFlt;
dst %= RegisterCountFlt;
// vfsub.vv v0 + dst, v0 + dst, v8 + src
emit32(0x0A041057 + (dst << 7) + (src << 15) + (dst << 20));
break;
case InstructionType::FSUB_M:
dst %= RegisterCountFlt;
loadFromScratchpad(src, RegistersCount, mod, imm, p);
emit_data(group_f_convert);
// vfsub.vv v0 + dst, v0 + dst, v16
emit32(0x0A081057 + (dst << 7) + (dst << 20));
break;
case InstructionType::FSCAL_R:
dst %= RegisterCountFlt;
// vxor.vv v0 + dst, v0 + dst, v14
emit32(0x2E070057 + (dst << 7) + (dst << 20));
break;
case InstructionType::FMUL_R:
src %= RegisterCountFlt;
dst %= RegisterCountFlt;
// vfmul.vv v4 + dst, v4 + dst, v8 + src
emit32(0x92441257 + (dst << 7) + (src << 15) + (dst << 20));
break;
case InstructionType::FDIV_M:
dst %= RegisterCountFlt;
loadFromScratchpad(src, RegistersCount, mod, imm, p);
emit_data(group_f_convert);
emit_data(group_e_post_process);
// vfdiv.vv v4 + dst, v4 + dst, v16
emit32(0x82481257 + (dst << 7) + (dst << 20));
break;
case InstructionType::FSQRT_R:
dst %= RegisterCountFlt;
// vfsqrt.v v4 + dst, v4 + dst
emit32(0x4E401257 + (dst << 7) + (dst << 20));
break;
case InstructionType::CBRANCH:
{
const uint32_t shift = (mod >> 4) + RandomX_ConfigurationBase::JumpOffset;
imm |= (1UL << shift);
if (RandomX_ConfigurationBase::JumpOffset > 0 || shift > 0) {
imm &= ~(1UL << (shift - 1));
}
// slli x6, x7, shift
// x6 = branchMask
emit32(0x00039313 + (shift << 20));
// x5 = imm
imm_to_x5(imm, p);
// c.add x20 + dst, x5
emit16(0x9A16 + (dst << 7));
// and x5, x20 + dst, x6
emit32(0x006A72B3 + (dst << 15));
const int offset = static_cast<int>(last_modified[dst] - p);
if (offset >= -4096) {
// beqz x5, offset
const uint32_t k = static_cast<uint32_t>(offset);
emit32(0x80028063 | ((k & 0x1E) << 7) | ((k & 0x7E0) << 20) | ((k & 0x800) >> 4));
}
else {
// bnez x5, 8
emit32(0x00029463);
// j offset
const uint32_t k = static_cast<uint32_t>(offset - 4);
emit32(0x8000006F | ((k & 0x7FE) << 20) | ((k & 0x800) << 9) | (k & 0xFF000));
}
for (uint32_t j = 0; j < RegistersCount; ++j) {
last_modified[j] = p;
}
}
break;
case InstructionType::CFROUND:
if ((imm - 1) & 63) {
#ifdef __riscv_zbb
// rori x5, x20 + src, imm - 1
emit32(0x600A5293 + (src << 15) + (((imm - 1) & 63) << 20));
#else // __riscv_zbb
// srli x5, x20 + src, imm - 1
emit32(0x000A5293 + (src << 15) + (((imm - 1) & 63) << 20));
// slli x6, x20 + src, 1 - imm
emit32(0x000A1313 + (src << 15) + (((1 - imm) & 63) << 20));
// or x5, x5, x6
emit32(0x0062E2B3);
#endif // __riscv_zbb
// andi x5, x5, 6
emit32(0x0062F293);
}
else {
// andi x5, x20 + src, 6
emit32(0x006A7293 + (src << 15));
}
// li x6, 01111000b
// x6 = CFROUND lookup table
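// At this point x5 holds 2 * mode (bits 2:1 of the rotated source), so
// frm = (01111000b >> (2 * mode)) & 3 maps RandomX rounding modes 0,1,2,3
// to RISC-V frm values 0 (RNE), 2 (RDN), 3 (RUP), 1 (RTZ).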
emit32(0x07800313);
// srl x5, x6, x5
emit32(0x005352B3);
// andi x5, x5, 3
emit32(0x0032F293);
// csrw frm, x5
emit32(0x00229073);
break;
case InstructionType::ISTORE:
{
uint32_t mask_reg;
uint32_t shift = 32;
if ((mod >> 4) >= 14) {
shift -= RandomX_CurrentConfig.Log2_ScratchpadL3;
mask_reg = 1; // x1 = L3 mask
}
else {
if ((mod & 3) == 0) {
shift -= RandomX_CurrentConfig.Log2_ScratchpadL2;
mask_reg = 17; // x17 = L2 mask
}
else {
shift -= RandomX_CurrentConfig.Log2_ScratchpadL1;
mask_reg = 16; // x16 = L1 mask
}
}
imm = static_cast<uint32_t>(static_cast<int32_t>(imm << shift) >> shift);
imm_to_x5(imm, p);
// c.add x5, x20 + dst
emit16(0x92D2 + (dst << 2));
// and x5, x5, x0 + mask_reg
emit32(0x0002F2B3 + (mask_reg << 20));
// c.add x5, x12
emit16(0x92B2);
// sd x20 + src, 0(x5)
emit32(0x0142B023 + (src << 20));
}
break;
default:
UNREACHABLE;
}
}
const uint8_t* e;
if (entryDataInitScalar) {
// Emit "J randomx_riscv64_vector_program_main_loop_instructions_end_light_mode" instruction
e = buf + DIST(randomx_riscv64_vector_code_begin, randomx_riscv64_vector_program_main_loop_instructions_end_light_mode);
}
else {
// Emit "J randomx_riscv64_vector_program_main_loop_instructions_end" instruction
e = buf + DIST(randomx_riscv64_vector_code_begin, randomx_riscv64_vector_program_main_loop_instructions_end);
}
const uint32_t k = e - p;
emit32(0x6F | ((k & 0x7FE) << 20) | ((k & 0x800) << 9) | (k & 0xFF000));
#ifdef __GNUC__
char* p1 = (char*)(buf + DIST(randomx_riscv64_vector_code_begin, randomx_riscv64_vector_program_params));
char* p2 = (char*)(buf + DIST(randomx_riscv64_vector_code_begin, randomx_riscv64_vector_program_end));
__builtin___clear_cache(p1, p2);
#endif
return buf + DIST(randomx_riscv64_vector_code_begin, randomx_riscv64_vector_program_begin);
}
} // namespace randomx
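
The __builtin___clear_cache calls are load-bearing on RISC-V: stores into the code buffer travel through the data cache, and instruction fetch will not observe them until the range is synchronized (fence.i or its SMP equivalent). A minimal sketch of the same pattern, with a hypothetical helper name:

#include <cstddef>
#include <cstdint>

// Synchronize a freshly patched JIT buffer before jumping into it.
inline void finalize_jit_buffer(uint8_t* buf, size_t size)
{
#ifdef __GNUC__
    __builtin___clear_cache(reinterpret_cast<char*>(buf),
                            reinterpret_cast<char*>(buf) + size);
#endif
}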

View File

@@ -0,0 +1,45 @@
/*
Copyright (c) 2018-2020, tevador <tevador@gmail.com>
Copyright (c) 2019-2021, XMRig <https://github.com/xmrig>, <support@xmrig.com>
Copyright (c) 2025, SChernykh <https://github.com/SChernykh>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the
names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#pragma once
#include <cstdint>
#include <cstdlib>
namespace randomx {
class SuperscalarProgram;
struct ProgramConfiguration;
class Program;
void* generateDatasetInitVectorRV64(uint8_t* buf, SuperscalarProgram* programs, size_t num_programs);
void* generateProgramVectorRV64(uint8_t* buf, Program& prog, ProgramConfiguration& pcfg, const uint8_t (&inst_map)[256], void* entryDataInitScalar, uint32_t datasetOffset);
} // namespace randomx

View File

@@ -0,0 +1,869 @@
/*
Copyright (c) 2018-2020, tevador <tevador@gmail.com>
Copyright (c) 2019-2021, XMRig <https://github.com/xmrig>, <support@xmrig.com>
Copyright (c) 2025, SChernykh <https://github.com/SChernykh>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the
names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include "configuration.h"
// Compatibility macros
#if !defined(RANDOMX_CACHE_ACCESSES) && defined(RANDOMX_CACHE_MAX_ACCESSES)
#define RANDOMX_CACHE_ACCESSES RANDOMX_CACHE_MAX_ACCESSES
#endif
#if defined(RANDOMX_ARGON_MEMORY)
#define RANDOMX_CACHE_MASK RANDOMX_ARGON_MEMORY * 1024 / 64 - 1
#elif defined(RANDOMX_CACHE_MAX_SIZE)
#define RANDOMX_CACHE_MASK RANDOMX_CACHE_MAX_SIZE / 64 - 1
#endif
#define DECL(x) x
.text
#ifndef __riscv_v
#error This file requires rv64gcv
#endif
.option pic
.global DECL(randomx_riscv64_vector_code_begin)
.global DECL(randomx_riscv64_vector_sshash_begin)
.global DECL(randomx_riscv64_vector_sshash_imul_rcp_literals)
.global DECL(randomx_riscv64_vector_sshash_dataset_init)
.global DECL(randomx_riscv64_vector_sshash_generated_instructions)
.global DECL(randomx_riscv64_vector_sshash_generated_instructions_end)
.global DECL(randomx_riscv64_vector_sshash_cache_prefetch)
.global DECL(randomx_riscv64_vector_sshash_xor)
.global DECL(randomx_riscv64_vector_sshash_end)
.global DECL(randomx_riscv64_vector_program_params)
.global DECL(randomx_riscv64_vector_program_imul_rcp_literals)
.global DECL(randomx_riscv64_vector_program_begin)
.global DECL(randomx_riscv64_vector_program_main_loop_instructions)
.global DECL(randomx_riscv64_vector_program_main_loop_instructions_end)
.global DECL(randomx_riscv64_vector_program_main_loop_mx_xor)
.global DECL(randomx_riscv64_vector_program_main_loop_spaddr_xor)
.global DECL(randomx_riscv64_vector_program_main_loop_light_mode_data)
.global DECL(randomx_riscv64_vector_program_main_loop_instructions_end_light_mode)
.global DECL(randomx_riscv64_vector_program_main_loop_mx_xor_light_mode)
.global DECL(randomx_riscv64_vector_program_scratchpad_prefetch)
.global DECL(randomx_riscv64_vector_program_end)
.global DECL(randomx_riscv64_vector_code_end)
.balign 8
DECL(randomx_riscv64_vector_code_begin):
DECL(randomx_riscv64_vector_sshash_begin):
sshash_constant_0: .dword 6364136223846793005
sshash_constant_1: .dword 9298411001130361340
sshash_constant_2: .dword 12065312585734608966
sshash_constant_3: .dword 9306329213124626780
sshash_constant_4: .dword 5281919268842080866
sshash_constant_5: .dword 10536153434571861004
sshash_constant_6: .dword 3398623926847679864
sshash_constant_7: .dword 9549104520008361294
sshash_offsets: .dword 0,1,2,3
store_offsets: .dword 0,64,128,192
DECL(randomx_riscv64_vector_sshash_imul_rcp_literals): .fill 512,8,0
/*
Reference: https://github.com/tevador/RandomX/blob/master/doc/specs.md#73-dataset-block-generation
Register layout
---------------
x5 = temporary
x10 = randomx cache
x11 = output buffer
x12 = startBlock
x13 = endBlock
x14 = cache mask
x15 = imul_rcp literal pointer
v0-v7 = r0-r7
v8 = itemNumber
v9 = cacheIndex, then a pointer into cache->memory (for prefetch), then a byte offset into cache->memory
v10-v17 = sshash constants
v18 = temporary
v19 = dataset item store offsets
*/
DECL(randomx_riscv64_vector_sshash_dataset_init):
// Process 4 64-bit values at a time
vsetivli zero, 4, e64, m1, ta, ma
// Load cache->memory pointer
ld x10, (x10)
// Init cache mask
li x14, RANDOMX_CACHE_MASK
// Init dataset item store offsets
lla x5, store_offsets
vle64.v v19, (x5)
// Init itemNumber vector to (startBlock, startBlock + 1, startBlock + 2, startBlock + 3)
lla x5, sshash_offsets
vle64.v v8, (x5)
vadd.vx v8, v8, x12
// Load constants (stride = x0 = 0, so a 64-bit value will be broadcast into each element of a vector)
lla x5, sshash_constant_0
vlse64.v v10, (x5), x0
lla x5, sshash_constant_1
vlse64.v v11, (x5), x0
lla x5, sshash_constant_2
vlse64.v v12, (x5), x0
lla x5, sshash_constant_3
vlse64.v v13, (x5), x0
lla x5, sshash_constant_4
vlse64.v v14, (x5), x0
lla x5, sshash_constant_5
vlse64.v v15, (x5), x0
lla x5, sshash_constant_6
vlse64.v v16, (x5), x0
lla x5, sshash_constant_7
vlse64.v v17, (x5), x0
// Calculate the end pointer for dataset init
sub x13, x13, x12
slli x13, x13, 6
add x13, x13, x11
init_item:
// Step 1. Init r0-r7
// r0 = (itemNumber + 1) * 6364136223846793005
vmv.v.v v0, v8
vmadd.vv v0, v10, v10
// r_i = r0 ^ c_i for i = 1..7
vxor.vv v1, v0, v11
vxor.vv v2, v0, v12
vxor.vv v3, v0, v13
vxor.vv v4, v0, v14
vxor.vv v5, v0, v15
vxor.vv v6, v0, v16
vxor.vv v7, v0, v17
// Step 2. Let cacheIndex = itemNumber
vmv.v.v v9, v8
// Step 3 is implicit (all iterations are inlined, there is no "i")
// Init imul_rcp literal pointer
lla x15, randomx_riscv64_vector_sshash_imul_rcp_literals
DECL(randomx_riscv64_vector_sshash_generated_instructions):
// Generated by JIT compiler
//
// Step 4. randomx_riscv64_vector_sshash_cache_prefetch
// Step 5. SuperscalarHash[i]
// Step 6. randomx_riscv64_vector_sshash_xor
//
// Above steps will be repeated RANDOMX_CACHE_ACCESSES times
.fill RANDOMX_CACHE_ACCESSES * 2048, 4, 0
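// The fill above reserves 2048 32-bit words (8 KiB) of patch space per cache access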
DECL(randomx_riscv64_vector_sshash_generated_instructions_end):
// Step 9. Concatenate registers r0-r7 in little endian format to get the final Dataset item data.
vsuxei64.v v0, (x11), v19
add x5, x11, 8
vsuxei64.v v1, (x5), v19
add x5, x11, 16
vsuxei64.v v2, (x5), v19
add x5, x11, 24
vsuxei64.v v3, (x5), v19
add x5, x11, 32
vsuxei64.v v4, (x5), v19
add x5, x11, 40
vsuxei64.v v5, (x5), v19
add x5, x11, 48
vsuxei64.v v6, (x5), v19
add x5, x11, 56
vsuxei64.v v7, (x5), v19
// Iterate to the next 4 items
vadd.vi v8, v8, 4
add x11, x11, 256
bltu x11, x13, init_item
ret
// Step 4. Load a 64-byte item from the Cache. The item index is given by cacheIndex modulo the total number of 64-byte items in Cache.
DECL(randomx_riscv64_vector_sshash_cache_prefetch):
// v9 = convert from cacheIndex to a direct pointer into cache->memory
vand.vx v9, v9, x14
vsll.vi v9, v9, 6
vadd.vx v9, v9, x10
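// Note: when Zicbop is unavailable, a plain load below stands in for the
// prefetch; the loaded value is simply discarded (x5 is reloaded each time).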
// Prefetch element 0
vmv.x.s x5, v9
#ifdef __riscv_zicbop
prefetch.r (x5)
#else
ld x5, (x5)
#endif
// Prefetch element 1
vslidedown.vi v18, v9, 1
vmv.x.s x5, v18
#ifdef __riscv_zicbop
prefetch.r (x5)
#else
ld x5, (x5)
#endif
// Prefetch element 2
vslidedown.vi v18, v9, 2
vmv.x.s x5, v18
#ifdef __riscv_zicbop
prefetch.r (x5)
#else
ld x5, (x5)
#endif
// Prefetch element 3
vslidedown.vi v18, v9, 3
vmv.x.s x5, v18
#ifdef __riscv_zicbop
prefetch.r (x5)
#else
ld x5, (x5)
#endif
// v9 = byte offset into cache->memory
vsub.vx v9, v9, x10
// Step 6. XOR all registers with data loaded from randomx cache
DECL(randomx_riscv64_vector_sshash_xor):
vluxei64.v v18, (x10), v9
vxor.vv v0, v0, v18
add x5, x10, 8
vluxei64.v v18, (x5), v9
vxor.vv v1, v1, v18
add x5, x10, 16
vluxei64.v v18, (x5), v9
vxor.vv v2, v2, v18
add x5, x10, 24
vluxei64.v v18, (x5), v9
vxor.vv v3, v3, v18
add x5, x10, 32
vluxei64.v v18, (x5), v9
vxor.vv v4, v4, v18
add x5, x10, 40
vluxei64.v v18, (x5), v9
vxor.vv v5, v5, v18
add x5, x10, 48
vluxei64.v v18, (x5), v9
vxor.vv v6, v6, v18
add x5, x10, 56
vluxei64.v v18, (x5), v9
vxor.vv v7, v7, v18
DECL(randomx_riscv64_vector_sshash_end):
/*
Reference: https://github.com/tevador/RandomX/blob/master/doc/specs.md#46-vm-execution
C declarations:
struct RegisterFile {
uint64_t r[8];
double f[4][2];
double e[4][2];
double a[4][2];
};
struct MemoryRegisters {
uint32_t mx, ma;
uint8_t* memory; // dataset (fast mode) or cache (light mode)
};
void ProgramFunc(RegisterFile* reg, MemoryRegisters* mem, uint8_t* scratchpad, uint64_t iterations);
Register layout
---------------
x0 = zero
x1 = scratchpad L3 mask
x2 = stack pointer
x3 = global pointer (unused)
x4 = thread pointer (unused)
x5 = temporary
x6 = temporary
x7 = branch mask (unshifted)
x8 = frame pointer, also 64-bit literal inside the loop
x9 = scratchpad L3 mask (64-byte aligned)
x10 = RegisterFile* reg, also 64-bit literal inside the loop
x11 = MemoryRegisters* mem, then dataset/cache pointer
x12 = scratchpad
x13 = iterations
x14 = mx, ma (always stored with dataset mask applied)
x15 = spAddr0, spAddr1
x16 = scratchpad L1 mask
x17 = scratchpad L2 mask
x18 = IMUL_RCP literals pointer
x19 = dataset mask
x20-x27 = r0-r7
x28-x31 = 64-bit literals
f0-f7 = 64-bit literals
f10-f17 = 64-bit literals
f28-f31 = 64-bit literals
v0-v3 = f0-f3
v4-v7 = e0-e3
v8-v11 = a0-a3
v12 = E 'and' mask = 0x00ffffffffffffff'00ffffffffffffff
v13 = E 'or' mask = 0x3*00000000******'3*00000000******
v14 = scale mask = 0x80f0000000000000'80f0000000000000
v15 = unused
v16 = temporary
v17 = unused
v18 = temporary
v19-v31 = unused
*/
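/*
   For orientation, one iteration of the main loop below as a rough C++-like
   sketch (illustrative names only; the actual readReg indices and masks are
   substituted by the JIT compiler):

       uint64_t* l0 = scratchpad + (spAddr0 & L3Mask64);
       for (int i = 0; i < 8; ++i) r[i] ^= l0[i];          // integer regs from the spAddr0 line
       loadF_E(scratchpad + (spAddr1 & L3Mask64));         // init f0-f3, e0-e3 (E masks applied)
       executeProgram();                                   // JIT-generated instructions
       mx ^= (readReg2 ^ readReg3) & datasetMask;
       prefetch(&dataset[mx]);
       for (int i = 0; i < 8; ++i) r[i] ^= dataset[ma/8 + i]; // 64-byte line at ma
       swap(mx, ma);
       store r to the spAddr1 line, f^e to the spAddr0 line;
       spAddr0:spAddr1 = readReg0 ^ readReg1;              // addresses for the next iteration
       if (--iterations) repeat;
*/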
.balign 8
DECL(randomx_riscv64_vector_program_params):
// JIT compiler will adjust these values for different RandomX variants
randomx_masks: .dword 16376, 262136, 2097144, 2147483584, 255
DECL(randomx_riscv64_vector_program_imul_rcp_literals):
imul_rcp_literals: .fill RANDOMX_PROGRAM_MAX_SIZE, 8, 0
DECL(randomx_riscv64_vector_program_begin):
addi sp, sp, -112
sd x8, 96(sp) // save old frame pointer
addi x8, sp, 112 // setup new frame pointer
sd x1, 104(sp) // save return address
// Save callee-saved registers
sd x9, 0(sp)
sd x18, 8(sp)
sd x19, 16(sp)
sd x20, 24(sp)
sd x21, 32(sp)
sd x22, 40(sp)
sd x23, 48(sp)
sd x24, 56(sp)
sd x25, 64(sp)
sd x26, 72(sp)
sd x27, 80(sp)
// Save x10 as it will be used as an IMUL_RCP literal
sd x10, 88(sp)
// Load mx, ma and dataset pointer
ld x14, (x11)
ld x11, 8(x11)
// Initialize spAddr0-spAddr1
mv x15, x14
// Set registers r0-r7 to zero
li x20, 0
li x21, 0
li x22, 0
li x23, 0
li x24, 0
li x25, 0
li x26, 0
li x27, 0
// Load masks
lla x5, randomx_masks
ld x16, 0(x5)
ld x17, 8(x5)
ld x1, 16(x5)
ld x19, 24(x5)
ld x7, 32(x5)
addi x9, x1, -56
// Set vector registers to 2x64 bit
vsetivli zero, 2, e64, m1, ta, ma
// Apply dataset mask to mx, ma
slli x5, x19, 32
or x5, x5, x19
and x14, x14, x5
// Load group A registers
addi x5, x10, 192
vle64.v v8, (x5)
addi x5, x10, 208
vle64.v v9, (x5)
addi x5, x10, 224
vle64.v v10, (x5)
addi x5, x10, 240
vle64.v v11, (x5)
// Load E 'and' mask
vmv.v.i v12, -1
vsrl.vi v12, v12, 8
// Load E 'or' mask (stored in reg.f[0])
addi x5, x10, 64
vle64.v v13, (x5)
// Load scale mask
lui x5, 0x80f00
slli x5, x5, 32
vmv.v.x v14, x5
// IMUL_RCP literals pointer
lla x18, imul_rcp_literals
// Load IMUL_RCP literals
ld x8, 0(x18)
ld x10, 8(x18)
ld x28, 16(x18)
ld x29, 24(x18)
ld x30, 32(x18)
ld x31, 40(x18)
fld f0, 48(x18)
fld f1, 56(x18)
fld f2, 64(x18)
fld f3, 72(x18)
fld f4, 80(x18)
fld f5, 88(x18)
fld f6, 96(x18)
fld f7, 104(x18)
fld f10, 112(x18)
fld f11, 120(x18)
fld f12, 128(x18)
fld f13, 136(x18)
fld f14, 144(x18)
fld f15, 152(x18)
fld f16, 160(x18)
fld f17, 168(x18)
fld f28, 176(x18)
fld f29, 184(x18)
fld f30, 192(x18)
fld f31, 200(x18)
randomx_riscv64_vector_program_main_loop:
and x5, x15, x9 // x5 = spAddr0 & 64-byte aligned L3 mask
add x5, x5, x12 // x5 = &scratchpad[spAddr0 & 64-byte aligned L3 mask]
// read a 64-byte line from scratchpad (indexed by spAddr0) and XOR it with r0-r7
ld x6, 0(x5)
xor x20, x20, x6
ld x6, 8(x5)
xor x21, x21, x6
ld x6, 16(x5)
xor x22, x22, x6
ld x6, 24(x5)
xor x23, x23, x6
ld x6, 32(x5)
xor x24, x24, x6
ld x6, 40(x5)
xor x25, x25, x6
ld x6, 48(x5)
xor x26, x26, x6
ld x6, 56(x5)
xor x27, x27, x6
srli x5, x15, 32 // x5 = spAddr1
and x5, x5, x9 // x5 = spAddr1 & 64-byte aligned L3 mask
add x5, x5, x12 // x5 = &scratchpad[spAddr1 & 64-byte aligned L3 mask]
// read a 64-byte line from scratchpad (indexed by spAddr1) and initialize f0-f3, e0-e3 registers
// Set vector registers to 2x32 bit
vsetivli zero, 2, e32, m1, ta, ma
// load f0
vle32.v v16, (x5)
vfwcvt.f.x.v v0, v16
// load f1
addi x6, x5, 8
vle32.v v1, (x6)
// Use v16 as an intermediate register: vfwcvt writes a two-register group here, so its destination must be an even-numbered vector register
vfwcvt.f.x.v v16, v1
vmv1r.v v1, v16
// load f2
addi x6, x5, 16
vle32.v v16, (x6)
vfwcvt.f.x.v v2, v16
// load f3
addi x6, x5, 24
vle32.v v3, (x6)
vfwcvt.f.x.v v16, v3
vmv1r.v v3, v16
// load e0
addi x6, x5, 32
vle32.v v16, (x6)
vfwcvt.f.x.v v4, v16
// load e1
addi x6, x5, 40
vle32.v v5, (x6)
vfwcvt.f.x.v v16, v5
vmv1r.v v5, v16
// load e2
addi x6, x5, 48
vle32.v v16, (x6)
vfwcvt.f.x.v v6, v16
// load e3
addi x6, x5, 56
vle32.v v7, (x6)
vfwcvt.f.x.v v16, v7
vmv1r.v v7, v16
// Set vector registers back to 2x64 bit
vsetivli zero, 2, e64, m1, ta, ma
// post-process e0-e3
vand.vv v4, v4, v12
vand.vv v5, v5, v12
vand.vv v6, v6, v12
vand.vv v7, v7, v12
vor.vv v4, v4, v13
vor.vv v5, v5, v13
vor.vv v6, v6, v13
vor.vv v7, v7, v13
DECL(randomx_riscv64_vector_program_main_loop_instructions):
// Generated by JIT compiler
// FDIV_M can generate up to 50 bytes of code (rounded up to 52, a multiple of 4)
// +32 bytes for the scratchpad prefetch and the final jump instruction
.fill RANDOMX_PROGRAM_MAX_SIZE * 52 + 32, 1, 0
DECL(randomx_riscv64_vector_program_main_loop_instructions_end):
// Calculate dataset pointer for dataset read
// Do it here to break the false dependency on readReg2 and readReg3 (see below)
srli x6, x14, 32 // x6 = ma & dataset mask
DECL(randomx_riscv64_vector_program_main_loop_mx_xor):
xor x5, x24, x26 // x5 = readReg2 ^ readReg3 (JIT compiler will substitute the actual registers)
and x5, x5, x19 // x5 = (readReg2 ^ readReg3) & dataset mask
xor x14, x14, x5 // mx ^= (readReg2 ^ readReg3) & dataset mask
add x5, x14, x11 // x5 = &dataset[mx & dataset mask]
#ifdef __riscv_zicbop
prefetch.r (x5)
#else
ld x5, (x5)
#endif
add x5, x6, x11 // x5 = &dataset[ma & dataset mask]
// read a 64-byte line from dataset and XOR it with r0-r7
ld x6, 0(x5)
xor x20, x20, x6
ld x6, 8(x5)
xor x21, x21, x6
ld x6, 16(x5)
xor x22, x22, x6
ld x6, 24(x5)
xor x23, x23, x6
ld x6, 32(x5)
xor x24, x24, x6
ld x6, 40(x5)
xor x25, x25, x6
ld x6, 48(x5)
xor x26, x26, x6
ld x6, 56(x5)
xor x27, x27, x6
DECL(randomx_riscv64_vector_program_scratchpad_prefetch):
xor x5, x20, x22 // spAddr0-spAddr1 = readReg0 ^ readReg1 (JIT compiler will substitute the actual registers)
srli x6, x5, 32 // x6 = spAddr1
and x5, x5, x9 // x5 = spAddr0 & 64-byte aligned L3 mask
and x6, x6, x9 // x6 = spAddr1 & 64-byte aligned L3 mask
c.add x5, x12 // x5 = &scratchpad[spAddr0 & 64-byte aligned L3 mask]
c.add x6, x12 // x6 = &scratchpad[spAddr1 & 64-byte aligned L3 mask]
#ifdef __riscv_zicbop
prefetch.r (x5)
prefetch.r (x6)
#else
ld x5, (x5)
ld x6, (x6)
#endif
// swap mx <-> ma
#ifdef __riscv_zbb
rori x14, x14, 32
#else
srli x5, x14, 32
slli x14, x14, 32
or x14, x14, x5
#endif
srli x5, x15, 32 // x5 = spAddr1
and x5, x5, x9 // x5 = spAddr1 & 64-byte aligned L3 mask
add x5, x5, x12 // x5 = &scratchpad[spAddr1 & 64-byte aligned L3 mask]
// store registers r0-r7 to the scratchpad
sd x20, 0(x5)
sd x21, 8(x5)
sd x22, 16(x5)
sd x23, 24(x5)
sd x24, 32(x5)
sd x25, 40(x5)
sd x26, 48(x5)
sd x27, 56(x5)
and x5, x15, x9 // x5 = spAddr0 & 64-byte aligned L3 mask
add x5, x5, x12 // x5 = &scratchpad[spAddr0 & 64-byte aligned L3 mask]
DECL(randomx_riscv64_vector_program_main_loop_spaddr_xor):
xor x15, x20, x22 // spAddr0-spAddr1 = readReg0 ^ readReg1 (JIT compiler will substitute the actual registers)
// store registers f0-f3 to the scratchpad (f0-f3 are first combined with e0-e3)
vxor.vv v0, v0, v4
vxor.vv v1, v1, v5
vxor.vv v2, v2, v6
vxor.vv v3, v3, v7
vse64.v v0, (x5)
addi x6, x5, 16
vse64.v v1, (x6)
addi x6, x5, 32
vse64.v v2, (x6)
addi x6, x5, 48
vse64.v v3, (x6)
addi x13, x13, -1
beqz x13, randomx_riscv64_vector_program_main_loop_end
j randomx_riscv64_vector_program_main_loop
randomx_riscv64_vector_program_main_loop_end:
// Restore x8 and x10
addi x8, sp, 112
ld x10, 88(sp)
// Store integer registers
sd x20, 0(x10)
sd x21, 8(x10)
sd x22, 16(x10)
sd x23, 24(x10)
sd x24, 32(x10)
sd x25, 40(x10)
sd x26, 48(x10)
sd x27, 56(x10)
// Store FP registers
addi x5, x10, 64
vse64.v v0, (x5)
addi x5, x10, 80
vse64.v v1, (x5)
addi x5, x10, 96
vse64.v v2, (x5)
addi x5, x10, 112
vse64.v v3, (x5)
addi x5, x10, 128
vse64.v v4, (x5)
addi x5, x10, 144
vse64.v v5, (x5)
addi x5, x10, 160
vse64.v v6, (x5)
addi x5, x10, 176
vse64.v v7, (x5)
// Restore callee-saved registers
ld x9, 0(sp)
ld x18, 8(sp)
ld x19, 16(sp)
ld x20, 24(sp)
ld x21, 32(sp)
ld x22, 40(sp)
ld x23, 48(sp)
ld x24, 56(sp)
ld x25, 64(sp)
ld x26, 72(sp)
ld x27, 80(sp)
ld x8, 96(sp) // old frame pointer
ld x1, 104(sp) // return address
addi sp, sp, 112
ret
DECL(randomx_riscv64_vector_program_main_loop_light_mode_data):
// 1) Pointer to the scalar dataset init function
// 2) Dataset offset
.dword 0, 0
DECL(randomx_riscv64_vector_program_main_loop_instructions_end_light_mode):
// Calculate dataset pointer for dataset read
// Do it here to break the false dependency on readReg2 and readReg3 (see below)
srli x6, x14, 32 // x6 = ma & dataset mask
DECL(randomx_riscv64_vector_program_main_loop_mx_xor_light_mode):
xor x5, x24, x26 // x5 = readReg2 ^ readReg3 (JIT compiler will substitute the actual registers)
and x5, x5, x19 // x5 = (readReg2 ^ readReg3) & dataset mask
xor x14, x14, x5 // mx ^= (readReg2 ^ readReg3) & dataset mask
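// Light mode has no precomputed dataset: the 64-byte item at
// (dataset offset + ma) is computed on demand by calling the scalar
// dataset-init routine for a single block, then XORed into r0-r7
// exactly like the fast-mode dataset read above.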
// Save all registers modified when calling dataset_init_scalar_func_ptr
addi sp, sp, -192
// bytes [0, 127] - saved registers
// bytes [128, 191] - output buffer
sd x1, 0(sp)
sd x7, 16(sp)
sd x10, 24(sp)
sd x11, 32(sp)
sd x12, 40(sp)
sd x13, 48(sp)
sd x14, 56(sp)
sd x15, 64(sp)
sd x16, 72(sp)
sd x17, 80(sp)
sd x28, 88(sp)
sd x29, 96(sp)
sd x30, 104(sp)
sd x31, 112(sp)
// setup randomx_riscv64_vector_sshash_dataset_init's parameters
// x10 = pointer to pointer to cache memory
// pointer to cache memory was saved in "sd x11, 32(sp)", so x10 = sp + 32
addi x10, sp, 32
// x11 = output buffer (64 bytes)
addi x11, sp, 128
// x12 = start block
lla x5, randomx_riscv64_vector_program_main_loop_light_mode_data
ld x12, 8(x5)
add x12, x12, x6
srli x12, x12, 6
// x13 = end block
addi x13, x12, 1
ld x5, 0(x5)
jalr x1, 0(x5)
// restore registers
ld x1, 0(sp)
ld x7, 16(sp)
ld x10, 24(sp)
ld x11, 32(sp)
ld x12, 40(sp)
ld x13, 48(sp)
ld x14, 56(sp)
ld x15, 64(sp)
ld x16, 72(sp)
ld x17, 80(sp)
ld x28, 88(sp)
ld x29, 96(sp)
ld x30, 104(sp)
ld x31, 112(sp)
// read a 64-byte line from dataset and XOR it with r0-r7
ld x5, 128(sp)
xor x20, x20, x5
ld x5, 136(sp)
xor x21, x21, x5
ld x5, 144(sp)
xor x22, x22, x5
ld x5, 152(sp)
xor x23, x23, x5
ld x5, 160(sp)
xor x24, x24, x5
ld x5, 168(sp)
xor x25, x25, x5
ld x5, 176(sp)
xor x26, x26, x5
ld x5, 184(sp)
xor x27, x27, x5
addi sp, sp, 192
j randomx_riscv64_vector_program_scratchpad_prefetch
DECL(randomx_riscv64_vector_program_end):
DECL(randomx_riscv64_vector_code_end):


@@ -0,0 +1,74 @@
/*
Copyright (c) 2018-2020, tevador <tevador@gmail.com>
Copyright (c) 2019-2021, XMRig <https://github.com/xmrig>, <support@xmrig.com>
Copyright (c) 2025, SChernykh <https://github.com/SChernykh>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the
names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#pragma once
#if defined(__cplusplus)
#include <cstdint>
#else
#include <stdint.h>
#endif
#if defined(__cplusplus)
extern "C" {
#endif
struct randomx_cache;
void randomx_riscv64_vector_code_begin();
void randomx_riscv64_vector_sshash_begin();
void randomx_riscv64_vector_sshash_imul_rcp_literals();
void randomx_riscv64_vector_sshash_dataset_init(struct randomx_cache* cache, uint8_t* output_buf, uint32_t startBlock, uint32_t endBlock);
void randomx_riscv64_vector_sshash_cache_prefetch();
void randomx_riscv64_vector_sshash_generated_instructions();
void randomx_riscv64_vector_sshash_generated_instructions_end();
void randomx_riscv64_vector_sshash_xor();
void randomx_riscv64_vector_sshash_end();
void randomx_riscv64_vector_program_params();
void randomx_riscv64_vector_program_imul_rcp_literals();
void randomx_riscv64_vector_program_begin();
void randomx_riscv64_vector_program_main_loop_instructions();
void randomx_riscv64_vector_program_main_loop_instructions_end();
void randomx_riscv64_vector_program_main_loop_mx_xor();
void randomx_riscv64_vector_program_main_loop_spaddr_xor();
void randomx_riscv64_vector_program_main_loop_light_mode_data();
void randomx_riscv64_vector_program_main_loop_instructions_end_light_mode();
void randomx_riscv64_vector_program_main_loop_mx_xor_light_mode();
void randomx_riscv64_vector_program_end();
void randomx_riscv64_vector_program_scratchpad_prefetch();
void randomx_riscv64_vector_code_end();
#if defined(__cplusplus)
}
#endif


@@ -39,6 +39,8 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#include "crypto/randomx/jit_compiler_x86_static.hpp"
#elif (XMRIG_ARM == 8)
#include "crypto/randomx/jit_compiler_a64_static.hpp"
#elif defined(__riscv) && defined(__riscv_xlen) && (__riscv_xlen == 64)
#include "crypto/randomx/jit_compiler_rv64_static.hpp"
#endif
#include "backend/cpu/Cpu.h"
@@ -190,7 +192,7 @@ RandomX_ConfigurationBase::RandomX_ConfigurationBase()
# endif
}
-#if (XMRIG_ARM == 8)
+#if (XMRIG_ARM == 8) || defined(XMRIG_RISCV)
static uint32_t Log2(size_t value) { return (value > 1) ? (Log2(value / 2) + 1) : 0; }
#endif
@@ -274,6 +276,17 @@ typedef void(randomx::JitCompilerX86::* InstructionGeneratorX86_2)(const randomx
#define JIT_HANDLE(x, prev) randomx::JitCompilerA64::engine[k] = &randomx::JitCompilerA64::h_##x
#elif defined(XMRIG_RISCV)
Log2_ScratchpadL1 = Log2(ScratchpadL1_Size);
Log2_ScratchpadL2 = Log2(ScratchpadL2_Size);
Log2_ScratchpadL3 = Log2(ScratchpadL3_Size);
#define JIT_HANDLE(x, prev) do { \
randomx::JitCompilerRV64::engine[k] = &randomx::JitCompilerRV64::v1_##x; \
randomx::JitCompilerRV64::inst_map[k] = static_cast<uint8_t>(randomx::InstructionType::x); \
} while (0)
#else
#define JIT_HANDLE(x, prev)
#endif


@@ -133,7 +133,7 @@ struct RandomX_ConfigurationBase
uint32_t ScratchpadL3Mask_Calculated;
uint32_t ScratchpadL3Mask64_Calculated;
-# if (XMRIG_ARM == 8)
+# if (XMRIG_ARM == 8) || defined(XMRIG_RISCV)
uint32_t Log2_ScratchpadL1;
uint32_t Log2_ScratchpadL2;
uint32_t Log2_ScratchpadL3;


@@ -73,8 +73,20 @@ uint64_t randomx_reciprocal(uint64_t divisor) {
#if !RANDOMX_HAVE_FAST_RECIPROCAL
#ifdef __GNUC__
uint64_t randomx_reciprocal_fast(uint64_t divisor)
{
const uint64_t q = (1ULL << 63) / divisor;
const uint64_t r = (1ULL << 63) % divisor;
const uint64_t shift = 64 - __builtin_clzll(divisor);
return (q << shift) + ((r << shift) / divisor);
}
#else
uint64_t randomx_reciprocal_fast(uint64_t divisor) {
return randomx_reciprocal(divisor);
}
#endif
#endif
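/* The GCC path above avoids a 128-bit division: writing 2^63 = q*divisor + r
   gives 2^(63+shift) / divisor == (q << shift) + (r << shift) / divisor
   exactly. One worked value, divisor = 3: shift = 2, q = 3074457345618258602,
   r = 2, so the result is (q << 2) + (8 / 3) = 12297829382473034410,
   i.e. floor(2^65 / 3). */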


@@ -39,6 +39,9 @@ alignas(64) uint32_t lutDec1[256];
alignas(64) uint32_t lutDec2[256];
alignas(64) uint32_t lutDec3[256];
alignas(64) uint8_t lutEncIndex[4][32];
alignas(64) uint8_t lutDecIndex[4][32];
static uint32_t mul_gf2(uint32_t b, uint32_t c)
{
uint32_t s = 0;
@@ -115,5 +118,49 @@ static struct SAESInitializer
lutDec2[i] = w; w = (w << 8) | (w >> 24);
lutDec3[i] = w;
}
memset(lutEncIndex, -1, sizeof(lutEncIndex));
memset(lutDecIndex, -1, sizeof(lutDecIndex));
lutEncIndex[0][ 0] = 0;
lutEncIndex[0][ 4] = 4;
lutEncIndex[0][ 8] = 8;
lutEncIndex[0][12] = 12;
lutEncIndex[1][ 0] = 5;
lutEncIndex[1][ 4] = 9;
lutEncIndex[1][ 8] = 13;
lutEncIndex[1][12] = 1;
lutEncIndex[2][ 0] = 10;
lutEncIndex[2][ 4] = 14;
lutEncIndex[2][ 8] = 2;
lutEncIndex[2][12] = 6;
lutEncIndex[3][ 0] = 15;
lutEncIndex[3][ 4] = 3;
lutEncIndex[3][ 8] = 7;
lutEncIndex[3][12] = 11;
lutDecIndex[0][ 0] = 0;
lutDecIndex[0][ 4] = 4;
lutDecIndex[0][ 8] = 8;
lutDecIndex[0][12] = 12;
lutDecIndex[1][ 0] = 13;
lutDecIndex[1][ 4] = 1;
lutDecIndex[1][ 8] = 5;
lutDecIndex[1][12] = 9;
lutDecIndex[2][ 0] = 10;
lutDecIndex[2][ 4] = 14;
lutDecIndex[2][ 8] = 2;
lutDecIndex[2][12] = 6;
lutDecIndex[3][ 0] = 7;
lutDecIndex[3][ 4] = 11;
lutDecIndex[3][ 8] = 15;
lutDecIndex[3][12] = 3;
for (uint32_t i = 0; i < 4; ++i) {
for (uint32_t j = 0; j < 16; j += 4) {
lutEncIndex[i][j + 16] = lutEncIndex[i][j] + 16;
lutDecIndex[i][j + 16] = lutDecIndex[i][j] + 16;
}
}
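// The index tables above encode the AES ShiftRows (enc) and InvShiftRows
// (dec) byte permutations, one output column per group of four entries,
// with 0xFF padding in the unused slots; the +16 copies extend them to a
// second 16-byte state, presumably so a 32-byte vector register can gather
// two AES blocks at once in the new soft-AES path.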
}
} aes_initializer;


@@ -41,6 +41,9 @@ extern uint32_t lutDec1[256];
extern uint32_t lutDec2[256];
extern uint32_t lutDec3[256];
extern uint8_t lutEncIndex[4][32];
extern uint8_t lutDecIndex[4][32];
template<int soft> rx_vec_i128 aesenc(rx_vec_i128 in, rx_vec_i128 key);
template<int soft> rx_vec_i128 aesdec(rx_vec_i128 in, rx_vec_i128 key);


@@ -0,0 +1,12 @@
/* RISC-V - test if the vector extension is present */
.text
.option arch, rv64gcv
.global main
main:
li x5, 4
vsetvli x6, x5, e64, m1, ta, ma
vxor.vv v0, v0, v0
sub x10, x5, x6
ret
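/* Presumably built and run as a compile-time probe: main returns 4 - vl in
   a0/x10, so the exit status is 0 only when a single m1 register holds four
   64-bit elements (VLEN >= 256), which the vector RandomX code assumes. */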


@@ -0,0 +1,9 @@
/* RISC-V - test if the Zba extension is present */
.text
.global main
main:
sh1add x6, x6, x7
li x10, 0
ret


@@ -0,0 +1,9 @@
/* RISC-V - test if the Zbb extension is present */
.text
.global main
main:
ror x6, x6, x7
li x10, 0
ret


@@ -0,0 +1,11 @@
/* RISC-V - test if the prefetch instruction is present */
.text
.option arch, rv64gc_zicbop
.global main
main:
lla x5, main
prefetch.r (x5)
mv x10, x0
ret


@@ -0,0 +1,13 @@
/* RISC-V - test if the vector bit manipulation extension is present */
.text
.option arch, rv64gcv_zvkb
.global main
main:
vsetivli zero, 8, e32, m1, ta, ma
vror.vv v0, v0, v0
vror.vx v0, v0, x5
vror.vi v0, v0, 1
li x10, 0
ret


@@ -0,0 +1,12 @@
/* RISC-V - test if the vector AES (Zvkned) extension is present */
.text
.option arch, rv64gcv_zvkned
.global main
main:
vsetivli zero, 8, e32, m1, ta, ma
vaesem.vv v0, v0
vaesdm.vv v0, v0
li x10, 0
ret


@@ -58,7 +58,7 @@ namespace randomx {
void CompiledVm<softAes>::execute() {
PROFILE_SCOPE(RandomX_JIT_execute);
-# ifdef XMRIG_ARM
+# if defined(XMRIG_ARM) || defined(XMRIG_RISCV)
memcpy(reg.f, config.eMask, sizeof(config.eMask));
# endif
compiler.getProgramFunc()(reg, mem, scratchpad, RandomX_CurrentConfig.ProgramIterations);


@@ -43,6 +43,12 @@ static void init_dataset_wrapper(randomx_dataset *dataset, randomx_cache *cache,
randomx_init_dataset(dataset, cache, startItem, itemCount - (itemCount % 5));
randomx_init_dataset(dataset, cache, startItem + itemCount - 5, 5);
}
#ifdef XMRIG_RISCV
else if (itemCount % 4) {
randomx_init_dataset(dataset, cache, startItem, itemCount - (itemCount % 4));
randomx_init_dataset(dataset, cache, startItem + itemCount - 4, 4);
}
#endif
else {
randomx_init_dataset(dataset, cache, startItem, itemCount);
}
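// The RISC-V vector path generates dataset items four at a time, so a tail
// that is not a multiple of 4 is handled like the % 5 case above: initialize
// the aligned bulk, then redo the last four items; the overlap is harmless
// because each dataset item is a pure function of the cache and item number.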
@@ -209,7 +215,7 @@ void xmrig::RxDataset::allocate(bool hugePages, bool oneGbPages)
return;
}
-m_memory = new VirtualMemory(maxSize(), hugePages, oneGbPages, false, m_node);
+m_memory = new VirtualMemory(maxSize(), hugePages, oneGbPages, false, m_node, VirtualMemory::kDefaultHugePageSize);
if (m_memory->isOneGbPages()) {
m_scratchpadOffset = maxSize() + RANDOMX_CACHE_MAX_SIZE;


@@ -29,9 +29,17 @@ randomx_vm *xmrig::RxVm::create(RxDataset *dataset, uint8_t *scratchpad, bool so
{
int flags = 0;
// On RISC-V, force software AES path even if CPU reports AES capability.
// The RandomX portable intrinsics will throw at runtime when HAVE_AES is not defined
// for this architecture. Until native AES intrinsics are wired for RISC-V, avoid
// setting HARD_AES to prevent "Platform doesn't support hardware AES" aborts.
# ifndef XMRIG_RISCV
if (!softAes) {
flags |= RANDOMX_FLAG_HARD_AES;
}
# else
(void)softAes; // unused on RISC-V to force soft AES
# endif
if (dataset->get()) {
flags |= RANDOMX_FLAG_FULL_MEM;


@@ -115,7 +115,7 @@ static inline void checkHash(const JobBundle &bundle, std::vector<JobResult> &re
static void getResults(JobBundle &bundle, std::vector<JobResult> &results, uint32_t &errors, bool hwAES)
{
const auto &algorithm = bundle.job.algorithm();
-auto memory = new VirtualMemory(algorithm.l3(), false, false, false);
+auto memory = new VirtualMemory(algorithm.l3(), false, false, false, 0, VirtualMemory::kDefaultHugePageSize);
alignas(16) uint8_t hash[32]{ 0 };
if (algorithm.family() == Algorithm::RANDOM_X) {


@@ -2,18 +2,7 @@
* Copyright (c) 2018-2025 SChernykh <https://github.com/SChernykh>
* Copyright (c) 2016-2025 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
-* This program is free software: you can redistribute it and/or modify
-* it under the terms of the GNU General Public License as published by
-* the Free Software Foundation, either version 3 of the License, or
-* (at your option) any later version.
-*
-* This program is distributed in the hope that it will be useful,
-* but WITHOUT ANY WARRANTY; without even the implied warranty of
-* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-* GNU General Public License for more details.
-*
-* You should have received a copy of the GNU General Public License
-* along with this program. If not, see <http://www.gnu.org/licenses/>.
+* SPDX-License-Identifier: GPL-3.0-or-later
*/
#ifndef XMRIG_VERSION_H
@@ -22,18 +11,20 @@
#define APP_ID "xmrig"
#define APP_NAME "XMRig"
#define APP_DESC "XMRig miner"
#define APP_VERSION "6.23.0"
#define APP_VERSION "6.25.1-dev"
#define APP_DOMAIN "xmrig.com"
#define APP_SITE "www.xmrig.com"
#define APP_COPYRIGHT "Copyright (C) 2016-2025 xmrig.com"
#define APP_KIND "miner"
#define APP_VER_MAJOR 6
-#define APP_VER_MINOR 23
-#define APP_VER_PATCH 0
+#define APP_VER_MINOR 25
+#define APP_VER_PATCH 1
#ifdef _MSC_VER
-# if (_MSC_VER >= 1930)
+# if (_MSC_VER >= 1950)
+# define MSVC_VERSION 2026
+# elif (_MSC_VER >=1930 && _MSC_VER < 1950)
# define MSVC_VERSION 2022
# elif (_MSC_VER >= 1920 && _MSC_VER < 1930)
# define MSVC_VERSION 2019
@@ -64,6 +55,10 @@
# define APP_OS "Linux"
#elif defined XMRIG_OS_FREEBSD
# define APP_OS "FreeBSD"
#elif defined XMRIG_OS_OPENBSD
# define APP_OS "OpenBSD"
#elif defined XMRIG_OS_HAIKU
# define APP_OS "Haiku"
#else
# define APP_OS "Unknown OS"
#endif
@@ -73,6 +68,8 @@
#ifdef XMRIG_ARM
# define APP_ARCH "ARMv" STR2(XMRIG_ARM)
#elif defined(XMRIG_RISCV)
# define APP_ARCH "RISC-V"
#else
# if defined(__x86_64__) || defined(__amd64__) || defined(_M_X64) || defined(_M_AMD64)
# define APP_ARCH "x86-64"