mirror of https://github.com/xmrig/xmrig.git synced 2025-12-26 06:00:00 -05:00

Compare commits


2 Commits

Author SHA1 Message Date
Artem Zuikov
07d81c6587 Merge ab5be0b773 into e32731b60b 2024-10-20 18:07:59 +03:00
4ertus2
ab5be0b773 replace new/delete with sp 2024-10-20 18:03:25 +03:00
99 changed files with 609 additions and 1276 deletions

View File

@@ -1,10 +1,3 @@
# v6.22.1
- [#3531](https://github.com/xmrig/xmrig/pull/3531) Always reset nonce on RandomX dataset change.
- [#3534](https://github.com/xmrig/xmrig/pull/3534) Fixed threads auto-config on Zen5.
- [#3535](https://github.com/xmrig/xmrig/pull/3535) RandomX: tweaks for Zen5.
- [#3539](https://github.com/xmrig/xmrig/pull/3539) Added Zen5 to `randomx_boost.sh`.
- [#3540](https://github.com/xmrig/xmrig/pull/3540) Detect AMD engineering samples in `randomx_boost.sh`.
# v6.22.0
- [#2411](https://github.com/xmrig/xmrig/pull/2411) Added support for [Yada](https://yadacoin.io/) (`rx/yada` algorithm).
- [#3492](https://github.com/xmrig/xmrig/pull/3492) Fixed `--background` option on Unix systems.

View File

@@ -1,5 +1,5 @@
Copyright © 2009 CNRS
Copyright © 2009-2024 Inria. All rights reserved.
Copyright © 2009-2023 Inria. All rights reserved.
Copyright © 2009-2013 Université Bordeaux
Copyright © 2009-2011 Cisco Systems, Inc. All rights reserved.
Copyright © 2020 Hewlett Packard Enterprise. All rights reserved.
@@ -17,71 +17,6 @@ bug fixes (and other actions) for each version of hwloc since version
0.9.
Version 2.11.2
--------------
* Add missing CPU info attrs on aarch64 on Linux.
* Use ACPI CPPC on Linux to get better information about cpukinds,
at least on AMD CPUs.
* Fix crash when manipulating cpukinds after topology
duplication, thanks to Hadrien Grasland for the report.
* Fix missing input target checks in memattr functions,
thanks to Hadrien Grasland for the report.
* Fix a memory leak when ignoring NUMA distances on FreeBSD.
* Fix build failure on old Linux distributions without accessat().
* Fix non-Windows importing of XML topologies and CPUID dumps exported
on Windows.
* hwloc-calc --cpuset-output-format systemd-dbus-api now allows
to generate AllowedCPUs information for systemd slices.
See the hwloc-calc manpage for examples. Thanks to Pierre Neyron.
* Some fixes in manpage EXAMPLES and split them into subsections.
Version 2.11.1
--------------
* Fix bash completions, thanks Tavis Rudd.
Version 2.11.0
--------------
* API
+ Add HWLOC_MEMBIND_WEIGHTED_INTERLEAVE memory binding policy on
Linux 6.9+. Thanks to Honggyu Kim for the patch.
- weighted_interleave_membind is added to membind support bits.
- The "weighted" policy is added to the hwloc-bind tool.
+ Add hwloc_obj_set_subtype(). Thanks to Hadrien Grasland for the report.
* GPU support
+ Don't hide the GPU NUMA node on NVIDIA Grace Hopper.
+ Get Intel GPU OpenCL device locality.
+ Add bandwidths between subdevices in the LevelZero XeLinkBandwidth
matrix.
+ Fix PCI Gen4+ link speed of NVIDIA GPU obtained from NVML,
thanks to Akram Sbaih for the report.
* Windows support
+ Fix Windows support when UNICODE is enabled, several hwloc features
were missing, thanks to Martin for the report.
+ Fix the enabling of CUDA in Windows CMake build,
Thanks to Moritz Kreutzer for the patch.
+ Fix CUDA/OpenCL test source path in Windows CMake.
* Tools
+ Option --best-memattr may now return multiple nodes. Additional
configuration flags may be given to tweak its behavior.
+ hwloc-info has a new --get-attr option to get a single attribute.
+ hwloc-info now supports "levels", "support" and "topology"
special keywords for backward compatibility for hwloc 3.0.
+ The --taskset command-line option is superseded by the new
--cpuset-output-format which also allows to export as list.
+ hwloc-calc may now import bitmasks described as a list of bits
with the new "--cpuset-input-format list".
* Misc
+ The MemoryTiersNr info attribute in the root object now says how many
memory tiers were built. Thanks to Antoine Morvan for the report.
+ Fix the management of infinite cpusets in the bitmap printf/sscanf
API as well as in command-line tools.
+ Add section "Compiling software on top of hwloc's C API" in the
documentation with examples for GNU Make and CMake,
thanks to Florent Pruvost for the help.
Version 2.10.0
--------------
* Heterogeneous Memory core improvements

View File

@@ -418,8 +418,14 @@ return 0;
}
hwloc provides a pkg-config executable to obtain relevant compiler and linker
flags. See Compiling software on top of hwloc's C API for details on building
programs on top of hwloc's API using GNU Make or CMake.
flags. For example, it can be used thusly to compile applications that utilize
the hwloc library (assuming GNU Make):
CFLAGS += $(shell pkg-config --cflags hwloc)
LDLIBS += $(shell pkg-config --libs hwloc)
hwloc-hello: hwloc-hello.c
$(CC) hwloc-hello.c $(CFLAGS) -o hwloc-hello $(LDLIBS)
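The hwloc-hello.c source itself is not part of this hunk; as a hedged, minimal sketch (not the exact program shipped with the hwloc docs), a version that just walks the topology levels could look like this:

#include <hwloc.h>
#include <cstdio>

int main() {
    hwloc_topology_t topology;
    hwloc_topology_init(&topology);     // allocate a topology context
    hwloc_topology_load(topology);      // perform the actual discovery
    int depth = hwloc_topology_get_depth(topology);
    for (int d = 0; d < depth; d++)     // one line per topology level
        printf("depth %d: %u x %s\n", d,
               hwloc_get_nbobjs_by_depth(topology, d),
               hwloc_obj_type_string(hwloc_get_depth_type(topology, d)));
    hwloc_topology_destroy(topology);
    return 0;
}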
On a machine with 2 processor packages -- each package of which has two processing
cores -- the output from running hwloc-hello could be something like the

View File

@@ -8,8 +8,8 @@
# Please update HWLOC_VERSION* in contrib/windows/hwloc_config.h too.
major=2
minor=11
release=2
minor=10
release=0
# greek is used for alpha or beta release tags. If it is non-empty,
# it will be appended to the version number. It does not have to be
@@ -22,7 +22,7 @@ greek=
# The date when this release was created
date="Sep 26, 2024"
date="Dec 04, 2023"
# If snapshot=1, then use the value from snapshot_version as the
# entire hwloc version (i.e., ignore major, minor, release, and
@@ -41,6 +41,6 @@ snapshot_version=${major}.${minor}.${release}${greek}-git
# 2. Version numbers are described in the Libtool current:revision:age
# format.
libhwloc_so_version=23:1:8
libhwloc_so_version=22:0:7
# Please also update the <TargetName> lines in contrib/windows/libhwloc.vcxproj

File diff suppressed because it is too large

View File

@@ -1,6 +1,6 @@
/*
* Copyright © 2009 CNRS
* Copyright © 2009-2024 Inria. All rights reserved.
* Copyright © 2009-2023 Inria. All rights reserved.
* Copyright © 2009-2012 Université Bordeaux
* Copyright © 2009-2011 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
@@ -11,10 +11,10 @@
#ifndef HWLOC_CONFIG_H
#define HWLOC_CONFIG_H
#define HWLOC_VERSION "2.11.2"
#define HWLOC_VERSION "2.10.0"
#define HWLOC_VERSION_MAJOR 2
#define HWLOC_VERSION_MINOR 11
#define HWLOC_VERSION_RELEASE 2
#define HWLOC_VERSION_MINOR 10
#define HWLOC_VERSION_RELEASE 0
#define HWLOC_VERSION_GREEK ""
#define __hwloc_restrict

View File

@@ -1,5 +1,5 @@
/*
* Copyright © 2010-2024 Inria. All rights reserved.
* Copyright © 2010-2023 Inria. All rights reserved.
* See COPYING in top-level directory.
*/
@@ -28,18 +28,18 @@ extern "C" {
/** \brief Matrix of distances between a set of objects.
*
* The most common matrix contains latencies between NUMA nodes
* This matrix often contains latencies between NUMA nodes
* (as reported in the System Locality Distance Information Table (SLIT)
* in the ACPI specification), which may or may not be physically accurate.
* It corresponds to the latency for accessing the memory of one node
* from a core in another node.
* The corresponding kind is ::HWLOC_DISTANCES_KIND_MEANS_LATENCY | ::HWLOC_DISTANCES_KIND_FROM_USER.
* The corresponding kind is ::HWLOC_DISTANCES_KIND_FROM_OS | ::HWLOC_DISTANCES_KIND_FROM_USER.
* The name of this distances structure is "NUMALatency".
* Other distance structures include "XGMIBandwidth", "XGMIHops",
* "XeLinkBandwidth" and "NVLinkBandwidth".
*
* The matrix may also contain bandwidths between random sets of objects,
* possibly provided by the user, as specified in the \p kind attribute.
* Other common distance structures include "XGMIBandwidth", "XGMIHops",
* "XeLinkBandwidth" and "NVLinkBandwidth".
*
* Pointers \p objs and \p values should not be replaced, reallocated, freed, etc.
* However callers are allowed to modify \p kind as well as the contents
@@ -70,10 +70,11 @@ struct hwloc_distances_s {
* The \p kind attribute of struct hwloc_distances_s is a OR'ed set
* of kinds.
*
* Each distance matrix may have only one kind among HWLOC_DISTANCES_KIND_FROM_*
* specifying where distance information comes from,
* and one kind among HWLOC_DISTANCES_KIND_MEANS_* specifying
* whether values are latencies or bandwidths.
* A kind of format HWLOC_DISTANCES_KIND_FROM_* specifies where the
* distance information comes from, if known.
*
* A kind of format HWLOC_DISTANCES_KIND_MEANS_* specifies whether
* values are latencies or bandwidths, if applicable.
*/
enum hwloc_distances_kind_e {
/** \brief These distances were obtained from the operating system or hardware.
@@ -356,8 +357,6 @@ typedef void * hwloc_distances_add_handle_t;
* Otherwise, it will be copied internally and may later be freed by the caller.
*
* \p kind specifies the kind of distance as a OR'ed set of ::hwloc_distances_kind_e.
* Only one kind of meaning and one kind of provenance may be given if appropriate
* (e.g. ::HWLOC_DISTANCES_KIND_MEANS_BANDWIDTH and ::HWLOC_DISTANCES_KIND_FROM_USER).
* Kind ::HWLOC_DISTANCES_KIND_HETEROGENEOUS_TYPES will be automatically set
* according to objects having different types in hwloc_distances_add_values().
*
@@ -404,8 +403,7 @@ HWLOC_DECLSPEC int hwloc_distances_add_values(hwloc_topology_t topology,
/** \brief Flags for adding a new distances to a topology. */
enum hwloc_distances_add_flag_e {
/** \brief Try to group objects based on the newly provided distance information.
* Grouping is only performed when the distances structure contains latencies,
* and when all objects are of the same type.
* This is ignored for distances between objects of different types.
* \hideinitializer
*/
HWLOC_DISTANCES_ADD_FLAG_GROUP = (1UL<<0),
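For context, a hedged sketch of how a consumer reads such a matrix (for instance "NUMALatency") through the public API; error handling is trimmed for brevity:

#include <hwloc.h>
#include <hwloc/distances.h>
#include <cstdio>

int main() {
    hwloc_topology_t topology;
    hwloc_topology_init(&topology);
    hwloc_topology_load(topology);

    struct hwloc_distances_s *dist[1];
    unsigned nr = 1;                    // retrieve at most one matrix
    // filter on latency matrices only
    if (!hwloc_distances_get(topology, &nr, dist,
                             HWLOC_DISTANCES_KIND_MEANS_LATENCY, 0) && nr) {
        unsigned n = dist[0]->nbobjs;
        for (unsigned i = 0; i < n; i++)
            for (unsigned j = 0; j < n; j++)
                printf("%u -> %u: %llu\n", i, j,
                       (unsigned long long) dist[0]->values[i * n + j]);
        hwloc_distances_release(topology, dist[0]);
    }
    hwloc_topology_destroy(topology);
    return 0;
}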

View File

@@ -1,6 +1,6 @@
/*
* Copyright © 2009 CNRS
* Copyright © 2009-2024 Inria. All rights reserved.
* Copyright © 2009-2023 Inria. All rights reserved.
* Copyright © 2009-2012 Université Bordeaux
* Copyright © 2009-2010 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
@@ -946,14 +946,6 @@ enum hwloc_distrib_flags_e {
*
* \return 0 on success, -1 on error.
*
* \note On hybrid CPUs (or asymmetric platforms), distribution may be suboptimal
* since the number of cores or PUs inside packages or below caches may vary
* (the top-down recursive partitioning ignores these numbers until reaching their levels).
* Hence it is recommended to distribute only inside a single homogeneous domain.
* For instance on a CPU with energy-efficient E-cores and high-performance P-cores,
* one should distribute separately N tasks on E-cores and M tasks on P-cores
* instead of trying to distribute directly M+N tasks on the entire CPUs.
*
* \note This function requires the \p roots objects to have a CPU set.
*/
static __hwloc_inline int
@@ -968,7 +960,7 @@ hwloc_distrib(hwloc_topology_t topology,
unsigned given, givenweight;
hwloc_cpuset_t *cpusetp = set;
if (!n || (flags & ~HWLOC_DISTRIB_FLAG_REVERSE)) {
if (flags & ~HWLOC_DISTRIB_FLAG_REVERSE) {
errno = EINVAL;
return -1;
}
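To make the removed recommendation concrete, here is a hedged sketch of hwloc_distrib() usage; per the deleted note, on a hybrid CPU one would call it once per homogeneous domain (P-core roots, then E-core roots) instead of once from the machine root as shown:

#include <hwloc.h>
#include <climits>
#include <cstdio>

int main() {
    hwloc_topology_t topology;
    hwloc_topology_init(&topology);
    hwloc_topology_load(topology);

    hwloc_obj_t root = hwloc_get_root_obj(topology);
    hwloc_cpuset_t sets[4];                        // one cpuset per task
    for (auto &s : sets) s = hwloc_bitmap_alloc();

    // distribute 4 tasks below the single root, down to the deepest level
    if (!hwloc_distrib(topology, &root, 1, sets, 4, INT_MAX, 0)) {
        char buf[128];
        for (auto s : sets) {
            hwloc_bitmap_snprintf(buf, sizeof buf, s);
            printf("%s\n", buf);
        }
    }
    for (auto s : sets) hwloc_bitmap_free(s);
    hwloc_topology_destroy(topology);
    return 0;
}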

View File

@@ -1,5 +1,5 @@
/*
* Copyright © 2019-2024 Inria. All rights reserved.
* Copyright © 2019-2023 Inria. All rights reserved.
* See COPYING in top-level directory.
*/
@@ -69,10 +69,7 @@ extern "C" {
* @{
*/
/** \brief Predefined memory attribute IDs.
* See ::hwloc_memattr_id_t for the generic definition of IDs
* for predefined or custom attributes.
*/
/** \brief Memory node attributes. */
enum hwloc_memattr_id_e {
/** \brief
* The \"Capacity\" is returned in bytes (local_memory attribute in objects).
@@ -81,8 +78,6 @@ enum hwloc_memattr_id_e {
*
* No initiator is involved when looking at this attribute.
* The corresponding attribute flags are ::HWLOC_MEMATTR_FLAG_HIGHER_FIRST.
*
* Capacity values may not be modified using hwloc_memattr_set_value().
* \hideinitializer
*/
HWLOC_MEMATTR_ID_CAPACITY = 0,
@@ -98,8 +93,6 @@ enum hwloc_memattr_id_e {
*
* No initiator is involved when looking at this attribute.
* The corresponding attribute flags are ::HWLOC_MEMATTR_FLAG_HIGHER_FIRST.
* Locality values may not be modified using hwloc_memattr_set_value().
* \hideinitializer
*/
HWLOC_MEMATTR_ID_LOCALITY = 1,
@@ -180,19 +173,11 @@ enum hwloc_memattr_id_e {
/* TODO persistence? */
HWLOC_MEMATTR_ID_MAX /**< \private
* Sentinel value for predefined attributes.
* Dynamically registered custom attributes start here.
*/
HWLOC_MEMATTR_ID_MAX /**< \private Sentinel value */
};
/** \brief A memory attribute identifier.
*
* hwloc predefines some commonly-used attributes in ::hwloc_memattr_id_e.
* One may then dynamically register custom ones with hwloc_memattr_register(),
* they will be assigned IDs immediately after the predefined ones.
* See \ref hwlocality_memattrs_manage for more information about
* existing attribute IDs.
* May be either one of ::hwloc_memattr_id_e or a new id returned by hwloc_memattr_register().
*/
typedef unsigned hwloc_memattr_id_t;
@@ -298,10 +283,6 @@ hwloc_get_local_numanode_objs(hwloc_topology_t topology,
* (it does not have the flag ::HWLOC_MEMATTR_FLAG_NEED_INITIATOR),
* location \p initiator is ignored and may be \c NULL.
*
* \p target_node cannot be \c NULL. If \p attribute is ::HWLOC_MEMATTR_ID_CAPACITY,
* \p target_node must be a NUMA node. If it is ::HWLOC_MEMATTR_ID_LOCALITY,
* \p target_node must have a CPU set.
*
* \p flags must be \c 0 for now.
*
* \return 0 on success.
@@ -371,8 +352,6 @@ hwloc_memattr_get_best_target(hwloc_topology_t topology,
* The returned initiator should not be modified or freed,
* it belongs to the topology.
*
* \p target_node cannot be \c NULL.
*
* \p flags must be \c 0 for now.
*
* \return 0 on success.
@@ -383,10 +362,100 @@ hwloc_memattr_get_best_target(hwloc_topology_t topology,
HWLOC_DECLSPEC int
hwloc_memattr_get_best_initiator(hwloc_topology_t topology,
hwloc_memattr_id_t attribute,
hwloc_obj_t target_node,
hwloc_obj_t target,
unsigned long flags,
struct hwloc_location *best_initiator, hwloc_uint64_t *value);
/** @} */
/** \defgroup hwlocality_memattrs_manage Managing memory attributes
* @{
*/
/** \brief Return the name of a memory attribute.
*
* \return 0 on success.
* \return -1 with errno set to \c EINVAL if the attribute does not exist.
*/
HWLOC_DECLSPEC int
hwloc_memattr_get_name(hwloc_topology_t topology,
hwloc_memattr_id_t attribute,
const char **name);
/** \brief Return the flags of the given attribute.
*
* Flags are a OR'ed set of ::hwloc_memattr_flag_e.
*
* \return 0 on success.
* \return -1 with errno set to \c EINVAL if the attribute does not exist.
*/
HWLOC_DECLSPEC int
hwloc_memattr_get_flags(hwloc_topology_t topology,
hwloc_memattr_id_t attribute,
unsigned long *flags);
/** \brief Memory attribute flags.
* Given to hwloc_memattr_register() and returned by hwloc_memattr_get_flags().
*/
enum hwloc_memattr_flag_e {
/** \brief The best nodes for this memory attribute are those with the higher values.
* For instance Bandwidth.
*/
HWLOC_MEMATTR_FLAG_HIGHER_FIRST = (1UL<<0),
/** \brief The best nodes for this memory attribute are those with the lower values.
* For instance Latency.
*/
HWLOC_MEMATTR_FLAG_LOWER_FIRST = (1UL<<1),
/** \brief The value returned for this memory attribute depends on the given initiator.
* For instance Bandwidth and Latency, but not Capacity.
*/
HWLOC_MEMATTR_FLAG_NEED_INITIATOR = (1UL<<2)
};
/** \brief Register a new memory attribute.
*
* Add a specific memory attribute that is not defined in ::hwloc_memattr_id_e.
* Flags are a OR'ed set of ::hwloc_memattr_flag_e. It must contain at least
* one of ::HWLOC_MEMATTR_FLAG_HIGHER_FIRST or ::HWLOC_MEMATTR_FLAG_LOWER_FIRST.
*
* \return 0 on success.
* \return -1 with errno set to \c EBUSY if another attribute already uses this name.
*/
HWLOC_DECLSPEC int
hwloc_memattr_register(hwloc_topology_t topology,
const char *name,
unsigned long flags,
hwloc_memattr_id_t *id);
/** \brief Set an attribute value for a specific target NUMA node.
*
* If the attribute does not relate to a specific initiator
* (it does not have the flag ::HWLOC_MEMATTR_FLAG_NEED_INITIATOR),
* location \p initiator is ignored and may be \c NULL.
*
* The initiator will be copied into the topology,
* the caller should free anything allocated to store the initiator,
* for instance the cpuset.
*
* \p flags must be \c 0 for now.
*
* \note The initiator \p initiator should be of type ::HWLOC_LOCATION_TYPE_CPUSET
* when referring to accesses performed by CPU cores.
* ::HWLOC_LOCATION_TYPE_OBJECT is currently unused internally by hwloc,
* but users may for instance use it to provide custom information about
* host memory accesses performed by GPUs.
*
* \return 0 on success or -1 on error.
*/
HWLOC_DECLSPEC int
hwloc_memattr_set_value(hwloc_topology_t topology,
hwloc_memattr_id_t attribute,
hwloc_obj_t target_node,
struct hwloc_location *initiator,
unsigned long flags,
hwloc_uint64_t value);
/** \brief Return the target NUMA nodes that have some values for a given attribute.
*
* Return targets for the given attribute in the \p targets array
@@ -450,8 +519,6 @@ hwloc_memattr_get_targets(hwloc_topology_t topology,
* The returned initiators should not be modified or freed,
* they belong to the topology.
*
* \p target_node cannot be \c NULL.
*
* \p flags must be \c 0 for now.
*
* If the attribute does not relate to a specific initiator
@@ -471,131 +538,6 @@ hwloc_memattr_get_initiators(hwloc_topology_t topology,
hwloc_obj_t target_node,
unsigned long flags,
unsigned *nr, struct hwloc_location *initiators, hwloc_uint64_t *values);
/** @} */
/** \defgroup hwlocality_memattrs_manage Managing memory attributes
*
* Memory attribues are identified by an ID (::hwloc_memattr_id_t)
* and a name. hwloc_memattr_get_name() and hwloc_memattr_get_by_name()
* convert between them (or return error if the attribute does not exist).
*
* The set of valid ::hwloc_memattr_id_t is a contiguous set starting at \c 0.
* It first contains predefined attributes, as listed
* in ::hwloc_memattr_id_e (from \c 0 to \c HWLOC_MEMATTR_ID_MAX-1).
* Then custom attributes may be dynamically registered with
* hwloc_memattr_register(). They will get the following IDs
* (\c HWLOC_MEMATTR_ID_MAX for the first one, etc.).
*
* To iterate over all valid attributes
* (either predefined or dynamically registered custom ones),
* one may iterate over IDs starting from \c 0 until hwloc_memattr_get_name()
* or hwloc_memattr_get_flags() returns an error.
*
* The values for an existing attribute or for custom dynamically registered ones
* may be set or modified with hwloc_memattr_set_value().
*
* @{
*/
/** \brief Return the name of a memory attribute.
*
* The output pointer \p name cannot be \c NULL.
*
* \return 0 on success.
* \return -1 with errno set to \c EINVAL if the attribute does not exist.
*/
HWLOC_DECLSPEC int
hwloc_memattr_get_name(hwloc_topology_t topology,
hwloc_memattr_id_t attribute,
const char **name);
/** \brief Return the flags of the given attribute.
*
* Flags are a OR'ed set of ::hwloc_memattr_flag_e.
*
* The output pointer \p flags cannot be \c NULL.
*
* \return 0 on success.
* \return -1 with errno set to \c EINVAL if the attribute does not exist.
*/
HWLOC_DECLSPEC int
hwloc_memattr_get_flags(hwloc_topology_t topology,
hwloc_memattr_id_t attribute,
unsigned long *flags);
/** \brief Memory attribute flags.
* Given to hwloc_memattr_register() and returned by hwloc_memattr_get_flags().
*/
enum hwloc_memattr_flag_e {
/** \brief The best nodes for this memory attribute are those with the higher values.
* For instance Bandwidth.
*/
HWLOC_MEMATTR_FLAG_HIGHER_FIRST = (1UL<<0),
/** \brief The best nodes for this memory attribute are those with the lower values.
* For instance Latency.
*/
HWLOC_MEMATTR_FLAG_LOWER_FIRST = (1UL<<1),
/** \brief The value returned for this memory attribute depends on the given initiator.
* For instance Bandwidth and Latency, but not Capacity.
*/
HWLOC_MEMATTR_FLAG_NEED_INITIATOR = (1UL<<2)
};
/** \brief Register a new memory attribute.
*
* Add a new custom memory attribute.
* Flags are a OR'ed set of ::hwloc_memattr_flag_e. It must contain one of
* ::HWLOC_MEMATTR_FLAG_HIGHER_FIRST or ::HWLOC_MEMATTR_FLAG_LOWER_FIRST but not both.
*
* The new attribute \p id is immediately after the last existing attribute ID
* (which is either the ID of the last registered attribute if any,
* or the ID of the last predefined attribute in ::hwloc_memattr_id_e).
*
* \return 0 on success.
* \return -1 with errno set to \c EINVAL if an invalid set of flags is given.
* \return -1 with errno set to \c EBUSY if another attribute already uses this name.
*/
HWLOC_DECLSPEC int
hwloc_memattr_register(hwloc_topology_t topology,
const char *name,
unsigned long flags,
hwloc_memattr_id_t *id);
/** \brief Set an attribute value for a specific target NUMA node.
*
* If the attribute does not relate to a specific initiator
* (it does not have the flag ::HWLOC_MEMATTR_FLAG_NEED_INITIATOR),
* location \p initiator is ignored and may be \c NULL.
*
* The initiator will be copied into the topology,
* the caller should free anything allocated to store the initiator,
* for instance the cpuset.
*
* \p target_node cannot be \c NULL.
*
* \p attribute cannot be ::HWLOC_MEMATTR_FLAG_ID_CAPACITY or
* ::HWLOC_MEMATTR_FLAG_ID_LOCALITY.
*
* \p flags must be \c 0 for now.
*
* \note The initiator \p initiator should be of type ::HWLOC_LOCATION_TYPE_CPUSET
* when referring to accesses performed by CPU cores.
* ::HWLOC_LOCATION_TYPE_OBJECT is currently unused internally by hwloc,
* but users may for instance use it to provide custom information about
* host memory accesses performed by GPUs.
*
* \return 0 on success or -1 on error.
*/
HWLOC_DECLSPEC int
hwloc_memattr_set_value(hwloc_topology_t topology,
hwloc_memattr_id_t attribute,
hwloc_obj_t target_node,
struct hwloc_location *initiator,
unsigned long flags,
hwloc_uint64_t value);
/** @} */
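As a hedged illustration of the register/set/get flow documented above (the attribute name "CustomBandwidth" and the value 42 are invented for the example):

#include <hwloc.h>
#include <hwloc/memattrs.h>
#include <cstdio>

int main() {
    hwloc_topology_t topology;
    hwloc_topology_init(&topology);
    hwloc_topology_load(topology);

    hwloc_memattr_id_t id;              // assigned right after the predefined IDs
    hwloc_memattr_register(topology, "CustomBandwidth",
                           HWLOC_MEMATTR_FLAG_HIGHER_FIRST
                           | HWLOC_MEMATTR_FLAG_NEED_INITIATOR, &id);

    hwloc_obj_t node = hwloc_get_obj_by_type(topology, HWLOC_OBJ_NUMANODE, 0);
    struct hwloc_location initiator;
    initiator.type = HWLOC_LOCATION_TYPE_CPUSET;
    initiator.location.cpuset = node->cpuset;   // accesses from the node's own CPUs

    hwloc_uint64_t value = 0;
    hwloc_memattr_set_value(topology, id, node, &initiator, 0, 42);
    hwloc_memattr_get_value(topology, id, node, &initiator, 0, &value);
    printf("CustomBandwidth = %llu\n", (unsigned long long) value);

    hwloc_topology_destroy(topology);
    return 0;
}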
#ifdef __cplusplus

View File

@@ -41,15 +41,6 @@ extern "C" {
*/
/* Copyright (c) 2008-2018 The Khronos Group Inc. */
/* needs "cl_khr_pci_bus_info" device extension, but not strictly required for clGetDeviceInfo() */
typedef struct {
cl_uint pci_domain;
cl_uint pci_bus;
cl_uint pci_device;
cl_uint pci_function;
} hwloc_cl_device_pci_bus_info_khr;
#define HWLOC_CL_DEVICE_PCI_BUS_INFO_KHR 0x410F
/* needs "cl_amd_device_attribute_query" device extension, but not strictly required for clGetDeviceInfo() */
#define HWLOC_CL_DEVICE_TOPOLOGY_AMD 0x4037
typedef union {
@@ -87,19 +78,9 @@ hwloc_opencl_get_device_pci_busid(cl_device_id device,
unsigned *domain, unsigned *bus, unsigned *dev, unsigned *func)
{
hwloc_cl_device_topology_amd amdtopo;
hwloc_cl_device_pci_bus_info_khr khrbusinfo;
cl_uint nvbus, nvslot, nvdomain;
cl_int clret;
clret = clGetDeviceInfo(device, HWLOC_CL_DEVICE_PCI_BUS_INFO_KHR, sizeof(khrbusinfo), &khrbusinfo, NULL);
if (CL_SUCCESS == clret) {
*domain = (unsigned) khrbusinfo.pci_domain;
*bus = (unsigned) khrbusinfo.pci_bus;
*dev = (unsigned) khrbusinfo.pci_device;
*func = (unsigned) khrbusinfo.pci_function;
return 0;
}
clret = clGetDeviceInfo(device, HWLOC_CL_DEVICE_TOPOLOGY_AMD, sizeof(amdtopo), &amdtopo, NULL);
if (CL_SUCCESS == clret
&& HWLOC_CL_DEVICE_TOPOLOGY_TYPE_PCIE_AMD == amdtopo.raw.type) {

View File

@@ -1,5 +1,5 @@
/*
* Copyright © 2013-2024 Inria. All rights reserved.
* Copyright © 2013-2022 Inria. All rights reserved.
* Copyright © 2016 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
*/
@@ -645,19 +645,6 @@ HWLOC_DECLSPEC struct hwloc_obj * hwloc_pci_find_parent_by_busid(struct hwloc_to
*/
HWLOC_DECLSPEC struct hwloc_obj * hwloc_pci_find_by_busid(struct hwloc_topology *topology, unsigned domain, unsigned bus, unsigned dev, unsigned func);
/** @} */
/** \defgroup hwlocality_components_distances Components and Plugins: distances
*
* \note These structures and functions may change when ::HWLOC_COMPONENT_ABI is modified.
*
* @{
*/
/** \brief Handle to a new distances structure during its addition to the topology. */
typedef void * hwloc_backend_distances_add_handle_t;

View File

@@ -1,6 +1,6 @@
/*
* Copyright © 2009-2011 Cisco Systems, Inc. All rights reserved.
* Copyright © 2010-2024 Inria. All rights reserved.
* Copyright © 2010-2022 Inria. All rights reserved.
* See COPYING in top-level directory.
*/
@@ -210,7 +210,6 @@ extern "C" {
#define hwloc_obj_get_info_by_name HWLOC_NAME(obj_get_info_by_name)
#define hwloc_obj_add_info HWLOC_NAME(obj_add_info)
#define hwloc_obj_set_subtype HWLOC_NAME(obj_set_subtype)
#define HWLOC_CPUBIND_PROCESS HWLOC_NAME_CAPS(CPUBIND_PROCESS)
#define HWLOC_CPUBIND_THREAD HWLOC_NAME_CAPS(CPUBIND_THREAD)
@@ -233,7 +232,6 @@ extern "C" {
#define HWLOC_MEMBIND_FIRSTTOUCH HWLOC_NAME_CAPS(MEMBIND_FIRSTTOUCH)
#define HWLOC_MEMBIND_BIND HWLOC_NAME_CAPS(MEMBIND_BIND)
#define HWLOC_MEMBIND_INTERLEAVE HWLOC_NAME_CAPS(MEMBIND_INTERLEAVE)
#define HWLOC_MEMBIND_WEIGHTED_INTERLEAVE HWLOC_NAME_CAPS(MEMBIND_WEIGHTED_INTERLEAVE)
#define HWLOC_MEMBIND_NEXTTOUCH HWLOC_NAME_CAPS(MEMBIND_NEXTTOUCH)
#define HWLOC_MEMBIND_MIXED HWLOC_NAME_CAPS(MEMBIND_MIXED)
@@ -562,7 +560,6 @@ extern "C" {
/* opencl.h */
#define hwloc_cl_device_pci_bus_info_khr HWLOC_NAME(cl_device_pci_bus_info_khr)
#define hwloc_cl_device_topology_amd HWLOC_NAME(cl_device_topology_amd)
#define hwloc_opencl_get_device_pci_busid HWLOC_NAME(opencl_get_device_pci_ids)
#define hwloc_opencl_get_device_cpuset HWLOC_NAME(opencl_get_device_cpuset)
@@ -718,8 +715,6 @@ extern "C" {
#define hwloc__obj_type_is_dcache HWLOC_NAME(_obj_type_is_dcache)
#define hwloc__obj_type_is_icache HWLOC_NAME(_obj_type_is_icache)
#define hwloc__pci_link_speed HWLOC_NAME(_pci_link_speed)
/* private/cpuid-x86.h */
#define hwloc_have_x86_cpuid HWLOC_NAME(have_x86_cpuid)

View File

@@ -1,6 +1,6 @@
/*
* Copyright © 2009, 2011, 2012 CNRS. All rights reserved.
* Copyright © 2009-2020 Inria. All rights reserved.
* Copyright © 2009-2021 Inria. All rights reserved.
* Copyright © 2009, 2011, 2012, 2015 Université Bordeaux. All rights reserved.
* Copyright © 2009-2020 Cisco Systems, Inc. All rights reserved.
* $COPYRIGHT$
@@ -17,10 +17,6 @@
#define HWLOC_HAVE_MSVC_CPUIDEX 1
/* #undef HAVE_MKSTEMP */
#define HWLOC_HAVE_X86_CPUID 1
/* Define to 1 if the system has the type `CACHE_DESCRIPTOR'. */
#define HAVE_CACHE_DESCRIPTOR 0
@@ -132,7 +128,8 @@
#define HAVE_DECL__SC_PAGE_SIZE 0
/* Define to 1 if you have the <dirent.h> header file. */
/* #undef HAVE_DIRENT_H */
/* #define HAVE_DIRENT_H 1 */
#undef HAVE_DIRENT_H
/* Define to 1 if you have the <dlfcn.h> header file. */
/* #undef HAVE_DLFCN_H */
@@ -285,7 +282,7 @@
#define HAVE_STRING_H 1
/* Define to 1 if you have the `strncasecmp' function. */
/* #undef HAVE_STRNCASECMP */
#define HAVE_STRNCASECMP 1
/* Define to '1' if sysctl is present and usable */
/* #undef HAVE_SYSCTL */
@@ -326,7 +323,8 @@
/* #undef HAVE_UNAME */
/* Define to 1 if you have the <unistd.h> header file. */
/* #undef HAVE_UNISTD_H */
/* #define HAVE_UNISTD_H 1 */
#undef HAVE_UNISTD_H
/* Define to 1 if you have the `uselocale' function. */
/* #undef HAVE_USELOCALE */
@@ -661,7 +659,7 @@
#define hwloc_pid_t HANDLE
/* Define this to either strncasecmp or strncmp */
/* #undef hwloc_strncasecmp */
#define hwloc_strncasecmp strncasecmp
/* Define this to the thread ID type */
#define hwloc_thread_t HANDLE

View File

@@ -11,22 +11,6 @@
#ifndef HWLOC_PRIVATE_CPUID_X86_H
#define HWLOC_PRIVATE_CPUID_X86_H
/* A macro for annotating memory as uninitialized when building with MSAN
* (and otherwise having no effect). See below for why this is used with
* our custom assembly.
*/
#ifdef __has_feature
#define HWLOC_HAS_FEATURE(name) __has_feature(name)
#else
#define HWLOC_HAS_FEATURE(name) 0
#endif
#if HWLOC_HAS_FEATURE(memory_sanitizer) || defined(MEMORY_SANITIZER)
#include <sanitizer/msan_interface.h>
#define HWLOC_ANNOTATE_MEMORY_IS_INITIALIZED(ptr, len) __msan_unpoison(ptr, len)
#else
#define HWLOC_ANNOTATE_MEMORY_IS_INITIALIZED(ptr, len)
#endif
#if (defined HWLOC_X86_32_ARCH) && (!defined HWLOC_HAVE_MSVC_CPUIDEX)
static __hwloc_inline int hwloc_have_x86_cpuid(void)
{
@@ -87,18 +71,12 @@ static __hwloc_inline void hwloc_x86_cpuid(unsigned *eax, unsigned *ebx, unsigne
"movl %k2,%1\n\t"
: "+a" (*eax), "=m" (*ebx), "=&r"(sav_rbx),
"+c" (*ecx), "=&d" (*edx));
/* MSAN does not recognize the effect of the above assembly on the memory operand
* (`"=m"(*ebx)`). This may get improved in MSAN at some point in the future, e.g.
* see https://github.com/llvm/llvm-project/pull/77393. */
HWLOC_ANNOTATE_MEMORY_IS_INITIALIZED(ebx, sizeof *ebx);
#elif defined(HWLOC_X86_32_ARCH)
__asm__(
"mov %%ebx,%1\n\t"
"cpuid\n\t"
"xchg %%ebx,%1\n\t"
: "+a" (*eax), "=&SD" (*ebx), "+c" (*ecx), "=&d" (*edx));
/* See above. */
HWLOC_ANNOTATE_MEMORY_IS_INITIALIZED(ebx, sizeof *ebx);
#else
#error unknown architecture
#endif

View File

@@ -1,6 +1,6 @@
/*
* Copyright © 2009 CNRS
* Copyright © 2009-2024 Inria. All rights reserved.
* Copyright © 2009-2019 Inria. All rights reserved.
* Copyright © 2009-2012 Université Bordeaux
* Copyright © 2011 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
@@ -573,35 +573,4 @@ typedef SSIZE_T ssize_t;
# endif
#endif
static __inline float
hwloc__pci_link_speed(unsigned generation, unsigned lanes)
{
float lanespeed;
/*
* These are single-direction bandwidths only.
*
* Gen1 used NRZ with 8/10 encoding.
* PCIe Gen1 = 2.5GT/s signal-rate per lane x 8/10 = 0.25GB/s data-rate per lane
* PCIe Gen2 = 5 GT/s signal-rate per lane x 8/10 = 0.5 GB/s data-rate per lane
* Gen3 switched to NRZ with 128/130 encoding.
* PCIe Gen3 = 8 GT/s signal-rate per lane x 128/130 = 1 GB/s data-rate per lane
* PCIe Gen4 = 16 GT/s signal-rate per lane x 128/130 = 2 GB/s data-rate per lane
* PCIe Gen5 = 32 GT/s signal-rate per lane x 128/130 = 4 GB/s data-rate per lane
* Gen6 switched to PAM with 242/256 FLIT (242B payload protected by 8B CRC + 6B FEC).
* PCIe Gen6 = 64 GT/s signal-rate per lane x 242/256 = 8 GB/s data-rate per lane
* PCIe Gen7 = 128GT/s signal-rate per lane x 242/256 = 16 GB/s data-rate per lane
*/
/* lanespeed in Gbit/s */
if (generation <= 2)
lanespeed = 2.5f * generation * 0.8f;
else if (generation <= 5)
lanespeed = 8.0f * (1<<(generation-3)) * 128/130;
else
lanespeed = 8.0f * (1<<(generation-3)) * 242/256; /* assume Gen8 will be 256 GT/s and so on */
/* linkspeed in GB/s */
return lanespeed * lanes / 8;
}
#endif /* HWLOC_PRIVATE_MISC_H */
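A worked instance of the formula above, for a Gen4 x16 link, as a self-contained check:

#include <cstdio>

int main() {
    unsigned generation = 4, lanes = 16;
    // Gen4: 16 GT/s per lane with 128/130 encoding -> 15.75 Gbit/s of data
    float lanespeed = 8.0f * (1 << (generation - 3)) * 128 / 130;
    printf("%.2f GB/s\n", lanespeed * lanes / 8);   // prints ~31.51 GB/s
    return 0;
}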

View File

@@ -1,6 +1,6 @@
/*
* Copyright © 2009 CNRS
* Copyright © 2009-2024 Inria. All rights reserved.
* Copyright © 2009-2020 Inria. All rights reserved.
* Copyright © 2009-2010, 2012 Université Bordeaux
* Copyright © 2011-2015 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
@@ -287,7 +287,6 @@ static __hwloc_inline int hwloc__check_membind_policy(hwloc_membind_policy_t pol
|| policy == HWLOC_MEMBIND_FIRSTTOUCH
|| policy == HWLOC_MEMBIND_BIND
|| policy == HWLOC_MEMBIND_INTERLEAVE
|| policy == HWLOC_MEMBIND_WEIGHTED_INTERLEAVE
|| policy == HWLOC_MEMBIND_NEXTTOUCH)
return 0;
return -1;

View File

@@ -1,6 +1,6 @@
/*
* Copyright © 2009 CNRS
* Copyright © 2009-2024 Inria. All rights reserved.
* Copyright © 2009-2020 Inria. All rights reserved.
* Copyright © 2009-2011 Université Bordeaux
* Copyright © 2009-2011 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
@@ -245,7 +245,6 @@ int hwloc_bitmap_copy(struct hwloc_bitmap_s * dst, const struct hwloc_bitmap_s *
/* Strings always use 32bit groups */
#define HWLOC_PRIxSUBBITMAP "%08lx"
#define HWLOC_BITMAP_SUBSTRING_SIZE 32
#define HWLOC_BITMAP_SUBSTRING_FULL_VALUE 0xFFFFFFFFUL
#define HWLOC_BITMAP_SUBSTRING_LENGTH (HWLOC_BITMAP_SUBSTRING_SIZE/4)
#define HWLOC_BITMAP_STRING_PER_LONG (HWLOC_BITS_PER_LONG/HWLOC_BITMAP_SUBSTRING_SIZE)
@@ -262,7 +261,6 @@ int hwloc_bitmap_snprintf(char * __hwloc_restrict buf, size_t buflen, const stru
const unsigned long accum_mask = ~0UL;
#else /* HWLOC_BITS_PER_LONG != HWLOC_BITMAP_SUBSTRING_SIZE */
const unsigned long accum_mask = ((1UL << HWLOC_BITMAP_SUBSTRING_SIZE) - 1) << (HWLOC_BITS_PER_LONG - HWLOC_BITMAP_SUBSTRING_SIZE);
int merge_with_infinite_prefix = 0;
#endif /* HWLOC_BITS_PER_LONG != HWLOC_BITMAP_SUBSTRING_SIZE */
HWLOC__BITMAP_CHECK(set);
@@ -281,9 +279,6 @@ int hwloc_bitmap_snprintf(char * __hwloc_restrict buf, size_t buflen, const stru
res = size>0 ? (int)size - 1 : 0;
tmp += res;
size -= res;
#if HWLOC_BITS_PER_LONG > HWLOC_BITMAP_SUBSTRING_SIZE
merge_with_infinite_prefix = 1;
#endif
}
i=(int) set->ulongs_count-1;
@@ -299,24 +294,16 @@ int hwloc_bitmap_snprintf(char * __hwloc_restrict buf, size_t buflen, const stru
}
while (i>=0 || accumed) {
unsigned long value;
/* Refill accumulator */
if (!accumed) {
accum = set->ulongs[i--];
accumed = HWLOC_BITS_PER_LONG;
}
value = (accum & accum_mask) >> (HWLOC_BITS_PER_LONG - HWLOC_BITMAP_SUBSTRING_SIZE);
#if HWLOC_BITS_PER_LONG > HWLOC_BITMAP_SUBSTRING_SIZE
if (merge_with_infinite_prefix && value == HWLOC_BITMAP_SUBSTRING_FULL_VALUE) {
/* first full subbitmap merged with infinite prefix */
res = 0;
} else
#endif
if (value) {
if (accum & accum_mask) {
/* print the whole subset if not empty */
res = hwloc_snprintf(tmp, size, needcomma ? ",0x" HWLOC_PRIxSUBBITMAP : "0x" HWLOC_PRIxSUBBITMAP, value);
res = hwloc_snprintf(tmp, size, needcomma ? ",0x" HWLOC_PRIxSUBBITMAP : "0x" HWLOC_PRIxSUBBITMAP,
(accum & accum_mask) >> (HWLOC_BITS_PER_LONG - HWLOC_BITMAP_SUBSTRING_SIZE));
needcomma = 1;
} else if (i == -1 && accumed == HWLOC_BITMAP_SUBSTRING_SIZE) {
/* print a single 0 to mark the last subset */
@@ -336,7 +323,6 @@ int hwloc_bitmap_snprintf(char * __hwloc_restrict buf, size_t buflen, const stru
#else
accum <<= HWLOC_BITMAP_SUBSTRING_SIZE;
accumed -= HWLOC_BITMAP_SUBSTRING_SIZE;
merge_with_infinite_prefix = 0;
#endif
if (res >= size)
@@ -376,8 +362,7 @@ int hwloc_bitmap_sscanf(struct hwloc_bitmap_s *set, const char * __hwloc_restric
{
const char * current = string;
unsigned long accum = 0;
int count = 0;
int ulongcount;
int count=0;
int infinite = 0;
/* count how many substrings there are */
@@ -398,20 +383,9 @@ int hwloc_bitmap_sscanf(struct hwloc_bitmap_s *set, const char * __hwloc_restric
count--;
}
ulongcount = (count + HWLOC_BITMAP_STRING_PER_LONG - 1) / HWLOC_BITMAP_STRING_PER_LONG;
if (hwloc_bitmap_reset_by_ulongs(set, ulongcount) < 0)
if (hwloc_bitmap_reset_by_ulongs(set, (count + HWLOC_BITMAP_STRING_PER_LONG - 1) / HWLOC_BITMAP_STRING_PER_LONG) < 0)
return -1;
set->infinite = 0; /* will be updated later */
#if HWLOC_BITS_PER_LONG != HWLOC_BITMAP_SUBSTRING_SIZE
if (infinite && (count % HWLOC_BITMAP_STRING_PER_LONG) != 0) {
/* accumulate substrings of the first ulong that are hidden in the infinite prefix */
int i;
for(i = (count % HWLOC_BITMAP_STRING_PER_LONG); i < HWLOC_BITMAP_STRING_PER_LONG; i++)
accum |= (HWLOC_BITMAP_SUBSTRING_FULL_VALUE << (i*HWLOC_BITMAP_SUBSTRING_SIZE));
}
#endif
set->infinite = 0;
while (*current != '\0') {
unsigned long val;
@@ -570,9 +544,6 @@ int hwloc_bitmap_taskset_snprintf(char * __hwloc_restrict buf, size_t buflen, co
ssize_t size = buflen;
char *tmp = buf;
int res, ret = 0;
#if HWLOC_BITS_PER_LONG == 64
int merge_with_infinite_prefix = 0;
#endif
int started = 0;
int i;
@@ -592,9 +563,6 @@ int hwloc_bitmap_taskset_snprintf(char * __hwloc_restrict buf, size_t buflen, co
res = size>0 ? (int)size - 1 : 0;
tmp += res;
size -= res;
#if HWLOC_BITS_PER_LONG == 64
merge_with_infinite_prefix = 1;
#endif
}
i=set->ulongs_count-1;
@@ -614,11 +582,7 @@ int hwloc_bitmap_taskset_snprintf(char * __hwloc_restrict buf, size_t buflen, co
if (started) {
/* print the whole subset */
#if HWLOC_BITS_PER_LONG == 64
if (merge_with_infinite_prefix && (val & 0xffffffff00000000UL) == 0xffffffff00000000UL) {
res = hwloc_snprintf(tmp, size, "%08lx", val & 0xffffffffUL);
} else {
res = hwloc_snprintf(tmp, size, "%016lx", val);
}
res = hwloc_snprintf(tmp, size, "%016lx", val);
#else
res = hwloc_snprintf(tmp, size, "%08lx", val);
#endif
@@ -635,9 +599,6 @@ int hwloc_bitmap_taskset_snprintf(char * __hwloc_restrict buf, size_t buflen, co
res = size>0 ? (int)size - 1 : 0;
tmp += res;
size -= res;
#if HWLOC_BITS_PER_LONG == 64
merge_with_infinite_prefix = 0;
#endif
}
/* if didn't display anything, display 0x0 */
@@ -718,10 +679,6 @@ int hwloc_bitmap_taskset_sscanf(struct hwloc_bitmap_s *set, const char * __hwloc
goto failed;
set->ulongs[count-1] = val;
if (infinite && tmpchars != HWLOC_BITS_PER_LONG/4) {
/* infinite prefix with partial substring, fill remaining bits */
set->ulongs[count-1] |= (~0ULL)<<(4*tmpchars);
}
current += tmpchars;
chars -= tmpchars;
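For reference, a hedged sketch showing the infinite-prefix string format that this printing/parsing code handles:

#include <hwloc/bitmap.h>
#include <cstdio>

int main() {
    hwloc_bitmap_t set = hwloc_bitmap_alloc();
    char buf[256];

    hwloc_bitmap_fill(set);      // infinitely-set bitmap
    hwloc_bitmap_clr(set, 3);    // clear a single bit
    hwloc_bitmap_snprintf(buf, sizeof buf, set);
    printf("%s\n", buf);         // e.g. "0xf...f,0xfffffff7"

    hwloc_bitmap_free(set);
    return 0;
}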

View File

@@ -1,5 +1,5 @@
/*
* Copyright © 2020-2024 Inria. All rights reserved.
* Copyright © 2020-2022 Inria. All rights reserved.
* See COPYING in top-level directory.
*/
@@ -50,7 +50,6 @@ hwloc_internal_cpukinds_dup(hwloc_topology_t new, hwloc_topology_t old)
return -1;
new->cpukinds = kinds;
new->nr_cpukinds = old->nr_cpukinds;
new->nr_cpukinds_allocated = old->nr_cpukinds;
memcpy(kinds, old->cpukinds, old->nr_cpukinds * sizeof(*kinds));
for(i=0;i<old->nr_cpukinds; i++) {

View File

@@ -1,5 +1,5 @@
/*
* Copyright © 2010-2024 Inria. All rights reserved.
* Copyright © 2010-2022 Inria. All rights reserved.
* Copyright © 2011-2012 Université Bordeaux
* Copyright © 2011 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
@@ -624,8 +624,8 @@ void * hwloc_distances_add_create(hwloc_topology_t topology,
return NULL;
}
if ((kind & ~HWLOC_DISTANCES_KIND_ALL)
|| hwloc_weight_long(kind & HWLOC_DISTANCES_KIND_FROM_ALL) > 1
|| hwloc_weight_long(kind & HWLOC_DISTANCES_KIND_MEANS_ALL) > 1) {
|| hwloc_weight_long(kind & HWLOC_DISTANCES_KIND_FROM_ALL) != 1
|| hwloc_weight_long(kind & HWLOC_DISTANCES_KIND_MEANS_ALL) != 1) {
errno = EINVAL;
return NULL;
}

View File

@@ -1,5 +1,5 @@
/*
* Copyright © 2020-2024 Inria. All rights reserved.
* Copyright © 2020-2023 Inria. All rights reserved.
* See COPYING in top-level directory.
*/
@@ -14,26 +14,13 @@
*/
static __hwloc_inline
int hwloc__memattr_get_convenience_value(hwloc_memattr_id_t id,
hwloc_obj_t node,
hwloc_uint64_t *valuep)
hwloc_uint64_t hwloc__memattr_get_convenience_value(hwloc_memattr_id_t id,
hwloc_obj_t node)
{
if (id == HWLOC_MEMATTR_ID_CAPACITY) {
if (node->type != HWLOC_OBJ_NUMANODE) {
errno = EINVAL;
return -1;
}
*valuep = node->attr->numanode.local_memory;
return 0;
}
else if (id == HWLOC_MEMATTR_ID_LOCALITY) {
if (!node->cpuset) {
errno = EINVAL;
return -1;
}
*valuep = hwloc_bitmap_weight(node->cpuset);
return 0;
}
if (id == HWLOC_MEMATTR_ID_CAPACITY)
return node->attr->numanode.local_memory;
else if (id == HWLOC_MEMATTR_ID_LOCALITY)
return hwloc_bitmap_weight(node->cpuset);
else
assert(0);
return 0; /* shut up the compiler */
@@ -635,7 +622,7 @@ hwloc_memattr_get_targets(hwloc_topology_t topology,
if (found<max) {
targets[found] = node;
if (values)
hwloc__memattr_get_convenience_value(id, node, &values[found]);
values[found] = hwloc__memattr_get_convenience_value(id, node);
}
found++;
}
@@ -761,7 +748,7 @@ hwloc_memattr_get_initiators(hwloc_topology_t topology,
struct hwloc_internal_memattr_target_s *imtg;
unsigned i, max;
if (flags || !target_node) {
if (flags) {
errno = EINVAL;
return -1;
}
@@ -823,7 +810,7 @@ hwloc_memattr_get_value(hwloc_topology_t topology,
struct hwloc_internal_memattr_s *imattr;
struct hwloc_internal_memattr_target_s *imtg;
if (flags || !target_node) {
if (flags) {
errno = EINVAL;
return -1;
}
@@ -836,7 +823,8 @@ hwloc_memattr_get_value(hwloc_topology_t topology,
if (imattr->iflags & HWLOC_IMATTR_FLAG_CONVENIENCE) {
/* convenience attributes */
return hwloc__memattr_get_convenience_value(id, target_node, valuep);
*valuep = hwloc__memattr_get_convenience_value(id, target_node);
return 0;
}
/* normal attributes */
@@ -948,7 +936,7 @@ hwloc_memattr_set_value(hwloc_topology_t topology,
{
struct hwloc_internal_location_s iloc, *ilocp;
if (flags || !target_node) {
if (flags) {
errno = EINVAL;
return -1;
}
@@ -1019,10 +1007,10 @@ hwloc_memattr_get_best_target(hwloc_topology_t topology,
/* convenience attributes */
for(j=0; ; j++) {
hwloc_obj_t node = hwloc_get_obj_by_type(topology, HWLOC_OBJ_NUMANODE, j);
hwloc_uint64_t value = 0;
hwloc_uint64_t value;
if (!node)
break;
hwloc__memattr_get_convenience_value(id, node, &value);
value = hwloc__memattr_get_convenience_value(id, node);
hwloc__update_best_target(&best, &best_value, &found,
node, value,
imattr->flags & HWLOC_MEMATTR_FLAG_HIGHER_FIRST);
@@ -1105,7 +1093,7 @@ hwloc_memattr_get_best_initiator(hwloc_topology_t topology,
int found;
unsigned i;
if (flags || !target_node) {
if (flags) {
errno = EINVAL;
return -1;
}
@@ -1818,12 +1806,6 @@ hwloc__apply_memory_tiers_subtypes(hwloc_topology_t topology,
}
}
}
if (nr_tiers > 1) {
hwloc_obj_t root = hwloc_get_root_obj(topology);
char tmp[20];
snprintf(tmp, sizeof(tmp), "%u", nr_tiers);
hwloc__add_info_nodup(&root->infos, &root->infos_count, "MemoryTiersNr", tmp, 1);
}
}
int

View File

@@ -1,5 +1,5 @@
/*
* Copyright © 2009-2024 Inria. All rights reserved.
* Copyright © 2009-2022 Inria. All rights reserved.
* See COPYING in top-level directory.
*/
@@ -886,12 +886,36 @@ hwloc_pcidisc_find_linkspeed(const unsigned char *config,
unsigned offset, float *linkspeed)
{
unsigned linksta, speed, width;
float lanespeed;
memcpy(&linksta, &config[offset + HWLOC_PCI_EXP_LNKSTA], 4);
speed = linksta & HWLOC_PCI_EXP_LNKSTA_SPEED; /* PCIe generation */
width = (linksta & HWLOC_PCI_EXP_LNKSTA_WIDTH) >> 4; /* how many lanes */
/*
* These are single-direction bandwidths only.
*
* Gen1 used NRZ with 8/10 encoding.
* PCIe Gen1 = 2.5GT/s signal-rate per lane x 8/10 = 0.25GB/s data-rate per lane
* PCIe Gen2 = 5 GT/s signal-rate per lane x 8/10 = 0.5 GB/s data-rate per lane
* Gen3 switched to NRZ with 128/130 encoding.
* PCIe Gen3 = 8 GT/s signal-rate per lane x 128/130 = 1 GB/s data-rate per lane
* PCIe Gen4 = 16 GT/s signal-rate per lane x 128/130 = 2 GB/s data-rate per lane
* PCIe Gen5 = 32 GT/s signal-rate per lane x 128/130 = 4 GB/s data-rate per lane
* Gen6 switched to PAM with 242/256 FLIT (242B payload protected by 8B CRC + 6B FEC).
* PCIe Gen6 = 64 GT/s signal-rate per lane x 242/256 = 8 GB/s data-rate per lane
* PCIe Gen7 = 128GT/s signal-rate per lane x 242/256 = 16 GB/s data-rate per lane
*/
*linkspeed = hwloc__pci_link_speed(speed, width);
/* lanespeed in Gbit/s */
if (speed <= 2)
lanespeed = 2.5f * speed * 0.8f;
else if (speed <= 5)
lanespeed = 8.0f * (1<<(speed-3)) * 128/130;
else
lanespeed = 8.0f * (1<<(speed-3)) * 242/256; /* assume Gen8 will be 256 GT/s and so on */
/* linkspeed in GB/s */
*linkspeed = lanespeed * width / 8;
return 0;
}

View File

@@ -1,6 +1,6 @@
/*
* Copyright © 2009 CNRS
* Copyright © 2009-2024 Inria. All rights reserved.
* Copyright © 2009-2023 Inria. All rights reserved.
* Copyright © 2009-2012, 2020 Université Bordeaux
* Copyright © 2011 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
@@ -220,7 +220,7 @@ static void hwloc_win_get_function_ptrs(void)
#pragma GCC diagnostic ignored "-Wcast-function-type"
#endif
kernel32 = LoadLibrary(TEXT("kernel32.dll"));
kernel32 = LoadLibrary("kernel32.dll");
if (kernel32) {
GetActiveProcessorGroupCountProc =
(PFN_GETACTIVEPROCESSORGROUPCOUNT) GetProcAddress(kernel32, "GetActiveProcessorGroupCount");
@@ -249,12 +249,12 @@ static void hwloc_win_get_function_ptrs(void)
}
if (!QueryWorkingSetExProc) {
HMODULE psapi = LoadLibrary(TEXT("psapi.dll"));
HMODULE psapi = LoadLibrary("psapi.dll");
if (psapi)
QueryWorkingSetExProc = (PFN_QUERYWORKINGSETEX) GetProcAddress(psapi, "QueryWorkingSetEx");
}
ntdll = GetModuleHandle(TEXT("ntdll"));
ntdll = GetModuleHandle("ntdll");
RtlGetVersionProc = (PFN_RTLGETVERSION) GetProcAddress(ntdll, "RtlGetVersion");
#if HWLOC_HAVE_GCC_W_CAST_FUNCTION_TYPE

View File

@@ -1,11 +1,11 @@
/*
* Copyright © 2010-2024 Inria. All rights reserved.
* Copyright © 2010-2023 Inria. All rights reserved.
* Copyright © 2010-2013 Université Bordeaux
* Copyright © 2010-2011 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
*
*
* This backend is mostly used when the operating system does not export
* This backend is only used when the operating system does not export
* the necessary hardware topology information to user-space applications.
* Currently, FreeBSD and NetBSD only add PUs and then fallback to this
* backend for CPU/Cache discovery.
@@ -15,7 +15,6 @@
* on various architectures, without having to use this x86-specific code.
* But this backend is still used after them to annotate some objects with
* additional details (CPU info in Package, Inclusiveness in Caches).
* It may also be enabled manually to work-around bugs in native OS discovery.
*/
#include "private/autogen/config.h"
@@ -488,7 +487,7 @@ static void read_amd_cores_legacy(struct procinfo *infos, struct cpuiddump *src_
}
/* AMD unit/node from CPUID 0x8000001e leaf (topoext) */
static void read_amd_cores_topoext(struct hwloc_x86_backend_data_s *data, struct procinfo *infos, unsigned long flags __hwloc_attribute_unused, struct cpuiddump *src_cpuiddump)
static void read_amd_cores_topoext(struct hwloc_x86_backend_data_s *data, struct procinfo *infos, unsigned long flags, struct cpuiddump *src_cpuiddump)
{
unsigned apic_id, nodes_per_proc = 0;
unsigned eax, ebx, ecx, edx;
@@ -497,6 +496,7 @@ static void read_amd_cores_topoext(struct hwloc_x86_backend_data_s *data, struct
cpuid_or_from_dump(&eax, &ebx, &ecx, &edx, src_cpuiddump);
infos->apicid = apic_id = eax;
if (flags & HWLOC_X86_DISC_FLAG_TOPOEXT_NUMANODES) {
if (infos->cpufamilynumber == 0x16) {
/* ecx is reserved */
infos->ids[NODE] = 0;
@@ -511,6 +511,7 @@ static void read_amd_cores_topoext(struct hwloc_x86_backend_data_s *data, struct
|| (infos->cpufamilynumber == 0x19 && nodes_per_proc > 1)) {
hwloc_debug("warning: undefined nodes_per_proc value %u, assuming it means %u\n", nodes_per_proc, nodes_per_proc);
}
}
if (infos->cpufamilynumber <= 0x16) { /* topoext appeared in 0x15 and compute-units were only used in 0x15 and 0x16 */
unsigned cores_per_unit;
@@ -532,9 +533,9 @@ static void read_amd_cores_topoext(struct hwloc_x86_backend_data_s *data, struct
}
/* Intel core/thread or even die/module/tile from CPUID 0x0b or 0x1f leaves (v1 and v2 extended topology enumeration)
* or AMD core/thread or even complex/ccd from CPUID 0x0b or 0x80000026 (extended CPU topology)
* or AMD complex/ccd from CPUID 0x80000026 (extended CPU topology)
*/
static void read_extended_topo(struct hwloc_x86_backend_data_s *data, struct procinfo *infos, unsigned leaf, enum cpuid_type cpuid_type __hwloc_attribute_unused, struct cpuiddump *src_cpuiddump)
static void read_extended_topo(struct hwloc_x86_backend_data_s *data, struct procinfo *infos, unsigned leaf, enum cpuid_type cpuid_type, struct cpuiddump *src_cpuiddump)
{
unsigned level, apic_nextshift, apic_type, apic_id = 0, apic_shift = 0, id;
unsigned threadid __hwloc_attribute_unused = 0; /* shut-up compiler */
@@ -546,15 +547,20 @@ static void read_extended_topo(struct hwloc_x86_backend_data_s *data, struct pro
eax = leaf;
cpuid_or_from_dump(&eax, &ebx, &ecx, &edx, src_cpuiddump);
/* Intel specifies that the 0x0b/0x1f loop should stop when we get "invalid domain" (0 in ecx[8:15])
* (if so, we also get 0 in eax/ebx for invalid subleaves). Zhaoxin implements this too.
* (if so, we also get 0 in eax/ebx for invalid subleaves).
* However AMD rather says that the 0x80000026/0x0b loop should stop when we get "no thread at this level" (0 in ebx[0:15]).
*
* Linux kernel <= 6.8 used "invalid domain" for both Intel and AMD (in detect_extended_topology())
* but x86 discovery revamp in 6.9 now properly checks both Intel and AMD conditions (in topo_subleaf()).
* So let's assume we are allowed to break-out once one of the Intel+AMD conditions is met.
* Zhaoxin follows the Intel specs but also returns "no thread at this level" for the last *valid* level (at least on KH-4000).
* From the Linux kernel code, it's very likely that AMD also returns "invalid domain"
* (because detect_extended_topology() uses that for all x86 CPUs)
* but keep with the official doc until AMD can clarify that (see #593).
*/
if (!(ebx & 0xffff) || !(ecx & 0xff00))
break;
if (cpuid_type == amd) {
if (!(ebx & 0xffff))
break;
} else {
if (!(ecx & 0xff00))
break;
}
apic_packageshift = eax & 0x1f;
}
@@ -566,8 +572,13 @@ static void read_extended_topo(struct hwloc_x86_backend_data_s *data, struct pro
ecx = level;
eax = leaf;
cpuid_or_from_dump(&eax, &ebx, &ecx, &edx, src_cpuiddump);
if (!(ebx & 0xffff) || !(ecx & 0xff00))
break;
if (cpuid_type == amd) {
if (!(ebx & 0xffff))
break;
} else {
if (!(ecx & 0xff00))
break;
}
apic_nextshift = eax & 0x1f;
apic_type = (ecx & 0xff00) >> 8;
apic_id = edx;
@@ -1814,7 +1825,7 @@ hwloc_x86_check_cpuiddump_input(const char *src_cpuiddump_path, hwloc_bitmap_t s
goto out_with_path;
}
fclose(file);
if (strncmp(line, "Architecture: x86", 17)) {
if (strcmp(line, "Architecture: x86\n")) {
fprintf(stderr, "hwloc/x86: Found non-x86 dumped cpuid summary in %s: %s\n", path, line);
goto out_with_path;
}

View File

@@ -1,6 +1,6 @@
/*
* Copyright © 2009 CNRS
* Copyright © 2009-2024 Inria. All rights reserved.
* Copyright © 2009-2020 Inria. All rights reserved.
* Copyright © 2009-2011 Université Bordeaux
* Copyright © 2009-2011 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
@@ -41,7 +41,7 @@ typedef struct hwloc__nolibxml_import_state_data_s {
static char *
hwloc__nolibxml_import_ignore_spaces(char *buffer)
{
return buffer + strspn(buffer, " \t\n\r");
return buffer + strspn(buffer, " \t\n");
}
static int

View File

@@ -1,6 +1,6 @@
/*
* Copyright © 2009 CNRS
* Copyright © 2009-2024 Inria. All rights reserved.
* Copyright © 2009-2023 Inria. All rights reserved.
* Copyright © 2009-2011, 2020 Université Bordeaux
* Copyright © 2009-2018 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
@@ -872,10 +872,6 @@ hwloc__xml_import_object(hwloc_topology_t topology,
/* deal with possible future type */
obj->type = HWLOC_OBJ_GROUP;
obj->attr->group.kind = HWLOC_GROUP_KIND_INTEL_MODULE;
} else if (!strcasecmp(attrvalue, "Cluster")) {
/* deal with possible future type */
obj->type = HWLOC_OBJ_GROUP;
obj->attr->group.kind = HWLOC_GROUP_KIND_LINUX_CLUSTER;
} else if (!strcasecmp(attrvalue, "MemCache")) {
/* ignore possible future type */
obj->type = _HWLOC_OBJ_FUTURE;
@@ -1348,7 +1344,7 @@ hwloc__xml_v2import_support(hwloc_topology_t topology,
HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_support) == 4*sizeof(void*));
HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_discovery_support) == 6);
HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_cpubind_support) == 11);
HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_membind_support) == 16);
HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_membind_support) == 15);
HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_misc_support) == 1);
#endif
@@ -1382,7 +1378,6 @@ hwloc__xml_v2import_support(hwloc_topology_t topology,
else DO(membind,firsttouch_membind);
else DO(membind,bind_membind);
else DO(membind,interleave_membind);
else DO(membind,weighted_interleave_membind);
else DO(membind,nexttouch_membind);
else DO(membind,migrate_membind);
else DO(membind,get_area_memlocation);
@@ -1441,10 +1436,6 @@ hwloc__xml_v2import_distances(hwloc_topology_t topology,
}
else if (!strcmp(attrname, "kind")) {
kind = strtoul(attrvalue, NULL, 10);
/* forward compat with "HOPS" kind in v3 */
if (kind & (1UL<<5))
/* hops becomes latency */
kind = (kind & ~(1UL<<5)) | HWLOC_DISTANCES_KIND_MEANS_LATENCY;
}
else if (!strcmp(attrname, "name")) {
name = attrvalue;
@@ -3096,7 +3087,7 @@ hwloc__xml_v2export_support(hwloc__xml_export_state_t parentstate, hwloc_topolog
HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_support) == 4*sizeof(void*));
HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_discovery_support) == 6);
HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_cpubind_support) == 11);
HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_membind_support) == 16);
HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_membind_support) == 15);
HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_misc_support) == 1);
#endif
@@ -3141,7 +3132,6 @@ hwloc__xml_v2export_support(hwloc__xml_export_state_t parentstate, hwloc_topolog
DO(membind,firsttouch_membind);
DO(membind,bind_membind);
DO(membind,interleave_membind);
DO(membind,weighted_interleave_membind);
DO(membind,nexttouch_membind);
DO(membind,migrate_membind);
DO(membind,get_area_memlocation);

View File

@@ -465,20 +465,6 @@ hwloc_debug_print_objects(int indent __hwloc_attribute_unused, hwloc_obj_t obj)
#define hwloc_debug_print_objects(indent, obj) do { /* nothing */ } while (0)
#endif /* !HWLOC_DEBUG */
int hwloc_obj_set_subtype(hwloc_topology_t topology __hwloc_attribute_unused, hwloc_obj_t obj, const char *subtype)
{
char *new = NULL;
if (subtype) {
new = strdup(subtype);
if (!new)
return -1;
}
if (obj->subtype)
free(obj->subtype);
obj->subtype = new;
return 0;
}
void hwloc__free_infos(struct hwloc_info_s *infos, unsigned count)
{
unsigned i;

View File

@@ -65,22 +65,22 @@ public:
}
}
# else
inline ~Thread() { m_thread.join(); delete m_worker; }
inline ~Thread() { m_thread.join(); }
inline void start(void *(*callback)(void *)) { m_thread = std::thread(callback, this); }
# endif
inline const T &config() const { return m_config; }
inline IBackend *backend() const { return m_backend; }
inline IWorker *worker() const { return m_worker; }
inline IWorker* worker() const { return m_worker.get(); }
inline size_t id() const { return m_id; }
inline void setWorker(IWorker *worker) { m_worker = worker; }
inline void setWorker(std::shared_ptr<IWorker> worker) { m_worker = worker; }
private:
const size_t m_id = 0;
const T m_config;
IBackend *m_backend;
IWorker *m_worker = nullptr;
std::shared_ptr<IWorker> m_worker;
#ifdef XMRIG_OS_APPLE
pthread_t m_thread{};

View File

@@ -62,19 +62,12 @@ public:
template<class T>
xmrig::Workers<T>::Workers() :
d_ptr(new WorkersPrivate())
d_ptr(std::make_shared<WorkersPrivate>())
{
}
template<class T>
xmrig::Workers<T>::~Workers()
{
delete d_ptr;
}
template<class T>
bool xmrig::Workers<T>::tick(uint64_t)
{
@@ -88,7 +81,7 @@ bool xmrig::Workers<T>::tick(uint64_t)
uint64_t hashCount = 0;
uint64_t rawHashes = 0;
for (Thread<T> *handle : m_workers) {
for (auto& handle : m_workers) {
IWorker *worker = handle->worker();
if (worker) {
worker->hashrateData(hashCount, ts, rawHashes);
@@ -135,10 +128,6 @@ void xmrig::Workers<T>::stop()
Nonce::stop(T::backend());
# endif
for (Thread<T> *worker : m_workers) {
delete worker;
}
m_workers.clear();
# ifdef XMRIG_MINER_PROJECT
@@ -166,7 +155,7 @@ void xmrig::Workers<T>::start(const std::vector<T> &data, const std::shared_ptr<
template<class T>
xmrig::IWorker *xmrig::Workers<T>::create(Thread<T> *)
std::shared_ptr<xmrig::IWorker> xmrig::Workers<T>::create(Thread<T> *)
{
return nullptr;
}
@@ -177,22 +166,21 @@ void *xmrig::Workers<T>::onReady(void *arg)
{
auto handle = static_cast<Thread<T>* >(arg);
IWorker *worker = create(handle);
assert(worker != nullptr);
std::shared_ptr<IWorker> worker = create(handle);
assert(worker);
if (!worker || !worker->selfTest()) {
LOG_ERR("%s " RED("thread ") RED_BOLD("#%zu") RED(" self-test failed"), T::tag(), worker ? worker->id() : 0);
handle->backend()->start(worker, false);
delete worker;
worker.reset();
handle->backend()->start(worker.get(), false);
return nullptr;
}
assert(handle->backend() != nullptr);
handle->setWorker(worker);
handle->backend()->start(worker, true);
handle->backend()->start(worker.get(), true);
return nullptr;
}
@@ -202,7 +190,7 @@ template<class T>
void xmrig::Workers<T>::start(const std::vector<T> &data, bool /*sleep*/)
{
for (const auto &item : data) {
m_workers.push_back(new Thread<T>(d_ptr->backend, m_workers.size(), item));
m_workers.emplace_back(std::make_shared<Thread<T>>(d_ptr->backend, m_workers.size(), item));
}
d_ptr->hashrate = std::make_shared<Hashrate>(m_workers.size());
@@ -211,7 +199,7 @@ void xmrig::Workers<T>::start(const std::vector<T> &data, bool /*sleep*/)
Nonce::touch(T::backend());
# endif
for (auto worker : m_workers) {
for (auto& worker : m_workers) {
worker->start(Workers<T>::onReady);
}
}
@@ -221,34 +209,34 @@ namespace xmrig {
template<>
xmrig::IWorker *xmrig::Workers<CpuLaunchData>::create(Thread<CpuLaunchData> *handle)
std::shared_ptr<xmrig::IWorker> Workers<CpuLaunchData>::create(Thread<CpuLaunchData> *handle)
{
# ifdef XMRIG_MINER_PROJECT
switch (handle->config().intensity) {
case 1:
return new CpuWorker<1>(handle->id(), handle->config());
return std::make_shared<CpuWorker<1>>(handle->id(), handle->config());
case 2:
return new CpuWorker<2>(handle->id(), handle->config());
return std::make_shared<CpuWorker<2>>(handle->id(), handle->config());
case 3:
return new CpuWorker<3>(handle->id(), handle->config());
return std::make_shared<CpuWorker<3>>(handle->id(), handle->config());
case 4:
return new CpuWorker<4>(handle->id(), handle->config());
return std::make_shared<CpuWorker<4>>(handle->id(), handle->config());
case 5:
return new CpuWorker<5>(handle->id(), handle->config());
return std::make_shared<CpuWorker<5>>(handle->id(), handle->config());
case 8:
return new CpuWorker<8>(handle->id(), handle->config());
return std::make_shared<CpuWorker<8>>(handle->id(), handle->config());
}
return nullptr;
# else
assert(handle->config().intensity == 1);
return new CpuWorker<1>(handle->id(), handle->config());
return std::make_shared<CpuWorker<1>>(handle->id(), handle->config());
# endif
}
@@ -258,9 +246,9 @@ template class Workers<CpuLaunchData>;
#ifdef XMRIG_FEATURE_OPENCL
template<>
xmrig::IWorker *xmrig::Workers<OclLaunchData>::create(Thread<OclLaunchData> *handle)
std::shared_ptr<xmrig::IWorker> Workers<OclLaunchData>::create(Thread<OclLaunchData> *handle)
{
return new OclWorker(handle->id(), handle->config());
return std::make_shared<OclWorker>(handle->id(), handle->config());
}
@@ -270,9 +258,9 @@ template class Workers<OclLaunchData>;
#ifdef XMRIG_FEATURE_CUDA
template<>
xmrig::IWorker *xmrig::Workers<CudaLaunchData>::create(Thread<CudaLaunchData> *handle)
std::shared_ptr<xmrig::IWorker> Workers<CudaLaunchData>::create(Thread<CudaLaunchData> *handle)
{
return new CudaWorker(handle->id(), handle->config());
return std::make_shared<CudaWorker>(handle->id(), handle->config());
}
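
The pattern above recurs throughout this change: factory functions that used to return a raw owning IWorker* now return std::shared_ptr<IWorker>, so the error path replaces "delete worker" with "worker.reset()". A minimal sketch of that shape, using illustrative names rather than the real xmrig types:

#include <cstdio>
#include <memory>

struct IWorkerExample {
    virtual ~IWorkerExample() = default;
    virtual bool selfTest() = 0;
};

struct CpuWorkerExample : IWorkerExample {
    bool selfTest() override { return true; }
};

// returns shared ownership; no caller ever writes a matching delete
static std::shared_ptr<IWorkerExample> create()
{
    return std::make_shared<CpuWorkerExample>();
}

int main()
{
    std::shared_ptr<IWorkerExample> worker = create();
    if (!worker || !worker->selfTest()) {
        worker.reset();   // replaces "delete worker" on the failure path
    }
    std::printf("worker alive: %d\n", worker ? 1 : 0);
    return 0;
}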

View File

@@ -52,7 +52,6 @@ public:
XMRIG_DISABLE_COPY_MOVE(Workers)
Workers();
~Workers();
inline void start(const std::vector<T> &data) { start(data, true); }
@@ -67,20 +66,20 @@ public:
# endif
private:
static IWorker *create(Thread<T> *handle);
static std::shared_ptr<IWorker> create(Thread<T> *handle);
static void *onReady(void *arg);
void start(const std::vector<T> &data, bool sleep);
std::vector<Thread<T> *> m_workers;
WorkersPrivate *d_ptr;
std::vector<std::shared_ptr<Thread<T>>> m_workers;
std::shared_ptr<WorkersPrivate> d_ptr;
};
template<class T>
void xmrig::Workers<T>::jobEarlyNotification(const Job &job)
{
for (Thread<T>* t : m_workers) {
for (auto& t : m_workers) {
if (t->worker()) {
t->worker()->jobEarlyNotification(job);
}
@@ -89,20 +88,20 @@ void xmrig::Workers<T>::jobEarlyNotification(const Job &job)
template<>
IWorker *Workers<CpuLaunchData>::create(Thread<CpuLaunchData> *handle);
std::shared_ptr<IWorker> Workers<CpuLaunchData>::create(Thread<CpuLaunchData> *handle);
extern template class Workers<CpuLaunchData>;
#ifdef XMRIG_FEATURE_OPENCL
template<>
IWorker *Workers<OclLaunchData>::create(Thread<OclLaunchData> *handle);
std::shared_ptr<IWorker> Workers<OclLaunchData>::create(Thread<OclLaunchData> *handle);
extern template class Workers<OclLaunchData>;
#endif
#ifdef XMRIG_FEATURE_CUDA
template<>
IWorker *Workers<CudaLaunchData>::create(Thread<CudaLaunchData> *handle);
std::shared_ptr<IWorker> Workers<CudaLaunchData>::create(Thread<CudaLaunchData> *handle);
extern template class Workers<CudaLaunchData>;
#endif

View File

@@ -51,7 +51,7 @@ public:
};
static BenchStatePrivate *d_ptr = nullptr;
static std::shared_ptr<BenchStatePrivate> d_ptr;
std::atomic<uint64_t> BenchState::m_data{};
@@ -61,7 +61,7 @@ std::atomic<uint64_t> BenchState::m_data{};
bool xmrig::BenchState::isDone()
{
return d_ptr == nullptr;
return !d_ptr;
}
@@ -105,14 +105,13 @@ uint64_t xmrig::BenchState::start(size_t threads, const IBackend *backend)
void xmrig::BenchState::destroy()
{
delete d_ptr;
d_ptr = nullptr;
d_ptr.reset();
}
void xmrig::BenchState::done()
{
assert(d_ptr != nullptr && d_ptr->async && d_ptr->remaining > 0);
assert(d_ptr && d_ptr->async && d_ptr->remaining > 0);
const uint64_t ts = Chrono::steadyMSecs();
@@ -129,15 +128,15 @@ void xmrig::BenchState::done()
void xmrig::BenchState::init(IBenchListener *listener, uint32_t size)
{
assert(d_ptr == nullptr);
assert(!d_ptr);
d_ptr = new BenchStatePrivate(listener, size);
d_ptr = std::make_shared<BenchStatePrivate>(listener, size);
}
void xmrig::BenchState::setSize(uint32_t size)
{
assert(d_ptr != nullptr);
assert(d_ptr);
d_ptr->size = size;
}
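
For module-level state like BenchStatePrivate, the diff swaps a raw static pointer for a static shared_ptr, so destroy() collapses to reset() and the null checks keep working. A reduced sketch under hypothetical names:

#include <cassert>
#include <memory>

struct StatePrivateExample { unsigned size = 0; };

static std::shared_ptr<StatePrivateExample> d_ptr;

void init(unsigned size)
{
    assert(!d_ptr);                    // same precondition as before
    d_ptr = std::make_shared<StatePrivateExample>();
    d_ptr->size = size;
}

bool isDone() { return !d_ptr; }       // "d_ptr == nullptr" becomes "!d_ptr"

void destroy() { d_ptr.reset(); }      // "delete d_ptr; d_ptr = nullptr;" in one call

int main()
{
    init(1000000);
    assert(!isDone());
    destroy();
    assert(isDone());
    return 0;
}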

View File

@@ -31,20 +31,20 @@
#endif
static xmrig::ICpuInfo *cpuInfo = nullptr;
static std::shared_ptr<xmrig::ICpuInfo> cpuInfo;
xmrig::ICpuInfo *xmrig::Cpu::info()
{
if (cpuInfo == nullptr) {
if (!cpuInfo) {
# if defined(XMRIG_FEATURE_HWLOC)
cpuInfo = new HwlocCpuInfo();
cpuInfo = std::make_shared<HwlocCpuInfo>();
# else
cpuInfo = new BasicCpuInfo();
cpuInfo = std::make_shared<BasicCpuInfo>();
# endif
}
return cpuInfo;
return cpuInfo.get();
}
@@ -56,6 +56,5 @@ rapidjson::Value xmrig::Cpu::toJSON(rapidjson::Document &doc)
void xmrig::Cpu::release()
{
delete cpuInfo;
cpuInfo = nullptr;
cpuInfo.reset();
}
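
Note the convention that keeps the public API stable: ownership moves into the shared_ptr, but accessors such as Cpu::info() still return a raw pointer via .get(), signalling that callers observe without owning. A sketch with stand-in types:

#include <cstdio>
#include <memory>

struct CpuInfoExample { int threads() const { return 8; } };

static std::shared_ptr<CpuInfoExample> cpuInfo;

CpuInfoExample *info()
{
    if (!cpuInfo) {                    // lazy init, logic unchanged
        cpuInfo = std::make_shared<CpuInfoExample>();
    }
    return cpuInfo.get();              // non-owning view for callers
}

void release() { cpuInfo.reset(); }

int main()
{
    std::printf("threads: %d\n", info()->threads());
    release();
    return 0;
}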

View File

@@ -242,7 +242,7 @@ const char *xmrig::cpu_tag()
xmrig::CpuBackend::CpuBackend(Controller *controller) :
d_ptr(new CpuBackendPrivate(controller))
d_ptr(std::make_shared<CpuBackendPrivate>(controller))
{
d_ptr->workers.setBackend(this);
}
@@ -250,7 +250,6 @@ xmrig::CpuBackend::CpuBackend(Controller *controller) :
xmrig::CpuBackend::~CpuBackend()
{
delete d_ptr;
}

View File

@@ -70,7 +70,7 @@ protected:
# endif
private:
CpuBackendPrivate *d_ptr;
std::shared_ptr<CpuBackendPrivate> d_ptr;
};

View File

@@ -57,7 +57,7 @@ static constexpr uint32_t kReserveCount = 32768;
#ifdef XMRIG_ALGO_CN_HEAVY
static std::mutex cn_heavyZen3MemoryMutex;
VirtualMemory* cn_heavyZen3Memory = nullptr;
std::shared_ptr<VirtualMemory> cn_heavyZen3Memory;
#endif
} // namespace xmrig
@@ -87,14 +87,14 @@ xmrig::CpuWorker<N>::CpuWorker(size_t id, const CpuLaunchData &data) :
if (!cn_heavyZen3Memory) {
// Round up number of threads to the multiple of 8
const size_t num_threads = ((m_threads + 7) / 8) * 8;
cn_heavyZen3Memory = new VirtualMemory(m_algorithm.l3() * num_threads, data.hugePages, false, false, node());
cn_heavyZen3Memory = std::make_shared<VirtualMemory>(m_algorithm.l3() * num_threads, data.hugePages, false, false, node());
}
m_memory = cn_heavyZen3Memory;
}
else
# endif
{
m_memory = new VirtualMemory(m_algorithm.l3() * N, data.hugePages, false, true, node());
m_memory = std::make_shared<VirtualMemory>(m_algorithm.l3() * N, data.hugePages, false, true, node());
}
# ifdef XMRIG_ALGO_GHOSTRIDER
@@ -107,7 +107,7 @@ template<size_t N>
xmrig::CpuWorker<N>::~CpuWorker()
{
# ifdef XMRIG_ALGO_RANDOMX
RxVm::destroy(m_vm);
m_vm.reset();
# endif
CnCtx::release(m_ctx, N);
@@ -116,7 +116,7 @@ xmrig::CpuWorker<N>::~CpuWorker()
if (m_memory != cn_heavyZen3Memory)
# endif
{
delete m_memory;
m_memory.reset();
}
# ifdef XMRIG_ALGO_GHOSTRIDER
@@ -148,7 +148,7 @@ void xmrig::CpuWorker<N>::allocateRandomX_VM()
}
else if (!dataset->get() && (m_job.currentJob().seed() != m_seed)) {
// Update RandomX light VM with the new seed
randomx_vm_set_cache(m_vm, dataset->cache()->get());
randomx_vm_set_cache(m_vm.get(), dataset->cache()->get());
}
m_seed = m_job.currentJob().seed();
}
@@ -296,7 +296,7 @@ void xmrig::CpuWorker<N>::start()
if (job.hasMinerSignature()) {
job.generateMinerSignature(m_job.blob(), job.size(), miner_signature_ptr);
}
randomx_calculate_hash_first(m_vm, tempHash, m_job.blob(), job.size());
randomx_calculate_hash_first(m_vm.get(), tempHash, m_job.blob(), job.size());
}
if (!nextRound()) {
@@ -307,7 +307,7 @@ void xmrig::CpuWorker<N>::start()
memcpy(miner_signature_saved, miner_signature_ptr, sizeof(miner_signature_saved));
job.generateMinerSignature(m_job.blob(), job.size(), miner_signature_ptr);
}
randomx_calculate_hash_next(m_vm, tempHash, m_job.blob(), job.size(), m_hash);
randomx_calculate_hash_next(m_vm.get(), tempHash, m_job.blob(), job.size(), m_hash);
}
else
# endif
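
The cn_heavy path is where shared ownership genuinely pays off: several CpuWorker instances can point at one Zen3 memory block while others allocate their own, and each destructor may reset() its handle without freeing the shared block early. The diff keeps the old inequality guard, but with shared_ptr it is no longer required for correctness. A sketch under assumed names:

#include <cstdio>
#include <memory>

struct MemoryExample { ~MemoryExample() { std::puts("block freed"); } };

static std::shared_ptr<MemoryExample> sharedBlock;   // stands in for cn_heavyZen3Memory

struct WorkerExample {
    explicit WorkerExample(bool useShared)
    {
        if (useShared) {
            if (!sharedBlock) {
                sharedBlock = std::make_shared<MemoryExample>();
            }
            m_memory = sharedBlock;                        // share one block
        }
        else {
            m_memory = std::make_shared<MemoryExample>();  // own a private block
        }
    }
    ~WorkerExample() { m_memory.reset(); }   // safe even for the shared block
    std::shared_ptr<MemoryExample> m_memory;
};

int main()
{
    {
        WorkerExample a(true);
        WorkerExample b(true);
    }                        // nothing freed yet: sharedBlock still holds a reference
    sharedBlock.reset();     // prints "block freed" (last reference gone)
    {
        WorkerExample c(false);
    }                        // prints "block freed" again, for the private block
    return 0;
}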

View File

@@ -66,7 +66,7 @@ protected:
void hashrateData(uint64_t &hashCount, uint64_t &timeStamp, uint64_t &rawHashes) const override;
void start() override;
inline const VirtualMemory *memory() const override { return m_memory; }
inline const VirtualMemory* memory() const override { return m_memory.get(); }
inline size_t intensity() const override { return N; }
inline void jobEarlyNotification(const Job&) override {}
@@ -92,11 +92,11 @@ private:
const Miner *m_miner;
const size_t m_threads;
cryptonight_ctx *m_ctx[N];
VirtualMemory *m_memory = nullptr;
std::shared_ptr<VirtualMemory> m_memory;
WorkerJob<N> m_job;
# ifdef XMRIG_ALGO_RANDOMX
randomx_vm *m_vm = nullptr;
std::shared_ptr<randomx_vm> m_vm;
Buffer m_seed;
# endif

View File

@@ -283,7 +283,7 @@ const char *xmrig::ocl_tag()
xmrig::OclBackend::OclBackend(Controller *controller) :
d_ptr(new OclBackendPrivate(controller))
d_ptr(std::make_shared<OclBackendPrivate>(controller))
{
d_ptr->workers.setBackend(this);
}
@@ -291,7 +291,7 @@ xmrig::OclBackend::OclBackend(Controller *controller) :
xmrig::OclBackend::~OclBackend()
{
delete d_ptr;
d_ptr.reset();
OclLib::close();

View File

@@ -70,7 +70,7 @@ protected:
# endif
private:
OclBackendPrivate *d_ptr;
std::shared_ptr<OclBackendPrivate> d_ptr;
};

View File

@@ -95,8 +95,7 @@ xmrig::Api::~Api()
# ifdef XMRIG_FEATURE_HTTP
if (m_httpd) {
m_httpd->stop();
delete m_httpd;
m_httpd = nullptr; // Ensure the pointer is set to nullptr after deletion
m_httpd.reset();
}
# endif
}
@@ -116,12 +115,11 @@ void xmrig::Api::start()
# ifdef XMRIG_FEATURE_HTTP
if (!m_httpd) {
m_httpd = new Httpd(m_base);
m_httpd = std::make_shared<Httpd>(m_base);
if (!m_httpd->start()) {
LOG_ERR("%s " RED_BOLD("HTTP API server failed to start."), Tags::network());
delete m_httpd; // Properly handle failure to start
m_httpd = nullptr;
m_httpd.reset();
}
}
# endif

View File

@@ -66,7 +66,7 @@ private:
Base *m_base;
char m_id[32]{};
const uint64_t m_timestamp;
Httpd *m_httpd = nullptr;
std::shared_ptr<Httpd> m_httpd;
std::vector<IApiListener *> m_listeners;
String m_workerId;
uint8_t m_ticks = 0;

View File

@@ -69,13 +69,13 @@ bool xmrig::Httpd::start()
bool tls = false;
# ifdef XMRIG_FEATURE_TLS
m_http = new HttpsServer(m_httpListener);
m_http = std::make_shared<HttpsServer>(m_httpListener);
tls = m_http->setTls(m_base->config()->tls());
# else
m_http = new HttpServer(m_httpListener);
m_http = std::make_shared<HttpServer>(m_httpListener);
# endif
m_server = new TcpServer(config.host(), config.port(), m_http);
m_server = std::make_shared<TcpServer>(config.host(), config.port(), m_http.get());
const int rc = m_server->bind();
Log::print(GREEN_BOLD(" * ") WHITE_BOLD("%-13s") CSI "1;%dm%s:%d" " " RED_BOLD("%s"),
@@ -112,9 +112,6 @@ bool xmrig::Httpd::start()
void xmrig::Httpd::stop()
{
delete m_server;
delete m_http;
m_server = nullptr;
m_http = nullptr;
m_port = 0;

View File

@@ -55,13 +55,13 @@ private:
const Base *m_base;
std::shared_ptr<IHttpListener> m_httpListener;
TcpServer *m_server = nullptr;
std::shared_ptr<TcpServer> m_server;
uint16_t m_port = 0;
# ifdef XMRIG_FEATURE_TLS
HttpsServer *m_http = nullptr;
std::shared_ptr<HttpsServer> m_http;
# else
HttpServer *m_http = nullptr;
std::shared_ptr<HttpServer> m_http;
# endif
};

View File

@@ -128,7 +128,7 @@ public:
} // namespace xmrig
xmrig::Async::Async(Callback callback) : d_ptr(new AsyncPrivate())
xmrig::Async::Async(Callback callback) : d_ptr(std::make_shared<AsyncPrivate>())
{
d_ptr->callback = std::move(callback);
d_ptr->async = new uv_async_t;
@@ -151,8 +151,6 @@ xmrig::Async::Async(IAsyncListener *listener) : d_ptr(new AsyncPrivate())
xmrig::Async::~Async()
{
Handle::close(d_ptr->async);
delete d_ptr;
}

View File

@@ -49,7 +49,7 @@ public:
void send();
private:
AsyncPrivate *d_ptr;
std::shared_ptr<AsyncPrivate> d_ptr;
};

View File

@@ -36,7 +36,7 @@ xmrig::Watcher::Watcher(const String &path, IWatcherListener *listener) :
m_listener(listener),
m_path(path)
{
m_timer = new Timer(this);
m_timer = std::make_shared<Timer>(this);
m_fsEvent = new uv_fs_event_t;
m_fsEvent->data = this;
@@ -48,8 +48,6 @@ xmrig::Watcher::Watcher(const String &path, IWatcherListener *listener) :
xmrig::Watcher::~Watcher()
{
delete m_timer;
Handle::close(m_fsEvent);
}

View File

@@ -60,7 +60,7 @@ private:
IWatcherListener *m_listener;
String m_path;
Timer *m_timer;
std::shared_ptr<Timer> m_timer;
uv_fs_event_t *m_fsEvent;
};

View File

@@ -66,17 +66,10 @@ public:
LogPrivate() = default;
~LogPrivate() = default;
inline ~LogPrivate()
{
for (auto backend : m_backends) {
delete backend;
}
}
inline void add(ILogBackend *backend) { m_backends.push_back(backend); }
inline void add(std::shared_ptr<ILogBackend> backend) { m_backends.emplace_back(backend); }
void print(Log::Level level, const char *fmt, va_list args)
@@ -108,7 +101,7 @@ public:
}
if (!m_backends.empty()) {
for (auto backend : m_backends) {
for (auto& backend : m_backends) {
backend->print(ts, level, m_buf, offset, size, true);
backend->print(ts, level, txt.c_str(), offset ? (offset - 11) : 0, txt.size(), false);
}
@@ -188,13 +181,13 @@ private:
char m_buf[Log::kMaxBufferSize]{};
std::mutex m_mutex;
std::vector<ILogBackend*> m_backends;
std::vector<std::shared_ptr<ILogBackend>> m_backends;
};
bool Log::m_background = false;
bool Log::m_colors = true;
LogPrivate *Log::d = nullptr;
std::shared_ptr<LogPrivate> Log::d{};
uint32_t Log::m_verbose = 0;
@@ -202,7 +195,7 @@ uint32_t Log::m_verbose = 0;
void xmrig::Log::add(ILogBackend *backend)
void xmrig::Log::add(std::shared_ptr<ILogBackend> backend)
{
assert(d != nullptr);
@@ -214,14 +207,13 @@ void xmrig::Log::add(ILogBackend *backend)
void xmrig::Log::destroy()
{
delete d;
d = nullptr;
d.reset();
}
void xmrig::Log::init()
{
d = new LogPrivate();
d = std::make_shared<LogPrivate>();
}

View File

@@ -23,6 +23,7 @@
#include <cstddef>
#include <cstdint>
#include <memory>
namespace xmrig {
@@ -49,7 +50,7 @@ public:
constexpr static size_t kMaxBufferSize = 16384;
static void add(ILogBackend *backend);
static void add(std::shared_ptr<ILogBackend> backend);
static void destroy();
static void init();
static void print(const char *fmt, ...);
@@ -66,7 +67,7 @@ public:
private:
static bool m_background;
static bool m_colors;
static LogPrivate *d;
static std::shared_ptr<LogPrivate> d;
static uint32_t m_verbose;
};
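
Registering log backends as shared_ptr removes LogPrivate's hand-written destructor loop: once the vector is the owner, clearing it (or destroying LogPrivate) releases every sink. Reduced sketch with an illustrative interface:

#include <cstdio>
#include <memory>
#include <vector>

struct BackendExample {
    virtual ~BackendExample() = default;
    virtual void print(const char *line) = 0;
};

struct ConsoleBackendExample : BackendExample {
    void print(const char *line) override { std::puts(line); }
};

static std::vector<std::shared_ptr<BackendExample>> m_backends;

void add(std::shared_ptr<BackendExample> backend)
{
    m_backends.emplace_back(std::move(backend));
}

int main()
{
    add(std::make_shared<ConsoleBackendExample>());
    for (auto &backend : m_backends) {
        backend->print("hello from a shared_ptr-owned sink");
    }
    m_backends.clear();   // replaces "for (auto backend : m_backends) delete backend;"
    return 0;
}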

View File

@@ -80,11 +80,10 @@ public:
inline ~BasePrivate()
{
# ifdef XMRIG_FEATURE_API
delete api;
api.reset();
# endif
delete config;
delete watcher;
watcher.reset();
NetBuffer::destroy();
}
@@ -98,27 +97,25 @@ public:
}
inline void replace(Config *newConfig)
inline void replace(std::shared_ptr<Config> newConfig)
{
Config *previousConfig = config;
auto previousConfig = config;
config = newConfig;
for (IBaseListener *listener : listeners) {
listener->onConfigChanged(config, previousConfig);
listener->onConfigChanged(config.get(), previousConfig.get());
}
delete previousConfig;
}
Api *api = nullptr;
Config *config = nullptr;
std::shared_ptr<Api> api;
std::shared_ptr<Config> config;
std::vector<IBaseListener *> listeners;
Watcher *watcher = nullptr;
std::shared_ptr<Watcher> watcher;
private:
inline static Config *load(Process *process)
inline static std::shared_ptr<Config> load(Process *process)
{
JsonChain chain;
ConfigTransform transform;
@@ -127,29 +124,29 @@ private:
ConfigTransform::load(chain, process, transform);
if (read(chain, config)) {
return config.release();
return config;
}
chain.addFile(Process::location(Process::DataLocation, "config.json"));
if (read(chain, config)) {
return config.release();
return config;
}
chain.addFile(Process::location(Process::HomeLocation, "." APP_ID ".json"));
if (read(chain, config)) {
return config.release();
return config;
}
chain.addFile(Process::location(Process::HomeLocation, ".config" XMRIG_DIR_SEPARATOR APP_ID ".json"));
if (read(chain, config)) {
return config.release();
return config;
}
# ifdef XMRIG_FEATURE_EMBEDDED_CONFIG
chain.addRaw(default_config);
if (read(chain, config)) {
return config.release();
return config;
}
# endif
@@ -162,7 +159,7 @@ private:
xmrig::Base::Base(Process *process)
: d_ptr(new BasePrivate(process))
: d_ptr(std::make_shared<BasePrivate>(process))
{
}
@@ -170,7 +167,6 @@ xmrig::Base::Base(Process *process)
xmrig::Base::~Base()
{
delete d_ptr;
}
@@ -183,7 +179,7 @@ bool xmrig::Base::isReady() const
int xmrig::Base::init()
{
# ifdef XMRIG_FEATURE_API
d_ptr->api = new Api(this);
d_ptr->api = std::make_shared<Api>(this);
d_ptr->api->addListener(this);
# endif
@@ -193,16 +189,16 @@ int xmrig::Base::init()
Log::setBackground(true);
}
else {
Log::add(new ConsoleLog(config()->title()));
Log::add(std::make_shared<ConsoleLog>(config()->title()));
}
if (config()->logFile()) {
Log::add(new FileLog(config()->logFile()));
Log::add(std::make_shared<FileLog>(config()->logFile()));
}
# ifdef HAVE_SYSLOG_H
if (config()->isSyslog()) {
Log::add(new SysLog());
Log::add(std::make_shared<SysLog>());
}
# endif
@@ -221,7 +217,7 @@ void xmrig::Base::start()
}
if (config()->isWatch()) {
d_ptr->watcher = new Watcher(config()->fileName(), this);
d_ptr->watcher = std::make_shared<Watcher>(config()->fileName(), this);
}
}
@@ -232,8 +228,7 @@ void xmrig::Base::stop()
api()->stop();
# endif
delete d_ptr->watcher;
d_ptr->watcher = nullptr;
d_ptr->watcher.reset();
}
@@ -241,7 +236,7 @@ xmrig::Api *xmrig::Base::api() const
{
assert(d_ptr->api != nullptr);
return d_ptr->api;
return d_ptr->api.get();
}
@@ -258,18 +253,14 @@ bool xmrig::Base::reload(const rapidjson::Value &json)
return false;
}
auto config = new Config();
auto config = std::make_shared<Config>();
if (!config->read(reader, d_ptr->config->fileName())) {
delete config;
return false;
}
const bool saved = config->save();
if (config->isWatch() && d_ptr->watcher && saved) {
delete config;
return true;
}
@@ -279,11 +270,11 @@ bool xmrig::Base::reload(const rapidjson::Value &json)
}
xmrig::Config *xmrig::Base::config() const
xmrig::Config* xmrig::Base::config() const
{
assert(d_ptr->config != nullptr);
assert(d_ptr->config);
return d_ptr->config;
return d_ptr->config.get();
}
@@ -300,12 +291,10 @@ void xmrig::Base::onFileChanged(const String &fileName)
JsonChain chain;
chain.addFile(fileName);
auto config = new Config();
auto config = std::make_shared<Config>();
if (!config->read(chain, chain.fileName())) {
LOG_ERR("%s " RED("reloading failed"), Tags::config());
delete config;
return;
}

View File

@@ -64,7 +64,7 @@ protected:
# endif
private:
BasePrivate *d_ptr;
std::shared_ptr<BasePrivate> d_ptr;
};

View File

@@ -5,8 +5,8 @@
* Copyright 2014-2016 Wolf9466 <https://github.com/OhGodAPet>
* Copyright 2016 Jay D Dee <jayddee246@gmail.com>
* Copyright 2017-2018 XMR-Stak <https://github.com/fireice-uk>, <https://github.com/psychocrypt>
* Copyright 2018-2024 SChernykh <https://github.com/SChernykh>
* Copyright 2016-2024 XMRig <https://github.com/xmrig>, <support@xmrig.com>
* Copyright 2018-2019 SChernykh <https://github.com/SChernykh>
* Copyright 2016-2019 XMRig <https://github.com/xmrig>, <support@xmrig.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -22,9 +22,11 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <cstdio>
#include <uv.h>
#ifdef XMRIG_FEATURE_TLS
# include <openssl/opensslv.h>
#endif
@@ -64,13 +66,13 @@ static int showVersion()
# endif
printf("\n features:"
# if defined(__x86_64__) || defined(_M_AMD64) || defined (__arm64__) || defined (__aarch64__)
" 64-bit"
# else
# if defined(__i386__) || defined(_M_IX86)
" 32-bit"
# elif defined(__x86_64__) || defined(_M_AMD64)
" 64-bit"
# endif
# if defined(__AES__) || defined(_MSC_VER) || defined(__ARM_FEATURE_CRYPTO)
# if defined(__AES__) || defined(_MSC_VER)
" AES"
# endif
"\n");

View File

@@ -29,13 +29,13 @@
namespace xmrig {
static Storage<DnsUvBackend> *storage = nullptr;
static std::shared_ptr<Storage<DnsUvBackend>> storage = nullptr;
Storage<DnsUvBackend> &DnsUvBackend::getStorage()
{
if (storage == nullptr) {
storage = new Storage<DnsUvBackend>();
if (!storage) {
storage = std::make_shared<Storage<DnsUvBackend>>();
}
return *storage;
@@ -67,8 +67,7 @@ xmrig::DnsUvBackend::~DnsUvBackend()
storage->release(m_key);
if (storage->isEmpty()) {
delete storage;
storage = nullptr;
storage.reset();
}
}

View File

@@ -87,14 +87,13 @@ xmrig::DaemonClient::DaemonClient(int id, IClientListener *listener) :
BaseClient(id, listener)
{
m_httpListener = std::make_shared<HttpListener>(this);
m_timer = new Timer(this);
m_timer = std::make_shared<Timer>(this);
m_key = m_storage.add(this);
}
xmrig::DaemonClient::~DaemonClient()
{
delete m_timer;
delete m_ZMQSocket;
}
@@ -104,9 +103,6 @@ void xmrig::DaemonClient::deleteLater()
if (m_pool.zmq_port() >= 0) {
ZMQClose(true);
}
else {
delete this;
}
}

View File

@@ -107,7 +107,7 @@ private:
uint64_t m_jobSteadyMs = 0;
String m_tlsFingerprint;
String m_tlsVersion;
Timer *m_timer;
std::shared_ptr<Timer> m_timer;
uint64_t m_blocktemplateRequestHeight = 0;
WalletAddress m_walletAddress;

View File

@@ -221,42 +221,42 @@ bool xmrig::Pool::isEqual(const Pool &other) const
}
xmrig::IClient *xmrig::Pool::createClient(int id, IClientListener *listener) const
std::shared_ptr<xmrig::IClient> xmrig::Pool::createClient(int id, IClientListener* listener) const
{
IClient *client = nullptr;
std::shared_ptr<xmrig::IClient> client;
if (m_mode == MODE_POOL) {
# if defined XMRIG_ALGO_KAWPOW || defined XMRIG_ALGO_GHOSTRIDER
const uint32_t f = m_algorithm.family();
if ((f == Algorithm::KAWPOW) || (f == Algorithm::GHOSTRIDER) || (m_coin == Coin::RAVEN)) {
client = new EthStratumClient(id, Platform::userAgent(), listener);
client = std::make_shared<EthStratumClient>(id, Platform::userAgent(), listener);
}
else
# endif
{
client = new Client(id, Platform::userAgent(), listener);
client = std::make_shared<Client>(id, Platform::userAgent(), listener);
}
}
# ifdef XMRIG_FEATURE_HTTP
else if (m_mode == MODE_DAEMON) {
client = new DaemonClient(id, listener);
client = std::make_shared<DaemonClient>(id, listener);
}
else if (m_mode == MODE_SELF_SELECT) {
client = new SelfSelectClient(id, Platform::userAgent(), listener, m_submitToOrigin);
client = std::make_shared<SelfSelectClient>(id, Platform::userAgent(), listener, m_submitToOrigin);
}
# endif
# if defined XMRIG_ALGO_KAWPOW || defined XMRIG_ALGO_GHOSTRIDER
else if (m_mode == MODE_AUTO_ETH) {
client = new AutoClient(id, Platform::userAgent(), listener);
client = std::make_shared<AutoClient>(id, Platform::userAgent(), listener);
}
# endif
# ifdef XMRIG_FEATURE_BENCHMARK
else if (m_mode == MODE_BENCHMARK) {
client = new BenchClient(m_benchmark, listener);
client = std::make_shared<BenchClient>(m_benchmark, listener);
}
# endif
assert(client != nullptr);
assert(client);
if (client) {
client->setPool(*this);

View File

@@ -127,7 +127,7 @@ public:
bool isEnabled() const;
bool isEqual(const Pool &other) const;
IClient *createClient(int id, IClientListener *listener) const;
std::shared_ptr<IClient> createClient(int id, IClientListener *listener) const;
rapidjson::Value toJSON(rapidjson::Document &doc) const;
std::string printableName() const;

View File

@@ -80,17 +80,17 @@ int xmrig::Pools::donateLevel() const
}
xmrig::IStrategy *xmrig::Pools::createStrategy(IStrategyListener *listener) const
std::shared_ptr<xmrig::IStrategy> xmrig::Pools::createStrategy(IStrategyListener *listener) const
{
if (active() == 1) {
for (const Pool &pool : m_data) {
if (pool.isEnabled()) {
return new SinglePoolStrategy(pool, retryPause(), retries(), listener);
return std::make_shared<SinglePoolStrategy>(pool, retryPause(), retries(), listener);
}
}
}
auto strategy = new FailoverStrategy(retryPause(), retries(), listener);
auto strategy = std::make_shared<FailoverStrategy>(retryPause(), retries(), listener);
for (const Pool &pool : m_data) {
if (pool.isEnabled()) {
strategy->add(pool);
@@ -154,7 +154,7 @@ void xmrig::Pools::load(const IJsonReader &reader)
Pool pool(value);
if (pool.isValid()) {
m_data.push_back(std::move(pool));
m_data.emplace_back(std::move(pool));
}
}

View File

@@ -73,7 +73,7 @@ public:
bool isEqual(const Pools &other) const;
int donateLevel() const;
IStrategy *createStrategy(IStrategyListener *listener) const;
std::shared_ptr<IStrategy> createStrategy(IStrategyListener *listener) const;
rapidjson::Value toJSON(rapidjson::Document &doc) const;
size_t active() const;
uint32_t benchSize() const;

View File

@@ -56,13 +56,12 @@ xmrig::SelfSelectClient::SelfSelectClient(int id, const char *agent, IClientList
m_listener(listener)
{
m_httpListener = std::make_shared<HttpListener>(this);
m_client = new Client(id, agent, this);
m_client = std::make_shared<Client>(id, agent, this);
}
xmrig::SelfSelectClient::~SelfSelectClient()
{
delete m_client;
}

View File

@@ -105,7 +105,7 @@ private:
bool m_active = false;
bool m_quiet = false;
const bool m_submitToOrigin;
IClient *m_client;
std::shared_ptr<IClient> m_client;
IClientListener *m_listener;
int m_retries = 5;
int64_t m_failures = 0;

View File

@@ -53,7 +53,7 @@ public:
inline int64_t sequence() const override { return 0; }
inline int64_t submit(const JobResult &) override { return 0; }
inline void connect(const Pool &pool) override { setPool(pool); }
inline void deleteLater() override { delete this; }
inline void deleteLater() override {}
inline void setAlgo(const Algorithm &algo) override {}
inline void setEnabled(bool enabled) override {}
inline void setProxy(const ProxyUrl &proxy) override {}

View File

@@ -47,7 +47,7 @@ xmrig::FailoverStrategy::FailoverStrategy(int retryPause, int retries, IStrategy
xmrig::FailoverStrategy::~FailoverStrategy()
{
for (IClient *client : m_pools) {
for (auto& client : m_pools) {
client->deleteLater();
}
}
@@ -55,7 +55,7 @@ xmrig::FailoverStrategy::~FailoverStrategy()
void xmrig::FailoverStrategy::add(const Pool &pool)
{
IClient *client = pool.createClient(static_cast<int>(m_pools.size()), this);
std::shared_ptr<IClient> client = pool.createClient(static_cast<int>(m_pools.size()), this);
client->setRetries(m_retries);
client->setRetryPause(m_retryPause * 1000);
@@ -93,7 +93,7 @@ void xmrig::FailoverStrategy::resume()
void xmrig::FailoverStrategy::setAlgo(const Algorithm &algo)
{
for (IClient *client : m_pools) {
for (auto& client : m_pools) {
client->setAlgo(algo);
}
}
@@ -101,7 +101,7 @@ void xmrig::FailoverStrategy::setAlgo(const Algorithm &algo)
void xmrig::FailoverStrategy::setProxy(const ProxyUrl &proxy)
{
for (IClient *client : m_pools) {
for (auto& client : m_pools) {
client->setProxy(proxy);
}
}
@@ -109,7 +109,7 @@ void xmrig::FailoverStrategy::setProxy(const ProxyUrl &proxy)
void xmrig::FailoverStrategy::stop()
{
for (auto &pool : m_pools) {
for (auto& pool : m_pools) {
pool->disconnect();
}
@@ -122,7 +122,7 @@ void xmrig::FailoverStrategy::stop()
void xmrig::FailoverStrategy::tick(uint64_t now)
{
for (IClient *client : m_pools) {
for (auto& client : m_pools) {
client->tick(now);
}
}

View File

@@ -49,7 +49,7 @@ public:
protected:
inline bool isActive() const override { return m_active >= 0; }
inline IClient *client() const override { return isActive() ? active() : m_pools[m_index]; }
inline IClient* client() const override { return isActive() ? active() : m_pools[m_index].get(); }
int64_t submit(const JobResult &result) override;
void connect() override;
@@ -67,7 +67,7 @@ protected:
void onVerifyAlgorithm(const IClient *client, const Algorithm &algorithm, bool *ok) override;
private:
inline IClient *active() const { return m_pools[static_cast<size_t>(m_active)]; }
inline IClient* active() const { return m_pools[static_cast<size_t>(m_active)].get(); }
const bool m_quiet;
const int m_retries;
@@ -75,7 +75,7 @@ private:
int m_active = -1;
IStrategyListener *m_listener;
size_t m_index = 0;
std::vector<IClient*> m_pools;
std::vector<std::shared_ptr<IClient>> m_pools;
};

View File

@@ -66,7 +66,7 @@ void xmrig::SinglePoolStrategy::resume()
return;
}
m_listener->onJob(this, m_client, m_client->job(), rapidjson::Value(rapidjson::kNullType));
m_listener->onJob(this, m_client.get(), m_client->job(), rapidjson::Value(rapidjson::kNullType));
}

View File

@@ -49,7 +49,7 @@ public:
protected:
inline bool isActive() const override { return m_active; }
inline IClient *client() const override { return m_client; }
inline IClient* client() const override { return m_client.get(); }
int64_t submit(const JobResult &result) override;
void connect() override;
@@ -68,7 +68,7 @@ protected:
private:
bool m_active;
IClient *m_client;
std::shared_ptr<IClient> m_client;
IStrategyListener *m_listener;
};

View File

@@ -23,22 +23,23 @@
#include <cassert>
#include <memory>
#include <uv.h>
namespace xmrig {
static MemPool<XMRIG_NET_BUFFER_CHUNK_SIZE, XMRIG_NET_BUFFER_INIT_CHUNKS> *pool = nullptr;
static std::shared_ptr<MemPool<XMRIG_NET_BUFFER_CHUNK_SIZE, XMRIG_NET_BUFFER_INIT_CHUNKS>> pool;
inline MemPool<XMRIG_NET_BUFFER_CHUNK_SIZE, XMRIG_NET_BUFFER_INIT_CHUNKS> *getPool()
{
if (!pool) {
pool = new MemPool<XMRIG_NET_BUFFER_CHUNK_SIZE, XMRIG_NET_BUFFER_INIT_CHUNKS>();
pool = std::make_shared<MemPool<XMRIG_NET_BUFFER_CHUNK_SIZE, XMRIG_NET_BUFFER_INIT_CHUNKS>>();
}
return pool;
return pool.get();
}
@@ -59,8 +60,7 @@ void xmrig::NetBuffer::destroy()
assert(pool->freeSize() == pool->size());
delete pool;
pool = nullptr;
pool.reset();
}

View File

@@ -84,10 +84,10 @@ public:
inline ~MinerPrivate()
{
delete timer;
timer.reset();
for (IBackend *backend : backends) {
delete backend;
for (auto& backend : backends) {
backend.reset();
}
# ifdef XMRIG_ALGO_RANDOMX
@@ -98,7 +98,7 @@ public:
bool isEnabled(const Algorithm &algorithm) const
{
for (IBackend *backend : backends) {
for (auto& backend : backends) {
if (backend->isEnabled() && backend->isEnabled(algorithm)) {
return true;
}
@@ -124,7 +124,7 @@ public:
Nonce::reset(job.index());
}
for (IBackend *backend : backends) {
for (auto& backend : backends) {
backend->setJob(job);
}
@@ -175,7 +175,7 @@ public:
double t[3] = { 0.0 };
for (IBackend *backend : backends) {
for (auto& backend : backends) {
const Hashrate *hr = backend->hashrate();
if (!hr) {
continue;
@@ -221,7 +221,7 @@ public:
reply.SetArray();
for (IBackend *backend : backends) {
for (auto& backend : backends) {
reply.PushBack(backend->toJSON(doc), allocator);
}
}
@@ -364,9 +364,9 @@ public:
Controller *controller;
Job job;
mutable std::map<Algorithm::Id, double> maxHashrate;
std::vector<IBackend *> backends;
std::vector<std::shared_ptr<IBackend>> backends;
String userJobId;
Timer *timer = nullptr;
std::shared_ptr<Timer> timer;
uint64_t ticks = 0;
Taskbar m_taskbar;
@@ -378,7 +378,7 @@ public:
xmrig::Miner::Miner(Controller *controller)
: d_ptr(new MinerPrivate(controller))
: d_ptr(std::make_shared<MinerPrivate>(controller))
{
const int priority = controller->config()->cpu().priority();
if (priority >= 0) {
@@ -400,29 +400,23 @@ xmrig::Miner::Miner(Controller *controller)
controller->api()->addListener(this);
# endif
d_ptr->timer = new Timer(this);
d_ptr->timer = std::make_shared<Timer>(this);
d_ptr->backends.reserve(3);
d_ptr->backends.push_back(new CpuBackend(controller));
d_ptr->backends.emplace_back(std::make_shared<CpuBackend>(controller));
# ifdef XMRIG_FEATURE_OPENCL
d_ptr->backends.push_back(new OclBackend(controller));
d_ptr->backends.emplace_back(std::make_shared<OclBackend>(controller));
# endif
# ifdef XMRIG_FEATURE_CUDA
d_ptr->backends.push_back(new CudaBackend(controller));
d_ptr->backends.emplace_back(std::make_shared<CudaBackend>(controller));
# endif
d_ptr->rebuild();
}
xmrig::Miner::~Miner()
{
delete d_ptr;
}
bool xmrig::Miner::isEnabled() const
{
return d_ptr->enabled;
@@ -441,7 +435,7 @@ const xmrig::Algorithms &xmrig::Miner::algorithms() const
}
const std::vector<xmrig::IBackend *> &xmrig::Miner::backends() const
const std::vector<std::shared_ptr<xmrig::IBackend>>& xmrig::Miner::backends() const
{
return d_ptr->backends;
}
@@ -538,7 +532,7 @@ void xmrig::Miner::setEnabled(bool enabled)
void xmrig::Miner::setJob(const Job &job, bool donate)
{
for (IBackend *backend : d_ptr->backends) {
for (auto& backend : d_ptr->backends) {
backend->prepare(job);
}
@@ -606,7 +600,7 @@ void xmrig::Miner::stop()
{
Nonce::stop();
for (IBackend *backend : d_ptr->backends) {
for (auto& backend : d_ptr->backends) {
backend->stop();
}
}
@@ -622,7 +616,7 @@ void xmrig::Miner::onConfigChanged(Config *config, Config *previousConfig)
const Job job = this->job();
for (IBackend *backend : d_ptr->backends) {
for (auto& backend : d_ptr->backends) {
backend->setJob(job);
}
}
@@ -636,7 +630,7 @@ void xmrig::Miner::onTimer(const Timer *)
bool stopMiner = false;
for (IBackend *backend : d_ptr->backends) {
for (auto& backend : d_ptr->backends) {
if (!backend->tick(d_ptr->ticks)) {
stopMiner = true;
}
@@ -718,7 +712,7 @@ void xmrig::Miner::onRequest(IApiRequest &request)
}
}
for (IBackend *backend : d_ptr->backends) {
for (auto& backend : d_ptr->backends) {
backend->handleRequest(request);
}
}

View File

@@ -46,12 +46,12 @@ public:
XMRIG_DISABLE_COPY_MOVE_DEFAULT(Miner)
Miner(Controller *controller);
~Miner() override;
~Miner() override = default;
bool isEnabled() const;
bool isEnabled(const Algorithm &algorithm) const;
const Algorithms &algorithms() const;
const std::vector<IBackend *> &backends() const;
const std::vector<std::shared_ptr<IBackend>> &backends() const;
Job job() const;
void execCommand(char command);
void pause();
@@ -72,7 +72,7 @@ protected:
# endif
private:
MinerPrivate *d_ptr;
std::shared_ptr<MinerPrivate> d_ptr;
};
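
The change to "~Miner() override = default" works for a reason worth spelling out: a shared_ptr captures (type-erases) its deleter at the point of construction in the .cpp, so the header's defaulted destructor compiles even in translation units where MinerPrivate is incomplete. A unique_ptr pimpl would instead force the destructor out of line. Sketch with stand-in names:

#include <memory>

struct PimplExample;                  // incomplete at the "header" side

class OwnerExample {
public:
    OwnerExample();
    ~OwnerExample() = default;        // fine: the deleter was erased at construction
private:
    std::shared_ptr<PimplExample> d_ptr;
};

// "cpp side": the type is completed and the deleter captured here
struct PimplExample { int ticks = 0; };

OwnerExample::OwnerExample() : d_ptr(std::make_shared<PimplExample>()) {}

int main()
{
    OwnerExample owner;               // cleanup runs without a hand-written destructor
    return 0;
}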

View File

@@ -65,14 +65,13 @@ struct TaskbarPrivate
};
Taskbar::Taskbar() : d_ptr(new TaskbarPrivate())
Taskbar::Taskbar() : d_ptr(std::make_shared<TaskbarPrivate>())
{
}
Taskbar::~Taskbar()
{
delete d_ptr;
}

View File

@@ -19,6 +19,7 @@
#ifndef XMRIG_TASKBAR_H
#define XMRIG_TASKBAR_H
#include <memory>
namespace xmrig {
@@ -39,7 +40,7 @@ private:
bool m_active = false;
bool m_enabled = true;
TaskbarPrivate* d_ptr = nullptr;
std::shared_ptr<TaskbarPrivate> d_ptr;
void updateTaskbarColor();
};

View File

@@ -115,14 +115,13 @@ public:
xmrig::Config::Config() :
d_ptr(new ConfigPrivate())
d_ptr(std::make_shared<ConfigPrivate>())
{
}
xmrig::Config::~Config()
{
delete d_ptr;
}

View File

@@ -101,7 +101,7 @@ public:
void getJSON(rapidjson::Document &doc) const override;
private:
ConfigPrivate *d_ptr;
std::shared_ptr<ConfigPrivate> d_ptr;
};

View File

@@ -49,18 +49,12 @@ xmrig::MemoryPool::MemoryPool(size_t size, bool hugePages, uint32_t node)
constexpr size_t alignment = 1 << 24;
m_memory = new VirtualMemory(size * pageSize + alignment, hugePages, false, false, node);
m_memory = std::make_shared<VirtualMemory>(size * pageSize + alignment, hugePages, false, false, node);
m_alignOffset = (alignment - (((size_t)m_memory->scratchpad()) % alignment)) % alignment;
}
xmrig::MemoryPool::~MemoryPool()
{
delete m_memory;
}
bool xmrig::MemoryPool::isHugePages(uint32_t) const
{
return m_memory && m_memory->isHugePages();

View File

@@ -44,7 +44,7 @@ public:
XMRIG_DISABLE_COPY_MOVE_DEFAULT(MemoryPool)
MemoryPool(size_t size, bool hugePages, uint32_t node = 0);
~MemoryPool() override;
~MemoryPool() override = default;
protected:
bool isHugePages(uint32_t node) const override;
@@ -55,7 +55,7 @@ private:
size_t m_refs = 0;
size_t m_offset = 0;
size_t m_alignOffset = 0;
VirtualMemory *m_memory = nullptr;
std::shared_ptr<VirtualMemory> m_memory;
};

View File

@@ -42,14 +42,6 @@ xmrig::NUMAMemoryPool::NUMAMemoryPool(size_t size, bool hugePages) :
}
xmrig::NUMAMemoryPool::~NUMAMemoryPool()
{
for (auto kv : m_map) {
delete kv.second;
}
}
bool xmrig::NUMAMemoryPool::isHugePages(uint32_t node) const
{
if (!m_size) {
@@ -81,7 +73,7 @@ void xmrig::NUMAMemoryPool::release(uint32_t node)
xmrig::IMemoryPool *xmrig::NUMAMemoryPool::get(uint32_t node) const
{
return m_map.count(node) ? m_map.at(node) : nullptr;
return m_map.count(node) ? m_map.at(node).get() : nullptr;
}
@@ -89,8 +81,9 @@ xmrig::IMemoryPool *xmrig::NUMAMemoryPool::getOrCreate(uint32_t node) const
{
auto pool = get(node);
if (!pool) {
pool = new MemoryPool(m_nodeSize, m_hugePages, node);
m_map.insert({ node, pool });
auto new_pool = std::make_shared<MemoryPool>(m_nodeSize, m_hugePages, node);
m_map.emplace(node, new_pool);
pool = new_pool.get();
}
return pool;

View File

@@ -47,7 +47,7 @@ public:
XMRIG_DISABLE_COPY_MOVE_DEFAULT(NUMAMemoryPool)
NUMAMemoryPool(size_t size, bool hugePages);
~NUMAMemoryPool() override;
~NUMAMemoryPool() override = default;
protected:
bool isHugePages(uint32_t node) const override;
@@ -61,7 +61,7 @@ private:
bool m_hugePages = true;
size_t m_nodeSize = 0;
size_t m_size = 0;
mutable std::map<uint32_t, IMemoryPool *> m_map;
mutable std::map<uint32_t, std::shared_ptr<IMemoryPool>> m_map;
};
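
NUMAMemoryPool's map now owns its per-node pools outright, so its destructor disappears: destroying the map releases every pool, while get() and getOrCreate() hand out non-owning raw pointers. A condensed sketch of that shape:

#include <cstdint>
#include <map>
#include <memory>

struct NodePoolExample {
    explicit NodePoolExample(uint32_t n) : node(n) {}
    uint32_t node;
};

static std::map<uint32_t, std::shared_ptr<NodePoolExample>> m_map;

NodePoolExample *get(uint32_t node)
{
    return m_map.count(node) ? m_map.at(node).get() : nullptr;
}

NodePoolExample *getOrCreate(uint32_t node)
{
    auto *pool = get(node);
    if (!pool) {
        auto created = std::make_shared<NodePoolExample>(node);
        m_map.emplace(node, created);
        pool = created.get();
    }
    return pool;
}

int main()
{
    getOrCreate(0);
    getOrCreate(1);
    m_map.clear();   // replaces the "for (auto kv : m_map) delete kv.second;" destructor
    return 0;
}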

View File

@@ -38,7 +38,7 @@ namespace xmrig {
size_t VirtualMemory::m_hugePageSize = VirtualMemory::kDefaultHugePageSize;
static IMemoryPool *pool = nullptr;
static std::shared_ptr<IMemoryPool> pool;
static std::mutex mutex;
@@ -113,7 +113,7 @@ uint32_t xmrig::VirtualMemory::bindToNUMANode(int64_t)
void xmrig::VirtualMemory::destroy()
{
delete pool;
pool.reset();
}
@@ -125,10 +125,10 @@ void xmrig::VirtualMemory::init(size_t poolSize, size_t hugePageSize)
# ifdef XMRIG_FEATURE_HWLOC
if (Cpu::info()->nodes() > 1) {
pool = new NUMAMemoryPool(align(poolSize, Cpu::info()->nodes()), hugePageSize > 0);
pool = std::make_shared<NUMAMemoryPool>(align(poolSize, Cpu::info()->nodes()), hugePageSize > 0);
} else
# endif
{
pool = new MemoryPool(poolSize, hugePageSize > 0);
pool = std::make_shared<MemoryPool>(poolSize, hugePageSize > 0);
}
}

View File

@@ -312,7 +312,7 @@ void benchmark()
constexpr uint32_t N = 1U << 21;
VirtualMemory::init(0, N);
VirtualMemory* memory = new VirtualMemory(N * 8, true, false, false);
std::shared_ptr<VirtualMemory> memory = std::make_shared<VirtualMemory>(N * 8, true, false, false);
// 2 MB cache per core by default
size_t max_scratchpad_size = 1U << 21;
@@ -438,7 +438,6 @@ void benchmark()
delete helper;
CnCtx::release(ctx, 8);
delete memory;
});
t.join();

View File

@@ -38,17 +38,6 @@ std::mutex KPCache::s_cacheMutex;
KPCache KPCache::s_cache;
KPCache::KPCache()
{
}
KPCache::~KPCache()
{
delete m_memory;
}
bool KPCache::init(uint32_t epoch)
{
if (epoch >= sizeof(cache_sizes) / sizeof(cache_sizes[0])) {
@@ -63,8 +52,7 @@ bool KPCache::init(uint32_t epoch)
const size_t size = cache_sizes[epoch];
if (!m_memory || m_memory->size() < size) {
delete m_memory;
m_memory = new VirtualMemory(size, false, false, false);
m_memory = std::make_shared<VirtualMemory>(size, false, false, false);
}
const ethash_h256_t seedhash = ethash_get_seedhash(epoch);

View File

@@ -41,8 +41,8 @@ public:
XMRIG_DISABLE_COPY_MOVE(KPCache)
KPCache();
~KPCache();
KPCache() = default;
~KPCache() = default;
bool init(uint32_t epoch);
@@ -61,7 +61,7 @@ public:
static KPCache s_cache;
private:
VirtualMemory* m_memory = nullptr;
std::shared_ptr<VirtualMemory> m_memory;
size_t m_size = 0;
uint32_t m_epoch = 0xFFFFFFFFUL;
std::vector<uint32_t> m_DAGCache;

View File

@@ -40,7 +40,7 @@ class RxPrivate;
static bool osInitialized = false;
static RxPrivate *d_ptr = nullptr;
static std::shared_ptr<RxPrivate> d_ptr;
class RxPrivate
@@ -73,15 +73,13 @@ void xmrig::Rx::destroy()
RxMsr::destroy();
# endif
delete d_ptr;
d_ptr = nullptr;
d_ptr.reset();
}
void xmrig::Rx::init(IRxListener *listener)
{
d_ptr = new RxPrivate(listener);
d_ptr = std::make_shared<RxPrivate>(listener);
}

View File

@@ -44,8 +44,8 @@ public:
inline ~RxBasicStoragePrivate() { deleteDataset(); }
inline bool isReady(const Job &job) const { return m_ready && m_seed == job; }
inline RxDataset *dataset() const { return m_dataset; }
inline void deleteDataset() { delete m_dataset; m_dataset = nullptr; }
inline RxDataset *dataset() const { return m_dataset.get(); }
inline void deleteDataset() { m_dataset.reset(); }
inline void setSeed(const RxSeed &seed)
@@ -64,7 +64,7 @@ public:
{
const uint64_t ts = Chrono::steadyMSecs();
m_dataset = new RxDataset(hugePages, oneGbPages, true, mode, 0);
m_dataset = std::make_shared<RxDataset>(hugePages, oneGbPages, true, mode, 0);
if (!m_dataset->cache()->get()) {
deleteDataset();
@@ -117,7 +117,7 @@ private:
bool m_ready = false;
RxDataset *m_dataset = nullptr;
std::shared_ptr<RxDataset> m_dataset;
RxSeed m_seed;
};
@@ -133,7 +133,6 @@ xmrig::RxBasicStorage::RxBasicStorage() :
xmrig::RxBasicStorage::~RxBasicStorage()
{
delete d_ptr;
}

View File

@@ -46,7 +46,7 @@ protected:
void init(const RxSeed &seed, uint32_t threads, bool hugePages, bool oneGbPages, RxConfig::Mode mode, int priority) override;
private:
RxBasicStoragePrivate *d_ptr;
std::shared_ptr<RxBasicStoragePrivate> d_ptr;
};

View File

@@ -35,7 +35,7 @@ static_assert(RANDOMX_FLAG_JIT == 8, "RANDOMX_FLAG_JIT flag mismatch");
xmrig::RxCache::RxCache(bool hugePages, uint32_t nodeId)
{
m_memory = new VirtualMemory(maxSize(), hugePages, false, false, nodeId);
m_memory = std::make_shared<VirtualMemory>(maxSize(), hugePages, false, false, nodeId);
create(m_memory->raw());
}
@@ -50,8 +50,6 @@ xmrig::RxCache::RxCache(uint8_t *memory)
xmrig::RxCache::~RxCache()
{
randomx_release_cache(m_cache);
delete m_memory;
}

View File

@@ -69,7 +69,7 @@ private:
bool m_jit = true;
Buffer m_seed;
randomx_cache *m_cache = nullptr;
VirtualMemory *m_memory = nullptr;
std::shared_ptr<VirtualMemory> m_memory;
};

View File

@@ -79,10 +79,7 @@ xmrig::RxDataset::RxDataset(RxCache *cache) :
xmrig::RxDataset::~RxDataset()
{
randomx_release_dataset(m_dataset);
delete m_cache;
delete m_memory;
}
@@ -107,7 +104,7 @@ bool xmrig::RxDataset::init(const Buffer &seed, uint32_t numThreads, int priorit
for (uint64_t i = 0; i < numThreads; ++i) {
const uint32_t a = (datasetItemCount * i) / numThreads;
const uint32_t b = (datasetItemCount * (i + 1)) / numThreads;
threads.emplace_back(init_dataset_wrapper, m_dataset, m_cache->get(), a, b - a, priority);
threads.emplace_back(init_dataset_wrapper, m_dataset.get(), m_cache->get(), a, b - a, priority);
}
for (uint32_t i = 0; i < numThreads; ++i) {
@@ -115,7 +112,7 @@ bool xmrig::RxDataset::init(const Buffer &seed, uint32_t numThreads, int priorit
}
}
else {
init_dataset_wrapper(m_dataset, m_cache->get(), 0, datasetItemCount, priority);
init_dataset_wrapper(m_dataset.get(), m_cache->get(), 0, datasetItemCount, priority);
}
return true;
@@ -180,7 +177,7 @@ uint8_t *xmrig::RxDataset::tryAllocateScrathpad()
void *xmrig::RxDataset::raw() const
{
return m_dataset ? randomx_get_dataset_memory(m_dataset) : nullptr;
return m_dataset ? randomx_get_dataset_memory(m_dataset.get()) : nullptr;
}
@@ -191,7 +188,7 @@ void xmrig::RxDataset::setRaw(const void *raw)
}
volatile size_t N = maxSize();
memcpy(randomx_get_dataset_memory(m_dataset), raw, N);
memcpy(randomx_get_dataset_memory(m_dataset.get()), raw, N);
}
@@ -199,24 +196,22 @@ void xmrig::RxDataset::allocate(bool hugePages, bool oneGbPages)
{
if (m_mode == RxConfig::LightMode) {
LOG_ERR(CLEAR "%s" RED_BOLD_S "fast RandomX mode disabled by config", Tags::randomx());
return;
}
if (m_mode == RxConfig::AutoMode && uv_get_total_memory() < (maxSize() + RxCache::maxSize())) {
LOG_ERR(CLEAR "%s" RED_BOLD_S "not enough memory for RandomX dataset", Tags::randomx());
return;
}
m_memory = new VirtualMemory(maxSize(), hugePages, oneGbPages, false, m_node);
m_memory = std::make_shared<VirtualMemory>(maxSize(), hugePages, oneGbPages, false, m_node);
if (m_memory->isOneGbPages()) {
m_scratchpadOffset = maxSize() + RANDOMX_CACHE_MAX_SIZE;
m_scratchpadLimit = m_memory->capacity();
}
m_dataset = randomx_create_dataset(m_memory->raw());
m_dataset = std::shared_ptr<randomx_dataset>(randomx_create_dataset(m_memory->raw()), randomx_release_dataset);
# ifdef XMRIG_OS_LINUX
if (oneGbPages && !isOneGbPages()) {

View File

@@ -50,7 +50,7 @@ public:
RxDataset(RxCache *cache);
~RxDataset();
inline randomx_dataset *get() const { return m_dataset; }
inline randomx_dataset *get() const { return m_dataset.get(); }
inline RxCache *cache() const { return m_cache; }
inline void setCache(RxCache *cache) { m_cache = cache; }
@@ -70,11 +70,11 @@ private:
const RxConfig::Mode m_mode = RxConfig::FastMode;
const uint32_t m_node;
randomx_dataset *m_dataset = nullptr;
std::shared_ptr<randomx_dataset> m_dataset;
RxCache *m_cache = nullptr;
size_t m_scratchpadLimit = 0;
std::atomic<size_t> m_scratchpadOffset{};
VirtualMemory *m_memory = nullptr;
std::shared_ptr<VirtualMemory> m_memory;
};

View File

@@ -49,8 +49,6 @@ xmrig::RxQueue::~RxQueue()
m_cv.notify_one();
m_thread.join();
delete m_storage;
}
@@ -90,12 +88,12 @@ void xmrig::RxQueue::enqueue(const RxSeed &seed, const std::vector<uint32_t> &no
if (!m_storage) {
# ifdef XMRIG_FEATURE_HWLOC
if (!nodeset.empty()) {
m_storage = new RxNUMAStorage(nodeset);
m_storage = std::make_shared<RxNUMAStorage>(nodeset);
}
else
# endif
{
m_storage = new RxBasicStorage();
m_storage = std::make_shared<RxBasicStorage>();
}
}

View File

@@ -94,7 +94,7 @@ private:
void onReady();
IRxListener *m_listener = nullptr;
IRxStorage *m_storage = nullptr;
std::shared_ptr<IRxStorage> m_storage;
RxSeed m_seed;
State m_state = STATE_IDLE;
std::condition_variable m_cv;

View File

@@ -25,7 +25,7 @@
#include "crypto/rx/RxVm.h"
randomx_vm *xmrig::RxVm::create(RxDataset *dataset, uint8_t *scratchpad, bool softAes, const Assembly &assembly, uint32_t node)
std::shared_ptr<randomx_vm> xmrig::RxVm::create(RxDataset *dataset, uint8_t *scratchpad, bool softAes, const Assembly &assembly, uint32_t node)
{
int flags = 0;
@@ -46,13 +46,8 @@ randomx_vm *xmrig::RxVm::create(RxDataset *dataset, uint8_t *scratchpad, bool so
flags |= RANDOMX_FLAG_AMD;
}
return randomx_create_vm(static_cast<randomx_flags>(flags), !dataset->get() ? dataset->cache()->get() : nullptr, dataset->get(), scratchpad, node);
return std::shared_ptr<randomx_vm>(randomx_create_vm(
static_cast<randomx_flags>(flags), !dataset->get() ? dataset->cache()->get() : nullptr, dataset->get(), scratchpad, node),
randomx_destroy_vm);
}
void xmrig::RxVm::destroy(randomx_vm* vm)
{
if (vm) {
randomx_destroy_vm(vm);
}
}
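
This is the custom-deleter idiom for C-style create/destroy APIs: the destroy function rides along inside the shared_ptr, which is why RxVm::destroy() can be removed outright. The rx_* names below are stand-ins, not the real RandomX API:

#include <cstdio>
#include <memory>

struct rx_vm_example { int flags; };

rx_vm_example *rx_create_vm_example(int flags)          { return new rx_vm_example{flags}; }
void           rx_destroy_vm_example(rx_vm_example *vm) { delete vm; }

std::shared_ptr<rx_vm_example> create(int flags)
{
    // the deleter is stored in the control block; every copy shares it
    return std::shared_ptr<rx_vm_example>(rx_create_vm_example(flags),
                                          rx_destroy_vm_example);
}

int main()
{
    auto vm = create(8);
    std::printf("flags: %d\n", vm->flags);
    return 0;
}   // rx_destroy_vm_example runs automatically when the last reference drops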

View File

@@ -38,8 +38,7 @@ class RxDataset;
class RxVm
{
public:
static randomx_vm *create(RxDataset *dataset, uint8_t *scratchpad, bool softAes, const Assembly &assembly, uint32_t node);
static void destroy(randomx_vm *vm);
static std::shared_ptr<randomx_vm> create(RxDataset *dataset, uint8_t *scratchpad, bool softAes, const Assembly &assembly, uint32_t node);
};

View File

@@ -59,7 +59,7 @@ private:
bool rdmsr(uint32_t reg, int32_t cpu, uint64_t &value) const;
bool wrmsr(uint32_t reg, uint64_t value, int32_t cpu);
MsrPrivate *d_ptr = nullptr;
std::shared_ptr<MsrPrivate> d_ptr;
};

View File

@@ -72,11 +72,9 @@ private:
const bool m_available;
};
} // namespace xmrig
xmrig::Msr::Msr() : d_ptr(new MsrPrivate())
xmrig::Msr::Msr() : d_ptr(std::make_shared<MsrPrivate>())
{
if (!isAvailable()) {
LOG_WARN("%s " YELLOW_BOLD("msr kernel module is not available"), tag());
@@ -86,7 +84,6 @@ xmrig::Msr::Msr() : d_ptr(new MsrPrivate())
xmrig::Msr::~Msr()
{
delete d_ptr;
}

View File

@@ -85,7 +85,7 @@ public:
} // namespace xmrig
xmrig::Msr::Msr() : d_ptr(new MsrPrivate())
xmrig::Msr::Msr() : d_ptr(std::make_shared<MsrPrivate>())
{
DWORD err = 0;
@@ -195,8 +195,6 @@ xmrig::Msr::Msr() : d_ptr(new MsrPrivate())
xmrig::Msr::~Msr()
{
d_ptr->uninstall();
delete d_ptr;
}

View File

@@ -133,12 +133,10 @@ static void getResults(JobBundle &bundle, std::vector<JobResult> &results, uint3
for (uint32_t nonce : bundle.nonces) {
*bundle.job.nonce() = nonce;
randomx_calculate_hash(vm, bundle.job.blob(), bundle.job.size(), hash);
randomx_calculate_hash(vm.get(), bundle.job.blob(), bundle.job.size(), hash);
checkHash(bundle, results, nonce, hash, errors);
}
RxVm::destroy(vm);
# endif
}
else if (algorithm.family() == Algorithm::ARGON2) {
@@ -303,7 +301,7 @@ private:
};
static JobResultsPrivate *handler = nullptr;
static std::shared_ptr<JobResultsPrivate> handler;
} // namespace xmrig
@@ -317,19 +315,17 @@ void xmrig::JobResults::done(const Job &job)
void xmrig::JobResults::setListener(IJobResultListener *listener, bool hwAES)
{
assert(handler == nullptr);
assert(!handler);
handler = new JobResultsPrivate(listener, hwAES);
handler = std::make_shared<JobResultsPrivate>(listener, hwAES);
}
void xmrig::JobResults::stop()
{
assert(handler != nullptr);
assert(handler);
delete handler;
handler = nullptr;
handler.reset();
}
@@ -347,7 +343,7 @@ void xmrig::JobResults::submit(const Job& job, uint32_t nonce, const uint8_t* re
void xmrig::JobResults::submit(const JobResult &result)
{
assert(handler != nullptr);
assert(handler);
if (handler) {
handler->submit(result);

View File

@@ -67,27 +67,23 @@ xmrig::Network::Network(Controller *controller) :
controller->api()->addListener(this);
# endif
m_state = new NetworkState(this);
m_state = std::make_shared<NetworkState>(this);
const Pools &pools = controller->config()->pools();
m_strategy = pools.createStrategy(m_state);
m_strategy = pools.createStrategy(m_state.get());
if (pools.donateLevel() > 0) {
m_donate = new DonateStrategy(controller, this);
m_donate = std::make_shared<DonateStrategy>(controller, this);
}
m_timer = new Timer(this, kTickInterval, kTickInterval);
static constexpr int kTickInterval = 1 * 1000;
m_timer = std::make_shared<Timer>(this, kTickInterval, kTickInterval);
}
xmrig::Network::~Network()
{
JobResults::stop();
delete m_timer;
delete m_donate;
delete m_strategy;
delete m_state;
}
@@ -118,7 +114,7 @@ void xmrig::Network::execCommand(char command)
void xmrig::Network::onActive(IStrategy *strategy, IClient *client)
{
if (m_donate && m_donate == strategy) {
if (m_donate && m_donate.get() == strategy) {
LOG_NOTICE("%s " WHITE_BOLD("dev donate started"), Tags::network());
return;
}
@@ -157,19 +153,18 @@ void xmrig::Network::onConfigChanged(Config *config, Config *previousConfig)
config->pools().print();
delete m_strategy;
m_strategy = config->pools().createStrategy(m_state);
m_strategy = config->pools().createStrategy(m_state.get());
connect();
}
void xmrig::Network::onJob(IStrategy *strategy, IClient *client, const Job &job, const rapidjson::Value &)
{
if (m_donate && m_donate->isActive() && m_donate != strategy) {
if (m_donate && m_donate->isActive() && m_donate.get() != strategy) {
return;
}
setJob(client, job, m_donate == strategy);
setJob(client, job, m_donate.get() == strategy);
}
@@ -210,7 +205,7 @@ void xmrig::Network::onLogin(IStrategy *, IClient *client, rapidjson::Document &
void xmrig::Network::onPause(IStrategy *strategy)
{
if (m_donate && m_donate == strategy) {
if (m_donate && m_donate.get() == strategy) {
LOG_NOTICE("%s " WHITE_BOLD("dev donate finished"), Tags::network());
m_strategy->resume();
}
@@ -292,7 +287,7 @@ void xmrig::Network::setJob(IClient *client, const Job &job, bool donate)
}
if (!donate && m_donate) {
static_cast<DonateStrategy *>(m_donate)->update(client, job);
static_cast<DonateStrategy &>(*m_donate).update(client, job);
}
m_controller->miner()->setJob(job, donate);
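
One mechanical consequence visible in Network: listener callbacks still deliver raw IStrategy* pointers, and shared_ptr defines no operator== against a raw pointer, so every "m_donate == strategy" becomes "m_donate.get() == strategy". A minimal illustration with stand-in types:

#include <cstdio>
#include <memory>

struct StrategyExample { virtual ~StrategyExample() = default; };
struct DonateStrategyExample : StrategyExample {};

int main()
{
    std::shared_ptr<StrategyExample> donate = std::make_shared<DonateStrategyExample>();

    StrategyExample *fromCallback = donate.get();   // callbacks pass raw, non-owning pointers

    // "donate == fromCallback" would not compile; compare the underlying pointers
    std::printf("donate active: %d\n", donate.get() == fromCallback);
    return 0;
}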

View File

@@ -30,7 +30,7 @@
#include "interfaces/IJobResultListener.h"
#include <vector>
#include <memory>
namespace xmrig {
@@ -49,7 +49,7 @@ public:
Network(Controller *controller);
~Network() override;
inline IStrategy *strategy() const { return m_strategy; }
inline IStrategy *strategy() const { return m_strategy.get(); }
void connect();
void execCommand(char command);
@@ -64,15 +64,13 @@ protected:
void onLogin(IStrategy *strategy, IClient *client, rapidjson::Document &doc, rapidjson::Value &params) override;
void onPause(IStrategy *strategy) override;
void onResultAccepted(IStrategy *strategy, IClient *client, const SubmitResult &result, const char *error) override;
void onVerifyAlgorithm(IStrategy *strategy, const IClient *client, const Algorithm &algorithm, bool *ok) override;
# ifdef XMRIG_FEATURE_API
void onRequest(IApiRequest &request) override;
# endif
private:
constexpr static int kTickInterval = 1 * 1000;
void setJob(IClient *client, const Job &job, bool donate);
void tick();
@@ -82,10 +80,10 @@ private:
# endif
Controller *m_controller;
IStrategy *m_donate = nullptr;
IStrategy *m_strategy = nullptr;
NetworkState *m_state = nullptr;
Timer *m_timer = nullptr;
std::shared_ptr<IStrategy> m_donate;
std::shared_ptr<IStrategy> m_strategy;
std::shared_ptr<NetworkState> m_state;
std::shared_ptr<Timer> m_timer;
};

View File

@@ -75,13 +75,13 @@ xmrig::DonateStrategy::DonateStrategy(Controller *controller, IStrategyListener
m_pools.emplace_back(kDonateHost, 3333, m_userId, nullptr, nullptr, 0, true, false, mode);
if (m_pools.size() > 1) {
m_strategy = new FailoverStrategy(m_pools, 10, 2, this, true);
m_strategy = std::make_shared<FailoverStrategy>(m_pools, 10, 2, this, true);
}
else {
m_strategy = new SinglePoolStrategy(m_pools.front(), 10, 2, this, true);
m_strategy = std::make_shared<SinglePoolStrategy>(m_pools.front(), 10, 2, this, true);
}
m_timer = new Timer(this);
m_timer = std::make_shared<Timer>(this);
setState(STATE_IDLE);
}
@@ -89,8 +89,8 @@ xmrig::DonateStrategy::DonateStrategy(Controller *controller, IStrategyListener
xmrig::DonateStrategy::~DonateStrategy()
{
delete m_timer;
delete m_strategy;
m_timer.reset();
m_strategy.reset();
if (m_proxy) {
m_proxy->deleteLater();
@@ -237,7 +237,7 @@ void xmrig::DonateStrategy::onVerifyAlgorithm(const IClient *client, const Algor
}
void xmrig::DonateStrategy::onVerifyAlgorithm(IStrategy *, const IClient *client, const Algorithm &algorithm, bool *ok)
{
m_listener->onVerifyAlgorithm(this, client, algorithm, ok);
}
@@ -249,7 +249,7 @@ void xmrig::DonateStrategy::onTimer(const Timer *)
}
xmrig::IClient *xmrig::DonateStrategy::createProxy()
std::shared_ptr<xmrig::IClient> xmrig::DonateStrategy::createProxy()
{
if (m_controller->config()->pools().proxyDonate() == Pools::PROXY_DONATE_NONE) {
return nullptr;
@@ -267,7 +267,7 @@ xmrig::IClient *xmrig::DonateStrategy::createProxy()
pool.setAlgo(client->pool().algorithm());
pool.setProxy(client->pool().proxy());
IClient *proxy = new Client(-1, Platform::userAgent(), this);
std::shared_ptr<IClient> proxy = std::make_shared<Client>(-1, Platform::userAgent(), this);
proxy->setPool(pool);
proxy->setQuiet(true);

View File

@@ -47,7 +47,7 @@ public:
protected:
inline bool isActive() const override { return state() == STATE_ACTIVE; }
inline IClient *client() const override { return m_proxy ? m_proxy : m_strategy->client(); }
inline IClient *client() const override { return m_proxy ? m_proxy.get() : m_strategy->client(); }
inline void onJob(IStrategy *, IClient *client, const Job &job, const rapidjson::Value &params) override { setJob(client, job, params); }
inline void onJobReceived(IClient *client, const Job &job, const rapidjson::Value &params) override { setJob(client, job, params); }
inline void onResultAccepted(IClient *client, const SubmitResult &result, const char *error) override { setResult(client, result, error); }
@@ -69,7 +69,7 @@ protected:
void onLogin(IStrategy *strategy, IClient *client, rapidjson::Document &doc, rapidjson::Value &params) override;
void onLoginSuccess(IClient *client) override;
void onVerifyAlgorithm(const IClient *client, const Algorithm &algorithm, bool *ok) override;
void onVerifyAlgorithm(IStrategy *strategy, const IClient *client, const Algorithm &algorithm, bool *ok) override;
void onTimer(const Timer *timer) override;
@@ -84,7 +84,7 @@ private:
inline State state() const { return m_state; }
IClient *createProxy();
std::shared_ptr<IClient> createProxy();
void idle(double min, double max);
void setJob(IClient *client, const Job &job, const rapidjson::Value &params);
void setParams(rapidjson::Document &doc, rapidjson::Value &params);
@@ -98,12 +98,12 @@ private:
const uint64_t m_donateTime;
const uint64_t m_idleTime;
Controller *m_controller;
IClient *m_proxy = nullptr;
IStrategy *m_strategy = nullptr;
std::shared_ptr<IClient> m_proxy;
std::shared_ptr<IStrategy> m_strategy;
IStrategyListener *m_listener;
State m_state = STATE_NEW;
std::vector<Pool> m_pools;
Timer *m_timer = nullptr;
std::shared_ptr<Timer> m_timer;
uint64_t m_diff = 0;
uint64_t m_height = 0;
uint64_t m_now = 0;

View File

@@ -22,7 +22,7 @@
#define APP_ID "xmrig"
#define APP_NAME "XMRig"
#define APP_DESC "XMRig miner"
#define APP_VERSION "6.22.1"
#define APP_VERSION "6.22.1-dev"
#define APP_DOMAIN "xmrig.com"
#define APP_SITE "www.xmrig.com"
#define APP_COPYRIGHT "Copyright (C) 2016-2024 xmrig.com"