mirror of https://github.com/xmrig/xmrig.git synced 2026-01-07 18:02:42 -05:00

Compare commits


2 Commits

Author SHA1 Message Date
Artem Zuikov
07d81c6587 Merge ab5be0b773 into e32731b60b 2024-10-20 18:07:59 +03:00
4ertus2
ab5be0b773 replace new/delete with sp 2024-10-20 18:03:25 +03:00
103 changed files with 667 additions and 1387 deletions


@@ -1,14 +1,3 @@
-# v6.22.2
-- [#3569](https://github.com/xmrig/xmrig/pull/3569) Fixed corrupted API output in some rare conditions.
-- [#3571](https://github.com/xmrig/xmrig/pull/3571) Fixed number of threads on the new Intel Core Ultra CPUs.
-
-# v6.22.1
-- [#3531](https://github.com/xmrig/xmrig/pull/3531) Always reset nonce on RandomX dataset change.
-- [#3534](https://github.com/xmrig/xmrig/pull/3534) Fixed threads auto-config on Zen5.
-- [#3535](https://github.com/xmrig/xmrig/pull/3535) RandomX: tweaks for Zen5.
-- [#3539](https://github.com/xmrig/xmrig/pull/3539) Added Zen5 to `randomx_boost.sh`.
-- [#3540](https://github.com/xmrig/xmrig/pull/3540) Detect AMD engineering samples in `randomx_boost.sh`.
-
 # v6.22.0
 - [#2411](https://github.com/xmrig/xmrig/pull/2411) Added support for [Yada](https://yadacoin.io/) (`rx/yada` algorithm).
 - [#3492](https://github.com/xmrig/xmrig/pull/3492) Fixed `--background` option on Unix systems.


@@ -1,5 +1,5 @@
 Copyright © 2009 CNRS
-Copyright © 2009-2024 Inria.  All rights reserved.
+Copyright © 2009-2023 Inria.  All rights reserved.
 Copyright © 2009-2013 Université Bordeaux
 Copyright © 2009-2011 Cisco Systems, Inc.  All rights reserved.
 Copyright © 2020 Hewlett Packard Enterprise.  All rights reserved.
@@ -17,71 +17,6 @@ bug fixes (and other actions) for each version of hwloc since version
 0.9.
-Version 2.11.2
---------------
-* Add missing CPU info attrs on aarch64 on Linux.
-* Use ACPI CPPC on Linux to get better information about cpukinds,
-  at least on AMD CPUs.
-* Fix crash when manipulating cpukinds after topology
-  duplication, thanks to Hadrien Grasland for the report.
-* Fix missing input target checks in memattr functions,
-  thanks to Hadrien Grasland for the report.
-* Fix a memory leak when ignoring NUMA distances on FreeBSD.
-* Fix build failure on old Linux distributions without accessat().
-* Fix non-Windows importing of XML topologies and CPUID dumps exported
-  on Windows.
-* hwloc-calc --cpuset-output-format systemd-dbus-api now allows
-  to generate AllowedCPUs information for systemd slices.
-  See the hwloc-calc manpage for examples. Thanks to Pierre Neyron.
-* Some fixes in manpage EXAMPLES and split them into subsections.
-
-Version 2.11.1
---------------
-* Fix bash completions, thanks Tavis Rudd.
-
-Version 2.11.0
---------------
-* API
-  + Add HWLOC_MEMBIND_WEIGHTED_INTERLEAVE memory binding policy on
-    Linux 6.9+. Thanks to Honggyu Kim for the patch.
-    - weighted_interleave_membind is added to membind support bits.
-    - The "weighted" policy is added to the hwloc-bind tool.
-  + Add hwloc_obj_set_subtype(). Thanks to Hadrien Grasland for the report.
-* GPU support
-  + Don't hide the GPU NUMA node on NVIDIA Grace Hopper.
-  + Get Intel GPU OpenCL device locality.
-  + Add bandwidths between subdevices in the LevelZero XeLinkBandwidth
-    matrix.
-  + Fix PCI Gen4+ link speed of NVIDIA GPU obtained from NVML,
-    thanks to Akram Sbaih for the report.
-* Windows support
-  + Fix Windows support when UNICODE is enabled, several hwloc features
-    were missing, thanks to Martin for the report.
-  + Fix the enabling of CUDA in Windows CMake build,
-    Thanks to Moritz Kreutzer for the patch.
-  + Fix CUDA/OpenCL test source path in Windows CMake.
-* Tools
-  + Option --best-memattr may now return multiple nodes. Additional
-    configuration flags may be given to tweak its behavior.
-  + hwloc-info has a new --get-attr option to get a single attribute.
-  + hwloc-info now supports "levels", "support" and "topology"
-    special keywords for backward compatibility for hwloc 3.0.
-  + The --taskset command-line option is superseded by the new
-    --cpuset-output-format which also allows to export as list.
-  + hwloc-calc may now import bitmasks described as a list of bits
-    with the new "--cpuset-input-format list".
-* Misc
-  + The MemoryTiersNr info attribute in the root object now says how many
-    memory tiers were built. Thanks to Antoine Morvan for the report.
-  + Fix the management of infinite cpusets in the bitmap printf/sscanf
-    API as well as in command-line tools.
-  + Add section "Compiling software on top of hwloc's C API" in the
-    documentation with examples for GNU Make and CMake,
-    thanks to Florent Pruvost for the help.
 Version 2.10.0
 --------------
 * Heterogeneous Memory core improvements


@@ -418,8 +418,14 @@ return 0;
 }
 hwloc provides a pkg-config executable to obtain relevant compiler and linker
-flags. See Compiling software on top of hwloc's C API for details on building
-program on top of hwloc's API using GNU Make or CMake.
+flags. For example, it can be used thusly to compile applications that utilize
+the hwloc library (assuming GNU Make):
+
+CFLAGS += $(shell pkg-config --cflags hwloc)
+LDLIBS += $(shell pkg-config --libs hwloc)
+
+hwloc-hello: hwloc-hello.c
+	$(CC) hwloc-hello.c $(CFLAGS) -o hwloc-hello $(LDLIBS)
 On a machine 2 processor packages -- each package of which has two processing
 cores -- the output from running hwloc-hello could be something like the
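The hunk above swaps a cross-reference for a concrete GNU Make recipe; the same pkg-config lookup can be expressed for CMake, which the removed 2.11 text also mentioned. The following is only an illustrative sketch, not part of the diff: it assumes hwloc's `hwloc.pc` file is installed where pkg-config can find it, and the `hwloc-hello` target name simply mirrors the Make example.

```cmake
cmake_minimum_required(VERSION 3.10)
project(hwloc_hello C)

# Locate hwloc via pkg-config, just as the Make recipe does with
# $(shell pkg-config --cflags/--libs hwloc).
find_package(PkgConfig REQUIRED)
pkg_check_modules(HWLOC REQUIRED IMPORTED_TARGET hwloc)

add_executable(hwloc-hello hwloc-hello.c)
# The imported target carries the include directories and link flags
# reported by pkg-config for hwloc.
target_link_libraries(hwloc-hello PRIVATE PkgConfig::HWLOC)
```

Using `IMPORTED_TARGET` keeps the flags attached to one target instead of leaking into global `CFLAGS`, which is the usual CMake idiom for pkg-config dependencies.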


@@ -8,8 +8,8 @@
 # Please update HWLOC_VERSION* in contrib/windows/hwloc_config.h too.
 major=2
-minor=11
-release=2
+minor=10
+release=0
 # greek is used for alpha or beta release tags.  If it is non-empty,
 # it will be appended to the version number.  It does not have to be
@@ -22,7 +22,7 @@ greek=
 # The date when this release was created
-date="Sep 26, 2024"
+date="Dec 04, 2023"
 # If snapshot=1, then use the value from snapshot_version as the
 # entire hwloc version (i.e., ignore major, minor, release, and
@@ -41,6 +41,6 @@ snapshot_version=${major}.${minor}.${release}${greek}-git
 # 2. Version numbers are described in the Libtool current:revision:age
 # format.
-libhwloc_so_version=23:1:8
+libhwloc_so_version=22:0:7
 # Please also update the <TargetName> lines in contrib/windows/libhwloc.vcxproj

File diff suppressed because it is too large.


@@ -1,6 +1,6 @@
 /*
  * Copyright © 2009 CNRS
- * Copyright © 2009-2024 Inria.  All rights reserved.
+ * Copyright © 2009-2023 Inria.  All rights reserved.
  * Copyright © 2009-2012 Université Bordeaux
  * Copyright © 2009-2011 Cisco Systems, Inc.  All rights reserved.
  * See COPYING in top-level directory.
@@ -11,10 +11,10 @@
 #ifndef HWLOC_CONFIG_H
 #define HWLOC_CONFIG_H
-#define HWLOC_VERSION "2.11.2"
+#define HWLOC_VERSION "2.10.0"
 #define HWLOC_VERSION_MAJOR 2
-#define HWLOC_VERSION_MINOR 11
-#define HWLOC_VERSION_RELEASE 2
+#define HWLOC_VERSION_MINOR 10
+#define HWLOC_VERSION_RELEASE 0
 #define HWLOC_VERSION_GREEK ""
 #define __hwloc_restrict


@@ -1,5 +1,5 @@
 /*
- * Copyright © 2010-2024 Inria.  All rights reserved.
+ * Copyright © 2010-2023 Inria.  All rights reserved.
  * See COPYING in top-level directory.
  */
@@ -28,18 +28,18 @@ extern "C" {
 /** \brief Matrix of distances between a set of objects.
  *
- * The most common matrix contains latencies between NUMA nodes
+ * This matrix often contains latencies between NUMA nodes
  * (as reported in the System Locality Distance Information Table (SLIT)
  * in the ACPI specification), which may or may not be physically accurate.
  * It corresponds to the latency for accessing the memory of one node
  * from a core in another node.
- * The corresponding kind is ::HWLOC_DISTANCES_KIND_MEANS_LATENCY | ::HWLOC_DISTANCES_KIND_FROM_USER.
+ * The corresponding kind is ::HWLOC_DISTANCES_KIND_FROM_OS | ::HWLOC_DISTANCES_KIND_FROM_USER.
  * The name of this distances structure is "NUMALatency".
- * Others distance structures include and "XGMIBandwidth", "XGMIHops",
- * "XeLinkBandwidth" and "NVLinkBandwidth".
  *
  * The matrix may also contain bandwidths between random sets of objects,
  * possibly provided by the user, as specified in the \p kind attribute.
+ * Others common distance structures include and "XGMIBandwidth", "XGMIHops",
+ * "XeLinkBandwidth" and "NVLinkBandwidth".
  *
  * Pointers \p objs and \p values should not be replaced, reallocated, freed, etc.
  * However callers are allowed to modify \p kind as well as the contents
@@ -70,10 +70,11 @@ struct hwloc_distances_s {
  * The \p kind attribute of struct hwloc_distances_s is a OR'ed set
  * of kinds.
  *
- * Each distance matrix may have only one kind among HWLOC_DISTANCES_KIND_FROM_*
- * specifying where distance information comes from,
- * and one kind among HWLOC_DISTANCES_KIND_MEANS_* specifying
- * whether values are latencies or bandwidths.
+ * A kind of format HWLOC_DISTANCES_KIND_FROM_* specifies where the
+ * distance information comes from, if known.
+ *
+ * A kind of format HWLOC_DISTANCES_KIND_MEANS_* specifies whether
+ * values are latencies or bandwidths, if applicable.
  */
 enum hwloc_distances_kind_e {
   /** \brief These distances were obtained from the operating system or hardware.
@@ -356,8 +357,6 @@ typedef void * hwloc_distances_add_handle_t;
  * Otherwise, it will be copied internally and may later be freed by the caller.
  *
  * \p kind specifies the kind of distance as a OR'ed set of ::hwloc_distances_kind_e.
- * Only one kind of meaning and one kind of provenance may be given if appropriate
- * (e.g. ::HWLOC_DISTANCES_KIND_MEANS_BANDWIDTH and ::HWLOC_DISTANCES_KIND_FROM_USER).
 * Kind ::HWLOC_DISTANCES_KIND_HETEROGENEOUS_TYPES will be automatically set
 * according to objects having different types in hwloc_distances_add_values().
 *
@@ -404,8 +403,7 @@ HWLOC_DECLSPEC int hwloc_distances_add_values(hwloc_topology_t topology,
 /** \brief Flags for adding a new distances to a topology. */
 enum hwloc_distances_add_flag_e {
   /** \brief Try to group objects based on the newly provided distance information.
-   * Grouping is only performed when the distances structure contains latencies,
-   * and when all objects are of the same type.
+   * This is ignored for distances between objects of different types.
    * \hideinitializer
    */
   HWLOC_DISTANCES_ADD_FLAG_GROUP = (1UL<<0),


@@ -1,6 +1,6 @@
 /*
  * Copyright © 2009 CNRS
- * Copyright © 2009-2024 Inria.  All rights reserved.
+ * Copyright © 2009-2023 Inria.  All rights reserved.
  * Copyright © 2009-2012 Université Bordeaux
  * Copyright © 2009-2010 Cisco Systems, Inc.  All rights reserved.
  * See COPYING in top-level directory.
@@ -946,14 +946,6 @@ enum hwloc_distrib_flags_e {
  *
  * \return 0 on success, -1 on error.
  *
- * \note On hybrid CPUs (or asymmetric platforms), distribution may be suboptimal
- * since the number of cores or PUs inside packages or below caches may vary
- * (the top-down recursive partitioning ignores these numbers until reaching their levels).
- * Hence it is recommended to distribute only inside a single homogeneous domain.
- * For instance on a CPU with energy-efficient E-cores and high-performance P-cores,
- * one should distribute separately N tasks on E-cores and M tasks on P-cores
- * instead of trying to distribute directly M+N tasks on the entire CPUs.
- *
  * \note This function requires the \p roots objects to have a CPU set.
  */
 static __hwloc_inline int
@@ -968,7 +960,7 @@ hwloc_distrib(hwloc_topology_t topology,
   unsigned given, givenweight;
   hwloc_cpuset_t *cpusetp = set;
-  if (!n || (flags & ~HWLOC_DISTRIB_FLAG_REVERSE)) {
+  if (flags & ~HWLOC_DISTRIB_FLAG_REVERSE) {
     errno = EINVAL;
     return -1;
   }


@@ -1,5 +1,5 @@
 /*
- * Copyright © 2019-2024 Inria.  All rights reserved.
+ * Copyright © 2019-2023 Inria.  All rights reserved.
  * See COPYING in top-level directory.
  */
@@ -69,10 +69,7 @@ extern "C" {
  * @{
  */
-/** \brief Predefined memory attribute IDs.
- * See ::hwloc_memattr_id_t for the generic definition of IDs
- * for predefined or custom attributes.
- */
+/** \brief Memory node attributes. */
 enum hwloc_memattr_id_e {
   /** \brief
    * The \"Capacity\" is returned in bytes (local_memory attribute in objects).
@@ -81,8 +78,6 @@ enum hwloc_memattr_id_e {
    *
    * No initiator is involved when looking at this attribute.
    * The corresponding attribute flags are ::HWLOC_MEMATTR_FLAG_HIGHER_FIRST.
-   *
-   * Capacity values may not be modified using hwloc_memattr_set_value().
    * \hideinitializer
    */
   HWLOC_MEMATTR_ID_CAPACITY = 0,
@@ -98,8 +93,6 @@ enum hwloc_memattr_id_e {
    *
    * No initiator is involved when looking at this attribute.
    * The corresponding attribute flags are ::HWLOC_MEMATTR_FLAG_HIGHER_FIRST.
-   *
-   * Locality values may not be modified using hwloc_memattr_set_value().
    * \hideinitializer
    */
   HWLOC_MEMATTR_ID_LOCALITY = 1,
@@ -180,19 +173,11 @@ enum hwloc_memattr_id_e {
   /* TODO persistence? */
-  HWLOC_MEMATTR_ID_MAX /**< \private
-                        * Sentinel value for predefined attributes.
-                        * Dynamically registered custom attributes start here.
-                        */
+  HWLOC_MEMATTR_ID_MAX /**< \private Sentinel value */
 };
 /** \brief A memory attribute identifier.
- *
- * hwloc predefines some commonly-used attributes in ::hwloc_memattr_id_e.
- * One may then dynamically register custom ones with hwloc_memattr_register(),
- * they will be assigned IDs immediately after the predefined ones.
- * See \ref hwlocality_memattrs_manage for more information about
- * existing attribute IDs.
+ * May be either one of ::hwloc_memattr_id_e or a new id returned by hwloc_memattr_register().
  */
 typedef unsigned hwloc_memattr_id_t;
@@ -298,10 +283,6 @@ hwloc_get_local_numanode_objs(hwloc_topology_t topology,
  * (it does not have the flag ::HWLOC_MEMATTR_FLAG_NEED_INITIATOR),
  * location \p initiator is ignored and may be \c NULL.
  *
- * \p target_node cannot be \c NULL. If \p attribute is ::HWLOC_MEMATTR_ID_CAPACITY,
- * \p target_node must be a NUMA node. If it is ::HWLOC_MEMATTR_ID_LOCALITY,
- * \p target_node must have a CPU set.
- *
  * \p flags must be \c 0 for now.
  *
  * \return 0 on success.
@@ -371,8 +352,6 @@ hwloc_memattr_get_best_target(hwloc_topology_t topology,
  * The returned initiator should not be modified or freed,
  * it belongs to the topology.
  *
- * \p target_node cannot be \c NULL.
- *
 * \p flags must be \c 0 for now.
 *
 * \return 0 on success.
@@ -383,10 +362,100 @@
 HWLOC_DECLSPEC int
 hwloc_memattr_get_best_initiator(hwloc_topology_t topology,
                                  hwloc_memattr_id_t attribute,
-                                 hwloc_obj_t target_node,
+                                 hwloc_obj_t target,
                                  unsigned long flags,
                                  struct hwloc_location *best_initiator, hwloc_uint64_t *value);
+
+/** @} */
+
+/** \defgroup hwlocality_memattrs_manage Managing memory attributes
+ * @{
+ */
+
+/** \brief Return the name of a memory attribute.
+ *
+ * \return 0 on success.
+ * \return -1 with errno set to \c EINVAL if the attribute does not exist.
+ */
+HWLOC_DECLSPEC int
+hwloc_memattr_get_name(hwloc_topology_t topology,
+                       hwloc_memattr_id_t attribute,
+                       const char **name);
+
+/** \brief Return the flags of the given attribute.
+ *
+ * Flags are a OR'ed set of ::hwloc_memattr_flag_e.
+ *
+ * \return 0 on success.
+ * \return -1 with errno set to \c EINVAL if the attribute does not exist.
+ */
+HWLOC_DECLSPEC int
+hwloc_memattr_get_flags(hwloc_topology_t topology,
+                        hwloc_memattr_id_t attribute,
+                        unsigned long *flags);
+
+/** \brief Memory attribute flags.
+ * Given to hwloc_memattr_register() and returned by hwloc_memattr_get_flags().
+ */
+enum hwloc_memattr_flag_e {
+  /** \brief The best nodes for this memory attribute are those with the higher values.
+   * For instance Bandwidth.
+   */
+  HWLOC_MEMATTR_FLAG_HIGHER_FIRST = (1UL<<0),
+  /** \brief The best nodes for this memory attribute are those with the lower values.
+   * For instance Latency.
+   */
+  HWLOC_MEMATTR_FLAG_LOWER_FIRST = (1UL<<1),
+  /** \brief The value returned for this memory attribute depends on the given initiator.
+   * For instance Bandwidth and Latency, but not Capacity.
+   */
+  HWLOC_MEMATTR_FLAG_NEED_INITIATOR = (1UL<<2)
+};
+
+/** \brief Register a new memory attribute.
+ *
+ * Add a specific memory attribute that is not defined in ::hwloc_memattr_id_e.
+ * Flags are a OR'ed set of ::hwloc_memattr_flag_e. It must contain at least
+ * one of ::HWLOC_MEMATTR_FLAG_HIGHER_FIRST or ::HWLOC_MEMATTR_FLAG_LOWER_FIRST.
+ *
+ * \return 0 on success.
+ * \return -1 with errno set to \c EBUSY if another attribute already uses this name.
+ */
+HWLOC_DECLSPEC int
+hwloc_memattr_register(hwloc_topology_t topology,
+                       const char *name,
+                       unsigned long flags,
+                       hwloc_memattr_id_t *id);
+
+/** \brief Set an attribute value for a specific target NUMA node.
+ *
+ * If the attribute does not relate to a specific initiator
+ * (it does not have the flag ::HWLOC_MEMATTR_FLAG_NEED_INITIATOR),
+ * location \p initiator is ignored and may be \c NULL.
+ *
+ * The initiator will be copied into the topology,
+ * the caller should free anything allocated to store the initiator,
+ * for instance the cpuset.
+ *
+ * \p flags must be \c 0 for now.
+ *
+ * \note The initiator \p initiator should be of type ::HWLOC_LOCATION_TYPE_CPUSET
+ * when referring to accesses performed by CPU cores.
+ * ::HWLOC_LOCATION_TYPE_OBJECT is currently unused internally by hwloc,
+ * but users may for instance use it to provide custom information about
+ * host memory accesses performed by GPUs.
+ *
+ * \return 0 on success or -1 on error.
+ */
+HWLOC_DECLSPEC int
+hwloc_memattr_set_value(hwloc_topology_t topology,
+                        hwloc_memattr_id_t attribute,
+                        hwloc_obj_t target_node,
+                        struct hwloc_location *initiator,
+                        unsigned long flags,
+                        hwloc_uint64_t value);
+
 /** \brief Return the target NUMA nodes that have some values for a given attribute.
  *
  * Return targets for the given attribute in the \p targets array
@@ -450,8 +519,6 @@ hwloc_memattr_get_targets(hwloc_topology_t topology,
  * The returned initiators should not be modified or freed,
  * they belong to the topology.
  *
- * \p target_node cannot be \c NULL.
- *
 * \p flags must be \c 0 for now.
 *
 * If the attribute does not relate to a specific initiator
@@ -471,131 +538,6 @@ hwloc_memattr_get_initiators(hwloc_topology_t topology,
                              hwloc_obj_t target_node,
                              unsigned long flags,
                              unsigned *nr, struct hwloc_location *initiators, hwloc_uint64_t *values);
-
-/** @} */
-
-/** \defgroup hwlocality_memattrs_manage Managing memory attributes
- *
- * Memory attribues are identified by an ID (::hwloc_memattr_id_t)
- * and a name. hwloc_memattr_get_name() and hwloc_memattr_get_by_name()
- * convert between them (or return error if the attribute does not exist).
- *
- * The set of valid ::hwloc_memattr_id_t is a contigous set starting at \c 0.
- * It first contains predefined attributes, as listed
- * in ::hwloc_memattr_id_e (from \c 0 to \c HWLOC_MEMATTR_ID_MAX-1).
- * Then custom attributes may be dynamically registered with
- * hwloc_memattr_register(). They will get the following IDs
- * (\c HWLOC_MEMATTR_ID_MAX for the first one, etc.).
- *
- * To iterate over all valid attributes
- * (either predefined or dynamically registered custom ones),
- * one may iterate over IDs starting from \c 0 until hwloc_memattr_get_name()
- * or hwloc_memattr_get_flags() returns an error.
- *
- * The values for an existing attribute or for custom dynamically registered ones
- * may be set or modified with hwloc_memattr_set_value().
- *
- * @{
- */
-
-/** \brief Return the name of a memory attribute.
- *
- * The output pointer \p name cannot be \c NULL.
- *
- * \return 0 on success.
- * \return -1 with errno set to \c EINVAL if the attribute does not exist.
- */
-HWLOC_DECLSPEC int
-hwloc_memattr_get_name(hwloc_topology_t topology,
-                       hwloc_memattr_id_t attribute,
-                       const char **name);
-
-/** \brief Return the flags of the given attribute.
- *
- * Flags are a OR'ed set of ::hwloc_memattr_flag_e.
- *
- * The output pointer \p flags cannot be \c NULL.
- *
- * \return 0 on success.
- * \return -1 with errno set to \c EINVAL if the attribute does not exist.
- */
-HWLOC_DECLSPEC int
-hwloc_memattr_get_flags(hwloc_topology_t topology,
-                        hwloc_memattr_id_t attribute,
-                        unsigned long *flags);
-
-/** \brief Memory attribute flags.
- * Given to hwloc_memattr_register() and returned by hwloc_memattr_get_flags().
- */
-enum hwloc_memattr_flag_e {
-  /** \brief The best nodes for this memory attribute are those with the higher values.
-   * For instance Bandwidth.
-   */
-  HWLOC_MEMATTR_FLAG_HIGHER_FIRST = (1UL<<0),
-  /** \brief The best nodes for this memory attribute are those with the lower values.
-   * For instance Latency.
-   */
-  HWLOC_MEMATTR_FLAG_LOWER_FIRST = (1UL<<1),
-  /** \brief The value returned for this memory attribute depends on the given initiator.
-   * For instance Bandwidth and Latency, but not Capacity.
-   */
-  HWLOC_MEMATTR_FLAG_NEED_INITIATOR = (1UL<<2)
-};
-
-/** \brief Register a new memory attribute.
- *
- * Add a new custom memory attribute.
- * Flags are a OR'ed set of ::hwloc_memattr_flag_e. It must contain one of
- * ::HWLOC_MEMATTR_FLAG_HIGHER_FIRST or ::HWLOC_MEMATTR_FLAG_LOWER_FIRST but not both.
- *
- * The new attribute \p id is immediately after the last existing attribute ID
- * (which is either the ID of the last registered attribute if any,
- * or the ID of the last predefined attribute in ::hwloc_memattr_id_e).
- *
- * \return 0 on success.
- * \return -1 with errno set to \c EINVAL if an invalid set of flags is given.
- * \return -1 with errno set to \c EBUSY if another attribute already uses this name.
- */
-HWLOC_DECLSPEC int
-hwloc_memattr_register(hwloc_topology_t topology,
-                       const char *name,
-                       unsigned long flags,
-                       hwloc_memattr_id_t *id);
-
-/** \brief Set an attribute value for a specific target NUMA node.
- *
- * If the attribute does not relate to a specific initiator
- * (it does not have the flag ::HWLOC_MEMATTR_FLAG_NEED_INITIATOR),
- * location \p initiator is ignored and may be \c NULL.
- *
- * The initiator will be copied into the topology,
- * the caller should free anything allocated to store the initiator,
- * for instance the cpuset.
- *
- * \p target_node cannot be \c NULL.
- *
- * \p attribute cannot be ::HWLOC_MEMATTR_FLAG_ID_CAPACITY or
- * ::HWLOC_MEMATTR_FLAG_ID_LOCALITY.
- *
- * \p flags must be \c 0 for now.
- *
- * \note The initiator \p initiator should be of type ::HWLOC_LOCATION_TYPE_CPUSET
- * when referring to accesses performed by CPU cores.
- * ::HWLOC_LOCATION_TYPE_OBJECT is currently unused internally by hwloc,
- * but users may for instance use it to provide custom information about
- * host memory accesses performed by GPUs.
- *
- * \return 0 on success or -1 on error.
- */
-HWLOC_DECLSPEC int
-hwloc_memattr_set_value(hwloc_topology_t topology,
-                        hwloc_memattr_id_t attribute,
-                        hwloc_obj_t target_node,
-                        struct hwloc_location *initiator,
-                        unsigned long flags,
-                        hwloc_uint64_t value);
 /** @} */
 #ifdef __cplusplus


@@ -41,15 +41,6 @@ extern "C" {
  */
 /* Copyright (c) 2008-2018 The Khronos Group Inc. */
-/* needs "cl_khr_pci_bus_info" device extension, but not strictly required for clGetDeviceInfo() */
-typedef struct {
-  cl_uint pci_domain;
-  cl_uint pci_bus;
-  cl_uint pci_device;
-  cl_uint pci_function;
-} hwloc_cl_device_pci_bus_info_khr;
-#define HWLOC_CL_DEVICE_PCI_BUS_INFO_KHR 0x410F
-
 /* needs "cl_amd_device_attribute_query" device extension, but not strictly required for clGetDeviceInfo() */
 #define HWLOC_CL_DEVICE_TOPOLOGY_AMD 0x4037
 typedef union {
@@ -87,19 +78,9 @@ hwloc_opencl_get_device_pci_busid(cl_device_id device,
                                   unsigned *domain, unsigned *bus, unsigned *dev, unsigned *func)
 {
   hwloc_cl_device_topology_amd amdtopo;
-  hwloc_cl_device_pci_bus_info_khr khrbusinfo;
   cl_uint nvbus, nvslot, nvdomain;
   cl_int clret;
-
-  clret = clGetDeviceInfo(device, HWLOC_CL_DEVICE_PCI_BUS_INFO_KHR, sizeof(khrbusinfo), &khrbusinfo, NULL);
-  if (CL_SUCCESS == clret) {
-    *domain = (unsigned) khrbusinfo.pci_domain;
-    *bus = (unsigned) khrbusinfo.pci_bus;
-    *dev = (unsigned) khrbusinfo.pci_device;
-    *func = (unsigned) khrbusinfo.pci_function;
-    return 0;
-  }
-
   clret = clGetDeviceInfo(device, HWLOC_CL_DEVICE_TOPOLOGY_AMD, sizeof(amdtopo), &amdtopo, NULL);
   if (CL_SUCCESS == clret
       && HWLOC_CL_DEVICE_TOPOLOGY_TYPE_PCIE_AMD == amdtopo.raw.type) {


@@ -1,5 +1,5 @@
 /*
- * Copyright © 2013-2024 Inria.  All rights reserved.
+ * Copyright © 2013-2022 Inria.  All rights reserved.
  * Copyright © 2016 Cisco Systems, Inc.  All rights reserved.
  * See COPYING in top-level directory.
  */
@@ -645,19 +645,6 @@ HWLOC_DECLSPEC struct hwloc_obj * hwloc_pci_find_parent_by_busid(struct hwloc_to
 */
 HWLOC_DECLSPEC struct hwloc_obj * hwloc_pci_find_by_busid(struct hwloc_topology *topology, unsigned domain, unsigned bus, unsigned dev, unsigned func);
-
-/** @} */
-
-/** \defgroup hwlocality_components_distances Components and Plugins: distances
- *
- * \note These structures and functions may change when ::HWLOC_COMPONENT_ABI is modified.
- *
- * @{
- */
-
 /** \brief Handle to a new distances structure during its addition to the topology. */
 typedef void * hwloc_backend_distances_add_handle_t;


@@ -1,6 +1,6 @@
/* /*
* Copyright © 2009-2011 Cisco Systems, Inc. All rights reserved. * Copyright © 2009-2011 Cisco Systems, Inc. All rights reserved.
* Copyright © 2010-2024 Inria. All rights reserved. * Copyright © 2010-2022 Inria. All rights reserved.
* See COPYING in top-level directory. * See COPYING in top-level directory.
*/ */
@@ -210,7 +210,6 @@ extern "C" {
#define hwloc_obj_get_info_by_name HWLOC_NAME(obj_get_info_by_name) #define hwloc_obj_get_info_by_name HWLOC_NAME(obj_get_info_by_name)
#define hwloc_obj_add_info HWLOC_NAME(obj_add_info) #define hwloc_obj_add_info HWLOC_NAME(obj_add_info)
#define hwloc_obj_set_subtype HWLOC_NAME(obj_set_subtype)
#define HWLOC_CPUBIND_PROCESS HWLOC_NAME_CAPS(CPUBIND_PROCESS) #define HWLOC_CPUBIND_PROCESS HWLOC_NAME_CAPS(CPUBIND_PROCESS)
#define HWLOC_CPUBIND_THREAD HWLOC_NAME_CAPS(CPUBIND_THREAD) #define HWLOC_CPUBIND_THREAD HWLOC_NAME_CAPS(CPUBIND_THREAD)
@@ -233,7 +232,6 @@ extern "C" {
#define HWLOC_MEMBIND_FIRSTTOUCH HWLOC_NAME_CAPS(MEMBIND_FIRSTTOUCH) #define HWLOC_MEMBIND_FIRSTTOUCH HWLOC_NAME_CAPS(MEMBIND_FIRSTTOUCH)
#define HWLOC_MEMBIND_BIND HWLOC_NAME_CAPS(MEMBIND_BIND) #define HWLOC_MEMBIND_BIND HWLOC_NAME_CAPS(MEMBIND_BIND)
#define HWLOC_MEMBIND_INTERLEAVE HWLOC_NAME_CAPS(MEMBIND_INTERLEAVE) #define HWLOC_MEMBIND_INTERLEAVE HWLOC_NAME_CAPS(MEMBIND_INTERLEAVE)
#define HWLOC_MEMBIND_WEIGHTED_INTERLEAVE HWLOC_NAME_CAPS(MEMBIND_WEIGHTED_INTERLEAVE)
#define HWLOC_MEMBIND_NEXTTOUCH HWLOC_NAME_CAPS(MEMBIND_NEXTTOUCH) #define HWLOC_MEMBIND_NEXTTOUCH HWLOC_NAME_CAPS(MEMBIND_NEXTTOUCH)
#define HWLOC_MEMBIND_MIXED HWLOC_NAME_CAPS(MEMBIND_MIXED) #define HWLOC_MEMBIND_MIXED HWLOC_NAME_CAPS(MEMBIND_MIXED)
@@ -562,7 +560,6 @@ extern "C" {
 /* opencl.h */
-#define hwloc_cl_device_pci_bus_info_khr HWLOC_NAME(cl_device_pci_bus_info_khr)
 #define hwloc_cl_device_topology_amd HWLOC_NAME(cl_device_topology_amd)
 #define hwloc_opencl_get_device_pci_busid HWLOC_NAME(opencl_get_device_pci_ids)
 #define hwloc_opencl_get_device_cpuset HWLOC_NAME(opencl_get_device_cpuset)
@@ -718,8 +715,6 @@ extern "C" {
 #define hwloc__obj_type_is_dcache HWLOC_NAME(_obj_type_is_dcache)
 #define hwloc__obj_type_is_icache HWLOC_NAME(_obj_type_is_icache)
-#define hwloc__pci_link_speed HWLOC_NAME(_pci_link_speed)
 /* private/cpuid-x86.h */
 #define hwloc_have_x86_cpuid HWLOC_NAME(have_x86_cpuid)


@@ -1,6 +1,6 @@
 /*
  * Copyright © 2009, 2011, 2012 CNRS.  All rights reserved.
- * Copyright © 2009-2020 Inria.  All rights reserved.
+ * Copyright © 2009-2021 Inria.  All rights reserved.
  * Copyright © 2009, 2011, 2012, 2015 Université Bordeaux.  All rights reserved.
  * Copyright © 2009-2020 Cisco Systems, Inc.  All rights reserved.
  * $COPYRIGHT$
@@ -17,10 +17,6 @@
 #define HWLOC_HAVE_MSVC_CPUIDEX 1
-/* #undef HAVE_MKSTEMP */
-#define HWLOC_HAVE_X86_CPUID 1
 /* Define to 1 if the system has the type `CACHE_DESCRIPTOR'. */
 #define HAVE_CACHE_DESCRIPTOR 0
@@ -132,7 +128,8 @@
 #define HAVE_DECL__SC_PAGE_SIZE 0
 /* Define to 1 if you have the <dirent.h> header file. */
-/* #undef HAVE_DIRENT_H */
+/* #define HAVE_DIRENT_H 1 */
+#undef HAVE_DIRENT_H
 /* Define to 1 if you have the <dlfcn.h> header file. */
 /* #undef HAVE_DLFCN_H */
@@ -285,7 +282,7 @@
 #define HAVE_STRING_H 1
 /* Define to 1 if you have the `strncasecmp' function. */
-/* #undef HAVE_STRNCASECMP */
+#define HAVE_STRNCASECMP 1
 /* Define to '1' if sysctl is present and usable */
 /* #undef HAVE_SYSCTL */
@@ -326,7 +323,8 @@
 /* #undef HAVE_UNAME */
 /* Define to 1 if you have the <unistd.h> header file. */
-/* #undef HAVE_UNISTD_H */
+/* #define HAVE_UNISTD_H 1 */
+#undef HAVE_UNISTD_H
 /* Define to 1 if you have the `uselocale' function. */
 /* #undef HAVE_USELOCALE */
@@ -661,7 +659,7 @@
 #define hwloc_pid_t HANDLE
 /* Define this to either strncasecmp or strncmp */
-/* #undef hwloc_strncasecmp */
+#define hwloc_strncasecmp strncasecmp
 /* Define this to the thread ID type */
 #define hwloc_thread_t HANDLE


@@ -11,22 +11,6 @@
 #ifndef HWLOC_PRIVATE_CPUID_X86_H
 #define HWLOC_PRIVATE_CPUID_X86_H
-/* A macro for annotating memory as uninitialized when building with MSAN
- * (and otherwise having no effect). See below for why this is used with
- * our custom assembly.
- */
-#ifdef __has_feature
-#define HWLOC_HAS_FEATURE(name) __has_feature(name)
-#else
-#define HWLOC_HAS_FEATURE(name) 0
-#endif
-#if HWLOC_HAS_FEATURE(memory_sanitizer) || defined(MEMORY_SANITIZER)
-#include <sanitizer/msan_interface.h>
-#define HWLOC_ANNOTATE_MEMORY_IS_INITIALIZED(ptr, len) __msan_unpoison(ptr, len)
-#else
-#define HWLOC_ANNOTATE_MEMORY_IS_INITIALIZED(ptr, len)
-#endif
 #if (defined HWLOC_X86_32_ARCH) && (!defined HWLOC_HAVE_MSVC_CPUIDEX)
 static __hwloc_inline int hwloc_have_x86_cpuid(void)
 {
@@ -87,18 +71,12 @@ static __hwloc_inline void hwloc_x86_cpuid(unsigned *eax, unsigned *ebx, unsigne
 "movl %k2,%1\n\t"
 : "+a" (*eax), "=m" (*ebx), "=&r"(sav_rbx),
 "+c" (*ecx), "=&d" (*edx));
-/* MSAN does not recognize the effect of the above assembly on the memory operand
- * (`"=m"(*ebx)`). This may get improved in MSAN at some point in the future, e.g.
- * see https://github.com/llvm/llvm-project/pull/77393. */
-HWLOC_ANNOTATE_MEMORY_IS_INITIALIZED(ebx, sizeof *ebx);
 #elif defined(HWLOC_X86_32_ARCH)
 __asm__(
 "mov %%ebx,%1\n\t"
 "cpuid\n\t"
 "xchg %%ebx,%1\n\t"
 : "+a" (*eax), "=&SD" (*ebx), "+c" (*ecx), "=&d" (*edx));
-/* See above. */
-HWLOC_ANNOTATE_MEMORY_IS_INITIALIZED(ebx, sizeof *ebx);
 #else
 #error unknown architecture
 #endif


@@ -1,6 +1,6 @@
 /*
  * Copyright © 2009 CNRS
- * Copyright © 2009-2024 Inria.  All rights reserved.
+ * Copyright © 2009-2019 Inria.  All rights reserved.
  * Copyright © 2009-2012 Université Bordeaux
  * Copyright © 2011 Cisco Systems, Inc.  All rights reserved.
  * See COPYING in top-level directory.
@@ -573,35 +573,4 @@ typedef SSIZE_T ssize_t;
 # endif
 #endif
-static __inline float
-hwloc__pci_link_speed(unsigned generation, unsigned lanes)
-{
-  float lanespeed;
-  /*
-   * These are single-direction bandwidths only.
-   *
-   * Gen1 used NRZ with 8/10 encoding.
-   * PCIe Gen1 = 2.5GT/s signal-rate per lane x 8/10 = 0.25GB/s data-rate per lane
-   * PCIe Gen2 = 5 GT/s signal-rate per lane x 8/10 = 0.5 GB/s data-rate per lane
-   * Gen3 switched to NRZ with 128/130 encoding.
-   * PCIe Gen3 = 8 GT/s signal-rate per lane x 128/130 = 1 GB/s data-rate per lane
-   * PCIe Gen4 = 16 GT/s signal-rate per lane x 128/130 = 2 GB/s data-rate per lane
-   * PCIe Gen5 = 32 GT/s signal-rate per lane x 128/130 = 4 GB/s data-rate per lane
-   * Gen6 switched to PAM with 242/256 FLIT (242B payload protected by 8B CRC + 6B FEC).
-   * PCIe Gen6 = 64 GT/s signal-rate per lane x 242/256 = 8 GB/s data-rate per lane
-   * PCIe Gen7 = 128GT/s signal-rate per lane x 242/256 = 16 GB/s data-rate per lane
-   */
-  /* lanespeed in Gbit/s */
-  if (generation <= 2)
-    lanespeed = 2.5f * generation * 0.8f;
-  else if (generation <= 5)
-    lanespeed = 8.0f * (1<<(generation-3)) * 128/130;
-  else
-    lanespeed = 8.0f * (1<<(generation-3)) * 242/256; /* assume Gen8 will be 256 GT/s and so on */
-  /* linkspeed in GB/s */
-  return lanespeed * lanes / 8;
-}
 #endif /* HWLOC_PRIVATE_MISC_H */


@@ -1,6 +1,6 @@
 /*
  * Copyright © 2009 CNRS
- * Copyright © 2009-2024 Inria.  All rights reserved.
+ * Copyright © 2009-2020 Inria.  All rights reserved.
  * Copyright © 2009-2010, 2012 Université Bordeaux
  * Copyright © 2011-2015 Cisco Systems, Inc.  All rights reserved.
  * See COPYING in top-level directory.
@@ -287,7 +287,6 @@ static __hwloc_inline int hwloc__check_membind_policy(hwloc_membind_policy_t pol
 || policy == HWLOC_MEMBIND_FIRSTTOUCH
 || policy == HWLOC_MEMBIND_BIND
 || policy == HWLOC_MEMBIND_INTERLEAVE
-|| policy == HWLOC_MEMBIND_WEIGHTED_INTERLEAVE
 || policy == HWLOC_MEMBIND_NEXTTOUCH)
 return 0;
 return -1;


@@ -1,6 +1,6 @@
 /*
  * Copyright © 2009 CNRS
- * Copyright © 2009-2024 Inria.  All rights reserved.
+ * Copyright © 2009-2020 Inria.  All rights reserved.
  * Copyright © 2009-2011 Université Bordeaux
  * Copyright © 2009-2011 Cisco Systems, Inc.  All rights reserved.
  * See COPYING in top-level directory.
@@ -245,7 +245,6 @@ int hwloc_bitmap_copy(struct hwloc_bitmap_s * dst, const struct hwloc_bitmap_s *
 /* Strings always use 32bit groups */
 #define HWLOC_PRIxSUBBITMAP "%08lx"
 #define HWLOC_BITMAP_SUBSTRING_SIZE 32
-#define HWLOC_BITMAP_SUBSTRING_FULL_VALUE 0xFFFFFFFFUL
 #define HWLOC_BITMAP_SUBSTRING_LENGTH (HWLOC_BITMAP_SUBSTRING_SIZE/4)
 #define HWLOC_BITMAP_STRING_PER_LONG (HWLOC_BITS_PER_LONG/HWLOC_BITMAP_SUBSTRING_SIZE)
@@ -262,7 +261,6 @@ int hwloc_bitmap_snprintf(char * __hwloc_restrict buf, size_t buflen, const stru
 const unsigned long accum_mask = ~0UL;
 #else /* HWLOC_BITS_PER_LONG != HWLOC_BITMAP_SUBSTRING_SIZE */
 const unsigned long accum_mask = ((1UL << HWLOC_BITMAP_SUBSTRING_SIZE) - 1) << (HWLOC_BITS_PER_LONG - HWLOC_BITMAP_SUBSTRING_SIZE);
-int merge_with_infinite_prefix = 0;
 #endif /* HWLOC_BITS_PER_LONG != HWLOC_BITMAP_SUBSTRING_SIZE */
 HWLOC__BITMAP_CHECK(set);
@@ -281,9 +279,6 @@ int hwloc_bitmap_snprintf(char * __hwloc_restrict buf, size_t buflen, const stru
 res = size>0 ? (int)size - 1 : 0;
 tmp += res;
 size -= res;
-#if HWLOC_BITS_PER_LONG > HWLOC_BITMAP_SUBSTRING_SIZE
-merge_with_infinite_prefix = 1;
-#endif
 }
 i=(int) set->ulongs_count-1;
@@ -299,24 +294,16 @@ int hwloc_bitmap_snprintf(char * __hwloc_restrict buf, size_t buflen, const stru
 }
 while (i>=0 || accumed) {
-unsigned long value;
 /* Refill accumulator */
 if (!accumed) {
 accum = set->ulongs[i--];
 accumed = HWLOC_BITS_PER_LONG;
 }
-value = (accum & accum_mask) >> (HWLOC_BITS_PER_LONG - HWLOC_BITMAP_SUBSTRING_SIZE);
-#if HWLOC_BITS_PER_LONG > HWLOC_BITMAP_SUBSTRING_SIZE
-if (merge_with_infinite_prefix && value == HWLOC_BITMAP_SUBSTRING_FULL_VALUE) {
-/* first full subbitmap merged with infinite prefix */
-res = 0;
-} else
-#endif
-if (value) {
+if (accum & accum_mask) {
 /* print the whole subset if not empty */
-res = hwloc_snprintf(tmp, size, needcomma ? ",0x" HWLOC_PRIxSUBBITMAP : "0x" HWLOC_PRIxSUBBITMAP, value);
+res = hwloc_snprintf(tmp, size, needcomma ? ",0x" HWLOC_PRIxSUBBITMAP : "0x" HWLOC_PRIxSUBBITMAP,
+(accum & accum_mask) >> (HWLOC_BITS_PER_LONG - HWLOC_BITMAP_SUBSTRING_SIZE));
 needcomma = 1;
 } else if (i == -1 && accumed == HWLOC_BITMAP_SUBSTRING_SIZE) {
 /* print a single 0 to mark the last subset */
@@ -336,7 +323,6 @@ int hwloc_bitmap_snprintf(char * __hwloc_restrict buf, size_t buflen, const stru
 #else
 accum <<= HWLOC_BITMAP_SUBSTRING_SIZE;
 accumed -= HWLOC_BITMAP_SUBSTRING_SIZE;
-merge_with_infinite_prefix = 0;
 #endif
 if (res >= size)
@@ -376,8 +362,7 @@ int hwloc_bitmap_sscanf(struct hwloc_bitmap_s *set, const char * __hwloc_restric
 {
 const char * current = string;
 unsigned long accum = 0;
-int count = 0;
-int ulongcount;
+int count=0;
 int infinite = 0;
 /* count how many substrings there are */
@@ -398,20 +383,9 @@ int hwloc_bitmap_sscanf(struct hwloc_bitmap_s *set, const char * __hwloc_restric
 count--;
 }
-ulongcount = (count + HWLOC_BITMAP_STRING_PER_LONG - 1) / HWLOC_BITMAP_STRING_PER_LONG;
-if (hwloc_bitmap_reset_by_ulongs(set, ulongcount) < 0)
+if (hwloc_bitmap_reset_by_ulongs(set, (count + HWLOC_BITMAP_STRING_PER_LONG - 1) / HWLOC_BITMAP_STRING_PER_LONG) < 0)
 return -1;
-set->infinite = 0; /* will be updated later */
-#if HWLOC_BITS_PER_LONG != HWLOC_BITMAP_SUBSTRING_SIZE
-if (infinite && (count % HWLOC_BITMAP_STRING_PER_LONG) != 0) {
-/* accumulate substrings of the first ulong that are hidden in the infinite prefix */
-int i;
-for(i = (count % HWLOC_BITMAP_STRING_PER_LONG); i < HWLOC_BITMAP_STRING_PER_LONG; i++)
-accum |= (HWLOC_BITMAP_SUBSTRING_FULL_VALUE << (i*HWLOC_BITMAP_SUBSTRING_SIZE));
-}
-#endif
+set->infinite = 0;
 while (*current != '\0') {
 unsigned long val;
@@ -570,9 +544,6 @@ int hwloc_bitmap_taskset_snprintf(char * __hwloc_restrict buf, size_t buflen, co
 ssize_t size = buflen;
 char *tmp = buf;
 int res, ret = 0;
-#if HWLOC_BITS_PER_LONG == 64
-int merge_with_infinite_prefix = 0;
-#endif
 int started = 0;
 int i;
@@ -592,9 +563,6 @@ int hwloc_bitmap_taskset_snprintf(char * __hwloc_restrict buf, size_t buflen, co
 res = size>0 ? (int)size - 1 : 0;
 tmp += res;
 size -= res;
-#if HWLOC_BITS_PER_LONG == 64
-merge_with_infinite_prefix = 1;
-#endif
 }
 i=set->ulongs_count-1;
@@ -614,11 +582,7 @@ int hwloc_bitmap_taskset_snprintf(char * __hwloc_restrict buf, size_t buflen, co
 if (started) {
 /* print the whole subset */
 #if HWLOC_BITS_PER_LONG == 64
-if (merge_with_infinite_prefix && (val & 0xffffffff00000000UL) == 0xffffffff00000000UL) {
-res = hwloc_snprintf(tmp, size, "%08lx", val & 0xffffffffUL);
-} else {
-res = hwloc_snprintf(tmp, size, "%016lx", val);
-}
+res = hwloc_snprintf(tmp, size, "%016lx", val);
 #else
 res = hwloc_snprintf(tmp, size, "%08lx", val);
 #endif
@@ -635,9 +599,6 @@ int hwloc_bitmap_taskset_snprintf(char * __hwloc_restrict buf, size_t buflen, co
 res = size>0 ? (int)size - 1 : 0;
 tmp += res;
 size -= res;
-#if HWLOC_BITS_PER_LONG == 64
-merge_with_infinite_prefix = 0;
-#endif
 }
 /* if didn't display anything, display 0x0 */
@@ -718,10 +679,6 @@ int hwloc_bitmap_taskset_sscanf(struct hwloc_bitmap_s *set, const char * __hwloc
 goto failed;
 set->ulongs[count-1] = val;
-if (infinite && tmpchars != HWLOC_BITS_PER_LONG/4) {
-/* infinite prefix with partial substring, fill remaining bits */
-set->ulongs[count-1] |= (~0ULL)<<(4*tmpchars);
-}
 current += tmpchars;
 chars -= tmpchars;


@@ -1,5 +1,5 @@
 /*
- * Copyright © 2020-2024 Inria.  All rights reserved.
+ * Copyright © 2020-2022 Inria.  All rights reserved.
  * See COPYING in top-level directory.
  */
@@ -50,7 +50,6 @@ hwloc_internal_cpukinds_dup(hwloc_topology_t new, hwloc_topology_t old)
 return -1;
 new->cpukinds = kinds;
 new->nr_cpukinds = old->nr_cpukinds;
-new->nr_cpukinds_allocated = old->nr_cpukinds;
 memcpy(kinds, old->cpukinds, old->nr_cpukinds * sizeof(*kinds));
 for(i=0;i<old->nr_cpukinds; i++) {


@@ -1,5 +1,5 @@
 /*
- * Copyright © 2010-2024 Inria.  All rights reserved.
+ * Copyright © 2010-2022 Inria.  All rights reserved.
  * Copyright © 2011-2012 Université Bordeaux
  * Copyright © 2011 Cisco Systems, Inc.  All rights reserved.
  * See COPYING in top-level directory.
@@ -624,8 +624,8 @@ void * hwloc_distances_add_create(hwloc_topology_t topology,
 return NULL;
 }
 if ((kind & ~HWLOC_DISTANCES_KIND_ALL)
-|| hwloc_weight_long(kind & HWLOC_DISTANCES_KIND_FROM_ALL) > 1
-|| hwloc_weight_long(kind & HWLOC_DISTANCES_KIND_MEANS_ALL) > 1) {
+|| hwloc_weight_long(kind & HWLOC_DISTANCES_KIND_FROM_ALL) != 1
+|| hwloc_weight_long(kind & HWLOC_DISTANCES_KIND_MEANS_ALL) != 1) {
 errno = EINVAL;
 return NULL;
 }


@@ -1,5 +1,5 @@
 /*
- * Copyright © 2020-2024 Inria.  All rights reserved.
+ * Copyright © 2020-2023 Inria.  All rights reserved.
  * See COPYING in top-level directory.
  */
@@ -14,26 +14,13 @@
 */
 static __hwloc_inline
-int hwloc__memattr_get_convenience_value(hwloc_memattr_id_t id,
-hwloc_obj_t node,
-hwloc_uint64_t *valuep)
+hwloc_uint64_t hwloc__memattr_get_convenience_value(hwloc_memattr_id_t id,
+hwloc_obj_t node)
 {
-if (id == HWLOC_MEMATTR_ID_CAPACITY) {
-if (node->type != HWLOC_OBJ_NUMANODE) {
-errno = EINVAL;
-return -1;
-}
-*valuep = node->attr->numanode.local_memory;
-return 0;
-}
-else if (id == HWLOC_MEMATTR_ID_LOCALITY) {
-if (!node->cpuset) {
-errno = EINVAL;
-return -1;
-}
-*valuep = hwloc_bitmap_weight(node->cpuset);
-return 0;
-}
+if (id == HWLOC_MEMATTR_ID_CAPACITY)
+return node->attr->numanode.local_memory;
+else if (id == HWLOC_MEMATTR_ID_LOCALITY)
+return hwloc_bitmap_weight(node->cpuset);
 else
 assert(0);
 return 0; /* shut up the compiler */
@@ -635,7 +622,7 @@ hwloc_memattr_get_targets(hwloc_topology_t topology,
 if (found<max) {
 targets[found] = node;
 if (values)
-hwloc__memattr_get_convenience_value(id, node, &values[found]);
+values[found] = hwloc__memattr_get_convenience_value(id, node);
 }
 found++;
 }
@@ -761,7 +748,7 @@ hwloc_memattr_get_initiators(hwloc_topology_t topology,
 struct hwloc_internal_memattr_target_s *imtg;
 unsigned i, max;
-if (flags || !target_node) {
+if (flags) {
 errno = EINVAL;
 return -1;
 }
@@ -823,7 +810,7 @@ hwloc_memattr_get_value(hwloc_topology_t topology,
 struct hwloc_internal_memattr_s *imattr;
 struct hwloc_internal_memattr_target_s *imtg;
-if (flags || !target_node) {
+if (flags) {
 errno = EINVAL;
 return -1;
 }
@@ -836,7 +823,8 @@ hwloc_memattr_get_value(hwloc_topology_t topology,
 if (imattr->iflags & HWLOC_IMATTR_FLAG_CONVENIENCE) {
 /* convenience attributes */
-return hwloc__memattr_get_convenience_value(id, target_node, valuep);
+*valuep = hwloc__memattr_get_convenience_value(id, target_node);
+return 0;
 }
 /* normal attributes */
@@ -948,7 +936,7 @@ hwloc_memattr_set_value(hwloc_topology_t topology,
 {
 struct hwloc_internal_location_s iloc, *ilocp;
-if (flags || !target_node) {
+if (flags) {
 errno = EINVAL;
 return -1;
 }
@@ -1019,10 +1007,10 @@ hwloc_memattr_get_best_target(hwloc_topology_t topology,
 /* convenience attributes */
 for(j=0; ; j++) {
 hwloc_obj_t node = hwloc_get_obj_by_type(topology, HWLOC_OBJ_NUMANODE, j);
-hwloc_uint64_t value = 0;
+hwloc_uint64_t value;
 if (!node)
 break;
-hwloc__memattr_get_convenience_value(id, node, &value);
+value = hwloc__memattr_get_convenience_value(id, node);
 hwloc__update_best_target(&best, &best_value, &found,
 node, value,
 imattr->flags & HWLOC_MEMATTR_FLAG_HIGHER_FIRST);
@@ -1105,7 +1093,7 @@ hwloc_memattr_get_best_initiator(hwloc_topology_t topology,
 int found;
 unsigned i;
-if (flags || !target_node) {
+if (flags) {
 errno = EINVAL;
 return -1;
 }
@@ -1818,12 +1806,6 @@ hwloc__apply_memory_tiers_subtypes(hwloc_topology_t topology,
 }
 }
 }
-if (nr_tiers > 1) {
-hwloc_obj_t root = hwloc_get_root_obj(topology);
-char tmp[20];
-snprintf(tmp, sizeof(tmp), "%u", nr_tiers);
-hwloc__add_info_nodup(&root->infos, &root->infos_count, "MemoryTiersNr", tmp, 1);
-}
 }
 int


@@ -1,5 +1,5 @@
 /*
- * Copyright © 2009-2024 Inria.  All rights reserved.
+ * Copyright © 2009-2022 Inria.  All rights reserved.
  * See COPYING in top-level directory.
  */
@@ -886,12 +886,36 @@ hwloc_pcidisc_find_linkspeed(const unsigned char *config,
 unsigned offset, float *linkspeed)
 {
 unsigned linksta, speed, width;
+float lanespeed;
 memcpy(&linksta, &config[offset + HWLOC_PCI_EXP_LNKSTA], 4);
 speed = linksta & HWLOC_PCI_EXP_LNKSTA_SPEED; /* PCIe generation */
 width = (linksta & HWLOC_PCI_EXP_LNKSTA_WIDTH) >> 4; /* how many lanes */
-*linkspeed = hwloc__pci_link_speed(speed, width);
+/*
+ * These are single-direction bandwidths only.
+ *
+ * Gen1 used NRZ with 8/10 encoding.
+ * PCIe Gen1 = 2.5GT/s signal-rate per lane x 8/10 = 0.25GB/s data-rate per lane
+ * PCIe Gen2 = 5 GT/s signal-rate per lane x 8/10 = 0.5 GB/s data-rate per lane
+ * Gen3 switched to NRZ with 128/130 encoding.
+ * PCIe Gen3 = 8 GT/s signal-rate per lane x 128/130 = 1 GB/s data-rate per lane
+ * PCIe Gen4 = 16 GT/s signal-rate per lane x 128/130 = 2 GB/s data-rate per lane
+ * PCIe Gen5 = 32 GT/s signal-rate per lane x 128/130 = 4 GB/s data-rate per lane
+ * Gen6 switched to PAM with 242/256 FLIT (242B payload protected by 8B CRC + 6B FEC).
+ * PCIe Gen6 = 64 GT/s signal-rate per lane x 242/256 = 8 GB/s data-rate per lane
+ * PCIe Gen7 = 128GT/s signal-rate per lane x 242/256 = 16 GB/s data-rate per lane
+ */
+/* lanespeed in Gbit/s */
+if (speed <= 2)
+lanespeed = 2.5f * speed * 0.8f;
+else if (speed <= 5)
+lanespeed = 8.0f * (1<<(speed-3)) * 128/130;
+else
+lanespeed = 8.0f * (1<<(speed-3)) * 242/256; /* assume Gen8 will be 256 GT/s and so on */
+/* linkspeed in GB/s */
+*linkspeed = lanespeed * width / 8;
 return 0;
 }


@@ -1,6 +1,6 @@
 /*
  * Copyright © 2009 CNRS
- * Copyright © 2009-2024 Inria.  All rights reserved.
+ * Copyright © 2009-2023 Inria.  All rights reserved.
  * Copyright © 2009-2012, 2020 Université Bordeaux
  * Copyright © 2011 Cisco Systems, Inc.  All rights reserved.
  * See COPYING in top-level directory.
@@ -220,7 +220,7 @@ static void hwloc_win_get_function_ptrs(void)
 #pragma GCC diagnostic ignored "-Wcast-function-type"
 #endif
-kernel32 = LoadLibrary(TEXT("kernel32.dll"));
+kernel32 = LoadLibrary("kernel32.dll");
 if (kernel32) {
 GetActiveProcessorGroupCountProc =
 (PFN_GETACTIVEPROCESSORGROUPCOUNT) GetProcAddress(kernel32, "GetActiveProcessorGroupCount");
@@ -249,12 +249,12 @@ static void hwloc_win_get_function_ptrs(void)
 }
 if (!QueryWorkingSetExProc) {
-HMODULE psapi = LoadLibrary(TEXT("psapi.dll"));
+HMODULE psapi = LoadLibrary("psapi.dll");
 if (psapi)
 QueryWorkingSetExProc = (PFN_QUERYWORKINGSETEX) GetProcAddress(psapi, "QueryWorkingSetEx");
 }
-ntdll = GetModuleHandle(TEXT("ntdll"));
+ntdll = GetModuleHandle("ntdll");
 RtlGetVersionProc = (PFN_RTLGETVERSION) GetProcAddress(ntdll, "RtlGetVersion");
 #if HWLOC_HAVE_GCC_W_CAST_FUNCTION_TYPE


@@ -1,11 +1,11 @@
 /*
- * Copyright © 2010-2024 Inria.  All rights reserved.
+ * Copyright © 2010-2023 Inria.  All rights reserved.
  * Copyright © 2010-2013 Université Bordeaux
  * Copyright © 2010-2011 Cisco Systems, Inc.  All rights reserved.
  * See COPYING in top-level directory.
  *
  *
- * This backend is mostly used when the operating system does not export
+ * This backend is only used when the operating system does not export
  * the necessary hardware topology information to user-space applications.
  * Currently, FreeBSD and NetBSD only add PUs and then fallback to this
  * backend for CPU/Cache discovery.
@@ -15,7 +15,6 @@
  * on various architectures, without having to use this x86-specific code.
  * But this backend is still used after them to annotate some objects with
  * additional details (CPU info in Package, Inclusiveness in Caches).
- * It may also be enabled manually to work-around bugs in native OS discovery.
  */
 #include "private/autogen/config.h"
@@ -488,7 +487,7 @@ static void read_amd_cores_legacy(struct procinfo *infos, struct cpuiddump *src_
 }
 /* AMD unit/node from CPUID 0x8000001e leaf (topoext) */
-static void read_amd_cores_topoext(struct hwloc_x86_backend_data_s *data, struct procinfo *infos, unsigned long flags __hwloc_attribute_unused, struct cpuiddump *src_cpuiddump)
+static void read_amd_cores_topoext(struct hwloc_x86_backend_data_s *data, struct procinfo *infos, unsigned long flags, struct cpuiddump *src_cpuiddump)
 {
 unsigned apic_id, nodes_per_proc = 0;
 unsigned eax, ebx, ecx, edx;
@@ -497,6 +496,7 @@ static void read_amd_cores_topoext(struct hwloc_x86_backend_data_s *data, struct
 cpuid_or_from_dump(&eax, &ebx, &ecx, &edx, src_cpuiddump);
 infos->apicid = apic_id = eax;
+if (flags & HWLOC_X86_DISC_FLAG_TOPOEXT_NUMANODES) {
 if (infos->cpufamilynumber == 0x16) {
 /* ecx is reserved */
 infos->ids[NODE] = 0;
@@ -511,6 +511,7 @@ static void read_amd_cores_topoext(struct hwloc_x86_backend_data_s *data, struct
 || (infos->cpufamilynumber == 0x19 && nodes_per_proc > 1)) {
 hwloc_debug("warning: undefined nodes_per_proc value %u, assuming it means %u\n", nodes_per_proc, nodes_per_proc);
 }
+}
 if (infos->cpufamilynumber <= 0x16) { /* topoext appeared in 0x15 and compute-units were only used in 0x15 and 0x16 */
 unsigned cores_per_unit;
@@ -532,9 +533,9 @@ static void read_amd_cores_topoext(struct hwloc_x86_backend_data_s *data, struct
 }
 /* Intel core/thread or even die/module/tile from CPUID 0x0b or 0x1f leaves (v1 and v2 extended topology enumeration)
- * or AMD core/thread or even complex/ccd from CPUID 0x0b or 0x80000026 (extended CPU topology)
+ * or AMD complex/ccd from CPUID 0x80000026 (extended CPU topology)
  */
-static void read_extended_topo(struct hwloc_x86_backend_data_s *data, struct procinfo *infos, unsigned leaf, enum cpuid_type cpuid_type __hwloc_attribute_unused, struct cpuiddump *src_cpuiddump)
+static void read_extended_topo(struct hwloc_x86_backend_data_s *data, struct procinfo *infos, unsigned leaf, enum cpuid_type cpuid_type, struct cpuiddump *src_cpuiddump)
 {
 unsigned level, apic_nextshift, apic_type, apic_id = 0, apic_shift = 0, id;
 unsigned threadid __hwloc_attribute_unused = 0; /* shut-up compiler */
@@ -546,15 +547,20 @@ static void read_extended_topo(struct hwloc_x86_backend_data_s *data, struct pro
 eax = leaf;
 cpuid_or_from_dump(&eax, &ebx, &ecx, &edx, src_cpuiddump);
 /* Intel specifies that the 0x0b/0x1f loop should stop when we get "invalid domain" (0 in ecx[8:15])
- * (if so, we also get 0 in eax/ebx for invalid subleaves). Zhaoxin implements this too.
+ * (if so, we also get 0 in eax/ebx for invalid subleaves).
  * However AMD rather says that the 0x80000026/0x0b loop should stop when we get "no thread at this level" (0 in ebx[0:15]).
- *
- * Linux kernel <= 6.8 used "invalid domain" for both Intel and AMD (in detect_extended_topology())
- * but x86 discovery revamp in 6.9 now properly checks both Intel and AMD conditions (in topo_subleaf()).
- * So let's assume we are allowed to break-out once one of the Intel+AMD conditions is met.
+ * Zhaoxin follows the Intel specs but also returns "no thread at this level" for the last *valid* level (at least on KH-4000).
+ * From the Linux kernel code, it's very likely that AMD also returns "invalid domain"
+ * (because detect_extended_topology() uses that for all x86 CPUs)
+ * but keep with the official doc until AMD can clarify that (see #593).
  */
-if (!(ebx & 0xffff) || !(ecx & 0xff00))
-break;
+if (cpuid_type == amd) {
+if (!(ebx & 0xffff))
+break;
+} else {
+if (!(ecx & 0xff00))
+break;
+}
 apic_packageshift = eax & 0x1f;
 }
@@ -566,8 +572,13 @@ static void read_extended_topo(struct hwloc_x86_backend_data_s *data, struct pro
 ecx = level;
 eax = leaf;
 cpuid_or_from_dump(&eax, &ebx, &ecx, &edx, src_cpuiddump);
-if (!(ebx & 0xffff) || !(ecx & 0xff00))
-break;
+if (cpuid_type == amd) {
+if (!(ebx & 0xffff))
+break;
+} else {
+if (!(ecx & 0xff00))
+break;
+}
 apic_nextshift = eax & 0x1f;
 apic_type = (ecx & 0xff00) >> 8;
 apic_id = edx;
@@ -1814,7 +1825,7 @@ hwloc_x86_check_cpuiddump_input(const char *src_cpuiddump_path, hwloc_bitmap_t s
goto out_with_path; goto out_with_path;
} }
fclose(file); fclose(file);
if (strncmp(line, "Architecture: x86", 17)) { if (strcmp(line, "Architecture: x86\n")) {
fprintf(stderr, "hwloc/x86: Found non-x86 dumped cpuid summary in %s: %s\n", path, line); fprintf(stderr, "hwloc/x86: Found non-x86 dumped cpuid summary in %s: %s\n", path, line);
goto out_with_path; goto out_with_path;
} }

View File

@@ -1,6 +1,6 @@
/* /*
* Copyright © 2009 CNRS * Copyright © 2009 CNRS
* Copyright © 2009-2024 Inria. All rights reserved. * Copyright © 2009-2020 Inria. All rights reserved.
* Copyright © 2009-2011 Université Bordeaux * Copyright © 2009-2011 Université Bordeaux
* Copyright © 2009-2011 Cisco Systems, Inc. All rights reserved. * Copyright © 2009-2011 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory. * See COPYING in top-level directory.
@@ -41,7 +41,7 @@ typedef struct hwloc__nolibxml_import_state_data_s {
static char * static char *
hwloc__nolibxml_import_ignore_spaces(char *buffer) hwloc__nolibxml_import_ignore_spaces(char *buffer)
{ {
return buffer + strspn(buffer, " \t\n\r"); return buffer + strspn(buffer, " \t\n");
} }
static int static int

View File

@@ -1,6 +1,6 @@
/* /*
* Copyright © 2009 CNRS * Copyright © 2009 CNRS
* Copyright © 2009-2024 Inria. All rights reserved. * Copyright © 2009-2023 Inria. All rights reserved.
* Copyright © 2009-2011, 2020 Université Bordeaux * Copyright © 2009-2011, 2020 Université Bordeaux
* Copyright © 2009-2018 Cisco Systems, Inc. All rights reserved. * Copyright © 2009-2018 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory. * See COPYING in top-level directory.
@@ -872,10 +872,6 @@ hwloc__xml_import_object(hwloc_topology_t topology,
/* deal with possible future type */ /* deal with possible future type */
obj->type = HWLOC_OBJ_GROUP; obj->type = HWLOC_OBJ_GROUP;
obj->attr->group.kind = HWLOC_GROUP_KIND_INTEL_MODULE; obj->attr->group.kind = HWLOC_GROUP_KIND_INTEL_MODULE;
} else if (!strcasecmp(attrvalue, "Cluster")) {
/* deal with possible future type */
obj->type = HWLOC_OBJ_GROUP;
obj->attr->group.kind = HWLOC_GROUP_KIND_LINUX_CLUSTER;
} else if (!strcasecmp(attrvalue, "MemCache")) { } else if (!strcasecmp(attrvalue, "MemCache")) {
/* ignore possible future type */ /* ignore possible future type */
obj->type = _HWLOC_OBJ_FUTURE; obj->type = _HWLOC_OBJ_FUTURE;
@@ -1348,7 +1344,7 @@ hwloc__xml_v2import_support(hwloc_topology_t topology,
HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_support) == 4*sizeof(void*)); HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_support) == 4*sizeof(void*));
HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_discovery_support) == 6); HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_discovery_support) == 6);
HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_cpubind_support) == 11); HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_cpubind_support) == 11);
HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_membind_support) == 16); HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_membind_support) == 15);
HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_misc_support) == 1); HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_misc_support) == 1);
#endif #endif
@@ -1382,7 +1378,6 @@ hwloc__xml_v2import_support(hwloc_topology_t topology,
else DO(membind,firsttouch_membind); else DO(membind,firsttouch_membind);
else DO(membind,bind_membind); else DO(membind,bind_membind);
else DO(membind,interleave_membind); else DO(membind,interleave_membind);
else DO(membind,weighted_interleave_membind);
else DO(membind,nexttouch_membind); else DO(membind,nexttouch_membind);
else DO(membind,migrate_membind); else DO(membind,migrate_membind);
else DO(membind,get_area_memlocation); else DO(membind,get_area_memlocation);
@@ -1441,10 +1436,6 @@ hwloc__xml_v2import_distances(hwloc_topology_t topology,
} }
else if (!strcmp(attrname, "kind")) { else if (!strcmp(attrname, "kind")) {
kind = strtoul(attrvalue, NULL, 10); kind = strtoul(attrvalue, NULL, 10);
/* forward compat with "HOPS" kind in v3 */
if (kind & (1UL<<5))
/* hops becomes latency */
kind = (kind & ~(1UL<<5)) | HWLOC_DISTANCES_KIND_MEANS_LATENCY;
} }
else if (!strcmp(attrname, "name")) { else if (!strcmp(attrname, "name")) {
name = attrvalue; name = attrvalue;
@@ -3096,7 +3087,7 @@ hwloc__xml_v2export_support(hwloc__xml_export_state_t parentstate, hwloc_topolog
HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_support) == 4*sizeof(void*)); HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_support) == 4*sizeof(void*));
HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_discovery_support) == 6); HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_discovery_support) == 6);
HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_cpubind_support) == 11); HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_cpubind_support) == 11);
HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_membind_support) == 16); HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_membind_support) == 15);
HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_misc_support) == 1); HWLOC_BUILD_ASSERT(sizeof(struct hwloc_topology_misc_support) == 1);
#endif #endif
@@ -3141,7 +3132,6 @@ hwloc__xml_v2export_support(hwloc__xml_export_state_t parentstate, hwloc_topolog
DO(membind,firsttouch_membind); DO(membind,firsttouch_membind);
DO(membind,bind_membind); DO(membind,bind_membind);
DO(membind,interleave_membind); DO(membind,interleave_membind);
DO(membind,weighted_interleave_membind);
DO(membind,nexttouch_membind); DO(membind,nexttouch_membind);
DO(membind,migrate_membind); DO(membind,migrate_membind);
DO(membind,get_area_memlocation); DO(membind,get_area_memlocation);

View File

@@ -465,20 +465,6 @@ hwloc_debug_print_objects(int indent __hwloc_attribute_unused, hwloc_obj_t obj)
#define hwloc_debug_print_objects(indent, obj) do { /* nothing */ } while (0) #define hwloc_debug_print_objects(indent, obj) do { /* nothing */ } while (0)
#endif /* !HWLOC_DEBUG */ #endif /* !HWLOC_DEBUG */
int hwloc_obj_set_subtype(hwloc_topology_t topology __hwloc_attribute_unused, hwloc_obj_t obj, const char *subtype)
{
char *new = NULL;
if (subtype) {
new = strdup(subtype);
if (!new)
return -1;
}
if (obj->subtype)
free(obj->subtype);
obj->subtype = new;
return 0;
}
void hwloc__free_infos(struct hwloc_info_s *infos, unsigned count) void hwloc__free_infos(struct hwloc_info_s *infos, unsigned count)
{ {
unsigned i; unsigned i;

View File

@@ -30,10 +30,10 @@
#include "base/tools/Handle.h" #include "base/tools/Handle.h"
inline static const char *format(std::pair<bool, double> h, char *buf, size_t size) inline static const char *format(double h, char *buf, size_t size)
{ {
if (h.first) { if (std::isnormal(h)) {
snprintf(buf, size, (h.second < 100.0) ? "%04.2f" : "%03.1f", h.second); snprintf(buf, size, (h < 100.0) ? "%04.2f" : "%03.1f", h);
return buf; return buf;
} }
@@ -80,16 +80,15 @@ double xmrig::Hashrate::average() const
} }
const char *xmrig::Hashrate::format(std::pair<bool, double> h, char *buf, size_t size) const char *xmrig::Hashrate::format(double h, char *buf, size_t size)
{ {
return ::format(h, buf, size); return ::format(h, buf, size);
} }
rapidjson::Value xmrig::Hashrate::normalize(std::pair<bool, double> d) rapidjson::Value xmrig::Hashrate::normalize(double d)
{ {
using namespace rapidjson; return Json::normalize(d, false);
return d.first ? Value(floor(d.second * 100.0) / 100.0) : Value(kNullType);
} }
@@ -123,11 +122,11 @@ rapidjson::Value xmrig::Hashrate::toJSON(size_t threadId, rapidjson::Document &d
#endif #endif
std::pair<bool, double> xmrig::Hashrate::hashrate(size_t index, size_t ms) const double xmrig::Hashrate::hashrate(size_t index, size_t ms) const
{ {
assert(index < m_threads); assert(index < m_threads);
if (index >= m_threads) { if (index >= m_threads) {
return { false, 0.0 }; return nan("");
} }
uint64_t earliestHashCount = 0; uint64_t earliestHashCount = 0;
@@ -158,27 +157,17 @@ std::pair<bool, double> xmrig::Hashrate::hashrate(size_t index, size_t ms) const
} while (idx != idx_start); } while (idx != idx_start);
if (!haveFullSet || earliestStamp == 0 || lastestStamp == 0) { if (!haveFullSet || earliestStamp == 0 || lastestStamp == 0) {
return { false, 0.0 }; return nan("");
} }
if (lastestHashCnt == earliestHashCount) { if (lastestStamp - earliestStamp == 0) {
return { true, 0.0 }; return nan("");
}
if (lastestStamp == earliestStamp) {
return { false, 0.0 };
} }
const auto hashes = static_cast<double>(lastestHashCnt - earliestHashCount); const auto hashes = static_cast<double>(lastestHashCnt - earliestHashCount);
const auto time = static_cast<double>(lastestStamp - earliestStamp); const auto time = static_cast<double>(lastestStamp - earliestStamp) / 1000.0;
const auto hr = hashes * 1000.0 / time; return hashes / time;
if (!std::isnormal(hr)) {
return { false, 0.0 };
}
return { true, hr };
} }

View File

@@ -47,16 +47,16 @@ public:
Hashrate(size_t threads); Hashrate(size_t threads);
~Hashrate(); ~Hashrate();
inline std::pair<bool, double> calc(size_t ms) const { return hashrate(0U, ms); } inline double calc(size_t ms) const { const double data = hashrate(0U, ms); return std::isnormal(data) ? data : 0.0; }
inline std::pair<bool, double> calc(size_t threadId, size_t ms) const { return hashrate(threadId + 1, ms); } inline double calc(size_t threadId, size_t ms) const { return hashrate(threadId + 1, ms); }
inline size_t threads() const { return m_threads > 0U ? m_threads - 1U : 0U; } inline size_t threads() const { return m_threads > 0U ? m_threads - 1U : 0U; }
inline void add(size_t threadId, uint64_t count, uint64_t timestamp) { addData(threadId + 1U, count, timestamp); } inline void add(size_t threadId, uint64_t count, uint64_t timestamp) { addData(threadId + 1U, count, timestamp); }
inline void add(uint64_t count, uint64_t timestamp) { addData(0U, count, timestamp); } inline void add(uint64_t count, uint64_t timestamp) { addData(0U, count, timestamp); }
double average() const; double average() const;
static const char *format(std::pair<bool, double> h, char *buf, size_t size); static const char *format(double h, char *buf, size_t size);
static rapidjson::Value normalize(std::pair<bool, double> d); static rapidjson::Value normalize(double d);
# ifdef XMRIG_FEATURE_API # ifdef XMRIG_FEATURE_API
rapidjson::Value toJSON(rapidjson::Document &doc) const; rapidjson::Value toJSON(rapidjson::Document &doc) const;
@@ -64,7 +64,7 @@ public:
# endif # endif
private: private:
std::pair<bool, double> hashrate(size_t index, size_t ms) const; double hashrate(size_t index, size_t ms) const;
void addData(size_t index, uint64_t count, uint64_t timestamp); void addData(size_t index, uint64_t count, uint64_t timestamp);
constexpr static size_t kBucketSize = 2 << 11; constexpr static size_t kBucketSize = 2 << 11;

View File

@@ -65,22 +65,22 @@ public:
} }
} }
# else # else
inline ~Thread() { m_thread.join(); delete m_worker; } inline ~Thread() { m_thread.join(); }
inline void start(void *(*callback)(void *)) { m_thread = std::thread(callback, this); } inline void start(void *(*callback)(void *)) { m_thread = std::thread(callback, this); }
# endif # endif
inline const T &config() const { return m_config; } inline const T &config() const { return m_config; }
inline IBackend *backend() const { return m_backend; } inline IBackend *backend() const { return m_backend; }
inline IWorker *worker() const { return m_worker; } inline IWorker* worker() const { return m_worker.get(); }
inline size_t id() const { return m_id; } inline size_t id() const { return m_id; }
inline void setWorker(IWorker *worker) { m_worker = worker; } inline void setWorker(std::shared_ptr<IWorker> worker) { m_worker = worker; }
private: private:
const size_t m_id = 0; const size_t m_id = 0;
const T m_config; const T m_config;
IBackend *m_backend; IBackend *m_backend;
IWorker *m_worker = nullptr; std::shared_ptr<IWorker> m_worker;
#ifdef XMRIG_OS_APPLE #ifdef XMRIG_OS_APPLE
pthread_t m_thread{}; pthread_t m_thread{};

View File

@@ -62,19 +62,12 @@ public:
template<class T> template<class T>
xmrig::Workers<T>::Workers() : xmrig::Workers<T>::Workers() :
d_ptr(new WorkersPrivate()) d_ptr(std::make_shared<WorkersPrivate>())
{ {
} }
template<class T>
xmrig::Workers<T>::~Workers()
{
delete d_ptr;
}
template<class T> template<class T>
bool xmrig::Workers<T>::tick(uint64_t) bool xmrig::Workers<T>::tick(uint64_t)
{ {
@@ -88,7 +81,7 @@ bool xmrig::Workers<T>::tick(uint64_t)
uint64_t hashCount = 0; uint64_t hashCount = 0;
uint64_t rawHashes = 0; uint64_t rawHashes = 0;
for (Thread<T> *handle : m_workers) { for (auto& handle : m_workers) {
IWorker *worker = handle->worker(); IWorker *worker = handle->worker();
if (worker) { if (worker) {
worker->hashrateData(hashCount, ts, rawHashes); worker->hashrateData(hashCount, ts, rawHashes);
@@ -135,10 +128,6 @@ void xmrig::Workers<T>::stop()
Nonce::stop(T::backend()); Nonce::stop(T::backend());
# endif # endif
for (Thread<T> *worker : m_workers) {
delete worker;
}
m_workers.clear(); m_workers.clear();
# ifdef XMRIG_MINER_PROJECT # ifdef XMRIG_MINER_PROJECT
@@ -166,7 +155,7 @@ void xmrig::Workers<T>::start(const std::vector<T> &data, const std::shared_ptr<
template<class T> template<class T>
xmrig::IWorker *xmrig::Workers<T>::create(Thread<T> *) std::shared_ptr<xmrig::IWorker> xmrig::Workers<T>::create(Thread<T> *)
{ {
return nullptr; return nullptr;
} }
@@ -177,22 +166,21 @@ void *xmrig::Workers<T>::onReady(void *arg)
{ {
auto handle = static_cast<Thread<T>* >(arg); auto handle = static_cast<Thread<T>* >(arg);
IWorker *worker = create(handle); std::shared_ptr<IWorker> worker = create(handle);
assert(worker != nullptr); assert(worker);
if (!worker || !worker->selfTest()) { if (!worker || !worker->selfTest()) {
LOG_ERR("%s " RED("thread ") RED_BOLD("#%zu") RED(" self-test failed"), T::tag(), worker ? worker->id() : 0); LOG_ERR("%s " RED("thread ") RED_BOLD("#%zu") RED(" self-test failed"), T::tag(), worker ? worker->id() : 0);
handle->backend()->start(worker, false); worker.reset();
delete worker; handle->backend()->start(worker.get(), false);
return nullptr; return nullptr;
} }
assert(handle->backend() != nullptr); assert(handle->backend() != nullptr);
handle->setWorker(worker); handle->setWorker(worker);
handle->backend()->start(worker, true); handle->backend()->start(worker.get(), true);
return nullptr; return nullptr;
} }
@@ -202,7 +190,7 @@ template<class T>
void xmrig::Workers<T>::start(const std::vector<T> &data, bool /*sleep*/) void xmrig::Workers<T>::start(const std::vector<T> &data, bool /*sleep*/)
{ {
for (const auto &item : data) { for (const auto &item : data) {
m_workers.push_back(new Thread<T>(d_ptr->backend, m_workers.size(), item)); m_workers.emplace_back(std::make_shared<Thread<T>>(d_ptr->backend, m_workers.size(), item));
} }
d_ptr->hashrate = std::make_shared<Hashrate>(m_workers.size()); d_ptr->hashrate = std::make_shared<Hashrate>(m_workers.size());
@@ -211,7 +199,7 @@ void xmrig::Workers<T>::start(const std::vector<T> &data, bool /*sleep*/)
Nonce::touch(T::backend()); Nonce::touch(T::backend());
# endif # endif
for (auto worker : m_workers) { for (auto& worker : m_workers) {
worker->start(Workers<T>::onReady); worker->start(Workers<T>::onReady);
} }
} }
@@ -221,34 +209,34 @@ namespace xmrig {
template<> template<>
xmrig::IWorker *xmrig::Workers<CpuLaunchData>::create(Thread<CpuLaunchData> *handle) std::shared_ptr<xmrig::IWorker> Workers<CpuLaunchData>::create(Thread<CpuLaunchData> *handle)
{ {
# ifdef XMRIG_MINER_PROJECT # ifdef XMRIG_MINER_PROJECT
switch (handle->config().intensity) { switch (handle->config().intensity) {
case 1: case 1:
return new CpuWorker<1>(handle->id(), handle->config()); return std::make_shared<CpuWorker<1>>(handle->id(), handle->config());
case 2: case 2:
return new CpuWorker<2>(handle->id(), handle->config()); return std::make_shared<CpuWorker<2>>(handle->id(), handle->config());
case 3: case 3:
return new CpuWorker<3>(handle->id(), handle->config()); return std::make_shared<CpuWorker<3>>(handle->id(), handle->config());
case 4: case 4:
return new CpuWorker<4>(handle->id(), handle->config()); return std::make_shared<CpuWorker<4>>(handle->id(), handle->config());
case 5: case 5:
return new CpuWorker<5>(handle->id(), handle->config()); return std::make_shared<CpuWorker<5>>(handle->id(), handle->config());
case 8: case 8:
return new CpuWorker<8>(handle->id(), handle->config()); return std::make_shared<CpuWorker<8>>(handle->id(), handle->config());
} }
return nullptr; return nullptr;
# else # else
assert(handle->config().intensity == 1); assert(handle->config().intensity == 1);
return new CpuWorker<1>(handle->id(), handle->config()); return std::make_shared<CpuWorker<1>>(handle->id(), handle->config());
# endif # endif
} }
@@ -258,9 +246,9 @@ template class Workers<CpuLaunchData>;
#ifdef XMRIG_FEATURE_OPENCL #ifdef XMRIG_FEATURE_OPENCL
template<> template<>
xmrig::IWorker *xmrig::Workers<OclLaunchData>::create(Thread<OclLaunchData> *handle) std::shared_ptr<xmrig::IWorker> Workers<OclLaunchData>::create(Thread<OclLaunchData> *handle)
{ {
return new OclWorker(handle->id(), handle->config()); return std::make_shared<OclWorker>(handle->id(), handle->config());
} }
@@ -270,9 +258,9 @@ template class Workers<OclLaunchData>;
#ifdef XMRIG_FEATURE_CUDA #ifdef XMRIG_FEATURE_CUDA
template<> template<>
xmrig::IWorker *xmrig::Workers<CudaLaunchData>::create(Thread<CudaLaunchData> *handle) std::shared_ptr<xmrig::IWorker> Workers<CudaLaunchData>::create(Thread<CudaLaunchData> *handle)
{ {
return new CudaWorker(handle->id(), handle->config()); return std::make_shared<CudaWorker>(handle->id(), handle->config());
} }

View File

@@ -52,7 +52,6 @@ public:
XMRIG_DISABLE_COPY_MOVE(Workers) XMRIG_DISABLE_COPY_MOVE(Workers)
Workers(); Workers();
~Workers();
inline void start(const std::vector<T> &data) { start(data, true); } inline void start(const std::vector<T> &data) { start(data, true); }
@@ -67,20 +66,20 @@ public:
# endif # endif
private: private:
static IWorker *create(Thread<T> *handle); static std::shared_ptr<IWorker> create(Thread<T> *handle);
static void *onReady(void *arg); static void *onReady(void *arg);
void start(const std::vector<T> &data, bool sleep); void start(const std::vector<T> &data, bool sleep);
std::vector<Thread<T> *> m_workers; std::vector<std::shared_ptr<Thread<T>>> m_workers;
WorkersPrivate *d_ptr; std::shared_ptr<WorkersPrivate> d_ptr;
}; };
template<class T> template<class T>
void xmrig::Workers<T>::jobEarlyNotification(const Job &job) void xmrig::Workers<T>::jobEarlyNotification(const Job &job)
{ {
for (Thread<T>* t : m_workers) { for (auto& t : m_workers) {
if (t->worker()) { if (t->worker()) {
t->worker()->jobEarlyNotification(job); t->worker()->jobEarlyNotification(job);
} }
@@ -89,20 +88,20 @@ void xmrig::Workers<T>::jobEarlyNotification(const Job &job)
template<> template<>
IWorker *Workers<CpuLaunchData>::create(Thread<CpuLaunchData> *handle); std::shared_ptr<IWorker> Workers<CpuLaunchData>::create(Thread<CpuLaunchData> *handle);
extern template class Workers<CpuLaunchData>; extern template class Workers<CpuLaunchData>;
#ifdef XMRIG_FEATURE_OPENCL #ifdef XMRIG_FEATURE_OPENCL
template<> template<>
IWorker *Workers<OclLaunchData>::create(Thread<OclLaunchData> *handle); std::shared_ptr<IWorker> Workers<OclLaunchData>::create(Thread<OclLaunchData> *handle);
extern template class Workers<OclLaunchData>; extern template class Workers<OclLaunchData>;
#endif #endif
#ifdef XMRIG_FEATURE_CUDA #ifdef XMRIG_FEATURE_CUDA
template<> template<>
IWorker *Workers<CudaLaunchData>::create(Thread<CudaLaunchData> *handle); std::shared_ptr<IWorker> Workers<CudaLaunchData>::create(Thread<CudaLaunchData> *handle);
extern template class Workers<CudaLaunchData>; extern template class Workers<CudaLaunchData>;
#endif #endif

View File

@@ -51,7 +51,7 @@ public:
}; };
static BenchStatePrivate *d_ptr = nullptr; static std::shared_ptr<BenchStatePrivate> d_ptr;
std::atomic<uint64_t> BenchState::m_data{}; std::atomic<uint64_t> BenchState::m_data{};
@@ -61,7 +61,7 @@ std::atomic<uint64_t> BenchState::m_data{};
bool xmrig::BenchState::isDone() bool xmrig::BenchState::isDone()
{ {
return d_ptr == nullptr; return !d_ptr;
} }
@@ -105,14 +105,13 @@ uint64_t xmrig::BenchState::start(size_t threads, const IBackend *backend)
void xmrig::BenchState::destroy() void xmrig::BenchState::destroy()
{ {
delete d_ptr; d_ptr.reset();
d_ptr = nullptr;
} }
void xmrig::BenchState::done() void xmrig::BenchState::done()
{ {
assert(d_ptr != nullptr && d_ptr->async && d_ptr->remaining > 0); assert(d_ptr && d_ptr->async && d_ptr->remaining > 0);
const uint64_t ts = Chrono::steadyMSecs(); const uint64_t ts = Chrono::steadyMSecs();
@@ -129,15 +128,15 @@ void xmrig::BenchState::done()
void xmrig::BenchState::init(IBenchListener *listener, uint32_t size) void xmrig::BenchState::init(IBenchListener *listener, uint32_t size)
{ {
assert(d_ptr == nullptr); assert(!d_ptr);
d_ptr = new BenchStatePrivate(listener, size); d_ptr = std::make_shared<BenchStatePrivate>(listener, size);
} }
void xmrig::BenchState::setSize(uint32_t size) void xmrig::BenchState::setSize(uint32_t size)
{ {
assert(d_ptr != nullptr); assert(d_ptr);
d_ptr->size = size; d_ptr->size = size;
} }

View File

@@ -31,20 +31,20 @@
#endif #endif
static xmrig::ICpuInfo *cpuInfo = nullptr; static std::shared_ptr<xmrig::ICpuInfo> cpuInfo;
xmrig::ICpuInfo *xmrig::Cpu::info() xmrig::ICpuInfo *xmrig::Cpu::info()
{ {
if (cpuInfo == nullptr) { if (!cpuInfo) {
# if defined(XMRIG_FEATURE_HWLOC) # if defined(XMRIG_FEATURE_HWLOC)
cpuInfo = new HwlocCpuInfo(); cpuInfo = std::make_shared<HwlocCpuInfo>();
# else # else
cpuInfo = new BasicCpuInfo(); cpuInfo = std::make_shared<BasicCpuInfo>();
# endif # endif
} }
return cpuInfo; return cpuInfo.get();
} }
@@ -56,6 +56,5 @@ rapidjson::Value xmrig::Cpu::toJSON(rapidjson::Document &doc)
void xmrig::Cpu::release() void xmrig::Cpu::release()
{ {
delete cpuInfo; cpuInfo.reset();
cpuInfo = nullptr;
} }

View File

@@ -242,7 +242,7 @@ const char *xmrig::cpu_tag()
xmrig::CpuBackend::CpuBackend(Controller *controller) : xmrig::CpuBackend::CpuBackend(Controller *controller) :
d_ptr(new CpuBackendPrivate(controller)) d_ptr(std::make_shared<CpuBackendPrivate>(controller))
{ {
d_ptr->workers.setBackend(this); d_ptr->workers.setBackend(this);
} }
@@ -250,7 +250,6 @@ xmrig::CpuBackend::CpuBackend(Controller *controller) :
xmrig::CpuBackend::~CpuBackend() xmrig::CpuBackend::~CpuBackend()
{ {
delete d_ptr;
} }

View File

@@ -70,7 +70,7 @@ protected:
# endif # endif
private: private:
CpuBackendPrivate *d_ptr; std::shared_ptr<CpuBackendPrivate> d_ptr;
}; };

View File

@@ -57,7 +57,7 @@ static constexpr uint32_t kReserveCount = 32768;
#ifdef XMRIG_ALGO_CN_HEAVY #ifdef XMRIG_ALGO_CN_HEAVY
static std::mutex cn_heavyZen3MemoryMutex; static std::mutex cn_heavyZen3MemoryMutex;
VirtualMemory* cn_heavyZen3Memory = nullptr; std::shared_ptr<VirtualMemory> cn_heavyZen3Memory;
#endif #endif
} // namespace xmrig } // namespace xmrig
@@ -87,14 +87,14 @@ xmrig::CpuWorker<N>::CpuWorker(size_t id, const CpuLaunchData &data) :
if (!cn_heavyZen3Memory) { if (!cn_heavyZen3Memory) {
// Round up number of threads to the multiple of 8 // Round up number of threads to the multiple of 8
const size_t num_threads = ((m_threads + 7) / 8) * 8; const size_t num_threads = ((m_threads + 7) / 8) * 8;
cn_heavyZen3Memory = new VirtualMemory(m_algorithm.l3() * num_threads, data.hugePages, false, false, node()); cn_heavyZen3Memory = std::make_shared<VirtualMemory>(m_algorithm.l3() * num_threads, data.hugePages, false, false, node());
} }
m_memory = cn_heavyZen3Memory; m_memory = cn_heavyZen3Memory;
} }
else else
# endif # endif
{ {
m_memory = new VirtualMemory(m_algorithm.l3() * N, data.hugePages, false, true, node()); m_memory = std::make_shared<VirtualMemory>(m_algorithm.l3() * N, data.hugePages, false, true, node());
} }
# ifdef XMRIG_ALGO_GHOSTRIDER # ifdef XMRIG_ALGO_GHOSTRIDER
@@ -107,7 +107,7 @@ template<size_t N>
xmrig::CpuWorker<N>::~CpuWorker() xmrig::CpuWorker<N>::~CpuWorker()
{ {
# ifdef XMRIG_ALGO_RANDOMX # ifdef XMRIG_ALGO_RANDOMX
RxVm::destroy(m_vm); m_vm.reset();
# endif # endif
CnCtx::release(m_ctx, N); CnCtx::release(m_ctx, N);
@@ -116,7 +116,7 @@ xmrig::CpuWorker<N>::~CpuWorker()
if (m_memory != cn_heavyZen3Memory) if (m_memory != cn_heavyZen3Memory)
# endif # endif
{ {
delete m_memory; m_memory.reset();
} }
# ifdef XMRIG_ALGO_GHOSTRIDER # ifdef XMRIG_ALGO_GHOSTRIDER
@@ -148,7 +148,7 @@ void xmrig::CpuWorker<N>::allocateRandomX_VM()
} }
else if (!dataset->get() && (m_job.currentJob().seed() != m_seed)) { else if (!dataset->get() && (m_job.currentJob().seed() != m_seed)) {
// Update RandomX light VM with the new seed // Update RandomX light VM with the new seed
randomx_vm_set_cache(m_vm, dataset->cache()->get()); randomx_vm_set_cache(m_vm.get(), dataset->cache()->get());
} }
m_seed = m_job.currentJob().seed(); m_seed = m_job.currentJob().seed();
} }
@@ -296,7 +296,7 @@ void xmrig::CpuWorker<N>::start()
if (job.hasMinerSignature()) { if (job.hasMinerSignature()) {
job.generateMinerSignature(m_job.blob(), job.size(), miner_signature_ptr); job.generateMinerSignature(m_job.blob(), job.size(), miner_signature_ptr);
} }
randomx_calculate_hash_first(m_vm, tempHash, m_job.blob(), job.size()); randomx_calculate_hash_first(m_vm.get(), tempHash, m_job.blob(), job.size());
} }
if (!nextRound()) { if (!nextRound()) {
@@ -307,7 +307,7 @@ void xmrig::CpuWorker<N>::start()
memcpy(miner_signature_saved, miner_signature_ptr, sizeof(miner_signature_saved)); memcpy(miner_signature_saved, miner_signature_ptr, sizeof(miner_signature_saved));
job.generateMinerSignature(m_job.blob(), job.size(), miner_signature_ptr); job.generateMinerSignature(m_job.blob(), job.size(), miner_signature_ptr);
} }
randomx_calculate_hash_next(m_vm, tempHash, m_job.blob(), job.size(), m_hash); randomx_calculate_hash_next(m_vm.get(), tempHash, m_job.blob(), job.size(), m_hash);
} }
else else
# endif # endif

View File

@@ -66,7 +66,7 @@ protected:
void hashrateData(uint64_t &hashCount, uint64_t &timeStamp, uint64_t &rawHashes) const override; void hashrateData(uint64_t &hashCount, uint64_t &timeStamp, uint64_t &rawHashes) const override;
void start() override; void start() override;
inline const VirtualMemory *memory() const override { return m_memory; } inline const VirtualMemory* memory() const override { return m_memory.get(); }
inline size_t intensity() const override { return N; } inline size_t intensity() const override { return N; }
inline void jobEarlyNotification(const Job&) override {} inline void jobEarlyNotification(const Job&) override {}
@@ -92,11 +92,11 @@ private:
const Miner *m_miner; const Miner *m_miner;
const size_t m_threads; const size_t m_threads;
cryptonight_ctx *m_ctx[N]; cryptonight_ctx *m_ctx[N];
VirtualMemory *m_memory = nullptr; std::shared_ptr<VirtualMemory> m_memory;
WorkerJob<N> m_job; WorkerJob<N> m_job;
# ifdef XMRIG_ALGO_RANDOMX # ifdef XMRIG_ALGO_RANDOMX
randomx_vm *m_vm = nullptr; std::shared_ptr<randomx_vm> m_vm;
Buffer m_seed; Buffer m_seed;
# endif # endif

View File

@@ -342,7 +342,7 @@ void xmrig::HwlocCpuInfo::processTopLevelCache(hwloc_obj_t cache, const Algorith
} }
# ifdef XMRIG_ALGO_RANDOMX # ifdef XMRIG_ALGO_RANDOMX
if ((vendor() == VENDOR_INTEL) && (algorithm.family() == Algorithm::RANDOM_X) && L3_exclusive && (PUs < cores.size() * 2)) { if ((algorithm.family() == Algorithm::RANDOM_X) && L3_exclusive && (PUs > cores.size()) && (PUs < cores.size() * 2)) {
// Use all L3+L2 on latest Intel CPUs with P-cores, E-cores and exclusive L3 cache // Use all L3+L2 on latest Intel CPUs with P-cores, E-cores and exclusive L3 cache
cacheHashes = (L3 + L2) / scratchpad; cacheHashes = (L3 + L2) / scratchpad;
} }

View File

@@ -372,20 +372,15 @@ void xmrig::CudaBackend::printHashrate(bool details)
char num[16 * 3] = { 0 }; char num[16 * 3] = { 0 };
auto hashrate_short = hashrate()->calc(Hashrate::ShortInterval); const double hashrate_short = hashrate()->calc(Hashrate::ShortInterval);
auto hashrate_medium = hashrate()->calc(Hashrate::MediumInterval); const double hashrate_medium = hashrate()->calc(Hashrate::MediumInterval);
auto hashrate_large = hashrate()->calc(Hashrate::LargeInterval); const double hashrate_large = hashrate()->calc(Hashrate::LargeInterval);
double scale = 1.0; double scale = 1.0;
const char* h = " H/s"; const char* h = " H/s";
if ((hashrate_short.second >= 1e6) || (hashrate_medium.second >= 1e6) || (hashrate_large.second >= 1e6)) { if ((hashrate_short >= 1e6) || (hashrate_medium >= 1e6) || (hashrate_large >= 1e6)) {
scale = 1e-6; scale = 1e-6;
hashrate_short.second *= scale;
hashrate_medium.second *= scale;
hashrate_large.second *= scale;
h = "MH/s"; h = "MH/s";
} }
@@ -393,20 +388,12 @@ void xmrig::CudaBackend::printHashrate(bool details)
size_t i = 0; size_t i = 0;
for (const auto& data : d_ptr->threads) { for (const auto& data : d_ptr->threads) {
auto h0 = hashrate()->calc(i, Hashrate::ShortInterval); Log::print("| %8zu | %8" PRId64 " | %8s | %8s | %8s |" CYAN_BOLD(" #%u") YELLOW(" %s") GREEN(" %s"),
auto h1 = hashrate()->calc(i, Hashrate::MediumInterval);
auto h2 = hashrate()->calc(i, Hashrate::LargeInterval);
h0.second *= scale;
h1.second *= scale;
h2.second *= scale;
Log::print("| %8zu | %8" PRId64 " | %8s | %8s | %8s |" CYAN_BOLD(" #%u") YELLOW(" %s") GREEN(" %s"),
i, i,
data.thread.affinity(), data.thread.affinity(),
Hashrate::format(h0, num, sizeof num / 3), Hashrate::format(hashrate()->calc(i, Hashrate::ShortInterval) * scale, num, sizeof num / 3),
Hashrate::format(h1, num + 16, sizeof num / 3), Hashrate::format(hashrate()->calc(i, Hashrate::MediumInterval) * scale, num + 16, sizeof num / 3),
Hashrate::format(h2, num + 16 * 2, sizeof num / 3), Hashrate::format(hashrate()->calc(i, Hashrate::LargeInterval) * scale, num + 16 * 2, sizeof num / 3),
data.device.index(), data.device.index(),
data.device.topology().toString().data(), data.device.topology().toString().data(),
data.device.name().data() data.device.name().data()
@@ -416,9 +403,9 @@ void xmrig::CudaBackend::printHashrate(bool details)
} }
Log::print(WHITE_BOLD_S "| - | - | %8s | %8s | %8s |", Log::print(WHITE_BOLD_S "| - | - | %8s | %8s | %8s |",
Hashrate::format(hashrate_short , num, sizeof num / 3), Hashrate::format(hashrate_short * scale, num, sizeof num / 3),
Hashrate::format(hashrate_medium, num + 16, sizeof num / 3), Hashrate::format(hashrate_medium * scale, num + 16, sizeof num / 3),
Hashrate::format(hashrate_large , num + 16 * 2, sizeof num / 3) Hashrate::format(hashrate_large * scale, num + 16 * 2, sizeof num / 3)
); );
} }

View File

@@ -283,7 +283,7 @@ const char *xmrig::ocl_tag()
xmrig::OclBackend::OclBackend(Controller *controller) : xmrig::OclBackend::OclBackend(Controller *controller) :
d_ptr(new OclBackendPrivate(controller)) d_ptr(std::make_shared<OclBackendPrivate>(controller))
{ {
d_ptr->workers.setBackend(this); d_ptr->workers.setBackend(this);
} }
@@ -291,7 +291,7 @@ xmrig::OclBackend::OclBackend(Controller *controller) :
xmrig::OclBackend::~OclBackend() xmrig::OclBackend::~OclBackend()
{ {
delete d_ptr; d_ptr.reset();
OclLib::close(); OclLib::close();
@@ -352,20 +352,15 @@ void xmrig::OclBackend::printHashrate(bool details)
char num[16 * 3] = { 0 }; char num[16 * 3] = { 0 };
auto hashrate_short = hashrate()->calc(Hashrate::ShortInterval); const double hashrate_short = hashrate()->calc(Hashrate::ShortInterval);
auto hashrate_medium = hashrate()->calc(Hashrate::MediumInterval); const double hashrate_medium = hashrate()->calc(Hashrate::MediumInterval);
auto hashrate_large = hashrate()->calc(Hashrate::LargeInterval); const double hashrate_large = hashrate()->calc(Hashrate::LargeInterval);
double scale = 1.0; double scale = 1.0;
const char* h = " H/s"; const char* h = " H/s";
if ((hashrate_short.second >= 1e6) || (hashrate_medium.second >= 1e6) || (hashrate_large.second >= 1e6)) { if ((hashrate_short >= 1e6) || (hashrate_medium >= 1e6) || (hashrate_large >= 1e6)) {
scale = 1e-6; scale = 1e-6;
hashrate_short.second *= scale;
hashrate_medium.second *= scale;
hashrate_large.second *= scale;
h = "MH/s"; h = "MH/s";
} }
@@ -373,16 +368,12 @@ void xmrig::OclBackend::printHashrate(bool details)
size_t i = 0; size_t i = 0;
for (const auto& data : d_ptr->threads) { for (const auto& data : d_ptr->threads) {
auto h0 = hashrate()->calc(i, Hashrate::ShortInterval); Log::print("| %8zu | %8" PRId64 " | %8s | %8s | %8s |" CYAN_BOLD(" #%u") YELLOW(" %s") " %s",
auto h1 = hashrate()->calc(i, Hashrate::MediumInterval);
auto h2 = hashrate()->calc(i, Hashrate::LargeInterval);
Log::print("| %8zu | %8" PRId64 " | %8s | %8s | %8s |" CYAN_BOLD(" #%u") YELLOW(" %s") " %s",
i, i,
data.affinity, data.affinity,
Hashrate::format(h0, num, sizeof num / 3), Hashrate::format(hashrate()->calc(i, Hashrate::ShortInterval) * scale, num, sizeof num / 3),
Hashrate::format(h1, num + 16, sizeof num / 3), Hashrate::format(hashrate()->calc(i, Hashrate::MediumInterval) * scale, num + 16, sizeof num / 3),
Hashrate::format(h2, num + 16 * 2, sizeof num / 3), Hashrate::format(hashrate()->calc(i, Hashrate::LargeInterval) * scale, num + 16 * 2, sizeof num / 3),
data.device.index(), data.device.index(),
data.device.topology().toString().data(), data.device.topology().toString().data(),
data.device.printableName().data() data.device.printableName().data()
@@ -392,9 +383,9 @@ void xmrig::OclBackend::printHashrate(bool details)
} }
Log::print(WHITE_BOLD_S "| - | - | %8s | %8s | %8s |", Log::print(WHITE_BOLD_S "| - | - | %8s | %8s | %8s |",
Hashrate::format(hashrate_short , num, sizeof num / 3), Hashrate::format(hashrate_short * scale, num, sizeof num / 3),
Hashrate::format(hashrate_medium, num + 16, sizeof num / 3), Hashrate::format(hashrate_medium * scale, num + 16, sizeof num / 3),
Hashrate::format(hashrate_large , num + 16 * 2, sizeof num / 3) Hashrate::format(hashrate_large * scale, num + 16 * 2, sizeof num / 3)
); );
} }

View File

@@ -70,7 +70,7 @@ protected:
# endif # endif
private: private:
OclBackendPrivate *d_ptr; std::shared_ptr<OclBackendPrivate> d_ptr;
}; };

View File

@@ -95,8 +95,7 @@ xmrig::Api::~Api()
# ifdef XMRIG_FEATURE_HTTP # ifdef XMRIG_FEATURE_HTTP
if (m_httpd) { if (m_httpd) {
m_httpd->stop(); m_httpd->stop();
delete m_httpd; m_httpd.reset();
m_httpd = nullptr; // Ensure the pointer is set to nullptr after deletion
} }
# endif # endif
} }
@@ -116,12 +115,11 @@ void xmrig::Api::start()
# ifdef XMRIG_FEATURE_HTTP # ifdef XMRIG_FEATURE_HTTP
if (!m_httpd) { if (!m_httpd) {
m_httpd = new Httpd(m_base); m_httpd = std::make_shared<Httpd>(m_base);
if (!m_httpd->start()) { if (!m_httpd->start()) {
LOG_ERR("%s " RED_BOLD("HTTP API server failed to start."), Tags::network()); LOG_ERR("%s " RED_BOLD("HTTP API server failed to start."), Tags::network());
delete m_httpd; // Properly handle failure to start m_httpd.reset();
m_httpd = nullptr;
} }
} }
# endif # endif

View File

@@ -66,7 +66,7 @@ private:
Base *m_base; Base *m_base;
char m_id[32]{}; char m_id[32]{};
const uint64_t m_timestamp; const uint64_t m_timestamp;
Httpd *m_httpd = nullptr; std::shared_ptr<Httpd> m_httpd;
std::vector<IApiListener *> m_listeners; std::vector<IApiListener *> m_listeners;
String m_workerId; String m_workerId;
uint8_t m_ticks = 0; uint8_t m_ticks = 0;

View File

@@ -69,13 +69,13 @@ bool xmrig::Httpd::start()
bool tls = false; bool tls = false;
# ifdef XMRIG_FEATURE_TLS # ifdef XMRIG_FEATURE_TLS
m_http = new HttpsServer(m_httpListener); m_http = std::make_shared<HttpsServer>(m_httpListener);
tls = m_http->setTls(m_base->config()->tls()); tls = m_http->setTls(m_base->config()->tls());
# else # else
m_http = new HttpServer(m_httpListener); m_http = std::make_shared<HttpServer>(m_httpListener);
# endif # endif
m_server = new TcpServer(config.host(), config.port(), m_http); m_server = std::make_shared<TcpServer>(config.host(), config.port(), m_http.get());
const int rc = m_server->bind(); const int rc = m_server->bind();
Log::print(GREEN_BOLD(" * ") WHITE_BOLD("%-13s") CSI "1;%dm%s:%d" " " RED_BOLD("%s"), Log::print(GREEN_BOLD(" * ") WHITE_BOLD("%-13s") CSI "1;%dm%s:%d" " " RED_BOLD("%s"),
@@ -112,9 +112,6 @@ bool xmrig::Httpd::start()
void xmrig::Httpd::stop() void xmrig::Httpd::stop()
{ {
delete m_server;
delete m_http;
m_server = nullptr; m_server = nullptr;
m_http = nullptr; m_http = nullptr;
m_port = 0; m_port = 0;

View File

@@ -55,13 +55,13 @@ private:
const Base *m_base; const Base *m_base;
std::shared_ptr<IHttpListener> m_httpListener; std::shared_ptr<IHttpListener> m_httpListener;
TcpServer *m_server = nullptr; std::shared_ptr<TcpServer> m_server;
uint16_t m_port = 0; uint16_t m_port = 0;
# ifdef XMRIG_FEATURE_TLS # ifdef XMRIG_FEATURE_TLS
HttpsServer *m_http = nullptr; std::shared_ptr<HttpsServer> m_http;
# else # else
HttpServer *m_http = nullptr; std::shared_ptr<HttpServer> m_http;
# endif # endif
}; };

View File

@@ -128,7 +128,7 @@ public:
} // namespace xmrig } // namespace xmrig
xmrig::Async::Async(Callback callback) : d_ptr(new AsyncPrivate()) xmrig::Async::Async(Callback callback) : d_ptr(std::make_shared<AsyncPrivate>())
{ {
d_ptr->callback = std::move(callback); d_ptr->callback = std::move(callback);
d_ptr->async = new uv_async_t; d_ptr->async = new uv_async_t;
@@ -151,8 +151,6 @@ xmrig::Async::Async(IAsyncListener *listener) : d_ptr(new AsyncPrivate())
xmrig::Async::~Async() xmrig::Async::~Async()
{ {
Handle::close(d_ptr->async); Handle::close(d_ptr->async);
delete d_ptr;
} }

View File

@@ -49,7 +49,7 @@ public:
void send(); void send();
private: private:
AsyncPrivate *d_ptr; std::shared_ptr<AsyncPrivate> d_ptr;
}; };

View File

@@ -36,7 +36,7 @@ xmrig::Watcher::Watcher(const String &path, IWatcherListener *listener) :
m_listener(listener), m_listener(listener),
m_path(path) m_path(path)
{ {
m_timer = new Timer(this); m_timer = std::make_shared<Timer>(this);
m_fsEvent = new uv_fs_event_t; m_fsEvent = new uv_fs_event_t;
m_fsEvent->data = this; m_fsEvent->data = this;
@@ -48,8 +48,6 @@ xmrig::Watcher::Watcher(const String &path, IWatcherListener *listener) :
xmrig::Watcher::~Watcher() xmrig::Watcher::~Watcher()
{ {
delete m_timer;
Handle::close(m_fsEvent); Handle::close(m_fsEvent);
} }

View File

@@ -60,7 +60,7 @@ private:
IWatcherListener *m_listener; IWatcherListener *m_listener;
String m_path; String m_path;
Timer *m_timer; std::shared_ptr<Timer> m_timer;
uv_fs_event_t *m_fsEvent; uv_fs_event_t *m_fsEvent;
}; };

View File

@@ -66,17 +66,10 @@ public:
LogPrivate() = default; LogPrivate() = default;
~LogPrivate() = default;
inline ~LogPrivate() inline void add(std::shared_ptr<ILogBackend> backend) { m_backends.emplace_back(backend); }
{
for (auto backend : m_backends) {
delete backend;
}
}
inline void add(ILogBackend *backend) { m_backends.push_back(backend); }
void print(Log::Level level, const char *fmt, va_list args) void print(Log::Level level, const char *fmt, va_list args)
@@ -108,7 +101,7 @@ public:
} }
if (!m_backends.empty()) { if (!m_backends.empty()) {
for (auto backend : m_backends) { for (auto& backend : m_backends) {
backend->print(ts, level, m_buf, offset, size, true); backend->print(ts, level, m_buf, offset, size, true);
backend->print(ts, level, txt.c_str(), offset ? (offset - 11) : 0, txt.size(), false); backend->print(ts, level, txt.c_str(), offset ? (offset - 11) : 0, txt.size(), false);
} }
@@ -188,13 +181,13 @@ private:
char m_buf[Log::kMaxBufferSize]{}; char m_buf[Log::kMaxBufferSize]{};
std::mutex m_mutex; std::mutex m_mutex;
std::vector<ILogBackend*> m_backends; std::vector<std::shared_ptr<ILogBackend>> m_backends;
}; };
bool Log::m_background = false; bool Log::m_background = false;
bool Log::m_colors = true; bool Log::m_colors = true;
LogPrivate *Log::d = nullptr; std::shared_ptr<LogPrivate> Log::d{};
uint32_t Log::m_verbose = 0; uint32_t Log::m_verbose = 0;
@@ -202,7 +195,7 @@ uint32_t Log::m_verbose = 0;
void xmrig::Log::add(ILogBackend *backend) void xmrig::Log::add(std::shared_ptr<ILogBackend> backend)
{ {
assert(d != nullptr); assert(d != nullptr);
@@ -214,14 +207,13 @@ void xmrig::Log::add(ILogBackend *backend)
void xmrig::Log::destroy() void xmrig::Log::destroy()
{ {
delete d; d.reset();
d = nullptr;
} }
void xmrig::Log::init() void xmrig::Log::init()
{ {
d = new LogPrivate(); d = std::make_shared<LogPrivate>();
} }

View File

@@ -23,6 +23,7 @@
#include <cstddef> #include <cstddef>
#include <cstdint> #include <cstdint>
#include <memory>
namespace xmrig { namespace xmrig {
@@ -49,7 +50,7 @@ public:
constexpr static size_t kMaxBufferSize = 16384; constexpr static size_t kMaxBufferSize = 16384;
static void add(ILogBackend *backend); static void add(std::shared_ptr<ILogBackend> backend);
static void destroy(); static void destroy();
static void init(); static void init();
static void print(const char *fmt, ...); static void print(const char *fmt, ...);
@@ -66,7 +67,7 @@ public:
private: private:
static bool m_background; static bool m_background;
static bool m_colors; static bool m_colors;
static LogPrivate *d; static std::shared_ptr<LogPrivate> d;
static uint32_t m_verbose; static uint32_t m_verbose;
}; };

View File

@@ -80,11 +80,10 @@ public:
inline ~BasePrivate() inline ~BasePrivate()
{ {
# ifdef XMRIG_FEATURE_API # ifdef XMRIG_FEATURE_API
delete api; api.reset();
# endif # endif
delete config; watcher.reset();
delete watcher;
NetBuffer::destroy(); NetBuffer::destroy();
} }
@@ -98,27 +97,25 @@ public:
} }
inline void replace(Config *newConfig) inline void replace(std::shared_ptr<Config> newConfig)
{ {
Config *previousConfig = config; auto previousConfig = config;
config = newConfig; config = newConfig;
for (IBaseListener *listener : listeners) { for (IBaseListener *listener : listeners) {
listener->onConfigChanged(config, previousConfig); listener->onConfigChanged(config.get(), previousConfig.get());
} }
delete previousConfig;
} }
Api *api = nullptr; std::shared_ptr<Api> api;
Config *config = nullptr; std::shared_ptr<Config> config;
std::vector<IBaseListener *> listeners; std::vector<IBaseListener *> listeners;
Watcher *watcher = nullptr; std::shared_ptr<Watcher> watcher;
private: private:
inline static Config *load(Process *process) inline static std::shared_ptr<Config> load(Process *process)
{ {
JsonChain chain; JsonChain chain;
ConfigTransform transform; ConfigTransform transform;
@@ -127,29 +124,29 @@ private:
ConfigTransform::load(chain, process, transform); ConfigTransform::load(chain, process, transform);
if (read(chain, config)) { if (read(chain, config)) {
return config.release(); return config;
} }
chain.addFile(Process::location(Process::DataLocation, "config.json")); chain.addFile(Process::location(Process::DataLocation, "config.json"));
if (read(chain, config)) { if (read(chain, config)) {
return config.release(); return config;
} }
chain.addFile(Process::location(Process::HomeLocation, "." APP_ID ".json")); chain.addFile(Process::location(Process::HomeLocation, "." APP_ID ".json"));
if (read(chain, config)) { if (read(chain, config)) {
return config.release(); return config;
} }
chain.addFile(Process::location(Process::HomeLocation, ".config" XMRIG_DIR_SEPARATOR APP_ID ".json")); chain.addFile(Process::location(Process::HomeLocation, ".config" XMRIG_DIR_SEPARATOR APP_ID ".json"));
if (read(chain, config)) { if (read(chain, config)) {
return config.release(); return config;
} }
# ifdef XMRIG_FEATURE_EMBEDDED_CONFIG # ifdef XMRIG_FEATURE_EMBEDDED_CONFIG
chain.addRaw(default_config); chain.addRaw(default_config);
if (read(chain, config)) { if (read(chain, config)) {
return config.release(); return config;
} }
# endif # endif
@@ -162,7 +159,7 @@ private:
xmrig::Base::Base(Process *process) xmrig::Base::Base(Process *process)
: d_ptr(new BasePrivate(process)) : d_ptr(std::make_shared<BasePrivate>(process))
{ {
} }
@@ -170,7 +167,6 @@ xmrig::Base::Base(Process *process)
xmrig::Base::~Base() xmrig::Base::~Base()
{ {
delete d_ptr;
} }
@@ -183,7 +179,7 @@ bool xmrig::Base::isReady() const
int xmrig::Base::init() int xmrig::Base::init()
{ {
# ifdef XMRIG_FEATURE_API # ifdef XMRIG_FEATURE_API
d_ptr->api = new Api(this); d_ptr->api = std::make_shared<Api>(this);
d_ptr->api->addListener(this); d_ptr->api->addListener(this);
# endif # endif
@@ -193,16 +189,16 @@ int xmrig::Base::init()
Log::setBackground(true); Log::setBackground(true);
} }
else { else {
Log::add(new ConsoleLog(config()->title())); Log::add(std::make_shared<ConsoleLog>(config()->title()));
} }
if (config()->logFile()) { if (config()->logFile()) {
Log::add(new FileLog(config()->logFile())); Log::add(std::make_shared<FileLog>(config()->logFile()));
} }
# ifdef HAVE_SYSLOG_H # ifdef HAVE_SYSLOG_H
if (config()->isSyslog()) { if (config()->isSyslog()) {
Log::add(new SysLog()); Log::add(std::make_shared<SysLog>());
} }
# endif # endif
@@ -221,7 +217,7 @@ void xmrig::Base::start()
} }
if (config()->isWatch()) { if (config()->isWatch()) {
d_ptr->watcher = new Watcher(config()->fileName(), this); d_ptr->watcher = std::make_shared<Watcher>(config()->fileName(), this);
} }
} }
@@ -232,8 +228,7 @@ void xmrig::Base::stop()
api()->stop(); api()->stop();
# endif # endif
delete d_ptr->watcher; d_ptr->watcher.reset();
d_ptr->watcher = nullptr;
} }
@@ -241,7 +236,7 @@ xmrig::Api *xmrig::Base::api() const
{ {
assert(d_ptr->api != nullptr); assert(d_ptr->api != nullptr);
return d_ptr->api; return d_ptr->api.get();
} }
@@ -258,18 +253,14 @@ bool xmrig::Base::reload(const rapidjson::Value &json)
return false; return false;
} }
auto config = new Config(); auto config = std::make_shared<Config>();
if (!config->read(reader, d_ptr->config->fileName())) { if (!config->read(reader, d_ptr->config->fileName())) {
delete config;
return false; return false;
} }
const bool saved = config->save(); const bool saved = config->save();
if (config->isWatch() && d_ptr->watcher && saved) { if (config->isWatch() && d_ptr->watcher && saved) {
delete config;
return true; return true;
} }
@@ -279,11 +270,11 @@ bool xmrig::Base::reload(const rapidjson::Value &json)
} }
xmrig::Config *xmrig::Base::config() const xmrig::Config* xmrig::Base::config() const
{ {
assert(d_ptr->config != nullptr); assert(d_ptr->config);
return d_ptr->config; return d_ptr->config.get();
} }
@@ -300,12 +291,10 @@ void xmrig::Base::onFileChanged(const String &fileName)
JsonChain chain; JsonChain chain;
chain.addFile(fileName); chain.addFile(fileName);
auto config = new Config(); auto config = std::make_shared<Config>();
if (!config->read(chain, chain.fileName())) { if (!config->read(chain, chain.fileName())) {
LOG_ERR("%s " RED("reloading failed"), Tags::config()); LOG_ERR("%s " RED("reloading failed"), Tags::config());
delete config;
return; return;
} }

View File

@@ -64,7 +64,7 @@ protected:
# endif # endif
private: private:
BasePrivate *d_ptr; std::shared_ptr<BasePrivate> d_ptr;
}; };

View File

@@ -5,8 +5,8 @@
* Copyright 2014-2016 Wolf9466 <https://github.com/OhGodAPet> * Copyright 2014-2016 Wolf9466 <https://github.com/OhGodAPet>
* Copyright 2016 Jay D Dee <jayddee246@gmail.com> * Copyright 2016 Jay D Dee <jayddee246@gmail.com>
* Copyright 2017-2018 XMR-Stak <https://github.com/fireice-uk>, <https://github.com/psychocrypt> * Copyright 2017-2018 XMR-Stak <https://github.com/fireice-uk>, <https://github.com/psychocrypt>
* Copyright 2018-2024 SChernykh <https://github.com/SChernykh> * Copyright 2018-2019 SChernykh <https://github.com/SChernykh>
* Copyright 2016-2024 XMRig <https://github.com/xmrig>, <support@xmrig.com> * Copyright 2016-2019 XMRig <https://github.com/xmrig>, <support@xmrig.com>
* *
* This program is free software: you can redistribute it and/or modify * This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by * it under the terms of the GNU General Public License as published by
@@ -22,9 +22,11 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>. * along with this program. If not, see <http://www.gnu.org/licenses/>.
*/ */
#include <cstdio> #include <cstdio>
#include <uv.h> #include <uv.h>
#ifdef XMRIG_FEATURE_TLS #ifdef XMRIG_FEATURE_TLS
# include <openssl/opensslv.h> # include <openssl/opensslv.h>
#endif #endif
@@ -64,13 +66,13 @@ static int showVersion()
# endif # endif
printf("\n features:" printf("\n features:"
# if defined(__x86_64__) || defined(_M_AMD64) || defined (__arm64__) || defined (__aarch64__) # if defined(__i386__) || defined(_M_IX86)
" 64-bit"
# else
" 32-bit" " 32-bit"
# elif defined(__x86_64__) || defined(_M_AMD64)
" 64-bit"
# endif # endif
# if defined(__AES__) || defined(_MSC_VER) || defined(__ARM_FEATURE_CRYPTO) # if defined(__AES__) || defined(_MSC_VER)
" AES" " AES"
# endif # endif
"\n"); "\n");

View File

@@ -29,13 +29,13 @@
namespace xmrig { namespace xmrig {
static Storage<DnsUvBackend> *storage = nullptr; static std::shared_ptr<Storage<DnsUvBackend>> storage = nullptr;
Storage<DnsUvBackend> &DnsUvBackend::getStorage() Storage<DnsUvBackend> &DnsUvBackend::getStorage()
{ {
if (storage == nullptr) { if (!storage) {
storage = new Storage<DnsUvBackend>(); storage = std::make_shared<Storage<DnsUvBackend>>();
} }
return *storage; return *storage;
@@ -67,8 +67,7 @@ xmrig::DnsUvBackend::~DnsUvBackend()
storage->release(m_key); storage->release(m_key);
if (storage->isEmpty()) { if (storage->isEmpty()) {
delete storage; storage.reset();
storage = nullptr;
} }
} }

View File

@@ -87,14 +87,13 @@ xmrig::DaemonClient::DaemonClient(int id, IClientListener *listener) :
BaseClient(id, listener) BaseClient(id, listener)
{ {
m_httpListener = std::make_shared<HttpListener>(this); m_httpListener = std::make_shared<HttpListener>(this);
m_timer = new Timer(this); m_timer = std::make_shared<Timer>(this);
m_key = m_storage.add(this); m_key = m_storage.add(this);
} }
xmrig::DaemonClient::~DaemonClient() xmrig::DaemonClient::~DaemonClient()
{ {
delete m_timer;
delete m_ZMQSocket; delete m_ZMQSocket;
} }
@@ -104,9 +103,6 @@ void xmrig::DaemonClient::deleteLater()
if (m_pool.zmq_port() >= 0) { if (m_pool.zmq_port() >= 0) {
ZMQClose(true); ZMQClose(true);
} }
else {
delete this;
}
} }

View File

@@ -107,7 +107,7 @@ private:
uint64_t m_jobSteadyMs = 0; uint64_t m_jobSteadyMs = 0;
String m_tlsFingerprint; String m_tlsFingerprint;
String m_tlsVersion; String m_tlsVersion;
Timer *m_timer; std::shared_ptr<Timer> m_timer;
uint64_t m_blocktemplateRequestHeight = 0; uint64_t m_blocktemplateRequestHeight = 0;
WalletAddress m_walletAddress; WalletAddress m_walletAddress;

View File

@@ -221,42 +221,42 @@ bool xmrig::Pool::isEqual(const Pool &other) const
} }
xmrig::IClient *xmrig::Pool::createClient(int id, IClientListener *listener) const std::shared_ptr<xmrig::IClient> xmrig::Pool::createClient(int id, IClientListener* listener) const
{ {
IClient *client = nullptr; std::shared_ptr<xmrig::IClient> client;
if (m_mode == MODE_POOL) { if (m_mode == MODE_POOL) {
# if defined XMRIG_ALGO_KAWPOW || defined XMRIG_ALGO_GHOSTRIDER # if defined XMRIG_ALGO_KAWPOW || defined XMRIG_ALGO_GHOSTRIDER
const uint32_t f = m_algorithm.family(); const uint32_t f = m_algorithm.family();
if ((f == Algorithm::KAWPOW) || (f == Algorithm::GHOSTRIDER) || (m_coin == Coin::RAVEN)) { if ((f == Algorithm::KAWPOW) || (f == Algorithm::GHOSTRIDER) || (m_coin == Coin::RAVEN)) {
client = new EthStratumClient(id, Platform::userAgent(), listener); client = std::make_shared<EthStratumClient>(id, Platform::userAgent(), listener);
} }
else else
# endif # endif
{ {
client = new Client(id, Platform::userAgent(), listener); client = std::make_shared<Client>(id, Platform::userAgent(), listener);
} }
} }
# ifdef XMRIG_FEATURE_HTTP # ifdef XMRIG_FEATURE_HTTP
else if (m_mode == MODE_DAEMON) { else if (m_mode == MODE_DAEMON) {
client = new DaemonClient(id, listener); client = std::make_shared<DaemonClient>(id, listener);
} }
else if (m_mode == MODE_SELF_SELECT) { else if (m_mode == MODE_SELF_SELECT) {
client = new SelfSelectClient(id, Platform::userAgent(), listener, m_submitToOrigin); client = std::make_shared<SelfSelectClient>(id, Platform::userAgent(), listener, m_submitToOrigin);
} }
# endif # endif
# if defined XMRIG_ALGO_KAWPOW || defined XMRIG_ALGO_GHOSTRIDER # if defined XMRIG_ALGO_KAWPOW || defined XMRIG_ALGO_GHOSTRIDER
else if (m_mode == MODE_AUTO_ETH) { else if (m_mode == MODE_AUTO_ETH) {
client = new AutoClient(id, Platform::userAgent(), listener); client = std::make_shared<AutoClient>(id, Platform::userAgent(), listener);
} }
# endif # endif
# ifdef XMRIG_FEATURE_BENCHMARK # ifdef XMRIG_FEATURE_BENCHMARK
else if (m_mode == MODE_BENCHMARK) { else if (m_mode == MODE_BENCHMARK) {
client = new BenchClient(m_benchmark, listener); client = std::make_shared<BenchClient>(m_benchmark, listener);
} }
# endif # endif
assert(client != nullptr); assert(client);
if (client) { if (client) {
client->setPool(*this); client->setPool(*this);

View File

@@ -127,7 +127,7 @@ public:
bool isEnabled() const; bool isEnabled() const;
bool isEqual(const Pool &other) const; bool isEqual(const Pool &other) const;
IClient *createClient(int id, IClientListener *listener) const; std::shared_ptr<IClient> createClient(int id, IClientListener *listener) const;
rapidjson::Value toJSON(rapidjson::Document &doc) const; rapidjson::Value toJSON(rapidjson::Document &doc) const;
std::string printableName() const; std::string printableName() const;

View File

@@ -80,17 +80,17 @@ int xmrig::Pools::donateLevel() const
} }
xmrig::IStrategy *xmrig::Pools::createStrategy(IStrategyListener *listener) const std::shared_ptr<xmrig::IStrategy> xmrig::Pools::createStrategy(IStrategyListener *listener) const
{ {
if (active() == 1) { if (active() == 1) {
for (const Pool &pool : m_data) { for (const Pool &pool : m_data) {
if (pool.isEnabled()) { if (pool.isEnabled()) {
return new SinglePoolStrategy(pool, retryPause(), retries(), listener); return std::make_shared<SinglePoolStrategy>(pool, retryPause(), retries(), listener);
} }
} }
} }
auto strategy = new FailoverStrategy(retryPause(), retries(), listener); auto strategy = std::make_shared<FailoverStrategy>(retryPause(), retries(), listener);
for (const Pool &pool : m_data) { for (const Pool &pool : m_data) {
if (pool.isEnabled()) { if (pool.isEnabled()) {
strategy->add(pool); strategy->add(pool);
@@ -154,7 +154,7 @@ void xmrig::Pools::load(const IJsonReader &reader)
Pool pool(value); Pool pool(value);
if (pool.isValid()) { if (pool.isValid()) {
m_data.push_back(std::move(pool)); m_data.emplace_back(std::move(pool));
} }
} }

View File

@@ -73,7 +73,7 @@ public:
bool isEqual(const Pools &other) const; bool isEqual(const Pools &other) const;
int donateLevel() const; int donateLevel() const;
IStrategy *createStrategy(IStrategyListener *listener) const; std::shared_ptr<IStrategy> createStrategy(IStrategyListener *listener) const;
rapidjson::Value toJSON(rapidjson::Document &doc) const; rapidjson::Value toJSON(rapidjson::Document &doc) const;
size_t active() const; size_t active() const;
uint32_t benchSize() const; uint32_t benchSize() const;

View File

@@ -56,13 +56,12 @@ xmrig::SelfSelectClient::SelfSelectClient(int id, const char *agent, IClientList
m_listener(listener) m_listener(listener)
{ {
m_httpListener = std::make_shared<HttpListener>(this); m_httpListener = std::make_shared<HttpListener>(this);
m_client = new Client(id, agent, this); m_client = std::make_shared<Client>(id, agent, this);
} }
xmrig::SelfSelectClient::~SelfSelectClient() xmrig::SelfSelectClient::~SelfSelectClient()
{ {
delete m_client;
} }

View File

@@ -105,7 +105,7 @@ private:
bool m_active = false; bool m_active = false;
bool m_quiet = false; bool m_quiet = false;
const bool m_submitToOrigin; const bool m_submitToOrigin;
IClient *m_client; std::shared_ptr<IClient> m_client;
IClientListener *m_listener; IClientListener *m_listener;
int m_retries = 5; int m_retries = 5;
int64_t m_failures = 0; int64_t m_failures = 0;

View File

@@ -53,7 +53,7 @@ public:
inline int64_t sequence() const override { return 0; } inline int64_t sequence() const override { return 0; }
inline int64_t submit(const JobResult &) override { return 0; } inline int64_t submit(const JobResult &) override { return 0; }
inline void connect(const Pool &pool) override { setPool(pool); } inline void connect(const Pool &pool) override { setPool(pool); }
inline void deleteLater() override { delete this; } inline void deleteLater() override {}
inline void setAlgo(const Algorithm &algo) override {} inline void setAlgo(const Algorithm &algo) override {}
inline void setEnabled(bool enabled) override {} inline void setEnabled(bool enabled) override {}
inline void setProxy(const ProxyUrl &proxy) override {} inline void setProxy(const ProxyUrl &proxy) override {}

View File

@@ -47,7 +47,7 @@ xmrig::FailoverStrategy::FailoverStrategy(int retryPause, int retries, IStrategy
xmrig::FailoverStrategy::~FailoverStrategy() xmrig::FailoverStrategy::~FailoverStrategy()
{ {
for (IClient *client : m_pools) { for (auto& client : m_pools) {
client->deleteLater(); client->deleteLater();
} }
} }
@@ -55,7 +55,7 @@ xmrig::FailoverStrategy::~FailoverStrategy()
void xmrig::FailoverStrategy::add(const Pool &pool) void xmrig::FailoverStrategy::add(const Pool &pool)
{ {
IClient *client = pool.createClient(static_cast<int>(m_pools.size()), this); std::shared_ptr<IClient> client = pool.createClient(static_cast<int>(m_pools.size()), this);
client->setRetries(m_retries); client->setRetries(m_retries);
client->setRetryPause(m_retryPause * 1000); client->setRetryPause(m_retryPause * 1000);
@@ -93,7 +93,7 @@ void xmrig::FailoverStrategy::resume()
void xmrig::FailoverStrategy::setAlgo(const Algorithm &algo) void xmrig::FailoverStrategy::setAlgo(const Algorithm &algo)
{ {
for (IClient *client : m_pools) { for (auto& client : m_pools) {
client->setAlgo(algo); client->setAlgo(algo);
} }
} }
@@ -101,7 +101,7 @@ void xmrig::FailoverStrategy::setAlgo(const Algorithm &algo)
void xmrig::FailoverStrategy::setProxy(const ProxyUrl &proxy) void xmrig::FailoverStrategy::setProxy(const ProxyUrl &proxy)
{ {
for (IClient *client : m_pools) { for (auto& client : m_pools) {
client->setProxy(proxy); client->setProxy(proxy);
} }
} }
@@ -109,7 +109,7 @@ void xmrig::FailoverStrategy::setProxy(const ProxyUrl &proxy)
void xmrig::FailoverStrategy::stop() void xmrig::FailoverStrategy::stop()
{ {
for (auto &pool : m_pools) { for (auto& pool : m_pools) {
pool->disconnect(); pool->disconnect();
} }
@@ -122,7 +122,7 @@ void xmrig::FailoverStrategy::stop()
void xmrig::FailoverStrategy::tick(uint64_t now) void xmrig::FailoverStrategy::tick(uint64_t now)
{ {
for (IClient *client : m_pools) { for (auto& client : m_pools) {
client->tick(now); client->tick(now);
} }
} }

View File

@@ -49,7 +49,7 @@ public:
protected: protected:
inline bool isActive() const override { return m_active >= 0; } inline bool isActive() const override { return m_active >= 0; }
inline IClient *client() const override { return isActive() ? active() : m_pools[m_index]; } inline IClient* client() const override { return isActive() ? active() : m_pools[m_index].get(); }
int64_t submit(const JobResult &result) override; int64_t submit(const JobResult &result) override;
void connect() override; void connect() override;
@@ -67,7 +67,7 @@ protected:
void onVerifyAlgorithm(const IClient *client, const Algorithm &algorithm, bool *ok) override; void onVerifyAlgorithm(const IClient *client, const Algorithm &algorithm, bool *ok) override;
private: private:
inline IClient *active() const { return m_pools[static_cast<size_t>(m_active)]; } inline IClient* active() const { return m_pools[static_cast<size_t>(m_active)].get(); }
const bool m_quiet; const bool m_quiet;
const int m_retries; const int m_retries;
@@ -75,7 +75,7 @@ private:
int m_active = -1; int m_active = -1;
IStrategyListener *m_listener; IStrategyListener *m_listener;
size_t m_index = 0; size_t m_index = 0;
std::vector<IClient*> m_pools; std::vector<std::shared_ptr<IClient>> m_pools;
}; };

View File

@@ -66,7 +66,7 @@ void xmrig::SinglePoolStrategy::resume()
return; return;
} }
m_listener->onJob(this, m_client, m_client->job(), rapidjson::Value(rapidjson::kNullType)); m_listener->onJob(this, m_client.get(), m_client->job(), rapidjson::Value(rapidjson::kNullType));
} }

View File

@@ -49,7 +49,7 @@ public:
protected: protected:
inline bool isActive() const override { return m_active; } inline bool isActive() const override { return m_active; }
inline IClient *client() const override { return m_client; } inline IClient* client() const override { return m_client.get(); }
int64_t submit(const JobResult &result) override; int64_t submit(const JobResult &result) override;
void connect() override; void connect() override;
@@ -68,7 +68,7 @@ protected:
private: private:
bool m_active; bool m_active;
IClient *m_client; std::shared_ptr<IClient> m_client;
IStrategyListener *m_listener; IStrategyListener *m_listener;
}; };

View File

@@ -23,22 +23,23 @@
#include <cassert> #include <cassert>
#include <memory>
#include <uv.h> #include <uv.h>
namespace xmrig { namespace xmrig {
static MemPool<XMRIG_NET_BUFFER_CHUNK_SIZE, XMRIG_NET_BUFFER_INIT_CHUNKS> *pool = nullptr; static std::shared_ptr<MemPool<XMRIG_NET_BUFFER_CHUNK_SIZE, XMRIG_NET_BUFFER_INIT_CHUNKS>> pool;
inline MemPool<XMRIG_NET_BUFFER_CHUNK_SIZE, XMRIG_NET_BUFFER_INIT_CHUNKS> *getPool() inline MemPool<XMRIG_NET_BUFFER_CHUNK_SIZE, XMRIG_NET_BUFFER_INIT_CHUNKS> *getPool()
{ {
if (!pool) { if (!pool) {
pool = new MemPool<XMRIG_NET_BUFFER_CHUNK_SIZE, XMRIG_NET_BUFFER_INIT_CHUNKS>(); pool = std::make_shared<MemPool<XMRIG_NET_BUFFER_CHUNK_SIZE, XMRIG_NET_BUFFER_INIT_CHUNKS>>();
} }
return pool; return pool.get();
} }
@@ -59,8 +60,7 @@ void xmrig::NetBuffer::destroy()
assert(pool->freeSize() == pool->size()); assert(pool->freeSize() == pool->size());
delete pool; pool.reset();
pool = nullptr;
} }

View File

@@ -84,10 +84,10 @@ public:
     inline ~MinerPrivate()
     {
-        delete timer;
+        timer.reset();
-        for (IBackend *backend : backends) {
+        for (auto& backend : backends) {
-            delete backend;
+            backend.reset();
         }
 #   ifdef XMRIG_ALGO_RANDOMX
@@ -98,7 +98,7 @@ public:
     bool isEnabled(const Algorithm &algorithm) const
     {
-        for (IBackend *backend : backends) {
+        for (auto& backend : backends) {
             if (backend->isEnabled() && backend->isEnabled(algorithm)) {
                 return true;
             }
@@ -124,7 +124,7 @@ public:
             Nonce::reset(job.index());
         }
-        for (IBackend *backend : backends) {
+        for (auto& backend : backends) {
             backend->setJob(job);
         }
@@ -173,21 +173,17 @@ public:
         Value total(kArrayType);
         Value threads(kArrayType);
-        std::pair<bool, double> t[3] = { { true, 0.0 }, { true, 0.0 }, { true, 0.0 } };
+        double t[3] = { 0.0 };
-        for (IBackend *backend : backends) {
+        for (auto& backend : backends) {
             const Hashrate *hr = backend->hashrate();
             if (!hr) {
                 continue;
             }
-            const auto h0 = hr->calc(Hashrate::ShortInterval);
+            t[0] += hr->calc(Hashrate::ShortInterval);
-            const auto h1 = hr->calc(Hashrate::MediumInterval);
+            t[1] += hr->calc(Hashrate::MediumInterval);
-            const auto h2 = hr->calc(Hashrate::LargeInterval);
+            t[2] += hr->calc(Hashrate::LargeInterval);
-            if (h0.first) { t[0].second += h0.second; } else { t[0].first = false; }
-            if (h1.first) { t[1].second += h1.second; } else { t[1].first = false; }
-            if (h2.first) { t[2].second += h2.second; } else { t[2].first = false; }
             if (version > 1) {
                 continue;
@@ -208,7 +204,7 @@ public:
         total.PushBack(Hashrate::normalize(t[2]), allocator);
         hashrate.AddMember("total", total, allocator);
-        hashrate.AddMember("highest", Hashrate::normalize({ maxHashrate[algorithm] > 0.0, maxHashrate[algorithm] }), allocator);
+        hashrate.AddMember("highest", Hashrate::normalize(maxHashrate[algorithm]), allocator);
         if (version == 1) {
             hashrate.AddMember("threads", threads, allocator);
@@ -225,7 +221,7 @@ public:
         reply.SetArray();
-        for (IBackend *backend : backends) {
+        for (auto& backend : backends) {
             reply.PushBack(backend->toJSON(doc), allocator);
         }
     }
@@ -287,7 +283,7 @@ public:
     void printHashrate(bool details)
    {
         char num[16 * 5] = { 0 };
-        std::pair<bool, double> speed[3] = { { true, 0.0 }, { true, 0.0 }, { true, 0.0 } };
+        double speed[3] = { 0.0 };
         uint32_t count = 0;
         double avg_hashrate = 0.0;
@@ -297,13 +293,9 @@ public:
             if (hashrate) {
                 ++count;
-                const auto h0 = hashrate->calc(Hashrate::ShortInterval);
+                speed[0] += hashrate->calc(Hashrate::ShortInterval);
-                const auto h1 = hashrate->calc(Hashrate::MediumInterval);
+                speed[1] += hashrate->calc(Hashrate::MediumInterval);
-                const auto h2 = hashrate->calc(Hashrate::LargeInterval);
+                speed[2] += hashrate->calc(Hashrate::LargeInterval);
-                if (h0.first) { speed[0].second += h0.second; } else { speed[0].first = false; }
-                if (h1.first) { speed[1].second += h1.second; } else { speed[1].first = false; }
-                if (h2.first) { speed[2].second += h2.second; } else { speed[2].first = false; }
                 avg_hashrate += hashrate->average();
             }
@@ -320,13 +312,8 @@ public:
         double scale = 1.0;
         const char* h = "H/s";
-        if ((speed[0].second >= 1e6) || (speed[1].second >= 1e6) || (speed[2].second >= 1e6) || (maxHashrate[algorithm] >= 1e6)) {
+        if ((speed[0] >= 1e6) || (speed[1] >= 1e6) || (speed[2] >= 1e6) || (maxHashrate[algorithm] >= 1e6)) {
             scale = 1e-6;
-            speed[0].second *= scale;
-            speed[1].second *= scale;
-            speed[2].second *= scale;
             h = "MH/s";
         }
@@ -335,16 +322,16 @@ public:
 #   ifdef XMRIG_ALGO_GHOSTRIDER
         if (algorithm.family() == Algorithm::GHOSTRIDER) {
-            snprintf(avg_hashrate_buf, sizeof(avg_hashrate_buf), " avg " CYAN_BOLD("%s %s"), Hashrate::format({ true, avg_hashrate * scale }, num + 16 * 4, 16), h);
+            snprintf(avg_hashrate_buf, sizeof(avg_hashrate_buf), " avg " CYAN_BOLD("%s %s"), Hashrate::format(avg_hashrate * scale, num + 16 * 4, 16), h);
         }
 #   endif
         LOG_INFO("%s " WHITE_BOLD("speed") " 10s/60s/15m " CYAN_BOLD("%s") CYAN(" %s %s ") CYAN_BOLD("%s") " max " CYAN_BOLD("%s %s") "%s",
                  Tags::miner(),
-                 Hashrate::format(speed[0], num, 16),
+                 Hashrate::format(speed[0] * scale, num, 16),
-                 Hashrate::format(speed[1], num + 16, 16),
+                 Hashrate::format(speed[1] * scale, num + 16, 16),
-                 Hashrate::format(speed[2], num + 16 * 2, 16), h,
+                 Hashrate::format(speed[2] * scale, num + 16 * 2, 16), h,
-                 Hashrate::format({ maxHashrate[algorithm] > 0.0, maxHashrate[algorithm] * scale }, num + 16 * 3, 16), h,
+                 Hashrate::format(maxHashrate[algorithm] * scale, num + 16 * 3, 16), h,
                  avg_hashrate_buf
                  );
@@ -377,9 +364,9 @@ public:
     Controller *controller;
     Job job;
     mutable std::map<Algorithm::Id, double> maxHashrate;
-    std::vector<IBackend *> backends;
+    std::vector<std::shared_ptr<IBackend>> backends;
     String userJobId;
-    Timer *timer = nullptr;
+    std::shared_ptr<Timer> timer;
     uint64_t ticks = 0;
     Taskbar m_taskbar;
@@ -391,7 +378,7 @@ public:
 xmrig::Miner::Miner(Controller *controller)
-    : d_ptr(new MinerPrivate(controller))
+    : d_ptr(std::make_shared<MinerPrivate>(controller))
 {
     const int priority = controller->config()->cpu().priority();
     if (priority >= 0) {
@@ -413,29 +400,23 @@ xmrig::Miner::Miner(Controller *controller)
     controller->api()->addListener(this);
 #   endif
-    d_ptr->timer = new Timer(this);
+    d_ptr->timer = std::make_shared<Timer>(this);
     d_ptr->backends.reserve(3);
-    d_ptr->backends.push_back(new CpuBackend(controller));
+    d_ptr->backends.emplace_back(std::make_shared<CpuBackend>(controller));
 #   ifdef XMRIG_FEATURE_OPENCL
-    d_ptr->backends.push_back(new OclBackend(controller));
+    d_ptr->backends.emplace_back(std::make_shared<OclBackend>(controller));
 #   endif
 #   ifdef XMRIG_FEATURE_CUDA
-    d_ptr->backends.push_back(new CudaBackend(controller));
+    d_ptr->backends.emplace_back(std::make_shared<CudaBackend>(controller));
 #   endif
     d_ptr->rebuild();
 }
-xmrig::Miner::~Miner()
-{
-    delete d_ptr;
-}
 bool xmrig::Miner::isEnabled() const
 {
     return d_ptr->enabled;
@@ -454,7 +435,7 @@ const xmrig::Algorithms &xmrig::Miner::algorithms() const
 }
-const std::vector<xmrig::IBackend *> &xmrig::Miner::backends() const
+const std::vector<std::shared_ptr<xmrig::IBackend>>& xmrig::Miner::backends() const
 {
     return d_ptr->backends;
 }
@@ -551,7 +532,7 @@ void xmrig::Miner::setJob(const Job &job, bool donate)
 {
-    for (IBackend *backend : d_ptr->backends) {
+    for (auto& backend : d_ptr->backends) {
         backend->prepare(job);
     }
@@ -619,7 +600,7 @@ void xmrig::Miner::stop()
 {
     Nonce::stop();
-    for (IBackend *backend : d_ptr->backends) {
+    for (auto& backend : d_ptr->backends) {
         backend->stop();
     }
 }
@@ -635,7 +616,7 @@ void xmrig::Miner::onConfigChanged(Config *config, Config *previousConfig)
     const Job job = this->job();
-    for (IBackend *backend : d_ptr->backends) {
+    for (auto& backend : d_ptr->backends) {
         backend->setJob(job);
     }
 }
@@ -649,7 +630,7 @@ void xmrig::Miner::onTimer(const Timer *)
     bool stopMiner = false;
-    for (IBackend *backend : d_ptr->backends) {
+    for (auto& backend : d_ptr->backends) {
         if (!backend->tick(d_ptr->ticks)) {
             stopMiner = true;
         }
@@ -659,10 +640,7 @@ void xmrig::Miner::onTimer(const Timer *)
         }
         if (backend->hashrate()) {
-            const auto h = backend->hashrate()->calc(Hashrate::ShortInterval);
+            maxHashrate += backend->hashrate()->calc(Hashrate::ShortInterval);
-            if (h.first) {
-                maxHashrate += h.second;
-            }
         }
     }
@@ -734,7 +712,7 @@ void xmrig::Miner::onRequest(IApiRequest &request)
         }
     }
-    for (IBackend *backend : d_ptr->backends) {
+    for (auto& backend : d_ptr->backends) {
         backend->handleRequest(request);
     }
 }


@@ -46,12 +46,12 @@ public:
     XMRIG_DISABLE_COPY_MOVE_DEFAULT(Miner)
     Miner(Controller *controller);
-    ~Miner() override;
+    ~Miner() override = default;
     bool isEnabled() const;
     bool isEnabled(const Algorithm &algorithm) const;
     const Algorithms &algorithms() const;
-    const std::vector<IBackend *> &backends() const;
+    const std::vector<std::shared_ptr<IBackend>> &backends() const;
     Job job() const;
     void execCommand(char command);
     void pause();
@@ -72,7 +72,7 @@ protected:
 #   endif
 private:
-    MinerPrivate *d_ptr;
+    std::shared_ptr<MinerPrivate> d_ptr;
 };
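The Miner/MinerPrivate change above is the pimpl idiom switched from a raw owning pointer to `std::shared_ptr`, which is why the destructor can become `= default`. A minimal standalone sketch of the pattern (the `Widget`/`Impl` names are invented for illustration, not xmrig code):

```cpp
#include <memory>

// Pimpl through std::shared_ptr: shared_ptr captures a type-erased deleter at
// construction time, so the enclosing class can default its destructor even
// when Impl is only forward-declared in the header.
class Widget {
public:
    Widget() : d_ptr(std::make_shared<Impl>()) {}
    ~Widget() = default;                // no manual `delete d_ptr;`

    int  value() const   { return d_ptr->value; }
    void setValue(int v) { d_ptr->value = v; }

private:
    struct Impl { int value = 0; };     // would live in the .cpp in real code
    std::shared_ptr<Impl> d_ptr;
};
```

One caveat of `shared_ptr` pimpl is that copies would share one `Impl`; the xmrig classes avoid this by disabling copy/move (`XMRIG_DISABLE_COPY_MOVE_DEFAULT`).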


@@ -65,14 +65,13 @@ struct TaskbarPrivate
 };
-Taskbar::Taskbar() : d_ptr(new TaskbarPrivate())
+Taskbar::Taskbar() : d_ptr(std::make_shared<TaskbarPrivate>())
 {
 }
 Taskbar::~Taskbar()
 {
-    delete d_ptr;
 }


@@ -19,6 +19,7 @@
 #ifndef XMRIG_TASKBAR_H
 #define XMRIG_TASKBAR_H
+#include <memory>
 namespace xmrig {
@@ -39,7 +40,7 @@ private:
     bool m_active = false;
     bool m_enabled = true;
-    TaskbarPrivate* d_ptr = nullptr;
+    std::shared_ptr<TaskbarPrivate> d_ptr;
     void updateTaskbarColor();
 };


@@ -115,14 +115,13 @@ public:
 xmrig::Config::Config() :
-    d_ptr(new ConfigPrivate())
+    d_ptr(std::make_shared<ConfigPrivate>())
 {
 }
 xmrig::Config::~Config()
 {
-    delete d_ptr;
 }


@@ -101,7 +101,7 @@ public:
     void getJSON(rapidjson::Document &doc) const override;
 private:
-    ConfigPrivate *d_ptr;
+    std::shared_ptr<ConfigPrivate> d_ptr;
 };


@@ -49,18 +49,12 @@ xmrig::MemoryPool::MemoryPool(size_t size, bool hugePages, uint32_t node)
     constexpr size_t alignment = 1 << 24;
-    m_memory = new VirtualMemory(size * pageSize + alignment, hugePages, false, false, node);
+    m_memory = std::make_shared<VirtualMemory>(size * pageSize + alignment, hugePages, false, false, node);
     m_alignOffset = (alignment - (((size_t)m_memory->scratchpad()) % alignment)) % alignment;
 }
-xmrig::MemoryPool::~MemoryPool()
-{
-    delete m_memory;
-}
 bool xmrig::MemoryPool::isHugePages(uint32_t) const
 {
     return m_memory && m_memory->isHugePages();


@@ -44,7 +44,7 @@ public:
     XMRIG_DISABLE_COPY_MOVE_DEFAULT(MemoryPool)
     MemoryPool(size_t size, bool hugePages, uint32_t node = 0);
-    ~MemoryPool() override;
+    ~MemoryPool() override = default;
 protected:
     bool isHugePages(uint32_t node) const override;
@@ -55,7 +55,7 @@ private:
     size_t m_refs = 0;
     size_t m_offset = 0;
     size_t m_alignOffset = 0;
-    VirtualMemory *m_memory = nullptr;
+    std::shared_ptr<VirtualMemory> m_memory;
 };


@@ -42,14 +42,6 @@ xmrig::NUMAMemoryPool::NUMAMemoryPool(size_t size, bool hugePages) :
 }
-xmrig::NUMAMemoryPool::~NUMAMemoryPool()
-{
-    for (auto kv : m_map) {
-        delete kv.second;
-    }
-}
 bool xmrig::NUMAMemoryPool::isHugePages(uint32_t node) const
 {
     if (!m_size) {
@@ -81,7 +73,7 @@ void xmrig::NUMAMemoryPool::release(uint32_t node)
 xmrig::IMemoryPool *xmrig::NUMAMemoryPool::get(uint32_t node) const
 {
-    return m_map.count(node) ? m_map.at(node) : nullptr;
+    return m_map.count(node) ? m_map.at(node).get() : nullptr;
 }
@@ -89,8 +81,9 @@ xmrig::IMemoryPool *xmrig::NUMAMemoryPool::getOrCreate(uint32_t node) const
 {
     auto pool = get(node);
     if (!pool) {
-        pool = new MemoryPool(m_nodeSize, m_hugePages, node);
+        auto new_pool = std::make_shared<MemoryPool>(m_nodeSize, m_hugePages, node);
-        m_map.insert({ node, pool });
+        m_map.emplace(node, new_pool);
+        pool = new_pool.get();
     }
     return pool;
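The `getOrCreate()` hunk above keeps ownership inside the map and hands callers a non-owning raw pointer. The same shape in a standalone sketch (the `Pool`/`PoolMap` names are illustrative, not the real xmrig types):

```cpp
#include <map>
#include <memory>

// Owning container plus non-owning observer: the map's shared_ptr controls
// each pool's lifetime; callers receive a raw pointer that stays valid for
// as long as the map entry exists.
struct Pool {
    explicit Pool(int node) : node(node) {}
    int node;
};

class PoolMap {
public:
    Pool *getOrCreate(int node)
    {
        auto it = m_map.find(node);
        if (it == m_map.end()) {
            it = m_map.emplace(node, std::make_shared<Pool>(node)).first;
        }
        return it->second.get();
    }

private:
    std::map<int, std::shared_ptr<Pool>> m_map;
};
```

With ownership centralized in the map, the hand-written destructor that looped over entries calling `delete` becomes unnecessary.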


@@ -47,7 +47,7 @@ public:
     XMRIG_DISABLE_COPY_MOVE_DEFAULT(NUMAMemoryPool)
     NUMAMemoryPool(size_t size, bool hugePages);
-    ~NUMAMemoryPool() override;
+    ~NUMAMemoryPool() override = default;
 protected:
     bool isHugePages(uint32_t node) const override;
@@ -61,7 +61,7 @@ private:
     bool m_hugePages = true;
     size_t m_nodeSize = 0;
     size_t m_size = 0;
-    mutable std::map<uint32_t, IMemoryPool *> m_map;
+    mutable std::map<uint32_t, std::shared_ptr<IMemoryPool>> m_map;
 };


@@ -38,7 +38,7 @@ namespace xmrig {
 size_t VirtualMemory::m_hugePageSize = VirtualMemory::kDefaultHugePageSize;
-static IMemoryPool *pool = nullptr;
+static std::shared_ptr<IMemoryPool> pool;
 static std::mutex mutex;
@@ -113,7 +113,7 @@ uint32_t xmrig::VirtualMemory::bindToNUMANode(int64_t)
 void xmrig::VirtualMemory::destroy()
 {
-    delete pool;
+    pool.reset();
 }
@@ -125,10 +125,10 @@ void xmrig::VirtualMemory::init(size_t poolSize, size_t hugePageSize)
 #   ifdef XMRIG_FEATURE_HWLOC
     if (Cpu::info()->nodes() > 1) {
-        pool = new NUMAMemoryPool(align(poolSize, Cpu::info()->nodes()), hugePageSize > 0);
+        pool = std::make_shared<NUMAMemoryPool>(align(poolSize, Cpu::info()->nodes()), hugePageSize > 0);
     }
     else
 #   endif
     {
-        pool = new MemoryPool(poolSize, hugePageSize > 0);
+        pool = std::make_shared<MemoryPool>(poolSize, hugePageSize > 0);
     }
 }
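The VirtualMemory hunks above replace a file-scope raw pointer with a file-scope `shared_ptr`, so `destroy()` collapses to a single `reset()`. A sketch of the pattern under invented names (`IPool`, `initPool`, `destroyPool` are not xmrig identifiers):

```cpp
#include <memory>

// A file-scope shared_ptr instead of `static IPool *pool = nullptr`:
// reset() destroys the object and nulls the pointer in one step.
struct IPool {
    virtual ~IPool() = default;
    virtual bool isNuma() const = 0;
};
struct BasicPool : IPool { bool isNuma() const override { return false; } };
struct NumaPool  : IPool { bool isNuma() const override { return true;  } };

static std::shared_ptr<IPool> pool;

void initPool(bool numa)
{
    if (numa) {
        pool = std::make_shared<NumaPool>();
    }
    else {
        pool = std::make_shared<BasicPool>();
    }
}

void destroyPool() { pool.reset(); }   // was: delete pool; pool = nullptr;
```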


@@ -312,7 +312,7 @@ void benchmark()
     constexpr uint32_t N = 1U << 21;
     VirtualMemory::init(0, N);
-    VirtualMemory* memory = new VirtualMemory(N * 8, true, false, false);
+    std::shared_ptr<VirtualMemory> memory = std::make_shared<VirtualMemory>(N * 8, true, false, false);
     // 2 MB cache per core by default
     size_t max_scratchpad_size = 1U << 21;
@@ -438,7 +438,6 @@ void benchmark()
         delete helper;
         CnCtx::release(ctx, 8);
-        delete memory;
     });
     t.join();


@@ -38,17 +38,6 @@ std::mutex KPCache::s_cacheMutex;
 KPCache KPCache::s_cache;
-KPCache::KPCache()
-{
-}
-KPCache::~KPCache()
-{
-    delete m_memory;
-}
 bool KPCache::init(uint32_t epoch)
 {
     if (epoch >= sizeof(cache_sizes) / sizeof(cache_sizes[0])) {
@@ -63,8 +52,7 @@ bool KPCache::init(uint32_t epoch)
     const size_t size = cache_sizes[epoch];
     if (!m_memory || m_memory->size() < size) {
-        delete m_memory;
-        m_memory = new VirtualMemory(size, false, false, false);
+        m_memory = std::make_shared<VirtualMemory>(size, false, false, false);
     }
     const ethash_h256_t seedhash = ethash_get_seedhash(epoch);


@@ -41,8 +41,8 @@ public:
     XMRIG_DISABLE_COPY_MOVE(KPCache)
-    KPCache();
+    KPCache() = default;
-    ~KPCache();
+    ~KPCache() = default;
     bool init(uint32_t epoch);
@@ -61,7 +61,7 @@ public:
     static KPCache s_cache;
 private:
-    VirtualMemory* m_memory = nullptr;
+    std::shared_ptr<VirtualMemory> m_memory;
     size_t m_size = 0;
     uint32_t m_epoch = 0xFFFFFFFFUL;
     std::vector<uint32_t> m_DAGCache;


@@ -40,7 +40,7 @@ class RxPrivate;
 static bool osInitialized = false;
-static RxPrivate *d_ptr = nullptr;
+static std::shared_ptr<RxPrivate> d_ptr;
 class RxPrivate
@@ -73,15 +73,13 @@ void xmrig::Rx::destroy()
     RxMsr::destroy();
 #   endif
-    delete d_ptr;
-    d_ptr = nullptr;
+    d_ptr.reset();
 }
 void xmrig::Rx::init(IRxListener *listener)
 {
-    d_ptr = new RxPrivate(listener);
+    d_ptr = std::make_shared<RxPrivate>(listener);
 }


@@ -44,8 +44,8 @@ public:
     inline ~RxBasicStoragePrivate() { deleteDataset(); }
     inline bool isReady(const Job &job) const { return m_ready && m_seed == job; }
-    inline RxDataset *dataset() const { return m_dataset; }
+    inline RxDataset *dataset() const { return m_dataset.get(); }
-    inline void deleteDataset() { delete m_dataset; m_dataset = nullptr; }
+    inline void deleteDataset() { m_dataset.reset(); }
     inline void setSeed(const RxSeed &seed)
@@ -64,7 +64,7 @@ public:
     {
         const uint64_t ts = Chrono::steadyMSecs();
-        m_dataset = new RxDataset(hugePages, oneGbPages, true, mode, 0);
+        m_dataset = std::make_shared<RxDataset>(hugePages, oneGbPages, true, mode, 0);
         if (!m_dataset->cache()->get()) {
             deleteDataset();
@@ -117,7 +117,7 @@ private:
     bool m_ready = false;
-    RxDataset *m_dataset = nullptr;
+    std::shared_ptr<RxDataset> m_dataset;
     RxSeed m_seed;
 };
@@ -133,7 +133,6 @@ xmrig::RxBasicStorage::RxBasicStorage() :
 xmrig::RxBasicStorage::~RxBasicStorage()
 {
-    delete d_ptr;
 }


@@ -46,7 +46,7 @@ protected:
     void init(const RxSeed &seed, uint32_t threads, bool hugePages, bool oneGbPages, RxConfig::Mode mode, int priority) override;
 private:
-    RxBasicStoragePrivate *d_ptr;
+    std::shared_ptr<RxBasicStoragePrivate> d_ptr;
 };


@@ -35,7 +35,7 @@ static_assert(RANDOMX_FLAG_JIT == 8, "RANDOMX_FLAG_JIT flag mismatch");
 xmrig::RxCache::RxCache(bool hugePages, uint32_t nodeId)
 {
-    m_memory = new VirtualMemory(maxSize(), hugePages, false, false, nodeId);
+    m_memory = std::make_shared<VirtualMemory>(maxSize(), hugePages, false, false, nodeId);
     create(m_memory->raw());
 }
@@ -50,8 +50,6 @@ xmrig::RxCache::RxCache(uint8_t *memory)
 xmrig::RxCache::~RxCache()
 {
     randomx_release_cache(m_cache);
-    delete m_memory;
 }


@@ -69,7 +69,7 @@ private:
     bool m_jit = true;
     Buffer m_seed;
     randomx_cache *m_cache = nullptr;
-    VirtualMemory *m_memory = nullptr;
+    std::shared_ptr<VirtualMemory> m_memory;
 };


@@ -79,10 +79,7 @@ xmrig::RxDataset::RxDataset(RxCache *cache) :
 xmrig::RxDataset::~RxDataset()
 {
-    randomx_release_dataset(m_dataset);
     delete m_cache;
-    delete m_memory;
 }
@@ -107,7 +104,7 @@ bool xmrig::RxDataset::init(const Buffer &seed, uint32_t numThreads, int priorit
         for (uint64_t i = 0; i < numThreads; ++i) {
             const uint32_t a = (datasetItemCount * i) / numThreads;
             const uint32_t b = (datasetItemCount * (i + 1)) / numThreads;
-            threads.emplace_back(init_dataset_wrapper, m_dataset, m_cache->get(), a, b - a, priority);
+            threads.emplace_back(init_dataset_wrapper, m_dataset.get(), m_cache->get(), a, b - a, priority);
         }
@@ -115,7 +112,7 @@ bool xmrig::RxDataset::init(const Buffer &seed, uint32_t numThreads, int priorit
         }
     }
     else {
-        init_dataset_wrapper(m_dataset, m_cache->get(), 0, datasetItemCount, priority);
+        init_dataset_wrapper(m_dataset.get(), m_cache->get(), 0, datasetItemCount, priority);
     }
     return true;
@@ -180,7 +177,7 @@ uint8_t *xmrig::RxDataset::tryAllocateScrathpad()
 void *xmrig::RxDataset::raw() const
 {
-    return m_dataset ? randomx_get_dataset_memory(m_dataset) : nullptr;
+    return m_dataset ? randomx_get_dataset_memory(m_dataset.get()) : nullptr;
 }
@@ -191,7 +188,7 @@ void xmrig::RxDataset::setRaw(const void *raw)
     }
     volatile size_t N = maxSize();
-    memcpy(randomx_get_dataset_memory(m_dataset), raw, N);
+    memcpy(randomx_get_dataset_memory(m_dataset.get()), raw, N);
 }
@@ -199,24 +196,22 @@ void xmrig::RxDataset::allocate(bool hugePages, bool oneGbPages)
 {
     if (m_mode == RxConfig::LightMode) {
         LOG_ERR(CLEAR "%s" RED_BOLD_S "fast RandomX mode disabled by config", Tags::randomx());
         return;
     }
     if (m_mode == RxConfig::AutoMode && uv_get_total_memory() < (maxSize() + RxCache::maxSize())) {
         LOG_ERR(CLEAR "%s" RED_BOLD_S "not enough memory for RandomX dataset", Tags::randomx());
         return;
     }
-    m_memory = new VirtualMemory(maxSize(), hugePages, oneGbPages, false, m_node);
+    m_memory = std::make_shared<VirtualMemory>(maxSize(), hugePages, oneGbPages, false, m_node);
     if (m_memory->isOneGbPages()) {
         m_scratchpadOffset = maxSize() + RANDOMX_CACHE_MAX_SIZE;
         m_scratchpadLimit = m_memory->capacity();
     }
-    m_dataset = randomx_create_dataset(m_memory->raw());
+    m_dataset = std::shared_ptr<randomx_dataset>(randomx_create_dataset(m_memory->raw()), randomx_release_dataset);
 #   ifdef XMRIG_OS_LINUX
     if (oneGbPages && !isOneGbPages()) {


@@ -50,7 +50,7 @@ public:
     RxDataset(RxCache *cache);
     ~RxDataset();
-    inline randomx_dataset *get() const { return m_dataset; }
+    inline randomx_dataset *get() const { return m_dataset.get(); }
     inline RxCache *cache() const { return m_cache; }
     inline void setCache(RxCache *cache) { m_cache = cache; }
@@ -70,11 +70,11 @@ private:
     const RxConfig::Mode m_mode = RxConfig::FastMode;
     const uint32_t m_node;
-    randomx_dataset *m_dataset = nullptr;
+    std::shared_ptr<randomx_dataset> m_dataset;
     RxCache *m_cache = nullptr;
     size_t m_scratchpadLimit = 0;
     std::atomic<size_t> m_scratchpadOffset{};
-    VirtualMemory *m_memory = nullptr;
+    std::shared_ptr<VirtualMemory> m_memory;
 };


@@ -49,8 +49,6 @@ xmrig::RxQueue::~RxQueue()
     m_cv.notify_one();
     m_thread.join();
-    delete m_storage;
 }
@@ -90,12 +88,12 @@ void xmrig::RxQueue::enqueue(const RxSeed &seed, const std::vector<uint32_t> &no
     if (!m_storage) {
 #   ifdef XMRIG_FEATURE_HWLOC
         if (!nodeset.empty()) {
-            m_storage = new RxNUMAStorage(nodeset);
+            m_storage = std::make_shared<RxNUMAStorage>(nodeset);
         }
         else
 #   endif
         {
-            m_storage = new RxBasicStorage();
+            m_storage = std::make_shared<RxBasicStorage>();
         }
     }


@@ -94,7 +94,7 @@ private:
     void onReady();
     IRxListener *m_listener = nullptr;
-    IRxStorage *m_storage = nullptr;
+    std::shared_ptr<IRxStorage> m_storage;
     RxSeed m_seed;
     State m_state = STATE_IDLE;
     std::condition_variable m_cv;


@@ -25,7 +25,7 @@
 #include "crypto/rx/RxVm.h"
-randomx_vm *xmrig::RxVm::create(RxDataset *dataset, uint8_t *scratchpad, bool softAes, const Assembly &assembly, uint32_t node)
+std::shared_ptr<randomx_vm> xmrig::RxVm::create(RxDataset *dataset, uint8_t *scratchpad, bool softAes, const Assembly &assembly, uint32_t node)
 {
     int flags = 0;
@@ -46,13 +46,8 @@ randomx_vm *xmrig::RxVm::create(RxDataset *dataset, uint8_t *scratchpad, bool so
         flags |= RANDOMX_FLAG_AMD;
     }
-    return randomx_create_vm(static_cast<randomx_flags>(flags), !dataset->get() ? dataset->cache()->get() : nullptr, dataset->get(), scratchpad, node);
+    return std::shared_ptr<randomx_vm>(randomx_create_vm(
+        static_cast<randomx_flags>(flags), !dataset->get() ? dataset->cache()->get() : nullptr, dataset->get(), scratchpad, node),
+        randomx_destroy_vm);
 }
-void xmrig::RxVm::destroy(randomx_vm* vm)
-{
-    if (vm) {
-        randomx_destroy_vm(vm);
-    }
-}
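`RxVm::create` above returns a `shared_ptr` whose custom deleter is `randomx_destroy_vm`, which is what makes the separate `RxVm::destroy` helper redundant. The same idea against a stand-in C-style API (`vm_t`, `vm_create`, `vm_destroy` are invented for this sketch, not the real randomx API):

```cpp
#include <memory>

// Wrapping a C-style create/destroy pair so the deleter travels with the
// pointer: callers can no longer forget, or double-call, the destroy function.
struct vm_t { int flags; };

static int g_destroyed = 0;   // instrumentation for the sketch only

vm_t *vm_create(int flags) { return new vm_t{flags}; }
void  vm_destroy(vm_t *vm) { delete vm; ++g_destroyed; }

std::shared_ptr<vm_t> make_vm(int flags)
{
    // The deleter is bound once at construction and runs exactly when the
    // last shared_ptr copy goes out of scope.
    return std::shared_ptr<vm_t>(vm_create(flags), vm_destroy);
}
```

Note that `shared_ptr`'s deleter is type-erased, so the function signature stays `std::shared_ptr<vm_t>` regardless of how the object is destroyed.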


@@ -38,8 +38,7 @@ class RxDataset;
 class RxVm
 {
 public:
-    static randomx_vm *create(RxDataset *dataset, uint8_t *scratchpad, bool softAes, const Assembly &assembly, uint32_t node);
+    static std::shared_ptr<randomx_vm> create(RxDataset *dataset, uint8_t *scratchpad, bool softAes, const Assembly &assembly, uint32_t node);
-    static void destroy(randomx_vm *vm);
 };


@@ -59,7 +59,7 @@ private:
     bool rdmsr(uint32_t reg, int32_t cpu, uint64_t &value) const;
     bool wrmsr(uint32_t reg, uint64_t value, int32_t cpu);
-    MsrPrivate *d_ptr = nullptr;
+    std::shared_ptr<MsrPrivate> d_ptr;
 };


@@ -72,11 +72,9 @@ private:
     const bool m_available;
 };
 } // namespace xmrig
-xmrig::Msr::Msr() : d_ptr(new MsrPrivate())
+xmrig::Msr::Msr() : d_ptr(std::make_shared<MsrPrivate>())
 {
     if (!isAvailable()) {
         LOG_WARN("%s " YELLOW_BOLD("msr kernel module is not available"), tag());
@@ -86,7 +84,6 @@ xmrig::Msr::Msr() : d_ptr(new MsrPrivate())
 xmrig::Msr::~Msr()
 {
-    delete d_ptr;
 }


@@ -85,7 +85,7 @@ public:
 } // namespace xmrig
-xmrig::Msr::Msr() : d_ptr(new MsrPrivate())
+xmrig::Msr::Msr() : d_ptr(std::make_shared<MsrPrivate>())
 {
     DWORD err = 0;
@@ -195,8 +195,6 @@ xmrig::Msr::Msr() : d_ptr(new MsrPrivate())
 xmrig::Msr::~Msr()
 {
     d_ptr->uninstall();
-    delete d_ptr;
 }


@@ -133,12 +133,10 @@ static void getResults(JobBundle &bundle, std::vector<JobResult> &results, uint3
         for (uint32_t nonce : bundle.nonces) {
             *bundle.job.nonce() = nonce;
-            randomx_calculate_hash(vm, bundle.job.blob(), bundle.job.size(), hash);
+            randomx_calculate_hash(vm.get(), bundle.job.blob(), bundle.job.size(), hash);
             checkHash(bundle, results, nonce, hash, errors);
         }
-        RxVm::destroy(vm);
 #   endif
     }
     else if (algorithm.family() == Algorithm::ARGON2) {
@@ -303,7 +301,7 @@ private:
 };
-static JobResultsPrivate *handler = nullptr;
+static std::shared_ptr<JobResultsPrivate> handler;
 } // namespace xmrig
@@ -317,19 +315,17 @@ void xmrig::JobResults::done(const Job &job)
 void xmrig::JobResults::setListener(IJobResultListener *listener, bool hwAES)
 {
-    assert(handler == nullptr);
+    assert(!handler);
-    handler = new JobResultsPrivate(listener, hwAES);
+    handler = std::make_shared<JobResultsPrivate>(listener, hwAES);
 }
 void xmrig::JobResults::stop()
 {
-    assert(handler != nullptr);
+    assert(handler);
-    delete handler;
-    handler = nullptr;
+    handler.reset();
 }
@@ -347,7 +343,7 @@ void xmrig::JobResults::submit(const Job& job, uint32_t nonce, const uint8_t* re
 void xmrig::JobResults::submit(const JobResult &result)
 {
-    assert(handler != nullptr);
+    assert(handler);
     if (handler) {
         handler->submit(result);
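The JobResults hunks above turn the static handler into a `shared_ptr`, so the null checks become boolean tests and `stop()` is a single `reset()`. An illustrative sketch (the `Handler`, `startHandler`, `stopHandler` names are invented, not xmrig code):

```cpp
#include <cassert>
#include <memory>

// A shared_ptr "singleton": `assert(!handler)` replaces comparisons against
// nullptr, and reset() replaces `delete handler; handler = nullptr;`.
struct Handler {
    int submitted = 0;
    void submit() { ++submitted; }
};

static std::shared_ptr<Handler> handler;

void startHandler()
{
    assert(!handler);                       // must not already be running
    handler = std::make_shared<Handler>();
}

void stopHandler()
{
    assert(handler);                        // must have been started
    handler.reset();                        // destroy and null in one step
}
```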


@@ -67,27 +67,23 @@ xmrig::Network::Network(Controller *controller) :
     controller->api()->addListener(this);
 #   endif
-    m_state = new NetworkState(this);
+    m_state = std::make_shared<NetworkState>(this);
     const Pools &pools = controller->config()->pools();
-    m_strategy = pools.createStrategy(m_state);
+    m_strategy = pools.createStrategy(m_state.get());
     if (pools.donateLevel() > 0) {
-        m_donate = new DonateStrategy(controller, this);
+        m_donate = std::make_shared<DonateStrategy>(controller, this);
     }
-    m_timer = new Timer(this, kTickInterval, kTickInterval);
+    static constexpr int kTickInterval = 1 * 1000;
+    m_timer = std::make_shared<Timer>(this, kTickInterval, kTickInterval);
 }
 xmrig::Network::~Network()
 {
     JobResults::stop();
-    delete m_timer;
-    delete m_donate;
-    delete m_strategy;
-    delete m_state;
 }
@@ -118,7 +114,7 @@ void xmrig::Network::execCommand(char command)
 void xmrig::Network::onActive(IStrategy *strategy, IClient *client)
 {
-    if (m_donate && m_donate == strategy) {
+    if (m_donate && m_donate.get() == strategy) {
         LOG_NOTICE("%s " WHITE_BOLD("dev donate started"), Tags::network());
         return;
     }
@@ -157,19 +153,18 @@ void xmrig::Network::onConfigChanged(Config *config, Config *previousConfig)
     config->pools().print();
-    delete m_strategy;
-    m_strategy = config->pools().createStrategy(m_state);
+    m_strategy = config->pools().createStrategy(m_state.get());
     connect();
 }
 void xmrig::Network::onJob(IStrategy *strategy, IClient *client, const Job &job, const rapidjson::Value &)
 {
-    if (m_donate && m_donate->isActive() && m_donate != strategy) {
+    if (m_donate && m_donate->isActive() && m_donate.get() != strategy) {
         return;
     }
-    setJob(client, job, m_donate == strategy);
+    setJob(client, job, m_donate.get() == strategy);
 }
@@ -210,7 +205,7 @@ void xmrig::Network::onLogin(IStrategy *, IClient *client, rapidjson::Document &
 void xmrig::Network::onPause(IStrategy *strategy)
 {
-    if (m_donate && m_donate == strategy) {
+    if (m_donate && m_donate.get() == strategy) {
         LOG_NOTICE("%s " WHITE_BOLD("dev donate finished"), Tags::network());
         m_strategy->resume();
     }
@@ -292,7 +287,7 @@ void xmrig::Network::setJob(IClient *client, const Job &job, bool donate)
     }
     if (!donate && m_donate) {
-        static_cast<DonateStrategy *>(m_donate)->update(client, job);
+        static_cast<DonateStrategy &>(*m_donate).update(client, job);
     }
     m_controller->miner()->setJob(job, donate);


@@ -30,7 +30,7 @@
 #include "interfaces/IJobResultListener.h"
-#include <vector>
+#include <memory>
 namespace xmrig {
@@ -49,7 +49,7 @@ public:
     Network(Controller *controller);
     ~Network() override;
-    inline IStrategy *strategy() const { return m_strategy; }
+    inline IStrategy *strategy() const { return m_strategy.get(); }
     void connect();
     void execCommand(char command);
@@ -64,15 +64,13 @@ protected:
     void onLogin(IStrategy *strategy, IClient *client, rapidjson::Document &doc, rapidjson::Value &params) override;
     void onPause(IStrategy *strategy) override;
     void onResultAccepted(IStrategy *strategy, IClient *client, const SubmitResult &result, const char *error) override;
     void onVerifyAlgorithm(IStrategy *strategy, const IClient *client, const Algorithm &algorithm, bool *ok) override;
 #   ifdef XMRIG_FEATURE_API
     void onRequest(IApiRequest &request) override;
 #   endif
 private:
-    constexpr static int kTickInterval = 1 * 1000;
     void setJob(IClient *client, const Job &job, bool donate);
     void tick();
@@ -82,10 +80,10 @@ private:
 #   endif
     Controller *m_controller;
-    IStrategy *m_donate = nullptr;
+    std::shared_ptr<IStrategy> m_donate;
-    IStrategy *m_strategy = nullptr;
+    std::shared_ptr<IStrategy> m_strategy;
-    NetworkState *m_state = nullptr;
+    std::shared_ptr<NetworkState> m_state;
-    Timer *m_timer = nullptr;
+    std::shared_ptr<Timer> m_timer;
 };

Some files were not shown because too many files have changed in this diff.