We have reached an era where 6-core and 8-core CPUs are common. However, whether more cores result in higher speed depends on the software implementation.
If the tasks are completely independent, each core can work on its own, and performance scales with the core count. When tasks depend on one another, however, the cores must wait on each other, so more cores do not translate directly into more speed.
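A minimal Python sketch of this difference (the `analyze` and `step` functions are placeholders, not real workloads): independent chunks can be spread across a process pool, while a dependency chain forces each step to wait for the previous result.

```python
from concurrent.futures import ProcessPoolExecutor

def analyze(chunk):
    # CPU-bound work on one independent chunk (placeholder computation)
    return sum(x * x for x in chunk)

def step(prev, value):
    # Each step consumes the previous result, so the chain is inherently serial
    return prev + value

if __name__ == "__main__":
    chunks = [range(i * 100, (i + 1) * 100) for i in range(8)]

    # Independent tasks: one chunk per core, results can arrive in any order
    with ProcessPoolExecutor() as pool:
        parallel_results = list(pool.map(analyze, chunks))

    # Dependent tasks: each step waits on the one before it
    serial_result = 0
    for v in (1, 2, 3, 4):
        serial_result = step(serial_result, v)

    print(sum(parallel_results), serial_result)
```

The first part finishes faster with more cores; the second part takes the same time no matter how many cores are available.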
I. In data recovery, the analysis stage benefits from parallel processing, so more cores mean faster results. Drive read/write operations, on the other hand, barely speed up, because the drive itself is far slower than the CPU.
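As a rough illustration of that split, here is a sketch in which sectors are read sequentially from a (fake, in-memory) disk image, while the per-sector analysis is fanned out to a pool. The sector size and the "analysis" itself are stand-ins, not any real recovery tool's logic.

```python
import io
from concurrent.futures import ThreadPoolExecutor

def read_sectors(disk, sector_size=512):
    # Reading is sequential and bounded by drive speed, not core count
    while True:
        sector = disk.read(sector_size)
        if not sector:
            break
        yield sector

def analyze_sector(sector):
    # CPU-bound analysis; counting non-zero bytes stands in for real work
    return sum(1 for b in sector if b != 0)

if __name__ == "__main__":
    disk = io.BytesIO(bytes(range(256)) * 8)  # fake 2 KiB disk image
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(analyze_sector, read_sectors(disk)))
    print(results)
```

Adding cores helps the `analyze_sector` side, but `read_sectors` still advances at whatever rate the drive delivers data.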
II. In blockchain, LoadBlockIndex is parallelized and given a dedicated thread, which accelerates the process. However, block validation and connection are not parallelized: each block depends on the one before it, so they are processed sequentially, one block at a time.
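A toy sketch of why connection is sequential (this is an illustration of the hash-chaining idea, not Bitcoin Core's actual code): each block commits to the hash of its predecessor, so you cannot connect block N before block N-1 has produced the current tip.

```python
import hashlib

def block_hash(prev_hash: bytes, payload: bytes) -> str:
    # A block's identity covers its predecessor's hash, forming a chain
    return hashlib.sha256(prev_hash + payload).hexdigest()

def connect_blocks(blocks, genesis_hash="00" * 32):
    # Each block must connect to the current tip, so connection proceeds
    # strictly one block at a time; there is nothing to parallelize here
    tip = genesis_hash
    for prev_hash, payload in blocks:
        if prev_hash != tip:
            raise ValueError("block does not connect to the current tip")
        tip = block_hash(bytes.fromhex(prev_hash), payload)
    return tip
```

Because `tip` is an input to every iteration, the loop is a dependency chain: extra cores cannot shorten it.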
This brings us to E-cores. The OS recognizes them as physical cores, but how well do they actually perform?
The purpose of E-cores seems to be power saving, but if you are going to spend die area on power-saving cores, wouldn't it be better to devote that silicon to the P-cores instead, improving single-thread performance and, with it, overall performance?
In the end, it feels as if the bad habit of the Pentium 4 era's Hyper-Threading, inflating the apparent core count, has resurfaced in modern CPUs.