Power ISA/Vector Operations

The Power Architecture ISA includes a specification of vector (SIMD) operations. Prior to the Power ISA, i.e. in the PowerPC architecture, some of these operations were already available, but they were defined in an external standard, called AltiVec by Freescale (a Motorola spin-off), Vector Multimedia Extension (VMX) by IBM, and Velocity Engine by Apple.

The vector operations are classified as the Vector Facility and the Vector-Scalar Extension (VSX) in current versions of the Power ISA.

Power ISA v2.07 still refers to some instructions as VMX in its summary of changes since the previous version, but the rest of the document avoids mentioning VMX completely.

Power ISA v3.0 no longer mentions VMX at all.
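
To make this concrete, here is a minimal sketch (not taken from the ISA documents) of what these operations look like from C, using the <code>altivec.h</code> intrinsics that GCC and Clang provide on POWER; the file name and compiler flags are ordinary GNU toolchain conventions and are only an example:

<pre>
/* Sketch: c = a * b + c on four single-precision floats at a time.
   Build on a POWER system with, e.g.: gcc -O2 -mvsx vsx_example.c */
#include <altivec.h>
#include <stdio.h>

int main(void)
{
    /* "vector float" is a 128-bit type holding four floats. */
    vector float a = {1.0f, 2.0f, 3.0f, 4.0f};
    vector float b = {10.0f, 20.0f, 30.0f, 40.0f};
    vector float c = vec_splats(0.5f);   /* broadcast 0.5 into all four lanes */

    c = vec_madd(a, b, c);               /* per-lane fused multiply-add */

    float out[4] __attribute__((aligned(16)));
    vec_st(c, 0, out);                   /* 16-byte-aligned 128-bit store */
    printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
    return 0;
}
</pre>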
 
According to [[File:POWER9-Features-and-Specifications.pdf]] page 7, the Vector Scalar Unit (VSU)'s 128-bit hardware is dedicated per super-slice (2 threads). This may indicate that aggressively using 128-bit VSX instructions in two threads that share the same super-slice will be inefficient. Clever use of <code>taskset</code> (or other CPU-affinity controls) may improve this situation.
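
If profiling does show two VSX-heavy threads contending for one super-slice, one mitigation is to pin them to different cores, either with <code>taskset -c</code> from the shell or programmatically. Below is a minimal sketch using the Linux affinity API; the CPU numbers are an assumption (Linux normally numbers the SMT siblings of a core consecutively, e.g. CPUs 0-3 for the first core in SMT4 mode), so check <code>lscpu</code> on the actual machine:

<pre>
/* Sketch: keep two VSX-heavy worker threads on different cores so that they
   do not compete for the same super-slice.  CPUs 0 and 4 are assumed to be on
   different cores (typical SMT4 numbering); verify with lscpu.
   Build with: gcc -O2 -pthread pin_example.c */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *vsx_heavy_worker(void *arg)
{
    (void)arg;                 /* the actual VSX-heavy loop would go here */
    return NULL;
}

static void pin_to_cpu(pthread_t t, int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    if (pthread_setaffinity_np(t, sizeof(set), &set) != 0)
        fprintf(stderr, "failed to pin thread to CPU %d\n", cpu);
}

int main(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, vsx_heavy_worker, NULL);
    pthread_create(&t1, NULL, vsx_heavy_worker, NULL);
    pin_to_cpu(t0, 0);         /* a hardware thread of one core */
    pin_to_cpu(t1, 4);         /* a hardware thread of a different core (assumed) */
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return 0;
}
</pre>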
  
Power ISA v3.1 adds an optional VSX extension, MMA (Matrix-Multiply Assist), targeted at matrix math applications.
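
GCC 10 and later (and recent Clang) expose MMA through compiler built-ins when targeting POWER10. The sketch below uses those built-ins (<code>__vector_quad</code>, <code>__builtin_mma_*</code>); the names and argument types come from the compiler documentation rather than from this page's sources, so treat the details as an assumption:

<pre>
/* MMA sketch: one rank-1 (outer product) update of a 4x4 float accumulator.
   Needs a Power ISA v3.1 target, e.g.: gcc -O2 -mcpu=power10 mma_example.c */
#include <altivec.h>
#include <stdio.h>

int main(void)
{
    vector float a = {1.0f, 2.0f, 3.0f, 4.0f};
    vector float b = {5.0f, 6.0f, 7.0f, 8.0f};
    __vector_quad acc;                   /* 512-bit accumulator (4 x 128 bits) */
    vector float rows[4];

    __builtin_mma_xxsetaccz(&acc);       /* zero the accumulator */
    /* acc += outer product of the four floats in a and the four floats in b */
    __builtin_mma_xvf32gerpp(&acc, (vector unsigned char)a, (vector unsigned char)b);
    __builtin_mma_disassemble_acc(rows, &acc);   /* copy the accumulator out */

    for (int i = 0; i < 4; i++)
        printf("%6.1f %6.1f %6.1f %6.1f\n",
               rows[i][0], rows[i][1], rows[i][2], rows[i][3]);
    return 0;
}
</pre>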
  
 
== External Links ==
 
== Github / Gitlab pages ==
 
=== Translation from other architectures ===

Implementations of non-POWER instruction sets for POWER (a short usage sketch follows the list).

* [https://github.com/gcc-mirror/gcc/blob/master/gcc/config/rs6000 GCC]. x86 to VSX.
* [https://github.com/llvm/llvm-project/tree/main/clang/lib/Headers/ppc_wrappers Clang]. x86 to VSX.
* [https://github.com/simd-everywhere/simde SIMD Everywhere]. x86/ARM/WASM to VSX (and other arches).
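
As an illustration of the first two entries (a sketch assuming a ppc64le GNU toolchain; the <code>NO_WARN_X86_INTRINSICS</code> define is what GCC's compatibility headers ask for, to acknowledge that the mapping is an emulation layer), unmodified SSE-style code can be compiled so that the x86 intrinsics are implemented with VSX:

<pre>
/* Unmodified SSE-style code built on POWER via the GCC/Clang compatibility
   headers (or SIMD Everywhere).  Sketch; on ppc64le try, e.g.:
   gcc -O2 -mvsx -DNO_WARN_X86_INTRINSICS sse_on_power.c */
#include <emmintrin.h>
#include <stdio.h>

int main(void)
{
    __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
    __m128 b = _mm_set1_ps(0.5f);
    __m128 c = _mm_mul_ps(a, b);     /* executed with VSX instructions on POWER */

    float out[4];
    _mm_storeu_ps(out, c);
    printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
    return 0;
}
</pre>
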
=== Other ===
  
 
* Eigen. [https://gitlab.com/libeigen/eigen A C++ template library for linear algebra: matrices, vectors, numerical solvers and related algorithms]
 
* Simd Library. [https://github.com/ermig1979/Simd C++ image processing and machine learning library with using of SIMD]
 
* EVE - the Expressive Vector Engine. [https://github.com/jfalcou/eve SIMD in C++]
 
* UniSIMD Assembler. [https://github.com/VectorChief/UniSIMD-assembler SIMD macro assembler unified for ARM, MIPS, PPC and x86]
 