Intel XeSS Gaming Neural Network

Intel takes on AMD and NVIDIA in gaming neural networks!

Karthik Vaidyanathan, Intel’s lead engineer for XeSS technology, has revealed a number of interesting details about this new technology for Intel’s gaming GPUs, which aims to rival NVIDIA DLSS 2.0 and AMD FidelityFX Super Resolution (FSR). He explained how it compares to NVIDIA DLSS, what support it will have for existing GPUs, and why Intel jumped into developing this neural network technology for gaming in the first place.

In recent months, the major graphics card manufacturers have placed particular emphasis on super sampling technologies that use AI to increase gaming performance. Alongside its first dedicated gaming GPUs, Intel has developed its own technology, called XeSS (Xe Super Sampling), that does much the same, and has now revealed more specifics.

Intel XeSS requires no individual training per game!

One of the most intriguing things the Intel engineer revealed during the interview was that XeSS will not require per-game training of its neural networks; instead, it is a standalone library compatible with multiple titles at once, much like NVIDIA DLSS 2.0, whose libraries can be swapped between games.

Furthermore, according to Vaidyanathan’s statements, the XeSS neural network technology will be open source. Interestingly, he “plays dumb” with regard to NVIDIA DLSS, claiming that he has no idea how it works because NVIDIA’s technology is not open source.

However, he affirmed that the goal of XeSS from the start has been to be something generic that works in any game, without the “fragility” of other super sampling techniques that require per-game training to work well.

Intel XeSS will function if your GPU supports Shader Model 6.4

Intel’s engineer also explained why they believe XeSS will be more widely adopted than NVIDIA DLSS! Intel’s technology will be available in two variants: XMX-accelerated, which will be exclusive to Intel’s gaming GPUs, and DP4a, a fallback mode that runs on any GPU supporting Microsoft Shader Model 6.4 and is thus compatible with NVIDIA Pascal and Turing as well as AMD RDNA 1 and 2 graphics cards. DP4a mode has slightly higher rendering latency than XMX mode, but it is still significantly faster than rendering the image at native 4K resolution.
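For readers unfamiliar with the fallback path: DP4a is a GPU instruction that computes a dot product of four packed 8-bit integers and adds the result to a 32-bit accumulator in a single operation, which is why it is useful for running quantized neural-network inference on GPUs without dedicated matrix hardware. The sketch below is a minimal illustration of what that instruction computes; the function name and example values are illustrative, not part of Intel’s API.

```python
def dp4a(a: list[int], b: list[int], acc: int) -> int:
    """Emulate a DP4a-style operation: a 4-element signed int8 dot
    product accumulated into a 32-bit integer (illustrative sketch)."""
    assert len(a) == 4 and len(b) == 4
    for v in a + b:
        assert -128 <= v <= 127, "operands are signed 8-bit values"
    return acc + sum(x * y for x, y in zip(a, b))

# One inner-product step of a hypothetical quantized network layer:
weights = [3, -1, 2, 5]       # int8 weights
activations = [10, 4, -2, 1]  # int8 activations
result = dp4a(weights, activations, 0)
print(result)  # 3*10 + (-1)*4 + 2*(-2) + 5*1 = 27
```

A GPU exposes this through shader intrinsics rather than a Python function, but the arithmetic is the same: many such 4-wide int8 dot products replace the tensor-core matrix math used in the XMX path, trading some latency for much broader hardware compatibility.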

At the moment, AMD has not publicly said whether it intends to support XeSS on its graphics cards, and Intel has not yet provided a list of supported GPUs, which is understandable since the technology has not been released. Curiously, though, Intel did not wait for AMD to make its FSR technology public before expressing interest in it.

On the other hand, it is worth noting (at least from a developer standpoint) that XeSS will have a single API for both versions, so nothing needs to change in a game’s code to support both the XMX and DP4a variants.

For the time being, there is no support for Tensor Cores or FP16/32

Intel’s engineer also revealed that, for the time being, XeSS will not support NVIDIA’s specialized Tensor cores. Many users will also be interested to learn that, unlike AMD’s FSR technology, Intel has no plans to offer FP16 or FP32 operations for now.

Another interesting fact is that XeSS will offer multiple quality modes, just like FSR and DLSS. This gives users greater flexibility: if the game developer implements it, the graphics settings will let us select a quality level and choose the best balance between performance and image quality.

Unstoppable progress: XeSS 2.0 and 3.0 are already in the works!

During the interview, Karthik also stated that versions 2.0 and 3.0 of the technology are already in the works, indicating that Intel is taking this seriously and intends for the technology to evolve over time.

Once the technology matures, it will be made open source. An open-source approach to an AI-based neural network technology may increase XeSS’s appeal and move the industry toward a ubiquitous end-to-end solution, but it may also signal the start of greater market fragmentation (remember, we now have DLSS, FSR, and XeSS).

Finally, Intel has stated that XeSS is already “trained” to use up to 64 samples per pixel, four times as many as NVIDIA DLSS.

This concludes the information we have for the time being, but rest assured that Intel will reveal more official details about this promising technology in the coming weeks and months.
