Still only rumours, but: "Greymon55 continued, saying that production of Zen 4 processors with 3D-stacked cache will not begin until production of Zen 3D chips has stopped so that the line can be changed over to the new chips. What this means in practice, confirmed by a follow-up reply, is that AMD's first wave of Zen 4-based processors likely won't include any chips with 3D V-cache. Instead, those parts will probably come next year." Source: https://hothardware.com/news/zen-4-cpus-will-also-get-3d-v-cache-next-year
https://www.google.com/amp/s/www.di...en-gpus-may-deliver-130-performance-jump/?amp This is the reference on GPU architecture I was using, dated from a few days ago. Time will tell.
It really sounds too good to be true, but the performance gains are real. If I had not owned and tested the 5800X before upgrading, I also would have a hard time believing that it would make such a difference.
That sounds really interesting. I'm thinking of switching from my 5600X. I use the HP G2 and even there I hit the CPU limit; my GPU load doesn't get above 75%. Did you notice anything about the GPU load? You can see it with the FPSVR tool.
Oculus Rift has its own performance monitor HUD, and I've used that to back up my claims, so FPSVR is not needed. I did not watch the GPU load in a separate app (I could use GPU-Z for that), but I'm pretty sure the GPU was fully utilized. I'll check the GPU load later today and let you know.
The CV1 is a completely different story to the G2; they're incomparable. The CV1 is low resolution with better software, including the OpenComposite option, while the G2 is high resolution with crappy WMR software. If you plan on doing ranked, you'd better test with time progression on, because that's on by default on those servers.
I was not in the mood for recording this in VR, so I did it in pancake mode, but at 4K60 with VSync off to roughly simulate the VR load. It's not the same thing, I know, but at least it's something. The GPU used here is an Nvidia 1080 Ti; the CPU is the Ryzen 5800X3D. The RivaTuner OSD shows CPU and GPU usage, and the framerate, obviously. The video hasn't finished processing yet, so you'll have to wait a little. Raceroom CPU stress test: Ryzen 5800X3D at Nords with 99 AI cars, 52 visible cars, 4K60FPS, Drunk Mode lol
I just tested it too and was at about 55 frames at the start: 99 opponents and 52 visible on the Nordschleife. You were GPU-limited the whole time, though, so the frames would probably be higher otherwise. In any case, that sounds tempting, since I'm CPU-limited even in VR.
Yes, with a better GPU the frames would be higher with the 5800X3D. As you've noticed, I'm GPU-bottlenecked in this specific scenario, while you, on the other hand, are CPU-bottlenecked. So I ran the same race at 1080p to eliminate the GPU bottleneck. Instead of uploading a video (I don't think it's necessary at this time), I took some screenshots to show the limit of the 5800X3D in that exotic 100-car race at Nords. It seems upgrading the GPU won't help that much except in VR: the game is heavily CPU-bottlenecked because it's DX9 and mostly single-threaded, like StarCraft.
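For anyone following along, the reasoning in this thread boils down to a simple rule of thumb: GPU pegged near 100% while FPS is low means GPU-bound; GPU well under 100% while FPS stalls means CPU-bound. A minimal sketch of that heuristic; the function name and the 95% threshold are my own picks, not from FPSVR or RivaTuner:

```python
def likely_bottleneck(gpu_util_pct: float, gpu_bound_threshold: float = 95.0) -> str:
    """Classify a single utilization sample taken while FPS is below target.

    GPU pegged near 100% -> the GPU is the limit (a faster card would help).
    GPU well below 100%  -> the CPU can't feed it (faster card won't help much).
    """
    return "GPU-bound" if gpu_util_pct >= gpu_bound_threshold else "CPU-bound"

# 4K run: GPU pegged, so a faster GPU would raise the framerate
print(likely_bottleneck(99.0))  # GPU-bound
# 1080p run: GPU loafing while FPS stays flat, so the CPU is the wall
print(likely_bottleneck(70.0))  # CPU-bound
```

Dropping the resolution, as done above, is the standard way to force the second case: it slashes GPU work while leaving the per-frame CPU work (AI cars, physics) unchanged.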
I think this is enough to show the limit of the 5800X3D, with the video and the screenshots. I hope it helps people decide whether or not to upgrade their CPU to this one. For me, it was well worth it.
Grats on the new record! I thought the 12th gen would do very well, but I'm not sure why you're using DDR4 when it supports DDR5, or why the E cores are disabled.
DDR5 is not by definition faster than DDR4. I already have very fast DDR4, so I used that. Windows 10 doesn't handle the E cores very well, which causes lag spikes in VR. Maybe Windows 11 does a better job, but I haven't tested that yet.
Good to know. I do believe DDR5 is much quicker, but there is only one expensive way to find out! As for W11, the annual update will be out in a couple of months, so it's probably best to wait until then to switch, as a lot of the UI stuff they dropped from W10 will hopefully be coming back.
That hasn't been the case so far: module latencies have increased in line with the frequencies, so there is no significant performance difference between the two. But manufacturers keep coming up with better latency-frequency ratios, so we'll see.
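To put numbers on that latency-frequency trade-off: first-word latency in nanoseconds works out to CAS latency × 2000 ÷ data rate (MT/s), since the I/O clock runs at half the data rate. A quick sketch; the kits listed are just illustrative examples, not anyone's actual setup from this thread:

```python
def cas_latency_ns(cas: int, data_rate_mts: int) -> float:
    # First-word latency: CAS cycles / clock (MHz), where clock = data rate / 2,
    # which simplifies to 2000 * CAS / data rate.
    return 2000 * cas / data_rate_mts

kits = {
    "DDR4-3600 CL14": cas_latency_ns(14, 3600),  # ~7.78 ns
    "DDR4-4000 CL15": cas_latency_ns(15, 4000),  # 7.50 ns
    "DDR5-5200 CL40": cas_latency_ns(40, 5200),  # ~15.38 ns
}
for kit, ns in kits.items():
    print(f"{kit}: {ns:.2f} ns")
```

On these example numbers, a tight DDR4 kit still has roughly half the CAS latency of early DDR5, even though the DDR5 kit wins on raw bandwidth.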
Specs:
- GPU: Nvidia GeForce RTX 3070 Founders Edition
- CPU: Intel Core i5-12600K
- RAM: Kingston Fury 2x8GB 5200MHz DDR5
- Motherboard: MSI Pro Z690-A
When overclocked with Intel XTU's Speed Optimizer from 4.5GHz all-core to 4.7GHz, it managed to pull off this:
My latest build: 12900K running stock with the OCTVB +2 profile, both P and E cores enabled, on an Asus Strix Z690-A Gaming WiFi D4 (DDR4) motherboard. Memory is 4x8GB G.Skill Trident Z 4000 CL15-16-16-36, but it will only run in Gear 2 at those speeds/timings; at 3600 it runs stable in Gear 1 with tightened timings.
CPU: Intel i9-12900K
CPU OC: stock with OCTVB +2 profile enabled
Memory: 32GB DDR4-3600
Memory timings: CL14-14-14-34
GPU: Nvidia GeForce RTX 3080 Ti
OS: Windows 11
Have you done anything else besides changing the CPU? I'm on a Ryzen 5 3600 / RX 6600 @ 4K but can't even come close to this kind of performance. DXVK helps a lot, but shadows are bugged with it. Edit: Answering my own question. No, the 5800X3D is simply that superior.
Silly question... I guess anything over 60FPS is good, but does anything actually run at 640x360? (Presumably it's just being used as a standard for benchmarking purposes?)