Testing video card performance in the game Deus Ex: Mankind Divided
The recently released Deus Ex: Mankind Divided has become a major release for fans of role-playing games and cyberpunk. The sequel offers more gameplay options than Human Revolution and is much prettier visually; it is arguably the most beautiful RPG since The Witcher 3: Wild Hunt. Accordingly, the game's system requirements are very serious, so the question of choosing the optimal video card is acute, and we will try to answer it in this article.
Deus Ex: Mankind Divided uses the new Dawn Engine, a heavily modified Glacier 2 (the engine behind the Hitman series). Eidos Montreal developed it with an eye to future use in new Deus Ex games, so the Dawn Engine will continue to evolve. It is worth noting that Mankind Divided was originally expected to be one of the pioneers of DirectX 12, but the final version of the game so far works only under DirectX 11; support for the new API is promised as a separate update in early September.
The game looks very attractive thanks to high detail and excellent lighting work. Buildings and alleys in urban areas are carefully designed, industrial locations are full of complex structures, and interiors are always filled with a variety of objects and details.
All this is complemented by ubiquitous reflections on surrounding surfaces and lively volumetric lighting from numerous lanterns, so the city looks even more attractive at night. In one of their presentations, the developers said that the largest location contains about a million polygons.
Parallax mapping gives volume to the ubiquitous paving stones on the streets of Prague.
Tessellation is used, though it is mostly noticeable in the smoothing of sharp edges on character models. Subsurface light scattering helps achieve more natural, "soft" skin.
The game uses complex ambient shading (SSAO) to create more natural shadows, taking into account how objects are lit and how they influence each other. This enhances the feeling of volumetric lighting.
Speaking of shadows, it is worth noting a problem with distant shadows disappearing in the city in daylight.
The Depth of Field effect is actively used to change the focal depth in action scenes with finishing moves, in dialogues, and at other moments.
Motion blur is also actively used among the post-processing effects. A "camera" feel comes from chromatic aberration, which can be turned off if you want a sharper image. The game uses temporal anti-aliasing (TAA) by default. One of its side effects is reduced clarity, but this is compensated by a special sharpening filter. When we tried to turn TAA off, we got image flickering, so it is better to leave it on; likewise, the "sharpness" parameter should stay enabled in every mode.
The game has a built-in benchmark, but its results are rather low: it creates a heavier load than regular gameplay due to more intensive use of various graphic effects. Compare the bottom screenshot from the benchmark with the first image: the original does not have such dense pillars of light from the lanterns or such heavy background blur.
At the same time, the benchmark is not far from the real performance situation in the game. For example, during the escape through the greenhouses, at a certain position relative to the sun's rays, performance drops to about the level seen in the benchmark.
Therefore, the standard benchmark was used for our comparative testing. For a more objective assessment, the final performance graphs show all monitoring data: minimum, average, and maximum fps.
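The min/avg/max breakdown above is straightforward to reproduce from monitoring data. A minimal sketch (a hypothetical helper, not the tooling actually used in the article), assuming per-second fps samples from a monitoring utility:

```python
def fps_stats(samples):
    """Return (min, avg, max) fps from a list of per-second fps readings."""
    if not samples:
        raise ValueError("no samples recorded")
    return min(samples), sum(samples) / len(samples), max(samples)

# Invented example monitoring data for one benchmark run
run = [48, 52, 41, 39, 55, 60, 44]
lo, avg, hi = fps_stats(run)
print(lo, round(avg, 1), hi)  # → 39 48.4 60
```

The same three numbers are what the article's performance graphs plot for each card.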
The list of tested video cards is as follows:
- Radeon R9 290 4GB;
- Radeon R9 270X 2GB;
- Radeon R9 270 2GB;
- GeForce GTX 980 Ti 6GB;
- GeForce GTX 960 2GB;
- GeForce GTX 950 2GB.
All video cards were tested at stock and overclocked frequencies. The only exception is the GeForce GTX 1080: there are no alternatives to the new NVIDIA flagship, which is the fastest by default.
The test bench configuration is as follows:
- Processor: Intel Core i7-6950X (3.0 @ 4.1 GHz);
- cooler: Noctua NH-D15 (two NF-A15 PWM fans, 140 mm, 1300 rpm);
- motherboard: MSI X99S MPOWER (Intel X99);
- memory: G.Skill F4-3200C14Q-32GTZ (4x8 GB, DDR4-3200, CL14-14-14-35);
- system disk: Intel SSD 520 Series 240GB (240 GB, SATA 6Gb/s);
- secondary drive: Hitachi HDS721010CLA332 (1 TB, SATA 3Gb/s, 7200 rpm);
- power supply: Seasonic SS-750KM (750 W);
- monitor: ASUS PB278Q (2560x1440, 27″);
- operating system: Windows 10 Pro x64;
- GeForce driver: NVIDIA GeForce 372.54;
- Radeon driver: AMD Crimson 16.8.2.
The main testing was carried out at 1920x1080 with the maximum Ultra graphics quality preset; no additional changes were made to the settings.
The results are shown below. The weaker participants were excluded from this comparison due to extremely low performance in this mode; they were tested separately with lighter settings.
The GeForce GTX 1080 delivers expectedly good results: the average frame rate in this heavy benchmark exceeds 62 fps, and the maximum of 79.5 fps is quite consistent with the general performance level in most gaming scenes. The GeForce GTX 980 Ti is 28% weaker, and that is with our sample running about 30 MHz faster than usual. When overclocked, the GeForce GTX 980 Ti trails the leader by only about 8%.
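Figures like "28% weaker" here are simple relative differences in average fps. A small illustrative sketch (the fps values below are rounded examples consistent with the text, not exact measurements):

```python
def percent_slower(card_fps, leader_fps):
    """How far a card trails the leader, as a percentage of the leader's fps."""
    return (leader_fps - card_fps) / leader_fps * 100

# Leader averages ~62 fps; a card averaging ~44.6 fps is ~28% weaker
print(round(percent_slower(44.6, 62.0)))  # → 28
```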
The Radeon RX 480 shows unexpectedly high performance, confidently overtaking its direct competitor, the GeForce GTX 1060. The latter is comparable to the Radeon RX 470, but note that the average and minimum fps are higher on the NVIDIA card, and only the maximum is better on the Radeon RX 470; that is, the GeForce handles the heaviest scenes better. In addition, when testing these competitors under normal conditions, the Radeon RX 480 showed slight stutters when walking around virtual Prague, while the GeForce GTX 1060 produced a completely smooth picture, albeit at lower fps.
Overall, the Radeon RX 470, Radeon RX 480 and GeForce GTX 1060 are the optimal graphics cards for gaming at 1920x1080. Real performance in Mankind Divided will be close to the average/maximum fps seen in the benchmark, i.e. with these video cards you can count on 40-50 fps. Among older video cards, the Radeon R9 290 posts similar results; the GeForce GTX 970, absent from this test, should also fall into this category.
Simpler video adapters already run into problems. The GeForce GTX 960 and Radeon R9 270X simply cannot handle the Ultra quality mode; the issue is the small amount of video memory. The benchmark uses almost 4 GB of VRAM, while all of our budget-class cards have 2 GB. The game's appetite is even higher: in Prague, memory usage can reach 7 GB and more!
We conducted additional testing of the weaker participants using the standard High quality preset.
The results are below.
On average, most participants barely reach 30 fps or fall below it, though such figures are only likely in some heavy scenes. The maximum fps is close to the average in-game performance level, and here the GeForce GTX 960 and Radeon R9 270X already deliver about 40 fps. The latter is better precisely in this (maximum) result, but the GeForce GTX 960's advantage in minimum fps in this super-heavy test suggests that the NVIDIA card is certainly no worse and in some conditions can be better. So these cards will provide roughly equal results in the game, but after overclocking the GeForce GTX 960 is preferable. Its younger sibling, the GeForce GTX 950, looks weaker than the Radeon R9 270, although it demonstrates a better minimum fps; with overclocking, the GeForce GTX 950 just reaches the level of the GeForce GTX 960. The new Radeon RX 460 shows an average result on par with the Radeon R9 270 with a better minimum fps: very decent performance for this budget video adapter.
Let's return to the question of anti-aliasing. As noted above, the game offers TAA by default, so using the heavier MSAA is not critical. But owners of top graphics solutions can use MSAA to additionally smooth object edges for an even higher-quality image. Let's check how costly MSAA modes are for the GeForce GTX 980 Ti and GeForce GTX 1080 by comparing performance in the normal Ultra mode at 1920x1080 with performance after enabling MSAA 2x and 4x. Additionally, we will test these video cards at the higher resolution of 2560x1440 without MSAA. All results are shown below in a single graph.
Even MSAA 2x seriously drops performance, by about 30%, though for the top NVIDIA graphics cards this is not a problem. The expediency of MSAA 4x, however, is worth thinking about. As for 2560x1440, it is also within the power of the top GeForce cards, and performance in this mode is higher than at 1920x1080 with MSAA 4x.
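A rough way to see why 2560x1440 can be cheaper than 1920x1080 with MSAA 4x: the higher resolution shades about 1.78x the pixels, while MSAA 4x takes four coverage samples per pixel (its real cost is well below a 4x multiplier, as the ~30% hit from 2x shows, but it is still substantial). A quick illustrative calculation:

```python
def pixels(width, height):
    """Total pixel count for a given resolution."""
    return width * height

# How many times more pixels 2560x1440 shades compared to 1920x1080
ratio = pixels(2560, 1440) / pixels(1920, 1080)
print(round(ratio, 2))  # → 1.78
```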
Conclusions
According to the test results, we can state that the new Radeon RX 480, Radeon RX 470 and GeForce GTX 1060 are the best choice for 1920x1080 with maximum graphics quality in Deus Ex: Mankind Divided. These graphics accelerators will provide comfortable fps in this mode, and the game will also make good use of their large video memory. Previous-generation solutions based on the GeForce GTX 970/980 and Radeon R9 290/290X should perform at approximately the same level. If you use a higher-resolution monitor, you will need a video card no weaker than the GeForce GTX 980 Ti, or better yet a GeForce GTX 1080 right away. Perfectionists who want to use MSAA should also be prepared for the fact that only top models can handle such modes.
With more affordable, lower-midrange graphics cards, you will have to settle for reduced quality. The GeForce GTX 960 and Radeon R9 270X with 2 GB of VRAM produce fairly high fps with the High settings profile; with the 4 GB version of the GeForce GTX 960, you can count on higher graphics settings. The new Radeon RX 460, which confidently overtakes the GeForce GTX 950, posts nice results, so this AMD video adapter can be the best choice in its price category for this game.
The original material can be found at the source; I give only a few graphs and some data from it. The tests below were done on a PC with an Intel Core i7-5960X Extreme Edition @ 4.4 GHz on an X99 motherboard, definitely not the most common configuration. The system ran Windows 10; the driver version for GeForce was 372.54 WHQL, for Radeon -. Such a powerful PC was used for a simple reason: so that the processor would not become an obstacle to unlocking the full potential of the tested video cards.
The game proved pretty tough on Maxwell-based graphics cards (and weaker ones), allowing only one Pascal-based solution (the $1,200 one) to break the 100 FPS limit. However!
And the following graph shows the game's appetite for video memory with increasing resolution at various graphics settings:
Yes, soon 4 GB of video memory (I'm talking about my old GTX 980) will not be particularly comfortable in modern games...
Here's a post from a modder using the nickname Marty McFly Modding:
The bottom line is that this modder's code was found inside Nvidia's much-promoted Ansel technology, which is not good. I wonder how Mr. Huang's company will respond to this statement? Given the date of the post and the lack of a response so far, this looks quite serious.
Update: Nvidia and Marty McFly Modding have agreed that the remnants of MasterEffect ReShade.fx will be removed from Ansel, and the modder himself will receive a credit line in the "creators" section of future versions of Ansel.
Introduction
In this review we present a summary test of video cards and processors in the game Deus Ex: Mankind Divided. You can read a review of the game by clicking on this link.
System requirements
Minimum system requirements:
- Operating system: Windows 7 (SP1), Windows 8/8.1 and Windows 10 (64-bit systems only).
- Processor: Intel Core i3-2100 @ 3100 MHz or AMD FX-6350 BE @ 3500 MHz.
- RAM: 8 GB.
- Video Card: Nvidia GeForce GTX 660 2048 MB or ATI Radeon HD 7870 2048 MB.
Recommended system requirements:
- Operating system: Windows 10 (64-bit systems only).
- Processor: Intel Core i7-3770K @ 3500 MHz or AMD FX-8350 BE @ 4000 MHz.
- RAM: 16 GB.
- Free HDD space: 50 GB.
- Video Card: Nvidia GeForce GTX 970 4096 MB or ATI Radeon R9 390 8192 MB.
Summary testing of video cards
Test configuration
The tests were carried out on the following stand:
- CPU: Intel Core i7-6700K (Skylake, L3 8 MB), 4000 @ 4600 MHz;
- Motherboard: Gigabyte GA-Z170X-Gaming 3, LGA 1151;
- CPU cooling system: Corsair Hydro Series H105 (~1300 rpm);
- RAM: 2 x 8GB DDR4 Corsair Vengeance LPX (3000 MHz / 14-16-16-31-1T / 1.2V), X.M.P. enabled;
- Disk Subsystem #1: 64GB SSD ADATA SX900;
- Disk Subsystem #2: 1TB HDD Western Digital Caviar Green (WD10EZRX);
- Power Supply: Corsair HX850 850 watts (stock fan: 140mm blower);
- Frame: open test stand;
- Monitor: 27" ASUS PB278Q BK (Wide LCD, 2560x1440 / 60Hz);
- Television: 40" 40UF670V (Wide LCD, 3840x2160 / 60Hz).
Video cards:
- Radeon R9 Fury X 4096 MB - 1050/500 @ 1150/500 MHz (Sapphire);
- Radeon R9 Fury 4096 MB - 1000/500 @ 1100/500 MHz (Sapphire);
- Radeon R9 390X 8192 MB - 1050/6000 @ 1160/6500 MHz (Sapphire);
- Radeon R9 390 8192 MB - 1000/6000 @ 1140/6500 MHz (ASUS);
- Radeon R9 380X 4096 MB - 970/5700 @ 1150/6500 MHz (Gigabyte);
- Radeon R9 380 2048 MB - 970/5500 @ 1100/6500 MHz (Sapphire);
- Radeon R7 370 2048 MB - 975/5600 @ 1180/6800 MHz (PowerColor);
- Radeon R7 360 2048 MB - 1050/6500 @ 1200/6800 MHz (Gigabyte);
- GeForce GTX 1080 8192 MB - 1734/10000 @ 2000/11500 MHz (Gigabyte);
- GeForce GTX 1070 8192 MB - 1683/8008 @ 1964/9500 MHz (MSI);
- GeForce GTX 980 Ti 6144 MB - 1076/7012 @ 1420/8100 MHz (Zotac);
- GeForce GTX 980 4096 MB - 1216/7012 @ 1440/8000 MHz (Palit);
- GeForce GTX 970 4096 MB - 1178/7012 @ 1430/8000 MHz (Zotac);
- GeForce GTX 960 2048 MB - 1178/7012 @ 1450/8000 MHz (Gigabyte);
- GeForce GTX 950 2048 MB - 1188/6600 @ 1480/8000 MHz (Palit);
- GeForce GTX 750 Ti 2048 MB - 1085/5400 @ 1220/6600 MHz (Gigabyte).
Software:
- Operating system: Windows 7 x64 SP1;
- Video card drivers: Nvidia GeForce 372.54 WHQL and AMD Radeon Software Crimson 16.8.2.
- Utilities: Fraps 3.5.99 Build 15618, AutoHotkey v1.0.48.05 and MSI Afterburner 4.3.0 Beta 4.
Testing tools and methodology
For a more visual comparison of video cards and processors, the game used as a test application was launched at 1920 x 1080, 2560 x 1440 and 3840 x 2160 resolutions.
The Fraps 3.5.99 Build 15618 and AutoHotkey v1.0.48.05 utilities were used as performance measurement tools. Minimum and average FPS values were recorded in the game. VSync was disabled during testing.
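Fraps can also dump per-frame timestamps, from which minimum and average fps are derived. A minimal post-processing sketch, assuming the layout of Fraps' "frametimes" CSV (a header line, then frame index and cumulative time in milliseconds); the sample data below is invented:

```python
import csv
import io

def fraps_min_avg_fps(csv_text):
    """Compute (min_fps, avg_fps) from a Fraps-style frametimes log."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    times = [float(t) for _, t in rows[1:]]             # cumulative ms per frame
    deltas = [b - a for a, b in zip(times, times[1:])]  # per-frame duration, ms
    avg_fps = 1000.0 * len(deltas) / (times[-1] - times[0])
    min_fps = 1000.0 / max(deltas)                      # slowest frame
    return min_fps, avg_fps

log = "Frame, Time (ms)\n1, 0.0\n2, 16.7\n3, 33.4\n4, 66.8\n5, 83.5\n"
mn, avg = fraps_min_avg_fps(log)
print(round(mn, 1), round(avg, 1))
```

Note that a single slow frame drags the "minimum fps" down sharply, which is why the article reports minimum and average separately.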
Test run video:
Monitoring the use of RAM and video memory
Components were tested with the following graphics settings:
- Version 1.1.
- DirectX 11.
- Temporal anti-aliasing (FXAA) - enabled.
- Anti-aliasing (MSAA) - off.
- Texture filtering - anisotropic, 16x.
- Texture quality - high.
- Shadow quality - high.
- Complex shading (SSAO) - enabled.
- Realistic soft shadows - enabled.
- Parallax mapping - high.
- Depth of field - enabled.
- Level of detail - high.
- Volumetric lighting - enabled.
- Motion blur - enabled.
- Sharpness - enabled.
- Bloom - enabled.
- Cloth physics - enabled.
- Screen space reflections - enabled.
- Subsurface scattering - enabled.
- Chromatic aberration - enabled.
- Tessellation - enabled.
Before proceeding to the tests of video cards and processors, we will monitor the use of RAM and video memory in this game.
Video memory and RAM usage
Video memory (standard settings)
RAM
Test Results: Performance Comparison
Now let's go directly to the tests of graphics accelerators.
Summary charts of test results for single video cards
1920x1080
Stock frequencies
Overclocking
2560x1440
Stock frequencies
Overclocking
3840x2160
Stock frequencies
Overclocking
Square Enix, the well-known publisher of many popular games, has organized a special Publisher Weekend Sale on Steam, with discounts from 20% to 90% off the original price. For example, NieR: Automata is available at 50% off, Rise of the Tomb Raider at 67%, Just Cause 3 at 75%, and Deus Ex: Mankind Divided at 80%.
To make it easier to find the discounted games, a special micro-site has been created, available at this link. There you can view all the Square Enix offers, the discount level and the final price, and go to the purchase page on Steam. But you should hurry: discounts on many projects only last until March 5th.
Comparison of video cards Radeon RX Vega 64 vs GeForce GTX 1080 in Quad HD and 4K Ultra HD
In one of our previous articles we compared these video cards, and now we decided to compare the same models in Quad HD and 4K Ultra HD on an Intel processor.
By tradition, let's introduce the contestants right away: both are ROG Strix series video cards with the same 3-fan cooling system. At first glance, they can even be confused with each other.
Let's start with the newcomer, one of the first custom partner adapters based on the Vega 10 XT GPU. In addition to a massive cooling system, which really turned out to be much quieter and more efficient than the reference cooler, the new product boasts the Super Alloy Power II component base, support for the proprietary Aura Sync and FanConnect II technologies, and a number of other advantages.
The base GPU frequency of this video card is 1298 MHz, and the dynamic frequency can reach 1590 MHz; the effective video memory speed is 1890 MHz. According to ASUS internal tests, this yields a 4-5% performance improvement in some 4K games compared to the reference sample.
The opponent is a model with video memory overclocked to 11 GHz. Note that GeForce GTX 1080 versions with accelerated memory were released by NVIDIA back in April specifically to counter the AMD Vega line. This ASUS version additionally gained a 10-phase power subsystem and a factory-overclocked GPU running at up to 1835 MHz in "Gaming" mode, with an "OC" profile offering even higher parameters. As for the cooling system, LED backlighting and other goodies, the two video cards are similar.
Anticipating the objection that "the reference versions should have been compared", we will answer right away. Firstly, we do not have a warehouse of video cards to choose from, so we test what we can get from our partners. Secondly, modern video cards actively use dynamic GPU overclocking: the more efficient the cooling system, the higher the clock speed can rise, and vice versa. NVIDIA video cards are especially good at this, so even if you buy a model with nominal parameters, you will still get dynamic overclocking in games, provided the cooler allows it. In modern realities, it is therefore very difficult to pin a card to its nominal frequency.
For testing, we used the main stand based on an overclocked Intel Core i7-6700K processor.
Test stand:
- Intel Core i7-6700K (OC 4.5 GHz)
- 2 x 8GB DDR4-3200 G.SKILL Trident Z
- SSHD Seagate ST2000DX001 2TB
- WD WD1000DHTZ 1TB
- Thermaltake Core P3
- AOC U2879VF
Let's start with Deus Ex: Mankind Divided in DX12 mode with the ultra profile. Note right away that the GPU frequency of the GeForce GTX 1080 dynamically rose above 1900 MHz, while the Radeon RX Vega 64 only slightly exceeded 1500 MHz, below the figure reported by GPU-Z, even though the GPU temperature was normal. This surely influenced the results: 54 vs. 46 FPS average and 43 vs. 35 minimum in favor of the GeForce GTX 1080, an advantage of 16-24%.
After the transition to 4K, the situation changed a bit. In average frame rate the GeForce GTX 1080 remained in the lead with 29 versus 24 fps, equivalent to a 20% bonus. In the minimum, however, the Radeon RX Vega 64 took the lead: 19 versus 16 FPS, or 14%.
For Rise of the Tomb Raider we used DirectX 12 mode and the very high graphics settings profile. During the test, the Vega 64 GPU was consistently loaded to the maximum, which cannot be said of its competitor: the load dipped in places in the Syria scene and at the beginning of the Geothermal Valley. This did not prevent the GeForce GTX 1080 from showing a higher average result, 98 vs. 74 fps, but Vega celebrated victory in the minimum frame rate: 38 vs. 22 FPS.
At 4K resolution, the GeForce GTX 1080 bundle coped worse with the last scene, which led to its defeat in the minimum indicator: 6 vs. 16 FPS, equivalent to a lag of about 60%. But in average frame rate it did not give up the lead: 49 versus 39 frames per second.
Gameplay in PLAYERUNKNOWN'S BATTLEGROUNDS at the ultra preset is not as informative due to poor scene synchronization and repeatability of results in online projects, but the game is hugely popular, so we decided to add it to the testing. The system with the GeForce GTX 1080 on board takes the lead: it averaged 60 FPS with dips to 45, while the opponent averaged 42 with dips to 30. The advantage was 43-50%.
Both bundles struggle in 4K mode. In the minimum indicator a draw was recorded at 18 frames per second, but in the average the GeForce GTX 1080 was still ahead with 28 versus 23 FPS, a 22% bonus.
In Hellblade: Senua's Sacrifice at the very high graphics preset, we chose one unhurried scene to achieve maximum gameplay synchronization and a more accurate comparison. The GeForce GTX 1080 was again ahead by a noticeable margin: 74 versus 52 FPS average and 65 versus 46 minimum; the difference exceeds 40%.
The transition to a higher resolution naturally reduces performance but preserves the general trend of NVIDIA's leadership. On average, the GTX 1080 delivered 39 fps with dips to 35; its opponent managed only 26 FPS with dips to 23. The advantage was 50-52%.
The Far Cry Primal benchmark traditionally favors NVIDIA products, and this shows even with the ultra graphics preset. The system with the Radeon RX Vega 64 averaged 62 fps with a minimum no lower than 52. Not a bad result, unless you compare it with the competing bundle, where those figures reach 81 and 69 FPS, respectively.
In Ultra HD resolution no miracle happened: the Radeon RX Vega 64 trailed by 10 FPS, or 25%, in the minimum and by the same 10 FPS, or 23%, in the average frame rate. The dominance of the GeForce GTX 1080 is becoming increasingly clear.
Comparison of video cards Radeon RX Vega 64 vs GeForce GTX 1080 in Full HD games: Volta can wait
We recently got acquainted with the capabilities of the AMD Radeon RX Vega 64 video card in 25 current games, and now it's time to compare it at the most popular Full HD resolution with one of its main competitors, the NVIDIA GeForce GTX 1080, against which it was positioned from the start.
Let us briefly recall that the test used the reference version of the RX Vega 64 with 4096 stream processors, a dynamic frequency of up to 1630 MHz, 8 GB of HBM2 memory and a blower-style cooling system. Note that the official dynamic frequency for the reference model is 1546 MHz.
It will be opposed by a model well known from our previous videos, with a factory-overclocked GPU and memory and a quiet, efficient 3-fan cooling system. We did not reduce the clock speeds to the reference level, since they provide only about a 3% increase at Full HD resolution. Besides, many versions with factory overclocking are on sale, so such a comparison is not unusual.
And since there was a request to compare top-end video cards on the AMD platform, the test bench is a configuration built around the Ryzen 7 1700X. It is paired with an MSI X370 SLI PLUS motherboard, a be quiet! Silent Loop cooler, a pair of 8GB dual-rank Patriot Viper 4 modules at DDR4-3200, and other familiar components.
We also deliberately declined to overclock the processor, because we wanted to see whether it would limit the performance of such powerful video cards in games and, if so, how that would affect the comfort of gameplay. Besides, overclocking top AMD Ryzen chips is a rather controversial matter: even slight acceleration significantly increases power consumption and heat output, requiring more expensive and reliable components.
Test stand:
- AMD Ryzen 7 1700X
- MSI X370 SLI Plus
- be quiet! Silent Loop 240mm
- 2 x 8GB DDR4-3400 Patriot Viper 4
- Inno3D iChill GeForce GTX 1080 X3
- Kingston SSDNow KC400 (SKC400S37/256G)
- Seagate IronWolf ST2000VN004 2TB
- be quiet! Dark Power Pro 11 850W
- be Quiet! Pure Base 600 Window Orange
- AOC U2879VF
When testing, we tried to use the new DirectX 12 and Vulkan APIs, and the recording was carried out on an external system, that is, without performance loss.
Let's start with Rise of the Tomb Raider at the very high graphics preset. Note that on the system with the GeForce GTX 1080 some processor threads may go unused, though their frequency is not reduced. Nevertheless, both video cards are loaded quite heavily, although not to the maximum. The GeForce GTX 1080 takes the lead: 141 versus 126 FPS average and 55 versus 46 minimum.
Far Cry Primal with the ultra preset demands high single-threaded processor performance, which prevents full use of the video card's resources. This is felt more acutely with the GeForce GTX 1080 in the early stages of the benchmark, but it does not stop the card from taking the lead again, albeit by a minimal margin: 87 versus 84 FPS average and 60 versus 59 minimum.
Before the big September update, Rainbow Six Siege was better optimized for AMD graphics cards. Let's see if anything has changed since then, shall we? The ultra preset spreads the load across CPU threads quite well and loads both video cards heavily. The final result still goes to Vega: 208 versus 202 frames per second average and 75 versus 53 minimum, a difference of 3% and 40%, respectively.
And here a newer representative of the Tom Clancy's series, Ghost Recon Wildlands with the ultra profile, puts the GeForce GTX 1080 bundle back on top. On average, it delivered 61.5 FPS with dips to 53.5; the competitor averaged 52 fps with dips to 42. The performance bonus is 18% and 28%, respectively.
AMD Radeon RX 560 4GB vs GeForce GTX 1050 and Radeon RX 460 4GB: the battle for the budget gaming segment!
We bring to your attention a comparison of the new AMD Radeon RX 560 graphics card with its close competitors: AMD Radeon RX 460 and NVIDIA GeForce GTX 1050 in Full HD resolution.
First, let's introduce our heroes. The honor of the new generation will be defended by a model with 4 GB of GDDR5 memory, which has factory overclocking and a dual-fan cooling system.
Its opponent in the internal model range is the 4-gigabyte ROG STRIX RX 460, which uses a similar cooler design, but loses to the new product both in terms of the number of structural blocks in the GPU and in the frequency formula.
The external competitor is an adapter with a simplified cooler design, in which there are no heat pipes. The frequency formula of its GPU and 2 GB of GDDR5 memory is at the reference level.
In order not to limit the capabilities of video cards, we used our top stand based on the Intel Core i7-6700K. And to bypass the HDCP protection of digital content and avoid performance losses when recording gameplay, we assembled a cascade of video capture devices from AverMedia.
Test stand:
- Intel Core i7-6700K (OC 4.5 GHz)
- Thermaltake Water 3.0 Riing RGB 240
- SSHD Seagate ST2000DX001 2TB
- ASUS VH228H
- AVerMedia Live Gamer HD
Let's start with Deus Ex: Mankind Divided at the high graphics preset, which loads all three configurations well. Nevertheless, the GeForce GTX 1050 feels very confident: it took the lead in the minimum indicator with 21 FPS, followed by the Radeon RX 460 with 12 fps and the Radeon RX 560 with 8. In the average, however, the new AMD card leads with 35 FPS; the GeForce GTX 1050 is 5% behind and the Radeon RX 460 12% behind.
The high profile in Rise of the Tomb Raider requires about 3 GB of video memory in places, so the GeForce GTX 1050 has to fall back on RAM. As a result, the Radeon RX 560 took a clear first place with an average of 49 FPS and dips to 31. The NVIDIA representative lagged by almost 4% in the average and 17% in the minimum, and the predecessor by 5% and 26%, respectively.
We already know well that Far Cry Primal from Ubisoft is more favorable to NVIDIA products. So it was no particular surprise that the test, run with the high graphics profile, put the GeForce GTX 1050 in the lead with an average of 49 FPS and dips to 44. The Radeon RX 560 produced 46 and 42 fps, respectively, and the Radeon RX 460 44 and 40 fps. The first trails the leader by an estimated 5-6%, and the second by 9-10%.
For Honor is another Ubisoft project, and another clear win for NVIDIA. With the very high preset, the GeForce GTX 1050 averaged 43 FPS with dips to 30. The new AMD product was 3-8% behind with 39 and 29 fps, respectively, while the gap between the leader and the Radeon RX 460 reaches 14-19%.
NVIDIA GeForce GT 1030 vs NVIDIA GeForce GTX 1050 vs AMD Radeon RX 460 4GB: Performance Matching Price?
In this article we continue comparing the performance of the new NVIDIA GeForce GT 1030 with its close competitors at Full HD resolution.
The main object is the already familiar low-profile model with a passive cooling system, a reference frequency formula in the “Gaming” mode and 2 GB of GDDR5 memory with a 64-bit bus.
Its closest rival in the NVIDIA Pascal line is the GeForce GTX 1050, whose position will be defended by a model from the ASUS Expedition series. It offers more building blocks in the GPU and the same 2 GB of GDDR5 memory, but with a 128-bit bus. The test sample is also compact and has an efficient cooling system.
The external competitor for the GT 1030 is the RX 550, but it hasn't been tested by us yet. Therefore, its place was taken by a slightly more expensive and productive RX 460, which is represented by a model of the ASUS ROG STRIX series. It features a factory overclocked GPU, 4GB GDDR5 memory, and an efficient 2-fan cooling system.
Test stand:
- Aardwolf GH400
- ASUS MAXIMUS VIII RANGER
- SSHD Seagate ST2000DX001 2TB
- HDD WD WD1000DHTZ 1TB
- Seasonic Snow Silent 1050 1050W
- AVerMedia Live Gamer HD
- AVerMedia Live Gamer Portable 2
- ASUS VH228H
The demanding Deus Ex: Mankind Divided benchmark had to be run with the low quality preset. Curiously, on the NVIDIA models it requires more than 1.5 GB of VRAM, while on the Radeon RX 460 it requires about 1 GB, so RAM consumption is higher in the first two cases. As a result, the GeForce GT 1030 averaged 29 FPS with dips to 12. The GeForce GTX 1050's results were 73% and 229% better; the gap to the Radeon RX 460's figures was 63% and 121%, respectively.
The less demanding DiRT Rally can be run with the very high profile. The game requires just over 2 GB of video memory, so only the Radeon RX 460 gets by with its video buffer, while the NVIDIA cards fall back on system RAM. Nevertheless, the GeForce GTX 1050 again leads with an average of 74 FPS and drawdowns down to 63. The AMD card delivered 57 FPS with drawdowns down to 48, and the GeForce GT 1030 managed 40 FPS with drawdowns down to 34.
Even at low detail, Hitman proved a difficult task for the junior Pascal card: the minimum frame rate dipped to 5 FPS, while the average reached 35. The competitors again sped into the lead: 76 FPS with drawdowns down to 51 for the GeForce GTX 1050 and 78 with drawdowns down to 45 for the Radeon RX 460.
Rise of the Tomb Raider puts not only Lara Croft but the whole gaming system in harsh conditions, so we had to limit ourselves to the low graphics preset. Interestingly, at the start of the first two scenes the processor is loaded much harder with the Radeon RX 460, although video memory and RAM consumption are lower. As a result, the GeForce GT 1030 averaged 40 FPS with a minimum of 28. The competitors were 60-70% ahead.
Next in line is the Rainbow Six Siege benchmark with the medium graphics preset. It is better optimized for the Polaris microarchitecture, so the Radeon RX 460 took the lead: the average frame rate was 96.5 FPS and the minimum almost 68. The GeForce GTX 1050 finished just behind with 96 and 66 FPS. But the GeForce GT 1030, with 53 and 37 FPS, lagged behind the leader by more than 80%.
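Throughout these tests the gaps are quoted as a percentage of the slower card's result. A minimal sketch of that arithmetic (the function name is ours; the FPS figures are taken from the Rainbow Six Siege run above):

```python
def lead_pct(faster_fps: float, slower_fps: float) -> float:
    """Percentage by which the faster card outruns the slower one."""
    return (faster_fps / slower_fps - 1.0) * 100.0

# Rainbow Six Siege averages above: Radeon RX 460 (96.5 FPS) vs GeForce GT 1030 (53 FPS)
print(round(lead_pct(96.5, 53.0)))  # prints 82, i.e. "more than 80%"
```

The same formula, applied to minimum rather than average frame rates, yields the second number in each pair quoted in the text.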
Radeon RX 570 4GB vs Radeon RX 580 4GB and 8GB: an incredible comparison of the "new" AMD graphics cards
Surely you are well aware that the AMD Radeon RX 570 and AMD Radeon RX 580 graphics cards come in 4 GB and 8 GB versions, like their predecessors. In the past, we compared the performance of the AMD Radeon RX 470 and AMD Radeon RX 480 with the same amount of video memory to determine the best option. So we decided to repeat a similar comparison with the RX 500 line at Full HD resolution. Perhaps this will soon become relevant for gamers.
The AMD Radeon RX 570 series is represented by a graphics card equipped with a fairly simple single-fan cooling system, the reference frequency formula for the GPU, and 4 GB of GDDR5 memory.
It will be opposed by two representatives of the AMD Radeon RX 580 series. The first is the dual-fan SAPPHIRE NITRO+ Radeon RX 580 with 4 GB of video memory on board. The dynamic frequency of its GPU reaches 1400 MHz, 60 MHz above the reference figure.
You may already be familiar with the 3-fan model from ASUS from previous videos on our YouTube channel. Recall that it supports two factory overclocking modes, "Gaming" and "OC". In the first, the GPU clock reaches 1360 MHz; in the second, 1380 MHz. Testing was carried out in the Gaming mode.
Gameplay was displayed on a monitor based on a high-quality VA panel with nearly 100% sRGB gamut coverage, very good factory calibration, a static contrast ratio of over 2000:1, and a 4 ms response time. It is therefore good for both graphics work and games.
And since AMD video cards use HDCP protection for digital content, we assembled a cascade of AVerMedia video capture devices to record gameplay, so recording was carried out without any FPS loss.
Test stand:
- Intel Core i7-6700K (OC 4.5 GHz)
- Thermaltake Water 3.0 Riing RGB 240
- ASUS MAXIMUS VIII RANGER
- 2x8 GB DDR4-3200 G.SKILL Trident Z
- SSHD Seagate ST2000DX001 2TB
- HDD WD WD1000DHTZ 1TB
- Seasonic Snow Silent 1050 1050W
- Thermaltake Core P3
- AVerMedia Live Gamer HD
- AVerMedia Live Gamer Portable 2
- Philips Brilliance 328P6VJEB
Let's start with benchmarks of current games. Rise of the Tomb Raider at the very high graphics preset requires up to 6 GB of video memory, so the 4 GB cards use up to 10 GB of RAM. As a result, the AMD Radeon RX 570 averaged 66 FPS with drawdowns down to 12. The 4 GB AMD Radeon RX 580 provided 75 FPS, and the 8 GB one 76. In both cases, the minimum speed was 16 FPS.
Rainbow Six Siege was launched with the ultra preset, which requires no more than 3 GB of video memory, so RAM consumption was almost the same for all cards. On average, the 8 GB Radeon RX 580 takes the lead, only 0.5 FPS ahead of the 4 GB version. The gap from the Radeon RX 570 exceeds 14%. In terms of minimum frame rate, the 4 GB Radeon RX 580 leads thanks to its higher factory GPU overclock.
The prehistoric world of Far Cry Primal at ultra settings confirms the general trend: the 4 GB Radeon RX 580, thanks to its higher overclock, keeps pace with the 8 GB card, while the gap from the Radeon RX 570 is 9-10% in both minimum and average frame rate.
With the very high graphics preset in the rally simulator DiRT Rally, all three models did great. As expected, the Radeon RX 580 cards took the lead on average with 109 FPS. The Radeon RX 570 delivered 97 FPS, a 12% difference. The situation is similar with the minimum: 82 FPS for the Radeon RX 570 and 93 for both Radeon RX 580 cards, a gap of more than 13%.
The For Honor benchmark likes to spring surprises, and this time was no exception. On average, the 4 GB Radeon RX 580 took the lead with 83 FPS, although the gap from the 8 GB card is less than half a frame per second. The Radeon RX 570 provided only 71 FPS, or 14% less. By the minimum indicator, however, the average-rate leader came last with 47 FPS: the Radeon RX 570 provided almost 54 FPS, and the 8 GB Radeon RX 580 delivered nearly 63 FPS.
Radeon RX 580 8GB vs GeForce GTX 1070: forced comparison in 11 games at Full HD
Thanks to miners, a very curious situation has developed on the market: high-performance AMD Radeon video cards are very hard to find on sale. And even when you do find them, their cost is well above the recommended price.
For example, the price tag of the 8 GB Radeon RX 580 is now almost at the level of the GeForce GTX 1070, although it was originally conceived as an alternative to the GeForce GTX 1060. Of course, an experienced gamer knows about this nuance and will make the right choice without any problems. But many beginners look only at the current cost, and when they see the slightly cheaper RX 580 and the slightly more expensive GTX 1070, they may make the wrong choice, mistakenly believing these models are equal in performance.
Therefore, we decided to visually compare these graphics cards in a dozen of the latest demanding games at Full HD resolution to give a clear answer to the question: "How much better is the GTX 1070 than the 8 GB RX 580?"
The capabilities of the AMD Radeon RX 580 will be explored with the help of a top-of-the-range card from ASUS. It is equipped with an efficient 3-fan cooler and a slight GPU overclock: its dynamic frequency has been raised from 1340 to 1360 MHz. In the "OC" profile, the speed is increased by another 20 MHz.
Its opponent is a flagship GeForce GTX 1070 model. It is equipped with an excellent dual-fan cooling system and also supports multiple factory overclocking profiles. Testing was conducted in the "Gaming" mode, although you can activate an even more aggressive "OC" profile if you wish.
Test stand:
- Intel Core i7-6700K (OC 4.5 GHz)
- Aardwolf GH400
- ASUS MAXIMUS VIII RANGER
- 2 x 8GB DDR4-3200 G.SKILL Trident Z
- SSHD Seagate ST2000DX001 2TB
- HDD WD WD1000DHTZ 1 TB
- Seasonic Snow Silent 1050 1050W
- Philips Brilliance 328P6VJEB
Real gameplay in Deus Ex: Mankind Divided at the very high graphics preset is comfortable in both cases. But with the AMD Radeon RX 580 you can expect an average of 63 FPS with drawdowns down to 50, while the GeForce GTX 1070 delivers 85 FPS with drawdowns down to 68. The difference exceeds 34% in both cases.
In Rise of the Tomb Raider at the very high preset, the average score of the GeForce GTX 1070 is 48% higher: 117 versus 79 FPS. In the minimum rate, however, the AMD Radeon RX 580 took a 32% lead thanks to the Syria scene; in the other scenes the NVIDIA card's leadership is unchallenged.
The very high graphics preset in Assassin's Creed Syndicate works well on both systems. But if with the AMD Radeon RX 580 you can count on only 43 FPS with drawdowns down to 40, then switching to the GeForce GTX 1070 gives 70 FPS with drawdowns down to 61. The difference is 63% and 53%, respectively.
You can dive into the primitive world of Far Cry Primal at ultra settings at an average of 64 FPS on the AMD card, with a minimum no lower than 49. The NVIDIA card delivers more solid performance: an average of 93 FPS with drawdowns down to 74, which is 45% and 51% more.
The Rainbow Six Siege benchmark treats AMD video cards more favorably, but given the difference in weight categories, here too we see a clear advantage for the NVIDIA card, though not as large as in other tests: 168 versus 137 FPS on average and 106 versus 96 in the minimum, equivalent to a bonus of 23% and 10%, respectively.
DirectX 11 vs DirectX 12: performance comparison on new cards with new drivers
In this article, we want to return to comparing performance in DirectX 11 and DirectX 12 on new video cards from AMD and NVIDIA. Over the past months, game developers and GPU manufacturers have tirelessly improved their products, including optimizing support for the new API. How much better and more promising does DirectX 12 look now at Full HD resolution? Let's check in practice.
By tradition, we will start by introducing the participants. The first to take the test obstacle course will be the Inno3D iChill GeForce GTX 1080 Ti X3 ULTRA video card. It has an excellent 3-fan cooling system and good factory overclocking of both the GPU and the video memory.
The second participant, a card from ASUS, will then show its master class. It also uses an efficient 3-fan cooler and a slightly overclocked GPU. The configuration of the rest of the stand has not changed:
- Intel Core i7-6700K (OC 4.5 GHz)
- Aardwolf GH400
- ASUS MAXIMUS VIII RANGER
- 2 x 8 GB DDR4-3200 G.SKILL Trident Z
- SSHD Seagate ST2000DX001 2TB
- HDD WD WD1000DHTZ 1TB
- Seasonic Snow Silent 1050 1050W
- ASUS VH228H
Deus Ex: Mankind Divided at the very high quality preset shows higher CPU and video card usage in DirectX 11 mode, but more RAM and video memory are required in the new API. In terms of performance, DirectX 12 looks better: 122 versus 120 FPS on average and 91 versus 83 in the minimum.
Hitman at ultra-high settings demonstrates a solid increase in frame rate from switching to DirectX 12 from the very first frames. This time, however, the load on the central and graphics processors is higher in the new API. Pay attention to video memory usage: about 6 GB in DirectX 12 and less than 3.5 GB in DirectX 11. As a result, we get 145 versus 123 FPS in favor of DirectX 12, which is equivalent to 18%.
The very high preset in Rise of the Tomb Raider loads the processor and video card well in both cases, although DirectX 12 mode demands more from the processor. The difference in video memory consumption is small, but the new API eats up almost 2 GB more RAM. The minimum frame rate was 66 FPS in both cases, and DirectX 12 leads on average: 179 versus 166.
Sniper Elite 4 at ultra settings loads the processor more in DirectX 11 mode, but requires a little more video memory and RAM in DirectX 12. The latter also demonstrates slightly higher speed: 190 versus 187 FPS on average and 170 versus 166 in the minimum, a difference of 1-2%.
The story campaign in Battlefield 1 at ultra settings loads the graphics card more consistently and heavily in the new API. It also requires 700 MB more video memory, but DirectX 11 uses 1200 MB more RAM. And the frame rate in the old API is higher: 162 versus 143 on average and 138 versus 108 in the minimum, a difference of 13% and 28%, respectively.
The Division at maximum graphics settings prefers DirectX 12: the video card is loaded better and the frame rate is slightly higher, although it also requires a little more video memory and RAM. On average we have 147 versus 141 FPS, equivalent to a 4% increase.
DOOM at ultra-high graphics settings completes the first part. In this case we are comparing OpenGL and Vulkan modes. Previously, NVIDIA video cards looked better in OpenGL, but the situation has changed: in OpenGL there were drawdowns down to 183 FPS with an average of approximately 192, while in Vulkan the frame rate held consistently around 200 FPS.
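To summarize the API duel, the per-game averages above can be tabulated and the relative gain computed. A small illustrative sketch (the dictionary layout is ours; the FPS numbers come from the runs above, with DOOM's OpenGL/Vulkan pair included for completeness):

```python
# Average FPS from the runs above: (older API, newer API)
results = {
    "Deus Ex: Mankind Divided": (120, 122),
    "Hitman": (123, 145),
    "Rise of the Tomb Raider": (166, 179),
    "Sniper Elite 4": (187, 190),
    "Battlefield 1": (162, 143),   # the one clear DirectX 11 win
    "The Division": (141, 147),
    "DOOM (OpenGL/Vulkan)": (192, 200),
}

for game, (old_api, new_api) in results.items():
    gain = (new_api / old_api - 1) * 100
    print(f"{game}: {gain:+.1f}% from the newer API")
```

Run this way, Hitman's roughly +18% is the standout, while Battlefield 1 is the only title where the older API comes out ahead.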
How does anti-aliasing in 4K affect performance and picture quality?
It's no secret that when testing games we mainly use either preset graphics profiles or rely on auto-tuning, simulating the behavior of many ordinary users. However, an interesting topic keeps coming up in the comments: "What is the impact of anti-aliasing on picture quality and frame rate at 4K resolution?" This material is devoted to exactly that question.
The wonderful Inno3D iChill GeForce GTX 1080 Ti X3 Ultra video card, with an efficient 3-fan cooling system, a factory-overclocked GPU, and 11 GB of GDDR5X memory, will help us research this question. Its GPU frequency formula is 1607/1721 MHz instead of the reference 1480/1582 MHz, and the effective video memory speed reaches 11.4 GHz instead of 11 GHz.
And to display 4K gameplay, the 28-inch AOC U2879VF monitor was used, which features a fast matrix with a 1 ms response time and an extended set of external interfaces.
Let's start with Assassin's Creed Syndicate, whose maximum graphics preset includes 4x MSAA anti-aliasing paired with FXAA; you can turn it off completely if you want. We apologize for not synchronizing the time of day in the gameplay, but the difference in speed is noticeable to the naked eye: without anti-aliasing we got an average of 43 FPS instead of 33, or 30% more. The minimum increased by 9 FPS, or 29%.
In Hitman at maximum settings, SMAA is used. When it is turned off, changes in the detail of the game world are visually hard to notice, and the effect on frame rate is small: the average difference was almost 4 FPS, or 5%. The gap in the minimum indicator is more significant, 27 versus 10 FPS, but we are not sure this particular benchmark calculates it correctly.
Deus Ex: Mankind Divided at ultra settings allows you to turn off the "Temporal Anti-Aliasing" option. However, its effect on picture quality is hard to establish even in a side-by-side comparison, let alone in live gameplay when attention is focused on mission objectives. The difference in average and minimum frame rates does not exceed 1 FPS, which can be attributed to measurement error.
The ultra settings preset in Far Cry Primal includes SMAA anti-aliasing. As in Hitman, it is hard to tell the difference in picture quality and frame rate by eye; in places the image even seems sharper without it. The objective benchmark shows a slight difference: 61 versus 57 FPS on average and 54 versus 52 FPS in the minimum in favor of disabling anti-aliasing.
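The same numbers can be framed the other way around, as the share of performance spent on anti-aliasing. A quick illustrative sketch (the function name is ours; the figures are from the Assassin's Creed Syndicate run above):

```python
def aa_cost_pct(fps_aa_off: float, fps_aa_on: float) -> float:
    """Share of the no-AA frame rate lost when anti-aliasing is enabled."""
    return (1.0 - fps_aa_on / fps_aa_off) * 100.0

# Assassin's Creed Syndicate at 4K: 43 FPS without AA, 33 FPS with 4x MSAA + FXAA
print(f"{aa_cost_pct(43, 33):.0f}%")  # prints "23%"
```

Note that the 30% figure quoted in the Syndicate paragraph is the same gap expressed relative to the lower, AA-on frame rate.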
Compare NVIDIA GeForce GTX 1050 Ti vs GTX 1060 3GB on Intel Core i7-6700K and Pentium G4560
Choosing a mid-priced gaming graphics card is not an easy task. Many feel that solutions based on the NVIDIA GeForce GTX 1050 Ti are not productive enough, yet they already carry 4 GB of video memory, while the more expensive GeForce GTX 1060 has only 3 GB. Maybe it is worth looking at the AMD Radeon RX 470 with 4 GB of video memory instead? Is the GeForce GTX 1050 Ti really that slow? How far is it behind the 3 GB GTX 1060? And is 3 GB really so little, especially if the system has only 8 GB of RAM and a non-flagship processor? In this comparison, we will try to answer all these questions.
To do this, we used two of our permanent benches: one is based on a 4-core, 8-thread processor with 16 GB of DDR4 memory in dual-channel mode, while the second is built around an affordable 2-core, 4-thread processor with 8 GB of RAM. The main comparison will be between NVIDIA GeForce GTX 1050 Ti and GTX 1060 graphics cards with 3 GB of video memory.
The first of them is represented by a model with an unusual hardware implementation of factory overclocking: a special button on the interface panel activates it. The opponent is the GIGABYTE GeForce GTX 1060 G1 Gaming 3G with an efficient cooling system and excellent factory overclocking, which is activated using a proprietary utility.
Let's start with a more powerful system based on the Intel Core i7-6700K.
Far Cry Primal at ultra settings requires more than 3 GB of video memory, so cards with a smaller video buffer lean more heavily on slower system RAM. The GeForce GTX 1060 takes the lead in both minimum and average, outperforming the GTX 1050 Ti by more than 50% in each case. The GTX 1050's video sequence was not smooth enough in places, and the Radeon RX 470, as expected, came second.
Rainbow Six Siege with the ultra preset also shows a noticeable performance jump between the GeForce GTX 1050 Ti and GeForce GTX 1060: the average rises by 42 FPS, or 55%, and the minimum by 25 FPS, or 47%. The GTX 1050 predictably showed the lowest results, while the RX 470 matches the GeForce GTX 1060 on average and even outperforms it in minimum frame rate by 8 FPS, or 10%.
Maximum graphics settings in The Division fully demonstrate the case for paying extra for a more expensive graphics adapter. If the GeForce GTX 1050 Ti produces an average of 35 FPS, with the GTX 1060 you can count on 59, that is, 69% more. The GeForce GTX 1050 had drawdowns down to 15 FPS, so this mode is too difficult for it, while the Radeon RX 470 is on par with the GeForce GTX 1060.
At ultra settings in DirectX 12 mode, HITMAN refused to start on the GeForce GTX 1050, so only three video cards take part in this comparison. The game is very demanding on video buffer size, so the results were interesting: the GTX 1050 Ti with its 4 GB buffer is 5 FPS, or 11%, ahead of the 3 GB GeForce GTX 1060. And the Radeon RX 470 ignores the competition entirely, pulling away by more than 60%.
The very high graphics settings of Rise of the Tomb Raider put a heavy load on the memory subsystem, so the 3 GB GeForce GTX 1060 lags behind its competitors at the start of each scene due to the need to use slow RAM, then regains lost ground; this is clearly visible in the minimum FPS. On average, the GeForce GTX 1060 pulls away from the GeForce GTX 1050 Ti by 16 FPS, or 36%. The leader was the Radeon RX 470, and GeForce GTX 1050 owners should avoid such settings altogether.
The Deus Ex: Mankind Divided benchmark at ultra settings puts a very heavy load on the video card, so the GTX 1050 can only handle it in slideshow mode. The GTX 1050 Ti teeters on the edge of comfort on average, but with drawdowns down to 9 FPS. The GTX 1060 already provides more acceptable performance: an average of 40.5 FPS with drawdowns down to 29. And the RX 470 again took the lead by a minimal margin.
The very high graphics preset in the For Honor test requires 2 GB of video memory, so neither card had any problems in this regard. But the more powerful GPU of the GTX 1060 provides a noticeable performance boost: it pulled ahead by 19 FPS, or 54%, in the minimum indicator and by 26 FPS, or 53%, in the average.
Another new benchmark in our test suite is Tom Clancy's Ghost Recon Wildlands, which we launched with the "Ultra" preset. During the test, the lack of smoothness is noticeable with the GTX 1050 Ti, which is confirmed by monitoring and the final results. Its minimum rate was almost 20 FPS, while the GTX 1060 rose to 28. On average it loses even more significantly: 28 versus 40 FPS, equivalent to 41%.
DOOM does not yet have a built-in benchmark, so we evaluated the results of running the game with the Ultra preset in Vulkan mode in one and the same scene. The GTX 1060 delivered 86 FPS, while the GTX 1050 Ti gave 53, a difference of 33 FPS, or 62%. The Radeon RX 470 took the lead with 95 FPS, and last place went to the GeForce GTX 1050 with 35 FPS. Over the course of the test the numbers rise, but the standings do not change.
Not all of the presented video cards can handle ultra settings in Battlefield 1. In particular, the GeForce GTX 1050 shows less than 24 FPS, so even the story campaign will not be very comfortable. The GeForce GTX 1050 Ti sits around 49 FPS, while the GeForce GTX 1060 hits 75, which is 26 FPS, or 53%, more. The Radeon RX 470 approaches the GeForce GTX 1060 with 69 FPS.
Mafia III was launched with high graphics settings. On the bridge, the GTX 1060 had a 20 FPS advantage, or 50%. Closer to the city center it dropped to 12 FPS, or 41%, but the GTX 1050 Ti's result was approaching the 24 FPS comfort limit, so the competitor's advantage looks even more impressive.
WATCH_DOGS 2 at the very high preset completes the first part. In one of the comparable scenes, the balance of power was as follows: the GTX 1050 delivered 28 FPS, the GTX 1050 Ti 35, the GTX 1060 51, and the RX 470 46. That is, the difference between the main competitors was 16 FPS, or 46%, a serious performance bonus.
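Minimum and average FPS map directly onto per-frame render times, which is what smoothness complaints like the GTX 1050 Ti's actually reflect. A sketch of that conversion, with comfort thresholds that are our own illustrative assumptions (the review itself treats roughly 24 FPS as the comfort limit):

```python
def frame_time_ms(fps: float) -> float:
    """Per-frame render time in milliseconds for a given frame rate."""
    return 1000.0 / fps

def is_comfortable(avg_fps: float, min_fps: float) -> bool:
    # Thresholds are illustrative assumptions, not figures from the review.
    return avg_fps >= 40.0 and min_fps >= 25.0

# Deus Ex benchmark above: the GTX 1060 averaged 40.5 FPS with drawdowns down to 29
print(round(frame_time_ms(40.5), 1))   # 24.7 ms per frame
print(is_comfortable(40.5, 29))        # True
print(is_comfortable(35, 9))           # False: the GTX 1050 Ti dipped to 9 FPS
```

Judging by both average and minimum, rather than the average alone, is why a card with a decent mean result can still feel like a slideshow in the worst scenes.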
Radeon RX 470 8GB vs RX 480 4GB vs RX 480 8GB Comparison: Does More VRAM Decide or More GPU Power?
In one of our previous materials, we compared the capabilities of the 8 GB AMD Radeon RX 470 and AMD Radeon RX 480 video cards. But in terms of price, the 4 GB version of the RX 480 sits between them, and its cost is almost the same as the 8 GB RX 470. Therefore, many buyers face a reasonable question: "Which is better: a faster GPU paired with less memory, or vice versa?" Let's figure it out.
The following video cards will help us with this. The honor of AMD Radeon RX 470 will be defended by the MSI RADEON RX 470 GAMING X 8G model with an efficient Twin Frozr VI cooler and three operating modes. When testing, the Gaming profile was used with a slight overclocking of the GPU to 1242 MHz and an effective video memory speed of 6600 MHz.
The second participant in the comparison is a graphics adapter with 4 GB of video memory, a stylish design, and a branded dual-fan cooler. By default its GPU is overclocked to 1306 MHz, but we reduced its speed to the reference level. The effective frequency of the GDDR5 memory is the reference 7 GHz.
The flagship position will be defended by the HIS RX 480 IceQ X² Roaring OC 8GB model, which also boasts a solid cooler and a factory-overclocked GPU. Its speed, too, was reduced to the reference figure, while the effective frequency of its 8 GB of GDDR5 memory remained at the AMD-recommended 8 GHz.
Two points deserve special attention. Unlike the previous comparison, the 16 GB of DDR4 RAM was slowed from 3200 MHz to 2400 MHz to better match real-world builds: after all, not all users will pair such video cards with expensive RAM, a top-end motherboard, and a top-end processor.
Now let's move on to the comparison itself. For Far Cry Primal, the ultra graphics settings were selected. On all three systems the game required a different amount of video memory and RAM: the VRAM difference is small, within 200 MB, while the total difference in RAM consumption reaches 1 GB. In any case, 4 GB of VRAM is enough for the game, so GPU speed was the key factor. The AMD Radeon RX 470 averaged 55 FPS; the 4 GB RX 480 achieved 59 FPS, or 7% more, and the 8 GB RX 480 reached 61 FPS, or 11% more than the RX 470. The minimum scores were 44, 47, and 48 FPS, respectively.
Even at ultra settings, Rainbow Six Siege requires a little over 3 GB of video memory, so GPU power remains the key factor. The best average, 130.5 FPS, was recorded by the 8 GB AMD Radeon RX 480; the 4 GB version is 3 FPS behind. In turn, the RX 470's result of 117 FPS is 13 FPS, or 11%, less than the leader's. By minimum frame rate, however, the palm went to the 4 GB RX 480 with 94 FPS; second was the RX 470 with almost 82 FPS, or 15% less, and the 8 GB RX 480 rounded out the trio with 79 FPS.
The Division in DirectX 12 mode at the maximum graphics preset with vertical synchronization disabled produced a perfectly playable frame rate in all three cases. Video memory consumption exceeded 4 GB by the end of the test, but there is still no real need for an 8 GB buffer. The averages follow the expected trend: the top RX 480 leads with almost 66 FPS, the 4 GB RX 480 is only 2 FPS, or 3%, more modest, and the RX 470 trails the leader by 7 FPS, or 12%.
In Gears of War 4 we again observed a quirk noticed in the last comparison: when the Ultra preset is selected, additional graphics parameters change depending on the installed video card. The impression is that the engine adjusts the profile to the capabilities of the GPU. As a result, video memory usage on the 4 GB RX 480 was significantly lower than on its 8 GB competitors.
At very high graphics settings, Rise of the Tomb Raider requires much more than 4 GB of video memory, so the 8 GB buffer should have been a significant advantage. However, the overall averages still follow the trend: 67 FPS for the RX 470, 73.5 FPS for the 4 GB RX 480, and nearly 75.5 FPS for the 8 GB RX 480. In some scenes, though, the 4 GB RX 480 fell behind both competitors.
Next, the Deus Ex: Mankind Divided benchmark was launched at ultra quality settings. Almost from the first frames it occupied 4 GB of video memory, but its appetite was limited to that. The result is interesting: on average everything is quite predictable, with 42 FPS for the RX 470, 46 FPS for the cheaper RX 480, and 48 FPS for the leader. But by the minimum indicator, it was the 4 GB RX 480 that pulled ahead with 33 FPS.
Comparison Radeon RX 470 8GB vs Radeon RX 480 8GB: who benefits from these 8 GB?
We have prepared a comparison of two popular mid-range graphics cards with 8 GB of VRAM, the AMD Radeon RX 480 and AMD Radeon RX 470, in actual games at Full HD resolution.
The comparison participants are the HIS RX 480 IceQ X² Roaring OC 8GB and the MSI Radeon RX 470 GAMING X 8G. Both use 8 GB of GDDR5 video memory. Looking at their GPU configurations, the RX 480 has 256 additional stream processors and 16 extra texture units, and it also operates at slightly higher GPU and memory frequencies. The price difference between them was approximately $30-40 at the time of writing. Does it make sense to pay extra, or can you save? That is the key question of this article.
Both models were tested on the new bench, which includes a 4-core, 8-thread processor, an Intel Z170 chipset motherboard, and 16 GB of DDR4-3200 RAM. An attractive monitor built on a high-quality 27-inch Full HD PLS panel served as the display, with a declared brightness of 250 cd/m², wide 178° viewing angles, and a fairly quick 5 ms response time. The new product has only three preset profiles, but the factory calibration is good, and real performance was even higher than nominal, which let us immerse ourselves in the gameplay. In addition, Philips Flicker-Free technology reduced eye fatigue during long gaming sessions.
Let's start with the benchmarks that give an unambiguous answer as to who is better and by how much. For DiRT Rally, the very high graphics profile was selected, and from the very first frames the RX 480 took the lead. The averages were 107 versus 96 FPS, meaning the RX 480 was ahead by 12%. The difference in minimum values slightly exceeded 9 FPS, also equivalent to 12%.
Ultra quality settings in DirectX 12 mode in Total War: WARHAMMER ensured high loading of the video cards and of one processor thread. But RAM consumption, here and in almost all other tests, was 300-500 MB higher on the RX 480 system. Perhaps the reason lies in a different benchmark order for each model, with part of the RAM still occupied by data from previous games that had not yet been unloaded. In any case, both systems provided a more than comfortable frame rate, with averages of 83.5 versus 73 FPS in favor of the older model, equivalent to a 14% bonus.
Far Cry Primal can be played at ultra quality settings. If a computer with the RX 470 is capable of an average of 55 FPS, then with the RX 480 you can count on 62 FPS, almost 13% more. The difference in minimum frame rate was 5 FPS, or 11%. The game required less than 4 GB of video memory.
Rainbow Six Siege was launched with the ultra graphics preset, which in both cases delivers an average of more than 100 FPS. The final figure for the RX 480 was 132 FPS, while the RX 470 reached 117. The difference of 15 FPS, or almost 13%, is quite significant. For the minimum, the gap is even larger: 95 versus 78 FPS, which equals 17 FPS, or almost 22%.
In The Division we used the maximum quality preset with vsync turned off so the benchmark could show the difference in performance. In both cases we get more than 50 FPS, enough for comfortable gameplay; only when the camera passes through smoke does the frame rate drop to 35 FPS. The RX 480 averaged almost 8 FPS, or 13%, higher: 66 versus 59 FPS.
DirectX 12 mode and the very high graphics preset in Rise of the Tomb Raider let you feel the performance difference from the first frames, although it is noticeable only with active monitoring, since without it differences in smoothness are hard to see. Final score: 77 versus 67 FPS in favor of the RX 480, a difference of 10 FPS, or 15%. Curiously, in the Syria scene the minimum FPS of the RX 480 is lower than the RX 470's, 34 versus 42, although the RX 480's average is higher in all three scenes. Also note that the game required over 6.5 GB of VRAM, so the 8 GB buffer comes in handy.
Deus Ex: Mankind Divided was launched with the ultra preset in DirectX 12 mode. Both cards were able to deliver a minimally comfortable 30-40 FPS for smooth gameplay even in the most difficult scenes, but the advantage of the RX 480 is still significant: on average we have 48 versus 42 FPS, equivalent to a 15% bonus. In the minimum indicator, however, the RX 470 pulled ahead by 1.6 FPS, or 11%.
HITMAN at maximum graphics settings in DirectX 12 mode added nothing new to the balance of power: 87 versus 77 FPS in favor of the RX 480. The difference of 10.5 FPS, or almost 14%, fits the established trend, while the minimum indicators are almost identical.
It is quite difficult to determine a winner in DOOM. The game was launched in Vulkan mode at ultra quality settings. The FPS level in both cases sits in the 90-160 range, guaranteeing absolutely comfortable gameplay. But in roughly the same scenes, the RX 480 delivered about 13-17 FPS more, which here is equivalent to 13-15%.
Intel HD Graphics 620 vs NVIDIA GeForce 940MX: laptop graphics comparison
Welcome to the GECID.com channel. This time we bring you mobile graphics testing in games: for clarity, we will compare the capabilities of the integrated Intel HD Graphics 620 core with those of the mobile NVIDIA GeForce 940MX graphics card.
Let us briefly recall that the basis for the comparison is an ASUS ultrabook built on a 2-core Intel Core i7-7500U processor running at up to 3.5 GHz with 16 GB of DDR4-2133 RAM. Graphics can be handled either by the Intel HD Graphics 620 core integrated into the CPU or by the NVIDIA GeForce 940MX mobile video card with 2 GB of GDDR3 memory. This is what allowed us to test both accelerators under completely identical conditions. ASUS also sells a version of this laptop without the discrete graphics card, so some users may face a real choice here.
Low graphics settings at Full HD resolution in Dota 2 let you feel the difference: the Intel HD Graphics 620 delivered about 60 FPS, while with the GeForce 940MX the frame rate rose to 118 FPS, a difference of almost 100%. Both cases provide completely comfortable gameplay, but with the video card you can raise the graphics settings without losing smoothness and responsiveness.
In Rocket League, low graphics settings and Full HD resolution were also used. Monitoring showed about 50-60 FPS in both cases, but a slight advantage still remained with the video card configuration, since it would allow you to raise some graphics options without losing control responsiveness. This can be seen from the load levels: the iGPU is working at full capacity, while the video card sits at 70-80%.
Moving on to a more precise benchmark, DiRT Rally, which at the low graphics preset and Full HD resolution averaged 29 FPS on the iGPU and 45 fps with the GeForce 940MX active, a difference of 16 FPS or 55%. At the same time, pay attention to the CPU temperature: proximity to a hot video card in a very compact case raises it by 10°C.
To start Far Cry Primal, we had to drop down to HD resolution with the low settings profile. But even these settings did not prevent a slide show on the HD Graphics 620. As a result, we have an average of 17 fps versus 27 in favor of the GeForce 940MX, a difference of 10 FPS or 59%.
The same resolution and low settings were used to run Rainbow Six Siege. The final results were again significantly higher with the video card: an average of 58 versus 33 fps, a difference of 25 FPS or 76%. Looking at the results scene by scene, the GeForce 940MX leads by 22-27 fps, which is very significant for a dynamic team shooter.
In The Division you also cannot count on higher settings, just the low quality preset at HD resolution. From the very first frames, the lead of the video card configuration is undeniable. As a result, we have 37 versus 22 FPS, a more than significant difference of 15 FPS or 68%.
AMD Radeon RX Vega details with Capsaicin & Cream
As part of the Capsaicin & Cream event at GDC 2017, AMD revealed some new details about AMD Vega GPUs and the AMD Radeon RX Vega graphics cards based on them. First of all, the logo of the new series was presented, and it was once again confirmed that their full debut is scheduled for the second quarter of this year.
As we recall, the AMD Vega microarchitecture rests on four “pillars”: the High Bandwidth Cache Controller (HBCC), a new generation of compute units (Next-Gen Compute Unit, or NCU) with support for Rapid Packed Math (RPM), a new programmable geometry pipeline (New Programmable Geometry Pipeline), and an improved pixel engine (Advanced Pixel Engine). Some of these were discussed in detail before, but a few new nuances emerged.
8 GB Single vs 16 GB Dual Channel: Evaluating the effectiveness of a RAM upgrade in a budget gaming system
Many users assembling a gaming computer on a budget, for example one based on the Intel Core i3-6100 and NVIDIA GeForce GTX 1050 Ti, initially buy only one RAM module, planning to later add a second identical one and get not only twice as much RAM but also dual-channel operation. It will therefore be interesting for many to evaluate the impact of dual-channel RAM in games, which is exactly what we will check in this material.
In our case, the processor is the very popular model mentioned above, which in terms of gaming performance sits at roughly the same level as the new 4-thread representatives of the Intel Pentium series, paired with the video card named above. The RAM in both systems ran at 2133 MHz; in the first case 8 GB was used in single-channel mode, and in the second we simulate the result of the upgrade: 16 GB in dual-channel mode.
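Why dual-channel can matter at all: each DDR channel is 64 bits wide, so doubling the channels doubles the theoretical peak bandwidth at the same data rate. A rough sketch of the arithmetic (the 2133 MT/s figure matches the modules used here):

```python
def peak_bandwidth_gb_s(mt_per_s: int, channels: int, bus_width_bits: int = 64) -> float:
    """Theoretical peak bandwidth: transfers/s x bytes per transfer x channels."""
    return mt_per_s * 1e6 * (bus_width_bits // 8) * channels / 1e9

print(f"{peak_bandwidth_gb_s(2133, 1):.1f} GB/s")  # single-channel DDR4-2133: 17.1 GB/s
print(f"{peak_bandwidth_gb_s(2133, 2):.1f} GB/s")  # dual-channel: 34.1 GB/s
```

Real games rarely scale with that headline figure, which is why the measured gains in the results below mostly stay in the single-digit percent range.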
Now let's move on to the comparison. The very high graphics settings profile in DiRT Rally is not very demanding on RAM: it took less than 4 GB to run. But a slightly higher RAM consumption and, in general, a higher CPU load in single-channel mode immediately catch the eye. This hardly showed up in the average frame rate: the difference was barely 2 FPS, or 2%.
The ultra-high graphics preset in DirectX 12 mode in the Total War: WARHAMMER benchmark let us record a bigger difference in average FPS: 64 versus 69 fps in favor of dual-channel mode, equivalent to a 9% increase. RAM consumption in single-channel mode was just over 3.5 GB, and in dual-channel mode it reached 4 GB. The CPU load in both modes was almost the same.
We decided to explore the primeval world of Far Cry Primal at the very high graphics settings profile. You can immediately see that dual-channel mode uses 70-150 MB more RAM, but the overall CPU load is noticeably lower. As a result, the minimum FPS was the same in both cases, while the average and maximum frame rates turned out 1-2 FPS higher on the system with dual-channel RAM.
Ultra quality settings in Rainbow Six Siege should have loaded both systems heavily enough to make the difference clearer. RAM consumption in dual-channel mode was 300 MB higher, and CPU load was up to 15% lower in places. However, this did not particularly affect the frame rate: only at the beginning of the test did the difference almost reach 1 FPS, and elsewhere it was even lower.
The more demanding benchmark The Division was run with the high graphics preset in DirectX 12 mode. From the start, RAM consumption in dual-channel mode was 400 MB higher, while video buffer usage was slightly lower. The average CPU load was also 7% lower. But performance in both cases turned out to be the same.
HITMAN with high detail and texture quality showed a similar trend: RAM consumption in dual-channel mode is about 350 MB higher, but the CPU is loaded 10-15% less in places, leaving it more resources for background processes. The difference in frame rate is minimal: to be precise, less than 1 FPS on average.
The demanding Rise of the Tomb Raider at the high graphics settings profile immediately showed that the CPU load on the dual-channel system can be up to 50% lower, especially in heavy scenes, while RAM consumption was 100 MB higher. In the last scene RAM consumption evened out, but there was a very large lead in both CPU load and FPS in favor of the dual-channel system: in numbers, the difference in speed reached almost 8 fps, or 14%. Across the whole benchmark, however, the difference was only 3 FPS, i.e. just 4%.
Choosing the Radeon RX 460: with or without additional PCI-E power?
If you already know this, well done; if not, we remind you: the power consumption of the reference AMD Radeon RX 460 video card does not exceed 75 W, so a PCI Express slot is enough to power it. However, there are versions on the market with an additional 6-pin PCIe connector, which is supposed to provide better overclocking stability. But perhaps its presence is useful not only to overclockers? Let's figure it out.
For testing we were kindly provided with two different modifications: an ASUS model and the HIS RX 460 iCooler OC 2GB. Both have 2 GB of GDDR5 memory at their disposal. The ASUS version is equipped with a dual-fan cooling system and does not require additional power, while the HIS version is somewhat more compact, with a single-fan cooler and a 6-pin PCIe connector. At the same time, the dynamic GPU frequencies of the two video cards are almost at the same level.
The AVerMedia Live Gamer Portable 2 external capture box in standalone mode still records only sound, although a fix is promised in new firmware, and in PC mode the system itself was heavily loaded when encoding video. As a result, FPS drops were even larger than when recording in software via Radeon ReLive. However, the system load with software tools is unpredictable: it depends on both the game and the driver version, and it does not always decrease with a new driver release, so Radeon ReLive is not a panacea either.
Through trial and error we were still able to find the best option: a second video card was added to the system and used exclusively for encoding the video stream. The corresponding item can be found in the AVerMedia Live Gamer Portable 2 software settings, along with an option to bypass video stream protection. As a result, the additional CPU load drops to 5%, which can be considered insignificant for a system with an Intel Core i7-6700K on board. In turn, the processor's PCI Express x16 lanes are split equally between the two video cards, but since the RX 460 needs only 8 lanes to begin with, this has no effect on performance.
Now let's move on to the games. DiRT Rally can be run with the Very High preset. Both video cards were loaded at 100%, but the version without additional power did not exceed 1150 MHz, while the model with the extra PCIe connector held the nominal 1220 MHz. As a result, its performance was 3 FPS higher: 45 versus 42 fps, an 8% bonus. On the other hand, the ASUS cooler worked more efficiently, keeping the GPU temperature 4-5°C lower at noticeably lower fan speed: 1730 versus 2450 rpm. In terms of comfort, then, the model without additional power looks better.
The Normal settings profile in Far Cry Primal also squeezes about 40 fps out of the RX 460. The model without additional power averaged 43 FPS, while its competitor was slightly higher at 46 FPS. The difference in GPU frequency was 100 MHz, and in temperature 5-6°C. The maximum gap in fan speed almost reached 400 rpm: 1670 versus 2060.
Rainbow Six Siege starts briskly with the very high graphics settings profile. In simple scenes on the street and behind the house, the model without additional power dropped its GPU frequency to 1140 MHz, raising it to 1220 MHz when all resources were needed. Therefore, in places both video cards showed the same results, but on average the advantage remained with the version with additional power: 54 versus 52 fps, or +4%. In GPU temperature and fan speed, however, the ASUS model again looks better: the temperature gap is 7-11°C, and the fan speed gap is from 50 to 350 rpm.
NVIDIA GeForce GTX 1050 vs GTX 1050 Ti comparison: what do we lose by underpaying?
Many undemanding or budget-conscious gamers are interested in the question: is it possible to limit yourself to buying an NVIDIA GeForce GTX 1050 for Full HD resolution, or is it better to pay extra and take the GTX 1050 Ti? In which cases will one or the other video card be optimal? Let's figure it out!
We will be assisted by two cards that differ in graphics processor version and video memory capacity. Both have two factory overclocking profiles that provide a performance bonus, but to make this comparison more universal, we used the proprietary utility to limit their dynamic frequencies to reference levels.
Gameplay was recorded through the AVerMedia Live Gamer HD capture card, without any performance loss.
Let's start with simpler games, taking the updated Dota 2 at maximum graphics settings as an example. The GTX 1050 produced 100-110 fps, and the GTX 1050 Ti 115-130 FPS, a difference of 15-20 FPS. In both cases the GPUs were not fully loaded, yet this is quite enough for comfortable gameplay.
Maximum quality settings in Paragon fully loaded both video cards. The GTX 1050 scored 40-50 FPS and the GTX 1050 Ti 55-60 FPS; the difference of 10-15 fps can be called significant here. In addition, the GTX 1050 ran out of video buffer, since at these settings the game requires 3 GB of memory. That means active use of system RAM, and if slow memory is installed, stuttering may occur.
The ultra quality preset with global illumination simulation enabled in The Long Dark posed no problems for either adapter. Indoors the GTX 1050 climbed to 91 FPS, while the GTX 1050 Ti reached 105 FPS. Outdoors the rates drop a bit, to 77 FPS on the former and 89 FPS on the latter. On average the difference was about 13 FPS, or 15%.
Fans of CS:GO need not worry: both graphics cards handle the game perfectly at high graphics settings. GTX 1050 owners can expect 120-130 FPS outdoors, while GTX 1050 Ti owners can count on up to 155 FPS, a difference of up to 30 fps. Indoors the results are even slightly better. Video memory usage was just over 1 GB, so there were no problems.
The block of heavier games opens with a timeless classic, Crysis 3, at the high graphics settings profile. In approximately the same scenes the difference is about 10 FPS, equivalent to 16%: the GTX 1050 delivers up to 63 fps, and the GTX 1050 Ti up to 73 FPS. There is enough video memory in both cases. Note that the operating frequencies of both GPUs are well above the fixed reference values, which above all indicates excellent cooling.
Shadow Warrior 2 can be considered a not particularly demanding novelty, so we immediately chose the ultra preset. The more affordable graphics card managed 40-60 FPS, while the GTX 1050 Ti pushed this up to 45-75 fps; on average the difference was about 8 fps. The game needed 2.5 GB of video memory, so the GTX 1050 had to dip into system RAM.
GTA V is absolutely playable with high texture quality and standard shader quality. The GTX 1050 Ti naturally takes the lead with up to 100-140 FPS in the city. The GTX 1050 system trailed by about 10 fps on average, delivering 100-130 FPS, and in addition it made active use of system RAM.
Dishonored 2, however, is only comfortable at the medium graphics settings profile. The younger video card provided 35-40 FPS, and the older one up to 40-50 FPS. The gap of about 6-8 FPS is equivalent to a 20% gain. Video memory consumption is slightly over 2 GB, so the GTX 1050 system again needed more RAM.
We decided to play The Witcher 3 with the high graphics preset and the medium post-processing profile. The first system delivered about 40 fps, the second up to 50 FPS; overall the difference was at the level of 5-8 fps. Video memory consumption did not exceed 2 GB.
To launch Assassin's Creed Syndicate, we had to choose medium quality settings. Jacob's runs through London go at 40-45 fps on the GTX 1050 or 45-50 fps on the GTX 1050 Ti, though in places the difference can reach 10 FPS. The game needed more than 2 GB of video memory, so the GTX 1050 system uses more RAM. On top of that, it suffered unpleasant stutters in places, so the GTX 1050 Ti definitely looks better here.
With medium graphics settings in Battlefield 1 in DirectX 12 mode, GTX 1050 owners can count on 40-45 FPS. For GTX 1050 Ti owners the FPS level rises to 60-70 fps, which is 50-60% higher. The 2 GB video buffer is not enough for the game, so in the first case RAM consumption is higher. As a result, gameplay on the GTX 1050 Ti was more comfortable.
WATCH_DOGS 2 can in principle be played even at high graphics quality, but this requires nearly 4 GB of video memory, so RAM usage in the GTX 1050 system tended toward 10 GB. The game was a struggle for it, with noticeable slowdowns when loading new locations, even despite 30-37 FPS. The GTX 1050 Ti experienced no such problems: here you get a comfortable 45-50 FPS and no unpleasant freezes. The difference of 12-15 fps in this case corresponds to an increase of at least 30%.
By your request, we ran DOOM in Vulkan mode with the high graphics preset. The GTX 1050 delivered quite comfortable 50-70 fps, while the GTX 1050 Ti went even further at 70-100 fps. That is, even the more affordable video card makes for quite comfortable play.
Let's move on to tests with even more visual and objective indicators, starting with DiRT Rally at the very high graphics settings profile. Both video cards provided quite a comfortable gaming experience: the GTX 1050 averaged 61 FPS, while the GTX 1050 Ti averaged 69 fps, or 13% more. Video memory usage was a little over 2 GB.
GeForce GTX 1050 versus Radeon RX 460 2GB. Whose 2 GB is more useful?
We continue to visually compare the performance of various video cards in current games. In this article, we will evaluate the capabilities of 2-gigabyte budget representatives from AMD and NVIDIA at Full HD resolution.
We will talk about the most affordable new gaming graphics cards in each company's arsenal: the AMD Radeon RX 460 and the NVIDIA GeForce GTX 1050. To even out the odds somewhat, we deliberately lowered the GTX 1050's dynamic frequency to the reference 1455 MHz.
We start with the updated Dota 2 at maximum quality settings. The RX 460 delivered 75-80 fps, while the GTX 1050 raised the bar above 100 fps. Pay attention to the GPU frequencies: the RX 460 has a nominal clock of 1090 MHz and a dynamic one of 1224 MHz, but the real speed does not exceed 1150 MHz. Meanwhile, the GPU-Z utility showed the GTX 1050's dynamic frequency limit at the reference 1455 MHz, yet in the game it rose 200 MHz higher. These are the quirks of dynamic performance tuning technologies, which optimize power consumption and temperature while accounting for the frame rate already being delivered.
Next in line is Paragon. With all settings at maximum, you can count on about 30 FPS with the RX 460; with a GTX 1050, the frame rate rises to 40-50 fps, so the gap reaches 50% in places. In both cases the video buffer is insufficient, so RAM consumption exceeded 8 GB in the first case and approached that level in the second.
Both models let you run CS:GO at maximum graphics settings at more than 100 FPS. But while the RX 460 reaches 110-120 FPS, the GTX 1050 rises to 140-150 fps outdoors; indoors you can count on even higher frame rates. That is, the difference is about 30%.
If you decide to go back to the classics and replay, say, Crysis 3, then with high graphics settings and FXAA anti-aliasing you will get about 40-50 FPS on the RX 460. With the GTX 1050, the speed rises above 50-60 FPS. In both cases, 2 GB of video memory is more than enough.
In the newer but not too demanding Shadow Warrior 2, you can select the ultra graphics preset without problems. RX 460 owners can expect a minimally comfortable 30-40 fps, while GTX 1050 owners will have some headroom, as the FPS level rises to 40-50 FPS, a gap of 20-30%.
We recommend running the patched Dishonored 2 at medium graphics settings on systems with budget gaming graphics cards. The AMD representative delivered 25-30 fps, while the NVIDIA model provided 34-39 FPS. At the same time, RAM consumption with the GTX 1050 was 1 GB higher. We also recall that without gameplay recording you can count on a slightly higher frame rate.
If you wish, you can also play Assassin's Creed Syndicate, but again only at medium graphics settings. The FPS levels were roughly equal at around 30 fps, although in some similar scenes the GTX 1050 drops by a few FPS. Perhaps the reason lies in the complex geometry of the game world, where the RX 460's greater number of compute units offsets the difference in frequency. The video buffer is insufficient in both cases, but the RX 460 system required 6.5 GB of RAM versus 7.5 GB with the GTX 1050.
We got an interesting result when we launched Battlefield 1 in DirectX 12 mode with high graphics settings. For the first time, a significant advantage went to the RX 460: it produced a minimally comfortable 30-40 fps, while the GTX 1050 stayed below 24 FPS.
To verify the results, we even compared the GTX 1050's performance under hardware and software capture. The difference turned out to be quite significant: nearly 40 versus 26 fps in favor of the video capture card. In other words, ShadowPlay in this case cut performance by roughly a third, which was an unpleasant surprise.
With high graphics settings in Watch Dogs 2 in DirectX 11 mode, the situation returns to normal: the RX 460 produces 20-25 fps, while the GTX 1050 offers 30-35 FPS, so the gap can exceed 40%. A 2 GB video buffer is not enough, so lag is felt in both cases. The RX 460, though, again consumes 1 GB less RAM than the GTX 1050.
Comparison of the GeForce GTX 1060 6GB and GeForce GTX 1070: for those who are ready to pay extra
By the end of 2016, prices for GeForce GTX 10 series graphics cards had normalized, so many gamers faced a difficult choice: save money and get the GTX 1060, or pay extra for the GTX 1070 to have headroom for the future.
We decided to help you make an informed choice by visually comparing representatives of these two graphics adapters in 11 popular games at Full HD resolution. Helping us are two cards: one with a small factory overclock and one at reference frequencies.
Without further ado, we launch GTA V with maximum settings, MSAA x2 anti-aliasing active and TXAA disabled. Both models show quite comfortable frame rates of more than 50 FPS, but the GTX 1070 naturally pulls ahead by 5-13 FPS. In the city there are more objects, yet the frame rate does not sag, and the difference stays in the same range.
The very high settings preset is also appropriate for DiRT Rally. The GTX 1070's average lead was close to 30 fps, but both models delivered over 100 FPS. The processor was less than half loaded in both cases, so it did not affect the bottom line in any way.
Both the GTX 1060 and GTX 1070 let us play XCOM 2 at maximum graphics settings without any problems. The difference in performance is visible to the naked eye: again a gap of about 30 FPS in favor of the GTX 1070, at 60-70 versus 90-100 fps respectively.
In Rainbow Six Siege at ultra graphics settings, GTX 1060 owners can expect an average of 128 FPS, while GTX 1070 owners will get about 167 FPS. Across various scenes the gap of 30-40 FPS holds, so for 120 or 144 Hz monitors at these settings the GTX 1070 looks more interesting.
We tested The Division in DirectX 11 mode. With the maximum graphics quality profile selected, we again got a very significant difference in FPS, which made it hard to keep the two video streams synchronized. Final score: 59 versus 80 fps in favor of the GTX 1070.
Comparison of DDR3-1600 and DDR4-2133 in games with integrated and discrete video accelerator
We are glad to welcome you to GECID.com! During the transition from DDR3 to DDR4 RAM, many users wonder which memory is better and faster at stock frequencies. Therefore, we decided to conduct a live performance comparison of DDR3-1600 and DDR4-2133 when working with a discrete video card and an integrated video accelerator.
Let's briefly describe the test bench, based on the universal and relatively affordable BIOSTAR Hi-Fi H170Z3 motherboard, which supports both DDR3L and DDR4 modules. It was kindly provided by the pcshop ua online store, which has a huge selection of computer components.
The processor is the current top-end 4-core Intel Core i7-6700K. The first stage of testing used the built-in HD Graphics 530 core, while the second used a discrete ASUS Dual GeForce GTX 1060 OC graphics card with 3 GB of GDDR5 on board, already well known to our viewers from previous videos. The RAM is represented by the most popular mass-market 16 GB kits of DDR4-2133 and DDR3-1600. We made no additional changes to the memory parameters, just as many buyers would not.
The first phase of testing, devoted to the impact on integrated graphics performance, opens with the well-optimized Dying Light. With the low graphics settings profile, you can practice parkour and self-defense skills at a minimally comfortable 30-40 FPS, with no shortage of targets to practice on. The lack of a built-in benchmark made it impossible to synchronize gameplay across the platforms, but in general the system with DDR4 memory was ahead by a few FPS.
To start GTA V, we had to lower the graphics settings to minimum and the resolution to 1280x720. Of course, there is no question of any fancy anti-aliasing, so Los Santos cannot boast detail, but the benchmark runs briskly at 30-50 fps. The advantage of the DDR4 system can reach 10 FPS, although its memory usage is several hundred MB higher. The load on the processor and graphics cores is approximately the same.
Fans of the rally simulator DiRT Rally can also breathe easy: the game runs even on the integrated graphics, albeit at low settings and HD resolution. This is enough for a comfortable minimum of 30-plus FPS. It is hard to give the advantage to either system: a minimal lead passes from one configuration to the other. There is also no particular difference in the use of other resources, and the performance level depends solely on the capabilities of the graphics core.
You can also go looking for trouble in Mad Max at low graphics settings. Yes, the view of the post-apocalyptic desert will not be as intimidating, but the video sequence stays smooth at 30-40 fps and the controls remain responsive. The graphics core is fully loaded, while the processor cores themselves are not particularly strained. DDR4 memory holds an advantage of at most 4 FPS, which does not matter much.
Demonstrating weapon mastery and quick reactions in Rainbow Six Siege is only possible at low graphics settings. The built-in benchmark provides a convenient visual comparison of both systems. The advantage, as expected, went to DDR4 memory, but the difference does not exceed a few frames, and since in both cases we get 30+ FPS, the impact of DDR4's higher frequency on the gameplay is minimal.
The developers of Rise of the Tomb Raider chose the beauty and realism of the virtual world, letting players experience both Siberian frost and tropical heat. Therefore, only complete phlegmatics or owners of an inexhaustible supply of nerve cells should attempt this game on integrated graphics: 15-20 FPS at the very low quality preset will infuriate even the most patient within a few minutes. And the 1 FPS advantage with DDR4 memory sounds like mockery.
Deus Ex: Mankind Divided is replete with technologies that improve image quality and enhance the realism of the surrounding world. Therefore, the integrated graphics simply cannot cope with such a load even at low quality settings, and a slide show at up to 20 FPS awaits us. In general, the DDR4 system shows 1-2 FPS more, but this does not particularly affect the quality of the gameplay.
Testbed #2 based on the Intel Socket 2011 platform
Testbed #3 based on the Intel Socket 1155 platform
Testbed #4 based on the AMD Socket AM3+ platform
Testbed #5 based on the Intel Socket 1150 platform
All video cards were tested at maximum graphics quality, with gameplay recorded using the Action! program. The purpose of the test is to determine how video cards from different manufacturers behave under identical conditions. Below is a video of a test segment of the in-game benchmark:
Our video cards were tested at resolutions of 1920x1080, 2560x1440 and 3840x2160 at maximum graphics quality settings. AMD CrossFireX and SLI are supported by the game only in DX11. Since most video cards experience microfreezes associated with a lack of video memory, the minimum FPS results were taken from our own measurements, which differ from the results of the in-game benchmark.
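The reason for measuring minimum FPS ourselves is that a microfreeze is a single very long frame, which an averaged benchmark figure can hide. A simplified sketch of how a minimum can be derived from a frame-time log (the numbers here are hypothetical):

```python
def min_and_avg_fps(frame_times_ms: list[float]) -> tuple[float, float]:
    """Minimum FPS comes from the single longest frame; average from the mean frame time."""
    worst_ms = max(frame_times_ms)
    mean_ms = sum(frame_times_ms) / len(frame_times_ms)
    return 1000 / worst_ms, 1000 / mean_ms

# 99 smooth frames at ~60 FPS plus one 50 ms microfreeze
times = [16.7] * 99 + [50.0]
minimum, average = min_and_avg_fps(times)
print(round(minimum), round(average))  # → 20 59
```

The average barely registers the hitch, while the minimum drops to 20 FPS, which is exactly the discrepancy described above.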
Testing at very high quality settings 1920x1080 DirectX 11
With these settings, video cards at the level of the Radeon HD 7970 or GeForce GTX 960 showed acceptable FPS. The optimal solutions would be video cards like the Radeon R9 380X or GeForce GTX 780 Ti.
Testing at very high quality settings 1920x1080 DirectX 12
With these settings, video cards at the level of the Radeon R9 280X or GeForce GTX 780 Ti showed acceptable FPS. The optimal solutions would be video cards like the Radeon R9 380X or GeForce GTX 970.
Testing at very high quality settings 2560x1440 DirectX 11
With these settings, video cards at the level of the Radeon HD 7990 or GeForce GTX 980 showed acceptable FPS. The optimal solutions would be video cards like the Radeon R9 Nano or GeForce GTX 980 Ti.
Testing at very high quality settings 2560x1440 DirectX 12
With these settings, video cards at the level of the Radeon RX 470 or GeForce GTX 980 showed acceptable FPS. The optimal solutions would be video cards at the level of the Radeon R9 290 or GeForce GTX 980 Ti.
Testing at very high quality settings 3840x2160 DirectX 11
With these settings, video cards at the level of the Radeon R9 Fury X CF or GeForce GTX 1080 showed acceptable FPS. The optimal solutions would be graphics cards like the GeForce GTX 1080 SLI.
Testing at very high quality settings 3840x2160 DirectX 12
With these settings, there are no playable solutions for the game.
The video memory consumed by the game was measured with the MSI Afterburner program. The figures were taken from the results on top video cards from AMD and NVIDIA at resolutions of 1920x1080 and 2560x1440 with various anti-aliasing settings.
Testing GPU memory consumption at maximum quality settings
The recommended amount of video memory is 6144 MB for a resolution of 1920x1080, 6144 MB for 2560x1440, and about 8192 MB for 3840x2160.
We tested processor dependence on 16 currently relevant base configurations. The test was carried out in places where the video card matters least and its load was below 99%, this time at a resolution of 1280x720, since multi-GPU setups and higher resolutions do not reveal the processors' potential.
Testing at maximum quality settings 1280x720 DirectX 11 AMD
Testing at maximum quality settings 1280x720 DirectX 11 NVIDIA
In this case, processor performance in DX11 is higher with NVIDIA video cards and lower with AMD ones.
Testing at maximum quality settings 1280x720 DirectX 12 AMD
Testing at maximum quality settings 1280x720 DirectX 12 NVIDIA
In DX12 mode we see a different picture: processor performance is higher with AMD video cards and lower with NVIDIA ones, where we even see a significant drop in performance relative to DX11.
Deus Ex Mankind Divided uses up to 16 processing threads and can load absolutely all cores.
The test was conducted on a base configuration with a Core i7-5960X @ 4.6 GHz and 32 GB of pre-installed DDR4-2400 MHz memory. Total RAM in use was taken as the indicator. The system-wide RAM test was carried out on various test benches without extraneous applications (browsers, etc.) running.
Testing the game's RAM consumption at various quality settings
As we can see, at various quality settings the amount of RAM consumed by Deus Ex Mankind Divided stays within 2800 megabytes.
Testing system RAM consumption
With 6 GB of system memory, Deus Ex Mankind Divided consumes all of 5.5 GB of RAM. On a system with 8 gigabytes, total RAM consumption was 6.1 GB; with 16 GB, almost 6.8 GB; and with 32 gigabytes, 7.2 GB. Also, do not forget that whatever memory your video card runs short of will be borrowed from RAM, so with a smaller video buffer and a higher resolution, consumption will be somewhat higher.
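The VRAM-to-RAM spillover described above can be sketched in one line; the 6144 MB figure is the recommendation from the video memory test, and the 4 GB card is a hypothetical example:

```python
def ram_borrowed_mb(required_vram_mb: int, card_vram_mb: int) -> int:
    """Whatever the game needs beyond the card's own buffer spills into system RAM."""
    return max(0, required_vram_mb - card_vram_mb)

# recommended 6144 MB at 1920x1080 on a hypothetical 4 GB card
print(ram_borrowed_mb(6144, 4096))  # → 2048
```

So a card with a smaller buffer shifts roughly that much extra load onto system RAM, on top of the game's own consumption listed above.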
And finally, especially for those who reproach us, claiming that our results were obtained only because the tests used MSAA 8X anti-aliasing: just for you, here is a benchmark video of the GTX 1080 SLI at 1920x1080 with MSAA 8X: