What Is Texture Filtering Quality
Nvidia GeForce Tweak Guide
[Page 7] Nvidia Control Panel (Pt.3)
Maximum Pre-Rendered Frames: This setting controls the size of the Context Queue, also known as the Flip Queue or Render Ahead Queue, as explained in this article. The context queue is a buffer (storage area) for frame information which has been prepared specifically for processing by the GPU. In simple terms, this setting controls the maximum number of frames the CPU prepares in advance of being rendered by the GPU. The benefit of having frame data buffered in advance of being processed by the GPU is that it helps keep the GPU consistently fed with data, smoothing out any minor variations in rendering times. The disadvantage of buffering frame data in advance of displaying it is that it can increase latency, i.e. a delay between your input via keyboard or mouse, and the resulting impact on the image displayed, commonly referred to as input lag.
The available options for this setting are Use the 3D Application Setting, 1, 2, 3 and 4. When set to Use the 3D Application Setting, games will either apply their own value, perhaps determined by an in-game setting, or the Windows default of 3. If you wish to override this, you can specify precisely how many frames can be pre-rendered here, ranging from 1 to 4 frames. In theory, higher values ensure smoother framerates but with potentially more input lag, while lower values can help reduce input lag, but may result in stuttering or lower performance. In practice however, the impact of altering this setting appears to be quite complex. In some cases, lowering this setting may actually reduce stuttering or improve performance, likely on systems where the CPU is much slower than the GPU. So some experimentation is required to determine the optimal setting for each game on your system.
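The latency side of this trade-off can be illustrated with a toy simulation. This is a sketch under simplifying assumptions, not how the driver actually schedules work: it models a GPU that takes a fixed time per frame and a CPU that stalls when the queue is full, then measures the gap between when a frame was prepared (input sampled) and when it finishes rendering.

```python
from collections import deque

def simulate(max_prerendered, gpu_frame_ms, n_frames=200):
    """Toy model of a render-ahead queue. The CPU queues frames for a
    GPU that takes gpu_frame_ms per frame, stalling when the queue is
    full. Latency is the gap between when a frame's input was sampled
    (when the CPU prepared it) and when the GPU finishes rendering it."""
    queue = deque()            # completion times of queued frames
    gpu_free_at = 0.0          # when the GPU finishes its current work
    latencies = []
    t = 0.0                    # CPU time; assume the CPU itself is fast
    for _ in range(n_frames):
        while len(queue) >= max_prerendered:
            t = max(t, queue.popleft())   # wait for the oldest frame
        gpu_free_at = max(gpu_free_at, t) + gpu_frame_ms
        queue.append(gpu_free_at)
        latencies.append(gpu_free_at - t)
    return sum(latencies) / len(latencies)

# Average input-to-display latency at 60 FPS (16.7 ms per frame):
for depth in (1, 2, 3, 4):
    print(f"queue depth {depth}: ~{simulate(depth, 16.7):.1f} ms")
```

In this simplified model the average latency grows roughly linearly with queue depth, which is why lowering the setting can reduce input lag; what the model deliberately leaves out is variable frame times, where a deeper queue helps absorb spikes.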
The Maximum Pre-Rendered Frames setting is not the same as Triple Buffering, a setting that is covered in more detail further below. The key difference is that Maximum Pre-Rendered Frames controls the size of the data queue feeding rendering commands to the GPU to maintain smooth framerates, while Triple Buffering is a back-buffer configuration designed specifically to prevent a significant reduction in performance whenever the framerate falls below the refresh rate while Vertical Sync is enabled. Game developers sometimes confuse the issue further by using variations or amalgamations of these two setting names in an unclear fashion (e.g. "GPU Max Buffered Frames").
It is recommended that Maximum Pre-rendered Frames be set to Use the 3D Application Setting under Global Settings. For particular games where you are experiencing stuttering, first try changing this setting to a high value of 4 under Program Settings to see if it helps smooth things out. If that makes things worse, or you can't tolerate the input lag, experiment with progressively lower values. Of course, if a game has its own settings related to frame buffering or precaching, use those in the first instance. Keep in mind that stuttering has numerous other causes if changing this setting has no impact. Finally, note that this setting only works in DirectX games, not OpenGL games.
Multi-Frame Sampled AA (MFAA): An Antialiasing method only supported on GTX 900 series and newer GPUs, Multi-Frame Sampled AA, or MFAA for short, is designed to provide MSAA-like Antialiasing quality at a much lower performance cost. It does this by using sample patterns that aren't fixed, as they are with traditional AA; rather, MFAA's sample patterns can be varied from frame to frame, or even within a frame, to remain optimal.
The available options for this setting are Off or On. To enable MFAA in a particular game, that game must first support MFAA - see this list of supported games. If a game supports MFAA, then to implement it you must first set the Multi-Frame Sampled AA setting to On, then launch the game and select a level of MSAA to apply. Once MSAA is enabled at 2x or above, MFAA comes into effect, increasing the quality of your chosen MSAA level without a drop in performance. For example, if you enable MFAA in the NVCP, then select 2x MSAA in a game, you will get the equivalent of 4x MSAA quality without any extra drop in performance; set 4x MSAA in the game and MFAA will convert it to 8x MSAA quality for free, and so on.
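The doubling rule described above can be summed up in a few lines. The function name here is my own, and the "no effect below 2x" cutoff is taken from the text above; real-world quality is only roughly equivalent to the doubled level, per Nvidia's own claim.

```python
def mfaa_effective_msaa(msaa_level):
    """MFAA varies sample patterns across frames, so an in-game MSAA
    level behaves roughly like double that level (per Nvidia's claim).
    MFAA only engages at 2x MSAA or above."""
    if msaa_level < 2:
        return msaa_level      # MFAA has no effect below 2x MSAA
    return msaa_level * 2

for level in (0, 2, 4, 8):
    print(f"{level}x MSAA + MFAA -> ~{mfaa_effective_msaa(level)}x quality")
```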
The screenshot comparison above shows the same scene first with no Antialiasing of any kind, highlighting the most jagged areas, particularly the rooftop around the crosshair. The second screenshot shows the same scene with in-game 4x MSAA and no MFAA. The third screenshot has in-game 2x MSAA and MFAA enabled, boosting the result to be roughly equivalent to 4x MSAA. A close comparison of the two reveals some differences, with the MFAA screenshot having slightly rougher edges. But look at the framerate counter at the top right: the MFAA screenshot shows a much higher framerate. Benchmarks and additional screenshots, such as in this article, confirm that MFAA does deliver higher MSAA quality at roughly the same performance cost as an MSAA level one increment lower.
It is recommended that Multi-Frame Sampled AA be set to On under Global Settings, as it only comes into effect in supported games when MSAA is enabled, and in such cases always provides an improvement in MSAA quality at no additional performance cost. If you're worried about potential compatibility issues or even the slightest drop in image quality from MFAA, then set it to Off globally and enable it for selected games under Program Settings.
Multi-display/Mixed-GPU Acceleration: This setting determines how OpenGL games and applications, which are not all that common, handle rendering when connected to multiple displays and/or if using two or more different types of Nvidia GPUs. This does not affect single or multiple displays running on single or multiple GPUs (SLI) under DirectX, which encompasses most games and the Windows Desktop. As such, the setting is largely irrelevant to most users.
If you have a multiple display or mixed-GPU setup, and run OpenGL games or applications, then the available options to consider are Single Display Performance Mode, Multiple Display Performance Mode, and Compatibility Performance Mode. If you have only one display, or are having issues with the other modes, the Single Display Performance Mode option should be used. If you are using multiple displays, select the Multiple Display Performance Mode, but if you experience any problems with particular programs, select the Compatibility Performance Mode.
It is recommended that Multi-Display/Mixed-GPU Acceleration be set to Single Display Performance Mode on single display systems, and Multiple Display Performance Mode on multi-display systems, or simply left at its default of Multiple Display Performance Mode on all systems under Global Settings, as it has no impact in most cases.
Power Management Mode: This setting makes use of a modern GPU's ability to support different performance levels depending on how much power is required. Normally, the graphics card will monitor GPU load, and when under minimal load, will lower its clock speeds (downclock) to what are known as 2D or Idle clock speeds - the lowest possible speeds - in order to save power and reduce temperatures. This occurs when using the Windows Desktop, for example. This setting does not alter that behavior; instead it controls what happens when the GPU is operating at what are known as 3D clock speeds, which are activated when running games and other 3D applications.
The available options are Adaptive and Prefer Maximum Performance. If Adaptive is chosen, the graphics card automatically steps down in clock speed in games and 3D applications if they aren't generating a high GPU load. If Prefer Maximum Performance is chosen, when running any game or 3D application, the GPU will maintain its maximum clock speed regardless of load. The Adaptive setting shouldn't cause any problems in modern games, as the GPU will always run at full speed when required without interruption. However, in old games which generate much lower GPU loads, the graphics card may constantly downclock when set to Adaptive, and this may cause problems such as fluctuating performance and stuttering. Alternatively, using the Prefer Maximum Performance option should also be fine, as most games require maximum clock speeds for optimal performance, but you may run into situations where the GPU does not properly downclock back to 2D clock speeds once the game is exited.
Update: Optimal Power
As of the 368.22 GeForce drivers, a new Optimal Power mode is available and set as the default. As yet I can't find an official Nvidia source to reference what this option does, but it was apparently alluded to at some point in the GTX 1080 launch conference. Basically, when Optimal Power is selected, while the PC is idle and nothing is changing on the screen, the GPU will not render any new frames; the driver will simply re-use the last rendered frame in the framebuffer. This should further reduce power consumption without any negative performance impacts. Once I have a confirmed source, I will update this description.
I recommend that Power Management Mode be set to the default of Optimal Power under Global Settings. For any games in which you believe your GPU is constantly downclocking, you can change this setting to Prefer Maximum Performance under the Program Settings tab to ensure the highest possible clock rates at all times. Remember that this setting only relates to games and other 3D applications, not to 2D applications or the Windows Desktop. Note also that if you run a multi-monitor and/or high refresh rate display, your idle clocks may be slightly higher regardless of this setting, which is normal.
Shader Cache: Shaders are programs written to implement customizable visual effects in games, such as various lighting, shadow and surface effects. These shader programs are often compiled (prepared for execution) during loading screens, or in open-world games they may be compiled as you move to a new area or see new objects. The act of compiling shaders can increase loading times, and if the shaders are compiled during gameplay, this can increase CPU usage, reducing performance and also possibly resulting in stuttering. Worse yet, compiled shaders are usually discarded after you exit a game, so the process is repeated the next time you run the game. The Shader Cache feature in the GeForce drivers is designed to create a storage location for compiled shaders on your drive, so that the next time you run a game and it needs to use particular shaders, it can use the precompiled stored shaders rather than compiling them again. This should speed up loading times, improve performance and reduce the potential for stuttering.
The available options for this setting are On and Off. If set to On, the next time you run a game, the driver will begin storing any compiled shader files in the following location on your primary system drive: C:\Users\[username]\AppData\Local\Temp\NVIDIA Corporation\NV_Cache. It will store up to 256MB of precompiled shaders, and once that limit is reached, older shaders are automatically purged to make way for newer ones, so there is no need to manually clean out the Shader Cache folder. Indeed, if you regularly use an automated cleaning utility like CCleaner, you should make sure it isn't set to clean out the Windows Temp folder where the Shader Cache resides - in CCleaner, untick the 'Temporary Files' box under the System category. If you're running an SSD and are worried about the impact of Shader Cache writes on drive longevity, you can move the entire Windows Temp folder to another drive if you wish. Open the Windows Control Panel, launch the System component, select Advanced System Settings, then click the Environment Variables button and Edit the paths for both the TEMP and TMP entries. This really isn't necessary, as contrary to popular belief, SSDs actually have tremendously long lifespans and can take a huge amount of writes - see the Solid State Drives section under the Drive Optimization chapter of the TweakGuides Tweaking Companion for more details.
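The automatic purging described above amounts to a size-capped, oldest-first eviction. The sketch below is illustrative only - the driver manages NV_Cache itself and there is no need to purge it by hand; the path and the 256MB figure come from the text above, while the eviction logic is my own approximation of the behavior described.

```python
import os

CACHE_DIR = os.path.expandvars(
    r"%LOCALAPPDATA%\Temp\NVIDIA Corporation\NV_Cache")  # path per the guide
LIMIT = 256 * 1024 * 1024  # the 256MB cap described above

def purge_oldest(cache_dir, limit):
    """Delete the oldest files until the folder's total size fits under
    limit, approximating the driver's own automatic purging. Returns
    the total size remaining."""
    files = []
    for name in os.listdir(cache_dir):
        path = os.path.join(cache_dir, name)
        if os.path.isfile(path):
            st = os.stat(path)
            files.append((st.st_mtime, st.st_size, path))
    files.sort()                      # oldest first by modification time
    total = sum(size for _, size, _ in files)
    for _, size, path in files:
        if total <= limit:
            break
        os.remove(path)
        total -= size
    return total
```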
I recommend that the Shader Cache be set to On under Global Settings. The Shader Cache folder should also be kept on your fastest drive for optimal performance, even if it's an SSD, as it is only 256MB in size at most and will have a negligible impact on drive longevity. If you wish to set the Shader Cache on or off on a per-game basis, you can do so under the Program Settings tab.
Texture Filtering: Textures are the images covering all of the objects, characters and surfaces in a game world. These two-dimensional images are wrapped around wireframe polygons to create the illusion of a 3D object, viewable from every angle and distance. However, it's precisely because textured surfaces are often viewed at an angle, and at various distances, that they must have texture filtering applied to them in order to maintain optimal appearance. Like Antialiasing, this texture filtering process can be achieved using different methods, with different results. If done poorly, textures will exhibit visual glitches, such as blurriness, a noticeable gradation between lower and higher detailed areas, blockiness, shimmering and texture "noise".
There are up to five settings related to Texture Filtering in the Nvidia Control Panel. Four are covered below; the other - Anisotropic Filtering - is covered earlier in the guide.
Texture Filtering - Anisotropic Sample Optimization: Covered earlier in the guide, Anisotropic Filtering (AF) is the highest quality texture filtering method, providing excellent results. When Anisotropic Filtering is used in a game and this setting is enabled, it attempts to optimize the AF technique to reduce its performance impact.
If the Texture Filtering - Quality setting (covered further below) is not set to High Quality, then you can adjust this setting. The available options are On and Off. If set to On, it limits the number of samples that any Anisotropic Filtering uses based on the size of the texels (texture pixels). This optimized texture sampling provides potentially faster performance when using AF, but it may result in slightly lower image quality, or image quirks such as shimmering. If set to Off, AF is applied at maximum possible quality.
The screenshot comparison higher up shows a scene with 16x AF (and 8x AA) with Anisotropic Sample Optimization Off versus On. Looking closely at areas where any reduction in AF quality would become credible - such as the wood texture on the pier as it recedes into the altitude, and the protruding rocky surfaces on the mount - shows absolutely no divergence between the two screenshots. Simply when flicking rapidly between the 2 shots, and discounting the normal motility of the trees and h2o, practice a couple of tiny shadow differences become apparent in the altitude. I have confirmed via testing in several other games that for all intents and purposes, there is no discernible reduction in image quality when Anisotropic Sample Optimization is enabled. Simply equally the framerate counter at the peak right of the screenshots shows, and over again confirmed with testing in several games, there is likewise no real performance deviation either. Since Anisotropic Sample Optimization can theoretically introduce motion-based artifacts such equally shimmering, I did an in-game comparing of the two settings in several games looking for additional shimmering, but once once again plant no detectable deviation in that regard while the scene was in movement.
It is recommended that Texture Filtering - Anisotropic Sample Optimization be set to Off under Global Settings to ensure the highest image quality when Anisotropic Filtering is in use. In practice, the performance benefits of Anisotropic Sample Optimization are insignificant or non-existent on most systems, given full quality Anisotropic Filtering has minimal performance impact anyway. It is assumed that if you are using any level of AF in the first place, you want to improve image quality, so it is better not to also enable anything that can potentially work against that improvement. Note that this setting only applies to DirectX games, not OpenGL.
Texture Filtering - Negative LOD Bias: LOD is short for Level of Detail, and adjusting the LOD Bias is a method of sharpening details on textures. The LOD Bias controls texture detail by determining when different Mipmaps are used. Mipmaps are a precomputed series of textures, each of a certain resolution, used to improve performance. When you look at a surface close to you, a higher resolution mipmap is loaded; as you move further away from that surface, progressively lower resolution mipmaps of it are displayed instead. The default LOD Bias in a game is typically 0.0, but by using a negative value for LOD Bias (e.g. -1.5), you can force mipmap levels to be moved further away, which can improve texture sharpness at the cost of introducing shimmering when textures are in motion. In general, it is better to simply use Anisotropic Filtering to improve texture detail, rather than lowering LOD Bias, as there is no shimmering and the performance impact is minimal.
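The mechanics can be sketched with the standard mipmap selection formula GPUs use: the mip level is roughly log2 of the texture footprint per screen pixel, plus the bias, clamped to the available mip chain. The function and parameter names here are my own illustration, not a driver API.

```python
import math

def mip_level(texels_per_pixel, lod_bias=0.0, max_level=10):
    """Standard mipmap selection: level = log2(texture footprint per
    screen pixel) + bias, clamped to the mip chain. A negative bias
    selects a higher-resolution (sharper) mip for the same distance."""
    level = math.log2(max(texels_per_pixel, 1e-6)) + lod_bias
    return min(max(level, 0.0), max_level)

# A surface whose texels span 4x the pixel size would normally use mip 2;
# a -1.5 bias pulls it to mip 0.5, i.e. a sharper but shimmer-prone mip.
print(mip_level(4.0))        # 2.0
print(mip_level(4.0, -1.5))  # 0.5
```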
The available options for this setting are Allow and Clamp. Modern games automatically set the LOD Bias, which is why this setting exists, so that you can either select Clamp to lock it out and thus forcibly prevent any negative LOD Bias values from being used, or Allow it. Unfortunately, Nvidia has explicitly noted in its release notes for the GeForce drivers for several years now that: "Negative LOD bias clamp for DirectX applications is not supported on Fermi-based GPUs and later." In other words, this setting currently has no impact on the bulk of games on GTX 400 and newer GPUs; you cannot prevent negative LOD bias in most games.
It is recommended that Texture Filtering - Negative LOD Bias be set to Clamp under Global Settings, and that Anisotropic Filtering be used instead to improve texture clarity. At the moment this will only work for OpenGL games, which are relatively rare. If Nvidia re-introduces this feature for DirectX games, the recommendation above will remain the same for optimal image quality.
Note: For details of how to manually adjust the LOD Bias value in some games, particularly useful in counteracting certain forms of Antialiasing which introduce blurriness to the image, see the Nvidia Inspector utility covered under the Advanced Tweaking section of the guide.
Texture Filtering - Quality: This setting serves to provide an overall method of controlling any texture filtering optimizations applied by the graphics driver. The main settings it alters are Texture Filtering - Anisotropic Sample Optimization (covered above), and Texture Filtering - Trilinear Optimization (covered further below).
The available options are High Performance, Performance, Quality and High Quality. Broadly speaking, High Quality disables the texture filtering optimizations covered in this section for maximum image quality, Quality (the default) and Performance enable progressively more of them, and High Performance enables all of them for maximum performance. The driver may also be controlling other forms of texture filtering optimizations with these settings, but it is unclear what these are, if any.
To better demonstrate the image quality and performance impact of this setting, four sets of screenshots are provided across several different games, each comparing the two extremes of High Performance and High Quality. Each game has had 16x AF applied via the NVCP. The areas to examine most carefully for texture quality differences are long angled surfaces (e.g. brick walls, walkways, the ground) and the surfaces of distant objects (rocks, buildings, tree trunks).
The first set of screenshots above, taken in Crysis 1, shows no discernible image quality difference and no performance difference.
The second set of screenshots above, taken in Skyrim, shows no discernible image quality difference and no performance difference.
The third set of screenshots above, taken in Crysis 3, shows no discernible difference in texture quality, but a 4% performance improvement is apparent from using High Performance (49 FPS vs 47 FPS). Note however that there is very slight camera movement between the two shots, as Crysis 3's checkpoint save system doesn't allow creation of identical screenshots in any given location. So the FPS variation could simply be attributed to this discrepancy.
The final set of screenshots above, taken in Watch Dogs, shows no noticeable difference in texture quality, and an almost 2% performance improvement using the High Performance option for this setting (65 FPS vs 64 FPS). Once again, Watch Dogs' checkpoint save system doesn't allow strictly comparable screenshots to be taken, so the FPS difference could simply be due to this factor.
As noted earlier, an aspect of texture filtering quality not readily visible in screenshots is texture shimmering, as it is largely a motion-based artifact. Playing each game briefly at these settings, I was unable to detect any real difference in terms of texture shimmering between High Performance and High Quality.
The general conclusion from this small sample of games is that the Texture Filtering - Quality setting may have an impact, but mostly in terms of performance rather than image quality. Furthermore, the performance impact appears to be insignificant, though it might be more noticeable on low-end GPUs.
It is recommended that Texture Filtering - Quality be set to High Quality on medium and high-end systems, and High Performance on low-end systems, under Global Settings. For particular games where you have performance to spare, you can select High Quality, and for those which are more strenuous, you can select High Performance under Program Settings as required. I can see no real reason to bother with the Performance or Quality options for this setting, given the performance and image quality difference is extremely small even at the extremes of High Quality and High Performance. It's best just to use High Quality if you prefer the highest image quality, or High Performance if you prefer a potential performance boost. Additionally, there's no need to adjust the Texture Filtering - Anisotropic Sample Optimization and Texture Filtering - Trilinear Optimization settings separately; use this setting as your primary control over texture filtering and allow those to be adjusted automatically by it.
Texture Filtering - Trilinear Optimization: Unlike Anisotropic Filtering, covered earlier in this guide, which is a non-linear and higher quality texture filtering method, Trilinear Filtering is a more basic linear texture filtering method. It is commonly combined with Anisotropic Filtering to provide the highest quality of overall texture filtering currently available.
If the Texture Filtering - Quality setting (covered above) is not set to High Quality, then you can adjust this setting. The available options are On and Off. If set to On, it substitutes Bilinear Filtering on textures in parts of the scene where Trilinear Filtering is not considered necessary. Bilinear Filtering is a lower form of texture filtering than Trilinear, but performs slightly better, so this optimization may improve performance at the cost of image quality.
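To make the difference concrete, here is a minimal sketch of the two filters (an illustration, not driver code): bilinear interpolation blends four neighboring texels within one mip level, while trilinear additionally blends the two nearest mip levels - the extra work this optimization skips where it judges the blend invisible.

```python
def bilinear(tex, u, v):
    """Sample a 2D texture (list of rows) with bilinear interpolation.
    u, v are in texel coordinates."""
    x0, y0 = int(u), int(v)
    x1 = min(x0 + 1, len(tex[0]) - 1)
    y1 = min(y0 + 1, len(tex) - 1)
    fx, fy = u - x0, v - y0
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def trilinear(mips, u, v, level):
    """Trilinear filtering: bilinear-sample the two nearest mip levels,
    then blend by the fractional part of the level. Trilinear
    Optimization falls back to a single bilinear lookup where this
    second sample is deemed unnecessary."""
    lo = int(level)
    hi = min(lo + 1, len(mips) - 1)
    f = level - lo
    # texel coordinates halve with each successive mip level
    a = bilinear(mips[lo], u / 2**lo, v / 2**lo)
    b = bilinear(mips[hi], u / 2**hi, v / 2**hi)
    return a * (1 - f) + b * f
```

The blend between mip levels is what removes the visible band where one mipmap hands over to the next; dropping to bilinear saves one texture fetch per sample at the risk of re-introducing that band.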
The screenshot comparison above shows a scene with 16x AF and Trilinear Optimization Off versus On. There is no detectable texture quality difference between the two, and as the framerate counter at the top right shows, no performance difference either. The main reason is that Trilinear Filtering has a negligible performance impact on modern GPUs, even with AF applied on top of it, and if any AF is being used in a game (which is strongly recommended), then any small difference between Bilinear and Trilinear Filtering will be all but completely covered up anyway.
It is recommended that Texture Filtering - Trilinear Optimization be set to Off under Global Settings to ensure the highest image quality, especially when Anisotropic Filtering is in use. As with Anisotropic Sample Optimization, in practice the performance benefits of Trilinear Optimization are insignificant or non-existent on most systems. Note that when the Texture Filtering - Quality setting is set to High Quality, this setting will become inaccessible but will show as being On. In effect it is Off, because the setting will not be used. Finally, this setting only applies to DirectX games, not OpenGL.
The next page concludes the descriptions of Nvidia Control Panel settings.
Source: https://tweakguides.pcgamingwiki.com/NVFORCE_7.html