GF100: NVIDIA GF100 GPU Specs | TechPowerUp GPU Database

NVIDIA GF100 Architecture and Feature Preview

Back in late September of last year, NVIDIA disclosed some information regarding its next generation GPU architecture, codenamed "Fermi". At the time, actual product names and detailed specifications were not disclosed, nor was performance in 3D games, but high-level information about the architecture, its strong focus on compute performance, and its broader compatibility with computational applications was discussed.

We covered much of the early information regarding Fermi in this article. To recap some of the more pertinent details found there, the GPU codenamed Fermi will feature over 3 billion transistors and be produced on TSMC's 40 nm process. If you remember, AMD's RV870, which is used in the ATI Radeon HD 5870, is comprised of roughly 2.15 billion transistors and is also manufactured at 40 nm. Fermi will be outfitted with more than double the number of cores of the current GT200, 512 in total. It will also offer 8x the peak double-precision compute performance of its predecessor, and Fermi will be the first GPU architecture to support ECC. ECC support will allow Fermi to compensate for soft error rate (SER) issues and also potentially allow it to scale to higher densities, mitigating the issue in larger designs. The GPU will also be able to execute C++ code.
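To make the C++ point concrete, below is a minimal, hypothetical CUDA sketch of the kind of C++ device code a Fermi-class GPU is designed to run: a small class whose member function is called from inside a kernel. The class, kernel, and values are illustrative only, and the snippet simply assumes a system with a CUDA-capable GPU and toolkit installed.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical example: a plain C++ struct whose member function runs on the GPU.
struct Scaler {
    float factor;
    __device__ float apply(float x) const { return factor * x; }
};

// Kernel that calls a C++ member function in device code.
__global__ void scaleKernel(const float* in, float* out, int n, Scaler s) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = s.apply(in[i]);
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float host_in[n], host_out[n];
    for (int i = 0; i < n; ++i) host_in[i] = float(i);

    float *dev_in = nullptr, *dev_out = nullptr;
    cudaMalloc(&dev_in, bytes);
    cudaMalloc(&dev_out, bytes);
    cudaMemcpy(dev_in, host_in, bytes, cudaMemcpyHostToDevice);

    Scaler s = {2.0f};  // pass a C++ object to the kernel by value
    scaleKernel<<<(n + 255) / 256, 256>>>(dev_in, dev_out, n, s);
    cudaMemcpy(host_out, dev_out, bytes, cudaMemcpyDeviceToHost);

    printf("out[10] = %f\n", host_out[10]);  // expect 20.0
    cudaFree(dev_in);
    cudaFree(dev_out);
    return 0;
}
```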

NVIDIA's Jen-Hsun Huang holds GF100's closest sibling, a Fermi-based Tesla card

During the GPU Technology Conference that took place in San Jose, NVIDIA's CEO Jen-Hsun Huang showed off the first Fermi-based, Tesla-branded prototype boards and talked at length about the compute performance of the architecture. Game performance wasn't a focus of Huang's speech, however, which led some to speculate that NVIDIA was forgetting about gamers with this generation of GPUs. That is obviously not the case; Fermi is going to be a powerful gaming GPU after all. The simple fact of the matter is that NVIDIA is late with its next-gen GPU architecture, and the company chose a different venue, the Consumer Electronics Show, to discuss Fermi's gaming-oriented features.

GF100 High-Level Block Diagram

For desktop-oriented parts, Fermi-based GPUs will from here on be referred to as GF100. As we've mentioned in previous articles, GF100 is a significant architectural change from previous GPU architectures. Initial information focused mostly on the compute side, but today we can finally discuss some of the more consumer-centric details that gamers will be most interested in.

At the Consumer Electronics Show, NVIDIA showed off a number of GF100 configurations, including single-card, 2-way, and 3-way SLI setups in demo systems. Those demos, however, used pre-production boards that were not indicative of retail products. Due to this fact, and also because the company is obviously still working feverishly on the product, NVIDIA chose not to disclose many specific features or the speeds and feeds of GF100. Instead, we have more architectural details and information regarding some new image quality (IQ) modes and geometry-related enhancements.

In the block diagram above, the first major changes made to GF100 become evident. In each GPC (Graphics Processing Cluster), of which there are four in the diagram, newly designed Raster and PolyMorph Engines are present. We'll give some more detail on these GPU segments a little later, but having these engines present in each GPC essentially allows each one to function as a full GPU. The design was implemented to allow geometry performance to scale better, through a parallel implementation of geometry processing units. According to NVIDIA, the end result is an 8x improvement in geometry performance over the GT200. Segmenting the GPU in this way also allows for multiple levels of scalability, whether at the GPC level or at the level of individual SM units.

Each GF100 GPU features 512 CUDA cores, 16 geometry units, 4 raster units, 64 texture units, 48 ROPs, and a 384-bit GDDR5 memory interface. If you're keeping count, the GT200 features 240 CUDA cores, 32 ROPs, and 80 texture units. The geometry and raster units, as they are implemented in GF100, are not present in the GT200. The GT200 does feature a wider 512-bit memory interface, but the need for such a wide interface is largely negated in GF100 because the GPU uses GDDR5 memory, which effectively offers double the bandwidth of GDDR3, clock for clock.
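As a rough sanity check on that bandwidth claim, peak memory bandwidth is simply the bus width in bytes multiplied by the effective per-pin data rate. The data rates below are illustrative assumptions only, since NVIDIA has not disclosed GF100 memory clocks:

$$\text{bandwidth} = \frac{\text{bus width (bits)}}{8} \times \text{effective data rate}$$

$$384\text{-bit GDDR5 at an assumed } 4.0\ \text{Gbps/pin:}\quad 48\ \text{B} \times 4.0\ \text{GT/s} = 192\ \text{GB/s}$$

$$512\text{-bit GDDR3 at an assumed } 2.2\ \text{Gbps/pin:}\quad 64\ \text{B} \times 2.2\ \text{GT/s} \approx 141\ \text{GB/s}$$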

If we drill down a little deeper, each SM in each GPC is comprised of 32 CUDA cores, 64KB of on-chip memory that can be configured as 48KB of shared memory with 16KB of L1 cache or as 16KB of shared memory with 48KB of L1 cache (GT200 offered 16KB of shared memory and no L1 cache), 4 texture units, and 1 PolyMorph Engine. In addition to the actual units, we should point out that improvements have also been made over the previous generation in 32-bit integer operation performance, and the SM now offers full IEEE 754-2008 FMA support. The increase in cache size and the addition of L1 cache were designed to keep as much data on the GPU die as possible, without having to access memory.
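That shared memory/L1 split is exposed through the CUDA runtime. The short sketch below uses cudaFuncSetCacheConfig, the runtime call introduced alongside Fermi for this purpose, to request the larger-L1 configuration for a hypothetical kernel; the kernel itself and the workload are made up purely for illustration.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel with irregular, data-dependent reads, the kind of
// access pattern that tends to benefit from the larger 48 KB L1 configuration.
__global__ void gatherKernel(const float* in, float* out, const int* idx, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[idx[i]];
}

int main()
{
    // Request the 48 KB L1 / 16 KB shared memory split for this kernel.
    // cudaFuncCachePreferShared would request the opposite 16/48 split.
    cudaError_t err = cudaFuncSetCacheConfig(gatherKernel, cudaFuncCachePreferL1);
    printf("cudaFuncSetCacheConfig: %s\n", cudaGetErrorString(err));
    return 0;
}
```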

The L1 cache is used for register spilling, stack ops, and global loads and stores, while the L2 cache is for vertex, SM, texture, and ROP data. According to NVIDIA, the GF100’s cache structure offers many benefits over GT200 in gaming applications, including faster texture filtering and more efficient processing of physics and ray tracing, in addition to greater texture coverage and generally better overall compute performance.

The PolyMorph and Raster Engines in the GPU perform very different tasks, but both ultimately result in greater parallelism in the GPU. The PolyMorph Engines are used for world-space processing, while the Raster Engines handle screen-space processing. There are a total of 16 PolyMorph Engines, one per SM. They allow work to be distributed across the chip, but there is also intelligent logic in place designed to keep the data in order. The units communicate with one another to ensure the data arrives in DRAM in the correct order, and all of the data is kept on die thanks to the chip's cache structure. Synchronization is handled at the thread-scheduling level. The four independent Raster Engines serve the geometry shaders running in each GPC, and the cache architecture is used to pass data from stage to stage in the pipeline. We're also told that the GF100 offers 10x faster context switching than the GT200, which further enhances performance when compute and graphics modes are both being used.

GF100 Series EtherCAT® — Brooks Instrument — PDF Catalogs | Technical Documentation


Catalog excerpts

DATA SHEET
Thermal Mass Flow GF100 Series with EtherCAT®
The Fastest and Most Accurate MFCs, Enhanced with the Speed of EtherCAT®

GF100 Series with EtherCAT® High Purity/Ultra-High Purity Thermal Mass Flow Controllers and Meters

Through hundreds of thousands of installations, the GF100 Series has been proven to have the fastest response time and most accurate performance of any mass flow controller on the market today, enabling precision gas chemistry control. Now enhanced with the speed of EtherCAT® (an Ethernet-based communication system known for its cost-efficient cabling and application efficiency), the GF100 Series delivers improved key specifications for the increasing demands of semiconductor processes.

Features & Benefits
• All-metal seal flow path: option for 5µ or 10µ inch Ra surface finish
• Real-time EtherCAT® data acquisition capabilities
• Improved valve shutdown (≤ 0.15% of bin range) reduces valve leak-by to reduce first wafer effects
• Ultra-stable flow sensor (≤ 0.15% of F.S. drift per year) enables tighter low set point accuracy and reduces maintenance requirements, ensuring long-term zero stability
• Newly enhanced pressure transient insensitivity reduces crosstalk sensitivity for consistent mass flow delivery
• Ultra-fast settling times: as low as 300 ms
• MultiFlo™ technology enables one MFC to support thousands of gas types and range combinations without removing it from the gas line or compromising on accuracy
• GF120 Safe Delivery System (SDS®) low pressure drop MFC for the delivery of sub-atmospheric safe delivery system (SDS) gases used in Implant and Etch processes

View GF100 Series w/ EtherCAT Product Page

Product Specifications: Performance
Full Scale Flow Range
Flow Accuracy
Repeatability & Reproducibility
Flow Settling Time (NC Valve): < 1 sec
Flow Settling Time (NO Valve)
Pressure Insensitivity: < 1% S.P. up to 5 psi/sec upstream pressure spike
Control Range: 2-100% (Normally Closed Valve); 3-100% (Normally Open Valve); 2-100% (Normally Closed Valve)
Valve Shut Down (N.C. Valve): ≤ 0.15% of F.S.; Zero Leak Valve: Sh50-Sh51 < 0.02% F.S., Sh52-SH50 < 0.005% F.S.
Valve Shut Down (N.O. Valve): < 1% of F.S.; Zero Leak Valve: Sh50-Sh51 < 0.02% F.S., Sh52-SH50 < 0.005% F.S.
Zero Stability
Temperature…

Product Specifications: Electrical
Electrical Connection: Power via 5-pin M8 connector; EtherCAT via RJ45 jacks
Digital Communication
Diagnostic/Service Port
Power Supply/Consumption: 320 mA max. @ 18-30 Vdc; 230 mA max. @ 24 Vdc (under typical operating conditions)
Compliance
EMC: EMC Directive 2014/30/EU; Evaluation Standard EN61326-1:2013
Environmental Compliance: RoHS Directive (2011/65/EU); REACH Directive EC (1907/2006)

Product Specifications: Performance (GF101)
Full Scale Flow Range
Flow Accuracy
Repeatability & Reproducibility
Response Time/Settling Time (NC Valve)
Pressure Insensitivity
Control Range: 5-100% (Normally Closed Valve)
Ability to measure inlet pressure
Valve Shut Down (N.C. Valve)
Zero Stability
Temperature Coefficient
Ratings
Operating Temperature Range
Differential Pressure Range
Maximum Operating Pressure: Controller: 75 psig; Meter: 150 psig
Proof Pressure / Design Pressure / Burst Pressure: 700 psia / 800 psia / 3000 psia
Leak Integrity (external)
700 psia / 800 psia / 3000 psia / 140 psia / 170 psia / 500…

Product Dimensions — Surface Mount Configurations (C-Seal)

Product Dimensions — VCR Configurations. Access our library of CAD Drawings

Model Code (GF100/120/125 EtherCAT)
Code / Description:
I. Base Model Code
II. Package / Finish Specifications
Code / Option / Option Description:
• High Purity/Ultra High Purity Digital Mass Flow Controllers
• Flow range 3 sccm — 55 slpm N2 equivalent; 1 sec response; 10 Ra
• Flow range 3 sccm — 55 slpm N2 equivalent; 700 msec response; 5 Ra
• Pressure Transient Insensitive (PTI): flow range 3 sccm — 55 slpm N2 equivalent; 300-700 msec response; 5 Ra
• MultiFlo capable. Standard bins or specific gas/range may be selected.
• Not MultiFlo capable. Specific gas/range required. (must select w/ SD or SL special…

Model Code (GF101/121/126 EtherCAT)
Code / Description:
I. Base Model Code
II. Package / Finish Specifications
Option / Description:
• High Purity/Ultra High Purity Digital Mass Flow Controllers
• Flow range 55 — 300 slm N2 equivalent; 10 Ra HP wetted flow path
• Flow range 55 — 300 slm N2 equivalent; 5 Ra HP wetted flow path
• Flow range 55 — 300 slm N2 equivalent; 5 Ra HP wetted flow path & integrated pressure measurement
• MultiFlo capable. Standard bins or specific gas/range may be selected
• Not MultiFlo capable. Specific gas/range required
• Special Application
• Normally Closed valve
• Meter (No Valve)…

Service and Support Brooks is committed to assuring all of our customers receive the ideal flow solution for their application, along with outstanding service and support to back it up. We operate first class repair facilities located around the world to provide rapid response and support. Each location utilizes primary standard calibration equipment to ensure accuracy and reliability for repairs and recalibration and is certified by our local Weights and Measures Authorities and traceable to the relevant International Standards. Visit www.BrooksInstrument.com to locate the service location…


GF100 — Filterfine Advance



PRODUCT NAME: ​Mass Flow Controller GF100
BRAND NAME: Brooks
MODEL: GF100
CATEGORY: Brooks

DESCRIPTION: 
>> 1% digital set point accuracy and < 1 second response time
>> Multi-gas/multi-range user configurability for optimum process flexibility and reduced inventory investment
>> Independent service port for data-logging and process development

DETAIL:
High Performance Flow Control
The Brooks Instrument GF100 is a robust upgrade for standard high purity (HP) mass flow controllers, offering improved accuracy and cleanliness. Optional electrical adapters ensure drop-in compatibility with older-style analog flow controllers.
Brooks' new sensor technology, with improved signal-to-noise performance, and powerful control algorithms deliver enhanced measurement accuracy and reproducibility for optimal gas chemistry control.
The GF100 is designed for long-term reliability, with embedded diagnostics and automated zeroing to reduce maintenance for a lower cost of ownership.
Standard Features
• High purity construction with reduced surface area and removal of unswept volumes for faster dry-down during purge
— Surface passivated high purity wetted flow path
— 10 μ inch Ra surface finish
• Independent service and diagnostic port for on-tool troubleshooting and process fingerprinting and optimization
• Integrated high visibility LCD display reports flow (%) and temperature (°C)
• Analog and Serial (RS-485), analog 0–5 VDC, and DeviceNet™ communication interface options
• Gas and range user configurable
— Multi gas/multi range model created and proven using actual process gases to ensure real world accuracy
— Change full scale flow range up to 3:1 for optimum process and inventory management flexibility
— Select from hundreds of gases and gas mixtures
Applications
• Thin-film solar deposition and etch
• c-Si deposition
• Wear resistant surface coatings
• Physical vapor deposition
• MEMS manufacturing
• Fiber optic and glass coating
• Bioreactor gas management
• Flame control
• Gas blending
Multi Gas/Multi Range Technology
Multi gas/multi range (MG/MR) is a proprietary technology available on all Brooks GF Series MFCs. Our MG/MR technology offers a host of benefits that increase tool uptime, reduce cost of ownership, and reduce inventory requirements.
Brooks MFCs with MG/MR are offered in nine standard configurations, each programmable for a set of gases and flow ranges. Combined, the nine standard MFCs cover 85% of the gases and flow ranges used in a typical production fab (from 3 sccm to 30 slm, N2 equivalent).
MG/MR is offered with a configuration kit which allows the user to program the MFC for desired gas and flow range anywhere, anytime without removing the MFC from the gas panel. Calibration does not require surrogate gases and can be completed in just a few minutes. In a recent benchmark study, we were able to cover an entire semiconductor fab’s MFC inventory requirement with only 23 part numbers (nine configurable MFC part numbers and 14 other unique part numbers), significantly reducing the fab’s inventory requirements.
Better by Design
Brooks MFCs use a valve, sensor, and bypass design which has been perfected from years of research and testing. Brooks MFCs are robust, reliable, and field proven.
The Brooks solenoid valve has major advantages over other MFC valves (such as piezoelectric valves, which tend to shed particles). Our valve has only one moving part, and only three parts physically in the gas flow path. This results in no particle generation during normal operation. (Other valves, such as piezoelectrics, can release huge amounts of gas during a failure and can overtax abatement systems.)
Multi Gas/ Multi Range Benefits
• Replacement MFCs are available in only a few minutes
• Nine standard MFC part numbers cover 85% of all applications
• Enables on-site gas and range changes with no surrogate gas requirements
• Enables last minute changes in gas panel integration without impacting on-time delivery
• Dramatically reduces inventory requirements
• Increases tool uptime

SPECIFICATION:
GF100 Mass Flow Controllers Specifications
Display
Type : Top mount integrated
Viewing Angle : Fixed
Viewing Distance : 10 feet
Unit Displayed : Flow (%), temp. (°C)
Resolution : 0.1 (unit)
Diagnostics
Status Lights : MFC health, network status
Alarms : Sensor output, control valve output, over temperature, power surge/sag, network interruption
Materials
Gas Path : 316L and 304 stainless steel, KM-45
Surface Finish : 10 μ in Ra
Seals : Metal
Weight : <2.65 lbs (1.20 kg)
Electrical
Power Consumption : 545 mA (max) @ 11 VDC and 250 mA (max) @ 24 VDC
6 watts (max) @ ±15 VDC
Certifications : EMC 89/336/EEC (CE), ODVA, RoHS/WEEE
Electronic Communication Interface Options
Primary Connectors : Analog/RS-485 via 9-pin “D”
Analog/DeviceNet
— DeviceNet via 5-pin “M8” connector
— Analog via Hirose connector
DeviceNet via 5-pin “M8” connector
Diagnostic Port : RS-485 via 2.5 mm jack
Performance
Leak Integrity (external) : 1 x 10⁻¹¹ atm. cc/sec He
Linearity : ±0.5% full scale
Repeatability and Reproducibility : ±0.15% of set point
Zero Drift : <0.6% full scale per year
Auto Shut-Off : Valve off at set point <2% full scale
Warm Up Time : 60 minutes
Settling Time : 1 second
Standard Accuracy (a worked example follows these specifications)
5% to 35% : ±0.35% of full scale
35% to 100% : ±1.0% of set point
Operating Conditions
Sh50–Sh54 ; Sh55–Sh56 ; Sh57–Sh58
Flow Range : 3–860 sccm; 861–7200 sccm ; 7201–30000 sccm
Proof Pressure : 140 psia max
Differential Pressure* : 7–45 psid ; 10–45 psid ; 15–45 psid**
Valve Configuration : Normally closed
Temperature Range : 10°C–50°C
Zero Temperature Coefficient : 0.005 full scale per °C
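To make the Standard Accuracy figures above concrete, here is a worked example of how the "% of full scale" and "% of set point" bands apply for a hypothetical unit with a 1000 sccm full-scale range (the 1000 sccm figure is assumed purely for illustration):

$$\text{20\% set point (200 sccm): } \pm 0.35\% \times 1000\ \text{sccm} = \pm 3.5\ \text{sccm}$$

$$\text{80\% set point (800 sccm): } \pm 1.0\% \times 800\ \text{sccm} = \pm 8\ \text{sccm}$$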

HOME — Williamson Hardware

WOOD STOVES

What is the difference between catalytic and non-catalytic wood burning stoves?

In catalytic combustion, the smoky exhaust is passed through a coated ceramic honeycomb inside the stove where the gases and particles ignite and burn at a temperature that is less than half the temperature required for combustion of the gases without the catalytic action. Catalytic stoves are capable of producing a long, even heat output. The catalyst can last up to six seasons or more if the stove is used properly. Over-firing, burning garbage or treated wood, locking the catalyst down at extremely high temperatures (which causes thermal shock), and neglecting regular cleaning and maintenance can cause the catalyst to break down prematurely. Catalytic stoves are slightly more complicated to operate. Non-catalytic stoves utilize firebox insulation and a secondary burn chamber that injects pre-heated combustion air and turbulence through small holes in a baffle system above the fuel in the firebox. Non-cats have less even heat output and a bit shorter burn times than catalytic stoves, but they create a more pleasing flame presentation and are much less expensive to maintain.

What’s the difference between an EPA stove and a non EPA stove?

EPA certified wood burning stoves burn wood more efficiently and cleaner than traditional types of wood stoves. These wood stoves can provide a nearly smokeless burn, producing maximum heat while using less firewood. Each EPA certified wood stove or wood heating appliance is tested by an accredited laboratory to meet a particulate emissions limit of 7.5 grams per hour for non-catalytic wood stoves and 4.1 grams per hour for catalytic wood stoves, except in the State of Washington, where these limits are 4.5 grams per hour for non-catalytic wood stoves and 2.5 grams per hour for catalytic wood stoves. Many older stoves from the 1970s and 1980s that may still be in use commonly put out well over 90 grams of particulate emissions per hour. Additionally, a pre-EPA stove from the 1970s and 1980s averaged around 20 to 25% efficiency, while current EPA approved stoves range from 75% to as high as 90% efficient. This equates to burning less wood to get the same amount of heat. An EPA certified wood stove can be identified by a permanent metal label affixed to the back or side of the wood stove. Switch and upgrade to a certified stove that is now over 10 times cleaner burning!
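As a rough back-of-the-envelope illustration, assuming the heat delivered to the room scales directly with rated efficiency, moving from a 25% efficient pre-EPA stove to a 75% efficient certified stove cuts the wood required for the same heat to about one third:

$$\frac{\text{wood needed at }75\%\text{ efficiency}}{\text{wood needed at }25\%\text{ efficiency}} = \frac{1/0.75}{1/0.25} = \frac{0.25}{0.75} = \frac{1}{3}$$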

How is zone heating different than central heating?

Zone heating is putting the heat where you need it most, versus central heating systems, which distribute heat to the whole house. Zone heating with a space heater such as a wood (or gas or pellet) stove, fireplace, or fireplace insert can save the homeowner significantly on their overall heating bills. Zone heating creates a cozy warm area for the family to gather, allowing the central heating system to be set lower. The zone that one can heat with a space heater is determined by the design of the home. The more open the house plan, the larger the area (zone) one can heat with a new heating appliance. Why heat every nook and cranny when you are not using that part of the house? Zone heating is direct heat. The duct systems of most central heating systems have significant heat loss before getting the heat to the living areas; the longer the run of the heating ducts, the greater the loss.

Where should I locate my new stove?

First, the area selected to install a new stove may be limited by the location of the existing venting system or by factors like obstructions above the installation that preclude installing a new chimney system. Second to consider is the area that one wishes to heat. Stoves are space heaters, sometimes referred to as zone heaters. For maximum enjoyment and heating effectiveness, a major living area where the family spends its leisure hours, and which provides heat flow to other areas, is usually the strongly preferred location for the stove. A third consideration is the space requirements. Wood stove installations must meet minimum clearances between the stove and nearby combustible surfaces, and the hearth or floor protection must extend beyond the front and all sides of the stove. These requirements are clearly stated in the owner's instruction manuals. Local codes must also be followed.

Why does the glass on my wood stove soot up?

Soot will appear on the glass if the firebox temperature is low or if the lighting-off period is too short. When first firing the stove, a lot of combustion air must be supplied to establish a good fire and warm up the chimney, so open the air controls. Once the kindling fire is well established, dry wood can be added. The combustion is then controlled by the primary air control. Wet or green (unseasoned) wood or poor draft conditions might also cause sooty glass. Also, never let logs touch the glass; doing so interferes with the air wash and sooting of the glass will result.

Does my wood stove need floor protection?

Yes, wood stoves require a hearth or stove pad unless installed on a concrete slab. Some wood burning stoves only require ember protection while others need thermal protection. Many stoves supply (sometimes as an option) a bottom heat shield that reflects the heat away from the hearth; the bottom heat shield is then usually required unless the product is installed on the foundation level. Always check the manufacturer's installation requirements for R values and minimum size before installing your wood stove, and always install according to the stove manufacturer's instructions and local building codes.

Why does my wood stove draft poorly?

The main function of a chimney is to create draft for combustion and to transport the flue gases out of the building. A good draft is vital for good combustion. We consider a normally good draft to be at least .05 inches of water column as measured by a draft gauge. The chimney creates the draft, not the appliance.

Essential for the draft is the construction of the chimney. A tall chimney gives more draft than a short one; if the draft is insufficient, one solution is to build a taller chimney. The chimney diameter should never be less than the diameter of the appliance flue outlet. A circular chimney liner normally gives better draft than a square one. Use of flue pipe elbows reduces the draft; if elbows are used, it is always better to install two 45 degree elbows instead of one 90 degree elbow.

Combustion air is also essential for the draft. An open fireplace requires approximately 300 m3 of air each hour, while a "closed" fireplace requires approximately 30 m3 per hour. A kitchen duct or ventilator can suck much more air than a chimney, which will create a negative draft, and a negative chimney draft causes smoke to come into the room.

Wind can also influence the draft. Draft disturbance can be caused by tall trees, cliffs, or tall buildings; the problem can normally be solved by making the chimney taller.

Draft is simply hot air rising, and high temperature creates strong chimney draft. A good result is achieved when the height and diameter of the chimney fit the appliance. Too strong a draft can cause the heat to be sucked too fast into the chimney; it can be regulated with a damper or a draft regulator.

Other common causes of poor draft:
• Wood quality: Wood with a lot of moisture can cause more smoke than the chimney can dispose of.
• Air systems like air conditioning, bathroom, or kitchen fans might take the air they need from the chimney (negative draft).
• Operating errors: Always open the damper and primary air control before you reload the stove, and open the door slowly.
• Flue pipes: Remember that 90 degree elbows and long horizontal flue pipes restrict the draft.
• A chimney that is too short could give insufficient draft for the fireplace.
• A chimney that is too cold can cause low or negative draft. The flue liner must be correctly connected to the fireplace and the chimney, and have the right dimensions.
• A blocked chimney could be caused by a bird's nest, soot, or tar.

The best solution for a poorly drafting chimney may be a new product called a "Draw Collar," a device installed on top of the stove or fireplace insert that warms up the chimney with an electric coil to get the chimney to draw when the stack temperature is low.



WOOD BURNING FIREPLACE INSERTS

Can I install a wood burning insert into my factory built fireplace?

There are very few manufacturers of wood burning inserts that are listed and approved to be installed into the manufactured or “zero clearance” type fireplace.

Those that are listed have very exacting installation requirements that may be difficult if not impossible to achieve.

We highly recommend not installing an insert into one of these types of fireplaces.

Is a wood burning fireplace insert right for me?

Casual use of an open fireplace is not efficient and does require frequent fire tending, using a lot of firewood. Oversize fireplace flues sweep the firebox heat up the flue and, even worse, the draft of the fireplace gathers up the heated room air as well. A few hours of casual fireplace use can force your central heating system to work harder and cost you money! Also, when you are not using your fireplace you are losing a lot of your heat up the chimney: because most dampers and glass doors do not fit airtight, they allow heated room air to be sucked up the chimney, costing you lots of money. A properly installed fireplace insert solves this problem and puts the heat in the room where you want it.

Here are a few guidelines and suggestions when shopping for that new insert. First, be sure to bring in the measurements of your existing fireplace; width, height, and depth are the most important. Many customers take a digital photo of their fireplace and bring it in, which helps us immensely.

CHIMNEY The first thing we do when installing a fireplace insert is to thoroughly clean your chimney. We want to install the insert into a clean environment so no odors can come back into the house.

CHIMNEY LINERS Fireplace chimney flues are oversized, so installing an insert will require a stainless steel chimney liner to be installed from the insert to the top of the chimney. All gas inserts require aluminum liners to be installed to the top of the chimney. The reason for this is that the new insert is so efficient that it can’t keep the big old masonry chimney hot enough to establish a good draft.

ELECTRIC Many inserts use electric blowers to extract and distribute the heat from the insert's firebox. A nearby electric outlet is required, or, to eliminate the exposed cord from the surround, you may wish to have us install a UL approved hidden wire kit.

SIZING Inserts are sold as two or more components: the insert plus a surround panel and sometimes an optional front. The insert must physically fit into the firebox. The surround panel is designed to cover the gap between the insert and the actual fireplace opening size. Surround panels are offered in differing sizes and styles giving each insert several options.

CLEARANCES Each product is tested and listed to minimum clearances from combustible top and side fireplace trim as well as from combustible mantels. The depth of the facing trim and of the mantel is an important factor when measuring.

HEARTH EXTENSION All wood inserts require the hearth to extend in front of the insert opening. This is a stated distance that varies with each product and is stated in the product manual.

How do I measure my fireplace for an insert?

First, open the glass doors if you have them and measure from side to side at the front of the fireplace opening to establish the Width of the opening.

Second, measure the Height at the front from the bottom of the fireplace to the top of the opening.

Then measure the Depth of the fireplace from the back to the front of the opening in the middle of the fireplace. We realize that most fireplaces have tapered sides and that is ok.

If you have a digital camera or one on your phone, simply take a couple of photos of your fireplace and bring them in with your dimensions. This is like us being there, and is very helpful to us.

We are also happy to come out to your home, at no charge, to help you if you should need it.

VENTING SYSTEMS

Can I vent my woodstove and oil furnace on the same flue?

Most state codes do not allow more than one appliance to be installed into a single flue. So NO, you can't, and it would be very dangerous to do. A flue is a separate passageway within a chimney structure. Many chimneys have multiple flues so several different appliances can be housed in a single chimney. Gas log sets are sold complete with a grate system to hold the set of artificial logs (several styles and sizes are available), a gas burner (propane or natural gas), and even a glowing ember bed to give the set a realistic appeal. Vented gas sets are approximately 20% efficient and are designed to allow casual fire burning rather than to serve as a serious heat source. Most homeowners opt to install a glass door enclosure on their fireplace when using a vented gas log set, as the damper is pinned in an open position. Vented gas log sets are not a remedy for poorly drafting fireplaces and should not be installed into them. We really don't recommend the use of gas logs, as they are so very inefficient, and the fact that you must leave the damper locked open all the time wastes an enormous amount of your home's energy and costs you a lot of money. Highly efficient gas inserts are definitely the way to go.

How do "air cooled" and "solid pack" chimney systems differ?

Air cooled chimneys were designed as a component for manufactured open wood burning fireplaces. These fireplaces are casual use appliances and produce very little heat. Air cooled chimneys have no insulation; they stay cool by circulating cold air past the inner flue. As a result, the air cooled flue stays colder than an insulated chimney, resulting in reduced draft and a greater chance of creosote formation. They are also not required to undergo severe chimney fire testing. Solid pack chimneys are manufactured insulated chimney systems designed for wood and coal stoves as well as for many oil fired appliances. These appliances can produce high flue gas temperatures and can generate large amounts of creosote (which can cause a chimney fire) when tended improperly. These chimneys, known in the industry as Class A chimney systems, are certified to a higher safety standard which requires that the chimney system withstand repeated 2100°F chimney fires. They utilize high temperature insulation and warm up quickly, making them less likely to accumulate creosote and therefore increasing your safety.

GAS HEATING STOVES & FIREPLACES

Do any gas stoves or gas fireplaces work without electrical power?

Yes, most gas stoves and gas fireplaces work with a standing pilot light so no electrical power is needed to ignite or run the appliance.   In the northeast region of the USA, many homeowners seek an alternate heating source to use during the frequent power outages due to wind, ice and snow storms.  The standing pilot heater-rated appliances are a great option.  The optional blowers, if installed, will not work during the outage but most units rely on the radiant heat transfer and convection heat so they will work just fine with no electric power.  Many of the new electronic ignition fireplaces, inserts and stoves are “Green Smart” and have a built-in battery back-up that allows the unit to be started on simple AAA cell batteries in the event of a power failure.  This feature saves you money on the operating cost of keeping a pilot running when it is not in use.

GETTING STARTED WITH WOOD BURNING

Why do I need to install a stainless steel liner for my wood stove or insert?

Some people think the only justification for lining a perfectly good chimney is that it will help line your hearth retailer's pockets with more of your hard-earned cash. After all, you just plunked down somewhere between $800 and $3000 (or more) for the stove and now they are trying to get more! Please let me explain how this extra step is going to help not just you, but also your chimney sweep and maybe even the fire department. And here's the kicker: it will even save you money in the long run. There are five very good reasons for lining a chimney:
1. Creating the proper flue size.
2. A chimney that is lined from the stove to the top is easy to keep clean.
3. A lined chimney is a safer chimney.
4. Some chimneys that are lined and insulated work better.
5. It may also be a code requirement.

Creating the proper flue size. Some chimneys are built with one purpose and then used for another at a later time. A perfect example of this is the fireplace chimney. These chimneys have large flues that are designed to evacuate the copious amounts of smoke and gases that are created by burning wood in a fireplace with a large opening. The conclusion that many people arrive at after living with this situation is that they are throwing a lot of wood into a fireplace that is nice to look at but doesn't give off much heat! The logical next step is to put a wood burning fireplace insert into the existing masonry fireplace. Here is where the liner will help. Although the chimney usually already has a terracotta liner, the size of the liner is a minimum of 8"x12" and usually 12"x12" or larger. Some would say, "If my chimney is already lined, why do I need another liner inside of it?" So here is the answer. The flue exit on most of the new E.P.A. rated wood burning inserts is 6" in diameter. When the engineers that designed these stoves were testing them, they performed most of their tests on a 6" flue. And therefore it stands to reason that these puppies are going to work a whole lot better on, you guessed it, a 6" flue. I have used the fireplace as an example, but there are many chimneys in older homes that were built with over-sized flues that will work better if lined.

A chimney that is lined from the stove to the top is easy to keep clean. In some cases when an insert is installed into a masonry fireplace, an installer will use a "direct connect" to connect the stove to the chimney flue. In this case a short length of stainless steel flexible liner is connected to the top of the insert and run through the damper and up above the bottom of the first flue tile. The remaining damper opening is then sealed with a steel plate or ceramic fiber blanket. This installation may work fine, but it creates more expense when it is time to clean the chimney. To properly clean the entire chimney in this type of installation, the stove and the venting should be removed and then reinstalled after the chimney is cleaned. Chimney sweeps don't enjoy all of this extra work and therefore charge accordingly. If an insert had been installed in this same fireplace with a full liner, the chimney sweep could run his brush and rods right down to the stove. In that case the only extra step is removing the top baffle, which is very easy to do.

A lined chimney is a safer chimney. I recently saw a report about a house that burned because of a chimney fire. Now chimney fires can start for more than one reason. One is creosote build-up, and that should not be an issue if you are burning an E.P.A. certified stove properly with "Good Wood" and "Good Draft". Another reason is the chimney may be too close to combustible materials. A liner, and in some cases an insulated liner, can make a chimney much safer. Of course, if you are unsure about the condition of your chimney you should always consult a professional like a certified hearth retailer. Now if the house in the news report had an E.P.A. certified stove connected to a properly installed liner, there is a good chance that the local firemen would have been back at the station perfecting a new recipe instead of out risking their lives putting out a fire!

Most chimneys that are lined and insulated work better. One of the best things about using a chimney liner when it's needed is that the stove will work better. When the stove works better you will be happy, and when you are happy so will your hearth retailer. Reminds me of a saying I use often pertaining to my wife; I'm sure you've heard it: when Mother is happy, everybody's happy! Some chimneys are built outside the house, which is not conducive to "Good Draft". A large flue, as I have mentioned earlier, will only magnify the problem. The solution here may require adding an insulated liner, which will allow the flue to stay warmer and as a result will contribute to better draft. Another important point, and an opportunity to bolster my claim that a liner may save you money, can be made about the performance and efficiency gained when installing a liner. If the stove is running well you will be getting more heat from the stove and getting your money's worth from your wood.
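For a rough sense of the size mismatch described above, using the nominal tile sizes mentioned (actual inside dimensions vary), compare the flue cross-sectional areas:

$$A_{6''\ \text{round}} = \pi \times (3\ \text{in})^2 \approx 28\ \text{in}^2,\qquad A_{8''\times 12''} = 96\ \text{in}^2\ (\approx 3.4\times),\qquad A_{12''\times 12''} = 144\ \text{in}^2\ (\approx 5\times)$$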

How do I start a good fire in my new wood stove?

To start a fire you should have at least six things:
1. Draft
2. Fire starters
3. Kindling
4. Dry wood
5. Matches or a lighter
6. Stove top thermometer

Draft is a force in your chimney that is the result of a temperature difference between the air inside the chimney and the air outside, which causes a pressure difference. This pressure difference causes the air inside the chimney to rise up and exit from the top of the chimney. To learn more about draft, see my article called "Good Draft". Fire starters are usually made of a wax-impregnated material like sawdust; one or two placed under the kindling wood makes fire starting easy and without the smoke that you can get from using newspaper. Kindling is wood that is very dry and split into pieces that are no bigger than 1 inch by 1 inch. Dry wood is wood that has been stacked, split, and allowed to dry under cover until it reaches 20% moisture content. To learn more about dry wood, see my article called "Good Wood". Matches or a lighter…need I say more. A thermometer is like a speedometer: it will tell you if you are burning hot enough or too hot.

Now we are ready to start a fire! Like anything else in life, starting a fire will be much easier and more successful if we build a good foundation. Start by placing a couple of fire starters (we like the Rutland brand) on the bottom. Then I use plenty of kindling. I lay down three or four layers of kindling in opposing directions so air can circulate through the layers. At this point I add a few small logs, because by this time there isn't much room to put in much more. I am assuming for this example that draft is present. With my match or lighter I light the two fire starters. Now I can set the air control wide open and close the load door. An important point is that you should NEVER open your ash pan door to get the fire going; this can damage the stove and greatly reduce its useful life. I leave the air control wide open until my thermometer reaches 400 degrees. The fire will burn robustly and do a great job of warming up the chimney and establishing a strong draft. At this point the kindling has probably burned down enough that I can add more wood. When this additional load has caught and I am still seeing a surface temperature of at least 400 degrees, I can turn the air control down for an extended burn.

Why does my chimney have poor draft?

The single most important ingredient for successful wood burning in a modern, clean burning heating appliance (wood stove) is DRAFT! What is draft? Well hold on to your hat, because I'm going to tell you. The dictionary has many different definitions, one of which is "a drawing or a pulling". Incidentally, one of my favorite definitions of draft in the dictionary is the one that refers to "a portion of beer", but I will leave that for another, perhaps later discussion. Draft, in purely technical terms, is a difference in temperature between the flue gases in the chimney and the atmosphere outside the chimney that creates a pressure difference. In nature, areas of high pressure flow to areas of low pressure, all things being relative. We are not talking about a very strong force either. The force of good draft is so weak that it must be measured with very sensitive equipment. In scientific terms it is measured in inches of water column, or somewhere between .05 and .1 inches.

Before moving on, let's look back in time, because as we all know, history is one of the best teachers. If we only used this historical knowledge more, life would be so much easier. In the beginning, when fires were in caves, everyone smelled like smoke, which was probably a blessing compared to how they might have smelled otherwise. As time passed our ancestors discovered the chimney and the best location for the chimney. Of course they knew that the best place was in the middle of the dwelling, running up through the highest point of the structure. Of course I have oversimplified this journey; there was no doubt plenty of unsuccessful trial and error before arriving at this happy place. Flash forward to the present and look at the current state of chimney location. For a few different reasons, mainly aesthetics and space concerns, many of our poor chimneys have been relegated to the cold, desolate outdoors. I am fond of saying, "outside chimneys may look nice and act as an anchor to hold the house down during a hurricane, but the truth is that they usually don't work very well."

So what exactly is a good chimney? A good chimney is one that removes exhaust and also draws combustion air into our clean burning heating appliance. As I have said, it does this because of the force in it called draft. The funny thing is that this force should exist in the chimney even if a stove is not connected to the flue! I have witnessed the "miracle" of draft in some chimneys that was so strong that a piece of paper placed over an open thimble would be held in place by the flow of air up through the chimney! There are always examples of chimneys that defy logic and work when they should not. I have talked to people who own chimneys that are 10 feet tall, originate in a basement, have 5 feet of horizontal run, 4 elbows, and are outside. They swear to me that they work just fine! Don't be fooled by these anomalies; they just got lucky!

There are some constants in chimney construction and location that support good draft:
– Locate the chimney inside the insulated envelope and try to have it terminate above the highest point of the dwelling. (A warm chimney is a happy chimney!)
– Make the flue the same size as the outlet on the stove. (Just as water flow will slow as a river widens, draft will be weaker if the flue size is too big or be restricted if the flue is too small.)
– Use a round flue if possible. (Exhaust flow doesn't like corners.)
– Try not to introduce bends or elbows in the chimney. If you have to use elbows, try not to use more than two, and if you can, use 45 degree elbows instead of 90s.
– Avoid horizontal runs if possible, and if you must use them keep them short. 3 feet is an absolute maximum! (It is not natural for smoke to go sideways!)
– Make sure there are no other openings into the chimney that are diluting the draft, such as leaky clean-out doors or alternate thimbles. (This has the same effect as trying to suck soda with a cracked straw.)
– Check around the proposed termination for obstructions like overhanging branches.
– A good rule of thumb for minimum chimney height is 14 feet.
– Don't locate a chimney in a one story addition attached to a multiple story dwelling.
– Beware of cathedral ceilings. Even if they are in the next room they might affect the performance of the chimney.
– H.V.A.C. ducts, floor vents, and cold air returns can negatively affect draft.
– Be aware of anything that might remove air from the house, like bathroom fans, kitchen range hoods (particularly down-draft), open second floor windows, exhaust fans, and open fireplace dampers, to name a few.
– Make sure the chimney is cleaned regularly, and don't forget about the cap and connector pipe.
– Start and burn your fire hot enough to help sustain good draft. (A stovetop thermometer is a must.)

Inside Chimney vs. Outside Chimney: Does it matter?

I would say it matters, “tons” which is incidentally what a masonry chimney can weigh, but I will tell you more about that later.  To understand more about this subject let’s identify and define the major players in our game. The Chimney is a vertical structure extending above the roof of a building for carrying off smoke. (Merriam-Webster) The Stove is any manner of contraption with a load door that you can put wood in, some means of controlling combustion air, an outlet to allow smoke to escape and if you bought a good EPA stove it looks good even when it’s not burning. Draft, which is a temperature difference between the inside of the chimney and the air outside which causes a pressure difference that allows the air inside the chimney to rise up and exit from the top of the chimney. The Inside is a place that is warm and dry and tastefully decorated and a good place to have a glass of wine with your significant other when it’s cold and raw outside. The Outside is that cold and raw place that I was just talking about! Since the Renaissance period when the world suddenly became a much happier and artsy place people have been building chimneys inside the domicile.  The British may not have contributed greatly to the artistic community but they knew that the chimney belonged inside the house.  The Pilgrims may have endured a great deal learning how to adapt to the new world but they brought the knowledge with them that the chimney should be built in the middle of the house.  Somehow this state of enlightenment has eroded and many chimneys (even as you read these words) are being built outside of the house. So why exactly does the inside chimney function better than the outside chimney?  It has a lot to do with the mysterious and wonderful force of draft. Remember in my definition I mentioned that draft was a temperature difference that causes a pressure difference.  So getting back to the beautiful (looking) masonry outside chimney, we are going to have quite a challenge heating up tons of mass to create and maintain our desired temperature difference.   That poor chimney is outside, where remember it is cold and raw and now we not only want it to look good but to work as well!  If we are starting with a cold stove and it is 20 degrees below outside, the air inside the chimney may be 20 below too.  Depending on other factors going on inside the house such as negative pressure, air may actually be coming down the chimney which is far less desirable than say Santa Claus.  We might just get a house full of smoke before we get the chimney warm enough to support draft. As a side note let’s talk a little about why an old “air tight” stove that cost $600 worked on that outside chimney and this new $2000 EPA certified stove won’t. The older stove was not very efficient and that meant that lots of the heat being produced went up the chimney.  This worked very well to warm up the chimney but was also a great waste of heat.  The newer EPA certified stoves are far more efficient and don’t lose nearly as much heat up the chimney.  In a chimney that supports good draft and when burning good DRY hard wood you can expect to burn less wood and get more heat than you would have from the old “air tight” stove! Now let’s move the chimney inside the house and see what happens.   It all sounds so easy on paper!  One thing I haven’t mentioned is that the chimney should also exit the top of the house at or as near as possible to the top of the insulated envelope.  Our inside chimney is a happy chimney.  
The house acts like a big blanket wrapped around it to keep it warm.  Now we can go back to that 20 below zero day but this time most of the air in the chimney is 70 degrees.  (Or very close to the same temperature that the air in the house is.)  When we start a fire in a stove that is connected to this chimney the smoke is going to go in the right direction because we already have that all important temperature difference that will support good draft!  The fact is that this chimney will probably be drafting even in a static state or when there is no fire in the stove.  That as they say is “a good thing”!  So the moral to the story is, “If you have a choice build your chimney inside the house where it will be happy and more importantly so will you!”

What does house pressure have to do with my wood stove?

Did you even know that you house was under pressure?  Do you care?  Read on and you will see that Indeed it does have a profound effect on the successful operation of your stove. So what is house pressure?  Well to explain what it is we have to understand a little about relativity.  Don’t worry you don’t have to be Einstein to understand relativity.  First to make sure we are all on the same page we are talking about air pressure!  The pressure inside the house is relative to the pressure outside the house.  It may be either higher or lower than the pressure outside.  If the pressure inside the house is positive or higher than the pressure outside and a window or door is opened air will leave or flow out of the house.  If the pressure inside the house is negative or lower than the pressure outside the house air will flow into the house when that door or window is opened.  If for some reason all of the doors and windows in the house were open equilibrium would be reached and the pressure would be the same inside and out.  Nature loves balance!  That seems easy enough to understand.  Nature has its own very predictable but perhaps not very well known rules.  One of them is that areas of high pressure flow to areas of low pressure.   Of course when it comes to burning a wood stove in the house we will be looking at what effect house pressure has on chimneys. Wouldn’t it be great if that was all there was to it!  We could all high five and walk away.  Of course as with most things in life there is a little more to it than that.  In fact in most houses there is an area of negative pressure, an area of positive pressure and a magical place in between called the Neutral Pressure Plane (NPP).  The NPP is the place where the pressure inside the house is equal to the pressure outside the house.  They are all in a state of flux, changing quite literally with the wind and many other factors.  The negative pressure area is typically located in the lower portion of the house and positive area is normally in the upper portion.  The NPP as I have mentioned is between them.  The NPP is often depicted as a straight line but it can actually be slanted or wavy and can jump around from level to level. So let’s apply some of what we are talking about to wood stoves and chimneys.   There are two openings in our system, the door or the air control on the inside of the house and the chimney termination on the outside.  If we put our system in an area of negative pressure the chimney, which is a conduit that air or flue gasses can flow through, might like an open door or window, allow air to flow into the house, especially if it is an outside chimney.  If we locate the system in an area of positive pressure the air should flow out of the house. Now let’s add some variables that can sabotage our system.  Anything that will take air out of the house mechanically like, but not limited to down draft ranges, bathroom exhaust fans, dryers, whole house fans, shop exhaust fans and range hoods can create negative pressure.  Recessed lighting is another culprit. If not sealed properly they are like holes in the ceiling that air will flow through and raise the NPP creating a greater area of negative pressure.  A masonry fireplace with an open damper may be taking air out of the house and creating negative pressure.   Some people sleep with a window open on the second floor and that can raise the NPP.  There are other culprits but I think you get the picture. 
So what is the solution to stopping all of these forces that are trying to get between us and a nice warm fire in the wood stove?  The best possible solution is to locate the chimney inside the house and have it run up through the highest point in the insulated envelope.  The opposite of this is a chimney located outside, which is almost certainly doomed to fail.  If the chimney is inside the house and terminates through the highest point of the roof, we achieve many desirable results.  First and foremost we keep the chimney warm, and a warm chimney is a happy chimney!  That matters because a good chimney produces draft, and draft is a temperature difference that produces a pressure difference that pulls air or flue gases up the chimney.  It is much easier to keep a chimney warm when it is located inside the house; just think of the temperature difference when it is 70 degrees inside and below freezing outside.  Because the warm, happy chimney is producing strong draft, it will be able to compete with all of the other forces that are trying to keep it from doing its job.  Remember that pesky little NPP I was talking about?  Well, a chimney located inside the house will have a neutral pressure plane higher than the NPP of the house, and the result is a chimney that has draft even when the stove is not running!  Let’s not forget the best part: with all the cards in our favor, the wood stove will be responsive to control and will provide the heat we are after.

What makes good firewood?

There are many different factors that affect the outcome of successful wood burning.  Many will argue as to which one is most important, but instead of arguing let’s just start with wood!  If a tree falls in the woods and there is no one there to hear it, does it still make a sound?  Well, if nobody has told you yet, yes, it does make a sound.  The real news is that when a tree falls it begins to decay and releases carbon dioxide.  Coincidentally, the very same release of CO2 occurs when we burn wood!  In the greater scheme of things the effect on the atmosphere is virtually the same.  Why does this matter to us?  Because unlike burning fossil fuels, which releases otherwise trapped CO2 into the atmosphere, burning wood does NOT add to the greenhouse effect!  Now I don’t know about you, but knowing that makes me feel all warm and fuzzy.

We see trees as a renewable source of energy that, when cut, split and dried under cover, will provide heat.  In a word, wood is “FUEL”!  The question we have to ask ourselves is what type of wood makes the best fuel to burn in our modern EPA-rated wood burning appliances?  And the answer is “it depends.”  In the Northeast almost everyone (not quite everyone, believe me, I’ve talked to a few) knows that good dry hardwood like oak, beech, maple, ash and birch is best.  However, in some areas like the Rocky Mountains softwood is plentiful and hardwood is almost nonexistent.  Can you burn dry softwood?  You bet!  You just can’t achieve the long burn times printed on those darn brochures with dry softwood.  The main difference between dry hardwood like beech and dry softwood like quaking aspen is density.  Beech is more dense (no, that doesn’t mean the aspen is smarter), so there is more weight in the same volume.  On a larger scale, there is more heat value, more BTUs, in a cord (4’x4’x8’, or 128 cubic feet) of beech than there is in a cord of quaking aspen.  In plain English, a cord of beech will burn longer and give off more heat (in the same appliance, chimney, house, etc.) than a cord of aspen.

Has anyone picked up on the fact that I keep using the word DRY in front of wood?  We have all heard it and said it, but what does it mean?  Is finding dry wood like finding the Holy Grail?  During certain times of the year it can be.  Dry wood does not come from a place advertised in a newspaper in October.  Dry wood does not happen at the speed of e-mail or cell phones.  Dry wood is the result of a long and deliberate process that involves planning and, dare I say, thought.  They say that good things are worth waiting for, and if you have ever tried to coax heat from wet wood you will agree.

Let’s attack this subject from a different angle and talk about “wet wood,” or as some call it, “green wood.”  As an old Vermonter once told me, the same thing that makes wood wet makes maple trees so popular in the springtime.  The stuff coming out of those taps and dripping into those buckets: you know, sap.  They pour all that sap into great cauldrons and build blazing fires under them (with good dry hardwood).  The sap bubbles and boils and gives off great clouds of steam, but at no time has the sap ever burst into flames… because IT’S WATER!  And as we all know, water does not burn; as a matter of fact, ask any fireman, it’s what they use to put out fires!  In scientific terms, the moisture content of wet or green wood can be 50% or more.  That would mean that a log weighing 4 pounds has 2 pounds of water in it.
Even so-called dry wood has about 20% moisture content, but for our purposes that is just fine.  So how do we obtain this elusive prize?  It’s quite easy, actually, if we plan ahead and use our heads.  Depending on the species, wood should be cut, split and allowed to dry under cover for 6 months to 2 years.  The woodpile should be elevated off the ground, on pallets or by some other method, and covered on top but left open on the sides.  It is important that the wood is protected from rainfall but allowed to be gently caressed by the warm summer wind.  Another good idea is to get the woodpile out into the open as much as possible; there is nothing quite as powerful as the sun when it comes to properly seasoning wood.  Sounds crazy, I know, but your wood will thank you the following winter by providing you with plenty of nice heat.

One other important tip is log length.  If you have a stove that will accept a 22” log and you have your wood cut to 16” lengths, you are leaving part of your “tank” empty.  Have your wood cut 2” shorter than your firebox dimension; then, if there is a little variation in actual length, the logs will still fit.  My final point is about timing.  The right time to buy wood is well before it is going to be burned, and a nice bonus is that the cost can be considerably less for green wood.  Remember, depending on the species, the seasoning time can be as long as 2 years.
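To put the moisture numbers above into heat terms, here is a back-of-the-envelope sketch using the article’s wet-basis percentages. The heat content of oven-dry hardwood and the energy needed to boil off the water are rough assumed figures, so treat the output as an illustration of the trend, not a lab result.

```python
# What the water in firewood costs you, using the article's wet-basis numbers
# (green ~50% water by weight, "dry" ~20%).  Both constants below are rough
# assumptions: heating value of oven-dry hardwood and the heat spent warming
# and evaporating the water it contains.
BTU_PER_LB_DRY_WOOD = 8600          # approx. higher heating value of oven-dry hardwood
BTU_TO_BOIL_OFF_1LB_WATER = 1100    # warm from room temperature + evaporate (~970 BTU/lb latent)

def useful_heat_btu(log_weight_lb, moisture_wet_basis):
    water = log_weight_lb * moisture_wet_basis
    dry_fiber = log_weight_lb - water
    gross = dry_fiber * BTU_PER_LB_DRY_WOOD
    lost_to_steam = water * BTU_TO_BOIL_OFF_1LB_WATER
    return gross - lost_to_steam

print(useful_heat_btu(4, 0.50))  # green 4 lb log: 2 lb of water, ~15,000 BTU left over
print(useful_heat_btu(4, 0.20))  # seasoned 4 lb log: ~0.8 lb of water, ~26,600 BTU left over
```

Same four-pound log, nearly twice the usable heat once it has been properly seasoned, which is why the planning the author describes pays off.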

GAS FIRED FIREPLACE INSERTS

How do fireplace gas log sets and gas fireplace inserts differ?

Gas Fireplace Inserts are designed to be installed (inserted) into approved wood-burning fireplaces.  Designer fronts and surrounds are offered in several styles.  An approved gas liner must be installed in the masonry chimney to vent a gas-fired insert.  Gas Fireplace Inserts are available as heater-rated, high-efficiency appliances that are capable of heating even large areas.  Direct Vent Gas Fireplace Inserts are enclosed behind a fixed glass front, and their outside-air combustion system is part of the venting, making them ideal for tightly insulated homes or poorly drafting wood-burning fireplaces.

Gas Log Sets are sold complete with a grate system to hold the set of artificial logs (several styles and sizes are available), a gas burner (propane or natural gas) and even a glowing ember bed to give the set a realistic appeal.  Vented gas log sets are approximately 20% efficient and are designed for casual fire viewing rather than as a serious heat source.  Most homeowners opt to install a glass door enclosure on their fireplace when using a vented gas log set, because the damper is pinned in an open position.  Vented gas log sets are not a remedy for poorly drafting fireplaces and should not be installed in them.  We really don’t recommend the use of gas logs: they are very inefficient, and the fact that you must leave the damper locked open all the time wastes an enormous amount of your home’s energy and costs you a lot of money.  Highly efficient gas inserts are definitely the way to go.
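The efficiency gap translates directly into the cost of delivered heat. A minimal sketch, assuming an illustrative gas price and an assumed 80% efficiency for a sealed insert (the 20% figure for vented log sets is from the text; the other numbers are not):

```python
# Cost of delivered heat per million BTU at two appliance efficiencies.
# The $1.50/therm price and 80% insert efficiency are illustrative assumptions.
THERM_BTU = 100_000

def cost_per_million_btu_delivered(price_per_therm, efficiency):
    delivered_per_therm = THERM_BTU * efficiency
    return price_per_therm * 1_000_000 / delivered_per_therm

print(cost_per_million_btu_delivered(1.50, 0.20))  # vented log set: $75.00 per million BTU delivered
print(cost_per_million_btu_delivered(1.50, 0.80))  # sealed insert:  $18.75 per million BTU delivered
```

At the same gas price, the vented log set costs roughly four times as much per unit of heat actually kept in the house, before even counting the warm room air lost up the permanently open damper.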

The Gatlin – Fujinon GF100-200mm f/5.6 first look review – jonasrask

I actually promised myself. I promised myself that I wouldn’t be writing my thoughts on this lens. My life has been very very hectic during the course of the last couple of months, so I don’t really think I have given this lens the amount of attention it deserves for a proper review. And up until a couple of hours ago I still stood by that thought. But seeing how few people have actually handled this lens, and actually had some playtime with it, I thought I would do a little bit of public service anyway.

GFX50R – GF100-200 @ 100mm f/5.6 ISO200

As is to be expected, and so as not to trigger any of the lurking trolls of ze internet, here are some disclaimers before we proceed.

Disclaimer: The lens used in this review is a pre-production prototype lens. Image quality might therefore not be final.
Disclaimer II: All productshots of the camera are shot by me for Fujifilm Corp.
Disclaimer III: I’m an official X-photographer. That’s spelled brand-ambassador. My views are most likely as biased as they come. This being said, I’m an open and honest guy and I speak my opinions. I have used, and still use to this day, all other imaginable camera systems, be they analogue, digital or pinhole. Whether you believe my views regarding this camera or not is up to you.

GFX50R – GF100-200 @200mm f/5.6 1/60s ISO400

Alright, so, f/5.6 huh? – That’s probably the first thing that comes to mind when people see the specs of this lens. “A measly, lousy f/5.6 maximum aperture – what the..?”
Yes, it’s f/5.6 maximum, and no, it will not give you narrow DOF like a 110mm f/2. – Now that we have that out of the way, let’s try to focus on the real photography aspects of a lens like this.
The GFX lens lineup is growing nicely, and has so far consisted mostly of prime lenses. We have the 32-64 zoom, but no zoom lenses in the medium-long tele range. Well, that all changes now. With the GF100-200 a lot of photographers now have the longer focal lengths handy in a relatively compact option.
When I say a relatively compact option, I mean compact given that designing good-quality lenses for a medium format sensor means they grow in size compared to the compact APS-C/full-frame options. It’s just physics, no use whining about it. Just go to the gym, get fit and enjoy a good-quality lens.
So who will benefit most from this lens? Well, to me it’s quite clear that this lens is for nature and landscape photography. It’s a versatile lens when you want to do detailed landscapes with good, clean compression.
It’s actually also quite well suited to portraiture photographers, especially in studio settings where artificial lighting negates the relatively weak light-gathering capability of the f/5.6 aperture.
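To put some rough numbers on the f/5.6 discussion, here is the usual full-frame equivalence arithmetic. The 43.8 × 32.9 mm sensor dimensions are the commonly published GFX figures; the result is a ballpark comparison of framing and depth of field, not an official spec.

```python
# Full-frame "equivalence" sketch for the GF100-200mm f/5.6 on GFX.
import math

def diagonal(w, h):
    return math.hypot(w, h)

FF = diagonal(36.0, 24.0)    # full-frame diagonal, ~43.3 mm
GFX = diagonal(43.8, 32.9)   # Fujifilm GFX 44x33 sensor, ~54.8 mm
crop = FF / GFX              # ~0.79

for f in (100, 200):
    print(f"{f}mm f/5.6 on GFX ~ {f*crop:.0f}mm f/{5.6*crop:.1f} equivalent on full frame")
# -> ~79mm f/4.4 and ~158mm f/4.4: similar framing and depth of field,
#    which is why it cannot isolate a subject the way the 110mm f/2 can.
```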

Specifications

No use for me to rewrite all the specs of this lens, so here they are from the Fujifilm.com specifications.

Type: FUJINON LENS GF100-200mmF5.6 R LM OIS WR
Lens configuration: 20 elements in 13 groups (including one aspherical lens element and two Super ED lens elements)
Focal length: f=100-200mm (equivalent to 79-158mm in the 35mm film format)
Angle of view: 30.6° – 15.6°
Max. aperture: F5.6
Min. aperture: F32
Aperture control: 9 blades (rounded diaphragm opening), 1/3 EV step size (16 steps)
Focus range (from the sensor surface): 0.6m~∞ (wide-angle end), 1.6m~∞ (telephoto end)
Max. magnification: 0.2x (wide-angle end)
External dimensions, diameter x length (approx., from the camera lens mount flange): Φ89.5mm x 183mm
Weight (approx., excluding the lens cap, lens hood and tripod collar foot): 1,050g
Filter size: φ67mm
Accessories included: Lens cap FLCP-67II, lens rear cap RLCP-002, lens hood, tripod collar foot, lens pouch

A thing to note is that the weight is relatively modest given the physical size. This lens weighs just above a kilogram. That is indeed quite light and portable for what it is.

The lens is built with space at the back to fit the GF 1.4X teleconverter. I did not test out the combination, but it will obviously give you a focal length range of 140-280mm, at the cost of one full stop of light-gathering ability.
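As a quick illustration of that one-stop cost (the 1.4x factor and f/5.6 come from the text; the rest is plain arithmetic):

```python
# What a 1.4x teleconverter does: focal length scales by 1.4x, and so does
# the effective aperture number, which is where the one-stop loss comes from.
import math

TC = 1.4
focal = (100, 200)
max_aperture = 5.6

converted = tuple(f * TC for f in focal)
effective_aperture = max_aperture * TC
stops_lost = 2 * math.log2(TC)      # aperture ratio expressed in stops

print(converted)                    # (140.0, 280.0) mm
print(round(effective_aperture, 1)) # ~7.8, i.e. roughly f/8
print(round(stops_lost, 2))         # ~0.97, about one full stop
```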

GFX50R – GF100-200 @189mm – f/5.6 – 1/8s – ISO3200

Build and feel

Well I’m not going to spend too much time describing this. It is an extremely well built lens. Just like the rest of the GF series it has a very nice build quality. It bears a definite resemblance to the GF250mm f/4, which is definitely not a bad thing.

The focus is smooth and dampened, the aperture clicks are perfect. It is really really well built.
I used it on a GFX50R, and I found it to balance quite alright because of the low weight. It will, however, balance much better on a 50S, maybe even with a battery grip.
The LM and OIS do make a little hissing noise that you can hear when you’re lying there waiting for them birds to show up. It’s no worse than all other Fujinon OIS lenses on the market.
The zoom is internal, as well as the focus, so the barrel will not elongate while zooming. Obviously this lens is weather resistant.

The autofocus is fast and precise, and the OIS gives you 5 stops of image stabilisation. This is really important for a maximum f/5.6 aperture lens that requires a lot of light or a very slow shutter speed to avoid having to up the ISO. I didn’t use a tripod for any of my testing, but I’m sure the target landscape audience will use the lens on a tripod. This way you will have no use for the OIS anyway. But for those of us who like to shoot handheld, it is actually possible to get sharp handheld shots at shutter speeds close to 1/16th sec. at 200mm. Quite impressive if you ask me.
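As a hedged sanity check on that handheld claim, the old 1/(equivalent focal length) rule of thumb plus the quoted 5-stop OIS rating suggests the following; the 0.79 crop factor is the usual GFX figure, and the rule itself is only a rough guide that varies from shooter to shooter.

```python
# Rule-of-thumb slowest handheld shutter speed, with and without the quoted OIS rating.
def slowest_handheld_seconds(focal_mm, crop=0.79, ois_stops=5):
    base = 1.0 / (focal_mm * crop)   # ~1/158 s at 200mm with no stabilisation
    return base * (2 ** ois_stops)   # each stop of OIS doubles the usable exposure time

t = slowest_handheld_seconds(200)
print(f"~1/{round(1/t)} s")  # ~1/5 s in theory; 1/16 s in practice sits comfortably inside that
```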

The hood features a polariser filter hatch, again, just as expected. It works well. The front filter diameter is relatively modest at 67mm, so it will hold your 100×100 square filters with no issues whatsoever.

Image Quality

Obviously the most important aspect of all lenses is….tadaaaaa…… Image Quality. The GF100-200mm f/5.6 of course does not avoid judgement in this area. Luckily for this lens the image quality is actually quite good.
First impression is that it bears a really pleasant sharpness. It’s not macro-lens sharp like the GF120mm f/4; it’s more of a pleasing sharpness, like the 110mm f/2. This makes it really good for portraiture at the longer focal lengths. Just be aware that shooting portraits at 200mm tends to compress the ears of your subject closer to the nose, making it look like they gained an extra couple of pounds….

100mm

200mm

(The zoom range can be seen in the two images above.)

Out-of-focus areas are rendered much better than I expected. Of course you can push the envelope, and under certain conditions I found that very busy backgrounds with foliage did tend to get a little messy. But that is the extreme end of the spectrum. Under all other circumstances in which I shot the GF100-200mm, the bokeh was very smooth and very pleasing to the eye. Especially the foreground bokeh is good.

I did not notice much difference in quality at either ends of the zoom range. The MTF curves will probably tell you if there is a difference, but I haven’t looked at them yet – and honestly I’d rather be outside shooting the lens and getting to know it.

It is really sharp throughout the aperture range, and nature and landscape photographers will love that at f/32 you have minimal aberrations, so you get good resolution even at these small apertures.
At f/5.6 this lens is really nice and sharp. The only downside to the f/5.6 moniker is that it requires a lot of light. I live in Denmark, and we do not have a lot of light at this time of year, so I can honestly say that it was quite a challenge to get really clear shots without having to up the ISO. – A definite downside of relying on natural light (and trying to do winter-people-long-tele-low-aperture-steady shots).

GFX50R – GF100-200mm @200mm f/5.6 1/250s ISO400

Conclusion

So what do I have to conclude after such a short period of time using this lens?

Well, first and foremost, this lens is not for my type of shooting. I enjoyed using it, but come on, I’m a street shooter not a landscape sorcerer. So the whole premise of me giving my thoughts on this lens is a little off to begin with.
Secondly, I have no issue with the lens being f/5.6 at its widest aperture when I know the type of work this lens will be used for requires f-stops around f/16-f/32, at least in terms of depth of field. In terms of light-gathering ability, however, it is a different issue. Even though the OIS is very helpful, you will need to up your ISO for sharp shots when shooting handheld in less-than-ideal lighting situations.

This lens is gorgeously built and has image quality that yet again confirms Fujinon as one of the world’s best optics manufacturers.
And priced at 1,999 USD, this lens is far from expensive.
If you’re a landscape, nature, or even studio photographer, this lens will probably be one of your most prized GF lenses.


Fujifilm GF 100-200mm Review


Fujifilm GF 100-200mm f/5.6 R LM OIS WR

(80-160mm equivalent)


Fujifilm Fujinon GF 100-200mm f/5.6 R LM OIS WR (metal 67mm filter thread, 41.315 oz./1,171.2g with collar as shown, 36.950 oz./1,047.6g without 4.370 oz./123.95g collar, 2~5¼’/0.6~1.6m close focus, $1,500 new or about $1,200 used if you know How to Win at eBay).

I’d get mine at Adorama, at Amazon or at B&H, or used at eBay.


 


 

Sample Images


Half Dome in Clouds at Sunset, Yosemite Valley, 7:04 P.M., 07 May 2019. Fujifilm GFX 50R, Fujifilm GF 100-200mm f/5.6 OIS at 123.8 mm, f/11 at 1/20 at ISO 100, Perfectly Clear v3.7.

Yosemite Falls, 5:01 P.M., 07 May 2019. Fujifilm GFX 50R, Fujifilm GF 100-200mm f/5.6 OIS at 200mm, f/22 at 1/320 hand-held at ISO 100, Perfectly Clear v3.7, split-toned print.

Tree, Yosemite Valley, 7:04 P.M., 07 May 2019. Fujifilm GFX 50R, Fujifilm GF 100-200mm f/5.6 OIS at 100mm, f/11 at 1/2 at ISO 100, Perfectly Clear v3.7, split-toned print.

Trees in the Merced River, Yosemite Valley, 2:58 P.M., 08 May 2019. Fujifilm GFX 50R, Fujifilm GF 100-200mm f/5.6 OIS at 100mm, f/16 at 1/18 at ISO 100, Perfectly Clear v3.7, split-toned print.

Rainbow in Bridal Veil Falls, Yosemite Valley, 4:15 P.M., 08 May 2019. Fujifilm GFX 50R, Fujifilm GF 100-200mm f/5.6 OIS at 200mm, f/16 at 1/42 at ISO 100, Perfectly Clear v3.7.

Backlit Tree, Yosemite Valley, 5:05 P.M., 08 May 2019. Fujifilm GFX 50R, Fujifilm GF 100-200mm f/5.6 OIS at 164.5 mm, f/5.6 at 1/170 at ISO 100, Perfectly Clear v3.7, split-toned print.

Two Pine Trees on the Bank of the Merced River, Yosemite Valley, 7:26 P.M., 08 May 2019. Fujifilm GFX 50R, Fujifilm GF 100-200mm f/5.6 OIS at 100mm, f/16 at 2 seconds at ISO 100, Perfectly Clear v3.7.

Alpenglow, Yosemite Valley, 7:52 P.M., 08 May 2019. Fujifilm GFX 50R, Fujifilm GF 100-200mm f/5.6 OIS at 200mm, f/11 at 1/6 at ISO 100, Perfectly Clear v3.7.

Three Textures, Merced River, Yosemite Valley, 7:40 A.M., 09 May 2019. Fujifilm GFX 50R, Fujifilm GF 100-200mm f/5.6 OIS at 123.8 mm, f/22 at 1/6 at ISO 100, Perfectly Clear v3.7, split-toned print.

Flowing Water, Merced River, Yosemite Valley, 8:23 A.M., 10 May 2019. Fujifilm GFX 50R, Fujifilm GF 100-200mm f/5.6 OIS at 158.8 mm, f/16 at 1/7 second at ISO 100, Perfectly Clear v3.7, split-toned print.

More at Springtime in Yosemite, May 2019.

Introduction


New   Good   Bad   Missing


This GF 100-200mm lens is an all-purpose telephoto for Fujifilm’s GFX series of medium format cameras.

As we’d expect for a premium lens with such a limited (2:1) zoom range and slow (f/5.6) speed, its optical performance is flawless.

Its limited zoom range is perfect for medium format, allowing us to make very precise compositional adjustments while on a tripod.

It’s as big as, but weighs less than, 70-200mm f/2.8 lenses for full-frame cameras. This is an easy lens to carry all day, and it’s also super-easy and smooth to zoom very precisely with just a fingertip, even if pointed up or down.


 

New

Fujifilm’s first telephoto zoom for its digital medium-format GFX system.

A (AUTO) and C (Command-dial-controlled) positions of aperture ring now lock.

 

Good

Flawless optics.

Dedicated, locking aperture ring.

Stops down to f/32.

Internal focus and zoom, nothing moves externally.

Image Stabilization.

Smooth, precise one-fingertip zooming.

Metal filter threads.

Made domestically in Japan.

Works with GF1.4x TC WR Teleconverter.

Tripod collar included.

Weather- and dust-sealed in 10 areas:

Fujifilm 100-200mm weather gaskets.

 

Fujifilm GFX 50R and GF 100-200mm covered in spray at the base of Yosemite Falls.

 

Bad

Electronic manual-focus ring, not a real mechanical one.

 

Missing

Removable tripod collar included, but it has no 90º click stops.

No focus scales, which could have been used for computing optimum apertures.

No depth-of-field scales.

 

Specifications


 


 

Name

Fujifilm calls this the FUJINON LENS GF100-200mmF5.6 R LM OIS WR.

Fujinon is Fujifilm’s brand name for their lenses.

GF means it works with Fujifilm’s GFX medium format system.

R means it has an aperture ring.

LM means Linear (autofocus) Motor.

OIS means Optical Image Stabilization.

WR means weather resistant.

Fujifilm’s model number is GF100-200mmF5.6 R LM OIS WR or 600020702.

 

Also has:

∅67: Takes 67mm filters.

Aspherical: Uses aspherically-shaped lens elements for sharper pictures.

Super EBC: Fujinon’s brand of multicoating, standing for Super Electron-Beam Coated.

 

Optics

Fujinon internal optical construction. Aspherical and Super ED elements.

20 elements in 13 groups.

Two Super ED extra-low dispersion elements, which help reduce secondary axial chromatic aberration.

One Aspherical element.

Internal focussing.

Internal zooming; doesn’t change length as zoomed.

Super Electron-Beam Coating (EBC).

 

Coverage

33 × 44mm Medium-Format (55mm image circle).

 

Diaphragm

Fujifilm GF 100-200mm (diaphragm not shown).

9 rounded blades.

Electronically actuated.

Stops down to f/32 in 1/3-stop clicks.

 

Focal Length

100~200mm.

When used on Fuji’s 33 × 44mm Medium-Format cameras it sees the same angles of view as an 80~160mm lens sees when used on a full-frame (24 × 36mm) camera.

See also Crop Factor.

 

Angle of View

30.6º ~ 15.6º diagonal on GFX medium format.

 

Autofocus

Internal focussing.

Rear focussing.

No external movement as focussed, so no air or dust is sucked in.

 

Focus Scale

No.

 

Infinity Focus Stop

No.

 

Depth of Field Scales

No.

 

Reproduction Ratio Scale

No.

 

Infrared Focus Indices

No.

 

Close Focus

2 feet (0.6 meters) at 100mm to 5¼ feet (1.6 meters) at 200mm. (distances measured to image plane; distances to front of lens are closer.)

 

Maximum Reproduction Ratio

1:5 (0.2×) at 100mm. Yes, it gets biggest and closest at the 100mm, not 200mm, end.
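A quick sketch of what that 0.2× figure means in the field, assuming the usual 43.8 × 32.9 mm GFX sensor dimensions: at maximum magnification, the subject area that just fills the frame is the sensor size divided by the magnification.

```python
# Smallest subject area that fills the frame at the 1:5 (0.2x) maximum magnification.
SENSOR_MM = (43.8, 32.9)   # GFX sensor width and height, in millimetres
MAX_MAG = 0.2

subject_area = tuple(s / MAX_MAG for s in SENSOR_MM)
print(subject_area)  # (219.0, 164.5) mm: roughly a 22 x 16 cm subject fills the frame
```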

 

Image Stabilizer

Yes, not rated for stops improvement.

 

Filters

Metal 67 mm filter thread.

 

Hood

The hood is included and has a little sliding door so you can rotate your grads and polarizers.

Fuji 100-200mm hood.

Bottom, Fuji 100-200mm hood.

 

Case

Black sack included. A sock works better.

 

Tripod Collar

The tripod collar is included and removable.

There are no 90º click stops.

 

Size

3.52″ Ø maximum diameter × 7.20″ extension from flange.

89.5 mm Ø maximum diameter × 183 mm extension from flange.

 

Weight

41.315 oz. (1,171.2 g) with collar.

36.950 oz. (1,047.6 g) without collar.

4.370 oz. (123.95 g), collar only.

Rated 37.0 oz. (1,050 g).

 

Quality

Made in Japan.

 

Environment

Rated to work down to -10º C (+14º F).

 

Announced

Fujifilm’s Lens Roadmap, 25 September 2018.

 

Included

Lens.

FLCP-67 II 67mm front lens cap (p/n 16539807).

RLCP-002 G-mount rear cap (p/n 16539730).

Hood.

Tripod collar.

Black sack.

 

Packaging

Box, Fujifilm GF 100-200mm.

Microcorrugated cardboard box. Corrugami top section holding sack, closed-cell white foam below holding lens.

 

Fujifilm’s Fujinon Model Number

GF100-200mmF5.6 R LM OIS WR or 600020702.

 

Price, USA

$1,500 new or about $1,200 used if you know How to Win at eBay, May 2019.

 

 

Performance


 

Overall   Autofocus   Manual Focus   Breathing   Bokeh

Distortion   Ergonomics   Falloff   Filters   Flare & Ghosts

Lateral Color Fringes   Macro   Mechanics   Sharpness  

Spherochromatism   Stabilization   Sunstars

 


 

Overall


This is an essentially flawless lens. Optics are flawless and operation is superb, too.

 

Autofocus


Autofocus speed is moderate, and it’s silent.

 

Manual Focus


Manual focusing is entirely electronic; the manual focus ring isn’t connected to anything other than a digital encoder.

Manual focus speed is dynamic: it focuses more quickly or more precisely based on how fast you turn the ring. It’s easy to get from one end of the range to the next, and then to get precise magnified focus.

 

Focus Breathing


Focus breathing is the image changing size as focused in and out. It’s important to cinematographers that the image not breathe because it looks funny if the image changes size as focus gets pulled back and forth between actors. If the lens does this, the image «breathes» by growing and contracting slightly as the dialog goes back and forth.

The image from the 100-200mm gets a little smaller as focussed more closely.

 

Bokeh


Bokeh, the feel or quality of out-of-focus areas as opposed to how far out of focus they are, is neutral to good. It’s nice and soft.

Here are photos from headshot distance wide-open:

Davis 6250 weather station, 04 May 2019.

Davis 6250 weather station, 04 May 2019.

As always, if you want to throw the background as far out of focus as possible, shoot at f/5.6 at 200mm and get as close as possible.

 

Distortion


The Fujinon 100~200mm has no visible distortion, at least as shot on a GFX 50R which is probably correcting any that’s there in its default Digital Lens Modulation Optimizer.

These aren’t facts or specifications, they are the results of my research that requires hours of photography and calculations on the resulting data.

On 33 × 44mm at 10′ (3m), correction factor with Digital Lens Optimizer ON:

100mm: +0.10
130mm: ±0.00
150mm: ±0.00
170mm: ±0.00
200mm: ±0.00

 

Ergonomics


Fujifilm GF 100-200mm.

This lens handles great.

The focus ring is only electronic, so who knows when it’s working or not depending on how you have the camera set.

Zooming works so much better than other lenses: it’s smooth, easy to turn and half the lens is zoom ring. A fingertip is all it takes to zoom precisely, even pointed straight up or down.

The zoom is so nice you’re going to get caught just turning it back and forth for no reason as you’re relaxing while counting your profits from the great photos you’ll be selling made with this lens.

 

Falloff


Falloff is invisible, at least with the GFX 50R’s default Lens Modulation Optimizer ON.

I’ve greatly exaggerated the falloff by shooting a gray field and placing these on a gray background; it will not look this bad in actual photos of real things:

 

Fujinon GF 100~200mm f/5.6 falloff at infinity (gray-field samples at f/5.6, f/8 and f/11, at both 100mm and 200mm, were shown here).

 

Filters, use with


There’s no need for thin filters. I can stack several regular 67mm filters without any vignetting.

Go ahead and use your standard rotating polarizer and grad filters.

 

Flare & Ghosts


Flare and ghosts aren’t a problem; there’s only the slightest ghosting under extreme conditions. See Sunstars for a sample.

 

Lateral Color Fringes


There are no color fringes as shot on Fujifilm’s cameras, which by default correct for any that may be there with the Digital Lens Optimizer.

 

Macro Performance


Oddly this lens gets closest and has the largest macro magnification at its 100mm setting, at which it works very well.

 

Wide-open at f/5.6

It’s sharp wide-open:

Casio G-Shock Solar Atomic Watch at close-focus distance at 100mm, 04 May 2019.

Here’s a crop from this image:

1,200 × 900 pixel crop from above.

If this crop is about 3″ (7.5cm) wide on your screen, then the complete image printed at this same high magnification would be about 14-1/2 × 21-3/4″ (1.2 × 1.8 feet or 37 × 55 cm).

If this crop is about 6″ (15cm) wide on your screen, then the complete image printed at this same extreme magnification would be about 29 × 43″ (2.4 × 3.6 feet or 0.75 × 1.1 meters).

If this crop is about 12″ (30cm) wide on your screen, then the complete image printed at this same insane level of magnification would be about 58 × 87″ (4.8 × 7.2 feet or 1.5 × 2.2 meters).
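The arithmetic behind those print-size claims is just a ratio: if an N-pixel-wide crop occupies W inches on your screen, the whole frame at the same magnification spans W times (full width / crop width). The sketch below assumes the GFX 50R's 8256 × 6192 pixel output; its results land near, though not exactly on, the figures quoted above, since the on-screen crop size is only approximate to begin with.

```python
# Print size of the whole frame at the same magnification as an on-screen crop.
FULL_PX = (8256, 6192)   # assumed GFX 50R output resolution
CROP_PX = (1200, 900)

def print_size_inches(crop_display_width_in):
    scale = crop_display_width_in / CROP_PX[0]   # inches per pixel on your screen
    return tuple(round(px * scale, 1) for px in FULL_PX)

for w in (3, 6, 12):
    print(w, "->", print_size_inches(w))
# 3 -> (20.6, 15.5), 6 -> (41.3, 31.0), 12 -> (82.6, 61.9) inches
```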

 

At f/11

Stopped down it gets even sharper:

Casio G-Shock Solar Atomic Watch at close-focus distance at 200mm, 04 May 2019.

Here’s a crop from this image:

1,200 × 900 pixel crop from above.

If this crop is about 3″ (7.5cm) wide on your screen, then the complete image printed at this same high magnification would be about 14-1/2 × 21-3/4″ (1.2 × 1.8 feet or 37 × 55 cm).

If this crop is about 6″ (15cm) wide on your screen, then the complete image printed at this same extreme magnification would be about 29 × 43″ (2.4 × 3.6 feet or 0.75 × 1.1 meters).

If this crop is about 12″ (30cm) wide on your screen, then the complete image printed at this same insane level of magnification would be about 58 × 87″ (4.8 × 7.2 feet or 1.5 × 2.2 meters).

 

Mechanical Quality


Fujifilm GF 100-200mm.

This beautiful lens is almost all metal.

 

Hood

Plastic bayonet hood with locking pawl.

 

Front Bumper

None.

 

Filter Threads

Metal.

 

Hood Bayonet Mount

Metal.

 

Focus Ring

Rubbery.

 

Zoom Ring

Rubber-covered metal.

 

Aperture Ring

Metal.

 

Rear Barrel Exterior

Metal.

 

Identity

Engraved and filled with paint around front of lens and on top of barrel near mount.

 

Internals

Seems like all metal!

 

Dust Gasket at Mount

Yes.

 

Mount

Chromed brass.

 

Markings

Most are engraved and filled with paint as they should be, with some minor markings laser-engraved.

 

Serial Number

Laser engraved on the side of the barrel, just ahead of the tripod collar.

 

Date Code

None found.

 

Noises When Shaken

Mild clunking from the uncaged OIS section and the focus groups.

 

Made in

Japan.

 

Sharpness


Lens sharpness has nothing to do with picture sharpness; every lens made in the past 100 years is more than sharp enough to make super-sharp pictures if you know what you’re doing. The only limitation to picture sharpness is your skill as a photographer. It’s the least talented who spend the most time worrying about lens sharpness and blame crummy pictures on their equipment rather than themselves. Skilled photographers make great images with whatever camera is in their hands; I’ve made some of my best images of all time with an irreparably broken camera! Most pixels are thrown away before you see them, but camera makers don’t want you to know that.

If you still care, this lens is flawlessly sharp at every aperture from center to corner. The only thing you can do wrong is stop down too much, because the laws of physics are such that it’s supposed to be soft at f/22 and f/32 due to diffraction. Only use f/32 if you really need it.

If you’re not getting ultra-sharp pictures with this, be sure not to shoot at f/22 or smaller where all lenses are softer due to diffraction, always shoot at ISO 100 because cameras become softer at ISO 200 and above, avoid shooting across long distances over land which can lead to atmospheric heat shimmer, be sure everything is in perfect focus, set your camera’s sharpening as you want it and be sure nothing is moving, either camera or subject. If you want to ensure a soft image with any lens, shoot at f/32 at ISO 102,400 at default sharpening in daylight through heat shimmer of rapidly moving subjects at differing distances in the same image.

(MTF charts at 10 and 40 cycles/mm, at both 100mm and 200mm, were shown here.)

 

Spherochromatism


Spherochromatism, also called «color bokeh» by laymen, is an advanced form of chromatic aberration in a different dimension than lateral color. It can cause colored fringes on out-of-focus highlights, usually seen as green fringes on backgrounds and magenta fringes on foregrounds. Spherochromatism is common in fast lenses of moderate focal length when shooting contrasty items at full aperture. It goes away as stopped down.

 

I see no spherochromatism with this lens; it’s not that fast, so I wouldn’t expect to.

 

Image Stabilization


The built-in stabilization works wonders. I can handhold with perfect sharpness at 1/8 to 1/15 of a second!

«Percent Perfectly Sharp Shots» are the percentage of frames with 100% perfect tripod-equivalent sharpness I get when I’m shooting hand-held while standing with no support. Hand tremor is a random occurrence, so at marginal speeds some frames will be perfectly sharp while others will be in various stages of blur — all at the same shutter speed. This rates what percentage of shots are perfectly sharp, not how sharp all the frames are:

% Perfectly Sharp Shots at 100mm

Shutter speed:      1      1/2    1/4    1/8    1/15   1/30   1/60   1/125  1/250  1/500
Stabilization ON:   0      3      17     83     100    100    100    100    100    100
Stabilization OFF:  0      0      0      0      0      5      67     100    100    100

% Perfectly Sharp Shots at 200mm

Shutter speed:      1      1/2    1/4    1/8    1/15   1/30   1/60   1/125  1/250  1/500
Stabilization ON:   0      0      33     50     100    100    100    100    100    100
Stabilization OFF:  0      0      0      0      0      0      5      50     100    100

As you can see, the 100-200mm’s stabilizer adds about three to four stops of real-world stabilization. What’s astounding is that I can shoot at 1/8 to 1/15 and get perfect tripod-equivalent sharpness almost all the time!
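One hedged way to turn those tables into a "stops gained" number: take the slowest shutter speed that still gives a high keeper rate in each row and count the halvings between them. The 80% threshold below is an arbitrary choice for illustration, and the data shown is the 100mm row from the table above.

```python
# Estimate stops of stabilization from keeper-rate data.
import math

speeds = [1, 1/2, 1/4, 1/8, 1/15, 1/30, 1/60, 1/125, 1/250, 1/500]

def slowest_reliable(keeper_rates, threshold=80):
    """Slowest shutter speed whose keeper rate meets the threshold."""
    for s, rate in zip(speeds, keeper_rates):
        if rate >= threshold:
            return s
    return None

on_100  = [0, 3, 17, 83, 100, 100, 100, 100, 100, 100]
off_100 = [0, 0, 0, 0, 0, 5, 67, 100, 100, 100]

gain = math.log2(slowest_reliable(on_100) / slowest_reliable(off_100))
print(round(gain, 1))  # ~4.0 stops at 100mm (1/8 s with OIS vs 1/125 s without)
```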

 

Sunstars


With a 9-bladed rounded diaphragm at large apertures that becomes nonagonal at the smallest apertures, I get 18-pointed sunstars on brilliant points of light only at the smallest apertures.
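The 9-blades-to-18-points arithmetic follows the usual diffraction-spike rule of thumb: an odd blade count produces twice as many points as blades, while an even count's spike pairs overlap. A minimal sketch, assuming that rule:

```python
# Expected sunstar points from diaphragm blade count.
def sunstar_points(blades):
    # Each straight blade edge produces a pair of diffraction spikes; with an
    # odd blade count the pairs don't line up, so twice as many points appear.
    return blades if blades % 2 == 0 else 2 * blades

print(sunstar_points(9))   # 18, as seen with this lens
print(sunstar_points(8))   # 8, since an even-bladed diaphragm's spike pairs overlap
```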

Here’s the best you’re going to get at f/32. God beams are exaggerating the effect because this is shot in heavy mist at the base of Yosemite Falls:

Godbeams and Sunstars at f/32.

 

User’s Guide


 


 

Fujifilm GF 100-200mm.

Full / 5m — ∞ Switch

This is a focus distance limiter.

Leave it in FULL.

The 5m — ∞ position prevents the lens from autofocusing closer than 5 meters (16 feet). Use this setting only if you’re having a problem with the lens attempting to focus on irrelevant close items, or if for some reason the lens is «hunting» from near to far looking for distant subjects.

 

OIS ON — OFF Switch

Leave it ON when hand-holding.

Leave it ON if on a tripod and making short exposures up to about 1/8 second, especially if you’re not using the electronic or electronic front-curtain shutter.

Leave it OFF when making exposures longer than a second on a tripod, or if you have a very sturdy tripod.

 

Recommendations



This is a superb do-everything telephoto for Fujifilm’s GFX series of medium format cameras. Its ultraprecise zooming is great for tripod landscape use, where I always can set the perfect framing easily with a fingertip.

I use a clear (UV) protective filter instead of a cap so I’m always ready to shoot instantly. I only use a cap when I throw this in a bag with other gear without padding — which is never. The UV filter never gets in the way, and never gets lost, either.

The very best protective filter is the Hoya multicoated HD3 67mm UV which uses hardened glass and repels dirt and fingerprints.

For less money, the B+W 67mm 010 is an excellent filter, as are the multicoated version and the basic multicoated Hoya filters, but the Hoya HD3 is the toughest and the best.

Filters last a lifetime, so you may as well get the best. The Hoya HD3 stays cleaner than the others since it repels oil and dirt.

 

More Information


 


 

Fujifilm’s 100-200mm pages.

Fujifilm’s Lens Roadmap, 25 September 2018

 


 


 

Thanks for reading!

 

 


 


NVIDIA GF100 — full architecture description / Graphics cards




There are still about two months until launch, but we can already tell you all the details about the new architecture prepared by the Santa Clara engineers.


A warning right away: alas, this material cannot be called a full-fledged review. Perhaps for the first time, we are talking about a new GPU without being able to test it and illustrate the theory with benchmark results. For now, only NVIDIA employees can boast of that opportunity; video cards built on the new GPU are not available even to test labs, let alone customers. At the same time, having detailed information about the architecture of NVIDIA’s new flagship GPU, we cannot help but share it with readers. So in this material we bring together everything we know about the GF100 GPU, which has already earned the nickname «paper dragon».

For the mass market, the first official announcement of a video card based on the new Fermi architecture effectively came at the recent CES 2010 show in Las Vegas. It was there that the company first demonstrated a working GF100 to the general public; at a press conference held at the show, NVIDIA President and CEO Jen-Hsun Huang officially announced that series production of the new GPU has begun and spoke about the GPU’s most striking feature, 3D Vision Surround technology. In addition, during CES 2010 NVIDIA held a closed technical conference, where journalists were given detailed information about the internal architecture of the GF100 and the opportunity to see for themselves what the first samples of the video card can do.

However, much is still unknown about the Californian company’s new generation of video cards. First of all, the official name of the future video card has not been revealed; so far we have to operate with the code name of the GPU (GF100) and the architecture underlying it (Fermi). There is no official data on the upcoming lineup either. Finally, nothing is known about the manufacturer’s recommended retail price, even though that is a decisive factor for many buyers.

There is no official date for the start of sales either, although it is not difficult to guess. Given Jen-Hsun Huang’s statement that mass production of the new GPU has begun, and the fact that 6-8 weeks usually pass between the start of production and the appearance of finished products in the sales channel, it is reasonable to assume that the GF100 will hit store shelves in early March. That is also when one of the largest IT exhibitions in the world takes place, the German CeBIT, where NVIDIA has presented its new products before. This year CeBIT starts on March 2; most likely, that date will mark the official announcement and the start of GF100 sales.

⇡#Key Features

At a closed technical conference in Las Vegas (it was called GF100 Deep Dive), the NVIDIA engineers who led the GF100 project spoke about the main goals that were set for them when developing a new GPU. The four most important features of the GF100 were:

  • realistic geometry
  • improved image quality
  • high performance in compute (non-graphics) workloads
  • record GPU performance

Before describing in detail what the developers mean by each of these points, let’s dwell on the general characteristics of the new GPU.

So, GF100 is:

  • 512 CUDA cores
  • 16 PolyMorph geometry engines
  • 4 raster engines
  • 64 texture units
  • 48 ROP units
  • 384-bit GDDR5 memory interface

It’s worth noting that the GF100 will be NVIDIA’s first high-end GPU built on the 40nm process. While AMD/ATI mastered 40nm production some time ago, NVIDIA was in no hurry to switch to the new technology, trying it out first on low-end models. Perhaps in this matter the Californian company has become a hostage of its main manufacturing partner, TSMC, and it was precisely problems with the new process that delayed the release of NVIDIA’s new flagship for so long. One way or another, as mentioned above, those problems have now been overcome, and mass production of the GF100 has begun.

Although some key characteristics, namely the core and memory clock speeds of the new GPU, are still missing from what we know about the GF100, let’s try to compare the new GPU with the current flagships from ATI (Cypress, RADEON HD 5870) and NVIDIA (GT200, GeForce GTX 285).

⇡#Geometry

It would not be out of place to remember that it was NVIDIA that first implemented hardware geometry processing in a video card more than 10 years ago. Until 1999 and the first GeForce-branded model, a scene’s geometric data was processed by the CPU. Moving the T&L block onto the graphics core made it possible to speed up the processing of three-dimensional scenes significantly: in modern games the number of polygons in a scene already runs into the millions, while ten years ago it was tens of thousands at best. However, according to NVIDIA’s developers, the geometry-processing potential of modern programmable GPUs is not being used actively. For example, the GeForce GTX 285 is 150 times faster than the GeForce FX in shading and pixel output, but only 3 times faster in geometry throughput. This not only forces game developers to use simpler objects, but also makes it impossible to display complex objects such as water or hair realistically. The GF100 carries many improvements aimed at handling complex geometry and at the mass adoption of techniques such as tessellation.

Currently, all objects and characters in games are created in 3D modeling programs. Designers must manually build several models with different levels of detail (LOD, Level of Detail), and one or another is used depending on how far the object is from the viewer. Considering that each object is sent to the GPU anew for every frame, rather complex algorithms are required to pick the model with the optimal level of detail at any given moment. Moreover, a significant limitation is imposed not only by the GPU’s peak geometry throughput, but also by the bandwidth of PCI Express.

The tessellation method based on displacement maps circumvents this problem to a great extent. Recall that a displacement map is a monochrome texture used not to fill a polygon but to change its geometric properties: the brightness of each point on this texture determines the deviation (height) of that point above the original surface. Unlike traditional methods, where volume is imitated by ordinary flat textures, tessellation allows you to get much more complex and natural-looking objects, calculate shadows correctly, and so on. The great advantage of displacement maps is that they allow you to create a single generic model whose level of detail is determined only by the displacement map used. It is also important that a displacement map is, in effect, a regular texture, for which optimization and compression methods have long been worked out. The GF100 GPU has hardware support for tessellation, and NVIDIA engineers have paid maximum attention to this aspect. But first things first.
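To make the idea concrete, here is a toy Python sketch of tessellation plus displacement mapping: a coarse flat patch is subdivided on the fly, and each generated vertex is pushed along the surface normal by a height sampled from a (here procedural) displacement map. The function and its parameters are invented for illustration only; the real hardware performs this per patch inside the pipeline described in the following sections.

```python
# Toy tessellation + displacement: subdivide a flat patch into a grid of
# vertices and offset each one along +z (the patch normal) by a height
# sampled from a monochrome displacement map.
def displace_patch(corner, size, level, height_map, scale=1.0):
    """Return a (level+1)^2 grid of displaced vertices for a flat square patch."""
    verts = []
    for j in range(level + 1):
        for i in range(level + 1):
            u, v = i / level, j / level                  # parametric coordinates in the patch
            x, y = corner[0] + u * size, corner[1] + v * size
            h = height_map(u, v)                         # 0..1 brightness from the map
            verts.append((x, y, h * scale))              # displaced position
    return verts

# A procedural "displacement map": a gentle bump in the middle of the patch.
bump = lambda u, v: max(0.0, 1.0 - 4 * ((u - 0.5) ** 2 + (v - 0.5) ** 2))
print(len(displace_patch((0, 0), 10.0, level=8, height_map=bump)))  # 81 vertices from one coarse quad
```

The point of the technique is visible even in this sketch: one simple quad plus one cheap texture expands into dozens of vertices of real geometry only when and where it is needed.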

⇡#Graphic Processing Cluster

The GF100 graphics processor is built on a scalable architecture based on streaming multiprocessors (SM, Streaming Multiprocessor) combined into GPC clusters (Graphics Processing Clusters). Each such cluster contains four multiprocessors, as well as all the blocks needed for processing geometric data and for texturing. In fact, each GPC is a standalone GPU lacking only its own memory subsystem. The GF100 consists of four such clusters sharing six memory controllers, six ROP partitions (eight ROPs each) and an L2 cache. Obviously, smaller (and eventually, possibly larger) GPU models will be obtained by changing the number of building blocks in this construction kit.
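The "construction kit" arithmetic can be written down directly. A sketch using the per-unit counts from this article; the 64-bit width per memory controller is the usual assumption behind a 384-bit GDDR5 bus with six controllers, not something stated explicitly here.

```python
# Per-cluster counts from the article, with the chip-wide totals derived from them.
GF100 = {
    "gpcs": 4,
    "sms_per_gpc": 4,
    "cuda_cores_per_sm": 32,
    "texture_units_per_sm": 4,
    "polymorph_engines_per_sm": 1,
    "raster_engines_per_gpc": 1,
    "rop_partitions": 6,
    "rops_per_partition": 8,
    "memory_bus_bits": 6 * 64,   # six GDDR5 controllers, assumed 64 bits each
}

sms = GF100["gpcs"] * GF100["sms_per_gpc"]
print("SMs:", sms)                                                      # 16
print("CUDA cores:", sms * GF100["cuda_cores_per_sm"])                  # 512
print("Texture units:", sms * GF100["texture_units_per_sm"])            # 64
print("PolyMorph engines:", sms * GF100["polymorph_engines_per_sm"])    # 16
print("Raster engines:", GF100["gpcs"] * GF100["raster_engines_per_gpc"])  # 4
print("ROPs:", GF100["rop_partitions"] * GF100["rops_per_partition"])   # 48
print("Memory bus:", GF100["memory_bus_bits"], "bit")                   # 384 bit
```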

⇡#PolyMorph Engine

The use of tessellation fundamentally changes the load distribution inside the GPU, and it forced NVIDIA engineers to rework the layout of the computing blocks slightly and to introduce a new type of block, the PolyMorph Engine. Each graphics cluster (GPC) is equipped with four such blocks, one per multiprocessor (SM). Each PolyMorph Engine performs five stages: vertex fetch, tessellation, viewport transform, attribute setup and stream output.

At the first stage, vertices are fetched from the global vertex buffer, after which each vertex is sent to the multiprocessor, where its coordinates are transformed into scene coordinates and the tessellation level (analogous to a level of detail, LOD) is determined. The data is then passed to the second stage, tessellation: the polygon is subdivided into several new, smaller ones, and the positions of the new vertices are offset according to the displacement map. The resulting new vertices are processed again in the multiprocessor and written out to memory via stream output for further processing.

⇡#Raster Engine

After the geometry data has been processed by the PolyMorph Engine, it is passed to the Raster Engine for rasterization. This block first culls invisible primitives (back faces), then converts the geometric data into screen pixels, which in turn are sorted and filtered by Z-coordinate. Each cluster (GPC) is equipped with one rasterizer processing up to 8 pixels per clock, so the total throughput of the GF100 is 32 pixels per clock, which is 8 times more than the GT200 provided.

⇡#Third-generation Streaming Multiprocessor

Each multiprocessor consists of 32 CUDA compute cores, a fourfold increase over previous architectures. As before, the CUDA cores have a scalar architecture, which allows them to be fully loaded regardless of the type of data being processed, be it z-buffer operations or texture processing. Each CUDA core is equipped with one integer ALU and one floating-point unit (FPU).

In addition, each multiprocessor is equipped with 16 load/store (LD/ST) units, which calculate source and destination addresses in cache or memory for 16 threads per clock cycle. There are also four special function units (SFU, Special Function Unit), which handle operations such as sine, cosine and square root. Each SFU executes one operation per thread per clock, so a full warp (32 threads) takes 8 clocks. The multiprocessor organizes threads into warps of 32, and two warp schedulers manage them, so two warps can execute on one multiprocessor at the same time. The GF100 schedulers send one instruction from each warp to a group of 16 CUDA cores, the 16 LD/ST units, or the four SFUs. In addition, each SM is equipped with four texture units; each of them fetches up to four texture samples per clock, and the result can be filtered on the fly, with bilinear, trilinear and anisotropic filtering available. Unlike in the GT200, the texture units in the GF100 operate at a frequency higher than the core frequency. The GF100 texture units also support the BC6H and BC7 formats introduced in DX11, which reduce the load on the memory subsystem when processing HDR textures.
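The issue-rate arithmetic in that paragraph can be checked directly; a minimal sketch using only the unit widths quoted above:

```python
# Clocks needed to push one 32-thread warp through execution units of different widths.
WARP = 32

def cycles_per_warp(unit_width):
    """How many clocks a 32-thread warp occupies a unit group of the given width."""
    return WARP // unit_width

print(cycles_per_warp(16))  # group of 16 CUDA cores, or the 16 LD/ST units: 2 clocks per warp
print(cycles_per_warp(4))   # the 4 SFUs (sin/cos/sqrt and so on): 8 clocks per warp
# With two warp schedulers, two independent warps can be issued on one SM at the same time.
```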

⇡#Shared memory and caches

Shared memory is a fast, programmable, on-chip memory that allows data exchange within a thread block to be optimized as much as possible. In addition to shared memory, each multiprocessor (SM) in the GF100 also has its own L1 cache. The L1 cache works in tandem with shared memory: while shared memory is intended for algorithms with ordered, predictable memory access, the L1 cache speeds up algorithms whose data addresses are not known in advance.

In the GF100, each multiprocessor is equipped with 64 KB of memory, which can be divided into 48 KB of shared memory and 16 KB of L1 cache, or vice versa. In addition, a unified L2 cache of 768 KB is provided. It provides the fastest possible data exchange between different GPU units.

⇡#ROP blocks

The ROP units in the GF100 are organized into partitions of eight. Each unit can output one 32-bit integer pixel per clock, an FP16 pixel in two clocks, or an FP32 pixel in four. Thanks to improved compression algorithms and the increased number of ROPs, 4x and 8x MSAA anti-aliasing is noticeably faster: 1.6 and 2.3 times faster, respectively, than on the GT200. It is worth noting that on the GF100, 8xAA is only 9% slower than 4xAA. In addition, the GF100 has a new anti-aliasing mode, 32x CSAA (Coverage Sample Anti-Aliasing).
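Put as arithmetic, these figures imply the following per-clock color-output peaks across all 48 ROPs, before any memory-bandwidth limits are considered:

```python
# Peak ROP output per clock for the three pixel formats mentioned above.
ROPS = 48
CLOCKS_PER_PIXEL = {"int32": 1, "fp16": 2, "fp32": 4}

for fmt, clocks in CLOCKS_PER_PIXEL.items():
    print(f"{fmt}: {ROPS / clocks:.0f} pixels per clock")
# int32: 48, fp16: 24, fp32: 12 pixels per clock
```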

⇡#Additional features

In conclusion, it is worth mentioning next-generation effects, based not on traditional per-pixel processing but on computations using the CUDA architecture. Such computations make it possible to implement much more complex visualization algorithms, from the PhysX physics effects we already know to more advanced techniques such as ray tracing, and even GPU-accelerated artificial intelligence. The range of tasks the GF100 can solve is much wider than that of a conventional GPU, but taking advantage of these new capabilities is certainly a matter for tomorrow, since game developers are not yet ready to use this processor’s full arsenal. However, one of the talks at the Las Vegas conference was devoted precisely to cooperation between NVIDIA and game developers, so there is every reason to hope that this year will see the first games able to make the most of the GF100’s capabilities.

We thank the Russian representative office of NVIDIA for their help in organizing a trip to CES’2010 and participation in the NVIDIA GF100 Deep Dive


New Nvidia GF100 Graphics Architecture

Over the years, Nvidia has been actively developing its solutions for the real-time 3D hardware graphics market. Suffice it to recall the main architectures of the last decade: the appearance of hardware transformation and lighting (T&L) calculation in Geforce 256 (NV10) in 1999, programmable per-pixel processing on Geforce 3 (NV20) in 2001, further improved in Geforce FX (NV30).

The next really big step was the release of Geforce 8 (G80) in 2006, which introduced unified shaders and a scalar stream processor architecture. Each of the above chips represented a new architecture, and they were all really important steps in the development of gaming and professional 3D graphics.

Since creating a truly new GPU architecture now takes 3-4 years, it is no wonder that after the release of the GeForce 8800 GTX based on the G80 we had to wait so long for an update to the basic architecture. Each company has its own schedule and way of organizing work; Nvidia takes several years between major updates, and that time has now come: the company is ready to present a new architecture.

Today we’ll finally reveal the architectural details of Nvidia’s latest GPU, codenamed GF100. “GF” here indicates a graphics (“Graphics”) chip based on the Fermi compute architecture, and the number “100” is the designation Nvidia has adopted for the first chip of the architecture, aimed at the upper price range of the market. Later, less powerful chips of the family, intended for other market segments, should appear as well.

Let’s immediately dwell on what we will NOT talk about today: specific models of GF100-based cards, their characteristics (number of blocks, clock speeds), prices and power consumption. And so far there is very little information about performance in existing applications, either. However, there is still time before products actually reach the market, and next time we will definitely tell you about all of it.

So, GF100 is the first video processor based on the Fermi architecture. It supports all the innovations of the modern DirectX 11 API, such as hardware tessellation and DirectCompute computing capabilities. What’s more, the GF100’s architecture is designed with future API capabilities and graphics application needs in mind, such as ray tracing and powerful physics effects.

Naturally, as has always been the case with Nvidia, the top-end GF100 chip is meant to be the highest-performing solution on the market. Unlike its competitor, this company always tries, by any means, to release the fastest single-chip solution. Whether this is good or bad is a separate question, but the company’s success speaks for itself.

The GF100 video chip uses the third generation of streaming multiprocessors (Streaming Multiprocessor) and more than doubles the number of computing cores (CUDA cores) compared to the previous architecture. This is a noticeable gain, especially considering that the execution units themselves have become significantly more complex, unlike the competitor's existing DX11 solutions, which are more of a refinement of the previous-generation GPU.

The number and performance of other execution units have also been increased in the GF100, but that is not the main point. In terms of 3D graphics, the most important architectural change is that, for the first time in a long while, the geometry pipeline has undergone a very significant rework. To match the new capabilities of DirectX 11 and modern graphics applications, the peak performance of geometry processing, geometry shaders and stream output has been increased substantially.

We need to make a small digression here. Look at existing games: per-pixel processing in them has reached a fairly decent level and pixel effects are quite complex, while the geometric complexity of even the best games and applications lags noticeably behind. A game frame contains at most 1-2 million processed polygons, which is nowhere near the hundreds of millions used in film animation projects, the quality that real-time graphics strives for.

This situation is explained by the fact that hardware and software support in 3D graphics has for a long time been growing in the direction of stronger pixel shaders, while the geometry processing units remained unchanged for many years and their work was not parallelized. This has greatly affected the relative growth of pixel and vertex processing capabilities over the years: the Geforce GTX 285 is more than 100 times more powerful than the Geforce FX in pixel processing, but less than three times faster in geometry processing!

Well-known techniques that have long been used in the 3D graphics industry, and that appeared long before hardware support for them, come to the rescue. For example, the film industry has long used the subdivision of primitives (tessellation) and the application of displacement maps (displacement mapping). We have already talked about these features in our materials, for example in the 2005 3D graphics FAQ.

Large triangles are tessellated into smaller ones, and then the vertex coordinates are shifted using displacement maps to give the geometry a more detailed look. The complex application of these two rendering techniques makes it possible to obtain geometrically complex models from a relatively simple description.

Unfortunately, earlier APIs did not offer a way to increase the geometric complexity of scenes in this manner. In D3D9 and even D3D10 applications it is not possible to generate that much geometry on the GPU, although some rudimentary features do exist in D3D10. Previous hardware architectures were also poorly prepared for the active use of tessellation: simply adding tessellation capability to the GT200 would have created a severe geometry performance bottleneck.

But the new GF100 graphics pipeline is designed with these features in mind, it is able to provide really high performance for tessellation and geometry processing. In the new GPU, the traditional geometry processing architecture has given way to a new architecture that uses several so-called polymorphic engines (PolyMorph Engines) and rasterization units (Raster Engines) working in parallel, compared to a single similar unit in all previous generations of GPUs.

We will describe all of this in great detail later. The new memory architecture also helps fast geometry processing: the first- and second-level caches provide high-speed access to geometric attributes for the stream processors and tessellation units.

While the GF100 has plenty of other performance and flexibility innovations, the biggest change we see in terms of impacting the 3D hardware graphics industry is high-speed geometry processing.

But the GF100 also has other strengths that bring out the full potential of the Fermi architecture. Although in graphical calculations, threads often operate independently of each other and provide high memory access locality, recently non-graphical computations on the GPU have become of great importance, imposing somewhat different requirements on the hardware. Such computational threads need to communicate with each other, their algorithms are much more diverse, and they need read and write access to different areas of memory.

The main changes in the GF100 aimed at improving the efficiency of computational algorithms are: fast context switching between graphics and non-graphics workloads (for example, PhysX), concurrent execution of compute kernels, and an improved caching architecture that benefits tasks such as ray tracing and artificial intelligence.

For fast simultaneous execution of various algorithms in GF100, the context switch time has been reduced, which should increase overall performance. So, a game application can use Direct3D 11 to render a scene, then switch to CUDA for ray tracing in hybrid rendering, then call DirectCompute program for image post-processing and perform calculations for PhysX physical effects. And all this within one frame, that is, a few milliseconds.

Other innovations include improved performance of atomic operations, which speeds up algorithms such as rendering translucent surfaces without prior sorting (order-independent transparency). We will dwell on all of this in detail below.
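
To make the role of atomics a bit more concrete, here is a minimal CUDA-style sketch of our own (not Nvidia's code) of the per-pixel append step that order-independent transparency relies on; the buffer names, the fragment layout and the per-pixel list scheme are illustrative assumptions.

```cuda
#include <cuda_runtime.h>

// Illustrative fragment record; real engines pack this more tightly.
struct Fragment { float depth; unsigned int color; };

// Each transparent fragment atomically reserves a slot in its pixel's list.
// Faster atomic operations make exactly this step cheaper on GF100-class GPUs.
__global__ void append_fragments(const Fragment* frags, const int* pixel_of_frag,
                                 int n, unsigned int* per_pixel_count,
                                 Fragment* lists, int max_per_pixel)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    int pixel = pixel_of_frag[i];
    unsigned int slot = atomicAdd(&per_pixel_count[pixel], 1u);
    if (slot < (unsigned int)max_per_pixel)
        lists[pixel * max_per_pixel + slot] = frags[i];
    // A later pass would sort each short list by depth and blend back to front.
}
```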

In the meantime, let's dwell on the interesting question of chip clock frequencies. It looks like there are some curious changes here too, compared to the GT200. The main frequency is now not the base clock of the chip but the clock of the shader domain, while the other blocks run at frequencies derived from it by a fixed divisor. The ROPs and L2 cache still operate at their own frequency, as before; the stream processors and L1 cache run at the shader-domain frequency; and the texture units and the remaining blocks of the chip (PolyMorph and Raster) run at half the shader-domain frequency. Not that this changes much in practice; it is simply an interesting shift of emphasis: the stream processors are now the reference point.

Tessellation and Displacement Mapping

Before we start talking about the GF100 architecture, we need to take some time to explain how tessellation and displacement maps work. Although these rendering techniques are not new, they have seen little use in real-time 3D graphics until now, despite the theoretical possibility of using them in the past and hardware support from some manufacturers (Matrox Parhelia and AMD RADEON).

Only with the advent of special stages of the graphics pipeline in DirectX 11 can developers really start using these techniques in games. But what are the advantages of using tessellation and displacement maps in real-time 3D graphics, and why was it not possible before?

Software packages for creating digital content (ZBrush, 3D Studio Max, Maya, SoftImage, etc.) contain tools for these features, but in the current environment the modeler must manually create polygonal models at several levels of detail in order to implement LOD.

These models in the form of vertices and triangles, as well as textures associated with them, are transmitted every frame to the GPU via the PCI Express interface. Therefore, game developers are forced to use relatively simple models due to the limited bandwidth of this bus, as well as the not very high geometric performance of the available GPUs.

Even in the best gaming applications we see plenty of jagged lines and hard angles on models and environments, caused by the limitations of previous graphics APIs and existing video chips. Developers have to compromise, increasing the detail of character models while paying noticeably less attention to the environment. Not to mention that some of the finer geometry has to be faked with pixel effects, and realistic human hair in games is replaced by textures, hats and short hairstyles.

All this can change with the use of tessellation and modern video chips. A simple geometric model of the rendered object is sent to the GPU, and the hardware tessellator breaks it down into more geometric primitives needed for the current scene. These vertices are then moved to the required distance to add detail.

We have already discussed in the article at the link above how this is done. Look at the figure, on the left is a simplified model using quad primitives. It is quite simple, compared to the current character models used in games. Next comes the image obtained using tessellation. It is very smooth and lacks sharp corners.

But tessellation by itself does not add detail, it only smoothes the model. Therefore, we still need to apply a displacement map to it. As a result, on the right we see a very realistic looking character model with a lot of geometric details.
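
As a rough illustration of the displacement step itself, here is a sketch of our own written as CUDA-style code rather than the HLSL domain shader a real game would use: each tessellated vertex is simply pushed along its normal by the height sampled from the displacement map. The function name, the `height` input and the `scale` parameter are illustrative assumptions.

```cuda
#include <cuda_runtime.h>

// Move a tessellated vertex along its normal by the sampled height value.
// 'height' would come from the displacement map at the vertex's (u, v).
__device__ float3 displace(float3 p, float3 n, float height, float scale)
{
    float d = height * scale;
    return make_float3(p.x + n.x * d, p.y + n.y * d, p.z + n.z * d);
}
```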

There are many advantages to this approach. The original model is geometrically simple, which means high storage and bus transfer efficiency. The bandwidth requirements for sending the model to the GPU are very low, and since the animation is calculated for a simple initial model, more complex animation algorithms can be used.

Another important advantage is the ability to flexibly change the resulting geometric complexity, i.e. the dynamic level of detail (LOD). Since all the data for tessellation is stored on the chip, there is no need to transfer multiple models with different levels of detail. You can also get by with the same model and displacement map for different gaming platforms, setting the level of detail simply by splitting into a different number of primitives.
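
One common way to drive that dynamic LOD, sketched below under our own assumptions (the distance bounds and the maximum factor are illustrative, not anything Nvidia specifies), is to derive the tessellation factor from the distance between the camera and the patch.

```cuda
// Pick a tessellation factor from camera distance: nearby patches get split
// finely, distant ones stay coarse. Clamp to the hardware-supported range.
__device__ float tess_factor(float dist, float near_d, float far_d, float max_factor)
{
    float t = (dist - near_d) / (far_d - near_d);   // 0 near the camera, 1 far away
    t = fminf(fmaxf(t, 0.0f), 1.0f);
    return 1.0f + (1.0f - t) * (max_factor - 1.0f); // between 1 and max_factor
}
```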

An important difference between tessellation and displacement maps and per-pixel effects like normal maps and parallax mapping is that they affect vertices, not pixels. That is, with the method described above, clear shadows, self-shadowing and detailed silhouettes of objects will be obtained without any problems.

Moreover, displacement maps can be easily combined with pixel techniques. So, displacement mapping can be used to simulate large irregularities in the model, and for normal mapping, small details such as scratches and skin pores can be left.

Another of the most interesting features provided by tessellation and displacement maps is the ability to dynamically change geometry on the fly. For example, firing a machine gun at a brick wall will cause a real change in geometry and the appearance of a hole in the wall, and not just a mark (“decal”) indicating the place of impact, as is commonly done now.

GF100 Graphics Architecture

Now it's time to move on to the architecture. Like previous Nvidia-designed chips, the GF100 is based on multiple clusters, but now they are Graphics Processing Clusters, each of which consists of several streaming multiprocessors (Streaming Multiprocessors), which in turn contain arrays of stream processors.

The GF100 contains four GPC clusters, sixteen SMs, and six 64-bit memory controllers. As usual, Nvidia plans to launch several models of GF100-based graphics solutions with different numbers of active GPCs and memory controllers. This makes sense both from a production-cost point of view, since the TSMC process is still not mature enough and fully functional chips are not plentiful, and from a market standpoint, with chips of different configurations being sent to different price segments.

So, in its full configuration the new GPU contains an external PCI Express interface, a GigaThread engine, four GPCs, six memory controllers, six enlarged ROP partitions, and 768 KB of L2 cache attached to the ROPs.

The GPU receives commands via the Host Interface, the GigaThread engine requests the necessary data from the system memory and copies it to the local memory. Unlike the previous chip, which has eight 64-bit memory controllers, the GF100 has six such controllers, but with support for GDDR5 memory, which solutions based on the GT200 did not have. As a result, the use of GDDR5 memory and 384-bit access to it gives a fairly high bandwidth.

The GigaThread manager is the heart of the chip: it creates and distributes blocks of threads to the different multiprocessors, and the multiprocessors distribute warps (groups of 32 threads) among the stream processors (CUDA cores) and other execution units.

In total, the GF100 includes 512 stream processors, the so-called CUDA cores, grouped into 16 multiprocessors of 32 cores each. Each SM supports the simultaneous execution of up to 48 warps, and the CUDA cores can execute all types of programs: vertex, pixel, geometry and compute.

The GF100 contains 48 ROPs that do the pixel blending and smoothing work, as well as the atomic memory operations. The ROPs in Nvidia’s new chip are grouped into six groups of eight modules. Each group is served by its own 64-bit memory controller.

Graphics Processing Clusters

As described above, the graphics architecture of the GF100 chip is based on four Graphics Processing Clusters, each containing four multiprocessors and its own Raster Engine.

The new GPC features two key innovations. First, there is a scalable rasterization engine that performs triangle setup, rasterization, and z-cull. And secondly, the GPC also contains separate PolyMorph engines that perform vertex attribute sampling and tessellation. Moreover, the Raster Engine belongs to the GPC, and PolyMorph to each of the SM multiprocessors in the cluster.

The GPC cluster includes all the main GPU graphics units, except for the ROP units. It turns out that this is almost a separate video chip, and there are four of them in the GF100. In previous Nvidia chips, multiprocessors and texture units were grouped into texture processing clusters (Texture Processing Clusters). And in the GF100, each of the SM multiprocessors has four dedicated texture units. But more on that later.

Third Generation Streaming Multiprocessors

In the third generation of Nvidia Streaming Multiprocessors, we see several improvements and innovations aimed at both increasing performance and improving programmability and flexibility of use.

So, each SM multiprocessor contains 32 streaming CUDA cores, four times more than in the GT200 (keeping in mind the reduced total number of multiprocessors in the chip). They remain scalar, as before, which gives good efficiency in any application, not just specially optimized ones: unlike superscalar architectures, even 1D operations such as Z-buffer work or 2D texture accesses can fully load the GPU's execution units.

Stream processors have an integer execution unit (ALU) and a floating point execution unit (FPU). GF100 calculations comply with the new IEEE 754-2008 floating point standard and also provide the ability to perform fused multiply-add (FMA) operations for single and double precision calculations.

FMA, unlike the multiply-add (MAD) instruction, performs these two operations with only a single rounding. This ensures that no accuracy is lost in the addition and minimizes rendering errors in some cases, for example with closely overlapping triangles.
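
A small CUDA sketch of our own makes the difference visible (this is an illustration, not Nvidia's code): the explicitly separated multiply and add round twice and lose the tiny residual, while fmaf keeps it because it rounds only once.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// __fmul_rn/__fadd_rn force two separate roundings; fmaf rounds once at the end.
__global__ void mad_vs_fma(float a, float b, float c, float* out)
{
    out[0] = __fadd_rn(__fmul_rn(a, b), c); // MAD-style: product rounds to 1.0f, residual lost
    out[1] = fmaf(a, b, c);                 // FMA: the tiny residual survives
}

int main()
{
    float* d;
    cudaMalloc(&d, 2 * sizeof(float));
    // (1 + e)(1 - e) - 1 is a tiny negative number that single precision cannot
    // hold after an intermediate rounding of the product.
    mad_vs_fma<<<1, 1>>>(1.0f + 1e-7f, 1.0f - 1e-7f, -1.0f, d);
    float h[2];
    cudaMemcpy(h, d, sizeof(h), cudaMemcpyDeviceToHost);
    printf("separate roundings: %.10e\nfused (FMA):        %.10e\n", h[0], h[1]);
    cudaFree(d);
    return 0;
}
```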

The new integer ALU introduced in the GF100 supports full 32-bit precision for all instructions as required by programming languages. In addition, the integer ALU performs 64-bit operations with high efficiency. Each of the multiprocessors has 16 load/store units (LD/ST or LSU) that can compute source and destination addresses for up to 16 threads per clock.

Four Special Function Units (SFUs) perform complex operations such as sine, cosine, square root, and so on. In addition, these blocks are used to interpolate graphical attributes. Each SFU executes one instruction per thread per clock, meaning a warp of 32 threads will execute in 8 clocks. The SFU pipeline is separate from the dispatcher block, which allows the latter to access other execution units while the SFU is busy.
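
For reference, the fast transcendental intrinsics exposed in CUDA are the kind of work these special function units are built for; the sketch below is our own, and the assumption that these particular intrinsics land on the SFUs is ours rather than something the code controls.

```cuda
#include <cuda_runtime.h>

// Low-precision, hardware-accelerated transcendentals; the regular sinf/expf
// library calls are more precise but expand into longer instruction sequences.
__global__ void sfu_demo(const float* x, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = __sinf(x[i]) * __expf(-x[i] * x[i]);
}
```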

Dual warp scheduler

So, the multiprocessor executes threads in groups of 32, such groups are called warps. Each multiprocessor contains two warp schedulers (Warp Scheduler) and two instruction dispatchers (Instruction Dispatch Unit), which allows you to simultaneously execute two warps on each of the SMs.
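
In CUDA terms the grouping looks like the sketch below (our own illustration, with an assumed output buffer): every 32 consecutive threads of a block form one warp, and the scheduler described next issues instructions warp by warp.

```cuda
// Each thread can derive its warp and its lane (position inside the warp)
// from its thread index; warpSize is 32 on all current NVIDIA GPUs.
__global__ void warp_layout(int* warp_of_thread)
{
    int tid  = blockIdx.x * blockDim.x + threadIdx.x;
    int lane = threadIdx.x % warpSize;   // 0..31 within the warp
    int warp = threadIdx.x / warpSize;   // warp index within the block
    warp_of_thread[tid] = warp;
    (void)lane; // lane would be used for shuffle/vote style operations
}
```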

The dual warp scheduler in the GF100 selects two warps and executes one instruction from each of them on a group of 16 compute cores, 16 LSUs, or four SFUs. Since warps execute independently of each other, the GPU scheduler does not have to check the instruction stream for dependent instructions. The use of such a model of simultaneous execution of two instructions (dual-issue) per cycle makes it possible to achieve high performance close to peak theoretical values.

Most instructions can be dual-issued: two integer instructions, two floating-point instructions, or a mix of integer, floating-point, load, store and SFU instructions. This only applies to single-precision instructions, however; double-precision instructions cannot be issued together with any other instruction.

Texture modules

Since we are talking about a graphics chip, the number of texture units in the GPU and their capabilities are very important. As you can see in the multiprocessor diagram above, each SM has four texture units, each of which computes an address and fetches data for four texture samples per clock. The result can be returned either unfiltered (for Gather4) or with bilinear, trilinear or anisotropic filtering, the latter naturally at a reduced rate.

It is not entirely clear from this description what has changed in the GF100 compared to previous chip architectures, but Nvidia says the main goal for the GF100's texture units was to improve the efficiency of texture fetching. The positive changes it highlights are moving the texture units into the multiprocessors, improved caching efficiency and higher TMU clock frequencies.

Let's dwell on this in more detail. In the previous GT200 chip, up to three multiprocessors shared one large texture block containing eight texture units. In the new GF100 architecture, each multiprocessor has its own dedicated texture units and texture cache, which in theory should improve efficiency; we will check this next time.

Other improvements have also been made to the texture units, and Nvidia points to the resulting increase in texturing speed, especially for shadow mapping and algorithms like screen-space ambient occlusion. Both techniques use DirectX's standard Gather4 feature, which allows four values to be sampled simultaneously per clock. Nvidia claims a 2-3x advantage in the performance of such fetches compared to the competing solution from AMD.

More importantly, the GF100 has a more efficient dedicated L1 cache. And together with the unified L2 cache, this gives three times the amount of available texture cache compared to the GT200.

Even though the GT200 has more texturing units than the GF100, the new chip delivers better texturing performance in real-world applications. Let’s see how this increase is estimated by Nvidia itself.

Clearly, this is not the average frame rate of an application but the speed of several draw calls limited by texturing speed. Considering the smaller number of texture units, the GF100's results can be considered good: the chip handles these fetches on average 1.6 times faster than the GT200.

Among other functional changes in TMU, we note that GF100 texture units received support for new compression formats BC6H and BC7, which appeared in DirectX 11 and are intended for textures and off-screen buffers (render target) in HDR format.

Parallel geometry processing

Now let's go into great detail about the most important innovations in the GF100. All previous generations of GPUs use a single unit to fetch, set up and rasterize triangles. Such a pipeline has fixed performance and is often the limiter of overall rendering performance.

Part of the blame for this situation lies with the difficulty of parallelizing rasterization in the absence of corresponding changes in the programming interface (API). While a single rasterization block worked tolerably well in the past, with the growing complexity and ubiquity of geometric calculations it has become the main obstacle to increasing the geometric complexity of 3D scenes.

Active use of tessellation completely changes the load balance of different GPU blocks. With tessellation, the density of triangles grows by orders of magnitude, which heavily loads such previously sequential sections of the graphics pipeline as triangle setup and rasterization. To ensure high tessellation performance, it was necessary to solve this problem with architecture changes, rebalancing the entire GPU graphics pipeline.

To achieve high geometry rendering speed, Nvidia has developed a scalable geometry engine called the PolyMorph Engine. Each of the 16 PolyMorph blocks in the GF100 contains its own vertex fetch unit and tessellator, which greatly increases the performance of geometric calculations.

In addition, the GF100 includes four Raster Engines running in parallel, allowing up to four triangles to be set per clock. Together, these blocks provide a decent performance boost for triangle processing, tessellation, and rasterization.

The PolyMorph Engine contains five stages: Vertex Fetch, Tessellation, Viewport Transform, Attribute Setup, and Stream Output. The results calculated in each stage are transferred to the multiprocessor SM. The latter executes the shader program, returning data to the next stage of the PolyMorph Engine. After passing through all the stages, the results are sent to the Raster Engine rasterization engines.

The first stage starts with fetching vertices from the global vertex buffer. The selected vertices are sent to the multiprocessor for vertex shading and hull shading. In these two stages, the vertices are converted from object space coordinates to world space coordinates, and the parameters necessary for tessellation, such as the tessellation factor, are calculated. These parameters are then sent to the tessellator.

In the second stage, the PolyMorph module reads these tessellation parameters and breaks the patch (smooth surface defined by control points), outputting the resulting mesh. These new vertices are sent to the multiprocessor, where the domain and geometry shaders are executed.

The domain shader calculates the final position of each vertex based on data from the hull shader and the tessellator. At this stage a displacement map is usually applied to add detail to the patch. The geometry shader then performs additional processing, adding or removing vertices and primitives as needed.

In the last stage, the PolyMorph Engine performs a viewport transformation and perspective correction. This is followed by the setting of attributes, and the vertices can be output using stream output to memory for further processing.

In previous architectures, such fixed function operations were performed by only one pipeline. When executed on the GF100, both fixed function and programmable operations will be parallelized, which should result in a performance boost if performance is limited by these operations.

Rasterization block

After the primitives are processed by the PolyMorph blocks, they are sent to the Raster Engine rasterization units. Several of these are also present in the chip, four in the case of the GF100, and they too work in parallel, resulting in high geometry processing performance.

The rasterization engine performs three pipeline stages. In the edge setup stage, vertex positions are fetched and the triangle's edge equations are computed. Triangles facing away from the screen are discarded as invisible (back-face culling). Each edge setup unit processes one point, line or triangle per clock.

The rasterizer uses the edge equations of each primitive to compute pixel coverage. If anti-aliasing is enabled, coverage is computed for each color sample and coverage sample. Each of the four rasterizers outputs eight pixels per clock, for a total of 32 rasterized pixels per clock across the whole GPU.

Pixels from the rasterizer are sent to the Z-cull unit. This block compares the depth of the pixels in a tile with the depth of the pixels already in the screen buffer and discards those that lie behind them. This early culling of invisible surfaces saves resources by removing the need for redundant per-pixel calculations.
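
The principle is easy to state in code; the toy sketch below is our own and says nothing about how the fixed-function hardware is actually implemented: compare the incoming depth against what is already stored and skip shading anything that is hidden.

```cuda
// Early depth rejection in its simplest form: a fragment behind the stored
// surface is discarded before any expensive per-pixel shading happens.
__device__ bool early_z_pass(const float* depth_buffer, int pixel, float depth)
{
    return depth < depth_buffer[pixel];   // smaller depth = closer to the camera
}
```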

This GPC cluster architecture can be considered the most important innovation in the GF100 geometry pipeline. After all, tessellation requires significantly higher performance from the triangle setup and rasterization blocks. Sixteen PolyMorph Engine blocks significantly increase the performance of vertex fetching, tessellation and stream output, while four Raster Engine blocks provide high-speed triangle setup and rasterization.

A lot of beautiful theory has been written, but where are at least some approximate performance figures? According to Nvidia, the presence of dedicated tessellators in each multiprocessor and of rasterization units in each GPC cluster gives the GF100 up to eight times the geometric performance of the GT200. Let's see what happens in practice when it is compared with the best of the competitor's cards, the RADEON HD 5870:

The first three columns show purely synthetic tessellation performance with different levels of detail. And with its increase, we see how much the performance of the GF100 grows relative to the competitor’s top single-chip video card (according to the results obtained in the Nvidia laboratory, of course, we will conduct our own tests when the technical possibility appears, that is, the cards themselves become available).

The next two Nvidia tests are called Hair and Water, they contain not only purely synthetic tessellation code, but also pixel and compute shaders, so the difference in performance is smaller. They look like this:

Well, the last column of the diagram shows the relative performance of a set of drawing function calls (state bucket) within one frame, taken from an unnamed DirectX 11 application. It is quite possible that this app is Heaven Benchmark by Unigine.

It should be noted that six times the difference in these bars does not mean the same difference even in the instantaneous frame rate. This is the performance of only part of the drawing function calls, the speed of which is limited precisely by the tessellation speed.

However, we have some data for several seconds of this benchmark. Nvidia measured the average frame rate in Unigine Heaven over 60 seconds, using scenes with a thorny dragon and a stone path.

Judging by the FPS graphs, the GF100 copes with tessellation really noticeably better than the fastest single-chip competitor products. And even though the average frame rate is only 1.6 times higher (although this is also not bad at all), the difference in the minimum performance is even greater.

Memory subsystem

Efficient organization of the memory subsystem is very important for a modern GPU. Especially when more and more attention is paid to non-graphical calculations. Back in the first generation of CUDA, Nvidia introduced configurable shared memory and a shared L1 cache. The exchange of data between computational threads is very important, and shared memory is now widely used in non-graphical tasks on the GPU.

In their new chip, Nvidia has again improved the memory model. The GF100 now contains a dedicated L1 cache in each multiprocessor (SM). This cache works in conjunction with and complements the shared (common) memory of the multiprocessor. Shared memory improves memory access speed for algorithms with predictable memory access, while L1 cache speeds up access from irregular algorithms in which the addresses of the requested data are not known in advance.

Each multiprocessor in the GF100 has 64 KB of on-chip memory, which can be configured in two ways: 48 KB of shared memory and 16 KB of L1 cache, or the reverse, 16 KB of shared memory and 48 KB of L1 cache.

For graphics programs, the GF100 uses the 16 KB L1 cache configuration, where the cache acts as a spill buffer for registers. In compute programs, the cache and shared memory allow the threads of a block to cooperate and exchange data, which reduces memory bandwidth requirements; the shared memory itself also makes many computational algorithms practical on the GPU.
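
On the CUDA side this split is exposed directly. The minimal sketch below is our own (the kernel is a placeholder) and shows the cache-preference hint a compute program can give, corresponding to the 48/16 and 16/48 KB configurations described above.

```cuda
#include <cuda_runtime.h>

__global__ void my_kernel(float* data)
{
    // A kernel with irregular, data-dependent accesses benefits from more L1.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    data[i] = data[i] * 2.0f;
}

void configure_cache()
{
    // Prefer 48 KB L1 / 16 KB shared memory for this kernel; use
    // cudaFuncCachePreferShared for the opposite 16/48 split.
    cudaFuncSetCacheConfig(my_kernel, cudaFuncCachePreferL1);
}
```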

In addition, the GF100 has a 768 KB unified L2 cache that handles all load and save requests, as well as texture fetches. The second level cache provides efficient and high-speed data exchange for the entire GPU. And computational algorithms in which data requests are unpredictable (physical calculations, ray tracing, etc.) will receive a significant speed boost from the hardware cache. And post-processing filters, in which multiple multiprocessors read the same data, will get a speedup due to fewer calls to data from external memory.

A unified cache is more efficient than separate caches for different purposes. With dedicated caches, a situation may arise when one of them is fully used, but it is impossible to use the idle volumes of other types of cache memory. And the caching efficiency will be lower than theoretically possible. And the unified L2 cache in the GF100 dynamically allocates space for different requests, which allows for high efficiency.

In general, a single L2 cache now replaces the L2 texture cache, the ROP caches and the on-chip buffers of previous-generation GPUs. The second-level cache in the GF100 handles both reads and writes and is fully coherent; compare this with the read-only L2 cache of the GT200. The final comparison table of the GF100 and GT200 cache systems looks like this:

In general, as you can see, the cache architecture in the GF100 has been significantly improved compared to previous chips. The new GPU provides more efficient data exchange between pipeline stages and can significantly save external memory bandwidth by increasing the efficiency of using the video chip execution units.

New ROPs and improved anti-aliasing

The ROPs and the entire blending and anti-aliasing subsystem in the GF100 have also undergone significant changes aimed at further increasing their efficiency. One ROP partition in the GF100 contains eight ROPs, twice as many as in previous generations. Each ROP can output one 32-bit integer pixel per clock, an FP16 pixel in two clocks, or an FP32 pixel in four clocks.

The biggest shortcoming of previous chips related to ROP is the low efficiency of anti-aliasing by the MSAA 8x multisampling method. Nvidia says it has greatly improved the performance of this mode in the GF100 by raising the efficiency of buffer compression as well as the efficiency of ROPs when rendering small primitives that cannot be compressed. The last change is also important because tessellation increases the number of small primitives, and the performance requirements for ROP blocks increase as well.

Previous generations of Nvidia architectures suffer a large drop in rendering performance when 8x multisample anti-aliasing is enabled. In Tom Clancy's HAWX, for example, the average FPS of the GTX 285 in MSAA 8x mode is 60% lower than in MSAA 4x. With the GF100 the situation is quite different: performance in 8x mode is only 10% lower than in 4x mode. See the diagram:

As a result, in 4x mode the new GF100-based card is 1.6 times faster than the GT200-based GTX 285, but with MSAA 8x the new card is 2.3 times faster than the old one! A very good figure and a decent achievement. It seems that in 8x MSAA mode the new solutions will be more than twice as fast as the old ones, though under less demanding conditions the difference in speed will clearly be smaller.

Potential buyers of the GF100 are interested not only in anti-aliasing speed but also in image quality. In its new solutions Nvidia introduces a new anti-aliasing algorithm called 32x CSAA (Coverage Sampling Antialiasing), which provides the highest-quality anti-aliasing for both geometry and textures when alpha-to-coverage is used. The number 32 here stands for 8 true multisampling samples plus 24 coverage samples.

Existing games are limited by the API and by GPU geometry performance, so in many cases semi-transparent alpha textures, smoothed with the alpha-to-coverage method, are used instead of real geometry. The quality of smoothing their edges depends on the number of coverage samples. Previous generations used 4 or 8 samples, which did not completely eliminate aliasing and also added banding (see the screenshot below). Now, with the 32x CSAA mode, the new GPU uses 32 coverage samples to minimize aliasing artifacts.

Transparency Multisampling (TMAA) now also benefits from improved CSAA. TMAA is commonly used in older DirectX 9 applications that do not use the alpha-to-coverage method, which is not available for this API. In this case, an alpha test technique is used, in which translucent textures have hard edges. With TMAA, the old code is translated into alpha-to-coverage, which in the case of the GF100 takes full advantage of the improved CSAA method.

The image on the left shows TMAA anti-aliasing using 16xQ mode with 8 multisamples and 8 coverage samples, which is the maximum for the GT200. And the picture on the right shows TMAA anti-aliasing using the 32x CSAA method, with 8 multisamples and 24 coverage samples, introduced in the GF100. As you can see, the difference in quality is noticeable.

Moreover, due to the fact that the use of coverage samples does not increase the requirements for memory bandwidth and memory size too much, the performance of the new 32x CSAA method differs slightly from the usual 8x MSAA on GF100. On average, the performance difference between 32x CSAA and 8x MSAA is only 7%. Given the small difference between 4x and 8x that we showed above, does it still make sense to use less than 32x CSAA methods on powerful solutions like the GF100? We will definitely check this in practice, immediately when such an opportunity presents itself to us.

Computing tasks on the GPU

In recent years, the realism of the picture rendered in real time has increased significantly. And in the main, these improvements were due to the rapid development of programmable pixel shaders. But computer graphics is not only about rasterization, there is also ray tracing and Reyes, for example. Each path has its own strengths and weaknesses, and different methods can be used to solve different problems.

Until now, GPUs have been designed with rasterization in mind. But gradually it becomes possible to apply other methods in graphics engines, and GPUs must adapt to these requirements, expanding their capabilities. Some of these graphics algorithms can already be applied using computational APIs like CUDA, DirectCompute or OpenCL.

The architecture of the GF100 chip has been designed to efficiently execute various algorithms and solve many tasks that can be parallelized. Algorithms such as ray tracing, physics calculations and artificial intelligence do not benefit from shared memory, in which case the cache memory found in the GF100 will help. 48 kilobytes of L1 cache per multiprocessor and the use of a global L2 cache will increase the performance of many algorithms.

Another important change in the GF100 is an improved scheduler. The G80 and GT200 execute large programs with relatively long context switching times between different tasks. For purely computational tasks with large amounts of data, this is normal, but gaming applications use several different tasks at the same time: tissue simulation, fluid physics, post-processing, etc. And on the GF100, these tasks can be efficiently performed in parallel, providing maximum efficiency for computing devices.

For example, in games using compute shaders, a context switch occurs every frame, and a high speed of this switch is critical to maintaining a high frame rate. The GF100 significantly reduced the context switch time (down to 20 microseconds), which made it possible to quickly and repeatedly switch between threads within a single frame.

Computational algorithms can be used to solve a large number of problems of various kinds in gaming applications. For example, these are new hybrid rendering algorithms, when ray tracing is used to render correct reflections and refractions. Or voxel rendering to realistically simulate volumetric data.

This can be complex post-processing of images: advanced HDR rendering, complex filters for smoothing and simulating optical effects, such as simulating a blur zone and bokeh. And games are already using physics effects, which can be further complicated, adding fluid dynamics, turbulence for effects with particle systems, like smoke or liquids, and so on.

There are many more possibilities, such as running artificial intelligence on the GPU so that the AI can control a large number of characters with complex behaviors.

Ray tracing

Ray tracing is often used in 3D graphics, but is too complex to be used in real time graphics. Therefore, in future applications, it is possible to use tracing in conjunction with rasterization. It seems that the GF100 is the GPU with which high-quality real-time ray tracing is possible.

Tracing is not easy to perform efficiently on the GPU, because the rays being calculated have unpredictable directions, and their calculation requires accessing memory at random addresses, while the GPU usually receives data from memory in linear blocks.

But the GF100 architecture differs from its predecessors precisely in that its design took the requirements of ray tracing algorithms into account. It is the first video chip to support hardware recursion, which makes it possible to execute such tasks efficiently, and the two-level cache hierarchy significantly increases the efficiency of ray tracing by speeding up data requests from memory: the L1 cache improves memory locality for adjacent rays, while the L2 cache increases the effective bandwidth to video memory.
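
Device-side recursion looks like ordinary C code. The toy sketch below is our own and must be compiled for compute capability 2.0 or newer (for example with nvcc -arch=sm_20); it only shows that a __device__ function can now call itself, which is what tree- and ray-traversal style algorithms need, and the bounce model in it is purely illustrative.

```cuda
// A recursive device function: earlier GPUs could not do this at all.
// A real ray tracer would recurse into reflection/refraction rays instead.
__device__ float bounce(float energy, int depth)
{
    if (depth == 0 || energy < 0.01f)
        return energy;
    return bounce(energy * 0.5f, depth - 1);   // each bounce loses half the energy
}

__global__ void trace_demo(float* out, int max_depth)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    out[i] = bounce(1.0f, max_depth);
}
```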

The GF100 is also able to efficiently perform advanced global illumination algorithms such as path tracing. This method is similar to ray tracing, it uses a large number of rays to collect data about the indirect illumination of the scene. Nvidia estimates that the GF100 is up to four times faster than the GT200 in this algorithm.

But still, these methods are too complex to be applied in games. Developers can use both rasterization and ray tracing at the same time, which is called hybrid rendering. For example, rasterization can be used in the first rendering pass, and for a part of the pixels in the next pass, reflection will be calculated using ray tracing. These hybrid models are a great way to get high performance with very high quality results.

Just to be clear, here’s an example Nvidia demo that calculates global illumination using Nvidia’s OptiX technology when rendering car models. In the future, it is possible that OptiX will be integrated into the game engine of a racing game, and players will be able to get very high-quality screenshots of their favorite cars in the “photo mode” or “gallery mode” that such games have.

Let’s look at some of the computational effects that will appear in games in the near future. For example, in the game Metro 2033, which is due out in March of this year, high-quality post-filtering is implemented to simulate the effect of depth of field (depth of field).

DirectCompute’s DirectX 11 compute shaders are used to simulate this optical effect. Using standard post-processing techniques results in relatively poor post-filtering quality, while using pixel shaders for cinematic-quality DOF techniques causes too much performance loss.

Therefore, the game developers of the Metro 2033 project, together with Nvidia, developed a technique that uses the power of DirectCompute to implement complex post-processing. We should see the result live in March, but for now we will limit ourselves to a screenshot, the DOF effect on which looks pretty good.

And here is another game that is about to hit the stores: Dark Void. It has been in development since autumn 2008, and the most interesting part of it for us is its use of some very advanced PhysX effects.

As with the previous game, the developers worked with Nvidia to incorporate the interesting physics effects offered by the new APEX Turbulence module in the PhysX feature suite. This very turbulence can be seen in the effect of smoke from a jetpack (jetpack) and some types of weapons (disintegrator).

Also in Dark Void, you can see many different physics effects based on particle systems. Usually they are clearly visible when shot from a variety of weapons and when these shots hit the surface. As in this screenshot:

Supersonic Sled physical effects demo

Well, as the most spectacular demonstration of the new GPU’s physical effects capabilities, Nvidia offers the Supersonic Sled demo. The application uses a lot of PhysX features, but also uses advanced rendering techniques (think tessellation and post-processing) and looks pretty good overall.

In the demo you can see a lot of physical effects and simulations. For example, the Sled «vehicle» model is simulated in a physically correct way using PhysX, although on the CPU, since it consists of too few parts to be worth moving to the GPU.

The physical model of the Sled consists of 200 rigid bodies and 200 axial joints, and all the physics of these objects is calculated on the CPU with the help of PhysX. The pilot's ragdoll model is also simulated with PhysX. Of all the animations in the demo, only the pilot's facial animation is pre-calculated; everything else is computed in real time.

The most demanding physical effects are assigned to the video chip: imitation of the behavior of smoke, dust and fragments. Particle systems on the GPU: smoke from a rocket engine, dust from a rocket, explosions, smoke trails from fallen parts.

The destruction of objects looks the most impressive. For example, a bridge can collapse into a predetermined number of rigid bodies, up to a million! Even then the GF100 delivers an interactive frame rate, and it looks very impressive.

Of the graphics technologies used in Supersonic Sled, not related to physical effects, we can note high-quality post-processing motion blur and the use of tessellation for the ground surface. Moreover, as you can see in the screenshot above, the tessellation is adaptive.

Nvidia 3D Vision Surround technology

Naturally, after the main competitor announced a technology that can output an image to three monitors at once, Nvidia needed some kind of answer. And it went even further, offering the ability to output to three displays in stereo mode.

Readers are well aware of Nvidia’s 3D Vision technology, which uses active wireless shutter glasses and Nvidia stereo drivers to support several hundred games in stereo. So, on two Nvidia GF100 video cards operating in the SLI configuration, using 3D Vision Surround technology, it will be possible to obtain a high-resolution stereo image on three output devices at once.

It’s a pity that 3D Vision Surround support is only available with two or more GPUs combined in an SLI system, and this mode does not work with a single video card. In general, the solution is clearly a software one, and it will work also on SLI systems based on old video cards. But it will absolutely definitely be a game mode of the highest quality possible today. After all, three monitors are supported at a resolution of 1920×1080 in stereo, or 2560×1600 in regular 2D.

In addition, 3D Vision Surround can compensate for the parts of the image hidden behind the monitor bezels. With this function enabled, the portion of the image that falls behind the bezels is simply not shown to the user. The result is a more coherent picture, which is especially important in stereo mode, where the slightest discrepancy destroys the stereo effect. The monitor bezels end up looking like parts of the cockpit of an airplane, a helicopter or a racing car.

Conclusions

As the details of Nvidia’s new architecture emerged, it became clear that the company is still at the forefront of 3D technology and pushing it into the PC market. It may scare someone that GPUs are now becoming more and more like CPUs and can even compete with them in some applications in high-performance computing. But still, GF100 is, first of all, a video chip.

The new GPU includes sixteen tessellation engines and four rasterizers. And they are needed specifically for 3D graphics, more likely even for future 3D graphics. It is tessellation and the overlay of displacement maps that can bring the improvement in image quality in gaming applications that users are waiting for. And it is in the GF100 that everything is done so that developers can use tessellation with maximum convenience for themselves and high performance for players.

But tessellation and the reorganized geometry pipeline are not the only attractions of the new Nvidia graphics architecture. The GPU's non-graphics computing capabilities are now becoming very important, and the GF100 currently offers the most of them. It is the first GPU with C++ support, recursion, and a cache hierarchy that serves both reads and writes. Together, these innovations give developers the ability to tackle a variety of problems, including ray tracing, global illumination, complex physics effects and artificial intelligence.

Some may find it a little strange to assign tasks to the GPU that the CPU has always done. But it seems that this is exactly the path that the entire industry is moving on. And while most of today’s gaming apps are multiplatform and limited by the weaknesses of consoles, a lot more can be done on PC right now, and we can only hope that developers will use it. In any case, someday the next generation of consoles will come, and it will definitely be architecturally more similar to the GF100 than to previous generations of GPUs.

As for the rest, having become acquainted with the GF100 architecture on paper, we can note that in addition to the many new features, some shortcomings of previous GPUs have been eliminated. For example, the ROP blocks have been significantly strengthened and full-screen anti-aliasing has been accelerated, while also gaining in quality.

Unfortunately, we don’t know much about the performance of the new GPU in a wide range of applications yet. We can only rely on data received from Nvidia, and then only for a small range of tasks and only with certain settings. But if they fulfill everything they promise, then it looks like the GF100 will indeed become the most productive solution among all available on the market. Not to mention the exceptional possibilities that this GPU provides for developers.

This really is a completely new architecture with many interesting changes, and it shows only one obvious potential bottleneck: the performance of the texture units. Although Nvidia itself talks about a 1.6x increase in TMU speed, it is quite possible that the GF100 will be on par with or even behind competing solutions in texturing speed, especially in legacy applications that do not use Gather4 and SSAO. After all, the number of texture units has actually decreased compared to the GT200.

In general, as usual, the market success of new Nvidia solutions will depend on the final clock speeds. On paper, almost everything in the new architecture looks good, but all this can be spoiled by the low frequencies of specific solutions, if the production problems at TSMC are really as big as some sources say. We will wait for the appearance of video cards based on the new architecture and will definitely tell you about them!

GF100 News — NVIDIA WORLD

Last night, MSI introduced its new GTX 480 (NVIDIA GF100 core) with its proprietary Twin Frozr II cooling system.

The card has one of the most serious cooling systems among Fermi-based cards: MSI's own Twin Frozr II design, equipped with five heat pipes (two of them 8 mm SuperPipes of increased diameter) and a massive heatsink blown by two 80 mm PWM-controlled fans.

Let me remind you that in the manufacture of these video cards, “Military-Class” components are used, such as high-conductivity polymer capacitors (Highly-Conductive Polymerized Capacitor or Hi-c CAP) and solid-state chokes with a superferrite core (SSC). Capacitors provide more accurate GPU voltage and better stability; chokes eliminate buzz; and all together the components give high stability when working at higher frequencies (and a good margin, of course).

Compared to the reference NVIDIA GeForce GTX 480, the card is 14 degrees Celsius cooler and 4 dB quieter.

Despite the powerful cooling system and high-quality components used in manufacturing, the card is clocked at almost standard frequencies of 700/1401/942 (core/shaders/memory). The onboard memory of the board is 1536MB, type GDDR5.

As usual, the video card comes with MSI Afterburner, an overclocking and monitoring utility based on the RivaTuner engine, although we recommend updating it from our website right away.

The estimated retail price of the boxed card is $499.



Fudzilla.com

A dual-GPU card based on the GF100 has still not become a reality, and information about upcoming new products has to be collected literally bit by bit, since the manufacturer is determined not to disclose details regarding its products until the official announcement.

But despite this, the leaks, although small, constantly stir up the public.

The ExpReview resource managed to obtain (not without help from Chinese colleagues) photos of an engineering sample of the GF104 graphics chip, which will be used on GeForce GTX 460 series boards. In addition, the claimed characteristics of the GeForce GTX 460 are reported: the GPU has 336 stream processors and a 192-bit memory bus, and the overall frequency formula will be 675/1350/1800 MHz for the core/shader domain/memory, respectively. It also became known that the video card will be equipped with 768 megabytes of video memory.

According to the source, the video card is aimed at the price segment up to $230.

The date of the announcement according to unconfirmed data is July 12 of this year.



ExpReview

Apparently, the GF104 will serve as the basis not only for the GTX 460, which everyone knows about and whose characteristics are more or less established, but also for a GTS 455 that nobody has noticed yet.

NVIDIA will most likely try to fill the market in mid-range with more GF104-based cards, and since every dollar counts in that price range, demand for the GTX 460 won’t be outweighed by supply. This is where the GTS 455 video card should fill the gap — it has slightly lower performance than the GTX 460, but a more attractive price.

Of course, it is too early to talk about any characteristics, since there is very little information. And keep in mind that NVIDIA may delay the release of the GTS 455 to a later date (for example, the «Back-to-school» season). In any case, two cards based on the GF104 are possible.



Although Inno3D did not show an actual sample of the video card, the specifications of the new GeForce GTX 465 could be seen at its exhibition stand. Apparently much inspired by Sapphire, Inno3D decided to give its GTX 465 line an appropriate name: VapourX.

It’s hard to say anything about cooling performance without actual tests, but Sapphire’s «Vapor-X» cooling system, used on Radeon HD 5xxx graphics cards, does the job very well. In the meantime, we are waiting for a quiet and cold Inno3D GeForce GTX 465.



Behardware

At Computex 2010, NVIDIA continued its series of announcements.

NVIDIA’s first announcement at the show was the new GeForce GTX 465 graphics processor and affordable graphics cards based on it from partners such as ASUS, EVGA, Galaxy, MSI, Palit, PNY, Zotac and others starting at 279$. With this GPU, NVIDIA hopes to both increase the popularity of 3D Vision technology and push Fermi GF100-based graphics cards to the mainstream market. You could see an overview of the Zotac GeForce GTX 465 video card earlier on our website.

Until now, NVIDIA 3D Vision technology has only been available to enthusiasts who have purchased a 120Hz 3D Vision-Ready certified monitor, a suitable graphics card, a 3D Vision Kit including LCD shutter glasses, and installed all the drivers themselves. But today, at Computex 2010, Jensen Huang, president of NVIDIA, announced the creation of a new type of PC — 3D PC. Consumers can now buy a fully assembled and configured 3D PC and enjoy 3D Vision technology in games and Blu-ray 3D movies. This idea has received wide support in the industry from companies such as Asus, Acer, Dell, LG, Toshiba, ViewSonic and many others. 3D PC prices will start at $1500 and for that money you will get everything you need: 120Hz LCD monitor, 3D LCD shutter glasses and pre-installed drivers. $1,500 is half the cost of a new 3D HD TV, so a 3D PC is the cheapest way to enjoy 3D content at home. If you later buy a 3D TV (or have already bought one), you can connect your 3D PC via HDMI 1.4 and watch 3D content on the big screen using the NVIDIA 3DTV Play software.

Microsoft has announced a new version of Silverlight (an alternative to Adobe Flash) that now supports web-based 3D content streaming to NVIDIA 3D Vision-equipped computers. NVIDIA and Microsoft demonstrated a live HD stereo 3D music video (We Are The World 3D) streamed over the Internet. New details of this interesting project should appear soon, we will keep you posted.

ASUS introduced the new 15.6″ Asus RoG G53 3D gaming laptop (equipped with a 120Hz LCD panel) with full 3D gaming support and new HDMI 1.4 output (shipped without 3D Vision glasses) and the new 17.3″ Asus ROG G73Jw 3D-Ready laptop (equipped with a 120Hz LCD panel), which may be equipped with the recently announced GTX 480M GPU. The Asus G51Jx-EE 3D-ready laptop may come with a built-in IR transmitter compatible with 3D Vision active shutter glasses.

The company also showed the Asus Eee Top ET2400 All-in-One PC with 3D support, and the Asus CD5390 PC, a desktop equipped with two GeForce GTX 480s running in SLI configuration and supporting 3D Vision Surround technology (!). In addition, new 120 Hz 3D-ready monitors 23″ Asus VG236H and 27″ Asus PG276H could be seen at the exhibition.



3dvision-blog.com

The confusion with NVIDIA’s GeForce GTX 460 and GeForce GTX 465 graphics cards based on the Fermi GPU is finally over.

It became clear that NVIDIA GeForce GTX 460 and GeForce GTX 465 are different video cards.

The GeForce GTX 460 will be based on a new modification of the Fermi GPU — the GF104 chip, which will be more compact than the GF100. It is known that the video card will be equipped with 768 MB of local GDDR5 memory, which will be connected to the GPU using 192-bit memory bus. Nothing is known about the rest of the specification parameters, but it can be assumed that the operating frequencies will be noticeably lower than those of the top models.

This video card will most likely occupy the lower end of NVIDIA's mid-range lineup and will be called upon to compete with the AMD/ATI Radeon HD 5830. Time will tell, and we will keep you informed.

P.S. The first deliveries of the GeForce GTX 460 are expected no earlier than the beginning of July.

video cards | NVIDIA | GeForce GTX 460 | GeForce GTX 465 | GF100 | GF104


donanimhaber.com

The first GeForce GTX 465 cards, based on a cut-down GF100 chip, have started showing up at online retailers.

All cards, such as ZOTAC’s ZT-40301-10P, are listed at $279.99 (excluding shipping), which is not much of a surprise, since rumors to this effect had already leaked to the Web. The cards are based on NVIDIA’s reference design and run at 607 MHz for the core, 838 MHz for the memory and 1215 MHz for the shaders.

There is no data for the European market yet, but our Western colleagues estimate GTX 465-based cards at around €249, which puts them roughly in the price range of the Radeon HD 5850. Early indications suggest the GTX 465 may be unable to give an adequate answer to the Radeon HD 5850’s performance, but do not forget that NVIDIA has often corrected such situations with updated drivers, which can provide a hefty performance boost. Whatever you say, we are still dealing with a combination of hardware and software.

graphics cards | Zotac | Fermi | GeForce GTX 465 | GF100


NewGG.com

According to the European resource Fudzilla, despite the fact that April 12 was supposed to be the day Fermi went on sale, the accelerators did not appear at retail.

Delivery expected any day. Many online stores accept pre-orders for cards, but they are not yet in a hurry to send them to customers. It is also interesting that since the official announcement of Fermi, the cost of video cards has been gradually decreasing, although buyers have not yet received their cards.

The GTX 470 costs from €315 to €350, depending on the market, and the GTX 480 from €450 to €480. In comparison, the AMD HD 5870 sells for €335, while the cheaper HD 5850 sells for €250.

According to the first tests, the NVIDIA GTX 480 outperforms the Radeon HD 5870, but not by a wide margin, especially considering the latter’s price. Fermi accelerators are about to go on sale, but they will struggle to compete with the popular and more attractively priced 5800-series video cards.

video cards | Fermi | GF100


Fudzilla

The Fudzilla resource published a series of photos that were taken in one of the partner warehouses in Europe.

According to the source, European partners have finally received the first mass batches of next-generation NVIDIA accelerators and are trying to put them in stores as soon as possible. Some will arrive today and some next week. In any case, the cards should be in stores on April 12th.

Of course, many of these accelerators have already been sold and will ship directly to pre-order customers. Thus, there may be some shortage of video cards until May.

video cards | GF100


Fudzilla

Koolance has released full-cover water blocks for the NVIDIA GeForce GTX 480 and GeForce GTX 470. They cover the entire side of the video card that emits heat (the GPU, the memory chips and the voltage regulation circuitry).

The water blocks measure 15.9×14.6×1.6 cm and weigh up to 680 g, yet they are thin enough to occupy only a single slot. They are made of nickel-plated copper with an acrylic cover on top.

The area above the GPU features an array of 0.5 mm fins to improve heat transfer. The VID-NX480 costs $120 and the VID-NX470 $110.

video cards | cooling systems | Koolance | Fermi | GF100


TechPowerUp

ModLabs.net

Tags: 512SP | GeForce | GF100 | GTX | NVIDIA

Date: 10/08/2010 10:50:53

About a week ago, the first dubious screenshots of an NVIDIA GTX 480 with a fully functional GF100 chip began to appear. First it was a GPU-Z window, then 3DMark Vantage and Crysis Warhead benchmark results. But yesterday the resource en.expreview.com published a full review of such a video card, so there is no longer any reason for doubt.

The list of core differences from the 480SP GTX 480 comes down to one additional streaming multiprocessor, which adds 32 CUDA cores, one PolyMorph geometry engine, four texture units, 16 load/store units, four special function units, 64 KB of shared memory/L1 cache, two warp schedulers and two instruction dispatch units. The origin of the graphics card is not known, but the Accelero Xtreme Plus cooler with five heat pipes and three 92 mm fans, the absence of NVIDIA logos on the PCB, and the KEMET tantalum capacitors in the 8+2 power circuit suggest that this is a creation of the new manufacturer Tiger.

The frequencies have risen to 801/1601/1900 MHz (core/shaders/memory) from the standard 701/1401/1850 MHz. A head-to-head power consumption comparison under load, at the same frequencies but standard voltages, showed a difference of 204 W, for a total of 644 W! In real games, the difference in FPS was 5.6%, in favor of the 512SP version, of course. You can read more detailed information in the review itself.
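The review’s figures make the efficiency trade-off easy to quantify with a little arithmetic. Below is a minimal Python sketch that simply recomputes the percentages; the one assumption not stated explicitly above is that 644 W is the 512SP system’s total draw under load and 204 W is its delta versus the 480SP system at the same clocks.

# Quick arithmetic on the figures quoted in the review above.
# Assumption: 644 W is total system draw for the 512SP card under load,
# and 204 W is the delta versus the 480SP system at the same clocks.
core, shaders, memory = 801 / 701, 1601 / 1401, 1900 / 1850
print(f"core/shader/memory uplift: {core - 1:.1%} / {shaders - 1:.1%} / {memory - 1:.1%}")

total_512sp = 644.0               # W, 512SP system under load
delta       = 204.0               # W, extra draw vs the 480SP system
total_480sp = total_512sp - delta
fps_gain    = 0.056               # 5.6% average FPS advantage

print(f"system power increase: {delta / total_480sp:.0%}")   # roughly +46%
print(f"FPS gain:              {fps_gain:.1%}")               # for ~5.6% more performance

In other words, on these numbers the 512SP card’s single-digit FPS advantage comes at a disproportionate rise in system power draw.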

Categories: Industry News
Tags: Fermi | GeForce | GF100 | GF104 | GF106 | GTX 450 | GTX 460 | GTX 470 | GTX 475 | NVIDIA
Date: 07/21/2010 14:14:42

According to Donanimhaber.com, the next announcement from NVIDIA will be the GeForce GTS 450, based on the 40 nm GF106 chip. Of its characteristics, only the 128-bit bus and 1 GB of GDDR5 memory are known so far. Its release is scheduled for the end of August, and until then we can only guess at the performance of NVIDIA’s future mainstream solutions and whether they will be able to overcome the ATI cards, refreshed towards the end of the year, whose predecessors dominate this segment.

Nevertheless, that date is still some way off, and we will probably see a couple more video card variants based on the GF104 before then. Rumor has it that once the stock of culled GF100 chips sold as the GTX 465 runs out, the very hot GTX 470 will be replaced by a GTX 475 based on a full GF104 with all 8 clusters enabled, for a total of 384 shader units. The core frequency will most likely rise as well, since the GTX 460 consistently overclocks by 200 MHz.

Categories: Industry News
Tags: Fermi | GF100 | GTX470 | GTX480 | NVIDIA | video card
Date: 09/04/2010 15:44:54

Some time ago we told you about the hoops suppliers have to jump through in order to buy fresh Fermi cards. Oddly enough, the information published today continues that story.

It is worth starting with NVIDIA’s promise, widely repeated by the media, of roughly 30,000 ready-made GTX 480 and GTX 470 video adapters by the date of their announcement. As it turns out, those figures were grossly exaggerated, with fewer than 8,000 units shipped to retail channels worldwide in the first week. That is extremely small, considering the extra quarter NVIDIA had to debug production and release the chips.

Note that, according to SemiAccurate, the company’s two main partners were allocated fewer than 100 cards each for the whole of Europe. Assuming only five launch partners, that would mean each had around 1,600 GTX 470 and GTX 480 cards for the entire world. Everyone understands how miserable that number is, but it looks even more deplorable once you factor in NVIDIA’s forced bundling, which intermediaries are required to comply with. According to one forum participant, at least in Germany NVIDIA «recommends» that distributors purchase 600 useless and obsolete video adapters in order to obtain 20 GTX 470 and GTX 480 cards.

In general, the picture is grim for NVIDIA: with each video adapter sold, the company loses money due to the low yield of good chips. How long this can go on, and whether there will be a «second» launch of Fermi once a sufficient number of video adapters reaches retail, is becoming increasingly difficult to predict.

Filed under: Industry News
Tags: AMD | ATI | Fermi | GF100 | GF104 | HD5870 | NVIDIA
Date: 06/04/2010 14:02:51

Well, the foundation of the Fermi family has been laid: the GTX 480 and 470 have been announced and are even on sale in places, but what about the budget successors to the GT200? Once again, information was confirmed that the first GF104 chips left the assembly line a few months ago. If all goes well, the six-month gap between the chip’s release and the product going on sale can be met.

Of course, considering that NVIDIA spent some 10 months polishing the GF100, one can only guess whether the GF104 will suffer the same fate.

Most likely, video adapters based on this GPU will be called GTS 450 or GTS 460. It can also be assumed that the number of shader units will be halved from 512 to 256 and that the number of ROPs will be cut as well, since even in the 3-billion-transistor chip they do not shine with outstanding performance.

Simply cutting the GF100 in half to get the GF104 will not make its big brother’s shortcomings disappear. And if the GF104 turns out to be larger than half a GTX 480, the manufacturing cost of these chips will come to the fore, because the GF104 die would be roughly the size of Cypress (AMD HD 5870) with about the same power consumption, while its performance would fall between the HD 5770 and HD 5830.

To put it simply, when you have a chip that is too big, too hot and not very efficient, cutting it in half leaves you with the same disadvantages. Indeed, at the moment NVIDIA’s GPU die is more than 60% larger than the comparable ATI GPU’s.

Returning to the GF100, sources from NVIDIA confirmed to SemiAccurate that the chip yield rate is still very low — around 20% per wafer, which means that GPU core silicon costs at least $250 before packaging and testing. And if we add to this a rather expensive component base and compare it with the price for which video cards are sold, NVIDIA is unlikely to make money on them at all.
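For context, the $250 figure is roughly consistent with a simple dies-per-wafer estimate. The Python sketch below is only a back-of-the-envelope check: the ~$5,000 wafer price and ~530 mm² die size are assumptions on our part, not figures from SemiAccurate; only the 20% yield comes from the report above.

# Back-of-envelope check of the "$250 per GF100 die" claim.
# Assumed inputs (not from the report): ~$5,000 per 300 mm 40 nm wafer
# and a ~530 mm^2 die. The 20% yield figure is quoted above.
import math

WAFER_DIAMETER_MM = 300.0
WAFER_PRICE_USD   = 5000.0    # assumed wafer cost
DIE_AREA_MM2      = 530.0     # approximate GF100 die size
YIELD_RATE        = 0.20      # per the SemiAccurate figure

def gross_dies_per_wafer(diameter, die_area):
    # Classic dies-per-wafer approximation with edge loss.
    wafer_area = math.pi * (diameter / 2) ** 2
    return wafer_area / die_area - math.pi * diameter / math.sqrt(2 * die_area)

gross = gross_dies_per_wafer(WAFER_DIAMETER_MM, DIE_AREA_MM2)
good  = gross * YIELD_RATE
print(f"gross dies per wafer:      {gross:.0f}")   # ~104
print(f"good dies per wafer:       {good:.0f}")    # ~21
print(f"silicon cost per good die: ${WAFER_PRICE_USD / good:.0f}")  # ~$240

Under those assumptions the raw silicon works out to roughly $240 per good die, which lines up with the "at least $250 before packaging and testing" claim.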

The company’s partners are having a hard time here as well, since they too are losing profit, and everyone hopes the graphics giant’s production situation will improve, because losing money on every board cannot go on for long, despite all the promises to raise chip yields over the next few months.

Filed under: Industry News
Tags: AMD | HD 5870 | Fermi | GF100 | GTX480 | HD 5970 | HOT | NVIDIA | Radeon | power consumption
Date: 02/04/2010 11:36:01

The GF100 was never going to be an economical solution in any respect. And now, after the official announcement of the NVIDIA GeForce GTX 480, the second most discussed problem with the video adapter, after its not particularly impressive performance, is a phenomenal level of power consumption for a single-chip solution.

The official spec lists a 250 W TDP; to be honest, this figure looks understated, something noted in all the GTX 480 reviews published after the announcement, and the most zealous journalists measured it themselves: according to Fcenter.ru’s test, for example, NVIDIA’s «beast» consumes about 130 W more than the flagship AMD Radeon HD 5870 and even manages to overtake the dual-chip Radeon HD 5970 in this respect.
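A rough cross-check of that measurement is sketched below in Python; the ~188 W board-power figure for the Radeon HD 5870 and ~294 W for the HD 5970 are assumed reference values, not numbers from the article.

# Rough sanity check of the Fcenter.ru claim quoted above.
# Assumed reference figures (not from the article): ~188 W board power
# for the Radeon HD 5870 and ~294 W for the dual-GPU HD 5970.
HD5870_BOARD_POWER_W = 188.0
HD5970_BOARD_POWER_W = 294.0
GTX480_TDP_W         = 250.0   # official spec quoted above
MEASURED_DELTA_W     = 130.0   # "130 W more than the HD 5870"

implied_draw = HD5870_BOARD_POWER_W + MEASURED_DELTA_W
print(f"implied GTX 480 draw under load: ~{implied_draw:.0f} W")
print(f"over the official 250 W TDP by:  ~{implied_draw - GTX480_TDP_W:.0f} W")
print(f"above the dual-GPU HD 5970?      {implied_draw > HD5970_BOARD_POWER_W}")

Under those assumptions the GTX 480 would be pulling around 318 W under load, putting it past both its official TDP and the dual-GPU HD 5970, exactly as the measurements suggest.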

But increased demands on the power supply aren’t the only dilemma you’ll face when buying a Fermi: there is also noise, an inseparable companion of a high TDP. Despite the use of five direct-contact heatpipes and a seemingly well-thought-out cooler, you can now hardly play comfortably near your PC without headphones. But, as they say, seeing is believing, so we offer one extremely interesting video that clearly shows how the GF100 cooler works and how it huffs and puffs compared to its competitors.

An interesting point: an additional 120 mm fan at 900 rpm aimed at the «heart» of the GTX 480 lets the video adapter bring the cooler’s noise down to a noticeably more comfortable level while running 3DMark Vantage; however, if you run the heavier FurMark, nothing changes: dozens of decibels will continue to torment you.

You can also read a comment on the new GeForce’s power consumption from NVIDIA General Manager Drew Henry on the official NVIDIA nTersect blog:

When you build a high-performance GPU like the GTX 480, you understand that it will consume a lot of power in order to deliver the performance and features it offers. It was a trade-off for us, but we wanted the chip to be fast. Our chips are designed from the outset to operate at high temperatures, so this will not affect product quality or longevity. We believe it was the right trade-off.

It can also be said that NVIDIA has given its partners plenty of headaches over releasing products with new, more efficient coolers, and has handed cooling system manufacturers and marketers a chance to get the most out of the situation. We will just have to watch what happens next.

Pantech GF100 cell phone


Pantech-Curitel GF100 reviews — 1 customer review of the Pantech-Curitel GF100


Average rating for the Pantech-Curitel GF100: 3
A total of 1 review is known for the Pantech-Curitel GF100.


    Reviews of Pantech-Curitel GF100

Review information updated on 04.10.2022


    oleg9583, 07/12/2009

Advantages:
Low price, two color displays, 3D sound, two speakers.

Disadvantages:
It is very difficult to find anything for this model in stores, and indeed for this company’s phones in general.

Comment:
Two years ago I bought this phone for my mother. It worked fine, but then it started discharging quickly: after even a short conversation the battery indicator would drop to nothing. I decided to replace the battery and ran all over the shops but never found one, so we took the phone to a specialist. He said the battery was not the problem, so I started looking for a new charger instead and eventually found one in a store, taken from another model.

General characteristics
Type: telephone
Body type: clamshell
Controls: navigation key
SIM card type: regular
Number of SIM cards: 1
Weight: 85 g
Dimensions (W×H×D): 44×82×23 mm

Display
Display type: color STN, 65,536 colors
Resolution: 128×128
Secondary display: yes, color, 65,536 colors, 64×96 px

Ringtones
Ringtone type: 64-voice polyphony
Vibrating alert: yes

Multimedia
Voice recorder: yes
Games: yes
Java applications: yes

Connectivity
Standard: GSM 900/1800/1900
Internet access: WAP 1.2.1, GPRS
Modem: yes
PC synchronization: yes

Memory and processor
Number of processor cores: 1
Built-in memory: 1.80 MB

Messaging
Additional SMS functions: predictive text input (dictionary)
MMS: yes
EMS: yes

Power
Battery type: Li-Ion
Battery capacity: 760 mAh
Talk time: 3.