
Japanese analysis of the NV40


Gast
2003-10-05, 13:37:48
Quite nice!
BTW, hopefully Nvidia doesn't sink with the 16 virtual pipelines the way 3dfx once did with virtual 24-bit/32-bit ... =)
I had Babelfish translate it...

Here is the direct link:
Hiroshige (http://babelfish.altavista.com/babelfish/urltrurl?url=http%3A%2F%2Fpc.watch.impress.co.jp%2Fdocs%2F2003%2F1003%2Fkaigai029.htm&lp=ja_en&tt=url)

up

Hiroshige Goto's Weekly Overseas News
NV40 with 8-pixel output / 16 virtual pipes to appear early next year




--------------------------------------------------------------------------------

- NVIDIA greatly extends performance in the NV4x generation

NVIDIA President and CEO Jen-Hsun Huang
At COMPUTEX, held last week, the shape of NVIDIA's next-generation GPU "NV40" began to come into view piece by piece.

When it explained the GeForce2 GTS, NVIDIA made clear that GPUs with a new architecture get the code name "NVx0", while generations centered on extending that architecture become "NVx5". It follows that the generation after GeForce FX 5900 (NV35) will be NV40. And since "a new GPU is put out every half year" (David B. Kirk, Chief Scientist, NVIDIA), NV40 was expected to be drawing near.

According to industry sources, NV40 outputs 8 pixels per clock and is organized as 16 virtual pipelines. The production process is 0.11 µm, and the shader architecture is 2.0+. The host interface is AGP 8X, and by using a PCI Express x16 bridge chip, PCI Express cards become possible. In addition, the memory interface is said to support not only DDR/GDDR2 but also GDDR3, although whether actual boards will use GDDR3 is not known.

The release of NV40 is scheduled for roughly early next year, but no vendor other than NVIDIA is said to have seen the actual chip yet. The board design guide is scheduled to be supplied by the end of the year, with board design starting from there, so the exact time boards will appear is not yet known.

In addition, following NV40, NVIDIA is said to be preparing "NV45". NV45 is scheduled for the second quarter of 2004, and its pipeline organization is said to be almost identical to NV40's. At the same clock and with the same memory configuration, there should therefore be almost no performance difference between NV40 and NV45. The biggest difference is that NV45 carries the PCI Express x16 interface on-chip. Which shader generation NV45 supports is not yet known.

Furthermore, NVIDIA is also planning a mainstream GPU and a value GPU on the NV4x architecture. The mainstream part is said to be "NV41" and the value part "NV42". Both are PCI Express parts, but NVIDIA will additionally offer AGP editions of both GPUs, which carry code names different from the PCI Express editions. These products, or samples of them, will probably be widely on display around CeBIT next year.

In addition, there are rumors that NVIDIA will also move into chipsets for the Intel platform. The reasoning is that NVIDIA has signed a foundry contract with IBM. Because Intel is wary of NVIDIA, it has not granted NVIDIA a license for the FSB (front side bus) of Intel CPUs. But because IBM holds a cross license with Intel, manufacturing at IBM might make this possible without being sued by Intel. This story came from multiple sources, but in every case they say "we did not hear it from NVIDIA", so its authenticity cannot be verified.

- NV40, which doubles the pipeline structure

Now that the outline of NVIDIA's NV40 generation has become clear, the performance and architecture of the next-generation GPU can be guessed at to some extent.

First, from the fact that NV40 is 8 pipes / 16 virtual pipes, the structure of the pixel-side pipeline can be inferred. NVIDIA's Kirk had explained that GeForce FX 5800 (NV30) and GeForce FX 5900 (NV35) render normal pixels (color + Z) at 4 pixels per clock, but adopt an architecture that can virtually form 8 pipelines by splitting the SIMD arithmetic units depending on the work being processed. Concretely, the virtual pipes can be used for stencil operations, 16-bit-precision processing and the like.

The 8-pipe / 16-virtual-pipe pixel section of NV40 is presumed to be this NV30/35-style architecture expanded to twice the width. In other words, it is natural to think that the number of 128-bit-wide (32bit×4) SIMD arithmetic units carried as pixel shaders has been doubled from the 4 of NV30/35 to 8. Various other units such as texture units and texture fetch are attached besides the pixel shaders, but at least the arithmetic units have most likely doubled.

In addition, from the fact that a virtual-pipe configuration can be taken (that is, the arithmetic units can be split), it is thought that the internal arithmetic precision of the pixel shader remains 32-bit. NVIDIA makes twice the number of virtual pipelines possible by splitting each 32-bit arithmetic unit into two 16-bit units.
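
As a rough illustration of this splitting idea (a sketch only, in Python; the unit counts are the article's presumptions, not confirmed NV40 specifications), the virtual pipe count simply follows from how many 16-bit operations fit into each 32-bit unit:

    # Illustrative throughput model, not NVIDIA's actual design: each SIMD unit
    # handles one 32-bit pixel operation per clock, or two 16-bit operations
    # when it is split into halves.
    def pixel_pipes(simd_units, precision_bits):
        ops_per_unit = 32 // precision_bits   # 1 at 32-bit, 2 at 16-bit
        return simd_units * ops_per_unit

    for chip, units in [("NV30/35", 4), ("NV40 (presumed)", 8)]:
        print(chip, pixel_pipes(units, 32), "pipes /",
              pixel_pipes(units, 16), "virtual pipes")
    # NV30/35: 4 pipes / 8 virtual pipes; NV40 (presumed): 8 pipes / 16 virtual pipes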

Many of the performance problems of the GeForce FX-type GPUs in the current DirectX 9 era stem from the adoption of 32-bit precision, which halves normal pixel output relative to ATI's RADEON 9700 (R300) and 9800 (R350). ATI, with 24-bit internal precision, can always render normal pixels at 8 pixels per clock. With 8 pixels per clock in the NV40 generation, however, NVIDIA would reach a performance level equal to ATI's even at full DirectX 9 precision. Of course, actual shader throughput differs depending on the degree of parallelism inside the shader, so this is ultimately an argument in principle. But there is no mistaking that NVIDIA intends to extend performance well beyond the current generation.
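
The halving argument is just fill-rate arithmetic; here is a small check, assuming a purely hypothetical core clock (no clock figures appear in the article):

    # Peak pixel fill rate = pixels per clock x core clock.
    # The 400 MHz figure is a placeholder for comparison, not an article number.
    def fill_rate_mpixels(pixels_per_clock, clock_mhz):
        return pixels_per_clock * clock_mhz

    print("NV30/35 at full precision:  ", fill_rate_mpixels(4, 400), "Mpixels/s")
    print("R300/350 and presumed NV40: ", fill_rate_mpixels(8, 400), "Mpixels/s")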

- NV40, whose production cost rises further

Of course, there is a trade-off here as well. The big trade-off is cost.

When the number of pixel-side shader arithmetic units is increased over the NV30/35 architecture, correspondingly more transistors are needed. Because the pixel shaders occupy the largest area inside a GPU, the impact is large. NVIDIA's GPUs carry 125 million transistors in NV30 and 130 million in NV35, but the NV40 generation is presumed to approach, or exceed, 200 million. Indeed, Kirk of NVIDIA has suggested in the past that the next-generation architecture could reach 200 million transistors.

When the transistor count increases, the die size inevitably grows as well. The die sizes of NV30/35 each rose to roughly 200 mm², considerably larger than the roughly 150 mm² of the GeForce 3/4 generation. With NV40 it is presumed to increase further.

Of course, because NV40 uses the intermediate 0.11 µm process generation, the die size increase is held down by the shrink compared to the 0.13 µm NV30/35. Even so, the die size is expected to grow by several tens of percent. "Even with NV30, NVIDIA suffered at first from a low yield rate of 30%. With NV40 the yield rate will perhaps become even harsher," says one industry source. If the yield rate is low, production cost rises. NV40 will perhaps be an expensive product for NVIDIA to make.
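
A back-of-the-envelope check of how far the 0.11 µm shrink can offset the transistor growth (ideal scaling only; the figures are the article's estimates):

    # Area scales with transistor count and, ideally, with the square of the
    # process linear dimension. Real layouts scale worse than this.
    shrink_factor     = (0.11 / 0.13) ** 2     # ~0.72x area at ideal scaling
    transistor_growth = 200e6 / 130e6          # presumed ~200M for NV40 vs 130M in NV35
    net_area_growth   = transistor_growth * shrink_factor
    print(f"ideal shrink {shrink_factor:.2f}x, net die growth {net_area_growth:.2f}x")
    # ~1.10x even under ideal assumptions, so growth of several tens of percent
    # in practice is plausible.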

Even so, NVIDIA may still take a strategic price. "NV40 will perhaps become a product with good cost/performance, because NVIDIA is positioning NV40 in the 300-350 dollar range," says one industry source. In other words, there is a possibility that NVIDIA will shave its margin and keep the NV40 price low. However, because such pricing strategies are fluid, it is not yet known which price range it will really land in.

- The shader architecture stays in the 2.x generation

The shader architecture of NV40 does not appear to be of the Shader 3.0 generation. Industry sources agree unanimously that it remains at Shader 2.0+, NVIDIA's extension of Shader 2.0.

This is also apparent from surrounding information. For example, Chas. Boyd of Microsoft (Graphics Architect, Windows Gaming & Graphics) explained at the game development conference CEDEC, held in Tokyo this September: "The earliest hardware with Shader 3.0 support appears in March of next year. The rest appears around fall. It depends on the vendor." In addition, Jason L. Mitchell of ATI (Project Team Leader, 3D Application Research Group) suggested at GDC (Game Developers Conference), held in March, that a more concrete, product-based discussion of Shader 3.0 will be possible at next year's GDC. The possibility is therefore high that ATI's R420/423, expected next spring, will be the first Shader 3.0 chip.

If NV40 is indeed not Shader 3.0, the likely reason is that its development started too early to be in time for the specification. In fact, one industry source calls ATI's next-spring plan "a rather ambitious schedule". The reason: at NVIDIA and similar companies, 18 months are needed for development plus 4 more to reach mass production, 22 months in total. Typically, 4 months are required to reach architecture definition, 7 months to RTL, 12 months to netlist, and 18 months to completion of the physical design and tape-out. If so, for a project that started in the second half of 2002, when the Shader 3.0 specification firmed up, next spring is the earliest possible timing.
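
Counting those milestones forward from an assumed start in July 2002 (the article only says "second half of 2002") gives a feel for why next spring is the earliest timing:

    # Month offsets from project start, as listed in the article.
    milestones = [
        ("architecture definition",     4),
        ("RTL complete",                7),
        ("netlist complete",           12),
        ("physical design / tape-out", 18),
        ("mass production",            22),
    ]
    start_year, start_month = 2002, 7          # assumed start date (H2 2002)
    for name, offset in milestones:
        m = start_month + offset
        year, month = start_year + (m - 1) // 12, (m - 1) % 12 + 1
        print(f"{name:26s} ~ {year}-{month:02d}")
    # Tape-out lands around January 2004 and mass production around May 2004.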

If NVIDIA hurried the development of NV40 without waiting for Shader 3.0, it is probably because it preferred to get a chip that raises performance on its existing architecture out as quickly as possible. If NV45 is not Shader 3.0 either, NVIDIA's Shader 3.0 GPU would be delayed until the fall of next year.

- PCI Express support through a bridge chip

In the NV40 generation, NVIDIA takes an architecture that supports PCI Express through a bridge chip. The GPU itself has an AGP 8X interface built in, and by connecting a PCI Express bridge to that AGP 8X interface it can work with PCI Express chipsets. The company intends to realize PCI Express support the same way in its mainstream products.

There is a trade-off in this method. The advantage is that PCI Express support can be realized quickly and easily. Because the same GPU can serve both AGP 8X and PCI Express, the GPU vendor can trim its product line-up, which simplifies development, production and management.

The drawback is that this approach cannot exploit the wide bandwidth of PCI Express x16. AGP 8X peaks at 2.1 GB/s, whereas PCI Express x16 delivers 4 GB/s per direction, or 8 GB/s bidirectionally. When a PCI Express x16-to-AGP 8X bridge is used, the bandwidth is effectively capped at the AGP 8X level.
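
The bandwidth figures follow from standard bus arithmetic (a quick check; the per-lane rate assumes first-generation PCI Express at 2.5 GT/s with 8b/10b coding):

    agp8x_gbs            = 66.66e6 * 8 * 4 / 1e9   # 66 MHz clock, 8x data rate, 32-bit bus
    pcie_x16_per_dir_gbs = 16 * 250e6 / 1e9        # 16 lanes x 250 MB/s per direction
    pcie_x16_bidir_gbs   = 2 * pcie_x16_per_dir_gbs
    print(f"AGP 8X:              {agp8x_gbs:.1f} GB/s")             # ~2.1 GB/s
    print(f"PCIe x16, one way:   {pcie_x16_per_dir_gbs:.1f} GB/s")  # 4.0 GB/s
    print(f"PCIe x16, both ways: {pcie_x16_bidir_gbs:.1f} GB/s")    # 8.0 GB/s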

In reality, however, first-generation PCI Express GPUs do not have enough internal processing performance to need the full PCI Express x16 bandwidth, so AGP 8X is expected to rarely become the bottleneck. In addition, because the AGP interface tying the GPU to the bridge chip sits on the board, the link between the two chips can be overclocked, which also makes it possible to raise the bandwidth.

GDDR3 has finally appeared on NVIDIA's product roadmap. In fact, at the VIA Technology Forum (VTF), the technical conference VIA Technologies held alongside COMPUTEX, a sample board described as an NVIDIA GDDR3 board was on display. Judging from the die size and other details, however, this chip is not NV40. Standardization of the GDDR3 specification is currently in progress.

In the NV3x generation, NVIDIA failed with its architecture choices for real-time CG. Can it win back the support of software developers in the NV4x generation?


NVIDIA's GDDR3 board displayed at VTF


□ Related articles
<May 27th> <Overseas> GPU transistor counts that far surpass the CPU
http://pc.watch.impress.co.jp/docs/2003/0527/kaigai01.htm
<April 25th> <Overseas> RADEON 9800 and GeForce FX 5800: which took the right road?
http://pc.watch.impress.co.jp/docs/2003/0425/kaigai01.htm
<June 13th> <Overseas> NVIDIA pushes ahead with Shader 3.0 and PCI Express x16 support
http://pc.watch.impress.co.jp/docs/2003/0613/kaigai01.

(October 3rd, 2003)

Xmas
2003-10-05, 15:05:30
Quite a lot of blather and speculation, a very questionable justification/conclusion that the chip doesn't support 3.0 shaders, and nothing new apart from the 0.11 µm process.

Richthofen
2003-10-05, 15:27:36
Yes, but the 0.11 bit actually wasn't new to me either. Somebody already said so at some point, I just don't know whether that was Uttar or someone else.

I'm having real trouble making sense of the text. The translation is slightly chaotic :)

Mond
2003-10-05, 15:50:00
Originally posted by Richthofen: The translation is slightly chaotic :)

You weren't the only one who had trouble with that. :help: :D

Gast
2003-10-05, 16:45:16
Tip: just switch to fuzzy logic...
The gist is fairly easy to grasp =)

And the pictures and DDR3 were new to me, at any rate!

up

Xmas
2003-10-05, 16:59:34
The pictures are merely dummy cards from Micron for the GDDR3 presentation. Just so nobody here thinks that's what the upcoming cards will look like ;)

Ailuros
2003-10-06, 03:17:21
Originally posted by Xmas
Quite a lot of blather and speculation, a very questionable justification/conclusion that the chip doesn't support 3.0 shaders, and nothing new apart from the 0.11 µm process.

You also have to be able to speculate properly. I haven't read this much nonsense on a single page in a long time.

.11 is for NV45 and not NV40, as far as I know. The 3.0 shader stuff is absolute nonsense from that samurai.