3DCoat Forums

Graphics card(s)...


joa1


  • New Member

Greetings everyone,

Just joined, looking into 3D Coat, evaluating, etc.

The apparently recommended graphics cards are:

"Radeon 9200/Nvidia 5600 128Mb (256 recommended) or better"

Can someone define "better"?

Radeon 9700/9800 Pro, Nvidia FX5900 Ultra?

The new Radeon 9600 (64bit)?

Are there limitations/compatibility issues?

(Sometimes bigger is not better if something does not work...)

Many TIA for any feedback,

joa1


I can recommend:

NV 9800GT, 512 MB-1024 MB: low price ($110-130) and good enough performance

but of course, if price is less of a constraint, it is better to choose

NV 9800GTX

NV GTX 260/280

I strongly DON'T recommend:

NV 8400/9400/8500/9500

They are very cheap, but their CUDA performance is really poor.

But remember: upgrading the video card sometimes requires upgrading the power supply to 450-500 W.


  • Advanced Member

Guys ... Andrew ... you recommend a 512MB card, but you don't say what the upper polygon limit for that card will be.

I believe there is an upper limit because of the 512MB aspect of the card. After all, a display list surely has to be on the card all at once, right??? I mean, you don't "STREAM" a display list to a card.

Someone help me out here. This question has been bothering me. I want to buy a new card; I just want to know how to predict its performance.


Of course, a new card is optional; 3DC can work on older cards. I recommended these cards only for someone who wants to replace a very old card with a new one.

You have an 8500; it is enough to work with (it is better to turn off CUDA usage), but a 9800 improves performance at least 3x compared to the 8500 (32 processors in the 8500 vs 112 processors in the 9800 GT, plus much bigger memory bandwidth).

The performance improvement will be visible in voxel brushes and camera navigation, but object merging/quadrangulation will be unchanged in speed.

And I always recommend more video memory on board, because there is a very small price difference between 512 and 1024 MB.

And as was said before, it is much cheaper to upgrade the video card than to buy a new system. A video card is always useful: in 3D apps, games, and Vista performance.

And one more thing: I strongly recommend XP instead of Vista; Vista is much more sluggish.


  • Advanced Member

It's not about what was said before.

It's about what was NOT said before.

You don't have good statistics coming in from your user base (which must be growing).

You need to collect performance stats so you can help people spend their money wisely. If you have such stats, you are not sharing them in a meaningful way.

Before, you said "MEMORY IS CHEAP LIKE DUST". If I take this at face value, it means I need to either buy 4GB (since I only have 1.5GB of main system RAM) or borrow a machine. I chose the latter. Yesterday I tested my 17,000,000-polygon scene on a 64-bit machine with 8GB RAM. The performance was NOT much better (just a little). This makes me wonder if the graphics card is the bottleneck. It might very well be.

** I don't see a spot on your forum where I can get a feeling for how many polygons to expect (A VOX IS RENDERED) from NVIDIA 256MB, 512MB and 1GB cards. I'd like to see this kind of information collected from the field of users, then reviewed and certified by someone with experience, and most importantly, organized into a meaningful, easy-to-understand chart.

I still crash when assembling a scene; about 4 out of 5 tries at assembling a scene end in a 3DCoat crash. I go to Task Manager and kill it. And I ain't no dope when it comes to analyzing bottlenecks. The problem is that I just don't have the system resources and hardware needed to do all the empirical testing. If I did, I would.

Let's get all the performance-based threads in one spot, get them organized, and recruit some good analytical testers to help write a better F.A.Q. for VOXEL sculpt.


We plan huge performance testing using NVidia's testing lab (maybe hundreds of PCs), and we will get and publish many statistics.

I hope it will happen soon; I think it will be what you need.

But the expected count of polygons is not very dependent on the video card; the video card gives you speed. The count of polygons depends on RAM. I really have no statistics about polygons vs RAM, but I will try to gather some when my schedule is not so tight.


  • Advanced Member

-----------------------------------------------------

But the expected count of polygons is not very dependent on the video card; the video card gives you speed. The count of polygons depends on RAM. I really have no statistics about polygons vs RAM, but I will try to gather some when my schedule is not so tight.

----------------------------------------------------

Let's try to explore the implications of what you just said above.

1) Are you using display lists in OpenGL? I assume you must be. If you are, then am I to infer that a display list can be streamed to an OpenGL / NVidia card??? Somehow that does not sound likely to me, but I'm no OpenGL expert; it's just a guess based on intuition, which might be flawed.

I know that a video card must convert OpenGL commands to a pixel frame buffer on the card. But one would think that the pixel frame buffer is not filled in the following manner: polygon count gets huge, so most video card RAM is filled; polygons are converted to pixels; more polygons are loaded to the card, and more pixels are created; finally, the 2D frame buffer is dumped to the display. This seems too complicated to me, Andrew. I think the entire scene is on the card at the same time, which means there is a polygon limit. Prove me wrong. Send me an educational link.

Do you really know how this works at a hardware level??? A fairly advanced question, I'd say, but perhaps an important one if the 10, 20, 30 million polys are filling my graphics card RAM. From your reaction, I'd have to assume you believe the polygons are streamed into the frame buffer, so that a display list or "set" of polygons in a scene doesn't have to all be on the card at one time. Personally, I won't believe it until I see it in writing from a card manufacturer. I believe that all polygons in a display list, or its DirectX equivalent (scene), would be on the card at the same time.

Video card frames per second is an entirely different problem in my mind. But where can I get an intelligent description of these processes??? Maybe I can't.

The typical solution for developers as well as users is: TOSS MORE $$$ at the problem. Buy new hardware. Well, this is a problem of finding the "WEAKEST LINK" in a complete hardware package. I still don't know if it is the graphics card. Perhaps the bug reports and the call stacks I send you mean something. I hope so.

I am working VERY HARD at trying to discover the source of my frustrations with the Coat. I go from elation and joy to frustration, as should be the case. THIS IS ALPHA TESTING. So please bear with me as I work through my hardware issues.


  • Advanced Member

Mine is not too slow in frames per second. I can do 10 million polygons, or even 20 million, and get sufficient frames per second at the scene level. Then I turn off visibility for sub-objects to get down to maybe 8 million showing, with no shadows, and I'm OK for modelling.

GED ... thanks for sharing.

Mine is slow and crashy when it comes to doing MERGES, RES CHANGES, UNDOS, etc.: things where what's going on with memory is dynamic, but not so well understood at the user level. But my most recent problem is that this does not go away with 8GB RAM and a 64-bit machine. Still similar freeze problems and lack of progress-bar feedback, etc.

A couple of well-constructed progress bars might go a fair way towards eliminating some consternation. A few exist, but a bunch of progress bars are missing (particularly on importing OBJ files).


I really don't know what a display list is. I have not written the GL part of the renderer; it was done by OSX programmer Sergiy Krizhanovspy.

OK, you can estimate the usage of video RAM:

1 vertex = 24 bytes

approx. 2 triangles per vertex

1 triangle = 6 bytes

So 1 vertex costs approx. 24 + 6 + 6 = 36 bytes.

128 MB = 3 728 270 vertices

Also, some memory is required for textures, the display, the depth buffer, and the shadow map (2048 x 2048).

Also, if some vertices cannot be placed in video RAM, they will be placed in usual RAM and there will be swapping (at least for DirectX; I recommend using DirectX for non-Quadro cards).

If you are using incremental render, only a small area is rendered during painting; in this case the scene can be rendered even if not all polygons are placed in video RAM (the whole scene is split into many small primitives of < 65k vertices). So in any case RAM is the biggest limitation, at least theoretically.
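As a rough sanity check, here is a minimal sketch (my illustration, not 3DCoat's actual code) that turns the arithmetic above into a capacity table; the 36 bytes per vertex and the < 65k-vertex chunks come straight from the estimate above, 65,536 being the natural limit of a 16-bit index buffer:

```cpp
#include <cstdio>

int main() {
    // Figures from the estimate above: ~24 bytes per vertex
    // (e.g. position + normal as six floats), plus ~2 triangles
    // per vertex at ~6 bytes each (three 16-bit indices).
    const double bytesPerVertex = 24.0 + 6.0 + 6.0;  // = 36 bytes

    const int vramSizesMB[] = {128, 256, 512, 1024};
    for (int mb : vramSizesMB) {
        double budget   = double(mb) * 1024 * 1024;
        double vertices = budget / bytesPerVertex;
        // The scene is said to be split into primitives of < 65k
        // vertices -- the limit of a 16-bit index buffer (2^16).
        double chunks   = vertices / 65536.0;
        std::printf("%5d MB VRAM -> ~%.0f vertices (~%.0f chunks of 65k)\n",
                    mb, vertices, chunks);
    }
    return 0;
}
```

For 128 MB this reproduces the ~3.7 million vertices quoted above; textures, the depth buffer, and the shadow map reduce the real budget further.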


  • Advanced Member

OK ... JokerMax ...

Andrew says that:

128 MB = 3 728 270 vertices

So this probably means that LOTS AND LOTS of 3DCoat VOX users are having memory swapped in and out of the video card's RAM. This seems like a VERY huge problem for users. I'd caution any user against developing a workflow where video RAM gets paged in and out in order to support large display lists. But how is a user to become aware that this is happening??? Like GI Joe said, knowing is half the battle (and maybe not the easy part).

Furthermore, even if a video card can (in the generic case) do this kind of paging, I'd be pretty doubtful about how well it would work. It seems like a bad idea.

I think Andrew should take the time to investigate what a display list is and maybe report back his findings regarding the wisdom of paging display lists into video RAM a bit at a time. It just seems like a horrible idea to me. I'll try to get more information myself to bring back here. So far, the pickings are slim.

The big question is: can your OpenGL programmer identify and flag when a display list is so large that it would have to be streamed to the card in parts in order to build a frame of video? If he can, then displaying some kind of warning might be a great idea.

TEXT IS CHEAP ... LIKE RAM. So I type a lot ;-)


-----------------------------------------------------

Mine is slow and crashy when it comes to doing MERGES, RES CHANGES, UNDOS, etc.: things where what's going on with memory is dynamic, but not so well understood at the user level. But my most recent problem is that this does not go away with 8GB RAM and a 64-bit machine. Still similar freeze problems and lack of progress-bar feedback, etc.

----------------------------------------------------

That is a question of program stability, not really a video RAM issue.

Increase Res seems to take more memory than needed at the intermediate stage. It can be tweaked and improved.

Also, it is important: have you set /3GB in boot.ini?
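(For anyone unsure where that switch goes: on 32-bit Windows XP it is appended to the OS entry in C:\boot.ini. The entry below is only a typical illustration; the disk/partition path varies per machine.)

```
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /3GB
```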


  • Advanced Member

-----------------------------------------------------------------------------

That is a question of program stability, not really a video RAM issue.

Increase Res seems to take more memory than needed at the intermediate stage. It can be tweaked and improved.

Also, it is important: have you set /3GB in boot.ini?

----------------------------------------------------------------------------

Andrew ... if any single part of the system resource chain were "maxed out", I'd expect pretty odd behavior, including instability. Why would video RAM paging not lead to instability???

A display list is a batch of OpenGL commands needed to construct a frame of video. In a perfect world, my display lists would not be paged or shared across many separate memory subsystems.
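(For readers following along, here is a minimal sketch of the legacy OpenGL display-list API being debated. This is generic OpenGL, my illustration only; per Andrew above, 3DCoat's renderer was written by someone else and may not use display lists at all.)

```cpp
#include <GL/gl.h>

// Record a triangle into a display list once; the driver decides where
// the compiled list actually lives (video RAM or system RAM).
// Assumes a GL context has already been created by the windowing code.
GLuint buildList() {
    GLuint list = glGenLists(1);      // reserve one list name
    glNewList(list, GL_COMPILE);      // record commands instead of drawing
    glBegin(GL_TRIANGLES);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
    glEndList();                      // stop recording
    return list;
}

void drawFrame(GLuint list) {
    glCallList(list);                 // replay the recorded commands
}
```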

In order to figure out what is wrong, sometimes one has to suspect and point a finger at a lot of things that are not wrong (this is called working through one's own confusion). You know what I mean.

I just did a short search on how folks feel about video cards stealing system RAM. It seems I'm not the only paranoid one. It may very well be that lots of video cards do this, but that does not mean it should be considered best practice to allow it to happen.

I did set the /3GB setting.


  • 2 weeks later...
  • Member

I just joined as well. I am considering getting a new card, if my motherboard allows it. I would like to know: do I have to upgrade to a DirectX 10 card to take advantage of voxels?

Here are the current specs of my system:

Motherboard: Asus P5B-E P965 775 (I believe it's PCI Express 2)

RAM: 8 GB

Video card: Gigabyte Nvidia 7900 GT (was a pretty high-end card when I bought it)

CPU: Intel Core 2 Duo 6600

Power supply: 580 watts (this seems to be important for the bigger graphics cards).

Which of the high-end Nvidia cards run cool and offer good performance? I'd prefer a less noisy GPU if possible.


  • Advanced Member

DirectX 10 is not required for voxels, and I don't know if there's any advantage; maybe just give the alpha a try and see whether you think you need a new gfx card? I think the 9800GT is fairly well priced, and as for the fan, I think it depends which brand of 9800GT you buy.


  • Member
-----------------------------------------------------

DirectX 10 is not required for voxels, and I don't know if there's any advantage; maybe just give the alpha a try and see whether you think you need a new gfx card? I think the 9800GT is fairly well priced, and as for the fan, I think it depends which brand of 9800GT you buy.

----------------------------------------------------

My current GPU doesn't support CUDA. Since my mobo was made several years before the 9800 GT, would it even run a 9800GT? Or does PCI Express 2.0 not really care, as long as the card uses the same slot? I don't think Asus is creating updates to my BIOS, if that's necessary.


  • Member

First, the title of this thread has "Bestest", which isn't a word. It should be "Best".

Aside from that, looking at some things online, I have found some useful information for everyone.

Here on this page is a list of cards that support CUDA 2.0 (CUDA 2.1 is out, but this should still be good):

http://www.gpugrid.net/forum_thread.php?id=316

I did not check this resource, but at the bottom of the first post the author gave his sources.

I run 2 different cards. One is an ATI FireGL Pro V7700 and the other is an Nvidia Quadro FX 3700.

I have not run 3D-Coat on the Quadro yet.

Here is a list directly from Nvidia's site of CUDA-supported cards:

http://www.nvidia.com/object/cuda_learn_products.html

For those interested, you can get a cheap Quadro board for under $200:

http://www.futurepowerpc.com/scripts/produ...PB&REFID=PW

Just go to www.pricewatch.com and type in whatever board you're interested in.

I would suggest at least an FX 1400-series board or later, as they perform much better than the lower end.

My Quadro FX 3700 will outperform my other card, which is a dual-GPU board! (XFX GeForce 9800 GX2)

So when I get a chance to test some stuff, I will post the results.


  • Member

I am thinking of upgrading my video card, and I have looked into this new CUDA technology. While some cards support CUDA, they may underperform your CPU.

Here is what I have found. I could be wrong.

Each CUDA-enabled card has at least one multiprocessor made up of 8 processors, with 8192 registers, capable of running 768 threads. Some cards have many multiprocessors and can run many more threads.

Here are some cards with their number of multiprocessors:

The worst, 1 multiprocessor: GeForce 9300M GS, 9200M GS, 9100M G, 8400M G; Quadro FX 370M, NVS 130M

2 multiprocessors: GeForce 8500 GT, 8400 GS, 8400M GT, 9500M G, 9300M G, 8400M GS, 9400 mGPU, 9300 mGPU, 8300 mGPU, 8200 mGPU, 8100 mGPU; Quadro FX 370, NVS 290, NVS 140M, NVS 135M, FX 360M

4 multiprocessors: GeForce 9500 GT, 8600 GTS, 8600 GT, 9700M GT, 9650M GS, 9600M GT, 9600M GS, 9500M GS, 8700M GT, 8600M GT, 8600M GS; Quadro FX 1700, FX 570, NVS 320M, FX 1700M, FX 1600M, FX 770M, FX 570M

6 multiprocessors: GeForce 9700M GT; Quadro FX 2700M

8 multiprocessors: GeForce 9600 GT, 8800M GTS, 9800M GTS

12 multiprocessors: GeForce 8800 GTS, 9600 GSO, 8800 GS, 8800M GTX, 9800M GT; Quadro FX 4600, FX 3600M

14 multiprocessors: GeForce 9800 GT, 8800 GT, 9800M GTX; Quadro FX 3700

16 multiprocessors: GeForce 9800 GTX, 9800 GTX+, 8800 GTS 512, 8800 Ultra, 8800 GTX; Quadro FX 5600, FX 3700M; Tesla C870

2x16 multiprocessors: GeForce 9800 GX2; Quadro Plex 1000 Model IV; Tesla D870

24 multiprocessors: GeForce GTX 260

30 multiprocessors: GeForce GTX 280

The best, 4x30 multiprocessors: Tesla S1070

This is just a rough guide, but it shows how different some of the cards are. I'm sure that bandwidth and memory change things as well.
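If you want to check your own card rather than trust a chart, a small query of the CUDA runtime reports the multiprocessor count directly. A minimal sketch (my addition; compile with nvcc and a CUDA toolkit installed):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // multiProcessorCount is the figure listed per card above;
        // each multiprocessor of this era runs up to 768 threads.
        std::printf("%s: %d multiprocessors, %zu MB VRAM\n",
                    prop.name, prop.multiProcessorCount,
                    prop.totalGlobalMem / (1024 * 1024));
    }
    return 0;
}
```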

I also read that if your card is in SLI mode you will not benefit from the extra card, so a 9800 GX2 will appear as 16 multiprocessors in SLI mode rather than 32.

Hope this helps.

***edit*** the Tesla might be a computer without a CPU rather than a card; I'm not sure.


  • Member

I would like the GTX 280 to find out ;) It's a wild stab in the dark, but I doubt it would be a 100% gain. In most reports the new GTX cards perform around 20% better than the series 8 & 9 cards. Once again, that is pure conjecture, so know that I could be totally wrong. It would be able to handle almost 2x as many threads but might be limited by other factors.

I settled on a used 8800 GT for $100 CDN.


  • Member
-----------------------------------------------------------------------------

Funny that you should mention the GTS 250. I just took delivery of this model yesterday afternoon:

EVGA GTS 250

So far so good! :D

P. Monk

----------------------------------------------------------------------------

http://www.newegg.com/Product/ShowImage.as...rd%20-%20Retail

I was thinking of the Gigabyte one. It looks like it might be quiet, with good airflow, and pretty cool temperature-wise. My current GPU is a Gigabyte, with a Zalman fan.


  • 6 months later...
  • Member

In this thread, I did not see the GeForce GTX 295 addressed. I'm thinking of getting one, since it has 2 chips at 240 cores each (480 total). It seems like an ideal candidate for 3D Coat; my only concern is that the drivers are so new that they may not currently be compatible with 3D Coat. If anyone knows the status of this, I'd much appreciate it.

Also, if I currently have an 8800, should I expect significant or marginal improvements when upgrading to the GTX 295 or 280?

thanks,

-j


  • Member

First post!!!

I am really impressed by the voxel technology; I think it's revolutionary and will possibly change how CG models are made. The thing is that on my Q6600 (quad core), 4GB of RAM, GeForce 8500, Ubuntu x64, it's almost impossible to sculpt naturally with voxels. I find it too unresponsive to be useful (comparing it with ZBrush; I know it's a different thing, still). I have also tried it with a GeForce 9600 on a Windows box, with the CUDA stuff, and I'm still not satisfied with its performance. Is it just me?

Laying out some spheres and trying to soften them with the smooth tool was just a pain. BUT I surely believe in the technology, so if spending some money will definitely change the performance of the app, I'm willing to give it a try.

I can't afford a GeForce GTX 295, or the latest Quadro (by the way, nobody talks about using 3DCoat with Quadro technology), but maybe a 9800 GTX. So has anybody tried 3D Coat with different graphics cards? Will upgrading my graphics card certainly boost the performance, or just a little? On ZBrush, I've read, performance has nothing to do with the graphics card...

Anyway, that comment about sculpting voxels like cutting butter, which I read somewhere here on the forum, made me wonder. I have also read guys on the ZBrush forums talking about how wonderful 3DCoat's voxel sculpting is. So, am I missing something? Or maybe it's just the way it is?

If it's a new card that I need, I'll go get one. :blink:

