Draginol

The future of Galactic Civilizations

The good news is that there will be a Galactic Civilizations III.  The bad news is that it won't be out this decade.

Galactic Civilizations II v2.0 is currently in development; it will be free for all players of the expansion packs, and it will serve as the code base for any further GalCiv II updates.

Right now, the team is working on the "unnamed fantasy strategy game," sometimes called "not-MOM" (not Master of Magic).  It uses a totally new graphics engine that takes advantage of multi-core CPUs and GPUs but will still run fine on lower-end hardware, thanks to built-in detection that determines "how much stuff" to display in real time.
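Purely as an illustration (the thresholds, detail scale, and frame budget below are invented, not Stardock's actual scheme), that kind of real-time "how much stuff" detection often amounts to watching recent frame times and nudging a detail level up or down:

```python
# Hypothetical sketch of real-time detail scaling: if recent frames run over
# budget, draw less "stuff"; if there's lots of headroom, draw more.

TARGET_MS = 33.0            # ~30 FPS frame budget (assumed)
MIN_DETAIL, MAX_DETAIL = 0, 5

def adjust_detail(detail, recent_frame_ms):
    """Return a new detail level based on the average recent frame time."""
    avg = sum(recent_frame_ms) / len(recent_frame_ms)
    if avg > TARGET_MS * 1.2:            # consistently over budget: cut detail
        return max(MIN_DETAIL, detail - 1)
    if avg < TARGET_MS * 0.6:            # plenty of headroom: add detail
        return min(MAX_DETAIL, detail + 1)
    return detail                        # close enough: leave it alone

print(adjust_detail(3, [50.0, 48.0, 52.0]))  # over budget -> 2
print(adjust_detail(3, [10.0, 12.0, 11.0]))  # headroom -> 4
```

A real engine would make this decision per subsystem (particles, unit counts, shadow quality) rather than with one global knob, but the feedback loop is the same.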

That game will go into public beta early next year, and its release date will be largely based on player feedback.  As many of you know, we are in the position of being able to keep working on our games until everyone's happy with them. The non-game side of the company does so well that there's no pressure. We want to make it the best turn-based strategy game of all time.

THAT engine will serve as the basis for a future Galactic Civilizations III.  That means GalCiv III will have features like tactical battles (as an option), multiplayer, more sophisticated planetary development, and much more.

329,236 views 95 replies
Reply #26

Question, are you looking for any sort of concept art design for units? I know someone local to your area who'd be suited for it.
You can have them send in a resume, but what we're really looking for is people who can model/texture/animate as well as draw traditionally. We need to physically expand the games team's area, but after that we'll be hiring quickly.

Also, when you say multi-core, how will it take advantage of that? Same way GCII does by applying one core to the AI?
AI would be a thread. We just got graphics into its own thread. Other libraries that we use already run in their own threads, so as the number of processors increases, the game should scale automatically.
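The structure described here, one thread per heavyweight subsystem, can be sketched like this (Python stand-in; the subsystem workloads are illustrative, and a real engine would be native code):

```python
import threading

# Sketch of subsystem-per-thread scaling: AI and rendering each get their own
# thread, so the OS scheduler spreads them across cores automatically.

results = {}

def ai_turn():
    # Stand-in for an expensive AI computation.
    results["ai"] = sum(i * i for i in range(1000))

def render_frame():
    # Stand-in for the render thread's work.
    results["render"] = "frame drawn"

threads = [threading.Thread(target=ai_turn),
           threading.Thread(target=render_frame)]
for t in threads:
    t.start()
for t in threads:          # the main (game logic) thread waits for both
    t.join()

print(results["render"])
```

The appeal of this design is that it needs no explicit core count: two busy threads use two cores if two are available, and degrade gracefully to time-slicing on one.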

Make use of Shader Model 3.0, or better yet 4.0, too
Yeah, Shader 3.0 is what most of our code is written for. There's some cool stuff in SM4 we'll make use of, and we NEED to support SM2 (turn-based games don't hold the same hardware clout as FPSes), but in the end we're shooting for it to look great across the board.
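As an illustration only (the real renderer's code paths and version checks are not public; the path names below are made up), a fallback chain like the one described, SM3 as the main target, SM4 extras when present, SM2 as the floor, might look like:

```python
# Hypothetical shader-model fallback chain. Version numbers are compared as
# floats for simplicity; a real renderer would query device capabilities.

def pick_render_path(supported_sm):
    """Choose a render path from the highest shader model the card supports."""
    if supported_sm >= 4.0:
        return "sm3_path+sm4_extras"   # main path plus optional SM4 effects
    if supported_sm >= 3.0:
        return "sm3_path"              # the primary target
    if supported_sm >= 2.0:
        return "sm2_fallback"          # reduced effects, still playable
    raise RuntimeError("below minimum spec (SM2)")

print(pick_render_path(4.1))  # sm3_path+sm4_extras
print(pick_render_path(2.0))  # sm2_fallback
```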

Reply #27

Yeah, Shader 3.0 is what most of our code is written for. There's some cool stuff in SM4 we'll make use of, and we NEED to support SM2 (turn-based games don't hold the same hardware clout as FPSes), but in the end we're shooting for it to look great across the board.

Yes, I totally agree about SM2.0; I know many friends who just can't afford a new card with SM3.0, let alone 4.0 or 4.1 support.

It would suck not to be able to play such games just because of the shader model requirements.

Reply #28

I know many friends who just can't afford a new card with SM3.0, let alone 4.0 or 4.1 support.

We tried to keep that in mind when picking the style...something that would look good with minimum system requirements. Can't wait to show you guys what we've been doing :)

Reply #29
Well, if the beta for the not-MoM starts up in early '09, that's gotta mean an official game announcement has to come this fall, right? Right? And then... with the announcement, screenshots?  :LOL:

I'm on the opposite side of the board. GalCiv is a great series (and I own all of the GC2 releases), but I can't play them for long, so I'm mostly looking forward to the not-MoM :)
Reply #30

Delurking...

 

Go Not-MoM, go!

 

Lurking again...

Reply #31

Add multiplayer to GC3 only if you are sure you won't have to sacrifice any cool features for single player or the game in general.

I agree 100% with that.

I also think that your team has made the right decision for GCIII (coming after the new fantasy game). GC III will be all the better for it!!

Regards,

P.S. Sorry if my English is not good.

 

Reply #32

Can't wait to show you guys what we've been doing

Can't wait to actually play, or maybe beta test, all the nifty new things. The good thing is, I like every theme, and Not-MoM will be very interesting.

I agree 100% with that.

I also think that your team has made the right decision for GCIII (coming after the new fantasy game). GC III will be all the better for it!!

Regards,

P.S. Sorry if my English is not good.

Since you did understand all my crap :) your English should be fine. I'm German and usually have a lot of grammar issues or typos.

 

Reply #33

Actually, I think having a multiplayer component can only enhance the single-player AI.

 

That way, when early testers start to play with each other, they can more easily come up with good strategies that can later be coded into the AI. Also, having the AI open to modding would be even better.

 

It worked great for Civ4.

Reply #34

Yes, totally agree with s2.0, i know many friends who just can't afford a new card with s3.0, let alone 4.0 or 4.1 support.

Shader 3 - that's DirectX 9, right? That's the GeForce 6 and 7 series - you should be able to get one for $20.

But hey - if they support shader 2, that's great :). The more systems it supports, the better.

AI would be a thread. We just got graphics into its own thread. Other libraries that we use already run in their own threads, so as the number of processors increases, the game should scale automatically.

Well, a possible issue with this method is that some threads may barely be using the cores, while others might saturate the core(s). You may get uneven usage, and if there are a lot more threads than cores, overhead may become an issue.

For maximum performance and minimum overhead, the ideal number of threads is the number of cores the system has, and each thread should have the same workload as the other threads.
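That rule of thumb, one worker per core with an even workload, can be sketched like this (Python stand-in; `split_evenly` is a hypothetical helper written here for illustration):

```python
import os
from concurrent.futures import ThreadPoolExecutor

# One worker per available core, each given an equal-sized slice of the work.

def split_evenly(items, n):
    """Split items into n chunks whose sizes differ by at most one."""
    k, r = divmod(len(items), n)
    chunks, start = [], 0
    for i in range(n):
        size = k + (1 if i < r else 0)
        chunks.append(items[start:start + size])
        start += size
    return chunks

workers = os.cpu_count() or 1          # thread count = core count
data = list(range(100))
with ThreadPoolExecutor(max_workers=workers) as pool:
    # Each worker sums its own chunk; the partial sums are combined at the end.
    total = sum(pool.map(sum, split_evenly(data, workers)))

print(total)  # 4950 regardless of the core count
```

The balancing matters as much as the count: if one chunk were much larger than the rest, that worker would become the critical path and the other cores would sit idle waiting for it.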

There's an interesting article at Ars Technica about how Valve decided to tackle multiple cores. Might be worth a look.

Reply #35

...how Valve decided to tackle multiple cores

No matter what Valve decided to use they still must heap the stack like everyone else - pipelining their ways to indirect leveling of call ops shouldn't (correct this -- CAN'T) raise the bar on available resources provided by whatever happens to connect with the target 'devices'.

 

But, i've seen engines so tricky (registry dispatch, off'top'of'my'silly'head) - current facts may just be old news -- since last i checked for the latest gimmicks.

Reply #36

The downside is with multiplayer, games get figured out a lot more quickly as well.

 

 

Reply #37
hmmm, I would be happy with the depth of Dominions III with modern graphics. Anything close to that and it has to be a winner. I can't wait.
Reply #38

No matter what Valve decided to use they still must heap the stack like everyone else - pipelining their ways to indirect leveling of call ops shouldn't (correct this -- CAN'T) raise the bar on available resources provided by whatever happens to connect with the target 'devices'.

"indirect levelling of call ops?"

Those words make perfect sense separately, but not in the way you put them together.

The article states, by the way, that the results were better than expected:

The end results of Valve's efforts were even better than they had initially hoped. Not only was the speedup on the four-core Kentsfield chips nearly linear in most cases (an average of 3.2 times over a single core) but having four CPUs instead of two made it possible to do things that simply couldn't be done with fewer cores.

Reply #39

Oh, but i agree with the assumption or proof that this 'simultaneous threading with FOUR target devices' is an excellent solution (given Valve has mastered the results, somehow) - it's just that *in principle at least* the entire experience has no formal application as of now (or even a predictable market until MUCH more home PCs enter the realm of quads or that they become common & widely supported)... never hurts to step ahead into good potential though. That's for sure and wishing outloud.

 

(I only meant when operands 'call ops' MUST be synchronized to be efficient enough... not far from what i used to see on *16* dispatched registries instead the a,d..x schemas!)

 

Besides, who's counting? The upcoming 128bits dream (technically 256 or 512 are still total fiction)? Or software engineers?

INTEL's best were certainly laughing at me in 86 when i was suggesting full 64 width pipelines on boiler plates near hell ratios. Time proved many wrong, including yours truly it seems. The tricky joke went really bad since, i say.

Reply #40

Shader 3 - that's DirectX 9, right? That's the GeForce 6 and 7 series - you should be able to get one for $20.

If I had to buy a card with Shader 3, I'd want at least a high-end 7800+.

But yeah, SM3 cards are actually cheap :)

 

 

Reply #41

Quoting CobraA1, reply 9

Shader 3 - that's DirectX 9, right? That's the GeForce 6 and 7 series - you should be able to get one for $20.

The problem is seldom the cost of the card; the problem is often a lack of compatibility with modern hardware. Imagine someone with a PC that has a single-voltage AGP 2.0 slot (many Pentium 3/Athlon-class machines have one). A new video card means a new mainboard, and therefore a new CPU and new memory. And if you do something like that, you often don't want to buy the cheapest of the cheap.

Still doable for someone with a normal salary by the way, but for some people, like young game players, it can become unaffordable.

Reply #42

Oh, but i agree with the assumption or proof that this 'simultaneous threading with FOUR target devices' is an excellent solution (given Valve has mastered the results, somehow) - it's just that *in principle at least* the entire experience has no formal application as of now

Well, for Valve it means possible speedups in their physics and graphics engines. Graphics and physics are easily parallelizable. You can easily split an image into separate sections, and each thread can perform computations on its own section of the image.

...and why are we discussing "call ops" anyway? I don't see the connection between calls and threading. Not all calls are threads or cross thread boundaries. In fact, you want as little sharing of memory/calls/whatever as possible, because synchronization can be expensive: as much as possible should remain inside the thread.
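The image-splitting idea above can be illustrated with a toy "image" of pixel rows. Each thread writes only to its own section, so there is no shared mutable state and no locking is needed, which is exactly the "keep everything inside the thread" point:

```python
import threading

# Toy image: a list of pixel rows. Each thread brightens only its own half,
# so no two threads ever write to the same row.

image = [[10, 20], [30, 40], [50, 60], [70, 80]]

def brighten(rows):
    for row in rows:
        for x in range(len(row)):
            row[x] += 5        # this thread owns these rows exclusively

mid = len(image) // 2
threads = [threading.Thread(target=brighten, args=(image[:mid],)),
           threading.Thread(target=brighten, args=(image[mid:],))]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(image[0], image[3])  # [15, 25] [75, 85]
```

Because the partitioning guarantees disjoint writes, this scales with the number of sections without any synchronization cost beyond the final join.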

Reply #43

No OpenGL(2.1/3.0) support? X(

Reply #44

Not all calls are threads or cross thread boundaries...

 

Just one silly 'clug' is too many, which is why multi-threading is an acrobatic attempt that requires extensive "monitoring" of conflicts... quite simply, a single re-allocation of some stack at the wrong nano-moment or shared location would collapse the entire advantages gained from synchronized accessing of specific memory. It's one thing to go fast, another to lag **because** CPU1 & CPU4 are battling for a memory call or even, double duty (wasting precious processing) on the exact same task.

Don't get me wrong, combining resources is an excellent function... all it takes is extremely precise coordination.

Reply #45

What OS(s) will be supported in both the Fantasy and GC3?

Reply #46


  But that will serve as the code base for any further GalCiv II updates. Right now, the team is working on the "unnamed fantasy strategy game," sometimes called "not-MOM" (not Master of Magic).  It uses a totally new graphics engine that takes advantage of multi-core CPUs and GPUs but will still run fine on lower-end hardware, thanks to built-in detection that determines "how much stuff" to display in real time.

 

I hope this "real time" was only meant casually, as it applies to the graphics. I was primarily excited about Not-MoM because it's a turn-based game.  We have more fantasy RTSes out there than you can shake a stick at... what we don't have is any good turn-based fantasy strategy game in recent memory, made by the last people to carry that torch on this platform with production values and quality design.

I hope this has not changed.  My heart would break.

Reply #47

Just one silly 'clug' is too many, which is why multi-threading is an acrobatic attempt that requires extensive "monitoring" of conflicts... quite simply, a single re-allocation of some stack at the wrong nano-moment or shared location would collapse the entire advantages gained from synchronized accessing of specific memory. It's one thing to go fast, another to lag **because** CPU1 & CPU4 are battling for a memory call or even, double duty (wasting precious processing) on the exact same task.

I believe you are attempting to refer to a race condition, and yes, this is a real danger on a multi-core machine. Mutexes are generally used to ensure only one thread can access the shared memory, and sometimes the shared-memory model itself is replaced with a message-passing model. Many functional languages use immutable objects, which avoids the problems associated with multiple writes to the same location.

Generally I try to avoid situations with shared memory, because you are correct: There is a large cost to using shared memory.
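A minimal sketch of the mutex approach described above (Python's `threading.Lock` standing in for a mutex): the lock ensures only one thread at a time performs the read-modify-write on the shared counter, so no updates are lost to the race the previous post worried about.

```python
import threading

# Four threads increment one shared counter; the lock serializes the
# read-modify-write so every increment survives.

counter = 0
lock = threading.Lock()

def bump(times):
    global counter
    for _ in range(times):
        with lock:             # mutex: one thread in the critical section
            counter += 1

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 -- no updates lost
```

This also shows the cost side of the argument: all four threads contend for the same lock, so the critical section runs effectively serially, which is why minimizing shared state beats locking it whenever possible.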

Reply #48

What OS(s) will be supported in both the Fantasy and GC3?

XP SP2 and above, I'd expect.

Reply #50

That would be highly unlikely, I would say.