Fun with threads
from
JoeUser Forums
We've been pretty busy here getting ready for Beta 3. The game has come a long way since Beta 2, and we still have miles (or should I say, milestones) yet to go. For the last two weeks, my primary responsibility has been fixing and tweaking stuff. One of the bigger issues that we've been dealing with almost since we started working on GC2 has been the load time.
The way that the interface and other graphics are created in GC2 is radically different from how it was done in GC1. In GC1, a screen was essentially a bunch of bitmaps stored in one file with modified headers. All the position and size information lived in those headers, and since each header was a fixed size, the bytes could be copied directly into a structure, so no parsing was needed. However, things like re-sizing the interface would have been a huge pain in the butt, if not impossible, because they were all 2D graphics. With GC2, we're using the DesktopX format to store the screen data and graphics, which gives us a lot more flexibility. However, it means we have to parse the file, load the textures, and do things like create vertex buffers. One screen doesn't take long by itself, but we have a lot of screens in GC2.
I discovered early on in the development of GC2 that you can't safely create graphics resources from a background thread. You end up with weird graphical errors or crashes if you try to create two textures at the same time in two different threads, or even if you just create graphics in one background thread while rendering in the main thread. I might have to try putting critical sections in the main render function and in the functions where textures are loaded, but I'm worried about creating deadlock situations: there will be 2-3 threads creating graphics while the main thread tries to render existing graphics. Joe had suggested creating a secondary device and having it create all the textures, but devices won't share textures even if they're created in managed memory (which means they're backed up in system memory). So, today I tried making it load the dxpacks into memory and parse them, but defer the texture creation until the screen is actually unhidden. This works fairly well, although it's caused a few problems that I'll have to fix before committing my changes. It doesn't make a lot of difference in debug mode, but in release mode it cuts the loading time of a gigantic galaxy down to about 5 seconds. It also makes the screens pop up a lot faster the first time (even in debug mode), because they don't have to set up all their controls and load the textures up front.
Saturday has a 70% chance of rain here, but as long as the storms don't knock out my power (and therefore my AC) I'll be happy.
Have a good weekend, everyone!
--Cari