
[SDL] How do you create SDL_Surface in video memory?

Started by Rhaal, January 31, 2005 12:37 PM
7 comments, last by carb 19 years, 7 months ago

typedef struct SDL_Surface {
  Uint32 flags;                           /* Read-only */
  SDL_PixelFormat *format;                /* Read-only */
  int w, h;                               /* Read-only */
  Uint16 pitch;                           /* Read-only */
  void *pixels;                           /* Read-write */
  SDL_Rect clip_rect;                     /* Read-only */
  int refcount;                           /* Read-mostly */

  /* This structure also contains private fields not shown here */

} SDL_Surface;
If flags is Read-only, then how do you create the surface in video memory? Currently, all my bitmaps are taking up way too much system memory.
- A momentary maniac with casual delusions.
I believe that you need to pass SDL_HWSURFACE to SDL_SetVideoMode, and then all the surfaces will be Hardware Surfaces. Someone correct me if I am wrong.

[edit: I'm assuming you're using something like SDL_LoadBMP or IMG_Load for image loading]
[edit: I meant to be saying Hardware, not Software]

[Edited by - wyrzy on January 31, 2005 4:11:31 PM]
How are you loading the surfaces? When calling SDL_CreateRGBSurface to create a surface you can set the flags to include SDL_HWSURFACE. If you're loading surfaces using SDL_LoadBMP you may be able to use SDL_DisplayFormat to convert it if you're using a SDL_HWSURFACE for your video surface.

Otherwise you could create a blank hardware surface using SDL_CreateRGBSurface with flag SDL_HWSURFACE, and use SDL_ConvertSurface to convert your loaded surfaces to that pixel format.

It's been a while since I used SDL however, so these might not be the best ways...

edit: Ok I explained that badly, ask if you need any clarification
the rug - funpowered.com
Well I have these in scope:
SDL_Surface* sfScreen = NULL;        // Main screen
SDL_Surface* sfBG = NULL;            // Background image
SDL_Surface* sfLOADING = NULL;       // Loading message
SDL_Surface* sfGAME = NULL;          // Game image


And here's where I'm loading my bitmaps...
void Init()
{
    // Initialize SDL components
    SDL_Init(SDL_INIT_VIDEO | SDL_INIT_TIMER);

    // Initialize the TTF library
    TTF_Init();

    // Init fonts
    ftForgotSm = TTF_OpenFont("data/fonts/forgot.ttf", 16);
    ftForgotLg = TTF_OpenFont("data/fonts/forgot.ttf", 36);

    // Setup the video mode
    sfScreen = SDL_SetVideoMode(WINDOW_WIDTH, WINDOW_HEIGHT, 0,
                                SDL_FULLSCREEN | SDL_DOUBLEBUF |
                                SDL_HWPALETTE | SDL_HWSURFACE);

    // Set the title of the window
    SDL_WM_SetCaption(WINDOW_CAPTION, 0);

    // Get the number of ticks since SDL was initialized
    gTimer = SDL_GetTicks();

    // Display the loading screen first thing
    // (load the bitmap before drawing it, or DrawBackground receives NULL)
    sfLOADING = SDL_LoadBMP("data/img/loading.bmp");
    DrawBackground(sfLOADING);
    SDL_Flip(sfScreen);

    // Load the other bitmaps into memory
    DrawBackground(sfLOADING);
    sfBG = SDL_LoadBMP("data/img/bg/bg1.bmp");
    sfGAME = SDL_LoadBMP("data/img/bg/game.bmp");
    SDL_Flip(sfScreen);

    // Fmod stuff
    FSOUND_Init(44000, 64, 0);
    Player.LoadSong("data/music/cor.mp3", "Champion of RON", "Shane (aka Vuxnut)");
    Player.LoadSong("data/music/tehtris.mp3", "Tehtris Tehme", "Shane (aka Vuxnut)");
    SDL_Flip(sfScreen);

    sTitle1 = Player.getTitle(0);
    sAuthor1 = Player.getAuthor(0);
    sTitle2 = Player.getTitle(1);
    sAuthor2 = Player.getAuthor(1);

    // Initialize the stack with the exit state
    StateStruct state;
    state.StatePointer = gsExit;
    gStateStack.push(state);

    // Add a pointer to the menu state
    state.StatePointer = gsMenu;
    gStateStack.push(state);
}


I'm not sure what a surface defaults to when using SDL_LoadBMP, but I'm assuming it's software for compatibility, and because of the amount of RAM my project is consuming. A lot of this is the music data, since I've not optimized anything yet. I'll look into the methods you mentioned. They seem to make sense.
- A momentary maniac with casual delusions.
Quote: Original post by Rhaal
If flags is Read-only, then how do you create the surface in video memory? Currently, all my bitmaps are taking up way too much system memory.


If you want the video display in actual video memory, you will need to pass SDL_HWSURFACE to your SDL_SetVideoMode call.

Quote:
Tip when you initialize the video in SDL (SDL_SetVideoMode()):

If you request SDL_SWSURFACE, then you get a video buffer allocated in system memory, and you must call SDL_UpdateRects() or SDL_Flip() to update the screen. SDL_Flip() calls SDL_UpdateRects(the-whole-screen) in this case. All allocated surfaces will be in system memory for blit speed.

If you request SDL_HWSURFACE, then if possible SDL will give you access to the actual video memory being displayed to the screen. If this is successful, the returned surface will have the SDL_HWSURFACE flag set, and you will be able to allocate other surfaces in video memory, which presumably can be blitted very fast. The disadvantage is that video memory tends to be much slower than system memory, so you don't want to write directly to it in most cases. In this case, SDL_UpdateRects() and SDL_Flip() are inexpensive noops, as you are writing to memory automatically being displayed.

If you request SDL_HWSURFACE, you may also request double-buffering by adding the SDL_DOUBLEBUF flag. If possible, SDL will set up two buffers in video memory for double-buffered page flipping. If this is successfully set up, then you will be writing to the non-visible back-buffer, and when you call SDL_Flip(), SDL will queue up a page flip for the next vertical retrace, so that the current video surface will then be displayed, and the front and back video buffers will be swapped. The next display surface lock will block until the flip has completed.

Sam Lantinga


That was from this page.

Now The Rug was correct in using SDL_CreateRGBSurface, for as the docs say:
Quote:
The following are supported in the flags field.

SDL_SWSURFACE Surface is stored in system memory
SDL_HWSURFACE Surface is stored in video memory
...


However, I do not think you really have a choice. Here's a quick example I've made from the docs:

void Test()
{
    /* Create 32-bit surfaces with the bytes of each pixel in R,G,B,A order,
       as expected by OpenGL for textures */
    SDL_Surface **swsurface, **hwsurface;
#define COUNT 128
    swsurface = new SDL_Surface*[COUNT];
    hwsurface = new SDL_Surface*[COUNT];

    Uint32 rmask, gmask, bmask, amask;

    /* SDL interprets each pixel as a 32-bit number, so our masks must depend
       on the endianness (byte order) of the machine */
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
    rmask = 0xff000000;
    gmask = 0x00ff0000;
    bmask = 0x0000ff00;
    amask = 0x000000ff;
#else
    rmask = 0x000000ff;
    gmask = 0x0000ff00;
    bmask = 0x00ff0000;
    amask = 0xff000000;
#endif

    int x = 0;
    for (x = 0; x < COUNT; x++)
    {
        swsurface[x] = SDL_CreateRGBSurface(SDL_SWSURFACE, 64, 64, 32,
                                            rmask, gmask, bmask, amask);
        hwsurface[x] = SDL_CreateRGBSurface(SDL_HWSURFACE, 64, 64, 32,
                                            rmask, gmask, bmask, amask);
        if (swsurface[x] == NULL || hwsurface[x] == NULL)
        {
            fprintf(stderr, "CreateRGBSurface failed: %s\n", SDL_GetError());
            exit(1);
        }
    }
    x--;

    printf("\nDefines for the surfaces\n");
    printf("SW: %#x\n", SDL_SWSURFACE);
    printf("HW: %#x\n", SDL_HWSURFACE);
    printf("\nCreated surfaces:\n");
    printf("SW: %#x\n", swsurface[x]->flags);
    printf("HW: %#x\n", hwsurface[x]->flags);

    for (x = 0; x < COUNT; x++)
    {
        SDL_FreeSurface(swsurface[x]);
        SDL_FreeSurface(hwsurface[x]);
    }
    delete [] swsurface;
    delete [] hwsurface;
}


What it does is try and create COUNT number of surfaces in HW mode as well as SW mode. Technically speaking, you only need to view the flags of the last item because, since it was the last one created, it would have the most verifiable results. I'll get back to that in a minute. However, if you run this, you may get an output similar to mine:

Defines for the surfaces
SW: 0
HW: 0x1

Created surfaces:
SW: 0x10000
HW: 0x10000
Press any key to continue


Now back to what I was saying about only needing to output the last one. If all of the surfaces were supposed to be created in HW mode and the card ran out of memory partway through, the later ones would fall back to SW mode. In that case the last surface would reflect the fallback, but the first one would not, since HW memory was still available when it was created.

I also tried using a large COUNT number to see if I could use up all my video memory so SW mode was a must. I have a 128 MB video card, so my count was 2133334, which is overkill, but it should definitely fill it up. I calculated this by 128MB / 60 bytes, the size of the SDL_Surface. Since I was making 64x64 surfaces x 2, I tried my count being 15625 and ran it at the same time.

I ran out of all system memory before it could complete, swap and physical. So I'm thinking that when you create the surfaces, regardless of whether it's in system or video memory, they will still take up system resources, so making them in HW video memory will not solve your problems - assuming that they aren't already being stored in video memory.

I was finally able to make COUNT 62500 and yet I still got the same results, both reporting the same flags. So either SDL does not report it right, or on Win32 you simply have no choice in where the surface goes. I had also tried this using SDL_SWSURFACE for the video mode as well as larger surfaces in memory, but I could not get different results.

- Drew
I can't find it right now, but I did a test on my machine for hardware and software surfaces. I could NOT create a hardware surface while in windowed mode. It may be video card specific or even Windows specific, I do not know, but using plain old SDL I could not get a hardware surface at all. While in fullscreen mode I got all of the hardware surfaces I requested, although I didn't check whether it took more or less system memory - that looks like a good thing to check next time. Also, you could probably simplify Drew's test above by creating X number of surfaces requesting hardware surfaces and just checking whether you get them or not, instead of creating a surface in both hardware and software. The only reason I say this is that SDL will do all sorts of backflips to get you a valid surface, at the price of using system memory instead of video memory, so the test for a software surface is kind of pointless.
Evillive2
The simple answer is that if you want video memory surfaces, construct your screen in video memory and then use SDL_ConvertSurface on everything you load in. evillive2's warning about hardware surfaces in windowed mode is important too (although whether this still applies on modern cards, I don't know).
When you call SDL_SetVideoMode, make sure you pass SDL_HWSURFACE to it. To convert other surfaces to the same format as the video surface, use SDL_DisplayFormat.
I just thought I'd add that I do my development on (a relatively recent) Mobile Radeon 9200, and it absolutely does NOT use hardware surfaces in windowed mode. I wouldn't count on them at all, and I've actually resorted to implementing a dirty blit rendering algorithm in order to squeeze decent performance out of SDL in software.

- carb
- Ben

This topic is closed to new replies.
