
Using UE4 to make a Second Life client - why are people failing?

Started by Nagle
4 comments, last by Nagle 4 years, 3 months ago

There have been several efforts to make a client viewer for Second Life using UE4. All have failed. No idea why. I'd like comments from UE4 experts. There are other third-party viewers, using much of the standard SL open source code, so it's not necessary to reverse engineer. Just implement.

Here's what an SL viewer has to do:

  • The SL world is divided into regions 256 meters on a side, each run by a separate sim process. The viewer connects to one region containing the player, and other nearby regions for read-only access to their info for display. Crossing regions requires a handoff from server to server. No portals, no shards; it's one big seamless world.
  • The servers and clients communicate mostly over UDP, with reliable and unreliable messages, much like UE4.
  • The game assets are all on web servers on AWS, front-ended by Akamai caches. Assets are mostly meshes, in an SL-specific and documented format, and textures, in JPEG 2000 set up for progressive loading from low to high resolution. All assets have a unique 128-bit ID, and textures and meshes never change without changing the ID, so you can cache aggressively (see the cache sketch after this list). Assets are delivered over ordinary HTTP. The viewer is told about all the assets within view range, and it's up to the viewer to decide what to ask for, in what order, and at what texture resolution and level of detail.
  • Some assets are well-optimized; some are not. Some are far more complex than they need to be. Assets are created by tens of thousands of different users, and there's very little duplication. The viewer needs to cope with all that.
  • The viewer doesn't know which objects are going to move. Potentially, any object can move. Even the terrain can be edited live. But mostly things are stationary. So a fast viewer needs to combine objects for faster rendering and be prepared to redo that work if someone opens a door or something.
  • The player can get into a vehicle and drive moderately fast, up to about 100 km/h, or fly an aircraft over the world. So the viewer may be forced to load assets constantly to keep up, while displaying a reasonably decent view of the world even when it hasn't caught up. Caching content locally is possible, but it can't hold everything; there are petabytes of content out there, spread over about 3,000 square kilometers of map.
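Since asset IDs are immutable (content never changes without the ID changing), the local cache mentioned in the list never needs to revalidate a hit. A minimal C++ sketch of that idea; the AssetID type and the fetch callback are placeholders of mine, not actual SL viewer code:

```cpp
#include <array>
#include <cstdint>
#include <functional>
#include <memory>
#include <mutex>
#include <unordered_map>
#include <vector>

// Placeholder for the 128-bit asset ID described above.
struct AssetID {
    std::array<std::uint8_t, 16> bytes{};
    bool operator==(const AssetID& o) const { return bytes == o.bytes; }
};

struct AssetIDHash {
    std::size_t operator()(const AssetID& id) const {
        std::size_t h = 14695981039346656037ull;            // FNV-1a offset basis
        for (auto b : id.bytes) { h ^= b; h *= 1099511628211ull; }
        return h;
    }
};

// Because an ID never maps to two different payloads, a cache hit never
// needs an If-Modified-Since round trip: cache forever, evict only by size.
class AssetCache {
public:
    using Blob  = std::shared_ptr<const std::vector<std::uint8_t>>;
    using Fetch = std::function<Blob(const AssetID&)>;      // e.g. a plain HTTP GET

    explicit AssetCache(Fetch fetch) : fetch_(std::move(fetch)) {}

    Blob get(const AssetID& id) {
        {
            std::lock_guard<std::mutex> lock(mu_);
            auto it = cache_.find(id);
            if (it != cache_.end()) return it->second;      // immutable: always valid
        }
        Blob blob = fetch_(id);                             // miss: go to the CDN
        std::lock_guard<std::mutex> lock(mu_);
        cache_.emplace(id, blob);                           // racing fetches are harmless
        return blob;
    }

private:
    Fetch fetch_;
    std::mutex mu_;
    std::unordered_map<AssetID, Blob, AssetIDHash> cache_;
};
```

The real problem isn't the lookup, of course; it's the prioritization policy for what to fetch first, which this sketch leaves entirely to the caller.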

The existing viewer is mostly single-threaded, OpenGL-based, and just too slow: it's always out of main-thread time, draws too little per draw call, and underutilizes the GPU. The code base is 15 years old.

Every attempt to seriously speed this up has failed. But nobody has gone all the way to multithreaded rendering on Vulkan yet, offloading as much to the GPU as possible. Is there machinery in UE4 for this kind of world, dominated by dynamic loading?


First: Unreal generally prefers pre-calculated geometry. Build levels in the editor, let the editor chew on all the assets, and spit out fully-optimized game levels.

You can build a scene of entirely movable actors with a variety of meshes. It's not what it's optimized for, but with modern destruction and modern multi-player worlds and modern real-time global illumination and shadows, it's slowly moving in that direction. There's also support for dynamically paging landscapes, re-centering the rendering/simulation origin, and so forth.

There's no reason you couldn't write a scene graph that presents the Second Life scenes, using Unreal Engine as the renderer. You would have to throw out a bunch of Unreal-specific things: use the Second Life streamed animations instead of the Unreal animation system, use dynamic/movable actors everywhere, and use Second Life networking rather than Unreal networking. But it can totally be done if you are a decent programmer and have enough time and wherewithal to keep at it.
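As a rough illustration of the "dynamic/movable actors everywhere" part, a UE4 C++ sketch; the conversion from the SL mesh format into a UStaticMesh is assumed to happen elsewhere, and this only shows the spawn path:

```cpp
#include "Engine/World.h"
#include "Engine/StaticMesh.h"
#include "Engine/StaticMeshActor.h"
#include "Components/StaticMeshComponent.h"

// Spawn one movable actor per Second Life object, at the transform
// streamed from the SL servers. The mesh is assumed to have already
// been converted from the SL mesh format into a UStaticMesh.
AStaticMeshActor* SpawnSLObject(UWorld* World, UStaticMesh* ConvertedMesh,
                                const FVector& Location, const FRotator& Rotation)
{
    AStaticMeshActor* Actor = World->SpawnActor<AStaticMeshActor>(Location, Rotation);
    if (!Actor) return nullptr;

    UStaticMeshComponent* Mesh = Actor->GetStaticMeshComponent();
    // Everything must stay movable: no baked lighting, no static batching,
    // because any SL object can move or be edited at any time. Mobility
    // must be set before SetStaticMesh, which rejects Static components
    // at runtime.
    Mesh->SetMobility(EComponentMobility::Movable);
    Mesh->SetStaticMesh(ConvertedMesh);
    return Actor;
}
```

Per-object actors alone won't scale to a dense SL region; UE4's instanced static mesh components are the usual next step, but SL's near-total lack of duplicated assets limits how much instancing can win back.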

And I think that's where the problem lies: The technology is old, the world is waning, and there's not enough available good-programmer-time to actually do what's needed to make it successful.

It might be easier to try to thread the renderer in the Second Life client; move all objects into a double-buffered system for updates, and hand off rendering to a second “pipelined” thread that traverses the scene graph and hands it off to the graphics card. This gives you more simulation time on the main thread. You'd probably have to copy all the important data for each render pass, though; it's unlikely that the scene graph and object data are structured to allow parallel access like that. Still, copying a bunch of data into a largely pre-allocated structure is likely faster than blocking everything on rendering each frame.
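A minimal sketch of that double-buffered handoff, with hypothetical names (the real viewer's per-object data would obviously be richer than this):

```cpp
#include <condition_variable>
#include <mutex>
#include <vector>

// Flat, render-only copy of one object: just what the draw pass needs.
struct RenderItem {
    float transform[16];
    unsigned meshId;
    unsigned textureId;
};

// Simulation builds a frame and swaps it in; the render thread swaps it
// out. The two vectors recycle each other's storage, so after the first
// few frames publishing is a copy into already-allocated memory.
class FrameExchange {
public:
    // Main thread, once per tick, after simulation:
    //   buildFrame(frame); exchange.publish(frame); frame.clear();
    void publish(std::vector<RenderItem>& scene) {
        {
            std::lock_guard<std::mutex> lock(mu_);
            pending_.swap(scene);           // O(1) handoff; old storage reused
            hasFrame_ = true;               // unread older frame is simply dropped
        }
        cv_.notify_one();
    }

    // Render thread loop:
    //   exchange.acquire(current); drawAll(current);
    void acquire(std::vector<RenderItem>& out) {
        std::unique_lock<std::mutex> lock(mu_);
        cv_.wait(lock, [this] { return hasFrame_; });
        out.swap(pending_);
        hasFrame_ = false;
    }

private:
    std::mutex mu_;
    std::condition_variable cv_;
    std::vector<RenderItem> pending_;
    bool hasFrame_ = false;
};
```

If the simulation outruns the renderer, stale frames are silently dropped, which is the behavior you want here: the render thread always draws the latest published state.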

enum Bool { True, False, FileNotFound };

hplus0603 said:

First: Unreal generally prefers pre-calculated geometry. Build levels in the editor, let the editor chew on all the assets, and spit out fully-optimized game levels.

You can build a scene of entirely movable actors with a variety of meshes. It's not what it's optimized for, but with modern destruction and modern multi-player worlds and modern real-time global illumination and shadows, it's slowly moving in that direction.

That's what I thought about Unreal: it's pre-computation that makes it fast. In a fully dynamic world, pre-computation has to become more of a just-in-time thing. Tim Sweeney, the Epic CEO, talks a lot about the “metaverse” being the next big thing, but apparently the features needed for high performance at that scale are not in UE4 yet.

Second Life is picking up a bit, by the way. For three years, Linden Lab was focusing on Sansar, their sharded VR world. It was a total flop: it got about 20 users on Steam and was discontinued last month. Sansar had a staff of about 100 at its peak, and Second Life, which is quite profitable, was being used as a cash cow to fund it. So now they're refocusing on Second Life. The basic problem with Second Life is that it's slow, and that needs to be fixed, somehow.

Nagle said:
to combine objects

Is there a benchmark showing how much faster combining objects makes the renderer? So we basically have
https://www.gamedev.net/forums/topic/193888-direct3d-retained-mode/

and store pointers to where our data ended up? When something is removed, we use these pointers to cut that object out of our large linear buffer, then use some memory-management techniques to defragment the buffer.
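As a concrete sketch of that pointers-plus-defragment idea (fixed-size records only; variable-size mesh data would need a true compaction pass, and all names here are hypothetical):

```cpp
#include <cstdint>
#include <vector>

// One big linear buffer of fixed-size records, plus stable handles.
// Removal swaps the last record into the hole, so the buffer stays
// dense without a separate defragmentation step, and iteration for
// rendering stays a linear sweep.
struct Record { float transform[16]; std::uint32_t meshId; };

class LinearPool {
public:
    using Handle = std::uint32_t;

    Handle add(const Record& r) {
        Handle h = freeHandle();
        slotOf_[h] = static_cast<std::uint32_t>(data_.size());
        handleAt_.push_back(h);
        data_.push_back(r);
        return h;
    }

    void remove(Handle h) {
        std::uint32_t slot = slotOf_[h];
        std::uint32_t last = static_cast<std::uint32_t>(data_.size()) - 1;
        data_[slot] = data_[last];            // move last record into the hole
        slotOf_[handleAt_[last]] = slot;      // fix the moved record's handle
        handleAt_[slot] = handleAt_[last];
        data_.pop_back();
        handleAt_.pop_back();
        free_.push_back(h);                   // handle number can be reused
    }

    Record& get(Handle h) { return data_[slotOf_[h]]; }
    const std::vector<Record>& dense() const { return data_; }  // for the renderer

private:
    Handle freeHandle() {
        if (!free_.empty()) { Handle h = free_.back(); free_.pop_back(); return h; }
        slotOf_.push_back(0);
        return static_cast<Handle>(slotOf_.size() - 1);
    }

    std::vector<Record> data_;           // the dense linear buffer
    std::vector<std::uint32_t> slotOf_;  // handle -> slot
    std::vector<Handle> handleAt_;       // slot -> handle
    std::vector<Handle> free_;
};
```

The cost of a removal is one record move and one handle fixup, which is why this layout tends to beat pointer-chasing scene graphs for cache-friendly iteration.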

Multithreading is more fun if you share the data at a fine level.

The future is probably Vulkan for cross-platform.

SL's scene graph has a transform at every node. Most of those change infrequently, but the current renderer issues a draw call per transform, loading a new transform each time, rather than pre-transforming geometry down into combined vertex arrays. Each object has its own textures; there are no multi-object texture atlases. So there's a lot of texture switching, an expensive operation in OpenGL. (I hear that's cheaper in Vulkan. Comments?)

What makes this hard is that any transform can change at any time (but usually won't), any texture can be reloaded at a different resolution (which happens infrequently per texture, but constantly as a scene loads or the viewpoint changes), and meshes are constantly being loaded at low levels of detail and then reloaded at higher levels as the network catches up.
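One common pattern for this kind of workload (a sketch under my own naming, not anything from the SL viewer): group objects into batches keyed by texture, pre-transform member geometry into one shared buffer per batch, and treat every change, whether a moved transform, a texture re-upload, or a higher-LOD mesh arriving, as a dirty flag that triggers a rebuild of just that batch before the next frame.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Vertex { float pos[3]; float uv[2]; };

// One batch per texture: everything inside is drawn with a single call
// from one pre-transformed vertex buffer, so texture switches and
// per-transform draw calls are amortized across many objects.
struct Batch {
    std::vector<Vertex> vertices;   // object-space verts pre-multiplied to world space
    bool dirty = false;             // set when any member object changes
};

class BatchedScene {
public:
    // Any change (transform edit, texture resolution change, new LOD)
    // just invalidates the batch containing the object.
    void invalidate(std::uint32_t textureId) { batches_[textureId].dirty = true; }

    // Called once per frame before drawing: rebuild only dirty batches.
    // RebuildFn: void(std::uint32_t texId, std::vector<Vertex>& out)
    template <typename RebuildFn>
    void flush(RebuildFn rebuild) {
        for (auto& [texId, batch] : batches_) {
            if (!batch.dirty) continue;
            batch.vertices.clear();
            rebuild(texId, batch.vertices);   // re-transform members into the buffer
            batch.dirty = false;
        }
    }

private:
    std::unordered_map<std::uint32_t, Batch> batches_;
};
```

The steady-state cost collapses to one draw and one texture bind per batch, while an edit costs a rebuild of a single batch instead of a per-object draw call on every frame forever after.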

Is there an existing rendering system designed to deal with all those dynamic changes and with content not pre-optimized?

