
Lighting: Inside faces are getting lit too?


Hello again!

I have implemented a basic Phong model, but I noticed that the inside faces are getting lit too. It seems to me that the normals point the same way on the inside as on the outside, and that is why this is happening. Take a look at the following video.

 

From the outside it looks right, but when I look inside the cube, the front face that is lit from the outside is also lit from the inside. I'm 99% sure this is because the inside face uses the same normals as the outside one, since in my vertex data the normals are initialised for each face (6 in total), not for 12 (outside and inside separately). Do I have to create 72 vertices: 36 for the 6 outside faces and 36 for the inside faces with different normals?

In the specular calculations, don't be surprised by this:


vec3 viewDirection   = normalize(fragPosition - viewPos);

The viewPos is actually the front vector of the camera, not its position, and it is relative to the view coordinate system, not the world (I'm doing the lighting calculations in world coordinates). Instead of transforming viewPos (the camera's front vector) into the world coordinate system, I just moved it mentally from view space to world space (like we learned in maths) and came up with the calculation above, which gives me the appropriate vector for getting the correct angle between the view direction and the reflection of the light.
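For comparison, the more common convention is to upload the camera's world-space position and point the view vector from the fragment towards the camera. A minimal sketch of that version, assuming viewPos held the camera position in world space rather than its front vector:

vec3 viewDirection  = normalize(viewPos - fragPosition);         //fragment -> camera
vec3 lightDirection = normalize(light.position - fragPosition);  //fragment -> light
vec3 reflection     = reflect(-lightDirection, normalize(fragNormal));
float spec_factor   = pow(max(dot(viewDirection, reflection), 0.0), float(material.shininess));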

This is my vertex data:


//Vertex Data.
float vertices[] = {

	// positions          // normals           // texture coords
	-0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  0.0f, 0.0f,
	 0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  1.0f, 0.0f,
	 0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  1.0f, 1.0f,
	 0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  1.0f, 1.0f,
	-0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  0.0f, 1.0f,
	-0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  0.0f, 0.0f,

	-0.5f, -0.5f,  0.5f,  0.0f,  0.0f, 1.0f,   0.0f, 0.0f,
	 0.5f, -0.5f,  0.5f,  0.0f,  0.0f, 1.0f,   1.0f, 0.0f,
	 0.5f,  0.5f,  0.5f,  0.0f,  0.0f, 1.0f,   1.0f, 1.0f,
	 0.5f,  0.5f,  0.5f,  0.0f,  0.0f, 1.0f,   1.0f, 1.0f,
	-0.5f,  0.5f,  0.5f,  0.0f,  0.0f, 1.0f,   0.0f, 1.0f,
	-0.5f, -0.5f,  0.5f,  0.0f,  0.0f, 1.0f,   0.0f, 0.0f,

	-0.5f,  0.5f,  0.5f, -1.0f,  0.0f,  0.0f,  1.0f, 0.0f,
	-0.5f,  0.5f, -0.5f, -1.0f,  0.0f,  0.0f,  1.0f, 1.0f,
	-0.5f, -0.5f, -0.5f, -1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
	-0.5f, -0.5f, -0.5f, -1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
	-0.5f, -0.5f,  0.5f, -1.0f,  0.0f,  0.0f,  0.0f, 0.0f,
	-0.5f,  0.5f,  0.5f, -1.0f,  0.0f,  0.0f,  1.0f, 0.0f,

	 0.5f,  0.5f,  0.5f,  1.0f,  0.0f,  0.0f,  1.0f, 0.0f,
	 0.5f,  0.5f, -0.5f,  1.0f,  0.0f,  0.0f,  1.0f, 1.0f,
	 0.5f, -0.5f, -0.5f,  1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
	 0.5f, -0.5f, -0.5f,  1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
	 0.5f, -0.5f,  0.5f,  1.0f,  0.0f,  0.0f,  0.0f, 0.0f,
	 0.5f,  0.5f,  0.5f,  1.0f,  0.0f,  0.0f,  1.0f, 0.0f,

	-0.5f, -0.5f, -0.5f,  0.0f, -1.0f,  0.0f,  0.0f, 1.0f,
	 0.5f, -0.5f, -0.5f,  0.0f, -1.0f,  0.0f,  1.0f, 1.0f,
	 0.5f, -0.5f,  0.5f,  0.0f, -1.0f,  0.0f,  1.0f, 0.0f,
	 0.5f, -0.5f,  0.5f,  0.0f, -1.0f,  0.0f,  1.0f, 0.0f,
	-0.5f, -0.5f,  0.5f,  0.0f, -1.0f,  0.0f,  0.0f, 0.0f,
	-0.5f, -0.5f, -0.5f,  0.0f, -1.0f,  0.0f,  0.0f, 1.0f,

	-0.5f,  0.5f, -0.5f,  0.0f,  1.0f,  0.0f,  0.0f, 1.0f,
	 0.5f,  0.5f, -0.5f,  0.0f,  1.0f,  0.0f,  1.0f, 1.0f,
	 0.5f,  0.5f,  0.5f,  0.0f,  1.0f,  0.0f,  1.0f, 0.0f,
	 0.5f,  0.5f,  0.5f,  0.0f,  1.0f,  0.0f,  1.0f, 0.0f,
	-0.5f,  0.5f,  0.5f,  0.0f,  1.0f,  0.0f,  0.0f, 0.0f,
	-0.5f,  0.5f, -0.5f,  0.0f,  1.0f,  0.0f,  0.0f, 1.0f
};

 

This is my fragment shader:


#version 330 core

//Fragment Output.
out vec4 aPixelColor;

//Normals and TexCoordinates.
in vec3 fragNormal;
in vec2 fragTexCoord;
in vec3 fragPosition;


//Light Source.
struct LightSource
{
	vec3 position;
	vec3 ambient;
	vec3 color;
};


//Material.
struct Material
{
	sampler2D diffuse;
	sampler2D specular;
	int       shininess;
};


//Uniforms.
uniform LightSource light;
uniform Material material;
uniform vec3 viewPos;


//Declare Functions.
vec3 GetAmbientColor();
vec3 GetDiffuseColor();
vec3 GetSpecularColor();




//-_-_-_-_-_-_-_-_-_-_-_-_-_-_-Main Function-_-_-_-_-_-_-_-_-_-_-_-_-_-_-//
void main()
{
	
	float alpha_value   = texture(material.diffuse, fragTexCoord).w;
	vec3 ambient_color  = GetAmbientColor();
	vec3 diffuse_color  = GetDiffuseColor();
	vec3 specular_color = GetSpecularColor();
	vec3 final_color    = ambient_color + diffuse_color + specular_color;

	//Set the final color.
	aPixelColor = vec4(final_color, alpha_value);

}
//-_-_-_-_-_-_-_-_-_-_-_-_-_-_-Main Function-_-_-_-_-_-_-_-_-_-_-_-_-_-_-//




vec3 GetAmbientColor()
{
	return light.ambient * vec3(texture(material.diffuse, fragTexCoord));
}



vec3 GetDiffuseColor()
{

		vec3 light_direction = normalize(light.position - fragPosition);
		vec3 normal          = normalize(fragNormal);

		float diffuse_factor = max(dot(light_direction, normal), 0);

		return (light.color * diffuse_factor) * vec3(texture(material.diffuse, fragTexCoord));
}


vec3 GetSpecularColor()
{
	vec3 light_direction = normalize(fragPosition - light.position);
	vec3 normal          = normalize(fragNormal);
	vec3 viewDirection   = normalize(fragPosition - viewPos);
	vec3 reflection      = normalize(reflect(light_direction, normal));

	float spec_factor    = pow( max(dot(reflection, viewDirection), 0) , material.shininess );

	return light.color * spec_factor * vec3(texture(material.specular, fragTexCoord));
}

This is the vertex shader:


#version 330 core

layout(location = 0) in vec3 aPos;
layout(location = 1) in vec3 aNormal;
layout(location = 2) in vec2 aTexel;

uniform mat4 model;
uniform mat4 view;
uniform mat4 proj;

out vec3 fragNormal;
out vec2 fragTexCoord;
out vec3 fragPosition;

void main()
{
	gl_Position   = proj * view * model * vec4(aPos, 1.0f);

	fragNormal    = mat3(transpose(inverse(model))) * aNormal;
	fragTexCoord  = aTexel;
	fragPosition  = vec3(model * vec4(aPos, 1.0f));
}

 



 

As I said, I tried this and it works:


	//Positions	               //Normals               //Texels

	//Front face (on z axis)
	-0.5f, -0.5f,  0.5f,       0.0f,  0.0f, 1.0f,      0.0f, 0.0f,
	 0.5f, -0.5f,  0.5f,       0.0f,  0.0f, 1.0f,      1.0f, 0.0f,
	 0.5f,  0.5f,  0.5f,       0.0f,  0.0f, 1.0f,      1.0f, 1.0f,
	 0.5f,  0.5f,  0.5f,       0.0f,  0.0f, 1.0f,      1.0f, 1.0f,
	-0.5f,  0.5f,  0.5f,       0.0f,  0.0f, 1.0f,      0.0f, 1.0f,
	-0.5f, -0.5f,  0.5f,       0.0f,  0.0f, 1.0f,      0.0f, 0.0f,
	
	//Front face with reversed Normals. Draw it a little farther so the depth test will pass.
	//Reverse
	-0.5f, -0.5f,  0.49f,      0.0f,  0.0f, -1.0f,     0.0f, 0.0f,
	 0.5f, -0.5f,  0.49f,      0.0f,  0.0f, -1.0f,     1.0f, 0.0f,
	 0.5f,  0.5f,  0.49f,      0.0f,  0.0f, -1.0f,     1.0f, 1.0f,
	 0.5f,  0.5f,  0.49f,      0.0f,  0.0f, -1.0f,     1.0f, 1.0f,
	-0.5f,  0.5f,  0.49f,      0.0f,  0.0f, -1.0f,     0.0f, 1.0f,
	-0.5f, -0.5f,  0.49f,      0.0f,  0.0f, -1.0f,     0.0f, 0.0f,

But I think this method hurts performance, if for each face you need duplicate data just to change the normal direction.



 


Hi!

Yes, this happens because the normals are facing outwards, and when rendering, the graphics card does not distinguish which way a triangle faces if you do not use culling. In the fragment shader you could read the built-in variable gl_FrontFacing and flip the normal when you are rendering a back face. I'm not quite sure why you would want to do that, though, because normally you cannot see the inward-facing triangles.
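A minimal sketch of that idea at the top of the lighting functions (just an illustration, not part of the shader above):

vec3 normal = normalize(fragNormal);
if (!gl_FrontFacing)   //this fragment belongs to a back face
	normal = -normal;  //flip it so the inside is shaded with its own direction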

What typically happens is that inward-facing triangles are not rendered at all. If you enable back-face culling, every triangle that presents its back face to the camera is discarded. You are probably familiar with the effect that when the camera enters an object you can see through it; that is back-face culling in action. It is done to save GPU time: only the winding order of the triangle needs to be computed, and the triangle is then discarded before rasterization happens.
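Enabling it is just a couple of GL calls at setup time; a sketch, assuming your front faces are wound counter-clockwise (the OpenGL default):

//Discard triangles that show their back face to the camera.
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);   //cull back faces (the default)
glFrontFace(GL_CCW);   //counter-clockwise winding is the front face (the default)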

2 hours ago, Cararasu said:

What typically happens is that inward-facing triangles are not rendered at all. If you enable back-face culling, every triangle that presents its back face to the camera is discarded.

And when building a house, don't you want to see inside it? How does this work?



 

6 hours ago, babaliaris said:

And when building a house, don't you want to see inside it? How does this work?

A single wall of a house is like a cube too, and in game you prevent the camera from going inside the wall.

A better example would be a piece of cloth, which is so thin you could approximate it with only a grid of triangles.

The easiest way is to duplicate all triangles so that they have normals for both sides, and one of each pair is always culled. This is usually cheaper than switching back-face culling on and off depending on the geometry.
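A rough sketch of that duplication on the CPU side, assuming a hypothetical Vertex struct matching the position / normal / texcoord layout above:

#include <cstddef>
#include <vector>

//Hypothetical vertex layout: position, normal, texture coordinates.
struct Vertex { float px, py, pz, nx, ny, nz, u, v; };

//For every triangle, append a copy with reversed winding and flipped normals,
//so one of the two copies is always front-facing no matter which side you look from.
void AddBackSides(std::vector<Vertex>& verts)
{
	const std::size_t count = verts.size(); //walk only the original triangles
	for (std::size_t i = 0; i + 2 < count; i += 3)
	{
		Vertex a = verts[i], b = verts[i + 1], c = verts[i + 2];

		auto flip = [](Vertex& v) { v.nx = -v.nx; v.ny = -v.ny; v.nz = -v.nz; };
		flip(a); flip(b); flip(c);

		verts.push_back(a); //reversed winding: a, c, b
		verts.push_back(c);
		verts.push_back(b);
	}
}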

10 hours ago, babaliaris said:

And when building a house, don't you want to see inside it? How does this work?

You give your house walls thickness. So you have triangles on the outside wall (which are front-facing) and you have triangles on the inside wall (which are different triangles, and also front-facing). If you want to be able to enter a house, it should have thick walls anyway.

I wanted to find a nice image to demonstrate this, but the best I could find was this:

(image: K2AiJ.jpg)

In this image, the "house" is not a cube. Instead there are 4 cuboids, and the "inside" of the house is actually outside of them, in the space between them. Every triangle there is front-facing, and you never have to go inside a cuboid.


Oh, I understand now. So the idea is to use multiple cubes to build the house, right? Not one big cube whose inside you look at.



 

I have one more question. My lighting works great right now, but it has no logic for when one object is in front of another and blocks the light source; the object behind it should not receive any light. I understand why this happens. For example, in my diffuse calculations you can see that I'm using the direction from the fragment towards the light source and the fragment's normal to get the angle between them, and that angle produces the factor that reduces or increases the light on the fragment.

But this considers only the current fragment and the light source, not the fragments of other objects, so if an object is behind another and faces the light source, that face gets lit anyway, no matter how many objects are in front of it blocking the light.

Is this an advanced topic in lighting? Should I wait? I'm following these tutorials; I just finished Model Loading and I'm heading to the Advanced OpenGL section.



 

1 hour ago, babaliaris said:

Oh, I understand now. So the idea is to use multiple cubes to build the house, right? Not one big cube whose inside you look at.

Typically, yes. Imagine you have doors and windows; there it would become visible that the walls are as thin as paper.

1 hour ago, babaliaris said:

My lighting works great right now, but it has no logic for when one object is in front of another and blocks the light source

What you're talking about is called... a 'shadow' :)

https://learnopengl.com/Advanced-Lighting/Shadows/Shadow-Mapping
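The core idea of that tutorial, roughly: first render the scene's depth from the light's point of view into a texture, then, while shading, compare each fragment's light-space depth against it. A minimal fragment-shader sketch, assuming a shadowMap sampler and a fragPosLightSpace input that don't exist in your shader yet:

uniform sampler2D shadowMap; //depth rendered from the light's point of view
in vec4 fragPosLightSpace;   //fragment position transformed by the light's view-projection matrix

//Returns 1.0 when the fragment is in shadow, 0.0 otherwise.
float InShadow()
{
	vec3 proj = fragPosLightSpace.xyz / fragPosLightSpace.w; //perspective divide
	proj = proj * 0.5 + 0.5;                                 //NDC [-1,1] -> texture [0,1]
	float closestDepth = texture(shadowMap, proj.xy).r;      //depth the light "sees"
	float currentDepth = proj.z;                             //this fragment's depth
	float bias = 0.005;                                      //small offset to avoid shadow acne
	return currentDepth - bias > closestDepth ? 1.0 : 0.0;
}

You then scale the diffuse and specular terms by (1.0 minus that value) and leave the ambient term alone.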

 

1 hour ago, babaliaris said:

when one object is in front of another and blocks the light source

That's shadows. Shadow mapping is a different thing from lighting, but you'll get to it.

