From c481179825c2f847232257596807beb4544ad1b9 Mon Sep 17 00:00:00 2001
From: cutealien
Date: Thu, 18 Mar 2021 21:50:11 +0000
Subject: [PATCH] Update tutorial.html's in example folders Add some for newer
examples which didn't have those so far. Only updating this once now as some
were broken. Next time on release.
git-svn-id: svn://svn.code.sf.net/p/irrlicht/code/trunk@6204 dfc29bdd-3216-0410-991c-e03cc46cb475
---
examples/01.HelloWorld/tutorial.html | 41 ++--
examples/02.Quake3Map/tutorial.html | 8 +-
examples/03.CustomSceneNode/tutorial.html | 10 +-
examples/04.Movement/tutorial.html | 9 +-
examples/05.UserInterface/tutorial.html | 4 +-
examples/06.2DGraphics/tutorial.html | 8 +-
examples/07.Collision/tutorial.html | 6 +-
examples/08.SpecialFX/tutorial.html | 10 +-
examples/09.Meshviewer/tutorial.html | 6 +-
examples/10.Shaders/tutorial.html | 10 +-
examples/11.PerPixelLighting/tutorial.html | 4 +-
examples/12.TerrainRendering/tutorial.html | 6 +-
examples/13.RenderToTexture/tutorial.html | 9 +-
examples/14.Win32Window/tutorial.html | 4 +-
examples/15.LoadIrrFile/tutorial.html | 8 +-
examples/16.Quake3MapShader/tutorial.html | 6 +-
examples/17.HelloWorld_Mobile/main.cpp | 21 +-
examples/17.HelloWorld_Mobile/tutorial.html | 11 +-
examples/18.SplitScreen/tutorial.html | 6 +-
examples/19.MouseAndJoystick/tutorial.html | 4 +-
examples/20.ManagedLights/tutorial.html | 6 +-
examples/21.Quake3Explorer/tutorial.html | 8 +-
examples/22.MaterialViewer/tutorial.html | 4 +-
examples/23.SMeshHandling/tutorial.html | 6 +-
examples/24.CursorControl/tutorial.html | 4 +-
examples/25.XmlHandling/tutorial.html | 6 +-
examples/26.OcclusionQuery/tutorial.html | 6 +-
examples/27.PostProcessing/tutorial.html | 209 +++++++++++++++++++
examples/28.CubeMapping/tutorial.html | 213 ++++++++++++++++++++
examples/30.Profiling/tutorial.html | 209 +++++++++++++++++++
30 files changed, 749 insertions(+), 113 deletions(-)
create mode 100644 examples/27.PostProcessing/tutorial.html
create mode 100644 examples/28.CubeMapping/tutorial.html
create mode 100644 examples/30.Profiling/tutorial.html
diff --git a/examples/01.HelloWorld/tutorial.html b/examples/01.HelloWorld/tutorial.html
index be3e4f7e..916e554e 100644
--- a/examples/01.HelloWorld/tutorial.html
+++ b/examples/01.HelloWorld/tutorial.html
@@ -195,37 +195,34 @@ tr.heading h2 {
-
+
-
This Tutorial shows how to set up the IDE for using the Irrlicht Engine and how to write a simple HelloWorld program with it. The program will show how to use the basics of the VideoDriver, the GUIEnvironment, and the SceneManager. Microsoft Visual Studio is used as an IDE, but you will also be able to understand everything if you are using a different one or even another operating system than windows.
-
You have to include the header file <irrlicht.h> in order to use the engine. The header file can be found in the Irrlicht Engine SDK directory include
. To let the compiler find this header file, the directory where it is located has to be specified. This is different for every IDE and compiler you use. Let's explain shortly how to do this in Microsoft Visual Studio:
+
This tutorial shows how to set up the IDE for using the Irrlicht Engine and how to write a simple HelloWorld program with it. The program will show how to use the basics of the VideoDriver, the GUIEnvironment, and the SceneManager. Microsoft Visual Studio is used as an IDE, but you will also be able to understand everything if you are using a different one or even another operating system than Windows.
+
You have to include the header file <irrlicht.h> in order to use the engine. The header file can be found in the Irrlicht Engine SDK directory include
. To let the compiler find this header file, the directory where it is located has to be added to your project as an include path. This is different for every IDE and compiler you use. Let's briefly explain how to do this in Visual Studio 2010:
-- If you use Version 6.0, select the Menu Extras -> Options. Select the directories tab, and select the 'Include' Item in the combo box. Add the
include
directory of the irrlicht engine folder to the list of directories. Now the compiler will find the Irrlicht.h header file. We also need the irrlicht.lib to be found, so stay in that dialog, select 'Libraries' in the combo box and add the lib/VisualStudio
directory.
-
-
-
-
-
-
-- If your IDE is Visual Studio .NET, select Tools -> Options. Select the projects entry and then select VC++ directories. Select 'show directories for include files' in the combo box, and add the
include
directory of the irrlicht engine folder to the list of directories. Now the compiler will find the Irrlicht.h header file. We also need the irrlicht.lib to be found, so stay in that dialog, select 'show directories for Library files' and add the lib/VisualStudio
directory.
-
-
-
+- In Visual Studio 2010 select the Menu Project -> Properties. Select the "C/C++" - "General" option, and select the "Additional Include Directories". Add the
include
directory of the Irrlicht engine folder to the list of directories. Now the compiler will find the irrlicht.h header file. We also need the irrlicht.lib to be found, so select "Linker" - "General" and add the lib/Win64-visualStudio
or lib/Win32-visualStudio
directory to "Additional Library Directories". Which of the 2 Irrlicht versions you choose depends on the target platform for your application (win32 or x64). In your project properties you can see what your active solution platform is; use the same one for Irrlicht.
-
That's it. With your IDE set up like this, you will now be able to develop applications with the Irrlicht Engine.
+
To be able to use the Irrlicht.DLL file, we need to link with the Irrlicht.lib. In most IDE's you have to add irrlicht.lib (or irrlicht.a or irrlicht.so on Linux) to your Linker input files.
+
For VisualStudio we can be lazy and use the pragma comment lib. We also want to get rid of the console window, which pops up when starting a program with main() (instead of WinMain). This is done by the second pragma. We could also use the WinMain method, though we would lose platform independence then.
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#pragma comment(linker, "/subsystem:windows /ENTRY:mainCRTStartup")
#endif
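For reference, the WinMain alternative mentioned above would look roughly like this on Windows (just a sketch, it is not used in this tutorial and gives up the platform independent main()):
#ifdef _WIN32
#include <windows.h>
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nShowCmd)
{
// the same code as in main() below would go here
return 0;
}
#endif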
That's it. With your IDE set up like this, you will now be able to develop applications with the Irrlicht Engine.
Let's start!
-
After we have set up the IDE, the compiler will know where to find the Irrlicht Engine header files so we can include it now in our code.
In the Irrlicht Engine, everything can be found in the namespace 'irr'. So if you want to use a class of the engine, you have to write irr:: before the name of the class. For example to use the IrrlichtDevice write: irr::IrrlichtDevice. To get rid of the irr:: in front of the name of every class, we tell the compiler that we use that namespace from now on, and we will not have to write irr:: anymore.
There are 5 sub namespaces in the Irrlicht Engine. Take a look at them, you can read a detailed description of them in the documentation by clicking on the top menu item 'Namespace List' or by using this link: http://irrlicht.sourceforge.net/docu/namespaces.html Like the irr namespace, we do not want these 5 sub namespaces now, to keep this example simple. Hence, we tell the compiler again that we do not want always to write their names.
using namespace core;
using namespace scene;
using namespace video;
using namespace io;
using namespace gui;
To be able to use the Irrlicht.DLL file, we need to link with the Irrlicht.lib. We could set this option in the project settings, but to make it easy, we use a pragma comment lib for VisualStudio. On Windows platforms, we have to get rid of the console window, which pops up when starting a program with main(). This is done by the second pragma. We could also use the WinMain method, though losing platform independence then.
#ifdef _IRR_WINDOWS_
#pragma comment(lib, "Irrlicht.lib")
#pragma comment(linker, "/subsystem:windows /ENTRY:mainCRTStartup")
#endif
This is the main method. We can now use main() on every platform.
The most important function of the engine is the createDevice() function. The IrrlichtDevice is created by it, which is the root object for doing anything with the engine. createDevice() has 7 parameters:
+
After we have set up the IDE, the compiler will know where to find the Irrlicht Engine header files, so we can now include them in our code.
That header just adds the getExampleMediaPath tool-function to help locate the media we need. More about that below.
#include "exampleHelper.h"
In the Irrlicht Engine, everything can be found in the namespace 'irr'. So if you want to use a class of the engine, you have to write irr:: before the name of the class. For example to use the IrrlichtDevice write: irr::IrrlichtDevice. To get rid of the irr:: in front of the name of every class, we tell the compiler that we use that namespace from now on, and we will not have to write irr:: anymore. Note that you should never do that in headers - otherwise you will pollute the namespace of every file including such a header. So in headers always write out the full names including all namespaces.
There are 5 sub namespaces in the Irrlicht Engine. Take a look at them: you can read a detailed description of them in the documentation by clicking on the top menu item 'Namespace List' or by using this link: http://irrlicht.sourceforge.net/docu/namespaces.html As with the irr namespace, we do not want to spell out these 5 sub namespaces every time, to keep this example simple. Hence, we tell the compiler again that we do not always want to write their names.
using namespace core;
using namespace scene;
using namespace video;
using namespace io;
using namespace gui;
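To illustrate the advice about headers with a small made-up example (the function and the id used here are hypothetical, not part of this tutorial): in a header you spell out all namespaces, while in a .cpp with the using directives above the short names are fine.
// MyGame.h - hypothetical header: always use fully qualified names here
irr::scene::ISceneNode* findPlayer(irr::scene::ISceneManager* sceneManager);
// MyGame.cpp - the using directives above make the short form valid here
ISceneNode* findPlayer(ISceneManager* sceneManager)
{
return sceneManager->getSceneNodeFromId(42); // 42 is just an example id
}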
This is the main method. We can now use main() on every platform.
The most important function of the engine is the createDevice() function. The IrrlichtDevice is created by it, which is the root object for doing anything with the engine. createDevice() has the following parameters:
-- deviceType: Type of the device. This can currently be the Null-device, one of the two software renderers, D3D8, D3D9, or OpenGL. In this example we use EDT_SOFTWARE, but to try out, you might want to change it to EDT_BURNINGSVIDEO, EDT_NULL, EDT_DIRECT3D8, EDT_DIRECT3D9, or EDT_OPENGL.
+- driverType: Type of the video driver. This can currently be the Null-device, one of the two software renderers, D3D9, or OpenGL. In this example we use EDT_BURNINGSVIDEO, but you might want to try changing it to EDT_SOFTWARE, EDT_NULL, EDT_DIRECT3D9, or EDT_OPENGL. Generally you will want to use OpenGL or Direct3D as they use your graphics card for calculations instead of the CPU and are way faster (and usually better looking). We just use one of the software renderers here as it works even when your graphics card driver isn't set up for 3D support.
- windowSize: Size of the Window or screen in FullScreenMode to be created. In this example we use 640x480.
-- bits: Amount of color bits per pixel. This should be 16 or 32. The parameter is often ignored when running in windowed mode.
-- fullscreen: Specifies if we want the device to run in fullscreen mode or not.
-- stencilbuffer: Specifies if we want to use the stencil buffer (for drawing shadows).
+- bits: Amount of color bits per pixel. This should be 16 or 32. The parameter is often ignored when running in windowed mode. More commonly you would choose 32 bit; again we're just playing it safe.
+- fullscreen: Specifies if we want the device to run in fullscreen mode or windowed.
+- stencilbuffer: Specifies if we want to use the stencil buffer (you need it for drawing shadows).
- vsync: Specifies if we want to have vsync enabled, this is only useful in fullscreen mode.
- eventReceiver: An object to receive events. We do not want to use this parameter here, and set it to 0.
-
Always check the return value to cope with unsupported drivers, dimensions, etc.
IrrlichtDevice *device =
createDevice( video::EDT_SOFTWARE, dimension2d<u32>(640, 480), 16,
false, false, false, 0);
if (!device)
return 1;
Set the caption of the window to some nice text. Note that there is an 'L' in front of the string. The Irrlicht Engine uses wide character strings when displaying text.
device->setWindowCaption(L"Hello World! - Irrlicht Engine Demo");
Get a pointer to the VideoDriver, the SceneManager and the graphical user interface environment, so that we do not always have to write device->getVideoDriver(), device->getSceneManager(), or device->getGUIEnvironment().
IVideoDriver* driver = device->getVideoDriver();
ISceneManager* smgr = device->getSceneManager();
IGUIEnvironment* guienv = device->getGUIEnvironment();
We add a hello world label to the window, using the GUI environment. The text is placed at the position (10,10) as top left corner and (260,22) as lower right corner.
guienv->addStaticText(L"Hello World! This is the Irrlicht Software renderer!",
rect<s32>(10,10,260,22), true);
To show something interesting, we load a Quake 2 model and display it. We only have to get the Mesh from the Scene Manager with getMesh() and add a SceneNode to display the mesh with addAnimatedMeshSceneNode(). We check the return value of getMesh() to become aware of loading problems and other errors.
-
Instead of writing the filename sydney.md2, it would also be possible to load a Maya object file (.obj), a complete Quake3 map (.bsp) or any other supported file format. By the way, that cool Quake 2 model called sydney was modelled by Brian Collins.
IAnimatedMesh* mesh = smgr->getMesh("../../media/sydney.md2");
if (!mesh)
{
device->drop();
return 1;
}
IAnimatedMeshSceneNode* node = smgr->addAnimatedMeshSceneNode( mesh );
To let the mesh look a little bit nicer, we change its material. We disable lighting because we do not have a dynamic light in here, and the mesh would be totally black otherwise. Then we set the frame loop, such that the predefined STAND animation is used. And last, we apply a texture to the mesh. Without it the mesh would be drawn using only a color.
if (node)
{
node->setMaterialFlag(EMF_LIGHTING, false);
node->setMD2Animation(scene::EMAT_STAND);
node->setMaterialTexture( 0, driver->getTexture("../../media/sydney.bmp") );
}
To look at the mesh, we place a camera into 3d space at the position (0, 30, -40). The camera looks from there to (0,5,0), which is approximately the place where our md2 model is.
smgr->addCameraSceneNode(0, vector3df(0,30,-40), vector3df(0,5,0));
Ok, now we have set up the scene, lets draw everything: We run the device in a while() loop, until the device does not want to run any more. This would be when the user closes the window or presses ALT+F4 (or whatever keycode closes a window).
Anything can be drawn between a beginScene() and an endScene() call. The beginScene() call clears the screen with a color and the depth buffer, if desired. Then we let the Scene Manager and the GUI Environment draw their content. With the endScene() call everything is presented on the screen.
driver->beginScene(true, true, SColor(255,100,101,140));
smgr->drawAll();
guienv->drawAll();
driver->endScene();
}
After we are done with the render loop, we have to delete the Irrlicht Device created before with createDevice(). In the Irrlicht Engine, you have to delete all objects you created with a method or function which starts with 'create'. The object is simply deleted by calling ->drop(). See the documentation at irr::IReferenceCounted::drop() for more information.
device->drop();
return 0;
}
That's it. Compile and run.
+
Always check the return value to cope with unsupported drivers, dimensions, etc.
IrrlichtDevice *device =
createDevice( video::EDT_BURNINGSVIDEO, dimension2d<u32>(640, 480), 16,
false, false, false, 0);
if (!device)
return 1;
Set the caption of the window to some nice text. Note that there is an 'L' in front of the string. The Irrlicht Engine uses wide character strings when displaying text.
device->setWindowCaption(L"Hello World! - Irrlicht Engine Demo");
Get a pointer to the VideoDriver, the SceneManager and the graphical user interface environment, so that we do not always have to write device->getVideoDriver(), device->getSceneManager(), or device->getGUIEnvironment().
IVideoDriver* driver = device->getVideoDriver();
ISceneManager* smgr = device->getSceneManager();
IGUIEnvironment* guienv = device->getGUIEnvironment();
We add a hello world label to the window, using the GUI environment. The text is placed at the position (10,10) as top left corner and (260,22) as lower right corner.
guienv->addStaticText(L"Hello World! This is Irrlicht with the burnings software renderer!",
rect<s32>(10,10,260,22), true);
Get a media path dedicated to your platform. Finding media files for your application can be tricky. Basically you have 2 options - working with relative paths or working with absolute paths.
+
On Windows a common solution is that your installer will write a key into the registry with the absolute path of wherever the user installed the media. And in your application you read out that key from the registry. On Linux a common solution is to use a config file placed in some fixed location (for example in a dot-file or dot-folder in the user's home directory).
+
But you can also work with relative paths - which is what we do here. There is a slight complication with relative paths as they are relative to your current working directory, which depends on the way your application is started and might even change inside your application. But mostly it will be set to your executable's folder on start, so you can ignore that problem while developing.
+
When inside VisualStudio the current working directory is set to your project files' location unless you override it in Project properties - Debugging
+- Working Directory. In the Irrlicht examples the media folder is on most platforms ../../media, which works for the examples as it's relative to our project files as well as to the binary (.exe) files.
+
+
Whatever you choose to find your base-folder for media - wrap it in some function and then you can improve the code to locate the media later on.
const io::path mediaPath = getExampleMediaPath();
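In your own project such a wrapper can start out as small as this (a minimal sketch with a made-up name; the body can later be replaced by a registry or config file lookup as described above):
// Hypothetical helper: one central place that decides where the media lives.
static irr::io::path getMyMediaPath()
{
return irr::io::path("../../media/");
}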
To show something interesting, we load a Quake 2 model and display it. We get the Mesh from the Scene Manager with getMesh() and add a SceneNode to display the mesh with addAnimatedMeshSceneNode(). Check the return value of getMesh() to become aware of loading problems and other errors.
+
Instead of writing the filename sydney.md2, it would also be possible to load a Maya object file (.obj), a complete Quake3 map (.bsp) or any other supported file format. By the way, that cool Quake 2 model called sydney was modeled by Brian Collins.
IAnimatedMesh* mesh = smgr->getMesh(mediaPath + "sydney.md2");
if (!mesh)
{
device->drop();
return 1;
}
IAnimatedMeshSceneNode* node = smgr->addAnimatedMeshSceneNode( mesh );
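As noted above, other supported formats go through the very same getMesh() call; for example (the file names here are hypothetical):
// IAnimatedMesh* objMesh = smgr->getMesh(mediaPath + "someModel.obj");
// IAnimatedMesh* mapMesh = smgr->getMesh(mediaPath + "someLevel.bsp");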
To let the mesh look a little bit nicer, we change its material. We disable lighting because we do not have a dynamic light in here, and the mesh would be totally black otherwise. Then we set the frame loop, such that the predefined STAND animation is used. And last, we apply a texture to the mesh. Without it the mesh would be drawn using only a color.
if (node)
{
node->setMaterialFlag(EMF_LIGHTING, false);
node->setMD2Animation(scene::EMAT_STAND);
node->setMaterialTexture( 0, driver->getTexture(mediaPath + "sydney.bmp") );
}
To look at the mesh, we place a camera into 3d space at the position (0, 30, -40). The camera looks from there to (0,5,0), which is approximately the place where our md2 model is.
smgr->addCameraSceneNode(0, vector3df(0,30,-40), vector3df(0,5,0));
OK, now that we have set up the scene, let's draw everything: We run the device in a while() loop, until the device does not want to run any more. This would be when the user closes the window or presses ALT+F4 (or whatever keycode closes a window on your OS).
Anything can be drawn between a beginScene() and an endScene() call. The beginScene() call clears the screen with a color and the depth buffer, if desired. Then we let the Scene Manager and the GUI Environment draw their content. With the endScene() call everything is presented on the screen.
driver->beginScene(ECBF_COLOR | ECBF_DEPTH, SColor(255,100,101,140));
smgr->drawAll();
guienv->drawAll();
driver->endScene();
}
After we are done with the render loop, we have to delete the Irrlicht Device created before with createDevice(). In the Irrlicht Engine, you have to delete all objects you created with a method or function which starts with 'create'. The object is simply deleted by calling ->drop(). See the documentation at irr::IReferenceCounted::drop() for more information.
device->drop();
return 0;
}
That's it. Compile and run.
diff --git a/examples/02.Quake3Map/tutorial.html b/examples/02.Quake3Map/tutorial.html
index 550bac3e..ddc6a5bf 100644
--- a/examples/02.Quake3Map/tutorial.html
+++ b/examples/02.Quake3Map/tutorial.html
@@ -195,11 +195,11 @@ tr.heading h2 {
-
+
-
This Tutorial shows how to load a Quake 3 map into the engine, create a SceneNode for optimizing the speed of rendering, and how to create a user controlled camera.
-
Please note that you should know the basics of the engine before starting this tutorial. Just take a short look at the first tutorial, if you haven't done this yet: http://irrlicht.sourceforge.net/tut001.html
-
Lets start like the HelloWorld example: We include the irrlicht header files and an additional file to be able to ask the user for a driver type using the console.
#include <irrlicht.h>
#include <iostream>
As already written in the HelloWorld example, in the Irrlicht Engine everything can be found in the namespace 'irr'. To get rid of the irr:: in front of the name of every class, we tell the compiler that we use that namespace from now on, and we will not have to write that 'irr::'. There are 5 other sub namespaces 'core', 'scene', 'video', 'io' and 'gui'. Unlike in the HelloWorld example, we do not call 'using namespace' for these 5 other namespaces, because in this way you will see what can be found in which namespace. But if you like, you can also include the namespaces like in the previous example.
Again, to be able to use the Irrlicht.DLL file, we need to link with the Irrlicht.lib. We could set this option in the project settings, but to make it easy, we use a pragma comment lib:
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
Ok, lets start. Again, we use the main() method as start, not the WinMain().
Like in the HelloWorld example, we create an IrrlichtDevice with createDevice(). The difference now is that we ask the user to select which video driver to use. The Software device might be too slow to draw a huge Quake 3 map, but just for the fun of it, we make this decision possible, too. Instead of copying this whole code into your app, you can simply include driverChoice.h from Irrlicht's include directory. The function driverChoiceConsole does exactly the same.
video::E_DRIVER_TYPE driverType;
printf("Please select the driver you want for this example:\n"\
" (a) OpenGL 1.5\n (b) Direct3D 9.0c\n (c) Direct3D 8.1\n"\
" (d) Burning's Software Renderer\n (e) Software Renderer\n"\
" (f) NullDevice\n (otherKey) exit\n\n");
char i;
std::cin >> i;
switch(i)
{
case 'a': driverType = video::EDT_OPENGL; break;
case 'b': driverType = video::EDT_DIRECT3D9;break;
case 'c': driverType = video::EDT_DIRECT3D8;break;
case 'd': driverType = video::EDT_BURNINGSVIDEO;break;
case 'e': driverType = video::EDT_SOFTWARE; break;
case 'f': driverType = video::EDT_NULL; break;
default: return 1;
}
IrrlichtDevice *device =
createDevice(driverType, core::dimension2d<u32>(640, 480));
if (device == 0)
return 1;
Get a pointer to the video driver and the SceneManager so that we do not always have to call irr::IrrlichtDevice::getVideoDriver() and irr::IrrlichtDevice::getSceneManager().
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
To display the Quake 3 map, we first need to load it. Quake 3 maps are packed into .pk3 files which are nothing else than .zip files. So we add the .pk3 file to our irr::io::IFileSystem. After it was added, we are able to read from the files in that archive as if they are directly stored on the disk.
device->getFileSystem()->addFileArchive("../../media/map-20kdm2.pk3");
Now we can load the mesh by calling irr::scene::ISceneManager::getMesh(). We get a pointer returned to an irr::scene::IAnimatedMesh. As you might know, Quake 3 maps are not really animated, they are only a huge chunk of static geometry with some materials attached. Hence the IAnimatedMesh consists of only one frame, so we get the "first frame" of the "animation", which is our quake level and create an Octree scene node with it, using irr::scene::ISceneManager::addOctreeSceneNode(). The Octree optimizes the scene a little bit, trying to draw only geometry which is currently visible. An alternative to the Octree would be a irr::scene::IMeshSceneNode, which would always draw the complete geometry of the mesh, without optimization. Try it: Use irr::scene::ISceneManager::addMeshSceneNode() instead of addOctreeSceneNode() and compare the primitives drawn by the video driver. (There is a irr::video::IVideoDriver::getPrimitiveCountDrawn() method in the irr::video::IVideoDriver class). Note that this optimization with the Octree is only useful when drawing huge meshes consisting of lots of geometry.
scene::IAnimatedMesh* mesh = smgr->getMesh("20kdm2.bsp");
scene::ISceneNode* node = 0;
if (mesh)
node = smgr->addOctreeSceneNode(mesh->getMesh(0), 0, -1, 1024);
Because the level was not modelled around the origin (0,0,0), we translate the whole level a little bit. This is done on irr::scene::ISceneNode level using the methods irr::scene::ISceneNode::setPosition() (in this case), irr::scene::ISceneNode::setRotation(), and irr::scene::ISceneNode::setScale().
if (node)
node->setPosition(core::vector3df(-1300,-144,-1249));
Now we only need a camera to look at the Quake 3 map. We want to create a user controlled camera. There are some cameras available in the Irrlicht engine. For example the MayaCamera which can be controlled like the camera in Maya: Rotate with left mouse button pressed, Zoom with both buttons pressed, translate with right mouse button pressed. This could be created with irr::scene::ISceneManager::addCameraSceneNodeMaya(). But for this example, we want to create a camera which behaves like the ones in first person shooter games (FPS) and hence use irr::scene::ISceneManager::addCameraSceneNodeFPS().
smgr->addCameraSceneNodeFPS();
The mouse cursor needs not be visible, so we hide it via the irr::IrrlichtDevice::ICursorControl.
device->getCursorControl()->setVisible(false);
We have done everything, so lets draw it. We also write the current frames per second and the primitives drawn into the caption of the window. The test for irr::IrrlichtDevice::isWindowActive() is optional, but prevents the engine to grab the mouse cursor after task switching when other programs are active. The call to irr::IrrlichtDevice::yield() will avoid the busy loop to eat up all CPU cycles when the window is not active.
int lastFPS = -1;
while(device->run())
{
if (device->isWindowActive())
{
driver->beginScene(true, true, video::SColor(255,200,200,200));
smgr->drawAll();
driver->endScene();
int fps = driver->getFPS();
if (lastFPS != fps)
{
core::stringw str = L"Irrlicht Engine - Quake 3 Map example [";
str += driver->getName();
str += "] FPS:";
str += fps;
device->setWindowCaption(str.c_str());
lastFPS = fps;
}
}
else
device->yield();
}
In the end, delete the Irrlicht device.
device->drop();
return 0;
}
That's it. Compile and play around with the program.
+
This tutorial shows how to load a Quake 3 map into the engine, create a SceneNode for optimizing the speed of rendering, and how to create a user controlled camera.
+
Please note that you should know the basics of the engine before starting this tutorial. Just take a short look at the first tutorial, if you haven't done this yet: http://irrlicht.sourceforge.net/docu/example001.html
+
Let's start like the HelloWorld example: We include the irrlicht header files and an additional file to be able to ask the user for a driver type using the console.
#include <irrlicht.h>
#include "driverChoice.h"
#include "exampleHelper.h"
As already written in the HelloWorld example, in the Irrlicht Engine everything can be found in the namespace 'irr'. To get rid of the irr:: in front of the name of every class, we tell the compiler that we use that namespace from now on, and we will not have to write that 'irr::'. There are 5 other sub namespaces 'core', 'scene', 'video', 'io' and 'gui'. Unlike in the HelloWorld example, we do not call 'using namespace' for these 5 other namespaces, because in this way you will see what can be found in which namespace. But if you like, you can also include the namespaces like in the previous example.
Again, to be able to use the Irrlicht.DLL file, we need to link with the Irrlicht.lib. We could set this option in the project settings, but to make it easy, we use a pragma comment lib:
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
OK, let's start. Again, we use the main() method as start, not the WinMain().
Like in the HelloWorld example, we create an IrrlichtDevice with createDevice(). The difference now is that we ask the user to select which video driver to use. The Software device might be too slow to draw a huge Quake 3 map, but just for the fun of it, we make this decision possible, too.
video::E_DRIVER_TYPE driverType=driverChoiceConsole(true);
if (driverType==video::EDT_COUNT)
return 1;
IrrlichtDevice *device =
createDevice(driverType, core::dimension2d<u32>(640, 480));
if (device == 0)
return 1;
Get a pointer to the video driver and the SceneManager so that we do not always have to call irr::IrrlichtDevice::getVideoDriver() and irr::IrrlichtDevice::getSceneManager().
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
To display the Quake 3 map, we first need to load it. Quake 3 maps are packed into .pk3 files which are nothing else than .zip files. So we add the .pk3 file to our irr::io::IFileSystem. After it was added, we can read from the files in that archive as if they were stored on disk.
device->getFileSystem()->addFileArchive(getExampleMediaPath() + "map-20kdm2.pk3");
Now we can load the mesh by calling irr::scene::ISceneManager::getMesh(). We get a pointer returned to an irr::scene::IAnimatedMesh. Quake 3 maps are not really animated; they are only a chunk of static geometry with some materials attached. Hence the IAnimatedMesh consists of only one frame, so we get the "first frame" of the "animation", which is our quake level, and create an Octree scene node with it, using irr::scene::ISceneManager::addOctreeSceneNode(). The Octree optimizes the scene a little bit, trying to draw only geometry which is currently visible. An alternative to the Octree would be an irr::scene::IMeshSceneNode, which would always draw the complete geometry of the mesh, without optimization. Try it: Use irr::scene::ISceneManager::addMeshSceneNode() instead of addOctreeSceneNode() and compare the primitives drawn by the video driver. (There is an irr::video::IVideoDriver::getPrimitiveCountDrawn() method in the irr::video::IVideoDriver class). Note that this optimization with the Octree is only useful when drawing huge meshes consisting of lots of geometry and if users can't see the whole scene at once.
scene::IAnimatedMesh* mesh = smgr->getMesh("20kdm2.bsp");
scene::ISceneNode* node = 0;
if (mesh)
node = smgr->addOctreeSceneNode(mesh->getMesh(0), 0, -1, 1024);
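If you want to try the comparison suggested above, here is a rough sketch of the alternative (commented out; everything else in the example stays the same):
// Without the octree optimization the complete mesh is always drawn:
// node = smgr->addMeshSceneNode(mesh->getMesh(0));
// Inside the render loop you could then compare what the driver reports:
// printf("primitives drawn: %u\n", driver->getPrimitiveCountDrawn());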
Because the level was not modeled around the origin (0,0,0), we translate the whole level a little bit. This is done on irr::scene::ISceneNode level using the methods irr::scene::ISceneNode::setPosition() (in this case), irr::scene::ISceneNode::setRotation(), and irr::scene::ISceneNode::setScale().
if (node)
node->setPosition(core::vector3df(-1300,-144,-1249));
Now we need a camera to look at the Quake 3 map. We want to create a user controlled camera. There are some cameras available in the Irrlicht engine. For example the MayaCamera which can be controlled like the camera in Maya: Rotate with left mouse button pressed, Zoom with both buttons pressed, translate with right mouse button pressed. This could be created with irr::scene::ISceneManager::addCameraSceneNodeMaya(). But for this example, we want to create a camera which behaves like the ones in first person shooter games (FPS) and hence use irr::scene::ISceneManager::addCameraSceneNodeFPS().
smgr->addCameraSceneNodeFPS();
The mouse cursor need not be visible, so we hide it via the irr::IrrlichtDevice::ICursorControl.
device->getCursorControl()->setVisible(false);
Everything is set up, so let's draw it. We also write the current frames per second and the primitives drawn into the caption of the window. The test for irr::IrrlichtDevice::isWindowActive() is optional, but prevents the engine from grabbing the mouse cursor after task switching when other programs are active. The call to irr::IrrlichtDevice::yield() will keep the busy loop from eating up all CPU cycles when the window is not active.
int lastFPS = -1;
while(device->run())
{
if (device->isWindowActive())
{
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, video::SColor(255,200,200,200));
smgr->drawAll();
driver->endScene();
int fps = driver->getFPS();
if (lastFPS != fps)
{
core::stringw str = L"Irrlicht Engine - Quake 3 Map example [";
str += driver->getName();
str += "] FPS:";
str += fps;
device->setWindowCaption(str.c_str());
lastFPS = fps;
}
}
else
device->yield();
}
In the end, delete the Irrlicht device.
device->drop();
return 0;
}
That's it. Compile and play around with the program.
diff --git a/examples/03.CustomSceneNode/tutorial.html b/examples/03.CustomSceneNode/tutorial.html
index 28315f47..bfa3a7b9 100644
--- a/examples/03.CustomSceneNode/tutorial.html
+++ b/examples/03.CustomSceneNode/tutorial.html
@@ -195,12 +195,12 @@ tr.heading h2 {
-
+
-
This Tutorial is more advanced than the previous ones. If you are currently just playing around with the Irrlicht engine, you may want to look at other examples first. This tutorials shows how to create a custom scene node and how to use it in the engine. A custom scene node is needed if you want to implement a render technique the Irrlicht Engine currently does not support. For example, you can write an indoor portal based renderer or an advanced terrain scene node with it. By creating custom scene nodes, you can easily extend the Irrlicht Engine and adapt it to your own needs.
-
I will keep the tutorial simple: Keep everything very short, everything in one .cpp file, and I'll use the engine here as in all other tutorials.
-
To start, I include the header files, use the irr namespace, and tell the linker to link with the .lib file.
#include <irrlicht.h>
#include "driverChoice.h"
using namespace irr;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
Here comes the more sophisticated part of this tutorial: The class of our very own custom scene node. To keep it simple, our scene node will not be an indoor portal renderer nor a terrain scene node, but a simple tetraeder, a 3d object consisting of 4 connected vertices, which only draws itself and does nothing more. Note that this scenario does not require a custom scene node in Irrlicht. Instead one would create a mesh from the geometry and pass it to a irr::scene::IMeshSceneNode. This example just illustrates creation of a custom scene node in a very simple setting.
-
To let our scene node be able to be inserted into the Irrlicht Engine scene, the class we create needs to be derived from the irr::scene::ISceneNode class and has to override some methods.
class CSampleSceneNode : public scene::ISceneNode
{
First, we declare some member variables: The bounding box, 4 vertices, and the material of the tetraeder.
core::aabbox3d<f32> Box;
video::S3DVertex Vertices[4];
video::SMaterial Material;
The parameters of the constructor specify the parent of the scene node, a pointer to the scene manager, and an id of the scene node. In the constructor we call the parent class' constructor, set some properties of the material, and create the 4 vertices of the tetraeder we will draw later.
public:
CSampleSceneNode(scene::ISceneNode* parent, scene::ISceneManager* mgr, s32 id)
: scene::ISceneNode(parent, mgr, id)
{
Material.Wireframe = false;
Material.Lighting = false;
Vertices[0] = video::S3DVertex(0,0,10, 1,1,0,
video::SColor(255,0,255,255), 0, 1);
Vertices[1] = video::S3DVertex(10,0,-10, 1,0,0,
video::SColor(255,255,0,255), 1, 1);
Vertices[2] = video::S3DVertex(0,20,0, 0,1,1,
video::SColor(255,255,255,0), 1, 0);
Vertices[3] = video::S3DVertex(-10,0,-10, 0,0,1,
video::SColor(255,0,255,0), 0, 0);
The Irrlicht Engine needs to know the bounding box of a scene node. It will use it for automatic culling and other things. Hence, we need to create a bounding box from the 4 vertices we use. If you do not want the engine to use the box for automatic culling, and/or don't want to create the box, you could also call irr::scene::ISceneNode::setAutomaticCulling() with irr::scene::EAC_OFF.
Box.reset(Vertices[0].Pos);
for (s32 i=1; i<4; ++i)
Box.addInternalPoint(Vertices[i].Pos);
}
Before it is drawn, the irr::scene::ISceneNode::OnRegisterSceneNode() method of every scene node in the scene is called by the scene manager. If the scene node wishes to draw itself, it may register itself in the scene manager to be drawn. This is necessary to tell the scene manager when it should call irr::scene::ISceneNode::render(). For example, normal scene nodes render their content one after another, while stencil buffer shadows would like to be drawn after all other scene nodes. And camera or light scene nodes need to be rendered before all other scene nodes (if at all). So here we simply register the scene node to render normally. If we would like to let it be rendered like cameras or light, we would have to call SceneManager->registerNodeForRendering(this, SNRT_LIGHT_AND_CAMERA); After this, we call the actual irr::scene::ISceneNode::OnRegisterSceneNode() method of the base class, which simply lets also all the child scene nodes of this node register themselves.
virtual void OnRegisterSceneNode()
{
if (IsVisible)
SceneManager->registerNodeForRendering(this);
ISceneNode::OnRegisterSceneNode();
}
In the render() method most of the interesting stuff happens: The Scene node renders itself. We override this method and draw the tetraeder.
virtual void render()
{
u16 indices[] = { 0,2,3, 2,1,3, 1,0,3, 2,0,1 };
video::IVideoDriver* driver = SceneManager->getVideoDriver();
driver->setMaterial(Material);
driver->setTransform(video::ETS_WORLD, AbsoluteTransformation);
driver->drawVertexPrimitiveList(&Vertices[0], 4, &indices[0], 4, video::EVT_STANDARD, scene::EPT_TRIANGLES, video::EIT_16BIT);
}
And finally we create three small additional methods. irr::scene::ISceneNode::getBoundingBox() returns the bounding box of this scene node, irr::scene::ISceneNode::getMaterialCount() returns the amount of materials in this scene node (our tetraeder only has one material), and irr::scene::ISceneNode::getMaterial() returns the material at an index. Because we have only one material here, we can return the only one material, assuming that no one ever calls getMaterial() with an index greater than 0.
virtual const core::aabbox3d<f32>& getBoundingBox() const
{
return Box;
}
virtual u32 getMaterialCount() const
{
return 1;
}
virtual video::SMaterial& getMaterial(u32 i)
{
return Material;
}
};
That's it. The Scene node is done. Now we simply have to start the engine, create the scene node and a camera, and look at the result.
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
IrrlichtDevice *device = createDevice(driverType,
core::dimension2d<u32>(640, 480), 16, false);
if (device == 0)
return 1;
device->setWindowCaption(L"Custom Scene Node - Irrlicht Engine Demo");
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
smgr->addCameraSceneNode(0, core::vector3df(0,-40,0), core::vector3df(0,0,0));
Create our scene node. I don't check the result of calling new, as it should throw an exception rather than returning 0 on failure. Because the new node will create itself with a reference count of 1, and then will have another reference added by its parent scene node when it is added to the scene, I need to drop my reference to it. Best practice is to drop it only after I have finished using it, regardless of what the reference count of the object is after creation.
CSampleSceneNode *myNode =
new CSampleSceneNode(smgr->getRootSceneNode(), smgr, 666);
To animate something in this boring scene consisting only of one tetraeder, and to show that you now can use your scene node like any other scene node in the engine, we add an animator to the scene node, which rotates the node a little bit. irr::scene::ISceneManager::createRotationAnimator() could return 0, so should be checked.
scene::ISceneNodeAnimator* anim =
smgr->createRotationAnimator(core::vector3df(0.8f, 0, 0.8f));
if(anim)
{
myNode->addAnimator(anim);
I'm done referring to anim, so must irr::IReferenceCounted::drop() this reference now because it was produced by a createFoo() function. As I shouldn't refer to it again, ensure that I can't by setting to 0.
I'm done with my CSampleSceneNode object, and so must drop my reference. This won't delete the object, yet, because it is still attached to the scene graph, which prevents the deletion until the graph is deleted or the custom scene node is removed from it.
myNode->drop();
myNode = 0;
Now draw everything and finish.
u32 frames=0;
while(device->run())
{
driver->beginScene(true, true, video::SColor(0,100,100,100));
smgr->drawAll();
driver->endScene();
if (++frames==100)
{
core::stringw str = L"Irrlicht Engine [";
str += driver->getName();
str += L"] FPS: ";
str += (s32)driver->getFPS();
device->setWindowCaption(str.c_str());
frames=0;
}
}
device->drop();
return 0;
}
That's it. Compile and play around with the program.
+
This tutorial is more advanced than the previous ones. If you are currently just playing around with the Irrlicht engine, you may want to look at other examples first. This tutorial shows how to create a custom scene node and how to use it in the engine. A custom scene node is needed if you want to implement a render technique the Irrlicht Engine currently does not support. For example, you can write an indoor portal based renderer or an advanced terrain scene node with it. By creating custom scene nodes, you can easily extend the Irrlicht Engine and adapt it to your needs.
+
I will keep the tutorial simple: Keep everything very short and everything in one .cpp file. This is the style which will also be used in most of the following tutorials.
+
To start, I include the header files, use the irr namespace, and tell the linker to link with the .lib file.
#include <irrlicht.h>
#include "driverChoice.h"
using namespace irr;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
Here comes the more sophisticated part of this tutorial: The class of our very own custom scene node. To keep it simple, our scene node will not be an indoor portal renderer nor a terrain scene node, but a simple tetrahedron, a 3D object consisting of 4 connected vertices, which only draws itself and does nothing more. Note that this scenario does not require a custom scene node in Irrlicht. Instead one would create a mesh from the geometry and pass it to a irr::scene::IMeshSceneNode. This example just illustrates creation of a custom scene node in a simple setting.
+
To allow our scene node to be inserted into the Irrlicht Engine scene, the class we create needs to be derived from the irr::scene::ISceneNode class and has to override some methods.
class CSampleSceneNode : public scene::ISceneNode
{
First, we declare some member variables: The bounding box, 4 vertices, and the material of the tetrahedron.
core::aabbox3d<f32> Box;
video::S3DVertex Vertices[4];
video::SMaterial Material;
public:
The parameters of the constructor specify the parent of the scene node, a pointer to the scene manager, and an id of the scene node. In the constructor we call the parent class' constructor, set some properties of the material, and create the 4 vertices of the tetrahedron.
CSampleSceneNode(scene::ISceneNode* parent, scene::ISceneManager* mgr, s32 id)
: scene::ISceneNode(parent, mgr, id)
{
Material.Wireframe = false;
Material.Lighting = false;
Vertices[0] = video::S3DVertex(0,0,10, 1,1,0,
video::SColor(255,0,255,255), 0, 1);
Vertices[1] = video::S3DVertex(10,0,-10, 1,0,0,
video::SColor(255,255,0,255), 1, 1);
Vertices[2] = video::S3DVertex(0,20,0, 0,1,1,
video::SColor(255,255,255,0), 1, 0);
Vertices[3] = video::S3DVertex(-10,0,-10, 0,0,1,
video::SColor(255,0,255,0), 0, 0);
The Irrlicht Engine needs to know the bounding box of a scene node. It will use it for automatic culling and other things. Hence, we need to create a bounding box from the 4 vertices we use. If you do not want the engine to use the box for automatic culling, and/or don't want to create the box, you could also call irr::scene::ISceneNode::setAutomaticCulling() with irr::scene::EAC_OFF.
Box.reset(Vertices[0].Pos);
for (s32 i=1; i<4; ++i)
Box.addInternalPoint(Vertices[i].Pos);
}
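If you instead wanted to switch off the automatic culling mentioned above, a single call in the constructor would do it (a sketch only, not used in this example):
// setAutomaticCulling(scene::EAC_OFF);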
Before it is drawn, the irr::scene::ISceneNode::OnRegisterSceneNode() method of every scene node in the scene is called by the scene manager. If the scene node wishes to draw itself, it may register itself in the scene manager to be drawn. This is necessary to tell the scene manager when it should call irr::scene::ISceneNode::render(). For example, normal scene nodes render their content one after another, while stencil buffer shadows would like to be drawn after all other scene nodes. And camera or light scene nodes need to be rendered before all other scene nodes (if at all). So here we simply register the scene node to render normally. If we wanted it to be rendered like cameras or lights, we would have to call SceneManager->registerNodeForRendering(this, SNRT_LIGHT_AND_CAMERA); After this, we call the actual irr::scene::ISceneNode::OnRegisterSceneNode() method of the base class, which lets all the child scene nodes of this node register themselves.
virtual void OnRegisterSceneNode()
{
if (IsVisible)
SceneManager->registerNodeForRendering(this);
ISceneNode::OnRegisterSceneNode();
}
In the render() method most of the interesting stuff happens: The Scene node renders itself. We override this method and draw the tetrahedron.
Indices into the 'Vertices' array. A triangle needs 3 vertices so you have to pass the 3 corresponding indices for each triangle to tell which of the vertices should be used for it.
virtual void render()
{
u16 indices[] = { 0,2,3, 2,1,3, 1,0,3, 2,0,1 };
video::IVideoDriver* driver = SceneManager->getVideoDriver();
driver->setMaterial(Material);
driver->setTransform(video::ETS_WORLD, AbsoluteTransformation);
driver->drawVertexPrimitiveList(&Vertices[0], 4, &indices[0], 4, video::EVT_STANDARD, scene::EPT_TRIANGLES, video::EIT_16BIT);
}
And finally we create three small additional methods. irr::scene::ISceneNode::getBoundingBox() returns the bounding box of this scene node, irr::scene::ISceneNode::getMaterialCount() returns the amount of materials in this scene node (our tetrahedron only has one material), and irr::scene::ISceneNode::getMaterial() returns the material at an index. Because we have only one material, we can return that and assume that no one ever calls getMaterial() with an index greater than 0.
virtual const core::aabbox3d<f32>& getBoundingBox() const
{
return Box;
}
virtual u32 getMaterialCount() const
{
return 1;
}
virtual video::SMaterial& getMaterial(u32 i)
{
return Material;
}
};
That's it. The Scene node is done. Now we start the engine, create the scene node and a camera, and look at the result.
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
IrrlichtDevice *device = createDevice(driverType,
core::dimension2d<u32>(640, 480), 16, false);
if (device == 0)
return 1;
device->setWindowCaption(L"Custom Scene Node - Irrlicht Engine Demo");
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
smgr->addCameraSceneNode(0, core::vector3df(0,-40,0), core::vector3df(0,0,0));
Create our scene node. I don't check the result of calling new, as it should throw an exception rather than returning 0 on failure. Because the new node will create itself with a reference count of 1, and then will have another reference added by its parent scene node when it is added to the scene, I need to drop my reference to it. Best practice is to drop it only after I have finished using it, regardless of what the reference count of the object is after creation.
CSampleSceneNode *myNode =
new CSampleSceneNode(smgr->getRootSceneNode(), smgr, 666);
To animate something in this boring scene consisting only of one tetrahedron, and to show that you now can use your scene node like any other scene node in the engine, we add an animator to the scene node, which rotates the node a little bit. irr::scene::ISceneManager::createRotationAnimator() could return 0, so should be checked.
scene::ISceneNodeAnimator* anim =
smgr->createRotationAnimator(core::vector3df(0.8f, 0, 0.8f));
if(anim)
{
myNode->addAnimator(anim);
I'm done referring to anim, so must irr::IReferenceCounted::drop() this reference now because it was produced by a createFoo() function. As I shouldn't refer to it again, ensure that I can't by setting to 0.
anim->drop();
anim = 0;
}
I'm done with my CSampleSceneNode object, and so must drop my reference. This won't delete the object, yet, because it is still attached to the scene graph, which prevents the deletion until the graph is deleted or the custom scene node is removed from it.
myNode->drop();
myNode = 0;
Now draw everything and finish.
u32 frames=0;
while(device->run())
{
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, video::SColor(0,100,100,100));
smgr->drawAll();
driver->endScene();
if (++frames==100)
{
core::stringw str = L"Irrlicht Engine [";
str += driver->getName();
str += L"] FPS: ";
str += (s32)driver->getFPS();
device->setWindowCaption(str.c_str());
frames=0;
}
}
device->drop();
return 0;
}
That's it. Compile and play around with the program.
diff --git a/examples/04.Movement/tutorial.html b/examples/04.Movement/tutorial.html
index 45df714a..cb793b21 100644
--- a/examples/04.Movement/tutorial.html
+++ b/examples/04.Movement/tutorial.html
@@ -195,12 +195,11 @@ tr.heading h2 {
-
+
-
This Tutorial shows how to move and animate SceneNodes. The basic concept of SceneNodeAnimators is shown as well as manual movement of nodes using the keyboard. We'll demonstrate framerate independent movement, which means moving by an amount dependent on the duration of the last run of the Irrlicht loop.
-
Example 19.MouseAndJoystick shows how to handle those kinds of input.
-
As always, I include the header files, use the irr namespace, and tell the linker to link with the .lib file.
#ifdef _MSC_VER
#define _CRT_SECURE_NO_WARNINGS
#pragma comment(lib, "Irrlicht.lib")
#endif
#include <irrlicht.h>
#include "driverChoice.h"
using namespace irr;
To receive events like mouse and keyboard input, or GUI events like "the OK
-button has been clicked", we need an object which is derived from the irr::IEventReceiver object. There is only one method to override: irr::IEventReceiver::OnEvent(). This method will be called by the engine once when an event happens. What we really want to know is whether a key is being held down, and so we will remember the current state of each key.
class MyEventReceiver : public IEventReceiver
{
public:
virtual bool OnEvent(const SEvent& event)
{
if (event.EventType == irr::EET_KEY_INPUT_EVENT)
KeyIsDown[event.KeyInput.Key] = event.KeyInput.PressedDown;
return false;
}
virtual bool IsKeyDown(EKEY_CODE keyCode) const
{
return KeyIsDown[keyCode];
}
MyEventReceiver()
{
for (u32 i=0; i<KEY_KEY_CODES_COUNT; ++i)
KeyIsDown[i] = false;
}
private:
bool KeyIsDown[KEY_KEY_CODES_COUNT];
};
The event receiver for keeping the pressed keys is ready, the actual responses will be made inside the render loop, right before drawing the scene. So lets just create an irr::IrrlichtDevice and the scene node we want to move. We also create some other additional scene nodes, to show that there are also some different possibilities to move and animate scene nodes.
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
MyEventReceiver receiver;
IrrlichtDevice* device = createDevice(driverType,
core::dimension2d<u32>(640, 480), 16, false, false, false, &receiver);
if (device == 0)
return 1;
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
Create the node which will be moved with the WSAD keys. We create a sphere node, which is a built-in geometry primitive. We place the node at (0,0,30) and assign a texture to it to let it look a little bit more interesting. Because we have no dynamic lights in this scene we disable lighting for each model (otherwise the models would be black).
scene::ISceneNode * node = smgr->addSphereSceneNode();
if (node)
{
node->setPosition(core::vector3df(0,0,30));
node->setMaterialTexture(0, driver->getTexture("../../media/wall.bmp"));
node->setMaterialFlag(video::EMF_LIGHTING, false);
}
Now we create another node, movable using a scene node animator. Scene node animators modify scene nodes and can be attached to any scene node like mesh scene nodes, billboards, lights and even camera scene nodes. Scene node animators are not only able to modify the position of a scene node, they can also animate the textures of an object for example. We create a cube scene node and attach a 'fly circle' scene node animator to it, letting this node fly around our sphere scene node.
scene::ISceneNode* n = smgr->addCubeSceneNode();
if (n)
{
n->setMaterialTexture(0, driver->getTexture("../../media/t351sml.jpg"));
n->setMaterialFlag(video::EMF_LIGHTING, false);
scene::ISceneNodeAnimator* anim =
smgr->createFlyCircleAnimator(core::vector3df(0,0,30), 20.0f);
if (anim)
{
n->addAnimator(anim);
anim->drop();
}
}
The last scene node we add to show possibilities of scene node animators is a b3d model, which uses a 'fly straight' animator to run between to points.
scene::IAnimatedMeshSceneNode* anms =
smgr->addAnimatedMeshSceneNode(smgr->getMesh("../../media/ninja.b3d"));
if (anms)
{
scene::ISceneNodeAnimator* anim =
smgr->createFlyStraightAnimator(core::vector3df(100,0,60),
core::vector3df(-100,0,60), 3500, true);
if (anim)
{
anms->addAnimator(anim);
anim->drop();
}
To make the model look right we disable lighting, set the frames between which the animation should loop, rotate the model around 180 degrees, and adjust the animation speed and the texture. To set the right animation (frames and speed), we would also be able to just call "anms->setMD2Animation(scene::EMAT_RUN)" for the 'run' animation instead of "setFrameLoop" and "setAnimationSpeed", but this only works with MD2 animations, and so you know how to start other animations. But a good advice is to not use hardcoded frame-numbers...
anms->setMaterialFlag(video::EMF_LIGHTING, false);
anms->setFrameLoop(0, 13);
anms->setAnimationSpeed(15);
anms->setScale(core::vector3df(2.f,2.f,2.f));
anms->setRotation(core::vector3df(0,-90,0));
}
To be able to look at and move around in this scene, we create a first person shooter style camera and make the mouse cursor invisible.
smgr->addCameraSceneNodeFPS();
device->getCursorControl()->setVisible(false);
Add a colorful irrlicht logo
device->getGUIEnvironment()->addImage(
driver->getTexture("../../media/irrlichtlogoalpha2.tga"),
core::position2d<s32>(10,20));
gui::IGUIStaticText* diagnostics = device->getGUIEnvironment()->addStaticText(
L"", core::rect<s32>(10, 10, 400, 20));
diagnostics->setOverrideColor(video::SColor(255, 255, 255, 0));
We have done everything, so lets draw it. We also write the current frames per second and the name of the driver to the caption of the window.
int lastFPS = -1;
u32 then = device->getTimer()->getTime();
const f32 MOVEMENT_SPEED = 5.f;
while(device->run())
{
const u32 now = device->getTimer()->getTime();
const f32 frameDeltaTime = (f32)(now - then) / 1000.f;
then = now;
Check if keys W, S, A or D are being held down, and move the sphere node around respectively.
core::vector3df nodePosition = node->getPosition();
if(receiver.IsKeyDown(irr::KEY_KEY_W))
nodePosition.Y += MOVEMENT_SPEED * frameDeltaTime;
else if(receiver.IsKeyDown(irr::KEY_KEY_S))
nodePosition.Y -= MOVEMENT_SPEED * frameDeltaTime;
if(receiver.IsKeyDown(irr::KEY_KEY_A))
nodePosition.X -= MOVEMENT_SPEED * frameDeltaTime;
else if(receiver.IsKeyDown(irr::KEY_KEY_D))
nodePosition.X += MOVEMENT_SPEED * frameDeltaTime;
node->setPosition(nodePosition);
driver->beginScene(true, true, video::SColor(255,113,113,133));
smgr->drawAll();
device->getGUIEnvironment()->drawAll();
driver->endScene();
int fps = driver->getFPS();
if (lastFPS != fps)
{
core::stringw tmp(L"Movement Example - Irrlicht Engine [");
tmp += driver->getName();
tmp += L"] fps: ";
tmp += fps;
device->setWindowCaption(tmp.c_str());
lastFPS = fps;
}
}
In the end, delete the Irrlicht device.
device->drop();
return 0;
}
That's it. Compile and play around with the program.
+
This tutorial shows how to move and animate SceneNodes. The basic concept of SceneNodeAnimators is shown as well as manual movement of nodes using the keyboard. We'll demonstrate framerate independent movement, which means moving by an amount dependent on the duration of the last run of the Irrlicht loop.
+
Example 19.MouseAndJoystick shows how to handle input other than the keyboard.
+
As always, include the header files, use the irr namespace, and tell the linker to link with the .lib file.
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
#include <irrlicht.h>
#include "driverChoice.h"
#include "exampleHelper.h"
using namespace irr;
To receive events like mouse and keyboard input, or GUI events like "button has been clicked", we need an object which is derived from the irr::IEventReceiver object. There is only one method to override: irr::IEventReceiver::OnEvent(). This method will be called by the engine once when an event happens. What we really want to know is whether a key is being held down, and so we will remember the current state of each key.
class MyEventReceiver : public IEventReceiver
{
public:
virtual bool OnEvent(const SEvent& event)
{
if (event.EventType == irr::EET_KEY_INPUT_EVENT)
KeyIsDown[event.KeyInput.Key] = event.KeyInput.PressedDown;
Always return false by default. If you return true you tell the engine that you handled this event completely and that Irrlicht should not process it any further. So for example if you return true for all EET_KEY_INPUT_EVENT events then Irrlicht would not pass on key-events to its GUI system.
return false;
}
virtual bool IsKeyDown(EKEY_CODE keyCode) const
{
return KeyIsDown[keyCode];
}
MyEventReceiver()
{
for (u32 i=0; i<KEY_KEY_CODES_COUNT; ++i)
KeyIsDown[i] = false;
}
private:
bool KeyIsDown[KEY_KEY_CODES_COUNT];
};
The event receiver for keeping track of the pressed keys is ready; the actual responses will be made inside the render loop, right before drawing the scene. So let's create an irr::IrrlichtDevice and the scene node we want to move. We also create some additional scene nodes to show different possibilities to move and animate scene nodes.
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
Create the event receiver. Take care that the pointer to it stays valid as long as the IrrlichtDevice uses it. Event receivers are not reference counted.
MyEventReceiver receiver;
IrrlichtDevice* device = createDevice(driverType,
core::dimension2d<u32>(640, 480), 16, false, false, false, &receiver);
if (device == 0)
return 1;
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
const io::path mediaPath = getExampleMediaPath();
Create the node which will be moved with the WSAD keys. We create a sphere node, which is a built-in geometry primitive. We place the node at (0,0,30) and assign a texture to it to let it look a little bit more interesting. Because we have no dynamic lights in this scene we disable lighting for each model (otherwise the models would be black).
scene::ISceneNode * sphereNode = smgr->addSphereSceneNode();
if (sphereNode)
{
sphereNode->setPosition(core::vector3df(0,0,30));
sphereNode->setMaterialTexture(0, driver->getTexture(mediaPath + "wall.bmp"));
sphereNode->setMaterialFlag(video::EMF_LIGHTING, false);
}
Now we create another node, movable using a scene node animator. Scene node animators modify scene nodes and can be attached to any scene node like mesh scene nodes, billboards, lights and even camera scene nodes. Scene node animators are not only able to modify the position of a scene node, they can also animate the textures of an object for example. We create a cube scene node and attach a 'fly circle' scene node animator to it, letting this node fly around our sphere scene node.
scene::ISceneNode* cubeNode = smgr->addCubeSceneNode();
if (cubeNode)
{
cubeNode->setMaterialTexture(0, driver->getTexture(mediaPath + "t351sml.jpg"));
cubeNode->setMaterialFlag(video::EMF_LIGHTING, false);
scene::ISceneNodeAnimator* anim =
smgr->createFlyCircleAnimator(core::vector3df(0,0,30), 20.0f);
if (anim)
{
cubeNode->addAnimator(anim);
anim->drop();
}
}
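The texture animation mentioned above is not used in this example; as a hedged sketch, it would look roughly like this (the frame texture names are made up for illustration and are not part of the media folder):
core::array<video::ITexture*> textures;
textures.push_back(driver->getTexture(mediaPath + "frame0.jpg")); // hypothetical texture name
textures.push_back(driver->getTexture(mediaPath + "frame1.jpg")); // hypothetical texture name
scene::ISceneNodeAnimator* texAnim = smgr->createTextureAnimator(textures, 250);
if (cubeNode && texAnim)
{
cubeNode->addAnimator(texAnim); // cycles the cube's texture every 250 ms
texAnim->drop();
}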
The last scene node we add is a b3d model of a walking ninja. It shows the use of a 'fly straight' animator to move the node between two points.
scene::IAnimatedMeshSceneNode* ninjaNode =
smgr->addAnimatedMeshSceneNode(smgr->getMesh(mediaPath + "ninja.b3d"));
if (ninjaNode)
{
scene::ISceneNodeAnimator* anim =
smgr->createFlyStraightAnimator(core::vector3df(100,0,60),
core::vector3df(-100,0,60), 3500, true);
if (anim)
{
ninjaNode->addAnimator(anim);
anim->drop();
}
To make the model look right we disable lighting, set the frames between which the animation should loop, rotate the model around 180 degrees, and adjust the animation speed and the texture. To set the correct animation (frames and speed), we would also be able to just call "ninjaNode->setMD2Animation(scene::EMAT_RUN)" for the 'run' animation instead of "setFrameLoop" and "setAnimationSpeed", but that only works with MD2 animations, while setFrameLoop and setAnimationSpeed can also be used for other formats. For MD2 it's usually good advice not to use hardcoded frame-numbers...
ninjaNode->setMaterialFlag(video::EMF_LIGHTING, false);
ninjaNode->setFrameLoop(0, 13);
ninjaNode->setAnimationSpeed(15);
ninjaNode->setScale(core::vector3df(2.f,2.f,2.f));
ninjaNode->setRotation(core::vector3df(0,-90,0));
}
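For comparison, a hedged sketch of the MD2 route mentioned above, using the sydney.md2 model that ships with the other examples (this node is not actually added in this tutorial):
scene::IAnimatedMeshSceneNode* md2Node =
smgr->addAnimatedMeshSceneNode(smgr->getMesh(mediaPath + "sydney.md2"));
if (md2Node)
{
md2Node->setMaterialFlag(video::EMF_LIGHTING, false);
md2Node->setMD2Animation(scene::EMAT_RUN); // named MD2 animation instead of hardcoded frame numbers
}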
To be able to look at and move around in this scene, we create a first person shooter style camera and make the mouse cursor invisible.
smgr->addCameraSceneNodeFPS();
device->getCursorControl()->setVisible(false);
Add a colorful Irrlicht logo.
device->getGUIEnvironment()->addImage(
driver->getTexture(mediaPath + "irrlichtlogoalpha2.tga"),
core::position2d<s32>(10,20));
Let's draw the scene and also write the current frames per second and the name of the driver to the caption of the window.
int lastFPS = -1;
u32 then = device->getTimer()->getTime();
const f32 MOVEMENT_SPEED = 5.f;
while(device->run())
{
const u32 now = device->getTimer()->getTime();
const f32 frameDeltaTime = (f32)(now - then) / 1000.f;
then = now;
Check if keys W, S, A or D are being held down, and move the sphere node accordingly.
core::vector3df nodePosition = sphereNode->getPosition();
if(receiver.IsKeyDown(irr::KEY_KEY_W))
nodePosition.Y += MOVEMENT_SPEED * frameDeltaTime;
else if(receiver.IsKeyDown(irr::KEY_KEY_S))
nodePosition.Y -= MOVEMENT_SPEED * frameDeltaTime;
if(receiver.IsKeyDown(irr::KEY_KEY_A))
nodePosition.X -= MOVEMENT_SPEED * frameDeltaTime;
else if(receiver.IsKeyDown(irr::KEY_KEY_D))
nodePosition.X += MOVEMENT_SPEED * frameDeltaTime;
sphereNode->setPosition(nodePosition);
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, video::SColor(255,113,113,133));
smgr->drawAll();
device->getGUIEnvironment()->drawAll();
driver->endScene();
int fps = driver->getFPS();
if (lastFPS != fps)
{
core::stringw tmp(L"Movement Example - Irrlicht Engine [");
tmp += driver->getName();
tmp += L"] fps: ";
tmp += fps;
device->setWindowCaption(tmp.c_str());
lastFPS = fps;
}
}
In the end, delete the Irrlicht device.
device->drop();
return 0;
}
That's it. Compile and play around with the program.
diff --git a/examples/05.UserInterface/tutorial.html b/examples/05.UserInterface/tutorial.html
index 4c573887..5b048a17 100644
--- a/examples/05.UserInterface/tutorial.html
+++ b/examples/05.UserInterface/tutorial.html
@@ -195,10 +195,10 @@ tr.heading h2 {
-
+
This tutorial shows how to use the built in User Interface of the Irrlicht Engine. It will give a brief overview and show how to create and use windows, buttons, scroll bars, static texts, and list boxes.
-
As always, we include the header files, and use the irrlicht namespaces. We also store a pointer to the Irrlicht device, a counter variable for changing the creation position of a window, and a pointer to a listbox.
#include <irrlicht.h>
#include "driverChoice.h"
using namespace irr;
using namespace core;
using namespace scene;
using namespace video;
using namespace io;
using namespace gui;
#ifdef _IRR_WINDOWS_
#pragma comment(lib, "Irrlicht.lib")
#endif
struct SAppContext
{
IrrlichtDevice *device;
s32 counter;
IGUIListBox* listbox;
};
enum
{
GUI_ID_QUIT_BUTTON = 101,
GUI_ID_NEW_WINDOW_BUTTON,
GUI_ID_FILE_OPEN_BUTTON,
GUI_ID_TRANSPARENCY_SCROLL_BAR
};
Set the skin transparency by changing the alpha values of all skin-colors
void setSkinTransparency(s32 alpha, irr::gui::IGUISkin * skin)
{
for (s32 i=0; i<irr::gui::EGDC_COUNT ; ++i)
{
video::SColor col = skin->getColor((EGUI_DEFAULT_COLOR)i);
col.setAlpha(alpha);
skin->setColor((EGUI_DEFAULT_COLOR)i, col);
}
}
The Event Receiver is not only capable of getting keyboard and mouse input events, but also events of the graphical user interface (gui). There are events for almost everything: Button click, Listbox selection change, events that say that a element was hovered and so on. To be able to react to some of these events, we create an event receiver. We only react to gui events, and if it's such an event, we get the id of the caller (the gui element which caused the event) and get the pointer to the gui environment.
class MyEventReceiver : public IEventReceiver
{
public:
MyEventReceiver(SAppContext & context) : Context(context) { }
virtual bool OnEvent(const SEvent& event)
{
if (event.EventType == EET_GUI_EVENT)
{
s32 id = event.GUIEvent.Caller->getID();
IGUIEnvironment* env = Context.device->getGUIEnvironment();
switch(event.GUIEvent.EventType)
{
If a scrollbar changed its scroll position, and it is 'our' scrollbar (the one with id GUI_ID_TRANSPARENCY_SCROLL_BAR), then we change the transparency of all gui elements. This is a very easy task: There is a skin object, in which all color settings are stored. We simply go through all colors stored in the skin and change their alpha value.
case EGET_SCROLL_BAR_CHANGED:
if (id == GUI_ID_TRANSPARENCY_SCROLL_BAR)
{
s32 pos = ((IGUIScrollBar*)event.GUIEvent.Caller)->getPos();
setSkinTransparency(pos, env->getSkin());
}
break;
If a button was clicked, it could be one of 'our' three buttons. If it is the first, we shut down the engine. If it is the second, we create a little window with some text on it. We also add a string to the list box to log what happened. And if it is the third button, we create a file open dialog, and add also this as string to the list box. That's all for the event receiver.
case EGET_BUTTON_CLICKED:
switch(id)
{
case GUI_ID_QUIT_BUTTON:
Context.device->closeDevice();
return true;
case GUI_ID_NEW_WINDOW_BUTTON:
{
Context.listbox->addItem(L"Window created");
Context.counter += 30;
if (Context.counter > 200)
Context.counter = 0;
IGUIWindow* window = env->addWindow(
rect<s32>(100 + Context.counter, 100 + Context.counter, 300 + Context.counter, 200 + Context.counter),
false,
L"Test window");
env->addStaticText(L"Please close me",
rect<s32>(35,35,140,50),
true,
false,
window);
}
return true;
case GUI_ID_FILE_OPEN_BUTTON:
Context.listbox->addItem(L"File open");
env->addFileOpenDialog(L"Please choose a file.", true, 0, -1, true);
return true;
default:
return false;
}
break;
case EGET_FILE_SELECTED:
{
IGUIFileOpenDialog* dialog =
(IGUIFileOpenDialog*)event.GUIEvent.Caller;
Context.listbox->addItem(dialog->getFileName());
}
break;
default:
break;
}
}
return false;
}
private:
SAppContext & Context;
};
Ok, now for the more interesting part. First, create the Irrlicht device. As in some examples before, we ask the user which driver he wants to use for this example:
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
IrrlichtDevice * device = createDevice(driverType, core::dimension2d<u32>(640, 480));
if (device == 0)
return 1;
The creation was successful, now we set the event receiver and store pointers to the driver and to the gui environment.
device->setWindowCaption(L"Irrlicht Engine - User Interface Demo");
device->setResizable(true);
video::IVideoDriver* driver = device->getVideoDriver();
IGUIEnvironment* env = device->getGUIEnvironment();
To make the font a little bit nicer, we load an external font and set it as the new default font in the skin. To keep the standard font for tool tip text, we set it to the built-in font.
IGUISkin* skin = env->getSkin();
IGUIFont* font = env->getFont("../../media/fonthaettenschweiler.bmp");
if (font)
skin->setFont(font);
skin->setFont(env->getBuiltInFont(), EGDF_TOOLTIP);
We add three buttons. The first one closes the engine. The second creates a window and the third opens a file open dialog. The third parameter is the id of the button, with which we can easily identify the button in the event receiver.
env->addButton(rect<s32>(10,240,110,240 + 32), 0, GUI_ID_QUIT_BUTTON,
L"Quit", L"Exits Program");
env->addButton(rect<s32>(10,280,110,280 + 32), 0, GUI_ID_NEW_WINDOW_BUTTON,
L"New Window", L"Launches a new Window");
env->addButton(rect<s32>(10,320,110,320 + 32), 0, GUI_ID_FILE_OPEN_BUTTON,
L"File Open", L"Opens a file");
Now, we add a static text and a scrollbar, which modifies the transparency of all gui elements. We set the maximum value of the scrollbar to 255, because that's the maximal value for a color value. Then we create an other static text and a list box.
env->addStaticText(L"Transparent Control:", rect<s32>(150,20,350,40), true);
IGUIScrollBar* scrollbar = env->addScrollBar(true,
rect<s32>(150, 45, 350, 60), 0, GUI_ID_TRANSPARENCY_SCROLL_BAR);
scrollbar->setMax(255);
scrollbar->setPos(255);
setSkinTransparency( scrollbar->getPos(), env->getSkin());
scrollbar->setPos(env->getSkin()->getColor(EGDC_WINDOW).getAlpha());
env->addStaticText(L"Logging ListBox:", rect<s32>(50,110,250,130), true);
IGUIListBox * listbox = env->addListBox(rect<s32>(50, 140, 250, 210));
env->addEditBox(L"Editable Text", rect<s32>(350, 80, 550, 100));
SAppContext context;
context.device = device;
context.counter = 0;
context.listbox = listbox;
MyEventReceiver receiver(context);
device->setEventReceiver(&receiver);
And at last, we create a nice Irrlicht Engine logo in the top left corner.
env->addImage(driver->getTexture("../../media/irrlichtlogo2.png"),
position2d<int>(10,10));
That's all, we only have to draw everything.
while(device->run() && driver)
if (device->isWindowActive())
{
driver->beginScene(true, true, SColor(0,200,200,200));
env->drawAll();
driver->endScene();
}
device->drop();
return 0;
}
+As always, we include the header files, and use the irrlicht namespaces. We also store a pointer to the Irrlicht device, a counter variable for changing the creation position of a window, and a pointer to a listbox.
#include <irrlicht.h>
#include "driverChoice.h"
#include "exampleHelper.h"
using namespace irr;
using namespace core;
using namespace scene;
using namespace video;
using namespace io;
using namespace gui;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
struct SAppContext
{
IrrlichtDevice *device;
s32 counter;
IGUIListBox* listbox;
};
enum
{
GUI_ID_QUIT_BUTTON = 101,
GUI_ID_NEW_WINDOW_BUTTON,
GUI_ID_FILE_OPEN_BUTTON,
GUI_ID_TRANSPARENCY_SCROLL_BAR
};
Set the skin transparency by changing the alpha values of all skin-colors
void setSkinTransparency(s32 alpha, irr::gui::IGUISkin * skin)
{
for (s32 i=0; i<irr::gui::EGDC_COUNT ; ++i)
{
video::SColor col = skin->getColor((EGUI_DEFAULT_COLOR)i);
col.setAlpha(alpha);
skin->setColor((EGUI_DEFAULT_COLOR)i, col);
}
}
The Event Receiver is not only capable of getting keyboard and mouse input events, but also events of the graphical user interface (gui). There are events for almost everything: button click, listbox selection change, events that say that an element was hovered and so on. To be able to react to some of these events, we create an event receiver. We only react to gui events, and if it's such an event, we get the id of the caller (the gui element which caused the event) and get a pointer to the gui environment.
class MyEventReceiver : public IEventReceiver
{
public:
MyEventReceiver(SAppContext & context) : Context(context) { }
virtual bool OnEvent(const SEvent& event)
{
if (event.EventType == EET_GUI_EVENT)
{
s32 id = event.GUIEvent.Caller->getID();
IGUIEnvironment* env = Context.device->getGUIEnvironment();
switch(event.GUIEvent.EventType)
{
If a scrollbar changed its scroll position, and it is 'our' scrollbar (the one with id GUI_ID_TRANSPARENCY_SCROLL_BAR), then we change the transparency of all gui elements. This is an easy task: There is a skin object, in which all color settings are stored. We simply go through all colors stored in the skin and change their alpha value.
case EGET_SCROLL_BAR_CHANGED:
if (id == GUI_ID_TRANSPARENCY_SCROLL_BAR)
{
s32 pos = ((IGUIScrollBar*)event.GUIEvent.Caller)->getPos();
setSkinTransparency(pos, env->getSkin());
}
break;
If a button was clicked, it could be one of 'our' three buttons. If it is the first, we shut down the engine. If it is the second, we create a little window with some text on it. We also add a string to the list box to log what happened. And if it is the third button, we create a file open dialog, and also add this as a string to the list box. That's all for the event receiver.
case EGET_BUTTON_CLICKED:
switch(id)
{
case GUI_ID_QUIT_BUTTON:
Context.device->closeDevice();
return true;
case GUI_ID_NEW_WINDOW_BUTTON:
{
Context.listbox->addItem(L"Window created");
Context.counter += 30;
if (Context.counter > 200)
Context.counter = 0;
IGUIWindow* window = env->addWindow(
rect<s32>(100 + Context.counter, 100 + Context.counter, 300 + Context.counter, 200 + Context.counter),
false,
L"Test window");
env->addStaticText(L"Please close me",
rect<s32>(35,35,140,50),
true,
false,
window);
}
return true;
case GUI_ID_FILE_OPEN_BUTTON:
Context.listbox->addItem(L"File open");
env->addFileOpenDialog(L"Please choose a file.", true, 0, -1, true);
return true;
default:
return false;
}
break;
case EGET_FILE_SELECTED:
{
IGUIFileOpenDialog* dialog =
(IGUIFileOpenDialog*)event.GUIEvent.Caller;
Context.listbox->addItem(L"EGET_FILE_SELECTED");
Context.listbox->addItem(dialog->getFileName());
}
break;
case EGET_DIRECTORY_SELECTED:
{
IGUIFileOpenDialog* dialog =
(IGUIFileOpenDialog*)event.GUIEvent.Caller;
Context.listbox->addItem(L"EGET_DIRECTORY_SELECTED");
Context.listbox->addItem(dialog->getDirectoryNameW());
}
break;
default:
break;
}
}
return false;
}
private:
SAppContext & Context;
};
OK, now for the more interesting part. First, create the Irrlicht device. As in some examples before, we ask the user which driver he wants to use for this example.
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
IrrlichtDevice * device = createDevice(driverType, core::dimension2d<u32>(640, 480));
if (device == 0)
return 1;
The creation was successful, now we set the event receiver and store pointers to the driver and to the gui environment.
device->setWindowCaption(L"Irrlicht Engine - User Interface Demo");
device->setResizable(true);
video::IVideoDriver* driver = device->getVideoDriver();
IGUIEnvironment* env = device->getGUIEnvironment();
const io::path mediaPath = getExampleMediaPath();
To make the font a little bit nicer, we load an external font and set it as the new default font in the skin. To keep the standard font for tool tip text, we set it to the built-in font.
IGUISkin* skin = env->getSkin();
IGUIFont* font = env->getFont(mediaPath + "fonthaettenschweiler.bmp");
if (font)
skin->setFont(font);
skin->setFont(env->getBuiltInFont(), EGDF_TOOLTIP);
We add three buttons. The first one closes the engine. The second creates a window and the third opens a file open dialog. The third parameter is the id of the button, with which we can easily identify the button in the event receiver.
env->addButton(rect<s32>(10,240,110,240 + 32), 0, GUI_ID_QUIT_BUTTON,
L"Quit", L"Exits Program");
env->addButton(rect<s32>(10,280,110,280 + 32), 0, GUI_ID_NEW_WINDOW_BUTTON,
L"New Window", L"Launches a new Window");
env->addButton(rect<s32>(10,320,110,320 + 32), 0, GUI_ID_FILE_OPEN_BUTTON,
L"File Open", L"Opens a file");
Now, we add a static text and a scrollbar, which modifies the transparency of all gui elements. We set the maximum value of the scrollbar to 255, because that's the maximal value for a color value. Then we create another static text and a list box.
env->addStaticText(L"Transparent Control:", rect<s32>(150,20,350,40), true);
IGUIScrollBar* scrollbar = env->addScrollBar(true,
rect<s32>(150, 45, 350, 60), 0, GUI_ID_TRANSPARENCY_SCROLL_BAR);
scrollbar->setMax(255);
scrollbar->setPos(255);
setSkinTransparency( scrollbar->getPos(), env->getSkin());
scrollbar->setPos(env->getSkin()->getColor(EGDC_WINDOW).getAlpha());
env->addStaticText(L"Logging ListBox:", rect<s32>(10,110,350,130), true);
IGUIListBox * listbox = env->addListBox(rect<s32>(10, 140, 350, 210));
env->addEditBox(L"Editable Text", rect<s32>(350, 80, 550, 100));
SAppContext context;
context.device = device;
context.counter = 0;
context.listbox = listbox;
MyEventReceiver receiver(context);
device->setEventReceiver(&receiver);
And at last, we create a nice Irrlicht Engine logo in the top left corner.
env->addImage(driver->getTexture(mediaPath + "irrlichtlogo3.png"),
position2d<int>(10,10));
That's all, we only have to draw everything.
while(device->run() && driver)
if (device->isWindowActive())
{
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, SColor(0,200,200,200));
env->drawAll();
driver->endScene();
}
device->drop();
return 0;
}
diff --git a/examples/06.2DGraphics/tutorial.html b/examples/06.2DGraphics/tutorial.html
index 9a8ace67..423c9c25 100644
--- a/examples/06.2DGraphics/tutorial.html
+++ b/examples/06.2DGraphics/tutorial.html
@@ -195,11 +195,11 @@ tr.heading h2 {
-
+
-
This Tutorial shows how to do 2d graphics with the Irrlicht Engine. It shows how to draw images, keycolor based sprites, transparent rectangles, and different fonts. You may consider this useful if you want to make a 2d game with the engine, or if you want to draw a cool interface or head up display for your 3d game.
-
As always, I include the header files, use the irr namespace, and tell the linker to link with the .lib file.
#include <irrlicht.h>
#include "driverChoice.h"
using namespace irr;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
At first, we let the user select the driver type, then start up the engine, set a caption, and get a pointer to the video driver.
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
IrrlichtDevice *device = createDevice(driverType,
core::dimension2d<u32>(512, 384));
if (device == 0)
return 1;
device->setWindowCaption(L"Irrlicht Engine - 2D Graphics Demo");
video::IVideoDriver* driver = device->getVideoDriver();
All 2d graphics in this example are put together into one texture, 2ddemo.png. Because we want to draw colorkey based sprites, we need to load this texture and tell the engine, which part of it should be transparent based on a colorkey.
-
In this example, we don't tell it the color directly, we just say "Hey Irrlicht Engine, you'll find the color I want at position (0,0) on the texture.". Instead, it would be also possible to call driver->makeColorKeyTexture(images, video::SColor(0,0,0,0)), to make e.g. all black pixels transparent. Please note that makeColorKeyTexture just creates an alpha channel based on the color.
video::ITexture* images = driver->getTexture("../../media/2ddemo.png");
driver->makeColorKeyTexture(images, core::position2d<s32>(0,0));
To be able to draw some text with two different fonts, we first load them. Ok, we load just one. As the first font we just use the default font which is built into the engine. Also, we define two rectangles which specify the position of the images of the red imps (little flying creatures) in the texture.
gui::IGUIFont* font = device->getGUIEnvironment()->getBuiltInFont();
gui::IGUIFont* font2 =
device->getGUIEnvironment()->getFont("../../media/fonthaettenschweiler.bmp");
core::rect<s32> imp1(349,15,385,78);
core::rect<s32> imp2(387,15,423,78);
Prepare a nicely filtering 2d render mode for special cases.
driver->getMaterial2D().TextureLayer[0].BilinearFilter=true;
driver->getMaterial2D().AntiAliasing=video::EAAM_FULL_BASIC;
Everything is prepared, now we can draw everything in the draw loop, between the begin scene and end scene calls. In this example, we are just doing 2d graphics, but it would be no problem to mix them with 3d graphics. Just try it out, and draw some 3d vertices or set up a scene with the scene manager and draw it.
while(device->run() && driver)
{
if (device->isWindowActive())
{
u32 time = device->getTimer()->getTime();
driver->beginScene(true, true, video::SColor(255,120,102,136));
First, we draw 3 sprites, using the alpha channel we created with makeColorKeyTexture. The last parameter specifies that the drawing method should use this alpha channel. The last-but-one parameter specifies a color, with which the sprite should be colored. (255,255,255,255) is full white, so the sprite will look like the original. The third sprite is drawn with the red channel modulated based on the time.
driver->draw2DImage(images, core::position2d<s32>(50,50),
core::rect<s32>(0,0,342,224), 0,
video::SColor(255,255,255,255), true);
driver->draw2DImage(images, core::position2d<s32>(164,125),
(time/500 % 2) ? imp1 : imp2, 0,
video::SColor(255,255,255,255), true);
driver->draw2DImage(images, core::position2d<s32>(270,105),
(time/500 % 2) ? imp1 : imp2, 0,
video::SColor(255,(time) % 255,255,255), true);
Drawing text is really simple. The code should be self explanatory.
if (font)
font->draw(L"This demo shows that Irrlicht is also capable of drawing 2D graphics.",
core::rect<s32>(130,10,300,50),
video::SColor(255,255,255,255));
if (font2)
font2->draw(L"Also mixing with 3d graphics is possible.",
core::rect<s32>(130,20,300,60),
video::SColor(255,time % 255,time % 255,255));
Next, we draw the Irrlicht Engine logo (without using a color or an alpha channel). Since we slightly scale the image we use the prepared filter mode.
driver->enableMaterial2D();
driver->draw2DImage(images, core::rect<s32>(10,10,108,48),
core::rect<s32>(354,87,442,118));
driver->enableMaterial2D(false);
Finally draw a half-transparent rect under the mouse cursor.
core::position2d<s32> m = device->getCursorControl()->getPosition();
driver->draw2DRectangle(video::SColor(100,255,255,255),
core::rect<s32>(m.X-20, m.Y-20, m.X+20, m.Y+20));
driver->endScene();
}
}
device->drop();
return 0;
}
That's all. I hope it was not too difficult.
+
This tutorial shows how to do 2d graphics with the Irrlicht Engine. It shows how to draw images, keycolor based sprites, transparent rectangles, and different fonts. You may consider this useful if you want to make a 2d game with the engine, or if you want to draw a cool interface or heads-up display for your 3d game.
+
As always, I include the header files, use the irr namespace, and tell the linker to link with the .lib file.
#include <irrlicht.h>
#include "driverChoice.h"
#include "exampleHelper.h"
using namespace irr;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
At first, we let the user select the driver type, then start up the engine, set a caption, and get a pointer to the video driver.
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
IrrlichtDevice *device = createDevice(driverType,
core::dimension2d<u32>(512, 384));
if (device == 0)
return 1;
device->setWindowCaption(L"Irrlicht Engine - 2D Graphics Demo");
video::IVideoDriver* driver = device->getVideoDriver();
const io::path mediaPath = getExampleMediaPath();
All 2d graphics in this example are put together into one texture, 2ddemo.png. Because we want to draw colorkey based sprites, we need to load this texture and tell the engine which part of it should be transparent based on a colorkey.
+
In this example, we don't tell it the color directly; we just say "Hey Irrlicht Engine, you'll find the color I want at position (0,0) on the texture." Instead, it would also be possible to call driver->makeColorKeyTexture(images, video::SColor(0,0,0,0)) to make e.g. all black pixels transparent. Please note that makeColorKeyTexture just creates an alpha channel based on the color.
video::ITexture* images = driver->getTexture(mediaPath + "2ddemo.png");
driver->makeColorKeyTexture(images, core::position2d<s32>(0,0));
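The color-based alternative mentioned above would be a single call; as a sketch:
driver->makeColorKeyTexture(images, video::SColor(0,0,0,0)); // make all pixels of exactly this color transparent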
To be able to draw some text with two different fonts, we first load them. OK, we load just one. As the first font we just use the default font which is built into the engine. Also, we define two rectangles which specify the position of the images of the red imps (little flying creatures) in the texture.
gui::IGUIFont* font = device->getGUIEnvironment()->getBuiltInFont();
gui::IGUIFont* font2 =
device->getGUIEnvironment()->getFont(mediaPath + "fonthaettenschweiler.bmp");
core::rect<s32> imp1(349,15,385,78);
core::rect<s32> imp2(387,15,423,78);
Prepare a nicely filtering 2d render mode for special cases.
driver->getMaterial2D().TextureLayer[0].BilinearFilter=true;
driver->getMaterial2D().AntiAliasing=video::EAAM_FULL_BASIC;
Everything is prepared; now we can draw everything in the draw loop, between the begin scene and end scene calls. In this example, we are just doing 2d graphics, but it would be no problem to mix them with 3d graphics. Just try it out, and draw some 3d vertices or set up a scene with the scene manager and draw it.
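As a hedged aside before the loop (not done in this example), mixing in 3d would roughly mean adding a camera once during setup, something like:
scene::ISceneManager* smgr = device->getSceneManager(); // not otherwise used in this tutorial
smgr->addCameraSceneNode(0, core::vector3df(0,0,-30), core::vector3df(0,0,0));
and then calling smgr->drawAll() each frame between beginScene() and endScene(), before the 2d drawing. Now for the actual loop of this example.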
while(device->run() && driver)
{
if (device->isWindowActive())
{
u32 time = device->getTimer()->getTime();
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, video::SColor(255,120,102,136));
First, we draw 3 sprites, using the alpha channel we created with makeColorKeyTexture. The last parameter specifies that the drawing method should use this alpha channel. The last-but-one parameter specifies a color, with which the sprite should be colored. (255,255,255,255) is full white, so the sprite will look like the original. The third sprite is drawn with the red channel modulated based on the time.
driver->draw2DImage(images, core::position2d<s32>(50,50),
core::rect<s32>(0,0,342,224), 0,
video::SColor(255,255,255,255), true);
driver->draw2DImage(images, core::position2d<s32>(164,125),
(time/500 % 2) ? imp1 : imp2, 0,
video::SColor(255,255,255,255), true);
driver->draw2DImage(images, core::position2d<s32>(270,105),
(time/500 % 2) ? imp1 : imp2, 0,
video::SColor(255,(time) % 255,255,255), true);
Drawing text is really simple. The code should be self explanatory.
if (font)
font->draw(L"This demo shows that Irrlicht is also capable of drawing 2D graphics.",
core::rect<s32>(130,10,300,50),
video::SColor(255,255,255,255));
if (font2)
font2->draw(L"Also mixing with 3d graphics is possible.",
core::rect<s32>(130,20,300,60),
video::SColor(255,time % 255,time % 255,255));
Next, we draw the Irrlicht Engine logo (without using a color or an alpha channel). Since we slightly scale the image we use the prepared filter mode.
driver->enableMaterial2D();
driver->draw2DImage(images, core::rect<s32>(10,10,108,48),
core::rect<s32>(354,87,442,118));
driver->enableMaterial2D(false);
Finally draw a half-transparent rect under the mouse cursor.
core::position2d<s32> m = device->getCursorControl()->getPosition();
driver->draw2DRectangle(video::SColor(100,255,255,255),
core::rect<s32>(m.X-20, m.Y-20, m.X+20, m.Y+20));
driver->endScene();
}
}
device->drop();
return 0;
}
That's all. I hope it was not too difficult.
diff --git a/examples/07.Collision/tutorial.html b/examples/07.Collision/tutorial.html
index 5a9a3bdf..03fa80af 100644
--- a/examples/07.Collision/tutorial.html
+++ b/examples/07.Collision/tutorial.html
@@ -195,11 +195,11 @@ tr.heading h2 {
-
+
We will describe 2 methods: Automatic collision detection for moving through 3d worlds with stair climbing and sliding, and manual scene node and triangle picking using a ray. In this case, we will use a ray coming out from the camera, but you can use any ray.
-
To start, we take the program from tutorial 2, which loads and displays a quake 3 level. We will use the level to walk in it and to pick triangles from. In addition we'll place 3 animated models into it for triangle picking. The following code starts up the engine and loads the level, as per tutorial 2.
#include <irrlicht.h>
#include "driverChoice.h"
using namespace irr;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
enum
{
ID_IsNotPickable = 0,
IDFlag_IsPickable = 1 << 0,
IDFlag_IsHighlightable = 1 << 1
};
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
IrrlichtDevice *device =
createDevice(driverType, core::dimension2d<u32>(640, 480), 16, false);
if (device == 0)
return 1;
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
device->getFileSystem()->addFileArchive("../../media/map-20kdm2.pk3");
scene::IAnimatedMesh* q3levelmesh = smgr->getMesh("20kdm2.bsp");
scene::IMeshSceneNode* q3node = 0;
if (q3levelmesh)
q3node = smgr->addOctreeSceneNode(q3levelmesh->getMesh(0), 0, IDFlag_IsPickable);
So far so good, we've loaded the quake 3 level like in tutorial 2. Now, here comes something different: We create a triangle selector. A triangle selector is a class which can fetch the triangles from scene nodes for doing different things with them, for example collision detection. There are different triangle selectors, and all can be created with the ISceneManager. In this example, we create an OctreeTriangleSelector, which optimizes the triangle output a little bit by reducing it like an octree. This is very useful for huge meshes like quake 3 levels. After we created the triangle selector, we attach it to the q3node. This is not necessary, but in this way, we do not need to care for the selector, for example dropping it after we do not need it anymore.
scene::ITriangleSelector* selector = 0;
if (q3node)
{
q3node->setPosition(core::vector3df(-1350,-130,-1400));
selector = smgr->createOctreeTriangleSelector(
q3node->getMesh(), q3node, 128);
q3node->setTriangleSelector(selector);
}
We add a first person shooter camera to the scene so that we can see and move in the quake 3 level like in tutorial 2. But this, time, we add a special animator to the camera: A Collision Response animator. This animator modifies the scene node to which it is attached to in order to prevent it moving through walls, and to add gravity to it. The only thing we have to tell the animator is how the world looks like, how big the scene node is, how much gravity to apply and so on. After the collision response animator is attached to the camera, we do not have to do anything more for collision detection, anything is done automatically. The rest of the collision detection code below is for picking. And please note another cool feature: The collision response animator can be attached also to all other scene nodes, not only to cameras. And it can be mixed with other scene node animators. In this way, collision detection and response in the Irrlicht engine is really easy.
-
Now we'll take a closer look on the parameters of createCollisionResponseAnimator(). The first parameter is the TriangleSelector, which specifies how the world, against collision detection is done looks like. The second parameter is the scene node, which is the object, which is affected by collision detection, in our case it is the camera. The third defines how big the object is, it is the radius of an ellipsoid. Try it out and change the radius to smaller values, the camera will be able to move closer to walls after this. The next parameter is the direction and speed of gravity. We'll set it to (0, -10, 0), which approximates to realistic gravity, assuming that our units are metres. You could set it to (0,0,0) to disable gravity. And the last value is just a translation: Without this, the ellipsoid with which collision detection is done would be around the camera, and the camera would be in the middle of the ellipsoid. But as human beings, we are used to have our eyes on top of the body, with which we collide with our world, not in the middle of it. So we place the scene node 50 units over the center of the ellipsoid with this parameter. And that's it, collision detection works now.
scene::ICameraSceneNode* camera =
smgr->addCameraSceneNodeFPS(0, 100.0f, .3f, ID_IsNotPickable, 0, 0, true, 3.f);
camera->setPosition(core::vector3df(50,50,-60));
camera->setTarget(core::vector3df(-70,30,-60));
if (selector)
{
scene::ISceneNodeAnimator* anim = smgr->createCollisionResponseAnimator(
selector, camera, core::vector3df(30,50,30),
core::vector3df(0,-10,0), core::vector3df(0,30,0));
selector->drop();
camera->addAnimator(anim);
anim->drop();
}
device->getCursorControl()->setVisible(false);
scene::IBillboardSceneNode * bill = smgr->addBillboardSceneNode();
bill->setMaterialType(video::EMT_TRANSPARENT_ADD_COLOR );
bill->setMaterialTexture(0, driver->getTexture("../../media/particle.bmp"));
bill->setMaterialFlag(video::EMF_LIGHTING, false);
bill->setMaterialFlag(video::EMF_ZBUFFER, false);
bill->setSize(core::dimension2d<f32>(20.0f, 20.0f));
bill->setID(ID_IsNotPickable);
Add 3 animated hominids, which we can pick using a ray-triangle intersection. They all animate quite slowly, to make it easier to see that accurate triangle selection is being performed.
scene::IAnimatedMeshSceneNode* node = 0;
video::SMaterial material;
node = smgr->addAnimatedMeshSceneNode(smgr->getMesh("../../media/faerie.md2"),
0, IDFlag_IsPickable | IDFlag_IsHighlightable);
node->setPosition(core::vector3df(-90,-15,-140));
node->setScale(core::vector3df(1.6f));
node->setMD2Animation(scene::EMAT_POINT);
node->setAnimationSpeed(20.f);
material.setTexture(0, driver->getTexture("../../media/faerie2.bmp"));
material.Lighting = true;
material.NormalizeNormals = true;
node->getMaterial(0) = material;
selector = smgr->createTriangleSelector(node);
node->setTriangleSelector(selector);
selector->drop();
node = smgr->addAnimatedMeshSceneNode(smgr->getMesh("../../media/ninja.b3d"),
0, IDFlag_IsPickable | IDFlag_IsHighlightable);
node->setScale(core::vector3df(10));
node->setPosition(core::vector3df(-75,-66,-80));
node->setRotation(core::vector3df(0,90,0));
node->setAnimationSpeed(8.f);
node->getMaterial(0).NormalizeNormals = true;
node->getMaterial(0).Lighting = true;
selector = smgr->createTriangleSelector(node);
node->setTriangleSelector(selector);
selector->drop();
node = smgr->addAnimatedMeshSceneNode(smgr->getMesh("../../media/dwarf.x"),
0, IDFlag_IsPickable | IDFlag_IsHighlightable);
node->setPosition(core::vector3df(-70,-66,-30));
node->setRotation(core::vector3df(0,-90,0));
node->setAnimationSpeed(20.f);
node->getMaterial(0).Lighting = true;
selector = smgr->createTriangleSelector(node);
node->setTriangleSelector(selector);
selector->drop();
node = smgr->addAnimatedMeshSceneNode(smgr->getMesh("../../media/yodan.mdl"),
0, IDFlag_IsPickable | IDFlag_IsHighlightable);
node->setPosition(core::vector3df(-90,-25,20));
node->setScale(core::vector3df(0.8f));
node->getMaterial(0).Lighting = true;
node->setAnimationSpeed(20.f);
selector = smgr->createTriangleSelector(node);
node->setTriangleSelector(selector);
selector->drop();
material.setTexture(0, 0);
material.Lighting = false;
scene::ILightSceneNode * light = smgr->addLightSceneNode(0, core::vector3df(-60,100,400),
video::SColorf(1.0f,1.0f,1.0f,1.0f), 600.0f);
light->setID(ID_IsNotPickable);
scene::ISceneNode* highlightedSceneNode = 0;
scene::ISceneCollisionManager* collMan = smgr->getSceneCollisionManager();
int lastFPS = -1;
material.Wireframe=true;
while(device->run())
if (device->isWindowActive())
{
driver->beginScene(true, true, 0);
smgr->drawAll();
if (highlightedSceneNode)
{
highlightedSceneNode->setMaterialFlag(video::EMF_LIGHTING, true);
highlightedSceneNode = 0;
}
core::line3d<f32> ray;
ray.start = camera->getPosition();
ray.end = ray.start + (camera->getTarget() - ray.start).normalize() * 1000.0f;
core::vector3df intersection;
core::triangle3df hitTriangle;
scene::ISceneNode * selectedSceneNode =
collMan->getSceneNodeAndCollisionPointFromRay(
ray,
intersection,
hitTriangle,
IDFlag_IsPickable,
0);
if(selectedSceneNode)
{
bill->setPosition(intersection);
driver->setTransform(video::ETS_WORLD, core::matrix4());
driver->setMaterial(material);
driver->draw3DTriangle(hitTriangle, video::SColor(0,255,0,0));
if((selectedSceneNode->getID() & IDFlag_IsHighlightable) == IDFlag_IsHighlightable)
{
highlightedSceneNode = selectedSceneNode;
highlightedSceneNode->setMaterialFlag(video::EMF_LIGHTING, false);
}
}
driver->endScene();
int fps = driver->getFPS();
if (lastFPS != fps)
{
core::stringw str = L"Collision detection example - Irrlicht Engine [";
str += driver->getName();
str += "] FPS:";
str += fps;
device->setWindowCaption(str.c_str());
lastFPS = fps;
}
}
device->drop();
return 0;
}
+To start, we take the program from tutorial 2, which loads and displays a quake 3 level. We will use the level to walk in it and to pick triangles from. In addition we'll place 3 animated models into it for triangle picking. The following code starts up the engine and loads the level, as per tutorial 2.
#include <irrlicht.h>
#include "driverChoice.h"
#include "exampleHelper.h"
using namespace irr;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
enum
{
ID_IsNotPickable = 0,
IDFlag_IsPickable = 1 << 0,
IDFlag_IsHighlightable = 1 << 1
};
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
IrrlichtDevice *device =
createDevice(driverType, core::dimension2d<u32>(640, 480), 16, false);
if (device == 0)
return 1;
If we want to receive information about the material of a hit triangle we have to get collisions per meshbuffer. The only disadvantage of this is that getting them per meshbuffer can be a little bit slower than per mesh, but usually that's not noticeable. If you set this to false you will no longer get material names in the title bar.
const bool separateMeshBuffers = true;
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
const io::path mediaPath = getExampleMediaPath();
device->getFileSystem()->addFileArchive(mediaPath + "map-20kdm2.pk3");
scene::IAnimatedMesh* q3levelmesh = smgr->getMesh("20kdm2.bsp");
scene::IMeshSceneNode* q3node = 0;
if (q3levelmesh)
q3node = smgr->addOctreeSceneNode(q3levelmesh->getMesh(0), 0, IDFlag_IsPickable);
So far so good, we've loaded the quake 3 level like in tutorial 2. Now, here comes something different: We create a triangle selector. A triangle selector is a class which can fetch the triangles from scene nodes for doing different things with them, for example collision detection. There are different triangle selectors, and all can be created with the ISceneManager. In this example, we create an OctreeTriangleSelector, which optimizes the triangle output a little bit by reducing it like an octree. This is very useful for huge meshes like quake 3 levels. After we created the triangle selector, we attach it to the q3node. This is not necessary, but in this way, we do not need to take care of the selector ourselves, for example dropping it once we no longer need it.
scene::ITriangleSelector* selector = 0;
if (q3node)
{
q3node->setPosition(core::vector3df(-1350,-130,-1400));
There is currently no way to split an octree by material. So if we need material info we have to create one octree per meshbuffer and put them together in a MetaTriangleSelector.
if ( separateMeshBuffers && q3node->getMesh()->getMeshBufferCount() > 1)
{
scene::IMetaTriangleSelector * metaSelector = smgr->createMetaTriangleSelector();
for ( irr::u32 m=0; m < q3node->getMesh()->getMeshBufferCount(); ++m )
{
scene::ITriangleSelector*
bufferSelector = smgr->createOctreeTriangleSelector(
q3node->getMesh()->getMeshBuffer(m), m, q3node);
if ( bufferSelector )
{
metaSelector->addTriangleSelector( bufferSelector );
bufferSelector->drop();
}
}
selector = metaSelector;
}
else
{
selector = smgr->createOctreeTriangleSelector(
q3node->getMesh(), q3node, 128);
}
q3node->setTriangleSelector(selector);
}
We add a first person shooter camera to the scene so that we can see and move in the quake 3 level like in tutorial 2. But this time, we add a special animator to the camera: a collision response animator. This animator modifies the scene node to which it is attached in order to prevent it from moving through walls and to add gravity to the node. The only things we have to tell the animator are what the world looks like, how big the scene node is, how much gravity to apply and so on. After the collision response animator is attached to the camera, we do not have to do anything else for collision detection, it's all done automatically. The rest of the collision detection code below is for picking. And please note another cool feature: The collision response animator can also be attached to all other scene nodes, not only to cameras (a small sketch follows the camera setup below). And it can be mixed with other scene node animators. In this way, collision detection and response in the Irrlicht engine is really easy.
+Now we'll take a closer look at the parameters of createCollisionResponseAnimator(). The first parameter is the TriangleSelector, which specifies what the world, against which collision detection is done, looks like. The second parameter is the scene node, which is the object which is affected by collision detection - in our case it is the camera. The third defines how big the object is, it is the radius of an ellipsoid. Try it out and change the radius to smaller values, the camera will be able to move closer to walls after this. The next parameter is the direction and speed of gravity. We'll set it to (0, -1000, 0), which approximates realistic gravity (depending on the units which are used in the scene model). You could set it to (0,0,0) to disable gravity. And the last value is just an offset: Without it the ellipsoid with which collision detection is done would be around the camera and the camera would be in the middle of the ellipsoid. But as human beings, we are used to having our eyes on top of the body, not in the middle of it. So we place the scene node 30 units over the center of the ellipsoid with this parameter. And that's it, collision detection works now.
scene::ICameraSceneNode* camera =
smgr->addCameraSceneNodeFPS(0, 100.0f, .3f, ID_IsNotPickable, 0, 0, true, 300.f);
camera->setPosition(core::vector3df(50,50,-60));
camera->setTarget(core::vector3df(-70,30,-60));
if (selector)
{
scene::ISceneNodeAnimatorCollisionResponse * anim = smgr->createCollisionResponseAnimator(
selector, camera, core::vector3df(30,50,30),
core::vector3df(0,-1000,0), core::vector3df(0,30,0));
selector->drop();
camera->addAnimator(anim);
anim->drop();
}
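As announced above, a hedged sketch of attaching the same kind of animator to another scene node; 'someNode' is a made-up placeholder for any other scene node that should collide with the level and is not part of this example's code.
scene::ISceneNodeAnimatorCollisionResponse* otherAnim =
smgr->createCollisionResponseAnimator(selector, someNode,
core::vector3df(10,20,10), core::vector3df(0,-1000,0));
if (otherAnim)
{
someNode->addAnimator(otherAnim); // someNode is hypothetical; use your own node here
otherAnim->drop();
}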
device->getCursorControl()->setVisible(false);
scene::IBillboardSceneNode * bill = smgr->addBillboardSceneNode();
bill->setMaterialType(video::EMT_TRANSPARENT_ADD_COLOR );
bill->setMaterialTexture(0, driver->getTexture(mediaPath + "particle.bmp"));
bill->setMaterialFlag(video::EMF_LIGHTING, false);
bill->setMaterialFlag(video::EMF_ZBUFFER, false);
bill->setSize(core::dimension2d<f32>(20.0f, 20.0f));
bill->setID(ID_IsNotPickable);
Add 3 animated hominids, which we can pick using a ray-triangle intersection. They all animate quite slowly, to make it easier to see that accurate triangle selection is being performed.
scene::IAnimatedMeshSceneNode* node = 0;
video::SMaterial material;
node = smgr->addAnimatedMeshSceneNode(smgr->getMesh(mediaPath + "faerie.md2"),
0, IDFlag_IsPickable | IDFlag_IsHighlightable);
node->setPosition(core::vector3df(-90,-15,-140));
node->setScale(core::vector3df(1.6f));
node->setMD2Animation(scene::EMAT_POINT);
node->setAnimationSpeed(20.f);
material.setTexture(0, driver->getTexture(mediaPath + "faerie2.bmp"));
material.Lighting = true;
material.NormalizeNormals = true;
node->getMaterial(0) = material;
selector = smgr->createTriangleSelector(node, separateMeshBuffers);
node->setTriangleSelector(selector);
selector->drop();
node = smgr->addAnimatedMeshSceneNode(smgr->getMesh(mediaPath + "ninja.b3d"),
0, IDFlag_IsPickable | IDFlag_IsHighlightable);
node->setScale(core::vector3df(10));
node->setPosition(core::vector3df(-75,-66,-80));
node->setRotation(core::vector3df(0,90,0));
node->setAnimationSpeed(8.f);
node->getMaterial(0).NormalizeNormals = true;
node->getMaterial(0).Lighting = true;
selector = smgr->createTriangleSelector(node, separateMeshBuffers);
node->setTriangleSelector(selector);
selector->drop();
node = smgr->addAnimatedMeshSceneNode(smgr->getMesh(mediaPath + "dwarf.x"),
0, IDFlag_IsPickable | IDFlag_IsHighlightable);
node->setPosition(core::vector3df(-70,-66,-30));
node->setRotation(core::vector3df(0,-90,0));
node->setAnimationSpeed(20.f);
node->getMaterial(0).Lighting = true;
selector = smgr->createTriangleSelector(node, separateMeshBuffers);
node->setTriangleSelector(selector);
selector->drop();
node = smgr->addAnimatedMeshSceneNode(smgr->getMesh(mediaPath + "yodan.mdl"),
0, IDFlag_IsPickable | IDFlag_IsHighlightable);
node->setPosition(core::vector3df(-90,-25,20));
node->setScale(core::vector3df(0.8f));
node->getMaterial(0).Lighting = true;
node->setAnimationSpeed(20.f);
selector = smgr->createTriangleSelector(node, separateMeshBuffers);
node->setTriangleSelector(selector);
selector->drop();
material.setTexture(0, 0);
material.Lighting = false;
scene::ILightSceneNode * light = smgr->addLightSceneNode(0, core::vector3df(-60,100,400),
video::SColorf(1.0f,1.0f,1.0f,1.0f), 600.0f);
light->setID(ID_IsNotPickable);
scene::ISceneNode* highlightedSceneNode = 0;
scene::ISceneCollisionManager* collMan = smgr->getSceneCollisionManager();
material.Wireframe=true;
while(device->run())
if (device->isWindowActive())
{
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, video::SColor(0));
smgr->drawAll();
if (highlightedSceneNode)
{
highlightedSceneNode->setMaterialFlag(video::EMF_LIGHTING, true);
highlightedSceneNode = 0;
}
core::line3d<f32> ray;
ray.start = camera->getPosition();
ray.end = ray.start + (camera->getTarget() - ray.start).normalize() * 1000.0f;
irr::io::SNamedPath hitTextureName;
scene::SCollisionHit hitResult;
scene::ISceneNode * selectedSceneNode =collMan->getSceneNodeAndCollisionPointFromRay(
hitResult,
ray,
IDFlag_IsPickable,
0);
if(selectedSceneNode)
{
bill->setPosition(hitResult.Intersection);
driver->setTransform(video::ETS_WORLD, core::matrix4());
driver->setMaterial(material);
driver->draw3DTriangle(hitResult.Triangle, video::SColor(0,255,0,0));
if((selectedSceneNode->getID() & IDFlag_IsHighlightable) == IDFlag_IsHighlightable)
{
highlightedSceneNode = selectedSceneNode;
highlightedSceneNode->setMaterialFlag(video::EMF_LIGHTING, false);
}
if ( hitResult.MeshBuffer && hitResult.Node && hitResult.Node->getMaterial(hitResult.MaterialIndex).TextureLayer[0].Texture )
{
hitTextureName = hitResult.Node->getMaterial(hitResult.MaterialIndex).TextureLayer[0].Texture->getName();
}
}
driver->endScene();
int fps = driver->getFPS();
static core::stringw lastString;
core::stringw str = L"Collision detection example - Irrlicht Engine [";
str += driver->getName();
str += "] FPS:";
str += fps;
if ( !hitTextureName.getInternalName().empty() )
{
str += " ";
irr::io::path texName(hitTextureName.getInternalName());
str += core::deletePathFromFilename(texName);
}
if ( str != lastString )
{
device->setWindowCaption(str.c_str());
lastString = str;
}
}
device->drop();
return 0;
}
diff --git a/examples/08.SpecialFX/tutorial.html b/examples/08.SpecialFX/tutorial.html
index 725473b5..1c77ef6a 100644
--- a/examples/08.SpecialFX/tutorial.html
+++ b/examples/08.SpecialFX/tutorial.html
@@ -195,14 +195,14 @@ tr.heading h2 {
-
+
-
This tutorials describes how to do special effects. It shows how to use stencil buffer shadows, the particle system, billboards, dynamic light, and the water surface scene node.
-
We start like in some tutorials before. Please note that this time, the 'shadows' flag in createDevice() is set to true, for we want to have a dynamic shadow casted from an animated character. If this example runs too slow, set it to false. The Irrlicht Engine checks if your hardware doesn't support the stencil buffer, and disables shadows by itself, but just in case the demo runs slow on your hardware.
#include <irrlicht.h>
#include <iostream>
#include "driverChoice.h"
using namespace irr;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
int main()
{
char i;
printf("Please press 'y' if you want to use realtime shadows.\n");
std::cin >> i;
const bool shadows = (i == 'y');
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
Create device and exit if creation failed. We make the stencil flag optional to avoid slow screen modes for runs without shadows.
IrrlichtDevice *device =
createDevice(driverType, core::dimension2d<u32>(640, 480),
16, false, shadows);
if (device == 0)
return 1;
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
For our environment, we load a .3ds file. It is a small room I modelled with Anim8or and exported into the 3ds format because the Irrlicht Engine does not support the .an8 format. I am a very bad 3d graphic artist, and so the texture mapping is not very nice in this model. Luckily I am a better programmer than artist, and so the Irrlicht Engine is able to create a cool texture mapping for me: Just use the mesh manipulator and create a planar texture mapping for the mesh. If you want to see the mapping I made with Anim8or, uncomment this line. I also did not figure out how to set the material right in Anim8or, it has a specular light color which I don't really like. I'll switch it off too with this code.
scene::IAnimatedMesh* mesh = smgr->getMesh("../../media/room.3ds");
smgr->getMeshManipulator()->makePlanarTextureMapping(mesh->getMesh(0), 0.004f);
scene::ISceneNode* node = 0;
node = smgr->addAnimatedMeshSceneNode(mesh);
node->setMaterialTexture(0, driver->getTexture("../../media/wall.jpg"));
node->getMaterial(0).SpecularColor.set(0,0,0,0);
Now, for the first special effect: Animated water. It works like this: The WaterSurfaceSceneNode takes a mesh as input and makes it wave like a water surface. And if we let this scene node use a nice material like the EMT_REFLECTION_2_LAYER, it looks really cool. We are doing this with the next few lines of code. As input mesh, we create a hill plane mesh, without hills. But any other mesh could be used for this, you could even use the room.3ds (which would look really strange) if you want to.
mesh = smgr->addHillPlaneMesh( "myHill",
core::dimension2d<f32>(20,20),
core::dimension2d<u32>(40,40), 0, 0,
core::dimension2d<f32>(0,0),
core::dimension2d<f32>(10,10));
node = smgr->addWaterSurfaceSceneNode(mesh->getMesh(0), 3.0f, 300.0f, 30.0f);
node->setPosition(core::vector3df(0,7,0));
node->setMaterialTexture(0, driver->getTexture("../../media/stones.jpg"));
node->setMaterialTexture(1, driver->getTexture("../../media/water.jpg"));
node->setMaterialType(video::EMT_REFLECTION_2_LAYER);
The second special effect is very basic, I bet you saw it already in some Irrlicht Engine demos: A transparent billboard combined with a dynamic light. We simply create a light scene node, let it fly around, and to make it look more cool, we attach a billboard scene node to it.
node = smgr->addLightSceneNode(0, core::vector3df(0,0,0),
video::SColorf(1.0f, 0.6f, 0.7f, 1.0f), 800.0f);
scene::ISceneNodeAnimator* anim = 0;
anim = smgr->createFlyCircleAnimator (core::vector3df(0,150,0),250.0f);
node->addAnimator(anim);
anim->drop();
node = smgr->addBillboardSceneNode(node, core::dimension2d<f32>(50, 50));
node->setMaterialFlag(video::EMF_LIGHTING, false);
node->setMaterialType(video::EMT_TRANSPARENT_ADD_COLOR);
node->setMaterialTexture(0, driver->getTexture("../../media/particlewhite.bmp"));
The next special effect is a lot more interesting: A particle system. The particle system in the Irrlicht Engine is quite modular and extensible, yet still easy to use. There is a particle system scene node into which you can put a particle emitter, which makes particles come out of nothing. These emitters are quite flexible and usually have lots of parameters like direction, amount, and color of the particles they create.
This tutorial describes how to do special effects. It shows how to use stencil buffer shadows, the particle system, billboards, dynamic light, and the water surface scene node.
We start like in some tutorials before. Please note that this time, the 'shadows' flag in createDevice() is set to true, for we want to have a dynamic shadow cast from an animated character. If this example runs too slowly, set it to false. The Irrlicht Engine also checks whether your hardware supports the stencil buffer, and disables shadows by itself if it does not (a small sketch of how to query this yourself follows after the device setup below).
#include <irrlicht.h>
#include <iostream>
#include "driverChoice.h"
#include "exampleHelper.h"
using namespace irr;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
int main()
{
char i = 'y';
printf("Please press 'y' if you want to use realtime shadows.\n");
std::cin >> i;
const bool shadows = (i == 'y');
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
Create device and exit if creation failed. We make the stencil flag optional to avoid slow screen modes for runs without shadows.
IrrlichtDevice *device =
createDevice(driverType, core::dimension2d<u32>(640, 480),
16, false, shadows);
if (device == 0)
return 1;
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
const io::path mediaPath = getExampleMediaPath();
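As a hedged aside to the stencil-buffer remark above (this check is not part of the original example): once the driver exists, you can query the capability yourself. EVDF_STENCIL_BUFFER is the Irrlicht feature flag for this; the printf message is only illustrative.
// Optional, illustrative only: warn if stencil shadows cannot work on this driver.
if (shadows && !driver->queryFeature(video::EVDF_STENCIL_BUFFER))
	printf("No stencil buffer available, realtime shadows will be disabled.\n");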
For our environment, we load a .3ds file. It is a small room I modeled with Anim8or and exported into the 3ds format because the Irrlicht Engine does not support the .an8 format. I am a very bad 3d graphic artist, and so the texture mapping is not very nice in this model. Luckily I am a better programmer than artist, and so the Irrlicht Engine is able to create a cool texture mapping for me: Just use the mesh manipulator and create a planar texture mapping for the mesh. If you want to see the mapping I made with Anim8or, uncomment this line. I also did not figure out how to set the material right in Anim8or; it has a specular light color which I don't really like, so I'll switch that off too with this code.
scene::IAnimatedMesh* mesh = smgr->getMesh(mediaPath + "room.3ds");
smgr->getMeshManipulator()->makePlanarTextureMapping(mesh->getMesh(0), 0.004f);
scene::ISceneNode* node = 0;
node = smgr->addAnimatedMeshSceneNode(mesh);
node->setMaterialTexture(0, driver->getTexture(mediaPath + "wall.jpg"));
node->getMaterial(0).SpecularColor.set(0,0,0,0);
Now, for the first special effect: Animated water. It works like this: The WaterSurfaceSceneNode takes a mesh as input and makes it wave like a water surface. And if we let this scene node use a nice material like the EMT_REFLECTION_2_LAYER, it looks really cool. We are doing this with the next few lines of code. As input mesh, we create a hill plane mesh, without hills. But any other mesh could be used for this; you could even use the room.3ds (which would look really strange) if you want to, as the short aside after the following code sketches.
mesh = smgr->addHillPlaneMesh( "myHill",
core::dimension2d<f32>(20,20),
core::dimension2d<u32>(40,40), 0, 0,
core::dimension2d<f32>(0,0),
core::dimension2d<f32>(10,10));
node = smgr->addWaterSurfaceSceneNode(mesh->getMesh(0), 3.0f, 300.0f, 30.0f);
node->setPosition(core::vector3df(0,7,0));
node->setMaterialTexture(0, driver->getTexture(mediaPath + "stones.jpg"));
node->setMaterialTexture(1, driver->getTexture(mediaPath + "water.jpg"));
node->setMaterialType(video::EMT_REFLECTION_2_LAYER);
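As an aside to the remark above that any mesh works as input (purely illustrative, not part of this example): the room mesh loaded earlier could be passed to the very same call.
// Purely illustrative alternative (would replace the hill plane used above):
// node = smgr->addWaterSurfaceSceneNode(
//	smgr->getMesh(mediaPath + "room.3ds")->getMesh(0), 3.0f, 300.0f, 30.0f);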
The second special effect is very basic; I bet you have already seen it in some Irrlicht Engine demos: A transparent billboard combined with a dynamic light. We simply create a light scene node, let it fly around, and to make it look a bit cooler, we attach a billboard scene node to it.
scene::ILightSceneNode * lightNode = smgr->addLightSceneNode(0, core::vector3df(0,0,0),
video::SColorf(1.0f, 0.6f, 0.7f, 1.0f), 800.0f);
scene::ISceneNodeAnimator* anim = 0;
anim = smgr->createFlyCircleAnimator (core::vector3df(0,150,0),250.0f, 0.0005f);
lightNode ->addAnimator(anim);
anim->drop();
node = smgr->addBillboardSceneNode(lightNode, core::dimension2d<f32>(50, 50));
node->setMaterialFlag(video::EMF_LIGHTING, false);
node->setMaterialType(video::EMT_TRANSPARENT_ADD_COLOR);
node->setMaterialTexture(0, driver->getTexture(mediaPath + "particlewhite.bmp"));
The next special effect is a lot more interesting: A particle system. The particle system in the Irrlicht Engine is quite modular and extensible, yet still easy to use. There is a particle system scene node into which you can put a particle emitter, which makes particles come out of nothing. These emitters are quite flexible and usually have lots of parameters like direction, amount, and color of the particles they create.
There are different emitters, for example a point emitter which lets particles pop out at a fixed point. If the particle emitters available in the engine are not enough for you, you can easily create your own: simply derive a class from the IParticleEmitter interface and attach it to the particle system using setEmitter(), as sketched below. In this example we create a box particle emitter, which creates particles randomly inside a box. The parameters define the box, the direction of the particles, the minimal and maximal number of new particles per second, the color, and the minimal and maximal lifetime of the particles.
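A minimal sketch of swapping in one of the other built-in emitters (assumptions: the standard createPointEmitter() factory and the particle system node 'ps' that is created a few lines further down; this example itself keeps the box emitter):
// Illustrative alternative: a point emitter letting particles pop out of a single spot.
scene::IParticleEmitter* pointEm = ps->createPointEmitter(
	core::vector3df(0.0f,0.06f,0.0f), // direction
	80, 100);                         // min and max particles per second
ps->setEmitter(pointEm);              // replaces any previously set emitter
pointEm->drop();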
Because a particle system with only emitters would be a little bit boring, there are also particle affectors, which modify particles while they fly around. Affectors can be added to a particle system to simulate additional effects like gravity or wind. The particle affector we use in this example is one which modifies the color of the particles: it lets them fade out. Like the particle emitters, additional particle affectors can also be implemented by you; simply derive a class from IParticleAffector and add it with addAffector(). A small sketch of adding another built-in affector follows below.
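Here is that small sketch of adding a second built-in affector (assumption: the standard createGravityAffector() factory; the example code below only uses the fade-out affector):
// Illustrative: pull the particles slowly downwards in addition to fading them out.
scene::IParticleAffector* gravity = ps->createGravityAffector(core::vector3df(0.0f,-0.03f,0.0f));
ps->addAffector(gravity);
gravity->drop();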
After we set a nice material on the particle system, we have a cool-looking camp fire. By adjusting material, texture, particle emitter, and affector parameters, it is also easily possible to create smoke, rain, explosions, snow, and so on.
scene::IParticleSystemSceneNode* ps =
smgr->addParticleSystemSceneNode(false);
scene::IParticleEmitter* em = ps->createBoxEmitter(
core::aabbox3d<f32>(-7,0,-7,7,1,7),
core::vector3df(0.0f,0.06f,0.0f),
80,100,
video::SColor(0,255,255,255),
video::SColor(0,255,255,255),
800,2000,0,
core::dimension2df(10.f,10.f),
core::dimension2df(20.f,20.f));
ps->setEmitter(em);
em->drop();
scene::IParticleAffector* paf = ps->createFadeOutParticleAffector();
ps->addAffector(paf);
paf->drop();
ps->setPosition(core::vector3df(-70,60,40));
ps->setScale(core::vector3df(2,2,2));
ps->setMaterialFlag(video::EMF_LIGHTING, false);
ps->setMaterialFlag(video::EMF_ZWRITE_ENABLE, false);
ps->setMaterialTexture(0, driver->getTexture("../../media/fire.bmp"));
ps->setMaterialType(video::EMT_TRANSPARENT_ADD_COLOR);
Next we add a volumetric light node, which adds a glowing fake area light to the scene. Like with the billboards and particle systems we also assign a texture for the desired effect, though this time we'll use a texture animator to create the illusion of a magical glowing area effect.
scene::IVolumeLightSceneNode * n = smgr->addVolumeLightSceneNode(0, -1,
32,
32,
video::SColor(0, 255, 255, 255),
video::SColor(0, 0, 0, 0));
if (n)
{
n->setScale(core::vector3df(56.0f, 56.0f, 56.0f));
n->setPosition(core::vector3df(-120,50,40));
core::array<video::ITexture*> textures;
for (s32 g=7; g > 0; --g)
{
core::stringc tmp;
tmp = "../../media/portal";
tmp += g;
tmp += ".bmp";
video::ITexture* t = driver->getTexture( tmp.c_str() );
textures.push_back(t);
}
scene::ISceneNodeAnimator* glow = smgr->createTextureAnimator(textures, 150);
n->addAnimator(glow);
glow->drop();
}
As our last special effect, we want a dynamic shadow to be cast from an animated character. For this we load a DirectX .x model and place it into our world. For creating the shadow, we simply need to call addShadowVolumeSceneNode(). The color of shadows is only adjustable globally for all shadows, by calling ISceneManager::setShadowColor(). Voila, here is our dynamic shadow.
Because the character is a little bit too small for this scene, we make it bigger using setScale(). And because the character is lit by a dynamic light, we need to normalize the normals to make the lighting on it correct. This is always necessary if the scale of a dynamically lit model is not (1,1,1). Otherwise it would get too dark or too bright because the normals would be scaled too.
mesh = smgr->getMesh("../../media/dwarf.x");
scene::IAnimatedMeshSceneNode* anode = 0;
anode = smgr->addAnimatedMeshSceneNode(mesh);
anode->setPosition(core::vector3df(-50,20,-60));
anode->setAnimationSpeed(15);
anode->addShadowVolumeSceneNode();
smgr->setShadowColor(video::SColor(150,0,0,0));
anode->setScale(core::vector3df(2,2,2));
anode->setMaterialFlag(video::EMF_NORMALIZE_NORMALS, true);
Finally we simply have to draw everything, that's all.
scene::ICameraSceneNode* camera = smgr->addCameraSceneNodeFPS();
camera->setPosition(core::vector3df(-50,50,-150));
camera->setFarValue(10000.0f);
device->getCursorControl()->setVisible(false);
s32 lastFPS = -1;
while(device->run())
if (device->isWindowActive())
{
driver->beginScene(true, true, 0);
smgr->drawAll();
driver->endScene();
const s32 fps = driver->getFPS();
if (lastFPS != fps)
{
core::stringw str = L"Irrlicht Engine - SpecialFX example [";
str += driver->getName();
str += "] FPS:";
str += fps;
device->setWindowCaption(str.c_str());
lastFPS = fps;
}
}
device->drop();
return 0;
}
After we set a nice material on the particle system, we have a cool-looking camp fire. By adjusting material, texture, particle emitter, and affector parameters, it is also easily possible to create smoke, rain, explosions, snow, and so on.
scene::IParticleSystemSceneNode* ps =
smgr->addParticleSystemSceneNode(false);
if (ps)
{
scene::IParticleEmitter* em = ps->createBoxEmitter(
core::aabbox3d<f32>(-7,0,-7,7,1,7),
core::vector3df(0.0f,0.06f,0.0f),
80,100,
video::SColor(0,255,255,255),
video::SColor(0,255,255,255),
800,2000,0,
core::dimension2df(10.f,10.f),
core::dimension2df(20.f,20.f));
ps->setEmitter(em);
em->drop();
scene::IParticleAffector* paf = ps->createFadeOutParticleAffector();
ps->addAffector(paf);
paf->drop();
ps->setPosition(core::vector3df(-70,60,40));
ps->setScale(core::vector3df(2,2,2));
ps->setMaterialFlag(video::EMF_LIGHTING, false);
ps->setMaterialFlag(video::EMF_ZWRITE_ENABLE, false);
ps->setMaterialTexture(0, driver->getTexture(mediaPath + "fire.bmp"));
ps->setMaterialType(video::EMT_TRANSPARENT_ADD_COLOR);
}
Next we add a volumetric light node, which adds a glowing fake area light to the scene. Like with the billboards and particle systems we also assign a texture for the desired effect, though this time we'll use a texture animator to create the illusion of a magical glowing area effect.
scene::IVolumeLightSceneNode * n = smgr->addVolumeLightSceneNode(0, -1,
32,
32,
video::SColor(0, 255, 255, 255),
video::SColor(0, 0, 0, 0));
if (n)
{
n->setScale(core::vector3df(56.0f, 56.0f, 56.0f));
n->setPosition(core::vector3df(-120,50,40));
core::array<video::ITexture*> textures;
for (s32 g=7; g > 0; --g)
{
core::stringc tmp(mediaPath);
tmp += "portal";
tmp += g;
tmp += ".bmp";
video::ITexture* t = driver->getTexture( tmp.c_str() );
textures.push_back(t);
}
scene::ISceneNodeAnimator* glow = smgr->createTextureAnimator(textures, 150);
n->addAnimator(glow);
glow->drop();
}
As our last special effect, we want a dynamic shadow to be cast from an animated character. For this we load a DirectX .x model and place it into our world. For creating the shadow, we simply need to call addShadowVolumeSceneNode(). The color of shadows is only adjustable globally for all shadows, by calling ISceneManager::setShadowColor(). Voila, here is our dynamic shadow.
Because the character is a little bit too small for this scene, we make it bigger using setScale(). And because the character is lit by a dynamic light, we need to normalize the normals to make the lighting on it correct. This is always necessary if the scale of a dynamically lit model is not (1,1,1). Otherwise it would get too dark or too bright because the normals would be scaled too.
mesh = smgr->getMesh(mediaPath + "dwarf.x");
scene::IAnimatedMeshSceneNode* anode = 0;
anode = smgr->addAnimatedMeshSceneNode(mesh);
anode->setPosition(core::vector3df(-50,20,-60));
anode->setAnimationSpeed(15);
Shadows still have to be drawn even when the node casting them is not visible itself. We have to disable culling if the node is animated or its transformation changes, as otherwise the shadow is not updated correctly. If you have many objects and this becomes a speed problem, you will have to do some manual culling yourself (for example, hiding all objects beyond a certain distance; a short sketch of this follows after this block of code).
anode->setAutomaticCulling(scene::EAC_OFF);
anode->addShadowVolumeSceneNode();
smgr->setShadowColor(video::SColor(150,0,0,0));
anode->setScale(core::vector3df(2,2,2));
anode->setMaterialFlag(video::EMF_NORMALIZE_NORMALS, true);
scene::ISceneNodeAnimator* ra = smgr->createRotationAnimator(irr::core::vector3df(0, 0.1f, 0));
anode->addAnimator(ra);
ra->drop();
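As a rough sketch of the manual culling mentioned above (hypothetical, not part of this example; 'maxShadowDistance' is a made-up threshold, and the test would belong inside the render loop further down, once the camera exists):
// Hypothetical manual culling: hide the character (and thereby skip its shadow volume)
// whenever it is further away from the camera than some threshold.
const f32 maxShadowDistance = 1000.f;
anode->setVisible(anode->getAbsolutePosition().getDistanceFrom(
	camera->getAbsolutePosition()) < maxShadowDistance);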
Finally we simply have to draw everything, that's all.
scene::ICameraSceneNode* camera = smgr->addCameraSceneNodeFPS();
camera->setPosition(core::vector3df(-50,50,-150));
camera->setFarValue(10000.0f);
device->getCursorControl()->setVisible(false);
s32 lastFPS = -1;
while(device->run())
if (device->isWindowActive())
{
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, video::SColor(0));
smgr->drawAll();
driver->endScene();
const s32 fps = driver->getFPS();
if (lastFPS != fps)
{
core::stringw str = L"Irrlicht Engine - SpecialFX example [";
str += driver->getName();
str += "] FPS:";
str += fps;
device->setWindowCaption(str.c_str());
lastFPS = fps;
}
}
device->drop();
return 0;
}
diff --git a/examples/09.Meshviewer/tutorial.html b/examples/09.Meshviewer/tutorial.html
index e67cc027..50d0d1be 100644
--- a/examples/09.Meshviewer/tutorial.html
+++ b/examples/09.Meshviewer/tutorial.html
@@ -195,17 +195,17 @@ tr.heading h2 {
-
+
This tutorial shows how to create a more complex application with the engine. We construct a simple mesh viewer using the user interface API and the scene management of Irrlicht. The tutorial shows how to create and use Buttons, Windows, Toolbars, Menus, ComboBoxes, Tabcontrols, Editboxes, Images, MessageBoxes, SkyBoxes, and how to parse XML files with the integrated XML reader of the engine.
-
We start like in most other tutorials: Include all necessary header files, add a comment to let the engine be linked with the right .lib file in Visual Studio, and declare some global variables. We also add two 'using namespace' statements, so we do not need to write the whole names of all classes. In this tutorial, we use a lot of stuff from the gui namespace.
#include <irrlicht.h>
#include "driverChoice.h"
using namespace irr;
using namespace gui;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
Some global variables used later on
IrrlichtDevice *Device = 0;
core::stringc StartUpModelFile;
core::stringw MessageText;
core::stringw Caption;
scene::ISceneNode* Model = 0;
scene::ISceneNode* SkyBox = 0;
bool Octree=false;
bool UseLight=false;
scene::ICameraSceneNode* Camera[2] = {0, 0};
enum
{
GUI_ID_DIALOG_ROOT_WINDOW = 0x10000,
GUI_ID_X_SCALE,
GUI_ID_Y_SCALE,
GUI_ID_Z_SCALE,
GUI_ID_OPEN_MODEL,
GUI_ID_SET_MODEL_ARCHIVE,
GUI_ID_LOAD_AS_OCTREE,
GUI_ID_SKY_BOX_VISIBLE,
GUI_ID_TOGGLE_DEBUG_INFO,
GUI_ID_DEBUG_OFF,
GUI_ID_DEBUG_BOUNDING_BOX,
GUI_ID_DEBUG_NORMALS,
GUI_ID_DEBUG_SKELETON,
GUI_ID_DEBUG_WIRE_OVERLAY,
GUI_ID_DEBUG_HALF_TRANSPARENT,
GUI_ID_DEBUG_BUFFERS_BOUNDING_BOXES,
GUI_ID_DEBUG_ALL,
GUI_ID_MODEL_MATERIAL_SOLID,
GUI_ID_MODEL_MATERIAL_TRANSPARENT,
GUI_ID_MODEL_MATERIAL_REFLECTION,
GUI_ID_CAMERA_MAYA,
GUI_ID_CAMERA_FIRST_PERSON,
GUI_ID_POSITION_TEXT,
GUI_ID_ABOUT,
GUI_ID_QUIT,
GUI_ID_TEXTUREFILTER,
GUI_ID_SKIN_TRANSPARENCY,
GUI_ID_SKIN_ANIMATION_FPS,
GUI_ID_BUTTON_SET_SCALE,
GUI_ID_BUTTON_SCALE_MUL10,
GUI_ID_BUTTON_SCALE_DIV10,
GUI_ID_BUTTON_OPEN_MODEL,
GUI_ID_BUTTON_SHOW_ABOUT,
GUI_ID_BUTTON_SHOW_TOOLBOX,
GUI_ID_BUTTON_SELECT_ARCHIVE,
GUI_ID_ANIMATION_INFO,
MAX_FRAMERATE = 80,
DEFAULT_FRAMERATE = 30
};
Toggle between various cameras
void setActiveCamera(scene::ICameraSceneNode* newActive)
{
if (0 == Device)
return;
scene::ICameraSceneNode * active = Device->getSceneManager()->getActiveCamera();
active->setInputReceiverEnabled(false);
newActive->setInputReceiverEnabled(true);
Device->getSceneManager()->setActiveCamera(newActive);
}
Set the skin transparency by changing the alpha values of all skin-colors
void setSkinTransparency(s32 alpha, irr::gui::IGUISkin * skin)
{
for (s32 i=0; i<irr::gui::EGDC_COUNT ; ++i)
{
video::SColor col = skin->getColor((EGUI_DEFAULT_COLOR)i);
col.setAlpha(alpha);
skin->setColor((EGUI_DEFAULT_COLOR)i, col);
}
}
Update the display of the model scaling
void updateScaleInfo(scene::ISceneNode* model)
{
IGUIElement* toolboxWnd = Device->getGUIEnvironment()->getRootGUIElement()->getElementFromId(GUI_ID_DIALOG_ROOT_WINDOW, true);
if (!toolboxWnd)
return;
if (!model)
{
toolboxWnd->getElementFromId(GUI_ID_X_SCALE, true)->setText( L"-" );
toolboxWnd->getElementFromId(GUI_ID_Y_SCALE, true)->setText( L"-" );
toolboxWnd->getElementFromId(GUI_ID_Z_SCALE, true)->setText( L"-" );
}
else
{
core::vector3df scale = model->getScale();
toolboxWnd->getElementFromId(GUI_ID_X_SCALE, true)->setText( core::stringw(scale.X).c_str() );
toolboxWnd->getElementFromId(GUI_ID_Y_SCALE, true)->setText( core::stringw(scale.Y).c_str() );
toolboxWnd->getElementFromId(GUI_ID_Z_SCALE, true)->setText( core::stringw(scale.Z).c_str() );
}
}
Function showAboutText() displays a messagebox with a caption and a message text. The texts will be stored in the MessageText and Caption variables at startup.
void showAboutText()
{
Device->getGUIEnvironment()->addMessageBox(
Caption.c_str(), MessageText.c_str());
}
Function loadModel() loads a model and displays it using addAnimatedMeshSceneNode() and the scene manager. Nothing difficult. It also displays a short message box if the model could not be loaded.
void loadModel(const c8* fn)
{
io::path filename(fn);
io::path extension;
core::getFileNameExtension(extension, filename);
extension.make_lower();
if (extension == ".jpg" || extension == ".pcx" ||
extension == ".png" || extension == ".ppm" ||
extension == ".pgm" || extension == ".pbm" ||
extension == ".psd" || extension == ".tga" ||
extension == ".bmp" || extension == ".wal" ||
extension == ".rgb" || extension == ".rgba")
{
video::ITexture * texture =
Device->getVideoDriver()->getTexture( filename );
if ( texture && Model )
{
Device->getVideoDriver()->removeTexture(texture);
texture = Device->getVideoDriver()->getTexture( filename );
Model->setMaterialTexture(0, texture);
}
return;
}
else if (extension == ".pk3" || extension == ".zip" || extension == ".pak" || extension == ".npk")
{
Device->getFileSystem()->addFileArchive(filename.c_str());
return;
}
if (Model)
Model->remove();
Model = 0;
if (extension==".irr")
{
core::array<scene::ISceneNode*> outNodes;
Device->getSceneManager()->loadScene(filename);
Device->getSceneManager()->getSceneNodesFromType(scene::ESNT_ANIMATED_MESH, outNodes);
if (outNodes.size())
Model = outNodes[0];
return;
}
scene::IAnimatedMesh* m = Device->getSceneManager()->getMesh( filename.c_str() );
if (!m)
{
if (StartUpModelFile != filename)
Device->getGUIEnvironment()->addMessageBox(
Caption.c_str(), L"The model could not be loaded. " \
L"Maybe it is not a supported file format.");
return;
}
if (Octree)
Model = Device->getSceneManager()->addOctreeSceneNode(m->getMesh(0));
else
{
scene::IAnimatedMeshSceneNode* animModel = Device->getSceneManager()->addAnimatedMeshSceneNode(m);
animModel->setAnimationSpeed(30);
Model = animModel;
}
Model->setMaterialFlag(video::EMF_LIGHTING, UseLight);
Model->setMaterialFlag(video::EMF_NORMALIZE_NORMALS, UseLight);
Model->setDebugDataVisible(scene::EDS_OFF);
gui::IGUIContextMenu* menu = (gui::IGUIContextMenu*)Device->getGUIEnvironment()->getRootGUIElement()->getElementFromId(GUI_ID_TOGGLE_DEBUG_INFO, true);
if (menu)
for(int item = 1; item < 6; ++item)
menu->setItemChecked(item, false);
updateScaleInfo(Model);
}
Function createToolBox() creates a toolbox window. In this simple mesh viewer, this toolbox only contains a tab control with three edit boxes for changing the scale of the displayed model.
void createToolBox()
{
IGUIEnvironment* env = Device->getGUIEnvironment();
IGUIElement* root = env->getRootGUIElement();
IGUIElement* e = root->getElementFromId(GUI_ID_DIALOG_ROOT_WINDOW, true);
if (e)
e->remove();
IGUIWindow* wnd = env->addWindow(core::rect<s32>(600,45,800,480),
false, L"Toolset", 0, GUI_ID_DIALOG_ROOT_WINDOW);
IGUITabControl* tab = env->addTabControl(
core::rect<s32>(2,20,800-602,480-7), wnd, true, true);
IGUITab* t1 = tab->addTab(L"Config");
env->addStaticText(L"Scale:",
core::rect<s32>(10,20,60,45), false, false, t1);
env->addStaticText(L"X:", core::rect<s32>(22,48,40,66), false, false, t1);
env->addEditBox(L"1.0", core::rect<s32>(40,46,130,66), true, t1, GUI_ID_X_SCALE);
env->addStaticText(L"Y:", core::rect<s32>(22,82,40,96), false, false, t1);
env->addEditBox(L"1.0", core::rect<s32>(40,76,130,96), true, t1, GUI_ID_Y_SCALE);
env->addStaticText(L"Z:", core::rect<s32>(22,108,40,126), false, false, t1);
env->addEditBox(L"1.0", core::rect<s32>(40,106,130,126), true, t1, GUI_ID_Z_SCALE);
env->addButton(core::rect<s32>(10,134,85,165), t1, GUI_ID_BUTTON_SET_SCALE, L"Set");
env->addButton(core::rect<s32>(65,20,95,40), t1, GUI_ID_BUTTON_SCALE_MUL10, L"* 10");
env->addButton(core::rect<s32>(100,20,130,40), t1, GUI_ID_BUTTON_SCALE_DIV10, L"* 0.1");
updateScaleInfo(Model);
env->addStaticText(L"GUI Transparency Control:",
core::rect<s32>(10,200,150,225), true, false, t1);
IGUIScrollBar* scrollbar = env->addScrollBar(true,
core::rect<s32>(10,225,150,240), t1, GUI_ID_SKIN_TRANSPARENCY);
scrollbar->setMax(255);
scrollbar->setPos(255);
env->addStaticText(L":", core::rect<s32>(10,240,150,265), true, false, t1);
env->addStaticText(L"Framerate:",
core::rect<s32>(12,240,75,265), false, false, t1);
env->addStaticText(L"", core::rect<s32>(75,240,200,265), false, false, t1,
GUI_ID_ANIMATION_INFO);
scrollbar = env->addScrollBar(true,
core::rect<s32>(10,265,150,280), t1, GUI_ID_SKIN_ANIMATION_FPS);
scrollbar->setMax(MAX_FRAMERATE);
scrollbar->setMin(-MAX_FRAMERATE);
scrollbar->setPos(DEFAULT_FRAMERATE);
scrollbar->setSmallStep(1);
}
Function updateToolBox() is called each frame to update dynamic information in the toolbox.
void updateToolBox()
{
IGUIEnvironment* env = Device->getGUIEnvironment();
IGUIElement* root = env->getRootGUIElement();
IGUIElement* dlg = root->getElementFromId(GUI_ID_DIALOG_ROOT_WINDOW, true);
if (!dlg )
return;
IGUIStaticText * aniInfo = (IGUIStaticText *)(dlg->getElementFromId(GUI_ID_ANIMATION_INFO, true));
if (aniInfo)
{
if ( Model && scene::ESNT_ANIMATED_MESH == Model->getType() )
{
scene::IAnimatedMeshSceneNode* animatedModel = (scene::IAnimatedMeshSceneNode*)Model;
core::stringw str( (s32)core::round_(animatedModel->getAnimationSpeed()) );
str += L" Frame: ";
str += core::stringw((s32)animatedModel->getFrameNr());
aniInfo->setText(str.c_str());
}
else
aniInfo->setText(L"");
}
}
void onKillFocus()
{
const core::list<scene::ISceneNodeAnimator*>& animators = Camera[1]->getAnimators();
core::list<irr::scene::ISceneNodeAnimator*>::ConstIterator iter = animators.begin();
while ( iter != animators.end() )
{
if ( (*iter)->getType() == scene::ESNAT_CAMERA_FPS )
{
scene::ISceneNodeAnimatorCameraFPS * fpsAnimator = static_cast<scene::ISceneNodeAnimatorCameraFPS*>(*iter);
const core::array<SKeyMap>& keyMap = fpsAnimator->getKeyMap();
for ( irr::u32 i=0; i< keyMap.size(); ++i )
{
irr::SEvent event;
event.EventType = EET_KEY_INPUT_EVENT;
event.KeyInput.Key = keyMap[i].KeyCode;
event.KeyInput.PressedDown = false;
fpsAnimator->OnEvent(event);
}
}
++iter;
}
}
Function hasModalDialog() checks if we currently have a modal dialog open.
bool hasModalDialog()
{
if ( !Device )
return false;
IGUIEnvironment* env = Device->getGUIEnvironment();
IGUIElement * focused = env->getFocus();
while ( focused )
{
if ( focused->isVisible() && focused->hasType(EGUIET_MODAL_SCREEN) )
return true;
focused = focused->getParent();
}
return false;
}
To get all the events sent by the GUI Elements, we need to create an event receiver. This one is really simple. If an event occurs, it checks the id of the caller and the event type, and starts an action based on these values. For example, if a menu item with id GUI_ID_OPEN_MODEL was selected, it opens a file-open-dialog.
class MyEventReceiver : public IEventReceiver
{
public:
virtual bool OnEvent(const SEvent& event)
{
if (event.EventType == EET_KEY_INPUT_EVENT &&
event.KeyInput.PressedDown == false)
{
if ( OnKeyUp(event.KeyInput.Key) )
return true;
}
if (event.EventType == EET_GUI_EVENT)
{
s32 id = event.GUIEvent.Caller->getID();
IGUIEnvironment* env = Device->getGUIEnvironment();
switch(event.GUIEvent.EventType)
{
case EGET_MENU_ITEM_SELECTED:
OnMenuItemSelected( (IGUIContextMenu*)event.GUIEvent.Caller );
break;
case EGET_FILE_SELECTED:
{
IGUIFileOpenDialog* dialog =
(IGUIFileOpenDialog*)event.GUIEvent.Caller;
loadModel(core::stringc(dialog->getFileName()).c_str());
}
break;
case EGET_SCROLL_BAR_CHANGED:
if (id == GUI_ID_SKIN_TRANSPARENCY)
{
const s32 pos = ((IGUIScrollBar*)event.GUIEvent.Caller)->getPos();
setSkinTransparency(pos, env->getSkin());
}
else if (id == GUI_ID_SKIN_ANIMATION_FPS)
{
const s32 pos = ((IGUIScrollBar*)event.GUIEvent.Caller)->getPos();
if (scene::ESNT_ANIMATED_MESH == Model->getType())
((scene::IAnimatedMeshSceneNode*)Model)->setAnimationSpeed((f32)pos);
}
break;
case EGET_COMBO_BOX_CHANGED:
if (id == GUI_ID_TEXTUREFILTER)
{
OnTextureFilterSelected( (IGUIComboBox*)event.GUIEvent.Caller );
}
break;
case EGET_BUTTON_CLICKED:
switch(id)
{
case GUI_ID_BUTTON_SET_SCALE:
{
gui::IGUIElement* root = env->getRootGUIElement();
core::vector3df scale;
core::stringc s;
s = root->getElementFromId(GUI_ID_X_SCALE, true)->getText();
scale.X = (f32)atof(s.c_str());
s = root->getElementFromId(GUI_ID_Y_SCALE, true)->getText();
scale.Y = (f32)atof(s.c_str());
s = root->getElementFromId(GUI_ID_Z_SCALE, true)->getText();
scale.Z = (f32)atof(s.c_str());
if (Model)
Model->setScale(scale);
updateScaleInfo(Model);
}
break;
case GUI_ID_BUTTON_SCALE_MUL10:
if (Model)
Model->setScale(Model->getScale()*10.f);
updateScaleInfo(Model);
break;
case GUI_ID_BUTTON_SCALE_DIV10:
if (Model)
Model->setScale(Model->getScale()*0.1f);
updateScaleInfo(Model);
break;
case GUI_ID_BUTTON_OPEN_MODEL:
env->addFileOpenDialog(L"Please select a model file to open");
break;
case GUI_ID_BUTTON_SHOW_ABOUT:
showAboutText();
break;
case GUI_ID_BUTTON_SHOW_TOOLBOX:
createToolBox();
break;
case GUI_ID_BUTTON_SELECT_ARCHIVE:
env->addFileOpenDialog(L"Please select your game archive/directory");
break;
}
break;
default:
break;
}
}
return false;
}
Handle key-up events
bool OnKeyUp(irr::EKEY_CODE keyCode)
{
if ( hasModalDialog() )
return false;
if (keyCode == irr::KEY_ESCAPE)
{
if (Device)
{
scene::ICameraSceneNode * camera =
Device->getSceneManager()->getActiveCamera();
if (camera)
{
camera->setInputReceiverEnabled( !camera->isInputReceiverEnabled() );
}
return true;
}
}
else if (keyCode == irr::KEY_F1)
{
if (Device)
{
IGUIElement* elem = Device->getGUIEnvironment()->getRootGUIElement()->getElementFromId(GUI_ID_POSITION_TEXT);
if (elem)
elem->setVisible(!elem->isVisible());
}
}
else if (keyCode == irr::KEY_KEY_M)
{
if (Device)
Device->minimizeWindow();
}
else if (keyCode == irr::KEY_KEY_L)
{
UseLight=!UseLight;
if (Model)
{
Model->setMaterialFlag(video::EMF_LIGHTING, UseLight);
Model->setMaterialFlag(video::EMF_NORMALIZE_NORMALS, UseLight);
}
}
return false;
}
Handle "menu item clicked" events.
void OnMenuItemSelected( IGUIContextMenu* menu )
{
s32 id = menu->getItemCommandId(menu->getSelectedItem());
IGUIEnvironment* env = Device->getGUIEnvironment();
switch(id)
{
case GUI_ID_OPEN_MODEL:
env->addFileOpenDialog(L"Please select a model file to open");
break;
case GUI_ID_SET_MODEL_ARCHIVE:
env->addFileOpenDialog(L"Please select your game archive/directory");
break;
case GUI_ID_LOAD_AS_OCTREE:
Octree = !Octree;
menu->setItemChecked(menu->getSelectedItem(), Octree);
break;
case GUI_ID_QUIT:
Device->closeDevice();
break;
case GUI_ID_SKY_BOX_VISIBLE:
menu->setItemChecked(menu->getSelectedItem(), !menu->isItemChecked(menu->getSelectedItem()));
SkyBox->setVisible(!SkyBox->isVisible());
break;
case GUI_ID_DEBUG_OFF:
menu->setItemChecked(menu->getSelectedItem()+1, false);
menu->setItemChecked(menu->getSelectedItem()+2, false);
menu->setItemChecked(menu->getSelectedItem()+3, false);
menu->setItemChecked(menu->getSelectedItem()+4, false);
menu->setItemChecked(menu->getSelectedItem()+5, false);
menu->setItemChecked(menu->getSelectedItem()+6, false);
if (Model)
Model->setDebugDataVisible(scene::EDS_OFF);
break;
case GUI_ID_DEBUG_BOUNDING_BOX:
menu->setItemChecked(menu->getSelectedItem(), !menu->isItemChecked(menu->getSelectedItem()));
if (Model)
Model->setDebugDataVisible((scene::E_DEBUG_SCENE_TYPE)(Model->isDebugDataVisible()^scene::EDS_BBOX));
break;
case GUI_ID_DEBUG_NORMALS:
menu->setItemChecked(menu->getSelectedItem(), !menu->isItemChecked(menu->getSelectedItem()));
if (Model)
Model->setDebugDataVisible((scene::E_DEBUG_SCENE_TYPE)(Model->isDebugDataVisible()^scene::EDS_NORMALS));
break;
case GUI_ID_DEBUG_SKELETON:
menu->setItemChecked(menu->getSelectedItem(), !menu->isItemChecked(menu->getSelectedItem()));
if (Model)
Model->setDebugDataVisible((scene::E_DEBUG_SCENE_TYPE)(Model->isDebugDataVisible()^scene::EDS_SKELETON));
break;
case GUI_ID_DEBUG_WIRE_OVERLAY:
menu->setItemChecked(menu->getSelectedItem(), !menu->isItemChecked(menu->getSelectedItem()));
if (Model)
Model->setDebugDataVisible((scene::E_DEBUG_SCENE_TYPE)(Model->isDebugDataVisible()^scene::EDS_MESH_WIRE_OVERLAY));
break;
case GUI_ID_DEBUG_HALF_TRANSPARENT:
menu->setItemChecked(menu->getSelectedItem(), !menu->isItemChecked(menu->getSelectedItem()));
if (Model)
Model->setDebugDataVisible((scene::E_DEBUG_SCENE_TYPE)(Model->isDebugDataVisible()^scene::EDS_HALF_TRANSPARENCY));
break;
case GUI_ID_DEBUG_BUFFERS_BOUNDING_BOXES:
menu->setItemChecked(menu->getSelectedItem(), !menu->isItemChecked(menu->getSelectedItem()));
if (Model)
Model->setDebugDataVisible((scene::E_DEBUG_SCENE_TYPE)(Model->isDebugDataVisible()^scene::EDS_BBOX_BUFFERS));
break;
case GUI_ID_DEBUG_ALL:
menu->setItemChecked(menu->getSelectedItem()-1, true);
menu->setItemChecked(menu->getSelectedItem()-2, true);
menu->setItemChecked(menu->getSelectedItem()-3, true);
menu->setItemChecked(menu->getSelectedItem()-4, true);
menu->setItemChecked(menu->getSelectedItem()-5, true);
menu->setItemChecked(menu->getSelectedItem()-6, true);
if (Model)
Model->setDebugDataVisible(scene::EDS_FULL);
break;
case GUI_ID_ABOUT:
showAboutText();
break;
case GUI_ID_MODEL_MATERIAL_SOLID:
if (Model)
Model->setMaterialType(video::EMT_SOLID);
break;
case GUI_ID_MODEL_MATERIAL_TRANSPARENT:
if (Model)
Model->setMaterialType(video::EMT_TRANSPARENT_ADD_COLOR);
break;
case GUI_ID_MODEL_MATERIAL_REFLECTION:
if (Model)
Model->setMaterialType(video::EMT_SPHERE_MAP);
break;
case GUI_ID_CAMERA_MAYA:
setActiveCamera(Camera[0]);
break;
case GUI_ID_CAMERA_FIRST_PERSON:
setActiveCamera(Camera[1]);
break;
}
}
Handle the event that one of the texture-filters was selected in the corresponding combobox.
void OnTextureFilterSelected( IGUIComboBox* combo )
{
s32 pos = combo->getSelected();
switch (pos)
{
case 0:
if (Model)
{
Model->setMaterialFlag(video::EMF_BILINEAR_FILTER, false);
Model->setMaterialFlag(video::EMF_TRILINEAR_FILTER, false);
Model->setMaterialFlag(video::EMF_ANISOTROPIC_FILTER, false);
}
break;
case 1:
if (Model)
{
Model->setMaterialFlag(video::EMF_BILINEAR_FILTER, true);
Model->setMaterialFlag(video::EMF_TRILINEAR_FILTER, false);
}
break;
case 2:
if (Model)
{
Model->setMaterialFlag(video::EMF_BILINEAR_FILTER, false);
Model->setMaterialFlag(video::EMF_TRILINEAR_FILTER, true);
}
break;
case 3:
if (Model)
{
Model->setMaterialFlag(video::EMF_ANISOTROPIC_FILTER, true);
}
break;
case 4:
if (Model)
{
Model->setMaterialFlag(video::EMF_ANISOTROPIC_FILTER, false);
}
break;
}
}
};
Most of the hard work is done. We only need to create the Irrlicht Engine device and all the buttons, menus and toolbars. We start up the engine as usual, using createDevice(). To make our application catch events, we set our event receiver as a parameter. As you can see, there is also a call to IrrlichtDevice::setResizable(). This makes the render window resizable, which is quite useful for a mesh viewer.
int main(int argc, char* argv[])
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
MyEventReceiver receiver;
Device = createDevice(driverType, core::dimension2d<u32>(800, 600),
16, false, false, false, &receiver);
if (Device == 0)
return 1;
Device->setResizable(true);
Device->setWindowCaption(L"Irrlicht Engine - Loading...");
video::IVideoDriver* driver = Device->getVideoDriver();
IGUIEnvironment* env = Device->getGUIEnvironment();
scene::ISceneManager* smgr = Device->getSceneManager();
smgr->getParameters()->setAttribute(scene::COLLADA_CREATE_SCENE_INSTANCES, true);
driver->setTextureCreationFlag(video::ETCF_ALWAYS_32_BIT, true);
smgr->addLightSceneNode(0, core::vector3df(200,200,200),
video::SColorf(1.0f,1.0f,1.0f),2000);
smgr->setAmbientLight(video::SColorf(0.3f,0.3f,0.3f));
Device->getFileSystem()->addFileArchive("../../media/");
The next step is to read the configuration file. It is stored in the xml format and looks a little bit like this:
We start like in most other tutorials: Include all necessary header files, add a comment to let the engine be linked with the correct .lib file in Visual Studio, and declare some global variables. We also add two 'using namespace' statements, so we do not need to write the whole names of all classes. In this tutorial, we use a lot of stuff from the gui namespace.
#include <irrlicht.h>
#include "driverChoice.h"
#include "exampleHelper.h"
using namespace irr;
using namespace gui;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
Some global variables used later on
IrrlichtDevice *Device = 0;
io::path StartUpModelFile;
core::stringw MessageText;
core::stringw Caption;
scene::ISceneNode* Model = 0;
scene::ISceneNode* SkyBox = 0;
bool Octree=false;
bool UseLight=false;
scene::ICameraSceneNode* Camera[2] = {0, 0};
enum
{
GUI_ID_DIALOG_ROOT_WINDOW = 0x10000,
GUI_ID_X_SCALE,
GUI_ID_Y_SCALE,
GUI_ID_Z_SCALE,
GUI_ID_OPEN_MODEL,
GUI_ID_SET_MODEL_ARCHIVE,
GUI_ID_LOAD_AS_OCTREE,
GUI_ID_SKY_BOX_VISIBLE,
GUI_ID_TOGGLE_DEBUG_INFO,
GUI_ID_DEBUG_OFF,
GUI_ID_DEBUG_BOUNDING_BOX,
GUI_ID_DEBUG_NORMALS,
GUI_ID_DEBUG_SKELETON,
GUI_ID_DEBUG_WIRE_OVERLAY,
GUI_ID_DEBUG_HALF_TRANSPARENT,
GUI_ID_DEBUG_BUFFERS_BOUNDING_BOXES,
GUI_ID_DEBUG_ALL,
GUI_ID_MODEL_MATERIAL_SOLID,
GUI_ID_MODEL_MATERIAL_TRANSPARENT,
GUI_ID_MODEL_MATERIAL_REFLECTION,
GUI_ID_CAMERA_MAYA,
GUI_ID_CAMERA_FIRST_PERSON,
GUI_ID_POSITION_TEXT,
GUI_ID_ABOUT,
GUI_ID_QUIT,
GUI_ID_TEXTUREFILTER,
GUI_ID_SKIN_TRANSPARENCY,
GUI_ID_SKIN_ANIMATION_FPS,
GUI_ID_BUTTON_SET_SCALE,
GUI_ID_BUTTON_SCALE_MUL10,
GUI_ID_BUTTON_SCALE_DIV10,
GUI_ID_BUTTON_OPEN_MODEL,
GUI_ID_BUTTON_SHOW_ABOUT,
GUI_ID_BUTTON_SHOW_TOOLBOX,
GUI_ID_BUTTON_SELECT_ARCHIVE,
GUI_ID_ANIMATION_INFO,
MAX_FRAMERATE = 80,
DEFAULT_FRAMERATE = 30
};
Toggle between various cameras
void setActiveCamera(scene::ICameraSceneNode* newActive)
{
if (0 == Device)
return;
scene::ICameraSceneNode * active = Device->getSceneManager()->getActiveCamera();
active->setInputReceiverEnabled(false);
newActive->setInputReceiverEnabled(true);
Device->getSceneManager()->setActiveCamera(newActive);
}
Set the skin transparency by changing the alpha values of all skin-colors
void setSkinTransparency(s32 alpha, irr::gui::IGUISkin * skin)
{
for (s32 i=0; i<irr::gui::EGDC_COUNT ; ++i)
{
video::SColor col = skin->getColor((EGUI_DEFAULT_COLOR)i);
col.setAlpha(alpha);
skin->setColor((EGUI_DEFAULT_COLOR)i, col);
}
}
Update the display of the model scaling
void updateScaleInfo(scene::ISceneNode* model)
{
IGUIElement* toolboxWnd = Device->getGUIEnvironment()->getRootGUIElement()->getElementFromId(GUI_ID_DIALOG_ROOT_WINDOW, true);
if (!toolboxWnd)
return;
if (!model)
{
toolboxWnd->getElementFromId(GUI_ID_X_SCALE, true)->setText( L"-" );
toolboxWnd->getElementFromId(GUI_ID_Y_SCALE, true)->setText( L"-" );
toolboxWnd->getElementFromId(GUI_ID_Z_SCALE, true)->setText( L"-" );
}
else
{
core::vector3df scale = model->getScale();
toolboxWnd->getElementFromId(GUI_ID_X_SCALE, true)->setText( core::stringw(scale.X).c_str() );
toolboxWnd->getElementFromId(GUI_ID_Y_SCALE, true)->setText( core::stringw(scale.Y).c_str() );
toolboxWnd->getElementFromId(GUI_ID_Z_SCALE, true)->setText( core::stringw(scale.Z).c_str() );
}
}
Function showAboutText() displays a messagebox with a caption and a message text. The texts will be stored in the MessageText and Caption variables at startup.
void showAboutText()
{
Device->getGUIEnvironment()->addMessageBox(
Caption.c_str(), MessageText.c_str());
}
Function loadModel() loads a model and displays it using addAnimatedMeshSceneNode() and the scene manager. Nothing difficult. It also displays a short message box if the model could not be loaded.
void loadModel(const io::path& filename)
{
io::path extension;
core::getFileNameExtension(extension, filename);
extension.make_lower();
if (extension == ".jpg" || extension == ".pcx" ||
extension == ".png" || extension == ".ppm" ||
extension == ".pgm" || extension == ".pbm" ||
extension == ".psd" || extension == ".tga" ||
extension == ".bmp" || extension == ".wal" ||
extension == ".rgb" || extension == ".rgba")
{
video::ITexture * texture = Device->getVideoDriver()->findTexture( filename );
if ( texture )
Device->getVideoDriver()->removeTexture(texture);
texture = Device->getVideoDriver()->getTexture( filename );
if ( texture && Model )
{
Model->setMaterialTexture(0, texture);
}
return;
}
else if (extension == ".pk3" || extension == ".zip" || extension == ".pak" || extension == ".npk")
{
Device->getFileSystem()->addFileArchive(filename.c_str());
return;
}
if (Model)
{
Model->remove();
Model = 0;
}
if (extension==".irr")
{
core::array<scene::ISceneNode*> outNodes;
Device->getSceneManager()->loadScene(filename);
Device->getSceneManager()->getSceneNodesFromType(scene::ESNT_ANIMATED_MESH, outNodes);
if (outNodes.size())
Model = outNodes[0];
return;
}
u32 then = Device->getTimer()->getRealTime();
scene::IAnimatedMesh* mesh = Device->getSceneManager()->getMesh( filename.c_str() );
Device->getLogger()->log("Loading time (ms): ", core::stringc(Device->getTimer()->getRealTime() - then).c_str());
if (!mesh)
{
if (StartUpModelFile != filename)
Device->getGUIEnvironment()->addMessageBox(
Caption.c_str(), L"The model could not be loaded. " \
L"Maybe it is not a supported file format.");
return;
}
if (Octree)
Model = Device->getSceneManager()->addOctreeSceneNode(mesh->getMesh(0));
else
{
scene::IAnimatedMeshSceneNode* animModel = Device->getSceneManager()->addAnimatedMeshSceneNode(mesh);
Model = animModel;
}
Model->setMaterialFlag(video::EMF_LIGHTING, UseLight);
Model->setMaterialFlag(video::EMF_NORMALIZE_NORMALS, UseLight);
Model->setDebugDataVisible(scene::EDS_OFF);
gui::IGUIContextMenu* menu = (gui::IGUIContextMenu*)Device->getGUIEnvironment()->getRootGUIElement()->getElementFromId(GUI_ID_TOGGLE_DEBUG_INFO, true);
if (menu)
for(int item = 1; item < 6; ++item)
menu->setItemChecked(item, false);
updateScaleInfo(Model);
}
Function createToolBox() creates a toolbox window. In this simple mesh viewer, this toolbox only contains controls to change the scale and animation speed of the model, and a control to set the transparency of the GUI elements.
void createToolBox()
{
IGUIEnvironment* env = Device->getGUIEnvironment();
IGUIElement* root = env->getRootGUIElement();
IGUIElement* e = root->getElementFromId(GUI_ID_DIALOG_ROOT_WINDOW, true);
if (e)
e->remove();
IGUIWindow* wnd = env->addWindow(core::rect<s32>(600,45,800,480),
false, L"Toolset", 0, GUI_ID_DIALOG_ROOT_WINDOW);
IGUITabControl* tab = env->addTabControl(
core::rect<s32>(2,20,800-602,480-7), wnd, true, true);
IGUITab* t1 = tab->addTab(L"Config");
env->addStaticText(L"Scale:",
core::rect<s32>(10,20,60,45), false, false, t1);
env->addStaticText(L"X:", core::rect<s32>(22,48,40,66), false, false, t1);
env->addEditBox(L"1.0", core::rect<s32>(40,46,130,66), true, t1, GUI_ID_X_SCALE);
env->addStaticText(L"Y:", core::rect<s32>(22,82,40,96), false, false, t1);
env->addEditBox(L"1.0", core::rect<s32>(40,76,130,96), true, t1, GUI_ID_Y_SCALE);
env->addStaticText(L"Z:", core::rect<s32>(22,108,40,126), false, false, t1);
env->addEditBox(L"1.0", core::rect<s32>(40,106,130,126), true, t1, GUI_ID_Z_SCALE);
env->addButton(core::rect<s32>(10,134,85,165), t1, GUI_ID_BUTTON_SET_SCALE, L"Set");
env->addButton(core::rect<s32>(65,20,95,40), t1, GUI_ID_BUTTON_SCALE_MUL10, L"* 10");
env->addButton(core::rect<s32>(100,20,130,40), t1, GUI_ID_BUTTON_SCALE_DIV10, L"* 0.1");
updateScaleInfo(Model);
env->addStaticText(L"GUI Transparency Control:",
core::rect<s32>(10,200,150,225), true, false, t1);
IGUIScrollBar* scrollbar = env->addScrollBar(true,
core::rect<s32>(10,225,150,240), t1, GUI_ID_SKIN_TRANSPARENCY);
scrollbar->setMax(255);
scrollbar->setPos(255);
env->addStaticText(L":", core::rect<s32>(10,240,150,265), true, false, t1);
env->addStaticText(L"Framerate:",
core::rect<s32>(12,240,75,265), false, false, t1);
env->addStaticText(L"", core::rect<s32>(75,240,200,265), false, false, t1,
GUI_ID_ANIMATION_INFO);
scrollbar = env->addScrollBar(true,
core::rect<s32>(10,265,150,280), t1, GUI_ID_SKIN_ANIMATION_FPS);
scrollbar->setMax(MAX_FRAMERATE);
scrollbar->setMin(-MAX_FRAMERATE);
scrollbar->setPos(DEFAULT_FRAMERATE);
scrollbar->setSmallStep(1);
}
Function updateToolBox() is called each frame to update dynamic information in the toolbox.
void updateToolBox()
{
IGUIEnvironment* env = Device->getGUIEnvironment();
IGUIElement* root = env->getRootGUIElement();
IGUIElement* dlg = root->getElementFromId(GUI_ID_DIALOG_ROOT_WINDOW, true);
if (!dlg )
return;
IGUIStaticText * aniInfo = (IGUIStaticText *)(dlg->getElementFromId(GUI_ID_ANIMATION_INFO, true));
if (aniInfo)
{
if ( Model && scene::ESNT_ANIMATED_MESH == Model->getType() )
{
scene::IAnimatedMeshSceneNode* animatedModel = (scene::IAnimatedMeshSceneNode*)Model;
core::stringw str( (s32)core::round_(animatedModel->getAnimationSpeed()) );
str += L" Frame: ";
str += core::stringw((s32)animatedModel->getFrameNr());
aniInfo->setText(str.c_str());
}
else
aniInfo->setText(L"");
}
}
void onKillFocus()
{
const core::list<scene::ISceneNodeAnimator*>& animators = Camera[1]->getAnimators();
core::list<irr::scene::ISceneNodeAnimator*>::ConstIterator iter = animators.begin();
while ( iter != animators.end() )
{
if ( (*iter)->getType() == scene::ESNAT_CAMERA_FPS )
{
scene::ISceneNodeAnimatorCameraFPS * fpsAnimator = static_cast<scene::ISceneNodeAnimatorCameraFPS*>(*iter);
const core::array<SKeyMap>& keyMap = fpsAnimator->getKeyMap();
for ( irr::u32 i=0; i< keyMap.size(); ++i )
{
irr::SEvent event;
event.EventType = EET_KEY_INPUT_EVENT;
event.KeyInput.Key = keyMap[i].KeyCode;
event.KeyInput.PressedDown = false;
fpsAnimator->OnEvent(event);
}
}
++iter;
}
}
Function hasModalDialog() checks if we currently have a modal dialog open.
bool hasModalDialog()
{
if ( !Device )
return false;
IGUIEnvironment* env = Device->getGUIEnvironment();
IGUIElement * focused = env->getFocus();
while ( focused )
{
if ( focused->isVisible() && focused->hasType(EGUIET_MODAL_SCREEN) )
return true;
focused = focused->getParent();
}
return false;
}
To get all the events sent by the GUI Elements, we need to create an event receiver. This one is really simple. If an event occurs, it checks the id of the caller and the event type, and starts an action based on these values. For example, if a menu item with id GUI_ID_OPEN_MODEL was selected, it opens a file-open-dialog.
class MyEventReceiver : public IEventReceiver
{
public:
virtual bool OnEvent(const SEvent& event)
{
if (event.EventType == EET_KEY_INPUT_EVENT &&
event.KeyInput.PressedDown == false)
{
if ( OnKeyUp(event.KeyInput.Key) )
return true;
}
if (event.EventType == EET_GUI_EVENT)
{
s32 id = event.GUIEvent.Caller->getID();
IGUIEnvironment* env = Device->getGUIEnvironment();
switch(event.GUIEvent.EventType)
{
case EGET_MENU_ITEM_SELECTED:
OnMenuItemSelected( (IGUIContextMenu*)event.GUIEvent.Caller );
break;
case EGET_FILE_SELECTED:
{
IGUIFileOpenDialog* dialog =
(IGUIFileOpenDialog*)event.GUIEvent.Caller;
loadModel(dialog->getFileNameP());
}
break;
case EGET_SCROLL_BAR_CHANGED:
if (id == GUI_ID_SKIN_TRANSPARENCY)
{
const s32 pos = ((IGUIScrollBar*)event.GUIEvent.Caller)->getPos();
setSkinTransparency(pos, env->getSkin());
}
else if (id == GUI_ID_SKIN_ANIMATION_FPS)
{
const s32 pos = ((IGUIScrollBar*)event.GUIEvent.Caller)->getPos();
if (scene::ESNT_ANIMATED_MESH == Model->getType())
((scene::IAnimatedMeshSceneNode*)Model)->setAnimationSpeed((f32)pos);
}
break;
case EGET_COMBO_BOX_CHANGED:
if (id == GUI_ID_TEXTUREFILTER)
{
OnTextureFilterSelected( (IGUIComboBox*)event.GUIEvent.Caller );
}
break;
case EGET_BUTTON_CLICKED:
switch(id)
{
case GUI_ID_BUTTON_SET_SCALE:
{
gui::IGUIElement* root = env->getRootGUIElement();
core::vector3df scale;
core::stringc s;
s = root->getElementFromId(GUI_ID_X_SCALE, true)->getText();
scale.X = (f32)atof(s.c_str());
s = root->getElementFromId(GUI_ID_Y_SCALE, true)->getText();
scale.Y = (f32)atof(s.c_str());
s = root->getElementFromId(GUI_ID_Z_SCALE, true)->getText();
scale.Z = (f32)atof(s.c_str());
if (Model)
Model->setScale(scale);
updateScaleInfo(Model);
}
break;
case GUI_ID_BUTTON_SCALE_MUL10:
if (Model)
Model->setScale(Model->getScale()*10.f);
updateScaleInfo(Model);
break;
case GUI_ID_BUTTON_SCALE_DIV10:
if (Model)
Model->setScale(Model->getScale()*0.1f);
updateScaleInfo(Model);
break;
case GUI_ID_BUTTON_OPEN_MODEL:
env->addFileOpenDialog(L"Please select a model file to open");
break;
case GUI_ID_BUTTON_SHOW_ABOUT:
showAboutText();
break;
case GUI_ID_BUTTON_SHOW_TOOLBOX:
createToolBox();
break;
case GUI_ID_BUTTON_SELECT_ARCHIVE:
env->addFileOpenDialog(L"Please select your game archive/directory");
break;
}
break;
default:
break;
}
}
return false;
}
Handle key-up events
bool OnKeyUp(irr::EKEY_CODE keyCode)
{
if ( hasModalDialog() )
return false;
if (keyCode == irr::KEY_ESCAPE)
{
if (Device)
{
scene::ICameraSceneNode * camera =
Device->getSceneManager()->getActiveCamera();
if (camera)
{
camera->setInputReceiverEnabled( !camera->isInputReceiverEnabled() );
}
return true;
}
}
else if (keyCode == irr::KEY_F1)
{
if (Device)
{
IGUIElement* elem = Device->getGUIEnvironment()->getRootGUIElement()->getElementFromId(GUI_ID_POSITION_TEXT);
if (elem)
elem->setVisible(!elem->isVisible());
}
}
else if (keyCode == irr::KEY_KEY_M)
{
if (Device)
Device->minimizeWindow();
}
else if (keyCode == irr::KEY_KEY_L)
{
UseLight=!UseLight;
if (Model)
{
Model->setMaterialFlag(video::EMF_LIGHTING, UseLight);
Model->setMaterialFlag(video::EMF_NORMALIZE_NORMALS, UseLight);
}
}
return false;
}
Handle "menu item clicked" events.
void OnMenuItemSelected( IGUIContextMenu* menu )
{
s32 id = menu->getItemCommandId(menu->getSelectedItem());
IGUIEnvironment* env = Device->getGUIEnvironment();
switch(id)
{
case GUI_ID_OPEN_MODEL:
env->addFileOpenDialog(L"Please select a model file to open");
break;
case GUI_ID_SET_MODEL_ARCHIVE:
env->addFileOpenDialog(L"Please select your game archive/directory");
break;
case GUI_ID_LOAD_AS_OCTREE:
Octree = !Octree;
menu->setItemChecked(menu->getSelectedItem(), Octree);
break;
case GUI_ID_QUIT:
Device->closeDevice();
break;
case GUI_ID_SKY_BOX_VISIBLE:
menu->setItemChecked(menu->getSelectedItem(), !menu->isItemChecked(menu->getSelectedItem()));
SkyBox->setVisible(!SkyBox->isVisible());
break;
case GUI_ID_DEBUG_OFF:
menu->setItemChecked(menu->getSelectedItem()+1, false);
menu->setItemChecked(menu->getSelectedItem()+2, false);
menu->setItemChecked(menu->getSelectedItem()+3, false);
menu->setItemChecked(menu->getSelectedItem()+4, false);
menu->setItemChecked(menu->getSelectedItem()+5, false);
menu->setItemChecked(menu->getSelectedItem()+6, false);
if (Model)
Model->setDebugDataVisible(scene::EDS_OFF);
break;
case GUI_ID_DEBUG_BOUNDING_BOX:
menu->setItemChecked(menu->getSelectedItem(), !menu->isItemChecked(menu->getSelectedItem()));
if (Model)
Model->setDebugDataVisible((scene::E_DEBUG_SCENE_TYPE)(Model->isDebugDataVisible()^scene::EDS_BBOX));
break;
case GUI_ID_DEBUG_NORMALS:
menu->setItemChecked(menu->getSelectedItem(), !menu->isItemChecked(menu->getSelectedItem()));
if (Model)
Model->setDebugDataVisible((scene::E_DEBUG_SCENE_TYPE)(Model->isDebugDataVisible()^scene::EDS_NORMALS));
break;
case GUI_ID_DEBUG_SKELETON:
menu->setItemChecked(menu->getSelectedItem(), !menu->isItemChecked(menu->getSelectedItem()));
if (Model)
Model->setDebugDataVisible((scene::E_DEBUG_SCENE_TYPE)(Model->isDebugDataVisible()^scene::EDS_SKELETON));
break;
case GUI_ID_DEBUG_WIRE_OVERLAY:
menu->setItemChecked(menu->getSelectedItem(), !menu->isItemChecked(menu->getSelectedItem()));
if (Model)
Model->setDebugDataVisible((scene::E_DEBUG_SCENE_TYPE)(Model->isDebugDataVisible()^scene::EDS_MESH_WIRE_OVERLAY));
break;
case GUI_ID_DEBUG_HALF_TRANSPARENT:
menu->setItemChecked(menu->getSelectedItem(), !menu->isItemChecked(menu->getSelectedItem()));
if (Model)
Model->setDebugDataVisible((scene::E_DEBUG_SCENE_TYPE)(Model->isDebugDataVisible()^scene::EDS_HALF_TRANSPARENCY));
break;
case GUI_ID_DEBUG_BUFFERS_BOUNDING_BOXES:
menu->setItemChecked(menu->getSelectedItem(), !menu->isItemChecked(menu->getSelectedItem()));
if (Model)
Model->setDebugDataVisible((scene::E_DEBUG_SCENE_TYPE)(Model->isDebugDataVisible()^scene::EDS_BBOX_BUFFERS));
break;
case GUI_ID_DEBUG_ALL:
menu->setItemChecked(menu->getSelectedItem()-1, true);
menu->setItemChecked(menu->getSelectedItem()-2, true);
menu->setItemChecked(menu->getSelectedItem()-3, true);
menu->setItemChecked(menu->getSelectedItem()-4, true);
menu->setItemChecked(menu->getSelectedItem()-5, true);
menu->setItemChecked(menu->getSelectedItem()-6, true);
if (Model)
Model->setDebugDataVisible(scene::EDS_FULL);
break;
case GUI_ID_ABOUT:
showAboutText();
break;
case GUI_ID_MODEL_MATERIAL_SOLID:
if (Model)
Model->setMaterialType(video::EMT_SOLID);
break;
case GUI_ID_MODEL_MATERIAL_TRANSPARENT:
if (Model)
Model->setMaterialType(video::EMT_TRANSPARENT_ADD_COLOR);
break;
case GUI_ID_MODEL_MATERIAL_REFLECTION:
if (Model)
Model->setMaterialType(video::EMT_SPHERE_MAP);
break;
case GUI_ID_CAMERA_MAYA:
setActiveCamera(Camera[0]);
break;
case GUI_ID_CAMERA_FIRST_PERSON:
setActiveCamera(Camera[1]);
break;
}
}
Handle the event that one of the texture-filters was selected in the corresponding combobox.
void OnTextureFilterSelected( IGUIComboBox* combo )
{
s32 pos = combo->getSelected();
switch (pos)
{
case 0:
if (Model)
{
Model->setMaterialFlag(video::EMF_BILINEAR_FILTER, false);
Model->setMaterialFlag(video::EMF_TRILINEAR_FILTER, false);
Model->setMaterialFlag(video::EMF_ANISOTROPIC_FILTER, false);
}
break;
case 1:
if (Model)
{
Model->setMaterialFlag(video::EMF_BILINEAR_FILTER, true);
Model->setMaterialFlag(video::EMF_TRILINEAR_FILTER, false);
}
break;
case 2:
if (Model)
{
Model->setMaterialFlag(video::EMF_BILINEAR_FILTER, false);
Model->setMaterialFlag(video::EMF_TRILINEAR_FILTER, true);
}
break;
case 3:
if (Model)
{
Model->setMaterialFlag(video::EMF_ANISOTROPIC_FILTER, true);
}
break;
case 4:
if (Model)
{
Model->setMaterialFlag(video::EMF_ANISOTROPIC_FILTER, false);
}
break;
}
}
};
Most of the hard work is done. We only need to create the Irrlicht Engine device and all the buttons, menus and toolbars. We start up the engine as usual, using createDevice(). To make our application catch events, we set our event receiver as a parameter. As you can see, there is also a call to IrrlichtDevice::setResizable(). This makes the render window resizable, which is quite useful for a mesh viewer.
int main(int argc, char* argv[])
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
MyEventReceiver receiver;
Device = createDevice(driverType, core::dimension2d<u32>(800, 600),
16, false, false, false, &receiver);
if (Device == 0)
return 1;
Device->setResizable(true);
Device->setWindowCaption(L"Irrlicht Engine - Loading...");
video::IVideoDriver* driver = Device->getVideoDriver();
IGUIEnvironment* env = Device->getGUIEnvironment();
scene::ISceneManager* smgr = Device->getSceneManager();
smgr->getParameters()->setAttribute(scene::COLLADA_CREATE_SCENE_INSTANCES, true);
driver->setTextureCreationFlag(video::ETCF_ALWAYS_32_BIT, true);
smgr->addLightSceneNode(0, core::vector3df(200,200,200),
video::SColorf(1.0f,1.0f,1.0f),2000);
smgr->setAmbientLight(video::SColorf(0.3f,0.3f,0.3f));
Device->getFileSystem()->addFileArchive(getExampleMediaPath());
The next step is to read the configuration file. It is stored in the xml format and looks a little bit like this:
<?xml version="1.0"?>
<config>
<startUpModel file="some filename" />
<messageText caption="Irrlicht Engine Mesh Viewer">
Hello!
</messageText>
</config>
We need the data stored in there to be written into the global variables StartUpModelFile, MessageText and Caption. This is now done using the Irrlicht Engine integrated XML parser:
io::IXMLReader* xml = Device->getFileSystem()->createXMLReader( L"config.xml");
while(xml && xml->read())
{
switch(xml->getNodeType())
{
case io::EXN_TEXT:
MessageText = xml->getNodeData();
break;
case io::EXN_ELEMENT:
{
if (core::stringw("startUpModel") == xml->getNodeName())
StartUpModelFile = xml->getAttributeValue(L"file");
else
if (core::stringw("messageText") == xml->getNodeName())
Caption = xml->getAttributeValue(L"caption");
}
break;
default:
break;
}
}
if (xml)
xml->drop();
if (argc > 1)
StartUpModelFile = argv[1];
That wasn't difficult. Now we'll set a nicer font and create the menu. It is possible to create submenus for every menu item. The call menu->addItem(L"File", -1, true, true); for example adds a new menu item with the name "File" and the id -1. The third parameter says that the menu item should be enabled, and the last one that it should have a submenu. The submenu can now be accessed with menu->getSubMenu(0), because the "File" entry is the menu item with index 0.
IGUISkin* skin = env->getSkin();
IGUIFont* font = env->getFont("fonthaettenschweiler.bmp");
if (font)
skin->setFont(font);
gui::IGUIContextMenu* menu = env->addMenu();
menu->addItem(L"File", -1, true, true);
menu->addItem(L"View", -1, true, true);
menu->addItem(L"Camera", -1, true, true);
menu->addItem(L"Help", -1, true, true);
gui::IGUIContextMenu* submenu;
submenu = menu->getSubMenu(0);
submenu->addItem(L"Open Model File & Texture...", GUI_ID_OPEN_MODEL);
submenu->addItem(L"Set Model Archive...", GUI_ID_SET_MODEL_ARCHIVE);
submenu->addItem(L"Load as Octree", GUI_ID_LOAD_AS_OCTREE);
submenu->addSeparator();
submenu->addItem(L"Quit", GUI_ID_QUIT);
submenu = menu->getSubMenu(1);
submenu->addItem(L"sky box visible", GUI_ID_SKY_BOX_VISIBLE, true, false, true);
submenu->addItem(L"toggle model debug information", GUI_ID_TOGGLE_DEBUG_INFO, true, true);
submenu->addItem(L"model material", -1, true, true );
submenu = submenu->getSubMenu(1);
submenu->addItem(L"Off", GUI_ID_DEBUG_OFF);
submenu->addItem(L"Bounding Box", GUI_ID_DEBUG_BOUNDING_BOX);
submenu->addItem(L"Normals", GUI_ID_DEBUG_NORMALS);
submenu->addItem(L"Skeleton", GUI_ID_DEBUG_SKELETON);
submenu->addItem(L"Wire overlay", GUI_ID_DEBUG_WIRE_OVERLAY);
submenu->addItem(L"Half-Transparent", GUI_ID_DEBUG_HALF_TRANSPARENT);
submenu->addItem(L"Buffers bounding boxes", GUI_ID_DEBUG_BUFFERS_BOUNDING_BOXES);
submenu->addItem(L"All", GUI_ID_DEBUG_ALL);
submenu = menu->getSubMenu(1)->getSubMenu(2);
submenu->addItem(L"Solid", GUI_ID_MODEL_MATERIAL_SOLID);
submenu->addItem(L"Transparent", GUI_ID_MODEL_MATERIAL_TRANSPARENT);
submenu->addItem(L"Reflection", GUI_ID_MODEL_MATERIAL_REFLECTION);
submenu = menu->getSubMenu(2);
submenu->addItem(L"Maya Style", GUI_ID_CAMERA_MAYA);
submenu->addItem(L"First Person", GUI_ID_CAMERA_FIRST_PERSON);
submenu = menu->getSubMenu(3);
submenu->addItem(L"About", GUI_ID_ABOUT);
Below the menu we want a toolbar, onto which we can place colored buttons and important looking stuff like a senseless combobox.
gui::IGUIToolBar* bar = env->addToolBar();
video::ITexture* image = driver->getTexture("open.png");
bar->addButton(GUI_ID_BUTTON_OPEN_MODEL, 0, L"Open a model",image, 0, false, true);
image = driver->getTexture("tools.png");
bar->addButton(GUI_ID_BUTTON_SHOW_TOOLBOX, 0, L"Open Toolset",image, 0, false, true);
image = driver->getTexture("zip.png");
bar->addButton(GUI_ID_BUTTON_SELECT_ARCHIVE, 0, L"Set Model Archive",image, 0, false, true);
image = driver->getTexture("help.png");
bar->addButton(GUI_ID_BUTTON_SHOW_ABOUT, 0, L"Open Help", image, 0, false, true);
gui::IGUIComboBox* box = env->addComboBox(core::rect<s32>(250,4,350,23), bar, GUI_ID_TEXTUREFILTER);
box->addItem(L"No filtering");
box->addItem(L"Bilinear");
box->addItem(L"Trilinear");
box->addItem(L"Anisotropic");
box->addItem(L"Isotropic");
To make the editor look a little bit better, we disable transparent GUI elements and add an Irrlicht Engine logo. In addition, a static text showing the current frames-per-second value is created and the window caption is changed.
for (s32 i=0; i<gui::EGDC_COUNT ; ++i)
{
video::SColor col = env->getSkin()->getColor((gui::EGUI_DEFAULT_COLOR)i);
col.setAlpha(255);
env->getSkin()->setColor((gui::EGUI_DEFAULT_COLOR)i, col);
}
createToolBox();
IGUIStaticText* fpstext = env->addStaticText(L"",
core::rect<s32>(400,4,570,23), true, false, bar);
IGUIStaticText* postext = env->addStaticText(L"",
core::rect<s32>(10,50,470,80),false, false, 0, GUI_ID_POSITION_TEXT);
postext->setVisible(false);
Caption += " - [";
Caption += driver->getName();
Caption += "]";
Device->setWindowCaption(Caption.c_str());
That's nearly the whole application. We simply show the about message box at startup and load the first model. To make everything look better, a skybox is created and a user-controlled camera is added to make the application a little bit more interactive. Finally, everything is drawn in a standard drawing loop.
if (argc==1)
showAboutText();
loadModel(StartUpModelFile.c_str());
SkyBox = smgr->addSkyBoxSceneNode(
driver->getTexture("irrlicht2_up.jpg"),
driver->getTexture("irrlicht2_dn.jpg"),
driver->getTexture("irrlicht2_lf.jpg"),
driver->getTexture("irrlicht2_rt.jpg"),
driver->getTexture("irrlicht2_ft.jpg"),
driver->getTexture("irrlicht2_bk.jpg"));
Camera[0] = smgr->addCameraSceneNodeMaya();
Camera[0]->setFarValue(20000.f);
Camera[0]->setTarget(core::vector3df(0,30,0));
Camera[1] = smgr->addCameraSceneNodeFPS();
Camera[1]->setFarValue(20000.f);
Camera[1]->setPosition(core::vector3df(0,0,-70));
Camera[1]->setTarget(core::vector3df(0,30,0));
setActiveCamera(Camera[0]);
IGUIImage *img =
env->addImage(driver->getTexture("irrlichtlogo2.png"),
core::position2d<s32>(10, driver->getScreenSize().Height - 128));
img->setAlignment(EGUIA_UPPERLEFT, EGUIA_UPPERLEFT,
EGUIA_LOWERRIGHT, EGUIA_LOWERRIGHT);
bool hasFocus = Device->isWindowFocused();
while(Device->run() && driver)
{
bool focused = Device->isWindowFocused();
if ( hasFocus && !focused )
onKillFocus();
hasFocus = focused;
if (Device->isWindowActive())
{
driver->beginScene(true, true, video::SColor(150,50,50,50));
smgr->drawAll();
env->drawAll();
driver->endScene();
core::stringw str(L"FPS: ");
str.append(core::stringw(driver->getFPS()));
str += L" Tris: ";
str.append(core::stringw(driver->getPrimitiveCountDrawn()));
fpstext->setText(str.c_str());
scene::ICameraSceneNode* cam = Device->getSceneManager()->getActiveCamera();
str = L"Pos: ";
str.append(core::stringw(cam->getPosition().X));
str += L" ";
str.append(core::stringw(cam->getPosition().Y));
str += L" ";
str.append(core::stringw(cam->getPosition().Z));
str += L" Tgt: ";
str.append(core::stringw(cam->getTarget().X));
str += L" ";
str.append(core::stringw(cam->getTarget().Y));
str += L" ";
str.append(core::stringw(cam->getTarget().Z));
postext->setText(str.c_str());
updateToolBox();
}
else
Device->yield();
}
Device->drop();
return 0;
}
+ </config>
We need the data stored there to be written into the global variables StartUpModelFile, MessageText, and Caption. This is done using the XML parser integrated into the Irrlicht Engine:
io::IXMLReader* xml = Device->getFileSystem()->createXMLReader( L"config.xml");
while(xml && xml->read())
{
switch(xml->getNodeType())
{
case io::EXN_TEXT:
MessageText = xml->getNodeData();
break;
case io::EXN_ELEMENT:
{
if (core::stringw("startUpModel") == xml->getNodeName())
StartUpModelFile = xml->getAttributeValue(L"file");
else
if (core::stringw("messageText") == xml->getNodeName())
Caption = xml->getAttributeValue(L"caption");
}
break;
default:
break;
}
}
if (xml)
xml->drop();
if (argc > 1)
StartUpModelFile = argv[1];
IGUISkin* skin = env->getSkin();
IGUIFont* font = env->getFont("fonthaettenschweiler.bmp");
if (font)
skin->setFont(font);
Now create the menu. It is possible to create submenus for every menu item. The call menu->addItem(L"File", -1, true, true); for example adds a new menu item with the name "File" and the id -1. The following parameter says that the menu item should be enabled, and the last one says that there should be a submenu. The submenu can now be accessed with menu->getSubMenu(0), because the "File" entry is the menu item with index 0.
gui::IGUIContextMenu* menu = env->addMenu();
menu->addItem(L"File", -1, true, true);
menu->addItem(L"View", -1, true, true);
menu->addItem(L"Camera", -1, true, true);
menu->addItem(L"Help", -1, true, true);
gui::IGUIContextMenu* submenu;
submenu = menu->getSubMenu(0);
submenu->addItem(L"Open Model File & Texture...", GUI_ID_OPEN_MODEL);
submenu->addItem(L"Set Model Archive...", GUI_ID_SET_MODEL_ARCHIVE);
submenu->addItem(L"Load as Octree", GUI_ID_LOAD_AS_OCTREE);
submenu->addSeparator();
submenu->addItem(L"Quit", GUI_ID_QUIT);
submenu = menu->getSubMenu(1);
submenu->addItem(L"sky box visible", GUI_ID_SKY_BOX_VISIBLE, true, false, true);
submenu->addItem(L"toggle model debug information", GUI_ID_TOGGLE_DEBUG_INFO, true, true);
submenu->addItem(L"model material", -1, true, true );
submenu = submenu->getSubMenu(1);
submenu->addItem(L"Off", GUI_ID_DEBUG_OFF);
submenu->addItem(L"Bounding Box", GUI_ID_DEBUG_BOUNDING_BOX);
submenu->addItem(L"Normals", GUI_ID_DEBUG_NORMALS);
submenu->addItem(L"Skeleton", GUI_ID_DEBUG_SKELETON);
submenu->addItem(L"Wire overlay", GUI_ID_DEBUG_WIRE_OVERLAY);
submenu->addItem(L"Half-Transparent", GUI_ID_DEBUG_HALF_TRANSPARENT);
submenu->addItem(L"Buffers bounding boxes", GUI_ID_DEBUG_BUFFERS_BOUNDING_BOXES);
submenu->addItem(L"All", GUI_ID_DEBUG_ALL);
submenu = menu->getSubMenu(1)->getSubMenu(2);
submenu->addItem(L"Solid", GUI_ID_MODEL_MATERIAL_SOLID);
submenu->addItem(L"Transparent", GUI_ID_MODEL_MATERIAL_TRANSPARENT);
submenu->addItem(L"Reflection", GUI_ID_MODEL_MATERIAL_REFLECTION);
submenu = menu->getSubMenu(2);
submenu->addItem(L"Maya Style", GUI_ID_CAMERA_MAYA);
submenu->addItem(L"First Person", GUI_ID_CAMERA_FIRST_PERSON);
submenu = menu->getSubMenu(3);
submenu->addItem(L"About", GUI_ID_ABOUT);
Below the menu we want a toolbar, onto which we can place colored buttons and important looking stuff like a senseless combobox.
gui::IGUIToolBar* bar = env->addToolBar();
video::ITexture* image = driver->getTexture("open.png");
bar->addButton(GUI_ID_BUTTON_OPEN_MODEL, 0, L"Open a model",image, 0, false, true);
image = driver->getTexture("tools.png");
bar->addButton(GUI_ID_BUTTON_SHOW_TOOLBOX, 0, L"Open Toolset",image, 0, false, true);
image = driver->getTexture("zip.png");
bar->addButton(GUI_ID_BUTTON_SELECT_ARCHIVE, 0, L"Set Model Archive",image, 0, false, true);
image = driver->getTexture("help.png");
bar->addButton(GUI_ID_BUTTON_SHOW_ABOUT, 0, L"Open Help", image, 0, false, true);
gui::IGUIComboBox* box = env->addComboBox(core::rect<s32>(250,4,350,23), bar, GUI_ID_TEXTUREFILTER);
box->addItem(L"No filtering");
box->addItem(L"Bilinear");
box->addItem(L"Trilinear");
box->addItem(L"Anisotropic");
box->addItem(L"Isotropic");
To make the editor look a little bit better, we disable transparent GUI elements and add an Irrlicht Engine logo. In addition, a static text showing the current frames-per-second value is created and the window caption is changed.
for (s32 i=0; i<gui::EGDC_COUNT ; ++i)
{
video::SColor col = env->getSkin()->getColor((gui::EGUI_DEFAULT_COLOR)i);
col.setAlpha(255);
env->getSkin()->setColor((gui::EGUI_DEFAULT_COLOR)i, col);
}
createToolBox();
IGUIStaticText* fpstext = env->addStaticText(L"",
core::rect<s32>(400,4,570,23), true, false, bar);
IGUIStaticText* postext = env->addStaticText(L"",
core::rect<s32>(10,50,470,80),false, false, 0, GUI_ID_POSITION_TEXT);
postext->setVisible(false);
Caption += " - [";
Caption += driver->getName();
Caption += "]";
Device->setWindowCaption(Caption.c_str());
Now we show the about message box at startup and load the first model. To make everything look better, a skybox is created. We also add a user-controlled camera to make the application more interactive. Finally, everything is drawn in a standard drawing loop.
if (argc==1)
showAboutText();
loadModel(StartUpModelFile.c_str());
SkyBox = smgr->addSkyBoxSceneNode(
driver->getTexture("irrlicht2_up.jpg"),
driver->getTexture("irrlicht2_dn.jpg"),
driver->getTexture("irrlicht2_lf.jpg"),
driver->getTexture("irrlicht2_rt.jpg"),
driver->getTexture("irrlicht2_ft.jpg"),
driver->getTexture("irrlicht2_bk.jpg"));
Camera[0] = smgr->addCameraSceneNodeMaya();
Camera[0]->setFarValue(20000.f);
Camera[0]->setTarget(core::vector3df(0,30,0));
Camera[1] = smgr->addCameraSceneNodeFPS();
Camera[1]->setFarValue(20000.f);
Camera[1]->setPosition(core::vector3df(0,0,-70));
Camera[1]->setTarget(core::vector3df(0,30,0));
setActiveCamera(Camera[0]);
IGUIImage *img =
env->addImage(driver->getTexture("irrlichtlogo3.png"),
core::position2d<s32>(10, driver->getScreenSize().Height - 128));
img->setAlignment(EGUIA_UPPERLEFT, EGUIA_UPPERLEFT,
EGUIA_LOWERRIGHT, EGUIA_LOWERRIGHT);
bool hasFocus = Device->isWindowFocused();
while(Device->run() && driver)
{
bool focused = Device->isWindowFocused();
if ( hasFocus && !focused )
onKillFocus();
hasFocus = focused;
if (Device->isWindowActive())
{
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, video::SColor(150,50,50,50));
smgr->drawAll();
env->drawAll();
driver->endScene();
core::stringw str(L"FPS: ");
str.append(core::stringw(driver->getFPS()));
str += L" Tris: ";
str.append(core::stringw(driver->getPrimitiveCountDrawn()));
fpstext->setText(str.c_str());
scene::ICameraSceneNode* cam = Device->getSceneManager()->getActiveCamera();
str = L"Pos: ";
str.append(core::stringw(cam->getPosition().X));
str += L" ";
str.append(core::stringw(cam->getPosition().Y));
str += L" ";
str.append(core::stringw(cam->getPosition().Z));
str += L" Tgt: ";
str.append(core::stringw(cam->getTarget().X));
str += L" ";
str.append(core::stringw(cam->getTarget().Y));
str += L" ";
str.append(core::stringw(cam->getTarget().Z));
postext->setText(str.c_str());
updateToolBox();
}
else
Device->yield();
}
Device->drop();
return 0;
}
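The loop above calls onKillFocus() whenever the window loses focus; that helper is defined earlier in the example and is not part of this excerpt. A minimal sketch of what such a handler typically does (the body shown here is an assumption, not the example's exact code) is to stop the active camera from processing input, so the FPS camera does not keep reacting to stale key states:
// Hypothetical sketch of a focus-loss handler: disable camera input while unfocused.
void onKillFocus()
{
	scene::ICameraSceneNode* cam = Device->getSceneManager()->getActiveCamera();
	if (cam)
		cam->setInputReceiverEnabled(false);
}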
diff --git a/examples/10.Shaders/tutorial.html b/examples/10.Shaders/tutorial.html
index b2dc71f6..7b63cea5 100644
--- a/examples/10.Shaders/tutorial.html
+++ b/examples/10.Shaders/tutorial.html
@@ -195,13 +195,13 @@ tr.heading h2 {
-
+
-
This tutorial shows how to use shaders for D3D8, D3D9, OpenGL, and Cg with the engine and how to create new material types with them. It also shows how to disable the generation of mipmaps at texture loading, and how to use text scene nodes.
-
This tutorial does not explain how shaders work. I would recommend reading the D3D, OpenGL, or Cg documentation, searching for a tutorial, or reading a book about this.
-
At first, we need to include all headers and do the stuff we always do, like in nearly all other tutorials:
#include <irrlicht.h>
#include <iostream>
#include "driverChoice.h"
using namespace irr;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
Because we want to use some interesting shaders in this tutorial, we need to set some data for them so they can compute nice colors. In this example, we'll use a simple vertex shader which will calculate the color of the vertex based on the position of the camera. For this, the shader needs the following data: the inverted world matrix for transforming the normal, the clip matrix for transforming the position, the camera position and the world position of the object for the calculation of the angle of light, and the color of the light. To be able to tell the shader all this data every frame, we have to derive a class from the IShaderConstantSetCallBack interface and override its only method, namely OnSetConstants(). This method will be called every time the material is set. The method setVertexShaderConstant() of the IMaterialRendererServices interface is used to set the data the shader needs. If the user chose to use a high level shader language like HLSL instead of assembler in this example, you have to set the variable name as a parameter instead of the register index.
IrrlichtDevice* device = 0;
bool UseHighLevelShaders = false;
bool UseCgShaders = false;
class MyShaderCallBack : public video::IShaderConstantSetCallBack
{
public:
virtual void OnSetConstants(video::IMaterialRendererServices* services,
s32 userData)
{
video::IVideoDriver* driver = services->getVideoDriver();
core::matrix4 invWorld = driver->getTransform(video::ETS_WORLD);
invWorld.makeInverse();
if (UseHighLevelShaders)
services->setVertexShaderConstant("mInvWorld", invWorld.pointer(), 16);
else
services->setVertexShaderConstant(invWorld.pointer(), 0, 4);
core::matrix4 worldViewProj;
worldViewProj = driver->getTransform(video::ETS_PROJECTION);
worldViewProj *= driver->getTransform(video::ETS_VIEW);
worldViewProj *= driver->getTransform(video::ETS_WORLD);
if (UseHighLevelShaders)
services->setVertexShaderConstant("mWorldViewProj", worldViewProj.pointer(), 16);
else
services->setVertexShaderConstant(worldViewProj.pointer(), 4, 4);
core::vector3df pos = device->getSceneManager()->
getActiveCamera()->getAbsolutePosition();
if (UseHighLevelShaders)
services->setVertexShaderConstant("mLightPos", reinterpret_cast<f32*>(&pos), 3);
else
services->setVertexShaderConstant(reinterpret_cast<f32*>(&pos), 8, 1);
video::SColorf col(0.0f,1.0f,1.0f,0.0f);
if (UseHighLevelShaders)
services->setVertexShaderConstant("mLightColor",
reinterpret_cast<f32*>(&col), 4);
else
services->setVertexShaderConstant(reinterpret_cast<f32*>(&col), 9, 1);
core::matrix4 world = driver->getTransform(video::ETS_WORLD);
world = world.getTransposed();
if (UseHighLevelShaders)
{
services->setVertexShaderConstant("mTransWorld", world.pointer(), 16);
s32 TextureLayerID = 0;
if (UseHighLevelShaders)
services->setPixelShaderConstant("myTexture", &TextureLayerID, 1);
}
else
services->setVertexShaderConstant(world.pointer(), 10, 4);
}
};
The next few lines start up the engine just like in most other tutorials. In addition, we ask the user whether he wants to use high level shaders in this example, provided he selected a driver which is capable of doing so.
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
if (driverType == video::EDT_DIRECT3D9 ||
driverType == video::EDT_OPENGL)
{
char i;
printf("Please press 'y' if you want to use high level shaders.\n");
std::cin >> i;
if (i == 'y')
{
UseHighLevelShaders = true;
printf("Please press 'y' if you want to use Cg shaders.\n");
std::cin >> i;
if (i == 'y')
UseCgShaders = true;
}
}
device = createDevice(driverType, core::dimension2d<u32>(640, 480));
if (device == 0)
return 1;
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
gui::IGUIEnvironment* gui = device->getGUIEnvironment();
if (UseCgShaders && !driver->queryFeature(video::EVDF_CG))
{
printf("Warning: No Cg support, disabling.\n");
UseCgShaders=false;
}
Now for the more interesting parts. If we are using Direct3D, we want to load vertex and pixel shader programs; if we have OpenGL, we want to use ARB fragment and vertex programs. I wrote the corresponding programs into the files d3d8.psh, d3d8.vsh, d3d9.psh, d3d9.vsh, opengl.psh and opengl.vsh (and d3d9.hlsl, opengl.frag and opengl.vert for the high level variants). We only need the right filenames now. This is done in the following switch. Note that it is not necessary to write the shaders into text files, like in this example. You can even write the shaders directly as strings into the cpp source file, and later use addShaderMaterial() instead of addShaderMaterialFromFiles().
io::path vsFileName;
io::path psFileName;
switch(driverType)
{
case video::EDT_DIRECT3D8:
psFileName = "../../media/d3d8.psh";
vsFileName = "../../media/d3d8.vsh";
break;
case video::EDT_DIRECT3D9:
if (UseHighLevelShaders)
{
psFileName = "../../media/d3d9.hlsl";
vsFileName = psFileName;
}
else
{
psFileName = "../../media/d3d9.psh";
vsFileName = "../../media/d3d9.vsh";
}
break;
case video::EDT_OPENGL:
if (UseHighLevelShaders)
{
if (!UseCgShaders)
{
psFileName = "../../media/opengl.frag";
vsFileName = "../../media/opengl.vert";
}
else
{
psFileName = "../../media/d3d9.hlsl";
vsFileName = psFileName;
}
}
else
{
psFileName = "../../media/opengl.psh";
vsFileName = "../../media/opengl.vsh";
}
break;
}
In addition, we check if the hardware and the selected renderer are capable of executing the shaders we want. If not, we simply set the filename string to 0. This is not necessary, but useful in this example: for example, if the hardware is able to execute vertex shaders but not pixel shaders, we create a new material which only uses the vertex shader, and no pixel shader. Otherwise, if we told the engine to create this material and the engine saw that the hardware couldn't fulfill the request completely, it would not create any new material at all. So in this example you would see at least the vertex shader in action, without the pixel shader.
if (!driver->queryFeature(video::EVDF_PIXEL_SHADER_1_1) &&
!driver->queryFeature(video::EVDF_ARB_FRAGMENT_PROGRAM_1))
{
device->getLogger()->log("WARNING: Pixel shaders disabled "\
"because of missing driver/hardware support.");
psFileName = "";
}
if (!driver->queryFeature(video::EVDF_VERTEX_SHADER_1_1) &&
!driver->queryFeature(video::EVDF_ARB_VERTEX_PROGRAM_1))
{
device->getLogger()->log("WARNING: Vertex shaders disabled "\
"because of missing driver/hardware support.");
vsFileName = "";
}
Now let's create the new materials. As you may know from previous examples, a material type in the Irrlicht engine is set by simply changing the MaterialType value in the SMaterial struct. This value is just a simple 32 bit value, like video::EMT_SOLID. So we only need the engine to create a new value for us which we can set there. To do this, we get a pointer to the IGPUProgrammingServices and call addShaderMaterialFromFiles(), which returns such a new 32 bit value. That's all.
+
This tutorial shows how to use shaders for D3D9 and OpenGL with the engine and how to create new material types with them. It also shows how to disable the generation of mipmaps at texture loading, and how to use text scene nodes.
+
This tutorial does not explain how shaders work. I would recommend reading the D3D or OpenGL documentation, searching for a tutorial, or reading a book about this.
+
At first, we need to include all headers and do the stuff we always do, like in nearly all other tutorials:
#include <irrlicht.h>
#include <iostream>
#include "driverChoice.h"
#include "exampleHelper.h"
using namespace irr;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
Because we want to use some interesting shaders in this tutorial, we need to set some data for them so they can compute nice colors. In this example, we'll use a simple vertex shader which will calculate the color of the vertex based on the position of the camera. For this, the shader needs the following data: the inverted world matrix for transforming the normal, the clip matrix for transforming the position, the camera position and the world position of the object for the calculation of the angle of light, and the color of the light. To be able to tell the shader all this data every frame, we have to derive a class from the IShaderConstantSetCallBack interface and override its only method, namely OnSetConstants(). This method will be called every time the material is set. The method setVertexShaderConstant() of the IMaterialRendererServices interface is used to set the data the shader needs. If the user chose to use a high level shader language like HLSL instead of assembler in this example, you have to set the variable name as a parameter instead of the register index. In that case the callback below also looks up the constant IDs once via getVertexShaderConstantID() on the first update and reuses them, instead of resolving the names every frame.
IrrlichtDevice* device = 0;
bool UseHighLevelShaders = false;
class MyShaderCallBack : public video::IShaderConstantSetCallBack
{
public:
MyShaderCallBack() : WorldViewProjID(-1), TransWorldID(-1), InvWorldID(-1), PositionID(-1),
ColorID(-1), TextureID(-1), FirstUpdate(true)
{
}
virtual void OnSetConstants(video::IMaterialRendererServices* services,
s32 userData)
{
video::IVideoDriver* driver = services->getVideoDriver();
if (UseHighLevelShaders && FirstUpdate)
{
WorldViewProjID = services->getVertexShaderConstantID("mWorldViewProj");
TransWorldID = services->getVertexShaderConstantID("mTransWorld");
InvWorldID = services->getVertexShaderConstantID("mInvWorld");
PositionID = services->getVertexShaderConstantID("mLightPos");
ColorID = services->getVertexShaderConstantID("mLightColor");
if(driver->getDriverType() == video::EDT_OPENGL)
TextureID = services->getVertexShaderConstantID("myTexture");
FirstUpdate = false;
}
core::matrix4 invWorld = driver->getTransform(video::ETS_WORLD);
invWorld.makeInverse();
if (UseHighLevelShaders)
services->setVertexShaderConstant(InvWorldID, invWorld.pointer(), 16);
else
services->setVertexShaderConstant(invWorld.pointer(), 0, 4);
core::matrix4 worldViewProj;
worldViewProj = driver->getTransform(video::ETS_PROJECTION);
worldViewProj *= driver->getTransform(video::ETS_VIEW);
worldViewProj *= driver->getTransform(video::ETS_WORLD);
if (UseHighLevelShaders)
services->setVertexShaderConstant(WorldViewProjID, worldViewProj.pointer(), 16);
else
services->setVertexShaderConstant(worldViewProj.pointer(), 4, 4);
core::vector3df pos = device->getSceneManager()->
getActiveCamera()->getAbsolutePosition();
if (UseHighLevelShaders)
services->setVertexShaderConstant(PositionID, reinterpret_cast<f32*>(&pos), 3);
else
services->setVertexShaderConstant(reinterpret_cast<f32*>(&pos), 8, 1);
video::SColorf col(0.0f,1.0f,1.0f,0.0f);
if (UseHighLevelShaders)
services->setVertexShaderConstant(ColorID,
reinterpret_cast<f32*>(&col), 4);
else
services->setVertexShaderConstant(reinterpret_cast<f32*>(&col), 9, 1);
core::matrix4 world = driver->getTransform(video::ETS_WORLD);
world = world.getTransposed();
if (UseHighLevelShaders)
{
services->setVertexShaderConstant(TransWorldID, world.pointer(), 16);
s32 TextureLayerID = 0;
services->setPixelShaderConstant(TextureID, &TextureLayerID, 1);
}
else
services->setVertexShaderConstant(world.pointer(), 10, 4);
}
private:
s32 WorldViewProjID;
s32 TransWorldID;
s32 InvWorldID;
s32 PositionID;
s32 ColorID;
s32 TextureID;
bool FirstUpdate;
};
The next few lines start up the engine just like in most other tutorials. In addition, we ask the user whether he wants to use high level shaders in this example, provided he selected a driver which is capable of doing so.
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
if (driverType == video::EDT_DIRECT3D9 ||
driverType == video::EDT_OPENGL)
{
char i = 'y';
printf("Please press 'y' if you want to use high level shaders.\n");
std::cin >> i;
if (i == 'y')
{
UseHighLevelShaders = true;
}
}
device = createDevice(driverType, core::dimension2d<u32>(640, 480));
if (device == 0)
return 1;
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
gui::IGUIEnvironment* gui = device->getGUIEnvironment();
const io::path mediaPath = getExampleMediaPath();
Now for the more interesting parts. If we are using Direct3D, we want to load vertex and pixel shader programs; if we have OpenGL, we want to use ARB fragment and vertex programs. I wrote the corresponding programs into the files d3d9.psh, d3d9.vsh, opengl.psh and opengl.vsh (and d3d9.hlsl, opengl.frag and opengl.vert for the high level variants). We only need the right filenames now. This is done in the following switch. Note that it is not necessary to write the shaders into text files, like in this example. You can even write the shaders directly as strings into the cpp source file, and later use addShaderMaterial() instead of addShaderMaterialFromFiles().
io::path vsFileName;
io::path psFileName;
switch(driverType)
{
case video::EDT_DIRECT3D9:
if (UseHighLevelShaders)
{
psFileName = mediaPath + "d3d9.hlsl";
vsFileName = psFileName;
}
else
{
psFileName = mediaPath + "d3d9.psh";
vsFileName = mediaPath + "d3d9.vsh";
}
break;
case video::EDT_OPENGL:
if (UseHighLevelShaders)
{
psFileName = mediaPath + "opengl.frag";
vsFileName = mediaPath + "opengl.vert";
}
else
{
psFileName = mediaPath + "opengl.psh";
vsFileName = mediaPath + "opengl.vsh";
}
break;
default:
break;
}
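As mentioned above, the shaders do not have to live in separate files. A minimal sketch of the inline variant, assuming an OpenGL driver and two trivial GLSL programs written directly into the source (these strings are placeholders, not the shaders shipped with the example; for high level languages the counterpart of addShaderMaterial() is addHighLevelShaderMaterial()):
// Sketch only: embed the shader source as strings instead of loading it from files.
const c8* vsSource =
	"void main() { gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; }";
const c8* psSource =
	"void main() { gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0); }";
s32 inlineMaterial = driver->getGPUProgrammingServices()->addHighLevelShaderMaterial(
	vsSource, "main", video::EVST_VS_1_1,
	psSource, "main", video::EPST_PS_1_1,
	0, video::EMT_SOLID);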
In addition, we check if the hardware and the selected renderer are capable of executing the shaders we want. If not, we simply set the filename string to 0. This is not necessary, but useful in this example: for example, if the hardware is able to execute vertex shaders but not pixel shaders, we create a new material which only uses the vertex shader, and no pixel shader. Otherwise, if we told the engine to create this material and the engine saw that the hardware couldn't fulfill the request completely, it would not create any new material at all. So in this example you would see at least the vertex shader in action, without the pixel shader.
if (!driver->queryFeature(video::EVDF_PIXEL_SHADER_1_1) &&
!driver->queryFeature(video::EVDF_ARB_FRAGMENT_PROGRAM_1))
{
device->getLogger()->log("WARNING: Pixel shaders disabled "\
"because of missing driver/hardware support.");
psFileName = "";
}
if (!driver->queryFeature(video::EVDF_VERTEX_SHADER_1_1) &&
!driver->queryFeature(video::EVDF_ARB_VERTEX_PROGRAM_1))
{
device->getLogger()->log("WARNING: Vertex shaders disabled "\
"because of missing driver/hardware support.");
vsFileName = "";
}
Now let's create the new materials. As you may know from previous examples, a material type in the Irrlicht engine is set by simply changing the MaterialType value in the SMaterial struct. This value is just a simple 32 bit value, like video::EMT_SOLID. So we only need the engine to create a new value for us which we can set there. To do this, we get a pointer to the IGPUProgrammingServices and call addShaderMaterialFromFiles(), which returns such a new 32 bit value. That's all.
The parameters to this method are the following: first, the names of the files containing the code of the vertex and the pixel shader. If you used addShaderMaterial() instead, you would not need file names; you could then write the code of the shader directly as a string. The following parameter is a pointer to the IShaderConstantSetCallBack class we wrote at the beginning of this tutorial. If you don't want to set constants, set this to 0. The last parameter tells the engine which material it should use as a base material.
-
To demonstrate this, we create two materials with a different base material, one with EMT_SOLID and one with EMT_TRANSPARENT_ADD_COLOR.
video::IGPUProgrammingServices* gpu = driver->getGPUProgrammingServices();
s32 newMaterialType1 = 0;
s32 newMaterialType2 = 0;
if (gpu)
{
MyShaderCallBack* mc = new MyShaderCallBack();
if (UseHighLevelShaders)
{
const video::E_GPU_SHADING_LANGUAGE shadingLanguage =
UseCgShaders ? video::EGSL_CG:video::EGSL_DEFAULT;
newMaterialType1 = gpu->addHighLevelShaderMaterialFromFiles(
vsFileName, "vertexMain", video::EVST_VS_1_1,
psFileName, "pixelMain", video::EPST_PS_1_1,
mc, video::EMT_SOLID, 0, shadingLanguage);
newMaterialType2 = gpu->addHighLevelShaderMaterialFromFiles(
vsFileName, "vertexMain", video::EVST_VS_1_1,
psFileName, "pixelMain", video::EPST_PS_1_1,
mc, video::EMT_TRANSPARENT_ADD_COLOR, 0 , shadingLanguage);
}
else
{
newMaterialType1 = gpu->addShaderMaterialFromFiles(vsFileName,
psFileName, mc, video::EMT_SOLID);
newMaterialType2 = gpu->addShaderMaterialFromFiles(vsFileName,
psFileName, mc, video::EMT_TRANSPARENT_ADD_COLOR);
}
mc->drop();
}
Now it's time for testing the materials. We create a test cube and set the material we created. In addition, we add a text scene node to the cube and a rotation animator to make it look more interesting and important.
scene::ISceneNode* node = smgr->addCubeSceneNode(50);
node->setPosition(core::vector3df(0,0,0));
node->setMaterialTexture(0, driver->getTexture("../../media/wall.bmp"));
node->setMaterialFlag(video::EMF_LIGHTING, false);
node->setMaterialType((video::E_MATERIAL_TYPE)newMaterialType1);
smgr->addTextSceneNode(gui->getBuiltInFont(),
L"PS & VS & EMT_SOLID",
video::SColor(255,255,255,255), node);
scene::ISceneNodeAnimator* anim = smgr->createRotationAnimator(
core::vector3df(0,0.3f,0));
node->addAnimator(anim);
anim->drop();
Same for the second cube, but with the second material we created.
node = smgr->addCubeSceneNode(50);
node->setPosition(core::vector3df(0,-10,50));
node->setMaterialTexture(0, driver->getTexture("../../media/wall.bmp"));
node->setMaterialFlag(video::EMF_LIGHTING, false);
node->setMaterialFlag(video::EMF_BLEND_OPERATION, true);
node->setMaterialType((video::E_MATERIAL_TYPE)newMaterialType2);
smgr->addTextSceneNode(gui->getBuiltInFont(),
L"PS & VS & EMT_TRANSPARENT",
video::SColor(255,255,255,255), node);
anim = smgr->createRotationAnimator(core::vector3df(0,0.3f,0));
node->addAnimator(anim);
anim->drop();
Then we add a third cube without a shader on it, to be able to compare the cubes.
node = smgr->addCubeSceneNode(50);
node->setPosition(core::vector3df(0,50,25));
node->setMaterialTexture(0, driver->getTexture("../../media/wall.bmp"));
node->setMaterialFlag(video::EMF_LIGHTING, false);
smgr->addTextSceneNode(gui->getBuiltInFont(), L"NO SHADER",
video::SColor(255,255,255,255), node);
And last, we add a skybox and a user controlled camera to the scene. For the skybox textures, we disable mipmap generation, because we don't need mipmaps on it.
driver->setTextureCreationFlag(video::ETCF_CREATE_MIP_MAPS, false);
smgr->addSkyBoxSceneNode(
driver->getTexture("../../media/irrlicht2_up.jpg"),
driver->getTexture("../../media/irrlicht2_dn.jpg"),
driver->getTexture("../../media/irrlicht2_lf.jpg"),
driver->getTexture("../../media/irrlicht2_rt.jpg"),
driver->getTexture("../../media/irrlicht2_ft.jpg"),
driver->getTexture("../../media/irrlicht2_bk.jpg"));
driver->setTextureCreationFlag(video::ETCF_CREATE_MIP_MAPS, true);
scene::ICameraSceneNode* cam = smgr->addCameraSceneNodeFPS();
cam->setPosition(core::vector3df(-100,50,100));
cam->setTarget(core::vector3df(0,0,0));
device->getCursorControl()->setVisible(false);
Now draw everything. That's all.
int lastFPS = -1;
while(device->run())
if (device->isWindowActive())
{
driver->beginScene(true, true, video::SColor(255,0,0,0));
smgr->drawAll();
driver->endScene();
int fps = driver->getFPS();
if (lastFPS != fps)
{
core::stringw str = L"Irrlicht Engine - Vertex and pixel shader example [";
str += driver->getName();
str += "] FPS:";
str += fps;
device->setWindowCaption(str.c_str());
lastFPS = fps;
}
}
device->drop();
return 0;
}
Compile and run this, and I hope you have fun with your new little shader writing tool :).
+
To demonstrate this, we create two materials with a different base material, one with EMT_SOLID and one with EMT_TRANSPARENT_ADD_COLOR.
video::IGPUProgrammingServices* gpu = driver->getGPUProgrammingServices();
s32 newMaterialType1 = 0;
s32 newMaterialType2 = 0;
if (gpu)
{
Create one callback instance for each shader material you add. The reason is that getVertexShaderConstantID() returns IDs which are only valid per added material (the IDs tend to be identical as long as the shader code is exactly identical, but it's not good style to depend on that).
MyShaderCallBack* mcSolid = new MyShaderCallBack();
MyShaderCallBack* mcTransparentAdd = new MyShaderCallBack();
if (UseHighLevelShaders)
{
newMaterialType1 = gpu->addHighLevelShaderMaterialFromFiles(
vsFileName, "vertexMain", video::EVST_VS_1_1,
psFileName, "pixelMain", video::EPST_PS_1_1,
mcSolid, video::EMT_SOLID, 0);
newMaterialType2 = gpu->addHighLevelShaderMaterialFromFiles(
vsFileName, "vertexMain", video::EVST_VS_1_1,
psFileName, "pixelMain", video::EPST_PS_1_1,
mcTransparentAdd, video::EMT_TRANSPARENT_ADD_COLOR, 0);
}
else
{
newMaterialType1 = gpu->addShaderMaterialFromFiles(vsFileName,
psFileName, mcSolid, video::EMT_SOLID);
newMaterialType2 = gpu->addShaderMaterialFromFiles(vsFileName,
psFileName, mcTransparentAdd, video::EMT_TRANSPARENT_ADD_COLOR);
}
mcSolid->drop();
mcTransparentAdd->drop();
}
Now it's time for testing the materials. We create a test cube and set the material we created. In addition, we add a text scene node to the cube and a rotation animator to make it look more interesting and important.
scene::ISceneNode* node = smgr->addCubeSceneNode(50);
node->setPosition(core::vector3df(0,0,0));
node->setMaterialTexture(0, driver->getTexture(mediaPath + "wall.bmp"));
node->setMaterialFlag(video::EMF_LIGHTING, false);
node->setMaterialType((video::E_MATERIAL_TYPE)newMaterialType1);
smgr->addTextSceneNode(gui->getBuiltInFont(),
L"PS & VS & EMT_SOLID",
video::SColor(255,255,255,255), node);
scene::ISceneNodeAnimator* anim = smgr->createRotationAnimator(
core::vector3df(0,0.3f,0));
node->addAnimator(anim);
anim->drop();
Same for the second cube, but with the second material we created.
node = smgr->addCubeSceneNode(50);
node->setPosition(core::vector3df(0,-10,50));
node->setMaterialTexture(0, driver->getTexture(mediaPath + "wall.bmp"));
node->setMaterialFlag(video::EMF_LIGHTING, false);
node->setMaterialFlag(video::EMF_BLEND_OPERATION, true);
node->setMaterialType((video::E_MATERIAL_TYPE)newMaterialType2);
smgr->addTextSceneNode(gui->getBuiltInFont(),
L"PS & VS & EMT_TRANSPARENT",
video::SColor(255,255,255,255), node);
anim = smgr->createRotationAnimator(core::vector3df(0,0.3f,0));
node->addAnimator(anim);
anim->drop();
Then we add a third cube without a shader on it, to be able to compare the cubes.
node = smgr->addCubeSceneNode(50);
node->setPosition(core::vector3df(0,50,25));
node->setMaterialTexture(0, driver->getTexture(mediaPath + "wall.bmp"));
node->setMaterialFlag(video::EMF_LIGHTING, false);
smgr->addTextSceneNode(gui->getBuiltInFont(), L"NO SHADER",
video::SColor(255,255,255,255), node);
And last, we add a skybox and a user controlled camera to the scene. For the skybox textures, we disable mipmap generation, because we don't need mipmaps on it.
driver->setTextureCreationFlag(video::ETCF_CREATE_MIP_MAPS, false);
smgr->addSkyBoxSceneNode(
driver->getTexture(mediaPath + "irrlicht2_up.jpg"),
driver->getTexture(mediaPath + "irrlicht2_dn.jpg"),
driver->getTexture(mediaPath + "irrlicht2_lf.jpg"),
driver->getTexture(mediaPath + "irrlicht2_rt.jpg"),
driver->getTexture(mediaPath + "irrlicht2_ft.jpg"),
driver->getTexture(mediaPath + "irrlicht2_bk.jpg"));
driver->setTextureCreationFlag(video::ETCF_CREATE_MIP_MAPS, true);
scene::ICameraSceneNode* cam = smgr->addCameraSceneNodeFPS();
cam->setPosition(core::vector3df(-100,50,100));
cam->setTarget(core::vector3df(0,0,0));
device->getCursorControl()->setVisible(false);
Now draw everything. That's all.
int lastFPS = -1;
while(device->run())
if (device->isWindowActive())
{
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, video::SColor(255,0,0,0));
smgr->drawAll();
driver->endScene();
int fps = driver->getFPS();
if (lastFPS != fps)
{
core::stringw str = L"Irrlicht Engine - Vertex and pixel shader example [";
str += driver->getName();
str += "] FPS:";
str += fps;
device->setWindowCaption(str.c_str());
lastFPS = fps;
}
}
device->drop();
return 0;
}
Compile and run this, and I hope you have fun with your new little shader writing tool :).
diff --git a/examples/11.PerPixelLighting/tutorial.html b/examples/11.PerPixelLighting/tutorial.html
index fb092d07..fe1cda11 100644
--- a/examples/11.PerPixelLighting/tutorial.html
+++ b/examples/11.PerPixelLighting/tutorial.html
@@ -195,10 +195,10 @@ tr.heading h2 {
-
+
This tutorial shows how to use one of the more complex built-in materials in Irrlicht: per-pixel lighted surfaces using normal maps and parallax mapping. It will also show how to use fog and moving particle systems. And don't panic: you do not need any experience with shaders to use these materials in Irrlicht.
-
At first, we need to include all headers and do the stuff we always do, like in nearly all other tutorials.
#include <irrlicht.h>
#include "driverChoice.h"
using namespace irr;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
For this example, we need an event receiver, to make it possible for the user to switch between the three available material types. In addition, the event receiver will create some small GUI window which displays what material is currently being used. There is nothing special done in this class, so maybe you want to skip reading it.
class MyEventReceiver : public IEventReceiver
{
public:
MyEventReceiver(scene::ISceneNode* room,scene::ISceneNode* earth,
gui::IGUIEnvironment* env, video::IVideoDriver* driver)
{
Room = room;
Earth = earth;
Driver = driver;
gui::IGUISkin* skin = env->getSkin();
gui::IGUIFont* font = env->getFont("../../media/fonthaettenschweiler.bmp");
if (font)
skin->setFont(font);
gui::IGUIWindow* window = env->addWindow(
core::rect<s32>(460,375,630,470), false, L"Use 'E' + 'R' to change");
ListBox = env->addListBox(
core::rect<s32>(2,22,165,88), window);
ListBox->addItem(L"Diffuse");
ListBox->addItem(L"Bump mapping");
ListBox->addItem(L"Parallax mapping");
ListBox->setSelected(1);
ProblemText = env->addStaticText(
L"Your hardware or this renderer is not able to use the "\
L"needed shaders for this material. Using fall back materials.",
core::rect<s32>(150,20,470,80));
ProblemText->setOverrideColor(video::SColor(100,255,255,255));
video::IMaterialRenderer* renderer =
Driver->getMaterialRenderer(video::EMT_PARALLAX_MAP_SOLID);
if (renderer && renderer->getRenderCapability() == 0)
ListBox->setSelected(2);
setMaterial();
}
bool OnEvent(const SEvent& event)
{
if (event.EventType == irr::EET_KEY_INPUT_EVENT &&
!event.KeyInput.PressedDown && Room && ListBox)
{
int sel = ListBox->getSelected();
if (event.KeyInput.Key == irr::KEY_KEY_R)
++sel;
else
if (event.KeyInput.Key == irr::KEY_KEY_E)
--sel;
else
return false;
if (sel > 2) sel = 0;
if (sel < 0) sel = 2;
ListBox->setSelected(sel);
setMaterial();
}
return false;
}
private:
void setMaterial()
{
video::E_MATERIAL_TYPE type = video::EMT_SOLID;
switch(ListBox->getSelected())
{
case 0: type = video::EMT_SOLID;
break;
case 1: type = video::EMT_NORMAL_MAP_SOLID;
break;
case 2: type = video::EMT_PARALLAX_MAP_SOLID;
break;
}
Room->setMaterialType(type);
switch(ListBox->getSelected())
{
case 0: type = video::EMT_TRANSPARENT_VERTEX_ALPHA;
break;
case 1: type = video::EMT_NORMAL_MAP_TRANSPARENT_VERTEX_ALPHA;
break;
case 2: type = video::EMT_PARALLAX_MAP_TRANSPARENT_VERTEX_ALPHA;
break;
}
Earth->setMaterialType(type);
We need to add a warning if the materials will not be able to be displayed 100% correctly. This is no problem; they will be rendered using fallback materials, but at least the user should know that it would look better on better hardware. We simply check if the material renderer is able to draw at full quality on the current hardware. IMaterialRenderer::getRenderCapability() returns 0 if this is the case.
video::IMaterialRenderer* renderer = Driver->getMaterialRenderer(type);
if (!renderer || renderer->getRenderCapability() != 0)
ProblemText->setVisible(true);
else
ProblemText->setVisible(false);
}
private:
gui::IGUIStaticText* ProblemText;
gui::IGUIListBox* ListBox;
scene::ISceneNode* Room;
scene::ISceneNode* Earth;
video::IVideoDriver* Driver;
};
Now for the real fun. We create an Irrlicht Device and start to setup the scene.
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
IrrlichtDevice* device = createDevice(driverType,
core::dimension2d<u32>(640, 480));
if (device == 0)
return 1;
Before we start with the interesting stuff, we do some simple things: store pointers to the most important parts of the engine (video driver, scene manager, GUI environment) to save us from typing too much, add an Irrlicht Engine logo to the window and a user-controlled first person shooter style camera. Also, we let the engine know that it should store all textures in 32 bit. This is necessary because for parallax mapping, we need 32 bit textures.
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
gui::IGUIEnvironment* env = device->getGUIEnvironment();
driver->setTextureCreationFlag(video::ETCF_ALWAYS_32_BIT, true);
env->addImage(driver->getTexture("../../media/irrlichtlogo3.png"),
core::position2d<s32>(10,10));
scene::ICameraSceneNode* camera = smgr->addCameraSceneNodeFPS();
camera->setPosition(core::vector3df(-200,200,-200));
device->getCursorControl()->setVisible(false);
Because we want the whole scene to look a little bit scarier, we add some fog to it. This is done by a call to IVideoDriver::setFog(). There you can set various fog settings. In this example, we use pixel fog, because it will work well with the materials we'll use in this example. Please note that you will have to set the material flag EMF_FOG_ENABLE to 'true' in every scene node which should be affected by this fog.
driver->setFog(video::SColor(0,138,125,81), video::EFT_FOG_LINEAR, 250, 1000, .003f, true, false);
To be able to display something interesting, we load a mesh from a .3ds file which is a room I modeled with anim8or. It is the same room as in the SpecialFX example. Maybe you remember from that tutorial that I am no good modeler at all and so I totally messed up the texture mapping in this model, but we can simply repair it with the IMeshManipulator::makePlanarTextureMapping() method.
scene::IAnimatedMesh* roomMesh = smgr->getMesh("../../media/room.3ds");
scene::ISceneNode* room = 0;
scene::ISceneNode* earth = 0;
if (roomMesh)
{
smgr->getMeshManipulator()->makePlanarTextureMapping(
roomMesh->getMesh(0), 0.003f);
Now for the first exciting thing: If we successfully loaded the mesh we need to apply textures to it. Because we want this room to be displayed with a very cool material, we have to do a little bit more than just set the textures. Instead of only loading a color map as usual, we also load a height map which is simply a grayscale texture. From this height map, we create a normal map which we will set as second texture of the room. If you already have a normal map, you could directly set it, but I simply didn't find a nice normal map for this texture. The normal map texture is being generated by the makeNormalMapTexture method of the VideoDriver. The second parameter specifies the height of the heightmap. If you set it to a bigger value, the map will look more rocky.
video::ITexture* normalMap =
driver->getTexture("../../media/rockwall_height.bmp");
if (normalMap)
driver->makeNormalMapTexture(normalMap, 9.0f);
The normal map with the displacement/height map stored in the alpha channel:
video::ITexture* normalMap = driver->getTexture("../../media/rockwall_NRM.tga");
But just setting color and normal map is not everything. The material we want to use needs some additional information per vertex like tangents and binormals. Because we are too lazy to calculate that information now, we let Irrlicht do this for us. That's why we call IMeshManipulator::createMeshWithTangents(). It creates a mesh copy with tangents and binormals from another mesh. After we've done that, we simply create a standard mesh scene node with this mesh copy, set color and normal map and adjust some other material settings. Note that we set EMF_FOG_ENABLE to true to enable fog in the room.
scene::IMesh* tangentMesh = smgr->getMeshManipulator()->
createMeshWithTangents(roomMesh->getMesh(0));
room = smgr->addMeshSceneNode(tangentMesh);
room->setMaterialTexture(0,
driver->getTexture("../../media/rockwall.jpg"));
room->setMaterialTexture(1, normalMap);
room->getMaterial(0).SpecularColor.set(0,0,0,0);
room->getMaterial(0).Shininess = 0.f;
room->setMaterialFlag(video::EMF_FOG_ENABLE, true);
room->setMaterialType(video::EMT_PARALLAX_MAP_SOLID);
room->getMaterial(0).MaterialTypeParam = 1.f / 64.f;
tangentMesh->drop();
}
After we've created a room shaded by per pixel lighting, we add a sphere into it with the same material, but we'll make it transparent. In addition, because the sphere looks somehow like a familiar planet, we make it rotate. The procedure is similar as before. The difference is that we are loading the mesh from an .x file which already contains a color map so we do not need to load it manually. But the sphere is a little bit too small for our needs, so we scale it by the factor 50.
scene::IAnimatedMesh* earthMesh = smgr->getMesh("../../media/earth.x");
if (earthMesh)
{
scene::IMeshManipulator *manipulator = smgr->getMeshManipulator();
scene::IMesh* tangentSphereMesh =
manipulator->createMeshWithTangents(earthMesh->getMesh(0));
manipulator->setVertexColorAlpha(tangentSphereMesh, 200);
core::matrix4 m;
m.setScale ( core::vector3df(50,50,50) );
manipulator->transform( tangentSphereMesh, m );
earth = smgr->addMeshSceneNode(tangentSphereMesh);
earth->setPosition(core::vector3df(-70,130,45));
video::ITexture* earthNormalMap = driver->getTexture("../../media/earthbump.jpg");
if (earthNormalMap)
{
driver->makeNormalMapTexture(earthNormalMap, 20.0f);
earth->setMaterialTexture(1, earthNormalMap);
earth->setMaterialType(video::EMT_NORMAL_MAP_TRANSPARENT_VERTEX_ALPHA);
}
earth->setMaterialFlag(video::EMF_FOG_ENABLE, true);
scene::ISceneNodeAnimator* anim =
smgr->createRotationAnimator(core::vector3df(0,0.1f,0));
earth->addAnimator(anim);
anim->drop();
tangentSphereMesh->drop();
}
Per pixel lighted materials only look cool when there are moving lights. So we add some. And because moving lights alone are so boring, we add billboards to them, and a whole particle system to one of them. We start with the first light which is red and has only the billboard attached.
scene::ILightSceneNode* light1 =
smgr->addLightSceneNode(0, core::vector3df(0,0,0),
video::SColorf(0.5f, 1.0f, 0.5f, 0.0f), 800.0f);
light1->setDebugDataVisible ( scene::EDS_BBOX );
scene::ISceneNodeAnimator* anim =
smgr->createFlyCircleAnimator (core::vector3df(50,300,0),190.0f, -0.003f);
light1->addAnimator(anim);
anim->drop();
scene::IBillboardSceneNode* bill =
smgr->addBillboardSceneNode(light1, core::dimension2d<f32>(60, 60));
bill->setMaterialFlag(video::EMF_LIGHTING, false);
bill->setMaterialFlag(video::EMF_ZWRITE_ENABLE, false);
bill->setMaterialType(video::EMT_TRANSPARENT_ADD_COLOR);
bill->setMaterialTexture(0, driver->getTexture("../../media/particlegreen.jpg"));
Now the same again, with the second light. The difference is that we add a particle system to it too. And because the light moves, the particles of the particle system will follow. If you want to know more about how particle systems are created in Irrlicht, take a look at the SpecialFX example. Maybe you have noticed that we only add 2 lights; this has a simple reason: the low end version of this material was written in ps1.1 and vs1.1, which doesn't allow more lights. You could add a third light to the scene, but it won't be used to shade the walls. But of course, this will change in future versions of Irrlicht where higher versions of pixel/vertex shaders will be implemented too.
scene::ISceneNode* light2 =
smgr->addLightSceneNode(0, core::vector3df(0,0,0),
video::SColorf(1.0f, 0.2f, 0.2f, 0.0f), 800.0f);
anim = smgr->createFlyCircleAnimator(core::vector3df(0,150,0), 200.0f,
0.001f, core::vector3df(0.2f, 0.9f, 0.f));
light2->addAnimator(anim);
anim->drop();
bill = smgr->addBillboardSceneNode(light2, core::dimension2d<f32>(120, 120));
bill->setMaterialFlag(video::EMF_LIGHTING, false);
bill->setMaterialFlag(video::EMF_ZWRITE_ENABLE, false);
bill->setMaterialType(video::EMT_TRANSPARENT_ADD_COLOR);
bill->setMaterialTexture(0, driver->getTexture("../../media/particlered.bmp"));
scene::IParticleSystemSceneNode* ps =
smgr->addParticleSystemSceneNode(false, light2);
scene::IParticleEmitter* em = ps->createBoxEmitter(
core::aabbox3d<f32>(-3,0,-3,3,1,3),
core::vector3df(0.0f,0.03f,0.0f),
80,100,
video::SColor(10,255,255,255), video::SColor(10,255,255,255),
400,1100);
em->setMinStartSize(core::dimension2d<f32>(30.0f, 40.0f));
em->setMaxStartSize(core::dimension2d<f32>(30.0f, 40.0f));
ps->setEmitter(em);
em->drop();
scene::IParticleAffector* paf = ps->createFadeOutParticleAffector();
ps->addAffector(paf);
paf->drop();
ps->setMaterialFlag(video::EMF_LIGHTING, false);
ps->setMaterialFlag(video::EMF_ZWRITE_ENABLE, false);
ps->setMaterialTexture(0, driver->getTexture("../../media/fireball.bmp"));
ps->setMaterialType(video::EMT_TRANSPARENT_ADD_COLOR);
MyEventReceiver receiver(room, earth, env, driver);
device->setEventReceiver(&receiver);
Finally, draw everything. That's it.
int lastFPS = -1;
while(device->run())
if (device->isWindowActive())
{
driver->beginScene(true, true, 0);
smgr->drawAll();
env->drawAll();
driver->endScene();
int fps = driver->getFPS();
if (lastFPS != fps)
{
core::stringw str = L"Per pixel lighting example - Irrlicht Engine [";
str += driver->getName();
str += "] FPS:";
str += fps;
device->setWindowCaption(str.c_str());
lastFPS = fps;
}
}
device->drop();
return 0;
}
+At first, we need to include all headers and do the stuff we always do, like in nearly all other tutorials.
#include <irrlicht.h>
#include "driverChoice.h"
#include "exampleHelper.h"
using namespace irr;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
For this example, we need an event receiver, to make it possible for the user to switch between the three available material types. In addition, the event receiver will create some small GUI window which displays what material is currently being used. There is nothing special done in this class, so maybe you want to skip reading it.
class MyEventReceiver : public IEventReceiver
{
public:
MyEventReceiver(scene::ISceneNode* room,scene::ISceneNode* earth,
gui::IGUIEnvironment* env, video::IVideoDriver* driver)
{
Room = room;
Earth = earth;
Driver = driver;
gui::IGUISkin* skin = env->getSkin();
gui::IGUIFont* font = env->getFont(getExampleMediaPath() + "fonthaettenschweiler.bmp");
if (font)
skin->setFont(font);
gui::IGUIWindow* window = env->addWindow(
core::rect<s32>(460,375,630,470), false, L"Use 'E' + 'R' to change");
ListBox = env->addListBox(
core::rect<s32>(2,22,165,88), window);
ListBox->addItem(L"Diffuse");
ListBox->addItem(L"Bump mapping");
ListBox->addItem(L"Parallax mapping");
ListBox->setSelected(1);
ProblemText = env->addStaticText(
L"Your hardware or this renderer is not able to use the "\
L"needed shaders for this material. Using fall back materials.",
core::rect<s32>(150,20,470,80));
ProblemText->setOverrideColor(video::SColor(100,255,255,255));
video::IMaterialRenderer* renderer =
Driver->getMaterialRenderer(video::EMT_PARALLAX_MAP_SOLID);
if (renderer && renderer->getRenderCapability() == 0)
ListBox->setSelected(2);
setMaterial();
}
bool OnEvent(const SEvent& event)
{
if (event.EventType == irr::EET_KEY_INPUT_EVENT &&
!event.KeyInput.PressedDown && Room && ListBox)
{
int sel = ListBox->getSelected();
if (event.KeyInput.Key == irr::KEY_KEY_R)
++sel;
else
if (event.KeyInput.Key == irr::KEY_KEY_E)
--sel;
else
return false;
if (sel > 2) sel = 0;
if (sel < 0) sel = 2;
ListBox->setSelected(sel);
setMaterial();
}
return false;
}
private:
void setMaterial()
{
video::E_MATERIAL_TYPE type = video::EMT_SOLID;
switch(ListBox->getSelected())
{
case 0: type = video::EMT_SOLID;
break;
case 1: type = video::EMT_NORMAL_MAP_SOLID;
break;
case 2: type = video::EMT_PARALLAX_MAP_SOLID;
break;
}
Room->setMaterialType(type);
switch(ListBox->getSelected())
{
case 0: type = video::EMT_TRANSPARENT_VERTEX_ALPHA;
break;
case 1: type = video::EMT_NORMAL_MAP_TRANSPARENT_VERTEX_ALPHA;
break;
case 2: type = video::EMT_PARALLAX_MAP_TRANSPARENT_VERTEX_ALPHA;
break;
}
Earth->setMaterialType(type);
We need to add a warning if the materials will not be able to be displayed 100% correctly. This is no problem; they will be rendered using fallback materials, but at least the user should know that it would look better on better hardware. We simply check if the material renderer is able to draw at full quality on the current hardware. IMaterialRenderer::getRenderCapability() returns 0 if this is the case.
video::IMaterialRenderer* renderer = Driver->getMaterialRenderer(type);
if (!renderer || renderer->getRenderCapability() != 0)
ProblemText->setVisible(true);
else
ProblemText->setVisible(false);
}
private:
gui::IGUIStaticText* ProblemText;
gui::IGUIListBox* ListBox;
scene::ISceneNode* Room;
scene::ISceneNode* Earth;
video::IVideoDriver* Driver;
};
Now for the real fun. We create an Irrlicht Device and start to setup the scene.
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
IrrlichtDevice* device = createDevice(driverType,
core::dimension2d<u32>(640, 480));
if (device == 0)
return 1;
Before we start with the interesting stuff, we do some simple things: store pointers to the most important parts of the engine (video driver, scene manager, GUI environment) to save us from typing too much, add an Irrlicht Engine logo to the window and a user-controlled first person shooter style camera. Also, we let the engine know that it should store all textures in 32 bit. This is necessary because for parallax mapping, we need 32 bit textures.
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
gui::IGUIEnvironment* env = device->getGUIEnvironment();
driver->setTextureCreationFlag(video::ETCF_ALWAYS_32_BIT, true);
const io::path mediaPath = getExampleMediaPath();
env->addImage(driver->getTexture(mediaPath + "irrlichtlogo3.png"),
core::position2d<s32>(10,10));
scene::ICameraSceneNode* camera = smgr->addCameraSceneNodeFPS();
camera->setPosition(core::vector3df(-200,200,-200));
device->getCursorControl()->setVisible(false);
Because we want the whole scene to look a little bit scarier, we add some fog to it. This is done by a call to IVideoDriver::setFog(). There you can set various fog settings. In this example, we use pixel fog, because it will work well with the materials we'll use in this example. Please note that you will have to set the material flag EMF_FOG_ENABLE to 'true' in every scene node which should be affected by this fog.
driver->setFog(video::SColor(0,138,125,81), video::EFT_FOG_LINEAR, 250, 1000, .003f, true, false);
To be able to display something interesting, we load a mesh from a .3ds file which is a room I modeled with anim8or. It is the same room as in the SpecialFX example. Maybe you remember from that tutorial that I am no good modeler at all and so I totally messed up the texture mapping in this model, but we can simply repair it with the IMeshManipulator::makePlanarTextureMapping() method.
scene::IAnimatedMesh* roomMesh = smgr->getMesh(mediaPath + "room.3ds");
scene::ISceneNode* room = 0;
scene::ISceneNode* earth = 0;
if (roomMesh)
{
smgr->getMeshManipulator()->makePlanarTextureMapping(
roomMesh->getMesh(0), 0.003f);
Now for the first exciting thing: If we successfully loaded the mesh we need to apply textures to it. Because we want this room to be displayed with a very cool material, we have to do a little bit more than just set the textures. Instead of only loading a color map as usual, we also load a height map which is simply a grayscale texture. From this height map, we create a normal map which we will set as second texture of the room. If you already have a normal map, you could directly set it, but I simply didn't find a nice normal map for this texture. The normal map texture is being generated by the makeNormalMapTexture method of the VideoDriver. The second parameter specifies the height of the heightmap. If you set it to a bigger value, the map will look more rocky.
video::ITexture* normalMap =
driver->getTexture(mediaPath + "rockwall_height.bmp");
if (normalMap)
driver->makeNormalMapTexture(normalMap, 9.0f);
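If you already have a ready-made normal map, the conversion step above can be skipped. A sketch of loading one directly (the file name is taken from the alternative mentioned in the original example, a texture that carries the height in its alpha channel; it may not be present in every SDK):
// Sketch: load a pre-made normal map (height in the alpha channel) instead of
// generating one from a grayscale height map with makeNormalMapTexture().
video::ITexture* normalMap = driver->getTexture(mediaPath + "rockwall_NRM.tga");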
But just setting color and normal map is not everything. The material we want to use needs some additional information per vertex like tangents and binormals. Because we are too lazy to calculate that information now, we let Irrlicht do this for us. That's why we call IMeshManipulator::createMeshWithTangents(). It creates a mesh copy with tangents and binormals from another mesh. After we've done that, we simply create a standard mesh scene node with this mesh copy, set color and normal map and adjust some other material settings. Note that we set EMF_FOG_ENABLE to true to enable fog in the room.
scene::IMesh* tangentMesh = smgr->getMeshManipulator()->
createMeshWithTangents(roomMesh->getMesh(0));
room = smgr->addMeshSceneNode(tangentMesh);
room->setMaterialTexture(0,
driver->getTexture(mediaPath + "rockwall.jpg"));
room->setMaterialTexture(1, normalMap);
room->setMaterialFlag(video::EMF_FOG_ENABLE, true);
room->setMaterialType(video::EMT_PARALLAX_MAP_SOLID);
room->getMaterial(0).MaterialTypeParam = 1.f / 64.f;
tangentMesh->drop();
}
After we've created a room shaded by per pixel lighting, we add a sphere into it with the same material, but we'll make it transparent. In addition, because the sphere looks somehow like a familiar planet, we make it rotate. The procedure is similar to before. The difference is that we are loading the mesh from an .x file which already contains a color map, so we do not need to load it manually. But the sphere is a little bit too small for our needs, so we scale it by a factor of 50.
scene::IAnimatedMesh* earthMesh = smgr->getMesh(mediaPath + "earth.x");
if (earthMesh)
{
scene::IMeshManipulator *manipulator = smgr->getMeshManipulator();
scene::IMesh* tangentSphereMesh =
manipulator->createMeshWithTangents(earthMesh->getMesh(0));
manipulator->setVertexColorAlpha(tangentSphereMesh, 200);
core::matrix4 m;
m.setScale ( core::vector3df(50,50,50) );
manipulator->transform( tangentSphereMesh, m );
earth = smgr->addMeshSceneNode(tangentSphereMesh);
earth->setPosition(core::vector3df(-70,130,45));
video::ITexture* earthNormalMap = driver->getTexture(mediaPath + "earthbump.jpg");
if (earthNormalMap)
{
driver->makeNormalMapTexture(earthNormalMap, 20.0f);
earth->setMaterialTexture(1, earthNormalMap);
earth->setMaterialType(video::EMT_NORMAL_MAP_TRANSPARENT_VERTEX_ALPHA);
}
earth->setMaterialFlag(video::EMF_FOG_ENABLE, true);
scene::ISceneNodeAnimator* anim =
smgr->createRotationAnimator(core::vector3df(0,0.1f,0));
earth->addAnimator(anim);
anim->drop();
tangentSphereMesh->drop();
}
Per pixel lit materials only look cool when there are moving lights, so we add some. And because moving lights alone are so boring, we add billboards to them, and a whole particle system to one of them. We start with the first light, which is greenish and has only the billboard attached.
scene::ILightSceneNode* light1 =
smgr->addLightSceneNode(0, core::vector3df(0,0,0),
video::SColorf(0.5f, 1.0f, 0.5f, 0.0f), 400.0f);
scene::ISceneNodeAnimator* anim =
smgr->createFlyCircleAnimator (core::vector3df(50,300,0),190.0f, -0.003f);
light1->addAnimator(anim);
anim->drop();
scene::IBillboardSceneNode* bill =
smgr->addBillboardSceneNode(light1, core::dimension2d<f32>(60, 60));
bill->setMaterialFlag(video::EMF_LIGHTING, false);
bill->setMaterialFlag(video::EMF_ZWRITE_ENABLE, false);
bill->setMaterialType(video::EMT_TRANSPARENT_ADD_COLOR);
bill->setMaterialTexture(0, driver->getTexture(mediaPath + "particlegreen.jpg"));
Now the same again, with the second light. The difference is that we add a particle system to it too. And because the light moves, the particles of the particle system will follow. If you want to know more about how particle systems are created in Irrlicht, take a look at the SpecialFX example. You may have noticed that we only add 2 lights; this has a simple reason: the low end version of this material was written in ps1.1 and vs1.1, which doesn't allow more lights. You could add a third light to the scene, but it won't be used to shade the walls. But of course, this will change in future versions of Irrlicht where higher versions of pixel/vertex shaders will be implemented too.
scene::ISceneNode* light2 =
smgr->addLightSceneNode(0, core::vector3df(0,0,0),
video::SColorf(1.0f, 0.2f, 0.2f, 0.0f), 400.0f);
anim = smgr->createFlyCircleAnimator(core::vector3df(0,150,0), 200.0f,
0.001f, core::vector3df(0.2f, 0.9f, 0.f));
light2->addAnimator(anim);
anim->drop();
bill = smgr->addBillboardSceneNode(light2, core::dimension2d<f32>(120, 120));
bill->setMaterialFlag(video::EMF_LIGHTING, false);
bill->setMaterialFlag(video::EMF_ZWRITE_ENABLE, false);
bill->setMaterialType(video::EMT_TRANSPARENT_ADD_COLOR);
bill->setMaterialTexture(0, driver->getTexture(mediaPath + "particlered.bmp"));
scene::IParticleSystemSceneNode* ps =
smgr->addParticleSystemSceneNode(false, light2);
scene::IParticleEmitter* em = ps->createBoxEmitter(
core::aabbox3d<f32>(-3,0,-3,3,1,3),
core::vector3df(0.0f,0.03f,0.0f),
80,100,
video::SColor(10,255,255,255), video::SColor(10,255,255,255),
400,1100);
em->setMinStartSize(core::dimension2d<f32>(30.0f, 40.0f));
em->setMaxStartSize(core::dimension2d<f32>(30.0f, 40.0f));
ps->setEmitter(em);
em->drop();
scene::IParticleAffector* paf = ps->createFadeOutParticleAffector();
ps->addAffector(paf);
paf->drop();
ps->setMaterialFlag(video::EMF_LIGHTING, false);
ps->setMaterialFlag(video::EMF_ZWRITE_ENABLE, false);
ps->setMaterialTexture(0, driver->getTexture(mediaPath + "fireball.bmp"));
ps->setMaterialType(video::EMT_TRANSPARENT_ADD_COLOR);
MyEventReceiver receiver(room, earth, env, driver);
device->setEventReceiver(&receiver);
Finally, draw everything. That's it.
int lastFPS = -1;
while(device->run())
if (device->isWindowActive())
{
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, video::SColor(0));
smgr->drawAll();
env->drawAll();
driver->endScene();
int fps = driver->getFPS();
if (lastFPS != fps)
{
core::stringw str = L"Per pixel lighting example - Irrlicht Engine [";
str += driver->getName();
str += "] FPS:";
str += fps;
device->setWindowCaption(str.c_str());
lastFPS = fps;
}
}
device->drop();
return 0;
}
diff --git a/examples/12.TerrainRendering/tutorial.html b/examples/12.TerrainRendering/tutorial.html
index 515ff848..079e7cc4 100644
--- a/examples/12.TerrainRendering/tutorial.html
+++ b/examples/12.TerrainRendering/tutorial.html
@@ -195,12 +195,12 @@ tr.heading h2 {
-
+
This tutorial will briefly show how to use the terrain renderer of Irrlicht. It will also show how to use the terrain triangle selector to do collision detection with the terrain.
Note that the terrain renderer in Irrlicht is based on Spintz' GeoMipMapSceneNode; lots of thanks go to him. DeusXL provided a new, elegant and simple solution for building larger areas from small heightmaps: terrain smoothing.
-
In the beginning there is nothing special. We include the needed header files and create an event receiver which reacts when the user presses a key: the 'W' key switches to wireframe mode, the 'P' key to pointcloud mode, and the 'D' key toggles between solid and detail mapped material.
#include <irrlicht.h>
#include "driverChoice.h"
using namespace irr;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
class MyEventReceiver : public IEventReceiver
{
public:
MyEventReceiver(scene::ISceneNode* terrain, scene::ISceneNode* skybox, scene::ISceneNode* skydome) :
Terrain(terrain), Skybox(skybox), Skydome(skydome), showBox(true), showDebug(false)
{
Skybox->setVisible(showBox);
Skydome->setVisible(!showBox);
}
bool OnEvent(const SEvent& event)
{
if (event.EventType == irr::EET_KEY_INPUT_EVENT && !event.KeyInput.PressedDown)
{
switch (event.KeyInput.Key)
{
case irr::KEY_KEY_W:
Terrain->setMaterialFlag(video::EMF_WIREFRAME,
!Terrain->getMaterial(0).Wireframe);
Terrain->setMaterialFlag(video::EMF_POINTCLOUD, false);
return true;
case irr::KEY_KEY_P:
Terrain->setMaterialFlag(video::EMF_POINTCLOUD,
!Terrain->getMaterial(0).PointCloud);
Terrain->setMaterialFlag(video::EMF_WIREFRAME, false);
return true;
case irr::KEY_KEY_D:
Terrain->setMaterialType(
Terrain->getMaterial(0).MaterialType == video::EMT_SOLID ?
video::EMT_DETAIL_MAP : video::EMT_SOLID);
return true;
case irr::KEY_KEY_S:
showBox=!showBox;
Skybox->setVisible(showBox);
Skydome->setVisible(!showBox);
return true;
case irr::KEY_KEY_X:
showDebug=!showDebug;
Terrain->setDebugDataVisible(showDebug?scene::EDS_BBOX_ALL:scene::EDS_OFF);
return true;
default:
break;
}
}
return false;
}
private:
scene::ISceneNode* Terrain;
scene::ISceneNode* Skybox;
scene::ISceneNode* Skydome;
bool showBox;
bool showDebug;
};
The main function starts like in most other examples. We ask the user for the desired renderer and start it up, this time with the advanced parameter handling.
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
irr::SIrrlichtCreationParameters params;
params.DriverType=driverType;
params.WindowSize=core::dimension2d<u32>(640, 480);
IrrlichtDevice* device = createDeviceEx(params);
if (device == 0)
return 1;
First, we add the standard stuff to the scene: a nice Irrlicht Engine logo, a small help text, a user controlled camera, and we disable the mouse cursor.
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
gui::IGUIEnvironment* env = device->getGUIEnvironment();
driver->setTextureCreationFlag(video::ETCF_ALWAYS_32_BIT, true);
env->addImage(driver->getTexture("../../media/irrlichtlogo2.png"),
core::position2d<s32>(10,10));
env->getSkin()->setFont(env->getFont("../../media/fontlucida.png"));
env->addStaticText(
L"Press 'W' to change wireframe mode\nPress 'D' to toggle detail map\nPress 'S' to toggle skybox/skydome",
core::rect<s32>(10,421,250,475), true, true, 0, -1, true);
scene::ICameraSceneNode* camera =
smgr->addCameraSceneNodeFPS(0,100.0f,1.2f);
camera->setPosition(core::vector3df(2700*2,255*2,2600*2));
camera->setTarget(core::vector3df(2397*2,343*2,2700*2));
camera->setFarValue(42000.0f);
device->getCursorControl()->setVisible(false);
Here comes the terrain renderer scene node: We add it just like any other scene node to the scene using ISceneManager::addTerrainSceneNode(). The only parameter we use is the file name of the heightmap we use. A heightmap is simply a gray scale texture. The terrain renderer loads it and creates the 3D terrain from it.
-
To make the terrain look bigger, we change its scale factor to (40, 4.4, 40). Because we don't have any dynamic lights in the scene, we switch off the lighting, and we set the file terrain-texture.jpg as texture for the terrain and detailmap3.jpg as second texture, called detail map. At last, we set the scale values for the texture: the first texture will be repeated only one time over the whole terrain, and the second one (detail map) 20 times.
scene::ITerrainSceneNode* terrain = smgr->addTerrainSceneNode(
"../../media/terrain-heightmap.bmp",
0,
-1,
core::vector3df(0.f, 0.f, 0.f),
core::vector3df(0.f, 0.f, 0.f),
core::vector3df(40.f, 4.4f, 40.f),
video::SColor ( 255, 255, 255, 255 ),
5,
scene::ETPS_17,
4
);
terrain->setMaterialFlag(video::EMF_LIGHTING, false);
terrain->setMaterialTexture(0,
driver->getTexture("../../media/terrain-texture.jpg"));
terrain->setMaterialTexture(1,
driver->getTexture("../../media/detailmap3.jpg"));
terrain->setMaterialType(video::EMT_DETAIL_MAP);
terrain->scaleTexture(1.0f, 20.0f);
To be able to do collision with the terrain, we create a triangle selector. If you want to know what triangle selectors do, just take a look into the collision tutorial. The terrain triangle selector works together with the terrain. To demonstrate this, we create a collision response animator and attach it to the camera, so that the camera will not be able to fly through the terrain.
scene::ITriangleSelector* selector
= smgr->createTerrainTriangleSelector(terrain, 0);
terrain->setTriangleSelector(selector);
scene::ISceneNodeAnimator* anim = smgr->createCollisionResponseAnimator(
selector, camera, core::vector3df(60,100,60),
core::vector3df(0,0,0),
core::vector3df(0,50,0));
selector->drop();
camera->addAnimator(anim);
anim->drop();
If you need access to the terrain data you can also do this directly via the following code fragment.
scene::CDynamicMeshBuffer* buffer = new scene::CDynamicMeshBuffer(video::EVT_2TCOORDS, video::EIT_16BIT);
terrain->getMeshBufferForLOD(*buffer, 0);
video::S3DVertex2TCoords* data = (video::S3DVertex2TCoords*)buffer->getVertexBuffer().getData();
buffer->drop();
To allow the user to switch between normal and wireframe mode, we create an instance of the event receiver from above and let Irrlicht know about it. In addition, we add the skybox which we already used in lots of Irrlicht examples and a skydome, which is shown mutually exclusive with the skybox by pressing 'S'.
driver->setTextureCreationFlag(video::ETCF_CREATE_MIP_MAPS, false);
scene::ISceneNode* skybox=smgr->addSkyBoxSceneNode(
driver->getTexture("../../media/irrlicht2_up.jpg"),
driver->getTexture("../../media/irrlicht2_dn.jpg"),
driver->getTexture("../../media/irrlicht2_lf.jpg"),
driver->getTexture("../../media/irrlicht2_rt.jpg"),
driver->getTexture("../../media/irrlicht2_ft.jpg"),
driver->getTexture("../../media/irrlicht2_bk.jpg"));
scene::ISceneNode* skydome=smgr->addSkyDomeSceneNode(driver->getTexture("../../media/skydome.jpg"),16,8,0.95f,2.0f);
driver->setTextureCreationFlag(video::ETCF_CREATE_MIP_MAPS, true);
MyEventReceiver receiver(terrain, skybox, skydome);
device->setEventReceiver(&receiver);
That's it, draw everything.
int lastFPS = -1;
while(device->run())
if (device->isWindowActive())
{
driver->beginScene(true, true, 0 );
smgr->drawAll();
env->drawAll();
driver->endScene();
int fps = driver->getFPS();
if (lastFPS != fps)
{
core::stringw str = L"Terrain Renderer - Irrlicht Engine [";
str += driver->getName();
str += "] FPS:";
str += fps;
str += " Height: ";
str += terrain->getHeight(camera->getAbsolutePosition().X,
camera->getAbsolutePosition().Z);
device->setWindowCaption(str.c_str());
lastFPS = fps;
}
}
device->drop();
return 0;
}
Now you know how to use terrain in Irrlicht.
+
In the beginning there is nothing special. We include the needed header files and create an event receiver which reacts when the user presses certain keys.
#include <irrlicht.h>
#include "driverChoice.h"
#include "exampleHelper.h"
using namespace irr;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
class MyEventReceiver : public IEventReceiver
{
public:
MyEventReceiver(scene::ISceneNode* terrain, scene::ISceneNode* skybox, scene::ISceneNode* skydome) :
Terrain(terrain), Skybox(skybox), Skydome(skydome), showBox(true), showDebug(false)
{
Skybox->setVisible(showBox);
Skydome->setVisible(!showBox);
}
bool OnEvent(const SEvent& event)
{
if (event.EventType == irr::EET_KEY_INPUT_EVENT && !event.KeyInput.PressedDown)
{
switch (event.KeyInput.Key)
{
case irr::KEY_KEY_W:
Terrain->setMaterialFlag(video::EMF_WIREFRAME,
!Terrain->getMaterial(0).Wireframe);
Terrain->setMaterialFlag(video::EMF_POINTCLOUD, false);
return true;
case irr::KEY_KEY_P:
Terrain->setMaterialFlag(video::EMF_POINTCLOUD,
!Terrain->getMaterial(0).PointCloud);
Terrain->setMaterialFlag(video::EMF_WIREFRAME, false);
return true;
case irr::KEY_KEY_D:
Terrain->setMaterialType(
Terrain->getMaterial(0).MaterialType == video::EMT_SOLID ?
video::EMT_DETAIL_MAP : video::EMT_SOLID);
return true;
case irr::KEY_KEY_S:
showBox=!showBox;
Skybox->setVisible(showBox);
Skydome->setVisible(!showBox);
return true;
case irr::KEY_KEY_X:
showDebug=!showDebug;
Terrain->setDebugDataVisible(showDebug?scene::EDS_BBOX_ALL:scene::EDS_OFF);
return true;
default:
break;
}
}
return false;
}
private:
scene::ISceneNode* Terrain;
scene::ISceneNode* Skybox;
scene::ISceneNode* Skydome;
bool showBox;
bool showDebug;
};
The main function starts like in most other examples. We ask the user for the desired renderer and start it up, this time with the advanced parameter handling.
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
irr::SIrrlichtCreationParameters params;
params.DriverType=driverType;
params.WindowSize=core::dimension2d<u32>(640, 480);
IrrlichtDevice* device = createDeviceEx(params);
if (device == 0)
return 1;
First, we add the standard stuff to the scene: a nice Irrlicht Engine logo, a small help text, a user controlled camera, and we disable the mouse cursor.
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
gui::IGUIEnvironment* env = device->getGUIEnvironment();
driver->setTextureCreationFlag(video::ETCF_ALWAYS_32_BIT, true);
const io::path mediaPath = getExampleMediaPath();
env->addImage(driver->getTexture(mediaPath + "irrlichtlogo3.png"),
core::position2d<s32>(10,10));
env->getSkin()->setFont(env->getFont(mediaPath + "fontlucida.png"));
env->addStaticText(
L"Press 'W' to change wireframe mode\nPress 'D' to toggle detail map\nPress 'S' to toggle skybox/skydome",
core::rect<s32>(10,421,250,475), true, true, 0, -1, true);
scene::ICameraSceneNode* camera =
smgr->addCameraSceneNodeFPS(0,100.0f,1.2f);
camera->setPosition(core::vector3df(2700*2,255*2,2600*2));
camera->setTarget(core::vector3df(2397*2,343*2,2700*2));
camera->setFarValue(42000.0f);
device->getCursorControl()->setVisible(false);
Here comes the terrain renderer scene node: We add it just like any other scene node to the scene using ISceneManager::addTerrainSceneNode(). The first parameter is the file name of the heightmap we use. A heightmap is simply a gray scale texture. The terrain renderer loads it and creates the 3D terrain from it.
+
To make the terrain look bigger, we change its scale factor to (40, 4.4, 40). Because we don't have any dynamic lights in the scene, we switch off the lighting, and we set the file terrain-texture.jpg as texture for the terrain and detailmap3.jpg as second texture, called detail map. At last, we set the scale values for the texture: the first texture will be repeated only one time over the whole terrain, and the second one (detail map) 20 times.
scene::ITerrainSceneNode* terrain = smgr->addTerrainSceneNode(
mediaPath + "terrain-heightmap.bmp",
0,
-1,
core::vector3df(0.f, 0.f, 0.f),
core::vector3df(0.f, 0.f, 0.f),
core::vector3df(40.f, 4.4f, 40.f),
video::SColor ( 255, 255, 255, 255 ),
5,
scene::ETPS_17,
4
);
terrain->setMaterialFlag(video::EMF_LIGHTING, false);
terrain->setMaterialTexture(0,
driver->getTexture(mediaPath + "terrain-texture.jpg"));
terrain->setMaterialTexture(1,
driver->getTexture(mediaPath + "detailmap3.jpg"));
terrain->setMaterialType(video::EMT_DETAIL_MAP);
terrain->scaleTexture(1.0f, 20.0f);
To be able to do collision with the terrain, we create a triangle selector. If you want to know what triangle selectors do, just take a look into the collision tutorial. The terrain triangle selector works together with the terrain. To demonstrate this, we create a collision response animator and attach it to the camera, so that the camera will not be able to fly through the terrain.
scene::ITriangleSelector* selector
= smgr->createTerrainTriangleSelector(terrain, 0);
terrain->setTriangleSelector(selector);
scene::ISceneNodeAnimator* anim = smgr->createCollisionResponseAnimator(
selector, camera, core::vector3df(60,100,60),
core::vector3df(0,0,0),
core::vector3df(0,50,0));
selector->drop();
camera->addAnimator(anim);
anim->drop();
If you need access to the terrain data you can also do this directly via the following code fragment.
scene::CDynamicMeshBuffer* buffer = new scene::CDynamicMeshBuffer(video::EVT_2TCOORDS, video::EIT_16BIT);
terrain->getMeshBufferForLOD(*buffer, 0);
video::S3DVertex2TCoords* data = (video::S3DVertex2TCoords*)buffer->getVertexBuffer().getData();
(void)data;
buffer->drop();
To allow the user to switch between normal and wireframe mode, we create an instance of the event receiver from above and let Irrlicht know about it. In addition, we add the skybox which we already used in lots of Irrlicht examples and a skydome, which is shown mutually exclusive with the skybox by pressing 'S'.
driver->setTextureCreationFlag(video::ETCF_CREATE_MIP_MAPS, false);
scene::ISceneNode* skybox=smgr->addSkyBoxSceneNode(
driver->getTexture(mediaPath + "irrlicht2_up.jpg"),
driver->getTexture(mediaPath + "irrlicht2_dn.jpg"),
driver->getTexture(mediaPath + "irrlicht2_lf.jpg"),
driver->getTexture(mediaPath + "irrlicht2_rt.jpg"),
driver->getTexture(mediaPath + "irrlicht2_ft.jpg"),
driver->getTexture(mediaPath + "irrlicht2_bk.jpg"));
scene::ISceneNode* skydome=smgr->addSkyDomeSceneNode(driver->getTexture(mediaPath + "skydome.jpg"),16,8,0.95f,2.0f);
driver->setTextureCreationFlag(video::ETCF_CREATE_MIP_MAPS, true);
MyEventReceiver receiver(terrain, skybox, skydome);
device->setEventReceiver(&receiver);
That's it, draw everything.
int lastFPS = -1;
while(device->run())
if (device->isWindowActive())
{
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, video::SColor(0));
smgr->drawAll();
env->drawAll();
driver->endScene();
int fps = driver->getFPS();
if (lastFPS != fps)
{
core::stringw str = L"Terrain Renderer - Irrlicht Engine [";
str += driver->getName();
str += "] FPS:";
str += fps;
str += " Height: ";
str += terrain->getHeight(camera->getAbsolutePosition().X,
camera->getAbsolutePosition().Z);
device->setWindowCaption(str.c_str());
lastFPS = fps;
}
}
device->drop();
return 0;
}
Now you know how to use terrain in Irrlicht.
diff --git a/examples/13.RenderToTexture/tutorial.html b/examples/13.RenderToTexture/tutorial.html
index edd510ec..0b4c4351 100644
--- a/examples/13.RenderToTexture/tutorial.html
+++ b/examples/13.RenderToTexture/tutorial.html
@@ -195,10 +195,13 @@ tr.heading h2 {
-
+
-
This tutorial shows how to render to a texture using Irrlicht. Render to texture is a feature with which it is possible to create nice special effects. In addition, this tutorial shows how to enable specular highlights.
-
In the beginning, everything as usual. Include the needed headers, ask the user for the rendering driver, create the Irrlicht Device:
#include <irrlicht.h>
#include "driverChoice.h"
using namespace irr;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
IrrlichtDevice *device =
createDevice(driverType, core::dimension2d<u32>(640, 480),
16, false, false);
if (device == 0)
return 1;
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
gui::IGUIEnvironment* env = device->getGUIEnvironment();
Now, we load an animated mesh to be displayed. As in most examples, we'll take the fairy md2 model. The difference here: We set the shininess of the model to a value other than 0 which is the default value. This enables specular highlights on the model if dynamic lighting is on. The value influences the size of the highlights.
scene::IAnimatedMeshSceneNode* fairy = smgr->addAnimatedMeshSceneNode(
smgr->getMesh("../../media/faerie.md2"));
if (fairy)
{
fairy->setMaterialTexture(0,
driver->getTexture("../../media/faerie2.bmp"));
fairy->setMaterialFlag(video::EMF_LIGHTING, true);
fairy->getMaterial(0).Shininess = 20.0f;
fairy->setPosition(core::vector3df(-10,0,-100));
fairy->setMD2Animation ( scene::EMAT_STAND );
}
To make specular highlights appear on the model, we need a dynamic light in the scene. We add one directly in the vicinity of the model. In addition, so that the model is not too dark, we set the ambient light to gray.
smgr->addLightSceneNode(0, core::vector3df(-15,5,-105),
video::SColorf(1.0f, 1.0f, 1.0f));
smgr->setAmbientLight(video::SColor(0,60,60,60));
The next is just some standard stuff: Add a test cube and let it rotate to make the scene more interesting. The user defined camera and cursor setup is made later on, right before the render loop.
scene::ISceneNode* test = smgr->addCubeSceneNode(60);
scene::ISceneNodeAnimator* anim = smgr->createRotationAnimator(
core::vector3df(0.3f, 0.3f,0));
test->setPosition(core::vector3df(-100,0,-100));
test->setMaterialFlag(video::EMF_LIGHTING, false);
test->addAnimator(anim);
anim->drop();
device->setWindowCaption(L"Irrlicht Engine - Render to Texture and Specular Highlights example");
To test out the render to texture feature, we need a render target texture. These are not like standard textures, but need to be created first. To create one, we call IVideoDriver::addRenderTargetTexture() and specify the size of the texture. Please don't use sizes bigger than the frame buffer for this, because the render target shares the zbuffer with the frame buffer. Because we want to render the scene not from the user camera into the texture, we add another fixed camera to the scene. But before we do all this, we check if the current running driver is able to render to textures. If it is not, we simply display a warning text.
video::ITexture* rt = 0;
scene::ICameraSceneNode* fixedCam = 0;
if (driver->queryFeature(video::EVDF_RENDER_TO_TARGET))
{
rt = driver->addRenderTargetTexture(core::dimension2d<u32>(256,256), "RTT1");
test->setMaterialTexture(0, rt);
fixedCam = smgr->addCameraSceneNode(0, core::vector3df(10,10,-80),
core::vector3df(-10,10,-100));
}
else
{
gui::IGUISkin* skin = env->getSkin();
gui::IGUIFont* font = env->getFont("../../media/fonthaettenschweiler.bmp");
if (font)
skin->setFont(font);
gui::IGUIStaticText* text = env->addStaticText(
L"Your hardware or this renderer is not able to use the "\
L"render to texture feature. RTT Disabled.",
core::rect<s32>(150,20,470,60));
text->setOverrideColor(video::SColor(100,255,255,255));
}
scene::ICameraSceneNode* fpsCamera = smgr->addCameraSceneNodeFPS();
fpsCamera->setPosition(core::vector3df(-50,50,-150));
device->getCursorControl()->setVisible(false);
Nearly finished. Now we need to draw everything. Every frame, we draw the scene twice. Once from the fixed camera into the render target texture and once as usual. When rendering into the render target, we need to disable the visibility of the test cube, because it has the render target texture applied to it. That's it, wasn't too complicated I hope. :)
int lastFPS = -1;
while(device->run())
if (device->isWindowActive())
{
driver->beginScene(true, true, 0);
if (rt)
{
driver->setRenderTarget(rt, true, true, video::SColor(0,0,0,255));
test->setVisible(false);
smgr->setActiveCamera(fixedCam);
smgr->drawAll();
driver->setRenderTarget(0, true, true, 0);
test->setVisible(true);
smgr->setActiveCamera(fpsCamera);
}
smgr->drawAll();
env->drawAll();
driver->endScene();
int fps = driver->getFPS();
if (lastFPS != fps)
{
core::stringw str = L"Irrlicht Engine - Render to Texture and Specular Highlights example";
str += " FPS:";
str += fps;
device->setWindowCaption(str.c_str());
lastFPS = fps;
}
}
device->drop();
return 0;
}
+ This tutorial shows how to render to a texture using Irrlicht. Render to texture is a feature where everything which would usually be rendered to the screen is instead written to a (special) texture. This can be used to create nice special effects. In addition, this tutorial shows how to enable specular highlights.
+In the beginning, everything as usual. Include the needed headers, ask the user for the rendering driver, create the Irrlicht device:
#include <irrlicht.h>
#include "driverChoice.h"
#include "exampleHelper.h"
using namespace irr;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
IrrlichtDevice *device =
createDevice(driverType, core::dimension2d<u32>(640, 480),
16, false, false);
if (device == 0)
return 1;
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
gui::IGUIEnvironment* env = device->getGUIEnvironment();
const io::path mediaPath = getExampleMediaPath();
Now, we load an animated mesh to be displayed. As in most examples, we'll take the fairy md2 model. The difference here: We set the shininess of the model to a value other than 0 which is the default value. This enables specular highlights on the model if dynamic lighting is on. The value influences the size of the highlights.
scene::IAnimatedMeshSceneNode* fairy = smgr->addAnimatedMeshSceneNode(
smgr->getMesh(mediaPath + "faerie.md2"));
if (fairy)
{
fairy->setMaterialTexture(0,
driver->getTexture(mediaPath + "faerie2.bmp"));
fairy->setMaterialFlag(video::EMF_LIGHTING, true);
fairy->getMaterial(0).Shininess = 20.0f;
fairy->setPosition(core::vector3df(-10,0,-100));
fairy->setMD2Animation ( scene::EMAT_STAND );
}
To make specular highlights appear on the model, we need a dynamic light in the scene. We add one directly in the vicinity of the model. In addition, so that the model is not too dark, we set the ambient light to gray.
smgr->addLightSceneNode(0, core::vector3df(-15,5,-105),
video::SColorf(1.0f, 1.0f, 1.0f));
smgr->setAmbientLight(video::SColor(0,60,60,60));
The next is just some standard stuff: Add a test cube and let it rotate to make the scene more interesting. The user defined camera and cursor setup is made later on, right before the render loop.
scene::ISceneNode* cube = smgr->addCubeSceneNode(60);
scene::ISceneNodeAnimator* anim = smgr->createRotationAnimator(
core::vector3df(0.3f, 0.3f,0));
cube->setPosition(core::vector3df(-100,0,-100));
cube->setMaterialFlag(video::EMF_LIGHTING, false);
cube->addAnimator(anim);
anim->drop();
device->setWindowCaption(L"Irrlicht Engine - Render to Texture and Specular Highlights example");
To test out the render to texture feature, we need to define our new rendertarget. The rendertarget will need one texture to receive the result you would otherwise see on screen and one texture which is used as depth-buffer.
+(Note: If you worked with older Irrlicht versions (before 1.9) you might be used to only create a rendertarget texture and no explicit rendertarget. While that's still possible, it's no longer recommended.)
+The rendertarget textures are not like standard textures, but need to be created first. To create them, we call IVideoDriver::addRenderTargetTexture() and specify the size and the type of the texture. For depth-maps you can use the types ECF_D16, ECF_D32 or ECF_D24S8. When using ECF_D24S8 you can also use a stencil-buffer.
+Because we do not want to render the scene into the texture from the user camera, we add another fixed camera to the scene. But before we do all this, we check if the currently running driver is able to render to textures. If it is not, we simply display a warning text.
video::IRenderTarget* renderTarget = 0;
scene::ICameraSceneNode* fixedCam = 0;
if (driver->queryFeature(video::EVDF_RENDER_TO_TARGET))
{
const core::dimension2d<u32> rtDim(256, 256);
video::ITexture* renderTargetTex = driver->addRenderTargetTexture(rtDim, "RTT1", video::ECF_A8R8G8B8);
video::ITexture* renderTargetDepth = driver->addRenderTargetTexture(rtDim, "DepthStencil", video::ECF_D16);
renderTarget = driver->addRenderTarget();
renderTarget->setTexture(renderTargetTex, renderTargetDepth);
cube->setMaterialTexture(0, renderTargetTex);
fixedCam = smgr->addCameraSceneNode(0, core::vector3df(10,10,-80),
core::vector3df(-10,10,-100));
}
else
{
gui::IGUISkin* skin = env->getSkin();
gui::IGUIFont* font = env->getFont(mediaPath + "fonthaettenschweiler.bmp");
if (font)
skin->setFont(font);
gui::IGUIStaticText* text = env->addStaticText(
L"Your hardware or this renderer is not able to use the "\
L"render to texture feature. RTT Disabled.",
core::rect<s32>(150,20,470,60));
text->setOverrideColor(video::SColor(100,255,255,255));
}
scene::ICameraSceneNode* fpsCamera = smgr->addCameraSceneNodeFPS();
fpsCamera->setPosition(core::vector3df(-50,50,-150));
device->getCursorControl()->setVisible(false);
Nearly finished. Now we need to draw everything. Every frame, we draw the scene twice. Once from the fixed camera into the render target texture and once as usual. When rendering into the render target, we need to disable the visibility of the test cube, because it has the render target texture applied to it. That's it, wasn't too complicated I hope. :)
int lastFPS = -1;
while(device->run())
if (device->isWindowActive())
{
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, video::SColor(0));
if (renderTarget)
{
driver->setRenderTargetEx(renderTarget, video::ECBF_COLOR | video::ECBF_DEPTH, video::SColor(0,0,0,255));
cube->setVisible(false);
smgr->setActiveCamera(fixedCam);
smgr->drawAll();
driver->setRenderTargetEx(0, 0);
cube->setVisible(true);
smgr->setActiveCamera(fpsCamera);
}
smgr->drawAll();
env->drawAll();
driver->endScene();
int fps = driver->getFPS();
if (lastFPS != fps)
{
core::stringw str = L"Irrlicht Engine - Render to Texture and Specular Highlights example";
str += " FPS:";
str += fps;
device->setWindowCaption(str.c_str());
lastFPS = fps;
}
}
device->drop();
return 0;
}
diff --git a/examples/14.Win32Window/tutorial.html b/examples/14.Win32Window/tutorial.html
index 819e8ecc..849c474f 100644
--- a/examples/14.Win32Window/tutorial.html
+++ b/examples/14.Win32Window/tutorial.html
@@ -195,10 +195,10 @@ tr.heading h2 {
-
+
This example only runs under MS Windows and demonstrates that Irrlicht can render inside a win32 window. MFC and .NET Windows.Forms windows are possible, too.
-
In the beginning, we create a Windows window using the Windows API. I'm not going to explain this code, because it is Windows specific. See the MSDN or a Windows book for details.
#include <irrlicht.h>
#ifndef _IRR_WINDOWS_
#error Windows only example
#else
#include <windows.h>
#include <iostream>
#include "driverChoice.h"
using namespace irr;
#pragma comment(lib, "irrlicht.lib")
HWND hOKButton;
HWND hWnd;
static LRESULT CALLBACK CustomWndProc(HWND hWnd, UINT message,
WPARAM wParam, LPARAM lParam)
{
switch (message)
{
case WM_COMMAND:
{
HWND hwndCtl = (HWND)lParam;
int code = HIWORD(wParam);
if (hwndCtl == hOKButton)
{
DestroyWindow(hWnd);
PostQuitMessage(0);
return 0;
}
}
break;
case WM_DESTROY:
PostQuitMessage(0);
return 0;
}
return DefWindowProc(hWnd, message, wParam, lParam);
}
Now ask for the driver and create the Windows specific window.
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
printf("Select the render window (some dead window may exist too):\n"\
" (a) Window with button (via CreationParam)\n"\
" (b) Window with button (via beginScene)\n"\
" (c) Own Irrlicht window (default behavior)\n"\
" (otherKey) exit\n\n");
char key;
std::cin >> key;
if (key != 'a' && key != 'b' && key != 'c')
return 1;
HINSTANCE hInstance = 0;
const char* Win32ClassName = "CIrrlichtWindowsTestDialog";
WNDCLASSEX wcex;
wcex.cbSize = sizeof(WNDCLASSEX);
wcex.style = CS_HREDRAW | CS_VREDRAW;
wcex.lpfnWndProc = (WNDPROC)CustomWndProc;
wcex.cbClsExtra = 0;
wcex.cbWndExtra = DLGWINDOWEXTRA;
wcex.hInstance = hInstance;
wcex.hIcon = NULL;
wcex.hCursor = LoadCursor(NULL, IDC_ARROW);
wcex.hbrBackground = (HBRUSH)(COLOR_WINDOW);
wcex.lpszMenuName = 0;
wcex.lpszClassName = Win32ClassName;
wcex.hIconSm = 0;
RegisterClassEx(&wcex);
DWORD style = WS_SYSMENU | WS_BORDER | WS_CAPTION |
WS_CLIPCHILDREN | WS_CLIPSIBLINGS | WS_MAXIMIZEBOX | WS_MINIMIZEBOX | WS_SIZEBOX;
int windowWidth = 440;
int windowHeight = 380;
hWnd = CreateWindow( Win32ClassName, "Irrlicht Win32 window example",
style, 100, 100, windowWidth, windowHeight,
NULL, NULL, hInstance, NULL);
RECT clientRect;
GetClientRect(hWnd, &clientRect);
windowWidth = clientRect.right;
windowHeight = clientRect.bottom;
hOKButton = CreateWindow("BUTTON", "OK - Close", WS_CHILD | WS_VISIBLE | BS_TEXT,
windowWidth - 160, windowHeight - 40, 150, 30, hWnd, NULL, hInstance, NULL);
CreateWindow("STATIC", "This is Irrlicht running inside a standard Win32 window.\n"\
"Also mixing with MFC and .NET Windows.Forms is possible.",
WS_CHILD | WS_VISIBLE, 20, 20, 400, 40, hWnd, NULL, hInstance, NULL);
HWND hIrrlichtWindow = CreateWindow("BUTTON", "",
WS_CHILD | WS_VISIBLE | BS_OWNERDRAW,
50, 80, 320, 220, hWnd, NULL, hInstance, NULL);
video::SExposedVideoData videodata((key=='b')?hIrrlichtWindow:0);
So now that we have a window, we can create an Irrlicht device inside of it. We use the Irrlicht createDeviceEx() function for this. We only need the handle (HWND) of that window, set it as the WindowId parameter and start up the engine as usual. That's it.
irr::SIrrlichtCreationParameters param;
param.DriverType = driverType;
if (key=='a')
param.WindowId = reinterpret_cast<void*>(hIrrlichtWindow);
irr::IrrlichtDevice* device = irr::createDeviceEx(param);
if (!device)
return 1;
irr::scene::ISceneManager* smgr = device->getSceneManager();
video::IVideoDriver* driver = device->getVideoDriver();
if (driverType==video::EDT_OPENGL)
{
HDC HDc=GetDC(hIrrlichtWindow);
PIXELFORMATDESCRIPTOR pfd={0};
pfd.nSize=sizeof(PIXELFORMATDESCRIPTOR);
int pf = GetPixelFormat(HDc);
DescribePixelFormat(HDc, pf, sizeof(PIXELFORMATDESCRIPTOR), &pfd);
pfd.dwFlags |= PFD_DOUBLEBUFFER | PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW;
pfd.cDepthBits=16;
pf = ChoosePixelFormat(HDc, &pfd);
SetPixelFormat(HDc, pf, &pfd);
videodata.OpenGLWin32.HDc = HDc;
videodata.OpenGLWin32.HRc=wglCreateContext(HDc);
wglShareLists((HGLRC)driver->getExposedVideoData().OpenGLWin32.HRc, (HGLRC)videodata.OpenGLWin32.HRc);
}
scene::ICameraSceneNode* cam = smgr->addCameraSceneNode();
cam->setTarget(core::vector3df(0,0,0));
scene::ISceneNodeAnimator* anim =
smgr->createFlyCircleAnimator(core::vector3df(0,15,0), 30.0f);
cam->addAnimator(anim);
anim->drop();
scene::ISceneNode* cube = smgr->addCubeSceneNode(20);
cube->setMaterialTexture(0, driver->getTexture("../../media/wall.bmp"));
cube->setMaterialTexture(1, driver->getTexture("../../media/water.jpg"));
cube->setMaterialFlag( video::EMF_LIGHTING, false );
cube->setMaterialType( video::EMT_REFLECTION_2_LAYER );
smgr->addSkyBoxSceneNode(
driver->getTexture("../../media/irrlicht2_up.jpg"),
driver->getTexture("../../media/irrlicht2_dn.jpg"),
driver->getTexture("../../media/irrlicht2_lf.jpg"),
driver->getTexture("../../media/irrlicht2_rt.jpg"),
driver->getTexture("../../media/irrlicht2_ft.jpg"),
driver->getTexture("../../media/irrlicht2_bk.jpg"));
ShowWindow(hWnd , SW_SHOW);
UpdateWindow(hWnd);
Now the only thing missing is the drawing loop using IrrlichtDevice::run(). We do this as usual. But instead of this, there is another possibility: You can also simply use your own message loop using GetMessage, DispatchMessage and whatever. Calling Device->run() will cause Irrlicht to dispatch messages internally too. You need not call Device->run() if you want to do your own message dispatching loop, but Irrlicht will not be able to fetch user input then and you have to do it on your own using the window messages, DirectInput, or whatever.
while (device->run())
{
driver->beginScene(true, true, 0, videodata);
smgr->drawAll();
driver->endScene();
}
The alternative, own message dispatching loop without Device->run() would look like this:
MSG msg; while (true) { if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) { TranslateMessage(&msg); DispatchMessage(&msg);
+
In the beginning, we create a Windows window using the Windows API. I'm not going to explain this code, because it is Windows specific. See the MSDN or a Windows book for details.
#include <irrlicht.h>
#ifndef _IRR_WINDOWS_
#error Windows only example
#else
#include <windows.h>
#include <iostream>
#include "driverChoice.h"
#include "exampleHelper.h"
using namespace irr;
#ifdef _MSC_VER
#pragma comment(lib, "irrlicht.lib")
#endif
HWND hOKButton;
HWND hWnd;
static LRESULT CALLBACK CustomWndProc(HWND hWnd, UINT message,
WPARAM wParam, LPARAM lParam)
{
switch (message)
{
case WM_COMMAND:
{
HWND hwndCtl = (HWND)lParam;
int code = HIWORD(wParam);
if (hwndCtl == hOKButton)
{
DestroyWindow(hWnd);
PostQuitMessage(0);
return 0;
}
}
break;
case WM_DESTROY:
PostQuitMessage(0);
return 0;
}
return DefWindowProc(hWnd, message, wParam, lParam);
}
Now ask for the driver and create the Windows specific window.
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
printf("Select the render window (some dead window may exist too):\n"\
" (a) Window with button (via CreationParam)\n"\
" (b) Window with button (via beginScene)\n"\
" (c) Own Irrlicht window (default behavior)\n"\
" (otherKey) exit\n\n");
char key;
std::cin >> key;
if (key != 'a' && key != 'b' && key != 'c')
return 1;
HINSTANCE hInstance = 0;
const fschar_t* Win32ClassName = __TEXT("CIrrlichtWindowsTestDialog");
WNDCLASSEX wcex;
wcex.cbSize = sizeof(WNDCLASSEX);
wcex.style = CS_HREDRAW | CS_VREDRAW;
wcex.lpfnWndProc = (WNDPROC)CustomWndProc;
wcex.cbClsExtra = 0;
wcex.cbWndExtra = DLGWINDOWEXTRA;
wcex.hInstance = hInstance;
wcex.hIcon = NULL;
wcex.hCursor = LoadCursor(NULL, IDC_ARROW);
wcex.hbrBackground = (HBRUSH)(COLOR_WINDOW);
wcex.lpszMenuName = 0;
wcex.lpszClassName = Win32ClassName;
wcex.hIconSm = 0;
RegisterClassEx(&wcex);
DWORD style = WS_SYSMENU | WS_BORDER | WS_CAPTION |
WS_CLIPCHILDREN | WS_CLIPSIBLINGS | WS_MAXIMIZEBOX | WS_MINIMIZEBOX | WS_SIZEBOX;
int windowWidth = 440;
int windowHeight = 380;
hWnd = CreateWindow( Win32ClassName, __TEXT("Irrlicht Win32 window example"),
style, 100, 100, windowWidth, windowHeight,
NULL, NULL, hInstance, NULL);
RECT clientRect;
GetClientRect(hWnd, &clientRect);
windowWidth = clientRect.right;
windowHeight = clientRect.bottom;
hOKButton = CreateWindow(__TEXT("BUTTON"), __TEXT("OK - Close"), WS_CHILD | WS_VISIBLE | BS_TEXT,
windowWidth - 160, windowHeight - 40, 150, 30, hWnd, NULL, hInstance, NULL);
CreateWindow(__TEXT("STATIC"), __TEXT("This is Irrlicht running inside a standard Win32 window.\n")\
__TEXT("Also mixing with MFC and .NET Windows.Forms is possible."),
WS_CHILD | WS_VISIBLE, 20, 20, 400, 40, hWnd, NULL, hInstance, NULL);
HWND hIrrlichtWindow = CreateWindow(__TEXT("BUTTON"), __TEXT(""),
WS_CHILD | WS_VISIBLE | BS_OWNERDRAW,
50, 80, 320, 220, hWnd, NULL, hInstance, NULL);
video::SExposedVideoData videodata((key=='b')?hIrrlichtWindow:0);
So now that we have a window, we can create an Irrlicht device inside of it. We use the Irrlicht createDeviceEx() function for this. We only need the handle (HWND) of that window, set it as the WindowId parameter and start up the engine as usual. That's it.
irr::SIrrlichtCreationParameters param;
param.DriverType = driverType;
if (key=='a')
param.WindowId = reinterpret_cast<void*>(hIrrlichtWindow);
irr::IrrlichtDevice* device = irr::createDeviceEx(param);
irr::scene::ISceneManager* smgr = device->getSceneManager();
video::IVideoDriver* driver = device->getVideoDriver();
if (driverType==video::EDT_OPENGL)
{
HDC HDc=GetDC(hIrrlichtWindow);
PIXELFORMATDESCRIPTOR pfd={0};
pfd.nSize=sizeof(PIXELFORMATDESCRIPTOR);
int pf = GetPixelFormat(HDc);
DescribePixelFormat(HDc, pf, sizeof(PIXELFORMATDESCRIPTOR), &pfd);
pfd.dwFlags |= PFD_DOUBLEBUFFER | PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW;
pfd.cDepthBits=16;
pf = ChoosePixelFormat(HDc, &pfd);
SetPixelFormat(HDc, pf, &pfd);
videodata.OpenGLWin32.HDc = HDc;
videodata.OpenGLWin32.HRc=wglCreateContext(HDc);
wglShareLists((HGLRC)driver->getExposedVideoData().OpenGLWin32.HRc, (HGLRC)videodata.OpenGLWin32.HRc);
}
scene::ICameraSceneNode* cam = smgr->addCameraSceneNode();
cam->setTarget(core::vector3df(0,0,0));
scene::ISceneNodeAnimator* anim =
smgr->createFlyCircleAnimator(core::vector3df(0,15,0), 30.0f);
cam->addAnimator(anim);
anim->drop();
scene::ISceneNode* cube = smgr->addCubeSceneNode(20);
const io::path mediaPath = getExampleMediaPath();
cube->setMaterialTexture(0, driver->getTexture(mediaPath + "wall.bmp"));
cube->setMaterialTexture(1, driver->getTexture(mediaPath + "water.jpg"));
cube->setMaterialFlag( video::EMF_LIGHTING, false );
cube->setMaterialType( video::EMT_REFLECTION_2_LAYER );
smgr->addSkyBoxSceneNode(
driver->getTexture(mediaPath + "irrlicht2_up.jpg"),
driver->getTexture(mediaPath + "irrlicht2_dn.jpg"),
driver->getTexture(mediaPath + "irrlicht2_lf.jpg"),
driver->getTexture(mediaPath + "irrlicht2_rt.jpg"),
driver->getTexture(mediaPath + "irrlicht2_ft.jpg"),
driver->getTexture(mediaPath + "irrlicht2_bk.jpg"));
device->getGUIEnvironment()->addStaticText(core::stringw("Second screen render").c_str(),core::recti(0,0,200,200));
ShowWindow(hWnd , SW_SHOW);
UpdateWindow(hWnd);
Now the only thing missing is the drawing loop using IrrlichtDevice::run(). We do this as usual. But instead of this, there is another possibility: You can also simply use your own message loop using GetMessage, DispatchMessage and whatever. Calling Device->run() will cause Irrlicht to dispatch messages internally too. You need not call Device->run() if you want to do your own message dispatching loop, but Irrlicht will not be able to fetch user input then and you have to do it on your own using the window messages, DirectInput, or whatever.
while (device->run())
{
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, video::SColor(0), 1.f, 0, videodata);
smgr->drawAll();
driver->endScene();
if (key=='b')
{
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, video::SColor(0xbbbbbbbb));
device->getGUIEnvironment()->drawAll();
driver->endScene();
}
}
The alternative, a message dispatching loop of your own without Device->run(), would look like this:
MSG msg;
while (true)
{
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
{
TranslateMessage(&msg);
DispatchMessage(&msg);
if (msg.message == WM_QUIT)
break;
}
// advance virtual time
device->getTimer()->tick();
// draw engine picture
driver->beginScene(true, true, 0, (key=='c')?hIrrlichtWindow:0);
smgr->drawAll();
driver->endScene();
}
device->closeDevice();
device->drop();
return 0;
}
#endif // if windows
That's it, Irrlicht now runs in your own windows window.
diff --git a/examples/15.LoadIrrFile/tutorial.html b/examples/15.LoadIrrFile/tutorial.html
index a9b28684..3416c8f2 100644
--- a/examples/15.LoadIrrFile/tutorial.html
+++ b/examples/15.LoadIrrFile/tutorial.html
@@ -195,10 +195,14 @@ tr.heading h2 {
-
+
Since version 1.1, Irrlicht is able to save and load the full scene graph into an .irr file, an XML-based format. There is an editor available to edit those files, named irrEdit (http://www.ambiera.com/irredit), which can also be used as a world and particle editor. This tutorial shows how to use .irr files.
-
Let's start: create an Irrlicht device and set up the window.
#include <irrlicht.h>
#include "driverChoice.h"
using namespace irr;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
int main(int argc, char** argv)
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
IrrlichtDevice* device =
createDevice(driverType, core::dimension2d<u32>(640, 480));
if (device == 0)
return 1;
device->setWindowCaption(L"Load .irr file example");
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
Now load our .irr file. .irr files can store the whole scene graph including animators, materials and particle systems. And there is also the possibility to store arbitrary user data for every scene node in that file. To keep this example simple, we are simply loading the scene here. See the documentation at ISceneManager::loadScene and ISceneManager::saveScene for more information. So to load and display a complicated huge scene, we only need a single call to loadScene().
if (argc>1)
smgr->loadScene(argv[1]);
else
smgr->loadScene("../../media/example.irr");
Now we'll create a camera, and give it a collision response animator that's built from the mesh nodes in the scene we just loaded.
scene::ICameraSceneNode * camera = smgr->addCameraSceneNodeFPS(0, 50.f, 0.1f);
scene::IMetaTriangleSelector * meta = smgr->createMetaTriangleSelector();
Now we will find all the nodes in the scene and create triangle selectors for all suitable nodes. Typically, you would want to make a more informed decision about which nodes to perform collision checks on; you could capture that information in the node name or Id.
core::array<scene::ISceneNode *> nodes;
smgr->getSceneNodesFromType(scene::ESNT_ANY, nodes);
for (u32 i=0; i < nodes.size(); ++i)
{
scene::ISceneNode * node = nodes[i];
scene::ITriangleSelector * selector = 0;
switch(node->getType())
{
case scene::ESNT_CUBE:
case scene::ESNT_ANIMATED_MESH:
selector = smgr->createTriangleSelectorFromBoundingBox(node);
break;
case scene::ESNT_MESH:
case scene::ESNT_SPHERE:
selector = smgr->createTriangleSelector(((scene::IMeshSceneNode*)node)->getMesh(), node);
break;
case scene::ESNT_TERRAIN:
selector = smgr->createTerrainTriangleSelector((scene::ITerrainSceneNode*)node);
break;
case scene::ESNT_OCTREE:
selector = smgr->createOctreeTriangleSelector(((scene::IMeshSceneNode*)node)->getMesh(), node);
break;
default:
break;
}
if(selector)
{
meta->addTriangleSelector(selector);
selector->drop();
}
}
Now that the mesh scene nodes have had triangle selectors created and added to the meta selector, create a collision response animator from that meta selector.
scene::ISceneNodeAnimator* anim = smgr->createCollisionResponseAnimator(
meta, camera, core::vector3df(5,5,5),
core::vector3df(0,0,0));
meta->drop();
camera->addAnimator(anim);
anim->drop();
camera->setPosition(core::vector3df(0.f, 20.f, 0.f));
scene::ISceneNode * cube = smgr->getSceneNodeFromType(scene::ESNT_CUBE);
if(cube)
camera->setTarget(cube->getAbsolutePosition());
That's it. Draw everything and finish as usual.
int lastFPS = -1;
while(device->run())
if (device->isWindowActive())
{
driver->beginScene(true, true, video::SColor(0,200,200,200));
smgr->drawAll();
driver->endScene();
int fps = driver->getFPS();
if (lastFPS != fps)
{
core::stringw str = L"Load Irrlicht File example - Irrlicht Engine [";
str += driver->getName();
str += "] FPS:";
str += fps;
device->setWindowCaption(str.c_str());
lastFPS = fps;
}
}
device->drop();
return 0;
}
+
Let's start: create an Irrlicht device and set up the window.
#include <irrlicht.h>
#include "driverChoice.h"
#include "exampleHelper.h"
using namespace irr;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
int main(int argc, char** argv)
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
IrrlichtDevice* device =
createDevice(driverType, core::dimension2d<u32>(640, 480));
if (device == 0)
return 1;
device->setWindowCaption(L"Load .irr file example");
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
Now load our .irr file. .irr files can store the whole scene graph including animators, materials and particle systems, and it is also possible to store arbitrary user data for every scene node in that file. To keep this example simple, we are simply loading the scene here. See the documentation at ISceneManager::loadScene and ISceneManager::saveScene for more information; a minimal saving sketch is shown after the loading code below. So to load and display a huge, complicated scene, we only need a single call to loadScene().
You might have to work around some minor problems in the current .irr loader:
+- It can't load meshes relative to the .irr file, but only relative to the working directory. So you might have to change your working directory to the directory the .irr file is in (see the short sketch after this list).
+- When passing a custom parent node to loadScene, the irr_scene attributes will be passed to that node. This is usually not a problem, but for example AmbientLight will not change that way unless you create a custom SceneNode type which can interpret those attributes.
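A minimal sketch of the first workaround, assuming the .irr file and the meshes it references live in a hypothetical folder named "levels":
// Hedged sketch (hypothetical paths, not part of this example): switch the working
// directory before loadScene so relative mesh paths can be resolved, then restore it.
io::IFileSystem* fs = device->getFileSystem();
const io::path previousCwd = fs->getWorkingDirectory();
fs->changeWorkingDirectoryTo("levels");
// ... call smgr->loadScene() here, as done below ...
fs->changeWorkingDirectoryTo(previousCwd);
In this example we simply load from the media path (or from a file given on the command line):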
if (argc>1)
smgr->loadScene(argv[1]);
else
smgr->loadScene(getExampleMediaPath() + "example.irr");
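Saving works the same way; a minimal sketch (hypothetical file name, not part of this example) for writing the current scene graph back out would be:
// Hedged sketch: serialize everything below the scene manager's root node to an .irr file.
smgr->saveScene("savedScene.irr");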
Now we'll create a camera, and give it a collision response animator that's built from the mesh nodes in the scene we just loaded.
scene::ICameraSceneNode * camera = smgr->addCameraSceneNodeFPS(0, 50.f, 0.1f);
scene::IMetaTriangleSelector * meta = smgr->createMetaTriangleSelector();
Now we will find all the nodes in the scene and create triangle selectors for all suitable nodes. Typically, you would want to make a more informed decision about which nodes to perform collision checks on; you could capture that information in the node name or Id (a short sketch follows the loop below).
core::array<scene::ISceneNode *> nodes;
smgr->getSceneNodesFromType(scene::ESNT_ANY, nodes);
for (u32 i=0; i < nodes.size(); ++i)
{
scene::ISceneNode * node = nodes[i];
scene::ITriangleSelector * selector = 0;
switch(node->getType())
{
case scene::ESNT_CUBE:
case scene::ESNT_ANIMATED_MESH:
selector = smgr->createTriangleSelectorFromBoundingBox(node);
break;
case scene::ESNT_MESH:
case scene::ESNT_SPHERE:
selector = smgr->createTriangleSelector(((scene::IMeshSceneNode*)node)->getMesh(), node);
break;
case scene::ESNT_TERRAIN:
selector = smgr->createTerrainTriangleSelector((scene::ITerrainSceneNode*)node);
break;
case scene::ESNT_OCTREE:
selector = smgr->createOctreeTriangleSelector(((scene::IMeshSceneNode*)node)->getMesh(), node);
break;
default:
break;
}
if(selector)
{
meta->addTriangleSelector(selector);
selector->drop();
}
}
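As a rough sketch of that more informed decision, assuming collidable nodes had been tagged with a hypothetical ID bit when the scene was built, the top of the loop body could skip everything else:
// Hedged sketch (hypothetical ID convention, not used by example.irr):
const s32 ID_IsCollidable = 1 << 0;
if ((node->getID() & ID_IsCollidable) == 0)
continue; // only build triangle selectors for nodes tagged as collidable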
Now that the mesh scene nodes have had triangle selectors created and added to the meta selector, create a collision response animator from that meta selector.
scene::ISceneNodeAnimator* anim = smgr->createCollisionResponseAnimator(
meta, camera, core::vector3df(5,5,5),
core::vector3df(0,0,0));
meta->drop();
camera->addAnimator(anim);
anim->drop();
camera->setPosition(core::vector3df(0.f, 20.f, 0.f));
scene::ISceneNode * cube = smgr->getSceneNodeFromType(scene::ESNT_CUBE);
if(cube)
camera->setTarget(cube->getAbsolutePosition());
That's it. Draw everything and finish as usual.
int lastFPS = -1;
while(device->run())
if (device->isWindowActive())
{
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, video::SColor(0,200,200,200));
smgr->drawAll();
driver->endScene();
int fps = driver->getFPS();
if (lastFPS != fps)
{
core::stringw str = L"Load Irrlicht File example - Irrlicht Engine [";
str += driver->getName();
str += "] FPS:";
str += fps;
device->setWindowCaption(str.c_str());
lastFPS = fps;
}
}
device->drop();
return 0;
}
+
+
diff --git a/examples/16.Quake3MapShader/tutorial.html b/examples/16.Quake3MapShader/tutorial.html
index 51983407..6397cdf5 100644
--- a/examples/16.Quake3MapShader/tutorial.html
+++ b/examples/16.Quake3MapShader/tutorial.html
@@ -195,10 +195,10 @@ tr.heading h2 {
-
+
-
This tutorial shows how to load a Quake 3 map into the engine, create a scene node to optimize the rendering speed, and create a user controlled camera.
-
Let's start like the HelloWorld example: we include the Irrlicht header files and an additional file which allows us to ask the user for a driver type using the console.
#include <irrlicht.h>
#include "driverChoice.h"
define which Quake3 Level should be loaded
#define IRRLICHT_QUAKE3_ARENA
#ifdef ORIGINAL_QUAKE3_ARENA
#define QUAKE3_STORAGE_FORMAT addFolderFileArchive
#define QUAKE3_STORAGE_1 "/baseq3/"
#ifdef CUSTOM_QUAKE3_ARENA
#define QUAKE3_STORAGE_2 "/cf/"
#define QUAKE3_MAP_NAME "maps/cf.bsp"
#else
#define QUAKE3_MAP_NAME "maps/q3dm8.bsp"
#endif
#endif
#ifdef IRRLICHT_QUAKE3_ARENA
#define QUAKE3_STORAGE_FORMAT addFileArchive
#define QUAKE3_STORAGE_1 "../../media/map-20kdm2.pk3"
#define QUAKE3_MAP_NAME "maps/20kdm2.bsp"
#endif
using namespace irr;
using namespace scene;
Again, to be able to use the Irrlicht.DLL file, we need to link with the Irrlicht.lib. We could set this option in the project settings, but to make it easy, we use a pragma comment lib:
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
A class to produce a series of screenshots
class CScreenShotFactory : public IEventReceiver
{
public:
CScreenShotFactory( IrrlichtDevice *device, const c8 * templateName, ISceneNode* node )
: Device(device), Number(0), FilenameTemplate(templateName), Node(node)
{
FilenameTemplate.replace ( '/', '_' );
FilenameTemplate.replace ( '\\', '_' );
}
bool OnEvent(const SEvent& event)
{
if ((event.EventType == EET_KEY_INPUT_EVENT) &&
event.KeyInput.PressedDown)
{
if (event.KeyInput.Key == KEY_F9)
{
video::IImage* image = Device->getVideoDriver()->createScreenShot();
if (image)
{
c8 buf[256];
snprintf(buf, 256, "%s_shot%04d.jpg",
FilenameTemplate.c_str(),
++Number);
Device->getVideoDriver()->writeImageToFile(image, buf, 85 );
image->drop();
}
}
else
if (event.KeyInput.Key == KEY_F8)
{
if (Node->isDebugDataVisible())
Node->setDebugDataVisible(scene::EDS_OFF);
else
Node->setDebugDataVisible(scene::EDS_BBOX_ALL);
}
}
return false;
}
private:
IrrlichtDevice *Device;
u32 Number;
core::stringc FilenameTemplate;
ISceneNode* Node;
};
Ok, let's start.
int IRRCALLCONV main(int argc, char* argv[])
{
Like in the HelloWorld example, we create an IrrlichtDevice with createDevice(). The difference now is that we ask the user to select which hardware accelerated driver to use. The Software device would be too slow to draw a huge Quake 3 map, but just for the fun of it, we make this decision possible too.
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
const core::dimension2du videoDim(800,600);
IrrlichtDevice *device = createDevice(driverType, videoDim, 32, false );
if (device == 0)
return 1;
const char* mapname=0;
if (argc>2)
mapname = argv[2];
else
mapname = QUAKE3_MAP_NAME;
Get a pointer to the video driver and the SceneManager so that we do not always have to write device->getVideoDriver() and device->getSceneManager().
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
gui::IGUIEnvironment* gui = device->getGUIEnvironment();
device->getFileSystem()->addFileArchive("../../media/");
To display the Quake 3 map, we first need to load it. Quake 3 maps are packed into .pk3 files, which are nothing other than .zip files. So we add the .pk3 file to our FileSystem. After it has been added, we are able to read the files in that archive as if they were stored directly on disk.
if (argc>2)
device->getFileSystem()->QUAKE3_STORAGE_FORMAT(argv[1]);
else
device->getFileSystem()->QUAKE3_STORAGE_FORMAT(QUAKE3_STORAGE_1);
#ifdef QUAKE3_STORAGE_2
device->getFileSystem()->QUAKE3_STORAGE_FORMAT(QUAKE3_STORAGE_2);
#endif
smgr->getParameters()->setAttribute(scene::ALLOW_ZWRITE_ON_TRANSPARENT, true);
Now we can load the mesh by calling getMesh(). We get back a pointer to an IAnimatedMesh. As you know, Quake 3 maps are not really animated, they are only a huge chunk of static geometry with some materials attached. Hence the IAnimatedMesh consists of only one frame, so we get the "first frame" of the "animation", which is our Quake level, and create an Octree scene node with it, using addOctreeSceneNode(). The Octree optimizes the scene a little bit, trying to draw only geometry which is currently visible. An alternative to the Octree would be an AnimatedMeshSceneNode, which would always draw the complete geometry of the mesh, without optimization. Try it out: write addAnimatedMeshSceneNode instead of addOctreeSceneNode and compare the primitives drawn by the video driver. (There is a getPrimitiveCountDrawn() method in the IVideoDriver class.) Note that this optimization with the Octree is only useful when drawing huge meshes consisting of lots of geometry.
scene::IQ3LevelMesh* const mesh =
(scene::IQ3LevelMesh*) smgr->getMesh(mapname);
Add the geometry mesh to the scene (polygons & patches). The geometry mesh is optimised for faster drawing.
scene::ISceneNode* node = 0;
if (mesh)
{
scene::IMesh * const geometry = mesh->getMesh(quake3::E_Q3_MESH_GEOMETRY);
node = smgr->addOctreeSceneNode(geometry, 0, -1, 4096);
}
CScreenShotFactory screenshotFactory(device, mapname, node);
device->setEventReceiver(&screenshotFactory);
Now construct SceneNodes for each shader. The objects are stored in the quake mesh scene::E_Q3_MESH_ITEMS and the shader ID is stored in the MaterialParameters. Mostly dark-looking skulls and moving lava... or green flashing tubes?
if ( mesh )
{
const scene::IMesh * const additional_mesh = mesh->getMesh(quake3::E_Q3_MESH_ITEMS);
#ifdef SHOW_SHADER_NAME
gui::IGUIFont *font = device->getGUIEnvironment()->getFont("../../media/fontlucida.png");
u32 count = 0;
#endif
for ( u32 i = 0; i!= additional_mesh->getMeshBufferCount(); ++i )
{
const IMeshBuffer* meshBuffer = additional_mesh->getMeshBuffer(i);
const video::SMaterial& material = meshBuffer->getMaterial();
const s32 shaderIndex = (s32) material.MaterialTypeParam2;
const quake3::IShader *shader = mesh->getShader(shaderIndex);
if (0 == shader)
{
continue;
}
node = smgr->addQuake3SceneNode(meshBuffer, shader);
#ifdef SHOW_SHADER_NAME
count += 1;
core::stringw name( node->getName() );
node = smgr->addBillboardTextSceneNode(
font, name.c_str(), node,
core::dimension2d<f32>(80.0f, 8.0f),
core::vector3df(0, 10, 0));
#endif
}
}
Now we only need a camera to look at the Quake 3 map, and we want it to be user-controlled. There are several different cameras available in the Irrlicht engine. For example the Maya camera, which can be controlled similarly to the camera in Maya: rotate with the left mouse button pressed, zoom with both buttons pressed, translate with the right mouse button pressed. It could be created with addCameraSceneNodeMaya(). But for this example, we want to create a camera which behaves like the ones in first person shooter (FPS) games.
scene::ICameraSceneNode* camera = smgr->addCameraSceneNodeFPS();
So we need a good starting position in the level. We can ask the Quake3 loader for all entities with the class_name "info_player_deathmatch"; we choose a random one as launch point.
if ( mesh )
{
quake3::tQ3EntityList &entityList = mesh->getEntityList();
quake3::IEntity search;
search.name = "info_player_deathmatch";
s32 index = entityList.binary_search(search);
if (index >= 0)
{
s32 notEndList;
do
{
const quake3::SVarGroup *group = entityList[index].getGroup(1);
u32 parsepos = 0;
const core::vector3df pos =
quake3::getAsVector3df(group->get("origin"), parsepos);
parsepos = 0;
const f32 angle = quake3::getAsFloat(group->get("angle"), parsepos);
core::vector3df target(0.f, 0.f, 1.f);
target.rotateXZBy(angle);
camera->setPosition(pos);
camera->setTarget(pos + target);
++index;
notEndList = ( index < (s32) entityList.size () && entityList[index].name == search.name && (device->getTimer()->getRealTime() >> 3 ) & 1 );
notEndList = index == 2;
} while ( notEndList );
}
}
The mouse cursor does not need to be visible, so we make it invisible.
device->getCursorControl()->setVisible(false);
gui->addImage(driver->getTexture("irrlichtlogo2.png"),
core::position2d<s32>(10, 10));
const core::position2di pos(videoDim.Width - 128, videoDim.Height - 64);
switch ( driverType )
{
case video::EDT_BURNINGSVIDEO:
gui->addImage(driver->getTexture("burninglogo.png"), pos);
break;
case video::EDT_OPENGL:
gui->addImage(driver->getTexture("opengllogo.png"), pos);
break;
case video::EDT_DIRECT3D8:
case video::EDT_DIRECT3D9:
gui->addImage(driver->getTexture("directxlogo.png"), pos);
break;
}
We have done everything, so let's draw it. We also write the current frames per second and the drawn primitives to the caption of the window. The 'if (device->isWindowActive())' line is optional, but prevents the engine from setting the position of the mouse cursor after task switching when other programs are active.
int lastFPS = -1;
while(device->run())
if (device->isWindowActive())
{
driver->beginScene(true, true, video::SColor(255,20,20,40));
smgr->drawAll();
gui->drawAll();
driver->endScene();
int fps = driver->getFPS();
{
io::IAttributes * const attr = smgr->getParameters();
core::stringw str = L"Q3 [";
str += driver->getName();
str += "] FPS:";
str += fps;
#ifdef _IRR_SCENEMANAGER_DEBUG
str += " Cull:";
str += attr->getAttributeAsInt("calls");
str += "/";
str += attr->getAttributeAsInt("culled");
str += " Draw: ";
str += attr->getAttributeAsInt("drawn_solid");
str += "/";
str += attr->getAttributeAsInt("drawn_transparent");
str += "/";
str += attr->getAttributeAsInt("drawn_transparent_effect");
#endif
device->setWindowCaption(str.c_str());
lastFPS = fps;
}
}
In the end, delete the Irrlicht device.
device->drop();
return 0;
}
+ This tutorial shows how to load a Quake 3 map into the engine, how to create a SceneNode for optimizing the rendering speed, and how to create a user-controlled camera.
+Let's start like in the HelloWorld example: we include the Irrlicht header files and an additional file to be able to ask the user for a driver type using the console.
#include <irrlicht.h>
#include "driverChoice.h"
#include "exampleHelper.h"
define which Quake3 Level should be loaded
#define IRRLICHT_QUAKE3_ARENA
#ifdef ORIGINAL_QUAKE3_ARENA
#define QUAKE3_STORAGE_FORMAT addFolderFileArchive
#define QUAKE3_STORAGE_1 "/baseq3/"
#ifdef CUSTOM_QUAKE3_ARENA
#define QUAKE3_STORAGE_2 "/cf/"
#define QUAKE3_MAP_NAME "maps/cf.bsp"
#else
#define QUAKE3_MAP_NAME "maps/q3dm8.bsp"
#endif
#endif
#ifdef IRRLICHT_QUAKE3_ARENA
#define QUAKE3_STORAGE_FORMAT addFileArchive
#define QUAKE3_STORAGE_1 getExampleMediaPath() + "map-20kdm2.pk3"
#define QUAKE3_MAP_NAME "maps/20kdm2.bsp"
#endif
using namespace irr;
using namespace scene;
Again, to be able to use the Irrlicht.DLL file, we need to link with the Irrlicht.lib. We could set this option in the project settings, but to make it easy, we use a pragma comment lib:
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
A class to produce a series of screenshots
class CScreenShotFactory : public IEventReceiver
{
public:
CScreenShotFactory( IrrlichtDevice *device, const c8 * templateName, ISceneNode* node )
: Device(device), Number(0), FilenameTemplate(templateName), Node(node)
{
FilenameTemplate.replace ( '/', '_' );
FilenameTemplate.replace ( '\\', '_' );
}
bool OnEvent(const SEvent& event)
{
if ((event.EventType == EET_KEY_INPUT_EVENT) &&
event.KeyInput.PressedDown)
{
if (event.KeyInput.Key == KEY_F9)
{
video::IImage* image = Device->getVideoDriver()->createScreenShot();
if (image)
{
c8 buf[256];
snprintf_irr(buf, 256, "%s_shot%04d.jpg",
FilenameTemplate.c_str(),
++Number);
Device->getVideoDriver()->writeImageToFile(image, buf, 85 );
image->drop();
}
}
else
if (event.KeyInput.Key == KEY_F8)
{
if (Node->isDebugDataVisible())
Node->setDebugDataVisible(scene::EDS_OFF);
else
Node->setDebugDataVisible(scene::EDS_BBOX_ALL);
}
}
return false;
}
private:
IrrlichtDevice *Device;
u32 Number;
core::stringc FilenameTemplate;
ISceneNode* Node;
};
Ok, let's start.
int IRRCALLCONV main(int argc, char* argv[])
{
Like in the HelloWorld example, we create an IrrlichtDevice with createDevice(). The difference now is that we ask the user to select which hardware accelerated driver to use. The Software device would be too slow to draw a huge Quake 3 map, but just for the fun of it, we make this decision possible too.
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
const core::dimension2du videoDim(800,600);
IrrlichtDevice *device = createDevice(driverType, videoDim, 32, false );
if (device == 0)
return 1;
const char* mapname=0;
if (argc>2)
mapname = argv[2];
else
mapname = QUAKE3_MAP_NAME;
Get a pointer to the video driver and the SceneManager so that we do not always have to write device->getVideoDriver() and device->getSceneManager().
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
gui::IGUIEnvironment* gui = device->getGUIEnvironment();
const io::path mediaPath = getExampleMediaPath();
device->getFileSystem()->addFileArchive(mediaPath);
To display the Quake 3 map, we first need to load it. Quake 3 maps are packed into .pk3 files, which are nothing other than .zip files. So we add the .pk3 file to our FileSystem. After it has been added, we are able to read the files in that archive as if they were stored directly on disk.
if (argc>2)
device->getFileSystem()->QUAKE3_STORAGE_FORMAT(argv[1]);
else
device->getFileSystem()->QUAKE3_STORAGE_FORMAT(QUAKE3_STORAGE_1);
#ifdef QUAKE3_STORAGE_2
device->getFileSystem()->QUAKE3_STORAGE_FORMAT(QUAKE3_STORAGE_2);
#endif
smgr->getParameters()->setAttribute(scene::ALLOW_ZWRITE_ON_TRANSPARENT, true);
Now we can load the mesh by calling getMesh(). We get back a pointer to an IAnimatedMesh. As you know, Quake 3 maps are not really animated, they are only a huge chunk of static geometry with some materials attached. Hence the IAnimatedMesh consists of only one frame, so we get the "first frame" of the "animation", which is our Quake level, and create an Octree scene node with it, using addOctreeSceneNode(). The Octree optimizes the scene a little bit, trying to draw only geometry which is currently visible. An alternative to the Octree would be an AnimatedMeshSceneNode, which would always draw the complete geometry of the mesh, without optimization. Try it out: write addAnimatedMeshSceneNode instead of addOctreeSceneNode and compare the primitives drawn by the video driver. (There is a getPrimitiveCountDrawn() method in the IVideoDriver class.) Note that this optimization with the Octree is only useful when drawing huge meshes consisting of lots of geometry. A small sketch of this comparison follows the code below.
scene::IQ3LevelMesh* const mesh =
(scene::IQ3LevelMesh*) smgr->getMesh(mapname);
Add the geometry mesh to the scene (polygons & patches). The geometry mesh is optimised for faster drawing.
scene::ISceneNode* node = 0;
if (mesh)
{
scene::IMesh * const geometry = mesh->getMesh(quake3::E_Q3_MESH_GEOMETRY);
node = smgr->addOctreeSceneNode(geometry, 0, -1, 4096);
}
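As a minimal sketch of that comparison (not part of the original example; it assumes the same mesh, smgr and driver variables from above), the unoptimized alternative and the counter read-back could look like this:
scene::ISceneNode* plainNode = smgr->addAnimatedMeshSceneNode(mesh); // instead of addOctreeSceneNode
// ... then, somewhere after driver->endScene() in the render loop:
u32 primitivesDrawn = driver->getPrimitiveCountDrawn();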
CScreenShotFactory screenshotFactory(device, mapname, node);
device->setEventReceiver(&screenshotFactory);
Now construct SceneNodes for each shader. The objects are stored in the quake mesh scene::E_Q3_MESH_ITEMS and the shader ID is stored in the MaterialParameters. Mostly dark-looking skulls and moving lava... or green flashing tubes?
if ( mesh )
{
const scene::IMesh * const additional_mesh = mesh->getMesh(quake3::E_Q3_MESH_ITEMS);
#ifdef SHOW_SHADER_NAME
gui::IGUIFont *font = device->getGUIEnvironment()->getFont(mediaPath + "fontlucida.png");
u32 count = 0;
#endif
for ( u32 i = 0; i!= additional_mesh->getMeshBufferCount(); ++i )
{
const IMeshBuffer* meshBuffer = additional_mesh->getMeshBuffer(i);
const video::SMaterial& material = meshBuffer->getMaterial();
const s32 shaderIndex = (s32) material.MaterialTypeParam2;
const quake3::IShader *shader = mesh->getShader(shaderIndex);
if (0 == shader)
{
continue;
}
node = smgr->addQuake3SceneNode(meshBuffer, shader);
#ifdef SHOW_SHADER_NAME
count += 1;
core::stringw name( node->getName() );
node = smgr->addBillboardTextSceneNode(
font, name.c_str(), node,
core::dimension2d<f32>(80.0f, 8.0f),
core::vector3df(0, 10, 0));
#endif
}
}
Now we only need a camera to look at the Quake 3 map, and we want it to be user-controlled. There are several different cameras available in the Irrlicht engine. For example the Maya camera, which can be controlled similarly to the camera in Maya: rotate with the left mouse button pressed, zoom with both buttons pressed, translate with the right mouse button pressed. It could be created with addCameraSceneNodeMaya(). But for this example, we want to create a camera which behaves like the ones in first person shooter (FPS) games.
scene::ICameraSceneNode* camera = smgr->addCameraSceneNodeFPS();
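If you would rather try the Maya-style camera mentioned above, a minimal sketch (not part of the original example) simply swaps the call:
scene::ICameraSceneNode* mayaCamera = smgr->addCameraSceneNodeMaya();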
So we need a good starting position in the level. We can ask the Quake3 loader for all entities with the class_name "info_player_deathmatch"; we choose a random one as launch point.
if ( mesh )
{
quake3::tQ3EntityList &entityList = mesh->getEntityList();
quake3::IEntity search;
search.name = "info_player_deathmatch";
s32 index = entityList.binary_search(search);
if (index >= 0)
{
s32 notEndList;
do
{
const quake3::SVarGroup *group = entityList[index].getGroup(1);
u32 parsepos = 0;
const core::vector3df pos =
quake3::getAsVector3df(group->get("origin"), parsepos);
parsepos = 0;
const f32 angle = quake3::getAsFloat(group->get("angle"), parsepos);
core::vector3df target(0.f, 0.f, 1.f);
target.rotateXZBy(angle);
camera->setPosition(pos);
camera->setTarget(pos + target);
++index;
notEndList = ( index < (s32) entityList.size () && entityList[index].name == search.name && (device->getTimer()->getRealTime() >> 3 ) & 1 );
notEndList = index == 2;
} while ( notEndList );
}
}
The mouse cursor does not need to be visible, so we make it invisible.
device->getCursorControl()->setVisible(false);
gui->addImage(driver->getTexture("irrlichtlogo3.png"),
core::position2d<s32>(10, 10));
const core::position2di pos(videoDim.Width - 128, videoDim.Height - 64);
switch ( driverType )
{
case video::EDT_BURNINGSVIDEO:
gui->addImage(driver->getTexture("burninglogo.png"), pos);
break;
case video::EDT_OPENGL:
gui->addImage(driver->getTexture("opengllogo.png"), pos);
break;
case video::EDT_DIRECT3D9:
gui->addImage(driver->getTexture("directxlogo.png"), pos);
break;
default:
break;
}
We have done everything, so let's draw it. We also write the current frames per second and the drawn primitives to the caption of the window. The 'if (device->isWindowActive())' line is optional, but prevents the engine from setting the position of the mouse cursor after task switching when other programs are active.
int lastFPS = -1;
while(device->run())
if (device->isWindowActive())
{
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, video::SColor(255,20,20,40));
smgr->drawAll();
gui->drawAll();
driver->endScene();
int fps = driver->getFPS();
if (1 || lastFPS != fps)
{
core::stringw str = L"Q3 [";
str += driver->getName();
str += "] FPS:";
str += fps;
#ifdef _IRR_SCENEMANAGER_DEBUG
io::IAttributes * const attr = smgr->getParameters();
str += " Cull:";
str += attr->getAttributeAsInt("calls");
str += "/";
str += attr->getAttributeAsInt("culled");
str += " Draw: ";
str += attr->getAttributeAsInt("drawn_solid");
str += "/";
str += attr->getAttributeAsInt("drawn_transparent");
str += "/";
str += attr->getAttributeAsInt("drawn_transparent_effect");
#endif
device->setWindowCaption(str.c_str());
lastFPS = fps;
}
}
In the end, delete the Irrlicht device.
device->drop();
return 0;
}
diff --git a/examples/17.HelloWorld_Mobile/main.cpp b/examples/17.HelloWorld_Mobile/main.cpp
index 9db5806f..69374c26 100644
--- a/examples/17.HelloWorld_Mobile/main.cpp
+++ b/examples/17.HelloWorld_Mobile/main.cpp
@@ -1,8 +1,9 @@
-/** Deprecated. This was Example 017 Helloworld mobile for WinCE 6.
- But WinCE6 support has been removed for Irrlicht 1.9.
+/** Example 017 Helloworld mobile for WinCE 6. DEPRECATED
+ This was Example 017 Helloworld mobile for WinCE 6.
+ But WinCE6 support has been removed for Irrlicht 1.9.
If you still need that please use Irrlicht 1.8 or svn revision 5045 which was the last one to include it.
- Sources still kept for now as it compiles on other platform too. And we might use this example again
+ Sources still kept for now as it compiles on other platform too. And we might use this example again
once we support Windows RT.
*/
@@ -101,7 +102,7 @@ public:
virtual SMaterial& getMaterial(u32 i)
{
return Material;
- }
+ }
};
/*!
@@ -186,7 +187,7 @@ int example_customscenenode()
// create engine and camera
EventReceiver_basic receiver(device);
device->setEventReceiver(&receiver);
-
+
IVideoDriver* driver = device->getVideoDriver();
ISceneManager* smgr = device->getSceneManager();
IGUIEnvironment* guienv = device->getGUIEnvironment();
@@ -194,10 +195,10 @@ int example_customscenenode()
smgr->addCameraSceneNode(0, vector3df(0,-40,0), vector3df(0,0,0));
- CSampleSceneNode *myNode =
+ CSampleSceneNode *myNode =
new CSampleSceneNode(smgr->getRootSceneNode(), smgr, 666);
- ISceneNodeAnimator* anim =
+ ISceneNodeAnimator* anim =
smgr->createRotationAnimator(vector3df(0.8f, 0, 0.8f));
if(anim)
@@ -293,7 +294,7 @@ int example_terrain()
IrrlichtDevice *device = startup();
if (device == 0)
return 1; // could not create selected driver.
-
+
/*
First, we add standard stuff to the scene: A nice irrlicht engine
logo, a small help text, a user controlled camera, and we disable
@@ -363,7 +364,7 @@ int example_terrain()
driver->getTexture("../../media/terrain-texture.jpg"));
terrain->setMaterialTexture(1,
driver->getTexture("../../media/detailmap3.jpg"));
-
+
terrain->setMaterialType(video::EMT_DETAIL_MAP);
terrain->scaleTexture(1.0f, 20.0f);
@@ -378,7 +379,7 @@ int example_terrain()
through the terrain.
*/
- // create triangle selector for the terrain
+ // create triangle selector for the terrain
scene::ITriangleSelector* selector
= smgr->createTerrainTriangleSelector(terrain, 0);
terrain->setTriangleSelector(selector);
diff --git a/examples/17.HelloWorld_Mobile/tutorial.html b/examples/17.HelloWorld_Mobile/tutorial.html
index e7f7f94c..e0e6d3c3 100644
--- a/examples/17.HelloWorld_Mobile/tutorial.html
+++ b/examples/17.HelloWorld_Mobile/tutorial.html
@@ -5,7 +5,7 @@
-Tutorial 17: Helloworld mobile
+Tutorial 17: Helloworld mobile for WinCE 6. DEPRECATED
-
+
- This example shows Hello World for Windows Mobile. It compiles on other platforms too. The only differences from the original examples are: you need a GUI, because otherwise you can't quit the application, and you need a filesystem which is relative to your executable.
#include <irrlicht.h>
#if defined ( _IRR_WINDOWS_ )
#include <windows.h>
#endif
using namespace irr;
using namespace core;
using namespace scene;
using namespace video;
using namespace io;
using namespace gui;
#pragma comment(lib, "Irrlicht.lib")
class EventReceiver_basic : public IEventReceiver
{
private:
IrrlichtDevice *Device;
public:
EventReceiver_basic ( IrrlichtDevice *device ): Device ( device ) {}
virtual bool OnEvent(const SEvent& event)
{
if (event.EventType == EET_GUI_EVENT)
{
s32 id = event.GUIEvent.Caller->getID();
switch(event.GUIEvent.EventType)
{
case EGET_BUTTON_CLICKED:
if (id == 2)
{
Device->closeDevice();
return true;
} break;
}
}
return false;
}
};
class CSampleSceneNode : public ISceneNode
{
aabbox3d<f32> Box;
S3DVertex Vertices[4];
SMaterial Material;
public:
CSampleSceneNode(ISceneNode* parent, ISceneManager* mgr, s32 id)
: ISceneNode(parent, mgr, id)
{
Material.Wireframe = false;
Material.Lighting = false;
Vertices[0] = S3DVertex(0,0,10, 1,1,0, SColor(255,0,255,255), 0, 1);
Vertices[1] = S3DVertex(10,0,-10, 1,0,0, SColor(255,255,0,255), 1, 1);
Vertices[2] = S3DVertex(0,20,0, 0,1,1, SColor(255,255,255,0), 1, 0);
Vertices[3] = S3DVertex(-10,0,-10, 0,0,1, SColor(255,0,255,0), 0, 0);
Box.reset(Vertices[0].Pos);
for (s32 i=1; i<4; ++i)
Box.addInternalPoint(Vertices[i].Pos);
}
virtual void OnRegisterSceneNode()
{
if (IsVisible)
SceneManager->registerNodeForRendering(this);
ISceneNode::OnRegisterSceneNode();
}
virtual void render()
{
u16 indices[] = { 0,2,3, 2,1,3, 1,0,3, 2,0,1 };
IVideoDriver* driver = SceneManager->getVideoDriver();
driver->setMaterial(Material);
driver->setTransform(ETS_WORLD, AbsoluteTransformation);
driver->drawIndexedTriangleList(&Vertices[0], 4, &indices[0], 4);
}
virtual const aabbox3d<f32>& getBoundingBox() const
{
return Box;
}
virtual u32 getMaterialCount()
{
return 1;
}
virtual SMaterial& getMaterial(u32 i)
{
return Material;
}
};
! Startup a Windows Mobile Device
IrrlichtDevice *startup()
{
E_DRIVER_TYPE driverType = EDT_SOFTWARE;
IrrlichtDevice *device = 0;
#if defined (_IRR_USE_WINDOWS_CE_DEVICE_)
device = createDevice(driverType, dimension2d<u32>(240, 320), 16, true );
#else
device = createDevice(driverType, dimension2d<u32>(240, 320), 16, false );
#endif
if ( 0 == device )
return 0;
IVideoDriver* driver = device->getVideoDriver();
ISceneManager* smgr = device->getSceneManager();
IGUIEnvironment* guienv = device->getGUIEnvironment();
#if defined (_IRR_WINDOWS_)
{
wchar_t buf[255];
GetModuleFileNameW ( 0, buf, 255 );
io::path base = buf;
base = base.subString ( 0, base.findLast ( '\\' ) + 1 );
device->getFileSystem()->addFileArchive ( base );
}
#endif
IGUIStaticText *text = guienv->addStaticText(L"FPS: 25",
rect<s32>(140,15,200,30), false, false, 0, 100 );
guienv->addButton(core::rect<int>(200,10,238,30), 0, 2, L"Quit");
guienv->addImage(driver->getTexture("../../media/irrlichtlogo3.png"),
core::position2d<s32>(0,-2));
return device;
}
!
int run ( IrrlichtDevice *device )
{
while(device->run())
if (device->isWindowActive())
{
device->getVideoDriver()->beginScene(true, true, SColor(0,100,100,100));
device->getSceneManager()->drawAll();
device->getGUIEnvironment()->drawAll();
device->getVideoDriver()->endScene ();
IGUIElement *stat = device->getGUIEnvironment()->
getRootGUIElement()->getElementFromId ( 100 );
if ( stat )
{
stringw str = L"FPS: ";
str += (s32)device->getVideoDriver()->getFPS();
stat->setText ( str.c_str() );
}
}
device->drop();
return 0;
}
!
int example_customscenenode()
{
IrrlichtDevice *device = startup();
if (device == 0)
return 1;
EventReceiver_basic receiver(device);
device->setEventReceiver(&receiver);
IVideoDriver* driver = device->getVideoDriver();
ISceneManager* smgr = device->getSceneManager();
IGUIEnvironment* guienv = device->getGUIEnvironment();
smgr->addCameraSceneNode(0, vector3df(0,-40,0), vector3df(0,0,0));
CSampleSceneNode *myNode =
new CSampleSceneNode(smgr->getRootSceneNode(), smgr, 666);
ISceneNodeAnimator* anim =
smgr->createRotationAnimator(vector3df(0.8f, 0, 0.8f));
if(anim)
{
myNode->addAnimator(anim);
anim->drop();
anim = 0;
}
myNode->drop();
myNode = 0;
return run ( device );
}
class EventReceiver_terrain : public IEventReceiver
{
public:
EventReceiver_terrain(IrrlichtDevice *device, scene::ISceneNode* terrain, scene::ISceneNode* skybox, scene::ISceneNode* skydome) :
Device ( device ), Terrain(terrain), Skybox(skybox), Skydome(skydome), showBox(true)
{
Skybox->setVisible(true);
Skydome->setVisible(false);
}
bool OnEvent(const SEvent& event)
{
if (event.EventType == EET_GUI_EVENT)
{
s32 id = event.GUIEvent.Caller->getID();
switch(event.GUIEvent.EventType)
{
case EGET_BUTTON_CLICKED:
if (id == 2)
{
Device->closeDevice();
return true;
} break;
}
}
if (event.EventType == irr::EET_KEY_INPUT_EVENT && !event.KeyInput.PressedDown)
{
switch (event.KeyInput.Key)
{
case irr::KEY_KEY_W:
Terrain->setMaterialFlag(video::EMF_WIREFRAME,
!Terrain->getMaterial(0).Wireframe);
Terrain->setMaterialFlag(video::EMF_POINTCLOUD, false);
return true;
case irr::KEY_KEY_P:
Terrain->setMaterialFlag(video::EMF_POINTCLOUD,
!Terrain->getMaterial(0).PointCloud);
Terrain->setMaterialFlag(video::EMF_WIREFRAME, false);
return true;
case irr::KEY_KEY_D:
Terrain->setMaterialType(
Terrain->getMaterial(0).MaterialType == video::EMT_SOLID ?
video::EMT_DETAIL_MAP : video::EMT_SOLID);
return true;
case irr::KEY_KEY_S:
showBox=!showBox;
Skybox->setVisible(showBox);
Skydome->setVisible(!showBox);
return true;
default:
break;
}
}
return false;
}
private:
IrrlichtDevice *Device;
scene::ISceneNode* Terrain;
scene::ISceneNode* Skybox;
scene::ISceneNode* Skydome;
bool showBox;
};
The main function starts like in most other examples. We ask the user for the desired renderer and start it up, this time with the advanced parameter handling.
int example_terrain()
{
IrrlichtDevice *device = startup();
if (device == 0)
return 1;
First, we add standard stuff to the scene: A nice irrlicht engine logo, a small help text, a user controlled camera, and we disable the mouse cursor.
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
gui::IGUIEnvironment* env = device->getGUIEnvironment();
env->addStaticText(
L"Press 'W' to change wireframe mode\nPress 'D' to toggle detail map\nPress 'S' to toggle skybox/skydome",
core::rect<s32>(5,250,235,320), true, true, 0, -1, true);
scene::ICameraSceneNode* camera =
smgr->addCameraSceneNodeFPS(0,100.0f,1.2f);
camera->setPosition(core::vector3df(2700*2,255*2,2600*2));
camera->setTarget(core::vector3df(2397*2,343*2,2700*2));
camera->setFarValue(42000.0f);
device->getCursorControl()->setVisible(false);
Here comes the terrain renderer scene node: we add it to the scene just like any other scene node, using ISceneManager::addTerrainSceneNode(). The only parameter we use is the file name of the heightmap. A heightmap is simply a grayscale texture. The terrain renderer loads it and creates the 3D terrain from it.
-
To make the terrain look bigger, we change its scale factor to (40, 4.4, 40). Because we don't have any dynamic lights in the scene, we switch off the lighting, and we set the file terrain-texture.jpg as texture for the terrain and detailmap3.jpg as second texture, called the detail map. Finally, we set the scale values for the textures: the first texture will be repeated only once over the whole terrain, and the second one (the detail map) 20 times.
scene::ITerrainSceneNode* terrain = smgr->addTerrainSceneNode(
"../../media/terrain-heightmap.bmp",
0,
-1,
core::vector3df(0.f, 0.f, 0.f),
core::vector3df(0.f, 0.f, 0.f),
core::vector3df(40.f, 4.4f, 40.f),
video::SColor ( 255, 255, 255, 255 ),
5,
scene::ETPS_17,
4
);
if ( terrain )
{
terrain->setMaterialFlag(video::EMF_LIGHTING, false);
terrain->setMaterialTexture(0,
driver->getTexture("../../media/terrain-texture.jpg"));
terrain->setMaterialTexture(1,
driver->getTexture("../../media/detailmap3.jpg"));
terrain->setMaterialType(video::EMT_DETAIL_MAP);
terrain->scaleTexture(1.0f, 20.0f);
To be able to do collision with the terrain, we create a triangle selector. If you want to know what triangle selectors do, just take a look into the collision tutorial. The terrain triangle selector works together with the terrain. To demonstrate this, we create a collision response animator and attach it to the camera, so that the camera will not be able to fly through the terrain.
scene::ITriangleSelector* selector
= smgr->createTerrainTriangleSelector(terrain, 0);
terrain->setTriangleSelector(selector);
scene::ISceneNodeAnimator* anim = smgr->createCollisionResponseAnimator(
selector, camera, core::vector3df(60,100,60),
core::vector3df(0,0,0),
core::vector3df(0,50,0));
selector->drop();
camera->addAnimator(anim);
anim->drop();
If you need access to the terrain data you can also do this directly via the following code fragment.
scene::CDynamicMeshBuffer* buffer = new scene::CDynamicMeshBuffer(video::EVT_2TCOORDS, video::EIT_16BIT);
terrain->getMeshBufferForLOD(*buffer, 0);
video::S3DVertex2TCoords* data = (video::S3DVertex2TCoords*)buffer->getVertexBuffer().getData();
buffer->drop();
}
To let the user switch between normal and wireframe mode, we create an instance of the event receiver from above and let Irrlicht know about it. In addition, we add the skybox which we already used in lots of Irrlicht examples, and a skydome, which is shown mutually exclusively with the skybox by pressing 'S'.
driver->setTextureCreationFlag(video::ETCF_CREATE_MIP_MAPS, false);
scene::ISceneNode* skybox=smgr->addSkyBoxSceneNode(
driver->getTexture("../../media/irrlicht2_up.jpg"),
driver->getTexture("../../media/irrlicht2_dn.jpg"),
driver->getTexture("../../media/irrlicht2_lf.jpg"),
driver->getTexture("../../media/irrlicht2_rt.jpg"),
driver->getTexture("../../media/irrlicht2_ft.jpg"),
driver->getTexture("../../media/irrlicht2_bk.jpg"));
scene::ISceneNode* skydome=smgr->addSkyDomeSceneNode(driver->getTexture("../../media/skydome.jpg"),16,8,0.95f,2.0f);
driver->setTextureCreationFlag(video::ETCF_CREATE_MIP_MAPS, true);
EventReceiver_terrain receiver( device, terrain, skybox, skydome);
device->setEventReceiver(&receiver);
return run ( device );
}
int example_helloworld()
{
IrrlichtDevice *device = startup();
if (device == 0)
return 1;
IVideoDriver* driver = device->getVideoDriver();
ISceneManager* smgr = device->getSceneManager();
IGUIEnvironment* guienv = device->getGUIEnvironment();
IAnimatedMesh* mesh = smgr->getMesh("../../media/sydney.md2");
if (!mesh)
{
device->drop();
return 1;
}
IAnimatedMeshSceneNode* node = smgr->addAnimatedMeshSceneNode( mesh );
To let the mesh look a little bit nicer, we change its material. We disable lighting because we do not have a dynamic light in here, and the mesh would be totally black otherwise. Then we set the frame loop, such that the predefined STAND animation is used. And last, we apply a texture to the mesh. Without it the mesh would be drawn using only a color.
if (node)
{
node->setMaterialFlag(EMF_LIGHTING, false);
node->setMD2Animation(scene::EMAT_STAND);
node->setMaterialTexture( 0, driver->getTexture("../../media/sydney.bmp") );
}
To look at the mesh, we place a camera into 3d space at the position (0, 30, -40). The camera looks from there to (0,5,0), which is approximately the place where our md2 model is.
smgr->addCameraSceneNode(0, vector3df(0,30,-40), vector3df(0,5,0));
EventReceiver_basic receiver(device);
device->setEventReceiver(&receiver);
return run ( device );
}
#if defined (_IRR_USE_WINDOWS_CE_DEVICE_)
#pragma comment(linker, "/subsystem:WINDOWSCE /ENTRY:main")
#elif defined (_IRR_WINDOWS_)
#pragma comment(linker, "/subsystem:windows /ENTRY:mainCRTStartup")
#endif
int main()
{
example_helloworld ();
example_customscenenode();
}
+ This was Example 017 Helloworld mobile for WinCE 6. But WinCE 6 support has been removed for Irrlicht 1.9. If you still need it, please use Irrlicht 1.8 or svn revision 5045, which was the last one to include it.
+Sources are still kept for now as the example compiles on other platforms too. And we might use this example again once we support Windows RT.
#include <irrlicht.h>
#if defined ( _IRR_WINDOWS_ )
#include <windows.h>
#endif
using namespace irr;
using namespace core;
using namespace scene;
using namespace video;
using namespace io;
using namespace gui;
#pragma comment(lib, "Irrlicht.lib")
class EventReceiver_basic : public IEventReceiver
{
private:
IrrlichtDevice *Device;
public:
EventReceiver_basic ( IrrlichtDevice *device ): Device ( device ) {}
virtual bool OnEvent(const SEvent& event)
{
if (event.EventType == EET_GUI_EVENT)
{
s32 id = event.GUIEvent.Caller->getID();
switch(event.GUIEvent.EventType)
{
case EGET_BUTTON_CLICKED:
if (id == 2)
{
Device->closeDevice();
return true;
} break;
}
}
return false;
}
};
class CSampleSceneNode : public ISceneNode
{
aabbox3d<f32> Box;
S3DVertex Vertices[4];
SMaterial Material;
public:
CSampleSceneNode(ISceneNode* parent, ISceneManager* mgr, s32 id)
: ISceneNode(parent, mgr, id)
{
Material.Wireframe = false;
Material.Lighting = false;
Vertices[0] = S3DVertex(0,0,10, 1,1,0, SColor(255,0,255,255), 0, 1);
Vertices[1] = S3DVertex(10,0,-10, 1,0,0, SColor(255,255,0,255), 1, 1);
Vertices[2] = S3DVertex(0,20,0, 0,1,1, SColor(255,255,255,0), 1, 0);
Vertices[3] = S3DVertex(-10,0,-10, 0,0,1, SColor(255,0,255,0), 0, 0);
Box.reset(Vertices[0].Pos);
for (s32 i=1; i<4; ++i)
Box.addInternalPoint(Vertices[i].Pos);
}
virtual void OnRegisterSceneNode()
{
if (IsVisible)
SceneManager->registerNodeForRendering(this);
ISceneNode::OnRegisterSceneNode();
}
virtual void render()
{
u16 indices[] = { 0,2,3, 2,1,3, 1,0,3, 2,0,1 };
IVideoDriver* driver = SceneManager->getVideoDriver();
driver->setMaterial(Material);
driver->setTransform(ETS_WORLD, AbsoluteTransformation);
driver->drawIndexedTriangleList(&Vertices[0], 4, &indices[0], 4);
}
virtual const aabbox3d<f32>& getBoundingBox() const
{
return Box;
}
virtual u32 getMaterialCount()
{
return 1;
}
virtual SMaterial& getMaterial(u32 i)
{
return Material;
}
};
! Startup a Windows Mobile Device
IrrlichtDevice *startup()
{
E_DRIVER_TYPE driverType = EDT_SOFTWARE;
IrrlichtDevice *device = 0;
device = createDevice(driverType, dimension2d<u32>(240, 320), 16, false );
if ( 0 == device )
return 0;
IVideoDriver* driver = device->getVideoDriver();
ISceneManager* smgr = device->getSceneManager();
IGUIEnvironment* guienv = device->getGUIEnvironment();
#if defined (_IRR_WINDOWS_)
{
wchar_t buf[255];
GetModuleFileNameW ( 0, buf, 255 );
io::path base = buf;
base = base.subString ( 0, base.findLast ( '\\' ) + 1 );
device->getFileSystem()->addFileArchive ( base );
}
#endif
IGUIStaticText *text = guienv->addStaticText(L"FPS: 25",
rect<s32>(140,15,200,30), false, false, 0, 100 );
guienv->addButton(core::rect<int>(200,10,238,30), 0, 2, L"Quit");
guienv->addImage(driver->getTexture("../../media/irrlichtlogo3.png"),
core::position2d<s32>(0,-2));
return device;
}
!
int run ( IrrlichtDevice *device )
{
while(device->run())
if (device->isWindowActive())
{
device->getVideoDriver()->beginScene(true, true, SColor(0,100,100,100));
device->getSceneManager()->drawAll();
device->getGUIEnvironment()->drawAll();
device->getVideoDriver()->endScene ();
IGUIElement *stat = device->getGUIEnvironment()->
getRootGUIElement()->getElementFromId ( 100 );
if ( stat )
{
stringw str = L"FPS: ";
str += (s32)device->getVideoDriver()->getFPS();
stat->setText ( str.c_str() );
}
}
device->drop();
return 0;
}
!
int example_customscenenode()
{
IrrlichtDevice *device = startup();
if (device == 0)
return 1;
EventReceiver_basic receiver(device);
device->setEventReceiver(&receiver);
IVideoDriver* driver = device->getVideoDriver();
ISceneManager* smgr = device->getSceneManager();
IGUIEnvironment* guienv = device->getGUIEnvironment();
smgr->addCameraSceneNode(0, vector3df(0,-40,0), vector3df(0,0,0));
CSampleSceneNode *myNode =
new CSampleSceneNode(smgr->getRootSceneNode(), smgr, 666);
ISceneNodeAnimator* anim =
smgr->createRotationAnimator(vector3df(0.8f, 0, 0.8f));
if(anim)
{
myNode->addAnimator(anim);
anim->drop();
anim = 0;
}
myNode->drop();
myNode = 0;
return run ( device );
}
class EventReceiver_terrain : public IEventReceiver
{
public:
EventReceiver_terrain(IrrlichtDevice *device, scene::ISceneNode* terrain, scene::ISceneNode* skybox, scene::ISceneNode* skydome) :
Device ( device ), Terrain(terrain), Skybox(skybox), Skydome(skydome), showBox(true)
{
Skybox->setVisible(true);
Skydome->setVisible(false);
}
bool OnEvent(const SEvent& event)
{
if (event.EventType == EET_GUI_EVENT)
{
s32 id = event.GUIEvent.Caller->getID();
switch(event.GUIEvent.EventType)
{
case EGET_BUTTON_CLICKED:
if (id == 2)
{
Device->closeDevice();
return true;
} break;
}
}
if (event.EventType == irr::EET_KEY_INPUT_EVENT && !event.KeyInput.PressedDown)
{
switch (event.KeyInput.Key)
{
case irr::KEY_KEY_W:
Terrain->setMaterialFlag(video::EMF_WIREFRAME,
!Terrain->getMaterial(0).Wireframe);
Terrain->setMaterialFlag(video::EMF_POINTCLOUD, false);
return true;
case irr::KEY_KEY_P:
Terrain->setMaterialFlag(video::EMF_POINTCLOUD,
!Terrain->getMaterial(0).PointCloud);
Terrain->setMaterialFlag(video::EMF_WIREFRAME, false);
return true;
case irr::KEY_KEY_D:
Terrain->setMaterialType(
Terrain->getMaterial(0).MaterialType == video::EMT_SOLID ?
video::EMT_DETAIL_MAP : video::EMT_SOLID);
return true;
case irr::KEY_KEY_S:
showBox=!showBox;
Skybox->setVisible(showBox);
Skydome->setVisible(!showBox);
return true;
default:
break;
}
}
return false;
}
private:
IrrlichtDevice *Device;
scene::ISceneNode* Terrain;
scene::ISceneNode* Skybox;
scene::ISceneNode* Skydome;
bool showBox;
};
The main function starts like in most other examples. We ask the user for the desired renderer and start it up, this time with the advanced parameter handling.
int example_terrain()
{
IrrlichtDevice *device = startup();
if (device == 0)
return 1;
First, we add standard stuff to the scene: A nice irrlicht engine logo, a small help text, a user controlled camera, and we disable the mouse cursor.
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
gui::IGUIEnvironment* env = device->getGUIEnvironment();
env->addStaticText(
L"Press 'W' to change wireframe mode\nPress 'D' to toggle detail map\nPress 'S' to toggle skybox/skydome",
core::rect<s32>(5,250,235,320), true, true, 0, -1, true);
scene::ICameraSceneNode* camera =
smgr->addCameraSceneNodeFPS(0,100.0f,1.2f);
camera->setPosition(core::vector3df(2700*2,255*2,2600*2));
camera->setTarget(core::vector3df(2397*2,343*2,2700*2));
camera->setFarValue(42000.0f);
device->getCursorControl()->setVisible(false);
Here comes the terrain renderer scene node: we add it to the scene just like any other scene node, using ISceneManager::addTerrainSceneNode(). The only parameter we use is the file name of the heightmap. A heightmap is simply a grayscale texture. The terrain renderer loads it and creates the 3D terrain from it.
+To make the terrain look bigger, we change its scale factor to (40, 4.4, 40). Because we don't have any dynamic lights in the scene, we switch off the lighting, and we set the file terrain-texture.jpg as texture for the terrain and detailmap3.jpg as second texture, called the detail map. Finally, we set the scale values for the textures: the first texture will be repeated only once over the whole terrain, and the second one (the detail map) 20 times.
scene::ITerrainSceneNode* terrain = smgr->addTerrainSceneNode(
"../../media/terrain-heightmap.bmp",
0,
-1,
core::vector3df(0.f, 0.f, 0.f),
core::vector3df(0.f, 0.f, 0.f),
core::vector3df(40.f, 4.4f, 40.f),
video::SColor ( 255, 255, 255, 255 ),
5,
scene::ETPS_17,
4
);
if ( terrain )
{
terrain->setMaterialFlag(video::EMF_LIGHTING, false);
terrain->setMaterialTexture(0,
driver->getTexture("../../media/terrain-texture.jpg"));
terrain->setMaterialTexture(1,
driver->getTexture("../../media/detailmap3.jpg"));
terrain->setMaterialType(video::EMT_DETAIL_MAP);
terrain->scaleTexture(1.0f, 20.0f);
To be able to do collision with the terrain, we create a triangle selector. If you want to know what triangle selectors do, just take a look into the collision tutorial. The terrain triangle selector works together with the terrain. To demonstrate this, we create a collision response animator and attach it to the camera, so that the camera will not be able to fly through the terrain.
scene::ITriangleSelector* selector
= smgr->createTerrainTriangleSelector(terrain, 0);
terrain->setTriangleSelector(selector);
scene::ISceneNodeAnimator* anim = smgr->createCollisionResponseAnimator(
selector, camera, core::vector3df(60,100,60),
core::vector3df(0,0,0),
core::vector3df(0,50,0));
selector->drop();
camera->addAnimator(anim);
anim->drop();
If you need access to the terrain data you can also do this directly via the following code fragment.
scene::CDynamicMeshBuffer* buffer = new scene::CDynamicMeshBuffer(video::EVT_2TCOORDS, video::EIT_16BIT);
terrain->getMeshBufferForLOD(*buffer, 0);
video::S3DVertex2TCoords* data = (video::S3DVertex2TCoords*)buffer->getVertexBuffer().getData();
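// Sketch, not in the original example: with the data pointer you could, for
// instance, read back the height of the first terrain vertex before the
// buffer is dropped:
// const f32 firstHeight = data[0].Pos.Y;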
buffer->drop();
}
To let the user switch between normal and wireframe mode, we create an instance of the event receiver from above and let Irrlicht know about it. In addition, we add the skybox which we already used in lots of Irrlicht examples, and a skydome, which is shown mutually exclusively with the skybox by pressing 'S'.
driver->setTextureCreationFlag(video::ETCF_CREATE_MIP_MAPS, false);
scene::ISceneNode* skybox=smgr->addSkyBoxSceneNode(
driver->getTexture("../../media/irrlicht2_up.jpg"),
driver->getTexture("../../media/irrlicht2_dn.jpg"),
driver->getTexture("../../media/irrlicht2_lf.jpg"),
driver->getTexture("../../media/irrlicht2_rt.jpg"),
driver->getTexture("../../media/irrlicht2_ft.jpg"),
driver->getTexture("../../media/irrlicht2_bk.jpg"));
scene::ISceneNode* skydome=smgr->addSkyDomeSceneNode(driver->getTexture("../../media/skydome.jpg"),16,8,0.95f,2.0f);
driver->setTextureCreationFlag(video::ETCF_CREATE_MIP_MAPS, true);
EventReceiver_terrain receiver( device, terrain, skybox, skydome);
device->setEventReceiver(&receiver);
return run ( device );
}
int example_helloworld()
{
IrrlichtDevice *device = startup();
if (device == 0)
return 1;
IVideoDriver* driver = device->getVideoDriver();
ISceneManager* smgr = device->getSceneManager();
IGUIEnvironment* guienv = device->getGUIEnvironment();
IAnimatedMesh* mesh = smgr->getMesh("../../media/sydney.md2");
if (!mesh)
{
device->drop();
return 1;
}
IAnimatedMeshSceneNode* node = smgr->addAnimatedMeshSceneNode( mesh );
To let the mesh look a little bit nicer, we change its material. We disable lighting because we do not have a dynamic light in here, and the mesh would be totally black otherwise. Then we set the frame loop, such that the predefined STAND animation is used. And last, we apply a texture to the mesh. Without it the mesh would be drawn using only a color.
if (node)
{
node->setMaterialFlag(EMF_LIGHTING, false);
node->setMD2Animation(scene::EMAT_STAND);
node->setMaterialTexture( 0, driver->getTexture("../../media/sydney.bmp") );
}
To look at the mesh, we place a camera into 3d space at the position (0, 30, -40). The camera looks from there to (0,5,0), which is approximately the place where our md2 model is.
smgr->addCameraSceneNode(0, vector3df(0,30,-40), vector3df(0,5,0));
EventReceiver_basic receiver(device);
device->setEventReceiver(&receiver);
return run ( device );
}
#if defined (_IRR_WINDOWS_)
#pragma comment(linker, "/subsystem:windows /ENTRY:mainCRTStartup")
#endif
int main()
{
example_helloworld ();
example_customscenenode();
}
diff --git a/examples/18.SplitScreen/tutorial.html b/examples/18.SplitScreen/tutorial.html
index 4ddcdbdc..0df8a01b 100644
--- a/examples/18.SplitScreen/tutorial.html
+++ b/examples/18.SplitScreen/tutorial.html
@@ -195,11 +195,11 @@ tr.heading h2 {
-
+
A tutorial by Max Winkel.
In this tutorial we'll learn how to use splitscreen (e.g. for racing games) with Irrlicht. We'll create a viewport divided into 4 parts, with 3 fixed cameras and one user-controlled camera.
-
Ok, let's start with the headers (I think there's nothing to say about them)
#include <irrlicht.h>
#include "driverChoice.h"
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
using namespace irr;
using namespace core;
using namespace video;
using namespace scene;
Now we'll define the resolution in constants for use in initializing the device and setting up the viewport. In addition we set up a global variable that says whether splitscreen is active or not.
const int ResX=800;
const int ResY=600;
const bool fullScreen=false;
bool SplitScreen=true;
Now we need four pointers to our cameras which are created later:
ICameraSceneNode *camera[4]={0,0,0,0};
In our event receiver we toggle the SplitScreen variable whenever the user presses the S key. All other events are sent to the FPS camera.
class MyEventReceiver : public IEventReceiver
{
public:
virtual bool OnEvent(const SEvent& event)
{
if (event.EventType == irr::EET_KEY_INPUT_EVENT &&
event.KeyInput.Key == KEY_KEY_S && event.KeyInput.PressedDown)
{
SplitScreen = !SplitScreen;
return true;
}
if (camera[3])
return camera[3]->OnEvent(event);
return false;
}
};
Ok, now the main function: first, we initialize the device, get the SceneManager and VideoDriver, load an animated mesh from .md2 and a map from .pk3. Because that's old stuff, I won't explain every step. Just take care of the map's position.
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
MyEventReceiver receiver;
IrrlichtDevice *device = createDevice(driverType,
dimension2du(ResX,ResY), 32, fullScreen,
false, false, &receiver);
if (!device)
return 1;
ISceneManager *smgr = device->getSceneManager();
IVideoDriver *driver = device->getVideoDriver();
IAnimatedMesh *model = smgr->getMesh("../../media/sydney.md2");
if (!model)
return 1;
IAnimatedMeshSceneNode *model_node = smgr->addAnimatedMeshSceneNode(model);
if (model_node)
{
ITexture *texture = driver->getTexture("../../media/sydney.bmp");
model_node->setMaterialTexture(0,texture);
model_node->setMD2Animation(scene::EMAT_RUN);
model_node->setMaterialFlag(EMF_LIGHTING,false);
}
device->getFileSystem()->addFileArchive("../../media/map-20kdm2.pk3");
IAnimatedMesh *map = smgr->getMesh("20kdm2.bsp");
if (map)
{
ISceneNode *map_node = smgr->addOctreeSceneNode(map->getMesh(0));
map_node->setPosition(vector3df(-850,-220,-850));
}
Now we create our four cameras. One is looking at the model from the front, one from the top and one from the side. In addition there's an FPS camera which can be controlled by the user.
camera[0] = smgr->addCameraSceneNode(0, vector3df(50,0,0), vector3df(0,0,0));
camera[1] = smgr->addCameraSceneNode(0, vector3df(0,50,0), vector3df(0,0,0));
camera[2] = smgr->addCameraSceneNode(0, vector3df(0,0,50), vector3df(0,0,0));
camera[3] = smgr->addCameraSceneNodeFPS();
if (camera[3])
camera[3]->setPosition(core::vector3df(-50,0,-50));
Create a variable for counting the fps and hide the mouse:
device->getCursorControl()->setVisible(false);
int lastFPS = -1;
There wasn't much new stuff till now! Just defining four cameras doesn't make the game splitscreen yet. To do this you need several steps:
+Ok, let's start with the headers (I think there's nothing to say about them)
#include <irrlicht.h>
#include "driverChoice.h"
#include "exampleHelper.h"
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
using namespace irr;
using namespace core;
using namespace video;
using namespace scene;
Now we'll define the resolution in constants for use in initializing the device and setting up the viewport. In addition we set up a global variable that says whether splitscreen is active or not.
const int ResX=800;
const int ResY=600;
const bool fullScreen=false;
bool SplitScreen=true;
Now we need four pointers to our cameras which are created later:
ICameraSceneNode *camera[4]={0,0,0,0};
In our event receiver we toggle the SplitScreen variable whenever the user presses the S key. All other events are sent to the FPS camera.
class MyEventReceiver : public IEventReceiver
{
public:
virtual bool OnEvent(const SEvent& event)
{
if (event.EventType == irr::EET_KEY_INPUT_EVENT &&
event.KeyInput.Key == KEY_KEY_S && event.KeyInput.PressedDown)
{
SplitScreen = !SplitScreen;
return true;
}
if (camera[3])
return camera[3]->OnEvent(event);
return false;
}
};
Ok, now the main function: first, we initialize the device, get the SceneManager and VideoDriver, load an animated mesh from .md2 and a map from .pk3. Because that's old stuff, I won't explain every step. Just take care of the map's position.
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
MyEventReceiver receiver;
IrrlichtDevice *device = createDevice(driverType,
dimension2du(ResX,ResY), 32, fullScreen,
false, false, &receiver);
if (!device)
return 1;
ISceneManager *smgr = device->getSceneManager();
IVideoDriver *driver = device->getVideoDriver();
const io::path mediaPath = getExampleMediaPath();
IAnimatedMesh *model = smgr->getMesh(mediaPath + "sydney.md2");
if (!model)
return 1;
IAnimatedMeshSceneNode *model_node = smgr->addAnimatedMeshSceneNode(model);
if (model_node)
{
ITexture *texture = driver->getTexture(mediaPath + "sydney.bmp");
model_node->setMaterialTexture(0,texture);
model_node->setMD2Animation(scene::EMAT_RUN);
model_node->setMaterialFlag(EMF_LIGHTING,false);
}
device->getFileSystem()->addFileArchive(mediaPath + "map-20kdm2.pk3");
IAnimatedMesh *map = smgr->getMesh("20kdm2.bsp");
if (map)
{
ISceneNode *map_node = smgr->addOctreeSceneNode(map->getMesh(0));
map_node->setPosition(vector3df(-850,-220,-850));
}
Now we create our four cameras. One is looking at the model from the front, one from the top and one from the side. In addition there's an FPS camera which can be controlled by the user.
camera[0] = smgr->addCameraSceneNode(0, vector3df(50,0,0), vector3df(0,0,0));
camera[1] = smgr->addCameraSceneNode(0, vector3df(0,50,0), vector3df(0,0,0));
camera[2] = smgr->addCameraSceneNode(0, vector3df(0,0,50), vector3df(0,0,0));
camera[3] = smgr->addCameraSceneNodeFPS();
if (camera[3])
camera[3]->setPosition(core::vector3df(-50,0,-50));
Create a variable for counting the fps and hide the mouse:
device->getCursorControl()->setVisible(false);
int lastFPS = -1;
There wasn't much new stuff till now! Just defining four cameras doesn't make the game splitscreen yet. To do this you need several steps:
- Set the viewport to the whole screen
- Begin a new scene (Clear screen)
- The following 3 steps are repeated for every viewport in the splitscreen
@@ -215,7 +215,7 @@ tr.heading h2 {
- End scene
-Sounds a little complicated, but you'll see it isn't:
while(device->run())
{
driver->setViewPort(rect<s32>(0,0,ResX,ResY));
driver->beginScene(true,true,SColor(255,100,100,100));
if (SplitScreen)
{
smgr->setActiveCamera(camera[0]);
driver->setViewPort(rect<s32>(0,0,ResX/2,ResY/2));
smgr->drawAll();
smgr->setActiveCamera(camera[1]);
driver->setViewPort(rect<s32>(ResX/2,0,ResX,ResY/2));
smgr->drawAll();
smgr->setActiveCamera(camera[2]);
driver->setViewPort(rect<s32>(0,ResY/2,ResX/2,ResY));
smgr->drawAll();
driver->setViewPort(rect<s32>(ResX/2,ResY/2,ResX,ResY));
}
smgr->setActiveCamera(camera[3]);
smgr->drawAll();
driver->endScene();
As you can probably see, the image is rendered separately for every viewport. That means that you'll lose a lot of performance. Ok, if you're asking "How do I have to set the viewport
+
Sounds a little complicated, but you'll see it isn't:
while(device->run())
{
driver->setViewPort(rect<s32>(0,0,ResX,ResY));
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, SColor(255,100,100,100));
if (SplitScreen)
{
smgr->setActiveCamera(camera[0]);
driver->setViewPort(rect<s32>(0,0,ResX/2,ResY/2));
smgr->drawAll();
smgr->setActiveCamera(camera[1]);
driver->setViewPort(rect<s32>(ResX/2,0,ResX,ResY/2));
smgr->drawAll();
smgr->setActiveCamera(camera[2]);
driver->setViewPort(rect<s32>(0,ResY/2,ResX/2,ResY));
smgr->drawAll();
driver->setViewPort(rect<s32>(ResX/2,ResY/2,ResX,ResY));
}
smgr->setActiveCamera(camera[3]);
smgr->drawAll();
driver->endScene();
As you can probably see, the image is rendered separately for every viewport. That means that you'll lose a lot of performance. Ok, if you're asking "How do I have to set the viewport
to get this or that screen?", don't panic. It's really easy: in the rect function you define 4 coordinates (a short example follows the list below):
- X-coordinate of the corner left top
- Y-coordinate of the corner left top
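For instance, a minimal sketch (reusing the ResX/ResY constants from above): selecting the top-right quarter of the screen looks like this:
driver->setViewPort(rect<s32>(ResX/2, 0, ResX, ResY/2));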
diff --git a/examples/19.MouseAndJoystick/tutorial.html b/examples/19.MouseAndJoystick/tutorial.html
index fccc53d7..f96a47a1 100644
--- a/examples/19.MouseAndJoystick/tutorial.html
+++ b/examples/19.MouseAndJoystick/tutorial.html
@@ -195,9 +195,9 @@ tr.heading h2 {
-
+
-
This tutorial builds on example 04.Movement which showed how to handle keyboard events in Irrlicht. Here we'll handle mouse events and joystick events, if you have a joystick connected and a device that supports joysticks. These are currently Windows, Linux and SDL devices.
#ifdef _MSC_VER
#define _CRT_SECURE_NO_WARNINGS
#pragma comment(lib, "Irrlicht.lib")
#endif
#include <irrlicht.h>
#include "driverChoice.h"
using namespace irr;
Just as we did in example 04.Movement, we'll store the latest state of the mouse and the first joystick, updating them as we receive events.
class MyEventReceiver : public IEventReceiver
{
public:
struct SMouseState
{
core::position2di Position;
bool LeftButtonDown;
SMouseState() : LeftButtonDown(false) { }
} MouseState;
virtual bool OnEvent(const SEvent& event)
{
if (event.EventType == irr::EET_MOUSE_INPUT_EVENT)
{
switch(event.MouseInput.Event)
{
case EMIE_LMOUSE_PRESSED_DOWN:
MouseState.LeftButtonDown = true;
break;
case EMIE_LMOUSE_LEFT_UP:
MouseState.LeftButtonDown = false;
break;
case EMIE_MOUSE_MOVED:
MouseState.Position.X = event.MouseInput.X;
MouseState.Position.Y = event.MouseInput.Y;
break;
default:
break;
}
}
if (event.EventType == irr::EET_JOYSTICK_INPUT_EVENT
&& event.JoystickEvent.Joystick == 0)
{
JoystickState = event.JoystickEvent;
}
return false;
}
const SEvent::SJoystickEvent & GetJoystickState(void) const
{
return JoystickState;
}
const SMouseState & GetMouseState(void) const
{
return MouseState;
}
MyEventReceiver()
{
}
private:
SEvent::SJoystickEvent JoystickState;
};
The event receiver for keeping the pressed keys is ready; the actual responses will be made inside the render loop, right before drawing the scene. So let's just create an irr::IrrlichtDevice and the scene node we want to move. We also create some additional scene nodes to show that there are different possibilities to move and animate scene nodes.
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
MyEventReceiver receiver;
IrrlichtDevice* device = createDevice(driverType,
core::dimension2d<u32>(640, 480), 16, false, false, false, &receiver);
if (device == 0)
return 1;
core::array<SJoystickInfo> joystickInfo;
if(device->activateJoysticks(joystickInfo))
{
std::cout << "Joystick support is enabled and " << joystickInfo.size() << " joystick(s) are present." << std::endl;
for(u32 joystick = 0; joystick < joystickInfo.size(); ++joystick)
{
std::cout << "Joystick " << joystick << ":" << std::endl;
std::cout << "\tName: '" << joystickInfo[joystick].Name.c_str() << "'" << std::endl;
std::cout << "\tAxes: " << joystickInfo[joystick].Axes << std::endl;
std::cout << "\tButtons: " << joystickInfo[joystick].Buttons << std::endl;
std::cout << "\tHat is: ";
switch(joystickInfo[joystick].PovHat)
{
case SJoystickInfo::POV_HAT_PRESENT:
std::cout << "present" << std::endl;
break;
case SJoystickInfo::POV_HAT_ABSENT:
std::cout << "absent" << std::endl;
break;
case SJoystickInfo::POV_HAT_UNKNOWN:
default:
std::cout << "unknown" << std::endl;
break;
}
}
}
else
{
std::cout << "Joystick support is not enabled." << std::endl;
}
core::stringw tmp = L"Irrlicht Joystick Example (";
tmp += joystickInfo.size();
tmp += " joysticks)";
device->setWindowCaption(tmp.c_str());
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
We'll create an arrow mesh and move it around either with the joystick axis/hat, or make it follow the mouse pointer.
scene::ISceneNode * node = smgr->addMeshSceneNode(
smgr->addArrowMesh( "Arrow",
video::SColor(255, 255, 0, 0),
video::SColor(255, 0, 255, 0),
16,16,
2.f, 1.3f,
0.1f, 0.6f
)
);
node->setMaterialFlag(video::EMF_LIGHTING, false);
scene::ICameraSceneNode * camera = smgr->addCameraSceneNode();
camera->setPosition(core::vector3df(0, 0, -10));
u32 then = device->getTimer()->getTime();
const f32 MOVEMENT_SPEED = 5.f;
while(device->run())
{
const u32 now = device->getTimer()->getTime();
const f32 frameDeltaTime = (f32)(now - then) / 1000.f;
then = now;
bool movedWithJoystick = false;
core::vector3df nodePosition = node->getPosition();
if(joystickInfo.size() > 0)
{
f32 moveHorizontal = 0.f;
f32 moveVertical = 0.f;
const SEvent::SJoystickEvent & joystickData = receiver.GetJoystickState();
const f32 DEAD_ZONE = 0.05f;
moveHorizontal =
(f32)joystickData.Axis[SEvent::SJoystickEvent::AXIS_X] / 32767.f;
if(fabs(moveHorizontal) < DEAD_ZONE)
moveHorizontal = 0.f;
moveVertical =
(f32)joystickData.Axis[SEvent::SJoystickEvent::AXIS_Y] / -32767.f;
if(fabs(moveVertical) < DEAD_ZONE)
moveVertical = 0.f;
const u16 povDegrees = joystickData.POV / 100;
if(povDegrees < 360)
{
if(povDegrees > 0 && povDegrees < 180)
moveHorizontal = 1.f;
else if(povDegrees > 180)
moveHorizontal = -1.f;
if(povDegrees > 90 && povDegrees < 270)
moveVertical = -1.f;
else if(povDegrees > 270 || povDegrees < 90)
moveVertical = +1.f;
}
if(!core::equals(moveHorizontal, 0.f) || !core::equals(moveVertical, 0.f))
{
nodePosition.X += MOVEMENT_SPEED * frameDeltaTime * moveHorizontal;
nodePosition.Y += MOVEMENT_SPEED * frameDeltaTime * moveVertical;
movedWithJoystick = true;
}
}
if(!movedWithJoystick)
{
core::line3df ray = smgr->getSceneCollisionManager()->getRayFromScreenCoordinates(
receiver.GetMouseState().Position, camera);
core::plane3df plane(nodePosition, core::vector3df(0, 0, -1));
core::vector3df mousePosition;
if(plane.getIntersectionWithLine(ray.start, ray.getVector(), mousePosition))
{
core::vector3df toMousePosition(mousePosition - nodePosition);
const f32 availableMovement = MOVEMENT_SPEED * frameDeltaTime;
if(toMousePosition.getLength() <= availableMovement)
nodePosition = mousePosition;
else
nodePosition += toMousePosition.normalize() * availableMovement;
}
}
node->setPosition(nodePosition);
node->setMaterialFlag(video::EMF_LIGHTING, receiver.GetMouseState().LeftButtonDown);
driver->beginScene(true, true, video::SColor(255,113,113,133));
smgr->drawAll();
driver->endScene();
}
In the end, delete the Irrlicht device.
device->drop();
return 0;
}
+
This tutorial builds on example 04.Movement which showed how to handle keyboard events in Irrlicht. Here we'll handle mouse events and joystick events, if you have a joystick connected and a device that supports joysticks. These are currently Windows, Linux and SDL devices.
#ifdef _MSC_VER
#define _CRT_SECURE_NO_WARNINGS
#pragma comment(lib, "Irrlicht.lib")
#endif
#include <irrlicht.h>
#include "driverChoice.h"
using namespace irr;
Just as we did in example 04.Movement, we'll store the latest state of the mouse and the first joystick, updating them as we receive events.
class MyEventReceiver : public IEventReceiver
{
public:
struct SMouseState
{
core::position2di Position;
bool LeftButtonDown;
SMouseState() : LeftButtonDown(false) { }
} MouseState;
virtual bool OnEvent(const SEvent& event)
{
if (event.EventType == irr::EET_MOUSE_INPUT_EVENT)
{
switch(event.MouseInput.Event)
{
case EMIE_LMOUSE_PRESSED_DOWN:
MouseState.LeftButtonDown = true;
break;
case EMIE_LMOUSE_LEFT_UP:
MouseState.LeftButtonDown = false;
break;
case EMIE_MOUSE_MOVED:
MouseState.Position.X = event.MouseInput.X;
MouseState.Position.Y = event.MouseInput.Y;
break;
default:
break;
}
}
if (event.EventType == irr::EET_JOYSTICK_INPUT_EVENT
&& event.JoystickEvent.Joystick == 0)
{
JoystickState = event.JoystickEvent;
}
return false;
}
const SEvent::SJoystickEvent & GetJoystickState(void) const
{
return JoystickState;
}
const SMouseState & GetMouseState(void) const
{
return MouseState;
}
MyEventReceiver()
{
}
private:
SEvent::SJoystickEvent JoystickState;
};
The event receiver for keeping track of the input state is ready; the actual responses will be made inside the render loop, right before drawing the scene. So let's just create an irr::IrrlichtDevice and the scene node we want to move. We also create some additional scene nodes to show different possibilities for moving and animating scene nodes.
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
MyEventReceiver receiver;
IrrlichtDevice* device = createDevice(driverType,
core::dimension2d<u32>(640, 480), 16, false, false, false, &receiver);
if (device == 0)
return 1;
core::array<SJoystickInfo> joystickInfo;
if(device->activateJoysticks(joystickInfo))
{
std::cout << "Joystick support is enabled and " << joystickInfo.size() << " joystick(s) are present." << std::endl;
for(u32 joystick = 0; joystick < joystickInfo.size(); ++joystick)
{
std::cout << "Joystick " << joystick << ":" << std::endl;
std::cout << "\tName: '" << joystickInfo[joystick].Name.c_str() << "'" << std::endl;
std::cout << "\tAxes: " << joystickInfo[joystick].Axes << std::endl;
std::cout << "\tButtons: " << joystickInfo[joystick].Buttons << std::endl;
std::cout << "\tHat is: ";
switch(joystickInfo[joystick].PovHat)
{
case SJoystickInfo::POV_HAT_PRESENT:
std::cout << "present" << std::endl;
break;
case SJoystickInfo::POV_HAT_ABSENT:
std::cout << "absent" << std::endl;
break;
case SJoystickInfo::POV_HAT_UNKNOWN:
default:
std::cout << "unknown" << std::endl;
break;
}
}
}
else
{
std::cout << "Joystick support is not enabled." << std::endl;
}
core::stringw tmp = L"Irrlicht Joystick Example (";
tmp += joystickInfo.size();
tmp += " joysticks)";
device->setWindowCaption(tmp.c_str());
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
We'll create an arrow mesh and move it around either with the joystick axis/hat, or make it follow the mouse pointer.
scene::ISceneNode * node = smgr->addMeshSceneNode(
smgr->addArrowMesh( "Arrow",
video::SColor(255, 255, 0, 0),
video::SColor(255, 0, 255, 0),
16,16,
2.f, 1.3f,
0.1f, 0.6f
)
);
node->setMaterialFlag(video::EMF_LIGHTING, false);
scene::ICameraSceneNode * camera = smgr->addCameraSceneNode();
camera->setPosition(core::vector3df(0, 0, -10));
u32 then = device->getTimer()->getTime();
const f32 MOVEMENT_SPEED = 5.f;
while(device->run())
{
const u32 now = device->getTimer()->getTime();
const f32 frameDeltaTime = (f32)(now - then) / 1000.f;
then = now;
bool movedWithJoystick = false;
core::vector3df nodePosition = node->getPosition();
if(joystickInfo.size() > 0)
{
f32 moveHorizontal = 0.f;
f32 moveVertical = 0.f;
const SEvent::SJoystickEvent & joystickData = receiver.GetJoystickState();
const f32 DEAD_ZONE = 0.05f;
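// Axis values arrive as signed 16-bit numbers (-32768..32767); scale them to roughly -1..1 and ignore anything inside the dead zone.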
moveHorizontal =
(f32)joystickData.Axis[SEvent::SJoystickEvent::AXIS_X] / 32767.f;
if(fabs(moveHorizontal) < DEAD_ZONE)
moveHorizontal = 0.f;
moveVertical =
(f32)joystickData.Axis[SEvent::SJoystickEvent::AXIS_Y] / -32767.f;
if(fabs(moveVertical) < DEAD_ZONE)
moveVertical = 0.f;
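// The POV hat reports hundredths of a degree, with 65535 meaning 'centred'; after dividing by 100, anything below 360 is a real direction.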
const u16 povDegrees = joystickData.POV / 100;
if(povDegrees < 360)
{
if(povDegrees > 0 && povDegrees < 180)
moveHorizontal = 1.f;
else if(povDegrees > 180)
moveHorizontal = -1.f;
if(povDegrees > 90 && povDegrees < 270)
moveVertical = -1.f;
else if(povDegrees > 270 || povDegrees < 90)
moveVertical = +1.f;
}
if(!core::equals(moveHorizontal, 0.f) || !core::equals(moveVertical, 0.f))
{
nodePosition.X += MOVEMENT_SPEED * frameDeltaTime * moveHorizontal;
nodePosition.Y += MOVEMENT_SPEED * frameDeltaTime * moveVertical;
movedWithJoystick = true;
}
}
if(!movedWithJoystick)
{
core::line3df ray = smgr->getSceneCollisionManager()->getRayFromScreenCoordinates(
receiver.GetMouseState().Position, camera);
core::plane3df plane(nodePosition, core::vector3df(0, 0, -1));
core::vector3df mousePosition;
if(plane.getIntersectionWithLine(ray.start, ray.getVector(), mousePosition))
{
core::vector3df toMousePosition(mousePosition - nodePosition);
const f32 availableMovement = MOVEMENT_SPEED * frameDeltaTime;
if(toMousePosition.getLength() <= availableMovement)
nodePosition = mousePosition;
else
nodePosition += toMousePosition.normalize() * availableMovement;
}
}
node->setPosition(nodePosition);
node->setMaterialFlag(video::EMF_LIGHTING, receiver.GetMouseState().LeftButtonDown);
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, video::SColor(255,113,113,133));
smgr->drawAll();
driver->endScene();
}
In the end, delete the Irrlicht device.
device->drop();
return 0;
}
diff --git a/examples/20.ManagedLights/tutorial.html b/examples/20.ManagedLights/tutorial.html
index b7d3cd60..4bf4b290 100644
--- a/examples/20.ManagedLights/tutorial.html
+++ b/examples/20.ManagedLights/tutorial.html
@@ -195,14 +195,14 @@ tr.heading h2 {
-
+
-
Written by Colin MacDonald. This tutorial explains the use of the Light Manager of Irrlicht. It enables the use of more dynamic light sources than the actual hardware supports. Further applications of the Light Manager, such as per scene node callbacks, are left out for simplicity of the example.
#include <irrlicht.h>
#include "driverChoice.h"
using namespace irr;
using namespace core;
#if defined(_MSC_VER)
#pragma comment(lib, "Irrlicht.lib")
#endif // MSC_VER
Normally, you are limited to 8 dynamic lights per scene: this is a hardware limit. If you want to use more dynamic lights in your scene, you can register an optional light manager that allows you to turn lights on and off at specific points during rendering. You are still limited to 8 lights, but the limit is per scene node.
+
Written by Colin MacDonald. This tutorial explains the use of the Light Manager of Irrlicht. It enables the use of more dynamic light sources than the actual hardware supports. Further applications of the Light Manager, such as per scene node callbacks, are left out for simplicity of the example.
#include <irrlicht.h>
#include "driverChoice.h"
#include "exampleHelper.h"
using namespace irr;
using namespace core;
#if defined(_MSC_VER)
#pragma comment(lib, "Irrlicht.lib")
#endif // MSC_VER
Normally, you are limited to 8 dynamic lights per scene: this is a hardware limit. If you want to use more dynamic lights in your scene, you can register an optional light manager that allows you to turn lights on and off at specific points during rendering. You are still limited to 8 lights, but the limit is per scene node.
This is completely optional: if you do not register a light manager, then a default distance-based scheme will be used to prioritise hardware lights based on their distance from the active camera.
NO_MANAGEMENT disables the light manager and shows Irrlicht's default light behaviour. The 8 lights nearest to the camera will be turned on, and other lights will be turned off. In this example, this produces a funky looking but incoherent light display.
LIGHTS_NEAREST_NODE shows an implementation that turns on a limited number of lights per mesh scene node. It finds the 3 lights that are nearest to the node being rendered and turns them on, turning all other lights off. This works, but as it operates on every light for every node, it does not scale well with many lights. The flickering you can see in this demo is due to the lights swapping their relative positions from the cubes (a deliberate demonstration of the limitations of this technique).
LIGHTS_IN_ZONE shows a technique for turning on lights based on a 'zone'. Each empty scene node is considered to be the parent of a zone. When nodes are rendered, they turn off all lights, then find their parent 'zone' and turn on all lights that are inside that zone, i.e. are descendants of it in the scene graph. This produces true 'local' lighting for each cube in this example. You could use a similar technique to locally light all meshes in (e.g.) a room, without the lights spilling out to other rooms.
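For orientation, registering such a manager is a single call on the scene manager. The following is only a usage sketch (it reuses the CMyLightManager class defined below, exactly as the full example does further down):
// Hypothetical usage sketch - not part of the original example flow.
CMyLightManager* lightManager = new CMyLightManager(smgr);
smgr->setLightManager(lightManager); // the manager's callbacks now decide which lights are visible
// ... run the render loop for a while ...
smgr->setLightManager(0); // back to Irrlicht's default distance-based selection
lightManager->drop(); // release our reference once we are done with it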
-
This light manager is also an event receiver; this is purely for simplicity in this example and is neither necessary nor recommended for a real application.
class CMyLightManager : public scene::ILightManager, public IEventReceiver
{
typedef enum
{
NO_MANAGEMENT,
LIGHTS_NEAREST_NODE,
LIGHTS_IN_ZONE
}
LightManagementMode;
LightManagementMode Mode;
LightManagementMode RequestedMode;
scene::ISceneManager * SceneManager;
core::array<scene::ISceneNode*> * SceneLightList;
scene::E_SCENE_NODE_RENDER_PASS CurrentRenderPass;
scene::ISceneNode * CurrentSceneNode;
public:
CMyLightManager(scene::ISceneManager* sceneManager)
: Mode(NO_MANAGEMENT), RequestedMode(NO_MANAGEMENT),
SceneManager(sceneManager), SceneLightList(0),
CurrentRenderPass(scene::ESNRP_NONE), CurrentSceneNode(0)
{ }
bool OnEvent(const SEvent & event)
{
bool handled = false;
if (event.EventType == irr::EET_KEY_INPUT_EVENT && event.KeyInput.PressedDown)
{
handled = true;
switch(event.KeyInput.Key)
{
case irr::KEY_KEY_1:
RequestedMode = NO_MANAGEMENT;
break;
case irr::KEY_KEY_2:
RequestedMode = LIGHTS_NEAREST_NODE;
break;
case irr::KEY_KEY_3:
RequestedMode = LIGHTS_IN_ZONE;
break;
default:
handled = false;
break;
}
if(NO_MANAGEMENT == RequestedMode)
SceneManager->setLightManager(0);
else
SceneManager->setLightManager(this);
}
return handled;
}
virtual void OnPreRender(core::array<scene::ISceneNode*> & lightList)
{
Mode = RequestedMode;
SceneLightList = &lightList;
}
virtual void OnPostRender()
{
for (u32 i = 0; i < SceneLightList->size(); i++)
(*SceneLightList)[i]->setVisible(true);
}
virtual void OnRenderPassPreRender(scene::E_SCENE_NODE_RENDER_PASS renderPass)
{
CurrentRenderPass = renderPass;
}
virtual void OnRenderPassPostRender(scene::E_SCENE_NODE_RENDER_PASS renderPass)
{
if (scene::ESNRP_SOLID == renderPass)
{
for (u32 i = 0; i < SceneLightList->size(); ++i)
(*SceneLightList)[i]->setVisible(false);
}
}
virtual void OnNodePreRender(scene::ISceneNode* node)
{
CurrentSceneNode = node;
if (scene::ESNRP_SOLID != CurrentRenderPass)
return;
if (node->getType() != scene::ESNT_CUBE)
return;
if (LIGHTS_NEAREST_NODE == Mode)
{
const vector3df nodePosition = node->getAbsolutePosition();
array<LightDistanceElement> sortingArray;
sortingArray.reallocate(SceneLightList->size());
u32 i;
for(i = 0; i < SceneLightList->size(); ++i)
{
scene::ISceneNode* lightNode = (*SceneLightList)[i];
const f64 distance = lightNode->getAbsolutePosition().getDistanceFromSQ(nodePosition);
sortingArray.push_back(LightDistanceElement(lightNode, distance));
}
sortingArray.sort();
for(i = 0; i < sortingArray.size(); ++i)
sortingArray[i].node->setVisible(i < 3);
}
else if(LIGHTS_IN_ZONE == Mode)
{
for (u32 i = 0; i < SceneLightList->size(); ++i)
{
if ((*SceneLightList)[i]->getType() != scene::ESNT_LIGHT)
continue;
scene::ILightSceneNode* lightNode = static_cast<scene::ILightSceneNode*>((*SceneLightList)[i]);
video::SLight & lightData = lightNode->getLightData();
if (video::ELT_DIRECTIONAL != lightData.Type)
lightNode->setVisible(false);
}
scene::ISceneNode * parentZone = findZone(node);
if (parentZone)
turnOnZoneLights(parentZone);
}
}
virtual void OnNodePostRender(scene::ISceneNode* node)
{
}
private:
scene::ISceneNode * findZone(scene::ISceneNode * node)
{
if (!node)
return 0;
if (node->getType() == scene::ESNT_EMPTY)
return node;
return findZone(node->getParent());
}
void turnOnZoneLights(scene::ISceneNode * node)
{
core::list<scene::ISceneNode*> const & children = node->getChildren();
for (core::list<scene::ISceneNode*>::ConstIterator child = children.begin();
child != children.end(); ++child)
{
if ((*child)->getType() == scene::ESNT_LIGHT)
(*child)->setVisible(true);
else
turnOnZoneLights(*child);
}
}
class LightDistanceElement
{
public:
LightDistanceElement() {};
LightDistanceElement(scene::ISceneNode* n, f64 d)
: node(n), distance(d) { }
scene::ISceneNode* node;
f64 distance;
bool operator < (const LightDistanceElement& other) const
{
return (distance < other.distance);
}
};
};
int main(int argumentCount, char * argumentValues[])
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
IrrlichtDevice *device = createDevice(driverType,
dimension2d<u32>(640, 480), 32);
if(!device)
return -1;
f32 const lightRadius = 60.f;
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
gui::IGUIEnvironment* guienv = device->getGUIEnvironment();
gui::IGUISkin* skin = guienv->getSkin();
if (skin)
{
skin->setColor(gui::EGDC_BUTTON_TEXT, video::SColor(255, 255, 255, 255));
gui::IGUIFont* font = guienv->getFont("../../media/fontlucida.png");
if(font)
skin->setFont(font);
}
guienv->addStaticText(L"1 - No light management", core::rect<s32>(10,10,200,30));
guienv->addStaticText(L"2 - Closest 3 lights", core::rect<s32>(10,30,200,50));
guienv->addStaticText(L"3 - Lights in zone", core::rect<s32>(10,50,200,70));
Add several "zones". You could use this technique to light individual rooms, for example.
for(f32 zoneX = -100.f; zoneX <= 100.f; zoneX += 50.f)
for(f32 zoneY = -60.f; zoneY <= 60.f; zoneY += 60.f)
{
scene::ISceneNode * zoneRoot = smgr->addEmptySceneNode();
zoneRoot->setPosition(vector3df(zoneX, zoneY, 0));
scene::IMeshSceneNode * node = smgr->addCubeSceneNode(15, zoneRoot);
scene::ISceneNodeAnimator * rotation = smgr->createRotationAnimator(vector3df(0.25f, 0.5f, 0.75f));
node->addAnimator(rotation);
rotation->drop();
scene::IBillboardSceneNode * billboard = smgr->addBillboardSceneNode(node);
billboard->setPosition(vector3df(0, -14, 30));
billboard->setMaterialType(video::EMT_TRANSPARENT_ADD_COLOR );
billboard->setMaterialTexture(0, driver->getTexture("../../media/particle.bmp"));
billboard->setMaterialFlag(video::EMF_LIGHTING, false);
smgr->addLightSceneNode(billboard, vector3df(0, 0, 0), video::SColorf(1, 0, 0), lightRadius);
billboard = smgr->addBillboardSceneNode(node);
billboard->setPosition(vector3df(-21, -14, -21));
billboard->setMaterialType(video::EMT_TRANSPARENT_ADD_COLOR );
billboard->setMaterialTexture(0, driver->getTexture("../../media/particle.bmp"));
billboard->setMaterialFlag(video::EMF_LIGHTING, false);
smgr->addLightSceneNode(billboard, vector3df(0, 0, 0), video::SColorf(0, 1, 0), lightRadius);
billboard = smgr->addBillboardSceneNode(node);
billboard->setPosition(vector3df(21, -14, -21));
billboard->setMaterialType(video::EMT_TRANSPARENT_ADD_COLOR );
billboard->setMaterialTexture(0, driver->getTexture("../../media/particle.bmp"));
billboard->setMaterialFlag(video::EMF_LIGHTING, false);
smgr->addLightSceneNode(billboard, vector3df(0, 0, 0), video::SColorf(0, 0, 1), lightRadius);
node = smgr->addCubeSceneNode(5, node);
node->setPosition(vector3df(0, 21, 0));
}
smgr->addCameraSceneNode(0, vector3df(0,0,-130), vector3df(0,0,0));
CMyLightManager * myLightManager = new CMyLightManager(smgr);
smgr->setLightManager(0);
device->setEventReceiver(myLightManager);
int lastFps = -1;
while(device->run())
{
driver->beginScene(true, true, video::SColor(255,100,101,140));
smgr->drawAll();
guienv->drawAll();
driver->endScene();
int fps = driver->getFPS();
if(fps != lastFps)
{
lastFps = fps;
core::stringw str = L"Managed Lights [";
str += driver->getName();
str += "] FPS:";
str += fps;
device->setWindowCaption(str.c_str());
}
}
myLightManager->drop();
device->drop();
return 0;
}
+This light manager is also an event receiver; this is purely for simplicity in this example and is neither necessary nor recommended for a real application.
class CMyLightManager : public scene::ILightManager, public IEventReceiver
{
typedef enum
{
NO_MANAGEMENT,
LIGHTS_NEAREST_NODE,
LIGHTS_IN_ZONE
}
LightManagementMode;
LightManagementMode Mode;
LightManagementMode RequestedMode;
scene::ISceneManager * SceneManager;
core::array<scene::ISceneNode*> * SceneLightList;
scene::E_SCENE_NODE_RENDER_PASS CurrentRenderPass;
scene::ISceneNode * CurrentSceneNode;
public:
CMyLightManager(scene::ISceneManager* sceneManager)
: Mode(NO_MANAGEMENT), RequestedMode(NO_MANAGEMENT),
SceneManager(sceneManager), SceneLightList(0),
CurrentRenderPass(scene::ESNRP_NONE), CurrentSceneNode(0)
{ }
bool OnEvent(const SEvent & event)
{
bool handled = false;
if (event.EventType == irr::EET_KEY_INPUT_EVENT && event.KeyInput.PressedDown)
{
handled = true;
switch(event.KeyInput.Key)
{
case irr::KEY_KEY_1:
RequestedMode = NO_MANAGEMENT;
break;
case irr::KEY_KEY_2:
RequestedMode = LIGHTS_NEAREST_NODE;
break;
case irr::KEY_KEY_3:
RequestedMode = LIGHTS_IN_ZONE;
break;
default:
handled = false;
break;
}
if(NO_MANAGEMENT == RequestedMode)
SceneManager->setLightManager(0);
else
SceneManager->setLightManager(this);
}
return handled;
}
virtual void OnPreRender(core::array<scene::ISceneNode*> & lightList)
{
Mode = RequestedMode;
SceneLightList = &lightList;
}
virtual void OnPostRender()
{
for (u32 i = 0; i < SceneLightList->size(); i++)
(*SceneLightList)[i]->setVisible(true);
}
virtual void OnRenderPassPreRender(scene::E_SCENE_NODE_RENDER_PASS renderPass)
{
CurrentRenderPass = renderPass;
}
virtual void OnRenderPassPostRender(scene::E_SCENE_NODE_RENDER_PASS renderPass)
{
if (scene::ESNRP_SOLID == renderPass)
{
for (u32 i = 0; i < SceneLightList->size(); ++i)
(*SceneLightList)[i]->setVisible(false);
}
}
virtual void OnNodePreRender(scene::ISceneNode* node)
{
CurrentSceneNode = node;
if (scene::ESNRP_SOLID != CurrentRenderPass)
return;
if (node->getType() != scene::ESNT_CUBE)
return;
if (LIGHTS_NEAREST_NODE == Mode)
{
const vector3df nodePosition = node->getAbsolutePosition();
array<LightDistanceElement> sortingArray;
sortingArray.reallocate(SceneLightList->size());
u32 i;
for(i = 0; i < SceneLightList->size(); ++i)
{
scene::ISceneNode* lightNode = (*SceneLightList)[i];
const f64 distance = lightNode->getAbsolutePosition().getDistanceFromSQ(nodePosition);
sortingArray.push_back(LightDistanceElement(lightNode, distance));
}
sortingArray.sort();
for(i = 0; i < sortingArray.size(); ++i)
sortingArray[i].node->setVisible(i < 3);
}
else if(LIGHTS_IN_ZONE == Mode)
{
for (u32 i = 0; i < SceneLightList->size(); ++i)
{
if ((*SceneLightList)[i]->getType() != scene::ESNT_LIGHT)
continue;
scene::ILightSceneNode* lightNode = static_cast<scene::ILightSceneNode*>((*SceneLightList)[i]);
video::SLight & lightData = lightNode->getLightData();
if (video::ELT_DIRECTIONAL != lightData.Type)
lightNode->setVisible(false);
}
scene::ISceneNode * parentZone = findZone(node);
if (parentZone)
turnOnZoneLights(parentZone);
}
}
virtual void OnNodePostRender(scene::ISceneNode* node)
{
}
private:
scene::ISceneNode * findZone(scene::ISceneNode * node)
{
if (!node)
return 0;
if (node->getType() == scene::ESNT_EMPTY)
return node;
return findZone(node->getParent());
}
void turnOnZoneLights(scene::ISceneNode * node)
{
core::list<scene::ISceneNode*> const & children = node->getChildren();
for (core::list<scene::ISceneNode*>::ConstIterator child = children.begin();
child != children.end(); ++child)
{
if ((*child)->getType() == scene::ESNT_LIGHT)
(*child)->setVisible(true);
else
turnOnZoneLights(*child);
}
}
class LightDistanceElement
{
public:
LightDistanceElement() {};
LightDistanceElement(scene::ISceneNode* n, f64 d)
: node(n), distance(d) { }
scene::ISceneNode* node;
f64 distance;
bool operator < (const LightDistanceElement& other) const
{
return (distance < other.distance);
}
};
};
int main(int argumentCount, char * argumentValues[])
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
IrrlichtDevice *device = createDevice(driverType,
dimension2d<u32>(640, 480), 32);
if(!device)
return -1;
f32 const lightRadius = 60.f;
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
gui::IGUIEnvironment* guienv = device->getGUIEnvironment();
const io::path mediaPath = getExampleMediaPath();
gui::IGUISkin* skin = guienv->getSkin();
if (skin)
{
skin->setColor(gui::EGDC_BUTTON_TEXT, video::SColor(255, 255, 255, 255));
gui::IGUIFont* font = guienv->getFont(mediaPath + "fontlucida.png");
if(font)
skin->setFont(font);
}
guienv->addStaticText(L"1 - No light management", core::rect<s32>(10,10,200,30));
guienv->addStaticText(L"2 - Closest 3 lights", core::rect<s32>(10,30,200,50));
guienv->addStaticText(L"3 - Lights in zone", core::rect<s32>(10,50,200,70));
Add several "zones". You could use this technique to light individual rooms, for example.
for(f32 zoneX = -100.f; zoneX <= 100.f; zoneX += 50.f)
for(f32 zoneY = -60.f; zoneY <= 60.f; zoneY += 60.f)
{
scene::ISceneNode * zoneRoot = smgr->addEmptySceneNode();
zoneRoot->setPosition(vector3df(zoneX, zoneY, 0));
scene::IMeshSceneNode * node = smgr->addCubeSceneNode(15, zoneRoot);
scene::ISceneNodeAnimator * rotation = smgr->createRotationAnimator(vector3df(0.25f, 0.5f, 0.75f));
node->addAnimator(rotation);
rotation->drop();
scene::IBillboardSceneNode * billboard = smgr->addBillboardSceneNode(node);
billboard->setPosition(vector3df(0, -14, 30));
billboard->setMaterialType(video::EMT_TRANSPARENT_ADD_COLOR );
billboard->setMaterialTexture(0, driver->getTexture(mediaPath + "particle.bmp"));
billboard->setMaterialFlag(video::EMF_LIGHTING, false);
smgr->addLightSceneNode(billboard, vector3df(0, 0, 0), video::SColorf(1, 0, 0), lightRadius);
billboard = smgr->addBillboardSceneNode(node);
billboard->setPosition(vector3df(-21, -14, -21));
billboard->setMaterialType(video::EMT_TRANSPARENT_ADD_COLOR );
billboard->setMaterialTexture(0, driver->getTexture(mediaPath + "particle.bmp"));
billboard->setMaterialFlag(video::EMF_LIGHTING, false);
smgr->addLightSceneNode(billboard, vector3df(0, 0, 0), video::SColorf(0, 1, 0), lightRadius);
billboard = smgr->addBillboardSceneNode(node);
billboard->setPosition(vector3df(21, -14, -21));
billboard->setMaterialType(video::EMT_TRANSPARENT_ADD_COLOR );
billboard->setMaterialTexture(0, driver->getTexture(mediaPath + "particle.bmp"));
billboard->setMaterialFlag(video::EMF_LIGHTING, false);
smgr->addLightSceneNode(billboard, vector3df(0, 0, 0), video::SColorf(0, 0, 1), lightRadius);
node = smgr->addCubeSceneNode(5, node);
node->setPosition(vector3df(0, 21, 0));
}
smgr->addCameraSceneNode(0, vector3df(0,0,-130), vector3df(0,0,0));
CMyLightManager * myLightManager = new CMyLightManager(smgr);
smgr->setLightManager(0);
device->setEventReceiver(myLightManager);
int lastFps = -1;
while(device->run())
{
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, video::SColor(255,100,101,140));
smgr->drawAll();
guienv->drawAll();
driver->endScene();
int fps = driver->getFPS();
if(fps != lastFps)
{
lastFps = fps;
core::stringw str = L"Managed Lights [";
str += driver->getName();
str += "] FPS:";
str += fps;
device->setWindowCaption(str.c_str());
}
}
myLightManager->drop();
device->drop();
return 0;
}
diff --git a/examples/21.Quake3Explorer/tutorial.html b/examples/21.Quake3Explorer/tutorial.html
index 989e63df..e798b49a 100644
--- a/examples/21.Quake3Explorer/tutorial.html
+++ b/examples/21.Quake3Explorer/tutorial.html
@@ -195,9 +195,9 @@ tr.heading h2 {
-
+
-
This Tutorial shows how to load different Quake 3 maps.
+
This tutorial shows how to load different Quake 3 maps.
Features:
- Load BSP Archives at Runtime from the menu
- Load a Map from the menu, shown with a screenshot
@@ -209,8 +209,8 @@ tr.heading h2 {
- Play music
You can download the Quake III Arena demo ( copyright id software ) at the following location: ftp://ftp.idsoftware.com/idstuff/quake3/win32/q3ademo.exe
-
Copyright 2006-2011 Burningwater, Thomas Alten
#include "driverChoice.h"
#include <irrlicht.h>
#include "q3factory.h"
#include "sound.h"
GameData holds the data which is needed to drive the game
struct GameData
{
GameData ( const path &startupDir) :
retVal(0), StartupDir(startupDir), createExDevice(0), Device(0)
{
setDefault ();
}
void setDefault ();
s32 save ( const path &filename );
s32 load ( const path &filename );
s32 debugState;
s32 gravityState;
s32 flyTroughState;
s32 wireFrame;
s32 guiActive;
s32 guiInputActive;
f32 GammaValue;
s32 retVal;
s32 sound;
path StartupDir;
stringw CurrentMapName;
array<path> CurrentArchiveList;
vector3df PlayerPosition;
vector3df PlayerRotation;
tQ3EntityList Variable;
Q3LevelLoadParameter loadParam;
SIrrlichtCreationParameters deviceParam;
funcptr_createDeviceEx createExDevice;
IrrlichtDevice *Device;
};
set default settings
void GameData::setDefault ()
{
debugState = EDS_OFF;
gravityState = 1;
flyTroughState = 0;
wireFrame = 0;
guiActive = 1;
guiInputActive = 0;
GammaValue = 1.f;
#if defined ( _IRR_WINDOWS_ )
deviceParam.DriverType = EDT_DIRECT3D9;
#else
deviceParam.DriverType = EDT_OPENGL;
#endif
deviceParam.WindowSize.Width = 800;
deviceParam.WindowSize.Height = 600;
deviceParam.Fullscreen = false;
deviceParam.Bits = 24;
deviceParam.ZBufferBits = 16;
deviceParam.Vsync = false;
deviceParam.AntiAlias = false;
loadParam.defaultLightMapMaterial = EMT_LIGHTMAP;
loadParam.defaultModulate = EMFN_MODULATE_1X;
loadParam.defaultFilter = EMF_ANISOTROPIC_FILTER;
loadParam.verbose = 2;
loadParam.mergeShaderBuffer = 1;
loadParam.cleanUnResolvedMeshes = 1;
loadParam.loadAllShaders = 1;
loadParam.loadSkyShader = 0;
loadParam.alpharef = 1;
sound = 0;
CurrentMapName = "";
CurrentArchiveList.clear ();
CurrentArchiveList.push_back ( StartupDir + "../../media/" );
CurrentArchiveList.push_back("/q/baseq3/");
CurrentArchiveList.push_back(StartupDir + "../../media/map-20kdm2.pk3");
}
Load the current game State from a typical quake3 cfg file
s32 GameData::load ( const path &filename )
{
if (!Device)
return 0;
IQ3LevelMesh* mesh = (IQ3LevelMesh*) Device->getSceneManager()->getMesh ( filename );
if (!mesh)
return 0;
tQ3EntityList &entityList = mesh->getEntityList ();
stringc s;
u32 pos;
for ( u32 e = 0; e != entityList.size (); ++e )
{
for ( u32 g = 0; g != entityList[e].getGroupSize (); ++g )
{
const SVarGroup *group = entityList[e].getGroup ( g );
for ( u32 index = 0; index < group->Variable.size (); ++index )
{
const SVariable &v = group->Variable[index];
pos = 0;
if ( v.name == "playerposition" )
{
PlayerPosition = getAsVector3df ( v.content, pos );
}
else
if ( v.name == "playerrotation" )
{
PlayerRotation = getAsVector3df ( v.content, pos );
}
}
}
}
return 1;
}
Store the current game State in a quake3 configuration file
s32 GameData::save ( const path &filename )
{
return 0;
if (!Device)
return 0;
c8 buf[128];
u32 i;
CurrentArchiveList.clear();
IFileSystem *fs = Device->getFileSystem();
for ( i = 0; i != fs->getFileArchiveCount(); ++i )
{
CurrentArchiveList.push_back ( fs->getFileArchive(i)->getFileList()->getPath() );
}
ICameraSceneNode * camera = Device->getSceneManager()->getActiveCamera ();
if ( camera )
{
PlayerPosition = camera->getPosition ();
PlayerRotation = camera->getRotation ();
}
IWriteFile *file = fs->createAndWriteFile ( filename );
if (!file)
return 0;
snprintf ( buf, 128, "playerposition %.f %.f %.f\nplayerrotation %.f %.f %.f\n",
PlayerPosition.X, PlayerPosition.Z, PlayerPosition.Y,
PlayerRotation.X, PlayerRotation.Z, PlayerRotation.Y);
file->write ( buf, (s32) strlen ( buf ) );
for ( i = 0; i != fs->getFileArchiveCount(); ++i )
{
snprintf ( buf, 128, "archive %s\n",stringc ( fs->getFileArchive(i)->getFileList()->getPath() ).c_str () );
file->write ( buf, (s32) strlen ( buf ) );
}
file->drop ();
return 1;
}
Representing a player
struct Q3Player : public IAnimationEndCallBack
{
Q3Player ()
: Device(0), MapParent(0), Mesh(0), WeaponNode(0), StartPositionCurrent(0)
{
animation[0] = 0;
memset(Anim, 0, sizeof(TimeFire)*4);
}
virtual void OnAnimationEnd(IAnimatedMeshSceneNode* node);
void create ( IrrlichtDevice *device,
IQ3LevelMesh* mesh,
ISceneNode *mapNode,
IMetaTriangleSelector *meta
);
void shutdown ();
void setAnim ( const c8 *name );
void respawn ();
void setpos ( const vector3df &pos, const vector3df& rotation );
ISceneNodeAnimatorCollisionResponse * cam() { return camCollisionResponse ( Device ); }
IrrlichtDevice *Device;
ISceneNode* MapParent;
IQ3LevelMesh* Mesh;
IAnimatedMeshSceneNode* WeaponNode;
s32 StartPositionCurrent;
TimeFire Anim[4];
c8 animation[64];
c8 buf[64];
};
End player
void Q3Player::shutdown ()
{
setAnim ( 0 );
dropElement (WeaponNode);
if ( Device )
{
ICameraSceneNode* camera = Device->getSceneManager()->getActiveCamera();
dropElement ( camera );
Device = 0;
}
MapParent = 0;
Mesh = 0;
}
create a new player
void Q3Player::create ( IrrlichtDevice *device, IQ3LevelMesh* mesh, ISceneNode *mapNode, IMetaTriangleSelector *meta )
{
setTimeFire ( Anim + 0, 200, FIRED );
setTimeFire ( Anim + 1, 5000 );
if (!device)
return;
Device = device;
Mesh = mesh;
MapParent = mapNode;
ISceneManager *smgr = device->getSceneManager ();
IVideoDriver * driver = device->getVideoDriver();
ICameraSceneNode* camera = 0;
SKeyMap keyMap[10];
keyMap[0].Action = EKA_MOVE_FORWARD;
keyMap[0].KeyCode = KEY_UP;
keyMap[1].Action = EKA_MOVE_FORWARD;
keyMap[1].KeyCode = KEY_KEY_W;
keyMap[2].Action = EKA_MOVE_BACKWARD;
keyMap[2].KeyCode = KEY_DOWN;
keyMap[3].Action = EKA_MOVE_BACKWARD;
keyMap[3].KeyCode = KEY_KEY_S;
keyMap[4].Action = EKA_STRAFE_LEFT;
keyMap[4].KeyCode = KEY_LEFT;
keyMap[5].Action = EKA_STRAFE_LEFT;
keyMap[5].KeyCode = KEY_KEY_A;
keyMap[6].Action = EKA_STRAFE_RIGHT;
keyMap[6].KeyCode = KEY_RIGHT;
keyMap[7].Action = EKA_STRAFE_RIGHT;
keyMap[7].KeyCode = KEY_KEY_D;
keyMap[8].Action = EKA_JUMP_UP;
keyMap[8].KeyCode = KEY_KEY_J;
keyMap[9].Action = EKA_CROUCH;
keyMap[9].KeyCode = KEY_KEY_C;
camera = smgr->addCameraSceneNodeFPS(0, 100.0f, 0.6f, -1, keyMap, 10, false, 0.6f);
camera->setName ( "First Person Camera" );
camera->setFarValue( 20000.f );
IAnimatedMeshMD2* weaponMesh = (IAnimatedMeshMD2*) smgr->getMesh("gun.md2");
if ( 0 == weaponMesh )
return;
if ( weaponMesh->getMeshType() == EAMT_MD2 )
{
s32 count = weaponMesh->getAnimationCount();
for ( s32 i = 0; i != count; ++i )
{
snprintf ( buf, 64, "Animation: %s", weaponMesh->getAnimationName(i) );
device->getLogger()->log(buf, ELL_INFORMATION);
}
}
WeaponNode = smgr->addAnimatedMeshSceneNode(
weaponMesh,
smgr->getActiveCamera(),
10,
vector3df( 0, 0, 0),
vector3df(-90,-90,90)
);
WeaponNode->setMaterialFlag(EMF_LIGHTING, false);
WeaponNode->setMaterialTexture(0, driver->getTexture( "gun.jpg"));
WeaponNode->setLoopMode ( false );
WeaponNode->setName ( "tommi the gun man" );
ISceneNodeAnimator* anim =
smgr->createCollisionResponseAnimator( meta, camera,
vector3df(30,45,30),
getGravity ( "earth" ),
vector3df(0,40,0),
0.0005f
);
camera->addAnimator( anim );
anim->drop();
if ( meta )
{
meta->drop ();
}
respawn ();
setAnim ( "idle" );
}
So we need a good starting position in the level. We can ask the Quake3 loader for all entities with class_name "info_player_deathmatch".
void Q3Player::respawn ()
{
if (!Device)
return;
ICameraSceneNode* camera = Device->getSceneManager()->getActiveCamera();
Device->getLogger()->log( "respawn" );
if ( StartPositionCurrent >= Q3StartPosition (
Mesh, camera,StartPositionCurrent++,
cam ()->getEllipsoidTranslation() )
)
{
StartPositionCurrent = 0;
}
}
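The respawn() function above relies on the Q3StartPosition helper for this. For illustration, here is a hedged sketch of walking the entity list by hand, reusing the access pattern from GameData::load above; it assumes, as the Irrlicht quake3 helpers do, that an entity's name holds its classname and that its "origin" key stores the spawn point:
// Illustrative sketch only - respawn() above simply calls Q3StartPosition instead.
tQ3EntityList &entityList = Mesh->getEntityList();
for ( u32 e = 0; e != entityList.size(); ++e )
{
if ( entityList[e].name != "info_player_deathmatch" )
continue;
for ( u32 g = 0; g != entityList[e].getGroupSize(); ++g )
{
const SVarGroup *group = entityList[e].getGroup( g );
for ( u32 i = 0; i < group->Variable.size(); ++i )
{
if ( group->Variable[i].name == "origin" )
{
u32 pos = 0;
// one candidate spawn position parsed from the entity
vector3df spawnPoint = getAsVector3df( group->Variable[i].content, pos );
}
}
}
}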
set Player position from saved coordinates
void Q3Player::setpos ( const vector3df &pos, const vector3df &rotation )
{
if (!Device)
return;
Device->getLogger()->log( "setpos" );
ICameraSceneNode* camera = Device->getSceneManager()->getActiveCamera();
if ( camera )
{
camera->setPosition ( pos );
camera->setRotation ( rotation );
camera->OnAnimate ( 0 );
}
}
set the Animation of the player and weapon
void Q3Player::setAnim ( const c8 *name )
{
if ( name )
{
snprintf ( animation, 64, "%s", name );
if ( WeaponNode )
{
WeaponNode->setAnimationEndCallback ( this );
WeaponNode->setMD2Animation ( animation );
}
}
else
{
animation[0] = 0;
if ( WeaponNode )
{
WeaponNode->setAnimationEndCallback ( 0 );
}
}
}
void Q3Player::OnAnimationEnd(IAnimatedMeshSceneNode* node)
{
setAnim ( 0 );
}
GUI Elements
struct GUI
{
GUI ()
{
memset ( this, 0, sizeof ( *this ) );
}
void drop()
{
dropElement ( Window );
dropElement ( Logo );
}
IGUIComboBox* VideoDriver;
IGUIComboBox* VideoMode;
IGUICheckBox* FullScreen;
IGUICheckBox* Bit32;
IGUIScrollBar* MultiSample;
IGUIButton* SetVideoMode;
IGUIScrollBar* Tesselation;
IGUIScrollBar* Gamma;
IGUICheckBox* Collision;
IGUICheckBox* Visible_Map;
IGUICheckBox* Visible_Shader;
IGUICheckBox* Visible_Fog;
IGUICheckBox* Visible_Unresolved;
IGUICheckBox* Visible_Skydome;
IGUIButton* Respawn;
IGUITable* ArchiveList;
IGUIButton* ArchiveAdd;
IGUIButton* ArchiveRemove;
IGUIFileOpenDialog* ArchiveFileOpen;
IGUIButton* ArchiveUp;
IGUIButton* ArchiveDown;
IGUIListBox* MapList;
IGUITreeView* SceneTree;
IGUIStaticText* StatusLine;
IGUIImage* Logo;
IGUIWindow* Window;
};
CQuake3EventHandler controls the game
class CQuake3EventHandler : public IEventReceiver
{
public:
CQuake3EventHandler( GameData *gameData );
virtual ~CQuake3EventHandler ();
void Animate();
void Render();
void AddArchive ( const path& archiveName );
void LoadMap ( const stringw& mapName, s32 collision );
void CreatePlayers();
void AddSky( u32 dome, const c8 *texture );
Q3Player *GetPlayer ( u32 index ) { return &Player[index]; }
void CreateGUI();
void SetGUIActive( s32 command);
bool OnEvent(const SEvent& eve);
private:
GameData *Game;
IQ3LevelMesh* Mesh;
ISceneNode* MapParent;
ISceneNode* ShaderParent;
ISceneNode* ItemParent;
ISceneNode* UnresolvedParent;
ISceneNode* BulletParent;
ISceneNode* FogParent;
ISceneNode * SkyNode;
IMetaTriangleSelector *Meta;
c8 buf[256];
Q3Player Player[2];
struct SParticleImpact
{
u32 when;
vector3df pos;
vector3df outVector;
};
array<SParticleImpact> Impacts;
void useItem( Q3Player * player);
void createParticleImpacts( u32 now );
void createTextures ();
void addSceneTreeItem( ISceneNode * parent, IGUITreeViewNode* nodeParent);
GUI gui;
void dropMap ();
};
Constructor
CQuake3EventHandler::CQuake3EventHandler( GameData *game )
: Game(game), Mesh(0), MapParent(0), ShaderParent(0), ItemParent(0), UnresolvedParent(0),
BulletParent(0), FogParent(0), SkyNode(0), Meta(0)
{
buf[0]=0;
if ( Game->deviceParam.Bits == 16 )
{
game->Device->getVideoDriver()->setTextureCreationFlag(ETCF_ALWAYS_16_BIT, true);
}
game->Device->getSceneManager()->getParameters()->setAttribute(scene::ALLOW_ZWRITE_ON_TRANSPARENT, true);
createTextures ();
sound_init ( game->Device );
Game->Device->setEventReceiver ( this );
}
CQuake3EventHandler::~CQuake3EventHandler ()
{
Player[0].shutdown ();
sound_shutdown ();
Game->save( "explorer.cfg" );
Game->Device->drop();
}
void CQuake3EventHandler::createTextures()
{
IVideoDriver * driver = Game->Device->getVideoDriver();
dimension2du dim(64, 64);
video::IImage* image;
u32 i;
u32 x;
u32 y;
u32 * data;
for ( i = 0; i != 8; ++i )
{
image = driver->createImage ( video::ECF_A8R8G8B8, dim);
data = (u32*) image->lock ();
for ( y = 0; y != dim.Height; ++y )
{
for ( x = 0; x != dim.Width; ++x )
{
data [x] = 0xFFFFFFFF;
}
data = (u32*) ( (u8*) data + image->getPitch() );
}
image->unlock();
snprintf ( buf, 64, "smoke_%02d", i );
driver->addTexture( buf, image );
image->drop ();
}
for ( i = 0; i != 1; ++i )
{
image = driver->createImage ( video::ECF_A8R8G8B8, dim);
data = (u32*) image->lock ();
for ( y = 0; y != dim.Height; ++y )
{
for ( x = 0; x != dim.Width; ++x )
{
data [x] = 0xFFFFFFFF;
}
data = (u32*) ( (u8*) data + image->getPitch() );
}
image->unlock();
snprintf ( buf, 64, "fog_%02d", i );
driver->addTexture( buf, image );
image->drop ();
}
}
create the GUI
void CQuake3EventHandler::CreateGUI()
{
IGUIEnvironment *env = Game->Device->getGUIEnvironment();
IVideoDriver * driver = Game->Device->getVideoDriver();
gui.drop();
IGUIFont* font = env->getFont("fontlucida.png");
if (font)
env->getSkin()->setFont(font);
env->getSkin()->setColor ( EGDC_BUTTON_TEXT, video::SColor(240,0xAA,0xAA,0xAA) );
env->getSkin()->setColor ( EGDC_3D_HIGH_LIGHT, video::SColor(240,0x22,0x22,0x22) );
env->getSkin()->setColor ( EGDC_3D_FACE, video::SColor(240,0x44,0x44,0x44) );
env->getSkin()->setColor ( EGDC_EDITABLE, video::SColor(240,0x44,0x44,0x44) );
env->getSkin()->setColor ( EGDC_FOCUSED_EDITABLE, video::SColor(240,0x54,0x54,0x54) );
env->getSkin()->setColor ( EGDC_WINDOW, video::SColor(240,0x66,0x66,0x66) );
dimension2d<u32> dim ( 800, 600 );
dimension2d<u32> vdim ( Game->Device->getVideoDriver()->getScreenSize() );
if ( vdim.Height >= dim.Height && vdim.Width >= dim.Width )
{
}
else
{
}
gui.Window = env->addWindow ( rect<s32> ( 0, 0, dim.Width, dim.Height ), false, L"Quake3 Explorer" );
gui.Window->setToolTipText ( L"Quake3Explorer. Loads and shows various BSP file formats and shaders." );
gui.Window->getCloseButton()->setToolTipText ( L"Quit Quake3 Explorer" );
gui.StatusLine = env->addStaticText( 0, rect<s32>( 5,dim.Height - 30,dim.Width - 5,dim.Height - 10),
false, false, gui.Window, -1, true
);
env->addStaticText ( L"VideoDriver:", rect<s32>( dim.Width - 400, 24, dim.Width - 310, 40 ),false, false, gui.Window, -1, false );
gui.VideoDriver = env->addComboBox(rect<s32>( dim.Width - 300, 24, dim.Width - 10, 40 ),gui.Window);
gui.VideoDriver->addItem(L"Direct3D 9.0c", EDT_DIRECT3D9 );
gui.VideoDriver->addItem(L"Direct3D 8.1", EDT_DIRECT3D8 );
gui.VideoDriver->addItem(L"OpenGL 1.5", EDT_OPENGL);
gui.VideoDriver->addItem(L"Software Renderer", EDT_SOFTWARE);
gui.VideoDriver->addItem(L"Burning's Video (TM) Thomas Alten", EDT_BURNINGSVIDEO);
gui.VideoDriver->setSelected ( gui.VideoDriver->getIndexForItemData ( Game->deviceParam.DriverType ) );
gui.VideoDriver->setToolTipText ( L"Use a VideoDriver" );
env->addStaticText ( L"VideoMode:", rect<s32>( dim.Width - 400, 44, dim.Width - 310, 60 ),false, false, gui.Window, -1, false );
gui.VideoMode = env->addComboBox(rect<s32>( dim.Width - 300, 44, dim.Width - 10, 60 ),gui.Window);
gui.VideoMode->setToolTipText ( L"Supported Screenmodes" );
IVideoModeList *modeList = Game->Device->getVideoModeList();
if ( modeList )
{
s32 i;
for ( i = 0; i != modeList->getVideoModeCount (); ++i )
{
u16 d = modeList->getVideoModeDepth ( i );
if ( d < 16 )
continue;
u16 w = modeList->getVideoModeResolution ( i ).Width;
u16 h = modeList->getVideoModeResolution ( i ).Height;
u32 val = w << 16 | h;
if ( gui.VideoMode->getIndexForItemData ( val ) >= 0 )
continue;
f32 aspect = (f32) w / (f32) h;
const c8 *a = "";
if ( core::equals ( aspect, 1.3333333333f ) ) a = "4:3";
else if ( core::equals ( aspect, 1.6666666f ) ) a = "15:9 widescreen";
else if ( core::equals ( aspect, 1.7777777f ) ) a = "16:9 widescreen";
else if ( core::equals ( aspect, 1.6f ) ) a = "16:10 widescreen";
else if ( core::equals ( aspect, 2.133333f ) ) a = "20:9 widescreen";
snprintf ( buf, sizeof ( buf ), "%d x %d, %s",w, h, a );
gui.VideoMode->addItem ( stringw ( buf ).c_str(), val );
}
}
gui.VideoMode->setSelected ( gui.VideoMode->getIndexForItemData (
Game->deviceParam.WindowSize.Width << 16 |
Game->deviceParam.WindowSize.Height ) );
gui.FullScreen = env->addCheckBox ( Game->deviceParam.Fullscreen, rect<s32>( dim.Width - 400, 64, dim.Width - 300, 80 ), gui.Window,-1, L"Fullscreen" );
gui.FullScreen->setToolTipText ( L"Set Fullscreen or Window Mode" );
gui.Bit32 = env->addCheckBox ( Game->deviceParam.Bits == 32, rect<s32>( dim.Width - 300, 64, dim.Width - 240, 80 ), gui.Window,-1, L"32Bit" );
gui.Bit32->setToolTipText ( L"Use 16 or 32 Bit" );
env->addStaticText ( L"MultiSample:", rect<s32>( dim.Width - 235, 64, dim.Width - 150, 80 ),false, false, gui.Window, -1, false );
gui.MultiSample = env->addScrollBar( true, rect<s32>( dim.Width - 150, 64, dim.Width - 70, 80 ), gui.Window,-1 );
gui.MultiSample->setMin ( 0 );
gui.MultiSample->setMax ( 8 );
gui.MultiSample->setSmallStep ( 1 );
gui.MultiSample->setLargeStep ( 1 );
gui.MultiSample->setPos ( Game->deviceParam.AntiAlias );
gui.MultiSample->setToolTipText ( L"Set the MultiSample (disable, 1x, 2x, 4x, 8x )" );
gui.SetVideoMode = env->addButton (rect<s32>( dim.Width - 60, 64, dim.Width - 10, 80 ), gui.Window, -1,L"set" );
gui.SetVideoMode->setToolTipText ( L"Set Video Mode with current values" );
env->addStaticText ( L"Gamma:", rect<s32>( dim.Width - 400, 104, dim.Width - 310, 120 ),false, false, gui.Window, -1, false );
gui.Gamma = env->addScrollBar( true, rect<s32>( dim.Width - 300, 104, dim.Width - 10, 120 ), gui.Window,-1 );
gui.Gamma->setMin ( 50 );
gui.Gamma->setMax ( 350 );
gui.Gamma->setSmallStep ( 1 );
gui.Gamma->setLargeStep ( 10 );
gui.Gamma->setPos ( core::floor32 ( Game->GammaValue * 100.f ) );
gui.Gamma->setToolTipText ( L"Adjust Gamma Ramp ( 0.5 - 3.5)" );
Game->Device->setGammaRamp ( Game->GammaValue, Game->GammaValue, Game->GammaValue, 0.f, 0.f );
env->addStaticText ( L"Tesselation:", rect<s32>( dim.Width - 400, 124, dim.Width - 310, 140 ),false, false, gui.Window, -1, false );
gui.Tesselation = env->addScrollBar( true, rect<s32>( dim.Width - 300, 124, dim.Width - 10, 140 ), gui.Window,-1 );
gui.Tesselation->setMin ( 2 );
gui.Tesselation->setMax ( 12 );
gui.Tesselation->setSmallStep ( 1 );
gui.Tesselation->setLargeStep ( 1 );
gui.Tesselation->setPos ( Game->loadParam.patchTesselation );
gui.Tesselation->setToolTipText ( L"How smooth should curved surfaces be rendered" );
gui.Collision = env->addCheckBox ( true, rect<s32>( dim.Width - 400, 150, dim.Width - 300, 166 ), gui.Window,-1, L"Collision" );
gui.Collision->setToolTipText ( L"Set collision on or off ( flythrough ). \nPress F7 on your Keyboard" );
gui.Visible_Map = env->addCheckBox ( true, rect<s32>( dim.Width - 300, 150, dim.Width - 240, 166 ), gui.Window,-1, L"Map" );
gui.Visible_Map->setToolTipText ( L"Show or not show the static part the Level. \nPress F3 on your Keyboard" );
gui.Visible_Shader = env->addCheckBox ( true, rect<s32>( dim.Width - 240, 150, dim.Width - 170, 166 ), gui.Window,-1, L"Shader" );
gui.Visible_Shader->setToolTipText ( L"Show or not show the Shader Nodes. \nPress F4 on your Keyboard" );
gui.Visible_Fog = env->addCheckBox ( true, rect<s32>( dim.Width - 170, 150, dim.Width - 110, 166 ), gui.Window,-1, L"Fog" );
gui.Visible_Fog->setToolTipText ( L"Show or not show the Fog Nodes. \nPress F5 on your Keyboard" );
gui.Visible_Unresolved = env->addCheckBox ( true, rect<s32>( dim.Width - 110, 150, dim.Width - 10, 166 ), gui.Window,-1, L"Unresolved" );
gui.Visible_Unresolved->setToolTipText ( L"Show the or not show the Nodes the Engine can't handle. \nPress F6 on your Keyboard" );
gui.Visible_Skydome = env->addCheckBox ( true, rect<s32>( dim.Width - 110, 180, dim.Width - 10, 196 ), gui.Window,-1, L"Skydome" );
gui.Visible_Skydome->setToolTipText ( L"Show the or not show the Skydome." );
env->addStaticText ( L"Archives:", rect<s32>( 5, dim.Height - 530, dim.Width - 600,dim.Height - 514 ),false, false, gui.Window, -1, false );
gui.ArchiveAdd = env->addButton ( rect<s32>( dim.Width - 725, dim.Height - 530, dim.Width - 665, dim.Height - 514 ), gui.Window,-1, L"add" );
gui.ArchiveAdd->setToolTipText ( L"Add an archive, usually packed zip-archives (*.pk3) to the Filesystem" );
gui.ArchiveRemove = env->addButton ( rect<s32>( dim.Width - 660, dim.Height - 530, dim.Width - 600, dim.Height - 514 ), gui.Window,-1, L"del" );
gui.ArchiveRemove->setToolTipText ( L"Remove the selected archive from the FileSystem." );
gui.ArchiveUp = env->addButton ( rect<s32>( dim.Width - 575, dim.Height - 530, dim.Width - 515, dim.Height - 514 ), gui.Window,-1, L"up" );
gui.ArchiveUp->setToolTipText ( L"Arrange Archive Look-up Hierarchy. Move the selected Archive up" );
gui.ArchiveDown = env->addButton ( rect<s32>( dim.Width - 510, dim.Height - 530, dim.Width - 440, dim.Height - 514 ), gui.Window,-1, L"down" );
gui.ArchiveDown->setToolTipText ( L"Arrange Archive Look-up Hierarchy. Move the selected Archive down" );
gui.ArchiveList = env->addTable ( rect<s32>( 5,dim.Height - 510, dim.Width - 450,dim.Height - 410 ), gui.Window );
gui.ArchiveList->addColumn ( L"Type", 0 );
gui.ArchiveList->addColumn ( L"Real File Path", 1 );
gui.ArchiveList->setColumnWidth ( 0, 60 );
gui.ArchiveList->setColumnWidth ( 1, 284 );
gui.ArchiveList->setToolTipText ( L"Show the attached Archives" );
env->addStaticText ( L"Maps:", rect<s32>( 5, dim.Height - 400, dim.Width - 450,dim.Height - 380 ),false, false, gui.Window, -1, false );
gui.MapList = env->addListBox ( rect<s32>( 5,dim.Height - 380, dim.Width - 450,dim.Height - 40 ), gui.Window, -1, true );
gui.MapList->setToolTipText ( L"Show the current Maps in all Archives.\n Double-Click the Map to start the level" );
env->addStaticText ( L"Scenegraph:", rect<s32>( dim.Width - 400, dim.Height - 400, dim.Width - 5,dim.Height - 380 ),false, false, gui.Window, -1, false );
gui.SceneTree = env->addTreeView( rect<s32>( dim.Width - 400, dim.Height - 380, dim.Width - 5, dim.Height - 40 ),
gui.Window, -1, true, true, false );
gui.SceneTree->setToolTipText ( L"Show the current Scenegraph" );
gui.SceneTree->getRoot()->clearChildren();
addSceneTreeItem ( Game->Device->getSceneManager()->getRootSceneNode(), gui.SceneTree->getRoot() );
IGUIImageList* imageList = env->createImageList( driver->getTexture ( "iconlist.png" ),
dimension2di( 32, 32 ), true );
if ( imageList )
{
gui.SceneTree->setImageList( imageList );
imageList->drop ();
}
gui.Logo = env->addImage( driver->getTexture("irrlichtlogo3.png"), position2d<s32>(5, 16 ), true, 0 );
gui.Logo->setToolTipText ( L"The great Irrlicht Engine" );
AddArchive ( "" );
}
Adds an Archive to the FileSystem and updates the GUI
void CQuake3EventHandler::AddArchive ( const path& archiveName )
{
IFileSystem *fs = Game->Device->getFileSystem();
u32 i;
if ( archiveName.size () )
{
bool exists = false;
for ( i = 0; i != fs->getFileArchiveCount(); ++i )
{
if ( fs->getFileArchive(i)->getFileList()->getPath() == archiveName )
{
exists = true;
break;
}
}
if (!exists)
{
fs->addFileArchive(archiveName, true, false);
}
}
if ( gui.ArchiveList )
{
gui.ArchiveList->clearRows();
for ( i = 0; i != fs->getFileArchiveCount(); ++i )
{
IFileArchive * archive = fs->getFileArchive ( i );
u32 index = gui.ArchiveList->addRow(i);
core::stringw typeName;
switch(archive->getType())
{
case io::EFAT_ZIP:
typeName = "ZIP";
break;
case io::EFAT_GZIP:
typeName = "gzip";
break;
case io::EFAT_FOLDER:
typeName = "Mount";
break;
case io::EFAT_PAK:
typeName = "PAK";
break;
case io::EFAT_TAR:
typeName = "TAR";
break;
default:
typeName = "archive";
}
gui.ArchiveList->setCellText ( index, 0, typeName );
gui.ArchiveList->setCellText ( index, 1, archive->getFileList()->getPath() );
}
}
if ( gui.MapList )
{
gui.MapList->clear();
IGUISpriteBank *bank = Game->Device->getGUIEnvironment()->getSpriteBank("sprite_q3map");
if ( 0 == bank )
bank = Game->Device->getGUIEnvironment()->addEmptySpriteBank("sprite_q3map");
SGUISprite sprite;
SGUISpriteFrame frame;
core::rect<s32> r;
bank->getSprites().clear();
bank->getPositions().clear ();
gui.MapList->setSpriteBank ( bank );
u32 g = 0;
core::stringw s;
fs->setFileListSystem ( FILESYSTEM_VIRTUAL );
fs->changeWorkingDirectoryTo ( "/maps/" );
IFileList *fileList = fs->createFileList ();
fs->setFileListSystem ( FILESYSTEM_NATIVE );
for ( i=0; i< fileList->getFileCount(); ++i)
{
s = fileList->getFullFileName(i);
if ( s.find ( ".bsp" ) >= 0 )
{
path c ( s );
deletePathFromFilename ( c );
cutFilenameExtension ( c, c );
c = path ( "levelshots/" ) + c;
dimension2du dim ( 128, 128 );
IVideoDriver * driver = Game->Device->getVideoDriver();
IImage* image = 0;
ITexture *tex = 0;
path filename;
filename = c + ".jpg";
if ( fs->existFile ( filename ) )
image = driver->createImageFromFile( filename );
if ( 0 == image )
{
filename = c + ".tga";
if ( fs->existFile ( filename ) )
image = driver->createImageFromFile( filename );
}
if ( image )
{
IImage* filter = driver->createImage ( video::ECF_R8G8B8, dim );
image->copyToScalingBoxFilter ( filter, 0 );
image->drop ();
image = filter;
}
if ( image )
{
tex = driver->addTexture ( filename, image );
image->drop ();
}
bank->setTexture ( g, tex );
r.LowerRightCorner.X = dim.Width;
r.LowerRightCorner.Y = dim.Height;
gui.MapList->setItemHeight ( r.LowerRightCorner.Y + 4 );
frame.rectNumber = bank->getPositions().size();
frame.textureNumber = g;
bank->getPositions().push_back(r);
sprite.Frames.set_used ( 0 );
sprite.Frames.push_back(frame);
sprite.frameTime = 0;
bank->getSprites().push_back(sprite);
gui.MapList->addItem ( s.c_str (), g );
g += 1;
}
}
fileList->drop ();
gui.MapList->setSelected ( -1 );
IGUIScrollBar * bar = (IGUIScrollBar*)gui.MapList->getElementFromId( 0 );
if ( bar )
bar->setPos ( 0 );
}
}
clears the Map in Memory
void CQuake3EventHandler::dropMap ()
{
IVideoDriver * driver = Game->Device->getVideoDriver();
driver->removeAllHardwareBuffers ();
driver->removeAllTextures ();
Player[0].shutdown ();
dropElement ( ItemParent );
dropElement ( ShaderParent );
dropElement ( UnresolvedParent );
dropElement ( FogParent );
dropElement ( BulletParent );
Impacts.clear();
if ( Meta )
{
Meta = 0;
}
dropElement ( MapParent );
dropElement ( SkyNode );
IMeshCache *cache = Game->Device->getSceneManager ()->getMeshCache();
cache->clear ();
Mesh = 0;
}
Load new map
void CQuake3EventHandler::LoadMap ( const stringw &mapName, s32 collision )
{
if ( 0 == mapName.size() )
return;
dropMap ();
IFileSystem *fs = Game->Device->getFileSystem();
ISceneManager *smgr = Game->Device->getSceneManager ();
IReadFile* file = fs->createMemoryReadFile(&Game->loadParam,
sizeof(Game->loadParam), L"levelparameter.cfg", false);
smgr->getMesh( file );
file->drop ();
Mesh = (IQ3LevelMesh*) smgr->getMesh(mapName);
if ( 0 == Mesh )
return;
Add the geometry mesh to the Scene (polygons & patches). The Geometry mesh is optimised for faster drawing.
IMesh *geometry = Mesh->getMesh(E_Q3_MESH_GEOMETRY);
if ( 0 == geometry || geometry->getMeshBufferCount() == 0)
return;
Game->CurrentMapName = mapName;
Meta = 0;
ITriangleSelector * selector = 0;
if (collision)
Meta = smgr->createMetaTriangleSelector();
s32 minimalNodes = 2048;
MapParent = smgr->addOctreeSceneNode(geometry, 0, -1, minimalNodes);
MapParent->setName ( mapName );
if ( Meta )
{
selector = smgr->createOctreeTriangleSelector( geometry,MapParent, minimalNodes);
Meta->addTriangleSelector( selector);
selector->drop ();
}
ItemParent = smgr->addEmptySceneNode();
if ( ItemParent )
ItemParent->setName ( "Item Container" );
ShaderParent = smgr->addEmptySceneNode();
if ( ShaderParent )
ShaderParent->setName ( "Shader Container" );
UnresolvedParent = smgr->addEmptySceneNode();
if ( UnresolvedParent )
UnresolvedParent->setName ( "Unresolved Container" );
FogParent = smgr->addEmptySceneNode();
if ( FogParent )
FogParent->setName ( "Fog Container" );
BulletParent = smgr->addEmptySceneNode();
if ( BulletParent )
BulletParent->setName ( "Bullet Container" );
Now construct SceneNodes for each Shader. The Objects are stored in the quake mesh E_Q3_MESH_ITEMS and the Shader ID is stored in the MaterialParameters. Mostly dark looking skulls and moving lava... or green flashing tubes?
Q3ShaderFactory ( Game->loadParam, Game->Device, Mesh, E_Q3_MESH_ITEMS,ShaderParent, Meta, false );
Q3ShaderFactory ( Game->loadParam, Game->Device, Mesh, E_Q3_MESH_FOG,FogParent, 0, false );
Q3ShaderFactory ( Game->loadParam, Game->Device, Mesh, E_Q3_MESH_UNRESOLVED,UnresolvedParent, Meta, true );
Now construct Models from Entity List
Q3ModelFactory ( Game->loadParam, Game->Device, Mesh, ItemParent, false );
}
Adds a SceneNode with an icon to the Scene Tree
void CQuake3EventHandler::addSceneTreeItem( ISceneNode * parent, IGUITreeViewNode* nodeParent)
{
IGUITreeViewNode* node;
wchar_t msg[128];
s32 imageIndex;
list<ISceneNode*>::ConstIterator it = parent->getChildren().begin();
for (; it != parent->getChildren().end(); ++it)
{
switch ( (*it)->getType () )
{
case ESNT_Q3SHADER_SCENE_NODE: imageIndex = 0; break;
case ESNT_CAMERA: imageIndex = 1; break;
case ESNT_EMPTY: imageIndex = 2; break;
case ESNT_MESH: imageIndex = 3; break;
case ESNT_OCTREE: imageIndex = 3; break;
case ESNT_ANIMATED_MESH: imageIndex = 4; break;
case ESNT_SKY_BOX: imageIndex = 5; break;
case ESNT_BILLBOARD: imageIndex = 6; break;
case ESNT_PARTICLE_SYSTEM: imageIndex = 7; break;
case ESNT_TEXT: imageIndex = 8; break;
default:imageIndex = -1; break;
}
if ( imageIndex < 0 )
{
swprintf ( msg, 128, L"%hs,%hs",
Game->Device->getSceneManager ()->getSceneNodeTypeName ( (*it)->getType () ),
(*it)->getName()
);
}
else
{
swprintf ( msg, 128, L"%hs",(*it)->getName() );
}
node = nodeParent->addChildBack( msg, 0, imageIndex );
list<ISceneNodeAnimator*>::ConstIterator ait = (*it)->getAnimators().begin();
for (; ait != (*it)->getAnimators().end(); ++ait)
{
imageIndex = -1;
swprintf ( msg, 128, L"%hs",
Game->Device->getSceneManager ()->getAnimatorTypeName ( (*ait)->getType () )
);
switch ( (*ait)->getType () )
{
case ESNAT_FLY_CIRCLE:
case ESNAT_FLY_STRAIGHT:
case ESNAT_FOLLOW_SPLINE:
case ESNAT_ROTATION:
case ESNAT_TEXTURE:
case ESNAT_DELETION:
case ESNAT_COLLISION_RESPONSE:
case ESNAT_CAMERA_FPS:
case ESNAT_CAMERA_MAYA:
default:
break;
}
node->addChildBack( msg, 0, imageIndex );
}
addSceneTreeItem ( *it, node );
}
}
void CQuake3EventHandler::CreatePlayers()
{
Player[0].create ( Game->Device, Mesh, MapParent, Meta );
}
void CQuake3EventHandler::AddSky( u32 dome, const c8 *texture)
{
ISceneManager *smgr = Game->Device->getSceneManager ();
IVideoDriver * driver = Game->Device->getVideoDriver();
bool oldMipMapState = driver->getTextureCreationFlag(video::ETCF_CREATE_MIP_MAPS);
driver->setTextureCreationFlag(video::ETCF_CREATE_MIP_MAPS, false);
if ( 0 == dome )
{
static const c8*p[] = { "ft", "rt", "bk", "lf", "up", "dn" };
u32 i = 0;
snprintf ( buf, 64, "%s_%s.jpg", texture, p[i] );
SkyNode = smgr->addSkyBoxSceneNode( driver->getTexture ( buf ), 0, 0, 0, 0, 0 );
if (SkyNode)
{
for ( i = 0; i < 6; ++i )
{
snprintf ( buf, 64, "%s_%s.jpg", texture, p[i] );
SkyNode->getMaterial(i).setTexture ( 0, driver->getTexture ( buf ) );
}
}
}
else
if ( 1 == dome )
{
snprintf ( buf, 64, "%s.jpg", texture );
SkyNode = smgr->addSkyDomeSceneNode(
driver->getTexture( buf ), 32,32,
1.f, 1.f, 1000.f, 0, 11);
}
else
if ( 2 == dome )
{
snprintf ( buf, 64, "%s.jpg", texture );
SkyNode = smgr->addSkyDomeSceneNode(
driver->getTexture( buf ), 16,8,
0.95f, 2.f, 1000.f, 0, 11);
}
if (SkyNode)
SkyNode->setName("Skydome");
driver->setTextureCreationFlag(video::ETCF_CREATE_MIP_MAPS, oldMipMapState);
}
void CQuake3EventHandler::SetGUIActive( s32 command)
{
bool inputState = false;
ICameraSceneNode * camera = Game->Device->getSceneManager()->getActiveCamera ();
switch ( command )
{
case 0: Game->guiActive = 0; inputState = !Game->guiActive; break;
case 1: Game->guiActive = 1; inputState = !Game->guiActive; break;
case 2: Game->guiActive ^= 1; inputState = !Game->guiActive;break;
case 3:
if ( camera )
inputState = !camera->isInputReceiverEnabled();
break;
}
if ( camera )
{
camera->setInputReceiverEnabled ( inputState );
Game->Device->getCursorControl()->setVisible( !inputState );
}
if ( gui.Window )
{
gui.Window->setVisible ( Game->guiActive != 0 );
}
if ( Game->guiActive &&
gui.SceneTree && Game->Device->getGUIEnvironment()->getFocus() != gui.SceneTree
)
{
gui.SceneTree->getRoot()->clearChildren();
addSceneTreeItem ( Game->Device->getSceneManager()->getRootSceneNode(), gui.SceneTree->getRoot() );
}
Game->Device->getGUIEnvironment()->setFocus ( Game->guiActive ? gui.Window: 0 );
}
Handle game input
bool CQuake3EventHandler::OnEvent(const SEvent& eve)
{
if ( eve.EventType == EET_LOG_TEXT_EVENT )
{
return false;
}
if ( Game->guiActive && eve.EventType == EET_GUI_EVENT )
{
if ( eve.GUIEvent.Caller == gui.MapList && eve.GUIEvent.EventType == gui::EGET_LISTBOX_SELECTED_AGAIN )
{
s32 selected = gui.MapList->getSelected();
if ( selected >= 0 )
{
stringw loadMap = gui.MapList->getListItem ( selected );
if ( 0 == MapParent || loadMap != Game->CurrentMapName )
{
printf ( "Loading map %ls\n", loadMap.c_str() );
LoadMap ( loadMap , 1 );
if ( 0 == Game->loadParam.loadSkyShader )
{
AddSky ( 1, "skydome2" );
}
CreatePlayers ();
CreateGUI ();
SetGUIActive ( 0 );
return true;
}
}
}
else
if ( eve.GUIEvent.Caller == gui.ArchiveRemove && eve.GUIEvent.EventType == gui::EGET_BUTTON_CLICKED )
{
Game->Device->getFileSystem()->removeFileArchive( gui.ArchiveList->getSelected() );
Game->CurrentMapName = "";
AddArchive ( "" );
}
else
if ( eve.GUIEvent.Caller == gui.ArchiveAdd && eve.GUIEvent.EventType == gui::EGET_BUTTON_CLICKED )
{
if ( 0 == gui.ArchiveFileOpen )
{
Game->Device->getFileSystem()->setFileListSystem ( FILESYSTEM_NATIVE );
gui.ArchiveFileOpen = Game->Device->getGUIEnvironment()->addFileOpenDialog ( L"Add Game Archive" , false,gui.Window );
}
}
else
if ( eve.GUIEvent.Caller == gui.ArchiveFileOpen && eve.GUIEvent.EventType == gui::EGET_FILE_SELECTED )
{
AddArchive ( gui.ArchiveFileOpen->getFileName() );
gui.ArchiveFileOpen = 0;
}
else
if ( eve.GUIEvent.Caller == gui.ArchiveFileOpen && eve.GUIEvent.EventType == gui::EGET_DIRECTORY_SELECTED )
{
AddArchive ( gui.ArchiveFileOpen->getDirectoryName() );
}
else
if ( eve.GUIEvent.Caller == gui.ArchiveFileOpen && eve.GUIEvent.EventType == gui::EGET_FILE_CHOOSE_DIALOG_CANCELLED )
{
gui.ArchiveFileOpen = 0;
}
else
if ( ( eve.GUIEvent.Caller == gui.ArchiveUp || eve.GUIEvent.Caller == gui.ArchiveDown ) &&
eve.GUIEvent.EventType == gui::EGET_BUTTON_CLICKED )
{
s32 rel = eve.GUIEvent.Caller == gui.ArchiveUp ? -1 : 1;
if ( Game->Device->getFileSystem()->moveFileArchive ( gui.ArchiveList->getSelected (), rel ) )
{
s32 newIndex = core::s32_clamp ( gui.ArchiveList->getSelected() + rel, 0, gui.ArchiveList->getRowCount() - 1 );
AddArchive ( "" );
gui.ArchiveList->setSelected ( newIndex );
Game->CurrentMapName = "";
}
}
else
if ( eve.GUIEvent.Caller == gui.VideoDriver && eve.GUIEvent.EventType == gui::EGET_COMBO_BOX_CHANGED )
{
Game->deviceParam.DriverType = (E_DRIVER_TYPE) gui.VideoDriver->getItemData ( gui.VideoDriver->getSelected() );
}
else
if ( eve.GUIEvent.Caller == gui.VideoMode && eve.GUIEvent.EventType == gui::EGET_COMBO_BOX_CHANGED )
{
u32 val = gui.VideoMode->getItemData ( gui.VideoMode->getSelected() );
Game->deviceParam.WindowSize.Width = val >> 16;
Game->deviceParam.WindowSize.Height = val & 0xFFFF;
}
else
if ( eve.GUIEvent.Caller == gui.FullScreen && eve.GUIEvent.EventType == gui::EGET_CHECKBOX_CHANGED )
{
Game->deviceParam.Fullscreen = gui.FullScreen->isChecked();
}
else
if ( eve.GUIEvent.Caller == gui.Bit32 && eve.GUIEvent.EventType == gui::EGET_CHECKBOX_CHANGED )
{
Game->deviceParam.Bits = gui.Bit32->isChecked() ? 32 : 16;
}
else
if ( eve.GUIEvent.Caller == gui.MultiSample && eve.GUIEvent.EventType == gui::EGET_SCROLL_BAR_CHANGED )
{
Game->deviceParam.AntiAlias = gui.MultiSample->getPos();
}
else
if ( eve.GUIEvent.Caller == gui.Tesselation && eve.GUIEvent.EventType == gui::EGET_SCROLL_BAR_CHANGED )
{
Game->loadParam.patchTesselation = gui.Tesselation->getPos ();
}
else
if ( eve.GUIEvent.Caller == gui.Gamma && eve.GUIEvent.EventType == gui::EGET_SCROLL_BAR_CHANGED )
{
Game->GammaValue = gui.Gamma->getPos () * 0.01f;
Game->Device->setGammaRamp ( Game->GammaValue, Game->GammaValue, Game->GammaValue, 0.f, 0.f );
}
else
if ( eve.GUIEvent.Caller == gui.SetVideoMode && eve.GUIEvent.EventType == gui::EGET_BUTTON_CLICKED )
{
Game->retVal = 2;
Game->Device->closeDevice();
}
else
if ( eve.GUIEvent.Caller == gui.Window && eve.GUIEvent.EventType == gui::EGET_ELEMENT_CLOSED )
{
Game->Device->closeDevice();
}
else
if ( eve.GUIEvent.Caller == gui.Collision && eve.GUIEvent.EventType == gui::EGET_CHECKBOX_CHANGED )
{
Game->flyTroughState ^= 1;
Player[0].cam()->setAnimateTarget ( Game->flyTroughState == 0 );
printf ( "collision %d\n", Game->flyTroughState == 0 );
}
else
if ( eve.GUIEvent.Caller == gui.Visible_Map && eve.GUIEvent.EventType == gui::EGET_CHECKBOX_CHANGED )
{
bool v = gui.Visible_Map->isChecked();
if ( MapParent )
{
printf ( "static node set visible %d\n",v );
MapParent->setVisible ( v );
}
}
else
if ( eve.GUIEvent.Caller == gui.Visible_Shader && eve.GUIEvent.EventType == gui::EGET_CHECKBOX_CHANGED )
{
bool v = gui.Visible_Shader->isChecked();
if ( ShaderParent )
{
printf ( "shader node set visible %d\n",v );
ShaderParent->setVisible ( v );
}
}
else
if ( eve.GUIEvent.Caller == gui.Visible_Skydome && eve.GUIEvent.EventType == gui::EGET_CHECKBOX_CHANGED )
{
if ( SkyNode )
{
bool v = !SkyNode->isVisible();
printf ( "skynode set visible %d\n",v );
SkyNode->setVisible ( v );
}
}
else
if ( eve.GUIEvent.Caller == gui.Respawn && eve.GUIEvent.EventType == gui::EGET_BUTTON_CLICKED )
{
Player[0].respawn ();
}
return false;
}
if ((eve.EventType == EET_KEY_INPUT_EVENT && eve.KeyInput.Key == KEY_SPACE &&
eve.KeyInput.PressedDown == false) ||
(eve.EventType == EET_MOUSE_INPUT_EVENT && eve.MouseInput.Event == EMIE_LMOUSE_LEFT_UP)
)
{
ICameraSceneNode * camera = Game->Device->getSceneManager()->getActiveCamera ();
if ( camera && camera->isInputReceiverEnabled () )
{
useItem( Player + 0 );
}
}
if ((eve.EventType == EET_KEY_INPUT_EVENT && eve.KeyInput.Key == KEY_F1 &&
eve.KeyInput.PressedDown == false) ||
(eve.EventType == EET_MOUSE_INPUT_EVENT && eve.MouseInput.Event == EMIE_RMOUSE_LEFT_UP)
)
{
SetGUIActive ( 2 );
}
if ( eve.EventType == EET_KEY_INPUT_EVENT && eve.KeyInput.PressedDown == false)
{
if ( eve.KeyInput.Key == irr::KEY_ESCAPE )
{
SetGUIActive ( 3 );
}
else
if (eve.KeyInput.Key == KEY_F11)
{
IImage* image = Game->Device->getVideoDriver()->createScreenShot();
if (image)
{
core::vector3df pos;
core::vector3df rot;
ICameraSceneNode * cam = Game->Device->getSceneManager()->getActiveCamera ();
if ( cam )
{
pos = cam->getPosition ();
rot = cam->getRotation ();
}
static const c8 *dName[] = { "null", "software", "burning",
"d3d8", "d3d9", "opengl" };
snprintf(buf, 256, "%s_%ls_%.0f_%.0f_%.0f_%.0f_%.0f_%.0f.jpg",
dName[Game->Device->getVideoDriver()->getDriverType()],
Game->CurrentMapName.c_str(),
pos.X, pos.Y, pos.Z,
rot.X, rot.Y, rot.Z
);
path filename ( buf );
filename.replace ( '/', '_' );
printf ( "screenshot : %s\n", filename.c_str() );
Game->Device->getVideoDriver()->writeImageToFile(image, filename, 100 );
image->drop();
}
}
else
if (eve.KeyInput.Key == KEY_F9)
{
s32 value = EDS_OFF;
Game->debugState = ( Game->debugState + 1 ) & 3;
switch ( Game->debugState )
{
case 1: value = EDS_NORMALS | EDS_MESH_WIRE_OVERLAY | EDS_BBOX_ALL; break;
case 2: value = EDS_NORMALS | EDS_MESH_WIRE_OVERLAY | EDS_SKELETON; break;
}
Set debug map data on/off. (A simple single-toggle variant would be: debugState = debugState == EDS_OFF ? EDS_NORMALS | EDS_MESH_WIRE_OVERLAY | EDS_BBOX_ALL : EDS_OFF;)
if ( ItemParent )
{
list<ISceneNode*>::ConstIterator it = ItemParent->getChildren().begin();
for (; it != ItemParent->getChildren().end(); ++it)
{
(*it)->setDebugDataVisible ( value );
}
}
if ( ShaderParent )
{
list<ISceneNode*>::ConstIterator it = ShaderParent->getChildren().begin();
for (; it != ShaderParent->getChildren().end(); ++it)
{
(*it)->setDebugDataVisible ( value );
}
}
if ( UnresolvedParent )
{
list<ISceneNode*>::ConstIterator it = UnresolvedParent->getChildren().begin();
for (; it != UnresolvedParent->getChildren().end(); ++it)
{
(*it)->setDebugDataVisible ( value );
}
}
if ( FogParent )
{
list<ISceneNode*>::ConstIterator it = FogParent->getChildren().begin();
for (; it != FogParent->getChildren().end(); ++it)
{
(*it)->setDebugDataVisible ( value );
}
}
if ( SkyNode )
{
SkyNode->setDebugDataVisible ( value );
}
}
else
if (eve.KeyInput.Key == KEY_F8)
{
Game->gravityState ^= 1;
Player[0].cam()->setGravity ( getGravity ( Game->gravityState ? "earth" : "none" ) );
printf ( "gravity %s\n", Game->gravityState ? "earth" : "none" );
}
else
if (eve.KeyInput.Key == KEY_F7)
{
Game->flyTroughState ^= 1;
Player[0].cam()->setAnimateTarget ( Game->flyTroughState == 0 );
if ( gui.Collision )
gui.Collision->setChecked ( Game->flyTroughState == 0 );
printf ( "collision %d\n", Game->flyTroughState == 0 );
}
else
if (eve.KeyInput.Key == KEY_F2)
{
Player[0].respawn ();
}
else
if (eve.KeyInput.Key == KEY_F3)
{
if ( MapParent )
{
bool v = !MapParent->isVisible ();
printf ( "static node set visible %d\n",v );
MapParent->setVisible ( v );
if ( gui.Visible_Map )
gui.Visible_Map->setChecked ( v );
}
}
else
if (eve.KeyInput.Key == KEY_F4)
{
if ( ShaderParent )
{
bool v = !ShaderParent->isVisible ();
printf ( "shader node set visible %d\n",v );
ShaderParent->setVisible ( v );
if ( gui.Visible_Shader )
gui.Visible_Shader->setChecked ( v );
}
}
else
if (eve.KeyInput.Key == KEY_F5)
{
if ( FogParent )
{
bool v = !FogParent->isVisible ();
printf ( "fog node set visible %d\n",v );
FogParent->setVisible ( v );
if ( gui.Visible_Fog )
gui.Visible_Fog->setChecked ( v );
}
}
else
if (eve.KeyInput.Key == KEY_F6)
{
if ( UnresolvedParent )
{
bool v = !UnresolvedParent->isVisible ();
printf ( "unresolved node set visible %d\n",v );
UnresolvedParent->setVisible ( v );
if ( gui.Visible_Unresolved )
gui.Visible_Unresolved->setChecked ( v );
}
}
}
if ( eve.EventType == EET_KEY_INPUT_EVENT && eve.KeyInput.Key == KEY_KEY_C )
{
ISceneNodeAnimatorCollisionResponse *anim = Player[0].cam ();
if ( anim && 0 == Game->flyTroughState )
{
if ( false == eve.KeyInput.PressedDown )
{
anim->setEllipsoidRadius ( vector3df(30,45,30) );
anim->setEllipsoidTranslation ( vector3df(0,40,0));
}
else
{
anim->setEllipsoidRadius ( vector3df(30,20,30) );
anim->setEllipsoidTranslation ( vector3df(0,20,0));
}
return true;
}
}
return false;
}
useItem
void CQuake3EventHandler::useItem( Q3Player * player)
{
ISceneManager* smgr = Game->Device->getSceneManager();
ICameraSceneNode* camera = smgr->getActiveCamera();
if (!camera)
return;
SParticleImpact imp;
imp.when = 0;
vector3df start = camera->getPosition();
if ( player->WeaponNode )
{
start.X += 0.f;
start.Y += 0.f;
start.Z += 0.f;
}
vector3df end = (camera->getTarget() - start);
end.normalize();
start += end*20.0f;
end = start + (end * camera->getFarValue());
triangle3df triangle;
line3d<f32> line(start, end);
scene::ISceneNode* hitNode;
if (smgr->getSceneCollisionManager()->getCollisionPoint(
line, Meta, end, triangle,hitNode))
{
vector3df out = triangle.getNormal();
out.setLength(0.03f);
imp.when = 1;
imp.outVector = out;
imp.pos = end;
player->setAnim ( "pow" );
player->Anim[1].next += player->Anim[1].delta;
}
else
{
vector3df start = camera->getPosition();
if ( player->WeaponNode )
{
}
vector3df end = (camera->getTarget() - start);
end.normalize();
start += end*20.0f;
end = start + (end * camera->getFarValue());
}
ISceneNode* node = 0;
node = smgr->addBillboardSceneNode( BulletParent,dimension2d<f32>(10,10), start);
node->setMaterialFlag(EMF_LIGHTING, false);
node->setMaterialTexture(0, Game->Device->getVideoDriver()->getTexture("fireball.bmp"));
node->setMaterialFlag(video::EMF_ZWRITE_ENABLE, false);
node->setMaterialType(EMT_TRANSPARENT_ADD_COLOR);
f32 length = (f32)(end - start).getLength();
const f32 speed = 5.8f;
u32 time = (u32)(length / speed);
ISceneNodeAnimator* anim = 0;
anim = smgr->createFlyStraightAnimator(start, end, time);
node->addAnimator(anim);
anim->drop();
snprintf ( buf, 64, "bullet: %s on %.1f,%1.f,%1.f",
imp.when ? "hit" : "nohit", end.X, end.Y, end.Z );
node->setName ( buf );
anim = smgr->createDeleteAnimator(time);
node->addAnimator(anim);
anim->drop();
if (imp.when)
{
imp.when = Game->Device->getTimer()->getTime() +
(time + (s32) ( ( 1.f + Noiser::get() ) * 250.f ));
Impacts.push_back(imp);
}
}
void CQuake3EventHandler::createParticleImpacts( u32 now )
{
ISceneManager* sm = Game->Device->getSceneManager();
struct smokeLayer
{
const c8 * texture;
f32 scale;
f32 minparticleSize;
f32 maxparticleSize;
f32 boxSize;
u32 minParticle;
u32 maxParticle;
u32 fadeout;
u32 lifetime;
};
smokeLayer smoke[] =
{
{ "smoke2.jpg", 0.4f, 1.5f, 18.f, 20.f, 20, 50, 2000, 10000 },
{ "smoke3.jpg", 0.2f, 1.2f, 15.f, 20.f, 10, 30, 1000, 12000 }
};
u32 i;
u32 g;
s32 factor = 1;
for ( g = 0; g != 2; ++g )
{
smoke[g].minParticle *= factor;
smoke[g].maxParticle *= factor;
smoke[g].lifetime *= factor;
smoke[g].boxSize *= Noiser::get() * 0.5f;
}
for ( i=0; i < Impacts.size(); ++i)
{
if (now < Impacts[i].when)
continue;
IParticleSystemSceneNode* pas = 0;
for ( g = 0; g != 2; ++g )
{
pas = sm->addParticleSystemSceneNode(false, BulletParent, -1, Impacts[i].pos);
snprintf ( buf, 64, "bullet impact smoke at %.1f,%.1f,%1.f",
Impacts[i].pos.X,Impacts[i].pos.Y,Impacts[i].pos.Z);
pas->setName ( buf );
vector3df direction = Impacts[i].outVector;
direction *= smoke[g].scale;
IParticleEmitter* em = pas->createBoxEmitter(
aabbox3d<f32>(-4.f,0.f,-4.f,20.f,smoke[g].minparticleSize,20.f),
direction,smoke[g].minParticle, smoke[g].maxParticle,
video::SColor(0,0,0,0),video::SColor(0,128,128,128),
250,4000, 60);
em->setMinStartSize (dimension2d<f32>( smoke[g].minparticleSize, smoke[g].minparticleSize));
em->setMaxStartSize (dimension2d<f32>( smoke[g].maxparticleSize, smoke[g].maxparticleSize));
pas->setEmitter(em);
em->drop();
IParticleAffector* paf = pas->createFadeOutParticleAffector(
video::SColor ( 0, 0, 0, 0 ), smoke[g].fadeout);
pas->addAffector(paf);
paf->drop();
ISceneNodeAnimator* anim = sm->createDeleteAnimator( smoke[g].lifetime);
pas->addAnimator(anim);
anim->drop();
pas->setMaterialFlag(video::EMF_LIGHTING, false);
pas->setMaterialFlag(video::EMF_ZWRITE_ENABLE, false);
pas->setMaterialType(video::EMT_TRANSPARENT_ADD_COLOR );
pas->setMaterialTexture(0, Game->Device->getVideoDriver()->getTexture( smoke[g].texture ));
}
#ifdef USE_IRRKLANG
if (irrKlang)
{
audio::ISound* sound = irrKlang->play3D(impactSound, Impacts[i].pos, false, false, true);
// adjust the min distance a bit to make the sound of an impact louder
if (sound) { sound->setMinDistance(400); sound->drop(); }
}
#endif
Impacts.erase(i);
i--;
}
}
render
void CQuake3EventHandler::Render()
{
IVideoDriver * driver = Game->Device->getVideoDriver();
if ( 0 == driver )
return;
const bool anaglyph=false;
if (anaglyph)
{
scene::ICameraSceneNode* cameraOld = Game->Device->getSceneManager()->getActiveCamera();
driver->beginScene(true, true, SColor(0,0,0,0));
driver->getOverrideMaterial().Material.ColorMask = ECP_NONE;
driver->getOverrideMaterial().EnableFlags = EMF_COLOR_MASK;
driver->getOverrideMaterial().EnablePasses = ESNRP_SKY_BOX +
ESNRP_SOLID +
ESNRP_TRANSPARENT +
ESNRP_TRANSPARENT_EFFECT +
ESNRP_SHADOW;
Game->Device->getSceneManager()->drawAll();
driver->clearZBuffer();
const vector3df oldPosition = cameraOld->getPosition();
const vector3df oldTarget = cameraOld->getTarget();
const matrix4 startMatrix = cameraOld->getAbsoluteTransformation();
const vector3df focusPoint = (oldTarget -
cameraOld->getAbsolutePosition()).setLength(10000) +
cameraOld->getAbsolutePosition() ;
scene::ICameraSceneNode* camera = cameraOld;
vector3df pos;
matrix4 move;
move.setTranslation( vector3df(-1.5f,0.0f,0.0f) );
pos=(startMatrix*move).getTranslation();
driver->getOverrideMaterial().Material.ColorMask = ECP_RED;
driver->getOverrideMaterial().EnableFlags = EMF_COLOR_MASK;
driver->getOverrideMaterial().EnablePasses =
ESNRP_SKY_BOX|ESNRP_SOLID|ESNRP_TRANSPARENT|
ESNRP_TRANSPARENT_EFFECT|ESNRP_SHADOW;
camera->setPosition(pos);
camera->setTarget(focusPoint);
Game->Device->getSceneManager()->drawAll();
driver->clearZBuffer();
move.setTranslation( vector3df(1.5f,0.0f,0.0f) );
pos=(startMatrix*move).getTranslation();
driver->getOverrideMaterial().Material.ColorMask = ECP_GREEN + ECP_BLUE;
driver->getOverrideMaterial().EnableFlags = EMF_COLOR_MASK;
driver->getOverrideMaterial().EnablePasses =
ESNRP_SKY_BOX|ESNRP_SOLID|ESNRP_TRANSPARENT|
ESNRP_TRANSPARENT_EFFECT|ESNRP_SHADOW;
camera->setPosition(pos);
camera->setTarget(focusPoint);
Game->Device->getSceneManager()->drawAll();
driver->getOverrideMaterial().Material.ColorMask=ECP_ALL;
driver->getOverrideMaterial().EnableFlags=0;
driver->getOverrideMaterial().EnablePasses=0;
if (camera != cameraOld)
{
Game->Device->getSceneManager()->setActiveCamera(cameraOld);
camera->remove();
}
else
{
camera->setPosition(oldPosition);
camera->setTarget(oldTarget);
}
}
else
{
driver->beginScene(true, true, SColor(0,0,0,0));
Game->Device->getSceneManager()->drawAll();
}
Game->Device->getGUIEnvironment()->drawAll();
driver->endScene();
}
update the generic scene node
void CQuake3EventHandler::Animate()
{
u32 now = Game->Device->getTimer()->getTime();
Q3Player * player = Player + 0;
checkTimeFire ( player->Anim, 4, now );
if ( player->Anim[0].flags & FIRED )
{
ISceneManager *smgr = Game->Device->getSceneManager ();
wchar_t msg[128];
IVideoDriver * driver = Game->Device->getVideoDriver();
IAttributes * attr = smgr->getParameters();
#ifdef _IRR_SCENEMANAGER_DEBUG
swprintf ( msg, 128,
L"Q3 %s [%ls], FPS:%03d Tri:%.03fm Cull %d/%d nodes (%d,%d,%d)",
Game->CurrentMapName.c_str(),
driver->getName(),
driver->getFPS (),
(f32) driver->getPrimitiveCountDrawn( 0 ) * ( 1.f / 1000000.f ),
attr->getAttributeAsInt ( "culled" ),
attr->getAttributeAsInt ( "calls" ),
attr->getAttributeAsInt ( "drawn_solid" ),
attr->getAttributeAsInt ( "drawn_transparent" ),
attr->getAttributeAsInt ( "drawn_transparent_effect" )
);
#else
swprintf ( msg, 128,
L"Q3 %s [%ls], FPS:%03d Tri:%.03fm",
Game->CurrentMapName.c_str(),
driver->getName(),
driver->getFPS (),
(f32) driver->getPrimitiveCountDrawn( 0 ) * ( 1.f / 1000000.f )
);
#endif
Game->Device->setWindowCaption( msg );
swprintf ( msg, 128,
L"%03d fps, F1 GUI on/off, F2 respawn, F3-F6 toggle Nodes, F7 Collision on/off"
L", F8 Gravity on/off, Right Mouse Toggle GUI",
Game->Device->getVideoDriver()->getFPS ()
);
if ( gui.StatusLine )
gui.StatusLine->setText ( msg );
player->Anim[0].flags &= ~FIRED;
}
if ( player->Anim[1].flags & FIRED )
{
if ( strcmp ( player->animation, "idle" ) )
player->setAnim ( "idle" );
player->Anim[1].flags &= ~FIRED;
}
createParticleImpacts ( now );
}
The main game states
void runGame ( GameData *game )
{
if ( game->retVal >= 3 )
return;
game->Device = (*game->createExDevice) ( game->deviceParam );
if ( 0 == game->Device)
{
game->retVal = 0;
return;
}
CQuake3EventHandler *eventHandler = new CQuake3EventHandler( game );
game->load ( "explorer.cfg" );
for ( u32 i = 0; i < game->CurrentArchiveList.size(); ++i )
{
eventHandler->AddArchive ( game->CurrentArchiveList[i] );
}
if ( game->CurrentMapName.size () )
{
eventHandler->LoadMap ( game->CurrentMapName, 1 );
if ( 0 == game->loadParam.loadSkyShader )
eventHandler->AddSky ( 1, "skydome2" );
eventHandler->CreatePlayers ();
eventHandler->CreateGUI ();
eventHandler->SetGUIActive ( 0 );
if ( game->retVal == 2 )
{
eventHandler->GetPlayer( 0 )->setpos ( game->PlayerPosition, game->PlayerRotation );
}
}
else
{
eventHandler->AddSky ( 1, "skydome2" );
eventHandler->CreatePlayers ();
eventHandler->CreateGUI ();
eventHandler->SetGUIActive ( 1 );
background_music ( "IrrlichtTheme.ogg" );
}
game->retVal = 3;
while( game->Device->run() )
{
eventHandler->Animate ();
eventHandler->Render ();
game->Device->yield();
}
game->Device->setGammaRamp ( 1.f, 1.f, 1.f, 0.f, 0.f );
delete eventHandler;
}
#if defined (_IRR_WINDOWS_) && 0
#pragma comment(linker, "/subsystem:windows /ENTRY:mainCRTStartup")
#endif
The main routine, doing all setup
int IRRCALLCONV main(int argc, char* argv[])
{
path prgname(argv[0]);
GameData game ( deletePathFromPath ( prgname, 1 ) );
const c8 * dllName = argc > 1 ? argv[1] : "irrlicht.dll";
game.createExDevice = load_createDeviceEx ( dllName );
if ( 0 == game.createExDevice )
{
game.retVal = 3;
printf ( "Could not load %s.\n", dllName );
return game.retVal;
}
game.retVal = 1;
do
{
if ( game.retVal == 0 )
{
game.setDefault ();
game.deviceParam.DriverType=driverChoiceConsole();
if (game.deviceParam.DriverType==video::EDT_COUNT)
game.retVal = 3;
}
runGame ( &game );
} while ( game.retVal < 3 );
return game.retVal;
}
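To see the flow more clearly, here is a minimal stand-alone sketch (an illustration only, not part of the example; the value meanings are read off the code above) of how retVal drives the outer do/while restart loop:
#include <cstdio>

// Hypothetical illustration of the retVal driven restart loop used above:
// retVal == 0 : device creation failed, ask the user for new settings
// retVal == 1 : first start, run with the loaded or default settings
// retVal == 2 : the "set" button was pressed, restart with the new video mode
// retVal >= 3 : leave the loop and quit
int main()
{
	int retVal = 1;
	do
	{
		if (retVal == 0)
		{
			std::printf("asking for new settings\n");
			retVal = 1; // pretend the user picked a valid driver
		}
		std::printf("running game, retVal=%d\n", retVal);
		retVal = 3; // a real run sets 0, 2 or 3 depending on how it ended
	} while (retVal < 3);
	return retVal;
}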
Copyright 2006-2011 Burningwater, Thomas Alten
#include <irrlicht.h>
#include "driverChoice.h"
#include "exampleHelper.h"
#include "q3factory.h"
#include "sound.h"
GameData is used to hold the data which is needed to drive the game
struct GameData
{
GameData ( const path &startupDir) :
retVal(0), StartupDir(startupDir), createExDevice(0), Device(0)
{
setDefault ();
}
void setDefault ();
s32 save ( const path &filename );
s32 load ( const path &filename );
s32 debugState;
s32 gravityState;
s32 flyTroughState;
s32 wireFrame;
s32 guiActive;
s32 guiInputActive;
f32 GammaValue;
s32 retVal;
s32 sound;
path StartupDir;
stringw CurrentMapName;
array<path> CurrentArchiveList;
vector3df PlayerPosition;
vector3df PlayerRotation;
tQ3EntityList Variable;
Q3LevelLoadParameter loadParam;
SIrrlichtCreationParameters deviceParam;
funcptr_createDeviceEx createExDevice;
IrrlichtDevice *Device;
};
set default settings
void GameData::setDefault ()
{
debugState = EDS_OFF;
gravityState = 1;
flyTroughState = 0;
wireFrame = 0;
guiActive = 1;
guiInputActive = 0;
GammaValue = 1.f;
#if defined ( _IRR_WINDOWS_ )
deviceParam.DriverType = EDT_DIRECT3D9;
#else
deviceParam.DriverType = EDT_OPENGL;
#endif
deviceParam.WindowSize.Width = 800;
deviceParam.WindowSize.Height = 600;
deviceParam.Fullscreen = false;
deviceParam.Bits = 24;
deviceParam.ZBufferBits = 16;
deviceParam.Vsync = false;
deviceParam.AntiAlias = false;
loadParam.defaultLightMapMaterial = EMT_LIGHTMAP;
loadParam.defaultModulate = EMFN_MODULATE_1X;
loadParam.defaultFilter = EMF_ANISOTROPIC_FILTER;
loadParam.verbose = 2;
loadParam.mergeShaderBuffer = 1;
loadParam.cleanUnResolvedMeshes = 1;
loadParam.loadAllShaders = 1;
loadParam.loadSkyShader = 0;
loadParam.alpharef = 1;
sound = 0;
CurrentMapName = "";
CurrentArchiveList.clear ();
const io::path mediaPath = getExampleMediaPath();
CurrentArchiveList.push_back ( StartupDir + mediaPath );
CurrentArchiveList.push_back("/q/baseq3/");
CurrentArchiveList.push_back(StartupDir + mediaPath + "map-20kdm2.pk3");
}
Load the current game State from a typical quake3 cfg file
s32 GameData::load ( const path &filename )
{
if (!Device)
return 0;
IQ3LevelMesh* mesh = (IQ3LevelMesh*) Device->getSceneManager()->getMesh ( filename );
if (!mesh)
return 0;
tQ3EntityList &entityList = mesh->getEntityList ();
stringc s;
u32 pos;
for ( u32 e = 0; e != entityList.size (); ++e )
{
for ( u32 g = 0; g != entityList[e].getGroupSize (); ++g )
{
const SVarGroup *group = entityList[e].getGroup ( g );
for ( u32 index = 0; index < group->Variable.size (); ++index )
{
const SVariable &v = group->Variable[index];
pos = 0;
if ( v.name == "playerposition" )
{
PlayerPosition = getAsVector3df ( v.content, pos );
}
else
if ( v.name == "playerrotation" )
{
PlayerRotation = getAsVector3df ( v.content, pos );
}
}
}
}
return 1;
}
Store the current game State in a quake3 configuration file
s32 GameData::save ( const path &filename )
{
return 0;
if (!Device)
return 0;
c8 buf[128];
u32 i;
CurrentArchiveList.clear();
IFileSystem *fs = Device->getFileSystem();
for ( i = 0; i != fs->getFileArchiveCount(); ++i )
{
CurrentArchiveList.push_back ( fs->getFileArchive(i)->getFileList()->getPath() );
}
ICameraSceneNode * camera = Device->getSceneManager()->getActiveCamera ();
if ( camera )
{
PlayerPosition = camera->getPosition ();
PlayerRotation = camera->getRotation ();
}
IWriteFile *file = fs->createAndWriteFile ( filename );
if (!file)
return 0;
snprintf_irr ( buf, 128, "playerposition %.f %.f %.f\nplayerrotation %.f %.f %.f\n",
PlayerPosition.X, PlayerPosition.Z, PlayerPosition.Y,
PlayerRotation.X, PlayerRotation.Z, PlayerRotation.Y);
file->write ( buf, (s32) strlen ( buf ) );
for ( i = 0; i != fs->getFileArchiveCount(); ++i )
{
snprintf_irr ( buf, 128, "archive %s\n",stringc ( fs->getFileArchive(i)->getFileList()->getPath() ).c_str () );
file->write ( buf, (s32) strlen ( buf ) );
}
file->drop ();
return 1;
}
Representing a player
struct Q3Player : public IAnimationEndCallBack
{
Q3Player ()
: Device(0), MapParent(0), Mesh(0), WeaponNode(0), StartPositionCurrent(0)
{
animation[0] = 0;
memset(Anim, 0, sizeof(TimeFire)*4);
}
virtual void OnAnimationEnd(IAnimatedMeshSceneNode* node);
void create ( IrrlichtDevice *device,
IQ3LevelMesh* mesh,
ISceneNode *mapNode,
IMetaTriangleSelector *meta
);
void shutdown ();
void setAnim ( const c8 *name );
void respawn ();
void setpos ( const vector3df &pos, const vector3df& rotation );
ISceneNodeAnimatorCollisionResponse * cam() { return camCollisionResponse ( Device ); }
IrrlichtDevice *Device;
ISceneNode* MapParent;
IQ3LevelMesh* Mesh;
IAnimatedMeshSceneNode* WeaponNode;
s32 StartPositionCurrent;
TimeFire Anim[4];
c8 animation[64];
c8 buf[64];
};
End player
void Q3Player::shutdown ()
{
setAnim ( 0 );
dropElement (WeaponNode);
if ( Device )
{
ICameraSceneNode* camera = Device->getSceneManager()->getActiveCamera();
dropElement ( camera );
Device = 0;
}
MapParent = 0;
Mesh = 0;
}
create a new player
void Q3Player::create ( IrrlichtDevice *device, IQ3LevelMesh* mesh, ISceneNode *mapNode, IMetaTriangleSelector *meta )
{
setTimeFire ( Anim + 0, 200, FIRED );
setTimeFire ( Anim + 1, 5000 );
if (!device)
return;
Device = device;
Mesh = mesh;
MapParent = mapNode;
ISceneManager *smgr = device->getSceneManager ();
IVideoDriver * driver = device->getVideoDriver();
ICameraSceneNode* camera = 0;
core::array<SKeyMap> keyMap;
keyMap.set_used(12);
keyMap[0].Action = EKA_MOVE_FORWARD;
keyMap[0].KeyCode = KEY_UP;
keyMap[1].Action = EKA_MOVE_FORWARD;
keyMap[1].KeyCode = KEY_KEY_W;
keyMap[2].Action = EKA_MOVE_BACKWARD;
keyMap[2].KeyCode = KEY_DOWN;
keyMap[3].Action = EKA_MOVE_BACKWARD;
keyMap[3].KeyCode = KEY_KEY_S;
keyMap[4].Action = EKA_STRAFE_LEFT;
keyMap[4].KeyCode = KEY_LEFT;
keyMap[5].Action = EKA_STRAFE_LEFT;
keyMap[5].KeyCode = KEY_KEY_A;
keyMap[6].Action = EKA_STRAFE_RIGHT;
keyMap[6].KeyCode = KEY_RIGHT;
keyMap[7].Action = EKA_STRAFE_RIGHT;
keyMap[7].KeyCode = KEY_KEY_D;
keyMap[8].Action = EKA_JUMP_UP;
keyMap[8].KeyCode = KEY_KEY_J;
keyMap[9].Action = EKA_CROUCH;
keyMap[9].KeyCode = KEY_KEY_C;
keyMap[10].Action = EKA_ROTATE_LEFT;
keyMap[10].KeyCode = KEY_KEY_Q;
keyMap[11].Action = EKA_ROTATE_RIGHT;
keyMap[11].KeyCode = KEY_KEY_E;
camera = smgr->addCameraSceneNodeFPS(0, 100.0f, 0.6f, -1, keyMap.pointer(), keyMap.size(), false, 600.f);
camera->setName ( "First Person Camera" );
camera->setFarValue( 20000.f );
IAnimatedMeshMD2* weaponMesh = (IAnimatedMeshMD2*) smgr->getMesh("gun.md2");
if ( 0 == weaponMesh )
return;
if ( weaponMesh->getMeshType() == EAMT_MD2 )
{
s32 count = weaponMesh->getAnimationCount();
for ( s32 i = 0; i != count; ++i )
{
snprintf_irr ( buf, 64, "Animation: %s", weaponMesh->getAnimationName(i) );
device->getLogger()->log(buf, ELL_INFORMATION);
}
}
WeaponNode = smgr->addAnimatedMeshSceneNode(
weaponMesh,
smgr->getActiveCamera(),
10,
vector3df( 0, 0, 0),
vector3df(-90,-90,90)
);
WeaponNode->setMaterialFlag(EMF_LIGHTING, false);
WeaponNode->setMaterialTexture(0, driver->getTexture( "gun.jpg"));
WeaponNode->setLoopMode ( false );
WeaponNode->setName ( "tommi the gun man" );
ISceneNodeAnimator* anim =
smgr->createCollisionResponseAnimator( meta, camera,
vector3df(30,45,30),
getGravity ( "earth" ),
vector3df(0,40,0),
0.0005f
);
camera->addAnimator( anim );
anim->drop();
if ( meta )
{
meta->drop ();
}
respawn ();
setAnim ( "idle" );
}
We need a good starting position in the level, so we ask the Quake3 loader for all entities with class_name "info_player_deathmatch"
void Q3Player::respawn ()
{
if (!Device)
return;
ICameraSceneNode* camera = Device->getSceneManager()->getActiveCamera();
Device->getLogger()->log( "respawn" );
if (StartPositionCurrent >= Q3StartPosition(Mesh, camera,
StartPositionCurrent, cam()->getEllipsoidTranslation()))
StartPositionCurrent = 0;
else
++StartPositionCurrent;
}
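The Q3StartPosition helper used above (presumably declared in q3factory.h) does that entity lookup for us. As a rough idea of what such a lookup involves, the hypothetical helper below walks the entity list with the same API used in GameData::load above, relying on the listing's existing includes and using-directives; the variable names "classname" and "origin" are assumptions based on common Quake3 entity conventions, and the real Q3StartPosition may work differently (it also handles angles and the collision ellipsoid offset).
// Hypothetical sketch of a spawn point lookup (not the real Q3StartPosition):
// it reuses only the entity list API already shown in GameData::load. The
// variable names "classname" and "origin" are assumptions based on the usual
// Quake3 entity conventions.
static bool findSpawnPoint(IQ3LevelMesh* mesh, ICameraSceneNode* camera, s32 wanted)
{
	tQ3EntityList &entityList = mesh->getEntityList();
	s32 found = 0;
	for (u32 e = 0; e != entityList.size(); ++e)
	{
		for (u32 g = 0; g != entityList[e].getGroupSize(); ++g)
		{
			const SVarGroup *group = entityList[e].getGroup(g);
			bool isSpawn = false;
			vector3df origin;
			for (u32 i = 0; i < group->Variable.size(); ++i)
			{
				const SVariable &v = group->Variable[i];
				u32 pos = 0;
				if (v.name == "classname" && v.content == "info_player_deathmatch")
					isSpawn = true;
				else if (v.name == "origin")
					origin = getAsVector3df(v.content, pos);
			}
			if (isSpawn && found++ == wanted)
			{
				camera->setPosition(origin); // a real version also applies the ellipsoid offset
				return true;
			}
		}
	}
	return false;
}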
set Player position from saved coordinates
void Q3Player::setpos ( const vector3df &pos, const vector3df &rotation )
{
if (!Device)
return;
Device->getLogger()->log( "setpos" );
ICameraSceneNode* camera = Device->getSceneManager()->getActiveCamera();
if ( camera )
{
camera->setPosition ( pos );
camera->setRotation ( rotation );
camera->OnAnimate ( 0 );
}
}
set the Animation of the player and weapon
void Q3Player::setAnim ( const c8 *name )
{
if ( name )
{
snprintf_irr ( animation, 64, "%s", name );
if ( WeaponNode )
{
WeaponNode->setAnimationEndCallback ( this );
WeaponNode->setMD2Animation ( animation );
}
}
else
{
animation[0] = 0;
if ( WeaponNode )
{
WeaponNode->setAnimationEndCallback ( 0 );
}
}
}
void Q3Player::OnAnimationEnd(IAnimatedMeshSceneNode* node)
{
setAnim ( 0 );
}
GUI Elements
struct GUI
{
GUI ()
{
memset ( this, 0, sizeof ( *this ) );
}
void drop()
{
dropElement ( Window );
dropElement ( Logo );
}
IGUIComboBox* VideoDriver;
IGUIComboBox* VideoMode;
IGUICheckBox* FullScreen;
IGUICheckBox* Bit32;
IGUIScrollBar* MultiSample;
IGUIButton* SetVideoMode;
IGUIScrollBar* Tesselation;
IGUIScrollBar* Gamma;
IGUICheckBox* Collision;
IGUICheckBox* Visible_Map;
IGUICheckBox* Visible_Shader;
IGUICheckBox* Visible_Fog;
IGUICheckBox* Visible_Unresolved;
IGUICheckBox* Visible_Skydome;
IGUIButton* Respawn;
IGUITable* ArchiveList;
IGUIButton* ArchiveAdd;
IGUIButton* ArchiveRemove;
IGUIFileOpenDialog* ArchiveFileOpen;
IGUIButton* ArchiveUp;
IGUIButton* ArchiveDown;
IGUIListBox* MapList;
IGUITreeView* SceneTree;
IGUIStaticText* StatusLine;
IGUIImage* Logo;
IGUIWindow* Window;
};
CQuake3EventHandler controls the game
class CQuake3EventHandler : public IEventReceiver
{
public:
CQuake3EventHandler( GameData *gameData );
virtual ~CQuake3EventHandler ();
void Animate();
void Render();
void AddArchive ( const path& archiveName );
void LoadMap ( const stringw& mapName, s32 collision );
void CreatePlayers();
void AddSky( u32 dome, const c8 *texture );
Q3Player *GetPlayer ( u32 index ) { return &Player[index]; }
void CreateGUI();
void SetGUIActive( s32 command);
bool OnEvent(const SEvent& eve);
private:
GameData *Game;
IQ3LevelMesh* Mesh;
ISceneNode* MapParent;
ISceneNode* ShaderParent;
ISceneNode* ItemParent;
ISceneNode* UnresolvedParent;
ISceneNode* BulletParent;
ISceneNode* FogParent;
ISceneNode * SkyNode;
IMetaTriangleSelector *Meta;
c8 buf[256];
Q3Player Player[2];
struct SParticleImpact
{
u32 when;
vector3df pos;
vector3df outVector;
};
array<SParticleImpact> Impacts;
void useItem( Q3Player * player);
void createParticleImpacts( u32 now );
void createTextures ();
void addSceneTreeItem( ISceneNode * parent, IGUITreeViewNode* nodeParent);
GUI gui;
void dropMap ();
};
Constructor
CQuake3EventHandler::CQuake3EventHandler( GameData *game )
: Game(game), Mesh(0), MapParent(0), ShaderParent(0), ItemParent(0), UnresolvedParent(0),
BulletParent(0), FogParent(0), SkyNode(0), Meta(0)
{
buf[0]=0;
if ( Game->deviceParam.Bits == 16 )
{
game->Device->getVideoDriver()->setTextureCreationFlag(ETCF_ALWAYS_16_BIT, true);
}
game->Device->getSceneManager()->getParameters()->setAttribute(scene::ALLOW_ZWRITE_ON_TRANSPARENT, true);
createTextures ();
sound_init ( game->Device );
Game->Device->setEventReceiver ( this );
}
CQuake3EventHandler::~CQuake3EventHandler ()
{
Player[0].shutdown ();
sound_shutdown ();
Game->save( "explorer.cfg" );
Game->Device->drop();
}
void CQuake3EventHandler::createTextures()
{
IVideoDriver * driver = Game->Device->getVideoDriver();
dimension2du dim(64, 64);
video::IImage* image;
u32 i;
u32 x;
u32 y;
u32 * data;
for ( i = 0; i != 8; ++i )
{
image = driver->createImage ( video::ECF_A8R8G8B8, dim);
data = (u32*) image->getData ();
for ( y = 0; y != dim.Height; ++y )
{
for ( x = 0; x != dim.Width; ++x )
{
data [x] = 0xFFFFFFFF;
}
data = (u32*) ( (u8*) data + image->getPitch() );
}
snprintf_irr ( buf, 64, "smoke_%02d", i );
driver->addTexture( buf, image );
image->drop ();
}
for ( i = 0; i != 1; ++i )
{
image = driver->createImage ( video::ECF_A8R8G8B8, dim);
data = (u32*) image->getData ();
for ( y = 0; y != dim.Height; ++y )
{
for ( x = 0; x != dim.Width; ++x )
{
data [x] = 0xFFFFFFFF;
}
data = (u32*) ( (u8*) data + image->getPitch() );
}
snprintf_irr ( buf, 64, "fog_%02d", i );
driver->addTexture( buf, image );
image->drop ();
}
}
create the GUI
void CQuake3EventHandler::CreateGUI()
{
IGUIEnvironment *env = Game->Device->getGUIEnvironment();
IVideoDriver * driver = Game->Device->getVideoDriver();
gui.drop();
IGUIFont* font = env->getFont("fontlucida.png");
if (font)
env->getSkin()->setFont(font);
env->getSkin()->setColor ( EGDC_BUTTON_TEXT, video::SColor(240,0xAA,0xAA,0xAA) );
env->getSkin()->setColor ( EGDC_3D_HIGH_LIGHT, video::SColor(240,0x22,0x22,0x22) );
env->getSkin()->setColor ( EGDC_3D_FACE, video::SColor(240,0x44,0x44,0x44) );
env->getSkin()->setColor ( EGDC_EDITABLE, video::SColor(240,0x44,0x44,0x44) );
env->getSkin()->setColor ( EGDC_FOCUSED_EDITABLE, video::SColor(240,0x54,0x54,0x54) );
env->getSkin()->setColor ( EGDC_WINDOW, video::SColor(240,0x66,0x66,0x66) );
dimension2d<u32> dim ( 800, 600 );
dimension2d<u32> vdim ( Game->Device->getVideoDriver()->getScreenSize() );
if ( vdim.Height >= dim.Height && vdim.Width >= dim.Width )
{
}
else
{
}
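// Both branches above are left empty in the example: the dialog below is laid out
// for the fixed 800x600 size in 'dim', regardless of the actual screen size in 'vdim'.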
gui.Window = env->addWindow ( rect<s32> ( 0, 0, dim.Width, dim.Height ), false, L"Quake3 Explorer" );
gui.Window->setToolTipText ( L"Quake3Explorer. Loads and shows various BSP File Formats and Shaders." );
gui.Window->getCloseButton()->setToolTipText ( L"Quit Quake3 Explorer" );
gui.StatusLine = env->addStaticText( 0, rect<s32>( 5,dim.Height - 30,dim.Width - 5,dim.Height - 10),
false, false, gui.Window, -1, true
);
env->addStaticText ( L"VideoDriver:", rect<s32>( dim.Width - 400, 24, dim.Width - 310, 40 ),false, false, gui.Window, -1, false );
gui.VideoDriver = env->addComboBox(rect<s32>( dim.Width - 300, 24, dim.Width - 10, 40 ),gui.Window);
gui.VideoDriver->addItem(L"Direct3D 9.0c", EDT_DIRECT3D9 );
gui.VideoDriver->addItem(L"OpenGL 1.5", EDT_OPENGL);
gui.VideoDriver->addItem(L"Software Renderer", EDT_SOFTWARE);
gui.VideoDriver->addItem(L"Burning's Video (TM) Thomas Alten", EDT_BURNINGSVIDEO);
gui.VideoDriver->setSelected ( gui.VideoDriver->getIndexForItemData ( Game->deviceParam.DriverType ) );
gui.VideoDriver->setToolTipText ( L"Use a VideoDriver" );
env->addStaticText ( L"VideoMode:", rect<s32>( dim.Width - 400, 44, dim.Width - 310, 60 ),false, false, gui.Window, -1, false );
gui.VideoMode = env->addComboBox(rect<s32>( dim.Width - 300, 44, dim.Width - 10, 60 ),gui.Window);
gui.VideoMode->setToolTipText ( L"Supported Screenmodes" );
IVideoModeList *modeList = Game->Device->getVideoModeList();
if ( modeList )
{
s32 i;
for ( i = 0; i != modeList->getVideoModeCount (); ++i )
{
u16 d = modeList->getVideoModeDepth ( i );
if ( d < 16 )
continue;
u16 w = modeList->getVideoModeResolution ( i ).Width;
u16 h = modeList->getVideoModeResolution ( i ).Height;
u32 val = w << 16 | h;
if ( gui.VideoMode->getIndexForItemData ( val ) >= 0 )
continue;
f32 aspect = (f32) w / (f32) h;
const c8 *a = "";
if ( core::equals ( aspect, 1.3333333333f ) ) a = "4:3";
else if ( core::equals ( aspect, 1.6666666f ) ) a = "15:9 widescreen";
else if ( core::equals ( aspect, 1.7777777f ) ) a = "16:9 widescreen";
else if ( core::equals ( aspect, 1.6f ) ) a = "16:10 widescreen";
else if ( core::equals ( aspect, 2.133333f ) ) a = "20:9 widescreen";
snprintf_irr ( buf, sizeof ( buf ), "%d x %d, %s",w, h, a );
gui.VideoMode->addItem ( stringw ( buf ).c_str(), val );
}
}
gui.VideoMode->setSelected ( gui.VideoMode->getIndexForItemData (
Game->deviceParam.WindowSize.Width << 16 |
Game->deviceParam.WindowSize.Height ) );
gui.FullScreen = env->addCheckBox ( Game->deviceParam.Fullscreen, rect<s32>( dim.Width - 400, 64, dim.Width - 300, 80 ), gui.Window,-1, L"Fullscreen" );
gui.FullScreen->setToolTipText ( L"Set Fullscreen or Window Mode" );
gui.Bit32 = env->addCheckBox ( Game->deviceParam.Bits == 32, rect<s32>( dim.Width - 300, 64, dim.Width - 240, 80 ), gui.Window,-1, L"32Bit" );
gui.Bit32->setToolTipText ( L"Use 16 or 32 Bit" );
env->addStaticText ( L"MultiSample:", rect<s32>( dim.Width - 235, 64, dim.Width - 150, 80 ),false, false, gui.Window, -1, false );
gui.MultiSample = env->addScrollBar( true, rect<s32>( dim.Width - 150, 64, dim.Width - 70, 80 ), gui.Window,-1 );
gui.MultiSample->setMin ( 0 );
gui.MultiSample->setMax ( 8 );
gui.MultiSample->setSmallStep ( 1 );
gui.MultiSample->setLargeStep ( 1 );
gui.MultiSample->setPos ( Game->deviceParam.AntiAlias );
gui.MultiSample->setToolTipText ( L"Set the MultiSample (disable, 1x, 2x, 4x, 8x )" );
gui.SetVideoMode = env->addButton (rect<s32>( dim.Width - 60, 64, dim.Width - 10, 80 ), gui.Window, -1,L"set" );
gui.SetVideoMode->setToolTipText ( L"Set Video Mode with current values" );
env->addStaticText ( L"Gamma:", rect<s32>( dim.Width - 400, 104, dim.Width - 310, 120 ),false, false, gui.Window, -1, false );
gui.Gamma = env->addScrollBar( true, rect<s32>( dim.Width - 300, 104, dim.Width - 10, 120 ), gui.Window,-1 );
gui.Gamma->setMin ( 50 );
gui.Gamma->setMax ( 350 );
gui.Gamma->setSmallStep ( 1 );
gui.Gamma->setLargeStep ( 10 );
gui.Gamma->setPos ( core::floor32 ( Game->GammaValue * 100.f ) );
gui.Gamma->setToolTipText ( L"Adjust Gamma Ramp ( 0.5 - 3.5)" );
Game->Device->setGammaRamp ( Game->GammaValue, Game->GammaValue, Game->GammaValue, 0.f, 0.f );
env->addStaticText ( L"Tesselation:", rect<s32>( dim.Width - 400, 124, dim.Width - 310, 140 ),false, false, gui.Window, -1, false );
gui.Tesselation = env->addScrollBar( true, rect<s32>( dim.Width - 300, 124, dim.Width - 10, 140 ), gui.Window,-1 );
gui.Tesselation->setMin ( 2 );
gui.Tesselation->setMax ( 12 );
gui.Tesselation->setSmallStep ( 1 );
gui.Tesselation->setLargeStep ( 1 );
gui.Tesselation->setPos ( Game->loadParam.patchTesselation );
gui.Tesselation->setToolTipText ( L"How smooth should curved surfaces be rendered" );
gui.Collision = env->addCheckBox ( true, rect<s32>( dim.Width - 400, 150, dim.Width - 300, 166 ), gui.Window,-1, L"Collision" );
gui.Collision->setToolTipText ( L"Set collision on or off ( flythrough ). \nPress F7 on your Keyboard" );
gui.Visible_Map = env->addCheckBox ( true, rect<s32>( dim.Width - 300, 150, dim.Width - 240, 166 ), gui.Window,-1, L"Map" );
gui.Visible_Map->setToolTipText ( L"Show or not show the static part of the Level. \nPress F3 on your Keyboard" );
gui.Visible_Shader = env->addCheckBox ( true, rect<s32>( dim.Width - 240, 150, dim.Width - 170, 166 ), gui.Window,-1, L"Shader" );
gui.Visible_Shader->setToolTipText ( L"Show or not show the Shader Nodes. \nPress F4 on your Keyboard" );
gui.Visible_Fog = env->addCheckBox ( true, rect<s32>( dim.Width - 170, 150, dim.Width - 110, 166 ), gui.Window,-1, L"Fog" );
gui.Visible_Fog->setToolTipText ( L"Show or not show the Fog Nodes. \nPress F5 on your Keyboard" );
gui.Visible_Unresolved = env->addCheckBox ( true, rect<s32>( dim.Width - 110, 150, dim.Width - 10, 166 ), gui.Window,-1, L"Unresolved" );
gui.Visible_Unresolved->setToolTipText ( L"Show or not show the Nodes the Engine can't handle. \nPress F6 on your Keyboard" );
gui.Visible_Skydome = env->addCheckBox ( true, rect<s32>( dim.Width - 110, 180, dim.Width - 10, 196 ), gui.Window,-1, L"Skydome" );
gui.Visible_Skydome->setToolTipText ( L"Show or not show the Skydome." );
env->addStaticText ( L"Archives:", rect<s32>( 5, dim.Height - 530, dim.Width - 600,dim.Height - 514 ),false, false, gui.Window, -1, false );
gui.ArchiveAdd = env->addButton ( rect<s32>( dim.Width - 725, dim.Height - 530, dim.Width - 665, dim.Height - 514 ), gui.Window,-1, L"add" );
gui.ArchiveAdd->setToolTipText ( L"Add an archive, usually packed zip-archives (*.pk3) to the Filesystem" );
gui.ArchiveRemove = env->addButton ( rect<s32>( dim.Width - 660, dim.Height - 530, dim.Width - 600, dim.Height - 514 ), gui.Window,-1, L"del" );
gui.ArchiveRemove->setToolTipText ( L"Remove the selected archive from the FileSystem." );
gui.ArchiveUp = env->addButton ( rect<s32>( dim.Width - 575, dim.Height - 530, dim.Width - 515, dim.Height - 514 ), gui.Window,-1, L"up" );
gui.ArchiveUp->setToolTipText ( L"Arrange Archive Look-up Hierarchy. Move the selected Archive up" );
gui.ArchiveDown = env->addButton ( rect<s32>( dim.Width - 510, dim.Height - 530, dim.Width - 440, dim.Height - 514 ), gui.Window,-1, L"down" );
gui.ArchiveDown->setToolTipText ( L"Arrange Archive Look-up Hierarchy. Move the selected Archive down" );
gui.ArchiveList = env->addTable ( rect<s32>( 5,dim.Height - 510, dim.Width - 450,dim.Height - 410 ), gui.Window );
gui.ArchiveList->addColumn ( L"Type", 0 );
gui.ArchiveList->addColumn ( L"Real File Path", 1 );
gui.ArchiveList->setColumnWidth ( 0, 60 );
gui.ArchiveList->setColumnWidth ( 1, 284 );
gui.ArchiveList->setToolTipText ( L"Show the attached Archives" );
env->addStaticText ( L"Maps:", rect<s32>( 5, dim.Height - 400, dim.Width - 450,dim.Height - 380 ),false, false, gui.Window, -1, false );
gui.MapList = env->addListBox ( rect<s32>( 5,dim.Height - 380, dim.Width - 450,dim.Height - 40 ), gui.Window, -1, true );
gui.MapList->setToolTipText ( L"Show the current Maps in all Archives.\n Double-Click the Map to start the level" );
env->addStaticText ( L"Scenegraph:", rect<s32>( dim.Width - 400, dim.Height - 400, dim.Width - 5,dim.Height - 380 ),false, false, gui.Window, -1, false );
gui.SceneTree = env->addTreeView( rect<s32>( dim.Width - 400, dim.Height - 380, dim.Width - 5, dim.Height - 40 ),
gui.Window, -1, true, true, false );
gui.SceneTree->setToolTipText ( L"Show the current Scenegraph" );
gui.SceneTree->getRoot()->clearChildren();
addSceneTreeItem ( Game->Device->getSceneManager()->getRootSceneNode(), gui.SceneTree->getRoot() );
IGUIImageList* imageList = env->createImageList( driver->getTexture ( "iconlist.png" ),
dimension2di( 32, 32 ), true );
if ( imageList )
{
gui.SceneTree->setImageList( imageList );
imageList->drop ();
}
gui.Logo = env->addImage( driver->getTexture("irrlichtlogo3.png"), position2d<s32>(5, 16 ), true, 0 );
gui.Logo->setToolTipText ( L"The great Irrlicht Engine" );
AddArchive ( "" );
}
Adds an Archive to the FileSystem and updates the GUI
void CQuake3EventHandler::AddArchive ( const path& archiveName )
{
IFileSystem *fs = Game->Device->getFileSystem();
u32 i;
if ( archiveName.size () )
{
bool exists = false;
for ( i = 0; i != fs->getFileArchiveCount(); ++i )
{
if ( fs->getFileArchive(i)->getFileList()->getPath() == archiveName )
{
exists = true;
break;
}
}
if (!exists)
{
fs->addFileArchive(archiveName, true, false);
}
}
if ( gui.ArchiveList )
{
gui.ArchiveList->clearRows();
for ( i = 0; i != fs->getFileArchiveCount(); ++i )
{
IFileArchive * archive = fs->getFileArchive ( i );
u32 index = gui.ArchiveList->addRow(i);
core::stringw typeName;
switch(archive->getType())
{
case io::EFAT_ZIP:
typeName = "ZIP";
break;
case io::EFAT_GZIP:
typeName = "gzip";
break;
case io::EFAT_FOLDER:
typeName = "Mount";
break;
case io::EFAT_PAK:
typeName = "PAK";
break;
case io::EFAT_TAR:
typeName = "TAR";
break;
default:
typeName = "archive";
}
gui.ArchiveList->setCellText ( index, 0, typeName );
gui.ArchiveList->setCellText ( index, 1, archive->getFileList()->getPath() );
}
}
if ( gui.MapList )
{
gui.MapList->clear();
IGUISpriteBank *bank = Game->Device->getGUIEnvironment()->getSpriteBank("sprite_q3map");
if ( 0 == bank )
bank = Game->Device->getGUIEnvironment()->addEmptySpriteBank("sprite_q3map");
SGUISprite sprite;
SGUISpriteFrame frame;
core::rect<s32> r;
bank->getSprites().clear();
bank->getPositions().clear ();
gui.MapList->setSpriteBank ( bank );
u32 g = 0;
core::stringw s;
fs->setFileListSystem ( FILESYSTEM_VIRTUAL );
fs->changeWorkingDirectoryTo ( "/maps/" );
IFileList *fileList = fs->createFileList ();
fs->setFileListSystem ( FILESYSTEM_NATIVE );
for ( i=0; i< fileList->getFileCount(); ++i)
{
s = fileList->getFullFileName(i);
if ( s.find ( ".bsp" ) >= 0 )
{
path c ( s );
deletePathFromFilename ( c );
cutFilenameExtension ( c, c );
c = path ( "levelshots/" ) + c;
dimension2du dim ( 128, 128 );
IVideoDriver * driver = Game->Device->getVideoDriver();
IImage* image = 0;
ITexture *tex = 0;
path filename;
filename = c + ".jpg";
if ( fs->existFile ( filename ) )
image = driver->createImageFromFile( filename );
if ( 0 == image )
{
filename = c + ".tga";
if ( fs->existFile ( filename ) )
image = driver->createImageFromFile( filename );
}
if ( image )
{
IImage* filter = driver->createImage ( video::ECF_R8G8B8, dim );
image->copyToScalingBoxFilter ( filter, 0 );
image->drop ();
image = filter;
}
if ( image )
{
tex = driver->addTexture ( filename, image );
image->drop ();
}
bank->setTexture ( g, tex );
r.LowerRightCorner.X = dim.Width;
r.LowerRightCorner.Y = dim.Height;
gui.MapList->setItemHeight ( r.LowerRightCorner.Y + 4 );
frame.rectNumber = bank->getPositions().size();
frame.textureNumber = g;
bank->getPositions().push_back(r);
sprite.Frames.set_used ( 0 );
sprite.Frames.push_back(frame);
sprite.frameTime = 0;
bank->getSprites().push_back(sprite);
gui.MapList->addItem ( s.c_str (), g );
g += 1;
}
}
fileList->drop ();
gui.MapList->setSelected ( -1 );
IGUIScrollBar * bar = (IGUIScrollBar*)gui.MapList->getElementFromId( 0 );
if ( bar )
bar->setPos ( 0 );
}
}
clears the Map in Memory
void CQuake3EventHandler::dropMap ()
{
IVideoDriver * driver = Game->Device->getVideoDriver();
driver->removeAllHardwareBuffers ();
driver->removeAllTextures ();
Player[0].shutdown ();
dropElement ( ItemParent );
dropElement ( ShaderParent );
dropElement ( UnresolvedParent );
dropElement ( FogParent );
dropElement ( BulletParent );
Impacts.clear();
if ( Meta )
{
Meta = 0;
}
dropElement ( MapParent );
dropElement ( SkyNode );
IMeshCache *cache = Game->Device->getSceneManager ()->getMeshCache();
cache->clear ();
Mesh = 0;
}
Load new map
void CQuake3EventHandler::LoadMap ( const stringw &mapName, s32 collision )
{
if ( 0 == mapName.size() )
return;
dropMap ();
IFileSystem *fs = Game->Device->getFileSystem();
ISceneManager *smgr = Game->Device->getSceneManager ();
IReadFile* file = fs->createMemoryReadFile(&Game->loadParam,
sizeof(Game->loadParam), L"levelparameter.cfg", false);
smgr->getMesh( file );
file->drop ();
Mesh = (IQ3LevelMesh*) smgr->getMesh(mapName);
if ( 0 == Mesh )
return;
Add the geometry mesh to the Scene (polygons & patches). The geometry mesh is optimised for faster drawing.
IMesh *geometry = Mesh->getMesh(E_Q3_MESH_GEOMETRY);
if ( 0 == geometry || geometry->getMeshBufferCount() == 0)
return;
Game->CurrentMapName = mapName;
Meta = 0;
ITriangleSelector * selector = 0;
if (collision)
Meta = smgr->createMetaTriangleSelector();
s32 minimalNodes = 2048;
MapParent = smgr->addOctreeSceneNode(geometry, 0, -1, minimalNodes);
MapParent->setName ( mapName );
if ( Meta )
{
selector = smgr->createOctreeTriangleSelector( geometry,MapParent, minimalNodes);
Meta->addTriangleSelector( selector);
selector->drop ();
}
ItemParent = smgr->addEmptySceneNode();
if ( ItemParent )
ItemParent->setName ( "Item Container" );
ShaderParent = smgr->addEmptySceneNode();
if ( ShaderParent )
ShaderParent->setName ( "Shader Container" );
UnresolvedParent = smgr->addEmptySceneNode();
if ( UnresolvedParent )
UnresolvedParent->setName ( "Unresolved Container" );
FogParent = smgr->addEmptySceneNode();
if ( FogParent )
FogParent->setName ( "Fog Container" );
BulletParent = smgr->addEmptySceneNode();
if ( BulletParent )
BulletParent->setName ( "Bullet Container" );
Now construct SceneNodes for each shader. The objects are stored in the quake mesh E_Q3_MESH_ITEMS and the shader ID is stored in the MaterialParameters: mostly dark looking skulls and moving lava... or green flashing tubes?
Q3ShaderFactory ( Game->loadParam, Game->Device, Mesh, E_Q3_MESH_ITEMS,ShaderParent, Meta, false );
Q3ShaderFactory ( Game->loadParam, Game->Device, Mesh, E_Q3_MESH_FOG,FogParent, 0, false );
Q3ShaderFactory ( Game->loadParam, Game->Device, Mesh, E_Q3_MESH_UNRESOLVED,UnresolvedParent, Meta, true );
Now construct Models from Entity List
Q3ModelFactory ( Game->loadParam, Game->Device, Mesh, ItemParent, false );
}
Adds a SceneNode with an icon to the Scene Tree
void CQuake3EventHandler::addSceneTreeItem( ISceneNode * parent, IGUITreeViewNode* nodeParent)
{
IGUITreeViewNode* node;
wchar_t msg[128];
s32 imageIndex;
list<ISceneNode*>::ConstIterator it = parent->getChildren().begin();
for (; it != parent->getChildren().end(); ++it)
{
switch ( (*it)->getType () )
{
case ESNT_Q3SHADER_SCENE_NODE: imageIndex = 0; break;
case ESNT_CAMERA: imageIndex = 1; break;
case ESNT_EMPTY: imageIndex = 2; break;
case ESNT_MESH: imageIndex = 3; break;
case ESNT_OCTREE: imageIndex = 3; break;
case ESNT_ANIMATED_MESH: imageIndex = 4; break;
case ESNT_SKY_BOX: imageIndex = 5; break;
case ESNT_BILLBOARD: imageIndex = 6; break;
case ESNT_PARTICLE_SYSTEM: imageIndex = 7; break;
case ESNT_TEXT: imageIndex = 8; break;
default:imageIndex = -1; break;
}
if ( imageIndex < 0 )
{
swprintf_irr ( msg, 128, L"%hs,%hs",
Game->Device->getSceneManager ()->getSceneNodeTypeName ( (*it)->getType () ),
(*it)->getName()
);
}
else
{
swprintf_irr ( msg, 128, L"%hs",(*it)->getName() );
}
node = nodeParent->addChildBack( msg, 0, imageIndex );
list<ISceneNodeAnimator*>::ConstIterator ait = (*it)->getAnimators().begin();
for (; ait != (*it)->getAnimators().end(); ++ait)
{
imageIndex = -1;
swprintf_irr ( msg, 128, L"%hs",
Game->Device->getSceneManager ()->getAnimatorTypeName ( (*ait)->getType () )
);
switch ( (*ait)->getType () )
{
case ESNAT_FLY_CIRCLE:
case ESNAT_FLY_STRAIGHT:
case ESNAT_FOLLOW_SPLINE:
case ESNAT_ROTATION:
case ESNAT_TEXTURE:
case ESNAT_DELETION:
case ESNAT_COLLISION_RESPONSE:
case ESNAT_CAMERA_FPS:
case ESNAT_CAMERA_MAYA:
default:
break;
}
node->addChildBack( msg, 0, imageIndex );
}
addSceneTreeItem ( *it, node );
}
}
void CQuake3EventHandler::CreatePlayers()
{
Player[0].create ( Game->Device, Mesh, MapParent, Meta );
}
void CQuake3EventHandler::AddSky( u32 dome, const c8 *texture)
{
ISceneManager *smgr = Game->Device->getSceneManager ();
IVideoDriver * driver = Game->Device->getVideoDriver();
bool oldMipMapState = driver->getTextureCreationFlag(video::ETCF_CREATE_MIP_MAPS);
driver->setTextureCreationFlag(video::ETCF_CREATE_MIP_MAPS, false);
if ( 0 == dome )
{
static const c8*p[] = { "ft", "rt", "bk", "lf", "up", "dn" };
u32 i = 0;
snprintf_irr ( buf, 64, "%s_%s.jpg", texture, p[i] );
SkyNode = smgr->addSkyBoxSceneNode( driver->getTexture ( buf ), 0, 0, 0, 0, 0 );
if (SkyNode)
{
for ( i = 0; i < 6; ++i )
{
snprintf_irr ( buf, 64, "%s_%s.jpg", texture, p[i] );
SkyNode->getMaterial(i).setTexture ( 0, driver->getTexture ( buf ) );
}
}
}
else
if ( 1 == dome )
{
snprintf_irr ( buf, 64, "%s.jpg", texture );
SkyNode = smgr->addSkyDomeSceneNode(
driver->getTexture( buf ), 32,32,
1.f, 1.f, 1000.f, 0, 11);
}
else
if ( 2 == dome )
{
snprintf_irr ( buf, 64, "%s.jpg", texture );
SkyNode = smgr->addSkyDomeSceneNode(
driver->getTexture( buf ), 16,8,
0.95f, 2.f, 1000.f, 0, 11);
}
if (SkyNode)
SkyNode->setName("Skydome");
driver->setTextureCreationFlag(video::ETCF_CREATE_MIP_MAPS, oldMipMapState);
}
void CQuake3EventHandler::SetGUIActive( s32 command)
{
bool inputState = false;
ICameraSceneNode * camera = Game->Device->getSceneManager()->getActiveCamera ();
switch ( command )
{
case 0: Game->guiActive = 0; inputState = !Game->guiActive; break;
case 1: Game->guiActive = 1; inputState = !Game->guiActive; break;
case 2: Game->guiActive ^= 1; inputState = !Game->guiActive;break;
case 3:
if ( camera )
inputState = !camera->isInputReceiverEnabled();
break;
}
if ( camera )
{
camera->setInputReceiverEnabled ( inputState );
Game->Device->getCursorControl()->setVisible( !inputState );
}
if ( gui.Window )
{
gui.Window->setVisible ( Game->guiActive != 0 );
}
if ( Game->guiActive &&
gui.SceneTree && Game->Device->getGUIEnvironment()->getFocus() != gui.SceneTree
)
{
gui.SceneTree->getRoot()->clearChildren();
addSceneTreeItem ( Game->Device->getSceneManager()->getRootSceneNode(), gui.SceneTree->getRoot() );
}
Game->Device->getGUIEnvironment()->setFocus ( Game->guiActive ? gui.Window: 0 );
}
Handle game input
bool CQuake3EventHandler::OnEvent(const SEvent& eve)
{
if ( eve.EventType == EET_LOG_TEXT_EVENT )
{
return false;
}
if ( Game->guiActive && eve.EventType == EET_GUI_EVENT )
{
if ( eve.GUIEvent.Caller == gui.MapList && eve.GUIEvent.EventType == gui::EGET_LISTBOX_SELECTED_AGAIN )
{
s32 selected = gui.MapList->getSelected();
if ( selected >= 0 )
{
stringw loadMap = gui.MapList->getListItem ( selected );
if ( 0 == MapParent || loadMap != Game->CurrentMapName )
{
printf ( "Loading map %ls\n", loadMap.c_str() );
LoadMap ( loadMap , 1 );
if ( 0 == Game->loadParam.loadSkyShader )
{
AddSky ( 1, "skydome2" );
}
CreatePlayers ();
CreateGUI ();
SetGUIActive ( 0 );
return true;
}
}
}
else
if ( eve.GUIEvent.Caller == gui.ArchiveRemove && eve.GUIEvent.EventType == gui::EGET_BUTTON_CLICKED )
{
Game->Device->getFileSystem()->removeFileArchive( gui.ArchiveList->getSelected() );
Game->CurrentMapName = "";
AddArchive ( "" );
}
else
if ( eve.GUIEvent.Caller == gui.ArchiveAdd && eve.GUIEvent.EventType == gui::EGET_BUTTON_CLICKED )
{
if ( 0 == gui.ArchiveFileOpen )
{
Game->Device->getFileSystem()->setFileListSystem ( FILESYSTEM_NATIVE );
gui.ArchiveFileOpen = Game->Device->getGUIEnvironment()->addFileOpenDialog ( L"Add Game Archive" , false,gui.Window );
}
}
else
if ( eve.GUIEvent.Caller == gui.ArchiveFileOpen && eve.GUIEvent.EventType == gui::EGET_FILE_SELECTED )
{
AddArchive ( gui.ArchiveFileOpen->getFileNameP() );
gui.ArchiveFileOpen = 0;
}
else
if ( eve.GUIEvent.Caller == gui.ArchiveFileOpen && eve.GUIEvent.EventType == gui::EGET_DIRECTORY_SELECTED )
{
AddArchive ( gui.ArchiveFileOpen->getDirectoryName() );
}
else
if ( eve.GUIEvent.Caller == gui.ArchiveFileOpen && eve.GUIEvent.EventType == gui::EGET_FILE_CHOOSE_DIALOG_CANCELLED )
{
gui.ArchiveFileOpen = 0;
}
else
if ( ( eve.GUIEvent.Caller == gui.ArchiveUp || eve.GUIEvent.Caller == gui.ArchiveDown ) &&
eve.GUIEvent.EventType == gui::EGET_BUTTON_CLICKED )
{
s32 rel = eve.GUIEvent.Caller == gui.ArchiveUp ? -1 : 1;
if ( Game->Device->getFileSystem()->moveFileArchive ( gui.ArchiveList->getSelected (), rel ) )
{
s32 newIndex = core::s32_clamp ( gui.ArchiveList->getSelected() + rel, 0, gui.ArchiveList->getRowCount() - 1 );
AddArchive ( "" );
gui.ArchiveList->setSelected ( newIndex );
Game->CurrentMapName = "";
}
}
else
if ( eve.GUIEvent.Caller == gui.VideoDriver && eve.GUIEvent.EventType == gui::EGET_COMBO_BOX_CHANGED )
{
Game->deviceParam.DriverType = (E_DRIVER_TYPE) gui.VideoDriver->getItemData ( gui.VideoDriver->getSelected() );
}
else
if ( eve.GUIEvent.Caller == gui.VideoMode && eve.GUIEvent.EventType == gui::EGET_COMBO_BOX_CHANGED )
{
u32 val = gui.VideoMode->getItemData ( gui.VideoMode->getSelected() );
Game->deviceParam.WindowSize.Width = val >> 16;
Game->deviceParam.WindowSize.Height = val & 0xFFFF;
}
else
if ( eve.GUIEvent.Caller == gui.FullScreen && eve.GUIEvent.EventType == gui::EGET_CHECKBOX_CHANGED )
{
Game->deviceParam.Fullscreen = gui.FullScreen->isChecked();
}
else
if ( eve.GUIEvent.Caller == gui.Bit32 && eve.GUIEvent.EventType == gui::EGET_CHECKBOX_CHANGED )
{
Game->deviceParam.Bits = gui.Bit32->isChecked() ? 32 : 16;
}
else
if ( eve.GUIEvent.Caller == gui.MultiSample && eve.GUIEvent.EventType == gui::EGET_SCROLL_BAR_CHANGED )
{
Game->deviceParam.AntiAlias = gui.MultiSample->getPos();
}
else
if ( eve.GUIEvent.Caller == gui.Tesselation && eve.GUIEvent.EventType == gui::EGET_SCROLL_BAR_CHANGED )
{
Game->loadParam.patchTesselation = gui.Tesselation->getPos ();
}
else
if ( eve.GUIEvent.Caller == gui.Gamma && eve.GUIEvent.EventType == gui::EGET_SCROLL_BAR_CHANGED )
{
Game->GammaValue = gui.Gamma->getPos () * 0.01f;
Game->Device->setGammaRamp ( Game->GammaValue, Game->GammaValue, Game->GammaValue, 0.f, 0.f );
}
else
if ( eve.GUIEvent.Caller == gui.SetVideoMode && eve.GUIEvent.EventType == gui::EGET_BUTTON_CLICKED )
{
Game->retVal = 2;
Game->Device->closeDevice();
}
else
if ( eve.GUIEvent.Caller == gui.Window && eve.GUIEvent.EventType == gui::EGET_ELEMENT_CLOSED )
{
Game->Device->closeDevice();
}
else
if ( eve.GUIEvent.Caller == gui.Collision && eve.GUIEvent.EventType == gui::EGET_CHECKBOX_CHANGED )
{
Game->flyTroughState ^= 1;
Player[0].cam()->setAnimateTarget ( Game->flyTroughState == 0 );
printf ( "collision %d\n", Game->flyTroughState == 0 );
}
else
if ( eve.GUIEvent.Caller == gui.Visible_Map && eve.GUIEvent.EventType == gui::EGET_CHECKBOX_CHANGED )
{
bool v = gui.Visible_Map->isChecked();
if ( MapParent )
{
printf ( "static node set visible %d\n",v );
MapParent->setVisible ( v );
}
}
else
if ( eve.GUIEvent.Caller == gui.Visible_Shader && eve.GUIEvent.EventType == gui::EGET_CHECKBOX_CHANGED )
{
bool v = gui.Visible_Shader->isChecked();
if ( ShaderParent )
{
printf ( "shader node set visible %d\n",v );
ShaderParent->setVisible ( v );
}
}
else
if ( eve.GUIEvent.Caller == gui.Visible_Skydome && eve.GUIEvent.EventType == gui::EGET_CHECKBOX_CHANGED )
{
if ( SkyNode )
{
bool v = !SkyNode->isVisible();
printf ( "skynode set visible %d\n",v );
SkyNode->setVisible ( v );
}
}
else
if ( eve.GUIEvent.Caller == gui.Respawn && eve.GUIEvent.EventType == gui::EGET_BUTTON_CLICKED )
{
Player[0].respawn ();
}
return false;
}
if ((eve.EventType == EET_KEY_INPUT_EVENT && eve.KeyInput.Key == KEY_SPACE &&
eve.KeyInput.PressedDown == false) ||
(eve.EventType == EET_MOUSE_INPUT_EVENT && eve.MouseInput.Event == EMIE_LMOUSE_LEFT_UP)
)
{
ICameraSceneNode * camera = Game->Device->getSceneManager()->getActiveCamera ();
if ( camera && camera->isInputReceiverEnabled () )
{
useItem( Player + 0 );
}
}
if ((eve.EventType == EET_KEY_INPUT_EVENT && eve.KeyInput.Key == KEY_F1 &&
eve.KeyInput.PressedDown == false) ||
(eve.EventType == EET_MOUSE_INPUT_EVENT && eve.MouseInput.Event == EMIE_RMOUSE_LEFT_UP)
)
{
SetGUIActive ( 2 );
}
if ( eve.EventType == EET_KEY_INPUT_EVENT && eve.KeyInput.PressedDown == false)
{
if ( eve.KeyInput.Key == irr::KEY_ESCAPE )
{
SetGUIActive ( 3 );
}
else
if (eve.KeyInput.Key == KEY_F11)
{
IImage* image = Game->Device->getVideoDriver()->createScreenShot();
if (image)
{
core::vector3df pos;
core::vector3df rot;
ICameraSceneNode * cam = Game->Device->getSceneManager()->getActiveCamera ();
if ( cam )
{
pos = cam->getPosition ();
rot = cam->getRotation ();
}
snprintf_irr(buf, 256, "%s_%ls_%.0f_%.0f_%.0f_%.0f_%.0f_%.0f.jpg",
DRIVER_TYPE_NAMES_SHORT[Game->Device->getVideoDriver()->getDriverType()],
Game->CurrentMapName.c_str(),
pos.X, pos.Y, pos.Z,
rot.X, rot.Y, rot.Z
);
path filename ( buf );
filename.replace ( '/', '_' );
printf ( "screenshot : %s\n", filename.c_str() );
Game->Device->getVideoDriver()->writeImageToFile(image, filename, 100 );
image->drop();
}
}
else
if (eve.KeyInput.Key == KEY_F9)
{
s32 value = EDS_OFF;
Game->debugState = ( Game->debugState + 1 ) & 3;
switch ( Game->debugState )
{
case 1: value = EDS_NORMALS | EDS_MESH_WIRE_OVERLAY | EDS_BBOX_ALL; break;
case 2: value = EDS_NORMALS | EDS_MESH_WIRE_OVERLAY | EDS_SKELETON; break;
}
// set debug map data on/off
//debugState = debugState == EDS_OFF ? EDS_NORMALS | EDS_MESH_WIRE_OVERLAY | EDS_BBOX_ALL : EDS_OFF;
if ( ItemParent )
{
list<ISceneNode*>::ConstIterator it = ItemParent->getChildren().begin();
for (; it != ItemParent->getChildren().end(); ++it)
{
(*it)->setDebugDataVisible ( value );
}
}
if ( ShaderParent )
{
list<ISceneNode*>::ConstIterator it = ShaderParent->getChildren().begin();
for (; it != ShaderParent->getChildren().end(); ++it)
{
(*it)->setDebugDataVisible ( value );
}
}
if ( UnresolvedParent )
{
list<ISceneNode*>::ConstIterator it = UnresolvedParent->getChildren().begin();
for (; it != UnresolvedParent->getChildren().end(); ++it)
{
(*it)->setDebugDataVisible ( value );
}
}
if ( FogParent )
{
list<ISceneNode*>::ConstIterator it = FogParent->getChildren().begin();
for (; it != FogParent->getChildren().end(); ++it)
{
(*it)->setDebugDataVisible ( value );
}
}
if ( SkyNode )
{
SkyNode->setDebugDataVisible ( value );
}
}
else
if (eve.KeyInput.Key == KEY_F8)
{
Game->gravityState ^= 1;
Player[0].cam()->setGravity ( getGravity ( Game->gravityState ? "earth" : "none" ) );
printf ( "gravity %s\n", Game->gravityState ? "earth" : "none" );
}
else
if (eve.KeyInput.Key == KEY_F7)
{
Game->flyTroughState ^= 1;
Player[0].cam()->setAnimateTarget ( Game->flyTroughState == 0 );
if ( gui.Collision )
gui.Collision->setChecked ( Game->flyTroughState == 0 );
printf ( "collision %d\n", Game->flyTroughState == 0 );
}
else
if (eve.KeyInput.Key == KEY_F2)
{
Player[0].respawn ();
}
else
if (eve.KeyInput.Key == KEY_F3)
{
if ( MapParent )
{
bool v = !MapParent->isVisible ();
printf ( "static node set visible %d\n",v );
MapParent->setVisible ( v );
if ( gui.Visible_Map )
gui.Visible_Map->setChecked ( v );
}
}
else
if (eve.KeyInput.Key == KEY_F4)
{
if ( ShaderParent )
{
bool v = !ShaderParent->isVisible ();
printf ( "shader node set visible %d\n",v );
ShaderParent->setVisible ( v );
if ( gui.Visible_Shader )
gui.Visible_Shader->setChecked ( v );
}
}
else
if (eve.KeyInput.Key == KEY_F5)
{
if ( FogParent )
{
bool v = !FogParent->isVisible ();
printf ( "fog node set visible %d\n",v );
FogParent->setVisible ( v );
if ( gui.Visible_Fog )
gui.Visible_Fog->setChecked ( v );
}
}
else
if (eve.KeyInput.Key == KEY_F6)
{
if ( UnresolvedParent )
{
bool v = !UnresolvedParent->isVisible ();
printf ( "unresolved node set visible %d\n",v );
UnresolvedParent->setVisible ( v );
if ( gui.Visible_Unresolved )
gui.Visible_Unresolved->setChecked ( v );
}
}
}
if ( eve.EventType == EET_KEY_INPUT_EVENT && eve.KeyInput.Key == KEY_KEY_C )
{
ISceneNodeAnimatorCollisionResponse *anim = Player[0].cam ();
if ( anim && 0 == Game->flyTroughState )
{
if ( false == eve.KeyInput.PressedDown )
{
anim->setEllipsoidRadius ( vector3df(30,45,30) );
anim->setEllipsoidTranslation ( vector3df(0,40,0));
}
else
{
anim->setEllipsoidRadius ( vector3df(30,20,30) );
anim->setEllipsoidTranslation ( vector3df(0,20,0));
}
return true;
}
}
return false;
}
useItem
void CQuake3EventHandler::useItem( Q3Player * player)
{
ISceneManager* smgr = Game->Device->getSceneManager();
ICameraSceneNode* camera = smgr->getActiveCamera();
if (!camera)
return;
SParticleImpact imp;
imp.when = 0;
vector3df start = camera->getPosition();
if ( player->WeaponNode )
{
start.X += 0.f;
start.Y += 0.f;
start.Z += 0.f;
}
vector3df end = (camera->getTarget() - start);
end.normalize();
start += end*20.0f;
end = start + (end * camera->getFarValue());
triangle3df triangle;
line3d<f32> line(start, end);
scene::ISceneNode* hitNode;
if (smgr->getSceneCollisionManager()->getCollisionPoint(
line, Meta, end, triangle,hitNode))
{
vector3df out = triangle.getNormal();
out.setLength(0.03f);
imp.when = 1;
imp.outVector = out;
imp.pos = end;
player->setAnim ( "pow" );
player->Anim[1].next += player->Anim[1].delta;
}
else
{
vector3df start = camera->getPosition();
if ( player->WeaponNode )
{
}
vector3df end = (camera->getTarget() - start);
end.normalize();
start += end*20.0f;
end = start + (end * camera->getFarValue());
}
ISceneNode* node = 0;
node = smgr->addBillboardSceneNode( BulletParent,dimension2d<f32>(10,10), start);
node->setMaterialFlag(EMF_LIGHTING, false);
node->setMaterialTexture(0, Game->Device->getVideoDriver()->getTexture("fireball.bmp"));
node->setMaterialFlag(video::EMF_ZWRITE_ENABLE, false);
node->setMaterialType(EMT_TRANSPARENT_ADD_COLOR);
f32 length = (f32)(end - start).getLength();
const f32 speed = 5.8f;
u32 time = (u32)(length / speed);
ISceneNodeAnimator* anim = 0;
anim = smgr->createFlyStraightAnimator(start, end, time);
node->addAnimator(anim);
anim->drop();
snprintf_irr ( buf, 64, "bullet: %s on %.1f,%1.f,%1.f",
imp.when ? "hit" : "nohit", end.X, end.Y, end.Z );
node->setName ( buf );
anim = smgr->createDeleteAnimator(time);
node->addAnimator(anim);
anim->drop();
if (imp.when)
{
imp.when = Game->Device->getTimer()->getTime() +
(time + (s32) ( ( 1.f + Noiser::get() ) * 250.f ));
Impacts.push_back(imp);
}
}
void CQuake3EventHandler::createParticleImpacts( u32 now )
{
ISceneManager* sm = Game->Device->getSceneManager();
struct smokeLayer
{
const c8 * texture;
f32 scale;
f32 minparticleSize;
f32 maxparticleSize;
f32 boxSize;
u32 minParticle;
u32 maxParticle;
u32 fadeout;
u32 lifetime;
};
smokeLayer smoke[] =
{
{ "smoke2.jpg", 0.4f, 1.5f, 18.f, 20.f, 20, 50, 2000, 10000 },
{ "smoke3.jpg", 0.2f, 1.2f, 15.f, 20.f, 10, 30, 1000, 12000 }
};
u32 i;
u32 g;
s32 factor = 1;
for ( g = 0; g != 2; ++g )
{
smoke[g].minParticle *= factor;
smoke[g].maxParticle *= factor;
smoke[g].lifetime *= factor;
smoke[g].boxSize *= Noiser::get() * 0.5f;
}
for ( i=0; i < Impacts.size(); ++i)
{
if (now < Impacts[i].when)
continue;
IParticleSystemSceneNode* pas = 0;
for ( g = 0; g != 2; ++g )
{
pas = sm->addParticleSystemSceneNode(false, BulletParent, -1, Impacts[i].pos);
snprintf_irr ( buf, 64, "bullet impact smoke at %.1f,%.1f,%1.f",
Impacts[i].pos.X,Impacts[i].pos.Y,Impacts[i].pos.Z);
pas->setName ( buf );
vector3df direction = Impacts[i].outVector;
direction *= smoke[g].scale;
IParticleEmitter* em = pas->createBoxEmitter(
aabbox3d<f32>(-4.f,0.f,-4.f,20.f,smoke[g].minparticleSize,20.f),
direction,smoke[g].minParticle, smoke[g].maxParticle,
video::SColor(0,0,0,0),video::SColor(0,128,128,128),
250,4000, 60);
em->setMinStartSize (dimension2d<f32>( smoke[g].minparticleSize, smoke[g].minparticleSize));
em->setMaxStartSize (dimension2d<f32>( smoke[g].maxparticleSize, smoke[g].maxparticleSize));
pas->setEmitter(em);
em->drop();
IParticleAffector* paf = pas->createFadeOutParticleAffector(
video::SColor ( 0, 0, 0, 0 ), smoke[g].fadeout);
pas->addAffector(paf);
paf->drop();
ISceneNodeAnimator* anim = sm->createDeleteAnimator( smoke[g].lifetime);
pas->addAnimator(anim);
anim->drop();
pas->setMaterialFlag(video::EMF_LIGHTING, false);
pas->setMaterialFlag(video::EMF_ZWRITE_ENABLE, false);
pas->setMaterialType(video::EMT_TRANSPARENT_ADD_COLOR );
pas->setMaterialTexture(0, Game->Device->getVideoDriver()->getTexture( smoke[g].texture ));
}
#ifdef USE_IRRKLANG
if (irrKlang)
{
	audio::ISound* sound = irrKlang->play3D(impactSound, Impacts[i].pos, false, false, true);
	// adjust the minimum distance a bit to make the sound of an impact louder
	if (sound) { sound->setMinDistance(400); sound->drop(); }
}
#endif
Impacts.erase(i);
i--;
}
}
render
void CQuake3EventHandler::Render()
{
IVideoDriver * driver = Game->Device->getVideoDriver();
if ( 0 == driver )
return;
const bool anaglyph=false;
if (anaglyph)
{
scene::ICameraSceneNode* cameraOld = Game->Device->getSceneManager()->getActiveCamera();
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, SColor(0,0,0,0));
driver->getOverrideMaterial().Material.ColorMask = ECP_NONE;
driver->getOverrideMaterial().EnableFlags = EMF_COLOR_MASK;
driver->getOverrideMaterial().EnablePasses = ESNRP_SKY_BOX +
ESNRP_SOLID +
ESNRP_TRANSPARENT +
ESNRP_TRANSPARENT_EFFECT +
ESNRP_SHADOW;
Game->Device->getSceneManager()->drawAll();
driver->clearBuffers(video::ECBF_DEPTH, video::SColor(255,0,0,0));
const vector3df oldPosition = cameraOld->getPosition();
const vector3df oldTarget = cameraOld->getTarget();
const matrix4 startMatrix = cameraOld->getAbsoluteTransformation();
const vector3df focusPoint = (oldTarget -
cameraOld->getAbsolutePosition()).setLength(10000) +
cameraOld->getAbsolutePosition() ;
scene::ICameraSceneNode* camera = cameraOld;
vector3df pos;
matrix4 move;
move.setTranslation( vector3df(-1.5f,0.0f,0.0f) );
pos=(startMatrix*move).getTranslation();
driver->getOverrideMaterial().Material.ColorMask = ECP_RED;
driver->getOverrideMaterial().EnableFlags = EMF_COLOR_MASK;
driver->getOverrideMaterial().EnablePasses =
ESNRP_SKY_BOX|ESNRP_SOLID|ESNRP_TRANSPARENT|
ESNRP_TRANSPARENT_EFFECT|ESNRP_SHADOW;
camera->setPosition(pos);
camera->setTarget(focusPoint);
Game->Device->getSceneManager()->drawAll();
driver->clearBuffers(video::ECBF_DEPTH, video::SColor(255, 0, 0, 0));
move.setTranslation( vector3df(1.5f,0.0f,0.0f) );
pos=(startMatrix*move).getTranslation();
driver->getOverrideMaterial().Material.ColorMask = ECP_GREEN + ECP_BLUE;
driver->getOverrideMaterial().EnableFlags = EMF_COLOR_MASK;
driver->getOverrideMaterial().EnablePasses =
ESNRP_SKY_BOX|ESNRP_SOLID|ESNRP_TRANSPARENT|
ESNRP_TRANSPARENT_EFFECT|ESNRP_SHADOW;
camera->setPosition(pos);
camera->setTarget(focusPoint);
Game->Device->getSceneManager()->drawAll();
driver->getOverrideMaterial().Material.ColorMask=ECP_ALL;
driver->getOverrideMaterial().EnableFlags=0;
driver->getOverrideMaterial().EnablePasses=0;
if (camera != cameraOld)
{
Game->Device->getSceneManager()->setActiveCamera(cameraOld);
camera->remove();
}
else
{
camera->setPosition(oldPosition);
camera->setTarget(oldTarget);
}
}
else
{
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, SColor(0,0,0,0));
Game->Device->getSceneManager()->drawAll();
}
Game->Device->getGUIEnvironment()->drawAll();
driver->endScene();
}
update the generic scene node
void CQuake3EventHandler::Animate()
{
u32 now = Game->Device->getTimer()->getTime();
Q3Player * player = Player + 0;
checkTimeFire ( player->Anim, 4, now );
if ( player->Anim[0].flags & FIRED )
{
wchar_t msg[128];
IVideoDriver * driver = Game->Device->getVideoDriver();
#ifdef _IRR_SCENEMANAGER_DEBUG
IAttributes * attr = Game->Device->getSceneManager()->getParameters();
swprintf_irr ( msg, 128,
L"Q3 %s [%ls], FPS:%03d Tri:%.03fm Cull %d/%d nodes (%d,%d,%d)",
Game->CurrentMapName.c_str(),
driver->getName(),
driver->getFPS (),
(f32) driver->getPrimitiveCountDrawn( 0 ) * ( 1.f / 1000000.f ),
attr->getAttributeAsInt ( "culled" ),
attr->getAttributeAsInt ( "calls" ),
attr->getAttributeAsInt ( "drawn_solid" ),
attr->getAttributeAsInt ( "drawn_transparent" ),
attr->getAttributeAsInt ( "drawn_transparent_effect" )
);
#else
swprintf_irr ( msg, 128,
L"Q3 %s [%ls], FPS:%03d Tri:%.03fm",
Game->CurrentMapName.c_str(),
driver->getName(),
driver->getFPS (),
(f32) driver->getPrimitiveCountDrawn( 0 ) * ( 1.f / 1000000.f )
);
#endif
Game->Device->setWindowCaption( msg );
swprintf_irr ( msg, 128,
L"%03d fps, F1 GUI on/off, F2 respawn, F3-F6 toggle Nodes, F7 Collision on/off"
L", F8 Gravity on/off, Right Mouse Toggle GUI",
Game->Device->getVideoDriver()->getFPS ()
);
if ( gui.StatusLine )
gui.StatusLine->setText ( msg );
player->Anim[0].flags &= ~FIRED;
}
if ( player->Anim[1].flags & FIRED )
{
if ( strcmp ( player->animation, "idle" ) )
player->setAnim ( "idle" );
player->Anim[1].flags &= ~FIRED;
}
createParticleImpacts ( now );
}
The main game states
void runGame ( GameData *game )
{
if ( game->retVal >= 3 )
return;
game->Device = (*game->createExDevice) ( game->deviceParam );
if ( 0 == game->Device)
{
game->retVal = 0;
return;
}
CQuake3EventHandler *eventHandler = new CQuake3EventHandler( game );
game->load ( "explorer.cfg" );
for ( u32 i = 0; i < game->CurrentArchiveList.size(); ++i )
{
eventHandler->AddArchive ( game->CurrentArchiveList[i] );
}
if ( game->CurrentMapName.size () )
{
eventHandler->LoadMap ( game->CurrentMapName, 1 );
if ( 0 == game->loadParam.loadSkyShader )
eventHandler->AddSky ( 1, "skydome2" );
eventHandler->CreatePlayers ();
eventHandler->CreateGUI ();
eventHandler->SetGUIActive ( 0 );
if ( game->retVal == 2 )
{
eventHandler->GetPlayer( 0 )->setpos ( game->PlayerPosition, game->PlayerRotation );
}
}
else
{
eventHandler->AddSky ( 1, "skydome2" );
eventHandler->CreatePlayers ();
eventHandler->CreateGUI ();
eventHandler->SetGUIActive ( 1 );
background_music ( "IrrlichtTheme.ogg" );
}
game->retVal = 3;
while( game->Device->run() )
{
eventHandler->Animate ();
eventHandler->Render ();
game->Device->yield();
}
game->Device->setGammaRamp ( 1.f, 1.f, 1.f, 0.f, 0.f );
delete eventHandler;
}
#if defined (_IRR_WINDOWS_) && 0
#pragma comment(linker, "/subsystem:windows /ENTRY:mainCRTStartup")
#endif
The main routine, doing all setup
int IRRCALLCONV main(int argc, char* argv[])
{
path prgname(argv[0]);
GameData game ( deletePathFromPath ( prgname, 1 ) );
const c8 * dllName = argc > 1 ? argv[1] : "irrlicht.dll";
game.createExDevice = load_createDeviceEx ( dllName );
if ( 0 == game.createExDevice )
{
game.retVal = 3;
printf ( "Could not load %s.\n", dllName );
return game.retVal;
}
game.retVal = 1;
do
{
if ( game.retVal == 0 )
{
game.setDefault ();
game.deviceParam.DriverType=driverChoiceConsole();
if (game.deviceParam.DriverType==video::EDT_COUNT)
game.retVal = 3;
}
runGame ( &game );
} while ( game.retVal < 3 );
return game.retVal;
}
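The do/while loop in main() drives everything through GameData::retVal, which acts as a small state machine. This is not part of the original example, but as a reading aid the states could be written out like this (the enum and its names are purely illustrative):
// Illustrative only: the values GameData::retVal takes in main() and runGame() above.
enum EExplorerRunState
{
	RUNSTATE_DEVICE_FAILED = 0, // device creation failed: pick a new driver and retry
	RUNSTATE_FIRST_RUN     = 1, // initial value before the first call to runGame()
	RUNSTATE_RESTART       = 2, // "set video mode" clicked: recreate the device, keep the player position
	RUNSTATE_RUNNING_QUIT  = 3  // set once the game runs; when the device closes, the outer loop ends
};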
diff --git a/examples/22.MaterialViewer/tutorial.html b/examples/22.MaterialViewer/tutorial.html
index 6009e171..20bc5d38 100644
--- a/examples/22.MaterialViewer/tutorial.html
+++ b/examples/22.MaterialViewer/tutorial.html
@@ -195,10 +195,10 @@ tr.heading h2 {
-
+
This example can be used to play around with material settings and watch the results. Only the default non-shader materials are used here.
-
You have two nodes to make it easier to see what difference your settings make. Additionally you have one lightscenenode, and you can set the global ambient values.
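The global ambient value is applied through the scene manager. A minimal sketch of that call, assuming an existing device (this is what the GlobalAmbient control further down does once per frame):
// Sketch only: apply a global ambient color to the whole scene ('device' is assumed to exist).
scene::ISceneManager* smgr = device->getSceneManager();
smgr->setAmbientLight( video::SColorf(0.2f, 0.2f, 0.2f, 1.f) );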
#include <irrlicht.h>
#include "driverChoice.h"
using namespace irr;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
Variables within the unnamed namespace are globals which are restricted to this file.
namespace
{
const wchar_t* const DriverTypeNames[] =
{
L"NULL",
L"SOFTWARE",
L"BURNINGSVIDEO",
L"DIRECT3D8",
L"DIRECT3D9",
L"OPENGL",
0,
};
enum EGUI_IDS
{
GUI_ID_OPEN_TEXTURE = 1,
GUI_ID_QUIT,
GUI_ID_MAX
};
const core::stringw CLEAR_TEXTURE = L"CLEAR texture";
const video::SColor SCOL_BLACK = video::SColor(255, 0, 0, 0);
const video::SColor SCOL_BLUE = video::SColor(255, 0, 0, 255);
const video::SColor SCOL_CYAN = video::SColor(255, 0, 255, 255);
const video::SColor SCOL_GRAY = video::SColor(255, 128,128, 128);
const video::SColor SCOL_GREEN = video::SColor(255, 0, 255, 0);
const video::SColor SCOL_MAGENTA = video::SColor(255, 255, 0, 255);
const video::SColor SCOL_RED = video::SColor(255, 255, 0, 0);
const video::SColor SCOL_YELLOW = video::SColor(255, 255, 255, 0);
const video::SColor SCOL_WHITE = video::SColor(255, 255, 255, 255);
};
Returns a new unique number on each call.
s32 makeUniqueId()
{
static int unique = GUI_ID_MAX;
++unique;
return unique;
}
Find out which vertex-type is needed for the given material type.
video::E_VERTEX_TYPE getVertexTypeForMaterialType(video::E_MATERIAL_TYPE materialType)
{
using namespace video;
switch ( materialType )
{
case EMT_SOLID:
return EVT_STANDARD;
case EMT_SOLID_2_LAYER:
return EVT_STANDARD;
case EMT_LIGHTMAP:
case EMT_LIGHTMAP_ADD:
case EMT_LIGHTMAP_M2:
case EMT_LIGHTMAP_M4:
case EMT_LIGHTMAP_LIGHTING:
case EMT_LIGHTMAP_LIGHTING_M2:
case EMT_LIGHTMAP_LIGHTING_M4:
return EVT_2TCOORDS;
case EMT_DETAIL_MAP:
return EVT_2TCOORDS;
case EMT_SPHERE_MAP:
return EVT_STANDARD;
case EMT_REFLECTION_2_LAYER:
return EVT_2TCOORDS;
case EMT_TRANSPARENT_ADD_COLOR:
return EVT_STANDARD;
case EMT_TRANSPARENT_ALPHA_CHANNEL:
return EVT_STANDARD;
case EMT_TRANSPARENT_ALPHA_CHANNEL_REF:
return EVT_STANDARD;
case EMT_TRANSPARENT_VERTEX_ALPHA:
return EVT_STANDARD;
case EMT_TRANSPARENT_REFLECTION_2_LAYER:
return EVT_2TCOORDS;
case EMT_NORMAL_MAP_SOLID:
case EMT_NORMAL_MAP_TRANSPARENT_ADD_COLOR:
case EMT_NORMAL_MAP_TRANSPARENT_VERTEX_ALPHA:
case EMT_PARALLAX_MAP_SOLID:
case EMT_PARALLAX_MAP_TRANSPARENT_ADD_COLOR:
case EMT_PARALLAX_MAP_TRANSPARENT_VERTEX_ALPHA:
return EVT_TANGENTS;
case EMT_ONETEXTURE_BLEND:
return EVT_STANDARD;
case EMT_FORCE_32BIT:
return EVT_STANDARD;
}
return EVT_STANDARD;
}
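To make this mapping a bit more concrete, here is a minimal sketch of picking a mesh copy with a matching vertex format, which is what SMeshNodeControl below does with its three scene nodes. It is not part of the example; 'smgr' and 'mesh' are assumed to exist already:
// Sketch only: create a mesh copy whose vertex format matches the chosen material.
scene::IMeshManipulator* manip = smgr->getMeshManipulator();
scene::IMesh* compatibleMesh = 0;
switch ( getVertexTypeForMaterialType(video::EMT_NORMAL_MAP_SOLID) )
{
case video::EVT_STANDARD:
	compatibleMesh = mesh; // already uses video::S3DVertex
	break;
case video::EVT_2TCOORDS:
	compatibleMesh = manip->createMeshWith2TCoords(mesh); // remember to drop() it later
	break;
case video::EVT_TANGENTS:
	compatibleMesh = manip->createMeshWithTangents(mesh); // remember to drop() it later
	break;
}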
Custom GUI-control to edit color values.
class CColorControl : public gui::IGUIElement
{
public:
CColorControl(gui::IGUIEnvironment* guiEnv, const core::position2d<s32> & pos, const wchar_t *text, IGUIElement* parent, s32 id=-1 )
: gui::IGUIElement(gui::EGUIET_ELEMENT, guiEnv, parent,id, core::rect< s32 >(pos, pos+core::dimension2d<s32>(80, 75)))
, DirtyFlag(true)
, ColorStatic(0)
, EditAlpha(0)
, EditRed(0)
, EditGreen(0)
, EditBlue(0)
{
using namespace gui;
ButtonSetId = makeUniqueId();
const core::rect< s32 > rectControls(0,0,AbsoluteRect.getWidth(),AbsoluteRect.getHeight() );
IGUIStaticText * groupElement = guiEnv->addStaticText (L"", rectControls, true, false, this, -1, false);
groupElement->setNotClipped(true);
guiEnv->addStaticText (text, core::rect<s32>(0,0,80,15), false, false, groupElement, -1, false);
EditAlpha = addEditForNumbers(guiEnv, core::position2d<s32>(0,15), L"a", -1, groupElement );
EditRed = addEditForNumbers(guiEnv, core::position2d<s32>(0,30), L"r", -1, groupElement );
EditGreen = addEditForNumbers(guiEnv, core::position2d<s32>(0,45), L"g", -1, groupElement );
EditBlue = addEditForNumbers(guiEnv, core::position2d<s32>(0,60), L"b", -1, groupElement );
ColorStatic = guiEnv->addStaticText (L"", core::rect<s32>(60,15,80,75), true, false, groupElement, -1, true);
guiEnv->addButton (core::rect<s32>(60,35,80,50), groupElement, ButtonSetId, L"set");
SetEditsFromColor(Color);
}
virtual bool OnEvent(const SEvent &event)
{
if ( event.EventType != EET_GUI_EVENT )
return false;
if ( event.GUIEvent.Caller->getID() == ButtonSetId && event.GUIEvent.EventType == gui::EGET_BUTTON_CLICKED )
{
Color = GetColorFromEdits();
SetEditsFromColor(Color);
}
return false;
}
void setColor(const video::SColor& col)
{
DirtyFlag = true;
Color = col;
SetEditsFromColor(Color);
}
const video::SColor& getColor() const
{
return Color;
}
void resetDirty()
{
DirtyFlag = false;
}
bool isDirty() const
{
return DirtyFlag;
};
protected:
gui::IGUIEditBox* addEditForNumbers(gui::IGUIEnvironment* guiEnv, const core::position2d<s32> & pos, const wchar_t *text, s32 id, gui::IGUIElement * parent)
{
using namespace gui;
core::rect< s32 > rect(pos, pos+core::dimension2d<s32>(10, 15));
guiEnv->addStaticText (text, rect, false, false, parent, -1, false);
rect += core::position2d<s32>( 20, 0 );
rect.LowerRightCorner.X += 20;
gui::IGUIEditBox* edit = guiEnv->addEditBox(L"0", rect, true, parent, id);
return edit;
}
video::SColor GetColorFromEdits()
{
video::SColor col;
if (EditAlpha)
{
u32 alpha = core::strtoul10(core::stringc(EditAlpha->getText()).c_str());
if (alpha > 255)
alpha = 255;
col.setAlpha(alpha);
}
if (EditRed)
{
u32 red = core::strtoul10(core::stringc(EditRed->getText()).c_str());
if (red > 255)
red = 255;
col.setRed(red);
}
if (EditGreen)
{
u32 green = core::strtoul10(core::stringc(EditGreen->getText()).c_str());
if (green > 255)
green = 255;
col.setGreen(green);
}
if (EditBlue)
{
u32 blue = core::strtoul10(core::stringc(EditBlue->getText()).c_str());
if (blue > 255)
blue = 255;
col.setBlue(blue);
}
return col;
}
void SetEditsFromColor(video::SColor col)
{
DirtyFlag = true;
if ( EditAlpha )
EditAlpha->setText( core::stringw(col.getAlpha()).c_str() );
if ( EditRed )
EditRed->setText( core::stringw(col.getRed()).c_str() );
if ( EditGreen )
EditGreen->setText( core::stringw(col.getGreen()).c_str() );
if ( EditBlue )
EditBlue->setText( core::stringw(col.getBlue()).c_str() );
if ( ColorStatic )
ColorStatic->setBackgroundColor(col);
}
private:
bool DirtyFlag;
video::SColor Color;
s32 ButtonSetId;
gui::IGUIStaticText * ColorStatic;
gui::IGUIEditBox * EditAlpha;
gui::IGUIEditBox * EditRed;
gui::IGUIEditBox * EditGreen;
gui::IGUIEditBox * EditBlue;
};
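A typical way to use this control, mirroring what CApp does with GlobalAmbient further down (everything except CColorControl itself is illustrative):
// Sketch only: create the control once ...
CColorControl* ambient = new CColorControl( guiEnv, core::position2d<s32>(270, 300),
	L"global ambient", guiEnv->getRootGUIElement() );
// ... poll it once per frame and apply the color only after the user pressed "set" ...
if ( ambient->isDirty() )
{
	smgr->setAmbientLight( ambient->getColor() );
	ambient->resetDirty();
}
// ... and drop the extra reference on shutdown.
ambient->drop();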
Custom GUI-control to edit all colors typically used in materials and lights.
class CAllColorsControl : public gui::IGUIElement
{
public:
CAllColorsControl(gui::IGUIEnvironment* guiEnv, const core::position2d<s32> & pos, const wchar_t * description, bool hasEmissive, IGUIElement* parent, s32 id=-1)
: gui::IGUIElement(gui::EGUIET_ELEMENT, guiEnv, parent,id, core::rect<s32>(pos,pos+core::dimension2d<s32>(60,250)))
, ControlAmbientColor(0), ControlDiffuseColor(0), ControlSpecularColor(0), ControlEmissiveColor(0)
{
core::rect<s32> rect(0, 0, 60, 15);
guiEnv->addStaticText (description, rect, false, false, this, -1, false);
createColorControls(guiEnv, core::position2d<s32>(0, 15), hasEmissive);
}
virtual ~CAllColorsControl()
{
ControlAmbientColor->drop();
ControlDiffuseColor->drop();
if ( ControlEmissiveColor )
ControlEmissiveColor->drop();
ControlSpecularColor->drop();
}
void setColorsToMaterialColors(const video::SMaterial & material)
{
ControlAmbientColor->setColor(material.AmbientColor);
ControlDiffuseColor->setColor(material.DiffuseColor);
ControlEmissiveColor->setColor(material.EmissiveColor);
ControlSpecularColor->setColor(material.SpecularColor);
}
void updateMaterialColors(video::SMaterial & material)
{
if ( ControlAmbientColor->isDirty() )
material.AmbientColor = ControlAmbientColor->getColor();
if ( ControlDiffuseColor->isDirty() )
material.DiffuseColor = ControlDiffuseColor->getColor();
if ( ControlEmissiveColor->isDirty() )
material.EmissiveColor = ControlEmissiveColor->getColor();
if ( ControlSpecularColor->isDirty() )
material.SpecularColor = ControlSpecularColor->getColor();
}
void setColorsToLightDataColors(const video::SLight & lightData)
{
ControlAmbientColor->setColor(lightData.AmbientColor.toSColor());
ControlDiffuseColor->setColor(lightData.DiffuseColor.toSColor());
ControlSpecularColor->setColor(lightData.SpecularColor.toSColor());
}
void updateLightColors(video::SLight & lightData)
{
if ( ControlAmbientColor->isDirty() )
lightData.AmbientColor = video::SColorf( ControlAmbientColor->getColor() );
if ( ControlDiffuseColor->isDirty() )
lightData.DiffuseColor = video::SColorf( ControlDiffuseColor->getColor() );
if ( ControlSpecularColor->isDirty() )
lightData.SpecularColor = video::SColorf(ControlSpecularColor->getColor() );
}
void resetDirty()
{
ControlAmbientColor->resetDirty();
ControlDiffuseColor->resetDirty();
ControlSpecularColor->resetDirty();
if ( ControlEmissiveColor )
ControlEmissiveColor->resetDirty();
}
protected:
void createColorControls(gui::IGUIEnvironment* guiEnv, const core::position2d<s32> & pos, bool hasEmissive)
{
ControlAmbientColor = new CColorControl( guiEnv, pos, L"ambient", this);
ControlDiffuseColor = new CColorControl( guiEnv, pos + core::position2d<s32>(0, 75), L"diffuse", this );
ControlSpecularColor = new CColorControl( guiEnv, pos + core::position2d<s32>(0, 150), L"specular", this );
if ( hasEmissive )
{
ControlEmissiveColor = new CColorControl( guiEnv, pos + core::position2d<s32>(0, 225), L"emissive", this );
}
}
private:
CColorControl* ControlAmbientColor;
CColorControl* ControlDiffuseColor;
CColorControl* ControlSpecularColor;
CColorControl* ControlEmissiveColor;
};
GUI-Control to offer a selection of available textures.
class CTextureControl : public gui::IGUIElement
{
public:
CTextureControl(gui::IGUIEnvironment* guiEnv, video::IVideoDriver * driver, const core::position2d<s32> & pos, IGUIElement* parent, s32 id=-1)
: gui::IGUIElement(gui::EGUIET_ELEMENT, guiEnv, parent,id, core::rect<s32>(pos,pos+core::dimension2d<s32>(100,15)))
, DirtyFlag(true), ComboTexture(0)
{
core::rect<s32> rectCombo(0, 0, AbsoluteRect.getWidth(),AbsoluteRect.getHeight());
ComboTexture = guiEnv->addComboBox (rectCombo, this);
updateTextures(driver);
}
virtual bool OnEvent(const SEvent &event)
{
if ( event.EventType != EET_GUI_EVENT )
return false;
if ( event.GUIEvent.Caller == ComboTexture && event.GUIEvent.EventType == gui::EGET_COMBO_BOX_CHANGED )
{
DirtyFlag = true;
}
return false;
}
virtual bool bringToFront(IGUIElement* element)
{
bool result = gui::IGUIElement::bringToFront(element);
if ( Parent && element == ComboTexture )
result &= Parent->bringToFront(this);
return result;
}
const wchar_t * getSelectedTextureName() const
{
s32 selected = ComboTexture->getSelected();
if ( selected < 0 )
return 0;
return ComboTexture->getItem(selected);
}
void resetDirty()
{
DirtyFlag = false;
}
bool isDirty() const
{
return DirtyFlag;
};
void updateTextures(video::IVideoDriver * driver)
{
s32 oldSelected = ComboTexture->getSelected();
s32 selectNew = -1;
core::stringw oldTextureName;
if ( oldSelected >= 0 )
{
oldTextureName = ComboTexture->getItem(oldSelected);
}
ComboTexture->clear();
for ( u32 i=0; i < driver->getTextureCount(); ++i )
{
video::ITexture * texture = driver->getTextureByIndex(i);
core::stringw name( texture->getName() );
ComboTexture->addItem( name.c_str() );
if ( !oldTextureName.empty() && selectNew < 0 && name == oldTextureName )
selectNew = i;
}
ComboTexture->addItem( CLEAR_TEXTURE.c_str() );
if ( CLEAR_TEXTURE == oldTextureName )
selectNew = ComboTexture->getItemCount()-1;
if ( selectNew >= 0 )
ComboTexture->setSelected(selectNew);
DirtyFlag = true;
}
private:
bool DirtyFlag;
gui::IGUIComboBox * ComboTexture;
};
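The control only remembers that the selection changed; fetching the texture is left to the caller. A minimal sketch of that, assuming 'driver', 'material' and a 'textureControl' instance exist (this is also what updateMaterial() below does):
// Sketch only: react to a changed texture selection.
if ( textureControl->isDirty() )
{
	const wchar_t* texName = textureControl->getSelectedTextureName();
	material.TextureLayer[0].Texture = texName ? driver->getTexture( io::path(texName) ) : 0;
	textureControl->resetDirty();
}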
Control which allows setting some of the material values for a meshscenenode
struct SMeshNodeControl
{
SMeshNodeControl()
: Initialized(false), Driver(0), MeshManipulator(0), SceneNode(0), SceneNode2T(0), SceneNodeTangents(0)
, AllColorsControl(0), ButtonLighting(0), InfoLighting(0), ComboMaterial(0), TextureControl1(0), TextureControl2(0), ControlVertexColors(0)
{
}
virtual ~SMeshNodeControl()
{
if ( TextureControl1 )
TextureControl1->drop();
if ( TextureControl2 )
TextureControl2->drop();
if ( ControlVertexColors )
ControlVertexColors->drop();
if ( AllColorsControl )
AllColorsControl->drop();
}
void init(scene::IMeshSceneNode* node, IrrlichtDevice * device, const core::position2d<s32> & pos, const wchar_t * description)
{
if ( Initialized || !node || !device)
return;
Driver = device->getVideoDriver ();
gui::IGUIEnvironment* guiEnv = device->getGUIEnvironment();
scene::ISceneManager* smgr = device->getSceneManager();
MeshManipulator = smgr->getMeshManipulator();
SceneNode = node;
scene::IMeshManipulator * meshManip = smgr->getMeshManipulator();
scene::IMesh * mesh2T = meshManip->createMeshWith2TCoords(node->getMesh());
SceneNode2T = smgr->addMeshSceneNode(mesh2T, 0, -1, SceneNode->getPosition(), SceneNode->getRotation(), SceneNode->getScale() );
mesh2T->drop();
scene::IMesh * meshTangents = meshManip->createMeshWithTangents(node->getMesh(), false, false, false);
SceneNodeTangents = smgr->addMeshSceneNode(meshTangents, 0, -1
, SceneNode->getPosition(), SceneNode->getRotation(), SceneNode->getScale() );
meshTangents->drop();
video::SMaterial & material = SceneNode->getMaterial(0);
material.Lighting = true;
AllColorsControl = new CAllColorsControl(guiEnv, pos, description, true, guiEnv->getRootGUIElement());
AllColorsControl->setColorsToMaterialColors(material);
core::rect<s32> rectBtn(pos + core::position2d<s32>(0, 320), core::dimension2d<s32>(60, 15));
ButtonLighting = guiEnv->addButton (rectBtn, 0, -1, L"Lighting");
ButtonLighting->setIsPushButton(true);
ButtonLighting->setPressed(material.Lighting);
core::rect<s32> rectInfo( rectBtn.LowerRightCorner.X, rectBtn.UpperLeftCorner.Y, rectBtn.LowerRightCorner.X+40, rectBtn.UpperLeftCorner.Y+15 );
InfoLighting = guiEnv->addStaticText(L"", rectInfo, true, false );
InfoLighting->setTextAlignment(gui::EGUIA_CENTER, gui::EGUIA_CENTER );
core::rect<s32> rectCombo(pos.X, rectBtn.LowerRightCorner.Y, pos.X+100, rectBtn.LowerRightCorner.Y+15);
ComboMaterial = guiEnv->addComboBox (rectCombo);
for ( int i=0; i <= (int)video::EMT_ONETEXTURE_BLEND; ++i )
{
ComboMaterial->addItem( core::stringw(video::sBuiltInMaterialTypeNames[i]).c_str() );
}
ComboMaterial->setSelected( (s32)material.MaterialType );
core::position2d<s32> posTex(rectCombo.UpperLeftCorner.X,rectCombo.LowerRightCorner.Y);
TextureControl1 = new CTextureControl(guiEnv, Driver, posTex, guiEnv->getRootGUIElement());
posTex.Y += 15;
TextureControl2 = new CTextureControl(guiEnv, Driver, posTex, guiEnv->getRootGUIElement());
core::position2d<s32> posVertexColors( posTex.X, posTex.Y + 15);
ControlVertexColors = new CColorControl( guiEnv, posVertexColors, L"Vertex colors", guiEnv->getRootGUIElement());
video::S3DVertex * vertices = (video::S3DVertex *)node->getMesh()->getMeshBuffer(0)->getVertices();
if ( vertices )
{
ControlVertexColors->setColor(vertices[0].Color);
}
Initialized = true;
}
void update()
{
if ( !Initialized )
return;
video::SMaterial & material = SceneNode->getMaterial(0);
video::SMaterial & material2T = SceneNode2T->getMaterial(0);
video::SMaterial & materialTangents = SceneNodeTangents->getMaterial(0);
s32 selectedMaterial = ComboMaterial->getSelected();
if ( selectedMaterial >= (s32)video::EMT_SOLID && selectedMaterial <= (s32)video::EMT_ONETEXTURE_BLEND)
{
video::E_VERTEX_TYPE vertexType = getVertexTypeForMaterialType((video::E_MATERIAL_TYPE)selectedMaterial);
switch ( vertexType )
{
case video::EVT_STANDARD:
material.MaterialType = (video::E_MATERIAL_TYPE)selectedMaterial;
SceneNode->setVisible(true);
SceneNode2T->setVisible(false);
SceneNodeTangents->setVisible(false);
break;
case video::EVT_2TCOORDS:
material2T.MaterialType = (video::E_MATERIAL_TYPE)selectedMaterial;
SceneNode->setVisible(false);
SceneNode2T->setVisible(true);
SceneNodeTangents->setVisible(false);
break;
case video::EVT_TANGENTS:
materialTangents.MaterialType = (video::E_MATERIAL_TYPE)selectedMaterial;
SceneNode->setVisible(false);
SceneNode2T->setVisible(false);
SceneNodeTangents->setVisible(true);
break;
}
}
updateMaterial(material);
updateMaterial(material2T);
updateMaterial(materialTangents);
if ( ButtonLighting->isPressed() )
InfoLighting->setText(L"on");
else
InfoLighting->setText(L"off");
AllColorsControl->resetDirty();
TextureControl1->resetDirty();
TextureControl2->resetDirty();
ControlVertexColors->resetDirty();
}
void updateTextures()
{
TextureControl1->updateTextures(Driver);
TextureControl2->updateTextures(Driver);
}
protected:
void updateMaterial(video::SMaterial & material)
{
AllColorsControl->updateMaterialColors(material);
material.Lighting = ButtonLighting->isPressed();
if ( TextureControl1->isDirty() )
{
material.TextureLayer[0].Texture = Driver->getTexture( io::path(TextureControl1->getSelectedTextureName()) );
}
if ( TextureControl2->isDirty() )
{
material.TextureLayer[1].Texture = Driver->getTexture( io::path(TextureControl2->getSelectedTextureName()) );
}
if ( ControlVertexColors->isDirty() )
{
MeshManipulator->setVertexColors (SceneNode->getMesh(), ControlVertexColors->getColor());
MeshManipulator->setVertexColors (SceneNode2T->getMesh(), ControlVertexColors->getColor());
MeshManipulator->setVertexColors (SceneNodeTangents->getMesh(), ControlVertexColors->getColor());
}
}
bool Initialized;
video::IVideoDriver * Driver;
scene::IMeshManipulator* MeshManipulator;
scene::IMeshSceneNode* SceneNode;
scene::IMeshSceneNode* SceneNode2T;
scene::IMeshSceneNode* SceneNodeTangents;
CAllColorsControl* AllColorsControl;
gui::IGUIButton * ButtonLighting;
gui::IGUIStaticText* InfoLighting;
gui::IGUIComboBox * ComboMaterial;
CTextureControl* TextureControl1;
CTextureControl* TextureControl2;
CColorControl* ControlVertexColors;
};
Control to allow setting the color values of a lightscenenode.
struct SLightNodeControl
{
SLightNodeControl() : Initialized(false), SceneNode(0), AllColorsControl(0)
{
}
virtual ~SLightNodeControl()
{
if ( AllColorsControl )
AllColorsControl->drop();
}
void init(scene::ILightSceneNode* node, gui::IGUIEnvironment* guiEnv, const core::position2d<s32> & pos, const wchar_t * description)
{
if ( Initialized || !node || !guiEnv)
return;
SceneNode = node;
AllColorsControl = new CAllColorsControl(guiEnv, pos, description, false, guiEnv->getRootGUIElement());
const video::SLight & lightData = SceneNode->getLightData();
AllColorsControl->setColorsToLightDataColors(lightData);
Initialized = true;
}
void update()
{
if ( !Initialized )
return;
video::SLight & lightData = SceneNode->getLightData();
AllColorsControl->updateLightColors(lightData);
}
protected:
bool Initialized;
scene::ILightSceneNode* SceneNode;
CAllColorsControl* AllColorsControl;
};
Application configuration
struct SConfig
{
SConfig()
: RenderInBackground(true)
, DriverType(video::EDT_BURNINGSVIDEO)
, ScreenSize(640, 480)
{
}
bool RenderInBackground;
video::E_DRIVER_TYPE DriverType;
core::dimension2d<u32> ScreenSize;
};
Main application class
class CApp : public IEventReceiver
{
friend int main(int argc, char *argv[]);
public:
CApp()
: IsRunning(false)
, Device(0)
, Camera(0)
, GlobalAmbient(0)
{
}
~CApp()
{
}
void stopApp()
{
IsRunning = false;
}
virtual bool OnEvent(const SEvent &event)
{
if (event.EventType == EET_GUI_EVENT)
{
gui::IGUIEnvironment* env = Device->getGUIEnvironment();
switch(event.GUIEvent.EventType)
{
case gui::EGET_MENU_ITEM_SELECTED:
{
gui::IGUIContextMenu* menu = (gui::IGUIContextMenu*)event.GUIEvent.Caller;
s32 id = menu->getItemCommandId(menu->getSelectedItem());
switch(id)
{
case GUI_ID_OPEN_TEXTURE:
env->addFileOpenDialog(L"Please select a texture file to open");
break;
case GUI_ID_QUIT:
stopApp();
break;
}
}
break;
case gui::EGET_FILE_SELECTED:
{
gui::IGUIFileOpenDialog* dialog =
(gui::IGUIFileOpenDialog*)event.GUIEvent.Caller;
loadTexture(io::path(dialog->getFileName()).c_str());
}
break;
default:
break;
}
}
return false;
}
protected:
bool init(int argc, char *argv[])
{
Config.DriverType=driverChoiceConsole();
if (Config.DriverType==video::EDT_COUNT)
return false;
Device = createDevice(Config.DriverType, Config.ScreenSize);
if (!Device)
return false;
Device->setWindowCaption( DriverTypeNames[Config.DriverType] );
Device->setEventReceiver(this);
scene::ISceneManager* smgr = Device->getSceneManager();
video::IVideoDriver * driver = Device->getVideoDriver ();
gui::IGUIEnvironment* guiEnv = Device->getGUIEnvironment();
gui::IGUISkin* skin = guiEnv->getSkin();
gui::IGUIFont* font = guiEnv->getFont("../../media/fonthaettenschweiler.bmp");
if (font)
skin->setFont(font);
video::SColor col3dHighLight( skin->getColor(gui::EGDC_APP_WORKSPACE) );
col3dHighLight.setAlpha(255);
video::SColor colHighLight( col3dHighLight );
skin->setColor(gui::EGDC_HIGH_LIGHT, colHighLight );
skin->setColor(gui::EGDC_3D_HIGH_LIGHT, col3dHighLight );
createDefaultTextures(driver);
gui::IGUIContextMenu * menuBar = guiEnv->addMenu();
menuBar->addItem(L"File", -1, true, true);
gui::IGUIContextMenu* subMenuFile = menuBar->getSubMenu(0);
subMenuFile->addItem(L"Open texture ...", GUI_ID_OPEN_TEXTURE);
subMenuFile->addSeparator();
subMenuFile->addItem(L"Quit", GUI_ID_QUIT);
Camera = smgr->addCameraSceneNode (0, core::vector3df(0, 0, 0),
core::vector3df(0, 0, 100),
-1);
scene::IMeshSceneNode* nodeL = smgr->addCubeSceneNode (30.0f, 0, -1,
core::vector3df(-35, 0, 100),
core::vector3df(0, 0, 0),
core::vector3df(1.0f, 1.0f, 1.0f));
NodeLeft.init( nodeL, Device, core::position2d<s32>(10,20), L"left node" );
scene::IMeshSceneNode* nodeR = smgr->addCubeSceneNode (30.0f, 0, -1,
core::vector3df(35, 0, 100),
core::vector3df(0, 0, 0),
core::vector3df(1.0f, 1.0f, 1.0f));
NodeRight.init( nodeR, Device, core::position2d<s32>(530,20), L"right node" );
scene::ILightSceneNode* nodeLight = smgr->addLightSceneNode(0, core::vector3df(0, 0, 0),
video::SColorf(1.0f, 1.0f, 1.0f),
100.0f);
LightControl.init(nodeLight, guiEnv, core::position2d<s32>(270,20), L"light" );
scene::IMeshSceneNode* backgroundCube = smgr->addCubeSceneNode (200.0f, 0, -1, core::vector3df(0, 0, 0),
core::vector3df(45, 0, 0),
core::vector3df(1.0f, 1.0f, 1.0f));
backgroundCube->getMaterial(0).BackfaceCulling = false;
backgroundCube->getMaterial(0).EmissiveColor.set(255,50,50,50);
GlobalAmbient = new CColorControl( guiEnv, core::position2d<s32>(270, 300), L"global ambient", guiEnv->getRootGUIElement());
GlobalAmbient->setColor( smgr->getAmbientLight().toSColor() );
return true;
}
bool update()
{
using namespace irr;
video::IVideoDriver* videoDriver = Device->getVideoDriver();
if ( !Device->run() )
return false;
if ( Device->isWindowActive() || Config.RenderInBackground )
{
gui::IGUIEnvironment* guiEnv = Device->getGUIEnvironment();
scene::ISceneManager* smgr = Device->getSceneManager();
gui::IGUISkin * skin = guiEnv->getSkin();
NodeLeft.update();
NodeRight.update();
LightControl.update();
if ( GlobalAmbient->isDirty() )
{
smgr->setAmbientLight( GlobalAmbient->getColor() );
GlobalAmbient->resetDirty();
}
video::SColor bkColor( skin->getColor(gui::EGDC_APP_WORKSPACE) );
videoDriver->beginScene(true, true, bkColor);
smgr->drawAll();
guiEnv->drawAll();
videoDriver->endScene();
}
return true;
}
void run()
{
IsRunning = true;
if ( !Device )
return;
while(IsRunning)
{
if ( !update() )
break;
Device->sleep( 5 );
}
}
void quit()
{
IsRunning = false;
GlobalAmbient->drop();
GlobalAmbient = NULL;
if ( Device )
{
Device->closeDevice();
Device->drop();
Device = NULL;
}
}
void createDefaultTextures(video::IVideoDriver * driver)
{
const u32 width = 256;
const u32 height = 256;
video::IImage * imageA8R8G8B8 = driver->createImage (video::ECF_A8R8G8B8, core::dimension2d<u32>(width, height));
if ( !imageA8R8G8B8 )
return;
const u32 pitch = imageA8R8G8B8->getPitch();
for ( u32 y = 0; y < height; ++ y )
{
for ( u32 x = 0; x < pitch; ++x )
{
if ( y < height/3 )
{
if ( x < width/3 )
imageA8R8G8B8->setPixel (x, y, SCOL_BLACK);
else if ( x < 2*width/3 )
imageA8R8G8B8->setPixel (x, y, SCOL_BLUE);
else
imageA8R8G8B8->setPixel (x, y, SCOL_CYAN);
}
else if ( y < 2*height/3 )
{
if ( x < width/3 )
imageA8R8G8B8->setPixel (x, y, SCOL_GRAY);
else if ( x < 2*width/3 )
imageA8R8G8B8->setPixel (x, y, SCOL_GREEN);
else
imageA8R8G8B8->setPixel (x, y, SCOL_MAGENTA);
}
else
{
if ( x < width/3 )
imageA8R8G8B8->setPixel (x, y, SCOL_RED);
else if ( x < 2*width/3 )
imageA8R8G8B8->setPixel (x, y, SCOL_YELLOW);
else
imageA8R8G8B8->setPixel (x, y, SCOL_WHITE);
}
}
}
driver->addTexture (io::path("CARO_A8R8G8B8"), imageA8R8G8B8);
imageA8R8G8B8->fill(SCOL_WHITE);
driver->addTexture (io::path("WHITE_A8R8G8B8"), imageA8R8G8B8);
imageA8R8G8B8->fill(SCOL_BLACK);
driver->addTexture (io::path("BLACK_A8R8G8B8"), imageA8R8G8B8);
for ( u32 y = 0; y < height; ++ y )
{
for ( u32 x = 0; x < pitch; ++x )
{
imageA8R8G8B8->setPixel (x, y, video::SColor(y, x,x,x) );
}
}
driver->addTexture (io::path("GRAYSCALE_A8R8G8B8"), imageA8R8G8B8);
imageA8R8G8B8->drop();
}
void loadTexture(const io::path &name)
{
Device->getVideoDriver()->getTexture(name);
NodeLeft.updateTextures();
NodeRight.updateTextures();
}
private:
SConfig Config;
volatile bool IsRunning;
IrrlichtDevice * Device;
scene::ICameraSceneNode * Camera;
SMeshNodeControl NodeLeft;
SMeshNodeControl NodeRight;
SLightNodeControl LightControl;
CColorControl * GlobalAmbient;
};
A very short main as we do everything else in classes.
int main(int argc, char *argv[])
{
CApp APP;
if ( !APP.init(argc, argv) )
{
printf("init failed\n");
return 1;
}
APP.run();
APP.quit();
return 0;
}
You have a node with a mesh, one dynamic light and global ambient light to play around with. You can move the light with the cursor keys and +/-. You can move the camera while the left mouse button is pressed.
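The key handling itself is not shown until further down; as a reading aid, here is a hedged sketch of how the KeysPressed array filled in CApp::OnEvent could drive the light position each frame. The function name, axes and step size are illustrative and the real code may differ:
// Sketch only: move a light scene node based on the currently pressed keys.
void moveLightByKeys(scene::ILightSceneNode* light, const bool* keysPressed, f32 step)
{
	core::vector3df pos = light->getPosition();
	if ( keysPressed[KEY_LEFT] )  pos.X -= step;
	if ( keysPressed[KEY_RIGHT] ) pos.X += step;
	if ( keysPressed[KEY_UP] )    pos.Y += step;
	if ( keysPressed[KEY_DOWN] )  pos.Y -= step;
	if ( keysPressed[KEY_PLUS] )  pos.Z += step;
	if ( keysPressed[KEY_MINUS] ) pos.Z -= step;
	light->setPosition(pos);
}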
#include <irrlicht.h>
#include "driverChoice.h"
#include "exampleHelper.h"
#include "main.h"
using namespace irr;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
Variables within the unnamed namespace are globals which are restricted to this file.
namespace
{
enum EGUI_IDS
{
GUI_ID_OPEN_TEXTURE = 1,
GUI_ID_QUIT,
GUI_ID_MAX
};
const core::stringw CLEAR_TEXTURE = L"CLEAR texture";
const video::SColor SCOL_BLACK = video::SColor(255, 0, 0, 0);
const video::SColor SCOL_BLUE = video::SColor(255, 0, 0, 255);
const video::SColor SCOL_CYAN = video::SColor(255, 0, 255, 255);
const video::SColor SCOL_GRAY = video::SColor(255, 128,128, 128);
const video::SColor SCOL_GREEN = video::SColor(255, 0, 255, 0);
const video::SColor SCOL_MAGENTA = video::SColor(255, 255, 0, 255);
const video::SColor SCOL_RED = video::SColor(255, 255, 0, 0);
const video::SColor SCOL_YELLOW = video::SColor(255, 255, 255, 0);
const video::SColor SCOL_WHITE = video::SColor(255, 255, 255, 255);
};
Returns a new unique number on each call.
s32 makeUniqueId()
{
static int unique = GUI_ID_MAX;
++unique;
return unique;
}
Find out which vertex-type is needed for the given material type.
video::E_VERTEX_TYPE getVertexTypeForMaterialType(video::E_MATERIAL_TYPE materialType)
{
using namespace video;
switch ( materialType )
{
case EMT_SOLID:
return EVT_STANDARD;
case EMT_SOLID_2_LAYER:
return EVT_STANDARD;
case EMT_LIGHTMAP:
case EMT_LIGHTMAP_ADD:
case EMT_LIGHTMAP_M2:
case EMT_LIGHTMAP_M4:
case EMT_LIGHTMAP_LIGHTING:
case EMT_LIGHTMAP_LIGHTING_M2:
case EMT_LIGHTMAP_LIGHTING_M4:
return EVT_2TCOORDS;
case EMT_DETAIL_MAP:
return EVT_2TCOORDS;
case EMT_SPHERE_MAP:
return EVT_STANDARD;
case EMT_REFLECTION_2_LAYER:
return EVT_2TCOORDS;
case EMT_TRANSPARENT_ADD_COLOR:
return EVT_STANDARD;
case EMT_TRANSPARENT_ALPHA_CHANNEL:
return EVT_STANDARD;
case EMT_TRANSPARENT_ALPHA_CHANNEL_REF:
return EVT_STANDARD;
case EMT_TRANSPARENT_VERTEX_ALPHA:
return EVT_STANDARD;
case EMT_TRANSPARENT_REFLECTION_2_LAYER:
return EVT_2TCOORDS;
case EMT_NORMAL_MAP_SOLID:
case EMT_NORMAL_MAP_TRANSPARENT_ADD_COLOR:
case EMT_NORMAL_MAP_TRANSPARENT_VERTEX_ALPHA:
case EMT_PARALLAX_MAP_SOLID:
case EMT_PARALLAX_MAP_TRANSPARENT_ADD_COLOR:
case EMT_PARALLAX_MAP_TRANSPARENT_VERTEX_ALPHA:
return EVT_TANGENTS;
case EMT_ONETEXTURE_BLEND:
return EVT_STANDARD;
case EMT_FORCE_32BIT:
return EVT_STANDARD;
}
return EVT_STANDARD;
}
Custom GUI-control to edit color values.
CColorControl::CColorControl(gui::IGUIEnvironment* guiEnv, const core::position2d<s32> & pos, const wchar_t *text, IGUIElement* parent, s32 id)
: gui::IGUIElement(gui::EGUIET_ELEMENT, guiEnv, parent,id, core::rect< s32 >(pos, pos+core::dimension2d<s32>(80, 75)))
, DirtyFlag(true)
, Color(0)
, ColorStatic(0)
, EditAlpha(0)
, EditRed(0)
, EditGreen(0)
, EditBlue(0)
{
using namespace gui;
ButtonSetId = makeUniqueId();
const core::rect< s32 > rectControls(0,0,AbsoluteRect.getWidth(),AbsoluteRect.getHeight() );
IGUIStaticText * groupElement = guiEnv->addStaticText (L"", rectControls, true, false, this, -1, false);
groupElement->setNotClipped(true);
guiEnv->addStaticText (text, core::rect<s32>(0,0,80,15), false, false, groupElement, -1, false);
EditAlpha = addEditForNumbers(guiEnv, core::position2d<s32>(0,15), L"a", -1, groupElement );
EditRed = addEditForNumbers(guiEnv, core::position2d<s32>(0,30), L"r", -1, groupElement );
EditGreen = addEditForNumbers(guiEnv, core::position2d<s32>(0,45), L"g", -1, groupElement );
EditBlue = addEditForNumbers(guiEnv, core::position2d<s32>(0,60), L"b", -1, groupElement );
ColorStatic = guiEnv->addStaticText (L"", core::rect<s32>(60,15,80,75), true, false, groupElement, -1, true);
guiEnv->addButton (core::rect<s32>(60,35,80,50), groupElement, ButtonSetId, L"set");
setEditsFromColor(Color);
}
bool CColorControl::OnEvent(const SEvent &event)
{
if ( event.EventType != EET_GUI_EVENT )
return false;
if ( event.GUIEvent.Caller->getID() == ButtonSetId && event.GUIEvent.EventType == gui::EGET_BUTTON_CLICKED )
{
Color = getColorFromEdits();
setEditsFromColor(Color);
}
return false;
}
void CColorControl::setColor(const video::SColor& col)
{
DirtyFlag = true;
Color = col;
setEditsFromColor(Color);
}
gui::IGUIEditBox* CColorControl::addEditForNumbers(gui::IGUIEnvironment* guiEnv, const core::position2d<s32> & pos, const wchar_t *text, s32 id, gui::IGUIElement * parent)
{
using namespace gui;
core::rect< s32 > rect(pos, pos+core::dimension2d<s32>(10, 15));
guiEnv->addStaticText (text, rect, false, false, parent, -1, false);
rect += core::position2d<s32>( 20, 0 );
rect.LowerRightCorner.X += 20;
gui::IGUIEditBox* edit = guiEnv->addEditBox(L"0", rect, true, parent, id);
return edit;
}
video::SColor CColorControl::getColorFromEdits() const
{
video::SColor col;
if (EditAlpha)
{
u32 alpha = core::strtoul10(core::stringc(EditAlpha->getText()).c_str());
if (alpha > 255)
alpha = 255;
col.setAlpha(alpha);
}
if (EditRed)
{
u32 red = core::strtoul10(core::stringc(EditRed->getText()).c_str());
if (red > 255)
red = 255;
col.setRed(red);
}
if (EditGreen)
{
u32 green = core::strtoul10(core::stringc(EditGreen->getText()).c_str());
if (green > 255)
green = 255;
col.setGreen(green);
}
if (EditBlue)
{
u32 blue = core::strtoul10(core::stringc(EditBlue->getText()).c_str());
if (blue > 255)
blue = 255;
col.setBlue(blue);
}
return col;
}
void CColorControl::setEditsFromColor(video::SColor col)
{
DirtyFlag = true;
if ( EditAlpha )
EditAlpha->setText( core::stringw(col.getAlpha()).c_str() );
if ( EditRed )
EditRed->setText( core::stringw(col.getRed()).c_str() );
if ( EditGreen )
EditGreen->setText( core::stringw(col.getGreen()).c_str() );
if ( EditBlue )
EditBlue->setText( core::stringw(col.getBlue()).c_str() );
if ( ColorStatic )
ColorStatic->setBackgroundColor(col);
}
Custom GUI-control to edit all colors typically used in materials and lights.
CTypicalColorsControl::CTypicalColorsControl(gui::IGUIEnvironment* guiEnv, const core::position2d<s32> & pos, bool hasEmissive, IGUIElement* parent, s32 id)
: gui::IGUIElement(gui::EGUIET_ELEMENT, guiEnv, parent,id, core::rect<s32>(pos,pos+core::dimension2d<s32>(60,250)))
, ControlAmbientColor(0), ControlDiffuseColor(0), ControlSpecularColor(0), ControlEmissiveColor(0)
{
ControlAmbientColor = new CColorControl( guiEnv, core::position2d<s32>(0, 0), L"Ambient", this);
ControlDiffuseColor = new CColorControl( guiEnv, core::position2d<s32>(0, 75), L"Diffuse", this );
ControlSpecularColor = new CColorControl( guiEnv, core::position2d<s32>(0, 150), L"Specular", this );
if ( hasEmissive )
{
ControlEmissiveColor = new CColorControl( guiEnv, core::position2d<s32>(0, 225), L"Emissive", this );
}
}
CTypicalColorsControl::~CTypicalColorsControl()
{
ControlAmbientColor->drop();
ControlDiffuseColor->drop();
if ( ControlEmissiveColor )
ControlEmissiveColor->drop();
ControlSpecularColor->drop();
}
void CTypicalColorsControl::setColorsToMaterialColors(const video::SMaterial & material)
{
ControlAmbientColor->setColor(material.AmbientColor);
ControlDiffuseColor->setColor(material.DiffuseColor);
ControlEmissiveColor->setColor(material.EmissiveColor);
ControlSpecularColor->setColor(material.SpecularColor);
}
void CTypicalColorsControl::updateMaterialColors(video::SMaterial & material) const
{
if ( ControlAmbientColor->isDirty() )
material.AmbientColor = ControlAmbientColor->getColor();
if ( ControlDiffuseColor->isDirty() )
material.DiffuseColor = ControlDiffuseColor->getColor();
if ( ControlEmissiveColor->isDirty() )
material.EmissiveColor = ControlEmissiveColor->getColor();
if ( ControlSpecularColor->isDirty() )
material.SpecularColor = ControlSpecularColor->getColor();
}
void CTypicalColorsControl::setColorsToLightDataColors(const video::SLight & lightData)
{
ControlAmbientColor->setColor(lightData.AmbientColor.toSColor());
ControlDiffuseColor->setColor(lightData.DiffuseColor.toSColor());
ControlSpecularColor->setColor(lightData.SpecularColor.toSColor());
}
void CTypicalColorsControl::updateLightColors(video::SLight & lightData) const
{
if ( ControlAmbientColor->isDirty() )
lightData.AmbientColor = video::SColorf( ControlAmbientColor->getColor() );
if ( ControlDiffuseColor->isDirty() )
lightData.DiffuseColor = video::SColorf( ControlDiffuseColor->getColor() );
if ( ControlSpecularColor->isDirty() )
lightData.SpecularColor = video::SColorf(ControlSpecularColor->getColor() );
}
void CTypicalColorsControl::resetDirty()
{
ControlAmbientColor->resetDirty();
ControlDiffuseColor->resetDirty();
ControlSpecularColor->resetDirty();
if ( ControlEmissiveColor )
ControlEmissiveColor->resetDirty();
}
GUI-Control to offer a selection of available textures.
CTextureControl::CTextureControl(gui::IGUIEnvironment* guiEnv, video::IVideoDriver * driver, const core::position2d<s32> & pos, IGUIElement* parent, s32 id)
: gui::IGUIElement(gui::EGUIET_ELEMENT, guiEnv, parent,id, core::rect<s32>(pos,pos+core::dimension2d<s32>(150,15)))
, DirtyFlag(true), ComboTexture(0)
{
core::rect<s32> rectCombo(0, 0, AbsoluteRect.getWidth(),AbsoluteRect.getHeight());
ComboTexture = guiEnv->addComboBox (rectCombo, this);
updateTextures(driver);
}
bool CTextureControl::OnEvent(const SEvent &event)
{
if ( event.EventType != EET_GUI_EVENT )
return false;
if ( event.GUIEvent.Caller == ComboTexture && event.GUIEvent.EventType == gui::EGET_COMBO_BOX_CHANGED )
{
DirtyFlag = true;
}
return false;
}
bool CTextureControl::bringToFront(IGUIElement* element)
{
bool result = gui::IGUIElement::bringToFront(element);
if ( Parent && element == ComboTexture )
result &= Parent->bringToFront(this);
return result;
}
const wchar_t * CTextureControl::getSelectedTextureName() const
{
s32 selected = ComboTexture->getSelected();
if ( selected < 0 )
return 0;
return ComboTexture->getItem(selected);
}
void CTextureControl::selectTextureByName(const irr::core::stringw& name)
{
for (u32 i=0; i< ComboTexture->getItemCount(); ++i)
{
if ( name == ComboTexture->getItem(i))
{
ComboTexture->setSelected(i);
DirtyFlag = true;
return;
}
}
}
void CTextureControl::updateTextures(video::IVideoDriver * driver)
{
s32 oldSelected = ComboTexture->getSelected();
s32 selectNew = -1;
core::stringw oldTextureName;
if ( oldSelected >= 0 )
{
oldTextureName = ComboTexture->getItem(oldSelected);
}
ComboTexture->clear();
for ( u32 i=0; i < driver->getTextureCount(); ++i )
{
video::ITexture * texture = driver->getTextureByIndex(i);
core::stringw name( texture->getName() );
ComboTexture->addItem( name.c_str() );
if ( !oldTextureName.empty() && selectNew < 0 && name == oldTextureName )
selectNew = i;
}
ComboTexture->addItem( CLEAR_TEXTURE.c_str() );
if ( CLEAR_TEXTURE == oldTextureName )
selectNew = ComboTexture->getItemCount()-1;
if ( selectNew >= 0 )
ComboTexture->setSelected(selectNew);
DirtyFlag = true;
}
Control which allows setting some of the material values for a meshscenenode
void CMaterialControl::init(scene::IMeshSceneNode* node, IrrlichtDevice * device, const core::position2d<s32> & pos, const wchar_t * description)
{
if ( Initialized || !node || !device)
return;
Driver = device->getVideoDriver ();
gui::IGUIEnvironment* guiEnv = device->getGUIEnvironment();
const video::SMaterial & material = node->getMaterial(0);
s32 top = pos.Y;
guiEnv->addStaticText(description, core::rect<s32>(pos.X, top, pos.X+60, top+15), false, false, 0, -1, false);
top += 15;
core::rect<s32> rectCombo(pos.X, top, 150, top+15);
top += 15;
ComboMaterial = guiEnv->addComboBox (rectCombo);
for ( int i=0; i <= (int)video::EMT_ONETEXTURE_BLEND; ++i )
{
ComboMaterial->addItem( core::stringw(video::sBuiltInMaterialTypeNames[i]).c_str() );
}
ComboMaterial->setSelected( (s32)material.MaterialType );
core::rect<s32> rectBtn(core::position2d<s32>(pos.X, top), core::dimension2d<s32>(100, 15));
top += 15;
ButtonLighting = guiEnv->addButton (rectBtn, 0, -1, L"Lighting");
ButtonLighting->setIsPushButton(true);
ButtonLighting->setPressed(material.Lighting);
core::rect<s32> rectInfo( rectBtn.LowerRightCorner.X, rectBtn.UpperLeftCorner.Y, rectBtn.LowerRightCorner.X+40, rectBtn.UpperLeftCorner.Y+15 );
InfoLighting = guiEnv->addStaticText(L"", rectInfo, true, false );
InfoLighting->setTextAlignment(gui::EGUIA_CENTER, gui::EGUIA_CENTER );
TypicalColorsControl = new CTypicalColorsControl(guiEnv, core::position2d<s32>(pos.X, top), true, guiEnv->getRootGUIElement());
top += 300;
TypicalColorsControl->setColorsToMaterialColors(material);
guiEnv->addStaticText(L"Textures", core::rect<s32>(pos.X, top, pos.X+60, top+15), false, false, 0, -1, false);
top += 15;
for (irr::u32 i=0; i<irr::video::MATERIAL_MAX_TEXTURES; ++i)
{
TextureControls[i] = new CTextureControl(guiEnv, Driver, core::position2di(pos.X, top), guiEnv->getRootGUIElement());
top += 15;
}
Initialized = true;
}
void CMaterialControl::update(scene::IMeshSceneNode* sceneNode, scene::IMeshSceneNode* sceneNode2T, scene::IMeshSceneNode* sceneNodeTangents)
{
if ( !Initialized )
return;
video::SMaterial & material = sceneNode->getMaterial(0);
video::SMaterial & material2T = sceneNode2T->getMaterial(0);
video::SMaterial & materialTangents = sceneNodeTangents->getMaterial(0);
s32 selectedMaterial = ComboMaterial->getSelected();
if ( selectedMaterial >= (s32)video::EMT_SOLID && selectedMaterial <= (s32)video::EMT_ONETEXTURE_BLEND)
{
video::E_VERTEX_TYPE vertexType = getVertexTypeForMaterialType((video::E_MATERIAL_TYPE)selectedMaterial);
switch ( vertexType )
{
case video::EVT_STANDARD:
material.MaterialType = (video::E_MATERIAL_TYPE)selectedMaterial;
sceneNode->setVisible(true);
sceneNode2T->setVisible(false);
sceneNodeTangents->setVisible(false);
break;
case video::EVT_2TCOORDS:
material2T.MaterialType = (video::E_MATERIAL_TYPE)selectedMaterial;
sceneNode->setVisible(false);
sceneNode2T->setVisible(true);
sceneNodeTangents->setVisible(false);
break;
case video::EVT_TANGENTS:
materialTangents.MaterialType = (video::E_MATERIAL_TYPE)selectedMaterial;
sceneNode->setVisible(false);
sceneNode2T->setVisible(false);
sceneNodeTangents->setVisible(true);
break;
}
}
updateMaterial(material);
updateMaterial(material2T);
updateMaterial(materialTangents);
if ( ButtonLighting->isPressed() )
InfoLighting->setText(L"is on");
else
InfoLighting->setText(L"is off");
TypicalColorsControl->resetDirty();
for (irr::u32 i=0; i<irr::video::MATERIAL_MAX_TEXTURES; ++i)
TextureControls[i]->resetDirty();
}
void CMaterialControl::updateTextures()
{
for (irr::u32 i=0; i<irr::video::MATERIAL_MAX_TEXTURES; ++i)
TextureControls[i]->updateTextures(Driver);
}
void CMaterialControl::selectTextures(const irr::core::stringw& name)
{
for (irr::u32 i=0; i<irr::video::MATERIAL_MAX_TEXTURES; ++i)
TextureControls[i]->selectTextureByName(name);
}
bool CMaterialControl::isLightingEnabled() const
{
return ButtonLighting && ButtonLighting->isPressed();
}
void CMaterialControl::updateMaterial(video::SMaterial & material)
{
TypicalColorsControl->updateMaterialColors(material);
material.Lighting = ButtonLighting->isPressed();
for (irr::u32 i=0; i<irr::video::MATERIAL_MAX_TEXTURES; ++i)
{
if ( TextureControls[i]->isDirty() )
{
material.TextureLayer[i].Texture = Driver->getTexture( io::path(TextureControls[i]->getSelectedTextureName()) );
}
}
}
Control to allow setting the color values of a light scene node.
void CLightNodeControl::init(scene::ILightSceneNode* node, gui::IGUIEnvironment* guiEnv, const core::position2d<s32> & pos, const wchar_t * description)
{
if ( Initialized || !node || !guiEnv)
return;
guiEnv->addStaticText(description, core::rect<s32>(pos.X, pos.Y, pos.X+70, pos.Y+15), false, false, 0, -1, false);
TypicalColorsControl = new CTypicalColorsControl(guiEnv, core::position2d<s32>(pos.X, pos.Y+15), false, guiEnv->getRootGUIElement());
const video::SLight & lightData = node->getLightData();
TypicalColorsControl->setColorsToLightDataColors(lightData);
Initialized = true;
}
void CLightNodeControl::update(scene::ILightSceneNode* node)
{
if ( !Initialized )
return;
video::SLight & lightData = node->getLightData();
TypicalColorsControl->updateLightColors(lightData);
}
Main application class
Event handler
bool CApp::OnEvent(const SEvent &event)
{
if (event.EventType == EET_GUI_EVENT)
{
gui::IGUIEnvironment* env = Device->getGUIEnvironment();
switch(event.GUIEvent.EventType)
{
case gui::EGET_MENU_ITEM_SELECTED:
{
gui::IGUIContextMenu* menu = (gui::IGUIContextMenu*)event.GUIEvent.Caller;
s32 id = menu->getItemCommandId(menu->getSelectedItem());
switch(id)
{
case GUI_ID_OPEN_TEXTURE:
env->addFileOpenDialog(L"Please select a texture file to open");
break;
case GUI_ID_QUIT:
setRunning(false);
break;
}
}
break;
case gui::EGET_FILE_SELECTED:
{
gui::IGUIFileOpenDialog* dialog =
(gui::IGUIFileOpenDialog*)event.GUIEvent.Caller;
loadTexture(io::path(dialog->getFileName()).c_str());
}
break;
default:
break;
}
}
else if (event.EventType == EET_KEY_INPUT_EVENT)
{
KeysPressed[event.KeyInput.Key] = event.KeyInput.PressedDown;
}
else if (event.EventType == EET_MOUSE_INPUT_EVENT)
{
if (!MousePressed && event.MouseInput.isLeftPressed())
{
gui::IGUIEnvironment* guiEnv = Device->getGUIEnvironment();
if ( guiEnv->getHovered() == guiEnv->getRootGUIElement() )
{
MousePressed = true;
MouseStart.X = event.MouseInput.X;
MouseStart.Y = event.MouseInput.Y;
}
}
else if (MousePressed && !event.MouseInput.isLeftPressed())
{
MousePressed = false;
}
}
return false;
}
bool CApp::init(int argc, char *argv[])
{
Config.DriverType=driverChoiceConsole();
if (Config.DriverType==video::EDT_COUNT)
return false;
Device = createDevice(Config.DriverType, Config.ScreenSize);
if (!Device)
return false;
Device->setWindowCaption( core::stringw(video::DRIVER_TYPE_NAMES[Config.DriverType]).c_str() );
Device->setEventReceiver(this);
scene::ISceneManager* smgr = Device->getSceneManager();
video::IVideoDriver * driver = Device->getVideoDriver ();
gui::IGUIEnvironment* guiEnv = Device->getGUIEnvironment();
MeshManipulator = smgr->getMeshManipulator();
gui::IGUISkin* skin = guiEnv->getSkin();
gui::IGUIFont* font = guiEnv->getFont(getExampleMediaPath() + "fonthaettenschweiler.bmp");
if (font)
skin->setFont(font);
video::SColor col3dHighLight( skin->getColor(gui::EGDC_APP_WORKSPACE) );
col3dHighLight.setAlpha(255);
video::SColor colHighLight( col3dHighLight );
skin->setColor(gui::EGDC_HIGH_LIGHT, colHighLight );
skin->setColor(gui::EGDC_3D_HIGH_LIGHT, col3dHighLight );
createDefaultTextures(driver);
gui::IGUIContextMenu * menuBar = guiEnv->addMenu();
menuBar->addItem(L"File", -1, true, true);
gui::IGUIContextMenu* subMenuFile = menuBar->getSubMenu(0);
subMenuFile->addItem(L"Open texture ...", GUI_ID_OPEN_TEXTURE);
subMenuFile->addSeparator();
subMenuFile->addItem(L"Quit", GUI_ID_QUIT);
Camera = smgr->addCameraSceneNode (0, core::vector3df(0, 40, -40),
core::vector3df(0, 10, 0),
-1);
video::SMaterial defaultMaterial;
defaultMaterial.Shininess = 20.f;
SceneNode = smgr->addCubeSceneNode (30.0f, 0, -1,
core::vector3df(0, 0, 0),
core::vector3df(0.f, 45.f, 0.f),
core::vector3df(1.0f, 1.0f, 1.0f));
SceneNode->getMaterial(0) = defaultMaterial;
const s32 controlsTop = 20;
MeshMaterialControl = new CMaterialControl();
MeshMaterialControl->init( SceneNode, Device, core::position2d<s32>(10,controlsTop), L"Material" );
MeshMaterialControl->selectTextures(core::stringw("CARO_A8R8G8B8"));
scene::IMesh * mesh2T = MeshManipulator->createMeshWith2TCoords(SceneNode->getMesh());
SceneNode2T = smgr->addMeshSceneNode(mesh2T, 0, -1, SceneNode->getPosition(), SceneNode->getRotation(), SceneNode->getScale() );
mesh2T->drop();
scene::IMesh * meshTangents = MeshManipulator->createMeshWithTangents(SceneNode->getMesh(), false, false, false);
SceneNodeTangents = smgr->addMeshSceneNode(meshTangents, 0, -1
, SceneNode->getPosition(), SceneNode->getRotation(), SceneNode->getScale() );
meshTangents->drop();
NodeLight = smgr->addLightSceneNode(0, core::vector3df(0, 0, -40),
video::SColorf(1.0f, 1.0f, 1.0f),
35.0f);
LightControl = new CLightNodeControl();
LightControl->init(NodeLight, guiEnv, core::position2d<s32>(550,controlsTop), L"Dynamic light" );
scene::IMeshSceneNode* backgroundCube = smgr->addCubeSceneNode (200.0f, 0, -1, core::vector3df(0, 0, 0),
core::vector3df(45, 0, 0),
core::vector3df(1.0f, 1.0f, 1.0f));
backgroundCube->getMaterial(0).BackfaceCulling = false;
backgroundCube->getMaterial(0).EmissiveColor.set(255,50,50,50);
guiEnv->addStaticText(L"Mesh", core::rect<s32>(200, controlsTop, 270, controlsTop+15), false, false, 0, -1, false);
ControlVertexColors = new CColorControl( guiEnv, core::position2d<s32>(200, controlsTop+15), L"Vertex colors", guiEnv->getRootGUIElement());
video::S3DVertex * vertices = (video::S3DVertex *)SceneNode->getMesh()->getMeshBuffer(0)->getVertices();
if ( vertices )
{
ControlVertexColors->setColor(vertices[0].Color);
}
GlobalAmbient = new CColorControl( guiEnv, core::position2d<s32>(550, 300), L"Global ambient", guiEnv->getRootGUIElement());
GlobalAmbient->setColor( smgr->getAmbientLight().toSColor() );
return true;
}
Update one frame
bool CApp::update()
{
using namespace irr;
video::IVideoDriver* videoDriver = Device->getVideoDriver();
if ( !Device->run() )
return false;
ITimer * timer = Device->getTimer();
u32 newTick = timer->getRealTime();
f32 deltaTime = RealTimeTick > 0 ? f32(newTick-RealTimeTick)/1000.f : 0.f;
RealTimeTick = newTick;
if ( Device->isWindowActive() || Config.RenderInBackground )
{
gui::IGUIEnvironment* guiEnv = Device->getGUIEnvironment();
scene::ISceneManager* smgr = Device->getSceneManager();
gui::IGUISkin * skin = guiEnv->getSkin();
MeshMaterialControl->update(SceneNode, SceneNode2T, SceneNodeTangents);
LightControl->update(NodeLight);
if ( ControlVertexColors->isDirty() )
{
MeshManipulator->setVertexColors (SceneNode->getMesh(), ControlVertexColors->getColor());
MeshManipulator->setVertexColors (SceneNode2T->getMesh(), ControlVertexColors->getColor());
MeshManipulator->setVertexColors (SceneNodeTangents->getMesh(), ControlVertexColors->getColor());
ControlVertexColors->resetDirty();
}
if ( GlobalAmbient->isDirty() )
{
smgr->setAmbientLight( GlobalAmbient->getColor() );
GlobalAmbient->resetDirty();
}
const float zoomSpeed = 10.f * deltaTime;
const float rotationSpeed = 100.f * deltaTime;
if ( KeysPressed[KEY_PLUS] || KeysPressed[KEY_ADD])
ZoomOut(NodeLight, zoomSpeed);
if ( KeysPressed[KEY_MINUS] || KeysPressed[KEY_SUBTRACT])
ZoomOut(NodeLight, -zoomSpeed);
if ( KeysPressed[KEY_RIGHT])
RotateHorizontal(NodeLight, rotationSpeed);
if ( KeysPressed[KEY_LEFT])
RotateHorizontal(NodeLight, -rotationSpeed);
UpdateRotationAxis(NodeLight, LightRotationAxis);
if ( KeysPressed[KEY_UP])
RotateAroundAxis(NodeLight, rotationSpeed, LightRotationAxis);
if ( KeysPressed[KEY_DOWN])
RotateAroundAxis(NodeLight, -rotationSpeed, LightRotationAxis);
if (MousePressed)
{
gui::ICursorControl* cursorControl = Device->getCursorControl();
const core::position2d<s32>& mousePos = cursorControl->getPosition ();
RotateHorizontal(Camera, rotationSpeed * (MouseStart.X - mousePos.X));
RotateAroundAxis(Camera, rotationSpeed * (mousePos.Y - MouseStart.Y), CameraRotationAxis);
MouseStart = mousePos;
}
video::SColor bkColor( skin->getColor(gui::EGDC_APP_WORKSPACE) );
videoDriver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, bkColor);
smgr->drawAll();
guiEnv->drawAll();
if ( MeshMaterialControl->isLightingEnabled() )
{
video::SMaterial lineMaterial;
lineMaterial.Lighting = false;
videoDriver->setMaterial(lineMaterial);
videoDriver->setTransform(video::ETS_WORLD, core::IdentityMatrix);
videoDriver->draw3DLine(NodeLight->getAbsolutePosition(), SceneNode->getAbsolutePosition());
}
videoDriver->endScene();
}
Device->sleep( 5 );
return true;
}
void CApp::quit()
{
IsRunning = false;
delete LightControl;
LightControl = NULL;
delete MeshMaterialControl;
MeshMaterialControl = NULL;
if ( ControlVertexColors )
{
ControlVertexColors->drop();
ControlVertexColors = NULL;
}
if ( GlobalAmbient )
{
GlobalAmbient->drop();
GlobalAmbient = NULL;
}
if ( Device )
{
Device->closeDevice();
Device->drop();
Device = NULL;
}
}
void CApp::createDefaultTextures(video::IVideoDriver * driver)
{
const u32 width = 256;
const u32 height = 256;
video::IImage * imageA8R8G8B8 = driver->createImage (video::ECF_A8R8G8B8, core::dimension2d<u32>(width, height));
if ( !imageA8R8G8B8 )
return;
const u32 pitch = imageA8R8G8B8->getPitch();
for ( u32 y = 0; y < height; ++ y )
{
for ( u32 x = 0; x < pitch; ++x )
{
if ( y < height/3 )
{
if ( x < width/3 )
imageA8R8G8B8->setPixel (x, y, SCOL_BLACK);
else if ( x < 2*width/3 )
imageA8R8G8B8->setPixel (x, y, SCOL_BLUE);
else
imageA8R8G8B8->setPixel (x, y, SCOL_CYAN);
}
else if ( y < 2*height/3 )
{
if ( x < width/3 )
imageA8R8G8B8->setPixel (x, y, SCOL_GRAY);
else if ( x < 2*width/3 )
imageA8R8G8B8->setPixel (x, y, SCOL_GREEN);
else
imageA8R8G8B8->setPixel (x, y, SCOL_MAGENTA);
}
else
{
if ( x < width/3 )
imageA8R8G8B8->setPixel (x, y, SCOL_RED);
else if ( x < 2*width/3 )
imageA8R8G8B8->setPixel (x, y, SCOL_YELLOW);
else
imageA8R8G8B8->setPixel (x, y, SCOL_WHITE);
}
}
}
driver->addTexture (io::path("CARO_A8R8G8B8"), imageA8R8G8B8);
imageA8R8G8B8->fill(SCOL_WHITE);
driver->addTexture (io::path("WHITE_A8R8G8B8"), imageA8R8G8B8);
imageA8R8G8B8->fill(SCOL_BLACK);
driver->addTexture (io::path("BLACK_A8R8G8B8"), imageA8R8G8B8);
for ( u32 y = 0; y < height; ++ y )
{
for ( u32 x = 0; x < pitch; ++x )
{
imageA8R8G8B8->setPixel (x, y, video::SColor(y, x,x,x) );
}
}
driver->addTexture (io::path("GRAYSCALE_A8R8G8B8"), imageA8R8G8B8);
imageA8R8G8B8->drop();
}
void CApp::loadTexture(const io::path &name)
{
Device->getVideoDriver()->getTexture(name);
MeshMaterialControl->updateTextures();
}
void CApp::RotateHorizontal(irr::scene::ISceneNode* node, irr::f32 angle)
{
if ( node )
{
core::vector3df pos(node->getPosition());
core::vector2df dir(pos.X, pos.Z);
dir.rotateBy(angle);
pos.X = dir.X;
pos.Z = dir.Y;
node->setPosition(pos);
}
}
void CApp::RotateAroundAxis(irr::scene::ISceneNode* node, irr::f32 angle, const irr::core::vector3df& axis)
{
if ( node )
{
core::vector3df pos(node->getPosition());
core::matrix4 mat;
mat.setRotationAxisRadians (core::degToRad(angle), axis);
mat.rotateVect(pos);
node->setPosition(pos);
}
}
void CApp::ZoomOut(irr::scene::ISceneNode* node, irr::f32 units)
{
if ( node )
{
core::vector3df pos(node->getPosition());
irr::f32 len = pos.getLength() + units;
pos.setLength(len);
node->setPosition(pos);
}
}
void CApp::UpdateRotationAxis(irr::scene::ISceneNode* node, irr::core::vector3df& axis)
{
core::vector3df pos(node->getPosition());
if ( !core::equals(pos.X, 0.f) || !core::equals(pos.Z, 0.f) )
{
axis.X = -pos.Z;
axis.Z = pos.X;
axis.normalize();
}
}
Short main, as most of the work is done in the classes above.
int main(int argc, char *argv[])
{
CApp APP;
if ( !APP.init(argc, argv) )
{
printf("init failed\n");
APP.quit();
return 1;
}
APP.setRunning(true);
main application loop
while(APP.isRunning())
{
if ( !APP.update() )
break;
}
APP.quit();
return 0;
}
diff --git a/examples/23.SMeshHandling/tutorial.html b/examples/23.SMeshHandling/tutorial.html
index b5ef8e88..ebb196ee 100644
--- a/examples/23.SMeshHandling/tutorial.html
+++ b/examples/23.SMeshHandling/tutorial.html
@@ -195,12 +195,12 @@ tr.heading h2 {
-
+
A tutorial by geoff.
In this tutorial we'll learn how to create custom meshes and handle them in Irrlicht. We'll create an interesting heightmap with some lighting effects. With the keys 1, 2 and 3 you can choose a different mesh layout, which is put into the mesh buffers as desired. All positions, normals, etc. are updated accordingly.
-
Ok, let's start with the headers (I think there's nothing much to say about them)
#include <irrlicht.h>
#include "driverChoice.h"
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
using namespace irr;
using namespace video;
using namespace core;
using namespace scene;
using namespace io;
using namespace gui;
This is the type of the functions which work out the colour.
typedef SColor colour_func(f32 x, f32 y, f32 z);
Here comes a set of functions which can be used for coloring the nodes while creating the mesh.
SColor grey(f32, f32, f32 z)
{
u32 n = (u32)(255.f * z);
return SColor(255, n, n, n);
}
SColor yellow(f32 x, f32 y, f32)
{
return SColor(255, 128 + (u32)(127.f * x), 128 + (u32)(127.f * y), 255);
}
SColor white(f32, f32, f32) { return SColor(255, 255, 255, 255); }
The type of the functions which generate the heightmap. x and y range between -0.5 and 0.5, and s is the scale of the heightmap.
typedef f32 generate_func(s16 x, s16 y, f32 s);
f32 eggbox(s16 x, s16 y, f32 s)
{
const f32 r = 4.f*sqrtf((f32)(x*x + y*y))/s;
const f32 z = expf(-r * 2) * (cosf(0.2f * x) + cosf(0.2f * y));
return 0.25f+0.25f*z;
}
f32 moresine(s16 x, s16 y, f32 s)
{
const f32 xx=0.3f*(f32)x/s;
const f32 yy=12*y/s;
const f32 z = sinf(xx*xx+yy)*sinf(xx+yy*yy);
return 0.25f + 0.25f * z;
}
f32 justexp(s16 x, s16 y, f32 s)
{
const f32 xx=6*x/s;
const f32 yy=6*y/s;
const f32 z = (xx*xx+yy*yy);
return 0.3f*z*cosf(xx*yy);
}
A simple class for representing heightmaps. Most of this should be obvious.
class HeightMap
{
private:
const u16 Width;
const u16 Height;
f32 s;
core::array<f32> data;
public:
HeightMap(u16 _w, u16 _h) : Width(_w), Height(_h), s(0.f), data(0)
{
s = sqrtf((f32)(Width * Width + Height * Height));
data.set_used(Width * Height);
}
void generate(generate_func f)
{
u32 i=0;
for(u16 y = 0; y < Height; ++y)
for(u16 x = 0; x < Width; ++x)
set(i++, calc(f, x, y));
}
u16 height() const { return Height; }
u16 width() const { return Width; }
f32 calc(generate_func f, u16 x, u16 y) const
{
const f32 xx = (f32)x - Width*0.5f;
const f32 yy = (f32)y - Height*0.5f;
return f((u16)xx, (u16)yy, s);
}
void set(u16 x, u16 y, f32 z) { data[y * Width + x] = z; }
void set(u32 i, f32 z) { data[i] = z; }
f32 get(u16 x, u16 y) const { return data[y * Width + x]; }
The only difficult part. This considers the normal at (x, y) to be the cross product of the vectors between the adjacent points in the horizontal and vertical directions.
-
s is a scaling factor, which is necessary if the height units are different from the coordinate units; for example, if your map has heights in metres and the coordinates are in units of a kilometer.
vector3df getnormal(u16 x, u16 y, f32 s) const
{
const f32 zc = get(x, y);
f32 zl, zr, zu, zd;
if (x == 0)
{
zr = get(x + 1, y);
zl = zc + zc - zr;
}
else if (x == Width - 1)
{
zl = get(x - 1, y);
zr = zc + zc - zl;
}
else
{
zr = get(x + 1, y);
zl = get(x - 1, y);
}
if (y == 0)
{
zd = get(x, y + 1);
zu = zc + zc - zd;
}
else if (y == Height - 1)
{
zu = get(x, y - 1);
zd = zc + zc - zu;
}
else
{
zd = get(x, y + 1);
zu = get(x, y - 1);
}
return vector3df(s * 2 * (zl - zr), 4, s * 2 * (zd - zu)).normalize();
}
};
A class which generates a mesh from a heightmap.
class TMesh
{
private:
u16 Width;
u16 Height;
f32 Scale;
public:
SMesh* Mesh;
TMesh() : Mesh(0), Width(0), Height(0), Scale(1.f)
{
Mesh = new SMesh();
}
~TMesh()
{
Mesh->drop();
}
void init(const HeightMap &hm, f32 scale, colour_func cf, IVideoDriver *driver)
{
Scale = scale;
const u32 mp = driver -> getMaximalPrimitiveCount();
Width = hm.width();
Height = hm.height();
const u32 sw = mp / (6 * Height);
u32 i=0;
for(u32 y0 = 0; y0 < Height; y0 += sw)
{
u16 y1 = y0 + sw;
if (y1 >= Height)
y1 = Height - 1;
addstrip(hm, cf, y0, y1, i);
++i;
}
if (i<Mesh->getMeshBufferCount())
{
for (u32 j=i; j<Mesh->getMeshBufferCount(); ++j)
{
Mesh->getMeshBuffer(j)->drop();
}
Mesh->MeshBuffers.erase(i,Mesh->getMeshBufferCount()-i);
}
Mesh->setDirty();
Mesh->recalculateBoundingBox();
}
void addstrip(const HeightMap &hm, colour_func cf, u16 y0, u16 y1, u32 bufNum)
{
SMeshBuffer *buf = 0;
if (bufNum<Mesh->getMeshBufferCount())
{
buf = (SMeshBuffer*)Mesh->getMeshBuffer(bufNum);
}
else
{
buf = new SMeshBuffer();
Mesh->addMeshBuffer(buf);
buf->drop();
}
buf->Vertices.set_used((1 + y1 - y0) * Width);
u32 i=0;
for (u16 y = y0; y <= y1; ++y)
{
for (u16 x = 0; x < Width; ++x)
{
const f32 z = hm.get(x, y);
const f32 xx = (f32)x/(f32)Width;
const f32 yy = (f32)y/(f32)Height;
S3DVertex& v = buf->Vertices[i++];
v.Pos.set(x, Scale * z, y);
v.Normal.set(hm.getnormal(x, y, Scale));
v.Color=cf(xx, yy, z);
v.TCoords.set(xx, yy);
}
}
buf->Indices.set_used(6 * (Width - 1) * (y1 - y0));
i=0;
for(u16 y = y0; y < y1; ++y)
{
for(u16 x = 0; x < Width - 1; ++x)
{
const u16 n = (y-y0) * Width + x;
buf->Indices[i]=n;
buf->Indices[++i]=n + Width;
buf->Indices[++i]=n + Width + 1;
buf->Indices[++i]=n + Width + 1;
buf->Indices[++i]=n + 1;
buf->Indices[++i]=n;
++i;
}
}
buf->recalculateBoundingBox();
}
};
Our event receiver implementation, taken from tutorial 4.
class MyEventReceiver : public IEventReceiver
{
public:
virtual bool OnEvent(const SEvent& event)
{
if (event.EventType == irr::EET_KEY_INPUT_EVENT)
KeyIsDown[event.KeyInput.Key] = event.KeyInput.PressedDown;
return false;
}
virtual bool IsKeyDown(EKEY_CODE keyCode) const
{
return KeyIsDown[keyCode];
}
MyEventReceiver()
{
for (u32 i=0; i<KEY_KEY_CODES_COUNT; ++i)
KeyIsDown[i] = false;
}
private:
bool KeyIsDown[KEY_KEY_CODES_COUNT];
};
Much of this is code taken from some of the examples. We merely set up a mesh from a heightmap, light it with a moving light, and allow the user to navigate around it.
int main(int argc, char* argv[])
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
MyEventReceiver receiver;
IrrlichtDevice* device = createDevice(driverType,
core::dimension2du(800, 600), 32, false, false, false,
&receiver);
if(device == 0)
return 1;
IVideoDriver *driver = device->getVideoDriver();
ISceneManager *smgr = device->getSceneManager();
device->setWindowCaption(L"Irrlicht Example for SMesh usage.");
Create the custom mesh and initialize it with a heightmap
TMesh mesh;
HeightMap hm = HeightMap(255, 255);
hm.generate(eggbox);
mesh.init(hm, 50.f, grey, driver);
IMeshSceneNode* meshnode = smgr -> addMeshSceneNode(mesh.Mesh);
meshnode->setMaterialFlag(video::EMF_BACK_FACE_CULLING, false);
ILightSceneNode *node = smgr->addLightSceneNode(0, vector3df(0,100,0),
SColorf(1.0f, 0.6f, 0.7f, 1.0f), 500.0f);
if (node)
{
node->getLightData().Attenuation.set(0.f, 1.f/500.f, 0.f);
ISceneNodeAnimator* anim = smgr->createFlyCircleAnimator(vector3df(0,150,0),250.0f);
if (anim)
{
node->addAnimator(anim);
anim->drop();
}
}
ICameraSceneNode* camera = smgr->addCameraSceneNodeFPS();
if (camera)
{
camera->setPosition(vector3df(-20.f, 150.f, -20.f));
camera->setTarget(vector3df(200.f, -80.f, 150.f));
camera->setFarValue(20000.0f);
}
Just a usual render loop with event handling. The custom mesh is a normal part of the scene graph and gets rendered by drawAll().
while(device->run())
{
if(!device->isWindowActive())
{
device->sleep(100);
continue;
}
if(receiver.IsKeyDown(irr::KEY_KEY_W))
{
meshnode->setMaterialFlag(video::EMF_WIREFRAME, !meshnode->getMaterial(0).Wireframe);
}
else if(receiver.IsKeyDown(irr::KEY_KEY_1))
{
hm.generate(eggbox);
mesh.init(hm, 50.f, grey, driver);
}
else if(receiver.IsKeyDown(irr::KEY_KEY_2))
{
hm.generate(moresine);
mesh.init(hm, 50.f, yellow, driver);
}
else if(receiver.IsKeyDown(irr::KEY_KEY_3))
{
hm.generate(justexp);
mesh.init(hm, 50.f, yellow, driver);
}
driver->beginScene(true, true, SColor(0xff000000));
smgr->drawAll();
driver->endScene();
}
device->drop();
return 0;
}
That's it! Just compile and play around with the program.
+
Ok, let's start with the headers (I think there's nothing much to say about them)
#include <irrlicht.h>
#include "driverChoice.h"
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
using namespace irr;
using namespace video;
using namespace core;
using namespace scene;
using namespace io;
using namespace gui;
This is the type of the functions which work out the colour.
typedef SColor colour_func(f32 x, f32 y, f32 z);
Here comes a set of functions which can be used for coloring the nodes while creating the mesh.
SColor grey(f32, f32, f32 z)
{
u32 n = (u32)(255.f * z);
return SColor(255, n, n, n);
}
SColor yellow(f32 x, f32 y, f32)
{
return SColor(255, 128 + (u32)(127.f * x), 128 + (u32)(127.f * y), 255);
}
SColor white(f32, f32, f32) { return SColor(255, 255, 255, 255); }
The type of the functions which generate the heightmap. x and y range between -0.5 and 0.5, and s is the scale of the heightmap.
typedef f32 generate_func(s16 x, s16 y, f32 s);
f32 eggbox(s16 x, s16 y, f32 s)
{
const f32 r = 4.f*sqrtf((f32)(x*x + y*y))/s;
const f32 z = (f32)exp(-r * 2) * (cosf(0.2f * x) + cosf(0.2f * y));
return 0.25f+0.25f*z;
}
f32 moresine(s16 x, s16 y, f32 s)
{
const f32 xx=0.3f*(f32)x/s;
const f32 yy=12*y/s;
const f32 z = sinf(xx*xx+yy)*sinf(xx+yy*yy);
return 0.25f + 0.25f * z;
}
f32 justexp(s16 x, s16 y, f32 s)
{
const f32 xx=6*x/s;
const f32 yy=6*y/s;
const f32 z = (xx*xx+yy*yy);
return 0.3f*z*cosf(xx*yy);
}
A simple class for representing heightmaps. Most of this should be obvious.
class HeightMap
{
private:
const u16 Width;
const u16 Height;
f32 s;
core::array<f32> data;
public:
HeightMap(u16 _w, u16 _h) : Width(_w), Height(_h), s(0.f), data(0)
{
s = sqrtf((f32)(Width * Width + Height * Height));
data.set_used(Width * Height);
}
void generate(generate_func f)
{
u32 i=0;
for(u16 y = 0; y < Height; ++y)
for(u16 x = 0; x < Width; ++x)
set(i++, calc(f, x, y));
}
u16 height() const { return Height; }
u16 width() const { return Width; }
f32 calc(generate_func f, u16 x, u16 y) const
{
const f32 xx = (f32)x - Width*0.5f;
const f32 yy = (f32)y - Height*0.5f;
return f((u16)xx, (u16)yy, s);
}
void set(u16 x, u16 y, f32 z) { data[y * Width + x] = z; }
void set(u32 i, f32 z) { data[i] = z; }
f32 get(u16 x, u16 y) const { return data[y * Width + x]; }
The only difficult part. This considers the normal at (x, y) to be the cross product of the vectors between the adjacent points in the horizontal and vertical directions.
+
s is a scaling factor, which is necessary if the height units are different from the coordinate units; for example, if your map has heights in meters and the coordinates are in units of a kilometer.
vector3df getnormal(u16 x, u16 y, f32 s) const
{
const f32 zc = get(x, y);
f32 zl, zr, zu, zd;
if (x == 0)
{
zr = get(x + 1, y);
zl = zc + zc - zr;
}
else if (x == Width - 1)
{
zl = get(x - 1, y);
zr = zc + zc - zl;
}
else
{
zr = get(x + 1, y);
zl = get(x - 1, y);
}
if (y == 0)
{
zd = get(x, y + 1);
zu = zc + zc - zd;
}
else if (y == Height - 1)
{
zu = get(x, y - 1);
zd = zc + zc - zu;
}
else
{
zd = get(x, y + 1);
zu = get(x, y - 1);
}
return vector3df(s * 2 * (zl - zr), 4, s * 2 * (zd - zu)).normalize();
}
};
A class which generates a mesh from a heightmap.
class TMesh
{
private:
u16 Width;
u16 Height;
f32 Scale;
public:
SMesh* Mesh;
TMesh() : Width(0), Height(0), Scale(1.f), Mesh(0)
{
Mesh = new SMesh();
}
~TMesh()
{
Mesh->drop();
}
void init(const HeightMap &hm, f32 scale, colour_func cf, IVideoDriver *driver)
{
Scale = scale;
const u32 mp = driver -> getMaximalPrimitiveCount();
Width = hm.width();
Height = hm.height();
const u32 sw = mp / (6 * Height);
u32 i=0;
for(u32 y0 = 0; y0 < Height; y0 += sw)
{
u16 y1 = y0 + sw;
if (y1 >= Height)
y1 = Height - 1;
addstrip(hm, cf, y0, y1, i);
++i;
}
if (i<Mesh->getMeshBufferCount())
{
for (u32 j=i; j<Mesh->getMeshBufferCount(); ++j)
{
Mesh->getMeshBuffer(j)->drop();
}
Mesh->MeshBuffers.erase(i,Mesh->getMeshBufferCount()-i);
}
Mesh->setDirty();
Mesh->recalculateBoundingBox();
}
void addstrip(const HeightMap &hm, colour_func cf, u16 y0, u16 y1, u32 bufNum)
{
SMeshBuffer *buf = 0;
if (bufNum<Mesh->getMeshBufferCount())
{
buf = (SMeshBuffer*)Mesh->getMeshBuffer(bufNum);
}
else
{
buf = new SMeshBuffer();
Mesh->addMeshBuffer(buf);
buf->drop();
}
buf->Vertices.set_used((1 + y1 - y0) * Width);
u32 i=0;
for (u16 y = y0; y <= y1; ++y)
{
for (u16 x = 0; x < Width; ++x)
{
const f32 z = hm.get(x, y);
const f32 xx = (f32)x/(f32)Width;
const f32 yy = (f32)y/(f32)Height;
S3DVertex& v = buf->Vertices[i++];
v.Pos.set(x, Scale * z, y);
v.Normal.set(hm.getnormal(x, y, Scale));
v.Color=cf(xx, yy, z);
v.TCoords.set(xx, yy);
}
}
buf->Indices.set_used(6 * (Width - 1) * (y1 - y0));
i=0;
for(u16 y = y0; y < y1; ++y)
{
for(u16 x = 0; x < Width - 1; ++x)
{
const u16 n = (y-y0) * Width + x;
buf->Indices[i]=n;
buf->Indices[++i]=n + Width;
buf->Indices[++i]=n + Width + 1;
buf->Indices[++i]=n + Width + 1;
buf->Indices[++i]=n + 1;
buf->Indices[++i]=n;
++i;
}
}
buf->recalculateBoundingBox();
}
};
Our event receiver implementation, taken from tutorial 4.
class MyEventReceiver : public IEventReceiver
{
public:
virtual bool OnEvent(const SEvent& event)
{
if (event.EventType == irr::EET_KEY_INPUT_EVENT)
KeyIsDown[event.KeyInput.Key] = event.KeyInput.PressedDown;
return false;
}
virtual bool IsKeyDown(EKEY_CODE keyCode) const
{
return KeyIsDown[keyCode];
}
MyEventReceiver()
{
for (u32 i=0; i<KEY_KEY_CODES_COUNT; ++i)
KeyIsDown[i] = false;
}
private:
bool KeyIsDown[KEY_KEY_CODES_COUNT];
};
Much of this is code taken from some of the examples. We merely set up a mesh from a heightmap, light it with a moving light, and allow the user to navigate around it.
int main(int argc, char* argv[])
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
MyEventReceiver receiver;
IrrlichtDevice* device = createDevice(driverType,
core::dimension2du(800, 600), 32, false, false, false,
&receiver);
if(device == 0)
return 1;
IVideoDriver *driver = device->getVideoDriver();
ISceneManager *smgr = device->getSceneManager();
device->setWindowCaption(L"Irrlicht Example for SMesh usage.");
Create the custom mesh and initialize it with a heightmap
TMesh mesh;
HeightMap hm = HeightMap(255, 255);
hm.generate(eggbox);
mesh.init(hm, 50.f, grey, driver);
IMeshSceneNode* meshnode = smgr -> addMeshSceneNode(mesh.Mesh);
meshnode->setMaterialFlag(video::EMF_BACK_FACE_CULLING, false);
ILightSceneNode *node = smgr->addLightSceneNode(0, vector3df(0,100,0),
SColorf(1.0f, 0.6f, 0.7f, 1.0f), 500.0f);
if (node)
{
node->getLightData().Attenuation.set(0.f, 1.f/500.f, 0.f);
ISceneNodeAnimator* anim = smgr->createFlyCircleAnimator(vector3df(0,150,0),250.0f);
if (anim)
{
node->addAnimator(anim);
anim->drop();
}
}
ICameraSceneNode* camera = smgr->addCameraSceneNodeFPS();
if (camera)
{
camera->setPosition(vector3df(-20.f, 150.f, -20.f));
camera->setTarget(vector3df(200.f, -80.f, 150.f));
camera->setFarValue(20000.0f);
}
Just a usual render loop with event handling. The custom mesh is a normal part of the scene graph and gets rendered by drawAll().
while(device->run())
{
if(!device->isWindowActive())
{
device->sleep(100);
continue;
}
if(receiver.IsKeyDown(irr::KEY_KEY_W))
{
meshnode->setMaterialFlag(video::EMF_WIREFRAME, !meshnode->getMaterial(0).Wireframe);
}
else if(receiver.IsKeyDown(irr::KEY_KEY_1))
{
hm.generate(eggbox);
mesh.init(hm, 50.f, grey, driver);
}
else if(receiver.IsKeyDown(irr::KEY_KEY_2))
{
hm.generate(moresine);
mesh.init(hm, 50.f, yellow, driver);
}
else if(receiver.IsKeyDown(irr::KEY_KEY_3))
{
hm.generate(justexp);
mesh.init(hm, 50.f, yellow, driver);
}
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, SColor(0xff000000));
smgr->drawAll();
driver->endScene();
}
device->drop();
return 0;
}
That's it! Just compile and play around with the program.
diff --git a/examples/24.CursorControl/tutorial.html b/examples/24.CursorControl/tutorial.html
index 08e8b182..c374ea19 100644
--- a/examples/24.CursorControl/tutorial.html
+++ b/examples/24.CursorControl/tutorial.html
@@ -195,9 +195,9 @@ tr.heading h2 {
-
+
-
Show how to modify cursors and offer some useful tool functions for creating cursors. This example can also be used for experiments with the mouse in general.
#include <irrlicht.h>
#include "driverChoice.h"
using namespace irr;
using namespace core;
using namespace scene;
using namespace video;
using namespace io;
using namespace gui;
#ifdef _IRR_WINDOWS_
#pragma comment(lib, "Irrlicht.lib")
#endif
const int DELAY_TIME = 3000;
enum ETimerAction
{
ETA_MOUSE_VISIBLE,
ETA_MOUSE_INVISIBLE,
};
Structure to allow delayed execution of some actions.
struct TimerAction
{
u32 TargetTime;
ETimerAction Action;
};
struct SAppContext
{
SAppContext()
: Device(0), InfoStatic(0), EventBox(0), CursorBox(0), SpriteBox(0)
, ButtonSetVisible(0), ButtonSetInvisible(0), ButtonSimulateBadFps(0)
, ButtonChangeIcon(0)
, SimulateBadFps(false)
{
}
void update()
{
if (!Device)
return;
u32 timeNow = Device->getTimer()->getTime();
for ( u32 i=0; i < TimerActions.size(); ++i )
{
if ( timeNow >= TimerActions[i].TargetTime )
{
runTimerAction(TimerActions[i]);
TimerActions.erase(i);
}
else
{
++i;
}
}
}
void runTimerAction(const TimerAction& action)
{
if (ETA_MOUSE_VISIBLE == action.Action)
{
Device->getCursorControl()->setVisible(true);
ButtonSetVisible->setEnabled(true);
}
else if ( ETA_MOUSE_INVISIBLE == action.Action)
{
Device->getCursorControl()->setVisible(false);
ButtonSetInvisible->setEnabled(true);
}
}
Add another icon which the user can click and select as cursor later on.
void addIcon(const stringw& name, const SCursorSprite &sprite, bool addCursor=true)
{
SpriteBox->addItem(name.c_str(), sprite.SpriteId);
Sprites.push_back(sprite);
if ( addCursor )
{
Here we create a hardware cursor from a sprite
Device->getCursorControl()->addIcon(sprite);
CursorBox->addItem(name.c_str());
}
}
IrrlichtDevice * Device;
gui::IGUIStaticText * InfoStatic;
gui::IGUIListBox * EventBox;
gui::IGUIListBox * CursorBox;
gui::IGUIListBox * SpriteBox;
gui::IGUIButton * ButtonSetVisible;
gui::IGUIButton * ButtonSetInvisible;
gui::IGUIButton * ButtonSimulateBadFps;
gui::IGUIButton * ButtonChangeIcon;
array<TimerAction> TimerActions;
bool SimulateBadFps;
array<SCursorSprite> Sprites;
};
Helper function to print mouse event names into a stringw
void PrintMouseEventName(const SEvent& event, stringw &result)
{
switch ( event.MouseInput.Event )
{
case EMIE_LMOUSE_PRESSED_DOWN: result += stringw(L"EMIE_LMOUSE_PRESSED_DOWN"); break;
case EMIE_RMOUSE_PRESSED_DOWN: result += stringw(L"EMIE_RMOUSE_PRESSED_DOWN"); break;
case EMIE_MMOUSE_PRESSED_DOWN: result += stringw(L"EMIE_MMOUSE_PRESSED_DOWN"); break;
case EMIE_LMOUSE_LEFT_UP: result += stringw(L"EMIE_LMOUSE_LEFT_UP"); break;
case EMIE_RMOUSE_LEFT_UP: result += stringw(L"EMIE_RMOUSE_LEFT_UP"); break;
case EMIE_MMOUSE_LEFT_UP: result += stringw(L"EMIE_MMOUSE_LEFT_UP"); break;
case EMIE_MOUSE_MOVED: result += stringw(L"EMIE_MOUSE_MOVED"); break;
case EMIE_MOUSE_WHEEL: result += stringw(L"EMIE_MOUSE_WHEEL"); break;
case EMIE_LMOUSE_DOUBLE_CLICK: result += stringw(L"EMIE_LMOUSE_DOUBLE_CLICK"); break;
case EMIE_RMOUSE_DOUBLE_CLICK: result += stringw(L"EMIE_RMOUSE_DOUBLE_CLICK"); break;
case EMIE_MMOUSE_DOUBLE_CLICK: result += stringw(L"EMIE_MMOUSE_DOUBLE_CLICK"); break;
case EMIE_LMOUSE_TRIPLE_CLICK: result += stringw(L"EMIE_LMOUSE_TRIPLE_CLICK"); break;
case EMIE_RMOUSE_TRIPLE_CLICK: result += stringw(L"EMIE_RMOUSE_TRIPLE_CLICK"); break;
case EMIE_MMOUSE_TRIPLE_CLICK: result += stringw(L"EMIE_MMOUSE_TRIPLE_CLICK"); break;
default:
break;
}
}
Helper function to print all the state information we get from a mouse event into a stringw
void PrintMouseState(const SEvent& event, stringw &result)
{
result += stringw(L"X: ");
result += stringw(event.MouseInput.X);
result += stringw(L"\n");
result += stringw(L"Y: ");
result += stringw(event.MouseInput.Y);
result += stringw(L"\n");
result += stringw(L"Wheel: ");
result += stringw(event.MouseInput.Wheel);
result += stringw(L"\n");
result += stringw(L"Shift: ");
if ( event.MouseInput.Shift )
result += stringw(L"true\n");
else
result += stringw(L"false\n");
result += stringw(L"Control: ");
if ( event.MouseInput.Control )
result += stringw(L"true\n");
else
result += stringw(L"false\n");
result += stringw(L"ButtonStates: ");
result += stringw(event.MouseInput.ButtonStates);
result += stringw(L"\n");
result += stringw(L"isLeftPressed: ");
if ( event.MouseInput.isLeftPressed() )
result += stringw(L"true\n");
else
result += stringw(L"false\n");
result += stringw(L"isRightPressed: ");
if ( event.MouseInput.isRightPressed() )
result += stringw(L"true\n");
else
result += stringw(L"false\n");
result += stringw(L"isMiddlePressed: ");
if ( event.MouseInput.isMiddlePressed() )
result += stringw(L"true\n");
else
result += stringw(L"false\n");
result += stringw(L"Event: ");
PrintMouseEventName(event, result);
result += stringw(L"\n");
}
A typical event receiver.
class MyEventReceiver : public IEventReceiver
{
public:
MyEventReceiver(SAppContext & context) : Context(context) { }
virtual bool OnEvent(const SEvent& event)
{
if (event.EventType == EET_GUI_EVENT )
{
switch ( event.GUIEvent.EventType )
{
case EGET_BUTTON_CLICKED:
{
u32 timeNow = Context.Device->getTimer()->getTime();
TimerAction action;
action.TargetTime = timeNow + DELAY_TIME;
if ( event.GUIEvent.Caller == Context.ButtonSetVisible )
{
action.Action = ETA_MOUSE_VISIBLE;
Context.TimerActions.push_back(action);
Context.ButtonSetVisible->setEnabled(false);
}
else if ( event.GUIEvent.Caller == Context.ButtonSetInvisible )
{
action.Action = ETA_MOUSE_INVISIBLE;
Context.TimerActions.push_back(action);
Context.ButtonSetInvisible->setEnabled(false);
}
else if ( event.GUIEvent.Caller == Context.ButtonSimulateBadFps )
{
Context.SimulateBadFps = Context.ButtonSimulateBadFps->isPressed();
}
else if ( event.GUIEvent.Caller == Context.ButtonChangeIcon )
{
Replace an existing cursor icon by another icon. The user has to select both - the icon which should be replaced and the icon which will replace it.
s32 selectedCursor = Context.CursorBox->getSelected();
s32 selectedSprite = Context.SpriteBox->getSelected();
if ( selectedCursor >= 0 && selectedSprite >= 0 )
{
This does replace the icon.
Context.Device->getCursorControl()->changeIcon((ECURSOR_ICON)selectedCursor, Context.Sprites[selectedSprite] );
Do also show the new icon.
Context.Device->getCursorControl()->setActiveIcon( ECURSOR_ICON(selectedCursor) );
}
}
}
break;
case EGET_LISTBOX_CHANGED:
case EGET_LISTBOX_SELECTED_AGAIN:
{
if ( event.GUIEvent.Caller == Context.CursorBox )
{
Find out which cursor the user selected
s32 selected = Context.CursorBox->getSelected();
if ( selected >= 0 )
{
Here we set the new cursor icon which will now be used within our window.
Context.Device->getCursorControl()->setActiveIcon( ECURSOR_ICON(selected) );
}
}
}
break;
default:
break;
}
}
if (event.EventType == EET_MOUSE_INPUT_EVENT)
{
stringw infoText;
PrintMouseState(event, infoText);
Context.InfoStatic->setText(infoText.c_str());
if ( event.MouseInput.Event != EMIE_MOUSE_MOVED && event.MouseInput.Event != EMIE_MOUSE_WHEEL )
{
infoText = L"";
PrintMouseEventName(event, infoText);
Context.EventBox->insertItem(0, infoText.c_str(), -1);
}
}
return false;
}
private:
SAppContext & Context;
};
Use several image files as animation frames for a sprite which can be used as a cursor icon. The images in those files all need to have the same size. Returns the sprite index on success or -1 on failure.
s32 AddAnimatedIconToSpriteBank( gui::IGUISpriteBank * spriteBank, video::IVideoDriver* driver, const array< io::path >& files, u32 frameTime )
{
if ( !spriteBank || !driver || !files.size() )
return -1;
video::ITexture * tex = driver->getTexture( files[0] );
if ( tex )
{
array< rect<s32> >& spritePositions = spriteBank->getPositions();
u32 idxRect = spritePositions.size();
spritePositions.push_back( rect<s32>(0,0, tex->getSize().Width, tex->getSize().Height) );
SGUISprite sprite;
sprite.frameTime = frameTime;
array< SGUISprite >& sprites = spriteBank->getSprites();
u32 startIdx = spriteBank->getTextureCount();
for ( u32 f=0; f < files.size(); ++f )
{
tex = driver->getTexture( files[f] );
if ( tex )
{
spriteBank->addTexture( driver->getTexture(files[f]) );
gui::SGUISpriteFrame frame;
frame.rectNumber = idxRect;
frame.textureNumber = startIdx+f;
sprite.Frames.push_back( frame );
}
}
sprites.push_back( sprite );
return sprites.size()-1;
}
return -1;
}
Use several images within one image file as animation frames for a sprite which can be used as a cursor icon. The icons within that file all need to have the same size. Returns the sprite index on success or -1 on failure.
s32 AddAnimatedIconToSpriteBank( gui::IGUISpriteBank * spriteBank, video::IVideoDriver* driver, const io::path& file, const array< rect<s32> >& rects, u32 frameTime )
{
if ( !spriteBank || !driver || !rects.size() )
return -1;
video::ITexture * tex = driver->getTexture( file );
if ( tex )
{
array< rect<s32> >& spritePositions = spriteBank->getPositions();
u32 idxRect = spritePositions.size();
u32 idxTex = spriteBank->getTextureCount();
spriteBank->addTexture( tex );
SGUISprite sprite;
sprite.frameTime = frameTime;
array< SGUISprite >& sprites = spriteBank->getSprites();
for ( u32 i=0; i < rects.size(); ++i )
{
spritePositions.push_back( rects[i] );
gui::SGUISpriteFrame frame;
frame.rectNumber = idxRect+i;
frame.textureNumber = idxTex;
sprite.Frames.push_back( frame );
}
sprites.push_back( sprite );
return sprites.size()-1;
}
return -1;
}
Create a non-animated icon from the given file and position and put it into the spritebank. We can use this icon later on in a cursor.
s32 AddIconToSpriteBank( gui::IGUISpriteBank * spriteBank, video::IVideoDriver* driver, const io::path& file, const core::rect<s32>& rect )
{
if ( !spriteBank || !driver )
return -1;
video::ITexture * tex = driver->getTexture( file );
if ( tex )
{
core::array< core::rect<irr::s32> >& spritePositions = spriteBank->getPositions();
spritePositions.push_back( rect );
array< SGUISprite >& sprites = spriteBank->getSprites();
spriteBank->addTexture( tex );
gui::SGUISpriteFrame frame;
frame.rectNumber = spritePositions.size()-1;
frame.textureNumber = spriteBank->getTextureCount()-1;
SGUISprite sprite;
sprite.frameTime = 0;
sprite.Frames.push_back( frame );
sprites.push_back( sprite );
return sprites.size()-1;
}
return -1;
}
int main()
{
video::E_DRIVER_TYPE driverType = driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
IrrlichtDevice * device = createDevice(driverType, dimension2d<u32>(640, 480));
if (device == 0)
return 1;
device->setResizable(true);
device->setWindowCaption(L"Cursor control - Irrlicht engine tutorial");
video::IVideoDriver* driver = device->getVideoDriver();
IGUIEnvironment* env = device->getGUIEnvironment();
gui::IGUISpriteBank * SpriteBankIcons;
SAppContext context;
context.Device = device;
rect< s32 > rectInfoStatic(10,10, 200, 200);
env->addStaticText (L"Cursor state information", rectInfoStatic, true, true);
rectInfoStatic.UpperLeftCorner += dimension2di(0, 15);
context.InfoStatic = env->addStaticText (L"", rectInfoStatic, true, true);
rect< s32 > rectEventBox(10,210, 200, 400);
env->addStaticText (L"click events (new on top)", rectEventBox, true, true);
rectEventBox.UpperLeftCorner += dimension2di(0, 15);
context.EventBox = env->addListBox(rectEventBox);
rect< s32 > rectCursorBox(210,10, 400, 250);
env->addStaticText (L"cursors, click to set the active one", rectCursorBox, true, true);
rectCursorBox.UpperLeftCorner += dimension2di(0, 15);
context.CursorBox = env->addListBox(rectCursorBox);
rect< s32 > rectSpriteBox(210,260, 400, 400);
env->addStaticText (L"sprites", rectSpriteBox, true, true);
rectSpriteBox.UpperLeftCorner += dimension2di(0, 15);
context.SpriteBox = env->addListBox(rectSpriteBox);
context.ButtonSetVisible = env->addButton( rect<s32>( 410, 20, 560, 40 ), 0, -1, L"set visible (delayed)" );
context.ButtonSetInvisible = env->addButton( rect<s32>( 410, 50, 560, 70 ), 0, -1, L"set invisible (delayed)" );
context.ButtonSimulateBadFps = env->addButton( rect<s32>( 410, 80, 560, 100 ), 0, -1, L"simulate bad FPS" );
context.ButtonSimulateBadFps->setIsPushButton(true);
context.ButtonChangeIcon = env->addButton( rect<s32>( 410, 140, 560, 160 ), 0, -1, L"replace cursor icon\n(cursor+sprite must be selected)" );
for ( int i=0; i < (int)gui::ECI_COUNT; ++i )
{
context.CursorBox->addItem(stringw( GUICursorIconNames[i] ).c_str());
}
Create sprites which then can be used as cursor icons.
SpriteBankIcons = env->addEmptySpriteBank(io::path("cursor_icons"));
context.SpriteBox->setSpriteBank(SpriteBankIcons);
array< io::path > files;
files.push_back( io::path("../../media/icon_crosshairs16x16bw1.png") );
files.push_back( io::path("../../media/icon_crosshairs16x16bw2.png") );
files.push_back( io::path("../../media/icon_crosshairs16x16bw3.png") );
files.push_back( io::path("../../media/icon_crosshairs16x16bw3.png") );
files.push_back( io::path("../../media/icon_crosshairs16x16bw2.png") );
SCursorSprite spriteBw;
spriteBw.SpriteId = AddAnimatedIconToSpriteBank( SpriteBankIcons, driver, files, 200 );
spriteBw.SpriteBank = SpriteBankIcons;
spriteBw.HotSpot = position2d<s32>(7,7);
context.addIcon(L"crosshair_bw", spriteBw);
array< rect<s32> > iconRects;
iconRects.push_back( rect<s32>(0,0, 16, 16) );
iconRects.push_back( rect<s32>(16,0, 32, 16) );
iconRects.push_back( rect<s32>(0,16, 16, 32) );
iconRects.push_back( rect<s32>(0,16, 16, 32) );
iconRects.push_back( rect<s32>(16,0, 32, 16) );
SCursorSprite spriteCol;
spriteCol.SpriteId = AddAnimatedIconToSpriteBank( SpriteBankIcons, driver, io::path("../../media/icon_crosshairs16x16col.png"), iconRects, 200 );
spriteCol.HotSpot = position2d<s32>(7,7);
spriteCol.SpriteBank = SpriteBankIcons;
context.addIcon(L"crosshair_colored", spriteCol);
rect<s32> rectIcon;
SCursorSprite spriteNonAnimated(SpriteBankIcons, 0, position2d<s32>(7,7));
rectIcon = rect<s32>(0,0, 16, 16);
spriteNonAnimated.SpriteId = AddIconToSpriteBank( SpriteBankIcons, driver, io::path("../../media/icon_crosshairs16x16col.png"), rectIcon );
context.addIcon(L"crosshair_col1", spriteNonAnimated, false);
rectIcon = rect<s32>(16,0, 32, 16);
spriteNonAnimated.SpriteId = AddIconToSpriteBank( SpriteBankIcons, driver, io::path("../../media/icon_crosshairs16x16col.png"), rectIcon );
context.addIcon(L"crosshair_col2", spriteNonAnimated, false);
rectIcon = rect<s32>(0,16, 16, 32);
spriteNonAnimated.SpriteId = AddIconToSpriteBank( SpriteBankIcons, driver, io::path("../../media/icon_crosshairs16x16col.png"), rectIcon );
context.addIcon(L"crosshair_col3", spriteNonAnimated, false);
MyEventReceiver receiver(context);
device->setEventReceiver(&receiver);
while(device->run() && driver)
{
{
u32 realTimeNow = device->getTimer()->getRealTime();
context.update();
driver->beginScene(true, true, SColor(0,200,200,200));
env->drawAll();
if ( context.SpriteBox )
{
s32 selectedSprite = context.SpriteBox->getSelected();
if ( selectedSprite >= 0 && context.Sprites[selectedSprite].SpriteId >= 0 )
{
SpriteBankIcons->draw2DSprite(u32(context.Sprites[selectedSprite].SpriteId), position2di(580, 140), 0, video::SColor(255, 255, 255, 255), 0, realTimeNow);
}
}
driver->endScene();
}
if ( context.SimulateBadFps )
{
device->sleep(500);
}
else
{
device->sleep(10);
}
}
device->drop();
return 0;
}
+ Show how to modify cursors and offer some useful tool functions for creating cursors. This example can also be used for experiments with the mouse in general.
#include <irrlicht.h>
#include "driverChoice.h"
#include "exampleHelper.h"
using namespace irr;
using namespace core;
using namespace scene;
using namespace video;
using namespace io;
using namespace gui;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
const int DELAY_TIME = 3000;
enum ETimerAction
{
ETA_MOUSE_VISIBLE,
ETA_MOUSE_INVISIBLE,
};
Structure to allow delayed execution of some actions.
struct TimerAction
{
u32 TargetTime;
ETimerAction Action;
};
struct SAppContext
{
SAppContext()
: Device(0), InfoStatic(0), EventBox(0), CursorBox(0), SpriteBox(0)
, ButtonSetVisible(0), ButtonSetInvisible(0), ButtonSimulateBadFps(0)
, ButtonChangeIcon(0)
, SimulateBadFps(false)
{
}
void update()
{
if (!Device)
return;
u32 timeNow = Device->getTimer()->getTime();
for ( u32 i=0; i < TimerActions.size(); ++i )
{
if ( timeNow >= TimerActions[i].TargetTime )
{
runTimerAction(TimerActions[i]);
TimerActions.erase(i);
}
else
{
++i;
}
}
}
void runTimerAction(const TimerAction& action)
{
if (ETA_MOUSE_VISIBLE == action.Action)
{
Device->getCursorControl()->setVisible(true);
ButtonSetVisible->setEnabled(true);
}
else if ( ETA_MOUSE_INVISIBLE == action.Action)
{
Device->getCursorControl()->setVisible(false);
ButtonSetInvisible->setEnabled(true);
}
}
Add another icon which the user can click and select as cursor later on.
void addIcon(const stringw& name, const SCursorSprite &sprite, bool addCursor=true)
{
SpriteBox->addItem(name.c_str(), sprite.SpriteId);
Sprites.push_back(sprite);
if ( addCursor )
{
Here we create a hardware cursor from a sprite
Device->getCursorControl()->addIcon(sprite);
CursorBox->addItem(name.c_str());
}
}
IrrlichtDevice * Device;
gui::IGUIStaticText * InfoStatic;
gui::IGUIListBox * EventBox;
gui::IGUIListBox * CursorBox;
gui::IGUIListBox * SpriteBox;
gui::IGUIButton * ButtonSetVisible;
gui::IGUIButton * ButtonSetInvisible;
gui::IGUIButton * ButtonSimulateBadFps;
gui::IGUIButton * ButtonChangeIcon;
array<TimerAction> TimerActions;
bool SimulateBadFps;
array<SCursorSprite> Sprites;
};
Helper function to print mouse event names into a stringw
void PrintMouseEventName(const SEvent& event, stringw &result)
{
switch ( event.MouseInput.Event )
{
case EMIE_LMOUSE_PRESSED_DOWN: result += stringw(L"EMIE_LMOUSE_PRESSED_DOWN"); break;
case EMIE_RMOUSE_PRESSED_DOWN: result += stringw(L"EMIE_RMOUSE_PRESSED_DOWN"); break;
case EMIE_MMOUSE_PRESSED_DOWN: result += stringw(L"EMIE_MMOUSE_PRESSED_DOWN"); break;
case EMIE_LMOUSE_LEFT_UP: result += stringw(L"EMIE_LMOUSE_LEFT_UP"); break;
case EMIE_RMOUSE_LEFT_UP: result += stringw(L"EMIE_RMOUSE_LEFT_UP"); break;
case EMIE_MMOUSE_LEFT_UP: result += stringw(L"EMIE_MMOUSE_LEFT_UP"); break;
case EMIE_MOUSE_MOVED: result += stringw(L"EMIE_MOUSE_MOVED"); break;
case EMIE_MOUSE_WHEEL: result += stringw(L"EMIE_MOUSE_WHEEL"); break;
case EMIE_LMOUSE_DOUBLE_CLICK: result += stringw(L"EMIE_LMOUSE_DOUBLE_CLICK"); break;
case EMIE_RMOUSE_DOUBLE_CLICK: result += stringw(L"EMIE_RMOUSE_DOUBLE_CLICK"); break;
case EMIE_MMOUSE_DOUBLE_CLICK: result += stringw(L"EMIE_MMOUSE_DOUBLE_CLICK"); break;
case EMIE_LMOUSE_TRIPLE_CLICK: result += stringw(L"EMIE_LMOUSE_TRIPLE_CLICK"); break;
case EMIE_RMOUSE_TRIPLE_CLICK: result += stringw(L"EMIE_RMOUSE_TRIPLE_CLICK"); break;
case EMIE_MMOUSE_TRIPLE_CLICK: result += stringw(L"EMIE_MMOUSE_TRIPLE_CLICK"); break;
default:
break;
}
}
Helper function to print all the state information we get from a mouse event into a stringw
void PrintMouseState(const SEvent& event, stringw &result)
{
result += stringw(L"X: ");
result += stringw(event.MouseInput.X);
result += stringw(L"\n");
result += stringw(L"Y: ");
result += stringw(event.MouseInput.Y);
result += stringw(L"\n");
result += stringw(L"Wheel: ");
result += stringw(event.MouseInput.Wheel);
result += stringw(L"\n");
result += stringw(L"Shift: ");
if ( event.MouseInput.Shift )
result += stringw(L"true\n");
else
result += stringw(L"false\n");
result += stringw(L"Control: ");
if ( event.MouseInput.Control )
result += stringw(L"true\n");
else
result += stringw(L"false\n");
result += stringw(L"ButtonStates: ");
result += stringw(event.MouseInput.ButtonStates);
result += stringw(L"\n");
result += stringw(L"isLeftPressed: ");
if ( event.MouseInput.isLeftPressed() )
result += stringw(L"true\n");
else
result += stringw(L"false\n");
result += stringw(L"isRightPressed: ");
if ( event.MouseInput.isRightPressed() )
result += stringw(L"true\n");
else
result += stringw(L"false\n");
result += stringw(L"isMiddlePressed: ");
if ( event.MouseInput.isMiddlePressed() )
result += stringw(L"true\n");
else
result += stringw(L"false\n");
result += stringw(L"Event: ");
PrintMouseEventName(event, result);
result += stringw(L"\n");
}
A typical event receiver.
class MyEventReceiver : public IEventReceiver
{
public:
MyEventReceiver(SAppContext & context) : Context(context) { }
virtual bool OnEvent(const SEvent& event)
{
if (event.EventType == EET_GUI_EVENT )
{
switch ( event.GUIEvent.EventType )
{
case EGET_BUTTON_CLICKED:
{
u32 timeNow = Context.Device->getTimer()->getTime();
TimerAction action;
action.TargetTime = timeNow + DELAY_TIME;
if ( event.GUIEvent.Caller == Context.ButtonSetVisible )
{
action.Action = ETA_MOUSE_VISIBLE;
Context.TimerActions.push_back(action);
Context.ButtonSetVisible->setEnabled(false);
}
else if ( event.GUIEvent.Caller == Context.ButtonSetInvisible )
{
action.Action = ETA_MOUSE_INVISIBLE;
Context.TimerActions.push_back(action);
Context.ButtonSetInvisible->setEnabled(false);
}
else if ( event.GUIEvent.Caller == Context.ButtonSimulateBadFps )
{
Context.SimulateBadFps = Context.ButtonSimulateBadFps->isPressed();
}
else if ( event.GUIEvent.Caller == Context.ButtonChangeIcon )
{
Replace an existing cursor icon by another icon. The user has to select both - the icon which should be replaced and the icon which will replace it.
s32 selectedCursor = Context.CursorBox->getSelected();
s32 selectedSprite = Context.SpriteBox->getSelected();
if ( selectedCursor >= 0 && selectedSprite >= 0 )
{
This does replace the icon.
Context.Device->getCursorControl()->changeIcon((ECURSOR_ICON)selectedCursor, Context.Sprites[selectedSprite] );
Do also show the new icon.
Context.Device->getCursorControl()->setActiveIcon( ECURSOR_ICON(selectedCursor) );
}
}
}
break;
case EGET_LISTBOX_CHANGED:
case EGET_LISTBOX_SELECTED_AGAIN:
{
if ( event.GUIEvent.Caller == Context.CursorBox )
{
Find out which cursor the user selected
s32 selected = Context.CursorBox->getSelected();
if ( selected >= 0 )
{
Here we set the new cursor icon which will now be used within our window.
Context.Device->getCursorControl()->setActiveIcon( ECURSOR_ICON(selected) );
}
}
}
break;
default:
break;
}
}
if (event.EventType == EET_MOUSE_INPUT_EVENT)
{
stringw infoText;
PrintMouseState(event, infoText);
Context.InfoStatic->setText(infoText.c_str());
if ( event.MouseInput.Event != EMIE_MOUSE_MOVED && event.MouseInput.Event != EMIE_MOUSE_WHEEL )
{
infoText = L"";
PrintMouseEventName(event, infoText);
Context.EventBox->insertItem(0, infoText.c_str(), -1);
}
}
return false;
}
private:
SAppContext & Context;
};
Use several image files as animation frames for a sprite which can be used as a cursor icon. The images in those files all need to have the same size. Returns the sprite index on success or -1 on failure.
s32 AddAnimatedIconToSpriteBank( gui::IGUISpriteBank * spriteBank, video::IVideoDriver* driver, const array< io::path >& files, u32 frameTime )
{
if ( !spriteBank || !driver || !files.size() )
return -1;
video::ITexture * tex = driver->getTexture( files[0] );
if ( tex )
{
array< rect<s32> >& spritePositions = spriteBank->getPositions();
u32 idxRect = spritePositions.size();
spritePositions.push_back( rect<s32>(0,0, tex->getSize().Width, tex->getSize().Height) );
SGUISprite sprite;
sprite.frameTime = frameTime;
array< SGUISprite >& sprites = spriteBank->getSprites();
u32 startIdx = spriteBank->getTextureCount();
for ( u32 f=0; f < files.size(); ++f )
{
tex = driver->getTexture( files[f] );
if ( tex )
{
spriteBank->addTexture( driver->getTexture(files[f]) );
gui::SGUISpriteFrame frame;
frame.rectNumber = idxRect;
frame.textureNumber = startIdx+f;
sprite.Frames.push_back( frame );
}
}
sprites.push_back( sprite );
return sprites.size()-1;
}
return -1;
}
Use several images within one image file as animation frames for a sprite which can be used as a cursor icon. The icons within that file all need to have the same size. Returns the sprite index on success or -1 on failure.
s32 AddAnimatedIconToSpriteBank( gui::IGUISpriteBank * spriteBank, video::IVideoDriver* driver, const io::path& file, const array< rect<s32> >& rects, u32 frameTime )
{
if ( !spriteBank || !driver || !rects.size() )
return -1;
video::ITexture * tex = driver->getTexture( file );
if ( tex )
{
array< rect<s32> >& spritePositions = spriteBank->getPositions();
u32 idxRect = spritePositions.size();
u32 idxTex = spriteBank->getTextureCount();
spriteBank->addTexture( tex );
SGUISprite sprite;
sprite.frameTime = frameTime;
array< SGUISprite >& sprites = spriteBank->getSprites();
for ( u32 i=0; i < rects.size(); ++i )
{
spritePositions.push_back( rects[i] );
gui::SGUISpriteFrame frame;
frame.rectNumber = idxRect+i;
frame.textureNumber = idxTex;
sprite.Frames.push_back( frame );
}
sprites.push_back( sprite );
return sprites.size()-1;
}
return -1;
}
Create a non-animated icon from the given file and position and put it into the spritebank. We can use this icon later on in a cursor.
s32 AddIconToSpriteBank( gui::IGUISpriteBank * spriteBank, video::IVideoDriver* driver, const io::path& file, const core::rect<s32>& rect )
{
if ( !spriteBank || !driver )
return -1;
video::ITexture * tex = driver->getTexture( file );
if ( tex )
{
core::array< core::rect<irr::s32> >& spritePositions = spriteBank->getPositions();
spritePositions.push_back( rect );
array< SGUISprite >& sprites = spriteBank->getSprites();
spriteBank->addTexture( tex );
gui::SGUISpriteFrame frame;
frame.rectNumber = spritePositions.size()-1;
frame.textureNumber = spriteBank->getTextureCount()-1;
SGUISprite sprite;
sprite.frameTime = 0;
sprite.Frames.push_back( frame );
sprites.push_back( sprite );
return sprites.size()-1;
}
return -1;
}
int main()
{
video::E_DRIVER_TYPE driverType = driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
IrrlichtDevice * device = createDevice(driverType, dimension2d<u32>(640, 480));
if (device == 0)
return 1;
device->setResizable(true);
device->setWindowCaption(L"Cursor control - Irrlicht engine tutorial");
video::IVideoDriver* driver = device->getVideoDriver();
IGUIEnvironment* env = device->getGUIEnvironment();
gui::IGUISpriteBank * SpriteBankIcons;
SAppContext context;
context.Device = device;
rect< s32 > rectInfoStatic(10,10, 200, 200);
env->addStaticText (L"Cursor state information", rectInfoStatic, true, true);
rectInfoStatic.UpperLeftCorner += dimension2di(0, 15);
context.InfoStatic = env->addStaticText (L"", rectInfoStatic, true, true);
rect< s32 > rectEventBox(10,210, 200, 400);
env->addStaticText (L"click events (new on top)", rectEventBox, true, true);
rectEventBox.UpperLeftCorner += dimension2di(0, 15);
context.EventBox = env->addListBox(rectEventBox);
rect< s32 > rectCursorBox(210,10, 400, 250);
env->addStaticText (L"cursors, click to set the active one", rectCursorBox, true, true);
rectCursorBox.UpperLeftCorner += dimension2di(0, 15);
context.CursorBox = env->addListBox(rectCursorBox);
rect< s32 > rectSpriteBox(210,260, 400, 400);
env->addStaticText (L"sprites", rectSpriteBox, true, true);
rectSpriteBox.UpperLeftCorner += dimension2di(0, 15);
context.SpriteBox = env->addListBox(rectSpriteBox);
context.ButtonSetVisible = env->addButton( rect<s32>( 410, 20, 560, 40 ), 0, -1, L"set visible (delayed)" );
context.ButtonSetInvisible = env->addButton( rect<s32>( 410, 50, 560, 70 ), 0, -1, L"set invisible (delayed)" );
context.ButtonSimulateBadFps = env->addButton( rect<s32>( 410, 80, 560, 100 ), 0, -1, L"simulate bad FPS" );
context.ButtonSimulateBadFps->setIsPushButton(true);
context.ButtonChangeIcon = env->addButton( rect<s32>( 410, 140, 560, 160 ), 0, -1, L"replace cursor icon\n(cursor+sprite must be selected)" );
for ( int i=0; i < (int)gui::ECI_COUNT; ++i )
{
context.CursorBox->addItem(stringw( GUICursorIconNames[i] ).c_str());
}
Create sprites which then can be used as cursor icons.
SpriteBankIcons = env->addEmptySpriteBank(io::path("cursor_icons"));
context.SpriteBox->setSpriteBank(SpriteBankIcons);
const io::path mediaPath = getExampleMediaPath();
array< io::path > files;
files.push_back( io::path(mediaPath + "icon_crosshairs16x16bw1.png") );
files.push_back( io::path(mediaPath + "icon_crosshairs16x16bw2.png") );
files.push_back( io::path(mediaPath + "icon_crosshairs16x16bw3.png") );
files.push_back( io::path(mediaPath + "icon_crosshairs16x16bw3.png") );
files.push_back( io::path(mediaPath + "icon_crosshairs16x16bw2.png") );
SCursorSprite spriteBw;
spriteBw.SpriteId = AddAnimatedIconToSpriteBank( SpriteBankIcons, driver, files, 200 );
spriteBw.SpriteBank = SpriteBankIcons;
spriteBw.HotSpot = position2d<s32>(7,7);
context.addIcon(L"crosshair_bw", spriteBw);
array< rect<s32> > iconRects;
iconRects.push_back( rect<s32>(0,0, 16, 16) );
iconRects.push_back( rect<s32>(16,0, 32, 16) );
iconRects.push_back( rect<s32>(0,16, 16, 32) );
iconRects.push_back( rect<s32>(0,16, 16, 32) );
iconRects.push_back( rect<s32>(16,0, 32, 16) );
SCursorSprite spriteCol;
spriteCol.SpriteId = AddAnimatedIconToSpriteBank( SpriteBankIcons, driver, io::path(mediaPath + "icon_crosshairs16x16col.png"), iconRects, 200 );
spriteCol.HotSpot = position2d<s32>(7,7);
spriteCol.SpriteBank = SpriteBankIcons;
context.addIcon(L"crosshair_colored", spriteCol);
rect<s32> rectIcon;
SCursorSprite spriteNonAnimated(SpriteBankIcons, 0, position2d<s32>(7,7));
rectIcon = rect<s32>(0,0, 16, 16);
spriteNonAnimated.SpriteId = AddIconToSpriteBank( SpriteBankIcons, driver, io::path(mediaPath + "icon_crosshairs16x16col.png"), rectIcon );
context.addIcon(L"crosshair_col1", spriteNonAnimated, false);
rectIcon = rect<s32>(16,0, 32, 16);
spriteNonAnimated.SpriteId = AddIconToSpriteBank( SpriteBankIcons, driver, io::path(mediaPath + "icon_crosshairs16x16col.png"), rectIcon );
context.addIcon(L"crosshair_col2", spriteNonAnimated, false);
rectIcon = rect<s32>(0,16, 16, 32);
spriteNonAnimated.SpriteId = AddIconToSpriteBank( SpriteBankIcons, driver, io::path(mediaPath + "icon_crosshairs16x16col.png"), rectIcon );
context.addIcon(L"crosshair_col3", spriteNonAnimated, false);
MyEventReceiver receiver(context);
device->setEventReceiver(&receiver);
while(device->run() && driver)
{
{
u32 realTimeNow = device->getTimer()->getRealTime();
context.update();
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, SColor(0,200,200,200));
env->drawAll();
if ( context.SpriteBox )
{
s32 selectedSprite = context.SpriteBox->getSelected();
if ( selectedSprite >= 0 && context.Sprites[selectedSprite].SpriteId >= 0 )
{
SpriteBankIcons->draw2DSprite(u32(context.Sprites[selectedSprite].SpriteId), position2di(580, 140), 0, video::SColor(255, 255, 255, 255), 0, realTimeNow);
}
}
driver->endScene();
}
if ( context.SimulateBadFps )
{
device->sleep(500);
}
else
{
device->sleep(10);
}
}
device->drop();
return 0;
}
diff --git a/examples/25.XmlHandling/tutorial.html b/examples/25.XmlHandling/tutorial.html
index d5d28a8b..7fd4c938 100644
--- a/examples/25.XmlHandling/tutorial.html
+++ b/examples/25.XmlHandling/tutorial.html
@@ -195,12 +195,12 @@ tr.heading h2 {
-
+
Demonstrates loading and saving of configurations via XML
- Author
- Y.M. Bosman <yoran.bosman@gmail.com>
This demo features a fully usable system for configuration handling. The code can easily be integrated into your own apps.
-
#include <irrlicht.h>
using namespace irr;
using namespace core;
using namespace scene;
using namespace video;
using namespace io;
using namespace gui;
#ifdef _IRR_WINDOWS_
#pragma comment(lib, "Irrlicht.lib")
#endif
SettingManager class.
+
#include <irrlicht.h>
#include "exampleHelper.h"
using namespace irr;
using namespace core;
using namespace scene;
using namespace video;
using namespace io;
using namespace gui;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
SettingManager class.
This class loads and writes the settings and manages the options.
The class makes use of irr::core::map, an associative array implemented with a red-black tree. It allows easy mapping of a key to a value; along the way there is some information on how to use it.
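As a rough, hedged sketch of how such a map is used on its own (the option names below are made up for illustration and are not part of this example):
// minimal irr::core::map usage sketch
core::map<core::stringw, s32> options;
options.insert(L"width", 640); // insert a key/value pair
core::map<core::stringw, s32>::Node* n = options.find(L"width"); // find() returns 0 if the key is missing
if (n)
{
	s32 width = n->getValue(); // read the value of an existing key
}
options[L"height"] = 480; // operator[] inserts or overwrites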
class SettingManager
{
public:
SettingManager(const stringw& settings_file): SettingsFile(settings_file), NullDevice(0)
{
NullDevice = irr::createDevice(irr::video::EDT_NULL);
DriverOptions.insert(L"Software", EDT_SOFTWARE);
DriverOptions.insert(L"OpenGL", EDT_OPENGL);
DriverOptions.insert(L"Direct3D9", EDT_DIRECT3D9);
ResolutionOptions.insert(L"640x480", dimension2du(640,480));
ResolutionOptions.insert(L"800x600", dimension2du(800,600));
ResolutionOptions.insert(L"1024x768", dimension2du(1024,768));
SettingMap.insert(L"driver", L"Direct3D9");
SettingMap.insert(L"resolution", L"640x480");
SettingMap.insert(L"fullscreen", L"0");
}
~SettingManager()
{
if (NullDevice)
{
NullDevice->closeDevice();
NullDevice->drop();
}
};
Load the XML from disk and overwrite the default settings. The XML we are trying to load has the following structure: settings nested in sections nested in the root node, like:
<pre>
<?xml version="1.0"?>
@@ -211,7 +211,7 @@ tr.heading h2 {
<setting name="resolution" value="1024x768" />
</video>
</mygame>
- </pre>
bool load()
{
if (!NullDevice)
return false;
irr::io::IXMLReader* xml = NullDevice->getFileSystem()->createXMLReader(SettingsFile);
if (!xml)
return false;
const stringw settingTag(L"setting");
stringw currentSection;
const stringw videoTag(L"video");
while (xml->read())
{
switch (xml->getNodeType())
{
case irr::io::EXN_ELEMENT:
{
if (currentSection.empty() && videoTag.equals_ignore_case(xml->getNodeName()))
{
currentSection = videoTag;
}
else if (currentSection.equals_ignore_case(videoTag) && settingTag.equals_ignore_case(xml->getNodeName() ))
{
stringw key = xml->getAttributeValueSafe(L"name");
if (!key.empty())
{
SettingMap[key] = xml->getAttributeValueSafe(L"value");
}
}
}
break;
case irr::io::EXN_ELEMENT_END:
currentSection=L"";
break;
}
}
xml->drop();
return true;
}
bool save()
{
if (!NullDevice)
return false;
irr::io::IXMLWriter* xwriter = NullDevice->getFileSystem()->createXMLWriter( SettingsFile );
if (!xwriter)
return false;
xwriter->writeXMLHeader();
xwriter->writeElement(L"mygame");
xwriter->writeLineBreak();
xwriter->writeElement(L"video");
xwriter->writeLineBreak();
map<stringw, stringw>::Iterator i = SettingMap.getIterator();
for(; !i.atEnd(); i++)
{
xwriter->writeElement(L"setting",true, L"name", i->getKey().c_str(), L"value",i->getValue().c_str() );
xwriter->writeLineBreak();
}
xwriter->writeLineBreak();
xwriter->writeClosingTag(L"video");
xwriter->writeLineBreak();
xwriter->writeClosingTag(L"mygame");
xwriter->drop();
return true;
}
void setSetting(const stringw& name, const stringw& value)
{
SettingMap[name]=value;
}
void setSetting(const stringw& name, s32 value)
{
SettingMap[name]=stringw(value);
}
stringw getSetting(const stringw& key) const
{
map<stringw, stringw>::Node* n = SettingMap.find(key);
if (n)
return n->getValue();
else
return L"";
}
bool getSettingAsBoolean(const stringw& key ) const
{
stringw s = getSetting(key);
if (s.empty())
return false;
return s.equals_ignore_case(L"1");
}
s32 getSettingAsInteger(const stringw& key) const
{
const stringc s = getSetting(key);
if (s.empty())
return 0;
return strtol10(s.c_str());
}
public:
map<stringw, s32> DriverOptions;
map<stringw, dimension2du> ResolutionOptions;
private:
SettingManager(const SettingManager& other);
SettingManager& operator=(const SettingManager& other);
map<stringw, stringw> SettingMap;
stringw SettingsFile;
irr::IrrlichtDevice* NullDevice;
};
Application context for global variables
struct SAppContext
{
SAppContext()
: Device(0),Gui(0), Driver(0), Settings(0), ShouldQuit(false),
ButtonSave(0), ButtonExit(0), ListboxDriver(0),
ListboxResolution(0), CheckboxFullscreen(0)
{
}
~SAppContext()
{
if (Settings)
delete Settings;
if (Device)
{
Device->closeDevice();
Device->drop();
}
}
IrrlichtDevice* Device;
IGUIEnvironment* Gui;
IVideoDriver* Driver;
SettingManager* Settings;
bool ShouldQuit;
IGUIButton* ButtonSave;
IGUIButton* ButtonExit;
IGUIListBox* ListboxDriver;
IGUIListBox* ListboxResolution;
IGUICheckBox* CheckboxFullscreen;
};
A typical event receiver.
class MyEventReceiver : public IEventReceiver
{
public:
MyEventReceiver(SAppContext & a) : App(a) { }
virtual bool OnEvent(const SEvent& event)
{
if (event.EventType == EET_GUI_EVENT )
{
switch ( event.GUIEvent.EventType )
{
case EGET_BUTTON_CLICKED:
{
if ( event.GUIEvent.Caller == App.ButtonSave )
{
if ( App.ListboxDriver->getSelected() != -1)
App.Settings->setSetting(L"driver", App.ListboxDriver->getListItem(App.ListboxDriver->getSelected()));
if ( App.ListboxResolution->getSelected() != -1)
App.Settings->setSetting(L"resolution", App.ListboxResolution->getListItem(App.ListboxResolution->getSelected()));
App.Settings->setSetting(L"fullscreen", App.CheckboxFullscreen->isChecked());
if (App.Settings->save())
{
App.Gui->addMessageBox(L"settings save",L"settings saved, please restart for settings to change effect","",true);
}
}
else if ( event.GUIEvent.Caller == App.ButtonExit)
{
App.ShouldQuit = true;
}
}
break;
}
}
return false;
}
private:
SAppContext & App;
};
Function to create a video settings dialog. This dialog shows the current settings from the configuration XML and allows them to be changed.
void createSettingsDialog(SAppContext& app)
{
for (irr::s32 i=0; i<irr::gui::EGDC_COUNT ; ++i)
{
irr::video::SColor col = app.Gui->getSkin()->getColor((irr::gui::EGUI_DEFAULT_COLOR)i);
col.setAlpha(255);
app.Gui->getSkin()->setColor((irr::gui::EGUI_DEFAULT_COLOR)i, col);
}
gui::IGUIWindow* windowSettings = app.Gui->addWindow(rect<s32>(10,10,400,400),true,L"Videosettings");
app.Gui->addStaticText (L"Select your desired video settings", rect< s32 >(10,20, 200, 40), false, true, windowSettings);
app.Gui->addStaticText (L"Driver", rect< s32 >(10,50, 200, 60), false, true, windowSettings);
app.ListboxDriver = app.Gui->addListBox(rect<s32>(10,60,220,120), windowSettings, 1,true);
map<stringw, s32>::Iterator i = app.Settings->DriverOptions.getIterator();
for(; !i.atEnd(); i++)
app.ListboxDriver->addItem(i->getKey().c_str());
app.ListboxDriver->setSelected(app.Settings->getSetting("driver").c_str());
app.Gui->addStaticText (L"Resolution", rect< s32 >(10,130, 200, 140), false, true, windowSettings);
app.ListboxResolution = app.Gui->addListBox(rect<s32>(10,140,220,200), windowSettings, 1,true);
map<stringw, dimension2du>::Iterator ri = app.Settings->ResolutionOptions.getIterator();
for(; !ri.atEnd(); ri++)
app.ListboxResolution->addItem(ri->getKey().c_str());
app.ListboxResolution->setSelected(app.Settings->getSetting("resolution").c_str());
app.CheckboxFullscreen = app.Gui->addCheckBox(
app.Settings->getSettingAsBoolean("fullscreen"),
rect<s32>(10,220,220,240), windowSettings, -1,
L"Fullscreen");
app.ButtonSave = app.Gui->addButton(
rect<s32>(80,250,150,270), windowSettings, 2,
L"Save video settings");
app.ButtonExit = app.Gui->addButton(
rect<s32>(160,250,240,270), windowSettings, 2,
L"Cancel and exit");
}
The main function. Creates all objects and does the XML handling.
int main()
{
SAppContext app;
SIrrlichtCreationParameters param;
param.DriverType = EDT_SOFTWARE;
param.WindowSize.set(640,480);
app.Settings = new SettingManager("../../media/settings.xml");
if ( !app.Settings->load() )
{
}
else
{
map<stringw, s32>::Node* driver = app.Settings->DriverOptions.find( app.Settings->getSetting("driver") );
if (driver)
{
if ( irr::IrrlichtDevice::isDriverSupported( static_cast<E_DRIVER_TYPE>( driver->getValue() )))
{
param.DriverType = static_cast<E_DRIVER_TYPE>( driver->getValue());
}
}
map<stringw, dimension2du>::Node* res = app.Settings->ResolutionOptions.find( app.Settings->getSetting("resolution") );
if (res)
{
param.WindowSize = res->getValue();
}
param.Fullscreen = app.Settings->getSettingAsBoolean("fullscreen");
}
app.Device = createDeviceEx(param);
if (app.Device == 0)
{
exit(0);
}
app.Device->setWindowCaption(L"Xmlhandling - Irrlicht engine tutorial");
app.Driver = app.Device->getVideoDriver();
app.Gui = app.Device->getGUIEnvironment();
createSettingsDialog(app);
MyEventReceiver receiver(app);
app.Device->setEventReceiver(&receiver);
while (!app.ShouldQuit && app.Device->run())
{
if (app.Device->isWindowActive())
{
app.Driver->beginScene(true, true, SColor(0,200,200,200));
app.Gui->drawAll();
app.Driver->endScene();
}
app.Device->sleep(10);
}
return 0;
}
+ </pre>
bool load()
{
if (!NullDevice)
return false;
irr::io::IXMLReader* xml = NullDevice->getFileSystem()->createXMLReader(SettingsFile);
if (!xml)
return false;
const stringw settingTag(L"setting");
stringw currentSection;
const stringw videoTag(L"video");
while (xml->read())
{
switch (xml->getNodeType())
{
case irr::io::EXN_ELEMENT:
{
if (currentSection.empty() && videoTag.equals_ignore_case(xml->getNodeName()))
{
currentSection = videoTag;
}
else if (currentSection.equals_ignore_case(videoTag) && settingTag.equals_ignore_case(xml->getNodeName() ))
{
stringw key = xml->getAttributeValueSafe(L"name");
if (!key.empty())
{
SettingMap[key] = xml->getAttributeValueSafe(L"value");
}
}
}
break;
case irr::io::EXN_ELEMENT_END:
currentSection=L"";
break;
default:
break;
}
}
xml->drop();
return true;
}
bool save()
{
if (!NullDevice)
return false;
irr::io::IXMLWriter* xwriter = NullDevice->getFileSystem()->createXMLWriter( SettingsFile );
if (!xwriter)
return false;
xwriter->writeXMLHeader();
xwriter->writeElement(L"mygame");
xwriter->writeLineBreak();
xwriter->writeElement(L"video");
xwriter->writeLineBreak();
map<stringw, stringw>::Iterator i = SettingMap.getIterator();
for(; !i.atEnd(); i++)
{
xwriter->writeElement(L"setting",true, L"name", i->getKey().c_str(), L"value",i->getValue().c_str() );
xwriter->writeLineBreak();
}
xwriter->writeLineBreak();
xwriter->writeClosingTag(L"video");
xwriter->writeLineBreak();
xwriter->writeClosingTag(L"mygame");
xwriter->drop();
return true;
}
void setSetting(const stringw& name, const stringw& value)
{
SettingMap[name]=value;
}
void setSetting(const stringw& name, s32 value)
{
SettingMap[name]=stringw(value);
}
stringw getSetting(const stringw& key) const
{
map<stringw, stringw>::Node* n = SettingMap.find(key);
if (n)
return n->getValue();
else
return L"";
}
bool getSettingAsBoolean(const stringw& key ) const
{
stringw s = getSetting(key);
if (s.empty())
return false;
return s.equals_ignore_case(L"1");
}
s32 getSettingAsInteger(const stringw& key) const
{
const stringc s = getSetting(key);
if (s.empty())
return 0;
return strtol10(s.c_str());
}
public:
map<stringw, s32> DriverOptions;
map<stringw, dimension2du> ResolutionOptions;
private:
SettingManager(const SettingManager& other);
SettingManager& operator=(const SettingManager& other);
map<stringw, stringw> SettingMap;
stringw SettingsFile;
irr::IrrlichtDevice* NullDevice;
};
Application context for global variables
struct SAppContext
{
SAppContext()
: Device(0),Gui(0), Driver(0), Settings(0), ShouldQuit(false),
ButtonSave(0), ButtonExit(0), ListboxDriver(0),
ListboxResolution(0), CheckboxFullscreen(0)
{
}
~SAppContext()
{
if (Settings)
delete Settings;
if (Device)
{
Device->closeDevice();
Device->drop();
}
}
IrrlichtDevice* Device;
IGUIEnvironment* Gui;
IVideoDriver* Driver;
SettingManager* Settings;
bool ShouldQuit;
IGUIButton* ButtonSave;
IGUIButton* ButtonExit;
IGUIListBox* ListboxDriver;
IGUIListBox* ListboxResolution;
IGUICheckBox* CheckboxFullscreen;
};
A typical event receiver.
class MyEventReceiver : public IEventReceiver
{
public:
MyEventReceiver(SAppContext & a) : App(a) { }
virtual bool OnEvent(const SEvent& event)
{
if (event.EventType == EET_GUI_EVENT )
{
switch ( event.GUIEvent.EventType )
{
case EGET_BUTTON_CLICKED:
{
if ( event.GUIEvent.Caller == App.ButtonSave )
{
if ( App.ListboxDriver->getSelected() != -1)
App.Settings->setSetting(L"driver", App.ListboxDriver->getListItem(App.ListboxDriver->getSelected()));
if ( App.ListboxResolution->getSelected() != -1)
App.Settings->setSetting(L"resolution", App.ListboxResolution->getListItem(App.ListboxResolution->getSelected()));
App.Settings->setSetting(L"fullscreen", App.CheckboxFullscreen->isChecked());
if (App.Settings->save())
{
App.Gui->addMessageBox(L"settings save",L"settings saved, please restart for settings to change effect","",true);
}
}
else if ( event.GUIEvent.Caller == App.ButtonExit)
{
App.ShouldQuit = true;
}
}
break;
default:
break;
}
}
return false;
}
private:
SAppContext & App;
};
Function to create a video settings dialog. This dialog shows the current settings from the configuration XML and allows them to be changed.
void createSettingsDialog(SAppContext& app)
{
for (irr::s32 i=0; i<irr::gui::EGDC_COUNT ; ++i)
{
irr::video::SColor col = app.Gui->getSkin()->getColor((irr::gui::EGUI_DEFAULT_COLOR)i);
col.setAlpha(255);
app.Gui->getSkin()->setColor((irr::gui::EGUI_DEFAULT_COLOR)i, col);
}
gui::IGUIWindow* windowSettings = app.Gui->addWindow(rect<s32>(10,10,400,400),true,L"Videosettings");
app.Gui->addStaticText (L"Select your desired video settings", rect< s32 >(10,20, 200, 40), false, true, windowSettings);
app.Gui->addStaticText (L"Driver", rect< s32 >(10,50, 200, 60), false, true, windowSettings);
app.ListboxDriver = app.Gui->addListBox(rect<s32>(10,60,220,120), windowSettings, 1,true);
map<stringw, s32>::Iterator i = app.Settings->DriverOptions.getIterator();
for(; !i.atEnd(); i++)
app.ListboxDriver->addItem(i->getKey().c_str());
app.ListboxDriver->setSelected(app.Settings->getSetting("driver").c_str());
app.Gui->addStaticText (L"Resolution", rect< s32 >(10,130, 200, 140), false, true, windowSettings);
app.ListboxResolution = app.Gui->addListBox(rect<s32>(10,140,220,200), windowSettings, 1,true);
map<stringw, dimension2du>::Iterator ri = app.Settings->ResolutionOptions.getIterator();
for(; !ri.atEnd(); ri++)
app.ListboxResolution->addItem(ri->getKey().c_str());
app.ListboxResolution->setSelected(app.Settings->getSetting("resolution").c_str());
app.CheckboxFullscreen = app.Gui->addCheckBox(
app.Settings->getSettingAsBoolean("fullscreen"),
rect<s32>(10,220,220,240), windowSettings, -1,
L"Fullscreen");
app.ButtonSave = app.Gui->addButton(
rect<s32>(80,250,150,270), windowSettings, 2,
L"Save video settings");
app.ButtonExit = app.Gui->addButton(
rect<s32>(160,250,240,270), windowSettings, 2,
L"Cancel and exit");
}
The main function. Creates all objects and does the XML handling.
int main()
{
SAppContext app;
SIrrlichtCreationParameters param;
param.DriverType = EDT_SOFTWARE;
param.WindowSize.set(640,480);
app.Settings = new SettingManager(getExampleMediaPath() + "settings.xml");
if ( !app.Settings->load() )
{
}
else
{
map<stringw, s32>::Node* driver = app.Settings->DriverOptions.find( app.Settings->getSetting("driver") );
if (driver)
{
if ( irr::IrrlichtDevice::isDriverSupported( static_cast<E_DRIVER_TYPE>( driver->getValue() )))
{
param.DriverType = static_cast<E_DRIVER_TYPE>( driver->getValue());
}
}
map<stringw, dimension2du>::Node* res = app.Settings->ResolutionOptions.find( app.Settings->getSetting("resolution") );
if (res)
{
param.WindowSize = res->getValue();
}
param.Fullscreen = app.Settings->getSettingAsBoolean("fullscreen");
}
app.Device = createDeviceEx(param);
if (app.Device == 0)
{
exit(0);
}
app.Device->setWindowCaption(L"Xmlhandling - Irrlicht engine tutorial");
app.Driver = app.Device->getVideoDriver();
app.Gui = app.Device->getGUIEnvironment();
createSettingsDialog(app);
MyEventReceiver receiver(app);
app.Device->setEventReceiver(&receiver);
while (!app.ShouldQuit && app.Device->run())
{
if (app.Device->isWindowActive())
{
app.Driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, SColor(0,200,200,200));
app.Gui->drawAll();
app.Driver->endScene();
}
app.Device->sleep(10);
}
return 0;
}
diff --git a/examples/26.OcclusionQuery/tutorial.html b/examples/26.OcclusionQuery/tutorial.html
index 8dd4061b..f9ec97c7 100644
--- a/examples/26.OcclusionQuery/tutorial.html
+++ b/examples/26.OcclusionQuery/tutorial.html
@@ -195,10 +195,10 @@ tr.heading h2 {
-
+
-
This Tutorial shows how to speed up rendering by use of the OcclusionQuery feature. The usual rendering tries to avoid rendering of scene nodes by culling those nodes which are outside the visible area, the view frustum. However, this technique does not cope with occluded objects which are still in the line of sight, but occluded by some larger object between the object and the eye (camera). Occlusion queries check exactly that. The queries basically measure the number of pixels that a previous render left on the screen. Since those pixels cannot be recognized at the end of a rendering anymore, the pixel count is measured directly when rendering. Thus, one needs to render the occluder (the object in front) first. This object needs to write to the z-buffer in order to become a real occluder. Then the node is rendered and in case a z-pass happens, i.e. the pixel is written to the framebuffer, the pixel is counted in the query. The result of a query is the number of pixels which got through. One can, based on this number, judge if the scene node is visible enough to be rendered, or if the node should be removed in the next round. Also note that the number of pixels is a safe over approximation in general. The pixels might be overdrawn later on, and the GPU tries to avoid inaccuracies which could lead to false negatives in the queries.
-
As you might have recognized already, we had to render the node to get the numbers. So where's the benefit, you might say. There are several ways in which occlusion queries can help. It is often a good idea to just render the bounding box of the node instead of the actual mesh. This is really fast and a safe over-approximation. If you need a more exact render with the actual geometry, it's a good idea to render with just a basic solid material; avoid complex shaders and state changes through textures, as there's no need for them while just doing the occlusion query, at least if the render is not used for the actual scene. The third way to optimize occlusion queries is to check them only every 5th or 10th frame, or even less frequently, depending on the movement speed of the objects and the camera.
#ifdef _MSC_VER
#define _CRT_SECURE_NO_WARNINGS
#pragma comment(lib, "Irrlicht.lib")
#endif
#include <irrlicht.h>
#include "driverChoice.h"
using namespace irr;
We need keyboard input events to switch some parameters
class MyEventReceiver : public IEventReceiver
{
public:
virtual bool OnEvent(const SEvent& event)
{
if (event.EventType == irr::EET_KEY_INPUT_EVENT)
KeyIsDown[event.KeyInput.Key] = event.KeyInput.PressedDown;
return false;
}
virtual bool IsKeyDown(EKEY_CODE keyCode) const
{
return KeyIsDown[keyCode];
}
MyEventReceiver()
{
for (u32 i=0; i<KEY_KEY_CODES_COUNT; ++i)
KeyIsDown[i] = false;
}
private:
bool KeyIsDown[KEY_KEY_CODES_COUNT];
};
We create an irr::IrrlichtDevice and the scene nodes. One occluder, one occluded. The latter is a complex sphere, which has many triangles.
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
MyEventReceiver receiver;
IrrlichtDevice* device = createDevice(driverType,
core::dimension2d<u32>(640, 480), 16, false, false, false, &receiver);
if (device == 0)
return 1;
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
smgr->getGUIEnvironment()->addStaticText(L"Press Space to hide occluder.", core::recti(10,10, 200,50));
Create the node to be occluded. We create a sphere node with high poly count.
scene::ISceneNode * node = smgr->addSphereSceneNode(10, 64);
if (node)
{
node->setPosition(core::vector3df(0,0,60));
node->setMaterialTexture(0, driver->getTexture("../../media/wall.bmp"));
node->setMaterialFlag(video::EMF_LIGHTING, false);
}
Now we create another node, the occluder. It's a simple plane.
scene::ISceneNode* plane = smgr->addMeshSceneNode(smgr->addHillPlaneMesh(
"plane", core::dimension2df(10,10), core::dimension2du(2,2)), 0, -1,
core::vector3df(0,0,20), core::vector3df(270,0,0));
if (plane)
{
plane->setMaterialTexture(0, driver->getTexture("../../media/t351sml.jpg"));
plane->setMaterialFlag(video::EMF_LIGHTING, false);
plane->setMaterialFlag(video::EMF_BACK_FACE_CULLING, true);
}
Here we create the occlusion query. Because we don't have a plain mesh scene node (ESNT_MESH or ESNT_ANIMATED_MESH), we pass the base geometry as well. Instead, we could also pass a simpler mesh or the bounding box. But we will use a time based method, where the occlusion query renders to the frame buffer and in case of success (occlusion), the mesh is not drawn for several frames.
driver->addOcclusionQuery(node, ((scene::IMeshSceneNode*)node)->getMesh());
We are nearly done; all that is left is to add a camera and draw the scene. We also write the current frames per second and the name of the driver to the caption of the window to examine the render speedup. We also store the time, for measuring how long it has been since the last occlusion query ran, and store whether the node should be visible in the next frames.
smgr->addCameraSceneNode();
int lastFPS = -1;
u32 timeNow = device->getTimer()->getTime();
bool nodeVisible=true;
while(device->run())
{
plane->setVisible(!receiver.IsKeyDown(irr::KEY_SPACE));
driver->beginScene(true, true, video::SColor(255,113,113,133));
First, we draw the scene, possibly without the occluded element. This is necessary because we need the occluder to be drawn first. You can also use several scene managers to collect a number of possible occluders in a separately rendered scene.
node->setVisible(nodeVisible);
smgr->drawAll();
smgr->getGUIEnvironment()->drawAll();
Once in a while, here every 100 ms, we check the visibility. We run the queries, update the pixel value, and fetch the result. Since we already rendered the node, we run the query without rendering it visibly. The update is made blocking, as we need the result immediately. If you don't need the result immediately, e.g. because you have other things to render, you can call the update non-blocking. This gives the GPU more time to pass back the results without flushing the render pipeline. If the update was called non-blocking, the result from getOcclusionQueryResult is either the previous value, or 0xffffffff if no value has been generated at all yet. The result is taken immediately as the visibility flag for the node.
if (device->getTimer()->getTime()-timeNow>100)
{
driver->runAllOcclusionQueries(false);
driver->updateAllOcclusionQueries();
nodeVisible=driver->getOcclusionQueryResult(node)>0;
timeNow=device->getTimer()->getTime();
}
driver->endScene();
int fps = driver->getFPS();
if (lastFPS != fps)
{
core::stringw tmp(L"OcclusionQuery Example [");
tmp += driver->getName();
tmp += L"] fps: ";
tmp += fps;
device->setWindowCaption(tmp.c_str());
lastFPS = fps;
}
}
In the end, delete the Irrlicht device.
device->drop();
return 0;
}
That's it. Compile and play around with the program.
+
This tutorial shows how to speed up rendering by use of the OcclusionQuery feature. The usual rendering tries to avoid rendering of scene nodes by culling those nodes which are outside the visible area, the view frustum. However, this technique does not cope with occluded objects which are still in the line of sight, but occluded by some larger object between the object and the eye (camera). Occlusion queries check exactly that. The queries basically measure the number of pixels that a previous render left on the screen. Since those pixels cannot be recognized at the end of a rendering anymore, the pixel count is measured directly when rendering. Thus, one needs to render the occluder (the object in front) first. This object needs to write to the z-buffer in order to become a real occluder. Then the node is rendered and in case a z-pass happens, i.e. the pixel is written to the framebuffer, the pixel is counted in the query. The result of a query is the number of pixels which got through. One can, based on this number, judge if the scene node is visible enough to be rendered, or if the node should be removed in the next round. Also note that the number of pixels is a safe over approximation in general. The pixels might be overdrawn later on, and the GPU tries to avoid inaccuracies which could lead to false negatives in the queries.
+
As you might have recognized already, we had to render the node to get the numbers. So where's the benefit, you might say. There are several ways in which occlusion queries can help. It is often a good idea to just render the bounding box of the node instead of the actual mesh. This is really fast and a safe over-approximation. If you need a more exact render with the actual geometry, it's a good idea to render with just a basic solid material; avoid complex shaders and state changes through textures, as there's no need for them while just doing the occlusion query, at least if the render is not used for the actual scene. The third way to optimize occlusion queries is to check them only every 5th or 10th frame, or even less frequently, depending on the movement speed of the objects and the camera.
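A minimal, hedged sketch of the bounding-box idea mentioned above (this example instead passes the real mesh; 'smgr', 'driver' and 'node' refer to the variables created further down):
// register a cheap proxy mesh - a cube roughly the size of the node's bounding box -
// so the query renders far fewer triangles than the detailed sphere
scene::IMesh* proxy = smgr->getGeometryCreator()->createCubeMesh(node->getBoundingBox().getExtent());
driver->addOcclusionQuery(node, proxy);
// keep 'proxy' around as long as the query is in use (only drop it if you have
// verified that the driver grabbed its own reference)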
#ifdef _MSC_VER
#define _CRT_SECURE_NO_WARNINGS
#pragma comment(lib, "Irrlicht.lib")
#endif
#include <irrlicht.h>
#include "driverChoice.h"
#include "exampleHelper.h"
using namespace irr;
We need keyboard input events to switch some parameters
class MyEventReceiver : public IEventReceiver
{
public:
virtual bool OnEvent(const SEvent& event)
{
if (event.EventType == irr::EET_KEY_INPUT_EVENT)
KeyIsDown[event.KeyInput.Key] = event.KeyInput.PressedDown;
return false;
}
virtual bool IsKeyDown(EKEY_CODE keyCode) const
{
return KeyIsDown[keyCode];
}
MyEventReceiver()
{
for (u32 i=0; i<KEY_KEY_CODES_COUNT; ++i)
KeyIsDown[i] = false;
}
private:
bool KeyIsDown[KEY_KEY_CODES_COUNT];
};
We create an irr::IrrlichtDevice and the scene nodes. One occluder, one occluded. The latter is a complex sphere, which has many triangles.
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
MyEventReceiver receiver;
IrrlichtDevice* device = createDevice(driverType,
core::dimension2d<u32>(640, 480), 16, false, false, false, &receiver);
if (device == 0)
return 1;
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
const io::path mediaPath = getExampleMediaPath();
smgr->getGUIEnvironment()->addStaticText(L"Press Space to hide occluder.", core::recti(10,10, 200,50));
Create the node to be occluded. We create a sphere node with high poly count.
scene::ISceneNode * node = smgr->addSphereSceneNode(10, 64);
if (node)
{
node->setPosition(core::vector3df(0,0,60));
node->setMaterialTexture(0, driver->getTexture(mediaPath + "wall.bmp"));
node->setMaterialFlag(video::EMF_LIGHTING, false);
}
Now we create another node, the occluder. It's a simple plane.
scene::ISceneNode* plane = smgr->addMeshSceneNode(smgr->addHillPlaneMesh(
"plane", core::dimension2df(10,10), core::dimension2du(2,2)), 0, -1,
core::vector3df(0,0,20), core::vector3df(270,0,0));
if (plane)
{
plane->setMaterialTexture(0, driver->getTexture(mediaPath + "t351sml.jpg"));
plane->setMaterialFlag(video::EMF_LIGHTING, false);
plane->setMaterialFlag(video::EMF_BACK_FACE_CULLING, true);
}
Here we create the occlusion query. Because we don't have a plain mesh scene node (ESNT_MESH or ESNT_ANIMATED_MESH), we pass the base geometry as well. Instead, we could also pass a simpler mesh or the bounding box. But we will use a time based method, where the occlusion query renders to the frame buffer and in case of success (occlusion), the mesh is not drawn for several frames.
driver->addOcclusionQuery(node, ((scene::IMeshSceneNode*)node)->getMesh());
We are nearly done; all that is left is to add a camera and draw the scene. We also write the current frames per second and the name of the driver to the caption of the window to examine the render speedup. We also store the time, for measuring how long it has been since the last occlusion query ran, and store whether the node should be visible in the next frames.
smgr->addCameraSceneNode();
int lastFPS = -1;
u32 timeNow = device->getTimer()->getTime();
bool nodeVisible=true;
while(device->run())
{
plane->setVisible(!receiver.IsKeyDown(irr::KEY_SPACE));
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, video::SColor(255,113,113,133));
First, we draw the scene, possibly without the occluded element. This is necessary because we need the occluder to be drawn first. You can also use several scene managers to collect a number of possible occluders in a separately rendered scene.
node->setVisible(nodeVisible);
smgr->drawAll();
smgr->getGUIEnvironment()->drawAll();
Once in a while, here every 100 ms, we check the visibility. We run the queries, update the pixel value, and fetch the result. Since we already rendered the node, we run the query without rendering it visibly. The update is made blocking, as we need the result immediately. If you don't need the result immediately, e.g. because you have other things to render, you can call the update non-blocking. This gives the GPU more time to pass back the results without flushing the render pipeline. If the update was called non-blocking, the result from getOcclusionQueryResult is either the previous value, or 0xffffffff if no value has been generated at all yet. The result is taken immediately as the visibility flag for the node.
if (device->getTimer()->getTime()-timeNow>100)
{
driver->runAllOcclusionQueries(false);
driver->updateAllOcclusionQueries();
nodeVisible=driver->getOcclusionQueryResult(node)>0;
timeNow=device->getTimer()->getTime();
}
driver->endScene();
int fps = driver->getFPS();
if (lastFPS != fps)
{
core::stringw tmp(L"OcclusionQuery Example [");
tmp += driver->getName();
tmp += L"] fps: ";
tmp += fps;
device->setWindowCaption(tmp.c_str());
lastFPS = fps;
}
}
In the end, delete the Irrlicht device.
device->drop();
return 0;
}
That's it. Compile and play around with the program.
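For completeness, a hedged sketch of the non-blocking variant described above (not part of the tutorial code; it simply follows the behaviour described for updateAllOcclusionQueries and getOcclusionQueryResult):
driver->runAllOcclusionQueries(false); // run the queries without drawing the nodes visibly
driver->updateAllOcclusionQueries(false); // false = do not block waiting for the GPU
u32 result = driver->getOcclusionQueryResult(node);
if (result != 0xffffffff) // 0xffffffff means no result has arrived yet
	nodeVisible = result > 0; // otherwise keep the previous visibility value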
diff --git a/examples/27.PostProcessing/tutorial.html b/examples/27.PostProcessing/tutorial.html
new file mode 100644
index 00000000..505c3841
--- /dev/null
+++ b/examples/27.PostProcessing/tutorial.html
@@ -0,0 +1,209 @@
+
+
+
+
+
+
+
+Tutorial 27: Post Processing
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
This tutorial shows how to implement post processing for D3D9 and OpenGL with the engine. In order to do post processing, scene objects are first rendered to a render target. With the help of a screen quad, the render target texture is then drawn on the quad with shader-defined effects applied.
+
This tutorial shows how to create a screen quad. It also shows how to create a render target texture and associate it with the quad. Effects are defined as shaders which are applied while rendering the quad with the render target texture attached to it.
+
A simple color inverse example is presented in this tutorial. The effect is written in HLSL and GLSL.
+
- Author
- Boshen Guan
+
We include all headers and define necessary variables as we have done before.
#include "driverChoice.h"
#include "exampleHelper.h"
#include <irrlicht.h>
using namespace irr;
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
We write a class derived from the IShaderConstantSetCallBack class and implement the OnSetConstants callback interface. In this callback, we will set the constants used by the shader. In this example, our HLSL shader needs the texture size as input in its vertex shader. Therefore, we set the texture size in the OnSetConstants callback using the setVertexShaderConstant function.
IrrlichtDevice* device = 0;
video::ITexture* rt = 0;
class QuadShaderCallBack : public video::IShaderConstantSetCallBack
{
public:
QuadShaderCallBack() : FirstUpdate(true), TextureSizeID(-1), TextureSamplerID(-1)
{ }
virtual void OnSetConstants(video::IMaterialRendererServices* services,
s32 userData)
{
core::dimension2d<u32> size = rt->getSize();
f32 textureSize[] =
{
(f32)size.Width, (f32)size.Height
};
if ( FirstUpdate )
{
TextureSizeID = services->getVertexShaderConstantID("TextureSize");
TextureSamplerID = services->getPixelShaderConstantID("TextureSampler");
}
services->setVertexShaderConstant(TextureSizeID, reinterpret_cast<f32*>(textureSize), 2);
s32 textureLayer = 0;
services->setPixelShaderConstant(TextureSamplerID, &textureLayer, 1);
}
private:
bool FirstUpdate;
s32 TextureSizeID;
s32 TextureSamplerID;
};
class ScreenQuad : public IReferenceCounted
{
public:
ScreenQuad(video::IVideoDriver* driver)
: Driver(driver)
{
A screen quad is composed of two adjacent triangles with 4 vertices. Vertices [0], [1] and [2] create the first triangle and vertices [0], [2] and [3] create the second one. To map a texture onto the quad, UV coordinates are assigned to the vertices. The origin of the UV coordinates is the top-left corner, and UV values range from 0 to 1.
Vertices[0] = irr::video::S3DVertex(-1.0f, -1.0f, 0.0f, 1, 1, 0, irr::video::SColor(0,255,255,255), 0.0f, 1.0f);
Vertices[1] = irr::video::S3DVertex(-1.0f, 1.0f, 0.0f, 1, 1, 0, irr::video::SColor(0,255,255,255), 0.0f, 0.0f);
Vertices[2] = irr::video::S3DVertex( 1.0f, 1.0f, 0.0f, 1, 1, 0, irr::video::SColor(0,255,255,255), 1.0f, 0.0f);
Vertices[3] = irr::video::S3DVertex( 1.0f, -1.0f, 0.0f, 1, 1, 0, irr::video::SColor(0,255,255,255), 1.0f, 1.0f);
Indices[0] = 0;
Indices[1] = 1;
Indices[2] = 2;
Indices[3] = 0;
Indices[4] = 2;
Indices[5] = 3;
Material.setFlag(video::EMF_LIGHTING, false);
for (u32 i = 0; i < video::MATERIAL_MAX_TEXTURES; i++)
{
Material.TextureLayer[i].TextureWrapU = video::ETC_CLAMP_TO_EDGE;
Material.TextureLayer[i].TextureWrapV = video::ETC_CLAMP_TO_EDGE;
}
}
virtual ~ScreenQuad() {}
virtual void render()
{
Driver->setMaterial(Material);
Driver->setTransform(video::ETS_WORLD, core::IdentityMatrix);
Driver->setTransform(video::ETS_VIEW, core::IdentityMatrix);
Driver->setTransform(video::ETS_PROJECTION, core::IdentityMatrix);
Driver->drawVertexPrimitiveList(Vertices, 4, Indices, 2);
}
virtual void setMaterialFlag(video::E_MATERIAL_FLAG flag, bool newvalue)
{
Material.setFlag(flag, newvalue);
}
void setMaterialTexture(u32 textureLayer, video::ITexture* texture)
{
Material.setTexture(textureLayer, texture);
}
virtual void setMaterialType(video::E_MATERIAL_TYPE newType)
{
Material.MaterialType = newType;
}
private:
video::IVideoDriver *Driver;
video::S3DVertex Vertices[4];
u16 Indices[6];
video::SMaterial Material;
};
We start up the engine just like before. Then shader programs are selected according to the driver type.
int main()
{
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
device = createDevice(driverType, core::dimension2d<u32>(640, 480));
if (device == 0)
return 1;
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
In this example, high level post processing shaders are loaded for both Direct3D and OpenGL drivers. File pp_d3d9.hlsl is for Direct3D 9, and pp_opengl.frag/pp_opengl.vert are for OpenGL.
const io::path mediaPath = getExampleMediaPath();
io::path vsFileName;
io::path psFileName;
switch(driverType)
{
case video::EDT_DIRECT3D9:
psFileName = mediaPath + "pp_d3d9.hlsl";
vsFileName = psFileName;
break;
case video::EDT_OPENGL:
psFileName = mediaPath + "pp_opengl.frag";
vsFileName = mediaPath + "pp_opengl.vert";
break;
}
Check for hardware capability of executing the corresponding shaders on the selected renderer. This is not strictly necessary, though.
if (!driver->queryFeature(video::EVDF_PIXEL_SHADER_1_1) &&
!driver->queryFeature(video::EVDF_ARB_FRAGMENT_PROGRAM_1))
{
device->getLogger()->log("WARNING: Pixel shaders disabled "\
"because of missing driver/hardware support.");
psFileName = "";
}
if (!driver->queryFeature(video::EVDF_VERTEX_SHADER_1_1) &&
!driver->queryFeature(video::EVDF_ARB_VERTEX_PROGRAM_1))
{
device->getLogger()->log("WARNING: Vertex shaders disabled "\
"because of missing driver/hardware support.");
vsFileName = "";
}
An animated mesh is loaded to be displayed. As in most examples, we'll take the fairy md2 model.
scene::IAnimatedMeshSceneNode* fairy = smgr->addAnimatedMeshSceneNode(
smgr->getMesh(mediaPath + "faerie.md2"));
if (fairy)
{
fairy->setMaterialTexture(0,
driver->getTexture(mediaPath + "faerie2.bmp"));
fairy->setMaterialFlag(video::EMF_LIGHTING, false);
fairy->setPosition(core::vector3df(-10,0,-100));
fairy->setMD2Animation ( scene::EMAT_STAND );
}
smgr->addCameraSceneNode(0, core::vector3df(10,10,-80),
core::vector3df(-10,10,-100));
We create a render target texture (RTT) with the same size as the frame buffer. Instead of rendering the scene directly to the frame buffer, we first render it to this RTT. Post processing is then applied based on this RTT. The RTT size does not need to match the frame buffer, though. However, in this example we expect the result of rendering to the RTT to be consistent with the result of rendering directly to the frame buffer. Therefore, the RTT keeps the same size as the frame buffer.
if (driver->queryFeature(video::EVDF_RENDER_TO_TARGET))
{
rt = driver->addRenderTargetTexture(core::dimension2d<u32>(640, 480), "RTT1");
}
else
{
device->getLogger()->log("Your hardware or this renderer is not able to use the "\
"render to texture feature. RTT Disabled.");
}
Post processing is achieved by rendering a screen quad with this RTT (holding the previously rendered result) as a texture on the quad. A screen quad is a flat piece of geometry composed of two adjacent triangles covering the entire area of the viewport. In this pass of rendering, the RTT works just like a normal texture and is drawn on the quad during rendering. We can then take control of this rendering process by applying various shader-defined materials to the quad. In other words, we can achieve different effects by writing different shaders. This process is called post processing because it normally does not rely on scene geometry. The inputs of this process are just textures, or in other words, just images. With the help of the screen quad, we can draw these images on the screen with different effects. For example, we can adjust contrast, convert to grayscale, add noise, apply fancier effects such as blur, bloom or ghosting, or, just like in this example, invert the color to produce a negative image. Note that post processing is not limited to using only one texture; it can take multiple textures as shader inputs to produce the desired result. In addition, post-processing passes can also be chained to produce a compound result (a short sketch of such chaining follows after the material setup below).
ScreenQuad *screenQuad = new ScreenQuad(driver);
screenQuad->setMaterialFlag(video::EMF_USE_MIP_MAPS, false);
screenQuad->setMaterialFlag(video::EMF_BILINEAR_FILTER, false);
screenQuad->setMaterialTexture(0, rt);
Let's create the material for the quad. As in other examples, we create the material using IGPUProgrammingServices and call addHighLevelShaderMaterialFromFiles, which returns a material type identifier.
video::IGPUProgrammingServices* gpu = driver->getGPUProgrammingServices();
s32 ppMaterialType = 0;
if (gpu)
{
QuadShaderCallBack* mc = new QuadShaderCallBack();
ppMaterialType = gpu->addHighLevelShaderMaterialFromFiles(
vsFileName, "vertexMain", video::EVST_VS_1_1,
psFileName, "pixelMain", video::EPST_PS_1_1, mc);
mc->drop();
}
screenQuad->setMaterialType((video::E_MATERIAL_TYPE)ppMaterialType);
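As mentioned earlier, post-processing passes can be chained. A hedged sketch of what a two-pass chain could roughly look like (rtA, rtB, firstPassMaterial and secondPassMaterial are hypothetical names, not created in this example):
driver->setRenderTarget(rtA, true, true, video::SColor(255,0,0,0));
smgr->drawAll(); // render the scene into rtA
driver->setRenderTarget(rtB, true, true, video::SColor(255,0,0,0));
screenQuad->setMaterialTexture(0, rtA);
screenQuad->setMaterialType((video::E_MATERIAL_TYPE)firstPassMaterial);
screenQuad->render(); // first effect: rtA -> rtB
driver->setRenderTarget(0, true, true, video::SColor(255,0,0,0));
screenQuad->setMaterialTexture(0, rtB);
screenQuad->setMaterialType((video::E_MATERIAL_TYPE)secondPassMaterial);
screenQuad->render(); // second effect: rtB -> screen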
Now draw everything. That's all.
int lastFPS = -1;
while(device->run())
{
if (device->isWindowActive())
{
driver->beginScene(true, true, video::SColor(255,0,0,0));
if (rt)
{
driver->setRenderTarget(rt, true, true, video::SColor(255,0,0,0));
smgr->drawAll();
driver->setRenderTarget(0, true, true, video::SColor(255,0,0,0));
screenQuad->render();
}
else
{
smgr->drawAll();
}
driver->endScene();
int fps = driver->getFPS();
if (lastFPS != fps)
{
core::stringw str = L"Irrlicht Engine - Post processing example [";
str += driver->getName();
str += "] FPS:";
str += fps;
device->setWindowCaption(str.c_str());
lastFPS = fps;
}
}
}
screenQuad->drop();
device->drop();
return 0;
}
+
+
+
+
+
diff --git a/examples/28.CubeMapping/tutorial.html b/examples/28.CubeMapping/tutorial.html
new file mode 100644
index 00000000..30a5dd1b
--- /dev/null
+++ b/examples/28.CubeMapping/tutorial.html
@@ -0,0 +1,213 @@
+
+
+
+
+
+
+
+Tutorial 28: CubeMapping
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Shows the usage of cubemap textures and how to do some simple environment mapping. Cubemap textures contain images for all 6 directions of a cube in a single texture. Environment mapping is used to reflect the environment around an object onto the object itself. Cubemaps only work with shader materials which are written to support cube mapping.
+
- Author
- Michael Zeilfelder, based on EnvCubeMap example from irrSpintz engine.
+
Start with the usual includes.
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
#include <irrlicht.h>
#include "driverChoice.h"
#include "exampleHelper.h"
using namespace irr;
A callback class for our cubemap shader. We need a shader material which maps the cubemap texture to the polygon vertices of objects.
class CubeMapReflectionCallback : public video::IShaderConstantSetCallBack
{
public:
CubeMapReflectionCallback(scene::ISceneManager* smgr, int styleUVW)
: SceneMgr(smgr)
, StyleUVW(styleUVW), Roughness(0.f)
, styleUvwID(-1) , worldViewProjID(-1), worldID(-1), cameraPosID(-1)
{}
Setting the style to map vertex UV-coordinates to the cubemap textures.
+- Specular style is typically used for mirrors and highlight reflections.
+- Diffuse style is commonly used in image based lighting calculations and often in combination with a higher roughness. Think of it as the sum of all light which reaches a point on your object.
+- Using model vertices directly for UV's is just nice for testing sometimes. Maybe has more uses? Experiment around :-)
void SetStyleUVW(int style)
{
StyleUVW = style;
}
int GetStyleUVW() const
{
return StyleUVW;
}
We could also call this sharpness, as the rougher a material is, the less sharp the reflections of a cubemap are (light for rough materials spreads out more, while smooth materials reflect it more like a mirror). Roughness is calculated using the mipmaps of the cubemap texture. Note that render-target cubemap textures won't have mipmaps, so unfortunately it won't work for those. Also, currently only OpenGL is able to interpolate seamlessly over cubemap borders; on Direct3D9 you only get smoothing per side, but not over side-borders.
void SetRoughness(float roughness)
{
Roughness = roughness;
}
float getRoughness() const
{
return Roughness;
}
Typical code which passes a few values from C++ to the shader.
virtual void OnSetMaterial(const video::SMaterial& material)
{}
virtual void OnSetConstants(video::IMaterialRendererServices* services, s32 userData)
{
video::IVideoDriver* driver = services->getVideoDriver();
if ( worldViewProjID < 0 )
{
styleUvwID = services->getVertexShaderConstantID("StyleUVW");
if( driver->getDriverType() == video::EDT_DIRECT3D9 )
{
worldViewProjID = services->getVertexShaderConstantID("WorldViewProj");
}
worldID = services->getVertexShaderConstantID("World");
cameraPosID = services->getVertexShaderConstantID("CameraPos");
roughnessID = services->getPixelShaderConstantID("Roughness");
}
services->setVertexShaderConstant(styleUvwID, &StyleUVW, 1 );
irr::core::matrix4 world = driver->getTransform(irr::video::ETS_WORLD);
services->setVertexShaderConstant(worldID, world.pointer(), 16);
if( driver->getDriverType() == video::EDT_DIRECT3D9 )
{
irr::core::matrix4 worldViewProj;
worldViewProj = driver->getTransform(irr::video::ETS_PROJECTION);
worldViewProj *= driver->getTransform(irr::video::ETS_VIEW);
worldViewProj *= world;
services->setVertexShaderConstant(worldViewProjID, worldViewProj.pointer(), 16);
}
core::vector3df cameraPos = SceneMgr->getActiveCamera()->getAbsolutePosition();
services->setVertexShaderConstant(cameraPosID, &cameraPos.X, 3 );
services->setPixelShaderConstant(roughnessID, &Roughness, 1 );
}
private:
scene::ISceneManager* SceneMgr;
int StyleUVW;
float Roughness;
irr::s32 styleUvwID;
irr::s32 worldViewProjID;
irr::s32 worldID;
irr::s32 cameraPosID;
irr::s32 roughnessID;
};
To keep the example compact, our event-receiver also acts as the main application class. It handles user input, updates the dynamic parts of the UI and keeps some 3d nodes around.
class MyEventReceiver : public IEventReceiver
{
public:
MyEventReceiver() : Driver(0), Shader(0)
,BackgroundSkybox(0), BackgroundCube(0)
, CubemapUpdates(2)
, CurrentStyleUVW(0), CurrentRoughness(0)
, NeedCubemapUpdate(true)
{
StyleNamesUVW.push_back( L"specular" );
StyleNamesUVW.push_back( L"diffuse" );
StyleNamesUVW.push_back( L"model coordinates" );
}
virtual bool OnEvent(const SEvent& event)
{
if (event.EventType == EET_KEY_INPUT_EVENT && event.KeyInput.PressedDown == false)
{
switch(event.KeyInput.Key )
{
case KEY_SPACE:
if ( Shader )
{
Shader->SetStyleUVW((Shader->GetStyleUVW()+1)%StyleNamesUVW.size());
updateStyleUVW();
}
break;
case KEY_KEY_B:
if ( BackgroundSkybox && BackgroundCube )
{
if ( BackgroundSkybox->isVisible() )
{
BackgroundSkybox->setVisible(false);
BackgroundCube->setVisible(true);
}
else
{
BackgroundSkybox->setVisible(true);
BackgroundCube->setVisible(false);
}
NeedCubemapUpdate = true;
}
break;
case KEY_KEY_I:
for (u32 i=0; i<InfoTextNodes.size(); ++i )
InfoTextNodes[i]->setVisible(!InfoTextNodes[i]->isVisible());
break;
case KEY_KEY_S:
if ( Driver )
{
Driver->disableFeature(video::EVDF_TEXTURE_CUBEMAP_SEAMLESS, Driver->queryFeature(video::EVDF_TEXTURE_CUBEMAP_SEAMLESS) );
updateSeamless();
}
break;
case KEY_KEY_U:
CubemapUpdates = (CubemapUpdates+1) % 3;
updateCubemapUpdates();
break;
case KEY_PLUS:
case KEY_ADD:
if ( Shader )
{
Shader->SetRoughness( Shader->getRoughness() + 0.5f );
updateRoughness();
}
break;
case KEY_MINUS:
case KEY_SUBTRACT:
{
if ( Shader )
{
float roughness = Shader->getRoughness() - 0.5f;
if ( roughness >= 0.f )
{
Shader->SetRoughness(roughness);
updateRoughness();
}
}
break;
}
default:
break;
}
}
return false;
}
void updateStyleUVW()
{
if ( CurrentStyleUVW && Shader)
CurrentStyleUVW->setText(StyleNamesUVW[Shader->GetStyleUVW()].c_str());
}
void updateRoughness()
{
if ( CurrentRoughness && Shader )
{
CurrentRoughness->setText( irr::core::stringw(Shader->getRoughness()).c_str() );
}
}
void updateSeamless()
{
if ( CurrentSeamlessCubemap && Driver )
{
CurrentSeamlessCubemap->setText( Driver->queryFeature(video::EVDF_TEXTURE_CUBEMAP_SEAMLESS) ? L"ON" : L"OFF" );
}
}
void updateCubemapUpdates()
{
if ( CurrentCubemapUpdates )
{
switch ( CubemapUpdates )
{
case 0: CurrentCubemapUpdates->setText( L"static"); break;
case 1: CurrentCubemapUpdates->setText( L"dynamic" ); break;
case 2: CurrentCubemapUpdates->setText( L"dynamic+mips" ); break;
}
}
}
int checkCubemapUpdate()
{
if ( NeedCubemapUpdate || CubemapUpdates == 2)
{
NeedCubemapUpdate = false;
return 2;
}
return CubemapUpdates;
}
void addInfoTextNode(irr::gui::IGUIFont* font, const wchar_t* text, irr::scene::ISceneNode* parent)
{
if ( parent )
{
const video::SColor infoTextCol(250, 70, 90, 90);
core::dimension2du dim(font->getDimension(text));
core::dimension2df dimf((f32)dim.Width, (f32)dim.Height);
scene::IBillboardTextSceneNode* infoNode = parent->getSceneManager()->addBillboardTextSceneNode( font, text, parent, dimf, core::vector3df(0, 120, 0), -1, infoTextCol, infoTextCol);
InfoTextNodes.push_back(infoNode);
}
}
irr::video::IVideoDriver* Driver;
CubeMapReflectionCallback* Shader;
scene::ISceneNode* BackgroundSkybox;
scene::ISceneNode* BackgroundCube;
irr::core::array<scene::ISceneNode*> InfoTextNodes;
int CubemapUpdates;
irr::core::array<irr::core::stringw> StyleNamesUVW;
irr::gui::IGUIStaticText* CurrentStyleUVW;
irr::gui::IGUIStaticText* CurrentRoughness;
irr::gui::IGUIStaticText* CurrentSeamlessCubemap;
irr::gui::IGUIStaticText* CurrentCubemapUpdates;
private:
bool NeedCubemapUpdate;
};
Workaround for OpenGL's upside-down images. Texture origins (0,0) in OpenGL are usually at the left-bottom instead of the more common left-top image formats. Irrlicht internally uses textures with left-top origin and then corrects the texture-matrices in the fixed-function pipeline. For shader materials it's left to the users to handle those UV-flips for the texture-matrix. Render target textures (RTT's) in OpenGL are rendered with left-bottom origin and Irrlicht can't change that, so all RTT textures in memory are upside-down (unlike all other Irrlicht textures). In the fixed function pipeline Irrlicht handles this by flipping the RTT's texture matrix once more and for shaders it's again left to the users to handle it. Cubemap textures are different from other textures in OpenGL. Each cube side has left-top as the origin. So not flipping Irrlicht textures for those would be fine. Except - OpenGL RTT's still render left-bottom - even when the target is a cubemap RTT. I found no good way around this so far - it just seems messed up as we get a left-handed/right handed coordinate system change that way.
+
+
So... the following 2 defines are two different workarounds I found. Both are ugly; which one is better in reality probably depends on the scene. Only use one of those: CUBEMAP_UPSIDE_DOWN_GL_PROJECTION is relatively fast as it just changes the projection matrix. The problem is that changing the projection matrix also changes front/backside culling, so every node rendered has to flip the material flags for those.
+
CUBEMAP_USPIDE_DOWN_RTT will change the texture memory itself and flip the image upside-down. While easier to do, this involves texture-locking and is very slow.
#define CUBEMAP_UPSIDE_DOWN_GL_PROJECTION
#ifdef CUBEMAP_UPSIDE_DOWN_GL_PROJECTION
void flipCullingFlags(const core::array<scene::ISceneNode*>& nodes)
{
for ( irr::u32 n=0; n < nodes.size(); ++n )
{
scene::ISceneNode* node = nodes[n];
const irr::u32 matCount = node->getMaterialCount();
for ( irr::u32 m=0; m < matCount; ++m)
{
video::SMaterial& mat = node->getMaterial(m);
mat.BackfaceCulling = !mat.BackfaceCulling;
mat.FrontfaceCulling = !mat.FrontfaceCulling;
}
}
}
#endif
Render the environment around a node into a cubemap texture.
void renderEnvironmentCubeMap(irr::video::IVideoDriver* driver, irr::scene::ICameraSceneNode* cubeMapCamera, irr::scene::ISceneNode* cubeCenterNode, video::IRenderTarget* cubeMapRT, video::ITexture* dynamicCubeMapRTT, video::ITexture* depthStencilRTT)
{
scene::ISceneManager* smgr = cubeMapCamera->getSceneManager();
scene::ICameraSceneNode * oldCam = smgr->getActiveCamera();
smgr->setActiveCamera( cubeMapCamera );
We want to see everything around the center node, so hide the node itself, otherwise it would be in the way. Then set the camera to that node's position.
cubeCenterNode->setVisible( false );
const core::vector3df center( cubeCenterNode->getAbsolutePosition() );
cubeMapCamera->setPosition( center );
Render all 6 directions, which means simply setting the camera target/up vector to each of the 6 directions and then rendering the full scene each time. So yeah - updating an environment cube-map means 6 full renders for each object which needs an environment map. In other words - you generally only want to do that in pre-processing, not in realtime.
const core::vector3df targetVecs[6] = {
core::vector3df(1.f, 0.f, 0.f),
core::vector3df(-1.f, 0.f, 0.f),
core::vector3df(0.f, 1.f, 0.f),
core::vector3df(0.f, -1.f, 0.f),
core::vector3df(0.f, 0.f, 1.f),
core::vector3df(0.f, 0.f, -1.f)
};
const core::vector3df upVecs[6] = {
core::vector3df( 0,1,0 ),
core::vector3df( 0,1,0 ),
core::vector3df( 0,0,-1 ),
core::vector3df( 0,0,1 ),
core::vector3df( 0,1,0 ),
core::vector3df( 0,1,0 )
};
for ( int s=0; s<6; ++s )
{
cubeMapCamera->setUpVector( upVecs[s] );
cubeMapCamera->setTarget( center + targetVecs[s] );
cubeMapRT->setTexture(dynamicCubeMapRTT, depthStencilRTT, (video::E_CUBE_SURFACE)(video::ECS_POSX + s));
driver->setRenderTargetEx(cubeMapRT, video::ECBF_ALL);
smgr->drawAll();
#ifdef CUBEMAP_USPIDE_DOWN_RTT
driver->setRenderTarget(0);
dynamicCubeMapRTT->lock(video::ETLM_READ_WRITE, 0, s, video::ETLF_FLIP_Y_UP_RTT);
dynamicCubeMapRTT->unlock();
#endif
}
driver->setRenderTarget(0);
cubeCenterNode->setVisible( true );
smgr->setActiveCamera( oldCam );
}
Typical setup at the main start.
int main()
{
video::E_DRIVER_TYPE driverType = driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
MyEventReceiver eventReceiver;
const core::dimension2d<u32> dimDevice(1024, 768);
IrrlichtDevice* device = createDevice( driverType, dimDevice, 32, false, false, false, &eventReceiver );
if (!device)
return 1;
const io::path mediaPath = getExampleMediaPath();
video::IVideoDriver* driver = device->getVideoDriver();
scene::ISceneManager* smgr = device->getSceneManager();
gui::IGUIEnvironment* env = device->getGUIEnvironment();
eventReceiver.Driver = driver;
core::stringw strCaption(L"Cubemap example - Irrlicht Engine [");
strCaption += driver->getName();
strCaption += L"]";
device->setWindowCaption(strCaption.c_str());
gui::IGUISkin* skin = env->getSkin();
gui::IGUIFont* font = env->getFont(mediaPath + "fonthaettenschweiler.bmp");
if (font)
skin->setFont(font);
Create a shader material for cube mapping
video::IGPUProgrammingServices* gpu = driver->getGPUProgrammingServices();
s32 cubeMapReflectionMaterial = 0;
if( gpu )
{
irr::io::path vsFileName;
irr::io::path psFileName;
switch( driverType )
{
case video::EDT_DIRECT3D9:
vsFileName = mediaPath + "cubeMapReflectionVS.hlsl";
psFileName = mediaPath + "cubeMapReflectionPS.hlsl";
break;
case video::EDT_OPENGL:
vsFileName = mediaPath + "cubeMapReflection.vert";
psFileName = mediaPath + "cubeMapReflection.frag";
break;
}
CubeMapReflectionCallback* cubeMapCB = new CubeMapReflectionCallback(smgr, 2);
cubeMapReflectionMaterial = gpu->addHighLevelShaderMaterialFromFiles(
vsFileName, "VS", video::EVST_VS_1_1,
psFileName, "PS", video::EPST_PS_3_0,
cubeMapCB, video::EMT_SOLID );
if ( cubeMapReflectionMaterial >= 0 )
eventReceiver.Shader = cubeMapCB;
cubeMapCB->drop();
}
scene::ICameraSceneNode* camera = smgr->addCameraSceneNodeFPS(0, 100.f, 1.f);
camera->setPosition( core::vector3df( 0,10,-200 ) );
device->getCursorControl()->setVisible(false);
Get 6 images forming a cubemap. The coordinate system used in those images seemed to be different from the one in Irrlicht. I decided to leave it like that because it's pretty common that way. If you get cubemap textures whose x/y/z axes seem to be named differently you'll just have to experiment until you figure out the correct order (a small experiment of that kind is sketched right after the addTextureCubemap call below).
core::array<video::IImage*> cubeMapImages;
cubeMapImages.push_back(driver->createImageFromFile( mediaPath + "cubemap_posx.jpg" ));
cubeMapImages.push_back(driver->createImageFromFile( mediaPath + "cubemap_negx.jpg" ));
cubeMapImages.push_back(driver->createImageFromFile( mediaPath + "cubemap_posy.jpg" ));
cubeMapImages.push_back(driver->createImageFromFile( mediaPath + "cubemap_negy.jpg" ));
cubeMapImages.push_back(driver->createImageFromFile( mediaPath + "cubemap_posz.jpg" ));
cubeMapImages.push_back(driver->createImageFromFile( mediaPath + "cubemap_negz.jpg" ));
Create a cubemap texture from those images. Note that 6 images become a single texture now.
video::ITexture* cubeMapStaticTex = 0;
cubeMapStaticTex = driver->addTextureCubemap("cm", cubeMapImages[0], cubeMapImages[1], cubeMapImages[2], cubeMapImages[3], cubeMapImages[4], cubeMapImages[5]);
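Should your own images come out mirrored or rotated, a quick way to experiment is to build a second test cubemap from the same images with some faces exchanged and compare the two. A purely hypothetical sketch (the texture name "cm_test" is made up, and it has to happen before the images are dropped right below):
// Hypothetical test texture: same images, but the +Z/-Z faces exchanged on purpose.
video::ITexture* cubeMapTestTex = driver->addTextureCubemap("cm_test",
cubeMapImages[0], cubeMapImages[1], // +X, -X
cubeMapImages[2], cubeMapImages[3], // +Y, -Y
cubeMapImages[5], cubeMapImages[4]); // -Z, +Z swapped to test the face order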
for ( u32 i=0; i<cubeMapImages.size(); ++i )
if ( cubeMapImages[i] )
cubeMapImages[i]->drop();
cubeMapImages.clear();
Create a render target, cubemap render-target-textures and a camera with settings for cube mapping
video::IRenderTarget* cubeMapRT = driver->addRenderTarget();
video::ITexture* dynamicCubeMapRTT = 0;
video::ITexture* depthStencilRTT = 0;
video::ITexture* dynamicCubeMapRTT_intermediate = 0;
video::ITexture* dynamicCubeMapTex = 0;
scene::ICameraSceneNode* cubeMapCamera = 0;
if( driver->queryFeature( video::EVDF_RENDER_TO_TARGET ) )
{
const u32 dynamicCubeMapSize = 512;
dynamicCubeMapRTT = driver->addRenderTargetTextureCubemap(dynamicCubeMapSize, "cube_rtr");
depthStencilRTT = driver->addRenderTargetTexture(irr::core::dimension2du(dynamicCubeMapSize, dynamicCubeMapSize), "cubemap_ds", irr::video::ECF_D24S8);
dynamicCubeMapRTT_intermediate = driver->addRenderTargetTextureCubemap(dynamicCubeMapSize, "cube_rtr");
dynamicCubeMapTex = driver->addTextureCubemap(dynamicCubeMapSize, "cube_tex");
cubeMapCamera = smgr->addCameraSceneNode();
cubeMapCamera->setFOV(core::PI* 0.5f);
cubeMapCamera->setAspectRatio(1.f);
smgr->setActiveCamera( camera );
}
Add sphere-nodes which will be using the cubemaps as materials. You may also want to experiment with other node-types here!
scene::ISceneNode* sphereNode = 0;
scene::ISceneNode* sphereNode2 = 0;
scene::ISceneNode* sphereNode3 = 0;
scene::IMesh* sphereMesh = smgr->getGeometryCreator()->createSphereMesh(100.f);
if( sphereMesh )
{
sphereNode = smgr->addMeshSceneNode( sphereMesh );
sphereNode->setPosition( core::vector3df(-250,0,0) );
sphereNode->updateAbsolutePosition();
sphereNode->setMaterialFlag( video::EMF_LIGHTING, false );
sphereNode->setMaterialTexture( 0, dynamicCubeMapRTT );
sphereNode->setMaterialType( (video::E_MATERIAL_TYPE)cubeMapReflectionMaterial );
eventReceiver.addInfoTextNode(font, L"Cubemap dynamic rtt, no mip-maps", sphereNode);
if ( dynamicCubeMapTex )
{
sphereNode3 = smgr->addMeshSceneNode( sphereMesh );
sphereNode3->setPosition( core::vector3df(0,0,250) );
sphereNode3->updateAbsolutePosition();
sphereNode3->setMaterialFlag( video::EMF_LIGHTING, false );
sphereNode3->setMaterialTexture( 0, dynamicCubeMapTex );
sphereNode3->getMaterial(0).TextureLayer[0].TrilinearFilter = false;
sphereNode3->setMaterialType( (video::E_MATERIAL_TYPE)cubeMapReflectionMaterial );
eventReceiver.addInfoTextNode(font, L"Cubemap dynamic with mip-maps", sphereNode3);
}
if ( cubeMapStaticTex )
{
sphereNode2 = smgr->addMeshSceneNode( sphereMesh );
sphereNode2->setPosition( core::vector3df(250,0,0) );
sphereNode2->updateAbsolutePosition();
sphereNode2->setMaterialFlag( video::EMF_LIGHTING, false );
sphereNode2->setMaterialTexture( 0, cubeMapStaticTex );
sphereNode2->getMaterial(0).TextureLayer[0].TrilinearFilter = true;
sphereNode2->setMaterialType( (video::E_MATERIAL_TYPE)cubeMapReflectionMaterial );
eventReceiver.addInfoTextNode(font, L"Cubemap fixed images", sphereNode2);
}
sphereMesh->drop();
}
Add some background which will show up in the environment maps. For the first one we use the same textures as used on the spheres. Note that the difference between a skybox and a cubemap is that the skybox really uses 6 different textures, while the cubemap uses a single texture created from 6 images.
eventReceiver.BackgroundSkybox = smgr->addSkyBoxSceneNode(
driver->getTexture(mediaPath + "cubemap_posy.jpg"),
driver->getTexture(mediaPath + "cubemap_negy.jpg"),
driver->getTexture(mediaPath + "cubemap_posz.jpg"),
driver->getTexture(mediaPath + "cubemap_negz.jpg"),
driver->getTexture(mediaPath + "cubemap_posx.jpg"),
driver->getTexture(mediaPath + "cubemap_negx.jpg"));
Another background for comparison and to make it more obvious when the spheres reflect the environment and when they use static cubemaps.
scene::IMesh * cubeMesh = smgr->getGeometryCreator()->createCubeMesh( core::vector3df(10.f, 10.f, 10.f), scene::ECMT_6BUF_4VTX_NP);
smgr->getMeshManipulator()->scale(cubeMesh, core::vector3df(-1, 1, 1));
if( cubeMesh )
{
smgr->getMeshManipulator()->setVertexColors( cubeMesh->getMeshBuffer(0), video::SColor(255, 240, 10, 10) );
smgr->getMeshManipulator()->setVertexColors( cubeMesh->getMeshBuffer(1), video::SColor(255, 240, 130, 10) );
smgr->getMeshManipulator()->setVertexColors( cubeMesh->getMeshBuffer(2), video::SColor(255, 50, 250, 10) );
smgr->getMeshManipulator()->setVertexColors( cubeMesh->getMeshBuffer(3), video::SColor(255, 70, 10, 250) );
smgr->getMeshManipulator()->setVertexColors( cubeMesh->getMeshBuffer(4), video::SColor(255, 240, 250, 10) );
smgr->getMeshManipulator()->setVertexColors( cubeMesh->getMeshBuffer(5), video::SColor(255, 85, 250, 250) );
eventReceiver.BackgroundCube = smgr->addMeshSceneNode( cubeMesh );
cubeMesh->drop();
eventReceiver.BackgroundCube->setScale( core::vector3df( 200, 200, 200 ) );
eventReceiver.BackgroundCube->setMaterialFlag( video::EMF_LIGHTING, false );
eventReceiver.BackgroundCube->setVisible(false);
}
#ifdef CUBEMAP_UPSIDE_DOWN_GL_PROJECTION
if ( driverType == video::EDT_OPENGL )
{
core::matrix4 matProj = cubeMapCamera->getProjectionMatrix();
matProj[4] = -matProj[4];
matProj[5] = -matProj[5];
matProj[6] = -matProj[6];
matProj[7] = -matProj[7];
cubeMapCamera->setProjectionMatrix(matProj);
}
#endif
Add a moving node to show the difference between static and dynamic environment maps.
scene::IMeshSceneNode * movingNode = smgr->addCubeSceneNode(30.f);
movingNode->getMaterial(0).Lighting = false;
smgr->getMeshManipulator()->setVertexColors( movingNode->getMesh()->getMeshBuffer(0), video::SColor(255, 230, 200, 150));
scene::ISceneNodeAnimator* circleAnimator = smgr->createFlyCircleAnimator(core::vector3df(-125, -50.f, 125), 300.f, 0.0005f);
movingNode->addAnimator(circleAnimator);
circleAnimator->drop();
Add some UI
if ( eventReceiver.Shader )
{
skin->setColor(gui::EGDC_3D_FACE, video::SColor(50, 160, 120, 120));
u32 top = dimDevice.Height - 200;
const u32 left = dimDevice.Width - 350;
const u32 right = dimDevice.Width - 10;
irr::gui::IGUIStaticText * stextUVW = env->addStaticText(L" Style of generating texture coordinates:\n Change with (space)", core::recti(left, top, right, top+35), false, true, 0, -1, true);
top += 40;
stextUVW->setTextAlignment(gui::EGUIA_UPPERLEFT, gui::EGUIA_UPPERLEFT);
eventReceiver.CurrentStyleUVW = env->addStaticText(L"", core::recti(240,0, 400, 20), false, false, stextUVW);
eventReceiver.updateStyleUVW();
irr::gui::IGUIStaticText * stextRoughness = env->addStaticText(L" Roughness:\n Change with (+) and (-)", core::recti(left, top, right, top+35), false, true, 0, -1, true);
top += 40;
eventReceiver.CurrentRoughness = env->addStaticText( L"", core::recti(240,0, 400, 20), false, false, stextRoughness);
eventReceiver.updateRoughness();
irr::gui::IGUIStaticText * stextSeamlessCubemap = env->addStaticText(L" Seamless cubemap (with roughness):\n Change with (s)", core::recti(left, top, right, top+35), false, true, 0, -1, true);
top += 40;
eventReceiver.CurrentSeamlessCubemap = env->addStaticText( L"", core::recti(240,0, 400, 20), false, false, stextSeamlessCubemap);
eventReceiver.updateSeamless();
irr::gui::IGUIStaticText * stextUpdates = env->addStaticText(L" Cubemap updates:\n Change with (u)", core::recti(left, top, right, top+35), false, true, 0, -1, true);
top += 40;
eventReceiver.CurrentCubemapUpdates = env->addStaticText( L"", core::recti(240,0, 400, 20), false, false, stextUpdates);
eventReceiver.updateCubemapUpdates();
env->addStaticText(L" Change background with (b)", core::recti(left, top, right, top+15), false, true, 0, -1, true);
top += 20;
env->addStaticText(L" Show/hide info nodes with (i)", core::recti(left, top, right, top+15), false, true, 0, -1, true);
}
Main loop
while(device->run())
{
if (device->isWindowActive())
{
driver->beginScene(true, true, video::SColor(255, 127, 127, 255));
Check if we want to update the environment maps. Usually not something you'll do every frame, but rather once at the start or maybe updating an environment map once in a while.
int updateCubemaps = eventReceiver.checkCubemapUpdate();
if( dynamicCubeMapRTT && sphereNode && updateCubemaps > 0 )
{
#ifdef CUBEMAP_UPSIDE_DOWN_GL_PROJECTION
core::array<scene::ISceneNode*> allNodes;
if ( driverType == video::EDT_OPENGL )
{
Flipping the projection matrix also flips front/back face culling. We only have a skybox, so in this case it would still be fast; with more objects it gets uglier.
smgr->getSceneNodesFromType(scene::ESNT_ANY, allNodes);
flipCullingFlags(allNodes);
}
#endif
If the cubemaps are rendered just once then this node still has a white (or even undefined) texture at this point. Just hiding it and rendering the background when rendering the cubemap for the other node is less noticeable than having a big white dot in the environment texture. Render order can matter if you want several environment maps in your scene.
if (sphereNode3)
sphereNode3->setVisible(false);
renderEnvironmentCubeMap(driver, cubeMapCamera, sphereNode, cubeMapRT, dynamicCubeMapRTT, depthStencilRTT);
if ( sphereNode3)
{
if ( updateCubemaps == 2 )
{
Our RTTs unfortunately don't have mipmaps (not sure if we can get that somehow). So if we want mipmaps in the dynamic cubemap we have to copy it to a non-RTT texture. Warning: very, very slow. Far slower than just creating an environment map, as this copies the texture from GPU to main memory, copies it into a new texture, creates mip-maps and uploads the result back to the GPU.
renderEnvironmentCubeMap(driver, cubeMapCamera, sphereNode3, cubeMapRT, dynamicCubeMapRTT_intermediate, depthStencilRTT);
for ( int i=0; i<6; ++i)
{
void * rtData = dynamicCubeMapRTT_intermediate->lock(video::ETLM_READ_ONLY, 0, i, video::ETLF_NONE);
void * tData = dynamicCubeMapTex->lock(video::ETLM_READ_WRITE, 0, i);
memcpy(tData, rtData, dynamicCubeMapTex->getPitch()*dynamicCubeMapTex->getSize().Width);
dynamicCubeMapRTT_intermediate->unlock();
dynamicCubeMapTex->unlock();
dynamicCubeMapTex->regenerateMipMapLevels();
}
}
sphereNode3->setVisible(true);
}
#ifdef CUBEMAP_UPSIDE_DOWN_GL_PROJECTION
if ( driverType == video::EDT_OPENGL )
{
flipCullingFlags(allNodes);
}
#endif
}
smgr->drawAll();
env->drawAll();
driver->endScene();
}
}
device->drop();
return 0;
}
diff --git a/examples/30.Profiling/tutorial.html b/examples/30.Profiling/tutorial.html
new file mode 100644
index 00000000..eb5e8ec9
--- /dev/null
+++ b/examples/30.Profiling/tutorial.html
@@ -0,0 +1,209 @@
+Tutorial 30: Profiling
Profiling is used to get runtime information about your code.
+
There exist several independent profiling tools. Examples of free profilers are "gprof" for the GNU toolchain and "very sleepy" from codersnotes for Windows. Proprietary tools are for example "VTune" from Intel or the "AMD APP Profiler". Those tools work by regularly sampling the running application to get statistical information about the called functions. The way to use them is to compile your application with special flags to include profiling information (some also work with debug information). They might also allow profiling only certain parts of the code, although most can't do that. The sampling is usually rather time-consuming, which means the application will be very slow while collecting the profiling data. It's often useful to start with one of those tools to get an overview of the bottlenecks in your application. Those tools have the advantage that they don't need any modifications inside the code.
+
Once you need to dig deeper, the Irrlicht profiler can help you. It works nearly like a stopwatch: you add start/stop blocks into the parts of your code which you need to check, and the Irrlicht profiler will then give you the exact execution times for those parts. And unlike general profiler tools you don't just get average information about the run-time but also worst cases, which tends to be the information you really need for a stable framerate. Also the Irrlicht profiler has a low overhead and affects only the areas which you want to time, so you can profile applications at nearly original speed.
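To make that concrete, here is a minimal sketch of the stopwatch pattern used throughout this example (the scope name and group are made up; in a real program you would wrap the profiling calls in a define as shown further below):
#include <irrlicht.h>
using namespace irr;
void doSomethingExpensive()
{
// Scoped stopwatch: timing starts when the object is constructed and stops when the scope ends.
// The result shows up under the (made up) group "grp demo".
CProfileScope timed(L"something expensive", L"grp demo");
// ... the code you want to time ...
}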
+
Irrlicht itself has such profiling information, which is useful to figure out where the runtime inside the engine is spent. To get that profiling data you need to recompile Irrlicht with IRR_COMPILE_WITH_PROFILING enabled, as collecting profiling information is disabled by default for speed reasons.
It's usually a good idea to wrap all your profiling code in a define. That way you don't have to worry too much about the runtime the profiling itself takes, and you can remove the profiling code completely when you release the software by removing a single define. Or sometimes you might want to have several such defines for different areas of your application code.
#define ENABLE_MY_PROFILE // comment out to remove the profiling code
#ifdef ENABLE_MY_PROFILE
#define MY_PROFILE(X) X
#else
#define MY_PROFILE(X)
#endif // ENABLE_MY_PROFILE
#include <irrlicht.h>
#include "driverChoice.h"
#include "exampleHelper.h"
#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif
using namespace irr;
using namespace core;
using namespace scene;
using namespace video;
using namespace io;
using namespace gui;
We have the choice between working with fixed and with automatic profiling IDs. Here are some fixed IDs we will be using.
enum EProfiles
{
EP_APP_TIME_ONCE,
EP_APP_TIME_UPDATED,
EP_SCOPE1,
EP_SCOPE2,
EP_DRAW_SCENE
};
enum EScenes
{
ES_NONE,
ES_CUBE,
ES_QUAKE_MAP,
ES_DWARVES,
ES_COUNT
};
Controlling the profiling display is application-specific behavior. In our case we use function keys and play around with all the parameters. In real applications you will likely only need something to make the profiling display visible/invisible and to switch pages, while the parameters can be set to fixed values.
class MyEventReceiver : public IEventReceiver
{
public:
MyEventReceiver(ISceneManager * smgr) : GuiProfiler(0), IncludeOverview(true), IgnoreUncalled(false), ActiveScene(ES_NONE), SceneManager(smgr) {}
virtual bool OnEvent(const SEvent& event)
{
if (event.EventType == EET_KEY_INPUT_EVENT)
{
if ( event.KeyInput.PressedDown )
{
Catching keys to control the profiling display and the profiler itself
switch ( event.KeyInput.Key )
{
case KEY_F1:
GuiProfiler->setVisible( !GuiProfiler->isVisible() );
break;
case KEY_F2:
GuiProfiler->nextPage(IncludeOverview);
break;
case KEY_F3:
GuiProfiler->previousPage(IncludeOverview);
break;
case KEY_F4:
GuiProfiler->firstPage(IncludeOverview);
break;
case KEY_F5:
IncludeOverview = !IncludeOverview;
GuiProfiler->firstPage(IncludeOverview);
break;
case KEY_F6:
You can set more filters. This one filters out profile data which was never called.
IgnoreUncalled = !IgnoreUncalled;
GuiProfiler->setFilters(IgnoreUncalled ? 1 : 0, 0, 0.f, 0);
break;
case KEY_F7:
GuiProfiler->setShowGroupsTogether( !GuiProfiler->getShowGroupsTogether() );
break;
case KEY_F8:
NextScene();
break;
case KEY_F9:
{
u32 index = 0;
if ( getProfiler().findGroupIndex(index, L"grp runtime") )
{
getProfiler().resetGroup(index);
}
}
break;
case KEY_F10:
{
u32 index = 0;
if ( getProfiler().findDataIndex(index, L"scope 3") )
{
getProfiler().resetDataByIndex(index);
}
}
break;
case KEY_F11:
getProfiler().resetAll();
break;
case KEY_KEY_F:
GuiProfiler->setFrozen(!GuiProfiler->getFrozen());
break;
default:
break;
}
}
}
return false;
}
Some example scenes so we have something to profile
void NextScene()
{
SceneManager->clear();
ActiveScene = (ActiveScene+1) % ES_COUNT;
if ( ActiveScene == 0 )
ActiveScene = ActiveScene+1;
switch ( ActiveScene )
{
case ES_CUBE:
{
Simple scene with cube and light.
MY_PROFILE(CProfileScope p(L"cube", L"grp switch scene");)
SceneManager->addCameraSceneNode (0, core::vector3df(0, 0, 0),
core::vector3df(0, 0, 100),
-1);
SceneManager->addCubeSceneNode (30.0f, 0, -1,
core::vector3df(0, 20, 100),
core::vector3df(45, 45, 45),
core::vector3df(1.0f, 1.0f, 1.0f));
SceneManager->addLightSceneNode(0, core::vector3df(0, 0, 0),
video::SColorf(1.0f, 1.0f, 1.0f),
100.0f);
}
break;
case ES_QUAKE_MAP:
{
Our typical Irrlicht example quake map.
MY_PROFILE(CProfileScope p(L"quake map", L"grp switch scene");)
scene::IAnimatedMesh* mesh = SceneManager->getMesh("20kdm2.bsp");
scene::ISceneNode* node = 0;
if (mesh)
node = SceneManager->addOctreeSceneNode(mesh->getMesh(0), 0, -1, 1024);
if (node)
node->setPosition(core::vector3df(-1300,-144,-1249));
SceneManager->addCameraSceneNodeFPS();
}
break;
case ES_DWARVES:
{
Stress-test Irrlicht a little bit by creating many objects.
MY_PROFILE(CProfileScope p(L"dwarfes", L"grp switch scene");)
scene::IAnimatedMesh* aniMesh = SceneManager->getMesh( getExampleMediaPath() + "dwarf.x" );
if (aniMesh)
{
scene::IMesh * mesh = aniMesh->getMesh (0);
if ( !mesh )
break;
You can never have too many dwarves. So let's make some.
const int nodesX = 30;
const int nodesY = 5;
const int nodesZ = 30;
aabbox3df bbox = mesh->getBoundingBox();
vector3df extent = bbox.getExtent();
const f32 GAP = 10.f;
f32 halfSizeX = 0.5f * (nodesX*extent.X + GAP*(nodesX-1));
f32 halfSizeY = 0.5f * (nodesY*extent.Y + GAP*(nodesY-1));
f32 halfSizeZ = 0.5f * (nodesZ*extent.Z + GAP*(nodesZ-1));
for ( int x = 0; x < nodesX; ++x )
{
irr::f32 gapX = x > 0 ? (x-1)*GAP : 0.f;
irr::f32 posX = -halfSizeX + x*extent.X + gapX;
for ( int y = 0; y < nodesY; ++y )
{
irr::f32 gapY = y > 0 ? (y-1)*GAP : 0.f;
irr::f32 posY = -halfSizeY + y*extent.Y + gapY;
for ( int z=0; z < nodesZ; ++z )
{
irr::f32 gapZ = z > 0 ? (z-1)*GAP : 0.f;
irr::f32 posZ = -halfSizeZ + z*extent.Z + gapZ;
scene::IAnimatedMeshSceneNode * node = SceneManager->addAnimatedMeshSceneNode(aniMesh, NULL, -1, vector3df(posX, posY, posZ) );
node->setMaterialFlag(video::EMF_LIGHTING, false);
}
}
}
irr::scene::ICameraSceneNode * camera = SceneManager->addCameraSceneNodeFPS(0, 20.f, 0.1f );
camera->updateAbsolutePosition();
camera->setTarget( vector3df(0,0,0) );
camera->updateAbsolutePosition();
camera->setPosition(irr::core::vector3df(halfSizeX+extent.X, halfSizeY+extent.Y, halfSizeZ+extent.Z));
camera->updateAbsolutePosition();
}
}
break;
}
}
IGUIProfiler * GuiProfiler;
bool IncludeOverview;
bool IgnoreUncalled;
u32 ActiveScene;
scene::ISceneManager* SceneManager;
};
void recursive(int recursion)
{
As the profiler internally uses counters for start/stop and only takes profile data when that counter reaches zero, all recursions are counted as a single call. If you want to profile each call on its own you have to use explicit start/stop calls and stop the profile ID right before the recursive call; a sketch of that alternative follows right after this function.
MY_PROFILE(CProfileScope p3(L"recursive", L"grp runtime");)
if (recursion > 0 )
recursive(recursion-1);
}
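Here is a sketch of that alternative (the ID EP_RECURSION_LEVEL and the function are made up and not part of this example): register the ID once, then stop it right before recursing so every level is counted as its own call.
// Hypothetical extra ID; register it once near the other add() calls in main(), e.g.
// MY_PROFILE(getProfiler().add(EP_RECURSION_LEVEL, L"recursion level", L"grp runtime");)
enum { EP_RECURSION_LEVEL = EP_DRAW_SCENE + 1 };
void recursivePerCall(int recursion)
{
MY_PROFILE(getProfiler().start(EP_RECURSION_LEVEL);)
// ... work done at this recursion level ...
MY_PROFILE(getProfiler().stop(EP_RECURSION_LEVEL);) // stop *before* the recursive call
if (recursion > 0)
recursivePerCall(recursion-1);
}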
int main()
{
Setup, nothing special here.
video::E_DRIVER_TYPE driverType=driverChoiceConsole();
if (driverType==video::EDT_COUNT)
return 1;
The profiler is independent of the device, so we can even time the device setup.
MY_PROFILE(s32 pDev = getProfiler().add(L"createDevice", L"grp runtime");)
MY_PROFILE(getProfiler().start(pDev);)
IrrlichtDevice * device = createDevice(driverType, core::dimension2d<u32>(640, 480));
if (device == 0)
{
When working with start/stop you should add a stop to all exit paths, although in this case it wouldn't matter as we don't do anything with the data when we quit here.
MY_PROFILE(getProfiler().stop(pDev);)
return 1;
}
MY_PROFILE(getProfiler().stop(pDev);)
video::IVideoDriver* driver = device->getVideoDriver();
IGUIEnvironment* env = device->getGUIEnvironment();
scene::ISceneManager* smgr = device->getSceneManager();
const io::path mediaPath = getExampleMediaPath();
A map we use for one of our test-scenes.
device->getFileSystem()->addFileArchive(mediaPath + "map-20kdm2.pk3");
MyEventReceiver receiver(smgr);
device->setEventReceiver(&receiver);
receiver.NextScene();
Show some info about the controls used in this example
IGUIStaticText * staticText = env->addStaticText(
L"<F1> to show/hide the profiling display\n"
L"<F2> to show the next page\n"
L"<F3> to show the previous page\n"
L"<F4> to show the first page\n"
L"<F5> to flip between including the group overview\n"
L"<F6> to flip between ignoring and showing uncalled data\n"
L"<F7> to flip between showing 1 group per page or all together\n"
L"<F8> to change our scene\n"
L"<F9> to reset the \"grp runtime\" data\n"
L"<F10> to reset the scope 3 data\n"
L"<F11> to reset all data\n"
L"<f> to freeze/unfreeze the display\n"
, recti(10,10, 250, 140), true, true, 0, -1, true);
staticText->setWordWrap(false);
IGUIProfiler can be used to show active profiling data at runtime.
receiver.GuiProfiler = env->addProfilerDisplay(core::recti(40, 140, 600, 470));
receiver.GuiProfiler->setDrawBackground(true);
Get a monospaced font - it's nicer when working with rows of numbers.
IGUIFont* font = env->getFont(mediaPath + "fontcourier.bmp");
if (font)
receiver.GuiProfiler->setOverrideFont(font);
Adding IDs has to be done before the start/stop calls. This allows start/stop to be really fast while we still have nice information like names and groups. Groups are created automatically each time an ID with a new group name is added. Groups exist to sort the display data in a nicer way.
MY_PROFILE(
getProfiler().add(EP_APP_TIME_ONCE, L"full time", L"grp runtime");
getProfiler().add(EP_APP_TIME_UPDATED, L"full time updated", L"grp runtime");
getProfiler().add(EP_SCOPE1, L"scope 1", L"grp runtime");
getProfiler().add(EP_DRAW_SCENE, L"draw scene", L"grp runtime");
)
Two timers which run the whole time. One will be continuously updated, the other won't.
MY_PROFILE(getProfiler().start(EP_APP_TIME_ONCE);)
MY_PROFILE(getProfiler().start(EP_APP_TIME_UPDATED);)
s32 lastFPS = -1;
while(device->run() && driver)
{
if (device->isWindowActive())
{
For comparison show the FPS in the title bar
s32 fps = driver->getFPS();
if (lastFPS != fps)
{
core::stringw str = L"FPS: ";
str += fps;
device->setWindowCaption(str.c_str());
lastFPS = fps;
}
Times are only updated on stop() calls. So if we want a long-running timer to update we have to stop() and start() it in between. Note that this will also update the call-counter and is rarely needed.
MY_PROFILE(getProfiler().stop(EP_APP_TIME_UPDATED);)
MY_PROFILE(getProfiler().start(EP_APP_TIME_UPDATED);)
The following CProfileScope objects will all do the same thing: they measure the time this loop takes. They call start() when the object is created and stop() when it is destroyed.
+
The first one creates an ID on its first call and will do constant string comparisons for the name. It's the slowest, but most comfortable solution. Use it when you just need to run a quick check without the hassle of setting up IDs.
MY_PROFILE(CProfileScope p3(L"scope 3", L"grp runtime");)
The second CProfileScope solution will create a data block on its first call, so it's a little bit slower on the first run. But usually that's hardly noticeable.
MY_PROFILE(CProfileScope p2(EP_SCOPE2, L"scope 2", L"grp runtime");)
The last CProfileScope solution is the fastest one, but you must add the ID before you can use it like that.
MY_PROFILE(CProfileScope p1(EP_SCOPE1));
Call a recursive function to show how the profiler only counts it once.
recursive(5);
driver->beginScene(video::ECBF_COLOR | video::ECBF_DEPTH, SColor(0,200,200,200));
If you want to profile only some lines and not a complete scope then you have to work with start() and stop() calls.
MY_PROFILE(getProfiler().start(EP_DRAW_SCENE);)
smgr->drawAll();
MY_PROFILE(getProfiler().stop(EP_DRAW_SCENE);)
If it doesn't matter that the profiler takes some time you can also be lazy and create IDs automatically on the spot:
MY_PROFILE(s32 pEnv = getProfiler().add(L"draw env", L"grp runtime");)
MY_PROFILE(getProfiler().start(pEnv);)
env->drawAll();
MY_PROFILE(getProfiler().stop(pEnv);)
driver->endScene();
}
}
Shutdown.
The profiler is independent of the device, so we can still work with it.
MY_PROFILE(getProfiler().stop(EP_APP_TIME_UPDATED));
MY_PROFILE(getProfiler().stop(EP_APP_TIME_ONCE));
Print a complete overview of the profiling data to the console.
MY_PROFILE(core::stringw output);
MY_PROFILE(getProfiler().printAll(output));
MY_PROFILE(printf("%s", core::stringc(output).c_str() ));
return 0;
}