Function Call Overhead in Python

5 comments, last by c t o a n 20 years, 10 months ago
Ok, I'm just wondering what the function call overhead is like in Python. I'm guessing that calling from C++ into Python carries roughly the same overhead as a normal C++-to-C++ call (since it's just C++ calling into the Python DLL, which is itself C++), but what about Python to Python? Is it anywhere near the time a call takes in C++?
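
Just to put a number on the Python-to-Python side, I was thinking of timing it with something like this (untested sketch, using the standard timeit module from 2.3):

import timeit

setup = "def noop(): pass"

# With number=1000000, the total time in seconds is also the per-call cost in microseconds.
pass_only = timeit.Timer("pass").timeit(1000000)
with_call = timeit.Timer("noop()", setup).timeit(1000000)

print("1,000,000 empty statements: %.3f s" % pass_only)
print("1,000,000 empty calls:      %.3f s" % with_call)
print("per-call overhead: roughly %.2f microseconds" % (with_call - pass_only))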
Chris Pergrossi
My Realm | "Good Morning, Dave"
Remember that every Python call invokes several C++ function calls in the interpreter (I don't have any idea how it works, but I imagine it would parse the bytecode, allocate a new PyObject*, parse the PyObject*'s passed to the Python function as arguments, and error check before creating a return object from C++ values).

That's quite a bit of overhead.

I guess you would see this when you write your own extension module without using Boost or SWIG: all your Python wrapper functions have something like 10 function calls and a few conditionals each.

edit:
http://cascade-sys.com/~jb/Pythonetics/callcost/
That experiment cites a 50x+ performance loss, probably mostly memory allocation of PyObject*'s.

Of course, it probably wouldn't make a noticeable difference in a game where the bottleneck is rendering and logic calculations like pathfinding/AI (unless you did these in Python; you should probably do them in C++ and wrap them with Boost).
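
For a rough feel of the gap, something like this (untested, just a sketch in the spirit of the test on that page) compares doing the work inline against routing it through a Python-level call each time:

import time

def add(a, b):
    return a + b

def inline_loop(n):
    total = 0
    for i in range(n):
        total = total + i      # work done directly in the loop body
    return total

def call_loop(n):
    total = 0
    for i in range(n):
        total = add(total, i)  # same work, but through a function call every iteration
    return total

n = 1000000

start = time.time()
inline_loop(n)
print("inline: %.3f s" % (time.time() - start))

start = time.time()
call_loop(n)
print("calls:  %.3f s" % (time.time() - start))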

[edited by - thedustbustr on August 13, 2003 11:04:47 AM]
quote: (I don't have any idea how it works, but I imagine it would parse the bytecode, allocate a new PyObject*, parse the PyObject*'s passed to the Python function as arguments, and error check before creating a return object from C++ values)


Bytecode doesn't need to be parsed, just read. Essentially, it works by reading a byte, which indicates an operation, then reading a number of ids (which I assume are a constant number of bytes) that correspond to the names of Python objects, and then performing the operation on those objects. When the operation is a call, it needs the id of the function object and the ids of the arguments of the call (syntax errors like not enough arguments are caught during parsing). The actual call does the following:
* Store the current frame (set of variables in scope). This is done implicitly by the system stack.
* Create a new frame.
* Change the file pointer to point to the beginning of the function within the bytecode file.

So the overhead of a function call is mostly the overhead of creating that new frame's dict (so the cost is mainly zeroing out its initial allocation, whatever size that is).
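
You can see the shape of this with the standard dis module; the exact opcode names vary between versions, but a call is basically "load the function, load the arguments, one call opcode":

import dis

def add(a, b):
    return a + b

def caller():
    return add(1, 2)

dis.dis(caller)
# Prints something along the lines of (names differ by version):
#   LOAD_GLOBAL    add
#   LOAD_CONST     1
#   LOAD_CONST     2
#   CALL_FUNCTION  2    (just "CALL" in newer interpreters)
#   RETURN_VALUE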
--- New info keeps brain running; must gas up!
Python function call overhead is considered to be quite large. For example, many Python advocates prefer to alter a list so that it reflects the sorting criteria you need, sort it using the default method, and then 'unalter' it again. This is called the Decorate-Sort-Undecorate (DSU) pattern, and is preferred because the built-in lexicographic comparison is implemented in C and is a lot faster than making thousands of calls to a custom Python comparison function to sort the list.
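
A quick sketch of DSU, sorting strings by length without ever passing a Python comparison function to sort():

items = ["pear", "banana", "fig", "apple"]

# Decorate: pair each item with the key you actually want to sort on.
decorated = [(len(item), item) for item in items]

# Sort: the default tuple comparison runs entirely in C.
decorated.sort()

# Undecorate: strip the keys off again.
items = [item for (key, item) in decorated]

print(items)   # ['fig', 'pear', 'apple', 'banana']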

I don't think the cost is high enough to want to avoid functions generally, but it may be a consideration when you have to choose between doing something 10,000 times inline (or via the default, C-implemented way) and making 10,000 calls to a Python function.

[ MSVC Fixes | STL Docs | SDL | Game AI | Sockets | C++ Faq Lite | Boost
Asking Questions | Organising code files | My stuff | Tiny XML | STLPort]
I've noticed the overhead DOES exist, because my kernel (systems manager) uses scripts to define all actions inside the engine. For example, my engine will call out to a script (a single function) that has all the code to "Update Window", but all that code is written in C++. Then the kernel will call some other script which would "Update Model", then still later call another script that would "Render Model". Does this appear to be too slow? It also appears as though you guys conflict in your responses, where Kylotan states the overhead is relatively large, but Flarelocke seems to dismiss it as a minor inconvenience... I'll have to wait until I get into the more 'meaty' details on my engine before I can decide whether this is an acceptable solution (the whole kernel-calling-scripts thing). Thanks guys
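
To give a concrete picture, the setup is roughly like this (heavily simplified; the names here are just made up for the post, not my actual engine code):

class Kernel:
    def __init__(self):
        self.tasks = []              # Python callables registered by scripts

    def register(self, task):
        self.tasks.append(task)

    def run_frame(self):
        # One Python-level call per task per frame; with a handful of tasks
        # this is only a few hundred calls per second.
        for task in self.tasks:
            task()

# Script side: thin Python functions that immediately hand off to the C++ engine.
def update_window():
    pass    # the real version calls into C++ here

def update_model():
    pass

def render_model():
    pass

kernel = Kernel()
for task in (update_window, update_model, render_model):
    kernel.register(task)
kernel.run_frame()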

Chris Pergrossi
My Realm | "Good Morning, Dave"
It's not a minor inconvenience, sorry if I gave you that impression. It's just that I don't do anything time-critical in Python, but I had an occasion to glance through that portion of the Python interpreter code. I was just giving information on what causes the overhead, so you could guess how much it would impact a script.

If psyco can be used (the hooks exist in 2.2.3 and 2.3), commonly used functions are compiled for commonly used types, and in those cases the function call overhead is the system function call overhead, plus a one-time compilation cost. See here for more information about psyco's performance enhancement.
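
Basic usage is along these lines, as far as I know (psyco is x86-only and targets the interpreter versions mentioned above, so take this as a sketch rather than gospel):

import psyco

def hot_function(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

# Specialise just the functions you know are hot...
psyco.bind(hot_function)

# ...or let psyco profile the program and compile whatever looks worthwhile:
# psyco.full()

print(hot_function(100000))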
--- New info keeps brain running; must gas up!
The real question is, how many function calls are you making? Can you profile your code? Really you should just consider a function call to constitute a reasonably complex statement in itself, that's all. It's a large overhead compared to C, but a small cost compared to many other one-line bits of code you might write in Python.
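
The standard profile module will give you the call counts and per-call times to base that decision on; something like this (a sketch only, with a made-up stand-in for your per-frame work):

import profile

def simulate_frame():
    total = 0
    for i in range(1000):
        total += i
    return total

def main():
    for frame in range(100):
        simulate_frame()

# Prints a table with ncalls / tottime / percall for every function involved.
profile.run("main()")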
