[geeklog-devel] Pear DB Overhead

Tony Bibbs tony at tonybibbs.com
Thu Feb 6 22:03:42 EST 2003

Tom Willett wrote:
> That was one point that I brought up as a way to speed things up.  One of the 
> things about the current geeklog setup that contributes (maybe the major 
> cause) to the slowness is the kitchen sink style approach where everything 
> is loaded for every page.  Tell me another approach will be used in GL2.

Correct, couldn't agree more.  In fact, I'll take it a step further: we 
don't need to load a bunch of code into memory with each request.  For 
example, lib-common.php is nice, but it is loaded by literally every 
page and only a fraction of the code may be used.  Everything I have 
designed so far for GL2 uses the model-view-controller design pattern 
in a way that only the specific commands needed to handle the current 
request are loaded.

> This is what is needed.

Right, having a session that can persist all kinds of data and retrieve 
it all with one DB call is a great advantage.  We just need to balance 
that new convenience against the fact that if we abuse this feature our 
sessions could grow monstrous, degrading performance (this is one of 
the things my Java developers tend to abuse).
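To make the trade-off concrete, here is a minimal sketch, assuming a hypothetical gl_sessions table with a serialized data column and a PEAR DB-style handle exposing getOne() and query() (none of this is actual GL2 code):

```php
<?php
// One DB call loads everything the request might need from the session.
function loadSession($db, $sessId)
{
    $data = $db->getOne('SELECT data FROM gl_sessions WHERE sess_id = ?',
                        array($sessId));
    return $data !== null ? unserialize($data) : array();
}

// But the whole blob is rewritten on every save, so a session that is
// used as a dumping ground grows without bound -- the abuse to avoid.
function saveSession($db, $sessId, $data)
{
    $db->query('UPDATE gl_sessions SET data = ? WHERE sess_id = ?',
               array(serialize($data), $sessId));
}
```

The single fetch is what wins us the reduced call count; keeping the serialized array small is what keeps that win from turning into a liability.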

> At least as far as database calls go.  I come from the days of writing 
> assembly language that modified itself -- a real mess to debug and 
> maintain.  I will take well structured readable code to short hacks any day.

Assembly...man that brings back memories. Let's both be glad we don't 
talk to hardware at that level anymore ;-)

> I think opening up geeklog to use by other databases is great -- I, however, 
> wasn't aware of the performance penalty that would be incurred.  I just 
> think that we should work doubly hard to reduce the number of required 
> database calls by building structures to cache the data we do retrieve.

Right.  A good way to do some of this is to use static variables. 
In fact, using static variables in 1.3.x could greatly reduce the number 
of calls to DB_count.
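Something like the following would do it (cachedCount() is a hypothetical helper, and I'm assuming DB_count's 1.3.x signature of table plus optional id/value; check the real one before using):

```php
<?php
// Sketch: a function-level static caches DB_count() results, so
// repeated calls with the same arguments during one request hit the
// database only once.

function cachedCount($table, $id = '', $value = '')
{
    static $cache = array();

    $key = $table . '|' . $id . '|' . $value;
    if (!isset($cache[$key])) {
        // Only the first call for a given table/criteria pays the DB cost.
        $cache[$key] = DB_count($table, $id, $value);
    }
    return $cache[$key];
}
```

Since the static persists only for the life of the request, there is no stale-data problem across page views.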

> And I do not want to shake that consensus.  I just do not want to get 3/4 of 
> the way into the coding and find it is too slow.

Cool, I appreciate you keeping me honest.  I know you have said you 
don't have time, but you obviously have the know-how to contribute 
great things in an active development role.  If your plate frees up, by 
all means I'd love to work with you on some of this.  At the very least, 
maybe I can talk you into peer code reviews.

> The design issue that I think can make this worse is the idea of several 
> independent modules.  I can just see several modules needing e.g. user 
> information and each one making several db calls to get what they want.  The 
> potential for redundant calls is real.  In the current Geeklog this is 
> mitigated by a unified system that shares information, e.g. the $_USER 
> array.  We need a flexible system for caching data and sharing it with the 
> other modules.  I would be in favor of not giving a module direct access to 
> the core geeklog tables or to the tables of another module -- making them 
> access them through the module interface.  Then each module would be 
> responsible for deciding what should be cached.
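The module-interface-with-caching idea above might look like this sketch (class, method, and table names are illustrative only, and I'm assuming a PEAR DB-style handle whose getRow() returns an associative row):

```php
<?php
// Sketch: other modules never touch the core user tables directly;
// they go through the user module's interface, and the module itself
// decides what to cache for the life of the request.

class UserModule
{
    private $db;
    private $cache = array();   // per-request cache of user rows

    public function __construct($db)
    {
        $this->db = $db;
    }

    // The only sanctioned way for other modules to read user data.
    // Ten modules asking about the same user cost one query, not ten.
    public function getUser($uid)
    {
        if (!isset($this->cache[$uid])) {
            $this->cache[$uid] = $this->db->getRow(
                'SELECT * FROM gl_users WHERE uid = ?', array($uid));
        }
        return $this->cache[$uid];
    }
}
```

This also keeps the schema private to the module, so the user tables can change without breaking every other module's queries.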

This all sounds good.  Keep in mind that GL2's scope is quite simple. 
It is a kernel.  No frills.  Just facilitate communication and provide 
the framework for layout and formatting.  Also, you are hitting on the 
biggest risk of this entire project.  The GL2 module API needs to be 
second to none.  It needs to build on what we learned from the 1.3.x API 
and be extensible as the GL2 code grows.

Good input!
