```scheme
(import (rnrs) (time) (sagittarius control) (postgresql))

(define conn (make-postgresql-connection
              "localhost" "5432" #f "postgres" "postgres"))

;; prepare the environment
(postgresql-open-connection! conn)
(postgresql-login! conn)
(guard (e (else #t)) (postgresql-execute-sql! conn "drop table test"))
(guard (e (else #t))
  (postgresql-execute-sql! conn "create table test (data bytea)"))
(postgresql-terminate! conn)

;; let's do some benchmark
(postgresql-open-connection! conn)
(postgresql-login! conn)

(define data (call-with-input-file "bench.scm" get-bytevector-all
               :transcoder #f))

(define (insert-it p)
  (postgresql-bind-parameters! p data)
  (postgresql-execute! p))

;; Re-using prepared statement
(let ((p (postgresql-prepared-statement
          conn "insert into test (data) values ($1)")))
  (time (dotimes (i 10) (insert-it p)))
  (postgresql-close-prepared-statement! p))

(define (create-it)
  (let ((p (postgresql-prepared-statement
            conn "insert into test (data) values ($1)")))
    (insert-it p)
    (postgresql-close-prepared-statement! p)))

;; Creating prepared statement each time
(time (dotimes (i 10) (create-it)))

;; bye bye
(postgresql-terminate! conn)
```

I'm using the `(postgresql)` library. <ad>BTW, I think this is the only portable library that can access a database, so you should check it out.</ad> The script simply inserts the same binary data (in this case the script file itself) 10 times: once re-using a prepared statement, and once creating a new one each time. The difference is the following:
```
$ sash bench.scm

;;  (dotimes (i 10) (insert-it p))
;;  0.760319 real    0.008487 user    3.34e-40 sys

;;  (dotimes (i 10) (create-it))
;;  1.597769 real    0.014841 user    5.76e-40 sys
```

More than double the time, and that's with only 10 iterations. (Please ignore the fact that the library itself is already slow.) There are probably a couple of reasons for this, including PostgreSQL itself, but from the library's perspective the cost is in sending messages to the server. Wherever the DB server is, even on localhost, communication between the script and the server goes through a socket. And calling
`postgresql-prepared-statement` does at least 7 rounds of I/O (and `postgresql-close-prepared-statement!` does 5). So by re-using the statement, 120 I/O operations (12 × 10, of course) can be saved in total.
Now, my ORM library hides low-level operations such as creating prepared statements, connection management, etc. (that's what an ORM should do, isn't it?). So keeping prepared statements in users' scripts isn't an option. In particular, there's no guarantee that users would get the same connection each time they perform an operation, so it's better to manage the statements in the framework.
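As a concrete sketch of that idea (hypothetical names, not the ORM's actual API), the framework could memoize prepared statements per SQL string; here `prepare` stands in for `postgresql-prepared-statement`:

```scheme
(import (rnrs))

;; Hypothetical sketch: the framework keeps prepared statements in a
;; table keyed by SQL text, so user code never touches them directly.
;; `prepare` stands in for postgresql-prepared-statement.
(define (make-statement-lookup prepare)
  (let ((table (make-hashtable string-hash string=?)))
    (lambda (conn sql)
      (or (hashtable-ref table sql #f)
          (let ((p (prepare conn sql)))
            (hashtable-set! table sql p)
            p)))))
```

With something like this, each distinct SQL string pays the statement-creation I/O only once; the remaining question is when the cached statements get closed.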
There is an undocumented `(cache lru)` library in Sagittarius, so at first I thought I could use it. After modifying a couple of lines, I found out it isn't enough. The library provides only a very simple cache mechanism; it doesn't even provide a way to get all objects inside the cache. That's fine if the cached objects don't need any resource management, but prepared statements must be closed when they are no longer used. Plus, LRU may not be good enough for all situations, so it would be better if users could specify which cache algorithm to use.
There is a variety of cache algorithms, and implementing all of them would take quite some time. So it's better to make a cache framework, or interface. The framework/interface should have the following properties:
- Auto eviction and an eviction event handler
- A limit on storage size (with unlimited as an option)
- A way to get all cached objects
- An implementation-independent interface
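For instance, the interface could look roughly like this: a minimal sketch with made-up names (not whatever API the framework ends up with), using R6RS records, with a FIFO cache as one example algorithm behind the interface.

```scheme
(import (rnrs))

;; Sketch of an implementation-independent cache interface: a cache is
;; just a bundle of operations, and each algorithm supplies its own
;; constructor that closes over its private storage.
(define-record-type cache
  (fields get put! all-objects))

(define (cache-ref c key)       ((cache-get c) key))
(define (cache-set! c key obj)  ((cache-put! c) key obj))
(define (cache-values c)        ((cache-all-objects c)))

;; Example algorithm: FIFO with a fixed capacity.  A max-size of #f
;; means unlimited; on-evict is called with each evicted object so the
;; caller can release resources (e.g. close a prepared statement).
(define (make-fifo-cache max-size on-evict)
  (let ((entries '()))  ; alist of (key . object), newest first
    (define (get key)
      (cond ((assoc key entries) => cdr)
            (else #f)))
    (define (put! key obj)
      (when (and max-size (>= (length entries) max-size))
        ;; evict the oldest entry and notify the handler
        (let ((victim (car (reverse entries))))
          (set! entries (remp (lambda (e) (eq? e victim)) entries))
          (on-evict (cdr victim))))
      (set! entries (cons (cons key obj) entries)))
    (define (all-objects) (map cdr entries))
    (make-cache get put! all-objects)))
```

An LRU or LFU implementation would provide the same three operations through a different constructor, so the ORM only ever talks to the `cache` record and never to a particular algorithm.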