fine-grained control of check syntax library #633
Can you share something that illustrates this slowness? While it may make sense to push this kind of thing to the client, it may also be that there is a missing opportunity to use one of the many caches in there, or to otherwise speed things up. |
Yes. I found the slowest function. I have a customized check-syntax library in my op_setup.rkt repo which only changed that function. This is the result with the stock check-syntax in Racket 8.9 CS on that repo:

```
⬢ [archlinux:latest] ❯ racket bm.rkt lib.rkt
expand
cpu time: 468 real time: 474 gc time: 96
traversal
cpu time: 3858 real time: 3904 gc time: 114
```

After replacing it:

```
⬢ [archlinux:latest] ❯ racket bm.rkt lib.rkt
expand
cpu time: 475 real time: 481 gc time: 108
traversal
cpu time: 193 real time: 195 gc time: 14
```
|
It turns out that is where most of the time is spent. |
I've pushed something that adds a cache there and see about a 4.5x speedup for the traversal time of the "racket bm.rkt lib.rkt" benchmark on my machine. Not as dramatic as on your machine, but our machines are probably not directly comparable, so I am curious to know what you see if you include this commit. |
Nice! It reduced from ~3600 ms to ~800 ms for me, about 4.5x faster. |
Thanks. Another bottleneck I sometimes found is a single heavy call, and it looks like there is another one. Those are the only two bottlenecks I've noticed. |
This one is because it has to compile some dependencies. In the case of copier.rkt, it is compiling check-syntax.rkt. If you do "raco make check-syntax.rkt" you should see a speed up there. |
It's a good suggestion. The expand time and traversal time improved a lot, from 2 s to 800 ms in total. |
I made a change (just locally on my machine) to eliminate the extra indirection via a thread (not safe in general, but fine for the benchmark), and it reveals that about 43% of the traversal time is being spent in database queries. That doesn't seem optimal, so it still seems reasonable to do more work here. I also made another change to actually run the profiler; it runs the traversal step a number of times to try to make the results more accurate. These are the results I see.
|
I saw the usage of the database there. |
Just doing a quick check, it looks like we get to that line 1970 times, and in 1516 of them the lookup is a repeat. |
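The commit itself is not shown in this thread, but the payoff of those numbers can be sketched generically. With 1970 calls but only 454 distinct keys (1970 − 1516 repeats), a memo table turns most calls into hash lookups. A minimal sketch in Python (hypothetical names, not the actual check-syntax code):

```python
# Count how often the expensive path runs (stands in for a database query).
calls = {"expensive": 0}

def slow_lookup(key):
    calls["expensive"] += 1        # the part we want to avoid repeating
    return len(key)                # dummy "result"

cache = {}

def cached_lookup(key):
    # Memoize: only the first lookup per distinct key pays the full cost.
    if key not in cache:
        cache[key] = slow_lookup(key)
    return cache[key]

# 1970 calls over 454 distinct keys, mirroring the counts above.
keys = [f"stag-{i % 454}" for i in range(1970)]
results = [cached_lookup(k) for k in keys]
print(calls["expensive"])  # → 454: the other 1516 calls hit the cache
```

So 77% of the calls become cheap hash hits, which lines up with the ~4.5x traversal speedup reported above.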
I mean that a hash table is faster than a database query. Here is a test: I exported the data from docindex.sqlite:

```shell
$ cp /usr/share/doc/racket/docindex.sqlite a.sqlite
$ echo 'select * from documented' | sqlite3 a.sqlite > data.txt
```

then used this script:

```racket
#lang racket

(require db)

;; Load the exported rows into a hash table, keeping the stags.
(define ht (make-hash))
(define stags '())
(for ([line (file->lines "data.txt")])
  (match (string-split line "|")
    [(list stag pathid)
     (hash-set! ht stag (string->number pathid))
     (set! stags (cons stag stags))]
    [_ (error "unknown item:" line)]))

(define (hash-query keys)
  (for/list ([key keys])
    (hash-ref ht key)))

(define (db-query conn keys)
  (for/list ([key keys])
    (define rows (query-rows conn
                             (virtual-statement "SELECT pathid FROM documented WHERE stag=$1")
                             key))
    (define pathid (and (pair? rows) (vector-ref (car rows) 0)))
    pathid))

(define keys (take (shuffle stags) 10000))
(length keys)
(time (void (hash-query keys)))
(define conn (sqlite3-connect #:database "a.sqlite"
                              #:mode 'read-only
                              #:busy-retry-limit 0))
(time (void (db-query conn keys)))
(equal? (hash-query keys) (db-query conn keys))
```

The result:

```
10000
cpu time: 8 real time: 8 gc time: 0
cpu time: 897 real time: 906 gc time: 73
#t
```
|
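The same preload-then-lookup comparison can be reproduced outside Racket. A self-contained Python sketch with an in-memory SQLite database whose table mirrors the `documented` schema above (synthetic data, not the real docindex.sqlite):

```python
import sqlite3

# Build a tiny stand-in for the `documented` table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documented (stag TEXT PRIMARY KEY, pathid INTEGER)")
rows = [(f"stag-{i}", i % 97) for i in range(1000)]
conn.executemany("INSERT INTO documented VALUES (?, ?)", rows)

# Preprocess: one scan of the table into a hash table.
ht = dict(conn.execute("SELECT stag, pathid FROM documented"))

def db_query(keys):
    # One query per key, as in the Racket db-query above.
    return [conn.execute(
        "SELECT pathid FROM documented WHERE stag = ?", (k,)).fetchone()[0]
        for k in keys]

def hash_query(keys):
    # One dict access per key.
    return [ht[k] for k in keys]

keys = [f"stag-{i % 1000}" for i in range(5000)]
assert hash_query(keys) == db_query(keys)  # same answers, far fewer queries
```

Timing the two functions (e.g. with `time.perf_counter`) shows the same shape as the Racket numbers: the per-key query path pays statement and transaction overhead on every lookup, while the dict pays it once up front.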
The function call-with-retry/transaction is slow, but we will not need it if we preprocess the database and use a hash table. |
I think we don't want to stop using a database for the documentation.
Unless I misunderstand your meaning?
Robby
|
No, I didn't mean to stop using a database. I mean only to stop using the database at the bottleneck. My idea is to change only the code at pkgs/racket-index/setup/xref.rkt:122, from

```racket
(lambda (key use-id)
  ;; connect database
  ;; call-with-retry/transaction
  ;; database query
  ;; disconnect database
  )
```

to

```racket
(let ([hash-table (preprocess-database main-db user-db)])
  (lambda (key use-id)
    ;; hash table access
    ))
```
|
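That refactor shape — pay the database cost once, then hand back a closure that only touches a hash table — can be made concrete with a runnable Python sketch (`preprocess-database` and the key/use-id signature come from the pseudocode above; the data source here is simulated):

```python
# Before: every call does connect + query + disconnect (simulated by a counter).
queries = {"count": 0}
fake_db = {("car", 0): 12, ("cdr", 0): 34}

def lookup_via_db(key, use_id):
    queries["count"] += 1          # stands in for the whole transaction
    return fake_db.get((key, use_id))

# After: preprocess once, then every call is a hash-table access.
def make_lookup(db):
    table = dict(db)               # the preprocess-database step
    def lookup(key, use_id):
        return table.get((key, use_id))
    return lookup

lookup = make_lookup(fake_db)
assert lookup("car", 0) == lookup_via_db("car", 0) == 12
print(queries["count"])  # → 1: only the "before" path touched the database
```

The trade-off, which the next comments discuss, is that the whole table must fit in memory and must be loaded eagerly.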
I am not following how this can be achieved. How do you know what to put
into the hash table?
|
All rows in the documented table could be loaded. The schema is simple, and the current number of rows is not too large. |
It would be better to have the ability to control which functions are executed, and which are not, in the drracket/check-syntax library. The motivation is that certain functions, such as the one that powers add-docs-menu, are super slow: they take several seconds to run on hundreds of lines of code. But some other functions can be 10x faster. If an IDE plugin needs to run only a subset of the functions, it can obtain a large speed improvement.
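The kind of opt-in control being requested could look roughly like this — an entirely hypothetical API, not the actual drracket/check-syntax interface — where a client names the analysis passes it wants and the slow ones are simply skipped:

```python
# Hypothetical registry of analysis passes, each independently toggleable.
def arrows_pass(src):
    return ("arrows", len(src))          # fast annotation

def docs_pass(src):
    return ("docs", src.count(" "))      # stands in for the slow docs lookup

PASSES = {"arrows": arrows_pass, "docs": docs_pass}

def check_syntax(src, enabled=("arrows", "docs")):
    # Run only the passes the client asked for.
    return {name: PASSES[name](src) for name in enabled if name in PASSES}

# An IDE plugin that doesn't need docs annotations skips the slow pass:
fast = check_syntax("(define (f x) x)", enabled=("arrows",))
assert "docs" not in fast
```

The design choice is simply to make each annotation family a separately requestable unit, so a client pays only for the traversal work it actually consumes.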