I’ve seen others comment that they’ve run into slowdowns with API 2.
I’ve been trying to squeeze every last ounce of performance from a large app and ran into this one. I’d suspected there might be a hit, and measuring confirms there is: not tens of seconds, but when you want EVERY last microsecond, these few add up.
My test code is the following:
Const kNumberOfIterations = 1000000

Dim startTime As Double
Dim endTime As Double

startTime = System.Microseconds
For i As Integer = 0 To kNumberOfIterations
  Dim s As String = Trim("abc" + Str(i) + "def")
Next
endTime = System.Microseconds
System.DebugLog " elapsed 1 = " + Str(endTime - startTime, "###########0")

startTime = System.Microseconds
For i As Integer = 0 To kNumberOfIterations
  Dim tmp As String = "abc" + Str(i) + "def"
  Dim s As String = tmp.Trim()
Next
endTime = System.Microseconds
System.DebugLog " elapsed 2 = " + Str(endTime - startTime, "###########0")
Pretty simple operations, but when I run this compiled with default optimization (Aggressive doesn’t work in the app I’m building, for reasons I have yet to discern), I get the following results:
Feb 11 22:22:52 My Application[46161] <Warning>: elapsed 1 = 557024
Feb 11 22:22:52 My Application[46161] <Warning>: elapsed 2 = 573987
A difference of roughly 17,000 microseconds (17 msec).
In a small app that I can compile as Aggressive this difference shrinks, but as I noted I can’t use Aggressive with my app yet.
That API 2 forces you to create and use more temporary variables can have an impact. Sticking with the API 1 equivalent _may_ save you time.
I don’t think the differences are due to the API.
They basically both call the same runTrim function in the C++ framework.
And here are my measurements from running the above 10 times in a compiled app:
Feb 17 09:38:15 test[46947] : elapsed 1 = 550026
Feb 17 09:38:15 test[46947] : elapsed 2 = 528279
Feb 17 09:38:16 test[46947] : elapsed 1 = 518316
Feb 17 09:38:16 test[46947] : elapsed 2 = 514068
Feb 17 09:38:17 test[46947] : elapsed 1 = 525795
Feb 17 09:38:18 test[46947] : elapsed 2 = 518965
Feb 17 09:38:18 test[46947] : elapsed 1 = 517688
Feb 17 09:38:19 test[46947] : elapsed 2 = 522385
Feb 17 09:38:19 test[46947] : elapsed 1 = 511535
Feb 17 09:38:20 test[46947] : elapsed 2 = 519509
Feb 17 09:38:20 test[46947] : elapsed 1 = 516844
Feb 17 09:38:21 test[46947] : elapsed 2 = 520618
Feb 17 09:38:21 test[46947] : elapsed 1 = 519726
Feb 17 09:38:22 test[46947] : elapsed 2 = 521176
Feb 17 09:38:22 test[46947] : elapsed 1 = 519718
Feb 17 09:38:23 test[46947] : elapsed 2 = 525812
Feb 17 09:38:23 test[46947] : elapsed 1 = 516398
Feb 17 09:38:24 test[46947] : elapsed 2 = 527859
Feb 17 09:38:24 test[46947] : elapsed 1 = 510802
Feb 17 09:38:25 test[46947] : elapsed 2 = 524339
The difference between the two loops ranges from roughly 2 to 22 ms across runs. This may be down to how much CPU time the process gets, what other processes are scheduled, and how quickly my Intel CPU ramps up to 3.5 GHz.
Anyway, the extra variable takes some time to retain and release, but it shouldn’t matter for normal day-to-day coding.
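For what it’s worth, the retain/release cost of that extra named variable can be sketched in C++ (since the framework’s trim lives in C++ anyway). This toy reference-counted string is purely an illustration of the general pattern; the RCString type, the by-value runTrim signature, and the copy counts are my own assumptions, not the actual Xojo framework internals:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Counts copy constructions; each one models one retain/release pair.
static int g_copies = 0;

// Toy reference-counted string. A stand-in to illustrate the cost of an
// extra named temporary, NOT the real framework string implementation.
struct RCString {
    std::string* data;
    int* refs;
    explicit RCString(const std::string& s)
        : data(new std::string(s)), refs(new int(1)) {}
    RCString(const RCString& other) : data(other.data), refs(other.refs) {
        ++*refs;      // retain
        ++g_copies;
    }
    ~RCString() {     // release
        if (--*refs == 0) { delete data; delete refs; }
    }
    RCString& operator=(const RCString&) = delete;
};

// Takes its argument by value, like a routine that retains what it is given.
RCString runTrim(RCString s) {
    const std::string& str = *s.data;
    std::size_t b = str.find_first_not_of(' ');
    std::size_t e = str.find_last_not_of(' ');
    return RCString(b == std::string::npos ? std::string()
                                           : str.substr(b, e - b + 1));
}

// Like Trim("abc..."): the value flows straight into the call.
// With C++17 guaranteed copy elision, the prvalue materializes directly
// into the parameter, so no copy (no extra retain/release) occurs.
int copiesDirect() {
    g_copies = 0;
    RCString s = runTrim(RCString(" abc "));
    return g_copies;
}

// Like Dim tmp As String = "abc..." followed by tmp.Trim: the named
// variable must be copied into the call, costing one retain/release pair.
int copiesViaTemporary() {
    g_copies = 0;
    RCString tmp(" abc ");
    RCString s = runTrim(tmp);
    return g_copies;
}
```

Under these assumptions, the direct form performs zero copies while the named-temporary form performs one, which is the kind of per-iteration overhead that only shows up when a loop runs a million times.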
Except when it does 🙂
As I said, I’m trying to squeeze out every last ounce, so finding that some things done with the old API, where I don’t need to create temporary variables, matters. It matters a LOT in the big app I’m working on.
I’ve shaved about 50 msec from operations that are done very frequently, and cumulatively the difference is noticeable.
While a couple of msec here and there might not seem to matter, in total it does.