
Performance measurements on a local box

How do you measure fine-grained performance on a local computer?

I have some code experiments on my notebook and have a local Linux box I can dedicate to PHP/Phalcon experiments. I want to measure differences between my code and Phalcon for areas where Phalcon does not quite fit and I use my own code.

The measurements would be of different ways to override/replace/reuse Phalcon code.

A large part of the code is database access. I need to separate that out because slight changes to the code might produce different accesses. The database is on the same machine. Checking memory usage, paging, and disk-related delays would be useful.

My code has to be loaded from disk instead of sitting in memory. I will repeat each test many times to differentiate between code loads and cache fills. I want to make sure that repeated tests are not somehow overloading memory or the cache.
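The repeat-and-discard pattern described above can be sketched as a small harness; this is a minimal illustration in Python (the same structure translates directly to a PHP harness using microtime(true)). The function name, run counts, and workload are all illustrative.

```python
import time

def time_runs(fn, runs=50, warmup=5):
    """Time fn() repeatedly, discarding warm-up runs so cold-start
    costs (code loads, cache fills) do not skew the steady-state numbers."""
    for _ in range(warmup):
        fn()                     # warm caches; timings discarded
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return timings

# Illustrative workload standing in for the code under test.
timings = time_runs(lambda: sum(range(10000)))
print(f"min {min(timings):.6f}s  median {sorted(timings)[len(timings) // 2]:.6f}s")
```

Comparing the discarded warm-up times against the steady-state minimum and median is one way to see whether repeated runs are filling memory or the cache rather than settling down.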

Which tools do you use?




sysbench looks like the best way to measure the overall system and ensure it is not paging or hitting some other limit. The desktop machine has 16 GB of RAM and an SSD. I should be able to tune it so it is not delayed by memory constraints.
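The baseline check might look like the following; a sketch assuming sysbench 1.0 command syntax (older versions used --test=memory and similar), guarded so it is a no-op on a box where sysbench is not installed. Sizes and durations are placeholders to keep runs short.

```shell
# Baseline the box before timing PHP code; re-run under load to spot
# memory or disk contention. Guarded: does nothing without sysbench.
if command -v sysbench >/dev/null; then
    sysbench memory --time=1 run                  # raw memory throughput
    sysbench cpu --cpu-max-prime=20000 --time=1 run   # CPU ceiling
    sysbench fileio --file-total-size=128M --file-test-mode=rndrw prepare
    sysbench fileio --file-total-size=128M --file-test-mode=rndrw --time=1 run
    sysbench fileio --file-total-size=128M --file-test-mode=rndrw cleanup
fi
echo "baseline done"
```

Running the fileio test while a code benchmark is in flight would show whether the SSD, rather than the code, is the bottleneck.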

Xdebug, reported through Webgrind, looks like the next step. Some posts mention using Webgrind. Does the combination work accurately for code within Phalcon?
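For the Xdebug side, profiling is switched on in php.ini and the cachegrind output pointed at a directory Webgrind can read. A sketch assuming Xdebug 2.x setting names (Xdebug 3 later renamed these to xdebug.mode=profile and xdebug.output_dir); paths are placeholders:

```ini
; php.ini - Xdebug 2.x profiler settings (sketch)
zend_extension=xdebug.so
xdebug.profiler_enable = 0           ; off by default
xdebug.profiler_enable_trigger = 1   ; profile only requests sent with XDEBUG_PROFILE
xdebug.profiler_output_dir = /tmp/cachegrind
```

Pointing Webgrind's storage directory at the same path, and profiling only triggered requests, keeps many repeated test runs from filling the disk with cachegrind files.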

Are there better tools or something "pretty" for reporting? Something like the displays in Nagios?



Accepted answer
edited Aug '17

Have you tried siege? And the good old ab.
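Both tools need little setup for a quick local run; a sketch with a placeholder URL and counts, guarded so it does nothing where the tools are absent:

```shell
URL=http://127.0.0.1/test.php   # placeholder local test page

# ab (Apache bench): a fixed number of requests at a given concurrency
if command -v ab >/dev/null; then
    ab -n 1000 -c 10 "$URL" || true
fi

# siege: simulated concurrent users for a fixed duration
if command -v siege >/dev/null; then
    siege -c 25 -t 5S "$URL" || true
fi
echo "load test sketch done"
```

ab reports requests per second and latency percentiles; siege adds per-user behaviour such as repeated visits, which is closer to the multi-user scenario discussed below.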

There are a lot of commercial tools and services out there which will give you fancy output.

Keep in mind that code written in a procedural way in plain PHP will always be faster than Phalcon.



edited Aug '17

I have not used Siege. I used Selenium to step through functionality and found it painful. For larger-volume testing, it let you set up a complicated test: you could log in as a user, run an update plus other complications, then repeat that as several different users. It was painful and appears to have no development aimed at fixing anything.

To bypass logins and similar steps, I would need test code that allows actions without logging in. I often do that for new classes: write a page that tests the class. Something like Siege could then run that test page many times.

But hang on please, you're now talking about completely different topics: benchmarking and performance versus integration, unit, acceptance, or any other kind of testing.




Just talking about some of what I have done in the past. I mentioned database access code as my current focus. If you want to performance test something like 100 people performing an update, you need to set up user sessions. That can be hard to do with test code that issues URLs. I read about siege issuing POSTs, and about some other approaches that could be used to set up sessions for testing updates, but I have not yet used siege.
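The POST support mentioned above uses siege's URLs file, one request per line; a sketch with placeholder paths and form fields (the login line would have to establish whatever session state the later update needs):

```
# urls.txt - siege URLs file (placeholders)
http://127.0.0.1/login.php POST user=test1&pass=secret
http://127.0.0.1/update.php POST id=42&value=new
```

Something like `siege -f urls.txt -c 100 -r 1` would then replay the file with 100 concurrent users, one repetition each, which approximates the 100-people-updating scenario.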

There would be no point in performance testing code if it hit a limit in the database or ran into one of those SSD write slowdowns. Something like sysbench could watch for those environmental problems. As an example, saving 0.0001 seconds by improving code performance would not be useful if the change adds 0.3 seconds to the database access.

For database accesses, I would also record the generated SQL to make sure a slight code change does not produce different SQL, as that would invalidate the comparisons.
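Once the SQL is captured (in Phalcon, the adapter's getSQLStatement() after a query is one way to get the text, if your version exposes it), the comparison itself is mechanical. A small sketch in Python that normalizes whitespace before diffing, so formatting-only differences are not flagged; the function names and sample statements are illustrative:

```python
import difflib

def normalize_sql(sql: str) -> str:
    """Collapse runs of whitespace so formatting-only differences
    do not register as a change in the generated SQL."""
    return " ".join(sql.split())

def compare_sql(before: str, after: str) -> list:
    """Return a unified diff of the normalized statements; an empty
    list means the two code paths generated equivalent SQL."""
    a, b = normalize_sql(before), normalize_sql(after)
    return list(difflib.unified_diff([a], [b], lineterm=""))

old = "SELECT id, name\nFROM users  WHERE id = :id"
new = "SELECT id, name FROM users WHERE id = :id"
print(compare_sql(old, new))  # [] - only whitespace differs
```

Logging every statement from a test run to a file and diffing the files between code variants would catch a slight code change that quietly alters the SQL.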

This could be significant when replacing Phalcon code with PHP. Their performance characteristics might be different in ways that are not obvious if you measure only CPU usage.




I am also testing with PHP 7, which has different performance characteristics to the PHP 5 I used the last time I did detailed performance testing of PHP code. I might also test compiling some of my frequently used code through Zephir to see if that makes a significant difference.




I ticked siege as the accepted answer because it seems to fill the gap between sysbench and Webgrind by running the transactions that need testing. A commercial product is not needed. Applications like Selenium are nearly useless when you want enough activity to push performance limits.