
Model::find(); out of memory

Hey, I am doing a simple find on a table with about 180K rows.

$res = SomeModel::find();

and I get

Fatal error: Allowed memory size of 536870912 bytes exhausted

It's just doing one find, nothing else, in a CLI task. All model properties are declared.




Are you really going to display all 180k results (bad UX)? It looks like the collection is eating up the allowed PHP memory. You can raise that via php.ini, but the real fix is to only select the results you'll actually be showing.

@Zach it's a CLI task, not HTTP. I have to iterate over 180K records in the background.

What PHP version? Why do you need to iterate over 180k records?! If you are using PHP 7, then try the latest Phalcon 3.0.x (you need Zephir to compile it).




find() will still return a collection of 180k records whether it's CLI or HTTP. Maybe try using a raw query/resultset instead of the collection:

https://docs.phalcon.io/en/latest/reference/phql.html#using-raw-sql
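
Roughly the pattern from that page (just a sketch - the model/table names are placeholders you'd adapt):

class SomeModel extends \Phalcon\Mvc\Model
{
    public static function findAllRaw()
    {
        // Raw SQL executed on the read connection, wrapped in a simple resultset
        $sql   = 'SELECT * FROM some_table';
        $model = new self();

        return new \Phalcon\Mvc\Model\Resultset\Simple(
            null,
            $model,
            $model->getReadConnection()->query($sql)
        );
    }
}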

@jurigag, Phalcon 3 and PHP 7.0.10.

I need all 180K because of old things that need to be fixed, don't ask...

I'm also having the same issue with the query builder, even with a limit of 10.


Actually, 3.0.1 was just released, can you check it? It fixes many problems with PHP 7 and memory. Also, is it happening exactly on this find call, not on some code after it? And what is your memory limit for CLI, and your table size?

The thing is, if you need to put 180k records in memory with the memory limit set to only 512MB, each row only has to average about 3KB (536,870,912 bytes / 180,000 rows ≈ 2,982 bytes) for the data alone to hit the limit, before you take into account other components such as Phalcon, the PDO layer, etc. That's in theory. In practice, your data sets can be much larger.

Since you run a nightly cron task, I would suggest you use LIMIT with OFFSET to fetch portions of the data in multiple steps. Start from primary key ID = 1 and take, let's say, 10000 rows per iteration. That would work.
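
Something along these lines (just a sketch - assuming the model is SomeModel and it has an auto-increment id column):

$batchSize = 10000;
$lastId    = 0;

do {
    // Fetch the next batch, keyed on the primary key so each query stays small
    $rows = SomeModel::find([
        'conditions' => 'id > :lastId:',
        'bind'       => ['lastId' => $lastId],
        'order'      => 'id',
        'limit'      => $batchSize,
    ]);

    foreach ($rows as $row) {
        // ... fix the record here ...
        $lastId = $row->id;
    }
} while (count($rows) === $batchSize);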

@jurigag, will you put it in the PPA? Meanwhile I will try and compile it.

Yes, it's happening on just a simple find() call, on any table that has a large data set. It should not happen, since I am not doing any toArray() or anything else that pulls those large tables into memory.

@stamster not a bad idea, but the issue still happens with the query builder at limit = 10.

The table is 205MB!

@jurigag I tested it on Phalcon 3.0.1 and PHP 7.0.10-1+deb.sury.org~xenial+1 and still have the issue.

How can I do a memory profile on PHP 7? I can try and dig for the issue.

How did you test it? Please post a script to reproduce it, along with the database. Or create an issue on GitHub with a SCRIPT TO REPRODUCE (not just some part of your code, the whole script - ctrl+c ctrl+v so I can check it) and the database. Alternatively, you can use the phalcon_test database from the tests to reproduce it. I don't have your issue; I'm doing Model::find() or using the query builder without any problem.


@jurigag

I created a small project with a rather large table to test, and I still have the same issue.

I'm going to try to trace the problem, but here is the project so you can test it:

https://github.com/rudiservo/phalcon_single


This whole table is 1.3GB - so obviously it can't be loaded whole if you don't have enough memory :D I don't have a problem loading 300000 records at once. You just need to use limit and offset; you can't just load the whole table at once.

With your original table - you had only a 512MB memory_limit and you tried to load the whole 207MB table; obviously PHP adds memory overhead of its own for arrays, objects, other variables, etc. For example:

$array = [];
class Test
{
    public $test;
    public $test2;
}
echo memory_get_usage()."<br>";
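// Allocate 300k small objects and keep them all in $array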
for($i=0; $i<300000; $i++) {
    $test = new Test();
    $test->test = "asdasdasdasdasdasd";
    $test->test2 = ["asdasdasdasdasdasd"];
    $array[] = $test;
}
echo memory_get_usage()."<br>";

Memory usage here will be more than 128MB :) So it's not a Phalcon problem, as you can see :)

Using OPcache, memory usage AFTER the first page load will be reduced, but first it needs to be loaded.

And this sample:

echo memory_get_usage()."<br>";
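// 100k variable variables, each holding a short string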
for($i = 0; $i < 100000; $i++) {
    $a = "var$i";
    $$a = "asdasdasdasdasdasd";
}
echo memory_get_usage()."<br>";
$array = [];
for ($i = 0; $i <100000; $i++) {
    $array[] ="asdasdasdasdasdasd";
}
echo memory_get_usage()."<br>";
class Test
{
    public $test;

    public $test2;
}
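// 100k objects held in variable variables, each with a string property and a one-element array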

for ($i = 0; $i <100000; $i++) {
    $a = "var$i";
    $$a = new Test();
    $$a->test = "asdasdasdasdasdasd";
    $$a->test2 = ["asdasdasdasdasdasd"];
}
echo memory_get_usage()."<br>";

Your safe bet is something like 10000 records loaded at once - but of course it depends on how big each record is. Sometimes even 500 can be way too much.

@jurigag You are right, it's not a Phalcon issue; it's PHP's buffered queries in PDO.

Turning off buffering only lets you work with one query at a time.
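
For reference, disabling it on the MySQL adapter looks roughly like this (a sketch, the connection details are placeholders; in unbuffered mode you must fully consume one result set before issuing the next query):

$db = new \Phalcon\Db\Adapter\Pdo\Mysql([
    'host'     => 'localhost',
    'username' => 'user',
    'password' => 'secret',
    'dbname'   => 'mydb',
    'options'  => [
        // Stream rows from MySQL instead of buffering the whole result in PHP
        \PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => false,
    ],
]);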

Have to apply the limit workaround.

Thanks guys, sorry about the mess.


Using OPcache, memory usage AFTER the first page load will be reduced, but first it needs to be loaded.

OPcache will not reduce memory usage but CPU cycle usage, as it caches the opcode and serves the prepared PHP code directly from RAM.

As a matter of fact it takes away RAM from you, but for a good reason.

I just checked the same code with memory_get_usage, and with OPcache enabled it costs less after subsequent refreshes :x


Well, that's kind of expected. On the first run, your memory_get_usage profile will be higher since PHP needs to compile the script into opcode (bytecode). On each subsequent run you avoid this step, since the majority of the code is served from RAM in opcode-ready form, so only the other data streams show up in the memory profile. That's why it seems memory usage dropped, while actually memory_get_usage does not take opcode into account, only data, network buffers, etc.