Add benchmarking test suite and greatly improve performance in a few cases (#948)

* Add benchmarking test suite

* Improve amortized time of model relation loads with a large number of rows

* Improve performance of loading models with many related models

* Improve performance of loading models with many related models to roughly O(N)

* Fix a bug where creating N models sharing a related model would build in O(N^2) time (see the sketch below)

* Lower blocking time for queryset results

* Add docstrings and streamline hash code

Co-authored-by: haydeec1 <Eric.Haydel@jhuapl.edu>
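
The O(N) relation loading and the O(N^2) creation fix mentioned in the bullets above are not part of the diff excerpt below. As a rough, hypothetical sketch only (the RelatedCache name and its methods are invented for illustration and are not ormar's actual internals), this kind of quadratic behaviour usually disappears by deduplicating already-registered related instances through a dict keyed by primary key instead of scanning a growing list:

from typing import Dict, List, Optional


class RelatedCache:
    """Illustrative container that deduplicates related models in O(1) per add."""

    def __init__(self) -> None:
        self._by_pk: Dict[int, object] = {}  # pk -> model instance
        self._ordered: List[object] = []     # preserves insertion order

    def add(self, model: object, pk: int) -> None:
        # A list-scan check (`if model in self._ordered`) costs O(N) per call
        # and O(N^2) across N additions; the dict lookup is amortized O(1).
        if pk not in self._by_pk:
            self._by_pk[pk] = model
            self._ordered.append(model)

    def get(self, pk: int) -> Optional[object]:
        return self._by_pk.get(pk)

    def all(self) -> List[object]:
        return list(self._ordered)

Registering the same shared related model against N newly created models then costs N constant-time lookups instead of 1 + 2 + ... + N list comparisons.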
erichaydel authored 2022-12-10 11:12:11 -05:00, committed by GitHub
parent 171ef2ffaa
commit 7c18fa55e7
25 changed files with 1250 additions and 230 deletions


@@ -0,0 +1,21 @@
from typing import List

import pytest

from benchmarks.conftest import Author

pytestmark = pytest.mark.asyncio


@pytest.mark.parametrize("num_models", [250, 500, 1000])
async def test_iterate(aio_benchmark, num_models: int, authors_in_db: List[Author]):
    @aio_benchmark
    async def iterate_over_all(authors: List[Author]):
        # The incoming `authors` argument is not used; every author is
        # re-read from the database through the async iterator.
        loaded = []
        async for author in Author.objects.iterate():
            loaded.append(author)
        return loaded

    authors = iterate_over_all(authors_in_db)
    for idx, author in enumerate(authors_in_db):
        assert authors[idx].id == author.id
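
The test relies on two fixtures from benchmarks/conftest.py that this excerpt does not show: authors_in_db, which provides the authors already stored in the database (presumably num_models of them), and aio_benchmark, which adapts pytest-benchmark's synchronous benchmark fixture to coroutines. Since the conftest itself is not included here, the following is only a sketch of one common way to build such an adapter, under the assumption that pytest-benchmark and pytest-asyncio are in use; the real fixture may differ:

# Hypothetical sketch of an aio_benchmark fixture -- not the actual
# contents of benchmarks/conftest.py.
import asyncio
import concurrent.futures
from typing import Any, Callable

import pytest


@pytest.fixture
def aio_benchmark(benchmark: Any) -> Callable[..., Any]:
    """Wrap pytest-benchmark's `benchmark` fixture for async callables."""

    def _decorator(func: Callable[..., Any]) -> Callable[..., Any]:
        if not asyncio.iscoroutinefunction(func):
            # Plain callables go straight to pytest-benchmark.
            return lambda *args, **kwargs: benchmark(func, *args, **kwargs)

        def _benchmarked(*args: Any, **kwargs: Any) -> Any:
            def _one_timed_call() -> Any:
                # The test already runs on a pytest-asyncio event loop, so
                # each timed call drives a fresh coroutine on its own loop
                # in a worker thread instead of nesting event loops.
                with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
                    return pool.submit(asyncio.run, func(*args, **kwargs)).result()

            # benchmark() times repeated calls and returns the callable's
            # result, which is what the assertions above check.
            return benchmark(_one_timed_call)

        return _benchmarked

    return _decorator

Used as in the test above, @aio_benchmark turns iterate_over_all into a synchronous callable whose invocations are timed, while still handing back the list of loaded authors for the assertions.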