# Aggregation functions

Currently 6 aggregation functions are supported.

* `count(distinct: bool = True) -> int`
* `exists() -> bool`
* `sum(columns) -> Any`
* `avg(columns) -> Any`
* `min(columns) -> Any`
* `max(columns) -> Any`

* `QuerysetProxy`
    * `QuerysetProxy.count(distinct=True)` method
    * `QuerysetProxy.exists()` method
    * `QuerysetProxy.sum(columns)` method
    * `QuerysetProxy.avg(columns)` method
    * `QuerysetProxy.min(columns)` method
    * `QuerysetProxy.max(columns)` method

## count

`count(distinct: bool = True) -> int`

Returns the number of rows matching the given criteria (i.e. applied with `filter` and `exclude`).

If `distinct` is `True` (the default), this returns the number of distinct primary rows selected. If `False`,
the count is the total number of rows returned, including the extra rows produced by `one-to-many` or
`many-to-many` left `select_related` table joins. `False` is the legacy (buggy) behavior, kept for
workflows that depend on it.

```python
class Book(ormar.Model):
    ormar_config = ormar.OrmarConfig(
        database=databases.Database(DATABASE_URL),
        metadata=sqlalchemy.MetaData(),
        tablename="book",
    )

    id: int = ormar.Integer(primary_key=True)
    title: str = ormar.String(max_length=200)
    author: str = ormar.String(max_length=100)
    genre: str = ormar.String(
        max_length=100,
        default="Fiction",
        choices=["Fiction", "Adventure", "Historic", "Fantasy"],
    )
```

```python
# returns the count of rows in the db for the Book model
no_of_books = await Book.objects.count()
```

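The difference between the two modes comes down to the SQL `COUNT`. A minimal sketch using only the standard-library `sqlite3` module and a hypothetical author/book pair of tables (raw SQL, not ormar itself) shows why joined rows inflate a non-distinct count:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'Author 1');
    INSERT INTO book VALUES (1, 1, 'Book 1'), (2, 1, 'Book 2'), (3, 1, 'Book 3');
    """
)

# distinct=False style: every joined row is counted,
# so the single author appears once per related book
total = conn.execute(
    "SELECT COUNT(*) FROM author LEFT JOIN book ON book.author_id = author.id"
).fetchone()[0]

# distinct=True style: only distinct primary rows are counted
distinct = conn.execute(
    "SELECT COUNT(DISTINCT author.id) FROM author "
    "LEFT JOIN book ON book.author_id = author.id"
).fetchone()[0]

print(total, distinct)  # 3 1
```

The same one author is counted three times without `DISTINCT`, which is exactly the inflation that `distinct=True` guards against.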
## exists

`exists() -> bool`

Returns a bool value confirming whether there are rows matching the given criteria (applied with `filter` and `exclude`).

```python
class Book(ormar.Model):
    ormar_config = ormar.OrmarConfig(
        database=databases.Database(DATABASE_URL),
        metadata=sqlalchemy.MetaData(),
        tablename="book",
    )

    id: int = ormar.Integer(primary_key=True)
    title: str = ormar.String(max_length=200)
    author: str = ormar.String(max_length=100)
    genre: str = ormar.String(
        max_length=100,
        default="Fiction",
        choices=["Fiction", "Adventure", "Historic", "Fantasy"],
    )
```

```python
# returns a boolean value indicating whether a matching row exists
has_sample = await Book.objects.filter(title='Sample').exists()
```

## sum

`sum(columns) -> Any`

Returns the sum of the given columns for rows matching the given criteria (applied with `filter` and `exclude` if set before).

You can pass one or many column names, including related columns.

As of now each column passed is aggregated separately (so `sum(col1+col2)` is not possible;
you can call `sum(col1, col2)` and add the two returned sums in Python).

You cannot `sum` non-numeric columns.

If you aggregate on one column, the single value is returned directly.
If you aggregate on multiple columns, a dictionary with `column: result` pairs is returned.

Given models defined as follows:

```Python
--8<-- "../docs_src/aggregations/docs001.py"
```

Sample usage might look like the following:

```python
author = await Author(name="Author 1").save()
await Book(title="Book 1", year=1920, ranking=3, author=author).save()
await Book(title="Book 2", year=1930, ranking=1, author=author).save()
await Book(title="Book 3", year=1923, ranking=5, author=author).save()

assert await Book.objects.sum("year") == 5773
result = await Book.objects.sum(["year", "ranking"])
assert result == dict(year=5773, ranking=9)

try:
    # cannot sum string column
    await Book.objects.sum("title")
except ormar.QueryDefinitionError:
    pass

assert await Author.objects.select_related("books").sum("books__year") == 5773
result = await Author.objects.select_related("books").sum(
    ["books__year", "books__ranking"]
)
assert result == dict(books__year=5773, books__ranking=9)

assert (
    await Author.objects.select_related("books")
    .filter(books__year__lt=1925)
    .sum("books__year")
    == 3843
)
```

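Because each column is summed separately, emulating `sum(col1 + col2)` is just a matter of adding the parts afterwards. Using the multi-column result from the sample above:

```python
# sum(["year", "ranking"]) returns one sum per column
result = dict(year=5773, ranking=9)

# the equivalent of sum(year + ranking) is adding the two sums in Python
combined = result["year"] + result["ranking"]
print(combined)  # 5782
```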
## avg

`avg(columns) -> Any`

Returns the average value of the given columns for rows matching the given criteria (applied with `filter` and `exclude` if set before).

You can pass one or many column names, including related columns.

As of now each column passed is aggregated separately (so `avg(col1+col2)` is not possible;
you can call `avg(col1, col2)` and combine the two returned averages in Python).

You cannot `avg` non-numeric columns.

If you aggregate on one column, the single value is returned directly.
If you aggregate on multiple columns, a dictionary with `column: result` pairs is returned.

```Python
--8<-- "../docs_src/aggregations/docs001.py"
```

Sample usage might look like the following:

```python
author = await Author(name="Author 1").save()
await Book(title="Book 1", year=1920, ranking=3, author=author).save()
await Book(title="Book 2", year=1930, ranking=1, author=author).save()
await Book(title="Book 3", year=1923, ranking=5, author=author).save()

assert round(float(await Book.objects.avg("year")), 2) == 1924.33
result = await Book.objects.avg(["year", "ranking"])
assert round(float(result.get("year")), 2) == 1924.33
assert result.get("ranking") == 3.0

try:
    # cannot avg string column
    await Book.objects.avg("title")
except ormar.QueryDefinitionError:
    pass

result = await Author.objects.select_related("books").avg("books__year")
assert round(float(result), 2) == 1924.33
result = await Author.objects.select_related("books").avg(
    ["books__year", "books__ranking"]
)
assert round(float(result.get("books__year")), 2) == 1924.33
assert result.get("books__ranking") == 3.0

assert (
    await Author.objects.select_related("books")
    .filter(books__year__lt=1925)
    .avg("books__year")
    == 1921.5
)
```

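The expected values in the sample can be verified by hand, since `avg` is a plain arithmetic mean over the matching rows:

```python
years = [1920, 1930, 1923]

# avg("year") over all three books
mean = sum(years) / len(years)
print(round(mean, 2))  # 1924.33

# with filter(books__year__lt=1925) only 1920 and 1923 remain
filtered_mean = (1920 + 1923) / 2
print(filtered_mean)  # 1921.5
```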
## min

`min(columns) -> Any`

Returns the minimum value of the given columns for rows matching the given criteria (applied with `filter` and `exclude` if set before).

You can pass one or many column names, including related columns.

As of now each column passed is aggregated separately (so `min(col1+col2)` is not possible;
`min(col1, col2)` returns each column's minimum separately).

If you aggregate on one column, the single value is returned directly.
If you aggregate on multiple columns, a dictionary with `column: result` pairs is returned.

```Python
--8<-- "../docs_src/aggregations/docs001.py"
```

Sample usage might look like the following:

```python
author = await Author(name="Author 1").save()
await Book(title="Book 1", year=1920, ranking=3, author=author).save()
await Book(title="Book 2", year=1930, ranking=1, author=author).save()
await Book(title="Book 3", year=1923, ranking=5, author=author).save()

assert await Book.objects.min("year") == 1920
result = await Book.objects.min(["year", "ranking"])
assert result == dict(year=1920, ranking=1)

assert await Book.objects.min("title") == "Book 1"

assert await Author.objects.select_related("books").min("books__year") == 1920
result = await Author.objects.select_related("books").min(
    ["books__year", "books__ranking"]
)
assert result == dict(books__year=1920, books__ranking=1)

assert (
    await Author.objects.select_related("books")
    .filter(books__year__gt=1925)
    .min("books__year")
    == 1930
)
```

## max

`max(columns) -> Any`

Returns the maximum value of the given columns for rows matching the given criteria (applied with `filter` and `exclude` if set before).

You can pass one or many column names, including related columns.

As of now each column passed is aggregated separately (so `max(col1+col2)` is not possible;
`max(col1, col2)` returns each column's maximum separately).

If you aggregate on one column, the single value is returned directly.
If you aggregate on multiple columns, a dictionary with `column: result` pairs is returned.

```Python
--8<-- "../docs_src/aggregations/docs001.py"
```

Sample usage might look like the following:

```python
author = await Author(name="Author 1").save()
await Book(title="Book 1", year=1920, ranking=3, author=author).save()
await Book(title="Book 2", year=1930, ranking=1, author=author).save()
await Book(title="Book 3", year=1923, ranking=5, author=author).save()

assert await Book.objects.max("year") == 1930
result = await Book.objects.max(["year", "ranking"])
assert result == dict(year=1930, ranking=5)

assert await Book.objects.max("title") == "Book 3"

assert await Author.objects.select_related("books").max("books__year") == 1930
result = await Author.objects.select_related("books").max(
    ["books__year", "books__ranking"]
)
assert result == dict(books__year=1930, books__ranking=5)

assert (
    await Author.objects.select_related("books")
    .filter(books__year__lt=1925)
    .max("books__year")
    == 1923
)
```

## QuerysetProxy methods

When accessed directly, the related `ManyToMany` field, as well as `ReverseForeignKey`,
returns the list of related models.

But at the same time it exposes a subset of the QuerySet API, so you can filter, create,
select related etc. on the related models directly from the parent model.

### count

Works exactly the same as the [count](./#count) function above, but is applied to the related
objects from the other side of the relation.

!!!tip
    To read more about `QuerysetProxy` visit [querysetproxy][querysetproxy] section

### exists

Works exactly the same as the [exists](./#exists) function above, but is applied to the related
objects from the other side of the relation.

### sum

Works exactly the same as the [sum](./#sum) function above, but allows you to sum columns of the related
objects from the other side of the relation.

### avg

Works exactly the same as the [avg](./#avg) function above, but allows you to average columns of the related
objects from the other side of the relation.

### min

Works exactly the same as the [min](./#min) function above, but allows you to select the minimum of columns of the related
objects from the other side of the relation.

### max

Works exactly the same as the [max](./#max) function above, but allows you to select the maximum of columns of the related
objects from the other side of the relation.

!!!tip
    To read more about `QuerysetProxy` visit [querysetproxy][querysetproxy] section

[querysetproxy]: ../relations/queryset-proxy.md